Coding practice websites are a great way to get familiar with a new language. For me, they are invaluable not only for the tasks themselves, but for letting me browse countless other solutions and discover language-specific idioms. Often you can solve a task in a new language much the same way you would in one you already know, which keeps you from ever learning the new language's philosophy and style.
A little while ago I had some spare time between projects, and I spent a lot of it on Codesignal trying to come up with various solutions to challenges. After completing a task, I would inspect other submissions for better techniques, sometimes looking at other languages as well. This got me thinking about what the solutions reveal about the programming languages themselves. For example, even without gathering any stats, you can notice the different runtime limits each language gets: some are allowed a few seconds, others up to 20.
I mentioned my thoughts to a few coworkers and they found it interesting as well, so I decided to gather some basic data to have a short discussion about the findings. I will first present the results (the most popular and most challenging programming languages, for example) and then touch on some details.
Codesignal (formerly Codefights) is one of many websites in this category, and it would be interesting to do a broader study. I chose Codesignal because, when I had the idea, I had almost completed the intro arcade, meaning I had access to the solutions. These numbers have no serious implications, obviously. But they are great for facilitating discussion, like comparing the results against common prejudices, and our own, about different languages.
What I used
await and had to constantly fix errors. As soon as this is published, I will print out the code, delete the repo, and burn the papers for dramatic effect (because I can't drill through the hard drive like the cool guys).
Funny bit: my plan was to rerun the script after a few days, collect the results that didn't match, and lather-rinse-repeat on those smaller subsets until the set was empty. After the first check, most of the results didn't match. I tried to ignore the pain of my code performing that poorly and quietly restarted the script. But the mismatches kept appearing, until it finally hit me that new submissions were being made the whole time. So yeah, at least that's a bit less embarrassing than my code being even shittier than it already is, but I still roll my eyes remembering it. Since I had no lazy way of verifying results, I added some error handling and improved the script for better accuracy.
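The lather-rinse-repeat idea can be sketched as a small loop: fetch everything once, then keep re-fetching only the tasks whose two consecutive readings disagree, until nothing disagrees or you give up. This is a minimal sketch under my own assumptions, not the actual script; `fetch` stands in for the real scraping code, and the `FakeSite` below just simulates a submission landing mid-run.

```python
def recheck_until_stable(task_ids, fetch, max_rounds=5):
    """Re-fetch only the tasks whose consecutive readings disagree,
    until the disagreeing set is empty or we run out of rounds."""
    previous = fetch(list(task_ids))   # first full pass
    pending = set(task_ids)            # tasks still unconfirmed
    for _ in range(max_rounds):
        current = fetch(sorted(pending))
        # keep only tasks whose new reading differs from the last one
        pending = {t for t in pending if current[t] != previous[t]}
        previous.update(current)       # latest readings win
        if not pending:
            break
    # pending holds tasks that never stabilised (e.g. constant new submissions)
    return previous, pending

# Hypothetical stand-in for the scraper: task "b" gets a new top
# submission between the first and second fetch, then settles down.
class FakeSite:
    def __init__(self):
        self.calls = 0
    def fetch(self, task_ids):
        self.calls += 1
        return {t: (10 if t != "b" else (5 if self.calls == 1 else 7))
                for t in task_ids}

site = FakeSite()
counts, unstable = recheck_until_stable(["a", "b"], site.fetch)
print(counts, unstable)  # → {'a': 10, 'b': 7} set()
```

The catch, as I learned the hard way, is that on a live site the `pending` set may never empty out, which is exactly why the `max_rounds` cap is there.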
There was also a bonus drinking game: we would run the script on some task and, while it was clicking through solutions gathering numbers, guess whether the next language's solution would have fewer or more lines. It was like a programming version of the “Higher or lower” card game, but most people reacted with a poker face and didn't find it as exciting as I did. That means I have no good drunk stories resulting from it to tell, so I'll finish it here.
How do these results reflect your experience? Which confirm your expectations or knowledge? Which ones were surprising or contradictory?