
I’ve been playing around a lot recently with Elixir and its ecosystem. In particular, I’ve been doing exercises on Exercism to get better at solving common coding challenges in the Elixir Way™—that is, writing idiomatic Elixir.

For reference, I’m using GitHub’s Copilot service set to use the Claude 3.5 Sonnet model. I’m leveraging both code autocompletion and the chat interface.

I have a few observations about how having an ever-present AI coding assistant is changing things.


Hard to Generate Idiomatic Elixir

For the most part, the generated code works well. However, the model often doesn’t generate idiomatic Elixir unless explicitly prompted to rewrite the code in a specific way. For example, I’ve noticed it prefers standard control structures (e.g., if/else, case) over function clauses with guards, which are one of the fun aspects of writing Elixir!
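To make this concrete, here’s a minimal sketch of the contrast (a made-up `classify/1` example of mine, not one of the Exercism exercises):

```elixir
defmodule Sign do
  # Control-structure style, the shape the assistant tends to suggest:
  def classify_with_if(n) do
    if n > 0 do
      :positive
    else
      if n < 0, do: :negative, else: :zero
    end
  end

  # Function clauses with guards, the more idiomatic shape:
  def classify(n) when n > 0, do: :positive
  def classify(n) when n < 0, do: :negative
  def classify(_n), do: :zero
end
```

Both versions return the same results (e.g., `Sign.classify(5)` gives `:positive`), but the clause-based one states each case up front instead of burying it in nested branching.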


Solving Challenges: Fun or Frustrating?

On the one hand, solving simple challenges with the model was easy and effective. The autocomplete would pop up and suggest the full solution. Great! This was exactly what I wanted, as it removed the need to tediously consider edge cases for straightforward problems.

On the other hand, with slightly more challenging exercises, the model would often behave the same way, taking all the fun out of the process. This was, of course, to be expected. Boo.


Still Need to Learn the Patterns

I feel like the models aren’t quite there yet when it comes to understanding the broader Elixir ecosystem. Maybe I’m wrong, but it seems like they lack familiarity with some of the idiomatic patterns that make Elixir code elegant and concise.
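As a made-up illustration of the kind of pattern I mean, take the pipe operator: a pipeline reads top to bottom, while the equivalent nested calls have to be read inside-out.

```elixir
# Nested calls, read inside-out:
Enum.join(Enum.map(String.split("hello elixir world"), &String.capitalize/1), " ")

# The same transformation as a pipeline, read top to bottom:
"hello elixir world"
|> String.split()
|> Enum.map(&String.capitalize/1)
|> Enum.join(" ")
# => "Hello Elixir World"
```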


Reading and Understanding the Code: A Different Skillset

Another thing I noticed with slightly more challenging problems: it wasn’t immediately obvious what the generated code was doing. I’d often need to spend time reading and analyzing it—just as I would if it had been written by another engineer, one I didn’t necessarily trust.


Are Coding Exercises Still Useful?

One thing that strikes me: the model solves these coding challenges pretty well, which makes me question whether doing them is futile! Perhaps a new way to learn Elixir, or any other new language, is to watch the AI solve them and then read the code to understand it. As I pointed out, the generated solutions might not be idiomatic, but I have no doubt that future models, or better prompts from the Copilot agent, will be able to generate idiomatic Elixir.