When it comes to the state of the art of AI-Assisted Coding (perhaps not exactly “Software Engineering”) in August 2025, you may be confused.

You will find examples of engineers who think it is all crap. You will also find examples where it has “considerably changed my relationship to writing and maintaining code at scale”.

The Recurse Center’s blog post has even more thoughts about the wide range of reactions.

What Is Going On? Why Are Opinions On AI So Varied?

The future is already here - it’s just not very evenly distributed - William Gibson

I’ll tell you what is going on: Time and Tools.

Time

The point in time at which you first experience AI-assisted coding will shape your initial impressions, of course. Working with a state-of-the-art agent in August 2025 is extremely different from working with GitHub Copilot in 2023.

The models keep getting better. The prompting is better. The context windows are larger.

Tools

I’m not talking about the literal AI tool you use. When I say “Tools”, I mean the MCP tools that your assistant has access to.

Tools can help solve all the traditional problems with first-gen AI assistants:

  • Hallucinating libraries and methods? - Give it access to docs/LSP so it can see for itself!
  • Writing code that doesn’t compile? - Give it permission to compile the code so it can iterate!
  • Lacking context around the problem you are solving? - Pre-populate its context or give it access to what you know!
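Under the hood, an MCP tool call is just a JSON-RPC 2.0 message exchanged between the assistant and a tool server. As a rough sketch (the tool name `lookup_docs` and its arguments here are hypothetical, not from any real server), a request to a docs-lookup tool might look like:

```python
import json

# Hypothetical "tools/call" request an assistant might send to an MCP
# server that exposes a documentation-lookup tool. MCP frames its
# messages as JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_docs",  # hypothetical tool name
        "arguments": {"symbol": "requests.get"},
    },
}

# Serialize for the wire; the server replies with a matching-id result
# containing the tool's output, which lands back in the model's context.
wire = json.dumps(request)
print(wire)
```

The point is that each bullet above boils down to giving the model one more tool it can call in this shape: a docs lookup, a compile step, a context fetch.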

It is kinda hard to describe how nice it is when the AI “gets it”, and it is much easier to get it when you can say things like:

  • Can you read this slack thread (LINK) and incorporate the suggestions?
  • Can you run the gh command and look at the comments for the PR on this branch and tell me what you think?
  • This codebase is running live on this server (SERVER), can you ssh to it and figure out why it is leaking memory?
  • Can you figure out why the sound is not working on my local Linux desktop? Run whatever commands you need to troubleshoot.

Times Are Changing

I don’t know where this is all going exactly, but Bret Victor’s The Future of Programming has some hints.

This talk was published in 2013, twelve years before GPT-5 was released.

Victor’s main thesis is that it is important to keep an open mind about what computer programming is.

If we think we “know” what computer programming is, we get locked into dogma, and perhaps we will discard new ideas.

Victor gives a few historical examples of this with things like the first assembler, or the introduction of Fortran.

It feels like a new historical example is in the making, right now.

English Is The Next High-Level Programming Language

Our LLM-based coding assistants may not be doing “constraint-based programming” or “direct manipulation of data”, but they are certainly coding based on goals.

On a few occasions, I’ve made (AI-assisted) code changes on repositories using programming languages I know nothing about. Is it dangerous? Maybe, but only when the AI assistant isn’t… good. If the AI assistant is extremely good at writing code, at some point we are not going to bother to check it.

Analogously, we don’t “check” that the assembler is writing correct machine code (but you can). We don’t “check” that the Python interpreter is correctly turning our Python code into bytecode (but you can). We also don’t “check” that GCC is creating correct assembly from our C code (but you can).

We will get to the same place with AI generated code, generated from English and our goals. Eventually we will stop bothering to check the AI’s work, because it will be better than us, just like GCC is better at writing assembly than me.

Granted, our LLMs are not as deterministic as mechanical compilers, but does that really matter?

If I take a file of C code and run it through a modern compiler, it will produce dramatically different assembly than a compiler from 30 years ago would. I still don’t check it.

Ten years from now, we will ask for modern LLMs to scan through source code and refine it over time, and we won’t bother to check its work.

The Divide

During this transitional period, there will be a divide between software engineers that do vs do not use AI tools.

You might think that the next generation is doomed because they won’t be growing up learning how to “properly” code, but I think this makes the same kind of mistake that Bret Victor warns us about.

Just as one does not need to know assembly language to write C code, one does not need to know JavaScript to write frontend code, if you can express your ideas clearly in English (or whatever natural language you use).

The divide is here already, just not evenly distributed.
