Coding with LLMs progress
Making some new additions to the workout log. Amazed by how much the LLMs have improved over just a few months, since I last used them (Claude and ChatGPT) to generate code.
While context windows have expanded over the last couple of years, for anything more complex than a couple of hundred lines, my experience up until a couple of months ago was that you had essentially one shot to get it right. If your first prompt didn't produce valid code that created the desired output, you had to go back, refine your initial prompt, and start over. As soon as you began conversing to clarify specifications about logic and output, decay would set in: the models would start breaking things while trying to fix something else.
Yesterday, I threw Claude a 500-line script. We went back and forth to get it tweaked, and on the 16th iteration it produced the exact output I had in mind when I began.
While I think there are a lot of positives to come from forcing clarity (and brevity) in your initial specification (in this case, a prompt), it's amazing to see the progress that's happening here in such a short time.
As a reminder, I'm just a doofus who never wrote a single line of code before LLMs opened this whole new world of creating things that always felt out of reach.