

Here's the table of contents for my lengthy new piece on how I use LLMs to help me write code simonwillison.net/2025/Mar/11/…
in reply to Simon Willison

It includes detailed examples, including the full Claude Code process I used to build this new Colophon page, which presents the Git commit histories for each of my collection of LLM-assisted web tools in one place tools.simonwillison.net/coloph…
in reply to Simon Willison

do you have a video that shows how you actually do the actual coding/llm interaction?
in reply to Simon Willison

Colophon update: I added automated documentation for all 78 of my tools, written by piping the HTML through Claude 3.7 Sonnet

I was hesitant to do this at first but the utility of the resulting explanations convinced me it was worthwhile

More details here: simonwillison.net/2025/Mar/13/…

in reply to Simon Willison

I decided the descriptions it wrote were too long, so I added "Keep it to 2-3 sentences" to the prompt and rebuilt them all to be more concise: github.com/simonw/tools/commit…
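The pipeline described above (tool HTML in, short description out) can be sketched roughly like this. This is an illustrative guess, not Simon's actual script: the helper name and the tag-stripping step are assumptions; only the "Keep it to 2-3 sentences" constraint comes from the post.

```python
import re

def build_description_prompt(html: str) -> str:
    """Build a prompt asking a model to describe a tool from its HTML.

    The "Keep it to 2-3 sentences" constraint mirrors the prompt tweak
    mentioned above; everything else here is a hypothetical sketch.
    """
    # Crude tag stripping keeps the prompt smaller; a real pipeline
    # might just pipe the raw HTML through instead.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip()
    return (
        "Write a short description of what this tool does, "
        "based on its HTML. Keep it to 2-3 sentences.\n\n" + text
    )

# The prompt would then be sent to the model, e.g. via a hypothetical client:
# description = client.complete(model="claude-3-7-sonnet", prompt=prompt)
```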
in reply to Simon Willison

I do this a lot for justfile recipe descriptions.

This also helps Claude Code know which just recipes it can run. It's become a nice tip for helping LLMs write and test code more efficiently.
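For readers who haven't used just: a comment directly above a recipe becomes its description. A minimal sketch (recipe names invented for illustration):

```just
# Run the full test suite with coverage
test:
    pytest --cov

# Rebuild the docs and serve them locally
docs:
    mkdocs serve
```

`just --list` prints those comments next to each recipe name, which is what makes them useful both to humans and to an LLM agent deciding which recipe to run.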

in reply to Simon Willison

my 2c, keep the longer versions! I don't dislike them.

Humans will scan them anyways, but the added keywords might make them more discoverable.

in reply to François Leblanc

@leblancfg I found them a little bit grating to be honest: just a little bit too much marketingese in there

The new short ones feel exactly right to me - I just read all 79 of them and didn't feel like there was any fluff in there at all

in reply to Simon Willison

I just transitioned to Cursor/Claude by generating summaries of my old ChatGPT development chats and adding them to each file as a comment for Claude to read. Seems to have worked well. I'm getting good results for the new code I'm generating in GitHub.com/adrianco/megpt - I'm trying to get this all working before I go to GTC :-)
in reply to Simon Willison

I've been using Claude recently and I have to agree that there's a lot of "writing to think", and then the response helps me not get stuck. I've been trying to get it to write less code unless I specifically ask for it (or it asks me), but it forgets. I've learned a few new things, and more than a few times it has been stupid or wrong. Which I think is a good ratio.
in reply to Simon Willison

“LLM tools that obscure that context from me are less effective.”

100%! I would love to share something I’ve been working on with you for feedback. Can I DM or email you?

in reply to Simon Willison

have you been crediting the authors of the training data? Do you know what licenses the training data used, and whether they are compatible with the license(s) of the software you write?
in reply to cube

@qbe the training data for almost all of these models remains frustratingly secretive - the few models that DO document their training data inevitably grabbed everything on GitHub that had any form of open source license attached to it, entirely ignoring the issue of attribution
in reply to Simon Willison

@qbe I've come to terms with this personally because it feels similar enough to me reading a bunch of code online to get inspiration and then writing my own from scratch - I am aware that MANY people do not share my opinion on that

In terms of legality, I take mild comfort in the fact that if this turns out to be illegal it won't just be me, a sizable portion of the code written across the whole industry over the past few years will need to be thrown away too!

in reply to Simon Willison

are you using rules yet? Global or project-specific guidelines that your tool adds to every new chat.

This is particularly powerful if you enable the agent to essentially write its own rules and design documentation, which then get injected as context everywhere.

There's a great start here based on Cline : github.com/nickbaumann98/cline…
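For a sense of what such a rules file can look like, here is a minimal sketch (the contents are invented for illustration; the Cline repo linked above has a much fuller "memory bank" version):

```
# Project rules (injected into every new chat)
- Python 3.12, type hints required, format with ruff.
- Run `pytest` after any code change; never leave failing tests.
- Update docs/architecture.md whenever module boundaries change.
```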

in reply to joshwa

@joshwa I've not tried IDE based rules yet - I use Claude Projects with custom instructions, that's the closest I've got so far
in reply to Simon Willison

got this open to absorb with a coffee later ☕️ Read the intro. Looks great!

My pre-take is that I find if I write the prompt as if I were writing a ticket for a junior developer, I get pretty good results, often first pass with the newer models.

Thanks for sharing 🎁

in reply to Simon Willison

thanks for summarizing your personal experience of writing code with LLMs in one article. 🙏

I'll post some sentences I found interesting 👇

#LLM #AI #Code #Insights #Learning

in reply to Simon Willison

could you possibly also mention areas where it slowed you down? One thing I've noticed is that it encourages you to go down rabbit holes. It is happy to produce complex answers to whatever your problem is, it doesn't try to question if there's a simpler answer to your question. So I've ended up spending time trying to solve a more complex problem because the LLM is "nearly there", but then you're in an unfamiliar world and it's hard to gauge when it's no longer productive.
in reply to Simon Willison

This is really useful, thanks for writing this!

Have you experimented with getting LLMs to work with your own libraries that may change regularly (and that the model probably won't have been trained on anyway)? Is the best approach to just load these libraries' codebases into the context window alongside your code? Or is it better to load the documentation?

in reply to Evan Hensleigh

@futuraprime I've been trying Claude Projects for that recently - their new GitHub integration means you can configure Claude to read all (or some) of your GitHub repository as part of every prompt, it works really well: support.anthropic.com/en/artic…
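For tools without that integration, a low-tech alternative is to concatenate the library's source into the context yourself. A minimal sketch (which suffixes to include, code vs. docs, is exactly the trade-off raised above; the function and defaults are assumptions, not a recommendation from the thread):

```python
from pathlib import Path

def build_context(repo_dir: str, suffixes=(".py", ".md")) -> str:
    """Concatenate selected files from a repo into one prompt context.

    Each file is prefixed with its relative path so the model can
    refer back to where code lives.
    """
    parts = []
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            rel = path.relative_to(repo_dir)
            parts.append(f"### {rel}\n{path.read_text()}")
    return "\n\n".join(parts)
```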
in reply to Simon Willison

Oh wow, I'd totally missed this. That looks really handy, thanks!