Here's the table of contents for my lengthy new piece on how I use LLMs to help me write code simonwillison.net/2025/Mar/11/…
Here’s how I use LLMs to help me write code
Online discussions about using Large Language Models to help write code inevitably produce comments from developers whose experiences have been disappointing. They often ask what they're doing wrong—how come some …
Simon Willison’s Weblog
Simon Willison
in reply to Simon Willison • • •tools.simonwillison.net colophon
tools.simonwillison.net
Peter Hoffmann
in reply to Simon Willison • • •
Simon Willison
in reply to Simon Willison • • •Colophon update: I added automated documentation for all 78 of my tools, written by piping the HTML through Claude 3.7 Sonnet
I was hesitant to do this at first but the utility of the resulting explanations convinced me it was worthwhile
More details here: simonwillison.net/2025/Mar/13/…
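A rough sketch of what that kind of pipeline can look like, using the llm Python library (the Claude model alias, the tools/ directory layout, and the prompt wording are illustrative assumptions rather than the exact setup):

```python
import pathlib
import llm  # https://llm.datasette.io/ - needs the llm-anthropic plugin for Claude models

# Assumed model alias and directory layout, purely for illustration.
model = llm.get_model("claude-3.7-sonnet")

for html_file in sorted(pathlib.Path("tools").glob("*.html")):
    response = model.prompt(
        "Write a 2-3 sentence description of what this tool does, "
        "based on its HTML and JavaScript:\n\n" + html_file.read_text()
    )
    print(f"## {html_file.name}\n\n{response.text()}\n")
```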
My tools colophon now has AI-generated descriptions
Simon Willison’s Weblog
Simon Willison
in reply to Simon Willison • • •Knock length of descriptions down to 2-3 sentences · simonw/tools@b9eadb0
GitHub
Jeff Triplett
in reply to Simon Willison • • •I do this a lot for justfile recipe descriptions.
This also helps Claude Code know which just recipes it can run. So it's become a bit of a nice tip for helping LLMs write and test code more efficiently.
François Leblanc
in reply to Simon Willison • • •my 2c, keep the longer versions! I don't dislike them.
Humans will scan them anyways, but the added keywords might make them more discoverable.
Simon Willison
in reply to François Leblanc • • •@leblancfg I found them a little bit grating to be honest, just a little
Bit too much marketingese in there
The new short ones feel exactly right to me - I just read all 79 of them and didn't feel like there was any fluff in there at all
Adrian Cockcroft
in reply to Simon Willison • • •GitHub - adrianco/meGPT
GitHub
Morten Grøftehauge
in reply to Simon Willison • • •
Dickson Tan
in reply to Simon Willison • • •
Adam Avenir
in reply to Simon Willison • • •“LLM tools that obscure that context from me are less effective.”
100%! I would love to share something I’ve been working on with you for feedback. Can I DM or email you?
Simon Willison
in reply to Adam Avenir • • •
Adam Avenir
in reply to Simon Willison • • •
cube
in reply to Simon Willison • • •
Simon Willison
in reply to cube • • •
Simon Willison
in reply to Simon Willison • • •@qbe I've come to terms with this personally because it feels similar enough to me reading a bunch of code online to get inspiration and then writing my own for scratch - I am aware that MANY people do not share my opinion on that
In terms of legality, I take mild comfort in the fact that if this turns out to be illegal it won't just be me: a sizable portion of the code written across the whole industry over the past few years will need to be thrown away too!
joshwa
in reply to Simon Willison • • •are you using rules yet? Global or project-specific guidelines that your tool adds to every new chat.
This is particularly powerful if you enable the agent to essentially write its own rules and design documentation, which then gets injected as context everywhere.
There's a great start here based on Cline: github.com/nickbaumann98/cline…
cline_docs/prompting/custom instructions library/cline-memory-bank.md at main · nickbaumann98/cline_docs
GitHub
Simon Willison
in reply to joshwa • • •
Carlton Gibson 🇪🇺
in reply to Simon Willison • • •got this open to absorb with a coffee later ☕️ Read the intro. Looks great!
My pre-take: I find that if I write the prompt as if I were writing a ticket for a junior developer, I get pretty good results, often on the first pass with the newer models.
Thanks for sharing 🎁
Paolo Melchiorre
in reply to Simon Willison • • •thanks for summarizing your personal experience of writing code with LLMs in one article. 🙏
I'll post some sentences I found interesting 👇
#LLM #AI #Code #Insights #Learning
Ian Channing 🦈
in reply to Simon Willison • • •
Evan Hensleigh
in reply to Simon Willison • • •This is really useful, thanks for writing this!
Have you experimented with getting LLMs to work with your own libraries that may change regularly (and that the model probably won't have been trained on anyway)? Is the best approach to just load these libraries' codebases into the context window alongside your code? Or is it better to load the documentation?
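For what it's worth, a minimal sketch of the documentation-in-context approach described in the question, again using the llm Python library (the paths, model alias, and example task here are hypothetical):

```python
import pathlib
import llm  # needs the llm-anthropic plugin; the model alias below is an assumption

# Hypothetical layout: concatenate the library's current docs into the prompt
# so the model works from the real API rather than whatever it was trained on.
docs = "\n\n".join(
    p.read_text() for p in sorted(pathlib.Path("my-library/docs").glob("*.md"))
)

model = llm.get_model("claude-3.7-sonnet")
response = model.prompt(
    "Here is the current documentation for my library:\n\n"
    + docs
    + "\n\nUsing only this API, write a function that fetches and parses a feed."
)
print(response.text())
```

The usual trade-off is context size: whole codebases burn through tokens quickly, so curated docs or a generated API summary tend to fit better.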
Simon Willison
in reply to Evan Hensleigh • • •Using the GitHub Integration | Anthropic Help Center
support.anthropic.com
Evan Hensleigh
in reply to Simon Willison • • •