Fake writing: Claude wrote 10 paragraphs instead of import human
https://www.pangram.com/history/dee030c0-0362-43d0-8fbd-bbab...
A lot of people write articles, then use an LLM to sum it up.
I do that for most of my docs because I ramble.
I prefer my rambling, but I think others like the tidied up LLM summary.
How does this document score - just wrote it today, used Gemini to sum it up: https://github.com/cuzzo/clear/blob/master/docs/retrospectiv...
I basically don't read anything that looks LLM-generated at this point, so I'd prefer that you included your rambling version alongside if you still want to have a generated version.
The great part about Git History is that the rambling typo filled version is available if you want it!
https://github.com/cuzzo/clear/commit/8c8a50f1d2c8fa2e8dca64...
I do LLM scaffolding, fill in my words, then pass over it manually.
On my worst, I rant worse than listening to an LLM.
When I make an effort, my personality comes through at no cost to the reader.
So far I haven't found a prompt that can impersonate me properly. I'm open to it.
Seeding it with a handful of my favorite articles gets me halfway, but the editing is still needed.
You seem to like SQL, don't you? Your README reads a bit fallacious: just because SQL is a great language that's still alive doesn't make your language a great one. I think that's called survivorship bias.
> virtual signaling
We just say random words now?
Hmmm, here's some proof of an unsubstantiated claim about the AI detection I've been doing.
Look at me: I can detect AI, and then I post about it in the comment sections. Here's a website. Hmmm, I'm going to flag and downvote posts that don't agree with the opinions I post in the HN comment section.
That's a virtue signal.
Edit:
What's sad is, dang is going to come comment on this instead of addressing the bigger conversation: nearly every comment section is full of LLM accusations and witch-hunts.
They serve as helpful warnings to other readers so they don’t need to waste their time. I’m sure I’m not the only one to appreciate a heads up.
Not much we can do here if you think everything is about bragging.
This is, in the parlance of everyday people, fucking insufferable.
Tone policing and chastising are worse than the article in question. I would rather read slop than listen to some internet stranger patiently talking down to someone with an air of moral superiority.
I think it's funny that this is simply a battle you and those folks above are not going to win.
Well I have great news. Complaining about AI detection comments is not going to make it go away.
How is it fake if I can read and understand it?
lol sorry, I am not a native English speaker, so I let Claude write a post-mortem analysis of what it did wrong :D
This comment is evidence that your write-up would have been just fine and understandable by other humans. Using AI to write your technical writing for you makes me lose trust in what you're saying. No one cares if you're a non-native English speaker, just write.
Your English seems good enough to communicate. I'd encourage you to trust your abilities; any misunderstandings can be clarified with follow-up questions if necessary.
Maybe you didn't stop to consider the cognitive load of writing in a second language and how much delegating to AI reduces it.
Me explaining to a teacher why I cheated on the test: well, did you stop to consider the cognitive load of doing the problems myself and how much easier it was to cheat?
This neglects the cognitive load of reading LLM-generated text, which is often overly verbose, awkward, and confusing.
There are actually multiple ways to completely eliminate the cognitive load of any sort of activity, so please, don't settle for half measures.
Imagine the horror of practicing a skill in order to get better at it!
If you use AI to help with your writing:
1) Let people know that you did that
2) Try to include a link to another page, which shows the prompt and your original version of the writing, even if it is in your native language
This will help people understand what you wanted to say.
Edit: The fact that you used AI to write your post, when your post is saying that you can’t trust AI to do a good job, is… super ironic lol :)
At the very least not providing a disclaimer is disrespectful to your readers.
sorry, i will include a disclaimer in the future
Hmm, I don’t see this much anymore. I typically start a project in plan mode and tell Claude to do some research to bring me 2-3 alternatives. Then we talk about the pros and cons before deciding on the libraries, etc.
On the other hand, if you just tell it to do a thing, I could believe that it would just do the thing. It is pretty bad at high level design judgment. Human guidance on architecture choices results in much better output.
Thanks! I've now added a hook to force Claude to search for existing solutions, both within and outside the codebase, every time I request a new feature.
Posts like this really need to include the prompts.
Thanks for asking! My initial prompt was like starting a new project with gstack's side-project mode. It does do a web search for whether an existing LLM wiki typo correction solution exists, but it doesn't tell Claude to build on a conventional NLP approach rather than writing everything from scratch.
The root issue here is that Claude (and most LLMs) optimize for producing working code, not minimal code. When given an ambiguous task they'll reach for a full implementation before checking if a library exists.

A pattern I've found helps: before writing any code, explicitly ask the model to list its assumptions and identify what libraries/modules could handle each part. Something like 'before coding, tell me what existing Python packages could solve each sub-problem.' This forces a discovery step.

The CLAUDE.md / system prompt approach also works well - you can specify project conventions like 'always check PyPI before implementing utility functions from scratch.' Takes a bit of upfront setup but catches this class of error reliably.
Thanks! Will try to add this to my Claude.md
This is why you should set up a project ruleset/constitution when you start. Do you want to prefer libraries or inline code? You can even choose at what point you think the tradeoff is worth it. 1000 lines of code? 10 functions? You can choose whatever.
Then, you tell your AI to stick to that rule, and it will. There are tradeoffs to each choice, and people fall into different camps. Make your choice, write it down, and tell the AI to always follow that rule, and then you have it your way.
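As a sketch, such a ruleset could live in a CLAUDE.md at the project root. The section names and thresholds below are just one possible convention, not anything Claude requires:

```markdown
# Project constitution

## Dependencies
- Before writing any utility from scratch, check whether a maintained
  package on PyPI already solves the problem.
- Prefer an external library once a from-scratch implementation would
  exceed ~200 lines or need ongoing maintenance (parsers, HTTP clients,
  date handling).
- Below that threshold, prefer small inline code over a new dependency.

## Process
- List candidate libraries and their tradeoffs before writing code.
- Ask before adding any dependency not already in pyproject.toml.
```

Pick thresholds that match your own camp; the point is that the rule is written down once instead of re-argued in every prompt.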
I consider myself AI skeptical-ish and I detest when people defend LLMs with "it's user error, prompt better," but in this case it actually is user error.
If you want a particular implementation approach, you need to specify not only the features you want but also the implementation strategy, at least at a high level. This could be as simple as adding "use pywikibot" or "use relevant packages from PyPI" to the end of your prompt. Or you could seed your project with some manually written scaffolding, including a pyproject.toml.
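A minimal version of that scaffolding might look like this. The project name is illustrative, and I'm assuming pywikibot is the intended package for the wiki-editing task:

```toml
[project]
name = "wiki-typo-fixer"          # illustrative name
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    # Seeding this list nudges the model toward calling the library
    # instead of reimplementing MediaWiki API access from scratch.
    "pywikibot>=9.0",
]

[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"
```

With this in place, the model sees an existing dependency tree to work within rather than an empty directory to fill.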
While LLMs do tend to have NIH syndrome by default, I think this is a good default. I'd much rather have tight control over when and how to include external dependencies than let a prompt run for 40 minutes and come back to find 2 GB of newly installed node packages with a dependency tree 300 levels deep.
Thanks! I will set up a more detailed CLAUDE.md next time. An additional thing that kind of frustrates me: even when I explicitly tell it to migrate to existing packages, Claude stubbornly sticks to its custom rule set rather than replacing it with the existing state of the art.
Seen this pattern repeatedly building a shell plugin with Claude. It defaults to writing everything from first principles rather than reaching for existing tools. 200 lines of custom YAML parsing when a one-liner would do. Adding "always check if a library or existing tool solves this before writing custom code" to CLAUDE.md cut this down significantly.
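To make the contrast concrete, here is a minimal sketch of the two approaches. The hand-rolled parser only handles a flat `key: value` subset, which is exactly how these from-scratch versions start before growing to 200 lines:

```python
def handrolled_parse(text: str) -> dict:
    """The kind of from-scratch parser Claude tends to write:
    handles only a flat 'key: value' subset of YAML, with every
    value left as a string."""
    result = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # skip blank/comment-only lines
        key, _, value = line.partition(":")
        result[key.strip()] = value.strip()
    return result

config = handrolled_parse("host: example.com  # prod\nport: 8080")
print(config)  # {'host': 'example.com', 'port': '8080'}

# The one-liner it should reach for instead (assuming PyYAML is an
# acceptable dependency), which also handles nesting, lists, quoting,
# and type coercion correctly:
#   import yaml
#   config = yaml.safe_load(text)
```

The custom version silently mishandles anything beyond the flat subset (nested maps, quoted strings containing `#`), which is where the extra 180 lines come from.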
Regardless of how you feel about the default behavior, this is the type of preference that Claude really listens to in your CLAUDE.md.
If you tell it to leverage dependencies, it will. If you (like me) prefer that it avoid dependencies, it will.
Tbh that is some engineering teams I’ve worked on…
Honest question, no shade: wasn't that a bit your fault for not googling or asking it to consider existing approaches and solutions? AI will be as dumb as you let it be, imo. I always ask it to do a bit of research as I craft a plan with it.
Thanks. I wasn't running this bare; I had gstack installed and had tried Superpowers before. Next time I will try a more detailed CLAUDE.md setting.
On the other hand, I often want an LLM to write things from scratch instead of bringing in 10x the surface area in unnecessary dependencies, and I very, very rarely get better results when these things are let loose on a cesspool of a web. Given that real people have vastly different preferences, you either have to cater to a subset or else require everyone to be a bit more specific with their desires. It's not that surprising.
Yeah, and you can tell the AI to just write the bits of the code that it actually needs for the functionality you are using. If you end up needing more of it, that is fine, the AI will just write more of it when it needs it.
The tradeoffs are very different with AI code than human written code. There are still tradeoffs, but they are different now.
> If you end up needing more of it, that is fine, the AI will just write more of it when it needs it.
This kind of incremental addition without planning in advance leads to inconsistent code with lots of redundancies.
Opus 4.7 tends to do this overkill stuff regardless of what you’re trying to do
pebkac