Using classic dev books to guide AI agents?
1 point by ZLStas 6 hours ago | 4 comments
I’ve been experimenting with a small open-source project that turns ideas from classic software engineering books (Clean Code, DDIA, etc.) into structured “skills” that AI agents can reuse for tasks like code review, system design, and trade-off analysis.

Each skill is an opinionated, book-inspired instruction set rather than a summary or excerpt.
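
To make that concrete, here is a rough Python sketch of how one of these skills could be represented and turned into a review prompt. The field names and the helper function are illustrative only, not the project's actual schema:

    # Illustrative only: a book-derived "skill" represented as plain data.
    # The structure and field names are hypothetical, not the project's real format.
    clean_code_review_skill = {
        "name": "clean-code-review",
        "source": "Clean Code (Robert C. Martin)",
        "intent": "Review a diff for readability and maintainability issues",
        "instructions": [
            "Flag functions that mix multiple levels of abstraction.",
            "Prefer descriptive names over comments that explain intent.",
            "Call out duplicated logic and suggest a single extraction point.",
        ],
        "output_format": "List each finding with file, line, principle, and a suggested fix.",
    }

    def to_prompt(skill: dict, code: str) -> str:
        """Combine a skill and the code under review into a single prompt string."""
        rules = "\n".join(f"- {rule}" for rule in skill["instructions"])
        return (
            f"You are reviewing code using principles from {skill['source']}.\n"
            f"Goal: {skill['intent']}\n{rules}\n{skill['output_format']}\n\n"
            f"Code:\n{code}"
        )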

I’m trying to understand:

- Is this kind of abstraction useful in practice?

- Where would you expect something like this to fail?

- Would you trust AI tools more if their behavior was explicitly grounded in known books?

I’m not sure yet whether this should stay a learning experiment or evolve into something more serious, so I’d really value critical feedback from people who build or rely on developer tools.




What I have found personally is that asking questions gets the best results. Instead of feeding the model too much generalized information, I ask LLMs to research what matters most in the current development space, so they create something that is up to date, effective, clean, and so on.

My goal is to... and I want to know more.

LLMs seem to do better at helping you learn than at creating, as the predictions seem to shift focus.

Then you could use the books as a resource based on the final concept.


That's a great point — asking focused questions definitely gets better results than dumping generalized knowledge. I think both approaches can complement each other.

Where I see book-based skills adding value is in the iterative review loop: you let the LLM review your code against well-known principles (like Clean Code or DDIA patterns), it flags issues and suggests improvements, and you apply them repeatedly. Over multiple passes, the code quality compounds significantly. So it's less about feeding the LLM static rules and more about giving it a structured lens to evaluate through. The LLM still does the thinking — the books just sharpen its focus.

That said, I'm still figuring out how to run this evaluation properly. A colleague of mine has been experimenting with spinning up sub-agents to review the outputs of the main LLM flow — essentially an automated review layer. That might be the right pattern: one agent creates, another evaluates against known principles. Curious if anyone else has tried something similar.
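
For what it's worth, here is a minimal Python sketch of that create-then-review loop. call_llm is a placeholder for whatever model API you use (not a real SDK call), and the number of passes is arbitrary:

    # Minimal sketch of the "one agent creates, another evaluates" pattern.
    # call_llm is a stand-in for your model API; plug in a real client to run it.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("replace with a real model API call")

    def review_loop(task: str, skill_prompt: str, passes: int = 3) -> str:
        """Draft code for a task, then repeatedly review and revise it against a skill."""
        draft = call_llm(task)
        for _ in range(passes):
            findings = call_llm(
                f"{skill_prompt}\n\nReview this code and list concrete issues:\n{draft}"
            )
            draft = call_llm(
                f"Revise the code to address these findings:\n{findings}\n\nCode:\n{draft}"
            )
        return draft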

Are you using Claude Code?

I find it to be SIGNIFICANTLY better than Projects in any other form, because of the number of layers you can create.

You can store the full books, have your workflows, etc.


Yes, I use Claude — both the chat and Claude Code. Projects are great for layering context, but my concern with storing full books is that they eat up a huge chunk of the context window. I'd rather spend that budget on actual project context — the codebase, architecture decisions, domain specifics. That's where the distilled skill files come in: you get the core principles from the book in a compact, actionable form without burning through your context on hundreds of pages of text. At least that's how I see it.
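
Rough context math behind that trade-off, assuming the common ~4 characters per token heuristic and made-up sizes for the book and the skill file:

    # Back-of-the-envelope context math; ~4 chars/token is only a ballpark heuristic.
    def approx_tokens(text: str) -> int:
        return len(text) // 4

    full_book_chars = 600_000   # a few hundred pages of prose (assumed size)
    skill_file_chars = 6_000    # a distilled one-to-two page skill (assumed size)

    print(approx_tokens("x" * full_book_chars))   # ~150,000 tokens
    print(approx_tokens("x" * skill_file_chars))  # ~1,500 tokens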


