I’ve been experimenting with a small open-source project that turns ideas from classic software engineering books (Clean Code, DDIA, etc.) into structured “skills” that AI agents can reuse for tasks like code review, system design, and trade-off analysis.
Each skill is an opinionated, book-inspired instruction set rather than a summary or excerpt.
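To make the idea concrete, here is a rough sketch of the shape a skill takes (heavily simplified, with illustrative field names rather than the repo's actual format):

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A book-inspired, opinionated instruction set an agent can reuse.

    Illustrative sketch only; the real format and field names may differ.
    """
    name: str                      # e.g. "clean-code-small-functions"
    source_book: str               # the book the skill is grounded in
    applies_to: list[str]          # tasks: "code review", "system design", ...
    instructions: list[str]        # opinionated, actionable directives
    tradeoffs: list[str] = field(default_factory=list)  # known limits / counterpoints

# A hypothetical skill distilled from Clean Code:
small_functions = Skill(
    name="clean-code-small-functions",
    source_book="Clean Code",
    applies_to=["code review"],
    instructions=[
        "Flag functions that mix multiple levels of abstraction.",
        "Prefer extracting a named helper over adding an explanatory comment.",
    ],
    tradeoffs=["Over-extraction can scatter logic and hurt readability."],
)
```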
I’m trying to understand:
- Is this kind of abstraction useful in practice?
- Where would you expect something like this to fail?
- Would you trust AI tools more if their behavior were explicitly grounded in well-known books?
I’m not sure yet whether this should stay a learning experiment or evolve into something more serious, so I’d really value critical feedback from people who build or rely on developer tools.
My goal is to… and I want to know more.
LLMs seem to do better at helping you learn than at creating, as their predictions seem to shift focus.
Then you could use the books as a resource based on our final concept.