they're definitely on the roadmap but not in the alpha yet. The syntax like CREATE CONSTRAINT ON (n:Person) ASSERT n.email IS UNIQUE is designed to work, and we've got the error handling infrastructure in place, but the actual enforcement logic is planned for v0.2.0 and beyond. Same goes for relationship constraints - the foundation's there with schema validation, but the complex constraint types are coming in future versions.
Indices are similar - we've got the query planner logic ready to use them, and label indexing is partially implemented as a foundation. Property indexes are definitely planned, with basic support in v0.2.0 and composite indexes following in v0.4.0. The current alpha focuses on getting the core Cypher operations (CREATE/MATCH) solid before layering on the optimization features.
The roadmap shows:
v0.2.0 (Q1 2026): Property indexes and basic constraints
v0.4.0 (Q3 2026): Advanced indexing (composite, spatial)
v1.0.0 (2027): Full constraint support
For now, you can enforce constraints at the application level or use raw SQLite constraints on the backing tables if you need that functionality immediately. The alpha is really about proving the core graph operations work end-to-end before adding the enterprise features.
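For the raw-SQLite route, a unique expression index over `json_extract` on the node backing table gets you email uniqueness today. A minimal sketch, assuming the backing table stores properties as a JSON object (table and index names here are illustrative, not the extension's actual API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Stand-in for the extension's node backing table (assumed layout:
# id, labels as a JSON array, properties as a JSON object).
conn.execute("""
    CREATE TABLE my_graph_nodes (
        id INTEGER PRIMARY KEY,
        labels TEXT,
        properties TEXT
    )
""")

# A unique index on an extracted JSON property approximates
# ASSERT n.email IS UNIQUE. Rows without an email extract to NULL,
# and SQLite permits multiple NULLs in a unique index, so only
# real duplicates are rejected.
conn.execute("""
    CREATE UNIQUE INDEX uniq_email
    ON my_graph_nodes (json_extract(properties, '$.email'))
""")

conn.execute(
    "INSERT INTO my_graph_nodes VALUES (1, ?, ?)",
    ('["Person"]', '{"email": "ann@example.com"}'),
)
try:
    conn.execute(
        "INSERT INTO my_graph_nodes VALUES (2, ?, ?)",
        ('["Person"]', '{"email": "ann@example.com"}'),
    )
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # duplicate email refused by SQLite itself
```

The nice part of this workaround is that enforcement happens inside SQLite, so it also covers writes that bypass your application code.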
Thanks for the suggestions! I'm familiar with both.
Different category though - this is a SQLite extension, not a standalone database. The value prop is:
Zero friction - If you're already using SQLite (Python scripts, mobile apps, embedded systems), just .load graph_extension and you have graph capabilities
Mix SQL + Cypher - Join your relational tables with graph traversals in the same query
Works everywhere SQLite works - Serverless functions, Raspberry Pi, iOS apps, wherever
Leverage SQLite's ecosystem - All existing tools, bindings, deployment patterns just work

Kuzu and CozoDB are excellent if you want a dedicated graph database. But if you've already got SQLite (which is everywhere), this lets you add graph features without rearchitecting.
Think of it like SQLite's FTS5 extension for full-text search - you're not competing with Elasticsearch, you're giving SQLite users a lightweight option that fits their existing workflow.
Thanks! As someone who was a Postgres user first, I really appreciate that comparison. Apache AGE does great work.
Graph databases are crucial for AI memory, especially paired with vector databases. Graph for relationships, vectors for semantic similarity - particularly powerful for embedded systems and robotics where you need lightweight, on-device reasoning.
The storage model is just regular SQLite tables. When you create a graph, it makes two backing tables:
my_graph_nodes -- id, labels (JSON array), properties (JSON object)
my_graph_edges -- id, source, target, edge_type, properties (JSON object)
It's an edge list, not an adjacency list.
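In plain SQL terms, that edge-list layout makes a neighbour lookup an ordinary join. A sketch with Python's sqlite3 standing in for the extension's backing tables (column names follow the layout above; the data and graph name are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The two backing tables as described: nodes carry a JSON labels array
# and JSON properties; edges are one row per (source, target, type).
conn.executescript("""
    CREATE TABLE my_graph_nodes (
        id INTEGER PRIMARY KEY,
        labels TEXT,      -- JSON array, e.g. '["Person"]'
        properties TEXT   -- JSON object
    );
    CREATE TABLE my_graph_edges (
        id INTEGER PRIMARY KEY,
        source INTEGER REFERENCES my_graph_nodes(id),
        target INTEGER REFERENCES my_graph_nodes(id),
        edge_type TEXT,
        properties TEXT
    );
""")

conn.execute("INSERT INTO my_graph_nodes VALUES (1, '[\"Person\"]', '{\"name\": \"Ann\"}')")
conn.execute("INSERT INTO my_graph_nodes VALUES (2, '[\"Person\"]', '{\"name\": \"Bob\"}')")
conn.execute("INSERT INTO my_graph_edges VALUES (1, 1, 2, 'KNOWS', '{}')")

# One hop over the edge list = one join against the nodes table.
neighbours = conn.execute("""
    SELECT json_extract(n.properties, '$.name')
    FROM my_graph_edges e
    JOIN my_graph_nodes n ON n.id = e.target
    WHERE e.source = 1 AND e.edge_type = 'KNOWS'
""").fetchall()
# neighbours == [('Bob',)]
```

Multi-hop traversals are just more joins (or a recursive CTE), which is what lets the extension lean on SQLite's planner instead of maintaining its own adjacency structures.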
Query processing doesn't transpile Cypher directly. There's a pipeline:
Cypher → AST → Logical Plan → Physical Plan (optimizer) → Iterators → SQL queries
The iterators generate SQL on-the-fly to fetch from those backing tables. Basically the Volcano model.
graphFindEdgesByType is actually deprecated and a no-op now. The comment says "edge lookups are done via SQL queries." There used to be in-memory structures, but it moved to just generating SQL like:
SELECT e.target, e.id, e.edge_type
FROM my_graph_edges e
WHERE e.source = 123 AND e.edge_type = 'KNOWS'
So it's "build SQL queries as needed during execution" rather than "transpile the whole Cypher query upfront."
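That per-operator SQL generation can be sketched as a toy Volcano-style iterator: each pull takes a source id from the upstream operator and emits neighbour tuples by running the SQL above. This is an illustration of the model, not the extension's actual code (the class name and setup are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_graph_edges "
             "(id INTEGER PRIMARY KEY, source INTEGER, target INTEGER, "
             "edge_type TEXT, properties TEXT)")
conn.executemany("INSERT INTO my_graph_edges VALUES (?,?,?,?,?)", [
    (1, 123, 7, 'KNOWS', '{}'),
    (2, 123, 9, 'KNOWS', '{}'),
    (3, 123, 9, 'LIKES', '{}'),
])

class ExpandByType:
    """Volcano-style expand: pull a source node id from upstream,
    generate and run the edge-lookup SQL on the fly, yield matches."""

    def __init__(self, conn, upstream, edge_type):
        self.conn = conn
        self.upstream = iter(upstream)   # e.g. a node-scan iterator
        self.edge_type = edge_type
        self.pending = iter(())          # buffered rows for current source

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            try:
                return next(self.pending)
            except StopIteration:
                src = next(self.upstream)  # StopIteration here ends the stream
                self.pending = iter(self.conn.execute(
                    "SELECT e.target, e.id, e.edge_type "
                    "FROM my_graph_edges e "
                    "WHERE e.source = ? AND e.edge_type = ?",
                    (src, self.edge_type),
                ).fetchall())

hits = list(ExpandByType(conn, [123], 'KNOWS'))
# hits == [(7, 1, 'KNOWS'), (9, 2, 'KNOWS')]
```

The upstream iterator could itself be another operator (a label scan, a filter), which is the composability the Volcano model buys you.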
I agree, but LLMs are very good at generating XML. Additionally, SCXML, which AgentML extends, has been around and finalized for over 15 years, so generating AgentML works incredibly well.
I get your point; however, I wonder how much better XML is than JSON when using structured-output endpoints, which is likely what you would want to use with such a format.
Good question - we're working on case studies for this.
My theory: models are heavily trained on HTML/XML and many use XML tags in their own system prompts, so they're naturally fluent in that syntax. Makes nested structures more reliable in our testing.
Structured output endpoints help JSON a lot though.