Hacker News | new | past | comments | ask | show | jobs | submit | gwillen85's comments

You can execute the above with `OPENAI_API_KEY=YOUR_API_KEY amlx run researcher.aml`


So on constraints:

They're definitely on the roadmap but not in the alpha yet. The syntax, like `CREATE CONSTRAINT ON (n:Person) ASSERT n.email IS UNIQUE`, is designed to work, and we've got the error-handling infrastructure in place, but the actual enforcement logic is planned for v0.2.0 and beyond. The same goes for relationship constraints - the foundation is there with schema validation, but the complex constraint types are coming in future versions.

Indexes are similar - we've got the query planner logic ready to use them, and label indexing is partially implemented as a foundation. Property indexes are definitely planned, with basic support in v0.2.0 and composite indexes following in v0.4.0. The current alpha focuses on getting the core Cypher operations (CREATE/MATCH) solid before layering on the optimization features.
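In the meantime, SQLite's own expression indexes can stand in for property indexes. A minimal sketch, assuming a backing table shaped like the one described in this thread (the `my_graph_nodes` name and JSON columns are my guess, not the extension's documented schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical backing table; column shape guessed from the discussion.
con.execute("""
    CREATE TABLE my_graph_nodes (
        id INTEGER PRIMARY KEY,
        labels TEXT,     -- JSON array
        properties TEXT  -- JSON object
    )
""")
con.execute("INSERT INTO my_graph_nodes (labels, properties) VALUES "
            """('["Person"]', '{"name": "Ada"}')""")

# An expression index over a JSON property, usable until native
# property indexes land.
con.execute("CREATE INDEX idx_name ON my_graph_nodes "
            "(json_extract(properties, '$.name'))")

query = ("SELECT id FROM my_graph_nodes "
         "WHERE json_extract(properties, '$.name') = 'Ada'")

# The planner should pick up the expression index; inspect with
# EXPLAIN QUERY PLAN.
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row[-1])

print(con.execute(query).fetchall())
```

The only catch is that the query's WHERE clause must use the exact same expression as the index definition for the planner to match it.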

The roadmap shows:

* v0.2.0 (Q1 2026): Property indexes and basic constraints

* v0.4.0 (Q3 2026): Advanced indexing (composite, spatial)

* v1.0.0 (2027): Full constraint support

For now, you can enforce constraints at the application level or use raw SQLite constraints on the backing tables if you need that functionality immediately. The alpha is really about proving the core graph operations work end-to-end before adding the enterprise features.
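For example, uniqueness can be enforced today with a raw SQLite unique index over the backing table. A sketch, assuming the `my_graph_nodes` table name and JSON layout (my reconstruction, not the extension's documented schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Assumed backing-table layout; not the extension's documented schema.
con.execute("""
    CREATE TABLE my_graph_nodes (
        id INTEGER PRIMARY KEY,
        labels TEXT,     -- JSON array
        properties TEXT  -- JSON object
    )
""")

# Raw SQLite stand-in for the not-yet-enforced Cypher constraint:
#   CREATE CONSTRAINT ON (n:Person) ASSERT n.email IS UNIQUE
con.execute("""
    CREATE UNIQUE INDEX idx_unique_email
    ON my_graph_nodes (json_extract(properties, '$.email'))
""")

con.execute("INSERT INTO my_graph_nodes (labels, properties) VALUES (?, ?)",
            ('["Person"]', '{"email": "ada@example.com"}'))
try:
    con.execute("INSERT INTO my_graph_nodes (labels, properties) VALUES (?, ?)",
                ('["Person"]', '{"email": "ada@example.com"}'))
except sqlite3.IntegrityError:
    print("duplicate email rejected")
```

One nice property: nodes without an email yield NULL from json_extract, and NULLs never collide in a unique index, so nodes of other labels are unaffected.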


Thanks for the suggestions! I'm familiar with both. Different category though - this is a SQLite extension, not a standalone database. The value prop is:

* Zero friction - if you're already using SQLite (Python scripts, mobile apps, embedded systems), just `.load graph_extension` and you have graph capabilities

* Mix SQL + Cypher - join your relational tables with graph traversals in the same query

* Works everywhere SQLite works - serverless functions, Raspberry Pi, iOS apps, wherever

* Leverage SQLite's ecosystem - all existing tools, bindings, deployment patterns just work

Kuzu and CozoDB are excellent if you want a dedicated graph database. But if you've already got SQLite (which is everywhere), this lets you add graph features without rearchitecting.

Think of it like SQLite's FTS5 extension for full-text search - you're not competing with Elasticsearch, you're giving SQLite users a lightweight option that fits their existing workflow.


This reminds me of the Apache AGE Postgres extension as well. Very cool work.


Thanks! As a Postgres user first, I really appreciate that comparison. Apache AGE does great work.

Graph databases are crucial for AI memory, especially paired with vector databases. Graph for relationships, vectors for semantic similarity - particularly powerful for embedded systems and robotics where you need lightweight, on-device reasoning.


Great question!

The storage model is just regular SQLite tables. When you create a graph, it makes two backing tables:

    my_graph_nodes -- id, labels (JSON array), properties (JSON object)
    my_graph_edges -- id, source, target, edge_type, properties (JSON object)

It's an edge list, not adjacency lists.
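Because it's an edge list, multi-hop traversal is just a self-join on the edges table. A quick sketch against a table of that shape (the DDL is my reconstruction from the column list above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Guessed DDL matching the backing-table shape described above.
con.execute("CREATE TABLE my_graph_edges (id INTEGER PRIMARY KEY, "
            "source INTEGER, target INTEGER, edge_type TEXT, properties TEXT)")
con.executemany(
    "INSERT INTO my_graph_edges (source, target, edge_type) VALUES (?, ?, ?)",
    [(1, 2, "KNOWS"), (2, 3, "KNOWS"), (2, 4, "KNOWS")])

# With an edge list, a two-hop traversal is a plain self-join --
# no in-memory adjacency structures needed.
rows = con.execute("""
    SELECT e2.target
    FROM my_graph_edges e1
    JOIN my_graph_edges e2 ON e2.source = e1.target
    WHERE e1.source = 1
    ORDER BY e2.target
""").fetchall()
print([r[0] for r in rows])  # friends-of-friends of node 1 → [3, 4]
```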

Query processing is not transpiling Cypher directly. There's a pipeline:

    Cypher → AST → Logical Plan → Physical Plan (optimizer) → Iterators → SQL queries

The iterators generate SQL on the fly to fetch from those backing tables. Basically the Volcano model.
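A toy illustration of that Volcano-style pull model (my own sketch, not the extension's code): each operator is an iterator that pulls rows from its child, and the leaf issues SQL on demand against the backing table.

```python
import sqlite3

def scan_edges(con, source_id):
    """Leaf operator: generates SQL on demand against the backing table."""
    yield from con.execute(
        "SELECT target, id, edge_type FROM my_graph_edges WHERE source = ?",
        (source_id,))

def filter_type(child, edge_type):
    """Filter operator: pulls one row at a time from its child."""
    for row in child:
        if row[2] == edge_type:
            yield row

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_graph_edges (id INTEGER PRIMARY KEY, "
            "source INTEGER, target INTEGER, edge_type TEXT, properties TEXT)")
con.executemany(
    "INSERT INTO my_graph_edges (source, target, edge_type) VALUES (?, ?, ?)",
    [(123, 1, "KNOWS"), (123, 2, "LIKES"), (123, 3, "KNOWS")])

# Physical plan: filter(scan) -- rows are pulled lazily from the top.
targets = [row[0] for row in filter_type(scan_edges(con, 123), "KNOWS")]
print(targets)  # → [1, 3]
```

Nothing materializes until the top of the plan asks for the next row, which is what lets the real pipeline skip a full upfront Cypher-to-SQL transpile.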

`graphFindEdgesByType` is actually deprecated and is a no-op now. The comment says "edge lookups are done via SQL queries." It used to have in-memory structures but moved to just generating SQL like:

    SELECT e.target, e.id, e.edge_type
    FROM my_graph_edges e
    WHERE e.source = 123 AND e.edge_type = 'KNOWS'

So it's "build SQL queries as needed during execution" rather than "transpile the whole Cypher query upfront."


This will also be used in the yet-to-be-released `memlite`, which is our first WASM component for AgentML.


Interesting, yet the XML syntax feels quite verbose vs. JSON, for example.


I agree, but LLMs are very good at generating XML. Additionally, SCXML, which AgentML extends, has been around and finalized for over 15 years, so generating AgentML works incredibly well.


I get your point; however, I wonder how much better they are than JSON when using structured-output endpoints, which is likely what you'd want to use with such a format.


That's a fair point. We're considering adding JSON as a first-class citizen alongside XML - similar to OpenAPI supporting both JSON and YAML.

But you're right that structured output endpoints make JSON generation more reliable, so supporting both formats long-term makes sense.


I'm also curious whether you know of any definitive test sets on this - kind of like how Simon Willison uses the pelican on the bicycle?


Good question - we're working on case studies for this.

My theory: models are heavily trained on HTML/XML and many use XML tags in their own system prompts, so they're naturally fluent in that syntax. Makes nested structures more reliable in our testing.

Structured output endpoints help JSON a lot though.


    <agentml xmlns="github.com/agentflare-ai/agentml" 
             datamodel="ecmascript">
      <state id="respond">
        <openai:generate model="grok-4-fast-reasoning"
          promptexpr="`continue: ${conversationHistory(10)}`"
          location="_event"/>
        <transition event="done" target="send"/>
      </state>
      <state id="send">
        <send event="output" data="_event.data"/>
        <transition target="respond"/>
      </state>
    </agentml>
AgentML is an open-source XML language for building deterministic AI agents. Write once, run anywhere.

The problem: LLM agents are flaky, locked to specific frameworks, and nearly impossible to debug or audit.

The fix: Declare agent behavior in XML using state machines. State transitions are explicit, outputs are schema-bound, execution is traceable.

Key features:

* No hallucinated tool calls (structured outputs only)

* Built-in memory (SQLite + graph storage)

* 80% fewer tokens via runtime snapshots

* CLI: amlx validate, amlx run

* Swap models freely (OpenAI, Grok, Ollama)

Install: curl -fsSL sh.agentml.dev | sh

Run: amlx run chat.aml

Runtime: Go/WASM (agentmlx). Coming soon: LangGraph export, Python SDK.

GitHub: https://github.com/agentflare-ai/agentml

Docs + Demo: https://www.agentml.dev/

What's your biggest agent pain point - framework lock-in, debugging, or compliance?

