Simplicity is beautiful — NeuroChain DSL makes it real. AI operations are part of the code, not hidden layers. Scripts and the CLI are the primary way to work, while the demos are light public test surfaces. Run ONNX models on the CPU and keep full control.
```
# Sentiment
AI: "models/distilbert-sst2/model.onnx"
set mood from AI: "This product is great!"
if mood == "Positive":
    neuro "👍"
```
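Under the hood, a sentiment model like DistilBERT SST-2 returns raw logits, and the runtime has to turn them into a label such as `"Positive"` before the DSL can compare it. The sketch below shows that post-processing step with stand-in logits; the label order and the numbers are assumptions for illustration, not NeuroChain's actual internals.

```python
import math

LABELS = ["Negative", "Positive"]  # assumed SST-2 label order

def logits_to_label(logits):
    """Softmax the raw logits and return (label, confidence)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return LABELS[idx], probs[idx]

# Stand-in logits for "This product is great!" (not real model output)
label, confidence = logits_to_label([-2.1, 3.4])
print(label)  # Positive
```

Because the mapping happens inside the runtime, the script only ever sees the clean label in `mood`.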
NeuroChain DSL unifies AI and logic into one language so you can build solutions smartly and simply.
The SST2, Toxic, FactCheck, Intent, MacroIntent, and Intent Stellar models run on the CPU without heavy infrastructure.
No external cloud layers means a smaller attack surface and more control.
AI built-ins are part of the syntax, with no hidden layers.
```
# Intent
set cmd from AI: "Please stop."
if cmd == "StopCommand": neuro "Stopping process"
```
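On the host side, an intent label like `"StopCommand"` typically fans out through a dispatch table rather than a chain of `if` checks. This is a hypothetical sketch of that pattern; the extra label and messages are illustrative, not part of the Intent model's real label set.

```python
# Hypothetical dispatch table mirroring the DSL's intent check: the model
# returns a label and the host maps it to an action message.
def handle_intent(label):
    actions = {
        "StopCommand": "Stopping process",
        "StartCommand": "Starting process",  # illustrative extra label
    }
    return actions.get(label, "No action for: " + label)

print(handle_intent("StopCommand"))  # Stopping process
```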
No heavy environments or unnecessary dependencies: just code and run.
Enter NeuroChain DSL commands and run them from your browser. Pick one or many models and see output and logs instantly.
NeuroChain parses Stellar/Soroban intents from `.nc` files into typed plans with explicit guardrails before submit.
One local-first engine, multiple runtime surfaces, and a guarded Stellar intent layer, presented as one coherent stack.
Lexer -> Parser -> Interpreter -> Intent/Template -> Inference
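To make the Lexer -> Parser stage concrete, here is a toy sketch that turns one statement form, `set <name> from AI: "<text>"`, into a typed node. The real NeuroChain grammar is richer than this; the node shape is an assumption for illustration only.

```python
import re

# Matches exactly one DSL statement form: set <name> from AI: "<text>"
STMT = re.compile(r'^set\s+(\w+)\s+from\s+AI:\s+"([^"]*)"$')

def parse_set_from_ai(line):
    """Return a typed node dict for the statement, or None on no match."""
    m = STMT.match(line.strip())
    if not m:
        return None
    return {"kind": "SetFromAI", "target": m.group(1), "prompt": m.group(2)}

node = parse_set_from_ai('set mood from AI: "This product is great!"')
print(node["target"])  # mood
```

The interpreter can then walk such nodes, hand the prompt to the inference stage, and bind the resulting label to the target name.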
- `POST /api/analyze`: DSL analysis and local inference.
- `POST /api/stellar/intent-plan`: typed Stellar planning surface.
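A minimal client sketch for the analyze endpoint, assuming a local dev host and a `{"script": ...}` request body; neither the host nor the body shape is documented here, so treat both as placeholders and adjust to the actual API.

```python
import json

API_BASE = "http://localhost:3000"  # assumed local dev host

def build_analyze_request(script):
    """Build the URL and JSON body for POST /api/analyze.
    The {"script": ...} body shape is an assumption, not documented API."""
    url = API_BASE + "/api/analyze"
    body = json.dumps({"script": script})
    return url, body

url, body = build_analyze_request('set mood from AI: "This product is great!"')
print(url)  # http://localhost:3000/api/analyze
# The body can then be sent with any HTTP client, e.g. urllib or requests.
```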
Typed plans flow through allowlist, contract policy, and intent safety before anything deeper happens.
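The guardrail chain above can be pictured as a sequence of checks a typed plan must clear before submit. The sketch below is illustrative only: the plan fields, allowlist entries, and limits are invented stand-ins, not NeuroChain's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    contract: str
    method: str
    amount: int

ALLOWED_CONTRACTS = {"CSOROBAN_DEMO"}      # assumed allowlist
ALLOWED_METHODS = {"transfer", "balance"}  # assumed contract policy
MAX_AMOUNT = 1_000                         # assumed intent-safety cap

def check_plan(plan):
    """Return (ok, reason); the plan is submitted only when ok is True."""
    if plan.contract not in ALLOWED_CONTRACTS:
        return False, "contract not on allowlist"
    if plan.method not in ALLOWED_METHODS:
        return False, "method violates contract policy"
    if plan.amount > MAX_AMOUNT:
        return False, "amount exceeds intent-safety cap"
    return True, "ok"

print(check_plan(Plan("CSOROBAN_DEMO", "transfer", 50)))  # (True, 'ok')
```

Running the checks in this fixed order keeps rejections cheap and explainable: the first failing gate names the reason, and nothing deeper runs.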
Built to stay legible under load: explicit execution flow, safer defaults, and guardrails at every layer.
Try the WebUI or add your own script and get moving right away.