Simplicity is beautiful, and NeuroChain DSL makes it real. AI operations are part of the code, not hidden layers. Scripts and the CLI are the primary way to work; the WebUI is a light demo and test surface. Run ONNX models on the CPU and keep full control.
# Sentiment
AI: "models/distilbert-sst2/model.onnx"
set mood from AI: "This product is great!"
if mood == "Positive":
    neuro "👍"
NeuroChain DSL unifies AI and logic into one language so you can build solutions smartly and simply.
The SST2, Toxic, FactCheck, Intent, and MacroIntent models run on the CPU without heavy infrastructure.
No external cloud layers means a smaller attack surface and more control.
AI built-ins are part of the syntax, with no hidden layers.
# Intent
set cmd from AI: "Please stop."
if cmd == "StopCommand": neuro "Stopping process"
No heavy environments or unnecessary dependencies: just write code and run it.
Enter NeuroChain DSL commands and run them from your browser. Pick one or many models and see output and logs instantly.
Scripts are plain .nc files. NeuroChain drives Snake in real time: the intent model reads commands and the game reacts instantly.
[Engine (NeuroChain)]
Lexer → Parser → Interpreter → Inference (tract-onnx, CPU)
# Local inference — no external cloud layers
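The inference stage is plain tract-onnx on the CPU. Below is a minimal Rust sketch of that step for the sentiment model, assuming a DistilBERT SST-2 export with two i64 inputs (input_ids and attention_mask) and a two-logit output where index 1 is Positive; tokenization is skipped and placeholder ids are used, so this is an illustration of the technique, not the actual engine code.

// Minimal sketch of the tract-onnx inference step (CPU only).
// Input names, shapes and label order are assumptions about the export;
// real token ids would come from a tokenizer, which is omitted here.
use tract_onnx::prelude::*;

fn classify(ids: Vec<i64>, mask: Vec<i64>) -> TractResult<&'static str> {
    let seq = ids.len();
    // Load the ONNX model, pin the input shapes, and optimize for CPU.
    let model = tract_onnx::onnx()
        .model_for_path("models/distilbert-sst2/model.onnx")?
        .with_input_fact(0, InferenceFact::dt_shape(i64::datum_type(), tvec!(1, seq)))?
        .with_input_fact(1, InferenceFact::dt_shape(i64::datum_type(), tvec!(1, seq)))?
        .into_optimized()?
        .into_runnable()?;

    // Wrap token ids and attention mask as tensors and run the model.
    let ids: Tensor = tract_ndarray::Array2::from_shape_vec((1, seq), ids)?.into();
    let mask: Tensor = tract_ndarray::Array2::from_shape_vec((1, seq), mask)?.into();
    let outputs = model.run(tvec!(ids.into(), mask.into()))?;

    // The first output holds the two class logits; pick the larger one.
    let logits = outputs[0].to_array_view::<f32>()?;
    Ok(if logits[[0, 1]] > logits[[0, 0]] { "Positive" } else { "Negative" })
}

fn main() -> TractResult<()> {
    // Placeholder ids stand in for a real tokenizer's output.
    let ids = vec![0i64; 16];
    let mask = vec![1i64; 16];
    println!("{}", classify(ids, mask)?);
    Ok(())
}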
[API (Rust, Axum)]
POST /api/analyze # parses NeuroChain DSL and runs tract-onnx inference (CPU)
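A minimal Axum sketch of such an endpoint is shown below; the JSON request and response shapes (a script string in, output and log lines out) and the port are assumptions for illustration, not the actual /api/analyze contract.

// Minimal Axum sketch of a /api/analyze endpoint. The JSON shapes and port
// are assumptions; the real handler lexes, parses and interprets the
// NeuroChain script and runs tract-onnx inference on the CPU.
use axum::{routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct AnalyzeRequest {
    script: String, // NeuroChain DSL source (e.g. the contents of a .nc file)
}

#[derive(Serialize)]
struct AnalyzeResponse {
    output: Vec<String>, // lines produced by `neuro` statements
    logs: Vec<String>,   // engine logs
}

async fn analyze(Json(req): Json<AnalyzeRequest>) -> Json<AnalyzeResponse> {
    // Placeholder body: parse `req.script` and run it through the engine here.
    let _ = req.script;
    Json(AnalyzeResponse { output: vec![], logs: vec![] })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/api/analyze", post(analyze));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

Presumably the WebUI posts the script text to this route and renders the returned output and logs.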
Try the WebUI or add your own script and get moving right away.