NeuroChain DSL: a programming language that builds AI directly into code

Simplicity is beautiful — NeuroChain DSL makes it real. AI operations are part of the code, not hidden layers. Scripts and the CLI are the primary way to work, while the demos are light public test surfaces. Run ONNX models on the CPU and keep full control.

# Sentiment
AI: "models/distilbert-sst2/model.onnx"
set mood from AI: "This product is great!"
if mood == "Positive":
    neuro "👍"

Why NeuroChain DSL?

NeuroChain DSL unifies AI and logic into one language so you can build solutions smartly and simply.

ONNX / CPU

SST2, Toxic, FactCheck, Intent, MacroIntent and Intent Stellar models run on the CPU without heavy infrastructure.
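As a sketch of the same pattern with a second model, following the sentiment example above (the model path and output label here are illustrative, not the verified interface):

# Toxicity check (illustrative path and labels)
AI: "models/toxic/model.onnx"
set verdict from AI: "You are terrible at this."
if verdict == "Toxic":
    neuro "Filtered"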

Local & Offline

No external cloud layers: a smaller attack surface and more control.

AI in the Code

AI built-ins are part of the syntax; there are no hidden layers.

# Intent
set cmd from AI: "Please stop."
if cmd == "StopCommand": neuro "Stopping process"

Lightweight

No heavy environments or unnecessary dependencies: just code and run.

WebUI Demo

Enter NeuroChain DSL commands and run them from your browser. Pick one or many models and see output and logs instantly.

  • Model picker: SST2 / Toxic / FactCheck / Intent / MacroIntent
  • Multi-model runs (one request per selected model)
  • Drag & Drop for .nc files
  • Local API Base URL support (connect to your own server)
  • Download output & keyboard shortcuts

Stellar Demo

NeuroChain parses Stellar/Soroban intents into typed plans with explicit guardrails before submission.

  • Model picker: Intent Stellar
  • Dedicated endpoint: POST /api/stellar/intent-plan
  • Guardrails: allowlist / policy / intent safety (exit codes 3 / 4 / 5)
  • Testnet-first workflow for reviewer-friendly verification
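A sketch of what an Intent Stellar script could look like, reusing the syntax from the examples above (the model path and the plan label are assumptions, not the verified interface):

# Stellar intent (illustrative path and labels)
AI: "models/intent-stellar/model.onnx"
set plan from AI: "Send 5 XLM to the test account on testnet"
if plan == "PaymentIntent": neuro "Plan ready for guardrail checks"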

Architecture

One local-first engine, multiple runtime surfaces, and a guarded Stellar intent layer presented as one coherent stack.

Engine (local-first): NeuroChain Core

  Lexer -> Parser -> Interpreter -> Intent/Template -> Inference

  tract-onnx CPU runtime, no external cloud layers.

Runtime surfaces (shared path): one runtime, many entry points — REPL, .nc scripts, CLI, and the API.

API (Rust / Axum): live endpoints

  • POST /api/analyze: DSL analysis and local inference.
  • POST /api/stellar/intent-plan: typed Stellar planning surface.

Stellar layer (guarded by default): IntentStellar planning

  Typed plans flow through allowlist, contract policy, and intent safety before anything deeper happens.

Setup

  1. Try it online: no install.
  2. Locally: clone from GitHub.
  3. Server: run an Axum-compatible backend if needed; WebUI and Stellar Demo can point to it.

Execution Guarantees

Built to stay legible under load: explicit execution flow, safer defaults, and guardrails before anything deeper happens.

  • Inspectable execution: parser, interpreter, inference, and policy steps stay visible instead of disappearing into hidden orchestration.
  • Explicit runtime path: scripts, CLI, and API all trace back to the same readable engine behavior.
  • Policy-gated execution: allowlist, contract policy, and intent safety are checked before any deeper action path.
  • Testnet-proven rollout: developer flows harden on testnet first, creating a clearer path toward mainnet once behavior and guardrails are stable.

Ready to build?

Try the WebUI or add your own script and get moving right away.