Code that thinks. Software that evolves.
A statically typed, compiled language with embedded AI reasoning, a full standard library, and local LLM integration. Declare intelligent decisions in your source code — the compiler handles the rest.
Clean syntax inspired by Go's simplicity and Rust's safety — with AI reasoning built in.
module main

use std.http

fn main() {
    server := http.new(port: 8080)

    server.get("/") => fn(req) -> Response {
        Response.text("Hello from AID")
    }

    server.get("/health") => fn(req) -> Response {
        Response.json({ status: "ok", language: "AID", version: "0.2.0" })
    }

    server.start()
}
reason classify_ticket(text: string) -> string {
    goal: "Classify a support ticket"
    constraints: [
        "Return one of: billing, technical, general, urgent",
        "Outage or down → always urgent"
    ]
    examples: [
        ("My card was charged twice", "billing"),
        ("Server is down", "urgent")
    ]
    fallback: "general"
}

evolve classify_ticket {
    track: true
    retrain_every: 500
    min_accuracy: 0.95
}
// With Cortex running → uses local LLM
// Without Cortex → falls back to keyword matching
module main
use std.http
use std.db
use std.env
fn main() {
    env.load_dotenv()
    db.connect("sqlite://data.db")
    db.migrate("migrations/")

    server := http.new(port: 8080)

    server.get("/items") => fn(req) -> Response {
        items := db.query("SELECT * FROM items")
        Response.json({ items: items })
    }

    server.start()
}
module main
use std.http
use std.auth
fn main() {
    server := http.new(port: 8080)

    server.post("/login") => fn(req) -> Response {
        claims := { user: req.body.username }  // illustrative claims payload
        token := auth.jwt_sign(claims, "secret")
        Response.json({ token: token })
    }

    server.post("/register") => fn(req) -> Response {
        hash := auth.hash_password(req.body.password)
        Response.json({ hash: hash })
    }

    // Protected route with JWT middleware
    server.get("/admin") => auth.middleware(fn(req) -> Response {
        Response.json({ message: "Welcome, admin" })
    })

    server.start()
}
// Cortex V1: Local AI for reason blocks
// No cloud. No API keys. Runs on your machine.
reason analyze_sentiment(text: string) -> string {
    goal: "Determine sentiment of customer feedback"
    constraints: [
        "Return: positive, negative, or neutral",
        "Consider context and sarcasm"
    ]
    examples: [
        ("Love the new feature!", "positive"),
        ("This is broken again", "negative"),
        ("It works I guess", "neutral")
    ]
    fallback: "neutral"
}
// $ aid cortex pull ← download model
// $ aid cortex serve ← start sidecar
// $ aid build main.aid ← LLM-powered!
Features that don't exist in any other language.
Declare AI-powered decisions in your code. Define the goal, set constraints, provide examples — the compiler generates the logic.
Your code improves itself. Runtime telemetry feeds back into the next build. Every deploy gets smarter.
Stop writing route tables. The compiler analyzes your handlers and builds routes at compile time.
Describe validation rules in English. The compiler generates type-safe validators automatically.
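As a sketch of what this could look like (the `validate` block, the field names, and the rule strings below are illustrative assumptions, not confirmed AID syntax):

```
// Hypothetical example — syntax assumed for illustration.
// Each English rule would be compiled into a type-safe validator.
validate SignupRequest {
    email: "must be a valid email address"
    password: "at least 8 characters, including one digit"
    age: "optional, an integer between 13 and 120"
}
```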
Every build generates complete docs. If the code changes, the docs change. They can never drift.
Compile to WebAssembly. Write once, deploy anywhere — edge, cloud, browser.
Four modules that cover 90% of what APIs need. Built-in, zero config.
Connect, query, execute, migrate. Built on rusqlite with auto column-to-JSON mapping.
db.connect("sqlite://data.db")
db.migrate("migrations/")
items := db.query("SELECT * FROM items")
Environment variables and .env files. Config-driven servers out of the box.
env.load_dotenv()
port := env.get("PORT")
secret := env.require("JWT_SECRET")
JWT tokens, bcrypt passwords, API keys, auth middleware. All generated from simple function calls.
token := auth.jwt_sign(claims, secret)
hash := auth.hash_password("pass")
auth.middleware(protected_handler)
HTML templates with {{variables}}, {{#each}} loops, {{#if}} conditionals. Plus static file serving.
content := html.template("page.html", data)
html.render(content)
html.serve_static("public/")
Your reason blocks, powered by a local LLM. No cloud. No API keys. No data leaves your machine.
$ aid cortex pull
Downloads TinyLlama-1.1B (~670MB)
$ aid cortex serve
Wraps llama.cpp as a local HTTP server
$ aid build main.aid
Reason blocks auto-detect Cortex and use LLM
[model]
path = ".cortex/models/tinyllama.gguf"
temperature = 0.3
max_tokens = 100
[sidecar]
port = 8090
timeout_ms = 5000
[fallback]
enabled = true
Every AID feature in one production-ready application.
// webhook-classifier/main.aid (~300 lines)
module main
use std.http
use std.db
use std.env
use std.auth
use std.html
reason classify_webhook(payload: string) -> string {
    goal: "Classify incoming webhook"
    constraints: [
        "Return one of: payment, alert, deploy, user_event, notification, security, monitoring, other"
    ]
    // ... examples, fallback
}

evolve classify_webhook {
    track: true
    retrain_every: 1000
}
Everything you need to be productive from day one.
aid new: Scaffold a new project in seconds. Choose between api (a full REST API with templates and migrations) or minimal (just the essentials).
Syntax highlighting and autocomplete for .aid files. Install from the VS Code marketplace.
aid.toml manifest, semver versioning, central registry + Git sources. The foundation for aid install.
From source to binary in one command. The Cortex Engine handles AI reasoning at compile time.
Up and running in 60 seconds.
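Putting together the `aid new` and `aid build` commands shown above, a first session might look like this (the project name, the binary name, and the run step are placeholders, not confirmed CLI output):

```shell
$ aid new my-api        # scaffold a project (choose the api or minimal template)
$ cd my-api
$ aid build main.aid    # compile source to a binary
$ ./my-api              # binary name assumed — the examples above serve on :8080
```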
AID isn't replacing these languages — it's adding what they can't do.
| Feature | AID | Go | Rust | Python |
|---|---|---|---|---|
| AI reasoning built-in | ✅ | ❌ | ❌ | ❌ |
| Self-improving code | ✅ | ❌ | ❌ | ❌ |
| Local LLM integration | ✅ | ❌ | ❌ | ❌ |
| Built-in auth (JWT/bcrypt) | ✅ | ❌ | ❌ | ❌ |
| Built-in database | ✅ | ❌ | ❌ | ❌ |
| Auto-documentation | ✅ | ❌ | ✅ | ❌ |
| Type safety | ✅ | ✅ | ✅ | ❌ |
| WASM target | ✅ | 🟡 | ✅ | ❌ |
| HTTP built-in | ✅ | ✅ | ❌ | ❌ |