v0.2.0 — The 3 Pillars

AID

Code that thinks. Software that evolves.

A statically typed, compiled language with embedded AI reasoning, a full standard library, and local LLM integration. Declare intelligent decisions in your source code — the compiler handles the rest.

58 Tests
12 Examples
4 Std Modules
0 Cloud APIs

See it in action

Clean syntax inspired by Go's simplicity and Rust's safety — with AI reasoning built in.

module main

use std.http

fn main() {
    server := http.new(port: 8080)

    server.get("/") => fn(req) -> Response {
        Response.text("Hello from AID")
    }

    server.get("/health") => fn(req) -> Response {
        Response.json({ status: "ok", language: "AID", version: "0.2.0" })
    }

    server.start()
}

reason classify_ticket(text: string) -> string {
    goal: "Classify a support ticket"
    constraints: [
        "Return one of: billing, technical, general, urgent",
        "Outage or down → always urgent"
    ]
    examples: [
        ("My card was charged twice", "billing"),
        ("Server is down", "urgent")
    ]
    fallback: "general"
}

evolve classify_ticket {
    track: true
    retrain_every: 500
    min_accuracy: 0.95
}

// With Cortex running → uses local LLM
// Without Cortex → falls back to keyword matching

module main

use std.http
use std.db
use std.env

fn main() {
    env.load_dotenv()
    db.connect("sqlite://data.db")
    db.migrate("migrations/")

    server := http.new(port: 8080)

    server.get("/items") => fn(req) -> Response {
        items := db.query("SELECT * FROM items")
        Response.json({ items: items })
    }

    server.start()
}

module main

use std.http
use std.auth

fn main() {
    server := http.new(port: 8080)

    server.post("/login") => fn(req) -> Response {
        claims := { user: req.body.username }
        token := auth.jwt_sign(claims, "secret")
        Response.json({ token: token })
    }

    server.post("/register") => fn(req) -> Response {
        hash := auth.hash_password(req.body.password)
        Response.json({ hash: hash })
    }

    // Protected route with JWT middleware
    server.get("/admin") => auth.middleware(fn(req) -> Response {
        Response.json({ message: "Welcome, admin" })
    })

    server.start()
}

// Cortex V1: Local AI for reason blocks
// No cloud. No API keys. Runs on your machine.

reason analyze_sentiment(text: string) -> string {
    goal: "Determine sentiment of customer feedback"
    constraints: [
        "Return: positive, negative, or neutral",
        "Consider context and sarcasm"
    ]
    examples: [
        ("Love the new feature!", "positive"),
        ("This is broken again", "negative"),
        ("It works I guess", "neutral")
    ]
    fallback: "neutral"
}

// $ aid cortex pull    ← download model
// $ aid cortex serve   ← start sidecar
// $ aid build main.aid ← LLM-powered!

Built different

Features you won't find in any mainstream language.

🧠

Reason Blocks

Declare AI-powered decisions in your code. Define the goal, set constraints, provide examples — the compiler generates the logic.

🧬

Evolve

Your code improves itself. Runtime telemetry feeds back into the next build. Every deploy gets smarter.

🎯

Intent Routing

Stop writing route tables. The compiler analyzes your handlers and builds routes at compile time.
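
As a hedged sketch of what this could look like — the name-and-signature inference shown here is an assumption for illustration, not confirmed AID syntax:

// Hypothetical: no route table. The compiler could infer
// GET /users/:id from the handler's name and parameter.
fn get_user(id: int) -> Response {
    Response.json({ id: id, name: "example" })
}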

📜

Contracts

Describe validation rules in English. The compiler generates type-safe validators automatically.
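
A hedged sketch of the idea — the contract block below is illustrative, not confirmed syntax:

// Hypothetical contract: English rules in,
// type-safe validators out at compile time.
contract CreateItem {
    name: "required, between 1 and 100 characters"
    price: "a positive number"
    tags: "optional list of strings"
}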

📄

Auto-Documentation

Every build generates complete docs. If the code changes, the docs change. They can never drift.

WASM Target

Compile to WebAssembly. Write once, deploy anywhere — edge, cloud, browser.

Standard Library

Four modules that cover 90% of what APIs need. Built-in, zero config.

🗄️

std.db

SQLite

Connect, query, execute, migrate. Built on rusqlite with auto column-to-JSON mapping.

db.connect("sqlite://data.db")
db.migrate("migrations/")
items := db.query("SELECT * FROM items")

🌍

std.env

.env files

Environment variables and .env files. Config-driven servers out of the box.

env.load_dotenv()
port := env.get("PORT")
secret := env.require("JWT_SECRET")

🔐

std.auth

JWT + bcrypt

JWT tokens, bcrypt passwords, API keys, auth middleware. All generated from simple function calls.

token := auth.jwt_sign(claims, secret)
hash := auth.hash_password("pass")
auth.middleware(protected_handler)

🌐

std.html

Templates

HTML templates with {{variables}}, {{#each}} loops, {{#if}} conditionals. Plus static file serving.

content := html.template("page.html", data)
html.render(content)
html.serve_static("public/")

🧠 Cortex V1 — Local AI

Your reason blocks, powered by a local LLM. No cloud. No API keys. No data leaves your machine.

1

Pull a model

$ aid cortex pull

Downloads TinyLlama-1.1B (~670MB)

2

Start the sidecar

$ aid cortex serve

Wraps llama.cpp as a local HTTP server

3

Build normally

$ aid build main.aid

Reason blocks auto-detect Cortex and use LLM

🛡️ Always works: If Cortex isn't running, reason blocks fall back to V1 keyword matching. Your code never breaks.
cortex.toml
[model]
path = ".cortex/models/tinyllama.gguf"
temperature = 0.3
max_tokens = 100

[sidecar]
port = 8090
timeout_ms = 5000

[fallback]
enabled = true

🚀 Showcase: Webhook Classifier

Every AID feature in one production-ready application.

🗄️ SQLite storage for webhooks + classifications
🌍 Config via .env (port, DB path, API keys)
🔐 JWT-protected admin + API key ingestion
🌐 Dark-themed dashboard with live stats
🧠 3 reason blocks (classify, priority, source)
🧬 3 evolve blocks tracking accuracy
📜 Contract validation for webhook payloads
🎯 Intent-routed CRUD endpoints
// webhook-classifier/main.aid (~300 lines)
module main

use std.http
use std.db
use std.env
use std.auth
use std.html

reason classify_webhook(payload: string) -> string {
    goal: "Classify incoming webhook"
    constraints: [
        "Return one of: payment, alert, deploy, user_event, notification, security, monitoring, other"
    ]
    // ... examples, fallback
}

evolve classify_webhook {
    track: true
    retrain_every: 1000
}

🛠️ Developer Tools

Everything you need to be productive from day one.

🆕

aid new

Scaffold a new project in seconds. Choose between api (full REST API with templates, migrations) or minimal (just the essentials).

🎨

VS Code Extension

Syntax highlighting and autocomplete for .aid files. Install from the VS Code marketplace.

📦

Package Spec

aid.toml manifest, semver versioning, central registry + Git sources. The foundation for aid install.
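
A minimal aid.toml might look like this — the field names below are a sketch based on the description above, not a confirmed schema:

[package]
name = "my-api"
version = "0.1.0"

[dependencies]
# from the central registry
http_extras = "1.2.0"
# or pulled straight from a Git source (hypothetical URL)
metrics = { git = "https://github.com/example/aid-metrics" }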

How it works

From source to binary in one command. The Cortex Engine handles AI reasoning at compile time.

.aid source → Parser → Cortex Engine → Transpiler → Rust → { Native Binary, WASM Binary, Auto-Docs }

Get started

Up and running in 60 seconds.

$ git clone https://github.com/danilo-telnyx/aid-lang.git
$ cd aid-lang/compiler && cargo build --release
$ ./target/release/aid new my-api
$ cd my-api
$ ../target/release/aid build main.aid
$ ./build/aid-main
# → 🚀 Server running on http://localhost:8080
# Optional: enable local AI
$ ../target/release/aid cortex pull
$ ../target/release/aid cortex serve

How AID compares

AID isn't replacing these languages — it's adding what they can't do.

Feature · AID · Go · Rust · Python
AI reasoning built-in
Self-improving code
Local LLM integration
Built-in auth (JWT/bcrypt)
Built-in database
Auto-documentation
Type safety
WASM target 🟡
HTTP built-in

License

Business Source License 1.1

  • Free for personal use
  • Free for open source projects
  • Free for internal company use
  • Commercial license for revenue-generating products
  • Converts to Apache 2.0 on Feb 19, 2030

Read COMMERCIAL.md →