Charl, a language for AI

An experimental language with native tensors, autograd, and neural networks built in. Designed for modern machine learning.

Quick Example

// Neural network training with automatic differentiation
let X = tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], [4, 2])
let Y = tensor([0.0, 1.0, 1.0, 0.0], [4, 1])

// Initialize parameters with gradient tracking
// Hidden layer: 2 inputs -> 2 units; output layer: 2 units -> 1 output
let W1 = tensor_with_grad([0.5, -0.3, 0.2, 0.4], [2, 2])
let b1 = tensor_with_grad([0.1, -0.1], [2])
let W2 = tensor_with_grad([0.3, -0.2], [2, 1])
let b2 = tensor_with_grad([0.0], [1])

// Create optimizer
let optimizer = adam_create(0.01)

let epoch = 0
while epoch < 100 {
    // Forward pass
    let h1 = nn_sigmoid(nn_linear(X, W1, b1))
    let pred = nn_sigmoid(nn_linear(h1, W2, b2))

    // Compute loss
    let loss = nn_mse_loss(pred, Y)

    // Backward pass - automatic differentiation
    tensor_backward(loss)

    // Update parameters
    let params = [W1, b1, W2, b2]
    let updated = adam_step(optimizer, params)
    W1 = updated[0]
    b1 = updated[1]
    W2 = updated[2]
    b2 = updated[3]

    epoch = epoch + 1
}
Automatic gradient computation and optimization

AI Features Built Into The Language

Charl treats tensors, gradients, and neural networks as first-class language features, not external libraries. This enables better performance, stronger type safety, and a smoother developer experience.

Everything You Need, Built In

// Tensors are a native type
let x = tensor([1.0, 2.0, 3.0], [3])
let y = tensor_add(tensor_mul(x, 2.0), 1.0)

// Gradient tracking for automatic differentiation
let params = tensor_with_grad([2.0, 3.0, 4.0], [3])
let target = tensor([3.0, 5.0, 7.0], [3])
let result = tensor_mul(params, x)
let loss = nn_mse_loss(result, target)

// Backward pass computes all gradients automatically
tensor_backward(loss)
let gradients = tensor_grad(params)

// Built-in optimizers
let optimizer = adam_create(0.001)
let updated = adam_step(optimizer, [params])

// Neural network layers
let input = tensor([0.5, -0.5], [1, 2])
let weights = tensor_with_grad([0.1, 0.2, 0.3, 0.4], [2, 2])
let bias = tensor_with_grad([0.0, 0.0], [2])
let hidden = nn_linear(input, weights, bias)
let activated = nn_relu(hidden)
let output = nn_sigmoid(activated)

Tensors: native type system, built-in operations
Autograd: automatic differentiation, backward pass
Optimizers: SGD, Adam, RMSProp, ready for training

Features

Tensor Operations

Native tensor type with operations: add, mul, matmul, reshape, transpose. Support for multi-dimensional arrays with automatic memory management.

tensor([1.0, 2.0], [2, 1])
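
A slightly longer sketch (tensor, tensor_add, and tensor_mul appear elsewhere on this page; tensor_matmul, tensor_reshape, and tensor_transpose are assumed names for the matmul, reshape, and transpose operations listed above):

// Two 2x2 tensors
let a = tensor([1.0, 2.0, 3.0, 4.0], [2, 2])
let b = tensor([5.0, 6.0, 7.0, 8.0], [2, 2])

let sum = tensor_add(a, b)               // element-wise addition
let scaled = tensor_mul(a, 2.0)          // multiply every element by 2
let product = tensor_matmul(a, b)        // [2, 2] x [2, 2] -> [2, 2] (assumed name)
let column = tensor_reshape(a, [4, 1])   // reshape to a column vector (assumed name)
let flipped = tensor_transpose(a)        // swap the two axes (assumed name)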

Automatic Differentiation

Built-in autograd system with computation graph tracking. Backward pass computes gradients for all parameters automatically.

tensor_backward(loss)
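
A minimal end-to-end sketch using only calls shown on this page:

// Track a parameter, build a small computation, and read its gradient
let w = tensor_with_grad([2.0], [1])
let x = tensor([3.0], [1])
let target = tensor([12.0], [1])

let pred = tensor_mul(w, x)           // forward: pred = w * x
let loss = nn_mse_loss(pred, target)  // scalar loss

tensor_backward(loss)                 // walk the graph and fill in gradients
let dw = tensor_grad(w)               // d(loss)/d(w)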

Neural Network Layers

Built-in layers: Linear, Conv2D, Pooling, BatchNorm, LayerNorm, Dropout. Activations: ReLU, Sigmoid, Tanh, Softmax, GELU.

nn_linear(x, W, b)
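
For example, a small two-layer network built from the calls shown above (nn_linear, nn_relu, nn_sigmoid):

// 2 inputs -> 2 hidden units -> 1 output
let x = tensor([1.0, 0.0], [1, 2])
let W1 = tensor_with_grad([0.5, -0.3, 0.2, 0.4], [2, 2])
let b1 = tensor_with_grad([0.0, 0.0], [2])
let W2 = tensor_with_grad([0.3, -0.2], [2, 1])
let b2 = tensor_with_grad([0.0], [1])

let h = nn_relu(nn_linear(x, W1, b1))        // hidden layer with ReLU
let out = nn_sigmoid(nn_linear(h, W2, b2))   // output layer with sigmoid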

Optimizers

SGD, Adam, and RMSProp optimizers with momentum and adaptive learning rates. Parameter updates with automatic gradient application.

adam_step(opt, params)
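
A single training step, mirroring the Quick Example (only the Adam calls appear on this page, so the sketch sticks to adam_create / adam_step):

// Set up a tiny model and one target
let x = tensor([1.0, 2.0], [1, 2])
let y = tensor([1.0], [1, 1])
let W = tensor_with_grad([0.5, -0.5], [2, 1])
let b = tensor_with_grad([0.0], [1])

let optimizer = adam_create(0.001)            // Adam with learning rate 0.001

let loss = nn_mse_loss(nn_linear(x, W, b), y)
tensor_backward(loss)                         // compute gradients
let updated = adam_step(optimizer, [W, b])    // apply them to every parameter
W = updated[0]
b = updated[1]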

Loss Functions

MSE loss and cross-entropy loss with automatic gradient computation. Both return tensors, so the loss can be backpropagated through the network.

nn_mse_loss(pred, Y)
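
A sketch of the MSE path (the exact name of the cross-entropy builtin is not shown on this page, so it is omitted here):

// The loss is itself a tensor, so it feeds straight into tensor_backward
let X = tensor([0.0, 1.0], [1, 2])
let Y = tensor([1.0], [1, 1])
let W = tensor_with_grad([0.4, -0.4], [2, 1])
let b = tensor_with_grad([0.0], [1])

let loss = nn_mse_loss(nn_sigmoid(nn_linear(X, W, b)), Y)
tensor_backward(loss)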

GPU Support

WGPU backend for GPU acceleration. Transfer tensors between CPU and GPU. Cross-platform support (Vulkan, Metal, DirectX).

tensor_to_gpu(t)
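
A sketch of the CPU/GPU round trip (tensor_to_gpu is shown above; tensor_to_cpu is an assumed name for the transfer back):

let t = tensor([1.0, 2.0, 3.0, 4.0], [2, 2])

let t_gpu = tensor_to_gpu(t)          // upload to the GPU via the WGPU backend
let doubled = tensor_mul(t_gpu, 2.0)  // runs on the GPU
let back = tensor_to_cpu(doubled)     // assumed name: copy the result back to the CPU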

Installation

Linux / macOS

# Download and extract
tar -xzf charl-linux-x86_64.tar.gz

# Move to PATH
sudo mv charl /usr/local/bin/

# Verify installation
charl --version

Windows

# Extract the zip file
# Move charl.exe to your PATH

# Verify installation
charl.exe --version
View all download options →