Tensor Module
Core tensor operations for numerical computing and automatic differentiation.
Tensor::new
Create a new tensor from data.
fn Tensor::new(data: [T], requires_grad: bool = false) -> Tensor
Parameters
data - Array of values to initialize the tensor
requires_grad - Enable gradient tracking (default: false)
Returns
A new Tensor object
Example
let t = Tensor::new([1.0, 2.0, 3.0])
let grad_t = Tensor::new([1.0, 2.0], requires_grad: true)
Tensor::zeros / Tensor::ones
Create tensors filled with zeros or ones.
fn Tensor::zeros(shape: [int32], dtype: Type = float64) -> Tensor
fn Tensor::ones(shape: [int32], dtype: Type = float64) -> Tensor
Example
let zeros = Tensor::zeros([3, 4]) // 3x4 tensor of 0.0
let ones = Tensor::ones([2, 3, 4]) // 2x3x4 tensor of 1.0
let int_zeros = Tensor::zeros([5], int32) // Integer tensor
Tensor::randn / Tensor::rand
Create tensors with random values drawn from a normal or uniform distribution.
fn Tensor::randn(shape: [int32], mean: float64 = 0.0, std: float64 = 1.0) -> Tensor
fn Tensor::rand(shape: [int32], low: float64 = 0.0, high: float64 = 1.0) -> Tensor
Example
let normal = Tensor::randn([100, 100]) // standard normal: mean 0.0, std 1.0
let custom_normal = Tensor::randn([10], 5.0, 2.0) // mean 5.0, std 2.0
let uniform = Tensor::rand([50, 50]) // Uniform(0, 1)
let custom_uniform = Tensor::rand([10], -1.0, 1.0) // Uniform(-1, 1)
Tensor Methods
| Method | Description |
|---|---|
| shape() | Returns the shape of the tensor |
| reshape(shape) | Returns new tensor with specified shape |
| transpose() | Transposes the last two dimensions |
| sum() | Sum of all elements |
| mean() | Mean of all elements |
| max() | Maximum element |
| min() | Minimum element |
| backward() | Compute gradients via backpropagation |
| to(device) | Move tensor to specified device (cpu/gpu) |
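Example
A short sketch combining several of these methods; the values in the comments follow from the definitions above, and the "gpu" device string matches the GPU Operations section below.
let t = Tensor::new([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], requires_grad: true)
print(t.shape()) // [6]
let r = t.reshape([2, 3]) // same elements, new shape
print(r.transpose().shape()) // [3, 2]
let s = r.sum() // 21.0
s.backward() // populates t.grad (all ones for a plain sum)
let t_gpu = t.to("gpu") // move to GPU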
Neural Networks Module
High-level API for building and training neural networks.
NeuralNetwork
Create a simple feedforward neural network.
fn NeuralNetwork(layer_sizes: [int32]) -> Model
Example
// Create a feedforward network with layer sizes 784 -> 128 -> 10
let model = NeuralNetwork([784, 128, 10])
// Add the layers with explicit activations
model.add_layer(Dense(784, 128, activation: "relu"))
model.add_layer(Dense(128, 10, activation: "softmax"))
model.compile(
    optimizer: Adam(learning_rate: 0.001),
    loss: "categorical_crossentropy"
)
Sequential
Sequential model for stacking layers linearly.
fn Sequential(layers: [Layer]) -> Model
Example
let model = Sequential([
    Dense(units: 128, activation: "relu", input_shape: [784]),
    Dropout(rate: 0.2),
    Dense(units: 64, activation: "relu"),
    Dense(units: 10, activation: "softmax")
])
model.compile(
    optimizer: Adam(lr: 0.001),
    loss: "sparse_categorical_crossentropy",
    metrics: ["accuracy"]
)
Model Methods
| Method | Description |
|---|---|
| compile(optimizer, loss, metrics) | Configure model for training |
| train(x, y, epochs, batch_size) | Train the model on data |
| predict(x) | Generate predictions |
| evaluate(x, y) | Evaluate model on test data |
| summary() | Print model architecture |
| save(path) | Save model to file |
| load(path) | Load model from file |
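Example
A minimal end-to-end sketch using the methods above; model is a compiled model as in the previous examples, and x_train, y_train, x_test, y_test, and the save path are placeholders.
model.summary() // print the architecture
model.train(x_train, y_train, epochs: 10, batch_size: 32)
let metrics = model.evaluate(x_test, y_test)
let predictions = model.predict(x_test)
model.save("mnist_model") // path is illustrative
model.load("mnist_model") // restore the saved weights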
Autograd Module
Automatic differentiation for gradient computation.
Gradient Computation
Compute gradients automatically using the computational graph.
// Enable gradient tracking
let x = Tensor::new([2.0, 3.0], requires_grad: true)
let y = Tensor::new([4.0, 5.0], requires_grad: true)
// Forward pass - builds computational graph
let z = x * y + x.pow(2)
let loss = z.sum()
// Backward pass - compute gradients
loss.backward()
// Access gradients
print(x.grad) // dL/dx
print(y.grad) // dL/dy
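Since z = x * y + x^2 and loss = z.sum(), the gradients for this example can be checked by hand:
// dL/dx = y + 2x = [4.0 + 4.0, 5.0 + 6.0] = [8.0, 11.0]
// dL/dy = x = [2.0, 3.0]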
Custom Gradients
Define custom backward passes for new operations.
// Custom autograd function
struct MyFunction {
    fn forward(ctx: Context, x: Tensor) -> Tensor {
        ctx.save_for_backward([x]) // keep x for the backward pass
        return x * x
    }
    fn backward(ctx: Context, grad_output: Tensor) -> Tensor {
        let saved = ctx.saved_tensors()
        let x = saved[0]
        return grad_output * 2 * x // d(x^2)/dx = 2x
    }
}
// Use custom function
let x = Tensor::new([1.0, 2.0, 3.0], requires_grad: true)
let y = MyFunction::apply(x)
y.sum().backward()
Optimizers Module
Optimization algorithms for training neural networks.
| Optimizer | Signature | Description |
|---|---|---|
| SGD | SGD(params, lr, momentum) | Stochastic Gradient Descent |
| Adam | Adam(params, lr, betas, eps) | Adaptive Moment Estimation |
| AdamW | AdamW(params, lr, weight_decay) | Adam with weight decay |
| RMSprop | RMSprop(params, lr, alpha) | Root Mean Square Propagation |
Optimizer Usage
// Initialize optimizer
let optimizer = Adam(
    params: model.parameters(),
    learning_rate: 0.001,
    betas: (0.9, 0.999),
    eps: 1e-8
)
// Training loop
for epoch in range(100) {
    // Forward pass
    let predictions = model(x_train)
    let loss = cross_entropy_loss(predictions, y_train)
    // Backward pass
    loss.backward()
    // Update parameters
    optimizer.step()
    // Clear gradients
    optimizer.zero_grad()
}
Layers Module
Neural network layer components.
| Layer | Signature | Description |
|---|---|---|
| Dense | Dense(in, out, activation) | Fully connected layer |
| Conv2D | Conv2D(filters, kernel, activation) | 2D convolution layer |
| MaxPooling2D | MaxPooling2D(pool_size, stride) | 2D max pooling |
| Dropout | Dropout(rate) | Dropout regularization |
| BatchNorm | BatchNorm(features, momentum) | Batch normalization |
| LSTM | LSTM(input_size, hidden_size) | Long Short-Term Memory |
| Attention | Attention(dim, num_heads) | Multi-head attention |
| Flatten | Flatten() | Flatten input to 1D |
Layer Examples
// Dense (fully connected) layer
let dense = Dense(
    units: 128,
    activation: "relu",
    input_shape: [784]
)
// Convolutional layer
let conv = Conv2D(
    filters: 32,
    kernel_size: 3,
    activation: "relu",
    padding: "same"
)
// LSTM layer
let lstm = LSTM(
    input_size: 128,
    hidden_size: 256,
    num_layers: 2,
    bidirectional: true
)
// Multi-head attention
let attention = Attention(
    dim: 512,
    num_heads: 8,
    dropout: 0.1
)
GPU Operations Module
GPU-accelerated tensor operations.
Device Management
// Check GPU availability
if gpu::is_available() {
    print("GPU is available")
    print("GPU count:", gpu::device_count())
}
// Create tensor on GPU
let x = Tensor::randn([1000, 1000], device: "gpu")
let y = Tensor::randn([1000, 1000], device: "gpu")
// Operations automatically run on GPU
let z = x @ y // Matrix multiplication on GPU
// Move tensor between devices
let cpu_tensor = x.to("cpu")
let gpu_tensor = cpu_tensor.to("gpu")
Performance Tips
Best Practice: Keep data on GPU between operations to avoid transfer overhead.
Note: Large batch sizes improve GPU utilization.
Tip: Use mixed precision training for faster computation and lower memory usage.
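A sketch of the first tip, assuming only the device API shown above: transfer data to the GPU once, keep intermediate results there, and move only the final result back.
// Allocate inputs directly on the GPU
let a = Tensor::randn([4096, 4096], device: "gpu")
let b = Tensor::randn([4096, 4096], device: "gpu")
// The whole chain of operations stays on the GPU - no intermediate transfers
let c = a @ b
let d = (c + a).sum()
// Transfer only the final scalar back to the CPU
let result = d.to("cpu")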
Knowledge Graphs Module
Knowledge representation and graph embeddings.
KnowledgeGraph
Create and manipulate knowledge graphs.
// Create knowledge graph
let kg = KnowledgeGraph()
// Add entities
kg.add_entity("Paris", type: "City")
kg.add_entity("France", type: "Country")
kg.add_entity("Europe", type: "Continent")
// Add relations
kg.add_relation("Paris", "capital_of", "France")
kg.add_relation("France", "located_in", "Europe")
// Query
let capital = kg.query("?x capital_of France")
print(capital) // ["Paris"]
// Get all relations for entity
let paris_relations = kg.get_relations("Paris")
// Embed knowledge graph
let embeddings = kg.embed(
    method: "TransE",
    dim: 128,
    epochs: 100
)
Embedding Methods
| Method | Description |
|---|---|
| TransE | Translating embeddings model |
| DistMult | Bilinear diagonal model |
| ComplEx | Complex embeddings |
| RotatE | Rotation-based embeddings |
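Any of these methods can be selected through the method parameter of kg.embed shown earlier; a minimal sketch assuming the same call signature:
// Same graph, different embedding model
let rotate_embeddings = kg.embed(
    method: "RotatE",
    dim: 256,
    epochs: 200
)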
Reasoning Module
Symbolic and neural reasoning systems.
ChainOfThought
Multi-step reasoning with language models.
// Create chain-of-thought reasoner
let cot = ChainOfThought(
    model: language_model,
    max_steps: 10,
    temperature: 0.7
)
// Solve problem with reasoning steps
let problem = "What is 15% of 240?"
let solution = cot.solve(problem)
// Access reasoning steps
for step in solution.steps {
    print("Step:", step.description)
    print("Result:", step.result)
}
print("Final answer:", solution.answer)
LogicReasoner
First-order logic reasoning.
// Create logic reasoner
let reasoner = LogicReasoner()
// Add facts
reasoner.add_fact("human(socrates)")
// Add rules
reasoner.add_rule("forall X: human(X) -> mortal(X)")
// Query - mortal(socrates) is derived from the fact and the rule
let is_mortal = reasoner.query("mortal(socrates)")
print(is_mortal) // true
// Infer new facts
let inferred = reasoner.infer_all()
for fact in inferred {
    print("Inferred:", fact)
}
Quick Reference
Common Imports
// Tensor operations
use charl::tensor::Tensor
// Neural networks
use charl::nn::{Sequential, Dense, Conv2D, LSTM}
// Optimizers
use charl::optim::{SGD, Adam, AdamW}
// Autograd
use charl::autograd::{backward, grad}
// Knowledge graphs
use charl::knowledge::KnowledgeGraph
// Reasoning
use charl::reasoning::{ChainOfThought, LogicReasoner}