Gimle Papers Preprint

Asgard: A Compiler for Dynamical Systems

A framework that compiles mathematical equations into typed compositional circuits and executes them via JAX. Unlike implicit approaches that learn black-box dynamics, Asgard maintains explicit mathematical structure throughout — enabling formal composition, algebraic rewriting, and interchangeable semantics.

Read the full paper (PDF) | Asgard website

Implicit vs Explicit

Two approaches to learning dynamical systems — and the trade-off between them

Implicit modeling

Neural ODEs, PINNs, and their extensions

A neural network approximates the dynamics as a black-box function $\dot{x} = f_\theta(x)$. Powerful because it sidesteps structural identification in favour of function approximation.

But an implicit model cannot be inspected to understand why a system behaves as it does. Two learned subsystems cannot be composed through typed interfaces. The standard tools of control theory — Lyapunov analysis, reachability, formal verification — do not apply.

Flexible & scalable  /  Black box — no understanding

Explicit modeling

The traditional approach

You observe the phenomenon, gather data, and with expert knowledge define a set of governing equations. Classical methods (SINDy, symbolic regression) recover explicit structure, but require hand-crafted function libraries, struggle with hybrid or stochastic systems, and scale poorly.

Interpretable, composable, verifiable — but it doesn't scale to complexity.

Interpretable & trustworthy  /  Doesn't scale

Asgard takes a different approach.

Treat the mathematical structure of a dynamical system as the primary object of computation. Equations are parsed, compiled into a typed intermediate representation — circuits — and executed under a choice of semantics. The structure remains visible and manipulable at every stage.

The Pipeline

From equation to typed circuit to executable JAX program

Parse (LEAN-style notation) → Rewrite (symbolic identities) → Compile (typed circuits) → Execute (JAX program)
1. Parse

Equations are written in a LEAN-inspired notation. For example, f = f0 + int(f, x) is the ODE $f' = f$ with $f(0) = f_0$. The syntax supports integration, differentiation, stochastic processes, lambda abstractions, and named parameters. The parser produces a structured tree — no numerical meaning is assigned yet.
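As a rough illustration of what this stage produces (the node names below are hypothetical, not Asgard's API), the right-hand side of f = f0 + int(f, x) might parse to a tree like:

```python
# Hypothetical AST sketch; node names are illustrative, not Asgard's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Int:    # int(f, x): formal integration of a subtree
    body: object
    wrt: str

@dataclass(frozen=True)
class Add:
    left: object
    right: object

# f = f0 + int(f, x): a pure tree, no numerical meaning assigned yet
rhs = Add(Var("f0"), Int(Var("f"), wrt="x"))
```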

2. Rewrite

Symbolic identities simplify the tree before compilation. For example, the fundamental theorem of calculus: diff(int(f, x), x) → f. This reduces equations at the symbolic level, before any numerics are involved.
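Continuing the hypothetical AST sketch above, such an identity can be applied as a bottom-up pattern match:

```python
# Hypothetical rewrite pass applying the fundamental theorem of calculus.
@dataclass(frozen=True)
class Diff:   # diff(f, x): formal differentiation of a subtree
    body: object
    wrt: str

def rewrite(node):
    if isinstance(node, Diff):
        body = rewrite(node.body)
        if isinstance(body, Int) and body.wrt == node.wrt:
            return body.body                  # diff(int(f, x), x) -> f
        return Diff(body, node.wrt)
    if isinstance(node, Add):
        return Add(rewrite(node.left), rewrite(node.right))
    if isinstance(node, Int):
        return Int(rewrite(node.body), node.wrt)
    return node                               # leaves such as Var

assert rewrite(Diff(Int(Var("f"), "x"), "x")) == Var("f")
```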

3. Compile

The equation becomes a typed signal-flow circuit built from atomic operations (constants, integration, addition, multiplication) and categorical combinators (composition, parallel, feedback). Every wire carries a signal, every node has typed input and output degrees, and the type system guarantees that connections are valid.

4. Execute

A calculus is selected — real, stochastic, or discrete — and the circuit compiles to a JAX program. The same circuit produces different results under different calculi, without any structural change. All execution paths are fully differentiable.

Circuits: A Typed Intermediate Representation

Typed signal-flow networks built from atomics and combinators

Atomic operations

Computational primitives with fixed type signatures $(d_\text{in}, d_\text{out})$:

Operation             Type    Meaning
const(c)              (0, 1)  Constant stream
param(n)              (0, 1)  Named parameter stream
sin, cos, exp         (0, 1)  Transcendental coefficient streams
id                    (1, 1)  Identity (pass-through)
register(x)           (1, 1)  Formal integration
deregister(x)         (1, 1)  Formal differentiation
scalar(s)             (1, 1)  Multiply by a scalar
add                   (2, 1)  Pointwise addition
multiplication        (2, 1)  Cauchy product
var(n)                (1, 1)  Variable reference (replaced by id during trace wiring)
split                 (1, 2)  Duplicate a signal
swap                  (2, 2)  Exchange two signals
terminal              (1, 0)  Discard a signal
black_box_source(n)   (0, 1)  External callable with no input
stochastic_register   (1, 1)  Wiener integral $\int \cdot \, dW$
jump_register         (1, 1)  Poisson integral $\int \cdot \, dN$
black_box(n)          (1, 1)  External callable

Combinators

Wiring rules that compose atomics into larger circuits:

Composition $f \circ g$

Feeds the output of $f$ into $g$. Requires $d_\text{out}(f) = d_\text{in}(g)$.

Monoidal product $f \otimes g$

Runs $f$ and $g$ in parallel. Degrees add: $d_\text{in} = d_\text{in}(f) + d_\text{in}(g)$.

Trace

Feeds the last output back to the first input, forming a feedback loop (a fixed point). Input and output degrees each decrease by one.

Power series composition

Computes $f(g(x))$ in coefficient space — the power series analogue of function composition.
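A minimal sketch of the degree bookkeeping these combinators imply, assuming circuits are typed only by their degree pair (the function names are ours):

```python
# Circuit types as (d_in, d_out) pairs; combinators check and combine them.
def comp(f, g):
    assert f[1] == g[0], "d_out(f) must equal d_in(g)"
    return (f[0], g[1])                    # sequential composition

def mon(f, g):
    return (f[0] + g[0], f[1] + g[1])      # parallel product: degrees add

def trace(f):
    assert f[0] >= 1 and f[1] >= 1         # needs a wire to feed back
    return (f[0] - 1, f[1] - 1)            # both degrees drop by one

# Atomic signatures from the table above:
register, add_op, split = (1, 1), (2, 1), (1, 2)
assert trace(comp(comp(mon(register, register), add_op), split)) == (1, 1)
```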

Example — compiling $f' = f$

The equation $f = f_0 + \int f \, dx$ compiles to:

$$\text{trace}\big(\text{comp}(\text{mon}(\text{var}(f_0),\; \text{comp}(\text{id},\; \text{reg}(x))),\; \text{add})\big)$$
[Circuit diagram: var(f₀) in parallel with register(x), joined by add; the summed output feeds back through trace.]

The feedback variable $f$ enters via the trace, passes through register (integration), runs in parallel with $f_0$, and the two signals are summed. The output feeds back.

Categorical Structure

A proof system for circuit equivalence, not just a type system

These circuits form a traced monoidal category. The categorical axioms — associativity of composition, the interchange law for the monoidal product, naturality and vanishing for trace — are not merely theoretical properties. They constitute a proof system for circuit equivalence. Two circuits connected by a chain of rewrites compute the same function.

Identity law

$\text{comp}(\text{id},\; f) = f$

Scalar fusion

$\text{comp}(\text{scalar}(a),\; \text{scalar}(b)) = \text{scalar}(ab)$

Cancellation

$\text{comp}(\text{register},\; \text{deregister}) = \text{id}$

Associativity

$\text{comp}(f,\; \text{comp}(g,\; h)) = \text{comp}(\text{comp}(f,\; g),\; h)$
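A minimal sketch (ours, not Asgard's rewriter) of how such laws can drive a rewriting pass, representing circuits as nested tuples like ("comp", ("scalar", 2.0), ("scalar", 3.0)):

```python
def simplify(t):
    """One bottom-up pass over a tuple-encoded circuit; iterate to a fixed point."""
    if not isinstance(t, tuple):
        return t
    t = tuple(simplify(s) for s in t)
    if t[0] == "comp":
        _, f, g = t
        if f == ("id",):
            return g                              # identity law
        if g == ("id",):
            return f
        if f[0] == "scalar" and g[0] == "scalar":
            return ("scalar", f[1] * g[1])        # scalar fusion
        if (f, g) == (("register",), ("deregister",)):
            return ("id",)                        # cancellation
    return t

t = ("comp", ("comp", ("scalar", 2.0), ("scalar", 3.0)), ("id",))
assert simplify(t) == ("scalar", 6.0)
```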

Simplification

Reduce circuits to simpler equivalent forms. Semantics-preserving rewrites eliminate redundant operations, fuse scalars, and cancel inverse operations.

Equivalence proofs

Show that two differently structured circuits compute the same function, without running them — by exhibiting a rewrite path between them.

Approximation

Rewrite a circuit into a structurally simpler form that is approximately equivalent — truncating higher-order terms or replacing subnetworks with known expressions.

Formal composition

Build complex models by wiring simpler circuits. The type system guarantees that input/output degrees match at every connection point.

Closed-form solutions

A trace-free circuit is a direct computation with no feedback. Finding a closed-form solution is equivalent to finding a rewrite that eliminates the trace.

Structure search

The rewrite system defines a search space over circuit topologies. Each move is a valid, typed rewrite — enabling beam search, genetic algorithms, or RL over the space of models.

Closed-form solutions as trace elimination

A circuit containing trace represents a system defined by a fixed-point equation — an ODE, a recurrence, a feedback loop. A trace-free circuit is a direct computation: it maps inputs to outputs without iteration. We conjecture that a closed-form solution exists if and only if there is a valid rewrite from the traced circuit to a trace-free equivalent — reframing "does this ODE have a closed-form solution?" as a question about the rewrite theory of traced monoidal categories.

One Circuit, Many Meanings

A compiled circuit is semantics-free — select a calculus to obtain a result

Real

register → ∫ dt

Streams are Taylor coefficients. Solves deterministic ODEs via Picard iteration in coefficient space. Produces the full analytic structure rather than trajectories at discrete time points.

Stochastic

register → ∫ dW, ∫ dN

Multi-index expansion with Monte Carlo sampling. Supports Itô, Stratonovich, and jump-diffusion processes (Merton, Kou, Bates).

Discrete

register → unit delay

Streams are sequence terms. The same circuit structure that solves an ODE can generate the Fibonacci sequence. Multiplication becomes convolution.

Develop deterministically, execute stochastically.

The same circuit that solves $f' = f$ under the real calculus (producing Taylor coefficients $c_n = 1/n!$ for $e^x$) can, under the discrete calculus, generate a recurrence. You can develop and validate a model's structure deterministically, then switch to stochastic execution to study noise effects — without changing the model itself.

Execution Model

Multiple execution strategies, all fully differentiable — including through feedback loops

Stream calculus

For deterministic ODEs

Computes Taylor coefficients via Picard iteration in coefficient space. JIT-compilable as a single JAX loop. Produces the full analytic structure — all coefficients — rather than trajectories at discrete time points. Adaptive truncation order detects when coefficients become negligible and stops early.
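A minimal sketch (ours, not Asgard's stream engine) of Picard iteration in coefficient space for $f = f_0 + \int f \, dx$; each sweep pins down one more Taylor coefficient:

```python
import jax.numpy as jnp
from jax import lax

def integrate(c):
    """Formal integration of Taylor coefficients: (int f)_k = c_{k-1} / k."""
    k = jnp.arange(1, c.shape[0], dtype=c.dtype)
    return jnp.concatenate([jnp.zeros(1, c.dtype), c[:-1] / k])

def picard(f0=1.0, order=10):
    """Iterate c <- f0 + int(c); sweep n fixes coefficient n."""
    step = lambda _, c: integrate(c).at[0].add(f0)
    return lax.fori_loop(0, order, step, jnp.zeros(order))

c = picard()   # c[n] == 1/n!, the Taylor coefficients of e^x
```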

Stochastic Taylor expansion

Stream calculus meets Monte Carlo

Deterministic coefficients $c_\alpha$ are computed once via Picard iteration over a multi-index tree. Each Monte Carlo path then samples iterated integrals $I_\alpha$ and evaluates the sum. Because coefficients are shared across all paths, this is efficient for large ensembles.

The multi-index tree encodes iterated integrals against $dt$, $dW$ (Wiener), and $dN$ (Poisson) — so jump-diffusion processes are handled natively. Antithetic variates provide up to 2x variance reduction. Convergence tiling handles long time horizons.
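As a rough illustration of the sampling side (ours, not the multi-index machinery itself), here is a truncated Itô-Taylor expansion of geometric Brownian motion with antithetic variates; the coefficients are fixed once and each path only evaluates the iterated integrals $I_{(0)} = t$, $I_{(1)} = W_t$, $I_{(1,1)} = (W_t^2 - t)/2$:

```python
import jax
import jax.numpy as jnp

def gbm_taylor(S0, mu, sigma, t, W):
    """Ito-Taylor expansion of dS = mu*S dt + sigma*S dW, truncated at order 1.0."""
    return S0 * (1.0 + mu * t + sigma * W + 0.5 * sigma**2 * (W**2 - t))

t = 0.25
W = jnp.sqrt(t) * jax.random.normal(jax.random.PRNGKey(0), (50_000,))
paths = gbm_taylor(1.0, 0.05, 0.2, t, jnp.concatenate([W, -W]))  # antithetic pairs
estimate = paths.mean()   # approximates E[S_t] = S0 * exp(mu * t)
```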

Time-stepping integrators

Classical and stochastic schemes

The same circuit can be run under different numerical schemes without any structural change. Both Itô and Stratonovich interpretations are available — the Stratonovich variants use Heun predictor-corrector.

Method           Order  Notes
Deterministic
RK4              4.0    Classical fixed-step, four evaluations per step
RK45             5.0    Adaptive step size via an embedded 4th/5th-order pair
Stochastic
Euler-Maruyama   0.5    Default for SDEs, single evaluation per step
Milstein         1.0    Uses JAX autodiff for $\partial\sigma/\partial x$
SRK1 (Platen)    1.0    Derivative-free, works with non-differentiable diffusions
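A minimal sketch (ours) of the Euler-Maruyama scheme from the table, applied to geometric Brownian motion $dX = \mu X\,dt + \sigma X\,dW$:

```python
import jax
import jax.numpy as jnp

def euler_maruyama(key, x0, mu, sigma, dt, n_steps):
    """Fixed-step Euler-Maruyama: one drift and one diffusion evaluation per step."""
    dW = jnp.sqrt(dt) * jax.random.normal(key, (n_steps,))
    def step(x, dw):
        x_next = x + mu * x * dt + sigma * x * dw
        return x_next, x_next
    _, path = jax.lax.scan(step, x0, dW)
    return path

path = euler_maruyama(jax.random.PRNGKey(0), 1.0, 0.05, 0.2, 1e-3, 1_000)
```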

Differentiability through feedback

Because all execution paths compile to JAX, circuits are fully differentiable — including through trace (feedback loops). At a converged fixed point $x^* = F(x^*, \theta)$, gradients are computed analytically via the implicit function theorem: $\partial x^* / \partial \theta = (I - \partial F / \partial x)^{-1} \, \partial F / \partial \theta$, implemented as a jax.custom_vjp. This enables gradient-based parameter identification and end-to-end optimisation of hybrid models.
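In the scalar case the formula reduces to $\partial x^*/\partial\theta = \partial_\theta F \,/\, (1 - \partial_x F)$, which can be wired into jax.custom_vjp as follows (a minimal sketch, ours; fixed_point is a hypothetical name, not Asgard's API):

```python
import jax
import jax.numpy as jnp
from functools import partial

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def fixed_point(F, theta, x0):
    """Iterate x <- F(x, theta) to convergence."""
    return jax.lax.while_loop(
        lambda x: jnp.abs(F(x, theta) - x) > 1e-10,
        lambda x: F(x, theta),
        x0,
    )

def fwd(F, theta, x0):
    x_star = fixed_point(F, theta, x0)
    return x_star, (theta, x_star)

def bwd(F, res, g):
    theta, x_star = res
    dF_dx = jax.grad(F, argnums=0)(x_star, theta)   # scalar IFT at the fixed point
    dF_dth = jax.grad(F, argnums=1)(x_star, theta)
    return (g * dF_dth / (1.0 - dF_dx), jnp.zeros_like(x_star))  # no grad w.r.t. x0

fixed_point.defvjp(fwd, bwd)

F = lambda x, theta: jnp.cos(theta * x)             # x* solves x = cos(theta * x)
dx_dtheta = jax.grad(lambda th: fixed_point(F, th, 1.0))(1.0)
```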

Hybrid Systems

Combining known physics with learned components

Not every part of a dynamical system has a known analytic form. A chemical reaction rate, a turbulence closure, or a control policy may be best represented by a learned function.

Asgard supports this via the black_box atomic — a typed slot in the circuit that applies an external function (neural network, lookup table, any callable) to its input, with gradients flowing through automatically.

The explicit structure captures what is known; the black box captures what is not. Crucially, the overall circuit remains typed and composable — the black box participates in the same categorical algebra as any other atomic.
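A minimal sketch (ours) of the idea: a hand-rolled two-layer network standing in for the black_box slot inside an otherwise explicit vector field, with gradients flowing end to end:

```python
import jax
import jax.numpy as jnp

def black_box(params, x):
    """Learned residual: a tiny MLP standing in for the unknown physics."""
    w1, b1, w2, b2 = params
    return w2 @ jnp.tanh(w1 * x + b1) + b2

def hybrid_rhs(params, x):
    return -0.5 * x + black_box(params, x)   # known damping + learned residual

def rollout(params, x0, dt=0.01, n=100):
    step = lambda x, _: ((x + dt * hybrid_rhs(params, x)),) * 2
    _, xs = jax.lax.scan(step, x0, None, length=n)
    return xs

params = (0.1 * jnp.ones(8), jnp.zeros(8), 0.1 * jnp.ones(8), jnp.zeros(()))
loss = lambda p: jnp.sum(rollout(p, 1.0) ** 2)
grads = jax.grad(loss)(params)               # end-to-end differentiation
```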

Fit parameters

Of a known model structure — inverse problems solved by gradient descent through the full circuit

Train neural networks

Embedded within an otherwise explicit circuit — hybrid physics-ML with end-to-end differentiation

Search for structure

Using the rewrite system as a neighbourhood operator — beam search, genetic algorithms, or RL over circuit topologies

Coupled systems

Multiple interacting equations, each its own circuit. Correlated Brownian motions, jump processes, and bidirectional coupling via simultaneous Picard iteration.

Explicit where possible, learned where necessary.

The optimizer sees the entire circuit as a differentiable function from parameters to predictions, regardless of internal complexity. Known physics provides inductive bias; learned components fill in the gaps.

From Equation to Circuit to Result

Three examples through the full pipeline — illustrating the syntax-semantics separation

Example 1 — Deterministic ODE

The exponential: $f' = f,\; f(0) = 1$

Syntax. f = f0 + int(f, x)

Circuit. trace(comp(mon(var(f0), comp(id, register(x))), add))

Real calculus. Picard iteration yields $c_n = 1/n!$ — the Taylor coefficients of $e^x$, exact within the convergence radius.

Discrete calculus. The same circuit computes a discrete recurrence. The structure is unchanged; the meaning depends on the calculus.

Example 2 — Stochastic Process

Geometric Brownian motion

Syntax. Y = sde($mu * X, $sigma * X, X)

Circuit. Contains register (drift $\int \mu X\,dt$) and stochastic_register (diffusion $\int \sigma X\,dW$), wired with trace for feedback.

Execution. Stream calculus, Milstein (strong order 1.0), Euler-Maruyama, or Stratonovich — same circuit, different semantics. Both Itô and Stratonovich interpretations available.

Example 3 — Discrete Recurrence

Fibonacci: $f_n = f_{n-1} + f_{n-2}$

Circuit. Uses trace for feedback, register for the unit delay, and multiplication for sequence convolution with recurrence coefficients $\sigma = (1, 1)$.

Discrete calculus. Produces $0, 1, 1, 2, 3, 5, 8, 13, \ldots$ — note the structural parallel with the exponential: both use trace and register, but the calculus determines continuous function vs discrete sequence.
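A minimal sketch (ours, not the circuit machinery itself) of the discrete reading: with register as a unit delay, the traced circuit unrolls to the recurrence.

```python
import jax.numpy as jnp
from jax import lax

def fibonacci(n):
    """Two delay registers hold (f_{k-2}, f_{k-1}); the sum feeds back."""
    def step(carry, _):
        a, b = carry
        return (b, a + b), a
    _, seq = lax.scan(step, (jnp.array(0), jnp.array(1)), None, length=n)
    return seq

fibonacci(8)   # [0, 1, 1, 2, 3, 5, 8, 13]
```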

The Bet

Implicit models give you predictions.
Explicit structure gives you predictions and the ability to reason about why.

If you can recover the structure, you get understanding for free. Asgard treats the mathematical structure of dynamical systems as a first-class computational object — preserving interpretability, composability, and verifiability while supporting automatic differentiation, GPU acceleration, and stochastic simulation.

Combined with Hugin (agentic reasoning) and Mimir (foundation model), Asgard provides the formal computing backbone of the Gimle stack.

Read the full paper (PDF) | Asgard website