Asgard: A Compiler for Dynamical Systems

This is a condensed version of the full paper. The complete version with formal definitions and proofs is available on arXiv (coming soon). You can also find more information about Asgard on the Asgard website.

The dominant approach to learning dynamical systems is implicit: a neural network approximates the dynamics as a black-box function $\dot{x} = f_\theta(x)$. Neural ODEs, PINNs, and their extensions are powerful precisely because they sidestep structural identification in favour of function approximation.

But this flexibility comes at a cost. An implicit model cannot be inspected to understand why a system behaves as it does. Two learned subsystems cannot be composed through typed interfaces. The standard tools of control theory — Lyapunov analysis, reachability, formal verification — do not apply to black-box neural dynamics.

Asgard takes a different approach: treat the mathematical structure of a dynamical system as the primary object of computation.

The pipeline

Equations are parsed, compiled into a typed intermediate representation — circuits — and executed under a choice of semantics. The structure remains visible and manipulable at every stage.

1. Parse

Equations are written in a Lean-inspired notation. `f = f0 + int(f, x)` is the ODE $f' = f$ with $f(0) = f_0$. The parser produces a structured tree — no numerical meaning is assigned yet.
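As a rough sketch of what "a structured tree with no numerics" means, the equation can be represented as nested tuples (illustrative only — these constructors are not Asgard's actual data structures):

```python
# Hypothetical tuple-based AST for the parse step; no numerical meaning yet.
Var = lambda name: ("var", name)
Int = lambda body, var: ("int", body, var)
Add = lambda left, right: ("add", left, right)

# f = f0 + int(f, x)
equation = ("eq", Var("f"), Add(Var("f0"), Int(Var("f"), Var("x"))))
```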

2. Rewrite

Symbolic identities simplify the tree before compilation. For example, the fundamental theorem of calculus: `diff(int(f, x), x) → f`.
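A rewrite pass of this kind can be sketched as a bottom-up tree traversal over the tuple AST (again illustrative, not Asgard's API):

```python
# Minimal sketch of a symbolic rewrite pass over a tuple-based AST.
# Nodes look like ("diff", body, var) and ("int", body, var).
def rewrite(tree):
    """Apply the FTC rule diff(int(f, x), x) -> f, bottom-up."""
    if not isinstance(tree, tuple):
        return tree
    tree = tuple(rewrite(t) for t in tree)     # rewrite children first
    if (tree[0] == "diff" and isinstance(tree[1], tuple)
            and tree[1][0] == "int" and tree[2] == tree[1][2]):
        return tree[1][1]                      # diff(int(f, x), x) -> f
    return tree

# diff(int(f, x), x) simplifies to f
assert rewrite(("diff", ("int", "f", "x"), "x")) == "f"
```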

3. Compile

The equation becomes a typed signal-flow circuit built from atomic operations (constants, integration, addition, multiplication) and categorical combinators (composition, parallel, feedback).

4. Execute

A calculus is selected — real, stochastic, or discrete — and the circuit compiles to a JAX program. The same circuit produces different results under different calculi, without any structural change.

Circuits: a typed intermediate representation

Compilation transforms an equation into a circuit — a typed signal-flow network. Every wire carries a signal, every node has typed input and output degrees, and the type system guarantees that connections are valid.

Circuits are built from two kinds of components:

Atomic operations — computational primitives with fixed type signatures:

| Operation | Type | Meaning |
|---|---|---|
| `const(c)` | (0, 1) | Constant stream |
| `register(x)` | (1, 1) | Formal integration |
| `deregister(x)` | (1, 1) | Formal differentiation |
| `add` | (2, 1) | Pointwise addition |
| `multiplication` | (2, 1) | Cauchy product |
| `split` | (1, 2) | Duplicate a signal |
| `swap` | (2, 2) | Exchange two signals |
| `black_box(n)` | (1, 1) | External callable (e.g. neural net) |
| `stochastic_register` | (1, 1) | Wiener integral $\int \cdot \, dW$ |
| `jump_register` | (1, 1) | Poisson integral $\int \cdot \, dN$ |

Combinators — wiring rules that compose atomics: composition (sequential), monoidal (parallel), and trace (feedback).

For example, $f' = f$ with $f(0) = 1$ compiles to:

$$\text{trace}\big(\text{composition}(\text{monoidal}(\text{var}(f_0),\; \text{composition}(\text{id},\; \text{register}(x))),\; \text{add})\big)$$

The feedback variable $f$ enters via the trace. It passes through `register` (integration), runs in parallel with the initial condition $f_0$, and the two signals are summed by `add`. The output feeds back, implementing the recursive definition.
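The type discipline here amounts to simple degree bookkeeping. A minimal sketch (illustrative, not Asgard's API) checks that the expression above wires up:

```python
# Each component is an (in-degree, out-degree) pair; combinators check
# and combine degrees the way the type system does.
def composition(a, b):        # sequential: a's outputs feed b's inputs
    assert a[1] == b[0], "type mismatch"
    return (a[0], b[1])

def monoidal(a, b):           # parallel: degrees add
    return (a[0] + b[0], a[1] + b[1])

def trace(a):                 # feedback: close one output onto one input
    return (a[0] - 1, a[1] - 1)

var_f0, ident, register, add = (0, 1), (1, 1), (1, 1), (2, 1)

inner = composition(monoidal(var_f0, composition(ident, register)), add)
circuit = trace(inner)        # (0, 0): a closed loop; the solution stream
                              # lives on the feedback wire (a split atomic
                              # would expose it as an external output)
```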

Categorical structure

These circuits form a traced monoidal category. This is not merely a theoretical observation — the categorical axioms constitute a proof system for circuit equivalence: two circuits connected by a chain of rewrites provably compute the same function.

One circuit, many meanings

A compiled circuit is semantics-free — it describes the structure of a computation, not its numerical interpretation. To obtain a result, you select a calculus:

Real

register → ∫ dt

Streams are Taylor coefficients. Solves deterministic ODEs via Picard iteration in coefficient space.
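The idea of Picard iteration in coefficient space can be sketched in a few lines (a toy version, assuming the usual rule that formal integration maps coefficient $c_k$ to $c_k/(k+1)$ shifted up one slot — not Asgard's solver):

```python
from math import factorial

# Picard iteration in coefficient space for f = f0 + int(f).
def picard(f0, n_coeffs):
    c = [0.0] * n_coeffs
    for _ in range(n_coeffs):            # each pass fixes one more coefficient
        integ = [0.0] + [c[k] / (k + 1) for k in range(n_coeffs - 1)]
        c = [f0 + integ[0]] + integ[1:]
    return c

coeffs = picard(1.0, 6)
# Taylor coefficients of e^x: c_n = 1/n!
assert all(abs(coeffs[n] - 1 / factorial(n)) < 1e-12 for n in range(6))
```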

Stochastic

register → ∫ dW, ∫ dN

Multi-index expansion with Monte Carlo sampling. Supports Itô, Stratonovich, and jump-diffusion processes.

Discrete

register → unit delay

Streams are sequence terms. The same circuit structure that solves an ODE can generate the Fibonacci sequence.

The same circuit that solves $f' = f$ under the real calculus (producing Taylor coefficients $c_n = 1/n!$ for $e^x$) can, under the discrete calculus, generate a recurrence — register becomes a unit delay, multiplication becomes sequence convolution.
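Under the discrete reading, the Fibonacci claim is easy to check by hand: two unit-delay registers in a feedback loop with an `add` node generate the sequence (a standalone sketch, not Asgard's execution path):

```python
def fibonacci_stream(n):
    """Two unit-delay registers in a feedback loop: next = d1 + d2."""
    d1, d2 = 1, 0                 # register initial conditions
    out = []
    for _ in range(n):
        out.append(d1)
        d1, d2 = d1 + d2, d1      # the add node's output feeds the delays
    return out

assert fibonacci_stream(8) == [1, 1, 2, 3, 5, 8, 13, 21]
```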

This separation has a practical consequence: you can develop and validate a model's structure deterministically, then switch to stochastic execution to study noise effects — without changing the model itself.

Execution

Circuits compile to JAX programs. The framework provides multiple execution strategies:

Stream calculus — for deterministic ODEs, computes Taylor coefficients via Picard iteration. JIT-compilable as a single JAX loop. Produces the full analytic structure rather than trajectories at discrete time points.

Time-stepping integrators — Euler-Maruyama (order 0.5), Milstein (order 1.0, using JAX autodiff for diffusion derivatives), SRK1/Platen (order 1.0 without derivatives), and classical RK4 for deterministic systems. Both Itô and Stratonovich interpretations available.
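For orientation, the simplest of these schemes, Euler-Maruyama, fits in a few lines (a self-contained sketch of the method, not Asgard's integrator API):

```python
import math
import random

# Euler-Maruyama (strong order 0.5) for dX = a(X) dt + b(X) dW.
def euler_maruyama(a, b, x0, t1, n_steps, seed=0):
    rng = random.Random(seed)
    dt = t1 / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dW
    return x

# With zero diffusion this reduces to forward Euler on x' = x:
x = euler_maruyama(lambda x: x, lambda x: 0.0, 1.0, 1.0, 10_000)
assert abs(x - math.e) < 1e-3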

Stochastic Taylor expansion — deterministic coefficients computed once via Picard iteration, then combined with sampled iterated integrals across Monte Carlo paths. Particularly efficient for large ensembles.

All execution paths are fully differentiable through JAX, including through feedback loops (via the implicit function theorem). This enables gradient-based parameter identification and end-to-end optimisation of hybrid models.
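The implicit-function-theorem trick mentioned above can be illustrated on a scalar fixed point: if the feedback loop converges to $x^\* = g(x^\*, \theta)$, then $dx^\*/d\theta = g_\theta / (1 - g_x)$, with no need to differentiate through the iteration itself (toy example; the names here are illustrative):

```python
# Solve the feedback fixed point x* = g(x*, theta) by plain iteration.
def fixed_point(g, theta, x0=0.0, iters=100):
    x = x0
    for _ in range(iters):
        x = g(x, theta)
    return x

# Example: x = theta + 0.5 * x  =>  x* = 2 * theta.
g = lambda x, th: th + 0.5 * x
g_x, g_th = 0.5, 1.0                 # partial derivatives of g
grad = g_th / (1.0 - g_x)            # implicit function theorem: dx*/dtheta = 2
```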

Hybrid systems

Not every part of a dynamical system has a known analytic form. Asgard supports this via the black_box atomic — a typed slot in the circuit that applies an external function (a neural network, a lookup table, any callable) to its input, with gradients flowing through automatically.

A circuit can combine known physics (conservation laws as register, add, scalar) with a learned correction term (black_box). The explicit structure captures what is known; the black box captures what is not. Crucially, the overall circuit remains typed and composable.
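In spirit, the hybrid pattern looks like this (a hand-written sketch; in Asgard the slot is the `black_box` atomic inside a typed circuit, with gradients flowing through JAX):

```python
# Known physics plus a black-box correction slot.
def make_hybrid_rhs(black_box):
    def rhs(x):
        known = -0.5 * x              # explicit structure: linear damping
        return known + black_box(x)   # learned correction fills the slot
    return rhs

# Any callable fits the slot; here a stand-in for a learned correction.
rhs = make_hybrid_rhs(lambda x: 0.1 * x * x)
```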

Given observed data, you can identify the parameters of the explicit structure and train the black-box correction end-to-end: the whole circuit is differentiable, so both are fit with the same gradient-based machinery.

The bet

Implicit models give you predictions. Explicit structure gives you predictions and the ability to reason about why. Asgard's bet is straightforward: if you can recover the structure, you get understanding for free.

Combined with Hugin (the agentic reasoning layer) and Mimir (the foundation model), Asgard provides the formal computing backbone of the Gimle stack — the part that turns equations into computable, composable, differentiable objects.