jNO Documentation

Build neural operator solvers for PDEs in JAX with a symbolic DSL, flexible model architectures, and multi-stage training controls.

  • New to jNO?

    Follow the shortest path from install to first solved PDE.

    Start here

  • Want runnable scripts?

    Browse curated examples and the imported tutorial repository by chapter.

    Tutorial index

  • Optimizing model quality?

    Jump directly to architecture, controls, and tuning references.

    Core concepts

Start Here

If you are new to jNO, follow this path:

  1. Installation: set up jNO with conda, pip, or uv.
  2. Getting Started: run your first problem and learn the basic workflow.
  3. Domain and Geometry: define your domain and collocation strategy.
  4. Training: configure optimizer schedules and solver phases.
  5. Tutorials: run chapter-based scripts from basics to operator architectures.
  6. Foundation Models: see the currently exposed foundation-model families and their expected inputs.
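To give a feel for the workflow these steps walk through, here is a minimal physics-informed training loop written in plain JAX. It does not use jNO's API (that is what the Getting Started guide covers); every name in it is illustrative, and the problem is a toy ODE, u'(x) = u(x) with u(0) = 1:

```python
import jax
import jax.numpy as jnp

# Illustrative stand-in for the jNO workflow: a tiny physics-informed
# network for u'(x) = u(x), u(0) = 1. All names are hypothetical.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "w1": 0.5 * jax.random.normal(k1, (16,)),
    "b1": jnp.zeros(16),
    "w2": 0.5 * jax.random.normal(k2, (16,)),
    "b2": jnp.zeros(()),
}

def model(p, x):
    # One-hidden-layer MLP mapping a scalar x to a scalar u(x)
    h = jnp.tanh(x * p["w1"] + p["b1"])
    return jnp.dot(h, p["w2"]) + p["b2"]

def residual(p, x):
    # ODE residual u'(x) - u(x), with u' from automatic differentiation
    return jax.grad(model, argnums=1)(p, x) - model(p, x)

xs = jnp.linspace(0.0, 1.0, 32)  # collocation points

def loss(p):
    pde = jnp.mean(jax.vmap(lambda x: residual(p, x))(xs) ** 2)
    bc = (model(p, 0.0) - 1.0) ** 2  # initial condition u(0) = 1
    return pde + bc

@jax.jit
def step(p, lr=0.05):
    # Plain gradient descent; jNO's Training guide covers real optimizers
    g = jax.grad(loss)(p)
    return jax.tree_util.tree_map(lambda w, dw: w - lr * dw, p, g)

loss0 = loss(params)
for _ in range(500):
    params = step(params)
```

jNO automates the pieces sketched here: the residual comes from the symbolic DSL, the collocation points from the Domain and Geometry layer, and the loop from the solver phases described in the Training guide.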

Core Guides

Guide | Focus
Domain and Geometry | Built-in geometries, mesh loading, variable sampling, time-dependent setups
Neural Network Architectures | MLP, FNO, U-Net, DeepONet, Poseidon, custom wrappers
Differential Operators | grad, laplacian, jacobian, hessian, divergence, curl, math ops
Training | Solver setup, optimizers, LR schedules, parallel execution
Model Controls | Freeze, masks, LoRA, parameter groups, dtype and initialization
Adaptive Resampling | RAD, RARD, HA, CR3, PINNFluence
Hyperparameter Tuning | Architecture and training sweeps with Nevergrad
Save, Load and Configuration | Serialization, signed artifacts, TOML config and logging
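As a taste of what the Differential Operators guide covers, operators like grad, laplacian, and divergence can be built from plain JAX transforms. jNO's own helpers may be spelled differently; this sketch only illustrates the concepts:

```python
import jax
import jax.numpy as jnp

def laplacian(f):
    # Laplacian of a scalar field as the trace of its Hessian
    return lambda x: jnp.trace(jax.hessian(f)(x))

def divergence(f):
    # Divergence of a vector field f: R^n -> R^n as the trace of its Jacobian
    return lambda x: jnp.trace(jax.jacfwd(f)(x))

def u(x):
    return jnp.sum(x ** 2)  # u(x, y) = x^2 + y^2

x = jnp.array([1.0, 2.0])
g = jax.grad(u)(x)                # gradient: (2x, 2y)
lap = laplacian(u)(x)             # 2 + 2 = 4
div = divergence(jax.grad(u))(x)  # same value, via the Jacobian of grad u
```

Composing transforms this way (divergence of a gradient recovering the Laplacian) is the pattern the symbolic DSL builds on.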

Tutorials

Browse the chapter-based walkthroughs in Tutorials, covering the core PDE problem classes and operator architecture examples.

Foundation Models

Guide | Focus
Foundation Models | Poseidon, Morph, Walrus, and other foundation models, with their expected input layouts

Experimental

Experimental workflows such as FEM and Variational PINNs are listed separately from the main tutorial track.

About

jNO is a research-oriented framework for PDE solving with neural operators developed by the AI-augmented Simulation Group of Fraunhofer IISB. It supports symbolic residual definitions, architecture experimentation, adaptive point sampling, and distributed execution. The documentation is a work in progress and will be expanded with more examples, guides, and API references over time.

Warning

This is an actively evolving research codebase. APIs and behavior may change.

Citation

If you use jNO, we would appreciate a citation of the following paper:

@article{armbruster2026jNO,
  author  = {Armbruster, Leon, ....},
  title   = {{jNO}: A JAX Library for Neural Operator and PDE Foundation Model Training},
  journal = {},
  year    = {},
}