Research, prototypes, and foundational work toward ASI

Reliable intelligence in the wild.

Monodromy Labs is an AI research company with a long-horizon goal: to build systems that reason, plan, and learn continuously while making their decisions auditable, stable, and safe. Today we operate as a research-led company: open, service-oriented, and obsessed with the foundations.

Deciphering and implementing the stable thinking processes that carried us from monkey → flight → Moon.
Manifesto

What Monodromy Labs believes.

This is the current draft of our public manifesto: a statement of intent for how we want to study intelligence, build systems, and collaborate with the research community.

Full manifesto

Intelligence must become more than prediction.

We believe the next generation of intelligent systems will not emerge from scale alone. Large models have shown that knowledge can be compressed into astonishingly efficient pattern machinery, but pattern completion is not yet the same as grounded understanding. Useful intelligence must stay coherent when the world changes, when evidence is partial, when tools fail, and when decisions carry consequences.

For that reason, Monodromy Labs is built around one conviction: reliable AI will come from fusing probabilistic and deterministic elements into a single working ecology. The probabilistic side gives us compression, flexibility, intuition, and broad generalization. The deterministic side gives us invariants, verifiability, memory structure, symbolic precision, and the ability to say why a system did what it did. We do not see these as competing philosophies. We see them as complementary parts of a mature intelligence stack.

We are interested in stable thinking processes: procedures that continue to work under distribution shift, that can be inspected after the fact, that can revise themselves without collapsing, and that remain useful when moved from benchmark theater into the wild. In our view, the future of AI is not stronger generation, but the construction of reasoning systems that can plan, learn continuously, integrate memory, and remain safe because their internal commitments are traceable.

Monodromy Labs therefore treats products as crystallized research. A product matters when it reveals a durable mechanism. A prototype matters when it teaches us something about reliability. A paper matters when it sharpens the map. Our lab exists to build those maps, test them in code, and turn the useful parts into research services, systems, and public artifacts that move the field forward.

01. Intelligence needs structure.

Foundation models are powerful, but intelligence becomes durable only when it is scaffolded by memory, constraints, tools, and explicit world interaction.

02. Grounding has many valid forms.

World models, RL, RAG, symbolic systems, graphs, simulators, and contracts are not hacks around intelligence. They are among its necessary organs.

03. Auditability is not optional.

Systems that affect software, science, or operations must be inspectable. We care about traces, causal paths, confidence, and post-hoc explanation rooted in mechanism.

04. Benchmarks are signals, not destinations.

We value evaluation, but we optimize for robustness outside the curated sandbox. The real test of intelligence is the wild: messy environments, new tasks, and long-horizon consequences.

Research directions

The questions we are working on.

Our current mode is research services and applied investigation. We help teams think through difficult problems in reasoning, autonomy, validation, and research software — while building our own long-term agenda.

Grounded reasoning systems

How should models anchor their decisions in world state, external memory, tools, and explicit intermediate structure?

world models · RAG · causal contracts

Hybrid probabilistic–deterministic stacks

We study architectures where generative priors and deterministic mechanisms are fused together.

symbolic tools · formal checks · invariants
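As an illustration of what such a fusion can look like in code, here is a minimal sketch of a propose-then-verify loop. The `propose` and `check` callables are hypothetical stand-ins for a generative model and a deterministic invariant checker, not Monodromy interfaces.

```python
# A minimal sketch of a hybrid stack: the probabilistic side proposes,
# the deterministic side enforces invariants before anything is accepted.
# `propose` and `check` are hypothetical stand-ins, not real APIs.

from typing import Callable, Optional

def hybrid_solve(
    propose: Callable[[str], str],       # probabilistic side: generative proposal
    check: Callable[[str], list[str]],   # deterministic side: invariant violations
    task: str,
    max_attempts: int = 3,
) -> Optional[str]:
    """Accept a proposal only when every deterministic check passes."""
    feedback = task
    for _ in range(max_attempts):
        candidate = propose(feedback)
        violations = check(candidate)
        if not violations:
            return candidate             # verified: safe to act on
        # Feed violations back so the next proposal is constrained by them.
        feedback = f"{task}\nConstraints violated: {violations}"
    return None                          # refuse rather than guess
```

The design choice this sketch encodes is the one the manifesto argues for: the generative prior supplies candidates, but only the deterministic layer can grant acceptance.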

Safe and auditable autonomy

Agents should expose why they acted, what they changed, and what risks remain.

traces · validation loops · risk-aware policies
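To make that concrete, here is a minimal sketch of what one per-step trace record could look like; the field names are illustrative assumptions, not a fixed schema.

```python
# A minimal sketch of an auditable per-action trace record.
# Field names are illustrative assumptions, not a fixed schema.

from dataclasses import dataclass, field

@dataclass
class ActionTrace:
    action: str                                              # what the agent did
    rationale: str                                           # why it acted
    changes: list[str] = field(default_factory=list)         # what it changed
    residual_risks: list[str] = field(default_factory=list)  # what remains open
    confidence: float = 0.0                                   # calibrated confidence

trace = ActionTrace(
    action="apply_patch",
    rationale="failing test isolates the bug to the config parser",
    changes=["src/config.py"],
    residual_risks=["untested on Windows paths"],
    confidence=0.82,
)
```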
Our operating view

LLMs are extraordinary distillation engines. But distilled knowledge becomes intelligence only when it is tied back to reality.

That is why Monodromy prioritizes grounding over generation alone.

AIR

The first system born in the lab.

AIR (Autonomous Issue Resolver) is our first AI product. It makes software maintenance more autonomous by navigating repositories through data lineage rather than by reading files as disconnected text.

What makes it different

Data-first repository reasoning

AIR centers on a Data-First Transformation Graph, where data states are nodes and transformations are edges. That lets the agent follow causal paths through a codebase instead of relying only on file-first retrieval.

DTG navigation · causal localization · explicit data lineage
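As a minimal sketch of the idea (not AIR's actual internal representation), a DTG can be modeled as a directed graph whose nodes are data states and whose edges carry the transformations between them:

```python
# A minimal sketch of a Data-First Transformation Graph (DTG):
# data states are nodes, transformations are directed edges.
# Illustrative only; not AIR's internal representation.

from collections import defaultdict

class TransformationGraph:
    def __init__(self):
        # data state -> list of (transformation, downstream data state)
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def add_transformation(self, src_state: str, transform: str, dst_state: str):
        self.edges[src_state].append((transform, dst_state))

    def downstream(self, state: str) -> list[str]:
        """Follow causal paths: every data state reachable from `state`."""
        seen, stack = set(), [state]
        while stack:
            node = stack.pop()
            for _, dst in self.edges[node]:
                if dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return sorted(seen)

g = TransformationGraph()
g.add_transformation("raw_order", "validate_order()", "validated_order")
g.add_transformation("validated_order", "price_order()", "priced_order")
# A fault observed in `priced_order` is localized by walking these edges,
# rather than by scanning files as disconnected text.
```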
How it operates

A risk-aware multi-agent loop

AIR combines graph navigation, targeted editing, and test-driven validation under a control policy that rewards minimal, validated fixes and discourages unstable over-editing.

navigation · editing · validation
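A minimal sketch of how such a control policy can be expressed, with every name below a hypothetical placeholder rather than AIR's API:

```python
# A minimal sketch of a risk-aware navigate -> edit -> validate loop.
# The scoring rule rewards small, validated diffs and penalizes churn.
# All function and attribute names are hypothetical placeholders.

def score(passed: bool, lines_changed: int, penalty_per_line: float = 0.01) -> float:
    """Prefer minimal validated fixes; unvalidated edits score zero."""
    return max(0.0, 1.0 - penalty_per_line * lines_changed) if passed else 0.0

def repair_loop(candidates, edit, run_tests, max_rounds: int = 5):
    best, best_score = None, 0.0
    for site in candidates[:max_rounds]:   # navigation: ranked fault sites
        patch = edit(site)                 # editing: targeted, minimal change
        result = score(run_tests(patch), patch.lines_changed)
        if result > best_score:            # validation decides acceptance
            best, best_score = patch, result
    return best                            # None if nothing validated
```

The asymmetry is deliberate: an edit that fails validation contributes nothing, so the loop cannot drift toward unstable over-editing.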
Why it matters

Autonomous maintenance as a grounding problem

AIR reflects our broader thesis: reliable AI improves when it is embedded inside explicit structures and consequence loops. In this case, the grounding medium is code state, graph structure, tests, and repository constraints.

Current signal

Early research results

In our paper, we report strong early results for AIR on repository-scale repair benchmarks and in an exploratory OSS study. We present AIR as a research direction and a platform for future work, not as a finished claim to general autonomy.

87.1% on SWE-bench Verified
73.5% on SWE-bench Lite
70/100 in an exploratory OSS study
People

From first principles to working systems.

The lab’s first chapter is deeply shaped by the founder’s trajectory: from scientific training and large-scale engineering to research on reasoning, causality, and trustworthy AI tools.

Founder story

Monodromy Labs grows out of a simple pattern: serious ideas become powerful only when they can survive contact with reality.

The founder’s path runs through physics, informatics, industrial AI systems, research engineering, and frontier reasoning work.

Aliaksei Kaliutau

Aliaksei’s path bridges frontier research and ambitious engineering. He first stood out at 17, winning a national technology competition with an FPGA-based neuroprocessor design. With a foundation in Physics and Informatics and an MSc in AI from Imperial College London, he has focused his work on one central challenge: making machine intelligence more reliable. He has built research software for reasoning evaluation, engineered large-scale AI and robotics infrastructure, designed advanced reasoning and tool-use tasks for frontier models, and founded Monodromy at I-X, Imperial College London.

That arc matters for Monodromy. The lab is not built from theory alone and not from product intuition alone. It is built from repeated attempts to turn ambitious ideas into trustworthy systems, and to make AI useful in research and industry.

physics & informatics · AI MSc at Imperial · research software · reasoning systems

Built where the bar is high.

Imperial · Global science institution
Ocado Tech · Advanced robotics at scale
I-X · Frontier AI research environment

A foundation spanning elite science, real-world robotics, and frontier AI research.

Contact

Explore the frontier of machine cognition

We are interested in research collaborations, early design partnerships, and serious conversations about reliable intelligence.

Research collaborations

Partner with Monodromy Labs on reasoning, grounding, agent architecture, or trustworthy AI research.

Experimental systems

Build a prototype that implements hybrid reasoning, auditable agents, or safer autonomy in a real workflow.

Research software

Use our software, tooling, and infrastructure for ambitious AI experiments that demand engineering discipline.