Founded in 2026 in London

A new AI research company with deep technical roots

Isometron was founded by researchers from DeepMind, OpenAI, and leading universities. We're combining world-class R&D capabilities with a rigorous commitment to building AI that is safe, reliable, and genuinely helpful.

Why we started Isometron

The AI systems being built today will shape the next century. We believe they deserve a research-first approach.

After years of working at leading AI labs, our founding team saw a need for a new kind of AI company: one that combines deep technical research with a genuine commitment to safety and transparency from day one.

We're not racing to deploy the largest models. Instead, we're focused on understanding how AI systems work, how to make them reliable, and how to ensure they remain aligned with human values as they become more capable.

Our team brings together expertise in machine learning, interpretability, alignment research, and systems engineering. We publish our research openly and collaborate with the broader AI safety community.

Our values

Safety

We prioritize the safe development and deployment of AI systems, conducting extensive testing and research to understand and mitigate potential risks.

Transparency

We publish our research, share our methods, and communicate openly about both our progress and our limitations.

Long-term thinking

We make decisions based on what's best for humanity over decades and centuries, not just what's expedient today.

Our R&D focus

Interpretability

Understanding what happens inside neural networks. We develop techniques to explain model decisions and identify potential failure modes before deployment.

Alignment

Building AI systems that reliably do what humans intend. Our research focuses on training methods that produce helpful, honest, and harmless behavior.

Robustness

Creating AI that works reliably across diverse conditions. We study how models behave in novel situations and develop techniques to improve consistency.

Scalable Oversight

Developing methods to verify AI behavior as systems become more capable. We research how humans can maintain meaningful control over advanced AI.

Location

London, United Kingdom

We're based in London, at the heart of Europe's AI research community. London offers access to exceptional talent from world-class universities and a thriving ecosystem of AI companies and research institutions.