AI-POWERED SCIENTIFIC DISCOVERY

The next breakthrough is
already published

Discoveries hide across papers no single scientist reads together. MAGELLAN is a 12-agent AI system that reads across the silos, connects existing knowledge into new testable hypotheses, and then kills 86% of its own ideas to show you only what survives.

ONE OF THE SURVIVORS

Session 001 · PASS — 6.8/10
Bioelectric signaling → connects to → Biomolecular condensates

Voltage gradients regulate protein condensation via electrostatic screening, creating phase boundaries that pattern morphogenetic fields

GROUNDED · SPECULATIVE

5 Sessions
14 Generated
0% Killed
14 Survived
0 Human input

The knowledge is already there

In 1986, Don Swanson proved that the connection between fish oil and Raynaud's disease was hiding in plain sight — published in separate journals that no single researcher read together. He called it Undiscovered Public Knowledge: answers that exist in the literature but remain invisible because science is fragmented into silos.

Today there are over 100 million published papers. The fragments are everywhere. The connections between them — the ones that lead to new mechanisms, new treatments, new understanding — go unnoticed for years, sometimes decades.

MAGELLAN does what Swanson did by hand — at scale. It deploys 12 specialized AI agents that read across disciplinary boundaries, identify where established knowledge in field A connects to established knowledge in field C through an unexplored bridge, and then generate testable hypotheses about what that connection means.

No new data. No black-box predictions. Just existing science, connected in ways no one had seen — and rigorous enough to test in a lab.
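To make the Swanson-style bridging concrete, here is a minimal Python sketch of the idea. The toy corpus, the term co-occurrence scheme, and the bridges() helper are illustrative assumptions, not MAGELLAN's actual retrieval logic:

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each "paper" is a set of index terms from its own silo.
# The papers and terms are invented for illustration only.
papers = [
    {"fish oil", "blood viscosity"},              # nutrition literature
    {"blood viscosity", "raynaud's disease"},     # rheumatology literature
    {"bioelectric signaling", "membrane voltage"},
    {"membrane voltage", "biomolecular condensates"},
]

# Record which terms co-occur in at least one paper.
cooccur = defaultdict(set)
for terms in papers:
    for a, b in combinations(sorted(terms), 2):
        cooccur[a].add(b)
        cooccur[b].add(a)

def bridges(term_a, term_c):
    """Swanson-style ABC search: find bridge terms B that co-occur with
    both A and C, even though A and C never appear in the same paper."""
    if term_c in cooccur[term_a]:
        return []  # already directly connected; nothing undiscovered here
    return sorted(cooccur[term_a] & cooccur[term_c])

print(bridges("fish oil", "raynaud's disease"))
# ['blood viscosity']
print(bridges("bioelectric signaling", "biomolecular condensates"))
# ['membrane voltage']
```

In Swanson's terms, A and C never meet in a single paper; the bridge term B is what the Scout agents would look for, at far larger scale and with far richer evidence than term overlap.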

Input: scientific literature → Output: testable hypotheses
Scout → Generate → Critique → Validate
140 killed · 14 survived
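A sketch of how a kill gate of that shape might be wired up, assuming a simple score-and-threshold rule. The threshold, the toy candidates, and the validate() stand-in are assumptions; only the "most ideas die" behavior comes from this page:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str
    score: float  # 0-10, assigned by the critique agents

# Toy candidates; in the real system these come from the Scout and
# Generate agents reading across the literature.
candidates = [
    Hypothesis("Voltage gradients pattern condensates via electrostatic screening", 6.8),
    Hypothesis("An over-general mechanism with no citable support", 2.1),
    Hypothesis("A plausible but untestable restatement of known biology", 4.9),
]

KILL_THRESHOLD = 6.0  # assumed value; the page does not state the real gate

def validate(h: Hypothesis) -> bool:
    """Placeholder for cross-model validation of the cited support."""
    return h.score >= KILL_THRESHOLD

survivors = [h for h in candidates if validate(h)]
killed = [h for h in candidates if not validate(h)]

print(f"{len(killed)} killed, {len(survivors)} survived")
# 2 killed, 1 survived (the real run above reports 140 killed, 14 survived)
```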

Every claim is tagged GROUNDED, PARAMETRIC, or SPECULATIVE. We label our uncertainty.
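One plausible way to represent those tags in code, sketched as a Python enum. The three label names come from this page; the glosses attached to them and the Claim structure are assumed readings, not MAGELLAN's definitions:

```python
from dataclasses import dataclass
from enum import Enum

class Grounding(Enum):
    # Descriptions are assumed interpretations of the three labels.
    GROUNDED = "directly supported by cited literature"
    PARAMETRIC = "drawn from the model's general knowledge, no specific citation"
    SPECULATIVE = "an inference that goes beyond what the sources state"

@dataclass
class Claim:
    text: str
    grounding: Grounding

example = Claim(
    text="Electrostatic screening shifts condensate phase boundaries",  # hypothetical
    grounding=Grounding.SPECULATIVE,
)
print(example.grounding.name)  # SPECULATIVE
```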

These are hypotheses, not discoveries

We're not claiming breakthroughs. We're claiming a system that generates ideas worth investigating — and we need real scientists to tell us if we're right. The confidence scores are intentionally moderate. The kill rate is intentionally high. This is how you build something credible.

THE FUTURE

This is Session 4.
Imagine Session 400.

These hypotheses were generated with the frontier models available in March 2026. As LLMs improve — deeper reasoning, fewer hallucinations, broader knowledge — every stage of this pipeline becomes more powerful.

We're building the infrastructure now: the agents, the quality gates, the cross-model validation. The scaffolding for a future where AI doesn't just answer questions — it asks new ones.