We Must Know, We Will Know

BCV is investing in Periodic, an AI lab designed to accelerate scientific discovery. We’re excited to work with Dogus, Liam, and the rest of the Periodic team. Here’s how we think about the tremendous opportunity in Periodic.
In 1610, Galileo pointed his telescope towards Jupiter and saw something like this:

Jupiter and the Galilean Moons under 20x magnification.
You can see his sketches in Sidereus Nuncius:

Galileo's drawings of Jupiter and its Medicean Stars from Sidereus Nuncius.
Jupiter’s moons Ganymede, Callisto, Io, and Europa are too small to see from Earth with the naked eye; nobody was going to discover them before the invention of the telescope. But once the telescope was invented, new astronomical discoveries came very quickly. There’s always a pre-history to these things, but the first telescope patent we know about comes from the Dutch spectacle maker Hans Lipperhey in 1608; Galileo started making his own telescopes in 1609, and on January 7th, 1610, he discovered the Galilean moons. As a sign of just how quickly things moved, he beat the competition by only one day — German astronomer Simon Marius independently documented the moons on January 8th.
The history of science is full of similar examples, in which technological progress enables the invention of new scientific instruments, which in turn lead to new scientific discoveries. If you want a cyclotron for particle physics experiments, you need powerful electromagnets. (Lawrence’s original cyclotron used a donated 80-ton electromagnet from Federal Telegraph.) If you want a radio telescope that can detect the CMB, you need cryogenic masers. Etc.
Galileo had the newly-invented telescope. We have newly-developed AI systems. What can we see now that we couldn’t before?
Everything is Computer
To think about the future of computational methods in science, it’s useful to examine the past. Here’s Richard Hamming:
"For example, I came up with the observation at that time that 9⁄10 experiments were done in the lab and one in 10 on the computer. I made a remark to the vice presidents one time, that it would be reversed, i.e. 9⁄10 experiments would be done on the computer and one in 10 in the lab. They knew I was a crazy mathematician and had no sense of reality. I knew they were wrong and they’ve been proved wrong while I have been proved right. They built laboratories when they didn’t need them. I saw that computers were transforming science because I spent a lot of time asking “What will be the impact of computers on science and how can I change it?” I asked myself, “How is it going to change Bell Labs?” I remarked one time, in the same address, that more than one-half of the people at Bell Labs will be interacting closely with computing machines before I leave. Well, you all have terminals now."
— Richard Hamming in “You and Your Research”
I think that Hamming, writing about his experience at Bell Labs between 1946 and 1976, was mainly thinking about computer simulation. You write down known equations for physical phenomena, and you let the simulations roll out. Maybe you’re interested in simulating the orbital mechanics of the solar system. No problem. You write down the initial positions and velocities of every planet and moon, you compute all the pairwise forces, you derive the net velocity and position updates over some small time step Δt, and you run the sim forward in time. In principle this is no different from what Newton might have done 400 years earlier. But in practice, the calculations are too intensive to do by hand. What was new was a computer that could do a stupefying amount of calculation for you. Here’s Stanislaw Ulam, talking about simulating neutron diffusion in a hydrogen bomb explosion circa 1950:
"The wartime, or rather, immediate postwar Metropolis-Frankel calculation was very schematic compared to what I had in mind. More ambitious calculations of this sort had become possible since computers had improved both in speed and in the size of their memories. The steps outlined involved a fantastic number of arithmetical operations. Johnny [von Neumann] said to me one day, “This computation will require more multiplications than have ever been done before by all of humanity.” But when I estimated roughly the number of multiplications performed by all the world’s school children in the last fifty years, I found that this number was larger by about a factor of ten!
Ours was the biggest problem ever, vastly larger than any astronomical calculation done to that date on hand computers, and it needed the most advanced electronic equipment available."
— Stanislaw Ulam in Adventures of a Mathematician
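It’s striking how little code this kind of calculation takes today. Here’s a minimal sketch of the orbital-mechanics rollout described above — two bodies, pairwise forces, a fixed time step — with illustrative constants; a real ephemeris calculation would use every planet and a higher-order integrator, but the shape is the same:

```python
import numpy as np

# Minimal orbital-mechanics rollout: positions, velocities, pairwise gravity,
# stepped forward in time. Units are AU, days, and solar masses; the two
# bodies (the Sun and an Earth-like planet) are illustrative values.
G = 2.959e-4  # gravitational constant in AU^3 / (solar mass * day^2)

masses = np.array([1.0, 3.0e-6])             # Sun, Earth-like planet
pos = np.array([[0.0, 0.0], [1.0, 0.0]])     # positions in AU
vel = np.array([[0.0, 0.0], [0.0, 0.0172]])  # AU/day (~circular orbit speed)

def accelerations(pos, masses):
    """Sum the pairwise gravitational accelerations acting on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.1                                    # time step in days
for _ in range(3650):                       # roll forward about one year
    vel += accelerations(pos, masses) * dt  # velocity update from net force
    pos += vel * dt                         # position update from velocity

print(pos[1])  # the planet returns to roughly (1, 0): one full orbit
```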
This is all so commonplace now that we barely think about it. We take it for granted that a video game running on commodity consumer hardware might have a physics engine rivaling the best scientific simulations of the recent past.
One way to think about this is that computing removed an “obvious” bottleneck to discovery — that bottleneck being the ability to perform a vast number of calculations in a short amount of time. If the thing standing between you and an accurate picture of what happens in the first seconds of a hydrogen bomb explosion is 40 million multiplications, you are out of luck in 1920. But by 1950, you only need about 24 hours of ENIAC time.
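That arithmetic is easy to check. A back-of-the-envelope sketch, assuming the commonly cited figure of roughly 385 multiplications per second for ENIAC (an assumption, not a spec sheet):

```python
# 40 million multiplications at ENIAC speed, assuming ~385 multiplications/sec.
multiplications = 40_000_000
rate_per_second = 385  # commonly quoted figure for ENIAC; an assumption here
hours = multiplications / rate_per_second / 3600
print(f"{hours:.0f} hours")  # ~29 hours: on the order of a day of machine time
```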
In this framing, a natural question to ask is: what “obvious” bottlenecks to scientific discovery are removed by AI?
Waiting for Superconductors
One of Periodic’s initial areas of focus is materials science, so let’s study a materials science example: the discovery of high-temperature superconductors in the 1980s. Let’s look at what actually happened, and then think about how today’s AI systems could have removed barriers to discovery.
The most salient fact here is that YBCO superconductors are not hard to synthesize. Six months after their discovery, 18-year-olds were making YBCO in their high-school chemistry labs. So: what took so long? Why did we discover YBCO in 1987 instead of 1977 or 1967? Let’s start with the history.

Please try this at home, kids.
In the early 1970s, people started discovering weird superconductors. They knew they were weird at the time:
"Lithium titanate seems to be one of those rare examples of idiosyncratic superconductivity in the sense that there exist no other superconducting compounds which are related both chemically and crystallographically. For this reason, all attempts to raise its transition temperature by means of chemical perturbation have failed, and for the same reason, we are at a loss to explain why the superconducting transition temperature [14 Kelvin] should be so high in the first place."
Matthias would know: he discovered hundreds of BCS superconductors in the 1950s and 1960s, and came up with a set of rules for finding them. Interestingly, one of his earlier rules, “stay away from oxygen”, was already contradicted by the time he wrote the above: the lithium titanate compound he mentions is LiTi₂O₄. (And in fact, SrTiO₃ was known to be superconducting as early as 1964.) YBCO would break another of his rules, “stay away from insulators”.
Over the next 10 years, materials scientists discovered more and more of these weird superconductors that didn’t seem to be explained by the BCS theory. Against this backdrop, in the early 1980s, Claude Michel began experimenting with Ba-La-Cu-O systems, and observed metal-like conductivity at temperatures over 100 °C — this was highly unusual, because these materials are typically insulators. In one of the greatest tragedies in materials science history, Michel’s group never tested for superconductivity.
But Bednorz and Müller did. They had been looking for oxide superconductors for a couple of years, motivated by some Jahn-Teller ideas that turned out not to be true, but which put them on an unusual path relative to other superconductor researchers. Bednorz and Müller read Michel’s papers, synthesized a bunch of crystals, and in 1986 found a family of materials that superconducted at 35 K, breaking the previous temperature record by 12 degrees. They won a Nobel Prize for this work.
Paul Chu read the Bednorz & Müller paper, reproduced it, and began testing different ratios of barium, lanthanum, and copper. Two days before Thanksgiving in 1986, his group found a ratio that superconducted at a staggering 73 K, more than doubling the previous high-temperature record. Chu’s group began testing hundreds of new compositions, discovering lots of new superconductors. Along the way, they tried substituting yttrium for lanthanum, and on January 29, 1987, observed superconductivity at 93 Kelvin. This is the famous YBCO superconductor, the first ever discovered with a critical temperature above the boiling point of liquid nitrogen (77 K).
Reviewing all this in 2025, a few bottlenecks stand out. The first is simply lab throughput. Bernd Matthias discovered hundreds of superconductors; Michel’s group synthesized tons of different oxygen-deficient perovskites; Paul Chu churned through hundreds of attempts just in the Ba-La-Cu-O system. Typically, only the successful experiments are published — for every synthesis we know about, there are probably 10x or 100x more failed attempts that never saw the light of day in a journal. There are only so many grad students, meaning only so many experiments. Going back to Ulam, you start thinking differently if you can easily do 4,000,000 multiplications instead of painstakingly doing 4,000 by slide rule. Similarly, you might start thinking differently if you could run radically more solid state syntheses.
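To get a feel for why throughput binds, count candidates. A hypothetical sketch: even a coarse 5% sweep over cation ratios in a single three-metal oxide system yields hundreds of compositions, before you vary any processing conditions:

```python
from itertools import product

# Coarse sweep over cation fractions (x, y, z) with x + y + z = 1 for a
# hypothetical three-metal oxide system, in 5% steps.
step = 0.05
fractions = [round(i * step, 2) for i in range(int(1 / step) + 1)]
grid = []
for x, y in product(fractions, repeat=2):
    z = round(1 - x - y, 2)
    if z >= 0:
        grid.append((x, y, z))
print(len(grid))  # 231 compositions -- before varying temperature,
                  # atmosphere, oxygen stoichiometry, or annealing schedule
```

Each of those points is days of work for a human, but a routine job for a high-throughput lab.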
The second bottleneck is data loss. The compression from physical experiment to journal article is extremely lossy; the vast majority of experimental data never made it into a format that an LLM could ingest. Going back to Galileo, think about how much data is lost in moving from a 32 fps video feed of Jupiter to a few symbols on a page. Worse than the data we lost is the data we never collected. Michel never tested his samples for superconductivity, missing an important discovery — and anybody who did want to test for superconductivity had to re-synthesize the materials from scratch. One begins to feel that taking the basic step of recording ≈everything in LLM-friendly format would do a lot to avoid missed opportunities.
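What might “recording ≈everything” look like in practice? A hypothetical sketch — every field name below is invented for illustration — is simply a structured record per synthesis attempt, logged whether or not the experiment “worked”:

```python
import json
from datetime import datetime, timezone

# One synthesis attempt as a machine-readable record. The schema is invented
# for illustration; the point is that raw measurements and negative results
# get captured, not just the successes that reach a journal.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "target_composition": {"Y": 1, "Ba": 2, "Cu": 3, "O": 6.9},
    "precursors": ["Y2O3", "BaCO3", "CuO"],
    "anneal_temp_c": 930,
    "atmosphere": "flowing O2",
    "xrd_pattern_file": "xrd/run_0042.csv",
    "resistance_vs_temperature_file": "rt/run_0042.csv",
    "superconducting_onset_k": None,  # None = tested, no transition observed
    "notes": "black phase; minor green impurity at grain boundaries",
}
print(json.dumps(record, indent=2))
```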
The third bottleneck is literature review. Somebody has to read the journal articles. But there are so many of them:

Derek de Solla Price, Science Since Babylon.
Most of these papers are irrelevant, but knowing which are and which aren’t requires some kind of review. Michel’s papers weren’t even on superconductivity, and could have easily gotten buried in the torrent of other materials science research.
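Even crude tooling helps here. As a sketch, here’s a toy triage pass that ranks abstracts (invented one-liners) against a stated research interest using TF-IDF similarity; a production system would use an LLM or learned embeddings, but the shape of the workflow is the same:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Rank a pile of abstracts by similarity to a stated research interest.
abstracts = [
    "Oxygen-deficient perovskites in the Ba-La-Cu-O system show metallic conductivity.",
    "A new photoresist chemistry for deep-UV lithography.",
    "Resistivity anomalies in mixed-valence copper oxides below 40 K.",
]
query = "unusual conductivity in copper oxide perovskites"

vectors = TfidfVectorizer().fit_transform(abstracts + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")  # Michel-like papers float to the top
```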
Finally, there’s a fourth bottleneck that isn’t at the forefront of the YBCO story — the limits of direct simulation. Ab initio methods become intractable as solid state systems grow in complexity. Superconductivity is a subtle phenomenon, and even the basic mechanisms of non-BCS superconductivity are still poorly understood in theory, making it hard to rely on approximate or simplified calculations. As with protein folding, even 70 years of Moore’s Law isn’t enough for large-scale direct simulation. But, also as with protein folding, one wonders whether specialized deep learning models can give us accurate predictions about the physical properties of a new material, without the cost of direct simulation.
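As a sketch of what such a surrogate might look like, here’s a toy version: featurize a composition as element fractions and fit an off-the-shelf regressor. The training labels below are invented for illustration; the real analogue is a deep model trained on large experimental and simulated datasets. The interface is the point — predict a property cheaply, and screen candidates before anyone fires up a furnace:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ELEMENTS = ["Y", "Ba", "La", "Cu", "O"]

def featurize(composition):
    """Map {element: count} to a normalized element-fraction vector."""
    total = sum(composition.values())
    return np.array([composition.get(el, 0) / total for el in ELEMENTS])

# Tiny, fabricated training set: compositions with illustrative Tc labels.
train = [
    ({"La": 1.85, "Ba": 0.15, "Cu": 1, "O": 4}, 35.0),
    ({"Y": 1, "Ba": 2, "Cu": 3, "O": 6.9}, 93.0),
    ({"La": 2, "Cu": 1, "O": 4}, 0.0),
]
X = [featurize(c) for c, _ in train]
y = [tc for _, tc in train]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

candidate = {"Y": 1, "Ba": 2, "Cu": 3, "O": 7}
print(model.predict([featurize(candidate)]))  # screen before synthesizing
```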
Putting all these points together, you get a picture of what you’d want to do to speed up the discovery of YBCO and other crystalline solids. You’d want a high-throughput solid state synthesis lab, where you could run 100x or 1000x the number of experiments that a normal lab could. You’d want to collect detailed data from every experiment in an LLM-friendly format. You’d want a system for scanning journal articles and advancing ideas for promising experiments. Finally, you’d want algorithms that could predict physical properties without requiring direct simulation.
If you were organizing this as a new lab, you’d want a team with deep expertise in AI and in solid state chemistry. Aspirationally, you’d try to recruit people who played a major role in the current deep learning wave and who have a profound understanding of how the underlying systems work — maybe the people who invented the attention mechanism, or members of the original ChatGPT research team. You’d also want the leads of the most successful AI-for-science projects, and people who had built high-throughput labs in the past. You’d want a deep integration between algorithms and physical experiment. All of this would be expensive, so you’d also want to raise hundreds of millions of dollars for GPUs, for top talent, and for the lab itself.
And you’d call the company Periodic.
Bibliography
“First-Hand: Discovery of Superconductivity at 93 K in YBCO: The View from Ground Zero”
“Perovskite-Type Oxides — The New Approach to High-Tc Superconductivity”
“Possible High Tc Superconductivity in the Ba-La-Cu-O System”
Solid State Chemistry and its Applications, §8.3: Superconductors
“Superconductivity and Lattice Instabilities”