a project outcomes report

For a number of years now, I’ve been a member of an academic collaboration devoted both to studying Internet latency and to designing schemes to generally speed things up on-line. At the end of 2018, our group received an NSF grant to facilitate this research work. Now, three years later, it’s time to submit the final report. As part of the NSF’s close-out process, we’re required to provide an accessible summary of research outcomes for the general public (up to 800 words, with up to six images). This sounds like a spec list for an oklo.org item, so I’m posting a slightly expanded draft version here.

Everyone knows the frustration of a page that seems to take forever to load. Moreover, even when the interlaced networks that comprise the Web function nominally to deliver requested assets, there exist Internet applications that would benefit dramatically from reduced latency. Examples of such time-sensitive endeavors run the gamut from telepresence and virtual reality to header bidding and blockchain propagation.

At a fundamental level, network speeds are limited by the finite velocity of electromagnetic waves — the speed of light. The maximum propagation speed for light occurs in vacuum. Light is slowed down in air by a negligible amount (0.03%), but in the conventional solid-core fiber optic cables that currently carry the bulk of Internet traffic, light travels at only about 2/3 of the vacuum maximum. Over long distances, and when many back-and-forth exchanges of information are required, this signaling slowdown becomes material. In addition, the actual over-land and under-sea paths taken by cables are often substantially longer than the minimum A to B distance between data centers.
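To put numbers on the fiber penalty, here’s a quick back-of-envelope sketch. The ~1,190 km great-circle distance (roughly Chicago to northern New Jersey) is my assumption, and the 2/3 factor is a typical value for solid-core fiber:

```python
# Back-of-envelope: one-way signal latency over a ~1,190 km geodesic
# via (a) vacuum/air speed of light and (b) solid-core fiber at ~2/3 c.
C_VACUUM = 299_792_458.0    # m/s
FIBER_FACTOR = 2.0 / 3.0    # refractive index ~1.5 implies v ~ 2/3 c
distance_m = 1_190e3        # assumed great-circle distance, meters

t_vacuum_ms = distance_m / C_VACUUM * 1e3
t_fiber_ms = distance_m / (FIBER_FACTOR * C_VACUUM) * 1e3

print(f"vacuum: {t_vacuum_ms:.3f} ms, fiber: {t_fiber_ms:.3f} ms")
# fiber adds 50% to every one-way trip, compounding over round trips
```

The extra ~2 ms per one-way trip looks small until one recalls that a single web-page load can involve dozens of sequential round trips.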

Over the last decade, there has been a flurry of construction of long-haul line-of-sight microwave networks that adhere as closely as possible to great-circle paths. These are operated by latency-sensitive trading firms, who, in aggregate, have mounted significant research and development projects to create global information circuits that are as fast as possible, while simultaneously maximizing bandwidth and uptime.

How feasible would it be to take the lessons learned and apply them at scale to speed up the Internet as a whole? This is a tricky question to answer because the fastest existing long-distance networks were entirely privately funded and their performance remains fully proprietary. Just how fast are they, really? How well do they hold up when weather moves through? How much data can they carry?

Government database scraping provides a first approach to evaluate the performance of the ultra-low latency networks. In the US, if one wishes to communicate with microwaves, one needs a broadcast license. The FCC maintains a publicly-searchable list of all licenses and licensees, and this data can be assembled to monitor the construction, consolidation, and improvement of point-to-point networks. The figure just below, from our 2020 paper, shows two snapshots in the evolution of the New Line Network, which connects the CME data center located in Aurora, Illinois to the trading centers of suburban New Jersey. Over time, the New Line has clearly grown to provide ever more bandwidth at ever higher availability with a shortest path that adheres ever more closely to the geodesic.

The development and build-out of speed-of-light networks has significant parallels with the emergence of transcontinental railroads during the Nineteenth Century.

The Union Pacific system of railroad and steamship lines, 1900. Library of Congress

In April of 2020, in the licensed microwave bands, there were nine separate FCC-registered networks spanning the Chicago to New Jersey Corridor and linking the CME to the NY4 data center that hosts a variety of equity and options exchanges. The New Line, with a 3.96171 millisecond path-latency (compared to a geodesic minimum latency of 3.955 ms) is seen to be neck-and-neck with several competitors:

In the above table, APA stands for Alternate Path Availability, and indicates the fraction of links that can be removed (for example by heavy rain) such that the path latency of the remaining network is not more than 5% greater than the speed-of-light minimum.

A completely independent monitoring technique consists of correlating precisely time-stamped trading data from Chicago and New Jersey, and measuring the statistical delay between events that occur at one end of the network, and the responses that occur at the other end. As part of the capstone paper for our NSF-funded research, we undertook this analysis using gigabytes of tick data for the E-mini S&P500 near-month futures contract (that trades in Illinois) and the SPY S&P500 ETF (that trades in New Jersey). In work of this type, there are subtle issues associated with measuring the absolute lowest latencies at which information transport occurs across the relay; these subtleties stem from the operational details of the exchange matching engines. For the purpose, however, of demonstrating that the networks consistently run end-to-end within a few percent of the physical limit, even during periods plagued by heavy weather, the signal correlations measured over long stretches of trading provide a remarkably powerful network probe.
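A toy version of this correlation-based delay measurement can be sketched in a few lines. This is a hypothetical illustration on synthetic timestamps, not the pipeline from the paper, and the 20-microsecond coincidence window is an arbitrary choice:

```python
import random

def best_lag(events_a, events_b, lags_us, window_us=20):
    """Hypothetical sketch of the correlation idea: for each candidate
    lag, count how many timestamps in events_a, shifted by that lag,
    land within a small window of a timestamp in events_b, and return
    the highest-scoring lag. Real tick-data analysis is far subtler."""
    def score(lag):
        hits = 0
        for t in events_a:
            target = t + lag
            # linear scan is fine for a sketch; use bisect for real data
            if any(abs(tb - target) <= window_us for tb in events_b):
                hits += 1
        return hits
    return max(lags_us, key=score)

# Synthetic demo: venue B responds ~3,960 microseconds after venue A
random.seed(1)
a = sorted(random.uniform(0, 1_000_000) for _ in range(200))
b = [t + 3960 + random.gauss(0, 10) for t in a]
print(best_lag(a, b, range(3900, 4050, 10)))
```

Even with jitter in the individual response times, the coincidence count peaks sharply at the true transit delay, which is the essence of why long stretches of trading data make such a powerful network probe.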

The timing of New Jersey response to Illinois events. Over three weeks of stock-market trading (sliced into 15-minute increments along the y-axis), word of price movements always traveled within a few percent of the speed of light, even when the weather was inclement. This figure illustrates that a large-scale, nationwide speed-of-light network is a real operational possibility…

By taking these (and other) real-world insights into account, and applying them to a transcontinental network design, we’re excited to release — at the 19th USENIX Symposium on Networked Systems Design and Implementation Conference — our most up-to-date vision of what a speed-of-light Internet service provision (a c-ISP) could look like, and what its performance would be.

A 100 Gbps, 1.05×stretch network across 120 cities in the US. Blue links (thin) need no additional towers beyond those currently listed in the FCC database. Green (thicker) and red links (thickest) need 1 and 2 series of additional towers respectively. Black dashed links are fiber.

geysers

The headline images from Cassini at Saturn were the curtain sheets of water vapor and ice crystals erupting from the tiger stripe terrain of Enceladus’ south polar regions.

In the ensuing fifteen years, Enceladus has accreted a lot of habitability hype, so it’s easy to forget that it’s actually a very small satellite. Its diameter, in fact, is less than the driving distance between Hicks Dome and the Crater of Diamonds State Park.

With the small size comes a small escape velocity — 240 m/s — a typical jet airliner speed at cruising altitude. When liquid water welling up in tidally flexed cracks is exposed to the vacuum that surrounds Enceladus, the thermal speeds of the boiled-off molecules are high enough that they readily random-walk away from the satellite. The moon acts like a giant low-activity comet. No kimberlite-styled cryptoexplosions are required to launch the H2O into space, just the exposure of liquid water to vacuum when the cracks are forced open at the far point of the slightly eccentric (e=0.0047) orbit.
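The quoted 240 m/s figure is easy to check from Enceladus’ mass and radius (the values below are standard approximations):

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_ENCELADUS = 1.08e20   # kg (approximate)
R_ENCELADUS = 252.1e3   # mean radius, m (approximate)

# escape velocity: v = sqrt(2 G M / R)
v_esc = math.sqrt(2 * G * M_ENCELADUS / R_ENCELADUS)
print(f"Enceladus escape velocity: {v_esc:.0f} m/s")
```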

Jupiter’s Europa, by contrast, is an ice-covered world of far vaster extent. With its kilometers-thick ice shell, it should be keeping a tight lid on its depths, but curiously, evidence has emerged that water is somehow being transiently sprayed to heights of order 200 kilometers above the surface. A 2019 paper by Paganini et al. draws on a combined set of Keck near-infrared spectral emission lines to support the conclusion that, on one occasion out of seventeen, thousands of tons of water were fluorescing in Europa’s tenuous exosphere.

Two instances with no fluorescing water, one instance with fluorescing water (source).

Skeptics and cynics will be quick to remark that one 3-sigma measurement in seventeen tries works out to roughly a 1-in-20 chance of a false positive, even assuming perfectly Gaussian statistics. And they’re right. Results that are weird and spectacular are generally wrong, and geysers on Europa providing astrobiology fans with fresh organic produce from ten kilometers down would appear to qualify on both counts. Nonetheless, Earth managed to rocket gem-quality diamonds from the mantle up into rural Arkansas, so a down-home precedent clearly exists. Furthermore, HST has captured evidence of UV emission from photolyzed water in the vicinity of Europa, which lends support to the one-in-seventeen result from Keck. Maybe eruptions actually are occurring on Europa?
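The skeptics’ arithmetic, assuming a two-sided 3-sigma tail probability of about 1/370 per independent observation:

```python
p_single = 1 / 370   # two-sided 3-sigma tail probability, Gaussian
n_tries = 17

# chance of at least one spurious 3-sigma result in 17 independent tries
p_at_least_one = 1 - (1 - p_single) ** n_tries
print(f"{p_at_least_one:.3f}")   # roughly 1-in-20
```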

A serious problem, however, lies in lofting the water to heights of 200 kilometers above the ice. That requires venting velocities of order 700 m/s, pointing toward regular full-blown cryptoexplosions.
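The ~700 m/s figure follows from simple ballistics, ignoring drag and Europa’s curvature (surface gravity ≈ 1.315 m/s²):

```python
import math

g_europa = 1.315   # surface gravity, m/s^2
height = 200e3     # observed plume height, m

# launch speed for a ballistic trajectory peaking at `height`:
# v = sqrt(2 g h), neglecting drag and the body's curvature
v_launch = math.sqrt(2 * g_europa * height)
print(f"{v_launch:.0f} m/s")
```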

Nicole Shibley and I recently published a paper that outlines how such a process could operate:

The cryptoexplosive mechanism is initiated by convective overturn in the Europan ocean, which permits clathrated CO2 to rise to the top of the water column and propagate into cracks. Once the clathrates ascend to within ten kilometers of the surface, the pressure is low enough so that they dissociate, producing explosions that are sufficiently energetic to send carbonated water to dizzying heights. The process, in fact, draws on the peer-reviewed literature on champagne cork dynamics.

… just in time for New Years!

cryptoexplosions

Image Source

Geologically speaking, not much has happened in Illinois since the Permian. In particular, in the ultra-flat vicinity of Urbana where I grew up, there is no exposed bedrock at all. If one takes the effort to dig down through meters of topsoil and glacial till, one eventually hits gray, unmetamorphosed, 320-million-year-old sediments from the Carboniferous — the Mattoon Formation — the crumbly picture of a disappointing drill core.

Something about those miles of dull strata directly underfoot instills a vicarious appeal into the IGS maps for the entire state. And while they are dominated by the vast bland extent of the coal-rich Illinois basin, the margins of the map hint at strange and exotic features, including one, Hicks Dome, that’s so weird that it merits its own circular inset.

The southeastern tip of Illinois is riddled with faults, and pocked by a mysterious, crater-like 275-million year old blemish that is still clearly visible in aerial photographs. The strata that comprise this feature — the Hicks Dome — were pushed up thousands of feet by a carbon dioxide-driven explosion that brecciated the rock miles below and squeezed dikes of lava through cracks toward the surface.

The resulting igneous rocks, which are colored red in the diagram just above, can be extracted from drill cores. Mineralogical tests indicate that the lava ascended at least 25 km from a source of origin in the upper mantle. Then later, during the Jurassic, geothermally heated brines seeped through the faults that shattered the sediments and interacted with limestone to form fabulous fluorite crystals.

Illinois fluorite

An even more remarkable example of a deep-Earth intrusion is located in Arkansas, 383 miles southwest of Hicks Dome. The Prairie Creek Diatreme, more evocatively known as the Crater of Diamonds, is a 106-Myr-old igneous pipe filled with solidified lamproite lava that was driven upward (as also occurred at Hicks Dome) by CO2 gas exsolving from a source at least 100 miles down. The crater, moreover, is indifferently salted with actual diamonds, including the occasional big-deal gem.

Source: Arkansas Geologic Survey

Diamonds are metastable when removed from deep mantle conditions, and so in order for them to survive a lava-soaked mix-mastered trip to Earth’s surface, the transport has to be rapid. The lamproite that brought up the diamonds must have smashed its way through a hundred miles of rock with a vertical velocity of order a hundred miles per hour, a marshaling of forces that is undeniably impressive, and indeed, at first glance, completely non-intuitive.

Satisfyingly, a hundred and six million years after the cryptoexplosion, Arkansas is running the lamproite pipe as a State Park, offering end-runs around de Beers at very reasonable rates:

Earth’s last diamond-bearing eruptions occurred tens of millions of years ago. It’s a good thing they aren’t every-day occurrences. Methane gas, however, is currently causing new-school cryptoexplosions that leave ominous lake-filled craters in the permafrost of the Siberian tundra.

On Earth, the crypto prefix can generally be detached from cryptoexplosions once the techniques of laboratory and field geology have been brought to bear. Kimberlite eruptions may seem superficially crazy, but the basic mechanism of their operation is increasingly well understood. Truly mysterious explosions need to occur off Earth, in locations where it’s not yet possible to go in and root around after the fact…

flying objects

Image source: Sam Cabot

All the small things. True care, truth brings. I’ll take one lift. Your ride, best trip… Vintage Blink played in the background. Tubular radio bulbs cast a diffuse glow on the distressed wood and polished concrete surfaces. The researcher pushed away a half-finished bowl of microgreens and, before taking another sip of single-origin espresso, eyed me with a look somewhere between amusement and concern.

“I mean really. You sound like a relic. You’ve gotta move on. You’ve gotta get with the times. ‘Oumuamua is so 2020. Everyone who’s anyone now is working on UAPs.”

It’s true that the cutting edge has progressed to bigger and weirder things. Indeed, it’s now been over four years since ‘Oumuamua raced out of sight, and I can’t seem to let that mysterious cosmic visitor out of mind.

The ISO story has been worn smooth through years of retelling, and the details are probably well known to anyone who reads oklo.org: ‘Oumuamua entered the Solar System on a strongly hyperbolic trajectory consistent with a pre-encounter galactocentric orbit that was quite close to the local standard of rest. It closed to within 0.25 AU of the Sun. Then, just after passing Earth’s orbit on its outbound leg, it was detected by Pan-STARRS at the end of the third week of October, 2017. Global follow-up efforts with space and ground-based telescopes were quickly mounted. ‘Oumuamua was observed to have a strongly varying light curve, no detectable coma, and a slightly reddish color, and it experienced a small but significant non-gravitational acceleration on its way out.

For over a year, I was very enthusiastic about the possibility that ‘Oumuamua’s properties could be explained by appealing to a composition rich in molecular hydrogen ice. Darryl Seligman and I published a paper outlining this idea, which generated a fair amount of interest in the wider media. Last year, however, Yale graduate student Garrett Levine carried out a very detailed investigation to trace how macroscopic objects rich in molecular hydrogen ice might form in the cores of the densest, coldest molecular clouds. Our final conclusion was that while it’s not impossible, it’s very difficult for the present-day Universe to manufacture solid H2. The microwave background temperature just isn’t quite cold enough yet.

Recently, Metaculus (which has been undergoing rapid development) launched an on-line journal featuring fortified essays, in which an in-depth article on a topic of interest is linked to a set of questions on which readers can predict. Garrett wrote an essay that outlines the future detection and research prospects for ISOs. Everyone’s encouraged to read it and place predictions.

Add your prediction here

Grab all they can

Music from your formative years stays with you — generally in latent form, but at other times echoing resurgently through the amorphous cycles of nostalgia that stretch out into decades.

For me, it was the era of The Sisters of Mercy, New Order, The Psychedelic Furs, and Depeche Mode. Listening to the old LPs sometimes occasions a near-electric jolt when stanzas that seemed obscure are suddenly infused with stunning up-to-the-moment relevance. Stuck inside of Memphis with the mobile home, sing…

Several days ago, I noticed a new Metaculus question with a curious, almost clickbait-worthy title, When will we meet grabby aliens?

The reference is to a recent paper by Hanson, Martin, McCarter and Paulson that has been getting traction in response to write-ups by Scott Aaronson and others. Hanson et al.’s abstract rebrands as “grabby” a subset of extraterrestrials that would appear to bear certain similarities to the antagonists in Starship Troopers.

The parameters of the grabby-civilization (GC) model determine how fast and in what manner space-time fills up with GCs, and are specified by (1) the rate at which grabby civilizations emerge, (2) the rate at which they expand, and (3) the number of “hard steps” (bottlenecks) in the so-called Great Filter, whatever it is that prevents non-living matter from giving rise to expansion-capable civilizations.
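For intuition, here’s a toy one-dimensional caricature of the model, with civilizations appearing at random places and times and expanding at a fixed speed. It’s a sketch only: the real model is three-dimensional and restricts new civilizations to regions that are still empty, both of which this ignores:

```python
import random

def grabby_fraction(n_civs, speed, size=100.0, t_end=10.0, n_grid=400, seed=0):
    """Toy 1D caricature of the GC model: n_civs civilizations appear
    at random places and times, then expand outward at a fixed speed.
    Returns the fraction of the line inside at least one civilization's
    expanding region at time t_end."""
    rng = random.Random(seed)
    civs = [(rng.uniform(0, size), rng.uniform(0, t_end)) for _ in range(n_civs)]
    covered = 0
    for i in range(n_grid):
        x = (i + 0.5) * size / n_grid
        # a grid point is grabbed if any civilization's front has reached it
        if any(abs(x - x0) <= speed * (t_end - t0) for x0, t0 in civs):
            covered += 1
    return covered / n_grid

# More civilizations, or faster expansion, grabs more of the volume
print(grabby_fraction(n_civs=5, speed=1.0))
print(grabby_fraction(n_civs=5, speed=5.0))
```

Cranking the emergence rate or the expansion speed up or down is exactly how the three parameters above trade off against one another in the full model.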

The Hanson et al. abstract strikes me as a more or less point-by-point rephrasing of Everything Counts by Depeche Mode. Aside from the line about Korea (maybe misheard along the lines of “Here we are now in containers”?) everything in the song is fully relevant.

The grabbing hands grab all they can
All for themselves, after all
It’s a competitive world

Given the model assumptions, grabby civilizations blister out within the universe in a manner determined by the values of the three parameters. The paper has an attractive figure that illustrates one particular outcome, with 193 randomly candy-coated GCs appearing over several Hubble Times across a 2D slice 41.7 billion light years on a side.

The paper’s take-away argument is that we’re living at some point in the clear space above the GC surfaces, and at some point in the future we’ll either become a GC or we’ll be steamrolled by one. Moreover, it’s argued that at the present moment, it’s likely that a “third to a half of the universe is within grabby-controlled volumes.” Hmm.

The Metaculus question asks for predictions of the probability distribution over the number of years before we or our descendants encounter a GC. At the moment, the median of the community PDF is a staggering 22.7 billion years, with a significant peak at 2 billion years. Clearly, the emerging consensus is that this question might take a while to resolve.

Hermann Bondi, Tommy Gold and Fred Hoyle’s steady-state theory of cosmology introduced the so-called perfect cosmological principle, which holds that the universe is homogeneous and isotropic in both space and time. Two papers outlining their theory appeared in 1948, and the theory maintained considerable influence until evidence that the Big Bang occurred became incontrovertible. A satisfying anecdote relates that the steady-state theory was inspired by the circular plot of the British post-war horror film Dead of Night.

If a horror movie can act as the aesthetic pivot for a debunked cosmological theory, it stands to reason that Depeche Mode may have pointed toward the resolution of the Fermi Paradox. The key lies in the fact that if it’s a competitive world, then

Everything counts in large amounts.

When one talks about aliens and grabby extraterrestrial civilizations, one is really talking about computation. And if grabby computation is irreversible and device-based, then planets are really nowhere. They just don’t matter. A wind of catalyzed nano-devices within the outflow from a single dust-spewing extreme asymptotic giant branch star can accomplish of order 10^62 bit operations, a factor of ten million times more than a “habitable” planet can muster over 5 billion years if totally covered with solar panels. Again, when it comes to the big picture, planets are completely irrelevant.
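That factor of ten million can be checked with back-of-envelope Landauer-limit arithmetic. The assumptions below are all mine: an Earth-sized cross-section intercepting the full solar constant for 5 billion years, with every joule spent on irreversible bit operations at 300 K:

```python
import math

K_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # assumed operating temperature, K
E_BIT = K_B * T * math.log(2)   # Landauer limit per irreversible bit op

R_PLANET = 6.371e6              # m, Earth-sized world
FLUX = 1361.0                   # W/m^2, solar constant at 1 AU
T_SPAN = 5e9 * 3.156e7          # 5 billion years, in seconds

# total sunlight intercepted by the planet's cross-section
energy = math.pi * R_PLANET**2 * FLUX * T_SPAN
planet_bits = energy / E_BIT
print(f"planet: ~1e{math.log10(planet_bits):.0f} bit ops")

# compare against the post's 10^62 figure for an AGB outflow
ratio = 1e62 / planet_bits
print(f"AGB outflow wins by ~1e{math.log10(ratio):.0f}x")
```

The panel-covered planet tops out near 10^55 bit operations, seven orders of magnitude short of the dust-spewing star.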

Here’s a link to our working paper, The Black Clouds, which discusses how extreme Asymptotic Giant Branch stars can be commandeered in the service of computation. We might just be immersed in a colored region, and the WISE sources in the Mollweide projection above might just be our unfriendly local GC.

Or, as the song says,

Confidence taken in
By a suntan and a grin
.

Metaculus

A few years ago, I put up several posts describing Metaculus, the online prediction site that I co-founded with Anthony Aguirre and several other partners. In the interim, the site has grown substantially. It’s now logged roughly half a million predictions from a community of more than 10,000 users on a panoply of nearly 7,000 questions. Among the subset of binary questions that have resolved, the track record shows that Metaculus’ Brier Score stands at an impressive 0.117.
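For readers unfamiliar with the metric, the Brier score is just the mean squared difference between probabilistic forecasts and binary outcomes, so 0.0 is perfect and always guessing 50% scores 0.25:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts (0..1)
    and binary outcomes (0 or 1). Lower is better; a coin-flip
    forecaster who always says 0.5 scores exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))
```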

As the site has grown we’ve added staff, including a full-time CEO and a CTO, and a roster of analysts and question writers. We’re running real money competitions, including a $50K forecasting tournament on topics related to the development of artificial intelligence.

Many oklo.org readers may find interest in the Fermi-Drake question series, where we’re accumulating predictions on the terms of the famous equation for N.

Many readers will have their own opinions. Make your prediction count!

Beyond N, there are many active questions that touch on astrophysical topics.

The full list of astrophysics and cosmology questions is here.

including the ultimate question:

currently registering a 71% probability of resolving positively.

The Language Models

Writer’s block. Procrastination. Envy of those for whom words flow smoothly! Luxurious blocks of text. Paragraphs, essays, books. The satisfying end results of productivity made real.

Or, as Dorothy Parker put it, “I hate writing, I love having written.”

Over the past few years, this dynamic has kept me both keenly and uneasily interested in natural language generation — the emerging ability of computers to produce coherent prose. In a post that went up just under four years ago, I reported on experiments that used Torch-rnn to write in the vein of Oscar Wilde and Joris-Karl Huysmans, the acknowledged masters of the decadent literary style. A splendidly recursive quote from The Picture of Dorian Gray has Wilde describing the essence of Huysmans’ A Rebours.

Based on a 793,587-character training set composed of the two novels, 2017-era laptop-without-a-GPU-level language modeling — which worked by predicting the next character in sequence, one after another — could channel the aesthetic of décadence for strings of several words in a row, and could generate phrases, grammar and syntax more or less correctly. But there was zero connection from one sentence to the next. The results were effectively gibberish. Disappointing.
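The flavor of next-character prediction can be captured, in miniature, by an order-4 Markov sampler. It’s far cruder than Torch-rnn, but it works on the same one-character-at-a-time principle (the snippet below trains on the opening line of Dorian Gray):

```python
import random
from collections import defaultdict

def train(text, k=4):
    """Map each k-character context to the list of characters observed
    to follow it in the training text."""
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i + k]].append(text[i + k])
    return model

def generate(model, seed_text, n=80, rng=None):
    """Sample one next character at a time (as Torch-rnn does), but
    from raw context counts instead of a learned network. Assumes the
    model was trained with k=4."""
    rng = rng or random.Random(0)
    out = seed_text
    for _ in range(n):
        choices = model.get(out[-4:])
        if not choices:
            break   # dead end: context never seen in training
        out += rng.choice(choices)
    return out

corpus = ("the studio was filled with the rich odour of roses, and when "
          "the light summer wind stirred amidst the trees of the garden ")
model = train(corpus)
print(generate(model, "the "))
```

With a two-sentence corpus, the sampler mostly regurgitates its training text, which is the same no-memory failure mode, in extreme form, that made the character-RNN output read as gibberish.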

In the interim, progress in machine writing has been accelerating. Funding for artificial intelligence research is ramping up. Last year, a significant threshold was achieved by GPT-3, a language model developed and announced by OpenAI. The model contains 175 billion parameters and was trained (at a cost of around $4.6M) on hundreds of billions of words of English-language text. It is startlingly capable.

A drawback to GPT-3 is that it’s publicly available only through awkward, restrictive interfaces, and it can’t be fine-tuned in its readily available incarnations. “AI Dungeon,” anyone? But its precursor model, the 2019-vintage GPT-2, which contains a mere 1.5 billion parameters, is open source, and Python wrappers for it are readily available.

For many years, oklo.org was primarily devoted to extrasolar planets. Looking back through the archives, one can find various articles that I wrote about the new worlds as they arose. One can also look back at contemporary media reports of the then-latest planetary discoveries. Here’s a typical example from a decade ago, the beginning of an article written by Dennis Overbye for the New York Times.

In collaboration with Simone Williams, a Yale undergraduate student, we scraped the media archives from the past two decades to assemble a library of news articles describing the discovery of potentially habitable extrasolar planets. Once all the articles were collected, we developed a consistent labeling schema, an example of which is shown just below. The Courier-font text is a summary “prompt” containing the characteristics of the planet being written about, as well as a record of the article’s source, while the Times-font text is the actual article describing an actual detected planet (with the title consistently bolded in sans serif). In this case, it’s another piece by Dennis Overbye, from 2007, reporting on Gliese 581 c:

A benefit of the GPT series is that they are pre-trained. Fine tuning on the corpus of articles takes less than an hour using Google cloud GPUs.

And the result?

Here’s an imaginary article describing the discovery of a completely manufactured planet (albeit with a real name) with completely manufactured properties.

It’s definitely not perfect, but it’s also not that bad.

Strata

East Rock rises abruptly from the flat New Haven city streets that surround it. Approaching from the south or the west, its diabase ramparts rear up forbiddingly.

While most of the East Coast’s ranges date to the continental collisions that assembled Pangea, the igneous intrusions and the sedimentary rocks of central Connecticut are roughly half as old and stem from Pangea’s demise. East Rock Park is a Jurassic Park, and two hundred million years ago, the rifts that eventually grew to become the Atlantic Ocean were opening just south of town. The rift valley floor was sinking, sediment was accumulating to fill the growing depression, and the sill of lava that eventually solidified into East Rock was squeezing out in a thick viscous sheet.

Several miles north of East Rock, an extensive road cut reveals layers of sediment from near the Jurassic-Triassic boundary. Beds of reddish sandstones and fused conglomerates of mud and pebbles are tilted at an angle of about 10 degrees, a remnant of the sinking and foundering that the rock layers suffered after they formed. The strata are varied and clearly visible, representing sediments that accumulated in a rift valley that alternated between a seasonal playa during dry periods and long-lived lake bed during wet periods.

Near the dawn of the Jurassic, New Haven was located in the tropics. During rain-soaked epochs, the shores of the rift lakes were a year-round riot of green with pterosaurs soaring in the skies. Perhaps there was nothing overt in those long-departed scenes to suggest that the end-Triassic extinction was either near or was already underway. And now, two hundred million years later, the layered record of ancient climate change stands mute and unvarying as the engines of loaded dump trucks roar and strain against the freeway grades.

A close look at the road cut shows a banding pattern that starts to repeat as one ascends from the lower exposed layers at the left of the photo to the upper exposed layers on the right. A look at the literature indicates that the rate of deposition in Central Connecticut 200 million years ago was about 1 millimeter of sediment per year, so the span recorded in the exposure is about 20,000 years. The repetition reflects one precession cycle of Earth’s spin, which dictates how the seasons align with the varying distance from the Sun stemming from the eccentricity of Earth’s orbit.

The sedimentary record in the New Haven area from the period of Pangea’s rifting is continuous over something like 20 million years, and the tilted layers of rocks fill Connecticut’s central valley to depths measured in miles. Cores drilled through this colossal lens of sediment reveal that Earth’s ancient orbital eccentricity variations are faithfully recorded in the strata.

In the course of an afternoon, with a laptop and the Rebound code, one can integrate the full Solar System back to the moment when the hardened lava that makes up present-day East Rock was glowing toothpaste-red and pushing its way into the then-newly lithified strata.

Looking back over the past five million years, Earth’s secular eccentricity variations are plainly apparent. In particular, the ~400 kyr envelope produced by the beating of the Venus-dominated g2 ≈ 7.2″/yr and Jupiter-dominated g5 ≈ 4.3″/yr secular frequencies is clearly visible.
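The envelope period is just the beat between the two secular frequencies. Using higher-precision values for g2 and g5 (standard secular-theory numbers, my substitution for the rounded figures above) recovers the ~405-kyr period:

```python
ARCSEC_PER_CIRCLE = 360 * 3600   # arcseconds in one full cycle

g2 = 7.453   # arcsec/yr, Venus-dominated secular frequency (assumed value)
g5 = 4.257   # arcsec/yr, Jupiter-dominated secular frequency (assumed value)

# two nearly equal frequencies beat with period 1 / (f1 - f2)
beat_period_yr = ARCSEC_PER_CIRCLE / (g2 - g5)
print(f"{beat_period_yr / 1e3:.0f} kyr")
```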

This ~400-kyr (or, more precisely, 405-kyr) pattern has been remarkably stable over Earth’s history. Running the clock back to the dawn of the Jurassic 200 million years ago, the cycle shows no real change in character from the present. The plot just below runs for 10 million years forward from the formation of East Rock. A low-pass filter has been applied to the full eccentricity variation (shown by the light orange curve) to show the 400-kyr variation swinging up and down, just as it does today.

Signals

Image Source

In the late 1950s, orbital measurements of the Martian moon Phobos were interpreted to suggest that the satellite’s orbit was decaying faster than expected. This prompted the Russian astrophysicist Iosif Shklovsky to propose a “thin sheet metal” structure for Phobos, thereby explaining its anomalous acceleration and implying that it is of artificial origin.

The ensuing jolt of public interest in this hypothesis spurred an invigorating side-line for Shklovsky, who progressed from a drably monochrome list of equation-heavy papers — replete with sober titles such as On the Nature of the Fine Structure of Emission of Active Regions on the Sun — to glamorously teaming up with Carl Sagan to (among other things) advocate examination of “paleocontact” with extraterrestrials and for scrutiny of myths and religious lore for indications of influences from out there.

My guess is that Shklovsky was well aware that his promotion of a field that later generated efforts such as Erich von Daniken’s Chariots of the Gods and the History Channel’s Ancient Aliens was largely just for fun. There is, after all, no harm in broadening the public’s exposure to a wide variety of ideas. Right?

As described on his Wikipedia page, on the occasion of a visit to the Berkeley Astronomy Department, Shklovsky was memorably asked by a graduate student if UFO sightings are as common in the Soviet Union as in the United States.

“No,” he replied. “In this area the Americans are far more advanced than us.”

Delayed Feedback

Last weekend, there was an engrossing article in the New York Times.

Titled “‘A Frankenstein’ That Never Lived,” the piece’s top-line summary runs, “On Jan. 4, 1981, the effects-heavy production opened and closed on the same night. Forty years later, the creators revisit a very expensive Broadway flop.”

According to the article, hopes for the success of the big-budget enactment of Mary Shelley’s 1818 classic had run high prior to opening night, but the show’s prospects were dashed in large and immediate measure by Frank Rich’s dreadful review in the Times. Rich’s write-up brims with both arch sarcasm and gut-punch lines such as, “we feel nothing except the disappointment that comes from witnessing an evening of misspent energy.” Reading the article, I can feel a queasy, visceral sympathy for everyone who worked on that production.

At the end of the last century, Fred Adams and I were riding high on our forecasts for the denouement of the entire cosmos. Our trade book — The Five Ages of the Universe — had, to our great thrill, just been published by an imprint of Simon and Schuster, and we were enjoying the modest acclaim that had proceeded from dividing the entire past and the entire future of the Universe into five thumbnail-friendly eras of time.

The sales, the buzz, and the idle dreams of pop-culture stardom all came to a crashing halt from one day to the next when the New York Times ran its review of our book. I can recall the sinking feeling at the moment when our agent called Fred to let us know what had happened. “Brace yourself.” I think that was what she said.

Dick Teresi’s review was entitled, “The First Squillion Years”, and the title telegraphed the intent. It starts out bad, it gets progressively worse, and finally ends rather crushingly, with “Imagine an astronomer looking back at us ages hence. Perhaps he can read the bound galleys in my hand. Maybe he will be kinder than I have been here.” Yikes.

It’s amazing, however, how quickly one recalibrates. The book tour that had been in the cards was scaled way back to a handful of local appearances, with Borders Books in Ann Arbor standing in as the high point. Sales dried up. Just like that, I was back to trying to find bugs in Fortran codes. Over time, it became clear to me that it was almost certainly just as well.

Rather amusingly, the distant future is currently having something of a moment. There have been several new trade books on the topic, including at least one that got a review in the Times that was entirely different in tone from the one we drew. And then, a fantastic coda to the whole episode arrived unexpectedly a week or two ago as I was leafing through a recent issue of the New Yorker.

I’ve always enjoyed Roz Chast’s cartoons. She has a great sense of humor. It was oddly gratifying to see that she’d somehow been moved to lift out our eras and our thumbnails, elevating them to a full-page cartoon.