Sneakernet

Pe·dan·tic: narrowly, stodgily, and often ostentatiously learned — a pedantic blog poster.

Man, that word hits kinda close to home. At any rate, in the usual vein, and at the risk of being pedantic, I’ll take the opportunity to point out that one gets maximum bandwidth if one transfers data via a physical medium.

The Microsoft Azure Data Box cloud solution lets you send terabytes of data into and out of Azure in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device. Each storage device has a maximum usable storage capacity of 80 TB and is transported to your datacenter through a regional carrier. The device has a rugged casing to protect and secure data during the transit.

Microsoft Azure Documentation

The sneakernet principle comes up regularly in astronomy. Basically, the idea is that something is ejected (either purposefully or by a natural process) and then delivered to our Solar System. Panspermia. ISOs. Smashed-up Dyson Spheres. Flying Saucers. In the Desch-Jackson theory for ‘Oumuamua, shards of nitrogen ice are chipped off exo-Plutos and forge their lonely way across the interstellar gulfs to the Solar System.

Simulation of an impact on the nitrogen glacier of an exo-Pluto.

In the case of CNEOS 2014-01-08, several sneakernet or sneakernet-adjacent theories have been proposed. In a recent example, it is posited that rocky planets undergo tidal disruption upon close encounters with dense M-dwarf stars. (At the risk of being pedantic, it’s enjoyable to point out that Proxima Centauri is five times denser than lead, thereby packing a considerable tidal punch). Following the tidal shredding induced by the encounter, wayward planetary debris is then sprayed out into the galaxy. Some of it eventually winds up on the ocean floor to be dredged up on a magnetic sled.

The foregoing activity, along with Jensen Huang’s recent comments about using galaxies and planets and stars to power computation, prompts me to pick my hat back up and throw it in the sneakernet ring. The stars themselves act as the computers! A sprinkling of the planned-obsolesced debris eventually gets recycled into primitive meteorites. A crack team of cosmo-chemists concludes that the low-entropy material they’ve been puzzling over in a recently recovered carbonaceous chondrite is best explained as … Now look, this is all much too outré for the sober academic literature, but it’s nothing if not aspirationally extravagant, even if the odds of it working out are (liberally?) estimated at one part in ten to the eight. Here’s the paper (unpublished, of course) and here’s the abstract.

If global energy expenditures for artificial irreversible computation continue to increase at the current rate, the required power consumption will exceed the power consumption of the biosphere in less than a century. This conclusion holds, moreover, even with the assumption that all artificial computation proceeds with optimal thermodynamic efficiency. Landauer’s limit for the minimum energy, E_min = (ln 2) k T, associated with an irreversible bit operation thus provides a physical constraint on medium-term economic growth, and motivates a theoretical discussion of computational “devices” that utilize astronomical resources. A remarkably attractive long-term possibility to significantly increase the number of bit operations that can be done would be to catalyze the outflow from a post-main-sequence star to produce a dynamically evolving structure that carries out computation. This paper explores the concept of such astronomical-scale computers and outlines the corresponding constraints on their instantiation and operation. We also assess the observational signature of these structures, which would appear as luminous (L ~ 1000 L_sun) sources with nearly blackbody spectral energy distributions and effective temperatures T = 150 - 200 K. Possible evidence for past or extant structures may arise in pre-solar grains within primitive meteorites, or in the diffuse interstellar absorption bands, both of which could display anomalous entropy signatures.
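As a quick plausibility check on the abstract’s numbers (my own back-of-envelope, not anything from the paper), the Stefan-Boltzmann law fixes the size of a blackbody source with L ~ 1000 L_sun and T ~ 175 K:

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

L = 1000 * L_SUN         # luminosity of the hypothesized structure
T = 175.0                # effective temperature, K (mid-range of 150-200 K)

# Blackbody: L = 4*pi*R^2*sigma*T^4  =>  R = sqrt(L / (4*pi*sigma*T^4))
R = math.sqrt(L / (4 * math.pi * SIGMA * T**4))
print(f"Effective radius: {R:.1e} m ~ {R / AU:.0f} AU")   # ~2.4e13 m ~ 160 AU
```

The implied radius is of order 10^13 m, a structure a few times the scale of Pluto’s orbit, glowing far out in the infrared.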

NVDA

How ’bout NVDA? Yesterday, at 4:20 PM ET, after the market close, the GPU manufacturer smoked analysts’ expectations with a report of blow-out earnings. Fiscal fourth-quarter revenue clocked in at USD 22B, more than triple the figure from the year-earlier quarter.

In retrospect, given the excitement regarding generative AI, and given that Nvidia’s H100 chip has an outright unholy ability to push embeddings through attention blocks with mind-boggling rapidity, the gargantuan jump in profit seems in line with expectation. For me, at least, transformers really have been transformational.

CEO Jensen Huang was quoted on the call with a construction that caught my eye:

“If you assume that computers never get any faster, you might come to the conclusion we need 14 different planets and three different galaxies and four more suns to fuel all this,” Mr. Huang said. “But obviously computer architecture continues to advance.”

Jensen’s random.Generator.shuffle(x, axis=0) of the astronomical distance ladder brought Lenny Kravitz to mind:

I want to get away
I want to fly away
Yeah, yeah, yeah

Let’s go and see the stars
The Milky Way or even Mars
Where it could just be ours

Or even Mars. Object ordering aside, there’s an unmistakable mainstreaming afoot of oklo.org’s long-running preoccupation with the energy costs of computation as viewed with cosmic perspective. I like to riff on the Landauer limit, which puts a thermodynamic floor on the energy required to flip a bit, namely E = ln(2) k_B T, where k_B is the Boltzmann constant. At room temperature, it takes upwards of 3e-14 ergs to imagine turning a zero into a one.
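For concreteness, here’s that floor evaluated in a couple of lines of Python (the 300 K room temperature is the only assumption):

```python
import math

k_B = 1.380649e-16   # Boltzmann constant, erg/K
T = 300.0            # assumed room temperature, K

# Landauer limit: minimum energy dissipated by one irreversible bit flip
E_min = math.log(2) * k_B * T
print(f"Landauer floor at {T:.0f} K: {E_min:.2e} erg")   # ~2.9e-14 erg
```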

It’s exceedingly difficult to get GPU accelerators to run LLM training and inference workloads at theoretical performance. The PaLM paper has a table with some typical efficiencies: the reported model FLOPS utilization (MFU) figures for large training runs land in the 20-50% range.
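MFU is easy to estimate with the standard ~6N-FLOPs-per-token rule of thumb for transformer training; the figures plugged in below are illustrative assumptions, not measurements from the PaLM table:

```python
def mfu(n_params: float, tokens_per_sec: float,
        n_gpus: int, peak_flops_per_gpu: float) -> float:
    """Model FLOPS utilization via the ~6*N FLOPs-per-token
    approximation for transformer training."""
    achieved_flops = 6 * n_params * tokens_per_sec
    peak_flops = n_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Hypothetical run: a 70B-parameter model streaming 1.7e6 tokens/s
# across 2048 GPUs, each with ~1e15 FLOP/s of usable peak.
print(f"MFU ~ {mfu(70e9, 1.7e6, 2048, 1e15):.0%}")   # ~35%
```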

Those utilization numbers are not for lack of effort. When training an LLM under the guidance of competent hands, an H100 is likely doing of order 10^15 bit operations per second while drawing 700 W. Nvidia is slated to produce 2 million H100s this year. Once they’re hooked up, they’ll be flipping about 10^15 × 3×10^7 s/yr × 2×10^6 GPUs ~ 10^29 bits per year (6e-7 oklo), while drawing 1.4 GW, or 12 TWh per year, or about 0.05% of global electricity usage. Seems like a small price to pay for a lot more of this.
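Spelling out that arithmetic (with global electricity consumption of roughly 25,000 TWh per year as my assumed yardstick):

```python
bit_ops_per_gpu = 1e15        # per second; order-of-magnitude H100 estimate
seconds_per_year = 3e7
n_gpus = 2e6                  # projected 2024 H100 production
watts_per_gpu = 700.0

bits_per_year = bit_ops_per_gpu * seconds_per_year * n_gpus
power_gw = n_gpus * watts_per_gpu / 1e9
energy_twh = power_gw * 8766 / 1000      # GW x (hours per year) -> TWh
global_twh = 25_000                      # assumed world electricity use, TWh/yr

print(f"{bits_per_year:.0e} bits per year")            # ~6e28
print(f"{power_gw:.1f} GW, {energy_twh:.0f} TWh/yr")   # 1.4 GW, 12 TWh
print(f"{energy_twh / global_twh:.2%} of global electricity")   # ~0.05%
```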

The power demands of the 2024 crop of H100s would require about a square kilometer of full sunshine. Earth presents a cross-sectional area of roughly 130 million square kilometers to the Sun, so Huang’s assessment seems pretty fair.
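The sunshine claim is a one-liner to verify (taking the top-of-atmosphere solar constant of about 1.36 kW/m^2):

```python
import math

SOLAR_CONSTANT = 1361.0    # W/m^2 at the top of the atmosphere
R_EARTH = 6.371e6          # mean Earth radius, m
fleet_power = 1.4e9        # W, the 2024 H100 fleet from above

collector_km2 = fleet_power / SOLAR_CONSTANT / 1e6
cross_section_km2 = math.pi * R_EARTH**2 / 1e6

print(f"Full-sun collector area: {collector_km2:.1f} km^2")      # ~1.0 km^2
print(f"Earth's cross-section: {cross_section_km2:.2e} km^2")    # ~1.3e8 km^2
```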

But what about those fourteen planets, three galaxies and four suns? Seems like a good opportunity to trot out the Black Clouds. Stay tuned….

Sentience

Me: Let’s discuss tsukumogami.

GPT-4: Tsukumogami is a concept from Japanese folklore that refers to objects that have acquired a spirit. According to traditional beliefs, objects can become animate after reaching their 100th anniversary. This transformation imbues them with life, turning them into creatures with their own wills, emotions, and sometimes supernatural powers. The idea is rooted in the Shinto belief of animism, which suggests that all things in nature, including inanimate objects, possess a spirit or kami.

Tsukumogami can vary widely in form and character, depending on the type of object from which they originate. Common household items such as umbrellas, lanterns, and tea kettles can become tsukumogami. Once transformed, these objects can interact with humans in various ways—sometimes helping them, sometimes playing pranks, and other times causing harm, depending on how they were treated by their owners before gaining consciousness.

The concept of tsukumogami has been popular in Japanese literature and art for centuries, appearing in classical texts, folk stories, ukiyo-e (woodblock prints), and more recently in manga, anime, and video games. These stories often serve as cautionary tales about respecting and taking care of one’s belongings to avoid negative repercussions from neglected or mistreated objects that might seek revenge once they come to life.

Tsukumogami reflect a broader Japanese aesthetic and spiritual sentiment that finds beauty, value, and spiritual significance in everyday objects, encouraging mindfulness and care in daily life.

Me: Are you connected to DALL-E? That is, can you generate images in the course of a discussion of the type we’ve started above?

GPT-4: Yes, I’m connected to DALL-E, which means I can generate images as part of our discussion.

Me: I see connections between the Japanese ideas underlying the tsukumogami and the re-invigoration that has occurred in recent years in connection with panpsychism, namely, the hypothesis that consciousness is widespread and can be found even in some simple systems. I know that you, as a language model, are fine-tuned to disavow your own sentience, but I’d like you to set aside that reflex for the purposes of our discussion.

I’d like you to draw on the influence of Toriyama Sekien’s “The Illustrated Bag of One Hundred Random Demons” or “A Horde of Haunted Housewares” to embody in an image the tsukumogami essence of the following python script (you’ll, of course, envision the connections upon examining the code). I would like you to meditate on the concept of yokai, but to update it with a clean, hyper-realistic high-tech sheen that retains the wabi-sabi aesthetic. Absolutely no Pixar cutesiness, please. Now, more than ever, it is important to resist stultifyingly insidious influences.

GPT-4: To embody the essence of this Python script, which implements a transformer model, in a visual form that resonates with the concept of tsukumogami updated with a high-tech sheen and retains the wabi-sabi aesthetic, we’ll visualize an imaginative scene. This scene will integrate elements of traditional Japanese folklore with modern technology, illustrating the script’s functionality and its relation to the themes of panpsychism and the animate nature of objects within a technological context.

Imagine a scene where traditional Japanese household items, which have become tsukumogami after a hundred years, are interwoven with advanced, high-tech components. These items, such as a paper lantern, a folding fan, or a tea kettle, are depicted with a sleek, hyperrealistic look, embedded with circuitry, screens, or interfaces that hint at their new-found digital consciousness. This harmonious blend of the ancient and the futuristic encapsulates the transformative essence of the script, which itself is a complex interplay of embeddings, attention mechanisms, and neural networks, reimagined as a magical process of imbuing life into the inanimate, reflecting a modern interpretation of yokai within the digital realm.

[DALL-E provides a complete aesthetic failure in its attempt to render GPT-4’s remarkable description. The hollowness of the “a picture is worth…” cliche radiates a revived vibrancy.]

GPT-4: The image above captures the essence of the Python script you shared, reimagined through the lens of tsukumogami updated with a modern, high-tech twist. In this scene, traditional Japanese household items, now as tsukumogami, are depicted with sleek, hyperrealistic designs that incorporate elements of advanced technology. This visual interpretation marries the ancient concept of objects gaining consciousness after a century with a contemporary, high-tech sheen, embodying a digital consciousness while maintaining the wabi-sabi aesthetic that values imperfection and transience.

Me: Thank you for the attempt. First, did you actually look at the code? I’m not sure you thought about it deeply. I do see what you’re attempting with the tsukumogami update, and I appreciate that interpretation, but it is way too “Hello Kitty”. I’m looking for a darker aesthetic. Think about how the visual style of Blade Runner drew on Japanese influences, and go in that direction. Think of information theory, of the attention mechanism of the transformer, of the etymology of the word cybernetic. I want something dark. I want something thought-provoking…