I made a bad decision to try and make this piece of art, in one form or another, 18 or so years ago. I think that my unconscious was programmed at an early age by basic cable and a bunch of Hanna-Barbera cartoons that came on really early in the morning on Cartoon Network, before school (especially The New Scooby-Doo Movies with Sonny & Cher, Mama Cass, the Harlem Globetrotters, and so on), and maybe also The Brady Bunch on Nick at Nite? I mean, Middle-earth is great and all, but the fictitious universe that I’d like to explore is the sort of accidental techno-utopia that might be called ‘Mr. Magoo universe,’ where everything really is happy-go-lucky, because the federal government can deflect asteroids with nuclear explosives, the air and water are disgustingly clean, cities are car-free Georgist paradises with fare-free public transport, and you can drive a fuel cell vehicle from coast to coast on a single tank of anhydrous ammonia while emitting only electron-beam-crosslinked natural rubber tire wear particles. That way, when people get hammered on the American Bicentennial, they’ve really got something to celebrate: the accomplishment of decarbonization going almost completely unnoticed. It was about as close to a quick, seamless infrastructure hot-swap as it gets. I also watched a lot of the History Channel as a kid, and as much as it pains me to say this, World War II shook out a little bit differently, and long story short, the Cold War is between the United States and Wang Jingwei’s Nationalist China (at least until he dies in 1970). The point is that it’s a purely nationalistic conflict with no meaningful economic dimension, mercifully lifting the deliriant fogs of ‘capitalism’ and ‘communism’ from the American public consciousness. These people can also think more clearly because of the lower environmental levels of lead, because high-octane coal-derived BTX and agave-derived semi-cellulosic ethanol were cost-competitive with gasoline from the 1920s on. It goes on and on, it’s ridiculous, and the book or whatever is just this test function that probes this model universe, and just so happens to start in Waukegan, Illinois in 1976.
There’s no good point of entry. Here, then, are two dreams I had, a screed, descriptions of some of the inanimate objects in the book, and some mood boards.
There’s nothing I love more than taking a bunch of melatonin and being suddenly awoken at ten in the morning, with the ‘save state’ of a weird dream still intact in my mind. I had two of these dreams that, of course, didn’t take place in the world of the novel, but felt as though they took place in a ‘nearby’ universe. In particular, they took place in the ‘past,’ before I was born.
The Train (2011)
I woke up inside this passenger railroad car, which seemed to have the spartan interior of a school bus. I couldn’t make anything out through the windows, in the predawn darkness. I stood up and looked around, seeing maybe three or four other people. It occurred to me that we were all in high school. I looked down at my body, and found myself wearing a beige unitard, the color of a prosthetic limb or a crash test dummy, and numbered by a little patch on the breast. I pinched the fabric, and indeed, it was spandex.
The first light of dawn had only just appeared when the train arrived at the school. The tracks went straight into the building, through an opening in the side. The train was repeatedly X-rayed as it crept inside, and after a few minutes it came to a full stop. We disembarked in the middle of a busy hallway, full of students milling about, but also these taller, adult technicians in clean room suits. Each wall alternated between rows of split-pea-green lockers and classroom doors on the one hand, and thick red-orange windows into hot cells on the other.
As all of these students were getting books and such for first period, the technicians were using remote manipulators to handle radioactive materials. They were laser-focused on their work, seemingly unaware of all the chatter and slamming of locker doors, and even of the deafening noise of the still-running diesel-electric locomotive towering above the crowd. You could feel it inside your head and chest.
Whatever my first-period class was, it was in a windowless room. There were tiles on the floor (some of their corners were chipped), cinder block walls with thick white paint, and all the usual ceiling tiles and fluorescent lamps up above. There was a TV up in the corner of the room, by the teacher’s desk, and you could hear the relay click as it was turned on remotely for the morning announcements. We stood up, sang the national anthem of the Canadian Soviet Socialist Republic, and sat back down.
The television presenter looked kind of like Leonard Nimoy. He was lanky and gaunt, with a widow’s peak, and enormous aviator glasses. The news was all about how the men’s hockey team defeated Sweden in overtime at May Day Stadium (in Ottawa), and how the State Committee for Statistics had announced that oil production in the Athabasca was up six percent, year to date. “The tar will provide for you, Young Pioneers. The tar will provide.” Just seconds after he signed off, there was a piercing and shrill alarm. The teacher told us that there had been an accident, and to line up single file for evacuation. I could hear the feet of the desks scraping across the floor as we all stood up. The dream blacked out.
I woke up trapped inside a hollow fiberglass tree. I was bent over, with my legs inside the trunk, and my upper body inside a branch. There was an acrylic glass bubble on the underside of this branch, right around my face, and I could see out. The acrylic was scratched, and my breath was fogging up the window, so I could just barely make out the scene outside. It was this enormous shortgrass plain, stretching out in all directions, to a horizon obscured by fog. It was a bit like the original version of gm_flatgrass from Garry’s Mod, but with just enough terrain variation that it was out of the uncanny valley, looking like somewhere in the Great Plains. There were these bolt-straight train tracks, and on the other side, a brown horse grazing serenely.
My arms were pinned up against my torso, and I writhed to see if there was any way to move through the tree, or break out of it. It was futile, but at least I wasn’t being asphyxiated. The sound of my breath was tinny, reverberating inside the leafless plastic tree. The brown and green colors of the scene were muted and flattened by the oppressive blanket of clouds overhead, and the wet gray fog. The fog seemed to almost damp sound as well, such was the stillness and silence. At some point, I saw something glinting on one of the rails, and realized that there was a derail device only tens of feet from the tree.
That’s when I heard the train horn, and became filled with dread. I could both hear and feel the train coming. There were several seconds between when the locomotive derailed, and when the dream ended. The horse was completely unaffected, though, and continued to graze. It didn’t even flinch as the train folded up like an accordion, only ten or so feet away.
The Bomb (2013)
It was pitch black. I stuck my hands out to feel for anything in front of me, and tried to walk in a straight line. I could tell that the ground or floor underneath me was hard. I didn’t think that I was outside, because there were these tinny echoes, the air was still, and it seemed almost dank. Eventually I found a metal wall, and started following it with my hand. After walking for ten minutes or so, I made out a tiny speck of light in the distance, and twenty minutes after that, I could finally see that I was walking inside what seemed to be a giant HVAC duct of some sort, maybe 15 feet tall and 30 feet across. The light was coming from a tee off to the right, and I started to hear voices, and some kind of activity.
I hugged the right side of the duct. As I peered around the corner of the tee, I saw all kinds of people, from all walks of life, working on and around this very long object. There were all these parts lying around, labeled with QR codes. The object looked like some sort of golden multi-cell RF accelerator cavity, maybe a hundred feet long, and with many lobes along its length. It was in some sort of steel cradle or fixture, and there were many wires, cables and boxes attached to it. I didn’t feel threatened, so I stepped out from around the corner, to see if anyone knew how to get out of there.
I walked up to one of the first people I saw, and asked him what was going on. He explained that it was a Facebook event inside a Major League Baseball stadium, and there was an all-you-can-eat buffet for anyone who helped build this piece of Ikea furniture. This uniformed SS officer came out of nowhere and scared the shit out of me. He had an iPhone, and explained that when you scanned the QR code on a part, it would pull up these Ikea-style assembly instructions. He also pointed out the buffet table, with the sneeze guard, the hotel pans, and everything. It looked good, and I really wanted to grab a plate.
Right as he was holding up an Allen wrench and asking me if I was going to take the deal, I glanced over to the object, and suddenly realized that I was actually looking at the radiation case of a thermonuclear weapon with 40 or so stages, and a yield of more than a gigaton of TNT. I felt this rush of adrenaline, broke out into a cold sweat, and tried with all of my might to relax my face, so that I didn’t have even one single faint, fleeting micro-expression of distress, because I felt like this guy was going to kill me if he found out that I knew. I told him that, while the offer was tempting, I was about to have severe diarrhea, and needed to leave.
I could tell that he didn’t completely buy my excuse. He sort of tilted his head back and squinted, but luckily for me, in that instant, someone somewhere had some problem, and this Nazi had to quickly attend to it before it cascaded into more problems. He was going to come back to me.
I was panicking, trying to get the fuck out of there, and I saw this vent near the foot of the wall, which was remarkable because the floor, walls and ceiling were otherwise featureless. The screws were loose enough that I could get my fingertips underneath the grille, and pry it off the wall. I looked in and saw a room on the other side of the wall, and went head-first through this hole. I barely fit, and these raw sheet metal edges scraped across my skin as I wiggled through. I got a bunch of abrasions, but I didn’t see any blood, so there must not have been any burrs or anything.
The other side was this bathroom, covered floor to ceiling in mint green tiles. The door was dark, bookmatched wood, several inches thick. I opened the door, and found myself in a portico, at the front entrance to a downtown hotel. This cream-colored Cord 812S convertible pulled up, and this dude with a stovepipe hat and fur coat got out. He thought I was the valet and threw me his keys, looked me up and down, and told me the car was worth more than my life. I told him that I would try and save both, and tried to get in the car as fast as I could without seeming suspiciously overeager.
Even as I was negotiating that first S-turn to get out onto the street, it was obvious that I was wringing the car’s neck. I wasn’t looking at the guy, so I couldn’t tell if he saw or heard me peel out; I was looking over my left shoulder to see if I had to merge into traffic. There wasn’t any, and as I looked ahead, I was astonished by the lifelessness of this city. The few cars on the six- or eight-lane street were all parallel parked, and there weren’t any pedestrians at all. The city was perfectly grid-planned, and looking between the rows of skyscrapers, I could see this street extend all the way out to a vanishing point on the horizon. I did not hesitate to blow through all the intersections, and the daylight to my left and right seemed to strobe.
At a certain point, the city just ended. The shade of the city was gray-blue, and then this peachy, pink-orange desert world exploded into my field of vision. The street necked down into a two-lane, pitch-black, bolt-straight asphalt road on a salt flat. It was bright and my pupils contracted, but it felt cool even with the sunlight on my skin. The air was sort of salty and gritty, and there was this haze or aerosol over the landscape. I saw mountains all along the horizon, and just instinctively knew that they would protect me from the weapon effects. The wind noise was deafening with the top down, and after a minute or so, I looked in the rear-view mirror to find that the city was tiny, almost as tall as it was wide, seemingly in the center of this salt flat, concentric with this circular mountain range.
I drove straight for maybe 15 minutes, at 100 miles an hour or so. The wind altered the shape of my hair, and when I ran my fingers through it, it felt sandy.
The salt flat did end, much to my relief. I then had to navigate this network of valleys, which were low-lying but with a lot of elevation change in and of themselves, almost like rolling hills. I had no idea where I was going, just looking up at the tops of the mountains as reference points, and making educated guesses about which way to turn at each intersection. If you looked at a map of these roads, they would appear jagged and energetic. There was no way to tell in what general direction a road would ultimately take you, because it would run straight as far as you could see, sharply turn in places with no obstacles or landmarks of any sort, and run straight again. The sun was getting low on the horizon, and the mountains began to cast three-dimensional shadows in the haze, with the same blue-gray tinge as the shadows of the skyscrapers.
I kept twisting my head around to look back at the city, and eventually it did disappear behind a mountain slope, to my enormous relief. I knew that I would survive, and lost focus. The rocks had a faint pink cast, with these flowing bands of golden brown color, almost like Jupiter, or half-mixed strawberry banana yogurt. A lot of the rocks had these bowls or dimples on top, which were full of standing water, even though everything else in the desert was utterly desiccated.
By the time the sun set, I had pressed deep enough into the mountain range that a motel appeared. The woman at the front desk seemed to be bored out of her mind, almost catatonic. She barely said anything or made eye contact. I don’t know how I paid for the room. The motel bed was very stiff, elastic, and low to the ground. I turned the little black plastic CRT on, hearing the 15 kilohertz whine and feeling the static electricity near the screen, and watched the baseball game for a bit. There was a sliding glass door to a slab of concrete too small to be described as a patio. It felt good to just slide the door back and forth, with its heft and smooth action. My feet were bare, and I could feel my body heat leak into this cool slab. The mountain ridge was perpendicular to my line of sight, and I somehow knew that the city was directly on the other side.
The sight of this mountain ridge was disturbing. Even the foot of the mountain was impossibly far away, far beyond my ability to tell distance, and yet the mountains then rose so high up that I had to tilt my head up to see the ridge. It was sterile and barren, and felt dangerous, like an oncoming rogue wave. My mind wandered, and I wondered whether I was right to be so alarmed, if I had really seen what I thought I saw, and if I had stolen a car for no reason.
After dusk, it was impossible to tell where the mountains ended and the night sky began. By day, the sky was completely cloudless, and yet by night, it was completely starless. The motel windows only threw light about a hundred feet away, and beyond that was a black void. The ground was this powdery rock with the occasional pebble, almost totally inorganic. The air had actually warmed up. It was comfortable, and perfectly still.
Then the sky lit up. I couldn’t hear anything, not even any normal or pulsatile tinnitus. I had 20/2 vision. The light should have been unbearably painful, and yet it wasn’t. The detail should have been smeared out by bloom, and yet there was none. It was the starkest contrast that I had ever seen, and I could follow every delicate fractal feature of the ridgeline along the boundary of the blackest black and whitest white, scanning it from left to right over what felt like a full minute, but was actually a thousandth of a second. I closed my eyes, and yet I still saw everything as if they were open. The ridgeline was permanently burned into my mind.

From about 1980 to 2020, the transgression of planetary boundaries began in earnest, and the window of opportunity to steer human development in the direction of the “comprehensive technology” and “stabilized world” scenarios from The Limits to Growth closed. The problem was stated best by John von Neumann, in his 1955 essay Can We Survive Technology?
“In the first half of this century the accelerating industrial revolution encountered an absolute limitation — not on technological progress as such but on an essential safety factor. This safety factor, which had permitted the industrial revolution to roll on from the mid-eighteenth to the early twentieth century, was essentially a matter of geographical and political Lebensraum: an ever broader geographical scope for technological activities, combined with an ever broader political integration of the world. Within this expanding framework it was possible to accommodate the major tensions created by technological progress. Now this safety mechanism is being sharply inhibited; literally and figuratively, we are running out of room. At long last, we begin to feel the effects of the finite, actual size of the earth in a critical way. Thus the crisis does not arise from accidental events or human errors. It is inherent in technology’s relation to geography on the one hand and to political organization on the other… The carbon dioxide released into the atmosphere by industry’s burning of coal and oil — more than half of it during the last generation — may have changed the atmosphere’s composition sufficiently to account for a general warming of the world by about one degree Fahrenheit… another fifteen degrees of warming would probably melt the ice of Greenland and Antarctica and produce world-wide tropical to semi-tropical climate… Technology — like science — is neutral all through, providing only means of control applicable to any purpose, indifferent to all… there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today. Of course, any technology interacts with geography, and each imposes its own geographical rules and modalities. The technology that is now developing and that will dominate the next decades seems to be in total conflict with traditional and, in the main, momentarily still valid, geographical and political units and concepts. This is the maturing crisis of technology. What kind of action does this situation call for? Whatever one feels inclined to do, one decisive trait must be considered: the very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful. In fact, the more useful they could be, the more unstabilizing their effects can also be. It is not a particular perverse destructiveness of one particular invention that creates danger. Technological power, technological efficiency as such, is an ambivalent achievement. Its danger is intrinsic. In looking for a solution, it is well to exclude one pseudo-solution at the start. The crisis will not be resolved by inhibiting this or that apparently particularly obnoxious form of technology. For one thing, the parts of technology, as well as of the underlying sciences, are so intertwined that in the long run nothing less than a total elimination of all technological progress would suffice for inhibition. 
Also, on a more pedestrian and immediate basis, useful and harmful techniques lie everywhere so close together that it is never possible to separate the lions from the lambs… Similarly, a separation into useful and harmful subjects in any technological sphere would probably diffuse into nothing in a decade… Finally and, I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization. Only if those disasters that we fear had already occurred, only if humanity were already completely disillusioned about technological civilization, could such a step be taken… For progress there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment.”
The book is an attempt to fulfill two objectives. First, I want to convey just how contemptible, unforgivable, childish and knowing the failure to kick the “business as usual” habit at any point in the 1980–2020 period was. It would seem that, if humanity were considered as a superorganism, that superorganism would have a severe drug problem and executive functioning deficit. I have some experience in this area, and I should know. It’s not so much that these decisions guaranteed that this or that disaster will happen at some point in the future, it’s that there was a cavalier attitude, power vacuum and abdication of responsibility at the commanding heights of planetary-technological civilization. The transgression of planetary boundaries was downplayed as beyond any reasonable planning horizon, overblown by sadomasochistic Brahmin Left hypocrites, and an issue in human development of no particular importance, which would be handled in due course by unspecified non-coercive, spontaneous self-organization, or perhaps divine intervention. The complexity, abstractness and historical novelty of these problems certainly puts them in a sort of collective blind spot, but the elite must pay its own way by handling precisely these problems. The book is set in 1976–1977 because I want to turn the future-orientation of science fiction on its head as a way of mercilessly, disrespectfully dunking on humanity (and beating a live horse for once). Second, I want to envision a clean break from business as usual, in the full-blown and fully-integrated form of a fictitious universe. The joke is that, although this world is obviously a contrivance, it is what might be called a ‘Mr. Magoo universe,’ in which an exceedingly high level of human development has been attained in a freak historical accident, rather than through “patience, flexibility,” and “intelligence.” As these virtues are virtually non-existent, sustainable development could only ever have happened by accident, and this is non-negotiable for my suspension of disbelief.
It’s safe to say that adulthood involves, among other things, making compromises, coping with grief, and planning ahead. These might be described as the virtues of amenability, equanimity, and foresightedness. In particular, when an adult is confronted with a problem that will only become worse with time, they address it at the earliest opportunity. In my opinion, the unabated continuation of business as usual through the 1980–2020 period speaks to the intellectual and spiritual bankruptcy of the developed world. I understand why feeling the “the effects of the finite, actual size of the earth in a critical way” is depressing to people steeped in “the industrial way of life,” because I feel that too. It is certainly painful to give up on the cornucopian dream of “an ever broader geographical scope for technological activities.” But that’s life, and it can’t be helped.
The irony here is that the “comprehensive technology” and “stabilized world” scenarios would not have been “contrary to the whole ethos of the industrial age” at all, in the grand scheme of things. Much as the New Deal salvaged and rehabilitated capitalism by reining in its excesses (and thereby preserved the existing social order), the goal of these techno-economic reforms was the adaptation and survival of technological civilization, with many or most of its perks and conveniences intact. By kicking the can down the road for 40 years, we have greatly magnified the risk that humanity will ultimately become “completely disillusioned about technological civilization,” and “the industrial way of life” will be lost.
It takes a finely-calibrated sense of taste to know when to be techno-optimistic, and when to be techno-pessimistic. The best technological prescriptions are syncretic, freely mixing high and low technology, as well as acceptance and rejection of conventional wisdom. The best example of this in the book is the passage from conventionally-bred, non-native, annual monocultures to genetically-engineered, native, perennial polycultures. The idea is that the agroecosystems with the lowest input costs and environmental impacts are the multistory perennial polycultures most closely resembling the forest and prairie ecosystems that exist in the absence of human intervention. Genetic engineering makes it possible to domesticate wild plants de novo (without the linkage drags that would otherwise cause many of their desirable traits to be unintentionally bred out) and build “multifunctional” agroecosystems that provide not only the usual provisioning services, but also substantial ecosystem services. These perennials are high-yielding and mechanically-harvestable, but retain all of their natural pest resistance traits, among other things.
While decarbonization is probably the most important single departure from business as usual, global warming and ocean acidification are but two of the nine planetary boundaries. The use of land, water, and raw materials must never be forgotten. There may be a menu of “suitable new political forms and procedures” to choose from, but they will all involve high state capacity as a tactical necessity of resolving large-scale coordination problems. The political solutions will come largely from John Roemer’s “Kantian equilibrium,” and perhaps also Georgism. It may be too late for the “comprehensive technology” and “stabilized world” scenarios, but it is never too late for the underlying political and technological innovations (and exnovations). In this way, the book is just as much a vision of our future as it is a vision of a past that never was. It is a shamelessly, transparently deluded and escapist retro-futuristic techno-utopian fantasy, but I swear to God that at the end of the day, all of that is just a means to an end — helping to create the ideal conditions for an Enlightenment to actually occur for the first time in history.
The Techno-Economic Setting
Almost no science fiction deserves to be called “science fiction,” or even “science fantasy.” It’s all imagination and no straitjacket, almost never addressing practical problems in a detail-oriented, multi-disciplinary, and physics-based way, making it little more than a style exercise. And it’s not just that it’s useless, it’s that it has a completely misplaced sense of what is futuristic. The world we live in is not modern at all, and as such, there is a vast and almost entirely unexplored overlap between the futuristic and the mundane.
I hope you’ll believe me when I say that my motivation for doing all of this technology-oriented worldbuilding is not to wallow in esoterica for its own sake, show off something I (rightly or wrongly) believe to be clever, or express some kind of uncritical lust for technology. It has to do with my debilitating affinity for what I perceive as technological beauty, elegance, and perfection. I think of myself as tending to a garden of pure Forms in the Platonic ideal world, where the perfect all-in-one washer-dryer and the perfect district heating and cooling system are on display, next to the regular dodecahedron and the number 137, all on plinths, and with informational placards.
It’s very important to note that although a lot of these machines are weapons, or weapon-adjacent, they’re really just a way of articulating a certain philosophy and approach to problem-solving. I would contend that weapons are some of the most telling cultural artifacts produced by any technological civilization. One of the themes in the book is that, in a world of ubiquitous illiteracy, tribalistic one-upmanship is the only motivation for technological and human development comprehensible to ordinary people. It might be for the wrong reasons, but it is very much the right thing, so I can’t complain. By no means is this all just about weapons — there’s plenty of other stuff like a deep space mission to land cosmonauts on Callisto and robotically search for the hypothetical subsurface ocean on Europa, vintage clothing made from NaOH- and NaClO₂-treated Calotropis fruit fiber, towering airlift bioreactors for the submerged fermentation and biopulping of wood chips by domesticated Ceriporiopsis subvermispora, affordable thermochemical and electrochemical routes to titanium, heart-healthy vegetable shortening fractionated from the “high-stearic-high-oleic” oil of transgenic edible Jatropha curcas, transgenic Parthenium argentatum that secretes PHB granules in its “bark parenchyma cells,” the effort (at the intersection of the Buffalo Commons and Jurassic Park) to replenish stocks of bison using interspecies embryo transfer, manipulative wildlife management, and DNA from the archeological record for genetic rescue… And Karen Carpenter is still alive.
#1: EG&G Asterius. The stadium-sized cryogenic vacuum blast chamber for laser-triggered bomb-pumped laser-driven nuclear micro-bomb physics experiments that make inertial-confinement fusion ignition as easy as killing ants with a magnifying glass.
#2: General Electric Mark 5 × Aerojet UGM-96A. An almost solid cylinder of unstoppable mass murder. It may be ten pounds of SLBM in a five-pound missile tube, but it’s surprisingly easy to live with over the 15-year maintenance interval (truly the Honda NSX of nuclear weapon systems).
#3: Xerox Paperless Desktop. This stupidly overpowered home computer pushes silicon, static CMOS and DRAM to their absolute limits, with Cu-Cu thermocompression bonding of a general-purpose triggered-instruction tile processor and eight DRAM chips into a back-thinned chip stack MCM, connected through a Foturan glass printed circuit board to an electron-beam-addressed phase-change RAM (EBAPCRAM) storage tube. The hermetic encapsulation of the electrical components in borosilicate glass by chemical vapor deposition at intermediate temperatures allows two-phase immersion cooling in automotive antifreeze, with a high critical heat flux.
#4: Aerojet-Babcock & Wilcox M-1. This is the nuclear thermal rocket engine with 1,084 seconds of specific impulse that helped land American cosmonauts on Mars in 1969 and return them safely to the Earth in 1971 (Hugh Dryden “preferred ‘cosmonaut’ on the grounds that flights would occur in and to the broader cosmos, while ‘astronaut’ suggested flight specifically to the stars”). The Stationary Liquid Fuel Fast Reactor is “suspended within the expansion-deflection nozzle bell,” and “gamma shield material (tungsten) can be integrated into a high-temperature nozzle throat component, because the nozzle’s radial outflow sonic throat is located forward of the reactor” and “the reactor’s location is remote from the majority of supersonic gas that expands off the nozzle wall, having been directed there by the annular throat.” “It is implicit in [the] design concept that the tungsten nozzle… weight be accounted as nearly zero, because the gamma shielding provided by [this] forward-mounted component is necessary to shield controls, components, propellant, and crew.” This takes thousands of pounds out of the engine, greatly increases the TWR, and even “has a small specific impulse advantage, because [the] high surface area, gamma heated tungsten nozzle heats reactor outlet gases slightly, whereas nuclear thermal rocket engines of conventional design lose about five seconds of specific impulse by cooling nozzle gases with a cryogenically cooled throat.” The hydrogen propellant flows forward through tungsten alloy microtubes that are 98% isotopically enriched in ¹⁸⁴W in order to reduce thermal and epithermal neutron absorption, alloyed with 5at% rhenium for reasonable low-temperature ductility, and immersed in (and wetted by) the molten ²³³U-rich alloy fuel. The ²³³U has less than a third of the critical mass of ²³⁵U (as well as a slightly higher thermal conductivity), and is bred from ²³²Th in a two-fluid molten salt reactor with a hydrous ⁷LiOD-NaOD fuel salt.
“Molten uranium has several advantages over solid ceramic fuels such as UO₂, UC, and UN. The relatively high thermal conductivity of uranium plus convective flow of the liquid would reduce thermal gradients and thus result in a nearly isothermal heat source. Also, because the uranium density is higher than in solid ceramic fuels, smaller and lighter reactors are possible for space vehicles. In addition, fuel element swelling, a major problem with solid fuels, would be eliminated by using a molten fuel… Metals generally exhibit much better thermal conductivity and thermal shock resistance than ceramics. In addition, the fabricability… would be much better with a metal than with a ceramic or ceramic-coated metal… no eutectic or chemical compound forms between tungsten and uranium… tungsten and uranium exhibit low mutual solubilities at elevated temperatures… After uranium solidifies at 1130°C, its density increases… over 5% during cooling to room temperature because of thermal contraction and allotropic transformations… Tungsten, on the other hand, exhibits a relatively minor thermal contraction when cooled from 1130°C. Thus, the shrinking uranium can cause stresses in the capsule walls which, if the capsule-uranium bond is strong enough, can be relieved by forming cracks in the walls… if liquid uranium fuel elements were used in a nuclear reactor… shutdown or cyclic operation at temperatures below the melting point of uranium could result in container cracks and in subsequent fuel losses.”
“The… liquid metal fuel reactor… could prove useful for space power generation since it could be operated as a relatively compact fast reactor… One of the most advantageous features of a fluid fuel reactor is its inherent safety and ease of control. A liquid fuel which expands on heating results in a negative temperature coefficient of reactivity. Since the rate of expansion is limited only by the speed of sound in the liquid, this effect is essentially instantaneous and tends to make the reactor self-regulating.”
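To spell out that self-regulation argument in symbols (a back-of-envelope sketch in my own notation, not anything from the sources quoted here): with a negative temperature coefficient of reactivity, the reactivity is roughly

\[
\rho(T) \;\approx\; \rho_0 + \alpha_T\,(T - T_0), \qquad \alpha_T < 0,
\]

so any excursion that heats the fuel immediately drives ρ negative and throttles the chain reaction back down, and the core settles at whatever temperature makes ρ = 0. Because the feedback is carried by thermal expansion of the liquid itself, it acts on prompt timescales, with no control system in the loop.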
The “high actinide concentration in combination with a… high heat transfer capability… leads to a high power density” and small “core volume… vessel diameter and shield diameter,” and therefore a relatively light shadow shield. “Refractory metal structural integrity may result in improved fission fragment retention,” and the “rugged construction may offer improved shock loading.” The large, prompt, negative temperature coefficient of reactivity “due to the thermal expansion of liquid fuel” not only stabilizes the power level globally without any “rapid control system,” but also stabilizes it locally, providing “inherent hot spot compensation.” The “fuel-to-coolant thermal resistance” is low, both because the fuel and “fuel container” are electronically conductive, and because tungsten, uranium, and hydrogen are chemically compatible up to 3500K, obviating any protective coatings and the thermal contact resistances (phononic band structure mismatches) that they introduce. This avoids “very steep temperature gradients, which in turn yield undesirable high fuel and low coolant temperatures,” and increases Iₛₚ. It is very important that the outlet of the reactor be isothermal, with the same “thermal margin” to the “maximum use temperature” of the tungsten alloy everywhere, in order to minimize thermal stress in the floating tubesheet, and maximize the “mixed mean reactor outlet gas temperature.” In other words, it is important for “radial power shaping” to “allow for significantly more thermal margin… resulting in significantly hotter coolant outlet temperatures,” and in turn, higher Iₛₚ. The hope is that the thermal margins will be no more than 200K and as little as 100K almost uniformly across the outlet, resulting in mixed mean outlet temperatures of 3300K or even 3400K, comparable to those of reactors with advanced tricarbide fuels.
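As a sanity check on those numbers (all of the assumed values below are mine): for ideal expansion of hot hydrogen into vacuum,

\[
I_{sp} \;\approx\; \frac{1}{g_0}\sqrt{\frac{2\gamma}{\gamma-1}\cdot\frac{\bar{R}}{M}\cdot T_c}
\;\approx\; \frac{1}{9.81\,\mathrm{m/s^2}}\sqrt{\frac{2(1.3)}{0.3}\cdot\frac{8.314\,\mathrm{J/(mol\,K)}}{0.002\,\mathrm{kg/mol}}\cdot 3300\,\mathrm{K}}
\;\approx\; 1100\,\mathrm{s},
\]

taking γ ≈ 1.3 for hot H₂. Real nozzle and expansion losses plausibly shave this down to the quoted 1,084 seconds, and the square-root dependence on T꜀ is exactly why every last 100K of thermal margin at the outlet converts into Iₛₚ.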
“At a high level, power shaping is useful for reducing the peak fuel temperature for the same overall core power. In the case of [nuclear thermal propulsion] this would be used to either increase the core power and thus the rocket performance, or the reactor size could be made smaller, improving the [TWR]. Power shaping is also useful for reducing differences in the coolant outlet temperature which allows for a simpler and potentially lighter manifold or orifice… since the outlet coolant temperature of 2700K is only 100K lower than the fuel temperature limit of 2800K, even a small local power excursion at the bottom of the core can exceed thermal limits. Therefore, shifting power to higher in the core can increase the thermal margin.”
“High-temperature, long-lived, fast-spectrum, fully-enriched reactors are of interest for power generation in space. The small size of these systems generally necessitates using an external control system which is based on leakage, and/or absorptions of neutrons… Reactor designers have used power flattening, i.e., uniform power generation per unit volume of core, in order to decrease the ratios of peak-to-average core temperature and fuel burnup.”
This is much like solid-liquid “interface location and morphology control” in the growth of bulk single crystals from the melt, where the “interface shape is determined by the melting-point isotherm which… is set by heat transfer,” and “the heater’s thermal geometry must be specifically tailored… to achieve the desired planar isotherms.” The “thermal gradients caused by surface cooling of a material with internal heat generation” (in particular those “temperature gradients in the radial direction that are generated by heat losses”) may be flattened by strategically placed thermal insulation and neutron reflectors that modulate the losses of heat and neutrons “to the surrounding environment,” as well as a “modest departure” from the “simple geometry” of the fuel container. Most importantly, the M-1 is designed for “long operating life with multiple restarts and temperature cycling.” Fast reactors lend themselves to this, as the neutron economy is largely unaffected by the gradual transformation of the binary eutectic fuel into a witches’ brew of innumerable actinides and fission products.
“Although… binary eutectics have not yet been considered as liquid metal fissionable mixtures for nuclear power plants, they are… characterized by a very high actinide concentration, which optimizes the neutron economy and thus the transmutation capability of the reactor. Their melting point is 800°C, which qualifies them for operation. The boiling points are well above 2000°C, so that the operating temperature can also be increased… The high thermal conductivity makes continuous pumping of the fission fluid obsolete. Overall, this leads to an increase in power density and thus also to higher power plant efficiency… The concentration of fissile materials must be significantly increased for fast fission reactors… fission products could initially be completely dissolved in the alloy due to the low mass turnover of nuclear fission. However, the outstanding neutron economy… allows such long operating times without reprocessing of the fissionable mixture that… the concentration can increase to such an extent that agglomeration effects occur. In addition, the actinide nuclides also change due to sterile neutron capture and beta decay; in addition to new uranium and plutonium, protactinium, neptunium and americium are produced, for example. This results in mixtures that differ significantly in quality and quantity from binary eutectics. The mixtures with more than two components and in particular increased fission product concentration may deviate in the parameter range from the values for eutectics, but as long as the solidus temperature and the total viscosity are low enough for pumping, this does not affect the operability.”
The fuel container is “topologically similar to a once-through steam generator.” “The architecture of microtube heat exchangers is similar to conventional shell-and-tube exchangers but with very small tube diameters on the order of 1mm. Baffles are not typically needed in microtube heat exchangers because the small tube sizes and spacing can provide high enough heat transfer without creating crossflow. Microtube heat exchangers can accommodate a much higher number of tubes than conventional shell-and-tube heat exchangers.” The idea is that the microtubes have a ratio of surface area to volume comparable to that of a bed of “grain-sized spherical particles” with a “total outer diameter of 2mm,” but the fuel container does not partition the fuel axially, radially, or tangentially. The liquid fuel is a continuous phase, directly accessible through penetrations in the “fuel container,” with the advantages that three-dimensional mass flow occurs all the way up to the scale of the entire active region so that power shaping is maximized and mixing of the fuel “as contrasted to only localized irradiation of a solid fuel provides improved fuel utilization,” outgassing removes fission product gases more or less for free, the flow of cover gas into and out of the reactor can pressure-balance the fuel and propellant, the fuel can be foamed by gas bubble injection as it cools and thickens during engine shutdown so that “melt-freeze cycling… does not overstress the fuel tank,” and refueling in space (should the fuel container outlive the fuel) is conceivable because it “does not require dismantling or reconfiguring the core.” These features “eliminate the traditional concern of fuel cladding integrity due to irradiation and buildup of fission gas pressure, and thus… allow a very high burnup,” and may accommodate reductions in the thickness of the microtube walls, or even the rhenium content of the alloy, both of which would decrease the thermal resistance and increase Iₛₚ. The inner and outer reflectors of the “dual rotating windowed reflector” are both “capable of independently controlling the reactor through its full range from startup to shutdown throughout its life,” and the “manipulating of reflectors keeps the reactor compact and reduces the amount of core penetrations.” “Thaw is initiated within the reactor core by a programmed addition of reactivity through movement of the control reflectors. Reactivity additions are initially small so that core temperature increases are gradual to avoid severe temperature gradients and the possibility of overstress. 
Accommodation of volume expansion is provided by engineered voids or gas-filled bubbles emplaced within the reactor vessel.” “The entire system is designed for repeated freeze and thaw cycles, and rethaw/restart on orbit is not any different than the initial thaw and startup… The reactor is self-thawed by nuclear heating.” The fuel is mushy, “soft and wettable in character” over a large temperature range, “and can be extruded through and around intermittent solid barriers… taking advantage of the extrudability characteristic in avoiding the possibility of structural damage.” In the hot, flowing hydrogen environment, “tungsten has the lowest vaporization rate of all materials.” “The high operating temperatures are well above the brittle-ductile region of refractory metals,” and damage “caused by the high neutron flux as well as thermal stress will be automatically healed at those high temperatures (annealing in metals),” as “the ability of displaced atoms to return to a vacancy site or other relatively innocuous location is enhanced.” The hope is that, as the reactor is broken in and aged, “thermal stresses induced by the thermal gradients will beneficially relax with time due to creep” (this might actually be done on the ground, in a non-nuclear “high-fidelity thermal simulator”), and enough decay heat will build up that the engine “does not fully cool between major burns” and the fuel “can easily be maintained in a molten state.” The high Iₛₚ, high TWR, and long service life of the M-1 allow the General Dynamics Space Tug (with its lightweight, “zero boil-off” stainless steel balloon tank) to complete “advanced space missions” throughout the Solar System, and then boost back to propellant depots to be reloaded with propellant and reused. This reduces the rate at which these workhorse orbital maneuvering vehicles are expended and replaced at any given level of Space Transportation System activity. The M-1 is historically significant in and of itself, but even more significant as the design base for later space, marine, and terrestrial systems that couple the ultra-high-temperature reactor to a Rankine cycle power conversion system with a magnetohydrodynamic generator in which the electrical conductivity of the “100% enriched ⁸⁷Rb” working fluid is enhanced by neutron activation and “non-equilibrium ionization by ⁸⁸Rb decay products,” without “the need for an ionizing seed” or electron-beam irradiation (note that this is only possible if the reactor is internally-cooled, and the working fluid flows through the active region). This includes the Westinghouse-Babcock & Wilcox R-1 magnetoplasmadynamic thruster, and the fleet of large, commercial, ‘cosmoderivative’ B-4000 power plants (with nameplate capacities of about 4GWₜ and 3GWₑ) based thereon. The B-4000 is a ²³⁸U converter with an exceptionally hard neutron spectrum, ⁸⁷Rb Rankine MHD topping cycle, and one of two bottoming cycles: a thermomechanical Brayton cycle inert gas turbine, or a thermochemical sulfur-iodine water-splitting cycle with SiC bayonet reactors that eliminate “the need for high-temperature connections, seals and gaskets and the associated leaks and corrosion problems” (the S-I cycle is the foundation of the hydrogen and ammonia economies). 
The high-pressure turbine blades are TZM, which “is almost insensitive to [the] helium environment up to 1000°C,” and “exhibits a completely different creep-resistance behaviour as compared to the nickel-base alloys, mainly because of its high melting point,” such that “almost flat creep curves make a 100,000-hour lifetime appear possible with an uncooled blade,” although the working fluid is probably an He-Xe mixture, with a higher density that significantly decreases the size and capital cost of the turbomachinery, but without significantly increasing the size and capital cost of the heat exchangers. The topping cycle could also reject heat to a calciner at 1000°C (in which case the concentrated CO₂ waste stream is reacted with “calcium-rich saline evaporitic brine” from salt wells to produce “saleable construction aggregate… gypsum, halite, sylvite, mirabilite, and hydrochloric acid”), or even a glass production plant at 1300°C (a temperature on the ragged edge of what is possible in molten salt reactors). The reactor has an outlet temperature of at least 2500°C, and so the combined cycle has preposterously high thermal efficiency even if it rejects heat to air.
While the operating temperatures of the reactor are ultimately limited by the structural materials holding the reactor together, and not by the temperature of the fuel itself, the upper limits can now be increased considerably above the 400–500°C range of most fission reactors, almost up to the point of vaporization of the liquid uranium fuel. This occurs at atmospheric pressure in the approximate range of 3500°C, but even higher at super-atmospheric pressures. Thus the efficiency of power generation can be increased from 30–35% to 50–65% in the 1000–2500°C range, and up to 75% at temperatures approaching 3500°C.
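Those efficiencies are consistent with the Carnot limit; a rough check, with an assumed ~300K heat sink:

\[
\eta_{\mathrm{Carnot}} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
\;\approx\; 1 - \frac{300\,\mathrm{K}}{1273\,\mathrm{K}} \approx 76\% \;\;\text{at } 1000\,^\circ\mathrm{C},
\qquad
1 - \frac{300\,\mathrm{K}}{3773\,\mathrm{K}} \approx 92\% \;\;\text{at } 3500\,^\circ\mathrm{C},
\]

so the 50–65% and 75% figures amount to capturing roughly two-thirds to four-fifths of the Carnot limit, which is ambitious but not thermodynamically outlandish for a multi-stage combined cycle.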
In those plants that are water-cooled, the mass of cooling water consumed per unit of energy produced is very low (and the waste heat is almost always utilized by “an industrial process or district heating system,” rather than discharged directly into the environment). The generation of process heat at ultra-high temperatures also results in very low consumption of uranium and production of “fission product wastes” per unit of generated electrical (or chemical) energy. These outlet temperatures and thermal efficiencies are only possible with molten metal fuels, which might be said to have a generally higher degree of physico-chemical compatibility with metallic structures than molten salts. It is a myth that molten metal fuels in general have high vapor pressures, and preclude operation at both high temperatures and low pressures; uranium boils “at atmospheric pressures in the approximate range of 3500°C, but even higher at super-atmospheric pressures,” for example 5000K at 10–20atm. It’s interesting how propulsion systems tend to be very good design bases for stationary power plants: because they are subject to severe weight and volume constraints, the stationary power plants derived from them generally have small footprints and low capital costs (which largely determine the levelized costs of electricity). This can be seen in the compelling business case for aeroderivative gas turbines in peaking power plants, and in how the real-world Aircraft Reactor Experiment (“the first reactor to use circulating molten salt fuel”) paved the way for “compact” molten salt reactors with high outlet temperatures and thermal power densities, and commercial prospects good enough to merit serious consideration of a nuclear renaissance. It’s hard to make a direct comparison because the duty cycles couldn’t be more different, but it’s not outrageous to think that a nuclear thermal rocket engine, designed for a power density rivaling that of chemical rocket engines and years of unattended operation in deep space where it cannot be serviced, could be the starting point for a uniquely small and low-maintenance terrestrial power plant. What I found surprising is that, although nuclear thermal rocket engines are almost certainly the nuclear propulsion systems most sensitive to weight, volume, and complexity, fast reactors seem to have the upper hand over thermal reactors in this application, and the implications for the design of stationary power plants run counter to conventional wisdom. “The advantage of thermal spectrum reactors is that criticality can be achieved with less fissile material in the core. In turn, the advantages of fast reactors are that no moderator is necessary, thereby allowing more space for fuel…” “Fast spectrum reactors… tend to be much more compact than thermal spectrum systems, although that does not automatically translate to lower mass systems with higher thrust-to-weight.
The inherently higher fissile mass of a fast spectrum system is an important disadvantage.” It seems that you can get quite a lot for the price of a large fissile inventory and “core non-fuel volume,” especially with a metallic “concentrated (undiluted) fuel fluid.” Indeed, the core of LAMPRE (a proto-SLFFR with plutonium-rich alloy fuel, a tantalum vessel, and sodium coolant) was a cylinder with a 6.25" outer diameter and 6.5" height, and the business case for the SLFFR is that the simple geometry of the “fuel container and minimal reactivity control requirements will result in a very simple reactor core system,” operation “at a high power density and temperature… would lead to a compact core system… the high operating temperature would yield a high thermal efficiency,” and the combination “of these characteristics… reduce the capital cost per unit energy produced drastically.” The insensitivity of the neutron economy to fission products and refractory metal structures in the active region permits “an internally cooled LMFR to reduce fuel inventory,” with a stationary (rather than circulating) fuel that “remains in the core except for small amounts which are withdrawn for reprocessing” by relatively simple “co-located pyroprocessing” using “equipment that better approximates a continuous flow process for the purpose of aiding mechanical automation.” “Economic considerations limiting the inactive fissionable material inventory, and requiring maximum utilization of the active inventory result in very small cores, high heat fluxes, and the impossibility of a circulation system. For these reasons, the passage of the coolant through the core rather than the use of an external exchanger seems mandatory for the fast power reactors.” If, for example, someone throws a toaster into a bathtub in No Frills, they may very well be using a service of the Army Corps of Engineers, and a spin-off of NASA interplanetary space propulsion research. Large, commercial nuclear power plants are no less “Army civil works” than locks and dams, as the ACE is uniquely well-positioned to both deliver mains electricity to “homes and businesses through the electrical grid,” and help carry out the nuclear deterrence and non-proliferation missions, as well as promote “power system resilience” and the conservation of energy resources, all of which contribute to Comprehensive National Power. In the alternate-history EIA Annual Energy Review Sankey diagram, you will see that nuclear energy and fish-friendly, run-of-the-river hydropower from the ACE meet most of the demand for energy in the United States, and the balance is provided by hot dry rock geothermal energy (often in the form of “combined heat, power and metals production”).
#5: 1927 Ford Model A × 1953 Ford Mystère. These four-door sedans embody the aversion of two or three environmental disasters (though certainly not all of those wrought by road transport). The futuristic Model A embraces the arc-welded steel semi-monocoque of Joseph Ledwinka and the aerodynamic design of Paul Jaray. In order to minimize the height and frontal area, it has a rear-mounted “transverse engine and transaxle,” without any exhaust pipes or driveshafts (or “conventional frame rails”) “running back from the front of the car” that “would need extra space to fit between the underbody and the road.” The radiator ingests air “through inlets along the rear fender surfaces” at surprisingly high “ram airflow levels,” removing “some of the low energy boundary layer from the quarter panels, delaying body-side airflow separation and reducing base drag,” as well as exhausting “cooling air out the back” to “help efficiency by filling the wake.” The Model A also has “a full-length flat undertray,” and “inset extra glass panels” to “achieve a curved airflow around the A-pillar.” The use of long and continuous seam welds, rather than spot welds, reduces structural weight. “Closed-section structure is advantageous for increasing the strength and reducing the weight of car bodies, but… resistance spot welding that needs access both from outside and inside is not applicable. When resistance spot welding must be applied to closed-section parts, working holes are required, which, in a great number, spoil the structural rigidity of the work. As a solution, arc welding is sometimes opted for because of the workability from one side only.” The narrow, large-diameter nylon-belted radial tires have a small frontal area and very low rolling resistance. The idea is that, because the engine power required to overcome aerodynamic drag and rolling resistance is relatively low to begin with, and tight integration of the radiator with the base of the vehicle alleviates base drag, accommodation of the “additional cooling required for Stirling engines… by enlarging… the existing cooling water radiators” does not cause unacceptable ‘drag compounding.’ As such, the Model A has supremely low emissions of “engine noise, tire/road noise and wind noise.” The Vulcan four-cycle double-acting Stirling engine is ready for prime time, with a variable-angle wobble plate for simple “variable-stroke power control,” “a pressurized ‘crankcase’ with a… rotating shaft seal containing the working fluid and making it possible to avoid the reciprocating rod sealing problem,” self-lubricating powder metallurgy composite “cylinder wall sleeves” for “hot rings” that eliminate “appendix gaps” and the associated losses (the hot extrusion of the sleeves “both consolidates and shapes” so that “near net shape and finish can be achieved” in a single process step, “dramatically reducing the manufacturing costs compared to plasma spray deposition”), and “castable, low-cost, iron-based superalloy” “individual stacked heater heads” with a “relatively short warm-up time.” “A controller system constantly monitors the engine coolant temperature and adjusts the fan pitch in order to provide adequate cooling airflow, which is dependent on the actual ambient temperature of the load. This will reduce the fan parasitic load (power draw) and hence fuel consumption.
Reversing the blades will help clean the radiator fins of any dirt or debris lodged in the fins.” “Forced draft units require slightly less horsepower since the fans are moving a lower volume of air at the inlet than they would at the outlet.” This external-combustion engine can run on inexpensive coal-derived carbon black with “nearly flat torque and efficiency curves” (“the efficiency falls to no less than” 3% “below peak over approximately 70% of the total load range,” and the “advantageous torque characteristics at low engine speeds” permit “transmissions with fewer gear steps than a corresponding diesel engine”) and no muffler. Weirdly enough, this goes back to the World War I naval blockade of Imperial Germany. In order to meet my alternate-historical objectives while also maintaining adequate suspension of disbelief, I need them to still lose World War I, but do it in a very precise way that sets up everything afterwards, and especially World War II, which is so violent, apocalyptic and trippy that only the United States and Nationalist China (Republic of China, 1912 to the present) survive to fill the international power vacuum, after the complete and total annihilation of the Soviet Union, France, the United Kingdom, Imperial Japan, and Nazi Germany. The idea is that, being wracked by severe shortages of raw materials, Imperial Germany made enormous strides in the utilization of natural resources — ranging from the selective sulfidation of nickel oxides in low-grade laterite ores from the Balkans, to non-abrasive inorganic hydraulic fracturing fluids that “penetrate deeply into the formation through complex fracture networks, forming in situ proppants to allow entire induced fractures to contribute to oil and gas production” in the Posidonia Shale — then unconditionally surrendered, transferred this body of “scientific and technical know-how as intellectual reparations” to the Entente Cordiale, and inadvertently laid the foundations of not only commercial geothermal energy, but also the cornucopia of tightly-coupled geothermal, geochemical, and geomechanical systems that would come to be known as ‘geotechnology.’ This had an immense impact on the 20th century thereafter, by opening up innumerable opportunities for the “extraction and production of raw materials” across the world, and considerably reducing the “volume of world trade,” the strategic interdependence of the international community, and the degree to which any political objectives could be met by intervening in the “extraction and production of raw materials.” By 1918, researchers had turned the problem of “carbon deposition and metal dusting” on its head (a Cinderella effect), and mastered the continuous precipitation of carbon black as syngas cools in a fluidized bed of solid catalyst particles. This made a largely closed thermochemical loop of underground coal gasification and above-ground “reverse gasification” possible by regenerating much of the injected water (this might be important in dry coal seams where the supply of makeup water from maintenance of the “gasification chamber below hydrostatic pressure in the surrounding aquifer to ensure that all groundwater flow in the area is directed inward” is inadequate). 
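Stepping back to the Model A for a moment: the claim that the power required to overcome drag and rolling resistance is low to begin with is easy to sanity-check with the standard road-load formula. Every parameter here is an illustrative guess of mine (a blunt, heavy 1920s sedan versus a low, slippery Jaray-style body on narrow radials), not a figure from the text:

```python
RHO = 1.2   # air density, kg/m^3
G = 9.81    # gravitational acceleration, m/s^2

def road_load_kw(v_kmh, cd, area_m2, crr, mass_kg):
    """Power (kW) to overcome aerodynamic drag plus rolling resistance."""
    v = v_kmh / 3.6
    drag = 0.5 * RHO * cd * area_m2 * v ** 3   # aerodynamic drag power, W
    rolling = crr * mass_kg * G * v            # rolling resistance power, W
    return (drag + rolling) / 1e3

print(road_load_kw(100, cd=0.70, area_m2=2.4, crr=0.015, mass_kg=1300))  # ~27 kW
print(road_load_kw(100, cd=0.25, area_m2=1.8, crr=0.008, mass_kg=900))   # ~8 kW
```

At highway speed the slippery car needs less than a third of the power, so even generous extra radiator airflow for the Stirling engine is a small absolute penalty. Anyway, back to Imperial Germany and the carbon black breakthrough.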
This occurred against the background of cutting-edge research at the Kaiser Wilhelm Institute for Coal Research, the Westfälische Berggewerkschaftskasse, Siemens, and the broader “network of research organizations” in “academia, industry and government laboratories.” I still have a lot of work to do, but right now I’m thinking that this involved the advent of “swept-frequency explosive seismic sources” that provide the copious 200Hz power necessary to resolve coal seams that are both deep and thin (echoing the real-world Vines Branch experiment of 1921), high-linearity molecular electronic transducer seismometers (“a closer relative to the old wet cells that used to operate doorbells than to anything now used in the electric-electronic field”) “assembled and processed… with simple robotic devices” and placed downhole below the “weathered-altered layer interface,” high-vacuum hot-cathode “electron beam recorders” that combine two emerging technologies of the fin de siècle (film photography and cathode ray tubes) to “produce very high resolution recordings on a virtually grainless recording medium” (similar to that in Lippmann holography) with the 100dB dynamic range and very high “density contrast in variable-density registration” necessary for “data to be recorded faithfully without using any automatic gain control” (the sheet beam current is modulated by a high-dynamic-range “electrostatic quadra-deflector” and the film transport mechanism has a “closed-loop, phase-locked servo-control system”), analog optical computers for seismic data processing using low-pressure mercury-vapor short-arc lamps (with water-cooled anodes) powerful enough to overcome the losses incurred “by changing the magnification of the Köhler illumination system to reduce the effective source size” and enhance the coherence of highly monochromatic line emissions (this allows deterministic deconvolution of the returns with a reference signal from a geophone near the seismic source, not entirely unlike the inverse of convolution by a noise wheel), “telemetry drill pipe” with high-reliability vacuum tube “repeater amplifiers” based on those in submarine communications cables, high-temperature downhole molecular electronic transducer inclinometers and even azimuth sensors with molten LiI electrolytes, downhole mud motors, rotary steerable systems, new cements and low-nickel steels for service in “thermo-chemically hostile” environments, and the “removal of sulfur, chlorine, nitrogen and mercury compounds from syngas at elevated temperature and pressures.” This cluster of technologies made it possible to exploit the most abundant fossil fuel reserves of all — coal seams too deep or thin to mine — and precipitated the ‘Black Revolution’ or ‘Coal Renaissance’ in the last months of Imperial Germany. This was far too late to influence the outcome of the war, but the seeds of the Black Revolution were soon planted across the world. The limits of underground coal gasification were soon pushed so far that ultra-deep coal seams bearing large quantities of geothermal heat were brought up to temperature with relatively small injections of oxygen. The significantly lower lifecycle costs of solid-fuel Stirling engine vehicles made carbon black the de facto standard motor fuel by the end of the Roaring Twenties. 
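About that 200Hz figure: the usual quarter-wavelength rule of thumb for vertical seismic resolution shows why conventional low-frequency sources cannot see thin seams. The interval velocity below is an assumed, typical value for coal measures, not one from the text:

```python
def tuning_thickness_m(velocity_ms, freq_hz):
    """Thinnest bed resolvable as a distinct reflector, roughly lambda / 4."""
    return velocity_ms / (4.0 * freq_hz)

V = 2400.0  # m/s, assumed interval velocity of the coal measures
for f_hz in (30, 60, 200):
    print(f"{f_hz:>3} Hz -> {tuning_thickness_m(V, f_hz):5.1f} m")
# 30 Hz -> 20.0 m; 60 Hz -> 10.0 m; 200 Hz -> 3.0 m. Only the last
# resolves seams a few meters thick.
```

Hence the swept-frequency explosive sources, and hence, eventually, carbon black cheap enough to take over the fuel market.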
In addition, pump gasoline was blended with inexpensive “BTEX and aromatic amines” from underground coal thermal treatment (a sort of in situ mild gasification used to extract any valuable “phenolic and aromatic compounds” from the coal seam in preparation for UCG), and inexpensive “semi-cellulosic” ethanol from the consolidated bioprocessing of shredded whole Agave plants (with “low water consumption, high productivity on marginal land unsuitable for food crops, and low recalcitrance to conversion”) by a plant-pathogenic Fusarium oxysporum strain selectively bred for “sustained thermotolerance” in a continuous culture apparatus. This accidental three-pronged attack crowds and prices leaded gasoline out of the market for motor fuels, and averts the loss of some 184 million IQ points in the United States alone (the obsolescence of gasoline engines outside of aviation blunts the public health impact of blending toxic and carcinogenic aromatics into pump gasoline). With the coal ash left in place (underground and in situ) by the thermochemical regeneration of the carbon, and the all-important pre-combustion sulfur and mercury removal, the carbon black is, for all practical purposes, pure carbon. In addition, Stirling engine combustors with aggressive exhaust gas recirculation can easily achieve low PM and NOₓ emissions (without expensive platinum-group metal catalysts) because of their steady flow and low pressures. While the heavy-duty Stirling engines in “stationary power, railway locomotives, marine propulsion and the large off-highway vehicles used in mining, forestry and agriculture” certainly could burn an endless variety of torrefied organic wastes in fluidized bed combustors (as the “comminuting and scouring action of the fluidized beds” prevents fouling of the “heater tubes or heat pipe boiling sections” with ash), far more economic value can be created by using these wastes as substrates for the growth of thermophilic filamentous fungi in large-scale, low-pressure airlift bioreactors (to produce single-cell protein, secreted single-cell oil, and cellulosic fodder). As such, the Coal Renaissance is unavoidably carbon-positive in the short run (relative to the baseline Hydrocarbon Society), but by setting the stage for direct carbon and then direct ammonia fuel cells, it is nevertheless carbon-neutral in the long run, and in an ironic twist, winds up accelerating decarbonization. Hence the Mystère, the spiritual successor to the Model A with an aluminum alloy semi-monocoque joined by high-speed friction stir welding (using “self-reacting” contra-rotating pin tools with “scroll-type” shoulders), plus an alkaline direct carbon fuel cell as a drop-in replacement for the Stirling engine, compatible with existing “motor fuel storage and dispensing infrastructure,” but with double the tank-to-wheel efficiency. The sulfur- and ash-free carbon black fuel is “distributed pneumatically,” and dispersed in the electrolyte to form a “paste, slurry, or wetted aggregation of carbon particles” that is pumped into the anode chambers. The trick is to humidify the molten hydroxide salt — sparging it with water vapor, and maintaining a high partial pressure of water vapor in the cover gas — in order to disfavor the carbonation that kills the ionic conductivity at intermediate temperatures. This is not at all unlike the humidification of proton-exchange membranes. 
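The humidification trick is just Le Chatelier’s principle. A minimal sketch, assuming ideal activities and a fixed equilibrium constant at a given temperature:

```latex
\[
  2\,\mathrm{OH^-} + \mathrm{CO_2} \rightleftharpoons \mathrm{CO_3^{2-}} + \mathrm{H_2O},
  \qquad
  K = \frac{a_{\mathrm{CO_3^{2-}}}\,a_{\mathrm{H_2O}}}{a_{\mathrm{OH^-}}^{2}\,p_{\mathrm{CO_2}}}
  \;\;\Longrightarrow\;\;
  a_{\mathrm{CO_3^{2-}}} = K\,\frac{a_{\mathrm{OH^-}}^{2}\,p_{\mathrm{CO_2}}}{a_{\mathrm{H_2O}}}
  \;\propto\; \frac{1}{a_{\mathrm{H_2O}}}
\]
```

Sparging with steam raises the water activity, and the carbonate activity falls in inverse proportion.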
In NaOH electrolytes, carbonate “crystals are very much smaller and can easily be washed away” without plugging the pores of gas diffusion electrodes, and “impurities can not only be filtered out of the liquid electrolyte, but the whole electrolyte can easily be exchanged like the oil in the car.” The AFC lies in a Goldilocks zone, hot enough for the combustion reaction to proceed at reasonable rates without expensive PGM catalysts, but cool (and hydrous) enough that the chemical equilibrium disfavors both the corrosion of conventional steels or aluminum alloys by the molten salt (and the fuel cell can be constructed without expensive “ceramics or exotic alloys”) and the less-exothermic partial oxidation of carbon to CO. “Because of the alkaline chemistry, oxygen reduction reaction kinetics at the cathode are much more facile than in acidic cells.” The service temperature of about 450°C and reasonably good chemical kinetics help to meet the challenging size, weight and power requirements of the automotive application, and in many cases, the silver for the catalyst was obtained by recycling the cylinder wall sleeves of old Stirling engines (a solid lubricant taking on a second life as a catalyst). The power inverter of the Mystère is based on orientation-independent ignitron switches, rather than silicon IGBT switches. The OIIs are descended from the liquid metal plasma valves of the interwar period, which combined “the most desirable properties of classical liquid-metal arc devices with those of classical vacuum-arc devices,” and “spurred the rapid development of a broad range of basic power circuit topologies… including phase control, natural commutation, forced commutation, DC-to-AC inversion, cycloconversion, and many others.” “Regular ignitrons must be maintained precisely vertical, and are sensitive to movement (jolts, vibration, oscillation). Hence, for mobile applications such as locomotives, special construction features were added inside the ignitron, such as a splash screen, mercury retaining baffles, and a cathode-spot limiting/anchor ring.”
“Unlike the conventional mercury converter valve, the LMPV is a single-gap, single-anode device that utilizes the rapid recovery properties of high vacuum to give good voltage hold-off characteristics following current conduction in the vacuum arc mode. Since the ambient vapor density in the LMPV is much lower than in the multi-anode device, voltage hold-off between electrodes is determined by the dielectric properties of vacuum. Because there is then no need to subdivide the interelectrode gap to support the applied high voltage, no interelectrode grids are required and the valve size can be substantially reduced. Additionally, the LMPV has only one anode, rather than the six of a conventional valve… The condenser area can be kept relatively small since the amount of mercury emanating from the cathode is small, this is a result of the limited free surface of mercury in the cathode throat. The cathode is force-fed mercury in such quantities that only an amount of mercury necessary to provide short-term overcurrents is actually exposed to the discharge chamber. At an operating electron-to-atom emission ratio of 50 (much larger than that of a normal mercury arc discharge), the amount of mercury in the cathode throat is so small that it can be held in place by surface tension forces. The LMPV derives its orientation independence from this fact. Mercury ions and neutrals impinging on the anode are reflected to the condenser wall, which is also the vacuum envelope of the valve. Since the condenser temperature is kept below -35°C and since the sticking coefficient at that temperature (even for relatively fast mercury atoms) is close to unity, nearly all mercury particles are rapidly condensed. The ambient vapor pressure can thus be kept at a low level, with consequent improvement in the dielectric integrity of the valve… Appendage ion pumps, which have no moving parts, maintain a low internal pressure in the valve… The voltage drop across the plasma during the conduction phase is 20-30V (discharge voltage) and is almost independent of valve current. This voltage is considerably lower than that for either the conventional mercury or the solid-state valve, resulting in greatly reduced operating losses.” “Arc spots form on the inner and outer periphery of the cathode groove such that the arc power is distributed and maximum cooling results. Furthermore, the molybdenum becomes wetted by the mercury so that the arc spots are anchored at the [Hg-Mo] interface, thereby eliminating droplet ejection and insuring gravity independence… The use of this liquid metal cathode in which the arc spots are anchored and mercury evaporation is controlled eliminates a number of the main problems associated with the conventional use of mercury as a discharge cathode. For example, with a conventional device, such as an ignitron, which employs an extensive mercury pool, the evaporation of cathode material from the surface… greatly exceeds the efflux of mercury from the cathode spot itself. Furthermore, as the cathode spot moves erratically over the mercury surface, mercury droplets are ejected into the interelectrode space where they vaporize and cause a significant addition to the neutral vapor density. 
The resultant pressure combined with the Paschen and vacuum breakdown relationships result in relatively inferior high voltage and recovery properties for these devices in comparison to the LMPV… The cathode structure is constructed largely of molybdenum which resists arc erosion while also being easily wetted by mercury.”
This made it possible to “build an electronic commutator… for traction purposes” well within the weight, volume, acceleration and vibration constraints of electric and Stirling-electric locomotives (and electric multiple units). The advent of the LMPV paved the way for the construction of an HVDC supergrid across the continental United States during the New Deal. This allowed electric power to be transmitted from “remote, isolated” hydroelectric and geothermal power plants to distant “urban load centers” with low losses, “fully decoupled… hydro turbine speed and the frequency of the AC grid” (allowing the use of low-cost, fish-friendly, fixed-geometry, variable-speed runners with blades that are slowed and “remain opened” at part load), and even gave rise to the Yellowstone Caldera Authority, an ambitious project to “completely drain the heat away” from the “Yellowstone magma chamber,” reducing the risk of a catastrophic supereruption and generating biblically large quantities of electric power as a bonus (making extensive use of extended reach drilling with miles of horizontal displacement, and heavy-lift airships with phase-change gasbags originally developed to “eliminate the need for logging roads”). As the “ride harshness that might result from high-pressure tires… gets considerably more challenging with lightweight vehicles” without a large mass to provide “a stable reaction member for the suspension system,” the Mystère has a regenerative active suspension system descended from those of main battle tanks, with “four independent variable-displacement pump/motor combinations… to actuate each individual suspension unit.” The “pump/motors are driven on a common shaft… and can be designed to be very compact,” and when “it is necessary to remove fluid from a strut, the pump displaces overcenter to become a motor,” so that the “pressure in the strut then drives the motor, recovering the energy.” As “most of the energy expended is recovered after the turn or after the imbalance of the vehicle has been restored,” there are no losses associated with “control valve metering,” and there is no “effective rolling resistance” associated with “energy dissipated by the shock absorber,” the “average power requirement and fuel economy would be relatively unchanged with increasing induced vehicle motions.” The air conditioning system of the Mystère is based on a Stirling cycle heat pump optimized for low temperature lifts, using benign single-phase hydrogen rather than a two-phase halocarbon as a working fluid (implementing isothermal processes by simultaneously transferring work and sensible heat, rather than transferring latent heat, as well as taking advantage of the existing hydrogen infrastructure used to periodically recharge Stirling engines with working fluid). It’s important to note that, because land leases (functionally equivalent to land value taxes), fare-free public transport, car-free zones and congestion pricing are ubiquitous in the cities of No Frills (as are public toilets, multilevel streets, and the “prohibition of outdoor advertising”), the Mystère is optimized for the highway driving cycle. 
Although it does have a fiber-cathode Ni-MH battery and 400V electrical system, the battery is only large enough to power the vehicle as the fuel cell warms up, and then provide “mild levels of regenerative braking.” It is true that, on urban driving cycles, regenerative braking and acceleration with round-trip efficiency higher than about 60% make battery-electric vehicles highly insensitive to “vehicle mass,” but on the highway, the heavy battery blows up rolling resistance, and quite possibly frontal area. The decisive advantage of FCEVs over BEVs is their freedom from catastrophic, runaway mass compounding as range is increased. As such, the Mystère isn’t just light — it’s Colin Chapman light. “With a gross vehicle weight equal to the curb weight of today’s subcompacts, power steering and power brakes could… potentially be eliminated as they were in the Ultralite, further cutting costs and improving control at higher speeds while still maintaining ease of maneuverability.” The low weight, traction control, and anti-lock braking system all reduce the emission of tire wear particles. The high embodied energy of the aluminum alloy unibody is alleviated by “mass decompounding… from downsizing the engine and chassis components,” and the high corrosion resistance allows this energy cost to be amortized over a long service life (the electroplating of aluminum onto many of the steel parts in molten bromide salt baths also helps to extend the service life). Furthermore, aluminum smelters built after World War II use a “Hall-Héroult multipolar design, involving multiple inert anodes and wettable cathodes arrayed vertically in a fluoride electrolyte cell,” which eliminates “direct emission of CO₂ and other greenhouse gases… by removing the carbon anode and its manufacturing process” and permits “the use of an insulated, corrosion-free lined steel vessel without the frozen salt ledge… and thus significantly reduces the heat loss and increases the energy efficiency.” These are solid oxide membrane electrolyzer cells, with YSZ electrolytes similar to those of solid oxide fuel cells. All of this is necessary to understand how, in a freak historical accident, decarbonization began in 1973. This is an extremely unnatural and unbelievable violation of the Iron Law of Kakistocracy — my belief that all forms of social stratification, desirable and undesirable alike, are equally likely to be locked in by the overwhelmingly powerful network effects and path dependence that dominate historical progress, such that the emergence of aristocracy, meritocracy, or technocracy is exceedingly unlikely — which can only happen due to the coalescence of an inordinately large number of historical factors. First, the development costs of geothermal energy systems were greatly reduced by spin-offs from underground coal gasification (as the underlying technologies are dual-use), and geothermal energy ultimately became cost-competitive with fossil energy in many market segments. By the end of the Roaring Twenties, the oil and natural gas industries had not only expanded into hot dry rock geothermal energy, but also begun their radical diversification into the big tent geotechnology industry.
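Before going on, here is a minimal sketch of the mass compounding argument: battery mass is a fixed point, because the battery must carry the energy to haul its own weight. The specific energy and consumption figures are assumptions of mine, meant only to show the functional form:

```python
def battery_mass_kg(range_km, m_glider_kg, e_wh_per_kg, c0_wh_km, c1_wh_km_kg):
    """Fixed point of m_b = range * (c0 + c1 * (m_glider + m_b)) / e."""
    denom = e_wh_per_kg - range_km * c1_wh_km_kg
    if denom <= 0:
        return float("inf")  # compounding runs away: no finite battery suffices
    return range_km * (c0_wh_km + c1_wh_km_kg * m_glider_kg) / denom

# Assumed: 60 Wh/kg pack, 60 Wh/km fixed consumption, plus 0.08 Wh/km
# per kg of total vehicle mass.
for r_km in (150, 300, 500, 700):
    print(f"{r_km} km -> {battery_mass_kg(r_km, 800, 60, 60, 0.08):7.0f} kg of battery")
```

The denominator hits zero at a finite range; that is the runaway. A fuel cell vehicle, by contrast, adds range with tank and fuel mass that grows only about linearly. Back to the historical factors.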
By 1970, the conventional wisdom was that the abolition of fossil fuels would simply be a transition to the publicly-funded utilization of alkalinity “at a cost and on a scale to match the withdrawal of fossil fuels,” somewhere in the ballpark of “direct air capture via enhanced weathering” with natural circulation of groundwater between deep, hot, hydraulically-fractured ultramafic rock and “shallow and expansive” ponds (welling alkalinity up and dissolved inorganic carbon down). They therefore pled nolo contendere in the court of public opinion. Second, the replacement of stationary and marine steam turbines with closed-cycle inert gas turbines during the interwar period and the replacement of heat engines with direct carbon fuel cells after World War II greatly increased the efficiency with which fossil fuels were utilized. Third, the nuclear industry took capital costs very seriously from day one, and elected to reduce them by radically increasing the outlet temperatures and thermal power densities of nuclear reactors, permitting factory construction of multi-gigawatt containerized, road- and rail-mobile, plug-and-play electric power units, and making nuclear energy cost-competitive with fossil energy in many market segments. Fourth, the nationalization of land and collection of “unified estate and gift taxes” after 1913 captured economic rents, and slowed the accumulation of “material wealth” by fossil fuel oligarchs. In 1912, the miraculously even Republican Party split delivered the presidency, Congress, and a “supermajority of governorships and state legislatures” to the Democratic Party in “the largest landslide in American history,” and cleared the path for a political program known as the ‘New Freedom,’ perhaps best described as a corporatist, Georgist contract between capital and labor, and the breaking of a political impasse. The deal was that land (including fossil fuels and minerals) and railroads would be nationalized, a “unified estate and gift tax” would be levied, and a postal bank would be established (specifically, a “fully government-owned-and-operated” hybrid bank combining the “general service” of a retail bank with the “explicit government credit support” of a zombie bank) in order to horizontally separate, pillarize and compartmentalize the FIRE industry along class lines, and in exchange, no income, payroll, corporate, capital gains, or other taxes on (nominally) desirable economic activity would ever be levied, nor would any regulations ever be imposed on any bank other than the postal bank (by 1976 the Post Office Savings Bank is “the largest financial institution in the world by assets,” a “key instrument of dirigisme,” and the “primary financial services provider” to “the vast majority of American households,” able to single-handedly maintain the “internal balance of full employment and price stability” through “household channels” of “countercyclical monetary policy transmission” alone).
The Supreme Court later ruled that enhanced geothermal systems were “buildings or improvements” under the Land Act of 1913, and therefore private property “not subject to ground leases.” Fifth, automotive direct carbon fuel cells turned out to be rather easy to convert into direct ammonia fuel cells, allowing the CO₂ emissions of an entire swath of the transportation sector (including long-haul trucking) to be neutralized by programs similar in scope and complexity to recalls of defective parts, and obviating any politically suicidal reduction of the modal share of private motor vehicles. The “ammonia is directly fed to the fuel cell anode where it is catalytically oxidized in the anode reaction,” without any separate “device which dissociates ammonia into nitrogen and hydrogen,” as “direct use of ammonia… even at temperatures as low as 200°C, is made possible due to the very chemically aggressive nature of the melt.” Furthermore, “dissolving anhydrous ammonia in the electrolyte loop… provides the theoretical advantage of fully utilizing the ammonia because the unconsumed ammonia remains in the electrolyte and is not ‘tailed’ like a gaseous fuel,” although the “use of ammonia in solution as a fuel is disfavored by its much reduced solubility in alkaline electrolytes,” which causes “mass transfer difficulties.” The trick is probably to use microchannels in the gas diffusion anode to deliver the anhydrous ammonia fuel very close to the three-phase interfaces — close enough that “locally acidic” conditions enhance mass transfer by shifting the chemical equilibrium in favor of dissolved ammonium, but not so close that gaseous ammonia crosses over into the exhaust (ammonia slip reduces fuel utilization and necessitates exhaust aftertreatment). The GDA will have a hydrophilic layer sandwiched between two amphiphilic layers with bicontinuous phases that “create hydrophobic pores for… gas diffusion… without bubble formation.” The fully-flooded hydrophilic layer will act as a supported liquid membrane and — more importantly — an acidic, wet, and electrochemically-active combination getter and vapor barrier for ammonia. I’m not sure, but the same high water activity and “acidic” hydroxide melt that disfavors the formation of carbonate anions in the direct carbon AFC (2OH⁻ + CO₂ ⇌ CO₃²⁻ + H₂O) may also favor the formation of ammonium cations (OH⁻ + NH₄⁺ ⇌ NH₃ + H₂O) in the direct ammonia AFC, although I will be the first to admit that I do not grasp the subtleties of acidity versus oxoacidity, in what senses this molten salt really is acidic, and how this would really impact things like ammonia mass transfer at the anode and oxygen reduction at the cathode, all told. It’s endlessly fascinating to me. Anyway, these programs also involved the retrofitment of devices for “the passive collection of brake wear particles” using partial or full “enclosure of disc brake mechanisms,” as well as “non-marking,” carbon black-free, sulfur-free, zinc-free, electron-beam-crosslinked natural rubber tires compatible with high-albedo cool pavements. Sixth, the “non-fossil fuel energy,” “insurance and financial services,” “agriculture, forestry and fishing,” and “recreation and tourism” industries lobbied the federal government to abolish the fossil fuel industry, reducing competition in the market for energy and precluding any future economic headwinds from substantial global warming and ocean acidification. 
This involved the recruitment and funding of a preponderance of “foundations, think tanks, politically-connected law firms, consultancies, and lobbying organizations” (masters of the dark arts in the vein of the real-world Lee Atwater and Frank Luntz) in order to conduct a two-pronged public relations campaign that would parlay the eventual detection of the global warming signal into a fever-pitched political crisis. The first prong targeted homeowners, emphasizing “higher insurance premiums, decreased property values in high-risk areas, and increased costs for repairs and maintenance.” The second prong targeted white people, stoking fears of a “global northward shift of agricultural climate zones,” mass immigration from the Global South, and an apocalyptic race war (not unlike Helter Skelter). It is politically suicidal to openly advocate for decarbonization, because it raises fears of excruciating withdrawal from the opium of modern conveniences, and the loss aversion, psychological reactance, and psychic numbing make it nearly impossible to get any traction. Instead, one must advocate for decarbonization indirectly, as a mere side effect of another political program — in this case, a program to assuage deep-seated “public and political anxiety about… social and demographic changes.” Just as “emotional wedge issues” can get ordinary people to vote against their own best interests, they can get them to actually vote in their own best interests. This is appealing to white people as it allows them to, on a conscious level, performatively sacrifice themselves for their children by nominally supporting decarbonization, and on an unconscious level, act upon the tribalistic and xenophobic urges that are repressed and taboo in a nominally pluralistic society, having been granted permission to do so by their symbolic parental authority (the tastemaking and trendsetting power elite that shapes respectability politics). These people were exploited, but that’s what they get for failing to meet or exceed the baseline level of scientific literacy that is a prerequisite for the actual existence of a republic, in which ordinary people meaningfully participate in the political process. The global warming signal was indeed detected in 1973 (the concentration of CO₂ in the atmosphere was already 350ppm, as extremely fast human development in East Asia overwhelmed reductions in the carbon intensity of economic activities), and the trap was sprung. The fossil fuel industry — such as it existed within the milieu of the geotechnology industry — was already a wounded animal when the nuclear and hydroelectric industries sicced their army of political mercenaries on it. These mercenaries used expert “word and phrase choices” to frame global warming and ocean acidification as the signature issues of responsible, upstanding white parents and homeowners, and mold public opinion like Silly Putty. With a highly visible and enfranchised community whipped up into a frenzy, the political climate was ripe for the Supreme Court to rule that greenhouse gases were air pollutants according to the Clean Air Act’s “capacious definition,” and that the EPA therefore had the “statutory authority” and legal obligation to “regulate greenhouse gas emissions.” The political window briefly opened, decarbonization was expeditiously threaded through, and as passions cooled and ordinary people were distracted by other shiny objects, this chapter in American political history was largely forgotten. 
By 1976, no one thinks twice about filling their car’s fuel tank with anhydrous ammonia. I will have you know that anhydrous ammonia has similar thermophysical properties to the LPG that powers tens of millions of vehicles around the world right now, has a higher density of hydrogen than liquid hydrogen, and (considered as a form of hydrogen storage) easily exceeds the Department of Energy’s Ultimate technical targets for gravimetric and volumetric density without any graphite-epoxy overwraps, superinsulation, or exotic metal-organic frameworks. With appropriate “technical and regulatory measures” in place, “the use of ammonia as a transport fuel wouldn’t cause more risks than currently used fuels.”
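The hydrogen-density arithmetic checks out, for what it is worth. The densities are handbook values, and the DOE Ultimate system targets quoted in the comments are the real-world ones:

```python
M_NH3, M_H = 17.031, 1.008                   # molar masses, g/mol
wt_frac_H = 3 * M_H / M_NH3                  # hydrogen mass fraction of NH3
rho_nh3 = 682.0                              # kg/m^3, saturated liquid NH3 at -33 C
rho_lh2 = 70.8                               # kg/m^3, liquid hydrogen at 20 K

print(f"H mass fraction:    {wt_frac_H:.1%}")              # ~17.8% vs 6.5 wt% target
print(f"H per liter of NH3: {rho_nh3 * wt_frac_H:.0f} g")  # ~121 g/L vs 50 g/L target
print(f"H per liter of LH2: {rho_lh2:.0f} g")              # ~71 g/L
# Caveat: the DOE targets are system-level (tank, insulation and plumbing
# included), which flatters ammonia a little, but the margin is wide.
```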
#6: General Dynamics Space Freighter. It’s big. Check back later for more detail.
#7: The Great Supercomputers. Although the Zuse V3 of 1937 was not a supercomputer in the normal sense, it created conditions favorable to their development, and became the baseline with respect to which they were defined. The V3 was shockingly modern, with “a binary memory unit (capable of storing 64 floating-point numbers), a binary floating-point processor, a control unit, and I/O devices… It is indeed amazing how [Konrad] Zuse was able to find the adequate architecture right from the beginning.” More importantly, the V3 had an MTBF orders of magnitude higher than any other contemporary computer, because it assiduously avoided vacuum tubes and hand assembly. The ferractor amplifiers were fabricated using “machine-winding” of permalloy tape around stainless steel bobbins, and the rod memory cells were fabricated by a similar process. The hand-built Zuse V1 mercury-wetted reed relay computer solved the wire routing problem (accounting for the “third-dimensional build-up of lead density”) and recorded a “sequential program of machine control instructions” on a punched tape, according to which an automatic wire wrap machine would attach “interconnecting wiring with solderless wrapped connections” to ferractors and rod memory cells mounted on chassis, using “movable carriages containing wrapping tool assemblies and dressing fingers that are positioned on modular points to form a desired wire pattern.” The ferractors were highly reliable, being “non-mechanical” relays with “no contacts, moving parts, filaments, or other features which account for most of the failures associated with other types of amplifiers (except for those using transistors),” and possibly “potted in hermetic seal headers” for “complete protection against shock, vibration and humidity.” In addition, the wire-wrapped connections had a reliability one order of magnitude higher than that of machine-soldered connections. The V3 was not only more reliable than contemporary vacuum-tube computers, but also “more flexible,” being “designed to execute a long and modifiable sequence of instructions contained on a punched tape” (rather than “plugboards and switches”) and with “a practical instruction set for the typical engineering applications of the 1940s” that included a conditional branch. The rod memory was large enough to store some programs. Overall, the V3 was a user-friendly mathematical appliance for scientists and engineers, and was conducive to the development of the high-level procedural programming language Plankalkül. The proliferation of V3 clones and variants across the world in the immediate aftermath of World War II brought affordable, reliable computers (with reasonable size, weight, power and cooling requirements) within reach of many workers in government, industry, and academia. The success of the V3 convinced most of the computer architecture community that computers would be “all-magnetic” for the foreseeable future, and set the stage for three developments that would ultimately converge in the first commercial supercomputer, the Sperry 1103 of 1954.
First, it was realized that the highly regular and uniform data-level parallelism of “many standard problems of computational physics and engineering” could be exploited without the “problem of writing code for a parallel processor,” providing a shortcut to “outstanding performance for the class of problems which are expressible in terms of matrix algebra.” “The problem of writing code for a parallel processor does not exist for a MATRISC processor, as the parallelism is embedded within the matrix operators… The writing of software for the MATRISC architecture follows the von Neumann coding paradigm. The code is sequential. However, each matrix operation… is itself a parallel operation, and so the inherent parallelism of the MATRISC concept is hidden from the applications programmer. Software is written not to match a problem to a particular architecture, but simply to express the problem in terms of linear algebra.” This calls to mind the real-world Thinking Machines CM-2, and how “there are a number of software primitives that map to the Connection Machine’s architecture in a very natural way so that they are very efficient… they correspond closely with familiar operations of vector calculus and differential geometry. This is what makes the Connection Machine naturally efficient for many standard problems of computational physics and engineering… The approach of the Connection Machine programmer… is to configure the machine like the domain of the problem that is being solved, i.e., like a three-dimensional grid… Nature is massively parallel. Molecules in a fluid and stars in a galaxy do not update their positions sequentially — they do so in parallel. A container of water can be thought of as a Navier-Stokes computer with ~10²³ computational elements…” Second, photolithography or rather ‘photoetching’ was invented at the National Bureau of Standards in 1949, as a means to simplify the manufacturing of proximity fuzes (as well as further miniaturize them). Third, the sputter deposition of defect-free permalloy thin films on glass substrates made not only thin-film memory possible, but also thin-film magnetic amplifiers. These active devices are similar to thin-film transistors in that they lend themselves to true “monolithic 3D stacking.” While the “lower mobility of TFTs compared to conventional CMOS” results in slower switching speeds, monolithic “3D circuit fabrication with conventional CMOS technology is not possible” because the transistors “must reside on the silicon substrate layer where the necessary p and n channels have been created,” and with “no viable means to fabricate transistors above the first layer of CMOS circuitry on the substrate,” the three-dimensional circuits can only be fabricated “via stacking of die.” The monolithic 3D stacking of thin-film magnetic amplifiers and memory cells “decreased total wiring length, and thus reduced interconnect delay times,” reduced “power dissipation through the minimization of parasitic impedances on the logic lines,” and brought “the circuits physically closer together which enables them to achieve higher throughput.” The classified, limited-production Sperry ATLAS of 1953 was the first unqualified supercomputer, with a general-purpose, three-dimensional decoupled systolic array processor and two-phase (flammable) hydrocarbon immersion cooling. 
ATLAS was commissioned by the Office of Naval Intelligence to speed up O(n³) operations on n × n matrices (in particular Gaussian elimination in certain “matrix problems arising both in cryptanalysis and cryptography”) as well as use “an r × r array of compute resources” to perform n × n matrix operations “where n > r.” The urge to “solve many regular problems with varying sizes on a constant-sized tiled array” led naturally to stream algorithms, decoupled systolic arrays that “decouple memory accesses from computation and move the memory accesses off the critical path,” and the repeated application of block-partitioning algorithms to “partition a large problem into systolic phases, each of which can be solved without simulation given the resources at hand.” “The network is register-mapped… and it is a programmed routing network permitting any globally-orchestrated communication pattern on the network topology. The latter is important for stream algorithms that change patterns between the phases of the computation.” The processing elements were “simple, ALU-like structures” with “hardware support for some cryptographic operations” as well as basic control flow (using predication). The “spatial programming model” in which “each PE is statically configured” with a single instruction “instead of streaming in a sequence of instructions from an instruction cache” removed any limitation on code size, and ATLAS therefore had a “no instruction set architecture” in which the compiler “directly maps the program to the datapath” and “controls the processor operation and the routing of data” “at the level of multiplexer select signals” by generating “unencoded,” application-specific control words for “each ALU and its associated wiring.” The 1103 is the double-precision floating-point variant of the integer ATLAS. With the power to solve “problems with varying sizes” and “any globally-orchestrated communication pattern” on a “constant-sized tiled array,” the 1103 was the first truly “general-purpose systolic array” available to scientists and engineers, achieving high numerical performance on a wide variety of workloads, and with low programming effort. Of course, these supercomputers were unsurpassed in the “class of O(n³) problems including matrix-matrix multiplication, QR decomposition, and the symmetric eigenvalue problem” which are “natural candidates for implementation in three-dimensional systolic arrays.” They also helped to pave the way for triggered instructions, by elucidating the trade-off between performance and programmability in the design space between processors “using systolic or synchronous networking schemes that cause neighboring processing elements to move exactly in lockstep” and wavefront arrays with “more complicated, asynchronous intracellular communications.” The thin-film magnetic amplifier is still very much alive in 1976, but only in niche applications, such as the host interface logic embedded in PRAM secondary storage devices (like the SSD of the Xerox Paperless Briefcase). The Control Data Star of 1966 is the spiritual successor to the 1103, implemented in CMOS and cooled to about 90K by immersion in liquid nitrogen. 
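The “n > r” trick is plain block partitioning. A toy version, with an r × r tile multiply standing in for one systolic phase:

```python
import numpy as np

def array_matmul(a_tile, b_tile):
    """One 'systolic phase': multiply two r x r tiles (simulated with numpy)."""
    return a_tile @ b_tile

def blocked_matmul(A, B, r):
    """n x n matrix multiply performed entirely as r x r tile operations."""
    n = A.shape[0]
    assert n % r == 0, "pad to a multiple of the tile size in practice"
    C = np.zeros((n, n))
    for i in range(0, n, r):
        for j in range(0, n, r):
            for k in range(0, n, r):
                C[i:i+r, j:j+r] += array_matmul(A[i:i+r, k:k+r], B[k:k+r, j:j+r])
    return C

A, B = np.random.rand(8, 8), np.random.rand(8, 8)
assert np.allclose(blocked_matmul(A, B, r=2), A @ B)
```

Anyway, back to the Control Data Star and its cryogenic packaging.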
The bare, back-thinned silicon chips are integrated into a “brick” using solderless face-to-back Cu-Cu thermocompression bonding and “the 3D technologies of wafer feedthroughs and microbridges… to propagate logic signals throughout the entire structure,” eliminating “the chip-level and board-level packaging and the intra-board wiring required by conventional levels of circuit integration” and achieving a “much higher level of integration at the lowest packaging level.” This “elimination of the chip-level and board-level packaging” eliminates CTE mismatches between dissimilar materials and, in turn, the thermal stresses caused by temperature cycling. What’s interesting is that, at this low temperature, the leakage of electric charge from the DRAM capacitors is negligible, and the memory becomes “quasi-static,” with both the high areal bit density of DRAM, and the “refresh-free operation” of SRAM. This “eliminates conflicts between refresh and read/write” and the power consumption associated with refresh. The memory has high bandwidth and low latency, and the “heat leakage into the cryostat” along “thermal conduction paths from the room-temperature portion to the cold section” is low, because the “cryo-DRAM” is not “physically separated” from the systolic array by any “thermally insulating interface.” The IBM System/2π of 1971 is the first superconducting supercomputer, with liquid helium immersion cooling. The “true monolithic 3D” integration is a throwback to the 1103, because “contrary to CMOS technology where the transistor layer is implemented on a substrate, Josephson junctions can be fabricated at any layer,” providing “the opportunity for the utilization of 3D architectures.” The System/2π is not based on Josephson junctions, however, but rather their charge-flux duals known as “quantum phase-slip junctions,” making it possible to design relatively simple logic circuits with zero static power dissipation. The charge-based, voltage-biased digital logic can also “use optical fibers as a medium in conjunction with fast optical modulators that can be efficiently driven by electrical signals at low temperatures.” The memory is “made entirely of superconductors,” and read out non-destructively. The thermal power density and coolant volume fraction are so low that the volumetric density of circuit integration is limited almost entirely by the manufacturing process. With clock speeds in excess of 700GHz and circuit integration far beyond even wafer scale, there are serious questions about whether the System/2π is the end of history for classical supercomputing. The power consumption is so low that the monolithic 3DIC receives power over fiber (in addition to the usual data). The optical fibers have a much lower thermal conductivity than the metallic wires that would otherwise be fed through the cryostat, stemming the heat leak and allowing the refrigerator to be downsized. The immense power of the System/2π spills out across the entire computer industry, as it can take a high-level “abstract structural description” of a computer, thoroughly explore its “software-hardware optimization space,” and reliably converge upon a “globally optimal design” by alternating between “hybrid hardware/software synthesis” and “true hardware/software co-simulation” including the transistor-level simulation of entire VLSI circuits.
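One more note on the Star’s cryo-DRAM: the “quasi-static” claim is an Arrhenius argument, with retention time scaling roughly as exp(Ea/kT). The activation energy below is my assumption (a fraction of an eV is typical of junction leakage), not a figure from the text:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def retention_scaling(t_cold_k, t_hot_k=300.0, ea_ev=0.6):
    """Multiplier on DRAM retention time when cooled from t_hot_k to t_cold_k."""
    return math.exp(ea_ev / K_B * (1.0 / t_cold_k - 1.0 / t_hot_k))

# Cooling from 300 K to ~90 K stretches retention by a factor of ~3e23:
# milliseconds become geological time, i.e. 'refresh-free operation.'
print(f"{retention_scaling(90.0):.1e}")
```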
#8: Bonneville Dam. It’s fish-friendly. I’ll expound upon this later.
#9: Lockheed Argus. This is a constellation of at least 32 nuclear-powered space-based radars in LEO. These “multi-purpose, highly software-defined satellite platforms” are similar in size to modular space stations, similar in shape to deep space probes with nuclear-electric propulsion systems, and can “provide worldwide coverage for aircraft detection and tracking,” transmit launch cues and midcourse updates to missiles, illuminate targets with pencil beams, “both detect and track stealth aircraft… and guide missiles close enough for their own internal guidance systems to take over,” provide highly jamming-resistant “communications and PNT services,” sound and heat the top side of the ionosphere, “obtain oceanographic and meteorological data,” “probe the subsurface,” gather ELINT, beam microwave power, and “wage electronic warfare.” They can even cue missile launches aboard submerged submarines “traveling near or at patrol speeds without slowing down much if at all,” using cavity-dumped HgBr dissociation lasers that emit water-penetrating blue-green light. To be of any real use, Argus required a raft of new technologies.
On-orbit assembly. The space environment is both a blessing and a curse for radars. On the one hand, the very long ranges require very high gain, and as such, very large apertures. “Spaceborne surveillance [is difficult] due to the very large distance (on the order of thousands of kilometers) between the radar antenna and the targets to be detected. The first problem is that of the power required to be transmitted… because the product of the square of the antenna effective area and power transmitted is proportional to the fourth power of range. Therefore the need arises for a high-power transmitter and for an antenna with a very large effective area, or in other words, having high gain. The second problem is that of the very high [angular resolution] required to achieve accurate location at long range, comparable to that achievable with ground-based systems operating at much shorter ranges. This last requirement too needs an antenna with a very narrow main lobe, and therefore very high gain. The dimensions of antennae with such performance characteristics are particularly large (ranging from a few hundred to a few thousand times the radar operating wavelength).” On the other hand, whereas the sizes of fixed, ground-based radars are limited by their assembly “on Earth,” in a “gravity environment,” there is no such limitation on the sizes of space-based radars assembled “in the microgravity environment of orbit.” “In the history of spaceflight, almost all spacecraft have been manufactured and assembled on the ground, then integrated into a launch vehicle for delivery into orbit. This approach imposes significant limitations on the size, volume, and design of payloads that can be accommodated within the fairing of a single launch vehicle… For reconnaissance missions… orbital assembly could provide the ability to assemble larger apertures than feasible on fully assembled satellites to achieve greater spatial resolution… Some systems or their components simply cannot be built on Earth… ultra-thin mirrors, gossamer structures, reflectors, trusses, and panels simply cannot be made in a gravity environment… In terms of the imaging aspects of ISR, MW, and BDA, in many cases the spatial resolution and signal strength requirements are significantly more demanding than for civilian Earth observations… For collection of radio frequency signals, extremely large antennas, with linear dimensions on the order of hundreds to thousands of meters, would provide unprecedented levels of resolution.” With “linear dimensions” of at least tens of meters and the Westinghouse-Babcock & Wilcox SNAP-2 space power system, the Bendix AN/DPQ-9 search and Sylvania AN/DPQ-7 track radars have the exceptionally high power-aperture (and power-aperture-aperture) products necessary for microwave power beaming, electronic warfare, the transmission of midcourse updates through the plasma sheaths surrounding endoatmospheric anti-aircraft boost-glide missiles, and — most importantly — the detection and tracking of low-observable aircraft and cruise missiles. Furthermore, Argus satellites are in LEO, at relatively low orbital altitudes, using atmosphere-breathing electric propulsion systems to arrest orbital decay, and allowing each “persistent platform” to operate for at least “15–20 years,” and perhaps indefinitely with on-orbit servicing.
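To put numbers on the fourth-power-of-range problem: every parameter below is an illustrative assumption of mine, not a figure from the text, but the A² scaling is the whole story.

```python
import math

K = 1.380649e-23   # Boltzmann constant, J/K

def snr_db(pt_w, area_m2, wavelength_m, rcs_m2, range_m, t_sys_k, bw_hz):
    """Single-pulse SNR for a monostatic radar sharing one aperture."""
    pr = pt_w * area_m2**2 * rcs_m2 / (4 * math.pi * wavelength_m**2 * range_m**4)
    return 10 * math.log10(pr / (K * t_sys_k * bw_hz))

# UHF (0.7 m wavelength), 1 m^2 target, 2000 km slant range, 500 K system
# noise temperature, 1 kHz post-processing bandwidth, 1 MW average power.
print(f"30 m x 30 m aperture: {snr_db(1e6, 900.0, 0.7, 1.0, 2e6, 500.0, 1e3):5.1f} dB")
print(f" 3 m x  3 m aperture: {snr_db(1e6,   9.0, 0.7, 1.0, 2e6, 500.0, 1e3):5.1f} dB")
```

The big aperture detects comfortably; the launcher-fairing-sized one misses by 40dB. Hence on-orbit assembly and nuclear power: the satellites need both.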
They have powerful EOTS and IRST systems (that correct for atmospheric aberrations using adaptive electron optics with no moving parts, as I will detail later in the series) which can be cued by and slaved to the track radars. I cannot stress enough that the ubiquity of nuclear-powered space-based radars in LEO completely changes the face of low- and medium-intensity aerial warfare. “An aircraft passing through the enemy air defense environment may be visible from several different angles as it approaches the target, attacks, and departs. The RCS of an aircraft will be different depending on what aspect or angle the enemy radar sees. Sound tactics call for minimizing the larger radar reflecting aspects… In a tactical scenario, striking aircraft do not plan to overfly the air defense radars. Instead, pilots plan missions to fly around some threats, and through the lesser threats to the maximum extent possible… air defenses are placed so as to provide overlapping coverage and seal off attack routes… low observables ‘shrink’ the distance for early warning detection and perhaps more dramatically, for fire control radar. RCS reduction can, in effect, open up narrow gaps in what was intended to be overlapping SAM ring coverage. With careful planning, an LO aircraft can greatly increase its chance for survival in the duel with the ground-based air defenders by flying through the gaps in coverage.” Low-observable aircraft “may be very susceptible to a look-down type of radar” in LEO, for a number of reasons. First, the aircraft will be illuminated from all directions — all 4π steradians of solid angle — both above and below the horizon. It will certainly be possible to apply more “radar absorbing materials to the upper surface of stealth aircraft,” but radar stealth is about “shape, shape, shape, and materials,” and discarding the simplification that search radars will be near or below the horizon makes it far harder to shape the outer mold line of an aircraft in a way that increases its survivability. Second, on-orbit assembly makes it possible to build search radars far larger than any aircraft, with gain sufficient for engagement-quality target tracking at even the half-wave resonant frequencies of large, strategic bomber aircraft. Third, the “coverage cones” of the search radars sweep through the atmosphere at 17,500mph or so, and even if the reduction of radar observables does “open up narrow gaps in what was intended to be overlapping… coverage,” the rapid time evolution and short lives of these gaps will make it impossible for a high-subsonic stealth aircraft to “thread its way between the degraded… detection rings.” It is essentially impossible to “plan missions to fly around” the global coverage of the space-based search radars, and “minimizing the larger radar reflecting aspects” of an aircraft would require it to fly along a tortuous path due to the rapid time-variation of the azimuths and elevations from which it is illuminated, dramatically shrinking combat radii. In some cases, the satellite will directly overfly the aircraft, illuminating it from the directions in which its radar cross-section is the largest — substantially normal to its planform area.
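The time scales here are worth making concrete. With assumed numbers (orbital altitude, gap width):

```python
import math

MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6   # Earth radius, m

alt_m = 500e3                              # assumed orbital altitude
v_orbit = math.sqrt(MU / (R_E + alt_m))    # circular orbit speed, ~7.6 km/s
v_ground = v_orbit * R_E / (R_E + alt_m)   # sub-satellite point speed, ~7.1 km/s

gap_m = 100e3        # assumed width of a 'degraded detection ring' gap
v_aircraft = 250.0   # m/s, high subsonic

t_open = gap_m / v_ground
print(f"gap sweeps past in {t_open:.0f} s")                               # ~14 s
print(f"aircraft covers {v_aircraft * t_open / 1e3:.1f} km in that time") # ~3.5 km
```

The coverage geometry changes two orders of magnitude faster than the aircraft can reposition.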
Multi-megawatt space power systems. The thermal efficiency of the Westinghouse-Babcock & Wilcox SNAP-2 nuclear-electric power system is reasonable despite the high (radiative) heat rejection temperature, due to the ultra-high reactor outlet temperature. With both a high thermal efficiency and very large radiator area, SNAP-2 has an electrical power rating comparable to that of the power plants at fixed, ground-based ABM radar installations. This provides for high effective radiated power even in the Earth’s shadow, without the need to store and retrieve enormous quantities of energy over an enormous number of day-night cycles. The “sandwich core heat-pipe radiators” avoid “significant thermal losses due to… long chains of thermal resistance, stemming from multiple materials, their respective bonds, and physical geometries” by combining “heat pipes, radiator, and structural components into one system using a single material of construction.” The radiators are kept in a “sun slicer” attitude to reduce drag, and also kept “perpendicular to the sun flux vector” using “thermal radiator rotary joints.”
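A quick Stefan-Boltzmann sizing exercise shows why the rejection temperature is everything. All of these figures are assumptions of mine:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(q_reject_w, t_k, emissivity=0.85, sides=2):
    """Panel area needed to radiate q_reject_w to deep space at t_k."""
    return q_reject_w / (sides * emissivity * SIGMA * t_k ** 4)

# Assume a 5 MWe plant at 25% thermal efficiency, rejecting ~15 MW of heat.
for t_k in (600, 900, 1200):
    print(f"{t_k} K -> {radiator_area_m2(15e6, t_k):5.0f} m^2 of panel")
# 600 K -> 1201 m^2; 900 K -> 237 m^2; 1200 K -> 75 m^2. T^4 is unforgiving.
```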
Multi-megawatt radars. The VHF/UHF twystrodes of the search radars and L/X-band TWTs of the tracking radars operate in the ion-focused regime, with “stabilized hollow-cathode plasma electron guns” and intense, space-charge-neutralized electron beams. In the IFR, the beams can not only have “high current density and high brightness,” but also “self-focus,” such that the power amplifiers can “operate without focusing magnetic fields.” This “absence of heavy and bulky solenoids makes the pasotron a compact, lightweight source of high-power microwave radiation and very attractive for airborne and mobile applications,” allowing the power amplifiers to be miniaturized and packaged inside the transmit-receive modules of active electronically-scanned arrays. The power, efficiency, bandwidth, and tolerance to “gamma rays, X-rays, neutrons, and protons” of these tubes are unmatched by any solid-state RF power amplifiers — they are truly weapons-grade, HPM devices. The radars may eschew semiconductor power amplifiers, but the performance-seeking “feedback and logic systems” of the gas-filled “smart tube” power amplifiers very much embrace semiconductor amplifiers, in order to “maintain the correct amplitude and phase relationships for optimum combining” and perform “ongoing performance optimization” more generally. The cathodes are perhaps best described as pseudospark switches from which electron beams just happen to be extracted (the “low-temperature plasma… acts as a copious source of electrons and can be regarded as a low-work function surface that facilitates electron extraction”). The “hollow-cathode pulsers modulate the beam currents to generate arbitrary pulse waveforms,” such that each tube “becomes its own hard-tube modulator.” In order to raise the upper limit on the PRF of the cathodes from the 1.5kHz typical of pasotrons to the 100kHz or more necessary for pulse-Doppler signal processing with ambiguous range and unambiguous speed, additional electrodes and circuits are necessary to accelerate the ignition and quenching of the hollow-cathode plasmas. As such, each switch is “triggered by injecting charge carriers from an auxiliary discharge into the hollow cathode,” and its recovery of “voltage hold-off characteristics following current conduction” is expedited by “forced removal of the products of the preceding breakdown… not only from the cathode cavity but also from the main gap of the switch.”
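The PRF requirement falls straight out of the pulse-Doppler ambiguity relations (the L-band carrier below is an assumed value):

```python
C = 3.0e8  # speed of light, m/s

def ambiguities(prf_hz, freq_hz):
    wavelength = C / freq_hz
    r_unamb_km = C / (2 * prf_hz) / 1e3     # maximum unambiguous range
    v_unamb_ms = wavelength * prf_hz / 4    # maximum unambiguous +/- radial speed
    return r_unamb_km, v_unamb_ms

for prf_hz in (1.5e3, 100e3):
    r_km, v_ms = ambiguities(prf_hz, 1.3e9)
    print(f"PRF {prf_hz/1e3:6.1f} kHz: range ambiguous beyond {r_km:6.1f} km, "
          f"speed unambiguous to +/- {v_ms:5.0f} m/s")
```

At 1.5kHz the radar cannot even cover aircraft speeds unambiguously; at 100kHz every plausible target speed is unambiguous, and the (highly ambiguous) range is recovered by switching among several PRFs.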
High-performance data processors. Signal processing is both a blessing and a curse for space-based radars. On the one hand, “the large dynamic range of the received signals, the non-homogeneous and non-stationary nature of the interference, and the need to fulfill the surveillance and detection functions in real time” require preposterous numerical performance. On the other hand, signal processing is algebraically ‘nice,’ and many signal processing algorithms are not only massively parallel, but also massively systolic. In just the case of space-time adaptive processing, “Givens rotations, Householder reflections,” the Gram-Schmidt process, the Jacobi method, the Hestenes-Jacobi algorithm, and the Lanczos algorithm all “enjoy the possibility to be mapped onto a parallel processor like a systolic array.” Systolic arrays can “effectively exploit massive parallelism” in a wide variety of “computationally intensive applications,” including “large-matrix multiplication, feature extraction, cluster analysis, and radar signal processing.” The data processors of the AN/DPQ-7 and -9 have combinations of “dedicated engines,” “O(n³) processors,” and “arrays of processing elements which are not homogeneous” “concentrated on the primary set of operations… which incur the highest penalties in execution.” The minimization of wire delays demands silver interconnects on back-thinned semi-insulating InP substrates, and “the ability to stack and interconnect chips in the vertical dimension.” The minimization of gate delays at reasonable supply voltages demands “all n-type” InGaAs E/D-HEMTs with high electron saturation velocities at low critical electric fields. The combination of enhancement- and depletion-mode devices (using a single epi-structure, at that) allows the use of quasi-complementary digital logic with high speed, CMOS-like zero static power dissipation, and even the ability to enhance radiation hardness by detecting single-event upsets “at the output of a critical logic block by taking the [XOR] of the two outputs, which should always be [true].” The minimization of the coolant volume fraction demands two-phase immersion cooling with a radiation-hard “pure saturated fluorocarbon.” The integrated circuits have “clock rates well into the… millimeter wave region,” and directly interconvert analog and digital signals “with no frequency conversion” or discarding of phase information, up to “the highest frequency in the passband.” The “charge-based” RAM combines high speed, non-volatility, non-destructive readout, SRAM-like power dissipation, and DRAM-like cell sizes using “oxide-free” III-V compound semiconductor heterostructures. The growth of InP bulk single crystals is a story in and of itself. The crystal growth furnace has a submerged heater, bottomless crucible, and radioimaging system. It synthesizes InP in situ, using “active temperature feedback to aid in the control of melt stoichiometry,” vigorously mixes the “large top melt” to “ensure bulk melt homogenization prior to growth,” and then pulls an InP bulk single crystal down using a vertical, “detached,” “continuous Bridgman” method.
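To make “massively systolic” concrete, here is a toy, cycle-accurate model of a generic output-stationary systolic array doing matrix multiplication; this is the textbook dataflow, not a claim about the actual AN/DPQ-7 or -9 architecture.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy cycle-accurate model of an n-by-n output-stationary systolic array."""
    n = A.shape[0]
    acc = np.zeros((n, n))            # one accumulator per processing element
    a_reg = np.zeros((n, n))          # operand registers, passed PE-to-PE
    b_reg = np.zeros((n, n))
    for t in range(3 * n - 2):        # the full product takes 3n - 2 cycles
        a_reg = np.roll(a_reg, 1, axis=1)   # A operands march one PE east
        b_reg = np.roll(b_reg, 1, axis=0)   # B operands march one PE south
        for i in range(n):                  # skewed injection at the array edges
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        acc += a_reg * b_reg          # every PE does one multiply-accumulate
    return acc

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Each processing element only ever talks to its nearest neighbors, every wire is short and local, and the array retires up to n² multiply-accumulates per cycle, which is why the architecture rewards the minimization of wire delays above almost everything else.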
The submerged heater has an array of coaxial “heating elements which are selectively controlled to produce a desired radial temperature distribution… thereby influencing the flow pattern within the melt.” The “direct observation of the melt-solid interface” by the radioimaging system provides “knowledge of the interface location” and “width of the gap between heater and growth interface,” and a “computer uses this information, in a closed-loop system configuration, to control the… heating elements as necessary” to provide precise “interface location and morphology control.” The “thermal boundary conditions” above and “around the enclosed melt” are precisely controlled to “prevent convective currents from being established,” “stabilize meniscus-controlled solidification,” and “promote and maintain dewetted growth” “on Earth in large-scale systems.” The degree of “radial segregation” is low and the “incorporation of dopants” is uniform, because the “small melt zone” is “well-mixed,” “diffusion-controlled,” and “convection-free.” The thermal stresses do not “exceed the low yield strength” of InP and “generate dislocations or… cracks,” because the axial temperature gradient is low, and “the grown crystal surface is free… thereby reducing the crucible induced thermal stresses.” The degree of longitudinal segregation is low, because the “continuous Bridgman” process has “far fewer start up/shut down periods” and “transient thermal effects… in the early and late stages of growth” than a “batch Bridgman” process, all else equal. The crystal is free of “inadvertent inclusions,” because there are no process steps in which polycrystalline InP is “handled and… thus prone to contamination.”
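The longitudinal segregation claim is easy to quantify: growth from a finite, well-mixed melt obeys the Scheil equation, while steady continuous growth incorporates the dopant at the feed composition. A sketch, with the distribution coefficient being an assumed round number:

```python
import numpy as np

k, C0 = 0.4, 1.0                # assumed distribution coefficient; normalized feed
fs = np.linspace(0.0, 0.95, 6)  # fraction of the charge solidified

batch = k * C0 * (1.0 - fs) ** (k - 1.0)  # Scheil: finite, well-mixed batch melt
continuous = np.full_like(fs, C0)         # steady continuous growth: flat at C0

for f, cb, cc in zip(fs, batch, continuous):
    print(f"f_s={f:.2f}  batch C_s={cb:.2f}  continuous C_s={cc:.2f}")
```

At k=0.4 the batch crystal’s dopant concentration drifts by a factor of six from seed end to tang end, a gradient that the “continuous Bridgman” process simply never develops.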
Vision Board I
“America — and ultimately a world — free of the threat of nuclear destruction… is an illusion… Even if a perfect missile defense were possible, other delivery systems for nuclear weapons would still endanger millions of Americans.” Checkmate.
Vision Board II (Dusk/Night)
Vision Board III (Day)
Open Questions
Can a “boost-glide transport” take off from a conventional runway with little to no “carried oxidizer,” meet the sideline and community noise requirements of existing international airports with no “LOX ground servicing facilities,” efficiently utilize the “cryogenic exergy” of supercooled liquid hydrogen fuel in an “air collection and enrichment system,” and achieve boomless overland supersonic flight by virtue of flying above the sensible atmosphere, on a suborbital ballistic trajectory? Can the ballistic coefficient and wing loading be made low enough to keep air temperatures in the surrounding flowfield substantially below 1300°C during reentry, and minimize the production of ozone-depleting thermal NOₓ? How would the global warming effects of the stratospheric water vapor injection compare to that of a supersonic or hypersonic transport, all else equal?
The classic “flying flatiron” shape of the Rockwell BGT appears carefully considered, with both a hypersonic L/D of 3.0 conducive to antipodal range, and reasonable low-speed aerodynamic characteristics. This is no surprise, as Rockwell International was assisted by the McDonnell Douglas Astronautics Company (of Model 176 fame) in the design of the BGT. At a glance, the combination of “vehicle packaging efficiency” and structural complexity seems reasonable.
“As the number of lobes for a tank increases, the lobe intersection angle increases so the lobed tank more efficiently fills the cross-sectional area of the lifting body shape. Thus, for a specified cross-sectional tank area, the associated lobe radius can be decreased allowing the lobed tanks to be more efficiently packaged within the lifting body [outer mold line]. This improves the overall vehicle packaging efficiency (which reduces vehicle weight). Increasing the number of lobes also should improve engine integration and thrust load paths, and reduce the amount of TPS support structure. However, increasing the number of lobes generally increases the weight associated with internal tension membranes and lobe skin joints.”
The Rockwell BGT is a VTHL vehicle, and of course, considerable effort and care would have to be taken to adapt it to HTHL operation. Most importantly, it would have to take off “with either a very small or even zero amount of carried oxidizer.” “In-flight oxygen collection enables a significant reduction in takeoff gross weight of space launchers, thereby allowing airplane-like operations… The absence of oxygen at takeoff results in a significantly lower vehicle takeoff weight, which in turn allows a smaller wing, a lighter landing gear, and smaller engines, all of which lead to lower operating costs… Operating costs are further reduced by the elimination of LOX ground servicing facilities. Safety is additionally enhanced as horizontal takeoff and landing offers increased abort capabilities and reduces the impact of a rocket engine failure during launch. The reduced takeoff gross weight furthermore means that the launch vehicle can meet all airport noise and safety regulations, allowing it to operate from virtually any airport.” The propellant tanks of this notional HTHL BGT would be similar to those of the Andrews Gryphon booster, with a central LEA (liquid enriched air) tank sandwiched between fore and aft LH₂ tanks in order to limit the CG shift during the “collection period.” This would allow “differential burn-off from the two hydrogen tanks” to trim the vehicle without any drag penalty during the ascent, and if the “‘multicell’ payload compartment” remains stacked on top of this central LEA tank, the CG would also be insensitive to the payload weight.
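A toy moment balance shows how “differential burn-off from the two hydrogen tanks” can pin the CG while the central LEA tank fills; the tank stations, masses, and flow rates below are pure invention for illustration.

```python
# Hold the CG fixed during LEA collection by splitting the H2 burn between
# fore and aft tanks. Geometry, masses, and rates are invented.
x_cg = 40.0                               # target CG station, m
d_fore, d_aft = 20.0 - x_cg, 65.0 - x_cg  # H2 tank moment arms about the CG
m_fore, m_aft = 50_000.0, 40_000.0        # initial LH2 loads, sized so the
                                          # initial H2 moment about x_cg is zero
m_fix, m_lea = 150_000.0, 0.0             # dry mass + payload at x_cg; tanked LEA

burn, collect, dt = 30.0, 250.0, 60.0     # H2 burn and LEA collection rates, kg/s
for step in range(10):
    f_fore = d_aft / (d_aft - d_fore)     # burn split that nulls the moment rate
    m_fore -= burn * f_fore * dt
    m_aft  -= burn * (1.0 - f_fore) * dt
    m_lea  += collect * dt
    moment = m_fore * d_fore + m_aft * d_aft  # masses at x_cg contribute nothing
    total  = m_fore + m_aft + m_fix + m_lea
    print(f"t={(step + 1) * dt:5.0f}s  CG offset = {moment / total:+.4f}m")
```

Because the LEA tank and payload sit on the target CG, only the hydrogen moment matters, and a fixed fore/aft burn split (set by the ratio of the moment arms) keeps it nulled without any drag-producing trim surface.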
The ability to “fly trajectories that are not vertical and use lift as well as thrust” during the endoatmospheric boost phase means that the “initial thrust-to-system-mass might be only about 0.7 — thereby saving propulsion system mass and avoiding some of the aft CG balancing problems in the empty condition,” as in Leonard Cormier’s Windjammer concept, on which the Boeing RASV was based. This would permit the downsizing of the propulsion system, and perhaps even the addition of a “boat-tailed upper aft body to enhance lift and reduce base drag” like that of the USAF Flight Dynamics Laboratory FDL-8 (and its “aerodynamic analogue,” the Martin Marietta X-24B), which was designed to provide “acceptable low speed performance without compromising the hypersonic characteristics” of earlier shapes (including their “inherent stability over the entire Mach number range from Mach 22 to landing,” and “angle of attack… limited to the 25-30° range, not the 40-45° operated with the Space Shuttle during the early reentry phase, a design approach causing stability and control challenges” due to bursting of the leading edge vortices). I see a number of interesting possibilities for this “upper aft body.” In all likelihood, low-bypass turbofan engines with a relatively high tolerance for inlet distortion will be buried in the afterbody of the HTHL BGT, perfect for boundary layer ingestion and a “significant reduction in terms of propulsive power required for cruise” due to a “combination of wake filling and using the slower-moving boundary layer flow for propulsion instead of freestream air.” This BLI might have the knock-on effect of delaying the separation of airflow from the “boat-tailed upper aft body,” making it possible to locally increase camber and lift (especially with twin tails acting as end fences or sidewalls). The engines may actually be sized by this “propulsive power required for cruise,” rather than the usual engine-out takeoff capability, because the vehicle gains perhaps hundreds of thousands of pounds of weight during the initial aerobic, turbofan-propelled phase of flight (taxi, takeoff, climb, and cruise) as it burns off LH₂ but collects LEA, on top of the usual net thrust lapse with higher airspeed. It might be possible for a “two-dimensional rectangular slot nozzle” in close proximity to the trailing edge to generate “jet-induced supercirculation lift” by vectoring the thrust down (the resulting nose-down pitching moment would be trimmed out by “an upload on the canard which would increase the overall lift of the configuration”). It might also be possible for twin “vertical stabilizers to shield lateral noise,” and a “horizontal beaver tail” to impede “the downward travel of sound.” If the rocket engine has a truncated linear plug nozzle, like that of the Rocketdyne XRS-2200, then the turbofan engines may “exhaust through the plug base,” as in the Rocket Plug Nozzle Combined Cycle propulsion system. It would be interesting to see if some form of leading-edge vortex flaps could improve low-speed lift (much like on a fixed-sweep arrow-wing SST) without blowing up the complexity of the TPS.
The HTHL BGT will probably take off, cruise climb up to the tropopause where air is available at a “low ambient temperature (~215K),” cruise subsonically while collecting LEA, transition to air-independent rocket propulsion, and boost onto a suborbital trajectory with the aid of aerodynamic lift, as well as zero inlet drag (and heating) and high net thrust. The higher altitude, lower air density, somewhat lower exhaust velocity with respect to the freestream, and overall TWR of about 0.7 will all tend to reduce the rocket engine noise perceived “vertically underneath the accelerating vehicle,” relative to that of an equidistant vertical takeoff at ground level. The minimum-energy trajectory may involve a supersonic climb steep enough that some of the “sound waves are then almost horizontal,” the boom and noise footprints are largely “confined to the vicinity of the airport,” and the noise lasts for only a few minutes. In the unrealistic but nevertheless remarkable case of a “near-vertical trajectory, no shockwave encounters the ground, and the energy is dissipated in all horizontal radial directions.” “Major parts of the [BGT] trajectory are flown at very high altitudes,” such that it generates “much less intense boom noise than is to be expected for hypersonic air-breathing cruisers.” Although the “final approach sonic boom… is not expected to significantly differ from noise experienced in the vicinity of normal airports,” the degree to which the footprint can be “confined to the vicinity of the airport” using a “steep-gradient descent… accomplished either at very high angle of attack, or at a near-zero angle of attack using the air brake” would seem to be limited by the tolerance of “untrained and elderly passengers” to deceleration (especially eyeballs-out). The shape of the nose will be dictated primarily by the compromise between hypersonic drag and “stagnation-point convective heat transfer,” but it’s interesting to note that blunt noses can be used to achieve a “flat-top boom signature at the ground.”
“Researchers earlier thought that those factors which improved the efficiency of an aircraft would also tend to lower the sonic boom; however, it has now been found that this is not necessarily so… The low boom aircraft [has] an extremely blunt nose and special shaping so that even though there is a high shock level at the aircraft, and thus a high drag level, the pattern of propagation is such that no further coalescence of shocks occurs. There, in fact, are no other shocks behind the bow shock; there is only an expansion field. Because of this, the shock at ground level is greatly attenuated. The drag configured, sharp-nosed aircraft, on the other hand, had a comparatively lower shock at the aircraft, but because of shock coalescence the ground signature has a relatively higher level shock.”
I’m struck by the similarity between, on the one hand, a “spatular two-dimensional nose” that “reduces the nose shock wave drag by as much as 40% and enables increasing vehicle volume without altering substantially aerodynamic characteristics,” and on the other hand, the “blunt wing apex” or “platypus nose with a blended wing root” “used to start the equivalent areas due to volume and lift together” and tailor “details of the body nose shape and the overall blending of volume and lifting effects” for “alleviation of sonic boom effects.” I have no idea how much more flexibility all of these things would give flight planners and air traffic controllers, or whether the BGT could fly some overland routes supersonically (unlike a strictly endoatmospheric SST). The Rockwell BGT is an ultra-long-haul spaceliner with just enough range to fly from London to Sydney, and I can’t help but wonder if, on a merely long-haul flight for which the minimum-energy trajectory is a boost-glide trajectory with an overland glide (London to Los Angeles for example), a higher-energy boost-skip trajectory with an overwater pull-up maneuver (over Hudson Bay) and “sonic-boom-free” overland skip could be flown instead. In other cases, a dogleg maneuver and overwater glide could be performed in lieu of an overland glide.
An HTHL BGT would address two of the issues with suborbital passenger transport using ballistic, VTVL vehicles with pure rocket propulsion. First, the “selection of potential… launch and landing sites is likely to be heavily influenced by acoustic noise and sonic boom constraints… launching from currently existing airports seems an unlikely option,” and these vehicles “would operate out of special superhub airfields where noise would be less critical.” As a result, they cannot efficiently convert savings in block time into savings in “door-to-door travel times,” due to the additional time required for “the feed portion of the trip and the passenger transfer at each end of the high-speed leg.” “In determining time savings, it is not sufficient to merely look at the origin-destination time of the RLV flight. The journey time for the full supply chain or door-to-door passenger service… must be considered.” As preposterously short as the block time of an SRLV is, it is still desirable for it to provide direct, point-to-point (versus hub-and-spoke) service between airports close to city centers. Second, the high “deceleration forces characteristic of ballistic reentry” are intolerable to “untrained or elderly passengers.” If the entire flying public is to enjoy the benefits of point-to-point commercial space transportation, the vehicle must have “an L/D high enough to limit deceleration during reentry to approximately 2g.”
I am not married to this HTHL BGT at all, because the climate impact is still a mystery to me. It is a little-known fact that even a decarbonized air transportation system would make a significant contribution to radiative forcing. In No Frills, large subsonic aircraft are powered by LH₂ “produced in a climate- and carbon-neutral manner.” They also have engines with unconventional thermodynamic cycles and high “cryogenic exergy efficiency,” as well as continuously rheocast and friction stir welded Al-Li alloy airframes (circumventing the “problems associated with direct-chill casting of alloys with greater than about 2.7wt% lithium which arise because of compositional segregation and ingot cracking” without resort to expensive powder metallurgy processes, by stirring the alloy in the semisolid state and thereby milling the solid primary phase into “‘non-dendritic,’ small, rounded particles” for “reduced shrinkage and segregation” in the solid state). In many cases, these aircraft also have contra-rotating tractor propfans with high propulsive efficiency.
“The global warming effect of H₂O from aircraft cruising in the troposphere is negligibly small compared to that of CO₂, hence it is here assumed to be zero below 10km. Above 10km it has a small impact on global warming, but it increases with altitude… it appears that partial compensation occurs between increasing contrail coverage from cryoplane traffic (due to higher H₂O emissions) and decreasing optical thickness of cryoplane contrails (due to fewer but larger ice particles, as there are less condensation nuclei present in the exhaust)… As the effect of CO₂ on global warming is independent of the discharging altitude and since the emissions of CO₂ increase by decreasing altitude, the influence on global warming increases by decreasing cruise altitude… For the cryoplane the situation is different. As the fuel contains no carbon, the CO₂ curve vanishes. The shape of the NOₓ curve is similar to that of the conventional aircraft, but the magnitude of the GWP values is smaller, as the cryoplane is assumed to emit less NOₓ. As the cryoplane discharges significantly more water vapor (2.6 times more if the same energy consumption is assumed), the H₂O effect of this aircraft is essentially greater than for the conventional aircraft. In contrast to the conventional aircraft, the H₂O effect is totally dominating for the highest flight levels, leading to a continuously decreasing contribution to global warming with decreasing cruise altitude (at least for the considered flight levels). In this case the effect of a decreasing GWP effect with decreasing flight altitude overcompensates the effect of increasing H₂O emissions.”
“The penalties occasioned by the density and temperature of liquid hydrogen… are more than overcome by the tremendous advantage of the heating value of the fuel… The LH₂ aircraft are lighter, require smaller wings but larger fuselages, use smaller engines, can take off in shorter distances, and use less energy per seat-mile in performing their missions.”
These subsonic aircraft will have a climate impact, albeit one mitigated by a preponderance of factors. The radiative forcing is probably small enough to be offset by reflective surfaces, and especially “cool roofing and cool pavements” pigmented by the selective emitter BaSO₄, which occurs naturally as the mineral baryte, and has electronic and phononic band structures that combine high emissivity in the infrared with high reflectivity in the visible. The same is not true of large supersonic and hypersonic aircraft powered by LH₂ “produced in a climate- and carbon-neutral manner.”
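To put the subsonic offset claim on a napkin: a crude scaling of how much high-albedo BaSO₄ surface it takes to cancel a given global-mean forcing. Every number here is an assumption of mine, and the escape fraction especially is hand-waved.

```python
# Area of cool (high-albedo) surfaces needed to offset a global-mean
# radiative forcing. All parameter values are rough assumptions.
A_EARTH = 5.1e14   # Earth surface area, m^2
I_SURF  = 200.0    # mean solar flux reaching horizontal surfaces, W/m^2
D_ALB   = 0.4      # assumed albedo increase over ordinary roofing/pavement
F_ESC   = 0.5      # assumed fraction of extra reflected light escaping to space

def offset_area_km2(forcing_w_m2):
    per_m2 = D_ALB * I_SURF * F_ESC   # W offset per m^2 of cool surface
    return forcing_w_m2 * A_EARTH / per_m2 / 1e6

for f in (0.04, 0.09, 0.28):          # forcings of the magnitudes quoted below
    print(f"{f:.2f}W/m^2 -> ~{offset_area_km2(f):,.0f}km^2 of cool surfaces")
```

Forcings of a few hundredths of a W/m² come out to something like 10⁵km² of cool roofing and pavement, which is at least not absurd next to the world’s built surfaces; there is no comparable arithmetic for water vapor parked in the stratosphere.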
“H₂O that is emitted near the surface has a very short residence time (hours) and thereby no considerable climate impact. Super- and hypersonic aviation emit at very high altitudes (15 to 35 km), and H₂O residence times increase with altitude from months to several years, with large latitudinal variations. Therefore, emitted H₂O has a substantial impact on climate via high altitude H₂O changes… Our calculations show that the climate impact, i.e., mean surface temperature change derived from the stratosphere-adjusted radiative forcing, of hypersonic transport is estimated to be roughly 8–20 times larger than a subsonic reference aircraft with the same transport volume (revenue passenger kilometers) and that the main contribution stems from H₂O… Due to larger fuel consumption with higher speeds at high cruise altitudes on the one hand and the atmospheric conditions at these cruise altitudes (recombination, lifetime of H₂O) on the other, hypersonic aircraft have a considerably larger climate impact than subsonic and supersonic aircraft.”
However, this study is of a hypersonic cruise aircraft such as the Reaction Engines LAPCAT A2. The HTHL BGT will climb up from the tropopause to the “main engine burnout altitude” of around 67km over the course of perhaps ten minutes. It should be compared, instead, to the Reaction Engines Skylon launch vehicle.
“It is often assumed that H₂-fueled rocket engines have no impact on the global atmosphere since the only significant emission is H₂O. However, in great enough quantities the emissions from these rockets can alter the stratosphere in many ways. H₂O emissions can change stratospheric temperatures and alter the photochemistry controlling ozone. Furthermore, rockets burning liquid H₂ and oxygen use an H₂-rich mixture rather than a stoichiometric ratio for enhanced thrust and emit H₂ and HOₓ in the plume in addition to H₂O. Enhancements in HOₓ can catalytically destroy O₃. Superheated air in the engine and exhaust plume result in the production of NOₓ, which also catalytically destroys O₃… the excess H₂ likely oxidizes into H₂O in the plume due to high temperatures… When air is heated to temperatures exceeding 1800K, as in a jet engine or behind the shock wave around a spacecraft during reentry, NOₓ is produced through the extended Zeldovich mechanism… NOₓ would only be produced in H₂-fueled rocket engines in significant amounts (>0.01% of total flow) in afterburning reactions, which occur when ambient air is entrained into the hot under-oxidized plume. Afterburning is generally not a significant factor for rocket engines above the tropopause. Therefore it is assumed that during this phase of flight, at altitudes greater than 28km, significant NOₓ production is unlikely… Using analytic approximations and a numerical integration, Park (1976) calculated that the NOₓ produced during a Space Shuttle reentry is 4.5-9% of the mass of the spacecraft. Park and Rakich (1980) later updated this value to 17.5±5.3% of the spacecraft mass, with a peak emission at 68km. While the predicted Skylon mass is comparable to the Space Shuttle mass, the Skylon reentry flight path is different from that of the Shuttle, and this would affect NOₓ production. Skylon is expected to require more time above 5km/s during reentry than the Shuttle did, which would tend to produce more NOₓ. However, these high speeds would occur at a higher altitude than for the Space Shuttle, which would tend to decrease NOₓ production… Park (1976) compared NOₓ formation between the Space Shuttle and meteorites based on the total mass entering the top of the atmosphere. Assuming the natural formation rate of upper atmospheric NOₓ is from 5.7×10⁷kg of meteorites producing their weight in NOₓ every year, then 10⁵ Skylon flight reentries would produce a factor of 20 more NOₓ than natural production from meteorites. Meteorites produce roughly [five times] more NOₓ per mass than the Space Shuttle due to their much higher velocity when entering the atmosphere… The ERF from 10⁵ Skylon rocket launches per year is not significant; however, 3×10⁵ and 10⁶ launches per year cause statistically significant radiative forcing of 0.09W/m² and 0.28W/m², respectively… WACCM model simulations of estimated Skylon rocket emissions indicate that 10⁵ or more flights per year, as proposed to build a space-based power system, significantly affect the atmosphere in several ways. Global stratospheric O₃ decreases by approximately -1.4 DU in comparison to the reference simulation. This depletion is about 10% of the historic peak depletion of global total column ozone in the 1990s due to anthropogenic emissions of synthetic gases. The O₃ depletion is mostly (~75%) due to NOₓ emissions. At 10⁶ flights per year, the effects on O₃ are more robust, with estimated O₃ loss enhanced by about an order of magnitude relative to 10⁵ flights per year. 
“Other climate effects include tropospheric O₃ increases, PSC and PMC increases, and slightly positive ERF. The radiative forcing is mostly caused by the increased stratospheric and mesospheric water vapor. The enhancement in PSC fraction contributes to O₃ loss, although the contribution to global loss is smaller than that of destruction from gas-phase catalytic chemistry. The O₃ perturbation is the largest and likely the most significant impact from these flights. In comparison, the other calculated impacts are small compared to natural variability or the consequences of other anthropogenic activities, or are statistically insignificant at least when considering up to 10⁵ flights per year. It is useful to compare the calculated Skylon ERF with estimates of the ERF associated with black carbon emitted by hydrocarbon-fueled rocket engines. Ross et al. (2010) estimated that a scenario of 10³ flights per year of a small suborbital hydrocarbon burning rocket would produce ERF of 0.04W/m², approximately the same as the Skylon scenario of 10⁵ orbital flights per year. The hydrocarbon fuel mass used in the suborbital rocket scenario is only approximately 0.03% of the H₂ fuel mass used in 10⁵ Skylon orbital flights, yet the predicted ERF is similar… Compared with estimates of the global impacts of a hydrocarbon-fueled rocket, our results substantiate assertions that, by some measures, H₂-fueled rockets are indeed ‘clean,’ though such conclusions must be made with caution.”
My best guess is that, if the HTHL BGT achieves significant penetration in the ultra-long-haul market segment, there will be about 10⁵ flights per year by the turn of the century. In the less likely scenario that it also achieves significant penetration in the merely long-haul segment, that figure would be closer to 10⁶ flights per year by 2000. On the one hand, the HTHL BGT will carry 200 passengers and transition to rocket propulsion at around Mach 0.85, whereas Skylon will carry 30 passengers “in a special passenger module” and transition to rocket propulsion at Mach 5.4 (with a significantly higher “time-averaged specific impulse”). On the other hand, the HTHL BGT is a merely suborbital vehicle that “would never reach orbit,” uses “wings for converting rocket speed and altitude into aerodynamic lift and range” which “roughly doubles the range… over the purely ballistic trajectory,” and takes off “with either a very small or even zero amount of carried oxidizer” which “results in a significantly lower vehicle takeoff weight, [and] in turn allows a smaller wing, a lighter landing gear, and smaller engines.” I am in no position to say that one injects more water vapor into the stratosphere per flight than the other, and so I will hedge my bets, and assume that their H₂O emissions are in the same ballpark. The moral of the story would then be that while the radiative forcing might be “small compared to natural variability” and “statistically insignificant,” ozone depletion by thermal NOₓ will be a major issue (my jaw dropped when I read that the Space Shuttle orbiter might have generated a fifth of its weight in NOₓ during reentry). The question is whether the ballistic coefficient and wing loading can be reduced to the point that the flowfield around the vehicle remains almost entirely below 1800K during reentry, at an acceptable cost to performance. It certainly helps that, during reentry, the internal propellant tanks (and especially the notoriously large LH₂ tanks) are almost empty, resulting in a vehicle with a large planform area and low overall density. It may be possible to simultaneously offset the radiative forcing and “neutralize the acids involved in the catalytic destruction of ozone” by injecting a “solid aerosol composed of alkaline metal salts that react with hydrogen halides and nitric and sulfuric acid to form stable salts” into the stratosphere. It’s unfortunate that the TPS is passive, seeing as how the active transpiration cooling of the General Dynamics Space Freighter upper stage TPS by the chemically reducing hydrogen fuel raises the possibility of external SNCR of NOₓ in the surrounding flowfield (at the expense of higher water vapor emissions). The climate impact of the HTHL BGT must be alleviated if it is to serve the important political purpose of promoting the public perception that decarbonization is not technologically regressive.
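The 1800K question can be roughed out with the perfect-gas stagnation temperature (real-gas effects park energy in dissociation and lower the actual temperature, so this errs conservative), and the equilibrium-glide relation then says what wing loading lets the vehicle fly that slowly while still high in thin air. All the vehicle numbers are mine.

```python
import math

# Perfect-gas stagnation temperature: T0 = T_inf + v^2 / (2 cp).
CP, T_INF = 1005.0, 250.0   # air cp (J/kg/K) and assumed ambient temperature (K)

v_limit = math.sqrt(2.0 * CP * (1800.0 - T_INF))  # speed at the Zeldovich threshold
print(f"stagnation temperature reaches 1800K at ~{v_limit:.0f}m/s")

# Equilibrium glide, L = W: wing loading W/S = 0.5 * rho * v^2 * CL.
CL = 0.5                                          # assumed lift coefficient
for alt_km, rho in ((50, 1.0e-3), (60, 3.1e-4)):  # rough standard-atmosphere densities
    ws = 0.5 * rho * v_limit**2 * CL
    print(f"{alt_km}km: wing loading ~{ws:.0f}Pa ({ws/9.81:.0f}kg/m^2)")
```

An equilibrium glide stays below the threshold only under about 1.8km/s, far below entry speed, so the deceleration from boost velocity will cross 1800K somewhere no matter what; the real question is how little of the flowfield (as opposed to the stagnation region) exceeds the threshold, and for how long, when the wing loading is down in the tens of kilograms per square meter.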
I really don’t know what to make of this General Dynamics patent from 1990, because it just seems too good to be true.
“The novel air liquefiers all provide for a separation and tap-off of liquids and gases with different boiling temperatures… The spiral flow arrangement forces the liquid to the outside wall by centrifugal force without any moving parts and thereby separates liquid constituents (water, nitrogen, oxygen) for easy collection (tap-off) as the temperature drops along the flow passage. The low vapor pressure of the condensed and subcooled liquid nitrogen and liquid oxygen lowers the pressure inside the condenser tubes or channels; thus creating a near vacuum which results in ambient air being sucked into the condenser tubes or channels and thereby making the liquefier self-feeding with ambient air without any fan or compressor at the air inlet.”
It reminds me of the cyclonic separators used to prevent the ingestion of dust and sand by “helicopter turboshaft engines that operate around unprepared airfields,” as well as vacuum steam condensers. It’s remarkable that it removes liquid water from the airflow before it ices the heat exchanger tubes. This would not only prevent “frosting” — a major headache that has historically plagued liquid air cycle engines because it increases the air-to-hydrogen thermal resistance and degrades the performance of the heat exchanger — but also prevent it without spraying the heat exchanger with an additional consumable fluid (such as methanol or ethanol) that increases the complexity and cost of ground operations. It is imperative that the ACES tolerate “rain storms and ingested runway water” (all-weather flight capability is a necessity), and this might just be the trick. I suspect that the centrifugation of the helical two-phase flow over many turns would be more effective than the sinuous flow between the “plates” in this Rolls-Royce patent from 1989. With no moving parts, this “air liquefier” is more like a Ranque-Hilsch vortex tube than a rotating fractional distillation unit. It is a very good sign that the “liquid air jet engine installation” is tankless, with an “air liquefier” that generates LEA just-in-time, at a rate sufficient for steady-state operation of a “liquid air jet engine,” because this implies that the HTHL BGT will only have to collect LEA for perhaps tens of minutes, rather than “loiter for a few hours… until the required mass of [LEA] is collected.” The block time of the HTHL BGT will therefore be only somewhat longer than that of the VTHL BGT, and still far shorter than that of an HST or SST. The “air liquefier” may further reduce “aft CG balancing problems in the empty condition” because “liquid air collectors, separators and pumps can be mounted up front” in the nose, and the “liquid nitrogen and liquid oxygen can be transferred easily through the vehicle in small tubular connector lines to the propulsion unit.” I have not yet found any references to this “air liquefier” in any of the LACE literature, which is somewhat confusing and disturbing. Does it not work, or is it just that obscure? Was there hardware? Were there tests?
There are, however, two additional features that would be necessary in order for the HTHL BGT to collect LEA without the utilization of LH₂ being “grossly inefficient and extravagant.” This is often called “leaning out the cycle” in the LACE literature.
“It is well known to utilize the heat sink capacity of a cryogenic fuel to liquefy ambient air and then burn the fuel with the air in a suitable power plant… In order to achieve complete air liquefaction in such engines, it has been found that the fuel/air ratio required in the heat exchanger is several times greater than that normally utilized in a conventional combustion chamber not concerned with air liquefaction. Thus, in effect, the heat exchanger and the combustion chamber are, on the basis of fuel/air ratio, mismatched. Under such conditions, a rather large excess of fuel is admitted to the combustion chamber and the net result is a relatively high specific fuel consumption for such an engine.”
First, the “nitrogen-rich waste stream” must counterflow against the very stream of air from which it is separated, and cool it.
“Because liquid air is mostly nitrogen, tanking it means carrying a large amount of inert weight that will not aid combustion, raising overall system weight to the point where the advantages of the cycle disappear. However, through the use of a cryogenic rotary air separator, the lighter liquid nitrogen can be removed from the air and… used to regeneratively cool the currently incoming air. This nitrogen can also be expelled through the engine’s nozzle as inert mass; while it does not combust, it can still aid in the production of thrust. The resulting tanked [LEA] is about 10% nitrogen.”
My best guess is that the “air liquefier” will comprise a single-phase air precooler and a two-phase air condenser. The heat transfer fluid in the condenser will simply be supercritical hydrogen fuel. In contrast, the precooler will have two parallel “cylindrical coils,” with separate hydrogen and depleted air heat transfer fluids. It is interesting to note that supercritical nitrogen has been seriously considered as a working fluid for closed Brayton cycle power plants, and can compare favorably with helium even in intercooled and regenerative cycles with high degrees of heat exchange. As I imagine it, the liquid depleted air is supercooled to some degree — the SSME HPFTP discharged hydrogen at about 51K, so even at 5,956psi the temperature difference to liquid depleted air supercooled below about 77K may not be prohibitively small, and the boiling point of the depleted air might be raised by liquefying bleed air from the turbofan compressor — then turbo-pumped above its critical pressure, so that the absence of distinct liquid and gas phases prevents film boiling from impairing heat transfer in the precooler. The counterflowing stream of supercritical depleted air can be introduced at whatever point along the length of the “air liquefier” has about the same temperature as the depleted air turbopump outlet, at the design point. This combined-cycle “air liquefier” would impose a considerable weight increment relative to the baseline, and of course the gamble is that it would lean out the cycle, downsize the LH₂ tanks, and realize an even larger weight decrement (namely, structural weight).
Second, the other major headache experienced by liquid air cycle engines must be alleviated, namely, the “temperature pinch.” Inside the condenser, a stream of saturated air isothermally rejects latent heat to a counterflowing stream of hydrogen, which accepts sensible heat. The temperature of the air remains constant as it flows through the condenser, whereas the temperature of the hydrogen rises as it counterflows. At a given mass flow of air, the mass flow and temperature rise of the hydrogen are negatively correlated. As the mass flow of hydrogen decreases, the temperature differences along the condenser decrease, “the reduced driving forces in the heat exchange process will need… more heat transfer area,” and there will eventually come a point at which the temperature difference between the air and hydrogen at the “condenser face” is so low that the mass or volume of the condenser is prohibitively large. In other words, the “required fuel flow… is dictated by the precooler ‘pinch point,’ whereby the hydrogen fuel must have sufficient thermal capacity to absorb the enthalpy equal to the latent heat of condensation of air at its saturated conditions. Since the temperature rise of the hydrogen is limited to the saturated temperature of air, the mass flow of hydrogen must therefore increase to achieve the required capacity rate.” The pinch can perhaps best be alleviated by the “cascading condenser system” patented by Marquardt in 1961.
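Before getting to the fix, a condenser-side energy balance makes the pinch arithmetic concrete; the property values below are rough, para/ortho conversion is ignored, and the saturation line sits a fair bit higher at elevated condenser pressure.

```python
# Hydrogen flow dictated by the condenser pinch. Property values are rough.
H_FG_AIR = 205.0         # latent heat of air near 1atm, kJ/kg
T_SAT    = 80.0          # saturation temperature of air near 1atm, K
CP_H2    = 11.0          # hydrogen cp at these temperatures, kJ/kg/K
T_H2_IN  = 25.0          # hydrogen temperature entering the condenser, K
F_A_STOICH = 1.0 / 34.3  # stoichiometric H2/air mass ratio

for pinch in (2.0, 5.0, 10.0):   # allowed temperature difference at the pinch, K
    t_h2_out = T_SAT - pinch     # the hydrogen may only warm to the saturation line
    f_a = H_FG_AIR / (CP_H2 * (t_h2_out - T_H2_IN))
    print(f"pinch {pinch:4.1f}K: fuel/air = {f_a:.3f} "
          f"({f_a / F_A_STOICH:.1f}x stoichiometric)")
```

The hydrogen window is only ~50K wide, so the condenser alone demands an order of magnitude more hydrogen than the combustion chamber can use; real condensers run at elevated pressure, which raises the saturation line and widens the window somewhat, but the mismatch stands, and it is exactly what the cascade attacks.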
“The withdrawing of… the hydrogen at a point ahead of incipient liquefaction of the air and expanding this hydrogen permits the hydrogen to be returned to the heat exchanger at a temperature substantially below the air temperature at incipient liquefaction. Thus, the fuel/air ratio can be substantially less than would be required if sufficient fuel were utilized to liquefy the air and maintain the proper temperature differential at the pinch point in the normal heat exchange relationship and without expansion of hydrogen… if the hydrogen is raised to a high pressure, for example [40atm], substantial cooling of the [hydrogen] will take place during its expansion [through the] turbine… The cooling of… the hydrogen fuel by expansion through [the turbine] ensures that at all points in the heat exchanger, the hydrogen temperature will be below that of the air, and that proper heat transfer will continue throughout the heat exchanger unit.”
This is exactly like the “stepwise expansions in turbines with intervening reheat” in an Ericsson cycle approximation, in which “conventional steady-flow machinery [is] arranged to approximate constant-temperature expansion and compression.” In “a hydrogen/air heat exchanger, atmospheric air can be cooled as the liquid hydrogen is boiled, requiring no energy expenditure from the aircraft’s systems.” As such, cascading condenser systems “function as both recovery systems for atmospheric air thermal energy to boil liquid hydrogen as well as systems to liquefy air,” and implement a sort of expander cycle, in which the “expansion turbines” generate a considerable amount of shaft power with which upstream “liquid hydrogen pumps” can achieve high pressure rises, returning the favor by providing plenty of legroom for “stepwise expansions” in a virtuous circle of higher pressure ratios and thermal efficiencies.
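A sketch of the cascade’s effect, treating the hydrogen as an ideal gas with frozen rotation (γ≈5/3 below about 100K) and re-cooling it through a turbine between condenser passes; the pressure ratio, turbine efficiency, and pass count are my assumptions.

```python
# Marquardt-style cascade: expand the hydrogen through a turbine between
# condenser passes so the same flow absorbs latent heat repeatedly.
GAMMA, ETA_T = 5.0 / 3.0, 0.80   # frozen-rotation hydrogen; turbine efficiency
H_FG_AIR, CP_H2 = 205.0, 11.0    # kJ/kg of air; kJ/kg/K
T_COLD, T_HOT = 25.0, 75.0       # H2 inlet and pinch-limited outlet, K

def fuel_air(passes, pressure_ratio):
    q, t_in = 0.0, T_COLD        # heat absorbed per kg of H2, across all passes
    for _ in range(passes):
        q += CP_H2 * (T_HOT - t_in)
        t_isen = T_HOT * pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)
        t_in = T_HOT - ETA_T * (T_HOT - t_isen)   # the turbine re-cools the flow
    return H_FG_AIR / q

for n in (1, 2, 3):
    print(f"{n} pass(es): condenser-dictated fuel/air = {fuel_air(n, 3.0):.3f}")
```

Three passes nearly halve the condenser-dictated fuel/air ratio, and the [40atm] quoted above is just about enough headroom for three expansions of pressure ratio three.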
It is imperative that the RCS/ACS and OMS use “the same propellants as the launch propulsion system to simplify… logistics,” minimize the number of fluids aboard the vehicle, and assiduously avoid toxic hypergolic propellants. Interestingly, an “integrated… OMS and RCS… could have a higher… dry mass fraction, but scavenging of propellants could lead to a lower on-orbit mass fraction.” I wonder if, instead of scarfed and toed-out bell nozzles with cosine losses, the forward down-firing RCS thrusters might have a refractory metal micronozzle array fabricated using Aerojet platelet technology, under the chin and flush with the lower surface. The “pulse mode operation” of thrusters is somewhat more difficult without hypergolic propellants, but because these platelet micronozzle arrays can double as platelet catalyst beds, they can catalytically ignite mixtures of gaseous hydrogen and oxygen even at low temperatures, with much less complexity than spark (or corona) ignition.
In a rocket, it is relatively easy to “engineer a structure within given weight constraints without depending on anticipated advanced materials technology or unrealistic weight reduction programs,” due to “the avoidance of concentrated point loads… the gentle melding and diffusion of loads into circular structures” that not only have high structural efficiency but also accommodate “cryogenic or high-pressure storage vessels” with a high volumetric efficiency, and the orientation of “the primary load paths in the axial direction during all phases of flight, allowing for a more efficient structural solution.” The ability of the vehicle to “reach closure (i.e. complete the mission)” is extremely sensitive to the structural mass fraction, and this can cause showstopping problems for spaceplanes, because their structural efficiency is inherently lower than that of rockets. The Rockwell X-33 envisioned how a wing-body vehicle might cope with this disadvantage, in that “the wing loads were carried through the aft spar directly to the thrust structure, while the forward wing spar carried the loads into the intertank structure forward of the oxygen tank (the aft tank)… to make the propellant tanks… the primary load-carrying structures while minimizing or eliminating the need for secondary structures throughout.”
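The sensitivity to structural mass fraction is just the rocket equation; a sketch with an assumed suborbital Δv and H₂/O₂ exhaust velocity:

```python
import math

# Payload fraction vs. structural mass fraction for fixed delta-v and
# exhaust velocity. Both performance numbers are assumptions.
DV, VE = 5000.0, 4400.0       # suborbital delta-v and H2/O2 exhaust velocity, m/s
burnout = math.exp(-DV / VE)  # (structure + payload) / liftoff mass

for sigma in (0.20, 0.25, 0.30, 0.35):   # structural mass / liftoff mass
    payload = burnout - sigma
    verdict = f"{payload:+.3f}" if payload > 0 else "no closure"
    print(f"structural fraction {sigma:.2f}: payload fraction {verdict}")
```

Ten points of structure wipe out five sixths of the payload, and a few more kill the mission outright, which is the sense in which a spaceplane’s inherently lower structural efficiency threatens closure.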
The non-axisymmetric airframe and dual propulsion systems will make keeping the GTOW below the level necessary for “airplane-like operations… from virtually any airport” a tremendous challenge. The airframe will probably therefore be made from the “in situ ductile-ductile metal matrix composite” Lockalloy, a sort of beryllium dispersion-reinforced aluminum alloy. The beryllium dispersoid lends this material a low density, high thermal conductivity, high heat capacity, and low CTE, making it an “effective heat sink and structural material… weight- and cost-competitive with competing thermal protection and structural systems.” The “high thermal conductivity and dimensional stability leads to low thermal gradients, stresses, and warpage,” and the “absorption of heat lowers insulation requirements.” There might also be extensive use of SPF/DB ODS titanium alloy sandwich panels.
The Advanced Metallic Honeycomb thermal protection system panel consists of “a foil-gauge metallic box encapsulating lightweight, fibrous ceramic insulation.” “The weight of the metallic box is offset to some extent by the low density, efficient fibrous insulations used. The inherent ductility of the metallic materials used offers the potential for a more robust TPS… In addition, the encapsulated designs are inherently waterproof, and the mechanical fasteners allow for easy removal and reattachment.” This would seem to be de rigueur for a BGT that would “operate routinely and safely as a commercial transport aircraft over international routes.” The honeycomb sandwich panel facesheet is made from the oxide dispersion-strengthened superalloy PM2000, and I wonder if Sm₂O₃ or Tm₂O₃ dispersoids with “high emissivity over an extended temperature range (1500K through 2700K)” would reduce the “radiation equilibrium temperatures.” I also wonder if the nose cap and leading edges might be a metallic analogue of CMC-based TUFROC, with rhodium- or iridium-base “refractory superalloy” facesheets that combine the high-temperature strength of refractory metals with the high-temperature oxidation resistance of nickel-base superalloys. This would eliminate catastrophic failure modes in which spalling or particle impact erosion of environmental barrier coatings exposes the underlying bulk metal to oxidative erosion, and the high weight would be “offset to some extent by the low density, efficient fibrous insulations used.” The Columbia disaster, the 200 or so passengers, and the threat of bird strikes would seem to demand facesheets with “inherent ductility,” high density, and high impact resistance.
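The emissivity question is nearly a one-liner, since the radiation-equilibrium skin temperature scales as ε^(-1/4); the local heat fluxes below are illustrative, not BGT trajectory data.

```python
# Radiation-equilibrium skin temperature: T = (q / (eps * sigma))**0.25.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

for q_kw in (50.0, 150.0):          # assumed local heating rates, kW/m^2
    for eps in (0.4, 0.6, 0.8):
        t_eq = (q_kw * 1e3 / (eps * SIGMA)) ** 0.25
        print(f"q = {q_kw:5.0f}kW/m^2, eps = {eps:.1f}: T_eq = {t_eq:5.0f}K")
```

Doubling the emissivity from 0.4 to 0.8 shaves about 16% off the equilibrium temperature, some 250K at the higher flux, which is the whole case for dispersoids with “high emissivity over an extended temperature range.”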
I am of the opinion that “airport-to-orbit” is a curse rather than a blessing for space launch, because it severely limits the degree to which launch costs can be lowered by simply upsizing the vehicle. Indeed, “rockets become more cost-effective the bigger they are — a rocket that can launch twice the payload mass will not be twice as expensive in operational costs, while engineering costs will not scale 1:1,” “even with near-exponential increases in the size of engines and airframes, there is only a linear increase in cost,” “increasing size leads to an increase in the percentage of payload carried,” and “only development cost amortization grows with increasing vehicle size, all other cost factors decrease (specifically) for larger vehicles because of the smaller number of launches required for the same annual mass. Therefore, the trend is towards larger vehicles and smaller launch rates.” The use of landing gear flies in the face of the “avoidance of concentrated point loads.” “Heavy-lift RLVs probably will have to land vertically due to the weight of the propulsion system.” SSTO vehicles pay a heavy price for their simplified ground operations, because “TSTO vehicles require smaller propellant mass fractions,” and “offer greater margin and have higher payload potential than SSTO vehicles.” This is why I have envisioned the General Dynamics Space Freighter, a fully-reusable VTVL TSTO SHLLV similar to the generic two-stage ballistic HLLV from the Satellite Power System program, but with Millennium Express-style body flaps that give the stages a precision alighting capability. This allows the stages to alight in freshwater ponds similar in size to dry docks, so that gantry cranes similar to those at shipyards can rapidly and vertically integrate them directly on an adjacent launch pad. “There is a simple standardized interface point between the vehicle and the payload canister. Most payload operations occur offline. A flight-ready, pre-canisterized payload arrives at the turnaround facility for integration with the vehicle.” We should be thinking in terms of rockets that can launch an entire International Space Station (some assembly required) at once, and with ground operations more like the loading and unloading of a container ship than the turnaround of an airliner.
It is interesting to note that the Rockwell BGT “will incorporate improved occupant restraints and seat attitude adjustments,” and “the short flight time and acceleration environment precludes onboard meal service.” With the OML shrink-wrapped around the multicell payload compartment, “passenger windows” are conceivable.