China for the far side of the moon

China sets out for the far side of the moon

By Dennis Normile


Chang’e-4 will explore a 12-kilometer-deep lunar crater likely formed by a giant asteroid impact.
CNSA

China’s ambitious program of lunar exploration is about to attempt a spacefaring first: On 8 December it will launch a probe intended to land on the far side of the moon. Besides bragging rights, the Chang’e-4 lander and rover are expected to produce a host of new insights into the moon’s composition and history. “Chang’e-4 is an historical mission,” says Bernard Foing, director of the European Space Agency’s (ESA’s) International Lunar Exploration Working Group in Noordwijk, the Netherlands.

Remote observations have shown that the far side of the moon, invisible from Earth, has a much thicker, older crust and is pockmarked by more and deeper craters than the near side, where large dark plains called maria, formed by ancient lava flows, have erased much of the cratering. The big difference “is still a mystery,” Foing says, and Chang’e’s trip “can give clues.”

China started its lunar program 3 decades after the United States and the Soviet Union ended theirs. Chinese geologists eager to study the moon convinced the government to establish the Lunar Exploration Program under the China National Space Administration (CNSA) in 2004. The agency launched Chang’e-1 and Chang’e-2, named after a Chinese moon goddess, in 2007 and 2010, respectively; both produced “a lot of good science,” including high-resolution lunar images and new altimetry measurements, says planetary scientist James Head of Brown University.

In 2013, Chang’e-3 became the first craft to land on the moon since the Soviet Union’s Luna 24 sample return mission in 1976. The lander and the small rover it carried gathered data on the moon’s topography, mineralogy, and elemental abundances. In a first, the rover was equipped with a ground-penetrating radar that profiled buried lava flows and regolith, the broken-up rock and dust that make up the lunar soil.

Chang’e-4 was designed as an identical backup to Chang’e-3, but when that mission proved successful, China’s planners became more ambitious. Going to the far side promised “unique and original science” as well as a chance to “develop China’s deep space observational capabilities,” says Li Chunlai, deputy director-general of the Chinese Academy of Sciences’ National Astronomical Observatories of China (NAOC) in Beijing, which advises CNSA on the program’s science objectives.

Because the moon will block direct radio contact with the lander and rover, Chang’e-4 will rely on a communications relay satellite, launched in May. Called Queqiao, it’s traveling in a loop 65,000 kilometers beyond the moon at Earth-moon Lagrange Point 2, a gravitational balance point. Chang’e-4 itself will land in the Von Kármán crater within the South Pole–Aitken basin. Likely formed by a giant asteroid impact, the basin is roughly 2500 kilometers across and 12 kilometers deep. “It’s the moon’s largest, deepest, and oldest impact structure,” says planetary geoscientist Xiao Long of the China University of Geosciences in Wuhan.
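A quick back-of-the-envelope check (mine, not the article’s): the distance from the moon to the Earth-moon L2 point follows from the Hill-sphere approximation,

```latex
r_{L2} \approx d \left(\frac{m_{\mathrm{moon}}}{3\,m_{\mathrm{Earth}}}\right)^{1/3}
       \approx 384{,}400~\mathrm{km} \times \left(\frac{0.0123}{3}\right)^{1/3}
       \approx 61{,}500~\mathrm{km},
```

consistent with Queqiao’s halo orbit roughly 65,000 kilometers beyond the moon.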

The impact may have brought material from the moon’s upper mantle to the surface, a scenario that data from a visible and near-infrared imaging spectrometer might be able to verify. The imaging spectrometer will also explore the geochemical composition of far-side soil, which is likely to differ from the near side because of the same processes that produced the difference in crust thickness.

The rover’s ground-penetrating radar—similar to that on Chang’e-3—will provide another look down to about 100 meters beneath the surface, probing the depth of the regolith and looking for subsurface structures. Combining the radar data with surface images from cameras on the lander and rover might advance scientists’ understanding of the cratering process.

Going to the far side also opens “a totally new window for radio astronomy,” says Ping Jinsong, a NAOC radio astronomer. On Earth, and even in near-Earth space, natural and humanmade interference hampers low-frequency radio observations. The moon blocks this noise. So the mission carries a trio of low-frequency receivers: one on the lander, one—a collaboration with the Netherlands—on Queqiao, and a third on a microsatellite released from Queqiao into a lunar orbit. (Contact with a second microsatellite carrying a fourth receiver has been lost.) The receivers will listen for solar radio bursts, signals from aurorae on other planets, and the faint signals from the primordial clouds of hydrogen gas that coalesced into the universe’s first stars.

China’s ambitious lunar program will continue with Chang’e-5, a sample return mission, due for launch next year. It will retrieve up to 2 kilograms of soil and rock from the Oceanus Procellarum, a vast lunar mare on the near side untouched by previous landings, and one of the moon’s youngest volcanic flows. “It’s a great objective and will potentially yield some fantastic science,” says Bradley Jolliff, a planetary scientist at Washington University in St. Louis, Missouri, who has urged the United States to launch its own lunar sample return mission.

If China continues its tradition of developing moon missions in pairs, a second sample return mission, Chang’e-6, might follow. Head notes that NASA, ESA, Japan, Russia, and India have all taken a renewed interest in our planet’s companion, which holds clues to Earth’s own history. “Chang’e-4 and 5 are a major part of this renaissance,” Head says, “and in many ways are the current vanguard.”

 

Mars mission got lucky

Mars mission got lucky: NASA lander touched down in a sand-filled crater, easing study of planet’s interior

By Paul Voosen


Pictures from InSight show the lander sits within a flat, sand-filled crater.
JPL-Caltech/NASA

On 27 November, the day after the successful touchdown of NASA’s InSight lander on Mars, after the television crews had departed, technicians here at the Jet Propulsion Laboratory (JPL) were already at work, simulating Mars for a full-size model of the lander, which they call ForeSight. Scientists don’t yet know exactly where on Mars InSight is. But the first few images sent back to Earth have established its immediate environment—and that the lander is slightly tilted, by 4°. So yesterday, NASA engineers were playing in the sand, moving fake Mars rocks into position. They heaved ForeSight up on their shoulders while shoving small blocks underneath a lander leg to get it listing just right.

Looking on from a gallery above ForeSight was Matt Golombek, the JPL geologist who will lead the placement of two of InSight’s instruments, a heat probe and seismometer. From the few photos returned so far, he says, much has been learned about its location, which closely resembles martian terrains previously scouted by the Spirit rover.

For example, InSight landed in what’s called a hollow, a crater that has been filled in with soil and leveled flat. In images taken from the elbow of the lander’s stowed robotic arm, the edge of the crater is visible. Once the team determines the diameter of the crater—it could be meters, maybe tens of meters—researchers can infer its depth and the amount of sand blown into it. Either way, this bodes well for the heat probe instrument, called HP3, which should penetrate the material with ease. “This is about as good news for HP3 as you could possibly hope,” he says.
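The inference from diameter to depth rests on a planetary-science rule of thumb: small, fresh impact craters have depth-to-diameter ratios near 1:5, so the diameter of the hollow bounds how much fill it can hold. A minimal sketch of that arithmetic, assuming the 1:5 ratio (my illustration, not the team’s calculation):

```c
#include <stdio.h>

/* Rough depth bound for a small, fresh impact crater, using the
 * ~1:5 depth-to-diameter rule of thumb (an assumption, not InSight data). */
static double fresh_crater_depth(double diameter_m) {
    return diameter_m / 5.0;
}

int main(void) {
    /* Hypothetical diameters spanning "meters, maybe tens of meters". */
    double diameters[] = {5.0, 20.0, 50.0};
    for (int i = 0; i < 3; i++) {
        printf("diameter %5.1f m -> up to ~%4.1f m of sandy fill\n",
               diameters[i], fresh_crater_depth(diameters[i]));
    }
    return 0;
}
```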

Landing in the hollow was fortunate for another reason. InSight didn’t quite hit the bull’s-eye of its target landing zone, and ended up in terrain that, overall, is rockier than desired. But the hollow is mostly devoid of rocks. One, about 20 centimeters across, sits close to the lander’s feet, whereas three smaller ones lie farther away—but none poses a threat to placing the instruments. The hollow is flat and lacks sand dunes, and small pebbles indicate a surface dense enough to support the weight of the instruments. “We won’t have any trouble whatsoever,” Golombek says.

The biggest mystery for the lander team right now is figuring out exactly where it is. A Mars orbiter set to image the center of the landing zone on Thursday will miss the lander, because InSight came down slightly off-center. An instrument on InSight called the inertial measurement unit has pinned the location to within a 5-kilometer-wide circle. InSight’s entry, descent, and landing team will refine that estimate down to a kilometer or less. “But they haven’t done that yet because they were so happy to have landed safely that we don’t know what they did last night,” Golombek says with a smile. “And they have not yet shown up today.”

There is one more technique that could help: InSight’s third primary experiment, called the Rotation and Interior Structure Experiment (RISE). The main purpose of RISE’s two sensitive listening antennas is to detect wobbles in the martian core. But the InSight team can also use them to map the lander’s latitude and longitude by using the radio signals of passing orbiters. That has given the geologists a location to within about 100 meters.

Now, a friendly competition is on. Golombek and his peers hope to beat the satellites to fixing InSight’s location. They should have until 6 December, when an orbiter will likely capture it. Right now, they’re stretching out the scant imagery, trying to compare their hollow to existing high-resolution maps. Their job will get much easier next week, when the camera on the robotic arm’s elbow will be extended to photograph the lander’s terrain in detail. For now, the arm is stowed—Tuesday was about simple steps, like firing off the small charges that secure the arm to the deck. But later this week, after the camera caps come off and the arm is released, the detailed reconnaissance will begin.

 

Crater under Greenland’s ice

Massive crater under Greenland’s ice points to climate-altering impact in the time of humans

By Paul Voosen

On a bright July day 2 years ago, Kurt Kjær was in a helicopter flying over northwest Greenland—an expanse of ice, sheer white and sparkling. Soon, his target came into view: Hiawatha Glacier, a slow-moving sheet of ice more than a kilometer thick. It advances on the Arctic Ocean not in a straight wall, but in a conspicuous semicircle, as though spilling out of a basin. Kjær, a geologist at the Natural History Museum of Denmark in Copenhagen, suspected the glacier was hiding an explosive secret. The helicopter landed near the surging river that drains the glacier, sweeping out rocks from beneath it. Kjær had 18 hours to find the mineral crystals that would confirm his suspicions.

What he brought home clinched the case for a grand discovery. Hidden beneath Hiawatha is a 31-kilometer-wide impact crater, big enough to swallow Washington, D.C., Kjær and 21 co-authors report today in a paper in Science Advances. The crater was left when an iron asteroid 1.5 kilometers across slammed into Earth, possibly within the past 100,000 years.

Though not as cataclysmic as the dinosaur-killing Chicxulub impact, which carved out a 200-kilometer-wide crater in Mexico about 66 million years ago, the Hiawatha impactor, too, may have left an imprint on the planet’s history. The timing is still up for debate, but some researchers on the discovery team believe the asteroid struck at a crucial moment: roughly 13,000 years ago, just as the world was thawing from the last ice age. That would mean it crashed into Earth when mammoths and other megafauna were in decline and people were spreading across North America.

The impact would have been a spectacle for anyone within 500 kilometers. A white fireball four times larger and three times brighter than the sun would have streaked across the sky. If the object struck an ice sheet, it would have tunneled through to the bedrock, vaporizing water and stone alike in a flash. The resulting explosion packed the energy of 700 1-megaton nuclear bombs, and even an observer hundreds of kilometers away would have experienced a buffeting shock wave, a monstrous thunderclap, and hurricane-force winds. Later, rock debris might have rained down on North America and Europe, and the released steam, a greenhouse gas, could have locally warmed Greenland, melting even more ice.

The news of the impact discovery has reawakened an old debate among scientists who study ancient climate. A massive impact on the ice sheet would have sent meltwater pouring into the Atlantic Ocean—potentially disrupting the conveyor belt of ocean currents and causing temperatures to plunge, especially in the Northern Hemisphere. “What would it mean for species or life at the time? It’s a huge open question,” says Jennifer Marlon, a paleoclimatologist at Yale University.

A decade ago, a small group of scientists proposed a similar scenario. They were trying to explain a cooling event, more than 1000 years long, called the Younger Dryas, which began 12,800 years ago, as the last ice age was ending. Their controversial solution was to invoke an extraterrestrial agent: the impact of one or more comets. The researchers proposed that besides changing the plumbing of the North Atlantic, the impact also ignited wildfires across two continents that led to the extinction of large mammals and the disappearance of the mammoth-hunting Clovis people of North America. The research group marshaled suggestive but inconclusive evidence, and few other scientists were convinced. But the idea caught the public’s imagination despite an obvious limitation: No one could find an impact crater.

Proponents of a Younger Dryas impact now feel vindicated. “I’d unequivocally predict that this crater is the same age as the Younger Dryas,” says James Kennett, a marine geologist at the University of California, Santa Barbara, one of the idea’s original boosters.

But Jay Melosh, an impact crater expert at Purdue University in West Lafayette, Indiana, doubts the strike was so recent. Statistically, impacts the size of Hiawatha occur only every few million years, he says, and so the chance of one just 13,000 years ago is small. No matter who is right, the discovery will give ammunition to Younger Dryas impact theorists—and will turn the Hiawatha impactor into another type of projectile. “This is a hot potato,” Melosh tells Science. “You’re aware you’re going to set off a firestorm?”

It started with a hole. In 2015, Kjær and a colleague were studying a new map of the hidden contours under Greenland’s ice. Based on variations in the ice’s depth and surface flow patterns, the map offered a coarse suggestion of the bedrock topography—including the hint of a hole under Hiawatha.

Kjær recalled a massive iron meteorite in his museum’s courtyard, near where he parks his bicycle. Called Agpalilik, Inuit for “the Man,” the 20-ton rock is a fragment of an even larger meteorite, the Cape York, found in pieces on northwest Greenland by Western explorers but long used by Inuit people as a source of iron for harpoon tips and tools. Kjær wondered whether the meteorite might be a remnant of an impactor that dug the circular feature under Hiawatha. But he still wasn’t confident that it was an impact crater. He needed to see it more clearly with radar, which can penetrate ice and reflect off bedrock.

Kjær’s team began to work with Joseph MacGregor, a glaciologist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who dug up archival radar data. MacGregor found that NASA aircraft often flew over the site on their way to survey Arctic sea ice, and the instruments were sometimes turned on, in test mode, on the way out. “That was pretty glorious,” MacGregor says.

The radar pictures more clearly showed what looked like the rim of a crater, but they were still too fuzzy in the middle. Many features on Earth’s surface, such as volcanic calderas, can masquerade as circles. But only impact craters contain central peaks and peak rings, which form at the center of a newborn crater when—like the splash of a stone in a pond—molten rock rebounds just after a strike. To look for those features, the researchers needed a dedicated radar mission.

Coincidentally, the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven, Germany, had just purchased a next-generation ice-penetrating radar to mount across the wings and body of their Basler aircraft, a twin-propeller retrofitted DC-3 that’s a workhorse of Arctic science. But they also needed financing and a base close to Hiawatha.

Kjær took care of the money. Traditional funding agencies would be too slow, or prone to leaking their idea, he thought. So he petitioned Copenhagen’s Carlsberg Foundation, which uses profits from its global beer sales to finance science. MacGregor, for his part, enlisted NASA colleagues to persuade the U.S. military to let them work out of Thule Air Base, a Cold War outpost on northern Greenland, where German members of the team had been trying to get permission to work for 20 years. “I had retired, very serious German scientists sending me happy-face emojis,” MacGregor says.

Three flights, in May 2016, added 1600 kilometers of fresh data from dozens of transits across the ice—and evidence that Kjær, MacGregor, and their team were onto something. The radar revealed five prominent bumps in the crater’s center, indicating a central peak rising some 50 meters high. And in a sign of a recent impact, the crater bottom is exceptionally jagged. If the asteroid had struck earlier than 100,000 years ago, when the area was ice free, erosion from melting ice farther inland would have scoured the crater smooth, MacGregor says. The radar signals also showed that the deep layers of ice were jumbled up—another sign of a recent impact. The oddly disturbed patterns, MacGregor says, suggest “the ice sheet hasn’t equilibrated with the presence of this impact crater.”

But the team wanted direct evidence to overcome the skepticism they knew would greet a claim for a massive young crater, one that seemed to defy the odds of how often large impacts happen. And that’s why Kjær found himself, on that bright July day in 2016, frenetically sampling rocks all along the crescent of terrain encircling Hiawatha’s face. His most crucial stop was in the middle of the semicircle, near the river, where he collected sediments that appeared to have come from the glacier’s interior. It was hectic, he says—”one of those days when you just check your samples, fall on the bed, and don’t rise for some time.”

In that outwash, Kjær’s team closed its case. Sifting through the sand, Adam Garde, a geologist at the Geological Survey of Denmark and Greenland in Copenhagen, found glass grains forged at temperatures higher than a volcanic eruption can generate. More important, he discovered shocked crystals of quartz. The crystals contained a distinctive banded pattern that can be formed only in the intense pressures of extraterrestrial impacts or nuclear weapons. The quartz makes the case, Melosh says. “It looks pretty good. All the evidence is pretty compelling.”

Now, the team needs to figure out exactly when the collision occurred and how it affected the planet.

The Younger Dryas, named after a small white and yellow arctic flower that flourished during the cold snap, has long fascinated scientists. Until human-driven global warming set in, that period reigned as one of the sharpest recent swings in temperature on Earth. As the last ice age waned, about 12,800 years ago, temperatures in parts of the Northern Hemisphere plunged by as much as 8°C, all the way back to ice age readings. They stayed that way for more than 1000 years, turning advancing forest back into tundra.

The trigger could have been a disruption in the conveyor belt of ocean currents, including the Gulf Stream that carries heat northward from the tropics. In a 1989 paper in Nature, Kennett, along with Wallace Broecker, a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory, and others, laid out how meltwater from retreating ice sheets could have shut down the conveyor. As warm water from the tropics travels north at the surface, it cools while evaporation makes it saltier. Both factors boost the water’s density until it sinks into the abyss, helping to drive the conveyor. Adding a pulse of less-dense freshwater could hit the brakes. Paleoclimate researchers have largely endorsed the idea, although evidence for such a flood has been lacking until recently.

Then, in 2007, Kennett suggested a new trigger. He teamed up with scientists led by Richard Firestone, a physicist at Lawrence Berkeley National Laboratory in California, who proposed a comet strike at the key moment. Exploding over the ice sheet covering North America, the comet or comets would have tossed light-blocking dust into the sky, cooling the region. Farther south, fiery projectiles would have set forests alight, producing soot that deepened the gloom and the cooling. The impact also could have destabilized ice and unleashed meltwater that would have disrupted the Atlantic circulation.

The climate chaos, the team suggested, could explain why the Clovis settlements emptied and the megafauna vanished soon afterward. But the evidence was scanty. Firestone and his colleagues flagged thin sediment layers at dozens of archaeological sites in North America. Those sediments seemed to contain geochemical traces of an extraterrestrial impact, such as a peak in iridium, the exotic element that helped cement the case for a Chicxulub impact. The layers also yielded tiny beads of glass and iron—possible meteoritic debris—and heavy loads of soot and charcoal, indicating fires.

The team met immediate criticism. The decline of mammoths, giant sloths, and other species had started well before the Younger Dryas. In addition, no sign existed of a human die-off in North America, archaeologists said. The nomadic Clovis people wouldn’t have stayed long in any site. The distinctive spear points that marked their presence probably vanished not because the people died out, but rather because those weapons were no longer useful once the mammoths waned, says Vance Holliday, an archaeologist at The University of Arizona in Tucson. The impact hypothesis was trying to solve problems that didn’t need solving.

The geochemical evidence also began to erode. Outside scientists could not detect the iridium spike in the group’s samples. The beads were real, but they were abundant across many geological times, and soot and charcoal did not seem to spike at the time of the Younger Dryas. “They listed all these things that aren’t quite sufficient,” says Stein Jacobsen, a geochemist at Harvard University who studies craters.

Yet the impact hypothesis never quite died. Its proponents continued to study the putative debris layer at other sites in Europe and the Middle East. They also reported finding microscopic diamonds at different sites that, they say, could have been formed only by an impact. (Outside researchers question the claims of diamonds.)

Now, with the discovery of Hiawatha crater, “I think we have the smoking gun,” says Wendy Wolbach, a geochemist at DePaul University in Chicago, Illinois, who has done work on fires during the era.

The impact would have melted 1500 gigatons of ice, the team estimates—about as much ice as Antarctica has lost because of global warming in the past decade. The local greenhouse effect from the released steam and the residual heat in the crater rock would have added more melt. Much of that freshwater could have ended up in the nearby Labrador Sea, a primary site pumping the Atlantic Ocean’s overturning circulation. “That potentially could perturb the circulation,” says Sophia Hines, a marine paleoclimatologist at Lamont-Doherty.

Leery of the earlier controversy, Kjær won’t endorse that scenario. “I’m not putting myself in front of that bandwagon,” he says. But in drafts of the paper, he admits, the team explicitly called out a possible connection between the Hiawatha impact and the Younger Dryas.

Banded patterns in the mineral quartz are diagnostic of shock waves from an extraterrestrial impact. ADAM GARDE, GEUS

The evidence starts with the ice. In the radar images, grit from distant volcanic eruptions makes some of the boundaries between seasonal layers stand out as bright reflections. Those bright layers can be matched to the same layers of grit in cataloged, dated ice cores from other parts of Greenland. Using that technique, Kjær’s team found that most ice in Hiawatha is perfectly layered through the past 11,700 years. But in the older, disturbed ice below, the bright reflections disappear. Tracing the deep layers, the team matched the jumble with debris-rich surface ice on Hiawatha’s edge that was previously dated to 12,800 years ago. “It was pretty self-consistent that the ice flow was heavily disturbed at or prior to the Younger Dryas,” MacGregor says.

Other lines of evidence also suggest Hiawatha could be the Younger Dryas impact. In 2013, Jacobsen examined an ice core from the center of Greenland, 1000 kilometers away. He was expecting to put the Younger Dryas impact theory to rest by showing that, 12,800 years ago, levels of metals that asteroid impacts tend to spread did not spike. Instead, he found a peak in platinum, similar to ones measured in samples from the crater site. “That suggests a connection to the Younger Dryas right there,” Jacobsen says.

For Broecker, the coincidences add up. He had first been intrigued by the Firestone paper, but quickly joined the ranks of naysayers. Advocates of the Younger Dryas impact pinned too much on it, he says: the fires, the extinction of the megafauna, the abandonment of the Clovis sites. “They put a bad shine on it.” But the platinum peak Jacobsen found, followed by the discovery of Hiawatha, has made him believe again. “It’s got to be the same thing,” he says.

Yet no one can be sure of the timing. The disturbed layers could reflect nothing more than normal stresses deep in the ice sheet. “We know all too well that older ice can be lost by shearing or melting at the base,” says Jeff Severinghaus, a paleoclimatologist at the Scripps Institution of Oceanography in San Diego, California. Richard Alley, a glaciologist at Pennsylvania State University in University Park, believes the impact is much older than 100,000 years and that a subglacial lake can explain the odd textures near the base of the ice. “The ice flow over growing and shrinking lakes interacting with rough topography might have produced fairly complex structures,” Alley says.

A recent impact should also have left its mark in the half-dozen deep ice cores drilled at other sites on Greenland, which document the 100,000 years of the current ice sheet’s history. Yet none exhibits the thin layer of rubble that a Hiawatha-size strike should have kicked up. “You really ought to see something,” Severinghaus says.

Brandon Johnson, a planetary scientist at Brown University, isn’t so sure. After seeing a draft of the study, Johnson, who models impacts on icy moons such as Europa and Enceladus, used his code to recreate an asteroid impact on a thick ice sheet. An impact digs a crater with a central peak like the one seen at Hiawatha, he found, but the ice suppresses the spread of rocky debris. “Initial results are that it goes a lot less far,” Johnson says.

Even if the asteroid struck at the right moment, it might not have unleashed all the disasters envisioned by proponents of the Younger Dryas impact. “It’s too small and too far away to kill off the Pleistocene mammals in the continental United States,” Melosh says. And how a strike could spark flames in such a cold, barren region is hard to see. “I can’t imagine how something like this impact in this location could have caused massive fires in North America,” Marlon says.

It might not even have triggered the Younger Dryas. Ocean sediment cores show no trace of a surge of freshwater into the Labrador Sea from Greenland, says Lloyd Keigwin, a paleoclimatologist at the Woods Hole Oceanographic Institution in Massachusetts. The best recent evidence, he adds, suggests a flood into the Arctic Ocean through western Canada instead.

An external trigger may be unnecessary in any case, Alley says. During the last ice age, the North Atlantic saw 25 other cooling spells, probably triggered by disruptions to the Atlantic’s overturning circulation. None of those spells, known as Dansgaard-Oeschger (D-O) events, was as severe as the Younger Dryas, but their frequency suggests an internal cycle played a role in the Younger Dryas, too. Even Broecker agrees that the impact was not the ultimate cause of the cooling. If D-O events represent abrupt transitions between two regular states of the ocean, he says, “you could say the ocean was approaching instability and somehow this event knocked it over.”

Still, Hiawatha’s full story will come down to its age. Even an exposed impact crater can be a challenge for dating, which requires capturing the moment when the impact altered existing rocks—not the original age of the impactor or its target. Kjær’s team has been trying. They fired lasers at the glassy spherules to release argon for dating, but the samples were too contaminated. The researchers are inspecting a blue crystal of the mineral apatite for lines left by the decay of uranium, but it’s a long shot. The team also found traces of carbon in other samples, which might someday yield a date, Kjær says. But the ultimate answer may require drilling through the ice to the crater floor, to rock that melted in the impact, resetting its radioactive clock. With large enough samples, researchers should be able to pin down Hiawatha’s age.

Given the remote location, a drilling expedition to the hole at the top of the world would be costly. But an understanding of recent climate history—and what a giant impact can do to the planet—is at stake. “Somebody’s got to go drill in there,” Keigwin says. “That’s all there is to it.”

 

Transient Execution Attacks

A Systematic Evaluation of Transient Execution Attacks and Defenses

Modern processor optimizations such as branch prediction and out-of-order execution are crucial for performance. Recent research on transient execution attacks, including Spectre and Meltdown, showed, however, that exception or branch misprediction events may leave secret-dependent traces in the CPU’s microarchitectural state. This observation led to a proliferation of new Spectre and Meltdown attack variants and even more ad hoc defenses (e.g., microcode and software patches). Unfortunately, both industry and academia are now focusing on finding efficient defenses that mostly address only one specific variant or exploitation methodology. This is highly problematic, as the state of the art provides only limited insight into the residual attack surface and the completeness of the proposed defenses.
In this paper, we present a sound and extensible systematization of transient execution attacks. Our systematization uncovers 7 (new) transient execution attacks that have been overlooked and not yet been investigated. These include 2 new Meltdown variants: Meltdown-PK on Intel, and Meltdown-BR on Intel and AMD, as well as 5 new Spectre mistraining strategies. We evaluate all 7 attacks in proof-of-concept implementations on processors from 3 major vendors (Intel, AMD, ARM). Our systematization not only yields a complete picture of the attack surface, but also allows a systematic evaluation of defenses. Through this systematic evaluation, we discover that we can still mount transient execution attacks that are supposed to be mitigated by patches that have already been rolled out.
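As a concrete illustration of the attack class being systematized, here is the canonical Spectre-V1 (bounds check bypass) gadget popularized by the original Spectre paper; a minimal sketch, not code from the paper summarized above:

```c
#include <stdint.h>

uint8_t array1[16];
unsigned int array1_size = 16;
uint8_t array2[256 * 4096]; /* probe array: one cache line per byte value */

/* If the branch is mispredicted for an out-of-bounds x, the CPU may
 * transiently read array1[x] (potentially a secret) and touch the line of
 * array2 indexed by that value. The access is squashed architecturally,
 * but the cache footprint survives and can be recovered with a timing
 * side channel such as Flush+Reload. */
void victim(unsigned int x) {
    if (x < array1_size) {
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
```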

 

 

New definition of the kilogram

Metric system overhaul will dethrone the one, true kilogram

By Adrian Cho

The atoms in a sphere of silicon-28 were counted to fix the Avogadro constant and redefine the mole. A copy of Le Grand K, the kilogram standard, can be seen in the sphere’s reflection.

Like an aging monarch, Le Grand K is about to bow to modernity. For 130 years, this gleaming cylinder of platinum-iridium alloy has served as the world’s standard for mass. Kept in a bell jar and locked away at the International Bureau of Weights and Measures (BIPM) in Sèvres, France, the weight has been taken out every 40 years or so to calibrate similar weights around the world. Now, in a revolution far less bloody than the one that cost King Louis XVI his head, it will cede its throne as the one, true kilogram.

When the 26th General Conference on Weights and Measures (CGPM) convenes next week in Versailles, France, representatives of the 60 member nations are expected to vote to redefine the International System of Units (SI) so that four of its base units—the kilogram, ampere, kelvin, and mole—are defined indirectly, in terms of physical constants that will be fixed by fiat. They’ll join the other three base units—the second, meter, and candela (a measure of a light’s perceived brightness)—that are already defined that way. The rewrite eliminates the last physical artifact used to define a unit, Le Grand K.

The shift aims to make the units more stable and allow investigators to develop ever more precise and flexible techniques for converting the constants into measurement units. “That’s the beauty of the redefinition,” says Estefanía de Mirandés, a physicist at BIPM. “You are not limited to one technology.” But even proponents of the arcane changes acknowledge they may bewilder nonexperts. “Cooler heads have said, ‘What are we going to do about teaching people to use this?’” says Jon Pratt, a physicist at the U.S. National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland.

The new SI generalizes the trade-off already exploited to define the meter more precisely in terms of the speed of light. Until 1983, light’s speed was something to be measured in terms of independently defined meters and seconds. However, that year, the 17th CGPM defined the speed of light as exactly 299,792,458 meters per second. The meter then became the measurable thing: the distance light travels in 1/299,792,458 seconds. (The second was pegged to the oscillations of microwave radiation from cesium atoms in 1967.)

The new SI plays the same game with the other units. For example, it defines the kilogram in terms of the Planck constant, which pops up all over quantum mechanics. The constant is now fixed as exactly 6.62607015×10⁻³⁴ kilogram meters squared per second. Because the kilogram appears in that definition, any experiment that previously measured the constant becomes a way to measure out a kilogram instead.
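The inversion works just as it did for the meter. Reading the definition backward (a restatement of the fixed value above, not a new result):

```latex
h = 6.62607015\times10^{-34}~\mathrm{kg\,m^2\,s^{-1}}
\quad\Longrightarrow\quad
1~\mathrm{kg} = \frac{h}{6.62607015\times10^{-34}~\mathrm{m^2\,s^{-1}}},
```

where the meter and second are themselves fixed by the speed of light and the cesium frequency.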

Such experiments are much harder than clocking light speed, a staple of undergraduate physics. One technique employs a device called a Kibble balance, which is a bit like the mythical scales of justice. A mass on one side is counterbalanced by the electromagnetic force produced by an electrical coil on the other side, hanging in a magnetic field. To balance the weight, a current must run through the coil. Researchers can equate the mass to that current times an independent voltage generated when they remove the mass and move the coil up and down in the magnetic field.
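In outline, and assuming the idealized textbook description: the weighing mode balances the weight against the magnetic force on the current-carrying coil, while the velocity mode calibrates away the unknown coil geometry,

```latex
mg = BLI \ \ \text{(weighing mode)}, \qquad U = BLv \ \ \text{(velocity mode)}
\quad\Longrightarrow\quad m = \frac{UI}{gv},
```

where B is the magnetic field, L the length of wire in the coil, I the current, U the induced voltage, v the coil’s velocity, and g the local gravitational acceleration. Measuring U and I with quantum electrical standards is what ties the mass to the Planck constant, as the next paragraph describes.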

The real trickiness enters in sizing up the current and voltage, with quantum mechanical devices that do it in terms of the charge of the electron and the Planck constant. Now that the new SI has fixed those constants, the balance can be used to mete out a slug with a mass of exactly 1 kilogram. The redefinition also effectively makes the quantum techniques the SI standards for measuring voltages and currents, says James Olthoff, a NIST physicist. Until now, the SI has defined the ampere impractically, in terms of the force between infinitely long current-carrying wires separated by a meter.

But applying the complex new definitions will baffle anybody without an advanced degree in physics, argues Gary Price, a metrologist in Sydney, Australia, who used to advise Australia’s National Standards Commission. In fact, he argues, the new SI fails to meet one of the basic requirements of a units system, which is to specify the amount of mass with which to measure masses, the amount of length with which to measure lengths, and so on. “The new SI is not weights and measures at all,” Price says.

Metrologists considered more intuitive redefinitions, Olthoff says. For example, you could define the kilogram as the mass of some big number of a particular atom. But such a standard would be impractical, Olthoff says. Somewhat ironically, researchers have already counted the atoms in exquisitely round, 1-kilogram spheres of silicon-28 to fix an exact value for the mole, formerly defined as the measurable number of carbon-12 atoms in 12 grams of the stuff.

If approved, the new SI goes into effect in May 2019. In the short term, little will change, Pratt says. NIST will continue to propagate weight standards by calibrating its kilogram weights—although now it will do so with its Kibble balance. Eventually, Pratt says, researchers could develop tabletop balances that companies could use to calibrate their own microgram weights.

Next up is a rethink of the second. Metrologists are developing more precise atomic clocks that use optical radiation with higher frequencies than the current cesium standard. They should form the basis for a finer definition of the second, De Mirandés says, perhaps in 2030.

As for Le Grand K, BIPM will keep it and will periodically calibrate it as a secondary mass standard, De Mirandés says. That’s a fairly dignified end for a deposed French king.

Metric makeover

 

An impending vote is expected to redefine metric base units in terms of fixed physical constants.

 

Metric unit | Quantity | Defining constant
Kilogram | Mass | Planck constant
Meter | Distance | Speed of light
Second | Time | Cesium radiation frequency
Ampere | Current | Electron’s charge
Kelvin | Temperature | Boltzmann constant
Mole | Amount of substance | Avogadro constant
Candela | Luminous intensity | Efficacy of light of a specific frequency

Emergent Spacetime Supersymmetry

Observation of Emergent Spacetime Supersymmetry at Superconducting Quantum Criticality

Zi-Xiang Li, Abolhassan Vaezi, Christian B. Mendl, Hong Yao

No definitive evidence of spacetime supersymmetry (SUSY) that transmutes fermions into bosons and vice versa has been revealed in nature so far. Moreover, whether spacetime SUSY in 2+1 and higher dimensions can occur or emerge in generic microscopic models remains open. Here, we introduce a lattice realization of a single Dirac fermion with attractive Hubbard interactions that preserves both time-reversal and chiral symmetries. By performing numerically exact, sign-problem-free determinant quantum Monte Carlo simulations, we show that the interacting single Dirac fermion in 2+1 dimensions features a superconducting quantum critical point (QCP). More remarkably, we demonstrate that the N=2 spacetime SUSY in 2+1D emerges at the superconducting QCP by showing that the fermions and bosons have identical anomalous dimensions 1/3, a hallmark of the emergent SUSY. To the best of our knowledge, this is the first observation of emergent 2+1D spacetime SUSY in quantum microscopic models. We further show some experimental signatures that can be measured to test such emergent SUSY in candidate systems such as the surface of 3D topological insulators.

Numerical observation of emergent spacetime supersymmetry at quantum criticality

Abstract

No definitive evidence of spacetime supersymmetry (SUSY) that transmutes fermions into bosons and vice versa has been revealed in nature so far. Moreover, the question of whether spacetime SUSY in 2 + 1 and higher dimensions can emerge in generic lattice microscopic models remains open. Here, we introduce a lattice realization of a single Dirac fermion in 2 + 1 dimensions with attractive interactions that preserves both time-reversal and chiral symmetries. By performing sign problem–free determinant quantum Monte Carlo simulations, we show that an interacting single Dirac fermion in 2 + 1 dimensions features a superconducting quantum critical point (QCP). We demonstrate that the N=2 spacetime SUSY in 2 + 1 dimensions emerges at the superconducting QCP by showing that the fermions and bosons have identical anomalous dimensions 1/3, a hallmark of the emergent SUSY. We further show some experimental signatures that may be measured to test such emergent SUSY in candidate systems.

 

Truly Intelligent Machines

To Build Truly Intelligent Machines, Teach Them Cause and Effect

Kevin Hartnett

Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.

Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
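The malaria example reduces to Bayes’ rule, the building block of Pearl’s networks. A minimal sketch with hypothetical numbers (the probabilities below are invented for illustration, not clinical data):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical prior and likelihoods. */
    double p_malaria = 0.01;          /* P(malaria) among returning travelers */
    double p_fever_if_malaria = 0.90; /* P(fever | malaria) */
    double p_fever_if_not = 0.10;     /* P(fever | no malaria) */

    /* Bayes' rule: P(malaria | fever) = P(fever|malaria)P(malaria)/P(fever). */
    double p_fever = p_fever_if_malaria * p_malaria
                   + p_fever_if_not * (1.0 - p_malaria);
    double posterior = p_fever_if_malaria * p_malaria / p_fever;

    printf("P(malaria | fever) = %.3f\n", posterior); /* ~0.083 */
    return 0;
}
```

A Bayesian network chains many such local updates through a graph of variables, which is what made diagnosis at scale tractable.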

But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.

In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.

Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil. Quanta Magazine sat down with Pearl at a recent conference in San Diego and later held a follow-up interview with him by phone. An edited and condensed version of those conversations follows.

Why is your new book called “The Book of Why”?

It means to be a summary of the work I’ve been doing the past 25 years about cause and effect, what it means in one’s life, its applications, and how we go about coming up with answers to questions that are inherently causal. Oddly, those questions have been abandoned by science. So I’m here to make up for the neglect of science.

(Danish comment: “science” here means computer science, in Danish “datalogi.”)

That’s a dramatic thing to say, that science has abandoned cause and effect. Isn’t that exactly what all of science is about?

Of course, but you cannot see this noble aspiration in scientific equations. The language of algebra is symmetric: If X tells us about Y, then Y tells us about X. I’m talking about deterministic relationships. There’s no way to write in mathematics a simple fact — for example, that the upcoming storm causes the barometer to go down, and not the other way around.

Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X. It sounds like a terrible thing to say against science, I know. If I were to say it to my mother, she’d slap me.

But science is more forgiving: Seeing that we lack a calculus for asymmetrical relations, science encourages us to create one. And this is where mathematics comes in. It turned out to be a great thrill for me to see that a simple calculus of causation solves problems that the greatest statisticians of our time deemed to be ill-defined or unsolvable. And all this with the ease and fun of finding a proof in high-school geometry.

You made your name in AI a few decades ago by teaching machines how to reason probabilistically. Explain what was going on in AI at the time.

The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.

Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.
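For reference, the saving Pearl describes comes from factorization: a full joint distribution over n binary variables needs on the order of 2^n numbers, whereas a Bayesian network stores one conditional table per node,

```latex
P(x_1,\dots,x_n) \;=\; \prod_{i=1}^{n} P\!\left(x_i \,\middle|\, \mathrm{pa}(x_i)\right),
```

so if each node has at most k parents, storage drops from 2^n − 1 entries to at most n·2^k, and inference algorithms can exploit the same sparsity.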

Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?

In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.

I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

I felt like an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.
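Pearl’s distinction can be written in his do-notation: conditioning describes what we see, intervening describes what we do, and with a confounder Z the two can differ. A schematic statement in standard causal-inference notation (not from the interview itself):

```latex
P(Y \mid X) \;\neq\; P\bigl(Y \mid \mathrm{do}(X)\bigr),
\qquad
P\bigl(Y \mid \mathrm{do}(X{=}x)\bigr) \;=\; \sum_{z} P(Y \mid x, z)\,P(z),
```

where the second identity, the adjustment formula, holds when Z blocks all back-door paths from X to Y.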

People are excited about the possibilities for AI. You’re not?

As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.

No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.

What are the prospects for having machines that share our intuition about cause and effect?

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.

The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models. 

When you share these ideas with people working in AI today, how do they react?

AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.

Are you suggesting there’s a trend developing away from machine learning?

Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?

That was the last thing I wanted to ask you.

I’m glad you didn’t ask me about free will.

In that case, what do you think about free will?

We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.

In what way?

You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.

Will it be obvious when robots have free will?

I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t. So the first sign will be communication; the next will be better soccer.

Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?

It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.

So how will we know when AI is capable of committing evil?

When it is obvious to us that there are software components the robot consistently ignores. When the robot follows the advice of some components but ignores others that maintain norms of behavior, norms that have been programmed into it or are expected to be there on the basis of past learning, and it stops following them.

Comment

Physics faces a similar problem. Maxwell’s equations for the electromagnetic field are time-symmetric, so the time axis has no preferred orientation; cause and effect do not exist in the formalism.

How, then, does one explain that the voltage variations of a dipole antenna are the cause of the electromagnetic radiation it emits? One intervenes by computing the emitted field from the retarded potential.

The time symmetry of physical theories led the philosopher Bertrand Russell to claim (1913) that the existence of cause and effect was pure fiction. He apparently never had to calculate the emission of radio waves from an antenna, even though radiotelegraphy had been invented by then.

There is a news item about such robots:

Robots—like people—use ‘imagination’ to learn concepts

By Chris Burns

Instead of relying on a list of rules or training on a massive data set like standard computers, a new computational framework for learning lets robots come up with their own concepts by detecting abstract differences in images and then recreating them in real life.

 

 

 

New generation of flow batteries

New generation of ‘flow batteries’ could eventually sustain a grid powered by the sun and wind

Batteries already power electronics, tools, and cars; soon, they could help sustain the entire electric grid. With the rise of wind and solar power, energy companies are looking for ways to keep electrons flowing when the sun doesn’t shine and the wind ebbs. Giant devices called flow batteries, using tanks of electrolytes capable of storing enough electricity to power thousands of homes for many hours, could be the answer. But most flow batteries rely on vanadium, a somewhat rare and expensive metal, and alternatives are short-lived and toxic.

Last week, researchers reported overcoming many of these drawbacks with a potentially cheap, long-lived, and safe flow battery. The work is part of a wave of advances generating optimism that a new generation of flow batteries will soon serve as a backstop for the deployment of wind and solar power on a grand scale. “There is lots of progress in this field right now,” says Ulrich Schubert, a chemist at Friedrich Schiller University in Jena, Germany.

Lithium-ion batteries—the sort in laptops and Teslas—have a head start in grid-scale applications. Lithium batteries already bank backup power for hospitals, office parks, and even towns. But they don’t scale up well to the larger sizes needed to provide backup power for cities, says Michael Perry, associate director for electrochemical energy systems at United Technologies Research Center in East Hartford, Connecticut.

That’s where flow batteries come in. They store electrical charge in tanks of liquid electrolyte that is pumped through electrodes to extract the electrons; the spent electrolyte returns to the tank. When a solar panel or turbine provides electrons, the pumps push spent electrolyte back through the electrodes, where the electrolyte is recharged and returned to the holding tank. Scaling up the batteries to store more power simply requires bigger tanks of electrolytes. Vanadium has become a popular electrolyte component because the metal charges and discharges reliably for thousands of cycles. Rongke Power, in Dalian, China, for example, is building the world’s largest vanadium flow battery, which should come online in 2020. The battery will store 800 megawatt-hours of energy, enough to power thousands of homes. The market for flow batteries—led by vanadium cells and zinc-bromine, another variety—could grow to nearly $1 billion annually over the next 5 years, according to the market research firm MarketsandMarkets.

But the price of vanadium has risen in recent years, and experts worry that if vanadium demand skyrockets, prices will, too. A leading alternative replaces vanadium with organic compounds that also grab and release electrons. Organic molecules can be precisely tailored to meet designers’ needs, says Tianbiao Liu, a flow battery expert at Utah State University in Logan. But organics tend to degrade and need replacement after a few months, and some compounds work only with powerful acidic or basic electrolytes that can eat away at the pumps and prove dangerous if their tanks leak.

Researchers are now in the midst of “a second wave of progress” in organic flow batteries, Schubert says. In July, a group led by Harvard University materials scientist Michael Aziz reported in Joule that they had devised a long-lived organic molecule that loses only 3% of its charge-carrying capacity per year. Although that’s still not stable enough, it was a big jump from previous organic flow cell batteries that lost a similar amount every day, Liu says.

Iron, which is cheap and good at grabbing and giving up electrons, is another promising alternative. A Portland, Oregon, company called ESS, for example, sells such batteries. But ESS’s batteries require electrolytes operating at a pH between one and four, with acidity similar to vinegar’s.

Now, Liu and his colleagues have come up with a flow battery that operates at neutral pH. They started with an iron-containing electrolyte, ferrocyanide, that has been studied in the past. But in previous ferrocyanide batteries, the electrolyte was dissolved in water containing sodium or potassium salts, which provide positively charged ions that move through the cell to balance the electron movement during charging and discharging. Ferrocyanide isn’t very soluble in those salt solutions, limiting the electrical storage capacity of the battery.
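The link between solubility and capacity is simple stoichiometry: stored charge scales as concentration × volume × electrons transferred × Faraday’s constant. A minimal sketch with illustrative numbers of my own (the concentration and cell voltage below are assumptions, not figures from the paper):

```c
#include <stdio.h>

int main(void) {
    const double faraday = 96485.0;  /* coulombs per mole of electrons */

    /* Hypothetical tank, for illustration only. */
    double conc_mol_per_l = 1.0;     /* electrolyte concentration */
    double volume_l = 1000.0;        /* tank volume */
    double electrons = 1.0;          /* electrons transferred per molecule */
    double cell_voltage = 1.0;       /* assumed average cell voltage */

    double charge_c = conc_mol_per_l * volume_l * electrons * faraday;
    double energy_kwh = charge_c * cell_voltage / 3.6e6; /* joules -> kWh */

    printf("charge: %.3g C, energy: %.1f kWh\n", charge_c, energy_kwh);
    /* ~9.65e7 C and ~26.8 kWh for these inputs. */
    return 0;
}
```

Because capacity is linear in concentration, dissolving twice as much ferrocyanide doubles the stored charge, which is exactly the gain the ammonium swap described next provides.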

So Liu and his colleagues replaced the salts with a nitrogen-based compound called ammonium that allows at least twice as much ferrocyanide to dissolve, doubling the battery’s capacity. The resulting battery is not as energy-dense as a vanadium flow battery. But in last week’s issue of Joule, Liu and his colleagues reported that their iron-based organic flow battery shows no signs of degradation after 1000 charge-discharge cycles, equivalent to about 3 years of operation. And because the electrolytes are neutral pH and water-based, a leak likely wouldn’t produce environmental damage.

“Overall, that’s an excellent piece of work,” says Qing Wang, a materials scientist at the National University of Singapore. Still, he and others caution that the battery is sluggish to charge and discharge. Liu says he and his colleagues plan to test other electrolyte additives, among other fixes, to boost conductivity.

It’s too early to say which flow battery chemistry—if any—will support the renewable grid of the future. Another contender uses electrolytes made from metal-containing organic compounds called polyoxometalates, which store far more energy in the same volume than the competition. In the 10 October issue of Nature Chemistry, for example, researchers led by Leroy Cronin, a chemist at the University of Glasgow in the United Kingdom, reported a polyoxometalate flow battery that stores up to 40 times as much charge as vanadium cells of the same volume. The downside for now is that these electrolytes are highly viscous and thus more challenging to pump through the battery, Cronin says. “Today, no one flow battery fills all the needs,” Schubert says. That means there’s still plenty of room for innovation.