Can all quantum computations be solved?

Gödel’s incompleteness theorems are connected to unsolvable calculations in quantum physics.

Nature, 09 December 2015

Kurt Gödel (left) demonstrated that some mathematical statements are undecidable; Alan Turing (right) connected that proof to unresolvable algorithms in computer science.

A logical paradox at the heart of mathematics and computer science turns out to have implications for the real world, making a basic question about matter fundamentally unanswerable.

In 1931, Austrian-born mathematician Kurt Gödel shook the academic world when he announced that some statements are ‘undecidable’, meaning that it is impossible to prove them either true or false. Three researchers have now found that the same principle makes it impossible to calculate an important property of a material — the gaps between the lowest energy levels of its electrons — from an idealized model of its atoms.

The result also raises the possibility that a related problem in particle physics — which has a US$1-million prize attached to it — could be similarly unsolvable, says Toby Cubitt, a quantum-information theorist at University College London and one of the authors of the study.

The finding, published on 9 December in Nature, and in a longer, 140-page version on the arXiv preprint server, is “genuinely shocking, and probably a big surprise for almost everybody working on condensed-matter theory”, says Christian Gogolin, a quantum information theorist at the Institute of Photonic Sciences in Barcelona, Spain.

From logic to physics

Gödel’s finding was first connected to the physical world in 1936, by British mathematician Alan Turing. “Turing thought more clearly about the relationship between physics and logic than Gödel did,” says Rebecca Goldstein, a US author who has written a biography of Gödel.

Turing reformulated Gödel’s result in terms of algorithms executed by an idealized computer that can read or write one bit at a time. He showed that some problems are undecidable by such a ‘Turing machine’: that is, it is impossible to tell whether the machine will complete the calculation in a finite amount of time. And there is no general test to see whether any particular algorithm is one of these undecidable cases. The same restrictions apply to real computers, since any such device is mathematically equivalent to a Turing machine.

Since the 1990s, theoretical physicists have tried to embody Turing’s work in idealized models of physical phenomena. But “the undecidable questions that they spawned did not directly correspond to concrete problems that physicists are interested in”, says Markus Müller, a theoretical physicist at Western University in London, Canada, who published one such model with Gogolin and another collaborator in 2012.

“I think it’s fair to say that ours is the first undecidability result for a major physics problem that people would really try to solve,” says Cubitt.

Spectral gap

Cubitt and his collaborators focused on calculating the ‘spectral gap’: the gap between the lowest energy level that electrons can occupy in a material, and the next one up. This determines some of a material’s basic properties. In some materials, for example, lowering the temperature causes the gap to close, which leads the material to become a superconductor.

The team started with a theoretical model of a material: an infinite 2D crystal lattice of atoms. The quantum states of the atoms in the lattice embody a Turing machine, containing the information for each step of a computation to find the material’s spectral gap.

Cubitt and his colleagues showed that for an infinite lattice, it is impossible to know whether the computation ends, so that the question of whether the gap exists remains undecidable.

For a finite chunk of 2D lattice, however, the computation always ends in a finite time, leading to a definite answer. At first sight, therefore, the result would seem to have little relation to the real world. Real materials are always finite, and their properties can be measured experimentally or simulated by computer.

But the undecidability ‘at infinity’ means that even if the spectral gap is known for a certain finite-size lattice, it could change abruptly — from gapless to gapped or vice versa — when the size increases, even by just a single extra atom. And because it is “provably impossible” to predict when — or if — it will do so, Cubitt says, it will be difficult to draw general conclusions from experiments or simulations.

Million-dollar question

Cubitt says that the team ultimately wants to study a related problem in particle physics called the Yang–Mills mass-gap problem, which the Clay Mathematics Institute in Peterborough, New Hampshire, has named one of its Millennium Prize Problems. The institute is offering $1 million to anyone who is able to solve it.

The mass-gap problem relates to the observation that the particles that carry the weak and strong nuclear forces have mass. This is also why the weak and strong nuclear forces have limited range, unlike gravity and electromagnetism, and why quarks are only found as part of composite particles such as protons or neutrons, never in isolation. The problem is that there is no rigorous mathematical theory that explains why the force carriers have mass, when photons, the carriers of the electromagnetic force, are massless.

Cubitt hopes that eventually, his team’s methods and ideas will show that the Yang–Mills mass-gap problem is undecidable. But at the moment it doesn’t seem obvious how to do it, he says. “We’re a long way from winning the $1 million.”

Nature | doi:10.1038/nature.2015.18983

 

Quantum teleportation

Alice and Bob are far apart, as in my earlier blog post on superdense coding. They each hold one electron. The two electrons share the state (|00>+|11>)/√2, which expresses entanglement. Alice also holds a second electron in the unknown quantum state a|0>+b|1>. Alice has no idea of the values of the probability amplitudes, but she and Bob want to change Bob’s electron so that it ends up in the state a|0>+b|1>. They want to teleport the state from Alice’s electron to Bob’s electron. As we shall see, this only requires Alice to send Bob two classical bits, even though an electron’s state is constrained only by the condition a²+b² = 1. It is impressive that we can transmit a state with a continuum of possibilities by transferring only two classical bits. Alice starts with an unknown qubit, which is transferred to Bob without either party ever knowing its state.

Here follows a summary of some important gates and circuits:
The Hadamard gate, which acts on one qubit, is defined by
H(|0>) = (|0>+|1>)/√2 and H(|1>) = (|0>-|1>)/√2.
The CNOT gate, which acts on two qubits, is defined by
CNOT(|x>|y>) = |x>|x⊕y>, where x⊕y ≡ x+y mod 2.

Two or more gates applied in sequence are called a circuit.
The Bell circuit, which acts on two qubits, is defined by
B(|x>|y>) = CNOT(H(|x>)|y>).
The first qubit is sent through a Hadamard gate and then on to the first (control) input of a CNOT gate. The second qubit is sent to the second input of the same CNOT gate.
The Hadamard matrix is defined by H ≡ [[1,1]’,[1,-1]’]/√2. It satisfies H = H’ and, by direct multiplication, H² = I, so it is its own inverse; hence H’H = I, the matrix is orthogonal, and the Hadamard gate is reversible. CNOT permutes the basis states, so it is also orthogonal and reversible. The Bell circuit is therefore a reversible circuit.
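To make this concrete, here is a minimal NumPy sketch (my own illustration, not part of the original derivation) that builds the two matrices and checks that each is its own inverse:

import numpy as np

# Hadamard matrix, and CNOT in the ordered basis (|00>, |01>, |10>, |11>)
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Both gates square to the identity, so both are orthogonal and reversible
assert np.allclose(H @ H, np.eye(2))
assert np.allclose(CNOT @ CNOT, np.eye(4))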

The inverse Bell circuit, which acts on two qubits, is defined by
B⁻¹(|x>|y>) = H(|x>)|x⊕y>, where x⊕y ≡ x+y mod 2.

The action of the Bell circuit and its inverse on the 4 possible qubit pairs is as follows:
B(|00>) = (|00>+|11>)/√2 and B⁻¹(|00>) = (|00>+|10>)/√2
B(|01>) = (|01>+|10>)/√2 and B⁻¹(|01>) = (|01>+|11>)/√2
B(|10>) = (|00>-|11>)/√2 and B⁻¹(|10>) = (|01>-|11>)/√2
B(|11>) = (|01>-|10>)/√2 and B⁻¹(|11>) = (|00>-|10>)/√2
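This table can be verified mechanically. A small sketch (again my own, under the same matrix conventions) builds B and B⁻¹ and applies them to the four basis vectors; the printed entries are ±1/√2 ≈ ±0.707 in exactly the pattern above:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# B applies H to the first qubit, then CNOT; B⁻¹ applies them in reverse order
B = CNOT @ np.kron(H, np.eye(2))
Binv = np.kron(H, np.eye(2)) @ CNOT
assert np.allclose(Binv @ B, np.eye(4))   # B⁻¹ really undoes B

for idx, label in enumerate(['00', '01', '10', '11']):
    v = np.zeros(4)
    v[idx] = 1.0
    print(f"B|{label}> = {np.round(B @ v, 3)}   B⁻¹|{label}> = {np.round(Binv @ v, 3)}")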

We can deduce a few things about how the method must work. Bob must end up with an electron in the state (a|0>+b|1>). Bob and Alice each start out holding one electron from the entangled state (|00>+|11>)/√2. To break this correlation between the two electrons, someone must perform a measurement. It cannot be Bob, since a measurement on his side would leave him with either |0> or |1>. It must therefore be Alice who performs a measurement. The third electron must somehow be involved: Alice must do something to entangle the third electron with her other electron, which in turn is entangled with Bob’s electron. Alice can apply the inverse Bell circuit to the 2 qubits she controls: the unknown qubit (a|0>+b|1>) and her half of the entangled pair (|00>+|11>)/√2.

The tensor product of the three qubits is initially given by
(a|0>+b|1>)⊗(|00>+|11>)/√2 =
a/√2|00>|0> +
b/√2|10>|0> +
a/√2|01>|1> +
b/√2|11>|1>

Alice applies the inverse Bell circuit, B⁻¹, to the first two of the three qubits in the tensor product. The sum then takes the form:
(a/2)(|00>+|10>)|0> + (b/2)(|01>-|11>)|0> +
(a/2)(|01>+|11>)|1> + (b/2)(|00>-|10>)|1>

I collect all the factors multiplying the ordered basis vectors (|00>,|01>,|10>,|11>) of the first two qubits, to which Alice has access:
[|00>(a|0>+b|1>)+|01>(a|1>+b|0>)+|10>(a|0>-b|1>)+|11>(a|1>-b|0>)]/2.
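As a check on the algebra, the following sketch (my own, with the example amplitudes a = 0.6, b = 0.8 chosen so that a²+b² = 1) computes the full three-qubit state after Alice’s inverse Bell circuit; the printed amplitudes match the bracketed expression above:

import numpy as np

a, b = 0.6, 0.8                              # example amplitudes with a² + b² = 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
Binv = np.kron(H, np.eye(2)) @ CNOT

psi = np.array([a, b])                       # Alice's unknown qubit a|0> + b|1>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # the shared pair (|00> + |11>)/√2
state = np.kron(psi, bell)                   # 8 amplitudes, ordered |q1 q2 q3>

state = np.kron(Binv, np.eye(2)) @ state     # B⁻¹ on the first two qubits only
print(np.round(state, 4))
# -> [ 0.3  0.4  0.4  0.3  0.3 -0.4 -0.4  0.3], i.e. a/2 and ±b/2 in the
#    pattern |00>(a|0>+b|1>), |01>(a|1>+b|0>), |10>(a|0>-b|1>), |11>(a|1>-b|0>)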

The amplitudes a and b satisfy the condition a²+b² = 1, so the probability of measuring each of the 4 basis vectors is (1/2)²(a²+b²) = 1/4. Alice now performs a measurement on the first qubit pair, noting the basis vector:
|00>(a|0>+b|1>)
|01>(a|1>+b|0>)
|10>(a|0>-b|1>)
|11>(a|1>-b|0>)

Alice then sends 2 classical bits, namely one of [00,01,10,11], so that Bob knows which case occurred. Only for the bit combination 00 does Bob’s electron already have the desired spin. Fortunately, Bob can transform the electron’s spin into the desired state in the other 3 cases.

One must understand that Alice’s measurement causes Bob’s electron to make a quantum jump to one of the 4 states listed above. To recover the original state, Bob must apply a Pauli transformation. There are 4 orthogonal Pauli matrices:
I = [[1,0]’,[0,1]’]
Z = [[1,0]’,[0,-1]’]
X = [[0,1]’,[1,0]’]
Y = [[0,-1]’,[1,0]’]

These Pauli gates satisfy:
I(a|0>+b|1>) = a|0>+b|1>
Z(a|0>-b|1>) = a|0>+b|1>
X(b|0>+a|1>) = a|0>+b|1>
Y(a|1>-b|0>) = a|0>+b|1>
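These four identities are easy to verify numerically. A small sketch (my own, using the real form of Y given above and example amplitudes):

import numpy as np

a, b = 0.6, 0.8                    # any real amplitudes with a² + b² = 1
target = np.array([a, b])          # the desired state a|0> + b|1>

I = np.eye(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, 1], [-1, 0]])    # columns (0,-1)' and (1,0)', the real form used here

assert np.allclose(I @ np.array([a, b]),  target)   # case 00: state already correct
assert np.allclose(X @ np.array([b, a]),  target)   # case 01: b|0> + a|1>
assert np.allclose(Z @ np.array([a, -b]), target)   # case 10: a|0> - b|1>
assert np.allclose(Y @ np.array([-b, a]), target)   # case 11: a|1> - b|0>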

It is important to understand that Alice’s measurement instantly makes Bob’s electron jump from being part of an entangled pair to one of 4 possible spin states: a|0>+b|1>, a|1>+b|0>, a|0>-b|1>, a|1>-b|0>. This expresses the fact that quantum mechanics is not locally realistic (sorry, Einstein). Bob only learns which spin state he actually holds once he has received the signal carrying the 2 classical bits; he can then apply the relevant Pauli transformation. That signal cannot travel faster than light. It is therefore clear that no electron moves from Alice to Bob: it is the information about the electron’s quantum state that travels, at no more than the speed of light. Through her measurement, Alice destroys her own electron’s quantum state.

Teleportation provides a method for transporting a qubit from one place to another without actually transporting the material particle. The method is used in various ways to correct errors, which is extremely important for quantum computing.
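To tie the pieces together, here is a self-contained sketch of one full teleportation round (my own illustration, using real amplitudes as assumed throughout this post): Alice applies B⁻¹ and measures, and Bob applies the Pauli gate named by her two classical bits:

import numpy as np

rng = np.random.default_rng()

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
Binv = np.kron(H, np.eye(2)) @ CNOT
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, 1], [-1, 0]])
correction = {0: np.eye(2), 1: X, 2: Z, 3: Y}    # keyed by Alice's 2 classical bits

theta = rng.uniform(0, 2 * np.pi)                # a random "unknown" qubit for Alice
a, b = np.cos(theta), np.sin(theta)

state = np.kron([a, b], np.array([1, 0, 0, 1]) / np.sqrt(2))   # 3-qubit start state
state = np.kron(Binv, np.eye(2)) @ state                       # Alice's inverse Bell circuit

# Alice measures her two qubits; each outcome k = 0..3 occurs with probability 1/4
probs = [np.sum(state[2 * k : 2 * k + 2] ** 2) for k in range(4)]
k = rng.choice(4, p=probs)
bob = state[2 * k : 2 * k + 2]                   # Bob's qubit after the collapse
bob = bob / np.linalg.norm(bob)

bob = correction[k] @ bob                        # Bob applies the gate Alice's bits name
assert np.allclose(bob, [a, b])                  # the unknown state has been teleported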

f(R) modified gravity

Realistic simulations of galaxy formation in f(R) modified gravity

Nature Astronomy (2019)

Future astronomical surveys will gather information that will allow gravity to be tested on cosmological scales, where general relativity is currently poorly constrained. We present a set of cosmological hydrodynamical simulations that follow galaxy formation in f(R) modified gravity models and are dedicated to finding observational signatures to help distinguish general relativity from alternatives using this information. The simulations employ the IllustrisTNG model and a new modified gravity solver in AREPO, allowing the interplay of baryonic feedback and modified gravity to be studied in the same simulation, and the degeneracy between them in the matter power spectrum to be resolved. We find that the neutral hydrogen power spectrum is suppressed substantially in f(R) gravity, which allows this model to be constrained using upcoming data from the Square Kilometre Array. Disk galaxies can form in our f(R) gravity simulations, even in the partially screened regime, and their stellar properties are only mildly affected. We conclude that modified gravity allows the formation of realistic galaxies and leaves observable signatures on large scales.

 

Debate over Hubble constant

Debate intensifies over speed of expanding universe

By Joshua Sokol

This week, leading experts at clocking one of the most contested numbers in the cosmos—the Hubble constant, the rate at which the universe expands—gathered in hopes that new measurements could point the way out of a brewing storm in cosmology.

No luck so far. A hotly anticipated new cosmic yardstick, reliant on red giants, has served only to muddle the debate about the actual value of the constant, and other measurements brought no resolution. “It was the craziest conference I’ve been to,” said Daniel Scolnic, an astrophysicist at Duke University in Durham, North Carolina. “Everyone felt like they were on this rollercoaster.”

The meeting, at the Kavli Institute for Theoretical Physics in Santa Barbara, California, was the latest episode in a saga stretching back to the 1920s, when Edwin Hubble established that the farther one looks into space, the faster galaxies are speeding away from Earth. Since then, scientists have devoted entire careers to refining the rate of that flow, Hubble’s eponymous constant, or H0. But recently, the problem has hardened into a transdisciplinary dispute.

On one side are cosmologists who gather data from the greatest distances, such as a map of the big bang’s afterglow recorded by the European satellite Planck. They compare the apparent size of features in that afterglow with their actual size, as predicted by theory, to calculate an H0 of about 67. That means distant galaxies should be flying away from the Milky Way 67 kilometers per second faster for every additional megaparsec astronomers gaze out into space.
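In concrete terms, Hubble’s law is simple arithmetic: v = H0 × d. A trivial back-of-the-envelope sketch (the 100-megaparsec distance is just an example) shows how much the two competing values disagree:

# Hubble's law: recession velocity v = H0 * d
d_mpc = 100                       # example distance in megaparsecs
for H0 in (67, 74):               # km/s per Mpc: early-universe vs. local value
    print(f"H0 = {H0}: v = {H0 * d_mpc} km/s")
# the same galaxy recedes 700 km/s faster under the higher value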

But when astronomers look at actual galaxies, using delicate chains of inferences to make up for the universe’s frustrating lack of tick marks, they get a different number. Over the past few years, a team led by Nobel laureate Adam Riess from Johns Hopkins University in Baltimore, Maryland, has cataloged standard candles: astrophysical objects with a known brightness, whose distance can be calculated based on how bright they appear from Earth. The team uses the supernova explosions of white dwarf stars as standard beacons to measure distances far out into the swelling universe; they calibrate the brightness of nearby supernovae by monitoring variable stars, called Cepheids, in the same galaxies. The stars’ light waxes and wanes at a rate that signals their intrinsic brightness. Earlier this year, this team, dubbed SH0ES, reported an H0 of about 74, a standard-bearing measurement for the astronomers’ side.
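The standard-candle logic rests on the inverse-square law: if an object’s intrinsic luminosity L is known, its distance follows from its measured flux F. A minimal sketch (the numbers below are illustrative assumptions, not measurements from the article):

import math

def candle_distance(luminosity_watts, flux_w_per_m2):
    """Inverse-square law: F = L / (4 pi d^2)  =>  d = sqrt(L / (4 pi F))."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# Illustrative numbers: a type Ia supernova peak luminosity of ~1e36 W,
# observed at a flux of 1e-14 W per square meter
d_m = candle_distance(1e36, 1e-14)
print(f"distance ≈ {d_m / 3.086e22:.0f} Mpc")   # 3.086e22 m per megaparsec, ≈ 91 Mpc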

If the discrepancy between the cosmologists and the astronomers can’t be chalked up to a subtle, hidden methodological flaw, modern physics itself could be due for a revision. Theorists, salivating at the possibility, have begun to dream up hidden ingredients in the early universe—new particles or interactions—that could patch over the gulf. But they haven’t found a fix that doesn’t cause new problems. With stakes that high, astronomers put their heads together in Santa Barbara to double and triple check the SH0ES result against other ways to measure the constant.

A team called H0LiCOW relied on gravitational lenses, freak cosmic alignments where the light from a very distant, flickering beacon called a quasar is bent into multiple images on the sky by the gravity of another, intervening galaxy. Each image is formed by light traveling along a different path across expanding space. Because of that, though, the flickers don’t all arrive at Earth at the same time. Based on the time delays and not-so-simple geometry, the team calculated the H0 from six different such systems and came up with a value of roughly 73—“very close” to the SH0ES results, says Geoff Chih-Fan Chen, a team member at the University of California, Davis. The team didn’t check its final number—published just before the meeting on the preprint server arXiv—until the very end of its analysis to avoid bias, Chen says. “Some people will unconsciously want to get the right answer.”

One point for possible new physics. But the meeting brought a twist. On the first evening, the Carnegie-Chicago Hubble Program team, led by Wendy Freedman, a veteran H0 measurer at the University of Chicago in Illinois, uploaded its own long-anticipated paper—already accepted to The Astrophysical Journal—to arXiv. Freedman’s team sought to develop a new type of standard candle. “If we put all our eggs in the cepheid basket,” Freedman says, “we will never uncover our unknown unknowns.”

Instead, her team looked toward old, swollen stars called red giants. These stars have already exhausted the hydrogen fuel at their hearts, converting it to a core of helium that sits, inert, as a hydrogen shell around the core continues to burn. The star, seen from afar, grows brighter and brighter. But at a certain, predictable limit the temperature and pressure in the core grow high enough to burn helium, too, generating an explosive flash of energy that rearranges the interior of the star, ultimately causing it to begin to dim. By finding the very brightest red giants in a distant galaxy—the ones that toe this theoretical limit—the team could use them as standard candles to calculate distances and its own H0.

One day after the paper appeared, Freedman presented the result to the meeting: a surprisingly low H0 of about 70. “It definitely felt like an album drop,” says Scolnic, a SH0ES team member. The value was stuck between the competing sides—and slightly favored the cosmologists. “It has caused at least some people to pause for a second, and say, ‘Well, maybe it’s not as clear cut,’” Freedman says.

The SH0ES team had huddled together as soon as Freedman’s paper came out, and members were ready to question some of her team’s underlying premises after her talk. They also pointed to a trio of other, if less-precise, Hubble results debuted in Santa Barbara that rely on independent astrophysical concepts—clouds of water circling the centers of faraway galaxies, other kinds of variable stars, and the rate at which the luminosities of galaxies fall off from their center to their edge.

A combined measurement that averaged all these astronomical results together still gave a value of 73. Unless hidden biases still lurk in the data, the gulf between that value and the cosmologists’ lower number remains near or above the 5σ statistical standard physicists use to divide possible flukes from the real deal.
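The “5σ” language refers to the statistical tension between two independent measurements. With illustrative error bars (the specific uncertainties below are my assumptions, not figures from the meeting), the computation looks like this:

import math

def tension_sigma(value1, err1, value2, err2):
    """Number of standard deviations separating two independent measurements."""
    return abs(value1 - value2) / math.hypot(err1, err2)

# Assumed error bars: combined astronomers' value 73 +/- 1,
# early-universe value 67.4 +/- 0.5 km/s/Mpc
print(f"{tension_sigma(73.0, 1.0, 67.4, 0.5):.1f} sigma")   # ≈ 5.0 sigma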

In Riess’s mind, at least, astronomers are nearing a consensus that the Hubble gulf highlights a true difference between the ancient and more recent universe. “You’re left with a problem, discrepancy, crisis,” Riess says. “The biggest argument at the meeting, I thought, was about what word to use.”

 

Spektr-RG x-ray observatory

Telescope designed to study mysterious dark energy keeps Russia’s space science hopes alive

By Daniel Clery

At 6:31 p.m. local time on 13 July, Russia’s Spektr-RG x-ray observatory was launched from the Baikonur Cosmodrome in Kazakhstan. It will begin a 4-year survey of the x-ray sky.

Russia’s beleaguered space science program is hoping for a rare triumph. Spektr-RG, an x-ray satellite launched on 13 July from Kazakhstan, aims to map all of the estimated 100,000 galaxy clusters that can be seen across the universe. Containing as many as 1000 galaxies and the mass of 1 million billion suns, the clusters are the largest structures bound by gravity in the universe. Surveying them should shed light on the evolution of the universe and the nature of the dark energy that is accelerating its expansion.

First proposed more than 30 years ago as part of a Soviet plan for a series of ambitious “great observatories” along the lines of NASA’s Hubble Space Telescope, Spektr-RG fell victim to cost cutting in cash-strapped, post-Soviet Russia. But the roughly €500 million satellite, which will carry German and Russian x-ray telescopes, was reborn early last decade with a new mission: not just to scan the sky for interesting x-ray sources, such as supermassive black holes gorging on infalling material, but to map enough galaxy clusters to find out what makes the universe tick. The new goal meant further delays. “There have been many ups and downs,” says Peter Predehl, leader of the team at the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany, that built one of the satellite’s two telescopes. “Whenever we thought we were out of the woods, a new one came along.”

Spektr-RG was born in the late 1980s. Glasnost was encouraging Soviet researchers to collaborate with Western colleagues, and studies of SN 1987A, the nearest supernova in modern times, had demonstrated the power of x-rays for tracing such violent events. Rashid Sunyaev of Moscow’s Space Research Institute (IKI) proposed an x-ray observatory to orbit above Earth’s atmosphere, which blocks x-rays. The 6-ton mission soon bristled with five telescopes and involved 20 institutes in 12 countries including the United States. But after the collapse of the Soviet Union, Russia’s space agency, Roscosmos, struggled to keep its Mir space station aloft and contribute to the growing International Space Station (ISS). “They told us the spacecraft was too large for Russia, too ambitious,” says Sunyaev, now at the Max Planck Institute for Astrophysics in Garching. “It just died.”

Resurrection began in 2003 with plans for a smaller mission with a U.K.-built all-sky x-ray monitor and MPE’s x-ray survey telescope, called ROSITA—which had been destined for the ISS but was grounded by the Columbia space shuttle disaster. The new impetus was cosmology. Studies of distant supernovae in the 1990s had revealed that the expansion of the universe is accelerating. Researchers wanted to know more about dark energy, the mysterious force that was causing it, and whether it varied in space or over time. Galaxy clusters are among the best indicators, says x-ray astronomer Andrew Fabian of the Institute of Astronomy (IoA) in Cambridge, U.K. “Clusters are the most massive objects in the universe, the pinnacle of galaxy formation, and are very sensitive to cosmological models.”

They are best seen in x-rays because the gaps between galaxies are filled with gas that is heated to millions of degrees as the galaxies jostle together to form a cluster. By mapping the clusters, says Esra Bulbul of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, who recently joined the MPE team, Spektr-RG “will study the evolution of the structure of the universe.”

The challenge was to boost the capabilities of the existing ROSITA telescope, which could only garner up to 10,000 galaxy clusters. Discussions led to a €90 million “extended” eROSITA, paid for by MPE and the German Aerospace Center, DLR. It is an array of seven identical telescopes with five times the effective collecting area of the original instrument. Russia and Germany signed an agreement in 2007 with launch penciled in for 2012.

But mission development was not smooth. The U.K. instrument failed to win funding and was replaced with a Russian telescope, called ART-XC, which will complement eROSITA by detecting scarcer high energy x-rays. Though harder to collect, the higher energy photons are particularly useful for seeing the supermassive black holes at galactic centers, because they pierce the clouds of gas and dust that shroud them.

Making the mirrors for eROSITA also proved much harder than expected. Because x-rays would penetrate a traditional flat telescope mirror, focusing them requires cylindrical mirrors that gather x-ray photons in glancing, low-angle reflections off inner surfaces. Each of eROSITA’s seven scopes contains 54 gold-plated cylindrical mirrors, nested inside one another, that must be shaped precisely to bring the photons to a focus. Making them proved so difficult that the MPE team had to fire its main contractor part way through. “It almost killed us,” Predehl says.

A decision to site the telescope at a quiet, gravitationally balanced point beyond the moon, outside the shelter of Earth’s magnetic field, meant electronics had to be hardened against solar radiation. Incompatibility between the German and Russian electronics delayed the launch, as did problems with the spacecraft’s communications system and a change in launch rocket.

Now that Spektr-RG is finally ready, expectations are high. “It’s going to be revolutionary in terms of numbers,” says IoA astronomer George Lansbury, taking x-ray studies into “the big data regime.”

It may also be a rare high point for Russia’s great observatories program. Previously, only one has made it into orbit: 2011’s Spektr-R, a radio astronomy mission that fell short of expectations and could not be revived after malfunctioning earlier this year.

Astronomers may face a long wait for Spektr-RG’s successors: the ultraviolet telescope Spektr-UV and Spektr-M, a millimeter-wave radio telescope. Spektr-UV has survived moments of near-death, most recently in 2014 when Russia’s annexation of Ukraine’s Crimean peninsula caused major Ukrainian partners to withdraw. The mission is now slated for a 2025 launch, but, Sunyaev says, some collaborators, including a German team supplying a spectrograph, have dropped out. Spektr-M, which would come next, is not yet fully funded, he says. And in the meantime, rival telescopes launched by other countries may scoop up the science the Russian missions aim to do.

“Russia is doing as much as possible with the budget available,” says Spektr-RG chief Mikhail Pavlinsky of IKI. He notes that Roscosmos’s lean budget, worth $20.5 billion over 10 years, faces multiple demands. Russia is building the landing system for the European ExoMars rover, due to launch next year, and like other countries it hopes to return to the moon with the Luna 25 lander in 2021. For Russia’s astrophysicists, Pavlinsky says, “It means slow progress.”

 

HD 139139: A Bizarre Star

Astronomers Don’t Know What to Make of This Incredibly Bizarre Star

Unusual dips of light observed by the Kepler space telescope have so far confounded attempts at an explanation.

Here in this isolated corner of the Milky Way, our Earth-bound human lives might seem tiny and parochial on a galactic scale. But the universe—vast, deep and presumably infinite—can always be counted on to deliver the unexpected.

Within the bounty of information from the Kepler space telescope, a now-retired exoplanet-hunting observatory, astronomers have spotted a peculiar star whose characteristics defy many of their preconceived notions. After staring at the data about it for more than a year, the team who discovered the strange object, named HD 139139, still does not know what to make of it.

“We’ve never seen anything like this in Kepler, and Kepler’s looked at 500,000 stars,” says Andrew Vanderburg, an astronomer at the University of Texas at Austin and co-author of a new paper on the star, posted to the preprint server arXiv.

The space-based Kepler telescope, which ended its mission in October 2018, examined light from stars for periodic dips in their brightness—which are usually presumed to indicate a planet passing in front of their face. Astronomers analyze the observatory’s data using algorithms that search for these repeating eclipses of starlight.

But some of these patterns are too complex for computers to tease out; volunteer citizen scientists also comb through the Kepler catalogue, using the human brain’s power to uncover surprising signals. In the spring of 2018 some of these layperson astronomers contacted Vanderburg and told him to check out HD 139139, a sunlike star roughly 350 light-years away.

“When I got that e-mail, I looked more closely and said, ‘okay, this definitely looks like a multiplanet system. But I can’t find any [planets] that appear to line up,'” he recalls.

The star had 28 distinct dips in its light, each of which lasted between 45 minutes and 7.5 hours—and none of which seemed to repeat. The patterns looked more like noise than signal, and at first the team thought it might be some kind of instrument glitch. Yet after careful reanalysis, the data seemed to check out as real.

The first astronomical explanation was that HD 139139 was surrounded by a bunch of planets, at least 14 and perhaps as many as 28—an astounding number, far greater than any known system. Based on their almost-identical light curves, these worlds would all be nearly the same size: slightly bigger than Earth.

The issue is that the extremely short duration of the dips in light suggested that any putative exoplanets would be passing quickly in front of the star, indicating close-in orbits. For none of them to repeat in the 80 days Kepler stared at the star strained credulity.

Another possibility was that a second body—a large planet or unseen star—was tugging gravitationally on the obscurants, causing them to sometimes hasten or delay their eclipsing and produce what looked like a random pattern. But such an object would also be pulling on the central star. After observing HD 139139 with ground-based telescopes, the team found no indication it was being tugged in this way.

The researchers looked into the idea that a planet might be disintegrating in front of the star, producing clouds of dust that would sometimes create dips and other times would not. Such cases have been seen in a small handful of systems, but in all of those examples astronomers were still able to identify the evaporating planet’s orbital period.

Finally, the team wondered if there were short-lived starspots (cooler patches analogous to sunspots) that suddenly appeared and disappeared on HD 139139’s face—a situation nobody had really seen before. Vanderburg says this idea was considered mostly because he and his colleagues felt uncomfortable writing up a paper that did not include some kind of natural explanation.

He says one more possibility was left out of the paper, albeit an extremely unconventional one: an extraterrestrial mega-engineering project blocking the star’s light. Similar speculations arose in 2015, when citizen scientists discovered a Kepler star with odd patterns and notified astronomer Tabetha Boyajian at Louisiana State University at Baton Rouge. The light dips in that case looked intriguing enough that astrophysicist Jason Wright of Pennsylvania State University organized a campaign to listen to the object, eventually nicknamed Tabby’s star, for leaking radio transmissions. The undertaking ultimately turned up nothing unusual.

“When people in our community hear about something like this, the running joke is it might be aliens,” Vanderburg says. The possibility crossed his mind, he adds, as the seemingly-random dips reminded him of the scene in the film Contact in which Jodie Foster’s character begins hearing blips from outer space that trace out a prime number sequence.

The newly noticed star will certainly be added to the list of those investigated for signs of technological activity, Wright says. But he considers it more likely astronomers will eventually settle on an explanation that does not involve intelligent extraterrestrials.

Boyajian agrees. “I think we have to consider all options before we go there,” she says. “This is one of those systems where it’s probably not going to be figured out without more data.”

Yet few other observatories can match Kepler’s extreme precision. Most ground-based telescopes are not sensitive enough to see the involved light dips, and it is difficult for researchers to reserve the relatively long stretches of time they would require on a powerful orbiting instrument such as the Hubble Space Telescope. NASA’s recently launched Transiting Exoplanet Survey Satellite (TESS) is not scheduled to look at HD 139139 during its primary campaign, though perhaps if the satellite receives an extended mission it could.

Vanderburg says he and his colleagues are hoping someone in the astronomy community will think of something they have not. In the meantime, the situation remains another example of the universe’s never-ending diversity.

“This put us in our place,” Boyajian says. “It humbles us and reminds us that we really don’t know everything.”

The Random Transiter — EPIC 249706694/HD 139139

We have identified a star, EPIC 249706694 (HD 139139), that was observed during K2 Campaign 15 with the Kepler extended mission that appears to exhibit 28 transit-like events over the course of the 87-day observation. The unusual aspect of these dips, all but two of which have depths of 200±80 ppm, is that they exhibit no periodicity, and their arrival times could just as well have been produced by a random number generator. We show that no more than four of the events can be part of a periodic sequence. We have done a number of data quality tests to ascertain that these dips are of astrophysical origin, and while we cannot be absolutely certain that this is so, they have all the hallmarks of astrophysical variability on one of two possible host stars (a likely bound pair) in the photometric aperture. We explore a number of ideas for the origin of these dips, including actual planet transits due to multiple or dust emitting planets, anomalously large TTVs, S- and P-type transits in binary systems, a collection of dust-emitting asteroids, “dipper-star” activity, and short-lived starspots. All transit scenarios that we have been able to conjure up appear to fail, while the intrinsic stellar variability hypothesis would be novel and untested.

 

Hayabusa2 collected 2nd sample

In a first, a Japanese spacecraft appears to have collected samples from inside an asteroid

By Dennis Normile

Japan’s Hayabusa2 successfully completed its second touchdown on the asteroid Ryugu and probably captured material from its interior that was exposed by firing a projectile into the asteroid earlier this year. It is the first collection of subsurface materials from a solar system body other than the moon.

Engineers and technicians in the spacecraft’s control room near Tokyo could be seen erupting into cheers and applause on a YouTube live stream when Project Manager Yuichi Tsuda proclaimed the operation a success just before 11 a.m. local time.

At an afternoon press briefing, Tsuda said, “Everything went perfectly.” He joked that if a score of 100 indicated perfection, “I would give this a score of 1000.”

Hayabusa2 was launched by the Japan Aerospace Exploration Agency’s Institute of Space and Astronautical Science in Sagamihara, near Tokyo, in December 2014 and reached Ryugu in June 2018.

Since then it has conducted remote observations, released several rovers that hopped around on the asteroid, and made a February touchdown to retrieve surface samples. To get interior material, Hayabusa2 in April released a tiny spacecraft that exploded and sent a nonexplosive, 2-kilogram copper projectile into Ryugu, creating a crater. Subsequent remote examination of the site indicated material ejected from the crater had accumulated about 20 meters to one side.

That area became the target for the second touchdown, which occurred this morning. Engineers moved the spacecraft into position above the target site over the previous day and then placed it into autonomous mode. As the craft touched down, it fired a tantalum bullet into the surface, likely kicking dust and rock fragments into a collection horn. The craft then ascended.

The team won’t know for certain what is in the sample return capsule until it returns to Earth in December 2020. “But we expect that we obtained some subsurface samples,” said project scientist Seiichiro Watanabe, a planetary scientist at Nagoya University in Japan. They will be able to compare these subsurface samples with those collected from the surface. The team believes comparing the surface samples subjected to eons of space weathering and the more pristine material from the interior will provide clues to the origins and evolution of the solar system.

Watanabe noted that NASA’s in-progress Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) mission also plans to bring samples from an asteroid, named Bennu, back to Earth in 2023. But at least for the near future, Japan is the only nation that will have acquired samples from both the surface and interior of an asteroid, Watanabe said. The samples “will have great significance scientifically,” he said.

Hayabusa2 will continue remote observations until December 2020. “We shouldn’t waste even a single day,” Tsuda said.

Japanese spacecraft snags second sample from asteroid

Scientists celebrated another success with Japan’s Hayabusa 2 spacecraft late Wednesday (U.S. time), when the robot explorer accomplished a second pinpoint touch-and-go landing on asteroid Ryugu, this time to collect a sample of pristine dust and rock excavated by an explosive impactor earlier this year.

Using rocket thrusters to control its descent, and guided by a laser range finder, Hayabusa 2 glacially approached Ryugu on autopilot Wednesday, slowing to a relative speed of about 4 inches (10 centimeters) per second in the final phase of the landing.

Hayabusa 2 maneuvered over a bright navigation aid released on the asteroid’s surface earlier this year to mark the landing site, then went in for the final descent, with the probe’s sampling horn extending from the front of the spacecraft.

Telemetry data and imagery downlinked from Hayabusa 2 show the spacecraft briefly touched down on the asteroid at 9:06 p.m. EDT Wednesday (0106 GMT; 10:06 a.m. Japan Standard Time Thursday), and began climbing away from Ryugu seconds later, pulsing its thrusters to counteract the half-mile-wide (900-meter-wide) asteroid’s feeble gravity.

In a press conference around four hours later, officials hailed the brief landing as a perfect success, following the mission’s first touch-and-go landing on Ryugu in February.

“Hayabusa 2 today executed a second touchdown, and we were able to obtain (information about) the history of the solar system,” said Yuichi Tsuda, Hayabusa 2’s project manager at the Japan Aerospace Exploration Agency.

Ground teams cheered when data streaming back from the spacecraft, currently orbiting the sun in lock-step with Ryugu more than 151 million miles (244 million kilometers) from Earth, confirmed the touchdown.

Launched in December 2014, Hayabusa 2 is Japan’s mission to travel to an asteroid and collect samples for return to Earth. Scientists are eager to analyze specimens from Ryugu, a dark asteroid rich in carbon, a critical building block of life.

Researchers will study the samples for clues about the formation of the solar system 4.6 billion years ago, and perhaps the origin of water and life on Earth.

Mission managers last month decided to send Hayabusa 2 on a second sampling run to gather bits of rock and dust from a second location on Ryugu, providing scientists with more varied materials to examine when the mission returns to Earth late next year.

Hayabusa 2’s sampling mechanism works by firing a metal bullet into the asteroid once the probe’s sampler horn contacts the surface. The projectile is designed to force bits of rock and dust through the sampler horn into a collection chamber inside the spacecraft.

Takanao Saiki, Hayabusa 2’s project engineer and flight director at JAXA, told reporters in a press briefing Thursday that data downlinked by the spacecraft showed the temperature rose in the projectile’s firing mechanism at the time of landing, suggesting the system functioned as intended.

Three images taken by a camera on-board Hayabusa 2 showed the sampling horn contacting the asteroid, then violently blasting away debris from the surface. Countless tiny asteroid fragments were visible around the spacecraft in the final snapshot in the three-image sequence released by JAXA.

“The third picture is really amazing,” said Makoto Yoshikawa, Hayabusa 2’s mission manager. “It’s really awesome, a large amount of chips of rocks are flying off.”

“This is a wonderful picture, I think,” Tsuda said. “Hayabusa 2 touched the surface of Ryugu, so this is evidence.”

A different view of the landing site taken by Hayabusa 2’s navigation camera shows a cloud of debris left behind moments after the spacecraft took off from the asteroid.

With its second and final sample collection complete, Hayabusa 2 started to climb back to a “home position” roughly 12 miles (20 kilometers) from the asteroid. The spacecraft closed the lid to the sample catcher device containing the asteroid pay dirt, and ground teams will later send commands to seal it inside the re-entry canister that will carry the material through Earth’s atmosphere at the end of the mission.

“There’s nothing I need to complain about, everything moved perfectly,” Tsuda said through a translator. “It was a perfect operation, so … it’s a 1,000 score out of 100.”

Not only did the specimens gathered Wednesday come from a different location on Ryugu than the first sampling run, scientists say the newly-captured materials originated from underneath the asteroid’s surface, where they may have escaped radiation and other space weathering effects for billions of years.

The pristine samples were exposed during a daring, unprecedented bombing run by the Hayabusa 2 spacecraft in April. The probe deployed an explosive charge to fire into the asteroid at high speed, carving a fresh crater and ejecting buried materials around the impact site, ripe for retrieval by Hayabusa 2.

“We decided to obtain the samples in this particular area so that we would be able to sample the subsurface materials … and because our operation was perfectly conducted, therefore, we can observe that we obtained some subsurface samples,” said Seiichiro Watanabe, Hayabusa 2’s project scientist from Nagoya University.

“Bringing the subsurface materials (back to Earth) will be something no other country can do in the coming 20 years or so,” Watanabe said.

Hayabusa 2’s sampler carrier has three chambers to separate materials gathered from each landing. Officials decided to press ahead with the second sampling run after assessing the scientific benefits and engineering risks of the maneuver, but with two samples now on-board the spacecraft, mission managers do not plan to attempt a third sampling run.

While Hayabusa 2 explores Ryugu, NASA’s OSIRIS-REx mission is surveying asteroid Bennu before moving in to collect a sample there in 2020 for return to scientists on Earth in 2023.

OSIRIS-REx is designed to bring home at least 60 grams, or 2.1 ounces, of samples from Bennu, significantly more than Hayabusa 2. But OSIRIS-REx is only expected to collect a single sample from one location on Bennu’s surface.

NASA and JAXA agreed in 2014 to share their asteroid samples.

Named for a dragon’s palace in a famous Japanese fairy tale, asteroid Ryugu completes one circuit of the sun every 1.3 years. Its path briefly brings it inside Earth’s orbit, making Ryugu a potentially hazardous asteroid.

The orbit also made Ryugu an attractive candidate for a sample return mission.

The Hayabusa 2 spacecraft arrived at Ryugu in June 2018, and deployed three mobile scouts to hop around the asteroid’s surface last September and October, achieving another first in space exploration.

Hayabusa 2 will depart Ryugu in November or December and fire its ion engines to head for Earth, where it will release a re-entry capsule protected by a heat shield to land in Australia in December 2020.

“We have captured the samples,” Tsuda said. “We must make sure that it comes back to Earth, so we need to continue with the operations properly.”


 

Cheap solar power

Giant batteries and cheap solar power are shoving fossil fuels off the grid

By Robert F. Service

This month, officials in Los Angeles, California, are expected to approve a deal that would make solar power cheaper than ever while also addressing its chief flaw: It works only when the sun shines. The deal calls for a huge solar farm backed up by one of the world’s largest batteries. It would provide 7% of the city’s electricity beginning in 2023 at a cost of 1.997 cents per kilowatt hour (kWh) for the solar power and 1.3 cents per kWh for the battery. That’s cheaper than any power generated with fossil fuel.

“Goodnight #naturalgas, goodnight #coal, goodnight #nuclear,” Mark Jacobson, an atmospheric scientist at Stanford University in Palo Alto, California, tweeted after news of the deal surfaced late last month. “Because of growing economies of scale, prices for renewables and batteries keep coming down,” adds Jacobson, who has advised countries around the world on how to shift to 100% renewable electricity. As if on cue, last week a major U.S. coal company—West Virginia–based Revelation Energy LLC—filed for bankruptcy, the second in as many weeks.

The new solar plus storage effort will be built in Kern County in California by 8minute Solar Energy. The project is expected to create a 400-megawatt solar array, generating roughly 876,000 megawatt hours (MWh) of electricity annually, enough to power more than 65,000 homes during daylight hours. Its 800-MWh battery will store electricity for after the sun sets, reducing the need for natural gas–fired generators.
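As a back-of-the-envelope check on those figures (my own arithmetic, not from the article), dividing the expected annual output by what a 400-megawatt array would produce running around the clock gives the implied capacity factor:

annual_output_mwh = 876_000        # expected annual generation from the article
capacity_mw = 400                  # nameplate solar capacity
hours_per_year = 8_760

capacity_factor = annual_output_mwh / (capacity_mw * hours_per_year)
print(f"implied capacity factor ≈ {capacity_factor:.0%}")   # ≈ 25%, typical for utility solar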

Precipitous price declines have already driven a shift toward renewables backed by battery storage. In March, an analysis of more than 7000 global storage projects by Bloomberg New Energy Finance reported that the cost of utility-scale lithium-ion batteries had fallen by 76% since 2012, and by 35% in just the past 18 months, to $187 per MWh. Another market watch firm, Navigant, predicts a further halving by 2030, to a price well below what 8minute has committed to.

Large-scale battery storage generally relies on lithium-ion batteries—scaled-up versions of the devices that power laptops and most electric vehicles. But Jane Long, an engineer and energy policy expert who recently retired from Lawrence Livermore National Laboratory in California, says batteries are only part of the energy storage answer, because they typically provide power for only a few hours. “You also need to manage for long periods of cloudy weather, or winter conditions,” she says.

Local commitments to switch to 100% renewables are also propelling the rush toward grid-scale batteries. By Jacobson’s count, 54 countries and eight U.S. states have required a transition to 100% renewable electricity. In 2010, California passed a mandate that the state’s utilities install electricity storage equivalent to 2% of their peak electricity demand by 2024.

Although the Los Angeles project may seem cheap, the costs of a fully renewable–powered grid would add up. Last month, the energy research firm Wood Mackenzie estimated the cost to decarbonize the U.S. grid alone would be $4.5 trillion, about half of which would go to installing 900 billion watts, or 900 gigawatts (GW), of batteries and other energy storage technologies. (Today, the world’s battery storage capacity is just 5.5 GW.) But as other cities follow the example of Los Angeles, that figure is sure to fall.

 

Forest against global warming

Adding 1 billion hectares of forest could help check global warming

By Alex Fox

Global temperatures could rise 1.5° C above preindustrial levels by as early as 2030 if current trends continue, but trees could help stem this climate crisis. A new analysis finds that adding nearly 1 billion additional hectares of forest could remove two-thirds of the roughly 300 gigatons of carbon humans have added to the atmosphere since the 1800s.

“Forests represent one of our biggest natural allies against climate change,” says Laura Duncanson, a carbon storage researcher at the University of Maryland in College Park and NASA who was not involved in the research. Still, she cautions, “this is an admittedly simplified analysis of the carbon restored forests might capture, and we shouldn’t take it as gospel.”

The latest report from the United Nations’s Intergovernmental Panel on Climate Change recommended adding 1 billion hectares of forests to help limit global warming to 1.5° C by 2050. Ecologists Jean-Francois Bastin and Tom Crowther of the Swiss Federal Institute of Technology in Zurich and their co-authors wanted to figure out whether today’s Earth could support that many extra trees, and where they might all go.

They analyzed nearly 80,000 satellite photographs for current forest coverage. The team then categorized the planet according to 10 soil and climate characteristics. This identified areas that were more or less suitable for different types of forest. After subtracting existing forests and areas dominated by agriculture or cities, they calculated how much of the planet could sprout trees.

Earth could naturally support 0.9 billion hectares of additional forest—an area the size of the United States—without impinging on existing urban or agricultural lands, the researchers report today in Science. Those added trees could sequester 205 gigatons of carbon in the coming decades, roughly five times the amount emitted globally in 2018.

“This work captures the magnitude of what forests can do for us,” says ecologist Greg Asner of Arizona State University in Tempe, who was not involved in the research. “They need to play a role if humanity is going to achieve our climate mitigation goals.”

Adding forests wouldn’t just sequester carbon. Forests provide a host of added benefits including enhanced biodiversity, improved water quality, and reduced erosion. Estimates of how much forest restoration on this scale would cost vary, but based on prices of about $0.30 a tree, Crowther says it could be roughly $300 billion.

Exactly how much carbon future forests could store may not be crystal clear, but Duncanson says NASA has new instruments in space—like the Global Ecosystem Dynamics Investigation (GEDI) aboard the International Space Station—that will use lasers to create high-resolution 3D maps of Earth’s forests from canopy to floor. These data will add much-needed precision to existing estimates of aboveground carbon storage.

“With GEDI we can take this paper as a stepping stone and inform it with much more accurate carbon estimates,” Duncanson says. “There have always been large uncertainties on large-scale carbon totals, but we have richer data coming soon.”

 

Atmosphere of Gliese 3470 b

Astronomers probe atmosphere of alien world that’s a cross between Earth and Neptune

By Katie Camero

Gliese 3470 b isn’t like anything in our solar system. The strange world—midway between Earth and Neptune in mass—orbits a star about half the mass of the sun roughly 100 light-years away. Now, astronomers have taken a detailed look at Gliese 3470 b’s atmosphere, the first time researchers have done so for a planet like this.

Astronomers used NASA’s Hubble and Spitzer space telescopes to measure which frequencies of starlight Gliese 3470 b absorbs and reflects as it circles around its star. The planet has a relatively thin atmosphere composed primarily of hydrogen and helium, NASA announced yesterday. That composition is similar to the sun’s, except that it lacks heavier elements such as oxygen and carbon. The planet also has a hefty rocky core, the analysis reveals.

Gliese 3470 b appears to have formed close to its star, where it still sits today. This might explain why the planet was able to develop its unconventional atmosphere. One hypothesis is that it was able to corral gases from a primordial disk of gas surrounding its star. Typically when this happens, planets become giant gas worlds known as “hot Jupiters.” But Gliese 3470 b stayed relatively small, perhaps because the disk of gas dissipated before the planet was able to bulk up, the team speculates.

NASA’s new James Webb Space Telescope—Hubble’s successor set to launch in 2021—will penetrate the planet’s atmosphere to greater depths. Until then, astronomers have to solve another mystery about Gliese 3470 b: whether to call it a “super-Earth” or a “sub-Neptune.”