Multi-planetary resonant chains

Analytical model of multi-planetary resonant chains and constraints on migration scenarios

ABSTRACT: Resonant chains are groups of planets for which each pair is in resonance, with an orbital period ratio locked at a rational value (2/1, 3/2, etc.). Such chains naturally form as a result of convergent migration of the planets in the proto-planetary disk. In this article, I present an analytical model of resonant chains of any number of planets. Using this model, I show that a system captured in a resonant chain can librate around several possible equilibrium configurations. The probability of capture around each equilibrium depends on how the chain formed, and especially on the order in which the planets have been captured in the chain. Therefore, for an observed resonant chain, knowing around which equilibrium the chain is librating allows for constraints to be put on the formation and migration scenario of the system. I apply this reasoning to the four planets orbiting Kepler-223 in a 3:4:6:8 resonant chain. I show that the system is observed around one of the six equilibria predicted by the analytical model. Using N-body integrations, I show that the most favorable scenario to reproduce the observed configuration is to first capture the two intermediate planets, then the outermost, and finally the innermost.

 

Concluding Henrietta Leavitt’s Work

Concluding Ms. Henrietta Leavitt’s Work on Classical Cepheids in the Magellanic System and other updates of the OGLE Collection of Variable Stars

ABSTRACT: More than a century ago, Ms. Henrietta Leavitt discovered the first Cepheids in the Magellanic Clouds, together with the famous period-luminosity relationship revealed by these stars, which soon afterward revolutionized our view of the Universe. Over the years, the number of known Cepheids in these galaxies has steadily increased, with a breakthrough in the last two decades thanks to the new generation of large-scale, long-term sky variability surveys.
Here we present the final upgrade of the OGLE Collection of Cepheids in the Magellanic System, which already contained the vast majority of known Cepheids. The updated collection now comprises 9649 classical and 262 anomalous Cepheids. Type-II Cepheids will be updated shortly. Thanks to the high completeness of the OGLE survey, the sample of classical Cepheids includes virtually all stars of this type in the Magellanic Clouds. Thus, the OGLE survey concludes the work started by Ms. Leavitt.
Additionally, the OGLE sample of RR Lyrae stars in the Magellanic System has been updated. It now contains 46 443 variables. A collection of seven anomalous Cepheids in the halo of our Galaxy, detected in front of the Magellanic Clouds, is also presented.
OGLE photometric data are available to the astronomical community from the OGLE Internet Archive. The time-series photometry of all pulsating stars in the OGLE Collection has been supplemented with new observations.

 

Karlsruhe Tritium Neutrino experiment

Weighing the universe’s most elusive particle

By Adrian Cho | Jun. 29, 2017 , 10:00 AM

The silvery vacuum chamber resembles a zeppelin, the vaguely Art Deco lines of the welds between its stainless steel panels looking at once futuristic and old-fashioned. One tenth the size of the Hindenburg—but still as big as a blue whale—the vessel looms in a hangarlike building here at the Karlsruhe Institute of Technology (KIT), seemingly ready to float away. Although it is earthbound, the chamber has an ethereal purpose: weighing the most elusive and mysterious of subatomic particles, the neutrino.

Physicists dreamed up the Karlsruhe Tritium Neutrino (KATRIN) experiment in 2001. Now, the pieces of the €60 million project are finally coming together, and KATRIN researchers plan to start taking data early next year. “This is really the final countdown,” says Guido Drexlin, a physicist at KIT and co-spokesperson for the roughly 140 researchers working on the project.

It might seem absurd that physicists don’t know how much neutrinos weigh, given that the universe contains more of them than any other type of matter particle. Every cubic centimeter of space averages roughly 350 primordial neutrinos lingering from the big bang, and every second, the sun sends trillions of higher-energy neutrinos streaming through each of us. Yet no one notices, because the particles interact with matter so feebly. Spotting just a few of them requires a detector weighing many tons. There’s no simple way to weigh a neutrino.

Instead, for the past 70 years, physicists have tried to infer the neutrino’s mass by studying a particular nuclear decay from which the particle emerges—the beta decay of tritium. Time and again, these experiments have set only upper limits on the neutrino’s mass. KATRIN may be physicists’ last, best hope to measure it—at least without a revolutionary new technology. “This is the end of the road,” says Peter Doe, a physicist and KATRIN member from the University of Washington (UW) in Seattle.

KATRIN physicists have no guarantee that they’ll succeed. From very different kinds of experiments—such as giant subterranean detectors that spot neutrinos from space—they now know that the neutrino cannot be massless. But in recent years, data from even farther afield—maps of the cosmos on the grandest scales—suggest that the neutrino might be too light for KATRIN to grasp. Still, even cosmologists say the experiment is worth doing. If the neutrino mass does elude KATRIN, their current understanding of the cosmos will have passed another test.

A definitive measurement, on the other hand, would be potentially revolutionary. “If KATRIN finds something,” says Licia Verde, a cosmologist at the University of Barcelona in Spain, “cosmologists will be left scratching their heads and saying, ‘Where did we go wrong?'”

Neutrinos first betrayed their existence through an absence. In 1914, U.K. physicist James Chadwick was studying beta decay, a form of radioactive decay in which a nucleus emits an electron, transforming a neutron into a proton. Conservation of energy suggested that the electrons from a particular nucleus, say lead-214, should always emerge with the same energy. Instead, Chadwick showed that they emerge with a range of energies extending down to zero, as if energy were disappearing.

That observation caused a minor crisis in physics. The great Danish theorist Niels Bohr even suggested that energy might not be conserved on the atomic scale. However, in 1930, the puckish Austrian theorist Wolfgang Pauli solved the problem more simply. In beta decay, he speculated, a second, unseen particle emerges with the electron and absconds with a random fraction of the energy. The particle had to be light—less than 1% of the mass of a proton—and, to avoid detection, uncharged.

Three years later, Italian physicist Enrico Fermi dubbed the hypothetical particle the neutrino. It would elude detection for another 23 years. But, in developing a fuller theory of beta decay, Fermi immediately realized that the electrons’ energy spectrum holds a clue to a key property of the neutrino: its mass. If the particle is massless, the spectrum should extend up to the same energy the electron would have if it emerged alone—corresponding to decays in which the neutrino emerges with virtually no energy. If the neutrino has mass, the spectrum should fall short of the limit by an amount equal to the mass. To weigh the neutrino, physicists had only to precisely map the upper end of the electron spectrum in beta decay.
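
As a rough illustration of Fermi's point, the sketch below evaluates the simplified phase-space shape of the spectrum in its last few eV: for a neutrino of mass m, the rate behaves approximately as (E0 − E)·√((E0 − E)² − m²), so the spectrum is cut off m below the endpoint E0 and depleted just beneath the cutoff. The ~18.6 keV endpoint value and the simplified formula are textbook approximations used here for illustration, not the experiment's actual analysis.

```python
import numpy as np

E0 = 18_575.0          # tritium endpoint energy in eV (approximate textbook value)

def rate_near_endpoint(E, m_nu):
    """Relative beta-decay rate near the endpoint for a neutrino mass m_nu (eV/c^2).

    Keeps only the (E0 - E) * sqrt((E0 - E)^2 - m_nu^2) neutrino phase-space
    factor, which dominates the spectral shape in the last few eV.
    """
    eps = E0 - E                                   # energy carried away by the neutrino
    allowed = eps >= m_nu                          # no decays within m_nu of the endpoint
    return np.where(allowed, eps * np.sqrt(np.maximum(eps**2 - m_nu**2, 0.0)), 0.0)

E = np.linspace(E0 - 5.0, E0, 501)                 # the last 5 eV of the spectrum
window = E > E0 - 1.0                              # the final 1 eV below the endpoint
ref = rate_near_endpoint(E, 0.0)[window].sum()     # massless-neutrino reference

for m in (0.0, 0.2, 2.0):                          # massless, KATRIN goal, pre-KATRIN limit
    frac = rate_near_endpoint(E, m)[window].sum() / ref
    print(f"m_nu = {m:3.1f} eV -> counts in last 1 eV relative to massless: {frac:.2f}")
```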

That measurement requires exquisite precision, however. For decades, physicists have striven to achieve it with tritium, the simplest nucleus to undergo beta decay. In 1949, a first study concluded that the neutrino weighed less than 500 electron volts (eV), 1/1000 the mass of the electron. Since then, successive experiments have cut the upper limit in half roughly every 8 years, says Hamish Robertson, a KATRIN physicist at UW. “There’s a sort of Moore’s Law for the neutrino mass,” he says, referring to the trend that, for many years, described the regular shrinking of transistors on microchips. The upper limit now stands at about 2 eV—two-billionths the mass of the lightest atom—as experimenters in Mainz, Germany, and Troitsk, Russia, independently reported in 1999.

In 2001, those teams and others gathered in a castle high on a hill in the hamlet of Bad Liebenzell in Germany’s Black Forest and decided to push further, by mounting the definitive tritium beta decay experiment. “That was the point of origin, the big bang for KATRIN,” KIT’s Drexlin says. KATRIN experimenters hope to lower the mass limit by a factor of 10, to 0.2 eV—or, better yet, to come up with an actual measurement of the neutrino mass.

To perform the experiment, scientists need a supply of tritium—a highly radioactive isotope of hydrogen produced in certain nuclear reactors that’s tightly regulated because of its potential health hazards and weapons applications. The search for it brought them to KIT, which already had a facility, unique in the Western Hemisphere, for processing and recycling tritium.

With tritium in hand, physicists then have to collect the beta electrons it emits without altering their energies. They cannot, for example, put tritium gas in a container with a thin crystalline window, because passing through even the thinnest window would sap the electrons’ energy enough to ruin KATRIN’s measurement.

Instead, KATRIN depends on a device called a windowless gaseous tritium source: an open-ended pipe 10 meters long that tritium enters from a port in the middle. Superconducting magnets surrounding the pipe generate a field 70,000 times as strong as Earth’s. Beta decay electrons from the tritium spiral in the magnetic field to the pipe’s ends, where pumps suck out the uncharged tritium molecules. Set it up right, with not so much tritium that the gas itself slows the electrons, and the source should produce 100 billion electrons per second.

Finally, physicists must measure the electrons’ energies. That’s where KATRIN’s zeppelinlike vacuum chamber comes into play. Still riding the magnetic field lines from the source, the electrons enter the chamber from one end. The magnetic field, now supplied by graceful hoops of wire encircling the blimp, weakens to a mere six times Earth’s field as the field lines spread out. That spreading is key, as it forces the electrons to move along the lines, and not around them.

Once the electrons are moving in precisely the same direction, physicists can measure their energies. Electrodes lining the chamber create an electric field that pushes against the onrushing electrons and opposes their motion. Only those electrons that have enough energy can push past the electric field and reach the detector at the far end of the chamber. So by varying the strength of the electric field and counting the electrons that hit the detector, physicists can trace the spectrum. KATRIN researchers will concentrate on the spectrum’s upper end, the all-important region mapped out by just one in every 5 trillion electrons from the decays.
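
That tiny fraction follows from a simple scaling: for a nearly massless neutrino, the share of beta electrons falling in the last ΔE of the spectrum grows roughly as (ΔE/E0)³. A back-of-the-envelope check, with an assumed 1 eV window below an approximate 18.6 keV endpoint (round numbers, not values taken from the article):

```python
# Rough check of the "one in 5 trillion" figure using the (dE/E0)^3 scaling.
E0 = 18_600.0        # tritium endpoint energy, eV (approximate)
dE = 1.0             # analysis window below the endpoint, eV (assumed)
fraction = (dE / E0) ** 3
print(f"fraction of decays in last {dE:.0f} eV ~ {fraction:.1e} (about 1 in {1 / fraction:.1e})")
# -> of order 1e-13, i.e. one electron in several trillion.
```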

Everything has to be tuned perfectly. Additional coils of wire around the spectrometer must precisely cancel Earth’s magnetic field, or else the electrons will run into the zeppelin’s wall. The specific voltages of the myriad electrodes must be stable to parts per million. The vacuum within the spectrometer must be held at 0.01 picobar, a pressure as low as at the moon’s surface and an unprecedented level for such a big chamber. And the temperature of the tritium source must be kept at a frigid 30 kelvins to slow the molecules so their motion doesn’t affect the energy of the ejected electrons.

KATRIN physicists have run into some nettlesome surprises. For example, to avoid stray magnetic fields, they had the concrete floor below the 200-metric-ton chamber reinforced not with rebar of ordinary steel, which is magnetic, but with nonmagnetic stainless steel. Still, magnetic fields from the ordinary steel in the concrete walls played havoc with the spectrometer, says Kathrin Valerius, a physicist at KIT. “We had to demagnetize the building,” she says, a painstaking process that required passing an electromagnet over every square meter of the walls.

Working out the kinks took longer than expected, putting the experiment roughly 7 years behind original plans. No single issue slowed it down, says Johannes Blümer, KIT’s head of physics and mathematics. “Things turned out to be much more complex than we thought initially,” he says. “Everything has to be perfect and perfectly stable.”

The wait is almost over. Last October, physicists fired electrons from an electron gun through the spectrometer. This summer, they will calibrate it with a sample of krypton-83, which emits electrons of a fixed energy. Later this year, they will connect the tritium works, ready for next year’s data taking. In a single week KATRIN should outperform all previous experiments, Drexlin says, but researchers will still need to take data for at least 5 years to make their ultimate measurement.

Read more by following the link at the top of this blog.

 

Carbon nanotubes to make transistors

Scientists use carbon nanotubes to make the world’s smallest transistors

Carbon nanotube

By Matthew Hutson | Jun. 29, 2017 , 2:00 PM

As computing has moved into the nanoscopic realm, it’s getting harder and harder for engineers to follow Moore’s Law, which says, essentially, that the processing speed of computer chips should double every year or two. But IBM researchers have just reported a new way to keep Silicon Valley on the right side of at least this law, using a delicate material to make microchips’ basic processing elements—transistors—smaller and faster than ever before.

For decades, computing speed has increased as silicon transistors have shrunk, but they’re currently near their size limits. So scientists have been experimenting with carbon nanotubes, rolled-up sheets of carbon atoms just 1 nanometer, or a billionth of a meter, in diameter. But difficulties working with the material have meant that, for optimal performance, nanotube transistors have to be even larger than current silicon transistors, which are about 100 nanometers across.

To cut that number down, a team of scientists used a new technique to build the contacts that draw current into and out of the carbon nanotube transistor. They constructed the contacts out of molybdenum, which can bond directly to the ends of the nanotubes, making them smaller. They also added cobalt so the bonding could take place at a lower temperature, allowing them to shrink the gap between the contacts.

Another advance allowed for practical transistors. Carrying enough electrical current from one contact to another requires several nanotube “wires.” The researchers managed to lay several parallel nanotubes close together in each transistor. The total footprint of the transistor: just 40 nanometers, they report today in Science. Electrical tests showed their new transistors to be faster and more efficient than ones made of silicon. Silicon Valley may soon have to make way for Carbon Valley.

Carbon nanotube transistors scaled to a 40-nanometer footprint

Abstract

The International Technology Roadmap for Semiconductors challenges the device research community to reduce the transistor footprint containing all components to 40 nanometers within the next decade. We report on a p-channel transistor scaled to such an extremely small dimension. Built on one semiconducting carbon nanotube, it occupies less than half the space of leading silicon technologies, while delivering a significantly higher pitch-normalized current density—above 0.9 milliampere per micrometer at a low supply voltage of 0.5 volts with a subthreshold swing of 85 millivolts per decade. Furthermore, we show transistors with the same small footprint built on actual high-density arrays of such nanotubes that deliver higher current than that of the best-competing silicon devices under the same overdrive, without any normalization. We achieve this using low-resistance end-bonded contacts, a high-purity semiconducting carbon nanotube source, and self-assembly to pack nanotubes into full surface-coverage aligned arrays.

 

 

A warmer and wetter early Mars

A warmer and wetter solution for early Mars and the challenges with transient warming

ABSTRACT: The climate of early Mars has been hotly debated for decades. Although most investigators believe that the geology indicates the presence of surface water, disagreement has persisted regarding how warm and wet the surface must have been and how long such conditions may have existed. Although the geologic evidence is most easily explained by a persistently warm climate, the perceived difficulty that climate models have in generating warm surface conditions has seeded various models that assume a cold and glaciated early Mars punctuated by transient warming episodes. However, I use a single-column radiative-convective climate model to show that it is comparatively straightforward to satisfy warm and largely unglaciated early Mars conditions, requiring only about 1 percent H2 and 3 bar CO2 or about 20 percent H2 and 0.55 bar CO2. In contrast, the reflectivity of surface ice makes it much harder to transiently warm an initially frozen surface. Surface pressure thresholds required for warm conditions increase by about 10 to 60 percent for transient warming models, depending on ice cover fraction. No warm solution is possible for ice cover fractions exceeding 40, 70, and 85 percent for mixed snow and ice, and 25, 35, and 49 percent for fresher snow and ice, at H2 concentrations of 3, 10, and 20 percent, respectively. If high temperatures (298 to 323 K) were required to produce the observed surface clay amounts on a transiently warm early Mars (Bishop et al.), I show that such temperatures would have required surface pressures that exceed available paleopressure constraints for nearly all H2 concentrations considered (1 to 20 percent). I then argue that a warm and semi-arid climate remains the simplest and most logical solution to the Mars paleoclimate problem.

 

Graphene as a light sail

In 2010 the Nobel Prize in Physics was awarded to Andre Geim and Konstantin Novoselov “for groundbreaking experiments regarding the two-dimensional material graphene”.

Graphene – the perfect atomic lattice

A thin flake of ordinary carbon, just one atom thick, lies behind this year’s Nobel Prize in Physics. Andre Geim and Konstantin Novoselov have shown that carbon in such a flat form has exceptional properties that originate from the remarkable world of quantum physics.

Graphene is a form of carbon. As a material it is completely new – not only the thinnest ever but also the strongest. As a conductor of electricity it performs as well as copper. As a conductor of heat it outperforms all other known materials. It is almost completely transparent, yet so dense that not even helium, the smallest gas atom, can pass through it. Carbon, the basis of all known life on earth, has surprised us once again.

Geim and Novoselov extracted the graphene from a piece of graphite such as is found in ordinary pencils. Using regular adhesive tape they managed to obtain a flake of carbon with a thickness of just one atom. This at a time when many believed it was impossible for such thin crystalline materials to be stable.

However, with graphene, physicists can now study a new class of two-dimensional materials with unique properties. Graphene makes experiments possible that give new twists to the phenomena in quantum physics. Also a vast variety of practical applications now appear possible including the creation of new materials and the manufacture of innovative electronics. Graphene transistors are predicted to be substantially faster than today’s silicon transistors and result in more efficient computers.

Since it is practically transparent and a good conductor, graphene is suitable for producing transparent touch screens, light panels, and maybe even solar cells.

When mixed into plastics, graphene can turn them into conductors of electricity while making them more heat resistant and mechanically robust. This resilience can be utilised in new super strong materials, which are also thin, elastic and lightweight. In the future, satellites, airplanes, and cars could be manufactured out of the new composite materials.

This year’s Laureates have been working together for a long time now. Konstantin Novoselov, 36, first worked with Andre Geim, 51, as a PhD-student in the Netherlands. He subsequently followed Geim to the United Kingdom. Both of them originally studied and began their careers as physicists in Russia. Now they are both professors at the University of Manchester.

Playfulness is one of their hallmarks, one always learns something in the process and, who knows, you may even hit the jackpot. Like now when they, with graphene, write themselves into the annals of science.

An image taken with a scanning tunneling microscope.

A single layer of graphene has an areal density of 0.77 mg/m² and an optical absorption of 2.3%, independent of wavelength. Graphene is 0.345 nm thick, and it has a mechanical strength of 42 N/m. The exact melting point is difficult to determine, but it appears to be above 4900 K. Note: the optical absorption coefficient is k = πα, where α = 1/137 is the fine-structure constant. This gives k = π/137 = 0.0229. This exact expression for the absorption is a consequence of graphene's quantum-mechanical properties.

The force on a light sail is P/c, where P is the absorbed power and c is the speed of light. This energy is lost again as thermal radiation, which is proportional to T⁴. The highest attainable acceleration therefore scales as T⁴/m, where T is the temperature of the sail and m is its mass per unit area.

Let us consider a single layer of graphene with an operating temperature of 4000 K. The sail radiates mainly at optical wavelengths, so the emissivity equals the absorption in the optical part of the spectrum, namely ε = 2.3%. The total power radiated by the sail (from both sides) is therefore 2σεT⁴, where σ = 5.670 × 10⁻⁸ W/m²/K⁴ is the Stefan-Boltzmann constant. Inserting the numbers, I get

2σεT⁴ = 2 × 5.670 × 10⁻⁸ × 0.023 × 4000⁴ ≈ 668 kW/m²

If this power is absorbed perpendicularly on one side, it produces a radiation pressure of 6.68 × 10⁵ / (3 × 10⁸) ≈ 0.0022 N/m², acting on a mass of 7.7 × 10⁻⁷ kg/m². This yields the acceleration

a = 0.0022 / (7.7 × 10⁻⁷) ≈ 2890 m/s² ≈ 295 g

It does not help to use two layers of graphene, since both the absorption and the mass are doubled.
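
These numbers are easy to reproduce. The short Python sketch below redoes the estimate under the same assumptions as above (a single layer with absorption 2.3%, emissivity equal to that absorption, an operating temperature of 4000 K, and an areal density of 0.77 mg/m²); it is an illustration of the arithmetic, not a detailed sail model.

```python
import math

alpha = 1 / 137.0                       # fine-structure constant (approximate)
print(f"optical absorption pi*alpha = {math.pi * alpha:.4f}")   # ~0.0229

absorption = 0.023                      # rounded value used in the estimate above
sigma = 5.670e-8                        # Stefan-Boltzmann constant, W/m^2/K^4
T = 4000.0                              # assumed operating temperature, K
m_area = 0.77e-6                        # areal density of one graphene layer, kg/m^2
c = 3.0e8                               # speed of light, m/s

# In steady state the absorbed laser power equals the power radiated from
# both sides of the sail at temperature T.
P_rad = 2 * sigma * absorption * T**4   # ~668 kW/m^2
pressure = P_rad / c                    # radiation pressure for full absorption
a_max = pressure / m_area               # thermally limited acceleration

print(f"radiated power     : {P_rad / 1e3:.0f} kW/m^2")
print(f"radiation pressure : {pressure:.4f} N/m^2")
print(f"max acceleration   : {a_max:.0f} m/s^2 (~{a_max / 9.81:.0f} g)")
```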

It is interesting to compute the acceleration of the sail if it is illuminated by the Sun at the Earth's distance from the Sun, and to compare it with the Earth's acceleration in its circular motion around the Sun.

The radiation pressure from sunlight is 0.023 × 1360 / (3 × 10⁸) ≈ 1.04 × 10⁻⁷ N/m² = 104 nPa.

The corresponding acceleration is a = 1.04 × 10⁻⁷ / (7.7 × 10⁻⁷) ≈ 0.135 m/s².

The Earth's acceleration in its circular orbit is given by a = v²/r, where v is the Earth's orbital speed and r is its distance from the Sun:

a = 30000² / (150 × 10⁹) = 0.0060 m/s².

The acceleration away from the Sun due to radiation pressure is thus 22.5 times the gravitational acceleration toward the Sun, even though only 2.3% of the sunlight is absorbed by the sail. The net result is that the sail accelerates away from the Sun 21.5 times as strongly as the Earth accelerates toward the Sun.

The escape velocity ve is in general given by ve² = 2vc², where vc is the circular velocity, when only gravity acts.

If a graphene sail starts at rest at a distance where the circular velocity is vc, the final velocity vt is given by vt² = (2 × 21.5)vc².
The ratio of the radiation pressure to gravity does not vary with distance, since both fall off as the inverse square of the distance.

The final velocity of a sail released at the Earth's distance is

vt = √43 × 30 km/s ≈ 197 km/s.

If the sail could instead be released at a distance of 1.5 million km from the center of the Sun, where the circular velocity is 300 km/s, the final velocity would be 10 times higher, i.e. vt ≈ 2000 km/s. If the sail has to carry a payload, the final velocity will of course be lower.
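
The solar-illumination comparison and the resulting coasting speed can be checked the same way. The sketch below uses the same round numbers as the text (solar constant 1360 W/m², orbital speed 30 km/s, Earth-Sun distance 150 million km) and treats the net outward force as reversed gravity of strength (ratio − 1), exactly as in the energy argument above.

```python
import math

absorption = 0.023         # optical absorption of a single graphene layer
S = 1360.0                 # solar constant at Earth's distance, W/m^2
c = 3.0e8                  # speed of light, m/s
m_area = 0.77e-6           # areal density of one layer, kg/m^2
v_circ = 30_000.0          # Earth's circular orbital speed, m/s
r = 150e9                  # Earth-Sun distance, m

a_rad = absorption * S / c / m_area    # outward acceleration from absorbed sunlight
a_grav = v_circ**2 / r                 # solar gravitational acceleration at 1 au
ratio = a_rad / a_grav                 # ~22.5; net outward ratio is (ratio - 1)

# Both accelerations fall off as 1/r^2, so the net outward acceleration acts
# like gravity reversed and scaled by (ratio - 1).  Energy conservation then
# gives the coasting speed, in analogy with the escape velocity:
v_final = math.sqrt(2 * (ratio - 1)) * v_circ

print(f"radiation acceleration  : {a_rad:.3f} m/s^2")
print(f"solar gravity at 1 au   : {a_grav:.4f} m/s^2")
print(f"pressure/gravity ratio  : {ratio:.1f}")
print(f"coasting speed from 1 au: {v_final / 1e3:.0f} km/s")
```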

 

Doubt on Planet Nine

New haul of distant worlds casts doubt on Planet Nine

By Joshua Sokol | Jun. 21, 2017 , 9:00 AM

In early 2016, astronomers made a stunning claim: A giant planet was patrolling the farthest reaches of our solar system. Planet Nine, as they called it, was too far away to see directly. So its existence was inferred from the way its gravity had herded six distant icy worlds into clustered orbits.

Since then, the case for Planet Nine has been bolstered by other evidence, such as a peculiar tilt to the sun’s spin axis, along with a few more of these strange objects, which have elongated orbits of more than 4000 years and never come closer to the sun than Neptune. Now, a survey has found four more of these extreme bodies. The problem: They don’t display the tell-tale clustering. That’s a substantial blow for Planet Nine enthusiasts.

“We find no evidence of the orbit clustering needed for the Planet Nine hypothesis in our fully independent survey,” says Cory Shankman, an astronomer at the University of Victoria in Canada and a member of the Outer Solar System Origins Survey (OSSOS), which since 2013 has found more than 800 objects out near Neptune using the Canada-France-Hawaii Telescope in Hawaii. In a paper posted to arXiv on 16 June and soon to be published in The Astronomical Journal, the OSSOS team describes eight of its most distant discoveries, including four of the type used to make the initial case for Planet Nine.

“I think it’s great work, and it’s exciting to keep finding these,” says Scott Sheppard, an astronomer at the Carnegie Institution for Science in Washington, D.C., who was among the first to suspect a large planet in the distant solar system. But he says three of the four new objects do have clustered orbits consistent with a Planet Nine. The fourth, an object called 2015 GT50, seems to skew the entire set of OSSOS worlds toward a random distribution. But that is not necessarily a knockout blow, he says. “We always expected that there would be some that don’t fit in.”

The OSSOS team says any apparent clustering in their new objects is likely to be the result of bias in their survey. Weather patterns and a telescope’s location, for instance, determine what areas of the sky it can look at and when. It is also harder to see faint solar system objects in bright areas on the sky like the galactic center.

Such biases make OSSOS more likely to find objects in regions that support the Planet Nine hypothesis, says OSSOS team member Michele Bannister, an astronomer at Queen’s University Belfast in the United Kingdom. When the team corrects for that effect, the apparent clustering vanishes. By contrast, the OSSOS team says, many details of the surveys behind the original six objects are unpublished, making it impossible to understand their biases.

That argument does not impress Mike Brown, an astronomer at the California Institute of Technology (Caltech) in Pasadena, who along with Caltech colleague Konstantin Batygin catapulted Planet Nine into the mainstream with their bold claim. “Their main conclusion is that their observations are hopelessly biased, and it’s true,” he says. “But they then kind of make the leap of faith that everybody else’s must be biased, too.” For Brown, any biases in the hodgepodge of surveys that found the earlier objects should average out. That would make the clustering real—whether caused by Planet Nine or not.

So far, astronomers have found only a dozen of the most distant probes of Planet Nine’s supposed sphere of influence. Finding more objects could help settle the question. So could the most direct kind of evidence: an actual image of Planet Nine, which other surveys hope to capture.

“Perhaps the most attractive thing about the Planet Nine hypothesis is that it has a well-defined observational resolution,” Batygin says. “It’s either there or not.”

 

Radio Emission from ε Eridani

Radio Emission from the Exoplanetary System ε Eridani

ABSTRACT: As part of a wider search for radio emission from nearby systems known or suspected to contain extrasolar planets, ε Eridani was observed with the Jansky Very Large Array (VLA) in the 2-4 GHz and 4-8 GHz frequency bands. In addition, as part of a separate survey of thermal emission from solar-like stars, ε Eri was observed in the 8-12 GHz and 12-18 GHz bands of the VLA. Quasi-steady continuum radio emission from ε Eri was detected in the three high-frequency bands at levels ranging from approximately 55 to 83 μJy. The 2-4 GHz emission is shown to be the result of a radio flare a few minutes in duration that is up to 50% circularly polarized — no radio emission is detected following the flare. Both the K2V star and a possible Jupiter-like planet are considered as the source of the radio emission. While a planetary origin for the radio emission cannot be definitively ruled out, given that ε Eri is known to be a moderately active “young Sun”, we conclude that the observed radio emission likely originates from the star.

 

A warm or a cold early Earth?

A warm or a cold early Earth? New insights from a 3-D climate-carbon model

ABSTRACT: Oxygen isotopes in marine cherts have been used to infer hot oceans during the Archean with temperatures between 60°C (333 K) and 80°C (353 K). Such climates are challenging for the early Earth warmed by the faint young Sun. The interpretation of the data has therefore been controversial. 1D climate modeling inferred that such hot climates would require very high levels of CO2 (2-6 bars). Previous carbon cycle modeling concluded that such stable hot climates were impossible and that the carbon cycle should lead to cold climates during the Hadean and the Archean. Here, we revisit the climate and carbon cycle of the early Earth at 3.8 Ga using a 3D climate-carbon model. We find that CO2 partial pressures of around 1 bar could have produced hot climates given a low land fraction and cloud feedback effects. However, such high CO2 partial pressures should not have been stable because of the weathering of terrestrial and oceanic basalts, producing an efficient stabilizing feedback. Moreover, the weathering of impact ejecta during the Late Heavy Bombardment (LHB) would have strongly reduced the CO2 partial pressure leading to cold climates and potentially snowball Earth events after large impacts. Our results therefore favor cold or temperate climates with global mean temperatures between around 8°C (281 K) and 30°C (303 K) and with 0.1-0.36 bar of CO2 for the late Hadean and early Archean. Finally, our model suggests that the carbon cycle was efficient for preserving clement conditions on the early Earth without necessarily requiring any other greenhouse gas or warming process.

 

P9: Stability of Outer Solar System

Evaluating the Dynamical Stability of Outer Solar System Objects in the Presence of Planet Nine

ABSTRACT: We evaluate the dynamical stability of a selection of outer solar system objects in the presence of the proposed new Solar System member Planet Nine. We use a Monte Carlo suite of numerical N-body integrations to construct a variety of orbital elements of the new planet and evaluate the dynamical stability of eight Trans-Neptunian objects (TNOs) in the presence of Planet Nine. These simulations show that some combinations of orbital elements (a,e) result in Planet Nine acting as a stabilizing influence on the TNOs, which can otherwise be destabilized by interactions with Neptune. These simulations also suggest that some TNOs transition between several different mean-motion resonances during their lifetimes while still retaining approximate apsidal anti-alignment with Planet Nine. This behavior suggests that remaining in one particular orbit is not a requirement for orbital stability. As one product of our simulations, we present an a posteriori probability distribution for the semi-major axis and eccentricity of the proposed Planet Nine based on TNO stability. This result thus provides additional evidence that supports the existence of this proposed planet. We also predict that TNOs can be grouped into multiple populations of objects that interact with Planet Nine in different ways: one population may contain objects like Sedna and 2012 VP113, which do not migrate significantly in semi-major axis in the presence of Planet Nine and tend to stay in the same resonance; another population may contain objects like 2007 TG422 and 2013 RF98, which may both migrate and transition between different resonances.