Abiogenesis: Part 1-2

Frankenstein or a Submarine Alkaline Vent: Who Is Responsible for Abiogenesis?

Michael Russell

Part 1: What is life–that it might create itself?

Origin of life models based on “energized assemblages of building blocks” are untenable in principle. This is fundamentally a consequence of the fact that any living system is in a physical state that is extremely far from equilibrium, a condition it must itself build and sustain. This in turn requires that it carries out all of its molecular transformations–obligatorily those that convert, and thereby create, disequilibria–using case‐specific mechanochemical macromolecular machines. Mass‐action solution chemistry is quite unable to do this. We argue in Part 2 of this series that this inherent dependence of life on disequilibria‐converting macromolecular machines is also an obligatory requirement for life at its emergence. Therefore, life must have been launched by the operation of abiotic macromolecular machines driven by abiotic, but specifically “life‐like”, disequilibria, coopted from mineral precipitates that are chemically and physically active. Models grounded in “chemistry‐in‐a‐bag” ideas, however energized, should not be considered.

Frankenstein or a Submarine Alkaline Vent: Who is Responsible for Abiogenesis?

Part 2: As life is now, so it must have been in the beginning

We argued in Part 1 of this series that because all living systems are extremely far‐from‐equilibrium dynamic confections of matter, they must necessarily be driven to that state by the conversion of chemically specific external disequilibria into specific internal disequilibria. Such conversions require task‐specific macromolecular engines. We here argue that the same is not only true of life at its emergence; it is the enabling cause of that emergence, although here the external driving disequilibria, and the conversion engines needed, must have been abiotic. We argue further that the initial step in life’s emergence can only create an extremely simple non‐equilibrium “seed” from which all the complexity of life must then develop. We assert that this complexity develops incrementally and progressively, each step tested for value added “in flight.” And we make the case that only the submarine alkaline hydrothermal vent (AHV) model has the potential to satisfy these requirements.

These two articles require a bit of background. In 1944 Schrödinger published a small book entitled What is Life, based on a series of popular lectures, which attempted to explain living organisms from the viewpoint of a theoretical physicist. Schrödinger had a remarkable ability to explain complicated matters in an accessible way. He coined the term entanglement to illustrate Einstein's problems with Max Born's interpretation of the quantum-mechanical measurement. A living organism is a complicated machine that exists very, very far from thermal equilibrium. It can maintain this improbable state only if it is supplied with negative entropy via a complicated metabolism governed by some form of genetic code (genetics was known in 1943, but the genetic code was not). In classical statistical mechanics, entropy is a measure of the probability of the state of an isolated macroscopic system. The system has maximal entropy at thermal equilibrium.

An organism is a kind of machine governed by the information in the genetic code. The information content of a digital code was first quantified by Claude Shannon five years later. The information in an N-bit message is given by the negative relative entropy with respect to the maximal entropy, which corresponds to every bit having probability 0.5. A stream of bits with P(1) = P(0) = 0.5 therefore has minimal information content in this sense. This is important when choosing a secure password. Schrödinger was right: a living organism is extremely far from thermal equilibrium, and the probability that such a leap occurs by chance is essentially zero.
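
As a minimal illustration of this bookkeeping (my own sketch, not part of the articles), the following Python snippet computes the Shannon entropy of a bit stream from its empirical bit frequencies, and the corresponding "redundancy", the negative relative entropy with respect to the 1 bit/symbol maximum. The example streams are made up.

import math
from collections import Counter

def shannon_entropy_per_bit(bits):
    # Empirical Shannon entropy H = -sum(p * log2 p), in bits per symbol.
    counts = Counter(bits)
    n = len(bits)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(bits):
    # Negative relative entropy with respect to the maximal entropy (1 bit/symbol):
    # zero for P(0) = P(1) = 0.5, positive for any biased, more "ordered" stream.
    return 1.0 - shannon_entropy_per_bit(bits)

fair = [0, 1] * 500             # P(1) = P(0) = 0.5  ->  H = 1.0, redundancy = 0.0
biased = [0] * 900 + [1] * 100  # P(1) = 0.1         ->  H ~ 0.47, redundancy ~ 0.53
print(redundancy(fair), redundancy(biased))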

Until about 20 years ago, the usual idea about the origin of life was to supply simple organic molecules with energy via electrical discharges in the atmosphere or ultraviolet irradiation at the Earth's surface. Attempts have also been made to supply the energy to a bag of organic molecules. But in every case the molecules end up in a state of chaos that quickly reaches thermal equilibrium. The authors call this approach Dr. Frankenstein's method.

How can life be defined so that it can create itself?

Anatomy of a white smoker

Perhaps I can best refer to a blog post in “The Planetary Report” by Michael L. Wong:

The Making of Life

 

The urgency of Arctic change

The urgency of Arctic change

Abstract

This article provides a synthesis of the latest observational trends and projections for the future of the Arctic. First, the Arctic is already changing rapidly as a result of climate change. Contemporary warm Arctic temperatures and large sea ice deficits (75% volume loss) demonstrate climate states outside of previous experience. Modeled changes of the Arctic cryosphere demonstrate that even limiting global temperature increases to near 2 °C will leave the Arctic a much different environment by mid-century with less snow and sea ice, melted permafrost, altered ecosystems, and a projected annual mean Arctic temperature increase of +4 °C. Second, even under ambitious emission reduction scenarios, high-latitude land ice melt, including Greenland, is foreseen to continue due to internal lags, leading to accelerating global sea level rise throughout the century. Third, future Arctic changes may in turn impact lower latitudes through tundra greenhouse gas release and shifts in ocean and atmospheric circulation. Arctic-specific radiative and heat storage feedbacks may become an obstacle to achieving a stabilized global climate. In light of these trends, the precautionary principle calls for early adaptation and mitigation actions.

1. Introduction

During September 2017 the icebreaker Healy headed north in the Chukchi Sea, north of the Bering Strait, with scientists in search of sea ice to study biological and chemical oceanographic changes in the “new Arctic”. Where in previous years there had been sea ice, this time they found no ice in their target area. This anecdote is an example of a larger truth—the Arctic is currently changing at an unprecedented rate, driven by increasing temperatures due to increases in atmospheric greenhouse gas (GHG) concentrations.

This article provides a synthesis of the latest observed trends in the Arctic cryosphere and their feedbacks to the global climate system, and builds on recent assessments by the Arctic Monitoring and Assessment Program. The impact of climate change on the Arctic will remain large even if much of the world adopts aggressive mitigation of GHG emissions. Even achieving the stated goal of limiting global temperature rise to “well below” 2 °C by the end of the century would still leave an annual mean Arctic temperature increase of ∼4 °C by mid-century, with major consequences for local and global climate, ecosystems and societal systems.

These trends do come with uncertainty. For example, the rates of change for ongoing Arctic cryospheric feedbacks are substantially positive, but are incompletely understood and may strengthen over the next decades. Such feedbacks involve albedo and heat storage shifts from loss of glaciers, sea ice, and snow cover; increased carbon releases from permafrost; shifts in clouds and increases in water vapor; and atmospheric and ocean transport changes. Exactly how much changes in the Arctic will affect the larger global climate system is also open to question. Continued scientific research is required to better underpin both mitigation and adaptation planning.

 

Clocks confirm General Relativity

After botched launch, orbiting atomic clocks confirm Einstein’s theory of relativity

By Adrian Cho

Making lemonade from lemons, two teams of physicists have used data from misguided satellites to put Albert Einstein’s theory of gravity, the general theory of relativity, to an unexpected test. The opportunistic experiment confirms to unprecedented precision a key prediction of the theory—that time ticks slower near a massive body like Earth than it does farther away.

As Einstein explained, gravity arises because massive bodies warp space-time. Free-falling objects follow the straightest possible paths in that curved space-time, which to us appear as the parabolic arc of a thrown ball or the circular or elliptical orbit of a satellite. As part of that warping, time should tick more slowly near a massive body than it does farther away. That bizarre effect was first confirmed to low precision in 1959 in an experiment on Earth and in 1976 by Gravity Probe A, a 2-hour rocket-borne experiment that compared the ticking of an atomic clock on the rocket with another on the ground.

In 2014, scientists got another chance to test the effect when two of the 26 satellites now in Europe’s Galileo global navigation system, like the one pictured above, were accidentally launched into elliptical orbits instead of circular ones. The satellites now rise and fall by 8500 kilometers on every 13-hour orbit, which should cause their ticking to speed up and slow down by about one part in 10 billion over the course of each orbit. Now, two teams of physicists have tracked the variations and have shown, to five times better precision than before, that they match the predictions of general relativity, they report 4 December in Physical Review Letters. That’s not bad, considering the satellites weren’t designed to do the experiment. However, another experiment set to be launched to the space station in 2020 aims to search for similar deviations with five times greater precision still.
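
As a rough sanity check of the quoted "one part in 10 billion" (my own back-of-the-envelope sketch; the semi-major axis below is an assumed value, not stated in the article), one can estimate the peak-to-peak relativistic frequency modulation of a clock on a mildly eccentric Keplerian orbit:

# Rough order-of-magnitude check of the quoted "one part in 10 billion".
# Orbit numbers are approximations assumed for illustration, not taken from the article.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s

a = 27980e3           # assumed semi-major axis of the corrected Galileo orbit, m
dr = 8500e3           # apogee-perigee difference quoted in the article, m
e = dr / (2 * a)      # eccentricity, ~0.15

# For a Keplerian orbit the periodic part of the relativistic clock rate
# (gravitational potential plus second-order Doppler) varies as -2*GM/(c^2 * r),
# so the peak-to-peak fractional frequency modulation is roughly 4*GM*e/(c^2 * a).
peak_to_peak = 4 * GM * e / (c**2 * a)
print(f"{peak_to_peak:.1e}")   # ~1e-10, i.e. about one part in 10 billion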

 

The ‘faint young sun’ paradox

Did our ancient sun go on a diet? Bands of martian rock could solve the ‘faint young sun’ paradox

By Joshua Sokol

When Earth was a mass of newly minted rock some 4.5 billion years ago, the solar system was a cold place. Physicists predict our young sun put out some 15% to 25% less energy than it does today—enough to freeze over Earth’s oceans and make Mars even colder. Yet ancient rocks suggest water flowed across both planets, posing a perplexing puzzle.

For years, climate modelers solved this so-called “faint young sun” paradox by proposing that ancient atmospheres on both planets had the right composition of greenhouse gases to insulate them and keep them above freezing. But if the young sun reached its current weight only after a diet—shedding perhaps 5% of its early mass in a stellar wind of escaping particles—it would have burned brighter in its past than predicted, resolving the paradox. The only problem with that hypothesis? Scientists have had no way of knowing whether this stellar slim-down happened.

Now, astronomers say they have come up with a potential “fingerprint” of the sun’s ancient mass—climate cycles preserved in bands of martian rocks. To find their marker, Christopher Spalding, a planetary astronomer at Yale University, geobiologist Woodward Fischer at the California Institute of Technology in Pasadena, and astronomer Gregory Laughlin of Yale started with an orbital cycle that both Earth and Mars experience. As the solar system’s planets revolve around the sun, their own gravity tweaks each other’s orbits.

One of many such interactions pulls the orbits of Earth and Mars back and forth between a more circular path and a more elliptical one. This pattern, a relative of the cycles responsible for Earth’s ice ages, repeats every 405,000 years. According to the team’s simulations, that cycle has kept dependable time over the entire history of the solar system.

Spalding’s team proposes that, as their changing orbits took them closer and farther from the sun, the climates on Earth and Mars shifted, leaving cyclical striping patterns in sedimentary rock, like the layered bands on the walls of Scandinavian fjords. When the orbits of the early planets took them closer to the sun, for example, already wet areas would receive more heat, more rainfall or snow, and thus more erosion. Layers of sediment would be relatively thicker at these times than in colder parts of the cycle.


Layers of rocks on Mars could record a 400,000-year climate cycle.
MSS/JPL-Caltech/NASA

And that means it could be used to track the mass of the sun. If the sun were 5% heavier a few billion years ago, it would have tugged the planets harder, increasing the cycle’s frequency by a matching 5%, to roughly once per 386,000 years.
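
As a quick check of the arithmetic, assuming (as the 2006 study mentioned below argued) that the cycle frequency scales linearly with the solar mass:

\[ T_{\text{early}} \approx \frac{T_{\text{today}}}{1.05} = \frac{405\,000\ \text{yr}}{1.05} \approx 386\,000\ \text{yr}. \]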

Earth, unfortunately, preserves little of its ancient rock, because of the churn of plate tectonics. But Mars does. Spalding suggests a future rover there, armed with dating equipment, could do the trick, he reports in a paper accepted to The Astrophysical Journal Letters. “You’ll have to do it as a side project,” he says, “because everyone wants to find life more than they want to find 400,000-year banding.”

In 2006, another team laid the groundwork for Spalding’s hypothesis, pointing out the linear relationship between the sun’s mass and the larger family of interplanetary orbital cycles. But they stopped at that point because they felt “the climate record, or the geological record, does not have enough resolution,” says Renu Malhotra, a planetary scientist at the University of Arizona in Tucson who led the earlier study. She has similar reservations about Spalding’s approach, she says.

Meanwhile, Dawn Sumner, a geobiologist at the University of California, Davis, and a member of NASA’s Curiosity rover team, says modern Mars rovers could do at least part of the work Spalding’s team has suggested. Curiosity has already measured the thicknesses of sedimentary layers on exposed slopes, and the newly selected landing site for the 2020 rover seems to have steep cliffs, which may reveal similar stripes. “If we found the right spot, this is something people will do,” she says.

But Sumner is less sanguine about dating the various layers, crucial to reveal minute changes in orbital cycles. On Earth, she says, that kind of precision dating requires lots of fieldwork to find the best samples and cart them back to the lab. A rover, by contrast, would be hard-pressed to do it all on site. Facing that obstacle, she says, “It’s probably impossible to test it in the next few decades on Mars.”

 

Bell’s Theorem

Bell’s Theorem

First published Wed Jul 21, 2004; substantive revision Thu Jun 11, 2009

Bell’s Theorem is the collective name for a family of results, all showing the impossibility of a Local Realistic interpretation of quantum mechanics. There are variants of the Theorem with different meanings of “Local Realistic.” In John S. Bell’s pioneering paper of 1964 the realism consisted in postulating in addition to the quantum state a “complete state”, which determines the results of measurements on the system, either by assigning a value to the measured quantity that is revealed by the measurement regardless of the details of the measurement procedure, or by enabling the system to elicit a definite response whenever it is measured, but a response which may depend on the macroscopic features of the experimental arrangement or even on the complete state of the system together with that arrangement. Locality is a condition on composite systems with spatially separated constituents, requiring an operator which is the product of operators associated with the individual constituents to be assigned a value which is the product of the values assigned to the factors, and requiring the value assigned to an operator associated with an individual constituent to be independent of what is measured on any other constituent. From his assumptions Bell proved an inequality (the prototype of “Bell’s Inequality”) which is violated by the Quantum Mechanical predictions made from an entangled state of the composite system. In other variants the complete state assigns probabilities to the possible results of measurements of the operators rather than determining which result will be obtained, and nevertheless inequalities are derivable; and still other variants dispense with inequalities. The incompatibility of Local Realistic Theories with Quantum Mechanics permits adjudication by experiments, some of which are described here. Most of the dozens of experiments performed so far have favored Quantum Mechanics, but not decisively because of the “detection loophole” or the “communication loophole.” The latter has been nearly decisively blocked by a recent experiment and there is a good prospect for blocking the former. The refutation of the family of Local Realistic Theories would imply that certain peculiarities of Quantum Mechanics will remain part of our physical worldview: notably, the objective indefiniteness of properties, the indeterminacy of measurement results, and the tension between quantum nonlocality and the locality of Relativity Theory.
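
To make the violation concrete, here is a small numerical sketch of my own (not part of the encyclopedia entry): for the spin-singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between spin measurements along directions a and b, and the CHSH combination of four such correlations reaches 2*sqrt(2), beyond the bound of 2 obeyed by all Local Realistic theories.

import math

def E(a, b):
    # Quantum-mechanical correlation for the spin-singlet state,
    # for measurement directions at angles a and b (radians): E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Angles (degrees) giving maximal violation for the singlet: a = 0, a' = 90, b = 45, b' = 135.
a, a2, b, b2 = (math.radians(x) for x in (0.0, 90.0, 45.0, 135.0))

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2); any Local Realistic theory must satisfy |S| <= 2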

A popular account of the possibilities of quantum mechanics is given in the book:

DANCE OF THE PHOTONS
From Einstein to quantum teleportation

Anton Zeilinger

A quantum lottery with two photons

 

Ammonia for power

Ammonia for power

Abstract

A potential enabler of a low carbon economy is the energy vector hydrogen. However, issues associated with hydrogen storage and distribution are currently a barrier for its implementation. Hence, other indirect storage media such as ammonia and methanol are currently being considered. Of these, ammonia is a carbon-free carrier which offers a high energy density, higher than that of compressed air. Hence, it is proposed that ammonia, with its established transportation network and high flexibility, could provide a practical next generation system for energy transportation, storage and use for power generation. Therefore, this review highlights previous influential studies and ongoing research to use this chemical as a viable energy vector for power applications, emphasizing the challenges that each of the reviewed technologies faces before implementation and commercial deployment is achieved at a larger scale. The review covers technologies such as ammonia in cycles either for power or CO2 removal, fuel cells, reciprocating engines, gas turbines and propulsion technologies, with emphasis on the challenges of using the molecule and current understanding of the fundamental combustion patterns of ammonia blends.

1. Introduction

Renewable energy is playing an increasingly important role in addressing some of the key challenges facing today’s global society, such as the cost of energy, energy security and climate change. The exploitation of renewable energy looks set only to increase across the world as nations seek to meet their legislative and environmental obligations with respect to greenhouse gas emissions. There is broad agreement that energy storage is crucial for overcoming the inherent intermittency of renewable resources and increasing their share of generation capacity.

Thus, future energy systems require effective, affordable methods for energy storage. To date, a number of mechanical, electrical, thermal, and chemical approaches have been developed for storing electrical energy for utility-scale services. Storage solutions such as lithium batteries or redox cells are unlikely to be able to provide the required capacity for grid-scale energy storage. Pumped hydro and methods such as compressed gas energy storage suffer from geological constraints to their deployment. The only sufficiently flexible mechanism allowing large quantities of energy to be stored over long time periods at any location is chemical energy storage.

Chemical storage of energy can be considered via hydrogen or carbon-neutral hydrogen derivatives. One such example is ammonia, which has been identified as a sustainable fuel for mobile and remote applications. Similar to synthesised hydrogen, ammonia is a product that can be obtained either from fossil fuels, biomass or other renewable sources such as wind and photovoltaics, where excess electrical supply can be converted into some non-electrical form of energy. Some advantages of ammonia over hydrogen are its lower cost per unit of stored energy (over 182 days, ammonia storage would cost 0.54 $/kg-H2 compared to 14.95 $/kg-H2 for pure hydrogen storage), its higher volumetric energy density (7.1 MJ/L for ammonia versus 2.9 MJ/L for hydrogen), its easier and more widespread production, handling and distribution capacity, and its better commercial viability. Ammonia produced by harvesting renewable sources has the following properties.

1. It is itself carbon-free, has no direct greenhouse gas effect, and can be synthesized with an entirely carbon-free process from renewable power sources;

2. It has an energy density of 22.5 MJ/kg, comparable to that of fossil fuels (low-ranked coals have around 20 MJ/kg; natural gas has around 55 MJ/kg, LNG 54 MJ/kg, and hydrogen 142 MJ/kg); a short numerical comparison follows this list;

3. It can easily be rendered liquid by compression to 0.8 MPa at atmospheric temperature; and,

4. An established, reliable infrastructure already exists for both ammonia storage and distribution (including pipeline, rail, road, ship); today around 180 million tons of NH3 are produced and transported annually.
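
A back-of-the-envelope comparison using only the figures quoted above (the calculation itself is my illustration, not taken from the paper): the mass and volume needed to store 1 GJ of energy as ammonia versus as hydrogen.

# Back-of-the-envelope comparison using the figures quoted above.
# Derived ratios are illustrative only; they are not taken from the paper.
energy_to_store = 1000.0               # MJ (= 1 GJ)

lhv_nh3, lhv_h2 = 22.5, 142.0          # gravimetric energy density, MJ/kg
vol_nh3, vol_h2 = 7.1, 2.9             # volumetric energy density, MJ/L

mass_nh3 = energy_to_store / lhv_nh3   # ~44 kg of ammonia
mass_h2 = energy_to_store / lhv_h2     # ~7 kg of hydrogen
litres_nh3 = energy_to_store / vol_nh3 # ~141 L of ammonia
litres_h2 = energy_to_store / vol_h2   # ~345 L of hydrogen

print(mass_nh3, mass_h2, litres_nh3, litres_h2)

Hydrogen wins on mass; ammonia wins on volume and, per the 182-day storage costs quoted above, on long-term storage cost.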

1.1. Interest in ammonia for power

Ammonia has recently started to receive attention internationally as a consequence of the primary benefits outlined in the previous section. For example, Japan has been looking for renewable alternatives for their energy consumption requirements over the last few decades, due to a lack of natural energy resources. Hydrogen has been presented as an attractive solution that could meet their energy demands, accompanied by reduction in greenhouse gas emissions. However, Japan has clearly recognised the potential of ammonia to serve as the hydrogen-carrying energy vector, and a 22-member consortium led by Tokyo Gas has been created to curate “Green Ammonia”, promoted by the Cross-Ministerial Strategic Innovation Program (SIP) of Japan, seeking to demonstrate hydrogen, ammonia and hydrides as building blocks of a hydrogen economy. The Japan Science and Technology Agency (JST) has announced the intentions of the consortium to develop a strategy for “forming an ammonia value chain” that promotes the leadership of the country in the production and use of the chemical worldwide. All consortium members have extensive knowledge of handling ammonia, with multimillion projects in progress or under consideration. For example, IHI Corporation and Tohoku University plan to invest $8.8 M in 2017 to set up a dual-fuel gas turbine that co-fires one part of ammonia to five parts of methane; similarly, Chugoku Electric Power Company intends to conduct co-firing experiments with coal and ammonia (at 0.6%) at one of their power plants, paying $373,000 for the implementation of this project.

(Read paper to learn more)

 

Definition of cause and effect

A Treatise of Human Nature by David Hume (1739)

“Thus we remember to have seen that species of objects we call flame, and to have felt that species of sensation we call heat, we likewise call to mind their constant conjunction in all past instances. Without any further ceremony, we call the one cause and the other effect, and infer the existence of the one from the other”.

Once we observe flame and heat together a sufficient number of times, and note that flame has temporal precedence, we agree to call flame the cause of heat.

In 1739 Hume seems satisfied with a definition based solely on a correlation or association between observed phenomena. But he is probably troubled by the requirement of a temporal order between cause and effect. How does one handle this without a specific requirement on the length of the delay? Nine years later comes the treatise:

An Enquiry Concerning Human Understanding by David Hume (1748)

“We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words (than the word ‘followed’), where, if the first object had not been, the second never had existed“.

This is a completely new, counterfactual definition: “if the first object had not been, the second had never existed”.

The human brain has an instinctive ability to imagine what might have happened had we not observed the facts. Consciousness has the capacity to formulate causal hypotheses about how the world works. All interesting questions concern hypotheses about the world that go beyond statistics based on pure observational data. Our actions go completely off the rails if they are driven by data alone.

David Hume's first treatise from 1739 carries the subtitle:

An Attempt to introduce the experimental Method of Reasoning into MORAL SUBJECTS.

The many banking scandals stem from actions that were data-driven rather than hypothesis-driven. Karl Pearson's data-driven worldview, which denied the existence of cause and effect, led to the idea that racial hygiene was a necessity. If one has no methods for deciding which correlations are real, one has stepped onto a slippery slope.

The quotations are taken from the book The Book of Why by Judea Pearl and Dana Mackenzie (2018).

Is Hume's counterfactual definition of cause and effect used elsewhere as well? Pearl points out that counterfactual arguments are in principle applied in American law in the following form: “Conduct is the cause of a result when it is an antecedent but for which the result in question would not have occurred” (an antecedent is that which comes before). But the path from cause to effect is often mediated by intermediate events. A classic example is “the falling piano”, in which the accused attacks a victim who escapes, runs under the falling grand piano, and is killed.

The above example from American law shows how important it is to find the mechanism or mechanisms that mediate a cause's effect.

In the 1950s a political war raged over the cause of the large number of lung cancer cases. One camp held that the explosive growth in cigarette sales was the cause. The other camp was, unsurprisingly, the tobacco industry, which called in R. A. Fisher as a statistical expert witness. Fisher shared Karl Pearson's view that cause and effect is an outdated concept that does not exist. Any statistical correlation between observed data can be explained by an unknown factor that influences both ends of the correlation. Fisher suggested that the strong correlation between smoking and lung cancer could be due to an unknown smoking gene, as illustrated in the following figure:

Causal diagram for the smoking gene example.
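
A minimal simulation of Fisher's argument (my own illustration, with made-up probabilities): a hidden "smoking gene" that drives both smoking and cancer produces a strong smoking-cancer correlation in the data, even though in this toy model smoking has no causal effect at all.

# Minimal illustration of Fisher's confounder argument (not from the book):
# a hidden "smoking gene" drives both smoking and cancer; smoking itself does nothing,
# yet smoking and cancer end up strongly correlated in the observed data.
import random

random.seed(1)
rows = []
for _ in range(100_000):
    gene = random.random() < 0.3                        # hidden common cause
    smokes = random.random() < (0.8 if gene else 0.1)   # gene -> smoking
    cancer = random.random() < (0.2 if gene else 0.02)  # gene -> cancer (no arrow from smoking)
    rows.append((smokes, cancer))

p_cancer_smokers = sum(c for s, c in rows if s) / sum(s for s, _ in rows)
p_cancer_nonsmokers = sum(c for s, c in rows if not s) / sum(not s for s, _ in rows)
print(p_cancer_smokers / p_cancer_nonsmokers)   # risk ratio well above 1, purely from confounding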

It surprised me greatly that the concept of cause and effect remained in use only within the world of physics. It is well known that all fundamental physical laws are time-symmetric, but this does not keep us from working with cause and effect. Maxwell's equations are time-symmetric, yet this does not prevent us from calculating the radiation from an antenna using the retarded potentials of the charge distribution. In statistical physics an isolated system always evolves from a less probable to a more probable state. In quantum mechanics the observation itself is the cause of a change in the state of the wave function.

A surprising development occurred in 2008. Geneticists showed that a smoking gene of the kind R. A. Fisher had proposed actually exists. The discovery was made using a new analysis technique called a genome-wide association study (GWAS). This is a Big Data method that lets researchers comb through the entire genome looking for genes that occur more frequently in people with certain diseases. Note the word “association”: the method does not establish a causal relationship. It locates a gene that may be worth studying more closely.

The gene found was rs16969968, “Mr. Big” for short, which codes for nicotine receptors in lung cells. It is called Mr. Big because it increases the risk of lung cancer by 77%. We can easily redraw the causal diagram above to reflect the new knowledge about a smoking gene:

Causal diagram slightly rearranged.

The question now is: how does the gene work? Does it make people smoke more and inhale more deeply? Or does it make lung cells more susceptible to cancer?

The epidemiologist Tyler VanderWeele analyzed the gene's function more closely:

1. It does not significantly increase cigarette consumption.
2. It does not cause lung cancer through a smoking-independent path.
3. For those people who do smoke, it significantly increases the risk of lung cancer.

The interaction between the gene and the person's smoking habits is decisive.
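
A small numerical sketch of what such an interaction looks like (the risk numbers are made up for illustration; only the 77% figure comes from the text above):

# Illustration of an interaction (effect modification), with made-up risk numbers:
# the gene raises lung cancer risk only for smokers.
baseline = {  # P(lung cancer) for each (gene, smoker) combination -- illustrative values
    (False, False): 0.002, (True, False): 0.002,   # gene alone: no extra risk
    (False, True):  0.020, (True, True):  0.0354,  # among smokers: 77% higher risk with the gene
}
rr_nonsmokers = baseline[(True, False)] / baseline[(False, False)]
rr_smokers = baseline[(True, True)] / baseline[(False, True)]
print(rr_nonsmokers, rr_smokers)   # 1.0 vs ~1.77: the gene's effect depends on smoking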

This is just one example of how a general definition of cause and effect can be used to determine how a cause is transmitted to its effect. A statistical analysis using Big Data methods can never find causes, only associations, which can then be examined more closely using causal methods.