Mystery of Cosmic He Abundance

THE MYSTERY OF THE COSMIC HELIUM ABUNDANCE

By Prof. F. Hoyle, F.R.S., and Dr. R. J. Tayler

University of Cambridge (1964)

It is usually supposed that the original material of the Galaxy was pristine material. Even solar material is usually regarded as ‘uncooked’, apart from the small concentrations of heavy elements amounting to about 2 per cent by mass which are believed on good grounds to have been produced by nuclear reactions in stars. However, the presence of helium, in a ratio by mass to hydrogen of about 1:2, shows that this is not strictly the case. Granted this, it is still often assumed in astrophysics that the ‘cooking’ has been of a mild degree, involving temperatures of less than 10^8 K, such as occurs inside main-sequence stars. However, if present observations of a uniformly high helium content in our Galaxy and its neighbours are correct, it is difficult to suppose that all the helium has been produced in ordinary stars.

It is the purpose of this article to suggest that mild ‘cooking’ is not enough and that most, if not all, of the material of our everyday world, of the Sun, of the stars in our Galaxy and probably of the whole local group of galaxies, if not the whole Universe, has been ‘cooked’ to a temperature in excess of 10^10 K. The conclusion is reached that: (i) the Universe had a singular origin or is oscillatory, or (ii) the occurrence of massive objects has been more frequent than has hitherto been supposed.

The section on observations (Table 1) is omitted.

We begin our argument by noticing that helium production in ordinary stars is inadequate, by a factor of about 10, to explain the values in Table 1 if they are general throughout the Galaxy. Multiplying the present-day optical emission of the Galaxy, ∼4×10^43 ergs sec^-1, by the age of the Galaxy, ∼3×10^17 sec, and then dividing by the energy production per gram, ∼6×10^18 ergs g^-1, for the process H → He, gives ∼2×10^42 g (10^9 M☉). This is the mass of hydrogen that must be converted to helium in order to supply the present-day optical output of the Galaxy for the whole of its lifetime. Allowance for emission in the ultra-violet and in the infra-red increases the required hydrogen-burning, but probably not by a factor of more than ∼3. Since the total mass of the Galaxy is ∼10^11 M☉, the value of He/H to be expected from H → He inside stars is only ∼0.01. While it is true that the Galaxy may have been much more luminous in the past than it is now, there is no evidence that this was the case.
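Restated as a back-of-envelope script (ours, not the authors'; the numbers are simply those quoted above, and the round value 2×10^33 g is assumed for the solar mass):

# Sketch of the estimate above: hydrogen that must burn to helium to power the Galaxy.
L_opt = 4e43        # present-day optical output of the Galaxy, erg/s (quoted above)
age = 3e17          # age of the Galaxy, s
eps_H_He = 6e18     # energy released per gram by H -> He, erg/g
M_sun = 2e33        # solar mass in grams (assumed round value)
M_gal = 1e11        # mass of the Galaxy, in solar masses

M_burned = L_opt * age / eps_H_He      # grams of hydrogen converted to helium
print(M_burned, M_burned / M_sun)      # ~2e42 g, i.e. ~1e9 solar masses
print(M_burned / (M_sun * M_gal))      # expected He/H by mass, ~0.01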

Next, we shift the discussion to a ‘radiation origin’ of the Universe, in which the rest-mass energy density is less than the energy density of radiation. The relation between the temperature T_10, measured in units of 10^10 K, and the time t in seconds can be worked out from the equations of relativistic cosmology and is:

(1) T_10 = 1.52 t^{-1/2}

In the theory of Alpher, Bethe and Gamow the density was given by:

(2) ρ ≈ 10^{-4} T_10^3 g cm^{-3}

a relation obtained from the following considerations. The material is taken at t = 0 to be entirely neutrons. At t ≃ 10^3 sec, T_10 ≃ 0.05, approximately half the neutrons have decayed. If the density is too low the resulting protons do not combine with the remaining neutrons, and very little helium is formed. On the other hand, if the density is too high there is a complete combination of neutrons and protons, and with the further combination of the resulting deuterium into helium very little hydrogen remains as t increases. Thus only by a rather precise adjustment of the density, that is, by (2), can the situation be arranged so that hydrogen and helium emerge in approximately equal amounts.

It was pointed out by Hayashi and by Alpher, Follin and Herman that the assumption of material initially composed wholly of neutrons is not correct. The radiation field generates electron-positron pairs by:

(3) γ + γ ⇌ e^- + e^+

and the pairs promote the following reactions:

(4) n + e^+ ⇌ p + ν̄
(5) p + e^- ⇌ n + ν

The situation evidently depends on the rates of these reactions. It turns out that for sufficiently small t the balance of the reactions is thermodynamic. This means that not only are protons generated by (4) and (5), but also that the energy densities of the pairs and of the neutrinos must be included in the cosmological equations. At T_10 ≃ 10^2 even μ-neutrinos are produced, and these too should be included. The effect of these new contributions to the energy density is to modify (1) to:

(6) T_10 ≃ 1.04 t^{-1/2}

The values of σv for (4), (5), read from left to right, are:

(7) π^2 (ℏ/mc)^3 [(W ± W_0)/mc^2]^2 [ln 2/(fτ)_lab]

in which the well-known Coulomb factor has been taken as unity, and where the symbols have the following significance: m, the electronic mass; W, the energy, including rest mass, of the positron or electron; W_0, the energy difference between the neutron and proton (2.54 mc^2); the ± signs apply to (4) and (5) respectively; (fτ)_lab, the fτ-value for free neutron decay (1,175 sec).

To obtain the rates of reactions (4) and (5), read from left to right, multiply (7), using the appropriate sign, by the number of positrons/electrons with energies between W and W+dW and then integrate the product with respect to W from zero to infinity. The corresponding values for the rates of (4) and (5) read from right to left can then easily be obtained by noticing that in thermodynamic equilibrium:

(8) n/p = exp(-W_0/kT)

where n, p represent the densities of neutrons and protons.

If, however, one is content with accuracy to within a few per cent it is sufficient simply to write W = mc^2 + qkT in (7) and to multiply by the total number density of pairs. The value then chosen for q is that which makes mc^2 + qkT equal to the average electron energy at temperature T; this is a slowly varying function of T and is given by Chandrasekhar. The pair density is:

(9) π^{-2} (kT/ℏc)^3 (mc^2/kT)^2 K_2(mc^2/kT)

where K_2 is the modified Bessel function of the second kind and second order. For T_10 ∼ 1 this expression can be closely approximated by the simple form 3.0aT^3/4k. The number density of neutrino-antineutrino pairs (of both kinds) is 1.5aT^3/4k.

Adopting this simplified procedure, the reaction rate for n + e^+ → p + ν̄ is easily seen to be:

(10) 0.071 T_10^3 (1 + 0.476qT_10)^2 per neutron per sec.
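As a rough numerical cross-check (ours, not part of the paper): putting W = mc^2 + qkT in (7) gives (W + W_0)/mc^2 = 3.54(1 + 0.476qT_10), and multiplying the q-independent part of (7) by the pair density (9) evaluated at T_10 = 1 reproduces the coefficient 0.071 of (10):

import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, K_n

hbar_c = 3.1615e-17            # erg cm
lam_e = 3.8616e-11             # hbar/(mc), reduced electron Compton wavelength, cm
kT = 0.8617 * 1.6022e-6        # kT at T_10 = 1 (10^10 K), erg
x = 0.511 / 0.8617             # mc^2/kT at T_10 = 1
f_tau = 1175.0                 # (f*tau)_lab for free neutron decay, sec

pair_density = (kT / hbar_c)**3 * x**2 * kn(2, x) / np.pi**2    # expression (9), cm^-3
sigma_v = np.pi**2 * lam_e**3 * 3.54**2 * np.log(2.0) / f_tau   # q-independent part of (7), cm^3 s^-1

print(sigma_v * pair_density)   # ~0.07 per neutron per sec; the (1 + 0.476qT_10)^2 factor
                                # is left out here, exactly as it is in the 0.071 of (10)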

The effect of n + ν → p + e^- is approximately to double the rate at which neutrons are converted to protons. Using equation (8), the rates of the inverse reactions are obtained by multiplying equation (10) by exp(-W_0/kT) = exp(-1.506/T_10). Hence the following differential equation determines the variation of n/(n+p) with time:

(11) d[n/(n+p)]/dt = -0.142 T_10^3 (1 + 0.476qT_10)^2 ×
[(n/(n+p))(1 + exp(-1.506/T_10)) - exp(-1.506/T_10)]

To express this in a form convenient for numerical integration use T10 as the independent variable. With the aid of equation (6):

(12) d[n/(n+p)]/dT_10 = 0.308(1 + 0.476qT_10)^2 ×
[(n/(n+p))(1 + exp(-1.506/T_10)) - exp(-1.506/T_10)]

Equation (12) can be integrated from a sufficiently high temperature, at which the neutrons and protons are almost in thermodynamic balance, down to the temperature at which the pairs disappear and deuterons are formed. The results are insensitive to the starting temperature if it is chosen above T_10 = 2.5. When the protons and neutrons are in thermodynamic balance the right-hand side of (12) is zero, and the initial value of n/(n+p) is chosen to make this right-hand side zero.

An important question evidently arises as to the precise value of T_10 down to which equation (12) should be integrated. Rather surprisingly, it appears that deuterium combines into helium, through D(D,n)He^3(n,p)T(p,γ)He^4, at a temperature as high as T_10 = 0.3, in spite of the small binding energy of deuterium. (The concentration of deuterium used in establishing this conclusion was just that which exists for statistical equilibrium in n + p ⇌ D + γ.) Hence equation (12) must not be integrated to T_10 below 0.3. We estimate that integration down to T_10 = 0.5 probably gives the most reliable result, because (12) overestimates the rate of conversion of neutrons to protons below T_10 = 0.5.

Mr. J. Faulkner has solved the equation for several starting temperatures. Provided T_10 > 2.5 initially, he finds n/(n+p) = 0.18 at T_10 = 0.5, giving:

(13) He/H = n/[2(p - n)] ≃ 0.14

a result in good agreement with the calculations of Alpher, Follin and Herman. Allowing for the approximation in our integration procedure we estimate that this value is not more uncertain than 0.14±0.02. It should be particularly noted that, unlike the result of Alpher, Bethe and Gamow, this value depends only slightly on the assumed material density; essentially this result is obtained provided the density is high enough for deuterons to be formed in a time short compared to the neutron half-life and low enough for the rest mass energy density of the nucleons to be neglected in comparison with the energy density of the radiation field.
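The integration just described can be reproduced approximately with a few lines. The following is a minimal sketch (ours, not the authors' calculation); it treats q as a constant of order 3, roughly the relativistic mean thermal electron energy in units of kT, instead of the slowly varying q(T) of Chandrasekhar:

import numpy as np
from scipy.integrate import solve_ivp

Q = 3.0   # assumed constant stand-in for the slowly varying q(T)

def dfdT(T10, f):
    # equation (12), with f = n/(n+p)
    e = np.exp(-1.506 / T10)
    return 0.308 * (1.0 + 0.476 * Q * T10)**2 * (f * (1.0 + e) - e)

T_start, T_stop = 5.0, 0.5
f_start = np.exp(-1.506 / T_start) / (1.0 + np.exp(-1.506 / T_start))   # thermodynamic balance

sol = solve_ivp(dfdT, (T_start, T_stop), [f_start], rtol=1e-8, atol=1e-10)
f = sol.y[0, -1]
print(round(f, 3))                      # n/(n+p) at T_10 = 0.5, close to the 0.18 quoted above
print(round(f / (2 * (1 - 2 * f)), 3))  # He/H = n/[2(p-n)], close to the ~0.14 of (13)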

Before comparing this result with observation we note that variations of the cosmological conditions which led to equation (6) all seem as if they would have the effect of increasing He/H. If the rest mass energy density were not less than the sum of the energy densities of radiation, pairs and neutrinos, the Universe would have to expand faster at a given temperature in order to overcome the increased gravity, the time-scale would be shorter and the coefficient on the right-hand side of equation (12) would be reduced. Similarly, if there were more than two kinds of neutrino the expansion would have to be faster in order to overcome the gravitational attraction of the extra neutrinos, and the time-scale would again be shorter; and the smaller the coefficient on the right-hand side of equation (12) the larger the ratio He/H turns out to be.

We can now say that if the Universe originated in a singular way the He/H ratio cannot be less than about 0.14. This value is of the same order of magnitude as the observed ratios, although it is somewhat larger than most of them. However, if it can be established empirically that the ratio is appreciably less than this in any astronomical object in which diffusive separation is out of the question, we can assert that the Universe did not have a singular origin. The importance of the value 0.09 for the Sun is clear; should this value be confirmed by further investigations the cosmological implications will be profound. (A similar situation arises in the case of an oscillating universe. The maximum temperature, achieved at moments of maximum density, must be high enough for all nuclei to be disrupted, that is, T_10 > 1. Otherwise, after a few oscillations all hydrogen would be converted into heavier nuclei, and this is manifestly not the case.)

It is reasonable, however, to argue in an opposite way. The fact that observed He/H values never differ from 0.14 by more than a factor 2, combined with the fact that the observed values are of necessity subject to some uncertainty, could be interpreted as evidence that the Universe did have a singular origin (or that it is oscillatory). The difficulty of explaining the observed values in terms of hydrogen-burning in ordinary stars supports this point of view. So far as we are aware, there is only one strong counter to this argument, namely, that there is nothing really special to cosmology in the foregoing discussion. A similar result for the He/H ratio will always be obtained if matter is heated above T_10 = 1, and if the time-scale of the process is similar to that given by equation (6). In this connexion it may well be important that the physical conditions inside massive objects or superstars simulate a radiation Universe. Hoyle, Fowler, Burbidge and Burbidge (1964) were led, for reasons independent of those of the present article, to consider temperatures exactly in the region T_10 ≃ 1. These authors give the following differential equation between the time t and the density ρ in such a superstar:

(14) dt = (24πGρ)^{-1/2} dρ/ρ

and also the following relation for an object of mass M:

(15) ρ = 2.8×10^6 (M☉/M)^{1/2} T_10^3 g cm^{-3}

Eliminating ρ and dρ we have:

(16) dt = (M/2.44×10^4 M☉)^{1/4} dT_10/T_10^{5/2}

whereas the differential form of equation (6) is:

(17) dt = -2.08 dT_10/T_10^3

The difference of sign arises because equation (16) was given for a contracting object. For re-expansion of an object the sign must be reversed, so that the time-scales are identical if M ≃ 5×10^5 M☉. It may be significant that this is about the largest mass in which the required temperature can be reached without the object being required to collapse inside the Schwarzschild critical radius. If collapse inside this radius followed by re-emergence be permitted, larger masses can be considered. The time-scale is then increased above equation (17) and this has the effect of giving a smaller He/H ratio than that calculated above. If an object is inside the Schwarzschild radius and neutrinos do not escape from it, the conditions are closely similar to the cosmological case. On the other hand, the calculation must be slightly changed for objects that do not enter the Schwarzschild radius, since neutrinos are certainly not contained within them. Thus if the same time-scale were used, that is, M ≃ 10^5–10^6 M☉, the absence of neutrinos would reduce the right-hand side of equation (12) by a factor of 2. A corresponding calculation leads to n/(n+p) = 0.22, also in reasonable agreement with observations, especially as all material need not have passed through massive objects. However, a more detailed discussion of massive objects will be required to decide whether the required amount of helium can not only be produced but also ejected from them.
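As a rough check (ours, not the authors'): equating the coefficients of (16) and (17) at T_10 ≈ 1, the epoch relevant to helium formation, gives (M/2.44×10^4 M☉)^{1/4} ≈ 2.08, that is, M ≈ 2.44×10^4 × (2.08)^4 M☉ ≈ 4.6×10^5 M☉, consistent with the quoted M ≃ 5×10^5 M☉.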

This brings us back to our opening remarks. There has always been difficulty in explaining the high helium content of cosmic material in terms of ordinary stellar processes. The mean luminosities of galaxies come out appreciably too high on such a hypothesis. The arguments presented here make it clear, we believe, that the helium was produced in a far more dramatic way. Either the Universe has had at least one high-temperature, high-density phase, or massive objects must play (or have played) a larger part in astrophysical evolution than has hitherto been supposed. Clearly the approximate calculations of this present article must be repeated more accurately, but we would stress two general points:
(1) the weak interaction cross-sections turn out to be just of the right order of magnitude for interesting effects to occur in the time-scale available;
(2) for a wide range of physical conditions (for example, nucleon density) roughly the observed amount of helium is produced.

 

Primordial Black/White Holes

Quantum insights on Primordial Black Holes as Dark Matter

Francesca Vidotto

2nd World Summit on Exploring the Dark Side of the Universe,
25-29 June 2018

A recent understanding of how quantum effects may affect black-hole evolution opens new scenarios for dark matter, in connection with the presence of black holes in the very early universe. Quantum fluctuations of the geometry allow black holes to decay into white holes via tunnelling. This process leads to an explosion and possibly to a long remnant phase, which cures the information paradox. Primordial black holes undergoing this evolution constitute a peculiar kind of decaying dark matter, whose lifetime depends on their mass M and can be as short as M^2. As smaller black holes explode earlier, the resulting signal has a peculiar fluence-distance relation. I discuss the different emission channels that can be expected from the explosion (sub-millimetre, radio, TeV) and their detection challenges. In particular, one of these channels produces an observed wavelength that scales with the redshift following a unique flattened wavelength-distance function, leaving a signature also in the resulting diffuse emission. I conclude by presenting the first insights on the cosmological constraints, concerning both the explosive phase and the subsequent remnant phase.

 

Haavelmo’s Structural Equations

Regression and Causation

Bryant Chen and Judea Pearl
TECHNICAL REPORT, R-395
September 10, 2013

This report surveys six influential econometric textbooks in terms of their mathematical treatment of causal concepts. It highlights conceptual and notational differences among the authors and points to areas where they deviate significantly from modern standards of causal analysis. We find that econometric textbooks vary from complete denial to partial acceptance of the causal content of econometric equations and, uniformly, fail to provide coherent mathematical notation that distinguishes causal from statistical concepts. This survey also provides a panoramic view of the state of causal thinking in econometric education which, to the best of our knowledge, has not been surveyed before.

Appendix A

This appendix provides formal definitions of interventions and counterfactuals as they have emerged from Haavelmo’s interpretation of structural equations. Key to this interpretation is a procedure for reading counterfactual information in a system of economic equations, formulated as follows:

Definition 1 (unit-level counterfactuals). Let M be a fully specified structural model and X and Y two arbitrary sets of variables in M. Let Mx be a modified version of M, with the equation(s) determining X replaced by the equation(s) X=x. Denote the solution for Y in the modified model by the symbol YMx(u), where u stands for the values that the exogenous variables take for a given individual (or unit) in the population. The counterfactual Yx(u) (Read: “The value of Y in unit u, had X been x”) is defined by

(A.1)   Yx(u) ≜ YMx(u)

In words: The counterfactual Yx(u) in model M is defined by the solution for Y in the modified submodel Mx, with the exogenous variables held at U=u.

Figure 1

For example, consider the model depicted in Figure 1(a), which stands for the structural
equations:

Y = fY(X,Z,UY)
X = fX(Z,UX)
Z = fZ(UZ)

Here, fY, fX, fZ are arbitrary functions and UX, UY, UZ are arbitrarily distributed omitted factors. The modified model Mx consists of the equations

Y = fY(X,Z,UY)
X = x
Z = fZ(UZ)

and is depicted in Figure 1(b). The counterfactual Yx(u) at unit u = (uX,uY,uZ) would take the value Yx(u) = fY(x,fZ(uZ),uY), which can be computed from the model. When u is unknown, the counterfactual becomes a random variable, written as Yx = fY(x,Z,UY) with x treated as a constant, and Z and UY random variables governed by the original model. Clearly, the distribution P(Yx=y) depends on both the distribution of the exogenous variables P(UX,UY,UZ) and on the functions fX, fY, fZ.
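As a toy illustration (ours, not from the report), with arbitrarily chosen functions fY, fX, fZ and exogenous values:

# Toy unit-level counterfactual: solve the original model M and the modified model M_x.
def f_Z(u_Z): return 2 * u_Z
def f_X(z, u_X): return z + u_X
def f_Y(x, z, u_Y): return 3 * x - z + u_Y

u_X, u_Y, u_Z = 0.5, -1.0, 1.0     # one unit's exogenous values (hypothetical)

# Factual solution of the original model M
Z = f_Z(u_Z); X = f_X(Z, u_X); Y = f_Y(X, Z, u_Y)

# Counterfactual Y_x(u): in M_x the equation for X is replaced by X = x
x = 0.0
Y_x = f_Y(x, f_Z(u_Z), u_Y)

print(Y, Y_x)   # the factual Y and the counterfactual Y_x(u) = fY(x, fZ(uZ), uY)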

In the linear case, however, the expectation E[Yx] is rather simple. Writing

Y = aX + bZ + UY
X = cZ + UX
Z = UZ

gives

Yx = ax + bZ + UY

and

E[Yx] = ax + bE[Z]
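A small simulation (ours, with arbitrary parameter values and distributions) confirms this expression for the linear case:

import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 1.5                  # arbitrary structural coefficients
N = 1_000_000

U_Z = rng.normal(1.0, 1.0, N)    # exogenous variables; U_Y is taken zero-mean
U_Y = rng.normal(0.0, 1.0, N)

x0 = 3.0
Z = U_Z                          # Z = U_Z
X = np.full(N, x0)               # modified model M_x: the equation for X is replaced by X = x0
Y_x = a * X + b * Z + U_Y        # Y = aX + bZ + U_Y

print(Y_x.mean())                # simulated E[Y_x]
print(a * x0 + b * Z.mean())     # a*x + b*E[Z], as derived above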

Remarkably, the average effect of an intervention can be predicted without making any commitment to functional or distributional form. This can be seen by defining an intervention operator do(x) as follows:

(A.2) P(Y=y|do(x)) ≜ P(Yx=y) ≜ PMx(Y=y)

In words, the distribution of Y under the intervention do(X=x) is equal to the distribution of Y in the modified model Mx, in which the dependence of X on Z is disabled (as shown in Figure 1(b)). Accordingly, we can use Mx to define average causal effects:

Definition 2 (Average causal effect). The average causal effect of X on Y, denoted by E[Y|do(x)] is defined by

(A.3) E[Y|do(x)] ≜ E[Yx] = E[YMx]

Note that Definition 2 encodes the effect of interventions not in terms of the model’s parameters but in the form of a procedure that modifies the structure of the model. It thus liberates economic analysis from its dependence on parametric representations and permits a totally non-parametric calculus of causes and counterfactuals that makes the connection between assumptions and conclusions explicit and transparent.

If we further assume that the exogenous variables (UX,UY,UZ) are mutually independent (but arbitrarily distributed), we can write down the post-intervention distribution immediately by comparing the graph of Figure 1(b) to that of Figure 1(a). If the pre-intervention joint probability distribution is factored (using the chain rule) into:

(A.4) P(x,y,z) = P(z)P(x|z)P(y|x,z)

the post-intervention distribution must have the factor P(x|z) removed, to reflect the missing arrow in Figure 1b. This yields:

P(x,y,z|do(X=x0)) = P(z)P(y|x,z) if x = x0, and 0 otherwise.

In particular, for the outcome variable Y we have P(y|do(x)) = ∑zP(z)P(y|x,z), which reflects the operation commonly known as “adjusting for Z” or “controlling for Z”. Likewise, we have E[Y|do(x)] = ∑zP(z)E[Y|x,z], which can be estimated by regression using the pre-intervention data.
In the simple model of Figure 1a the selection of Z for adjustment was natural, since Z is a confounder that causes both X and Y. In general, the selection of appropriate sets for adjustment is not a trivial task; it can be accomplished nevertheless by a simple graphical procedure (called “backdoor”) once we specify the graph structure.
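A minimal simulation sketch (ours; the structural equations and probabilities below are arbitrary illustrative choices) of estimating E[Y|do(x)] by adjusting for Z, compared with the unadjusted estimate:

import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

Z = rng.binomial(1, 0.4, N)                    # Z = f_Z(U_Z)
X = rng.binomial(1, 0.2 + 0.6 * Z)             # X depends on Z (the confounder)
Y = rng.binomial(1, 0.1 + 0.3 * X + 0.4 * Z)   # Y depends on X and Z; true effect of X is 0.3

def adjusted_mean(x):
    # E[Y|do(x)] = sum_z P(z) E[Y | X = x, Z = z]
    return sum(np.mean(Z == z) * Y[(X == x) & (Z == z)].mean() for z in (0, 1))

naive = Y[X == 1].mean() - Y[X == 0].mean()       # biased by the confounder Z
adjusted = adjusted_mean(1) - adjusted_mean(0)    # "adjusting for Z"
print(naive, adjusted)                            # the adjusted estimate is close to 0.3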

 

Counterfactual definition of effects

In The Book of Why, Judea Pearl tells the story of how he arrived at a counterfactual definition of cause and effect. But this book is not meant to be a mathematical introduction to the subject. A proper introduction is found in CAUSAL INFERENCE IN STATISTICS by Pearl, Glymour, and Jewell. In the following I am going to explain the section:

A Tool Kit for Mediation

A basic mediation model with no confounding can be created by the browser program DAGitty:


The basic mediation model, with no confounding.

The canonical model for a typical mediation problem takes the form:

t = fT(uT)
m = fM(t,uM)
y = fY(t,m,uY)

where T (treatment), M (mediator), and Y (outcome) are discrete or continuous random variables, fT, fM, and fY are arbitrary functions, and UT, UM, and UY represent, respectively, omitted factors that influence T, M, and Y. The triplet U = (UT,UM,UY) is a random vector that accounts for all variations among individuals. The omitted factors are assumed to be arbitrarily distributed but mutually independent.

Using these structural equations for the model M, the outcome is given by:

y = YM(u,t) = fY(fT(uT),fM(t,uM),uY)

This formula makes it possible to introduce two different models, M0 and M1, each with its own mediator function, fM0 and fM1, respectively:

M0: fM0(uM) = fM(0,uM)
M1: fM1(uM) = fM(1,uM)

Using the same formalism, the outcome for each of these models is

y = YM0(u) = fY(fT(uT),fM0(uM),uY) = YM(u,0)
y = YM1(u) = fY(fT(uT),fM1(uM),uY) = YM(u,1)

Pearl suggests a simpler formalism for the two counterfactual models M0 and M1:

Y0(u) ≡ YM0(u) = YM(u,0)
Y1(u) ≡ YM1(u) = YM(u,1)

In general, the counterfactual models obey the following consistency rules:

if  T=0  then  Y0 = Y (factual case)
if  T=1  then Y1 = Y (factual case)

If T is binary, then the consistency rule takes the convenient form:

Y = T×Y1 + (1 – T)×Y0

Counterfactual definition of direct and indirect effects

Four types of effects can be defined for the transition from T=0 to T=1:

(a) Total effect —
TE ≡ E[Y1 – Y0] ≡ E[Y|do(T=1)] – E[Y|do(T=0)]
TE measures the expected increase in Y as the treatment changes from T=0 to T=1, while the mediator is allowed to track the change in T naturally, as dictated by the function fM.

(b) Controlled direct effect —
CDE(m) ≡ E[Y1,m – Y0,m] ≡ E[Y|do(T=1,M=m)] – E[Y|do(T=0,M=m)]
CDE measures the expected increase in Y as the treatment changes from T=0 to T=1, while the mediator is set to a specified level M=m uniformly over the entire population.

(c) Natural direct effect —
NDE ≡ E[Y1,M0 – Y0,M0]
NDE measures the expected increase in Y as the treatment changes from T=0 to T=1, while the mediator is set to whatever value it would have attained (for each individual) prior to the change, that is, under T=0.

(d) Natural indirect effect —
NIE ≡ E[Y0,M1 – Y0,M0]
NIE measures the expected increase in Y when the treatment is held constant at T=0 and M changes to whatever value it would have attained (for each individual) under T=1. It captures, therefore, the portion of the effect that can be explained by mediation alone, while disabling the capacity of Y to respond to T.

We note that, in general, the total effect can be decomposed as
TE = NDE – NIEr
where NIEr stands for the reverse transition, from T=1 to T=0. This implies that NIE is identifiable whenever NDE and TE are identifiable.

We further note that TE and CDE(m) are do-expressions and can, therefore, be estimated from randomized controlled experiments or, in observational studies, by using the back-door or front-door formulae given elsewhere in the book. Not so for the NDE and NIE; a new set of assumptions is needed for their identification in the case of a confounded mediation model, in which dependence exists between UM and (UT,UY).

In the non-confounding case shown in my graph, NDE reduces to
NDE = Σm{E[Y|T=1,M=m] – E[Y|T=0,M=m]}×P(M=m|T=0)

Similarly, NIE becomes
NIE = ΣmE[Y|T=0,M=m]×{P(M=m|T=1) – P(M=m|T=0)}

The last two expressions are known as the mediation formulae. We see that while NDE is a weighted average of CDE, no such interpretation can be given to NIE.

The counterfactual definitions of NDE and NIE permit us to give these effects meaningful interpretations in terms of “response fractions”. The ratio NDE/TE measures the fraction of the response that is transmitted directly, with M “frozen”. NIE/TE measures the fraction of the response that may be transmitted through M, blinded to T. Consequently, the difference (TE – NDE)/TE measures the fraction of the response that is necessarily due to M.

Numerical example: Mediation with binary variables

Let the basic mediation model represent an encouragement design, where T=1 stands for participation in an after-school remedial program, Y=1 for passing the exam, and M=1 for a student spending more than 3 hours per week on homework. Assume further that the data were obtained in a randomized trial to avoid confounding. The data show that training tends to increase both the time spent on homework and the rate of success on the exam. Moreover, training and time spent on homework together are more likely to produce success than each factor alone.

Our research question asks for the extent to which students’ homework contributes to their increased success rates regardless of the training program. The policy implications of such questions lie in evaluating policy options that either curtail or enhance homework efforts, for example, by counting homework effort in the final grade or by providing students with adequate work environments at home. An extreme explanation of the data, with significant impact on educational policy, might argue that the program does not contribute substantively to students’ success, save for encouraging students to spend more time on homework, an encouragement that could be obtained through less expensive means. Opposing this theory, we may have teachers who argue that the program’s success is substantive, achieved mainly due to the unique features of the curriculum covered, whereas the increase in homework efforts cannot alone account for the success observed.

The expected success (Y) for treated (T=1) and untreated (T=0) students, as a function of their homework (M =[1,0]):
E[Y|T=1,M=1] = 0.80
E[Y|T=1,M=0] = 0.40
E[Y|T=0,M=1] = 0.30
E[Y|T=0,M=0] = 0.20

The expected homework (M) done by treated (T=1) and untreated (T=0) students (note that E[M|T=1] ≡ 0×P(M=0|T=1) + 1×P(M=1|T=1)):
E[M|T=1] ≡ Σmm×P(M=m|T=1) = P(M=1|T=1) = 1 – P(M=0|T=1) = 0.75
E[M|T=0] ≡ Σmm×P(M=m|T=0) = P(M=1|T=0) = 1 – P(M=0|T=0) = 0.40

NDE =
{E[Y|T=1,M=1] – E[Y|T=0,M=1]}×P(M=1|T=0) +
{E[Y|T=1,M=0] – E[Y|T=0,M=0]}×P(M=0|T=0) =
(0.80 – 0.30)×0.40 + (0.40 – 0.20)×(1 – 0.40) = 0.32

NIE =
E[Y|T=0,M=1]×{P(M=1|T=1) – P(M=1|T=0)} +
E[Y|T=0,M=0]×{P(M=0|T=1) – P(M=0|T=0)} =
E[Y|T=0,M=1]×{P(M=1|T=1) – P(M=1|T=0)} –
E[Y|T=0,M=0]×{P(M=1|T=1) – P(M=1|T=0)} =
(E[Y|T=0,M=1]-E[Y|T=0,M=0])×(P(M=1|T=1)-P(M=1|T=0)) =
(0.30 – 0.20)×(0.75 – 0.40) = 0.10×0.35 = 0.035

NIEr =
(E[Y|T=1,M=1]-E[Y|T=1,M=0])×(P(M=1|T=0)-P(M=1|T=1)) =
(0.80 – 0.40)×(0.40 – 0.75) = 0.40×(-0.35) = -0.14

TE = NDE – NIEr = 0.32 – (-0.14) = 0.46

NIE/TE = 0.076, NDE/TE = 0.696, 1 – NDE/TE = 0.304

We conclude that the program as a whole has increased the success rate by 46% and a significant portion, 30.4%, of this increase is due to the capacity of the program to stimulate improved homework effort. At the same time, only about 7.6% of the increase can be explained by stimulated homework alone without the benefit of the program itself.
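The arithmetic above can be checked with a few lines (ours, not from the book), using the conditional expectations and the values of P(M=1|T) given for the example:

E_Y = {(1, 1): 0.80, (1, 0): 0.40, (0, 1): 0.30, (0, 0): 0.20}   # E[Y | T=t, M=m]
P_M1 = {1: 0.75, 0: 0.40}                                        # P(M=1 | T=t)

def p_m(m, t):
    return P_M1[t] if m == 1 else 1.0 - P_M1[t]

NDE = sum((E_Y[1, m] - E_Y[0, m]) * p_m(m, 0) for m in (0, 1))   # mediation formula
NIE = sum(E_Y[0, m] * (p_m(m, 1) - p_m(m, 0)) for m in (0, 1))   # mediation formula
NIEr = sum(E_Y[1, m] * (p_m(m, 0) - p_m(m, 1)) for m in (0, 1))  # reverse transition
TE = NDE - NIEr

print(NDE, NIE, NIEr, TE)        # 0.32, 0.035, -0.14, 0.46
print(NIE / TE, 1 - NDE / TE)    # ~0.076 and ~0.304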

 

The Making of Life

The Making of Life

Michael L. Wong is a research associate in the University of Washington’s Astrobiology program.

ARE WE ALONE?

This question looms larger with every passing year of envelope-pushing space exploration. For the first time in human history, we have the potential to find unambiguous signs of extraterrestrial life.

Liquid water is the most fundamental requirement for life as we know it. The physical and chemical properties of H2O control the molecular processes that underpin how life works. Water molecules dissolve ions and enable organic reactions that drive the essential functions of biology. Wherever we find liquid water on Earth, we find life. It’s no wonder that NASA adopted “follow the water” as the theme for its Mars exploration program.

It turns out that water is everywhere in the solar system. A stable lake might hide beneath Mars’ south polar cap. Jupiter’s moon Europa houses perhaps twice as much liquid water as Earth. Tiny Enceladus squirts free samples of its ice-covered ocean into Saturn’s orbit. Titan has two kinds of fluids: a global layer of liquid water sloshing deep beneath a thick ice crust, and hydrocarbon seas on its surface. Add to this list the theoretical subsurface oceans of far-flung Pluto and Eris, and it seems that almost every world is staking its claim to habitability.

We now realize that about as many extrasolar planets exist as there are stars. Given the 100 billion stars in our galaxy and the trillions of galaxies in the observable universe, our cosmos should contain countless water-rich habitats.

But are they teeming with life?

Habitable vs. Inhabited

We used to believe that life spontaneously appeared wherever the conditions were right. In 1861, French scientist Louis Pasteur performed an experiment that refuted this concept of spontaneous generation, showing that life would only arise when a habitable but sterile environment was seeded by life from elsewhere. Life can only arise from life, Pasteur concluded.

In the present age of interplanetary exploration, Pasteur’s experiment serves as an important reminder that “habitable” is not synonymous with “inhabited.” Yet, if life is common across the cosmos, then abiogenesis—a synonym for spontaneous generation that doesn’t bear the latter’s historical baggage—must happen often enough to initiate life on worlds separated by vast tracts of sterile space.

The fact that you are reading The Planetary Report is proof that abiogenesis happened at least once in the universe’s history. However, until we find other such occurrences, we are forced to base our entire understanding of life on a single sample: us. Solving the mystery of how life emerged on Earth from nonliving processes is how astrobiologists hope to connect the concept of habitability to the reality of inhabitation.

CLUES FROM THE PAST

Almost every environment on Earth has been infected by—or at least affected by—biology, from microbes to humans. Life gave us an oxygen-rich atmosphere, introduced thousands of new minerals to Earth’s crust, broke rocks apart, held sediment together, sent the world into a global deep freeze, and is now steadily warming the climate.

Despite Earth’s incredible capacity to host life, there is no evidence that spontaneous generation has happened more than once. Just as Pasteur surmised, the life that exists today arose from life that came before it, which came from older life made by even older life, on and on and on. We know this because we can connect the dots between every living thing that we have discovered on a phylogenetic tree. Thanks to our ability to read the instructions written in DNA and RNA, we can compare genetic codes across every domain of life and draw a map of evolution stretching back to our last universal common ancestor, charmingly referred to as “LUCA.” The identity of LUCA has been lost to history, but through fossil evidence, we know that life has persisted on this planet in one form or another for roughly 4 billion years.

Thus, the very first life form on Earth emerged about one third the age of the universe ago. The conditions of the early Earth—which were nothing like what we experience now—might have been much more conducive to the emergence of life.

IT CAME FROM THE DEEP

You would die instantly if you were transported back in time by 4 billion years, asphyxiating in an oxygen-free environment and succumbing to high doses of ultraviolet radiation from the young Sun. Even if you circumvented those calamities, you’d eventually drown if you didn’t have the foresight to bring a boat because there were probably no landmasses on the infant Earth.

How could anything resembling life possibly originate at the surface of such a treacherous world? The short answer is: it probably didn’t. Instead, a growing body of scientific work has come to suggest that life emerged at hydrothermal vents at the bottom of the ocean.

Let’s say your time machine was also a submarine, one that could dive to the base of Earth’s early ocean. There, you would find spires resembling the chimneys of the Atlantic Ocean’s Lost City hydrothermal field. These massive structures, some more than 50 meters tall, were created by the precipitation of iron-bearing minerals when two very different kinds of water met.

The majority of Earth’s ancient ocean was acidic like a lightly carbonated soft drink, thanks to an overlying carbon dioxide–rich atmosphere. Just beneath the ocean floor, seawater and rock interacted in a process called serpentinization. This chemical reaction changed the water’s pH, rendering it alkaline (the opposite of acidic). It also heated the water and infused it with molecular hydrogen, a valuable chemical fuel for life.

When this alkaline groundwater seeped into the ocean, it found itself out of equilibrium with the surrounding colder, acidic, carbon dioxide–rich seawater. “Out of equilibrium” is just jargon for “unbalanced,” but its ramifications for life are enormous. An out-of-equilibrium situation is full of untapped energy—the potential to enact change and create complexity.

ELECTRONS AND PROTONS POWER LIFE

Consider how disequilibria power humans. We eat and breathe to gain energy, but what does that really mean? At its heart, it’s all about electrons. Our bodies take electrons from the electron-rich food that we eat and transfer them to the electron-greedy oxygen in the air that we breathe.

This electron transfer, mediated by a series of chemical reactions that happen inside of our cells’ mitochondria, releases useful energy. Proteins in our mitochondria use this energy to pump protons across an inner membrane, transforming what used to be an imbalance in electrons into an imbalance in protons. A protein called ATP synthase uses the potential energy stored in the imbalance of protons to create adenosine triphosphate (ATP) molecules. Actually, ATP exists as part of yet another imbalance: that between its wholesome self and its broken pieces, adenosine diphosphate and a lone phosphate. ATP is often thought of as the “energy currency of life,” and now the origin of that stored energy is clear: it’s the useful energy derived from a disequilibrium.

It’s not just our cells that harness these disequilibria to create ATP. Almost all the other living things on Earth do too, down to the most primitive single-celled organisms. It’s so universal, in fact, that these disequilibria might even be related to how life started.

A HYDROTHERMAL HATCHERY OF LIFE

Back in your time-traveling submarine 4 billion years in the past, you put a microscope up against a hydrothermal chimney. You find that this colossal tower is built like a high-rise apartment building with trillions of tiny mineral rooms, or vesicles, each roughly the size of a biological cell.

Inside these vesicles, H2 from serpentinization and CO2 from the surrounding seawater swirl in chemical disequilibrium. In this case, H2 is the fuel (electron donor) and carbon dioxide is the air (electron acceptor). There’s a second disequilibrium present between the alkaline vent fluid and the acidic seawater. The contrast in pH is a natural proton imbalance that resembles the proton imbalance created in modern-day cells.
We don’t know how life really began, but here’s a plausible scenario involving the untapped geochemical energy present in these ancient hydrothermal chimneys. Travel forward through time and you might see H2 and CO2 react with each other, aided by the catalytic metals in the vesicle’s walls. They would not only form organic molecules but release pent-up energy as well, and if some organic-mineral precursor to ATP synthase could use the natural proton gradient to bind phosphates together, an energy currency is within the realm of possibility. Incorporating nitrogen- and sulfur-bearing molecules dissolved in the surrounding water, these processes could lead to the first metabolic network: a web of reactions that reinforces itself, growing more stable and more complex with time.

Eventually, this network might lead to information-carrying and self-replicating molecules like RNA, allowing for greater adaptive abilities to changing conditions. It might also construct lipid membranes to replace the inflexible and immutable mineral walls and build ion pumps to regulate its own proton imbalance, thereby gaining the ability to escape these hydrothermal confines.

In the end, it would be a fully functional cell—a product of its geochemical past afloat in the formerly sterile abyss of this ancient sea, soon to encounter new places to thrive and evolve into new ways of being.

AN EVER-EVOLVING FIELD

For all of its attractive aspects, nobody knows whether this story represents primordial reality. As you read this, scientists are testing various aspects of the hydrothermal-vent hypothesis. Some are learning more about how chemical disequilibria create complex structures by making hydrothermal analogs in the lab. Others are conducting experiments on the catalytic properties of metal-bearing minerals. Still others are investigating how the temperature disparity between the vents’ hot interiors and cold exteriors could help concentrate the organic components of life. A few are even trying to come up with clever ideas for the structure of ATP synthase’s precursor and how proto-metabolic networks stored and carried information.

Some scientists are investigating completely different hypotheses for the origin of life. Many of these involve prebiotic soups of complex organic molecules. These organic medleys sunbathe on Earth’s surface until just the right combination of them comes together to form life. In the vast expanse of Earth’s primitive ocean, the chance encounters between potential molecular collaborators would be rare. So, most researchers suspect that the organic matchmaking must have occurred at tidal pools along the seashore or in freshwater hydrothermal pools, where periodic drying episodes promoted the concentration and polymerization (or “sticking together”) of life’s building blocks.

Origin-of-life research doesn’t lack in “far out” ideas either. One camp argues that life on Earth began as self-replicating clay minerals. Another group claims that nuclear-powered geysers—formed by the decay of radioactive uranium—enabled the prebiotic reactions that formed life.
Then there’s the notion that we’re all Martians, insisted upon by those who consider early Mars a likelier place to start life than early Earth. In this scenario, primitive Martian microbes hitched a ride deep inside an impact-ejected rock and seeded our planet in a process known as lithopanspermia.

If any of these ideas proves correct, what would that imply about our loneliness in the cosmos?

SEEKING OUR COSMIC NEIGHBORS

If life originated because of the physical and chemical disequilibria at hydrothermal vents, then countless wet, rocky planetoids should provide the basic requirements to start life. Hydrothermal systems can result from water meeting rock on any tectonically active world. Martian rocks examined by the Spirit rover bear the mineral byproducts of ancient hydrothermal systems, and Cassini identified the telltale signs of hydrothermal activity on the Saturnian moon Enceladus. Thus far, the hydrothermal-vent hypothesis is the only scheme that could plausibly lead to independent abiogeneses on both Earth-like planets and ice-covered ocean worlds.

However, if the emergence of life requires surface environments that undergo wet-dry cycles and are directly exposed to air and radiation, then Mars would be much more conducive to life than Europa or Enceladus are. In this case, the icy satellites of Jupiter and Saturn—as well as their analogs across the cosmos—would be habitable but sterile, barring the unlikely scenario that some rock ejected from an inhabited world like Earth seeded them.

If life can arise in a chemical soup without the aid of catalytic minerals or tectonic activity, then that raises the possibility of exotic life on the surface of Titan. This moon of Saturn produces complex organic molecules in its atmosphere that collect on its surface and in its hydrocarbon seas. Scientists look to Titan for potential analogs to the organic-rich soups that were present on Earth’s first landmasses.

At present, there is no consensus in the scientific community on the requirements for the origin of life. Perhaps none of our hypotheses are correct, or perhaps the answer is “all of them.” We just don’t know yet.

Finding life on any neighboring world would tell us which—if any—of our origin hypotheses are more likely than the others. Not finding life would also teach us that habitability alone is an insufficient condition for life. Now that we know that such a diverse array of astrobiological candidates exists in our own cosmic backyard, the question all but asks itself: is anyone out there?