Earth rock found on the moon

Ancient Earth rock found on the moon

By Richard A. Lovett

What may be the oldest-known Earth rock has turned up in a surprising place: the moon. A 2-centimeter chip embedded in a larger rock collected by Apollo astronauts is actually a 4-billion-year-old fragment of our own planet, scientists say.

“It’s a very provocative conclusion but it could be right,” says Munir Humayun, a cosmochemist at Florida State University in Tallahassee. The finding “helps paint a better picture of early Earth and the bombardment that modified our planet during the dawn of life,” says David Kring, a lunar geologist at the Lunar and Planetary Institute in Houston, Texas, and an author on a study published on 24 January in Earth and Planetary Science Letters.

Sometime after the rock formed, Kring says, an asteroid impact blasted it from Earth. It found its way to the moon, which was three times closer to Earth than it is today. The fragment was later engulfed in a lunar breccia, a motley type of rock. Finally, Apollo 14 astronauts returned it to Earth in 1971. Although geologists have found meteorites on Earth that came from the moon, Mars, and asteroids, “This is the first time a rock from the moon has been interpreted as a terrestrial meteorite,” says Elizabeth Bell, a geochemist at the University of California, Los Angeles, who was not part of the study.

Several years ago, a team led by Kring detected fragments of asteroids in similar moon rocks, so looking for pieces of Earth was a logical next step.

Trace elements in the rock’s minerals, which are a granitelike mix of quartz, feldspar, and zircon crystals, provided clues to its origin. By measuring uranium and its decay products in the zircons, the team dated the formation of the rock, while titanium levels helped reveal the temperature and pressure at the time. Still other trace elements, such as cerium, pointed to the amount of water likely to have been present.

The results, Kring says, indicate that the rock formed in a water-rich environment at temperatures and pressures corresponding to either 19 kilometers beneath the surface of Earth, or about 170 kilometers deep in the moon. Craig O’Neill, a geodynamicist at Macquarie University in Sydney, Australia, favors an Earth origin because a depth of 170 kilometers would be “crazy”—way below the moon’s crust, where granitic rocks could have formed.

The rock isn’t Earth’s oldest relic: Zircon crystals from western Australia have been dated to as far back as 4.4 billion years, only 150 million years after Earth’s formation. But these zircons were stripped from their parent rocks and reworked into new materials. Here, Kring says, there’s no doubt that the rock and its zircons formed at the same time. “We’re sure it’s a complete rock,” he says. The rock is about as old as the oldest rocks found on Earth—metamorphic rocks from Canada and Greenland.

Bell says its preservation is not so surprising because the moon lacks the weather and geologic processes that erase ancient rocks on Earth. In fact, she says, the moon might be a better place to look for ancient Earth rocks than Earth itself. Norm Sleep, a geophysicist at Stanford University in Palo Alto, California, agrees. He says that although meteorites from Earth probably constitute a tiny fraction of the moon’s surface material, eons of subsequent asteroid impacts have churned them throughout the lunar soil, making it easier to find a small piece of Earth in a random sample of moon.

If the rock is truly terrestrial, it holds clues about an ancient time called the Hadean. For starters, it confirms Earth was being hit by asteroids big enough to blast rocks all the way to the moon. It also shows that the granitic rocks that make up Earth’s continents were already forming, Kring says. “That’s a big thing.”

Kring believes other scientists will soon be combing the Apollo moon rocks for bits of early Earth. Only a small fraction of the 382 kilograms of rocks brought back by the moonwalkers have been studied, he says, and analytical techniques are constantly improving. “I think we are going to get a little library of fragments of the early Earth emerging in the next few years,” he says.

 

Analyzing Causal DAGs

Welcome to DAGitty!

DAGitty v3.0

DAGitty is a browser-based environment for creating, editing, and analyzing causal models (also known as directed acyclic graphs or causal Bayesian networks). The focus is on the use of causal diagrams for minimizing bias in empirical studies in epidemiology and other disciplines. For background information, see the “learn” page.

Because the main purpose of DAGitty is facilitating the use of causal models in empirical studies, it is and will always be Free software (both “free as in beer” and “free as in speech”). You can copy, redistribute, and modify it under the terms of the GNU General Public License. Enjoy!

DAGitty development has been sponsored by the Leeds Institute for Data Analytics and by the Deutsche Forschungsgemeinschaft (DFG).

A brief introduction to causal diagrams

In Epidemiology, causal diagrams are also frequently called DAGs.
In a nutshell, a DAG is a graphical model that depicts a set of hypotheses about the causal process that generates a set of variables (X,Y,Z… with actual values x,y,z…) of interest. An arrow X→Y is drawn if there is a direct causal effect of X on Y. Intuitively, this means that the natural process determining Y is directly influenced by the status of X, and that altering X via external intervention would also alter Y. However, an arrow X→Y represents only that part of the causal effect which is not mediated by any of the other variables in the diagram. If one is certain that X does not have a direct causal influence on Y, then the arrow is omitted. This has two important implications: (1) arrows should follow time order, or else the diagram contradicts the basic principle that causes must precede their effects; (2) the omission of an arrow is a stronger claim than the inclusion of an arrow: the presence of an arrow merely depicts that X might have an effect on Y.

Mathematically, the semantics of an arrow X→Y can be defined as follows. Given a DAG G and a variable Y in G, let X1,…,Xn be all variables in G that have direct arrows Xi→Y (also called the parents of Y). Then G claims that the causal process determining the value of Y can be modelled as a mathematical function Y := f(X1,…,Xn,εY), where εY (the “causal residual”) is a random variable that is jointly independent of all Xi.

In an epidemiological context, we are often interested in the putative effect of a set of variables, called exposures, on another set of variables, called outcomes. A key question in Epidemiology (and many other empirical sciences) is: how can we infer the causal effect of an exposure on an outcome of interest from an observational study? Typically, a simple regression will not suffice due to the presence of confounding factors. If the assumptions encoded in a given diagram hold, then we can infer from the diagram sets of variables for which to adjust in an observational study to minimize such confounding bias.

If we were to perform an association study on the relationship between carrying matches in one’s pocket and developing lung cancer, we would probably find a correlation between these two variables. However, this correlation would not imply that carrying matches in your pocket causes lung cancer: smokers are more likely to carry matches in their pockets, and also more likely to develop lung cancer. This is an example of a confounded association between two variables, transmitted along a biasing path. Under these assumptions, were we to adjust for smoking, e.g. by averaging separate effect estimates for smokers and non-smokers, we would no longer find a correlation between carrying matches and lung cancer. In other words, adjustment for smoking would close the biasing path.
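
To make the example concrete, here is a minimal simulation sketch in Python (the probabilities and variable names are invented for illustration, not taken from any real study): smoking causes both match-carrying and lung cancer, while matches have no effect on cancer. The crude comparison shows a spurious association, which disappears within strata of smoking.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder: smoking status.
smoker = rng.random(n) < 0.3
# Smokers are more likely to carry matches; matches have no causal effect on cancer.
matches = rng.random(n) < np.where(smoker, 0.8, 0.1)
# Smoking raises cancer risk; matches do not enter this equation at all.
cancer = rng.random(n) < np.where(smoker, 0.15, 0.01)

def risk(mask):
    return cancer[mask].mean()

# Crude comparison: a spurious association via the open biasing path
# matches <- smoker -> cancer.
print("P(cancer | matches)    =", round(risk(matches), 3))
print("P(cancer | no matches) =", round(risk(~matches), 3))

# Adjusting for smoking (stratification) closes the biasing path:
for s in (True, False):
    stratum = smoker == s
    print(f"smoker={s}:",
          round(risk(stratum & matches), 3), "vs",
          round(risk(stratum & ~matches), 3))
```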

Analyzing diagrams

Causal diagrams contain two different kinds of paths between exposure and outcome variables.

Causal paths start at the exposure, contain only arrows pointing away from the exposure, and end at the outcome. That is, they have the form e→x1→…→xk→o.

Biasing paths are all other paths from exposure to outcome. For example, such paths can have the form e←x1→…→xk→o.

With respect to a set Z of conditioning variables (that can also be empty if we are not conditioning on anything), paths can be either open or closed (also called d-separated). A path is closed by Z if one or both of the following holds:

• The path p contains a chain x→m→y or a fork x←m→y such that m is in Z.

• The path p contains a collider x→c←y such that c is not in Z and, furthermore, Z does not contain any descendant of c in the graph.

Otherwise, the path is open. The above criteria imply that paths consisting of only one arrow are always open, no matter the content of Z. Also it is possible that a path is closed with respect to the empty set Z={}.
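
These two rules are mechanical enough to check in code. The sketch below (plain Python; the graph encoding and function names are my own, not DAGitty’s API) decides whether a given path is closed by a conditioning set Z, and tests it on the biasing path E←A→Z←B→D that reappears in the adjustment-set example later in this manual:

```python
def descendants(children, node):
    """All nodes reachable from `node` via directed edges (children dict)."""
    seen, stack = set(), [node]
    while stack:
        for c in children.get(stack.pop(), ()):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def path_closed(edges, path, Z):
    """True if `path` (a sequence of nodes) is closed by the set Z.

    `edges` is a set of directed edges (u, v) meaning u -> v.
    A non-endpoint node m on the path is a collider iff both of its
    path neighbours point into it.
    """
    children = {}
    for u, v in edges:
        children.setdefault(u, set()).add(v)
    for a, m, b in zip(path, path[1:], path[2:]):
        collider = (a, m) in edges and (b, m) in edges
        if not collider and m in Z:
            return True   # rule 1: chain or fork blocked by conditioning on m
        if collider and m not in Z and not (descendants(children, m) & set(Z)):
            return True   # rule 2: collider with neither m nor a descendant in Z
    return False

# The biasing path E <- A -> Z <- B -> D from the adjustment-set example:
edges = {("A", "E"), ("A", "Z"), ("B", "Z"), ("B", "D"),
         ("Z", "E"), ("Z", "D"), ("E", "D")}
path = ["E", "A", "Z", "B", "D"]
print(path_closed(edges, path, set()))        # True: the collider Z closes it
print(path_closed(edges, path, {"Z"}))        # False: conditioning on Z opens it
print(path_closed(edges, path, {"A", "Z"}))   # True: the fork A closes it again
```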

Coloring

It is not easy to verify by hand which paths are open and which paths are closed, especially in larger diagrams. DAGitty highlights all arrows lying on open biasing paths in red and all arrows lying on open causal paths in green. This highlighting is optional and is controlled via the “highlight causal paths” and “highlight biasing paths” checkboxes.

Causal effect identification

Some of the most important features of DAGitty are concerned with the question: how can causal effects be estimated from observational data? Currently, two types of causal effect identification are supported: adjustment sets, and instrumental variables.

Adjustment sets

Finding sufficient adjustment sets is one main purpose of DAGitty. To identify adjustment sets, the diagram must contain at least one exposure and at least one outcome.

Total and direct effects. One can understand adjustment sets graphically by viewing an adjustment set as a set Z that closes all biasing paths while keeping the desired causal paths open. DAGitty considers two kinds of adjustment sets:

• Adjustment sets for the total effect are sets that close all biasing paths and leave all causal paths open.

• Adjustment sets for the direct effect are sets that close all biasing paths and all causal paths, and leave only the direct arrow from exposure X to outcome Y open.

In a diagram where the only causal path between exposure and outcome is the path X→Y, the total effect and the direct effect are equal. This is true e.g. for the diagram in Figure 1:

An example diagram where the direct and total effects are not equal is shown in Figure 2:

A causal diagram where the total and direct effects of exposure X on outcome Y are not equal.

For identifying sufficient adjustment sets, it suffices to restrict attention to the part of the model that consists of the exposure, the outcome, and their ancestors. DAGitty indicates this by coloring the irrelevant nodes gray (depending on the boxes checked).

Minimal sufficient adjustment sets. A minimal sufficient adjustment set is a sufficient adjustment set of which no proper subset is itself sufficient. For example, consider again the causal diagram in Figure 1. The following three sets are sufficient adjustment sets for the total and direct effects, which are equal in this case: {A,B,Z}, {A,Z}, {B,Z}. Each of these sets is sufficient because it closes all biasing paths and leaves the causal path open. The sets {A,Z} and {B,Z} are minimal sufficient adjustment sets, while the set {A,B,Z} is sufficient but not minimal. In contrast, the set {Z} is not sufficient, since adjusting for Z alone would open the path E←A→Z←B→D: Z is a collider on this path, so conditioning on it induces an association between A and B, and hence an additional correlation between E and D.
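
These claims can also be verified programmatically via the back-door criterion: for the total effect, a set Z containing no descendants of the exposure is sufficient exactly when it d-separates exposure and outcome in the graph with all edges out of the exposure removed. Below is a sketch using networkx (assuming a version that still provides nx.d_separated; newer releases rename it is_d_separator). The edge list is a hypothetical reconstruction consistent with the sets discussed above, since Figure 1 itself is not reproduced here:

```python
import itertools
import networkx as nx

# Hypothetical reconstruction of Figure 1: exposure E, outcome D,
# covariates A, B, Z. Assumed, not copied from the DAGitty manual.
G = nx.DiGraph([("A", "E"), ("A", "Z"), ("B", "Z"), ("B", "D"),
                ("Z", "E"), ("Z", "D"), ("E", "D")])

def sufficient_for_total_effect(G, exposure, outcome, Z):
    """Back-door criterion: Z closes all biasing paths iff it d-separates
    exposure and outcome once the edges out of the exposure are deleted.
    (Also requires that Z contain no descendant of the exposure; that
    holds for A, B, and Z in this graph.)"""
    H = G.copy()
    H.remove_edges_from(list(G.out_edges(exposure)))
    return nx.d_separated(H, {exposure}, {outcome}, set(Z))

candidates = ["A", "B", "Z"]
for r in range(len(candidates) + 1):
    for Z in itertools.combinations(candidates, r):
        print(set(Z) or "{}", "->", sufficient_for_total_effect(G, "E", "D", Z))
```

Under this reconstruction, exactly the sets containing {A,Z}, {B,Z}, or {A,B,Z} print True, matching the discussion above.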

Testable implications

Any implications obtained from a causal diagram, such as possible adjustment sets or instrumental variables, are of course dependent on the assumptions encoded in the diagram. To some extent, these assumptions can be tested via the (conditional) independences implied by the diagram: if two variables X and Y are d-separated by a set Z, then X and Y should be conditionally independent given Z. The converse is not true: two variables X and Y can be independent given a set Z even though they are not d-separated in the diagram. Furthermore, two variables can also be d-separated by the empty set Z=∅. In that case, the diagram implies that X and Y are unconditionally independent.

DAGitty displays all minimal testable implications in the “Testable implications” text field. Only implications that are in fact testable, i.e., that do not involve any unobserved variables, are displayed. Note that the set of testable implications displayed by DAGitty does not constitute a “basis set”. Future versions will allow choosing between different basis sets.

In general, the fewer arrows a diagram contains, the more testable predictions it implies. For this reason, “simpler” models with fewer arrows are in general easier to falsify (Occam’s razor).
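
For continuous variables, one simple way to probe an implied independence X ⊥ Y given Z is a partial correlation: regress X and Y on Z and correlate the residuals. A minimal sketch (illustrative only; a real analysis would use a formal test with p-values):

```python
import numpy as np

def partial_corr(x, y, Z):
    """Correlation of x and y after linearly removing the variables in Z."""
    Z = np.column_stack([np.ones(len(x))] + list(Z))   # add an intercept column
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Data generated from a chain X -> Z -> Y, so the diagram implies X _||_ Y | Z.
rng = np.random.default_rng(2)
x = rng.normal(size=5000)
z = 2 * x + rng.normal(size=5000)
y = -z + rng.normal(size=5000)

print(partial_corr(x, y, [z]))   # near 0: consistent with the implied independence
print(np.corrcoef(x, y)[0, 1])   # far from 0: X and Y are marginally dependent
```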

How does Salary (S) depend on Education (Ed) and Experience (Ex)?

S, Ed, and Ex are random variables. In a purely data-driven analysis, three possible regressions could be chosen: S against Ed and Ex, Ed against S and Ex, or Ex against S and Ed.

The situation is entirely different if a directed acyclic graph is used as a causal model:

Causal diagram for the effect of education (Ed) and experience (Ex) on salary (S).

The arrow semantics defined above imply that S has the functional form S := f(Ed, Ex, Us), where Us is an unobserved variable that affects salary. In addition, Education has a direct arrow into Experience, which implies that Ex has the functional form Ex := g(Ed, Uex), where Uex is an unobserved variable. Combining the two equations, we obtain the functional model
S := f(Ed, g(Ed, Uex), Us).
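
As a purely illustrative sketch, the two structural equations can be written directly as code. The particular choices of f, g, and the noise distributions below are arbitrary assumptions; the diagram itself fixes only which variables enter which equation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Exogenous and unobserved inputs.
Ed  = rng.integers(8, 21, n)     # years of education
Uex = rng.normal(0, 2, n)        # unobserved causes of experience
Us  = rng.normal(0, 5, n)        # unobserved causes of salary

# Ex := g(Ed, Uex): education delays labour-market entry (one arbitrary choice of g).
Ex = np.clip(30 - Ed + Uex, 0, None)

# S := f(Ed, Ex, Us): salary responds to both inputs (one arbitrary choice of f).
S = 20 + 2.5 * Ed + 1.2 * Ex + Us

# The combined model S := f(Ed, g(Ed, Uex), Us) means education affects
# salary both directly and indirectly through experience.
print(np.corrcoef(Ed, S)[0, 1])
```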

Natural Indirect Effect (NIE)

In 1973, Eugene Hammel noticed a worrisome trend in the university’s admission rates for men and women. His data showed that 44% of the men who applied to graduate school at Berkeley had been accepted, compared to only 35% of the women. Graduate admission decisions are made by individual departments rather than by the university as a whole, so it made sense to look at the admission data department by department. But when he did so, Hammel discovered an amazing fact: department after department, the admissions decisions were consistently more favorable to women than to men. How could this be? The causal diagram is as follows:

Causal diagram for the Berkeley admission paradox – simple version.

Bickel and Hammel wrote an article, published in Science magazine in 1975, proposing a simple explanation: women were rejected in greater numbers because they applied to departments that were harder to get into.

It is also very illuminating to look at the definition of discrimination in US case law, which uses counterfactual terminology. In Carson v. Bethlehem Steel Corp. (1996), the Seventh Circuit Court wrote, “The central question in any employment-discrimination case is whether the employer would have taken the same action had the employee been of a different race and everything else had been the same.” This definition clearly expresses the idea that we should disable all causal pathways that lead from gender to admission through any other variable. In other words, discrimination equals the direct effect of gender on the admission outcome (the light green arrow).

Many more details are found in Chapter 9 of The Book of Why by Judea Pearl.

But how is the indirect effect (the thick arrows) defined? This problem is best illustrated by some quotes from the book:

“As Melanie Wall’s student said, we have no variable or set of variables to intervene on to disable the direct path and let the indirect path stay active. For this reason the indirect effect seemed to me like a figment of the imagination, devoid of independent meaning except to remind us that the total effect may differ from the direct effect. I even said so in the first edition of my book Causality. This was one of the three greatest blunders of my career.”

“In retrospect, I was blinded by the success of the do-calculus, which had led me to believe that the only way to disable a causal path was to take a variable and set it to one particular value. This is not so; if I have a causal model, I can manipulate it in many creative ways, by dictating who (=variable) listens to whom, when, and how.”

“All these struggles came to sudden resolution, almost like a divine revelation, when I read the legal definition of discrimination again: had the employee been of a different race … and everything else had been the same. Here we have it — the crux of the issue! It’s a make-believe game. We deal with each individual on his or her own merits, and we keep all characteristics of the individual constant at whatever level they had prior to the change in the treatment variable (=race).”

“I realized that every direct and indirect effect could be translated into a counterfactual expression. Once I saw how to do that, it was a snap to derive a formula that tells you how to estimate the natural direct and indirect effects from data and when it is permissible. Importantly, the formula makes no assumptions about the specific functional form of the relationship between X, M, and Y.”

“I called the new rule the Mediation Formula, though there are actually two formulas, one for the natural direct effect and one for the natural indirect effect. Subject to transparent assumptions, explicitly displayed in the graph, it tells you how they can be estimated from data.”

In a situation like the causal diagram for the Berkeley admission paradox, where there is no confounding between any of the variables and M is the mediator between treatment X and outcome Y, the natural indirect effect is:

NIE = Σm[P(M=m|X=1) – P(M=m|X=0)]×P(Y=1|X=0,M=m)

The interpretation of this formula is illuminating. The expression in brackets stands for the effect of X on M, and the following expression stands for the effect of M on Y when X=0. Note also that this equation has no subscripts and no do-operators, so it can be estimated from data alone.
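
Under these assumptions, the formula can be evaluated directly from observed frequencies. Here is a minimal sketch with invented admission numbers (X for gender, M for department, Y for admission; none of the figures are Hammel’s actual data):

```python
# P(M=m | X=x): department choice by gender (hypothetical numbers).
p_m_given_x = {
    (0, "easy"): 0.8, (0, "hard"): 0.2,   # X=0, e.g. men
    (1, "easy"): 0.3, (1, "hard"): 0.7,   # X=1, e.g. women
}
# P(Y=1 | X=0, M=m): baseline admission rates per department.
p_y_given_x0_m = {"easy": 0.6, "hard": 0.25}

# NIE = sum_m [P(M=m|X=1) - P(M=m|X=0)] * P(Y=1|X=0, M=m)
nie = sum(
    (p_m_given_x[(1, m)] - p_m_given_x[(0, m)]) * p_y_given_x0_m[m]
    for m in ("easy", "hard")
)
print(f"Natural indirect effect: {nie:+.3f}")  # negative: harder departments chosen
```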

 

Model Cards for Model Reporting

Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru

Conference on Fairness, Accountability, and Transparency, January, 2019

Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.

Model Card – Smiling Detection in Images

Model Details
• Developed by researchers at Google and the University of Toronto, 2018.
• Convolutional Neural Net.
• Pretrained for face recognition then fine-tuned with cross-entropy loss for binary smiling classification.

Intended Use
• Intended to be used for fun applications, such as creating cartoon smiles on real images; augmentative applications, such as providing details for people who are blind; or assisting applications such as automatically finding smiling photos.
• Particularly intended for younger audiences.
• Not suitable for emotion detection or determining affect; smiles were annotated based on physical appearance, and not underlying emotions.

Factors
• Based on known problems with computer vision face technology, potential relevant factors include groups for gender, age, race, and Fitzpatrick skin type; hardware factors of camera type and lens type; and environmental factors of lighting and humidity.
• Evaluation factors are gender and age group, as annotated in the publicly available dataset CelebA. Further relevant factors are not currently available in a public smiling dataset. Gender and age were determined by third-party annotators based on visual presentation, following a set of examples of male/female gender and young/old age.

Metrics
• Evaluation metrics include False Positive Rate and False Negative Rate to measure disproportionate model performance errors across subgroups. False Discovery Rate and False Omission Rate, which measure the fraction of negative (not smiling) and positive (smiling) predictions that are incorrectly predicted to be positive and negative, respectively, are also reported.
• Together, these four metrics provide values for different errors that can be calculated from the confusion matrix for binary classification systems.
• These also correspond to metrics in recent definitions of “fairness” in machine learning, where parity across subgroups for different metrics corresponds to different fairness criteria.
• 95% confidence intervals calculated with bootstrap resampling.
• All metrics reported at the .5 decision threshold, where all error types (FPR, FNR, FDR, FOR) are within the same range (0.04 – 0.14).
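
For reference, these four rates can be computed per evaluation subgroup from the confusion matrix, with bootstrap resampling for the intervals. A minimal sketch in Python with numpy (the labels and predictions below are synthetic stand-ins, not the CelebA evaluation):

```python
import numpy as np

def error_rates(y_true, y_pred):
    """FPR, FNR, FDR, FOR from binary labels and predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return {
        "FPR": fp / (fp + tn),   # not smiling, predicted smiling
        "FNR": fn / (fn + tp),   # smiling, predicted not smiling
        "FDR": fp / (fp + tp),   # fraction of positive predictions that are wrong
        "FOR": fn / (fn + tn),   # fraction of negative predictions that are wrong
    }

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, seed=0):
    """Percentile 95% confidence interval for one metric via bootstrap resampling."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [error_rates(y_true[i], y_pred[i])[metric]
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(stats, [2.5, 97.5])

# Synthetic example data standing in for one subgroup's evaluation set.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)  # ~10% label noise
print(error_rates(y_true, y_pred))
print("FPR 95% CI:", bootstrap_ci(y_true, y_pred, "FPR"))
```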

 

Peter Naur's ideas on datalogy

Peter Naur's ideas on datalogy have become eerily relevant

Elisa Nadire Caeli, PhD fellow in computational thinking and technology comprehension, DPU, Aarhus Universitet/Københavns Professionshøjskole

The past year has been marked by data scandals. Hardly anyone has been able to escape newspaper headlines about data surveillance, data leaks, and the resale of data, while the world's 'tech giants' (another of the year's keywords) have been in the spotlight time and again.

What explains this sharply rising attention?

In recent years it has increasingly dawned on us that many of the free digital services we all more or less depend on, and more or less voluntarily use, are not quite as free, and do not serve us quite as well, as perhaps assumed until now.

Several of them threaten our privacy, our freedom, and our democratic decision-making processes to an extent that no one can ignore any longer.

We have watched the scope of targeted content explode, from semi-innocent tailored advertising to the regulation and control of human behavior.

This unsettling development has, among other things, led researchers in the field to demand that the ways digital systems process data be laid open to everyone and explained to everyone, in plain human language.

While such protests are healthy, we have been rather slow to grasp the seriousness of the dangers. It happens after the fact, even though we have more or less always known that data can be collected and processed in both very good and very bad ways. For people and against people.

In fact, a number of Danish pioneers gave us one well-meant warning after another at the very dawn of datalogy roughly half a century ago. And one of them in particular can, in my view, inspire us today.

This piece is about Peter Naur's far-sighted ideas. About datalogy as a human activity. About data as a tool for people. As a predominantly good tool.

Datalogy and humanity

Peter Naur was born in 1928. He coined the term 'datalogi' (datalogy), co-founded the Department of Computer Science at the University of Copenhagen (DIKU), was Denmark's first professor of computer science, and is the only Dane to have won the ACM Turing Award.

As early as 1954, he soberly tried to wake the world up in the article 'Elektronregnemaskinerne og hjernen' ('The electronic computing machines and the brain'), prompted by the era's new 'electrical devices', which the media described under »remarkable labels such as, for example, electronic brains«.

Peter Naur's message was that the processes a machine carries out are merely the result of human plans.

»The machine quite mechanically performs the processes that a human brain has devised for it«, he wrote, and ended the article by concluding that he found the era's fear of machines that could think entirely unfounded. Instead, he feared that »the danger lurks, not in the machines that may be able to think, but in the people who cannot«.

That quote is eerily relevant to this day. Even then, Peter Naur understood that power over the system would come to rest with those who understand how it works.

In 1968, in an article titled 'Demokrati i datamatiseringens tidsalder' ('Democracy in the age of computerization'), he wrote that »this fact is the background for why many of us who are in close contact with the computers, and who reflect on their societal consequences, feel that at every favorable opportunity we must stress that an understanding of computer programming must be brought into general education and thus become common property«.

And that was precisely a point whose importance he persistently returned to.

That all people should develop an understanding of datalogy in order to be able to influence decision-making processes in the computerized systems of the future, in the same way that we learn language and mathematics in school as a necessary preparation for life, and not because we are all meant to become linguists or mathematicians afterwards.

Datalogy in general education

As convinced as Peter Naur was that it would in the long run be recognized that there exists a discipline of datalogy that must be part of general education, he was equally certain that, given the inertia of the education system, the necessary changes would take decades. And that prediction held true.

Admittedly, in the wake of Peter Naur's ideas, initiatives were launched for the school subject 'datalære' (data studies), which among other things aimed at pupils developing the competence to »assess and take a position on the possibilities, influences, and consequences that follow from the use of computers«, just as the teaching was to »give pupils the opportunity to experience and gain practice in problem solving«, as stated in a teaching guide from 1985, by which time the subject had finally made it onto the school timetable, though only as an elective.

The ball had actually been set rolling in 1972, when a committee appointed by the Ministry of Education recommended in a report that the subject be introduced in both primary school and teacher training. But for more or less unknown reasons, the far-sighted subject did not make it into the final school act, and over time it was replaced by PC driving licenses and lessons in touch typing, while municipalities spent millions buying equipment.

Some remarked that »any idiot can learn to press a few buttons«, and others warned that pupils would become »the computers' robots«.

Even so, half a century would pass before we got back on track with the ambitions of datalære.

A new and important datalogy subject with ambitious goals is now on its way into schools, for the time being as a three-year pilot. Today the subject is called 'teknologiforståelse' (technology comprehension), and as back then, its purpose is, among other things, for pupils to develop the competence to understand the possibilities and consequences of digital technologies, and to solve problems.

Putting the focus back on people

When we look at the inhumane ways in which many of the so-called tech giants treat our data today, Peter Naur's humane ideas about datalogy seem frightfully relevant and more pressing than ever.

Everyone in society should develop critical thinking and an understanding of the computational systems in the society we all share.

In 1983, three figures in the datalære field, Neel Eriksen, Winnie Grønsved, and Ib Lundgaard Rasmussen, put the point clearly:

»Everyone will be affected by the technological society, and everyone should have the knowledge to help make this society humane.«

Three other visionary figures, Carsten Fischer, Erik Frøkjær, and Lisbeth Gedsø, explained in a student textbook from 1972 how large computers and data registers had made it possible to combine a whole series of individual pieces of information about a single person, yielding a detailed picture of that person's behavior, a picture that could easily come to be used unfairly.

»If one does so, a risk is created that information currently regarded as private can be misused,« they warned.

In 2018, good uses of data drowned in headlines about bad ones.

About algorithms that discriminate.

About search engines that amplify racism.

About digital systems that monitor our every move, rewarding 'good' behavior and punishing 'bad'.

About targeted marketing that uses private data about us, and about others who resemble us, to manipulate us in every conceivable way: to vote in particular ways, to buy particular goods, to read particular news …

We need to stop and think.

Our history can inspire us to find our way back to the idea that datalogy is about people.

When Peter Naur, driving along Lyngbyvejen one day in April 1966, coined the term 'datalogi', it was a protest against the misleading American term 'computer science'. »Datalogy, the science of data and data processing, contains the human aspect. Data is about human understanding,« he said in his Turing Talk in 2005.

When, in 1969, he formulated a set of plans and ideas for the then-forthcoming DIKU, he made it clear that »data is a tool for people«, adding: »predominantly a good tool«. That stands in contrast to much of the controlling, behavior-regulating use we see today, where data processing, instead of setting us far more free, sets us far less free.

I wish for a 2019 in which none of us can escape newspaper headlines about humanity, cooperation, and transparency when we talk about the use and processing of our data.

It is not machines but we humans who set the direction of our society, and that is something we must all be able to take part in.

 

Robots learning concepts

Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs

Science Robotics  16 Jan 2019:
Vol. 4, Issue 26, eaav3150
DOI: 10.1126/scirobotics.aav3150

Abstract

Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams. If robots could represent and infer high-level concepts, then it would notably improve their ability to understand our intent and to transfer tasks between different environments. To that end, we introduce a computational framework that replicates aspects of human concept learning. Concepts are represented as programs on a computer architecture consisting of a visual perception system, working memory, and action controller. The instruction set of this cognitive computer has commands for parsing a visual scene, directing gaze and attention, imagining new objects, manipulating the contents of a visual working memory, and controlling arm movement. Inferring a concept corresponds to inducing a program that can transform the input to the output. Some concepts require the use of imagination and recursion. Previously learned concepts simplify the learning of subsequent, more elaborate concepts and create a hierarchy of abstractions. We demonstrate how a robot can use these abstractions to interpret novel concepts presented to it as schematic images and then apply those concepts in very different situations. By bringing cognitive science ideas on mental imagery, perceptual symbols, embodied cognition, and deictic mechanisms into the realm of machine learning, our work brings us closer to the goal of building robots that have interpretable representations and common sense.

 

Saturn and Jupiter

Missions expose surprising differences in the interiors of Saturn and Jupiter

By Paul Voosen

A clever use of radio signals from planetary spacecraft is allowing researchers to pierce the swirling clouds that hide the interiors of Jupiter and Saturn, where crushing pressure transforms matter into states unknown on Earth. The effort, led by Luciano Iess of Sapienza University in Rome, turned signals from two NASA probes, Cassini at Saturn and Juno at Jupiter, into probes of gravitational variations that originate deep inside these gas giants.

What the researchers have found is fueling a high-stakes game of compare and contrast. The results, published last year in Nature for Jupiter and this week in Science for Saturn, show that “the two planets are more complex than we thought,” says Ravit Helled, a planetary scientist at the University of Zurich in Switzerland. “Giant planets are not simple balls of hydrogen and helium.”

In the 1980s, Iess helped pioneer a radio instrument for Cassini that delivered an exceptionally clear signal because it worked in the Ka band, which is relatively free of noise from interplanetary plasma. By monitoring fluctuations in the signal, the team planned to search for gravitational waves from the cosmos and test general relativity during the spacecraft’s journey to Saturn, which began in 1997. Iess’s group put a similar device on Juno, which launched in 2011, but this time the aim was to study Jupiter’s interior.

Juno skims close to Jupiter’s surface every 53 days, and with each pass hidden influences inside the planet exert a minute pull on the spacecraft, resulting in tiny Doppler shifts in its radio signals. Initially, Iess and his team thought measuring those shifts wouldn’t be feasible at Saturn because of the gravitational influence of its rings. But that obstacle disappeared earlier this decade, after the Cassini team decided to end the mission by sending the craft on a series of orbits, dubbed the Grand Finale, that dipped below the rings and eliminated their effects. As a result, Iess and colleagues could use radio fluctuations to map the shape of gravity fields at both planets, allowing them to infer the density and movements of material deep inside.

One goal was to probe the roots of the powerful winds that whip clouds on the gas giants into distinct horizontal bands. Scientists assumed the winds would either be shallow, like winds on Earth, or very deep, penetrating tens of thousands of kilometers into the planets, where extreme pressure is expected to rip the electrons from hydrogen, turning it into a metallike conductor. The results for Jupiter were a puzzle: The 500-kilometer-per-hour winds aren’t shallow, but they reach just 3000 kilometers into the planet, some 4% of its radius. Saturn then delivered a different mystery: Despite its smaller volume, its surface winds, which top out at 1800 kilometers per hour, go three times deeper, to at least 9000 kilometers. “Everybody was caught by surprise,” Iess says.

Scientists think the explanation for both findings lies in the planets’ deep magnetic fields. At pressures of about 100,000 times that of Earth’s atmosphere—well short of those that create metallic hydrogen—hydrogen partially ionizes, turning it into a semiconductor. That allows the magnetic field to control the movement of the material, preventing it from crossing the field lines. “The magnetic field freezes the flow,” and the planet becomes rigid, says Yohai Kaspi, a planetary scientist at the Weizmann Institute of Science in Rehovot, Israel, who worked with Iess. Jupiter has three times Saturn’s mass, which causes a far more rapid increase in atmospheric pressure—about three times faster. “It’s basically the same result,” says Kaspi, but the rigidity sets in at a shallower depth.

The Juno and Cassini data yield only faint clues about greater depths. Scientists once believed the gas giants formed much like Earth, building up a rocky core before vacuuming gas from the protoplanetary disc. Such a stately process would have likely led to distinct layers, including a discrete core enriched in heavier elements. But Juno’s measurements, interpreted through models, suggested Jupiter’s core has only a fuzzy boundary, its heavy elements tapering off for up to half its radius. This suggests that rather than forming a rocky core and then adding gas, Jupiter might have taken shape from vaporized rock and gas right from the start, says Nadine Nettelmann, a planetary scientist at the University of Rostock in Germany.

The picture is still murkier for Saturn. Cassini data hint that its core could have a mass of some 15 to 18 times that of Earth, with a higher concentration of heavy elements than Jupiter’s, which could suggest a clearer boundary. But that interpretation is tentative, says David Stevenson, a planetary scientist at the California Institute of Technology in Pasadena and a co-investigator on Juno. What’s more, Cassini was tugged by something deep within Saturn that could not be explained by the winds, Iess says. “We call it the dark side of Saturn’s gravity.” Whatever is causing this tug, Stevenson adds, it’s not found on Jupiter. “It is a major result. I don’t think we understand it yet.”

Because Cassini’s mission ended with the Grand Finale, which culminated with the probe’s destruction in Saturn’s atmosphere, “There’s not going to be a better measurement anytime soon,” says Chris Mankovich, a planetary scientist at the University of California, Santa Cruz. But although the rings complicated the gravity measurements, they also offer an opportunity. For some unknown reason—perhaps its winds, perhaps the pull of its many moons—Saturn vibrates. The gravitational influence of those oscillations minutely warps the shape of its rings into a pattern like the spiraling arms of a galaxy. The result is a visible record of the vibrations, like the trace on a seismograph, which scientists can decipher to plumb the planet. Mankovich says it’s clear that some of these vibrations reach the deep interior, and he has already used “ring seismology” to estimate how fast Saturn’s interior rotates.

Cassini’s last gift may be to show how fortunate scientists are to have the rings as probes. Data from the spacecraft’s final orbits enabled Iess’s team to show the rings are low in mass, which means they must be young, as little as 10 million years old—otherwise, encroaching interplanetary soot would have darkened them. They continue to rain material onto Saturn, the Cassini team has found, which could one day lead to their demise. But for now they stand brilliant against the gas giant, with more stories to tell.

 

Jump-started life on Earth

How an ancient cataclysm may have jump-started life on Earth

By Robert F. Service

A cataclysm may have jump-started life on Earth. A new scenario suggests that some 4.47 billion years ago—a mere 60 million years after Earth took shape and 40 million years after the moon formed—a moon-size object sideswiped Earth and exploded into an orbiting cloud of molten iron and other debris.

The metallic hailstorm that ensued likely lasted years, if not centuries, ripping oxygen atoms from water molecules and leaving hydrogen behind. The oxygens were then free to link with iron, creating vast rust-colored deposits of iron oxide across our planet’s surface. The hydrogen formed a dense atmosphere that likely lasted 200 million years as it ever so slowly dissipated into space.

After things cooled down, simple organic molecules began to form under the blanket of hydrogen. Those molecules, some scientists think, eventually linked up to form RNA, a molecular player long credited as essential for life’s dawn. In short, the stage for life’s emergence was set almost as soon as our planet was born.

That scenario captivated participants at an October 2018 conference in Atlanta, where geologists, planetary scientists, chemists, and biologists compared notes on the latest thinking on how life got its start. No rocks or other direct evidence remain from the supposed cataclysm. Its starring role is inferred because it would solve a bevy of mysteries, says Steven Benner, an origin of life researcher at the Foundation for Applied Molecular Evolution in Alachua, Florida, who organized the Origins of Life Workshop.

The metal-laden rain accounts for the distribution of metals across our planet’s surface today. The hydrogen atmosphere would have favored the emergence of the simple organic molecules that later formed more complex molecules such as RNA. And the planetary crash pushes back the likely birthdate for RNA, and possibly life’s emergence, by hundreds of millions of years, which better aligns with recent geological evidence suggesting an early emergence of life.

The impact scenario joins new findings from laboratory experiments suggesting how the chemicals spawned on early Earth might have taken key steps along the road to life—steps that had long baffled researchers. Many in the field see a consistent narrative describing how and when life was born starting to take shape. “Fifteen years ago, we only had a few hazy ideas” about how life may have come about, says Andrej Lupták, a chemist at the University of California (UC), Irvine, who attended the meeting. “Now, we’re seeing more and more pieces come together.”

The case isn’t settled, Lupták and others say. Researchers still disagree, for example, over which chemical path most likely gave rise to RNA and how that RNA combined with proteins and fats to form the earliest cells. Nevertheless, Benner says, “The field is in a new place. There is no question.”

The RNA world

Life as we know it likely emerged from an “RNA world,” many researchers agree. In modern cells, DNA, RNA, and proteins play vital roles. DNA stores heritable information, RNA ferries it inside cells, and proteins serve as chemical workhorses. The production of each of those biomolecules requires the other two. Yet, the idea that all three complex molecules arose simultaneously seems implausible.

Since the 1960s, a leading school of thought has held that RNA arose first, with DNA and proteins evolving later. That’s because RNA can both serve as a genetic code and catalyze chemical reactions. In modern cells, RNA strands still work alongside proteins at the heart of many crucial cellular machines.

In recent years, chemists have sketched out reactions that could have produced essential building blocks for RNA and other compounds. In 2011, for example, Benner and his colleagues showed how boron-containing minerals could have catalyzed reactions of chemicals such as formaldehyde and glycolaldehyde, which were probably present on early Earth, to produce the sugar ribose, an essential component of RNA. Other researchers have laid out how ribose may have reacted with other compounds to give rise to individual RNA letters, or nucleosides.

But critics such as Robert Shapiro, a biochemist at New York University in New York City who died in 2011, often pointed out that when researchers produced one pre-RNA chemical component or another, they did so under controlled conditions, adding purified reagents in just the right sequence. How all those steps could have occurred in the chaotic environment of early Earth is unclear at best. “The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence,” Shapiro wrote in 2007 in Scientific American. He favored a “metabolism first” view of life’s origin, in which energetic small molecules trapped inside lipidlike membranes or other compartments established chemical cycles resembling metabolism, which transformed into more complex networks. Other researchers, meanwhile, have argued that simple proteins were a more likely driver of early life because their amino acid building blocks are far simpler than the nucleotides in RNA.

Arguments have sometimes been heated. At a 2008 meeting on the origin of life in Ventura, California, Shapiro and John Sutherland, a chemist at the University of Cambridge in the United Kingdom, wound up shouting at each other. “Bob was very critical about published routes to prebiotic molecules,” Sutherland says. If the chemistry wasn’t ironclad, “he felt it failed.”

Today, Benner says, “The amount of yelling has gone down.” A steady stream of new data has bolstered scenarios for how RNA could have arisen. For example, although Benner and his colleagues had previously shown how ribose may have formed, they could not explain how some of its ingredients—namely, the highly reactive small molecules formaldehyde, glycolaldehyde, and glyceraldehyde—could have survived. Geochemists have long thought that reactions sparked by lightning and ultraviolet (UV) light could have produced such compounds. However, Benner says, “There’s no way to build up a reservoir” of those compounds. They can react with one another, devolving into a tarlike glop.

Benner now has a possible solution, which builds on recent work suggesting early Earth had a wet-dry cycle. On the basis of evidence from tiny, almost indestructible mineral crystals called zircons, researchers think a modest amount of dry land was occasionally doused with rain. In a not-yet-published study, he and colleagues in the United States and Japan have found that sulfur dioxide, which would have belched from volcanoes on early Earth, reacts with formaldehyde to produce a compound called hydroxymethanesulfonate (HMS). During dry times, HMS would have accumulated on land “by the metric ton,” Benner says. The reverse reaction would have happened more slowly, regenerating formaldehyde. Then, when rains came, it could have washed in a steady trickle into puddles and lakes, where it could react to form other small organic molecules essential for building RNA. Similar processes, Benner says, could have provided a steady supply of glycolaldehyde and glyceraldehyde as well.

The sugar ribose is only one piece of RNA. The molecule also strings together four ring-shaped bases, which comprise the letters of the genetic code: cytosine (C), uracil (U), adenine (A), and guanine (G). Making them requires a supply of electron-rich nitrogen compounds, and identifying a plausible source for those has long challenged origin of life researchers. But other recent advances in prebiotic chemistry, which assume a supply of those compounds, have identified a set of reactions that could have produced all four of RNA’s genetic letters at the same time and place. In 2009, for example, Sutherland and his colleagues reported a plausible prebiotic reaction for making C and U, chemically related letters known as pyrimidines. Then, in 2016, a team led by chemist Thomas Carell from Ludwig Maximilian University in Munich, Germany, reported coming up with a plausible way to make A and G, known as purines. The trouble was that Sutherland’s and Carell’s routes to pyrimidines and purines required different reaction conditions, making it difficult to imagine how they could have taken place side by side.

At the workshop, Carell reported a possible solution. He and his colleagues found that simple compounds likely present on early Earth could react in several steps to produce pyrimidines. Nickel and other common metals trigger the last step in the sequence by swiping electrons from intermediate compounds, causing them to react with one another. It turns out that gaining electrons enables the metals to then carry out a final step in synthesizing purines. What’s more, those steps can produce all four nucleosides in one pot, thereby offering the first plausible explanation for how all four RNA letters could have arisen together.

Benner calls Carell’s solution very clever. But not everyone is on board. Sutherland notes that those reactions are inefficient; any nucleosides they produced might fall apart faster than they could accumulate. To address that concern, others argue that more stable RNA-like compounds, rather than RNA itself, might have emerged first and helped form the first chemical system that could reproduce itself. Later, those RNA mimics might have given way to more efficient modern biomolecules such as RNA.

Whichever route RNA’s letters took, other researchers have recently worked out how minerals likely present on early Earth could have added phosphate groups to RNA nucleosides, an essential step toward linking them into long strings of RNA that could then have acted as catalysts and a rudimentary genetic code. And many experiments have confirmed that once RNA chains begin to grow, they can swap RNA letters and even whole sections with other strands, building complexity, variation, and new chemical functions. At the meeting, for example, Niles Lehman, a chemist at Portland State University in Oregon, described experiments in which pairs of 16-letter-long RNA chains, known as 16-mers, rearranged to form 28-mers and 4-mers. “This is how we can go from short things that can be made prebiotically to more complex molecules,” Lehman said. Later, he quipped, “If you give me 8-mers, I’ll give you life.”

That process may help explain how more complex RNA molecules arose, including those that can propel the synthesis of simple proteins. At the meeting in Atlanta, chemist Ada Yonath presented one such prototypical protein-making RNA. Yonath, of the Weizmann Institute of Science in Rehovot, Israel, shared the 2009 Nobel Prize in Chemistry for working out the atomic structure of the ribosome, the complex molecular machine inside today’s cells that translates the genetic code into proteins. Yonath’s original structure was of a bacterium’s ribosome. Since then, she and her colleagues, along with other groups, have mapped the ribosomes of many other species. Modern ribosomes are behemoths, made up of dozens of protein and RNA components. But at their core, all ribosomes have a sinuous string of RNA with a narrow slit through which budding proteins emerge. The structure is virtually identical across species, unchanged after billions of years of evolution.

Her group has now synthesized that ribosomal core, which she refers to as the protoribosome. At the meeting, she reported that her team’s protoribosome can stitch together pairs of amino acids, the building blocks of proteins. “I think we’re seeing back to how life began billions of years ago,” Yonath says.

All that is still a long way from demonstrating the emergence of life in a test tube. Nevertheless, Clemens Richert, a chemist at the Institute of Organic Chemistry at the University of Stuttgart in Germany, says the recent progress has been heartening. “We’re finding reactions that work,” he says. “But there are still gaps to get from the elements to functional biomolecules.”

Earth’s mysteries

One major gap is identifying a source for the energetic nitrogen-containing molecules needed to make the RNA bases. Lightning and UV light acting on compounds in the atmosphere may have made enough of them, says Jack Szostak, an origin of life expert at Harvard University. At the meeting, Stephen Mojzsis, a geologist at the University of Colorado in Boulder, argued that the moon-size impact is a more plausible spark.

Mojzsis didn’t set out to grapple with the origin of life. Rather, he and his colleagues were looking for ways to make sense of a decades-old geological conundrum: the surprising abundance of platinum and related metals in Earth’s crust. In the standard picture of Earth’s formation, they simply shouldn’t be there. The violent assembly of the planet from smaller bodies 4.53 billion years ago would have left it as a boiling sea of magma for millions of years. Dense elements, such as iron, gold, platinum, and palladium, should have sunk to the planet’s core, whereas silicon and other light elements floated nearer the surface. Yet as the wares in any jewelry store testify, those metals remain plentiful near the planet’s surface. “Precious metals in the crust are thousands of times more abundant than they should be,” Mojzsis says.

The long-standing explanation has been that after Earth cooled enough to form a crust, additional metals arrived in a hail of meteors. On the basis of ages of moon rocks brought back by Apollo astronauts, geologists suspected this assault was particularly intense from 3.8 billion to 4.1 billion years ago, a period they refer to as the Late Heavy Bombardment (LHB).

But that scenario has problems, Benner says. For starters, fossil evidence of complex microbial mats called stromatolites shows up in rocks just a few hundred million years younger than the hypothetical bombardment. That’s a narrow window in which to move from zero organic molecules to full-blown cellular life.


This 4.1-billion-year-old zircon mineral (x-ray image) contains carbon isotopes suggestive of life. (Image: Crystal Shi)

Zircons—those tiny, durable crystals—also pose a challenge, says Elizabeth Bell, a geologist at UC Los Angeles. Zircons are hardy enough to have remained intact even as the rocks that originally housed them melted while cycling into and out of the planet’s interior.

In 2015, Bell and her colleagues reported in the Proceedings of the National Academy of Sciences that zircons dated to 4.1 billion years ago contain flecks of graphitic carbon with a lifelike combination of carbon isotopes—biased toward carbon’s lighter isotope over its heavier one. Bell concedes that an as-yet-unknown nonbiological process might account for that isotope mix, but she says it suggests life was already widespread 4.1 billion years ago, before the end of the LHB. Other recent zircon data, including samples from as long ago as 4.32 billion years, hint that very early Earth had both liquid water and dry land, suggesting it was more hospitable to life than originally thought. “We’re pushing back further and further the time when life could have been formed on Earth,” Bell says.

Collision course

Mojzsis argues that a moon-size cataclysm 4.47 billion years ago could explain both Earth’s veneer of precious metals and an early start for life. In December 2017, he and two colleagues published a set of extensive computer simulations in Earth and Planetary Science Letters showing how the current distribution of metals could have originated in the rain of debris from such an impact. Simone Marchi, a planetary scientist at the Southwest Research Institute in Boulder, and colleagues reached much the same conclusion in a paper the same month in Nature Geoscience. Marchi’s team, however, simulated not one moon-size impactor, but several smaller bodies, each about 1000 kilometers across.

Whether one impact or a few, those collisions would have melted Earth’s silicate crust, an event that appears to be recorded in data on isotopes of uranium and lead, according to Mojzsis. The collisions also would have profoundly affected Earth’s early atmosphere. Before the impact, the cooling magma and rocks on the surface would have spurted out gases, such as carbon dioxide, nitrogen, and sulfur dioxide. None of those gases is reactive enough to produce the organic compounds needed to make RNA. But Benner notes the blanket of hydrogen generated by the impact’s metallic hail would have formed exactly the kind of chemically reducing atmosphere needed to produce the early organics. Robert Hazen, a geologist at the Carnegie Institution’s Geophysical Laboratory in Washington, D.C., agrees that hydrogen could help. With that reducing atmosphere, the wide array of minerals on the planet’s surface could have acted as catalysts to propel the chemical reactions needed to make simple organics, Hazen says.

Just before the impact, Mojzsis says, “there was no persistent niche for the origin of life.” But after the impact and a brief period of cooling, he adds, “at 4.4 billion years ago, there are settled niches for the propagation of life.”

“I’m delighted,” Benner says. “Steve [Mojzsis] is giving us everything we need” to seed the world with prebiotic chemicals. And by eliminating the need for the LHB, the impact scenario implies organic molecules, and possibly RNA and life, could have originated several hundred million years earlier than thought. That would allow plenty of time for complex cellular life to evolve by the time it shows up in the fossil record at 3.43 billion years ago.

Enduring enigmas

Not everybody accepts that tidy picture. Even if geologists’ new view of early Earth is correct, the RNA world hypothesis remains flawed, says Loren Williams, a physical chemist at the Georgia Institute of Technology in Atlanta and an RNA world critic who attended the workshop. “I like talking to Steve Benner,” Williams says. “But I don’t agree with him.”

One major problem with the RNA world, he says, is that it requires a disappearing act. An RNA molecule capable of faithfully copying other RNAs must have arisen early, yet it has vanished. “There’s no evidence for such a thing in modern biology,” Williams says, whereas other vestiges of ancient RNA machines abound. The ribosome’s RNA core, for example, is virtually unchanged in every life form on the planet. “When biology makes something, it gets taken and used over and over,” Williams notes. Instead of an RNA molecule that can copy its brethren, he says, it’s more likely that early RNAs and protein fragments called peptides coevolved, helping each other multiply more efficiently.

Advocates of the RNA world hypothesis concede they can’t explain how early RNA might have copied itself. “An important ingredient is still missing,” Carell says. Researchers around the globe have designed RNA-based RNA copiers in the lab. But those are long, complex molecules, made from 90 or more RNA bases. And the copiers tend to copy some RNA letters better than others.

Still, enough steps of an RNA-first scenario have come into focus to convince advocates that others will follow. “We are running a thought experiment,” says Matthew Powner, a chemist at University College London. “All we can do is decide what we think is the simplest trajectory.”

That thought experiment was on full display in the workshop’s final session. Ramon Brasser of the Tokyo Institute of Technology, one of Mojzsis’s collaborators, stood at the front of a small conference room and drew a timeline of Earth’s earliest days. A red slash at 4.53 billion years ago on the left side of Brasser’s flip chart marked Earth’s initial accretion. Another slash at 4.51 billion years ago indicated the moon’s formation. A line at 4.47 billion years ago marked the hypothetical impact of the planetesimal that gave rise to an atmosphere favorable to organic molecules.

Benner asked Brasser how long Earth’s surface would have taken to cool below 100°C after the impact, allowing liquid water to host the first organic chemical reactions. Probably 50 million years, Brasser said. Excited, Benner rushed up to the timeline and pointed to a spot at 4.35 billion years ago, adding a cushion of extra time. “That’s it, then!” Benner exclaimed. “Now we know exactly when RNA emerged. It’s there—give or take a few million years.”
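The arithmetic behind that exchange is worth spelling out (a back-of-the-envelope reading of the timeline, not figures stated at the workshop):

\[
4.47\ \text{Ga} - 0.05\ \text{Ga} \approx 4.42\ \text{Ga}, \qquad 4.42\ \text{Ga} - 4.35\ \text{Ga} = 0.07\ \text{Ga}
\]

An impact at 4.47 billion years ago plus roughly 50 million years of cooling gives liquid water by about 4.42 billion years ago, so Benner’s mark at 4.35 billion years ago builds in a cushion of some 70 million years.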

 

Hayabusa2 faces touchdown

Japan’s asteroid mission faces ‘breathtaking’ touchdown

By Dennis Normile |

Japan’s Hayabusa mission made history in 2010 when it brought back to Earth the first samples ever collected on an asteroid. But the 7-year, 4-billion-kilometer odyssey was marked by degraded solar panels, innumerable mechanical failures, and a fuel explosion that knocked the spacecraft into a tumble and cut communications with ground control for 2 months. When planning its encore, Hayabusa2, Japan’s scientists and engineers were determined to avoid such drama. They made components more robust, enhanced communications capabilities, and thoroughly tested new technologies.

But the target asteroid, Ryugu, had fresh surprises in store. “By looking at the details of every asteroid ever studied, we had expected to find at least some wide flat area suitable for a landing,” says Yuichi Tsuda, Hayabusa2’s project manager at the Japan Aerospace Exploration Agency’s Institute of Space and Astronautical Science (ISAS), which is headquartered in Sagamihara. Instead, when the spacecraft reached Ryugu in June 2018, at 290 million kilometers from Earth, it found a craggy, cratered, boulder-strewn surface that makes landing a daunting challenge. The first sampling touchdown, scheduled for October, was postponed until at least the end of this month, and at a symposium in Sagamihara on 21 and 22 December, ISAS engineers presented an audacious new plan to make a pinpoint landing between closely spaced boulders. “It’s breathtaking,” says Bruce Damer, an origins of life researcher at the University of California, Santa Cruz.

Yet almost everything else has gone according to plan since Hayabusa2 was launched in December 2014. Its cameras and detectors have already provided clues to the asteroid’s mass, density, and mineral and elemental composition, and three rovers dropped on the asteroid have examined the surface. At the symposium, ISAS researchers presented early results, including evidence of an abundance of organic material and hints that the asteroid’s parent body once held water. Those findings “add to the evidence that asteroids rather than comets brought water and organic materials to Earth,” says project scientist Seiichiro Watanabe of Nagoya University in Japan.

Ryugu is 1 kilometer across and 900 meters top to bottom, with a notable bulge around the equator, like a diamond. Visible light observations and computer modeling suggest it’s a porous pile of rubble that likely agglomerated from dust, rocks, and boulders after another asteroid or planetesimal slammed into its parent body during the early days of the solar system. Ryugu spins around its own axis once every 7.6 hours, but simulations suggest that during the early phase of its formation, it had a rotation period of only 3.5 hours. That probably produced the bulge, by causing surface landslides or pushing material outward from the core, Watanabe says. Analyzing surface material from the equator in an Earth-based laboratory could offer support for one of those scenarios, he adds. If the sample has been exposed to space weathering for a long time, it was likely moved there by landslides; if it is relatively fresh, it probably migrated from the asteroid’s interior.
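A rough force balance shows why a 3.5-hour spin could have reshaped the asteroid. This is a back-of-the-envelope sketch: the mass, about 4.5 × 10¹¹ kilograms, is a published estimate rather than a figure from the symposium, and the 500-meter radius simply halves the quoted 1-kilometer width. Surface gravity at the equator is then

\[
g = \frac{GM}{r^2} \approx \frac{(6.67\times10^{-11})(4.5\times10^{11})}{(500)^2}\ \mathrm{m\,s^{-2}} \approx 1.2\times10^{-4}\ \mathrm{m\,s^{-2}},
\]

while the centrifugal acceleration at a 3.5-hour rotation period is

\[
\omega^2 r = \left(\frac{2\pi}{3.5 \times 3600\ \mathrm{s}}\right)^{2} \times 500\ \mathrm{m} \approx 1.2\times10^{-4}\ \mathrm{m\,s^{-2}}.
\]

The two nearly cancel, so loose rubble at the equator would have been barely bound and free to slide or drift outward, building the bulge; at today’s 7.6-hour period the centrifugal term is only about a fifth of gravity.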

So far, Hayabusa2 has not detected water on or near Ryugu’s surface. But its infrared spectrometer has found signs of hydroxyl-bearing minerals that suggest water once existed either on the parent body or on the asteroid, says Mutsumi Komatsu, a planetary materials scientist at the Graduate University for Advanced Studies in Hayama, Japan. The asteroid’s high porosity also suggests it once harbored significant amounts of water or ice and other volatile compounds that later escaped, Watanabe says. Asteroids such as Ryugu are rich in carbon as well, and they may have been responsible for bringing both water and carbon, life’s key building block, to a rocky Earth early in its history. (Comets, by contrast, are just 3% to 5% carbon.)

Support for that delivery scenario comes from another asteroid sample return mission now in progress. Early last month, NASA’s OSIRIS-REx reached the asteroid Bennu, which is also shaped like a spinning top and, the U.S. space agency has reported, has water trapped in its soil. “We’re lucky to be able to conduct comparative studies of these two asteroid brothers,” Watanabe says.

Geologist Stephen Mojzsis of the University of Colorado in Boulder is not convinced such asteroids will prove to be the source of Earth’s water; there are other theories, he says, including the possibility that a giant Jupiter-like gaseous planet migrated from the outer to the inner solar system, bringing water and other molecules with it around the time Earth was formed. Still, findings on Ryugu’s shape and composition “scientifically, could be very important,” he says.

Some new details come from up-close looks at the asteroid’s surface. On 21 September, Hayabusa2 dropped a pair of rovers, each the size of a birthday cake, named Minerva-II1A and -II1B, on Ryugu’s northern hemisphere. Taking advantage of the asteroid’s low gravity, the rovers hop autonomously, taking pictures that have revealed “microscopic features of the surface,” Tsuda says. And on 5 October, Hayabusa2 released a rover developed by the German and French space agencies that analyzed soil samples in situ and returned additional pictures.


A close-up from Hayabusa2 shows a surface strewn with boulders.
JAXA, UNIVERSITY OF TOKYO, KOCHI UNIVERSITY, RIKKYO UNIVERSITY, NAGOYA UNIVERSITY, CHIBA INSTITUTE OF TECHNOLOGY, MEIJI UNIVERSITY, UNIVERSITY OF AIZU, AIST

The ultimate objective, to bring asteroid samples back to Earth, will allow lab studies that can reveal much more about the asteroid’s age and content. ISAS engineers programmed the craft to perform autonomous landings, anticipating safe touchdown zones at least 100 meters in diameter. Instead, the biggest safe area within the first landing zone turned out to be just 12 meters wide.

That will complicate what was already a nail-biting operation. Before each landing, the plan calls for Hayabusa2 to drop a small sphere sheathed in a highly reflective material onto the surface as a target marker, to ensure the craft moves in sync with the asteroid’s rotation. Gravity then pulls the craft down gently until a collection horn extending from its underside makes contact with the asteroid; after a bulletlike projectile is fired into the surface, soil and rock fragments hopefully ricochet into a catcher within the horn. For safety, the craft has to steer clear of rocks larger than 70 centimeters.

During a rehearsal in late October, Hayabusa2 released a target marker above the 12-meter safe circle; unfortunately, it came to rest more than 10 meters outside the zone. But it is just 2.9 meters away from the edge of a second possible landing site that’s 6 meters in diameter. Engineers now plan to have the craft first hover above the target marker and then move laterally to be above the center of one of the two sites. Because the navigation camera points straight down, the target marker will be outside the camera’s field of view as Hayabusa2 descends, leaving the craft to navigate on its own.

“We are now in the process of selecting which landing site” to aim for, says Fuyuto Terui, who is in charge of mission guidance, navigation, and control. Aiming at the smaller zone means Hayabusa2 can keep the target marker in sight until the craft is close to the surface; the bigger zone gives more leeway for error, but the craft will lose its view of the marker earlier in the descent.

Assuming the craft survives the first landing, plans call for Hayabusa2 to blast a 2-meter-deep crater into Ryugu’s surface at another site a few months later, by hitting it with a 2-kilogram copper projectile. This is expected to expose subsurface material for observations by the craft’s cameras and sensors; the spacecraft may collect some material from the crater as well, using the same horn device. There could be a third touchdown, elsewhere on the asteroid. If all goes well, Hayabusa2 will make it back to Earth with its treasures in 2020.

 

China moon mission

China moon mission: probe makes historic landing on far side of moon in important step for country’s space programme

A Chinese spacecraft has made the first successful landing ever on the far side of the moon, a mission seen as an important step as the country looks to push forward its space programme.

The lunar explorer Chang’e 4 touched down at 10:26 am and relayed a photo of the “dark side” of the moon to the Queqiao satellite, the official China Central Television reported on Thursday.

The moon is tidally locked to Earth, rotating at the same rate that it orbits our planet, so the far side – or the “dark side” – is never visible from Earth. Previous spacecraft have seen the far side of the moon, but none has landed on it.

The landing “lifted the mysterious veil” from the far side of the moon, and “opened a new chapter in human lunar exploration”, the broadcaster said.

China launched the Chang’e-4 probe last month, carried by a Long March-3B rocket. It includes a lander and a rover to explore the surface of the moon.

“The far side of the moon is a rare quiet place that is free from interference of radio signals from Earth,” mission spokesman Yu Guobin said, according to Xinhua. “This probe can fill the gap of low-frequency observation in radio astronomy and will provide important information for studying the origin of stars and nebula evolution.”

Unlike the near side of the moon, which offers many flat areas to touch down on, the far side is mountainous and rugged.

The tasks of Chang’e-4 include astronomical observation, surveying the moon’s terrain, landform and mineral composition, and measuring neutron radiation and neutral atoms to study the environment on the far side of the moon.

China aims to catch up with Russia and the United States to become a major space power by 2030. It is planning to launch construction of its own manned space station next year.

However, while China has insisted its ambitions are purely peaceful, the US Defence Department has accused it of pursuing activities aimed at preventing other nations from using space-based assets during a crisis.

Apart from its civilian ambitions, Beijing has tested anti-satellite missiles and the US Congress has banned Nasa from bilateral cooperation with its Chinese counterpart due to security concerns.

The United States is so far the only country to have landed humans on the moon. US President Donald Trump said in 2017 he wants to return astronauts to the lunar surface and establish a foundation there for an eventual mission to Mars.

It was not until 1959 that the Soviet Union captured the first images of the moon’s mysterious and heavily cratered “dark side”.

No lander or rover has ever previously touched the surface there, and it is no easy technological feat – China has been preparing for this moment for years.

A major challenge for such a mission was communicating with the robotic lander, as there is no direct “line of sight” for signals to the far side of the moon.

As a solution, China in May blasted the Queqiao (“Magpie Bridge”) satellite into space, positioning it near a Lagrange point beyond the far side of the moon so that it can relay data and commands between the lander and Earth.

A Lagrange point is a location in space where the combined gravitational forces of two large bodies, such as Earth and the sun or Earth and the moon, equal the centrifugal force felt by a much smaller third body. The interaction of the forces creates a point of equilibrium where a spacecraft may be “parked” to make observations.
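For the Earth-moon L2 point from which Queqiao works, that balance can be written out explicitly; the relation below is the standard textbook condition, with R the Earth-moon distance and r the satellite’s distance beyond the moon (neither symbol appears in the original report):

\[
\frac{GM_{\text{Earth}}}{(R+r)^2} + \frac{GM_{\text{moon}}}{r^2} = \omega^2 (R+r), \qquad \omega^2 = \frac{G(M_{\text{Earth}} + M_{\text{moon}})}{R^3}.
\]

Solving numerically places L2 roughly 65,000 kilometers beyond the moon; Queqiao in fact flies a halo orbit around that point, so it can keep both the lander and Earth in view at all times.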

The mission faces another extreme hurdle: during the lunar night, which lasts 14 Earth days, temperatures drop to as low as minus 173 degrees Celsius (minus 279 Fahrenheit).

During the lunar day, also lasting 14 Earth days, temperatures soar as high as 127 C (261 F).

The rover’s instruments have to withstand those fluctuations, and the rover has to generate enough energy to sustain itself during the long night.

The pioneering landing demonstrates China’s growing ambitions as a space power.

In 2013, Chang’e 3 was the first spacecraft to land on the moon since the Soviet Union’s Luna 24 in 1976.

China plans to send its Chang’e 5 probe to the moon next year and have it return to Earth with samples – the first time that will have been done since 1976.

Ultima Thule: A snowman

The New Horizons space probe has sent back its first sharp image of the object Ultima Thule at the outer edge of our solar system

On New Year’s morning, the Nasa space probe New Horizons flew past the object Ultima Thule, a small ice-covered body in the outermost part of our solar system, the Kuiper belt.

Never before has a probe examined an object so far from our own planet. Because of the distance, it is also only now that we are seeing the first reasonably sharp images of Ultima Thule taken by New Horizons.

Nasa has just presented the images, along with a few preliminary scientific results, at a press conference.

From the images, Nasa’s scientists happily conclude that Ultima Thule looks like a snowman, not a bowling pin or a peanut, which had been the first guesses based on the grainy image the scientists were able to show on Tuesday.

Ultima Thule has both a body and a head: two objects that at one point circled close to each other before they collided and merged into a single object.

Besides the shape, the scientists have also learned more about the color.

“We can definitively say that Ultima Thule is red,” says Cathy Olkin, a planetary scientist at Nasa.

The scientists also report that Ultima Thule has a rotation period of 15 hours, plus or minus an hour.

Better images and more details to come

New Horizons flew past Ultima Thule at a distance of about 3,500 kilometers. By spaceflight standards, that is quite close.

On Tuesday evening, the first very grainy images of Ultima Thule (taken from a distance of roughly one million kilometers) suggested that the object was about 35 kilometers long and about 14 kilometers wide, and possibly shaped like a bowling pin.

But that has now been revised: the object is instead a snowman just under 33 kilometers long.

The first pixels sent home (left) suggested that Ultima Thule was shaped like a bowling pin. But with the words “Now we can see it’s a snowman,” Nasa on Tuesday evening presented far better images of the object (right). (© Nasa)

The journey continues

New Horizons began its journey back in 2006. The following year, the probe flew past Jupiter, and in July 2015 it reached the dwarf planet Pluto, which also resides in the Kuiper belt.

The probe’s purpose has been precisely to explore Pluto, its moons, and objects in the Kuiper belt. Nasa hopes that New Horizons can keep studying objects in the enormous Kuiper belt until sometime in the 2020s.

Nasa expects that, in the coming period, images will arrive showing Ultima Thule at up to four times better resolution, which should reveal far more detail about the object.

A far more detailed description can be found by following this link:

(486958) 2014 MU69

(486958) 2014 MU69, nicknamed Ultima Thule, is a trans-Neptunian object located in the Kuiper belt. It is a contact binary, with estimated dimensions of 31 by 19 km (19 by 12 mi), consisting of a larger body (“Ultima”) three times the volume of the smaller (“Thule”). With an orbital period of 298 years and a low inclination and eccentricity, it is classified as a classical Kuiper belt object. With the New Horizons space probe’s flyby on 1 January 2019, 2014 MU69 became the farthest object in the Solar System visited by a spacecraft, and is believed to be the most primitive, both bodies being planetesimal aggregates of much smaller building blocks.
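The quoted 298-year period fixes the object’s mean distance from the sun through Kepler’s third law; with the period in years and the semi-major axis in astronomical units, the distance below is derived here rather than taken from the excerpt:

\[
a = P^{2/3} = 298^{2/3} \approx 44.6\ \text{AU}.
\]

In other words, 2014 MU69 orbits roughly 45 times farther from the sun than Earth does, squarely within the classical Kuiper belt.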

A corresponding description, originally in Danish, is found here:

(486958) 2014 MU69

(486958) 2014 MU69, named Ultima Thule by the team behind the New Horizons space probe, is a trans-Neptunian object in the Kuiper belt, which lies in the outermost region of the Solar System. The object was discovered on 26 June 2014 by astronomers using the Hubble Space Telescope. Ultima Thule is a classical Kuiper belt object consisting of two spherical bodies stuck together, with diameters of 19 and 14 kilometers respectively: a contact-binary ice dwarf.