Hayabusa2 deploys MINERVA-II1

Japan’s asteroid hoppers deliver new batch of incredible images

Jason Davis, September 27, 2018

A fresh batch of incredible images from Japan’s Hayabusa2 mission has arrived on Earth, revealing asteroid Ryugu’s rocky surface in even finer detail.

Last Friday, the Hayabusa2 spacecraft dropped a pair of hopping, drum-shaped rovers onto the surface from a height of about 60 meters. The 18-centimeter-wide probes, collectively called MINERVA-II1, can lift themselves off the surface for several minutes at a time using internal spinning motors. Both rovers captured images during their descent, and one rover grabbed a picture mid-hop.
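A quick back-of-the-envelope check of that “several minutes” figure: in Ryugu’s feeble gravity, even a gentle push keeps a rover aloft for a long time. The sketch below is a minimal ballistic estimate in Python; the surface gravity and lift-off speed are assumed round numbers for illustration, not mission data.

# Minimal ballistic estimate of a MINERVA-II1 hop (assumed numbers, not JAXA data).
g_ryugu = 1.5e-4   # m/s^2, assumed surface gravity of Ryugu (order of magnitude)
v_liftoff = 0.05   # m/s, assumed vertical lift-off speed

# Time aloft for a simple vertical hop, up and back down: t = 2*v/g.
t_aloft = 2 * v_liftoff / g_ryugu
print(f"Time aloft: {t_aloft:.0f} s (about {t_aloft / 60:.0f} minutes)")  # ~11 minutes

With numbers in this range a single hop lasts on the order of ten minutes, consistent with the mission’s description.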

Now there are new images. On Thursday, Japan’s space agency, JAXA, confirmed both rovers are hopping as designed, and released a treasure trove of pictures from the probes as they tumble around Ryugu. The snapshots show the asteroid’s surface as a loose pile of gravel strewn with larger rocks and boulders.

JAXA also released a high-resolution image from the Hayabusa2 spacecraft itself, hovering above the shadowed edge of a boulder several meters wide.

Ryugu’s surface will prove challenging for the mission’s ultimate goal of collecting a sample for return to Earth in 2020. The latest JAXA press release lists that sample attempt happening in late October, with a rehearsal planned in the middle of the month. A similar touchdown rehearsal in mid-September was cancelled after the spacecraft had trouble detecting reflections from Ryugu’s dark surface.

Hayabusa2 is also scheduled to release a lander called MASCOT on Wednesday, October 3.

Hayabusa2 stops short of close approach on first touchdown rehearsal

Emily Lakdawalla, September 13, 2018

This is why people do rehearsals. Hayabusa2 didn’t quite make it down to its intended 60-meter distance from asteroid Ryugu yesterday. The “touchdown 1 rehearsal 1” operation aborted at an altitude of about 600 meters after the laser altimeter had trouble detecting reflections from Ryugu’s very dark surface. There is nothing wrong with the spacecraft; it’s healthy and returning to its home position of 20 kilometers altitude. The team will adjust parameters and give it another try in the future. In the meantime, they grabbed some cool photos from distances under 1000 meters. The last several optical navigation photos, shared on the Web in real time, actually showed the shadow of Hayabusa2 on the surface of the asteroid.

 

Spock’s home discovered?

Spock’s home world has been discovered (sort of)

Gene Roddenberry, the creator of Star Trek, was visionary in many ways: the Enterprise crew’s communicators presaged today’s smartphones, Bones’s sickbay is mirrored in modern medical scanners, and, well, we’re still working on that transporter. Now, it seems he also accurately predicted a location for science officer Spock’s home planet, Vulcan.

The magazine Sky & Telescope reports this week that back in 1991, Roddenberry and three astronomers from the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, declared in a letter to the magazine that Vulcan most likely would orbit the star 40 Eridani A. Although Vulcan’s location was never specified in the original TV series or the later feature films, a number of stars had been put forward by Trekkies as its likely locale. Roddenberry and his co-authors argued that 40 Eridani A was the most likely because, at 4 billion years old, an orbiting planet would have had long enough to evolve a superlogical being such as Spock.

Now, astronomers have found that 40 Eridani A, an orange dwarf star 16 light-years from Earth, does indeed have a planet. The Dharma Planet Survey, which is looking for low-mass planets around bright nearby stars, reports in a paper due to appear in the Monthly Notices of the Royal Astronomical Society that the putative “Vulcan”—officially known as HD 26965b (and shown above in an artist’s illustration)—is eight times the mass of Earth. That means it will have high gravity, probably too high to support any sort of alien life. It also orbits close enough to its star to be very hot. But then, Spock was always known to keep a cool head when the pressure starts to climb.

 

There will be no Singularity

Author: The Singularity is bad science fiction without the science

A taxi ride with technology commentator and author Bruce Sterling leads to the death of a beloved law of technology. A death with consequences for the Singularity and for Denmark.

Dan Mygind redaktion@ing.dk

»There will be no Singularity. It is a delusion.«

The words come from technology commentator, Wired writer, and science fiction author Bruce Sterling, who is wedged between his wife, Jasmina, and Version2’s correspondent in the back seat of a taxi headed for Copenhagen Airport.

Shortly before, Bruce Sterling had delivered the closing keynote at Techfestival in Copenhagen, and he and his wife are now on their way to catch a flight to Estonia, where other tech-minded audiences will hear his thought-provoking perspective on technology.

The Singularity gets postponed

When the author who, together with William Gibson, founded the cyberpunk genre calls the Singularity narrative »bad science fiction without the science«, it is worth paying attention.

The term Singularity was introduced by the American computer science professor and science fiction author Vernor Vinge in the 1993 essay ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’.

In it, Vinge predicted that 30 years into the future, in the year 2023, humanity would have the technology to create superhuman intelligence, and that shortly afterwards the human era would be over.

That is five years from now.

The essay’s message fell on fertile ground in Silicon Valley, and in 2008 the think tank Singularity University was founded by Ray Kurzweil and Peter Diamandis. Kurzweil in particular has become a standard-bearer for the Singularity movement.

He is employed at Google as a director of engineering and published the book ‘The Singularity Is Near – When Humans Transcend Biology’ in 2005. In it, he describes how exponential growth in information technology, biotechnology, nanotechnology, and other technologies will create a future in which humans break free of the limits set by our brains, bodies, and biological nature.

Kurzweil does, however, add a few extra years to Vinge’s prediction. According to him, the Singularity will arrive in the year 2045.

Moore’s law is dead.

But according to Bruce Sterling, the idea of exponential growth and innovation, which is the premise of the Singularity, has no footing in technical reality. Take, for example, the development of microprocessors.

Moore’s law of microprocessor development says that the number of components on a microchip doubles every two years. In recent years, however, the law has been suspended, because chip manufacturers simply cannot shrink chip components fast enough.

They are quite simply running into limits set by the physical laws of nature.

In the production of the newest chips, where the individual chip features are just 10-14 nanometers big (or rather, small), light with a wavelength of 193 nanometers is used.

Manufacturing tricks exist that make this sort of thing possible, but the process is complicated, expensive, and a source of delays. Intel’s upcoming Cannonlake chip, based on 10-nanometer technology, was originally scheduled to launch in 2016, for example.

The latest word is now late 2019.

Meanwhile, quantum mechanical trouble looms on the horizon. At 2-3 nanometer technology, a single transistor consists of just 10-15 atoms, which introduces quantum mechanical uncertainties that effectively make the components unreliable.
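To make the two claims above concrete, here is a minimal sketch in Python of the arithmetic: the doubling law as a simple formula, and a rough atom count at a 2-3 nanometer feature size. The baseline figures and the atomic spacing are assumed round values, meant only to show orders of magnitude.

# Sketch of the arithmetic behind Moore's law and the atomic limit (assumed round numbers).

def components(year, base_year=1971, base_count=2300):
    # Moore's law: the component count doubles every two years.
    # Baseline: the Intel 4004 of 1971, with roughly 2,300 transistors.
    return base_count * 2 ** ((year - base_year) / 2)

print(f"Predicted components in 2018: {components(2018):.1e}")  # ~3e10, roughly today's chips

# Atoms across a transistor feature at the 2-3 nm node.
atom_spacing_nm = 0.25  # assumed, roughly the Si-Si distance in a silicon crystal
for feature_nm in (2, 3):
    print(f"{feature_nm} nm feature: about {feature_nm / atom_spacing_nm:.0f} atoms across")

The second loop reproduces the 10-15 atom figure quoted above: at those sizes there is simply no room left to shrink.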

»Moore’s law is dead,« Bruce Sterling summarizes laconically.

Danish Singularity enthusiasm

This is not just about philosophical questions, such as whether or not exponential growth is possible.

Singularity thinking has taken a firm hold among the Danish decision-makers who help set the course for digital development in Denmark.

Singularity University established its Nordic headquarters in Copenhagen the year before last – Let’s Make the Nordics Exponential, as SingularityU Nordic’s website puts it.

In the press release announcing the opening, then business minister Brian Mikkelsen was quoted as saying that Singularity University’s presence in Denmark would give Danish companies the chance to stay at the forefront of technological development.

Since then, SingularityU Nordic has held well-attended conferences and courses, with a range of decision-makers from public authorities and private companies among the participants.

Today, Brian Mikkelsen is CEO of the Danish Chamber of Commerce (Dansk Erhverv), where his Singularity enthusiasm is apparently shared by deputy director Niels Milling.

Niels Milling’s LinkedIn page lists a course of study at Singularity University in San Francisco in 2016.

Interesting, utopian tales, but bullshit.

So I ask Bruce Sterling what he would say to Danish politicians and businesspeople who have bought into the vision of exponential development:

»It is not going to end well. And really, it is already over. People talk as if there is still this huge boom where Moore’s law changes everything and the Singularity is near, but that could not be further from reality. There is no Singularity, and we are in an era of industrial consolidation in the tech world.«

He can, however, understand why ministers, politicians, and decision-makers are interested in the Singularity.

»They have an exciting story, but it is much the same story as in the space age and the atomic age: humans will soon live all over the solar system, and energy will become so cheap that metering it will not be worth the trouble. It did not happen. The same will be true of the Singularity story; it is bullshit, it is not going to happen,« Bruce Sterling predicts.

He has previously said that there is no business model for the Singularity, since there is no real product, but SingularityU Nordic appears to have found a business model in the form of courses and conferences.

»It is no different from other religious things. I mean, Scientology has a business model in Denmark, but that does not make them any better. The Star Wars films have a business model; you can sell illusions to people. I am simply saying that, technically, it is not going to work.«

Just before Bruce Sterling went up on the Circle Stage in Kødbyen to give his closing keynote, I had, together with 149 other tech people, presented a set of ethical tech principles meant to ensure that technology is used to make society and individual lives better. One of the participants had written the following: ‘Singularity is Near – But Reality is Nearer’.

End of quote!

I do not agree that the problem is the end of “Moore’s law”. One can simply continue the development of clusters until they encompass millions of computers.

The real problem is that Google (for natural reasons) defines artificial intelligence as deep learning over existing static data. Deep learning is a fancy search engine for existing big data. A game of chess can be viewed as a static tree of decisions: big data defined by the fixed rules of the game.

Search cannot find new, non-existent things, but it can find well-described, known things. AI in the form of deep learning cannot invent things: AI is not innovative.

The enthusiasm for AI resembles the enthusiasm for radium after its discovery in 1898. It was sold as a miracle cure for every kind of disease.

From the above one might get the impression that the idea of general artificial intelligence, and an eventual technological singularity, was invented in Silicon Valley. That, however, is wrong. The idea was first put forward in 1965 as Speculations Concerning the First Ultraintelligent Machine by Irving John Good, whose real name was Isadore Jacob Gudak:

Isadore Jacob Gudak

I. J. Good worked as a statistician and cryptographer in England. His approach to statistics differed from that of the traditional statistical school led by Sir Ronald Fisher, which fully endorsed Bertrand Russell’s view of cause and effect: cause and effect are concepts from philosophy with no basis in reality, and statistical relations are not an expression of causal connection. Good therefore had trouble obtaining funding for his research. It is difficult to find an abstract of the 1965 article when I use a VPN (but here is one):

Speculations Concerning the First Ultraintelligent Machine

Publisher Summary

An ultra-intelligent machine is a machine that can far surpass all the intellectual activities of any man however clever. The design of machines is one of these intellectual activities; therefore, an ultra-intelligent machine could design even better machines. To design an ultra-intelligent machine one needs to understand more about the human brain or human thought or both. The physical representation of both meaning and recall, in the human brain, can be to some extent understood in terms of a subassembly theory, this being a modification of Hebb’s cell assembly theory. The subassembly theory sheds light on the physical embodiment of memory and meaning, and there can be little doubt that both needs embodiment in an ultra-intelligent machine. The subassembly theory leads to reasonable and interesting explanations of a variety of psychological effects.

Note: Such a machine must include both memory and understanding.

Good moved to Virginia Tech in the United States shortly afterwards.

Obituary in The Telegraph

Professor Jack Good

Professor Jack Good, who died on April 5 aged 92, made fundamental contributions to probability theory, drawing on ideas developed while working as a codebreaker at Bletchley Park during the Second World War; later on he advised Stanley Kubrick on the computer with a mind of its own in the film 2001 – A Space Odyssey, and popularised the board game Go.

A statistician by training and a county chess champion, Good was recruited to Bletchley Park from Cambridge in 1941. By the time he arrived, the German Air Force and Army Enigma codes had been broken, but their naval Enigma code remained frustratingly difficult to decrypt – a major problem at a time when supply lines from North America were being threatened by U-boats.

Initially Good was assigned to Hut 8 working with Alan Turing and Hugh Alexander, who were already using machines known as “bombes” to discover the Enigma wheel settings, based on complex algorithmic “cribs” devised by Turing using a branch of probability theory known as Bayesian statistics. During this early period, the mathematician Max Newman, working in another hut, had established a program to use electronic methods of decipherment and had recruited Donald Michie, an Oxford classicist, to help him.

In 1943 Good moved from Hut 8 to the “Newmanry” to work with Michie on the use of machine methods for decrypting a German cipher system known as “Fish”. The first machine, appropriately christened the “Heath Robinson”, used vacuum tubes, was highly unreliable, and thus required extensive statistical work to back it up. A particular problem, apart from the frequent failure of the vacuum tubes, was that the paper tapes containing the intercepted signals were fed in at very high speed and tended to snap. Good recalled being able to tell when the machine was going wrong by the sound it made – and even by the smell.

But thanks to their efforts the Heath Robinson worked well enough to show that the basic concept was sound, and the two men went on to use their joint expertise to develop the code-breaking technologies underpinning the Colossus machines. These, the world’s first programmable, digital electronic computers, were developed just in time for the Normandy landings and marked the beginning of the modern computer revolution. Good and Michie developed ways of using the machines to help in “pin breaking” – deciphering the pin patterns of the wheels used in the German Lorenz encryption machines, which were periodically changed.

After the war, Good went with Turing to the University of Manchester to work with Newman on statistical and mathematical computing, and in his spare time began to develop the field of Bayesian statistics. He went on to play a leading role in the development of Bayesian statistics as a practical tool for assessing probability and risk in fields ranging from medicine to defence strategy.

He was born Isidore Jacob Gudak in London on December 9 1916 to Polish-Jewish parents, but later anglicised his name to Irving John Good. His father was a watchmaker and a dealer in antique jewellery.

Although he was slow to learn to read, “Jack” Good’s mathematical genius was clear from an early age. In bed with diphtheria aged nine, he “discovered” the irrationality of the square root of two and found an infinity of solutions to the equation: 2x² = y² ± 1. By the age of 13 he had independently “discovered” mathematical induction and integration.

At Haberdashers’ Aske’s School, Hampstead, Good amazed his maths master by working out the solutions to a series of exercise questions before the man had finished writing them on the blackboard. His teachers soon reached the limits of what they could teach him and left him to pursue his mathematical studies on his own in the school library. By the time he won a scholarship to Jesus College, Cambridge, he had already covered much of the undergraduate syllabus.

Good graduated in 1938, and the next year won the Cambridgeshire chess championship. He won the Smith’s mathematical prize and completed a doctorate on “The topological concept of partial dimension based on the ideas of Henri Lebesgue”, under GH Hardy. In 1941 Good was interviewed by Hugh Alexander, the reigning British chess champion, for a job in the “Civil Service”, and on May 27 1941 – the day the Bismarck was sunk – found himself installed at Bletchley Park.

Good got on particularly well with Turing, with whom he played chess and who introduced him to the Chinese strategic board game Go. After the war Good played Go with the mathematician Roger Penrose and helped to popularise the game in Britain through an article published in New Scientist in 1965.

In 1947 Good accepted Newman’s invitation to join him at the University of Manchester to work with him and Turing on a computer based on Turing’s designs. Along with Tom Kilburn and Fred Williams, Good played a role in the development of the “Manchester Mark I”, the first computer in the world to be controlled from an internally-stored program.

In 1948 Good returned to government service within the Government Communications Headquarters (GCHQ). It was during this time that he published his first book, Probability and the Weighing of Evidence (1950), which expanded on “Bayesian” concepts of probability which he and Turing had been working on during the war.

In 1959 he joined the Admiralty Research Laboratory. Five years later, after a series of consulting positions, he returned to academic life as a senior research fellow at Trinity College, Oxford, where he was associated with the Atlas Computer Laboratory. Three years later, finding Oxford “a bit stiff”, he decided to take up a chair in statistics at Virginia Polytechnic Institute and State University (Virginia Tech).

In the later years of the war, Good and Donald Michie had fantasised about the idea of machines that were capable of learning, and in 1964 Good published a paper on Speculations Concerning the First Ultraintelligent Machine which was quoted by Arthur C Clarke in his 2001: A Space Odyssey to explain how his spaceship computer, the HAL 9000, had acquired a mind of its own. One of Good’s first assignments in America was to advise Stanley Kubrick on his film adaptation.

Good’s published work ranged from statistics, computation and number theory to the philosophy of mathematics and science. In addition to numerous papers and articles, his books included The Estimation of Probabilities: an Essay on Modern Bayesian Methods (1965) and Good Thinking: the Foundations of Probability and its Applications (1983).

Like other members of staff at Bletchley Park, Good was unable to talk about his wartime work for many years, though he allowed himself an oblique reference to his clandestine past in his car number plate: 007 IJG.

Later he contributed a chapter on “Enigma and Fish” in Codebreakers: The Inside Story of Bletchley Park (1994), edited by Harry Hinsley and Allan Stripp.

Good served on numerous scientific committees and won several awards and honours. In 1985 he was elected a Fellow of the American Academy of Arts and Sciences.

 

Artificial intelligence loses at StarCraft

Why artificial intelligence beats you at chess – but loses at StarCraft

Artificial intelligence beats world champions at chess and the Chinese game Go. But in some games humans still have the upper hand, a researcher says.

It came as a shock to chess players the world over when the IBM computer Deep Blue beat grandmaster Garry Kasparov at chess in 1997.

It was the first time an artificial intelligence had defeated a world champion at the brain-teasing board game. Since then, the development of artificial intelligence has only moved forward.

Computer algorithms are now so good that they can beat humans at advanced computer games, where there is far more information to deal with than in ordinary board games.

Chess master after 4 hours of training

When artificial intelligence wins at chess, it is first and foremost thanks to an efficient search algorithm.

So explains associate professor Sebastian Risi, who researches artificial intelligence and computer games at the IT University of Copenhagen:

– It works through and anticipates a huge number of possibilities for the various moves in a game of chess. Each possibility can be seen as one branch of an enormous tree of decisions, he says.

– To play more efficiently, Deep Blue was programmed to prune away some of the branches, so it avoided wasting effort on a lot of moves that made no sense anyway, Sebastian Risi continues.
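The pruning Risi describes is, in classical chess programming, alpha-beta pruning on top of minimax search: whole branches are skipped as soon as it is clear they cannot change the final choice. A minimal Python sketch on a toy game tree (not Deep Blue’s actual engine):

# Minimax with alpha-beta pruning on a toy game tree (illustrative sketch).
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):  # leaf: the evaluation score of a position
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:  # each child is one possible move
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if beta <= alpha:  # prune: this branch can no longer affect the result
            break
    return best

# Nested lists form a tiny decision tree; the numbers are leaf evaluations.
tree = [[3, 5], [6, [9, 12]], [1, 2]]
print(alphabeta(tree, maximizing=True))  # -> 6, found without visiting every leaf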

Today artificial intelligence is even sharper on the chessboard. Last year, for example, Google’s so-called AlphaZero managed to beat the world’s strongest chess program, Stockfish.

It did so despite having known the game for only four hours. AlphaZero was fed the rules and then learned to play by practicing against itself.

It ended up winning the match while examining only 80,000 positions per second, whereas Stockfish searched 70 million positions per second.

– AlphaZero’s search algorithm is combined with deep neural networks that focus on particular patterns in the game of chess, says Sebastian Risi.

– In other words, it can more easily decide which situations are worth focusing on and which are not, he continues.
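That steering role can be sketched very simply: instead of expanding every legal move, the search asks a policy for move probabilities and only descends into the most promising few. The policy and value functions below are toy stand-ins on a number game; in AlphaZero they are deep neural networks trained by self-play.

# Sketch: a policy prior narrowing a game-tree search (toy stand-in, not AlphaZero).

def policy_prior(position, moves):
    # Fake prior: prefer moves that bring the position toward zero.
    scores = [1.0 / (1 + abs(position + m)) for m in moves]
    total = sum(scores)
    return [s / total for s in scores]

def value(position):
    return -abs(position)  # fake value estimate: positions near zero are good

def guided_search(position, depth, moves=(-2, -1, 1, 2, 5), top_k=2):
    if depth == 0:
        return value(position)
    priors = policy_prior(position, moves)
    # Search only the top_k moves by prior, instead of all of them:
    ranked = sorted(zip(moves, priors), key=lambda mp: mp[1], reverse=True)[:top_k]
    return max(guided_search(position + m, depth - 1, moves, top_k) for m, _ in ranked)

print(guided_search(position=4, depth=3))  # -> 0: finds the best line with far fewer expansions

The ratio in the match quoted above, 80,000 versus 70 million positions per second, is exactly this effect: a good prior makes a small search act like a big one.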

The same technology was used when the similarly named AlphaGo beat the world champion, 19-year-old Chinese player Ke Jie, at the popular board game ‘Go’.

With its many possible moves, Go is reportedly more complicated than chess. That is why it was seen as a milestone when AlphaGo took the victory.

Beating the gamers

When AlphaGo evaluates a move in Go, it draws on the experience it has accumulated over millions of training games.

The same goes for the artificial intelligence now competing in more advanced computer games.

This year, an artificial intelligence from the Elon Musk-backed project OpenAI managed to thrash semi-professional players at the popular computer game Dota 2.

That happened after the artificial intelligence had trained for what corresponds to 180 years of play every day.

– It was truly impressive. But the artificial intelligence did not learn the entire game from scratch, says Sebastian Risi, continuing:

– It already knew the precise positions of its teammates, among other things. That is because it does not see the game visually the way humans do. Pixels are converted into information that is easier for it to understand.

The team behind OpenAI had also imposed some restrictions ahead of the match, to keep things from becoming too complex for the artificial intelligence.

For example, both teams had to play with the same five characters.

Bots learn to work together

According to Sebastian Risi, one of the most interesting things about the Dota victory is that the bots worked together.

Dota 2 is a game that requires a high degree of teamwork. And that can be difficult for human players in some situations.

– The artificial intelligence has no problem, for instance, sacrificing one of its players so the team as a whole can win. That can be a hard decision for humans to make, says Sebastian Risi.

Despite several victories, it nevertheless ended in defeat when OpenAI took part in the Dota 2 world championship.

– The problem is that as soon as the artificial intelligence leaves training and enters the real tournament, it stops training and learning from the game. And that is something we humans keep on doing, says Sebastian Risi.

StarCraft – where humans win

Together with PhD student Niels Justesen, Sebastian Risi is investigating how artificial intelligence fares in various computer games.

Among them is the popular strategy game StarCraft, which is even harder than Dota 2 for an artificial intelligence to play.

– There are enormously many possibilities in StarCraft. So it is very hard for the artificial intelligence to decide what to focus on, says Sebastian Risi, continuing:

– That is why the best StarCraft players are still far better than artificial intelligence.

But Sebastian Risi believes that, in time, artificial intelligence will become better than humans at StarCraft:

– It will probably take a few years before we see it. It is going to require even more advanced algorithms. We certainly cannot do it with the technology we have now.

End of quote!

But what is the fundamental reason that deep learning cannot surpass the human brain on certain problems, even though the electronic computer is millions of times faster than the brain?

The problem is that deep learning is designed for stationary situations, i.e. rules that do not change over time. Such situations are time-symmetric: there is no difference between past and future, and cause and effect do not exist in them. Classical statistics denies the existence of cause and effect; only symmetric relations between quantities exist. The philosopher Bertrand Russell described this assumption in 1913 as follows:

“The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.”

Bertrand Russell, Selected Papers

And yet: we can all remember the past, but not the future…
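The stationarity assumption can be made concrete with a tiny demo: fit a model on data from one regime, then let the regime drift. All numbers below are made up for illustration; the point is only that a model fitted under stationarity has no notion of the world changing.

# Tiny demo: a model fitted under stationarity breaks when the world drifts.
import random
random.seed(0)

def sample(slope, n=200):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [slope * x + random.gauss(0, 0.5) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mean_sq_error(slope, xs, ys):
    return sum((y - slope * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = sample(slope=2.0)  # the "past": the regime the model was trained on
model = fit_slope(xs, ys)
print(f"error on past-like data: {mean_sq_error(model, *sample(2.0)):.2f}")  # small
print(f"error after the drift:   {mean_sq_error(model, *sample(3.0)):.2f}")  # large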

 

Chasing ‘Oumuamua

Chasing ‘Oumuamua

Written by Elizabeth Landau:

The interstellar object ‘Oumuamua perplexed scientists in October 2017 as it whipped past Earth at an unusually high speed. This mysterious visitor is the first object ever seen in our solar system that is known to have originated elsewhere.

What we know

-It came from outside the solar system — Because of its high speed (196,000 mph, or 87.3 kilometers per second) and the trajectory it followed as it whipped around the Sun, scientists are confident ‘Oumuamua originated beyond our solar system. The object flew by Earth so fast its speed couldn’t be due to the influence of the Sun’s gravity alone, so it must have approached the solar system at an already high speed and not interacted with any other planets. On its journey past our star, the object came within a quarter of the distance between the Sun and Earth.

-Its trajectory is hyperbolic — By tracking this object as it passed within view of telescopes, scientists can see that this high-speed object won’t be captured by our Sun’s gravity. It won’t circle back around again on an elliptical path. Instead, it will follow the shape of a hyperbola — that is, it will keep on going out of the solar system, and never come back (see the sketch after this list).

-It doesn’t look like a comet, but it behaves like one — A comet is a small icy body that, when heated by the Sun, develops a coma — a fuzzy atmosphere and tail made of volatile material vaporizing off the comet body. At first, scientists assumed ‘Oumuamua was a comet. But because ‘Oumuamua appears in telescope images as a single point of light without a coma, scientists then concluded it was an asteroid. But when astronomers saw the object was accelerating ever so slightly, they realized that a coma and jets might not be visible to the telescopes used to observe it. The jetting of volatile materials or “outgassing” would explain why ‘Oumuamua was accelerating in a subtle, unexpected way when only gravity from our solar system is taken into account.

-It must be elongated — While it is impossible to take a close-up photo of ‘Oumuamua, its dramatic variations in brightness over time suggest it is highly elongated. By calculating what kind of object could dim and brighten in this way, scientists realized the object must be up to 10 times as long as it is wide. Currently, ‘Oumuamua is estimated to be about half a mile (800 meters) long. Astronomers had never seen a natural object with such extreme proportions in the solar system before.

-It tumbles through space — The unusual brightness variations also suggest the object does not rotate around just one axis. Instead, it is tumbling — not just end over end, but about a second axis at a different period, too. A small object’s rotation state can easily change, especially if it is outgassing, so this tumbling behavior could have started recently. The object appears to make a complete rotation every 7.3 hours.
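A minimal sketch of the hyperbolic-orbit check mentioned in the trajectory item above: compare the measured speed with the Sun’s escape speed at the same distance. The speed and distance are the round figures quoted in this article, taken here as simultaneous, which is an assumption; anything faster than escape speed is on a hyperbola and leaves forever.

# Sketch: is 'Oumuamua bound to the Sun? Compare its speed to the escape speed.
import math

GM_SUN = 1.327e20  # m^3/s^2, standard gravitational parameter of the Sun
AU = 1.496e11      # m

r = 0.25 * AU  # closest approach quoted above (a quarter of the Sun-Earth distance)
v = 87.3e3     # m/s, the speed quoted above (assumed to apply at that distance)

v_escape = math.sqrt(2 * GM_SUN / r)
print(f"escape speed at 0.25 AU: {v_escape / 1e3:.1f} km/s")  # ~84 km/s
print(f"hyperbolic: {v > v_escape}")  # True: unbound, it will never come back

# Hyperbolic excess speed: the speed it keeps "at infinity".
v_inf = math.sqrt(v**2 - v_escape**2)
print(f"excess speed: {v_inf / 1e3:.1f} km/s")  # ~23 km/s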

What we don’t know

-What does it look like? All that astronomers have seen of ‘Oumuamua is a single point of light. But because of its trajectory and small-scale accelerations, it must be smaller than typical objects from the Oort Cloud, the giant group of icy bodies that orbit the solar system roughly 186 billion miles (300 billion kilometers) away from the Sun. Oort Cloud objects formed in our own solar system, but were kicked out far beyond the planets by the immense gravity of Jupiter. They travel slower than ‘Oumuamua and will forever be bound by the gravity of our Sun. But besides its elongated nature, scientists do not know what kinds of features ‘Oumuamua has on its surface, if any. An elongated shape would explain its rotation behavior, but its exact appearance is unknown.

-What is it made of? Comets from our solar system have a lot of dust, but because none is visible coming off ‘Oumuamua, scientists conclude it may not have very much at all. It is impossible to know what materials make up ‘Oumuamua, but it could have gases such as carbon monoxide or carbon dioxide coming off the surface that are less likely to produce a visible coma or tail.

-Where did it come from? ‘Oumuamua came into our solar system from another star system in the galaxy, but which one? Scientists observe that its incoming speed was close to the average motion of stars near our own, and since the speed of younger stars is more stable than older stars, ‘Oumuamua may have come from a relatively young system. But this is still a guess — it is possible the object has been wandering around the galaxy for billions of years.

-What is it doing now? After January 2018, ‘Oumuamua was no longer visible to telescopes, even in space. But scientists continue to analyze it and crack open more mysteries about this unique interstellar visitor.

 

Hayabusa2 prepares to collect

Hayabusa2 prepares to collect samples, leave Planetary Society names on Ryugu

Jason Davis, August 28, 2018

Japan’s Hayabusa2 spacecraft, which has already returned some initial science from asteroid Ryugu, will soon try to collect a sample from the surface. Touchdown rehearsals are planned in September and October, with the first attempt expected in late October.

The original plan was for Hayabusa2 to collect multiple samples from several locations, to gather a broad range of materials. But it turns out Ryugu’s surface is fairly diverse to begin with, and since sampling is a risky procedure, the team is now focusing on a single location near the equator.

 

SETI Observations of ‘Oumuamua

Radio SETI Observations of the Interstellar Object ‘Oumuamua

Motivated by the hypothesis that ‘Oumuamua could conceivably be an interstellar probe, we used the Allen Telescope Array to search for radio transmissions that would indicate a non-natural origin for this object. Observations were made at radio frequencies between 1-10 GHz using the Array’s correlator receiver with a channel bandwidth of 100 kHz. In frequency regions not corrupted by man-made interference, we find no signal flux with frequency-dependent lower limits of 0.01 Jy at 1 GHz and 0.1 Jy at 7 GHz. For a putative isotropic transmitter on the object, these limits correspond to transmitter powers of 30 mW and 300 mW, respectively. In frequency ranges that are heavily utilized for satellite communications, our sensitivity to weak signals is badly impinged, but we can still place an upper limit of 10 W for a transmitter on the asteroid. For comparison and validation should a transmitter be discovered, contemporaneous measurements were made on the solar system asteroids 2017 UZ and 2017 WC with comparable sensitivities. Because they are closer to Earth, we place upper limits on transmitter power at 0.1 and 0.001 times the limits for ‘Oumuamua. A concurrent set of observations over the same frequency range was made with a narrow-band (1 Hz) beamformer/spectrometer. Setting a 6.5 sigma threshold, the (frequency-dependent) sensitivity limits on ‘Oumuamua were in the range 175 +/- 25 Jy into a 1 Hz bin. This rules out 1 Hz transmitters on ‘Oumuamua, 2017 UZ, and 2017 WC to less than 500 mW, 50 mW, and 0.5 mW respectively over the frequency range from 1-10 GHz.
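For readers wondering where limits like “0.01 Jy corresponds to 30 mW” come from, below is a minimal sketch of the standard conversion from a flux density limit to an equivalent isotropic transmitter power, P = 4πd²·S·Δν. The distance is an assumed placeholder, not a value taken from the paper, so treat the output as an order-of-magnitude check rather than a reproduction of the paper’s numbers.

# Sketch: flux density limit -> isotropic transmitter power, P = 4*pi*d^2 * S * bandwidth.
import math

JY = 1e-26  # W m^-2 Hz^-1 per jansky

def transmitter_power(flux_jy, distance_m, bandwidth_hz):
    return 4 * math.pi * distance_m**2 * flux_jy * JY * bandwidth_hz

d = 0.1 * 1.496e11  # m, assumed placeholder distance to 'Oumuamua (0.1 AU)
print(f"{transmitter_power(0.01, d, 100e3) * 1e3:.0f} mW")  # ~28 mW in a 100 kHz channel

With this placeholder distance the result happens to land near the paper’s 30 mW figure, but the distance used here is an assumption, not the paper’s value.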

 

QuasarNET: Deep Neural Networks

QuasarNET: Human-level spectral classification and redshifting with Deep Neural Networks

We introduce QuasarNET: a deep convolutional neural network that performs classification and redshift estimation of astrophysical spectra with human-expert accuracy. We pose these two tasks as a feature detection problem: presence or absence of spectral features determines the class, and their wavelength determines the redshift, very much like human-experts proceed. When run on BOSS data to identify quasars through their emission lines, QuasarNET defines a sample 99.51±0.03% pure and 99.52±0.03% complete, well above the requirements of many analyses using these data. QuasarNET significantly reduces the problem of line-confusion that induces catastrophic redshift failures to below 0.2%. We also extend QuasarNET to classify spectra with broad absorption line (BAL) features, achieving an accuracy of 98.0±0.4% for recognizing BAL and 97.0±0.2% for rejecting non-BAL quasars. QuasarNET is trained on data of low signal-to-noise and medium resolution, typical of current and future astrophysical surveys, and could be easily applied to classify spectra from current and upcoming surveys such as eBOSS, DESI and 4MOST.

This is a classic application of a deep convolutional neural network (what Google calls “Powered by AI”) to classification. The article shows that “machine learning” (another name for the same method) reaches the level of human vision. Human intelligence, then, does not reside in vision, although that is of course a matter of definition. In my view, intelligence must include the ability to recognize causal relationships, which in turn requires the ability to distinguish between past and future. Classical statistics works only with relations between quantities. Using this kind of “machine learning” assumes that the world is stationary, i.e. statistically unchanging. This simple assumption is often forgotten.
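To make “feature detection in a spectrum” concrete, here is a minimal sketch of a 1D convolutional classifier in Python with PyTorch. The layer sizes, pixel count, and class count are assumptions for illustration; this is not the QuasarNET architecture from the paper, only the general technique it builds on.

# Minimal 1D convolutional classifier for spectra (illustrative sketch, not QuasarNET).
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the wavelength axis to one value per filter
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, n_pixels), one flux value per wavelength bin
        f = self.features(x).squeeze(-1)  # (batch, 32) learned spectral features
        return self.classify(f)           # logits over classes (e.g. star/galaxy/quasar)

model = SpectrumClassifier()
fake_spectra = torch.randn(4, 1, 500)  # 4 made-up spectra with 500 flux pixels each
print(model(fake_spectra).shape)       # torch.Size([4, 3])

The convolution filters play the role of the “feature detectors” described in the abstract: each filter learns to respond to a characteristic spectral shape, such as an emission line, wherever it occurs along the wavelength axis.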