Synthetic DNA has emerged as a novel substrate for encoding computer data, with the potential to be orders of magnitude denser than contemporary cutting-edge techniques. However, even with the help of automated synthesis and sequencing devices, many intermediate steps still require expert laboratory technicians to execute. We have developed an automated end-to-end DNA data storage device to explore the challenges of automation within the constraints of this unique application. Our device encodes data into a DNA sequence, which is then written to a DNA oligonucleotide using a custom DNA synthesizer, pooled for liquid storage, and read using a nanopore sequencer and a novel, minimal preparation protocol. We demonstrate an automated 5-byte write, store, and read cycle with a modular design enabling expansion as new technology becomes available.
Storing information in DNA is an emerging technology with considerable potential to be the next-generation storage medium of choice. Recent advances have shown storage capacity grow from hundreds of kilobytes to megabytes to hundreds of megabytes. Although contemporary approaches are book-ended with mostly automated synthesis and sequencing technologies (e.g., column synthesis, array synthesis, Illumina, nanopore, etc.), significant intermediate steps remain largely manual. Without complete automation of the write-store-read cycle of data storage in DNA, it is unlikely to become a viable option for applications other than extremely seldom-read archival storage.
To demonstrate the practicality of integrating fluidics, electronics, and infrastructure, and to explore the challenges of full DNA storage automation, we developed the first fully automated end-to-end DNA storage device. Our device is intended to act as a proof of concept that provides a foundation for continuous improvement, and as a first application of modules that can be used in future molecular computing research. As such, we adhered to specific design principles for the implementation: (1) maximize modularity for the sake of replication and reuse, and (2) reduce system complexity to balance the cost and labor required to set up and run the device modules.
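The encoding step described above maps bits onto the four DNA bases before synthesis. As a minimal sketch of the idea, and not the device's actual codec (a real pipeline also needs primer regions and error-correcting redundancy), each byte can be written as four bases at two bits per base:

```python
# Minimal sketch: 2 bits per base, so one byte becomes four bases.
# This illustrates the principle only, not the codec used by the
# device described in the text.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bits first."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq: str) -> bytes:
    """Invert encode(): fold each group of four bases back into a byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

payload = b"HELLO"              # a 5-byte message, as in the demonstration
strand = encode(payload)        # 20 bases
assert decode(strand) == payload
```

In practice synthesis and nanopore reads are noisy, so a working codec would additionally avoid long homopolymer runs and carry error-correcting codes; this sketch shows only the raw bit-to-base mapping.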
With public and academic attention increasingly focused on the new role of machine learning in the health information economy, an unusual and no-longer-esoteric category of vulnerabilities in machine-learning systems could prove important. These vulnerabilities allow a small, carefully designed change in how inputs are presented to a system to completely alter its output, causing it to confidently arrive at manifestly wrong conclusions. These advanced techniques to subvert otherwise-reliable machine-learning systems—so-called adversarial attacks—have, to date, been of interest primarily to computer science researchers. However, the landscape of often-competing interests within health care, and billions of dollars at stake in systems’ outputs, implies considerable problems. We outline motivations that various players in the health care system may have to use adversarial attacks and begin a discussion of what to do about them. Far from discouraging continued innovation with medical machine learning, we call for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.
Adversarial examples are inputs to a machine-learning model that are intentionally crafted to force the model to make a mistake. Adversarial inputs were first formally described in 2004, when researchers studied the techniques used by spammers to circumvent spam filters. Typically, adversarial examples are engineered by taking real data, such as a spam advertising message, and making intentional changes to that data designed to fool the algorithm that will process it. In the case of text data like spam, such alterations may take the form of adding innocent text or substituting synonyms for words that are common in malicious messages. In other cases, adversarial manipulations can come in the form of imperceptibly small perturbations to input data, such as making a human-invisible change to every pixel in an image. Researchers have demonstrated the existence of adversarial examples for essentially every type of machine-learning model ever studied and across a wide range of data types, including images, audio, text, and other inputs.
Cutting-edge adversarial techniques generally use optimization theory to find small data manipulations likely to fool a targeted model. As a proof of concept in the medical domain, we recently executed successful adversarial attacks against three highly accurate medical image classifiers. The top figure provides a real example from one of these attacks, which could be fairly easily commoditized using modern software. On the left, an image of a benign mole is shown, which is correctly flagged as benign with a confidence of >99%. In the center, we show what appears to be random noise but is in fact a carefully calculated perturbation: this "adversarial noise" was iteratively optimized to have maximum disruptive effect on the model's interpretation of the image without changing any individual pixel by more than a tiny amount. On the right, we see that despite the fact that the perturbation is so small as to be visually imperceptible to human beings, it fools the model into classifying the mole as malignant with 100% confidence. It is important to emphasize that the adversarial noise added to the image is not random and has near-zero probability of occurring by chance. Thus, such adversarial examples reflect not that machine-learning models are inaccurate or unreliable per se but rather that even otherwise-effective models are susceptible to manipulation by inputs explicitly designed to fool them.
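The iterative optimization behind such attacks does not fit in a short example, but the core move, nudging every input feature by a small amount in whichever direction raises the wrong class's score, can be sketched on a toy linear classifier. All weights, inputs, and the step size below are invented for illustration; real attacks of this kind target deep image models:

```python
import math

# Hypothetical fixed weights of a tiny "benign vs. malignant" classifier.
W = [0.9, -1.2, 0.5, 0.8]

def predict(x):
    """Model's probability that x is 'malignant' (logistic score)."""
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def perturb(x, eps):
    """Gradient-sign step: for a linear model the gradient of the score
    with respect to feature i is just W[i], so shift each feature by
    +/- eps in the direction that raises the malignant score."""
    return [xi + eps * (1.0 if w > 0 else -1.0) for w, xi in zip(W, x)]

x = [-0.5, 1.0, -0.2, -0.4]     # classified 'benign': predict(x) < 0.5
x_adv = perturb(x, eps=0.7)     # small, bounded change to every feature
# predict(x_adv) > 0.5: the same model now calls the input 'malignant'
```

Against a deep network the gradient is computed by backpropagation rather than read off the weights, and the perturbation is constrained to be imperceptible, but the structure of the attack is the same.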
Adversarial attacks constitute one of many possible failure modes for medical machine-learning systems, all of which represent essential considerations for the developers and users of models alike. From the perspective of policy, however, adversarial attacks represent an intriguing new challenge, because they afford users of an algorithm the ability to influence its behavior in subtle, impactful, and sometimes ethically ambiguous ways.
Machine-learning classification always delivers a result without stating a probability for the individual classification; the reported probability of the classification applies to the training set that was used. Moreover, a carefully crafted small addition can always be made to the input data that completely changes the classification. It is therefore difficult or impossible to give the result legal validity.
Jørg Tofte Jebsen (born 27 April 1888 in Berger, Norway; died 7 January 1922 in Bolzano, Italy) was a Norwegian physicist. He was the first in Norway to work on Einstein's general theory of relativity. In this connection he became known, after his early death, for what many now call the Jebsen-Birkhoff theorem for the metric tensor outside a general, spherical mass distribution.
Jebsen grew up in Berger, where his father Jens Johannes Jebsen ran two large textile mills. His mother was Agnes Marie Tofte; his parents had married in 1884. After elementary school he went through middle school and gymnasium in Oslo, where he already showed a particular talent for mathematical subjects.
After the final examen artium in 1906, he did not continue with academic studies at a university, as would have been normal at that time. He was meant to enter his father's company, and for that purpose spent two years in Aachen, Germany, studying textile manufacturing. After a short stay in England, he returned to Norway and began working with his father.
But his interest in the natural sciences took over, and in 1909 he began studying them at the University of Oslo. His studies were interrupted in 1911-12, when he worked as an assistant for Sem Sæland at the newly established Norwegian Institute of Technology (NTH) in Trondheim. Back in Oslo, he took up investigations in X-ray crystallography with Lars Vegard, with whose help he was able to continue this work at the University of Berlin from the spring of 1914, at the same time as Einstein took up his new position there.
During his stay in Berlin it became clear that his main interest lay in theoretical physics, and electrodynamics in particular. This field is central to Einstein's special theory of relativity and would define his future work back in Norway. From 1916 he again worked as an assistant in Trondheim, but had to resign after a year because of health problems. In the summer of 1917 he married Magnhild Andresen in Oslo, and a year later they had a child. By then they had moved back to his parents' home in Berger, where he worked alone on a larger treatise titled Versuch einer elektrodynamischen Systematik. It was finished in 1918, and he hoped it could be used to obtain a doctoral degree at the university. In the fall of the same year he received treatment at a sanatorium for what turned out to be tuberculosis.
The faculty at the University of Oslo sent Jebsen's thesis for evaluation to Carl Wilhelm Oseen at Uppsala University. Oseen had some critical comments, with the result that it was approved only for the more ordinary cand.real. degree. But Oseen had found the student so promising that shortly thereafter Jebsen was invited to work with him. Jebsen came to Uppsala in the fall of 1919, where he could follow Oseen's lectures on general relativity.
At that time it was natural to study the exact solution of Einstein's equations for the metric outside a static, spherical mass distribution, found by Karl Schwarzschild in 1916. Jebsen set out to extend this achievement to the more general case of a spherical mass distribution that varies with time, which would be relevant for pulsating stars. After a relatively short time he arrived at the surprising result that the static Schwarzschild solution still gives the exact metric tensor outside the mass distribution. This means that such a spherical, pulsating star will not emit gravitational waves.
During the spring of 1920 he hoped to have the results published through the Royal Swedish Academy of Sciences. This met with some difficulties, but after Oseen's intervention the paper was accepted for publication in a Swedish natural-science journal, where it appeared the following year.
His work did not seem to generate much interest; one reason may be that the Swedish journal was not well known abroad. A couple of years later the result was rediscovered by George David Birkhoff, who included it in a popular-science book he wrote, and it thus became known as "Birkhoff's theorem". Jebsen's original discovery was first pointed out in 2005, when his paper was also translated into English; since then the result is more often called the Jebsen-Birkhoff theorem. Most modern proofs follow the lines of Jebsen's original derivation.
Einstein visited Oslo in June 1920, giving three public lectures on the theory of relativity at the invitation of the Student Society. Jebsen was also there, but it is not clear whether they met personally.
In the fall of the same year Jebsen traveled with his family to Bolzano in northern Italy, hoping a milder climate would improve his deteriorating health. There he wrote the first Norwegian presentation of the differential geometry used in general relativity, and he also found time to write a popular book on Galileo Galilei and his struggle with the church. But his health did not improve, and he died there on 7 January 1922. A few weeks later he was buried near his home in Norway.
The Argonne National Laboratory Supercomputer Will Enable High-Performance Computing and Artificial Intelligence at Exascale by 2021
CHICAGO, ILLINOIS – Intel Corporation and the U.S. Department of Energy (DOE) will build the first supercomputer with a performance of one exaFLOP in the United States. The system being developed at DOE’s Argonne National Laboratory in Chicago, named “Aurora”, will be used to dramatically advance scientific research and discovery. The contract is valued at over $500 million, and the system will be delivered to Argonne National Laboratory by Intel and subcontractor Cray Inc. in 2021.
The Aurora system’s exaFLOP of performance – equal to a “quintillion” floating-point computations per second – combined with an ability to handle both traditional high-performance computing (HPC) and artificial intelligence (AI), will give researchers an unprecedented set of tools to address scientific problems at exascale. These breakthrough research projects range from developing extreme-scale cosmological simulations to discovering new approaches for drug-response prediction and new materials for the creation of more efficient organic solar cells. The Aurora system will foster new scientific innovation and usher in new technological capabilities, furthering the United States’ scientific leadership position globally.
“Achieving Exascale is imperative not only to better the scientific community, but also to better the lives of everyday Americans,” said U.S. Secretary of Energy Rick Perry. “Aurora and the next-generation of Exascale supercomputers will apply HPC and AI technologies to areas such as cancer research, climate modeling, and veterans’ health treatments. The innovative advancements that will be made with Exascale will have an incredibly significant impact on our society.”
“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO. “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”
“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, and industry partners Intel and Cray,” said Argonne National Laboratory Director, Paul Kearns. “Argonne’s Aurora system is built for next-generation Artificial Intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials, and further understanding the universe – and that is just the beginning.”
The foundation of the Aurora supercomputer will be new Intel technologies designed specifically for the convergence of artificial intelligence and high-performance computing at extreme computing scale. These include a future generation of the Intel® Xeon® Scalable processor, a future generation of Intel® Optane™ DC Persistent Memory, Intel’s Xe compute architecture, and Intel’s One API software. Aurora will use Cray’s next-generation Shasta family, which includes Cray’s high-performance, scalable switch fabric codenamed “Slingshot”.
“Intel and Cray have a longstanding, successful partnership in building advanced supercomputers, and we are excited to partner with Intel to reach exascale with the Aurora system,” said Pete Ungaro, president and CEO, Cray. “Cray brings industry leading expertise in scalable designs with the new Shasta system and Slingshot interconnect. Combined with Intel’s technology innovations across compute, memory and storage, we are able to deliver to Argonne an unprecedented system for simulation, analytics, and AI.”
THE WOODLANDS, TEXAS—NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) mission to sample the asteroid Bennu and return to Earth was always going to be a touch-and-go maneuver. But new revelations about its target—a space rock five times the size of a U.S. football field that orbits close to Earth—are making the mission riskier than ever. Rather than smooth plains of rubble, Bennu’s surface is a jumble of more than 200 large boulders, with scarcely enough gaps for robotic sampling of its surface grit, the spacecraft’s team reported here today at the Lunar and Planetary Science Conference and in a series of Nature papers.
The $800 million spacecraft began to orbit Bennu at the start of this year, and the asteroid immediately began to spew surprises—literally. On 6 January, the team detected a plume of small particles shooting off the rock; 10 similar events followed over the next month. Rather than a frozen remnant of past cosmic collisions, Bennu is one of a dozen known “active” asteroids. “[This is] one of the biggest surprises of my scientific career,” says Dante Lauretta, the mission’s principal investigator and a planetary scientist at the University of Arizona in Tucson. “We are seeing Bennu regularly ejecting material into outer space.”
Ground-based observations of Bennu had originally suggested its surface was made of small pebbles incapable of retaining heat. OSIRIS-REx was designed to sample such a smooth environment, and it requires a 50-meter-wide circle free of hazards to approach the surface. No such circle exists, say mission scientists, but there are several smaller boulder-free areas that it could conceivably sample. Given how well the spacecraft has handled its maneuvers so far, “We’re going to try to hit the center of the bull’s-eye,” says Rich Burns, OSIRIS-REx’s project manager at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
OSIRIS-REx has always been a cautious mission. Unlike the speedy Hayabusa2 mission from Japan, which sampled the near-Earth Ryugu asteroid a half-year after its arrival, OSIRIS-REx plans to sample Bennu in July 2020, a year and a half after it started to orbit. That timetable has not changed, Lauretta says. By this summer, researchers hope to have the sampling site selected. And much remains to be discovered about the spinning, top-shaped asteroid, starting with the plumes, which can shoot off penny-size particles at speeds of up to several meters per second.
Just after OSIRIS-REx entered orbit around Bennu, the asteroid reached its closest approach to the sun. The other known active asteroids, which are all located in the asteroid belt between Mars and Jupiter, have similarly spouted particles as they get closer to the sun. It’s possible that the plumes are related to this approach, perhaps driven by water ice sublimating into vapor. But there are a dozen different hypotheses to explore, Lauretta says. “We don’t know the answer right now.”
The abundance of impact craters on Bennu’s ridgelike belly suggests the asteroid is up to a billion years old, more ancient than once thought. The craters also imply that Bennu got its toplike shape early in its history, rather than later from sun-driven spinning. And there are signs that material on the asteroid’s poles is creeping toward the equator, suggesting geological activity.
Although many of these puzzles intrigue scientists, ultimately the point of the mission is to return the largest amount of asteroid material ever captured to Earth’s surface. That is expected to happen in 2023. But, Lauretta adds, “The challenge got a lot harder when we saw the true nature of Bennu’s surface.”
THE WOODLANDS, TEXAS—As data streamed down last month from NASA’s New Year’s flyby of MU69, the most distant planetary object ever explored, New Horizons mission scientists got a shock. Rather than the 35-kilometer-long space “snowman” they were expecting, angled images revealed a flatter—not fatter—version, like two lumpy pancakes smooshed together.
“That took us by surprise,” said Alan Stern, the mission’s principal investigator and a planetary scientist at the Southwest Research Institute in Boulder, Colorado, today at the Lunar and Planetary Science Conference here. “We’re looking at something wild and wooly and pristine.”
Scientists believe MU69’s two lobes, with their sparse impact craters and generally smooth features, are primordial planetary building blocks called planetesimals. They still don’t understand why MU69’s two lobes did not form as spheres. But their flat shapes are now the best evidence that MU69, or “Ultima Thule” as the team has nicknamed it, first formed as two small, separate objects, says William McKinnon, a New Horizons team member and planetary scientist at Washington University in St. Louis, Missouri. “This is our strongest evidence that they really did start as an orbiting pair.”
Because the two lobes aren’t spheres, their height, width, and depth can be seen as three distinct axes, and all three axes of the lobes are nearly perfectly aligned, as if they had been laid end to end like dominoes. This type of alignment would be expected if the duo formerly orbited each other in close proximity, their gravity gently tugging back and forth. “It’s very improbable this would arise completely by chance,” McKinnon says.
The new images support a newer theory of planetary formation, called the streaming instability, as Science reported in January. Fifteen years ago, scientists proposed that boulder-size “pebbles,” built up through static electricity, would clump together like a pack of racing cyclists thanks to the churn of the early solar system’s primordial disk. Those streaming pebbles would eventually gravitationally collapse into planetesimals, leading to pairs of orbiting objects that line up like MU69, McKinnon says. “That comes right out of the streaming instability model.”
THE WOODLANDS, TEXAS—After months of delicate maneuvering, NASA’s InSight lander has finished placing its hypersensitive seismometer on the surface of Mars. The instrument is designed to solve mysteries about the planet’s interior by detecting the booming thunder of “marsquakes.” But just a few weeks into its run, the car-size lander has already heard something else: the minute tremors that continually rock our red neighbor. If marsquakes are the drum solo, these microseisms, as they’re known, are the bass line.
The signal first became apparent in early February, as soon as the lander placed a protective shield over the seismometer, said Philippe Lognonné, a planetary seismologist at Paris Diderot University who heads the team that runs the instrument, in a talk here today at the annual Lunar and Planetary Science Conference. “We do believe that these signals are waves coming from Mars.” This is the first time, he said, that such microseisms have been detected on another planet.
On Earth, microseisms are ubiquitous, caused largely by the sloshing of the ocean by storms and tides. Mars, despite the dreams of science fiction writers, has no present-day oceans. Instead, this newly discovered noise is likely caused by low-frequency pressure waves from atmospheric winds rattling the ground, inducing shallow, long-period surface waves called Rayleigh waves, Lognonné said.
Even though InSight has not yet detected a marsquake, the microseisms are an important indicator that the lander’s seismometer is working as hoped. In recent decades, seismologists have begun to see microseisms on Earth as not just a nuisance, but as a valuable tool for understanding features in the subsurface. This noise will be similarly valuable on Mars, Lognonné said, allowing the team’s seismologists to probe the rigid surface crust in the immediate vicinity around the lander.
But the seismometer has had little time to listen so far. Although the sand-filled crater where InSight landed, nicknamed “Homestead Hollow,” had little in the way of large rocks to complicate its placement, the deployment still took a month longer than planned, thanks to two delicate tasks. First, scientists had to carefully tweak the electric tether connecting the seismometer to the lander, in order to reduce noise coming off the lander. Then, they had to place a wind and heat shield over the instrument.
Since then, InSight has spent much of its time troubleshooting for its second instrument, a heat probe designed to burrow up to 5 meters below the surface. The robotic arm placed that instrument in mid-February. But soon after the probe began to hammer itself into the surface, its 40-centimeter-long “mole” got stuck on a rock or some other blockage just 30 centimeters down. Now, mission scientists have put the hammering on hold as they wait for the agencies’ engineers to evaluate their options. That will continue for several more weeks, said Bruce Banerdt, InSight’s principal investigator and a geophysicist at NASA’s Jet Propulsion Laboratory in Pasadena, California.
Although the microseisms are a thrill to hear, everyone working on InSight is waiting for the main event: their first marsquake. There’s no need to panic about not seeing one yet, Banerdt said. “Before we get nervous … [the mission is] exactly where we expected to be.” The team expects to detect about one marsquake a month, but these will likely come in clusters, not perfectly spaced out. Banerdt, who had been preparing this mission for decades, can be patient, he said. “The wait’s not completely over yet.”
1. Introduction: The improbability of life and the suppression of chemistry
Although a living system is of course a formidable industry of chemical transformations, life and chemistry are fundamentally incompatible. A functioning organism must, figuratively speaking, take its chemical transformations out of chemistry's hands and carry them out itself using macromolecular machines, with, roughly speaking, one machine required for each transformation; life thereby raises itself above chemistry.
The main reason is that purely chemical processes run thermodynamically "downhill" toward greater probability, whereas life must run its most essential chemical transformations "uphill" (transformations that go thermodynamically "backward") in order to create certain absolutely necessary far-from-equilibrium (FFE) states.
All living systems inevitably exist in a self-generated physical state extremely far from thermodynamic equilibrium (Extremely Far From Equilibrium), corresponding to a state of extremely low probability. Life's EFFE state moreover comprises a very large number of distinct subordinate unstable states, which must be actively maintained.
The chemical transformations that open up new possibilities are those that create unstable sub-states far from thermal equilibrium, each of which requires a macromolecular "machine" that specifically couples the process being driven "uphill" to a larger process that runs "downhill". Coupling these different types of chemical transformations is in no way within the reach of "normal chemistry".
Even the molecular transformations that do run "downhill" must be directed by a dedicated macromolecular machine, which controls reaction rates and excludes unwanted reactions. Chemistry alone is far too uncontrollable.
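The coupling invoked here is the standard free-energy bookkeeping of biochemistry. As an arithmetic illustration with textbook standard-state values (not figures from this article): the enzyme hexokinase couples the uphill phosphorylation of glucose to the downhill hydrolysis of ATP, and only the summed reaction needs to run downhill:

```python
# Textbook standard-state free energies in kJ/mol (illustrative values,
# not taken from this article).
dG_glucose_phosphorylation = +13.8   # glucose + Pi -> glucose-6-phosphate
dG_atp_hydrolysis          = -30.5   # ATP -> ADP + Pi

# The enzyme couples the two reactions, so their free energies add.
dG_coupled = dG_glucose_phosphorylation + dG_atp_hydrolysis
assert dG_coupled < 0   # about -16.7 kJ/mol: the coupled reaction runs
                        # "downhill" even though one half runs "uphill"
```

The point of the macromolecular machine is exactly this coupling: neither reaction would drive the other unless an enzyme forces them to occur together.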
We argue that these considerations apply both to the origin of life and to fully developed life. Any idea that life arose as "chemistry in a bag" by supplying energy to a collection of molecular building blocks cannot, in principle, be correct.
The purpose of this article is to support these claims. We begin by summarizing our thoughts on the origin of life.
2. Energy, warm ponds, hot soups: What ideas lay behind all earlier speculations about the origin of life?
From Darwin's speculations about "… some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes …", through Oparin's, Haldane's, and Bernal's conjectures about a "prebiotic soup" driven by "… a supply of energy such as lightning or ultraviolet light", through Urey and Miller's experiments with electric discharges, and on to today's work on the origin, rather than the emergence, of life, everything has been based on the fundamental idea that one need only expose the right soup of molecules to a suitable form of energy in order to create life out of chemistry. The need to supply energy reflects, more or less implicitly but increasingly also directly, the need to place the system in a state of thermodynamic disequilibrium in order to bring about necessary reactions, ones that at thermodynamic equilibrium would require an input of energy in the form of work. It is assumed, again more or less directly, that the origin of life then proceeds entirely on its own, possibly via a world of RNA molecules. In all these models the supplied energy is imagined to place the system in a nonspecific chemical non-equilibrium state. Setbacks for these assumptions have been sparse, but nevertheless quite convincing.
We argue that such "cooking-the-soup" notions are untenable across the board. We contend that nothing could be less compatible with life's inner workings, much less with its emergence, than the results of exposing any mixture of molecules whatsoever to any nonspecific chemical "energy". That among the countless organic products of such experiments one can find some "building blocks of life" is entirely irrelevant and misleading. Such ideas are untenable because they rest on fundamental misunderstandings of life as a phenomenon. Life has its own distinctly separate alchemy.
J.D. Bernal (1951) recognized one side of the problem when he declared: "… it is not enough to explain the formation of such molecules, what is necessary is a physical-chemical explanation of the origin of these molecules that suggests the presence of suitable sources and sinks for free energy". Bernal was clearly echoing the argument made 7 years earlier by Schrödinger: it is not energy but "negentropy" (corresponding to a disequilibrium, also known as "free energy") that is the driving force behind life.
PENTICTON, CANADA—Reporting from the Dominion Radio Astrophysical Observatory here requires old-school techniques: pad and pen. Upon arrival, I must turn off my digital recorder and cellphone and stash them in a shielded room with a Faraday cage—a metal mesh that prevents stray electromagnetic signals from escaping. The point is to keep any interference away from the observatory’s newest radio telescope, the Canadian Hydrogen Intensity Mapping Experiment (CHIME).
On a clear, cold day in January, Nikola Milutinovic stands on the vertiginous gantry that runs along the focus of one of CHIME’s four 100-meter-long, trough-shaped dishes. Milutinovic, a scientific engineer at the University of British Columbia (UBC) in Vancouver, scans their reflective surfaces for snow, which generally sifts through the metallic mesh but sometimes sticks and freezes. Snow-covered hills surround him, shielding CHIME from the cellphone towers, TV transmitters, and even microwave ovens of nearby towns. “If you switched on a cellphone on Mars, CHIME could detect it,” he says.
CHIME’s quarry is neither so faint nor so close. The telescope is smaller and cheaper than other leading radio observatories. But by luck as much as design, its capabilities are just right for probing what may be the most compelling new mystery in astronomy: signals from the distant universe called fast radio bursts (FRBs). Discovered in 2007, FRBs are so bright that they stick out in the data like a peak in the nearby Canadian Rockies—so long as a telescope is watching and its electronics are fast enough to pick out the pulses, which last only a few thousandths of a second.
Just days before I visit, CHIME—still in its shakedown phase—had made global headlines for bagging 13 new FRBs, bringing the total known to more than 60. Nearly that many theories exist for explaining them. One of the few things researchers know for sure, from the nature of the pulses, is that they come from far beyond our Milky Way. But in an instant, each event is over, leaving no afterglow for astronomers to study and frustrating efforts to get a fix on their origin.
Whatever generates FRBs must be compact to produce such short pulses, astronomers believe, and extremely powerful to be seen at such great distances. Think neutron stars or black holes or something even more exotic. FRBs can repeat—although strangely, only two of the dozens known appear to do so. The repetition could rule out explosions, mergers, or other one-time cataclysmic events. Or repeating and solitary FRBs could be different animals with different sources—theorists just don’t know.
What they need are numbers: more events and, most important, more repeaters, which can be traced to a particular environment in a home galaxy. CHIME will deliver that by surveying the sky at high sensitivity. Its troughs don’t move, but they observe a swath of sky half a degree wide, stretching from one horizon to the other. As Earth turns, CHIME sweeps across the entire northern sky. Sarah Burke-Spolaor, an astrophysicist at West Virginia University in Morgantown, says its sensitivity and wide field of view will enable it to survey a volume of the universe 500 times bigger than the one surveyed by the Parkes radio telescope in Australia, which discovered the first FRB and 21 others. “CHIME just has access to that all day, every day,” she says.
Once CHIME’s commissioning phase is over later this year, scientists think it could find as many as two dozen FRBs per day. “Within a year, it will be the dominant discoverer of FRBs,” says Harvard University astrophysicist Edo Berger.
The strange-looking telescope has been a labor of love for the small team behind it—labor being the operative word. A contractor assembled the dishes, lining the troughs with a radio-reflective steel wire mesh. But everything else was painstakingly assembled by researchers from UBC, the University of Toronto, and McGill University in Montreal. That includes 1000 antennas fixed beneath the gantry at each trough’s focus, 100 kilometers of cabling, and more than 1000 computer processors that sit inside radiation-shielded shipping containers next to the dishes.
“Everyone has put their hands on the telescope,” says Milutinovic, who puts in shifts monitoring it and its computer systems. It’s not just a desk job. Although he left alone two baby ospreys that nested on a tall pole near the telescope, he has called in conservationists to remove other birds that set up house in the telescope’s structure, along with the occasional rattlesnake. When a humidity sensor in one of the computer containers goes off at night, Milutinovic makes the 25-minute drive to the deserted observatory to check it out. He worries about other nocturnal visitors. “I’ve seen the tracks of coyote, and there’s a bear that hangs around here.”
In a field in which front-rank telescopes cost billions, the CA$20 million CHIME looks set to have an impact out of all proportion to its price tag. “CHIME shows you can build a telescope that makes the world news pretty cheaply,” Milutinovic says.
None of that was part of CHIME’s original job description. Back in 2007, a group of cosmologists in Canada had the idea of building a cheap telescope to measure the 3D distribution across the universe of hydrogen gas clouds, which glow faintly at radio frequencies. The aim, says Keith Vanderlinde of the University of Toronto, was to map ripples in the density of matter created soon after the big bang and chart their expansion over cosmic history. A change in the expansion rate would tell researchers something about dark energy, the mysterious force thought to be accelerating the universe’s growth. “Any handle we can get on it would be a huge boon to physics,” Vanderlinde says.
CHIME would also be an excellent machine for studying pulsars. Pulsars are neutron stars, dense cinders of collapsed giant stars, that shoot electromagnetic beams out of their poles while rotating like a celestial lighthouse, sometimes thousands of times per second. Astronomers on Earth detect the beams as metronomic pulses of radio waves. CHIME will monitor 10 pulsars at a time, 24 hours a day, for hiccups in their perfect timekeeping that could result when passing gravitational waves stretch intervening space.
When CHIME was conceived, few people were thinking about FRBs because the first, found in 2007 in archival Parkes telescope data, was such an enigma. It had a high dispersion measure, meaning the pulse was smeared across frequencies because free electrons in intergalactic space had slowed the burst’s low-frequency radio waves disproportionately. The high dispersion measure suggested the burst came from billions of light-years away, far beyond our local group of galaxies.
The pulse was still bright, implying the source’s energy was a billion times that of a pulsar pulse. Yet its short duration meant the source could be no bigger than 3000 kilometers across, because signals could not cross a larger object fast enough for it to act in unison and produce a single, short pulse. A city-size pulsar could fit in that space. But how could a pulsar detonate so powerfully?
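The two inferences above come from simple back-of-the-envelope physics: the dispersion delay scales with the dispersion measure and the inverse square of frequency, and causality caps the source size at the distance light travels during the pulse. A minimal sketch, with illustrative values not taken from the article:

```python
# Back-of-the-envelope FRB relations. DM value and pulse width below are
# illustrative, not measurements from the article.

C_KM_S = 299_792.458   # speed of light, km/s
K_DM_MS = 4.149        # dispersion constant, ms per (pc/cm^3) at 1 GHz

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) between a low and a high observing
    frequency for a given dispersion measure DM in pc/cm^3."""
    return K_DM_MS * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

def max_source_size_km(pulse_ms):
    """Causality limit: a source acting in unison can be no larger than
    the distance light travels during the pulse."""
    return C_KM_S * pulse_ms / 1e3

# A ~10 ms pulse implies a source under ~3000 km across.
print(max_source_size_km(10))
# Across a 400-800 MHz band, a DM of 500 pc/cm^3 smears a burst by ~10 s.
print(dispersion_delay_ms(500, 0.4, 0.8))
```

This is why a high dispersion measure points to a distant, intergalactic origin: the farther the burst travels, the more free electrons it crosses and the larger the frequency-dependent delay.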
Astronomers were tempted to dismiss that first burst as a mirage. But it was no anomaly: Another pulse was uncovered in Parkes archival data in 2012. Then, after an upgrade with new digital instruments, Parkes detected four more in 2013, all with high dispersion measures, suggesting cosmically distant origins. That paper “made me a believer,” says McGill astronomer Victoria Kaspi, who was working to integrate pulsar monitoring into CHIME.
The paper also sparked a realization: CHIME could be adapted to look for FRBs, too. “Vicky called me up and said, ‘You know, this would also make a good FRB machine,’” recalls Ingrid Stairs, a collaborator of Kaspi’s at UBC.
The upgrade was not easy. Catching FRBs requires finer time and frequency resolution than mapping hydrogen. CHIME’s data would have to be logged every millisecond across 16,000 frequency channels, Kaspi says. To do that meant tinkering with the correlator, the fearsomely parallel computer that chomps through the 13 terabits of data streaming every second from CHIME’s 1024 antennas—comparable to global cellphone traffic.
The time-critical astrophysicists needed a different output from the sensitivity-is-everything cosmologists. The cosmologists, eager to map the cosmic clouds, could get by without the extra resolution. At the end of each day, they could download data onto a hard disk and ship it to UBC for leisurely processing. But that wasn’t an option for the FRB hunters, who needed high-resolution data that would quickly overwhelm a hard drive. Kaspi and her colleagues devised algorithms to scan in real time just a few minutes of high-resolution data stored in a buffer. If an event is detected, the key 20 seconds of data around it are saved. If there’s nothing, they’re dumped. Searching for FRBs is “smash and grab science,” says team member Paul Scholz of McGill.
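The "smash and grab" idea — buffer a rolling window, snapshot it when a trigger fires, otherwise let the data fall off the end — can be sketched with a ring buffer. This is an illustrative toy, not CHIME's actual pipeline; the class name and capacity are invented:

```python
from collections import deque

class TriggerBuffer:
    """Keep only the most recent `capacity` samples; on a trigger,
    snapshot the buffered window and let everything else be dropped."""
    def __init__(self, capacity):
        # deque with maxlen discards the oldest sample automatically
        self.buf = deque(maxlen=capacity)

    def push(self, sample):
        self.buf.append(sample)

    def grab(self):
        """Save the buffered window (e.g. the ~20 s around an event)."""
        return list(self.buf)

# Stream 12 samples through a buffer that holds only the latest 5.
buf = TriggerBuffer(capacity=5)
for s in range(12):
    buf.push(s)
print(buf.grab())   # only the last 5 samples survive: [7, 8, 9, 10, 11]
```

The design point is that nothing is ever written to disk unless the real-time search raises a trigger, which is what keeps a 13-terabit-per-second instrument from overwhelming any storage system.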
As test observations began in 2017, the team got twitchy about how many FRBs CHIME would see. CHIME was observing at frequencies of 400 to 800 megahertz (MHz), lower than the 1.4-gigahertz frequency used to detect most FRBs. A 300-MHz survey at a different telescope had found nothing, and another survey at 700 to 800 MHz saw just a single burst. “It was worrying, especially in the lower part of the band,” Stairs says.
Those worries evaporated in July and August 2018, when the team struck gold with the 13 new FRBs, even though sections of the telescope were sporadically taken offline for adjustments. The haul, published in Nature in January, included one repeater—only the second yet discovered. Kaspi declined to provide an update on the number of FRB discoveries since last summer, citing two unpublished papers in the works. But she says CHIME is “fulfilling expectations.” “It’s a bit like drinking from a firehose, but in a good way,” she says.
Theorists want all that CHIME will deliver, and then some. A poverty of information is allowing ideas to run riot. “Almost every aspect of FRBs is in play for theorists,” Berger says. An online catalog of FRB origin theories had 48 entries at the time of writing. Many theorists initially put forward models based on the violent collapse or merger of compact objects, including white dwarfs, neutron stars, pulsars, and black holes. But the discovery of repeaters shifted speculation to sources that would not be destroyed in the act of generating a burst.
Active galactic nuclei, the supermassive black holes at the centers of galaxies, spew winds and radiation that might trigger a burst by striking nearby objects—a gas cloud, a small black hole, or a hypothetical quark star. Or the bursts might come from more speculative phenomena, such as lightning strikes in the atmospheres of neutron stars or the interaction of hypothetical dark matter particles called axions with black holes or neutron stars. Amanda Weltman, a theorist at the University of Cape Town in South Africa, does not discount even more fanciful ideas such as cosmic strings, hypothetical threadlike defects in the vacuum of space left over from the moments after the big bang. They “could be releasing fast radio bursts in a number of ways,” she says.
But as the number of detected FRBs moved from single digits into dozens, astronomers realized the bursts could be downright common, detectable by the thousands every day if the right telescopes were watching. “That’s a serious problem for a lot of models,” Berger says.
FRB 121102, the first repeating event detected, may be the most revealing FRB so far. The Arecibo telescope in Puerto Rico saw its first burst in 2012, but since then dozens more have been seen coming from that spot on the sky. In 2017, the 27-dish Karl G. Jansky Very Large Array in New Mexico revealed the FRB resides in the outskirts of a distant dwarf galaxy and that the location coincides with a weak but persistent radio source. That dim radio glow may emanate from a supernova remnant—an expanding ball of gas from a stellar explosion, which could have formed a black hole or neutron star that powers the FRB. In another clue, the polarization of the FRB’s radio waves rotates rapidly, suggesting they emanate from a strong magnetic environment.
Brian Metzger, a theorist at Columbia University, believes a young magnetar—a highly magnetized neutron star—resides at the center of the cloud and powers the bursts. In a scenario developed with his colleagues, its magnetic field serves as a vast store of energy that occasionally flares, blasting out a shell of electrons and ions at nearly the speed of light—an outburst resembling a coronal mass ejection from our sun, but on steroids. When the flare hits ion clouds left over from previous flares, the resulting shock wave boosts the strength of the clouds’ magnetic field lines and causes electrons to spiral around them in concert. Just as synchrotrons on Earth whip electrons around racetracks to emit useful x-rays, those gyrations spawn a coherent pulse of radio waves.
Magnetars are often invoked to explain such energetic events, Metzger says. “They’re a catch-all for anything we don’t understand. But here it’s kind of warranted.” CHIME team member Shriharsh Tendulkar of McGill wonders whether objects such as magnetars could explain both repeaters and single-burst FRBs. Single-burst FRBs might “start out regular as repeaters, then slow as [the source’s] magnetic field weakens,” he says.
But according to Weltman, it’s too early to declare the mystery solved. “There are so many clues here, but they do not yet point to a single conclusive theoretical explanation,” she says.
Knowledge in numbers
As observers amass new FRBs, different classes of events may emerge, perhaps offering clues about what triggers them. FRBs may also turn out to come from specific types of galaxies—or regions within galaxies—which could allow theorists to distinguish between active galactic nuclei and other compact objects as the sources. “We need statistics and we need context,” Metzger says.
In the coming years, other FRB spotters will come online, including the Hydrogen Intensity and Realtime Analysis eXperiment in South Africa and the Deep Synoptic Array in California. With their widely spaced arrays of dishes, both facilities will precisely locate FRBs on the sky—something CHIME can’t do for now. “They’re all going for localization because they know CHIME will clean up on statistics,” Scholz says.
The CHIME team, not to be outdone, is drawing up a proposal to add outriggers, smaller troughs at distances of hundreds of kilometers, which will record the same events from a different angle and so help researchers pinpoint them. “With all these new efforts, there’ll be substantial progress in the next few years,” Metzger says.
For now, as CHIME’s commissioning phase winds down, Milutinovic’s job is to ensure that it keeps doing its job. “You want it to be boring,” he says. “It’s the weather that gives us most issues”—snow on the troughs, summer heat waves that tax the cooling system for the electronics. Then there’s the grass, a wildfire risk. Every summer, the observatory invites ranchers to graze their cattle on-site—not only to be neighborly, but also because cows emit less radio frequency interference than a lawn mower. But they can’t graze right around CHIME because they might chew on cables. So Milutinovic relies on diesel-powered mowers, which, lacking spark plugs, pose less of an interference problem.
But he longs for an even better high-resolution grass-cutting tool. “We thought of having a CHIME goat.”
BGP blackhole filtering is a routing technique used to drop unwanted traffic. Black holes are placed in the parts of a network where unwanted traffic should be dropped. For example, a customer can ask a provider to install a black hole on its provider edge (PE) routers to prevent unwanted traffic from entering the customer’s network.
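A black hole is simply a route whose next hop is a null interface: packets matching the prefix are discarded rather than forwarded. A minimal sketch of the lookup logic, using documentation prefixes and an invented forwarding table (not any real router's configuration):

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop, where None stands for
# a null route (the black hole). All addresses are documentation examples.
routes = {
    ipaddress.ip_network("198.51.100.0/24"): "10.0.0.1",   # normal route
    ipaddress.ip_network("203.0.113.0/24"): None,          # blackholed
}

def forward(dst):
    """Return the forwarding decision for a destination address."""
    addr = ipaddress.ip_address(dst)
    for prefix, next_hop in routes.items():
        if addr in prefix:
            return "DROP" if next_hop is None else f"forward via {next_hop}"
    return "no route"

print(forward("198.51.100.7"))   # forward via 10.0.0.1
print(forward("203.0.113.9"))    # DROP — traffic to the blackholed prefix dies here
```

On real hardware the same effect is achieved by pointing the prefix at a null interface, so the drop happens in the forwarding plane at line rate.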
Routing internet traffic around the world relies on the border gateway protocol (BGP), which manages how traffic is routed across the internet. BGP relies on trust between network operators not to send incorrect or malicious data. But mistakes happen, and malformed data can form a “route leak” that leads to confusion over where internet traffic should go, and can lead to massive outages.
In a BGP route leak, the routing announcements from an autonomous system that guide traffic to its destination are inaccurate or propagate beyond their intended scope, misleading the sender, the receiver, or an intermediary along the route the packet is supposed to travel.
“At approximately 12:52PM EST on March 13th, 2019, it appears that an accidental BGP routing leak from a European ISP to a major transit ISP, which was then propagated onwards to some peers and/or downstreams of the transit ISP in question, resulted in perceptible disruption of access to some well-known Internet properties for a short interval,” explained Roland Dobbins, a NETSCOUT principal engineer in an email to TechCrunch.
“However, BGP is usually a static protocol, meaning that once it’s set up it rarely changes. More likely a cause of this nature would be due to a mistake in programmatic automation and various health checks that they perform to ensure optimal functionality for users. If I had to conjecture, I would suspect that the outage today was likely due to a flaw in the code that controls such functions on a high level, business-wise. Consider that the impact was across several Facebook-owned services; therefore the likelihood of them trying to be efficient in their code and its centralization for many services is more likely the root cause,” Thomas wrote.
BGP is, quite literally, the protocol that makes the internet work. BGP is short for Border Gateway Protocol and it is the routing protocol used to route traffic across the internet. Routing Protocols (such as BGP, OSPF, RIP, EIGRP, etc…) are designed to help routers advertise adjacent networks and since the internet is a network of networks, BGP helps to propagate these networks to all BGP Routers across the world.
BGP is defined by the IETF in RFC 4271 (2006), which specifies the current version, BGP-4. BGP is an application-layer protocol that runs over TCP (port 179): peers have to be manually configured to form a TCP connection and begin speaking BGP to exchange routing information.
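Once the TCP connection is up, the first message each peer sends is an OPEN. RFC 4271 fixes the wire format: a 19-byte header (16-byte all-ones marker, 2-byte length, 1-byte type) followed by the OPEN body. A sketch that packs a minimal OPEN; the AS number, hold time, and router ID below are illustrative private/documentation values:

```python
import struct

def bgp_open(my_as, hold_time, bgp_id):
    """Build a minimal BGP OPEN message per RFC 4271. The body carries
    version (4), a 2-byte AS number, the hold time in seconds, a 4-byte
    BGP identifier, and a zero optional-parameters length."""
    body = struct.pack(
        "!BHH4sB",
        4,                                    # BGP version 4
        my_as,                                # 2-byte AS number
        hold_time,                            # hold time, seconds
        bytes(map(int, bgp_id.split("."))),   # BGP identifier (router ID)
        0,                                    # no optional parameters
    )
    length = 19 + len(body)
    # Header: 16-byte marker of all ones, 2-byte length, 1-byte type (1 = OPEN).
    header = b"\xff" * 16 + struct.pack("!HB", length, 1)
    return header + body

msg = bgp_open(my_as=64512, hold_time=180, bgp_id="192.0.2.1")
print(len(msg))   # 29 — the minimum OPEN message length
```

After exchanging OPENs, peers confirm with KEEPALIVE messages and then start sending UPDATEs carrying the actual routes.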
Within the Internet, an autonomous system (AS) is a network controlled by a single entity, typically an Internet Service Provider or a very large organization with independent connections to multiple networks.
These Autonomous Systems must have an officially registered autonomous system number (ASN), which they get from their Regional Internet Registry: AFRINIC, ARIN, APNIC, LACNIC or RIPE NCC.
Each AS is allocated a unique ASN for use in BGP routing; the ASN uniquely identifies that network on the Internet.
Two routers that have established a connection for exchanging BGP information are referred to as BGP peers. BGP peers exchange routing information via BGP sessions that run over TCP, a reliable, connection-oriented transport protocol.
Once the BGP session is established, the routers advertise the network routes they can reach, and each router runs best-path selection on the candidates, preferring (among other attributes) the route with the shortest AS path.
Of course, BGP does not make much sense when you are connected to only one other peer (such as your ISP), because that peer is always going to be the best (and only) path to other networks. However, when you are connected to multiple networks at the same time, certain paths will be shorter, faster, or more reliable than others. For example, Google’s AS15169 peers with 270 other networks (autonomous systems), one of which is Digital Ocean Inc. (AS14061). Both are also connected to other ISPs for internet reachability, but because they have peered directly, they exchange routing information and their routers can choose the shorter path between themselves. If that peering is broken for one reason or another, their routers can rearrange their routing tables to reach each other through other autonomous systems, such as Tier 1 ISPs: Cogent (AS174), TeliaSonera (AS1299), Level 3 (AS1, AS3356, AS3549), NTT (AS2914), AT&T (AS7018), etc.
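The "shorter path wins" step can be sketched as a comparison of AS-path lengths. Real BGP best-path selection weighs several attributes (local preference, origin, MED, and more) before AS-path length; this toy uses ASNs from the example above with invented paths:

```python
# Hypothetical candidate routes to the same prefix, expressed as AS paths.
# ASNs match the examples in the text (Cogent 174, TeliaSonera 1299,
# Digital Ocean 14061); the paths themselves are invented.
candidate_paths = {
    "via Digital Ocean peering": [14061],             # direct peer: 1 AS hop
    "via one Tier 1 transit":    [174, 14061],        # Cogent, then Digital Ocean
    "via two transits":          [1299, 174, 14061],
}

# All else being equal, BGP prefers the route with the shortest AS path.
best = min(candidate_paths, key=lambda name: len(candidate_paths[name]))
print(best)   # via Digital Ocean peering
```

If the direct peering were withdrawn, that entry would disappear and the same comparison would fall back to the transit path, which is exactly the rearranging of routing tables described above.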
Misconfiguring or Abusing BGP
Since BGP is at the absolute core of the internet, when it is misconfigured or abused it can cause havoc across large portions of the internet.
For example, in 2008, when the Pakistan Government tried to ban YouTube, Pakistan Telecom (AS17557) used BGP to route YouTube’s address block (announced by AS36561) into a black hole. Accidentally (allegedly), this routing information somehow got transmitted to Pakistan Telecom’s upstream IP transit provider, PCCW Global (AS3491), which then propagated it to the rest of the world. Most of YouTube’s traffic from across the world ended up in a black hole in Pakistan. In this video, RIPE NCC Chief Scientist Daniel Karrenberg (@limbodan) uses BGPlay to replay the events in the BGP Routing Table during the YouTube outage.
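The hijack spread because routers forward along the most specific (longest) matching prefix, so a leaked /24 beats a legitimate /22 everywhere it is heard. A sketch using the prefixes widely reported for the incident (treat them as illustrative):

```python
import ipaddress

# Announcements as widely reported for the 2008 incident: YouTube's /22
# and the more-specific /24 leaked by Pakistan Telecom.
announcements = {
    ipaddress.ip_network("208.65.152.0/22"): "AS36561 (YouTube)",
    ipaddress.ip_network("208.65.153.0/24"): "AS17557 (Pakistan Telecom)",
}

def best_match(dst):
    """Forward along the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in announcements if addr in p]
    return announcements[max(matches, key=lambda p: p.prefixlen)]

print(best_match("208.65.153.10"))   # the /24 wins: traffic goes to Pakistan
print(best_match("208.65.152.10"))   # outside the /24: still reaches YouTube
```

One common mitigation — YouTube itself used it during the incident — is to announce even more specific prefixes of your own so they out-compete the hijack, while upstreams filter the bogus route.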
Some other famous incidents are: AS7007 incident, Brazilian carrier leaks BGP table and Turkish ISP takes over the Internet.
Apart from misconfiguration, BGP can also be abused for malicious purposes. By taking advantage of unsecured BGP peerings, or of peers that do not verify the routes announced to them, attackers may announce IP ranges they do not actually own, routing internet traffic toward their own links and essentially creating a man-in-the-middle (MITM) attack. For more information, I suggest you read Wired’s blog post Revealed: The Internet’s Biggest Security Hole and the post from BGPmon: BGP Routing Incidents in 2014, malicious or not?
As businesses grow, however, they will start requiring BGP connectivity (any customer who wants to achieve truly redundant Internet access has to have its own AS and exchange BGP information with its ISPs), and you’ll be forced to deploy BGP on more and more core and edge routers.
This short introduction to BGP should be enough for you to understand the basics of what BGP is and how it works, but it is by no means a good idea to operate it in a production environment until you have spent some time reading the RFCs.