Siddon’s Algorithm

Fast calculation of the exact radiological path for a 3D CT array

Robert L. Siddon

ABSTRACT

Ready availability has prompted the use of computed tomography (CT) data in various applications in radiation therapy. For example, some radiation treatment planning systems now utilize CT data in heterogeneous dose calculation algorithms. In radiotherapy imaging applications, CT data are projected onto specified planes, thus producing radiographs, which are compared with simulator radiographs to assist in proper patient positioning and delineation of target volumes. All these applications share the common geometric problem of evaluating the radiological path through the CT array. Due to the complexity of the three-dimensional geometry and the enormous amount of CT data, the exact evaluation of the radiological path has proven to be a time-consuming and difficult problem. This paper identifies the inefficient aspect of the traditional exact evaluation of the radiological path as that of treating the CT data as individual voxels. Rather than individual voxels, a new exact algorithm is presented that considers the CT data as consisting of the intersection volumes of three orthogonal sets of equally spaced, parallel planes. For a three-dimensional CT array of N³ voxels, the new exact algorithm scales with 3N, the number of planes, rather than N³, the number of voxels. Coded in FORTRAN-77 on a VAX 11/780 with a floating point option, the algorithm requires approximately 5 ms to calculate an average radiological path in a 100³ voxel array.

INTRODUCTION

In radiation therapy applications, computed tomography (CT) data are utilized in various dose calculation and imaging algorithms. For example, some radiation treatment planning systems now utilize two-dimensional CT data for pixel-based heterogeneous dose calculations. Other systems forward project three-dimensional CT data onto specified planes, thus forming radiographs, which are compared with simulator radiographs to assist in proper patient positioning and delineation of target volumes. All such applications, whether in inhomogeneity calculations or imaging applications, essentially reduce to the same geometric problem: that of calculating the radiological path for a specified ray through the CT array.

Although very simple in principle, elaborate computer algorithms and a significant amount of computer time are required to evaluate the exact radiological path. The amount of detail involved was recently emphasized by Harauz and Ottensmeyer, who stated that even for the two-dimensional case, their algorithm for calculating the exact radiological path grew more and more unwieldy and time-consuming, while remaining unreliable. For three dimensions, they concluded that determining the exact radiological path is not viable. This paper describes an exact, efficient, and reliable algorithm for calculating the radiological path through a three-dimensional CT array.

Denoting a particular voxel density as ρ(i,j,k) and the length contained by that voxel as l(i,j,k), the radiological path may be written as
(1) d = Σi Σj Σk l(i,j,k) ρ(i,j,k)
Direct evaluation of Eq.(1) entails an algorithm which scales with the number of terms in the sums, that is, the number of voxels in the CT array. The following describes an algorithm that scales with the sum of linear dimensions of the CT array.

METHOD

Rather than independent elements, the voxels are considered as the intersection volumes of orthogonal sets of equally spaced, parallel planes. Without loss of generality, Fig. 1 illustrates the two-dimensional case, where pixels are considered as the intersection areas of orthogonal sets of equally spaced, parallel lines. The intersections of the ray with the lines are calculated, rather than intersections of the ray with the individual pixels.

Fig. 1. The pixels of the CT array (left) may be considered as the intersection areas of orthogonal sets of equally spaced, parallel lines (right). The intersections of the ray with the pixels are a subset of the intersections of the ray with the lines. The intersections of the ray with the lines are given by two equally spaced sets: one set for the horizontal lines (filled circles) and one set for the vertical lines (open circles). The generalization to a three-dimensional CT array is straightforward.

Determining the intersections of the ray with the equally spaced, parallel lines is a particularly simple problem. As the lines are equally spaced, it is only necessary to determine the first intersection and generate all the others by recursion. As shown in the right illustration of Fig. 1, the intersections consist of two sets: one for the intersections with the horizontal lines (filled circles) and one for the intersections with the vertical lines (open circles). Comparing the left and right illustrations of Fig. 1, it is clear that the intersections of the ray with the pixels are a subset of the intersections with the lines. Identifying that subset allows the radiological path to be determined. The extension to the three-dimensional CT array is straightforward.

The ray from point 1 to point 2 may be represented parametrically as
(2) X(α) = X1+α(X2-X1), Y(α) = Y1+α(Y2-Y1), Z(α) = Z1+α(Z2-Z1)
where the parameter α is zero at point 1 and unity at point 2. The intersections of the ray with the sides of the CT array are shown in Fig. 2.
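As a minimal illustration of Eq. (2), the ray can be evaluated at any parametric value as follows (a Python sketch; the paper's original implementation was FORTRAN-77, and the function name here is illustrative only):

    def ray_point(p1, p2, alpha):
        # Point on the ray at parameter alpha (Eq. 2): alpha = 0 gives point 1, alpha = 1 gives point 2.
        (x1, y1, z1), (x2, y2, z2) = p1, p2
        return (x1 + alpha * (x2 - x1),
                y1 + alpha * (y2 - y1),
                z1 + alpha * (z2 - z1))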

Fig. 2. The quantities αmin and αmax define the allowed range of parametric values for the intersections of the ray with the sides of the CT array: (a) both 1 and 2 outside the array, (b) 1 inside and 2 outside, (c) 1 outside and 2 inside, and (d) 1 inside and 2 inside.

If both points 1 and 2 are outside the array [Fig. 2(a)], then the parametric values corresponding to the two intersection points of the ray with the sides are given by αmin and αmax. All intersections of the ray with individual planes must have parametric values which lie in the range (αmin, αmax). For the case illustrated in Fig. 2(b), where point 1 is inside the array, the value of αmin is zero. Likewise, for Fig. 2(c), if point 2 is inside, then αmax is one. If both points 1 and 2 are inside the array [Fig. 2(d)], then αmin is zero and αmax is one. The solution to the intersection of the ray with the CT voxels follows immediately: Determine the parametric intersection values, in the range (αmin, αmax), of the ray with each orthogonal set of equally spaced, parallel planes. Merge the three sets of parametric values into one set; for example, merging the sets (1,4,7), (2,5,8), and (3,6,9) results in the merged set (1,2,3,4,5,6,7,8,9). The length of the ray contained by a particular voxel, in units of the ray length, is simply the difference between two adjacent parametric values in the merged set. For each voxel intersection length, the corresponding voxel indices are obtained, and the products of length and density are summed over all intersections to yield the radiological path. A more detailed description of the algorithm is given in the following section.
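The merge-and-sum step described above can be sketched as follows (Python; rho_of is a placeholder for the voxel-index lookup detailed in the next section and is not part of the paper's notation):

    import math

    def radiological_path_from_alphas(alpha_sets, p1, p2, rho_of):
        # Merge the parametric sets for the x, y, and z planes and sum length * density (Eq. 1).
        (x1, y1, z1), (x2, y2, z2) = p1, p2
        d12 = math.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)  # total ray length
        alphas = sorted(set(a for s in alpha_sets for a in s))       # merged parametric set
        d = 0.0
        for a_prev, a_next in zip(alphas, alphas[1:]):
            length = (a_next - a_prev) * d12      # intersection length within one voxel
            a_mid = 0.5 * (a_prev + a_next)       # midpoint identifies which voxel was crossed
            d += length * rho_of(a_mid)
        return d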

ALGORITHM

For a CT array of (Nx-1) × (Ny-1) × (Nz-1) voxels, the orthogonal sets of equally spaced, parallel planes may be written as

(3) Xp(i) = Xp(0) + i·dx, Yp(j) = Yp(0) + j·dy, Zp(k) = Zp(0) + k·dz,

where i, j, and k are integers and dx, dy, and dz are the distances between the x, y, and z planes, respectively. The quantities dx, dy, and dz are also the lengths of the sides of the voxel. The parametric values αmin and αmax are obtained by intersecting the ray with the sides of the CT array. As shown in Fig. 2(d), both 1 and 2 are assumed to be located on one of the equally spaced, parallel planes. This is also assumed to be the case if 1 and/or 2 are located outside the CT array. From Eqs. (2) and (3), the parametric values corresponding to the sides are given by the following:

(4) If (X2-X1) ≠ 0: αx(0) = [Xp(0) - X1]/(X2-X1), αx(Nx-1) = [Xp(Nx-1) - X1]/(X2-X1),

with similar expressions for αy(0), αy(Ny-1), αz(0), and αz(Nz-1). If the denominator (X2-X1) in Eq. (4) is equal to zero, then the ray is perpendicular to the x axis and the corresponding values of αx are undefined; similarly for αy and αz. If the αx, αy, or αz values are undefined, then those values are simply excluded from all the following discussion.
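A sketch of Eqs. (3) and (4) in Python, with undefined parametric values represented as None when the ray is perpendicular to an axis (function and argument names are illustrative):

    def plane_position(p0, spacing, i):
        # Position of the i-th plane in one orthogonal set, e.g. Xp(i) = Xp(0) + i*dx (Eq. 3).
        return p0 + i * spacing

    def boundary_alphas(p_first, p_last, r1, r2):
        # Parametric values of the ray at the first and last plane of one set (Eq. 4).
        # Returns (None, None) if the ray is perpendicular to this axis (r2 == r1).
        if r2 == r1:
            return (None, None)
        return ((p_first - r1) / (r2 - r1), (p_last - r1) / (r2 - r1))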
In terms of the parametric values given above, the quantities αmin and αmax are given by

(5.1) αmin = max{0, min[αx(0),αx(Nx-1)], min[αy(0),αy(Ny-1)], min[αz(0),αz(Nz-1)]},
(5.2) αmax = min{1, max[αx(0),αx(Nx-1)], max[αy(0),αy(Ny-1)], max[αz(0),αz(Nz-1)]},

where the functions min and max select the minimum and maximum terms, respectively, from their argument lists. If αmax is less than or equal to αmin, then the ray does not intersect the CT array.
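In code, αmin and αmax follow directly from Eq. (5), skipping any undefined values (a sketch, continuing the helpers above):

    def alpha_range(boundary_pairs):
        # boundary_pairs: [(ax(0), ax(Nx-1)), (ay(0), ay(Ny-1)), (az(0), az(Nz-1))] from Eq. (4);
        # pairs that are (None, None) are excluded. The ray misses the array if alpha_max <= alpha_min.
        mins, maxs = [0.0], [1.0]
        for a_first, a_last in boundary_pairs:
            if a_first is None:
                continue
            mins.append(min(a_first, a_last))
            maxs.append(max(a_first, a_last))
        return max(mins), min(maxs)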
Of all the intersected planes, only certain ones have parametric values in the range (αmin, αmax). From Eqs. (2), (3), and (5), one obtains the ranges of indices (imin,imax), (jmin,jmax), and (kmin,kmax) corresponding to these particular planes.
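Putting the pieces together, a complete path calculation along the lines described above might look as follows (a Python sketch rather than the original FORTRAN-77; the voxel indices are found here with a simple floor-based lookup at the midpoint of each intersection interval, standing in for the index expressions the paper derives, so this illustrates the approach rather than reproducing the paper's exact formulas):

    import math

    def siddon_radiological_path(p1, p2, rho, x0, y0, z0, dx, dy, dz):
        # Radiological path through a CT array rho[i][j][k] of (Nx-1) x (Ny-1) x (Nz-1) voxels
        # whose planes are Xp(i) = x0 + i*dx, Yp(j) = y0 + j*dy, Zp(k) = z0 + k*dz.
        nx, ny, nz = len(rho) + 1, len(rho[0]) + 1, len(rho[0][0]) + 1   # numbers of planes
        (x1, y1, z1), (x2, y2, z2) = p1, p2
        d12 = math.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

        # Eq. (4): parametric values at the first and last plane of each orthogonal set.
        def boundary(p_first, p_last, r1, r2):
            return None if r2 == r1 else ((p_first - r1) / (r2 - r1), (p_last - r1) / (r2 - r1))

        pairs = [boundary(x0, x0 + (nx - 1) * dx, x1, x2),
                 boundary(y0, y0 + (ny - 1) * dy, y1, y2),
                 boundary(z0, z0 + (nz - 1) * dz, z1, z2)]

        # Eq. (5): allowed parametric range.
        a_min = max([0.0] + [min(p) for p in pairs if p is not None])
        a_max = min([1.0] + [max(p) for p in pairs if p is not None])
        if a_max <= a_min:
            return 0.0   # the ray does not intersect the CT array

        # Parametric values of the plane crossings that fall inside (a_min, a_max), one set per axis.
        def crossings(p0, spacing, n_planes, r1, r2):
            if r2 == r1:
                return []
            vals = []
            for i in range(n_planes):
                a = (p0 + i * spacing - r1) / (r2 - r1)
                if a_min < a < a_max:
                    vals.append(a)
            return vals

        merged = sorted({a_min, a_max}
                        | set(crossings(x0, dx, nx, x1, x2))
                        | set(crossings(y0, dy, ny, y1, y2))
                        | set(crossings(z0, dz, nz, z1, z2)))

        # Sum length * density over adjacent parametric values (Eq. 1).
        d = 0.0
        for a, b in zip(merged, merged[1:]):
            m = 0.5 * (a + b)                                  # midpoint lies inside a single voxel
            i = int((x1 + m * (x2 - x1) - x0) / dx)
            j = int((y1 + m * (y2 - y1) - y0) / dy)
            k = int((z1 + m * (z2 - z1) - z0) / dz)
            d += (b - a) * d12 * rho[i][j][k]
        return d

Because the crossings are generated by stepping through at most Nx + Ny + Nz planes, the work per ray scales with the sum of the linear dimensions of the array, consistent with the 3N scaling claimed in the abstract.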

Light from the vacuum

Physicists predict a way to squeeze light from the vacuum of empty space

By Adrian Cho

Talk about getting something for nothing. Physicists predict that just by shooting charged particles through an electromagnetic field, it should be possible to generate light from the empty vacuum. In principle, the effect could provide a new way to test the fundamental theory of electricity and magnetism, known as quantum electrodynamics, the most precise theory in all of science. In practice, spotting the effect would require lasers and particle accelerators far more powerful than any that exist now.

“I’m quite confident about [the prediction] simply because it combines effects that we understand pretty well,” says Ben King, a laser-particle physicist at the University of Plymouth in the United Kingdom, who was not involved in the new analysis. Still, he says, an experimental demonstration “is something for the future.”

Physicists have long known that energetic charged particles can radiate light when they zip through a transparent medium such as water or a gas. In the medium, light travels slower than it does in empty space, allowing a particle such as an electron or proton to potentially fly faster than light. When that happens, the particle generates an electromagnetic shockwave, just as a supersonic jet creates a shockwave in air. But whereas the jet’s shockwave creates a sonic boom, the electromagnetic shockwave creates light called Cherenkov radiation. That effect causes the water in the cores of nuclear reactors to glow blue, and it’s been used to make particle detectors.

However, it should be possible to ditch the material and produce Cherenkov light straight from the vacuum, predict Dino Jaroszynski, a physicist at the University of Strathclyde in Glasgow, U.K., and colleagues. The trick is to shoot the particles through an extremely intense electromagnetic field instead.

According to quantum theory, the vacuum roils with particle-antiparticle pairs flitting in and out of existence too quickly to observe directly. The application of a strong electromagnetic field can polarize those pairs, however, pushing positive and negative particles in opposite directions. Passing photons then interact with the not-quite-there pairs so that the polarized vacuum acts a bit like a transparent medium in which light travels slightly slower than in an ordinary vacuum, Jaroszynski and colleagues calculate.

Putting two and two together, an energetic charged particle passing through a sufficiently strong electromagnetic field should produce Cherenkov radiation, the team reports in a paper in press at Physical Review Letters. Others had suggested vacuum Cherenkov radiation should exist in certain situations, but the new work takes a more fundamental and all-encompassing approach, says Adam Noble, a physicist at Strathclyde.

Spotting vacuum Cherenkov radiation would be tough. First, the polarized vacuum slows light by a tiny amount. The electromagnetic fields in the strongest pulses of laser light reduce light’s speed by about a millionth of a percent, Noble estimates. In comparison, water reduces light’s speed by 25%. Second, charged particles in an electromagnetic field spiral and emit another kind of light called synchrotron radiation that, in most circumstances, should swamp the Cherenkov radiation.

Still, in principle, it should be possible to produce vacuum Cherenkov radiation by firing high-energy electrons or protons through overlapping pulses from the world’s highest intensity lasers, which can pack a petawatt, or 10¹⁵ watts, of power. However, Jaroszynski and colleagues calculate that in such fields, even particles from the world’s highest energy accelerators would produce much more synchrotron radiation than Cherenkov radiation.

Space could be another place to look for the effect. Extremely high-energy protons passing through the intense magnetic field of a spinning neutron star—also known as a pulsar—should produce more Cherenkov radiation than synchrotron radiation, the researchers predict. However, pulsars don’t produce many high-energy protons, says Alice Harding, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and the particles that do enter a pulsar’s magnetic field should quickly lose energy and spiral instead of zipping across it. “I’m not terribly excited about the prospect for pulsars,” she says.

Nevertheless, King says, experimenters might see the effect someday. Physicists in Europe are building a trio of 10 petawatt lasers in Romania, Hungary, and the Czech Republic, and their counterparts in China are developing a 100 petawatt laser. Scientists are also trying to create compact laser-driven accelerators that might produce highly energetic particle beams far more cheaply. If those things come together, physicists might be able to spot vacuum Cherenkov radiation, King says.

Others are devising different ways to use high-power lasers to probe the polarized vacuum. The ultimate aim of such work is to test quantum electrodynamics in new ways, King says. Experimenters have confirmed the theory’s predictions are accurate to within a few parts in a billion. But the theory has never been tested in the realm of extremely strong fields, King says, and such tests are now becoming possible. “The future of this field is quite exciting.”

 

Rivers raged on Mars

UChicago scientists find substantial runoff fed rivers for more than a billion years

Long ago on Mars, water carved deep riverbeds into the planet’s surface—but we still don’t know what kind of weather fed them. Scientists aren’t sure, because their understanding of the Martian climate billions of years ago remains incomplete.

A new study by University of Chicago scientists catalogued these rivers to conclude that significant river runoff persisted on Mars later into its history than previously thought. According to the study, published March 27 in Science Advances, the runoff was intense—rivers on Mars were wider than those on Earth today—and occurred at hundreds of locations on the red planet.

This complicates the picture for scientists trying to model the ancient Martian climate, said lead study author Edwin Kite, assistant professor of geophysical sciences and an expert in both the history of Mars and climates of other worlds. “It’s already hard to explain rivers or lakes based on the information we have,” he said. “This makes a difficult problem even more difficult.”

But, he said, the constraints could be useful in winnowing the many theories researchers have proposed to explain the climate.

Mars is crisscrossed with the distinctive tracks of long-dead rivers. NASA’s spacecraft have taken photos of hundreds of these rivers from orbit, and when the Mars rover Curiosity landed in 2012, it sent back images of pebbles that were rounded—tumbled for a long time in the bottom of a river.

But it’s a puzzle why ancient Mars had liquid water. Mars has an extremely thin atmosphere today, and early in the planet’s history, it was also only receiving a third of the sunlight of present-day Earth, which shouldn’t be enough heat to maintain liquid water. “Indeed, even on ancient Mars, when it was wet enough for rivers some of the time, the rest of the data looks like Mars was extremely cold and dry most of the time,” Kite said.

Seeking a better understanding of Martian precipitation, Kite and his colleagues analyzed photographs and elevation models for more than 200 ancient Martian riverbeds spanning over a billion years. These riverbeds are a rich source of clues about the water running through them and the climate that produced it. For example, the width and steepness of the riverbeds and the size of the gravel tell scientists about the force of the water flow, and the quantity of the gravel constrains the volume of water coming through.

Their analysis shows clear evidence for persistent, strong runoff that occurred well into the last stage of the wet climate, Kite said.

The results provide guidance for those trying to reconstruct the Martian climate, Kite said. For example, the size of the rivers implies the water was flowing continuously, not just at high noon, so climate modelers need to account for a strong greenhouse effect to keep the planet warm enough for average daytime temperatures above the freezing point of water.

The rivers also show strong flow up to the last geological minute before the wet climate dries up. “You would expect them to wane gradually over time, but that’s not what we see,” Kite said. The rivers get shorter—hundreds of kilometers rather than thousands—but discharge is still strong. “The wettest day of the year is still very wet.”

It’s possible the climate had a sort of “on/off” switch, Kite speculated, which tipped back and forth between dry and wet cycles.

“Our work answers some existing questions but raises a new one. Which is wrong: the climate models, the atmosphere evolution models, or our basic understanding of inner solar system chronology?” he said.

UChicago Planetary GIS/Data Specialist David Mayer, now at the United States Geological Survey Astrogeology Program, and then-visiting student Gaia Stucky de Quay from Imperial College London co-authored the study, along with scientists from the Smithsonian Institution, the Natural History Museum in London and the Centre National de la Recherche Scientifique in Paris. The study used University of Chicago Research Computing Center resources.

Citation: “Persistence of intense, climate-driven runoff late in Mars history.” Kite et al., Science Advances, March 27, 2019. DOI: 10.1126/sciadv.aav7710

Persistence of intense, climate-driven runoff late in Mars history

Abstract

Mars is dry today, but numerous precipitation-fed paleo-rivers are found across the planet’s surface. These rivers’ existence is a challenge to models of planetary climate evolution. We report results indicating that, for a given catchment area, rivers on Mars were wider than rivers on Earth today. We use the scale (width and wavelength) of Mars paleo-rivers as a proxy for past runoff production. Using multiple methods, we infer that intense runoff production of >(3–20) kg/m² per day persisted until <3 billion years (Ga) ago and probably <1 Ga ago, and was globally distributed. Therefore, the intense runoff production inferred from the results of the Mars Science Laboratory rover was not a short-lived or local anomaly. Rather, precipitation-fed runoff production was globally distributed, was intense, and persisted intermittently over >1 Ga. Our improved history of Mars’ river runoff places new constraints on the unknown mechanism that caused wet climates on Mars.

 

Rocket Lab launch with R3D2

Rocket Lab launch with DARPA’s R3D2 satellite rescheduled for Thursday

Stephen Clark (March 25, 2019)

Rocket Lab’s launch team canceled a launch attempt Sunday in New Zealand after discovering a misbehaving video transmitter on the Electron booster set to loft a small U.S. military satellite into orbit to test an innovative antenna design. After replacing the transmitter, Rocket Lab announced the launch is set for Thursday (U.S. time) to wait for better weather.

The U.S.-New Zealand launch company announced the initial delay as the Electron counted down to a targeted liftoff time of 7:36 p.m. EDT (2336 GMT) Sunday.

“The team has identified a video transmitter 13dB down with low performance,” Rocket Lab tweeted. “It’s not an issue for flight, but we want to understand why, so we’re waiving off for the day.”

Peter Beck, Rocket Lab’s founder and CEO, added that the rocket was “technically good to fly, as we have redundant links, but we don’t know why the performance dropped and that makes me uncomfortable.”

In an update a few hours later, Rocket Lab said crews aimed to replace the suspect video transmitter in time for a second launch attempt Tuesday (U.S. time). But the company announced further delays Monday and Tuesday to wait for lighter winds and improved weather at the New Zealand launch base.

Rocket Lab also cited concerns that safety cutouts to avoid a collision with another object already in orbit would limit launch opportunities during a planned launch window Wednesday.

Liftoff of the Electron rocket is now scheduled for a four-hour window opening at 6:30 p.m. EDT (2230 GMT) Thursday. In New Zealand, the launch window will open at 11:30 a.m. local time Friday.

The two-stage Electron rocket will launch the Defense Advanced Research Projects Agency’s Radio Frequency Risk Reduction Deployment Demonstration satellite into a 264-mile-high (425-kilometer) orbit, where the spacecraft will unfurl an antenna in an experiment to demonstrate how expandable reflector arrays could be stowed into volumes tight enough to fit on a small, relatively inexpensive rocket.

Artist’s illustration of DARPA’s R3D2 satellite. Credit: Northrop Grumman

Known as R3D2, the $25 million satellite carries an antenna made of tissue-thin Kapton material, which will open to a diameter of nearly 7.4 feet (2.25 meters) in orbit.

The spacecraft was integrated by Northrop Grumman. R3D2’s spacecraft bus was provided by Blue Canyon Technologies of Boulder, Colorado, and the antenna was designed and built by MMA Designs in Louisville, Colorado.

The R3D2 mission is Rocket Lab’s first launch for the U.S. Defense Department, which is examining how to field more small satellites to provide the military with communications, battlefield surveillance, and other services, making the military’s spacecraft fleets more resilient to attacks and less expensive to design, build and launch.

Rocket Lab’s Electron rocket has launched four times since May 2017. After the Electron’s inaugural test flight fell short of orbit due to a ground tracking error, Rocket Lab has logged three straight successes delivering CubeSats to orbit for NASA, commercial companies and educational institutions.

Once it lifts off, the Electron rocket will head east from Rocket Lab’s privately-operated launch base on New Zealand’s North Island. The R3D2 satellite is set to deploy from the Electron’s upper stage around 53 minutes into the flight, kicking off the spacecraft’s planned six-month tech demo mission.

 

83 supermassive black holes

Astronomers discover 83 supermassive black holes in the early universe

Astronomers from Japan, Taiwan and Princeton University have discovered 83 quasars powered by supermassive black holes in the distant universe, from a time when the universe was less than 10 percent of its present age.

“It is remarkable that such massive dense objects were able to form so soon after the Big Bang,” said Michael Strauss, a professor of astrophysical sciences at Princeton University who is one of the co-authors of the study. “Understanding how black holes can form in the early universe, and just how common they are, is a challenge for our cosmological models.”

This finding increases the number of black holes known at that epoch considerably, and reveals, for the first time, how common they are early in the universe’s history. In addition, it provides new insight into the effect of black holes on the physical state of gas in the early universe in its first billion years. The research appears in a series of five papers published in The Astrophysical Journal and the Publications of the Astronomical Observatory of Japan.

Supermassive black holes, found at the centers of galaxies, can be millions or even billions of times more massive than the sun. While they are prevalent today, it is unclear when they first formed, and how many existed in the distant early universe. A supermassive black hole becomes visible when gas accretes onto it, causing it to shine as a “quasar.” Previous studies have been sensitive only to the very rare, most luminous quasars, and thus the most massive black holes. The new discoveries probe the population of fainter quasars, powered by black holes with masses comparable to most black holes seen in the present-day universe.

The research team used data taken with a cutting-edge instrument, “Hyper Suprime-Cam” (HSC), mounted on the Subaru Telescope of the National Astronomical Observatory of Japan, which is located on the summit of Maunakea in Hawaii. HSC has a gigantic field of view (1.77 degrees across, or seven times the area of the full moon) on one of the largest telescopes in the world. The HSC team is surveying the sky over the course of 300 nights of telescope time, spread over five years.

The team selected distant quasar candidates from the sensitive HSC survey data. They then carried out an intensive observational campaign to obtain spectra of those candidates, using three telescopes: the Subaru Telescope; the Gran Telescopio Canarias on the island of La Palma in the Canaries, Spain; and the Gemini South Telescope in Chile. The survey has revealed 83 previously unknown very distant quasars. Together with 17 quasars already known in the survey region, the researchers found that there is roughly one supermassive black hole per cubic giga-light-year — in other words, if you chunked the universe into imaginary cubes that are a billion light-years on a side, each would hold one supermassive black hole.

The sample of quasars in this study is about 13 billion light-years away from the Earth; in other words, we are seeing them as they existed 13 billion years ago. As the Big Bang took place 13.8 billion years ago, we are effectively looking back in time, seeing these quasars and supermassive black holes as they appeared only about 800 million years after the creation of the (known) universe.

It is widely accepted that the hydrogen in the universe was once neutral, but was “reionized” — split into its component protons and electrons — around the time when the first generation of stars, galaxies and supermassive black holes were born, in the first few hundred million years after the Big Bang. This is a milestone of cosmic history, but astronomers still don’t know what provided the incredible amount of energy required to cause the reionization. A compelling hypothesis suggests that there were many more quasars in the early universe than detected previously, and it is their integrated radiation that reionized the universe.

“However, the number of quasars we observed shows that this is not the case,” explained Robert Lupton, a 1985 Princeton Ph.D. alumnus who is a senior research scientist in astrophysical sciences. “The number of quasars seen is significantly less than needed to explain the reionization.” Reionization was therefore caused by another energy source, most likely numerous galaxies that started to form in the young universe.

The present study was made possible by the world-leading survey ability of Subaru and HSC. “The quasars we discovered will be an interesting subject for further follow-up observations with current and future facilities,” said Yoshiki Matsuoka, a former Princeton postdoctoral researcher now at Ehime University in Japan, who led the study. “We will also learn about the formation and early evolution of supermassive black holes, by comparing the measured number density and luminosity distribution with predictions from theoretical models.”

Based on the results achieved so far, the team is looking forward to finding yet more distant black holes and discovering when the first supermassive black hole appeared in the universe.

The HSC collaboration includes astronomers from Japan, Taiwan and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), the Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.

 

DNA Data Storage

Demonstration of End-to-End Automation of DNA Data Storage

Scientific Reports volume 9, Article number: 4998 (2019)

Abstract

Synthetic DNA has emerged as a novel substrate to encode computer data with the potential to be orders of magnitude denser than contemporary cutting-edge techniques. However, even with the help of automated synthesis and sequencing devices, many intermediate steps still require expert laboratory technicians to execute. We have developed an automated end-to-end DNA data storage device to explore the challenges of automation within the constraints of this unique application. Our device encodes data into a DNA sequence, which is then written to a DNA oligonucleotide using a custom DNA synthesizer, pooled for liquid storage, and read using a nanopore sequencer and a novel, minimal preparation protocol. We demonstrate an automated 5-byte write, store, and read cycle with a modular design enabling expansion as new technology becomes available.

Introduction

Storing information in DNA is an emerging technology with considerable potential to be the next generation storage medium of choice. Recent advances have seen storage capacity grow from hundreds of kilobytes to megabytes to hundreds of megabytes. Although contemporary approaches are book-ended with mostly automated synthesis and sequencing technologies (e.g., column synthesis, array synthesis, Illumina, nanopore, etc.), significant intermediate steps remain largely manual. Without complete automation of the write-store-read cycle of data storage in DNA, it is unlikely to become a viable option for applications other than extremely seldom-read archival storage.

To demonstrate the practicality of integrating fluidics, electronics and infrastructure, and to explore the challenges of full DNA storage automation, we developed the first fully automated end-to-end DNA storage device. Our device is intended to act as a proof-of-concept that provides a foundation for continuous improvements, and as a first application of modules that can be used in future molecular computing research. As such, we adhered to specific design principles for the implementation: (1) maximize modularity for the sake of replication and reuse, and (2) reduce system complexity to balance the cost and labor required to set up and run the device modules.
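As a toy illustration of the encoding step only, a direct two-bits-per-base mapping from bytes to a DNA sequence might look like the Python sketch below. Real DNA storage pipelines, including the one described here, add error-correcting redundancy and avoid problematic sequences, so this is not the authors' scheme:

    BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

    def bytes_to_dna(data: bytes) -> str:
        # Map each byte to four bases, two bits per base (a toy encoding for illustration only).
        bases = []
        for byte in data:
            for shift in (6, 4, 2, 0):
                bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
        return "".join(bases)

    # Example: a 5-byte payload, comparable in size to the demonstrated write, becomes a 20-base sequence.
    print(bytes_to_dna(b"HELLO"))   # CAGACACCCATACATACATT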

 

Attacks on medical machine learning

Adversarial attacks on medical machine learning
Science  22 Mar 2019:
Vol. 363, Issue 6433, pp. 1287-1289

With public and academic attention increasingly focused on the new role of machine learning in the health information economy, an unusual and no-longer-esoteric category of vulnerabilities in machine-learning systems could prove important. These vulnerabilities allow a small, carefully designed change in how inputs are presented to a system to completely alter its output, causing it to confidently arrive at manifestly wrong conclusions. These advanced techniques to subvert otherwise-reliable machine-learning systems—so-called adversarial attacks—have, to date, been of interest primarily to computer science researchers. However, the landscape of often-competing interests within health care, and billions of dollars at stake in systems’ outputs, implies considerable problems. We outline motivations that various players in the health care system may have to use adversarial attacks and begin a discussion of what to do about them. Far from discouraging continued innovation with medical machine learning, we call for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.

Deep Vulnerabilities

Adversarial examples are inputs to a machine-learning model that are intentionally crafted to force the model to make a mistake. Adversarial inputs were first formally described in 2004, when researchers studied the techniques used by spammers to circumvent spam filters. Typically, adversarial examples are engineered by taking real data, such as a spam advertising message, and making intentional changes to that data designed to fool the algorithm that will process it. In the case of text data like spam, such alterations may take the form of adding innocent text or substituting synonyms for words that are common in malignant messages. In other cases, adversarial manipulations can come in the form of imperceptibly small perturbations to input data, such as making a human-invisible change to every pixel in an image. Researchers have demonstrated the existence of adversarial examples for essentially every type of machine-learning model ever studied and across a wide range of data types, including images, audio, text, and other inputs.

Cutting-edge adversarial techniques generally use optimization theory to find small data manipulations likely to fool a targeted model. As a proof of concept in the medical domain, we recently executed successful adversarial attacks against three highly accurate medical image classifiers. The top figure provides a real example from one of these attacks, which could be fairly easily commoditized using modern software. On the left, an image of a benign mole is shown, which is correctly flagged as benign with a confidence of >99%. In the center, we show what appears to be random noise, but is in fact a carefully calculated perturbation: This “adversarial noise” was iteratively optimized to have maximum disruptive effect on the model’s interpretation of the image without changing any individual pixel by more than a tiny amount. On the right, we see that despite the fact the perturbation is so small as to be visually imperceptible to human beings, it fools the model into classifying the mole as malignant with 100% confidence. It is important to emphasize that the adversarial noise added to the image is not random and has near-zero probability of occurring by chance. Thus, such adversarial examples reflect not that machine-learning models are inaccurate or unreliable per se but rather that even otherwise-effective models are susceptible to manipulation by inputs explicitly designed to fool them.
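The kind of bounded, iterative perturbation described above can be sketched generically as follows (a projected gradient ascent on a classifier's loss, written in Python/PyTorch; the model, input format, and parameter values are assumptions for illustration, not the attack used in the study):

    import torch

    def adversarial_example(model, image, true_label, eps=2/255, step=0.5/255, iters=20):
        # Iteratively nudge the image so as to maximize the classification loss while
        # keeping every pixel within +/- eps of its original value (an L-infinity budget).
        x0 = image.detach()
        x = x0.clone()
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(iters):
            x.requires_grad_(True)
            loss = loss_fn(model(x.unsqueeze(0)), true_label.unsqueeze(0))
            grad, = torch.autograd.grad(loss, x)
            with torch.no_grad():
                x = x + step * grad.sign()              # move up the loss gradient
                x = x0 + (x - x0).clamp(-eps, eps)      # stay within the tiny perturbation budget
                x = x.clamp(0.0, 1.0)                   # keep a valid image
        return x.detach()

The resulting image differs from the original by at most eps per pixel, yet can flip the model's prediction, which is the behavior described in the figure example above.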

Adversarial attacks constitute one of many possible failure modes for medical machine-learning systems, all of which represent essential considerations for the developers and users of models alike. From the perspective of policy, however, adversarial attacks represent an intriguing new challenge, because they afford users of an algorithm the ability to influence its behavior in subtle, impactful, and sometimes ethically ambiguous ways.

Classification based on machine learning always yields a result without stating a probability for the individual classification. The reported probability of the classification applies only to the training set used. A carefully crafted small perturbation can always be added to the input data that completely changes the classification. It is therefore difficult or impossible to give the result legal validity.

 

Jørg Tofte Jebsen

Jørg Tofte Jebsen

Jørg Tofte Jebsen (born 27 April 1888 in Berger, Norway; died 7 January 1922 in Bolzano, Italy) was a Norwegian physicist. He was the first in Norway to work on Einstein’s general theory of relativity. In this connection he became known after his early death for what many now call the Jebsen-Birkhoff theorem for the metric tensor outside a general, spherical mass distribution.

Jebsen grew up in Berger, where his father Jens Johannes Jebsen ran two large textile mills. His mother was Agnes Marie Tofte; his parents had married in 1884. After elementary school he went through middle school and gymnasium in Oslo. Even then he showed a particular talent for mathematical subjects.

After the final examen artium in 1906, he did not continue his academic studies at a university, as would have been normal at that time. He was meant to enter his father’s company and for that purpose spent two years in Aachen, Germany, where he studied textile manufacturing. After a short stay in England, he came back to Norway and started to work with his father.

But his interest in natural science took over, and in 1909 he began studying it at the University of Oslo. His work there was interrupted in 1911-12, when he was an assistant to Sem Sæland at the newly established Norwegian Institute of Technology (NTH) in Trondheim. Back in Oslo he took up investigations in X-ray crystallography with Lars Vegard. With Vegard’s help he was able to pursue this work at the University of Berlin starting in the spring of 1914, at the same time as Einstein took up his new position there.

During the stay in Berlin it became clear that his main interests were in theoretical physics, and electrodynamics in particular. This subject is central to Einstein’s special theory of relativity and would define his future work back in Norway. In 1916 he took a new job as an assistant in Trondheim, but had to resign after a year because of health problems. In the summer of 1917 he married Magnhild Andresen in Oslo, and they had a child a year later. By then they had moved back to his parents’ home in Berger, where he worked alone on a larger treatise with the title Versuch einer elektrodynamischen Systematik. It was finished a year later, in 1918, and he hoped that it could be used to obtain a doctoral degree at the university. In the fall of the same year he received treatment at a sanatorium for what turned out to be tuberculosis.

The faculty at the University of Oslo sent Jebsen’s thesis for evaluation to Carl Wilhelm Oseen at the University of Uppsala. Oseen had some critical comments, with the result that the thesis was approved only for the more ordinary cand.real. degree. But Oseen had found the student so promising that Jebsen was shortly thereafter invited to work with him. Jebsen came to Uppsala in the fall of 1919, where he could follow Oseen’s lectures on general relativity.

At that time it was natural to study the exact solution of Einstein’s equations for the metric outside a static, spherical mass distribution, found by Karl Schwarzschild in 1916. Jebsen set out to extend this achievement to the more general case of a spherical mass distribution that varies with time, which would be of relevance for pulsating stars. After a relatively short time he came to the surprising result that the static Schwarzschild solution still gives the exact metric tensor outside the mass distribution. This means that such a spherical, pulsating star will not emit gravitational waves.

During the spring of 1920 he hoped to get the results published through the Royal Swedish Academy of Sciences. This met with some difficulties, but after an intervention by Oseen the paper was accepted for publication in a Swedish journal for the natural sciences, where it appeared the following year.

His work did not seem to generate much interest. One reason may be that the Swedish journal was not well known abroad. A couple of years later the result was rediscovered by George David Birkhoff, who included it in a popular science book he wrote. Thus it became known as “Birkhoff’s theorem”. Jebsen’s original discovery was first pointed out in 2005, when his paper was translated into English. Since then the result has more often been called the Jebsen-Birkhoff theorem. Most modern-day proofs are along the lines of Jebsen’s original derivation.

Einstein came on a visit to Oslo in June 1920, giving three public lectures on the theory of relativity at the invitation of the Student Society. Jebsen was also there, but it is not clear whether the two met personally.

In the fall of the same year Jebsen traveled with his family to Bolzano in northern Italy in order to find a milder climate and improve his deteriorating health. There he wrote the first Norwegian presentation of the differential geometry used in general relativity. He also found time to write a popular book on Galileo Galilei and his struggle with the church. But his health did not improve, and he died there on 7 January 1922. A few weeks later he was buried near his home in Norway.


 

Exascale Aurora computer

U.S. Department of Energy and Intel to Build First Exascale Supercomputer

The Argonne National Laboratory Supercomputer will Enable High Performance Computing and Artificial Intelligence at Exascale by 2021

Aurora is expected to be completed by 2021. Photo: Argonne National Laboratory

 

CHICAGO, ILLINOIS – Intel Corporation and the U.S. Department of Energy (DOE) will build the first supercomputer with a performance of one exaFLOP in the United States. The system being developed at DOE’s Argonne National Laboratory in Chicago, named “Aurora”, will be used to dramatically advance scientific research and discovery. The contract is valued at over $500 million and will be delivered to Argonne National Laboratory by Intel and sub-contractor Cray Computing in 2021.

The Aurora system’s exaFLOP of performance, equal to a “quintillion” floating point computations per second, combined with an ability to handle both traditional high performance computing (HPC) and artificial intelligence (AI), will give researchers an unprecedented set of tools to address scientific problems at exascale. These breakthrough research projects range from extreme-scale cosmological simulations to new approaches for drug response prediction to the discovery of materials for the creation of more efficient organic solar cells. The Aurora system will foster new scientific innovation and usher in new technological capabilities, furthering the United States’ scientific leadership position globally.

“Achieving Exascale is imperative not only to better the scientific community, but also to better the lives of everyday Americans,” said U.S. Secretary of Energy Rick Perry. “Aurora and the next-generation of Exascale supercomputers will apply HPC and AI technologies to areas such as cancer research, climate modeling, and veterans’ health treatments. The innovative advancements that will be made with Exascale will have an incredibly significant impact on our society.”

“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO. “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”

“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, and industry partners Intel and Cray,” said Argonne National Laboratory Director, Paul Kearns. “Argonne’s Aurora system is built for next-generation Artificial Intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials, and further understanding the universe – and that is just the beginning.”

The foundation of the Aurora supercomputer will be new Intel technologies designed specifically for the convergence of artificial intelligence and high performance computing at extreme computing scale. These include a future generation of Intel® Xeon® Scalable processor, a future generation of Intel® Optane™ DC Persistent Memory, Intel’s Xe compute architecture and Intel’s One API software. Aurora will use Cray’s next-generation Shasta family, which includes Cray’s high performance, scalable switch fabric codenamed “Slingshot”.

“Intel and Cray have a longstanding, successful partnership in building advanced supercomputers, and we are excited to partner with Intel to reach exascale with the Aurora system,” said Pete Ungaro, president and CEO, Cray. “Cray brings industry leading expertise in scalable designs with the new Shasta system and Slingshot interconnect. Combined with Intel’s technology innovations across compute, memory and storage, we are able to deliver to Argonne an unprecedented system for simulation, analytics, and AI.”

 

Mysterious asteroid activity

Mysterious asteroid activity complicates NASA’s sampling attempts

By Paul Voosen

THE WOODLANDS, TEXAS—NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) mission to sample the asteroid Bennu and return to Earth was always going to be a touch-and-go maneuver. But new revelations about its target—a space rock five times the size of a U.S. football field that orbits close to Earth—are making the mission riskier than ever. Rather than smooth plains of rubble, Bennu’s surface is a jumble of more than 200 large boulders, with scarcely enough gaps for robotic sampling of its surface grit, the spacecraft’s team reported here today at the Lunar and Planetary Science Conference and in a series of Nature papers.

The $800 million spacecraft began to orbit Bennu at the start of this year, and the asteroid immediately began to spew surprises—literally. On 6 January, the team detected a plume of small particles shooting off the rock; 10 similar events followed over the next month. Rather than a frozen remnant of past cosmic collisions, Bennu is one of a dozen known “active” asteroids. “[This is] one of the biggest surprises of my scientific career,” says Dante Lauretta, the mission’s principal investigator and a planetary scientist at the University of Arizona in Tucson. “We are seeing Bennu regularly ejecting material into outer space.”

Ground-based observations of Bennu had originally suggested its surface was made of small pebbles incapable of retaining heat. OSIRIS-REx was designed to sample such a smooth environment, and it requires a 50-meter-wide circle free of hazards to approach the surface. No such circle exists, say mission scientists, but there are several smaller boulder-free areas that it could conceivably sample. Given how well the spacecraft has handled its maneuvers so far, “We’re going to try to hit the center of the bull’s-eye,” says Rich Burns, OSIRIS-REx’s project manager at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

OSIRIS-REx has always been a cautious mission. Unlike the speedy Hayabusa2 mission from Japan, which sampled the near-Earth Ryugu asteroid a half-year after its arrival, OSIRIS-REx plans to sample Bennu in July 2020, a year and a half after it started to orbit. That timetable has not changed, Lauretta says. By this summer, researchers hope to have the sampling site selected. And much remains to be discovered about the spinning, top-shaped asteroid, starting with the plumes, which can shoot off penny-size particles at speeds of up to several meters per second.

Just after OSIRIS-REx entered orbit around Bennu, the asteroid reached its closest approach to the sun. The other known active asteroids, which are all located in the asteroid belt between Mars and Jupiter, have similarly spouted particles as they get closer to the sun. It’s possible that the plumes are related to this approach, perhaps driven by water ice sublimating into vapor. But there are a dozen different hypotheses to explore, Lauretta says. “We don’t know the answer right now.”

The abundance of impact craters on Bennu’s ridgelike belly suggests the asteroid is up to a billion years old, more ancient than once thought. The craters also imply that Bennu got its toplike shape early in its history, rather than later from sun-driven spinning. And there are signs that material on the asteroid’s poles is creeping toward the equator, suggesting geological activity.

Although many of these puzzles intrigue scientists, ultimately the point of the mission is to return the largest amount of asteroid material ever captured to Earth’s surface. That is expected to happen in 2023. But, Lauretta adds, “The challenge got a lot harder when we saw the true nature of Bennu’s surface.”