Superdense coding

This is admittedly a peculiar heading; one is tempted to call it “cryptic”. In this blog I return to Alice and Bob, who exchange quantum states referred to as qubits. In this post the qubits are spin states of electrons. Alice and Bob use the same instrumentation for producing (writing) and measuring (reading) independent spin states (qubits). One qubit can transfer exactly one classical bit. Transferring 2 classical bits therefore requires 2 independent qubits. A quantum state consisting of a tensor product of 2 independent qubits |0> and |0> is written |00> ≡ |0>⊗|0>, where the first |0> always belongs to Alice and the second |0> always belongs to Bob.

Einstein was a great opponent of Bohr's interpretation of the quantum mechanical measurement. Einstein had spent his whole life trying to abolish Newton's gravity, which acts instantaneously over enormous distances. Any physical theory ought to be local. Schrödinger came to Einstein's aid by introducing the peculiar phenomenon of entanglement as an argument for hidden variables.

In an earlier blog I showed how the Bell circuit B can transform a pair of independent qubits |00> into a state of Schrödinger's entanglement:
B(|00>) = (|00>+|11>)/√2.
Remember: the first spin state belongs to Alice, the second to Bob.

Alice gets the first electron and Bob gets the second. They now travel far away from each other, taking care to preserve the quantum states of the two electrons. This is easier said than done in the physical world, but we are dealing with a thought experiment, quite in the spirit of the twin paradox of Einstein's special relativity. Superdense coding is about finding a method of combining the information in the entangled electron pair so that Alice can send Bob 2 classical bits without Alice and Bob having to meet again. The problem is solved if Bob can apply the inverse Bell circuit to recover the original pair of independent qubits. Unfortunately, this requires that Bob knows which of the 4 possible Bell entanglements is at hand, and Bob does not know that. Can Alice handle this by applying some quantum gates? Yes, she can apply the 4 Pauli gates.

The 4 Pauli gates are defined by the following 2×2 matrices:
I ≡ [[1,0]’,[0,1]’]
Z ≡ [[1,0]’,[0,-1]’]
X ≡ [[0,1]’,[1,0]’]
Y ≡ [[0,-1]’,[1,0]’]
The symbol “’” denotes transposition, which turns a row vector into a column vector.
I, Z, and X are symmetric about the diagonal.
They are therefore their own inverse matrices, so:
I I = I
Z Z = I
X X = I
Y, on the other hand, is not symmetric; it is not its own inverse matrix.
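These properties are easy to check numerically. Below is a minimal Python sketch (my illustration, not part of the original blog) that builds the four matrices in the column-vector convention used above and tests which of them square to the identity:

import numpy as np

# Real 2x2 Pauli gates; each inner list is a column, matching the ' notation
I = np.array([[1, 0], [0, 1]])
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, 1], [-1, 0]])  # columns [0,-1]' and [1,0]'

for name, M in [("I", I), ("Z", Z), ("X", X), ("Y", Y)]:
    print(name, "is self-inverse:", np.array_equal(M @ M, I))
# I, Z, X print True; Y prints False, since Y @ Y = -I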

If Alice sends her electron through a Pauli gate, its spin state will change (apart from I, which leaves it unchanged). Bob's electron is not affected in any way.
I will now look at how Z, X, and Y act on an arbitrary qubit:
Z(a0|0> + a1|1>) = a0|0> – a1|1>, (the sign of a1 flips)
X(a0|0> + a1|1>) = a1|0> + a0|1>, (a0 and a1 swap places)
Y(a0|0> + a1|1>) = a1|0> – a0|1>, (a0 and a1 swap places, and the sign of the new |1> amplitude flips)

Now, Alice's electron is not just any qubit: in the entangled Bell state we have a0 = 1/√2 and a1 = ±1/√2.

Alice does nothing if she wants to send 00: B(00) = (|00>+|11>)/√2.
Alice applies X, which swaps her |0> and |1>, if she wants to send 01. The new Bell state becomes: B(01) = (|10>+|01>)/√2.
Alice applies Z, which flips the sign of |1>, if she wants to send 10. The new Bell state becomes: B(10) = (|00>-|11>)/√2.
Alice applies Y, which swaps |0> and |1> and flips the sign of |1>, if she wants to send 11. The new Bell state becomes: B(11) = (|01>-|10>)/√2.

The inverse Bell circuit will automatically transform the Bell states back into one of the 4 independent basis states: |00>, |01>, |10>, and |11>.

Bob will therefore be able to measure the spin states of both electrons. The result is 2 classical bit values.
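The whole protocol can be simulated with small state vectors. Here is a minimal Python sketch (my own illustration, assuming the conventions above: real amplitudes, Alice's qubit as the first tensor factor):

import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, 1], [-1, 0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

B = CNOT @ np.kron(H, I2)      # Bell circuit
B_inv = B.T                    # B is orthogonal, so its inverse is its transpose

bell = B @ np.array([1, 0, 0, 0])   # B(|00>) = (|00>+|11>)/sqrt(2)

encode = {"00": I2, "01": X, "10": Z, "11": Y}
for bits, P in encode.items():
    received = np.kron(P, I2) @ bell   # Alice acts on her qubit only
    decoded = B_inv @ received         # Bob applies the inverse Bell circuit
    print(bits, "->", np.round(np.abs(decoded), 3))
# Each run yields a standard basis vector, so Bob reads off 2 classical bits.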

 

The Bell circuit

Two or more reversible gates placed one after another are called a circuit.
For the Hadamard gate we have
H(|0>) = (|0>+|1>)/√2
H(|1>) = (|0>-|1>)/√2
These 2 Hadamard equations give 4 tensor products with |0> and |1>:
H(|0>)|0> = (|00>+|10>)/√2
H(|1>)|0> = (|00>-|10>)/√2
H(|0>)|1> = (|01>+|11>)/√2
H(|1>)|1> = (|01>-|11>)/√2

Both Hadamard and CNOT are reversible:
they are their own inverse orthogonal matrices.
For CNOT we have:
CNOT(|x0>) = |xx>, provided |x> is either |0> or |1>.

I now take the first 2 Hadamard tensor products as input to CNOT:
CNOT(H(|0>)|0>) = (|00>+|11>)/√2
CNOT(H(|1>)|0>) = (|00>-|11>)/√2

For CNOT we also have:
CNOT(|01>) = |01>
CNOT(|11>) = |10>

I now take the last 2 Hadamard tensor products as input to CNOT:
CNOT(H(|0>)|1>) = (|01>+|10>)/√2
CNOT(H(|1>)|1>) = (|01>-|10>)/√2

The Bell circuit B is defined as
B(|00>) ≡ CNOT(H(|0>)|0>) = (|00>+|11>)/√2
B(|01>) ≡ CNOT(H(|0>)|1>) = (|01>+|10>)/√2
B(|10>) ≡ CNOT(H(|1>)|0>) = (|00>-|11>)/√2
B(|11>) ≡ CNOT(H(|1>)|1>) = (|01>-|10>)/√2

The four outputs are all entangled. Since the corresponding inputs form an orthonormal basis for ℝ⁴, the output vectors must also form an orthonormal basis. This basis consists of 4 entangled vectors and is called a Bell basis.

A Bell circuit works by first sending a pair of qubits through a Hadamard gate and then sending the result through a CNOT gate. But both operations are their own inverses. One can therefore undo the Bell transformation by applying CNOT again, followed by Hadamard. This circuit, B⁻¹, is called the inverse Bell circuit. One gets from a Bell basis back to the standard basis via:
B⁻¹((|00>+|11>)/√2) = |00>
B⁻¹((|01>+|10>)/√2) = |01>
B⁻¹((|00>-|11>)/√2) = |10>
B⁻¹((|01>-|10>)/√2) = |11>
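The claim that B maps the standard basis to an orthonormal Bell basis, and that CNOT followed by Hadamard undoes it, can be verified directly. A minimal Python sketch (my illustration, assuming H acts on the first qubit as above):

import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

B = CNOT @ np.kron(H, I2)        # the Bell circuit
bell_basis = B @ np.eye(4)       # columns are B(|00>) ... B(|11>)
print(np.round(bell_basis, 3))

# Orthonormality: B^T B = I
print(np.allclose(bell_basis.T @ bell_basis, np.eye(4)))

B_inv = np.kron(H, I2) @ CNOT    # CNOT first, then Hadamard
print(np.round(B_inv @ bell_basis, 3))   # the identity: back to |00>..|11>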

B and B⁻¹ can be used for some very interesting things, such as quantum teleportation.

 

Dragonfly is going to Titan

NASA will fly a billion-dollar quadcopter to Titan, Saturn’s methane-rich moon

By Paul Voosen

The siren call of Titan could not be ignored. NASA’s next billion-dollar mission, called Dragonfly, will be an innovative quadcopter to explore Titan, Saturn’s largest moon, the agency announced today. The craft will soar and hover over the icy moon’s surface—and land on it—in a search for the conditions and chemistry that could foster life.

The mission—led by Elizabeth “Zibi” Turtle, a planetary scientist at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland, and also managed by APL—will launch in 2026. It represents a calculated risk for the agency, embracing a new paradigm of robotic exploration to be used on a distant moon. “Titan is unlike any other place in the solar system, and Dragonfly is like no other mission,” said Thomas Zurbuchen, NASA’s associate administrator for science in Washington, D.C., while announcing the mission’s selection. “The science is compelling. It’s the right time to do it.”

Titan is veiled by a nitrogen atmosphere and larger than Mercury. It is thought to harbor a liquid ocean beneath its frozen crust of water ice. NASA’s Cassini spacecraft studied Titan during its historic campaign, and, in 2005, dropped the short-lived Huygens probe into Titan’s atmosphere.

The surface it saw had many geologic features similar to those found on Earth, including plateaus, dune-filled deserts, and, at its poles, liquid seas and rivers. But on Titan, where temperatures average a frigid 94 K, the “rocks” are made of water ice and the seas are filled with ethane and methane, hydrocarbons that are gases on Earth. The moon’s stew of organic molecules and water, many scientists believe, could have resulted in reactions to create amino acids and the bases used to build DNA’s double helix. It’s as if Titan has been conducting experiments on life formation for millions of years, Turtle says. “Dragonfly is designed to go pick up the results of those experiments and study them.”

Dragonfly is an inspiring selection, adds Lindy Elkins-Tanton, a planetary scientist at Arizona State University in Tempe and principal investigator of Psyche, NASA’s mission to a metallic asteroid. “Titan might truly be the cradle for some kind of life—and whether life has emerged or not, Titan’s hydrocarbon rivers and lakes, and its hydrocarbon snow, makes it one of the most fantasylike landscapes in our solar system.”

Given Titan’s complex surface, a lander at a single site would not be able to say much about the moon’s chemistry. Dragonfly leverages the advances in computing and aircraft design that have led to the explosion of hovering drones on Earth. It will carry eight rotor blades, on the top and bottom of each of four arms. It is, in effect, a movable lander, capable of shunting kilometers between sampling sites every 16 Earth days. Titan’s dense air and low gravity will allow the 300-kilogram, sedan-size copter, which will be powered by a radioactive generator, to hover with 38 times less power than needed on Earth.

The timing of Dragonfly’s arrival, in 2034 during Titan’s long northern winter, ruled out a landing near the north pole, home to the moon’s evocative methane seas; those sites would leave it unable to radio home. Instead, the quadcopter will explore the moon’s vast equatorial deserts, which are likely fed by a grab bag of material from all over the moon. (“The largest zen garden in the solar system,” Turtle says.) It will search especially for impact craters or ice volcanoes, energetic processes that could provide a spark—and the liquid water—needed for nascent organic chemistry. During its nearly 3-year primary mission, after traveling 175 kilometers in a series of flights lasting up to 8 kilometers each, Dragonfly will ultimately reach the 80-kilometer-wide Selk impact crater, its primary target. The impact that created Selk was large enough to melt Titan’s water-ice crust and liberate oxygen, priming reactions that are recorded in its outcrops.

Dragonfly won’t be equipped with a robotic arm, like the recent Mars rovers. Its exploration will first be guided by an instrument on its belly that will bombard the ground with neutron radiation, using the gamma rays this attack releases to differentiate between basic terrain types, such as ammonia-rich ice or carbon-rich sand dunes. Its two landing skids will also each carry a rotary-percussive drill capable of taking samples and feeding them through a pneumatic tube to a mass spectrometer that can analyze their composition. The sampling system represented a risk for the mission; NASA scientists were concerned Titan’s hydrocarbon-rich atmosphere could clog it, Zurbuchen says. “It’s the oil spill version of an atmosphere.” Over the past 2 years, after extensive testing with “pathological” materials and a redesign, Turtle says, the agency’s fears were allayed.

Beyond Titan’s surface, Dragonfly will also target its atmosphere and interior. During flight, it can collect measurements, much like instruments mounted on a balloon would. And it is also equipped with a seismometer that could use vibrations induced on the moon by its tidal lock with Saturn to gauge the ocean hidden beneath its crust, which scientists have suggested could be made up of ammonia-water or water and sulfate. Ultimately, the quadcopter’s explorations may be able to last up to 8 years after landing before its nuclear power source peters out.

The cost-capped New Frontiers program, with $850 million set for the mission and some $150 million for launch, is the largest planetary exploration line that NASA opens to outside competition and leadership. A significant factor in Dragonfly’s selection, Zurbuchen adds, was APL’s ability to deliver the Parker Solar Probe, now on a mission to explore the sun, on time and under budget. Dragonfly went head-to-head with one other finalist, the Comet Astrobiology Exploration Sample Return, which would have sampled primordial ice from a comet and returned it for study on Earth.

Previous spacecraft launched under New Frontiers include New Horizons, which surveyed Pluto and recently flew by MU69, an icy object in the farthest reaches of the solar system; Juno, now in orbit around Jupiter; and the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer, now orbiting the asteroid Bennu before collecting samples and returning them to Earth.

 

FRB 180924 – second localization

Baffling radio burst traced to a galaxy 3.6 billion light-years away

By Daniel Clery

Fast radio bursts (FRBs)—intense blasts of radio waves from distant galaxies—have perplexed astronomers since they were first detected a dozen years ago. The bursts are so brief, only about one-thousandth of a second, that it’s usually impossible to pinpoint their origins—or their cause, be it a supernova, a neutron star, or something even more exotic.

So far, almost all of the 85 detected FRBs are one-off events. But a couple have been seen to repeat, letting astronomers pinpoint where at least one of them comes from.

Now, researchers have localized a second FRB, this time a nonrepeater. The team used an array of 36 radio dishes called the Australian Square Kilometre Array (SKA) Pathfinder, a precursor instrument to the planned SKA in Western Australia. Because the FRB signal arrived at each dish of the widely spaced array at slightly different times, the team could analyze the lags, measured in fractions of one-billionth of a second, to pinpoint the source in the sky.

With the help of follow-up observations using some of the world’s largest optical telescopes, astronomers identified the burst, FRB 180924, as coming from a medium-size galaxy 3.6 billion light-years from Earth, they report today in Science. But this presents a puzzle for theorists: The previously localized FRB came from a particular type of dwarf galaxy that provided one of the few clues to what could cause FRBs. If the new find comes from a very different galaxy, does that suggest repeaters and one-offs have different causes? Only more detections and localizations will answer that question—perhaps using new purpose-built telescopes that are joining the hunt.

A single fast radio burst localized to a massive galaxy at cosmological distance

Science, 27 Jun 2019: eaaw5903

Abstract

Fast Radio Bursts (FRBs) are brief radio emissions from distant astronomical sources. Some are known to repeat, but most are single bursts. Non-repeating FRB observations have had insufficient positional accuracy to localize them to an individual host galaxy. We report the interferometric localization of the single pulse FRB 180924 to a position 4 kpc from the center of a luminous galaxy at redshift 0.3214. The burst has not been observed to repeat. The properties of the burst and its host are markedly different from the only other accurately localized FRB source. The integrated electron column density along the line of sight closely matches models of the intergalactic medium, indicating that some FRBs are clean probes of the baryonic component of the cosmic web.

Cosmological observations have shown that baryons comprise 4% of the energy density of the Universe, of which only about 10% is in cold gas and stars, with the remainder residing in a diffuse plasma surrounding and in between galaxies and galaxy clusters. The location and density of this material has been challenging to characterize, and up to 50% of it remains unaccounted for.

Fast radio bursts are bright bursts of radio waves with millisecond duration. They can potentially be used to detect, study, and map this medium, as bursts of emission are dispersed and scattered by their passage through any ionized material, including the intergalactic medium. If the emission is linearly polarized and any of the media are magnetized, the burst is also subject to Faraday rotation, i.e., the frequency-dependent rotation of the plane of linear polarization due to its passage through a magnetized plasma.

Detailed studies of the medium, and the bursts themselves, require localization of bursts to host galaxies, so that burst redshifts and their propagation distances can be determined.

To date, only one source (FRB 121102) has been localized to sufficient accuracy to identify a host. It is also one of only two FRBs known to repeat. The burst localization was made through radio-interferometric detections of repeated bursts. The burst source lies in a luminous radio nebula within a dwarf galaxy with high star formation rate per unit stellar mass, at redshift z = 0.19. This has led to the hypothesis that bursts are produced by young magnetars embedded in pulsar wind nebulae, with the host galaxy properties suggesting an indirect connection between FRBs and other transient events which are common in this type of galaxy, such as superluminous supernovae and long-duration gamma-ray bursts.

The relationship between the source of FRB 121102 and the larger FRB population is unclear. Many sources have not been observed to repeat despite extensive campaigns spanning hundreds to thousands of hours. The progenitors and mechanism by which burst emission is generated remain uncertain. Localizing examples of further bursts, including those from a population that have not repeated, is required to determine their nature and establish if they can be used as cosmological probes.

 

Reversible Computing

The Future of Computing Depends on Making It Reversible

It’s time to embrace reversible computing, which could offer dramatic improvements in energy efficiency

By Michael P. Frank

For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The payoff from this win-win-win scenario enabled reinvestment in semiconductor fabrication technology that could make even smaller, more densely packed transistors. And so this virtuous circle continued, decade after decade.

Now though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 5 or 10 years. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors caused clock speeds to stagnate more than a decade ago, which drove the industry to start building chips with multiple cores. But even multicore architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.

Heroic efforts are being made within the semiconductor industry to try to keep miniaturization going. But no amount of investment can change the laws of physics. At some point—now not very far away—a new computer that simply has smaller transistors will no longer be any cheaper, faster, or more energy efficient than its predecessors. At that point, the progress of conventional semiconductor technology will stop.

What about unconventional semiconductor technology, such as carbon-nanotube transistors, tunneling transistors, or spintronic devices? Unfortunately, many of the same fundamental physical barriers that prevent today’s complementary metal-oxide-semiconductor (CMOS) technology from advancing very much further will still apply, in a modified form, to those devices. We might be able to eke out a few more years of progress, but if we want to keep moving forward decades down the line, new devices are not enough: We’ll also have to rethink our most fundamental notions of computation.

Let me explain. For the entire history of computing, our calculating machines have operated in a way that causes the intentional loss of some information (it’s destructively overwritten) in the process of performing computations. But for several decades now, we have known that it’s possible in principle to carry out any desired computation without losing information—that is, in such a way that the computation could always be reversed to recover its earlier state. This idea of reversible computing goes to the very heart of thermodynamics and information theory, and indeed it is the only possible way within the laws of physics that we might be able to keep improving the cost and energy efficiency of general-purpose computing far into the future.

In the past, reversible computing never received much attention. That’s because it’s very hard to implement, and there was little reason to pursue this great challenge so long as conventional technology kept advancing. But with the end now in sight, it’s time for the world’s best physics and engineering minds to commence an all-out effort to bring reversible computing to practical fruition.

The history of reversible computing begins with physicist Rolf Landauer of IBM, who published a paper in 1961 titled “Irreversibility and Heat Generation in the Computing Process.” In it, Landauer argued that the logically irreversible character of conventional computational operations has direct implications for the thermodynamic behavior of a device that is carrying out those operations.

Landauer’s reasoning can be understood by observing that the most fundamental laws of physics are reversible, meaning that if you had complete knowledge of the state of a closed system at some time, you could always—at least in principle—run the laws of physics in reverse and determine the system’s exact state at any previous time.

To better see that, consider a game of billiards—an ideal one with no friction. If you were to make a movie of the balls bouncing off one another and the bumpers, the movie would look normal whether you ran it backward or forward: The collision physics would be the same, and you could work out the future configuration of the balls from their past configuration or vice versa equally easily.

The same fundamental reversibility holds for quantum-scale physics. As a consequence, you can’t have a situation in which two different detailed states of any physical system evolve into the exact same state at some later time, because that would make it impossible to determine the earlier state from the later one. In other words, at the lowest level in physics, information cannot be destroyed.

The reversibility of physics means that we can never truly erase information in a computer. Whenever we overwrite a bit of information with a new value, the previous information may be lost for all practical purposes, but it hasn’t really been physically destroyed. Instead it has been pushed out into the machine’s thermal environment, where it becomes entropy—in essence, randomized information—and manifests as heat.

Returning to our billiards-game example, suppose that the balls, bumpers, and felt were not frictionless. Then, sure, two different initial configurations might end up in the same state—say, with the balls resting on one side. The frictional loss of information would then generate heat, albeit a tiny amount.

Today’s computers rely on erasing information all the time—so much so that every single active logic gate in conventional designs destructively overwrites its previous output on every clock cycle, wasting the associated energy. A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect.

How much heat is produced? Landauer’s conclusion, which has since been experimentally confirmed, was that each bit erasure must dissipate at least 17 thousandths of an electron volt at room temperature. This is a very small amount of energy, but given all the operations that take place in a computer, it adds up. Present-day CMOS technology actually does much worse than Landauer calculated, dissipating something in the neighborhood of 5,000 electron volts per bit erased. Standard CMOS designs could be improved in this regard, but they won’t ever be able to get much below about 500 eV of energy lost per bit erased, still far from Landauer’s lower limit.
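For the record, Landauer's figure is just E = kT·ln 2 evaluated at room temperature; this quick computation (my own, with standard constants) reproduces it:

import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # room temperature, K
E_joules = k * T * math.log(2)
E_eV = E_joules / 1.602176634e-19
print(round(E_eV, 4))     # ~0.0179 eV, i.e. about 17 thousandths of an eV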

Can we do better? Landauer began to consider this question in his 1961 paper where he gave examples of logically reversible operations, meaning ones that transform computational states in such a way that each possible initial state yields some unique final state. Such operations could, in principle, be carried out in a thermodynamically reversible way, in which case any energy associated with the information-bearing signals in the system would not necessarily have to be dissipated as heat but could instead potentially be reused for subsequent operations.

To prove this approach could still do everything a conventional computer could do, Landauer also noted that any desired logically irreversible computational operation could be embedded in a reversible one, by simply setting aside any information that was no longer needed, rather than erasing it. But Landauer originally thought that doing this was only delaying the inevitable, because the information would still need to be erased eventually, when the available memory filled up.

It was left to Landauer’s younger colleague, Charles Bennett, to show in 1973 that it is possible to construct fully reversible computers capable of performing any computation without quickly filling up memory with temporary data. The trick is to undo the operations that produced the intermediate results. This would allow any temporary memory to be reused for subsequent computations without ever having to erase or overwrite it. In this way, reversible computations, if implemented on the right hardware, could, in principle, circumvent Landauer’s limit.

Unfortunately, Bennett’s idea of using reversible computing to make computation far more energy efficient languished in academic backwaters for many years. The problem was that it’s really hard to engineer a system that does something computationally interesting without inadvertently incurring a significant amount of entropy increase with each operation. But technology has improved, and the need to minimize energy use is now acute. So some researchers are once again looking to reversible computing to save energy.

What would a reversible computer look like? The first detailed attempts to describe an efficient physical mechanism for reversible computing were carried out in the late 1970s and early 1980s by Edward Fredkin and his colleague Tommaso Toffoli in their Information Mechanics research group at MIT.

As a proof of concept, Fredkin and Toffoli proposed that reversible operations could, in principle, be carried out by idealized electronic circuits that used inductors to shuttle charge packets back and forth between capacitors. With no resistors damping the flow of energy, these circuits were theoretically lossless. In the mechanical domain, Fredkin and Toffoli imagined rigid spheres bouncing off of each other and fixed barriers in narrowly constrained trajectories, not unlike the frictionless billiards game I described earlier.

Unfortunately, these idealized systems couldn’t be built in practice. But these investigations led to the development of two abstract computational primitives, now known as the Fredkin gate and the Toffoli gate, which became the foundation of much of the subsequent theoretical work in reversible computing. Any computation can be performed using these gates, which operate on three input bits, transforming them into unique final configurations of three output bits.

Meanwhile, other researchers at such places as Caltech, Rutgers, the University of Southern California, and Xerox PARC continued to explore possible electronic implementations. They called their circuits “adiabatic” after the idealized thermodynamic regime in which energy is barred from leaving the system as heat.

These ideas later found fertile ground back at MIT, where in 1993 a graduate student named Saed Younis in Tom Knight’s group showed for the first time that adiabatic circuits could be used to implement fully reversible logic. Later students in the group, including Carlin Vieri and I, built on that foundation to design and construct fully reversible processors of various types in CMOS as simple proofs of concept. This work established that there were no fundamental barriers preventing the entire discipline of computer architecture from being translated to the reversible realm.

Meanwhile, other researchers had been exploring alternative approaches to implementing reversible computing that were not based on semiconductor electronics at all. In the early 1990s, nanotechnology visionary K. Eric Drexler produced detailed designs for reversible nanomechanical logic devices made from diamond-like materials. Over the decades, Russian and Japanese researchers had been developing reversible superconducting electronic devices, such as the similarly named (but distinct) parametric quantron and quantum flux parametron. And a group at the University of Notre Dame was studying how to use interacting single electrons in arrays of quantum dots. To those of us who were working on reversible computing in the 1990s, it seemed that, based on the wide range of possible hardware that had already been proposed, some kind of practical reversible computing technology might not be very far away.

Alas, the idea was still ahead of its time. Conventional semiconductor technology improved rapidly through the 1990s and early 2000s, and so the field of reversible computing mostly languished. Nevertheless, some progress was made. For example, in 2004 Krishna Natarajan (a student I was advising at the University of Florida) and I showed in detailed simulations that a new and simplified family of circuits for reversible computing called two-level adiabatic logic, or 2LAL, could dissipate as little as 1 eV of energy per transistor per cycle—about 0.001 percent of the energy normally used by logic signals in that generation of CMOS. Still, a practical reversible computer has yet to be built using this or other approaches.

There’s not much time left to develop reversible machines, because progress in conventional semiconductor technology could grind to a halt soon. And if it does, the industry could stagnate, making forward progress that much more difficult. So the time is indeed ripe now to pursue this technology, as it will probably take at least a decade for reversible computers to become practical.

The most crucial need is for new reversible device technologies. Conventional CMOS transistors—especially the smallest, state-of-the-art ones—leak too much current to make very efficient adiabatic circuits. Larger transistors based on older manufacturing technology leak less, but they’d have to be operated quite slowly, which means many devices would need to be used to speed up computation through parallel operation. Stacking them in layers could yield compact and energy-efficient adiabatic circuits, but at the moment such 3D fabrication is still quite costly. And CMOS may be a dead end in any case.

Fortunately, there are some promising alternatives. One is to use fast superconducting electronics to build reversible circuits, which have already been shown to dissipate less energy per device than the Landauer limit when operated reversibly. Advances in this realm have been made by researchers at Yokohama National University, Stony Brook University, and Northrop Grumman. Meanwhile, a team led by Ralph Merkle at the Institute for Molecular Manufacturing in Palo Alto, Calif., has designed reversible nanometer-scale molecular machines, which in theory could consume one-hundred-billionth the energy of today’s computing technology while still switching on nanosecond timescales. The rub is that the technology to manufacture such atomically precise devices still needs to be invented.

Whether or not these particular approaches pan out, physicists who are working on developing new device concepts need to keep the goal of reversible operation in mind. After all, that is the only way that any new computing substrate can possibly surpass the practical capabilities of end-of-line CMOS technology by many orders of magnitude, as opposed to only a few at most.

To be clear, reversible computing is by no means easy. Indeed, the engineering hurdles are enormous. Achieving efficient reversible computing with any kind of technology will likely require a thorough overhaul of our entire chip-design infrastructure. We’ll also have to retrain a large part of the digital-engineering workforce to use the new design methodologies. I would guess that the total cost of all of the new investments in education, research, and development that will be required in the coming decades will most likely run well up into the billions of dollars. It’s a future-computing moon shot.

But in my opinion, the difficulty of these challenges would be a very poor excuse for not facing up to them. At this moment, we’ve arrived at a historic juncture in the evolution of computing technology, and we must choose a path soon.

If we continue on our present course, this would amount to giving up on the future of computing and accepting that the energy efficiency of our hardware will soon plateau. Even such unconventional concepts as analog or spike-based neural computing will eventually reach a limit if they are not designed to also be reversible. And even a quantum-computing breakthrough would only help to significantly speed up a few highly specialized classes of computations, not computing in general.

But if we decide to blaze this new trail of reversible computing, we may continue to find ways to keep improving computation far into the future. Physics knows no upper limit on the amount of reversible computation that can be performed using a fixed amount of energy. So as far as we know, an unbounded future for computing awaits us, if we are bold enough to seize it.

This article appears in the September 2017 print issue as “Throwing Computing Into Reverse.”

 

Quantum gates

Quantum gates are a natural extension of classical reversible gates. They are also another way of viewing the mathematics behind the transfer of qubits from Alice to Bob. In the previous blog I wrote that choosing a direction when measuring a qubit corresponds to choosing an orthogonal matrix, that is, an ordered orthonormal basis. In this blog I will use a fixed ordered orthonormal basis and let the orthogonal matrix correspond to a reversible gate that the qubits pass through before the measurement. I will start by introducing some new notation.

Qubits

We will use only one ordered basis for both sending and receiving qubits. It is natural to choose the standard basis ([1,0]’,[0,1]’). The symbol “’” denotes transposition. We assign the first column vector the measured bit value 0 and the second vector the measured bit value 1. It is therefore natural to let |0> denote [1,0]’ and |1> denote [0,1]’. A qubit then has the form a0|0>+a1|1>, where a0²+a1²=1. The quantum state jumps to |0> with probability a0² or to |1> with probability a1².

A quantum system of n qubits has a basis consisting of 2ⁿ tensor products. The ordered basis for a quantum system of 2 qubits is
(|0>⊗|0>,|0>⊗|1>,|1>⊗|0>,|1>⊗|1>).
The symbol ⊗ is often dropped, since a tensor product is understood in this context: (|0>|0>,|0>|1>,|1>|0>,|1>|1>).
One can introduce the shorthand convention that |ab> means |a>|b>.
The ordered basis for a quantum system of 2 qubits then takes this short form: (|00>,|01>,|10>,|11>).

The CNOT Gate

The classical CNOT gate has 2 inputs and 2 outputs; it is defined by:
CNOT(x,y) = (x,x⊕y), where ⊕ means addition modulo 2; in table form:
CNOT: [0,0,1,1]’,[0,1,0,1]’ ⇒ [0,0,1,1]’,[0,1,1,0]’
We extend this table to qubits in the natural way, by replacing 0 with |0> and 1 with |1>. The table is then given by:
CNOT: [|0>,|0>,|1>,|1>]’,[|0>,|1>,|0>,|1>]’ ⇒
[|0>,|0>,|1>,|1>]’,[|0>,|1>,|1>,|0>]’
This can be written more briefly using the compact notation for tensor products:
CNOT: [|00>,|01>,|10>,|11>]’ ⇒ [|00>,|01>,|11>,|10>]’
The table tells us what happens to basis vectors passing through CNOT, but what happens to a linear combination of basis vectors?
CNOT(r|00>+s|01>+t|10>+u|11>) = r|00>+s|01>+u|10>+t|11>
It swaps the probability amplitudes of |10> and |11>.

We must be careful about how we interpret this gate's action on the quantum state of a qubit pair. For classical bits, a bit entering the upper wire leaves the upper wire unchanged. This is still the case for a qubit if the upper qubit is either |0> or |1>. We have thereby ensured that a quantum gate acting on basis vectors is identical to the corresponding classical reversible gate. But this is not the case for other qubits.

I choose the example where the upper qubit is (|0>+|1>)/√2 and the lower one is |0>. In the compact notation this becomes (|00>+|10>)/√2. The output is obtained by swapping the amplitudes of |10> and |11>:
CNOT((|00>+|10>)/√2) = (|00>+|11>)/√2
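In matrix form, CNOT is just the 4×4 permutation that swaps the last two amplitudes, and the calculation above becomes two lines of Python (my illustration):

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])      # swaps the amplitudes of |10> and |11>

state = np.array([1, 0, 1, 0]) / np.sqrt(2)   # (|00> + |10>)/sqrt(2)
print(CNOT @ state)                           # (|00> + |11>)/sqrt(2)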

The result is a clear example of entanglement. The wires represent electrons or photons, which may be located far from each other. But entanglement means that a measurement on one affects the state of the other. This gate can thus be used to create entanglement. The tensor product lets us assign the first basis vector to Alice and the second to Bob. The most important result is that classical reversible gates are special cases of quantum gates. Classical reversible computation is therefore a special case of quantum computation.

Quantum Gates

CNOT permutes the basis vectors. Permuting the vectors of an ordered orthonormal basis produces another ordered orthonormal basis, which is obtained by multiplication with an orthogonal matrix. CNOT therefore corresponds to an orthogonal matrix. All the classical reversible gates permute basis vectors, so they all correspond to orthogonal matrices. Quantum gates are simply operations that can be described by orthogonal matrices.

Quantum gates for one qubit

In classical reversible computation there are only two possible Boolean operators acting on one bit: the identity, which leaves the bit value unchanged, and NOT, which toggles between the values 0 and 1. For qubits, by contrast, there are infinitely many gates.

I will first look at the 2 quantum gates that correspond to the classical identity, both of which leave |0> and |1> unchanged. I will then look at the 2 quantum gates that correspond to swapping |0> and |1>. These 4 gates are named after Wolfgang Pauli, and they are called Pauli transformations.

The I and Z matrices

I is simply the identity matrix [[1,0]’,[0,1]’]. Here is how I acts on an arbitrary qubit a0|0>+a1|1>:
I(a0|0>+a1|1>) = [[1,0]’,[0,1]’][a0,a1]’ = a0|0>+a1|1>.
I leaves a qubit completely unchanged.
Z is defined as the matrix [[1,0]’,[0,-1]’]. Let us see how Z acts on an arbitrary qubit a0|0>+a1|1>:
Z(a0|0>+a1|1>) = [[1,0]’,[0,-1]’][a0,a1]’ = a0|0>-a1|1>
Z leaves the amplitude of |0> unchanged but flips the sign of the amplitude of |1>. Thus Z(|0>)=|0> and Z(|1>)=-|1>.
But remember, only the real probabilities have physical meaning. The probabilities are given by the squares of the amplitudes, so the quantum states -|1> and |1> are equivalent. Z thus preserves both basis vectors even though Z is not the identity matrix. Z applied to the special qubit (|0>+|1>)/√2 gives (|0>-|1>)/√2, which is not identical to (|0>+|1>)/√2. Although the Z transformation preserves both basis vectors, it changes every other qubit! One says that it changes the relative phase of a qubit.

The X and Y matrices

They both correspond to NOT, in that they swap |0> and |1>. X merely swaps, whereas Y swaps and changes the relative phase:
X = [[0,1]’,[1,0]’] and Y = [[0,-1]’,[1,0]’]. The symbol “’” denotes transposition.

The Hadamard matrix

The last and most important gate acting on a single qubit is defined by the Hadamard matrix H:
H = [[1,1]’,[1,-1]’]/√2.
This gate is often used to combine basis vectors:
H(|0>) = (|0>+|1>)/√2 and H(|1>) = (|0>-|1>)/√2.

I have now described 5 quantum gates that act on a single qubit. There are, however, infinitely many orthogonal matrices acting on one qubit. Every rotation produces an orthogonal matrix, and there are infinitely many of these, each of which can define a gate.

The no-cloning theorem

Classical circuits are based on the ability to copy an electrical signal by splitting a wire into two wires, the so-called fan-out operation. Classical reversible gates are special cases of quantum gates that pass qubits through. Classical reversible computation is therefore included in quantum computation with qubits. Can one make a copy of a qubit? How does one make copies of the basis vectors |0> and |1>? This is a necessity for classical computation.

This can be done using an auxiliary bit, by always setting the second input to 0 when applying CNOT:
CNOT(|00>) = |00> and CNOT(|10>) = |11>, so CNOT(|x0>) = |xx>, provided |x> is either |0> or |1>. Unfortunately, we do not end up with two copies in the general case. The result is entanglement, not two copies of a qubit. We can use CNOT to copy classical bits, not general qubits.

The term fan-out is used only in connection with classical computation. For the analogous idea in quantum computation one uses the word cloning. We want to make copies of qubits. We want a gate G with a general qubit |x> as its first input and |0> as a fixed second input. G should output two copies of |x>. Can such a gate exist without contradiction? I start by writing down the required conditions:
1. G(|0>⊗|0>) = |0>⊗|0> = |00>
2. G(|1>⊗|0>) = |1>⊗|1> = |11>
3. G((|0>+|1>)⊗|0>/√2) = (|0>+|1>)⊗(|0>+|1>)/2

In the compact notation, condition (3) reads:
G((|00>+|10>)/√2) = (|00>+|01>+|10>+|11>)/2
Note: the fixed input |0> does not appear in the tensor product on the right-hand side.

Like any matrix, G must be linear; this implies:
G(|00>+|10>) = G(|00>) + G(|10>) = |00>+|11>
The last equality follows from (1) and (2). Dividing by √2 gives:
G((|00>+|10>)/√2) = (|00>+|11>)/√2, but (3) implied:
G((|00>+|10>)/√2) = (|00>+|01>+|10>+|11>)/2.
This is a contradiction. The only possible conclusion is that it is impossible to construct a gate that can clone general qubits.
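The contradiction can also be seen numerically. A minimal Python sketch (my illustration, using CNOT as the candidate copier G from the previous section):

import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

plus = np.array([1, 1]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
zero = np.array([1, 0])

out = CNOT @ np.kron(plus, zero)       # what linearity actually produces
wanted = np.kron(plus, plus)           # what cloning would require

print(np.round(out, 3))                # [0.707 0. 0. 0.707], entangled
print(np.round(wanted, 3))             # [0.5 0.5 0.5 0.5], two copies
print(np.allclose(out, wanted))        # False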

Quantum and classical computation

That a quantum computer cannot copy a qubit might seem a serious limitation. A classical computer constantly copies data back and forth between registers and storage. How could one do without this ability? If we only send the two basis vectors |0> and |1> through a CNOT gate, the result is identical to sending the two classical bits 0 and 1 through CNOT. A quantum computer can therefore perform reversible classical computations. That a qubit cannot be cloned is actually a great advantage, since the theorem permits secure transmission of data.

 

ESA to intercept a comet

Comet Interceptor

19 June 2019: ‘Comet Interceptor’ has been selected as ESA’s new fast-class mission in its Cosmic Vision Programme. Comprising three spacecraft, it will be the first to visit a truly pristine comet or other interstellar object that is only just starting its journey into the inner Solar System.

The mission will travel to an as-yet undiscovered comet, making a flyby of the chosen target when it is on the approach to Earth’s orbit. Its three spacecraft will perform simultaneous observations from multiple points around the comet, creating a 3D profile of a ‘dynamically new’ object that contains unprocessed material surviving from the dawn of the Solar System.

“Pristine or dynamically new comets are entirely uncharted and make compelling targets for close-range spacecraft exploration to better understand the diversity and evolution of comets,” says Günther Hasinger, ESA’s Director of Science.

“The huge scientific achievements of Giotto and Rosetta – our legacy missions to comets – are unrivalled, but now it is time to build upon their successes and visit a pristine comet, or be ready for the next ‘Oumuamua-like interstellar object.”

MISSION

Comet Interceptor will be a new type of mission, launched before its primary target has been found.

The only way to encounter dynamically new comets or interstellar objects is to discover them inbound with enough warning to direct a spacecraft to them. The time between their discovery, perihelion, and departure from the inner Solar System has until recently been very short, historically months to a year: far too little time to prepare and launch a spacecraft. This timescale is, however, lengthening rapidly, with recent advances allowing observational surveys to cover the sky more deeply, coherently, and rapidly, such as the current Pan-STARRS and ATLAS surveys, and the Large Synoptic Survey Telescope under construction in Chile, LSST (www.lsst.org).

Long Period Comets are now discovered much further away, considerably more than a year pre-perihelion; e.g. C/2017 K2 (Pan-STARRS) was discovered beyond Saturn’s orbit in 2017, and will pass perihelion in 2022. From 2023, LSST will conduct the most sensitive search for new comets ever, providing a true revolution in understanding their populations, and making this mission possible.

Comet Interceptor will be launched with the ESA ARIEL spacecraft in 2028, and delivered to the Sun-Earth Lagrange Point L2. It will be a multi-element spacecraft comprising a primary platform which also acts as the communications hub, and sub-spacecraft, allowing multi-point observations around the target. All spacecraft will be solar powered. The spacecraft will remain connected to each other at L2, where they will reside until directed to their target. The mission cruise phase will last months to years.

Before the encounter, the spacecraft will separate into its constituent elements, probably a few weeks pre-flyby. For very active comets, separation will be earlier, to maximize the separation of the spacecraft elements, whilst for low-activity targets, separation will occur only a few days before the encounter takes place.

SCIENCE

The mission’s primary science goal is to characterise, for the first time, a dynamically new comet or interstellar object, including its surface composition, shape, and structure, and the composition of its gas coma. A unique, multi-point ‘snapshot’ measurement of the comet-solar wind interaction region is to be obtained, complementing single-spacecraft observations made at other comets.

Additional science will include multi-point studies of the solar wind pre- and post-encounter over gradually-changing separation distances.

The proposed instruments for the main spacecraft and the accompanying sub-spacecraft are the following:
Spacecraft A: (ESA)

  • CoCa: Comet Camera – to obtain high resolution images of the comet’s nucleus at several wavelengths.
  • MIRMIS: Multispectral InfraRed Molecular and Ices Sensor – to measure the heat radiation being released from the comet’s nucleus and study the molecular composition of the gas coma.
  • DFP: Dust, Field, and Plasma – to understand the charged gases, energetic neutral atoms, magnetic fields, and dust surrounding the comet.

Spacecraft B1: (JAXA)

  • HI: Hydrogen Imager – UV camera devoted to studying the cloud of hydrogen gas surrounding the target
  • PS: Plasma Suite – to study the charged gases and magnetic field around the target
  • WAC: Wide Angle Camera – to take images of the nucleus around closest approach from a unique viewpoint

Spacecraft B2: (ESA)

  • OPIC: Optical Imager for Comets – mapping of the nucleus and its dust jets at different visible and infrared wavelengths.
  • MANIaC: Mass Analyzer for Neutrals and Ions at Comets – a mass spectrometer to sample the gases released from the comet.
  • EnVisS: Entire Visible Sky coma mapper – to map the entire sky within the comet’s head and near-tail, to reveal changing structures within the dust, neutral gas, and ionized gases.
  • DFP: Dust, Field, and Plasma (near-match of DFP sensors on spacecraft A) – to understand the charged gases, energetic neutral atoms, magnetic fields, and dust surrounding the comet.

Logic, gates, and circuits

In the middle of the 19th century, George Boole realized that certain parts of logic can be expressed as algebra, using functions, or operators, acting on the two values true (T) and false (F). The three basic operators are ¬ (NOT), ∧ (AND), and ∨ (OR). The Boolean operators are defined by truth tables:

P:(T,F) ⇒ ¬P:(F,T)
P:(T,T,F,F),Q:(T,F,T,F) ⇒ P∧Q:(T,F,F,F)
P:(T,T,F,F),Q:(T,F,T,F) ⇒ P∨Q:(T,T,T,F)
P:(T,T,F,F),Q:(T,F,T,F) ⇒ P⊕Q:(F,T,T,F), where ⊕ is exclusive or.
P:(T,T,F,F),Q:(T,F,T,F) ⇒ ¬P:(F,F,T,T),¬Q:(F,T,F,T) ⇒ ¬P∧¬Q:(F,F,F,T)
The truth table for the negation of (¬P∧¬Q) is:
P:(T,T,F,F),Q:(T,F,T,F) ⇒ ¬(¬P∧¬Q):(T,T,T,F)
P∨Q ≡ ¬(¬P∧¬Q), where ≡ means logically equivalent to.
In the same way we find:
P⊕Q ≡ (P∧¬Q)∨(¬P∧Q)
Using the expression for OR we get:
P⊕Q ≡ ¬(¬(P∧¬Q)∧(¬(¬P∧Q)))
Both ∨ and ⊕ can thus be replaced by ¬ and ∧. This method works quite generally for other expressions.
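Both equivalences can be verified exhaustively over all truth assignments; a minimal Python check (my illustration):

from itertools import product

for P, Q in product([True, False], repeat=2):
    # P OR Q written with NOT and AND only
    assert (P or Q) == (not ((not P) and (not Q)))
    # P XOR Q written with NOT and AND only
    assert (P != Q) == (not ((not (P and (not Q))) and (not ((not P) and Q))))
print("both equivalences hold for all four inputs")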

Boolean functions

The logical operators can be regarded as Boolean functions. A function of 3 variables P, Q, and R, f(P,Q,R), is defined by a truth table with 2³ = 8 values:
P:(T,T,T,T,F,F,F,F),Q:(T,T,F,F,T,T,F,F),R:(T,F,T,F,T,F,T,F) ⇒ f(P,Q,R):(…)
For every function f(P,Q,R) there is an equivalent expression containing only the functions ¬ and ∧. One uses exactly the same method I used to find an expression for P∨Q; the method works quite generally. Any f defined by a truth table is logically equivalent to an expression involving only the functions ¬ and ∧. One therefore says that {¬,∧} is a functionally complete set of Boolean operators. It may seem surprising that we can produce every function defined by a truth table using only ¬ and ∧, but we can do even better: an arbitrary Boolean function is logically equivalent to an expression that uses only the NAND operator.

NAND

NAND is a combination of NOT and AND. It is written with the symbol ↑ and defined as:
P↑Q ≡ ¬(P∧Q)
The truth table for the special case ¬(P∧P):
P:(T,F) ⇒ P∧P:(T,F) ⇒ ¬(P∧P):(F,T). This is the truth table for ¬P:
¬(P∧P) ≡ ¬P ≡ P↑P
I use the fact that NOT is reversible: ¬¬P ≡ P:
P∧Q ≡ ¬(¬(P∧Q)) ≡ ¬(P↑Q) ≡ (P↑Q)↑(P↑Q)

I have now shown that both ∧ and ¬ have equivalent expressions containing only the Boolean operator ↑. This proves that NAND is functionally complete: an arbitrary Boolean operator can be rewritten as an equivalent expression using only NAND.
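The same claim in executable form: a minimal Python sketch (my illustration) that builds NOT and AND from NAND alone and checks them against the built-in operators:

from itertools import product

def nand(p, q):
    return not (p and q)

def not_(p):                  # P↑P
    return nand(p, p)

def and_(p, q):               # (P↑Q)↑(P↑Q)
    return nand(nand(p, q), nand(p, q))

for p, q in product([True, False], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
print("NOT and AND recovered from NAND alone")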

Boolean variables take one of two values. Traditionally the values are T and F, but one can also use 0 and 1. Boolean expressions can then be regarded as functions of bit values.

There are two possible choices for replacing T and F. Conventionally F is replaced by 0 and T by 1. T is conventionally listed before F, whereas 0 is listed before 1. This means that a truth table expressed with 0 and 1 has its rows in reverse order compared with the same table for T and F, as the OR table shows:
P:(T,T,F,F),Q:(T,F,T,F) ⇒ P∨Q:(T,T,T,F)
P:(0,0,1,1),Q:(0,1,0,1) ⇒ P∨Q:(0,1,1,1)

Gates

Claude Shannon showed (as a student at MIT) that all of Boolean algebra can be carried out using electrical switches. This is one of the fundamental ideas behind circuit design in all modern computers.

An electrical pulse is either sent (T or 1) or not sent (F or 0) at discrete time intervals. The combinations of switches that correspond to the binary operators described above are called gates. The most common gates have been assigned special diagram symbols. The NOT gate has 1 input wire and 1 output wire. The AND gate has 2 input wires and 1 output wire. The OR gate likewise has 2 input wires and 1 output wire. The same holds for the NAND gate.

Circuits

We can build circuits by connecting the gates described above. Circuits are linear and are read from left to right. We write input bits to the wires on the left side and read output bits from the wires on the right side. An interesting example is P↑P. We want to write the same bit value, P, to both input wires of a NAND gate. This is achieved by splitting the signal into two wires. The process of splitting a signal into several copies is called fan-out.

NAND is a universal gate

We can construct a circuit that computes an arbitrary Boolean function using only NOT and AND gates. A gate from which every Boolean function can be built in this way is traditionally called universal. NAND is a universal gate. We must, however, use fan-out to get rid of NOT and AND in favor of the universal NAND. It might seem obvious that one can copy a signal simply by feeding it to several wires; but it turns out that we cannot apply this trick to a qubit.

Gates and computation

Exclusive or, with the binary operator ⊕, is defined by:
0⊕0 = 0, 0⊕1 = 1, 1⊕0 = 1, 1⊕1 = 0.
This can be compared with the addition of even and odd integers. This addition is called addition modulo 2. The XOR gate corresponds to the operator ⊕. XOR can be used to construct a half-adder circuit, which adds two binary digits. It is built from one XOR gate and one AND gate. XOR computes the sum digit and AND computes the carry. This circuit uses fan-out.
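A minimal Python version of the half-adder (my illustration; the fan-out is implicit in x and y each being used twice):

def half_adder(x, y):
    # XOR gives the sum digit, AND gives the carry
    return x ^ y, x & y

for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        print(f"{x} + {y} -> carry {c}, sum {s}")
# 1 + 1 -> carry 1, sum 0, i.e. binary 10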

Reversible computation

The study of reversible gates and reversible computation began with the thermodynamics of computation. Shannon defined information as negative entropy, an idea he took from thermodynamic entropy. How closely are these two entropies related? Can the theory of computation be expressed using concepts from thermodynamics? Is there a minimum for the energy required to carry out a computation? John von Neumann conjectured that heat is generated when information is lost. Rolf Landauer derived the minimal energy required to erase one bit of information. This energy is called Landauer's limit.

No information is lost if the computation is reversible, and it can then be carried out without generating heat. In the following I will go through 3 reversible gates: CNOT, Toffoli, and Fredkin.

Controlled Not Gate

CNOT has 2 inputs and delivers 2 outputs. The first input is a control bit. The second input bit leaves CNOT unchanged if the first input bit is 0. CNOT acts as NOT on the second input bit if the first input bit is 1:
CNOT(x,y) = (x,x⊕y)
This operation is reversible. We can construct a circuit performing it using fan-out and an XOR gate. CNOT has the useful property of being its own inverse:
CNOT(x,x⊕y) = (x,x⊕x⊕y) = (x,y), where I use x⊕x = 0 and 0⊕y = y.
Note, however, that CNOT by itself is not universal: it only computes functions that are linear in ⊕ and cannot express AND. Universality requires a 3-bit reversible gate such as Toffoli or Fredkin, described below.
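A quick Python check (my illustration) that applying CNOT twice returns the original pair:

def cnot(x, y):
    return x, x ^ y

for x in (0, 1):
    for y in (0, 1):
        assert cnot(*cnot(x, y)) == (x, y)
print("CNOT is its own inverse")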

The Toffoli Gate

The Toffoli gate, invented by Tommaso Toffoli, has 3 inputs and 3 outputs. The first 2 inputs are control bits. The gate acts as NOT on the third input if the first 2 inputs are both 1. The gate is given by the function:
T(x,y,z) = (x,y,(x∧y)⊕z)
This gate is also its own inverse:
T(x,y,(x∧y)⊕z) = (x,y,(x∧y)⊕(x∧y)⊕z) = (x,y,z),
since (x∧y)⊕(x∧y) = 0 and 0⊕z = z.
One can show that Toffoli is a universal gate.
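The same check for Toffoli, together with the observation behind its universality: setting z = 0 makes the third output x∧y. A minimal Python sketch (my illustration):

def toffoli(x, y, z):
    return x, y, (x & y) ^ z

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert toffoli(*toffoli(x, y, z)) == (x, y, z)  # self-inverse
        assert toffoli(x, y, 0)[2] == (x & y)   # AND falls out when z = 0
print("Toffoli is self-inverse and yields AND for z = 0")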

The Fredkin Gate

Like Toffoli, Fredkin has 3 input wires and 3 output wires. The first is a control wire. If it receives a 0, the next 2 wires pass straight through to the output. If it instead receives a 1, the next 2 wires swap places between input and output. The control wire itself passes straight through to the output. Fredkin behaves like a railway switch. If the output of this gate is fed directly into the same gate, the final output equals the input to the first gate. Fredkin is thus its own inverse, defined by:
F(0,y,z) = (0,y,z), F(1,y,z) = (1,z,y)
This definition is unusual in not using the standard Boolean operators.

The output of this gate is 3 binary digits. The first always equals the binary input x. The second becomes 1 if either x = 0 and y = 1 or x = 1 and z = 1, which is expressed as (¬x∧y)∨(x∧z). The third becomes 1 if either x = 0 and z = 1 or x = 1 and y = 1, which is expressed as (¬x∧z)∨(x∧y). Fredkin can therefore be defined by the Boolean function:
F(x,y,z) = (x,(¬x∧y)∨(x∧z),(¬x∧z)∨(x∧y)).
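A minimal Python sketch (my illustration) checking that the railway-switch description and the Boolean formula agree, and that Fredkin is its own inverse:

def fredkin(x, y, z):
    # Boolean form: (x, (not x and y) or (x and z), (not x and z) or (x and y))
    return x, ((not x) and y) or (x and z), ((not x) and z) or (x and y)

for x in (False, True):
    for y in (False, True):
        for z in (False, True):
            switch = (x, z, y) if x else (x, y, z)   # the switch description
            assert fredkin(x, y, z) == switch
            assert fredkin(*fredkin(x, y, z)) == (x, y, z)  # self-inverse
print("Fredkin matches its Boolean definition and inverts itself")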

One can further show that Fredkin is a universal gate, so all Boolean operations can be performed using this reversible gate.

A reversible gate can thus be constructed from the Boolean operators ¬, ∧, and ∨. It came as a surprise to me that all classical computations can be performed reversibly using Toffoli or Fredkin gates.

 

Compact spherical tokamak

Towards a compact spherical tokamak fusion pilot plant

A. E. Costley
Published:

The question of size of a tokamak fusion reactor is central to current fusion research especially with the large device, ITER, under construction and even larger DEMO reactors under initial engineering design. In this paper, the question of size is addressed initially from a physics perspective. It is shown that in addition to size, field and plasma shape are important too, and shape can be a significant factor. For a spherical tokamak (ST), the elongated shape leads to significant reductions in major radius and/or field for comparable fusion performance. Further, it is shown that when the density limit is taken into account, the relationship between fusion power and fusion gain is almost independent of size, implying that relatively small, high performance reactors should be possible. In order to realize a small, high performance fusion module based on the ST, feasible solutions to several key technical challenges must be developed. These are identified and possible design solutions outlined. The results of the physics, technical and engineering studies are integrated using the Tokamak Energy system code, and the results of a scoping study are reviewed. The results indicate that a relatively small ST using high temperature superconductor magnets should be feasible and may provide an alternative, possibly faster, ‘small modular’ route to fusion power.

This article is part of a discussion meeting issue ‘Fusion energy using tokamaks: can development be accelerated?’.

Introduction

Research with tokamaks has been ongoing for more than 50 years and for most of the time it has generally been considered that in order to generate net fusion power tokamak fusion reactors will have to be large and powerful; a major radius of ≥6 m, plasma volume ≥1000 m³, and operation with fusion power ≥1 GW typically being considered necessary. The large-scale ITER device currently under construction in France is the latest device in this line of approach, and designs of even larger and more powerful demonstration (DEMO) reactors are underway.

Recent work, however, has shown that an approach based on much smaller and lower power devices may be possible. The approach is based on a re-evaluation of the empirical scaling of energy confinement time with machine parameters such as size and field, and the adoption of a relatively new technology, high temperature superconductors (HTSs), for magnets. The shape of the plasma is important too. The work indicates that much smaller devices based on the spherical tokamak (ST) configuration, perhaps with a major radius of 1.5-2.0 m, a volume of 50-100 m³ and operating at relatively low power levels, 100-200 MW, may be feasible. Smaller devices would open the possibility of a modular approach to fusion power; that is, one where single or multiple relatively small, low power devices would be used together to achieve the required power. Smaller and less expensive fusion modules would enable faster development cycles and thereby speed up the realization of fusion power.

Figure 1: Schematic of conventional and spherical tokamaks. The aspect ratio A=R0/a and the elongation κ=b/a.

Spherical tokamaks have a much smaller ratio of plasma major radius (R0) to plasma minor radius (a) than conventional tokamaks such as JET and ITER; they resemble the shape of a cored apple rather than the more conventional tokamak shape of a doughnut (figure 1). Research has shown that STs have beneficial properties from a reactor standpoint such as operation at high plasma pressure relative to the pressure of the confining magnetic field, and the generation of higher levels of self-driven current within the plasma. This aspect is especially important. Auxiliary current drive systems are inefficient and thus can lead to substantial amounts of re-circulating power, and that power could be a major drain on the potential economics of a fusion reactor. There are also indications that STs have higher levels of energy confinement relative to conventionally shaped tokamaks. STs share many of the challenges experienced in the development of the larger devices, for example the handling of the plasma exhaust in the divertor region where the power loads will be at the limit of available materials, and the installation of shielding on the inboard side necessary to protect the central column from the intense neutron and gamma radiation. The technical solutions being developed for the larger devices can be adapted and used on STs. The positive performance characteristics combined with potential solutions to the technical problems make STs particularly attractive for the compact approach.

In this paper, the work that is ongoing to realize this alternative approach to fusion power is reviewed. First, the question of size is addressed in general terms from a physics perspective. It is shown that it is not just size that is important; magnetic field and shape are important too, and the interplay between these parameters is developed. The question of fusion power is also important since that determines loads on the internal tokamak components and will limit the minimum possible device size. As shown in previous papers, the two key reactor performance parameters, i.e. the fusion gain, which is the power produced divided by the input power, and the fusion power, are found to be directly linked. Using the latest empirical scalings for the energy confinement time, it is shown that the power needed for a useful fusion gain is three to four times lower than previously thought necessary. Taken together these findings indicate that smaller fusion devices based on the spherical tokamak should be feasible.

The realization of a relatively small, low power fusion module will depend on satisfactory solutions being developed to several key technical challenges such as the superconducting magnets that provide the plasma confining magnetic field, the inner shield that protects the central column from neutron and gamma radiation that potentially could cause material damage, and the divertor that handles the plasma exhaust.

The fusion triple product

The most important figure of merit of a fusion plasma is the product of the density (n), temperature (T) and energy confinement time (τE), nTτE. This is known as the fusion triple product and is derived from the work of John Lawson in 1957. For net fusion power, nTτE must be greater than 1×10²¹ m⁻³ keV s. The progress towards fusion can be measured with nTτE.
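
As a toy illustration only (the parameter values below are round, hypothetical numbers, not taken from the paper), the triple product is simply a product of three numbers compared against this threshold:

# Hypothetical, illustrative plasma parameters -- not values from the paper.
n = 1.0e20     # density in m^-3
T = 15.0       # temperature in keV
tau_E = 3.0    # energy confinement time in s

triple_product = n * T * tau_E    # in m^-3 keV s
lawson_threshold = 1.0e21         # threshold quoted in the text

print(f"n*T*tau_E = {triple_product:.2e} m^-3 keV s")
print("above threshold" if triple_product > lawson_threshold else "below threshold")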

Figure 2: Improvement of the fusion triple product (relative units) with time.

Figure 2 shows how the triple product has increased with time as larger tokamaks operating at higher magnetic field and higher plasma current were brought into operation. As can be seen, the rate of progress was very rapid from the late 1960s through to about 2000 but has slowed since, partly because of delays with ITER. Insight into key aspects of achieving net fusion power with tokamaks can be gained by looking closer at the fusion triple product.

The density and temperature are straightforward parameters but the energy confinement time is complicated. The energy confinement time characterizes the rate at which heat is transported from the hot central core of the plasma to the relatively cold surrounding material surfaces. Within a tokamak plasma there are multiple, interacting phenomena occurring simultaneously on a wide range of temporal and spatial scales. These interactions lead to the transport of heat through processes that are essentially turbulent. While great progress has been made in understanding these processes it is not yet possible to determine the transport of heat through the plasma by a first principles approach. This is not an unfamiliar situation. In many areas of physics and engineering, situations are too complex for a 'first principles' approach. In such situations, it is common to perform experiments on devices or structures of different scale and to determine how the parameter of interest scales with device parameters.

 

John Stewart Bell's inequality

There was a philosophical dispute between Albert Einstein and Niels Bohr about the correct interpretation of the quantum mechanical measurement. Bohr postulates that every measurement of a qubit corresponds to a basis consisting of two orthogonal unit vectors (qubits). The measurement itself causes a quantum jump of the measured qubit to one of the two orthogonal unit vectors. The first unit vector corresponds to the classical bit 0, the second to the classical bit 1. Einstein advocated local realism, which assumes that the state of a particle can only depend on the states of other local particles. Einstein claimed that the quantum jump depends on some unknown hidden variables.

The dispute started around 1925 and lasted for many years. Schrödinger came up with an ingenious thought experiment in support of Einstein's view. He proposed that two people, Alice and Bob, should perform joint measurements, with the same basis vectors, on an entangled state of two qubits:
T = (|a0>⊗|b0> + |a1>⊗|b1>)/√2

There are a couple of oddities here: Why are two different names used for the same basis: (|a0>,|a1>) and (|b0>,|b1>)? And what is this strange multiplication sign ⊗? The two things are connected. ⊗ denotes a special multiplication that does not allow the two sides to be swapped. This lets us assign the left side to Alice and the right side to Bob, which is why the two bases are given different names.

When Alice and Bob measure the state T, they either both get the bit combination 00 or both get the bit combination 11. Their joint measurements are totally entangled. The correlation between their measurements is 100%.

There is no requirement on the distance between Alice and Bob. Alice can be near Earth and Bob near Alpha Centauri. How can Einstein's requirement of local realism be satisfied? Alice's measurement means that Bob's result is immediately known. Einstein's special theory of relativity requires that no information can travel faster than light. But Alice and Bob have no way of deciding who measures first. The theory only predicts that the two parties' measurements are correlated. It does not predict cause and effect between the two parties' measurements. No information is exchanged. The two qubits did, however, interact when they were at nearly the same place at the same time. The postulated hidden variables had the opportunity to act during this brief interaction. The matter is far from settled by Schrödinger's thought experiment.

Bell's inequality is based on 3 random measurements of an entangled qubit pair with 3 orthonormal bases with the direction angles:
θ = (0°, 120°, -120°) = (0, 2π/3, -2π/3) = (a, b, c).
An ordered orthonormal basis corresponding to the direction angle θ is given by:
([cos(θ/2), -sin(θ/2)]’, [sin(θ/2), cos(θ/2)]’).
The 3 bases (a, b, c) are thus given by:
a = ([1,0]’,[0,1]’)
b = ([cos(π/3),-sin(π/3)]’,[sin(π/3),cos(π/3)]’) = ([1,-√3]’/2,[√3,1]’/2)
c = ([cos(π/3),sin(π/3)]’,[-sin(π/3),cos(π/3)]’) = ([1,√3]’/2,[-√3,1]’/2)
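
The three bases follow mechanically from the half-angle formula above; a small numpy sketch (my own) reproduces them and checks orthonormality:

import numpy as np

def basis(theta):
    # Ordered orthonormal basis for direction angle theta (note the half angle).
    h = theta / 2
    return np.array([np.cos(h), -np.sin(h)]), np.array([np.sin(h), np.cos(h)])

for name, theta in zip("abc", (0.0, 2 * np.pi / 3, -2 * np.pi / 3)):
    u, v = basis(theta)
    # Unit lengths and zero scalar product confirm orthonormality.
    assert np.allclose([u @ u, v @ v, u @ v], [1.0, 1.0, 0.0])
    print(name, np.round(u, 3), np.round(v, 3))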

The 3 bases can also be written with ket vectors, if one remembers that the second vector of a basis pair points in the direction opposite to the first vector:
a = (|0>,|π>)
b = (|π/3>,|-π/6>)
c = (|-π/3>,|π/6>)

A measurement in each of the directions (a, b, c) results in either a 0 or a 1. This gives us 8 configurations: 000, 001, 010, 011, 100, 101, 110, 111, where the first digit on the left gives us the answer if we measure with basis a, the middle digit gives us the answer if we measure with basis b, and the last digit gives us the answer if we measure with basis c.

We now produce a stream of qubit pairs, which we send to Alice and Bob. Each pair is in an entangled state for basis a:
T = (|0>|0> + |π>|π>)/√2
The probability of a jump between 2 qubits is given by the square of the scalar product of the two qubits. It is therefore useful to find all scalar products between the vectors in a and all vectors in b and c (transitions between vectors within a have already been covered):
<0|π/3> = [1,0][1,-√3]’/2 = 1/2
<0|-π/6> = [1,0][√3,1]’/2 = √3/2
<0|-π/3> = [1,0][1,√3]’/2 = 1/2
<0|π/6> = [1,0][-√3,1]’/2 = -√3/2
<π|π/3> = [0,1][1,-√3]’/2 = -√3/2
<π|-π/6> = [0,1][√3,1]’/2 = 1/2
<π|-π/3> = [0,1][1,√3]’/2 = √3/2
<π|π/6> = [0,1][-√3,1]’/2 = 1/2
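
These eight scalar products are easy to verify numerically; a small sketch (my own) with the ket vectors exactly as given above:

import numpy as np

ket0 = np.array([1.0, 0.0])    # |0>
ketpi = np.array([0.0, 1.0])   # |pi>
kets = {
    "pi/3": np.array([1, -np.sqrt(3)]) / 2,
    "-pi/6": np.array([np.sqrt(3), 1]) / 2,
    "-pi/3": np.array([1, np.sqrt(3)]) / 2,
    "pi/6": np.array([-np.sqrt(3), 1]) / 2,
}

for name, v in kets.items():
    # The squares of these scalar products are the jump probabilities.
    print(f"<0|{name}> = {ket0 @ v:+.3f}   <pi|{name}> = {ketpi @ v:+.3f}")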

Alice randomly chooses one of the 3 directions (a, b, c) with probability 1/3 without writing down the direction, but she writes down 0 or 1 depending on whether the jump goes to the first or the second vector of the chosen basis. Shortly afterwards Bob performs the same procedure, also writing down either 0 or 1. Alice and Bob each end up with a long list of 0s and 1s. They compare the long lists of bits. They write down the letter A if they agree on the two bits; otherwise they write down the letter D. What fraction of the letters are A's? Bell realized that the quantum mechanical model and the classical model give different numbers for this answer.

The quantum mechanical answer

I found earlier that Alice and Bob get the same result if they both measure in the same direction. What happens if they choose different bases? I will examine the case where Alice chooses b and Bob chooses c. Both parties receive the state
(|0>|0>+|π>|π>)/√2, which in Alice's basis b can be rewritten as (|π/3>|π/3>+|-π/6>|-π/6>)/√2. A measurement makes the original state jump to either |π/3>|π/3> or |-π/6>|-π/6> with equal probability. Alice writes down 0 if it jumps to |π/3>|π/3>, and 1 if it jumps to |-π/6>|-π/6>.

Bob must now perform his measurement. Suppose Alice measured the state |π/3>|π/3>, so that Bob's qubit (the second in the product) is in the state |π/3>, which can be expressed as a linear combination of Bob's basis vectors (|-π/3>,|π/6>). We find the coefficients by multiplying |π/3> by a matrix whose rows are Bob's basis vectors written as bra vectors:
M|π/3> = [<-π/3|,<π/6|]’|π/3> = [<-π/3|π/3>,<π/6|π/3>]’ =
[[1, √3][1, -√3]’/4, [-√3, 1][1, -√3]’/4]’ = [-1/2, -√3/2]’.

Bob will measure 0 with probability (-1/2)² = 1/4 and 1 with probability (-√3/2)² = 3/4. So if Alice measures 0, Bob will also measure 0 with probability 1/4. One can show in the same way that Bob will also measure 1 with probability 1/4 if Alice has first measured 1.

The other cases are similar: if Alice and Bob measure in different directions, they get the same result in 1/4 of the cases and different results in 3/4 of the cases.

They measure in the same direction in 1/3 of the cases and then always get the same result. They measure in different directions in 2/3 of the cases and then get the same result in 1/4 of the cases. The total probability of getting an A is:
(1/3)×1 + (2/3)×(1/4) = 1/2.
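
The whole quantum calculation can be compressed into a few lines; a sketch (my own) that averages the agreement probability over the 9 equally likely basis pairs:

import numpy as np

def basis(theta):
    h = theta / 2
    return np.array([np.cos(h), -np.sin(h)]), np.array([np.sin(h), np.cos(h)])

bases = [basis(t) for t in (0.0, 2 * np.pi / 3, -2 * np.pi / 3)]

# The entangled pair (|00> + |11>)/sqrt(2) as a vector in the product space.
T = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

total = 0.0
for A in bases:        # Alice's choice of basis
    for B in bases:    # Bob's choice of basis
        # Probability that both jump to vector 0 plus both to vector 1.
        agree = sum((np.kron(A[i], B[i]) @ T) ** 2 for i in (0, 1))
        total += agree / 9    # each of the 9 basis pairs has probability 1/9
print(f"quantum agreement probability = {total:.4f}")    # prints 0.5000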

The classical answer

The dispute between Einstein and Bohr was really about the wave nature of the electron. The electron's spin only entered the picture later. Louis de Broglie had proposed in 1924 that electrons propagate as waves. This was demonstrated experimentally in 1927. The electron travels along several paths between the points P and Q. The probability amplitudes of the many paths interfere at Q. The square of the total amplitude is the probability of finding the electron at Q. Einstein believed that the electron's many paths must be due to some unknown variables. Bohr believed that nature only allows us to predict a probability amplitude. Most people considered it a philosophical dispute that could not be settled by any experiment.

Over the years the philosophical dispute moved from the electron's path to its spin. Both quantum mechanical theory and measurements show that the electron's spin can only be measured along one particular direction at a time, and that there are only 2 equally probable spin states. These 2 spin states are the basis of the classical bits that emerge when a quantum bit, or qubit, is measured. The question of the possible existence of hidden variables matters for whether a quantum computer can be built at all.

The Northern Irish physicist John Stewart Bell was convinced that Einstein's arguments were correct. In 1964 he published an inequality based on a combination of the well-documented equal spin probabilities and classical statistics. The quantum mechanical calculations of the previous section do not satisfy Bell's inequality. So a philosophical question can, after all, be settled by an experiment.

The classical viewpoint is that the measurements in all directions are determined from the very beginning. As already mentioned, there are 3 directions. A measurement in each direction can give either a 0 or a 1. This gives us 8 configurations: 000, 001, 010, 011, 100, 101, 110, 111, where the first digit is the answer if we measure with basis a, the middle digit is the answer if we measure with basis b, and the last digit is the answer if we measure with basis c.

Entanglement simply means that the configurations of Alice's and Bob's qubits are identical: if Alice's qubit has configuration 001, then so does Bob's. We must now work out what happens when Alice and Bob each choose a direction. For example, if their electrons are in configuration 001, and Alice measures in basis a while Bob measures in basis c, then Alice will measure 0 and Bob will measure 1. They disagree on the result.

The table below lists all the possibilities. The left column lists the 8 configurations. The top row gives the possible measurement bases for Alice and Bob. Alice's basis is given first, followed by Bob's: (b,c) means that Alice chooses basis b and Bob chooses basis c. The entries in the table show whether the measurements A(gree) or D(isagree).

Configuration versus measurement direction

Configuration  (a,a)  (a,b)  (a,c)  (b,a)  (b,b)  (b,c)  (c,a)  (c,b)  (c,c)
000              A      A      A      A      A      A      A      A      A
001              A      A      D      A      A      D      D      D      A
010              A      D      A      D      A      D      A      D      A
011              A      D      D      D      A      A      D      A      A
100              A      D      D      D      A      A      D      A      A
101              A      D      A      D      A      D      A      D      A
110              A      A      D      A      A      D      D      D      A
111              A      A      A      A      A      A      A      A      A

We do not know which probabilities to assign to the individual configurations. There are 8 possible configurations, so it seems plausible that each occurs with probability 1/8, but they may not all be equal. The mathematical analysis will not assume particular values for these probabilities. We can, however, assign definite probabilities to the measured directions. Both Alice and Bob choose their 3 bases with equal probability, so each of the 9 possible basis pairs occurs with probability 1/9.

Note: every row contains at least 5 A's, so a given qubit pair with any configuration whatsoever has probability at least 5/9 of yielding an A. Since the probability of getting an A is at least 5/9 for each spin configuration, we can conclude that the overall probability must be at least 5/9, independently of the relative frequencies of the individual configurations.

This is Bell's inequality. Quantum mechanics tells us that Alice and Bob agree on the result exactly half of the time. The classical model tells us that Alice and Bob agree at least 5/9 of the time.
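
The classical bound can be confirmed by brute force; a minimal sketch (my own) that enumerates all 8 configurations and all 9 basis pairs:

from itertools import product

fractions = []
for config in product((0, 1), repeat=3):    # predetermined answers for (a, b, c)
    # Count agreement over the 9 equally likely basis pairs (i for Alice, j for Bob).
    n_agree = sum(config[i] == config[j] for i in range(3) for j in range(3))
    fractions.append(n_agree / 9)

print(min(fractions))    # 5/9: Bell's classical lower bound
print(max(fractions))    # 1.0 for the configurations 000 and 111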

Carrying out the test in practice is, however, a delicate matter. John Clauser and Stuart Freedman performed it for the first time in 1972. It showed that the quantum mechanical prediction is correct. The experiment has since been repeated in ever-improved versions. There is very little doubt that the classical model is wrong.

That a measurement automatically causes a quantum jump from one qubit to another, entirely outside the quantum mechanical time evolution, is quite baffling. The quantum jump is generally predicted in the form of a complex probability amplitude whose squared norm is the probability of the jump. Why does the jump only occur when the quantum state is observed? The universe evolves just fine without me having to observe it. My own favourite interpretation is that quantum mechanics is a theory of the evolution of the information about a microscopic system. A quantum state does not represent the physical system itself, but rather our maximal information about the system. A measurement narrows our ignorance of the system. It is therefore logical that the quantum state must change after a measurement, which has increased the information about the system. A quantum computer is IT for a microscopic system.