Tuesday, May 5, 2015

Boson-Sampling-Inspired Quantum Metrology

Our group at Louisiana State University has teamed up with researchers at Macquarie University in Sydney and Boise State University in Boise to produce a new publication in Physical Review Letters, entitled, “Linear Optical Quantum Metrology with Single Photons: Exploiting Spontaneously Generated Entanglement to Beat the Shot-Noise Limit.” Regular readers of this blog will know that Boson Sampling is a new paradigm in quantum computing whereby single photons, inputted into a linear optical interferometer, carry out a mathematical sampling problem that would be intractable on a classical computer. The buzz surrounding Boson Sampling is that, unlike universal linear optical quantum computing, the experimental implementation requires no special quantum gates, such as controlled-NOT gates, no feed-forward, no teleportation, nor any other fancy stuff. Identical single photons rattle around in the interferometer and are sampled in the number basis when they come out. Sounds simple, but a classical machine cannot efficiently simulate the sampling output, whereas the linear optical device does this quite easily. For our recent review on Boson Sampling the reader is encouraged to go here.

In spite of all the excitement about Boson Sampling as a new paradigm for quantum information processing, the Boson Sampling problem has no known practical application to any mathematics problem anybody is interested in. In some ways the situation is similar to the late 1980s and early 1990s, before Shor’s invention of his factoring algorithm, when the first quantum algorithm shown to give an exponential speedup was the Deutsch-Jozsa (DJ) algorithm, which allowed one to tell if a function was constant or balanced. While a very nice result, nobody really gave a rat’s ass whether a function was constant or balanced. It was however hoped that the DJ algorithm was just the tip of an iceberg, and indeed the rest of the iceberg was revealed when Shor’s factoring algorithm was discovered. That was an (apparent) exponential speedup on a problem that people cared deeply about.

So too do we hope that Boson Sampling is just the tip of the iceberg when it comes to the power of linear optical interferometers, with simple single-photon inputs, to carry out tasks that are not only impossible classically but also of practical interest. In that direction our paper makes a frontal attack on the berg with a metrological ice axe. The idea emerged from the understanding that in Boson Sampling, an exponentially large amount of number-path entanglement is generated through the natural evolution of the single photons in the interferometer, via repeated implementation of the Hong-Ou-Mandel effect at each beam splitter. It has been known for nearly 30 years that number-path entanglement is a resource for quantum metrology, beating the shot-noise limit, and so it was natural for us to ask if this hidden power in linear optics with single-photon inputs might be put to work for a metrological advantage. Our paper shows that this is indeed the case.
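
To see the Hong-Ou-Mandel effect and its connection to matrix permanents in the simplest possible setting, here is a small numerical sketch (my own illustration, not code from the paper). For identical photons traversing a linear interferometer, the transition amplitudes are permanents of submatrices of the interferometer's unitary; for two photons entering a 50/50 beam splitter, one per port, the coincidence amplitude vanishes, so the photons always exit bunched.

```python
# Hong-Ou-Mandel bunching from matrix permanents (illustration only).
import numpy as np
from itertools import permutations
from math import factorial, sqrt

def permanent(M):
    """Brute-force permanent; fine for the tiny matrices used here."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def output_probability(U, ins, outs):
    """Probability that single photons entering modes `ins` exit in modes `outs`
    (modes listed once per photon) for an interferometer with unitary U."""
    A = U[np.ix_(outs, ins)]                # submatrix with rows/columns repeated per photon
    norm = 1.0
    for m in set(ins):
        norm *= factorial(ins.count(m))
    for m in set(outs):
        norm *= factorial(outs.count(m))
    return abs(permanent(A)) ** 2 / norm

BS = np.array([[1.0, 1.0], [1.0, -1.0]]) / sqrt(2)   # 50/50 beam splitter unitary

print(output_probability(BS, [0, 1], [0, 1]))  # coincidence: 0.0 (the HOM dip)
print(output_probability(BS, [0, 1], [0, 0]))  # both photons exit port 0: 0.5
print(output_probability(BS, [0, 1], [1, 1]))  # both photons exit port 1: 0.5
```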

To briefly summarize our scheme, we send a sequence of single photons into a linear optical interferometer that contains an interferometric implementation of the Quantum Fourier Transform, coupled with a bank of phase shifters with an unknown phase that is to be measured. Our signal is the probability of sampling, at the exit ports, the same sequence of single photons that was sent in. The signal-to-noise analysis was quite challenging, as it involves the computation of the permanent of a large square matrix with complex entries. While in general this is classically intractable, to our surprise, something about the structure of the Quantum Fourier Transform seems to allow the permanent to be computed analytically in closed form. At least we conjecture this is so. We were able to eyeball a closed-form formula for the permanent of a matrix of any rank and confirm it out to rank 20 or so numerically, but a rigorous mathematical proof of the permanent formula is still wanting.
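
For readers who would like to poke at such a permanent conjecture numerically, the sketch below shows the kind of brute-force check involved: build a candidate interferometer matrix and evaluate its permanent with Ryser's formula. The QFT / phase-bank / inverse-QFT arrangement used here is only my guess at a toy stand-in and may differ from the exact layout in the paper; the permanent routine itself is generic.

```python
# Brute-force numerical check of a permanent conjecture: evaluate the permanent
# of a candidate interferometer matrix with Ryser's inclusion-exclusion formula
# (exponential time, but fine for small n). The QFT / phase-bank / inverse-QFT
# layout below is an illustrative guess, not necessarily the paper's arrangement.
import numpy as np
from itertools import combinations

def ryser_permanent(A):
    """Permanent of a square complex matrix via Ryser's formula."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

def qft_matrix(n):
    """The n x n quantum Fourier transform (discrete Fourier transform) unitary."""
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

def toy_interferometer(n, phi):
    """QFT, then a bank of phase shifters with phase m*phi on mode m, then inverse QFT."""
    V = qft_matrix(n)
    P = np.diag(np.exp(1j * np.arange(n) * phi))
    return V.conj().T @ P @ V

for n in range(2, 9):
    U = toy_interferometer(n, phi=0.3)
    print(n, ryser_permanent(U))
```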

Once we had the signal and variance analysis carried out, we were able to show (carefully counting resources) that the sensitivity of the device, which we christened the Quantum Fourier Transform Interferometer, is well below the classical shot-noise limit. It has been known for years that exotic number-path entangled states, such as N00N states, can beat the shot-noise limit, but N00N states are resource intensive to create in the first place, requiring either very strong Kerr nonlinearities or non-deterministic heralding. Here in our new paper we get super-sensitivity for free from the natural evolution of single photons in a passive linear optical interferometer. This then seems to be the first example of the Boson Sampling paradigm providing a quantum advantage in an arena of practical importance, namely quantum metrology.
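
For orientation, the two benchmark scalings in play are the shot-noise limit, where the phase uncertainty falls as 1/√N with the number of photons N, and the Heisenberg limit, where it falls as 1/N. The snippet below merely tabulates the two textbook curves; it makes no claim about the precise scaling our scheme achieves.

```python
# Textbook reference scalings for phase sensitivity with N photons: the
# shot-noise limit 1/sqrt(N) and the Heisenberg limit 1/N. A super-sensitive
# scheme lies somewhere below the first curve.
import math

for n in range(1, 21):
    snl = 1 / math.sqrt(n)   # shot-noise limit
    hl = 1 / n               # Heisenberg limit
    print(f"N={n:2d}  shot-noise limit={snl:.3f}  Heisenberg limit={hl:.3f}")
```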

Who knows what is left on this iceberg, still unexplored?

Saturday, April 18, 2015

UK National Strategy for Quantum Technologies

Just winding up a one-week trip to the UK where I attended the Bristol Quantum Information Technologies (BQIT) Workshop at the kind invitation of the organizers. There was some disagreement about how the acronym BQIT should be pronounced, but upon my arrival we instantly all agreed it should be B-QuIeT. The workshop was a lively set of short talks interspersed with panel discussions, and it was the first time I heard in some detail about the new UK National Strategy for Quantum Technology from none other than Sir Peter Knight himself, who was a speaker on one of the panels focusing on the UK Quantum Hubs Network. There was quite a bit of excitement in the air as Simon Benjamin (University of Oxford, Quantum Computing Hub) gushed effusively about writing a 12-page proposal that came in at £3 million per page!

There are four hubs dotting the UK countryside from Scotland to England, with a total five-year budget of £120M across all four, with foci in quantum communications, imaging, sensing, and computing. Complementing these hubs are at least three new quantum technologies doctoral training centers. The budget for the training centers was less clear, but I suppose altogether this is close to £200M for five years, potentially renewable for another five. And that, folks, is, as they say, new money.

All this activity seems to be coordinated by the UK Quantum Technologies Strategic Advisory Board, chaired by Prof. David Delpy, which has laid out a vision for a coordinated strategy in quantum technologies development in the UK.

It is somewhat disheartening to see all of this activity in the UK from the perspective of a researcher in the US, where Congress and the President can’t even seem to pass any new budget at all from year to year. I wish the UK program well, and I did hear that each hub has set aside funds for international collaborations, so I hope this will be the first of several trips to visit my quantum friends and colleagues on the far side of the big pond.

For young researchers interested in doing PhD or postdoctoral work in quantum technologies, you should follow your noses and follow the money. The UK seems to be the place where the quantum of action is at these days. 

Thursday, February 5, 2015

Guest Ghost Post: The Future of QIP: To parallelize or not?

This year’s Quantum Information Processing conference (QIP) was held in the beautiful and vibrant city of Sydney, Australia from the 12th to the 16th of January. Close to 225 researchers from across the world attended the conference. The talks were hosted at University of Technology Sydney (UTS). Runyao Duan led the local organizing committee, and its members were from UTS, University of Sydney, Macquarie University and University of Queensland. They did a splendid job in ensuring that the conference was a grand success.

The 18th edition of QIP featured about 40 talks and 150 posters covering various important advances in quantum information processing over the past year. A detailed summary of all the talks presented at QIP can be found on the Quantum Pontiff blog http://dabacon.org/pontiff/?p=10785, where Aram Harrow and Steve Flammia were “live-blogging” the conference. In this report, I shall focus on the things not covered by Aram and Steve, especially on the business meeting.

A lot of buzz and anticipation prevailed around this year’s business meeting at QIP. This was largely due to the pending decision on the question of whether “to parallelize or not parallelize” QIP.  Here is some background on the issue. QIP, as it stands today, is a single-track conference featuring two kinds of talks: 50-minute plenary talks and 30-minute talks. During a five-day period minus one free afternoon, this allows for about 40 talks over the entirety of the conference. However, the number of submissions to the conference has seen a steep increase over the years due to the explosion of research in quantum information processing. What began in the late nineties as a workshop with a few tens of submissions now receives several hundred submissions each year. Thus the acceptance rates at QIP are now terrifyingly low; the acceptance rate for a talk at this year’s QIP was just about 20%.

Each year the program committee has been faced with the increasingly difficult task of rejecting at least 20 to 30 submissions which they think are just as good as some of the other talks that make the cut. This has led the steering committee to consider introducing parallel sessions, with the view that it would allow for more talks. In order to hear public opinion on the issue, Stephanie Wehner posted a survey on the Web for the QIP community, and she presented the results of the survey at the business meeting. The public sentiment on the issue seemed largely in favor of parallel sessions. When the result of the survey was shared at the business meeting, however, a major concern was raised about the possible fragmentation of the community into sub-communities. In response to this concern, Peter Shor spoke about how not parallelizing QIP at this point could have the same fragmenting effect at a much graver level. Peter pointed out the precedent of STOC and FOCS, where the latter remained a single-session conference for a long time while the computer science community grew manyfold in size. Peter noted that in due course, when certain factions of the community felt that they weren’t being sufficiently accepted at the conference, they decided to split away and start conferences of their own. This is already beginning to happen in the quantum information community with the birth of various conferences such as QCrypt, QEC, and Beyond I.I.D. in Information Theory. These conferences provide venues for topics that are becoming more marginalized and less fashionable at the larger QIP conference.

Nevertheless, it was also pointed out by the steering committee that parallel sessions, even if introduced, would be used for only part of the program. For instance, the plenary talks would still be held in common, so parallelization could not result in a complete splitting of the community. At this point a question of logistics surfaced: a single track for plenary sessions plus two parallel sessions for other talks would require securing one big room and two small rooms at the conference venue, which could be more expensive. Barry Sanders, the lead organizer for QIP 2016, however, guaranteed in his presentation about the conference venue at Banff (near Calgary, Canada) that this would not be an issue at next year’s QIP. From the pulse of things at this year’s QIP, it seems rather likely that we will see parallel sessions in next year’s edition. Yet, this is by no means a certainty.

Another development worth mentioning from the business meeting was the proposal for open refereeing of papers at QIP. Aram Harrow and Steve Flammia, who had already implemented such a scheme at TQC (Theory of Quantum Computation, Communication and Cryptography) 2014, put forth the proposal. Aram explained why he thought referee reports of QIP submissions should be made publicly available. The real purpose cited was not the obvious ones---it was neither to make public the reasons why a paper is accepted or rejected, nor to push referees to write reports according to what this year’s program committee chair Ronald de Wolf called the “golden rule” of refereeing, namely to write referee reports the way one would like one’s own paper to be reviewed. The real reason cited was simply to make available expert summaries and critiques, which could immensely benefit other researchers, especially younger ones, but which otherwise go underutilized, aiding the publication decision alone. Although the general perception of the idea was positive, it seems unlikely that the QIP steering committee would recommend the scheme as a whole to the program committee. Ronald and Andrew Doherty raised concerns about how such a scheme could place a huge extra burden on the already over-burdened program committee. However, it seems likely that, as an intermediate step, the program committee summaries of the accepted talks will be made available to the public at QIP 2016, as was done earlier at TQC 2014.

The business meeting also saw ETH Zurich and Microsoft Research bid for hosting QIP 2017. The public opinion seemed to be in favor of the ETH bid for 2017, while it seemed that Microsoft could potentially host QIP during the subsequent year, i.e., 2018.

Earlier, proceedings at QIP this year kicked off with tutorial sessions during the weekend in the lead-up to the conference. Entry to the tutorials was included as part of the conference registration. Itai Arad of CQT covered the local Hamiltonian problem (I couldn’t make this one due to a flight delay). The second speaker of the day was Roger Colbeck of Univ. of York, who discussed the topic of device independence in quantum information processing. Roger described the goal of the device-independence model in the context of cryptography as providing unconditional security even while allowing for device failure or tampering, and discussed the various tools that go into proving security of protocols within the model. He also highlighted one of the main challenges of the approach: the need for protocols that allow reuse of the devices while still guaranteeing unconditional security. On the second morning, Krysta Svore of Microsoft gave a fascinating tutorial on the various components that go into the design and implementation of a quantum software architecture for the automated control and programming of tomorrow’s large-scale quantum computers. Later, Alexandre Blais of Univ. of Sherbrooke delivered the final tutorial on superconducting qubits. Addressing a largely theoretical audience, Alexandre did a splendid job of describing the basic physics behind the superconducting transmon qubit. He also discussed control and readout via strong coupling to a microwave resonator, along with results from various recent experiments.

The social aspects of the conference included a banquet and a rump session. The banquet was a fancy affair, being held on a showboat. The Sydney weather, which had been dull and rainy until then, gave way to glorious, UV-rich sunshine just in time for the banquet. The boat set sail from Sydney's Darling Harbour around half past seven with plenty of food, beer, wine, and the awesome folks from the conference. Some spectacular views of the Sydney skyline in the magical twilight soon followed, which were a real treat. Despite numerous warnings from the organizers, many participants unfortunately “missed the boat”.

This year’s rump session was a fun ride with the lighter side of the QIP community. The session was held at the “Manning” bar at the University of Sydney. Steve Flammia introduced himself as the “MR---the Master of Rump” for the night. Among the speakers, John Smolin poked fun at the ongoing trend of slapping the prefix “quantum” onto literally anything in the world in the name of quantum information research. Later, Daniel Gottesman decided to take the new proposal for open refereeing to a whole new level. While we’ve heard of double-blind refereeing, where the identity of the authors is concealed from the referees, Daniel suggested triple and quadruple blinding, where the text of the paper is hidden from the referee and the talk is concealed from the audience, respectively.

From excellent and stimulating talks and posters, through the intriguing business meeting, to the fun-filled banquet and rump session, QIP 2015 had it all. Parallel sessions or not, you can’t help but be excited about QIP 2016 already.

Kaushik P. Seshadreesan is a doctoral student in physics at Louisiana State University, working under the supervision of Jonathan P. Dowling and Mark M. Wilde. He will graduate with his PhD in quantum information theory and quantum metrology in May of 2015.

Tuesday, January 6, 2015

Linear optical quantum metrology with single photons --- Exploiting spontaneously generated entanglement to beat the shotnoise limit

Keith R. Motes, Jonathan P. Olson, Evan J. Rabeaux, Jonathan P. Dowling, S. Jay Olson, Peter P. Rohde


(Submitted on 6 Jan 2015)

Quantum number-path entanglement is a resource for super-sensitive quantum metrology and in particular provides for sub-shotnoise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place --- typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feed-forward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multi-photon walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer --- fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection --- is capable of significantly beating the shotnoise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

Tuesday, December 9, 2014

D-Wave, Boss, D-Wave!

Some of my colleagues (and my followers on both social and anti-social media) have been asking me to comment on the D-Wave Systems (so-called) quantum computer, in light of all the seemingly contradictory press releases, scientific papers, and blog posts that have appeared over the years. This request of me is similar to asking me to assess and comment on the status of a pond filled with ravenous blood-thirsty piranhas by stripping down to my Speedos, slathering myself with the blood of a fatted calf, and then diving into the pond and thrashing about. However, I can assure you, that the sight of me wearing Speedos would keep even the most ravenous piranha — or perhaps even Scott Aaronson? — at bay.

Just kidding Scott! Love ya bro! (Let the flames begin….)

In my recent book, I do indeed have a section on the D-Wave phenomenon, but that was written in 2012 and perhaps needs a light-hearted update. When I was writing the book, my friend and colleague, and known Grand Poobah of fine D-Wave Systems hardware, Daniel Lidar, asked me if I wanted him to read over my section on D-Wave Systems computers. I told him, flat out, no. He pressed on and said, with a wry grin, “You don’t want to say something you might later regret.” I replied, “Dan, I can imagine many things I would like to have engraved on my tombstone but one of them certainly would not be…


In addition, for reasons I will leave nebulous for the moment, in the past few months I have had to give myself a crash course on the status and performance of the D-Wave Systems machines and perhaps I have a clearer view of what is (or is not) going on.

Firstly let me say that this vigorous and open debate about the D-Wave machines is good for science and is the hallmark of the scientific method at work. At times I wish the tone of the debate were more civil but history is filled with uncivilized scientific debates and perhaps civility is too much to wish for. Who can forget the baleful Boltzmann-Loschmidt debates on the statistical mechanics reversibility question? How about the forceful Huxley-Wilberforce debate on evolution? (“So tell us, Dr. Huxley, is it from your father’s side of the family that you claim your descent from an ape or is it from your mother’s?”) What about Cantor’s cantankerous feud with Kronecker over the nature of mathematical infinities?

Anybody? Anybody? (Tap, tap, tap....) Is this thing even on!?

Like scientific debates of yore this D-Wave business is not without its polemics. I attended a workshop at Harvard in March where another invited speaker, a friend and colleague of mine, when asked about what he thought about D-Wave, replied in no uncertain terms, “I think they’re a bunch of charlatans.” (“So tell us, D-Wave Two computer, is it from your father’s side of the family that you claim your descent from an abacus, or is it from your mother’s?”) On the other hand, my friends and colleagues like Colin Williams are pro (or at least not totally anti), while others like Daniel Lidar are steadfastly neutral. I can assure that particular Harvard-workshop-speaker colleague of mine that neither of them is (at least currently known to be) a charlatan.

I will try to give here an abridged history of what is going on, where I will fill in the gaps in my knowledge with the usual wild-assed conjectures. So as far as I can tell, as usual, this all can be blamed on us theorists. In 2001, Ed Farhi at MIT and his collaborators published a paper in Science that proposed a new model for quantum computation called Adiabatic Quantum Computing (AQC). This model was radically different from the standard circuit model for quantum computing, which we’ll call Canonical Quantum Computing (CQC), developed by Deutsch and others.

The standard circuit-gate model describes how a universal quantum computer can be built up out of circuits of one- and two-qubit gates in a way analogous to how a classical digital computer is built. It is called ‘universal’ because any quantum algorithm, such as Shor’s factoring algorithm, can be programmed into its circuitry. (By contrast, it is strongly believed that the factoring algorithm cannot be programmed into a classical computer’s circuitry without suffering an exponential slowdown in run time.) While the circuit-gate model is very likely more powerful than a classical computer for this problem, to factor large numbers of interest to, say, the National Security Agency (NSA), the circuit would have to contain billions or even trillions of qubits and gates, allowing for quantum error correction and all that. To date, the record number of qubits constructed in the lab in such a way is about 16, held by the ion-trap quantum computer, and not a trillion. Every few years or so, the ion trappers carefully polish up and add another qubit to their trap, but the pace is very slow.

Adiabatic Quantum Computing (AQC) seemed completely different from the circuit-gate model, or Canonical Quantum Computing (CQC). In AQC, instead of building gates and circuits in the usual sense, one identifies a large physical quantum system, say a collection of interacting quantum spins, with a particular Hamiltonian structure. The problem to be solved is then “programmed” into the state of these spins. The system is then allowed to evolve adiabatically, by slowly tuning some parameter such as temperature or time, so that it settles into its ground state. If things are set up properly, the spin state of the ground state of the final Hamiltonian contains the answer to the computational question you programmed in at the higher temperature (or earlier time).
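
To make the adiabatic recipe concrete, here is a minimal toy sketch (entirely my own construction, using the conventional choices not spelled out above): interpolate between a transverse-field “driver” Hamiltonian, whose ground state is easy to prepare, and a small Ising “problem” Hamiltonian whose ground state encodes the answer, and watch the instantaneous ground state and energy gap along the way. The adiabatic theorem says the evolution must be slow compared to the inverse gap for the system to stay in its ground state.

```python
# A minimal sketch of the adiabatic idea on a toy 3-qubit problem: interpolate
# H(s) = (1 - s) * H_driver + s * H_problem and follow the instantaneous ground
# state from s = 0 to s = 1. The transverse-field driver and the tiny Ising
# problem below are illustrative choices of mine, not anything from D-Wave.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, n):
    """Embed a single-qubit operator acting on `site` of an n-qubit register."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

n = 3
# Driver: -sum_i X_i, whose ground state is the uniform superposition over bit strings.
H_driver = -sum(op(X, i, n) for i in range(n))
# Problem: a small Ising Hamiltonian whose unique ground state (bit string 011) is the "answer".
H_problem = op(Z, 0, n) @ op(Z, 1, n) - op(Z, 1, n) @ op(Z, 2, n) + 0.5 * op(Z, 2, n)

for s in np.linspace(0.0, 1.0, 6):
    H = (1 - s) * H_driver + s * H_problem
    energies, states = np.linalg.eigh(H)
    gap = energies[1] - energies[0]                  # evolution must be slow relative to this gap
    ground = states[:, 0]
    likeliest = int(np.argmax(np.abs(ground) ** 2))  # most probable bit string in the ground state
    print(f"s={s:.1f}  gap={gap:.3f}  likeliest bit string={likeliest:03b}")
```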

What set the quantum computing community off into a paroxysm was that Farhi and the gang claimed, in their original Science paper, that AQC could solve NP-complete problems (travelling-salesman-type problems) with an exponential speedup over a classical computer. This was an astounding claim, as it is widely and strongly believed that the CQC model gives no exponential speedup on NP-complete problems over a classical computer. And so, in a nutshell, Farhi and collaborators were claiming that the AQC approach was exponentially more powerful than CQC, at least on this one particular class of important optimization problems.

A vigorous scientific debate ensued, rotten tomatoes were thrown, and finally, in 2007, Dorit Aharonov and collaborators proved that AQC was computationally equivalent to ordinary circuit-gate quantum computing. That is, AQC is not more powerful than CQC. (They are polynomially isomorphic and hence both equally universal models of quantum computation.) Hence the claim that AQC would give an exponential speedup on NP-complete problems was very likely incorrect, since it is strongly believed that no such speedup occurs in the CQC model.

Got it?  

The situation reminds me of quantum mechanics in 1926, when you had Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics, and nobody knew if they were the same theory, or gave different predictions, or what, until Dirac (and others) showed that they were transformable into each other and hence made exactly the same physical predictions: not two different theories of quantum mechanics but two different pictures of quantum mechanics. However, as is well known to the quantum mechanic on the street, some quantum theory problems are much easier to work out in the Heisenberg picture than in the Schrödinger picture (and vice versa). As a consequence, due to the Gottesman-Knill theorem, the Heisenberg and Schrödinger pictures are not computationally equivalent even though they are physically equivalent. That is, a quantum computing circuit composed only of gates from the Clifford group, which looks like it would be impossible to simulate efficiently on a classical computer in the Schrödinger picture, is in fact efficiently simulatable in the Heisenberg picture.
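
Here is a toy illustration of that Gottesman-Knill bookkeeping (a bare-bones sketch of my own; a real stabilizer simulator also tracks signs and handles measurements). In the Heisenberg picture one follows how the n stabilizer generators transform under Clifford gates, which takes a number of bits that grows only polynomially in n rather than 2^n amplitudes. The example prepares an n-qubit cat state with a Hadamard and a ladder of CNOTs and prints the resulting stabilizer generators.

```python
# Heisenberg-picture (stabilizer) tracking of a Clifford circuit, with signs
# ignored for brevity. Each stabilizer generator is stored as x/z bit strings.
def stabilizer_demo(n):
    # Generator k of the initial |00...0> state is Z_k.
    gens = [{"x": [0] * n, "z": [0] * n} for _ in range(n)]
    for k in range(n):
        gens[k]["z"][k] = 1

    def hadamard(q):                      # H swaps X_q and Z_q
        for g in gens:
            g["x"][q], g["z"][q] = g["z"][q], g["x"][q]

    def cnot(c, t):                       # CNOT maps X_c -> X_c X_t and Z_t -> Z_c Z_t
        for g in gens:
            g["x"][t] ^= g["x"][c]
            g["z"][c] ^= g["z"][t]

    # Prepare an n-qubit cat (GHZ) state: H on qubit 0, then a CNOT ladder.
    hadamard(0)
    for q in range(n - 1):
        cnot(q, q + 1)

    def pauli_string(g):
        letters = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        return "".join(letters[(g["x"][q], g["z"][q])] for q in range(n))

    return [pauli_string(g) for g in gens]

print(stabilizer_demo(4))   # ['XXXX', 'ZZII', 'IZZI', 'IIZZ']
```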

Okay, so the AQC model of quantum computation is neither more nor less powerful than circuit-gate quantum computing, but because of its mapping onto fairly simple spin systems, it looks more like a design for a straightforward physics experiment than a design for a gigantic digital quantum computer. The open question is whether AQC is easier to implement than CQC. That is, instead of pursuing the circuit-gate paradigm, adding one ion qubit to your trap every few years until you hit a trillion of them, perhaps you could cook up some simple spin-like physical system with a whole lot of qubits (maybe hundreds but not trillions) interacting in a certain way, code in your problem, cool the whole thing down slowly, and read out your answer — build your quantum computer that way! That potential, I surmise, is how and why D-Wave Systems got involved in the quantum computing game.

In 2007 D-Wave Systems demonstrated a 16-qubit prototype system called The Orion and used it to solve some computational problems such as a pattern-matching problem related to drug design and a Sudoku puzzle. In 2011 they unveiled their first commercial machine, the D-Wave One, a 128-qubit machine that sold for $1,000,000. They have sold precisely one of these, and that was to Lockheed-Martin, which then immediately handed it over to a research group at the University of Southern California, consisting of Daniel Lidar and collaborators, who operate the machine with complete autonomy from that company. (USC does provide the physical infrastructure for the machine and its operation, but that role is compartmentalized and separate from the research activities.) In 2013 D-Wave Systems produced the D-Wave Two, a 512-qubit machine, and they have sold precisely two of those. The first was also bought by Lockheed-Martin and is run by USC, and the second was bought by Google and is located at the Quantum Artificial Intelligence Lab at the NASA Ames Research Center at Moffett Field, California (near San Jose). This particular NASA machine has a front-end user interface run on a Google proxy server that allows the user to submit batch jobs to the D-Wave Two hardware via that route.

So just what are these machines and what are they doing? This question is difficult to answer, as we are dealing not with a science experiment but a product produced by an industrial concern. If a scientist, say at a university, claims to see such and such a result in an experiment, then they are required to publish sufficient details about the experiment so that independent researchers at other universities can construct identical experiments and confirm or refute the result. But the D-Wave machines are industrial products, not science experiments, and details about their design, construction, performance, and even the software that runs them are proprietary. They don’t want other companies to know all the details of how they are built.

So scientists, instead of constructing their own duplicate of the machine, are forced to use the D-Wave machine to test itself — to run tests or “experiments” treating the thing as a black box. One bit of front-end software that interfaces with the machine is in fact called “black box”. And for some years now the game has been to choose some computationally hard problem, run it on the D-Wave machine, and then compare the run time when the problem is run on a fast classical computer, and then publish the results of the race. The results greatly depend on which problem you choose, and hence this is why there are so many conflicting reports appearing in the popular press. One day we hear that the D-Wave machine is more powerful than a classical computer and the next we hear it is no more powerful. Well this is because the term “powerful” is problem dependent. It makes little sense to say one machine is more powerful than another in general. It only makes sense to say one machine is more powerful than another when computing a specific problem or class of problems. When computer scientists say a quantum computer is more powerful than a classical one, they mean on a specific set of problems. For example factoring. For some problems, say computing the permanent of a matrix, a quantum computer is likely not any more powerful than a classical one.

Let’s take an analogy. Suppose in 1984, ten years before Shor discovered his factoring algorithm, an alien spacecraft lands on the roof of the physics building at the University of Colorado (where I was a grad student at that time) and out from the UFO emerges a luminous, one-eyed, green-skinned alien named Xiva the Decompiler of Words. Xiva hands me a pocket-sized device that she calls the Xblox, claims it is a universal quantum computer, and tells me to try it out. I decide to pit it against the nearby Cray X-MP supercomputer. If I first run the oracular ordered search problem (don’t ask), the Xblox will perform better than the X-MP by at most a factor of 3, and I would exclaim, “This quantum computer sucks!” If instead I randomly chose to try and factor a large number with the two machines, I’d find the Xblox has an exponentially shorter run time than the X-MP. I then would exclaim, “This quantum computer really rocks!” On other problems (NP-complete or unordered search) the Xblox would show only a quadratic improvement over the X-MP. The question of which computer has the better performance is meaningless unless you specify the particular problem on which they are performing the computation.

What is inside the D-Wave machines? I suspect that D-Wave had originally hoped to implement something like full-blown Adiabatic Quantum Computing on their machines. Since the 2007 result of Aharonov, et al., we know that AQC is equivalent to ordinary QC, as discussed above, but AQC is perhaps easier to implement. Hence if the D-Wave machine turned out to be a general-purpose AQC machine it would be truly a universal quantum computer. This hope may have been the source of early (and somewhat naive) press reports from D-Wave that they were building a true universal quantum computer with entanglement and the whole nine yards.

The speed of hype may often greatly exceed the speed of write.

As mentioned above, AQC could be implemented by building a controllable lattice of coupled spin-half particles. However, given that the spin-half particle (a neutron for example) is a quantum two-level system, any array of coupled two-level systems will do. D-Wave decided to go with superconducting flux qubits. These are little micron-size loops of superconducting current-carrying wire in which all the electrons can be made to circulate counterclockwise (the qubit |0⟩ state) or all clockwise (the qubit |1⟩ state). If the little buggers are sufficiently cold and sufficiently protected from the environment then they have a long enough coherence time that you can make a cat state, the superposition |0⟩+|1⟩, which is then the starting point for generating massive amounts of entanglement across the lattice, via qubit-qubit interactions, and away you go to implement AQC (or even CQC). The particular Hamiltonian system D-Wave has chosen is something called the Ising spin model (ISM), whose Hamiltonian has a binary and tunable coupling between each pair of qubits: H = Σ hᵢ Zᵢ + Σ Jᵢⱼ Zᵢ Zⱼ, where the first sum runs over the qubits, the second over pairs of qubits, Zᵢ is the Pauli-Z operator on qubit i, and the local fields hᵢ and couplings Jᵢⱼ are the programmable knobs.

This Hamiltonian shows up in the theory of quantum spin glasses, for example. There is one additional hardware implementation step. This Ising (pronounced “ee-sing” not “icing”) Hamiltonian presupposes that, via the qubit-qubit coupling, any qubit (in, say, a square lattice) can be coupled to any other qubit in the lattice. That is not the case in the current D-Wave One and Two. Only nearest neighbors are coupled. In addition, some of the qubits don’t work due to manufacturing issues, but you know in advance which of them are going to be offline, as D-Wave characterizes the chip and tells you.

To handle this hardware issue an additional wiring diagram is needed to map the Ising problem (with arbitrary qubit-qubit coupling and with all the qubits in play) onto the D-Wave hardware (with only nearest-neighbor coupling and where some of the qubits are no-shows). That wiring diagram is called a Chimera graph. (I suppose because the graph looks like the fire-breathing hybrid of a goat and a lion and a serpent to the unaided eye.) It is important to note that each different computational problem will require a different Chimera mapping, which can be quite hard to work out for a large number of qubits, but the D-Wave black-box front end will help you find one if you are too dumb to figure it out on your own.

So the idea is that you program your input as an initial state of this Hamiltonian (time t = 0), then slowly tune the time parameter t until the thing reaches its ground state (time t = T), and then read out your answer, and you have a type of AQC. The problem is that for this protocol to work as universal AQC the qubits must have coherence times that are at least as long as (and preferably longer than) the adiabatic run time T. This might be the case if the qubits were cold enough and protected well enough from the environment, but that is not the case. The qubits in the current D-Wave machines are quite noisy and have very short coherence times, due to temperature fluctuations, imperfections, and other technical noise. Hence the qubits are not sufficiently coherent for long-range entanglement to spread across the lattice, so full-blown AQC cannot yet be implemented, and by the Aharonov et al. proof neither is the machine then universal for quantum computing. That is, the current D-Wave machines cannot solve all the problems a general quantum computer can solve. Well, if they can’t do that, what can they do?

There is a watered-down version of AQC called quantum simulated annealing, and the good folks at D-Wave Systems claim their machines can do this. Annealing is a physical process, used for example in the manufacturing of swords, whereby the metal alloy of the sword is cooled very slowly into its ground state. The idea is that if you cool the sword too rapidly it will freeze into some state of energy higher than its ground state, a local minimum of the alloy structure, instead of the global minimum. These higher-energy local minima produce defects that weaken the final product. Hence you want to cool it slowly to get a strong sword. The physics is that while the alloy is cooling, if it does find itself in a local minimum energy well, then, because it is still at a relatively high temperature, the thermal fluctuations will cause it to leap over the well’s barrier height, leave the local minimum, and continue the search for the true minimum.

Computer scientists have emulated this physical process by designing an optimization procedure called simulated annealing. You are seeking the optimal state of some large computational problem and so you simulate the annealing process with a slowly adjustable parameter (usually time and not temperature) and you add in by hand random fluctuations to mimic the thermal fluctuations of the sword-making process. Simulated annealing often works much better than other optimization methods, such as steepest descent, in finding the true global minimum of a large problem.
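
In case the algorithm is unfamiliar, here is a bare-bones classical simulated annealing sketch for a small random Ising instance (illustrative only): single-spin flips are proposed at random, accepted with the Metropolis rule, and the fictitious temperature is lowered slowly so the state can hop out of local minima early on but settles into a low-energy configuration at the end.

```python
# Classical simulated annealing on a small random Ising problem (illustration).
import math
import random

def ising_energy(spins, h, J):
    n = len(spins)
    e = sum(h[i] * spins[i] for i in range(n))
    e += sum(J[i][j] * spins[i] * spins[j] for i in range(n) for j in range(i + 1, n))
    return e

def simulated_annealing(h, J, steps=20000, t_start=5.0, t_end=0.01):
    n = len(h)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for step in range(steps):
        T = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling schedule
        i = random.randrange(n)
        trial = spins[:]
        trial[i] *= -1
        dE = ising_energy(trial, h, J) - ising_energy(spins, h, J)
        if dE <= 0 or random.random() < math.exp(-dE / T):  # Metropolis acceptance rule
            spins = trial
    return spins, ising_energy(spins, h, J)

random.seed(1)
n = 8
h = [random.uniform(-1, 1) for _ in range(n)]
J = [[random.uniform(-1, 1) if j > i else 0.0 for j in range(n)] for i in range(n)]
best_spins, best_energy = simulated_annealing(h, J)
print("final spin configuration:", best_spins)
print("final energy:", best_energy)
```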

In Quantum Simulated Annealing (QSA) the bits are replaced with qubits and the thermal fluctuations are replaced with the quantum fluctuations inherent in any quantum system. Qualitatively, it is claimed that quantum simulated annealing is more efficient than classical simulated annealing since, instead of fluctuations driving the system up and over the potential barrier of the local minimum it is stuck in, quantum mechanics also allows the system to tunnel through the potential barrier. For this reason claims are made that quantum simulated annealing should find the global minimum of a system faster than classical simulated annealing. Quantum simulated annealing should be viewed as strictly contained in AQC, and hence is not universal for quantum computing. What remains to be seen is whether it is more powerful than a classical computer on any specific problem.

And so we come to the core idea. As far as I can tell, D-Wave Systems claims their machines implement quantum simulated annealing. The question then is whether this version of QSA, with noisy qubits, short coherence times, and no entanglement, is good for anything. To answer this question, D-Wave farms out time on their machines to various researchers to see if there are any computational problems, mapped into QSA, for which the D-Wave machines outperform a comparable classical machine. So in addition to being problem dependent, the answer also depends on what you mean by a ‘comparable classical machine’. It is due to these ambiguities that we have what appear to be contradictory publications and press releases on the power of the D-Wave machines.

I’m not going to go through the list of all the computational problems that have been tried on the D-Wave computers but focus on the one that seems currently to show the most promise. This computational problem is known as QUBO. A few months ago I had never heard of QUBO and I was shocked to find out, after I had, that the Q in QUBO does not stand for ‘quantum’ but rather ‘quadratic’ as in Quadratic Unconstrained Binary Optimization.

Borzhemoi! This I knew from nothing!

Well, there was a steep learning curve, but without going into all the details, a QUBO problem is a type of optimization problem whose mathematical structure maps directly onto the Ising Hamiltonian that is native to the D-Wave hardware. It seems likely that if the D-Wave machine gives any computational advantage, and if that advantage is quantum in origin — the jury is still out on both these points — then it will only be on problems of the QUBO type. However, this is an interesting class of problems, with applications that seem to lie primarily in image recognition. In particular, certain types of synthetic radar image processing, of NASA Earth Science interest, map onto the QUBO form.
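
For the curious, here is a sketch of the standard change of variables that takes a QUBO instance (binary variables x_i in {0,1}, cost x^T Q x) to the Ising form (spins s_i in {-1,+1}, with local fields h, pairwise couplings J, and a constant offset) that is native to the hardware, using the convention x = (1 + s)/2. It is illustrative only, and brute-force checks itself on a tiny random instance.

```python
# QUBO -> Ising change of variables, checked by brute force on a small instance.
import itertools
import numpy as np

def qubo_to_ising(Q):
    """Map 'minimize x^T Q x over binary x' to Ising (h, J, offset) via x = (1 + s) / 2."""
    n = Q.shape[0]
    W = (Q + Q.T) / 2                       # symmetrize, since x^T Q x = x^T W x
    h = np.zeros(n)
    J = np.zeros((n, n))
    offset = 0.0
    for i in range(n):
        h[i] += W[i, i] / 2                 # diagonal terms use x_i^2 = x_i for binary x_i
        offset += W[i, i] / 2
        for j in range(i + 1, n):
            J[i, j] = W[i, j] / 2           # the (i,j) and (j,i) entries of W contribute together
            h[i] += W[i, j] / 2
            h[j] += W[i, j] / 2
            offset += W[i, j] / 2
    return h, J, offset

def ising_cost(s, h, J, offset):
    return float(h @ s + s @ J @ s + offset)

rng = np.random.default_rng(0)
n = 4
Q = rng.normal(size=(n, n))
h, J, offset = qubo_to_ising(Q)
for bits in itertools.product([0, 1], repeat=n):       # brute-force check of all 2^n assignments
    x = np.array(bits)
    s = 2 * x - 1
    assert np.isclose(x @ Q @ x, ising_cost(s, h, J, offset))
print("QUBO and Ising costs agree on all", 2 ** n, "assignments")
```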

If the D-Wave machine gives an advantage it will be only on practical problems that can be mapped (with little computational overhead) to a QUBO problem. Whether that set of problems is the empty set is still unknown.

Here is a summary of the steps required to map a practical problem like image recognition onto the D-Wave machine:

1.)  Find a practical problem that has an efficient mapping to a QUBO problem and carry out that mapping. 

2.)  Next the QUBO problem must be mapped to the specifics of the D-Wave circuitry via a Chimera graph that is essentially a wiring diagram for the quantum computer. This is done by an intermediate mapping to a quantum Ising spin model (QuIS), which the D-Wave machine was designed to solve in the generic AQC model. In this way programming the D-Wave machine is akin to programming a classical computer in assembly or machine language, as was done decades ago, before the development of higher-level compilers and programming languages. That is why the programming process is somewhat convoluted to the uninitiated. To carry out the Chimera mapping for nontrivial problems, D-Wave provides the user access to a front-end server called “black box” that carries out a search for an optimal Chimera mapping.

3.)  Run the problem on the D-Wave machine. For the D-Wave 2, purchased by Google and located at the NASA Ames Research Center, Google has developed a proxy server that interfaces with the actual D-Wave machine that will allow the user to submit batch jobs to the quantum machine.

So we can see that there is a lot of overhead between the physical problem to be solved and the final run on the machine. If that overhead is too large, any quantum advantage of the D-Wave machine may very well be swamped.

In particular, my colleagues ask, “What’s the deal with these announcements that come out every few weeks or so? Some claim to see a speedup and some claim they do not.” We are now prepared to answer this. Experiments on the D-Wave machine that test algorithms that are not of the QUBO form show no speedup. Some problems that have an efficient mapping to QUBO seem, at least in a number of publications, to show some speed up. Whether you see a speedup depends on what problem you choose and how efficiently you can map your problem to a QUBO problem.

Let me be frank and say that it is still possible that the D-Wave machines will show no speedup of a quantum nature, but the jury is out. If they are shown to give a quantum-enabled speedup, it will be only on a specific narrow class of important problems that have an efficient QUBO implementation. This means that the current D-Wave machines are clearly not a universal quantum computer but are potentially a very narrow-scope, special-purpose, possibly quantum, QUBO-problem-solving engine.

 I should say that I'm in favor of a top-down approach like this to compete, say, with the ion trappers that add a very high-fidelity qubit to their trap once every year or two. I think the competition and the associated discussions are good for the field. Let's avoid calling each other charlatans and instead let's all sit down and try to figure out what is going on and see how we can reach our common goal — a universal quantum computer.

Thursday, November 20, 2014

You Can't Teleport Anything He Doesn't Want to Hear

The early days of the experimental demonstrations of quantum teleportation, particularly the first three experiments by Zeilinger, DeMartini, and Kimble, were not without controversy. This I learned first hand in 1999 when I attended the Sixth International Conference on Squeezed States and Uncertainty Relations in Naples, Italy. As the three different experimental groups’ publications had just appeared, the conference decided to have a panel discussion on these three different experiments. Curiously, the organizers asked me to chair the panel discussion. I was a bit puzzled by this request, and carefully stated that while I was honored by the invitation, I was hesitant to accept, since I had never carried out research (either theoretical or experimental) on quantum teleportation, and was certainly not an expert on the matter. The organizers looked about the room nervously and then in hushed tones took me aside and explained their rationale — apparently there was a very contentious debate raging among the three experimental groups as to who had done the first “true” demonstration of quantum teleportation, and the organizers expected the panel discussion to be loud and chaotic and they feared the panelists might become unruly — quantum physicists run amok! Therefore, they explained to me nervously while averting their eyes from my gaze, that they needed a moderator who would be able to run the thing with a strong hand (and a loud voice) in order to keep order. “The organizers are unanimous in our decision that you, Dr. Dowling, are our only hope.” I had been in a moment demoted from world’s expert in quantum physics to the quantum mechanical equivalent of either Obi-Wan Kenobi or a barroom bouncer.

The panel was chaotic from the inception. As I expected, there was one panelist per experimental group: Austrian physicist Gregor Weihs from the Zeilinger group, DeMartini from his own group, and Australian physicist Samuel Braunstein from the Kimble group. (American physicist Marlan Scully once called me the “Bob Hope” of theoretical physics. If that is the case then my good friend and colleague Sam Braunstein is surely the “Woody Allen” of theoretical physics.) Since the debate was to be among these three competing groups, I was again puzzled when the organizers, at the last minute, added two additional Italian physicists to the panel, other than DeMartini. I was even more puzzled that these two last-minute additions seemed to have even less experience with quantum teleportation than me! I gently inquired as to why they should be on the panel — my role as bouncer was clear — but their roles were not. Again more hushed tones and averted gazes, and the organizers explained that they were both big-shot Italian professors who had asked to be on the panel discussion on quantum teleportation, in spite of knowing absolutely nothing about quantum teleportation, for the simple reason that they were big-shot Italian professors and thought it would look prestigious to insert themselves onto the panel. I had lived in Italy for a year as a graduate student, and I knew a thing or two about Italian politics in the universities, and conceded to their admittance to the panel in the interest of keeping the peace. Peace, however, was not long kept.

The night before the panel discussion I was in a bit of a panic myself in my hotel room as I wondered how to organize things. I decided that in the hour-long time slot I would give each of the six participants, including myself, five minutes to speak or present a few slides, and then reserve the second half hour for questions from the audience. Now I knew there was this debate between the three experimental groups about whose teleportation experiment was the “best” experiment, where “best” is very subjective, but I had not followed this debate at all, and did not really have time to read through all three of the experimental papers and try to figure it all out. Hence I had an experimentalist friend and colleague, German physicist Andreas Sizmann, give me a quick tutorial on the nature of the experimental debate, at the bar. I then went back and carefully reread the original theory proposal by Bennett and colleagues, and with the help of Sizmann constructed a few overhead transparencies on how I thought a quantum teleporter should work and how, if handed such a device, one might be able to tell if it was working properly, and, if handed three of them, how I might gauge which of the three was “best”. It was then, for the first time, that I devised the story of the mythical National Institute of Quantum Standards and Technology (NIQuIST) and the equally mythical quantum teleporter-testing machine NIQuIST had constructed to test the three claimed experimental implementations. In other words I did not compare the three experiments at all, I figured the panelists could do that themselves, but instead I put up a series of tests that each teleporter should pass to get the NIQuIST seal of approval, or more accurately something like a Consumer Reports rating: Recommended, Best Buy, or Not Acceptable. The quantum teleporter-testing machine that I drew up by hand looked like that in the figure.

In the figure we show a compactified version of the quantum teleporter, being slowly lowered into the NIQuIST teleportation-testing machine (brownish orange). As before we have Doug, now a NIQuIST employee (purple, left), who provides the teleporter with a photon whose quantum polarization state is unknown to Alice, Bob, or Charlie in the teleporter. Doug produces a large number of such unknown states using a machine called the “ensembler” that produces ensembles or collections of single-photon polarization states, where the ensembler can be programmed to choose the states at random, or in a pre-selected sequence. There are an infinite number of possible states, so the ensembler should produce a large number of different ones so that the test of the teleporter will be statistically significant. That is, the more random the sample of states, and the more of that infinite state space it covers in a way the teleporter operators cannot anticipate, the more likely it is that the test will be fair and that the teleporter operators cannot somehow cheat by using some inside knowledge of the states being transmitted.
The next part of the tester will be to see how well the teleporter is doing. If an unknown state Y is sent in by Doug to Alice, we want the state that emerges on Bob’s side of the teleporter to be as close to Y as possible. This is quality control, and one measure of the quality of such a state transport is called the quantum fidelity. The fidelity is 100% if the outputted teleported state is identical to the inputted state to be teleported. The fidelity is 0% if the outputted teleported state is as far from the inputted state as possible, which would be hard to arrange without trying. If the teleporter simply completely scrambles the input state then (on average) the fidelity of the output states, with respect to the input states, will be 50%. That is, if the teleporter sucks, on average the outputted state agrees with the inputted state only half the time. Hence anything better than 50% is considered good; I would call it the bronze standard of teleportation. The gold standard would be 82% fidelity, that is, the minimum required for the teleporter to be used to teleport one half of an entangled photon pair and still violate Bell’s inequality. Anything less than 82% can be modeled with a classical local hidden variable and hence does not really test quantum mechanics. A new character in our pantheon, Ellen, on the right (purple), extracts the fidelity; she runs a machine called the “fidellerator” that measures the state of the teleported photon and compares it with the state that was actually sent in. Doug sends the complete information on the states he provided, and Ellen sends the complete information on the states she received, to François (bottom green), who runs a machine called the comparator (which I do not put in quotes because “comparator” is a real word), which compares what Doug sent to what Ellen got and then ranks the teleporter in the test.
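
As a sanity check on those two reference points, here is a tiny numerical sketch (my own, purely illustrative): a perfect teleporter hands back exactly the state it was given and scores fidelity 1 on every trial, while a useless "teleporter" that completely scrambles the input, returning the maximally mixed state, averages fidelity 50% over randomly chosen input states.

```python
# Average teleportation fidelity for a perfect versus a completely scrambling device.
import numpy as np

rng = np.random.default_rng(42)

def random_pure_qubit():
    """A random pure qubit state as a normalized complex 2-vector."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def fidelity(psi, rho):
    """Fidelity <psi|rho|psi> between a pure input psi and a (possibly mixed) output rho."""
    return float(np.real(np.conj(psi) @ rho @ psi))

perfect, scrambled = [], []
for _ in range(10000):
    psi = random_pure_qubit()
    perfect.append(fidelity(psi, np.outer(psi, np.conj(psi))))   # ideal teleporter returns psi
    scrambled.append(fidelity(psi, np.eye(2) / 2))               # scrambler returns the maximally mixed state
print("perfect teleporter, average fidelity:", np.mean(perfect))    # -> 1.0
print("scrambled output,   average fidelity:", np.mean(scrambled))  # -> 0.5
```
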
So I presented this slide and this spiel in about five minutes at the Naples panel discussion, along with an additional slide and discussion from my partner in crime and sidekick, Andreas Sizmann. Then I allowed each of the remaining five panelists to speak. As expected, the two big-shot Italian professors who were admitted to the panel for political reasons gave presentations that had nothing to do with quantum teleportation, much to the consternation of the audience, consternation that was on display through audience-member protestations consisting of gnarled grimaces (or audible snoring). Then in rapid fire came (in alphabetical order) the three true experimental teleporter panelists, Braunstein, DeMartini, and Weihs. This took up about a half hour of time, as anticipated. As the other panelists spoke, Sam Braunstein scribbled away furiously on a little notepad and, periodically, with great showmanship and dash, he would loudly tear off the note sheet with the scribbles on it, neatly fold it in half, and hand it to me, while he displayed a very serious face to the audience. On the notepad sheets, which I would carefully unfold like a poker hand of two cards (so that nobody else could see them), Braunstein had drawn cartoon caricatures of the various panelists (other than himself), with their names and such artistic embellishments as crossed eyes, drool, maniacal grins, or feathered and bloodied arrows sticking through their craniums from ear to ear.
After he handed these cartoons to me, he sitting and I standing, I would peer intently at them with great seriousness. Astonished, I realized that he was simply trying to get me to laugh out loud during the panel discussion! Not to be fooled, I would inspect the crude cartoons carefully, out of eyesight of the other panelists and the audience, then fold them neatly, tuck them into my shirt pocket, and announce, “A very good point, Prof. Braunstein, I will be sure to bring it up in the discussion session to follow!” All the panelists, the audience, and Braunstein would then nod in solemn approval of this curious ritual, and Braunstein would then return madly to his work of furiously scribbling out the next cartoon caricature on his notepad.
Each panelist had spoken their piece and then it was time for the grilling from the audience. I did my best to maintain order, timing the questions, cutting people off when sufficient, deafening others (with my ribald vocabulary of Italian curse words) when necessary, but all was lost when a young man (whose name I never got) with a Germanic accent (German, Swiss, or Austrian) grabbed the portable microphone and launched into a diatribe of his own. The young Teuton first assailed the structure and organization of the panel: “This panel discussion was badly organized. I have no idea who these first two Italian guys were or why they were even on the panel, as they had nothing to say about the topic of quantum teleportation!” The Italians looked at the German-accented Wünderkindt in horror as he dared speak the obvious. (I liked this Kindt already.) Then he continued, “Even the presentations from the panelists, who have done work on quantum teleportation, were mostly incomprehensible.” Now all the hackles were raised. “In fact,” continued this spawn of Odin, “the only person on this panel who made any sense was Dr. Jonathan Dowlings (sic)!” (I liked this Kindt even more.) I grinned from ear to ear and smiled at all the organizers, but they were not smiling back. Then the blond-haired, German-accented, intemperate audience participant went in for the kill, “And you, DeMartini, your teleportation experiment meets none of Dowlings’ criteria for success at all!” I stared at this guy in horror as a shudder went down my spine and spread rapidly through the floor and across the audience.
That was it! DeMartini snatched up his transparencies and stormed off the podium, off the stage, down the steps, and out of the auditorium side door, never to be seen again (at least at that conference). I, jokingly, announced, “Did anybody see where DeMartini went? Can somebody check the toilet? No? Okay then I officially declare the panel discussion to be at an end!”

— The unedited story adapted from Schrödinger's Killer App: Race to Build the World's First Quantum Computer


Tuesday, November 4, 2014

Einstein's Least Famous Equation?

In 1980, my first year in graduate school, the English physicist Paul Dirac, the 1933 Nobel Prize awardee, came to the University of Colorado to deliver the Gamow Memorial Lecture, a popular talk. As a big fan of Dirac, I dragged all my non-physicist friends to the “popular” lecture early to get good seats in the middle of the second row from the front. The place was packed with the mayor, the chancellor, the provost, the deans, all the physics professors, a blonde woman from the Sufi community dressed in a turban and a white cloak sporting a ceremonial dagger in her waistband, and so forth. (This is Boulder, Colorado, after all.) Dirac gave what I thought was a very interesting talk on the history of quantum theory, but with no slides, no notes, no audiovisual aids, and no nothing. He just stood at the podium and talked for an hour. He was 78 years old at the time and he spoke in a very soft, high-pitched, English-accented, mouse-like voice. So soft it was that you could barely hear him at all, and the technicians kept cranking up the amplifiers until the sound system screeched periodically from the feedback. The talk put all the non-physicists in the audience immediately to sleep. Then Dirac got to the part where he discovered the Dirac equation predicting the existence of antimatter.

He clearly gets a bit excited and impossibly goes up an octave, whereupon the feedback kicks in, waking everybody up, and Dirac says, “I was led to the idea of the discovery of antimatter by considering Einstein’s most famous equation, E = ….” All my buddies from the English department began to nudge me and the crowd visibly perked up. They had not understood a goddamn thing, but for sure even the English majors knew what “…Einstein’s most famous equation, E = ???” was going to be. Dirac continues triumphantly onward to the hushed auditorium, “E = the square root of p-squared times c-squared plus m-squared times c to the fourth!?”

(Einstein’s least famous equation?) The audience visibly collapsed upon themselves in utter disappointment—they understood nothing—and I, in the tomb-like quiet that followed in the hallowed Rocky Mountain granite of the vast Macky Auditorium, burst out laughing uncontrollably. (And I was the only one.) Dirac, normally an endearing bird-like little man, scowled, halted the talk, stepped out from behind the podium, and stared down at me in silence, vulture-like, for a full minute. The rest of the audience looked back and forth between Dirac and me as they coughed and inspected their watches. Then, without a word, after my torturous minute was up, he returned behind the podium and finished his talk as if nothing had happened at all.

From: Schrödinger's Killer App: Race to Build the World's First Quantum Computer (Page 42). Taylor & Francis Press.