Sunday, June 5, 2016

Wheeler vs. Dowling in the War of the Commas

In 1990, while still a postdoc at the Max Planck Institute for Quantum Optics in Garching, I was writing a paper with John A. Wheeler and Wolfgang P. Schleich, entitled “Interference in Phase Space.” I had first met Wheeler years before, when I was an undergraduate student at the University of Texas, where I audited his course, “Quantum Measurement Theory,” taught jointly with Wojciech H. Zurek. When I asked Wheeler for permission to audit, he said to me, “How much trouble could one undergraduate student be?”

Taking that as a challenge, I would sit in the front and constantly ask questions until the graduate students threatened to take me out to the parking lot and beat me up. I finally told them all that I knew karate (at that time I was a yellow belt in tae kwon do), and they left me alone after that, so I could keep interrupting class.

The paper, “Interference in Phase Space,” was a review article on phase-space methods in quantum optics, a topic that Wheeler and Schleich began working on in Texas when Schleich was a postdoc there. My primary job on the paper was to type up the whole thing in TeX. (This was back in the day when women were women and men were men and we all wrote our own macro packages and had no need for that namby-pamby LaTeX.) I also helped with some of the calculations and with preparing the figures. Sometimes I even helped with double-checking the English, as when Schleich accidentally translated the German nickname for Bohr’s Correspondence Principle, Bohr’s Zauberstab, as “Bohr’s Magic Stick” (instead of “Bohr’s Magic Wand”).

Wheeler refused to let anybody start writing the body of the paper until we had all the figures and figure captions done up to his liking. We would lay the figures end to end and endlessly discuss their ordering, the captions, the style of the drawings, etc. In this way we had a very clear storyboard of the paper long before I started typing the main text. This is a very useful technique that I still use to this day, especially with students and colleagues who have writer’s block.

Schleich and I were working in Garching, Germany, and Wheeler was back at Princeton. As I typeset the manuscript, which eventually ran to nearly 50 pages, I would fax drafts of it to Wheeler to mark up and fax back to me so I could make his changes. We must have done nearly 20 rounds of this faxing back and forth. Of course, being the junior author, I mostly implemented Wheeler’s changes as they were given to me, except for one particular typesetting bone of contention. When Wheeler would refer to a short snippet of mathematics (that followed the noun that defined it), let us say, “The variable x is inserted into the function f(x) in order to....,” he would put these math terms in parenthetical commas, e.g., “The variable, x, is inserted into the function, f(x), in order to….”

If you have read Knuth’s big book of TeX or various physics author style manuals, you learn that setting off short math expressions like this with commas is considered old-fashioned and is frowned upon. Figuring that the editors would delete these commas in the final typesetting at the Annalen der Physik, I simply left them out of the manuscript.

And so the battle with Wheeler over the commas began.

I would diligently fax the 50-page draft of the manuscript to Princeton, over and over again, sans the parenthetical commas around the short math, and Wheeler would send back his revised draft where, by hand, he would painstakingly add each and every one of those missing commas back in again. One must realize that in a 50-page theoretical physics paper there were thousands of such corrections in each round. Just writing the commas back in must have taken him hours in each review.

Finally, he had enough. One day my office phone in Germany rang and it was Wheeler, calling long distance from Princeton, and he was understandably quite upset. “Dowling,” he growled into the phone, “why are you not putting my commas back into the manuscript!?” I replied, evenly, “Prof. Wheeler, putting parenthetical commas around such short math expressions in a physics journal is old-fashioned and is recommended against in the journal’s style guide. Besides, the editors will just remove them.”

Stunned into a moment of silence, Wheeler then barked back, “I have been putting parenthetical commas around my short mathematical expressions since before you were born!” I paused, and then answered, firmly, “Perhaps so, Prof. Wheeler, but it is I who am typing this darn 50-page manuscript!”

He hung up on me.

I won the battle. In the end the commas stayed out. (I don’t think Schleich ever knew about this battle.) Some years later, in the mid-1990s, I ran into Wheeler at a conference reception. I went to say hello but at first he did not recognize me. So I said, “Wheeler, it’s me, Dowling — the smart-mouthed postdoc who would never put your commas back in our manuscript!” Wheeler’s jaw dropped and then, just when I thought he was going to punch me, he started to laugh, and then he patted me on the back. “Dowling! You know that fight over the commas nearly put me in a coma?” Then I laughed too and wandered off to the bar.

That was the last time I ever saw Wheeler.

Sunday, January 3, 2016

Quantum Pundit's Science Predictions for 2016!

"Prediction is very difficult, especially about the future." 

Niels Bohr


At the end of each year, multitudes of science rags (on and offline) present their lists of the N (where N is around 10) most important science discoveries of that year. For 2015, a quick and unscientific Google search provides the following top-ten-list fodder for that year in review:

1.     Researchers Discover New Phase of Carbon
2.     Astronomers Discover ‘Fat Jupiter’ And ‘Twin Earths’
3.     Doctors Grow Human Vocal Cords from Scratch
4.     Have Scientists Found A New Particle Bigger Than Higgs Boson?
5.     2015 is Hottest Year on Record
6.     New Pluto Photos Show Edges of Its Frozen Heart
7.     Europe To Build Base On MOON By 2030 Using 3D Printer
8.     China Set To Launch 'Hack Proof' Quantum Communications Network
9.     Spooky Action at a Distance Is Real
10. Neil deGrasse Tyson Claims Entanglement can be used to Communicate Backwards in Time

But highlighting what happened in the past year is easy. The real money is predicting what will happen in the year to come. Here are some actual psychic predictions for 2016 from true psychics (as opposed to all those fake ones):

1.     Scientists Will Breed a Hybrid of a Dog and a Cat
2.     Drones Will Strike Buckingham Palace
3.     Naked Picture of Kim Jong-Un Will Cause a Political Row
4.     World Peace Will Break Out
5.     Australia Will Soon Be At War
6.     Farmers Will Develop a New Strain of Cauliflower
7.     Justin Bieber will Become Involved With an Older Woman
8.     A Comet Will Come Out of Nowhere and Bring Awe and Wonder to Humanity

It would appear that Niels Bohr was wrong; predictions about the future are remarkably easy to make — so long as they don’t also have to be true. And so, month by month, here are my psychic science predictions for 2016.

(Disclaimer: Quantum Pundit's psychic predictions have no basis in science and are provided here for entertainment purposes only.)

January — High Energy Physics


CERN announces that the Large Hadron Collider (LHC) has found evidence for yet a third Higgs-like resonance at a whopping 1,125 GeV. In the ecstasy of irrational (and un-peer-reviewed) exuberance, CERN leaks information about the discovery to the Journal of Irreproducible Results, the National Enquirer, and Portable Restroom Operator Monthly Magazine. Sole evidence for the new particle, dubbed by CERN the “triphoton”, is what appears to be a tiny bump, nearly imperceptible to the human eye, on a single data curve displayed on a low-resolution CRT computer screen in the LHC cafeteria (between the toilets and the ATM). Nevertheless, the story is picked up by the Associated Press, which renames the triphoton “The Holy Ghost Particle” — third in a row after “The God Particle” (the Higgs boson) and “The Jesus Particle” (the diphoton). Within several days thousands of theory papers are posted to the HEP-TH high-energy theory preprint archive explaining the physical origins of the triphoton. (However, no two of these thousands of theory papers agree with each other.)

Within hours of the announcement, the Facebook posts and Twitter feeds of scientists the world over are bogged down with requests for input from sketchy online newspapers, and the Huffington Post, about their opinions on the existence of the triphoton, and particularly, whether or not they believe that its discovery indeed provides proof for the existence of The Holy Trinity. Several hundred physicists claim that they do indeed believe precisely this, in a shameless bid to win the 2016 Templeton Prize. Pope Francis refuses to comment on the data, saying only, "Who am I to fudge?"

A Twitter flame war over the Holy-Trinity-particle question breaks out between Richard Dawkins, the Archbishop of Canterbury, and Koko the Gorilla, which causes the entire Internet to crash for nearly a minute around midnight on January 5th, the feast day of St. John Neumann. Upon seeing this happen, unbelieving quantum physicists the world over undergo a spontaneous conversion to the Holy Trinity Particle creed. (The next day, however, most of them renounce their newfound belief when Pope Francis explains to them that St. John Neumann and St. John von Neumann were two entirely different people.)

As more and more data is collected, the data bump never gets any clearer, and stubbornly stays at only a single sigma of statistical confidence. Finally, on January 31st, the feast day of St. Xavier Bianchi, a member of the LHC janitorial staff points out that the so-called bump is actually a fingerprint smudge on the computer screen, and he promptly removes it with a spritz of Windex and a Wet Wipe. This explanation for the data bump causes great dismay among the loop-quantum-gravity theorists, until the same janitor explains to them that St. Bianchi is not the patron saint of the Bianchi identities.

Later in the year, stand-up comedian, rabbinical scholar, and string theorist, Sarah Silverman, will publish a paper in which she claims to have found hidden messages in the Pentateuch that predicted the existence of the pentaquark nearly three thousand years ago. Dr. Silverman handily wins the 2016 Templeton Prize, only to announce on her deathbed (many years later and after she spent all the money) that it was all a hoax and that she never believed in pentaquarks in the first place. With this off her mind she dies peacefully, surrounded by friends, family, and the ghost of Murray Gell-Mann, who then guides her through the eight-fold way into the afterlife.

February – Biochemistry


The agrochemical giant, Santmono Corporation, modifies the gene-editing tool CRISPR/Cas into a now-trademarked tool they rename TOASTR/Aas. The TOASTR/Aas gene-splicing tool is specifically designed to make any photosynthetic plant life more susceptible to being killed off by Santmono’s premier weed killer, Roundoff. The TOASTR/Aas tool is used to make a gene drive that is then "accidentally" released into the wild, causing the collapse of worldwide agriculture. Santmono responds by marketing a “non-plant-based-human-food supplement” they christen SoLessGreen™ in order to prevent worldwide starvation. Santmono gives the sole rights to distribute SoLessGreen™ to the now nearly bankrupt Pichotle restaurant chain (whose motto “Food With Decency!” had recently been changed to “Food with Dysentery!”). Pichotle is relieved since, unlike with their previous food line, it is clear that no virus, bacterium, or prion can survive for more than a few seconds in SoLessGreen™. The truth about the origins of SoLessGreen™ is uncovered by a group of agri-terrorist grammarians, but the secret is never revealed to the public, as the group cannot agree on whether the public announcement should read “SoLessGreen™ is people!” or “SoLessGreen™ are people!” and Charlton Heston is no longer around for a consult.

March – Biology


Urban biologists will report that "pizza rat" (the rat caught on film dragging a slice of pizza into a New York subway station), is not an ordinary Norwegian rat but rather a newly evolved species of rat that is building a large hive-like structure (complete with a three-ton queen rat) in the New York subway tunnel system. The new species is named Rattus pizzapylori, and it is discovered that it can only survive by eating New-York-style pizza (much as a panda can only survive by eating one species of bamboo). As the hive grows to the point where geysers of engorged rats are spouting forth from all the street manholes (which is, frankly, just a regular day in New York City), Mayor Bill de Blasio concocts a plan to feed the queen rat a Chicago-style pizza, which is well known to be inedible by all life forms. The queen dies within minutes, taking the entire hive with her, and as torrential rains from a freak typhoon called “Super Storm Shandy” wash the carcasses of dead rats from the sewers out into Long Island Sound, not one of the New Yorkers notices a damn thing.

April — Space Science


NASA scientists excitedly hold two last-minute press conferences where they announce that they have found water on Mars and that the Voyager spaceship has left the Solar System. When a canny reporter for the New York Times retorts that these same NASA scientists have been making these exact same two claims every few months for the past ten years, the reporter is taken to JPL and given a deep-space probe.

After NASA's New Horizons mission captured a spectacular photo of a human-heart-shaped feature on the surface of Pluto in 2015, in 2016 it will fly by one of Pluto's orbital companions and capture a photo of a new feature on its surface that looks like a pair of human buttocks. Scientists extol the fact that Charon had been literally mooning Pluto for millions of years. The International Astronomical Union (IAU) passes a resolution to rename Pluto the Dwarf Planet, "Sneezy".

SETI will announce they have discovered evidence that extraterrestrials have been secretly communicating with Earth for several decades. They triangulate the signals and find that they have all along been directed at Donald Trump’s hair. Further investigations reveal that his hair is actually an orange-hamster-like alien life form that has been glommed onto his noggin since the 1980s, stealthily controlling his mind using telepathy in order to win the US presidency, and thus hasten the hostile takeover of the Earth by a sentient race of orange hamsters. Scientists immediately embrace the news, as it is clear that no other theory can reasonably explain the success of Trump’s television show, The Celebrity Apprentice. CIA operatives work feverishly to create a portable psionic-dampening field around The Donald’s hair, and catastrophe is then narrowly averted. SETI scientists will have no time to relax, as further analysis of the extraterrestrial signals reveals the existence of a second invasive alien species. One of these, disguised as a white feather duster, is attached to the otherwise bald head of Bernie Sanders.

May — Technology


The 3N Corporation will develop a 3D printer that is capable of printing working copies of itself using only solar energy and sand as input. A shipping error results in a prototype of the printer being shipped to a FedEx (formerly Kinkos) copying center in Barstow, California. The store manager, unwilling to pay the return-shipping fee, dumps the 3N3D printer in the local landfill, where it springs into action and begins churning out copies of itself at an exponential rate. Within days the entire Inland Empire is blanketed with 3D printers that threaten to end life on Earth as we know it by converting the entire planet’s crust into 3D printers. In the nick of time a fleet of stealth bombers from nearby Edwards Air Force Base, equipped with top-secret laser cannons, blankets the region with laser fire, thereby melting all the printers (and the inhabitants) into a molten sea of glassy slag. A memorial plaque is placed in what was once Barstow to honor all of those who were vaporized.

Nearby, Gögle Corporation will launch its well-publicized “Project Loonacy” from Moffett Federal Airfield near Mountain View, California. As originally envisioned, fleets of high-altitude helium-filled balloons, carrying on-board solar-powered wireless routers, would provide Internet access to the most inaccessible spots on the globe. Alas, due to the continuing worldwide helium shortage, Gögle takes the drastic measure of replacing the helium-filled balloons with hydrogen-filled balloons instead. A memorial plaque is placed in what was once Mountain View to honor all of those who were vaporized.

Waysag Corporation will release a new model of their hoverboard that will actually hover (as opposed to just rolling around on the sidewalk and periodically pitching the rider into oncoming freeway traffic). They solve the ongoing problems related to the devices’ batteries bursting into flames — or just flat-out exploding — by replacing all the rechargeable sealed lead acid batteries with plutonium fuel cells. This works well until a group of hoverboard enthusiasts, returning from a hoverboard competition in Duluth, Minnesota, decide to pack several hundred of their hoverboards into a single towed U-Haul trailer. They exceed critical mass for runaway nuclear fission and Duluth is reduced to a smoldering pile of glowing embers, while the rest of Minnesota enjoys an early summer. A memorial plaque is placed in what was once Duluth to honor all of those who were vaporized.

June — Mathematics


After the famous 2002 proof that PRIMES is in P, and the recent tantalizing 2015 result that the Graph Isomorphism Problem is solvable classically in quasi-polynomial time, a Lithuanian mathematician at the University of Vilnius discovers a classical polynomial-time integer-factoring algorithm and programs it into a desktop telephone-answering machine before he is assassinated by a gang of rogue computer scientists. The discovery renders moot the most pressing cryptanalytic motivation for developing a quantum computer, and the computer scientists hold the economy of the world hostage by threatening to publish the classical factoring algorithm on the Internet, thereby causing the collapse of Internet commerce. Robert Redford and his group of hackers (secretly working for the NSA) take out the rogue computer scientists, and the telephone-answering machine is recovered and secreted in a giant underground government storage facility, next to the Ark of the Covenant. In order to keep all this a secret, the NSA gives Dan Aykroyd a Winnebago as a bribe to maintain his silence.

July — Computer Science


Not to be outdone by the success of IBM's Watson computer in playing Jeopardy, T-Wave Systems programs its newly released T-Wave 3Y Computer System to play Wheel of Fortune. In a double-blind series of tests, the T-Wave machine scores slightly better than chance when pitted against either Richard Dawkins, the Archbishop of Canterbury, or Koko the Gorilla. T-Wave trumpets this result as proof of the existence of an exponential speed-up in their quantum computer. However, MIT professor Aaron Scottson, ever the party pooper, points out that the T-Wave machine cheats: it is also programmed to regularly compliment Vanna White on her snazzy outfits, and given that it is she who is in charge of choosing which letters on the board to light up, it is Prof. Scottson's claim that T-Wave is skewing the results.

August — Astronomy


The Kepler space-based exoplanet observatory loses the third of its four original gyroscopes, potentially rendering the craft useless, as it is now unable to point itself. After losing the second of the four gyroscopes in 2013, crafty NASA engineers were able to stabilize the craft by balancing the solar wind pressure across the surface of the spaceship. With the third gyroscope now out of commission, the engineers will come up with a fix whereby, in addition to the solar wind, they balance the cosmic background radiation and vacuum-fluctuation pressure across the ship. This leaves the telescope pointing forever at a single fixed point in the sky. There the astronomers find, orbiting a Sun-like star in the Goldilocks zone, an Earth-sized planet that appears to have liquid water and the spectrum of oxygen in its atmosphere, both telltale signs of life. They name the planet Kepler-137q, but the press quickly nicknames it “Alderaan,” given its likeness to the fictional planet from the Star Wars films. The planet also has orbiting it what appears to be a small moon, which the astronomers name “Lovell” (after retired astronaut Jim Lovell).

After several weeks of observing Alderaan and its companion, one NASA astronomer exclaims, “That’s no moon!” and Alderaan suddenly disappears in what appears to be catastrophic explosion. Meanwhile the so-called moon, Lovell, moves out of view, seemingly under its own power. Given that they literally have nowhere else to look, the sad and lonely NASA “extronomers” continue to look for Lovell in Alderaan places.

As August comes to an end, a comet will appear out of nowhere and bring awe and wonder to humanity.

September — Economics


At the headquarters of the investment management company, Pimpco Corporation, in Oldport Beach, California, Satan Incarnate will appear in the cubicle of a “quant” stock market trader, named Donna Giovanni, who is located in a windowless subbasement of a Quonset hut on the perimeter of the Pimpco campus. Lucifer will offer Ms. Giovanni a working quantum computer, a faster-than-light communicator, and a quantum algorithm for solving the Black-Scholes stock derivative equation in BQP runtime. All this he offers her in exchange for her immortal soul. Ms. Giovanni, who is sure she is already damned anyway for working at Pimpco, gladly accepts the offer. She then sets up a superfast trading station in the subbasement that allows her to make stock trades faster than anybody else, by exploiting the quantum algorithm, the quantum computer, and by communicating with the Wall Street trading computers instantaneously, using the faster-than-light communicator. Within a few weeks she earns trillions of dollars for Pimpco (and a hefty billion-dollar bonus for herself) before her shenanigans trigger a stock market flash-crash that wipes out the life savings of millions of people and sends the world into a second great recession. Luckily for her, Ms. Giovanni cashes in her stocks before the crash, is promoted to president of the company, and lives a long and luxurious life off her wealth before she dies peacefully in her bed surrounded by her friends, family, Beelzebub, and the ghost of Murray Gell-Mann. As Gell-Mann guides her through the eight-fold way into the afterlife, the Devil returns empty handed to Hades. (As Ms. Giovanni had suspected, she had already sold her immortal soul to Pimpco many years before Old Scratch* ever showed up.)

* “Old Scratch” is a name of the Devil, chiefly in Southern US English. It is rarely, if ever, used to refer to Prof. Murray Gell-Mann.

October — Quantum Physics


Neil deGrasse Tyson, still reeling from the repercussions of claiming in 2015, on his television show StarTalk, that backwards-in-time communication is possible using quantum entangled particles, contacts Ms. Giovanni at Pimpco Corporation (after the stock market crash) and offers to buy her diabolical faster-than-light communicator for a bargain price. (A faster-than-light communicator doubles as a backwards-in-time communicator.) Dr. deGrasse Tyson then uses the machine to send a message backwards in time to tell his earlier 2015 self not to make that claim on television again, until such time as the future Dr. deGrasse Tyson has secured the patent rights to the communicator. The plan backfires when, due to quantum coherence, the message has the effect of producing a quantum-Schrödinger-cat-like superposition of Neil deGrasse Tysons, one who did make such a claim in 2015 on television and the other who did not. These deGrasse Tysons become quantum entangled with a superposition of future 2016 deGrasse Tysons, one who patents the faster-than-light communicator and the other who does not.

The resulting paradox rips a rift in the entire space-time continuum, which threatens to suck the entire universe into a super-duper massive black hole whose center is located at the Tokyo patent office. The universe is saved, however, at the last minute when a strange looking Englishman, wearing a bad suit, a bowtie, and a fez, emerges from a blue police box in the lobby of the Hayden Planetarium and repairs the rift (and also undoes the stock market crash and the collapse of worldwide agriculture while he is at it) with a glowing and buzzing green handheld magic wand, which the curious traveller claims is a screwdriver. After this all blows over, quantum physicists decide to hold an emergency international workshop on closed time-like curves, to which they invite as plenary speakers Neil deGrasse Tyson, David Deutsch, and the online Random Fictional Deepak Chopra Quote Generator. (Example from that last one: “Your desire embraces innumerable photons.”)

Later in the month T-Wave will announce that they have finally built a working quantum computer, the T-Wave 4Z, capable of solving intractable mathematical problems of tremendous practical importance. Nobody will believe them. Nevertheless, MIT Prof. Aaron Scottson will write a fiery blog post condemning the announcement. The United States will continue for the rest of the year not to invest in photonic quantum information processing, and therefore the Chinese will be the first to invent the quantum Internet, just in time for Christmas.

November — Chemistry


Chemists manage to coax a large sheet of graphene to grow into the shape of a Klein bottle, which is a topological oddity: namely, a hollow three-dimensional object, with a two-dimensional surface, embedded in a four-dimensional space. The chemists name the new object the “funkyball”. A key property of any Klein bottle is that its inside is the same as its outside, so that it can never be bigger on the inside than on the outside. As the chemists run electrical tests on the new material to search for signs of semiconducting or even superconducting properties, they begin receiving strange signals from the funkyball that seem to be intelligent in origin. Having nowhere else to turn, they call in Jodie Foster, who explains that the signals are in fact repetitions of the first million digits of pi in base three (or perhaps the first million digits of three in base pi). The chemists have inadvertently opened a trans-dimensional portal and made contact with extra-dimensional beings that live in spatial dimensions four through six. As the days go by, a rudimentary simultaneous translator is developed that allows the chemists to communicate with the beings in Morse code (base pi).

Things go well for a while. For example, the chemists learn that the extra-dimensional beings are composed entirely of dark matter and have hitherto interacted with our own universe only gravitationally. The transdimensional beings amusingly note that, in dimensions four through six, our own universe back here is considered to be made of what they call dark matter, which interacts with them only gravitationally.

Matters take a turn for the würst when the human chemists reveal that on Earth people still eat meat while the shocked extra-dimensional aliens declare that they are all vegans. Terrified that our meat-eating ways might corrupt their young, the aliens decide to destroy all human and animal life in our universe by attempting to send an army of self-replicating killer nanobots from the fourth dimension through the funkyball and into our domain.

In the nick of time the chemists call in an elite squad of militant carnivorous topologists, who succeed in convincing the chemists to cut the funkyball in half using an ion beam. This well-known topological operation reduces the Klein bottle to two Möbius strips: two one-sided, two-dimensional objects embedded in ordinary three-dimensional space. Their swift action cuts off the portal to the fourth dimension, destroying all the nanobots in mid-transport and saving our universe.

The chemists and the topologists share the 2016 Fields Medal.

December — Earth Science


NASA earth scientists will declare 2016 to be the hottest year on record (after declaring both 2014 and 2015 to also be the hottest years on record). Senator James Inhofe, Chair of the Senate Environment and Public Works Committee, will call a joint session of Congress, and subpoena NASA and other climate scientists to testify before it, to allow the Senator to protest the NASA scientists’ findings. Having trouble finding a snowball this time around, as it is 83°F in Washington DC on that day in December, Senator Inhofe drives to the nearby Dairy Queen ice cream shop in Arlington, Virginia, and purchases a vanilla ice cream milkshake, which he then dutifully totes back to Congress and displays on the floor, claiming, “Ladies and gentlemen, this ice cream milkshake is proof that global warming is a hoax!” He then hurls the milkshake at the head of House Minority Leader Nancy Pelosi. Congresswoman Pelosi is, however, more agile than Inhofe gives her credit for, and she ducks. The milkshake instead hits Speaker of the House Paul Ryan squarely on his newly bearded jaw. Since Dairy Queen ice cream milkshakes contain neither ice cream nor milk, but rather Plasticine-like substances more akin to epoxy and Elmer's Glue, the white milkshake sticks to Congressman Ryan’s beard and hair, making him look for all the world like Santa Claus.

The entire incident is broadcast on live television worldwide, and millions of confused children begin to telephone Congressman Ryan’s office with their Christmas lists. In desperation, Congressman Ryan contacts The Official NORAD Santa Tracker for help. NORAD agrees to automatically forward the calls to their own Santa Claus Christmas list phone number, which they have been using since 1955, and to track both Santa Claus and Paul Ryan on Christmas Eve to soothe the nerves of all the confused children. The standard NORAD videos of 24 hours of Santa zipping around the globe in his flying sleigh and delivering presents delight children the world over, as usual. The NORAD videos of Congressman Ryan are less popular, as they show that he spent most of Christmas Eve working out at the gym and lighting candles over the tomb of Ayn Rand.

Tuesday, May 5, 2015

Boson-Sampling-Inspired Quantum Metrology

Our group at Louisiana State University has teamed up with researchers at Macquarie University in Sydney and Boise State University in Boise to produce a new publication in Physical Review Letters, entitled “Linear Optical Quantum Metrology with Single Photons: Exploiting Spontaneously Generated Entanglement to Beat the Shot-Noise Limit.” Regular readers of this blog will know that Boson Sampling is a new paradigm in quantum computing whereby single photons, inputted into a linear optical interferometer, can carry out a mathematical sampling problem that would be intractable on a classical computer. The buzz surrounding Boson Sampling is that, unlike universal linear optical quantum computing, the experimental implementation requires no special quantum gates, such as controlled-NOT gates, no feed-forward, no teleportation, nor any other fancy stuff. Identical single photons rattle around in the interferometer, and they are sampled in the number basis when they come out. It sounds simple, but a classical machine cannot efficiently simulate the sampling output, whereas the linear optical device does this quite easily. For our recent review on Boson Sampling the reader is encouraged to go here.

In spite of all the excitement about Boson Sampling as a new paradigm for quantum information processing, the Boson Sampling problem has no known practical application to any mathematics problem anybody is interested in. In some ways the situation is similar to the late 1980s and early 1990s, before Shor’s invention of his factoring algorithm, when the first quantum algorithm shown to give an exponential speedup was the Deutsch-Jozsa (DJ) algorithm, which allowed one to tell if a function was constant or balanced. While a very nice result, nobody really gave a rat’s ass whether a function was constant or balanced. It was however hoped that the DJ algorithm was just the tip of an iceberg, and indeed the rest of the iceberg was revealed when Shor’s factoring algorithm was discovered. That was an (apparent) exponential speedup on a problem that people cared deeply about.

So too do we hope that Boson Sampling is just the tip of the iceberg when it comes to the power of linear optical interferometers, with simple single-photon inputs, to carry out tasks that are not only impossible classically but also of practical interest. In that direction our paper makes a frontal attack on the berg with a metrological ice axe. The idea emerged from the understanding that in Boson Sampling, an exponentially large amount of number-path entanglement is generated through the natural evolution of the single photons in the interferometer, via repeated implementation of the Hong-Ou-Mandel effect at each beam splitter. It has been known for nearly 30 years that number-path entanglement is a resource for quantum metrology, beating the shot-noise limit, and so it was natural for us to ask if this hidden power in linear optics with single-photon inputs might be put to work for a metrological advantage. Our paper shows that this is indeed the case.
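The Hong-Ou-Mandel effect at a single beam splitter can be checked in a few lines. As a minimal numerical sketch (my illustration, not a calculation from the paper): the amplitude for one photon in each input port of a 50/50 beam splitter to exit with one photon in each output port is the permanent of the beam-splitter unitary, and it vanishes, so the photons always leave together.

```python
import numpy as np

# Hong-Ou-Mandel at a 50/50 beam splitter, via the permanent rule:
# the amplitude for input photons in modes (1,2) to exit in modes (1,2)
# is the permanent of the 2x2 beam-splitter unitary.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coincidence_amp = U[0, 0] * U[1, 1] + U[0, 1] * U[1, 0]  # 2x2 permanent
print(abs(coincidence_amp) ** 2)  # coincidence probability: 0 (photons bunch)
```

The vanishing coincidence probability is the HOM dip; it is exactly this pairwise bunching, repeated at every beam splitter, that builds up the number-path entanglement.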

To briefly summarize our scheme, we send a sequence of single photons into a linear optical interferometer that contains an interferometric implementation of the Quantum Fourier Transform, coupled with a bank of phase shifters with an unknown phase that is to be measured. Our signal consists of sampling the outputs for the event in which the same sequence of single photons emerges from the exit ports. The signal-to-noise analysis was quite challenging, as it involves the computation of the permanent of a large square matrix with complex entries. While in general this is classically intractable, to our surprise, something about the structure of the Quantum Fourier Transform seems to allow the permanent to be computed analytically in closed form. At least we conjecture this is so. We were able to eyeball a closed-form formula for the permanent of a matrix of any rank and confirm it numerically out to rank 20 or so, but a rigorous mathematical proof of the permanent formula is still wanting.
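For readers who want to play with permanents themselves, here is a minimal sketch of Ryser's formula, the best known general exact algorithm at O(2^n·n²) time. This is just an illustration of why brute-force permanents get hard quickly, not the closed-form result we conjecture in the paper.

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum_{j in S} a[i][j]).  Cost is O(2^n * n^2)."""
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # 2
```

Doubling the matrix size squares the number of column subsets, which is why numerically confirming a permanent formula out past rank 20 or so becomes painful.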

Once we had the signal and variance analysis carried out, we were able to show (carefully counting resources) that the sensitivity of the device, which we christened the Quantum Fourier Transform Interferometer, is well below the classical shot-noise limit. It has been known for years that exotic number-path entangled states, such as N00N states, can beat the shot-noise limit, but N00N states are resource intensive to create in the first place, requiring either very strong Kerr nonlinearities or non-deterministic heralding. Here in our new paper we get super-sensitivity for free from the natural evolution of single photons in a passive linear optical interferometer. This then seems to be the first example of the Boson Sampling paradigm providing a quantum advantage in an arena of importance, namely quantum metrology.

Who knows what is left on this iceberg, as yet unexplored?

Saturday, April 18, 2015

UK National Strategy for Quantum Technologies

Just winding up a one-week trip to the UK, where I attended the Bristol Quantum Information Technologies (BQIT) Workshop at the kind invitation of the organizers. There was some disagreement about how the acronym BQIT should be pronounced, but upon my arrival we all instantly agreed it should be B-QuIeT. The workshop was a lively set of short talks interspersed with panel discussions, and it was the first time I heard in some detail about the new UK National Strategy for Quantum Technology, from none other than Sir Peter Knight himself, who was a speaker on one of the panels focusing on the UK Quantum Hubs Network. There was quite a bit of excitement in the air as Simon Benjamin (University of Oxford, Quantum Computing Hub) gushed effusively about writing a 12-page proposal that came in at £3 million per page!

There are four hubs dotting the UK countryside from Scotland to England, with a total five-year budget of £120M across all four and foci in quantum communications, imaging, sensing, and computing. Complementing these hubs are at least three new quantum-technologies doctoral training centers. The budget for the training centers was less clear, but I suppose altogether this comes to close to £200M over five years, potentially renewable for another five. And that, folks, as they say, is new money.

All this activity seems to be coordinated by the UK Quantum Technologies Strategic Advisory Board, chaired by Prof. David Delpy, which has laid out a vision for a coordinated strategy in quantum technologies development in the UK.

It is somewhat disheartening to see all of this activity in the UK from the perspective of a researcher in the US, where Congress and the President can’t even seem to pass any new budget at all from year to year. I wish the UK program well, and I did hear that each hub has set aside funds for international collaborations, so I hope this will be the first of several trips to visit my quantum friends and colleagues on the far side of the big pond.

For young researchers interested in doing a PhD or postdoctoral work in quantum technologies: follow your noses and follow the money. The UK seems to be the place where the quantum of action is at these days.

Thursday, February 5, 2015

Guest Ghost Post: The Future of QIP: To parallelize or not?

This year’s Quantum Information Processing conference (QIP) was held in the beautiful and vibrant city of Sydney, Australia, from the 12th to the 16th of January. Close to 225 researchers from across the world attended the conference. The talks were hosted at the University of Technology Sydney (UTS). Runyao Duan led the local organizing committee, whose members came from UTS, the University of Sydney, Macquarie University, and the University of Queensland. They did a splendid job in ensuring that the conference was a grand success.

The 18th edition of QIP featured about 40 talks and 150 posters covering various important advances in quantum information processing over the past year. A detailed summary of all the talks presented at QIP can be found on the Quantum Pontiff blog, where Aram Harrow and Steve Flammia were “live-blogging” the conference. In this report, I shall focus on the things not covered by Aram and Steve, especially on the business meeting.

A lot of buzz and anticipation surrounded this year’s business meeting at QIP, largely due to the pending decision on the question of whether “to parallelize or not to parallelize” QIP. Here is some background on the issue. QIP, as it stands today, is a single-track conference featuring two kinds of talks: 50-minute plenary talks and 30-minute talks. Over a five-day period, minus one free afternoon, this allows for about 40 talks in the entire conference. However, the number of submissions has seen a steep increase over the years due to the explosion of research in quantum information processing. What began in the early nineties as a workshop with a few tens of submissions today receives several hundred submissions each year. Thus the acceptance rates at QIP are now terrifyingly low; the rate for a talk at this year’s QIP was just about 20%.

Each year the program committee faces the increasingly difficult task of rejecting 20 to 30 good submissions that they think are just as good as some of the talks that make the cut. This has led the steering committee to consider introducing parallel sessions, with the view that they would allow for more talks. In order to gauge public opinion on the issue, Stephanie Wehner posted a survey on the Web for the QIP community and presented the results at the business meeting. The public sentiment seemed largely in favor of parallel sessions. When the result of the survey was shared at the business meeting, however, a major concern was raised about the possible fragmentation of the community into sub-communities. In response to this concern, Peter Shor spoke about how not parallelizing QIP at this point could have the same fragmenting effect at a much graver level. Peter pointed out the precedent of STOC and FOCS, where the latter remained a single-session conference for a long time while the computer science community grew manyfold in size. In due course, when certain factions of the community felt they weren’t being sufficiently accepted at the conference, they decided to split off and start conferences of their own. This is already beginning to happen in the quantum information community with the birth of various conferences such as QCrypt, QEC, and Beyond I.I.D. in Information Theory. These conferences provide venues for topics that are becoming more marginalized and less fashionable at the larger QIP conference.

Nevertheless, the steering committee also pointed out that parallel sessions, even if introduced, would occur only during certain parts of the program. The plenary talks, for instance, would still be held in common, so the change could not result in a complete splitting of the community. At this point a question of logistics surfaced: a single track for plenary sessions plus two parallel sessions for other talks requires one big room and two small rooms at the conference venue, which could be more expensive. Barry Sanders, the lead organizer for QIP 2016, however, guaranteed in his presentation on the conference venue at Banff (near Calgary, Canada) that this would not be an issue at next year’s QIP. From the pulse of things at this year’s QIP, it seems rather likely that we will see parallel sessions in next year’s edition. Yet this is by no means a certainty.

Another development worth mentioning from the business meeting was the proposal for open refereeing of papers at QIP. Aram Harrow and Steve Flammia, who had already implemented such a scheme at TQC (Theory of Quantum Computation, Communication and Cryptography) 2014, put forth the proposal. Aram explained why he thought referee reports of QIP submissions should be made publicly available. The real purpose cited was not one of the obvious ones---neither to make public the reasons why a paper is accepted or rejected, nor to push referees to write reports according to what this year’s program committee chair Ronald de Wolf called the “golden rule” of refereeing, namely to write reports the way one would like one’s own paper to be reviewed. The real reason cited was simply to make available expert summaries and critiques, which could immensely benefit other researchers, especially younger ones, and which otherwise go underutilized, aiding in the publication decision alone. Although the general perception of the idea was positive, it seems unlikely that the QIP steering committee will recommend the scheme as a whole to the program committee. Ronald and Andrew Doherty raised the concern that such a scheme could place a huge extra burden on the already over-burdened program committee. However, it seems likely that, as an intermediate step, the program committee summaries of the accepted talks will be made public at QIP 2016, as was done earlier at TQC 2014.

The business meeting also saw ETH Zurich and Microsoft Research bid for hosting QIP 2017. The public opinion seemed to be in favor of the ETH bid for 2017, while it seemed that Microsoft could potentially host QIP during the subsequent year, i.e., 2018.

Earlier, proceedings at QIP this year kicked off with tutorial sessions during the weekend leading up to the conference. Entry to the tutorials was included as part of the conference registration. Itai Arad of CQT covered the local Hamiltonian problem (I couldn’t make this one due to a flight delay). The second speaker of the day was Roger Colbeck of the Univ. of York, who discussed device independence in quantum information processing. Roger described the goal of the device-independence model in the context of cryptography as providing unconditional security while allowing for device failure or tampering, and discussed the various tools that go into proving security of protocols within the model. He also highlighted one of the main challenges of the approach: the need for protocols that allow reuse of the devices while guaranteeing unconditional security. On the second morning, Krysta Svore of Microsoft gave a fascinating tutorial on the various components that go into the design and implementation of a quantum software architecture for automated control and programming of tomorrow’s large-scale quantum computers. Later, Alexandre Blais of the Univ. of Sherbrooke delivered the final tutorial, on superconducting qubits. Addressing a largely theoretical audience, Alexandre did a splendid job of describing the basic physics behind the superconducting transmon qubit. He also discussed control and readout via strong coupling to a microwave resonator, along with results from various recent experiments.

The social aspects of the conference included a banquet and a rump session. The banquet was a fancy affair, held on a showboat. The Sydney weather, which had been dull and rainy until then, turned to glorious, UV-rich sunshine just in time for the banquet. The boat set sail from Sydney’s Darling Harbour around half past seven with plenty of food, beer, wine, and the awesome folks from the conference. Some spectacular views of the Sydney skyline in the magical twilight soon followed, which were a real treat. Despite numerous warnings from the organizers, many participants unfortunately “missed the boat”.

This year’s rump session was a fun ride with the lighter side of the QIP community. The session was held at the “Manning” bar at the University of Sydney. Steve Flammia introduced himself as the “MR---the Master of Rump” for the night. Among the speakers, John Smolin poked fun at the ongoing trend of attaching the “quantum” prefix to literally anything in the world in the name of quantum information research. Later, Daniel Gottesman decided to take the new proposal for open refereeing to a whole new level. While we’ve heard of double-blind refereeing, where the identity of the authors is concealed from the referees, Daniel suggested triple and quadruple blinding, where the text of the paper is encrypted from the referee and where the talk is concealed from the audience, respectively.

From excellent and stimulating talks and posters, through the intriguing business meeting, to the fun-filled banquet and rump session, QIP 2015 had it all. Parallel sessions or not, you can’t help but be excited about QIP 2016 already.

Kaushik P. Seshadreesan is a doctoral student in physics at Louisiana State University, working under the supervision of Jonathan P. Dowling and Mark M. Wilde. He will graduate with his PhD in quantum information theory and quantum metrology in May of 2015.

Tuesday, January 6, 2015

Linear optical quantum metrology with single photons --- Exploiting spontaneously generated entanglement to beat the shotnoise limit

Keith R. Motes, Jonathan P. Olson, Evan J. Rabeaux, Jonathan P. Dowling, S. Jay Olson, Peter P. Rohde

(Submitted on 6 Jan 2015)

Quantum number-path entanglement is a resource for super-sensitive quantum metrology and in particular provides for sub-shotnoise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place --- typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feed-forward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multi-photon walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer --- fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection --- is capable of significantly beating the shotnoise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

Tuesday, December 9, 2014

D-Wave, Boss, D-Wave!

Some of my colleagues (and my followers on both social and anti-social media) have been asking me to comment on the D-Wave Systems (so-called) quantum computer, in light of all the seemingly contradictory press releases, scientific papers, and blog posts that have appeared over the years. This request is similar to asking me to assess and comment on the status of a pond filled with ravenous, blood-thirsty piranhas by stripping down to my Speedos, slathering myself with the blood of a fatted calf, and then diving into the pond and thrashing about. However, I can assure you that the sight of me wearing Speedos would keep even the most ravenous piranha — or perhaps even Scott Aaronson? — at bay.

Just kidding Scott! Love ya bro! (Let the flames begin….)

In my recent book, I do indeed have a section on the D-Wave phenomenon, but that was written in 2012 and perhaps needs a light-hearted update. When writing the book, my friend and colleague, and known Grand Poobah of fine D-Wave Systems hardware, Daniel Lidar, asked me if I wanted him to read over my section on D-Wave Systems computers. I told him, flat out, no. He pressed on and said, with a wry grin, “You don’t want to say something you might later regret.” I replied, “Dan, I can imagine many things I would like to have engraved on my tombstone but one of them certainly would not be…


In addition, for reasons I will leave nebulous for the moment, in the past few months I have had to give myself a crash course on the status and performance of the D-Wave Systems machines and perhaps I have a clearer view of what is (or is not) going on.

Firstly let me say that this vigorous and open debate about the D-Wave machines is good for science and is the hallmark of the scientific method at work. At times I wish the tone of the debate were more civil but history is filled with uncivilized scientific debates and perhaps civility is too much to wish for. Who can forget the baleful Boltzmann-Loschmidt debates on the statistical mechanics reversibility question? How about the forceful Huxley-Wilberforce debate on evolution? (“So tell us, Dr. Huxley, is it from your father’s side of the family that you claim your descent from an ape or is it from your mother’s?”) What about Cantor’s cantankerous feud with Kronecker over the nature of mathematical infinities?

Anybody? Anybody? (Tap, tap, tap....) Is this thing even on!?

Like scientific debates of yore, this D-Wave business is not without its polemics. I attended a workshop at Harvard in March where another invited speaker, a friend and colleague of mine, when asked what he thought about D-Wave, replied in no uncertain terms, “I think they’re a bunch of charlatans.” (“So tell us, D-Wave Two computer, is it from your father’s side of the family that you claim your descent from an abacus, or is it from your mother’s?”) On the other hand, my friends and colleagues like Colin Williams are pro (or at least not totally anti), while others, like Daniel Lidar, are steadfastly neutral. I can assure that particular Harvard-workshop-speaker colleague of mine that neither of them is (at least currently known to be) a charlatan.

I will try to give here an abridged history of what is going on, where I will fill in the gaps in my knowledge with the usual wild-assed conjectures. So far as I can tell, as usual, this all can be blamed on us theorists. In 2001, Ed Farhi at MIT and his collaborators published a paper in Science that proposed a new model for quantum computation called Adiabatic Quantum Computing (AQC). This model was radically different from the standard circuit model for quantum computing, which we’ll call Canonical Quantum Computing (CQC), developed by Deutsch and others.

The standard circuit-gate model describes how a universal quantum computer can be built up out of circuits of one- and two-qubit gates, in a way analogous to how a classical digital computer is built. It is called ‘universal’ because any quantum algorithm, such as Shor’s factoring algorithm, can be programmed into its circuitry. (By contrast, it is strongly believed that the factoring algorithm cannot be programmed into a classical computer’s circuitry without suffering an exponential slowdown in run time.) While the circuit-gate model is very likely more powerful than a classical computer for this problem, to factor large numbers of interest to, say, the National Security Agency (NSA), the circuit would have to contain billions or even trillions of qubits and gates, allowing for quantum error correction and all that. To date, the record number of qubits constructed in the lab in such a way, held by the ion-trap quantum computer, is about 16, not a trillion. Every few years or so the ion trappers carefully polish up their trap and add another qubit, but the pace is very slow.

Adiabatic Quantum Computing (AQC) seemed completely different from the circuit-gate model, or Canonical Quantum Computing (CQC). In AQC, instead of building gates and circuits in the usual sense, one identifies a large physical quantum system, say a collection of interacting quantum spins, with a particular Hamiltonian structure. The problem to be solved is then “programmed” into the state of these spins. The system is then allowed to evolve adiabatically, by slowly tuning some parameter such as temperature or time, into its ground state. If everything is set up properly, the spin state of the ground state of the Hamiltonian contains the answer to the computational question you programmed in at the higher temperature (or earlier time).
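To make the adiabatic recipe concrete, here is a small numerical sketch with a hypothetical two-qubit toy Hamiltonian (my own illustration, not Farhi's construction or D-Wave's hardware): interpolate H(s) = (1-s)·H_start + s·H_problem from a transverse-field Hamiltonian, whose ground state is easy to prepare, to a diagonal problem Hamiltonian whose ground state encodes the answer, and track the ground energy and the spectral gap along the way.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# Start Hamiltonian: ground state is the uniform superposition |++>.
H_start = -(np.kron(X, I) + np.kron(I, X))
# Problem Hamiltonian (diagonal): its ground state |10> is "the answer"
# to a toy two-spin optimization problem.
H_problem = np.kron(Z, Z) + 0.5 * np.kron(Z, I)

for s in (0.0, 0.5, 1.0):
    H = (1 - s) * H_start + s * H_problem
    vals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    print(f"s={s:.1f}  ground energy={vals[0]:+.3f}  gap={vals[1] - vals[0]:.3f}")
```

The adiabatic theorem guarantees the system stays in the instantaneous ground state provided the tuning is slow compared to the minimum gap along the path, which is why a closing gap is the enemy of this whole approach.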

What set the quantum computing community off into a paroxysm was that Farhi and the gang claimed, in their original Science paper, that AQC could solve NP-complete (travelling-salesman-type) problems with an exponential speedup over a classical computer. This was an astounding claim, as it is widely and strongly believed that the CQC model gives no exponential speedup on NP-complete problems over a classical computer. And so, in a nutshell, Farhi and collaborators were claiming that the AQC approach was exponentially more powerful than CQC, at least on this one particular class of important optimization problems.

A vigorous scientific debate ensued, rotten tomatoes were thrown, and finally, in 2007, Dorit Aharonov and collaborators proved that AQC is computationally equivalent to ordinary circuit-gate quantum computing. That is, AQC is not more powerful than CQC. (They are polynomially isomorphic and hence both equally universal models of quantum computation.) Hence the claim that AQC would give an exponential speedup on NP-complete problems was very likely incorrect, since it is strongly believed that no such speedup occurs on the CQC.

Got it?  

The situation reminds me of quantum mechanics in 1926, when there were Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics, and nobody knew if they were the same theory, or gave different predictions, or what, until Dirac (and others) showed that they were transformable into each other. Hence they made exactly the same physical predictions and were not two different theories of quantum mechanics but rather two different pictures of quantum mechanics. However, as is well known to the quantum mechanic on the street, some quantum theory problems are much easier to work out in the Heisenberg picture than in the Schrödinger picture (and vice versa). As a consequence, due to the Gottesman-Knill theorem, the Heisenberg and Schrödinger pictures are not computationally equivalent even though they are physically equivalent. That is, a quantum computing circuit composed only of gates from the Clifford group, which looks like it would be impossible to efficiently simulate on a classical computer in the Schrödinger picture, is in fact efficiently simulatable in the Heisenberg picture.
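The Gottesman-Knill point can be seen in a toy example (my own sketch, not drawn from any of the papers above): preparing a Bell pair with a Hadamard and a CNOT. In the Heisenberg picture one drags two Pauli strings through the circuit instead of updating the four state-vector amplitudes, and that bookkeeping stays polynomial as qubits are added. The update rules below omit phases and Y operators, which never arise in this particular circuit.

```python
# Stabilizers of |00> are the Pauli strings ZI and IZ; track how they
# transform under the Clifford circuit: H on qubit 0, then CNOT(0 -> 1).

def apply_h(paulis, q):
    """Hadamard on qubit q swaps X <-> Z in each stabilizer string."""
    swap = {'X': 'Z', 'Z': 'X', 'I': 'I'}
    return [p[:q] + swap[p[q]] + p[q + 1:] for p in paulis]

def apply_cnot(paulis, c, t):
    """CNOT: an X on the control copies an X onto the target, and a Z
    on the target copies a Z onto the control (phase- and Y-free case)."""
    out = []
    for p in paulis:
        p = list(p)
        if p[c] == 'X' and p[t] == 'I':
            p[t] = 'X'
        if p[t] == 'Z' and p[c] == 'I':
            p[c] = 'Z'
        out.append(''.join(p))
    return out

stabs = apply_cnot(apply_h(['ZI', 'IZ'], 0), 0, 1)
print(stabs)  # ['XX', 'ZZ']: the stabilizers of the Bell state
```

Two short strings fully specify the entangled output state; a Schrödinger-picture simulation of the same trick on n qubits would have to haul around 2^n amplitudes.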

Okay, so the AQC model of quantum computation is neither more nor less powerful than circuit-gate quantum computing, but because of its mapping onto fairly simple spin systems, it looks more like a design for a straightforward physics experiment than a design for a gigantic digital quantum computer. The open question is whether AQC is easier to implement than CQC. That is, instead of pursuing the circuit-gate paradigm, adding one ion qubit to your trap every few years until you hit a trillion of them, perhaps you could cook up some simple spin-like physical system with a whole lot of qubits (maybe hundreds but not trillions) interacting in a certain way, code in your problem, cool the whole thing down slowly, and read out your answer — build your quantum computer that way! And based on that potential, I surmise, is how and why D-Wave Systems got involved in the quantum computing game.

In 2007 D-Wave Systems demonstrated a 16-qubit prototype called The Orion and used it to solve some computational problems, such as a pattern-matching problem related to drug design and a Sudoku puzzle. In 2011 they unveiled their first commercial machine, the D-Wave One, a 128-qubit machine that sold for $1,000,000. They have sold precisely one of these, and that was to Lockheed-Martin, which then immediately handed it over to a research group at the University of Southern California, consisting of Daniel Lidar and collaborators, who operate the machine with complete autonomy from that company. (USC does provide the physical infrastructure for the machine and its operation, but that role is compartmentalized and separate from the research activities.) In 2013 D-Wave Systems produced the D-Wave Two, a 512-qubit machine, and they have sold precisely two of those. The first was also bought by Lockheed-Martin and is run by USC, and the second was bought by Google and is located at the Quantum Artificial Intelligence Lab at the NASA Ames Research Center at Moffett Field, California (near San Jose). This particular NASA machine has a front-end user interface run on a Google proxy server that allows the user to submit batch jobs to the D-Wave Two hardware via that route.

So just what are these machines and what are they doing? This question is difficult to answer, as we are dealing not with a science experiment but with a product produced by an industrial concern. If a scientist, say at a university, claims to see such and such a result in an experiment, then they are required to publish sufficient details about the experiment so that independent researchers at other universities can construct identical experiments and confirm or refute the result. But the D-Wave machines are industrial products, not science experiments, and details about their design, construction, performance, and even the software that runs them are proprietary. D-Wave does not want other companies to know all the details of how the machines are built.

So scientists, instead of constructing their own duplicate of the machine, are forced to use the D-Wave machine to test itself — to run tests or “experiments” treating the thing as a black box. One bit of front-end software that interfaces with the machine is in fact called “black box”. And for some years now the game has been to choose some computationally hard problem, run it on the D-Wave machine, compare the run time against the same problem run on a fast classical computer, and then publish the results of the race. The results greatly depend on which problem you choose, and hence this is why there are so many conflicting reports appearing in the popular press. One day we hear that the D-Wave machine is more powerful than a classical computer and the next we hear it is no more powerful. Well, this is because the term “powerful” is problem dependent. It makes little sense to say one machine is more powerful than another in general. It only makes sense to say one machine is more powerful than another when computing a specific problem or class of problems. When computer scientists say a quantum computer is more powerful than a classical one, they mean on a specific set of problems, for example factoring. For some problems, say computing the permanent of a matrix, a quantum computer is likely not any more powerful than a classical one.

Let’s take an analogy. Suppose in 1984, ten years before Shor discovered his factoring algorithm, an alien spacecraft lands on the roof of the physics building at the University of Colorado (where I was a grad student at that time) and out from the UFO emerges a luminous, one-eyed, green-skinned alien named Xiva the Decompiler of Words. Xiva hands me a pocket-sized device that she calls the Xblox, claims it is a universal quantum computer, and tells me to try it out. I decide to pit it against the nearby Cray X-MP supercomputer. If I first run the oracular ordered-search problem (don’t ask), the Xblox will perform better than the X-MP by at most a factor of 3, and I would exclaim, “This quantum computer sucks!” If instead I randomly chose to try to factor a large number on the two machines, I’d find the Xblox has an exponentially shorter run time than the X-MP. I then would exclaim, “This quantum computer really rocks!” On other problems (NP-complete or unordered search) the Xblox would show only a quadratic improvement over the X-MP. The question of which computer has the better performance is meaningless unless you specify the particular problem they are performing the computation on.

What is inside the D-Wave machines? I suspect that D-Wave had originally hoped to implement something like full-blown Adiabatic Quantum Computing on their machines. Since the 2007 result of Aharonov et al., we know that AQC is equivalent to ordinary QC, as discussed above, but AQC is perhaps easier to implement. Hence if the D-Wave machine turned out to be a general-purpose AQC machine, it would truly be a universal quantum computer. This hope may have been the source of early (and somewhat naive) press reports from D-Wave that they were building a true universal quantum computer with entanglement and the whole nine yards.

The speed of hype may often greatly exceed the speed of write.

As mentioned above, AQC could be implemented by building a controllable lattice of coupled spin-half particles. However, given that the spin-half particle (a neutron for example) is a quantum two-level system, any array of coupled two-level systems will do. D-Wave decided to go with superconducting flux qubits. These are little micron-size loops of superconducting current-carrying wire in which all the electrons can be made to circulate counterclockwise (the qubit |0⟩ state) or all clockwise (the qubit |1⟩ state). If the little buggers are sufficiently cold and sufficiently protected from the environment, then they have a long enough coherence time that you can make a cat state, the superposition |0⟩+|1⟩, which is then the starting point for generating massive amounts of entanglement across the lattice, via qubit-qubit interactions, and away you go to implement AQC (or even CQC). The particular Hamiltonian system D-Wave has chosen is the Ising spin model (ISM), whose Hamiltonian has a binary and tunable coupling between each pair of qubits: in its standard form, H = Σi hi Zi + Σi<j Jij Zi Zj, with tunable local fields hi and couplings Jij.

This Hamiltonian shows up in the theory of quantum spin glasses, for example. There is one additional hardware implementation step. The Ising (pronounced “ee-sing,” not “icing”) Hamiltonian presupposes that any qubit (in, say, a square lattice) can be coupled to any other qubit in the lattice. That is not the case in the current D-Wave One and Two: only nearest neighbors are coupled. In addition, some of the qubits don’t work due to manufacturing issues, but you know in advance which of them will be offline, since D-Wave characterizes the chip and tells you.

To handle this hardware issue, an additional wiring diagram is needed to map the Ising problem (with arbitrary qubit-qubit coupling and all the qubits in play) onto the D-Wave hardware (with only nearest-neighbor coupling and where some of the qubits are no-shows). That wiring diagram is called a Chimera graph. (I suppose because, to the unaided eye, the graph looks like the fire-breathing hybrid of a goat and a lion and a serpent.) It is important to note that each different computational problem will require a different Chimera mapping, which can be quite hard to work out for a large number of qubits, but the D-Wave black-box front end will help you find one if you are too dumb to figure it out on your own.

So the idea is that you program your input as an initial state of this Hamiltonian (time t = 0), then slowly tune the time parameter t until the thing reaches its ground state (time t = T), and then read out your answer, and you have a type of AQC. The problem is that for this protocol to work as universal AQC, the qubits must have coherence times at least as long as (and preferably longer than) the adiabatic run time T. This might be the case if the qubits were cold enough and protected well enough from the environment, but that is not the case. The qubits in the current D-Wave machines are quite noisy and have very short coherence times, due to temperature fluctuations, imperfections, and other technical noise. Hence the qubits are not sufficiently coherent for long-range entanglement to spread across the lattice; full-blown AQC cannot yet be implemented, and by the Aharonov et al. proof the machine is then not universal for quantum computing. That is, the current D-Wave machines cannot solve all the problems a general quantum computer can solve. Well, if they can’t do that, what can they do?
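As a sketch of the standard adiabatic protocol just described (generic AQC notation, not anything specific to D-Wave’s implementation), the machine interpolates between an easy-to-prepare starting Hamiltonian and the problem Hamiltonian:

```latex
H(t) \;=\; \Bigl(1 - \tfrac{t}{T}\Bigr) H_{\text{initial}} \;+\; \tfrac{t}{T}\, H_{\text{problem}}, \qquad 0 \le t \le T .
```

One prepares the (easy) ground state of H\(_{\text{initial}}\), and provided the total run time T is long compared to the inverse square of the minimum energy gap encountered along the way, the system ends up in the ground state of H\(_{\text{problem}}\), which encodes the answer.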

There is a watered-down version of AQC called quantum simulated annealing, and the good folks at D-Wave Systems claim their machines can do this. Annealing is a physical process, used for example in the manufacturing of swords, whereby the metal alloy of the sword is cooled very slowly into its ground state. The idea is that if you cool the sword too rapidly it will freeze into some state of energy higher than its ground state, a local minimum of the alloy structure, instead of the global minimum. These higher-energy local minima produce defects that weaken the final product. Hence you want to cool it slowly to get a strong sword. The physics is that while the alloy is cooling, if it does find itself in a local minimum energy well, then, because it is still at a relatively high temperature, thermal fluctuations will cause it to leap over the well’s barrier height, allowing it to leave the local minimum and continue the search for the true minimum.

Computer scientists have emulated this physical process by designing an optimization procedure called simulated annealing. You are seeking the optimal state of some large computational problem, so you simulate the annealing process with a slowly adjustable parameter (usually time, not temperature) and add in random fluctuations by hand to mimic the thermal fluctuations of the sword-making process. Simulated annealing often works much better than other optimization methods, such as steepest descent, in finding the true global minimum of a large problem.
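As a minimal sketch of the procedure (the energy landscape, cooling schedule, and step size here are arbitrary choices of mine for illustration, nothing specific to D-Wave):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=10.0, cooling=0.999,
                        steps=20000, seed=0):
    """Minimize `energy` by simulated annealing.

    Downhill moves are always accepted; uphill moves are accepted with
    Boltzmann probability exp(-dE/T), mimicking the thermal fluctuations
    that let a cooling alloy hop out of local minima.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        x_new = neighbor(x, rng)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # slow geometric "cooling" schedule
    return best_x, best_e

# A 1-D landscape riddled with local minima; the global minimum is at x = 0.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)

# Start well away from the global minimum, in the basin near x = 4.5.
x_min, e_min = simulated_annealing(f, step, x0=4.5)
```

With these settings the best energy found is typically far below the starting value f(4.5) ≈ 40, because the thermal kicks let the search hop over the barriers between wells early on, while the cooling locks in the best basin at the end. A steepest-descent search from the same start would simply slide into the nearest local well and stay there.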

In Quantum Simulated Annealing (QSA) the bits are replaced with qubits and the thermal fluctuations are replaced with the quantum fluctuations inherent in any quantum system. Qualitatively, it is claimed that quantum simulated annealing is more efficient than classical simulated annealing since, instead of fluctuations driving the system up and over the potential barrier of the local minimum in which it is stuck, quantum mechanics also allows the system to tunnel through the potential barrier. For this reason, claims are made that quantum simulated annealing should find the global minimum of a system faster than classical simulated annealing. Quantum simulated annealing should be viewed as strictly contained in AQC, and hence it is not universal for quantum computing. What remains to be seen is whether it is more powerful than a classical computer on a specific problem.

And so we come to the core idea. As far as I can tell, D-Wave Systems claims their machines implement quantum simulated annealing. The question, then, is whether this version of QSA, with noisy qubits, short coherence times, and no entanglement, is good for anything. To answer this question, D-Wave farms out time on their machines to various researchers to see if there are any computational problems, mapped into QSA, for which the D-Wave machines outperform a comparable classical machine. So in addition to being problem dependent, the answer also depends on what you mean by ‘comparable classical machine.’ It is due to these ambiguities that we have what appear to be contradictory publications and press releases on the power of the D-Wave machines.

I’m not going to go through the list of all the computational problems that have been tried on the D-Wave computers but will focus on the one that currently seems to show the most promise. This computational problem is known as QUBO. A few months ago I had never heard of QUBO, and I was shocked to find out, after I had, that the Q in QUBO stands not for ‘quantum’ but for ‘quadratic,’ as in Quadratic Unconstrained Binary Optimization.

Borzhemoi! This I knew from nothing!

Well, there was a steep learning curve, but without going into all the details, a QUBO problem is a type of optimization problem whose mathematical structure maps directly onto the Ising Hamiltonian that is native to the D-Wave hardware. It seems likely that if the D-Wave machine gives any computational advantage, and if that advantage is quantum in origin (the jury is still out on both points), then it will only be on problems of the QUBO type. However, this is an interesting class of problems, with applications primarily in image recognition. In particular, certain types of synthetic radar image processing, of NASA Earth Science interest, map onto the QUBO form.
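To make the QUBO-to-Ising correspondence concrete, here is a small self-contained sketch (the example coefficients are made up). A QUBO asks for the binary vector x ∈ {0,1}ⁿ minimizing Σ Qᵢⱼ xᵢxⱼ; the substitution xᵢ = (1 + sᵢ)/2 turns this into an Ising problem over spins sᵢ = ±1 with couplings J, fields h, and a constant energy offset:

```python
def qubo_to_ising(Q):
    """Convert a QUBO min_x sum Q[i,j] x_i x_j (x in {0,1}) to an Ising
    problem min_s sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset
    (s in {-1,+1}) via x_i = (1 + s_i) / 2.

    Q is a dict {(i, j): weight} with i <= j; (i, i) entries are the
    linear (diagonal) terms.
    """
    h, J, offset = {}, {}, 0.0
    for (i, j), q in Q.items():
        if i == j:                      # linear term: q*x_i -> q/2 * s_i + q/2
            h[i] = h.get(i, 0.0) + q / 2.0
            offset += q / 2.0
        else:                           # quadratic: q*x_i*x_j expands into 4 pieces
            J[(i, j)] = J.get((i, j), 0.0) + q / 4.0
            h[i] = h.get(i, 0.0) + q / 4.0
            h[j] = h.get(j, 0.0) + q / 4.0
            offset += q / 4.0
    return h, J, offset

def qubo_energy(Q, x):
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

def ising_energy(h, J, offset, s):
    return (sum(hi * s[i] for i, hi in h.items())
            + sum(jij * s[i] * s[j] for (i, j), jij in J.items())
            + offset)

# Check the two formulations agree on every assignment of a toy problem.
Q = {(0, 0): -1.0, (1, 1): 2.0, (0, 1): -3.0}
h, J, offset = qubo_to_ising(Q)
for x0 in (0, 1):
    for x1 in (0, 1):
        x = {0: x0, 1: x1}
        s = {i: 2 * xi - 1 for i, xi in x.items()}
        assert abs(qubo_energy(Q, x) - ising_energy(h, J, offset, s)) < 1e-12
```

This change of variables is exact and cheap; the expensive part in practice is the subsequent Chimera embedding of the resulting Ising graph onto the hardware’s limited connectivity.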

If the D-Wave machine gives an advantage, it will be only on practical problems that can be mapped (with little computational overhead) to a QUBO problem. Whether that set of problems is the empty set is still unknown.

Here is a summary of the steps required to map a practical problem like image recognition onto the D-Wave machine:

1.)  Find a practical problem that has an efficient mapping to a QUBO problem and carry out that mapping. 

2.)  Next, the QUBO problem must be mapped to the specifics of the D-Wave circuitry via a Chimera graph, which is essentially a wiring diagram for the quantum computer. This is done by an intermediate mapping to a Quantum Ising Spin Model (QuIS), which the D-Wave machine was designed to solve in the generic AQC model. In this way, programming the D-Wave machine is akin to programming a classical computer in assembly or machine language, as was done decades ago, before the development of higher-level compilers and programming languages. That is why the programming process is somewhat convoluted to the uninitiated. To carry out the Chimera mapping for nontrivial problems, D-Wave provides the user access to a front-end server called “black box” that searches for an optimal Chimera mapping.

3.)  Run the problem on the D-Wave machine. For the D-Wave 2, purchased by Google and located at the NASA Ames Research Center, Google has developed a proxy server that interfaces with the actual D-Wave machine that will allow the user to submit batch jobs to the quantum machine.

So we can see that there is a lot of overhead between the physical problem to be solved and the final run on the machine. If that overhead is too large, any quantum advantage of the D-Wave machine may very well be swamped.

In particular, my colleagues ask, “What’s the deal with these announcements that come out every few weeks or so? Some claim to see a speedup and some claim they do not.” We are now prepared to answer this. Experiments on the D-Wave machine that test algorithms that are not of the QUBO form show no speedup. Some problems that have an efficient mapping to QUBO seem, at least in a number of publications, to show some speedup. Whether you see a speedup depends on what problem you choose and how efficiently you can map your problem to a QUBO problem.

Let me be frank and say that it is still possible the D-Wave machines will show no speedup of a quantum nature; the jury is out. If they are shown to exhibit a quantum-enabled speedup, it will be only on a specific, narrow class of important problems that have an efficient QUBO implementation. This means the current D-Wave machines are clearly not a universal quantum computer, but they are potentially a very narrow-scope, special-purpose, possibly quantum, QUBO-problem-solving engine.

I should say that I’m in favor of a top-down approach like this competing with, say, the ion trappers, who add one very high-fidelity qubit to their trap every year or two. I think the competition and the associated discussions are good for the field. Let’s avoid calling each other charlatans and instead let’s all sit down and try to figure out what is going on and see how we can reach our common goal — a universal quantum computer.