Wednesday, July 31, 2013

Spontaneous parametric down-conversion photon sources are scalable in the asymptotic limit for boson-sampling

http://arxiv.org/abs/1307.8238

Boson-sampling has emerged as a promising avenue towards post-classical optical quantum computation, and numerous elementary demonstrations have recently been performed. Spontaneous parametric down-conversion is the mainstay for single-photon state preparation, the technique employed in most optical quantum information processing implementations to date. Here we present a simple architecture for boson-sampling based on multiplexed parametric down-conversion and demonstrate that the architecture is limited only by the post-selected detection efficiency. That is, given that detection efficiencies are sufficiently high to enable post-selection, photon-number errors in the down-converters are sufficiently low as to guarantee correct boson-sampling most of the time. Thus, we show that parametric down-conversion sources will not present a bottleneck for future boson-sampling implementations. Rather, photodetection efficiency is the limiting factor, and thus future implementations may continue to employ down-conversion sources.

For all of my colleagues who gathered from my previous blog post that I was a boson-sampling skeptic....

Sunday, July 28, 2013

Sampling — Schmämpling!

In January of 2011, I went on a six-month sabbatical — my first sabbatical ever — and, as is often the case on such ventures (in order to have such sabbatical requests approved by parsimonious deans), I declared to the dean that I would write a book.

Luna Han, my editor at Taylor & Francis, had been hounding me for some years to write a textbook. Having moved to academia in 2004 (after 10 years of working for the federal government), I was not in a good position to write a textbook, as I did not have years and years of class notes that could be 'easily' up-converted into one. I did, however, have lots and lots of notes and anecdotes spanning the 15 years I was involved in the US Government program to build a quantum computer.

Hence it was with some trepidation that I called up Ms. Han and declared, "Well I have good news and bad news!" Ever the stoic, Luna replied, "Whenever anybody says that it is only ever bad news." I continued on, "Well the good news is I'm done with chapter one!" Luna's temper improved slightly, "Sounds good so far...." Then I went in for the kill, "It's a popular book and not a textbook." This did not go over well, as popular books have much lower profit margins and so forth, but she told me to email her chapter one anyway and she would take a look. A few days later I got her response, "I love what I'm reading ... I certainly hope you'll consider sending me the full [book] proposal."

The proposal was reviewed (and at times reviled) by my friends and colleagues, but the reviews were (for the most part) so positive that I went under contract with Taylor & Francis and began typing four hours a day. I shunned all refereeing, committee meetings, conference travel, and proposal reviews, and did nothing much else than work on the book for two years. I fell so far off the grid that some of my colleagues were asking around to see if I had met some untimely end. I submitted the book on schedule in September of 2012, then worked with the typesetters for the next few months and spent my Xmas break in 2012 proofreading the 450-page proofs, and then Schrödinger's Killer App: Race to Build the World's First Quantum Computer was off to press.

I then emerged in January of 2013 like Rip Van Winkle, sleeping for two years after an all-night bender with some magical dwarfs; rubbing my eyes and blinking. Then I strolled down into my village only to discover that everybody was talking about the boson-sampling problem. My reaction was to inquire, "What the heck is the boson-sampling problem!?" I was entreated to read a 94-page preprint posted in 2010 by Scott Aaronson and Alex Arkhipov on the arXiv, but to a great extent I found this paper, written in the language of quantum computational complexity theory, to be almost completely incomprehensible. However I understood enough to realize that these computer scientists were claiming that us quantum physicists had missed something very important about the nature of linear optical interferometers with Fock-state (number-state) inputs. Well, I have been working on the theory of linear optical interferometers for 25 years, and this clearly now was a turf war. What the Fock did I miss? It turns out that what I had missed was precisely the Fock. (If you don't want to read that 94-page paper either, then try this two-page introduction to boson sampling theory and experiment, written by James Franson.)

Rather than take a crash course in quantum complexity theory and then go figure out that 94-page paper, I decided to roll up my sleeves and dig into these interferometers myself in my own way and figure out just what the heck was going on — avoiding quantum computational complexity theory like the plague — and using only elementary tools from quantum optics theory. Until the rise of the boson sampling problem, I would attest that nearly every quantum optician on Earth, including myself, did not think there was much interesting going on in passive, linear optical interferometers, no matter what quantum state of light you put into them — Fock or not. (For a fun and flippant overview of why we all thought this way about these interferometers, see my recent lecture given at the 2013 Rochester Quantum Information and Measurement Conference; a conference where Robert Boyd came up to me and declared, "Jon! It's so good to see you! Nobody has seen you in a couple of years and people were asking if you were all right.")

It turned out that two of my undergraduates, Bryan Gard and Robert Cross, also in 2010, were working on a problem that was closely related to the boson-sampling problem (but none of us knew it at the time); they were analytically and numerically studying quantum random walks with multi-photon Fock states in a linear optical interferometer. I gave this to them as an undergraduate 'starter' problem, motivated by experiments in Jeremy O'Brien's group with two-photon Fock states. Since I did not expect anything too interesting to happen when you went from quantum random walks with one or two photons to random walks with multiple photons, I expected the undergrads to come up with a closed form solution predicting the photon output from an arbitrary photon input.

Then I went on sabbatical, and while I was buried in the writing of my book for two years I did not pay too much attention to them when they complained that they could not come up with even a numerical simulation for more than a few photons in an interferometer with only a few modes. They particularly complained that, "The problem blows up exponentially fast!" "These undergrads," I thought, "they see exponential blow ups whenever the math gets a little hairy." Of course, as usual, my undergrads were right and I was wrong. The thing does blow up exponentially fast.

In January of 2013 we were trying to get this random walk paper published, and after endless rounds of refereeing, we finally did and it recently appeared in JOSA B as, "Quantum Random Walks with Multiple Photons." But in the final round of refereeing, in response to a toss-off comment we made in the paper about the apparent exponential growth of the problem's complexity, a referee suggested, "Were the authors to make this comparison [to the boson sampling problem], they would be in a position to comment on the computational hardness of such systems, which would be insightful." My thought, and that of Bryan and Robert, was, "What the heck is the boson sampling problem!?"

The referee cited an experimental paper out of Andrew White's group, and then I suddenly remembered that Andrew had given a lecture on this at a NASA conference in January of 2012. However I was then so engrossed in the writing of my book that the only take-home message I got from his talk was that Andrew was experimenting with permanents, and I joked that perhaps he was experimenting with new hairdos. Suddenly things started to click and I finally became fully aware of that daunting 94-pager by Aaronson and Arkhipov.

So Bryan and Robert and I rolled up our sleeves even farther and tackled this problem again from the point of view of counting all the resources and comparing coherent-state and squeezed-state inputs to Fock-state inputs, and sure enough, although not much interesting happens with coherent and squeezed states, everything blows up with the Fock. When I showed co-author Hwang Lee our proof that the Hilbert space dimension blows up as a function of the number of modes and the number of photons, he retorted, "This is shocking!" But what is shocking to a quantum optical theorist is not necessarily shocking to a quantum computational theorist.
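The blow-up itself is just counting: the number of ways to distribute n indistinguishable photons among m modes is the binomial coefficient C(n + m - 1, n), which grows exponentially when n and m grow together. A quick sketch (the function name is my own, for illustration):

```python
from math import comb

def fock_hilbert_dim(n_photons: int, n_modes: int) -> int:
    """Dimension of the Fock space of n indistinguishable photons in
    m modes: the number of ways to distribute the photons among the
    modes, C(n + m - 1, n)."""
    return comb(n_photons + n_modes - 1, n_photons)

# With m = 2n modes, the dimension grows exponentially in the
# number of photons n:
for n in (2, 4, 8, 16):
    print(n, fock_hilbert_dim(n, 2 * n))
```

For two photons in four modes this gives C(5, 2) = 10 states; by sixteen photons in thirty-two modes the count is already in the trillions.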

We fired off the paper to Physical Review Letters (PRL) — and the referees immediately pounced. One said that our result was not new and our proof was not nearly as good as invoking the "collapse of the polynomial hierarchy" as did Aaronson and Arkhipov. At that point I had no idea what the polynomial hierarchy even was — some computer science-y thing — and I certainly did not much care if it collapsed or not, and so we retorted, "Physical Review is a physics journal and not a computer science journal." Comments from Aaronson and Peter Rohde were much more helpful. They both pointed out that, due to the Gottesman-Knill theorem, it is now well known, in spite of Feynman's arguments to the contrary, that sometimes systems with exponentially large Hilbert spaces are still efficiently simulatable — who knew!? When Aaronson and Rohde both pointed out that fermionic linear interferometers with Fock-state inputs also have an exponentially large Hilbert space — I didn't believe it! But in spite of that blow up the fermionic devices are still efficiently simulatable due to special properties of matrix determinants that matrix permanents don't have. Sometimes, boys and girls, there are polynomial shortcuts through your exponentially large Hilbert space....
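That determinant shortcut can be made concrete. The determinant and the permanent are the same sum over permutations of matrix entries, with and without alternating signs, yet the determinant yields to Gaussian elimination in O(n^3) steps while the best known exact algorithm for the permanent, Ryser's formula, still takes O(2^n * n^2) time. A sketch of both, using nothing beyond the standard library (function names are mine):

```python
from itertools import combinations

def permanent(a):
    """Permanent via Ryser's formula: O(2^n * n^2) time, and believed
    not to get much better, since the permanent is #P-hard."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def determinant(a):
    """Determinant via Gaussian elimination with partial pivoting:
    polynomial time, O(n^3)."""
    a = [list(map(float, row)) for row in a]
    n = len(a)
    det = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(a[k][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return det
```

For [[1, 2], [3, 4]] the determinant is 1*4 - 2*3 = -2 while the permanent is 1*4 + 2*3 = 10; only the sign pattern differs, but that sign pattern is exactly what lets the determinant collapse to polynomial time.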

Rolling up our sleeves again (this time to our eyeballs), first I had to take a crash course on fermionic interferometers (think neutrons) and prove that the Hilbert space blows up there too. (It does.) Then we had to augment our paper and bring the argument back around to matrix permanents (nothing to do with Andrew White's hairdo) and matrix determinants. (We did.) And now our revised paper, "Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers with Arbitrary Fock-State Inputs-An Elementary Argument," sits in the limbo that awaits quantum optics papers that are not too horribly rejected from Physical Review Letters, whereupon you try to transfer them to Physical Review A (PRA) instead. In Catholic school I was taught that limbo was just like heaven — except you never got to see the face of God. Similarly, PRA-limbo is just like PRL-heaven — except you never get to see the face of Basbas....

But now finally to the point of this post. This post has a point? Finally! I think it was Einstein who once said that doing physics research is like staggering around drunk in a dark, unfamiliar, furniture-filled room, stumbling over chairs and ottomans, while groping for the light switch. In this vein, having me explain to you how we do research in our group is a bit like having your Wurst vendor explain to you how he makes his sausages. However, if you want to pursue a career in sausage making, it might be best to know what you are getting into up front. After all, you want to make sure you are always getting the best of the Wurst! However, when it comes to the experiments done so far on boson sampling, it is not quite yet the best....

Now, after pondering the theory of this boson sampling business for six months, and now knowing enough to be dangerous, I have finally decided to dive fully in and read the flurry of nearly simultaneously published recent papers claiming to implement boson sampling with three or four photons [Broome2013, Spring2012, Crespi2012, Tillmann2013, Spagnolo2013]. It is clear to me now that, in order to implement boson sampling, the input photons must be in Fock states; that is the whole point. Coherent states and squeezed states and other so-called Gaussian states simply will not do, as it is well known (due to the work of Sanders and Braunstein and others) that linear optical interferometers with Gaussian-state inputs are always efficiently simulatable.

The whole point of boson sampling is that the input needs to be in the form of pure photon Fock states in order to perform the sampling. That is, thinking of the interferometer as the information processor, the processing must take place on Fock states and not on Gaussian states. If you input only Gaussian states, as in the four-photon experiment of Crespi et al., you are clearly not doing boson sampling at all. If you mix one-photon Fock states with a two-mode Gaussian state (as all the three-photon experiments do), it is not clear what you are doing, but you are certainly not implementing boson sampling as advertised. Yet all these experiments do this. I'll focus on the three-photon experiments, as they are the most tricky. (The single four-photon experiment can be discarded out of hand, as there is not a Fock state in sight.)

To do this right one would need three distinct, heralded single-photon Fock states. Instead, all the three-photon experiments herald only one single-photon Fock state and then mix it with the output of a spontaneous parametric downconverter (SPDC) — an output which is a Gaussian state! In particular, it is not the product of two distinct single-photon Fock states, which is what boson sampling requires.

The output of the SPDC, in the Fock basis, is a superposition of mostly the twin vacuum state, some twin single-photon Fock states, fewer twin two-photon Fock states, and so on — all summing up to a Gaussian state. The experimentalists' hope is that by post-selecting only on events where three photons (in total) are counted, this experiment is equivalent to having three distinct single-photon Fock states to begin with.
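That Fock-basis expansion is worth writing out. A two-mode squeezed vacuum is |psi> = sqrt(1 - lam^2) * sum_n lam^n |n, n>, where lam is set by the squeezing strength, so the probability of n photon pairs falls off geometrically. A minimal sketch (the function name and the value of lam are mine, chosen purely for illustration):

```python
def tmsv_photon_pair_probs(lam: float, n_max: int):
    """Photon-pair statistics of a two-mode squeezed vacuum,
    |psi> = sqrt(1 - lam^2) * sum_n lam^n |n, n>,
    so P(n pairs) = (1 - lam^2) * lam^(2n): a geometric distribution
    dominated by the twin vacuum |0, 0>."""
    lam2 = lam ** 2
    return [(1 - lam2) * lam2 ** n for n in range(n_max + 1)]

# lam = 0.3 is an arbitrary illustrative value, not a measured one:
probs = tmsv_photon_pair_probs(0.3, 5)
print(probs)  # mostly vacuum, a little |1,1>, even less |2,2>, ...
```

The state has indeterminate photon number at every nonzero squeezing; no amount of post-selection changes what actually went into the interferometer.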

It is not. 

What matters for boson sampling is what you put in to the interferometer and not so much what you do to what comes out. This post-selection process fools you into thinking that only three photons went in. That is not true. A single photon went in mixed with a Gaussian, two-mode squeezed vacuum (of indeterminate photon number) — a two-mode squeezed vacuum state is definitely not two photons in a pair of Fock states.

To kick in the large Hilbert space and the permanents of matrices and the things that make boson sampling interesting, the processor must process only Fock states. You can't just pretend at the end of the experiment that what you sent in were Fock states. Whatever these experiments are doing, it is not boson sampling as advertised. A three-photon boson sampling experiment will require that all three of the input photons are heralded — not just one (or, in the case of the four-photon experiment, not just none).

Now before my old friend Andrew White (in his curly new hairdo) comes flying down from Brisbane to pummel me here in Sydney, where I am visiting for a few weeks, let me say that all of these experiments were impressive tours de force carried out by some of the best quantum mechanics in the galaxy. Each and every experiment required hard work and lots of hours in the lab and most of these experiments are on the forefront of integrated quantum photonics, which will have a plethora of applications to quantum optical computing, metrology, and imaging. And someday soon one of these extraordinary experimental groups will demonstrate boson sampling with three photons — but I venture that this day has not yet come.

And yes, yes, yes, I know why all the experimentalists do it this way — I may not know much about quantum complexity theory but I do know a thing or two about quantum optics. The point is that with SPDC sources the probability of heralding exactly N photons scales exponentially poorly with N. That is, if they tried to do this with three really heralded single photons, then the data collection would have taken months, and if they tried to do this with four really heralded single photons, then it would have taken years.
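To see the scaling, suppose each source heralds a single photon with probability p per pump pulse; N independent sources then all herald simultaneously with probability p^N. A back-of-the-envelope sketch with purely illustrative numbers (none taken from the cited experiments):

```python
def expected_runtime_hours(p_herald: float, n_sources: int,
                           pulse_rate_hz: float, events_needed: int) -> float:
    """Time to collect a fixed number of N-fold coincidence events
    when each of N independent SPDC sources heralds a single photon
    with probability p per pump pulse: all N fire together with
    probability p**N, so the runtime scales as p**(-N)."""
    p_all = p_herald ** n_sources
    return events_needed / (p_all * pulse_rate_hz) / 3600.0

# Illustrative: p = 0.001 per pulse, a 76 MHz pulsed pump,
# and 1000 coincidence events wanted.
for n in (3, 4):
    print(n, expected_runtime_hours(0.001, n, 76e6, 1000))
```

Each extra heralded photon multiplies the runtime by another factor of 1/p, which is how hours become months become years.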

Nonetheless, while it is true that in previous experiments in other areas of quantum optics it does not really matter too much whether you have real single-photon inputs, or post-select the outputs and then pretend that you had real single-photon inputs, for boson sampling this distinction is critical. The linear optical interferometer must process only Fock input states to be in the boson-sampling regime. If, instead, it processes Gaussian states or admixtures of Gaussian states with Fock states, well then it is not carrying out boson sampling; it is doing something else — no matter how much you pretend to the contrary.

To summarize this entire diatribe in three words: What? The Fock!

Tuesday, July 23, 2013

NP or NOT NP — What is the Question?

I'm reading Scott Aaronson's book, Quantum Computing Since Democritus, which according to Amazon.com is frequently bought together with Schrödinger's Killer App (the second-funniest book on quantum computing to appear this year).

In spite of Stephen Wolfram's scathing review, and the apparently exponential growth of mind-bending computer-science acronyms, I am finding it quite enjoyable and, as a simple-minded quantum physicist, I am also finding that I'm learning quite a lot and, against my own will, thinking a lot about computer complexity theory.

A particular comment by Aaronson, connecting the P vs NP business to something in physics, struck a nerve. I first heard about this business in graduate school in the 1980s when the mathematics students would jokingly scrawl "P=NP!" on the chalkboard only to find the next day somebody added a slash so it read "P≠NP!" and they all had a good laugh and the physics students all just shook their heads.

I had to wrestle with this again in 1994 after Shor's algorithm hit the fan and, as a reviewer of quantum computing proposals for the Department of Defense, I had to have at least some idea of what all these proposals were talking about. However, I felt it was all really a bunch of non-physical mumbo jumbo about what the computer scientists thought were hard or not-hard problems, even though they could not prove much of anything.

I really did not care whether it turned out that P=NP or not, and most physicists I talk to don't much care either. If it someday turned out that P=NP, it would just mean that the computer scientists had not been trying hard enough to find a proof, and in any case it had little to do with physics.

However I'm ready to not quite eat but perhaps just reheat my 'umble pie. The bit in Scott's book that struck a nerve was his discussion of the feeling that if P=NP, even on a quantum computer, then we would have 'God-like' powers. Again that sounds just like hype but here's the physics.

From the 1998 paper by Daniel Abrams and Seth Lloyd we know that if quantum mechanics were nonlinear, with even a small nonlinearity of the Weinberg type, then we could build super-duper quantum computers to solve not only NP-complete problems but also the supposedly even harder class of #P problems, all in polynomial time on the nonlinear quantum mechanics machine.

However the Weinberg model for nonlinear quantum mechanics would also permit superluminal communication and the violation of the second law of thermodynamics. The superluminal communication is the worst as this would lead to a violation of Einstein causality, something that never happens in ordinary linear quantum theory. It is well known that with superluminal communication, allowing you to send messages backwards in time, you can get a computational speedup by just letting your classical computer run for an exponential amount of time and then have it transmit the answer backwards in time to you so that the overall effect is a polynomial-time computation.

So if Nature gave us Weinberg nonlinear quantum mechanics, instead of ordinary linear quantum mechanics, we could use that to compute NP-complete and #P problems in polynomial time, violate causality, and for extra credit violate the second law of thermodynamics. Physicists should be very concerned if we could do the latter two things and hence, I posit along with Aaronson, that physicists should be very concerned if we could do the first two things as well.

Beware! There is no proof that the ability to solve NP-complete or #P problems implies the violation of causality or the second law. But it is suggestive that there is one resource, nonlinear quantum mechanics of the Weinberg type, that allows you to do all of these — that is, it gives us 'God-like' powers.

I suggest then that physicists who shudder at the thought of a theory that implies a violation of Einstein causality and the second law should also shudder at the thought that someday we will find a polynomial route to solving NP-complete problems using the laws of physics as we currently understand them.

Tuesday, July 16, 2013

The Riemann Conjecture

Mein Lieber Herr Riemann,
All night I will dream on,
'Bout how you deserve a lecture.
But of course I allude
To your famous and shrewd
Outstanding and unsolved conjecture.

Oh, I owe you my life,
My 3 kids and my wife,
For the proof of the Prime Number Theorem.
Your zeta function trick
Made the proof really slick,
And those primes — no more do I fear 'em.

But I just stop to think,
How I've taken (hic) to drink,
And evolved this hysterical laugh —
Because still I don't know
If ζ's roots all go
On the line Re z = 1/2!

So I don't sleep at night,
And I'm losing my sight
In search of this darn thing's solution.
As my mind starts to go
My calculations grow
In a flood of "complex" confusion.

I bought a computer;
Not any astuter,
It ran for nearly 10 years — no jive!
But still it doesn't know
If zeta's roots all go
On that line Re z = .5

Now I sit in my room —
I feel doomed in the gloom —
And entombed by mountains of paper.
Still, I pray that some night
My "ol' lightbulb" will light
With the clue that could wrap-up this caper!


— "The Riemann Conjecture," Jonathan P. Dowling, Mathematics Magazine 62 (June 1989) 197; Reprinted in Gamma: Exploring Euler's Constant, by Julian Havil (Princeton University Press, 2003); see also the German-Language Edition, "Die Riemannsche Vermutung," in GAMMA - Euler's Konstante, PrimzahlstrŠnde und die Riemannsche Vermutung (Springer Heidelberg 2007) .

Wednesday, July 10, 2013

Postdoc, Ergo Proper Doc!

Postdoc in Quantum Information and/or applications to complexity theory, foundations, quantum gravity, quantum many-body physics, and thermodynamics

Limit of tenure: up to 3 years
Closing date: Sept 7, or until the positions are filled

The quantum information group at University College London (UCL) invites applications for two post-doctoral research positions in the groups of Fernando Brandao and Jonathan Oppenheim. We are especially interested in applications of quantum information theory to research areas such as complexity theory, foundations, quantum gravity, quantum many-body physics, and thermodynamics. Other faculty in Quantum Information Theory include Dan Browne, Simone Severini, Alessio Serafini and Sougato Bose.

The start date of the appointment is flexible, and both junior and senior research associate positions are available. The salary is £32,375-£39,132 p.a.

For any further queries on the scientific details and requirements of the post, please contact Jonathan Oppenheim (j.oppenheim[at]ucl.ac.uk) and/or Fernando Brandao (fgslbrandao[at]gmail.com).

Applications should include a full CV, publications list, a summary of research interests and previous contributions (two pages), and the names of three referees. Applications should be emailed to qi-jobs@ucl.ac.uk, and the candidate should arrange for the three reference letters to be sent to qi-references@ucl.ac.uk by the closing date with the subject line being the applicant's name.

UCL values diversity and is committed to equality of opportunity.

Thursday, July 4, 2013

Estimation of Phase and Diffusion: Combining Quantum Statistics and Classical Noise

Coherent ensembles of $N$ qubits present an advantage in quantum phase estimation over separable mixtures, but coherence decay due to classical phase diffusion reduces overall precision. In some contexts, the strength of diffusion may be the parameter of interest. We examine estimation of both phase and diffusion in large spin systems using a novel mathematical formulation. For the first time, we show a closed-form expression for the quantum Fisher information for estimation of a unitary parameter in a noisy environment. The optimal probe state has a non-Gaussian profile and differs also from the canonical phase state; it saturates a new tight precision bound. For noise below a critical threshold, entanglement always leads to enhanced precision, but the shot-noise limit is beaten only by a constant factor, independent of $N$. We provide upper and lower bounds to this factor, valid in low and high noise regimes. Unlike other noise types, it is shown for $N \gg 1$ that phase and diffusion can be measured simultaneously and optimally.

Wednesday, July 3, 2013

✭✭✭✭✭ Historical, Scientific, Hysterical View of Quantum Information

Second Amazon five-star review of my book, Schrödinger's Killer App, and this one is not written by my sister or any other close relative!

"Quantum computing is one of the most popular and well-funded ideas in modern physics research. This book explains, in a lively way, the ideas behind this field, its history, and colorful characters who are playing a role in making QC a reality. It starts with an overview of quantum mechanics so you can appreciate why people are going through so much trouble to make a quantum computer. Before you know it you're learning (and understanding) advanced quantum information, and how science is done in the government and academic worlds.

If you are a high school student thinking of becoming a scientist, a teacher who wants to liven up the standard curriculum, a science lover, a fan of sci-fi, considering investing in a quantum computer "company," or just love good writing, you'll want to read this book. However, if you like dry, long-winded, opaque academic writing, you're going to have to find another author.

Physicists in the field should attend a lecture by Prof. Dowling before reading this book for full effect. Knowing his personality amplifies the hilarity of his stories."

— Ellie Arroway, 29 June 2013