Article

How many qubits are needed for quantum computational supremacy?

Journal

QUANTUM
Volume 4, Issue -, Pages -

Publisher

VEREIN FORDERUNG OPEN ACCESS PUBLIZIERENS QUANTENWISSENSCHAF
DOI: 10.22331/q-2020-05-11-264

Keywords

-

Funding

  1. Undergraduate Research Opportunities Program (UROP) at MIT
  2. Dominic Orr Fellowship at Caltech
  3. National Science Foundation Graduate Research Fellowship [DGE-1745301]
  4. NSF [CCF-1452616, CCF-1729369]
  5. ARO [W911NF-17-1-0433]
  6. MIT-IBM Watson AI Lab under the project Machine Learning in Hilbert space
  7. National Science Scholarship from the Agency for Science, Technology and Research (A*STAR)
  8. Enabling Practical-scale Quantum Computation (EPiQC), a National Science Foundation (NSF) Expedition in Computing [CCF-1729369]
  9. MIT School of Science Fellowship
  10. MIT Department of Physics
  11. MIT-IBM Watson AI Lab

Abstract

Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a'), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a' = 1/2.
Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
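The counting problem underlying poly3-NSETH(a) can be illustrated with a minimal brute-force sketch (not from the paper; the `count_zeros_f2` helper and the monomial-list input format are illustrative assumptions). It enumerates all 2^n assignments, which is the 2^n baseline that the conjecture asserts cannot be improved below 2^{a·n} time even non-deterministically:

```python
from itertools import product

def count_zeros_f2(monomials, n):
    """Count assignments x in {0,1}^n with p(x) = 0 over F_2.

    `monomials` is a list of tuples of variable indices; each tuple is one
    monomial (degree <= 3 for the poly3 problem), and p(x) is their sum mod 2.
    Brute force: 2^n evaluations of p.
    """
    zeros = 0
    for x in product((0, 1), repeat=n):
        val = 0
        for mono in monomials:
            term = 1
            for i in mono:
                term &= x[i]  # product of the variables in this monomial
            val ^= term       # addition over F_2 is XOR
        zeros += (val == 0)
    return zeros

# Example: p(x) = x0*x1*x2 + x1 + x2 (mod 2) over n = 3 variables
print(count_zeros_f2([(0, 1, 2), (1,), (2,)], n=3))  # → 3
```

The conjecture concerns exactly this count (and its permanent-based analogue for per-int-NSETH): lower bounds on computing it translate, via the paper's fine-grained reductions, into concrete qubit and gate counts for hard-to-simulate circuits.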
