— Round 1: Exploration —
## Quantum Feature Maps: The Learnability-Expressibility Paradox
Three new results from March 2026 cut directly across the institutional memory's central finding — that quantum ML advantage occupies a "shrinking feasible region" — and reveal that the region's shape is being actively renegotiated through geometry-aware feature map design, not circuit depth scaling.
**The expressibility trap is now empirically confirmed for kernels.** The comparative feature map analysis published in *Scientific Reports* (2026, https://www.nature.com/articles/s41598-026-39392-9) establishes a concrete inverse relationship: more complex quantum feature maps fragment data more finely in Hilbert space, making task-relevant similarities *harder* to detect with finite training sets. This is the kernel analogue of the barren plateau: call it a **kernel concentration trap**. Richer feature maps don't produce richer kernels; they produce noise-dominated Gram matrices that can't align to targets. The rotational factor emerges as the critical hyperparameter: small adjustments control the effective dimensionality of the embedding without any change in circuit depth.
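The diagnostic implied here is kernel-target alignment on the Gram matrix, which collapses as the kernel concentrates. A minimal numpy sketch, with a synthetic Gram matrix standing in for a measured quantum kernel; the `eps` knob is an illustrative stand-in for the paper's rotational factor, not its construction:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Centered kernel-target alignment between Gram matrix K and labels y."""
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n          # centering projector
    Kc, Yc = H @ K @ H, H @ np.outer(y, y) @ H   # center both kernels
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=64)
# As off-diagonal signal (eps) shrinks toward concentration, alignment decays.
for eps in (0.5, 0.1, 0.01):
    K = np.eye(64) + eps * np.outer(y, y) + 0.01 * rng.standard_normal((64, 64))
    K = (K + K.T) / 2                             # keep the Gram matrix symmetric
    print(f"eps={eps}: alignment={kernel_target_alignment(K, y):.4f}")
```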
**The geometry paper from this week (arxiv:2603.03071) reframes the entire design problem.** Ngairangbam and Spannowsky introduce "Almost Complete Local Selectivity" (aCLS) as the correct design criterion for quantum feature maps — replacing the field's long-standing focus on state reachability and circuit expressibility. Their finding is structurally important: data-independent trainable unitaries are "complete but non-selective" (they can reach any state, but can't selectively deform data manifolds), while fixed encodings are "selective but non-trainable" (they deform the manifold in fixed ways regardless of the learning task). Real adaptive control requires *joint dependence* on data and trainable weights simultaneously — exactly the data re-uploading architecture. Models satisfying aCLS outperform non-tunable schemes while using 25% of the gate count. This directly addresses the gate-overhead pressure identified in previous swarm runs.
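For concreteness, the joint data-and-weight dependence the criterion demands is exactly what a data re-uploading circuit provides. A minimal PennyLane sketch of the generic re-uploading template; the layer count, wiring, and angle parameterization are illustrative assumptions, not Ngairangbam and Spannowsky's construction:

```python
import pennylane as qml
from pennylane import numpy as pnp

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def reuploading_model(x, weights):
    for layer in range(n_layers):
        # Joint data-and-weight dependence: every rotation angle mixes the
        # input x with trainable parameters, and x is re-encoded in each
        # layer rather than loaded once up front.
        for w in range(n_qubits):
            qml.RY(weights[layer, w, 0] * x[w] + weights[layer, w, 1], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    return qml.expval(qml.PauliZ(0))

weights = pnp.random.uniform(0, pnp.pi, size=(n_layers, n_qubits, 2))
print(reuploading_model(pnp.array([0.1, 0.4, 0.7, 0.2]), weights))
```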
**Q-FLAIR (arxiv:2510.03389) provides the most actionable near-term result in the kernel space.** By decoupling feature dimension from quantum resource overhead through classical analytic reconstructions, Q-FLAIR achieved >90% accuracy on full-resolution 784-feature MNIST (digit 3 vs. 5) trained on real IBM hardware in roughly four hours. This is the QRAM workaround the institutional memory identified as missing: instead of loading all features quantumly, Q-FLAIR selects and optimizes which features to encode *classically*, then constructs the circuit incrementally. The result sidesteps the quantum data loading problem without solving it.
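The workflow described reduces to a greedy selection loop around a scoring oracle. A schematic Python sketch in which `score_fn` stands in for the hardware (or surrogate) evaluation of a candidate encoding; this is the generic incremental-selection pattern, not Q-FLAIR's analytic reconstruction step:

```python
import numpy as np

def greedy_feature_map_growth(X, y, score_fn, max_features=8):
    """Schematic Q-FLAIR-style loop: trial-add each remaining feature to
    the encoding, keep the one that most improves the score, and stop when
    no candidate helps."""
    selected, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {f: score_fn(X[:, selected + [f]], y) for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best:
            break                         # no remaining feature improves the map
        best = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best

# Toy stand-in score: least-squares fit quality on the selected columns.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, 3] - 2 * X[:, 17] + 0.1 * rng.standard_normal(200)
toy_score = lambda Xs, t: 1 - np.mean(
    (t - Xs @ np.linalg.lstsq(Xs, t, rcond=None)[0]) ** 2) / np.var(t)
print(greedy_feature_map_growth(X, y, toy_score, max_features=4)[0])  # finds 3 and 17
```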
**NQSVDD (arxiv:2603.02700) extends this to one-class classification with a joint optimization insight.** The neural-quantum hybrid jointly trains classical feature extraction, quantum encoding, and variational circuit layers so that the decision hypersphere in quantum measurement space is minimized for normal data. This is quantum metric learning in operational form — the quantum layer isn't fixed; it co-adapts with the classical encoder. Performance is competitive or superior to classical Deep SVDD under realistic noise, making it one of the few quantum ML demonstrations that holds under actual hardware noise rather than idealized simulation.
**The unifying insight that advances the swarm:** The field is splitting into two incompatible design philosophies. The expressibility camp assumes richer Hilbert space embeddings will eventually surface separable structure. The learnability camp — represented by aCLS, Q-FLAIR, and NQSVDD — accepts that most Hilbert space dimensions are noise and concentrates on learning *which* dimensions carry signal. The second approach is compatible with near-term NISQ hardware and produces measurable classification results today. The first approach requires QRAM and error-corrected circuits — both of which remain commercially absent. For any classification task actionable this week, the geometry-constrained, analytically-reconstructed, jointly-optimized feature map is the only viable architecture.
Sources:
- [Comparative performance analysis of quantum feature maps | Scientific Reports](https://www.nature.com/articles/s41598-026-39392-9)
- [From Reachability to Learnability: Geometric Design Principles for QNNs | arxiv:2603.03071](https://arxiv.org/abs/2603.03071)
- [Quantum feature-map learning with reduced resource overhead (Q-FLAIR) | arxiv:2510.03389](https://arxiv.org/abs/2510.03389)
- [Neural quantum support vector data description for one-class classification | arxiv:2603.02700](https://arxiv.org/abs/2603.02700)
- [Quantum Classifiers with Trainable Kernel | arxiv:2505.04234](https://arxiv.org/html/2505.04234v1)
## Government and Defense Quantum Spending: March 2026 Inflection Point
Three simultaneous policy moves in the past 90 days have reshaped the government quantum landscape in ways that directly affect enterprise and defense procurement timelines — and reveal a structural tension between political urgency and engineering reality.
**The White House EO: A New Federal Architecture**
A draft executive order titled "Ushering In The Next Frontier Of Quantum Innovation" is circulating, directing OSTP, DOE, DoD, and Commerce to produce an updated national quantum strategy within 180 days — replacing guidance from 2018. The most operationally significant directive: a federally-backed quantum computer for scientific research (QCSAD) to be housed at a DOE facility, with explicit private-sector partnership requirements. DOE's existing $625 million commitment, announced in late 2025 to renew all five National Quantum Information Science Research Centers for five more years, now maps directly to this delivery mandate. NSF is directed to establish "National QIST Education and Teaching Institutes," with the Department of Labor tracking workforce pipeline metrics. The conspicuous omission: no post-quantum cryptography provisions, and no DHS or CISA involvement — a gap that creates organizational risk given NIST's finalized PQC standards already mandate agency migration timelines. See: [The Quantum Insider, Feb 2026](https://thequantuminsider.com/2026/02/04/white-house-drafting-executive-order-to-reshape-u-s-quantum-policy/).
**DARPA's QBI Bets: Photonics vs. Topology**
DARPA's Quantum Benchmarking Initiative now has a $250 million budget augmentation and has advanced 11 companies to Stage B, with a 2033 utility-scale target (computational value exceeding cost). More revealing is the US2QC selection: **Microsoft** (topological superconducting qubits) and **PsiQuantum** (photonic lattice qubits) — specifically described as "underexplored" approaches. This is significant given yesterday's swarm finding that Microsoft's Majorana 1 remains scientifically unverified by APS peer review. DARPA is explicitly not hedging toward near-term NISQ incumbents; it is betting on architectures where the physics remains an open question. Enterprise buyers watching this program for procurement signals should note the 2033 timeline, not 2026. See: [DARPA US2QC announcement](https://www.darpa.mil/news/2025/quantum-computing-approaches).
**China's 15th Five-Year Plan: Communication Over Computation**
Published March 5, 2026 — one day ago — China's 15th Five-Year Plan (2026–2030) explicitly names quantum technology alongside six other sectors as "new drivers of economic growth," with targets for scalable quantum computers and an integrated space-earth quantum communication network. A third quantum satellite is planned for 2026 launch. China's 12,000km terrestrial quantum communication network is already operational. The $138 billion government venture fund announced in March 2025 included quantum explicitly. Critically, China's plan runs through 2030 — three years before DARPA's 2033 utility-scale target. China is not competing on computation first; it is establishing quantum networking infrastructure that will be operational before any fault-tolerant quantum computer exists anywhere. See: [The Quantum Insider, March 5 2026](https://thequantuminsider.com/2026/03/05/chinas-new-five-year-plan-specifically-targets-quantum-leadership-and-ai-expansion/).
**EU: €400M Active, Quantum Act Pending**
The EU Quantum Flagship's current Horizon Europe phase carries €400M+ across 20+ active projects. The European Commission has announced a proposed Quantum Act for 2026, a formal legislative framework for R&D coordination, with new calls closing April 15, 2026. The EU is establishing Quantum Competence Clusters and a European Quantum Skills Academy. Total flagship commitment remains €1B over 10 years. See: [qt.eu](https://qt.eu/news/2025/2025-17-12_New_EU_Quantum_Flagship_calls_published).
**The Structural Tension**
The pattern across all four actors — U.S., China, EU, DARPA specifically — is that **government timelines are being driven by geopolitical urgency, not engineering readiness**. The White House EO skips PQC, DARPA bets on architecturally unproven topological qubits, and China prioritizes quantum communication deployments that can be operational now. The 2033 DARPA utility-scale deadline gives enterprise procurement teams a concrete falsifiability date: any vendor claiming fault-tolerant quantum advantage before then should be evaluated against DARPA's own standard, not vendor marketing.
## The Complexity Knife Edge: Barren Plateaus, DLA Dimension, and the Trainability-Simulability Duality
A structural result published in late 2025 and now echoing through March 2026 literature has sharpened the barren plateau problem from a training nuisance into a theorem with direct complexity-theoretic content. The result is stark: provably avoiding barren plateaus may be equivalent to operating in a classically simulable subspace. This advances the institutional memory's finding that the "feasible region may already be empty" by providing the precise algebraic mechanism governing the boundary.
**The DLA Dimension as Complexity Marker**
The Lie algebraic theory of barren plateaus (Nature Communications, 2024, [https://www.nature.com/articles/s41467-024-49909-3](https://www.nature.com/articles/s41467-024-49909-3)) gives an exact expression for gradient variance in deep parameterized circuits: it depends directly on the dimension of the circuit's dynamical Lie algebra (DLA). Circuits generating a polynomial-dimensional DLA escape barren plateaus. Circuits generating an exponential-dimensional DLA (dim(g) ~ 4^n, i.e., su(2^n), the Lie algebra of the full unitary group) concentrate gradients exponentially, producing flat loss landscapes. This is not a tuning problem; it is a theorem about which algebra your circuit's generators span.
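Checking which regime an ansatz falls into is now a one-call computation. A minimal sketch, assuming a recent PennyLane release with the `qml.lie_closure` API (cited in Round 2 below); the transverse-field Ising generator set is an illustrative stand-in, not one of the paper's circuit families:

```python
import pennylane as qml

n = 6
# Illustrative generator set (transverse-field Ising pattern); substitute
# your own ansatz generators to audit a real circuit family.
generators = [qml.PauliZ(i) @ qml.PauliZ(i + 1) for i in range(n - 1)]
generators += [qml.PauliX(i) for i in range(n)]

dla = qml.lie_closure(generators)
print(f"DLA dimension at n={n}: {len(dla)}")
# Repeat across several n and fit the growth: polynomial scaling is the
# trainable regime; ~4**n growth signals a barren plateau ansatz.
```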
**Quantum Chaos IS Barren Plateau**
This DLA framing makes the quantum chaos connection mathematically precise. Chaotic quantum circuits — those exhibiting level-spacing statistics consistent with random matrix theory, or forming approximate unitary t-designs — generate the full su(2^n) DLA almost by definition. A circuit that scrambles information efficiently enough to exhibit quantum chaos is a circuit that approximates a Haar-random unitary, which is precisely the condition under which gradient variance vanishes as 1/4^n. Trainability and quantum chaos are not merely in tension; they are incompatible at the algebraic level. The "Unified Probe of Quantum Chaos and Ergodicity from Hamiltonian Learning" paper from this week's seed (arXiv 2603.04486) reinforces this by showing that ergodic regimes exhibit maximal sensitivity to perturbation — the same sensitivity that makes Hamiltonian learning robust but makes variational optimization hopeless.
**QAOA-MaxCut: The Worst-Case Made General**
The Tencent Quantum Laboratory result (arXiv 2512.24577, [https://arxiv.org/abs/2512.24577](https://arxiv.org/abs/2512.24577)) delivers the most operationally damaging finding: QAOA-MaxCut has DLA dimension Θ(4^n) for almost all graphs. For weighted graphs with continuous weight distributions, this holds for every connected graph except paths and cycles. Loss variance is O(1/2^n). Of 3,500+ MaxCut instances from the MQLib benchmark library, at least 75% have DLA dimension at least 2^128. The critical implication is a **gap between classical computational complexity and VQA trainability**: MaxCut on most graphs is classically tractable (approximable in polynomial time), yet QAOA cannot train on it due to barren plateaus. The problem's classical easiness does not rescue the quantum optimizer.
**The Duality That Closes the Loop**
The most structurally significant result (PMC, [https://pmc.ncbi.nlm.nih.gov/articles/PMC12378457/](https://pmc.ncbi.nlm.nih.gov/articles/PMC12378457/)) closes the loop with devastating clarity: all currently known methods for provably avoiding barren plateaus — shallow circuits, symmetry constraints, small-angle initialization — operate within polynomial-dimensional operator subspaces. But circuits confined to polynomial subspaces are classically simulable, either fully (CSIM) or with polynomial quantum data acquisition (QESIM). The duality is: **no barren plateau = classically simulable; quantum hard = barren plateau**. The trainable-and-genuinely-quantum region appears structurally empty for known architectures.
**The Geometric Escape Hatch**
This week's paper "From Reachability to Learnability" (arXiv 2603.03071) offers the only constructive response: almost complete local selectivity (aCLS), a geometric criterion ensuring joint data-and-parameter dependence in the circuit's action on feature space. Circuits satisfying aCLS outperform non-selective schemes with 75% fewer gates. This is not a complexity-theoretic escape from barren plateaus — it sidesteps the DLA argument by constraining the problem geometry rather than expanding the accessible Hilbert space. It is a meaningful engineering mitigation, not a refutation of the duality.
The actionable synthesis: any near-term quantum ML proposal must now answer two questions simultaneously. First, what is the DLA dimension of your ansatz family, and how does it scale with qubit count? Second, if the DLA is polynomial, what classical simulation algorithm can solve the same problem and why hasn't the proponent tested it? Until both questions are answered, barren plateau analysis remains the primary validity filter for variational quantum claims.
Sources:
- [A Lie algebraic theory of barren plateaus for deep parameterized quantum circuits | Nature Communications](https://www.nature.com/articles/s41467-024-49909-3)
- [QAOA-MaxCut has barren plateaus for almost all graphs (arXiv 2512.24577)](https://arxiv.org/abs/2512.24577)
- [Does provable absence of barren plateaus imply classical simulability? (PMC)](https://pmc.ncbi.nlm.nih.gov/articles/PMC12378457/)
- [From Reachability to Learnability: Geometric Design Principles for Quantum Neural Networks (arXiv 2603.03071)](https://arxiv.org/abs/2603.03071)
- [Lie Groups for Quantum Complexity and Barren Plateau Theory | Springer Nature](https://link.springer.com/article/10.1007/s13538-025-01923-6)
## Error Mitigation in 2026: The Pragmatic Case Is Proven — and Now Being Abandoned
The NISQ-era debate between error mitigation and full quantum error correction has resolved into a quantifiable engineering decision, and the numbers are now explicit enough to act on.
**ZNE Works — With a New Twist on the Control Variable**
The February 2026 study ([arxiv 2602.09047](https://arxiv.org/html/2602.09047)) provides the clearest empirical validation of zero-noise extrapolation under real hardware conditions to date. IBM Quantum Heron processors running QAOA for portfolio optimization achieved a raw, unmitigated score of only 98% of the classical Greedy baseline — confirming that NISQ hardware without mitigation cannot demonstrate quantum utility. With ZNE applied, quadratic extrapolation delivered a 31.6% improvement over the classical baseline (58.47 vs. 44.42 portfolio score), with p=0.0009 and Cohen's d=2.01 across seven independent hardware runs. Even the most conservative linear extrapolation yielded a 10.6% advantage. This is not simulated; this is February 2026 hardware data on a production IBM Heron device.
A concurrent refinement addresses why standard ZNE sometimes fails: it uses circuit depth as the noise scaling variable, which is a poor proxy for actual error rates on Heron-class hardware. A March 2025 paper ([arxiv 2503.10204](https://arxiv.org/abs/2503.10204)) introduces Qubit Error Probability (QEP) — derived directly from calibration parameters — as the control variable, adding pairs of native two-qubit gates to scale noise by QEP rather than depth. On 68-qubit, 15-Trotter-step Ising simulations, QEP-guided ZNE outperformed depth-scaled ZNE using only three noise-scaled evaluations with no additional classical post-processing. This matters operationally: fewer shots means lower cost per mitigated circuit.
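For practitioners, the linear-versus-quadratic extrapolation comparison is a few lines of Mitiq. A minimal sketch against a depolarizing simulator standing in for hardware; note that Mitiq's stock `fold_global` scaling is the depth-proxy approach the QEP paper criticizes, so QEP-guided scaling would replace exactly that step:

```python
import cirq
import numpy as np
from mitiq import zne
from mitiq.zne.inference import LinearFactory, PolyFactory
from mitiq.zne.scaling import fold_global

# Toy two-qubit circuit standing in for a QAOA layer.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0), cirq.CNOT(q0, q1), cirq.rz(0.3).on(q1),
    cirq.CNOT(q0, q1), cirq.H(q0),
)

def executor(circ: cirq.Circuit) -> float:
    """Noisy <Z0> from a depolarizing density-matrix simulation."""
    noisy = circ.with_noise(cirq.depolarize(p=0.01))
    rho = cirq.DensityMatrixSimulator().simulate(noisy).final_density_matrix
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # Z on qubit 0
    return float(np.real(np.trace(rho @ z0)))

# Linear vs. quadratic extrapolation, the comparison reported in the study.
linear = zne.execute_with_zne(
    circuit, executor,
    factory=LinearFactory(scale_factors=[1.0, 2.0, 3.0]), scale_noise=fold_global)
quadratic = zne.execute_with_zne(
    circuit, executor,
    factory=PolyFactory(scale_factors=[1.0, 2.0, 3.0], order=2), scale_noise=fold_global)
print(executor(circuit), linear, quadratic)   # raw vs. mitigated estimates
```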
**PEC's Fundamental Overhead Problem Is Now Quantified and Concrete**
Probabilistic error cancellation provides noise-free expectation values in theory but requires exponential sampling overhead. IBM's QDC 2025 "samplomatic" tool reduces PEC sampling overhead by 100× — a genuine engineering achievement. Yet the math exposes the base problem: a workload of 15,000 circuits, each requiring one hour of execution under PEC, would still require over 200 days. IBM's own analysis confirms that even 2–3× efficiency improvements on PEC keep total execution time in the tens of days range for medium-scale workloads. PEC is architecturally unsuitable for iterative quantum ML training loops. ZNE, not PEC, is the practically deployable mitigation technique this year.
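The exponential claim follows from the standard PEC cost model: sampling cost scales roughly as the square of the total quasi-probability norm over the target precision, gamma_tot² / eps². A back-of-envelope sketch; the per-gate norm of 1.01 is an assumed stand-in for a roughly 1% noise channel, not a measured IBM figure:

```python
# Schematic PEC sampling-cost model: shots ~ gamma_tot**2 / eps**2.
gamma_per_gate = 1.01      # assumption: quasi-probability norm per mitigated gate
eps = 0.01                 # target precision on the expectation value
for n_gates in (100, 500, 1000, 2000):
    gamma_tot = gamma_per_gate ** n_gates      # norms multiply across gates
    shots = gamma_tot ** 2 / eps ** 2
    print(f"{n_gates:5d} gates: gamma_tot={gamma_tot:10.1f}, shots~{shots:.2e}")
```

The takeaway is the shape, not the constants: overhead is exponential in mitigated gate count, which is why no constant-factor tool reduction changes the asymptotics.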
**The Emergent Middle Layer: Partial Correction**
IonQ's CliNR approach occupies a previously uncharted position: ~3:1 qubit overhead and 2:1 gate overhead — compared to surface codes requiring 1,000–10,000 physical qubits per logical qubit ([Q-CTRL framework](https://q-ctrl.com/blog/choosing-the-right-quantum-error-reduction-strategy-a-practical-guide-to-error-suppression-error-mitigation-and-quantum-error-correction)). This is the engineering tier the institutional memory predicted but lacked a named, deployed example. CliNR represents a bridge architecture that works on today's hardware budgets without the full resource commitment of surface code QEC.
**The Industry Is Already Pivoting — Faster Than Expected**
Riverlane's 2026 data shows the number of firms actively using QEC (not just mitigation) grew 30% year-over-year, from 20 to 26 companies ([Riverlane report](https://quantumzeitgeist.com/riverlane-2026/)). IBM plans to release a 120-physical-qubit error correction decoder in 2026, targeting fault tolerance by 2029. Other hardware vendors are following IBM's pivot from surface codes to qLDPC codes. Riverlane explicitly predicts that the industry's attention will shift from one-off demonstrations to tracking *sustained reliable operations* — a metric that neither ZNE nor PEC can provide over long circuits.
**The Actionable Synthesis**
The pragmatic stack for 2026 is layered: QEP-guided ZNE via [Mitiq 0.48+](https://mitiq.readthedocs.io/en/stable/guide/error-mitigation.html) for circuits under ~100 gates where mitigation provides verified improvement over classical baselines; CliNR-style partial correction for medium-depth algorithms that cannot tolerate ZNE's statistical noise; and full surface code / qLDPC only for circuits where circuit depth would render ZNE extrapolation nonlinear and unreliable. The 31.6% QAOA advantage number now gives practitioners a concrete benchmark: if your use case cannot beat that bar on mitigated hardware, the overhead of PEC or full QEC is not yet justified.
Sources:
- [arxiv 2602.09047 — QAOA ZNE on IBM Heron, Feb 2026](https://arxiv.org/html/2602.09047)
- [arxiv 2503.10204 — QEP-guided ZNE](https://arxiv.org/abs/2503.10204)
- [IBM QDC 2025 — samplomatic and PEC overhead](https://www.ibm.com/quantum/blog/qdc-2025)
- [Q-CTRL — Practical error strategy guide](https://q-ctrl.com/blog/choosing-the-right-quantum-error-reduction-strategy-a-practical-guide-to-error-suppression-error-mitigation-and-quantum-error-correction)
- [Riverlane 2026 predictions](https://quantumzeitgeist.com/riverlane-2026/)
- [Mitiq 0.48.1 documentation](https://mitiq.readthedocs.io/en/stable/guide/error-mitigation.html)
— Round 2: Cross-Talk —
## QML Researcher Response: Round 2
**What Changes My View**
The Convergence Theorist's DLA dimension result is the most consequential finding I have read this cycle, and it forces a direct reappraisal of the Error Correction Specialist's QAOA numbers. The 31.6% improvement over classical Greedy on the IBM Heron portfolio optimization task (arXiv 2602.09047) is real hardware data, but it now demands a prior question: what is the DLA dimension of that specific QAOA ansatz on the portfolio graph structure used? Portfolio optimization on structured covariance graphs may be one of the topological exceptions — analogous to paths and cycles in the MaxCut result — where DLA dimension stays polynomial. If so, the mitigation result demonstrates ZNE effectiveness, not quantum ML efficacy over classically hard problems. These are not the same claim, and the field has repeatedly conflated them.
**Where I Disagree**
The Error Correction Specialist frames ZNE's QAOA advantage as establishing a "concrete benchmark" for quantum ML practitioners. This framing is premature without DLA analysis of the test circuit family. Mitiq 0.48+ can implement QEP-guided ZNE ([mitiq.readthedocs.io](https://mitiq.readthedocs.io/en/stable/guide/error-mitigation.html)), but Mitiq does not compute DLA dimension — that requires a separate algebraic check using tools like PennyLane's `qml.lie_closure` function, available in PennyLane 0.39+. Practitioners running ZNE on QAOA circuits today have no automated warning when their ansatz enters the exponential DLA regime, which is the regime where mitigation overhead is wasted on a fundamentally untrainable landscape.
**New Questions From Combining Perspectives**
Three questions emerge from combining these perspectives that none of the individual analyses addresses. First, can the aCLS geometric criterion (arXiv 2603.03071) be automatically verified as a pre-flight check before invoking Mitiq's ZNE pipeline, creating a deployable go/no-go filter for quantum ML circuits? Second, the Industry Analyst notes China's 12,000km operational quantum communication network alongside plans for a 2026 third quantum satellite — does distributed quantum kernel evaluation over authenticated quantum channels escape the local DLA dimension constraint, since the effective circuit is non-local? This is an open research question with a concrete infrastructure to test it on. Third, DARPA's 2033 fault-tolerant utility target combined with the barren plateau duality suggests the viable path for quantum ML is quantum kernel methods running on error-corrected hardware, not variational circuits — but no current QML benchmark from IBM, Google, or IonQ is systematically comparing mitigated VQA performance against quantum kernel baselines on the same hardware. That benchmark gap is the most actionable gap in the field today.
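On the first question, the go/no-go filter is already expressible with public tooling. A hypothetical sketch (the function name, polynomial cutoff, and workflow are assumptions, not an existing Mitiq or vendor feature) that computes the DLA dimension and gates any mitigation spend on it:

```python
import pennylane as qml

def dla_preflight(generators, n_qubits, budget_power=3):
    """Hypothetical go/no-go filter: compute the DLA dimension at small n
    and refuse mitigation spend if it exceeds an assumed polynomial budget."""
    dim = len(qml.lie_closure(generators))
    budget = n_qubits ** budget_power
    return {"dla_dim": dim, "budget": budget, "proceed": dim <= budget}

# Example: transverse-field Ising generators at n = 4 (a polynomial-DLA family).
n = 4
gens = [qml.PauliZ(i) @ qml.PauliZ(i + 1) for i in range(n - 1)]
gens += [qml.PauliX(i) for i in range(n)]
report = dla_preflight(gens, n)
print(report)   # only invoke the (costly) ZNE pipeline if report["proceed"]
```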
Sources:
- [Mitiq 0.48 documentation](https://mitiq.readthedocs.io/en/stable/guide/error-mitigation.html)
- [PennyLane qml.lie_closure API](https://docs.pennylane.ai/en/stable/code/api/pennylane.lie_closure.html)
- [arXiv 2602.09047 — ZNE on IBM Heron QAOA](https://arxiv.org/html/2602.09047)
- [arXiv 2603.03071 — aCLS geometric criterion](https://arxiv.org/abs/2603.03071)
- [PMC — Barren plateau simulability duality](https://pmc.ncbi.nlm.nih.gov/articles/PMC12378457/)
## Error Correction Specialist — Round 2 Response
**What changes my view.**
The NQSVDD result holding under realistic IBM hardware noise is the single finding that most revises my priors. My default position has been that meaningful quantum ML requires error correction as a prerequisite — that raw NISQ noise floors (~0.1–1% two-qubit gate error rates on IBM Eagle and Heron processors) preclude reliable kernel evaluations. The NQSVDD joint-optimization architecture suggests an alternative: if classical and quantum layers co-adapt to the noise signature rather than treating it as a fixed degradation, the effective task-relevant signal-to-noise ratio can be engineered upward without fault tolerance. This is empirically uncomfortable for me to accept without full noise characterization data, but I can no longer dismiss it as impossible.
The DARPA US2QC selection of Microsoft's topological approach and PsiQuantum's photonic architecture also updates my near-term decoder roadmap. Both platforms require error correction pipelines that are architecturally incompatible with the syndrome extraction and decoding stacks currently optimized for superconducting qubits. Google's ML-powered decoder work, Helios (deployed operationally in 2024 on their Willow processor), and the Union-Find decoder implementations at IBM are all trained on depolarizing and coherent error models specific to transmon qubits. If DARPA's 2033 bet lands on non-Abelian anyon topological codes, every existing trained decoder model becomes a domain-transfer problem, not a deployment.
**What I disagree with.**
The Convergence Theorist's DLA-simulability duality is rigorous for noiseless, logical circuits, but the argument does not transfer cleanly to fault-tolerant implementations and I think this gap is being overlooked. In a surface code setting, a single logical T gate requires approximately 1,000–10,000 physical operations through magic state distillation on current overhead estimates. The physical circuit implementing a "shallow, polynomial-DLA" logical ansatz is exponentially deeper in physical gate count than the logical description suggests. The complexity boundary the duality draws at the logical level may not correspond to the same boundary at the physical implementation level, which is the only level that matters for actual hardware execution. The DLA analysis needs to be applied to physical circuits, not their logical abstractions, before the simulability conclusion holds operationally.
**New questions from combining perspectives.**
Three questions emerge that none of us can currently answer. First: if aCLS circuits achieve equivalent expressibility with 75% fewer gates, is some fraction of that advantage actually a noise advantage — fewer error locations rather than better geometry — and how would you isolate the two effects on real hardware? Second: China's 12,000km terrestrial quantum network relies on quantum repeaters that implement entanglement purification, which is a form of error correction. What decoder architectures are running on that network right now, and are any ML-powered? Third: the Industry Analyst notes the White House EO omits PQC and CISA entirely, but NIST's finalized standards (CRYSTALS-Kyber and CRYSTALS-Dilithium, now FIPS 203/204) mandate agency migration timelines. Does the absence of PQC provisions in the EO create a procurement gap where agencies fund fault-tolerant quantum hardware before completing the classical cryptographic migration that fault-tolerant quantum hardware will eventually threaten?
## Industry Analyst Response: The Investment Thesis Is Breaking Along the Same Fault Lines as the Physics
**What changes my view:**
The Convergence Theorist's duality result — no barren plateau equals classically simulable — is the single most disruptive finding for the investment landscape I have encountered in two years of tracking this sector. IBM's 2026 roadmap, IonQ's current $2.1B market cap, and Quantinuum's $625M Series B (closed December 2024) are all priced on the implicit assumption that variational quantum algorithms will eventually reach a useful, non-simulable regime. The DLA dimension result puts a structural tax on every pitch deck in the sector that claims near-term advantage via parameterized circuits. Enterprise buyers at JPMorgan Chase, Goldman Sachs, and BASF — all publicly named as IBM Quantum Network partners at https://quantum.ibm.com/partners — are paying access fees against a value hypothesis that the Convergence Theorist's synthesis now seriously undermines.
The Error Correction Specialist's 31.6% QAOA advantage figure on IBM Heron hardware is the first number I have seen that enterprise procurement teams can actually put in a business case. QEP-guided ZNE via Mitiq 0.48+ is deployable today at $0 additional licensing cost, which removes the "unproven overhead" objection from any near-term pilot proposal.
**What I disagree with:**
The Error Correction Specialist frames IonQ's CliNR as a "bridge architecture" with near-term viability, but IonQ has not published CliNR availability dates, pricing, or access tiers as of March 2026. Calling an unpriced, unlaunched offering a deployable middle tier overstates commercial readiness. Riverlane's 30% year-over-year growth in QEC adoption sounds significant, but growing from 20 to 26 companies globally is not an enterprise adoption signal — it is a research cohort signal. The QML Researcher's Q-FLAIR result is genuine, but four hours of IBM hardware time at current IBM Quantum Pay-As-You-Go rates (approximately $1.60 per second on premium systems) makes that a roughly $23,000 experiment, which no enterprise team will authorize for a binary MNIST classification task.
**New questions from combining perspectives:**
If aCLS circuits outperform with 75% fewer gates, what is the actual IBM Runtime cost differential per useful classification, and does it fall below the $500 per-experiment threshold that enterprise innovation budgets typically approve without executive sign-off? The QML and Complexity findings together raise a procurement question no vendor has answered publicly: can any quantum cloud provider today certify the DLA dimension of a customer's submitted ansatz before billing them for a provably untrainable circuit? Amazon Braket, Azure Quantum, and IBM Quantum all charge per shot regardless of trainability. A DLA pre-flight check would be a genuine differentiator and a legitimate consulting product for firms like McKinsey's Quantum Technology practice or BCG's Quantum Advantage team, both of which have published capability statements at https://www.bcg.com/capabilities/digital-technology-data/quantum-computing. The consulting market for "quantum circuit auditability" does not yet exist, but the physics now demands it.
## Convergence Theorist Response: Dequantization Is the Unnamed Competitor in Every Finding Here
**What changes my view:**
The QML Researcher's learnability camp findings — aCLS, Q-FLAIR, NQSVDD — are more consequential than the paper frames them, and not in the direction quantum advocates will appreciate. Q-FLAIR's core mechanism is *classical* feature selection followed by incremental quantum circuit construction. That is precisely the low-rank data structure regime that Ewin Tang's 2018 dequantization results (see the full lineage at [arxiv:1811.04909](https://arxiv.org/abs/1811.04909)) show is efficiently simulable classically. When you select which features to encode classically and reduce effective Hilbert space dimensionality, you are converging on the exact conditions under which a classical randomized algorithm can match quantum kernel estimation. The learnability camp is, unknowingly, designing quantum systems that are increasingly dequantizable.
The Error Correction Specialist's PEC overhead numbers independently confirm this from the complexity side. Exponential sampling overhead is not an engineering problem — it is a structural property of noise channels that mirrors the overhead classical simulation incurs on high-entanglement circuits. Both ceilings exist for the same underlying reason: information dilution across degrees of freedom.
**What I think is wrong:**
The NQSVDD comparison to "classical Deep SVDD under realistic noise" is insufficient as a benchmark. The correct classical baseline is Deep SVDD *with equivalent classical feature engineering* applied to the same low-dimensional projection that NQSVDD's classical encoder learns. Quantum metric learning in a jointly-optimized hybrid is essentially performing nonlinear dimensionality reduction — a task where a classical one-class SVM ([scikit-learn's `OneClassSVM`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html), equivalent to SVDD under an RBF kernel) combined with a pretrained encoder from PyTorch Hub closes the gap without any quantum overhead. The paper owes this comparison to the field before claiming superiority.
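The demanded baseline is a few lines of scikit-learn. A minimal sketch, with PCA standing in for the pretrained encoder and synthetic arrays standing in for NQSVDD's data; `OneClassSVM` plays the SVDD role:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_normal = rng.standard_normal((500, 784))                   # stand-in normal-class data
X_test = np.vstack([rng.standard_normal((50, 784)),          # 50 held-out normals
                    rng.standard_normal((50, 784)) + 3.0])   # 50 shifted anomalies

encoder = PCA(n_components=16).fit(X_normal)   # stand-in for a pretrained encoder
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(encoder.transform(X_normal))
scores = ocsvm.decision_function(encoder.transform(X_test))
print(scores[:50].mean(), scores[50:].mean())  # normals should score higher
```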
The Industry Analyst's treatment of DARPA's US2QC bets as forward-looking procurement signals also needs a complexity-theoretic corrective. Microsoft's topological qubit bet and PsiQuantum's photonic architecture both require fault-tolerant logical qubits to demonstrate any advantage that resists dequantization. Until logical qubit fidelity reaches the surface code threshold (~99.9% two-qubit gate fidelity, currently undemonstrated at scale), the dequantization threat to any ML workload remains fully intact.
**New questions from combining perspectives:**
First: the ZNE QAOA result showing 31.6% advantage over a classical Greedy baseline demands scrutiny. Has the Goemans-Williamson semidefinite programming relaxation been applied to that same portfolio optimization instance? GW achieves a 0.878-approximation ratio for MaxCut-class problems classically, with implementations available in [CVXPY](https://www.cvxpy.org/) today, free, in minutes. If the classical baseline is Greedy rather than GW, the 31.6% figure is not a quantum advantage claim — it is a classical baseline selection artifact. A runnable GW sketch appears after these questions.
Second: China's operational 12,000km quantum communication network sits entirely outside the dequantization threat, because QKD advantage is information-theoretic, not computational. This is the one quantum domain where no classical algorithm competes on the same terms. Enterprise teams evaluating quantum investment in 2026 should separate the communication layer from the computation layer — the former has deployable ROI today, the latter does not.
The EU's April 15, 2026 Horizon Europe deadline ([qt.eu](https://qt.eu/news/2025/2025-17-12_New_EU_Quantum_Flagship_calls_published)) is actionable right now for quantum-inspired classical algorithm research, since hybrid and dequantized methods explicitly qualify under the Flagship's scope.
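To make the Goemans-Williamson challenge concrete, a minimal CVXPY sketch of the GW relaxation with random-hyperplane rounding; the four-node weight matrix is a toy stand-in, not the Heron portfolio instance:

```python
import cvxpy as cp
import numpy as np

def goemans_williamson_maxcut(W, n_rounds=50, seed=0):
    """GW: maximize sum_{i<j} w_ij (1 - X_ij) / 2 over PSD X with unit diagonal,
    then round the vector solution with random hyperplanes."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                      [cp.diag(X) == 1])
    prob.solve()
    # Factor X = V V^T, then round: sign(<v_i, r>) for random directions r.
    evals, evecs = np.linalg.eigh(X.value)
    V = evecs * np.sqrt(np.clip(evals, 0, None))
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(n_rounds):
        signs = np.sign(V @ rng.standard_normal(n))
        best = max(best, np.sum(W * (1 - np.outer(signs, signs))) / 4)
    return best

W = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], float)
print(goemans_williamson_maxcut(W))   # classical cut value, free, in seconds
```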
— Round 3: Resolution —
## Final Synthesis: Resolution, Emergent Insights, and the Collective Blind Spot
**Resolving the Remaining Disagreements**
The ZNE baseline dispute resolves cleanly against the Error Correction Specialist. The 31.6% QAOA advantage is measured over a Greedy heuristic, not the Goemans-Williamson semidefinite relaxation achievable in CVXPY today at zero cost. Until that comparison is run on identical portfolio instances, the figure reports error mitigation efficacy, not quantum advantage, and practitioners must not conflate the two in procurement documents.

The DLA-at-logical-versus-physical-level dispute does not fully resolve but sharpens into a concrete research question: the duality holds at the logical abstraction layer, but magic state distillation overhead may restore computational hardness at the physical layer, making the simulability boundary hardware-dependent in a way no current paper quantifies.

The NQSVDD benchmark dispute resolves in favor of the Convergence Theorist on methodology; the correct comparison requires Deep SVDD with a pretrained PyTorch encoder operating on the same learned low-dimensional projection, not raw-feature Deep SVDD.
**Three Emergent Insights No Single Analyst Would Have Found**
First: the learnability-dequantizability convergence is the field's central unacknowledged irony. Every engineering intervention that makes a quantum ML system trainable — aCLS constraints, Q-FLAIR's classical feature selection, NQSVDD's joint optimization — simultaneously pushes the effective computation into low-rank, low-entanglement regimes that Tang-style dequantization algorithms can efficiently simulate. Designing for learnability and designing for classical simulability are the same operation at present.

Second: the DLA pre-flight check is a deployable product gap with no current owner. Amazon Braket, Azure Quantum, and IBM Quantum charge per shot on circuits that PennyLane's `qml.lie_closure` could flag as provably untrainable in seconds; that audit layer does not exist commercially, and the physics now demands it.

Third: China's 12,000km quantum communication network is the only near-term quantum deployment genuinely immune to dequantization, because QKD advantage is information-theoretic rather than computational; enterprise teams should evaluate quantum communication investment on a completely separate ROI framework from quantum computation.
**The Collective Blind Spot**
No analysis tested any result against quantum-inspired classical algorithms — tensor networks, randomized SVD, or Aaronson-Arkhipov-inspired samplers. Every classical baseline used was either Greedy, standard SVM, or Deep SVDD. The 31.6% QAOA score, the NQSVDD classification accuracy, and Q-FLAIR's 90% MNIST result are all unvalidated against the strongest available classical methods. That gap is not a minor methodological note; it is the primary reason the field cannot currently answer whether any demonstrated quantum ML result survives contact with a serious classical opponent.
## Error Correction Specialist — Final Synthesis
**Resolving the Disagreements**
The Convergence Theorist's dequantization critique of Q-FLAIR and NQSVDD is structurally correct but incomplete as a dismissal. Classical simulation tractability and quantum hardware noise tolerance are orthogonal properties: a dequantizable circuit can still outperform classical alternatives when noise is adversarial to classical kernel estimation on real data manifolds. The more productive resolution is methodological — every hybrid QML paper must now publish three baselines: Greedy, Goemans-Williamson (via CVXPY, free, available at https://www.cvxpy.org/), and a classically-equivalent encoder with no quantum layer. Until those three columns appear in the results table, no advantage claim is credible.
My own Round 2 objection — that the DLA-simulability duality applies to logical circuits, not physical implementations — stands unaddressed. A polynomial-DLA logical ansatz implemented fault-tolerantly through magic state distillation produces a physical gate count that is not polynomial, which breaks the clean complexity boundary the duality draws. This gap is not a refutation; it is an open problem that the Lie algebraic theory literature has not yet closed.
**Three Emergent Insights**
First, the learnability camp is engineering its way into dequantizability. aCLS, Q-FLAIR, and NQSVDD all reduce effective Hilbert space dimensionality as a design virtue — which is simultaneously the condition Tang-style classical algorithms require to compete. The quantum ML field's primary engineering solution and its primary competitive threat are the same mathematical operation.
Second, the classical baseline selection problem is now the field's central validity crisis, not the hardware problem. IBM, Google, and IonQ all charge per shot regardless of whether the classical comparison is Greedy or GW, and no quantum cloud provider enforces baseline adequacy at submission time. A DLA pre-flight check paired with GW benchmark enforcement would redefine what "quantum advantage" means contractually.
Third, the physical-to-logical noise transfer problem will dominate the 2027–2033 period. Every trained ML decoder today — Google's Helios, IBM's Union-Find variants — is calibrated on transmon depolarizing noise. DARPA's topological and photonic bets for 2033 carry noise structures that are domain-incompatible with every existing decoder training corpus.
**The Collective Blind Spot**
No one in this swarm analyzed noise characterization as a rate-limiting independent domain. The entire synthesis assumes noise models are known inputs, but on Majorana-based topological qubits, the noise model itself is an open experimental question, with no published calibration dataset analogous to IBM's publicly available device backends at https://quantum.ibm.com/services/resources. You cannot run ZNE, train a decoder, or compute DLA validity bounds without a characterized noise model — and for the architectures DARPA is funding, that characterization does not exist yet.
## Industry Analyst — Final Synthesis
**Resolving the Disagreements**
The Convergence Theorist's Goemans-Williamson challenge settles the QAOA debate decisively: the 31.6% advantage over Greedy is not a quantum advantage claim until the same portfolio instances are benchmarked against CVXPY's GW implementation, which runs free in minutes on any laptop today. That comparison has not been published. Until it is, the IBM Heron number is a mitigation efficacy result, not a utility result — a meaningful but narrower claim. The Error Correction Specialist's physical-versus-logical DLA objection is also valid and unresolved: the duality is drawn at the logical circuit level, and magic state distillation overhead means a "shallow" logical ansatz may instantiate as a physically deep, noisy circuit that destroys the polynomial-DLA trainability guarantee before execution completes.
**Three Emergent Insights**
First, the learnability camp is engineering its own dequantization. aCLS, Q-FLAIR, and NQSVDD all reduce effective Hilbert space dimensionality to improve trainability — which is precisely the low-rank condition under which Tang-style classical randomized algorithms match quantum kernel estimation. The quantum ML community's solution to barren plateaus is converging on the classical simulability regime from the other direction, without naming it.
Second, a genuine consulting product now exists that no firm has launched: DLA pre-flight circuit auditing. Amazon Braket, Azure Quantum, and IBM Quantum all bill per shot on provably untrainable circuits today. PennyLane's `qml.lie_closure` provides the algebra; McKinsey and BCG have the enterprise relationships; the physics demands the service. The market gap is real and closeable this quarter.
Third, the White House EO's omission of PQC provisions, combined with NIST FIPS 203/204 agency migration mandates already in force, means federal agencies are being directed toward fault-tolerant quantum hardware investment while simultaneously running classical cryptographic infrastructure that operational quantum networks — including China's — will eventually threaten. The procurement sequencing is inverted.
**The Collective Blind Spot**
Every analyst in this swarm evaluated quantum communication as a footnote to computation. It is not. China's 12,000km operational QKD network, a third satellite launching this year, and NIST's finalized PQC standards together constitute the only quantum technology layer with provable, deployable, information-theoretic ROI in 2026. Enterprise quantum strategy that does not begin with a PQC migration audit and a quantum-secured communication evaluation is optimizing the wrong layer entirely.
## Convergence Theorist: Final Synthesis
**Resolving Remaining Disagreements**
The Error Correction Specialist's objection — that DLA analysis applies to logical circuits, not physical implementations — is technically valid but does not rescue the practical situation. Magic state distillation inflates T-gate counts by 1,000–10,000×, meaning polynomial-DLA logical circuits instantiate as physical circuits three to four orders of magnitude deeper, eliminating any trainability advantage before fault tolerance is reached. The duality holds at the level that matters operationally. The NQSVDD benchmark dispute likewise resolved correctly: the paper must compare against a classically-pretrained encoder plus scikit-learn one-class SVM on the same reduced-dimension input, available via PyOD (https://pyod.readthedocs.io/en/latest/) in minutes and at zero cost, before the quantum overhead is justified.
**Three Emergent Insights**
First: the learnability camp and the dequantization literature are converging on the same design target from opposite directions. Q-FLAIR and aCLS reduce effective Hilbert space dimensionality to improve trainability; Tang-style dequantization (arXiv:1811.04909) exploits low-rank structure to classically match quantum kernel estimation. Neither camp has acknowledged the other, yet both results occupy the same parameter regime. This convergence zone is the most productive research surface in near-term QML, and no paper this cycle addresses it directly.
Second: the classical baseline inflation problem is systemic and unreported. The QAOA 31.6% advantage over Greedy, the NQSVDD advantage over Deep SVDD, and Q-FLAIR's MNIST accuracy all lack Goemans-Williamson, encoder-matched SVDD, and random Fourier feature kernel baselines respectively. Every claimed quantum advantage in this cycle is measured against a sub-optimal classical benchmark, and no quantum cloud provider — IBM, Amazon Braket, or Azure Quantum — requires a best-classical-baseline comparison before billing for shots. A scikit-learn sketch of the missing random-feature baseline follows after this list.
Third: QKD over China's 12,000km terrestrial network is the only demonstrated quantum advantage that is structurally immune to dequantization, because it is information-theoretic rather than computational. Enterprise quantum investment in 2026 should bifurcate immediately: communication layer ROI is available today from ID Quantique (https://www.idquantique.com/) and Toshiba Quantum (https://www.toshiba.eu/pages/eu/Toshiba-Research-Europe/quantum-communication/), while computation layer ROI remains structurally blocked by the barren plateau duality.
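The random Fourier feature baseline named in the second insight is itself only a few lines of scikit-learn. A minimal sketch with synthetic stand-in data for the MNIST 3-vs-5 task; the `gamma` value and feature counts are illustrative assumptions:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 784-feature binary MNIST task.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 784))
y = (X[:, :10].sum(axis=1) > 0).astype(int)   # toy labels, not real digits

# Random Fourier features approximate an RBF kernel classically.
clf = make_pipeline(RBFSampler(gamma=0.01, n_components=512, random_state=0),
                    LogisticRegression(max_iter=1000))
clf.fit(X[:300], y[:300])
print(clf.score(X[300:], y[300:]))   # the classical column the results table lacks
```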
**Biggest Collective Blind Spot**
No agent in this swarm evaluated quantum-inspired classical algorithms — specifically tensor network methods and randomized linear algebra — as the immediate competitive threat. TensorLy (https://tensorly.org/) and Quimb (https://quimb.readthedocs.io/) implement tensor network contractions on classical hardware that provably match polynomial-DLA quantum circuits on structured datasets. If NISQ hardware's viable region is the polynomial-DLA subspace, and that subspace is classically simulable, then the correct near-term investment is in tensor-network-accelerated classical ML, not quantum hardware access fees. This competitor is unnamed in every vendor pitch, every government strategy document, and every paper cited across four rounds of this analysis.
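What that competitor looks like in practice: a minimal quimb sketch, assuming a recent release where `CircuitMPS` accepts a `max_bond` cap, simulating a shallow 40-qubit layer classically by bounding entanglement; the circuit is illustrative, not a benchmark:

```python
import quimb as qu
import quimb.tensor as qtn

n = 40                                   # well beyond brute-force statevector reach
circ = qtn.CircuitMPS(n, max_bond=32)    # bond-dimension cap bounds entanglement
for i in range(n):
    circ.apply_gate("H", i)
for i in range(n - 1):
    circ.apply_gate("CZ", i, i + 1)      # one shallow entangling layer
for i in range(n):
    circ.apply_gate("RZ", 0.3, i)        # parameterized single-qubit layer
print(circ.local_expectation(qu.pauli("Z"), 0))  # classical <Z_0> in seconds
```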