TY - CHAP
T1 - AccelerQ: Accelerating Quantum Eigensolvers With Machine Learning on Quantum Simulators
AU - Bensoussan, Avner
AU - Chachkarova, Elena
AU - Even-Mendoza, Karine
AU - Fortz, Sophie
AU - Lenihan, Connor
PY - 2025/6/18
Y1 - 2025/6/18
N2 - We present AccelerQ, a framework for automatically tuning quantum eigensolver (QE) implementations (quantum programs implementing a specific QE algorithm) using machine learning and search-based optimisation. Rather than redesigning quantum algorithms or optimising the implementation of an existing algorithm, AccelerQ treats QE implementations as black-box programs and learns to optimise their hyperparameters to improve accuracy and efficiency. Our approach leverages two key insights: (1) training on data extracted from smaller and simpler inputs to the QE implementations, and (2) training a program-specific ML model. To further enhance our approach, we incorporate search-based techniques and genetic algorithms alongside machine learning models to explore the hyperparameter space of QE implementations efficiently and avoid local minima. To evaluate AccelerQ, we applied it to two fundamentally different QE implementations: ADAPT-QSCI and QCELS. For each, we trained a lightweight XGBoost regression model in Python using data extracted classically from systems of up to 16 qubits. We then deployed the model to optimise hyperparameters for executions on larger systems, namely 20-, 24-, and 28-qubit Hamiltonians, where direct classical simulation becomes impractical. For ADAPT-QSCI, we observed a reduction in error from 5.48% to 5.3% with the ML model alone, and further to 5.05% with genetic algorithms. For QCELS, ML reduced the error from 7.5% to 6.5%, with no additional gain from genetic algorithms. Our results highlight the potential of machine learning and optimisation techniques for quantum programs and suggest promising directions for integrating software engineering methods into quantum software stacks. Nonetheless, given inconclusive results for some of the 20- and 24-qubit Hamiltonians, we suggest further examination of the training data based on Hamiltonian characteristics.
AB - We present AccelerQ, a framework for automatically tuning quantum eigensolver (QE) implementations (quantum programs implementing a specific QE algorithm) using machine learning and search-based optimisation. Rather than redesigning quantum algorithms or optimising the implementation of an existing algorithm, AccelerQ treats QE implementations as black-box programs and learns to optimise their hyperparameters to improve accuracy and efficiency. Our approach leverages two key insights: (1) training on data extracted from smaller and simpler inputs to the QE implementations, and (2) training a program-specific ML model. To further enhance our approach, we incorporate search-based techniques and genetic algorithms alongside machine learning models to explore the hyperparameter space of QE implementations efficiently and avoid local minima. To evaluate AccelerQ, we applied it to two fundamentally different QE implementations: ADAPT-QSCI and QCELS. For each, we trained a lightweight XGBoost regression model in Python using data extracted classically from systems of up to 16 qubits. We then deployed the model to optimise hyperparameters for executions on larger systems, namely 20-, 24-, and 28-qubit Hamiltonians, where direct classical simulation becomes impractical. For ADAPT-QSCI, we observed a reduction in error from 5.48% to 5.3% with the ML model alone, and further to 5.05% with genetic algorithms. For QCELS, ML reduced the error from 7.5% to 6.5%, with no additional gain from genetic algorithms. Our results highlight the potential of machine learning and optimisation techniques for quantum programs and suggest promising directions for integrating software engineering methods into quantum software stacks. Nonetheless, given inconclusive results for some of the 20- and 24-qubit Hamiltonians, we suggest further examination of the training data based on Hamiltonian characteristics.
M3 - Conference paper
BT - ACM SIGPLAN International Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA)
ER -