Optimizing the quantum stack: a machine learning approach
Overview
- Date: 25 October 2024, 09:30–12:30
- Location: Kollektorn, MC2, Kemivägen 9, Chalmers
- Language: English
- Opponent: Dr. Mario Krenn, Max Planck Institute for the Science of Light, Erlangen, Germany
This compilation thesis explores the intersection of machine learning and quantum computing, focusing on optimizing quantum systems and exploring use cases for quantum computers. Motivated by the potential impact of quantum computers, we investigate several key areas.
First, we delve into machine learning and optimization techniques, establishing the foundation for our research. We then explore reinforcement learning, enabling machines to learn through interaction.
Building on these concepts, we investigate variational quantum algorithms (VQAs) as a promising framework for near-term quantum computing. We analyze the quantum approximate optimization algorithm (QAOA) and introduce gradient-based optimization techniques for VQAs, aiming to assess the potential of quantum computing in solving real-world challenges.
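To make the variational idea concrete, here is a minimal, self-contained sketch of depth-1 QAOA for MaxCut on a triangle graph, simulated with plain NumPy statevectors. The tiny problem instance, the grid search, and all names are illustrative choices for this page, not the thesis's actual experiments.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph on 3 qubits
n = 3

# Diagonal of the MaxCut cost operator C: cut size for each basis state.
costs = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                  for z in range(2 ** n)], dtype=float)

def apply_rx_all(state, beta):
    """Apply the mixer exp(-i * beta * X) to every qubit of a statevector."""
    s = state.reshape([2] * n)
    for axis in range(n):
        s = np.moveaxis(s, axis, 0)
        s = np.stack([np.cos(beta) * s[0] - 1j * np.sin(beta) * s[1],
                      np.cos(beta) * s[1] - 1j * np.sin(beta) * s[0]])
        s = np.moveaxis(s, 0, axis)
    return s.reshape(-1)

def expectation(gamma, beta):
    """Expected cut value of the depth-1 QAOA state |gamma, beta>."""
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # |+...+>
    state = np.exp(-1j * gamma * costs) * state            # phase separator
    state = apply_rx_all(state, beta)                      # mixer
    return float(np.sum(costs * np.abs(state) ** 2))

# Coarse grid search over the two variational angles (in place of the
# gradient-based optimizers the thesis studies).
grid = np.linspace(0.0, np.pi, 60)
best = max(expectation(g, b) for g in grid for b in grid)
print(f"best depth-1 QAOA expectation: {best:.3f} "
      f"(random guessing gives 1.5, the optimal cut is 2)")
```

Even this toy shows the VQA pattern: a parameterized quantum state, a classical loop proposing angles, and a cost expectation fed back to the optimizer.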
Next, we focus on quantum circuit optimization, proposing methods that apply machine learning to the qubit allocation and routing problem. This work aims to narrow the gap between theoretical quantum algorithms and their practical implementation on quantum hardware.
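For readers unfamiliar with the routing problem itself, the sketch below shows what needs solving: on hardware where only neighboring qubits can interact, SWAP gates must be inserted so every two-qubit gate acts on adjacent physical qubits. This naive greedy heuristic on a linear coupling map is purely illustrative; it is not the thesis's machine-learning approach.

```python
def route(gates, n):
    """Greedily route logical two-qubit gates on a line of n physical qubits.

    gates: list of (a, b) logical-qubit pairs; returns a list of
    ("SWAP", i, j) and ("CX", i, j) operations on physical sites.
    """
    phys = list(range(n))            # phys[i] = logical qubit at physical site i
    pos = {q: q for q in range(n)}   # inverse map: logical qubit -> physical site
    out = []
    for a, b in gates:
        # Insert SWAPs until a and b sit on neighboring sites.
        while abs(pos[a] - pos[b]) > 1:
            step = 1 if pos[b] > pos[a] else -1
            i, j = pos[a], pos[a] + step
            out.append(("SWAP", i, j))
            la, lj = phys[i], phys[j]
            phys[i], phys[j] = lj, la
            pos[la], pos[lj] = j, i
        out.append(("CX", pos[a], pos[b]))
    return out

ops = route([(0, 2), (1, 3)], 4)
print(ops)  # every CX now acts on adjacent sites, at the cost of extra SWAPs
```

Because each inserted SWAP adds depth and noise, smarter allocation and routing directly improves what a given device can run, which is what motivates learning better heuristics.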
Finally, we turn to quantum error correction, developing a reinforcement learning approach to efficiently decode error syndromes. This demonstrates the potential of machine learning in enhancing the reliability of quantum computations.
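The decoding-as-reinforcement-learning idea can be illustrated at toy scale: tabular Q-learning that maps error syndromes of the 3-qubit repetition code to corrections. The thesis targets far larger codes with deep reinforcement learning; this bandit-sized version only shows the interface — syndrome in, correction out, reward for a successful recovery.

```python
import random

random.seed(0)

# Single bit-flip on qubit i produces a distinct two-bit syndrome.
SYNDROMES = {0: (1, 0), 1: (1, 1), 2: (0, 1)}
Q = {s: [0.0, 0.0, 0.0] for s in SYNDROMES.values()}  # Q[syndrome][qubit to flip]

alpha, epsilon = 0.5, 0.1
for episode in range(2000):
    error = random.randrange(3)                   # one random bit-flip error
    s = SYNDROMES[error]
    if random.random() < epsilon:                 # epsilon-greedy exploration
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: Q[s][i])
    reward = 1.0 if a == error else 0.0           # success iff we flip the flipped qubit
    Q[s][a] += alpha * (reward - Q[s][a])         # one-step (bandit) Q-update

# Greedy decoding policy learned from rewards alone, without a lookup table given in advance.
policy = {s: max(range(3), key=lambda i: Q[s][i]) for s in SYNDROMES.values()}
print(policy)
```

The appeal of the learned approach is that the same reward signal extends to settings where hand-written decoders are hard to design, such as large topological codes with noisy syndrome measurements.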
Throughout the thesis, we highlight the benefits of using classical machine learning methods to optimize processes on a quantum computer, contributing to the advancement of quantum computing technologies and their practical applications.