# Lecture 5.1 - Building a Quantum Classifier

- URL:: https://learn.qiskit.org/summer-school/2021/lec5-1-building-quantum-classifier
- Links:: 2021 Qiskit Global Summer School
- Instructor:: *Amira Abbas*

## Notes

- 🕒 00:58 *Hilbert space* is a big space! Quantum computing gives us access to an exponential state space in which we can apply computations.
- With just $275$ qubits, we can represent more states than there are atoms in the observable universe.

- 🕒 01:55 *Interference* is more important/interesting than the exponential state space.
- 🕒 03:07 Quantum machine learning paradigms
- 🕒 05:15 How can quantum computers fit into the framework of classical machine learning?
- 🕒 06:35 Near-term vs fault-tolerant
- Near-term devices are the noisy devices available today.
- Fault-tolerant devices are those that will, hopefully, be error-free in the future.

- 🕒 07:33 What can we do now with NISQ devices?

### 🕒 07:58 Variational models

- A quantum circuit that contains parameters we can train/optimize.
- 🕒 09:33 Names of variational models

### 🕒 10:27 A first attempt at QML

- We can use a variational circuit as a classifier.
- 🕒 12:09 Task: train a quantum circuit on labelled samples in order to predict labels for new data.
- Steps
- 🕒 12:46 Step 1: Encode the classical data into a quantum state.
- 🕒 13:09 Step 2: Apply a parameterized model.
- 🕒 13:26 Step 3: Measure the circuit to extract labels.
- 🕒 13:42 Step 4: Use optimization techniques (like gradient descent) to update the model parameters.
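The four steps above can be sketched end to end with a minimal single-qubit simulation in plain NumPy. This is an illustrative toy (the toy data, the RY-only circuit, and the finite-difference optimizer are my assumptions, not the lecture's exact setup):

```python
import numpy as np

def ry(theta):
    """2x2 matrix of an RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict_proba(x, theta):
    """Steps 1-3: angle-encode x, apply a trainable RY, read out P(|1>)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # encode, then model
    return state[1] ** 2                              # probability of outcome 1

def loss(xs, ys, theta):
    """Mean squared error between P(|1>) and the 0/1 labels."""
    return np.mean([(predict_proba(x, theta) - y) ** 2 for x, y in zip(xs, ys)])

# Step 4: crude gradient descent with a finite-difference gradient.
xs = np.array([0.1, 0.2, 2.9, 3.0])   # toy 1-d features (assumed)
ys = np.array([0, 0, 1, 1])           # toy binary labels (assumed)
theta, lr, eps = 1.0, 0.5, 1e-4
initial_loss = loss(xs, ys, theta)
for _ in range(200):
    grad = (loss(xs, ys, theta + eps) - loss(xs, ys, theta - eps)) / (2 * eps)
    theta -= lr * grad
```

A finite-difference gradient is used here only for brevity; the parameter-shift rule discussed later in the lecture is the preferred way to get gradients on hardware.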

### 🕒 14:13 Data encoding

- How to encode classical data into a quantum system?
- This is still an open question/problem.
- It depends on the problem.
- 🕒 15:18 *Basis encoding*: encode classical data into computational basis states.
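For instance, a bitstring maps directly to one position in the statevector (a small illustration, not from the lecture):

```python
import numpy as np

def basis_encode(bits):
    """Map a classical bitstring, e.g. [1, 0, 1], to the basis state |101>."""
    n = len(bits)
    index = int("".join(map(str, bits)), 2)  # position of the single 1 amplitude
    state = np.zeros(2 ** n)
    state[index] = 1.0
    return state
```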

- 🕒 17:15 *Amplitude encoding*: encode classical data into amplitude vectors.
- Use parameterized gates such that the resulting state encodes the features in its amplitudes.
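Classically, preparing the target amplitudes amounts to normalizing the feature vector (and padding it to a power-of-two length); a sketch of that preprocessing step, assuming the circuit that prepares the state is handled separately:

```python
import numpy as np

def amplitude_encode(features):
    """Normalize a feature vector into a valid amplitude vector of length 2^n."""
    x = np.asarray(features, dtype=float)
    n = int(np.ceil(np.log2(len(x))))       # number of qubits needed
    padded = np.zeros(2 ** n)
    padded[: len(x)] = x                    # pad with zeros up to 2^n entries
    return padded / np.linalg.norm(padded)  # amplitudes must have unit norm
```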

- 🕒 20:05 *Angle encoding*: encode classical data into the angles by which we rotate qubits.
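With an RY rotation, for example, a single feature $x$ becomes the state $[\cos(x/2), \sin(x/2)]$ (an illustrative choice of rotation axis, not necessarily the lecture's):

```python
import numpy as np

def ry(theta):
    """2x2 matrix of an RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(x):
    """Encode a single feature x as RY(x)|0> = [cos(x/2), sin(x/2)]."""
    return ry(x) @ np.array([1.0, 0.0])
```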

- 🕒 22:53 Higher-order encoding
- Here we encode higher orders of our data by using products of features as rotation angles.
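One way to picture this: a product term $x_1 x_2$ can be imprinted as a phase via an RZ rotation on a superposition state (an illustrative one-qubit reduction of the idea; second-order feature maps on hardware use entangling gates):

```python
import numpy as np

def rz(theta):
    """RZ rotation: adds opposite phases to |0> and |1>."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# First-order angles encode each feature; the product x1*x2 is a second-order term.
x1, x2 = 0.4, 1.3                          # assumed toy feature values
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> state
state = rz(x1 * x2) @ plus                 # product of features used as an angle
```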

### 🕒 25:34 Applying a variational model

- How do we design the circuits? This is an open question/problem.
- 🕒 29:34 A highly expressive variational circuit is not always advantageous.
- 🕒 30:18 Example of a typical variational model
- 🕒 33:32 How do we extract labels from the model?
- This is also an open question.
- 🕒 34:20 Parity post-processing
- This is for binary classification.
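Parity post-processing assigns label 1 to outcomes with an odd number of 1s and label 0 otherwise; given measured outcome probabilities, the label probability is a simple sum (a minimal sketch):

```python
def parity_label_probability(probs):
    """P(label = 1) via parity post-processing.

    probs: dict of bitstring -> probability, e.g. {'00': 0.5, '11': 0.5}.
    A bitstring with an odd number of 1s votes for label 1, even for label 0.
    """
    return sum(p for bits, p in probs.items() if bits.count("1") % 2 == 1)
```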

- 🕒 39:42 Measuring the first qubit
- Alternatively, we can measure only the first qubit and interpret its outcome probability as the probability of our label.
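In a statevector simulation, that single-qubit readout is just a marginal probability (a sketch, with the first qubit taken as the leftmost bit of the bitstring):

```python
import numpy as np

def first_qubit_prob_one(state):
    """P(first qubit = 1) from an n-qubit statevector (leftmost-bit convention)."""
    n = int(np.log2(len(state)))
    amps = np.asarray(state).reshape(2, 2 ** (n - 1))  # split on the first qubit
    return float(np.sum(np.abs(amps[1]) ** 2))
```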

### 🕒 43:58 Optimization

- How do we compute the gradient of a quantum circuit?
- One method is the parameter-shift rule.
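For gates like RY, the parameter-shift rule recovers the exact gradient from just two circuit evaluations shifted by $\pm\pi/2$. A one-qubit check, assuming the expectation value of $Z$ as the circuit output:

```python
import numpy as np

def expval_z(theta):
    """<Z> for the state RY(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.cos(theta / 2) ** 2 - np.sin(theta / 2) ** 2  # equals cos(theta)

def parameter_shift_grad(theta):
    """Exact gradient from two evaluations shifted by +/- pi/2."""
    return (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2)) / 2
```

Since `expval_z(theta)` equals `cos(theta)`, the shifted difference reproduces the analytic derivative `-sin(theta)` exactly, not just approximately as a finite difference would.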

- 🕒 47:29 Is this advantageous?
- 🕒 48:40 Data encoding
- We map our original data into a higher-dimensional space.
- So we can think of data encoding as a feature map, transforming our data from one space into a higher-dimensional one.

- 🕒 51:05 We can think of quantum classifiers as linear classifiers in a feature space (recall support vector machines and their linear decision function).
- At the moment, this doesn’t provide any known advantage.
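To make the feature-map picture concrete: for single-feature RY angle encoding, the induced kernel (the state overlap an SVM-style classifier would use) has a simple closed form (an illustration of the connection, not a claim from the lecture):

```python
import numpy as np

def encode(x):
    """RY(x)|0> viewed as a feature map phi(x)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    """|<phi(x1)|phi(x2)>|^2, the overlap a quantum computer could estimate."""
    return np.abs(encode(x1) @ encode(x2)) ** 2  # equals cos((x1 - x2)/2)**2
```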

- 🕒 52:13 Recap
- 🕒 53:32 What’s next?