
The Classiq Engine II: Electric Boogaloo (or, How to Look for a Needle in a Haystack)

September 11, 2023
Ravid Alon

In our previous blog post about the Classiq engine, we started explaining what the engine does and gave a little sneak peek into how it does it, namely, the backtracking algorithm.
This time we will take a deep dive into the world of CSP (constraint satisfaction problem) solving. The backtracking algorithm is a very simple way to solve CSPs, but in the worst case it looks through the entire search space. In most CSPs, this is unacceptable - the search space is just too big (at best, exponential in the number of variables). Let’s revisit the Sudoku example mentioned in the previous post. There are 9^81 possible ways to fill out an empty Sudoku board. This is slightly less than 2e77 (that’s 2 followed by 77 zeros). Even in an easy puzzle where half the squares are already filled out for us, we have about 1.5e38 possible options. Looking through all of them would simply take too long.
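If you’d like to sanity-check these numbers yourself, two lines of Python will do:

```python
# Raw search-space sizes for the Sudoku example above.
print(f"{9 ** 81:.2e}")  # empty board: 1.97e+77 (slightly less than 2e77)
print(f"{9 ** 40:.2e}")  # about half filled, ~40 empty squares: 1.48e+38
```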
In this post, we will explain how we can improve the backtracking algorithm, or in other words, how you can search for a needle in a haystack efficiently.

In the Beginning

The word backtracking is going to be mentioned a lot in this post, but we haven’t even said what it means yet, so let’s backtrack (I’m sorry).
CSPs are defined as a set of variables, each with a domain of valid values, and a set of constraints on the assignments to these variables. A valid solution is an assignment of values to all variables such that all constraints are satisfied.
The backtracking algorithm tries to build a valid solution incrementally by assigning a value to one variable at a time. After each assignment, we can check whether the constraints are still satisfied (at least those that can already be evaluated). If some constraint can no longer be satisfied, the current partial solution is a dead end. Thus, we backtrack - we assign a different value to the last assigned variable. If no valid values are left, we go back (=backtrack) further, to the previous variable, and re-assign a value to it.
If we reach a state where all variables are assigned and all constraints are satisfied, we found a valid solution! On the other hand, if we exhaust all options, it means that there are no valid solutions at all.
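To make this concrete, here is a minimal sketch of the algorithm in Python. The representation is purely illustrative (it is not Classiq’s actual code): `domains` maps each variable to its candidate values, and `consistent` checks whatever constraints can already be evaluated on the partial assignment.

```python
def backtrack(assignment, domains, consistent):
    """Return a complete valid assignment, or None if none exists."""
    if len(assignment) == len(domains):
        return assignment  # every variable assigned, all constraints hold

    # Pick any unassigned variable (we'll improve this choice later).
    var = next(v for v in domains if v not in assignment)

    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):  # stop early on a dead end
            result = backtrack(assignment, domains, consistent)
            if result is not None:
                return result
        del assignment[var]  # undo and try the next value

    return None  # options exhausted: the caller must backtrack further
```

Calling `backtrack({}, domains, consistent)` either returns a full solution or `None`, which tells us that no valid solution exists.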
If you were paying attention, you might have noticed that the algorithm already does something sort of smart: it stops the moment it recognizes that some constraint is no longer satisfiable.
I know, this sounds trivial. Indeed, if you’re solving a Sudoku and you find an error, you immediately try to fix it, rather than continue solving. However, this simple thing already significantly reduces the size of the search space - no need to continue looking if we can say, in advance, that the search is futile. We will go back to this notion later.
I also want to point out that many “smart” things that we do in backtracking might sound trivial. This is because one way to come up with them is trying to figure out the thought process in the head of someone solving the same problem manually. Our brain is still the best computer around¹.

Searching in an Orderly Fashion

With that in mind, let’s try to mimic the thought process of a person solving a Sudoku and see where it takes us. Consider a partially filled Sudoku board: where are you going to look for the next square to fill? Will you look at an empty column, or at a column with 5 numbers crossing a row with 4 numbers? Of course it’s the latter. When solving a Sudoku, you naturally look for areas with lots of clues (i.e., filled-out squares). So let’s tell our computer to do the same!
In CSP solving, these are called ordering heuristics. When describing the backtracking algorithm, I (implicitly) mentioned two choices the algorithm makes: which variable to assign next, and which value to assign to it. How does it make these choices? Well, that’s up to us. We can implement heuristics that help it make good decisions. Note that heuristics do not guarantee that we find a valid solution immediately. They normally implement a greedy approach to solving the problem, making it more likely that we find a valid solution early, while the backtracking algorithm ensures that we still iterate over all possibilities.
Our Sudoku example can be seen as a variation of the least remaining values (LRV) heuristic, which dictates that we try to assign a value to the variable with the smallest number of valid values remaining. These variables are likely going to pose a challenge, so it’s good to tackle them early on.
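In code, this heuristic boils down to a single line. In this hypothetical sketch, `remaining` maps each unassigned variable to the set of values still valid for it:

```python
def select_variable_lrv(remaining):
    # Least remaining values: pick the unassigned variable with the
    # fewest candidates left - the CSP analogue of heading straight
    # for the most crowded region of the Sudoku board.
    return min(remaining, key=lambda var: len(remaining[var]))
```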
Now, we haven’t discussed quantum circuits yet. As mentioned in the previous post, the Classiq engine allocates resources to the different functions in the circuit. In what order should it allocate them? One natural answer is the topological order of the circuit: if a function must be applied before another, we should allocate its resources first. While it isn’t strictly necessary to allocate resources in this order, doing so makes it easier for the engine to identify situations where the constraints can no longer be satisfied, and thus makes a good heuristic.
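As a hypothetical sketch (not the engine’s actual code), deriving such an order from the functions’ dependencies is a plain topological sort. Here, `deps` maps every function to the (possibly empty) list of functions that must be applied before it:

```python
from collections import deque

def allocation_order(deps):
    """Kahn's topological sort: dependencies first, dependents later."""
    dependents = {f: [] for f in deps}
    indegree = {f: len(before) for f, before in deps.items()}
    for f, before in deps.items():
        for b in before:
            dependents[b].append(f)

    ready = deque(f for f, d in indegree.items() if d == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for g in dependents[f]:  # f is handled, so g has one less blocker
            indegree[g] -= 1
            if indegree[g] == 0:
                ready.append(g)
    return order
```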
What happens if two functions can be applied in either order? Here, there isn’t necessarily a natural answer. However, we can make use of another popular heuristic: the most constraining variable (MCV) heuristic, which dictates that we choose the variable which imposes the most constraints on the remaining variables. In this case, that would be the function which requires the most resources.
What about determining the value ordering? One common heuristic for that is the least constraining value (LCV²) heuristic: choose the value which, if assigned to the variable, rules out the fewest values for other variables. For example, if we are trying to synthesize a circuit with a limited number of qubits, we will first try to allocate fewer qubits to functions (so that the next functions have more qubits available to them). Again, this might sound trivial to us, but the algorithm wouldn’t know how to do it if we didn’t explicitly tell it to do so.
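Both heuristics translate directly into code. In this illustrative sketch, `constraint_degree` and `values_ruled_out` stand in for problem-specific bookkeeping that consults the actual constraints:

```python
def select_variable_mcv(remaining, constraint_degree):
    # Most constraining variable: prefer the variable involved in the
    # most constraints on still-unassigned variables - e.g., the
    # function that requires the most resources.
    return max(remaining, key=constraint_degree)

def order_values_lcv(var, remaining, values_ruled_out):
    # Least constraining value: try values that eliminate the fewest
    # candidates for the other variables first - e.g., allocating
    # fewer qubits to a function before trying larger allocations.
    return sorted(remaining[var],
                  key=lambda value: values_ruled_out(var, value))
```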

The Early Bird Catches the Worm

Earlier I mentioned that we want to stop the search as soon as even one constraint is no longer satisfiable. We can take this idea and extend it much further. Sometimes it’s easy to tell that some constraint is unsatisfiable, e.g., if a column already contains the numbers 1-8, but the row of its last empty square already contains a 9. Sometimes, though, you have to look a bit deeper. Maybe there are 3 remaining squares in a column, but two of the missing numbers can only go in the same square. This does not immediately jump out, but with enough thinking you can figure out that the constraint is unsatisfiable.
There are different ways to implement such ideas in CSP solving. The first one we will discuss is forward checking. In forward checking, we keep track of the valid values remaining for each variable (note that this bookkeeping can also drive the ordering heuristics mentioned earlier). If you ever got stuck solving a Sudoku and started writing the possible values in the corners of squares, that’s exactly forward checking.
After every assignment, we go over the remaining valid values and remove any value that is no longer valid (by checking the relevant constraints). If a variable is left with no valid value, we can backtrack immediately - this assignment cannot lead us to a valid solution.
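A sketch of this bookkeeping might look as follows; `neighbors` and `violates` are illustrative stand-ins for the problem’s actual constraint structure:

```python
def forward_check(var, value, remaining, neighbors, violates):
    """Prune values incompatible with var=value; None signals a dead end."""
    pruned = []  # record removals so backtracking can restore them
    for other in neighbors(var):
        if other not in remaining:
            continue  # `other` is already assigned
        for other_value in list(remaining[other]):
            if violates(var, value, other, other_value):
                remaining[other].remove(other_value)
                pruned.append((other, other_value))
        if not remaining[other]:
            return None  # a domain emptied out: backtrack immediately
    return pruned
```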
But this method isn’t quite enough for the example above - all squares will have at least one valid value. We need to do something more sophisticated. We need arc consistency.
Arc consistency looks at pairs of variables that are part of the same constraint, rather than single variables. To make sure that a value is valid, we will check if there is a valid value of the other variable that can be paired with it to satisfy their shared constraint. If we can’t find one, then surely this value is invalid: if we choose it, we will fail to satisfy this shared constraint.
This is getting confusing, isn’t it? Sometimes the trivial things that we do in our head require some work to translate to an algorithm. And this isn’t even the end of it: path consistency looks at triplets of variables, while k-consistency looks at k-tuples of variables! Those are quite complicated, but the premise remains the same: the earlier we find that this partial solution is a dead end, the better.
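For the curious, here is what the pairwise check of arc consistency looks like in the classic AC-3 formulation; `satisfies` is again a hypothetical stand-in for evaluating the constraint shared by two variables:

```python
from collections import deque

def revise(xi, xj, domains, satisfies):
    # Remove every value of xi that no value of xj can be paired with.
    removed = False
    for x in list(domains[xi]):
        if not any(satisfies(xi, x, xj, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, arcs, satisfies):
    """Enforce arc consistency; False means some domain emptied out."""
    queue = deque(arcs)  # every ordered pair sharing a constraint
    while queue:
        xi, xj = queue.popleft()
        if revise(xi, xj, domains, satisfies):
            if not domains[xi]:
                return False  # dead end found early
            # xi's domain shrank, so arcs pointing into xi need re-checking.
            queue.extend((xk, xl) for (xk, xl) in arcs
                         if xl == xi and xk != xj)
    return True
```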

When is it Worth it?

All methods discussed thus far require non-trivial computations. Can performing them repeatedly in the middle of an already very, very long search degrade the runtime? In the worst case, the answer is yes. You could probably construct an edge case where all of these computations are useless, and the algorithm might as well have just tried every possible solution in a random order. But in practice, it’s almost always beneficial to perform those extra computations. The problems that we’re dealing with in CSP solving have search spaces that are at best exponential in size, and so any subspace that we don’t have to search through is a huge benefit. As long as the extra computations are polynomial, performing them is almost always cheaper than the exponentially larger search they save us.

Is this it?

No.

With CSP solvers there is never an end. You can always find new heuristics or improve your implementation to shave a few percent off the runtime. There are also even more complex techniques, such as backjumping or constraint learning.
It’s a never-ending race to improve, so that the Classiq engine can synthesize more complex circuits even faster.



¹ This is actually something I always tell people who are new to programming: computers are pretty stupid, they just do exactly what they’re told. The only reason we use them is that they do it much faster than humans. Backtracking is a great example of this.
² LRV, MCV and LCV. Yes, acronyms in computer science can be confusing as hell.
