How to optimize a binary circuit

circuit-design, optimization

I'm currently in a situation where I need to optimize a binary circuit by reducing its number of gates.
We regard a circuit simply as a directed acyclic graph (DAG) with some input wires and some output wires.
Some of the input wires may hold constants, while others hold "variables".
Vertices contain gates, which can be either XOR or AND.
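To make this concrete, here is a minimal Python sketch of the kind of object I have in mind; the class names and the recursive evaluator are my own illustration, not any standard format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Wire:
    name: str          # e.g. "x0" for a variable, "one" for a constant input

@dataclass(frozen=True)
class Gate:
    op: str            # "XOR" or "AND"
    left: "Node"
    right: "Node"

Node = Wire | Gate     # a node is either an input wire or a gate

def evaluate(node: Node, env: dict[str, int]) -> int:
    """Evaluate the DAG on 0/1 inputs taken from env.

    Recursive for brevity; a real pass over 300K gates would instead
    iterate over the gates in topological order.
    """
    if isinstance(node, Wire):
        return env[node.name]
    a, b = evaluate(node.left, env), evaluate(node.right, env)
    return (a ^ b) if node.op == "XOR" else (a & b)

# (x0 XOR x1) AND x0, evaluated at x0=1, x1=0
f = Gate("AND", Gate("XOR", Wire("x0"), Wire("x1")), Wire("x0"))
print(evaluate(f, {"x0": 1, "x1": 0}))  # -> 1
```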

The circuit appeared in a Computer Science context, not an Electrical Engineering one.
We need to optimize our circuit by reducing its number of gates, so we started thinking about approaches to achieve this.
Researching online, we discovered that this is a well-studied problem in Electrical Engineering, with a lot of software out there that aids in this direction.
However, as expected, most of the documentation/information/blogs/etc. is targeted towards electrical engineers and is overwhelming for outsiders who lack the terminology.

For instance, I managed to gather from here the following steps:

…We first created flattened Verilog netlists; we started from a purely combinational HDL description of the implementation. We used the Cadence Encounter RTL compiler in conjunction with the Faraday FSA0A_C 0.18 µm ASIC Standard Cell Library for synthesis. For synthesis we flattened the hardware module hierarchy and allowed only the use of INV1S, AN2, XOR2HS, TIE0, and TIE1 cells. After this we re-wrote the output and topologically sorted the resulting circuits.

Many of these terms do not make sense to me, even after extensive searches online. Other sources I found that may help with this problem use similar notation and are equally hard to understand for an outsider.
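For concreteness, the closest open-source approximation of that flow I could piece together drives Yosys (and its built-in ABC pass) from Python; the pass selection and file names below are my guesses at rough equivalents, not the quoted Cadence flow:

```python
import subprocess

# Assumed flow: flatten the hierarchy, run generic synthesis, then ask ABC
# to map to 2-input AND/XOR gates (inverters are always available), which
# loosely mirrors the INV1S/AN2/XOR2HS cell whitelist from the quote.
# The quoted flow additionally re-writes and topologically sorts the
# result, which would happen outside Yosys.
script = "; ".join([
    "read_verilog circuit.v",            # hypothetical input netlist
    "flatten",                           # collapse the module hierarchy
    "synth",                             # generic synthesis + optimization
    "abc -g AND,XOR",                    # technology-map to AND/XOR via ABC
    "opt_clean",                         # drop dead wires and cells
    "write_verilog -noattr optimized.v", # hypothetical output file
])
subprocess.run(["yosys", "-p", script], check=True)
```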

Given the above, my question is:

Given a boolean circuit with only XOR and AND gates, what should be the starting point in order to minimize its number of gates?

Any recommendation on software, examples, terms, etc. is highly appreciated.


Addendum

It was suggested that I provide more details on the problem I'm trying to solve here. I have a large binary circuit (to be explicit, the circuit is written in a file like this one) of roughly 300K gates (as I said above, these are XOR or AND gates).
I want to algorithmically minimize the size of this circuit so that it ends up with fewer gates.
Moreover, due to the context in which this circuit appeared, AND gates have a much higher cost than XOR gates.
Therefore, an optimization that yields a smaller number of AND gates is very much preferable (I think this may contrast with the usual optimization in electrical engineering, in which only the total size matters without distinguishing between gate types, but I may be wrong).
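To make that cost model concrete, here is a rough census sketch, assuming a Bristol-fashion-like file in which each gate line ends with its gate type (the file name and the weight of 100 are illustrative assumptions):

```python
from collections import Counter

def count_gates(path: str) -> Counter:
    """Count gates by type, assuming lines such as "2 1 3 4 5 AND"."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if tokens and tokens[-1] in ("AND", "XOR", "INV", "NOT"):
                counts[tokens[-1]] += 1
    return counts

counts = count_gates("circuit.txt")       # hypothetical file name
print(counts)
# If an AND costs, say, 100x an XOR, this weighted size is the real target:
print(100 * counts["AND"] + counts["XOR"])
```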

To help elicit more accurate answers, I include a bit of my background relevant to this problem.
I know about boolean logic and can recognize the optimization techniques that can be used for this purpose.
I have read about two-level and multi-level optimization (although I don't fully understand the difference between the two).
I also know about the existence of algorithms like Espresso and Quine–McCluskey.
Finally, I've read a bit about HDL and RTL languages and compilers, but I'm not even sure they are relevant.
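As a small taste of what two-level minimization does (this is what Quine–McCluskey implements and what Espresso approximates heuristically), here is a sketch using sympy, whose SOPform is Quine–McCluskey-based; note that truth-table methods like these cannot scale to 300K gates, whereas multi-level tools restructure the circuit DAG itself:

```python
from sympy import symbols
from sympy.logic import SOPform, simplify_logic

x, y, z = symbols("x y z")

# Two-level minimization: from a truth table (the rows where f = 1)
# to a minimal sum-of-products expression.
minterms = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1], [1, 1, 0]]
print(SOPform([x, y, z], minterms))       # -> z | (x & y)

# Multi-level optimization instead factors and shares subexpressions:
print(simplify_logic((x & y) | (x & z)))  # -> x & (y | z), one AND instead of two
```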

My main issue here is that I don't know where to get started. Which of these concepts/tools fits my setting best?


Context of the question

In case you're interested, here are some details about the context in which this question appeared.

The field is cryptography, a subdomain of computer science.
More specifically, I'm talking about something called Multiparty Computation and Fully Homomorphic Encryption.
The important point here is that we regard the functions to be computed as circuits, or DAGs, where each vertex is an operation (either addition or multiplication).
This is another way of saying multivariate polynomials.
When this is done modulo 2, we are talking about binary circuits with AND and XOR as the two allowed operations.
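That correspondence is easy to sanity-check by enumeration (addition mod 2 is XOR, multiplication mod 2 is AND):

```python
import itertools

for a, b in itertools.product((0, 1), repeat=2):
    assert (a + b) % 2 == a ^ b   # addition mod 2 behaves like XOR
    assert (a * b) % 2 == a & b   # multiplication mod 2 behaves like AND
print("mod-2 arithmetic matches XOR/AND on all inputs")
```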

Now, the goal of these techniques is to be able to secretly compute some function over some data.
This is done by writing the function in terms of XOR and AND operations (so, a binary circuit or a very long boolean expression) and proceeding from the inputs to the outputs in an operation-by-operation fashion.

Due to the way these techniques work, each AND operation for us is much more expensive than an XOR operation.
This means that having equivalent representations for the same program with a smaller number of AND operations is much better, since it can be performed more efficiently.
Moreover, having a very "deep" circuit is bad, so we also want to optimize for a shallower circuit (I hope the terms "deep" and "shallow" make sense in this context).
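For example, the multiplicative (AND) depth can be computed in one pass over a topologically sorted gate list; the (op, in1, in2, out) tuple format below is my own illustration:

```python
def and_depth(gates: list[tuple[str, int, int, int]], n_inputs: int) -> int:
    """AND depth of a circuit whose input wires are ids 0..n_inputs-1."""
    depth = {w: 0 for w in range(n_inputs)}
    for op, a, b, out in gates:
        d = max(depth[a], depth[b])
        depth[out] = d + 1 if op == "AND" else d  # XORs are "free" depth-wise
    return max(depth.values(), default=0)

# (x0 AND x1) AND (x2 AND x3): a balanced tree of 3 ANDs has depth 2,
# whereas chaining the same 3 ANDs would give depth 3.
gates = [("AND", 0, 1, 4), ("AND", 2, 3, 5), ("AND", 4, 5, 6)]
print(and_depth(gates, 4))  # -> 2
```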

My initial impression is that this problem could be similar to some problems faced in electrical engineering.
Is there any connection?


PS: Please bear in mind that I come from Computer Science and have no knowledge of Electrical Engineering at all, so this question may be silly, contain incorrect terminology or wrong tags, or even be off-topic, in which case I kindly ask for a pointer to the right place to ask.

Best Answer

A first place to go might be something called "boolean algebra". If you have only a few boolean terms, you can do the simplification by hand.

Example from school: (x+1)*(1+2) reduces to 3x+3

Analogously...

(A xor B) * A reduces to A * not_B

You just have to learn the boolean algebra rules. A place to start might be here.
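One practical habit: for rewrites involving only a handful of variables, brute-force equivalence checking over all assignments is cheap. A sketch verifying the identity above:

```python
import itertools

# (A xor B) and A  ==  A and (not B), checked on all four assignments.
for A, B in itertools.product((0, 1), repeat=2):
    assert (A ^ B) & A == A & (1 - B)   # 1 - B plays the role of not_B
print("identity holds for all inputs")
```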
