Hacker News

You don't need XOR and AND. You ONLY need NAND (or NOR).

NAND/NOR are universal in that each can single-handedly construct all other gates. XOR and AND is not a minimal construction.

This is well below the level of understanding needed to get to GF(2).



See my answer above. Even if a single universal gate is enough in principle, when you implement physical logic circuits, speed and cost (area and power consumption) are essential, so you cannot use a single type of universal gate: you must use a small number of basic gates, because each of them has a better implementation in diodes and transistors than an implementation made as a cascade of a single type of universal gate.


Boolean logic says nothing about implementation.

They're entirely different levels of abstraction.

https://en.wikipedia.org/wiki/Functional_completeness This line in particular: "In digital electronics terminology, the binary NAND gate and the binary NOR gate are the only binary universal logic gates."

Saying that insight (that you can reduce any Boolean equation to a single universal gate) isn't useful, when it is a cornerstone of modern equivalence checking, Boolean SAT solving, and minimization of combinational logic, is really, really missing some very key things...

It is the kind of fundamental cornerstone that allows modern digital computers to even exist.
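As a quick sanity check (my own sketch, not anything from a library): you can derive NOT, AND, and OR from NAND alone and verify every truth table exhaustively:

```python
# Derive NOT, AND, and OR from NAND alone, then check all truth tables.
def nand(x, y):
    return 1 - (x & y)

def not_(x):            # NOT x = x NAND x
    return nand(x, x)

def and_(x, y):         # AND = NOT(NAND)
    return nand(nand(x, y), nand(x, y))

def or_(x, y):          # OR via De Morgan: (NOT x) NAND (NOT y)
    return nand(nand(x, x), nand(y, y))

for x in (0, 1):
    for y in (0, 1):
        assert not_(x) == 1 - x
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
```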


Doesn't equivalence checking normally use BDDs, and SAT solving normally use separate AND, OR, and NOT? I mean the algorithm to reduce an expression to CNF is a pretty simple set of local rewrite rules with AND, OR, and NOT; I could be wrong but I feel like the NAND-based equivalent of CNF is quite a bit hairier.

As for minimizing combinational logic, don't you want to use the full set of gates available to you in that case, taking into account their cost, and maybe trying to minimize or at least limit circuit depth? It seems like just using NAND would be kind of a step backwards in that case. Karnaugh maps give you DNF; PALs/GALs want DNF; the fact that your DNF PAL circuit might just be two layers of NAND (equivalent to a layer of AND feeding into an OR layer) is kind of an implementation detail, isn't it? I mean you would configure the circuit in the exact same way if it were made out of physical AND and OR gates.
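To make the "implementation detail" point concrete, here's a small check (my own sketch) that an AND layer feeding an OR layer computes the same function as the identical wiring built entirely from NANDs, which is just De Morgan's law applied to the middle wires:

```python
def nand(x, y):
    return 1 - (x & y)

# Sum-of-products (x AND y) OR (y AND z), versus the same two-level
# network with every gate replaced by NAND.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            sop = (x & y) | (y & z)
            two_level_nand = nand(nand(x, y), nand(y, z))
            assert sop == two_level_nand
```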


Of course after you do the minimization, you move to a technology mapping step that takes these things into account.

"the fact that your DNF PAL circuit might just be two layers of NAND (equivalent to a layer of AND feeding into an OR layer) is kind of an implementation detail, isn't it?"

No. That's absolutely, 100% fundamental.

It's called "Canonical Form" https://en.wikipedia.org/wiki/Canonical_normal_form

It's canonical for a reason.


It seems that you didn't understand my comment well enough to respond to it, maybe due to a lack of domain knowledge.


Sure, what's a good reference?


Also, I forgot to mention in my first answer above that it is false that "XOR and AND is not a minimal construction".

The following 4 sets of logic operations (and many others) are minimal:

1. AND, NOT

2. AND, XOR

3. NAND

4. NOR

So AND + XOR is a minimal set, because you can make NOT from a XOR where one input is true.


If you make a NOT from an XOR, and already have AND, you've made...

NAND!

Which is why it isn't a minimal set - because you're using AND and XOR to get NAND, which (by itself) is universal!
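That composition is easy to check mechanically; a sketch (mine, not from the thread):

```python
def not_(x):            # NOT from XOR with the constant 1
    return x ^ 1

def nand(x, y):         # NAND = (x AND y) XOR 1
    return not_(x & y)

for x in (0, 1):
    for y in (0, 1):
        assert nand(x, y) == 1 - (x & y)   # matches NAND's truth table
```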


The question about a minimal set of primitives arises in many domains, not only in Boolean functions, for example in structured programming or in a minimal computer instruction set.

In all such domains it is possible to find a set composed of only one primitive and some people think that that is the minimal set because it has a minimum number of primitives.

Nevertheless, other people do not agree that the minimum number of primitives is the correct goal for minimal complexity.

In all such cases the single primitive is more complex than the primitives belonging to an equivalent set with more members: it is equivalent to a composition of all the simpler primitives, arranged in such a way that any of the simpler primitives can be obtained by composing the single primitive with itself.

In my opinion, it is nice to know this nice trick about reducing a set of independent primitives to only one, but this almost never has any practical application.

When reasoning about that domain, using the single primitive does not make anything easier, because it glues together independent concepts which are easier to handle separately, not in a forced combination. For example, when thinking about Boolean functions, it is easier to think about AND-OR stages instead of NAND-NAND, or about OR-AND stages instead of NOR-NOR. So the knowledge that you could use just NAND does not help you.

If we are talking about implementation, even there the single primitive does not help you, because you must actually implement it from simpler primitives.

For example the main blocks for TTL logic are minimum circuits with diodes (AND), parallel transistors (OR) and transistor inverters (NOT), while the main blocks for CMOS logic are parallel transistors, series transistors, transistor inverters and transmission gates.

You do not design just a NAND and then make the other cells from it, but you must design at least a NAND, a NOR, a XOR and a multiplexer, then make other functions from these blocks. Knowing that you might use just a NAND does not help you, because any implementation that uses only NAND is less efficient than those based on multiple simpler primitives.

An example from another domain, you can use just a WHILE-DO to implement both an IF-THEN and a REPEAT-UNTIL, so you can claim that it is enough to have that primitive.

Nevertheless, if you look at the implementation, both IF-THEN and REPEAT-UNTIL are made with a single conditional jump. On the other hand, you must use 2 conditional jumps for a WHILE-DO; there is no way to implement it other than as a composition of IF-THEN with REPEAT-UNTIL. If you implement the simpler primitives with WHILE-DO, you obtain a more inefficient implementation.

In conclusion, while it is interesting to know that you can reduce most or all sets of primitives to only one complex primitive, both in theory and in practice the sets of multiple simple primitives are more useful.


"In my opinion, it is nice to know this nice trick about reducing a set of independent primitives to only one, but this almost never has any practical application."

Also, NAND/NOR aren't a "complex primitive"; they're literally just primitives.

You've never used any modern computer? I consider modern computers pretty useful; considering this "trick" backs everything we do with modern computers, I consider it pretty fundamental...


That makes the set {AND, XOR} a cover, yes, but since it is larger than {NAND} (or {NOR}) it is not a minimum cover.


In the set-containment lattice, neither {AND, XOR} nor {NAND} is "larger than" the other, so they are both minimum covers.


Agh, yes, you're absolutely right. Been too long :(


XOR and AND form a falsehood-preserving set of gates. Whatever acyclic circuit you make out of them will always produce 0 as its output if all its inputs are 0.

It's a minimal universal set if you have access to constants (specifically, constant 1), but the other sets you mention do not require access to constants for universality. AND-NOT (BIC, set subtraction, logical abjunction, &^ in Golang) is a single binary gate that is minimal and universal in the same way (that is, it's falsehood-preserving, but universal if you have a constant 1), and it has the distinction of being the only universal gate available as a bitwise operation in many CPU instruction sets.

Here's a minimal construction of all 16 two-input Boolean functions using XOR and AND:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = 0 (truth table 0000, cost 1) = 0
    r[ 3] = -1 (truth table 1111, cost 1) = -1
    r[ 4] = r[0] ^ r[1] (truth table 0110, cost 1) = x ^ y
    r[ 5] = r[0] ^ r[3] (truth table 1100, cost 2) = x ^ -1
    r[ 6] = r[1] ^ r[3] (truth table 1010, cost 2) = y ^ -1
    r[ 7] = r[0] & r[1] (truth table 0001, cost 1) = x & y
    r[ 8] = r[0] & r[4] (truth table 0010, cost 2) = x & (x ^ y)
    r[ 9] = r[1] & r[4] (truth table 0100, cost 2) = y & (x ^ y)
    r[10] = r[5] & r[6] (truth table 1000, cost 4) = (x ^ -1) & (y ^ -1)
    r[11] = r[0] ^ r[6] (truth table 1001, cost 3) = x ^ (y ^ -1)
    r[12] = r[0] ^ r[9] (truth table 0111, cost 3) = x ^ (y & (x ^ y))
    r[13] = r[3] ^ r[9] (truth table 1011, cost 4) = -1 ^ (y & (x ^ y))
    r[14] = r[3] ^ r[8] (truth table 1101, cost 4) = -1 ^ (x & (x ^ y))
    r[15] = r[3] ^ r[7] (truth table 1110, cost 3) = -1 ^ (x & y)
And here it is with just abjunction:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = 0 (truth table 0000, cost 1) = 0
    r[ 3] = -1 (truth table 1111, cost 1) = -1
    r[ 4] = r[0] &^ r[1] (truth table 0010, cost 1) = x &^ y
    r[ 5] = r[1] &^ r[0] (truth table 0100, cost 1) = y &^ x
    r[ 6] = r[3] &^ r[0] (truth table 1100, cost 2) = -1 &^ x
    r[ 7] = r[3] &^ r[1] (truth table 1010, cost 2) = -1 &^ y
    r[ 8] = r[0] &^ r[4] (truth table 0001, cost 2) = x &^ (x &^ y)
    r[ 9] = r[3] &^ r[4] (truth table 1101, cost 3) = -1 &^ (x &^ y)
    r[10] = r[3] &^ r[5] (truth table 1011, cost 3) = -1 &^ (y &^ x)
    r[11] = r[6] &^ r[1] (truth table 1000, cost 3) = (-1 &^ x) &^ y
    r[12] = r[3] &^ r[8] (truth table 1110, cost 4) = -1 &^ (x &^ (x &^ y))
    r[13] = r[3] &^ r[11] (truth table 0111, cost 4) = -1 &^ ((-1 &^ x) &^ y)
    r[14] = r[9] &^ r[5] (truth table 1001, cost 5) = (-1 &^ (x &^ y)) &^ (y &^ x)
    r[15] = r[3] &^ r[14] (truth table 0110, cost 6) = -1 &^ ((-1 &^ (x &^ y)) &^ (y &^ x))
You might intuitively suppose that &^ is "more efficient" or "more expressive" than NAND or NOR, since x &^ y gives you different results from y &^ x, so you would think that with the same number of gates, you would have more usefully different possibilities, and so some circuits would require fewer gates using abjunction than using NAND. However, I haven't found a natural family of circuits for which that is the case. Some circuits are simpler with abjunction (abjunction itself, for example, is only a single gate with abjunction), while other circuits are simpler with NAND, but neither one seems to have the kind of clear 2ⁿ advantage you'd expect from this asymmetry. Here's NAND's corresponding table:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = r[0] &̄ r[0] (truth table 1100, cost 1) = x &̄ x
    r[ 3] = r[0] &̄ r[1] (truth table 1110, cost 1) = x &̄ y
    r[ 4] = r[1] &̄ r[1] (truth table 1010, cost 1) = y &̄ y
    r[ 5] = r[0] &̄ r[2] (truth table 1111, cost 2) = x &̄ (x &̄ x)
    r[ 6] = r[0] &̄ r[3] (truth table 1101, cost 2) = x &̄ (x &̄ y)
    r[ 7] = r[1] &̄ r[2] (truth table 1011, cost 2) = y &̄ (x &̄ x)
    r[ 8] = r[2] &̄ r[4] (truth table 0111, cost 3) = (x &̄ x) &̄ (y &̄ y)
    r[ 9] = r[3] &̄ r[3] (truth table 0001, cost 2) = a &̄ a where a = x &̄ y
    r[10] = r[3] &̄ r[8] (truth table 1001, cost 5) = (x &̄ y) &̄ ((x &̄ x) &̄ (y &̄ y))
    r[11] = r[5] &̄ r[5] (truth table 0000, cost 3) = a &̄ a where a = x &̄ b and b = x &̄ x
    r[12] = r[6] &̄ r[6] (truth table 0010, cost 3) = a &̄ a where a = x &̄ b and b = x &̄ y
    r[13] = r[7] &̄ r[7] (truth table 0100, cost 3) = a &̄ a where a = y &̄ b and b = x &̄ x
    r[14] = r[8] &̄ r[8] (truth table 1000, cost 4) = a &̄ a where a = c &̄ b and b = y &̄ y and c = x &̄ x
    r[15] = r[6] &̄ r[7] (truth table 0110, cost 5) = (x &̄ (x &̄ y)) &̄ (y &̄ (x &̄ x))
The searching is done with http://canonical.org/~kragen/sw/dev3/abjsearch.py.
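The core of such a search can be sketched in a few lines. This is my own reconstruction, not the actual script, under a simplified cost model that only shares a subterm when both operands are the same node (which happens to reproduce the costs in the NAND table above):

```python
# Fixpoint relaxation over 4-bit truth tables: find the cheapest gate
# count for every two-input function reachable from the inputs.
def search(ops, start):
    cost = {t: 0 for t in start}        # the bare inputs cost nothing
    changed = True
    while changed:
        changed = False
        for a in list(cost):
            for b in list(cost):
                for op in ops:
                    t = op(a, b) & 0b1111
                    # count a shared operand once when a and b are the same node
                    c = cost[a] + 1 if a == b else cost[a] + cost[b] + 1
                    if t not in cost or c < cost[t]:
                        cost[t] = c
                        changed = True
    return cost

nand = lambda a, b: ~(a & b)
costs = search((nand,), [0b0011, 0b0101])
print(len(costs))  # NAND alone reaches all 16 two-input functions
```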



