Computer Architecture – Computers Operating Exclusively on Boolean Algebra

Tags: boolean, computer-architecture, cpu

I was wondering if there are any computers that operate exclusively on boolean operations. For example, no add, sub, mult, or div in the instruction set (although these could be emulated with the appropriate boolean code). Rather, the CPU would work by comparing 2 bits at a time, with instructions like and, or, xor. I realize that no modern computer would operate like this, but have any historical computers had an instruction set something like this?
Related Solutions
Executables do depend on both the OS and the CPU:
Instruction Set: The binary instructions in the executable are decoded by the CPU according to some instruction set. Most consumer CPUs support the x86 ("32-bit") and/or AMD64 ("64-bit") instruction sets. A program can be compiled for either of these instruction sets, but not both. There are extensions to these instruction sets; support for them can be queried at runtime (see the first sketch after this list). Such extensions offer SIMD support, for example. Optimizing compilers may take advantage of these extensions when present, but usually also emit a code path that works without them.
Binary Format: The executable has to conform to a certain binary format, which allows the operating system to correctly load, initialize, and start the program. Windows mainly uses the Portable Executable (PE) format, while Linux uses ELF (see the second sketch after this list).
System APIs: The program may be using libraries, which have to be present on the executing system. If a program uses functions from Windows APIs, it can't be run on Linux. In the Unix world, the central operating system APIs have been standardized as POSIX: a program using only the POSIX functions will be able to run on any conformant Unix system, such as Mac OS X and Solaris.
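As an aside on the instruction-set point: here is a minimal sketch of querying extension support at runtime. It is an illustration under stated assumptions, not a portable API: it only works on Linux (it parses /proc/cpuinfo), and flag names such as "sse2" are the kernel's spellings. A portable program would use the CPUID instruction or a library instead.

def has_cpu_flag(flag):
    # Linux-only sketch: scan /proc/cpuinfo for the given flag name.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return flag in line.split()
    return False

print("SSE2:", has_cpu_flag("sse2"))  # True on virtually all modern x86 CPUs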
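For the binary-format point, both formats announce themselves in their first bytes: ELF files start with 0x7F followed by "ELF", and PE files start with the DOS "MZ" stub. A minimal sketch of how a loader could tell them apart:

def binary_format(path):
    # Identify a container format by its magic bytes.
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"\x7fELF":
        return "ELF"
    if magic[:2] == b"MZ":
        return "PE (or another MZ-based DOS/Windows format)"
    return "unknown"

print(binary_format("/bin/ls"))  # "ELF" on a typical Linux system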
So if two systems offer the same system APIs and libraries, run on the same instruction set, and use the same binary format, then a program compiled for one system will also run on the other.
However, there are ways to achieve more compatibility:
Systems running on the AMD64 instruction set will commonly also run x86 executables. The binary format indicates which mode the program must run in. Handling both 32-bit and 64-bit programs requires additional effort by the operating system.
Some binary formats allow a file to contain multiple versions of a program, compiled for different instruction sets. Such "fat binaries" were encouraged by Apple while it was transitioning from the PowerPC architecture to x86.
Some programs are not compiled to machine code, but to some intermediate representation. This is then translated on the fly to actual instructions, or may be interpreted. This makes a program independent of the specific architecture. Such a strategy was used by the UCSD p-System (a toy illustration follows this list).
One operating system can support multiple binary formats. Windows is quite backwards compatible and still supports formats from the DOS era. On Linux, Wine allows the Windows formats to be loaded.
The APIs of one operating system can be reimplemented for another host OS. On Windows, Cygwin and the POSIX subsystem can be used to get a (mostly) POSIX-compliant environment. On Linux, Wine reimplements many of the Windows APIs.
Cross-platform libraries allow a program to be independent of the OS APIs. Many programming languages have standard libraries that try to achieve this, e.g. Java and C.
An emulator simulates a different system by parsing the foreign binary format, interpreting the instructions, and offering a reimplementation of all required APIs. Emulators are commonly used to run old Nintendo games on a modern PC.
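To make the intermediate-representation point concrete, here is a toy sketch (invented for illustration; it is not the real p-code format): the "bytecode" is architecture-independent data, and only the small interpreter loop is specific to the machine it runs on.

def run(program):
    # Minimal stack-machine interpreter for an invented instruction set.
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())

run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])  # prints 5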
There isn't always a perfect solution, but you have many alternatives to choose from:
Use named arguments, if available in your language. This works very well and has no particular drawbacks. In some languages, any argument can be passed as a named argument, e.g. updateRow(item, externalCall: true) (C#) or update_row(item, external_call=True) (Python). Your suggestion to use a separate variable is one way to simulate named arguments, but it does not have the associated safety benefits (there's no guarantee that you used the correct variable name for that argument).
Use distinct functions for your public interface, with better names. This is another way of simulating named parameters, by putting the parameter values into the function name.
This is very readable, but leads to a lot of boilerplate for whoever writes these functions. It also doesn't cope well with the combinatorial explosion that occurs when there are multiple boolean arguments. A significant drawback is that callers can't set the value dynamically; they must use if/else to call the correct function.
Use an enum. The problem with booleans is that they are called "true" and "false". So instead, introduce a type with better names, e.g. enum CallType { INTERNAL, EXTERNAL }. As an added benefit, this increases the type safety of your program (if your language implements enums as distinct types). The drawback of enums is that they add a type to your publicly visible API. For purely internal functions, this doesn't matter, and enums have no significant drawbacks. In languages without enums, short strings are sometimes used instead. This works, and may even be better than raw booleans, but is very susceptible to typos. The function should then immediately assert that the argument matches a set of possible values. A sketch of all three alternatives follows this list.
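Here is that sketch in Python, using a hypothetical update_row function as the example (all names are invented for illustration):

from enum import Enum

class CallType(Enum):
    INTERNAL = 1
    EXTERNAL = 2

def update_row(item, *, external_call):
    # Option 1: the "*" makes external_call keyword-only, so callers
    # must write update_row(item, external_call=True).
    pass  # the real work would go here

def update_row_external(item):
    # Option 2: distinct functions put the argument's value in the name.
    update_row(item, external_call=True)

def update_row_internal(item):
    update_row(item, external_call=False)

def update_row_typed(item, call_type):
    # Option 3: an enum replaces the unreadable True/False at call sites,
    # e.g. update_row_typed(item, CallType.EXTERNAL).
    update_row(item, external_call=(call_type is CallType.EXTERNAL))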
None of these solutions has a prohibitive performance impact. Named parameters and enums can be resolved completely at compile time (for a compiled language). Using strings may involve a string comparison, but the cost of that is negligible for small string literals and most kinds of applications.
Best Answer
Even nowadays you can find examples of such processors, for example in complex interlocking systems.
However, such processors are not off-the-shelf parts, and production numbers are typically so low that in the end they are implemented in programmable logic (such as an FPGA).
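To illustrate the premise of the question, that arithmetic can be emulated with boolean instructions, here is a sketch of a ripple-carry adder written with nothing but AND, OR, and XOR on single bits. The shifts merely pick bits out of Python integers; the arithmetic itself is pure boolean logic, the same gate network any ALU (or a boolean-only CPU, in software) realizes:

def add(a, b, width=8):
    # Ripple-carry addition built from single-bit &, | and ^ only.
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1                              # bit i of a
        y = (b >> i) & 1                              # bit i of b
        result |= (x ^ y ^ carry) << i                # sum bit
        carry = (x & y) | (x & carry) | (y & carry)   # carry-out (majority)
    return result

print(add(20, 22))  # 42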