Flow control is a general term for a means by which an entity that wants to push information to another can avoid sending it faster than the recipient can accept. One of the earliest forms of flow control that still exists in common usage is commonly called xon/xoff; it was used in communication between teletypes, in situations where one teletype was using its paper-tape reader to send data to another teletype. Although a teletype printer could usually keep up with a paper tape reader (both operated at ten characters/second), that would be contingent upon things like an adequate supply of paper. An operator who noticed that it was necessary to replace the paper in a teletype which was receiving a transmission could type Control-S to send an XOFF character, which would ask the paper-tape reader at the other end to stop. After the paper was replaced, the operator could type Control-Q to restart the paper-tape reader. Those characters are still used to this day, although the far end of the connection will usually be a computer rather than a tape reader.
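The behavior described above can be sketched in a few lines of Python. XOFF is the ASCII DC3 character (0x13, Control-S) and XON is DC1 (0x11, Control-Q); the class name and structure here are purely illustrative, not any real serial API.

```python
# Minimal sketch of XON/XOFF (software) flow control.
# XOFF (DC3, Ctrl-S) asks the sender to pause; XON (DC1, Ctrl-Q) resumes.
XON, XOFF = 0x11, 0x13

class XonXoffSender:
    """Sends queued bytes, pausing whenever the far end has sent XOFF."""
    def __init__(self, data):
        self.pending = list(data)
        self.paused = False

    def receive_control(self, byte):
        # Control characters arriving from the receiver adjust our state.
        if byte == XOFF:
            self.paused = True
        elif byte == XON:
            self.paused = False

    def next_byte(self):
        # Return the next data byte, or None while paused or out of data.
        if self.paused or not self.pending:
            return None
        return self.pending.pop(0)

sender = XonXoffSender(b"HELLO")
out = [sender.next_byte(), sender.next_byte()]  # sends H, E
sender.receive_control(XOFF)                    # operator typed Ctrl-S
out.append(sender.next_byte())                  # None: transmission paused
sender.receive_control(XON)                     # operator typed Ctrl-Q
out.append(sender.next_byte())                  # resumes with L
```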
RTS/CTS protocol is a method of handshaking which uses one wire in each direction to allow each device to indicate to the other whether or not it is ready to receive data at any given moment. One device sends on RTS and listens on CTS; the other does the reverse. A device should drive its handshake-output wire low when it is ready to receive data, and high when it is not. A device that wishes to send data should not start sending any bytes while the handshake-input wire is low; if it sees the handshake wire go high, it should finish transmitting the current byte and then wait for the handshake wire to go low before transmitting any more.
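The per-byte rule just described can be sketched as follows. This is a simulation of a compliant sender, not a real serial driver; the function name and the list of handshake samples are illustrative assumptions.

```python
# Sketch of the RTS/CTS rule described above: a compliant sender samples its
# handshake input before starting each byte; a byte already started is always
# completed. (Names here are illustrative, not a real serial API.)
def send_with_handshake(data, cts_samples):
    """cts_samples[i] is True if the handshake input permits starting byte i
    (wire driven low = ready). Returns the bytes actually transmitted before
    the first not-ready sample."""
    sent = []
    for byte, ready in zip(data, cts_samples):
        if not ready:
            break  # do not start another byte while the wire is high
        sent.append(byte)  # a byte that has started always finishes
    return bytes(sent)

# Receiver deasserts readiness before the third byte would start:
print(send_with_handshake(b"ABCD", [True, True, False, True]))  # b'AB'
```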
Note that while devices should ideally never send more than a byte after their handshake input goes high (if the line goes high just as they start transmitting a character, they must allow that character to be transmitted completely), many PC serial ports do not comply with this even when handshaking is enabled. The serial ports allow software to detect the state of the incoming handshake wire, and expect software to decide when data should be enqueued for transmission. Unfortunately, the only way to achieve good performance with a serial port is to enqueue data for transmission slightly in advance of when it will actually be sent, and many PC serial ports will always transmit any queued-up data as fast as they can without regard for the handshake wires. Consequently, it's not uncommon for PC serial ports to send a dozen or so characters even after they've been asked to wait.
The MCP23S17 is really meant to be connected to a microcontroller. I have used it successfully in a Blackfin-based project. It has a number of internal registers, just like the GPIO ports on a typical microcontroller. Each 8-bit port has a direction register, an input register and an output register, plus registers for input polarity and interrupt-on-change. There's also a global configuration register.
It does default to all-inputs at power-up, so if that's all you need, then you just need to create a state machine that reads the two input registers. Note that you need to supply a chip address byte followed by a register address byte for each read cycle.
Also, you need to be aware that this chip has the funky feature of having two different address maps for the registers, depending on the setting of the "BANK" bit. Study this part carefully; it's pretty confusing.
The BANK bit is zero on power-up, so the two registers you want, GPIOA and GPIOB, are found at addresses 0x12 and 0x13, respectively. Therefore, to read them both, you need to do two 24-clock SPI cycles:
CS: 1111000000000000000000000000111111110000000000000000000000001111
MOSI: xxxx0100aaa10001001000000000xxxxxxxx0100aaa10001001100000000xxxx
MISO: xxxx0000000000000000AAAAAAAAxxxxxxxx0000000000000000BBBBBBBBxxxx
- "aaa" represents the chip address.
- "AAAAAAAA" represents the data from port A.
- "BBBBBBBB" represents the data from port B.
Note that everything is MSB-first.
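The two transactions above can be sketched as a small frame builder. Per the datasheet, the control byte is 0 1 0 0 A2 A1 A0 R/W with R/W = 1 for a read, and GPIOA/GPIOB sit at 0x12/0x13 when BANK = 0; the function name is illustrative.

```python
# Sketch of the two 24-clock MCP23S17 read transactions shown above.
# Control byte format (per the datasheet): 0 1 0 0 A2 A1 A0 R/W, R/W=1 = read.
GPIOA, GPIOB = 0x12, 0x13  # register addresses with BANK = 0

def mcp23s17_read_frame(chip_addr, reg):
    """Return the 3 MOSI bytes for one read cycle: control byte, register
    address, then a dummy byte clocked out while MISO returns the data."""
    control = 0x40 | ((chip_addr & 0x7) << 1) | 0x01  # 0100aaa1, MSB first
    return bytes([control, reg, 0x00])

# With chip address 0, these are the two frames from the timing diagram:
print(mcp23s17_read_frame(0, GPIOA).hex())  # prints 411200
print(mcp23s17_read_frame(0, GPIOB).hex())  # prints 411300
```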
Are you referring to simulation or synthesis performance?
Simulation computational complexity is dominated by dynamically elaborated subprograms plus simulator overhead; every expression is evaluated through operators (which are subprograms) or basic operations (also functions).
You'll also find that the more abstractly (in control-flow terms) a design model can be described, the faster it will simulate.
The amount of work is roughly the number of concurrent control bits being evaluated times the number of elaborated assignments.
The entire idea of synthesis is to avoid having to do minimization and mapping yourself. Come up with a couple of equivalent test cases and time them - the resulting logic will be the same.
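The point above can be illustrated outside VHDL. Below is a Python stand-in (not the answer's VHDL examples): a minimized sum-of-products description and a readable arithmetic description of the same one-bit full adder, checked exhaustively for equivalence. Synthesis would reduce both to the same logic; only the second makes the intent obvious.

```python
from itertools import product

# Two equivalent descriptions of a one-bit full adder: minimized Boolean
# expressions versus a readable arithmetic (control-flow style) description.
def adder_minimized(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return cout, s

def adder_readable(a, b, cin):
    total = a + b + cin           # intent is plain: add three bits
    return total // 2, total % 2  # carry-out, sum

# Exhaustive equivalence check over all 8 input combinations:
assert all(adder_minimized(*v) == adder_readable(*v)
           for v in product((0, 1), repeat=3))
```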
As far as expressing code in minimized terms, try this:
It can't be maintained without resorting to other documentation showing what operations S defines and why there are particular intermediary terms. From the LRM, IEEE Std 1076-2008, 1.2 Purpose:
How much documentation is inherent in the above VHDL code?
Compare the above minimized description to a control-flow expression (VHDL code for a 74-series ALU, the 74LS381):
(And I'd personally have replaced BminusA, etc. with the expressions on the right-hand side of their assignments in the assignment to f. Finding an equivalent was fortuitous; around 20 years separate their authorship.) Without validating both, I'd expect they can both produce the same-complexity logic following synthesis. The amount of grunt work synthesis tools perform isn't the controlling factor in EDA today. In addition to design and documentation, verification is increasingly important (also from the LRM, same paragraph):
Now ask yourself from which of the two above forms can the person stuck doing verification more easily determine what the expected result should be?
You might notice there are two errors in the second example. The difference is that there is almost sufficient information to fix them from the design description alone. What's missing is a more complete description of the 8 operations:
A hardware description is about more than just the logic.
The minimized-expression version was written at a time when synthesizing arithmetic functions carried higher licensing costs. Since then, CPU performance and memory sizes have made that time distinction (expressed in value added) increasingly less significant.
We tended to document better for ASIC targets than is generally done for FPGAs; you could look things up in the design specification, and there were data books of datasheets with schematics.