Chip select lines are usually asserted low, e.g. !CS (where ! represents the bar over the name). For this reason, address decoders like the 74HCT138 output a 0 on the decoded address line.
But sometimes, an address select line may be generated from some logic such that it is asserted high instead of low. If there are extra pins available on a package, rather than leaving the pin as NC (no connect), the designer of the chip may include a second chip select of the opposite polarity.
The CS1 and !CS2 lines are not necessarily used together.

- If the address select line is asserted low, the designer using the chip can run it into !CS2 and tie CS1 high.
- If the address select line is asserted high, the designer using the chip can run it into CS1 and tie !CS2 low.

Either way, this saves the extra inverter which would have been needed in the circuit if the only chip select were !CS.
Other times, it may be convenient to use both the CS1 and !CS2 lines together. Note that the datasheet for the 74HCT138 chip mentioned above actually provides three enable lines (which behave like chip selects), G1, !G2A and !G2B, which are all ANDed together. Again, the logic designer may elect to use only the low-asserted or high-asserted line(s) and tie the others high or low as described above, or they may have some more elaborate logic that makes use of two or all three of the enable lines.
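The enable gating described above can be sketched in a few lines. This is a minimal, illustrative model (the pin names follow the 74HCT138 datasheet; it is not a complete model of the chip):

```python
def hct138_enabled(g1: bool, g2a_n: bool, g2b_n: bool) -> bool:
    """The decoder is enabled only when G1 is high AND both
    active-low enables !G2A and !G2B are low (all three ANDed)."""
    return g1 and (not g2a_n) and (not g2b_n)

def hct138_outputs(a: int, g1: bool, g2a_n: bool, g2b_n: bool) -> list:
    """Return the eight active-low outputs !Y0..!Y7 for a 3-bit address.
    When enabled, the decoded output goes low; all others stay high."""
    outputs = [1] * 8                    # all outputs deasserted (high)
    if hct138_enabled(g1, g2a_n, g2b_n):
        outputs[a & 0b111] = 0           # decoded address line asserted low
    return outputs

# Address 5, fully enabled: only !Y5 goes low.
print(hct138_outputs(5, g1=True, g2a_n=False, g2b_n=False))
# Any enable in its inactive state disables all outputs (all stay high).
print(hct138_outputs(5, g1=True, g2a_n=True, g2b_n=False))
```

Tying an unused enable to its active level, as described above, corresponds to passing a constant for that argument.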
Disk drive rotation speed is only one of several properties which determine disk performance. The seek speed of the heads, and the number of bits on a track that can be written or read, are also very important. Further, there is a chain of systems which need to be optimised, specifically the host's disk drive interface and the Operating System (OS).
I think rotation speed is somewhat historical; disk drive speeds were settled a very long time ago. I remember we bought a 10,000rpm disk for a PC in the mid '90s, when technology costs were quite different. I suspect that those disk speeds have been retained for sound reasons.
Those numbers are rpm, revs/minute. Convert them to revs/second:
- 4,200rpm = 70rps
- 5,400rpm = 90rps
- 7,200rpm = 120rps
- 10,000rpm = 166.7rps *
- 15,000rpm = 250rps
Those numbers largely look quite simple: round numbers, except one, with quite a significant improvement from one to the next.
A disk drive's rotation speed determines rotational latency: how long, on average, it will take before a block can be read. The faster, the better.
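The rev/s figures above, and the latency they imply, are easy to check. Average rotational latency is half a revolution, since the target sector is on average half a turn away; a quick sketch:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    rps = rpm / 60                       # revolutions per second
    return 0.5 / rps * 1000              # half a turn, converted to ms

# The speeds from the list above:
for rpm in (4200, 5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm = {rpm / 60:6.1f} rps, "
          f"avg latency {avg_rotational_latency_ms(rpm):5.2f} ms")
```

So moving from 7,200rpm to 10,000rpm shaves roughly 1.2ms off the average latency, which illustrates how modest the gain from a small speed bump would be.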
However, it also translates to the speed that data can be read or written through the disk drive hardware interface. I think it makes sense to have a relatively small number of different transfer speeds to ensure the host disk drive interface can 'keep up' reliably and isn't too expensive. Also, the disk's electronics must support the data transfer speed.
If that is the case, then a small speed bump on rotational speed isn't helpful.
Either the data isn't read or written any quicker (so the host disk drive interface is okay), in which case the recording density is lower than on a slower disk: a slightly faster disk stores less data per track. That seems like a poor product offering for a small improvement in latency.
Or the data is read and written more quickly, so the machine needs a faster host disk drive interface. It makes sense to offer only a few different disk drive interfaces, with a small number of tested, guaranteed speed ranges. If I were a host disk drive interface manufacturer, I would prefer to test at a few specific speeds, and guarantee those, rather than test every possible disk speed. So a small speed bump may require a more expensive disk drive interface, and it may also need a way to 'squirt' the data out faster than it was read.
So for a small speed bump, either the electronics seems to have gotten more expensive, or the disk stores less data. Neither seems like a useful product.
Worse, in the 'olden days' the operating system was fully responsible for deciding where a file's disk blocks went, in order to get maximum performance. The OS might not lay down blocks on a track sequentially, which would give the highest-speed write or read. Instead it might leave a gap between blocks, or even interleave a file's blocks within a track, so that the CPU had time to deal with the application reading or writing the file. Having a small number of disk speeds would make it simpler for the OS designer to measure and optimise performance.
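The interleaving idea can be sketched as follows. The parameters are hypothetical (17 sectors per track and a 2:1 interleave factor, numbers typical of early PC drives, are assumed for illustration): logical block i is placed every Nth physical slot, so the CPU gets roughly one sector time of processing between consecutive transfers.

```python
def interleave_map(sectors_per_track: int, factor: int) -> list:
    """Return physical_slot -> logical_sector for an N:1 interleave."""
    physical = [None] * sectors_per_track
    slot = 0
    for logical in range(sectors_per_track):
        while physical[slot] is not None:           # skip occupied slots
            slot = (slot + 1) % sectors_per_track
        physical[slot] = logical
        slot = (slot + factor) % sectors_per_track  # step by the factor
    return physical

# 17 sectors, 2:1 interleave: consecutive logical sectors sit two slots apart.
print(interleave_map(17, 2))
```

Reading logical sectors 0, 1, 2, ... in order then never requires waiting a full revolution, as long as the CPU can process one sector in less than one sector time.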
*) The obvious round number for 160rps is 9,600, but that looks a lot like a common baud rate, so marketing probably want to avoid that, and 10,000 looks so much better :-)
Best Answer
A computer has several "layers" of memory. Each layer is faster, and smaller, than the one below it. When the processor asks for a block of memory, it first looks in the top layer (very fast in terms of read speed, but very small as well). If the needed block is there, the processor reads it. If not, the first layer looks in the second layer and loads the block if it's present, and the processor then reads it from the first layer. If not, the second layer looks in the third, and so on.
This is why there is almost no difference between reading one byte and reading several at the same time: the processor reads memory in blocks of bytes, not byte by byte. However, if the processor asks for two pieces of data that live in different blocks, the time required will differ, depending on which layer both blocks have to come from.
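The block idea above can be made concrete. A minimal sketch, assuming a 64-byte cache line (a common size, but an assumption here): bytes in the same line cost one fill from the next layer; bytes in different lines cost one fill each.

```python
LINE_SIZE = 64  # assumed cache-line size in bytes

def cache_line(address: int) -> int:
    """Index of the cache line an address falls in."""
    return address // LINE_SIZE

def fills_needed(addresses) -> int:
    """Number of distinct lines the accesses touch, i.e. the number of
    fills from the next layer, assuming a cold cache."""
    return len({cache_line(a) for a in addresses})

print(fills_needed([100, 101, 102, 103]))  # same 64-byte line -> 1 fill
print(fills_needed([100, 200]))            # two different lines -> 2 fills
```

So four adjacent bytes cost the same one fill as a single byte, while two bytes only 100 addresses apart can cost twice as much.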