If we can access (64 KB × 4) 256 KB of memory at a time on the 8086, and we can move those segments around, what is the use of the remaining memory?


If we can access (64 KB × 4) 256 KB of memory at a time on the 8086, and we can move those segments around, what is the use of the remaining memory? Some say that we can move the segments around, but what is the benefit of doing so? We still can't use the whole memory…

Best Answer

Context Beforehand

The earlier 8080A/8085 processor supported only a 16-bit address bus. At first, this wasn't much of a limitation as the cost of memory was quite high and many could not afford (nor at the time saw much need for) more than 65k. In the few cases where someone was willing to work for it, they would implement memory banking by providing an additional "card" on a modified bus design that supported more address bits. But these address bits were supplied by a simple 74xx latch that was "written to" by software. This was a paging register.

Since the 8080A/8085 knew nothing at all about this address expansion on the bus and only knew about the lower-order 16 bits it was driving, changing the value in the latch instantly switched the bus to a different block of 65k. This meant there had to be code at the current address that could continue running correctly across the moment the paging latch was modified.
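The paging-latch scheme amounts to very simple arithmetic. Here's a sketch of the idea in Python (the 2-bit latch width, giving four banks, is a hypothetical example; real boards varied):

```python
# Sketch of 8080-era external bank switching: the CPU drives only 16
# address bits; a software-written 74xx latch supplies the upper bits.
# The latch width here (2 bits -> four 65k banks) is a made-up example.

BANK_BITS = 2          # hypothetical latch width
page_latch = 0         # "written to" by software, e.g. via an I/O port

def set_bank(n):
    """Software writes the latch; the CPU itself never sees this value."""
    global page_latch
    page_latch = n & ((1 << BANK_BITS) - 1)

def physical(addr16):
    """The latch bits are simply wired above the CPU's 16 address lines."""
    return (page_latch << 16) | (addr16 & 0xFFFF)

set_bank(0)
print(hex(physical(0x1234)))   # 0x1234
set_bank(1)
print(hex(physical(0x1234)))   # 0x11234: same CPU address, different 65k block
```

Note that the instant `set_bank` takes effect, the CPU's next instruction fetch comes from the new bank — which is exactly why the code straddling the switch had to be arranged so carefully.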

A variety of ideas were tried. One was to map the same, but much smaller, memory to all pages. Another was to overlap the address spaces using an additional adder.

But the ideas were clumsy, a pain to manage, and widely varied. Compiler vendors were faced with such a variety of home-brew approaches that it pretty much killed any serious consideration of handling all of them. And no single one of them was a large enough market to bother with.

But there was also a growing need for more. Partly this was because of the advent of VisiCalc, at the time a very innovative concept (one that would later be "borrowed" and turned into Excel by Microsoft.) VisiCalc was the software program that finally brought small businesses into the microcomputer marketplace and made successes of many hardware companies (especially Apple, which was the first computer it supported.) But VisiCalc was also a horrible memory pig. And so bigger memory systems became very important, very quickly, after VisiCalc arrived.

(Short personal note: When VisiCalc first arrived, I noticed for the first time lots of "business suits" showing up in a nearby Apple store. They hadn't been there before, because all the Apple II did until then was display pretty colors and play games, and it also "cost a lot more" than the Altair 8800 or the IMSAI 8080 at the time. But with VisiCalc as pre-packaged and very easy to use software, there was suddenly a nice software "plug" for a big business "hole." And all of a sudden, those poorly selling "gamer" Apple II devices were selling like hotcakes. The Altair and IMSAI computers weren't supported by VisiCalc and, as soon as the IBM PC arrived with the 8088 CPU and VisiCalc was rapidly ported to it, there was no longer any need at all for Altair or IMSAI computers and they rapidly died off.)


A wider address bus was easy. Anyone can add a few more lines to it. (Even I can!) The only question for Intel was, "What is the next logical step beyond the 8085?" And here they decided to leverage the idea of overlapping a lot of 65k memory areas over each other. What used to be an external latch would be brought into the processor as a "segment register," instead. And instead of just one common latch they would provide one for code, one for stack, and one for data. Plus an extra one for data, since a common need was to move data around from place to place (from a source to a destination.) So a total of four separate latches: CS, SS, DS, and ES.

The 8088/8086 processor supported a 20-bit address bus. This allowed it access to one megabyte (2^20 bytes) of memory. (The processor also supported a separate I/O address space with separate bus transactions.)

To keep it simple in hardware while at the same time making it relatively easy to run small programs without worrying about these new latches (if you didn't want to), they arranged things so that these latches (to be called "segment registers") represented the upper 16 bits of a 20-bit address, with the lower 4 bits defaulting to zero. To this, they'd add an offset determined by the executing instruction. The regular registers (those whose content could be treated as a full 16 bits, anyway) would provide the lower 16 bits, which would simply be added to the associated segment register. And different registers would be automatically associated with a segment register, depending upon usage. (An assumption that could be overridden, explicitly.) So the SP and BP registers would automatically associate themselves with the SS segment register for the purposes of computing a 20-bit address. The instruction pointer, also 16 bits, would associate with the CS segment register. But the remaining registers, such as the BX, SI, and DI registers, would associate with the DS segment register. (In a few move-block instructions one register would associate with DS and another with ES.) And as I mentioned, explicit overrides were supported for those special cases "off the beaten path." (Often needed by the operating system that loaded and executed programs.)
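The address computation described above boils down to "segment times 16, plus offset, truncated to 20 bits." A minimal sketch:

```python
def phys(segment, offset):
    """8086 real-mode physical address: the segment register shifted
    left 4 bits (so its low 4 address bits are zero), plus the 16-bit
    offset, truncated to the 20-bit address bus."""
    return ((segment << 4) + (offset & 0xFFFF)) & 0xFFFFF

print(hex(phys(0x1000, 0x0000)))  # 0x10000: base of the segment
print(hex(phys(0x1000, 0xFFFF)))  # 0x1ffff: top of that 65k segment
print(hex(phys(0xFFFF, 0x0010)))  # 0x0: the sum wraps around the 20-bit bus
```

The last line shows a real quirk of the 8086: a segment base near the top of memory plus a large enough offset silently wraps back to address zero, because there is no 21st address line to carry into.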

Bits and Pieces

The instruction set supported, for example, a jump instruction that would only modify the instruction pointer but would not modify the CS segment register. But another "far" jump instruction would modify both at once. Conditional branches might "adjust" the instruction pointer by using a relative value that was added to, or subtracted from, the instruction pointer, too. (Relative branches are useful.)

The far jump allowed you to change from one block of 65k memory to another. But that doesn't mean these two memory blocks couldn't overlap. They could. For example, you could be running code at 0x0010:0x0100 -- which is at address 0x00200 -- and then jump to 0x0020:0x0010 -- which is at address 0x00210. That is not very far away. But you've also changed the memory segment from 0x0010 to 0x0020. So you can still run the same code (mostly), but you can now run code at slightly higher addresses than before. Your old base address used to be 0x00100 and the new base address for the 65k segment of memory is now 0x00200. Even though you are running code that is very close to where you were running before.
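The arithmetic in this far-jump example can be checked directly with the segment-times-16-plus-offset rule:

```python
def phys(segment, offset):
    """8086 real-mode physical address computation."""
    return ((segment << 4) + offset) & 0xFFFFF

# The far jump in the example: nearby physical addresses, different segments.
assert phys(0x0010, 0x0100) == 0x00200   # where we were running
assert phys(0x0020, 0x0010) == 0x00210   # where we jumped to

# The reachable 65k window's base moved forward by only 0x100 bytes:
assert (0x0010 << 4) == 0x00100
assert (0x0020 << 4) == 0x00200
print("far-jump example checks out")
```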

Memory Models

It was one thing to create the hardware. It was entirely another thing to support all this with compilers, linkers, and assemblers. There were hundreds of ways to use all this capability. But that bewildering array of possibilities had to be winnowed down to a small set that people could practically use.

So they decided to invent just a few "standard" memory models that all the compilers and assemblers and linkers were supposed to support.

  1. The tiny model where all of the code and data and stack were in the same 65k memory segment. The CS, DS, ES, and SS segment registers would all be set to the same value and would NOT change throughout the execution of the program. This would be the same as the "olden days" when you only had at most 65k memory to work with.
  2. The small model where the code is allowed to be in a different memory segment (but only one segment, at most) than the data and stack. But the data and stack had to be in the same memory segment, so SS=DS here. (But again, also only one segment for stack and data. So two segments at most in this model.)
  3. The medium model where the code is allowed to reside in more than one memory segment. The compiler would have to make choices about how to get from one code segment to another. But the data and stack had to be in the same memory segment, so again SS=DS here.
  4. The compact model where the code sits in a single segment (like the small model) but now where the data can sprawl over more than one segment. (The stack is still limited to a single segment.) A single data array was still limited to a single segment (code would NOT be generated that could handle arrays larger than 65k byte.)
  5. The large model where both the code and data can sprawl across many segments. However, a single data array was still limited to a single segment (code would NOT be generated that could handle arrays larger than 65k byte.)
  6. The huge model which is the same as the large model except that the compilers were required to support single arrays that were larger than 65k byte.
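The difference between the large and huge models comes down to pointer arithmetic that must carry past a 65k boundary. One common technique (the details varied by compiler vendor, so treat this as an illustrative sketch, not any particular compiler's scheme) was to "normalize" a huge pointer so its offset stays below 16, folding everything else into the segment:

```python
def normalize(segment, offset):
    """Fold all but the low 4 bits of a pointer's offset into its
    segment, so arithmetic on a 'huge' pointer can cross 65k
    boundaries without the 16-bit offset overflowing."""
    linear = ((segment << 4) + offset) & 0xFFFFF  # 20-bit physical address
    return linear >> 4, linear & 0xF

# A pointer at the very top of its segment: after normalizing, it
# addresses the same byte but the offset is tiny, so it can be
# incremented freely and compared against other normalized pointers.
seg, off = normalize(0x1000, 0xFFFF)
print(hex(seg), hex(off))  # 0x1fff 0xf
```

Normalized pointers also make comparisons meaningful: two un-normalized seg:off pairs can name the same byte while comparing unequal, which normalization fixes.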

Keep in mind that the software concept of a "segment" is not quite the same as the Intel hardware concept of a "segment." A software segment could be smaller than 65k byte and was a "book-keeping" concept used by the compilers and assemblers to generate workable code. The hardware segment was always 65k byte in size (the offset was a full 16 bits.)

Final Notes

The hardware segment had a granularity of 16 bytes (the lower four bits were zero.) If you could "increment" a segment register, all you would have done is moved the reachable 65k of memory forward in memory by 16 bytes. This means that it would almost completely overlap the prior memory segment. An "object" sitting in memory has many different segmented addresses. For example, an object located at address (these are 20 bit addresses, remember) 0x06700 can be equally addressed by these segmented addresses (and many more): 0x0670:0x0000, 0x0300:0x3700, and 0x0000:0x6700. Those are all the same physical address. The main difference is where these memory segments physically start and end. That's all.
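The aliasing in this last example is easy to enumerate. A small sketch that verifies the three seg:off pairs given above, and counts how many such aliases address 0x06700 exist in total:

```python
def phys(segment, offset):
    """8086 real-mode physical address computation."""
    return ((segment << 4) + offset) & 0xFFFFF

target = 0x06700

# The three aliases given in the text all land on the same byte:
for seg, off in [(0x0670, 0x0000), (0x0300, 0x3700), (0x0000, 0x6700)]:
    assert phys(seg, off) == target

# Every segment whose 16-byte-granular base lies at or below the target
# (and within 65k of it) yields one alias:
aliases = [(s, target - (s << 4)) for s in range(0x10000)
           if 0 <= target - (s << 4) <= 0xFFFF]
print(len(aliases))  # 1649 (0x671) distinct segment:offset pairs
```

So for this particular address there are 0x671 different segmented names for one physical byte — which is exactly the overlap the answer has been describing all along.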