Electronic – How does the Intel 8086/88 know when to assert the IO/M signal

intel, io, x86

Consider an Intel 8088 processor with a standard, parallel RAM and ROM implementation that also supports address/data bus access to various external peripherals like analog-to-digital converters (ADCs), UARTs, and more.

I'm having trouble designing a chip-select decoding scheme that I'm confident will work. Although I could decode all 20 address lines with logic gates, that approach adds significantly more traces and ICs to the PCB, along with more opportunities for my design to be wrong. I'd like to use the IO/M pin to make the decoding simpler to build and debug.

The 8086/88 datasheet describes the basic function of the IO/M pin but doesn't explain the underlying mechanism behind it. I understand that a logic low on the pin indicates a memory access and a logic high indicates an I/O access, but I don't understand where the processor gets this information. The memory map I'm trying to work with reserves 2kB of address space for peripherals. Each ADC requires 8 bytes to address its individual analog inputs, and the UART occupies a single byte.

0x00000 - 0x7FFFF : SRAM Chip 0 (512kB)
0x80000 - 0xDFFFF : SRAM Chip 1 (384kB)
------------------------------
0xE0000 - 0xE0007 : ADC 0  (8 bytes)
0xE0008 - 0xE000F : ADC 1  (8 bytes)
0xE0010           : UART 0 (1 byte)
------------------------------
0xE0800 - 0xFFFFF : Flash ROM (126kB)
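
To make the decoding burden concrete, here is roughly what a full 20-bit decode of this map works out to, written as a C model of the combinational logic (purely illustrative: the names like `cs_sram0` are mine, everything is treated as active-high for readability, and a real board would implement these equations in gates or a PLD, not software):

```c
#include <stdint.h>
#include <stdbool.h>

/* Software model of a full-address decoder for the map above.
   addr is the 20-bit address A19..A0; each function models one
   chip-select output (real selects are usually active-low). */

static bool cs_sram0(uint32_t addr) {            /* 0x00000 - 0x7FFFF */
    return addr <= 0x7FFFF;
}
static bool cs_sram1(uint32_t addr) {            /* 0x80000 - 0xDFFFF */
    return addr >= 0x80000 && addr <= 0xDFFFF;
}
static bool cs_adc0(uint32_t addr) {             /* 0xE0000 - 0xE0007 */
    return (addr & 0xFFFF8) == 0xE0000;          /* top 17 bits match */
}
static bool cs_adc1(uint32_t addr) {             /* 0xE0008 - 0xE000F */
    return (addr & 0xFFFF8) == 0xE0008;
}
static bool cs_uart0(uint32_t addr) {            /* 0xE0010 only */
    return addr == 0xE0010;
}
static bool cs_flash(uint32_t addr) {            /* 0xE0800 - 0xFFFFF */
    /* 0xE0011 - 0xE07FF, the rest of the 2kB window, is unmapped */
    return addr >= 0xE0800 && addr <= 0xFFFFF;
}
```

Every one of those comparisons is address lines and gates on the PCB, which is exactly the complexity I'm trying to avoid.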

Since memory maps can be arbitrary, how does the processor magically know when it's trying to access memory vs. I/O devices? By extension, how does the Intel 8088 know what to do with its IO/M pin if I could just as easily rearrange the address space above?

Best Answer

Thanks to my suspicions based on a past (and future?) life in Z80 ASM and a quick search for 8086 io, I found a handy synopsis of 8086 I/O at this page by Dr. Jim Plusquellic (hooray for free lecture notes!) - http://ece-research.unm.edu/jimp/310/slides/8086_IO1.html - which I'll now try to... synopsise even more handily.

As his page explains, the 8086 has two available modes of I/O:

  • memory-mapped I/O, in which peripherals occupy addresses within the normal 1MB memory space and are reached with ordinary loads and stores; and
  • isolated I/O, in which peripherals live in a separate 64kB I/O address space.

In the latter case, a special set of instructions must be used - IN and OUT (joined by INS and OUTS from the 80186 onwards). These cause corresponding signals to be output on the M/IO (Memory or I/O) status pin and the RD/WR (Read/Write) strobes. That page indicates the difference and how these can be wired up:

[Diagram from the linked notes: the M/IO status and read/write strobes gated together externally to derive separate memory and I/O read/write controls]

As the Prof. explains, using this mode avoids using up normal memory ranges for I/O, with the caveats that:

  • it increases circuit complexity: you must wire up the mentioned pins to disambiguate between the two possible meanings of an address and direct each to the right destination. In doing so, you conceptually create the 'virtual pins' IORC and IOWC (I/O Read/Write Control) shown in the diagram (and sketched as logic equations after this list).
  • it limits the instructions you can use for I/O to the 4 mentioned, rather than letting you do all kinds of acrobatics with normal memory loads/stores/etc., as you could under memory-mapped I/O (assuming the target device will tolerate them!)
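
For completeness, those 'virtual pins' amount to a few gates. Here is a sketch of the equations as a C truth-table model (my naming follows Intel's 8288 bus-command signals MRDC/MWTC/IORC/IOWC; everything is modeled active-high for readability, whereas the real 8086 signals are active-low):

```c
#include <stdbool.h>

/* Truth-table model of the glue logic that splits the bus into the four
   "virtual" command signals, using the M/IO status pin and the
   read/write strobes. Here true = asserted. */

typedef struct {
    bool m_io;  /* M/IO pin: true = memory cycle, false = I/O cycle */
    bool rd;    /* read strobe asserted  */
    bool wr;    /* write strobe asserted */
} bus_state;

static bool mrdc(bus_state s) { return  s.m_io && s.rd; } /* memory read  */
static bool mwtc(bus_state s) { return  s.m_io && s.wr; } /* memory write */
static bool iorc(bus_state s) { return !s.m_io && s.rd; } /* I/O read     */
static bool iowc(bus_state s) { return !s.m_io && s.wr; } /* I/O write    */
```

In hardware this is typically just a handful of gates or half of a dual 2-to-4 decoder.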

So, the reason the 8086 and friends know when to assert IO/M is... because you tell them when, by using one of their dedicated I/O instructions.
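
And for a software-side view of that: here is a minimal sketch in C for an x86 target (GCC-style inline assembly, so illustrative rather than period-accurate 8086 toolchain code; the function names are mine) contrasting a memory-mapped access with the dedicated I/O instructions:

```c
#include <stdint.h>

/* Memory-mapped I/O: an ordinary store. The CPU runs a normal memory
   bus cycle, so M/IO stays in its "memory" state and only the memory
   chip-selects should respond. */
static void mmio_write(volatile uint8_t *dev_reg, uint8_t value) {
    *dev_reg = value;
}

/* Isolated I/O: reachable only through IN/OUT. The OUT instruction runs
   an I/O bus cycle, driving M/IO to its "I/O" state and placing the
   16-bit port number on the address bus. */
static void port_write(uint16_t port, uint8_t value) {
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

static uint8_t port_read(uint16_t port) {
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}
```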