Since you mentioned the 8086 processor, I'll explain with reference to it.
The 8086 is a 16-bit processor. A 16-bit data bus means there are 16 physical lines to carry data in and out (the ALU width often matches this), and its 20-bit address bus means there are 20 physical lines to carry out the address. These lines are nothing but the 8086's pins.
To save pins, the 16 data lines and 16 of the address lines are multiplexed: during the first part of a machine cycle the shared lines act as the address bus, and later they act as the data bus. This reduces the pin count.
Now to your main question:
Case 1: the 8086 wants to access a byte in memory
The 8086 can address 2^20 locations (the size of the address bus), and each location holds one byte (8 bits). That gives 2^20 = 2^10 × 2^10 = 1024 (1K) × 1024 (1K) = 1M locations, i.e. 1 MB of addressable memory.
Memory is like a USB drive (2 GB, 4 GB, etc.): each memory location has an address and holds one byte (8 bits). A 1 MB memory chip therefore has 1024K memory locations, each with an 8-bit capacity.
If the 8086 wants to access the byte at some address, say 0x20002, it first sends that address to the 1 MB memory chip, which is connected to the 8086's address and data buses. The read looks like this:
- the 8086 puts the address on the address bus to the 1 MB memory chip
- the 1 MB memory chip receives the address (0x20002) on its address pins
- the 8086 waits a short while to sync with the memory's access time (the memory may be slower than the processor)
- the 1 MB memory chip drives the requested byte onto the data bus lines to the 8086
- the 8086 reads in the received byte
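The read cycle above can be sketched in C, modeling the 1 MB chip as a byte array and the whole bus transaction as an indexed load. The function names and the array are illustrative, not real hardware:

```c
#include <stdint.h>

static uint8_t memory[1 << 20];          /* 2^20 locations, one byte each */

/* One read cycle: CPU drives a 20-bit address, the chip decodes it
   (wait states elided here) and returns the byte on the data bus. */
uint8_t bus_read_byte(uint32_t address)
{
    address &= 0xFFFFF;                  /* only A19..A0 exist */
    return memory[address];
}

/* The corresponding write cycle. */
void bus_write_byte(uint32_t address, uint8_t value)
{
    memory[address & 0xFFFFF] = value;
}
```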
There is another case, where the memory is built from a stack of chips, e.g. two 512 KB chips totalling 1 MB (512 KB + 512 KB).
In this case each memory chip has 19 address lines (512K = 512 × 1024 = 2^9 × 2^10 = 2^19) and one important extra pin called CHIP SELECT, which enables the chip.
Both chips' address lines are connected to the 8086 address bus. But the 8086 has 20 address lines, so the remaining line is connected to the chip-select pins of the 512 KB chips, enabling one of them.
That one address line has two states, 0 or 1:
- 0 selects memory chip A (512 KB), together with the 19 address lines
- 1 selects memory chip B (512 KB), together with the 19 address lines
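The two-chip split can be sketched like this, with A19 acting as the chip select and A18..A0 going to both chips (the struct and function names are illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     selected;   /* state of the CHIP SELECT pin */
    uint32_t local_addr; /* 19-bit address seen by the chip */
} ChipInputs;

/* Decode one 20-bit CPU address into the pins of chip A (A19 == 0)
   and chip B (A19 == 1). */
void decode(uint32_t address, ChipInputs *chip_a, ChipInputs *chip_b)
{
    bool a19 = (address >> 19) & 1;
    chip_a->selected = !a19;             /* CS of chip A: A19 == 0 */
    chip_b->selected =  a19;             /* CS of chip B: A19 == 1 */
    chip_a->local_addr = chip_b->local_addr = address & 0x7FFFF; /* A18..A0 */
}
```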
Another case: a memory chip with a capacity of 256 bytes (2^8). It has 8 address lines and a chip-select line.
Here only 8 of the 8086's address lines are connected to the 256-byte memory chip; the remaining lines drive its chip-select pin, for example through a NOR gate whose output is 1 only if all its inputs are 0.
With the 8086's 20 address lines (A19-A0):
- address lines A7-A0 are connected to the 256-byte memory chip
- the remaining address lines, A19 to A8, are connected to the chip-select line of the memory chip through a 12-input NOR gate whose output is 1 only if all its inputs are 0 (i.e. A19-A8 = 0)
- this procedure is called address decoding
The address range the chip itself sees is A7-A0 = 0x00 to 0xFF, but because the remaining 8086 address lines go through the NOR gate to the chip select, the final address range is 0x00000 to 0x000FF. In this way the 256-byte memory chip is addressed.
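The NOR-gate decoder amounts to a single test on the high address bits; a minimal sketch (the function name is illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

/* The 256-byte chip is selected only when A19..A8 are all zero,
   i.e. for addresses 0x00000 through 0x000FF. */
bool chip_select_256(uint32_t address)
{
    uint32_t high_bits = (address >> 8) & 0xFFF;  /* A19..A8 */
    return high_bits == 0;  /* 12-input NOR: 1 only if all inputs are 0 */
}
```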
In this way the processor accesses data from a memory chip with the help of the address and data buses.
Case 2: some device wants to send data to the 8086
This is too complex to explain here, but I'll add it to this answer soon.
Finally, about the 8255: please refer to the Intel 8255 Wikipedia page.
If there is no dynamic dispatch (polymorphism), "methods" are just sugared functions, perhaps with an implicit additional parameter. Accordingly, instances of classes with no polymorphic behavior are essentially C structs for the purpose of code generation.
For classical dynamic dispatch in a static type system, there is basically one predominant strategy: vtables. Every instance gets one additional pointer that refers to (a limited representation of) its type, most importantly the vtable: an array of function pointers, one per method. Since the full set of methods for every type (in the inheritance chain) is known at compile time, one can assign consecutive indices (0..N for N methods) to the methods and invoke a method by looking up its function pointer in the vtable using this index (again passing the instance reference as an additional parameter).
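This layout can be shown directly in C; a minimal sketch of one "class" with two virtual slots (all names here are illustrative):

```c
#include <stdio.h>

typedef struct Animal Animal;

/* One function pointer per method, at a fixed index. */
typedef struct {
    void (*speak)(Animal *self);   /* slot 0 */
    int  (*legs)(Animal *self);    /* slot 1 */
} AnimalVTable;

struct Animal {
    const AnimalVTable *vtable;    /* the hidden per-instance pointer */
};

static void dog_speak(Animal *self) { (void)self; puts("woof"); }
static int  dog_legs(Animal *self)  { (void)self; return 4; }

/* One shared vtable per concrete type. */
static const AnimalVTable dog_vtable = { dog_speak, dog_legs };

/* A "virtual call": index into the vtable, pass the instance along. */
int call_legs(Animal *a) { return a->vtable->legs(a); }
```

`call_legs` works on any `Animal *` without knowing the concrete type; only the vtable pointer in the instance decides which function runs.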
For more dynamic class-based languages, typically classes themselves are first-class objects and each object instead has a reference to its class object. The class object, in turn, owns the methods in some language-dependent manner (in Ruby, methods are a core part of the object model, in Python they're just function objects with tiny wrappers around them). The classes typically store references to their superclass(es) as well, and delegate the search for inherited methods to those classes to aid metaprogramming which adds and alters methods.
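That class-object scheme can be sketched in C as a name-based lookup that walks the superclass chain at call time (a simplification; all names and the sample hierarchy are illustrative):

```c
#include <string.h>
#include <stddef.h>

typedef struct Method { const char *name; void *fn; } Method;

typedef struct Class {
    const struct Class *super;     /* superclass, or NULL */
    const Method *methods;         /* table terminated by a NULL name */
} Class;

typedef struct { const Class *cls; } Object;

/* Search the receiver's class, then its ancestors, by method name. */
void *lookup(const Object *obj, const char *name)
{
    for (const Class *c = obj->cls; c != NULL; c = c->super)
        for (const Method *m = c->methods; m->name != NULL; m++)
            if (strcmp(m->name, name) == 0)
                return m->fn;
    return NULL;  /* "method missing" */
}

/* Example hierarchy: Derived inherits "answer" from Base. */
static int answer_impl(void) { return 42; }
static const Method base_methods[]    = { { "answer", (void *)answer_impl },
                                          { NULL, NULL } };
static const Method derived_methods[] = { { NULL, NULL } };
static const Class Base    = { NULL,  base_methods };
static const Class Derived = { &Base, derived_methods };
```

Because the search happens at call time, redefining a method in a class object immediately changes behavior for all its instances and subclasses, which is what makes this kind of metaprogramming cheap.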
There are many other systems that aren't based on classes, but they differ significantly, so I'll only pick out one interesting design alternative: When you can add new (sets of) methods to all types at will anywhere in the program (e.g. type classes in Haskell and traits in Rust), the full set of methods isn't known while compiling. To resolve this, one creates a vtable per trait implementation and passes it around wherever the trait implementation is required. That is, code like this:
```cpp
void needs_a_trait(SomeTrait &x) { x.method2(1); }
ConcreteType x = ...;
needs_a_trait(x);
```
is compiled down to this:
```cpp
functionpointer SomeTrait_ConcreteType_vtable[] = { &method1, &method2, ... };
void needs_a_trait(void *x, functionpointer vtable[]) { vtable[1](x, 1); }
ConcreteType x = ...;
needs_a_trait(&x, SomeTrait_ConcreteType_vtable);
```
This also means the vtable information isn't embedded in the object. If you want references to an "instance of a trait" that behave correctly when, for example, stored in data structures that contain many different types, one can create a fat pointer `(instance_pointer, trait_vtable)`. This is actually a generalization of the above strategy.
Without any further coordination, at least one writer plus one reader can result in a classic race condition.
There are a number of factors involved.
If there is only one memory location involved (a byte, or an aligned word), it is possible for two threads, one writer and one reader, accessing the same location, to communicate effectively. (Alignment usually matters in the context of the processor's memory model, because unaligned data acts like two or more independent memory locations.)
However, keeping within these limitations alone does not allow rich interaction between two threads.
Involve more than one memory location or more than one writer, and explicit synchronization is almost certainly required.
There are various processor instructions that facilitate synchronization.
One set works like an atomic read-modify-write and allows multiple writers to, among other things, increment a counter without losing any counts. These are sometimes implemented as compare-and-swap instructions. There are a number of variations, including the paired instructions load-linked and store-conditional.
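In C these primitives are exposed through C11 `<stdatomic.h>`; a small sketch of a lock-free counter, first with the fetch-and-add form and then written as an explicit compare-and-swap loop:

```c
#include <stdatomic.h>

static atomic_int counter = 0;

/* On most targets this compiles to the processor's atomic
   read-modify-write (e.g. LOCK XADD on x86), so concurrent
   writers never lose a count. */
void increment(void)
{
    atomic_fetch_add(&counter, 1);
}

/* The same effect via compare-and-swap: retry until our snapshot
   of the counter was still current when we tried to replace it. */
void increment_cas(void)
{
    int old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;  /* 'old' was refreshed by the failed CAS; just retry */
}
```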
There are also memory barrier instructions that tell the processor something about when and how to flush individual processor caches to common main memory.
These primitives can be used to build larger locks. Most operating systems will provide some rich thread synchronization capabilities that are in some way built on these hardware primitives.
Programming languages and operating systems expose these hardware primitives through locking, synchronized methods and blocks, and volatile variables.
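As one example of that layering, a POSIX mutex (itself built on the hardware primitives above) can guard a multi-word update so a reader never observes a half-written pair; a minimal sketch:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int x, y;  /* invariant: x == y */

/* Both words change under the same lock, so the invariant
   holds at every point where the lock is free. */
void update(int v)
{
    pthread_mutex_lock(&lock);
    x = v;
    y = v;
    pthread_mutex_unlock(&lock);
}

/* Returns 1 if the pair was observed in a consistent state. */
int read_pair(void)
{
    pthread_mutex_lock(&lock);
    int consistent = (x == y);
    pthread_mutex_unlock(&lock);
    return consistent;
}
```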
Transactional memory is another very interesting feature with some new underlying hardware support, but it is still very new.