Hal, can't say I have one, or have ever heard of the Omni keyboard. I really can't explain it, I just love the old IBM 101 keyboards. We're talking tough here, and when it gets dirty I simply take it into the shower and wash this ol' puppy out. Inside of a few drying hours it's like new and ready for even more punishment.
DMA channels, DMA, and associated uses.
DMA stands for "Direct Memory Access".
This is the ability of add-on peripherals and some "system" components to use memory addresses without CPU intervention. A classic example is a "SoundBlaster" sound card. These require 1 IRQ and 1 DMA channel; the 16-bit and higher SB cards require 2 DMA channels. It was found that if a device can "use" memory as a "scratch" area without CPU intervention, both system and peripheral performance increase markedly.
In the SB example, the card can move data in and out of memory to do its "work" without CPU intervention, thus freeing the CPU for other tasks. When the SB card compiles its data, it signals the CPU via an IRQ that it's ready for bus/CPU time, and you hear the sound. The wonderful thing about DMA channels is that there are 7 to choose from, and what's better is that DMA channels can be shared if need be. It's rather unusual to NEED to share DMA channels because so few devices use them.
The only reason WHY a device would need to use DMA is that it has no "local" processing memory to "calculate" its coming task. In most systems, a printer port, a sound card, and your lowly floppy drive(s) *may* each be using a DMA channel. Each of these devices would require oodles of CPU time to perform the same task, but via DMA it's performed faster and with fewer CPU cycles to boot.
Why don't more devices use DMA? Well, it's not really necessary, because most peripherals have "local" memory and the intelligence to perform calculations "on-board". "Smart" devices such as video cards have oodles of onboard memory and a "local" ASIC (Application Specific Integrated Circuit), or in less blown-up terms, a CPU designed for a specific task(s). So all this is now done "locally", and the device simply notifies the system via an IRQ that it's ready for CPU/bus time. Local meaning "onboard" the device, not using "outboard" resources to do calculations and commence data transfers. SCSI cards use DMA to "queue" commands between processes and to "latch" onto the bus for longer intervals by "buffering" commands: dropping the IRQ, enabling the IRQ and starting a transfer, disabling the transfer and IRQ, buffering the next transfer/command queue, and repeating the cycle.
So it seems that the device is holding the interrupt longer, but actually it is storing commands/queues for the next transfer. This acts like an uninterrupted transfer because the buffer is always full with new data whilst previous data is moving. UDMA IDE drives work in a similar fashion.
So basically, DMA is sort of like a community "scratch pad" and a "check valve" to meter throughput in a steady-flow fashion.
There is very little to worry about here because there are ample DMA channels to go around.
Whatcha think...should the next be CACHE, MEMORY, or?