Evolution of Operating Systems Designs/Storage: Addressing Schemes

Storage: Addressing Schemes

The amount and configuration of the storage systems have always had an effect on the nature of the operating systems that can be installed. Some of the early monitors and monitoring systems had to fit themselves into extremely small memories and still leave room for other programs. It is important to recognize that assumptions about how large a program could safely be, one built from punched cards for instance, limited the sizes of computers until multiprogramming and timesharing changed the rules.

Memory Banks

Originally memory was built on frames and organized into long "banks", each consisting of a number of frames of memory. Since then, any linear arrangement of memory has been called a memory bank.

Memory Arrays

As memory got larger, more banks were added, creating a two-dimensional arrangement called a memory array. Later, technology shrank these two-dimensional structures down into circuits so small they could fit on a silicon chip.

Memory Modules

Today, we have two-dimensional arrangements of memory where each bank is a memory module: a number of chips built on a small printed circuit board that organizes them into a specific memory architecture, such as DDR2 DIMMs of SDRAM. Today's memory arrays tend to store gigabytes of data on the motherboard.

Memory Caches

Although modern memory chips are faster than they have ever been, microprocessor clock speeds have increased faster still, and processors are slowed down by the speed of main memory. To allow the processor to run more efficiently, small areas of faster static RAM called caches have been built that can service the processor at its full speed. Modern computers have two or more such caches built into the CPU, so that the CPU can run in the multi-gigahertz range even though main memory is often still clocked in the megahertz range.

The idea is that much of a process is kept within a single page of memory and can be addressed using a local jump. So we store a few pages of memory in the cache, and the processor flips back and forth between them while it runs the program. As long as we don't need a far call, our processing stays in the cache. The minute we do need a far call, we generate a cache miss, and we have to suspend processing while we reload the cache from the slower memory at the next stage below. Eventually the data is fetched from main memory and propagated back up into the top-level cache, where the program can again run for a while without dropping down to the lower memory speeds.
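
The cost of staying within the cache versus repeatedly missing it can be seen with a small experiment. The following sketch (illustrative only, not from the original text) touches every byte of a large array twice: once sequentially, which is cache friendly, and once with a large stride, which touches a new cache line on almost every access and so keeps falling back to the slower levels below.

    /* Illustrative sketch: compare cache-friendly sequential access with
     * cache-hostile strided access over the same total amount of work. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64u * 1024u * 1024u)   /* 64 MiB, far larger than any cache */

    int main(void)
    {
        unsigned char *mem = malloc(N);
        if (!mem)
            return 1;

        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++)                 /* sequential walk */
            mem[i]++;
        clock_t t1 = clock();

        for (size_t start = 0; start < 4096; start++)  /* strided walk */
            for (size_t i = start; i < N; i += 4096)
                mem[i]++;
        clock_t t2 = clock();

        printf("sequential: %.2fs   strided: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(mem);
        return 0;
    }

On most machines the strided pass takes noticeably longer, even though both passes increment exactly the same bytes.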

The Long and Short of Addressing

In most modern machine languages there are two types of control-transfer operations: near calls and far calls. As we have seen, much of programming is done so that memory can be stored in clusters of pages, and memory fetches can be anticipated so that they precede the actual need for a particular page. As long as the fetch can be started long before the page is required, the page will be ready for use without significant waiting for a cache miss to be dealt with. Modern optimizing compilers can predict the need for such a pre-load and build it into the program, so that the cache is always anticipating the need for a new page. The problem comes when we need to make a far call to a routine that is part of another program, or part, for instance, of the operating system. Because that routine was compiled at another time, the compiler cannot anticipate anything more than the fact that you will need to change pages, and so the processor has to start executing the new section of code before it can anticipate which pages will need to be pre-loaded. This is one reason why machine languages have different instructions for near jumps and far jumps. Another reason, of course, is that near jumps do not need the full address in order to access memory, whereas far jumps do.
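
The kind of pre-loading the compiler inserts can also be written by hand. The sketch below is an assumption about tooling rather than anything the text specifies: it relies on the __builtin_prefetch intrinsic provided by GCC and Clang, and simply asks the memory system for a cache line a few iterations before it is needed, so the miss is serviced while useful work continues.

    /* Sketch of explicit pre-loading, the kind of hint an optimizing
     * compiler can also insert on its own (requires GCC or Clang). */
    #include <stddef.h>

    long sum_with_prefetch(const long *data, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                /* start pulling data[i + 16] toward the cache now;
                 * arguments: address, 0 = read, 1 = low temporal locality */
                __builtin_prefetch(&data[i + 16], 0, 1);
            total += data[i];
        }
        return total;
    }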

Memory Pages and Swap Files

Just as the processor pre-fetches memory pages to load into the cache, a similar architecture has been developed to allow large programs to be kept mostly on the secondary storage device and loaded in, page by page, as needed. The mechanism involved is called a swap file or virtual memory device. It consists of a framework in memory to contain the active pages, and a file on the secondary storage device to hold the actual program. As pages are needed they are moved from the swap file into the main memory framework, where they can be accessed by the cache. This would create confusion if there were not a program or device called a memory manager that maps the program's addresses onto the addresses of the virtual memory framework.

In order to make this type of addressing practical, memory systems are organized into page-based structures, and memory is seldom accessed at any smaller resolution than a page fetch. Once this architecture has been created, it becomes relatively easy to load pages and then swap them out to disk, and from the processor's point of view, main memory simply becomes a cache for the secondary storage devices.
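
As a rough illustration of what the memory manager does, consider the toy sketch below. It is not the code of any real operating system, and load_from_swap is a hypothetical helper: a virtual address is split into a page number and an offset, the page is looked up in a page table, and a missing page is brought in from the swap file before the access completes.

    /* Toy sketch (not any real OS's code) of the translation a memory
     * manager performs for page-based virtual memory. */
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define NUM_PAGES 1024u

    typedef struct {
        bool     present;   /* is the page currently in a main-memory frame? */
        uint32_t frame;     /* which physical frame holds it                 */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Hypothetical helper: copy one page from the swap file into a free
     * frame and return that frame's number. */
    uint32_t load_from_swap(uint32_t page);

    uint32_t translate(uint32_t virtual_addr)
    {
        uint32_t page   = virtual_addr / PAGE_SIZE;
        uint32_t offset = virtual_addr % PAGE_SIZE;

        if (!page_table[page].present) {            /* page fault */
            page_table[page].frame   = load_from_swap(page);
            page_table[page].present = true;
        }
        return page_table[page].frame * PAGE_SIZE + offset;
    }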

Memory Partitions and Memory Virtualization

It wasn't until the third generation of operating systems that it became necessary for large areas of memory to be set aside for processing different users or different applications in a multiprogramming or time-share architecture. One way of doing this was shown in the architecture of the 80x86 family starting with the 286. By creating special partition registers (segment registers, in Intel's terminology) and logically combining the partition address with the base address register, it was possible to isolate different copies of even the same program accessing the same base addresses, simply by changing the partition address in the partition register. By associating a particular partition with a particular process, virtual memory areas could be created that were isolated from each other.
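
The arithmetic behind that isolation can be sketched as follows. The values are illustrative only and this is not the 286's actual descriptor format: the same program offset lands at different physical addresses depending on which partition base the operating system has loaded for the current process.

    /* Minimal sketch of partition/segment isolation: identical offsets,
     * different physical addresses, chosen by the per-process base. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t base;    /* where this partition starts in physical memory */
        uint32_t limit;   /* size of the partition in bytes                 */
    } partition_t;

    /* Two processes running the same program, each given its own partition. */
    static const partition_t proc_a = { 0x10000, 0x8000 };
    static const partition_t proc_b = { 0x40000, 0x8000 };

    static uint32_t physical(const partition_t *p, uint32_t offset)
    {
        if (offset >= p->limit)
            return UINT32_MAX;          /* out of bounds: would trap */
        return p->base + offset;
    }

    int main(void)
    {
        uint32_t off = 0x0100;          /* identical offset in both programs */
        printf("process A: 0x%x\n", (unsigned)physical(&proc_a, off));  /* 0x10100 */
        printf("process B: 0x%x\n", (unsigned)physical(&proc_b, off));  /* 0x40100 */
        return 0;
    }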

The CP operating system was written to make use of this capability by giving each of its own clones a partition in which to run. As a result, CP was both the first virtual machine and one of the first multiprogramming systems able to isolate different functions or give each user their own virtual machine. This allowed the batch system to do away with the 1401 front-end and back-end processors and run their software in different partitions of main memory. It was the relative success of this program that triggered the development of the 360 line of computers and OS/360, the operating system that ran them.

Direct Access to Hardware

Computers take time to do data-intensive processes. When a program wants to do a data-intensive process, the overhead of the operating system itself may be a factor in how much data can be processed. This is most notable in human-computer interaction (HCI), because the user is part of the processing loop. A good example of where overhead is critical is animated graphics. Screen updates can be delayed significantly if the operating system gives them a lower priority than other processes running simultaneously. If you are watching a movie, for instance, this can make the movie jerky and make it stall at odd times, breaking the connection between the sound and the video. Another place where timing is critical is in the sound itself, since jerky sound is almost impossible to understand as speech.

Here we have a problem. In a single-threaded, single-user system it is practical to allow the process to take over control of processing and write directly to the physical hardware, without going through the system call interface. In a multiprogramming or time-share system, however, doing so would reduce the use that other users or other programs can make of the computer, since while one program had direct control, other processes couldn't run.
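
What "writing directly to the physical hardware" looks like in practice is simply storing into a device's memory-mapped addresses. The sketch below is illustrative only: it assumes a PC-compatible machine in text mode where the program really is given the physical address space (bare metal, a kernel, or a DOS-era system), and under a modern protected-mode operating system the same store would simply fault.

    /* Sketch of direct hardware access: write a character straight into
     * the VGA text buffer at physical address 0xB8000 instead of asking
     * the operating system to print it. */
    #include <stdint.h>

    #define VGA_TEXT_BASE 0xB8000u

    void putchar_direct(int row, int col, char c)
    {
        volatile uint16_t *vga = (volatile uint16_t *)(uintptr_t)VGA_TEXT_BASE;
        /* low byte = character, high byte = colour attribute (0x07 = grey on black) */
        vga[row * 80 + col] = (uint16_t)((0x07 << 8) | (uint8_t)c);
    }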

Further, there is a second side effect of allowing direct access to hardware: all security schemes are circumvented, and a malicious program can literally take over the computer and create accounts or copy data while ostensibly showing a movie.

Because of these last two factors, direct access to hardware is controversial and must be carefully considered when designing an operating system. Just what options does the operating system give the program? Does it run in a sandbox, or is it given free rein to modify memory and secondary storage at will? Anything that makes the computer more secure will also reduce the quality of the animation, and anything that increases the quality of the animation reduces the quality of service for other programs and other users.

For myself, I remember finding out that my TRS-80 pocket computer didn't implement the PEEK or POKE commands, BASIC's direct memory-access commands, and that therefore I couldn't program it in machine language. It was at that point that I abandoned my plans for making things with my pocket computer and began to think in terms of buying a slightly more capable toy.

Extended Memory

An interesting problem arose because Intel, the chip manufacturer, designed the 80286 to be backwards compatible with the 8086 and wanted to keep it compatible with DOS. The problem was that DOS was built on an architecture that limited memory to 640 KB (kilobytes), while the 286 had an extended memory that could access a much larger address space. To use this memory, the operating system had to deal with the fact that the address space between 640 KB and 1 MB had been reserved for ROM. By allowing the ROM to stay within the first megabyte of address space, the designers had essentially created two RAM spaces with a gap in between them: conventional memory in the first 640 KB and extended memory from 1 MB up.
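
The 1 MB boundary falls straight out of 8086 real-mode address arithmetic, which the 286 had to reproduce to stay compatible. A small sketch of that arithmetic (illustrative, not tied to any particular program):

    /* Sketch of 8086 real-mode addressing: a 16-bit segment and 16-bit
     * offset combine as segment * 16 + offset, so addresses top out just
     * above 1 MB, with the region from 0xA0000 to 0xFFFFF reserved for
     * video memory and ROM. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
    {
        return (uint32_t)segment * 16u + offset;
    }

    int main(void)
    {
        printf("top of conventional RAM:  0x%x\n",
               (unsigned)real_mode_address(0x9FFF, 0x000F));  /* 0x9FFFF, ~640 KB */
        printf("start of reserved area:   0x%x\n",
               (unsigned)real_mode_address(0xA000, 0x0000));  /* 0xA0000 */
        printf("highest possible address: 0x%x\n",
               (unsigned)real_mode_address(0xFFFF, 0xFFFF));  /* 0x10FFEF, just over 1 MB */
        return 0;
    }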

Programs had to either be aware of the gap and load around it, or be separated into modules, loading one module into one memory space and another into the other. Windows could do this because it had a memory manager, but few DOS programs were that sophisticated, so DOS programs tended to be limited to the first 640 KB. It was partly because Windows allowed larger spreadsheets that the 286 running Windows became even more popular than the 8086/8088 DOS machines.

Eventually, by the time of the 80386, DOS developed its own memory managers and became an extended operating system, but in the time of the 286 it was limited to the bottom 640 KB of memory.

Enhanced Memory

Some motherboard manufacturers were bothered enough by the inconsistency in the addressing of memory in 286-based computers that they designed a new memory map, called Enhanced Memory, that allowed them to fill in the gap between the two memory spaces by bank-switching out the ROMs once the BIOS was loaded. This architecture has since become standard, and today the operating system does not have to deal with the inconsistencies caused by keeping the ROMs in the first 1 MB address range. DOS, however, has dropped away from Windows, and is now just a memory kept alive by the syntax of the command language used in command-line mode.