All about Operating systems
Tao
Operating System (Tao Systems) Group members.

The ultimate OS (-wishlist), open, object-oriented and standard, designed by Vladimir Z. Nuri <vznuri@netcom.com>: "This article proposes a radically new kind of operating system and software style that includes what I consider to be the best features of all existing operating systems created to date, as well as many new innovations. It is based on ideas and inspirations I have collected over many years and have finally written up here."
"One of the key components in the Internet's
success has been the public availability of its design
documents. Many proprietary networking systems, such as
SNA and IPX, have guarded their packet formats and
details of protocol operation as trade secrets. Even
``open'' standard organizations, such as IEEE and ISO,
sell their standard documents as a primary source of
revenue. In contrast, Internet design documents, the
``Request for Comments'' (RFCs), have always be available
for anyone to download and study. I believe that this
policy, making it easy for the public to study the
Internet and learn about it, has greatly contributed to
the success of this exciting technology. A key
requirement of the project is the continuation of this
open policy."
A very short history of the Internet.
Some remarks and annotations on current problems and themes of operating system theory

Monolithic kernel against micro-kernel

Micro-kernels evolved out of the needs of distributed processing. Large monolithic kernels like Unix/VMS/... are hard to adapt to multiprocessor environments. Although this is not the place to discuss the many complicated and partly contradictory topics inherent in multitask/multithread/multiprocessor system design, some annotations and short remarks: Since its development in the early 70's, Unix was the place to test, prove and implement new ideas. Many features, such as networking and graphics, were added later; all of this made Unix a very complete system, but not a well-integrated one. Besides the fact that Unix was not built homogeneously, it became apparent in the 80's that monolithic kernels in general tend to produce too many side effects when transformed into multiprocessor systems.

A micro-kernel is the least common denominator: it is what remains if you successively strip away every part of an OS that is not strictly necessary. It comprises only the very essential parts of an operating system: memory management, the process scheduler, interprocess communication and, in the case of a real-time micro-kernel, a sophisticated interrupt-handler scheme which is closely related to the scheduler. Everything else - file system(s), network management, the graphics system etc. - is treated as external and therefore runs as normal user processes.

Back to the monolithic kernel?

But there is also a trend back to the monolithic kernel, with sound arguments. A micro-kernel is not really practicable, its critics state: "First, micro-kernels are larger than desired because of the complications of a modern virtual memory system (such as the copy-on-write facility), the need to support many different hardware devices, and complex optimizations in communication facilities, all of which have been handled inside most micro-kernels. Moreover, performance problems have tended to force services originally implemented on top of a micro-kernel back into the kernel, increasing its size. For example, the Mach inter-machine network server has been added back into some versions of Mach for this reason. Second, micro-kernels do not support domain-specific resource allocation policies any better than monolithic kernels, an increasingly important issue with sophisticated applications and application systems. For example, the standard page-replacement policies of UNIX-like operating systems perform poorly for applications with random or sequential access. Placement of conventional operating system kernel services in a micro-kernel-based server does not generally give the applications any more control because the server is a fixed protected system service. Adding a variety of resource management policies to the micro-kernel fails to achieve the efficiency that application-specific knowledge allows and increases the kernel size and complexity. Finally, micro-kernels are bloated with exception-handling mechanisms for the failure and unusual cases that can arise with the hardware and with other server and application modules. For example, the potential page-in exception conditions with external pagers introduce complications into Mach."[stanford**]

IPC

The process scheduler and interprocess communication are the two parts at the very heart of the system that decide its overall performance. Design faults committed here influence the whole system, and time lost here cannot be regained in any other part of the system.
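To illustrate the message-passing style on which micro-kernels are built, here is a minimal sketch in C. The names (port_t, msg_send, msg_recv) are invented for illustration; in a real micro-kernel these would be kernel-mediated primitives, while here they are simulated inside one process with POSIX threads:

    /* Sketch of synchronous message-passing IPC in the micro-kernel style.
     * port_t, msg_send and msg_recv are hypothetical names; a real kernel
     * implements them as traps, here they are simulated with pthreads. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  ready;      /* signaled on every state change       */
        char            buf[128];   /* single-slot message buffer           */
        int             full;       /* 1 while a message waits in buf       */
    } port_t;

    static void port_init(port_t *p) {
        pthread_mutex_init(&p->lock, NULL);
        pthread_cond_init(&p->ready, NULL);
        p->full = 0;
    }

    /* Sender blocks until the slot is free, then deposits the message. */
    static void msg_send(port_t *p, const char *msg) {
        pthread_mutex_lock(&p->lock);
        while (p->full)
            pthread_cond_wait(&p->ready, &p->lock);
        strncpy(p->buf, msg, sizeof p->buf - 1);
        p->full = 1;
        pthread_cond_signal(&p->ready);
        pthread_mutex_unlock(&p->lock);
    }

    /* Receiver blocks until a message arrives, then removes it. */
    static void msg_recv(port_t *p, char *out, size_t n) {
        pthread_mutex_lock(&p->lock);
        while (!p->full)
            pthread_cond_wait(&p->ready, &p->lock);
        strncpy(out, p->buf, n - 1);
        out[n - 1] = '\0';
        p->full = 0;
        pthread_cond_signal(&p->ready);
        pthread_mutex_unlock(&p->lock);
    }

    /* A "file server" thread: on a micro-kernel this would be an ordinary
     * user process serving requests that arrive as messages. */
    static void *file_server(void *arg) {
        port_t *port = arg;
        char req[128];
        msg_recv(port, req, sizeof req);
        printf("server handling request: %s\n", req);
        return NULL;
    }

    int main(void) {
        port_t port;
        pthread_t srv;
        port_init(&port);
        pthread_create(&srv, NULL, file_server, &port);
        msg_send(&port, "open /etc/motd");   /* client side of the IPC */
        pthread_join(&srv, NULL);
        return 0;
    }

The point of the exercise: the "file server" is just an ordinary thread waiting for messages - exactly the position that file systems, network management etc. occupy on top of a micro-kernel.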
"Inter-process communication (ipc) performance is vital for modern operating systems, especially micro-kernel based ones.But most micro-kernels on the market today exhibit poor ipc performance..... To achieve fast IPC performance the whole Kernel-design must be guided by ipc-requirements and heavy use of the concept of virtual address space inside the micro-kernel itself must be made."[liedke**] Although today it should be clear that any binding to a certain processor should be avoided - which naturally means also avoiding assembler code, certain performance critical parts of the system - namely the sheduler - should even in times of 2Ghz processors at least get optimized on this level. But to state it clearly: this is the absolute exception to the rule. In the sense of readability, portability and maintanance a high level language should be preferred. Writing a fast comand-line (windowed-)multitask system should be no big chalenge today - this was possible already 20 years ago with 8bit processors like the Z80. The chalenge today are grafical systems with high frames/second rates. And to end all those useless emotional debates about better or best systems we should arrange about a benchmark which gives clear facts: eg since Opengl or it's clones are available on all systems, a simple fixed Opengl-application with moving objects ( and perhaps a second one with disc-activity, texture loading etc) or a Povray-animation etc. And since Co-operation matters a second (or third) application can be defined to simulate some load. If you want to know more on scheduling/dispatching and IPC: Real time dispatcher. Memory,libraries,dll's To reduce the size of application programs and to avoid binding of always the same functions to application programs, these libraries are integrated into the OS. On the other hand it makes no sense to have all additions for all purposes permanently loaded. The cure are Dll's which only get loaded when needed. But Dll's are the point where comercial systems get irrational and where free systems have their highest potential: If the system manufacturer delivers himself application programs it is not in his interest to produce competition to his own programs. He will deliver these libraries as system Dll's only after the last freelance coder in the world has written his own (graphics- etc) library. The last example for this is the gdiplus library. By the way: Dll's are no invention of the nineteeths, already in the late 70ths/early 80ths systems were using runtime linking - as DLL's were called at that time. One should think that memory management is no more an issue today. If you're working with real existing systems you know it is - in many areas. Recombining free'ed areas of memory after heavy allocation/freeing, memory managers which can't use the installed memory, cache managers with weird designs... Annotations to File Systems esp journaling see File system.
The modern operating system

The modern operating system was neither planned, nor did it evolve out of any theoretical considerations. It simply evolved out of a hardware accident: when the PC was born, terminals used to cost a lot of money, sometimes twice or three times the price of a PC. So it was quite natural that hardware manufacturers tried to integrate the terminal into the PC. All cheap PCs featured "video memory" at the end of the 70's. But soon this makeshift turned out to be an immense benefit: games turned out to be much faster without the bottleneck of a connection to a terminal. And while the "real systems guys" still laughed at those strange micros, a whole revolution happened behind their backs. Since the CPU could work at full speed in this video memory, all graphics were super fast on these cheap 'boxes'. So it was quite natural that, when the OSs learned multitasking on these cheap PCs, this video memory had to be served in the accustomed way.
OSs today

Many paradigm changes have occurred since the introduction of UNIX. Computers are entering their adult age. But the weakest point of today's OSs is still the file system. What's worse, today's big hard disks aggravate this problem further, and we will see a real explosion in storage capacity in the near future. Many new features not even thought of by system makers today will play a role. Information retrieval in a much more flexible manner than today's IR systems is only one example; but the file systems themselves must become much more intelligent, more flexible and faster at the same time. This should be no big challenge. To state it clearly: today's file systems are products of the stone age, with minimal enhancements over first-generation file systems.

** The internet is a very volatile medium, and so the links are all dead. And since search engines degrade your rank for many dead links, you have to remove them. But this is an unbearable state of affairs. The articles themselves are degraded by it, not to speak of scientific citation honesty, proof of correct citation etc.
OS/360, developed by IBM (announced in 1964), is generally considered the first "real" OS. The most important innovation was multiprogramming (batch processing, partitioned storage areas, several jobs waiting at any one time to be processed). The point was always to make optimal use of the processor as a scarce and expensive resource. Even more economical use was promised by the concept of time-sharing (proposed by Christopher Strachey in 1959). Time-sharing was an important turning point in the evolution of the computer.
Topsy is an OS for teaching purposes. In Topsy there exist exactly two processes: the user process and the kernel process (the operating system). Threads, however, share their resources; in particular they run in the same address space. The user process as well as the kernel process contain several threads. In Topsy all threads (of a specific process!) run in one address space and may share global memory between them. Synchronization of shared memory is accomplished via messages between the threads. The private area of a thread is its stack, which is not protected against (faulty) accesses from other threads. However, a simple stack-checking mechanism has been incorporated to terminate threads on returning from their main function (stack underflow). Topsy divides the memory into two address spaces: one for user threads and the other for the OS kernel (separation of user and kernel process). This has the advantage of better fault-recognition facilities and a stable and consistent behavior of the kernel (user threads are not able to crash the system by modifying kernel memory). The memory is organized in a paged manner, i.e. the whole address space is split up into blocks of a predefined size. Furthermore, the two address spaces are embedded in one virtual address space, although no swapping of pages to secondary memory is supported. Topsy itself comes with a small footprint: it is able to run with a few tens of kilobytes of memory, which is managed in a dynamic fashion. This ensures good utilization of memory. Threads can allocate memory by reserving a certain connected piece of the virtual address space. We call these pieces, consisting of several pages, virtual memory regions. Every virtual memory region is assigned an appropriate number of physical pages.
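To make the notion of a virtual memory region concrete, here is a small sketch in C of page-granular region allocation in the spirit described above; the types and names (vm_region, region_alloc) are invented for illustration and are not Topsy's actual interface:

    /* Sketch of Topsy-style "virtual memory regions": a thread reserves a
     * connected, page-granular piece of the virtual address space, and
     * each region is backed by an appropriate number of physical pages.
     * vm_region and region_alloc are invented names, not Topsy's API. */
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096ul

    typedef struct vm_region {
        unsigned long     start;    /* virtual start address (page aligned) */
        unsigned long     npages;   /* number of pages in the region        */
        struct vm_region *next;     /* allocated regions kept in a list     */
    } vm_region;

    static vm_region *regions = NULL;         /* list of allocated regions */
    static unsigned long next_free = 0x10000; /* naive bump allocation     */

    /* Reserve a connected piece of virtual address space of at least
     * `bytes` bytes, rounded up to whole pages. */
    static vm_region *region_alloc(unsigned long bytes) {
        unsigned long npages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        vm_region *r = malloc(sizeof *r);
        if (!r)
            return NULL;
        r->start  = next_free;
        r->npages = npages;
        r->next   = regions;
        regions   = r;
        next_free += npages * PAGE_SIZE;
        /* A real kernel would now assign npages physical pages to the
         * region and enter the mappings into the page table. */
        return r;
    }

    int main(void) {
        vm_region *a = region_alloc(5000);   /* needs 2 pages */
        vm_region *b = region_alloc(100);    /* needs 1 page  */
        printf("region a: start=0x%lx pages=%lu\n", a->start, a->npages);
        printf("region b: start=0x%lx pages=%lu\n", b->start, b->npages);
        return 0;
    }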
is a fully-preemptive real-time UNIX-like PC operating system developed at the NCE department.
All trademarks and trade names are recognized as property of their owners