All about Operating systems


T

Tao Operating System (Tao Systems) Group Members
The ultimate OS (wish list): open, object-oriented and standard, designed by Vladimir Z. Nuri <vznuri@netcom.com> 

"This article proposes a radically new kind of operating system and software style that includes what I consider to be the best features of all existing operating systems created to date as well as many new innovations. It is based on ideas and inspirations I have collected over many years and have finally written up here." 
Tao is a radical commercial operating system, or runtime module, offering all of the features required for building leading-edge, cost-driven embedded consumer electronics (single- and multi-processor). It is available on a broad range of processors, both as a stand-alone OS and co-existing with host operating systems.


TCP/IP 
TCP/IP is not an OS like Novell NetWare, but a protocol suite. Version 6 (IPv6) is in the works. 
OSI reference model 
Since TCP/IP is the basis of the internet, no OS today is conceivable without TCP/IP. 
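TCP/IP reaches applications through the sockets API that practically every OS exposes. As a rough, hedged illustration (the host "example.com" and port "80" are placeholders chosen here, not taken from this page), a minimal C client might look like this:

    /* Minimal TCP client sketch using the BSD sockets API (POSIX).
       "example.com" and "80" are illustrative placeholders only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6, whatever resolves */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
            fprintf(stderr, "name lookup failed\n");
            return 1;
        }
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            freeaddrinfo(res);
            return 1;
        }
        const char *msg = "HEAD / HTTP/1.0\r\n\r\n";
        send(fd, msg, strlen(msg), 0);    /* the kernel's TCP/IP stack does the rest */
        close(fd);
        freeaddrinfo(res);
        return 0;
    }

Because the address family is left unspecified, the same sketch works over IPv4 or IPv6.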
"... almost all the initial Internet protocols were developed first for  UNIX, largely due to the availability of kernel source (for a price) and the relative ease of implementation (relative to things like  VMS or  MVS). The University of California at Berkeley (UCB) deserves special mention, because their Computer Science Research Group (CSRG) developed the  BSD variants of AT&T's UNIX operating system. BSD UNIX and its derivatives would become the most common Internet programming platform. 
     The Free Software Movement owes much to bulletin board systems, but really came into its own on the Internet, due to a combination of forces. The public nature of the Internet's early funding ensured that much of its networking software was non-proprietary. The emergence of anonymous FTP sites provided a distribution mechanism that almost anyone could use. Network newsgroups and mailing lists offered an open communication medium. Last but not least were individualists like  Richard Stallman, who wrote EMACS, launched the  GNU Project and founded the Free Software Foundation. In the 1990s, Linus Torvalds wrote  Linux, the popular (and free) UNIX clone operating system. 
     The very existence of the Free Software Movement is part of the Internet saga, because free software would not exist without the net. "Movements" tend to arise when progress offers us new freedoms and we find new ways to explore and, sometimes, to exploit them. The Free Software Movement has offered what would be unimaginable when the Internet was formed - games, editors, windowing systems, compilers, networking software, and even entire operating systems available for anyone who wants them, without licensing fees, with complete source code, and all you need is Internet access. It also offers challenges, forcing us to ask what changes are needed in our society to support these new freedoms that have touched so many people. And it offers chances at exploitation, from the businesses using free software development platforms for commercial code, to the Internet Worm and the security risks of open systems." 
(http://www.FreeSoft.org/CIE/Topics/57.htm) 

"One of the key components in the Internet's success has been the public availability of its design documents. Many proprietary networking systems, such as SNA and IPX, have guarded their packet formats and details of protocol operation as trade secrets. Even ``open'' standard organizations, such as IEEE and ISO, sell their standard documents as a primary source of revenue. In contrast, Internet design documents, the ``Request for Comments'' (RFCs), have always be available for anyone to download and study. I believe that this policy, making it easy for the public to study the Internet and learn about it, has greatly contributed to the success of this exciting technology. A key requirement of the project is the continuation of this open policy." 
(http://www.FreeSoft.org/CIE/Project/project.htm) 


In December 2008, IPv6 was still not widely used. A 2008 study by Google Inc. indicated that penetration was still less than one percent of Internet-enabled hosts in any country. IPv6 has been implemented on all major operating systems in use. (Google: more Macs mean higher IPv6 usage in US)

 

A very short history of the Internet.

 


 
Theory**  
Some remarks and annotations on current problems and themes of operating system theory 
Monolithic kernel versus micro-kernel
  Micro-kernels evolved out of the needs of distributed processing. Large monolithic kernels like Unix/VMS/... are hard to adapt to multiprocessor environments. Although this is not the place to discuss the many complicated and partly contradictory topics inherent in multitask/multithread/multiprocessor system design, some annotations and short remarks: since its development in the early 70's, Unix was the place to test, prove and implement new ideas. Many features, such as networking and graphics, were added later; all of this made Unix a very complete system, but not a well-integrated one. Besides the fact that Unix was not built homogeneously, it became apparent in the 80's that monolithic kernels in general tend to produce too many side effects when transformed into multiprocessor systems.
  A micro-kernel is the least common denominator: it is what remains if you successively strip away from an OS every part that is not strictly necessary. It comprises only the very essential parts of an operating system: memory management, the process scheduler, interprocess communication and, in the case of a real-time micro-kernel, a sophisticated interrupt-handler scheme which is closely related to the scheduler. Everything else - file system(s), network management, the graphics system etc. - is treated as external and therefore as normal user processes.
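To make this division of labour concrete, here is a hedged, purely illustrative sketch of the kind of send/receive primitive a micro-kernel might export; the names (msg_send, msg_recv, FS_SERVER) are invented for this example and do not belong to any system mentioned on this page. A file server built on such a kernel would be an ordinary user task receiving requests as messages.

    /* Conceptual sketch of micro-kernel style message passing.
       Single-process simulation for illustration only; all names are invented. */
    #include <stdio.h>
    #include <string.h>

    enum { FS_SERVER = 1, MAX_MSGS = 8, PAYLOAD = 64 };

    typedef struct {
        int  src;                 /* sending task id   */
        int  dst;                 /* receiving task id */
        char data[PAYLOAD];       /* small in-line payload */
    } message_t;

    static message_t queue[MAX_MSGS];
    static int head, tail;

    /* In a real micro-kernel these calls would trap into the kernel,
       block the caller and switch tasks; here they just copy into a ring buffer. */
    static int msg_send(int src, int dst, const char *data)
    {
        if ((tail + 1) % MAX_MSGS == head) return -1;       /* queue full */
        queue[tail].src = src;
        queue[tail].dst = dst;
        strncpy(queue[tail].data, data, PAYLOAD - 1);
        queue[tail].data[PAYLOAD - 1] = '\0';
        tail = (tail + 1) % MAX_MSGS;
        return 0;
    }

    static int msg_recv(int dst, message_t *out)
    {
        if (head == tail || queue[head].dst != dst) return -1;  /* nothing for us */
        *out = queue[head];
        head = (head + 1) % MAX_MSGS;
        return 0;
    }

    int main(void)
    {
        /* A "user task" (id 2) asks the file server task for a file;
           the file system itself lives outside the kernel. */
        msg_send(2, FS_SERVER, "open /etc/motd");

        message_t m;
        if (msg_recv(FS_SERVER, &m) == 0)
            printf("file server got request from task %d: %s\n", m.src, m.data);
        return 0;
    }

The ring buffer only shows the shape of the interface; the interesting part in a real kernel is the blocking, task switching and address-space handling hidden behind the two calls.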
Back to the monolithic kernel?
But there is also a trend back to the monolithic kernel, supported by sound arguments. A 
micro-kernel, its critics state, is not really practicable: "First, micro-kernels are larger 
than desired because of the complications of a modern virtual memory system 
(such as the copy-on-write facility), the need to support many different hardware 
devices, and complex optimizations in communication facilities, all of which have 
been handled inside most micro-kernels. Moreover, performance problems have 
tended to force services originally implemented on top of a micro-kernel back into 
the kernel, increasing its size. For example, the Mach inter-machine network server 
has been added back into some versions of Mach for this reason. 
Second, micro-kernels do not support domain-specific resource allocation policies 
any better than monolithic kernels, an increasingly important issue with sophisticated 
applications and application systems. For example, the standard page-replacement 
policies of UNIX-like operating systems perform poorly for applications with random 
or sequential access. Placement of conventional operating system kernel services 
in a micro-kernel-based server does not generally give the applications any more control 
because the server is a fixed protected system service. Adding a variety of resource 
management policies to the micro-kernel fails to achieve the efficiency that application-
specific knowledge allows and increases the kernel size and complexity. 
Finally, micro-kernels are bloated with exception-handling mechanisms for the failure 
and unusual cases that can arise with the hardware and with other server and application 
modules. For example, the potential page-in exception conditions with external pagers 
introduces complications into Mach."[stanford**]
IPC
The process scheduler and interprocess communication are the two parts at the very 'heart' of the system that decide its overall performance. Design faults committed here influence the whole system, and time lost here cannot be regained in any other part of the system. 
"Inter-process communication (ipc) performance is vital for modern operating systems, especially micro-kernel based ones. But most micro-kernels on the market today exhibit poor ipc performance... To achieve fast IPC performance, the whole kernel design must be guided by ipc requirements, and heavy use of the concept of virtual address space inside the micro-kernel itself must be made."[Liedtke**]
Although it should be clear today that any binding to a particular processor should be avoided - which naturally also means avoiding assembler code - certain performance-critical parts of the system, namely the scheduler, should even in times of 2 GHz processors at least be optimized at this level. But to state it clearly: this is the absolute exception to the rule. For the sake of readability, portability and maintainability, a high-level language should be preferred.
Writing a fast command-line (windowed) multitasking system should be no big challenge today - this was already possible 20 years ago with 8-bit processors like the Z80. The challenge today is graphical systems with high frame rates. And to end all those useless emotional debates about which system is better or best, we should agree on a benchmark that delivers clear facts: e.g., since OpenGL or its clones are available on all systems, a simple fixed OpenGL application with moving objects (and perhaps a second one with disk activity, texture loading etc.), or a POV-Ray animation. And since co-operation matters, a second (or third) application can be defined to simulate some load.
If you want to know more about scheduling/dispatching and IPC, see: Real time dispatcher.
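Before following that link, a toy round-robin dispatcher may help to fix the idea. This is a hedged sketch with an invented task table; it is not a real-time dispatcher and not code from the linked article.

    /* Toy round-robin dispatcher: each simulated tick, the next ready task
       runs for one time slice. Task names and the fixed table are invented. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        int ticks_left;            /* remaining work, in time slices */
    } task_t;

    int main(void)
    {
        task_t tasks[] = { {"editor", 2}, {"compiler", 3}, {"network", 1} };
        const int ntasks = sizeof tasks / sizeof tasks[0];
        int remaining = 6;         /* total slices still to run (2+3+1) */
        int current = 0;

        while (remaining > 0) {
            task_t *t = &tasks[current];
            if (t->ticks_left > 0) {
                /* "context switch": a real kernel would save and restore
                   registers and the memory map here; we just print. */
                printf("running %s\n", t->name);
                t->ticks_left--;
                remaining--;
            }
            current = (current + 1) % ntasks;   /* round robin */
        }
        return 0;
    }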
Memory, libraries, DLLs
To reduce the size of application programs and to avoid linking the same functions into every application program, shared libraries are integrated into the OS. On the other hand, it makes no sense to have all additions for all purposes permanently loaded. The cure is DLLs, which only get loaded when needed.
But DLLs are the point where commercial systems get irrational and where free systems have their highest potential: if the system manufacturer himself delivers application programs, it is not in his interest to produce competition to his own programs. He will deliver these libraries as system DLLs only after the last freelance coder in the world has written his own (graphics etc.) library. The latest example of this is the gdiplus library.
By the way: DLLs are no invention of the nineties; already in the late 70's/early 80's, systems were using runtime linking - as DLLs were called at that time.
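On POSIX-like systems this runtime-linking mechanism is exposed through dlopen/dlsym. A minimal, hedged sketch (libm and its cos function are used only because they are commonly available; the library name may differ between systems):

    /* Loading a shared library at run time with dlopen/dlsym (POSIX). */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_LAZY);    /* load only when needed */
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine)
            printf("cos(0) = %f\n", cosine(0.0));

        dlclose(handle);                                  /* unload when done */
        return 0;
    }

Depending on the C library, linking may additionally require -ldl.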

One would think that memory management is no longer an issue today. If you are working with real, existing systems, you know it still is - in many areas: recombining freed areas of memory after heavy allocation/freeing, memory managers which cannot use all of the installed memory, cache managers with weird designs...
For annotations on file systems, especially journaling, see File system.

 

The modern operating system
The modern operating system was neither planned nor did it evolve out of any theoretical considerations. It simply evolved out of a hardware accident: when the PC was born, terminals used to cost a lot of money, sometimes twice or three times the price of a PC. So it was quite natural that hardware manufacturers tried to integrate the terminal into the PC. All cheap PCs featured "video memory" at the end of the 70's. But soon this necessity turned out to be an immense benefit: games turned out to be much faster without the bottleneck of a connection to a terminal. And while the "real systems guys" were still laughing at those strange micros, a whole revolution happened behind their backs. Since the CPU could work at full speed in this video memory, all graphics were super fast on these cheap 'boxes'. So it was quite natural that, when the OS's learned multitasking on these cheap PCs, this video memory had to be served in the accustomed way.
 

 

OS's today
Many changes in paradigms have occurred since the introduction of UNIX. Computers are entering their adult age. But the weakest point of today's OS's is still the file system. What is worse, today's big hard disks aggravate this problem further, and we will see a real explosion in storage capacities in the near future. Many new features not even thought of by system makers today will play a role. Information retrieval in a much more flexible manner than today's IR systems is only one example, but the file systems themselves must become much more intelligent, more flexible and faster at the same time. This should be no big challenge. To state it clearly: today's file systems are products of the stone age, with minimal enhancements over first-generation file systems.

** The Internet is a very volatile medium, and so the links are all dead. And since search engines degrade your rank for having many dead links, you have to remove them. But this is an unbearable state of affairs. The articles themselves are degraded by it, not to speak of scientific citation honesty, proof of correct citation, etc.
 

 


Tigger (Trinity College Dublin) Group Members

The Tigger project is developing a framework for the construction of a family of distributed object-support platforms suitable for use in a variety of distributed applications, ranging from embedded soft real-time systems to concurrent engineering frameworks. Customisability, extensibility and portability are put forward as the way to handle diversity and are thus the core design goals in Tigger.

 

 


Time-Sharing 

OS/360, introduced by IBM in the mid-1960s, is generally considered the first "real" OS. Its most important innovation was multiprogramming (batch processing, partitioned storage areas, several jobs waiting at any one time to be processed). The point was always to make optimal use of the processor as a scarce and expensive resource. The concept of time-sharing (proposed by Christopher Strachey in 1959) promised even more economical use of the machine. Time-sharing was an important turning point in the evolution of the computer.
The Creation of Time-Sharing 
      Among the earliest time-sharing systems were those developed at MIT. By 1963, there were two versions of the Compatible Time Sharing System (CTSS) operating at MIT on two IBM 7094 computers, one at the Computation Center, and another at MIT's Project MAC. 
     Those using the early time-sharing systems at MIT and elsewhere soon discovered the delights of interactive computing made possible by time-sharing.(6) Describing the advantages of interactive computing, time-sharing pioneers Robert Fano and Fernando Corbato, wrote: 
 
    "For professional programmers the time-sharing system has come to  mean a great deal more than mere ease of access to the computer. Provided with the opportunity to run a program in continuous dialogue with the machine, editing, `debugging' and modifying the program as they proceed, they have gained immeasurably in the ability to  experiment. They can readily investigate new programming techniques and new approaches to problems."(7) 
 
     The results of this programming flexibility led both to a bolder and more flexible approach to problem solving and to undertaking new areas of  research. Fano and Corbato reported that users not only would build on each other's work, but also they would come to depend more and more on the computer to facilitate their work. The most surprising development that they encountered, however, was the fact that the users themselves 
created many of the programming commands used in the system, instead of needing professional programmers. While at the conventional computer installation, they noted, "one hardly ever makes use of a program developed by another user, because of the difficulty of exchanging programs and data," in the Project MAC time-sharing environment, "the ease of exchange has encouraged investigators to design their programs with an eye to possible use by other people. They have acted essentially as if they were writing papers to be published in technical journals."(8) 
     Fano and Corbato envisioned that time-sharing systems would have a profound impact on the future. "Communities will design systems," they predicted, "to perform various functions -- intellectual, economic and social -- and the systems in turn undoubtedly will have profound effects in shaping the patterns of  human life."(9) "The coupling between such a utility and the community it serves," they discovered, "is so strong that the community is actually a part of the system itself." They foresaw the development of a symbiotic relationship between the computer systems and its human users which "will create new services, new institutions, a new environment and new problems." Among these, they proposed, would be the question of access. "How will access to the utility be controlled?" they asked, "To what ends will the system be devoted, and what safeguards can be devised for its misuses? It is easy to see," they concluded, "that the progress of this new technique will raise many social questions as well as technical ones." (10) 
     Others during this period were concerned with the impact the computer would have on current society. For example, John McCarthy predicted that, "The computer gives signs of becoming the contemporary  counterpart of the steam engine that brought on the industrial revolution."(11) Unlike the steam engine, however, the utility of the computer was dependent on the successful development of software programs  written to direct it. Therefore, along with the increasing speed and capacity of computer hardware, came the increase in the demand for and in the cost of software. By the mid 1960's, the U.S. government was spending  increasing amounts of money to create programs to utilize computer resources. The U.S. government, wrote McCarthy, "with a dozen or so big systems serving its military and space establishments, is spending more  than half of its 1966 outlay of $844 million on software."(12) 
      Pointing out the need for studying the processes of programming, McCarthy observed, "What computers can do, depends on the state of the art and the science of programming as well as on speed and memory  capacity."(14) Computer pioneers like McCarthy recognized that the computer was more than an efficient bookkeeping machine. There was a need to discover what new applications were possible, and to create these new applications. Therefore, there would be a need for breakthroughs in the process of programming software. McCarthy believed that it was important for the user to be able to program in order to realize the potential of  the computer. He pointed out that programming was a skill that was not difficult to learn, and that it was more important to understand the task being automated than to master programming languages. "To program the  trajectory of a rocket," McCarthy offers as an example, "requires a few weeks' study of programming(sic!) and a few years' study of physics."(14) 
      These early explorations in time-sharing prepared the foundation for an important development in the process of creating software. Once the discovery was made that simple programming tools could be created to aid in the process of software development, and that such tools could help those who understood the tasks to be automated, a needed leap could be made in how to develop software. Such a program was to be carried out by research programmers and developers at Bell Labs in the 1970's and early 1980's, building on the principles developed by the pioneers of time-sharing and Project MAC. 
(On the Early History and Impact of Unix: Tools to Build the Tools for a New Millennium, Chapter 9 of Ronda & Michael Hauben's "Netizen's Netbook") 
 

 


TOPS 
TOPS-10 
TOPS-20/TWENEX 
 

 

TOPSY

Topsy is an OS for teaching purposes. In Topsy, there exist exactly two processes: the user process and the kernel process (the operating system). Threads, however, share their resources; in particular, they run in the same address space. The user process as well as the kernel process contain several threads. In Topsy all threads (of a specific process!) run in one address space and may share global memory among them. Synchronization of shared memory is accomplished via messages between threads. The private area of a thread is its stack, which is not protected against (faulty) accesses from other threads. However, a simple stack-checking mechanism has been incorporated to terminate threads on returning from their main function (stack underflow).

Topsy divides the memory into two address spaces: one for user threads and the other for the OS kernel (separation of user and kernel process). This has the advantage of better fault recognition and a stable, consistent behavior of the kernel (user threads are not able to crash the system by modifying kernel memory). The memory is organized in a paged manner, i.e. the whole address space is split up into blocks of a predefined size. Furthermore, the two address spaces are embedded in one virtual address space, although no swapping of pages to secondary memory is supported. Topsy itself has a small footprint: it is able to run with a few tens of kilobytes of memory, which is managed in a dynamic fashion. This ensures good utilization of memory. Threads can allocate memory by reserving a certain connected piece of the virtual address space. We call these pieces, consisting of several pages, virtual memory regions. Every virtual memory region is assigned an appropriate number of physical pages.
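The model described above - several threads in one address space coordinating through messages rather than through unguarded shared variables - can be illustrated with plain POSIX threads. This is a generic, hedged sketch and deliberately does not use Topsy's own API:

    /* Two threads in one address space synchronizing through a one-slot
       "mailbox". Plain POSIX threads, not the Topsy interface. */
    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int have_msg = 0;
    static char mailbox[64];

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!have_msg)                     /* wait for a message */
            pthread_cond_wait(&ready, &lock);
        printf("worker received: %s\n", mailbox);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        pthread_mutex_lock(&lock);
        snprintf(mailbox, sizeof mailbox, "hello from the main thread");
        have_msg = 1;                         /* post the message */
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);

        pthread_join(tid, NULL);
        return 0;
    }

Build with the compiler's -pthread option on typical POSIX systems.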

 


Tornado (University of Toronto)

Tornado is a new operating system being developed for the NUMAchine that addresses NUMA programming issues using novel approaches, some of which were developed for our previous operating system Hurricane. Tornado uses an object-oriented, building block approach that allows applications to customize policies and adapt them to their performance needs. For research purposes, we intend to tune Tornado for applications with very large data sets that typically do not fit in memory and hence have high I/O demands. We also intend to provide applications with an operating environment that provides predictable performance behavior to allow performance tuning and to allow the application to appropriately parameterize its algorithms at run-time.

 


Transputer
(outdated)
The Transputer was a special microprocessor by INMOS with 4 communication links integrated into the CPU, making it easy to construct arrays of transputers. Many problems of array computers were first studied on transputer 'farms'. Although the transputer was pure hardware, many software concepts used today were first introduced here.

 


TRON (The Real-time Operating system Nucleus) 
TRON was originally developed by Ken Sakamura of the University of Tokyo, and is now likely running in your VCR, car navigation system and digital camera. It is non-proprietary, and GNU software development tools for TRON-specification chips are available.  
  


TROPIX

is a fully preemptive, real-time, UNIX-like PC operating system developed at the NCE department of UFRJ (Federal University of Rio de Janeiro, Brazil). It was based on PLURIX, an older multiprocessing UNIX-like operating system, also developed at the same university in the 80's for the Motorola 68010/68020 processors.

Info taken from: http://www.icsi.berkeley.edu/techreports/1992.abstracts/tr-92-037.htm

At the user level, TROPIX bears a reasonable similarity to the UNIX operating system. Processes are created through fork-execs, I/O is always treated as a sequence of bytes and is performed through open-read-write-close primitives, signals can be sent to processes, there is a kernel process zero (the swapper/pager), the init process is the common ancestor of all other user processes, etc.
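The fork-exec pattern referred to here is the standard UNIX one. The following is a minimal, hedged C sketch of it using ordinary POSIX calls, not TROPIX-specific code; "ls" is just an arbitrary program to run.

    /* Standard UNIX fork/exec/wait pattern. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                 /* duplicate the current process */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* child: replace its image */
            perror("execlp");               /* only reached if exec fails */
            return 1;
        }
        int status;
        waitpid(pid, &status, 0);           /* parent: wait for the child */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }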

Internally, the TROPIX kernel structure is quite different from UNIX. TROPIX has a fully preemptible kernel and many specialized system calls to manipulate and coordinate the execution of real-time processes. Real-time processes coexist with their time-sharing counterparts, but they can run at higher priorities and have many other privileges. Besides the swapper/pager, the TROPIX kernel's standard processes include one dedicated dispatcher process per processor. When running in a multiprocessing environment, this scheme greatly facilitates the implementation of different scheduling strategies for different processors. Fine-grain parallel processing within executing processes is also possible, since TROPIX implements threads at the supervisor level.

 


TUNES Group Members

Tunes is a project to replace existing operating systems, languages, and user interfaces by a completely rethought computing system, based on a correctness-proof-secure higher-order reflective self-extensible fine-grained distributed persistent fault-tolerant version-aware decentralized (no-kernel) object system. We want to implement such a system because we know all these are required for the computing industry to compete fairly, which is not currently possible. Even if Tunes itself does not become a world-wide OS, we hope the TUNES experience can speed up the appearance of such an OS that would fulfill our requirements.

 


TurboDOS (outdated) 
Legendary system from Software 2000 Inc., late 70's/early 80's. It was CP/M compatible, giving it a huge software base. It was designed for the Zilog Z80 processor and written purely in assembler, which made TurboDOS a very fast multiuser/multitasking system. Since graphical user interfaces were not common at that time, it featured very fast response times. Broad installation base (USA/Europe).


 
 

 


All trademarks and trade names are recognized as property of their owners