All about Operating systems

 

M

Mach
Project Mach was an operating systems research project of the Carnegie Mellon University School of Computer Science from 1985 to 1994. Its target: a new kernel for the further evolution of UNIX. Funded by ARPA, Carnegie Mellon took the place Berkeley had held before. Mach was to be a better UNIX than UNIX ever was: all the weak points of Unix (multiprocessor support, threads, memory management and interprocess communication) were to be improved, while compatibility with BSD 4.3 was to be kept.

NeXT chose Mach as the starting point for its object-oriented OS NeXTSTEP. Mach is the kernel of OSF/1. GNU Hurd is based on Mach. IBM selected Mach version 3 as the starting point for new OS developments (Unix, OS/2, Workplace OS).

 

Mach (Carnegie Mellon University)
Mach is one of the giants in the operating systems research community. Originally started at CMU, Mach has become the basis for many research systems. Although work on Mach at CMU has largely stopped, except for real-time and multi-server work, many other groups are still using Mach as the basis for research.

Mach at OSF (OSF Research Institute)
Related to: Mach
The OSF Research Institute has taken up the Mach technology started at CMU and is using it as the basis for several areas of research, including operating systems for parallel machines, trusted object-oriented kernels, and other OS research areas.

Mach-US (Carnegie Mellon University)
Related to: Mach
The Mach-US system is an OS developed as part of the CMU Mach project. It consists of a set of servers, each of which supports orthogonal system services. For example, instead of one server supplying all of the system services, as under the Mach BSD 4.3 single server (UX), the Mach multiserver (Mach-US) has several servers: a task server, a file server, a tty server, an authentication server, a network server, etc. It also has an emulation library that is mapped dynamically into each user process and uses the system servers to support the application programmer's interface (API) of the UNIX operating system.
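The split described above can be pictured with a small sketch. This is purely illustrative C and not Mach-US source code: the message layout, the server stub and every name in it are invented for the example; the real system would carry the request over Mach IPC ports.

    /* Purely illustrative sketch, not Mach-US source code. The idea: the
     * application calls what looks like a normal UNIX open(), the emulation
     * library mapped into the process turns it into a request message, and a
     * separate file-server task answers it. All names are invented. */
    #include <stdio.h>
    #include <string.h>

    struct open_request { char path[256]; int flags; };
    struct open_reply   { int fd; int error; };

    /* Stand-in for the IPC round trip to the file-server task. */
    static struct open_reply send_to_file_server(const struct open_request *req)
    {
        printf("[file server] open(\"%s\", flags=%d)\n", req->path, req->flags);
        struct open_reply rep = { .fd = 3, .error = 0 };   /* pretend it succeeded */
        return rep;
    }

    /* What the emulation library exports to the application as open(). */
    static int emulated_open(const char *path, int flags)
    {
        struct open_request req = { .flags = flags };
        strncpy(req.path, path, sizeof(req.path) - 1);
        struct open_reply rep = send_to_file_server(&req);
        return rep.error ? -1 : rep.fd;
    }

    int main(void)
    {
        int fd = emulated_open("/etc/motd", 0);
        printf("application got fd %d\n", fd);
        return 0;
    }

The point is that the UNIX API lives in user space: the kernel only moves messages, and each orthogonal service (files, ttys, authentication, networking) is its own server task.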



MacOS (Apple Computer Corporation)
Related to: Rhapsody

Macintosh 
System 7 


Mac OS X in its open source version is known as Darwin.

An in-depth look at Mac OS X with Apple's Ken Bereskin, Director of OS Technologies (article on TechNetCast).

Mac OS X Architecture (PDF) 

NeXTstep 
 



Maruti (University of Maryland)
Maruti is a time-based operating system research project at the University of Maryland. With Maruti 3.0, we are entering a new phase of our project. We have an operating system suitable for field use by a wider range of users, and we are embarking on the integration of our time-based, hard real-time technology with industry standards and more traditional event-based soft- and non-real-time systems.
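To make the contrast between "time-based" and "event-based" concrete, here is a generic sketch of a time-triggered dispatch table. It is not Maruti code; the structure, the task names and all numbers are invented for the illustration.

    /* Generic illustration of time-based (time-triggered) dispatching, not
     * Maruti code. Instead of reacting to events, the system works through a
     * precomputed calendar: each entry says when a task must start and by
     * when it must be finished. All names and numbers are invented. */
    #include <stdio.h>

    struct slot {
        unsigned start_ms;     /* release time within the schedule cycle  */
        unsigned deadline_ms;  /* hard deadline within the schedule cycle */
        const char *task;
    };

    /* A precomputed, verified schedule for one 100 ms cycle. */
    static const struct slot calendar[] = {
        {  0,  20, "read sensors"  },
        { 20,  60, "control loop"  },
        { 60, 100, "log telemetry" },
    };

    int main(void)
    {
        /* Dispatch one cycle; a real kernel would block until start_ms is
         * reached and reject tasks that would miss deadline_ms. */
        for (unsigned i = 0; i < sizeof(calendar) / sizeof(calendar[0]); i++)
            printf("t=%3u ms: run \"%s\" (must finish by %u ms)\n",
                   calendar[i].start_ms, calendar[i].task, calendar[i].deadline_ms);
        return 0;
    }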



Masix (Blaise Pascal Institute MASI Laboratory)

Group Members: Rémy Card, Franck Mével, Julien Simon
Related to: Mach
Masix is a distributed operating system based on the Mach micro-kernel, currently under development at the MASI Laboratory. Its primary goal is the simultaneous execution of multiple personalities, in order to run concurrently on the same workstation applications from the Unix, DOS, OS/2 and Win32 worlds. Furthermore, Masix pools the resources of a local area network of workstations, independently of the personalities that run on each node. Masix also provides distributed services to the personalities.



Merlin (University of Sao Paulo)

An object-oriented, reflective operating system based on the Self programming language.



MetaOS (University of Victoria)
MetaOS is an object-oriented system model, based on meta-levels, meta-spaces, meta-objects, and meta-interfaces, that allows applications to securely customize their run-time environment on the fly. Furthermore, it allows applications to share customizations with other applications, allows different types of security schemes to be implemented, and permits secure, remote troubleshooting of software.



Microkernel
The opposite of a monolithic kernel.

A microkernel is considered the future standard kernel design for operating systems; it delivers nothing but the following (a minimal sketch of point 1 follows the list):
1. a communication layer for interprocess communication,
2. a simple memory management,
3. a minimal task and thread management with scheduling/dispatching,
4. simple I/O management.
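As a concrete example of point 1, here is a minimal sketch of the Mach IPC primitive as it survives in today's Darwin/macOS: a task allocates a port and bounces a small message off it with mach_msg(). It assumes a Mach-based host that provides <mach/mach.h>; on any other system it will not build.

    /* Minimal sketch of Mach IPC (point 1 above): allocate a port, send a
     * small message to it, receive it back. Assumes a Mach-based host
     * (e.g. macOS); error handling is reduced to a bare minimum. */
    #include <mach/mach.h>
    #include <stdio.h>

    struct simple_msg {
        mach_msg_header_t header;
        int payload;
    };

    struct simple_recv {
        mach_msg_header_t header;
        int payload;
        mach_msg_trailer_t trailer;   /* room for the kernel-appended trailer */
    };

    int main(void)
    {
        mach_port_t port;
        if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;

        /* Build and send a message; MAKE_SEND creates the needed send right. */
        struct simple_msg msg = {0};
        msg.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
        msg.header.msgh_size        = sizeof(msg);
        msg.header.msgh_remote_port = port;
        msg.header.msgh_id          = 1;
        msg.payload                 = 42;

        if (mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0, MACH_PORT_NULL,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL) != MACH_MSG_SUCCESS)
            return 1;

        /* Receive it back from the same port. */
        struct simple_recv rcv = {0};
        if (mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv), port,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL) != MACH_MSG_SUCCESS)
            return 1;

        printf("received payload %d on port %u\n", rcv.payload, port);
        return 0;
    }

In a microkernel of this kind the other points in the list (memory, tasks and threads, device I/O) are reached through the same kind of message exchange, either with the kernel or with user-level servers.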

 

Mach 
Chorus 
 



Micro Processor

The microprocessor/microcomputer revolution happened almost unnoticed by most people. It was a real revolution, because afterwards nothing was as before. But it was an artificially slowed-down revolution: it all could have happened ten or even fifteen years earlier.

For the pre-microprocessor age see here...

Around the beginning of the 1970s the semiconductor industry was able to integrate about 1000 transistors on a chip. This was too much for conventional, dedicated ICs, but actually too little for a real microprocessor. So it all started in 1971 with the 4004, a 4-bit processor by Intel with 2300 transistors, which nobody wanted except some fans of homebrew computer systems. The 4004 was hard to use because of its many supply voltages, the unusual 4-bit format, complicated clock requirements and the early EPROMs (the 1701, also 1971), which were hard to program (so my very first computer had a pluggable diode matrix as a short boot ROM!). Essentially it was not a really useful processor: it could address 640 bytes of data memory and executed on the order of 100,000 instructions per second. This changed with the advent of the 8-bit 8008 (which could address 16 KB) and later the 8080, which were delivered with a set of supporting chips that made the development of a microcomputer very much easier (although very few people really had the knowledge at that time, and EPROMs stayed hard to program).

An article in 'Popular Electronics'/'Radio Electronics' made these processors known to the broader public (the MITS Altair). 'BYTE' appeared in September 1975 (issue #1), followed in 1976 by Dr. Dobb's Journal, both with many programs in source code. The final breakthrough came in the US with the advent of the S-100 bus (MITS Altair, later the IMSAI, a clone of the Altair, later many more), and in Germany with the ECB bus. Equipped with CP/M, both kinds of systems gained tremendous importance in the following years, and could only be stopped by the unbelievably cheap Taiwanese clones of the IBM PC.

Parallel to the evolution of the micro went the evolution of memory chips. The Intel C2102, introduced in 1972, made the easy construction of alphanumeric displays possible (1024 bit x 1). An 80x24 alphanumeric display needed 16 of them (2K x 8). Before that, the 1101, introduced in 1969 with 256 bits, required a lot of chips for the same functionality (you needed 64 of them, a real IC graveyard). And it took nearly another decade until halfway useful graphical displays became feasible; before that only analog terminals with vector alphanumerics (40x10) were used. Magnetic core memory was too slow for use in displays. And the somewhat faster H-TTL technology in 1969 could put just one (in numbers: 1) flip-flop on an IC (slow standard TTL: 16). So you would have needed thousands of these TTL ICs, with a sheer incredible power consumption, for a complete computer. Since the LS family of TTL chips was not yet available in 1969, the supply current reached gigantic dimensions even for moderate designs: 1 kByte of 16-bit memory consumed about 100 amps / 500 watts of power.
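The power figure at the end can be checked with a back-of-the-envelope calculation. The sketch below reads "1 kByte of 16-bit memory" as 1K (1024) 16-bit words, takes the 16 flip-flops per standard TTL package from the text, and assumes roughly 100 mA of supply current per package at 5 V; the current figure is an assumption, not a measurement.

    /* Back-of-the-envelope check of the TTL memory claim above. Assumptions:
     * "1 kByte of 16-bit memory" is read as 1K (1024) 16-bit words, a standard
     * TTL package holds 16 flip-flops (as stated in the text), and each
     * package draws roughly 100 mA at 5 V (the current is an assumption). */
    #include <stdio.h>

    int main(void)
    {
        const int    words            = 1024;
        const int    bits_per_word    = 16;
        const int    ff_per_package   = 16;    /* flip-flops per TTL package  */
        const double amps_per_package = 0.10;  /* assumed ~100 mA per package */
        const double supply_volts     = 5.0;

        int    bits     = words * bits_per_word;        /* 16384 flip-flops */
        int    packages = bits / ff_per_package;        /* 1024 packages    */
        double amps     = packages * amps_per_package;  /* ~100 A           */
        double watts    = amps * supply_volts;          /* ~500 W           */

        printf("%d packages, about %.0f A and %.0f W\n", packages, amps, watts);
        return 0;
    }

which lands on roughly the 100 amps / 500 watts quoted above.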

So the upcoming IBM PCs were not so much a technical challenge for the early CP/M systems as an economic one, since the Taiwanese cards and complete PCs were offered at nearly a third of the price of US or European products. What's more, the IBM PC was first equipped with the Intel 8088, already antiquated at the time, something undefinable between 8 bit and 16 bit, more 8 than 16. Motorola had developed a much better product: the 68000. And even better was National's 16032/32032, introduced in 1981 (Wikipedia even reports the late 70s, which would mean two to three years before the IBM PC), at the same time as the IBM PC (introduced in 1981, but actually a useless toy with 32 KB of RAM). National Semiconductor's 32000 was a fantastic full 32-bit engine, fully symmetric, almost equivalent or even superior to the much later 80486/586, developed by people who knew what they were doing: developers from the former VAX team participated as consultants. A full range of support chips was available: memory manager, floating-point unit, timer. It really was the long-awaited VAX on a chip, without the weak points of the VAX.

So in 1981 we had the strange situation that there was a super microprocessor, the 32000; a slightly less good 16-bit processor, the 68000; and an outdated product, the really slow, essentially 8-bit 8088. (Which programmers worldwide hated because of its "brain-damaged architecture", an original quotation from the internet of that time (e.g. Andrew Tanenbaum in the Tanenbaum/Torvalds controversy of January 1992: "brain-dead"). You could read harsher expressions; I omit them here. In practice you couldn't allocate more than 64 KB in one piece with this processor, just as with all 8-bit processors, and if you needed more (and most programs soon needed more) a real nightmare began.) But the race was won by the latter, and the best of them, the National 32K, died an almost unnoticed death. The reason for the triumphal procession of the IBM PC and its clones was that all the people who had absolutely no idea what a computer is decided for the three blue letters. You can't go wrong with that; the three blue letters should know what they are doing, shouldn't they?

By the way, all the big mainframe manufacturers behaved the same way: at first they slept through the evolution of the micro, then they laughed at those strange micros, then they stood by paralyzed, and finally they were lost in the swirl of events. The only one to react (and survive) was IBM: you can say a lot of not-too-nice things about them, but there is no question that they have a (or should I rather say: they are a) superb marketing department, and at some point they grew tired of always hearing the same question: "When will IBM bring out a PC?" (Remember the hype: "THINK!"?)


R.B.

Appearance in historical order (only mainstream processors):

1971: Intel 4004

1973: Intel 8008 (other sources give '72 as the introduction year)

1974/75/76: Intel 8080 / somewhat later Motorola 6800 / later still Rockwell 6500

1976: Zilog Z80

1978: Intel 8086

1979: Intel 8088

1979: Motorola 68000

1981: National 16032 (had many features of the 80486/586, which appeared almost a decade later; Wikipedia even reports "late 1970s" for first availability of the 16032)

1982: Intel 80186 (no substantial new features)

1982: Intel 80286, the processor which could switch into protected mode but not out of it

1983: National 32032 (Wikipedia reports 1984; as far as I remember I had one in my hands in 1983, and there are other very trustworthy sources on the net that say 1983)

1985: Intel 80386

1989: Intel 80486

1993: Intel 80586 (Pentium)




Microsoft 
IBM was the only one of the big computer manufacturers that realized at the beginning of the 1980s the need for a 'personal computer'. More driven by their customers than really determined, they had a personal computer developed on the basis of the (at that time already antiquated) 8088 by a small engineering team. This was the point where Microsoft came into the game. IBM wanted Gary Kildall, the owner of Digital Research and developer of CP/M, to develop an OS for their new PC, but a series of misunderstandings prevented this contract. Instead they asked Microsoft, which at that time was only involved in computer languages, mainly Basic, the only language that Billy understood (and understands to this day?). But shortly before, Billy had cheaply bought a sort of operating system written by Tim Paterson, who had no more use for this 'OS': the 'Quick and Dirty Operating System', QDOS. This was the basis for MS-DOS (and PC-DOS). The first PC was equipped with either cassettes or 5 1/4" floppies. Although hard disks had long been state of the art, MS-DOS/PC-DOS at first couldn't handle them. And although the market had long been delivering bigger hard disks, MS-DOS at first couldn't handle hard disks larger than 10 MB. This was later 'enhanced', first to an incredible 20 MB and then to 32 MB, which remained for a long time the hard barrier for IBM PCs (until the advent of MS-DOS 4/5/6, which followed one another in rapid succession because of the many inherent bugs, until 1993?).

The "Unofficial" Bill Gates 
 
 
 


Minix (outdated)
a tiny Unix-like OS, originally for IBM PC class machines (later also the 386), developed by Andrew Tanenbaum for educational purposes.
 


Monitor 
A monitor is a tiny OS, more a control program for doing I/O. The first operating systems modestly called themselves monitors, like the Fortran Monitor System (FMS), ZMON, etc. The name 'operating system' was in those days reserved for monitors that could handle disks. But even CP/M modestly called itself a monitor (in the tradition of the PDP OSs: PDP-8, PDP-10, PDP-11?).



Monolithic OS 
Unix 
MS-DOS (MS-Windows)

Conventional operating systems are all monolithic OSs.
 


MOSIX (Hebrew University, Jerusalem, Israel)
A solution to the NOW (network of workstations) problem is available in the form of a set of multicomputer operating system enhancements called MOSIX. MOSIX is an enhancement of UNIX which allows users to use the resources of a NOW configuration without any change at the application level. By using transparent, dynamic process migration algorithms, MOSIX extends the network services of UNIX (i.e. NFS, TCP/IP) to the process level, supporting load balancing and dynamic work distribution (leveling) in clusters of homogeneous computers.
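The kind of load-leveling decision described above can be sketched in a few lines. This is only an illustration of the idea, not the MOSIX interface; the node table, the threshold and all names are invented.

    /* Illustrative sketch only, NOT the MOSIX interface. It shows the kind of
     * load-leveling decision described above: migrate a process when another
     * node in the cluster is clearly less loaded. All names are invented. */
    #include <stdio.h>

    #define NODES 4

    /* Hypothetical per-node load figures (e.g. run-queue length averages). */
    static const double load[NODES] = { 3.2, 0.4, 1.1, 2.7 };

    /* Pick a migration target for a process on 'home' node, or -1 to stay. */
    static int pick_target(int home, double threshold)
    {
        int best = home;
        for (int n = 0; n < NODES; n++)
            if (load[n] < load[best])
                best = n;
        /* Only migrate if the gain clearly outweighs the migration cost. */
        return (load[home] - load[best] > threshold) ? best : -1;
    }

    int main(void)
    {
        int target = pick_target(0, 1.0);
        if (target >= 0)
            printf("migrate process from node 0 to node %d\n", target);
        else
            printf("stay on node 0\n");
        return 0;
    }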



 

MS-DOS (Microsoft Disk Operating System) 

The basis for the non-NT line of Windows: Windows 98 etc. MS-DOS was originally developed by Tim Paterson. He gave his operating system the name 'Quick and Dirty Operating System': QDOS. Since he wasn't able to market his system and had no further use for QDOS, he offered Bill Gates his OS for pocket money. IBM had knocked at Billy's door, asking if he couldn't sell them an OS for a new PC they wanted to bring to market. This was the birth of MS-DOS. Later Tim Paterson was hired by Microsoft. Through MS-DOS, Microsoft became the number-one player in the microcomputer software business.

 


MS-Windows  see Windows
Windows 98 and its successors were still built on 16-bit code/MS-DOS, right up to the arrival of Windows XP. Quite contrary to normal engineering concepts, they were built on a weak basis, getting bigger and bigger and mightier towards the top: a pyramid standing on its tip. A true miracle, which will surely find its place in history.



 

MS-Windows-CE   see Windows

Runs on embedded processors, including Advanced RISC Machines' ARM, Hitachi's SH4, and NEC's VR4300. (The OS continues to support the processors handled by the current release, including a variety of x86, PowerPC, MIPS, and SH3 central processing units.) Also runs on Sega's Dreamcast console.

http://www.microsoft.com/windowsce/ 


 

MS-Windows-NT (introduced 1993)  see Windows
Not a DOS upgrade; fully 32-bit. As Billy said: it's essentially Unix.
 


MTOS



Mungi (University of New South Wales)
A new operating system based on a single, flat virtual address space, orthogonal persistence, and a strong but unintrusive protection model.
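To give an idea of how protection can work when everything lives in one flat address space, here is a generic sketch of capability-style checking. It is not the Mungi interface; the data structures, the password-capability scheme shown and all names are just one possible illustration.

    /* Generic illustration of protection in a single flat address space, not
     * the Mungi interface. A capability names a region of the one global
     * address space plus the rights it conveys; the kernel validates
     * capabilities instead of giving each process a private address space.
     * All names and values are invented. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define CAP_READ  0x1u
    #define CAP_WRITE 0x2u

    struct capability {
        uint64_t base;      /* start of the region in the global address space */
        uint64_t length;    /* size of the region                              */
        uint32_t rights;    /* CAP_READ / CAP_WRITE                            */
        uint64_t password;  /* sparse random value making the cap hard to forge */
    };

    /* The kernel's view: the password registered for this region. */
    static const uint64_t registered_password = 0x5eedf00dcafe1234ull;

    static bool validate(const struct capability *cap, uint64_t addr,
                         uint32_t wanted_rights)
    {
        if (cap->password != registered_password)          return false;
        if ((cap->rights & wanted_rights) != wanted_rights) return false;
        return addr >= cap->base && addr < cap->base + cap->length;
    }

    int main(void)
    {
        struct capability cap = { 0x100000000ull, 4096, CAP_READ,
                                  0x5eedf00dcafe1234ull };
        printf("read  at +16: %s\n",
               validate(&cap, 0x100000010ull, CAP_READ)  ? "ok" : "fault");
        printf("write at +16: %s\n",
               validate(&cap, 0x100000010ull, CAP_WRITE) ? "ok" : "fault");
        return 0;
    }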

 

Multics (outdated)
Developed after CTSS. The essential point about MULTICS is that Unix somehow evolved out of the Multics project: Unix was to be a Multics without the flaws of Multics.
     "As Multics developed further, Honeywell contracted with the Air Force to add features to extend Multics access control to match the traditional military security model of SECRET, TOP SECRET, and so on. This was a natural extension of the system, and it came with money we needed. (Many technical decisions on Multics were ones that led to extra people or funding.) The goal of the Air Force project was to come up with a timesharing system that could be used by more than one clearance level of user, such that no user could get at data they weren't cleared to have. The Air Force team was led by Roger Schell; they also had a brilliant team from MITRE working with them. This project was around 1972 - 1974.
     "The whole project was called Project GUARDIAN. Honeywell was responsible for adding features to the system's resource management and access control. The MITRE crew laid down some basic theory. A team from MITRE and Air Force looked for security problems in the existing system: this tiger team called themselves Project ZARF.

The Multics Collaboration 
     In 1964, MIT joined with GE and AT&T in a project designed to implement time-sharing by developing a new computer and a new operating system. The joint research project among GE, MIT, and AT&T was created to extend time-sharing techniques from a pilot program into a useful prototype for the future information utility.(15) The researchers realized that there was no existing computer that would meet the demands of time-sharing. Therefore part of the goal of their collaboration was to make it possible to develop a new computer as well as a new operating system.
     The collaborative project was called Multics [Multiplexed Information and Computing Service] and was to be implemented on the GE 645 computer.(16) Technical leadership of the project included F. J. Corbato from MIT and V. A. Vyssotsky from Bell Labs. "One of the overall design goals is to create a computing system," they wrote, "which is capable of meeting almost all of the present and near-future requirements of a large computer utility. Such systems must run continuously and reliably 7 days a week, 24 hours a day in a way similar to telephone or power systems, and must be capable of meeting wide service demands: from multiple man-machine interaction to the sequential processing of absentee-user jobs..."(17)
     The goal of the research was to produce a prototype time-sharing system. Berkley Tague, one of the Bell Labs researchers involved in the Multics project, writes, "The Multics Project was a joint project of Bell Labs, the GE Computer Systems Division, and MIT's Project MAC to develop a new computer and operating system that would replace MIT's CTSS system, Bell Labs' BESYS, and support the new GE machine."(18) Though AT&T withdrew from the project in 1969, the joint work achieved significant results. Summarizing these achievements, Tague writes, "Multics was one of the seminal efforts in computing science and operating system design. It established principles and features of operating system design that are taken for granted today in any modern operating system."(19)
(On the Early History and Impact of Unix: Tools to Build the Tools for a New Millennium, Chapter 9 of Ronda & Michael Hauben's "Netizens Netbook")

Project Guardian 
Project Guardian grew out of the ARPA support for Multics and the sale of Multics systems to the US Air Force. USAF wanted a system that could be used to handle more than one security classification of data at a time. They contracted with Honeywell and MITRE to figure out how to do this. Project Guardian led to the creation of the Access Isolation Mechanism (AIM), the forerunner of the B2 labeling and star property support in Multics. The DoD Orange Book was influenced by the experience in building secure systems gained in Project Guardian. Also involved: CISL. 

Orange Book 
Standards document produced by the National Computer Security Center.  DOD 5200.28-STD, December 1985. Describes levels of security for computer systems. Roger Schell was the main driver behind this document. 

Access Isolation Mechanism. 
The underpinnings for multilevel security. This facility is a part of every Multics system shipped. It enforces classification of information and authorization of users, augmenting the Multics ACL-based access control mechanism with a mandatory access control policy known as the Star Property. Produced by Project Guardian, and crucial in the eventual B2 rating for Multics. 

access control 
The Multics feature that checks if a user can do something. The user identity, established at login, is checked against the ACL of the thing being accessed. [TVV] 
     User access to segments is enforced by the hardware in bits in the SDW (see REWPUG). Segment control, which keeps track of all of the processes having SDWs for a segment (via a database called the system trailer segment, str_seg) is equipped to revoke access to a segment instantly when the ACL of a segment is changed. The connect mechanism assures that in spite of the associative memories in which access can be cached, access can be revoked in mid-instruction (see EIS) if need be. [BSG] 
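The interplay of the discretionary ACL check and the mandatory AIM/star-property rule can be sketched in a few lines. This is only an illustration of the two checks described above, not Multics code; the data structures and names are invented.

    /* Illustrative sketch only, not Multics code. It combines the two checks
     * described above: a discretionary ACL lookup and the mandatory "star
     * property" (no read above your clearance, no write below it). */
    #include <stdbool.h>
    #include <string.h>
    #include <stdio.h>

    enum level { UNCLASSIFIED = 0, SECRET = 1, TOP_SECRET = 2 };
    enum mode  { READ, WRITE };

    struct acl_entry { const char *user; bool can_read, can_write; };

    struct segment {
        enum level classification;
        struct acl_entry acl[4];
        int acl_len;
    };

    struct user { const char *name; enum level clearance; };

    static bool access_allowed(const struct user *u, const struct segment *s,
                               enum mode m)
    {
        /* Mandatory (AIM-style) check. */
        if (m == READ  && s->classification > u->clearance) return false; /* no read up    */
        if (m == WRITE && s->classification < u->clearance) return false; /* no write down */

        /* Discretionary (ACL) check. */
        for (int i = 0; i < s->acl_len; i++)
            if (strcmp(s->acl[i].user, u->name) == 0)
                return m == READ ? s->acl[i].can_read : s->acl[i].can_write;
        return false;
    }

    int main(void)
    {
        struct segment seg  = { SECRET, { { "schell", true, true } }, 1 };
        struct user    subj = { "schell", TOP_SECRET };
        printf("read:  %s\n", access_allowed(&subj, &seg, READ)  ? "granted" : "denied");
        printf("write: %s\n", access_allowed(&subj, &seg, WRITE) ? "granted" : "denied");
        return 0;
    }

In the example a TOP SECRET user may read a SECRET segment but not write to it: no read up, no write down.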

     Publications: M. Schroeder, D. Clark, and J. Saltzer. The MULTICS kernel design project. In: Proceedings of the 6th Symposium on Operating Systems Principles, pages 43-56. ACM, November 1977.
 

 

MVS (IBM) 
MVS/TSO on IBM 370 & 3033 
 
 

 
