All about OSs |
A / B / C / D / E / F / G / H / I / J / K / L / M / N / O / P / Q / R / S / T / U / V / W / X / Y / Z
Visit also: www.sunorbit.net
N | Nemesis (University
of Cambridge, UK) The Pegasus project has produced an entirely new operating system whose design is geared to supporting high-performance applications that require a consistent quality of service (QoS), such as those using multimedia. This operating system, called Nemesis, currently runs on a number of platforms including Intel x86, Alpha (21064 and 21164) and StrongARM. Nemesis is a single-address-space system with an extremely lightweight kernel (the 'Nemesis Trusted Supervisor Code', or NTSC) and a strong emphasis on performing operating-system functions in the user's domain, thus avoiding the need for expensive protection-regime changes.
TCP/IP: not an OS but a protocol.
NeXT, the computer of which nothing but the OS remained. Steve Jobs' marketing child after Apple, all that is left of NeXT is the remarkably good NeXTSTEP/OPENSTEP operating system, a Mach-based, mostly-UNIX system with a very good user interface and programmer environment. It is now available for the x86 PC as OPENSTEP, with a free reimplementation in GNUstep.
Windows NT, Windows XP: Journaling Filesystem
Windows NTFS is a journaling file system. Journaling is a technique long known from database systems: a journaling, or transaction-oriented, file system writes down every step it takes before taking it. A journaling file system can only exist on top of a conventional file system, so let's first have a look at the 'real' NTFS file system.
It is often stated that MS-DOS is essentially a modestly refined CP/M. That's a rather crude simplification. From an administration point of view, MS-DOS is quite the contrary of CP/M. CP/M had much more in common with the inode system of Unix, even sharing the design flaw of fixing the maximum number of directory entries. If you see the layout of NTFS for the first time, your first impression is: ah, good old CP/M! This impression is only reinforced the deeper you look into the system. Maybe that's the reason why Microsoft is so sparse with information about NTFS. But to be honest: at second glance you also see that it is a reworked, re-engineered CP/M, and a much better CP/M than CP/M ever was (which was not hard to do).
Now to the facts. As was the case with CP/M, NTFS has a compact directory, called the 'Master File Table', or MFT for short. Unlike CP/M, this MFT is not fixed at the beginning of the disc (a pointer in the boot block points to it), and unlike CP/M it is not fixed in size. Like CP/M, it is a linear sequence of fixed-size records (1 KB in size; 4 KB prior to NT 4.0; 128 bytes in CP/M). Each MFT entry or record describes one file or (sub)directory; extension records are used if the whole description doesn't fit into one record. Unlike CP/M, the MFT allows subdirectories. These subdirectories are contained completely in the MFT if they are small enough to fit; otherwise their information is broken into multiple data records, referenced from the root entry in the MFT.
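The journaling idea described above, writing every intended step to a log before carrying it out so that a consistent state can be rebuilt after a crash, can be sketched in deliberately simplified Python. This is an illustration of the general technique, not of NTFS's actual $LogFile format; all names here are made up:

```python
# Minimal write-ahead-journal sketch: every update is appended to a redo
# log before it is applied, and recovery replays only committed entries.

class JournaledDict:
    """A dict whose updates survive a 'crash' via a redo log."""

    def __init__(self):
        self.data = {}       # the 'real' file-system state
        self.log = []        # the journal: (txn_id, op, key, value) tuples
        self._next_txn = 0

    def update(self, key, value):
        txn = self._next_txn
        self._next_txn += 1
        self.log.append((txn, "set", key, value))     # 1. log the intent
        self.log.append((txn, "commit", None, None))  # 2. mark it committed
        self.data[key] = value                        # 3. apply the change

    @staticmethod
    def recover(log):
        """Rebuild state from the log, ignoring uncommitted transactions."""
        committed = {t for (t, op, _, _) in log if op == "commit"}
        state = {}
        for txn, op, key, value in log:
            if op == "set" and txn in committed:
                state[key] = value
        return state

fs = JournaledDict()
fs.update("file.txt", "hello")
fs.log.append((99, "set", "lost.txt", "x"))  # 'crash' before the commit record
print(JournaledDict.recover(fs.log))  # {'file.txt': 'hello'}
```

The uncommitted write to 'lost.txt' is discarded on recovery; that is the whole point of logging the commit marker separately from the intent.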
The MFT is a normal file and as such can be placed anywhere on the disc. (This feature was not so much intended as a way to minimize head movement on the disc, by placing the directory in the middle of the data area, but more a concession to the imperfect discs of those days. The importance of this fact is again only historically understandable: since the hard discs of the eighties and early nineties had no defect management, a disk with bad spots at its beginning was not usable without this feature.) The first 16 MFT records are reserved for metadata. The first of these describes the MFT itself, that is, the blocks where the MFT is to be found. It is followed by a redundant copy of this data, by a record describing the log file for the journal, by the volume file, attribute definitions, the root directory, the allocation bitmap, and so on. We can't go deeper into this here; maybe I will append a comprehensive description if there is enough interest. One point should be mentioned which again shows how easily file systems become obsolete: one entry is reserved for the map of bad blocks on the disk. Since today's intelligent hard disks manage bad blocks by themselves, there is naturally no more need for this table. Again as in CP/M, the free space on the volume is tracked with an allocation bitmap. This is a very effective and fast way to survey the free blocks of disc space, since the bitmap can easily be held in memory and thus searched at maximum speed. What's more, since the whole directory organization is very compact, it should be clear that search paths are minimized and the whole administration can be done in RAM. This is the complete layout of the NTFS system.
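The allocation-bitmap idea is simple enough to sketch in a few lines of Python. This is a sketch of the general technique, not of NTFS's actual $Bitmap layout; the disk size is a made-up illustration value:

```python
# Free-space tracking with an allocation bitmap: bit i set means block i
# is in use. Searching for a run of free blocks is a linear bit scan,
# which is fast precisely because the whole map fits in memory.

class BlockBitmap:
    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.bits = bytearray((n_blocks + 7) // 8)  # one bit per block

    def is_free(self, block):
        return not (self.bits[block // 8] >> (block % 8)) & 1

    def allocate(self, count):
        """Find and claim `count` consecutive free blocks; return the start."""
        run = 0
        for b in range(self.n_blocks):
            run = run + 1 if self.is_free(b) else 0
            if run == count:
                start = b - count + 1
                for i in range(start, b + 1):
                    self.bits[i // 8] |= 1 << (i % 8)  # mark as in use
                return start
        raise OSError("disk full")

bm = BlockBitmap(64)
a = bm.allocate(10)   # claims blocks 0..9
b = bm.allocate(5)    # claims blocks 10..14
print(a, b)           # 0 10
```

Preferring a whole consecutive run, as `allocate` does here, is also the simplest form of the anti-fragmentation behaviour discussed below.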
Sure, there are some other noteworthy facts and enhancements over the CP/M system. Small files (and there are many of them in a normal system) can be stored completely in the MFT record itself (the 'immediate file': Mullender and Tanenbaum, 1987). NTFS knows 'runs' of consecutive blocks, which means that not every single block gets noted, but a 'disk-address, block-count' pair, thus conserving space. Additional compression methods help keep not only record data but other data small as well. There are some more refinements, such as large (sub)directories using a different format than small ones: while small directories are organized as a simple list, large directories are organized as a B+ tree, which makes alphabetical lookup easier and insertions in the proper place easier to accomplish.
Redundancy: yes, there is a certain redundancy in NTFS, but there could be more. It seems that at the time NTFS was created, the trust in a journaling file system was high. NTFS also uses a simplified variant of the anti-fragmentation algorithm described under File systems. But the MFT itself can become fragmented, which reduces throughput. Although NTFS tries to keep 'head-room' for the MFT by reserving the blocks following it, this can no longer be guaranteed on a rather full disk. For annotations on journaling, see File systems.
Any other criticisms? The fixed 1 KB record size seems a little high, especially for small to medium-sized (not immediate) files. (As usual there is a trade-off between administration speed and wasted space, especially if the administration is done more or less on disc. But the space wasted here seems tolerable. With NTFS prior to NT 4.0 this criticism was more appropriate: the record size was 4 KB back then; afterwards Microsoft pared the size of a record down to 1 KB.) A little more redundancy would have been a good idea, and a somewhat cleaner design also. From a puristic point of view this system has the same design flaw as the Unix file system, but since the MFT is not fixed in size and is extensible, this criticism itself seems a little puristic.
If you need more in-depth information on NTFS, look here:
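The 'runs' (extent) encoding mentioned above can be sketched in a few lines of Python. This illustrates the space-saving idea only, not NTFS's actual on-disk run-list encoding:

```python
# Run-length ("extent") encoding of a file's block list: consecutive
# blocks are stored as (start, count) pairs instead of one entry per block.

def to_runs(blocks):
    """Compress a sorted list of block numbers into (start, count) runs."""
    runs = []
    for b in blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)  # extend current run
        else:
            runs.append((b, 1))                        # start a new run
    return runs

def from_runs(runs):
    """Expand (start, count) runs back into the full block list."""
    return [start + i for start, count in runs for i in range(count)]

blocks = [100, 101, 102, 103, 500, 501, 502, 9000]
runs = to_runs(blocks)
print(runs)                      # [(100, 4), (500, 3), (9000, 1)]
assert from_runs(runs) == blocks
```

An unfragmented file, however large, thus needs only a single pair; the cost of the scheme grows with fragmentation, not with file size.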
All trademarks and trade names are recognized as property of their owners