Archive for the ‘intel’ Category

2007/09/19 End of Moore’s Law in 10-15 years?

Wednesday, September 19th, 2007

Could it really be? (story from slashdot.org):

Hardware: End of Moore’s Law in 10-15 years?

Posted by CmdrTaco on Wednesday September 19, @10:52AM
from the no-for-real-this-time dept.


javipas writes “In 1965 Gordon Moore — Intel’s co-founder — predicted that the number of transistors on integrated circuits would double every two years. Moore’s Law has been with us for over 40 years, but it seems that the limits of microelectronics are now not that far from us. Moore has predicted the end of his own law in 10 to 15 years, but he predicted that end before, and failed.”
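To put “doubling every two years” in concrete terms, here is a quick back-of-the-envelope sketch in C. The starting figure of roughly 820 million transistors (about a 2007 high-end quad-core die) is just an assumption for the sake of illustration, not something from the article:

#include <stdio.h>

/* Back-of-the-envelope look at Moore's Law: transistor counts doubling
 * every two years. The ~820 million starting point is an assumed 2007
 * figure used only for this illustration. */
int main(void)
{
    double transistors = 820e6;
    for (int year = 2007; year <= 2021; year += 2) {
        printf("%d: ~%.1f billion transistors\n", year, transistors / 1e9);
        transistors *= 2.0;   /* one doubling per two-year step */
    }
    return 0;
}

Run out to the 10-15 year horizon Moore mentions, the numbers land in the hundred-billion-transistor range, which is roughly where the physical limits he is talking about start to bite.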

2007/09/07 Server Benchmarking Lone Wolf Bites Intel Again

Friday, September 7th, 2007

Here is another thing that’s biting Intel in the butt again:

Hardware: Server Benchmarking Lone Wolf Bites Intel Again

Posted by ScuttleMonkey on Friday September 07, @02:53PM
from the everyone-loves-a-homecourt-ruling dept.

 

Ian Lamont writes “Neal Nelson, the engineer who conducts independent server benchmarking, has nipped Intel again by reporting that AMD’s Opteron chips ‘delivered better power efficiency‘ than Xeon processors. Intel has discounted the findings, claiming that Nelson’s methodology ‘ignores performance,’ but the company may not be able to ignore Nelson for much longer: the Standard Performance Evaluation Corp., a nonprofit company that develops computing benchmarks, is expected to publish a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug.”
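The dispute really comes down to which metric you trust: watts measured at the wall, or work done per watt. Here is a minimal sketch of a throughput-per-watt calculation, with made-up numbers that are not from Nelson’s or SPEC’s benchmarks:

#include <stdio.h>

/* Toy performance-per-watt comparison. The throughput and wall-plug
 * power figures below are invented placeholders, not measurements
 * from Nelson's tests or the upcoming SPEC suite. */
struct server {
    const char *name;
    double transactions_per_sec;   /* measured throughput          */
    double watts_at_wall;          /* power drawn at the wall plug */
};

int main(void)
{
    struct server servers[] = {
        { "Server A", 12000.0, 350.0 },
        { "Server B", 15000.0, 450.0 },
    };
    for (int i = 0; i < 2; i++)
        printf("%s: %.1f transactions/sec per watt\n",
               servers[i].name,
               servers[i].transactions_per_sec / servers[i].watts_at_wall);
    return 0;
}

A box that draws fewer watts can still lose on this kind of metric if it gets less work done, which is presumably the point Intel is making when it says the methodology “ignores performance.”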

2007/08/03 Sun To Release 8-Core Niagara 2 Processor

Friday, August 3rd, 2007

Here is a good story from Slashdot:

  An anonymous reader writes “Sun Microsystems is set to announce its eight-core Niagara 2 processor next week. Each core supports eight threads, so the chip handles 64 simultaneous threads, making it the centerpiece of Sun’s “Throughput Computing” effort. Along with having more cores than the quads from Intel and AMD, the Niagara 2 has dual, on-chip 10G Ethernet ports with cryptographic capability. Sun doesn’t get much processor press, because the chips are used only in its own CoolThreads servers, but Niagara 2 will probably be the fastest processor out there when it’s released, other than perhaps the also little-known 4-GHz IBM Power 6.”

That is all for today.

2007/07/27 Slashdot: A Historical Look at the First Linux Kernel

Friday, July 27th, 2007

This is an article on Slashdot taking a look at the historical Linux kernel 0.01:

LinuxFan writes “KernelTrap has a fascinating article about the first Linux kernel, version 0.01, complete with source code and photos of Linus Torvalds as a young man attending the University of Helsinki. Torvalds originally planned to call the kernel “Freax,” and in his first announcement noted, “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones.” He also stressed that the kernel was very much tied to the i386 processor, “simply, I’d say that porting is impossible.” Humble beginnings.”

Now for the real article itself:

“This is a free minix-like kernel for i386(+) based AT-machines,” began the Linux version 0.01 release notes in September of 1991 for the first release of the Linux kernel. “As the version number (0.01) suggests this is not a mature product. Currently only a subset of AT-hardware is supported (hard-disk, screen, keyboard and serial lines), and some of the system calls are not yet fully implemented (notably mount/umount aren’t even implemented).” Booting the original 0.01 Linux kernel required bootstrapping it with minix, and the keyboard driver was written in assembly and hard-wired for a Finnish keyboard. The listed features were mostly presented as a comparison to minix and included efficient use of the 386 chip rather than the older 8088, system calls rather than message passing, a fully multithreaded FS, minimal task switching, and visible interrupts. Linus Torvalds noted, “the guiding line when implementing linux was: get it working fast. I wanted the kernel simple, yet powerful enough to run most unix software.” In a section titled “Apologies :-)” he noted:

“This isn’t yet the ‘mother of all operating systems’, and anyone who hoped for that will have to wait for the first real release (1.0), and even then you might not want to change from minix. This is a source release for those that are interested in seeing what linux looks like, and it’s not really supported yet.”

Linus had originally intended to call the new kernel “Freax”. According to Wikipedia, the name Linux was actually invented by Ari Lemmke who maintained the ftp.funet.fi FTP server from which the kernel was originally distributed.

The initial post that Linus made about Linux was to the comp.os.minix Usenet group, titled “What would you like to see most in minix?”. It began:

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).”

Later in the same thread, Linus went on to talk about how unportable the code was:

“Simply, I’d say that porting is impossible. It’s mostly in C, but most people wouldn’t call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It’s the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data – max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task – tough cookies).

“It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn’t need so much assembly language objects. Some of my ‘C’-files (specifically mm.c) are almost as much assembler as C. It would be ‘interesting’ even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery).

“Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state-machine?). All in all it’s a porters nightmare. “

Indeed, Linux 1.0 was released on March 13th, 1994, supporting only the 32-bit i386 architecture. However, by the release of Linux 1.2 on March 7th, 1995, it had already been ported to 32-bit MIPS, 32-bit SPARC, and the 64-bit Alpha. By the release of Linux 2.0 on June 9th, 1996, support had also been added for the 32-bit m68k and 32-bit PowerPC architectures. And jumping forward to the Linux 2.6 kernel, first released in December 2003, it has been and continues to be ported to numerous additional architectures.


Linux 0.01 release notes:

		Notes for linux release 0.01

		0. Contents of this directory

linux-0.01.tar.Z	- sources to the kernel
bash.Z			- compressed bash binary if you want to test it
update.Z		- compressed update binary
RELNOTES-0.01		- this file

		1. Short intro

This is a free minix-like kernel for i386(+) based AT-machines.  Full
source is included, and this source has been used to produce a running
kernel on two different machines.  Currently there are no kernel
binaries for public viewing, as they have to be recompiled for different
machines.  You need to compile it with gcc (I use 1.40, don't know if
1.37.1 will handle all __asm__-directives), after having changed the
relevant configuration file(s).

As the version number (0.01) suggests this is not a mature product.
Currently only a subset of AT-hardware is supported (hard-disk, screen,
keyboard and serial lines), and some of the system calls are not yet
fully implemented (notably mount/umount aren't even implemented).  See
comments or readme's in the code.

This version is also meant mostly for reading - ie if you are interested
in how the system looks like currently.  It will compile and produce a
working kernel, and though I will help in any way I can to get it
working on your machine (mail me), it isn't really supported.  Changes
are frequent, and the first "production" version will probably differ
wildly from this pre-alpha-release.

Hardware needed for running linux:
	- 386 AT
	- VGA/EGA screen
	- AT-type harddisk controller (IDE is fine)
	- Finnish keyboard (oh, you can use a US keyboard, but not
	  without some practise :-)

The Finnish keyboard is hard-wired, and as I don't have a US one I
cannot change it without major problems. See kernel/keyboard.s for
details. If anybody is willing to make an even partial port, I'd be
grateful. Shouldn't be too hard, as it's tabledriven (it's assembler
though, so ...)
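To make the “tabledriven” remark concrete: the driver looks raw scan-codes up in a translation table, so supporting another layout is mostly a matter of swapping the table rather than rewriting logic. A rough C sketch of the idea follows; the real 0.01 driver lives in kernel/keyboard.s and is written in assembly, so this is only an illustration:

/* Illustration of table-driven scan-code translation, the approach the
 * assembly in kernel/keyboard.s takes. This tiny, incomplete US-layout
 * table is invented for the example. */
static const char us_keymap[128] = {
    [2]  = '1', [3]  = '2', [4]  = '3', [5]  = '4',
    [16] = 'q', [17] = 'w', [18] = 'e', [19] = 'r',
    [30] = 'a', [31] = 's', [32] = 'd', [33] = 'f',
};

/* Porting to another keyboard means replacing the table, not the code. */
char translate_scancode(unsigned char code)
{
    if (code < 128 && us_keymap[code])
        return us_keymap[code];
    return 0;   /* unmapped keys and key-release codes */
}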

Although linux is a complete kernel, and uses no code from minix or
other sources, almost none of the support routines have yet been coded.
Thus you currently need minix to bootstrap the system. It might be
possible to use the free minix demo-disk to make a filesystem and run
linux without having minix, but I don't know...

		2. Copyrights etc

This kernel is (C) 1991 Linus Torvalds, but all or part of it may be
redistributed provided you do the following:

	- Full source must be available (and free), if not with the
	  distribution then at least on asking for it.

	- Copyright notices must be intact. (In fact, if you distribute
	  only parts of it you may have to add copyrights, as there aren't
	  (C)'s in all files.) Small partial excerpts may be copied
	  without bothering with copyrights.

	- You may not distibute this for a fee, not even "handling"
	  costs.

Mail me at [email blocked] if you have any questions.

Sadly, a kernel by itself gets you nowhere. To get a working system you
need a shell, compilers, a library etc. These are separate parts and may
be under a stricter (or even looser) copyright. Most of the tools used
with linux are GNU software and are under the GNU copyleft. These tools
aren't in the distribution - ask me (or GNU) for more info.

		3. Short technical overview of the kernel.

The linux kernel has been made under minix, and it was my original idea
to make it binary compatible with minix. That was dropped, as the
differences got bigger, but the system still resembles minix a great
deal. Some of the key points are:

	- Efficient use of the possibilities offered by the 386 chip.
	  Minix was written on a 8088, and later ported to other
	  machines - linux takes full advantage of the 386 (which is
	  nice if you /have/ a 386, but makes porting very difficult)

	- No message passing, this is a more traditional approach to
	  unix. System calls are just that - calls. This might or might
	  not be faster, but it does mean we can dispense with some of
	  the problems with messages (message queues etc). Of course, we
	  also miss the nice features :-p.

	- Multithreaded FS - a direct consequence of not using messages.
	  This makes the filesystem a bit (a lot) more complicated, but
	  much nicer. Coupled with a better scheduler, this means that
	  you can actually run several processes concurrently without
	  the performance hit induced by minix.

	- Minimal task switching. This too is a consequence of not using
	  messages. We task switch only when we really want to switch
	  tasks - unlike minix which task-switches whatever you do. This
	  means we can more easily implement 387 support (indeed this is
	  already mostly implemented)

	- Interrupts aren't hidden. Some people (among them Tanenbaum)
	  think interrupts are ugly and should be hidden. Not so IMHO.
	  Due to practical reasons interrupts must be mainly handled by
	  machine code, which is a pity, but they are a part of the code
	  like everything else. Especially device drivers are mostly
	  interrupt routines - see kernel/hd.c etc.

	- There is no distinction between kernel/fs/mm, and they are all
	  linked into the same heap of code. This has it's good sides as
	  well as bad. The code isn't as modular as the minix code, but
	  on the other hand some things are simpler. The different parts
	  of the kernel are under different sub-directories in the
	  source tree, but when running everything happens in the same
	  data/code space.

The guiding line when implementing linux was: get it working fast. I
wanted the kernel simple, yet powerful enough to run most unix software.
The file system I couldn't do much about - it needed to be minix
compatible for practical reasons, and the minix filesystem was simple
enough as it was. The kernel and mm could be simplified, though:

	- Just one data structure for tasks. "Real" unices have task
	  information in several places, I wanted everything in one
	  place.

	- A very simple memory management algorithm, using both the
	  paging and segmentation capabilities of the i386. Currently
	  MM is just two files - memory.c and page.s, just a couple of
	  hundreds of lines of code.

These decisions seem to have worked out well - bugs were easy to spot,
and things work.

		4. The "kernel proper"

All the routines handling tasks are in the subdirectory "kernel". These
include things like 'fork' and 'exit' as well as scheduling and minor
system calls like 'getpid' etc. Here are also the handlers for most
exceptions and traps (not page faults, they are in mm), and all
low-level device drivers (get_hd_block, tty_write etc). Currently all
faults lead to a exit with error code 11 (Segmentation fault), and the
system seems to be relatively stable ("crashme" hasn't - yet).

		5. Memory management

This is the simplest of all parts, and should need only little changes.
It contains entry-points for some things that the rest of the kernel
needs, but mostly copes on it's own, handling page faults as they
happen. Indeed, the rest of the kernel usually doesn't actively allocate
pages, and just writes into user space, letting mm handle any possible
'page-not-present' errors.

Memory is dealt with in two completely different ways - by paging and
segmentation.  First the 386 VM-space (4GB) is divided into a number of
segments (currently 64 segments of 64Mb each), the first of which is the
kernel memory segment, with the complete physical memory identity-mapped
into it.  All kernel functions live within this area.

Tasks are then given one segment each, to use as they wish. The paging
mechanism sees to filling the segment with the appropriate pages,
keeping track of any duplicate copies (created at a 'fork'), and making
copies on any write. The rest of the system doesn't need to know about
all this.
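The arithmetic behind that layout is simple enough to show directly: 4GB of linear address space divided into 64MB segments gives exactly 64 of them, with segment 0 holding the kernel and each task's addresses starting at its segment number times 64MB. A small sketch of that calculation (an illustration of the scheme described above, not code from the 0.01 sources):

#include <stdio.h>

/* Illustration of the 0.01 layout described above: a 4GB linear address
 * space carved into 64 segments of 64MB each, the first belonging to
 * the kernel with physical memory identity-mapped into it. */
#define SEGMENT_SIZE   (64ULL * 1024 * 1024)         /* 64 MB */
#define ADDRESS_SPACE  (4ULL * 1024 * 1024 * 1024)   /* 4 GB  */

int main(void)
{
    printf("segments available: %llu\n", ADDRESS_SPACE / SEGMENT_SIZE);

    /* Base linear address of the first few segments (0 = kernel). */
    for (unsigned long long seg = 0; seg < 4; seg++)
        printf("segment %llu: base = 0x%08llx\n", seg, seg * SEGMENT_SIZE);
    return 0;
}

This is also why the limits mentioned later in the Usenet thread (64 tasks of 64MB, or 32MB and 128 tasks after a recompile) trade off against each other: the product always has to fit in the same 4GB.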

		6. The file system

As already mentioned, the linux FS is the same as in minix. This makes
crosscompiling from minix easy, and means you can mount a linux
partition from minix (or the other way around as soon as I implement
mount :-). This is only on the logical level though - the actual
routines are very different.

	NOTE! Minix-1.6.16 seems to have a new FS, with minor
	modifications to the 1.5.10 I've been using. Linux
	won't understand the new system.

The main difference is in the fact that minix has a single-threaded
file-system and linux hasn't. Implementing a single-threaded FS is much
easier as you don't need to worry about other processes allocating
buffer blocks etc while you do something else. It also means that you
lose some of the multiprocessing so important to unix.

There are a number of problems (deadlocks/raceconditions) that the linux
kernel needed to address due to multi-threading.  One way to inhibit
race-conditions is to lock everything you need, but as this can lead to
unnecessary blocking I decided never to lock any data structures (unless
actually reading or writing to a physical device).  This has the nice
property that dead-locks cannot happen.

Sadly it has the not so nice property that race-conditions can happen
almost everywhere.  These are handled by double-checking allocations etc
(see fs/buffer.c and fs/inode.c).  Not letting the kernel schedule a
task while it is in supervisor mode (standard unix practise), means that
all kernel/fs/mm actions are atomic (not counting interrupts, and we are
careful when writing those) if you don't call 'sleep', so that is one of
the things we can count on.
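In other words, because the kernel never reschedules a task while it is running in supervisor mode, the only places another process can slip in are the calls that may sleep, and the code simply re-checks its assumptions after each of them. A condensed sketch of that double-checking pattern, in the spirit of fs/buffer.c but not the actual 0.01 code (the struct and helpers here are hypothetical stand-ins):

#include <stddef.h>

/* Sketch of the "double-check after sleeping" pattern described above.
 * NOT the real fs/buffer.c; the types and helpers are invented for the
 * illustration. Nothing is locked - instead, every call that may sleep
 * is followed by a re-check of the assumptions made before it. */
struct buffer_head {
    int b_dev;
    int b_blocknr;
    int b_locked;
};

extern struct buffer_head *find_buffer(int dev, int block);   /* no sleep  */
extern struct buffer_head *get_free_buffer(void);             /* may sleep */
extern void wait_on_buffer(struct buffer_head *bh);           /* may sleep */
extern void release_buffer(struct buffer_head *bh);

struct buffer_head *getblk(int dev, int block)
{
    struct buffer_head *bh;

repeat:
    if ((bh = find_buffer(dev, block)) != NULL) {
        if (bh->b_locked)
            wait_on_buffer(bh);                 /* may sleep ...              */
        if (bh->b_dev != dev || bh->b_blocknr != block)
            goto repeat;                        /* ... so re-check the buffer */
        return bh;
    }

    bh = get_free_buffer();                     /* may also sleep */
    if (find_buffer(dev, block) != NULL) {      /* did someone beat us to it? */
        release_buffer(bh);
        goto repeat;
    }
    bh->b_dev = dev;
    bh->b_blocknr = block;
    return bh;
}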

		7. Apologies :-)

This isn't yet the "mother of all operating systems", and anyone who
hoped for that will have to wait for the first real release (1.0), and
even then you might not want to change from minix.  This is a source
release for those that are interested in seeing what linux looks like,
and it's not really supported yet.  Anyone with questions or suggestions
(even bug-reports if you decide to get it working on your system) is
encouraged to mail me.

		8. Getting it working

Most hardware dependancies will have to be compiled into the system, and
there a number of defines in the file "include/linux/config.h" that you
have to change to get a personalized kernel.  Also you must uncomment
the right "equ" in the file boot/boot.s, telling the bootup-routine what
kind of device your A-floppy is.  After that a simple "make" should make
the file "Image", which you can copy to a floppy (cp Image /dev/PS0 is
what I use with a 1.44Mb floppy).  That's it.

Without any programs to run, though, the kernel cannot do anything. You
should find binaries for 'update' and 'bash' at the same place you found
this, which will have to be put into the '/bin' directory on the
specified root-device (specified in config.h). Bash must be found under
the name '/bin/sh', as that's what the kernel currently executes. Happy
hacking.

		Linus Torvalds		[email blocked]
		Petersgatan 2 A 2
		00140 Helsingfors 14
		FINLAND

First posting about Linux:

From: Linus Benedict Torvalds
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Date: 3 Jul 91 10:00:50 GMT

Hello netlanders,

Due to a project I'm working on (in minix), I'm interested in the posix
standard definition. Could somebody please point me to a (preferably)
machine-readable format of the latest posix rules? Ftp-sites would be
nice.

As an aside for all using gcc on minix - the new version (1.40) has been
out for some weeks, and I decided to test what needed to be done to get
it working on minix (1.37.1, which is the version you can get from
plains is nice, but 1.40 is better :-).  To my surpice, the answer
turned out to be - NOTHING! Gcc-1.40 compiles as-is on minix386 (with
old gcc-1.37.1), with no need to change source files (I changed the
Makefile and some paths, but that's it!).  As default this results in a
compiler that uses floating point insns, but if you'd rather not,
changing 'toplev.c' to define DEFAULT_TARGET from 1 to 0 (this is from
memory - I'm not at my minix-box) will handle that too.  Don't make the
libs, use the old gnulib&libc.a.  I have successfully compiled 1.40 with
itself, and everything works fine (I got the newest versions of gas and
binutils at the same time, as I've heard of bugs with older versions of
ld.c).  Makefile needs some chmem's (and gcc2minix if you're still using
it).

                Linus Torvalds          [email blocked]

PS. Could someone please try to finger me from overseas, as I've
installed a "changing .plan" (made by your's truly), and I'm not certain
it works from outside? It should report a new .plan every time.

First Linux announcement:

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 20:57:08 GMT

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones.  This has been brewing
since april, and is starting to get ready.  I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want.  Any suggestions
are welcome, but I won't promise I'll implement them :-)

                Linus (torva... at kruuna.helsinki.fi)

PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
It is NOT protable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that's all I have :-(.

From: Jyrki Kuoppala [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 23:44:50 GMT

In article Linus Benedict Torvalds writes:

>I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
>This implies that I'll get something practical within a few months, and
>I'd like to know what features most people would want.  Any suggestions
>are welcome, but I won't promise I'll implement them :-)

Tell us more!  Does it need a MMU?

>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>It is NOT protable (uses 386 task switching etc)

How much of it is in C?  What difficulties will there be in porting?
Nobody will believe you about non-portability ;-), and I for one would
like to port it to my Amiga (Mach needs a MMU and Minix is not free).

As for the features; well, pseudo ttys, BSD sockets, user-mode
filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
window size in the tty structure, system calls capable of supporting
POSIX.1.  Oh, and bsd-style long file names.

//Jyrki

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 26 Aug 91 11:06:02 GMT

In article Jyrki Kuoppala writes:
>> [re: my post about my new OS]

>Tell us more!  Does it need a MMU?

Yes, it needs a MMU (sorry everybody), and it specifically needs a
386/486 MMU (see later).

>>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>>It is NOT protable (uses 386 task switching etc)

>How much of it is in C?  What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).

Simply, I'd say that porting is impossible.  It's mostly in C, but most
people wouldn't call what I write C.  It uses every conceivable feature
of the 386 I could find, as it was also a project to teach me about the
386.  As already mentioned, it uses a MMU, for both paging (not to disk
yet) and segmentation. It's the segmentation that makes it REALLY 386
dependent (every task has a 64Mb segment for code & data - max 64 tasks
in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

It also uses every feature of gcc I could find, specifically the __asm__
directive, so that I wouldn't need so much assembly language objects.
Some of my "C"-files (specifically mm.c) are almost as much assembler as
C. It would be "interesting" even to port it to another compiler (though
why anybody would want to use anything other than gcc is a mystery).

Unlike minix, I also happen to LIKE interrupts, so interrupts are
handled without trying to hide the reason behind them (I especially like
my hard-disk-driver.  Anybody else make interrupts drive a state-
machine?).  All in all it's a porters nightmare.

>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
>window size in the tty structure, system calls capable of supporting
>POSIX.1.  Oh, and bsd-style long file names.

Most of these seem possible (the tty structure already has stubs for
window size), except maybe for the user-mode filesystems. As to POSIX,
I'd be delighted to have it, but posix wants money for their papers, so
that's not currently an option. In any case these are things that won't
be supported for some time yet (first I'll make it a simple minix-
lookalike, keyword SIMPLE).

                Linus [email blocked]

PS. To make things really clear - yes I can run gcc on it, and bash, and
most of the gnu [bin/file]utilities, but it's not very debugged, and the
library is really minimal. It doesn't even support floppy-disks yet. It
won't be ready for distribution for a couple of months. Even then it
probably won't be able to do much more than minix, and much less in some
respects. It will be free though (probably under gnu-license or similar).
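Two things in that message are worth unpacking for readers who have not written kernel code: the __asm__ directive, which lets C files embed bits of assembly instead of keeping separate .s objects, and the idea of interrupts driving a state machine. Here is a toy sketch in that style; it is my own illustration, not the real kernel/hd.c, and the commands and port numbers are simplified:

/* Toy illustration of an interrupt-driven disk state machine in the
 * style Linus describes, with a gcc __asm__ helper for port I/O.
 * This is NOT the real kernel/hd.c. */
enum hd_state { HD_IDLE, HD_READ_ISSUED, HD_DATA_READY };

static enum hd_state state = HD_IDLE;
static unsigned short buffer[256];          /* one 512-byte sector */

/* gcc inline assembly: read one 16-bit word from an I/O port. This is
 * the kind of construct that keeps "C" files mostly C instead of
 * separate assembly-language objects. */
static inline unsigned short inw(unsigned short port)
{
    unsigned short value;
    __asm__ volatile ("inw %1, %0" : "=a"(value) : "d"(port));
    return value;
}

/* Each controller interrupt advances the state machine one step, so no
 * process ever busy-waits on the hardware. */
void hd_interrupt(void)
{
    switch (state) {
    case HD_READ_ISSUED:                    /* the sector is ready */
        for (int i = 0; i < 256; i++)
            buffer[i] = inw(0x1F0);         /* ATA data port (assumed) */
        state = HD_DATA_READY;
        break;
    case HD_DATA_READY:
    case HD_IDLE:
        break;                              /* spurious interrupt: ignore */
    }
}

A real driver also has to issue the read command, handle errors, and wake up the process waiting for the data, but the shape is the same: the interrupt handler, not the caller, walks the transfer through its states.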

From: Alan Barclay [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 27 Aug 91 14:34:32 GMT

In article Linus Benedict Torvalds writes:

>yet) and segmentation. It's the segmentation that makes it REALLY 386
>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

Is that max 64 64Mb tasks or max 64 tasks no matter what their size?
--
  Alan Barclay
  iT                                |        E-mail : [email blocked]
  Barker Lane                       |        BANG-STYLE : [email blocked]
  CHESTERFIELD S40 1DY              |        VOICE : +44 246 214241

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 28 Aug 91 10:56:19 GMT

In article Alan Barclay writes:
>In article Linus Benedict Torvalds writes:
>>yet) and segmentation. It's the segmentation that makes it REALLY 386
>>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

>Is that max 64 64Mb tasks or max 64 tasks no matter what their size?

I'm afraid that is 64 tasks max (and one is used as swapper), no matter
how small they should be. Fragmentation is evil - this is how it was
handled. As the current opinion seems to be that 64 Mb is more than
enough, but 64 tasks might be a little crowded, I'll probably change the
limits be easily changed (to 32Mb/128 tasks for example) with just a
recompilation of the kernel. I don't want to be on the machine when
someone is spawning >64 processes, though :-)

                Linus

Early Linux installation guide:

		Installing Linux on your system

Ok, this is a short guide for those people who actually want to get a
running system, not just look at the pretty source code :-). You'll
certainly need minix for most of the steps.

	0.  Back up any important software.  This kernel has been
working beautifully on my machine for some time, and has never destroyed
anything on my hard-disk, but you never can be too careful when it comes
to using the disk directly.  I'd hate to get flames like "you destroyed
my entire collection of Sam Fox nude gifs (all 103 of them), I'll hate
you forever", just because I may have done something wrong.

Double-check your hardware.  If you are using other than EGA/VGA, you'll
have to make the appropriate changes to 'linux/kernel/console.c', which
may not be easy.  If you are able to use the at_wini.c under minix,
linux will probably also like your drive.  If you feel comfortable with
scan-codes, you might want to hack 'linux/kernel/keyboard.s' making it
more practical for your [US|German|...] keyboard.

	1.  Decide on what root device you'll be using.  You can use any
(standard) partition on any of your harddisks, the numbering is the same
as for minix (ie 0x306, which I'm using, means partition 1 on hd2).  It
is certainly possible to use the same device as for minix, but I
wouldn't recommend it.  You'd have to change pathnames (or make a chroot
in init) to get minix and linux to live together peacefully.
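For anyone puzzling over numbers like 0x306: in this minix-style scheme the major device number sits in the high byte and the minor in the low byte, and the old hd driver handed out five minors per drive. A tiny sketch of the decoding (my own illustration of the numbering mentioned above, not code from the guide):

#include <stdio.h>

/* Decode a minix-style device number such as the 0x306 example above:
 * major in the high byte, minor in the low byte. The partition comment
 * reflects the old five-minors-per-drive hd numbering and is given
 * only as an illustration. */
int main(void)
{
    unsigned int dev = 0x306;            /* the example root device above */
    unsigned int major = dev >> 8;       /* 3 = hard-disk driver          */
    unsigned int minor = dev & 0xff;     /* 6 = 2nd drive, 1st partition  */
    printf("device 0x%03x: major %u, minor %u\n", dev, major, minor);
    return 0;
}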

I'd recommend making a new filesystem, and filling it with the necessary
files: You need at least the following:

	- /dev/tty0		(same as under minix, ie mknod ...)
	- /dev/tty		(same as under minix)
	- /bin/sh		(link to bash)
	- /bin/update		(I guess this should be /etc/update ...)

Note that linux and minix binaries aren't compatible, although they use
the same (gcc-)header (for ease of cross-compiling), so running one
under the other will result in errors.

	2.  Compile the source, making necessary changes into the
makefiles and linux/include/linux/config.h and linux/boot/boot.s.  I'm
using a slightly hacked gcc-1.40, to which I have added a -mstring-insns
flag, which uses the i386 string instructions for structure copy etc.
Removing the flag from all makefiles should do the trick for you.

NOTE! I'm using -Wall, and I'm not seeing many warnings (2 I think, one
about _exit returning although it's volatile - it's ok.) If you get
more warnings when compiling, something's wrong.

	3.  Copy the resultant code to a diskette of the right type.
Use 'cp Image /dev/PS0' or equivalent.

	4.  Boot with the new diskette.  If you've done everything right
(and if *I've* done everything right), you should now be running bash as
root.  You can't do much (alias ls='echo *' is a good idea :-), but if
you do run, most other things should work.  I'd be happy to hear from
anybody that has come this far - and I'll send any ported binaries you
might want (and I have).  I'll also put them out for ftp if there is
enough interest.  With gcc, make and uemacs, I've been able to stop
crosscompiling and actually compile natively under linux.  (I also have
a term-emu, sz/rz, sed, etc ...)

The boot-sequence should start with "Loading system...", and then a
"Partition table ok" followed by some root-dev info. If you forget to
make the /dev/tty0-character device, you'll never see anything but the
"loading" message. Hopefully errors will be told to the console, but if
there are problems at boot-up there is a distinct possibility that the
machine just hangs.

	5.  Check the new filesystem regularly with (minix) fsck.  I
haven't got any errors for some time now, but I cannot guarantee that
this means it will never happen.  Due to slight differences in 'unlink',
fsck will report "mode inode XXX not cleared", but that isn't an error,
and you can safely ignore it (if you don't like it, do a fsck -a every
once in a while).  Minix "restore" will not work on a file deleted with
linux - so be extra careful if you have a tendency to delete files you
don't really want to.

Logging out from the "login-shell" will automatically do a sync, and
will leave you hanging without any processes (except update, which isn't
much fun), so do the "three-finger-salute" to restart dos/minix/linux or
whatever.

	6.  Mail me and ask about problems/updates etc.  Even more
welcome are success-reports (yeah, sure), and bugreports or even patches
(or pointers to corrections).

NOTE!!! I haven't included diffs with the binaries I've posted for the
simple reason that there aren't any - I've had this silly idea that I'd
rather change the OS than do a lot of porting.  All source to the
binaries can be found on nic.funet.fi under /pub/gnu or /pub/unix.
Changes have been to makefiles or configuration files, and anybody
interested in them might want to contact me. Mostly it's been a matter
of adding a -DUSG to makefiles.

The one exception if gcc - I've made some hacks on it (string-insns),
and have got it (with the gracious help of Bruce Evans) to correctly
emit software floating point. I haven't got diffs to that one either, as
my hard-disk is overflowing and I cannot accomodate both originals and
changes, but as per the GNU copyleft I'll make them available if
someone wants them. I hope nobody want's them :-)

		Linus		[email blocked]

README about early pictures of Linus Torvalds:

I finally got these made, and even managed to persuade Linus into
allowing me to publish three pictures instead of only the first one.
(He still vetoes the one with the toy moose... :-)

linus1.gif, linus2.gif, linus3.gif

        Three pictures of Linus Torvalds, showing what a despicable
        figure he is in real life.  The beer is from the pre-Linux
        era, so it's not virtual.

In nic.funet.fi: pub/OS/Linux/doc/PEOPLE.

--
Lars.Wirzenius [email blocked]  (finger wirzeniu at klaava.helsinki.fi)
   MS-DOS, you can't live with it, you can live without it.

2007/07/25 Slashdot.org: Virtual Containerization

Wednesday, July 25th, 2007

Here is a story from Slashdot.org:

Virtual Containerization

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: “It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about ‘containerization,’ to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales.”

Here is the full story from Interop News:

It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about “containerization,” to employ a really ugly but useful word.

Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales. After all, if a copy of VMware ESX lets you replace four or five boxes with just one, some hapless hardware vendor has got to be left holding the bag, right? Ever since this virtualization-kills-the-hardware-star meme got started, Wall Street has been in something of a funk about server hardware stocks.

But if the meme is really true, why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn’t? There has got to be a little head scratching going on over in Framingham just now.

The obvious explanation for Barrett’s investment (which will net Intel a measly 2.5% of VMware’s shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware. This thesis was forcefully articulated on the day of the Intel announcement by the CEO of software startup rPath, Billy Marshall, in a clever blog post that also – naturally – makes the case for his own product. I had a chat with Marshall a few days ago and found what he had to say quite interesting.

Simply put, Marshall’s thesis is that “sales of computers, especially server computers, are currently constrained by the complexity of software.” Anything that makes that complexity go down will make hardware sales (and, one presumes, operating systems sales) go up. Right now it’s so blinking hard to install applications, operating systems and their middleware stacks that once people get an installation up and running they don’t want to touch it again for love or money. But if you install your app stack on a virtual machine – VMware, Xen, or Microsoft – then you can save the result as a simple image file. After that, you’re free to do whatever you want with it. You can archive the image on your SAN and deploy it as needed. You can let people download it from your web site. Or you can put it on a CD and mail it to your branch office in Timbuktu or Oshkosh. Anyone will be able to take this collection of bits and install it on the appropriate virtual machine in their own local environment without having to undergo the usual hell of installation and configuration.

This idea of using virtual machine images as a distribution mechanism for fully integrated application stacks is not new. VMware has a Virtual Appliance Marketplace with hundreds of apps available for download. You won’t find Oracle 10g or BEA WebLogic or MySAP here, at least not yet. But you will find plenty of stuff from open source projects and smaller commercial ISVs (independent software vendors). Microsoft also has a download page for pre-configured VHD images of most of its major pieces of server software, including SQL Server 2005 and Exchange Server 2007.

So what does rPath add to the mix? Although Marshall has cleverly hitched his pitch to the virtualization bandwagon, he is actually in a somewhat different business, namely that of providing a roll-your-own-Linux toolkit and update service for ISVs. Marshall likes to recount the following anecdote to explain his value-add. When open source e-mail vendor Zimbra wanted to package its software in VMware disk image format using RHEL clone Centos as the OS, the install snapshot produced a monstrous 2 gigabyte file. You could fit that on a DVD, but this is clearly not in CD territory anymore, and maybe not so convenient to download over a garden variety DSL connection either. The problem is that a fully populated OS drags along huge excess code baggage that a typical application just doesn’t need. In the case of Zimbra, the excess added up to many hundreds of megabytes.

rPath’s solution is to use its own stripped-down and customized Linux distribution. It provides its own collection of Linux kernel and user space components along with a tool called rBuilder for deciding exactly which pieces are necessary to run a particular application. This is not a totally automated process – ISVs will have to roll up their sleeves and make some choices. But when the process is complete, rBuilder will generate a finished image file containing a fully integrated OS-middleware-application stack. This is what rPath calls a software appliance. The appliance can be packaged for any of the major target virtual machines, or for an actual install on raw Intel-based hardware. When Zimbra applied rBuilder to its application stack, swapping out Centos for a custom build of rPath Linux, the resulting VMware image shrank to only 350 megabytes.

In addition to eliminating installation and configuration hell for end users, rPath gives ISVs a platform similar to Red Hat Network for managing the distribution of application updates and OS patches. If rPath releases an OS patch for its version of Linux that the ISV determines is not needed by the ISV’s customers, then the patch doesn’t get distributed to them. This two-stage model is a lot more sensible than the Red Hat system of distributing all patches to everyone and then letting users discover for themselves whether a particular OS patch breaks their application.

rPath was launched at LinuxWorld last year and has already gone through a couple of version updates. Marshall didn’t come up with the vision for his company out of thin air. It’s based in large part on the insight he gained during a multi-year stint in the belly of the beast at Red Hat. In fact, a lot of his team are ex-Red Hatters. Marshall himself put in a couple of years as VP of Sales, and before that he was the guiding light behind the launch of the Red Hat Network provisioning and management platform. His CTO Erik Troan developed the Red Hat Package Manager (RPM) tool at the heart of RHEL and Fedora. Another rPath engineer, Matthew Wilson, wrote Red Hat’s Anaconda installation tool.

These people obviously know a thing or two when it comes to building and maintaining a Linux distribution. Their product concept is ingenious. The question is whether it’s big enough to make a stand-alone company. Right now it’s too early to tell.

There are a couple of real drawbacks to rPath from the end user’s point of view. One is that only Linux stacks are supported. If you are running a Microsoft stack, you’re out of luck. To be fair, you can run your rPath stack on top of Microsoft Virtual Server, and no doubt on the future Viridian hypervisor too. But if you were using just the unadorned VMware image format as your container rather than rPath you could run pretty much any OS stack you pleased.

Another drawback is that even in a pure Linux context, an rPath software appliance can’t use a particular piece of commercial software unless the ISV is an rPath customer. rPath’s basic business model is to sell tools and platforms to ISVs. The rPath appliances available now are mostly pure open source stacks, some commercial and some community. But there is no Oracle database or BEA or IBM middleware, which is a pretty big limitation in the real world of corporate data centers. Marshall does say he is involved in “deep discussions” with BEA, so maybe there will be some movement on this front at some point in the future. But for now it’s wait and see.

What it all boils down to is how credible the rPath Linux distribution can be in the eyes of the ISVs who consider using it. rPath politely avoids using the word “port,” but that is really what an ISV has to do to get its application running on rPath. An ISV that can afford to drop the other platforms it supports and serve its products up only on rPath will reap the full benefits of the system. But big commercial ISVs with big legacy installed bases won’t be able to take such a radical step. Marshall’s spin on this delicate issue seems to be that enterprise ISVs should leverage the end user ease-of-installation benefits of its platform to expand into Small and Medium Business markets where tolerance for complexity is much lower. Of course one could take this argument a step further – which the company for the moment is not willing to do – and say that rPath’s natural home is in the embedded market, just like Debian founder Ian Murdock’s now defunct Progeny (don’t worry about Ian, he landed at Sun).

At the end of the day, I have to wonder whether rPath wouldn’t make itself a lot more credible in the eyes of its ISV target customers by becoming part of a larger software organization. Red Hat obviously comes to mind as a possible home, assuming Red Hat management could swallow its pride enough to buy back the innovation of its ex-employees. But another possibility would be… Oracle. After all, if Larry really wants to get RHEL out of his stack, what better way to do it than to add an entirely free and unencumbered RHEL-like distro to the bottom of every Oracle stack?

Be all that as it may, there is one thing about the rPath concept that really, really intrigues me. What is to prevent Microsoft from trying this? If ISVs had a convenient way to package up highly efficient custom builds of Windows Server 2008 together with key Microsoft or third party applications for the Viridian hypervisor, the idea would be wildly popular. Will it happen? Let’s wait and see what happens after WS 2008 comes out.

Copyright © 2007, Peerstone Research Inc. All rights reserved.

That is all for today; this came out at 12:10 PM PST.

2007/07/23 Worldwide XPS 700 Motherboard Exchange Program Launch Date Confirmed

Monday, July 23rd, 2007

Posts will now come out at 5:00 PM PST, which is when the new day starts on ryanorser.com.

The Dell XPS 700 Motherboard Exchange Program’s launch date, confirmed on July 19th, 2007, is now set worldwide for Monday, August 13th, 2007, and the worldwide exchange will end on October 13th, 2007. So you have two months to get your upgrade kits. Here is an excerpt from Direct2Dell:

 

Worldwide XPS 700 Motherboard Exchange Program Launch Date Confirmed

We are pleased to announce that we will launch the XPS 700 Motherboard Exchange Program worldwide on Monday, August 13, 2007.

On August 13, we will launch a website for XPS 700 and 710 customers to register for the program and to tell us what options you prefer. XPS 700 customers will be able to choose a Hardware Kit at no charge, and can also opt for on-site installation service at no charge. XPS 700 customers will also have the option of purchasing a quad-core QX6700 processor for 25% off our Electronics & Accessories price (pricing may vary depending on the time you order). XPS 710 customers will have the option of purchasing a Hardware Kit and on-site installation service. Pricing and program offering details may vary by region and will be outlined in future posts.

This program will expire on October 13, 2007. All upgrade requests must be submitted no later than midnight Central Standard Time October 13, 2007. For more information, here’s the very first post where we outlined the details of this program, and here’s the link to the XPS 700 Motherboard Exchange Program category that contains all the information we’ve shared so far.

Between now and August 13, I’ll plan to publish more details about how to prepare, how the process will work, pricing details for XPS 710 customers, and more. In the meantime, we’ll continue to prepare for the rollout of this global program.

We appreciate your continued patience.

Published Thursday, July 19, 2007 11:30 PM
by Lionel Menchaca, Digital Media Manager

This was posted on the 19th of July, 2007, on Direct2Dell. Have a nice day.

Ryan

 

2007/07/17 Dell fixes Linux Prices

Tuesday, July 17th, 2007

Hello again, and happy Tuesday. Here is an article from http://www.desktoplinux.com/news/NS9933912441.html:

Dell Ubuntu Linux buyers were recently outraged when a price comparison between identical Inspiron 1420 laptops showed that instead of the Ubuntu system being cheaper, it actually ended up costing $225 more than the same laptop with Vista Home Basic Edition. This was after Dell had announced the week before that Ubuntu systems would be $50 cheaper than similar systems running Vista Home Basic Edition.

“Bottom line this was an oversight, pure and simple,” a Dell spokesperson told DesktopLinux.com. “We will be posting a comment to IdeaStorm to that effect by tomorrow.” In the meantime, Dell says that the prices have been reset to the appropriate prices.

The systems that were compared were Inspiron 1420s with Intel Core 2 Duo T5250 processors running at 1.5GHz, with a 667MHz FSB (front-side bus) and 2MB of cache. Each uses an Intel Graphics Media Accelerator X3100 for video. For memory and storage, the notebooks come with 1GB Shared Dual Channel DDR2 (double-data-rate two) RAM, an 80GB SATA hard drive and a 24x CD burner/DVD combo drive. To connect to the Internet, they use an Intel 3945 802.11a/g Mini Card. The only difference between the systems was that one ran Ubuntu 7.04 while the other ran Vista Home Basic Edition.

The base price of the systems remained the same. The Vista system cost $819, while the Ubuntu system came in at $50 less: $774. Where Dell ran into trouble was that it was offering a special deal where customers could buy one of the colorful Inspiron 1420 line with Vista and get a ‘free’ upgrade to 2GB of RAM and a 160GB hard drive. Dell valued this package at $275. If you wanted that same upgrade with Ubuntu, you’d have to pay the full price and the additional $275.

Now, Dell has corrected its mistake. If you go to the Dell Inspiron 1420 page, you’ll find you can get the same offer for the free upgrade to 2GB of memory and 160GB hard drive for the Ubuntu Inspiron 1420.

So, as of July 12, once more you can get exactly the same laptops from Dell, except for the operating systems, and you’ll pay $50 less for the Ubuntu-powered notebook.

So have a good day.

2007/06/26 Dell Scraps Dimension Desktops!

Tuesday, June 26th, 2007

Instead of Dimension desktops, Dell has come out with Inspiron desktops, which support both AMD and Intel chipsets. Dell has also made a huge improvement to the laptop line by bringing back the 1400 line of 14.1" notebooks. There are also two new 15.4" notebooks, the 1520 and the 1521, and Dell has not stopped there: the 1720 and 1721 Inspirons round out the lineup. The 1420, 1520 and 1720 have Intel chipsets, while the 1521 and 1721 have AMD chipsets. The new laptops also support the new Nvidia graphics options, the 8400M GS and the 8600M GT. There are two other graphics options you can get: the ATI X1300, which is now normally included, and the Intel® Integrated Graphics Media Accelerator 3100, updated from the Intel® Integrated Graphics Media Accelerator 3000. Dell has also released a new ultra-portable XPS laptop that you can carry anywhere you go. If you would like to check out the Canadian products, go to http://dell.ca.

Also, if you have any comments, feel free to leave them on this post. Thank you for reading this article.

Ryan Orser.
