Archive for the ‘Cheap Tech Solutions’ Category

2007/08/04 It's My Birthday

Saturday, August 4th, 2007

Yay, I am 17! That is all I can talk about today, as I will be offline for the next 2 to 2 1/2 weeks, though there will still be posts on weekdays. Hopefully Tom will post something as well while I am away.

2007/07/28 Dell to offer more Linux PCs

Saturday, July 28th, 2007

Here is an article on Slashdot:

 “According to this article, Mark Shuttleworth from the Ubuntu camp says Dell is seeing a demand for the Linux-based PC and, “There are additional offerings in the pipeline.” I’m starting to see flashbacks of the days when Microsoft partnered up with IBM to gain control of the desktop market. Will other Linux flavors find their way to the likes of Lenovo or HP, etc, or will Ubuntu claim the desktop market working with other PC manufacturers?”

Here is the real article:

Dell to expand Linux PC offerings, partner says
Thursday July 26, 4:36 pm ET

BOSTON (Reuters) – Dell Inc (NasdaqGS: DELL) will soon offer more personal computers that use the Linux operating system instead of Microsoft Corp’s (NasdaqGS: MSFT) Windows, said the founder of a company that offers Linux support services.

Dell, the world’s second-largest PC maker after Hewlett-Packard Co (NYSE: HPQ), now offers three consumer PCs that run Ubuntu Linux.

“What’s been announced to date is not the full extent of what we will see over the next couple of weeks and months,” Shuttleworth said in an interview late on Wednesday.

“There are additional offerings in the pipeline,” he said. Shuttleworth founded Canonical Inc to provide support for Ubuntu Linux.

A Dell spokeswoman, Anne Camden, declined comment, saying the company does not discuss products in the pipeline.

She added that Dell was pleased with customer response to its Linux PCs. She said Dell believed the bulk of the machines were sold to open-source software enthusiasts, while some first-time Linux users have purchased them as well.

Open-source software refers to computer programs, generally available over the Internet at no cost, that users can download, modify and redistribute.

The Linux operating system is seen as the biggest threat to Microsoft’s Windows operating system.

Shuttleworth said sales of the three Dell Ubuntu PC models were on track to meet the sales projections of Dell and Canonical. He declined to elaborate.

Companies like his privately held Canonical Inc, Red Hat Inc (NYSE: RHT) and Novell Inc (NasdaqGS: NOVL) make money by selling standardized versions of Linux programs and support contracts to service them.

There are dozens of versions of Linux, available for all sorts of computers from PCs to mainframes and tiny mobile devices.

Shuttleworth said his company was not in discussions with Hewlett-Packard or the other top five PC makers to introduce machines equipped with Ubuntu.

The other three top PC makers are Lenovo Group Ltd (HKSE: 0992.HK), Acer Inc (Taiwan: 2353.TW) and Toshiba Corp (Tokyo: 6502.T).

(Reporting by Jim Finkle)

2007/07/27 Slashdot: A Historical Look at the First Linux Kernel

Friday, July 27th, 2007

This is an article on Slashdot taking a look at the historic Linux kernel 0.01:

LinuxFan writes “KernelTrap has a fascinating article about the first Linux kernel, version 0.01, complete with source code and photos of Linus Torvalds as a young man attending the University of Helsinki. Torvalds originally planned to call the kernel “Freax,” and in his first announcement noted, “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones.” He also stressed that the kernel was very much tied to the i386 processor, “simply, I’d say that porting is impossible.” Humble beginnings.”

Now for the real article itself:

“This is a free minix-like kernel for i386(+) based AT-machines,” began the Linux version 0.01 release notes in September of 1991 for the first release of the Linux kernel. “As the version number (0.01) suggests this is not a mature product. Currently only a subset of AT-hardware is supported (hard-disk, screen, keyboard and serial lines), and some of the system calls are not yet fully implemented (notably mount/umount aren’t even implemented).” Booting the original 0.01 Linux kernel required bootstrapping it with minix, and the keyboard driver was written in assembly and hard-wired for a Finnish keyboard. The listed features were mostly presented as a comparison to minix and included: efficient use of the 386 chip rather than the older 8088, use of system calls rather than message passing, a fully multithreaded FS, minimal task switching, and visible interrupts. Linus Torvalds noted, “the guiding line when implementing linux was: get it working fast. I wanted the kernel simple, yet powerful enough to run most unix software.” In a section titled “Apologies :-)” he noted:

“This isn’t yet the ‘mother of all operating systems’, and anyone who hoped for that will have to wait for the first real release (1.0), and even then you might not want to change from minix. This is a source release for those that are interested in seeing what linux looks like, and it’s not really supported yet.”

Linus had originally intended to call the new kernel “Freax”. According to Wikipedia, the name Linux was actually invented by Ari Lemmke who maintained the ftp.funet.fi FTP server from which the kernel was originally distributed.

The initial post that Linus made about Linux was to the comp.os.minix Usenet group, titled “What would you like to see most in minix?”. It began:

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).”

Later in the same thread, Linus went on to talk about how unportable the code was:

“Simply, I’d say that porting is impossible. It’s mostly in C, but most people wouldn’t call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It’s the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data – max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task – tough cookies).

“It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn’t need so much assembly language objects. Some of my ‘C’-files (specifically mm.c) are almost as much assembler as C. It would be ‘interesting’ even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery).

“Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state-machine?). All in all it’s a porters nightmare. “

Indeed, Linux 1.0 was released on March 13th, 1994, supporting only the 32-bit i386 architecture. However, by the release of Linux 1.2 on March 7th, 1995, it had already been ported to 32-bit MIPS, 32-bit SPARC, and the 64-bit Alpha. By the release of Linux 2.0 on June 9th, 1996, support had also been added for the 32-bit m68k and 32-bit PowerPC architectures. And jumping forward to the Linux 2.6 kernel, first released in December 2003, it has been and continues to be ported to numerous additional architectures.


Linux 0.01 release notes:

		Notes for linux release 0.01

		0. Contents of this directory

linux-0.01.tar.Z	- sources to the kernel
bash.Z			- compressed bash binary if you want to test it
update.Z		- compressed update binary
RELNOTES-0.01		- this file

		1. Short intro

This is a free minix-like kernel for i386(+) based AT-machines.  Full
source is included, and this source has been used to produce a running
kernel on two different machines.  Currently there are no kernel
binaries for public viewing, as they have to be recompiled for different
machines.  You need to compile it with gcc (I use 1.40, don't know if
1.37.1 will handle all __asm__-directives), after having changed the
relevant configuration file(s).

As the version number (0.01) suggests this is not a mature product.
Currently only a subset of AT-hardware is supported (hard-disk, screen,
keyboard and serial lines), and some of the system calls are not yet
fully implemented (notably mount/umount aren't even implemented).  See
comments or readme's in the code.

This version is also meant mostly for reading - ie if you are interested
in how the system looks like currently.  It will compile and produce a
working kernel, and though I will help in any way I can to get it
working on your machine (mail me), it isn't really supported.  Changes
are frequent, and the first "production" version will probably differ
wildly from this pre-alpha-release.

Hardware needed for running linux:
	- 386 AT
	- VGA/EGA screen
	- AT-type harddisk controller (IDE is fine)
	- Finnish keyboard (oh, you can use a US keyboard, but not
	  without some practise :-)

The Finnish keyboard is hard-wired, and as I don't have a US one I
cannot change it without major problems. See kernel/keyboard.s for
details. If anybody is willing to make an even partial port, I'd be
grateful. Shouldn't be too hard, as it's tabledriven (it's assembler
though, so ...)

Although linux is a complete kernel, and uses no code from minix or
other sources, almost none of the support routines have yet been coded.
Thus you currently need minix to bootstrap the system. It might be
possible to use the free minix demo-disk to make a filesystem and run
linux without having minix, but I don't know...

		2. Copyrights etc

This kernel is (C) 1991 Linus Torvalds, but all or part of it may be
redistributed provided you do the following:

	- Full source must be available (and free), if not with the
	  distribution then at least on asking for it.

	- Copyright notices must be intact. (In fact, if you distribute
	  only parts of it you may have to add copyrights, as there aren't
	  (C)'s in all files.) Small partial excerpts may be copied
	  without bothering with copyrights.

	- You may not distibute this for a fee, not even "handling"
	  costs.

Mail me at [email blocked] if you have any questions.

Sadly, a kernel by itself gets you nowhere. To get a working system you
need a shell, compilers, a library etc. These are separate parts and may
be under a stricter (or even looser) copyright. Most of the tools used
with linux are GNU software and are under the GNU copyleft. These tools
aren't in the distribution - ask me (or GNU) for more info.

		3. Short technical overview of the kernel.

The linux kernel has been made under minix, and it was my original idea
to make it binary compatible with minix. That was dropped, as the
differences got bigger, but the system still resembles minix a great
deal. Some of the key points are:

	- Efficient use of the possibilities offered by the 386 chip.
	  Minix was written on a 8088, and later ported to other
	  machines - linux takes full advantage of the 386 (which is
	  nice if you /have/ a 386, but makes porting very difficult)

	- No message passing, this is a more traditional approach to
	  unix. System calls are just that - calls. This might or might
	  not be faster, but it does mean we can dispense with some of
	  the problems with messages (message queues etc). Of course, we
	  also miss the nice features :-p.

	- Multithreaded FS - a direct consequence of not using messages.
	  This makes the filesystem a bit (a lot) more complicated, but
	  much nicer. Coupled with a better scheduler, this means that
	  you can actually run several processes concurrently without
	  the performance hit induced by minix.

	- Minimal task switching. This too is a consequence of not using
	  messages. We task switch only when we really want to switch
	  tasks - unlike minix which task-switches whatever you do. This
	  means we can more easily implement 387 support (indeed this is
	  already mostly implemented)

	- Interrupts aren't hidden. Some people (among them Tanenbaum)
	  think interrupts are ugly and should be hidden. Not so IMHO.
	  Due to practical reasons interrupts must be mainly handled by
	  machine code, which is a pity, but they are a part of the code
	  like everything else. Especially device drivers are mostly
	  interrupt routines - see kernel/hd.c etc.

	- There is no distinction between kernel/fs/mm, and they are all
	  linked into the same heap of code. This has it's good sides as
	  well as bad. The code isn't as modular as the minix code, but
	  on the other hand some things are simpler. The different parts
	  of the kernel are under different sub-directories in the
	  source tree, but when running everything happens in the same
	  data/code space.

The guiding line when implementing linux was: get it working fast. I
wanted the kernel simple, yet powerful enough to run most unix software.
The file system I couldn't do much about - it needed to be minix
compatible for practical reasons, and the minix filesystem was simple
enough as it was. The kernel and mm could be simplified, though:

	- Just one data structure for tasks. "Real" unices have task
	  information in several places, I wanted everything in one
	  place.

	- A very simple memory management algorithm, using both the
	  paging and segmentation capabilities of the i386. Currently
	  MM is just two files - memory.c and page.s, just a couple of
	  hundreds of lines of code.

These decisions seem to have worked out well - bugs were easy to spot,
and things work.

		4. The "kernel proper"

All the routines handling tasks are in the subdirectory "kernel". These
include things like 'fork' and 'exit' as well as scheduling and minor
system calls like 'getpid' etc. Here are also the handlers for most
exceptions and traps (not page faults, they are in mm), and all
low-level device drivers (get_hd_block, tty_write etc). Currently all
faults lead to a exit with error code 11 (Segmentation fault), and the
system seems to be relatively stable ("crashme" hasn't - yet).

		5. Memory management

This is the simplest of all parts, and should need only little changes.
It contains entry-points for some things that the rest of the kernel
needs, but mostly copes on it's own, handling page faults as they
happen. Indeed, the rest of the kernel usually doesn't actively allocate
pages, and just writes into user space, letting mm handle any possible
'page-not-present' errors.

Memory is dealt with in two completely different ways - by paging and
segmentation.  First the 386 VM-space (4GB) is divided into a number of
segments (currently 64 segments of 64Mb each), the first of which is the
kernel memory segment, with the complete physical memory identity-mapped
into it.  All kernel functions live within this area.

Tasks are then given one segment each, to use as they wish. The paging
mechanism sees to filling the segment with the appropriate pages,
keeping track of any duplicate copies (created at a 'fork'), and making
copies on any write. The rest of the system doesn't need to know about
all this.

		6. The file system

As already mentioned, the linux FS is the same as in minix. This makes
crosscompiling from minix easy, and means you can mount a linux
partition from minix (or the other way around as soon as I implement
mount :-). This is only on the logical level though - the actual
routines are very different.

	NOTE! Minix-1.6.16 seems to have a new FS, with minor
	modifications to the 1.5.10 I've been using. Linux
	won't understand the new system.

The main difference is in the fact that minix has a single-threaded
file-system and linux hasn't. Implementing a single-threaded FS is much
easier as you don't need to worry about other processes allocating
buffer blocks etc while you do something else. It also means that you
lose some of the multiprocessing so important to unix.

There are a number of problems (deadlocks/raceconditions) that the linux
kernel needed to address due to multi-threading.  One way to inhibit
race-conditions is to lock everything you need, but as this can lead to
unnecessary blocking I decided never to lock any data structures (unless
actually reading or writing to a physical device).  This has the nice
property that dead-locks cannot happen.

Sadly it has the not so nice property that race-conditions can happen
almost everywhere.  These are handled by double-checking allocations etc
(see fs/buffer.c and fs/inode.c).  Not letting the kernel schedule a
task while it is in supervisor mode (standard unix practise), means that
all kernel/fs/mm actions are atomic (not counting interrupts, and we are
careful when writing those) if you don't call 'sleep', so that is one of
the things we can count on.

		7. Apologies :-)

This isn't yet the "mother of all operating systems", and anyone who
hoped for that will have to wait for the first real release (1.0), and
even then you might not want to change from minix.  This is a source
release for those that are interested in seeing what linux looks like,
and it's not really supported yet.  Anyone with questions or suggestions
(even bug-reports if you decide to get it working on your system) is
encouraged to mail me.

		8. Getting it working

Most hardware dependancies will have to be compiled into the system, and
there a number of defines in the file "include/linux/config.h" that you
have to change to get a personalized kernel.  Also you must uncomment
the right "equ" in the file boot/boot.s, telling the bootup-routine what
kind of device your A-floppy is.  After that a simple "make" should make
the file "Image", which you can copy to a floppy (cp Image /dev/PS0 is
what I use with a 1.44Mb floppy).  That's it.

Without any programs to run, though, the kernel cannot do anything. You
should find binaries for 'update' and 'bash' at the same place you found
this, which will have to be put into the '/bin' directory on the
specified root-device (specified in config.h). Bash must be found under
the name '/bin/sh', as that's what the kernel currently executes. Happy
hacking.

		Linus Torvalds		[email blocked]
		Petersgatan 2 A 2
		00140 Helsingfors 14
		FINLAND

First posting about Linux:

From: Linus Benedict Torvalds
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Date: 3 Jul 91 10:00:50 GMT

Hello netlanders,

Due to a project I'm working on (in minix), I'm interested in the posix
standard definition. Could somebody please point me to a (preferably)
machine-readable format of the latest posix rules? Ftp-sites would be
nice.

As an aside for all using gcc on minix - the new version (1.40) has been
out for some weeks, and I decided to test what needed to be done to get
it working on minix (1.37.1, which is the version you can get from
plains is nice, but 1.40 is better :-).  To my surpice, the answer
turned out to be - NOTHING! Gcc-1.40 compiles as-is on minix386 (with
old gcc-1.37.1), with no need to change source files (I changed the
Makefile and some paths, but that's it!).  As default this results in a
compiler that uses floating point insns, but if you'd rather not,
changing 'toplev.c' to define DEFAULT_TARGET from 1 to 0 (this is from
memory - I'm not at my minix-box) will handle that too.  Don't make the
libs, use the old gnulib&libc.a.  I have successfully compiled 1.40 with
itself, and everything works fine (I got the newest versions of gas and
binutils at the same time, as I've heard of bugs with older versions of
ld.c).  Makefile needs some chmem's (and gcc2minix if you're still using
it).

                Linus Torvalds          [email blocked]

PS. Could someone please try to finger me from overseas, as I've
installed a "changing .plan" (made by your's truly), and I'm not certain
it works from outside? It should report a new .plan every time.

First Linux announcement:

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 20:57:08 GMT

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones.  This has been brewing
since april, and is starting to get ready.  I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want.  Any suggestions
are welcome, but I won't promise I'll implement them :-)

                Linus (torva... at kruuna.helsinki.fi)

PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
It is NOT protable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that's all I have :-(.

From: Jyrki Kuoppala [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 23:44:50 GMT

In article Linus Benedict Torvalds writes:

>I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
>This implies that I'll get something practical within a few months, and
>I'd like to know what features most people would want.  Any suggestions
>are welcome, but I won't promise I'll implement them :-)

Tell us more!  Does it need a MMU?

>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>It is NOT protable (uses 386 task switching etc)

How much of it is in C?  What difficulties will there be in porting?
Nobody will believe you about non-portability ;-), and I for one would
like to port it to my Amiga (Mach needs a MMU and Minix is not free).

As for the features; well, pseudo ttys, BSD sockets, user-mode
filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
window size in the tty structure, system calls capable of supporting
POSIX.1.  Oh, and bsd-style long file names.

//Jyrki

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 26 Aug 91 11:06:02 GMT

In article Jyrki Kuoppala writes:
>> [re: my post about my new OS]

>Tell us more!  Does it need a MMU?

Yes, it needs a MMU (sorry everybody), and it specifically needs a
386/486 MMU (see later).

>>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>>It is NOT protable (uses 386 task switching etc)

>How much of it is in C?  What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).

Simply, I'd say that porting is impossible.  It's mostly in C, but most
people wouldn't call what I write C.  It uses every conceivable feature
of the 386 I could find, as it was also a project to teach me about the
386.  As already mentioned, it uses a MMU, for both paging (not to disk
yet) and segmentation. It's the segmentation that makes it REALLY 386
dependent (every task has a 64Mb segment for code & data - max 64 tasks
in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

It also uses every feature of gcc I could find, specifically the __asm__
directive, so that I wouldn't need so much assembly language objects.
Some of my "C"-files (specifically mm.c) are almost as much assembler as
C. It would be "interesting" even to port it to another compiler (though
why anybody would want to use anything other than gcc is a mystery).

Unlike minix, I also happen to LIKE interrupts, so interrupts are
handled without trying to hide the reason behind them (I especially like
my hard-disk-driver.  Anybody else make interrupts drive a state-
machine?).  All in all it's a porters nightmare.

>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
>window size in the tty structure, system calls capable of supporting
>POSIX.1.  Oh, and bsd-style long file names.

Most of these seem possible (the tty structure already has stubs for
window size), except maybe for the user-mode filesystems. As to POSIX,
I'd be delighted to have it, but posix wants money for their papers, so
that's not currently an option. In any case these are things that won't
be supported for some time yet (first I'll make it a simple minix-
lookalike, keyword SIMPLE).

                Linus [email blocked]

PS. To make things really clear - yes I can run gcc on it, and bash, and
most of the gnu [bin/file]utilities, but it's not very debugged, and the
library is really minimal. It doesn't even support floppy-disks yet. It
won't be ready for distribution for a couple of months. Even then it
probably won't be able to do much more than minix, and much less in some
respects. It will be free though (probably under gnu-license or similar).

From: Alan Barclay [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 27 Aug 91 14:34:32 GMT

In article Linus Benedict Torvalds writes:

>yet) and segmentation. It's the segmentation that makes it REALLY 386
>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

Is that max 64 64Mb tasks or max 64 tasks no matter what their size?
--
  Alan Barclay
  iT                                |        E-mail : [email blocked]
  Barker Lane                       |        BANG-STYLE : [email blocked]
  CHESTERFIELD S40 1DY              |        VOICE : +44 246 214241

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 28 Aug 91 10:56:19 GMT

In article Alan Barclay writes:
>In article Linus Benedict Torvalds writes:
>>yet) and segmentation. It's the segmentation that makes it REALLY 386
>>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

>Is that max 64 64Mb tasks or max 64 tasks no matter what their size?

I'm afraid that is 64 tasks max (and one is used as swapper), no matter
how small they should be. Fragmentation is evil - this is how it was
handled. As the current opinion seems to be that 64 Mb is more than
enough, but 64 tasks might be a little crowded, I'll probably change the
limits be easily changed (to 32Mb/128 tasks for example) with just a
recompilation of the kernel. I don't want to be on the machine when
someone is spawning >64 processes, though :-)

                Linus

Early Linux installation guide:

		Installing Linux on your system

Ok, this is a short guide for those people who actually want to get a
running system, not just look at the pretty source code :-). You'll
certainly need minix for most of the steps.

	0.  Back up any important software.  This kernel has been
working beautifully on my machine for some time, and has never destroyed
anything on my hard-disk, but you never can be too careful when it comes
to using the disk directly.  I'd hate to get flames like "you destroyed
my entire collection of Sam Fox nude gifs (all 103 of them), I'll hate
you forever", just because I may have done something wrong.

Double-check your hardware.  If you are using other than EGA/VGA, you'll
have to make the appropriate changes to 'linux/kernel/console.c', which
may not be easy.  If you are able to use the at_wini.c under minix,
linux will probably also like your drive.  If you feel comfortable with
scan-codes, you might want to hack 'linux/kernel/keyboard.s' making it
more practical for your [US|German|...] keyboard.

	1.  Decide on what root device you'll be using.  You can use any
(standard) partition on any of your harddisks, the numbering is the same
as for minix (ie 0x306, which I'm using, means partition 1 on hd2).  It
is certainly possible to use the same device as for minix, but I
wouldn't recommend it.  You'd have to change pathnames (or make a chroot
in init) to get minix and linux to live together peacefully.

I'd recommend making a new filesystem, and filling it with the necessary
files: You need at least the following:

	- /dev/tty0		(same as under minix, ie mknod ...)
	- /dev/tty		(same as under minix)
	- /bin/sh		(link to bash)
	- /bin/update		(I guess this should be /etc/update ...)

Note that linux and minix binaries aren't compatible, although they use
the same (gcc-)header (for ease of cross-compiling), so running one
under the other will result in errors.

	2.  Compile the source, making necessary changes into the
makefiles and linux/include/linux/config.h and linux/boot/boot.s.  I'm
using a slightly hacked gcc-1.40, to which I have added a -mstring-insns
flag, which uses the i386 string instructions for structure copy etc.
Removing the flag from all makefiles should do the trick for you.

NOTE! I'm using -Wall, and I'm not seeing many warnings (2 I think, one
about _exit returning although it's volatile - it's ok.) If you get
more warnings when compiling, something's wrong.

	3.  Copy the resultant code to a diskette of the right type.
Use 'cp Image /dev/PS0' or equivalent.

	4.  Boot with the new diskette.  If you've done everything right
(and if *I've* done everything right), you should now be running bash as
root.  You can't do much (alias ls='echo *' is a good idea :-), but if
you do run, most other things should work.  I'd be happy to hear from
anybody that has come this far - and I'll send any ported binaries you
might want (and I have).  I'll also put them out for ftp if there is
enough interest.  With gcc, make and uemacs, I've been able to stop
crosscompiling and actually compile natively under linux.  (I also have
a term-emu, sz/rz, sed, etc ...)

The boot-sequence should start with "Loading system...", and then a
"Partition table ok" followed by some root-dev info. If you forget to
make the /dev/tty0-character device, you'll never see anything but the
"loading" message. Hopefully errors will be told to the console, but if
there are problems at boot-up there is a distinct possibility that the
machine just hangs.

	5.  Check the new filesystem regularly with (minix) fsck.  I
haven't got any errors for some time now, but I cannot guarantee that
this means it will never happen.  Due to slight differences in 'unlink',
fsck will report "mode inode XXX not cleared", but that isn't an error,
and you can safely ignore it (if you don't like it, do a fsck -a every
once in a while).  Minix "restore" will not work on a file deleted with
linux - so be extra careful if you have a tendency to delete files you
don't really want to.

Logging out from the "login-shell" will automatically do a sync, and
will leave you hanging without any processes (except update, which isn't
much fun), so do the "three-finger-salute" to restart dos/minix/linux or
whatever.

	6.  Mail me and ask about problems/updates etc.  Even more
welcome are success-reports (yeah, sure), and bugreports or even patches
(or pointers to corrections).

NOTE!!! I haven't included diffs with the binaries I've posted for the
simple reason that there aren't any - I've had this silly idea that I'd
rather change the OS than do a lot of porting.  All source to the
binaries can be found on nic.funet.fi under /pub/gnu or /pub/unix.
Changes have been to makefiles or configuration files, and anybody
interested in them might want to contact me. Mostly it's been a matter
of adding a -DUSG to makefiles.

The one exception if gcc - I've made some hacks on it (string-insns),
and have got it (with the gracious help of Bruce Evans) to correctly
emit software floating point. I haven't got diffs to that one either, as
my hard-disk is overflowing and I cannot accomodate both originals and
changes, but as per the GNU copyleft I'll make them available if
someone wants them. I hope nobody want's them :-)

		Linus		[email blocked]

README about early pictures of Linus Torvalds:

I finally got these made, and even managed to persuade Linus into
allowing me to publish three pictures instead of only the first one.
(He still vetoes the one with the toy moose... :-)

linus1.gif, linus2.gif, linus3.gif

        Three pictures of Linus Torvalds, showing what a despicable
        figure he is in real life.  The beer is from the pre-Linux
        era, so it's not virtual.

In nic.funet.fi: pub/OS/Linux/doc/PEOPLE.

--
Lars.Wirzenius [email blocked]  (finger wirzeniu at klaava.helsinki.fi)
   MS-DOS, you can't live with it, you can live without it.

2007/07/26 Slashdot: Dell is asking for Better ATI drivers on Linux!

Thursday, July 26th, 2007

Now here is a story that speaks to the greater good of Linux:

Open Source IT writes “According to a presentation at Ubuntu Live 2007, Dell is working on getting better ATI drivers for Linux for use in its Linux offerings. While it is not known whether the end product will end up as open source, with big businesses like Google and Dell now behind the push for better Linux graphics drivers, hopefully ATI will make the smart business decision and give customers what they want.”

From the original story:

Dell knows it won’t happen overnight, but alongside wanting to ship audio/video codecs, Intel Wireless 802.11n support for Linux, Broadcom wireless for Linux, and being able to ship notebooks and desktops with Compiz Fusion enabled, Dell would like to see improved ATI Linux drivers. At Ubuntu Live 2007, Amit Bhutani had a session on Ubuntu Linux for Dell Consumer Systems, where he shared a slide with Dell’s “area of investigation”, which Amit said is essentially their Linux road-map. Amit also stated that the NVIDIA 2D and 3D video drivers were “challenges in platform enablement”. Dell wants to offer ATI Linux systems, but first the driver must be improved for the Linux platform (not necessarily open-source, but improved). Dell currently ships desktop Linux systems with Intel using their open-source drivers, as well as NVIDIA graphics processors under Linux. Amit went on to add that new Dell product offerings and availability in other countries will come later this summer.

This is a great sign! I hope this works out for Linux and for Dell.

2007/07/25 Slashdot.org: Virtual Containerization

Wednesday, July 25th, 2007

Here is a story from Slashdot.org:

Virtual Containerization

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: “It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about ‘containerization,’ to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales.”

Here is the full story from Interop News:

It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about “containerization,” to employ a really ugly but useful word.

Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales. After all, if a copy of VMware ESX lets you replace four or five boxes with just one, some hapless hardware vendor has got to be left holding the bag, right? Ever since this virtualization-kills-the-hardware-star meme got started, Wall Street has been in something of a funk about server hardware stocks.

But if the meme is really true, why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn’t? There has got to be a little head scratching going on over in Framingham just now.

The obvious explanation for Barrett’s investment (which will net Intel a measly 2.5% of VMware’s shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware. This thesis was forcefully articulated on the day of the Intel announcement by the CEO of software startup rPath, Billy Marshall, in a clever blog post that also – naturally – makes the case for his own product. I had a chat with Marshall a few days ago and found what he had to say quite interesting.

Simply put, Marshall’s thesis is that “sales of computers, especially server computers, are currently constrained by the complexity of software.” Anything that makes that complexity go down will make hardware sales (and, one presumes, operating systems sales) go up. Right now it’s so blinking hard to install applications, operating systems and their middleware stacks that once people get an installation up and running they don’t want to touch it again for love or money. But if you install your app stack on a virtual machine – VMware, Xen, or Microsoft – then you can save the result as a simple image file. After that, you’re free to do whatever you want with it. You can archive the image on your SAN and deploy it as needed. You can let people download it from your web site. Or you can put it on a CD and mail it to your branch office in Timbuktu or Oshkosh. Anyone will be able to take this collection of bits and install it on the appropriate virtual machine in their own local environment without having to undergo the usual hell of installation and configuration.

This idea of using virtual machine images as a distribution mechanism for fully integrated application stacks is not new. VMware has a Virtual Appliance Marketplace with hundreds of apps available for download. You won’t find Oracle 10g or BEA WebLogic or MySAP here, at least not yet. But you will find plenty of stuff from open source projects and smaller commercial ISVs (independent software vendors). Microsoft also has a download page for pre-configured VHD images of most of its major pieces of server software, including SQL Server 2005 and Exchange Server 2007.

So what does rPath add to the mix? Although Marshall has cleverly hitched his pitch to the virtualization bandwagon, he is actually in a somewhat different business, namely that of providing a roll-your-own-Linux toolkit and update service for ISVs. Marshall likes to recount the following anecdote to explain his value-add. When open source e-mail vendor Zimbra wanted to package its software in VMware disk image format using RHEL clone Centos as the OS, the install snapshot produced a monstrous 2 gigabyte file. You could fit that on a DVD, but this is clearly not in CD territory anymore, and maybe not so convenient to download over a garden variety DSL connection either. The problem is that a fully populated OS drags along huge excess code baggage that a typical application just doesn’t need. In the case of Zimbra, the excess added up to many hundreds of megabytes.

rPath’s solution is to use its own stripped-down and customized Linux distribution. It provides its own collection of Linux kernel and user space components along with a tool called rBuilder for deciding exactly which pieces are necessary to run a particular application. This is not a totally automated process – ISVs will have to roll up their sleeves and make some choices. But when the process is complete, rBuilder will generate a finished image file containing a fully integrated OS-middleware-application stack. This is what rPath calls a software appliance. The appliance can be packaged for any of the major target virtual machines, or for an actual install on raw Intel-based hardware. When Zimbra applied rBuilder to its application stack, swapping out Centos for a custom build of rPath Linux, the resulting VMware image shrank to only 350 megabytes.

In addition to eliminating installation and configuration hell for end users, rPath gives ISVs a platform similar to Red Hat Network for managing the distribution of application updates and OS patches. If rPath releases an OS patch for its version of Linux that the ISV determines is not needed by the ISV’s customers, then the patch doesn’t get distributed to them. This two-stage model is a lot more sensible than the Red Hat system of distributing all patches to everyone and then letting users discover for themselves whether a particular OS patch breaks their application.

rPath was launched at LinuxWorld last year and has already gone through a couple of version updates. Marshall didn’t come up with the vision for his company out of thin air. It’s based in large part on the insight he gained during a multi-year stint in the belly of the beast at Red Hat. In fact, a lot of his team are ex-Red Hatters. Marshall himself put in a couple of years as VP of Sales, and before that he was the guiding light behind the launch of the Red Hat Network provisioning and management platform. His CTO Erik Troan developed the Red Hat Package Manager (RPM) tool at the heart of RHEL and Fedora. Another rPath engineer, Matthew Wilson, wrote Red Hat’s Anaconda installation tool.

These people obviously know a thing or two when it comes to building and maintaining a Linux distribution. Their product concept is ingenious. The question is whether it’s big enough to make a stand-alone company. Right now it’s too early to tell.

There are a couple of real drawbacks to rPath from the end user’s point of view. One is that only Linux stacks are supported. If you are running a Microsoft stack, you’re out of luck. To be fair, you can run your rPath stack on top of Microsoft Virtual Server, and no doubt on the future Viridian hypervisor too. But if you were using just the unadorned VMware image format as your container rather than rPath you could run pretty much any OS stack you pleased.

Another drawback is that even in a pure Linux context, an rPath software appliance can’t use a particular piece of commercial software unless the ISV is an rPath customer. rPath’s basic business model is to sell tools and platforms to ISVs. The rPath appliances available now are mostly pure open source stacks, some commercial and some community. But there is no Oracle database or BEA or IBM middleware, which is a pretty big limitation in the real world of corporate data centers. Marshall does say he is involved in “deep discussions” with BEA, so maybe there will be some movement on this front at some point in the future. But for now it’s wait and see.

What it all boils down to is how credible the rPath Linux distribution can be in the eyes of the ISVs who consider using it. rPath politely avoids using the word “port,” but that is really what an ISV has to do to get its application running on rPath. An ISV that can afford to drop the other platforms it supports and serve its products up only on rPath will reap the full benefits of the system. But big commercial ISVs with big legacy installed bases won’t be able to take such a radical step. Marshall’s spin on this delicate issue seems to be that enterprise ISVs should leverage the end user ease-of-installation benefits of its platform to expand into Small and Medium Business markets where tolerance for complexity is much lower. Of course one could take this argument a step further – which the company for the moment is not willing to do – and say that rPath’s natural home is in the embedded market, just like Debian founder Ian Murdock’s now defunct Progeny (don’t worry about Ian, he landed at Sun).

At the end of the day, I have to wonder whether rPath wouldn’t make itself a lot more credible in the eyes of its ISV target customers by becoming part of a larger software organization. Red Hat obviously comes to mind as a possible home, assuming Red Hat management could swallow its pride enough to buy back the innovation of its ex-employees. But another possibility would be… Oracle. After all, if Larry really wants to get RHEL out of his stack, what better way to do it than to add an entirely free and unencumbered RHEL-like distro to the bottom of every Oracle stack?

Be all that as it may, there is one thing about the rPath concept that really, really intrigues me. What is to prevent Microsoft from trying this? If ISVs had a convenient way to package up highly efficient custom builds of Windows Server 2008 together with key Microsoft or third party applications for the Viridian hypervisor, the idea would be wildly popular. Will it happen? Let’s wait and see what happens after WS 2008 comes out.

Copyright © 2007, Peerstone Research Inc. All rights reserved.

That is all for today; this came out at 12:10 PM PST.

2007/07/24 Slashdot: Are Cheap Laptops a Roadblock for Moore’s Law?

Tuesday, July 24th, 2007

Here is an interesting story:

Are Cheap Laptops a Roadblock for Moore’s Law?

Is the $100 laptop trying to kill the idea Moore's Law depends on: that consumers should lust after the faster, more expensive hardware and keep buying the most expensive laptops, never considering a slower, far less expensive laptop that is probably less than half the price?

Here is an excerpt from Slashdot.org:

Timothy Harrington writes “Cnet.co.uk wonders if the $100 laptop could spell the end of Moore’s Law: ‘Moore’s law is great for making tech faster, and for making slower, existing tech cheaper, but when consumers realize their personal lust for faster hardware makes almost zero financial sense, and hurts the environment with greater demands for power, will they start to demand cheaper, more efficient ‘third-world’ computers that are just as effective?” Will ridiculously cheap laptops wean consumers off ridiculously fast components?”

Here is the story from CNet.co.uk:

The One Laptop Per Child organisation’s XO computer, aka the $100 laptop, has just started mass production. And while Crave is happy that thousands of underprivileged African children will reap the benefits of a PC and the Internet, we can’t help but feel a little jealous — and even embarrassed.

Here we are, extolling the virtues of laptops such as the £2,000 Sony Vaio TZ, when for most users the $100 XO would be just as effective. Sure, it doesn’t have a premium badge on the lid, and its 433MHz AMD CPU won’t win any speed records, but it’ll let you surf the Web, send email, enjoy audio and video, and even, as some Nigerian children have discovered, allow you to browse for porn.

Think about your own PC usage — does it honestly include anything more demanding than Facebook stalking, laughing at idiots on YouTube or hitting the digg button underneath the latest lolcat? Can you justify spending £2,000 when a machine costing £50 will do exactly the same thing? Crave thinks the world can learn a lot from the XO, the ClassMate PC and its ilk. These devices could change the computing world as we know it. And despite its makers saying it’s exclusive to the developing world, the XO absolutely should be brought to the West.

Since 1965, the tech world has obsessed about keeping pace with Moore’s Law — an empirical observation that computing performance will double every 24 months. Concurrently, consumers have lusted after the latest and greatest computing hardware, encouraged in part by newer, fatter, ever more demanding operating systems and applications.

Moore’s law is great for making tech faster, and for making slower, existing tech cheaper, but when consumers realise their personal lust for faster hardware makes almost zero financial sense, and hurts the environment with greater demands for power, will they start to demand cheaper, more efficient ‘third-world’ computers that are just as effective?

We think so. The amount of interest generated by the XO, the ClassMate PC, and more recently the £200 Asus Eee PC is phenomenal. Most people in the Crave office are astounded by their low price and relatively high functionality, and are finding it difficult to justify buying anything else. If you want to play the latest games, well, the latest games consoles, while power-hogs, are relatively cheap and graphically very impressive.

It’s almost poetic that the poorest nations in the world have the potential to push the Western tech industry in a new direction. Don’t get us wrong — we love fast, outlandish laptops and PCs as much as the next blog, but we’d be idiots not to show you the alternative. And what a fantastic alternative it is. We predict some very interesting, and money-saving times ahead. -Rory Reid

That's all for today.

Ryan Orser

2007/07/23 SSH Tricks

Sunday, July 22nd, 2007

Here are some cool tricks for SSH! These would be great for all the people who use SSH on their computers and their servers. It looks alright, and I am hoping it can reach more people than have been viewing my blog so far.

Here is an excerpt from http://polishlinux.org/apps/ssh-tricks/# :

SSH (secure shell) is a program enabling secure access to remote systems. Not everyone is aware of its other powerful capabilities, such as passwordless login, automatic execution of commands on a remote system, or even mounting a remote folder using SSH! In this article we’ll cover these features and much more.
Author: Borys Musielak

SSH works in a client-server mode. This means that there must be an SSH daemon running on the server we want to connect to from our workstation. The SSH server is usually installed by default in modern Linux distributions. The server is started with a command like /etc/init.d/ssh start. It uses port 22 by default, so if we have an active firewall, that port needs to be opened. After installing and starting the SSH server, we should be able to access it remotely. A simple command to log in as user1 to remote_server (identified by a domain name or an IP address) looks like this:

ssh user1@remote_server

After entering the password for the remote machine, a changed command prompt should appear, looking similar to user1@remote_server:~$. If this is the case, the login was successful and we’re now working in the remote server environment. Any command we run from this point on will be executed on the remote server, with the rights of the user we logged in with.

SCP – secure file copying

SCP is an integral part of the OpenSSH package. It is a simple command allowing you to copy any file or folder to or from a remote machine using the SSH protocol. The SSH+SCP duo is a great replacement for the insecure FTP protocol, which is still widely used on the Internet. Not everyone is aware, though, that all the passwords sent using the FTP protocol travel over the network in plain text (making it dead easy for crackers to intercept them); SCP is a much more reliable alternative. The simplest usage of SCP looks like the following example:

scp file.txt user1@remote_server:~/

This will copy the local file.txt to the remote server and put it in the home folder of user1. Instead of ~/, a different path can be supplied, e.g. /tmp, /home/public, or any other path we have write access to.

In order to copy a file from a remote server to the local computer, we can use another SCP syntax:

scp user1@remote_server:~/file.txt .

This will copy the file file.txt from the home folder of user1 on the remote system to the local folder (the one we are currently in).

Other interesting SCP options:

  • -r – to copy folders recursively (including subfolders),
  • -P port – to use a non-standard port (the default is 22) – of course, this option should be used if the server listens on a non-standard port. The option can be helpful when connecting from a firewall-protected network. Setting the SSH server to listen on port 443 (normally used for secure HTTP connections) is the best way to bypass an administrator’s restrictions. A combined example follows this list.
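As a quick illustration of both options together, here is a hedged example; the folder name and the non-standard port are hypothetical, not from the original article:

scp -r -P 443 user1@remote_server:~/projects ./projects

This recursively copies the remote ~/projects folder into the current directory, connecting to an SSH server listening on port 443.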

GUIs for SCP

If we do not like the console and prefer a GUI (graphical user interface), we can use a graphical (or pseudo-graphical) SCP client. Midnight Commander is one of the programs that provides an SCP client (the shell link option). Nautilus and Konqueror are SCP-capable file managers as well. Entering ssh://user1@remote_server:~/ in the URI field results in a secure shell connection to the remote system. The files can then be copied just as if they were available locally.
In the MS Windows environment, we have a great app called WinSCP. The interface of this program looks very much like Total Commander. By the way, there is a plug-in allowing for SCP connections from TC as well.

SSH without passwords – generating keys

Entering passwords upon every SSH connection can be annoying. On the other hand, an unprotected remote connection is a huge security risk. The solution to this problem is authorization using a private/public key pair.

The pair of keys is usually generated using the ssh-keygen command. Below is sample output from such a key generation. RSA or DSA keys can be used.

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key
(/home/user1/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in
/home/user1/.ssh/id_rsa.
Your public key has been saved in
/home/user1/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

When the program asks for the key password, we should just press ENTER – this way, a passwordless key will be created. Remember that having a passwordless key is always a security hole (in simple words, it downgrades your remote system’s security to the security of your local system), so do it at your own risk. When ssh-keygen finishes its work, you can see that two keys have been generated. The private key landed in /home/user1/.ssh/id_rsa and we should never make it public. The public key appeared in /home/user1/.ssh/id_rsa.pub and this is the one we can show to the entire world.

Now, if we want to access a remote system from our local computer without passwords (only using the keys), we have to add our public key to the authorized_keys file located in the ~/.ssh folder on the remote system. This can be done using the following commands:

$ scp /home/user1/.ssh/id_rsa.pub user1@remote_server:~/
$ ssh user1@remote_server
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys

The third command is obviously executed on the remote server. After this operation, no action performed on the remote server over SSH will need any password whatsoever. This will certainly make our work easier.

Notice that if you need passwordless access from the remote server to the local one, a similar procedure has to be performed from the remote server. Authorization using keys is a one-way process: the public key can verify the holder of the private one, not vice versa.
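As an aside (not from the original article): most OpenSSH installations also ship a small helper, ssh-copy-id, which performs the copy-and-append steps above in one go:

ssh-copy-id user1@remote_server

After entering the password once, your public key is appended to ~/.ssh/authorized_keys on the remote server for you.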

Executing commands on a remote system

Well, now that we can log into the remote OS without a password, why wouldn’t we want to execute some commands remotely? There are multiple useful applications of this, especially when we have to execute some command on a daily basis and it could not be automated before, because of the need to enter the password manually (or store it as plain text, which is not very secure).

One interesting case is a “remote alert”. Let’s say that we have some crucial process running on the remote system, e.g. a website running on an Apache server. We want to be warned when the system runs out of resources (e.g. disk space is getting short or the system load is too high). We could obviously send an e-mail in such cases. But additionally, we can execute a remote command which plays a warning sound on our local OS! The code for such an event would look something like this:

ssh user1@local_server 'play /usr/share/sounds/gaim/arrive.wav'

This command, executed in a script from the remote server, would cause a passwordless login of user1 to local_server (the one we’re usually working on) and play a wave file with the play command (which is usually available in Linux). The actual condition under which we execute this remote command should obviously be specified in a script; we’re not going to provide a scripting course here, just a way to execute remote commands with passwordless SSH :). A sketch of such a script follows.
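A minimal sketch of what such a watchdog script might look like, run from cron on the remote server. The 90% threshold, the checked filesystem and the sound path are placeholder assumptions, not part of the original article:

#!/bin/sh
# Warn the local workstation when the root partition is nearly full.
# Threshold, filesystem and sound file are hypothetical examples.
USAGE=$(df / | awk 'NR==2 { gsub("%",""); print $5 }')
if [ "$USAGE" -gt 90 ]; then
    ssh user1@local_server 'play /usr/share/sounds/gaim/arrive.wav'
fi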

X11 forwarding – running graphical apps remotely

One of the lesser-known functions of SSH is X protocol forwarding, which enables us to run almost any X application remotely! It’s enough to connect to the remote server using the -X option:

ssh -X user1@remote_server

and the display of every X application executed from then on will be forwarded to our local X server. We can enable X11 forwarding permanently by editing the /etc/ssh/ssh_config file (the relevant option is ForwardX11 yes). Of course, for this to work, the remote SSH server needs to permit X11 forwarding as well; the /etc/ssh/sshd_config file is responsible for that. This option is, however, enabled by default in most Linux distros.
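
The two relevant settings look like this (note that the option names differ between the client and server config files):

# /etc/ssh/ssh_config – local (client) machine
ForwardX11 yes

# /etc/ssh/sshd_config – remote (server) machine
X11Forwarding yes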

If we just need to execute one single command, we can use the syntax we learned before:

ssh -X user1@remote_server 'psi'

– this will execute the Psi instant messenger on the remote server, sending its display to the local screen.

Of course, the speed of applications executed remotely depends mostly on the network connection speed. It works almost flawlessly in local networks (even things like forwarding Totem playing a DivX movie). Over an Internet connection, a DSL line seems to be enough to get apps like Skype or Thunderbird working quite well on a remote display.

Notice that it’s also possible to connect to the remote server without X11 forwarding, export the DISPLAY variable so that it points to the local machine, and then run the X application. This way, the application is displayed remotely using the generic X server functionality; SSH security is not applied, since this kind of configuration has nothing to do with SSH. Depending on the configuration of the local X server, authorization of remote X applications may need to be turned on first. This is usually done with the xhost command – for example, xhost +hostname temporarily accepts all remote applications from the specified hostname. If we plan to use this option regularly, a more secure X server configuration is recommended.
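
A minimal sketch of that approach, assuming the local machine is named local_machine and that its X server actually listens on TCP (many distros start X with -nolisten tcp, in which case this will not work):

On the local machine:
$ xhost +remote_server

On the remote server:
$ export DISPLAY=local_machine:0
$ psi &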

SSHFS – mounting a remote folder

Working on files located on a remote server via SSH can be quite annoying, especially when we often need to copy files in both directions. Using the fish:// protocol in Midnight Commander or Konqueror is only a partial solution – fish tends to be much slower than pure SSH and often slows down even more while copying files. The ideal solution would be the possibility to mount a remote resource that is available only through SSH. The good news is that this option has existed for a while already, thanks to sshfs and the fuse project.

Fuse is a kernel module (recently adopted into the official 2.6 series) that allows unprivileged users to mount different filesystems. SSHFS, a program created by the author of fuse himself, makes it possible to mount remote folders/filesystems over SSH. The idea is very simple – a remote SSH folder is mounted as a local folder in the filesystem. From then on, almost all operations on this folder work exactly as if it were a normal local folder. The difference is that the files are silently transferred over SSH in the background.

Installing fuse and sshfs in Ubuntu is as easy as entering (as root):

# apt-get install sshfs

The only remaining action is to add the user who should be allowed to mount SSH folders to the fuse group (using a command like usermod -a -G fuse user1, or by manually editing the /etc/group file). Finally, the fuse module needs to be loaded:

# modprobe fuse

And then, after logging out and back in (so that the new group membership takes effect), we can try to mount a remote folder using sshfs:

mkdir ~/remote_folder
sshfs user1@remote_server:/tmp ~/remote_folder

The command above causes the folder /tmp on the remote server to be mounted as ~/remote_folder on the local machine. Copying any file into this folder results in transparent transfer over the network via SSH. The same applies to editing, creating or removing files directly.

When we’re done working with the remote filesystem, we can unmount the remote folder by issuing:

fusermount -u ~/remote_folder

If we work with this folder on a daily basis, it is wise to add it to the /etc/fstab table. This way it can be mounted automatically upon system boot, or mounted manually (if the noauto option is chosen) without the need to specify the remote location each time. Here is a sample entry in the table:

sshfs#user1@remote_server:/tmp /home/user1/remote_folder/ fuse    defaults,auto    0 0

If we want to use fuse and sshfs regularly, we need to edit the /etc/modules file and add a fuse entry; otherwise we would have to load the module manually each time we want to use it.
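
A quick way to add the entry (as root), assuming a Debian/Ubuntu-style /etc/modules file:

# echo fuse >> /etc/modules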

Summary

As you can see, SSH is a powerful remote access tool. If we often need to work with remote UNIX filesystems, it’s really worth learning a few of SSH’s powerful features and using them in practice. SSH can make your daily work much more effective and pleasant at the same time. In a following article (to be published later this month) we’re going to cover another great feature of SSH: making different kinds of tunnels with port forwarding, transparent SOCKS proxying and corkscrew.

You should also consider changing the port the SSH daemon listens on from Port 22 to Port 443 (see the sketch below). I use secure file copying (SCP) to post things on both of my websites. I also use ssh, though I am having a little trouble with it at the moment, and WinSCP for secure file copying on Windows XP. OpenSSH I have heard is good and I am hoping that it can improve. I am also trying TightVNC for my server. I hope that people can give me some reviews, maybe in comments on my blog. Good luck with this.
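
A minimal sketch of the port change on a stock Ubuntu install – in /etc/ssh/sshd_config, change

Port 22

to

Port 443

then restart the daemon as root with /etc/init.d/ssh restart. Remember to connect with ssh -p 443 afterwards.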

That’s all for now. This SSH tricks post also covers some SCP (secure file copying). Good luck – you will probably hit a few minor problems if you do not use Ubuntu or one of its derivatives.

2007/07/21 Ubuntu Development announcement

Friday, July 20th, 2007

Today I have decided to talk about something other than how to install applications. Today’s post will be about Ubuntu and Launchpad, via the latest ubuntu-devel-announce digest.

Today’s Topics:

1. Launchpad 1.1.7 release notes (Matthew Revell)
2. Tribe 3 released (Sarah Hobbs)

----------------------------------------------------------------------

Message: 1
Date: Thu, 19 Jul 2007 16:03:54 +0100
From: Matthew Revell <matthew.revell@canonical.com>
Subject: Launchpad 1.1.7 release notes
To: ubuntu-devel-announce@lists.ubuntu.com
Message-ID: <469F7D5A.1000402@canonical.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

The past four weeks have flown by in a flurry of activity here in the
Launchpad team, resulting in the most improvements of any 2007 release
so far! 1.1.7 brings bug fixes and new features right across Launchpad.

What’s new in 1.1.7?
====================

Highlights in Launchpad 1.1.7 include:

* Larger font size: visit launchpad.net and you’ll see that we’ve
increased the size of the text used on the site, making it easier to
read Launchpad’s default text size.
* New remote bug tracker support: Launchpad can now track bugs in the
Mantis bug tracker.
* Improved duplicate bug handling: if someone has already reported the
bug you’ve encountered it’s now much easier to select that report
rather than create a duplicate.
* Frequently Asked Questions in the Answer Tracker: answer contacts
can now mark frequently asked questions and create a canonical
answer that is available to anyone using the Answer Tracker.
* Teams can now set their default language, allowing a team to become
an answer contact for a specific language.
* Branch associations: you can now see all bugs, blueprints and
subscribers associated with a branch on its branch associations
page.

Read on for full details of Launchpad 1.1.7!

General Launchpad
=================

* Launchpad’s font size is now larger and is now easier to read.
(Bug 82344 and Bug 87471)
* If a team member has the rights to extend their own membership,
Launchpad will now provide them with a link to do so (Bug 12136)
* Safari rendering has been improved (Bugs 102155 and 109517)
* A layout problem on distribution series pages has been fixed.
(Bug 111783)
* Chart bars in Launchpad Translations are now displayed more
correctly by W3M. (Bug 39292)
* Links to “Report a bug” in the Bug Tracker have been reworked to be
less confusing and easier to read. (Bugs 3786 and 121199)
* Expandable areas, such as the bug comment field, no longer flicker
open while pages are loading. (Bug 87221)
* Internet Explorer no longer encounters a JavaScript error on most
Launchpad pages. (Bug 110926)
* The text of the link to see project membership has been reworded
for clarification (Bug 121761)
* The green home menu’s projects link now correctly links to projects
instead of products. (Bug 124246)
* A logic error in the team invite form has been fixed. Previously
this would result in an error being displayed to users. (Bug 125042)
* Several updates were made to how we deal with folding quoted (and
other) text to increase readability. (Bugs 40526 and 122681)
* When filing new bugs and questions, the algorithm to look for
similar items was tweaked to offer better suggestions when there are
very few existing items. (Bug 126587)

Distribution management (Soyuz)
===============================

* Upload archive consistency checks have been improved. They now
detect file collisions before they get into publishing tables.
(Bug 119753)
* The “builder page” has been modified to store logs as utf-8 text.
(Bug #122439)
* We now use the changed-by person instead of the maintainer for
announcing the -changes (e.g. gutsy-changes) emails. A typo in the
upload rejection email was also fixed (Bug 122086 and Bug 120605)
* We now extend the distroseries FROZEN state to allow uploads for all
pockets. (Bug 67790)
* Minor fixes were applied to the ftpmaster-tool to correct some
functionality issues. (Bug 121784) A larger refactoring effort is
planned for the future.
* “change-override.py -S” now only affects binaries built by the
source package in the suite currently being acted upon (32135)
* Uploads rejected because of non-ASCII chars will now be accepted.
(Bug 121711)

Infrastructure and System Administration
========================================

* Further changes have been made to improve our ability to reconnect
to the database when an individual server has issues. This helps
to prevent the appearance that Launchpad is down when actually only
one individual server is unhappy.
* The accuracy of database timeouts has been improved thus preventing
tardy timeout errors. (Bug 107722)

Code Branch Management (Code Hosting)
=====================================

* Launchpad now serves hosted and mirrored branches over bzr+ssh, as
well as sftp. (Spec supermirror-smart-server)
* We no longer display merged or abandoned branches on the Latest Code
portlet for projects. (Bug 116033)
* When registering a branch via “bzr register-branch” we now correctly
check to see if the branch URL is valid and has a unique branch
name. When register-branch does fail, it now produces a more
informative error message. (Bug 78522 and Bug 124441)
* There is a new page that shows a branch’s associated bugs,
blueprints and subscribers. (branch-associations-view) e.g.
https://code.launchpad.net/~ubuntu-doc/ubuntu-doc/trunk/+associations

Bug Tracker
===========

* Initial support for tracking bugs in the Mantis bug tracker has
been added. (Bug 32266)
* Triaged bugs are now correctly included in the default bug listing.
(Bug 121636)
* Closing bugs from changelogs now works when the changelog contains
a URL or CVE reference.(Bug 123534)
* A temporary issue prevented bugs from being closed via changelogs.
This has now been fixed. (Bug 121606)
* The warning about a bug contact subscription is now gender neutral.
(Bug 97268)
* Including CVEs in a package upload that closes a bug now works
correctly (Bug 123968)
* Improvements to the bug search should result in fewer timeout
errors when filing new bugs with long text strings. (Bug 86361)
* The +filebug page now prompts users, who find their bug is already
reported, to subscribe to the bug instead of creating a duplicate.
(Bug 116364)
* The list of bugs returned when using +filebug (i.e. “Your bug may
have already been reported”) now display their current status.
(Bug 79115)
* The BugZilla resolutions ‘CODE_FIX’ and ‘PATCH_ALREADY_AVAILABLE’
(for bugs of status CLOSED, VERIFIED or RESOLVED in BugZilla) are
now mapped to the Launchpad ‘Fix released’ bug status. Bugzilla’s
‘WONTFIX’ resolution now correctly maps to Launchpad’s ‘Won’t Fix’
status. Previously, these resolutions were incorrectly mapped to
Launchpad’s ‘Invalid’ status (Bug 121348 and Bug 113974).
* Anonymous users may no longer nominate a bug for release and will
be asked to log in when they attempt to do so. Previously, anonymous
users trying to nominate a bug for release would trigger an
application error (Bug 90791).
* The text displayed on the ‘report a bug as affecting another
upstream project’ form has been reworded to support teams and be
gender neutral (Bug 97268).
* It is no longer possible to use the bug-filing forms at
https://bugs.launchpad.net/bugs/+filebug to file bugs against
projects that do not use Launchpad for bug tracking (Bug 113268).
* It is no longer possible to attempt to file bugs, answers or
blueprints against empty project groups. Previously, users would be
presented with blank or un-submittable forms when attempting to add
these items to an empty project group. Empty project groups’ Bugs,
Blueprints, Translations and Answers facets are now disabled by
default. A warning will be shown to empty groups’ owners advising
them to add projects to their group
(Bugs 106523, 124428 and 124434).
* The Bug Tracker statistics now report Bug Tracker statistics and
not Translations Statistics. (Bug 121353)
* A situation where bugs could be both targeted and nominated for a
release has been fixed. (Bug 118915)
* Bug search bookmarks no longer generate an error when clicked. This
was fixed in production shortly after it occurred. (Bug 122550)
* A distribution driver may now add and remove any person to/from the
bug contacts list. (Bug 29022)
* The “Date Last Updated” has been added to the bug details portlet.
(Bug 5936)

Answer Tracker
==============

* Launchpad now has an FAQ management facility. (Implement faq-base
blueprint and closes bug 117914)
* It is now possible for a team to set their default language.
(Bugs 121075, 121077, 121089, 121093, and 121094)
* Highlighting and contrast in answers has been improved.
(Bug 73009 and Bug 105135)
* The question page HTML no longer claims to be in English even when
it’s known not to be. It now claims to be non-English for better
search engine processing. (Bug 119288)
* It is now possible to add a new question from the question page.
(Bug 120211)
* When users state that their problem is solved, we no longer highlight
a best answer. Instead, they can now select the specific answer that
helped them solve the problem at a later time. To reflect this
change, the button called ‘I Solved My Problem’ was renamed to
‘Problem Solved’. (Bug 107810)
* The Answer Tracker now recognises English variants – e.g. Canadian
English – and will send questions in English to anyone who selected
an English variant in their preferred languages. (Bug 122063)
* The “Unsupported View” which contained questions asked in a language
for which there was no answer contact has been removed. Instead we
display a leading paragraph of links to unsolved questions by
language. (Bug 118726)
* Minor aesthetic improvements have been made to the questions page.

Translations
============

* The user interface for translating strings has been improved:
packaged translations are marked clearly, suggestions display has
been cleaned up, and fonts have changed to emphasize translation text
over less important information. (Bugs 81681, 83360, 103525)
* The +pots pages have been made clearer and easier to read.
* Duplicate suggestions will no longer appear when translating using
the Launchpad interface. (Bug 121582)
* Launchpad no longer reports an error on both the translation and
translation status pages when products don’t have a translation
reviewer assigned.
* Translation credits are now automatically handled by Launchpad by
listing all contributors in Launchpad along with credits coming from
the PO files. GNOME and KDE style translation credits are supported.
(Bug 116, specification translation-credit-strings)
* Launchpad translation modes have changed. The former CLOSED mode is
now called RESTRICTED and we have a new CLOSED mode, which limits
all translation activity – including suggestions – to translation
team members. This is useful where a project requires copyright
assignment and requires that translators sign an agreement before
starting work.
* Selecting translation suggestions from an alternate language works
again. (Bug 85117)
* Uploaders of translation templates are notified by email when their
uploads have been processed. (Bug 88875)
* Uploading translation tarballs will no longer fail if the tarballs
contain empty translation files, editor backups of translation
files etc. (Bug 102381)

Blueprint Tracker
=====================

* When registering a blueprint with an upper-case name, Launchpad
no longer issues an error message. Instead, names are now
automatically converted to lower case. (Bug 111799)
* It is now possible to register blueprints directly from project group
pages. While looking at a project group page, select “register a
blueprint” from the actions panel.
(Spec register-blueprints-from-project-groups)
* Vertical white-space and a ‘register blueprint’ button have been
added to the listing pages. (Bug 99967 and Bug 123494)
* Previously, blueprints with the same name but for different
projects could mistakenly be linked instead of the relevant
blueprint in the target project. This has been fixed. (Bug 79377)
* Email notifications now correctly state Blueprints instead of the
deprecated “Specification” name. (Bug 88561)
* Adding dependencies to blueprints now uses a pop-up control instead
of a drop-down control. This should make the process of establishing
dependencies easier. (Bug 78265)
* The graphical “register a blueprint” link now works correctly.
(Bug 125126)
* Blueprints can now be registered from a sprint homepage and
automatically proposed for that sprint.
* The implementation status of blueprints requiring no implementation
can now be set to informational. Use this option to track blueprints
that do not result in any products other than the blueprint itself.
(Spec implicitly-informational-blueprint)
* Launchpad now notifies a user by email when their subscription to a
blueprint is changed by someone else. (Bug 70982)


Matthew Revell – talk to me about Launchpad
Join us in #launchpad on irc.freenode.net

------------------------------

Message: 2
Date: Fri, 20 Jul 2007 00:51:12 +1000
From: Sarah Hobbs <hobbsee@ubuntu.com>
Subject: Tribe 3 released
To: Ubuntu Development Announcements
<ubuntu-devel-announce@lists.ubuntu.com>
Message-ID: <469F7A60.8020008@ubuntu.com>
Content-Type: text/plain; charset=ISO-8859-1

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hello Ubuntu developers,

Welcome to Gutsy Gibbon Tribe 3, which will in time become Ubuntu 7.10.

Pre-releases of Gutsy are *not* encouraged for anyone needing a stable
system or anyone who is not comfortable running into occasional, or even
frequent breakage. They are, however, recommended for Ubuntu developers
and those who want to help in testing, reporting, and fixing bugs.

Tribe 3 is the third in a series of milestone CD images that will be
released throughout the Gutsy development cycle. The Tribe images are
known to be reasonably free of show-stopper CD build or installer
bugs, while representing a very recent snapshot of Gutsy. You can
download it here:

http://cdimage.ubuntu.com/releases/gutsy/tribe-3/ (Ubuntu)
http://cdimage.ubuntu.com/kubuntu/releases/gutsy/tribe-3/ (Kubuntu)
http://cdimage.ubuntu.com/edubuntu/releases/gutsy/tribe-3/ (Edubuntu)
http://cdimage.ubuntu.com/xubuntu/releases/gutsy/tribe-3/ (Xubuntu)

See http://wiki.ubuntu.com/Mirrors for a list of mirrors.

Another set of new features landed in Tribe 3, and are ready for
large-scale testing. Please refer to the following web pages for
details:

http://www.ubuntu.com/testing/tribe3 (Ubuntu)
https://wiki.kubuntu.org/GutsyGibbon/Tribe3/Kubuntu (Kubuntu)
https://wiki.ubuntu.com/GutsyGibbon/Tribe3/Xubuntu (Xubuntu)

This is quite an early set of images, so you should expect some
bugs. Among these are the following (so you don’t need to bother
reporting these if you encounter them):

* The desktop CD hangs on a lot of systems, especially slower ones
with little RAM. Sometimes it is just slow, sometimes it will hang
eternally. If you experience this and waiting a bit longer does
not help, try to restart the computer and the live CD. If that
still does not help, please use the alternate CD.
(https://launchpad.net/bugs/126964)

* On Edubuntu server installs, the “Building LTSP root” step takes a
very long time (in the order of 15 minutes) without visible
progress. It will eventually finish, though.
(https://launchpad.net/bugs/121547)

If the graphical system does not come up or is very slow, please
file a bug against compiz:

https://launchpad.net/ubuntu/+source/compiz/+filebug

Please include a copy of the files ~/.xsession-errors and
/var/log/Xorg.0.log, and the output of glxinfo and xdpyinfo.

If you’re interested in following the changes as we further develop
Gutsy, have a look at the gutsy-changes mailing list:

http://lists.ubuntu.com/mailman/listinfo/gutsy-changes

Please be aware that this list usually has several dozen mails every
day.

We also suggest that you subscribe to the ubuntu-devel-announce list
if you’re interested in following Ubuntu development. This is a
low-traffic list (a few posts a month) carrying announcements of
approved specifications, policy changes, alpha releases, and other
interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

Bug reports should go to the Ubuntu bug tracker:

https://bugs.launchpad.net/ubuntu

Enjoy,

The Ubuntu Development Team
http://www.ubuntu.com

2007/07/20 How to install Azureus with yum

Thursday, July 19th, 2007

Here we are again with another installation with yum.

First of all, you need to have yum.

Second, open a terminal.

Third, type ‘su’ into the terminal.

Fourth, type in the root password.

Fifth, type ‘yum install azureus’.

Sixth, type ‘y’ when the terminal asks you if you still want the download.

Seventh, let the installation move forward until it’s complete.

Eighth, you now have Azureus, so you can use it as a BitTorrent client.

Ninth, type ‘azureus’ into the terminal to run it.

It’s a little slow with one of its plugins, though it should work. The whole session is summarized below.
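
For reference, the whole session looks something like this ($ is the normal user prompt, # is root):

$ su
Password:
# yum install azureus
Is this ok [y/N]: y
# exit
$ azureus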

2007/07/19 How to install Thunderbird from the command line with yum

Thursday, July 19th, 2007

Well, here we are again with another Fedora 7 CLI (command line) installation of an application.

First of all, you need a Linux distribution that supports yum (e.g. Red Hat, Fedora).

1) open a terminal

2) type ‘su’

3) type the root password

4) type ‘yum install thunderbird’

5) when the terminal asks if it’s OK to download 23 MB (megabytes) of packages, type in ‘y’

6) the terminal will download all the packages and dependencies, then install them

7) after it completes, type ‘thunderbird’ into the terminal to run the program

8) Set up the email client so that it collects your mail.

9) You’re done! (The whole session is summarized below.)
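
For reference, the whole session looks something like this:

$ su
Password:
# yum install thunderbird
Is this ok [y/N]: y
# exit
$ thunderbird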

Update to yesterday’s post on How to install Synaptic with yum:
As you have already seen, I have looked at Synaptic again and seen what there is in it: practically nothing, since it is for Debian packages. I am getting this out a day late.