Archive for July, 2007

2007/07/31 Slashdot: A Majority of Businesses Will Not Move to Vista

Tuesday, July 31st, 2007

Here is the story from Slashdot IT:

  oDDmON oUT writes “An article on the Computerworld site quotes polling results from a potentially-divisive PatchLink survey. The poll shows that the majority of enterprise customers feel there are no compelling security enhancements in Windows Vista, that they have no plans to migrate to it in the near term and that many will ‘either stick with the Windows they have, or turn to Linux or Mac OS X’. A majority, 87%, said they would stay with their existing version of Windows. This comes on the heels of a dissenting view of Vista’s track record in the area of security at the six month mark, which sparked a heated discussion on numerous forums.”

Here is the full story:

Businesses having second thoughts about Vista

Fewer now believe it’s more secure than XP, says new survey

July 30, 2007 (Computerworld) — Fewer businesses are now planning to move to Windows Vista than seven months ago, according to a survey by patch management vendor PatchLink Corp., while more said they will either stick with the Windows they have, or turn to Linux or Mac OS X.

In a just-released poll of more than 250 of its clients, PatchLink noted that only 2% said they are already running Vista, while another 9% said they planned to roll out Vista in the next three months. A landslide majority, 87%, said they would stay with their existing version(s) of Windows.

Those numbers contrasted with a similar survey the Scottsdale, Ariz.-based vendor published in December 2006. At the time, 43% said they had plans to move to Vista while just 53% planned to keep what Windows they had.

Today’s hesitation also runs counter to what companies thought they would do as of late last year. In PatchLink’s December poll, 28% said they would deploy Vista within the first year of its release. But by the results of the latest survey, fewer than half as many — just 11% — will have opted for the next-generation operating system by Nov. 1.

Their change of heart may be because of a changed perception of Vista’s security skills. Seven months ago — within weeks of Vista’s official launch to business, but before the operating system started selling in retail — 50% of the CIOs, CSOs, IT and network administrators surveyed by PatchLink said they believed Vista to be more secure than Windows XP. The poll put the security skeptics at 15% and pegged those who weren’t sure yet at 35%.

Today, said PatchLink, only 28% agreed that Vista is more secure than XP. Meanwhile, the no votes increased to 24% and the unsure climbed to 49%.

Reconsiderations about Vista have given rival operating systems a second chance at breaking into corporations. Last year, Linux and Mac OS X had only meager appeal to the CIOs, CSOs, IT and network administrators surveyed: 2% said they planned to deploy the open-source Linux, while none owned up to Mac OS X plans. July’s survey, however, noted a six-fold increase in the total willing to do without Windows on at least some systems: 8% of those polled acknowledged Linux plans and 4% said they would deploy Mac OS X.

PatchLink’s survey results fit with research firms’ continued forecasts that corporate deployment of Vista won’t seriously begin until early next year.

Although Microsoft recently announced it had shipped 60 million copies of Vista so far, it has declined to specify how many buyers are businesses, or even what percentage of the estimated 42 million PCs covered by corporate license agreements have actually upgraded to Vista.

The poll also offered evidence that corporations are even more afraid of zero-day vulnerabilities — bugs still unpatched when they’re made public or used in exploits — than they were last year.

Zero-day vulnerabilities are the top security concern for the majority of IT professionals, according to the survey, with 53% of those polled ranking it as a major worry. In the December 2006 survey, only 29% of the administrators pegged zero-days as their top problem.

“The prospect of zero-day attacks is extremely troubling for organizations of all sizes,” said Charles Kolodgy, an IDC research director, in a statement accompanying the survey. “Today’s financially motivated attackers are creating customized, sophisticated malware designed to exploit unpublished application vulnerabilities in specific applications before they can be fixed.”

This is a good story, I think.

Ryan Orser

2007/07/30 Microsoft FUD Watch!

Monday, July 30th, 2007

Well, here is something that seems to go on and on. Here is Slashdot’s story on Microsoft FUD Watch:

rs232 writes “Not a week goes by when Microsoft doesn’t manufacture a little fear, uncertainty and doubt about something. Yesterday’s financial analyst conference was full of it … Our approach is simple: We look at who said what and why it’s FUD. Lots of companies engage in FUD, and we only single out Microsoft because we’re Microsoft Watch”

Here is the story on eWeek’s Microsoft Watch:

 

Microsoft FUD Watch, 6-27-07

 

 Not a week goes by when Microsoft doesn’t manufacture a little fear, uncertainty and doubt about something. Yesterday’s financial analyst conference was full of it.

FUD Watch will be an ongoing addition to our blogging, this time delivered in simple post format. Some future FUD Watch updates could come in podcast or slide show format.

Our approach is simple: We look at who said what and why it’s FUD. Lots of companies engage in FUD, and we only single out Microsoft because we’re Microsoft Watch.

Ray Ozzie, chief software architect

What he said:

“We are the only company in the industry that has the breadth of reach from consumer to enterprises to understand and deliver and to take full advantage of the services opportunity in all of these markets. I believe we’re the only company with the platform DNA that’s necessary to viably deliver this highly leveragable platform approach to services. And we’re certainly one of the few companies that has the financial capacity to capitalize on this sea change, this services transformation.”

Why is it FUD?
Ozzie spoke about Microsoft’s services strategy at last year’s financial analysts conference, too. Talk, talk, talk. Promises, promises. Microsoft hasn’t yet delivered one piece of its so-called services strategy. The boasting, coupled with yesterday’s presentation on the services framework, is a good way of making Microsoft out to be doing much more than it really is; right now that’s not much, because nothing new is on the market. Meanwhile, Google continues to make huge advertising and search gains. Microsoft is notorious for talking about what it’s going to do some day. Hey, what about today?

Robbie Bach, president, Entertainment & Devices division

What he said:

“What we find in the phone market is that people do want choice, because they use their phone for different things. Some people want an entertainment phone. Some people want a text-messaging e-mail phone. Some people want a phone where it’s easy to dial. People want different sets of capabilities, and a bunch of people want a full QWERTY keyboard. And so we have to be able to provide the operating system to the operators and to the handset manufacturers that delivers that diversity.”

Why is it FUD?
“Choice” is a code word for “choice, as long as it’s on a Microsoft platform.” When iPod sales started to skyrocket, Microsoft responded with a FUD campaign about choice—how many different devices and music services used Windows Media technologies. Microsoft clearly is cueing up for a choice FUD campaign against the iPhone. Regarding the iPhone, Microsoft delivers a two-FUD punch about choice and cost, dismissing, as CEO Steve Ballmer has done, the iPhone because of its $500 or $600 price.

Number of choices isn’t the same as what you choose. Remember those old Starkist commercials with Charlie the Tuna, where he had good taste but that didn’t mean he would taste good? Choices aren’t necessarily the same as choice. Tens of millions of people chose the iPod. In mobiles, the majority already has made its choice: Symbian OS-based cell phones. That said, lots of U.S. folks have chosen the iPhone—270,000 units in the first two days of sales.

Jeff Raikes, president, Business division

What he said:

“Historically, our [Enterprise Agreement] renewal rates have been about 2/3 to 3/4. And I know many of you wonder, well, with customers already licensed for the 2007 Office system, were they going to renew their Enterprise Agreement? We were very, very excited to see that because of the strength of our road map, the future that they see in what we’re investing in the Office system, the rate was greater than 90 percent in this last quarter.”

Why is it FUD?
Analysts from Forrester and Gartner have Microsoft customer data indicating sluggish Software Assurance renewals. Microsoft hasn’t publicly commented on SA renewals. The very positive Enterprise Agreement data is a misdirection. It draws attention to an exciting trend that suggests volume licensing contract renewals are rosy. But strong EA renewals don’t necessarily mean a similar trend for Software Assurance. Can you say non sequitur?

Kevin Turner, chief operating officer

What he said:

“By our math we eclipse the entire install base of Apple in the first five weeks that this product shipped. And that’s something again—this ecosystem that I just talked about—we’re not building an ecosystem that handles four, five, six devices and five or six printers. The opportunity is 2.1 million devices and thousands and thousands of printers. And that’s the importance of getting this product to mass and scale, which we believe is a huge competitive advantage for us.”

Why is it FUD?
Apple announced record earnings the day before Turner made this statement. The company shipped a record number of Macs with year-over-year unit growth of 20 percent and 42 percent, respectively, for desktops and notebooks. Microsoft’s estimate for second-calendar-quarter PC shipment growth, which is in line with analyst projections, was between 11 and 13 percent; Mac shipments far exceed market growth.

Microsoft appears concerned about Apple, whose brand is resurgent and which has made huge strides in some areas of entertainment and communications; however, Mac OS poses no immediate threat to Windows.

As for Turner’s ding: Wal-Mart typically takes in as much money in the first quarter as Target makes in one year. Is that a reason to pick one store over the other?

That’s what we should all be doing: watching Microsoft!

2007/07/29 IT on Slashdot: Linus Explains Why He Chose CFS Over SD

Sunday, July 29th, 2007

Here is a story on Slashdot:

  Firedog writes “There’s been a lot of recent debate over why Linus Torvalds chose the new CFS process scheduler written by Ingo Molnar over the SD process scheduler written by Con Kolivas, ranging from discussing the quality of the code to favoritism and outright conspiracy theories. KernelTrap is now reporting Linus Torvalds’ official stance as to why he chose the code that he did. ‘People who think SD was “perfect” were simply ignoring reality,’ Linus is quoted as saying. He goes on to explain that he selected the Completely Fair Scheduler because it had a maintainer who has proven himself willing and able to address problems as they are discovered. In the end, the relevance to normal Linux users is twofold: one is the question as to whether or not the Linux development model is working, and the other is the question as to whether the recently released 2.6.23 kernel will deliver an improved desktop experience.”

Here is the story on KernelTrap:

Linux: Linus On CFS vs SD

Submitted by Jeremy on July 27, 2007 – 8:10pm.

“People who think SD was ‘perfect’ were simply ignoring reality,” Linus Torvalds began in a succinct explanation as to why he chose the CFS scheduler written by Ingo Molnar instead of the SD scheduler written by Con Kolivas. He continued, “sadly, that seemed to include Con too, which was one of the main reasons that I never [entertained] the notion of merging SD for very long at all: Con ended up arguing against people who reported problems, rather than trying to work with them.” He went on to stress the importance of working toward a solution that is good for everyone, “that was where the SD patches fell down. They didn’t have a maintainer that I could trust to actually care about any other issues than his own.” He then offered some praise to Ingo, “as a long-term maintainer, trust me, I know what matters. And a person who can actually be bothered to follow up on problem reports is a *hell* of a lot more important than one who just argues with reporters.” Linus went on to note a comparison between the two schedulers:

“I realize that this comes as a shock to some of the SD people, but I’m told that there was a university group that did some double-blind testing of the different schedulers – old, SD and CFS – and that everybody agreed that both SD and CFS were better than the old, but that there was no significant difference between SD and CFS.”

Con Kolivas had maintained the -ck Linux kernel patchset, aimed at improving the desktop experience, since 2002, originally for the 2.4 kernel. Shortly after the decision to merge the CFS scheduler instead of his SD scheduler, and without an official response about merging his swap prefetch patch, he announced his decision to stop working on the Linux kernel. More information about his contributions and recent decision can be found in this interview on apcmag.com.

Ingo Molnar wrote the original CFS scheduler within a 62-hour window starting on April 11th, 2007. Early reports on the CFS scheduler suggested it was an improvement over the old scheduler, but perhaps not over the SD scheduler. Ingo determinedly followed up on all bug and regression reports, rapidly improving the scheduler and addressing all known issues. It was merged into the 2.6.23 kernel on July 9th, three months after it was written.
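
Before the mailing list thread itself, a quick illustration of what “completely fair” means in practice. This is my own minimal sketch in C, not Ingo Molnar’s actual CFS code (the real scheduler keeps runnable tasks in a red-black tree and also deals with priorities, sleepers and SMP): each task accumulates a weighted “virtual runtime”, and the scheduler always picks the runnable task that has received the least CPU time so far.

/* Sketch only -- not the real kernel implementation. */
#include <stddef.h>

struct task {
    const char *name;
    unsigned long long vruntime; /* weighted CPU time received so far, in ns */
    unsigned int weight;         /* higher weight = higher priority */
};

/* A "completely fair" pick: the runnable task with the smallest virtual
 * runtime has been treated worst so far, so run it next. (Real CFS finds
 * it in O(log n) as the leftmost node of an rbtree; a linear scan keeps
 * this sketch short.) */
static struct task *pick_next_task(struct task *rq, size_t nr_running)
{
    struct task *next = NULL;
    for (size_t i = 0; i < nr_running; i++)
        if (next == NULL || rq[i].vruntime < next->vruntime)
            next = &rq[i];
    return next;
}

/* After a task has run for delta_ns, charge it in inverse proportion to its
 * weight, so higher-priority tasks accumulate vruntime more slowly and are
 * therefore picked more often. 1024 stands in for the "nice 0" weight. */
static void account_runtime(struct task *t, unsigned long long delta_ns)
{
    t->vruntime += delta_ns * 1024ULL / t->weight;
}

A task that sleeps a lot stops accumulating vruntime while it sleeps, so under a rule like this it tends to get the CPU quickly when it wakes up, which is how fairness by itself already helps interactive workloads.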


From: Kasper Sandberg [email blocked]
To: Linus Torvalds [email blocked]
Subject: Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 04:04:39 +0200

(sorry for repost, but there seemed to have been some troubles..)

On Sun, 2007-07-22 at 14:04 -0700, Linus Torvalds wrote:
> Ok, right on time, two weeks afetr 2.6.22, there’s a 2.6.23-rc1 out there.
>
> And it has a *ton* of changes as usual for the merge window, way too much
> for me to be able to post even just the shortlog or diffstat on the
> mailing list (but I had many people who wanted to full logs to stay
> around, so you’ll continue to see those being uploaded to kernel.org).
>
> Lots of architecture updates (for just about all of them – x86[-64], arm,
> alpha, mips, ia64, powerpc, s390, sh, sparc, um..), lots of driver updates
> (again, all over – usb, net, dvb, ide, sata, scsi, isdn, infiniband,
> firewire, i2c, you name it).
>
> Filesystems, VM, networking, ACPI, it’s all there. And virtualization all
> over the place (kvm, lguest, Xen).
>
> Notable new things might be the merge of the cfs scheduler, and the UIO
> driver infrastructure might interest some people.
>
Im still not so keen about this, Ingo never did get CFS to match SD in
smoothness for 3d applications, where my test subjects are quake(s),
world of warcraft via wine, unreal tournament 2004. And this is despite
many patches he sent me to try and tweak it. As far as im concerned, i
may be forced to unofficially maintain SD for my own systems(allthough
lots in the gaming community is bound to be interrested, as it does make
games lots better)

<snip>

From: Linus Torvalds [email blocked]
To: Kasper Sandberg [email blocked]
Subject: Re: Linus 2.6.23-rc1
Date: Fri, 27 Jul 2007 19:35:58 -0700 (PDT)

On Sat, 28 Jul 2007, Kasper Sandberg wrote:
>
> Im still not so keen about this, Ingo never did get CFS to match SD in
> smoothness for 3d applications, where my test subjects are quake(s),
> world of warcraft via wine, unreal tournament 2004. And this is despite
> many patches he sent me to try and tweak it.

You realize that different people get different behaviour, don’t you?
Maybe not.

People who think SD was “perfect” were simply ignoring reality. Sadly,
that seemed to include Con too, which was one of the main reasons that I
never ended entertaining the notion of merging SD for very long at all:
Con ended up arguing against people who reported problems, rather than
trying to work with them.

Andrew also reported an oops in the scheduler when SD was merged into -mm,
so there were other issues.

> As far as im concerned, i may be forced to unofficially maintain SD for
> my own systems(allthough lots in the gaming community is bound to be
> interrested, as it does make games lots better)

You know what? You can do whatever you want to. That’s kind of the point
of open source. Keep people honest by having alternatives.

But the the thing is, if you want to do a good job of doing that, here’s a
big hint: instead of keeping to your isolated world, instead of just
talking about your own machine and ignoring other peoples machines and
issues and instead of just denying that problems may exist, and instead of
attacking people who report problems, how about working with them?

That was where the SD patches fell down. They didn’t have a maintainer
that I could trust to actually care about any other issues than his own.

So here’s a hint: if you think that your particular graphics card setup is
the only one that matters, it’s not going to be very interesting for
anybody else.

[ I realize that this comes as a shock to some of the SD people, but I’m
told that there was a university group that did some double-blind
testing of the different schedulers – old, SD and CFS – and that
everybody agreed that both SD and CFS were better than the old, but that
there was no significant difference between SD and CFS. You can try
asking Thomas Gleixner for more details. ]

I’m happy that SD was perfect for you. It wasn’t for others, and it had
nobody who was even interested in trying to solve those issues.

As a long-term maintainer, trust me, I know what matters. And a person who
can actually be bothered to follow up on problem reports is a *hell* of a
lot more important than one who just argues with reporters.

Linus

From: Grzegorz Kulewski [email blocked]
To: Linus Torvalds [email blocked]
Subject: Re: [ck] Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 09:09:06 +0200 (CEST)

On Fri, 27 Jul 2007, Linus Torvalds wrote:
> On Sat, 28 Jul 2007, Kasper Sandberg wrote:
>>
>> Im still not so keen about this, Ingo never did get CFS to match SD in
>> smoothness for 3d applications, where my test subjects are quake(s),
>> world of warcraft via wine, unreal tournament 2004. And this is despite
>> many patches he sent me to try and tweak it.
>
> You realize that different people get different behaviour, don’t you?
> Maybe not.
>
> People who think SD was “perfect” were simply ignoring reality. Sadly,
> that seemed to include Con too, which was one of the main reasons that I
> never ended entertaining the notion of merging SD for very long at all:
> Con ended up arguing against people who reported problems, rather than
> trying to work with them.

I don’t really want to keep all that -ck flamewar going but this sum-up is
a little strange for me:

If Con was thinking SD was “perfect” why he released 30+ versions of it?
And who knows how many versions of his previous scheduler?

Besides Con always tried to help people and improve his code if some bugs
or problems were reported. Archives of this list prove that. I reported
several problems (on list and privately) and all were fixed very fast and
with very kind responses. I had run -ck for months and years and it was
always very stable (I remember one broken “stable” version).

I don’t know what exactly are you refering to when you say about those
unaddressed reports but maybe it depends on who was asking, how and to do
what (for example – purely theoretical one, I don’t remember exact emails
you refering to so I am not saying it happened – stating at the beginning
that the whole design is unacceptable and interactivity hacks are a
must-have won’t make a friend from any maintainer and for sure lowers his
desire to get anything fixed for that guy). Or maybe Con had some bad day
or was depressed. Happens. But I really don’t remember Con ignoring too
many valuable user reports in last 3 years…

And no – I am not thinking that SD was “perfect”. Nothing is perfect,
especially not software. But it was based on months and years of Con’s
experience with desktop and gaming workloads and extensively tested in
similar uses by _many_ others. In nearly all possible desktop
configurations, with most games and all video drivers. This is why it was
perfectly designed and tuned for such workloads while still being general
enough and without any ugly hacks. And because of these tests and Con’s
believe that the desktop is very (most?) important all bugs and problems
in this area were probably killed long ago. I think even design was
changed and tuned a little at the early stages to help solve such
interactivity/dekstop/gaming problems.

So it does not surprise me that CFS is worse in such workloads (at least
for some people) because I strongly suspect that the number of people who
played games with current version of CFS is limited to about 5, maybe 10.
And I also suspect that you (and Ingo) will get many regression reports
when 2.6.23 is released (and months later too… or maybe you won’t
because users will be to “scared” to report such hard to mensure and
reproduce “unimportant” bugs). Hopefully such problems when reported will
be addressed as soon as they can. And hopefully they will be easy enough
to solve without rewriting or redesigning CFS and causing that way even
more regressions in other areas. If not people will probably be patching
O(1) scheduler back privately…

Thanks,

Grzegorz Kulewski

From: Linus Torvalds [email blocked]
Subject: Re: [ck] Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 10:12:32 -0700 (PDT)

On Sat, 28 Jul 2007, Jonathan Jessup wrote:
>
> Linus, there is a complaint about the Linux kernel, this complaint is that
> the Linux kernel isn’t giving priorities to desktop interactivity and
> experience. The response on osnews.com etc have shown that there is public
> demand for it too.

No, the response on osnews.com only shows that there are a lot of armchair
complainers around.

People are suggesting that you’d have a separate “desktop kernel”. That’s
insane. It also shows total ignorance of maintainership, and reality. And
I bet most of the people there haven’t tested _either_ scheduler, they
just like making statements.

The fact is, I’ve _always_ considered the desktop to be the most important
part. And I suspect that that actually is true for most kernel developers,
because quite frankly, that’s what 99% of them ends up using. If a kernel
developer uses Windows for his day-to-day work, I sure as hell wouldn’t
want to have him developing Linux. That has nothing to do with anything
anti-windows: but the whole “eat your own dogfood” is a very fundamental
thing, and somebody who doesn’t do that shouldn’t be allowed to be even
_close_ to a compiler!

So the whole argument about how kernel developers think that the desktop
isn’t important is totally made-up crap by Con, and then parrotted by
osnews and other places.

The fact is, most kernel developers realize that Linux is used in
different places, on different machines, and with different loads. You
cannot make _everybody_ happy, but you can try to do as good a job as
possible. And doing “as good a job as possible” very much includes not
focusing on any particular load.

And btw, “the desktop” isn’t actually one single load. It’s in fact a lot
of very different loads, and different people want different things. What
makes the desktop so interesting is in fact that it shows more varied
usage than any other niche – and no, 3D gaming isn’t “it”.

> Maybe once or twice Con couldn’t help or fix an issue but isn’t that what
> open source software is all about anyway?

That’s not the issue.

Con wass fixated on one thing, and one thing only, and wasn’t interested
in anythign else – and attacked people who complained. Compare that to
Ingo, who saw that what Con’s scheduler did was good, and tried to solve
the problems of people who complained.

The ck mailing list is/was also apparently filled with people who all had
the same issues, which is seriously the *wrong* thing to do. It means that
any “consensus” coming out of that kind of private list is totally
worthless, because the people you ask are already in agreement – you have
a so-called “selection bias”, and they just reinforce their own opinions.

Which is why I don’t trust mailing lists with a narrow topic. They are
_useless_. If you cannot get many different people from _different_ areas
to test your patches, and cannot see the big picture, the end result won’t
likely be very interesting to others, will it?

The fact is, _any_ scheduler is going to have issues. I will bet you
almost any amount of money that people are going to complain about Ingo’s
scheduler when 2.6.23 is released. That’s not the issue: the issue is that
the exact same thing would have happened with CK too.

So if you are going to have issues with the scheduler, which one do you
pick: the one where the maintainer has shown that he can maintain
schedulers for years, can can address problems from _different_ areas of
life? Or the one where the maintainer argues against people who report
problems, and is fixated on one single load?

That’s really what it boils down to. I was actually planning to merge CK
for a while. The _code_ didn’t faze me.

Linus


From: Kasper Sandberg [email blocked]
To: Linus Torvalds [email blocked]
Subject: Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 11:44:08 +0200

On Fri, 2007-07-27 at 19:35 -0700, Linus Torvalds wrote:
>
> On Sat, 28 Jul 2007, Kasper Sandberg wrote:
> >
> > Im still not so keen about this, Ingo never did get CFS to match SD in
> > smoothness for 3d applications, where my test subjects are quake(s),
> > world of warcraft via wine, unreal tournament 2004. And this is despite
> > many patches he sent me to try and tweak it.
>
> You realize that different people get different behaviour, don’t you?
> Maybe not.

Sure.

>
> People who think SD was “perfect” were simply ignoring reality. Sadly,
> that seemed to include Con too, which was one of the main reasons that I
> never ended entertaining the notion of merging SD for very long at all:
> Con ended up arguing against people who reported problems, rather than
> trying to work with them.

Im not saying its perfect, not at all, neither am i saying CFS is bad,
surely CFS is much better than the old one, and i agree with what that
university test you mentioned on kerneltrap says, that CFS and SD is
basically impossible to feel difference in, EXCEPT for 3d under load,
where CFS simply can not compete with SD, theres no but, this is how it
has acted on every system ive tested, and YES, others reported it too,
whether you choose to see it or not. and others people who run games on
linux tells me the exact same thing, and i have had quite a few people
try this.

>
> Andrew also reported an oops in the scheduler when SD was merged into -mm,
> so there were other issues.

And whats the point here? If you are trying to pull the old “Con just
runs away”, forget it, its a certainty that he would have put the
required time into fixing whatever issues arise.

>
> > As far as im concerned, i may be forced to unofficially maintain SD for
> > my own systems(allthough lots in the gaming community is bound to be
> > interrested, as it does make games lots better)
>
> You know what? You can do whatever you want to. That’s kind of the point
> of open source. Keep people honest by having alternatives.

True that

>
> But the the thing is, if you want to do a good job of doing that, here’s a
> big hint: instead of keeping to your isolated world, instead of just
> talking about your own machine and ignoring other peoples machines and
First off, i’ve personally run tests on many more machines than my own,
i’ve had lots of people try on their machines, and i’ve seen totally
unrelated posts to lkml, plus i’ve seen the experiences people are
writing about on IRC. Frankly, im not just thinking of myself.

> issues and instead of just denying that problems may exist, and instead of
> attacking people who report problems, how about working with them?

As i recall, there was only 1 persons reports that were attacked, and
that was because the person repeatedly reported the EXPECTED behavior as
broken, simply because it was FAIRLY allocating the cpu time, and this
did not meet with the dudes expectations. And it was after multiple
mails he was “attacked”

>
> That was where the SD patches fell down. They didn’t have a maintainer
> that I could trust to actually care about any other issues than his own.

You may not have been able to trust Con, but thats because you havent
taken the time to actually really see whats been going on, if you just
read the threads for SD you’d realize that he was more than willing to
maintain it, after all, why do you think he wrote and submitted it? you
think he just wrote it to piss you off by having it merged and leave?

>
> So here’s a hint: if you think that your particular graphics card setup is
> the only one that matters, it’s not going to be very interesting for
> anybody else.

as explained earlier, its not just my particular setup, but actually
that of alot of people, with lots of different hardware.

>
>
> [ I realize that this comes as a shock to some of the SD people, but I’m
> told that there was a university group that did some double-blind
> testing of the different schedulers – old, SD and CFS – and that
> everybody agreed that both SD and CFS were better than the old, but that
> there was no significant difference between SD and CFS. You can try
> asking Thomas Gleixner for more details. ]
>
> I’m happy that SD was perfect for you. It wasn’t for others, and it had
> nobody who was even interested in trying to solve those issues.
>
> As a long-term maintainer, trust me, I know what matters. And a person who
> can actually be bothered to follow up on problem reports is a *hell* of a
> lot more important than one who just argues with reporters.

Okay, i wasnt going to ask, but ill do it anyway, did you even read the
threads about SD? Con was extremely polite to everyone, and he did work
with a multitude of people, you seem to be totally deadlocked into the
ONE incident with a person that was unhappy with SD, simply for being a
fair scheduler.

From: Linus Torvalds [email blocked]
To: Kasper Sandberg [email blocked]
Subject: Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 10:50:48 -0700 (PDT)

On Sat, 28 Jul 2007, Kasper Sandberg wrote:
>
> First off, i’ve personally run tests on many more machines than my own,
> i’ve had lots of people try on their machines, and i’ve seen totally
> unrelated posts to lkml, plus i’ve seen the experiences people are
> writing about on IRC. Frankly, im not just thinking of myself.

Ok, good. Has anybody tried to figure out why 3D games seem to be such a
special case?

I know Ingo looked at it, and seemed to think that he found and fixed
something. But it sounds like it’s worth a lot more discussion.

> Okay, i wasnt going to ask, but ill do it anyway, did you even read the
> threads about SD?

I don’t _ever_ go on specialty mailing lists. I don’t read -mm, and I
don’t read the -fs mailing lists. I don’t think they are interesting.

And I tried to explain why: people who concentrate on one thing tend to
become this self-selecting group that never looks at anything else, and
then rejects outside input from people who hadn’t become part of the “mind
meld”.

That’s what I think I saw – I saw the reactions from where external people
were talking and cc’ing me.

And yes, it’s quite possible that I also got a very one-sided picture of
it. I’m not disputing that. Con was also ill for a rather critical period,
which was certainly not helping it all.

> Con was extremely polite to everyone, and he did work
> with a multitude of people, you seem to be totally deadlocked into the
> ONE incident with a person that was unhappy with SD, simply for being a
> fair scheduler.

Hey, maybe that one incident just ended up being a rather big portion of
what I saw. Too bad. That said, the end result (Con’s public gripes about
other kernel developers) mostly reinforced my opinion that I did the right
choice.

But maybe you can show a better side of it all. I don’t think _any_
scheduler is perfect, and almost all of the time, the RightAnswer(tm) ends
up being not “one or the other”, but “somewhere in between”.

It’s not like we’ve come to the end of the road: the baseline has just
improved. If you guys can show that SD actually is better at some loads,
without penalizing others, we can (and will) revisit this issue.

So what you should take away from this is that: from what I saw over the
last couple of months, it really wasn’t much of a decision. The difference
in how Ingo and Con reacted to peoples reports was pretty stark. And no, I
haven’t followed the ck mailing list, and so yes, I obviously did get just
a part of the picture, but the part I got was pretty damn unambiguous.

But at the same time, no technical decision is ever written in stone. It’s
all a balancing act. I’ve replaced the scheduler before, I’m 100% sure
we’ll replace it again. Schedulers are actually not at all that important
in the end: they are a very very small detail in the kernel.

Linus

That is all for today!

Ryan

2007/07/28 Dell to offer more Linux PCs

Saturday, July 28th, 2007

Here is an article on Slashdot:

 “According to this article, Mark Shuttleworth from the Ubuntu camp says Dell is seeing a demand for the Linux-based PC and, “There are additional offerings in the pipeline.” I’m starting to see flashbacks of the days when Microsoft partnered up with IBM to gain control of the desktop market. Will other Linux flavors find their way to the likes of Lenovo or HP, etc, or will Ubuntu claim the desktop market working with other PC manufacturers?”

Here is the real article:

Dell to expand Linux PC offerings, partner says
Thursday July 26, 4:36 pm ET

BOSTON (Reuters) – Dell Inc (NasdaqGS:DELL) will soon offer more personal computers that use the Linux operating system instead of Microsoft Corp’s (NasdaqGS:MSFT) Windows, said the founder of a company that offers Linux support services.

Dell, the world’s second-largest PC maker after Hewlett-Packard Co (NYSE:HPQ), now offers three consumer PCs that run Ubuntu Linux.

“What’s been announced to date is not the full extent of what we will see over the next couple of weeks and months,” Shuttleworth said in an interview late on Wednesday.

“There are additional offerings in the pipeline,” he said. Shuttleworth founded Canonical Inc to provide support for Ubuntu Linux.

A Dell spokeswoman, Anne Camden, declined comment, saying the company does not discuss products in the pipeline.

She added that Dell was pleased with customer response to its Linux PCs. She said Dell believed the bulk of the machines were sold to open-source software enthusiasts, while some first-time Linux users have purchased them as well.

Open-source software refers to computer programs, generally available over the Internet at no cost, that users can download, modify and redistribute.

The Linux operating system is seen as the biggest threat to Microsoft’s Windows operating system.

Shuttleworth said sales of the three Dell Ubuntu PC models were on track to meet the sales projections of Dell and Canonical. He declined to elaborate.

Companies like his privately held Canonical Inc, Red Hat Inc (NYSE:RHT) and Novell Inc (NasdaqGS:NOVL) make money by selling standardized versions of Linux programs and support contracts to service them.

There are dozens of versions of Linux, available for all sorts of computers from PCs to mainframes and tiny mobile devices.

Shuttleworth said his company was not in discussions with Hewlett-Packard or the other top five PC makers to introduce machines equipped with Ubuntu.

The other three top PC makers are Lenovo Group Ltd (HKSE:0992.HK), Acer Inc (Taiwan:2353.TW) and Toshiba Corp (Tokyo:6502.T).

(Reporting by Jim Finkle)

2007/07/27 Slashdot: A Historical Look at the First Linux Kernel

Friday, July 27th, 2007

This is an article on Slashdot taking a historical look at the first Linux kernel, version 0.01:

LinuxFan writes “KernelTrap has a fascinating article about the first Linux kernel, version 0.01, complete with source code and photos of Linus Torvalds as a young man attending the University of Helsinki. Torvalds originally planned to call the kernel “Freax,” and in his first announcement noted, “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones.” He also stressed that the kernel was very much tied to the i386 processor, “simply, I’d say that porting is impossible.” Humble beginnings.”

Now for the real article itself:

“This is a free minix-like kernel for i386(+) based AT-machines,” began the Linux version 0.01 release notes in September of 1991 for the first release of the Linux kernel. “As the version number (0.01) suggests this is not a mature product. Currently only a subset of AT-hardware is supported (hard-disk, screen, keyboard and serial lines), and some of the system calls are not yet fully implemented (notably mount/umount aren’t even implemented).” Booting the original 0.01 Linux kernel required bootstrapping it with minix, and the keyboard driver was written in assembly and hard-wired for a Finnish keyboard. The listed features were mostly presented as a comparison to minix and included: efficient use of the 386 chip rather than the older 8088, use of system calls rather than message passing, a fully multithreaded FS, minimal task switching, and visible interrupts. Linus Torvalds noted, “the guiding line when implementing linux was: get it working fast. I wanted the kernel simple, yet powerful enough to run most unix software.” In a section titled “Apologies :-)” he noted:

“This isn’t yet the ‘mother of all operating systems’, and anyone who hoped for that will have to wait for the first real release (1.0), and even then you might not want to change from minix. This is a source release for those that are interested in seeing what linux looks like, and it’s not really supported yet.”

Linus had originally intended to call the new kernel “Freax”. According to Wikipedia, the name Linux was actually invented by Ari Lemmke who maintained the ftp.funet.fi FTP server from which the kernel was originally distributed.

The initial post that Linus made about Linux was to the comp.os.minix Usenet group, titled “What would you like to see most in minix?”. It began:

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).”

Later in the same thread, Linus went on to talk about how unportable the code was:

“Simply, I’d say that porting is impossible. It’s mostly in C, but most people wouldn’t call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It’s the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data – max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task – tough cookies).

“It also uses every feature of gcc I could find, specifically the __asm__ directive, so that I wouldn’t need so much assembly language objects. Some of my ‘C’-files (specifically mm.c) are almost as much assembler as C. It would be ‘interesting’ even to port it to another compiler (though why anybody would want to use anything other than gcc is a mystery).

“Unlike minix, I also happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them (I especially like my hard-disk-driver. Anybody else make interrupts drive a state-machine?). All in all it’s a porters nightmare. “

Indeed, Linux 1.0 was released on March 13th, 1994, supporting only the 32-bit i386 architecture. However, by the release of Linux 1.2 on March 7th, 1995, it had already been ported to 32-bit MIPS, 32-bit SPARC, and the 64-bit Alpha. By the release of Linux 2.0 on June 9th, 1996, support had also been added for the 32-bit m68k and 32-bit PowerPC architectures. And jumping forward to the Linux 2.6 kernel, first released in December 2003, it has been and continues to be ported to numerous additional architectures.
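
To make the memory layout Linus describes above a little more concrete, here is a small stand-alone illustration in C (my own sketch, not code from the 0.01 sources): with one 64 MB segment of code and data per task, 64 segments exactly cover the 386’s 4 GB linear address space, and a segment’s base address is simply its number times 64 MB, with segment 0 reserved for the kernel as the release notes below explain.

/* Illustration only -- the arithmetic behind "64Mb/task - max 64 tasks in 4Gb". */
#include <stdio.h>
#include <stdint.h>

#define SEGMENT_SIZE (64ULL << 20)  /* 64 MB of code + data per segment */
#define NR_SEGMENTS  64ULL          /* 64 * 64 MB = 4 GB of linear address space */

/* Linear base address of segment 'nr'; segment 0 is the kernel segment,
 * and each task is then given one of the remaining segments. */
static uint64_t segment_base(unsigned int nr)
{
    return (uint64_t)nr * SEGMENT_SIZE;
}

int main(void)
{
    printf("linear address space covered: %llu GB\n",
           (unsigned long long)(NR_SEGMENTS * SEGMENT_SIZE >> 30));
    for (unsigned int nr = 0; nr < 4; nr++)
        printf("segment %u starts at 0x%08llx\n",
               nr, (unsigned long long)segment_base(nr));
    return 0;
}

This is also why Linus calls the kernel “REALLY 386 dependent”: the layout leans directly on the 386’s segmentation hardware, and a task that needed more than 64 MB was, in his words, “tough cookies”.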


Linux 0.01 release notes:

		Notes for linux release 0.01

		0. Contents of this directory

linux-0.01.tar.Z	- sources to the kernel
bash.Z			- compressed bash binary if you want to test it
update.Z		- compressed update binary
RELNOTES-0.01		- this file

		1. Short intro

This is a free minix-like kernel for i386(+) based AT-machines.  Full
source is included, and this source has been used to produce a running
kernel on two different machines.  Currently there are no kernel
binaries for public viewing, as they have to be recompiled for different
machines.  You need to compile it with gcc (I use 1.40, don't know if
1.37.1 will handle all __asm__-directives), after having changed the
relevant configuration file(s).

As the version number (0.01) suggests this is not a mature product.
Currently only a subset of AT-hardware is supported (hard-disk, screen,
keyboard and serial lines), and some of the system calls are not yet
fully implemented (notably mount/umount aren't even implemented).  See
comments or readme's in the code.

This version is also meant mostly for reading - ie if you are interested
in how the system looks like currently.  It will compile and produce a
working kernel, and though I will help in any way I can to get it
working on your machine (mail me), it isn't really supported.  Changes
are frequent, and the first "production" version will probably differ
wildly from this pre-alpha-release.

Hardware needed for running linux:
	- 386 AT
	- VGA/EGA screen
	- AT-type harddisk controller (IDE is fine)
	- Finnish keyboard (oh, you can use a US keyboard, but not
	  without some practise :-)

The Finnish keyboard is hard-wired, and as I don't have a US one I
cannot change it without major problems. See kernel/keyboard.s for
details. If anybody is willing to make an even partial port, I'd be
grateful. Shouldn't be too hard, as it's tabledriven (it's assembler
though, so ...)

Although linux is a complete kernel, and uses no code from minix or
other sources, almost none of the support routines have yet been coded.
Thus you currently need minix to bootstrap the system. It might be
possible to use the free minix demo-disk to make a filesystem and run
linux without having minix, but I don't know...

		2. Copyrights etc

This kernel is (C) 1991 Linus Torvalds, but all or part of it may be
redistributed provided you do the following:

	- Full source must be available (and free), if not with the
	  distribution then at least on asking for it.

	- Copyright notices must be intact. (In fact, if you distribute
	  only parts of it you may have to add copyrights, as there aren't
	  (C)'s in all files.) Small partial excerpts may be copied
	  without bothering with copyrights.

	- You may not distibute this for a fee, not even "handling"
	  costs.

Mail me at [email blocked] if you have any questions.

Sadly, a kernel by itself gets you nowhere. To get a working system you
need a shell, compilers, a library etc. These are separate parts and may
be under a stricter (or even looser) copyright. Most of the tools used
with linux are GNU software and are under the GNU copyleft. These tools
aren't in the distribution - ask me (or GNU) for more info.

		3. Short technical overview of the kernel.

The linux kernel has been made under minix, and it was my original idea
to make it binary compatible with minix. That was dropped, as the
differences got bigger, but the system still resembles minix a great
deal. Some of the key points are:

	- Efficient use of the possibilities offered by the 386 chip.
	  Minix was written on a 8088, and later ported to other
	  machines - linux takes full advantage of the 386 (which is
	  nice if you /have/ a 386, but makes porting very difficult)

	- No message passing, this is a more traditional approach to
	  unix. System calls are just that - calls. This might or might
	  not be faster, but it does mean we can dispense with some of
	  the problems with messages (message queues etc). Of course, we
	  also miss the nice features :-p.

	- Multithreaded FS - a direct consequence of not using messages.
	  This makes the filesystem a bit (a lot) more complicated, but
	  much nicer. Coupled with a better scheduler, this means that
	  you can actually run several processes concurrently without
	  the performance hit induced by minix.

	- Minimal task switching. This too is a consequence of not using
	  messages. We task switch only when we really want to switch
	  tasks - unlike minix which task-switches whatever you do. This
	  means we can more easily implement 387 support (indeed this is
	  already mostly implemented)

	- Interrupts aren't hidden. Some people (among them Tanenbaum)
	  think interrupts are ugly and should be hidden. Not so IMHO.
	  Due to practical reasons interrupts must be mainly handled by
	  machine code, which is a pity, but they are a part of the code
	  like everything else. Especially device drivers are mostly
	  interrupt routines - see kernel/hd.c etc.

	- There is no distinction between kernel/fs/mm, and they are all
	  linked into the same heap of code. This has it's good sides as
	  well as bad. The code isn't as modular as the minix code, but
	  on the other hand some things are simpler. The different parts
	  of the kernel are under different sub-directories in the
	  source tree, but when running everything happens in the same
	  data/code space.

The guiding line when implementing linux was: get it working fast. I
wanted the kernel simple, yet powerful enough to run most unix software.
The file system I couldn't do much about - it needed to be minix
compatible for practical reasons, and the minix filesystem was simple
enough as it was. The kernel and mm could be simplified, though:

	- Just one data structure for tasks. "Real" unices have task
	  information in several places, I wanted everything in one
	  place.

	- A very simple memory management algorithm, using both the
	  paging and segmentation capabilities of the i386. Currently
	  MM is just two files - memory.c and page.s, just a couple of
	  hundreds of lines of code.

These decisions seem to have worked out well - bugs were easy to spot,
and things work.

		4. The "kernel proper"

All the routines handling tasks are in the subdirectory "kernel". These
include things like 'fork' and 'exit' as well as scheduling and minor
system calls like 'getpid' etc. Here are also the handlers for most
exceptions and traps (not page faults, they are in mm), and all
low-level device drivers (get_hd_block, tty_write etc). Currently all
faults lead to a exit with error code 11 (Segmentation fault), and the
system seems to be relatively stable ("crashme" hasn't - yet).

		5. Memory management

This is the simplest of all parts, and should need only little changes.
It contains entry-points for some things that the rest of the kernel
needs, but mostly copes on it's own, handling page faults as they
happen. Indeed, the rest of the kernel usually doesn't actively allocate
pages, and just writes into user space, letting mm handle any possible
'page-not-present' errors.

Memory is dealt with in two completely different ways - by paging and
segmentation.  First the 386 VM-space (4GB) is divided into a number of
segments (currently 64 segments of 64Mb each), the first of which is the
kernel memory segment, with the complete physical memory identity-mapped
into it.  All kernel functions live within this area.

Tasks are then given one segment each, to use as they wish. The paging
mechanism sees to filling the segment with the appropriate pages,
keeping track of any duplicate copies (created at a 'fork'), and making
copies on any write. The rest of the system doesn't need to know about
all this.

		6. The file system

As already mentioned, the linux FS is the same as in minix. This makes
crosscompiling from minix easy, and means you can mount a linux
partition from minix (or the other way around as soon as I implement
mount :-). This is only on the logical level though - the actual
routines are very different.

	NOTE! Minix-1.6.16 seems to have a new FS, with minor
	modifications to the 1.5.10 I've been using. Linux
	won't understand the new system.

The main difference is in the fact that minix has a single-threaded
file-system and linux hasn't. Implementing a single-threaded FS is much
easier as you don't need to worry about other processes allocating
buffer blocks etc while you do something else. It also means that you
lose some of the multiprocessing so important to unix.

There are a number of problems (deadlocks/raceconditions) that the linux
kernel needed to address due to multi-threading.  One way to inhibit
race-conditions is to lock everything you need, but as this can lead to
unnecessary blocking I decided never to lock any data structures (unless
actually reading or writing to a physical device).  This has the nice
property that dead-locks cannot happen.

Sadly it has the not so nice property that race-conditions can happen
almost everywhere.  These are handled by double-checking allocations etc
(see fs/buffer.c and fs/inode.c).  Not letting the kernel schedule a
task while it is in supervisor mode (standard unix practise), means that
all kernel/fs/mm actions are atomic (not counting interrupts, and we are
careful when writing those) if you don't call 'sleep', so that is one of
the things we can count on.

		7. Apologies :-)

This isn't yet the "mother of all operating systems", and anyone who
hoped for that will have to wait for the first real release (1.0), and
even then you might not want to change from minix.  This is a source
release for those that are interested in seeing what linux looks like,
and it's not really supported yet.  Anyone with questions or suggestions
(even bug-reports if you decide to get it working on your system) is
encouraged to mail me.

		8. Getting it working

Most hardware dependancies will have to be compiled into the system, and
there a number of defines in the file "include/linux/config.h" that you
have to change to get a personalized kernel.  Also you must uncomment
the right "equ" in the file boot/boot.s, telling the bootup-routine what
kind of device your A-floppy is.  After that a simple "make" should make
the file "Image", which you can copy to a floppy (cp Image /dev/PS0 is
what I use with a 1.44Mb floppy).  That's it.

Without any programs to run, though, the kernel cannot do anything. You
should find binaries for 'update' and 'bash' at the same place you found
this, which will have to be put into the '/bin' directory on the
specified root-device (specified in config.h). Bash must be found under
the name '/bin/sh', as that's what the kernel currently executes. Happy
hacking.

		Linus Torvalds		[email blocked]
		Petersgatan 2 A 2
		00140 Helsingfors 14
		FINLAND

First posting about Linux:

From: Linus Benedict Torvalds
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Date: 3 Jul 91 10:00:50 GMT

Hello netlanders,

Due to a project I'm working on (in minix), I'm interested in the posix
standard definition. Could somebody please point me to a (preferably)
machine-readable format of the latest posix rules? Ftp-sites would be
nice.

As an aside for all using gcc on minix - the new version (1.40) has been
out for some weeks, and I decided to test what needed to be done to get
it working on minix (1.37.1, which is the version you can get from
plains is nice, but 1.40 is better :-).  To my surpice, the answer
turned out to be - NOTHING! Gcc-1.40 compiles as-is on minix386 (with
old gcc-1.37.1), with no need to change source files (I changed the
Makefile and some paths, but that's it!).  As default this results in a
compiler that uses floating point insns, but if you'd rather not,
changing 'toplev.c' to define DEFAULT_TARGET from 1 to 0 (this is from
memory - I'm not at my minix-box) will handle that too.  Don't make the
libs, use the old gnulib&libc.a.  I have successfully compiled 1.40 with
itself, and everything works fine (I got the newest versions of gas and
binutils at the same time, as I've heard of bugs with older versions of
ld.c).  Makefile needs some chmem's (and gcc2minix if you're still using
it).

                Linus Torvalds          [email blocked]

PS. Could someone please try to finger me from overseas, as I've
installed a "changing .plan" (made by your's truly), and I'm not certain
it works from outside? It should report a new .plan every time.

First Linux announcement:

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 20:57:08 GMT

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones.  This has been brewing
since april, and is starting to get ready.  I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want.  Any suggestions
are welcome, but I won't promise I'll implement them :-)

                Linus (torva... at kruuna.helsinki.fi)

PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
It is NOT protable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that's all I have :-(.

From: Jyrki Kuoppala [email blocked]
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 Aug 91 23:44:50 GMT

In article Linus Benedict Torvalds writes:

>I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
>This implies that I'll get something practical within a few months, and
>I'd like to know what features most people would want.  Any suggestions
>are welcome, but I won't promise I'll implement them :-)

Tell us more!  Does it need a MMU?

>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>It is NOT protable (uses 386 task switching etc)

How much of it is in C?  What difficulties will there be in porting?
Nobody will believe you about non-portability ;-), and I for one would
like to port it to my Amiga (Mach needs a MMU and Minix is not free).

As for the features; well, pseudo ttys, BSD sockets, user-mode
filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
window size in the tty structure, system calls capable of supporting
POSIX.1.  Oh, and bsd-style long file names.

//Jyrki

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 26 Aug 91 11:06:02 GMT

In article Jyrki Kuoppala writes:
>> [re: my post about my new OS]

>Tell us more!  Does it need a MMU?

Yes, it needs a MMU (sorry everybody), and it specifically needs a
386/486 MMU (see later).

>>PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
>>It is NOT protable (uses 386 task switching etc)

>How much of it is in C?  What difficulties will there be in porting?
>Nobody will believe you about non-portability ;-), and I for one would
>like to port it to my Amiga (Mach needs a MMU and Minix is not free).

Simply, I'd say that porting is impossible.  It's mostly in C, but most
people wouldn't call what I write C.  It uses every conceivable feature
of the 386 I could find, as it was also a project to teach me about the
386.  As already mentioned, it uses a MMU, for both paging (not to disk
yet) and segmentation. It's the segmentation that makes it REALLY 386
dependent (every task has a 64Mb segment for code & data - max 64 tasks
in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

It also uses every feature of gcc I could find, specifically the __asm__
directive, so that I wouldn't need so much assembly language objects.
Some of my "C"-files (specifically mm.c) are almost as much assembler as
C. It would be "interesting" even to port it to another compiler (though
why anybody would want to use anything other than gcc is a mystery).

Unlike minix, I also happen to LIKE interrupts, so interrupts are
handled without trying to hide the reason behind them (I especially like
my hard-disk-driver.  Anybody else make interrupts drive a state-
machine?).  All in all it's a porters nightmare.

>As for the features; well, pseudo ttys, BSD sockets, user-mode
>filesystems (so I can say cat /dev/tcp/kruuna.helsinki.fi/finger),
>window size in the tty structure, system calls capable of supporting
>POSIX.1.  Oh, and bsd-style long file names.

Most of these seem possible (the tty structure already has stubs for
window size), except maybe for the user-mode filesystems. As to POSIX,
I'd be delighted to have it, but posix wants money for their papers, so
that's not currently an option. In any case these are things that won't
be supported for some time yet (first I'll make it a simple minix-
lookalike, keyword SIMPLE).

                Linus [email blocked]

PS. To make things really clear - yes I can run gcc on it, and bash, and
most of the gnu [bin/file]utilities, but it's not very debugged, and the
library is really minimal. It doesn't even support floppy-disks yet. It
won't be ready for distribution for a couple of months. Even then it
probably won't be able to do much more than minix, and much less in some
respects. It will be free though (probably under gnu-license or similar).

From: Alan Barclay [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 27 Aug 91 14:34:32 GMT

In article Linus Benedict Torvalds writes:

>yet) and segmentation. It's the segmentation that makes it REALLY 386
>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

Is that max 64 64Mb tasks or max 64 tasks no matter what their size?
--
  Alan Barclay
  iT                                |        E-mail : [email blocked]
  Barker Lane                       |        BANG-STYLE : [email blocked]
  CHESTERFIELD S40 1DY              |        VOICE : +44 246 214241

From: Linus Benedict Torvalds [email blocked]
Newsgroups: comp.os.minix
Subject: Re: What would you like to see most in minix?
Date: 28 Aug 91 10:56:19 GMT

In article Alan Barclay writes:
>In article Linus Benedict Torvalds writes:
>>yet) and segmentation. It's the segmentation that makes it REALLY 386
>>dependent (every task has a 64Mb segment for code & data - max 64 tasks
>>in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).

>Is that max 64 64Mb tasks or max 64 tasks no matter what their size?

I'm afraid that is 64 tasks max (and one is used as swapper), no matter
how small they should be. Fragmentation is evil - this is how it was
handled. As the current opinion seems to be that 64 Mb is more than
enough, but 64 tasks might be a little crowded, I'll probably change the
limits be easily changed (to 32Mb/128 tasks for example) with just a
recompilation of the kernel. I don't want to be on the machine when
someone is spawning >64 processes, though :-)

                Linus

Early Linux installation guide:

		Installing Linux on your system

Ok, this is a short guide for those people who actually want to get a
running system, not just look at the pretty source code :-). You'll
certainly need minix for most of the steps.

	0.  Back up any important software.  This kernel has been
working beautifully on my machine for some time, and has never destroyed
anything on my hard-disk, but you never can be too careful when it comes
to using the disk directly.  I'd hate to get flames like "you destroyed
my entire collection of Sam Fox nude gifs (all 103 of them), I'll hate
you forever", just because I may have done something wrong.

Double-check your hardware.  If you are using other than EGA/VGA, you'll
have to make the appropriate changes to 'linux/kernel/console.c', which
may not be easy.  If you are able to use the at_wini.c under minix,
linux will probably also like your drive.  If you feel comfortable with
scan-codes, you might want to hack 'linux/kernel/keyboard.s' making it
more practical for your [US|German|...] keyboard.

	1.  Decide on what root device you'll be using.  You can use any
(standard) partition on any of your harddisks, the numbering is the same
as for minix (ie 0x306, which I'm using, means partition 1 on hd2).  It
is certainly possible to use the same device as for minix, but I
wouldn't recommend it.  You'd have to change pathnames (or make a chroot
in init) to get minix and linux to live together peacefully.

I'd recommend making a new filesystem, and filling it with the necessary
files: You need at least the following:

	- /dev/tty0		(same as under minix, ie mknod ...)
	- /dev/tty		(same as under minix)
	- /bin/sh		(link to bash)
	- /bin/update		(I guess this should be /etc/update ...)

Note that linux and minix binaries aren't compatible, although they use
the same (gcc-)header (for ease of cross-compiling), so running one
under the other will result in errors.

	2.  Compile the source, making necessary changes into the
makefiles and linux/include/linux/config.h and linux/boot/boot.s.  I'm
using a slightly hacked gcc-1.40, to which I have added a -mstring-insns
flag, which uses the i386 string instructions for structure copy etc.
Removing the flag from all makefiles should do the trick for you.

NOTE! I'm using -Wall, and I'm not seeing many warnings (2 I think, one
about _exit returning although it's volatile - it's ok.) If you get
more warnings when compiling, something's wrong.

	3.  Copy the resultant code to a diskette of the right type.
Use 'cp Image /dev/PS0' or equivalent.

	4.  Boot with the new diskette.  If you've done everything right
(and if *I've* done everything right), you should now be running bash as
root.  You can't do much (alias ls='echo *' is a good idea :-), but if
you do run, most other things should work.  I'd be happy to hear from
anybody that has come this far - and I'll send any ported binaries you
might want (and I have).  I'll also put them out for ftp if there is
enough interest.  With gcc, make and uemacs, I've been able to stop
crosscompiling and actually compile natively under linux.  (I also have
a term-emu, sz/rz, sed, etc ...)

The boot-sequence should start with "Loading system...", and then a
"Partition table ok" followed by some root-dev info. If you forget to
make the /dev/tty0-character device, you'll never see anything but the
"loading" message. Hopefully errors will be told to the console, but if
there are problems at boot-up there is a distinct possibility that the
machine just hangs.

	5.  Check the new filesystem regularly with (minix) fsck.  I
haven't got any errors for some time now, but I cannot guarantee that
this means it will never happen.  Due to slight differences in 'unlink',
fsck will report "mode inode XXX not cleared", but that isn't an error,
and you can safely ignore it (if you don't like it, do a fsck -a every
once in a while).  Minix "restore" will not work on a file deleted with
linux - so be extra careful if you have a tendency to delete files you
don't really want to.

Logging out from the "login-shell" will automatically do a sync, and
will leave you hanging without any processes (except update, which isn't
much fun), so do the "three-finger-salute" to restart dos/minix/linux or
whatever.

	6.  Mail me and ask about problems/updates etc.  Even more
welcome are success-reports (yeah, sure), and bugreports or even patches
(or pointers to corrections).

NOTE!!! I haven't included diffs with the binaries I've posted for the
simple reason that there aren't any - I've had this silly idea that I'd
rather change the OS than do a lot of porting.  All source to the
binaries can be found on nic.funet.fi under /pub/gnu or /pub/unix.
Changes have been to makefiles or configuration files, and anybody
interested in them might want to contact me. Mostly it's been a matter
of adding a -DUSG to makefiles.

The one exception if gcc - I've made some hacks on it (string-insns),
and have got it (with the gracious help of Bruce Evans) to correctly
emit software floating point. I haven't got diffs to that one either, as
my hard-disk is overflowing and I cannot accomodate both originals and
changes, but as per the GNU copyleft I'll make them available if
someone wants them. I hope nobody want's them :-)

		Linus		[email blocked]

README about early pictures of Linus Torvalds:

I finally got these made, and even managed to persuade Linus into
allowing me to publish three pictures instead of only the first one.
(He still vetoes the one with the toy moose... :-)

linus1.gif, linus2.gif, linus3.gif

        Three pictures of Linus Torvalds, showing what a despicable
        figure he is in real life.  The beer is from the pre-Linux
        era, so it's not virtual.

In nic.funet.fi: pub/OS/Linux/doc/PEOPLE.

--
Lars.Wirzenius [email blocked]  (finger wirzeniu at klaava.helsinki.fi)
   MS-DOS, you can't live with it, you can live without it.

2007/07/26 Slashdot: Dell is asking for Better ATI drivers on Linux!

Thursday, July 26th, 2007

Now here is a story that is good news for Linux:

Open Source IT writes “According to a presentation at Ubuntu Live 2007, Dell is working on getting better ATI drivers for Linux for use in its Linux offerings. While it is not known whether the end product will end up as open source, with big businesses like Google and Dell now behind the push for better Linux graphics drivers, hopefully ATI will make the smart business decision and give customers what they want.”

From the original story:

Dell knows it won’t happen overnight, but alongside wanting to ship audio/video codecs, Intel Wireless 802.11n support for Linux, Broadcom wireless for Linux, and notebooks and desktops with Compiz Fusion enabled, Dell would like to see improved ATI Linux drivers. At Ubuntu Live 2007, Amit Bhutani had a session on Ubuntu Linux for Dell Consumer Systems, where he shared a slide with Dell’s “area of investigation”, which he said is essentially their Linux road-map. Amit also stated that the NVIDIA 2D and 3D video drivers were “challenges in platform enablement”. Dell wants to offer ATI Linux systems, but first the driver must be improved for the Linux platform (not necessarily open-source, but improved). Dell currently ships desktop Linux systems with Intel graphics using their open-source drivers, as well as NVIDIA graphics processors under Linux. Amit went on to add that new Dell product offerings and availability in other countries will come later this summer.

This is a great sign! I hope this works out for Linux and for Dell.

2007/07/25 Slashdot.org: Virtual Containerization

Wednesday, July 25th, 2007

Here is a story from Slashdot.org:

Virtual Containerization

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: “It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about ‘containerization,’ to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales.”

Here is the full story from Interop News:

It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about “containerization,” to employ a really ugly but useful word.

Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware’s roaring success as one of the reasons behind last year’s slowdown in server hardware sales. After all, if a copy of VMware ESX lets you replace four or five boxes with just one, some hapless hardware vendor has got to be left holding the bag, right? Ever since this virtualization-kills-the-hardware-star meme got started, Wall Street has been in something of a funk about server hardware stocks.

But if the meme is really true, why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn’t? There has got to be a little head scratching going on over in Framingham just now.

The obvious explanation for Barrett’s investment (which will net Intel a measly 2.5% of VMware’s shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware. This thesis was forcefully articulated on the day of the Intel announcement by the CEO of software startup rPath, Billy Marshall, in a clever blog post that also – naturally – makes the case for his own product. I had a chat with Marshall a few days ago and found what he had to say quite interesting.

Simply put, Marshall’s thesis is that “sales of computers, especially server computers, are currently constrained by the complexity of software.” Anything that makes that complexity go down will make hardware sales (and, one presumes, operating systems sales) go up. Right now it’s so blinking hard to install applications, operating systems and their middleware stacks that once people get an installation up and running they don’t want to touch it again for love or money. But if you install your app stack on a virtual machine – VMware, Xen, or Microsoft – then you can save the result as simple image file. After that, you’re free to do whatever you want with it. You can archive the image on your SAN and deploy it as needed. You can let people download it from your web site. Or you can put it on a CD and mail it to your branch office in Timbuktu or Oshkosh. Anyone will be able to take this collection of bits and install it on the appropriate virtual machine in their own local environment without having to undergo the usual hell of installation and configuration.

This idea of using virtual machine images as a distribution mechanism for fully integrated application stacks is not new. VMware has a Virtual Appliance Marketplace with hundreds of apps available for download. You won’t find Oracle 10g or BEA WebLogic or MySAP here, at least not yet. But you will find plenty of stuff from open source projects and smaller commercial ISVs (independent software vendors). Microsoft also has a download page for pre-configured VHD images of most of its major pieces of server software, including SQL Server 2005 and Exchange Server 2007.

So what does rPath add to the mix? Although Marshall has cleverly hitched his pitch to the virtualization bandwagon, he is actually in a somewhat different business, namely that of providing a roll-your-own-Linux toolkit and update service for ISVs. Marshall likes to recount the following anecdote to explain his value-add. When open source e-mail vendor Zimbra wanted to package its software in VMware disk image format using RHEL clone Centos as the OS, the install snapshot produced a monstrous 2 gigabyte file. You could fit that on a DVD, but this is clearly not in CD territory anymore, and maybe not so convenient to download over a garden variety DSL connection either. The problem is that a fully populated OS drags along huge excess code baggage that a typical application just doesn’t need. In the case of Zimbra, the excess added up to many hundreds of megabytes.

rPath’s solution is to use its own stripped-down and customized Linux distribution. It provides its own collection of Linux kernel and user space components along with a tool called rBuilder for deciding exactly which pieces are necessary to run a particular application. This is not a totally automated process – ISVs will have to roll up their sleeves and make some choices. But when the process is complete, rBuilder will generate a finished image file containing a fully integrated OS-middleware-application stack. This is what rPath calls a software appliance. The appliance can be packaged for any of the major target virtual machines, or for an actual install on raw Intel-based hardware. When Zimbra applied rBuilder to its application stack, swapping out Centos for a custom build of rPath Linux, the resulting VMware image shrank to only 350 megabytes.

In addition to eliminating installation and configuration hell for end users, rPath gives ISVs a platform similar to Red Hat Network for managing the distribution of application updates and OS patches. If rPath releases an OS patch for its version of Linux that the ISV determines is not needed by the ISV’s customers, then the patch doesn’t get distributed to them. This two-stage model is a lot more sensible than the Red Hat system of distributing all patches to everyone and then letting users discover for themselves whether a particular OS patch breaks their application.

rPath was launched at LinuxWorld last year and has already gone through a couple of version updates. Marshall didn’t come up with the vision for his company out of thin air. It’s based in large part on the insight he gained during a multi-year stint in the belly of the beast at Red Hat. In fact, a lot of his team are ex-Red Hatters. Marshall himself put in a couple of years as VP of Sales, and before that he was the guiding light behind the launch of the Red Hat Network provisioning and management platform. His CTO Erik Troan developed the Red Hat Package Manager (RPM) tool at the heart of RHEL and Fedora. Another rPath engineer, Matthew Wilson, wrote Red Hat’s Anaconda installation tool.

These people obviously know a thing or two when it comes to building and maintaining a Linux distribution. Their product concept is ingenious. The question is whether it’s big enough to make a stand-alone company. Right now it’s too early to tell.

There are a couple of real drawbacks to rPath from the end user’s point of view. One is that only Linux stacks are supported. If you are running a Microsoft stack, you’re out of luck. To be fair, you can run your rPath stack on top of Microsoft Virtual Server, and no doubt on the future Viridian hypervisor too. But if you were using just the unadorned VMware image format as your container rather than rPath you could run pretty much any OS stack you pleased.

Another drawback is that even in a pure Linux context, an rPath software appliance can’t use a particular piece of commercial software unless the ISV is an rPath customer. rPath’s basic business model is to sell tools and platforms to ISVs. The rPath appliances available now are mostly pure open source stacks, some commercial and some community. But there is no Oracle database or BEA or IBM middleware, which is a pretty big limitation in the real world of corporate data centers. Marshall does say he is involved in “deep discussions” with BEA, so maybe there will be some movement on this front at some point in the future. But for now it’s wait and see.

What it all boils down to is how credible the rPath Linux distribution can be in the eyes of the ISVs who consider using it. rPath politely avoids using the word “port,” but that is really what an ISV has to do to get its application running on rPath. An ISV that can afford to drop the other platforms it supports and serve its products up only on rPath will reap the full benefits of the system. But big commercial ISVs with big legacy installed bases won’t be able to take such a radical step. Marshall’s spin on this delicate issue seems to be that enterprise ISVs should leverage the end user ease-of-installation benefits of its platform to expand into Small and Medium Business markets where tolerance for complexity is much lower. Of course one could take this argument a step further – which the company for the moment is not willing to do – and say that rPath’s natural home is in the embedded market, just like Debian founder Ian Murdock’s now defunct Progeny (don’t worry about Ian, he landed at Sun).

At the end of the day, I have to wonder whether rPath wouldn’t make itself a lot more credible in the eyes of its ISV target customers by becoming part of a larger software organization. Red Hat obviously comes to mind as a possible home, assuming Red Hat management could swallow its pride enough to buy back the innovation of its ex-employees. But another possibility would be… Oracle. After all, if Larry really wants to get RHEL out of his stack, what better way to do it than to add an entirely free and unencumbered RHEL-like distro to the bottom of every Oracle stack?

Be all that as it may, there is one thing about the rPath concept that really, really intrigues me. What is to prevent Microsoft from trying this? If ISVs had a convenient way to package up highly efficient custom builds of Windows Server 2008 together with key Microsoft or third party applications for the Viridian hypervisor, the idea would be wildly popular. Will it happen? Let’s wait and see what happens after WS 2008 comes out.

Copyright © 2007, Peerstone Research Inc. All rights reserved.

That is all for today. This came out at 12:10 PM PST.

2007/07/24 Slashdot: Are Cheap Laptops a Roadblock for Moore’s Law?

Tuesday, July 24th, 2007

Here is an interesting story:

Are Cheap Laptops a Roadblock for Moore’s Law?

Is the $100 laptop out to kill the idea Moore’s Law has encouraged – that consumers should keep lusting after the fastest, most expensive laptop hardware – when a slower laptop at probably less than half the price would serve them just as well?

Here is an excerpt from Slashdot.org:

Timothy Harrington writes “Cnet.co.uk wonders if the $100 laptop could spell the end of Moore’s Law: ‘Moore’s law is great for making tech faster, and for making slower, existing tech cheaper, but when consumers realize their personal lust for faster hardware makes almost zero financial sense, and hurts the environment with greater demands for power, will they start to demand cheaper, more efficient ‘third-world’ computers that are just as effective?” Will ridiculously cheap laptops wean consumers off ridiculously fast components?”

Here is the story from CNet.co.uk

The One Laptop Per Child organisation’s XO computer, aka the $100 laptop, has just started mass production. And while Crave is happy that thousands of underprivileged African children will reap the benefits of a PC and the Internet, we can’t help but feel a little jealous — and even embarrassed.

Here we are, extolling the virtues of laptops such as the £2,000 Sony Vaio TZ, when for most users the $100 XO would be just as effective. Sure, it doesn’t have a premium badge on the lid, and its 433MHz AMD CPU won’t win any speed records, but it’ll let you surf the Web, send email, enjoy audio and video, and even, as some Nigerian children have discovered, allow you to browse for porn.

Think about your own PC usage — does it honestly include anything more demanding than Facebook stalking, laughing at idiots on YouTube or hitting the digg button underneath the latest lolcat? Can you justify spending £2,000 when a machine costing £50 will do exactly the same thing? Crave thinks the world can learn a lot from the XO, the ClassMate PC and its ilk. These devices could change the computing world as we know it. And despite its makers saying it’s exclusive to the developing world, the XO absolutely should be brought to the West.

Since 1965, the tech world has obsessed about keeping pace with Moore’s Law — an empirical observation that computing performance will double every 24 months. Concurrently, consumers have lusted after the latest and greatest computing hardware, encouraged in part by newer, fatter, ever more demanding operating systems and applications.

Moore’s law is great for making tech faster, and for making slower, existing tech cheaper, but when consumers realise their personal lust for faster hardware makes almost zero financial sense, and hurts the environment with greater demands for power, will they start to demand cheaper, more efficient ‘third-world’ computers that are just as effective?

We think so. The amount of interest generated by the XO, the ClassMate PC, and more recently the £200 Asus Eee PC is phenomenal. Most people in the Crave office are astounded by their low price and relatively high functionality, and are finding it difficult to justify buying anything else. If you want to play the latest games, well, the latest games consoles, while power-hogs, are relatively cheap and graphically very impressive.

It’s almost poetic that the poorest nations in the world have the potential to push the Western tech industry in a new direction. Don’t get us wrong — we love fast, outlandish laptops and PCs as much as the next blog, but we’d be idiots not to show you the alternative. And what a fantastic alternative it is. We predict some very interesting, and money-saving times ahead. -Rory Reid

That’s all for today.

Ryan Orser

2007/07/23 Worldwide XPS 700 Motherboard Exchange Program Launch Date Confirmed

Monday, July 23rd, 2007

Posts will now come out at 5:00 PM PST, which is when the new day starts on ryanorser.com.

Here is the Dell XPS 700 Motherboard Exchange Program’s launch date, confirmed on July 19th, 2007: the program goes worldwide on Monday, August 13th, 2007, and the worldwide exchange will end on October 13th, 2007, so you have two months to get your upgrade kits. Here is an excerpt from Direct2Dell:

Worldwide XPS 700 Motherboard Exchange Program Launch Date Confirmed

We are pleased to announce that we will launch the XPS 700 Motherboard Exchange Program worldwide on Monday, August 13, 2007.

On August 13, we will launch a website for XPS 700 and 710 customers to register for the program and to tell us what options you prefer. XPS 700 customers will be able to choose a Hardware Kit at no charge, and can also opt for on-site installation service at no charge. XPS 700 customers will also have the option of purchasing a quad-core QX6700 processor for 25% off our Electronics & Accessories price (pricing may vary depending on the time you order). XPS 710 customers will have the option of purchasing a Hardware Kit and on-site installation service. Pricing and program offering details may vary by region and will be outlined in future posts.

This program will expire on October 13, 2007. All upgrade requests must be submitted no later than midnight Central Standard Time October 13, 2007. For more information, here’s the very first post where we outlined the details of this program, and here’s the link to the XPS 700 Motherboard Exchange Program category that contains all the information we’ve shared so far.

Between now and August 13, I’ll plan to publish more details about how to prepare, how the process will work, pricing details for XPS 710 customers, and more. In the meantime, we’ll continue to prepare for the rollout of this global program.

We appreciate your continued patience.

Published Thursday, July 19, 2007 11:30 PM
by Lionel Menchaca, Digital Media Manager

That post went up on Direct2Dell on July 19th, 2007. Have a nice day.

Ryan

2007/07/23 SSH Tricks

Sunday, July 22nd, 2007

Here are some cool tricks for SSH! They should be great for everyone who uses SSH on their computers and servers. The article looks alright, and I am hoping it helps more people than the few who have been viewing my blog.

Here is an excerpt from http://polishlinux.org/apps/ssh-tricks/# :

SSH (secure shell) is a program enabling secure access to remote systems. Not everyone is aware of other powerful SSH capabilities, such as passwordless login, automatic execution of commands on a remote system or even mounting a remote folder using SSH! In this article we’ll cover these features and much more.
Author: Borys Musielak

SSH works in a client-server mode. It means that there must be an SSH daemon running on the server we want to connect to from our workstation. The SSH server is usually installed by default in modern Linux distributions. The server is started with a command like /etc/init.d/ssh start. It uses the communication port 22 by default, so if we have an active firewall, the port needs to be opened. After installing and starting the SSH server, we should be able to access it remotely. A simple command to log in as user1 to the remote_server (identified by a domain name or an IP address) looks like this:

ssh user1@remote_server

After entering the password to access the remote machine, a changed command prompt should appear, looking similar to user1@remote_server:~$. If this is the case, it means that the login was successful and we’re working in a remote server environment now. Any command we run from this point on will be executed on the remote server, with the rights of the user we logged in with.

SCP – secure file copying

SCP is an integral part of the OpenSSH package. It is a simple command that lets you copy any file or folder to or from a remote machine using the SSH protocol. The SSH+SCP duo is a great replacement for the insecure FTP protocol that is still widely used on the Internet today. Not everyone is aware, though, that all passwords sent over FTP travel across the network as plain text (making it dead easy for crackers to grab them) – SCP is a much more reliable alternative. The simplest usage of SCP looks like the following example:

scp file.txt user1@remote_server:~/

This will copy the local file.txt to the remote server and put it in the home folder of user1. Instead of ~/, a different path can be supplied, e.g. /tmp, /home/public, or any other path we have write access to.

In order to copy a file from a remote server to the local computer, we can use another SCP syntax:

scp user1@remote_server:~/file.txt .

This will copy the file file.txt from the home folder of user user1 on the remote system to the local folder (the one we are currently in).

Other interesting SCP options:

  • -r – to copy folders recursively (including subfolders),
  • -P port – to use a non-standard port (the default is 22) – of course this option is only needed if the server listens on a non-standard port. It can be helpful when connecting from a firewall-protected network: setting the SSH server to listen on port 443 (the port used for secure HTTP connections) is the easiest way to bypass an administrator’s restrictions. See the example below.
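
For example, copying a whole folder recursively to a server listening on port 443 could look like this (the host and paths are just placeholders):

scp -r -P 443 /home/user1/project user1@remote_server:~/backup/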

GUIs for SCP

If we do not like the console and we prefer a GUI (graphical user interface), we can use a graphical (or pseudo-graphical) SCP client. Midnight Commander is one of the programs that provides an SCP client (the shell link option). Nautilus and Konqueror are SCP-capable file managers as well. Entering ssh://user1@remote_server:~/ in the URI field results in a secure shell connection to the remote system. The files can then be copied just as if they were available locally.
In the MS Windows environment, we have a great app called WinSCP. The interface of this program looks very much like Total Commander. By the way, there is a plug-in allowing for SCP connections from TC as well.

SSH without passwords – generating keys

Entering passwords upon every SSH connection can be annoying. On the other hand, an unprotected remote connection is a huge security risk. The solution to this problem is authorization using a private-public key pair.

The pair of keys is usually generated using the ssh-keygen command. Below is a sample run of such key generation. Either RSA or DSA keys can be used.

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user1/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user1/.ssh/id_rsa.
Your public key has been saved in /home/user1/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

When the program asks for the key password, we should just press ENTER – this way, a passwordless key will be created. Remember that having a passwordless key is always a security hole (in simple words, it downgrades your remote system’s security to the security of your local system), so do it at your own risk. When ssh-keygen finishes its work, you can see that two keys have been generated. The private key lands in /home/user1/.ssh/id_rsa and we should never make it public. The public key appears in /home/user1/.ssh/id_rsa.pub and this is the one we can show to the entire world.

Now, if we want to access a remote system from our local computer without passwords (only using the keys), we have to add the information about our public key to the authorized_keys file located in ~/.ssh folder on the remote system. This can be done using the following commands:

$ scp /home/user1/.ssh/id_rsa.pub user1@remote_server:~/
$ ssh user1@remote_server
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys

The third command is obviously executed on the remote server. After this operation, no action performed on the remote server over SSH will require a password, which will certainly make our work easier.

Notice that if you need passwordless access from the remote server to the local one, a similar procedure has to be performed in the other direction. Authorization using keys is a one-way process: the machine holding the public key can verify the client holding the private key, not vice versa.
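
As a side note, many OpenSSH installations also include a small helper called ssh-copy-id that performs the copy-and-append steps above in one go (exact behaviour may vary between versions):

$ ssh-copy-id user1@remote_server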

Executing commands on a remote system

Well, now that we can log into the remote OS without a password, why wouldn’t we want to execute commands remotely as well? There are many useful applications for this, especially for commands that have to run on a daily basis and could not be automated before because the password had to be entered manually (or stored as plain text, which is not very secure).

One interesting case is a “remote alert”. Let’s say that we have some crucial process running on the remote system, e.g. a website running on an Apache server. We want to be warned when the system runs out of resources (e.g. disk space is getting short or the system load is too high). We could obviously send an e-mail in such cases. But additionally, we can execute a remote command which plays a warning sound on our local OS! The code for such an event would look something like this:

ssh user1@local_server 'play /usr/share/sounds/gaim/arrive.wav'

This command, executed from a script on the remote server, would log user1 into local_server without a password (the machine we’re usually working on) and play a wave file with the play command (which is usually available in Linux). The actual condition that triggers this remote command should obviously be specified in a script – this isn’t meant to be a scripting course, just a way to execute remote commands over passwordless SSH :).
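
Purely as an illustration (the threshold, filesystem and sound file below are made-up values for this sketch, not part of the original article), a tiny script run periodically on the remote server could tie such a check to the remote alert:

#!/bin/sh
# Warn over SSH when the root filesystem is more than 90% full.
USAGE=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$USAGE" -gt 90 ]; then
    ssh user1@local_server 'play /usr/share/sounds/gaim/arrive.wav'
fi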

X11 forwarding – running graphical apps remotely

One of the least known functions of SSH is X protocol forwarding. This enables us to run almost every X application remotely! It’s enough to connect to the remote server using the -X option:

ssh -X user1@remote_server

and the display of every X application executed from now on will be forwarded to our local X server. We can enable X11 forwarding permanently by editing the /etc/ssh/ssh_config file (the relevant option is ForwardX11 yes). Of course, for this to work, the remote SSH server needs to allow X11 forwarding as well; the /etc/ssh/sshd_config file is responsible for that. This option is, however, enabled by default in most Linux distros.
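
For reference, the relevant lines would look roughly like this (standard OpenSSH option names; the exact defaults depend on your distribution):

# /etc/ssh/ssh_config (client side)
Host *
    ForwardX11 yes

# /etc/ssh/sshd_config (server side)
X11Forwarding yes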

If we just need to execute one single command, we can use the syntax we learned before:

ssh -X user1@remote_server 'psi'

– this will execute PSI instant messenger on the remote server, passing the display to the local screen.

Of course the speed of applications executed remotely depends mostly on the network connection speed. It works almost flawlessly in local networks (even things like forwarding Totem playing a DivX movie). Over an Internet connection, a DSL link seems to be enough to get apps like Skype or Thunderbird working quite well remotely.

Notice that it’s also possible to connect to the remote server without X11 forwarding enabled, export the DISPLAY variable so that it points to the local machine, and then run the X application. This way, the application is displayed remotely using the generic X server functionality. SSH security does not apply in this case, since this kind of configuration has nothing to do with SSH. Depending on the configuration of the local X server, remote X applications may need to be explicitly authorized. This is usually done with the xhost command. For example, xhost +hostname temporarily accepts all remote applications from the specified hostname. If we plan to use this option regularly, a more secure X server configuration is recommended.
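
A rough example of that non-SSH approach (hostnames are placeholders; remember that plain X traffic is unencrypted):

# on the local machine: allow X clients coming from the remote host
xhost +remote_server

# on the remote machine: point X clients at the local display and start one
export DISPLAY=local_machine:0
psi &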

SSHFS – mounting a remote folder

Working on files located on a remote server via SSH can be quite annoying, especially when we often need to copy files in both directions. Using the fish:// protocol in Midnight Commander or Konqueror is only a partial solution – fish tends to be much slower than pure SSH and often slows down even more while copying files. The ideal solution would be the ability to mount a remote resource that is only available through SSH. The good news is that… this option has existed for a while already, thanks to sshfs and the fuse project.

Fuse is a kernel module (recently adopted into the official 2.6 kernel series) that allows unprivileged users to mount different filesystems. SSHFS is a program created by the author of fuse himself which makes it possible to mount remote folders/filesystems using SSH. The idea is very simple – a remote SSH folder is mounted as a local folder in the filesystem. From then on, almost all operations on this folder work exactly as if it were a normal local folder. The difference is that the files are silently transferred through SSH in the background.

Installing fuse and sshfs in Ubuntu is as easy as entering (as root):

# apt-get install sshfs

The only remaining step is to add the user who should be allowed to mount SSH folders to the fuse group (using a command like usermod -a -G fuse user1, or by manually editing the /etc/group file). Finally, the fuse module needs to be loaded:

# modprobe fuse

And then, after logging in, we can try to mount a remote folder using sshfs:

mkdir ~/remote_folder
sshfs user1@remote_server:/tmp ~/remote_folder

The command above will cause the folder /tmp on the remote server to be mounted as ~/remote_folder on the local machine. Copying any file into this folder will result in transparent copying over the network through the SSH connection. The same goes for editing, creating or removing files directly.

When we’re done working with the remote filesystem, we can unmount the remote folder by issuing:

fusermount -u ~/remote_folder

If we work with this folder on a daily basis, it is wise to add it to the /etc/fstab table. This way it can be mounted automatically at system boot, or mounted manually (if the noauto option is chosen) without the need to specify the remote location each time. Here is a sample entry in the table:

sshfs#user1@remote_server:/tmp /home/user1/remote_folder/ fuse    defaults,auto    0 0

If we want to use fuse and sshfs regularly, we need to add a fuse entry to the /etc/modules file. Otherwise we would have to load the module manually each time we want to use it.
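
In other words, /etc/modules simply gains one extra line (a sketch – the file lists one module name per line):

# /etc/modules: kernel modules to load at boot time
fuse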

Summary

As you can see, SSH is a powerful remote access tool. If we often need to work with remote UNIX filesystems, it’s really worth learning a few of SSH’s more powerful features and using them in practice. SSH can make your daily work much more effective and pleasant at the same time. In a follow-up article (to be published later this month) we’re going to cover another great feature of SSH: building different kinds of tunnels with port forwarding, transparent SOCKS proxying and the corkscrew tool.

You should also consider changing the port from Port 22 to Port 443. I use secure file copying (SCP) to post things to both of my websites. I also use SSH, though I am having a little trouble with it at the moment, and WinSCP for secure file copying on Windows XP. I have heard OpenSSH is good, and I hope it keeps improving. I am also trying TightVNC for my server. I would be glad to get some reviews in the comments on my blog. Good luck with this.
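
If you do move the server to port 443, the change is a single directive in /etc/ssh/sshd_config (restart the SSH daemon afterwards, and make sure nothing else, such as an HTTPS server, is already using that port):

# /etc/ssh/sshd_config
Port 443

Clients then connect with the lowercase -p option (scp uses the uppercase -P shown earlier):

ssh -p 443 user1@remote_server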

That’s all for now. This SSH tricks post also covers some SCP (secure file copying). Good luck – you will probably run into a few minor problems if you do not use Ubuntu or one of its derivatives.