Two weeks ago, I asked for reader aid in getting documentation on early-1990s System VR4 Unix, to help understand what can be done with Linux today that could not be done with AT&T System VR4. The question was naive, and I need more help (and aspirin) in struggling toward a better answer than I've reached so far. In fact, because my interest was originally sparked by Linus Torvalds' comment that he has nothing to learn from Solaris, the comparison should have been to the SunOS 5.0 kernel from 1992.
You wrote: "On the other hand, this variation on the main question also raises new issues because many of the changes made to process and memory management between the 2.4 and 2.6 Linux kernels look a bit artificial -- meaning that they don't seem to be direct continuations of code evolution up to 2.4 and thus raise the suspicion that the SCO/IBM lawsuit might be having some unexpected design consequences."
Why would you prefer that explanation over more plausible ones? It seems you have not been browsing the relevant kernel development discussions (even without wading through the voluminous kernel mailing list, useful summaries are at lwn.net and www.kerneltraffic.org). In short, the memory management code of earlier Linux needed updating around version 2.4, and several improved implementations were tried, iterated, and sometimes discarded (even during the stable 2.4 series), accompanied by healthy debate on the mailing list. That is how open source development works. Optimal memory management in the kernel is a VERY hard problem, which explains the intense activity surrounding it. And to show just how off the mark your speculation is: SCO's lawsuits started AFTER the memory-manager mayhem in 2.4, so they can hardly have affected the developers...
You must be paid by SCO. The difference between two OSs is not only a set of features but, above all, their implementation. What matters is HOW it's done, not just WHETHER it's done.
On the issue of threading on x86... Yes, it doesn't take advantage of the hardware, since the hardware doesn't exist on that platform. This, however, is not as big a problem as you think. First, Linux can be compiled for damn near anything in existence, so on systems that can take advantage of it, you can use it at its full potential. Otherwise, it ends up acting like virtual threads, which is important. In a non-threaded system, you run one application at a time, no more. Want to play music in the background? Tough luck; that can't coexist with anything else on a single-threaded system unless it is the only thing running, aside from the OS itself.
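The music-in-the-background point can be illustrated with a minimal sketch (mine, not from the original letter; Python's threading module stands in for whatever OS facility is actually used). The "music player" and the main application each run on their own thread, so neither has to wait for the other to finish:

```python
import threading
import time

results = []  # shared log of what ran; appends from both threads interleave

def play_music():
    # Simulated background task, e.g. a music player.
    for i in range(3):
        results.append(f"note {i}")
        time.sleep(0.01)

def main_work():
    # Simulated foreground application doing its own work.
    for i in range(3):
        results.append(f"work {i}")
        time.sleep(0.01)

music = threading.Thread(target=play_music)
music.start()        # music begins in the background...
main_work()          # ...while the foreground app keeps running
music.join()         # wait for the background thread to finish

print(results)
```

Without the thread, `main_work()` could not start until `play_music()` had completely finished; with it, both sets of entries land in `results` concurrently.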
As a better example: as someone stuck with Windows for now, I use a telnet client (for MUDs) that supports active scripting, but MS scripting systems don't support threading models. The result is that if I want to do something relatively simple, like checking the current top-ten status of the MUD, I have to use a web browser. On dial-up it takes 2-3 minutes to load the HTML, so even though I can use MS's Inet API to load a page from a script, doing so freezes the client for those 2-3 minutes. The two processes can't simultaneously coexist and do their jobs, even though they are not directly dependent on each other, simply because the same thread is used to execute both, so one has to stop and wait for the other to finish. Early Windows 3.1 had a much simpler scheduling model with the same problem: it literally suspended applications not being used, instead of allowing parallel execution.
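The fix the letter writer is wishing for can be sketched like this (my illustration, again in Python; the 2-3 minute dial-up download is shortened to 0.1 s so the sketch runs quickly). The slow fetch is pushed onto a worker thread, and the "client" polls a queue instead of blocking, so it stays responsive the whole time:

```python
import queue
import threading
import time

def slow_fetch(out: queue.Queue) -> None:
    # Stand-in for the 2-3 minute HTML download over dial-up.
    time.sleep(0.1)
    out.put("top-ten list")

inbox = queue.Queue()
worker = threading.Thread(target=slow_fetch, args=(inbox,), daemon=True)
worker.start()  # the download happens off the main thread

ticks = 0
result = None
while result is None:
    try:
        # Poll briefly; never block the "UI" thread for long.
        result = inbox.get(timeout=0.02)
    except queue.Empty:
        ticks += 1  # the client got a chance to do other work here

print(result, ticks)
```

Each pass through the `except` branch is a moment where the single-threaded client in the letter would have been frozen; here it is free to keep handling the MUD session.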
So, if there is a question as to why threading is useful in Linux on x86, you have to ask why modern versions of Windows have it (if in a far more limited way that "can't" use multiple processors when available). The answer is simply that it isn't DOS, and we need to be able to have dozens of services and/or applications running at the same time, something that requires threading.
There are quite a few people looking forward to when 2+ processors are common, just so they can optimize things to take advantage of it too. ;)