About

The goal of the Linux-Society (LS, dating back to the mid-90s as a professional club and tech-mentoring group) has been a purely democratic Information Society; many of the articles here are sociological in nature. The LS merged with Perl/Unix of NY to form a multi-layered group that combined advocacy with project-oriented learning by talented high school students: textbook constructivism. Linux has severe limitations such that it is useless for any computer that will, say, print or scan. It is primarily used for web servers and embedded devices such as Android (Google is heavily invested in it).

Technology is problematic. During the heyday of technology (the 1990s), it seemed to have the democratic direction Lewis Mumford said it should have in his seminal Technics and Civilization.

Today, we are effectively stuck with Windows, as Linux is poor on the desktop and has cultivated a maladaptive following. Apple is prohibitively expensive, and all other operating systems lack drivers, including Google's Android, an offshoot of Linux.

In the late 90s there was hope for new kernels such as LibOS and ExoOS that would lay their hardware bare to programs, some of which would be virtual machines such as the one Java uses. Another important player was the L4 system, a minor relation of the code underlying Apple's systems. It was highly scientific but apparently fell into the wrong hands, and has made no progress on the desktop. There is a version, "SE," that is apparently running in many cell phones on specialized telecom chips, but it is proprietary. SE's closed nature was only recently revealed, which is important because it is apparently built from publicly owned code; since it is not a "clean room" design, it may violate public-domain protections, and it most certainly violates the widely accepted social contract.

Recent attempts to join L4 development as an advocate for "the people" have been as frustrating (and demeaning) as previous attempts, with the usual attacks on self-esteem by maladaptive "hacks" being reinforced by "leadership" (now mostly university professors).

In short, this leaves us with Windows, which is quite a reversal if you have read earlier posts here. But on Windows we have free and open software development systems in the form of GTK+ (the windowing toolkit usually used on Linux) and the Minimalist GNU for Windows (MinGW and MSYS) systems. It is very likely in this direction that development should go (that is, on Windows), so that software can then be ported to a currently valid microkernel system with a driver framework that hardware developers can adapt to reuse their Windows and Apple drivers.

From a brief survey of L4, it appears that the last clean copy was the DROPS system of the early 2010s, a German effort that used the Unix-like "OS Kit" from an American university.

If we are going to be stuck on Windows, then what seems needed is a high-level approach to free and open systems integration, such as creating fully transparent mouse communication between apps so that they can work together seamlessly as a single desktop (rather than deliberately conflicting). This would be very helpful for GIMP and Inkscape, both leading graphics programs that are strong in their own special ways but suffer from an inability to interrelate easily.

Another important issue is the nature, if you can call it that, of the "geek" or "hack." Technology is formed democratically but "harvested" authoritarianly --if I can coin a term that Mumford might use. Authority is plutarchy: a combination of aristocracy and oligarchy that has been kept alive all these millennia by using, or maligning, the information society as part of the civilizing (or law-giving) process that embraces the dialectic as its method. Democratic restoration, that is, putting humanity back on an evolutionary (and not de-evolutionary) track, will, I think, require the exclusion of the "geek" from decision-making. As it is, the free/open software culture attempts to give leadership to those who write the most lines of code --irrespective of their comprehension of the real world or their relationship with normal users. We need normal people to somehow organize around common sense (rather than oligarchic rationalism) to bring to life useful and cohesive software and communications systems.

Interestingly, the most popular page on this site is about Carl Rogers' humanistic psychology, and has nothing to do with technology.




Monday, August 31, 2009

No virtue in virtualization: VMWare, Qemu, VirtualBox

When I moved my laptop to another domain, and then back again, I found that none of the emulators --VirtualBox, VMware, and Qemu-- were talking to the laptop, and only VirtualBox was talking to the outside world. My immediate conclusion was that there is no virtue in virtualization, at least the way things stand right now.

I was working with two "JeOSs" (Just Enough Operating System, yet another stupid computer acronym, pronounced "juice"): Ubuntu's and rPath's. A JeOS is just a stripped-down Linux installation for appliance and other small-server use, and to its credit, rPath predates the acronym. So do all the small Linux distributions, such as "Damn Small Linux," which has been around a long time and has its own complaints about Linux (you get the tone, I suppose).

Seeing that VirtualBox was still talking to the outside world, I went with it, and since it had the Ubuntu JeOS in it, I searched on the words "jeos ubuntu virtualbox resolv.conf" to get some clues. I found a blog, kefakya7obi, about supporting Wordpress in an emulator. For the /etc/network/interfaces file, the author (thanks!) suggests the following configuration:
auto eth0
iface eth0 inet static
address 192.168.1.5
netmask 255.255.255.0
gateway 192.168.1.254
You might notice the "auto" line in the file, which refers to the boot-up process. Without it you have to bring the interface up by hand with the "ifup eth0" command.
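For example, without the "auto" line the interface stays down after boot, and inside the guest (as root) you would do something like this -- a sketch, and the interface name may differ on your system:

ifup eth0        # bring the interface up using the stanza in /etc/network/interfaces
ifconfig eth0    # check that the address and netmask actually took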

And I was reminded not to forget resolv.conf. The network information for the Vista laptop's connection is in the icon in the interfaces section of the Vista networking windows, and I figured I would have to adjust things, but I didn't. I also ran the Vista command-window tools ipconfig, for IP information, and nslookup, for DNS information; they produced similar information, but much more quickly.
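For the record, /etc/resolv.conf only needs a nameserver line. Here is a minimal sketch, assuming (as is common on home networks) that the 192.168.1.254 gateway also answers DNS queries; substitute whatever DNS server ipconfig or nslookup report on the Windows side:

# /etc/resolv.conf
nameserver 192.168.1.254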

Update: Things stopped working again, so I went back to the default configuration in /etc/network/interfaces, and it worked. It should be noted that with all this fiddling, I have not actually gotten to where I can "wget" the digitalus and Zend software! (We do indeed live in a hell--technology--within a heaven--Nature.)
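(The default file is essentially the stock Debian/Ubuntu one -- reproduced here from memory, so treat it as a sketch -- which just asks DHCP for everything:)

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp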

Final Update (I hope): This is a funny way to write a how-to page. I started to familiarize myself with the different virtual-network concepts through the Qemu docs, and then looked into the VirtualBox PDF manual. The manual explained how no single mode works for everything: NAT is best for Internet access, but "host-only" is best for communication between guest and host.

Then I realized that VirtualBox allows several interfaces as emulated NIC cards, and reasoned that I might try running both modes on different interfaces. I tested NAT and host-only configurations separately on eth0 and eth1 in VirtualBox, and then tried them together. Trying them together failed initially, but when I started host-only first, and then added NAT on the second adapter, I got an open Internet connection with domain name service, and also good performance for a complex X11 client, Xemacs.
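The same two-adapter arrangement can be set from the Windows command line with VBoxManage instead of the GUI. A sketch only: the virtual machine name "jeos" and the host-only adapter name are my assumptions here, so adjust them to your setup:

VBoxManage modifyvm jeos --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage modifyvm jeos --nic2 nat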

The host-only interface (eth0) in the JeOS /etc/network/interfaces config file is this:

iface eth0 inet static
address 192.168.1.5
netmask 255.255.255.0
gateway 192.168.1.254
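With the NAT adapter added, the guest's second interface can simply take its settings from VirtualBox's built-in DHCP server, so alongside the stanza above the file would also carry something like the following (my assumption about how the NAT side would normally be configured, not copied from the working machine):

auto eth1
iface eth1 inet dhcp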

Still... it may not work in practice, and I will be testing it. But for the moment (at nearly 2am) this looks like a good solution. I checked whether this was an original suggestion by Google-searching the words "virtualbox two interfaces nat host only" and found only trouble tickets and purely hypothetical examples. It seems I am the first offering this as a way to create a WWW development environment, which really seems odd to me, knowing that Linux stands purely as a server and that the only practical desktop solution (for me and most people) is still Windozer.

With time... it is working pretty well. There was one instance in which it did not come up right, but the Windows components are probably still a work in progress, and things have to be just right for it to configure properly -- hence a possible need for a couple of reboots.



Virtualization Performance
VMware is reputed to be the best of the best, but a table of benchmarks by a Linux gamer shows that these three emulation systems perform essentially the same in every category.
                          VirtualBox     Qemu    VMware Player
CPU:
  DhryStone ALU (MDIPS)      5,716      5,988        5,711
  WhetStone FPU (MWIPS)      4,189      4,649        4,401
(thx2 http://www.linux-gamers.net/smartsection.item.56/virtualbox-vs-qemu.html :)

I installed the X11 window server Xming and allowed access from all hosts in the "launch" window (not necessarily a secure way to go, but the launch window has no granular configuration for specifying hostnames), and I was very happy to see an rxvt window generated from the Linux living in the emulator. I was also excited to be able to cut and paste text between Windows and Linux's rxvt. You have to "export DISPLAY=192.168.56.1:0.0" --using the host address from the ipconfig command on the laptop-- to direct the windowing instructions from the Linux in emulation to the X server on the laptop.
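In other words, inside the guest the sequence is roughly this (a sketch; 192.168.56.1 is VirtualBox's usual host-only address for the Windows side -- confirm yours with ipconfig):

export DISPLAY=192.168.56.1:0.0    # send X clients to the Xming server on the laptop
rxvt &                             # any X program started now should open a window on the Windows desktop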

Then I tried an X Windows editor: Xemacs. It started but failed to perform after starting, so I won't be running any fancy IDEs, such as Eclipse, at least with this arrangement. Perhaps I am short on memory (though all this GNU/Linux configuration came back to me quickly ;) ). I have a gigabyte of RAM coming in the mail: eBay for $3.00 plus shipping.

Perhaps WWW-based program-editing tools, if they exist in a usable form, will suffice, though they usually don't (this one on Blogspot, for instance, doesn't).

Why virtualization is important to me
I am trying to create an environment on my laptop that will allow me to access Linux from Vista, the system I am stuck with because of the laptop builder, Sony. My strategy is to run a LAMP and Zend Framework appliance in an emulator (a more realistic word for what is called virtualization), and connect to it with X Windows.

Connection to the Laptop
The connection to the laptop is important so as to have a decent window to work in, or possibly to use with editors such as Emacs or an IDE such as Zend Studio. I have found that there are a lot of problems with Windows LAMP installations such as WAMP; it seems to be a choice between learning how to run virtualization on Windows and debugging WAMP. Building in a Linux environment is helpful because nearly all web servers are Linux, and I have found that migration from Windows to Linux is difficult enough to take up someone's time full-time.

So what is the problem?
Since the problem is with the virtual "network" within the laptop, Qemu seems to be the best choice, or perhaps path, because Qemu seems to offer the most network control. It is unfortunate that the free-software community has not resolved and documented solutions to this obvious issue for the two open emulators, Qemu and VirtualBox, or that community members are not sharing the details of their solutions. They just give hints that the problem can be solved but no actual script code. Perhaps they feel that you should suffer as they have; I actually study this phenomenon in the open community as part of my work on the Information Society.
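To give one concrete example of that network control, Qemu (in its versions of this era) could combine its user-mode NAT networking with a port redirect so the host can reach a server running in the guest. A sketch only -- the image name and port numbers here are invented for illustration:

qemu -hda jeos.img -m 256 -net nic,model=e1000 -net user -redir tcp:8080::80

The -redir option forwards the laptop's port 8080 to the guest's web server on port 80, so a browser on the Windows side can hit http://localhost:8080 without any bridging at all.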

Other alternatives
The other path is simply to work on Linux full-time, or in a dual-boot machine. The problem here is that more time has to be invested in making Wifi work on Linux, as the Linux community has made hardware support nearly impossible for the manufacturing "community" by adhering to a monolithic model rather than migrating to a microkernel. If you remember, Linux was born out of a debate between Linus and a professor named Andrew Tanenbaum. Tanenbaum's system, Minix, was (or is) a microkernel, as is, at least in its roots, every other major system: Microsoft's NT line, Apple's OS X (built on Mach), and of course the up-coming L4 microkernel foundation.

L4 Microkernel as a virtualization solution, and as a solution period
On the topic of L4, the approach is to run systems on top of the L4 microkernel, rather than emulate whole PCs as processes, which, if you think about it, is a pretty lame way to do things.

As of a few years ago, L4 reportedly ran Linux on top of its microkernel faster than Linux runs as an independent monolith, which implies that not only Windows but Linux too falls short. So much for Linus' fame!

The obvious thing to do is, of course, to look at the needs of the world of systems users -- something like 6 billion members of the Information Society -- and design a system that suits it as an aggregation of individuals, families, and communities.

Thinman is a model I created nearly a decade ago (before my whole industry got shipped to Bhopal, India !?!?) in which I applied knowledge gleaned from the Perl CPAN distribution system to my experience managing large networks, as if each of these networked systems, and their supporting network servers, had Perl VMs that received both data and the code instructions to act on that data, or that collected data and formed it into a generic complex-structure binary format (and not deliberately crippled XML). The Perl community was supposedly building a VM -- the most brilliant design imaginable -- which was to be the basis of my Thinman design, but it never came to fruition. I attempted to explain the underlying problem in my article here, "Linux and Perl, Both Hands Tied."