Software that should exist #5: Suspendable Processes

Tue, 16 Mar 2010 00:44:41 +0000

So far, most OSes support Suspend to Disk. That is, I can take a snapshot of the current state of my computer and restore it later. That's a good thing when I have a lot of programs open which would otherwise have to be started again.

Some software can suspend itself, too. SBCL and many other Common Lisp implementations can dump core images which can be reloaded (in fact, under SBCL, this is the default way of creating an executable). But it would be nice to have this feature inside the OS itself, working for (mostly) all processes.

That is, I would like to be able to suspend a process and all its subprocesses while keeping everything else running, but also to be able to shut down my computer in the meantime.
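The in-memory half of this already exists as POSIX job control; here is a small sketch (Linux-specific because of /proc, and only an illustration – a real suspend-to-disk would additionally have to serialize memory, open files, and sockets):

```shell
# Freeze and thaw a process with SIGSTOP/SIGCONT (Linux, uses /proc).
sleep 300 &                                   # stand-in for an application
pid=$!
kill -STOP "$pid"                             # freeze the process
sleep 0.2                                     # give the kernel a moment
state=$(awk '{print $3}' "/proc/$pid/stat")   # 'T' means stopped
kill -CONT "$pid"                             # thaw it again
kill "$pid"                                   # clean up
echo "$state"                                 # prints T
```

To catch a process together with its subprocesses, one would start it in its own process group (e.g. via setsid) and signal the whole group with `kill -STOP -- -<pgid>`.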

This – of course – raises several problems. An application could have mmap()ed or opened some files. So if the program has opened a file exclusively, i.e. doesn't expect the file to change, then the OS would have to notice that and keep the file locked, even across shutdowns. Of course, this shouldn't really become a problem, and if something fails – well, there can always be I/O errors anyway.

The same goes for sockets and other IPC stuff. On the other hand, when you change something on a whole suspended disk image, the problem is the same – except that there you can make your whole kernel crash.

That is – of course – the user isn't allowed to break the security mechanisms of the OS, and if he does, then it's his fault.

Of course, you can already do something similar – namely use User-mode Linux or some lightweight virtualization. But having this for all processes would be better and easier.

Update: It seems I am not the only person thinking about this. There is a project called CryoPID. Thank you for your comment, Leslie.


Randomly Found Software: Zile

Thu, 04 Mar 2010 20:47:04 +0000

The „holy wars“ between the Emacs and Vi users will probably never end. I am an Emacs user, but actually, as a simple text editor, Emacs is overkill. For writing configuration files, I never got used to Vi, and well, that hasn't changed so far, though at the moment I am trying to, by using Vi whenever possible. But using Emacs for writing them is not good either, because Emacs takes its time to start up. So I mostly used nano.

Now I found GNU Zile – an Emacs-like, but small, editor. By default it has active region highlighting and some other features I will probably never use. The most important thing is that it is a powerful editor like Vi, yet in contrast to Emacs just a small text editor.

I am pretty sure I will use it more often from now on, since – well, I am used to the Emacs keybindings. I also installed Firemacs.

My first experiences with Arch Linux

Thu, 04 Mar 2010 02:43:55 +0000

As I already wrote, I want to switch to Arch Linux. Well, I am new to this distribution; since back in the days when I used SuSE, I have only used Debian derivatives on my PCs. It's not that I don't like Debian anymore – it's a great distribution – but I am interested in learning more about the underbrush of Linux.

It seems Arch Linux has a great community, but so far I cannot contribute anything, since I am completely new to this distribution. But at least I can write a few things here – about problems I ran into, tutorials I read, etc. – in the hope that this will be useful to other people.

The first time, I ran Arch inside a VirtualBox. And about the first thing that happened was that it crashed when trying to start X11 – I had forgotten to install and run hal. That's nice. Setting DAEMONS in /etc/rc.conf properly solved that problem quickly.
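For reference, the relevant line in /etc/rc.conf looks like this (the daemon list below is only an example, not my exact setup – the point is that hal has to appear, after dbus):

```shell
# /etc/rc.conf (excerpt) -- daemons are started left to right at boot
DAEMONS=(syslog-ng dbus hal network netfs crond)
```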

Installing on my MacBook Pro was not that easy, though. I chose the most complicated setup – a triple-boot system with OS X, Windows, and Arch Linux. The tutorial for this was not very helpful at that point (later it was). But since I had already installed Ubuntu in the past, I knew roughly what to do.

First, I performed a clean installation of Mac OS X. Then I ran Boot Camp, but only let it repartition the hard disk, not reboot and install Windows. I chose to give Windows 32 GiB. Then, with Disk Utility, I added a third partition between the Boot Camp partition and the OS X partition. I gave OS X 32 GiB as well, since I mainly want to run Arch Linux in the future.

Then I installed Windows. I wanted Windows 7 64-bit, but unfortunately it could not be installed directly, since the installer lacked a driver for the CD drive. So I first installed Windows Vista 64-bit and then upgraded to Windows 7 (thankfully, I can obtain Windows through MSDNAA). After doing the whole setup, I rebooted into OS X and installed rEFIt.

At this point, I rebooted into the Arch Linux CD-ROM and followed the usual /arch/setup procedure. I didn't repartition; I set up /dev/sda3 as ext3 to be mounted on /. I ignored the warning that there is no swap and no separate /boot partition. The problem is that I cannot have that many partitions – four is the limit of the MBR partition table. A helpful source is the documentation for Ubuntu.

Everything worked so far, except that the installation of the bootloader failed, claiming it could not read stage1. Trying to fix it manually didn't work. So I downloaded the newest GRUB sources and compiled them myself in the chroot on /mnt (where /dev/sda3 is mounted during the installation process), after performing a pacman -Sy and installing some dependencies (I don't remember all of them, but the configure script will tell you). Calling this grub-install gave me errors saying that installing GRUB on /dev/sda3 only works with blocklists, which are not reliable. Even though it was not recommended, I tried installing to /dev/sda – but that didn't work either. Somehow it cannot install to the MBR.

I can't tell why, but at least I now have some sort of GRUB to boot. GRUB just drops me into a shell now, but I can boot the system manually using

root (hd0,3)
linux /boot/vmlinuz26 root=/dev/sda3
initrd /boot/kernel26.img
boot

in the GRUB shell. Unfortunately, I can now only boot Windows 7 from this GRUB, using a manual chainloader. I don't know why it doesn't recognize my menu.lst (if the self-compiled GRUB is a GRUB 2, it would presumably look for a grub.cfg instead), but I don't want to mess with my bootloader configuration right now, since I don't want to break it – at least it works so far.

A comparably easy task was installing the X server. I installed hal and dbus and put them into the DAEMONS variable. Then I installed Xorg and the NVIDIA drivers (pacman -S xorg nvidia nvidia-utils), according to the Arch Linux wiki documentation. I also installed xdm and icewm, since this is the environment I want to use. Afterwards, I ran nvidia-xconfig and edited my .xinitrc to start icewm-session on X startup.
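The .xinitrc part is a one-liner (minimal sketch; a real file might set up more of the session first):

```shell
# ~/.xinitrc -- hand the X session over to IceWM
exec icewm-session
```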

Then I rebooted and started xdm. It worked. I adapted the settings of the NVIDIA driver according to the wiki documentation and restarted again.

For the Synaptics touchpad, there is excellent documentation. The customized part of my hal settings file so far is:

<merge key="input.x11_options.VertEdgeScroll" type="string">false</merge>
<merge key="input.x11_options.HorizEdgeScroll" type="string">false</merge>
<merge key="input.x11_options.TapButton1" type="string">0</merge>
<merge key="input.x11_options.TapButton2" type="string">0</merge>
<merge key="input.x11_options.TapButton3" type="string">0</merge>
<merge key="input.x11_options.MaxTapTime" type="string">0</merge>
<merge key="input.x11_options.TapClickFinger1" type="string">1</merge>
<merge key="input.x11_options.TapClickFinger2" type="string">2</merge>
<merge key="input.x11_options.TapClickFinger3" type="string">3</merge>
<merge key="input.x11_options.VertTwoFingerScroll" type="string">true</merge>
<merge key="input.x11_options.HorizTwoFingerScroll" type="string">true</merge>
<merge key="input.x11_options.CircularScrolling" type="string">true</merge>

For sound, I use the JACK sound server – basically because the performance I got with pure ALSA was bad, and because I like jackd. I installed qjackctl and adjusted the settings. The content of the generated .jackdrc with the settings that work for me is:

/usr/bin/jackd -R -P30 -p512 -m -dalsa -dhw:0 -r44100 -p256 -n3 -S

However, when using jackd, there are some additional problems. Under Debian, I remember never getting the Flash plugin to work with it. This appears not to be a problem with flashsupport-jack, an AUR package which works perfectly for me. I installed it as described in the wiki.

Of course, a few problems remain. I didn't get jackd support into all of the packages, and GRUB doesn't work right at the moment. But it is a system I can already work with.


Fri, 26 Feb 2010 03:46:41 +0000

I am now using Windows 7. And well, it's a well-designed OS (with a not-so-well-designed network configuration, but nothing is perfect); it's trivial to set up a terminal server with it and log in remotely.

But then … well … it's still Windows. It doesn't „feel“ good, just as Mac OS X doesn't „feel“ good. Ubuntu didn't „feel“ good either. I am just not „feeling“ good using these mainstream systems. So my plan is to get back to some real Linux (or maybe another Unix-like) system as soon as possible. Maybe Slackware, maybe Arch Linux, maybe Debian. I don't know yet.

However, as you might know, this is not the easiest task on a MacBook Pro if you want to keep a native Windows and Mac OS X. It needs a special bootloader, with a lot of restrictions on partition size and number, etc.

So today, when I got too annoyed by my work to continue it, I decided to test Wubi. Wubi is an Ubuntu installer – and yes, I said Ubuntu doesn't „feel“ good. But Wubi uses Lupin and installs itself into a file inside the NTFS filesystem of Windows. It is not virtualized; it is a real, natively running Linux kernel, which is just not installed on its own partition but on a loop-mounted file on an NTFS partition. It installs GRUB into this file and is started by the Windows bootloader, into which an extra entry is added.
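In outline, such a setup is just a filesystem inside an ordinary file; the following sketch shows the idea (paths and sizes are illustrative, and the actual mounting at boot requires root and the modified initramfs):

```shell
export PATH="$PATH:/sbin:/usr/sbin"      # mkfs tools often live here
dd if=/dev/zero of=root.disk bs=1M count=64 2>/dev/null
mkfs.ext3 -F -q root.disk                # a real ext3 filesystem inside a file
file root.disk                           # reports an ext3 filesystem image
# At boot, the initramfs would then do something like:
#   mount -t ntfs-3g /dev/sda2 /host
#   mount -o loop /host/ubuntu/disks/root.disk /new_root
```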

That's really nice. You don't have to mess around with partitions and bootloaders, and it worked perfectly for me. It may have some drawbacks (no hibernation, slightly slower, etc.), but I think it's a good trade. Actually, maybe it would be better to use a directory with the needed files directly, instead of one big file with a filesystem in it – then the files could be accessed directly, even from Windows. NTFS has access controls too, and file permissions could also be saved in an external file. But I see that this could become a lot harder.

On the other hand, well, it's Ubuntu. I will keep it installed until I find time to re-set-up my system (I need a Linux at the moment, but only sometimes, so Wubi is okay). And then I will look for ways to do the same with other distributions – in the end, it's just a modified initramfs; it should be possible.

Unfortunately, there seems to be no way to do the same under OS X yet (otherwise I would). The web page says it is planned. I wonder what the problem is – HFS+ is natively supported by Linux, so this should be even easier than with NTFS (which needs FUSE).

Package Managers

Sun, 14 Feb 2010 18:43:43 +0000

A myth about Linux that hardly goes away is that installing software is much harder than under commercial operating systems. Of course, the installation of a Linux system itself is a hard nut to crack. First you have to choose your distribution – which can be one of the hardest tasks of the whole Linux installation. Then, after choosing one, making it run and support all your hardware can be a complicated (and sometimes – especially for new or extremely cheap hardware – even impossible) task. But installing actual software is usually no big deal. Every distribution has its package manager with a lot of packages, and mostly it takes either one short shell command or a few clicks in a friendly GUI to install the software you want.

It's just that most people switching from a commercial desktop OS to a Linux distribution expect that they have to download the software they want from somewhere, and they expect the software to take care of its own updates. They try to do the same under Linux, and – since compiling and installing software by hand is really complicated for a newbie – they fail.

On the other hand, package management systems are very convenient for both users and developers. Much commercial software is already distributed in package form, and some vendors (like Sun) even maintain package repositories for dpkg and rpm for some of their Linux software.
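On a Debian derivative, hooking into such a vendor repository is a single configuration line (the URL below is made up for illustration):

```shell
# /etc/apt/sources.list entry for a third-party repository (hypothetical URL)
deb http://repo.example.com/debian stable non-free
# afterwards the vendor's packages update like any distribution package:
#   apt-get update && apt-get upgrade
```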

Meanwhile, the installation process under Mac OS X is complicated and chaotic. You (mostly) get a dmg image, which (mostly) contains everything you need to install – sort of. Sometimes you just have to copy one app file somewhere and run it; sometimes you have to open (and run) a pkg file and go through dialogs; sometimes you have to open the app so it downloads the rest of the software; sometimes the app installs itself and then runs as an ordinary app, etc. Sometimes you get a zipped pkg file. Sometimes you get a zipped dmg image. Sometimes you get a sitx archive. Sometimes it's enough to delete the app file to uninstall it; sometimes there are special uninstallers which you have to find and run; sometimes you have to delete directories manually.

I could tell similar stories about Windows, but at least under Windows, most software either comes with an installer that registers itself so Windows can find the uninstaller, or installs an uninstaller somewhere in its Start-menu folder, or doesn't need to be installed at all and can just be run directly.

The situation is definitely not better, if not worse, than under most Linux distributions. But never mind – at least both Windows and Mac OS X have some central registry for installed software, and both have an integrated update mechanism for themselves. I wonder why every program still searches for updates itself. Why don't Microsoft or Apple just define a default protocol for update checking and provide a central update-search mechanism for all installed programs?

Like – just downloading an RSS feed and passing it to some defined procedure or so?

Well, Windows seems to have integrated package management for its components – at least there is some „pkgmgr.exe“ – but I don't actually know whether it is just for Windows components or can be used for other software as well. In the latter case, I don't understand why so many software packages (Firefox, Adobe Reader and the Flash plugin, the Java RE, Apple Boot Camp, etc.) have their own update scanners instead of using it.

And many of the installers and update scanners either don't work properly or get on my nerves by reminding me that the software they are upgrading is installed. And some of them just link to upgraded versions which install themselves, etc. – I find that really annoying.

But the existing package managers on Linux, Solaris, FreeBSD, etc. also lack some features I have long waited to see. One thing we could learn from Windows and from the app files of Mac OS X is to put everything an application needs into one directory, located near the application itself, thus producing fewer problems with colliding dependencies between versions (and architectures) of software. Having some sort of copy-on-write hardlink for this would also make it possible to install one library into many directories without a significant loss of space.
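Such copy-on-write copies exist as „reflinks“: with a reasonably recent coreutils, cp can create them on filesystems that support it (e.g. btrfs), and --reflink=auto silently falls back to a plain copy elsewhere. A sketch (file names are made up):

```shell
# Install one "library" into two application directories; on a CoW
# filesystem the data blocks are shared until someone writes to a copy.
echo "shared library bytes" > libfoo.so
mkdir -p app1 app2
cp --reflink=auto libfoo.so app1/
cp --reflink=auto libfoo.so app2/
cmp -s libfoo.so app1/libfoo.so && echo identical
```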

And – something else I don't like – often there are postinst and postrm scripts which run binaries. There is nothing wrong with that as such, but these scripts tend to do a lot of complicated stuff, and if they fail, the package manager cannot really undo what they have done, and the postrm scripts get confused. It is not bad to have postinst and postrm scripts (in fact, in some cases they are necessary), but a good package system should provide enough additional mechanisms for dependent configuration settings, etc., to make them unnecessary in as many cases as possible.

Package management is a complicated thing, and any solution has to balance between having no dependency handler at all and having a Turing-complete one which gets easily confused or is likely to be unusable. The main difficulty – as far as I have seen – is making the packaged software integrate itself into the package management. Is this really such a hard task for commercial software on a commercial OS?

Randomly Found Software: Unetbootin – generating bootable USB-Sticks under Windows

Sat, 13 Feb 2010 02:19:00 +0000

So far, I always had to use some VM under Windows to create bootable media with Linux or the like. For example, I just had to set up a USB stick for my thin client with some Linux and a running X server. But under Windows, well, I wouldn't even be able to edit the contents of that stick with a hex editor. Searching for something else, I found UNetbootin on some Ubuntu site which described how to set up a live USB stick under Windows.

This software can download the proper files to set up such a stick for a variety of distributions. I downloaded Xubuntu myself and passed the .iso file to it.

For some reason it sometimes fails, but now it finally worked, and I am currently writing this through rdesktop on that stick. Nice.

On the other hand, I still wonder how one can access block devices programmatically in Windows. I have already found NT-IFS for filesystem creation and some driver SDK for block devices (I think), but I just cannot find an API for listing and accessing block devices.

Randomly Found Software: Advanced Copy

Sat, 30 Jan 2010 18:39:02 +0000

A nice little enhancement to GNU cp is Advanced Copy (via). It adds progress bars and some information about which file is currently being copied where, and how long the copying will (presumably) take.

There is not much more to say about it. It's a patch to coreutils, and it compiles (and seems to work) under Mac OS X Snow Leopard. It is one of those little enhancements a system needs to get more user-friendly. And I am glad that such enhancements are also written for CLI applications, not only for GUI stuff.