Portal and other games …

Sat, 22 May 2010 00:02:46 +0000

… will now apparently be ported to Mac OS in greater numbers, and so one may hope that they will then be easier to port to Linux – or to emulate or virtualize there – since the foundations of the two systems should be fairly closely related.

Anyway, today I booted Windows on my new notebook. At the same time I configured Ubuntu so that it can be virtualized under that very Windows, so that I no longer have to maintain multiple logins and Firefox profiles. All of that worked wonderfully.

Steam started and took a long time to download Portal, and when it had finally finished doing so, it promptly crashed. After a restart and an attempt to launch Portal, it informed me that it did not recognize my graphics chip. Quite possible – it is not the newest – and I was correspondingly surprised when the game launched anyway and I could start playing, with relatively few, but still noticeable, stutters.

The stutters were annoying, so I wanted to lower the graphics quality, which led to a prompt crash, and short of a complete system reboot I could not get the game to start again. Afterwards I set everything back, hoping the game would at least return to its previous state. What can I say – the game hung and pegged one processor core at full load, so that I felt compelled to kill it.

Only then did it suddenly dawn on me what I was actually doing there: trying to get a Windows game to run under Windows. I, who don't want to use Windows at all and only boot Windows because I want to play a game, am making an effort so that this game works. There are few reasons for me to start a Windows system in the first place. I have my Ubuntu – and if I stop liking that, I have Arch Linux. And if I don't even like that, I will switch to Solaris or a BSD.

So on top of that, I have to make an effort so that a game I could – at least in theory – have bought runs under a cooperative system. Interesting. Naturally, I will now think twice about whether I really buy one of the paid games. I already had several games in mind, often older ones, but with those it is never quite clear whether they really run well under Wine – and with the somewhat older specimens it is not even clear whether they run well on a modern Windows.

OK, before I start in on everything I don't understand about the behavior of the game publishers, first a list of things I do understand – just as a show of good will:

  • I understand that they don't open-source their games – at least not initially. Copy protection becomes nearly impossible with open source. Besides, there is more at stake in a game than mere software: it is a total work of art, and of course you don't want people rewriting it before they have even properly played it.
  • I understand that they want to take DRM measures. I don't think it's good, especially because all the DRM solutions out there are so terribly implemented, but I understand it.
  • I understand that OpenGL without various extensions is not sufficient for the game developers.
  • I understand that game publishers don't want to pay for ports to operating systems that are not sufficiently widespread.

Yes, that much I can understand. Now for what I don't understand:

  • I don't understand why they don't at least open up parts of their games, or of the engines they use, far enough that the geeks can port a game themselves. The main share of the unportable parts is probably the direct hardware access to graphics memory, or the low-level calls into the graphics libraries – and those can be replaced. At the very least, one could make things easier for the Wine community (which would, after all, effectively be working on a port to at least four additional operating systems) by providing additional information.
  • I don't understand why all the DRM measures are so terribly programmed. How about this: the machine code is encrypted and can only be decrypted with a corresponding key – and even then only by loading it somewhere into the heap, say, and jumping to a defined memory location. Anything of the form „I regularly check on the internet whether you are allowed to play me“ should be much easier to crack away. In the end, all copy protection can be circumvented somehow anyway, as long as you want a system that is open in any way – and by that I don't mean the operating system, but simply not a completely locked-down mess like the iPhone or the like (and even that gets jailbroken …).
  • I don't understand why game makers and hardware makers crawl so far up Microsoft's backside as to commit to DirectX. I'm not saying that DirectX is somehow extrinsically bad, but it is intrinsically bad, because it is not portable. As a game maker I wouldn't want to make myself dependent on one particular company's graphics library, and certainly not on that company's operating system, which it can tinker with at will without my consent. As a hardware maker I wouldn't want that either. Basically, the only purpose of 3D graphics cards in home PCs is playing games. That is, the games essentially want to get at the graphics card, the graphics card essentially wants to get at the games; Windows is just glue in between, and that layer should be as thin as possible. Why don't the various game and graphics card makers team up and build their own infrastructure – I mean, they have to write drivers anyway, it can't be that much more effort. Above all, making the whole thing reasonably portable should be feasible. After all, raw computing power is rarely the bottleneck; graphics performance is.
  • I don't understand why anyone would want to port games to Macs. You don't buy a Mac to play games. You buy it to play with Mac OS X.

In any case, I will now rather keep trying to get the whole thing running under Wine. I believe that time is better invested. Playing is out of the question in the current state anyway.


Software that should exist #7: File-Allocation without Nullifying

Sun, 16 May 2010 03:26:26 +0000

I don't know about you, but I have had this problem more than once: I need a large file on my disk, with no specific content, to create some other filesystem on it. For example, when creating live CDs, additional swap files, or just test volumes for exotic filesystems or virtual machines, I need a big file on which I can then perform filesystem creation and the like. The default way to do this is to run dd with appropriate blocksize and blockcount options, let it read from /dev/zero, and write into the file. The problem is that this does not just allocate the file, it also overwrites it with zeroes. In many cases that is not necessary. The main reason for using /dev/zero is that it is the fastest device you can read data from – but actually, I mostly don't care about the content, and the only reason for not using /dev/urandom is that /dev/urandom is a lot slower.
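For reference, the usual incantation looks something like this (file name and size arbitrary – note that a full gigabyte of zeroes actually gets written out):

dd if=/dev/zero of=/tmp/volume.img bs=1M count=1024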

So it would be nice to be able to say „give me a file of size … with random content“, such that the kernel does this by just allocating free blocks to a file without overwriting them; the only write accesses to the disk would then be the ones for filesystem metadata such as inode tables, etc.

Problematic with this approach – and therefore probably the reason why it is not a standard mechanism – is that if every user could do this, a user might be able to access blocks of files that should not be visible to him, i.e. blocks of files which have already been deleted but required higher read permissions. As root, on the other hand, there should be no such problem at all.

One possible solution that sometimes suffices is the creation of sparse files, but only if the underlying filesystem supports them – and even then, for most of the problems mentioned above, access becomes painfully slow, since the blocks have to be allocated on demand while the programs assume they are talking to a block device. Most mkfs implementations will at least require some kind of „force“ option to create a filesystem on a sparse file anyway. Loop-mounting will most probably fail. Using a sparse file as a swap file isn't allowed at all (at least not without strange kernel patches).
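For completeness, creating such a sparse file is merely a matter of seeking past the end without writing anything:

dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=1G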

Another solution comes – as far as I have read – with ext4, which allows creating files that are „virtually“ nulled in the beginning, without having to be overwritten first. Apart from the fact that I don't really like or trust ext4 – it doesn't bring the features btrfs would bring, but doesn't seem to be a lot more stable either – this is a solution at the filesystem level. Unfortunately, such problems mostly arise in situations where you didn't choose the filesystem with this in mind, or can't choose your filesystem at all. There has got to be a more general solution.
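For what it's worth, the interface to this ext4 feature is the fallocate(2) system call – posix_fallocate(3) being the portable wrapper, and recent util-linux versions ship a small fallocate(1) tool – so on a supporting filesystem the whole allocation reduces to one line. Since ext4 marks the extents as uninitialized and returns zeroes on read, the stale-data permission problem from above doesn't even arise:

fallocate -l 1G /tmp/volume.img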


Randomly Found Software: finch

Sun, 02 May 2010 04:04:28 +0000

Console interfaces tend to have a lot of strange keybindings (mostly not modifiable) and are therefore often hard to use for somebody who does not use the software regularly. That is why I never got used to the Midnight Commander – it may be a mighty tool, but for everything I needed, either the shell itself or the facilities of Emacs were enough, so I never quite felt like investing the time to learn its keybindings.

Of course, Emacs is certainly not easier to learn (except that there are modes which make it easier, but they are not the default); it's simply that I know and use Emacs anyway.

Irssi is a bit better. Even newbies can use it with only a little explanation. On the other hand, irssi may be a versatile piece of software for IRC, but its UI is not very versatile – which is a good thing (there are too many bloated interfaces out there as it is), but it explains why handling it is easier.

Well, and then there is finch. Finch is a console-based libpurple frontend; that is, you can use it with your Pidgin configuration folder remotely through SSH without having to do X forwarding. It has a few keybindings that are unusual compared to standard GUIs, but on the other hand, there aren't thousands of commands for moving a window up, down, left, right and diagonally each; in fact, you have to remember exactly two commands to move and resize the windows: Alt+m for moving and Alt+r for resizing. The rest is done with the arrow keys, and Enter when you are done.

Also, you get a window list by pressing Alt+w, but you can also cycle through the windows by pressing Alt+n. Essentially, these are the commands you will need. There are a few others, for opening menus, closing windows, etc., but either you will never need them, or you will remember them easily; Alt+c, for example, stands for close, which is intuitive.

The windows are not tabbed; they are freely movable (except outside the screen bounds, but I don't care about that, and it's certainly not hard to patch if somebody really needs it) and overlapping, like in common windowed environments. An intuitive and easy-to-learn interface in the pure console, in software that can really do a lot of down-to-earth things, is hard to find.

Unfortunately, it is still a console application, which means there is no way of integrating it into the rest of the desktop – which is why I prefer Pidgin when sitting at my local computer.

But the most important thing is that it solves the problem of flaky links when sitting in trains and using mobile internet. Whenever I get disconnected, some problem occurs – messages get lost, OTR is out of sync, etc. I used to work around this using Pidgin on a remote machine with xpra and NX, which are very well done pieces of software, but having a console UI that I can run inside screen, which produces a lot less traffic and is still easy to handle, is the preferred way for me – especially because I don't have to redo my configuration: finch just takes the Pidgin configuration and uses it.
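For the record, the whole setup thus boils down to a one-liner – host name hypothetical; screen -dRR reattaches a running session or creates a fresh one:

ssh -t shell.example.org screen -dRR finch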


Free Software deficit rant and wishlist

Sat, 17 Apr 2010 23:54:00 +0000

There are some things that would be nice to have as Free Software, and some things about it that totally suck. Don't get this wrong: I like Linux and Free Software in general, and prefer it to any closed source piece of software, because of this openness and because it is written more for geeks than for dumb noobs. Unlike this other OS, which gets more and more tedious to work with – worse with every version. Now you literally have to dive through piles of much too verbose and mostly nonsensical description „essays“ and error „novels“, only to find out that this OS won't tell you at all what's going on. Like „A problem has occurred. Ask your administrator.“ %$§$&% I AM the administrator!!!! „…want to… ask a friend?“ Yeah.

Video conferencing

Needed: Skype equivalent.

Available Solutions: ekiga, empathy, kopete, pidgin, kphone, linphone.
Problems with existing solutions: none has the ability to encrypt video and/or audio, which I consider essential for a modern communication tool. Skype's encryption is at least good enough to frustrate the common network admin, which is way better than anything open source has to offer out of the box so far.
And no, additional VPN tunnels or ZRTP proxies don't cut it. They are a PITA to set up and maintain. Zfone has unclear licensing terms, which seems to be the reason it isn't packaged by distributions.
Ekiga seems to be the most usable at this time. Searching Google for „ZRTP ekiga“ gives many results; it seems they have been attempting to include it since 2006, but it isn't there yet. Additionally, ekiga has a clumsy interface and is very hard to set up with things like SIP over VPN or serverless SIP. I guess it's the typical GNOME/Windows thing: do the 50% standard cases automatically and don't care about the rest. They seem to be busy plumbing an instant messenger onto SIP at the moment.
Empathy and the Telepathy framework seem most promising to me at the moment, because of their modular architecture.
What I want: Theora (or x264) + Speex/CELT, built-in encryption, easy NAT traversal, easy installation. Everything is there: ZRTP/SRTP or DTLS, STUN, ICE; all codecs have open implementations; there are many open media frameworks.
Jingle seemed promising, doing away with the SIP cruft, and I have seen many attempts to implement it in open source messaging clients, but none was usable. I don't know why.

All in all, it seems most of the developers are not at all interested in adding encryption to their programs; some even seem to oppose it. Hey, let's create a new Conspiracy Theory (TM): „They“ prevent the development of free and open voice/video encryption software.

Video editing

Cinelerra was an utter failure regarding stability and usability. The Lumiera project, by some people out of the Cinelerra community, sounds great, but will apparently not be usable in the foreseeable future. PiTiVi and OpenShot are severely lacking in features so far, but may have their user base. Kdenlive has made big advances and is the most usable at the time of writing, but it lacks basic features like keyframes for effects, and stable effects in general (ever tried to anonymize, say, a video of a political demonstration?). Also, there are only very few video format presets, and you cannot easily define custom ones.
Let's hope Lightworks goes the Blender way and not the Xara way … The feature list looks really nice.

64-bit runtimes for Java and Flash

Oh please, Adobe and Sun/Oracle: would you please finally make 64-bit versions of your runtimes that simply work as „well“ as the 32-bit ones?
As Flash is mostly used for videos and advertisements, we can hope it will be replaced by HTML5 in the near future, whereas the lack of a 64-bit Java VM with a non-memory-eating client mode is really bitter. At least there is an applet plugin now. Now that memory and CPU time are so abundant that we write whole applications in JavaScript, Java would have been a great technology for client apps and the web, but of course Sun decided to totally botch it with their former stupid licensing policy and their negligence of everything other than „Enterprise Business yaddah“.

Java ME DevKit for Linux

Who in his right mind wants to cross-develop using Windows if he could use Linux, the developer's platform?

Video recording/synchronization

Have you ever tried to record video directly from a webcam to disk? Perhaps you even have an MJPEG camera and hoped it could simply dump frames alongside some audio? I tried ffmpeg, mencoder, VLC, GStreamer, transcode. I was lucky when they handled V4L2 at all, and then some of them didn't even have ALSA input drivers (ALSA has been around since 1998 …).

The main problem here:
There are two completely separate streams: video as a somewhat irregularly timed sequence of JPEGs, and audio as a normal ALSA stream from a usb-audio driver. One would expect it to be possible to simply write both streams, nicely timestamped, into a container format, and to process (cut, recode, whatever) the whole thing afterwards in non-realtime. Nope. VLC doesn't work with raw MJPEG and doesn't have ALSA input. Mencoder does have raw MJPEG, but it segfaults immediately – and it can only write AVI files, which need a regular framerate.
With ffmpeg, A/V sync was a total mess, although it otherwise worked.
GStreamer's gst-launch is a great tool, but no matter how many buffers and timestamper plugins one inserts, audio is not in sync in AVI. MKV works fine, but no other program can open MJPEG-in-MKV, or at least convert it into a different MJPEG container format.
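To make the MKV variant concrete, the kind of pipeline I mean looks roughly like this – a sketch from memory, assuming an MJPEG webcam on /dev/video0 and a USB audio device at hw:1, with the audio Vorbis-encoded on the way in:

gst-launch-0.10 matroskamux name=mux ! filesink location=out.mkv v4l2src device=/dev/video0 ! image/jpeg,framerate=30/1 ! queue ! mux. alsasrc device=hw:1 ! audioconvert ! vorbisenc ! queue ! mux.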

Video transcoding

What I want: start with a DVB recording, as one or more TS files, plus an edit decision list consisting of cut-in/cut-out times.
The goal: a properly A/V-synced mp4/mkv file, in one single step:

magical-encode -edl blah.txt -o out.mp4 -crf 25 *.ts

At the moment, you do this instead:

ts --(cut + sync + demux: ProjectX)--> m2v + mp2/ac3 --(encode video, copy/encode audio)--> avi + vorbis/aac/ac3 --(multiplex)--> mp4/mkv

This needs two sets of – in theory – superfluous temp files. That helped kill my hard disk once …

Proper video drivers

Nvidia's binary blob works as long as you have a new graphics card, and as long as you are lucky. With my old graphics card I now have to use an old „legacy“ version. At the moment it seems I have some setting wrong; it's sluggish as hell. I switched to nouveau today. I can live without 3D and power management on this computer, as I use it mostly for surfing and as a server. I'm confident nouveau will improve.
ATI's binary driver was really bad some time ago, but has improved. You still can't make screencasts with a decent framerate, the computer hangs completely when you start a second X server, and so on. I'll change to the open source driver the day they implement power management.
Yeah, @Intel: H.264 acceleration in Q3.

Decent Open Multi-VPN

Tinc has a great feature list, but what use is a Virtual Private Network that is not guaranteed to be private? I consider its security issues severe, and I find the developers' reaction to them dubious at best.

Strange: most of these issues are related to video, encryption, or bytecode runtimes. Perhaps these are the most difficult and/or most boring fields of software development.

If you happen to stumble over a solution to any of the aforementioned, feel free to comment.


Data deduplication with ZFS-Fuse

Fri, 02 Apr 2010 22:42:04 +0000

I already wrote that ZFS-Fuse showed some load spikes in use, so for the moment it is unusable for me as a root filesystem – which is a pity. Nevertheless, today I took the time to try the git version of zfs-fuse. For that there is the nice package zfs-fuse-git in the AUR. In contrast to the release version, it already supports deduplication.

So: USB stick drawn, ZFS created.

# zpool create -O dedup=verify dedup /dev/sdc1
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                 85M   21K   85M   1% /dedup
# dd if=/dev/urandom of=/tmp/random bs=1024 count=$((40*1024))
# cp /tmp/random /dedup/
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                 85M   41M   45M  48% /dedup
# cp /tmp/random /dedup/random_b
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                124M   79M   45M  64% /dedup
# cp /tmp/random /dedup/random_c
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                165M  121M   45M  73% /dedup
# cp /tmp/random /dedup/random_d
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                204M  159M   45M  79% /dedup
# cp /tmp/random /dedup/random_e
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                244M  199M   45M  82% /dedup
# cp /tmp/random /dedup/random_f
# cp /tmp/random /dedup/random_g
# df -h
Dateisystem           Size  Used Avail Use% Eingehängt auf
/dedup                324M  279M   45M  87% /dedup

Remarkable.
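Instead of inferring the effect from df, newer zpool versions report the achieved deduplication factor directly – as far as I know in a DEDUP column of the pool listing, so for the pool named dedup from above:

# zpool list dedup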

Update: I am now temporarily running ZFS-Fuse-git on one of my backup disks. Let's see whether it proves itself.


„Lazy Evaluation“ in POSIX-C, using sigaction(2)

Wed, 31 Mar 2010 23:32:16 +0000

It amazes me time and again how flexible POSIX (and other low-level stuff in common unix-like environments in general) is. It is surprising enough that fork(2)-ing doesn't in general double memory consumption, because the pages are mapped copy-on-write – which is a nice thing, but still done by the kernel – yet POSIX even gives userspace processes a lot of control over paging. So I wonder why most programmers just use malloc(3) – maybe because it's the most portable non-GC way to organize memory. I know that SBCL uses libsigsegv to optimize its memory management.

For a long time now I have had the idea of implementing lazy evaluation in pure C using libsigsegv. The plan was to allocate some address space and mprotect(2) it; as soon as someone accesses it, a SIGSEGV is raised, and the handler calculates the contents of the accessed memory block, unprotects it, stores the calculated values, and returns, so that from this point on the calculated values can be accessed directly. OK, this is not quite lazy evaluation – for example, you cannot (at least not trivially) create infinite lists with it – but it goes in that direction; it is more like „calculation on demand“.

So I wrote a program doing this with libsigsegv, but unfortunately I couldn't work recursively with it, i.e. if I accessed a protected part of the memory from inside the SIGSEGV handler, the program would exit. But libsigsegv is only a wrapper, probably around sigaction(2), and sigaction itself does allow recursive handling of signals when the flag SA_NODEFER is used. So I wrote the program below. It calculates the Fibonacci sequence „blockwise“: an array fib is allocated and – on an x86 processor – split into blocks of 1024 integers, one page each. It also prints out the registers of the faulting context (that is, it produces a lot of output – be careful when running it). And – well, I only tested it on x86 Linux 2.6.26 and on x86_64 Linux 2.6.32; on the latter the first version got stuck in an infinite loop (more on the likely reason below the listing).

/* _GNU_SOURCE must be defined before the first include, otherwise
   glibc never sets __USE_GNU and the REG_* names stay hidden */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/user.h>   /* PAGE_SIZE */
#include <stdio.h>
#include <stdint.h>     /* uintptr_t */
#include <string.h>     /* memset */
#include <ucontext.h>
#include <signal.h>

int * fib;
int * fib_e;

#define PAGE_FIBNUM (PAGE_SIZE / sizeof(int))

void sa_sigac (int sig, siginfo_t * siginfo, void* vcontext) {
 ucontext_t* context = (ucontext_t*) vcontext;

 /* dump all general-purpose registers of the faulting context */
 int i;
 printf("Regshex:");
 for (i = 0; i < NGREG; i++)
  printf(" %lx", (unsigned long) context->uc_mcontext.gregs[i]);
 printf("\nRegsdec:");
 for (i = 0; i < NGREG; i++)
  printf(" %lu", (unsigned long) context->uc_mcontext.gregs[i]);
 printf("\n");

 /* which element of fib was accessed? plain pointer arithmetic,
    since casting pointers to int truncates addresses on x86_64 */
 int * fault_address = (int *) siginfo->si_addr;
 int number = fault_address - fib;
 printf("Accessed: %d\n", number);
 int firstcalc = number - (number % PAGE_FIBNUM);
 int lastcalc = firstcalc + PAGE_FIBNUM;
 printf("Calculating Fibonacci-Sequence from %d to %d\n",
 firstcalc, lastcalc);
 /* unprotect exactly the one page this block lives in
    (PAGE_FIBNUM ints == PAGE_SIZE bytes) */
 mprotect(fib+firstcalc, PAGE_SIZE, PROT_READ | PROT_WRITE);

 if (firstcalc == 0) {
 /* initial elements of fibonacci sequence */
 *(fib+firstcalc) = 0;
 *(fib+firstcalc+1) = 1;
 } else {
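 /* reading fib[firstcalc-1] and fib[firstcalc-2] may fault again if
    the previous page is still protected; SA_NODEFER (set in main)
    is what allows this recursive handler invocation */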
 *(fib+firstcalc) = *(fib+firstcalc-1) + *(fib+firstcalc-2);
 *(fib+firstcalc+1) = *(fib+firstcalc) + *(fib+firstcalc-1);
 }

 int * ccalc;

 for (ccalc = fib+firstcalc+2; ccalc < fib+lastcalc; ccalc++) {
 *ccalc = *(ccalc-1) + *(ccalc-2);
 }
}

int main (int argc, char* argv[]) {

 int fnum;

 if (argc == 1) {
 printf ("Please supply a number.\n");
 return -1;
 } else {
 sscanf(argv[1], "%d", &fnum);
 if (fnum >= 20*PAGE_SIZE) {
 printf ("The number must be less than %d.\n",
 20*PAGE_SIZE);
 return -1;
 }
 }

 struct sigaction myaction;
 memset(&myaction, 0, sizeof(myaction));
 myaction.sa_sigaction = &sa_sigac;
 sigemptyset(&myaction.sa_mask);
 /* SA_NODEFER: do not block SIGSEGV while the handler runs, so the
    handler itself may fault recursively */
 myaction.sa_flags = SA_NODEFER | SA_SIGINFO;

 sigaction(SIGSEGV, &myaction, NULL);

 /* reserve enough room so that a page-aligned window of
    20*PAGE_SIZE ints fits inside; uintptr_t instead of int,
    again because of 64-bit addresses */
 int fib_begin[24*PAGE_SIZE];
 uintptr_t fb = (uintptr_t) fib_begin;
 fib = (int*) ((fb - (fb % PAGE_SIZE)) + PAGE_SIZE);
 fib_e = fib + PAGE_SIZE;
 if (mprotect (fib, 20*PAGE_SIZE*sizeof(int), PROT_NONE) != 0)
  perror("mprotect");
 printf("fib(%d) %% %u := %u\n", fnum, -1, fib[fnum]);
 return 0;

}

There are a few flaws in this approach. The infinite loop under x86_64 was almost certainly caused by casting pointers to int, which truncates 64-bit addresses – the listing above therefore uses uintptr_t and plain pointer differences, which should remove that particular dependency on the bus width. Of course, handling memory this way is highly unportable anyway, but most unix derivatives should be able to do at least something like it.

The major flaw I don't like is that I am bound to calculating a whole block at once, without being able to mprotect the block again afterwards. In theory, it should be possible to find out into which register the value was supposed to be read and then to manipulate the context – a handler may modify the ucontext it is given, and returning from the handler resumes execution with the modified registers – so that the faulting instruction doesn't simply get retried. But addressing the registers by name is its own adventure: sys/ucontext.h contains an enumeration of index names (REG_EIP and friends), but glibc hides it unless _GNU_SOURCE is defined before the first header is included – hence the define at the very top of the listing. So I just print the registers out in the hope of seeing some structure in them, but so far I haven't seen much. I guess one has to know a lot of low-level stuff about x86 to understand what is actually going on there, which I don't.

Anyway, I think this is very interesting. It may not be useful at all, but it is a nice thing.


Randomly Found Software: Xpra

Sat, 20 Mar 2010 17:16:53 +0000

One thing I really liked about Windows 7 was its excellent terminal server facilities. I could detach a running local session and reattach it remotely. I could even tunnel it through SSH with ordinary SSH X forwarding, by installing rdesktop on Cygwin. It supported changing the size of the desktop, and logging in without being visible on the physical screen.

X11 under Linux never worked that well, let alone Mac OS X, which is sort of the worst of all, with the worst VNC implementation I know so far.

Something none of these solutions had, though, is suspendable rootless GUIs. I don't know for how long I have wished there were something like screen for X11 applications. Well, there it is: Xpra.

It is a surprisingly small set of software, written in Python, yet well done. The installation procedure is unusual but well documented, and for Arch Linux there is an AUR package – which is why I love Arch Linux so much more than Debian: there are a lot more build scripts available.

Once installed, it can simply be started using

xpra start :1927

This starts a server listening on display :1927. To start an application on this X server, we have to set its $DISPLAY accordingly, or supply the display as an argument. I mostly start an xterm, from which I can then start the rest.

nohup xterm -display :1927 &

If everything works, the xterm can be attached to the current X server by running

xpra attach :1927

inside another X server. This also works remotely, through SSH X forwarding. Kill the attach process via C-c (or – if it is remote – by just pulling the network cable) and the xterm disappears; it can be attached again by simply repeating the same command as soon as the connection is back. Just like one would do with screen.
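As far as I know, newer versions even bring their own SSH transport, so the remote case doesn't need X forwarding at all – host name hypothetical:

xpra attach ssh:shell.example.org:1927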

When trying to attach a server inside itself, the connection gets lost, but – surprisingly – I can still attach the server afterwards. It doesn't crash. Now if that isn't solid!

There is no tray bar integration (yet – I am sure it is possible), so for tray applications I use trayer.

Of course, it doesn't always work perfectly. I sometimes have to run

setxkbmap de

multiple times. And I have heard of some issues with CapsLock. Sometimes I have to give a window focus twice (i.e. click on its titlebar twice) before it actually gets it. But well, it hasn't even reached version 1.0 and has already made my life a little easier. I think it's already worth using, and definitely worth being developed further (and hopefully integrated into the default package trees of the major distributions).