Neue Blog- und Feed-URLs / New Blog- and Feed-URLs

Sun, 30 May 2010 03:32:19 +0000

Die neue Blogsoftware auf meinem eigenen Server ist aufgesetzt. Ich habe dementsprechend vor, hier keine weiteren Beiträge zu schreiben. Mein neues Blog findet sich unter Würde mich freuen, wenn der eine oder andere Leser mir dahin folgt.

I have set up new blog software on my own server. Thus, I will presumably post anything new there, and nothing new here. My new blog can be found at I welcome every reader to follow me there.

Get a free PDF Reader

Thu, 27 May 2010 01:11:55 +0000

While looking for instructions on making mozplugger embed Evince, besides the solution I found here, I also found a nice link to this campaign from the FSF Europe.

It is an appeal to use a free PDF reader. Well, under Linux and other free systems there are a lot of them, and most of them are good. I actually do not understand why there are still people who prefer Adobe Reader under Linux. Not only are there a lot of alternatives, they are also mostly much better (faster, easier to use). Few PDFs fail to work in them – the ones created with some strange WMF tools (M$ for the win) and, of course, the ones encrypted such that explicitly only Adobe Reader can open them. I have run into this exactly twice in my whole life – once with a PDF created with some strange settings in Scientific Workplace, and once from a professor who wasn't allowed to publish parts of his book without encryption. Even commercial PDF providers usually don't use this, because it is basically useless – it is a crude form of DRM, and modern ebook formats have much better techniques for that.

Under Windows, too, I don't want to use Adobe Reader; there I mostly use the (non-free) Foxit Reader. The FSFE's list names Evince for Windows – but Evince for Windows was in a beta state, and I wouldn't have recommended it to normal people. Okular was stable but needed a full-blown KDE installation, and KDE for Windows is still no fun. I have never tried Sumatra PDF, though. I will have to do that.

Well, actually, I don't like PDF much. Many modern PDF files are bloated. I liked early versions of PostScript much better. And at the moment, I like DjVu very much – at least for ebooks, DjVu seems to be a good format. As a comparably simple format, I like SVG. I mean, it is bloated with XML stuff, but at least its inner structure is simple.

It's a pity that only a few pieces of free software work properly under Windows. Windows is still the main platform for most people, and to convince them of free software, it would be a good thing to let them work with it under Windows already.

Flying Uxul and Sunset

Mon, 24 May 2010 01:00:44 +0000

Are shared libraries still appropriate?

Thu, 20 May 2010 18:13:59 +0000

Currently, I am trying to remove some dependencies of Uxul-World. I was thinking of kicking LTK completely – though I like LTK – but as it is only part of the level editor, until now I thought I should keep it. On the other hand, it produces additional dependencies – lisp-magick right now; maybe I will switch to cl-gd or to my own little FFI binding. Then again, if I did all that stuff directly inside SDL, without LTK, I would only have to use sdl-gfx to stretch and caption images.

However, hard-linking SBCL against FFI bindings is hard to impossible, and as far as I remember, the license of SDL forbids this for free software anyway. Under Linux, SDL may be a default library which is nearly always installed; under Windows, I don't think so. Under Linux, there is no problem with providing a simple package dependency list, as long as the packages are not too exotic and can be installed easily. But of course, I also want the game to be playable under Windows without having to install a whole Unix-like environment first. So maybe, under Windows, I should use OpenGL instead. Well, I will see.

I am currently concentrating not on portability but on finally getting some playable content into it. In general, though, it is good to think about it early: I don't want to produce a dependency hell. I hate dependency hells. Having a lot of additional dependencies in a software package can really make me sad. Mostly this leads to a lot of strange download and installation procedures, since every project has its own policies, and in the end the only thing I gain is additional knowledge about small libraries which I didn't even want to know about.

Linking libraries like zlib or libpng dynamically is something that really sounds anachronistic to me. Maybe in embedded devices this makes sense, but on every modern PC, the additional memory footprint of linking them statically should be negligibly small. Admittedly, a real dependency monster depends on thousands of such small libraries, so that the footprint can get remarkably large, and when using dynamic libraries, the executable code can be mapped multiple times into different processes by the kernel, which needs less memory and makes the code really „shared“.

But in the end, the only real bottleneck when simply hard-linking against everything and deploying large binaries with few dependencies is RAM usage. Neither hard disk space nor the additional traffic should be an issue.

And again, the solution I would suggest could come from deduplication technologies. Assume you download a binary and execute it. The kernel has to read it, and can therefore create an index of checksums of the memory blocks the binary contains. Assuming that mostly the same libraries are hard-linked, and thus the same or very similar binary code occurs, the kernel will notice that it has already loaded equivalent blocks into memory, and can therefore map them together, as it would do with shared libraries. A main difference is that the pages would have to be mapped copy-on-write, since some software may change its executable code (willingly or through a bug). The binary could additionally provide hints for the kernel – for example, flags telling it not to try to deduplicate certain parts of the loaded process image, because they may change or will only rarely be used, or flags telling which library (and source file) some memory pages belong to, so the kernel can optimize the memory deduplication.

Just to emphasize this: I am not talking about deduplicating all of RAM, only about a small procedure run at the start of a new process, which searches for identical pages that are already mapped somewhere. I am sure this would take longer than just soft-linking. But it shouldn't take too much additional time, and one could add heuristics not to deduplicate very small process images at all, to make them load faster.
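To make the idea a bit more concrete, here is a small user-space sketch (in Python, as a thought experiment – the real thing would live in the kernel) of the indexing step: checksum each page-sized block of a process image and count how many pages occur in more than one image, and could thus be mapped together copy-on-write. File names and sizes are placeholders.

```python
import hashlib
from collections import defaultdict

PAGE = 4096  # typical x86 page size

def page_hashes(path):
    """Checksum each page-sized block of a binary, as the kernel
    could do while loading a process image."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(PAGE):
            chunk = chunk.ljust(PAGE, b"\0")  # pad the final partial page
            hashes.append(hashlib.sha256(chunk).digest())
    return hashes

def shareable_pages(paths):
    """Count distinct pages occurring in more than one image; those are
    the candidates for being mapped together instead of loaded twice."""
    index = defaultdict(int)
    for p in paths:
        for h in set(page_hashes(p)):
            index[h] += 1
    return sum(1 for count in index.values() if count > 1)
```

Running `shareable_pages` over two statically linked binaries built against the same libraries would give a rough estimate of how much memory such a scheme could actually share.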

In any case, I think it would make working with binaries easier, deploying as well as using them, especially outside of a package manager. For example, it would give an easier way of maintaining multiarch systems.

And – imo – it fits better into the world of free software, where you have a lot of chaotic dependencies and a developer cannot keep track of all these dependencies' newest versions and installation procedures, so he would just put everything inside his project directly.

It basically gives up a bit of infrastructure while gaining a new way of solving the problems for which this infrastructure was created. And it sounds like everything needed to implement this already exists. Of course, I am not a kernel developer; I can't say how hard it really is. I am pretty sure there won't ever be such a thing in Linux, but maybe more innovative operating systems like OpenSolaris could provide it – Solaris is known for its openness to new technologies.

Software that should exist #7: File-Allocation without Nullifying

Sun, 16 May 2010 03:26:26 +0000

I don't know about you, but I have had this problem more than once: I need a large file on my disk, with no specific content, to create some other filesystem on it. For example, when creating live CDs, additional swap files, or test volumes for exotic filesystems or virtual machines, I need a big file on which I can then perform filesystem creation and such. The default way to do this is to run dd with appropriate blocksize and blockcount options, let it read from /dev/zero, and write into the file. The problem is that I then not only allocate the file but also overwrite it with zeroes. In many cases this is not necessary. The main reason for using /dev/zero is that it is the fastest device one can get data from – actually, I mostly don't care about the content, and the only reason for not using /dev/urandom is that it is a lot slower.

So it would be nice to be able to say „give me a file of size … with arbitrary content“, such that the kernel does this by just allocating free blocks for the file without overwriting them; thus, the only write accesses on the disk would be those for filesystem metadata like inode tables, etc.
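Something in this direction does exist as a syscall: posix_fallocate allocates the blocks for a file without dd-style overwriting (on filesystems that support it, the extents are merely marked as unwritten, so reads return zeroes – which also addresses the security concern below). A minimal sketch, assuming Linux and a writable /tmp; the path and size are placeholders:

```python
import os

# Allocate a 16 MiB file without writing 16 MiB of zeroes through
# userspace: posix_fallocate reserves the blocks and extends the
# file size in one call.
path = "/tmp/testvol.img"
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
try:
    os.posix_fallocate(fd, 0, 16 << 20)
finally:
    os.close(fd)

print(os.path.getsize(path))  # → 16777216
```

Note that this still gives a (virtually) zeroed file, not one with leftover disk content – handing out stale blocks is exactly what the kernel refuses to do, for the reasons discussed next.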

Problematic with this approach – and therefore probably the reason why it is no default mechanism – is that if every user could do this, a user could read blocks of files that should not be visible to him, i.e. blocks of files which have already been deleted but required higher read permissions. As root, on the other hand, there should be no such problem at all.

One possible solution which sometimes suffices is the creation of sparse files – but only if the underlying filesystem supports them, and even then, for most of the problems mentioned above, access becomes painfully slow, since the blocks have to be allocated on demand while the programs assume they are given a block device. Most mkfs tools will at least require some kind of „force“ option to create a filesystem on a sparse file anyway. Loop-mounting will most probably fail. Using a sparse file as a swap file isn't allowed at all (at least without strange kernel patches).

Another solution comes – as far as I have read – with ext4, which allows creating files that are „virtually“ nulled in the beginning, without having to be overwritten first. Apart from the fact that I don't really like or trust ext4 – it doesn't bring the features btrfs would bring, but doesn't seem to be a lot more stable either – this is a solution on the filesystem level. Unfortunately, such problems mostly arise in situations where you didn't choose the filesystem with this in mind, or can't choose your filesystem at all. There has got to be a more general solution.

Comment Feeds, Please! (and other things about blogging)

Wed, 12 May 2010 18:46:05 +0000

Well, there may be a lot of „professional“ and „famous“ bloggers out there who can tell you what they like or dislike when reading blogs, and if you want to create a „professional“ blog rather than a small private blog about the things you are interested in, then you had better ask those people than read what I am going to write. Because I will now tell you about a few things I dislike on some blogs – reasons why a blog might be thrown out of my RSS feed list.

Have and Maintain a Feed

Yes, there are still people who proudly write their own blog software but don't provide any feed. Even though their site might have interesting content, there are thousands of other sites which provide interesting content too, and at least for me, it is rather hard to produce something so interesting that I am willing to periodically visit your site and watch for news.

Those times are gone. There are too many people writing their opinions online. I just counted 439 newsfeeds in my feed reader; at least half of them provide information that interests me, but most of them don't do so often. I cannot manage to watch 439 websites all the time, especially because I mostly read this stuff in my free time, without getting anything out of it I really need – i.e. just for fun.

And something that especially gets on my nerves is when I have already subscribed to a feed and then the blogger changes his software, and with it the feed URL, without leaving a note in the old newsfeed. So I only notice it through the error messages of my feed reader. This is annoying!

Ah, and especially: make it easy to find. Provide feed links as well as link tags which feed readers can recognize. I don't want to have to „search“ your site for them.
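For reference, these are the kind of autodiscovery tags feed readers look for in a page's head; the href values here are placeholders for your actual feed URLs:

```html
<link rel="alternate" type="application/rss+xml"
      title="My blog (RSS)" href="/feed.xml">
<link rel="alternate" type="application/atom+xml"
      title="My blog (Atom)" href="/atom.xml">
```

With these in place, a reader can paste the site URL into his feed reader and have the feed found automatically.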

I don't care much about design, just make your site work with as little as possible

Many people like websites which put great effort into their design. There is nothing wrong with that, except that this effort often leads to huge requirements on your browser.

In particular: if I go to your site, don't expect me to activate JavaScript if there is no explicit reason. If you use jsMath because you cannot run LaTeX on your provider's server, or you are writing browser games and therefore need JavaScript, then kindly excuse yourself when I visit your site and ask me to activate JavaScript, rather than commanding me to do so. JavaScript uses my system resources and might introduce additional security vulnerabilities – and if you are just too lazy to provide an interface that doesn't need JS, without really needing it, I am not willing to trust you enough to let your code execute on my computer!

The same goes for Flash animations. Flash is like a colander when it comes to security. There are a few domains which I trust. For example, I trust large video portals like YouTube, Vimeo or Dailymotion – because if they became vulnerable, they would fix it as fast as they could. I don't want to see Flash on your website, except when it is really necessary. OK, it is still necessary for embedding video or sound – I hope those times will soon be over, but there is no other possibility that really works for now. So yes, I can understand that you might use Flash when it is impossible not to.

Advertisement services also sometimes use Flash. I don't see why they do, instead of just using GIF animations, but well, I can understand that you want to recover the money you pay your provider, so keep your Flash advertisements – I will block Flash anyway if you don't give me a reason not to.

But as soon as your site has some fancy-looking sidebar or other shit programmed in Flash, I will certainly not use it.

Ah, and don't use cookies if there is no reason for them. Some ad services might require them, but I will block them if I don't see a reason to give you the opportunity to store data on my PC!

You may use Cascading Style Sheets, and well, if you really want, you may provide additional functionality using JavaScript and cookies – yes, these technologies are nice for some purposes, and if I have read your website for a long time, I might feel comfortable giving you the opportunity to store small pieces of data and execute small pieces of code on my PC. But if you try to force me to, I will not give an inch.

Oh, and a note on CSS: CSS exists to put design on your site while keeping it viewable with many technologies. Maybe I want to visit your site using lynx. Then please put the boilerplate elements below the interesting stuff. I don't want to scroll through five screens of stupid login, blogroll and linklist information before I finally get to the content I want to see.

Allow comments from all people

There is a lot of comment spam, so I can understand why you might want to review the comments I write before publishing them. I can understand when you want me to enter a captcha; I can even understand when you require JavaScript for commenting to prevent spam. But if so, don't just assume I have JavaScript turned on – tell me that comments need JavaScript before producing strange errors (or just doing nothing).

You want my name (or a representative nickname) and of course an e-mail address of mine. Maybe you are even kindly adding my Gravatar icon. But don't forget to give me the possibility to put some website URL of mine on top of my comments. Maybe you are not interested, but other people reading my comment might want to know more about me – I help you keep your blog alive, so in exchange you can help me. Fair is fair.

Don't expect me to register anywhere or have an OpenID. Yes, I have an OpenID, and if you kindly ask me to provide one, I might think about it. But requiring such a service, or even registration on your blog, before I can post is arrogant, and if you don't give me something really awesome in return, I just won't post comments on your site. And if I cannot discuss what you write, well, your site becomes less interesting to me.

Of course I can understand if your blog provider does not allow this. But then you might consider changing providers. At least WordPress allows comments in general. If a blogging service doesn't allow them, just don't use it.

Have thread-based comment feeds or at least mail notifications

So, you have managed to make me post a comment on your page. Congrats! Now maybe I expect some reaction from you or some other person. If you are using one of the larger blogging services like WordPress, the thread I just posted in has a comment feed, telling me about new comments there. Some blogs don't provide this, but they provide mail notifications when new comments come up. I can live with that (I gave you my mail address anyway).

But you have to give me something. Otherwise I would have to keep the tab with the comments open – and since I work on at least three distinct computers, partially with distinct browsers, I will certainly not follow those comments for long.

And if I can't follow the reactions to my comment, I will think twice before posting one at all.

Don't be professional

Unless you are a real journalist who has already worked for newspapers or plans to do something like a newspaper, or you host a science blog, don't be professional. I am sick of all this „professional blogging“ stuff. For me, a weblog must not be professional, except maybe when it is about science – if it is professional, it becomes an online newspaper, but then it should be stated as such and compete with others of its kind. In blogs, I want to read the opinions of many unprofessional people.


Well, that's what you should do if you want me to read your blog. If you are a famous blogger, you might as well ignore me, because you have so many other followers. But surprisingly, most famous bloggers meet my requirements – a coincidence?

I like private blogs. I like scientific blogs. I like small blogs that don't post much more than once a month, as well as bloggers who write five articles a day. I wouldn't read your blog because it is special; I would read it because it is one of many.

Always remember that you are unique – just like everybody else.

Personal Notes on LaTeX

Fri, 07 May 2010 02:34:08 +0000

I am making this post to finally have everything written down in a central place.

Firstly, whenever I can, I use TeXmacs. I don't like LaTeX; I never found a real introduction that goes into the implementation details of LaTeX or explains how to really write code with it – I mean, it is said to be Turing-complete, but I don't actually know how to do some quite simple things with it.

However, TeXmacs is said to be capable of virtually anything LaTeX is, and it is scriptable with Scheme – but well, sometimes it is not capable of some things, and it is not as well documented as LaTeX. If I just want to write lecture notes, I use TeXmacs, because you can typeset the stuff really fast, and it looks very good.

But for „larger“ things like my project thesis, I decided to use LaTeX. Before I knew TeXmacs (which is – unfortunately – very little known), I also used LaTeX, and before I knew LaTeX, I used the (which isn't that bad at all).

And so, here are a few notes on problems I had and their solutions:

Sometimes one wants a single page without a page number. Setting \pagestyle{empty} should work in most cases. But not in all: when you use \maketitle, it somehow changes the page style, so you have to use \thispagestyle{empty} afterwards instead. Setting the page style back to the „default“ is done with \pagestyle{plain}, which is the default, afaik.
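Assuming the standard \maketitle command, the fix amounts to this (a minimal sketch):

```latex
\maketitle
% \maketitle sets the page style of the title page itself,
% so override it immediately afterwards:
\thispagestyle{empty}
```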

Well, then there are dedication pages. What I want is a single page which is empty except for one small text in the center. Well, there are the commands \hfill and \vfill, which fill up a line horizontally and a page vertically; using them should make this possible. So I tried something like \vfill \hfill my-dedication \hfill \vfill \newpage. It didn't work. After a lot of trial and error, I finally „hacked“ around it by using empty formulas, which make LaTeX think it has to keep the space: $ $\vfill$ $ \hfill my-dedication \hfill $ $ \vfill $ $. Not perfect, but it works.
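For what it's worth, the reason the bare \vfill fails seems to be that TeX discards vertical glue at the top of a page; the empty formulas work because they give the glue something to hang on to. An empty box (\null) does the same job more cleanly – a sketch of a dedication page along these lines (the dedication text is a placeholder):

```latex
\thispagestyle{empty}
\null\vfill        % \null keeps the top glue from being discarded
\begin{center}
To my family       % placeholder dedication
\end{center}
\vfill\null
\newpage
```

\vspace*{\fill} instead of \null\vfill should also work, since the starred form is not discardable.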

For code listings I finally found the LaTeX package „listings“ and this nice tutorial (which is in German). This is yet another of these „I can do everything“ packages of which LaTeX has so many. In my opinion, a language should give you the means to define your own routines and merely assist you with that, not keep you from doing anything yourself as well as it can while providing packages for „everything“.
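A minimal listings setup, roughly along the lines of that tutorial (the language and style values here are just example choices):

```latex
\usepackage{listings}
\lstset{language=C,
        basicstyle=\ttfamily\small,
        numbers=left,
        frame=single}
% ... then, in the document body:
\begin{lstlisting}
int main(void) {
    return 0;
}
\end{lstlisting}
```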

Meanwhile, I always use UTF-8. I see no reason to use anything else for my documents, especially when I want to include special characters like Hiragana or Katakana – if only to avoid the encoding hell. Actually, I don't quite understand why anybody uses anything other than UTF-8. OK, some software needs fixed-width encodings, but those are special needs. For virtually everything the user has to deal with, UTF-8 should be best.
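With classic (pdf)LaTeX this amounts to two preamble lines; note, though, that inputenc alone does not cover CJK characters like Hiragana – for those, additional packages (e.g. CJKutf8) are still needed, and XeLaTeX/LuaLaTeX accept UTF-8 natively:

```latex
\usepackage[utf8]{inputenc}  % interpret the source file as UTF-8
\usepackage[T1]{fontenc}     % output encoding with proper accented glyphs
```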

Including graphics is another major problem which always comes up. There may be a lot of packages which are supposed to place graphics somewhere special, etc. – but none of them actually worked everywhere. Using PDF files with \includegraphics from the graphicx package has been sufficient for me so far – especially because I couldn't find anything that really worked better.
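The graphicx usage that has been sufficient for me looks roughly like this (file name and caption are placeholders):

```latex
\usepackage{graphicx}
% ... in the document body:
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.8\textwidth]{diagram.pdf}
  \caption{A placeholder caption.}
\end{figure}
```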

Then line breaks. If I have a large formula, a large word, or a large \tt form, LaTeX either runs over the page margin or gives up completely. I have used \emergencystretch=10000pt, which sort of solved the problem (that is, it stretched some lines pretty hard, but I didn't mind), but it created widows and orphans (it seems to undermine the prevention mechanisms somehow). OK, it is a hard choice what to do then. But the default answer I found was „just do it by hand“, and seriously, that cannot be a solution. Especially since the solution seems clear to me: use the normal algorithm where you can, but if a line would become too empty when stretched, simply don't justify that line – use \flushleft for it. In my opinion, that sounds like the only thing one really can do about it – even if I did it by hand, I would do it that way. But I couldn't find any pre-defined package or instruction implementing this behaviour. So what I basically did was use \flushleft everywhere. It doesn't look that „pretty“, but it also doesn't look that „bad“ – at least it looks consistent.
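The closest built-in compromise I know of is \sloppy (or the sloppypar environment), which permits much more inter-word stretch before a line overflows, combined with a moderate \emergencystretch instead of a huge one – a sketch:

```latex
\emergencystretch=3em  % allow a final extra pass with limited stretch
\begin{sloppypar}
  A paragraph containing \texttt{unbreakable\_identifiers}
  or long inline formulas ...
\end{sloppypar}
```

This still justifies lines where it can and only degrades the problematic ones, which is at least in the spirit of the per-line fallback described above.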