Randomly Found Software: Planarity

Sat, 01 May 2010 13:08:35 +0000

Planarity (via) is a nice game: you are given a graph with intersecting edges and have to move its vertices so that the edges don't intersect anymore. This sounds easy, but it actually gets complicated at higher levels. Well, it's a simple game, and as soon as you have a little practice, you will be very fast and will, of course, get bored. But it's a nice pastime anyway.

It's definitely worth trying this game out!

A nice connection between Russell’s paradox and Gödel’s incompleteness theorems

Wed, 28 Apr 2010 17:05:16 +0000

Russell’s paradox states that there must not be a set y := {x | x ∉ x}, because then y ∈ y → y ∉ y and y ∉ y → y ∈ y.
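
For the record, this argument is so short that it can be formalized in a few lines. Here is a minimal Lean 4 sketch, where V is an arbitrary type with a binary "membership" relation; the setup and all names are my own, not a standard library formulation:

    -- If y contained exactly the x with x ∉ x, we could derive False.
    theorem no_russell {V : Type} (mem : V → V → Prop)
        (y : V) (h : ∀ x, mem x y ↔ ¬ mem x x) : False :=
      have hy : mem y y ↔ ¬ mem y y := h y
      have hnot : ¬ mem y y := fun hm => hy.mp hm hm
      hnot (hy.mpr hnot)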

In my younger days, when I had less knowledge but more time, and more fun increasing it, there was a period when I looked for paradoxes inside common theories. I simply didn't want to believe in the consistency of the theories we have. I was desperately trying to get hold of articles by Eduard Wette and liked hearing about failures mathematicians had made in the past. Well, this period didn't last too long, and soon I realized that searching for paradoxes in common theories is not useful anyway: if they really aren't consistent, either we will never know (and thus don't have to care) or we will notice someday and then adapt our theories. Anyway, one day during this period I was somehow thinking about Russell's paradox and what would happen to it if you did formal set theory inside number theory. Thinking deeper about it, I got a strange result which at first I didn't quite understand.

First, let's not be too formal – so that you can understand my confusion at first sight.

Let's consider ℕ, the standard model of Peano arithmetic. As Gödel did, you can encode propositions about natural numbers in natural numbers themselves. Let's denote by [P] the natural number encoding the proposition P. In particular, you can encode every proposition with one free variable as a number, and a proposition with one free variable is something similar to a set of natural numbers. So let's denote by n ∈ [P] the fact that P(n) holds (and define it to be false if the second argument doesn't encode a proposition with exactly one free variable). Now, ∈ is a binary relation between natural numbers, which can be expressed inside arithmetic, and so is its negation, which we denote by ∉. Then the proposition n ∉ n has exactly one free variable and can itself be encoded as a natural number, say m := [n ∉ n]. Now assume m ∈ m. Then, by the definition of m, we have m ∉ m. And vice versa. Sounds like a contradiction.

Ok, well, something obviously went wrong. Let's get a little more formal.

The first flaw is that we may be able to denote every proposition about natural numbers, but not the question whether it is true, because we would have to give a finite relation which tells us when a given proposition is true. Thus, let's redefine our relation: let n ∈ [P] now say "there is a formal proof of P(n)", or in signs, ⊢ P(n) – it is known that this is possible. Then again consider our m := [n ∉ n]. It applies to all [P] such that ⊬ P([P]). Again, we may ask whether m ∈ m. This time that means ⊢ (m ∉ m). That is, if m ∈ m, then we can prove that m ∉ m, and since everything provable holds in the standard model, this is a contradiction. Hence, m ∉ m must hold. This is the point where, above, we had the contradiction.

But this time, m ∉ m means ¬⊢ (m ∉ m). That is, there is no formal proof of m ∉ m. So we have shown that m ∉ m is not provable in Peano arithmetic, but that it is indeed true in ℕ. That is, we have a proposition that is true in the standard model but not provable – like the one Gödel created.

And if you think about it, you will notice that we actually created exactly the same proposition as Gödel did, namely a proposition stating its own unprovability. But essentially we have followed the same idea as in Russell's paradox.

The main difference is that in Russell's paradox we assume some "omniscient" decider for the relation ∈, while in ℕ we also assume an "omniscient" decider for the arithmetic relations we have – but we cannot assume an "omniscient" decider for meta-propositions [P]; we must use provability for this.

To emphasize this a bit, let's not talk about sets but about deciding functions f : 'a → bool, where 'a may contain every such function – that is, every function can be passed an arbitrary function as its first argument. Let's again define a relation f ∈ₐ g stating that g(f) = True. Then we can define the function NSA by NSA(f) := True if f ∉ₐ f, and False otherwise. Then, as before, we can ask whether NSA ∈ₐ NSA. Of course, this leads to the same contradictions as above. In this case, the answer is again that such a function NSA must not exist. But this time we can clearly see the connections to both situations above: on the one hand, with NSA we denote a set of objects, as in Russell's paradox; on the other hand, we want NSA to be something which can always "tell" us the truth of a proposition, and there must not be such a thing, as in Gödel's incompleteness theorem.

By the way, NSA stands for Non-Self-Accepting: one can identify it with the class of non-self-accepting algorithms, i.e. algorithms which return False when they are passed their own code, and with a little more recursion theory, the above argument becomes the proof that NSA is not decidable.
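
To make the self-application concrete, here is a minimal Common Lisp sketch (accepts-self-p and nsa are my own illustrative names): the attempt to actually implement the "omniscient" decider yields a function whose self-application never terminates.

    ;; Model "sets" as deciding functions that can be passed another
    ;; such function as argument.
    (defun accepts-self-p (f)
      "Does F accept itself, i.e. does (F F) return true?"
      (funcall f f))

    (defun nsa (f)
      "Accept exactly those functions that do NOT accept themselves."
      (not (accepts-self-p f)))

    ;; (nsa #'nsa) would have to return the negation of its own result;
    ;; operationally, the call recurses forever -- the computational
    ;; face of the contradiction above.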


A strange course of conversation …

Mon, 05 Apr 2010 17:10:24 +0000

In a recent chat, the topic happened to be ignoring people. My conversation partner said: "This is how ignoring works:". That put the conversation into a very interesting situation.

If he now wants to show me how ignoring works, he must not reply to any of my posts anymore. But not replying to any of my posts presupposes that I write something in the first place. So if I stop writing, it is not ignoring – in which case he would have lied, because he would not be showing me how ignoring works …

As soon as I say something, he must not answer me anymore. So I have no motivation to write anything further … But if I write nothing, he has said something untrue, because then it is not ignoring. A very strange situation!

I had actually planned to break it down formally here, but since that would require transition systems with actors, I will spare myself the effort. Of course, what was meant at that moment was that statements would follow which describe ignoring in more detail. And of course, in natural language you can always go up one meta-level. And of course, natural language does not try to avoid contradictions.

Still. A nice conversational situation.


Randomly Found Software: Inverse Graphing Calculator

Sun, 04 Apr 2010 02:26:03 +0000

Some pieces of software are not made to be useful, but rather to be nice. One of them is the Inverse Graphing Calculator (via). From the description on the site:

"The way the IGC works is, you type something you'd like as your curve (…). The IGC produces an *equation* which has this phrase as its graph!"
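
One plausible way to build such an equation (my guess at the trick, not necessarily what the IGC actually does): a line segment between two points A and B is exactly the set of points P satisfying |P−A| + |P−B| − |A−B| = 0, by the equality case of the triangle inequality. A union of segments S₁, …, Sₙ making up the letters is then the zero set of the product of the corresponding left-hand sides, so the whole phrase becomes the graph of the single equation ∏ᵢ (|P−Aᵢ| + |P−Bᵢ| − |Aᵢ−Bᵢ|) = 0.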

Of course, it would have been even better if it used Bézier curves instead of just straight line segments. However, a nice little thing.


Unicode Math Entities

Tue, 09 Mar 2010 03:38:20 +0000

There are plenty of possibilities for embedding mathematical formulas into webpages. Besides jsMath and several ways of including LaTeX-generated PNGs in websites, there is MathML – which should have been preferable, but just didn't find broad usage.

I just read that there is a collection of math GIFs for mathematical symbols.

I don't really know what the purpose of the latter is. Making formulas visible inside HTML is a problem which is not properly solvable. But for small formulas, you will find a Unicode character for most of the things you want to express – just look at the many character tables. Embedding these in HTML is no harder than embedding a GIF. Of course, with Unicode you are bound to a linear notation, but for expressing small formulas, that is more than enough.
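
For example, a small formula like ∀ε>0 ∃δ>0 … can be put into an HTML page directly as UTF-8 text, or – if you cannot rely on the encoding – with numeric character references: &#x2200; yields ∀, &#x2203; yields ∃, &#x03B5; yields ε. No image involved at all.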

For anything else, I would still prefer LaTeX-rendered PNGs with the LaTeX source as alt string. That way, even a blind person should be able to read it – at least with enough effort, they can interpret the LaTeX alt strings. Of course, even for this, MathML would be the better choice. But LaTeX has spread, so why not use it for formula notation, even inside HTML. It's a compromise. It's not perfect, but it's good enough.
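
A hypothetical snippet of what I mean: <img src="formula.png" alt="\int_0^1 x^2 \, dx = 1/3"> – the filename and formula are made up, but the point is that the alt text carries the LaTeX source, so a screen reader can fall back to it.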


The five hundred and fifty-fifth article

Sun, 14 Feb 2010 01:10:59 +0000

I would hereby like to point out that this is the five hundred and fifty-fifth article on this blog.

Now, what does this ominous number, five hundred and fifty-five, tell us? What does Wikipedia have to say about it, for example? Well, Wikipedia suggests, among other things, the year 555, followed by the telephone number 555, which seems to be reserved for fictional telephone numbers in the USA – which is why those so often begin with 555. There is even a list of such telephone numbers. Very important, that. Is there perhaps a copyright on it, too? Who knows. Do we actually have something like that in Germany as well? If not, a law must be passed immediately. After all, the space of fictional telephone numbers cannot be allowed to remain a lawless space.

In any case, we have a Bundesautobahn 555 in Germany, and the German Wikipedia additionally knows a circuit of that name. Well, for how much longer is of course the question, since all of this is doubtless not relevant, so it will surely be deleted at some point.

From the English Wikipedia one further learns that five hundred and fifty-five is the natural number between five hundred and fifty-four and five hundred and fifty-six. And that it is a sphenic number, i.e. a number which can be written as a product of three distinct primes, each in the first power, in this case three, five and thirty-seven. Furthermore, it is a Harshad number – a number divisible by its digit sum – with respect to the bases 2, 10, 11, 13 and 16; the respective representations are 1000101011(2), 555(10), 465(11), 339(13) and 22B(16), the digit sums thus being five and fifteen. So far Wikipedia. More exhaustive is the following program, which is not exactly written efficiently, but suffices as a quick hack with minimal thinking:

    clisp -x '
    ;; value of a little-endian digit list with respect to BASE
    (defun basednum-to-int (base digits)
      (if digits
          (+ (car digits) (* base (basednum-to-int base (cdr digits))))
          0))
    ;; increment a little-endian digit list by one
    (defun basednum-inc (base digits)
      (if digits
          (if (< (car digits) (1- base))
              (cons (1+ (car digits)) (cdr digits))
              (cons 0 (basednum-inc base (cdr digits))))
          (list 1)))
    ;; convert INT to BASE by counting up from zero (slow, but simple)
    (defun convert-to-base (base int &optional (sofar nil))
      (if (equal int (basednum-to-int base sofar))
          sofar
          (convert-to-base base int (basednum-inc base sofar))))
    (defun sum-of-digits (base int)
      (let ((ret 0))
        (dolist (i (convert-to-base base int)) (incf ret i))
        ret))
    ;; print every base from 2 to 556 in which 555 is a Harshad number
    (dotimes (i 555)
      (if (zerop (mod 555 (sum-of-digits (+ 2 i) 555)))
          (format t "~d, " (+ 2 i))))'

Its output – 2, 10, 11, 13, 16, 19, 21, 23, 37, 38, 46, 55, 61, 75, 91, 109, 111, 112, 136, 149, 181, 185, 186, 223, 260, 271, 276, 277, 371, 445, 519, 541, 551, 553 – should be all the bases (apart from those greater than or equal to 555, for which the property holds trivially anyway) with respect to which this number is a Harshad number.

Interesting.


Software that should exist #4: Portable, verified, deduplicating filesystem

Wed, 27 Jan 2010 14:51:14 +0000

ZFS is a very interesting filesystem. I have never actually used it, just made a few experiments with it so far – mainly because there is only sort-of experimental support on Linux, no support on Mac OS X at the moment (as far as I have read, the project which aimed to provide it was frozen, but maybe zfs-fuse can be ported to MacFUSE), and absolutely no support on Windows (at least until somebody finally writes a proper winfuse binding or ports it to Dokan).

Still, the first-choice option for accessing a ZFS pool from any OS other than Solaris seems to be to install OpenSolaris in a VM and simply forward the desired block devices. I think this is more secure and not really slower than using FUSE; at least it is more portable at the moment.

The reason I am interested in ZFS is that it serves a few purposes I really like. To understand why, one has to know my general way of managing my files.

Actually, times of having too much free time tend to alternate with times of almost none. When I have no free time, I am just working with my computers. I don't explicitly manage my files – I just save them somewhere I can find them again. And sometimes I need some old file from some old archive or directory which is on an external disk, so I just copy the files (or extract the archive) to my current hard disk. And leave them there.

When I get more free time again (and less free space on my disk), I tend to "tidy up" my filesystem. But by then I have often changed some old files. Or lost track of them. Or I simply want to set up my system from scratch because there is a lot of crap running around on it. So mostly I just copy the whole home directory (and maybe others) onto my external disk – thinking "setting up my system is more important, I can tidy up my files later".

… Now guess what happens …

Of course, I have whole-system backups from years ago, even some from the times when I used Windows. And sometimes I have system backups of systems which themselves contain copies of system backups. Sorting them would take a lot of time. Sometimes I dig through the old files like an old photo album. I don't want to change these files. I don't want to delete these files. And actually, I am much too lazy to sort them.

So of course I need more and more space. That in itself is no problem. But since so many files have duplicates, the need for space grows much faster than the amount of actually distinct data. Well, there are tools like fdupes. But fdupes takes a long time to index the files; when I change a file afterwards (accidentally, etc.), this affects all the other copies; it works only on systems with symlinks or hardlinks; and it cannot deduplicate files which are only partially identical.

On the other hand, there are a lot of well-engineered backup tools like rsync with plenty of additional features, and in every production environment I would recommend a strict backup policy based on them anyway. But at home I have a lot of old computers running – sometimes just for experiments. I have no production system. I just have a creative chaos – and I actually want to keep this creative chaos. In the age of desktop virtualisation, when it is no problem to run three operating systems on a single CPU core at once, in the age of ramdisks, in the age of WebGL, I simply don't want to manage my files manually when the filesystem could just deduplicate equal blocks, so that I could keep hundreds of copies of the same system snapshot without really wasting much space.
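
Just to illustrate the principle, here is a toy content-addressed block store in Common Lisp – all names are my own, and SXHASH merely stands in for a real cryptographic hash (an actual filesystem would use something like SHA-256, since SXHASH collisions are perfectly possible):

    ;; Toy content-addressed store: equal blocks get the same key and
    ;; are stored only once, which is all that block-level deduplication
    ;; amounts to in principle.
    (defvar *block-store* (make-hash-table :test #'eql))

    (defun store-block (content)
      "Store CONTENT (a string, say) once; return its key."
      (let ((key (sxhash content)))   ; stand-in for a collision-free hash
        (setf (gethash key *block-store*) content)
        key))

    (defun fetch-block (key)
      (gethash key *block-store*))

    ;; Calling (store-block "same content") twice returns the same key,
    ;; and the content occupies the store only once.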

And of course I want to be able to just add another external disk to my pool, so I can save files on either disk without having to copy them around, and – if possible – have everything on both disks when both are attached (or at least be able to remove one of the disks while all the others stay usable). As far as I know, this can be done with ZFS, along with a lot of other weird stuff. The only problem is that there appears to be no good GUI for it, and no good desktop integration either. And – well – it is only really supported by Solaris. The FUSE modules will first have to prove that they are really stable.
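
(For the record, the pool-growing part is indeed a one-liner in ZFS: zpool add mypool /dev/disk2 extends an existing pool with another device, and zpool attach mypool disk1 disk2 mirrors a device onto a new one – pool and device names here are placeholders, of course.)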

So – well, ZFS seems to fit my "wishes"; one would just have to port it to Mac OS X and Windows (which is surely a lot of work, but shouldn't be too hard either). But ZFS is a complicated system, and complicated systems can have a lot of complicated bugs. And the filesystem is the thing everything is saved on. That is, the whole computer can be damaged – as long as your filesystem is ok, you won't lose your data. On the other hand, a damaged filesystem can make it practically impossible (especially when you have encrypted it, for example) to restore any of your data, even though the rest of your computer (including your hard drive, which just did what the buggy kernel module told it to) works perfectly well.

Modern filesystems duplicate the parts needed to restore the data on the disk and follow policies which make it less likely that data gets damaged. This is a good thing, but obviously it doesn't help against bugs in the filesystem driver itself. What would help – at least to a certain degree – is formal verification of the source code. I can think of several approaches to achieving this.

The easiest approach would be to ignore the hardware side of data integrity and assume that the disk one writes to is always correct. Then the main thing one wants is a pair of functions read : name → blob → blob and write : name → blob → blob → blob, where the last argument is the current filesystem state (itself a blob), such that (write a b (write c d e)) = (write c d (write a b e)) whenever a ≠ c, i.e. it doesn't matter in which order different files are written; (read a (write a b c)) = b, i.e. reading a file returns the value that was last written to it; and (write a b (write a d e)) = (write a b e), i.e. a file's content is the last content written into it. Maybe there should also be some defined behaviour for when a file doesn't exist – i.e. instead of a blob, read could return an object from bool × blob, encoding whether the read was successful.
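
A toy model of these axioms in Common Lisp, under my simplifying assumption that the filesystem state is just an association list from names to contents (fs-write and fs-read are hypothetical names, not any real API):

    ;; The whole "filesystem blob" is modelled as an association list.
    (defun fs-write (name content state)
      "Return a new state in which NAME maps to CONTENT."
      (acons name content
             (remove name state :key #'car :test #'equal)))

    (defun fs-read (name state)
      "Return (values content foundp) -- the bool x blob variant."
      (let ((entry (assoc name state :test #'equal)))
        (values (cdr entry) (if entry t nil))))

    ;; Spot-checking the axioms:
    ;; (fs-read "a" (fs-write "a" "b" nil))                     ; => "b", T
    ;; (fs-read "a" (fs-write "a" "b" (fs-write "a" "d" nil)))  ; => "b", T
    ;; (fs-read "a" (fs-write "x" "d" (fs-write "a" "b" nil)))  ; => "b", T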

Of course, maybe we also want to verify that if some hardware fails, the probability of the data still being read correctly is maximal – or at least doesn't fall below some given bound. This seems a lot more complicated to axiomatize, I think. While above we could work with an abstract "blob", here you will need to specify more exactly what a blob is, say, define blob as ptr → byte (whatever ptr and byte are exactly), and then perhaps define an "insecure blob" as iblob = ptr → byte × probability. You would have to specify the probability of reading a byte correctly from your disk – a task which must be left to the engineers building the disks and buses – and then, for example in a RAID system, you could prove that your way of reading and checksumming the data maximizes the probability of getting correct data. On the other hand, I assume it is comparably complicated to calculate this probability. For example, when you save a block of data with a checksum twice on the same disk, the probability that both copies get corrupted by a damaged sector may be smaller than the probability that a single copy gets corrupted (but need not be, since they could be written to physically nearby places on the disk, so that the same source of corruption affects both); the probability that the data is lost in a head crash does not decrease at all, since both copies are affected by it; while the probability that the same data, saved on two different disks, is lost really should be smaller.
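
As a back-of-the-envelope illustration of why this matters (loss-probability is a hypothetical helper, and independence is exactly the assumption that nearby sectors and head crashes violate):

    ;; Probability that ALL copies of a block are lost, assuming each
    ;; copy fails independently with probability P.
    (defun loss-probability (p copies)
      (expt p copies))

    ;; (loss-probability 1/10000 1) => 1/10000
    ;; (loss-probability 1/10000 2) => 1/100000000
    ;; Two copies on one disk are NOT independent: a head crash couples
    ;; them, so the real figure is closer to the single-copy one.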

After axiomatizing all of that, one of course still has to implement a filesystem with all the features mentioned – deduplication, snapshotting, adding additional disks, etc. – and verify it.

As far as I can see, this would be a lot of work. But on the other hand, filesystems are nothing new; there should be plenty of experience with data integrity and backups. With all this experience, shouldn't it be possible to produce such a thing? I mean, it's the filesystem – sort of the most essential thing. If it fails, everything else fails.