Randomly Found Software: Planarity

Sat, 01 May 2010 13:08:35 +0200

Planarity (via) is a nice game in which you get a graph with intersecting edges and have to move its vertices so that the edges don't intersect anymore. This sounds easy, but it actually gets complicated in the higher levels. Well, it's a simple game, and as soon as you have a little practice you will be very fast and will, of course, get bored. But it's a nice pastime anyway.

It's definitely worth trying this game out!


A nice connection between Russell’s paradox and Gödel’s incompleteness theorems

Wed, 28 Apr 2010 17:05:16 +0200

Russell’s paradox states that there must not be a set y := {x | x∉x}, because then y∈y → y∉y and y∉y → y∈y.

In my younger days, when I had less knowledge but more time and more fun increasing it, there was a period when I was looking for paradoxes inside common theories. I simply didn't want to believe in the consistency of the theories we have. I was desperately trying to get hold of articles by Eduard Wette and liked hearing about mistakes mathematicians had made in the past. Well, this phase didn't last too long, and soon I realized that searching for paradoxes in common theories is not useful anyway: if they really aren't consistent, either we will never know (and thus don't have to care) or we will notice someday and then adapt our theories. Anyway, one day during that time I was somehow thinking about Russell's paradox and what would happen to it if you did formal set theory inside number theory. Thinking deeper about it, I got a strange result which at first I didn't quite understand.

First, let's not be too formal – so that you can understand my confusion at first sight.

Let's consider ℕ, the standard model of Peano arithmetic. As Gödel did, you can encode propositions about natural numbers as natural numbers themselves. Let's denote by [P] the natural number encoding the proposition P. In particular, you can encode every proposition with one free variable as a number, and a proposition with one free variable is something similar to a set of natural numbers. So let's denote by n∈[P] the fact that P(n) holds (and define it to be false if the second argument doesn't encode a proposition with exactly one free variable). Now, ∈ is a binary relation between natural numbers which can be expressed inside arithmetic, and so is its negation, which we denote by ∉. Then the proposition n∉n has exactly one free variable and can also be encoded as a natural number, say m := [n∉n]. Now assume m∈m. Then, by definition of m, we have m∉m. And vice versa. Sounds like a contradiction.

OK, well, something obviously went wrong. Let's get a little more formal.

The first flaw is that we may be able to denote every proposition about natural numbers, but not the question whether it is true, because we would have to give a finitely describable relation that tells us when a given proposition is true. Thus, let's redefine our relation: n∈[P] now says "there is a formal proof of P(n)", in signs ⊢P(n) – it is known that this relation can be expressed in arithmetic. Then again consider our m := [n∉n]. It holds of exactly those [P] for which P([P]) is not provable, i.e. ⊬P([P]). Again we may ask whether m∈m. This time that means ⊢(m∉m). That is, if m∈m, then m∉m is provable and hence – assuming our proof system only proves things that are true in ℕ – also true, which contradicts m∈m. Hence, m∉m must hold. At this point we had run into the contradiction above.

But this time, m∉m means ¬⊢(m∉m). That is, there is no formal proof of m∉m. So at this point we have shown that m∉m is not provable in Peano arithmetic, but that it is indeed true in ℕ. That is, we have a proposition that is true in the standard model but not provable – just like the one Gödel created.
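The whole argument, condensed into symbols (my own recap of the above, writing ⊢ for provability in PA):

\begin{align*}
n \in [P] &:\iff\; \vdash P(n), \qquad m := [\,n \notin n\,],\\
m \in m &\iff\; \vdash (m \notin m) &&\text{(definition of } m\text{)},\\
m \in m &\Rightarrow\; m \notin m &&\text{(soundness), a contradiction, hence } m \notin m,\\
m \notin m &\iff\; \nvdash (m \notin m), &&\text{so } m \notin m \text{ is true in } \mathbb{N} \text{ but not provable.}
\end{align*}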

And if you think about it, you will notice that we actually created exactly the same proposition as Gödel did, namely a proposition stating its own unprovability. But essentially, we followed the same idea as in Russell's paradox.

The main difference is that in Russell's paradox we assume some "omniscient" decider for the relation ∈, while in ℕ we also assume an "omniscient" decider for the arithmetic relations we have, but we cannot assume an "omniscient" decider for meta-propositions [P] – we must use provability for this.

To emphasize this a bit, let's not talk about sets but about deciding functions f : 'a → bool, and let's say 'a may contain every such function. That is, every function can be passed an arbitrary function as its first argument. Let's again define a relation f ∈a g stating that g(f) = True. Then we can define the function NSA(f) := True if f ∉a f, and False otherwise. Then, as before, we can ask whether NSA ∈a NSA. Of course, this will lead to the same contradictions as above. In this case, the answer is again that such a function NSA must not exist. But this time we can clearly see the connections to both situations above: on the one hand, with NSA, we denote a set of objects, as in Russell's paradox. On the other hand, we want NSA to be something which can always "tell" us the truth of a proposition, and there must not be such a thing, as in Gödel's incompleteness theorem.

Btw, NSA stands for Non-Self-Accepting; one can identify it with the class of non-self-accepting algorithms, i.e. algorithms which return False when they are passed their own code, and with a little more recursion theory, the above argument becomes the proof that NSA is not decidable.
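Here is a small Common Lisp sketch of that diagonal argument (my own illustration; the names make-nsa and self-accepts-p are made up for this post): given any candidate decider for self-acceptance, the function built from it is exactly the input the decider must get wrong.

;; MAKE-NSA turns a purported self-acceptance decider into the diagonal
;; function that refutes it. SELF-ACCEPTS-P is assumed to be a total
;; function claiming to answer "does F return true when applied to itself?".
(defun make-nsa (self-accepts-p)
  (lambda (f) (not (funcall self-accepts-p f))))

;; A toy decider that always answers NIL ("no function accepts itself"):
(let* ((toy-decider (lambda (f) (declare (ignore f)) nil))
       (nsa (make-nsa toy-decider)))
  ;; The decider claims NSA does not accept itself, yet applying NSA to
  ;; itself returns T -- so the decider is wrong about NSA. The same
  ;; construction defeats every candidate decider, which is the
  ;; undecidability argument from the text.
  (funcall nsa nsa))
;; => T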


A strange course of conversation …

Mon, 05 Apr 2010 17:10:24 +0200

In a recent chat, the topic happened to be ignoring people. My conversation partner said "ignoring works like this:". That put the conversation into a very interesting situation.

If he now wants to show me how ignoring works, he must not reply to any of my posts anymore. But not replying to any of my posts presupposes that I write something in the first place. So if I don't write anything anymore, it isn't ignoring. In that case he would have lied, because he wouldn't be showing me how ignoring works …

As soon as I say something, he must not answer me anymore. So I have no motivation to write anything further … But if I don't write anything, he has said something untrue, because then it isn't ignoring. A very strange situation!

I had actually planned to break it down formally here, but since that would require transition systems with actors, I'd rather spare myself the effort. Of course, what was meant at that moment was that statements would follow which describe ignoring in more detail. And of course, in natural language you can always go one meta-level up. And of course, natural language does not try to avoid contradictions.

Still. A nice conversational situation.


Randomly found Software: Inverse Graphing Calculator

Sun, 04 Apr 2010 02:26:03 +0200

Some pieces of software are not made to be useful, but rather to be nice. One of them is the Inverse Graphing Calculator (via). From the description on the site:

"The way the IGC works is, you type something you’d like as your curve (…). The IGC produces an *equation* which has this phrase as its graph!"

Of course, it would have been even better if it used Bézier curves instead of just straight line segments. Still, a nice little thing.


Unicode Math Entities

Tue, 09 Mar 2010 03:38:20 +0200

There are plenty of possibilities for embedding mathematical formulas into webpages. Besides JSMath and several ways to include LaTeX-generated PNGs in websites, there is MathML – which should have been preferable, but just didn't find broad usage.

I just read that there is a collection of math GIFs for mathematical symbols.

I don't really know what the purpose of the latter is. Making formulas visible inside HTML is a problem which is not properly solvable. But for small formulas, you will find a Unicode character for most of the things you want to express – just look at the many character tables. Embedding these in HTML is no harder than embedding a GIF. Of course, with Unicode, you are bound to a linear notation. But for expressing small formulas, that is more than enough.

For anything else, I would still prefer LaTeX-rendered PNGs with LaTeX alt strings – because then even a blind person should be able to read it; at least with enough effort, they can interpret the LaTeX alt strings. Of course, even for this, MathML would be the better choice. But LaTeX has spread, so why not use it for formula notation, even inside HTML. It's a compromise. It's not perfect, but it's good enough.


The five hundred and fifty-fifth article

Sun, 14 Feb 2010 01:10:59 +0200

I would hereby like to point out that this is the five hundred and fifty-fifth article on this blog.

Now, what does this ominous number, five hundred and fifty-five, tell us? What, for example, does Wikipedia have to say about it? Well, Wikipedia suggests, among other things, the year five hundred and fifty-five, followed by the telephone number five hundred and fifty-five, which in the USA seems to be reserved for fictional telephone numbers – which is the reason why those so often start with 555. There is even a list of such telephone numbers. Very important, that. I wonder whether there is a copyright on them, too? Who knows. Do we actually have something like that in Germany as well? If not, a law must be passed immediately. It simply cannot be that the space of fictional telephone numbers is a lawless space.

In any case, we in Germany have a Bundesautobahn 555, and the German Wikipedia also knows a circuit by that number. Well, for how long is of course the question, since without doubt none of this is relevant, so it will surely be deleted at some point.

From the English Wikipedia one additionally learns that five hundred and fifty-five is the natural number between five hundred and fifty-four and five hundred and fifty-six. And that it is a sphenic number, i.e. a number that can be written as the product of three distinct primes, each to the first power, in this case three, five and thirty-seven. Furthermore, it is a Harshad number, i.e. a number divisible by its digit sum, with respect to the bases 2, 10, 11, 13 and 16; the respective representations are 1000101011(2), 555(10), 465(11), 339(13), 22B(16), the digit sums thus being five and fifteen. So much for Wikipedia. More thorough is the following program, which is not exactly written efficiently, but suffices as a quick hack requiring minimal thought:

clisp -x '
;; convert a little-endian digit list in base BASE back to an integer
(defun basednum-to-int (base digits)
  (if digits
      (+ (car digits) (* base (basednum-to-int base (cdr digits))))
      0))
;; add one to a little-endian digit list in base BASE
(defun basednum-inc (base digits)
  (if digits
      (if (< (car digits) (1- base))
          (cons (1+ (car digits)) (cdr digits))
          (cons 0 (basednum-inc base (cdr digits))))
      (list 1)))
;; convert INT to base BASE by counting up from zero (slow, but simple)
(defun convert-to-base (base int &optional (sofar nil))
  (if (equal int (basednum-to-int base sofar))
      sofar
      (convert-to-base base int (basednum-inc base sofar))))
;; digit sum of INT in base BASE
(defun sum-of-digits (base int)
  (let ((ret 0))
    (dolist (i (convert-to-base base int)) (incf ret i))
    ret))
;; print every base (starting at 2) in which 555 is divisible by its digit sum
(dotimes (i 555)
  (if (zerop (mod 555 (sum-of-digits (+ 2 i) 555)))
      (format t "~d, " (+ 2 i))))'

whose output, 2, 10, 11, 13, 16, 19, 21, 23, 37, 38, 46, 55, 61, 75, 91, 109, 111, 112, 136, 149, 181, 185, 186, 223, 260, 271, 276, 277, 371, 445, 519, 541, 551, 553, should be all the bases (apart from those greater than or equal to 555, for which it holds trivially anyway) with respect to which this number is a Harshad number.

Interesting.


Software that should exist #4: Portable, verified, deduplicating filesystem

Wed, 27 Jan 2010 14:51:14 +0200

ZFS is a very interesting filesystem. I never actually used it; I have only made a few experiments with it so far – mainly because there is only sort-of experimental support on Linux, no support on Mac OS X at the moment (as far as I read, the project which aimed to provide it was frozen, but maybe zfs-fuse can be ported to MacFUSE), and also absolutely no support on Windows (at least until somebody finally writes a proper winfuse binding or ports it to Dokan).

Still, the first-choice way of accessing a ZFS pool from any OS other than Solaris seems to be to just install OpenSolaris in a VM and forward the desired block devices to it. I think this is more secure and not really slower than using FUSE; at least it is more portable at the moment.

The reason why I am interested in ZFS is that it serves a few purposes I really like. To understand why, one has to know how I generally manage my files.

Actually, for me, times of having too much free time tend to alternate with times of having almost no free time. While having no free time, I am just working with my computers. I don't explicitly manage my files – I just save them somewhere I can find them again. And sometimes I need some old file from some old archive or directory which is on an external disk, so I just copy these files (or extract the archive) to my current hard disk. And leave them there.

When I get more free time again (and less free space on my disk), I tend to "tidy up" my filesystem. But by then, I have often changed some old files. Or lost track of them. Or I simply want to set up my system from scratch because there is a lot of crap running around on it. Mostly, therefore, I just copy the whole home directory (and maybe others) onto my external disk – thinking "setting up my system is more important, I can tidy up my files later".

… Now guess what happens …

Of course, I have whole system backups from years ago, even some from the times when I used Windows. And sometimes I have system backups of systems which contain copies of system backups. Sorting them would take a lot of time. Sometimes I dig through the old files like through an old photo album. I don't want to change these files. I don't want to delete these files. And actually, I am much too lazy to sort them.

So of course, I need more and more space. That by itself is not a problem. But since so many files have duplicates, the need for space grows much faster than the amount of genuinely new data. Well, there are tools like fdupes. But fdupes takes a long time to index the files, and when I change a file afterwards (accidentally, etc.), this affects all the other linked copies. And fdupes only works on systems with symlinks or hardlinks. And fdupes cannot shrink files which are only partially identical.

On the other hand, there are a lot of well-engineered backup tools like rsync with a lot of additional features, and in every production environment I would recommend a strict backup policy based on them anyway. But at home, I have a lot of old computers running – sometimes just for experiments. I have no production system. I just have a creative chaos – and I actually want to keep this creative chaos. In the age of desktop virtualisation, when it is no problem to run three operating systems on one single CPU core at once, in the age of ramdisks, in the age of WebGL, I simply don't want to manage my files manually when the filesystem could just deduplicate equal blocks, so that I could have hundreds of copies of the same system snapshot without really having to waste a lot of space.
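Just to make precise what I mean by deduplicating equal blocks, here is a toy sketch (my own illustration, nothing ZFS-specific; dedup-file is a made-up name): each distinct block is stored only once, and a file becomes a list of references into the block store.

;; DEDUP-FILE: BLOCKS is a list of byte vectors, STORE maps block
;; contents to a numeric block id (EQUALP compares vectors element-wise).
;; Returns the file as a list of block ids.
(defun dedup-file (blocks store)
  (mapcar (lambda (b)
            (or (gethash b store)
                (setf (gethash b store) (hash-table-count store))))
          blocks))

;; Two identical "snapshots" end up sharing all their blocks:
(let ((store (make-hash-table :test #'equalp))
      (snapshot (list #(1 2 3) #(4 5 6) #(1 2 3))))
  (list (dedup-file snapshot store)
        (dedup-file (copy-list snapshot) store)
        (hash-table-count store)))
;; => ((0 1 0) (0 1 0) 2) -- six block references, but only two stored blocks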

And of course, I want to be able to just add another external disk to my pool, so I can save files on both disks without having to copy them, but – if possible – have everything on both disks when they are attached (or at least be able to just remove one of the disks while all the others stay attached). As far as I know, this can be done with ZFS. And a lot of other weird stuff. The only problem is that there appears to be no good GUI for it, and no good desktop integration either. And – well – it is only really supported by Solaris. The FUSE modules will first have to prove that they are really stable.

So – well, ZFS seems to fit my "wishes"; one would just have to port it to Mac OS X and Windows (which is surely a lot of work, but shouldn't be too hard either). But ZFS is a complicated system. And complicated systems can have a lot of complicated bugs. And the filesystem is the thing everything is saved on. That is, the whole computer can be damaged – as long as your filesystem is OK, you won't lose your data. On the other hand, a damaged filesystem can make it next to impossible (especially when you have encrypted it, for example) to restore any of your data, even though the rest of your computer (including your hard drive, which just did what the buggy kernel module told it) works perfectly well.

Modern filesystems duplicate the parts needed to restore data on the disk and have policies which make it less likely that data gets damaged. This is a good thing, but obviously it doesn't help against bugs in your filesystem driver. What would help – at least to a certain degree – is a formal verification of the source code. I can think of several approaches to achieve this.

The easiest way should be to ignore the hardware problems of data integrity, and thus to assume that the disk one writes to always behaves correctly. Then the main thing one would want to have is a pair of functions read : name → blob → blob and write : name → blob → blob → blob, such that (write a b (write c d e)) = (write c d (write a b e)) whenever a≠c, i.e. it doesn't matter in which order different files are written; (read a (write a b c)) = b, i.e. reading a file that was written before returns the value that was written; and (write a b (write a d e)) = (write a b e), i.e. a file's content is the last content written into it. Maybe there should also be some defined behaviour for when a file doesn't exist – i.e. instead of a blob, use an element of boole×blob, encoding whether the read was successful.
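To make this concrete, here is a tiny executable model of these axioms (my own sketch; fs-write and fs-read are made-up names, and the filesystem state is an association list rather than an abstract blob):

;; The "disk" is an association list from names to blobs.
(defun fs-write (name blob fs)
  ;; Writing simply shadows any earlier entry for NAME.
  (cons (cons name blob) fs))

(defun fs-read (name fs)
  ;; Returns (T . blob) on success and (NIL . NIL) for a missing file,
  ;; mirroring the boole×blob idea from the text.
  (let ((entry (assoc name fs :test #'equal)))
    (if entry (cons t (cdr entry)) (cons nil nil))))

;; The three laws, checked on one concrete instance:
(let ((fs nil))
  (list
   ;; read-after-write: (read a (write a b fs)) = b
   (equal (fs-read "a" (fs-write "a" 1 fs)) '(t . 1))
   ;; last write wins: (write a b (write a d fs)) reads like (write a b fs)
   (equal (fs-read "a" (fs-write "a" 2 (fs-write "a" 1 fs)))
          (fs-read "a" (fs-write "a" 2 fs)))
   ;; writes to different names commute (as observed through read)
   (equal (fs-read "a" (fs-write "b" 2 (fs-write "a" 1 fs)))
          (fs-read "a" (fs-write "a" 1 (fs-write "b" 2 fs))))))
;; => (T T T)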

Of course, we may also want to verify that if some hardware fails, the probability of the data still being read correctly is maximal – or at least doesn't fall below some given bound. This seems a lot more complicated to axiomatize, I think. While above we could work with an abstract "blob", here you will probably need to specify more exactly what a blob is, that is, define blob as ptr→byte (whatever ptr and byte are exactly), and then perhaps define an "insecure blob" iblob = ptr→byte×probability. You would have to specify the probability of reading a byte correctly for your disk – a task which must be left to the engineers building the disks and buses – and then, for example in a RAID system, you could prove that your way of reading and checksumming the data maximizes the probability of getting correct data. On the other hand, I assume it is comparatively complicated to calculate this probability. For example, when you save a block of data with a checksum twice on the same disk, the probability that both copies get corrupted by a damaged sector may be smaller than the probability that one of them gets corrupted (but it need not be, since they could be written to physically nearby places, so that the same source of corruption affects both), while the probability that the data gets lost in a head crash does not decrease at all, since both copies are affected by it – whereas the probability that the same data saved on two different disks is lost should be smaller.
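A back-of-the-envelope version of that reasoning (my own illustration, assuming sector errors hit each copy independently with probability p and the whole disk fails with probability q):

\[
P(\text{both copies on one disk lost}) \approx p^2 + q,
\qquad
P(\text{one copy on each of two disks lost}) \approx (p+q)^2,
\]

so duplicating within one disk helps against sector errors but not against losing the whole disk, while a second disk reduces both kinds of risk.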

After axiomatizing all of that, one of course has to implement a filesystem that has all those features like deduplication, snapshotting, adding additional disks, etc. – and verify it.

As far as I can see, this would be a lot of work. But on the other hand, filesystems are nothing new. There should be a lot of experience with data integrity and backups. With all this experience, shouldn't it be possible to produce such a thing? I mean, it's the filesystem – sort of the most essential thing. If it fails, everything else fails with it.


Löb’s Theorem

Wed, 09 Dec 2009 02:39:36 +0200

A lesser-known elementary theorem in basic proof theory is Löb's theorem. Even though I know it and know the basic idea of a proof for it, I cannot remember where I know it from; in the logic lectures I took it wasn't mentioned, iirc, nor can I remember it from the introductory books I have read. And actually, I never needed it for anything so far, so I forgot about it.

Löb's theorem basically states: if PA ⊢ (Prov([A]) → A), then PA ⊢ A – where Prov([A]) is the arithmetized statement "A is provable in PA".

Well, in the comments of this blog post I found a nice illustration of the proof.
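For reference, the usual proof sketch (my own summary, not the illustration from that post) goes like this, writing □X for "X is provable in PA" and using the standard derivability conditions:

\begin{align*}
&\text{Assume } \mathrm{PA} \vdash \Box A \to A \text{ and, by the diagonal lemma, pick } \psi \text{ with } \mathrm{PA} \vdash \psi \leftrightarrow (\Box\psi \to A).\\
&\mathrm{PA} \vdash \psi \to (\Box\psi \to A)\\
&\mathrm{PA} \vdash \Box\psi \to \Box(\Box\psi \to A) &&\text{(necessitation and distribution)}\\
&\mathrm{PA} \vdash \Box\psi \to (\Box\Box\psi \to \Box A) &&\text{(distribution)}\\
&\mathrm{PA} \vdash \Box\psi \to \Box\Box\psi &&\text{(derivability condition)}\\
&\mathrm{PA} \vdash \Box\psi \to \Box A, \text{ hence } \mathrm{PA} \vdash \Box\psi \to A &&\text{(using the assumption)}\\
&\mathrm{PA} \vdash \psi \text{ (fixed point)}, \quad \mathrm{PA} \vdash \Box\psi \text{ (necessitation)}, \quad \mathrm{PA} \vdash A.
\end{align*}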

Which reminds me of an old xkcd comic.


The most essential thing mathematics has taught me for real life

Sat, 21 Nov 2009 05:03:08 +0200

When discussing with non-mathematicians why I want to become a mathematician, what a mathematician's work at a university is, and what it is good for, I often answer with the counter-question of what art is good for, because usually there are two kinds of people asking such questions: those who like at least some kind of art, be it just folk music or modern films, and the pragmatic people who believe that investing resources into research that has no immediate practical application means wasting them, most of whom are die-hard capitalists or die-hard communists – the kind of people with whom I usually don't try to discuss such things anyway.

Some people say that art is something good because many people like it, and then mostly admit that mathematics can also be good because many people consider it beautiful. This is my favourite way for that type of discussion to go. Some discussions lead into a meta-discussion about the meaning of life in general. And well, some people argue that art raises creativity and helps you get a different point of view on some things, and may help you to "find yourself".

There is no doubt about that, and most artists insist that they want to say something with their art, while mathematicians usually don't want to do that in the first place. Mathematicians just want to produce knowledge which they consider interesting and beautiful. Still, some theories in mathematics can be applied to philosophy, especially logic – which is one reason why mathematics is important even when it has no direct practical application. Another thing is that the strictly formal kind of thinking in mathematics taught me a lot of real-life facts which I consider essential for my personality.

So, recently, I was asked for an example: which fact do I consider the most essential one that mathematics has taught me?

I think the most essential fact mathematics has taught me is that most problems can be solved by just phrasing them properly. In fact, most of them will vanish completely. To make this clear, I'll give some examples.

First, let's look at a mathematical example. It is rather hard to find one which can be understood by non-mathematicians. Something that new maths students often seem to be confused about is the set-theoretical definition of numbers. In general, "what is a number?" is a question which is often asked, as is "how can you be sure that the axioms number theory is based on are really true?". The answer is very simple, but in fact not very satisfying: numbers are defined to be the elements of an arithmetic, and an arithmetic is defined to satisfy my axioms. That is, these axioms are not assumptions about numbers that can be refuted by physics; these axioms are the definition of what numbers are. They may be similar to our intuition of numbers, and in fact they are motivated by our intuition of numbers. But should our intuition of numbers at some point encounter something that is not described by these axioms, all the mathematical theorems derived from these axioms will still be correct – they just won't necessarily describe our intuition of "numbers" anymore. That is, what a mathematician calls a "natural number" is something other than what an average person would call a natural number. For most mathematicians it is a finite ordinal (I'll leave this concept unexplained here), for some mathematicians it is an object of some arithmetic, but that is no problem, since both concepts are isomorphic, i.e. equal up to renaming a few things. In our intuition it is "the thingies I can count with". It may not be a coincidence that both things have the same name, but still, they are something different.

Now, let's turn to some real-life examples. I have heard a lot of stuff about so-called "UFOs". And in many discussions, people argue about whether such "UFOs" exist at all, showing and arguing about evidence like photographs of them, while forgetting what a UFO is. UFO is an acronym for Unidentified Flying Object, or sometimes Unknown Flying Object. And actually, I wouldn't be surprised if there are videos of flying objects which cannot be uniquely identified. So yes, there probably are UFOs, because there is simply a lot of stuff flying around out there, and since there is a lot of space to fly around in out there, we cannot know everything about it. What's the big deal? The real problem is that "UFO" has become synonymous with "object from outer space built by alien life forms". And of course, that is a question which is more difficult to answer. As far as I know, there is no evidence for the existence of alien life forms, but there is a high probability that some form of life exists out there – which does not imply that it is similar to us in any way or builds apparatus to fly over and visit us. In particular, considering UFOs evidence for alien life forms of that kind would be like considering the loss of a sock evidence for sock-eating gnomes living in tumble driers.

A similar thing is discussions about the existence of "supernatural" phenomena. It is quite clear that "supernatural" means that something is above nature. But before we can even talk about that, we must make clear what we mean by "nature". Actually, I mostly use the term "nature" to distinguish between things that human civilization has produced and things whose existence is mostly independent of the existence of humans. In that sense, even the computer I am typing this on at this moment would be supernatural. Another meaning of the word "nature" can be the sum of everything which exists. But in that case, the existence of supernatural phenomena is absurd, because since anything that exists belongs to nature, everything is natural. I have also heard some people call a thing "natural" when it can be experienced by humans in some way. Actually, that is my definition of "existence" – something exists when there is some way to experience it – because if there is something I cannot experience in any way, it cannot influence my life in any way, and I cannot tell anything about it, so why should I consider its existence? And especially, why should I argue about it? Anyway, since the term "phenomenon" implies that there must be something that can be experienced, here too the answer is clearly "no" – as soon as it can be experienced, as soon as there is any evidence of it, it wouldn't be supernatural anymore. Wikipedia gives another definition: the term supernatural pertains to an order of existence beyond the scientifically visible universe. At least this is a little clearer, even though we must define what "scientifically visible" means. There are almost certainly phenomena which cannot be seen with current scientific methods. Science is not a state, science is a process. So in that sense, there probably are supernatural phenomena – which will become natural as soon as they can be made visible. But I think talking about the scientifically visible universe in this context means talking about things that can be scientifically detected at all, which means a supernatural thing is a thing which cannot be detected scientifically at any time in the future. But what does that mean, detecting something scientifically? If humans can experience it, we should mostly be able to detect it, at least through some form of EEG. If humans tell us that they have experienced something, then we can test whether they are lying or believe what they say. Then we can look at their overall health, test them for mental illnesses, etc., and as soon as we don't find anything, we have detected something which we cannot explain yet – but we have detected it.

Similar to that is the question of whether there are miracles. It depends on what you consider a miracle to be.

So, back to earth again – well, there are also simpler examples. Let's, for example, talk about a somewhat recent topic, gay marriage. What does "marriage" mean? In past centuries, as far as I know, "marriage" referred to the sacrament of marriage in the Catholic Church, and later in Christian churches in general. It was clearly seen as something God likes and wants us to do. And clearly, the Christians of those days considered homosexuality a sin, and in that sense, there is no point to gay marriage. Today, we basically have two kinds of marriage, the civil one and the religious one. Since we are secularized, the latter has almost no legal meaning as far as I know. There is a way to combine both kinds of marriage when the religious one comes from a religion to which a large part of the population belongs, but this is rather pragmatic – many people want it, so they can have it – and it makes no difference before the law. Religious marriage has a religious meaning – the state cannot force any religion to marry people, nor can it forbid it. So, if somebody wants this kind of gay marriage, he just has to find a religion which marries gay couples. So what about civil marriage? What is civil marriage about at all? Civil marriage gives you some tax privileges, because the state wants to encourage people to get married and found a family. So we must make clear why the state wants to encourage marriage at all – and I think that is something which nobody has really done so far. One thing is the children which are usually produced by married couples – same-sex couples won't produce children, so if this were the only reason for the state to encourage marriage, there would be no point in having the same thing for gay couples. Another thing is that spouses can be forced to pay social contributions, even after a divorce, which is – in general – something desirable for the state and for both partners. But, honestly, if that were what gay couples are concerned about, there would be no point in marriage, or in living together at all, when two people don't trust each other that far. And civil marriage gives you the right to get information about your husband or wife in case of emergency – but you can just as well authorize any other person you trust. The only thing that could really make a difference is, as far as I can see at the moment, the question of whether gay couples should be allowed to adopt children. But that is a completely different question – there can be no "right" to adopt children, because the welfare of the children is always more important than the happiness of potential parents, so it boils down to the question of whether children adopted by same-sex couples have any disadvantages compared to other children, and if so, whether these are at least smaller than the disadvantages of growing up in a children's home – that is a complex topic which unfortunately cannot be solved by phrasing it out, and of course there is a need to discuss it. But this has nothing more to do with marriage than that, of course, if a couple adopts a child, there should be some formal obstacles ensuring that the couple is really a couple willing to share their life together at least as long as the child is growing up. I don't think that such a formal obstacle is what couples want in the first place. So in the end, civil marriage is worth nothing more than taxes and social contributions. On the one hand, then, I wonder why same-sex couples want it so desperately; on the other hand, I wonder what the problem with it is for some politicians.

Yet another example is connected to Intelligent Design. Well, there are people believing that life has evolved entirely through evolution, and others who believe in the Christian theology of creation (and of course there are a lot of others, but let's name the most important ones). Of course, in the end, it's a question of belief. But somehow people don't want to accept that, and fundamentalists try to find evidence for their belief and flaws in the scientific theory of evolution. Of course, there is no evidence for a god so far, which is why – by the Occam principle – science does not act on the assumption of its existence. That's what science does – in the end, it's a principle which has proved itself, looking at the amazing technical achievements we have made through it, but in the end it is nothing more. So, if somebody wants science to accept the existence of a god, he will have to give evidence for its existence, because even if it does exist, as long as there is no evidence, science will not accept it. On the other hand, nobody forces you to believe what science says. Sometimes people argue that evolution cannot form anything as complex as humans. But in fact, evolutionary systems, i.e. systems having some kind of mutation and some kind of selection, can evolve very complex systems – none of them as complex as a human, but at least there is no reason why it shouldn't be possible to evolve arbitrarily complex systems. That is, maybe life didn't arise through evolution, but at least in theory it could have evolved that way, and since the evolution of life forms can be observed on some islands where animals have been separated – that is, since evolution of life is taking place right now, at least in some places – it is plausible to assume that it took place before. Same here: find evidence that evolutionary systems have an upper bound of complexity, and science will accept it and adapt its theories – but as long as you have no evidence, well, there is no reason to believe that evolution cannot evolve arbitrarily complex systems. Still, it boils down to one simple question: if you are not a scientist, why do you want to change the scientists' point of view, rather than considering science a tool which has brought us a lot of benefits, even though it cannot always be right?

Finally, a nice example is the discussion of whether humans have a free will or are deterministic. For some people, it is the question of whether a human is just a machine. For me, this is a problem that can be solved by phrasing it out. The first question we have to ask is what we mean by a "free will". Basically, it means that a human can behave however he wants. If he wants to be good, he can be good. If he wants to be bad, he can be bad. Of course, looking at several mental illnesses, this concept has some flaws, but of course we only talk about "healthy" people in the sense that they act as they are supposed to act. Another thing that is often gotten wrong is that people don't distinguish between determinism and predictability. Just because something is deterministic, it need not be predictable – the calculations for a prediction could be too complex to carry out faster than the system itself runs, to give just one example. One interpretation I have heard is that parts of a human are "outside" the universe we experience and thus are not subject to the universal laws, and therefore have a free will. The question then is: are these parts "outside" the universe subject to any laws? If not, they are just random; that is, humans have an integrated randomizer and thus act randomly – maybe not uniformly distributed, but still randomly. In that case, they wouldn't have anything which I would consider a "free will" – acting randomly has nothing to do with freedom. If these parts outside the universe are subject to laws, then they are deterministic, and thus humans would be deterministic – just because something is not subject to the universal laws we experience, this doesn't mean that it cannot be deterministic. And if humans have no parts outside the universe we experience, and are therefore subject to the universal laws, the question boils down to whether the universe is determined – which is the same as asking whether our universe obeys laws and has no random components. If it has random components which influence our decisions, then we would act randomly, as in the case above. If not, we are deterministic. The reader may wonder why I am saying that this problem can be solved by phrasing it out while not giving a real solution. Well, this time the solution lies in the question itself. The question makes an implicit assumption about a connection between indeterminism and free will. The flaw lies in what we want that "free will" to be: we don't want it to be something that is restricted by laws we can understand, we don't want it to be something that is restricted by laws we don't understand, but we also don't want it to be random. But there is nothing else: either something is governed by laws, or it is not, and then it is random. In fact, free will has nothing to do with that. What I think people really mean by free will is that a human as such can make a decision to change his behaviour according to his inputs, but he is not stateless; that is, he does not only depend on what he receives, but also on what happens inside his mind.

So. I hope you have seen that it is sometimes good to phrase things out in detail. It often boils down to things that are much less problematic to solve. Not always, of course. But the remaining problems are at least real problems, rather than just problems of phrasing.


Misconceptions about quantum computers

Wed, 07 Oct 2009 16:55:19 +0200

The things quantum computers are supposed to be capable of. I just read at New Scientist,

every additional qubit doubles the computing power.

in other words, every additional qubit supposedly doubles the computing power of a quantum computer, meaning quantum computers would be EXPTIME-powerful. Just as a reminder: NP ⊆ PSPACE ⊆ EXPTIME, and both inclusions are believed to be proper. Yet quantum computers are (presumably) not even able to handle NP-hard problems efficiently, which many people nevertheless keep believing. The complexity class of quantum computers is BQP; with that you can, just about, factor numbers quickly and search an unsorted list in roughly the square root of the number of steps a classical computer needs. Not that this is nothing, but it is far less impressive than many people keep claiming.
And about the "presumably" above: if P=NP, then quantum computers can handle NP-hard problems efficiently, but so can ordinary ones, so with quantum computers you would have gained nothing at all. And even then they would still not be EXPTIME-powerful.

Now, I still find it understandable when somebody who knows nothing, or only a little, about the subject mixes this up. But somebody who develops these things? Or a science magazine? I'd rather not know what else they don't know and do anyway. Or do they write it wrong on purpose?