Why I love Functional Analysis

I really loved last year's functional analysis course, so I decided to attend its continuation this year. Well, actually I don't attend it very often, because it's too early in the morning. But at least I am doing the exercises.

And I really love it! It looks so hard, but in fact it mostly isn't as hard as it looks. That's what I love about doing functional analysis: what you do looks very complicated, but mostly it's just applying a few major theorems that resemble things you already know from linear algebra. Of course, there are also pretty hard problems that I cannot solve, or can only solve with a huge amount of time. But mostly those apply to finite-dimensional vector spaces too, which is interesting to me – it unites the finite and the infinite.

In fact, as far as I can see, functional analysis is the part of mathematics where you write down numbers when talking about vectors, and actually mean sequences of functions from sequences of sequences into sequences of sequences of sequences.

But – compared to what I have seen in higher algebra or topology so far – in functional analysis this really works. Just write something down, forget what it means, work with it, and get something new out of it. That's the spirit of logic – and I like logic, which may be the reason why I like functional analysis.

To give you an example, here is a little excerpt from one of my solutions. You will notice that it looks very complicated. It looks like a heap of mud. But it is as simple as doing linear algebra:

5.2. $$\sup_{\varphi \neq 0} \frac{\| \Delta \varphi \|}{\| \varphi \|} = \sup_{\varphi \neq 0} \frac{\bigl\| 2 d \, \varphi - \sum_{i = 1}^d L_i \varphi - \sum_{j = 1}^d R_j \varphi \bigr\|}{\| \varphi \|} = \sup_{\| \varphi \| = 1} \Bigl\| 2 d \, \varphi - \sum_{i = 1}^d L_i \varphi - \sum_{j = 1}^d R_j \varphi \Bigr\|$$
$$\leqslant \sup_{\| \varphi \| = 1} \| 2 d \, \varphi \| + \sup_{\| \varphi \| = 1} \Bigl\| \sum_{i = 1}^d L_i \varphi \Bigr\| + \sup_{\| \varphi \| = 1} \Bigl\| \sum_{j = 1}^d R_j \varphi \Bigr\| \leqslant 2 d + d + d = 4 d.$$
For the equality, consider
$$\varphi_n (x_1, \ldots, x_d) = \begin{cases} 0 & \text{if } x_i > n \text{ or } x_i < 0 \text{ for some } i,\\ 1 & \text{if } 0 \leqslant x_i \leqslant n \text{ for all } i \text{ and } \sum_{j = 1}^d x_j \equiv 0 \bmod 2,\\ -1 & \text{if } 0 \leqslant x_i \leqslant n \text{ for all } i \text{ and } \sum_{j = 1}^d x_j \equiv 1 \bmod 2. \end{cases}$$
Since $\varphi_n$ takes the values $\pm 1$ exactly on the $(n+1)^d$ points of $\{0, \ldots, n\}^d$, we get $\| \varphi_n \|^2 = (n+1)^d$. Furthermore, whenever $1 \leqslant x_i \leqslant n - 1$ for all $i$, all $2d$ neighbours carry the sign opposite to $\varphi_n(x_1, \ldots, x_d)$, so
$$| \Delta \varphi_n (x_1, \ldots, x_d) | = \Bigl| 2 d \, \varphi_n (x_1, \ldots, x_d) - \sum_{j = 1}^d \varphi_n (x_1, \ldots, x_j - 1, \ldots, x_d) - \sum_{j = 1}^d \varphi_n (x_1, \ldots, x_j + 1, \ldots, x_d) \Bigr| = | \pm 2 d - (\mp d) - (\mp d) | = 4 d.$$
Hence $\| \Delta \varphi_n \|^2 \geqslant 16 d^2 (n - 1)^d$. We get
$$\left( \frac{\| \Delta \varphi_n \|}{\| \varphi_n \|} \right)^2 \geqslant \frac{16 d^2 (n - 1)^d}{(n+1)^d}, \qquad \text{so} \qquad \frac{\| \Delta \varphi_n \|}{\| \varphi_n \|} \geqslant 4 d \sqrt{\frac{(n - 1)^d}{(n+1)^d}}.$$
Therefore $\| \Delta \| \geqslant \lim_{n \rightarrow \infty} 4 d \sqrt{\frac{(n - 1)^d}{(n+1)^d}} = 4 d$, which completes the proof that $\| \Delta \| = 4 d$.
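One can also watch the bound become sharp numerically. Here is a small sketch (my own illustration, not part of the solution above) in Python with NumPy for $d = 2$: it builds the checkerboard function $\varphi_n$ on a zero-padded grid, applies $\Delta \varphi = 2d\,\varphi - \sum_i L_i \varphi - \sum_i R_i \varphi$ via shifts, and computes the quotient $\| \Delta \varphi_n \| / \| \varphi_n \|$, which should creep up towards $4d = 8$. The helper name `ratio` and the padding trick are choices of this sketch, not of the exercise.

```python
import numpy as np

def ratio(n, d=2):
    """Rayleigh quotient ||Delta phi_n|| / ||phi_n|| for the checkerboard phi_n."""
    # phi_n lives on {0,...,n}^d; embed it in a grid with one layer of zeros
    # on each side, so shifted copies see the correct zero values outside.
    shape = (n + 3,) * d
    phi = np.zeros(shape)
    idx = np.indices((n + 1,) * d)
    sign = (-1.0) ** idx.sum(axis=0)          # +1 / -1 checkerboard pattern
    inner = tuple(slice(1, n + 2) for _ in range(d))
    phi[inner] = sign
    # Delta phi(x) = 2d phi(x) - sum_i phi(x - e_i) - sum_i phi(x + e_i).
    # np.roll wraps around, but the wrapped-in layer is all zeros, so the
    # result agrees with the true discrete Laplacian of phi_n.
    lap = 2 * d * phi.copy()
    for axis in range(d):
        lap -= np.roll(phi, 1, axis=axis)
        lap -= np.roll(phi, -1, axis=axis)
    return np.linalg.norm(lap) / np.linalg.norm(phi)

for n in (5, 20, 100):
    print(n, ratio(n))   # increases towards 4d = 8, never exceeding it
```

For $n = 100$ the quotient is already close to $8$, in line with the estimate $4d \sqrt{(n-1)^d / (n+1)^d}$ from the proof.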

It confuses anyone who doesn't know what it's about. It looks complicated, but in fact it isn't complicated to understand if you forget about the details, consider functions as vectors, and so on – as soon as you stop doing this, it gets complicated again.

