├── .gitignore ├── 5_Tangent_plane.png ├── 5_Tangential_vector.png ├── 8_Riem_Geometric_meaning.png ├── Gravity_Notes_grande.tex ├── LICENSE ├── README.md ├── Rn.sage ├── lecture1.tex ├── lecture10.tex ├── lecture11.tex ├── lecture12.tex ├── lecture13.tex ├── lecture14.tex ├── lecture15.tex ├── lecture18.tex ├── lecture2.tex ├── lecture22.tex ├── lecture3.tex ├── lecture4.tex ├── lecture5.tex ├── lecture6.tex ├── lecture7.tex ├── lecture8.tex ├── lecture9.tex ├── main.pdf ├── main.tex ├── pdfs └── Gravity_Notes_grande.pdf ├── tutorial1.tex ├── tutorial11.tex ├── tutorial13.tex ├── tutorial2.tex ├── tutorial4.tex ├── tutorial5.tex ├── tutorial7.tex ├── tutorial8.tex └── tutorial9.tex /.gitignore: -------------------------------------------------------------------------------- 1 | *.aux 2 | *.glo 3 | *.idx 4 | *.log 5 | *.toc 6 | *.ist 7 | *.acn 8 | *.acr 9 | *.alg 10 | *.bbl 11 | *.blg 12 | *.dvi 13 | *.glg 14 | *.gls 15 | *.ilg 16 | *.ind 17 | *.lof 18 | *.lot 19 | *.maf 20 | *.mtc 21 | *.mtc1 22 | *.out 23 | *.synctex.gz -------------------------------------------------------------------------------- /5_Tangent_plane.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lazierthanthou/Lecture_Notes_GR/c05a0ba9442a3898f0f83b84886cd467db2e8cae/5_Tangent_plane.png -------------------------------------------------------------------------------- /5_Tangential_vector.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lazierthanthou/Lecture_Notes_GR/c05a0ba9442a3898f0f83b84886cd467db2e8cae/5_Tangential_vector.png -------------------------------------------------------------------------------- /8_Riem_Geometric_meaning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lazierthanthou/Lecture_Notes_GR/c05a0ba9442a3898f0f83b84886cd467db2e8cae/8_Riem_Geometric_meaning.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Lecture Notes on General Relativity 2 | ------------- 3 | > Based on the [Central Lecture Course](https://www.youtube.com/channel/UCUHKG3S9N_QeIE2jQXd2-VQ/feed) by Dr. Frederic P. Schuller (**A thorough introduction to the theory of general relativity**) introducing the mathematical and physical foundations of the theory in 24 self-contained lectures at the International Winter School on Gravity and Light in Linz/Austriafor the WE Heraeus International Winter School of Gravity and Light, 2015 in Linz as part of the world-wide celebrations of the 100th anniversary of Einstein's theory of general relativity and the International Year of Light 2015. 4 | 5 | These lectures develop the theory from first principles and aim at an audience ranging from ambitious undergraduate students to beginning PhD students in mathematics and physics. Satellite Lectures (see other videos on this channel) by Bernard F Schutz (Gravitational Waves), Domenico Giulini (Canonical Formulation of Gravity), Marcus C Werner (Gravitational Lensing) and Valeria Pettorino (Cosmic Microwave Background) expand on the topics of this central lecture course and take students to the research frontier. 
6 | 
7 | I agree with [Ernest Yeung](https://github.com/ernestyalumni) that these lectures are "unequivocally, the best, most lucid, and well-constructed lecture series on General Relativity and Gravity". This repository has been created by forking his repository [ernestyalumni/Gravite](https://github.com/ernestyalumni/Gravite). 
8 | 
9 | ---------- 
10 | ### Why this fork 
11 | The initial purpose of this fork is the following: 
12 | * to fill in missing parts in lectures and to modify existing parts to suit me better 
13 | * to confine the output pdf only to the lectures 
14 | * to use the documentclass{article} on A4 paper 
15 | * to put each lecture/tutorial in its own file 
16 | 
17 | ---------- 
18 | 
19 | 
20 | 
-------------------------------------------------------------------------------- 
/lecture1.tex: 
-------------------------------------------------------------------------------- 
1 | \section{Topology} 
2 | \begin{framed} 
3 | \textbf{Motivation}: At the coarsest level, spacetime is a set. But a set is not enough to talk about continuity of maps, which is required for classical physics notions such as the trajectory of a particle. We do not want jumps such as a particle disappearing at some point on its trajectory and reappearing somewhere else. So we require continuity of maps. There could be many structures that allow us to talk about continuity, e.g., a distance measure. But we need to be very minimal and very economic in order not to introduce undue assumptions. So we are interested in the weakest structure that can be established on a set which allows a good definition of continuity of maps. Mathematicians know that the weakest such structure is topology. This is the reason for studying topological spaces. 
4 | \end{framed} 
5 | 
6 | \subsection{Topological Spaces} 
7 | \begin{definition} 
8 | Let $M$ be a set and $\mathcal{P}(M)$ be the power set of $M$, i.e., the set of all subsets of $M$. \\ 
9 | A set $\mathcal{O} \subseteq \mathcal{P}(M)$ is called a \textbf{topology}, if it satisfies the following: 
10 | \begin{enumerate} 
11 | \item[(i)] $\emptyset \in \mathcal{O}$, $M \in \mathcal{O}$ 
12 | \item[(ii)] $U \in \mathcal{O}$, \, $V \in \mathcal{O} \implies U \cap V \in \mathcal{O}$ 
13 | \item[(iii)] $U_{\alpha} \in \mathcal{O}$ \, $\forall \, \alpha \in \mathcal{A}$ ($\mathcal{A}$ an index set) $\implies \left( \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \right) \in \mathcal{O}$ 
14 | \end{enumerate} 
15 | \end{definition} 
16 | 
17 | \textbf{Terminology}: 
18 | \begin{enumerate} 
19 | \item the tuple $(M , \mathcal{O})$ is a \textbf{topological space}. 
20 | \item $\mathcal{U} \subseteq M$ is an \textbf{open set} if $\mathcal{U} \in \mathcal{O}$. 
21 | \item $\mathcal{U} \subseteq M$ is a \textbf{closed set} if $M \setminus \mathcal{U} \in \mathcal{O}$. 
22 | \end{enumerate} 
23 | 
24 | \begin{definition} 
25 | $(M , \mathcal{O})$, where $\mathcal{O} = \lbrace \emptyset, M\rbrace$ is called the \textbf{chaotic topology}. 
26 | \end{definition} 
27 | 
28 | \begin{definition} 
29 | $(M , \mathcal{O})$, where $\mathcal{O} = \mathcal{P}(M)$ is called the \textbf{discrete topology}. 
30 | \end{definition} 31 | 32 | \begin{definition} 33 | A \textbf{soft ball} at the point $p$ in $\R^d$ is the set 34 | \begin{equation} 35 | \displaystyle\mathcal{B}_r(p) := \left\{ (q_1, q_2, ..., q_d) \quad | \quad \sum_{i=1}^{d} (q_i - p_i)^2 < r^2 \right\} \text{ where } r \in \R^+ 36 | \end{equation} 37 | \end{definition} 38 | 39 | \begin{definition} 40 | $(\R^d, \stdtop)$ is the \textbf{standard topology}, provided that $U \in \stdtop$ iff \\ 41 | $\forall p \in U, \exists r \in \R^+: \mathcal{B}_r(p) \subseteq U$ 42 | \end{definition} 43 | 44 | \begin{proof} 45 | $\emptyset \in \stdtop$ since $\forall \, p \in \emptyset$, $\exists r \in \R^+$: $\mathcal{B}_r(p) \subseteq \emptyset$ (i.e. satisfied ``vacuously'') \\ 46 | $\R^d \in \stdtop$ since $\forall p \in \R^d$, $\exists r = 1 \in \R^+$: $\mathcal{B}_r(p) \subseteq \R^d$ 47 | 48 | Suppose $U, V \in \stdtop$. Let $p \in U \cap V \implies \exists \, r_1, r_2 \in \R^+$ s.t. $\quad \mathcal{B}_{r_1}(p) \subseteq U, \quad \mathcal{B}_{r_2}(p) \subseteq V$. \\ 49 | Let $r=\min{ \lbrace r_1, r_2 \rbrace} \implies \mathcal{B}_r(p) \subseteq U$ and $\mathcal{B}_r(p) \subseteq V \implies \mathcal{B}_r(p) \subseteq U \cap V \implies U \cap V \in \stdtop$. 50 | 51 | Suppose, $U_{\alpha} \in \stdtop, \forall \, \alpha \in \mathcal{A}$. Let $p \in \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \implies \exists \alpha \in \mathcal{A}: p \in U_{\alpha} \\ 52 | \implies \exists \, r \in \R^+ : \mathcal{B}_{r}(p) \subseteq U_{\alpha} \subseteq \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \implies \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \in \stdtop$. 53 | \end{proof} 54 | 55 | \subsection{Continuous maps} 56 | A map $f$, $f: M \to N$, connects each element of a set $M$ (domain set) to an element of a set $N$ (target set). 57 | 58 | \textbf{Terminology}: 59 | \begin{enumerate} 60 | \item If $f$ maps $m \in M$ to $n \in N$, then we may say $f(m) = n$, or $m$ maps to $n$, or $m \mapsto f(m)$ or $m \mapsto n$. 61 | \item If $V \subseteq N, \text{preim}_{f}(V) := \lbrace m \in M | f(m) \in V \rbrace$ 62 | \item If $\forall n \in N, \exists m \in M : n = f(m)$, then $f$ is \textbf{surjective}. Or, $f : M \surjmapto N$. 63 | \item If $m_1, m_2 \in M, m_1 \neq m_2 \implies f(m_1) \neq f(m_2)$, then $f$ is \textbf{injective}. Or, $f : M \injmapto N$. 64 | \end{enumerate} 65 | 66 | \begin{definition} 67 | Let $(M , \mathcal{O}_{M})$ and $(N, \mathcal{O}_{N})$ be topological spaces. A map $f: M \to N$ is called \textbf{continuous} w.r.t. $\mathcal{O}_{M}$ and $\mathcal{O}_{N}$ if $V \in \mathcal{O}_{N} \implies (\text{preim}_{f}(V)) \in \mathcal{O}_{M}$. 68 | \end{definition} 69 | 70 | \textit{\textbf{Mnemonic}: A map is continuous iff the preimages of all open sets are open sets.} 71 | 72 | \subsection{Composition of continuous maps} 73 | \begin{definition} 74 | If $f: M \to N$ and $g: N \to P$, then \\ 75 | \begin{equation*} 76 | g \after f: M \to P \text{ such that } m \mapsto (g \after f)(m) := g(f(m)) 77 | \end{equation*} 78 | \end{definition} 79 | 80 | \begin{theorem} 81 | If $f: M \to N$ is continuous w.r.t. $\mathcal{O}_{M}$ and $\mathcal{O}_{N}$ and $g: N \to P$ is continuous w.r.t. $\mathcal{O}_{N}$ and $\mathcal{O}_{P}$, then $g \after f: M \to P$ is continuous w.r.t. $\mathcal{O}_{M}$ and $\mathcal{O}_{P}$. 82 | \end{theorem} 83 | 84 | \begin{proof} 85 | Let $W \in \mathcal{O}_{P}$. 
\begin{align*} 86 | \text{preim}_{g \after f}(W) &= \lbrace m \in M | g(f(m)) \in W \rbrace &\because (g \after f)(m) = g(f(m)) \\ 87 | &= \lbrace m \in M | f(m) \in \text{preim}_{g}(W) \rbrace & \text{preim}_{g}(W) \in \mathcal{O}_{N} \because g \text{ is continuous} \\ 88 | &= \text{preim}_{f}(\text{preim}_{g}(W)) &\in \mathcal{O}_{M} \because f \text{ is continuous} \\ 89 | &\implies g \after f \text{ is continuous} 90 | \end{align*} 91 | \end{proof} 92 | 93 | \subsection{Inheriting a topology} 94 | Given a topological space $(M, \mathcal{O}_{M})$, one way of inheriting a topology from it is the subspace topology. 95 | 96 | \begin{theorem} 97 | If $(M, \mathcal{O}_{M})$ is a topological space and $S \subseteq M$, then the set $\mathcal{O}|_S \subseteq \mathcal{P}(S)$ such that $\mathcal{O}|_S := \lbrace S \cap U | U \in \mathcal{O}_{M} \rbrace$ is a topology. $\mathcal{O}|_S$ is called the \textbf{subspace topology} inherited from $\mathcal{O}_{M}$. 98 | \end{theorem} 99 | 100 | \begin{proof} 101 | \begin{enumerate} 102 | %empty set & entire set condition 103 | \item $\emptyset, S \in \mathcal{O}|_S \because \emptyset = S \cap \emptyset, S = S \cap M$. 104 | 105 | %intersection condition 106 | \item $S_1, S_2 \in \mathcal{O}|_S \implies \exists U_1, U_2 \in \mathcal{O}_{M} : S_1 = S \cap U_1, S_2 = S \cap U_2 \implies U_1 \cap U_2 \in \mathcal{O}_{M} \\ 107 | \implies S \cap (U_1 \cap U_2) \in \mathcal{O}|_S \implies (S \cap U_1) \cap (S \cap U_2) \in \mathcal{O}|_S \implies S_1 \cap S_2 \in \mathcal{O}|_S$. 108 | 109 | %union condition 110 | \item Let $\alpha \in \mathcal{A}$, where $\mathcal{A}$ is an index set. Then $S_{\alpha} \in \mathcal{O}|_S \implies \exists U_{\alpha} \in \mathcal{O}_{M} : S_{\alpha} = S \cap U_{\alpha}$. \\ 111 | Further, let $\mathcal{U} = \left( \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \right)$. Therefore, $\mathcal{U} \in \mathcal{O}_{M}$. \\ 112 | Now, $\left( \bigcup_{\alpha \in \mathcal{A}} S_{\alpha} \right) = \left( \bigcup_{\alpha \in \mathcal{A}} (S \cap U_{\alpha}) \right) = S \cap \left( \bigcup_{\alpha \in \mathcal{A}} U_{\alpha} \right) = S \cap \mathcal{U} \implies \left( \bigcup_{\alpha \in \mathcal{A}} S_{\alpha} \right) \in \mathcal{O}|_S$. 113 | \end{enumerate} 114 | \end{proof} 115 | 116 | \begin{theorem} 117 | If $(M, \mathcal{O}_{M})$ and $(N, \mathcal{O}_{N})$ are topological spaces, and $f: M \to N$ is continuous w.r.t $\mathcal{O}_{M}$ and $\mathcal{O}_{N}$, then the restriction of $f$ to $S \subseteq M, f|_S: S \to N$ s.t. $f|_S(s \in S) = f(s)$, is continuous w.r.t $\mathcal{O}|_S$ and $\mathcal{O}_{N}$. 118 | \end{theorem} 119 | 120 | \begin{proof} 121 | Let $V \in \mathcal{O}_N$. Then, $\text{preim}_{f}(V) \in \mathcal{O}_M$. \\ 122 | Now $\text{preim}_{f|_S}(V) = S \cap \text{preim}_{f}(V) \implies \text{preim}_{f|_S}(V) \in \mathcal{O}|_S \implies f|_S$ is continuous. 123 | \end{proof} 124 | -------------------------------------------------------------------------------- /lecture10.tex: -------------------------------------------------------------------------------- 1 | \section{Metric Manifolds} 2 | We establish a structure on a smooth manifold that allows one to assign vectors in each tangent space a length (and an angle between vectors in the same tangent space). From this structure, one can then define a notion of length of a curve. Then we can look at shortest curves (which will be called \textbf{geodesics}). 3 | 4 | Requiring then that the shortest curves coincide with the straight curves (w.r.t. 
$\nabla$) will result in $\nabla$ being determined by the metric structure $g$. $\nabla$, in turn determines the curvature given by $Riem$. Thus 5 | \[ 6 | g \overset{\substack{\text{straight} = \text{shortest/} \\ \text{longest/ stationary curves} \\ T =0}}{\rightsquigarrow} \nabla \rightsquigarrow Riem 7 | \] 8 | 9 | \subsection{Metrics} 10 | \begin{definition} 11 | A metric $g$ on a smooth manifold $\mfd$ is a $(0,2)$-tensor field satisfying 12 | \begin{enumerate}[(i)] 13 | \item \textbf{symmetry: }$g(X,Y) = g(Y,X) \quad \forall \, X, Y \text{ vector fields}$ 14 | \item \textbf{non-degeneracy: }the \underline{musical map} 15 | \begin{align*} 16 | \text{``flat''} \, \, \flat : \Gamma(TM) & \to \Gamma(T^*M) \\ 17 | X \mapsto \flat(X) \\ 18 | \text{ where } & \flat(X)(Y):= g(X,Y) \\ 19 | & \flat(X) \in \Gamma(T^*M) 20 | \end{align*} 21 | In thought bubble: $\flat(X) = g(X,\cdot)$ 22 | 23 | \dots is a $C^{\infty}$-isomorphism in other words, it is invertible. 24 | \end{enumerate} 25 | \end{definition} 26 | 27 | \underline{Remark}: $(\flat(X))_a$ \quad \, or \\ 28 | $X_a$ \\ 29 | $(\flat(X))_a := g_{am} X^m$ 30 | 31 | Thought bubble: $\flat^{-1} = \sharp$ 32 | 33 | $\flat^{-1}(\omega)^a := g^{am}\omega_m$ \\ 34 | $\flat^{-1}(\omega)^a := (g^{``-1''})^{am}\omega_m \implies$ not needed. (all of this is not needed) 35 | 36 | \begin{definition} 37 | The $(2,0)$-tensor field $g^{``-1''}$ with respect to a metric $g$ is the symmetric 38 | \begin{align*} 39 | g^{``-1''} : \Gamma(T^*M) \times \Gamma(T^*M) \xrightarrow{ ~ } C^{\infty}(M) \\ 40 | (\omega, \sigma) \mapsto \omega(\flat^{-1}(\sigma)) \quad \quad \, \flat^{-1}(\sigma) \in \Gamma(TM)) 41 | \end{align*} 42 | 43 | \underline{chart}: $g_{ab} = g_{ba} \quad \quad (g^{-1})^{am} g_{mb} = \delta^a_b$ 44 | \end{definition} 45 | 46 | \underline{Example}: Consider $(S^2, \mathcal{O}, \A)$ and the chart $(U,x)$ 47 | \begin{align*} 48 | \varphi \in (0,2\pi), & \quad \quad \theta \in (0,\pi) 49 | \end{align*} 50 | 51 | Define the metric \\ 52 | \[ 53 | g_{ij}(x^{-1}(\theta,\varphi)) = \left[ \begin{matrix} R^2 & 0 \\ 54 | 0 & R^2\sin^2{\theta} \end{matrix} \right]_{ij} 55 | \] 56 | $R \in \R^+$ 57 | 58 | ``the metric of the \underline{round sphere of radius $R$}'' 59 | 60 | \subsection{Signature} 61 | Linear algebra: \quad \quad \, $\begin{aligned} & A\indices{^a_{m}}v^m = \lambda v^a & \quad \quad \quad \, \left(\begin{matrix} \lambda_1 & & 0 \\ 62 | & \ddots & \\ 63 | 0 & & \lambda_n \end{matrix} \right) \\ 64 | & g_{am} v^m = \lambda \cdot v^a ? \rightsquigarrow & \quad \quad \quad \, \left( \begin{matrix} 65 | 1 & & & & & & & \\ 66 | & \ddots & & & & & & & \\ 67 | & & 1 & & & & & & \\ 68 | & & & -1 & & & & & \\ 69 | & & & & \ddots & & & & \\ 70 | & & & & & -1 & & & \\ 71 | & & & & & & 0 & & \\ 72 | & & & & & & & \ddots & \\ 73 | & & & & & & & & 0 \end{matrix} \right) 74 | \end{aligned}$ 75 | 76 | $(1,1)$ tensor has eigenvalues \\ 77 | $(0,2)$ has \underline{signature} $(p,q)$ (well-defined) 78 | 79 | $\left. \begin{aligned} 80 | (+++) \\ 81 | (++-) \\ 82 | (+--) \\ 83 | (---) \end{aligned} \right\rbrace$ $d+1$ if $p+q = \text{dim}V$ 84 | 85 | \begin{definition} A metric is called \textbf{Riemannian} if its signature is $(++ \dots +)$, and \textbf{Lorentzian} if it is $(+-\dots -)$. 86 | \end{definition} 87 | 88 | \subsection{Length of a curve} 89 | Let $\gamma$ be a smooth curve. Then we know its veloctiy $v_{\gamma,\gamma(\lambda)}$ at each $\gamma(\lambda) \in M$. 
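Recall that in a chart $(U,x)$ the velocity has components $v^i_{\gamma,\gamma(\lambda)} = (x^i \after \gamma)'(\lambda)$, so the quantity appearing in the definitions below can be computed entirely in the chart:
\[
\left(g(v_{\gamma},v_{\gamma})\right)_{\gamma(\lambda)} = g_{ij}(\gamma(\lambda)) \, (x^i \after \gamma)'(\lambda) \, (x^j \after \gamma)'(\lambda)
\]
This is the form used in the worked example further down.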
90 | 91 | \begin{definition} 92 | On a Riemannian metric manifold $(M, \mathcal{O}, \A, g)$, the \textbf{speed} of a curve at $\gamma(\lambda)$ is the number 93 | \begin{equation} 94 | \boxed{s(\lambda) = (\sqrt{g(v_{\gamma}, v_{\gamma})})_{\gamma(\lambda)}} 95 | \end{equation} 96 | \end{definition} 97 | 98 | (I feel the need for speed, then I feel the need for a metric) 99 | 100 | \underline{Aside}: $[v^a] = \frac{1}{T}$ \\ 101 | \phantom{Aside:} $[g_{ab}] = L^2 $ \\ 102 | \phantom{Aside:} $[\sqrt{g_{ab}v^av^b}] = \sqrt{ \frac{L^2}{T^2}} = \frac{L}{T}$ 103 | 104 | \begin{definition} 105 | Let $\gamma:(0,1) \to M$ a smooth curve. Then the \textbf{length of $\gamma$}, $L[\gamma] \in \R$ is the number 106 | \begin{equation} 107 | \boxed{L[\gamma] := \int_0^1 d\lambda s(\lambda) = \int_0^1 d\lambda \sqrt{ (g(v_{\gamma}, v_{\gamma}))_{\gamma(\lambda)}}} 108 | \end{equation} 109 | \end{definition} 110 | 111 | F. Schuller: ``velocity is more fundamental than speed, speed is more fundamental than length'' 112 | 113 | \textbf{Example:} Reconsider the round sphere of radius $R$. Consider its equator: 114 | \begin{align*} 115 | \theta(\lambda) := (x^1 \after \gamma)(\lambda) = \frac{\pi}{2}, & \quad \varphi(\lambda) := (x^2 \after \gamma)(\lambda) = 2\pi \lambda^3 \\ 116 | \implies \theta'(\lambda) = 0, & \quad \varphi'(\lambda) = 6\pi\lambda^2 117 | \end{align*} 118 | 119 | On the same chart $g_{ij} = \left[ \begin{matrix} R^2 & \\ 120 | & R^2 \sin^2{\theta} \end{matrix} \right]$ 121 | 122 | Do everything in this chart 123 | \begin{align*} 124 | L[\gamma] & = \int_0^1 d\lambda \sqrt{g_{ij}(x^{-1}(\theta(\lambda), \varphi(\lambda)))(x^i \after \gamma)'(\lambda)(x^j \after \gamma)'(\lambda)} \\ 125 | & = \int_0^1 d\lambda \sqrt{R^2 \cdot 0 + R^2\sin^2{(\theta(\lambda))} 36 \pi^2 \lambda^4} \\ 126 | & = 6\pi R \int_0^1 d\lambda \lambda^2 = 6\pi R [\frac{1}{3} \lambda^3]^1_0 = 2\pi R 127 | \end{align*} 128 | 129 | \begin{theorem} 130 | $\gamma : (0,1) \to M$ and $\sigma :(0,1) \to (0,1)$ smooth bijective and \underline{increasing} ``reparametrization'' \\ 131 | $L[\gamma] = L[\gamma \after \sigma]$ 132 | \end{theorem} 133 | 134 | \begin{proof} 135 | in Tutorials 136 | \end{proof} 137 | 138 | \subsection{Geodesics} 139 | \begin{definition} 140 | A curve $\gamma : (0,1) \to M$ is called a \textbf{geodesic} on a Riemannian manifold $(M, \mathcal{O}, \A, g)$ if it is a stationary curve with respect to a length functional $L$. 141 | \end{definition} 142 | 143 | Thought bubble: In classical mechanics, deform the curve a little, $\epsilon$ times this deformation, to first order, it agrees with $L[\gamma]$. 
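To make the definition precise: $\gamma$ is stationary for $L$ if, for every smooth variation $\gamma_{\epsilon}$ of $\gamma$ with $\gamma_{0} = \gamma$ and fixed endpoints,
\[
\left. \frac{d}{d\epsilon} L[\gamma_{\epsilon}] \right|_{\epsilon = 0} = 0 ,
\]
i.e., to first order in the deformation the length does not change. This is exactly the content of the Euler-Lagrange equations in the following theorem.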
144 | 145 | \begin{theorem} 146 | $\gamma$ is geodesic iff it satisfies the Euler-Lagrange equations for the Lagrangian 147 | \end{theorem} 148 | \begin{align*} 149 | \mathcal{L} : & TM \to \R \\ 150 | & X \mapsto \sqrt{g(X,X)} 151 | \end{align*} 152 | In a chart, the Euler Lagrange equations take the form: 153 | \[ 154 | \left(\cibasis[\mathcal{L}]{\dot{x}^m}\right)^{\cdot} - \cibasis[\mathcal{L}]{x^m} = 0 155 | \] 156 | F.Schuller: this is a chart dependent formulation 157 | 158 | here: 159 | \[ 160 | \mathcal{L}(\gamma^i, \dot{\gamma}^i) = \sqrt{g_{ij}(\gamma(\lambda)) \dot{\gamma}^i(\lambda) \dot{\gamma}^j(\lambda)} 161 | \] 162 | Euler-Lagrange equations: 163 | \begin{align*} 164 | \cibasis[\mathcal{L}]{\dot{\gamma}^m} = \frac{1}{\sqrt{\dots}} g_{mj}(\gamma(\lambda)) \dot{\gamma}^j(\lambda) \\ 165 | \left(\cibasis[\mathcal{L}]{\dot{\gamma}^m}\right)^{\cdot} = \left(\frac{1}{\sqrt{\dots}} \right)^{\cdot} g_{mj}(\gamma(\lambda)) \cdot \dot{\gamma}^j(\lambda) + \frac{1}{\sqrt{\dots}} \left(g_{mj}(\gamma(\lambda)) \ddot{\gamma}^j(\lambda) + \dot{\gamma}^s(\partial_s g_{mj}) \dot{\gamma}^j(\lambda) \right) 166 | \end{align*} 167 | Thought bubble: reparametrize $g(\dot{\gamma}, \dot{\gamma}) = 1$ (it's a condition on my reparametrization) 168 | 169 | By a clever choice of reparametrization $(\frac{1}{\sqrt{\dots}})^{\cdot} = 0$ 170 | \[ 171 | \cibasis[\mathcal{L}]{\gamma^m} = \frac{1}{2\sqrt{\dots}} \partial_m g_{ij}(\gamma(\lambda)) \dot{\gamma}^i(\lambda) \dot{\gamma}^j(\lambda) 172 | \] 173 | putting this together as Euler-Lagrange equations: 174 | \begin{align*} 175 | g_{mj} \ddot{\gamma}^j + \partial_s g_{mj} \dot{\gamma}^s \dot{\gamma}^j - \frac{1}{2} \partial_m g_{ij} \dot{\gamma}^i \dot{\gamma}^j = 0 \\ 176 | \ddot{\gamma^q} + (g^{-1})^{qm}(\partial_i g_{mj} - \frac{1}{2} \partial_m g_{ij}) \dot{\gamma}^i \dot{\gamma}^j = 0 && (\text{multiply on both sides }(g^{-1})^{qm}) \\ 177 | \boxed{\ddot{\gamma^q} + (g^{-1})^{qm}\frac{1}{2} (\partial_i g_{mj} + \partial_j g_{mi} - \partial_m g_{ij}) \dot{\gamma}^i \dot{\gamma}^j = 0} 178 | \end{align*} 179 | geodesic equation for $\gamma$ in a chart. 180 | \[ 181 | \boxed{(g^{-1})^{qm}\frac{1}{2} (\partial_i g_{mj} + \partial_j g_{mi} - \partial_m g_{ij} ) =: \ccf{q}{ij}(\gamma(\lambda)) 182 | } 183 | \] 184 | Thought bubble: $\left(\cibasis[\mathcal{L}]{\xi_x^{a+\text{dim}M}} \right)^{\cdot}_{\sigma(x)} - \left(\cibasis[\mathcal{L}]{xi^a_x} \right)_{\sigma(x)} = 0$ 185 | 186 | \begin{definition} 187 | \textbf{Christoffel symbol} ${\,}^{\text{L.C.}}\Gamma$ are the connection coefficient functions of the so-called Levi-Civita connection ${\,}^{\text{L.C.}}\nabla$ 188 | \end{definition} 189 | We usually make this choice of $\nabla$ if $g$ is given. 
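\underline{Example} (a quick check, using the round sphere of radius $R$ from above): with $x^1 = \theta$, $x^2 = \varphi$, $g_{\theta\theta} = R^2$ and $g_{\varphi\varphi} = R^2\sin^2\theta$, the boxed formula above yields as the only non-vanishing connection coefficient functions
\[
\ccf{\theta}{\varphi\varphi} = -\sin\theta\cos\theta , \quad \quad \ccf{\varphi}{\theta\varphi} = \ccf{\varphi}{\varphi\theta} = \cot\theta ,
\]
so that the geodesic equation in this chart reads
\[
\ddot{\theta} - \sin\theta\cos\theta \, \dot{\varphi}^2 = 0 , \quad \quad \ddot{\varphi} + 2\cot\theta \, \dot{\theta}\dot{\varphi} = 0 .
\]
The equator $\theta = \frac{\pi}{2}$, $\varphi(\lambda) = 2\pi\lambda$ solves both equations. (The parametrization $\varphi(\lambda) = 2\pi\lambda^3$ used in the length example traces out the same curve, but it is not the affine parametrization singled out by the geodesic equation.)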
190 | 
191 | $(M, \mathcal{O}, \A, g) \to (M, \mathcal{O}, \A, g, {\,}^{\text{L.C.}}\nabla)$ 
192 | 
193 | \underline{abstract way}: $\nabla g = 0$ and $T = 0$ (torsion) \\ 
194 | $\Longrightarrow \nabla = {\,}^{\text{L.C.}}\nabla$ 
195 | 
196 | \begin{definition} 
197 | \begin{enumerate}[(a)] 
198 | \item The \textbf{Riemann-Christoffel curvature} is defined by 
199 | \begin{equation} 
200 | \boxed{R_{abcd} := g_{am}R\indices{^m_{bcd}}} 
201 | \end{equation} 
202 | 
203 | \item \textbf{Ricci curvature}: 
204 | \begin{equation} 
205 | \boxed{R_{ab} = R\indices{^m_{amb}}} 
206 | \end{equation} 
207 | Thought bubble: with a metric, ${\,}^{\text{L.C.}}\nabla$ 
208 | 
209 | \item (Ricci) scalar curvature: 
210 | \begin{equation} 
211 | \boxed{R = g^{ab} R_{ab}} 
212 | \end{equation} 
213 | 
214 | Thought bubble: ${\,}^{\text{L.C.}}\nabla$ 
215 | \end{enumerate} 
216 | \end{definition} 
217 | 
218 | \begin{definition} 
219 | \textbf{Einstein curvature} of $(M, \mathcal{O}, \A, g)$ is defined as 
220 | \begin{equation} 
221 | \boxed{G_{ab} := R_{ab} - \frac{1}{2} g_{ab} R} 
222 | \end{equation} 
223 | \end{definition} 
224 | 
225 | \underline{Convention}: $g^{ab} := (g^{``-1''})^{ab}$ 
226 | 
227 | F. Schuller: these indices are not being pulled up, because what would you pull them up with 
228 | 
229 | (student) Question: Does the Einstein curvature yield new information? \\ 
230 | Answer: \\ 
231 | $g^{ab} G_{ab} = R_{ab} g^{ab} - \frac{1}{2} g_{ab} g^{ab} R = R - \frac{1}{2} \delta^a_a R = R - \frac{1}{2} \text{dim}M \, R = (1- \frac{d}{2}) R$ 
232 | 
-------------------------------------------------------------------------------- 
/lecture11.tex: 
-------------------------------------------------------------------------------- 
1 | \section{Symmetry} 
2 | \begin{framed} 
3 | This lecture is about symmetry, but along the way we will pick up a number of elementary techniques in differential geometry that we will need in Einstein's theory. We shall motivate these techniques by appealing to the feeling that the round sphere $(S^2, \mathcal{O}, \mathcal{A},g^{\text{round}})$ has rotational symmetry, while the potato $(S^2, \mathcal{O},\mathcal{A}, g^{\text{potato}})$ does not. 
4 | 
5 | So far we have considered symmetry by fixing an inner product first, and then classifying the linear maps $A$ for which, w.r.t. that inner product, the inner product of $AX$ and $AY$ equals the inner product of $X$ and $Y$ for all vectors $X$ and $Y$. 
6 | 
7 | Here we talk about an altogether different idea. Firstly, since the distinction between the two is entirely contained in $g$, we are talking about the rotational symmetry of $g^{\text{round}}$. Secondly, while an inner product is on one tangent space, there are many different tangent spaces with different inner products. $g$ talks about the distribution of these inner products over the sphere, and that distribution in some sense is rotationally invariant or not. 
8 | 
9 | Therefore, the question is: how to describe the symmetries of a metric? This is important because nobody has solved Einstein's equations without making additional assumptions, such as a symmetry of the solution. 
10 | \end{framed} 
11 | 
12 | \subsection{Push-forward map} 
13 | \begin{definition} 
14 | Let $M$ and $N$ be smooth manifolds with tangent bundles $TM$ and $TN$ respectively. Let $\phi : M \to N$ be a smooth map. 
Then, the \textbf{push-forward map} of $\phi$ is the map 15 | \begin{align} 16 | \phi_{\ast} : & TM \to TN \nonumber \\ 17 | & X \mapsto \phi_\ast(X) \nonumber \\ 18 | \label{eq_defPushForwardMap}\text{where } & \phi_\ast(X) f := X (f \after \phi) && (\forall \, f \in C^\infty(N)) 19 | \end{align} 20 | \end{definition} 21 | 22 | \begin{SCfigure}[5][h] 23 | \label{fig:L11_pushForwardMap} 24 | \centering 25 | \begin{tikzpicture} 26 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 27 | { 28 | TM & TN & \\ 29 | M & N & \mathbb{R} \\ 30 | }; 31 | \path[->] 32 | (m-1-1) edge node [auto] {$\phi_\ast$} (m-1-2) 33 | edge node [auto] {$\pi_{TM}$} (m-2-1) 34 | (m-1-2) edge node [auto] {$\pi_{TN}$} (m-2-2) 35 | (m-2-2) edge node [auto] {$f$} (m-2-3) 36 | (m-2-1) edge node [auto] {$\phi$} (m-2-2) 37 | edge [bend right=30] node [auto] {$f \after \phi$} (m-2-3); 38 | \end{tikzpicture} 39 | \caption{\textbf{Push-forward map}: $\phi_\ast$ takes a vector $X \in T_pM$ in the tangent space at the point $p \in M$ to the vector $\phi_\ast(X) \in T_qN$ in the tangent space at the point $\phi(p) = q \in N$, such that the action of $\phi_\ast(X)$ on any smooth function $f \in C^\infty(N)$ results in the same value as the action of $X$ on the function $(f \after \phi)$.} 40 | \end{SCfigure} 41 | 42 | \textbf{Note}: If we take an entire fibre at the point $p \in M$, applying $\phi_\ast$ on it remains within the fibre at the point $\phi(p) \in N$. That is \\ 43 | \[ 44 | \phi_\ast(T_pM) \subseteq T_{\phi(p)}N 45 | \] 46 | 47 | \textit{Mnemonic: ``vectors are pushed forward'' across tangent bundles in a manner dictated by the underlying map.} 48 | 49 | \textbf{Components of push-forward $\phi_\ast$ w.r.t charts $(U,x) \in \mathcal{A}_M$ and $(V,y) \in \mathcal{A}_N$}: Let $p \in U$ and $\phi(p) \in V$. Since $\cibasis{x^i}_p$ is a vector, we have $\phi_\ast(\cibasis{x^i}_p)$ as a vector in N. Then we can select a component of this vector by using $dy^a$ as follows: \\ 50 | \begin{equation} \label{eq_pushForwardComponents} 51 | \underbrace{dy^a\left(\phi_\ast\left(\left(\cibasis{x^i}\right)_p\right)\right)}_{:= \phi^a_{\ast i}} = \phi_\ast\left(\left(\cibasis{x^i}\right)_p\right)y^a = \left(\cibasis{x^i}\right)_p (y^a \after \phi) = \left(\cibasis{x^i}\right)_p (y \after \phi)^a := \left(\cibasis[\hat{\phi}^a]{x^i}\right)_p 52 | \end{equation} 53 | 54 | \begin{SCfigure}[5][h] 55 | \centering 56 | \begin{tikzpicture} 57 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 58 | { 59 | M \supseteq U & V \subseteq N \\ 60 | \underbrace{x(U)}_{\subseteq \mathbb{R}^{\text{dim }M}} & \underbrace{y(V)}_{\subseteq \mathbb{R}^{\text{dim }N}}\\ 61 | }; 62 | \path[->] 63 | (m-1-1) edge node [auto] {$\phi$} (m-1-2) 64 | edge node [auto] {$x$} (m-2-1) 65 | edge node [sloped, anchor=center, above] {$y \after \phi =: \hat{\phi}$} (m-2-2) 66 | (m-1-2) edge node [auto] {$y$} (m-2-2) 67 | (m-2-1) edge node [below] {$y \after \phi \after x^{-1}$} (m-2-2); 68 | \end{tikzpicture} 69 | \caption{Components of push-forward map w.r.t charts $(U,x) \in \mathcal{A}_M$ and $(V,y) \in \mathcal{A}_N$.} 70 | \end{SCfigure} 71 | 72 | \begin{theorem} 73 | If $\gamma : \mathbb{R} \to M$ is a curve in $M$, then $\phi \after \gamma : \mathbb{R} \to N$ is a curve in $N$. 
Then, $\phi_\ast$ pushes the tangent to a curve $\gamma$ (velocity) to the tangent to the curve $(\phi \after \gamma)$, i.e., 74 | \begin{equation} 75 | \phi_\ast\left(v_{\gamma, p}\right) = v_{(\phi \after \gamma), \phi(p)} 76 | \end{equation} 77 | \end{theorem} 78 | \begin{proof} 79 | Let $p = \gamma(\lambda_0)$. Then $\forall \, f \in C^\infty(N)$, 80 | \begin{align*} 81 | \phi_\ast\left(v_{\gamma, p}\right) f & = v_{\gamma, p} (f \after \phi) && (\text{by Eq.~\ref{eq_defPushForwardMap}}) \\ 82 | & = ((f \after \phi) \after \gamma)^\prime (\lambda_0) && (\text{by Eq.~\ref{eq_velocity}}) \\ 83 | & = (f \after (\phi \after \gamma))^\prime (\lambda_0) && (\text{by associativity of composition}) \\ 84 | & = v_{(\phi \after \gamma), \phi(\gamma(\lambda_0))} f && (\text{by Eq.~\ref{eq_velocity}}) \\ 85 | & = v_{(\phi \after \gamma), \phi(p)} f && (\gamma(\lambda_0) = p) \\ 86 | \implies \phi_\ast\left(v_{\gamma, p}\right) & = v_{(\phi \after \gamma), \phi(p)} 87 | \end{align*} 88 | \end{proof} 89 | 90 | \subsection{Pull-back map} 91 | \begin{definition} 92 | Let $M$ and $N$ be smooth manifolds with cotangent bundles $T^\ast M$ and $T^\ast N$ respectively. Let $\phi : M \to N$ be a smooth map. Then, the \textbf{pull-back map} of $\phi$ is the map 93 | \begin{align} 94 | \phi^{\ast} : & T^\ast N \to T\ast M \nonumber \\ 95 | & \omega \mapsto \phi^\ast(\omega) \nonumber \\ 96 | \label{eq_defPullBackMap}\text{where } & \phi^\ast(\omega)(X) := \omega (\phi_\ast(X)) && (\forall \, X \in T_pM) 97 | \end{align} 98 | \end{definition} 99 | 100 | \textbf{Components of pull-back $\phi^\ast$ w.r.t charts $(U,x) \in \mathcal{A}_M$ and $(V,y) \in \mathcal{A}_N$}: Let $p \in U$ and $\phi(p) \in V$. Since $dy^a$ is a covector, we have $\phi^\ast(dy^a_p)$ as a covector in N. Then we can select a component of this covector by using $\cibasis{x^i}$ as follows: \\ 101 | \begin{align*} 102 | \phi^{\ast a} & := \phi^\ast\left(\left(dy^a\right)_{\phi(p)}\right) \left(\left(\cibasis{x^i}\right)_p\right) \\ 103 | & = \left(dy^a\right)_{\phi(p)} \phi_\ast \left(\left(\cibasis{x^i}\right)_p\right) && (\text{by Eq.~\ref{eq_defPullBackMap}}) \\ 104 | & = \left(\cibasis[\hat{\phi}^a]{x^i}\right)_p = \phi^a_{\ast i} && (\text{by Eq.~\ref{eq_pushForwardComponents}}) 105 | \end{align*} 106 | Thus, the components of the push-forward and pull-back maps are exactly the same. 107 | \begin{align*} 108 | \left(\phi_\ast\left(X\right)\right)^a & = \phi^a_{\ast i} X^i \\ 109 | \left(\phi^\ast\left(\omega\right)\right)_i & = \phi^a_{\ast i} \omega_a 110 | \end{align*} 111 | Remember, $a = (1, \dotsc, \text{dim} N)$ and $i = (1, \dotsc, \text{dim} M)$. 112 | 113 | Claim: $\phi^\ast(df) = d(f \after \phi)$. 114 | 115 | \textit{Mnemonic: ``covectors are pulled back'' across tangent bundles in a manner dictated by the underlying map.} 116 | 117 | \textbf{Important application}: 118 | \begin{definition} 119 | Let $M$ and $N$ be smooth manifolds. Let $\phi : M \injmapto N$ be an injective map. 
If we know a metric $g$ on $N$, then the \textbf{induced metric} $g_M$ in $M$ is defined using the push-forward map $\phi_\ast$ as follows: 120 | \begin{equation}\label{eq_inducedMetric} 121 | \boxed{g_M(X,Y) := g\left(\phi_\ast(X),\phi_\ast(Y)\right)} \quad \quad (\forall \, X, Y \in T_pM) 122 | \end{equation} 123 | \end{definition} 124 | In terms of components, 125 | \begin{equation} 126 | \left(\left(g_M\right)_{ij}\right)_p = \left(g_{ab}\right)_{\phi(p)} \left(\cibasis[\hat{\phi}^a]{x^i}\right)_{\phi(p)} \left(\cibasis[\hat{\phi}^b]{x^j}\right)_{\phi(p)} 127 | \end{equation} 128 | \textbf{Example}: $N = (\mathbb{R}^3, \stdtop, \mathcal{A})$ and $M = (S^2, \mathcal{O}, \mathcal{A})$, then we can have several injective maps $\phi : S^2 \injmapto \mathbb{R}^3$. For example, $S^2$ could live in $\mathbb{R}^3$ either as a potato or a round sphere. However, suppose $\mathbb{R}^3$ is equipped with the Euclidean metric $g_E$ ...(TODO complete this example) 129 | 130 | \subsection{Flow of a complete vector field} 131 | \begin{definition} 132 | Let $X$ be a vector field on a smooth manifold $(M,\mathcal{O},\mathcal{A})$. A curve $\gamma : I \subseteq \mathbb{R} \to M$ is called an \textbf{integral curve of $X$} if 133 | \[ 134 | v_{\gamma,\gamma(\lambda)} = X_{\gamma(\lambda)} 135 | \] 136 | \end{definition} 137 | 138 | \begin{definition} A vector field $X$ is \textbf{complete} if all integral curves have $I = \mathbb{R}$ (i.e. domain is all of $\mathbb{R}$). 139 | \end{definition} 140 | 141 | \begin{theorem} 142 | Compactly supported smooth vector field is complete. 143 | \end{theorem} 144 | 145 | \begin{definition} The \textbf{flow of a complete vector field $X$} on a manifold $M$ is a 1-parameter family 146 | \begin{align*} 147 | h^X : & \mathbb{R} \times M \to M \\ 148 | & (\lambda, p) \mapsto \gamma_p(\lambda) 149 | \end{align*} 150 | where $\gamma_p : \mathbb{R} \to M$ is the integral curve of $X$ with $\gamma(0) = p$. 151 | \end{definition} 152 | 153 | Then for fixed $\lambda \in \mathbb{R}$, $h_{\lambda}^X : M \to M$ is smooth. 154 | 155 | \textbf{Picture}: If $S$ is a set of points in $M$, then $h^X_{\lambda}(S)$ can be seen as the new position of these points under the flow $h^X$ after the passage of $\lambda$ units of parameter. In general, $h^X_{\lambda}(S) \neq S (\text{ if } X \neq 0)$. 156 | 157 | \subsection{Lie subalgebras of the Lie algebra $(\Gamma(TM), [\cdot, \cdot])$ of vector fields} 158 | We know that $\Gamma(TM) = \lbrace \text{ set of all vector fields } \rbrace$, which can be seen as a $C^{\infty}(M)$-module, or as a $\mathbb{R}$-vector space. 159 | 160 | $(\Gamma(TM), [\cdot, \cdot])$ with $[X,Y]$ defined by its action on a function $f$ by $[X,Y] f := X(Yf) - Y(Xf)$ is a Lie algebra since $X,Y \in \Gamma(TM) \implies [X,Y] \in \Gamma(TM)$ and the following properties are satisfied: 161 | \begin{enumerate}[(i)] 162 | \item \textbf{Anticommutativity}: $[X,Y] = -[Y,X]$ 163 | \item \textbf{Linearity}: $[\lambda X + Z, Y] = \lambda [X,Y] + [Z,Y]$ where $\lambda \in \mathbb{R}$ 164 | \item \textbf{Jacobi identity}: $[X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0$ 165 | \end{enumerate} 166 | 167 | Let $X_1, \dotsc, X_s$ be $s$ (many) vector fields on $M$, such that 168 | \[ 169 | \forall i,j \in \lbrace 1,\dotsc,s \rbrace \quad [X_i,X_j] = \underbrace{C^k_{ij} X_k}_{\text{linear combination of }X_k s} \quad \quad \text{where } C^k_{ij} \in \mathbb{R} 170 | \] 171 | $C^k_{ij}$ are called \textbf{structure constants}. 
172 | 
173 | Let $\text{span}_{\mathbb{R}} \lbrace X_1,\dotsc,X_s \rbrace := \lbrace \text{all linear combinations of }X_k \rbrace$. Then $\left(\text{span}_{\mathbb{R}} \lbrace X_1,\dotsc,X_s \rbrace, [\cdot, \cdot]\right)$ is a Lie subalgebra of $(\Gamma(TM), [\cdot, \cdot])$. 
174 | 
175 | \textbf{Example}: On $S^2$, assume that the vector fields $X_1,X_2,X_3$ satisfy $[X_1,X_2] = X_3$, $[X_2,X_3] = X_1$ and $[X_3,X_1] = X_2$. Then $\left(\text{span}_{\mathbb{R}} \lbrace X_1,X_2,X_3 \rbrace, [\cdot, \cdot]\right)$ (the Lie algebra of $SO(3)$) is a Lie subalgebra. An instance of vector fields satisfying these conditions is (with $X_i, \theta, \phi$ all taken at a point $p$, and $x^1 = \theta, x^2 = \phi$) 
176 | \begin{align*} 
177 | & X_1 = - \sin\phi \cibasis{\theta} - \cot\theta \cos\phi \cibasis{\phi} \\ 
178 | & X_2 = \cos\phi \cibasis{\theta} - \cot\theta \sin\phi \cibasis{\phi} \\ 
179 | & X_3 = \cibasis{\phi} 
180 | \end{align*} 
181 | Note that the above is defined on a merely smooth manifold, without any additional structure like a metric. 
182 | 
183 | \subsection{Symmetry} 
184 | \begin{definition} 
185 | A finite-dimensional Lie subalgebra $(L,[\cdot,\cdot])$ is said to be a \textbf{symmetry} of a metric tensor field $g$ if 
186 | \[ 
187 | \boxed{g \left( \left(h^X_\lambda\right)_\ast (A), \left(h^X_\lambda\right)_\ast (B) \right) = g(A,B)} \quad\quad (\forall X \text{ (complete vector field) } \in L, \quad \forall \lambda \in \mathbb{R}, \quad \forall A, B \in T_pM) 
188 | \] 
189 | \end{definition} 
190 | 
191 | In another formulation (using the pullback), $\boxed{\left(h^X_\lambda\right)^\ast g = g}$. The pullback of $\phi : M \to M$ on $g$, itself, is defined as follows: 
192 | \[ 
193 | (\phi^\ast g)(A, B) := g(\phi_\ast(A), \phi_\ast(B)) 
194 | \] 
195 | 
196 | %\textbf{Example}: 
197 | 
198 | \subsection{Lie derivative} 
199 | It can be shown that the following expression is precisely the Lie derivative of $g$ w.r.t. a vector field $X$ 
200 | \begin{equation} 
201 | \boxed{\mathcal{L}_X g := \lim_{\lambda \to 0} \frac{\left( h_\lambda^X \right)^\ast g - g}{\lambda}} 
202 | \end{equation} 
203 | Clearly, $L$ is a symmetry of $g$ iff $\mathcal{L}_X g = 0$ for all $X \in L$. 
204 | 
205 | \begin{definition} 
206 | The \textbf{Lie derivative} $\mathcal{L}$ on a smooth manifold $(M, \mathcal{O}, \mathcal{A})$ sends a pair of a vector field $X$ and a $(p,q)$-tensor field $T$ to a $(p,q)$-tensor field $\mathcal{L}_X T$ such that 
207 | \begin{enumerate}[(i)] 
208 | \item $\mathcal{L}_X f = Xf \quad \forall f \in C^{\infty}M$ 
209 | 
210 | \item $\mathcal{L}_X Y = [X, Y] \quad\quad \text{ where }X, Y \text{ are vector fields}$ 
211 | \begin{framed} 
212 | This condition sucks in information about the vector field $X$. It is not $C^{\infty}$-linear in the lower argument $X$. If it were, the derivative would be independent of the values of $X$ at points near the point where the derivative is evaluated. This is an important difference between the covariant derivative $\nabla$ and the Lie derivative. 
213 | \end{framed} 
214 | 
215 | \item $\mathcal{L}_X (T + S) = \mathcal{L}_X T + \mathcal{L}_X S \quad \text{ where }T, S \text{ are } (p,q) \text{-tensors}$ 
216 | 
217 | \item \textbf{Leibniz rule: } $\mathcal{L}_X \left(T(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q)\right) = (\mathcal{L}_X T)(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q) \\ 
218 | + T(\mathcal{L}_X \omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q) + \dotsb + T(\omega_1,\dotsc,\mathcal{L}_X \omega_p,Y_1,\dotsc,Y_q) \\ 
219 | + T(\omega_1,\dotsc,\omega_p,\mathcal{L}_X Y_1,\dotsc,Y_q) + \dotsb + T(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,\mathcal{L}_X Y_q) \quad \text{ where }T \text{ is a }(p,q)\text{-tensor}$ 
220 | \begin{framed} 
221 | Note that for a $(p,q)$-tensor $T$ and an $(r,s)$-tensor $S$, since \\ 
222 | $(T \otimes S) (\omega_{(1)}, \dotsc, \omega_{(p+r)}, Y_{(1)}, \dotsc, Y_{(q+s)}) = \\ T(\omega_{(1)}, \dotsc, \omega_{(p)}, Y_{(1)}, \dotsc, Y_{(q)} ) \cdot S( \omega_{(p+1)}, \dotsc, \omega_{(p+r)} , Y_{(q+1)}, \dotsc, Y_{(q+s)})$, \\ 
223 | the Leibniz rule implies $\mathcal{L}_X (T \otimes S) = (\mathcal{L}_X T) \otimes S + T \otimes (\mathcal{L}_X S)$. 
224 | \end{framed} 
225 | 
226 | \item $\mathcal{L}_{X+Y} T = \mathcal{L}_X T + \mathcal{L}_Y T$ 
227 | \end{enumerate} 
228 | \end{definition} 
229 | 
230 | Observe that, in a chart $(U,x)$, 
231 | \[ 
232 | \left(\mathcal{L}_X Y\right)^i = X^m \cibasis{x^m}(Y^i) - \underbrace{\cibasis{x^s} (X^i)}_{\text{requires knowing } X \text{ around the point} } Y^s 
233 | \] 
234 | However, for the covariant derivative, 
235 | \[ 
236 | \left(\nabla_X Y\right)^i = X^m \cibasis{x^m}(Y^i) + \ccf{i}{sm} X^m Y^s 
237 | \] 
238 | In general, for a $(1,1)$-tensor $T$, 
239 | \[ 
240 | \left(\mathcal{L}_X T \right)\indices{^i_j} = X^m \cibasis{x^m}(T\indices{^i_j}) \underbrace{- \cibasis[X^i]{x^s} T\indices{^s_j}}_{('-' \text{ for the upper index})} \underbrace{+ \cibasis[X^s]{x^j} T\indices{^i_s}}_{('+' \text{ for the lower index})} 
241 | \] 
242 | 
243 | \textbf{Application}: As above, it is easy to calculate the components of the Lie derivative of the metric $g$, $\mathcal{L}_X g$. Thus, by checking whether this derivative vanishes or not, it can be determined whether the metric features a symmetry. 
244 | 
-------------------------------------------------------------------------------- 
/lecture12.tex: 
-------------------------------------------------------------------------------- 
1 | \section{Integration} 
2 | \begin{framed} 
3 | This lecture will be the completion of our ``lift'' of analysis on charts to the manifold level. We want to be able to integrate a function $f$ over a manifold $M$. This $\int_{M} f$ will be an important tool for writing down the action for the Einstein equations. 
4 | 
5 | However, to define the integral we need a mild new structure on the smooth manifold $\mfd$. It requires \\ 
6 | (i) a choice of a certain tensor field, the so-called \textbf{volume form}, and \\ 
7 | (ii) a restriction on the atlas $\A$, which is called ``\textbf{orientation}''. 
8 | \end{framed} 
9 | 
10 | \subsection{Review of integration on $\mathbb{R}^d$} 
11 | We review this because this is what, after all, happens on charts; and we want to use this knowledge to have a well-defined integration on manifolds. 
12 | \begin{enumerate}[a)] 
13 | \item If $F : \R \to \R$, we assume a notion of integration is known. We define an integral over an interval $(a,b)$ as follows: \\ 
14 | \[ 
15 | \int_{(a,b)} F := \int_a^b dx \, F(x) \quad\quad \text{(which is understood in terms of, say, Riemann's integral)}. 
16 | \] 17 | 18 | \item If $F : \R^k \to \R$, then 19 | \begin{enumerate}[(i)] 20 | \item on a box-shaped domain, $Box = (a,b) \times (c,d) \times \dotsb \times (u,v) \subseteq \R^k$, the integral on the box is a series of integrals which have to be evaluated one after the other as follows \\ 21 | \[ 22 | \int_{Box} F := \int_a^b dx^1 \int_c^d dx^2 \dotsi \int_u^v dx^k \, F(x^1, x^2, \dotsc, x^k) 23 | \] 24 | \item for other domains, $G \subseteq \R^k$, we first introduce an \textbf{indicator function} $\mu_G : \R^k \to \R$ such that 25 | \[ 26 | \mu_G(x) = \begin{cases} 27 | 1, \quad \quad x \in G \\ 28 | 0, \quad \quad x \not\in G 29 | \end{cases} 30 | \] 31 | and then define 32 | \[ 33 | \int_{G} F := \int_{-\infty}^{+\infty} dx^1 \int_{-\infty}^{+\infty} dx^2 \dotsi \int_{-\infty}^{+\infty} dx^k \, \mu_G(x) \cdot F(x^1, x^2, \dotsc, x^k) 34 | \] 35 | While this may not be a practical definition, it tells us what we mean by an integral over a function from $\R^k$ to $\R$ over an arbitrary domain $G$. 36 | \end{enumerate} 37 | \end{enumerate} 38 | 39 | \textit{\textbf{Note}: All of the above comes with the disclaimer '\textbf{if the integral exists}' since there could be many issues that do not allow the existence of the integral as defined above.} 40 | 41 | \textbf{Change of variables}, which may also be called integration by substitution. 42 | 43 | \begin{theorem} 44 | If $F : \R^k \ni G \to \R$ and $\phi : preim_\phi(G) (\in \R^k) \to G$, then 45 | \[ 46 | \int_G F(x) = \int_{preim_\phi(G)} \underbrace{\left\lvert det(\partial x)(y) \right\rvert}_{\text{Jacobian of }\phi} \cdot (F \after \phi)(y) 47 | \] 48 | \end{theorem} 49 | 50 | \begin{tikzpicture} 51 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 52 | { 53 | \R^k \ni preim_\phi(G) & G \in \R^k \\ 54 | & \R \\ 55 | }; 56 | \path[->] 57 | (m-1-1) edge node[auto] {$\phi$} (m-1-2) 58 | edge node[sloped, anchor=center, below] {$F \after \phi$} (m-2-2) 59 | (m-1-2) edge node[auto] {$F$} (m-2-2); 60 | \end{tikzpicture} 61 | 62 | \textbf{Example}: Consider the domain $G \subset \R^2$, which includes the entire $R^2$ except the x-axis. Let 63 | \begin{align*} 64 | \phi : & \R^+ \times \lbrace(0,\pi) \cup (\pi,2\pi)\rbrace \to G \\ 65 | & (r, \varphi) \mapsto (r\cos\varphi, r\sin\varphi) 66 | \end{align*} 67 | Thus, $G$ is in Cartesian coordinates and $preim_\phi(G)$ is in polar coordinates. Let us calculate the Jacobian. 
68 | \begin{align*} 69 | (\partial_a x^b) (r, \varphi) & = \begin{vmatrix} 70 | \cos\varphi & \sin\varphi \\ 71 | -r\sin\varphi & r\cos\varphi 72 | \end{vmatrix} \\ 73 | det(\partial_a x^b) (r, \varphi) & = r \\ 74 | \implies \int_G \underbrace{dx^1 \, dx^2}_{\text{volume element}} \, F(x^1,x^2) & = \int_0^\infty \int_0^{2\pi} \underbrace{dr \, d\varphi \, r}_{\text{volume element}} F(r\cos\varphi, r\sin\varphi) 75 | \end{align*} 76 | 77 | \subsection{Integration on one chart} 78 | Let $\mfd$ be a smooth manifold, $f : M \to \R$ and choose charts $(U,x), (U,y) \in \A$ 79 | 80 | \begin{tikzpicture} 81 | \matrix (m) [matrix of math nodes, row sep=5em, column sep=7em, minimum width=2em] 82 | { 83 | y(U) \in \R^k & \\ 84 | U & \R \\ 85 | x(U) \in \R^k & \\ 86 | }; 87 | \path[->] 88 | (m-2-1) edge node[auto] {$f$} (m-2-2) 89 | edge node[auto] {$y$} (m-1-1) 90 | edge node[auto] {$x$} (m-3-1) 91 | (m-1-1) edge node[sloped, anchor=center, above] {$f_{(y)} := f \after y^{-1}$} (m-2-2) 92 | (m-3-1) edge node[sloped, anchor=center, below] {$f_{(x)} := f \after x^{-1}$} (m-2-2) 93 | (m-3-1) edge [bend left=40] node[left] {$y \after x^{-1}$} (m-1-1); 94 | \end{tikzpicture} 95 | 96 | Consider $\int_U f$ 97 | \[ 98 | \int_U f := \int_{x(U)} d^k\alpha \, f_{(x)}(\alpha) 99 | \] 100 | \subsection{Volume forms} 101 | 102 | \begin{definition} 103 | On a smooth manifold $(M,\mathcal{O},\mathcal{A})$ \\ 104 | a $(0,\text{dim}M)$-tensor field $\Omega$ is called a \underline{volume form} if 105 | \begin{enumerate} 106 | \item[(a)] $\Omega$ vanishes nowhere (i.e. $\Omega \neq 0 \, \, \forall \, p \in M$) 107 | \item[(b)] totally antisymmetric 108 | \[ 109 | \Omega(\dots , \underbrace{X}_{i\text{th}} , \dots , \underbrace{Y}_{j\text{th}} \dots ) = - \Omega(\dots , \underbrace{Y}_{i\text{th}} , \dots , \underbrace{X}_{j\text{th}} \dots ) 110 | \] 111 | \end{enumerate} 112 | 113 | In a chart: 114 | \[ 115 | \Omega_{i_1 \dots i_d} = \Omega_{ [i_1 \dots i_d ]} 116 | \] 117 | \end{definition} 118 | 119 | \underline{Example} $(M,\mathcal{O}, \mathcal{A},g)$ metric manifold 120 | 121 | construct volume form $\Omega$ from $g$ 122 | 123 | In \underline{any} chart: $(U,x)$ 124 | 125 | \[ 126 | \Omega_{i_1 \dots i_d} := \sqrt{ \text{det}(g_{ij}(x)) } \epsilon_{i_1 \dots i_d} 127 | \] 128 | where \textbf{Levi-Civita symbol} $\epsilon_{i_1 \dots i_d}$ is \underline{defined} as $\begin{aligned} & \quad \\ 129 | & \epsilon_{123 \dots d} = +1 \\ 130 | & \epsilon_{1\dots d} = \epsilon_{[i_1 \dots i_d]} \end{aligned}$ 131 | 132 | \begin{proof} (well-defined) Check: What happens under a change of charts 133 | \[ 134 | \begin{aligned} 135 | \Omega(y)_{i_1 \dots i_d} & = \sqrt{ \text{det}(g(y)_{ij}) } \epsilon_{i_1 \dots i_d} = \\ 136 | & = \sqrt{ \text{det}(g_{mn}(x) \frac{ \partial x^m}{ \partial y^i} \frac{ \partial x^n}{ \partial y^j} )} \frac{ \partial y^{m_1} }{ \partial x^{i_1} } \dots \frac{ \partial y^{m_d}}{ \partial x^{i_d}} \epsilon_{ [m_1 \dots m_d] } = \\ 137 | & = \sqrt{ | \text{det}g_{ij}(x) | } \left| \text{det}\left( \frac{ \partial x}{ \partial y} \right) \right| \text{det}\left( \frac{ \partial y}{ \partial x} \right) \epsilon_{i_1 \dots i_d} = \sqrt{ \text{det}g_{ij}(x)} \epsilon_{i_1 \dots i_d} \text{sgn}\left( \text{det}\left( \frac{ \partial x}{ \partial y} \right) \right) 138 | \end{aligned} 139 | \] 140 | \end{proof} 141 | 142 | EY : 20150323 143 | 144 | Consider the following: 145 | \[ 146 | \begin{aligned} 147 | \Omega(y)(Y_{(1)} \dots Y_{(d)} ) & = \Omega(y)_{i_1 \dots i_d}Y_{(1)}^{i_1} \dots Y_{(d)}^{i_d} 
= \\ 148 | & = \sqrt{ \text{det}(g_{ij}(y)) } \epsilon_{i_1 \dots i_d} Y^{i_1}_{(1)} \dots Y^{i_d}_{(d)} = \\ 149 | & = \sqrt{ \text{det}(g_{mn}(x)) \frac{ \partial x^m}{ \partial y^i} \frac{ \partial x^n }{ \partial y^j} } \epsilon_{i_1 \dots i_d} \frac{ \partial y^{i_1}}{ \partial x^{m_1} } \dots \frac{ \partial y^{i_d} }{ \partial x^{m_d} } X^{m_1} \dots X^{m_d} = \\ 150 | & = \sqrt{ \text{det}(g_{mn}(x))\frac{ \partial x^m}{ \partial y^i} \frac{ \partial x^n}{ \partial y^j}} \text{det}\left( \frac{ \partial y}{ \partial x}\right) \epsilon_{m_1 \dots m_d} X^{m_1} \dots X^{m_d} = \\ 151 | & = \sqrt{ \text{det}(g_{mn}(x)) } \left| \text{det}\left( \frac{ \partial x}{ \partial y} \right) \right| \text{det}\left( \frac{ \partial y}{ \partial x} \right) \epsilon_{m_1 \dots m_d} X^{m_1} \dots X^{m_d} = \\ 152 | & = \sqrt{\text{det}(g_{mn}(x))} \epsilon_{m_1 \dots m_d} \text{sgn}\left(\text{det}\left( \frac{ \partial x}{ \partial y} \right) \right) X^{m_1} \dots X^{m_d} = \text{sgn}(\text{det}\left( \frac{ \partial x}{ \partial y} \right)) \Omega_{m_1 \dots m_d}(x) X^{m_1} \dots X^{m_d} 153 | \end{aligned} 154 | \] 155 | 156 | If $\text{det}\left( \frac{ \partial y}{ \partial x} \right) > 0$, 157 | \[ 158 | \Omega(y)(Y_{(1)} \dots Y_{(d)}) = \Omega(x)(X_{(1)} \dots X_{(d)} ) 159 | \] 160 | This works also if Levi-Civita symbol $\epsilon_{i_1\dots i_d}$ doesn't change at all under a change of charts. (around 42:43 \url{https://youtu.be/2XpnbvPy-Zg}) 161 | 162 | \hrulefill 163 | 164 | Alright, let's require, \\ 165 | \phantom{\quad \, } restrict the smooth atlas $\mathcal{A}$ \\ 166 | \phantom{\quad \quad \, } to a subatlas ($\mathcal{A}^{\uparrow}$ still an atlas) 167 | \[ 168 | \mathcal{A}^{\uparrow} \subseteq \mathcal{A} 169 | \] 170 | s.t. $\forall \, (U,x), (V,y)$ have chart transition maps $\begin{aligned} & \quad \\ 171 | & y\circ x^{-1} \\ 172 | & x\circ y^{-1} \end{aligned}$ 173 | 174 | s.t. $\text{det}\left( \frac{ \partial y}{ \partial x} \right) >0$ \\ 175 | \phantom{ \quad \, } such $\mathcal{A}^{\uparrow} $ called an \textbf{oriented} atlas 176 | 177 | \[ 178 | (M, \mathcal{O}, \mathcal{A},g) \Longrightarrow (M,\mathcal{O},\mathcal{A}^{\uparrow} ,g) 179 | \] 180 | Note: associated bundles. 181 | 182 | Note also: 183 | $ \text{det}\left( \frac{ \partial y^b}{ \partial x^a} \right) = \text{det}(\partial_a(y^bx^{-1}))$ \phantom{ \quad \quad \, } $\frac{ \partial y^b}{ \partial x^a}$ is an endomorphism on vector space $V$. 
$\begin{aligned} & \quad \\ 184 | & \varphi : V \to V \\ 185 | & \text{det}\varphi \quad \, \text{ independent of choice of basis } \end{aligned}$ 186 | 187 | \phantom{\quad \quad \, } $g$ is a $(0,2)$ tensor field, not endomorphism (not independent of choice of basis) $\sqrt{ |\text{det}(g_{ij}(y)) | }$ 188 | 189 | \begin{definition} $\Omega$ be a volume form on $(M,\mathcal{O}, \mathcal{A}^{\uparrow} )$ and consider chart $(U,x)$ 190 | \begin{definition} $\omega_{(X)} := \Omega_{i_1\dots i_d} \epsilon^{i_1\dots i_d}$ 191 | same way $\begin{aligned} & \quad \\ 192 | & \epsilon^{12 \dots d} = +1 \\ 193 | & \epsilon^{[\dots ]} \end{aligned}$ 194 | 195 | one can show 196 | 197 | \[ 198 | \boxed{ \omega_{(y)} = \text{det}\left( \frac{ \partial x}{ \partial y} \right) \omega_{(x)} } \quad \quad \, \text{ scalar density } 199 | \] 200 | \end{definition} 201 | \end{definition} 202 | 203 | \subsection{Integration on \underline{one} chart domain $U$} 204 | 205 | \begin{definition} 206 | \begin{equation} 207 | \boxed{ \int_U f :\overset{ (U,y) }{=} \int_{y(U)} d^d\beta \omega_{(y)}(y^{-1}(\beta)) f_{(y)}(\beta) } 208 | \end{equation} 209 | \end{definition} 210 | 211 | \begin{proof}: Check that it's (well-defined), how it changes under change of charts 212 | \[ 213 | \begin{gathered} 214 | \int_U f :\overset{ (U,y) }{=} \int_{y(U)} d^d\beta \omega_{(y)}(y^{-1}(\beta)) f_{(y)}(\beta) = \underset{ (U,y)}{=} \int_{x(U)} \int d^d\alpha \left| \text{det}\left( \frac{ \partial y }{ \partial x}\right) \right| f_{(x)}(\alpha) \omega_{(x)}(x^{-1}(\alpha) \text{det}\left( \frac{ \partial x}{ \partial y } \right) = \\ 215 | = \int_{x(U)} d^d \alpha \omega_{(x)}(x^{-1}(x)) f_{(x)}(\alpha) 216 | \end{gathered} 217 | \] 218 | \end{proof} 219 | 220 | On an oriented metric manifold $(M,\mathcal{O}, \mathcal{A}^{\uparrow}, g)$ 221 | \[ 222 | \int_Uf:= \int_{x(U)} d^d\alpha \underbrace{ \sqrt{ \text{det}(g_{ij}(x))(x^{-1}(\alpha)) } }_{\sqrt{g}} f_{(x)}(\alpha) 223 | \] 224 | 225 | \subsection{Integration on the entire manifold} 226 | 227 | -------------------------------------------------------------------------------- /lecture13.tex: -------------------------------------------------------------------------------- 1 | \section{Lecture 13: Relativistic spacetime} 2 | 3 | Recall, from Lecture 9, the definition of Newtonian spacetime 4 | \[ 5 | (M, \mathcal{O}, \mathcal{A}, \nabla, t) \quad \quad \quad \, \begin{aligned} 6 | & \nabla \text{ torsion free } \\ 7 | & t \text{(the so-called absolute time) } \in C^{\infty}(M) \\ 8 | & dt \neq 0 \\ 9 | & \nabla dt = 0 \quad \, \text{ (uniform time) } 10 | \end{aligned} 11 | \] 12 | and the definition of relativistic spacetime (before Lecture 1) 13 | \[ 14 | (M, \mathcal{O}, \mathcal{A}^{\uparrow}, \nabla, g, T ) \quad \quad \quad \, \begin{aligned} 15 | & \nabla \text{ torsion-free } \\ 16 | & g \text{ Lorentzian metric} (+---) \\ 17 | & T \text{ time-orientation } 18 | \end{aligned} 19 | \] 20 | The role played by $t$ in Newtonian spacetime is played by the interplay of two new structures $g$ and $T$ in relativistic spacetime. 21 | 22 | \subsection{Time orientation} 23 | 24 | \begin{definition} 25 | $(M,\mathcal{O},\mathcal{A}^{\uparrow},g)$ a Lorentzian manifold. Then a time-orientation is given by a vector field $T$ that 26 | \begin{enumerate} 27 | \item[(i)] does \textbf{not} vanish anywhere 28 | \item[(ii)] $g(T,T)>0$ 29 | \end{enumerate} 30 | \end{definition} 31 | 32 | Newtonian vs. 
relativistic \\ 33 | Recall that a vector $X$ in Newtonian spacetime was called future-directed if $dt(X) > 0$. 34 | 35 | $\forall \, p \in M$, take half plane, half space of $T_pM$ \\ 36 | also stratified atlas so make planes of constant $t$ straight \\ 37 | relativistic \\ 38 | half cone $\forall \, p, q \in M$, half-cone $\subseteq T_pM$ \\ 39 | 40 | This definition of \underline{spacetime} 41 | 42 | Question \\ 43 | I see how the cone structure arises from the new metric. I don't understand however, how the $T$, the time orientation, comes in \\ 44 | 45 | Answer \\ 46 | $(M,\mathcal{O}, \mathcal{A},g)$ $g \xleftarrow (+---)$ 47 | 48 | requiring $g(X,X)>0$, select cones \\ 49 | $T$ chooses which cone \\ 50 | 51 | This definition of \underline{spacetime} has been made to enable the following physical postulates: 52 | \begin{enumerate} 53 | \item[(P1)] The worldline $\gamma$ of a \underline{massive} particle satisfies 54 | \begin{enumerate} 55 | \item[(i)] $g_{\gamma(\lambda)}(v_{\gamma, \gamma(lambda)} , v_{\gamma,\gamma(\lambda)} ) >0$ 56 | \item[(ii)] $g_{\gamma(\lambda)}(T, v_{\gamma,\gamma(\lambda)}) >0$ 57 | \end{enumerate} 58 | \item[(P2)] Worldlines of \underline{massless} particles satisfy 59 | \begin{enumerate} 60 | \item[(i)] $g_{\gamma(\lambda)}(v_{\gamma,\gamma(\lambda)}, v_{\gamma,\gamma(\lambda)}) = 0$ 61 | \item[(ii)] $g_{\gamma(\lambda)}(T,v_{\gamma,\gamma(\lambda)}) >0$ 62 | \end{enumerate} 63 | \underline{picture}: spacetime: 64 | \end{enumerate} 65 | 66 | Answer (to a question) $T$ is a smooth vector field, $T$ determines future vs. past, ``general relativity: we have such a time orientation; smoothness makes it less arbitrary than it seems'' -FSchuller, 67 | 68 | 69 | \underline{Claim}: $9/10$ of a metric are determined by the cone 70 | 71 | spacetime determined by distribution, only one-tenth error 72 | 73 | \subsection{Observers} $(M,\mathcal{O}, \mathcal{A}^{\uparrow},\nabla ,g, T)$ 74 | \begin{definition} 75 | An \underline{observer} is a worldline $\gamma$ with 76 | \[ 77 | \begin{aligned} 78 | & g(v_{\gamma}, v_{\gamma}) > 0 \\ 79 | & g(T,v_{\gamma}) > 0 80 | \end{aligned} 81 | \] 82 | together with a choice of basis 83 | \[ 84 | v_{\gamma,\gamma(\lambda)} \equiv e_0(\lambda) , e_1(\lambda), e_2(\lambda), e_3(\lambda) 85 | \] 86 | of each $T_{\gamma(\lambda)}M$ where the observer worldline passes, if $g(e_a(\lambda), e_b(\lambda)) = \eta_{ab} = \left[ \begin{matrix} 1 & & & \\ & -1 & & \\ & & -1 & \\ & & & -1 \end{matrix} \right]_{ab}$ 87 | 88 | \underline{precise}: observer $=$ \underline{smooth} curve in the frame bundle $LM$ over $M$ 89 | \end{definition} 90 | 91 | \subsubsection{Two physical postulates} 92 | 93 | \begin{enumerate} 94 | \item[(P3)] A \textbf{clock} carried by a specific observer $(\gamma, e)$ will measure a \textbf{time} 95 | \[ 96 | \tau := \int_{\lambda_0}^{\lambda_1} d\lambda \sqrt{ g_{\gamma(\lambda)}(v_{\gamma,\gamma(\lambda)}, v_{\gamma,\gamma(\lambda)}) } 97 | \] 98 | between the two ``\underline{events}'' 99 | \[ 100 | \gamma(\lambda_0) \quad \quad \quad \, \text{ ``start the clock'' } 101 | \] 102 | and 103 | \[ 104 | \gamma(\lambda_1) \quad \quad \quad \, \text{ ``stop the clock'' } 105 | \] 106 | \underline{Compare} with Newtonian spacetime: 107 | \[ 108 | t(p)=7 109 | \] 110 | 111 | Thought bubble: \underline{proper time/eigentime} $\tau$ 112 | 113 | \underline{Application/Example.} 114 | $\begin{aligned} 115 | & M = \mathbb{R}^4 \\ 116 | & \mathcal{O} = \mathcal{O}_{\text{st}} \\ 117 | & \mathcal{A} \ni (\mathbb{R}^4, 
\text{id}_{\mathbb{R}^4} ) \\ 118 | & g : g_{(x)ij} = \eta_{ij} \quad \, ; \quad \quad \, T_{(x)}^i =(1,0,0,0)^i 119 | \end{aligned} 120 | $ 121 | \[ 122 | \Longrightarrow \Gamma_{(x) \, \, jk }^i = 0 \text{ everywhere } 123 | \] 124 | $\Longrightarrow (M,\mathcal{O}, \mathcal{A}^{\uparrow},g,T,\nabla)$ \quad \, $\text{Riemm}=0$ \\ 125 | $\Longrightarrow $ spacetime is flat 126 | 127 | This situation is called special relativity. 128 | 129 | Consider two observers: 130 | \[ 131 | \begin{aligned} & 132 | \begin{aligned} 133 | & \gamma : (0,1) \to M \\ 134 | & \gamma_{(x)}^i = (\lambda , 0 ,0 ,0 )^i \end{aligned} \\ 135 | & 136 | \begin{aligned} 137 | & \delta :(0,1) \to M \\ 138 | \alpha \in (0,1) : & \delta_{(x)}^i = \begin{cases} ( \lambda , \alpha \lambda , 0 , 0)^i & \lambda \leq \frac{1}{2} \\ 139 | (\lambda, (1-\lambda)\alpha, 0,0)^i & \lambda > \frac{1}{2} \end{cases} 140 | \end{aligned} 141 | \end{aligned} 142 | \] 143 | let's calculate: 144 | \[ 145 | \begin{aligned} 146 | & \tau_{\gamma}:= \int_0^1 \sqrt{ g_{(x)ij} \dot{\gamma}^i_{(x)} \dot{\gamma}^j_{(x)} } = \int_0^1 d\lambda 1 = 1 \\ 147 | & \tau_{\delta} := \int_0^{1/2} d\lambda \sqrt{ 1- \alpha^2} + \int_{1/2}^1 \sqrt{ 1^2 - (-\alpha)^2 } = \int_0^1 \sqrt{ 1 - \alpha^2 } = \sqrt{ 1 - \alpha^2} 148 | \end{aligned} 149 | \] 150 | Note: piecewise integration 151 | 152 | Taking the clock postulate (P3) seriously, one better come up with a realistic clock design that supports the postulate. 153 | \underline{idea}. 154 | 155 | 2 little mirrors 156 | \item[(P4)] \underline{Postulate} 157 | 158 | Let $(\gamma, e)$ be an observer, and \\ 159 | $\delta$ be a \emph{massive} particle worldline that is parametrized s.t. $g(v_{\gamma}, v_{\gamma})=1$ (for parametrization/normalization convenience) 160 | 161 | Suppose the observer and the particle \emph{meet} somewhere (in spacetime) 162 | \[ 163 | \delta(\tau_2) = p = \gamma(\tau_1) 164 | \] 165 | 166 | \emph{This} observer measures the 3-velocity (spatial velocity) of this particle as 167 | \begin{equation}\label{Eq:spatialv} 168 | v_{\delta}: \epsilon^{\alpha}( v_{\delta, \delta(\tau_2)} ) e_{\alpha} \quad \quad \quad \, \alpha =1,2,3 169 | \end{equation} 170 | where $\epsilon^0, \boxed{ \epsilon^1,\epsilon^2,\epsilon^3}$ is the unique dual basis of $e_0,\boxed{ e_1,e_2,e_3}$ 171 | \end{enumerate} 172 | 173 | EY:20150407 174 | 175 | There might be a major correction to Eq. (\ref{Eq:spatialv}) from the Tutorial 14 : Relativistic spacetime, matter, and Gravitation, see the second exercise, Exercise 2, third question: 176 | \begin{equation} 177 | v := \frac{ \epsilon^{\alpha}({v}_{\delta} ) }{ \epsilon^0({v}_{\delta}) } e_{\alpha} 178 | \end{equation} 179 | 180 | \underline{Consequence}: 181 | An observer $(\gamma, e)$ will extract quantities measurable in his laboratory from objective spacetime quantities always like that. 
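\underline{Aside} (editorial sanity check, \emph{not} from the lecture): one can test (P4) on the special-relativity example above, \emph{assuming} the observer $\gamma$ carries the natural frame $e_a = \left( \frac{ \partial }{ \partial x^a} \right)$, so that $\epsilon^a = (dx^a)$. On the first branch of $\delta$ one has $v_{\delta} \propto (1,\alpha,0,0)^i \left( \frac{ \partial }{ \partial x^i} \right)$, hence
\[
\frac{ \epsilon^1(v_{\delta}) }{ \epsilon^0(v_{\delta}) } = \alpha , \quad \quad \, \frac{ \epsilon^2(v_{\delta}) }{ \epsilon^0(v_{\delta}) } = \frac{ \epsilon^3(v_{\delta}) }{ \epsilon^0(v_{\delta}) } = 0 ,
\]
i.e., the observer measures the 3-velocity $\alpha \, e_1$, a speed $\alpha < 1$, consistent with (P1) and independent of the normalization of $v_{\delta}$.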
182 | 183 | \underline{Ex}: $F$ Faraday $(0,2)$-tensor of electromagnetism: 184 | 185 | \[ 186 | F(e_a,e_b) = F_{ab} = \left[ \begin{matrix} 0 & E_1 & E_2 & E_3 \\ 187 | -E_1 & 0 & B_3 & -B_2 \\ 188 | -E_2 & -B_3 & 0 & B_1 \\ 189 | -E_3 & B_2 & -B_1 & 0 \end{matrix} \right] 190 | \] 191 | observer frame $e_a,e_b$ 192 | 193 | $E_{\alpha} := F(e_0,e_{\alpha})$ \\ 194 | $B^{\gamma}:= F(e_{\alpha},e_{\rho})\epsilon^{\alpha \beta \gamma}$ 195 | where 196 | $\epsilon^{123} = +1$ totally antisymmetric 197 | 198 | \subsection{Role of the Lorentz transformations} 199 | 200 | Lorentz transformations emerge as follows: \\ 201 | Let $(\gamma,e)$ and $(\widetilde{\gamma},\widetilde{e})$ be observers with $\gamma(\tau_1) = \widetilde{\gamma}(\tau_2)$ 202 | 203 | (for simplicity $\gamma(0) = \widetilde{\gamma}(0)$ 204 | 205 | Now 206 | \[ 207 | \begin{gathered} 208 | e_0 , \dots , e_1 \quad \quad \quad \, \text{ at } \tau = 0 \\ 209 | \text{ and } 210 | \widetilde{e}_0 , \dots , \widetilde{e}_1 \quad \quad \quad \, \text{ at } \tau = 0 \\ 211 | \end{gathered} 212 | \] 213 | both bases for the same $T_{\gamma(0)}M$ 214 | 215 | \underline{Thus}: $\widetilde{e}_a = \Lambda^b_{ \, \, a} e_b $ \quad \quad \, $\Lambda \in GL(4)$ 216 | 217 | Now: 218 | 219 | \[ 220 | \begin{aligned} 221 | \eta_{ab} = g(\widetilde{e}_a, \widetilde{e}_b) & = g(\Lambda^m_{ \, \, a}e_m, \Lambda^n_{ \, \, b} e_n ) = \\ 222 | & = \Lambda^m_{ \, \, a} \Lambda^n_{ \, \, b} \underbrace{g(e_m,e_n)}_{ \eta_{mn}} 223 | \end{aligned} 224 | \] 225 | i.e. $\Lambda \in O(1,3)$ 226 | 227 | \underline{Result}: Lorentz transformations relate the \emph{frames} of \emph{any two observers} at the same point. 228 | 229 | ``$\widetilde{x}^{\mu} - \Lambda^{\mu}_{ \, \, \nu} x^{\nu}$'' is utter nonsense 230 | 231 | \subsection*{Tutorial} 232 | 233 | I didn't see a tutorial video for this lecture, but I saw that the Tutorial sheet number 14 had the relevant topics. Go there. 234 | 235 | -------------------------------------------------------------------------------- /lecture14.tex: -------------------------------------------------------------------------------- 1 | \section{Lecture 14: \underline{Matter}} 2 | 3 | two types of matter 4 | 5 | point matter 6 | 7 | \underline{field matter} 8 | 9 | \underline{point matter} 10 | 11 | massive point particle 12 | 13 | more of a phenomenological importance 14 | 15 | \underline{field matter} 16 | 17 | electromagnetic field 18 | 19 | more fundamental from the GR point of view 20 | 21 | 22 | both classical matter types 23 | 24 | 25 | \subsection{Point matter} 26 | 27 | Our postulates (P1) and (P2) already constrain the possible particle worldlines. 
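\underline{Aside} (editorial illustration, \emph{not} from the lecture): in the special-relativity chart of Lecture 13, a candidate worldline with $\dot{\gamma}^i_{(x)} = (1, v^1, v^2, v^3)^i$ has $g_{ij}\dot{\gamma}^i_{(x)}\dot{\gamma}^j_{(x)} = 1 - \delta_{\alpha\beta} v^{\alpha} v^{\beta}$, so (P1) confines massive particles to chart-speeds $\delta_{\alpha\beta}v^{\alpha}v^{\beta} < 1$, while (P2) forces massless particles onto the cone $\delta_{\alpha\beta}v^{\alpha}v^{\beta} = 1$; in both cases $g(T, v_{\gamma}) = \dot{\gamma}^0_{(x)} = 1 > 0$ selects the future half.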
28 | 29 | But what is their precise law of motion, possibly in the presence of ``forces'', 30 | 31 | \begin{enumerate} 32 | \item[(a)] \underline{without external forces} 33 | \[ 34 | S_{\text{massive}}[\gamma] := m \int d\lambda \sqrt{ g_{\gamma(\lambda)}( v_{\gamma,\gamma(\lambda)} , v_{\gamma,\gamma(\lambda) } ) } 35 | \] 36 | \underline{with}: 37 | \[ 38 | g_{\gamma(\lambda)}(T_{\gamma(\lambda)}, v_{\gamma, \gamma(\lambda) } ) > 0 39 | \] 40 | dynamical law Euler-Lagrange equation 41 | 42 | \underline{similarly} 43 | \[ 44 | S_{\text{massless}}[\gamma,\mu] = \int d\lambda \mu g(v_{\gamma, \gamma(\lambda)} , v_{\gamma,\gamma(\lambda)} ) 45 | \] 46 | \[ 47 | \begin{aligned} 48 | \delta_{\mu} \quad \quad \, & g(v_{\gamma,\gamma(\lambda)}, v_{\gamma,\gamma(\lambda) } ) = 0 \\ 49 | \delta_{\gamma} \quad \quad \, & \text{e.o.m.} 50 | \end{aligned} 51 | \] 52 | 53 | Reason for describing equations of motion by actions is that composite systems have an action that is the sum of the actions of the parts of that system, possibly including ``\underline{interaction terms.}'' 54 | 55 | \underline{Example}. \[ 56 | S[\gamma] + S[\delta] + S_{\text{int}}[\gamma,\delta] 57 | \] 58 | \item[(b)] \underline{presence of external forces} \\ 59 | or rather presence of \underline{fields} to which a particle ``\underline{couples}'' 60 | 61 | \underline{Example} 62 | \[ 63 | S[\gamma;A] = \int d\lambda m \sqrt{ g_{\gamma(\lambda)}(v_{\gamma, \gamma(\lambda)}, v_{\gamma,\gamma(\lambda)} ) } + qA(v_{\gamma,\gamma(\lambda)}) 64 | \] 65 | where $A$ is a \textbf{covector field} on $M$. $A$ fixed 66 | (e.g. the electromagnetic potential) 67 | \end{enumerate} 68 | 69 | Consider Euler-Lagrange eqns. $L_{\text{int}} = q A_{(x)} \dot{\gamma}^m_{(x)}$ 70 | \[ 71 | m (\nabla_{v_{\gamma}} v_{\gamma})_a + \underbrace{ \dot{ \left( \frac{ \partial L_{\text{int}} }{ \partial \dot{\gamma}^m_{(x)} } \right) }- \frac{ \partial L_{\text{int}} }{ \partial \gamma^m_{(x)} } }_{*} = 0 \Longrightarrow \boxed{ m (\nabla_{v_{\gamma} } v_{\gamma})^a = \underbrace{ -q F^a_{ \, \, m } \dot{\gamma}^m }_{\text{Lorentz force on a charged particle in an electromagnetic field } } } 72 | \] 73 | \[ 74 | \frac{ \partial L}{ \partial \dot{\gamma}^a} = qA_{(x)a}, \quad \quad \, \dot{ \left( \frac{ \partial L}{ \partial \dot{\gamma}^m} \right) } = q \cdot \frac{ \partial }{ \partial x^m} (A_{(x)m} ) \cdot \dot{\gamma}^m_{(x)} 75 | \] 76 | \[ 77 | \frac{ \partial L}{ \partial \gamma^a} = q \cdot \frac{ \partial }{ \partial x^a} (A_{(x)m} ) \dot{\gamma}^m 78 | \] 79 | \[ 80 | \begin{aligned} 81 | * & = q\left( \frac{ \partial A_a}{ \partial x^m} - \frac{ \partial A_m}{ \partial x^a} \right) \dot{\gamma}^m_{(x)} 82 | & = q \cdot F_{(x)am} \dot{\gamma}^m_{(x)} 83 | \end{aligned} 84 | \] 85 | $F \leftarrow $ Faraday 86 | 87 | \[ 88 | S[\gamma] = \int(m\sqrt{g(v_{\gamma},v_{\gamma} ) } + q A(v_{\gamma}) ) d\lambda 89 | \] 90 | 91 | \subsection{Field matter} 92 | 93 | \begin{definition} 94 | Classical (non-quantum) field matter is any tensor field on spacetime where equations of motion derive from an action. 
95 | \end{definition} 96 | 97 | \underline{Example}: 98 | \[ 99 | S_{\text{Maxwell}}[A] = \frac{1}{4}\int_M d^4x \sqrt{-g}F_{ab}F_{cd}g^{ac}g^{bd} 100 | \] 101 | $A$ $(0,1)$-tensor field \\ 102 | $=$ thought cloud: for \underline{simplicity} one chart covers all of $M$ \\ 103 | $-$ for $\sqrt{-g}$ $(+---)$ \\ 104 | 105 | $F_{ab} := 2\partial_{[a}A_{b]} = 2(\nabla_{[a} A)_{b]}$ 106 | 107 | \underline{Euler-Lagrange equations for fields} 108 | \[ 109 | 0 = \frac{ \partial \mathcal{L}}{ \partial A_m} - \frac{ \partial }{ \partial x^s} \left( \frac{ \partial \mathcal{L}}{ \partial \partial _s A_m } \right) + \frac{ \partial }{ \partial x^s} \frac{ \partial }{ \partial x^t} \frac{ \partial^2 \mathcal{L}}{ \partial \partial_t \partial_s A_m } 110 | \] 111 | 112 | \underline{Example} \dots 113 | \[ 114 | (\nabla_{\frac{ \partial }{ \partial x^m} }F)^{ma} = j^a 115 | \] 116 | \textbf{in}homogeneous Maxwell 117 | 118 | thought bubble $j=qv_{\gamma}$ 119 | 120 | \[ 121 | \partial_{[a}F_{b]} - () 122 | \] 123 | homogeneous Maxwell 124 | 125 | Other example well-liked by textbooks 126 | \[ 127 | S_{\text{Klein-Gordon}}[\phi] := \int_M d^4x \sqrt{-g}[g^{ab}(\partial_a \phi) (\partial_b \phi ) - m^2\phi^2] 128 | \] 129 | $\phi$ $(0,0)$-tensor field 130 | 131 | \subsection{Energy-momentum tensor of matter fields} 132 | 133 | At some point, we want to write down an \underline{action} for the metric tensor field itself. 134 | 135 | But then, this action $S_{\text{grav}}[g]$ will be added to any $S_{\text{matter}}[A,\phi,\dots]$ in order to describe the total system. 136 | 137 | \[ 138 | S_{\text{total}}[g,A] = S_{\text{grav}}[g] + S_{\text{Maxwell}}[A,g] 139 | \] 140 | 141 | \[ 142 | \begin{aligned} 143 | & \delta A & : \Longrightarrow \text{ Maxwell's equations } \\ 144 | & \delta g_{ab} & : \boxed{ \frac{1}{ 16 \pi G } G^{ab} } + (-2T^{ab} ) = 0 145 | \end{aligned} 146 | \] 147 | $G$ Newton's constant 148 | 149 | \[ 150 | G^{ab} = 8 \pi G_N T^{ab} 151 | \] 152 | 153 | \begin{definition} 154 | $ S_{\text{matter}}[\Phi,g] $ is a matter action, the \textbf{so-called energy-momentum tensor} is 155 | \[ 156 | T^{ab} := \frac{-2}{ \sqrt{-g}} \left( \frac{ \partial \mathcal{L}_{\text{matter}} }{ \partial g_{ab}} - \partial_s \frac{ \partial \mathcal{L}_{\text{matter}} }{ \partial \partial_s g_{ab}} + \dots \right) 157 | \] 158 | \end{definition} 159 | $-$ of $\frac{-2}{\sqrt{g}}$ is Schr\"{o}dinger minus (EY : 20150408 F.Schuller's joke? but wise) 160 | 161 | choose all sign conventions s.t. 
162 | \[ 163 | T(\epsilon^0,\epsilon^0) >0 164 | \] 165 | 166 | \underline{Example}: For $S_{\text{Maxwell}}$: 167 | \[ 168 | T_{ab} = F_{am} F_{bn}g^{mn} - \frac{1}{4} F_{mn} F^{mn} g_{ab} 169 | \] 170 | $T_{ab} \equiv T_{\text{Maxwell}\,ab}$ 171 | 172 | \[ 173 | T(e_0,e_0) = \underline{E}^2+\underline{B}^2 174 | \] 175 | \[ 176 | T(e_0,e_{\alpha}) = (E\times B)_{\alpha} 177 | \] 178 | 179 | \underline{Fact}: One often does not specify the fundamental action for some matter; rather, one is satisfied to assume certain properties / forms of 180 | \[ 181 | T_{ab} 182 | \] 183 | 184 | \underline{Example} (Cosmology: homogeneous \& isotropic) 185 | 186 | a perfect fluid \\ 187 | 188 | of pressure $p$ and density $\rho$, 189 | modelled by 190 | \[ 191 | T^{ab} = (\rho + p)u^a u^b - pg^{ab} 192 | \] 193 | 194 | radiative fluid 195 | 196 | What is a fluid of photons? 197 | 198 | observe: $\begin{aligned} 199 | & T_{\text{Maxwell}}^{ \, \, ab} g_{ab} = 0 \\ 200 | & T_{\text{p.f.}}^{ \, \, ab} g_{ab} \overset{!}{=} 0 \\ 201 | & = (\rho + p)\underbrace{ u^a u^b g_{ab} }_{1} - p\underbrace{ g^{ab} g_{ab} }_{ 4} 202 | \end{aligned}$ 203 | 204 | \[ 205 | \begin{aligned} \Longleftrightarrow \quad & \rho + p - 4p = 0 \\ 206 | & \rho = 3p 207 | \end{aligned} 208 | \] 209 | 210 | $p=\frac{1}{3}\rho$ 211 | 212 | Reconvene at 3 pm? (EY : 20150409 I sent a Facebook (FB) message to the International Winter School on Gravity and Light: there was no missing video; it continues immediately with Lecture 15) 213 | 214 | \subsection*{Tutorial 14: Relativistic Spacetime, Matter and Gravitation} 215 | 216 | \exercisehead{2: Lorentz force law} 217 | 218 | \questionhead{electromagnetic potential} 219 | 220 | 221 | 222 | 223 | 224 | -------------------------------------------------------------------------------- /lecture15.tex: -------------------------------------------------------------------------------- 1 | \section{Einstein gravity} 2 | 3 | Recall that in Newtonian spacetime, we were able to reformulate the Poisson law $\Delta \phi = 4\pi G_N \rho$ in terms of the Newtonian spacetime curvature as 4 | \[ 5 | R_{00} = 4\pi G_N \rho 6 | \] 7 | $R_{00}$ with respect to $\nabla_{\text{Newton}}$, and $G_N = $ the Newtonian gravitational constant. 8 | 9 | This prompted Einstein to postulate that the relativistic field equations for the Lorentzian metric $g$ of (relativistic) spacetime are 10 | \[ 11 | R_{ab} = 8\pi G_N T_{ab} 12 | \] 13 | However, this equation suffers from a problem. We know from matter theory that on the RHS $(\nabla_a T)^{ab} = 0$, since $T$ has been derived from an action. But on the LHS, $(\nabla_a R)^{ab} \neq 0$ generically. Einstein tried to argue this problem away; nevertheless, the equations cannot be upheld. 14 | 15 | \subsection{Hilbert} 16 | Hilbert was a specialist in variational principles. To find the appropriate LHS of the gravitational field equations, Hilbert suggested starting from the action 17 | \[ 18 | S_{\text{Hilbert}}[g] = \int_M \sqrt{-g} R_{ab} g^{ab} 19 | \] 20 | which is, in a sense, the ``simplest'' action one can write down for the metric. \\ 21 | \underline{Aim}: varying this w.r.t. metric $g_{ab}$ will result in some tensor $G^{ab}$.
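\underline{Aside} (editorial remark, \emph{not} from the lecture): the factor $\sqrt{-g} := \sqrt{ -\text{det}(g_{ij}) }$ plays exactly the role of the chart volume density from the integration section, the minus sign accounting for the $(+---)$ signature. Under a change of chart,
\[
\sqrt{ -\text{det}(g_{(y)ij}) } = \left| \text{det}\left( \frac{ \partial x}{ \partial y} \right) \right| \sqrt{ -\text{det}(g_{(x)ij}) } ,
\]
so $\int_M \sqrt{-g} \, R_{ab} g^{ab}$ is well defined, chart by chart, on the oriented atlas $\mathcal{A}^{\uparrow}$.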
22 | 23 | \subsection{Variation of $S_{\text{Hilbert}}$} 24 | \begin{align*} 25 | 0 \overset{!}{=} \underbrace{\delta}_{g_i} S_{\text{Hilbert}}[g] = \int_M [ \underbrace{\delta \sqrt{-g} }_{1} \, g^{ab}R_{ab} + \sqrt{-g} \, \underbrace{\delta g^{ab}}_{2} R_{ab} + \sqrt{-g} \, g^{ab} \underbrace{\delta R_{ab}}_{3} ] 26 | \end{align*} 27 | 28 | ad 1: $\delta \sqrt{-g} = \frac{- (\text{det}g)g^{mn} \delta g_{mn}}{2 \sqrt{-g}} = \frac{1}{2} \sqrt{-g} g^{mn} \delta g_{mn}$ \\ 29 | the above comes from $\delta \text{det}(g) = \text{det}(g) g^{mn} \delta g_{mn} \text{ e.g. from } \text{det}(g) = \exp{\text{tr}{\ln{g}}}$ 30 | 31 | ad 2: $g^{ab}g_{bc} = \delta^a_c \implies (\delta g^{ab})g_{bc} + g^{ab}(\delta g_{bc}) = 0 \implies \delta g^{ab} = -g^{am} g^{bn} \delta g_{mn}$ 32 | 33 | ad 3: \begin{align*} 34 | \Delta R_{ab} & \underbrace{=}_{\text{normal coords at point}} \delta \partial_b \ccf{m}{am} - \delta \partial_m \ccf{m}{ab} + \Gamma \Gamma - \Gamma \Gamma \\ 35 | & = \partial_b \delta \ccf{m}{am} - \partial_m \delta \ccf{m}{ab} = \nabla_b (\delta \Gamma)\indices{^{m}_{am}} - \nabla_m (\delta \Gamma)\indices{^{m}_{ab}} \\ 36 | & \implies \sqrt{-g} g^{ab} \delta R_{ab} = \sqrt{-g} 37 | \end{align*} 38 | ``if you formulate the variation properly, you'll see the variation $\delta$ commute with $\partial _b$'' 39 | %EY : 20150408 I think one uses the integration at the bounds, integration by parts trick 40 | 41 | $\ccfx{i}{jk}{(x)} - \widetilde{\Gamma_{(x)}}\indices{^i_{jk}}$ are the components of a $(1,2)$-tensor. \\ 42 | Let us use the notation: $(\nabla_b A)\indices{^i_j} =: A\indices{^i_{j;b}}$ 43 | 44 | \[ 45 | \therefore \sqrt{-g} g^{ab} \delta R_{ab} \underbrace{=}_{ \nabla g = 0 } \sqrt{-g} (g^{ab} \delta \ccf{m}{am})_{;b} - \sqrt{-g} (g^{ab} \delta \ccf{m}{ab})_{ ; m} = \sqrt{-g} \, A\indices{^b_{;b}} - \sqrt{-g} \, B\indices{^m_{, m}} 46 | \] 47 | 48 | Question: Why is the difference of coefficients a tensor? 49 | 50 | Answer: 51 | \begin{align*} 52 | \ccfx{i}{jk}{(y)} = \cibasis[y^i]{x^m} \cibasis[x^m]{y^j} \cibasis[x^q]{y^k} \ccfx{m}{nq}{(x)} + \cibasis[y^i]{x^m} \frac{ \partial^2 x^m}{ \partial y^j \partial y^k} 53 | \end{align*} 54 | 55 | Collecting terms, one obtains 56 | \begin{align*} 57 | 0 & \overset{!}{=} \delta S_{\text{Hilbert}} = \int_M [ \frac{1}{2} \sqrt{-g} \, g^{mn} \delta g_{mn} g^{ab} R_{ab} - \sqrt{-g} \, g^{am} g^{bn} \delta g_{mn} R_{ab} + \underbrace{(\sqrt{-g} \, A^a)_{ \, , a} }_{\text{surface}} - \underbrace{(\sqrt{-g} \, B^b)_{ \, , b }}_{\text{surface term}}] \\ 58 | & = \int_M \sqrt{-g} \, \delta \underbrace{g_{mn}}_{\text{arbitrary variation}} [\frac{1}{2} g^{mn} R - R^{mn}] \implies G^{mn} = R^{mn} - \frac{1}{2} g^{mn} R 59 | \end{align*} 60 | 61 | Hence Hilbert, from this ``mathematical'' argument, concluded that one may take 62 | \[ 63 | \boxed{ R_{ab} - \frac{1}{2} g_{ab} R = 8 \pi G_N T_{ab} } \\ 64 | \] 65 | Einstein equations 66 | \[ 67 | S_{E-H}[g] = \int_M \sqrt{-g} \, R 68 | \] 69 | 70 | \subsection{Solution of the $\nabla_a T^{ab} =0$ issue} 71 | One can show ($\to$ Tutorials) that the \underline{Einstein curvature} 72 | \[ 73 | G_{ab} = R_{ab} - \frac{1}{2} g_{ab}R 74 | \] 75 | satisfy the so-called \underline{contracted differential Bianchi identity} $(\nabla_a G)^{ab} = 0$. 
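\underline{Aside} (editorial sketch of the tutorial argument, \emph{not} from the lecture): starting from the second Bianchi identity $\nabla_{[a} R_{bc]de} = 0$ and contracting twice with the inverse metric (allowed since $\nabla g = 0$), one obtains
\[
\nabla_a R\indices{^a_b} = \frac{1}{2} \nabla_b R \quad \Longrightarrow \quad \nabla_a \left( R^{ab} - \frac{1}{2} g^{ab} R \right) = 0 ,
\]
so the left-hand side of the Einstein equations is automatically divergence-free, matching $(\nabla_a T)^{ab} = 0$ from the matter action.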
76 | 77 | \subsection{Variants of the field equations} 78 | \begin{enumerate}[(a)] 79 | \item a simple rewriting: 80 | \begin{align*} 81 | & R_{ab} - \frac{1}{2} g_{ab} R = 8 \pi G_N T_{ab} = T_{ab} && (G_N = \frac{1}{8\pi}) \\ 82 | & R_{ab} - \frac{1}{2} g_{ab} R = T_{ab} \, || \, g^{ab} && (\text{contract on both sides with } g^{ab}) \\ 83 | & R - 2R = T := T_{ab}g^{ab} \\ 84 | \implies & R = -T \\ 85 | \implies & R_{ab} + \frac{1}{2} g_{ab} T = T_{ab} \\ 86 | \Longleftrightarrow & R_{ab} = (T_{ab} - \frac{1}{2} Tg_{ab}) =: \widehat{T}_{ab} \\ 87 | \therefore \quad & \boxed{ R_{ab} = \widehat{T}_{ab}} 88 | \end{align*} 89 | 90 | \item $S_{E-H}[g] := \int_M \sqrt{-g} (R+ 2\Lambda)$ \quad \quad ($\Lambda$ is called cosmological constant) 91 | 92 | \underline{History:} \\ 93 | 1915: $\Lambda < 0$ (Einstein) in order to get a non-expanding universe \\ 94 | $>$1915: $\Lambda = 0$ (Hubble) \\ 95 | today: $\Lambda > 0$ to account for an accelerated expansion \\ 96 | $\Lambda \neq 0$ can be interpreted as a contribution $-\frac{1}{2} \Lambda g$ to the energy-momentum of matter in spacetime. This energy, which does not interact with anything but contributes to the curvature is called ``dark energy''. 97 | 98 | Question: surface terms scalar? 99 | 100 | Answer: for a careful treatment of the surface terms which we discarded, see, e.g. E. Poisson, ``A relativist's toolkit'' C.U.P. ``excellent book'' 101 | 102 | Question: What is a constant on a manifold? \\ 103 | Answer: $\int \sqrt{-g} \, \Lambda = \Lambda \int \sqrt{-g} \, 1$ 104 | 105 | [back to dark energy] 106 | 107 | [Weinberg used QCD to calculate $\Lambda$ using the idea that $\Lambda$ could arise as the vacuum energy of the standard model fields. It turns out that \\ 108 | $\Lambda_{\text{calculated}} = 10^{120} \times \Lambda_{\text{obs}}$ \\ 109 | which is called the ``worst prediction of physics''. 110 | 111 | 112 | \underline{Tutorials}: \underline{check that } 113 | \begin{itemize} 114 | \item Schwarzscheld metric (1916) 115 | \item FRW metric 116 | \item pp-wave metric 117 | \item Reisner-Nordstrom 118 | \end{itemize} 119 | $\Longrightarrow $ are solutions to Einstein's equations 120 | \end{enumerate} 121 | 122 | in high school 123 | 124 | $m\ddot{x} + m\omega^2 x^2=0$ 125 | 126 | $x(t) = \cos{(\omega t)}$ 127 | 128 | \underline{ET}: [elementary tutorials] 129 | 130 | study motion of particles \& observers in Schwarzschild S.T. 131 | 132 | \underline{Satellite lectures}: \\ 133 | Marcus C. Werner: Gravitational lensing 134 | 135 | odd number of pictures Morse theory (EY:20150408 Morse Theory !!!) 136 | 137 | Domenico Giulini: Canonical Formulations of GR 138 | 139 | Hamiltonian form 140 | 141 | Key to Quantum Gravity 142 | -------------------------------------------------------------------------------- /lecture18.tex: -------------------------------------------------------------------------------- 1 | \section{L18: Canonical Formulation of GR-I} 2 | 3 | \subsection{Dynamical and Hamiltonian formulation of General Relativity} 4 | Purpose: 5 | \begin{enumerate}[1)] 6 | \item formulate and solve initial-value problems 7 | \item integrate Einstein's Equations by numerical codes 8 | \item characterise degrees of freedom 9 | \item characterise isolated systems, associated symmetry groups and conserved quantities like Energy/Mass, Momenta (linear and angular), Poincare charges 10 | \item starting point for ``canonical quantisation'' program 11 | \end{enumerate} 12 | 13 | How do we achieve this goal? 
We will rewrite Einstein's Equations in form of a constrained Hamiltonian system. 14 | 15 | \[ 16 | \underbrace{R\indices{_{\mu\nu}} - \frac{1}{2} g\indices{_{\mu\nu}}R}_{G\indices{_{\mu\nu}}} + \underbrace{\Lambda}_{\text{cosmological constant}} g\indices{_{\mu\nu}} = \underbrace{k}_{\frac{8 \pi G}{c^4}} T\indices{_{\mu\nu}} 17 | \] 18 | $k = \frac{8 \pi G}{c^4}$ is an important quantity as it turns the energy density $T\indices{_{\mu\nu}}$ into curvature. \\ 19 | Physical dimensions: \\ 20 | \begin{align*} 21 | \text{for curvature, } [G\indices{_{\mu\nu}}] & = \frac{1}{m^2}, \\ 22 | \text{for energy density } [T\indices{_{\mu\nu}}] & = \frac{\text{Joule}}{m^3} \\ 23 | \therefore \, [k] & = \frac{\frac{1}{m^2}}{\frac{J}{m^3}} = \frac{m}{J} 24 | \end{align*} 25 | 26 | \begin{framed} 27 | \textbf{Convention} (for this lecture): \\ 28 | Greek indices run from $0$ to $3$ and latin indices from $1$ to $3$ \\ 29 | signature is $(-,+,+,+)$ as it makes space positive definite in $3+1$-decomposition\\ 30 | $T\indices{_{00}}$ is positive energy density. \\ 31 | \end{framed} 32 | -------------------------------------------------------------------------------- /lecture2.tex: -------------------------------------------------------------------------------- 1 | \section{Manifolds} 2 | \begin{framed} 3 | \textbf{Motivation}: There exist so many topological spaces that mathematicians cannot even classify them. For spacetime physics, we may focus on topological spaces $(M, \mathcal{O})$ that can be charted, analogously to how the surface of the earth is charted in an atlas. 4 | \end{framed} 5 | 6 | \subsection{Topological manifolds} 7 | \begin{definition} 8 | A topological space $(M, \mathcal{O})$ is called a \textbf{d-dimensional topological manifold} if \\ 9 | $\forall p \in M : \exists U \in \mathcal{O} : p \in U, \exists x : U \to x(U) \subseteq \R^d$ satisfying the following: 10 | \begin{enumerate} 11 | \item[(i)] \textbf{$x$ is invertible}: $x^{-1} : x(U) \to U$ 12 | \item[(ii)] \textbf{$x$ is continuous} w.r.t. $(M, \mathcal{O})$ and $(\R^d, \mathcal{O}_{std})$ 13 | \item[(iii)] \textbf{$x^{-1}$ is continuous} 14 | \end{enumerate} 15 | \end{definition} 16 | 17 | \subsection{Terminology} 18 | \begin{enumerate} 19 | \item The tuple $(U , x)$ is a \textbf{chart} of $(M, \mathcal{O})$, 20 | \item An \textbf{atlas} of $(M, \mathcal{O})$ is a set $\A = \lbrace (U_{\alpha}, x_{\alpha}) | \alpha \in A, \text{an index set} \rbrace : \bigcup_{\alpha \in A}U_{\alpha} = M$. 21 | \item The map $x : U \to x(U) \subseteq \R^d$ is called the \textbf{chart map}. 22 | \item The chart map $x$ maps a point $ p \in U$ to a d-tuple of real numbers $x(p) = (x^1(p), x^2(p), \dots , x^d(p))$. This is equivalent to d-many maps $x^i(p): U \to \R$, which are called the \textbf{coordinate maps}. 23 | \item If $p \in U$, then $x^i(p)$ is the \textbf{ith coordinate of $p$} w.r.t. the chart $(U, x)$. 24 | \end{enumerate} 25 | 26 | \subsection{Chart transition maps} 27 | Imagine 2 charts $(U, x)$ and $(V, y)$ with overlapping regions, i.e., $U \cap V \neq \emptyset$. 
28 | 29 | \[ 30 | \begin{tikzpicture} 31 | \matrix (m) [matrix of nodes, row sep=3em, column sep=5em, text height=1.5ex, text depth=0.25ex] 32 | { \text{ } & $U \cap V$ & \text{ } \\ 33 | $\R^d \supseteq x(U \cap V)$ & \text{ } & $y(U \cap V) \subseteq \R^d$ \\ }; 34 | \path[->] 35 | (m-1-2) edge node[above] {$y$} (m-2-3) 36 | edge node[below] {$x$} (m-2-1); 37 | \path[->] 38 | (m-2-1) edge[bend left=20] node[above] {$x^{-1}$} (m-1-2) 39 | edge node[below] {$y \after x^{-1}$} (m-2-3); 40 | \end{tikzpicture} 41 | \] 42 | 43 | The map $y \after x^{-1}$ is called the \textbf{chart transition map}, which maps an open set of $\R^d$ to another open set of $\R^d$. This map is continuous because it is composition of two continuous maps, Informally, these chart transition maps contain instructions on how to glue together the charts of an atlas, 44 | 45 | \subsection{Manifold philosophy} 46 | Often it is desirable (or indeed the only way) to define properties (e.g., `continuity') of real-world object (e.g., the curve $\gamma : \R \to M$) by judging suitable coordinates not on the `real-world' object itself, but on a chart-representation of that real world object. 47 | 48 | For example, in the picture below, we can use the map $x \after \gamma$ to infer the continuity of the curve $\gamma$ in $U \subseteq M$. 49 | \[ 50 | \begin{tikzpicture} 51 | \matrix (m) [matrix of nodes, row sep=3em, column sep=5em, text height=1.5ex, text depth=0.25ex] 52 | { $\R$ & $U \subseteq M$ \\ 53 | \text{ } & $x(U) \subseteq \R^d$ \\ }; 54 | \path[->] 55 | (m-1-1) edge node[above] {$\gamma$} (m-1-2) 56 | edge node[sloped, anchor=center, below] {$x \after \gamma$} (m-2-2); 57 | \path[->] 58 | (m-1-2) edge node[right] {$x$} (m-2-2); 59 | \end{tikzpicture} 60 | \] 61 | 62 | However, we need to ensure that the defined property does not change if we change our chosen chart. For example, in the picture below, continuity in $x \after \gamma$ should imply $y \after \gamma$. This is true, since 63 | $y \after \gamma = y \after (x^{-1} \after x) \after \gamma = (y \after x^{-1}) \after (x \after \gamma)$ is continuous because it is a composition of two continuous functions, thanks to the continuity of the chart transition map $y \after x^{-1}$. 64 | 65 | \[ 66 | \begin{tikzpicture} 67 | \matrix (m) [matrix of nodes, row sep=4em, column sep=5em, text height=1.5ex, text depth=0.25ex] 68 | { \text{ } & $y(U) \subseteq \R^d$ \\ 69 | $\R$ & $U \subseteq M$ \\ 70 | \text{ } & $x(U) \subseteq \R^d$ \\ }; 71 | \path[->] 72 | (m-2-1) edge node[above] {$\gamma$} (m-2-2) 73 | edge node[sloped, anchor=center, above] {$y \after \gamma$} (m-1-2) 74 | edge node[sloped, anchor=center, below] {$x \after \gamma$} (m-3-2); 75 | \path[->] 76 | (m-2-2) edge node[right] {$y$} (m-1-2) 77 | edge node[right] {$x$} (m-3-2); 78 | \path[->] 79 | (m-3-2) edge[bend left=20] node[left] {$x^{-1}$} (m-2-2) 80 | edge[bend right=50] node[right] {$y \after x^{-1}$} (m-1-2); 81 | \end{tikzpicture} 82 | \] 83 | 84 | What about differentiability? Does differentiability of $x \after \gamma$ guarantee differentiability of $y \after \gamma$? No. Since composition of a differentiable map and a continuous map might only be continuous, The solution is to restrict the atlas by removing those charts which are not differentiable. Thus, we have got rid of our problem. However, we must remember that with the present structure, we cannot define differentiability at manifold level since we do not know how to subtract or divide in $U \subseteq M$. 
Therefore, differentiability of $\gamma : \R \to M$ makes no sense yet. 85 | -------------------------------------------------------------------------------- /lecture22.tex: -------------------------------------------------------------------------------- 1 | \section{Lecture 22: \underline{Black Holes}} 2 | 3 | Only depends on Lectures 1-15, so does lecture on ``Wednesday'' 4 | 5 | Schwarzschild solution also vacuum solution (from tutorial EY : oh no, must do tutorial) 6 | 7 | Study the Schwarzschild as a vacuum solution of the Einstein equation: 8 | 9 | $m = G_N M$ where $M$ is the ``mass'' 10 | \[ 11 | g = \left( 1 - \frac{2m}{r} \right) dt \otimes dt - \frac{1}{ 1 - \frac{2m}{r} } dr \otimes dr - r^2 ( d\theta \otimes d\theta + \sin^2{\theta} d\varphi \otimes d\varphi 12 | \] 13 | in the so-called \underline{Schwarzschild coordinates} $\begin{aligned} & & & & \quad \\ 14 | t \quad & r \quad & \theta \quad & \varphi \\ 15 | (-\infty,\infty) \quad & (0,\infty) \quad & (0,\pi) \quad & (0,2\pi) \end{aligned}$ 16 | 17 | What staring at this metric for a while, two questions naturally pose themselves: 18 | 19 | \begin{enumerate} 20 | \item[(i)] What exactly happens \@ $r= 2m$? 21 | 22 | $\begin{aligned} & & & & \quad \\ 23 | t \quad & r \quad & \theta \quad & \varphi \\ 24 | (-\infty,\infty) \quad & (0,2m) \cup ( 2m, \infty) \quad & (0,\pi) \quad & (0,2\pi) \end{aligned}$ 25 | 26 | 27 | \item[(ii)] Is there anything (in the real world) beyond $\begin{aligned} & \quad \\ 28 | & t \to -\infty \\ 29 | & t\to +\infty \end{aligned}$? 30 | 31 | \underline{idea}: Map of Linz, blown up 32 | 33 | Insight into these two issues is afforded by stopping to stare. 34 | 35 | Look at \emph{geodesic} of $g$, instead. 36 | 37 | \end{enumerate} 38 | 39 | \subsection{Radial null geodesics} 40 | 41 | null - $g(v_{\gamma},v_{\gamma} ) = 0$ 42 | 43 | Consider null geodesic in ``\underline{Schd}'' 44 | 45 | \[ 46 | S[\gamma ] = \int d\lambda \left[ \left( 1 - \frac{2m}{r} \right)\dot{t}^2 - \left(1 - \frac{2m}{r} \right)^{-1} \dot{r}^2 - r^2( \dot{\theta}^2 + \sin^2{\theta} \dot{\varphi}^2 ) \right] 47 | \] 48 | with $[\dots ] =0$ 49 | 50 | and one has, in particular, the $t$-eqn. of motion: 51 | 52 | \[ 53 | \left( \left( 1- \frac{2m}{r} \right) \dot{t} \right)^{.} = 0 54 | \] 55 | $\Longrightarrow$ 56 | \[ 57 | \boxed{ \left( 1 - \frac{2m}{r} \right)\dot{t} = k } = \text{ const. } 58 | \] 59 | Consider \underline{radial} null geodesics \\ 60 | $\theta \overset{!}{=} \text{ const. }$ \quad \quad \, $\varphi = \text{ const. }$ 61 | 62 | From $\Box $ and $\Box $ 63 | \[ 64 | \Longrightarrow \dot{r}^2 = k^2 \leftrightarrow \dot{r} = \pm k 65 | \] 66 | \[ 67 | \Longrightarrow r(\lambda) = \pm k \cdot \lambda 68 | \] 69 | Hence, we may consider 70 | \[ 71 | \widetilde{t}(r) := t(\pm k\lambda) 72 | \] 73 | 74 | \underline{Case A:} $\oplus$ 75 | 76 | \[ 77 | \frac{d\widetilde{t}}{dr} = \frac{ \dot{ \widetilde{t}} }{ \dot{r}} = \frac{k}{ \left( 1 - \frac{2m}{r} \right) k } = \frac{r}{r-2m} 78 | \] 79 | \[ 80 | \Longrightarrow \widetilde{t}_+(r) = r + 2m \ln{ |r-2m | } 81 | \] 82 | (\textbf{outgoing} null geodesics) 83 | 84 | \underline{Case b.} $\pm$ (Circle around $-$, consider $-$): 85 | 86 | \[ 87 | \widetilde{t}_-(r) = -r - 2m \ln{ |r - 2m | } 88 | \] 89 | (\textbf{ingoing} null geodesics) 90 | 91 | Picture 92 | 93 | \subsection{Eddington-Finkelstein} 94 | 95 | Brilliantly simple idea: 96 | 97 | change (on the domain of the Schwarzschild coordinates) to different coordinates, s.t. 
\\ 98 | in those new coordinates, \\ 99 | \emph{ingoing} null geodesics appear as straight lines, of slope $-1$ 100 | 101 | This is achieved by 102 | 103 | \[ 104 | \bar{t}(t,r,\theta, \varphi) := t + 2m \ln{ | r-2m | } 105 | \] 106 | \underline{Recall}: ingoing null geodesic has 107 | \[ 108 | \widetilde{t}(r) = -(r + 2m \ln{ |r-2m |} ) \quad \quad \, (Schd coords) 109 | \] 110 | 111 | \[ 112 | \Longleftrightarrow \bar{t} - 2m \ln{ |r-2m |} = -r - 2m \ln{ |r-2m |} + \text{ const. } 113 | \] 114 | \[ 115 | \therefore \bar{t} = -r + \text{ const. } 116 | \] 117 | 118 | (Picture) 119 | 120 | \emph{outgoing} null geodesics 121 | 122 | \[ 123 | \bar{t} = r + 4 m \ln{ |r - 2m| } + \text{ const. } 124 | \] 125 | 126 | Consider the new chart $(V,g)$ while $(U,x)$ was the Schd chart. 127 | 128 | \[ 129 | \underbrace{U}_{\text{Schd}} \bigcup \lbrace \text{ horizon } \rbrace = V 130 | \] 131 | ``chart image of the horizon'' 132 | 133 | Now calculate the \emph{Schd metric $g$ } w.r.t. Eddington-Finkelstein coords. 134 | 135 | \[ 136 | \begin{aligned} 137 | & \bar{t}(t,r,\theta,\varphi) = t + 2m\ln{ |r -2m | } \\ 138 | & \bar{r}(t,r,\theta,\varphi) = r \\ 139 | & \bar{\theta}(t,r,\theta,\varphi) = \theta \\ 140 | & \bar{\varphi}(t,r,\theta,\varphi) = \varphi 141 | \end{aligned} 142 | \] 143 | 144 | EY : 20150422 I would suggest that after seeing this, one would calculate the metric by your favorite CAS. I like the Sage Manifolds package for Sage Math. 145 | 146 | \href{https://github.com/ernestyalumni/diffgeo-by-sagemnfd/blob/master/Schwarzschild_BH.sage}{Schwarzschild\_BH.sage on github} 147 | 148 | \href{https://www.patreon.com/file?s=645287&h=2254352&i=108637}{Schwarzschild\_BH.sage on Patreon} 149 | 150 | \href{https://drive.google.com/file/d/0B1H1Ygkr4EWJdllTR3czQU9DeW8/view?usp=sharing}{Schwarzschild\_BH.sage on Google Drive} 151 | 152 | \lstset{language=Python,basicstyle=\scriptsize\ttfamily, 153 | commentstyle=\ttfamily\color{gray}} 154 | \begin{lstlisting}[frame=single] 155 | sage: load(``Schwarzschild_BH.sage'') 156 | 4-dimensional manifold 'M' 157 | expr = expr.simplify_radical() 158 | Levi-Civita connection 'nabla_g' associated with the Lorentzian metric 'g' on the 4-dimensional manifold 'M' 159 | Launched png viewer for Graphics object consisting of 4 graphics primitives 160 | \end{lstlisting} 161 | 162 | Then calculate the Schwarzschild metric $g$ but in Eddington-Finkelstein coordinates. Keep in mind to calculate the set of coordinates that uses $\bar{t}$, not $\widetilde{t}$: 163 | 164 | \begin{lstlisting}[frame=single] 165 | sage: gI.display() 166 | gI = (2*m - r)/r dt*dt - r/(2*m - r) dr*dr + r^2 dth*dth + r^2*sin(th)^2 dph*dph 167 | sage: gI.display( X_EF_I_null.frame()) 168 | gI = (2*m - r)/r dtbar*dtbar + 2*m/r dtbar*dr + 2*m/r dr*dtbar + (2*m + r)/r dr*dr + r^2 dth*dth + r^2*sin(th)^2 dph*dph 169 | \end{lstlisting} 170 | 171 | 172 | 173 | -------------------------------------------------------------------------------- /lecture3.tex: -------------------------------------------------------------------------------- 1 | \section{Multilinear Algebra} 2 | 3 | \begin{framed} 4 | \textbf{Motivation}: The essential object of study of linear algebra is vector space. However, a word of warning here. We will not equip space(time) with vector space structure. This is evident since, unlike in vector space, expressions such as $5 \cdot \text{Paris}$ and $\text{Paris} + \text{Vienna}$ do not make any sense. 
If multilinear algebra does not further our aim of studying spacetime, then why do we study it? The tangent spaces $T_pM$ (defined in Lecture 5) at a point $p$ of a smooth manifold $M$ (defined in Lecture 4) carries a vector space structure in a natural way even though the underlying position space(time) does not have a vector space structure. Once we have a notion of tangent space, we have a derived notion of a tensor. Tensors are very important in differential geometry. \\ 5 | It is beneficial to study vector spaces (and all that comes with it) abstractly for two reasons: (i) for construction of $T_pM$, one needs an intermediate vector space $C^{\infty}(M)$, and (ii) tensor techniques are most easily understood in an abstract setting. 6 | \end{framed} 7 | 8 | \subsection{Vector Spaces} 9 | \begin{definition} 10 | A $\mathbb{R}$-\textbf{vector space} is a triple $(V, +, \cdot)$, where 11 | \begin{enumerate}[i)] 12 | \item $V$ is a set, 13 | \item $+ : V \times V \to V$ \quad (addition), and 14 | \item $. : \mathbb{R} \times V \to V$ \quad (S-multiplication) 15 | \end{enumerate} 16 | satisfying the following: 17 | \begin{enumerate}[a)] 18 | \item $\forall u, v \in V : u + v = v + u$ \quad (commutativity of +) 19 | \item $\forall u, v, w \in V : (u + v) + w = u + (v + w)$ \quad (associativity of +) 20 | \item $\exists O \in V : \forall v \in V : O + v = v$ \quad (neutral element in +) 21 | \item $\forall v \in V : \exists (-v) \in V : v + (-v) = 0$ \quad (inverse of element in +) 22 | 23 | \item $\forall \lambda, \mu \in \mathbb{R}, \forall v \in V : \lambda \cdot (\mu \cdot v) = (\lambda \cdot \mu) \cdot v$ \quad (associativity in $\cdot$) 24 | \item $\forall \lambda, \mu \in \mathbb{R}, \forall v \in V : (\lambda + \mu) \cdot v = \lambda \cdot v + \mu \cdot v$ \quad (distributivity of $\cdot$) 25 | \item $\forall \lambda \in \mathbb{R}, \forall u, v \in V : \lambda \cdot u + \lambda \cdot v = \lambda \cdot (u + v)$ \quad (distributivity of $\cdot$) 26 | \item $\exists 1 \in \mathbb{R} : \forall v \in V : 1 \cdot v = v$ \quad (unit element in $\cdot$) 27 | \end{enumerate} 28 | \end{definition} 29 | 30 | \textbf{Terminology}: If $(V,+,\cdot)$ is a vector space, an element of $V$ is often referred to, informally, as a \textbf{vector}. But, we should remember that it makes no sense to call an element of $V$ a vector unless the vector space itself is specified. 31 | 32 | \textbf{Example}: Consider a set of polynomials of fixed degree, 33 | \begin{equation*} 34 | P := \left\lbrace p:(-1,+1) \to \mathbb{R} \quad \Big| \quad p(x) = \displaystyle\sum_{n=0}^{N} p_n \cdot x^n, \text{ where } p_n \in \mathbb{R} \right\rbrace 35 | \end{equation*} \\ 36 | with $\oplus : P \times P \to P$ with $(p,q) \mapsto p \oplus q : (p \oplus q)(x) = p(x) + q(x)$ and \\ 37 | $\odot : \mathbb{R} \times P \to P$ with $(\lambda,p) \mapsto \lambda \odot p : (\lambda \odot p)(x) = \lambda \cdot p(x)$. \textbf{$(P,\oplus,\odot)$} is a vector space. 38 | 39 | \textbf{Caution}: \textit{We are considering real vector spaces, that is S-multiplication with the elements of $\mathbb{R}$. We shall often use same symbols `+' and `$\cdot$' for different vector spaces, but the context should make things clear. When $\mathbb{R}, \mathbb{R}^2$, etc. are used as vector spaces, the obvious (natural) operations shall be understood to be used.} 40 | 41 | \subsection{Linear Maps} 42 | These are the structure-respecting maps between vector spaces. 
43 | \begin{definition} 44 | If $(V,+_v,\cdot_v)$ and $(W,+_w,\cdot_w)$ are vector spaces, then $\phi : V \to W$ is called a \textbf{linear map} if 45 | \begin{enumerate}[i)] 46 | \item $\forall v, \tilde{v} \in V: \phi(v +_v \tilde{v}) = \phi(v) +_w \phi(\tilde{v})$, and 47 | \item $\forall \lambda \in \mathbb{R}, v \in V : \phi(\lambda \cdot_v v) = \lambda \cdot_w \phi(v)$. 48 | \end{enumerate} 49 | \end{definition} 50 | 51 | \textbf{Notation}: $\phi : V \to W \text{ is a linear map } \iff \phi : V \linearmapto W$ 52 | 53 | \textbf{Example}: Consider the vector space $(P,\oplus,\odot)$ from the above example, \\ 54 | Then, $\delta : P \to P$ with $p \mapsto \delta (p) := p^\prime$ is a linear map, because \\ 55 | $\forall p,q \in P : \delta(p \oplus q) = (p \oplus q)^\prime = p^\prime \oplus q^\prime = \delta(p) \oplus \delta(q)$ and \\ 56 | $\forall \lambda \in \mathbb{R}, p \in P : \delta(\lambda \odot p) = (\lambda \odot p)^\prime = \lambda \odot p^\prime$. 57 | 58 | \begin{theorem} 59 | If $\phi : U \linearmapto V$ and $\psi : V \linearmapto W$, then $\psi \after \phi : U \linearmapto W$. 60 | \end{theorem} 61 | \[ 62 | \begin{tikzpicture} 63 | \matrix (m) [matrix of nodes, row sep=3em, column sep=5em, text height=1.5ex, text depth=0.25ex] 64 | { $U$ & $V$ & $W$ \\ 65 | }; 66 | \path[->] 67 | (m-1-1) edge node[above] {$\phi$} (m-1-2) 68 | edge[bend right = 30] node[below] {$\psi \after \phi$} (m-1-3); 69 | \path[->] 70 | (m-1-2) edge node[above] {$\psi$} (m-1-3); 71 | \end{tikzpicture} 72 | \] 73 | 74 | \begin{proof} 75 | $ \forall u, \tilde{u} \in U, (\psi \after \phi)(u +_u \tilde{u}) = \psi(\phi(u +_u \tilde{u})) = \psi(\phi(u) +_v \phi(\tilde{u})) = \psi(\phi(u)) +_w \psi(\phi(\tilde{u})) = (\psi \after \phi)(u) +_w (\psi \after \phi)(\tilde{u}) $. 76 | 77 | $ \forall \lambda \in \mathbb{R}, u \in U, (\psi \after \phi)(\lambda \cdot_u u) = \psi(\phi(\lambda \cdot_u u)) = \psi(\lambda \cdot_v \phi(u)) = \lambda \cdot_w \psi(\phi(u)) = \lambda \cdot_w (\psi \after \phi)(u) $ 78 | \end{proof} 79 | 80 | \textbf{Example}: Consider the vector space $(P,\oplus,\odot)$ and the differential $\delta : P \to P$ with $p \mapsto \delta (p) := p^\prime$ from previous example. Then, $p^{\prime\prime}$, the second differential is also linear since it is composition of two linear maps, i.e., $\delta \after \delta : P \linearmapto P$. 81 | 82 | \subsection{Vector Space of Homomorphisms} 83 | \begin{definition} 84 | If $(V, +, \cdot)$ and $(W, +, \cdot)$ are vector spaces, then $Hom(V,W) := \left\lbrace \phi : V \linearmapto W \right\rbrace$. 85 | \end{definition} 86 | 87 | \begin{theorem} 88 | $(Hom(V,W),+,\cdot)$ is a vector space with \\ 89 | $+ : Hom(V,W) \times Hom(V,W) \to Hom(V,W)$ with $(\phi,\psi) \mapsto \phi + \psi : (\phi + \psi)(v) = \phi(v) + \psi(v)$ and \\ 90 | $\cdot : \mathbb{R} \times Hom(V,W) \to Hom(V,W)$ with $(\lambda,\phi) \mapsto \lambda \cdot \phi : (\lambda \cdot \phi)(v) = \lambda \cdot \phi(v)$. 91 | \end{theorem} 92 | 93 | \textbf{Example}: $(Hom(P,P),+,\cdot)$ is a vector space. $\delta \in Hom(P,P)$, $\delta \after \delta \in Hom(P,P)$, $\delta \after \delta \after \delta \in Hom(P,P)$, etc. Therefore, maps such as $5 \cdot \delta + \delta \after \delta \in Hom(P,P)$. Thus, mixed order derivatives are in $Hom(P,P)$, and hence linear. 
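\textit{Editorial sketch (not from the lecture): the linearity of such combinations can be made concrete by representing a polynomial $p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$ by its coefficient tuple $(p_0,p_1,p_2,p_3)$; every element of $Hom(P,P)$ then acts as a $4 \times 4$ matrix. The following Python snippet (in the style of the Sage listing used later for the Schwarzschild metric; the particular polynomials and numbers are illustrative assumptions) checks this for $5 \cdot \delta + \delta \after \delta$:}

\begin{lstlisting}[language=Python, frame=single]
# Editorial sketch: polynomials of degree <= 3 as coefficient tuples (p0, p1, p2, p3).
import numpy as np

# delta = d/dx in the basis {1, x, x^2, x^3}: row n holds the coefficient of x^n in p'.
delta = np.array([[0., 1., 0., 0.],
                  [0., 0., 2., 0.],
                  [0., 0., 0., 3.],
                  [0., 0., 0., 0.]])

phi = 5. * delta + delta @ delta      # the map 5*delta + delta o delta, again a matrix

p   = np.array([1., 2., 0., 4.])      # p(x) = 1 + 2x + 4x^3   (illustrative choice)
q   = np.array([0., 1., 3., 0.])      # q(x) = x + 3x^2        (illustrative choice)
lam = 7.0

# linearity: phi(lam*p + q) == lam*phi(p) + phi(q)
assert np.allclose(phi @ (lam * p + q), lam * (phi @ p) + phi @ q)
print(phi @ p)   # [10. 24. 60.  0.]  i.e. 5 p'(x) + p''(x) = 10 + 24x + 60x^2
\end{lstlisting}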
94 | 95 | \subsection{Dual Vector Spaces} 96 | \begin{definition} 97 | If $(V, +, \cdot)$ is a vector space, and $V^{\ast} := \left\lbrace \phi : V \linearmapto \mathbb{R} \right\rbrace = Hom(V,\mathbb{R})$ then \\ 98 | $(V^{\ast},+,\cdot)$ is called the \textbf{dual vector space to V}. 99 | \end{definition} 100 | 101 | \textbf{Terminology}: $\omega \in V^{\ast}$ is called, informally, a \textbf{covector}. 102 | 103 | \textbf{Example}: Consider $I : P \linearmapto \mathbb{R}$, i.e., $I \in P^\ast$. We define $I(p) := \int_0^1 \! p(x) \, \mathrm{d}x$, which can be easily checked to be linear with $I(p + q) = I(p) + I(q)$ and $I(\lambda \cdot p) = \lambda \cdot I(p)$. Thus $I$ is a covector, which is the integration operator $\int_0^1 \! ( \quad ) \, \mathrm{d}x$ which eats a function. 104 | 105 | \textit{Remarks: We shall also see later that the gradient is a covector. In fact, lots of things in physicist's life, which are covectors, have been called vectors not to bother you with details. But covectors are neither esoteric nor unnatural.} 106 | 107 | \subsection{Tensors} 108 | We can think of tensors as multilinear maps. 109 | 110 | \begin{definition} 111 | Let $(V, +, \cdot)$ be a vector space. An \textbf{(r,s) -tensor} T over V is a multilinear map 112 | \begin{equation*} 113 | T : \underbrace{V^\ast \times V^\ast \times \dots \times V^\ast}_\text{r times} \times \underbrace{V \times V \times \dots \times V}_\text{s times} \linearmapto \mathbb{R} 114 | \end{equation*} 115 | \end{definition} 116 | 117 | \textbf{Example}: If T is a (1,1)-tensor, then \\ 118 | $T(\omega_1 + \omega_2, v) = T(\omega_1, v) + T(\omega_2, v)$, \\ 119 | $T(\omega, v_1 + v_2) = T(\omega, v_1) + T(\omega, v_2)$, \\ 120 | $T(\lambda \cdot \omega, v) = \lambda \cdot T(\omega, v)$, and \\ 121 | $T(\omega, \lambda \cdot v) = \lambda \cdot T(\omega, v)$. \\ 122 | Thus, $T(\omega_1 + \omega_2, v_1 + v_2) = T(\omega_1, v_1) + T(\omega_1, v_2) + T(\omega_2, v_1) + T(\omega_2, v_2)$. \\ 123 | 124 | \textit{Remarks}: Sometimes it is said that a (1,1)-tensor is something that eats a vector and outputs a vector. Here is why. For $T : V^\ast \times V \linearmapto \mathbb{R}$, define $\phi_T : V \linearmapto (V^\ast)^\ast$ with $v \mapsto T(( \cdot ), v)$. But, clearly $T(( \cdot ), v) : V^\ast \linearmapto \mathbb{R}$, which eats a covector and spits a number. In other words, $T(( \cdot ), v) \in (V^\ast)^\ast$. Although we are yet to define dimension, let us just trust, for the time being, that for finite-dimensional vector spaces, $(V^\ast)^\ast = V$. So, $\phi_T : V \linearmapto V$. 125 | 126 | \textbf{Example}: Let $g : P \times P \linearmapto \mathbb{R}$ with $(p,q) \mapsto \int_{-1}^1 \! p(x) \cdot q(x) \, \mathrm{d}x$. Then, $g$ is a (0,2)-tensor over $P$. 127 | 128 | \subsection{Vectors and Covectors as Tensors} 129 | \begin{theorem} 130 | If $(V,+,\cdot)$ is a vector space, $\omega \in V^\ast$ is a (0,1)-tensor. 131 | \end{theorem} 132 | \begin{proof} 133 | $\omega \in V^\ast$ and, by definition, $V^{\ast} := \left\lbrace \phi : V \linearmapto \mathbb{R} \right\rbrace$, which is a collection of (0,1)-tensors. 134 | \end{proof} 135 | 136 | \begin{theorem} 137 | If $(V,+,\cdot)$ is a vector space, $v \in V$ is a (1,0)-tensor. 138 | \end{theorem} 139 | \begin{proof} 140 | We have already stated, without proof and without defining dimensions, that $V = (V^\ast)^\ast$ for finite-dimensional vector spaces. 
Therefore, $v \in V \implies v \in (V^\ast)^\ast \implies v \in \left\lbrace \phi : V^\ast \linearmapto \mathbb{R} \right\rbrace \implies$ $v$ is a (1,0)-tensor. 141 | \end{proof} 142 | 143 | \subsection{Bases} 144 | \begin{definition} 145 | Let $(V,+,\cdot)$ is a vector space. A subset $B \subseteq V$ is called a \textbf{basis} if \\ 146 | $\forall v \in V, \exists ! finite v_1,v_2,\dotsc,v_n \in B, \exists ! f_1,f_2,\dotsc,f_n \in \mathbb{R} : v = \displaystyle\sum_{i=1}^n f_i \cdot v_i$. 147 | \end{definition} 148 | 149 | \begin{definition} 150 | A vector space $(V,+,\cdot)$ with a basis $B$ is said to be \textbf{$d$-dimensional} if $B$ has $d$ elements. In other words, $dim V := d$. 151 | \end{definition} 152 | 153 | \textit{Remarks: The above definition is well-defined only if every basis of a vector space has the same number of elements.} 154 | 155 | \textbf{Remarks}: Let $(V,+,\cdot)$ is a vector space. Having chosen a basis $e_1,e_2,\dotsc,e_n$, we may uniquely associate $v \mapsto (v_1,v_2,\dotsc,v_n)$, these numbers being the components of $v$ w.r.t. chosen basis where $v = \displaystyle\sum_{i=1}^n v_i \cdot e_i$. 156 | 157 | \subsection{Basis for the Dual Space} 158 | Let $(V,+,\cdot)$ is a vector space. Having chosen a basis $e_1,e_2,\dotsc,e_n$ for $V$, we can choose a basis $\epsilon^1,\epsilon^2,\dotsc,\epsilon^n$ for $V^\ast$ entirely independent of basis of $V$. However, it is more economical to require that 159 | \begin{equation*} 160 | \epsilon^a (e_b) = \delta_b^a = \begin{cases} 161 | 1 &\quad \text{if } a = b \\ 162 | 0 &\quad \text{if } a \neq b \\ 163 | \end{cases} 164 | \end{equation*} This uniquely determines $\epsilon^1,\epsilon^2,\dotsc,\epsilon^n$ from choice of $e_1,e_2,\dotsc,e_n$. 165 | 166 | \textit{Remarks: The reason for using indices as superscripts or subscripts is to be able to use the Einstein summation convention, which will be helpful in dropping cumbersome $\sum$ symbols in several equations.} 167 | 168 | \begin{definition} 169 | For a basis $e_1,e_2,\dotsc,e_n$ of vector space $(V,+,\cdot)$, $\epsilon^1,\epsilon^2,\dotsc,\epsilon^n$ is called the \textbf{dual basis} of the dual space, if $\epsilon^a (e_b) = \delta_b^a$. 170 | \end{definition} 171 | 172 | \textbf{Example}: Consider polynomials $P$ of degree 3. Choose $e_0,e_1,e_2,e_3 \in P$ such that $e_0(x) = 1, e_1(x) = x, e_2(x) = x^2$ and $e_3(x) = x^3$. Then, it can be easily verified that the dual basis is $\epsilon^a = \displaystyle\frac{1}{a!}\partial^a\Big|_{x=0}$. 173 | 174 | \subsection{Components of Tensors} 175 | \label{ss:L3_TensorComponents} 176 | \begin{definition} 177 | Let $T$ be a $(r,s)$-tensor over a $d$-dimensional (finite) vector space $(V,+,\cdot)$. Then, with respect to some basis $\lbrace e_1, \dotsc, e_r \rbrace$ and the dual basis $\lbrace \epsilon^1, \dotsc, \epsilon^s \rbrace$, define $(r+s)^d$ real numbers 178 | \begin{equation*} 179 | T\indices{^{i_1 \dots i_r}_{j_1 \dots j_s}} := T(\epsilon^{i_1}, \dotsc, \epsilon^{i_r}, e_{j_1}, \dotsc, e_{j_s}) 180 | \end{equation*} such that the indices $i_1, \dotsc, i_r, j_1, \dotsc, j_s$ take all possible values in the set $\lbrace 1,\dotsc,d \rbrace$. These numbers $T\indices{^{i_1 \dots i_r}_{j_1 \dots j_s}}$ are called the \textbf{components of the tensor} $T$ w.r.t. the chosen basis. 181 | \end{definition} 182 | 183 | This is useful because knowing components (and the basis w.r.t which these components have been chosen), one can reconstruct the entire tensor. 
184 | 185 | \textbf{Example}: If $T$ is a $(1,1)$-tensor, then $T\indices{^{i}_{j}} := T(\epsilon^i,e_j)$. Then\\ 186 | \begin{equation*} 187 | T(\omega,v) = T\left(\sum_{i=1}^d \omega_i \cdot \epsilon^i,\sum_{j=1}^d v^j \cdot e_j \right) = \sum_{i=1}^d \sum_{j=1}^d \omega_i v^j T(\epsilon^i,e_j) = \sum_{i=1}^d \sum_{j=1}^d \omega_i v^j T\indices{^{i}_{j}} =: \omega_i v^j T\indices{^{i}_{j}} 188 | \end{equation*} 189 | -------------------------------------------------------------------------------- /lecture4.tex: -------------------------------------------------------------------------------- 1 | \section{Differential Manifolds} 2 | \begin{framed} 3 | \textbf{Motivation}: So far we have dealt with topological manifolds which allow us to talk about continuity. But to talk about smoothness of curves on manifolds, or velocities along these curves, we need something like differentiability. Does the structure of topological manifold allow us to talk about differentiability? The answer is a resounding no. 4 | 5 | So this lecture is about figuring out what structure we need to add on a topological manifold $M$ to start talking about differentiability of curves ($\mathbb{R} \to M$) on a manifold, or differentiability of functions ($M \to \mathbb{R}$) on a manifold, or differentiability of maps ($M \to N$) from one manifold $M$ to another manifold $N$. 6 | \end{framed} 7 | 8 | \subsection{Strategy} 9 | 10 | \begin{tikzpicture}[decoration=snake] 11 | \matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em] 12 | { 13 | \gamma : \mathbb{R} & U \\ 14 | & x(U) \subseteq \mathbb{R}^d \\ 15 | }; 16 | \path[->] 17 | (m-1-1) edge node [above] {$$} (m-1-2) 18 | edge node [sloped, anchor=center, below] {$x \circ \gamma$} (m-2-2) 19 | (m-1-2) edge node [right] {$x$} (m-2-2); 20 | \end{tikzpicture} 21 | 22 | \underline{idea}. try to ``lift'' the undergraduate notion of differentiability of a curve on $\mathbb{R}^d$ to a notion of differentiability of a curve on $M$ 23 | 24 | \underline{Problem} Can this be well-defined under change of chart? 25 | 26 | \begin{tikzpicture}[decoration=snake] 27 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 28 | { 29 | & y(U\cap V) \subseteq \mathbb{R}^d \\ 30 | \gamma : \mathbb{R} & U \cap V \neq \emptyset \\ 31 | & x(U\cap V) \subseteq \mathbb{R}^d \\ 32 | }; 33 | \path[->] 34 | 35 | (m-2-1) edge node [auto] {$$} (m-2-2) 36 | edge node [sloped, anchor=center, below] {$x \circ \gamma$} (m-3-2) 37 | edge node [sloped, anchor=center, above] {$y \circ \gamma$} (m-1-2) 38 | (m-2-2) edge node [auto] {$x$} (m-3-2) 39 | edge node [auto] {$y$} (m-1-2) 40 | (m-3-2) edge [bend right=40] node [right] {$y\circ x^{-1}$} (m-1-2); 41 | \end{tikzpicture} 42 | 43 | $x\circ \gamma$ undergraduate differentiable (``as a map $\mathbb{R} \to \mathbb{R}^d$'') 44 | 45 | \[ 46 | \begin{gathered} 47 | \underbrace{y \circ \gamma}_{\text{maybe only continuous, but not undergraduate differentiable} } = \underbrace{ ( \overbrace{ y\circ x^{-1}}^{\mathbb{R}^d \to \mathbb{R}^d } )}_{\text{continuous}} \circ \underbrace{ \overbrace{ (x\circ \gamma) }^{\mathbb{R}\to \mathbb{R}^d} }_{ \text{ undergrad differentiable } } = y \circ (x^{-1} \circ x) \circ \gamma 48 | \end{gathered} 49 | \] 50 | 51 | At first sight, strategy does not work out. 52 | 53 | \subsection{Compatible charts} 54 | 55 | In section 1, we used any imaginable charts on the topological manifold $(M,\mathcal{O})$. 
56 | 57 | To emphasize this, we may say that we took $U$ and $V$ from the \emph{maximal atlas} $\mathcal{A}$ of $(M,\mathcal{O})$. 58 | 59 | 60 | \begin{definition} 61 | Two charts $(U,x)$ and $(V,y)$ of a topological manifold are called \ding{96}-compatible if 62 | either 63 | \begin{enumerate} 64 | \item[(a)] $U \cap V = \emptyset$, or 65 | \item[(b)] $U\cap V \neq \emptyset$ : chart transition maps 66 | \[ 67 | \begin{aligned} 68 | & y \circ x^{-1} : x(U \cap V) \subseteq \mathbb{R}^d \to y(U\cap V) \subseteq \mathbb{R}^d \text{, and}\\ 69 | & x\circ y^{-1} : y(U\cap V) \subseteq \mathbb{R}^d \to x(U\cap V) \subseteq \mathbb{R}^d 70 | \end{aligned} 71 | \] 72 | have undergraduate \ding{96} property. 73 | \end{enumerate} 74 | \end{definition} 75 | Since both $y \circ x^{-1}$ and $y \circ x^{-1}$ are $\mathbb{R}^d \to \mathbb{R}^d$ maps, can use undergradate \ding{96} properties such as continuity or differentiability. 76 | 77 | 78 | \underline{Philosophy}: 79 | 80 | \begin{definition} 81 | An atlas $\mathcal{A}_{\text{\ding{96}}}$ is a \ding{96}-compatible atlas if any two charts in $\mathcal{A}_{\text{\ding{96}}}$ are \ding{96}-compatible. 82 | 83 | \end{definition} 84 | 85 | \begin{definition} 86 | A \textbf{\ding{96}-manifold} is a triple $(\underbrace{ M,\mathcal{O} }_{\text{top. mfd.} }, \mathcal{A}_{\text{\ding{96}}})$ \quad \, $\mathcal{A}_{\text{\ding{96}}} \subseteq \mathcal{A}_{\text{maximal}} $ 87 | \end{definition} 88 | 89 | 90 | \begin{tabular}{ l | c | p{11cm}} 91 | \ding{96} & undergraduate \ding{96} & \\ 92 | \hline 93 | $C^0$ & $C^0(\mathbb{R}^d \to \mathbb{R}^d) =$ & continuous maps w.r.t. $\mathcal{O}$ (we know from section 1 that every atlas is $C^0$-compatible atlas.) \\ 94 | $C^1$ & $C^1(\mathbb{R}^d \to \mathbb{R}^d) = $ & differentiable (once) and is continuous \\ 95 | $C^k$ & & $k$-times continuously differentiable \\ 96 | $D^k$ & & $k$-times differentiable \\ 97 | $\vdots$ & & \\ 98 | $C^{\infty}$ & $C^{\infty}(\mathbb{R}^d \to \mathbb{R}^d)$ & continuously differentiable arbitrarily many times; also called ``smooth manifolds''\\ 99 | $\mathbin{\rotatebox[origin=c]{-90}{$\supseteq$}}$ & & \\ 100 | $C^{\omega}$ & & $\exists $ multi-dimensional Taylor expansion \\ 101 | $\mathbb{C}^{\infty}$ & & satisfy Cauchy-Riemann equations, pair-wise \\ 102 | \hline 103 | \end{tabular} 104 | 105 | 106 | EY : 20151109 Schuller says: $C^k$ is easy to work with because you can judge $k$-times continuously differentiability from existence of all partial derivatives \textbf{and} their continuity. There are examples of maps that partial derivatives exist but are not $D^k$, $k$-times differentiable. 107 | 108 | \begin{theorem}[Whitney\footnote{\url{http://mathoverflow.net/questions/8789/can-every-manifold-be-given-an-analytic-structure}}] 109 | % Any $C^{k\geq 1}$-manifold $(M,\mathcal{O}, \mathcal{A}_{C^{k\geq 1}})$ 110 | Any $C^{k\geq 1}$-atlas, $\mathcal{A}_{C^{k\geq 1}}$ of a topological manifold \emph{contains} a $C^{\infty}$-atlas. 111 | 112 | Thus we may w.l.o.g. always consider $C^{\infty}$-manifolds (i.e., ``smooth manifolds''), unless we wish to define Taylor expandibility/complex differentiability \dots 113 | \end{theorem} 114 | 115 | \begin{definition} 116 | A smooth manifold $(\underbrace{ M,\mathcal{O} }_{\text{top. mfd. 
} }, \underbrace{ \mathcal{A}}_{C^{\infty}-\text{atlas}} )$ 117 | \end{definition} 118 | 119 | \textit{Remarks: We should distinguish the real physical object from the maps used to communicate them.} 120 | 121 | \begin{tikzpicture} 122 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 123 | { 124 | \mathbb{R} & M \\ 125 | & \mathbb{R}^d \\ 126 | }; 127 | \path[->] 128 | (m-1-1) edge node [auto] {$\gamma$} (m-1-2) 129 | edge node [auto] {$x\circ \gamma$} (m-2-2) 130 | (m-1-2) edge node [auto] {$x$} (m-2-2); 131 | \end{tikzpicture} 132 | 133 | \textit{While the physical object is the curve $\gamma : \mathbb{R} \to M$, but we communicate it using maps such as $x \circ \gamma : \mathbb{R} \to \mathbb{R}^d$ in physics. But, the thing of which we should require any properties is the real physical object (in this case, the curve $\gamma$).} 134 | 135 | \textit{Remarks: TODO from video 40:30 to 42:20} 136 | 137 | \subsection{Diffeomorphisms} 138 | 139 | We study isomorphisms, i.e., structure preserving bijections. 140 | 141 | If $M,N$ are naked sets (i.e., with no additional structure), then $M \cong_{\text{set}} N$, i.e., $M$ and $N$ are (set-theoretically) isomorphic to each other if $\exists \, $ bijection $\phi : M \to N$. 142 | 143 | \underline{Examples}. $\mathbb{N} \cong_{\text{set}} \mathbb{Z}$, $\mathbb{N} \cong_{\text{set}} \mathbb{Q}$ (using diagonal counting scheme), $\mathbb{N} \cancel{\cong_{\text{set}}} \mathbb{R}$. 144 | 145 | Now $(M, \mathcal{O}_M) \cong_{\text{top}} (N,\mathcal{O}_N)$, i.e., they are topologically isomorphic (also called ``homeomorphic'') if $\exists \, $ bijection $\phi : M \to N$ s.t. $\phi$ and $\phi^{-1}$ are continuous. 146 | 147 | Two vector spaces are isomorphic , i.e., $(V,+,\cdot) \cong_{\text{vec}} ( W,+_w,\cdot_w)$ if $\exists \, \text{ linear bijection } \phi : V \to W$. 148 | 149 | Finally, 150 | \begin{definition} 151 | Two $C^{\infty}$-manifolds $(M,\mathcal{O}_M, \mathcal{A}_M)$ and $(N,\mathcal{O}_N, \mathcal{A}_N)$ are said to be \textbf{diffeomorphic} if $\exists \, $ bijection $\phi : M \to N$ s.t. both $\phi : M \to N$ and $\phi^{-1} : N \to M$ are $C^{\infty}$-maps. 152 | 153 | \begin{tikzpicture} 154 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 155 | { 156 | \mathbb{R}^d & \mathbb{R}^e \\ 157 | M \supseteq U & V\subseteq N \\ 158 | \mathbb{R}^d & \mathbb{R}^e \\ 159 | }; 160 | \path[->] 161 | (m-1-1) edge node [auto] {$\widetilde{y} \circ \phi \circ \widetilde{x}^{-1}$} (m-1-2) 162 | (m-2-1) edge node [auto] {$\widetilde{x}$} (m-1-1) 163 | edge node [auto] {$\phi$} (m-2-2) 164 | edge node [auto] {$x$} (m-3-1) 165 | (m-3-1) edge node [auto] {$ \substack{ y\circ \phi \circ x^{-1} \\ 166 | \text{ undergraduate } C^{\infty} }$} (m-3-2) 167 | edge [bend left=50] node [auto] {$C^{\infty}$} (m-1-1) 168 | (m-2-2) edge node [auto] {$\widetilde{y}$} (m-1-2) 169 | edge node [auto] {$y$} (m-3-2) 170 | (m-3-2) edge [bend right=50] node [auto] {$$} (m-1-2); 171 | \end{tikzpicture} 172 | 173 | \end{definition} 174 | 175 | \begin{theorem} 176 | $\# = $ number of $C^{\infty}$-manifolds one can make out of a given $C^0$-manifolds (if any) - up to diffeomorphisms. 
177 | 178 | \begin{tabular}{l | c | r } 179 | $\text{dim }M$ & $\#$ & \\ 180 | \hline 181 | 1 & 1 & Morse-Radon theorems \\ 182 | 2 & 1 & Morse-Radon theorems \\ 183 | 3 & 1 & Morse-Radon theorems \\ 184 | 4 & uncountably infinite & \\ 185 | 5 & finite & surgery theory \\ 186 | 6 & finite & surgery theory \\ 187 | \vdots & finite & surgery theory \\ 188 | \hline 189 | \end{tabular} 190 | 191 | \end{theorem} 192 | -------------------------------------------------------------------------------- /lecture6.tex: -------------------------------------------------------------------------------- 1 | \section{Fields} 2 | 3 | So far, we have focussed technically on a single tangent space and a vector/ covector in it, a basis if we chose a chart. As physicists, we are interested in things such as vector fields such that at any point of a manifold, there is a vector. The proper way to deal with it technically is \textit{theory of bundles}. 4 | 5 | \subsection{Bundles} 6 | 7 | \begin{definition} 8 | A \textbf{bundle} is a triple $\boxed{E \projmapto M}$, where \\ 9 | $E$ is a smooth manifold, called the \textbf{total space}, \\ 10 | $M$ is a smooth manifold, called the \textbf{base space}, and \\ 11 | $\pi$ is a smooth map (surjective), called the \textbf{projection map}. \\ 12 | \end{definition} 13 | 14 | \begin{definition} 15 | Let $E \projmapto M$ be a bundle and $p \in M$. Then, \textbf{fibre over} $p := \text{preim}_{\pi}(\lbrace p \rbrace)$. \\ 16 | \end{definition} 17 | 18 | \begin{definition} 19 | A \textbf{section} $\sigma$ of a bundle $E \projmapto M$ is the map $\sigma : M \to E$ such that $\pi \after \sigma = id_M$. \\ 20 | \end{definition} 21 | 22 | \begin{tikzpicture} 23 | \matrix (m) [matrix of math nodes, row sep=3em, column sep=8em, minimum width=1em] 24 | { E & M \\ }; 25 | \path[->] 26 | (m-1-1) edge node [above] {$\pi$} (m-1-2) 27 | (m-1-2) edge [bend left=30] node [below] {$\sigma$} (m-1-1); 28 | \end{tikzpicture} 29 | 30 | \underline{Example}: $E$ is a cylinder, $M$ a circle and $\pi$ maps vertical lines on the cylinder to the point of intersection of this line with the circle. 31 | 32 | \underline{Example}: If the fibre of $p \in M$ is a tangent space, the section would pick one vector from the tangent space. 33 | 34 | \underline{Aside}: In quantum mechanics, $\psi : M \to \mathbb{C}$ is called a wavefunction, but it is actually a section which selects one value from $\mathbb{C}$ for each $p \in M$. 35 | 36 | 37 | \subsection{Tangent bundle of smooth manifold} 38 | For this entire subsection, let $\mfd$ be a smooth manifold and let $d := dim \, M$. 39 | 40 | Define the set, 41 | \begin{equation} 42 | \boxed{TM : = \dot{\bigcup}_{p \in M} T_pM} 43 | \end{equation} 44 | 45 | Now define a surjective map $\pi$ as follows: 46 | \begin{equation} 47 | \boxed{\begin{split} 48 | \pi : & TM \to M \\ 49 | & X \mapsto \pi(X) := p \in M \text{ such that } X \in T_pM 50 | \end{split}} 51 | \end{equation} 52 | 53 | \underline{Situation}: $\underbrace{TM}_{\text{set}} \underbrace{ \projmapto}_{\text{surjective map}} \underbrace{M}_{\text{smooth manifold}}$ 54 | 55 | For a bundle, $TM$ should be a smooth manifold and $\pi$ a smooth map. Let us construct a topology on $TM$ that is the coarsest topology such that $\pi$ is just continuous. (\textbf{initial topology} with respect to $\pi$). 
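Why is this the coarsest such topology? This is only implicit in the lecture, but the argument is short: for $\pi$ to be continuous, every set of the form $\text{preim}_{\pi}(U)$ with $U \in \mathcal{O}$ must be open in $TM$, and the collection of exactly these preimages is already a topology by itself, since taking preimages is compatible with intersections and arbitrary unions, e.g.
\begin{equation*}
\text{preim}_{\pi}(U) \cap \text{preim}_{\pi}(V) = \text{preim}_{\pi}(U \cap V) \, , \qquad \bigcup_{\lambda \in \Lambda} \text{preim}_{\pi}(U_{\lambda}) = \text{preim}_{\pi}\Big(\bigcup_{\lambda \in \Lambda} U_{\lambda}\Big) \, .
\end{equation*}
Admitting these sets, and no others, therefore yields the coarsest topology for which $\pi$ is continuous.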
Define 56 | 57 | \begin{equation} 58 | \boxed{\mathcal{O}_{TM} := \lbrace \text{preim}_{\pi}(U) | U \in \mathcal{O} \rbrace} 59 | \end{equation} 60 | 61 | It can be shown that $(TM,\mathcal{O}_{TM})$ is a topological space. But we need a smooth atlas.\\ 62 | 63 | \underline{Construction of a $C^{\infty}$-atlas on $TM$ from the $C^{\infty}$-atlas $\A$ on $M$} \\ 64 | Define 65 | \begin{equation}\label{eq:atlasTangentBundle} 66 | \boxed{ 67 | \begin{split} 68 | \A_{TM} := & \lbrace (TU,\xi_x) \, | \, (U,x) \in \A \rbrace \text{ where } \\ 69 | \xi_x : & TU \to \R^{2d} \\ 70 | & X \mapsto \left(\underbrace{(x^1 \after \pi)(X), \dotsc, (x^d \after \pi)(X)}_{(U,x)-\text{ coords of } \pi(X) \, (d\text{-many})}, \underbrace{(dx^1)_{\pi(X)}(X), \dotsc, (dx^d)_{\pi(X)}(X)}_{\text{components of $X$ w.r.t } (U,x) \, (d\text{-many})}\right) 71 | \end{split} 72 | } 73 | \end{equation} 74 | 75 | In the above, $(x^1 \after \pi)(X) = x^1(\pi(X)) = x^1(p) = x^1 \text{-coordinate}$, and \\ 76 | $X \in T_{\pi(X)}M \implies X = X_{(x)}^i \left(\cibasis{x^i}\right)_{\pi(X)} \implies (dx^j)_{\pi(X)}(X) = (dx^j)_{\pi(X)} \left(X^i_{(x)}\left(\cibasis{x^i}\right)_{\pi(X)} \right) = X^i_{(x)}\delta_i^j = X^j_{(x)}$. \\ 77 | Thus $\xi_x$ maps $X$ to the coordinates of its base point $\pi(X)$ under the chart $(U,x)$ and the components of the vector $X$ w.r.t the basis induced by this chart. 78 | 79 | We can write $\xi_x^{-1}$ as follows: 80 | \begin{equation}\label{eq:xiInverseTangentBundle} 81 | \boxed{ 82 | \begin{split} 83 | \xi_x^{-1} \, : \, & \underbrace{\xi_x(TU)}_{\subseteq \R^{2d}} \to TU \\ 84 | & (\alpha^1, \dotsc, \alpha^d, \beta^1, \dotsc, \beta^d) := \beta^i \left(\cibasis{x^i}\right)_{\underbrace{x^{-1}(\alpha^1, \dotsc, \alpha^d)}_{\pi(X)}} 85 | \end{split} 86 | } 87 | \end{equation} 88 | 89 | Now we check, whether the atlas $\A_{TM}$ smooth. That is, are the transitions between its charts smooth? 90 | 91 | \begin{theorem} 92 | $\A_{TM}$ is a smooth atlas. 93 | \end{theorem} 94 | 95 | \begin{proof} 96 | Let $(U,\xi_x) \in \A_{TM}, \quad (V,\xi_y) \in \A_{TM} \quad \text{ and } \quad U \cap V \ne \emptyset$. 
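Note, as a small step not spelled out in the lecture, that the two chart domains overlap exactly over the intersection of the base chart domains,
\begin{equation*}
TU \cap TV = \text{preim}_{\pi}(U) \cap \text{preim}_{\pi}(V) = \text{preim}_{\pi}(U \cap V) \, ,
\end{equation*}
which is non-empty since every fibre is non-empty; so smoothness is to be checked for the transition map $\xi_y \after \xi_x^{-1}$ between the open subsets $\xi_x(TU \cap TV)$ and $\xi_y(TU \cap TV)$ of $\R^{2d}$.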
Calculate the chart transition 97 | \begin{align*} 98 | & (\xi_y \after \xi_x^{-1})(\alpha^1, \dotsc, \alpha^d, \beta^1, \dotsc, \beta^d) = \xi_y \left(\beta^i \left(\cibasis{x^i} \right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)}\right) && \text{by Eq.~\ref{eq:xiInverseTangentBundle}} \\ 99 | & = \left(\dotsc, (y^i \after \pi)\left(\beta^m \cdot \left(\cibasis{x^m}\right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)}\right), \dotsc, \dotsc, (dy^i)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)} \left(\beta^m \left(\cibasis{x^m} \right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)} \right), \dotsc \right) && \text{by Eq.~\ref{eq:atlasTangentBundle}} \\ 100 | & = \left(\dotsc, y^i \left(\underbrace{\pi\left(\beta^m \cdot \left(\cibasis{x^m}\right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)}\right)}_{\text{the base point,}\, x^{-1}(\alpha^1, \dotsc, \alpha^d)}\right), \dotsc, \dotsc, (\beta^m \underbrace{(dy^i)_{x^{-1} (\alpha^1, \dotsc, \alpha^d)} \left( \left(\cibasis{x^m}\right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)} \right)}_{\left(\cibasis[y^i]{x^m}\right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)}}, \dotsc \right) \\ 101 | & = \left(\dotsc, (y^i \after x^{-1})(\alpha^1, \dotsc, \alpha^d), \dotsc, \dotsc, \beta^m \left(\left(\cibasis[y^i]{x^m}\right)_{x^{-1}(\alpha^1, \dotsc, \alpha^d)} \right), \dotsc\right) \\ 102 | & = \left(\dotsc, (y^i \after x^{-1})(\alpha^1, \dotsc, \alpha^d), \dotsc, \dotsc, \beta^m \left(\partial_m (y^i \after x^{-1})( x (x^{-1}(\alpha^1, \dotsc, \alpha^d))) \right), \dotsc\right) \\ 103 | & = \left(\dotsc, \underbrace{(y^i \after x^{-1})(\alpha^1, \dotsc, \alpha^d)}_{\text{smooth } \because \mathcal(A) \text{ is smooth atlas}}, \dotsc, \dotsc, \underbrace{\beta^m \left(\partial_m (y^i \after x^{-1})(\alpha^1, \dotsc, \alpha^d)\right)}_{\text{smooth } \because \text{ chart transition map is } C^\infty \text{ smooth}}, \dotsc\right) \\ 104 | & \implies (\xi_y \after \xi_x^{-1}) \text{ is smooth} \implies \A_{TM} \text{ is smooth} 105 | \end{align*} 106 | \end{proof} 107 | 108 | Further, the surjective map $\pi$ is a smooth map because, in the chart representation, $\pi$ takes the $2d$ components of $X \in TM$ to the $d$-coordinates of the base point in $M$, which can be seen to happen smoothly by seeing how the components are mapped. Therefore, we have the following definition. 109 | \begin{definition} 110 | Then, using the smooth manifold $\mfd$ as the base space and the smooth manifold $(TM, \mathcal{O}_{TM}, \A_{TM})$ as the total space, the \textbf{tangent bundle} is the triple 111 | \begin{equation}\label{eq:TangentBundle} 112 | \boxed{TM \projmapto M} 113 | \end{equation} 114 | \end{definition} 115 | 116 | \subsection{Vector fields} 117 | Why did we put so much effort in making a smooth atlas on $TM$ and defining a tangent bundle? The answer is in the following definition of \emph{smooth} vector field, not just any vector field. 118 | 119 | \begin{definition} 120 | For a tangent bundle $TM \projmapto M$, a \textbf{smooth vector field} $\chi$ is a smooth map such that $\pi \after \chi = id_{M}$, $\chi$ is a \textit{smooth section}. 
121 | \end{definition} 122 | 123 | \begin{tikzpicture} 124 | \matrix (m) [matrix of math nodes, row sep=3em, column sep=8em, minimum width=1em] 125 | { TM & M \\ }; 126 | \path[->] 127 | (m-1-1) edge node [above] {$\pi$} (m-1-2) 128 | (m-1-2) edge [bend left=30] node [below] {$\chi$} (m-1-1); 129 | \end{tikzpicture} 130 | 131 | \textit{Remarks: $\chi$ is a section, which couldn't have been a smooth map unless we had both $M$ and $TM$ as smooth manifolds.} 132 | 133 | \subsection{The $C^{\infty}(M)$-module $\Gamma(TM)$} 134 | We already know that $C^{\infty}(M)$, the collection of all smooth functions is a vector space with S-multiplication with $\R$. So we may also consider the structure $(C^{\infty}(M),+,\cdot)$ with point-wise addition between elements of $C^{\infty}(M)$ and point-wise multiplication between elements of $C^{\infty}(M)$. This structure satisfies all the requirements of a field (commutativity, associativity, neutral element, inverse element under both operations, and distributivity) except that there is no inverse for all non-zero elements under multiplication. This is so because a function that is not zero everywhere, may be zero at some points and then point-wise multiplication with no function would result in the value 1 everywhere. Such a structure is called a \textit{ring}. 135 | 136 | A module over a ring is a generalization of the notion of vector space over a field, wherein the corresponding scalars are the elements of an arbitrary given ring. 137 | 138 | Let us consider the module made from the set of all smooth vector fields over the ring $C^{\infty}(M)$. Define \\ 139 | \begin{equation} 140 | \Gamma(TM) = \lbrace \chi \, : \, M \to TM \, | \, \chi \text{ is a smooth section} \rbrace 141 | \end{equation} 142 | 143 | \begin{definition} 144 | $(\Gamma(TM),\oplus,\odot)$ is a $C^{\infty}(M)$-module over the ring of $C^{\infty}(M)$ functions with $\chi, \widetilde{\chi} \in \Gamma(TM)$ and $g \in C^{\infty}(M)$, such that \\ 145 | $(\chi \oplus \widetilde{\chi})(f) := (\chi f) \underbrace{+}_{C^{\infty}(M)} (\widetilde{\chi}f)$ \\ 146 | $(g \odot \chi)(f) := g \underbrace{\cdot}_{C^{\infty}(M)} (\chi f)$ 147 | \end{definition} 148 | 149 | \underline{Facts}: Besides other differences, there are following 2 important facts: 150 | \begin{enumerate} 151 | \item[(1)] Proving that \textit{every vector space has a basis} depends upon the choice of set theory; in particular, on the Axiom of Choice in ZFC theory. 152 | \item[(2)] No such result exists for modules. 153 | \end{enumerate} 154 | 155 | This is a shame, because otherwise, we could have chosen (for any manifold) vector fields, $\chi_{(1)}, \dotsc, \chi_{(d)} \in \Gamma(TM)$ and would be able to write every vector field $\chi$ in terms of component functions $f^i$ as $\chi = f^i \cdot \chi_{(i)}$. 156 | 157 | \textbf{Simple counterexample:} Take a sphere. Can we find a smooth vector field over the entire sphere. Can you comb the sphere? No. For the field to be smooth, there is a problem. Morse Theory tells us that every smooth vector field on a sphere must vanish at 2 points $\implies$ basis cannot be chosen. We cannot choose a global basis. Therefore, if required, we only expand a vector field in terms of a basis on a domain where it is possible. 158 | 159 | 160 | \textit{Remarks: Although we cannot have a global basis for $\Gamma(TM)$, it is possible to do so locally. 
Thus, for the chart $(U,x)$ we can take the \textbf{chart-induced basis of the vector field} in the chart domain $U$ as the map \\ 161 | \begin{equation} 162 | \begin{split} 163 | \cibasis{x^i} : & \, U \xrightarrow{\text{ smooth }} TU \\ 164 | & p \mapsto \left(\cibasis{x^i}\right)_p 165 | \end{split} 166 | \end{equation} 167 | } 168 | 169 | \subsection{Tensor fields} 170 | So far we have constructed the sections over the tangent bundle. That is, $\Gamma(TM) = $''set of smooth vector fields'' as a $C^{\infty}(M)$-module. 171 | 172 | Exactly along the same lines we can construct the \textbf{cotangent bundle} $\Gamma(T^*M) = $ ``set of covector fields'' as a $C^{\infty}(M)$-module, by mapping a covector to the coordinates of its base point and components of the covector. $\Gamma(TM)$ and $\Gamma(T^*M)$ are the basic building blocks for every tensor field. 173 | 174 | \begin{definition} 175 | An \textbf{$(r,s)$-tensor field} $T$ is a $C^{\infty}(M)$ multilinear map 176 | \begin{equation} 177 | T:\underbrace{\Gamma(T^*M) \times \dotsb \times \Gamma(T^*M)}_{r} \times \underbrace{\Gamma(TM) \times \dotsb \times \Gamma(TM)}_{s} \linearmapto C^{\infty}(M) 178 | \end{equation} 179 | \end{definition} 180 | 181 | \textit{Remarks: the multilinearity is in $C^{\infty}(M)$, in terms of addition in the modules and S-multiplication with functions in $C^{\infty}(M)$.} 182 | 183 | \textbf{Example:} Let $f\in C^{\infty}(M)$. Then, define a ($0,1$)-tensor field $df$ as 184 | \[ 185 | \begin{gathered} 186 | \begin{aligned} 187 | df : & \Gamma(TM) \linearmapto C^{\infty}(M) \\ 188 | & \chi \mapsto df(\chi) := \chi f && \text{ such that } (\chi f)(\underbrace{p}_{ \in M}) := \underbrace{\chi(p)}_{\in T_pM}f 189 | \end{aligned} 190 | \end{gathered} 191 | \] 192 | It can be checked that $df$ is $C^{\infty}-$linear. 193 | -------------------------------------------------------------------------------- /lecture7.tex: -------------------------------------------------------------------------------- 1 | \section{Connections} 2 | \begin{framed} 3 | \textbf{Motivation}: So far, all we have dealt with (e.g., sets, topological manifolds, smooth manifolds, fields, bundles, etc.) are structures that we have to provide by hand before we can start doing physics as we know it. Why? Because we don't have equations which determine what we have done so far. These are assumptions you need to submit before you can do physics. 4 | 5 | In this lecture we introduce yet another structure called connections which are determined by Einstein's equations. Everything from now on will be objects that are the subject of Einstein's equations depending on the matter in the Universe. Connections are also called covariant derivatives. Even though these are different, for our purposes we shall not distinguish the two and use the more general connections. 6 | \end{framed} 7 | 8 | So far, we saw that a vector field $X$ can be used to provide a directional derivative of a function $f \in C^{\infty}(M)$ in the direction $X$\\ 9 | \begin{equation*} 10 | \nabla_X f := Xf 11 | \end{equation*} 12 | Isn't this a notational overkill? 
We already know \\ 13 | \begin{equation*} 14 | \nabla_X f = Xf = (df)X 15 | \end{equation*} 16 | Actually, they are not quite the same because 17 | \[ 18 | \begin{aligned} 19 | X : C^{\infty}(M) \to C^{\infty}(M) \\ 20 | df : \Gamma(TM) \to C^{\infty}(M) \\ 21 | \nabla_X : C^{\infty}(M) \to C^{\infty}(M) 22 | \end{aligned} 23 | \] 24 | where $\nabla_X$ can be generalized to eat an arbitrary $(p,q)$-tensor field and yield a $(p,q)$-tensor field whereas $X$ can only eat functions. \\ 25 | \[ 26 | \begin{tikzpicture} 27 | \matrix (m) [matrix of nodes, row sep=3em, column sep=3em, minimum width=1em] 28 | { 29 | $\nabla_X : C^{\infty}(M)$ & $C^{\infty}(M)$ \\ 30 | $\nabla_X : (p,q)$-tensor field & $(p,q)$-tensor field \\ }; 31 | \path[->] 32 | (m-1-1) edge (m-1-2); 33 | \path[->] 34 | (m-2-1) edge (m-2-2); 35 | \path[->] 36 | (m-1-1) edge[snake it] node[left] {$\vdots$} (m-2-1); 37 | \path[->] 38 | (m-1-2) edge[snake it] node[left] {$\vdots$} (m-2-2); 39 | \end{tikzpicture} 40 | \] 41 | 42 | We need $\nabla_X$ to provide the new structure to allow us to talk about directional derivatives of tensor fields and vector fields. Of course, only in cases where $\nabla_X$ acts on function $f$ which is a $(0,0)$-tensor, it is exactly the same as $Xf$. 43 | 44 | \subsection{Directional derivatives of tensor fields} 45 | We formulate a wish list of properties which $\nabla_X$ acting on a tensor field should have. We put this in form of a definition. There may be many structures that satify this wish list. Any remaining freedom in choosing such a $\nabla$ will need to be provided as additional structure beyond the structure we already have. And we assume all this takes place on a smooth manifold. 46 | 47 | \begin{definition}\label{def:connection} 48 | A \textbf{connection} $\nabla$ on a smooth manifold $\mfd$ is a map that takes a pair consisting of a vector (field) $X$ and a $(p,q)$-tensor field $T$ and sends them to a $(p,q)$-tensor (field) $\nabla_X T$ satisfying 49 | \begin{enumerate}[i)] 50 | \item $\nabla_X f = Xf \quad \forall f \in C^{\infty}M$ 51 | 52 | \item $\nabla_X (T + S) = \nabla_X T + \nabla_X S \quad \text{ where }T, S \text{ are } (p,q) \text{-tensors}$ 53 | 54 | \item \textbf{Leibnitz rule: } $\nabla_X T(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q) = (\nabla_X T)(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q) \\ 55 | + T(\nabla_X \omega_1,\dotsc,\omega_p,Y_1,\dotsc,Y_q) + \dotsb + T(\omega_1,\dotsc,\nabla_X \omega_p,Y_1,\dotsc,Y_q) \\ 56 | + T(\omega_1,\dotsc,\omega_p,\nabla_X Y_1,\dotsc,Y_q) + \dotsb + T(\omega_1,\dotsc,\omega_p,Y_1,\dotsc,\nabla_X Y_q) \quad \text{ where }T \text{ is a }(p,q)\text{-tensor}$ 57 | \begin{framed} 58 | Note that for a $(p,q)$-tensor $T$ and a $(r,s)$-tensor $S$, since: \\ 59 | $(T \otimes S) (\omega_{(1)}, \dotsc, \omega_{(p+r)}, Y_{(1)}, \dotsc, Y_{(q+s)}) = \\ T(\omega_{(1)}, \dotsc, \omega_{(p)}, Y_{(1)}, \dotsc, Y_{(q)} ) \cdot S( \omega_{(p+1)}, \dotsc, \omega_{(p+r)} , Y_{(q+1)}, \dotsc, Y_{(q+s)})$, \\ 60 | Leibnitz rule implies $\nabla_X (T \otimes S) = (\nabla_X T) \otimes S + T \otimes (\nabla_X S)$. 61 | \end{framed} 62 | 63 | \item \textbf{$C^{\infty}$-linearity: }$\forall f \in C^{\infty}(M), \nabla_{fX+Z} T = f\nabla_X T + \nabla_Z T$ 64 | \begin{framed} 65 | $C^{\infty}$-linearity means that no matter how the function $f$ scales the vectors at different points of the manifold, the effect of the scaling at any point is independent of scaling in the neighbourhood and depends only on how the scaling happens at that point. 
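A worked contrast, not part of the definition but perhaps useful: for the commutator of vector fields, defined by $[X,Y]f := X(Yf) - Y(Xf)$ and used later for the torsion, one finds
\begin{equation*}
[fX, Y] = f\,[X,Y] - (Yf)\,X \, ,
\end{equation*}
so $[\,\cdot\,, Y]$ is \emph{not} $C^{\infty}$-linear in its first slot, whereas $\nabla_{fX} Y = f\,\nabla_X Y$ is. It is exactly this $C^{\infty}$-linearity which makes $(\nabla_X T)(p)$ depend on $X$ only through the single vector $X(p)$, so that a derivative in the direction of a vector \emph{at a point} is meaningful.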
66 | \end{framed} 67 | \end{enumerate} 68 | \end{definition} 69 | 70 | A \textbf{manifold with a connection} $\nabla$ is a quadruple $(M, \mathcal{O}, \A, \nabla)$, where $M$ is a set, $\mathcal{O}$ is a topology and $\A$ is a smooth atlas. 71 | 72 | Remark: If $\nabla_X (\cdot)$ can be seen as an extension of $X$, \\ 73 | then $\nabla_{(\cdot)}(\cdot)$ can be seen as an extension of $d$. 74 | 75 | \subsection{New structure on $\mfd$ required to fix $\nabla$} 76 | How much freedom do we have in choosing such a structure? 77 | 78 | Consider vector fields $X, Y$ and chart $(U,x) \in \A$. Then 79 | \begin{align*} 80 | \nabla_X Y & = \nabla_{\left(X^i \cibasis{x^i}\right)} \left(Y^m \cibasis{x^m}\right) && \text{by expanding in chart-induced basis} \\ 81 | & = X^i \cdot \nabla_{\left(\cibasis{x^i}\right)} \left(Y^m \cibasis{x^m}\right) && \text{by }C^\infty\text{-linearity} \\ 82 | & = X^i \underbrace{\left(\nabla_{\left(\cibasis{x^i}\right)} Y^m\right)}_{=\cibasis{x^i} Y^m} \cibasis{x^m} + X^i \cdot Y^m \cdot \underbrace{\left(\nabla_{\left(\cibasis{x^i}\right)} \cibasis{x^m}\right)}_{\text{a vector field, by defn.}} && \text{using Leibnitz rule} \\ 83 | & = X^i \left(\cibasis{x^i} Y^m\right) \cibasis{x^m} + X^i \cdot Y^m \cdot \left(\ccf{q}{mi} \cibasis{x^q}\right) 84 | \end{align*} 85 | 86 | Thus, by change of indices, 87 | \begin{equation} 88 | \boxed{\left(\nabla_X Y\right)^i = X^m \left(\cibasis{x^m} Y^i\right) + X^m \cdot Y^n \cdot \ccf{i}{nm}} 89 | \end{equation} 90 | So we need $(dim\,M)^3$-many functions to define directional derivative of a vector field. 91 | 92 | \begin{definition} 93 | Given $(M, \mathcal{O}, \A, \nabla)$ and $(U,x) \in \A$, then the \textbf{connection coefficient functions} ($\Gamma$s) on $M$ of $\nabla$ w.r.t $(U,x)$ are $(dim\,M)^3$-many functions given by 94 | \begin{align} 95 | \ccf{i}{jk} : \quad & U \to \R \nonumber \\ 96 | & p \mapsto \ccf{i}{jk}(p) := \left(dx^i \left(\nabla_{\left(\cibasis{x^k}\right)} \cibasis{x^j}\right)\right)(p) 97 | \end{align} 98 | \end{definition} 99 | 100 | \textit{Note: $\cibasis{x^j}$ is a vector field; $\therefore\,\nabla_{\left(\cibasis{x^k}\right)} \cibasis{x^j}$ is a vector field, and $dx^i$ is a covector which will result in a function after acting on a vector field.} 101 | 102 | On a chart domain $U$, choice of the $(dim\,M)^3$-many functions $\ccf{i}{jk}$ suffices to fix the action of $\nabla$ on a vector field. What about the directional derivative of a covector field, or a tensor field? Will we have to provide more and more coefficients? Fortunately, the same $(dim\,M)^3$-many functions fix the action of $\nabla$ on any tensor field. 103 | 104 | We know that, for a covector, $\nabla_{\cibasis{x^m}}\left(dx^i\right) = \Sigma\indices{^{i}_{jm}} dx^j$, since $dx^i$ form a dual basis. Are these $\Sigma$s independent of $\Gamma$s? Consider the following. 
105 | \begin{align*} 106 | & \displaystyle\nabla_{\cibasis{x^m}} \left(dx^i \left(\cibasis{x^j}\right)\right) = \nabla_{\cibasis{x^m}} \delta^i_j = \cibasis{x^m}(\delta^i_j) = 0 \\ 107 | & \implies \displaystyle\left(\nabla_{\cibasis{x^m}} dx^i \right)\left(\cibasis{x^j}\right) + dx^i \underbrace{\left(\nabla_{\cibasis{x^m}} \cibasis{x^j}\right)}_{\ccf{q}{jm}\cibasis{x^q}} = 0 \\ 108 | & \implies \displaystyle\left(\nabla_{\cibasis{x^m}} dx^i \right)\left(\cibasis{x^j}\right) + dx^i \ccf{q}{jm} \cibasis{x^q} = 0 \\ 109 | & \implies \left(\nabla_{\cibasis{x^m}} dx^i \right)\left(\cibasis{x^j}\right) = - dx^i \ccf{q}{jm} \cibasis{x^q} = - \ccf{q}{jm} dx^i \cibasis{x^q} = - \ccf{q}{jm} \delta^i_q = - \ccf{i}{jm} \\ 110 | & \implies \left(\nabla_{\cibasis{x^m}} dx^i \right)\underbrace{\left(\cibasis{x^j}\right) dx^j}_{= \delta^j_j = 1} = -\ccf{i}{jm} dx^j \\ 111 | & \implies \boxed{\nabla_{\cibasis{x^m}} dx^i = -\ccf{i}{jm} dx^j} \\ 112 | & \implies \boxed{\left(\nabla_{\cibasis{x^m}} dx^i\right)_j = -\ccf{i}{jm}} 113 | \end{align*} 114 | In summary, 115 | \begin{align} 116 | \displaystyle\left(\nabla_X Y\right)^i & = X(Y^i) + \ccf{i}{jm} Y^j X^m \\ 117 | \displaystyle\left(\nabla_X \omega\right)_i & = X\left(\omega_i\right) - \ccf{j}{im} \omega_j X^m 118 | \end{align} 119 | Note that for the immediately above expression for $(\nabla_X Y)^i$, in the second term on the right hand side, $\ccf{i}{jm}$ has the last entry at the bottom, $m$ going in the direction of $X$, so that it matches up with $X^m$. This is a good mnemonic to memorize the index positions of $\ccf{}{}$. 120 | 121 | Similarly, as an example, by further application of Leibnitz rule, for a $(1,2)$-tensor field $T$, 122 | \begin{align*} 123 | \left(\nabla_X T\right)\indices{^i_{jk}} = X\left(T\indices{^i_{jk}}\right) + \ccf{i}{sm} T\indices{^s_{jk}} X^m - \ccf{s}{jm} T\indices{^i_{sk}} X^m - \ccf{s}{km} T\indices{^i_{js}} X^m 124 | \end{align*} 125 | 126 | %Student's Question: If in a Euclidean space, do the $\Gamma$s all vanish in a global chart? Yes, it is so by definition. But what is a Euclidean space? \\ 127 | %$\left(M = \R^n, \mathcal{O}_{\text{st}}, \A\right)$ smooth manifold. \\ 128 | %Assume $(\R^n, \text{id}_{\R^n} ) \in \A$ and 129 | %\[ 130 | %\ccfx{i}{jk}{(x)} = dx^i \left( (\nabla_{\text{\underline{E}}})_{\cibasis{x^k}}\cibasis{x^j} \right) \overset{!}{=} 0 131 | %\] 132 | 133 | \subsection{Change of $\Gamma$'s under change of chart} 134 | Let $(U,x)$, $(V,y) \in \A$ and $U \cap V \neq \emptyset$. 135 | \begin{align*} 136 | \ccfx{i}{jk}{(y)} & := dy^i \left(\nabla_{\cibasis{y^k}} \cibasis{y^j} \right) \\ 137 | & = \cibasis[y^i]{x^q} dx^q \left(\nabla_{\cibasis[x^p]{y^k} \cibasis{x^p}} \cibasis[x^s]{y^j} \cibasis{x^s} \right) \\ 138 | & = \cibasis[y^i]{x^q} dx^q \left(\cibasis[x^p]{y^k} \left[ \left(\nabla_{\cibasis{x^p}} \cibasis[x^s]{y^j} \right) \cibasis{x^s} + \cibasis[x^s]{y^j} \left(\nabla_{\cibasis{x^p}} \cibasis{x^s} \right) \right] \right) && \because \nabla \text{ is } C^{\infty}-linear \\ 139 | & = \cibasis[y^i]{x^q} \underbrace{\cibasis[x^p]{y^k} \cibasis{x^p}}_{\cibasis{y^k}} \cibasis[x^s]{y^j} \delta^q_s + \cibasis[y^i]{x^q} \cibasis[x^p]{y^k} \cibasis[x^s]{y^j} \ccfx{q}{sp}{(x)} 140 | \end{align*} 141 | 142 | \begin{equation}\label{Eq:WEHCG0703_changeofGamma} 143 | \ccfx{i}{jk}{(y)} = \cibasis[y^i]{x^q} \frac{\partial^2 x^q}{\partial y^k \partial y^j} + \cibasis[y^i]{x^q} \cibasis[x^s]{y^j} \cibasis[x^p]{y^k} \ccfx{q}{sp}{(x)} 144 | \end{equation} 145 | 146 | Eq. 
(\ref{Eq:WEHCG0703_changeofGamma}) gives the change of the connection coefficient functions under the change of chart $(U\cap V,x) \to (U\cap V,y)$. $\ccf{}{}$ is not a tensor, due to the first term on the right hand side of Eq. (\ref{Eq:WEHCG0703_changeofGamma}). However, for a linear transformation between the coordinates of the two charts, the term $\frac{\partial^2 x^q}{\partial y^k \partial y^j}$ always vanishes, and then, if the $\Gamma$s are zero in one chart, they will be zero in the other chart too. However, nothing restricts us to charts related by linear coordinate transformations. 147 | 
148 | \subsection{Normal Coordinates} 
149 | Can we find a coordinate system that makes the $\ccf{}{}$s vanish? 
150 | 
151 | \begin{theorem} 
152 | Let $(M, \mathcal{O}, \A, \nabla)$ be a smooth manifold with connection and let $p \in M$. Having chosen the point $p$, one can construct a chart $(U,x)$ with $p \in U$ such that the symmetric part of the $\Gamma$s vanishes at the point $p$ (not necessarily in any neighbourhood). That is, \\ 
153 | $\displaystyle\forall \, p \in M, \, \exists \, (U,x) \in \A \, : \, p \in U \text{ and } \ccfx{i}{(jk)}{(x)}(p) = 0$. \\ 
154 | Such $(U,x)$ is called a \textbf{normal coordinate chart} of $\nabla$ at $p \in M$. 
155 | \end{theorem} 
156 | 
157 | \begin{proof} 
158 | Let $(V,y) \in \A$ and $p \in V$. Then consider a new chart $(U,x)$ to which one transits using the map $(x \after y^{-1})$ whose $i^{th}$ component is given by\\ 
159 | \begin{align*} 
160 | \left(x \after y^{-1}\right)^i\left(\alpha^1,\dotsc,\alpha^d\right) := \alpha^i + \frac{1}{2}\ccfx{i}{(jk)}{(y)} \alpha^j \alpha^k && \text{ where the } \ccf{}{} \text{s are taken at the point } p \\ 
161 | \implies \displaystyle\cibasis[x^i]{y^j} = \partial_j\left(x^i \after y^{-1}\right) = \delta^i_j + \ccfx{i}{(jm)}{(y)} \alpha^m \\ 
162 | \implies \displaystyle\frac{\partial^2 x^i}{\partial y^k \partial y^j} = \ccfx{i}{(jk)}{(y)} \\ 
163 | \end{align*} 
164 | To end the proof, note that, without loss of generality, the coordinates $y$ can be chosen so that they vanish at $p$, i.e. $\alpha = y(p) = 0$. Evaluating the above derivatives at $p$ then gives $\cibasis[x^i]{y^j} = \delta^i_j$ there, and, since $\delta$ is its own inverse, also $\cibasis[y^i]{x^j} = \delta^i_j$ at $p$. Applying formula (\ref{Eq:WEHCG0703_changeofGamma}) at the point $p$ therefore yields 
165 | \begin{align*} 
166 | & \ccfx{i}{jk}{(y)}(p) = \ccfx{i}{(jk)}{(y)}(p) + \ccfx{i}{jk}{(x)}(p) \\ 
167 | \implies & \ccfx{i}{jk}{(x)}(p) = \ccfx{i}{jk}{(y)}(p) - \ccfx{i}{(jk)}{(y)}(p) = \ccfx{i}{[jk]}{(y)}(p) \\ 
168 | \implies & \ccfx{i}{(jk)}{(x)}(p) = 0. 
\end{align*} 
169 | \end{proof} 
170 | 
171 | We can say that, up to \textit{torsion}, we can make the $\Gamma$s vanish. 
172 | Later in the course we shall see that the antisymmetric part of the $\Gamma$s is in fact a tensor, and that it can consistently be set to zero. One can then look at the special kind of connections for which $\Gamma^{i}_{~[jk]}$ vanishes; these are the \textit{torsion-free connections}. 
173 | -------------------------------------------------------------------------------- /lecture8.tex: -------------------------------------------------------------------------------- 
1 | \section{Parallel Transport \& Curvature} 
2 | 
3 | \subsection{Parallelity of vector fields} 
4 | \begin{definition} 
5 | Let $(M, \mathcal{O}, \mathcal{A}, \nabla)$ be a smooth manifold with connection $\nabla$.
6 | \begin{enumerate} 7 | \item[(1)] A vector field $X$ on $M$ is said to be \textbf{parallely transported} along a smooth curve $\gamma: \mathbb{R} \to M$ if 8 | \begin{equation}\label{eq:parallelTransport} 9 | \boxed{\nabla_{v_{\gamma}} X = 0} 10 | \end{equation} 11 | To make explicit, how this equation applies along the curve, we may state 12 | \begin{equation*} 13 | \left(\nabla_{v_{\gamma, \gamma(\lambda)}} X\right)_{\gamma(\lambda)} = 0 14 | \end{equation*} 15 | \item[(2)] A slightly weaker condition is ``\textbf{parallel}'' if, for $\mu : \mathbb{R} \to \mathbb{R}$, 16 | \begin{equation} 17 | \boxed{\left(\nabla_{v_{\gamma, \gamma(\lambda)}} X\right)_{\gamma(\lambda)} = \mu(\lambda) X_{\gamma(\lambda)}} 18 | \end{equation} 19 | \end{enumerate} 20 | \end{definition} 21 | 22 | \textit{Remarks: Even though \textbf{parallely transported} sounds like an action, it is a property.} 23 | 24 | \subsection{Autoparallely transported curves} 25 | \begin{definition} 26 | A curve $\gamma: \mathbb{R} \to M$ is called \textbf{autoparallely transported} if 27 | \begin{equation} 28 | \boxed{\nabla_{v_{\gamma}}v_{\gamma} = 0} 29 | \end{equation} 30 | \end{definition} 31 | 32 | \textit{Remarks: Sometimes, this curve is called an autoparallel curve. But we wish to call a curve autoparallel if $\nabla_{v_{\gamma}}v_{\gamma} = \mu v_{\gamma}$.} 33 | 34 | \subsection{Autoparallel equation} 35 | Express $\nabla_{v_{\gamma}} v_{\gamma} = 0$ in terms of chart representation. 36 | \begin{align*} 37 | 0 & = \left(\nabla_{v_{\gamma}} v_{\gamma}\right) \\ 38 | & = \left(\nabla_{\left(\dot{\gamma}^m_{(x)} \cibasis{x^m}\right)} \dot{\gamma}^n_{(x)} \cibasis{x^n}\right) && \text{ remember that } \gamma^m_{(x)} := x^m \after \gamma \\ 39 | & = \dot{\gamma}^m \left(\nabla_{\left(\cibasis{x^m}\right)} \dot{\gamma}^n\right) \cibasis{x^n} + \dot{\gamma}^m \dot{\gamma}^n \left(\nabla_{\left(\cibasis{x^m}\right)} \cibasis{x^n}\right) && \text{x index is understood, hence suppressed} \\ 40 | & = \dot{\gamma}^m \left(\cibasis{x^m} \dot{\gamma}^n\right) \cibasis{x^n} + \dot{\gamma}^m \dot{\gamma}^n \left(\nabla_{\left(\cibasis{x^m}\right)} \cibasis{x^n}\right) && \text{} \\ 41 | & = \dot{\gamma}^m \left(\cibasis{x^m} \dot{\gamma}^q\right) \cibasis{x^q} + \dot{\gamma}^m \dot{\gamma}^n \left(\Gamma\indices{^{q}_{nm}} \cibasis{x^q}\right) && \text{change of index in 1st term} \\ 42 | & = \left(\dot{\gamma}^m \cibasis{x^m} \dot{\gamma}^q + \dot{\gamma}^m \dot{\gamma}^n \Gamma\indices{^{q}_{nm}}\right) \cibasis{x^q} && \text{} \\ 43 | & = \left(\ddot{\gamma}^q + \dot{\gamma}^m \dot{\gamma}^n \Gamma\indices{^{q}_{nm}}\right) \cibasis{x^q} && \text{TODO: show that 1st term is 2nd derivative} 44 | \end{align*} 45 | 46 | %\begin{frame} 47 | %\begin{figure} 48 | %\label{fig:L8_2ndDerivativeDerivation} 49 | %\centering 50 | %\begin{align*} 51 | %\dot{\gamma}^m \cibasis{x^m} \dot{\gamma}^q & = \left(x^m \after \gamma\right)^\prime \cdot \partial_m\left(\dot{\gamma}^q \after x^{-1}\right) 52 | %\end{align*} 53 | %\caption{Second derivative of a curve} 54 | %\end{figure} 55 | %\end{frame} 56 | 57 | In summary: 58 | \begin{equation}\label{Eq:L8_autoParallelTransportChartExpression} 59 | \boxed{\ddot{\gamma}^q_{(x)}(\lambda) + (\Gamma_{(x)})\indices{^{q}_{mn}}(\gamma(\lambda)) \dot{\gamma}^m_{(x)}(\lambda) \dot{\gamma}^n_{(x)}(\lambda) = 0} 60 | \end{equation} 61 | Eq. (\ref{Eq:L8_autoParallelTransportChartExpression}) is the chart expression of the condition that $\gamma$ be autoparallely transported. 
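\textit{Remark (a sketch of the step marked TODO above): for any smooth function $f$ on the chart domain, the velocity of the curve acts as
\begin{equation*}
v_{\gamma, \gamma(\lambda)} f = \dot{\gamma}^m_{(x)}(\lambda) \left(\cibasis{x^m} f\right)_{\gamma(\lambda)} = \left(f \after \gamma\right)'(\lambda) \, ,
\end{equation*}
which is just the chain rule applied to $f \after \gamma = \left(f \after x^{-1}\right) \after \left(x \after \gamma\right)$. Reading the first term $\dot{\gamma}^m \left(\cibasis{x^m} \dot{\gamma}^q\right)$ of the calculation above in this way, with (a local extension of) the component function $\dot{\gamma}^q_{(x)}$ in place of $f$, it becomes $\left(\dot{\gamma}^q_{(x)}\right)'(\lambda) = \ddot{\gamma}^q_{(x)}(\lambda)$, which is the second derivative appearing in Eq. (\ref{Eq:L8_autoParallelTransportChartExpression}).}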
62 | 63 | \textbf{Example:} (a) In Euclidean plane having a chart $(U = \mathbb{R}^2, x = id_{\mathbb{R}^2})$, $\ccfx{i}{jk}{(x)} = 0 \\ 64 | \implies \ddot{\gamma}_{(x)}^m = 0 \implies \gamma_{(x)}^m (\lambda) = a^m \lambda + b^m$, where $a,b \in \mathbb{R}^d$. 65 | 66 | (b) Consider the round sphere $(S^2, \mathcal{O}, \mathcal{A}, \nabla_{round}$), i.e., the sphere $(S^2, \mathcal{O}, \mathcal{A})$ with the connection $\nabla_{round}$. Consider the chart $x(p) = (\theta, \phi)$ where $\theta \in (0,\pi)$ and $\phi \in (0, 2\pi)$. In this chart $\nabla_{round}$ is given by 67 | \begin{align*} 68 | \ccfx{1}{22}{(x)}\left(x^{-1}(\theta,\phi)\right) & := - \sin\theta \cos\theta \\ 69 | \ccfx{2}{12}{(x)}\left(x^{-1}(\theta,\phi)\right) = \ccfx{2}{21}{(x)}\left(x^{-1}(\theta,\phi)\right) & := \cot\theta 70 | \end{align*} 71 | All other $\Gamma$s vanish. Then, using the sloppy notation (familiar to us from classical mechanics) i.e., $x^1(p) = \theta(p)$ and $x^2(p) = \phi(p)$, the autoparallel equation is 72 | \begin{equation*} 73 | \left.\begin{aligned} 74 | \ddot{\theta} + \ccf{1}{22} \dot{\phi}\dot{\phi} &= 0 \\ 75 | \ddot{\phi} + 2 \ccf{2}{12} \dot{\theta}\dot{\phi} &= 0 76 | \end{aligned} 77 | \right\} \implies 78 | \begin{split} 79 | \ddot{\theta} - \sin\theta \cos\theta \dot{\phi}\dot{\phi} &= 0 \\ 80 | \ddot{\phi} + 2 \cot\theta \dot{\theta}\dot{\phi} &= 0 \\ 81 | \end{split} 82 | \end{equation*} 83 | It can be seen that the above equations are satisfied at the equator where $\theta(\lambda) = \pi/2$, and $\phi(\lambda) = \omega\lambda + \phi_0$ (running around the equator at constant speed $\omega$). Thus, this curve is autoparallel. However, $\phi(\lambda) = \omega\lambda^2 + \phi_0$ wouldn't be autoparallel. 84 | 85 | \subsection{Torsion} 86 | Can we use $\nabla$ to define tensors on $(M,\mathcal{O},\mathcal{A},\nabla)$? 87 | 88 | \begin{definition} 89 | The \textbf{torsion} of a connection $\nabla$ is the $(1,2)$-tensor field 90 | \begin{equation} 91 | \boxed{T(\omega,X,Y) := \omega(\nabla_X Y - \nabla_Y X - [X,Y])} 92 | \end{equation} 93 | where $[X,Y]$, called the commutator of $X$ and $Y$ is a vector field defined by $[X,Y]f:= X(Yf) - Y(Xf)$. 94 | \end{definition} 95 | 96 | \begin{proof} 97 | We shall check that $T$ is $C^{\infty}$-linear in each entry. 98 | \begin{align*} 99 | T(f\omega, X, Y) & = f\omega(\nabla_{X} Y - \nabla_Y (X) - [X,Y]) \\ 100 | & = fT(\omega, X, Y) \\ 101 | T(\omega + \psi, X, Y) & = (\omega + \psi)(\nabla_{X} Y - \nabla_Y (X) - [X,Y]) \\ 102 | & = T(\omega, X, Y) + T(\psi, X, Y) \\ 103 | T(\omega, fX, Y) & = \omega(\nabla_{fX} Y - \nabla_Y (fX) - [fX,Y]) \\ 104 | & = \omega(f\nabla_{X} Y - (\nabla_Y (f))X - f(\nabla_Y X) - [fX,Y]) \\ 105 | & = \omega(f\nabla_{X} Y - (Yf)X - f(\nabla_Y X) - [fX,Y]) \\ 106 | \text{But } [fX,Y]g & = fX(Yg) - Y(fX)g = fX(Yg) - (Yf)(Xg) - fY(Xg) \implies [fX,Y] = f[X,Y] - (Yf)X \\ 107 | \therefore T(\omega, fX, Y) & = \omega(f\nabla_{X} Y - (Yf)X - f(\nabla_Y X) - f[X,Y] + (Yf)X) \\ 108 | & = \omega(f\nabla_{X} Y - f(\nabla_Y X) - f[X,Y]) \\ 109 | & = f\omega(\nabla_{X} Y - (\nabla_Y X) - [X,Y]) = fT(\omega,X,Y) \\ 110 | \text{Further, } T(\omega,X,Y) & = - T(\omega,Y,X), \text{ which means scaling in the last factor need not be checked separately.} \\ 111 | \end{align*} 112 | Additivity in the last two factors can also be checked. 113 | \end{proof} 114 | 115 | \begin{definition} 116 | A $(M, \mathcal{O}, \mathcal{A}, \nabla)$ is called torsion-free if the torsion of its connection is zero. That is, $T = 0$. 
117 | \end{definition} 118 | 119 | In a chart 120 | \begin{align*} 121 | T\indices{^i_{ab}} := T\left(dx^i, \cibasis{x^a}, \cibasis{x^b}\right) & = dx^i (\dots) = \ccf{i}{ab} - \ccf{i}{ba} = 2 \ccf{i}{[ab]} 122 | \end{align*} 123 | 124 | From now on, in these lectures, we only use torsion-free connections. 125 | 126 | \subsection{Curvature} 127 | 128 | \begin{definition} 129 | \textbf{Riemann curvature} of a connection $\nabla$ is the $(1,3)$-tensor field 130 | \begin{equation} 131 | \boxed{Riem(\omega,Z,X,Y) := \omega(\nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z)} 132 | \end{equation} 133 | \end{definition} 134 | \begin{proof} 135 | It can be shown that $C^{\infty}$-linear in each slot. This has been left as an exercise to the reader. 136 | \end{proof} 137 | 138 | \textbf{Algebraic relevance of $Riem$:} We ask whether there is difference in applying the two directional derivatives in different order, i.e. \\ 139 | $\nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z = Riem(\cdot,Z,X,Y) + \nabla_{[X,Y]} Z$ \\ 140 | In one chart $(U,x)$, denoting $\nabla_{\cibasis{x^a}}$ by $\nabla_a$, \\ 141 | $\left(\nabla_a \nabla_b Z \right)^m - \left(\nabla_b \nabla_a Z \right)^m = Riem\indices{^m_{nab}}Z^n + \nabla_{\underbrace{\left[\cibasis{x^a},\cibasis{x^b}\right]}_{=0,\text{ since they commute}}} Z$ \\ 142 | As the last term vanishes, we can see how the $Riem$ tensor components contain all the information about how the $\nabla_a$ and $\nabla_b$ fail to commute if they act on a vector field. If they act on a tensor field, there are several terms on RHS like the one term above; if they act on a function, of course they commute. Being a tensor, $Riem$ vanishes in all coordinate systems if it vanishes in one coordinate system, as it does in flat spaces. 143 | 144 | \textbf{Geometric significance of $Riem$:} 145 | \begin{SCfigure}[5][h] 146 | \label{fig:L8_Riem_Geometric_meaning} 147 | \centering 148 | \includegraphics[width=0.3\textwidth]{8_Riem_Geometric_meaning} 149 | \caption{If we parallel transport a vector $u$ at p to q along two different paths $vw$ and $wv$, the resulting vectors at q are different in general. If, however, we parallel transport a vector in a Euclidean space, where the parallel transport is defined in our usual sense, the resulting vector does not depend on the path along which it has been parallel transported. We expect that this non-integrability of parallel transport characterizes the intrinsic notion of curvature, which does not depend on the special coordinates chosen. \textit{From an answer of Sepideh Bakhoda~\cite{mse465672} on \url{http://math.stackexchange.com/q/465672}}} 150 | \end{SCfigure} 151 | 152 | For small $v$ and $w$, if $T = 0$, $(\delta u)^m = Riem\indices{^m_{nab}} v^a w^b u^n + \mathit{O}(v^2w,vw^2)$. 153 | -------------------------------------------------------------------------------- /lecture9.tex: -------------------------------------------------------------------------------- 1 | \section{Newtonian spacetime is curved!} 2 | 3 | \begin{axiom}[Newton I:] 4 | A body on which \emph{no} force acts moves uniformly along a straight line. 5 | \end{axiom} 6 | 7 | \begin{axiom}[Newton II:] 8 | Deviation of a body's motion from such uniform straight motion is effected by a force, reduced by a factor of the body's reciprocal mass. 9 | \end{axiom} 10 | 11 | \textit{Remarks: \begin{enumerate} 12 | \item[(1)] 1st axiom - in order to be relevant - must be read as a measurement prescription for the geometry of space. 
If, somehow, we know that no force acts on a particle, then we know that the path it takes is a straight line -- thus, we learn about the geometry of space. After all, unlike in maths, there is no obvious way to tell what a straight line is in the physical world. Remember: if we don't know what a straight line is, we don't know what a deviation from a straight line is. 
13 | \item[(2)] Since gravity universally acts on every particle, in a universe with at least two particles, gravity must not be considered a force if Newton I is supposed to remain applicable. 
14 | \end{enumerate}} 
15 | 
16 | \subsection{Laplace's questions} 
17 | \underline{Question}: Can gravity be encoded in a curvature of space, such that its effects show if particles under the influence of (no other) force are postulated to move along straight lines in this curved space? 
18 | 
19 | \underline{Answer}: No! 
20 | 
21 | \begin{proof} 
22 | Gravity, from the force point of view: 
23 | \[ 
24 | m\ddot{x}^{\alpha}(t) = \underbrace{mf^{\alpha}}_{\text{force}: F^{\alpha}}(x(t)) 
25 | \] 
26 | where $-\partial_{\alpha} f^{\alpha} = 4 \pi G \rho$ (Poisson); $\rho =$ mass density of matter. \\ 
27 | The same $m$ appearing on both sides of the equation is an experimental fact, also known as the \textbf{weak equivalence principle}. 
28 | \[ 
29 | \therefore \ddot{x}^{\alpha}(t) - f^{\alpha}(x(t)) = 0 
30 | \] 
31 | Laplace asks: Is this equation for $\ddot{x}^{\alpha}(t)$ of the form $\ddot{x}^{\alpha}(t) + \ccf{\alpha}{\beta \gamma}(x(t)) \dot{x}^{\beta}(t) \dot{x}^{\gamma}(t) = 0$? That is, does it take the form of the autoparallel equation? 
32 | 
33 | No, because the $\Gamma$s can only depend on the point $x$ where you are, whereas the velocities $\dot{x}^{\beta}(t)$ and $\dot{x}^{\gamma}(t)$ can take any value; therefore the $\Gamma$s cannot take care of the $f^{\alpha}$ in the preceding equation. Had there been such $\Gamma$s, we would have been able to find a notion of straight line that absorbs the effect we usually attribute to a force. 
34 | 
35 | Conclusion: One cannot find $\Gamma$s such that Newton's equation takes the form of an autoparallel equation. 
36 | \end{proof} 
37 | 
38 | \subsection{The full wisdom of Newton I} 
39 | Laplace asked: Can we find a curvature of space such that particles move along straight lines? 
40 | 
41 | Use the information from Newton's first law that particles (under the influence of no force) move not just along a straight line, but also \textbf{uniformly}. A curve, after all, is not just a set of points, but also the way the parameter is associated with those points. 
42 | 
43 | Let us introduce the appropriate setting in which to talk about this difference easily. How? We use spacetime instead of just space. By using the extra coordinate, viz. time, we do not need to keep track of the curve parameter, since we can just refer to time to ascertain uniformity of the motion. 
44 | 
45 | \textbf{Insight:} $\boxed{\text{Uniform \& straight motion}}$ in space is simply straight motion in \textbf{spacetime}. We do not need to say uniform. This can be seen by drawing the path of the particle in a $t$-$x$ graph, wherein a straight line results only when the motion is uniform. So let's try in spacetime: \\ 
46 | 
47 | $\boxed{\left. \begin{aligned} 
48 | \text{Let } x : \mathbb{R} \to \mathbb{R}^3 \\ 
49 | \text{\quad be a particle's} \\ 
50 | \text{trajectory in space} \end{aligned} \right\rbrace \longleftrightarrow \left\lbrace \right.
51 | \begin{aligned} 52 | & \text{worldline (history) of the particle} \\ 53 | X : & \mathbb{R} \to \mathbb{R}^4 \\ 54 | & t \mapsto (t, x^1(t), x^2(t), x^3(t)) := (X^0(t), X^1(t), X^2(t), X^3(t)) \end{aligned}}$ 55 | 56 | That's all it takes. Let us assume that $x : \mathbb{R} \to \mathbb{R}^3$ satisfies Newton's law concerning gravitational force, i.e. we can omit $m$ on both sides of the equation $\ddot{x}^\alpha = - f^\alpha(x(t))$. \\ 57 | Trivial rewritings: \\ 58 | $\dot{X}^0 =1$ 59 | \[ 60 | \Longrightarrow \boxed{\begin{aligned} 61 | & \ddot{X}^0 & = 0 \\ 62 | & \underbrace{\ddot{X}^{\alpha} - f^{\alpha}(X(t))\cdot \dot{X}^0 \cdot \dot{X}^0}_{(\alpha = 1,2,3)} & = 0 63 | \end{aligned} } \quad \, \Longrightarrow \begin{gathered} 64 | a = 0,1,2,3 \\ 65 | \boxed{\ddot{X}^a + \ccf{a}{bc} \dot{X}^b \dot{X}^c = 0} \\ 66 | \text{autoparallel eqn. in spacetime} 67 | \end{gathered} 68 | \] 69 | 70 | Yes, choosing $\ccf{0}{ab} = 0, \quad \ccf{\alpha}{\beta \gamma} = 0 = \ccf{\alpha}{0 \beta} = \ccf{\alpha}{\beta 0}$. Only $\boxed{\ccf{\alpha}{00} \overset{!}{=} -f^{\alpha}}$. 71 | 72 | \textbf{Question}: Is this a coordinate-choice artifact? \\ 73 | No, since $R\indices{^{\alpha}_{0 \beta 0}} = - \cibasis{x^{\beta}} f^{\alpha}$ (only non-vanishing components) (tidal force tensor, $-$ the Hessian of the force component) 74 | 75 | Ricci tensor $\Longrightarrow R_{00} = R\indices{^m_{0m0}} = -\partial_{\alpha} f^{\alpha} = 4 \pi G \rho$ 76 | 77 | Poisson: $-\partial_{\alpha} f^{\alpha} = 4 \pi G\cdot \rho$ 78 | 79 | \underline{writing}: $T_{00} = \frac{1}{2}s$ 80 | \[ 81 | \Longrightarrow \boxed{ R_{00} = 8 \pi G T_{00}} 82 | \] 83 | Einstein in 1912 $ \boxed{\xcancel{R_{ab} = 8\pi G T_{ab}}}$ 84 | 85 | \underline{Conclusion}: Laplace's idea works in spacetime 86 | 87 | \underline{Remark} 88 | \[ 89 | \begin{gathered} 90 | \ccf{\alpha}{00} = -f^{\alpha} \\ 91 | R\indices{^{\alpha}_{\beta \gamma \delta}} = 0 \quad \quad \, \alpha, \beta , \gamma, \delta = 1,2,3 \\ 92 | \boxed{R_{00} = 4 \pi G \rho} 93 | \end{gathered} 94 | \] 95 | 96 | \underline{Q}: What about transformation behavior of LHS of 97 | \[ 98 | \underbrace{\ddot{x}^a + \ccf{a}{bc} \dot{X}^b \dot{X}^c}_{\underbrace{(\nabla_{v_X}v_X)^a}_{:= a^a \text{``acceleration \underline{vector}''}}} = 0 99 | \] 100 | 101 | \subsection{The foundations of the geometric formulation of Newton's axiom} 102 | \begin{definition} 103 | A \textbf{Newtonian spacetime} is a quintuple $(M, \mathcal{O}, \mathcal{A}, \nabla, t)$ where $(M, \mathcal{O}, \mathcal{A})$ is a 4-dimensional smooth manifold, and \\ 104 | $t : M \to \mathbb{R} \text{ smooth function }$ 105 | 106 | \begin{enumerate} 107 | \item[(i)] ``There is an absolute space'' \quad $(dt)_p \neq 0 \quad \quad \, \forall \, p \in M$ 108 | \item[(ii)] ``Absolute time flows uniformly'' 109 | \[ 110 | \underbrace{\nabla dt}_{(0,2)\text{-tensor field}} = 0 \quad \quad \text{everywhere} 111 | \] 112 | 113 | \item[(iii)] add to axioms of Newtonian spacetime $\nabla = 0$ torsion free 114 | \end{enumerate} 115 | \end{definition} 116 | 117 | \begin{definition} 118 | Absolute space at time $\tau$ 119 | \[ 120 | S_{\tau} := \lbrace p \in M | t(p) = \tau \rbrace \\ 121 | \xrightarrow{dt \neq 0} M = \coprod S_{\tau} 122 | \] 123 | \end{definition} 124 | 125 | \begin{definition} A vector $X \in T_pM$ is called 126 | \begin{enumerate}[(a)] 127 | \item \textbf{future-directed}, if $dt(X) > 0$ 128 | \item \textbf{spatial}, if $dt(X) = 0$ 129 | \item \textbf{past-directed}, if $dt(X) < 0$ 130 | \end{enumerate} 131 | 
\end{definition} 132 | 133 | \underline{Picture} 134 | \underline{Newton I}: The worldline of a particle under the influence of no force (gravity isn't one, anyway) is a \underline{future-directed autoparallel} i.e. 135 | \[ 136 | \begin{gathered} 137 | \nabla_{v_{X}} v_{X} = 0 \\ 138 | dt(v_{X}) > 0 139 | \end{gathered} 140 | \] 141 | 142 | \underline{Newton II}: \\ 143 | $\nabla_{v_{X}} v_X = \frac{F}{m} \Longleftrightarrow m \cdot a = F$ \\ 144 | where $F$ is a spatial vector field: $dt(F) = 0$. 145 | 146 | \textbf{Convention}: restrict attention to atlases $\mathcal{A}_{stratified}$ whose charts $(U,x)$ have the property 147 | \[ 148 | \begin{aligned} 149 | & x^0 : U \to \mathbb{R} \\ 150 | & x^1 : U \to \mathbb{R} \\ 151 | & \vdots \quad \, \vdots \\ 152 | & x^3 153 | \end{aligned} 154 | \quad \quad \, 155 | x^0 = \left. t \right|_{U} \quad\quad \, \Longrightarrow \begin{gathered} 0 \overset{\text{``absolute time flows uniformly''} }{=} \nabla dt \\ 156 | 0 = \nabla_{\cibasis{x^a}} dx^0 = - \ccf{0}{ba} \quad \quad \, a = 0,1,2,3 157 | \end{gathered} 158 | \] 159 | 160 | Let's evaluate in a chart $(U,x)$ of a stratified atlas $\mathcal{A}_{sheet}$: Newton II: \\ 161 | $\nabla_{v_X} v_X = \frac{F}{m}$ \\ 162 | in a chart. 163 | \begin{align*} 164 | (X^0)'' + \cancel{\ccf{0}{cd} (X^a)' (X^b)' }^{ \text{stratified atlas}} = 0 \\ 165 | (X^{\alpha})'' + \ccf{\alpha}{\gamma \delta} X^{\gamma'} X^{\delta'} + \ccf{\alpha}{00} X^{0'} X^{0'} + 2\ccf{\alpha}{\gamma 0} X^{\gamma'} X^{0'} = \frac{F^{\alpha}}{m} \quad \quad \, \alpha = 1,2,3 166 | \end{align*} 167 | 168 | \[ 169 | \begin{gathered} 170 | \Longrightarrow (X^0)''(\lambda) = 0 \Longrightarrow X^0(\lambda) = a\lambda + b \quad \, \text{ constants $a,b$ } \text{ with} \\ 171 | X^0(\lambda) = (x^0 \after X)(\lambda) \overset{\text{stratified}}{=} (t \after X)(\lambda) 172 | \end{gathered} 173 | \] 174 | \underline{convention} parametrize worldline by absolute time 175 | \[ 176 | \frac{d}{d\lambda} = a \frac{d}{dt} 177 | \] 178 | \[ 179 | \begin{gathered} 180 | a^2 \ddot{X}^{\alpha} + a^2 \ccf{\alpha}{\gamma \delta} \dot{X}^{\gamma} \dot{X}^{\delta} + a^2 \ccf{\alpha}{00} \dot{X}^0 \dot{X}^0 + 2\ccf{\alpha}{\gamma 0} \dot{X}^{\gamma} \dot{X}^{0} = \frac{F^{\alpha}}{m} \\ 181 | \Longrightarrow \underbrace{\ddot{X}^{\alpha} + \ccf{\alpha}{\gamma \delta} \dot{X}^{\gamma} \dot{X}^{\delta} + \ccf{\alpha}{00} \dot{X}^0 \dot{X}^0 + 2\ccf{\alpha}{\gamma 0} \dot{X}^{\gamma} \dot{X}^{0}}_{a^{\alpha}} = \frac{1}{a^2} \frac{F^{\alpha}}{m} 182 | \end{gathered} 183 | \] 184 | -------------------------------------------------------------------------------- /main.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lazierthanthou/Lecture_Notes_GR/c05a0ba9442a3898f0f83b84886cd467db2e8cae/main.pdf -------------------------------------------------------------------------------- /main.tex: -------------------------------------------------------------------------------- 1 | % file: main.tex 2 | 3 | \documentclass[10pt,a4paper,oneside]{article} 4 | \usepackage{setspace} 5 | %\usepackage[ngerman]{babel} 6 | \usepackage[utf8]{inputenc} 7 | \usepackage{fancyhdr} 8 | \usepackage{tabularx} 9 | %\renewcommand{\rmdefault}{phv} 10 | %\renewcommand{\sfdefault}{phv} 11 | \usepackage[a4paper,left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} 12 | \onehalfspacing 13 | 14 | \setcounter{tocdepth}{2} % to get subsubsections in toc 15 | % cf. 
http://www.latex-community.org/forum/viewtopic.php?f=47&p=44760 16 | 17 | \usepackage{amssymb,latexsym} 18 | \usepackage{amsmath, amsthm} 19 | 20 | %for bibliography; installation using 'sudo tlmgr install amsrefs' 21 | \usepackage{amsrefs} 22 | 23 | \usepackage{graphics} 24 | 25 | \usepackage{hyperref} 26 | \hypersetup{colorlinks=true, urlcolor=blue} 27 | 28 | \usepackage{cancel} % http://jansoehlke.com/2010/06/strikethrough-in-latex/ 29 | 30 | \usepackage{listings} % http://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings 31 | % http://olmjo.com/files/teaching/PSC505/LaTeXandR.pdf 32 | 33 | % package for flower symbol (\ding(96)) 34 | \usepackage{pifont} 35 | % required installation: sudo apt-get install texlive-fonts-recommended (30MB) 36 | % http://tug.ctan.org/info/symbols/comprehensive/symbols-a4.pdf 37 | 38 | \usepackage{tikz} % for diagrams 39 | \usetikzlibrary{matrix,positioning,arrows,calc,decorations.pathmorphing,shapes} 40 | % for snaky lines (http://tex.stackexchange.com/questions/209942/curved-arrows-in-tikz) 41 | \tikzset{snake it/.style={-stealth, 42 | decoration={snake, 43 | amplitude = .4mm, 44 | segment length = 2mm, 45 | post length=0.9mm},decorate}} 46 | 47 | \usepackage[parfill]{parskip} 48 | 49 | \usepackage{framed} %for putting some text in boxes using \begin{framed} 50 | 51 | \usepackage{enumerate} 52 | 53 | %for displaying tensor indices properly. requires installation of tensor package using 'sudo tlmgr install tensor' 54 | \usepackage{tensor} 55 | 56 | %for placing captions of figures on the side instead of above/below the figure 57 | \usepackage{sidecap} 58 | \linespread{1.2} 59 | 60 | %plain makes sure that we have page numbers 61 | \pagestyle{plain} 62 | 63 | \theoremstyle{plain} 64 | \newtheorem{axiom}{Axiom} 65 | \newtheorem{theorem}{Theorem} 66 | \newtheorem{corollary}{Corollary} 67 | \newtheorem*{main}{Main Theorem} 68 | \newtheorem{lemma}{Lemma} 69 | \newtheorem{proposition}{Proposition} 70 | 71 | \theoremstyle{definition} 72 | \newtheorem{definition}{Definition} 73 | 74 | \theoremstyle{remark} 75 | \newtheorem*{notation}{Notation} 76 | 77 | \numberwithin{equation}{section} 78 | \numberwithin{figure}{section} 79 | \numberwithin{theorem}{section} 80 | 81 | %symbol for maps 82 | \renewcommand{\to}{\longrightarrow} 83 | \newcommand{\injmapto}{\hookrightarrow} 84 | \newcommand{\surjmapto}{\twoheadrightarrow} 85 | \newcommand{\linearmapto}{\stackrel{\sim}{\longrightarrow}} 86 | \newcommand{\projmapto}{\stackrel{\pi}{\longrightarrow}} 87 | 88 | %for real numbers 89 | \newcommand{\R}{\mathbb{R}} 90 | 91 | % manifold, atlas and topology 92 | \newcommand{\A}{\mathcal{A}} 93 | %\newcommand{\O}{\mathcal{O}} 94 | \newcommand{\mfd}{(M, \mathcal{O}, \mathcal{A})} 95 | 96 | \newcommand{\after}{\circ} 97 | \newcommand{\stdtop}{\mathcal{O}_{std}} 98 | \newcommand{\cibasis}[2][]{\frac{\partial #1}{\partial #2}} 99 | 100 | %connection coefficient functions or gammas 101 | \newcommand{\ccf}[2]{\Gamma\indices{^{#1}_{#2}}} 102 | \newcommand{\ccfx}[3]{\left(\Gamma_{#3}\right)\indices{^{#1}_{#2}}} % with chart index 103 | 104 | %set theory symbols 105 | %\renewcommand{\exists}{\exists\,} 106 | %\renewcommand{\forall}{\forall\,} 107 | 108 | %This defines a new command \questionhead which takes one argument and prints out Question #. with some space. 
109 | \newcommand{\questionhead}[1] 110 | { 111 | \noindent{\small\bf Question #1.} 112 | } 113 | 114 | \newcommand{\problemhead}[1] 115 | { 116 | \noindent{\small\bf Problem #1.} 117 | } 118 | 119 | \newcommand{\exercisehead}[1] 120 | { \smallskip 121 | \noindent{\small\bf Exercise #1.} 122 | } 123 | 124 | \newcommand{\solutionhead}[1] 125 | { 126 | \noindent{\small\bf Solution #1.} 127 | } 128 | 129 | \newcommand{\bubblethis}[2]{ 130 | \tikz[remember picture,baseline]{\node[anchor=base,inner sep=0,outer sep=0](#1) {#1};\node[overlay,cloud callout,callout relative pointer={(0.2cm,-0.7cm)}, aspect=2.5,fill=white!90] at ($(#1.north)+(-0.5cm,1.6cm)$) {#2};} 131 | } 132 | 133 | %----------------------------------- 134 | \begin{document} 135 | 136 | %----------------------------------- 137 | 138 | \title{Lecture Notes on General Relativity (GR)} 139 | \author{lazierthanthou \\ (\url{https://github.com/lazierthanthou/Lecture_Notes_GR})} 140 | %\thanks{} 141 | %\keywords{General Relativity, Gravity, Differential Geometry, Manifolds, Integration, mathematics, physics} 142 | \date{\today} 143 | 144 | \maketitle 145 | 146 | \tableofcontents 147 | 148 | \begin{abstract} 149 | These are lecture notes on General Relativity. 150 | 151 | They are based on the \href{https://www.youtube.com/channel/UCUHKG3S9N_QeIE2jQXd2-VQ/feed}{Central Lecture Course} by \textbf{Dr. Frederic P. Schuller} (\textbf{A thorough introduction to the theory of general relativity}) introducing the mathematical and physical foundations of the theory in 24 self-contained lectures at the International Winter School on Gravity and Light in Linz/Austria for the WE Heraeus International Winter School of Gravity and Light, 2015 in Linz as part of the world-wide celebrations of the 100th anniversary of Einstein's theory of general relativity and the International Year of Light 2015. 152 | 153 | These lectures develop the theory from first principles and aim at an audience ranging from ambitious undergraduate students to beginning PhD students in mathematics and physics. Satellite Lectures (see other videos on this channel) by Bernard F Schutz (Gravitational Waves), Domenico Giulini (Canonical Formulation of Gravity), Marcus C Werner (Gravitational Lensing) and Valeria Pettorino (Cosmic Microwave Background) expand on the topics of this central lecture course and take students to the research frontier. 154 | 155 | Spacetime is the physical key object, we shall be concerned about. 156 | 157 | \begin{framed} 158 | \textbf{Spacetime} is a \textbf{4-dimensional topological manifold} with a \textbf{smooth atlas} carrying a \textbf{torsion-free connection} compatible with a \textbf{Lorentzian metric} and a \textbf{time orientation} satisfying the \textbf{Einstein equations}. 159 | \end{framed} 160 | 161 | \end{abstract} 162 | 163 | \include{lecture1} % Topology 164 | %\include{tutorial1} 165 | 166 | \include{lecture2} % Manifolds 167 | %\include{tutorial2} 168 | 169 | \include{lecture3} % Multilinear Algebra 170 | 171 | \include{lecture4} % Differentiable Manifolds 172 | %\include{tutorial4} 173 | 174 | \include{lecture5} % Tangent Spaces 175 | 176 | \include{lecture6} % Fields 177 | 178 | \include{lecture7} % Connections 179 | %\include{tutorial4} 180 | 181 | \include{lecture8} % Parallel Transport & Curvature 182 | %\include{tutorial8} 183 | 184 | \include{lecture9} % Newtonian spacetime is curved! 
185 | 186 | \include{lecture10} % Metric Manifolds 187 | %\include{tutorial9} 188 | 189 | \include{lecture11} % Symmetry 190 | %\include{tutorial11} 191 | 192 | \include{lecture12} % Integration 193 | \include{lecture13} % Relativistic Spacetime 194 | \include{lecture14} % Matter 195 | 196 | \include{lecture15} % Einstein Gravity 197 | %\include{tutorial13} % Schwarzschild Spacetime 198 | 199 | \include{lecture18} % Canonical Formulation of GR-I 200 | 201 | \include{lecture22} % Black Holes 202 | 203 | %\include{lecture-others} 204 | 205 | \begin{bibdiv} 206 | \begin{biblist} 207 | \bib{mse465672}{misc}{ 208 | title={Are there simple examples of Riemannian manifolds with zero curvature and nonzero torsion}, 209 | author={Sepideh Bakhoda (http://math.stackexchange.com/users/36591/sepideh-bakhoda)}, 210 | note={URL: http://math.stackexchange.com/q/465672 (version: 2013-08-12)}, 211 | eprint={http://math.stackexchange.com/q/465672}, 212 | organization={Mathematics Stack Exchange} 213 | } 214 | \end{biblist} 215 | \end{bibdiv} 216 | 217 | \end{document} 218 | -------------------------------------------------------------------------------- /pdfs/Gravity_Notes_grande.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lazierthanthou/Lecture_Notes_GR/c05a0ba9442a3898f0f83b84886cd467db2e8cae/pdfs/Gravity_Notes_grande.pdf -------------------------------------------------------------------------------- /tutorial1.tex: -------------------------------------------------------------------------------- 1 | \section*{Topology Tutorial Sheet} 2 | filename : \texttt{main.pdf} \\ 3 | The WE-Heraeus International Winter School on Gravity and Light: Topology \\ 4 | 5 | EY : 20150524 6 | 7 | What I won't do here is retype up the solutions presented in the Tutorial (cf. \url{https://youtu.be/_XkhZQ-hNLs}): the presenter did a very good job. If someone wants to type up the solutions and copy and paste it onto this LaTeX file, in the spirit of open-source collaboration, I would encourage this effort. 8 | 9 | Instead, what I want to encourage is the use of as much CAS (Computer Algebra System) and symbolic and numerical computation because, first, we're in the 21st century, second, to set the stage for further applications in research. I use Python and Sage Math alot, mostly because they are open-source software (OSS) and fun to use. Also note that the structure of Sage Math modules matches closely to Category Theory. 10 | 11 | In checking whether a set is a topology, I found it strange that there wasn't already a function in Sage Math to check each of the axioms. 
So I wrote my own; see my code snippet, which you can copy, paste, edit freely in the spirit of OSS here, titled \texttt{topology.sage}: 12 | 13 | \href{https://gist.github.com/ernestyalumni/903eefd01be1f214598b}{gist github ernestyalumni topology.sage} \\ 14 | \href{https://gist.githubusercontent.com/ernestyalumni/903eefd01be1f214598b/raw/67083e3b3dec2faf2087713236b413b741bd1180/topology.sage}{Download topology.sage} 15 | 16 | Loading \texttt{topology.sage}, after changing into (with the usual Linux terminal commands, cd, ls) by 17 | \lstset{language=Python,basicstyle=\scriptsize\ttfamily, 18 | commentstyle=\ttfamily\color{gray}} 19 | \begin{lstlisting}[frame=single] 20 | sage: load(``topology.sage'') 21 | \end{lstlisting} 22 | 23 | \exercisehead{2: Topologies on a simple set} 24 | 25 | \questionhead{Does $\mathcal{O}_1:= \dots$ constitute a topology \dots?} 26 | 27 | \textbf{Solution}: Yes, since we check by typing in the following commands in Sage Math: 28 | 29 | \begin{lstlisting}[frame=single] 30 | emptyset in O_1 31 | Axiom2check(O_1) # True 32 | Axiom3check(O_1) # True 33 | \end{lstlisting} 34 | 35 | \questionhead{What about $\mathcal{O}_2$ \dots ?} 36 | 37 | \textbf{Solution}: No since the 3rd. axiom fails, as can be checked by typing in the following commands in Sage Math: 38 | \begin{lstlisting}[frame=single] 39 | emptyset in O_2 40 | Axiom2check(O_2) # True 41 | Axiom3check(O_2) # False 42 | \end{lstlisting} 43 | -------------------------------------------------------------------------------- /tutorial11.tex: -------------------------------------------------------------------------------- 1 | \subsection*{Tutorial 11 Symmetry } 2 | 3 | \exercisehead{1}\textbf{: True or false?} 4 | 5 | \begin{enumerate} 6 | \item[(a)] 7 | \begin{itemize} 8 | \item 9 | \item $\phi^*:T^*N \to T*M$ i.e. $\phi^*\nu(X) = \nu(\phi_*X)$ for smooth $\phi:M \to N$, so the pullback of a covector $\nu \in T^*N$ maps to a covector in $T*M$. 10 | \item 11 | \item 12 | \item 13 | \item 14 | \end{itemize} 15 | \item[(b)] 16 | \item[(c)] 17 | \end{enumerate} 18 | 19 | \exercisehead{2}: Pull-back and push-forward 20 | 21 | \questionhead{}Let's check this locally 22 | \[ 23 | \begin{aligned} 24 | & \phi^*(df)(X) = (df)(\phi_*X) = (df)(X^i \frac{ \partial y^j}{\partial x^i} \frac{ \partial }{ \partial y^j} ) = X^i \frac{ \partial y^j}{ \partial x^i} \frac{ \partial f}{ \partial y^j} \text{ where } 25 | & \phi_* X = X^i \frac{ \partial y^j}{ \partial x^i} \frac{ \partial }{ \partial y^j} \\ 26 | & d(\phi^*f)(X) = d(f(\phi))(X) = \frac{ \partial f}{ \partial y^j} \frac{ \partial y^j}{ \partial x^i } dx^i(X) = X^i \frac{ \partial y^j}{ \partial x^i} \frac{ \partial f}{ \partial y^j} 27 | \end{aligned} 28 | \] 29 | So 30 | \[ 31 | \boxed{ \phi^*(df) = d(\phi^* f) } \quad \quad \, \forall \, p \in M , \, \, \forall \, X \in \mathfrak{X}(M) 32 | \] 33 | The big idea is that this is a showing of the \textbf{naturality} of the pullback $\phi^*$ with $d$, i.e. 
that this commutes: 34 | 35 | \begin{tikzpicture} 36 | \matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em] 37 | { 38 | \Omega^1(M) & \Omega^1(N) \\ 39 | C^{\infty}(M) & C^{\infty}(N) \\ }; 40 | % \path[-stealth] 41 | \path[->] 42 | (m-1-2) edge node [above] {$\phi^*$} (m-1-1) 43 | % edge node [left] {$\text{ev}_0$} (m-2-2) 44 | % (m-1-1) edge node [left] {$\alpha$} (m-2-1) 45 | (m-2-2) edge node [auto] {$d$} (m-1-2) 46 | % edge node [below] {$\pi_M$} (m-2-2); 47 | edge node [auto] {$\phi^*$} (m-2-1) 48 | (m-2-1) edge node [left] {$d$} (m-1-1); 49 | \end{tikzpicture} 50 | 51 | \questionhead{} 52 | 53 | \[ 54 | (\phi_*)^a_{ \, \, b} := (dy^a)(\phi_*( \frac{ \partial }{ \partial x^b } ) ) 55 | \] 56 | \[ 57 | \text{ Let } g \in C^{\infty}(N) 58 | \] 59 | \[ 60 | \begin{gathered} 61 | \phi_* \left( \frac{ \partial }{ \partial x^b} \right) g = \frac{ \partial x^b} g\phi(p) = \frac{ \partial }{ \partial x^b} g\phi x^{-1}x(p) = \frac{ \partial }{ \partial x^b}(gyy^{-1}\phi x^{-1})(x) = \\ 62 | = \frac{ \partial }{ \partial x^b}(gy^{-1}(y\phi x^{-1}(x(p))) ) = \left. \frac{ \partial g}{ \partial y}^b \right|_y \left. \frac{ \partial y^a}{ \partial x^b} \right|_x = \frac{ \partial y^a}{ \partial x^b} \frac{ \partial g}{ \partial y^a} 63 | \end{gathered} 64 | \] 65 | Then 66 | \[ 67 | \phi_*\left( \frac{ \partial }{ \partial x^b} \right) = \frac{ \partial y^a}{ \partial x^b} \frac{ \partial }{ \partial y^a} 68 | \] 69 | and so 70 | \[ 71 | (\phi_*)^a_{ \, \, b} = \frac{ \partial y^a}{ \partial x^b} 72 | \] 73 | 74 | \questionhead{} 75 | 76 | \exercisehead{3}\textbf{:Lie derivative-the pedestrian way} 77 | 78 | \questionhead{} While it is true that $\forall \, p \in S^2$, for $x(p) = (\theta, \varphi)$, and $(yix^{-1})(\theta,\varphi) = (y^1,y^2,y^3) \in \mathbb{R}^3$ and that, at this point $p$, $(y^1)^2/a^2 + (y^2)^2/b^2 +(y^3)^2/c^3 = 1$, this doesn't imply (EY: 20150321 I think) that, globally, it's an ellipsoid (yet). In the familiar charts given, \\ 79 | spherical chart $(U,x) \in \mathcal{A}$ and \\ 80 | $(\mathbb{R}^3, y=\text{id}_{\mathbb{R}^3}) \in \mathcal{B}$ \\ 81 | it looks like an ellipsoid, but change to another choice of charts, and it could look something very different. 82 | 83 | \questionhead{} 84 | 85 | Equip $(\mathbb{R}^3, \mathcal{O}_{\text{st}}, \mathcal{B})$ with the Euclidean metric $g$, and pullback $g$. 86 | 87 | Note that the pullback of the inclusion from $\mathbb{R}^3$ onto $S^2$ for the Euclidean metric is the following: 88 | \[ 89 | i^* g\left( \frac{ \partial }{ \partial \theta^i }, \frac{ \partial }{ \partial \theta^j} \right) = g\left( i_*\frac{ \partial }{ \partial \theta^i }, i_*\frac{ \partial }{ \partial \theta^j} \right) = g\left( \frac{ \partial x^a}{ \partial \theta^i} \frac{ \partial }{ \partial x^a} , \frac{ \partial x^b}{ \partial \theta^j} \frac{ \partial }{ \partial x^b } \right) = g_{ab} \frac{ \partial x^a}{ \partial \theta^i} \frac{ \partial x^b}{ \partial \theta^j} 90 | \] 91 | With $g_{ab}=\delta_{ab}$, the usual Euclidean metric, this becomes the following: 92 | \[ 93 | g^{\text{ellipsoid}}_{ij} = \frac{ \partial x^a}{ \partial \theta^i} \frac{ \partial x^a}{ \partial \theta^j} 94 | \] 95 | 96 | At this point, one should get smart (we are in the 21st century) and use some sort of CAS (Computer Algebra System). I like Sage Math (version 6.4 as of 20150322). I also like the Sage Manifolds package for Sage Math. 
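As a cross-check that does not require Sage Manifolds, the pullback components $g^{\text{ellipsoid}}_{ij} = \frac{ \partial x^a}{ \partial \theta^i} \frac{ \partial x^a}{ \partial \theta^j}$ can also be computed with plain \texttt{sympy}. This is only a hedged sketch added for illustration (variable names are ad hoc), not part of the original Sage Manifolds workflow:
{\scriptsize \begin{verbatim}
import sympy as sp

the, phi, a, b, c = sp.symbols('the phi a b c', positive=True)

# ellipsoid parametrization (y o i o x^{-1})(the, phi)
X = sp.Matrix([a*sp.cos(phi)*sp.sin(the),
               b*sp.sin(phi)*sp.sin(the),
               c*sp.cos(the)])

J = X.jacobian([the, phi])     # 3x2 Jacobian, columns are d x^a / d theta^i
g_ind = sp.simplify(J.T * J)   # induced metric delta_ab (dx^a/dth^i)(dx^b/dth^j)

print(g_ind[0, 0])  # equals c^2 sin^2(the) + (a^2 cos^2(phi) + b^2 sin^2(phi)) cos^2(the)
print(g_ind[0, 1])  # equals -(a^2 - b^2) sin(phi) cos(phi) sin(the) cos(the)
print(g_ind[1, 1])  # equals (b^2 cos^2(phi) + a^2 sin^2(phi)) sin^2(the)
\end{verbatim}}
The components agree with the Sage Manifolds output quoted below.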
97 | 98 | I like Sage Math for the following reasons: 99 | \begin{itemize} 100 | \item Open source, so it’s open and freely available to anyone, which fits into my principle of making online education open and freely available to anyone, anytime 101 | \item Sage Math structures everything in terms of Category Theory and Categories and Morphisms naturally correspond to Classes and Class methods or functions in Object-Oriented Programming in Python and they’ve written it that way 102 | \end{itemize} 103 | and I like Sage Manifolds for roughly the same reasons, as manifolds are fit into a category theory framework that’s written into the Python code. e.g. 104 | 105 | {\small \begin{verbatim} 106 | sage: S2 = Manifold(2, 'S^2', r'\mathbb{S}^2', start_index=1) ; print S2 107 | sage: print S2 108 | 2-dimensional manifold 'S^2' 109 | sage: type(S2) 110 | 111 | \end{verbatim}} 112 | 113 | With code (I’ve provided for convenience; you can make your own as I wrote it based upon to example of $S^2$ on the sagemanifolds documentation website page), load it and do the following: 114 | 115 | cf. \url{https://github.com/ernestyalumni/diffgeo-by-sagemnfd/blob/master/S2.sage} \\ 116 | \url{http://sagemanifolds.obspm.fr/examples.html} 117 | 118 | {\scriptsize \begin{verbatim} 119 | sage: load("S2.sage") 120 | sage: U_ep = S2.open_subset('U_{ep}') 121 | sage: eps. = U_ep.chart() 122 | sage: a = var(“a”) 123 | sage: b = var(“b”) 124 | sage: c = var("c") 125 | sage: inclus = S2.diff_mapping(R3, {(eps, cart): [ a*cos(phi)*sin(the), b*sin(phi)*sin(the),c*cos(the) ]} , name="inc",latex_name=r'\mathcal{i}') 126 | sage: inclus.pullback(h).display() 127 | inc_*(h) = (c^2*sin(the)^2 + (a^2*cos(phi)^2 + b^2*sin(phi)^2)*cos(the)^2) dthe*dthe - (a^2 - b^2)*cos(phi)*cos(the)*sin(phi)*sin(the) dthe*dphi 128 | - (a^2 - b^2)*cos(phi)*cos(the)*sin(phi)*sin(the) dphi*dthe + (b^2*cos(phi)^2 + a^2*sin(phi)^2)*sin(the)^2 dphi*dphi 129 | sage: inclus.pullback(h)[2,2].expr() 130 | (b^2*cos(phi)^2 + a^2*sin(phi)^2)*sin(the)^2 131 | \end{verbatim} 132 | } 133 | A new open subset $U_{\text{ep}}$ was declared in $S^2$, a new chart $(U_{\text{ep}}, (\theta,\phi))$ was declared, the constants, $a,b,c$, were declared, and the inclusion map given in the problem 134 | \[ 135 | y\circ \mathfrak{i} \circ x^{-1} : (\theta, \phi) \mapsto ( a\cos{\phi} \sin{\theta}, b \sin{\phi} \sin{\theta}, c\cos{\theta}) 136 | \] 137 | Then the pullback of the inclusion map $\mathcal{i}$ was done on the Euclidean metric $h$, defined earlier in the file \begin{verbatim}S2.sage\end{verbatim}. Then one can access the components of this metric and do, for example, \begin{verbatim}simplify_full(),full_simplify(), reduce_trig()\end{verbatim} on the expression. 
138 | 139 | In Python, I could easily do this, and give an answer quick in LaTeX: 140 | 141 | %{\scriptsize 142 | \begin{verbatim} 143 | sage: for i in range(1,3): 144 | ....: for j in range(1,3): 145 | ....: print inclus.pullback(h)[i,j].expr() 146 | ....: latex(inclus.pullback(h)[i,j].expr() ) 147 | ....: 148 | c^2*sin(the)^2 + (a^2*cos(phi)^2 + b^2*sin(phi)^2)*cos(the)^2 149 | \end{verbatim} 150 | (EY: I'll suppress the LaTeX output but this sage math function gives you LaTeX code) 151 | %c^{2} \sin\left(\mathit{the}\right)^{2} + {\left(a^{2} \cos\left(\phi\right)^{2} + 152 | %b^{2} \sin\left(\phi\right)^{2}\right)} \cos\left(\mathit{the}\right)^{2} 153 | %-(a^2 - b^2)*cos(phi)*cos(the)*sin(phi)*sin(the) 154 | %-{\left(a^{2} - b^{2}\right)} \cos\left(\phi\right) \cos\left(\mathit{the}\right) \sin\left(\phi\right) \sin\left(\mathit{the}\right) 155 | %-(a^2 - b^2)*cos(phi)*cos(the)*sin(phi)*sin(the) 156 | %-{\left(a^{2} - b^{2}\right)} \cos\left(\phi\right) \cos\left(\mathit{the}\right) \sin\left(\phi\right) \sin\left(\mathit{the}\right) 157 | %(b^2*cos(phi)^2 + a^2*sin(phi)^2)*sin(the)^2 158 | %{\left(b^{2} \cos\left(\phi\right)^{2} + a^{2} \sin\left(\phi\right)^{2}\right)} \sin\left(\mathit{the}\right)^{2} 159 | % 160 | % 161 | 162 | and so 163 | 164 | \[ 165 | \boxed{ \begin{gathered} 166 | i^* g = c^{2} \sin\left(\mathit{the}\right)^{2} + {\left(a^{2} \cos\left(\phi\right)^{2} + b^{2} \sin\left(\phi\right)^{2}\right)} \cos\left(\mathit{the}\right)^{2} d\theta \otimes d\theta + \\ 167 | -2 {\left(a^{2} - b^{2}\right)} \cos\left(\phi\right) \cos\left(\mathit{the}\right) \sin\left(\phi\right) \sin\left(\mathit{the}\right) d\theta \otimes d\phi + \\ 168 | + {\left(b^{2} \cos\left(\phi\right)^{2} + a^{2} \sin\left(\phi\right)^{2}\right)} \sin\left(\mathit{the}\right)^{2} d\phi \otimes d\phi 169 | \end{gathered} } 170 | \] 171 | 172 | \questionhead{} 173 | 174 | {\small 175 | \begin{verbatim} 176 | sage: polar_vees = eps.frame() 177 | sage: X_1 = - sin(phi) * polar_vees[1] - cot( the ) * cos(phi) * polar_vees[2] 178 | sage: X_2 = cos( phi ) * polar_vees[1] - cot( the ) * sin( phi) * polar_vees[2] 179 | sage: X_3 = polar_vees[2] 180 | sage: X_2.lie_der(X_1).display() 181 | (cos(the)^2 - 1)/sin(the)^2 d/dphi 182 | sage: X_3.lie_der(X_1).display() 183 | cos(phi) d/dthe - cos(the)*sin(phi)/sin(the) d/dphi 184 | sage: X_3.lie_der(X_2).display() 185 | sin(phi) d/dthe + cos(phi)*cos(the)/sin(the) d/dphi 186 | \end{verbatim} 187 | } 188 | 189 | Indeed, one can check on a scalar field $f_{\text{eps}} \in C^{\infty}(S^2)$: 190 | {\small 191 | \begin{verbatim} 192 | sage: f_eps = S2.scalar_field({eps: function('f', the, phi ) }, name='f' ) 193 | sage: (X_1( X_2(f_eps)) - X_2(X_1(f_eps) ) ).display() 194 | U_{ep} --> R 195 | (the, phi) |--> -D[1](f)(the, phi) 196 | sage: X_2.lie_der(X_1) == -X_3 197 | True 198 | sage: X_3.lie_der(X_1) == X_2 199 | True 200 | sage: X_3.lie_der(X_2) == -X_1 201 | True 202 | \end{verbatim} 203 | } 204 | 205 | \[ 206 | \Longrightarrow \boxed{ [X_i, X_j] = -\epsilon_{ijk}X_k } 207 | \] 208 | So $\text{span}_{\mathbb{R}} \lbrace X_1,X_2,X_3 \rbrace$ equipped with $[ \, , \, ]$ constitute a Lie subalgebra on $S^2$ (It's closed under $[ \, , \, ]$ 209 | 210 | -------------------------------------------------------------------------------- /tutorial13.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial 13 Schwarzschild Spacetime} 2 | 3 | EY : 20150408 I'm not sure which tutorial follows which lecture at this point. 
4 | 5 | The tutorial video is excellent itself. Here, I want to encourage the use of CAS to do calculations. There are many out there. Again, I'm partial to the Sage Manifolds package for Sage Math which are both open-source and based on Python. I'll use that here. 6 | 7 | \exercisehead{1} \textbf{Geodesics in a Schwarzschild spacetime} 8 | 9 | \questionhead{Write down the Lagrangian} 10 | 11 | Load ``Schwarzschild.sage'' in Sage Math, which will always be available freely here \url{https://github.com/ernestyalumni/diffgeo-by-sagemnfd/blob/master/Schwarzschild.sage}: 12 | 13 | {\scriptsize 14 | \begin{verbatim} 15 | sage: load("Schwarzschild.sage") 16 | 4-dimensional manifold 'M' 17 | open subset 'U_sph' of the 4-dimensional manifold 'M' 18 | Levi-Civita connection 'nabla_g' associated with the Lorentzian metric 'g' on the 4-dimensional manifold 'M' 19 | \end{verbatim}} 20 | and so on. 21 | 22 | Look at the code and I had defined the Lagrangian to be \begin{verbatim}L\end{verbatim}. To get out the coefficients of $L$ of the components of the tangent vectors to the curve, i.e. $t', r',\theta',\phi'$, denoted \begin{verbatim}tp,rp,thp,php\end{verbatim} in my .sage file, do the following: 23 | 24 | \begin{verbatim} 25 | sage: L.expr().coefficients(tp)[1][0].factor().full_simplify() 26 | (2*G_N*M_0 - r)/r 27 | sage: L.expr().coefficients(rp)[1][0].factor().full_simplify() 28 | -r/(2*G_N*M_0 - r) 29 | sage: L.expr().coefficients(php)[1][0].factor().full_simplify() 30 | r^2 31 | sage: L.expr().coefficients(thp)[1][0].factor().full_simplify() 32 | r^2*sin(th)^2 33 | \end{verbatim} 34 | 35 | \questionhead{There are 4 Euler-Lagrange equations for this Lagrangian. Derive the one with respect to the function $t(\lambda)$!} 36 | 37 | \begin{verbatim} 38 | sage: L.expr().diff(t) 39 | 0 40 | \end{verbatim} 41 | This confirms that $\frac{ \partial L}{ \partial t} =0$ 42 | 43 | For $\frac{d}{d\lambda} \frac{ \partial L}{ \partial t'}$, then one needs to consider this particular workaround for Sage Math (computer technicality). One takes derivatives with respect to declared variables (declared with var) and then substitute in functions that are dependent upon $\lambda$, and then take the derivative with respect to the parameter $\lambda$. This does that: 44 | 45 | {\scriptsize 46 | \begin{verbatim} 47 | sage: L.expr().diff( thp ).factor().subs( r == gamma1 ).subs( thp == gamma3.diff( tau ) ).subs( th == gamma3 ).diff(tau)\ 48 | ....: .factor() 49 | 2*(2*cos(gamma3(tau))*gamma1(tau)*D[0](gamma3)(tau)^2 + 2*sin(gamma3(tau))*D[0](gamma1)(tau)*D[0](gamma3)(tau) 50 | + gamma1(tau)*sin(gamma3(tau))*D[0, 0](gamma3)(tau))*gamma1(tau)*sin(gamma3(tau)) 51 | \end{verbatim} } 52 | 53 | \questionhead{Show that the Lie derivative of $g$ with respect to the vector fields $K_t :=\frac{\partial}{\partial t}$} 54 | 55 | The first line defines the vector field by accessing the frame defined on a chart with spherical coordinates and getting the time vector. The second line is the Lie derivative of $g$ with respect to this vector field. 56 | \begin{verbatim} 57 | sage: K_t = espher[0] 58 | sage: g.lie_der(K_t).display() # 0, as desired 59 | 0 60 | \end{verbatim} 61 | 62 | EY : 20150410 My question is this: $\forall \, X \in \Gamma(TM)$ i.e. $X$ is a vector field on $M$, or, specifically, a section of the tangent bundle, then does 63 | \[ 64 | \mathcal{L}_Xg = 0 65 | \] 66 | instantly mean that $X$ is a symmetry for $(M,g)$? 
$\mathcal{L}_Xg$ is interpreted geometrically as how $g$ changes along the flow generated by $X$, and if it equals $0$, then $g$ doesn't change. 67 | 68 | -------------------------------------------------------------------------------- /tutorial2.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial Topological manifolds} 2 | 3 | filename: \verb|Sheet_1.2.pdf| 4 | 5 | %\exercisehead{1} 6 | 7 | \exercisehead{4: Before the invention of the wheel} 8 | 9 | \emph{Another one-dimensional topological manifold. Another one?} 10 | 11 | Consider set $F^1:= \lbrace (m,n)\in \mathbb{R}^2 | m^4 + n^4=1 \rbrace$, equipped with subset topology $\left. \mathcal{O}_{\text{std}} \right|_{F^1}$ 12 | 13 | \questionhead{$x:F^1 \to \mathbb{R}$ is what?} 14 | 15 | \solutionhead{} EY : 20150525 The tutorial video \url{https://youtu.be/ghfEQ3u_B6g} is really good and this solution is how I'd write it, but it's really the same (I needed the practice). 16 | 17 | \[ 18 | \boxed{ \begin{aligned} 19 | x : F^1 & \to \mathbb{R} \\ 20 | (m,n) & \mapsto m 21 | \end{aligned} } 22 | \] 23 | 24 | If $m=0$, $n^4=1$ so $n=\pm 1$ so it's not injective. 25 | 26 | Let the closed $n$-dim. upper half-space $\mathbb{H}^n \subseteq \mathbb{R}^1$. Then 27 | \[ 28 | \begin{aligned} 29 | \mathbb{H}^n = \lbrace (x_1 \dots x_n) \in \mathbb{R}^n | x_n \geq 0 \rbrace \\ 30 | \text{int}\mathbb{H}^n = \lbrace (x_1 \dots x_n) \in \mathbb{R}^n | x_n > 0 \rbrace \\ 31 | - \mathbb{H}^n = \lbrace (x_1 \dots x_n) \in \mathbb{R}^n | x_n \leq 0 \rbrace \\ 32 | -\text{int}\mathbb{H}^n = \lbrace (x_1 \dots x_n) \in \mathbb{R}^n | x_n <0 \rbrace 33 | \end{aligned} 34 | \] 35 | 36 | \questionhead{This map $x$ may be made injective by restricting its domain to either of 2 maximal open subsets of $F^1$. Which ones?} 37 | 38 | \solutionhead{} 39 | 40 | Let 41 | \[ 42 | \begin{aligned} 43 | & U_+ = F^1 \cap \text{int}\mathbb{H}^2 \\ 44 | & U_- = F^1 \cap -\text{int}\mathbb{H}^2 45 | \end{aligned} 46 | \] 47 | 48 | Look at 49 | \[ 50 | \begin{aligned} 51 | & x^4 = 1 - n^4 \\ 52 | \Longrightarrow & x = \pm ( 1 - n^4)^{1/4} 53 | \end{aligned} 54 | \] 55 | 56 | Then for 57 | \[ 58 | \begin{aligned} 59 | x_+^{-1}: (-1,1) \subseteq \mathbb{R} & \to U_+ \\ 60 | m & \mapsto (m,(1-m^4)^{1/4}) \\ 61 | x_-^{-1}: (-1,1) \subseteq \mathbb{R} & \to U_- \\ 62 | m & \mapsto (m,-(1-m^4)^{1/4}) \\ 63 | \end{aligned} 64 | \] 65 | $x_+$,$x_-$ injective (since left inverse exists). 66 | 67 | 68 | \questionhead{Construct injective $y$} 69 | 70 | \solutionhead{} 71 | 72 | Let 73 | \[ 74 | \begin{aligned} 75 | & V_+ = F^1 \cap \text{int}\mathbb{H}^1 \\ 76 | & V_- = F^1 \cap -\text{int}\mathbb{H}^1 77 | \end{aligned} 78 | \] 79 | 80 | Then 81 | \[ 82 | \begin{aligned} 83 | y_+: V_+ & \to (-1,1) \subseteq \mathbb{R} \\ 84 | (m,n) & \mapsto n \\ 85 | y_-: V_- & \to (-1,1) \subseteq \mathbb{R} \\ 86 | (m,n) & \mapsto n 87 | \end{aligned} 88 | \] 89 | 90 | \questionhead{Construct inverse $y^{-1}$} 91 | \solutionhead{} 92 | 93 | 94 | For 95 | \[ 96 | \begin{aligned} 97 | y_+^{-1}: (-1,1) \subseteq \mathbb{R} & \to V_+ \\ 98 | n & \mapsto ((1-n^4)^{1/4},n) \\ 99 | y_-^{-1}: (-1,1) \subseteq \mathbb{R} & \to V_- \\ 100 | n & \mapsto (-(1-n^4)^{1/4},n) \\ 101 | \end{aligned} 102 | \] 103 | $y_+$,$y_-$ injective (since left inverse exists). 
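A quick numerical sanity check of these chart maps (a hedged Python sketch, not part of the tutorial sheet; the helper names are ad hoc) confirms that the parametrizations land on $F^1$ and that $y_{\pm}$ are indeed left inverses:
{\small \begin{verbatim}
def y_plus_inv(n):   # (-1,1) -> V_+
    return ((1 - n**4)**0.25, n)

def y_minus_inv(n):  # (-1,1) -> V_-
    return (-(1 - n**4)**0.25, n)

def y(p):            # y_+ and y_- both project onto the second slot
    return p[1]

for n in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    for inv in (y_plus_inv, y_minus_inv):
        m, nn = inv(n)
        assert abs(m**4 + nn**4 - 1) < 1e-12   # the image point lies on F^1
        assert abs(y((m, nn)) - n) < 1e-12     # y o y^{-1} = id on (-1,1)
\end{verbatim}}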
104 | 105 | 106 | 107 | Note $\begin{aligned} & \quad \\ 108 | & (-1,0) \notin U_+,U_- \\ 109 | & (1,0) \notin U_+,U_- \\ 110 | \end{aligned}$ 111 | 112 | and 113 | 114 | $\begin{aligned} & \quad \\ 115 | & (0,1) \notin V_+,V_- \\ 116 | & (0,-1) \notin V_+,V_- \\ 117 | \end{aligned}$ 118 | 119 | 120 | \questionhead{construct \emph{transition map } $x \circ y^{-1}$} 121 | 122 | \solutionhead{} 123 | 124 | \[ 125 | \begin{aligned} 126 | & 127 | \begin{aligned} 128 | x_+y_+^{-1} : (0,1) \subseteq \mathbb{R} & \to (0,1) \subseteq \mathbb{R} \\ 129 | n & \mapsto (1-n^4)^{1/4} 130 | \end{aligned} \\ 131 | & 132 | \begin{aligned} 133 | x_-y_+^{-1} : (-1,0) \subseteq \mathbb{R} & \to (0,1) \subseteq \mathbb{R} \\ 134 | n & \xrightarrow{ y_+^{-1} } ( (1-n^4)^{1/4}, n) \xrightarrow{ x_- } (1-n^4)^{1/4} 135 | \end{aligned} \\ 136 | & \begin{aligned} 137 | x_+y_-^{-1} : (0,1) \subseteq \mathbb{R} & \to (-1,0) \subseteq \mathbb{R} \\ 138 | n & \mapsto -(1-n^4)^{1/4} 139 | \end{aligned} \\ 140 | & \begin{aligned} 141 | x_-y_-^{-1} : (-1,0) \subseteq \mathbb{R} & \to (-1,0) \subseteq \mathbb{R} \\ 142 | n & \mapsto -(1-n^4)^{1/4} 143 | \end{aligned} 144 | \end{aligned} 145 | \] 146 | 147 | \questionhead{\dots Does the collection of these domains and maps form an atlas of $F^1$?} 148 | 149 | Yes, with atlas 150 | 151 | \[ 152 | \mathcal{A} = \lbrace \begin{aligned} & (U_+,x_+) \\ 153 | & (U_-,x_-) \end{aligned}, \, \begin{aligned} & (V_+,y_+) \\ & (V_-,y_-) \end{aligned} \rbrace 154 | \] 155 | 156 | Clearly 157 | \[ 158 | \begin{gathered} 159 | U_+ \cup U_- \cup V_+ \cup V_- = (F^1 \cap \text{int}\mathbb{H}^2) \cup (F^1 \cap -\text{int}\mathbb{H}^2)\cup (F^1 \cap \text{int}\mathbb{H}^1) \cup (F^1 \cap -\text{int}\mathbb{H}^1) = \\ 160 | = F^1 \cap \mathbb{R}^2\backslash \lbrace (0,0) \rbrace = F^1 161 | \end{gathered} 162 | \] 163 | and (the point is that) $x_{\pm},y_{\pm}$ are homeomorphisms of open sets of $F^1$ onto open sets of 1 dim. $\mathbb{R}^1$ (namely $(-1,1) \subseteq \mathbb{R}^1$), and so $\mathcal{A}$ is an atlas of $F^1$. 164 | -------------------------------------------------------------------------------- /tutorial4.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial 4 Differentiable Manifolds} 2 | 3 | EY : 20151109 The \url{gravity-and-light.org} website, where you can download the tutorial sheets \emph{and} the full length videos for the tutorials and lectures, are no longer there. $=($ 4 | 5 | Hopefully, the YouTube video will remain: \url{https://youtu.be/FXPdKxOq1KA?list=PLFeEvEPtX_0RQ1ys-7VIsKlBWz7RX-FaL} 6 | 7 | \exercisehead{1: True or false?} \emph{These basic questions are designed to spark discussion and as a self-test.} 8 | 9 | Tick the correct statements, but not the incorrect ones! 10 | 11 | \begin{enumerate} 12 | \item[(a)] The function $f: \mathbb{R} \to \mathbb{R}$, \dots 13 | \begin{itemize} 14 | \item 15 | \item 16 | \item \dots , defined by $f(x) = |x^3|$, lies in $C^3(\mathbb{R} \to \mathbb{R})$. 
17 | 18 | \solutionhead{1a3} For $f: \mathbb{R} \to \mathbb{R}$, $f(x) = |x^3| = \begin{cases} x^3 & \text{ if } x \geq 0 \\ 19 | -x^3 & \text{ if } x < 0 \end{cases}$ 20 | \[ 21 | \begin{aligned} 22 | & f'(x) = \begin{cases} 3x^2 & \text{ if } x \geq 0 \\ 23 | -3x^2 & \text{ if } x < 0 \end{cases} \\ 24 | & f''(x) = \begin{cases} 6x & \text{ if } x \geq 0 \\ 25 | -6x & \text{ if } x < 0 \end{cases} 26 | \end{aligned} 27 | \] 28 | Thus, 29 | \[ 30 | \boxed{ f(x) = |x^3| \in C^1(\mathbb{R}) \text{ but } f(x) \notin C^2(\mathbb{R}) \subseteq C^3(\mathbb{R}) } 31 | \] 32 | \item 33 | \item 34 | \end{itemize} 35 | \item[(b)] 36 | \item[(c)] 37 | \end{enumerate} 38 | 39 | \textbf{Short} \exercisehead{4: Undergraduate multi-dimensional analysis } 40 | 41 | \emph{A good notation and basic results for partial differentiation}. 42 | 43 | For a map $f: \mathbb{R}^d \to \mathbb{R}$ we denote by the map $\partial_i f: \mathbb{R}^d \to \mathbb{R}$ the partial derivative with respect to the $i$-th entry. 44 | 45 | \questionhead{:} Given a function 46 | \[ 47 | f: \mathbb{R}^3 \to \mathbb{R}; \, (\alpha, \beta, \delta) \mapsto f(\alpha,\beta,\delta) := \alpha^3\beta^2 + \beta^2 \delta + \delta 48 | \] 49 | calculate the values of the following derivatives: 50 | 51 | \solutionhead{:} 52 | 53 | \begin{itemize} 54 | \item $(\partial_2f)(x,y,z) = $ 55 | \item $(\partial_1f)(\square,\circ,*) =$ 56 | \item $(\partial_1 \partial_2 f)(a,b,c) = $ 57 | \item $(\partial_3^2 f)(299,1222,0) =$ 58 | \end{itemize} 59 | 60 | EY: 20151110 61 | 62 | For $f(\alpha,\beta,\delta) := \alpha^3\beta^2 + \beta^2 \delta + \delta$, or $f(x,y,z) = x^3 y^2 + y^2 z + z$, 63 | \[ 64 | \begin{aligned} 65 | & (\partial_2 f) = 2(x^3y+yz) \\ 66 | & (\partial_1 f) = 3x^2 y^2 \\ 67 | & (\partial_1\partial_2 f) = 6x^2 y \\ 68 | & (\partial_3^2f) = 0 69 | \end{aligned} 70 | \] 71 | and so 72 | \begin{itemize} 73 | \item $(\partial_2f)(x,y,z) = 2(x^3 y + yz) $ 74 | \item $(\partial_1f)(\square,\circ,*) = 3\square^2 \circ^2$ 75 | \item $(\partial_1 \partial_2 f)(a,b,c) = 6a^2 b$ 76 | \item $(\partial_3^2 f)(299,1222,0) = 0$ 77 | \end{itemize} 78 | 79 | 80 | 81 | \exercisehead{5: Differentiability on a manifold} 82 | 83 | \emph{How to deal with functions and curves in a chart} 84 | 85 | Let $(M, \mathcal{O}, \mathcal{A})$ be a smooth $d$-dimensional manifold. Consider a chart $(U,x)$ of the atlas $\mathcal{A}$ together with a smooth curve $\gamma : \mathbb{R} \to U$ and a smooth function $f:U \to \mathbb{R}$ on the domain $U$ of the chart. 86 | 87 | \questionhead{:} Draw a commutative diagram containing the chart domain, chart map, function, curveand the respective representatives of the function and the curve in the chart. 
88 | 89 | \solutionhead{:} 90 | 91 | \begin{tikzpicture}[decoration=snake] 92 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 93 | { 94 | \mathbb{R} & U & \mathbb{R}^d \\ 95 | & \mathbb{R} & \\ 96 | }; 97 | \path[->] 98 | (m-1-1) edge node [above] {$\gamma$} (m-1-2) 99 | edge [bend left=40] node [auto] {$x\circ \gamma$} (m-1-3) 100 | (m-1-3) edge [bend left=15] node [auto] {$x^{-1}$} (m-1-2) 101 | edge node [right] {$(f\circ x^{-1})$ } (m-2-2) 102 | (m-1-2) edge node [left] {$f$} (m-2-2) 103 | edge node [auto] {$x$} (m-1-3); 104 | \end{tikzpicture} \quad \quad \, \begin{tikzpicture}[decoration=snake] 105 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 106 | { 107 | \tau \in \mathbb{R} & p \in U & x(p) = (x\circ \gamma)(\tau) \in \mathbb{R}^d \\ 108 | & f(p) \in \mathbb{R} & \\ 109 | }; 110 | \path[|->] 111 | (m-1-1) edge node [above] {$\gamma$} (m-1-2) 112 | edge [bend left=40] node [auto] {$x\circ \gamma$} (m-1-3) 113 | (m-1-3) edge [bend left=15] node [auto] {$x^{-1}$} (m-1-2) 114 | edge node [right] {$(f\circ x^{-1})$ } (m-2-2) 115 | (m-1-2) edge node [left] {$f$} (m-2-2) 116 | edge node [auto] {$x$} (m-1-3); 117 | \end{tikzpicture} 118 | 119 | 120 | 121 | \questionhead{:} Consider, for $d=2$, 122 | \[ 123 | (x\circ \gamma)(\lambda):= (\cos{(\lambda)}, \sin{(\lambda)} ) \text{ and } (f\circ x^{-1})((x,y)) := x^2 +y^2 124 | \] 125 | Using the chain rule, calculate 126 | \[ 127 | (f\circ \gamma)'(\lambda) 128 | \] 129 | explicitly. 130 | 131 | \solutionhead{:} 132 | 133 | EY : 20151109 Indeed, the domains and codomains of this $f\gamma$ mapping makes sense, from $\mathbb{R} \to \mathbb{R}$ for 134 | \begin{tikzpicture}[decoration=snake] 135 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 136 | { 137 | \mathbb{R} & U & \mathbb{R}^d \\ 138 | & \mathbb{R} & \\ 139 | }; 140 | \path[->] 141 | (m-1-1) edge node [above] {$\gamma$} (m-1-2) 142 | edge [bend left=40] node [auto] {$x\circ \gamma$} (m-1-3) 143 | edge node [auto] {$f\circ \gamma$} (m-2-2) 144 | (m-1-3) edge [bend left=15] node [auto] {$x^{-1}$} (m-1-2) 145 | edge node [right] {$(f\circ x^{-1})$ } (m-2-2) 146 | (m-1-2) edge node [left] {$f$} (m-2-2) 147 | edge node [auto] {$x$} (m-1-3); 148 | \end{tikzpicture} \quad \quad \, \begin{tikzpicture}[decoration=snake] 149 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 150 | { 151 | \tau \in \mathbb{R} & p \in U & x(p) = (x\circ \gamma)(\tau) \in \mathbb{R}^d \\ 152 | & f(p) \in \mathbb{R} & \\ 153 | }; 154 | \path[|->] 155 | (m-1-1) edge node [above] {$\gamma$} (m-1-2) 156 | edge [bend left=40] node [auto] {$x\circ \gamma$} (m-1-3) 157 | edge node [auto] {$f\circ \gamma$} (m-2-2) 158 | (m-1-3) edge [bend left=15] node [auto] {$x^{-1}$} (m-1-2) 159 | edge node [right] {$(f\circ x^{-1})$ } (m-2-2) 160 | (m-1-2) edge node [left] {$f$} (m-2-2) 161 | edge node [auto] {$x$} (m-1-3); 162 | \end{tikzpicture} 163 | 164 | \[ 165 | \begin{gathered} 166 | (f\circ \gamma)'(\lambda) = (Df)\cdot \dot{\gamma}(\lambda) = \frac{ \partial f}{ \partial x^j} \dot{\gamma}^j(\lambda) = 2x (-\sin{\lambda} ) + 2y \cos{\lambda} = 2(-\cos{\lambda} \sin{\lambda} + \sin{\lambda} \cos{\lambda} ) = 0 167 | \end{gathered} 168 | \] 169 | -------------------------------------------------------------------------------- /tutorial5.tex: -------------------------------------------------------------------------------- 1 | \section{Lecture 5: Tangent Spaces} 2 | 3 | lead question: 
``what is the velocity of a curve $\gamma$ \@ point $p$? 4 | 5 | \subsection{Velocities} 6 | 7 | \begin{definition} 8 | $(M,\mathcal{O},\mathcal{A})$ smooth mfd. \\ 9 | curve $\gamma : \mathbb{R} \to M$ at least $C^1$. \\ 10 | Suppose $\gamma(\lambda_0) =p$ \\ 11 | The \textbf{velocity} of $\gamma$ \@ $p$ is the linear map 12 | \[ 13 | \begin{gathered} 14 | v_{\gamma, p} : C^{\infty}(M) \xrightarrow{ \sim } \mathbb{R} 15 | \end{gathered} 16 | \] 17 | $C^{\infty}(M) := \lbrace f: M \to \mathbb{R} | f \text{ smooth function } \rbrace$ equipped with $\begin{gathered} \quad \\ 18 | (f\oplus g)(p) := f(p) + g(p) \\ 19 | (\lambda \otimes g)(p) := \lambda \cdot g(p) \end{gathered}$ 20 | 21 | $\sim$ denotes linear map on top of $\xrightarrow{}$. 22 | 23 | \[ 24 | f \mapsto v_{\gamma,p}(f):= (f\circ \gamma)'(\lambda_0) 25 | \] 26 | 27 | 28 | \end{definition} 29 | 30 | intuition 31 | \begin{tikzpicture} 32 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 33 | { 34 | \mathbb{R} & M & \mathbb{R} \\ 35 | }; 36 | \path[->] 37 | (m-1-1) edge node [auto] {$\gamma$} (m-1-2) 38 | edge [bend right=40] node [auto] {$f\circ \gamma $} (m-1-3) 39 | (m-1-2) edge node [auto] {$f $} (m-1-3); 40 | \end{tikzpicture} 41 | 42 | 43 | 44 | Schuller says: children run around the world. Temperature function as temperature contour lines. You feel the temperature. You observe the rate of change of temperature as you run around. $f$ is temperature. 45 | 46 | \underline{past}: `` $\underbrace{v^i}_{} (\partial_i f) = (\underbrace{v^i \partial_i}_{\text{vector}})f$ 47 | 48 | \subsection{Tangent vector space} 49 | 50 | \begin{definition} 51 | For each point $p \in M$ \\ 52 | def the \textbf{set} ``tangent space $\neq_0 M$ \@ $p$ `` 53 | \[ 54 | T_p M := \lbrace v_{\gamma, p} | \gamma \text{ smooth curves } \rbrace 55 | \] 56 | \end{definition} 57 | 58 | \underline{picture}:\\ 59 | rather $M$ than (embedded) $p$ $T_pM$ EY : 20151109 see \url{https://youtu.be/pepU_7NJSGM?t=12m38s} for the picture 60 | 61 | \underline{Observation}: $T_pM$ can be made into a vector space. 62 | 63 | \[ 64 | \begin{aligned} 65 | & \begin{aligned} 66 | \oplus : & T_pM \times T_pM \to \\ 67 | & (v_{\gamma,p} \oplus v_{\delta,p})(\underbrace{f}_{ \in C^{\infty}(M)} ) := v_{\gamma,p}(f) +_{\mathbb{R}} v_{\delta,p}(f) \\ 68 | \end{aligned} \\ 69 | & \begin{aligned} 70 | \odot : & \mathbb{R} \times T_pM \to \text{Hom}(C^{\infty}(\mathbb{R}),\mathbb{R}) \\ 71 | & (\alpha \odot v_{\gamma,p} )(f) := \alpha \cdot_{\mathbb{R}} v_{\gamma, p}(f) 72 | \end{aligned} 73 | \end{aligned} 74 | \] 75 | Remains to be shown that 76 | \begin{enumerate} 77 | \item[(i)] $\exists \, \sigma$ curve : $v_{\gamma,p} \oplus v_{\delta,p} = v_{\sigma,p}$ 78 | \item[(ii)] $\exists \, \tau $ curve : $\alpha \odot v_{\gamma,p} = v_{\tau,p}$ 79 | \end{enumerate} 80 | 81 | \underline{Claim}: $\begin{aligned} & \quad \\ 82 | \tau : \mathbb{R}& \to M \\ 83 | & \mapsto \tau(\lambda) := \gamma(\alpha \lambda + \lambda_0) = (\gamma \circ \mu_{\alpha})(\lambda) 84 | \end{aligned}$ 85 | where $\begin{aligned} & \quad \\ 86 | \mu_{\alpha}: & \mathbb{R} \to \mathbb{R} \\ 87 | & r \mapsto \alpha \cdot r + \lambda_0 \end{aligned}$, 88 | does the trick. 
89 | 90 | $\tau(0) = \gamma(\lambda_0) =p$ 91 | 92 | \[ 93 | \begin{aligned} 94 | v_{\tau,p} & := (f\circ \tau)'(0) = (f\circ \gamma \circ \mu_{\alpha} )'(0) \\ 95 | & = (f\circ \gamma)'(\lambda_0) \cdot \alpha = \\ 96 | & = \alpha \cdot v_{\gamma,p} 97 | \end{aligned} 98 | \] 99 | 100 | Now for the sum: %(EY:20151109 ??) 101 | 102 | $v_{\gamma,p} \oplus v_{\delta,p} \overset{?}{=} v_{\sigma, p} $ 103 | 104 | make a \underline{choice} of chart $(\underbrace{U}_{\ni p} , x)$ In cloud: ill definition alarm bells. 105 | 106 | and define: 107 | 108 | Claim: 109 | \[ 110 | \begin{aligned} 111 | & \sigma : \mathbb{R} \to M \\ 112 | & \sigma(\lambda) := x^{-1}( \underbrace{ (x\circ \gamma)(\lambda_0 + \lambda)}_{\mathbb{R} \to \mathbb{R}^d} + (x\circ \delta)(\lambda_1+ \lambda) - (x\circ \gamma)(\lambda_0) ) 113 | \end{aligned} 114 | \] 115 | does the trick. 116 | \begin{proof} 117 | Since: 118 | \[ 119 | \begin{aligned} 120 | \sigma_x(0) & = x^{-1}((x\circ \gamma)(\lambda_0) + (x\circ \delta)(\lambda_1) - (x\circ \gamma)(\lambda_0)) \\ 121 | & = \delta(\lambda_1) = p \end{aligned} 122 | \] 123 | Now: 124 | \[ 125 | \begin{aligned} 126 | v_{\sigma_x,p}(f) & := (f\circ \sigma_x)'(0) = \\ 127 | & = ( \underbrace{ (f\circ x^{-1}) }_{\mathbb{R}^d \to \mathbb{R}} \circ \underbrace{ (x\circ \sigma_x) }_{\mathbb{R} \to \mathbb{R}^d} )'(\gamma) = \underbrace{ (x\circ \sigma_x)'(0) }_{(x\circ \gamma)'(\lambda_0) + (x\circ \delta)'(\lambda_1) } \cdot \left( \partial_i (f\circ x^{-1}) \right)(x( \underbrace{ \sigma(0)}_{p} ) ) = \\ 128 | & = (x\circ \gamma)'(\lambda_0)(\partial_i (f\circ x^{-1}) )(x(p)) + (x\circ \delta)(\lambda_1)(\partial_i (f\circ x^{-1}) )(x(p)) \\ 129 | & = (f\circ \gamma)'(\lambda_0) + (f\circ \delta)'(\lambda_1) = \\ 130 | & = v_{\gamma,p}(f) + v_{\delta,p}(f) \quad \, \forall \, f \in C^{\infty}(M) 131 | \end{aligned} 132 | \] 133 | 134 | \[ 135 | \boxed{ v_{\gamma,p} \oplus v_{\delta,p} = v_{\sigma, p} } 136 | \] 137 | \end{proof} 138 | 139 | \begin{tikzpicture} 140 | \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em, minimum width=2em] 141 | { 142 | \mathbb{R} & M & \mathbb{R} \\ 143 | & \mathbb{R}^d & \\ 144 | }; 145 | \path[->] 146 | (m-1-1) edge node [auto] {$\sigma$} (m-1-2) 147 | edge node [auto] {$x\circ \sigma $} (m-2-2) 148 | (m-1-2) edge node [auto] {$x $} (m-2-2) 149 | edge node [auto] {$f$} (m-1-3) 150 | (m-2-2) edge node [below] {$f\circ x^{-1}$} (m-1-3); 151 | \end{tikzpicture} 152 | 153 | 154 | \underline{picture}: (cf. \url{https://youtu.be/pepU_7NJSGM?t=39m5s}) 155 | 156 | \[ 157 | \begin{aligned} 158 | \gamma : \mathbb{R} \to M \\ 159 | \delta : \mathbb{R} \to M \end{aligned} 160 | \] 161 | $(\gamma \oplus)(\lambda) := \gamma(\lambda) + \delta(\lambda)$ 162 | 163 | EY : 20151109 Schuller says adding trajectories is chart dependent, bad. Adding velocities is good. 164 | \subsection{Components of a vector wrt a chart} 165 | 166 | \begin{definition} 167 | Let $(U,x) \in \mathcal{A}_{\text{smooth}}$. \\ 168 | Let $\begin{aligned} & \gamma : \mathbb{R} \to U \\ 169 | & \gamma(0) = p \end{aligned}$. 
170 | 171 | Calculate 172 | \[ 173 | \begin{aligned} 174 | v_{\gamma,p}(f) & := (f \circ \gamma)'(0) = (\underbrace{ (f\circ x^{-1}) }_{\mathbb{R}^d \to \mathbb{R} } \circ \underbrace{ (x\circ \gamma)}_{\mathbb{R} \to \mathbb{R}^d} )'(0) \\ 175 | & = \underbrace{ (x\circ \gamma)^{i'}(0) }_{\dot{\gamma}_x^i(0) } \cdot \underbrace{ (\partial_i(f\circ x^{-1} ) )(x(p)) }_{ =: \left( \frac{ \partial f}{ \partial x^i } \right)_p } 176 | \end{aligned} 177 | \] 178 | think cloud $f:M\to \mathbb{R}$ 179 | \[ 180 | = \boxed{ \dot{\gamma}_x^i(0) \cdot \left( \frac{ \partial }{ \partial x^i} \right)_p } f \quad \, \forall \, f \in C^{\infty}(M) 181 | \] 182 | $\therefore$ as a map. 183 | 184 | \[ 185 | v_{\gamma,p} \underbrace{=}_{\text{use of chart} } \underbrace{ \gamma_x^i(0) }_{ \text{ ``components of the velocity $v_{\gamma,p}$'' } } \underbrace{ \left( \frac{ \partial }{ \partial x^i} \right)}_{ \substack{ \text{ basis elements of the $T_pM$ wrt which the components need to be understood.} \\ 186 | \text{ ``chart induced basis of $T_pM$''} } } 187 | \] 188 | \end{definition} 189 | 190 | Picture: \url{https://youtu.be/pepU_7NJSGM?t=1h16s} 191 | 192 | \subsection{4. Chart-induced basis} 193 | 194 | \begin{definition} 195 | $(U,x) \in \mathcal{A}_{\text{smooth}}$ \\ 196 | the $\left( \frac{ \partial }{ \partial x^1} \right)_p , \dots , \left( \frac{ \partial }{ \partial x^d} \right)_p \in T_pU \subseteq T_pM$ 197 | 198 | constitute a \textbf{basis} of $T_pU$ 199 | 200 | \end{definition} 201 | 202 | \begin{proof} remains: linearly independent 203 | \[ 204 | \begin{gathered} 205 | \lambda^i \left( \frac{ \partial }{ \partial x^i} \right)_p \overset{!}{=} 0 \\ 206 | \Longrightarrow \lambda^i \left( \frac{ \partial }{ \partial x^i} \right)_p(x^j) = \lambda^i \partial_i (\underbrace{ x^j \circ x^{-1} }_{} )( x(p)) = \\ 207 | = \lambda^i \delta_i^{\,\,j} = \lambda^j \quad \quad \, j = 1 , \dots , d 208 | \end{gathered} \quad \quad \, \begin{gathered} 209 | x^j \circ x^{-1} : \mathbb{R}^d \to \mathbb{R} \\ 210 | (\alpha^1 , \dots , \alpha^d) \mapsto \alpha^j 211 | \end{gathered} 212 | \] 213 | in cloud: $x^j : U \to \mathbb{R}$ differentiable 214 | 215 | 216 | 217 | 218 | \end{proof} 219 | 220 | 221 | \begin{corollary} 222 | $ \text{dim}T_pM = d = \text{dim}M$ 223 | \end{corollary} 224 | 225 | \underline{Terminology}: $X \in T_pM$ $\to $ $\exists \, \gamma : \mathbb{R} \to M : X = v_{\gamma,p}$ and \\ 226 | \phantom{\underline{Terminology}:} $\exists \, \underbrace{ X_1^1 , \dots , X^d }_{\in \mathbb{R} } : X = X^i \left( \frac{ \partial }{ \partial x^i} \right)_p$ 227 | 228 | 229 | 230 | \subsection{5. Change of vector \emph{\underline{components}} under a change of chart} 231 | 232 | \ding{56} vector does \textbf{not} change under change of chart. 233 | 234 | Let $(U,x)$ and $(V,y)$ be overlapping charts and $p \in U\cap V$. 
\\ 235 | Let $X \in T_pM$ 236 | 237 | \[ 238 | X^i_{(y)}\cdot \left( \frac{ \partial }{ \partial y^i} \right)_p \underbrace{=}_{(V,y)} X \underbrace{=}_{ (U,x) } X^i_{(x)} \left( \frac{ \partial }{ \partial x^i} \right)_p 239 | \] 240 | to study the change of components formula: 241 | \[ 242 | \begin{aligned} 243 | \left( \frac{ \partial }{ \partial x^i} \right)_p f & = \partial_i(f\circ x^{-1} )(x(p)) = \\ 244 | & = \partial_i (\underbrace{ (f\circ y^{-1}) }_{\mathbb{R}^d \to \mathbb{R} } \circ \underbrace{ (y\circ x^{-1}) }_{\mathbb{R}^d \to \mathbb{R}^d} )(x(p)) \\ 245 | & = (\partial_i (y^j\circ x^{-1} ) )(x(p)) \cdot (\partial_j (f\circ y^{-1}) )(y(p)) = \\ 246 | & = \boxed{ \left( \frac{ \partial y^j}{ \partial x^i} \right)_p \cdot \left( \frac{ \partial }{ \partial y^j} \right)_p } f 247 | \end{aligned} 248 | \] 249 | \[ 250 | \begin{gathered} 251 | \Longrightarrow X^i_{(x)} \left( \frac{ \partial y^j}{ \partial x^i} \right)_p \left( \frac{ \partial }{ \partial y^j} \right)_p = X^j_{(y)}\left( \frac{ \partial }{ \partial y^j} \right)_p \\ 252 | \Longrightarrow \boxed{ X^j_{(y)} = \left( \frac{ \partial y^j}{ \partial x^i} \right)_pX^i_{(x)} } 253 | \end{gathered} 254 | \] 255 | 256 | \subsection{6. Cotangent spaces } 257 | 258 | $T_pM = V$ 259 | 260 | trivial $(T_pM)^* := \lbrace \varphi : T_pM \xrightarrow{\sim} \mathbb{R} \rbrace$ 261 | 262 | \underline{Example}: $f\in C^{\infty}(M)$ 263 | 264 | \[ 265 | \begin{aligned} 266 | (df)_p : & T_p M \xrightarrow{ \sim } \mathbb{R} \\ 267 | & X \mapsto (df)_p(X) 268 | \end{aligned} 269 | \] 270 | i.e. $\boxed{ (df)_p \in T_p^*M } $ 271 | 272 | $(df)_p$ is called the gradient of $f$ \@ $p\in M$. 273 | 274 | Calculate the components of the gradient w.r.t. the chart-induced basis of $(U,x)$: 275 | 276 | \[ 277 | \begin{aligned} 278 | \left( (df)_p \right)_j & := (df)_p\left( \left( \frac{ \partial }{ \partial x^j} \right)_p \right) \\ 279 | & = \left( \frac{ \partial f}{ \partial x^j } \right)_p = \partial_j (f\circ x^{-1} )(x(p)) 280 | \end{aligned} 281 | \] 282 | 283 | \begin{theorem} 284 | Consider chart $(U,x) \Longrightarrow x^i : U \to \mathbb{R}$ 285 | 286 | \underline{Claim}: $(d x^1)_p, (dx^2)_p, \dots , (dx^d)_p$ basis of $T_p^*M$ 287 | 288 | $\Longrightarrow $ In fact: dual basis: 289 | \[ 290 | (dx^a)_p \left( \left( \frac{ \partial }{ \partial x^b} \right)_p \right) = \left( \frac{ \partial x^a}{ \partial x^b} \right)_p = \dots = \delta_b^a 291 | \] 292 | \end{theorem} 293 | 294 | \subsection{ 7. Change of \emph{ \underline{components} } of a covector under a change of chart: } 295 | 296 | \[ 297 | \begin{gathered} 298 | \underbrace{ T_p^*M }_{ \ni \omega} \text{ with } 299 | \omega_{(y)j} (dy^j)_p = \omega = \omega_{(x)i} (dx^i)_p \\ 300 | \Longrightarrow \boxed{ \omega_{(y)i} = \frac{ \partial x^j}{ \partial y^i } \omega_{(x)j } } 301 | \end{gathered} 302 | \] 303 | -------------------------------------------------------------------------------- /tutorial7.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial 7 Connections} 2 | 3 | \exercisehead{1}\textbf{: True or false?} 4 | 5 | \begin{enumerate} 6 | \item[(a)] 7 | \begin{itemize} 8 | \item $\nabla_{fX}Y = f\nabla_XY$ by definition, so $\nabla_{fX} = f\nabla_X$, i.e. $\nabla_X$ is $C^{\infty}(M)$-linear in $X$ 9 | \item $f\in C^{\infty}(M)$ is a $(0,0)$-tensor field. $\nabla_Xf = Xf \equiv X(f)$ by definition. 10 | \item If the manifold is flat, I'm assuming that means that the manifold is globally a Euclidean space, and by definition, $\Gamma=0$.
11 | \[ 12 | \nabla_X Y = X^j \frac{ \partial }{ \partial x^j} (Y^i) \frac{ \partial }{ \partial x^i } + \Gamma^i_{jk} Y^j X^k \frac{ \partial }{ \partial x^i} = X^j \frac{ \partial Y^i}{ \partial x^j} \frac{ \partial }{ \partial x^i} + 0 13 | \] 14 | and similarly for any $(p,q)$-tensor field, i.e. 15 | \[ 16 | \nabla_X T = X^j \frac{ \partial T^{i_1 \dots i_p}_{ j_1 \dots j_q} }{ \partial x^j} 17 | \] 18 | \item \[ 19 | \nabla_X f = X^j \frac{ \partial f}{ \partial x^j} = X\cdot \text{grad}(f) 20 | \] 21 | \item $\forall \, (U,x) \in \mathcal{A}$, locally (after working out the first few cases and doing induction, one can look up the expression for the local form; I found it in Nakahara's \textbf{Geometry, Topology and Physics}, Eq. 7.26, and it needs to be modified for the convention of order of bottom indices for $\Gamma$): 22 | \[ 23 | \nabla_{\nu} t^{\lambda_1 \dots \lambda_p }_{ \mu_1 \dots \mu_q} = \partial_{\nu} t^{\lambda_1 \dots \lambda_p}_{ \mu_1 \dots \mu_q} + \Gamma^{\lambda_1}_{ \, \kappa \nu } t^{\kappa \lambda_2 \dots \lambda_p }_{\mu_1 \dots \mu_q} + \dots + \Gamma^{\lambda_p}_{ \kappa \nu } t^{\lambda_1 \dots \lambda_{p-1} \kappa }_{ \mu_1 \dots \mu_q} - \Gamma^{\kappa}_{ \mu_1 \nu} t^{\lambda_1 \dots \lambda_p }_{ \kappa \mu_2 \dots \mu_q} - \dots - \Gamma^{\kappa}_{ \mu_q \nu} t^{\lambda_1 \dots \lambda_p }_{\mu_1 \dots \mu_{q-1} \kappa } 24 | \] 25 | Clearly, $\nabla_X$ is uniquely fixed $\forall \, p \in M$ by choosing each of the $(\text{dim}M)^3$ many connection coefficient functions $\Gamma$. 26 | \end{itemize} 27 | \item[(b)] 28 | \begin{itemize} 29 | \item $\begin{aligned} & \quad \\ & \nabla: \mathfrak{X}(M) \to \mathfrak{X}(M) \\ 30 | & \nabla : (p,q)\text{-tensor field} \mapsto (p,q)\text{-tensor field} \end{aligned}$ 31 | \item By definition, $\nabla$ satisfies the Leibniz rule. 32 | \item 33 | \item 34 | \item 35 | \end{itemize} 36 | \end{enumerate} 37 | 38 | \exercisehead{2}: \textbf{Practical rules for how $\nabla$ acts} 39 | A torsion-free covariant derivative boils down to a connection coefficient function $\Gamma$ that is symmetric in the bottom indices. 40 | 41 | \begin{itemize} 42 | \item \[ 43 | \nabla_Xf = X(f) = X^i \frac{ \partial f}{ \partial x^i } 44 | \] 45 | \item \[ 46 | (\nabla_X Y)^a = X^i \frac{ \partial Y^a}{ \partial x^i} + \Gamma^a_{jk} Y^j X^k 47 | \] 48 | \item \[ 49 | (\nabla_X \omega)_a = X^i \frac{ \partial \omega_a}{ \partial x^i} - \Gamma^i_{ak} \omega_i X^k 50 | \] 51 | \item \[ 52 | (\nabla_m T)^a_{ \, \, bc} = \frac{ \partial }{ \partial x^m} (T^a_{ \, \, bc} ) + \Gamma^a_{ \, \, im} T^i_{bc} - \Gamma^i_{bm} T^a_{ic} - \Gamma^j_{cm} T^a_{bj} 53 | \] 54 | \item \[ 55 | (\nabla_{ \left[ m \right. } A)_{ \left. n \right] } = (\nabla_m A)_n - (\nabla_n A)_m = \frac{ \partial A_n}{ \partial x^m } - \Gamma^i_{ nm} A_i - \left( \frac{ \partial A_m}{ \partial x^n} - \Gamma^i_{mn} A_i \right) = \frac{ \partial A_n}{ \partial x^m} - \frac{ \partial A_m}{ \partial x^n } 56 | \] 57 | \item \[ 58 | (\nabla_m \omega)_{nr} = \frac{ \partial \omega_{nr}}{ \partial x^m} - \Gamma^i_{nm } \omega_{ir} - \Gamma^i_{rm} \omega_{ni} 59 | \] 60 | \end{itemize} 61 | 62 | 63 | \exercisehead{3}\textbf{: Connection coefficients} 64 | 65 | \questionhead{} 66 | 67 | The connection coefficient functions $\Gamma$ in the chart $(U \cap V,y)$ are given, in terms of those in the chart $(U\cap V,x)$, as follows: 68 | 69 | Recall Eq.
(\ref{Eq:WEHCG0703_changeofGamma}) 70 | \[ 71 | \Gamma^i_{jk}(y) = \frac{ \partial y^i}{ \partial x^q} \frac{ \partial^2 x^q}{ \partial y^j \partial y^k} + \frac{ \partial y^i}{ \partial x^q } \frac{ \partial x^s }{ \partial y^j} \frac{ \partial x^p }{ \partial y^k} \Gamma^q_{sp}(x) 72 | \] 73 | -------------------------------------------------------------------------------- /tutorial8.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial 8 Parallel transport \& Curvature} 2 | 3 | \exercisehead{1} 4 | 5 | \exercisehead{2}\textbf{: Where connection coefficients appear} 6 | 7 | It was suggested in the tutorial sheets and hinted in the lecture that the following should be committed to memory. 8 | 9 | \questionhead{: Recall the autoparallel equation for a curve $\gamma$} 10 | \begin{enumerate} 11 | \item[(a)] \[ 12 | \nabla_{v_{\gamma}} v_{\gamma} = 0 13 | \] 14 | \item[(b)] 15 | \[ 16 | \nabla_{v_{\gamma}} v_{\gamma} = \nabla_{ \dot{\gamma}^{\mu} \frac{ \partial }{ \partial x^{\mu}} } v_{\gamma} = \dot{\gamma}^{\nu} \nabla_{ \partial_{\nu}} v_{\gamma} = \dot{\gamma}^{\nu} \left[ \frac{ \partial v^{\rho}_{\gamma}}{ \partial x^{\nu} } + \Gamma^{\rho}_{\mu \nu} v_{\gamma}^{\mu} \right] \frac{ \partial }{ \partial x^{\rho }} = \dot{\gamma}^{\nu} \left[ \frac{ \partial \dot{\gamma}^{\rho }}{ \partial x^{\nu}} + \Gamma^{\rho}_{\mu \nu} \dot{\gamma}^{\mu} \right] \frac{ \partial }{ \partial x^{\rho }} = 0 17 | \] 18 | \[ 19 | \Longrightarrow \boxed{ \ddot{\gamma}^{\rho} + \Gamma^{\rho}_{\mu \nu} \dot{\gamma}^{\mu} \dot{\gamma}^{\nu} = 0 } 20 | \] 21 | as, for example, for $F(x(t))$, 22 | \[ 23 | \frac{dF(x(t))}{dt} = \dot{x} \frac{ \partial F}{ \partial x} = \frac{d}{dt} F 24 | \] 25 | so that 26 | \[ 27 | \dot{\gamma}^{\nu} \frac{ \partial v_{\gamma}^{\mu} }{ \partial x^{\nu}} = \frac{d}{d\lambda} v_{\gamma}^{\mu} = \frac{d^2}{d\lambda^2} \gamma^{\mu} 28 | \] 29 | \end{enumerate} 30 | 31 | \questionhead{: Determine the coefficients of the Riemann tensor with respect to a chart $(U,x)$} 32 | 33 | Recall this manifestly covariant definition 34 | 35 | \[ 36 | \text{Riem}(\omega, Z,X,Y) = \omega ( \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]}Z ) 37 | \] 38 | We want $R^i_{ \, \, jab}$. 39 | 40 | now 41 | \[ 42 | \begin{gathered} 43 | \nabla_X \nabla_Y Z = \nabla_X ( ( Y^{\mu} \frac{ \partial }{ \partial x^{\mu }} Z^{\rho} + \Gamma^{\rho}_{\mu \nu } Z^{\mu} Y^{\nu} ) \frac{\partial}{ \partial x^{\rho}} ) = (X^{\alpha} \frac{ \partial }{ \partial x^{\alpha}} (Y^{\mu} \frac{ \partial }{ \partial x^{\mu}} Z^{\rho} + \Gamma^{\rho}_{ \mu \nu} Z^{\mu} Y^{\nu} ) + \Gamma^{\rho}_{\alpha \beta} (Y^{\mu} \frac{ \partial }{ \partial x^{\mu} } Z^{\alpha} + \Gamma^{\alpha}_{\mu \nu} Z^{\mu} Y^{\nu} ) X^{\beta} )\frac{\partial }{ \partial x^{\rho }} 44 | \end{gathered} 45 | \] 46 | 47 | For $X = \partial_a$, $Y = \partial_b$, $Z=\partial_j$, the partial derivatives of the (constant) coefficients of the input vectors vanish. 48 | 49 | \[ 50 | \Longrightarrow \nabla_{ \partial_a} \nabla_{\partial_b} \partial_j = \left( \frac{ \partial }{ \partial x^a} (\Gamma^i_{ jb} ) + \Gamma^i_{\alpha a} \Gamma^{\alpha}_{jb} \right) \frac{ \partial }{ \partial x^i} 51 | \] 52 | 53 | Now 54 | \[ 55 | [X,Y]^i = X^j \frac{ \partial }{ \partial x^j} Y^i - Y^j \frac{ \partial X^i}{ \partial x^j} 56 | \] 57 | For coordinate vectors, $[\partial_i, \partial_j] = 0$ $\forall \, i,j = 1, \dots , d$.
58 | 59 | Thus 60 | \[ 61 | \boxed{ R^i_{ \, \, jab} = \frac{ \partial }{ \partial x^a} \Gamma^i_{jb} - \frac{ \partial }{ \partial x^b} \Gamma^i_{ja} + \Gamma^i_{\alpha a} \Gamma^{\alpha}_{jb} -\Gamma^i_{\alpha b} \Gamma^{\alpha}_{ja} } 62 | \] 63 | 64 | 65 | \questionhead{:$\text{Ric}(X,Y):=\text{Riem}^m_{ \, \, amb} X^a Y^b$ define $(0,2)$-tensor?} 66 | 67 | Yes, transforms as such: 68 | 69 | \[ 70 | \begin{gathered} 71 | \end{gathered} 72 | \] 73 | 74 | \subsection*{EY developments} 75 | 76 | I roughly follow the spirit in Theodore Frankel's \textbf{The Geometry of Physics: An Introduction} Second Ed. 2003, Chapter 9 Covariant Differentiation and Curvature, Section 9.3b. The Covariant Differential of a Vector Field. P.S. EY : 20150320 I would like a copy of the Third Edition but I don't have the funds right now to purchase the third edition: go to my tilt crowdfunding campaign, \url{http://ernestyalumni.tilt.com}, and help with your financial support if you can or send me a message on my various channels and ernestyalumni gmail email address if you could help me get a hold of a digital or hard copy as a pro bono gift from the publisher or author. 77 | 78 | The spirit of the development is the following: 79 | \begin{quote} 80 | ``How can we express connections and curvatures in terms of forms?'' -Theodore Frankel. 81 | \end{quote} 82 | 83 | From Lecture 7, connection $\nabla$ on vector field $Y$, in the ``direction'' $X$, 84 | \[ 85 | \begin{gathered} 86 | \nabla_{ \frac{ \partial }{ \partial x^k } } Y = \left( \frac{ \partial Y^i }{ \partial x^k } + \Gamma^i_{jk} Y^j \right) \frac{ \partial }{ \partial x^i } 87 | \end{gathered} 88 | \] 89 | Make the ansatz (approche, impostazione) that the connection $\nabla$ acts on $Y$, the vector field, first: 90 | \[ 91 | \begin{gathered} 92 | \nabla Y(X) = \left( X^k \frac{ \partial Y^i}{ \partial x^k} + \Gamma^i_{jk} Y^j X^k \right) \frac{ \partial}{ \partial x^i } = X^k \left( \nabla_{ \frac{ \partial }{ \partial x^k} } Y \right)^i \frac{ \partial }{ \partial x^i} = (\nabla_X Y)^i \frac{ \partial}{ \partial x^i} = \nabla_XY 93 | \end{gathered} 94 | \] 95 | 96 | Now from Lecture 7, Definition for $\Gamma$, 97 | \[ 98 | dx^i \left( \nabla_{ \frac{ \partial }{ \partial x^k } } \frac{ \partial }{ \partial x^j } \right) = \Gamma^i_{jk} 99 | \] 100 | 101 | Make this ansatz (approche, impostazine) 102 | \[ 103 | \nabla \frac{ \partial}{ \partial x^j } = \left( \Gamma^i_{jk} dx^k \right) \otimes \frac{ \partial }{ \partial x^i} \in \Omega^1(M,TM) = T^*M \otimes TM 104 | \] 105 | where $\Omega^1(M,TM) = T^*M \otimes TM$ is the set of all $TM$ or vector-valued 1-forms on $M$, with the 1-form being the following: 106 | \[ 107 | \Gamma^i_{jk} dx^k = \Gamma^i_{ \, \, j } \in \Omega^1(M) \quad \quad \, \begin{aligned} 108 | & \quad \\ 109 | & i = 1 \dots \text{dim}(M) \\ 110 | & j = 1\dots \text{dim}(M) \end{aligned} 111 | \] 112 | So $\Gamma^i_{ \, \, j}$ is a $\text{dim}M \times \text{dim}M$ matrix of 1-forms (EY !!!). 
113 | 114 | Thus 115 | \[ 116 | \nabla Y = (d(Y^i) + \Gamma^i_j Y^j ) \otimes \frac{ \partial }{ \partial x^i} 117 | \] 118 | 119 | So the connection is a (smooth) map from $TM$ to the set of all vector-valued 1-forms on $M$, $\Omega^1(M,TM)$, and then, after ``eating'' a vector $Y$, yields the ``covariant derivative'': 120 | \[ 121 | \begin{aligned} 122 | & \nabla: TM \to \Omega^1(M,TM) = T^*M \otimes TM \\ 123 | & \nabla : Y \mapsto \nabla Y \\ 124 | & \nabla Y : TM \to TM \\ 125 | & \nabla Y(X) \mapsto \nabla Y(X) = \nabla_X(Y) 126 | \end{aligned} 127 | \] 128 | 129 | Now 130 | \[ 131 | \left[ \frac{ \partial }{ \partial x^i} , \frac{ \partial }{ \partial x^j} \right] f = \frac{ \partial }{ \partial x^i } \left( \frac{ \partial }{ \partial x^j} \right) - \frac{ \partial }{ \partial x^j } \left( \frac{ \partial }{ \partial x^i} \right) = 0 132 | \] 133 | (this is okay as on $p \in (U,x)$; $x$-coordinates on same chart $(U,x)$) 134 | 135 | EY : 20150320 My question is when is this nontrivial or nonvanishing (i.e. not equal to $0$). 136 | \[ 137 | [e_a,e_b] = ? 138 | \] 139 | for a frame $(e_c)$ and would this be the difference between a tangent bundle $TM$ vs. a (general) vector bundle? 140 | 141 | Wikipedia helps here. cf. wikipedia, ``Connection (vector bundle)'' 142 | 143 | \[ 144 | \begin{gathered} 145 | \nabla : \Gamma(E) \to \Gamma(T^*M \otimes E) = \Omega^1(M,E) \\ 146 | \nabla e_a = \omega^c_{ab} f^b \otimes e_c \\ 147 | f^b \in T^*M \text{ (this is the dual basis for $TM$ and, note, this is for the manifold, $M$ } \\ 148 | \nabla_{f_b}e_a = \omega^c_{ab} e_c \in E 149 | \end{gathered} 150 | \] 151 | \[ 152 | \omega^c_a = \omega^c_{ab} f^b \in \Omega^1(M) 153 | \] 154 | is the connection 1-form, with $a,c = 1 \dots \text{dim}V$. EY : 20150320 This $V$ is a vector space living on each of the fibers of $E$. I know that $\Gamma(T^*M \otimes E)$ looks like it should take values in $E$, but it's meaning that it takes vector values of $V$. Correct me if I'm wrong: ernestyalumni at gmail and various social media. 155 | 156 | Let $\sigma \in \Gamma(E)$, $\sigma = \sigma^ae_a$ 157 | \[ 158 | \begin{gathered} 159 | \nabla \sigma = (d\sigma^c + \omega^c_{ab} \sigma^a f^b) \otimes e_c \text{ with } \\ 160 | d\sigma^c = \frac{ \partial \sigma^c}{ \partial x^b } f^b 161 | \end{gathered} 162 | \] 163 | \[ 164 | \Longrightarrow \nabla_X \sigma = \left( X^b \frac{ \partial \sigma^c}{ \partial x^b} + \omega^c_{ab} \sigma^a X^b \right)e_c = X^b \left( \frac{ \partial \sigma^c}{ \partial x^b } + \omega^c_{ab} \sigma^a \right)e_c 165 | \] 166 | -------------------------------------------------------------------------------- /tutorial9.tex: -------------------------------------------------------------------------------- 1 | \section*{Tutorial 9: Metric manifolds} 2 | 3 | 4 | 5 | \exercisehead{3: Levi-Civita Connection} 6 | Suppose torsion-free $T=0$ and metric-compatible connection $\nabla g=0$ 7 | 8 | \questionhead{Recall $T=0$ on a chart} 9 | 10 | 11 | 12 | 13 | 14 | 15 | \[ 16 | \boxed{ \Gamma^c_{ba} = \frac{1}{2} (g^{-1})^{cm} \left( \frac{ \partial g_{bm} }{ \partial x^a} + \frac{ \partial g_{ma}}{ \partial x^b} - \frac{ \partial g_{ab}}{ \partial x^m} \right) } 17 | \] 18 | or 19 | \[ 20 | \Gamma^a_{bc} = \frac{1}{2} (g^{-1})^{am} \left( \frac{ \partial g_{bm}}{\partial x^c} + \frac{ \partial g_{mc}}{ \partial x^b} - \frac{ \partial g_{bc}}{ \partial x^m} \right) 21 | \] 22 | 23 | 24 | --------------------------------------------------------------------------------
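A concrete way to sanity-check the boxed Levi-Civita formula above is to evaluate it on a simple metric. The following \texttt{sympy} sketch (added for illustration, not part of the tutorial sheet) computes $\Gamma^a_{bc}$ for the round 2-sphere of radius $r$ in coordinates $(\theta,\varphi)$ and reproduces the familiar coefficients:
{\small \begin{verbatim}
import sympy as sp

th, ph, r = sp.symbols('theta phi r', positive=True)
x = [th, ph]
g = sp.Matrix([[r**2, 0], [0, r**2*sp.sin(th)**2]])   # round-sphere metric
ginv = g.inv()

def Gamma(a, b, c):
    # Gamma^a_{bc} = 1/2 (g^{-1})^{am} ( d_c g_{bm} + d_b g_{mc} - d_m g_{bc} )
    return sp.simplify(sum(sp.Rational(1, 2)*ginv[a, m]
                           *(sp.diff(g[b, m], x[c]) + sp.diff(g[m, c], x[b])
                             - sp.diff(g[b, c], x[m]))
                           for m in range(2)))

print(Gamma(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma(1, 0, 1))   # Gamma^phi_{theta phi} = cos(theta)/sin(theta)
\end{verbatim}}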