
Lie Algebras

5 Root Systems

First we will need to show that we get a root system from a semisimple Lie algebra. Let \(\mg \) be a fixed semisimple Lie algebra, \(\mh \) a Cartan subalgebra, \(\Delta \) the set of roots. \(\kappa \) induces a nondegenerate symmetric bilinear form on \(\mh ^*\), and we will use \(\kappa \) to denote this as well. Let \((-)^*\) be the isomorphism between \(\mg \) and its dual induced by \(\kappa \).

  • Lemma 5.1 (\(\msl _2\)-triples). If \(\alpha \in \Delta \), \(e \in \mg _{\alpha }\), \(f \in \mg _{-\alpha }\), then \([e,f] = \kappa (e,f)\alpha ^*\). Moreover, if \(\alpha \in \Delta \), then \(\kappa (\alpha ,\alpha ) \neq 0\).

  • Proof. \(\kappa (h,[e,f]) = \kappa (\ad h(e),f) = \alpha (h) \kappa (e,f) = \kappa (h,\alpha ^*\kappa (e,f))\), giving the first result since \(\kappa \) is nondegenerate on \(\mh \). For the second, choose \(e,f\) so that \(\kappa (e,f)=1\). If \(\kappa (\alpha ,\alpha ) = \kappa (\alpha ,[e,f]) = \alpha ([e,f])\) were \(0\), then \(\alpha ^*,e,f\) would span a subalgebra presented by the relations \([e,f] = \alpha ^*,[\alpha ^*,e] = [\alpha ^*,f]=0\). This is a nilpotent (Heisenberg) subalgebra, so by Lie's theorem, \(\alpha ^* = [e,f]\), lying in its derived subalgebra, must act nilpotently on \(\mg \). But it also acts semisimply, so \(\ad \alpha ^*=0\), a contradiction.

Thus given \(\alpha \), we can pick \(e_{\alpha },f_{\alpha }\) so that \(\kappa (e_{\alpha },f_{\alpha }) = \frac 2 {\kappa (\alpha ,\alpha )}\). Then \([e_\alpha ,f_\alpha ] = h_\alpha = \frac {2\alpha ^*}{\kappa (\alpha ,\alpha )}\). Then \(e_\alpha ,f_\alpha ,h_\alpha \) give an \(\msl _2\)-triple associated with \(\alpha \).
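These normalizations can be checked concretely in \(\msl _2\) itself, where \(\alpha (h) = 2\), \(\kappa (h,h) = 8\), \(\alpha ^* = h/4\), \(\kappa (\alpha ,\alpha ) = \frac 1 2\), and \(h_\alpha = h\). A numerical sketch (standard basis of \(\msl _2\); the helper functions are mine):

```python
import numpy as np

# Standard basis of sl_2: e, h, f
e = np.array([[0, 1], [0, 0]], dtype=float)
h = np.array([[1, 0], [0, -1]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
basis = [e, h, f]

def bracket(x, y):
    return x @ y - y @ x

def ad_matrix(x, basis):
    # Matrix of ad(x) in the given basis: solve [x, b_j] = sum_i c_ij b_i.
    flat = np.array([b.flatten() for b in basis]).T  # columns = basis vectors
    cols = [np.linalg.lstsq(flat, bracket(x, b).flatten(), rcond=None)[0]
            for b in basis]
    return np.array(cols).T

def killing(x, y):
    return np.trace(ad_matrix(x, basis) @ ad_matrix(y, basis))

# kappa(h, h) = 8 and kappa(e, f) = 4 for the standard basis
assert np.isclose(killing(h, h), 8)
assert np.isclose(killing(e, f), 4)
# alpha(h) = 2, so alpha^* = h/4 and kappa(alpha, alpha) = kappa(h/4, h/4) = 1/2.
# Then kappa(e, f) = 4 = 2 / kappa(alpha, alpha), and [e, f] = h = h_alpha.
assert np.isclose(killing(h / 4, h / 4), 0.5)
assert np.allclose(bracket(e, f), h)
```

So the standard \(e,f,h\) already form the \(\msl _2\)-triple associated with \(\alpha \).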

\(\msl _2\)-triples associated with \(\alpha \) are unique up to scaling \(f \mapsto \lambda f, e \mapsto \lambda ^{-1}e\). This follows from the next theorem.

For convenience, define \(\langle \alpha |\beta \rangle \) as \(2\frac {\kappa (\alpha ,\beta )}{\kappa (\alpha ,\alpha )}\). This notation is nonstandard, but \(|\) is being used to remind us that it is not symmetric.

  • Theorem 5.2.

    • 1. \(\dim \mg _\alpha = 1\) for \(\alpha \in \Delta \).

    • 2. If \(\alpha , \beta \in \Delta \), then the \(r \in k\) with \(\beta +r\alpha \in \Delta \coprod 0\) form a connected string of integers from \(-p\) to \(q\), where \(p-q = \langle \alpha |\beta \rangle \).

    • 3. If \(\alpha ,\beta ,\alpha +\beta \in \Delta \), then \([\mg _\alpha ,\mg _\beta ] =\mg _{\alpha +\beta } \).

    • 4. If \(\alpha \in \Delta \), then \(r\alpha \in \Delta \) iff \(r = \pm 1\).

  • Proof. \((1)\): If \(e,e'\) are nonzero in \(\mg _\alpha \), rescale them and choose \(f\) in \(\mg _{-\alpha }\) so that \(e,f,[e,f]\) and \(e',f,[e',f]\) are both \(\msl _2\)-triples. But \([e,f] = \frac {2\alpha ^*}{\kappa (\alpha ,\alpha )} = [e',f]\), so the representation theory of \(\msl _2\) tells us \(e = e'\).

    \((2\&3)\): Consider \(M = \oplus _{r \in k} \mg _{\beta +r\alpha }\) as a representation of an \(\msl _2\)-triple associated to \(\alpha \). \(h_{\alpha }\) acts on \(\mg _{\beta +r\alpha }\) by the scalar \(\langle \alpha |\beta \rangle +2r\) by definition of \(h_\alpha \), so \(M\) is irreducible by \((1)\) and Corollary 4.3. The results then follow from the representation theory of \(\msl _2\).

    \((4)\): This follows from the observation in \((2\&3)\) that \(M\) is an irreducible \(\msl _2\)-module, applied with \(\beta = \alpha \), along with the fact that \([\mg _\alpha ,\mg _\alpha ]=0\).
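The string in \((2)\) can be checked by brute force in a concrete root system. A sketch using the \(A_2\) roots \(e_i - e_j\) realized in the plane \(\sum _i x_i = 0\) inside \(\RR ^3\) (the helper names are mine):

```python
import itertools

# Roots of A_2: e_i - e_j for i != j, as vectors in R^3
roots = {tuple(int(a == i) - int(a == j) for a in range(3))
         for i, j in itertools.permutations(range(3), 2)}
zero = (0, 0, 0)

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def add(x, y, scale=1):
    return tuple(a + scale * b for a, b in zip(x, y))

for alpha, beta in itertools.product(roots, repeat=2):
    # p, q: how far the string beta + r*alpha extends in each direction
    p = q = 0
    while add(beta, alpha, -(p + 1)) in roots | {zero}:
        p += 1
    while add(beta, alpha, q + 1) in roots | {zero}:
        q += 1
    # p - q should equal <alpha|beta> = 2(alpha, beta)/(alpha, alpha)
    assert p - q == 2 * dot(alpha, beta) // dot(alpha, alpha)
    # the string is connected: every intermediate point is a root or 0
    for r in range(-p, q + 1):
        assert add(beta, alpha, r) in roots | {zero}
```

For instance, for adjacent simple roots the loop finds \(p = 0, q = 1\), matching \(\langle \alpha |\beta \rangle = -1\).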

Because the root spaces are \(1\)-dimensional and \(\mh \) is abelian, it follows that if \(a,b \in \mh \), then \(\kappa (a,b) = \sum _{\alpha \in \Delta }\alpha (a)\alpha (b)\).
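As a sanity check in \(\msl _2\): \(\ad h\) has eigenvalues \(2, 0, -2\) on \(\mg _\alpha , \mh , \mg _{-\alpha }\), so both sides of the formula agree:

```latex
\kappa(h,h) = \operatorname{tr}(\operatorname{ad} h \circ \operatorname{ad} h)
= 2^2 + 0^2 + (-2)^2 = 8,
\qquad
\sum_{\lambda \in \Delta} \lambda(h)^2 = \alpha(h)^2 + (-\alpha)(h)^2 = 4 + 4 = 8.
```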

\(\kappa \) restricted to the roots is really defined over \(\QQ \) and is positive definite.

  • Theorem 5.3.

    • 1. \(\Delta \) spans \(\mh ^*\).

    • 2. \(\kappa (\alpha ,\beta ) \in \QQ \) for \(\alpha ,\beta \in \Delta \).

    • 3. \(\kappa |_{\mh ^*_\QQ }\) is positive definite, where \(\mh ^*_\QQ \) is the \(\QQ \)-span of \(\Delta \).

  • Proof. \((1)\): If there is some \(h \in \mh \) such that \(\kappa (\alpha ^*,h) = \alpha (h)=0\) for all \(\alpha \in \Delta \), then \(h\) acts trivially on all \(\mg _\alpha \), so must be \(0\).

    \((2\&3)\): Let \(\alpha ,\beta \in \Delta \). Then the formula for \(\kappa \) shows \(\frac {4}{\kappa (\alpha ,\alpha )} = \sum _{\lambda \in \Delta } \langle \alpha | \lambda \rangle ^2 \in \ZZ \), so \(\kappa (\alpha ,\alpha ) \in \QQ \). Since \(\frac {\langle \alpha |\beta \rangle \kappa (\alpha ,\alpha )}{2} = \kappa (\alpha ,\beta )\), \(\kappa (\alpha ,\beta ) \in \QQ \). Finally, for \(v \in \mh ^*_\QQ \), the formula \(\kappa (v,v) = \sum _{\lambda \in \Delta }\kappa (\lambda ,v)^2\) exhibits \(\kappa (v,v)\) as a sum of squares of rational numbers, which vanishes only when \(v = 0\) since \(\Delta \) spans.

We have shown that \(\mh ^*,\Delta \) is a root system, whose definition we now give. I think the right way to do this gives something called a root datum, where you don't identify the vector space with its dual via an inner product.

  • Definition 5.4. A root system is a vector space \(V\) with a conformal structure (i.e. an inner product up to scaling) and a collection of nonzero elements (roots) \(\Delta \) spanning it, such that:

    • • If \(\alpha \in \Delta \), \(n\alpha \in \Delta \) iff \(n = \pm 1\).

    • • If \(\alpha , \beta \in \Delta \), then \(\beta +n\alpha ,n \in \ZZ \) is in \(\Delta \coprod 0\) for \(n\) in a connected string from \(-p\) to \(q\), where \(p-q = \frac {2(\alpha ,\beta )}{(\alpha ,\alpha )}\).

The second condition is called the string condition.

We denote \(\frac {2(\alpha ,\beta )}{(\alpha ,\alpha )}\) as \(\langle \alpha |\beta \rangle \) like before.

We would like to determine when a Lie algebra is simple rather than just semisimple. Let us consider a sum of two semisimple Lie algebras \(\mg ,\mg '\), let \(\mh ,\mh '\) be their Cartan subalgebras and let \(\Delta ,\Delta '\) be the roots. \(\mh \oplus \mh '\) is a Cartan subalgebra of \(\mg \oplus \mg '\) and \(\Delta \coprod \Delta '\) is the corresponding set of roots. Note that the splitting of \(\mg \oplus \mg '\) can be detected entirely through the root system. Namely the duals of \(\mh ,\mh '\) orthogonally decompose the space, and \(\Delta , \Delta '\) respect the decomposition. Conversely, such a decomposition clearly indicates how to decompose the Lie algebra. By the axioms of a root system, this is equivalent to finding a partition \(\Delta \coprod \Delta '\) where \(a+b \notin \Delta \coprod \Delta ' \coprod 0\) if \(a \in \Delta , b \in \Delta '\). When no nontrivial partition exists, the root system is called indecomposable.

  • Lemma 5.5. \(\mg \) is simple iff its root system is indecomposable.

We can easily check indecomposability as follows:

  • Lemma 5.6. A root system \(\Delta \) is indecomposable iff any two roots can be connected by a sequence such that for any neighboring pair in the sequence, the sum is in \(\Delta \coprod 0\).
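Lemma 5.6 turns indecomposability into a connectivity check on a graph whose vertices are the roots. A sketch (the coordinate realizations of \(A_2\) and \(A_1 \times A_1\) are chosen by hand):

```python
import itertools

def is_indecomposable(roots):
    """Check Lemma 5.6: the graph on roots, with an edge between a and b
    when a + b is a root or zero, is connected."""
    roots = [tuple(r) for r in roots]
    root_set = set(roots)
    zero = tuple(0 for _ in roots[0])

    def linked(a, b):
        s = tuple(x + y for x, y in zip(a, b))
        return s in root_set or s == zero

    seen = {roots[0]}
    frontier = [roots[0]]
    while frontier:
        a = frontier.pop()
        for b in roots:
            if b not in seen and linked(a, b):
                seen.add(b)
                frontier.append(b)
    return len(seen) == len(roots)

# A_2 is indecomposable
a2 = [(1, -1, 0), (-1, 1, 0), (0, 1, -1), (0, -1, 1), (1, 0, -1), (-1, 0, 1)]
# A_1 x A_1 decomposes into two orthogonal copies of A_1
a1a1 = [(1, 0), (-1, 0), (0, 1), (0, -1)]

assert is_indecomposable(a2)
assert not is_indecomposable(a1a1)
```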

The existence of a root system is really equivalent to being semisimple. Now we will produce many examples of semisimple Lie algebras and hence root systems. To exhibit a Lie algebra as semisimple, it really suffices to provide a root space decomposition.

  • Theorem 5.7. If \(\mg \) has a decomposition as \(\mh \oplus \bigoplus _{\alpha \in \Delta }\mg _\alpha \), \(\Delta \subset \mh ^*\) such that

    • 1. \(\Delta \) spans \(\mh ^*\), and \(\mh \) is nilpotent.

    • 2. \(\mg _\alpha \) are weight spaces for the action of \(\mh \).

    • 3. \(\alpha ([a,\mg _{-\alpha }])\neq 0\) for nonzero \(a \in \mg _\alpha \).

    Then \(\mg \) is semisimple, and \(\mh ,\Delta ,\mg _\alpha \) denote the usual things.

  • Proof. Consider a nonzero abelian ideal \(\ma \). It cannot be contained in \(\mh \): some \(\alpha \) is nonzero on any nonzero element of \(\mh \) by \((1)\), so \(\mg _\alpha \) would also be in \(\ma \), contradicting \(\ma \subset \mh \). So consider an element \(a \in \ma -\mh \). By acting on it by elements of \(\mh \) many times, since \(\mh \) is nilpotent and the \(\mg _\alpha \) are weight spaces for \(\mh \), we can assume its component in \(\mh \) is \(0\) and its components in the \(\mg _\alpha \) are restricted to \(\alpha \) supported on a line. By further acting by an element of \(\mh \) on which \(\alpha \) doesn't vanish, and taking linear combinations, we can assume \(a\) is supported on \(\mg _\alpha \oplus \mg _{-\alpha }\), and (after possibly swapping \(\alpha \) and \(-\alpha \)) that its component in \(\mg _\alpha \) is nonzero. Then by acting on \(a\) by an element of \(\mg _{-\alpha }\), by \((3)\) we can produce an element \(b \in \ma \) supported in \(\mh \oplus \mg _{-2\alpha }\) on whose \(\mh \)-component \(\alpha \) is nonzero; but then \(b\) doesn't commute with \(a\), contradicting that \(\ma \) is abelian. Thus \(\ma = 0\) and \(\mg \) is semisimple.

Now we can construct the classical simple Lie algebras and their root decompositions. We will not show they are simple since it easily follows from our criteria.

  • Example 5.8 (\(\msl _n\)). The diagonal matrices form an abelian subalgebra, with basis \(e_i = E_{i,i}-E_{i+1,i+1}\) for \(1 \leq i < n\). Anything that commutes with all the \(e_i\) commutes with all diagonal matrices, hence preserves their eigenspaces, so must be diagonal. \(\ad e_i(E_{jk})\) was computed before, and we see that the \(E_{jk},j\neq k\) span the weight spaces, with weights \(e_j^*-e_k^*\). The roots are then \(e_j^*-e_k^*\) for \(j\neq k\) both at most \(n\).
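The weight computation can be checked numerically; a minimal sketch for \(n = 3\) (the helper `E` and the particular choice of \(h\) are mine):

```python
import itertools
import numpy as np

n = 3

def E(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

# For any diagonal h = diag(h_1, ..., h_n), [h, E_jk] = (h_j - h_k) E_jk,
# so the off-diagonal E_jk are weight vectors with weight e_j^* - e_k^*.
h = np.diag([1.0, 2.0, -3.0])  # traceless, so h lies in sl_n
for j, k in itertools.permutations(range(n), 2):
    bracket = h @ E(j, k) - E(j, k) @ h
    assert np.allclose(bracket, (h[j, j] - h[k, k]) * E(j, k))

# The n^2 - n roots plus the (n-1)-dimensional Cartan account for
# dim sl_n = n^2 - 1.
assert (n * n - n) + (n - 1) == n * n - 1
```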

Given a bilinear form \(B\) on a vector space \(V\), the set of endomorphisms \(A\) such that \(B(Aa,b)+B(a,Ab)=0\) is a Lie algebra. When \(B\) is a nondegenerate symmetric bilinear form on a vector space of dimension \(n\), this is called \(\so _n\), and when it is a nondegenerate alternating bilinear form on a vector space of dimension \(2n\), this is called \(\msp _{2n}\).

  • Example 5.9 (\(\so _{n}\)). Here \(n\geq 3\). Choose a basis so that \(B\) is the matrix with \(1\)s on the antidiagonal. Then \(\so _n\) consists of all matrices that are antisymmetric with respect to flipping across the antidiagonal. A Cartan subalgebra consists of all such matrices that are diagonal, with basis \(e_i = E_{ii}-E_{n+1-i,n+1-i}\) for \(1 \leq i \leq \frac n 2\). The root spaces are spanned by \(E_{ij}-E_{n+1-j,n+1-i}\), corresponding to the roots \(e_i^*-e_{j}^*\), where \(e_j^*\) really means \(-e_{n+1-j}^*\) when \(j>\frac n 2\) and \(0\) when \(j=\frac {n+1} 2\). When \(n\) is even, these then give \(\pm (e_i^* + e_j^*), e_i^*-e_j^*\) for \(i\neq j\) at most \(\frac {n}{2}\), and when \(n\) is odd, they give \(\pm (e_i^* + e_j^*),e_i^*-e_j^*, \pm e_i^*\) for \(i \neq j\) at most \(\frac n 2\).

  • Example 5.10 (\(\msp _{2n}\)). Here \(n\geq 1\). We can choose a basis so the bilinear form is \(1\) on the upper half of the antidiagonal and \(-1\) on the lower half. We can split any matrix in \(\msp _{2n}\) into four quadrants like \(\begin {bmatrix} a&b\\c&d \end {bmatrix}\), and if \(a'\) denotes the reflection across the antidiagonal, the conditions are that \(a'=-d,b'=b,c'=c\). Again the diagonal matrices form a Cartan subalgebra, with basis \(e_i = E_{ii}-E_{2n+1-i,2n+1-i}\) for \(1 \leq i \leq n\). Now there are different kinds of root spaces. One kind is \(E_{ij}-E_{2n+1-j,2n+1-i}\) where \((i,j)\) is in the upper left quadrant, which gives \(e_i^*-e_j^*\). Another is \(E_{ij}+E_{2n+1-j,2n+1-i}\) where \((i,j)\) is in the upper left half of either the top right or bottom left quadrant, giving \(\pm (e_i^* + e_j^*)\). The last kind is \(E_{ij}\) where \((i,j)\) lies on the antidiagonal, giving \(\pm 2e_i^*\).
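The membership conditions in both examples can be verified mechanically: \(A\) lies in the algebra iff \(A^TJ + JA = 0\), where \(J\) is the matrix of \(B\). A sketch for \(\so _5\) and \(\msp _4\) (0-indexed matrices throughout; the helper names are mine):

```python
import numpy as np

def E(i, j, N):
    m = np.zeros((N, N))
    m[i, j] = 1.0
    return m

def in_algebra(A, J):
    # A preserves B(x, y) = x^T J y infinitesimally iff
    # B(Ax, y) + B(x, Ay) = 0 for all x, y, i.e. A^T J + J A = 0.
    return np.allclose(A.T @ J + J @ A, 0)

# so_5: J has 1s on the antidiagonal
N = 5
J_so = np.fliplr(np.eye(N))
for i in range(N):
    for j in range(N):
        if i != j and i + j != N - 1:
            # root vector E_{ij} - E_{n+1-j,n+1-i} (1-indexed), shifted by 1
            assert in_algebra(E(i, j, N) - E(N - 1 - j, N - 1 - i, N), J_so)

# sp_4: J is 1 on the upper half of the antidiagonal, -1 on the lower half
n = 2
J_sp = np.zeros((2 * n, 2 * n))
for i in range(n):
    J_sp[i, 2 * n - 1 - i] = 1.0
    J_sp[2 * n - 1 - i, i] = -1.0
# one root vector of each of the three kinds
for A in (E(0, 1, 2 * n) - E(2 * n - 2, 2 * n - 1, 2 * n),  # e_1^* - e_2^*
          E(0, 2, 2 * n) + E(1, 3, 2 * n),                  # e_1^* + e_2^*
          E(0, 3, 2 * n)):                                  # 2 e_1^*
    assert in_algebra(A, J_sp)
```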

There is another naming scheme related to Dynkin diagrams: \(A_n = \msl _{n+1}, B_n = \so _{2n+1}, C_n = \msp _{2n}, D_n = \so _{2n} (n>1)\).

The Killing forms are easy to understand up to scalar multiplication:

  • Lemma 5.11. Any nontrivial invariant bilinear form on a simple Lie algebra is nondegenerate. Moreover any invariant nondegenerate bilinear form on a simple Lie algebra is unique up to scalar multiplication. In particular it must be proportional to \(\kappa \).

  • Proof. An invariant bilinear form is a map of \(\mg \)-modules \(\mg \otimes \mg \to k\). \(\Hom (\mg \otimes \mg ,k) = \Hom (\mg ,\mg ^*)\), which is one-dimensional by Schur's lemma since \(\mg \) is simple. The result follows since \(\kappa \) is an example of a nontrivial form.

Thus the trace form of the standard representation is proportional to \(\kappa \). Now let’s study root systems in general. There is an analog of the previous lemma for root systems.

  • Proposition 5.12. An indecomposable root system has a unique up to scalars inner product making it a root system. Moreover, if \((\alpha ,\alpha ) \in \QQ \) for some \(\alpha \in \Delta \), it is true for all \((\alpha ,\beta )\).

  • Proof. Given \((\alpha ,\alpha )\), the string condition lets us compute \((\alpha ,\beta )\) whenever \(\alpha +\beta \in \Delta \), which then lets us compute \((\beta ,\beta )\). Continuing this way via Lemma 5.6, we can recover the rest of the inner product, showing it is determined by \((\alpha ,\alpha )\) and moreover lies in \(\QQ \) if \((\alpha ,\alpha )\) does.

Thus we can really choose to work over \(\QQ \) or \(\RR \) and it makes no difference. \(\ZZ \Delta \) is a finitely generated subgroup of a rational vector space, so is a lattice called the root lattice.

Let \(E_r\) be the lattice in \(\QQ ^r\) (with orthonormal basis \(e_1,\dots ,e_r\)) consisting of linear combinations \(\sum _i a_ie_i\) where either \(a_i \in \ZZ \) for all \(i\) or \(a_i\in \ZZ +\frac 1 2\) for all \(i\), and \(\sum _i a_i \in 2\ZZ \).

A lattice is integral if \((a,b) \in \ZZ \) for any \(a,b\) in it, and even if moreover \((a,a) \in 2\ZZ \) for every \(a\) in it. The following is straightforward.

  • Lemma 5.13. \(E_r\) is even iff \(8|r\).
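Lemma 5.13 can be tested on generators: in an integral lattice, \((a+b,a+b) = (a,a)+(b,b)+2(a,b)\), so evenness need only be checked on a generating set. A sketch for \(4 \mid r\) (the generating set — \(e_1+e_2\), the consecutive differences, and the all-halves vector — is an assumption I have not justified here):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def gens(r):
    # Generators of E_r (assuming 4 | r, so the all-halves vector s lies
    # in the lattice): e_1 + e_2, differences e_{i+1} - e_i, and s.
    g = [[1, 1] + [0] * (r - 2)]
    for i in range(r - 1):
        v = [0] * r
        v[i], v[i + 1] = -1, 1
        g.append(v)
    g.append([Fraction(1, 2)] * r)
    return [[Fraction(x) for x in v] for v in g]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def is_even(r):
    g = gens(r)
    # integral: all pairwise products in Z; even: all norms in 2Z
    integral = all(dot(a, b).denominator == 1
                   for a, b in combinations_with_replacement(g, 2))
    even_norms = all(dot(a, a) % 2 == 0 for a in g)
    return integral and even_norms

assert [r for r in (4, 8, 12, 16) if is_even(r)] == [8, 16]
```

The obstruction is the all-halves vector, whose norm is \(r/4\): this is even exactly when \(8 \mid r\).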

  • Theorem 5.14. Given an even lattice, let \(\Delta \) be the set of elements \(a\) in the lattice with \((a,a) = 2\). If \(\Delta \) spans the vector space, then \(\Delta \) is a root system.

  • Proof. One really only needs to check the string property. Since \(0 \leq (\alpha -\beta ,\alpha -\beta ) = (\alpha ,\alpha )+(\beta ,\beta )-2(\alpha ,\beta )\), we see that \((\alpha ,\beta ) \leq 2\), with equality iff \(\alpha = \beta \). One can calculate in each of the remaining cases \((\alpha ,\beta ) \in \{-2,-1,0,1\}\) that the string condition holds.

  • Example 5.15 (\(E_8\)). The theorem implies \(E_8\) gives a root system, which can be checked to be indecomposable. It can be described as the root system consisting of \(\pm (e_i + e_j), \pm (e_i - e_j)\) for \(i \neq j\) and things of the form \(\frac {1}{2}(\pm e_1\pm \dots \pm e_8)\) where there are an even number of minus signs.

  • Example 5.16 (\(E_7\)). We can construct another, called \(E_7\), as a sub-root system as follows: consider the vectors in \(E_8\) orthogonal to the constant vector with every entry \(\frac 1 2\). It is still even, so a root system. These are of the form \(e_i-e_j, i \neq j\) and things of the form \(\frac {1}{2}(\pm e_1\pm \dots \pm e_8)\) with exactly four minus signs.

  • Example 5.17 (\(E_6\)). Finally, \(E_6\) can be constructed as the root system consisting of vectors in \(E_7\) that are orthogonal to \(e_7+e_8\). It is still even, so a root system. It contains the \(e_i-e_j\) such that \(\{i,j\}\cap \{7,8\}\) is either \(\{7,8\}\) or \(\emptyset \), as well as things of the form \(\frac {1}{2}(\pm e_1\pm \dots \pm e_8)\) with exactly four minus signs, where \(e_7,e_8\) have opposite signs.
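These three descriptions can be enumerated directly; cutting \(E_8\) down by the two orthogonality conditions recovers the standard root counts \(240\), \(126\), and \(72\):

```python
from fractions import Fraction
from itertools import combinations, product

# Build the E_8 roots, then cut down to E_7 and E_6 by orthogonality.
e8 = set()
for i, j in combinations(range(8), 2):      # +- e_i +- e_j, i < j
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        e8.add(tuple(v))
for signs in product((1, -1), repeat=8):    # half-integer vectors,
    if signs.count(-1) % 2 == 0:            # even number of minus signs
        e8.add(tuple(Fraction(s, 2) for s in signs))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

s = tuple(Fraction(1, 2) for _ in range(8))   # the all-halves vector
e7 = {v for v in e8 if dot(v, s) == 0}
e78 = tuple(int(i >= 6) for i in range(8))    # e_7 + e_8 (0-indexed)
e6 = {v for v in e7 if dot(v, e78) == 0}

assert all(dot(v, v) == 2 for v in e8)        # all roots have norm 2
assert (len(e8), len(e7), len(e6)) == (240, 126, 72)
```

In particular the half-integer roots surviving into \(E_7\) are exactly those with four minus signs, since those are the ones orthogonal to the all-halves vector.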

There are two more exceptional root systems, which have a different origin.

  • Example 5.18 (\(F_4\)). Consider the vector space generated by \(e_1,e_2,e_3,e_4\) and consider the lattice consisting of elements of the form \(\sum _i a_i e_i\) where either \(a_i \in \ZZ \) for all \(i\) or \(a_i \in \ZZ +\frac 1 2\) for all \(i\). The elements of the lattice such that \((x,x) \in \{1,2\}\) form an indecomposable root system. More explicitly, the roots are \(\pm e_i\), \(\pm (e_i + e_j)\), \(e_i-e_j\) for \(i \neq j\), and \(\frac 1 2 (\pm e_1\pm e_2\pm e_3\pm e_4)\).

  • Example 5.19 (\(G_2\)). Here the lattice is the same as that for \(A_2\), namely all integral \(a_1e_1+a_2e_2+a_3e_3\) such that \(\sum _i a_i = 0\), except we take elements with \((x,x) \in \{2,6\}\). This is an indecomposable root system whose elements are \(e_i - e_j\) for \(i \neq j\) and \(\pm (2e_i-e_j-e_k)\) for \(\{i,j,k\} = \{1,2,3\}\).
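Both descriptions can be enumerated by brute force, recovering the counts of \(48\) and \(12\) roots:

```python
from fractions import Fraction
from itertools import product

# F_4: vectors of norm 1 or 2 in the lattice {all a_i in Z, or all in Z + 1/2}
f4 = set()
for v in product(range(-2, 3), repeat=4):        # integer vectors
    if sum(x * x for x in v) in (1, 2):
        f4.add(v)
for signs in product((1, -1), repeat=4):         # half-integer vectors,
    f4.add(tuple(Fraction(s, 2) for s in signs))  # each of norm 4*(1/4) = 1
assert len(f4) == 48                              # 8 + 24 + 16

# G_2: vectors of norm 2 or 6 in the A_2 lattice {sum a_i = 0}
g2 = {v for v in product(range(-3, 4), repeat=3)
      if sum(v) == 0 and sum(x * x for x in v) in (2, 6)}
assert len(g2) == 12                              # 6 short + 6 long roots
```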