Vector Bundles on P1

2 Algebraic Vector Bundles

Algebraically the classification is a bit more subtle. The goal is to prove:
  • Theorem 2.1. A vector bundle on \(\PP ^1_k\) is isomorphic to \(\oplus _i\cO (n_i)\). Moreover the collection \(n_i\) are uniquely determined (with multiplicity).

The fact that these are distinct is easy: each such vector bundle \(\sF \) is determined by the dimensions of \(H^0(\PP ^1,\sF \otimes \cO (n))\) as \(n\) varies.
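To spell out the dimension count behind this (a quick check, using only the standard formula \(h^0(\PP ^1,\cO (m))=\max (m+1,0)\)): if \(\sF =\oplus _i\cO (n_i)\), then

\[ h^0(\PP ^1,\sF \otimes \cO (n))=\sum _i\max (n_i+n+1,0),\]

and the first difference \(h^0(\sF (n))-h^0(\sF (n-1))=\#\{i : n_i\geq -n\}\) recovers the \(n_i\) with multiplicity as \(n\) varies.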

Moreover, we will also show

  • Theorem 2.2. \(K_0(\PP ^1)=\ZZ [\gamma ]/(\gamma -1)^2\), where \(\gamma \) is the class of \(\cO (1)\).

This is the same result as in the topological setting.

There are 100 variations on the proof of these theorems, but we will start with an extremely elementary approach. Given any locally free sheaf on \(\PP ^1\), it must be trivial on the two open affines \(\AA ^1_0\) and \(\AA ^1_1\). Thus it is determined by a transition matrix lying in \(\GL _n(k[x,x^{-1}])\). In rank \(1\), when this transition matrix is \(x^n\), we get the line bundle \(\cO (n)\). However the transition matrix determines the vector bundle only up to changing the trivializations on the affine charts. Algebraically this shows that what we are trying to study is the set of double cosets:

\[ \GL _n(k[x])\backslash \GL _n(k[x,x^{-1}])/\GL _n(k[x^{-1}])\]

This is a really concrete linear algebra problem: we want to study invertible matrices with coefficients in \(k[x,x^{-1}]\), where we are allowed to do row operations with \(k[x]\) coefficients and column operations with \(k[x^{-1}]\) coefficients.

Given such a matrix, we can without loss of generality multiply all the entries by a large power \(x^N\) so that all the entries really lie in \(k[x]\) (this is just for notational/psychological convenience). This amounts to tensoring our vector bundle with \(\cO (N)\).

Now via the Euclidean algorithm (or really because \(k[x]\) is a PID), we can do row operations so that our matrix becomes upper triangular. Moreover, since the determinant must be of the form \(cx^n\) with \(c\in k^\times \), and all the entries are polynomials, each diagonal entry must look like \(c'x^l\). We can scale the rows to eliminate the coefficients \(c'\). Thus we have reduced to considering a matrix that looks like

\[\begin {bmatrix} x^{f_1} & A_{12} & A_{13} &\dots & A_{1l}\\ & x^{f_2} & A_{23} & \dots & A_{2l}\\ & & x^{f_3} & \dots & A_{3l}\\ & & & \ddots & \vdots \\ & & & & x^{f_l} \end {bmatrix}\]
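As a sanity check, the reduction can be carried out mechanically. Here is a minimal sketch in Python with sympy over \(k=\QQ \); the matrix \(M\) is a made-up \(2\times 2\) example, not one from the text:

```python
import sympy as sp

x = sp.symbols('x')

# A transition matrix in GL_2(k[x, x^{-1}]): its determinant is x^2,
# a unit in k[x, x^{-1}].  (An illustrative example, chosen by hand.)
M = sp.Matrix([[x, 1], [1, 1/x + x]])
assert sp.simplify(M.det() - x**2) == 0

# Step 1: multiply everything by x so all entries lie in k[x]
# (tensoring the bundle by a twist).
M = (x * M).applyfunc(sp.expand)            # [[x^2, x], [x, 1 + x^2]]

# Step 2: Euclidean algorithm on the first column via k[x]-row operations.
M.row_swap(0, 1)                            # put the pivot gcd(x^2, x) = x on top
M[1, :] = (M[1, :] - x * M[0, :]).applyfunc(sp.expand)  # clear the entry below
M[1, :] = -M[1, :]                          # rescale so the diagonal entry is x^3

# Step 3: a column operation with k[x^{-1}] coefficients kills the monomials
# of the off-diagonal entry of degree <= f_1 = 1.
M[:, 1] = (M[:, 1] - M[:, 0] / x).applyfunc(sp.expand)

assert M == sp.Matrix([[x, x**2], [0, x**3]])
```

The result is upper triangular with \(x\)-power diagonal, as in the display above, with \(f_1=1\), \(f_2=3\) and \(A_{12}=x^2\).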

Note that this already proves something interesting, namely that there is a filtration of our vector bundle whose associated graded pieces are \(\cO (f_i)\). These extensions can be nontrivial, but we will show that by changing the filtration, we can make it split. From now on assume \(j>i\). By further row operations we can assume that \(A_{ij}\) has no monomials of degree \(f_j\) or larger. Furthermore, by doing column operations, we can force \(A_{ij}\) to vanish modulo \(x^{f_{i}+1}\); in other words, every monomial of \(A_{ij}\) has degree strictly between \(f_i\) and \(f_j\). If we have for example \(f_{j}\leq f_i\), then these two conditions force \(A_{ij}\) to vanish, showing the splitting.

Thus it will suffice to show that if \(f_{i+1}>f_i\) we can do operations to achieve \(f_{i+1}\leq f_i\) (from which it will follow that \(f_j\leq f_i\) for all \(j>i\)). We can arrange this by essentially focusing only on the \(2\times 2\) submatrix:

\[\begin {bmatrix} x^{f_i}& A_{i,i+1}\\ & x^{f_{i+1}} \end {bmatrix}\]

If \(A_{i,i+1}=0\) the block is already split and we can simply permute the two diagonal entries. Otherwise the lowest monomial in \(A_{i,i+1}\) has higher degree than \(x^{f_i}\), so we can switch the two columns around and rerun our algorithm to return the matrix to upper triangular form. The first diagonal entry becomes the gcd of \(A_{i,i+1}\) and \(x^{f_{i+1}}\), which is a higher power of \(x\) than \(x^{f_i}\).
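Here is a concrete instance of this column-swap trick, again a sketch in sympy over \(k=\QQ \); the block \(N\) below (with \(f_i=1\), \(f_{i+1}=3\), \(A_{i,i+1}=x^2\)) is a made-up example:

```python
import sympy as sp

x = sp.symbols('x')

# A 2x2 block as above: f_i = 1, f_{i+1} = 3, and A = x^2, whose monomials
# all have degree strictly between f_i and f_{i+1}.
N = sp.Matrix([[x, x**2], [0, x**3]])

N.col_swap(0, 1)                      # a permutation is a column op over k[x^{-1}]
# N is now [[x^2, x], [x^3, 0]]; re-triangularize with k[x]-row operations.
N[1, :] = (N[1, :] - x * N[0, :]).applyfunc(sp.expand)   # [[x^2, x], [0, -x^2]]
N[1, :] = -N[1, :]
# The new pivot is gcd(x^2, x^3) = x^2, a higher power of x than x^{f_i} = x.
# A final column operation over k[x^{-1}] clears the off-diagonal entry.
N[:, 1] = (N[:, 1] - N[:, 0] / x).applyfunc(sp.expand)

assert N == sp.Matrix([[x**2, 0], [0, x**2]])
```

In the convention where the transition matrix \(x^n\) gives \(\cO (n)\), this says the extension with pieces \(\cO (1),\cO (3)\) defined by the starting matrix is isomorphic to \(\cO (2)\oplus \cO (2)\); note \(1+3=2+2\).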

Observe that this last part of the computation actually shows more, namely that if \(n+m=n'+m'\), then \(\cO (n)\oplus \cO (m)\) has a filtration whose associated graded pieces are \(\cO (n')\) and \(\cO (m')\). This quickly leads to the same \(K\)-theory computation as in the topological setting. The only other input necessary is the notion of degree, which shows that the class of \(\cO (1)\) is linearly independent from that of \(\cO _{\PP ^1}\).
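In formulas: set \(\gamma =[\cO (1)]\), so that \(\gamma ^2=[\cO (2)]\) since multiplication in \(K_0\) is the tensor product. The filtration of \(\cO (1)\oplus \cO (1)\) with associated graded pieces \(\cO (2)\) and \(\cO _{\PP ^1}\) gives

\[ (\gamma -1)^2=\gamma ^2-2\gamma +1=[\cO (2)]-2[\cO (1)]+[\cO _{\PP ^1}]=0\]

in \(K_0(\PP ^1)\), while rank and degree together show that \(1\) and \(\gamma \) satisfy no further relations.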

Here is an alternate, more beautiful proof that directly produces the filtration we want, with the property that \(f_{i+1}>f_i\). The difference is that it uses cohomology instead of plain linear algebra. Given a locally free sheaf \(\sF \), we know that \(\sF (n)\) has global sections for \(n\gg 0\), and one can show that it has no global sections for \(n\ll 0\) (either use Serre duality, or resolve \(\sF ^{\vee }(n)\) by sums of line bundles and dualize to embed \(\sF (-n)\) in something with no global sections). Thus there is a critical value of \(n\) at which \(\sF (n)\) first obtains global sections. A nonzero global section gives a map \(\cO _{\PP ^1} \to \sF (n)\). The cokernel of the map from all global sections can have no torsion: otherwise, by examining the stalk near a torsion point, some global section would extend to a map from \(\cO (1)\), contradicting criticality of \(n\). This uses the fact that any effective divisor corresponds to \(\cO (m)\) for some \(m>0\), which really requires genus \(0\).

Thus the quotient is locally free, and by induction on rank it is a sum of line bundles \(\cO (n_i)\). This gives the desired filtration, and it suffices to show that \(n_i<0\). But this follows by examining the long exact sequence on cohomology: every global section of \(\sF (n)\) already comes from the subsheaf, and \(H^1(\cO _{\PP ^1})=0\), so \(H^0\) of the quotient vanishes. Once again we are just using genus \(0\).

There is also a more hands-off way to see that the filtration splits as a direct sum. We simply note that for \(n\geq 0\), \(\Ext ^1(\cO (-n),\cO _{\PP ^1}) = \Ext ^1(\cO _{\PP ^1},\cO (n))=H^1(\cO (n))\) vanishes.
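This vanishing can be checked by hand with the Čech complex for the two standard charts (in the convention above, where the transition function of \(\cO (n)\) is \(x^n\)):

\[ H^1(\PP ^1,\cO (n))\cong k[x,x^{-1}]\,/\,\big (k[x]+x^{n}k[x^{-1}]\big ),\]

and for \(n\geq -1\) every monomial \(x^m\) lies in \(k[x]\) (if \(m\geq 0\)) or in \(x^{n}k[x^{-1}]\) (if \(m\leq n\)), so the quotient is \(0\).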

Vector bundles on \(\PP ^n\) in general don’t split as sums of line bundles, but there is a cohomological criterion (due to Horrocks) for when they do. Namely, \(\sF \) splits as a sum of line bundles iff \(H^i(\sF (m))\) vanishes for all \(m\) and all \(0<i<n\).