Shape Operator

Matrix squaring can also rapidly sum the geometric series

22 Mar 2026

The geometric series,

S_{n}(x) = \sum_{k=0}^{n} x^k,

satisfies a linear recurrence

S_{n+1}(x) = x \, S_n(x) + 1

with constant (in n) coefficients. This means it can also be written as a matrix power,

S_{n}(x) = \begin{pmatrix} 1 & 0 \end{pmatrix} M^{n} \begin{pmatrix} 1 \\ 1 \end{pmatrix},

with

M = \begin{pmatrix} x & 1 \\ 0 & 1 \end{pmatrix}.
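As a quick sanity check of the matrix-power formula, here is a minimal sketch in Python (the function names are mine, not from the post) that builds M^n by naive multiplication and reads off S_n(x) from the top row:

```python
# Check S_n(x) = (1 0) M^n (1 1)^T for M = [[x, 1], [0, 1]].

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

def geometric_sum_via_matrix(x, n):
    """Compute S_n(x) = 1 + x + ... + x^n as the top row of M^n times (1, 1)^T."""
    M = [[x, 1], [0, 1]]
    P = [[1, 0], [0, 1]]   # identity
    for _ in range(n):
        P = mat_mul(P, M)
    return P[0][0] + P[0][1]

print(geometric_sum_via_matrix(2, 3))  # 1 + 2 + 4 + 8 = 15
```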

Repeatedly squaring M can rapidly generate high matrix powers and thus sum large finite geometric series.

The first few iterations of repeated squaring give

\begin{aligned} M^2 &= \begin{pmatrix} x^2 & 1+x \\ 0 & 1 \end{pmatrix}, \\ M^4 &= \begin{pmatrix} x^4 & 1+x+x^2+x^3 \\ 0 & 1 \end{pmatrix}, \\ M^8 &= \begin{pmatrix} x^8 & 1+x+x^2+\ldots+x^7 \\ 0 & 1 \end{pmatrix}, \end{aligned}

and in general,

\begin{aligned} M^{2^n} &= \begin{pmatrix} x^{2^n} & S_{2^{n}-1}(x) \\ 0 & 1 \end{pmatrix}. \end{aligned}
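The repeated-squaring pattern above can be sketched directly: square M in place n times and read off S_{2^n-1}(x) from the upper-right entry. This is an illustrative sketch with names of my choosing, not code from the post.

```python
# Sum S_{2^n - 1}(x) by squaring M = [[x, 1], [0, 1]] n times.

def mat_sq(A):
    """Square a 2x2 matrix."""
    return [
        [A[0][0]*A[0][0] + A[0][1]*A[1][0], A[0][0]*A[0][1] + A[0][1]*A[1][1]],
        [A[1][0]*A[0][0] + A[1][1]*A[1][0], A[1][0]*A[0][1] + A[1][1]*A[1][1]],
    ]

def geometric_sum_by_squaring(x, n):
    """Return S_{2^n - 1}(x), the upper-right entry of M^(2^n)."""
    M = [[x, 1], [0, 1]]
    for _ in range(n):
        M = mat_sq(M)
    return M[0][1]

print(geometric_sum_by_squaring(2, 3))  # S_7(2) = 255
```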

Let a_n and b_n denote the upper-left and upper-right entries of M^{2^n}. Squaring a matrix of this form gives

\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}^2 = \begin{pmatrix} a^2 & b(1+a) \\ 0 & 1 \end{pmatrix},

so these entries satisfy the recurrence

\begin{aligned} a_{n+1} &= a_n^2, \\ b_{n+1} &= b_n (1+a_n), \\ a_0 &= x, \\ b_0 &= 1. \end{aligned}

Since b_n = S_{2^{n}-1}(x), this recurrence allows computing S_{2^{n}-1}(x) with 2n multiplications and n additions: exactly the same operation count as the Newton iteration discussed in yesterday’s post.
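Since only the top row of the matrix ever changes, the recurrence needs just two scalars. A minimal sketch (my own naming), in which each loop iteration costs exactly two multiplications and one addition:

```python
def geometric_sum_scalar(x, n):
    """Compute S_{2^n - 1}(x) via the (a, b) recurrence:
    a_{k+1} = a_k^2, b_{k+1} = b_k (1 + a_k).
    Total cost: 2n multiplications and n additions."""
    a, b = x, 1
    for _ in range(n):
        # Tuple assignment uses the old values of a and b on the right.
        a, b = a * a, b * (1 + a)
    return b

print(geometric_sum_scalar(2, 3))  # S_7(2) = 255
```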

The matrix squaring iteration is connected to the Newton iteration by the identity

a_{n} = 1 - (1-x)b_n = x^{2^n}.

This allows writing a recurrence for b_n alone,

b_{n+1} = b_n(2 - (1-x)b_n),

which is exactly the Newton iteration from yesterday (where it was written using the variable y_n).
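The b-only recurrence can be sketched the same way; this is my own minimal rendering of the iteration, not code from either post. After n steps, b equals S_{2^n - 1}(x) exactly, and for -1 < x < 1 it converges to 1/(1-x):

```python
def geometric_sum_newton(x, n):
    """Iterate b_{k+1} = b_k (2 - (1 - x) b_k), starting from b_0 = 1.
    After n steps, b = S_{2^n - 1}(x); for |x| < 1 it approaches 1/(1-x)."""
    b = 1
    for _ in range(n):
        b = b * (2 - (1 - x) * b)
    return b

print(geometric_sum_newton(2, 3))  # 255, matching the matrix squaring result
```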

The two iterations—matrix squaring and the Newton iteration—compute identical results (in exact arithmetic) with identical operation counts, but they are not quite identical computationally. They operate on different state spaces: the Newton iteration operates on a single number, and the matrix squaring iteration operates on a pair of numbers.

They also arrange their multiplications and additions differently, so that in floating point arithmetic, they have different numerical stability. I claim that the Newton iteration is more numerically stable in the region where the series converges, -1\lt x\lt 1, but I won’t go into further detail today.