3.6 Abbreviated writing of sums and products

Very often we come across the need to sum a finite sequence of numbers $a_1, a_2, \ldots, a_n$, or to discuss the properties of such a sum. Instead of the lengthy and potentially ambiguous expression

\begin{equation}\label{eqsouceta}\tag{3.14} a_1 + a_2 + \cdots + a_n \end{equation}

we write

\begin{equation*} \sum_{k=1}^n a_k. \end{equation*}

The summation sign, the symbol $\sum$, is an enlarged capital Greek letter sigma, the Greek counterpart of „S“ (sum in English, summa in Latin). The „local variable“ $k$ is called the index of summation, $1$ is the lower bound of summation and $n$ is the upper bound of summation. It does not matter what we call the summation index. For instance, the summations

\begin{equation*} \sum_{k=1}^n a_k \quad \text{and} \quad \sum_{j=1}^n a_j \end{equation*}

are equal because they represent the same sum (3.14), which of course does not depend on any summation index (it does not contain $k$ or $j$). Note, however, that the summation index under the summation sign as well as in the summands is always the same. On the other hand,

\begin{equation*} \sum_{k=1}^n a_j = \underbrace{a_j + a_j + \cdots + a_j}_{n\times} = n \cdot a_j, \end{equation*}

which is something totally different.

Because addition is associative and commutative (see equation (3.8)), we have that

\begin{equation}\label{eqsum1}\tag{3.15} \sum_{k=1}^n (a_k + b_k) = \sum_{k=1}^n a_k + \sum_{k=1}^n b_k. \end{equation}

Indeed, we can use the associative and commutative laws for addition and arrange the summands in a suitable way. Similarly, because of the distributive law (see equation (3.9)) it is true that

\begin{equation}\label{eqsum2}\tag{3.16} \sum_{k=1}^n (c \cdot a_k) = c \cdot \sum_{k=1}^n a_k, \end{equation}

where $c\in\mathbb{R}$ is a constant, i.e. a number independent of $k$. This equation is a generalization of the familiar operation of „factoring out in front of the brackets“. It is essential for both equations (3.15) and (3.16) that the lower bounds and the upper bounds are the same.

Let's demonstrate this concept with a concrete example. We want to talk about the sum of all natural numbers from $3$ to $10$. The shorthand is as follows:

\begin{equation}\label{eqsum}\tag{3.17} S = {\color{red}3} +4+5+6+7+8+9+ {\color{blue}10} = \sum_{k= {\color{red}3}}^{{\color{blue}10}} k. \end{equation}

Compare this expression to the use of a for loop for finding this sum in C++.

#include <iostream>
using namespace std;

int main() {
  int sum = 0;
  for (int k = 3; k <= 10; k++) sum += k;
  cout << "The sum is: " << sum << endl;
  return 0;
}

In order to calculate using the summation notation, it is helpful to know how to manipulate summation indices. For instance, the sum $S$ in (3.17) can also be written as (we sum in the reverse order)

\begin{equation*} S = \sum_{k=1}^8 (11-k) = 10 + 9 + 8 + 7 + 6 + 5 + 4 + 3, \end{equation*}

or (the index of summation starts from $1$ and we sum in the same order as originally)

\begin{equation*} S = \sum_{k=1}^8 (2+k). \end{equation*}

The point of this paragraph is that one sum can be expressed in a number of equivalent ways.

Sometimes the change of the bounds of summation has no influence on the result, such as here

\begin{equation*} \sum_{k={\color{red}1}}^n k^2 = \sum_{k={\color{red}0}}^n k^2. \end{equation*}

We just added one summand for $k=0$, and $0^2 = 0$.

Example 3.4 (Gauss trick)

The story goes that in elementary school, children were given the task to sum up all numbers from $1$ to $100$. To the surprise of the teacher, the young Gauss came up with the answer really quickly. He did not add the numbers one by one, but noticed that if he adds the first number (i.e. $1$) and the last number (i.e. $100$), he gets $101$. If he adds the second number (i.e. $2$) and the one before last (i.e. $99$), he gets $101$ again. Continuing this way, he could go all the way to $50 + 51 = 101$, obtaining fifty pairs, each summing to $101$. This method is pictured in Figure 3.7. So, the result is

\begin{equation*} 50 \cdot 101 = 5050. \end{equation*}

The general formula for the sum of all numbers from $1$ to some $n$ is

\begin{equation}\label{eqgauss}\tag{3.18} \sum_{k=1}^n k = \frac{n(n+1)}{2} = n \cdot \frac{n+1}{2}. \end{equation}

Figure 3.7: Gauss' trick for summing the first one hundred natural numbers.

Using the summation notation we can express Gauss' thoughts like this

\begin{equation*} \sum_{k=1}^{100}k = \sum_{k=1}^{50} \big( k + (101 - k) \big) = \sum_{k=1}^{50} 101 = 50 \cdot 101 = 5050. \end{equation*}

Note that we can use the same trick for adding numbers from number $1$ to an arbitrary number $n$:

\begin{equation*} \begin{aligned} 2 \sum_{k=1}^n k &= \sum_{k=1}^n k + \sum_{k=1}^n (n+1-k) = \sum_{k=1}^n \big( k + (n + 1 - k) \big) = \\ &= \sum_{k=1}^n (n+1) = n(n+1) \end{aligned} \end{equation*}

Thus we get the famous formula (3.18).


To appreciate this result, one should consider the difference between the given task (to add up numbers from $1$ to $100$) and the formula. On the left-hand side of the equality

\begin{equation*} \sum_{k=1}^{100} k = \frac{100\cdot(100+1)}{2} \end{equation*}

we have to carry out in total $99$ operations of addition, compared with one addition, one multiplication and one division on the right-hand side. That is why Gauss obtained the result so quickly. Note that if we increase $n$, the number of operations on the left-hand side will also increase; however, the number of operations required to evaluate Gauss' formula will stay the same. Implementing this particular sum using simple summation would therefore be considerably inefficient. Using Landau notation, this observation can be expressed by stating that the computational complexity of the sum itself is $\mathcal{O}(n)$ while that of Gauss' formula is $\mathcal{O}(1)$. You will learn about Landau notation in the BIE-ZMA and especially BIE-ZDM lectures.

Another sum which can be expressed explicitly without the summation sign is shown in the next example.

Example 3.5 (Sum of the first few terms of a geometric sequence)

For any real $q$ different from $1$ and a natural $n$ we have that

\begin{equation}\label{eq-geom}\tag{3.19} \sum_{k=1}^n q^{k-1} = \frac{1-q^n}{1-q}. \end{equation}

We denote the expression in question by

\begin{equation*} S_n := \sum_{k=1}^n q^{k-1}, \quad n\in\mathbb{N}. \end{equation*}

Note what happens to this expression when we multiply it by the ratio $q$. From the definition of $S_n$ we have that

\begin{equation*} \begin{aligned} S_{n+1} &= 1 + q + q^2 + q^3 + \cdots + q^{n-1} + q^n = 1 + q\big( 1 + q + \cdots + q^{n-2} + q^{n-1} \big) = \\ &= 1 + q S_n, \\ S_{n+1} &= S_n + q^n, \end{aligned} \end{equation*}

which is valid for any positive natural $n$. By comparing the two formulas for $S_{n+1}$ we get the equality

\begin{equation*} 1 + q S_n = S_n + q^n, \quad n\in\mathbb{N}, \end{equation*}

which can be rearranged to

\begin{equation*} S_n (1-q) = 1 - q^n, \quad n\in\mathbb{N}. \end{equation*}

Assuming that $q\neq 1$, the formula (3.19) immediately follows. Alternatively, we could also refer to Remark 2.2.


Question 3.7

Can we remove the assumption that $q\neq 1$ in the example above? Is it then necessary to change the formula (3.19)?

Show answer

Yes, the assumption can be removed, but the formula must change: if $q=1$, then every summand equals $1$ and $\displaystyle\sum_{k=1}^n q^{k-1} = n$.

Question 3.8

To practise basic operations with sums calculate the sums below

\begin{equation*} \sum_{k=1}^5 1, \quad \sum_{k=1}^6 k - \sum_{k=1}^6 (k+1). \end{equation*}

Show answer

$5$, $-6$.

Question 3.9

Which of the below expressions can be uniquely interpreted (i.e. evaluated) without further specifications?

\begin{equation*} \begin{aligned} &\text{a)} \ \sum_{k}^4 k+1, & &\text{b)} \ j\sum_{j=1}^{30} 30k, \\ &\text{c)} \ \sum_{j} 2j, & &\text{d)} \ \sum_{j=1}^{2j} \sin j. \end{aligned} \end{equation*}

Show answer

None, all expressions are ambiguous.

3.6.1 Abbreviated writing of products

Analogously to summation, there is an abbreviated notation for products. We use the enlarged Greek capital letter $\prod$ (pi, standing for „product“). For instance, the product of the first ten natural numbers can be written as

\begin{equation*} {\color{red}1} \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7 \cdot 8 \cdot 9 \cdot {\color{blue}10} = \prod_{k= {\color{red}1}}^{{\color{blue}10}} k. \end{equation*}

We work with products in a similar way as with sums. The only difference is that we use multiplication instead of addition; the underlying concept is the same. For example, we have that

\begin{equation*} \begin{aligned} \prod_{k=1}^n (a_k \cdot b_k) &= \Big( \prod_{k=1}^n a_k \Big) \cdot \Big( \prod_{k=1}^n b_k \Big), \\ \prod_{k=1}^n (c \cdot a_k) &= c^n \prod_{k=1}^n a_k. \end{aligned} \end{equation*}