For some, finding the value of an exponentiated number is just a matter of
seeing a familiar exponent in the upper right corner of $x$ and then
pattern-matching it to a memorized number. Sometimes evaluating an
exponentiated number is a matter of applying the well-known exponent
“laws”, refactoring the exponent into a simpler form and then
doing the evaluation.
On the other hand, a computer without a specific instruction for
computing exponentiated numbers may need to resort to some sort of
multiplication instruction when computing $a_n x^n$ for some $n$. For large
exponents this can result in very many consecutive multiplication
operations.
As an example, expanding the exponents in each term of $P$ gives
$$
\begin{array}{l c l}
a_0 & = & \underbrace{a_0 \cdot x^0}_{1 \ \text{multip}} \ , \\[0.5em]
a_5 x^5 & = & \underbrace{a_5(x \cdot x \cdot x \cdot x \cdot x)}_{5 \ \text{multips}} \ , \\[0.5em]
a_{10} x^{10}
& = & \underbrace{a_{10}(x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x)}_{10 \ \text{multips}} \ , \\[0.5em]
\end{array}
$$
and
$$
\begin{array}{l c l}
a_{15} x^{15}
& = & \underbrace{a_{15}(x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x)}_{15 \ \text{multips}} \ ,
\end{array}
$$
so the total number of multiplications is $1 + 5 + 10 + 15 = 31$.
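This count can be checked with a short sketch. The per-term cost below follows the counting convention of the expansion above: a term of degree $k \geq 1$ costs $k$ multiplications, and the constant term $a_0 \cdot x^0$ costs one.

```python
def term_multiplications(k):
    """Multiplications needed to expand a term a_k * x^k:
    k - 1 products to build x^k plus one for the coefficient,
    and a single product a_0 * x^0 for the constant term."""
    return k if k >= 1 else 1

# Degrees of the nonzero terms of the example polynomial P.
degrees = [0, 5, 10, 15]
print(sum(term_multiplications(k) for k in degrees))  # 1 + 5 + 10 + 15 = 31
```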
Note that for polynomials like $T(x) = a_3x^3 + a_1x + a_0$ the property
$(2)$ also holds, because the multiplications are still there; some of them
are just multiplications by $0$. That is,
$$
T(x) = a_3x^3 + 0 \cdot x^2 + a_1x + a_0 \ .
$$
The number of additions performed in $(1)$ is $(n - 1)$ for $n$ terms. For
example, in the case of $T$ there are $4$ terms in its definition, and so
this polynomial has $3$ addition operations, when the terms with zero
coefficients are taken into account.
Of course, for a polynomial like $a_{56}x^{56} + 1$, one shouldn’t multiply
out the lower terms like $(0 \cdot x^{55})$, $(0 \cdot x^{54})$, and so on,
because this results in a lot of redundant multiplications whose result is
always known to be $0$.
Nevertheless, if a generic computing routine for evaluating polynomials is
needed, then there’s a reason to seek out how many multiplication and
addition operations are needed to evaluate a polynomial.
Summing the multiplications and the additions then gives the total number
of operations needed to evaluate a degree-$n$ polynomial term by term. That
is,
$$
\underbrace{\frac{n(n + 1)}{2} + 1}_{\text{multiplications}} + \underbrace{n}_{\text{additions}} = \frac{n^2 + 3n + 2}{2} \tag{3}
$$
operations in total.
Then to the original question: how can the number of multiplications be
reduced? In the book Numerical Analysis by Sauer (2017), a method called
Horner’s method, which also goes by the name nested multiplication, is
described. The idea of this method is to express a polynomial not as the
“left-to-right” sequence of multiplications utilised above, but
as a sequence of nested multiplication and addition operations.
As is demonstrated in the book, the general form of a polynomial can be
rewritten in the nested form
$$
P(x) = a_0 + x \Big( a_1 + x \big( a_2 + \cdots + x ( a_{n-1} + x \, a_n ) \cdots \big) \Big) \ ,
$$
in which the multiplication by $x$ is performed only once per term,
i.e. once per $n$ terms. The nested multiplication method reduces just the
number of multiplications, so the number of additions still remains the
same, that is, $(n-1)$ addition operations per $n$ terms. The total number
of multiplications and additions using the nested multiplication method
means a total of
$$
n + (n - 1) = 2n - 1 \tag{4}
$$
operations for $n$ terms. So for the polynomial $T$ above, the total
number of multiplications and additions is $(2 \cdot 3) - 1 = 5$, and for
$P$, $(2 \cdot 15) - 1 = 29$ operations, respectively.
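A minimal sketch of the nested multiplication scheme in Python could look like the following. The coefficient values and the evaluation point are illustrative assumptions, not taken from the book.

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n at x by nested
    multiplication. `coeffs` lists the coefficients from a_0 to a_n;
    each loop step does one multiplication and one addition."""
    result = coeffs[-1]               # start from the leading coefficient a_n
    for a in reversed(coeffs[:-1]):
        result = result * x + a       # nest: (...) * x + a_k
    return result

# T(x) = a_3*x^3 + a_1*x + a_0 with example values a_3 = 2, a_1 = -1,
# a_0 = 5; note the explicit 0 for the missing x^2 term.
print(horner([5, -1, 0, 2], 3))  # 2*27 + (-1)*3 + 5 = 56
```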
Comparing the number of operations in the “$\text{more common}$”
and the “$\text{nested}$” methods for polynomials of degree
$1 \leq n \leq 50$, as is shown in the next figure, it can be seen that the
former is computationally more intensive as the degree of the polynomial
increases. This is because the “$\text{more common}$” method is
of $\mathcal{O}(n^2)$, that is, of quadratic complexity, due to the $n^2$
term in $(3)$, whereas the “$\text{nested}$” method is of
$\mathcal{O}(n)$, that is, of linear complexity.
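The comparison can also be reproduced numerically. The formulas below follow the operation counts used above, assuming the term-by-term counting convention for the “more common” method and $2n - 1$ operations for the nested method.

```python
def naive_ops(n):
    # Term-by-term evaluation: n*(n + 1)/2 + 1 multiplications
    # plus n additions for a degree-n polynomial.
    return n * (n + 1) // 2 + 1 + n

def nested_ops(n):
    # Nested multiplication: 2n - 1 operations in total.
    return 2 * n - 1

for n in (1, 10, 25, 50):
    print(n, naive_ops(n), nested_ops(n))  # e.g. n = 50: 1326 vs. 99
```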
Because of this, when considering a computational routine for evaluating
polynomials, it can be beneficial to make use of the nested multiplication
scheme, since it allows more evaluations to be done in a unit of time.
References
Sauer, T. (2017). Numerical analysis (3rd ed.). Pearson.