We have already seen the vector space of linear maps $\operatorname{Hom}(V, W)$ between two $K$-vector spaces $V$ and $W$. We will now consider the case where the target vector space $W$ is the field $K$ itself.
Consider the following example: We want to buy apples and pears. Say an apple costs 1 dollar and a pear 2 dollars. If $x$ is the number of apples and $y$ is the number of pears, how much do we have to pay in total? The formula for the total price is $x + 2y$. We can express this equation as an $\mathbb{R}$-linear map
$$f\colon \mathbb{R}^2 \to \mathbb{R},\quad (x, y) \mapsto x + 2y.$$
Let's assume that the prices increase by half. To get the formula that gives the new total price, we need to multiply the old formula by $\tfrac{3}{2}$. The formula that gives this price would then be $\tfrac{3}{2}(x + 2y)$. The corresponding linear map is
$$g\colon \mathbb{R}^2 \to \mathbb{R},\quad (x, y) \mapsto \tfrac{3}{2}(x + 2y).$$
We thus recognize that $g = \tfrac{3}{2}\cdot f$.
Suppose now that the price of apples increases by 1 dollar and the price of pears also by 1 dollar. We obtain the corresponding formula for the total price by adding $x + y$ to the original formula, i.e. $(x + 2y) + (x + y) = 2x + 3y$. This can be understood as the addition of linear maps. We define $h\colon \mathbb{R}^2 \to \mathbb{R}$ by $h(x, y) = x + y$ and set $\tilde{f} := f + h$. Then $\tilde{f}(x, y) = 2x + 3y$ holds true. So in this example, we simply added linear maps from $\mathbb{R}^2$ to $\mathbb{R}$ and multiplied them by scalars.
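For a quick check with these example prices: buying $x = 4$ apples and $y = 3$ pears originally costs $f(4, 3) = 4 + 2\cdot 3 = 10$ dollars, after the increase by half it costs $g(4, 3) = \tfrac{3}{2}\cdot 10 = 15$ dollars, and after the per-item increase of one dollar it costs $\tilde{f}(4, 3) = 2\cdot 4 + 3\cdot 3 = 17$ dollars.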
The total price is computed by linear maps from $\mathbb{R}^2$ to $\mathbb{R}$. Such a map assigns a value, namely the price, to each vector. In other words, we can say that the map "measures" these vectors. This is why we call linear maps from $\mathbb{R}^2$ to $\mathbb{R}$ linear measurement functions. We have seen above that sums and scalar multiples of such maps are again linear maps. In other words, linear combinations of linear maps are again linear maps. So also on the set of linear maps on $\mathbb{R}^2$, we can find a vector space structure.
What about other vector spaces? Let's look at the $\mathbb{C}$-vector space of complex polynomials of degree at most $n$. There are a number of simple measurement functions here. These can, for example, assign to a polynomial $p$ its value at a point $z_0 \in \mathbb{C}$:
$$p \mapsto p(z_0).$$
Alternatively, we can assign to a polynomial the value of its derivative at the point $z_0$:
$$p \mapsto p'(z_0).$$
Since the coefficients of polynomials are scalars, we can use them to define further measurement functions. For example, for $p = a_n t^n + \dots + a_1 t + a_0$, consider the mappings $f$ and $g$ defined by $f(p) = a_0$ and $g(p) = a_1$. Then $(f + g)(p) = a_0 + a_1$. We can also see here that sums of measurement functions are again measurement functions.
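For instance, for the polynomial $p(t) = 2t^2 + 3t + 5$ and the point $z_0 = 1$, the evaluation map gives $p(1) = 10$, the derivative map gives $p'(1) = 4\cdot 1 + 3 = 7$, and the coefficient maps above give $f(p) = 5$, $g(p) = 3$ and $(f + g)(p) = 8$.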
In general, we can also consider the space of linear measurement functions $\operatorname{Hom}_K(V, K)$ over an arbitrary $K$-vector space $V$. We will see that, as in the previous examples, this is a vector space. This space is called the dual space of $V$.
The following theorem states that the dual space is a vector space.
Theorem ($V^*$ is a vector space)
Let $V$ be a vector space over a field $K$. Then $V^* := \operatorname{Hom}_K(V, K)$, equipped with the two operations
$$(f + g)(v) := f(v) + g(v) \quad\text{and}\quad (\lambda\cdot f)(v) := \lambda\cdot f(v) \qquad (f, g \in V^*,\ \lambda \in K,\ v \in V),$$
is a $K$-vector space.
Example (Characterization of $(\mathbb{R}^2)^*$)
The dual space of $\mathbb{R}^2$ is the vector space of all linear maps from $\mathbb{R}^2$ to $\mathbb{R}$. Each such linear map $f$ is given by multiplication with a $(1\times 2)$ matrix, the representing matrix, and is therefore of the form
$$f(x_1, x_2) = \begin{pmatrix} a_1 & a_2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = a_1 x_1 + a_2 x_2$$
for certain $a_1, a_2 \in \mathbb{R}$. Thus, the elements in the dual space of $\mathbb{R}^2$ are described by linear equations of the form $f(x_1, x_2) = a_1 x_1 + a_2 x_2$.
More generally, an element of $(\mathbb{R}^n)^*$ is represented by a $(1\times n)$ matrix $\begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix}$ or a linear equation of the form $f(x_1, \dots, x_n) = a_1 x_1 + \dots + a_n x_n$ with coefficients $a_1, \dots, a_n \in \mathbb{R}$.
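As a concrete illustration: the map $f(x_1, x_2) = 3x_1 - 2x_2$ is an element of $(\mathbb{R}^2)^*$ with representing matrix $\begin{pmatrix} 3 & -2 \end{pmatrix}$; evaluating it at the vector $(1, 4)$ gives $f(1, 4) = 3 - 8 = -5$.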
Example (Integral)
Let $V = C([0, 1])$ be the space of continuous functions $f\colon [0, 1] \to \mathbb{R}$. Consider the mapping
$$I\colon V \to \mathbb{R},\quad f \mapsto \int_0^1 f(x)\,\mathrm{d}x,$$
which sends a continuous function on $[0, 1]$ to its integral over this interval. As an example, for $f(x) = x^2$,
$$I(f) = \int_0^1 x^2\,\mathrm{d}x = \tfrac{1}{3}.$$
We verify by direct calculation that the mapping $I$ is linear: For $f, g \in V$ and $\lambda \in \mathbb{R}$ the following applies:
$$I(f + \lambda g) = \int_0^1 \big(f(x) + \lambda g(x)\big)\,\mathrm{d}x = \int_0^1 f(x)\,\mathrm{d}x + \lambda\int_0^1 g(x)\,\mathrm{d}x = I(f) + \lambda I(g).$$
This follows from the properties of the integral. So $I$ is an element of $V^*$.
We now know what the dual space $V^*$ of a $K$-vector space $V$ is: It consists of all linear maps from $V$ to $K$. Intuitively, we can understand these maps as linear maps that measure vectors from $V$. This is why we sometimes call elements of the dual space $V^*$ "(linear) measurement functions" in this article.
Motivated by this intuitive notion of "measurements", we ask ourselves: Is there a subset $M \subseteq V^*$ of measurement functions that can be used to uniquely determine vectors? In other words, is there a subset $M \subseteq V^*$ so that we can find a measurement function $f \in M$ with $f(v) \neq f(w)$ for every choice of vectors $v, w \in V$ with $v \neq w$?
Let's first consider what this means using an example:
Example (Unique determination of vectors using measurement functions)
Let us consider $V = \mathbb{R}^2$. Then the dual space $V^*$ is the vector space of all linear maps $\mathbb{R}^2 \to \mathbb{R}$. Consider the linear maps $f_1, f_2, f_3 \in V^*$ with
$$f_1(x_1, x_2) = x_1,\qquad f_2(x_1, x_2) = 2x_1,\qquad f_3(x_1, x_2) = x_1 + x_2.$$
If we use only $f_1$, we cannot determine vectors uniquely: For $v = (1, 0)$ and $w = (1, 1)$, we have $f_1(v) = 1 = f_1(w)$, but $v \neq w$.
Even with the measurement functions in $\{f_1, f_2\}$, the vectors $v$ and $w$ cannot be distinguished: We also have $f_2(v) = 2 = f_2(w)$.
However, if we consider the subset of measurement functions $M = \{f_1, f_3\}$ instead, then vectors in $\mathbb{R}^2$ are uniquely determined by the measurements in $M$: Let $v = (v_1, v_2)$ and $w = (w_1, w_2)$ be any vectors with $v \neq w$. Assume that $f_1(v) = f_1(w)$ and $f_3(v) = f_3(w)$ apply. From $f_1(v) = f_1(w)$ we obtain $v_1 = w_1$. Together with $f_3(v) = f_3(w)$, i.e. $v_1 + v_2 = w_1 + w_2$, we would then also get $v_2 = w_2$. This would mean that $v = w$, which is a contradiction to our assumption. Therefore, $f_1(v) \neq f_1(w)$ or $f_3(v) \neq f_3(w)$ (or both) applies. Hence, for each choice of different vectors in $\mathbb{R}^2$, at least one of the two measurements in $M$ provides different values for $v$ and $w$. Vectors are therefore uniquely determined by the measurements in $M$.
In summary, our question is: Does there exist a subset $M \subseteq V^*$ such that the following applies to all vectors $v, w \in V$: If $f(v) = f(w)$ applies to all measurements $f \in M$, then $v = w$ must be true.
We will first try to answer this question in $K^n$.
Measurement functions for unique determination of vectors
A vector $x = (x_1, \dots, x_n) \in K^n$ is uniquely determined by its entries $x_1, \dots, x_n$. If we select measurement functions from $(K^n)^*$ in such a way that their values provide us with the entries of a vector, then we have ensured that a vector is already uniquely determined by these values. Let us therefore consider the following mappings for $i \in \{1, \dots, n\}$:
$$e_i^*\colon K^n \to K,\quad (x_1, \dots, x_n) \mapsto x_i.$$
You can check that the maps $e_1^*, \dots, e_n^*$ are linear. In addition, $e_i^*(x) = x_i$ holds for every $x = (x_1, \dots, x_n) \in K^n$. The map $e_i^*$ therefore provides the $i$-th entry of vectors in $K^n$. A vector $x \in K^n$ is already uniquely determined by the values of $e_1^*(x), \dots, e_n^*(x)$: Suppose we have vectors $x$ and $y$ in $K^n$ with equal function values among the $e_i^*$, i.e., with $e_i^*(x) = e_i^*(y)$ for all $i \in \{1, \dots, n\}$. Then $x_i = y_i$ applies for all $i$ and therefore $x = y$. Thus, if $x, y \in K^n$ with $e_i^*(x) = e_i^*(y)$ for all $i \in \{1, \dots, n\}$, then $x = y$ follows.
It is also intuitively clear that we cannot omit any of the measurement functions $e_1^*, \dots, e_n^*$ in order to uniquely determine a vector by its measurement values. For example, if we omit $e_j^*$ for some $j \in \{1, \dots, n\}$, then for $x = e_j$ and $y = 0$ we have $e_i^*(x) = 0 = e_i^*(y)$ for all measurement functions with $i \neq j$, but nevertheless $x \neq y$. The measurement functions $e_i^*$ with $i \neq j$ therefore no longer uniquely determine a vector.
So the $e_i^*$ with $i \in \{1, \dots, n\}$ form a set of measurement functions that uniquely determine vectors from $K^n$. Further, they are minimal because we cannot omit any of the functions.
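For example, in $\mathbb{R}^3$ the three maps $e_1^*, e_2^*, e_3^*$ read off the entries of a vector: for $x = (5, 7, 9)$ we get $e_1^*(x) = 5$, $e_2^*(x) = 7$ and $e_3^*(x) = 9$, and these three values determine $x$ completely.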
Can we generalize this to a general vector space $V$? In $K^n$ we have used the fact that a vector $x$ is uniquely determined by its entries $x_1, \dots, x_n$. Now, the $x_i$ are precisely the coordinates of $x$ with respect to the standard basis $(e_1, \dots, e_n)$:
$$x = x_1 e_1 + \dots + x_n e_n.$$
In a general vector space $V$, we do not have a standard basis. However, as soon as we have chosen any basis $B$, we can speak of the coordinates of a vector with respect to $B$ in the same way as in $K^n$. Just as in $K^n$ with the standard basis, in $V$ with the selected basis $B$, a vector $v \in V$ is uniquely determined by its coordinates with respect to $B$. As soon as we have chosen a basis, we can try to proceed in the same way as in $K^n$.
In the following, we assume that $V$ is finite-dimensional, i.e. $\dim V = n < \infty$. Let $B = (b_1, \dots, b_n)$ be a basis of $V$. Then every vector $v \in V$ is of the form $v = \lambda_1 b_1 + \dots + \lambda_n b_n$ with uniquely determined coordinates $\lambda_1, \dots, \lambda_n \in K$. Analogous to $K^n$, we now define the linear measurement functions for $i \in \{1, \dots, n\}$ in $V^*$:
$$b_i^*\colon V \to K,\quad \lambda_1 b_1 + \dots + \lambda_n b_n \mapsto \lambda_i.$$
One of the measurement functions $b_i^*$ therefore determines the $i$-th coordinate of vectors with respect to the basis $B$. Thus, $b_i^*(v) = \lambda_i$ for every vector $v = \lambda_1 b_1 + \dots + \lambda_n b_n \in V$.
Warning
Note that the definition of $b_i^*$ depends on the selected basis $B$.
Since vectors in $V$ are already uniquely determined by their coordinates, they are also already uniquely determined by the values of $b_1^*, \dots, b_n^*$. In other words, for all $v, w \in V$ we have:
$$\text{If } b_i^*(v) = b_i^*(w) \text{ for all } i \in \{1, \dots, n\}, \text{ then } v = w.$$
For the same reason as with $K^n$, none of the $b_i^*$ can be omitted: If the $j$-th measurement function $b_j^*$, $j \in \{1, \dots, n\}$, is missing, then any two vectors for which only the $j$-th coordinate with respect to $B$ differs can no longer be distinguished.
Question: Which two vectors can you choose here?
We choose an example analogous to $K^n$ and set $v = b_j$ and $w = 0$. Then $b_i^*(v) = 0 = b_i^*(w)$ holds for all $i \neq j$, but nevertheless $v \neq w$. If the $j$-th measurement function is omitted, then vectors are no longer uniquely determined by the function values of the remaining $b_i^*$.
Let $V$ be a vector space with a fixed basis $B = (b_1, \dots, b_n)$ and let the $b_i^*$ be defined as above. If you want to determine vectors uniquely using the values of $b_1^*, \dots, b_n^*$, you cannot do without any of the $b_i^*$. The reason for this is that the result of a measurement $b_i^*(v)$ (the $i$-th coordinate of $v$ with respect to $B$) cannot be deduced from the other measurements. That means, we cannot represent any of the measurement functions $b_i^*$ as a linear combination of the other $b_j^*$ ($j \neq i$). In other words, the measurement functions $b_1^*, \dots, b_n^*$ are linearly independent.
On the other hand, the values $b_1^*(v), \dots, b_n^*(v)$ already tell us everything there is to know about a vector $v \in V$: its coordinates with respect to the selected basis $B$. Can all other measurement functions from $V^*$ therefore be combined from $b_1^*, \dots, b_n^*$? Any measurement function $f$ from $V^*$ is already uniquely determined by its values on the basis vectors $b_1, \dots, b_n$ according to the principle of linear continuation. For $i \in \{1, \dots, n\}$, let $\mu_i := f(b_i)$ be these values. Furthermore, $b_i^*(b_i) = 1$ and $b_i^*(b_j) = 0$ apply for $i \neq j$ and all $i, j \in \{1, \dots, n\}$. By inserting the basis vectors $b_j$ we obtain that $f$ and $\mu_1 b_1^* + \dots + \mu_n b_n^*$ assume the same values on the basis vectors. According to the principle of linear continuation, the two linear maps are therefore identical. Thus, every $f \in V^*$ can be written as a linear combination of $b_1^*, \dots, b_n^*$. In other words, the measurement functions $b_1^*, \dots, b_n^*$ form a generating system of $V^*$.
Hence, $(b_1^*, \dots, b_n^*)$ is a basis of the dual space and we can prove the following theorem:
Theorem (Existence of a dual basis)
Let $V$ be a finite-dimensional $K$-vector space and $B = (b_1, \dots, b_n)$ a basis of $V$. Then there exists a unique basis $B^* = (b_1^*, \dots, b_n^*)$ of $V^*$ such that
$$b_i^*(b_j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j \end{cases}$$
is true for all $i, j \in \{1, \dots, n\}$.
Proof (Existence of a dual basis)
Proof step: Existence and uniqueness of the $b_i^*$.
According to the principle of linear continuation, the linear maps $b_i^*$ exist and are uniquely determined by their values on the basis vectors of $V$.
Proof step: The $b_i^*$ are linearly independent.
Let $\lambda_1, \dots, \lambda_n \in K$ with $\lambda_1 b_1^* + \dots + \lambda_n b_n^* = 0$. Let further $j \in \{1, \dots, n\}$. Because $b_j^*(b_j) = 1$ and $b_i^*(b_j) = 0$ for $i \neq j$, we obtain the following by plugging in $b_j$:
$$0 = (\lambda_1 b_1^* + \dots + \lambda_n b_n^*)(b_j) = \lambda_1 b_1^*(b_j) + \dots + \lambda_n b_n^*(b_j) = \lambda_j.$$
Because $j$ was arbitrary, we conclude $\lambda_1 = \dots = \lambda_n = 0$.
Proof step: The $b_i^*$ form a generating system.
Let $f \in V^*$ be arbitrary. For $i \in \{1, \dots, n\}$ we define $\mu_i := f(b_i)$ and set $g := \mu_1 b_1^* + \dots + \mu_n b_n^*$. Then, proceeding as in the proof of linear independence, we obtain $g(b_j) = \mu_j = f(b_j)$ for each $j \in \{1, \dots, n\}$. Because $g(b_j) = f(b_j)$ applies to all $j$ and because a linear map is already uniquely determined by the images of its basis vectors, we have $f = g$. The $b_i^*$ therefore form a generating system.
We call the uniquely determined basis $B^* = (b_1^*, \dots, b_n^*)$ the dual basis with respect to $B$ and denote its basis vectors by $b_1^*, \dots, b_n^*$.
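For instance, in $V = \mathbb{R}^2$ with the standard basis $(e_1, e_2)$, the dual basis consists of the coordinate maps $e_1^*(x_1, x_2) = x_1$ and $e_2^*(x_1, x_2) = x_2$. An arbitrary measurement function such as $f(x_1, x_2) = 3x_1 + 5x_2$ then decomposes as $f = 3\,e_1^* + 5\,e_2^*$: both sides agree on $e_1$ and $e_2$, namely with values $3$ and $5$.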
Warning
Note that $B^*$ depends on the basis chosen for $V$. Furthermore, you cannot "dualize" individual vectors from $V$, but only entire bases.
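The dual basis can also be computed numerically. Below is a minimal sketch using NumPy, with a basis of $\mathbb{R}^3$ chosen purely for illustration: if the basis vectors are written as the columns of an invertible matrix, the rows of the inverse matrix represent exactly the dual basis functionals, because the $i$-th row sends a vector to its $i$-th coordinate with respect to the chosen basis.

```python
import numpy as np

# Basis of R^3 chosen for illustration (any three linearly
# independent vectors would do).
b1 = np.array([1.0, 1.0, 0.0])
b2 = np.array([1.0, 0.0, 1.0])
b3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])   # basis vectors as columns

# Rows of B^{-1} represent the dual basis vectors b_1^*, b_2^*, b_3^*:
# row i applied to a vector x yields the i-th coordinate of x
# with respect to (b_1, b_2, b_3).
B_star = np.linalg.inv(B)

# Defining property of the dual basis: b_i^*(b_j) = 1 if i == j else 0.
print(np.allclose(B_star @ B, np.eye(3)))  # True

# "Measuring" a vector returns its coordinates w.r.t. the basis.
x = 2 * b1 - 1 * b2 + 4 * b3
print(B_star @ x)                          # approximately [ 2. -1.  4.]
```

This is the same computation that is carried out by hand in the exercises below.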
Above, we only considered the case $\dim V < \infty$. Can we proceed analogously if $V$ is infinite dimensional? To define the measurement functions $b_i^*$, we must first choose a basis of $V$. Let $(b_i)_{i \in I}$ be a basis of $V$, where $I$ is an (infinite) index set. The principle of linear continuation also applies in infinite dimensions: For given values $\lambda_i \in K$, $i \in I$, there is exactly one linear map $f\colon V \to K$ with $f(b_i) = \lambda_i$ for all $i \in I$. Just as in the finite-dimensional case, we can therefore define the map $b_i^*$ for $i \in I$ using the rule
$$b_i^*(b_j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
We can then show that $\{b_i^* \mid i \in I\}$ is also a linearly independent subset of $V^*$ in infinite dimensions. The proof is analogous to the proof of linear independence in the theorem on the dual basis.
However, in infinitely many dimensions, $\{b_i^* \mid i \in I\}$ cannot be a generating system of $V^*$: One can consider the function $f \in V^*$ which assumes the value $1$ on all basis vectors. This function cannot be represented as a finite linear combination of the $b_i^*$: any such finite combination vanishes on all but finitely many basis vectors, whereas $f$ does not.
So in infinitely many dimensions, the "dual basis" $\{b_i^* \mid i \in I\}$ is not a basis of the dual space.
Exercise (Determining the dual basis)
- Consider the basis $B = (v_1, v_2, v_3)$ of $\mathbb{R}^3$. Determine the basis $B^* = (v_1^*, v_2^*, v_3^*)$ which is dual to $B$, that is, for $i \in \{1, 2, 3\}$ determine the explicit form of the function
$$v_i^*\colon \mathbb{R}^3 \to \mathbb{R},\quad (x_1, x_2, x_3) \mapsto v_i^*(x_1, x_2, x_3).$$
- Consider the basis $(p_0, p_1, p_2, p_3)$ of $\mathbb{R}[t]_{\leq 3}$. Determine the basis dual to it, i.e. for $i \in \{0, 1, 2, 3\}$ determine the explicit form of the function
$$p_i^*\colon \mathbb{R}[t]_{\leq 3} \to \mathbb{R},\quad a_3 t^3 + a_2 t^2 + a_1 t + a_0 \mapsto p_i^*(a_3 t^3 + a_2 t^2 + a_1 t + a_0).$$
- Consider the basis $(A_1, A_2, A_3, A_4)$ of $\mathbb{R}^{2\times 2}$. Determine the basis $(A_1^*, A_2^*, A_3^*, A_4^*)$ dual to it, i.e. for $i \in \{1, 2, 3, 4\}$ determine the explicit form of the function
$$A_i^*\colon \mathbb{R}^{2\times 2} \to \mathbb{R},\quad A \mapsto A_i^*(A).$$
Solution (Determining the dual basis)
Solution sub-exercise 1:
Write $B = (v_1, v_2, v_3)$ for the given basis. We are looking for linear maps $v_1^*, v_2^*, v_3^*\colon \mathbb{R}^3 \to \mathbb{R}$ whose values we only know on the basis vectors $v_1, v_2, v_3$. We must define $v_i^*(x)$ for general $x \in \mathbb{R}^3$.
By definition of the dual basis, we already know the function values of each $v_i^*$ on the basis vectors in $B$. Applying the principle of linear continuation, we can determine all function values: Because $B$ is a basis, there are coordinates $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}$ for each $x \in \mathbb{R}^3$ such that $x = \lambda_1 v_1 + \lambda_2 v_2 + \lambda_3 v_3$. With the help of linearity we get
$$v_i^*(x) = \lambda_1 v_i^*(v_1) + \lambda_2 v_i^*(v_2) + \lambda_3 v_i^*(v_3).$$
We know the values $v_i^*(v_j)$ by definition of the dual basis. We therefore only need to determine the coordinates of any vector $x$ with respect to $B$. Then we can write out the $v_i^*$.
Proof step: Determining the coordinates of any vector $x$ with respect to $B$
We want to determine the coordinates with respect to $B$ of any vector $x \in \mathbb{R}^3$. Let $x = (x_1, x_2, x_3)$. We write
$$x = x_1 e_1 + x_2 e_2 + x_3 e_3.$$
The coordinates of $x$ with respect to the standard basis $E = (e_1, e_2, e_3)$ are therefore simply $x_1$, $x_2$ and $x_3$. If we write $\Phi_E$ for the coordinate map, this means
$$\Phi_E(x) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
We can convert these into coordinates $\lambda_1, \lambda_2, \lambda_3$ with respect to $B$ by multiplying the coordinate vector of $x$ from the left by the basis transition matrix $T_{E \to B}$ that implements the transfer from $E$ to $B$. Then
$$\begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix} = T_{E \to B}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
In order to determine the basis transition matrix $T_{E \to B}$, we calculate the coordinates of the standard basis vectors $e_1, e_2, e_3$ with respect to $B$. These form the columns of $T_{E \to B}$.
We start with $e_1$: We are looking for coefficients $\mu_1, \mu_2, \mu_3 \in \mathbb{R}$ such that
$$e_1 = \mu_1 v_1 + \mu_2 v_2 + \mu_3 v_3.$$
For this we have to solve the corresponding linear system of equations, which yields the coefficients $\mu_1$, $\mu_2$ and $\mu_3$. In the same way, we determine the coordinates of $e_2$ with respect to $B$ and the coordinates of $e_3$ with respect to $B$. Then $T_{E \to B}$ is the matrix whose columns are these three coordinate vectors.
Note: We could also have solved all three systems at once by summarizing the "right-hand sides" column by column, i.e. by taking the inverse of the matrix $(v_1\ v_2\ v_3)$ whose columns are the basis vectors. This makes sense, because this matrix is the basis transition matrix from $B$ to the standard basis. Its inverse is therefore the matrix $T_{E \to B}$ that transitions from $E$ to $B$.
The coordinates of $x$ with respect to $B$ are therefore
$$\begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix} = (v_1\ v_2\ v_3)^{-1}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
Of course, it is also okay to guess the coordinates of $e_1, e_2, e_3$ with respect to $B$ by looking closely, without solving systems of equations.
Proof step: Result for the $v_i^*$
We can now write any $x \in \mathbb{R}^3$ as
$$x = \lambda_1 v_1 + \lambda_2 v_2 + \lambda_3 v_3.$$
Using linearity of $v_1^*$ and the definition of the dual basis, we obtain
$$v_1^*(x) = \lambda_1 v_1^*(v_1) + \lambda_2 v_1^*(v_2) + \lambda_3 v_1^*(v_3) = \lambda_1.$$
In the same way, we calculate $v_2^*(x) = \lambda_2$ and $v_3^*(x) = \lambda_3$. In total, we have therefore determined the three basis vectors of the dual basis: $v_i^*$ sends a vector $x$ to the $i$-th entry of $(v_1\ v_2\ v_3)^{-1} x$, i.e. to the $i$-th coordinate of $x$ with respect to $B$.
Solution sub-exercise 2:
We know what the map $p_i^*$ does with the basis vectors $p_0, p_1, p_2, p_3$. To find out how $p_i^*$ acts on a general polynomial $p = a_3 t^3 + a_2 t^2 + a_1 t + a_0$, we can express it in the basis $(p_0, p_1, p_2, p_3)$ via linear combination:
$$p = \mu_0 p_0 + \mu_1 p_1 + \mu_2 p_2 + \mu_3 p_3,$$
where the coefficients $\mu_0, \mu_1, \mu_2, \mu_3$ are obtained from $a_0, a_1, a_2, a_3$ by comparing coefficients with the concrete basis polynomials. This allows us to calculate the desired functions. For $p_0^*$ we have
$$p_0^*(p) = \mu_0 p_0^*(p_0) + \mu_1 p_0^*(p_1) + \mu_2 p_0^*(p_2) + \mu_3 p_0^*(p_3) = \mu_0.$$
For $p_1^*$ we get $p_1^*(p) = \mu_1$ in the same way. So the function of $p_1^*$ is
$$p_1^*\colon \mathbb{R}[t]_{\leq 3} \to \mathbb{R},\quad a_3 t^3 + a_2 t^2 + a_1 t + a_0 \mapsto \mu_1.$$
For $p_2^*$ and $p_3^*$ we get $p_2^*(p) = \mu_2$ and $p_3^*(p) = \mu_3$.
In summary, each $p_i^*$ sends a polynomial to its $i$-th coordinate with respect to the basis $(p_0, p_1, p_2, p_3)$.
Solution sub-exercise 3:
We know the values of each $A_i^*$ when applied to the basis vectors $A_1, A_2, A_3, A_4$ and want to find the value for any matrix $A \in \mathbb{R}^{2\times 2}$. To do this, we express $A$ as a linear combination of $A_1, A_2, A_3, A_4$:
$$A = \mu_1 A_1 + \mu_2 A_2 + \mu_3 A_3 + \mu_4 A_4.$$
Using the definition of the dual basis and the linearity of $A_i^*$, we can now specify the solution: We have $A_i^*(A_j) = 1$ for $i = j$ and $A_i^*(A_j) = 0$ for $i \neq j$, so the following applies:
$$A_i^*(A) = \mu_1 A_i^*(A_1) + \mu_2 A_i^*(A_2) + \mu_3 A_i^*(A_3) + \mu_4 A_i^*(A_4) = \mu_i.$$
So $A_i^*$ sends a matrix to its $i$-th coordinate with respect to the basis $(A_1, A_2, A_3, A_4)$.
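To see what this looks like in a concrete case, take the standard basis of $\mathbb{R}^{2\times 2}$, i.e. the matrices $E_{11}, E_{12}, E_{21}, E_{22}$ that have a single entry $1$ and zeros elsewhere. A matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ decomposes as $A = a E_{11} + b E_{12} + c E_{21} + d E_{22}$, so the dual basis simply reads off the entries: $E_{11}^*(A) = a$, $E_{12}^*(A) = b$, $E_{21}^*(A) = c$ and $E_{22}^*(A) = d$.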
Solution (Dual basis and hyperplanes)
Solution sub-exercise 2:
According to the principle of linear continuation, a linear map is determined by what it does on a basis. To be able to use this principle, we first choose a basis $(u_1, \dots, u_{n-1})$ of $U$. The basis completion theorem then provides us with a vector $v_n \in V$, such that $(u_1, \dots, u_{n-1}, v_n)$ is a basis of $V$.
According to the principle of linear continuation, we can then define a candidate for the linear map $f\colon V \to K$ by saying what happens on a basis of $V$. The vectors $u_1, \dots, u_{n-1}$ are elements of $U$. Since $U$ is to be the kernel of $f$, we must require $f(u_i) = 0$ for $i \in \{1, \dots, n-1\}$. The last basis vector $v_n$ is not in $U$. This means that $v_n$ must not lie in the kernel of $f$. For example, we can demand $f(v_n) = 1$. To summarize, we define $f$ as the linear map with
$$f(u_1) = \dots = f(u_{n-1}) = 0 \quad\text{and}\quad f(v_n) = 1.$$
Since $U$ is generated by $u_1, \dots, u_{n-1}$, we have $U \subseteq \ker(f)$. We therefore only have to show that $\ker(f) \subseteq U$. For this, let $v \in \ker(f)$. Because $(u_1, \dots, u_{n-1}, v_n)$ is a basis of $V$, we find $\lambda_1, \dots, \lambda_n \in K$ with $v = \lambda_1 u_1 + \dots + \lambda_{n-1} u_{n-1} + \lambda_n v_n$. Now we know that
$$0 = f(v) = \lambda_1 f(u_1) + \dots + \lambda_{n-1} f(u_{n-1}) + \lambda_n f(v_n) = \lambda_n.$$
Hence $\lambda_n = 0$ and $v = \lambda_1 u_1 + \dots + \lambda_{n-1} u_{n-1} \in U$. Therefore, we have $\ker(f) = U$.
In the last task, we required that the field $K$ contains more than two elements, because we needed an element in the proof that is neither $0$ nor $1$. The field $\mathbb{F}_2$ only consists of the elements $0$ and $1$. This means that if we want to construct a linear map $f\colon V \to \mathbb{F}_2$ that has an $(n-1)$-dimensional subspace $U$ as its kernel, then we must define it as
$$f(u_1) = \dots = f(u_{n-1}) = 0 \quad\text{and}\quad f(v_n) = 1.$$
This map is linear and it is the only way to have a linear map with kernel $U$. Thus, for $K = \mathbb{F}_2$ we arrive at a different result in the last sub-exercise: The map is then unique.
Exercise
Consider the basis $B = (b_1, b_2, b_3)$ of $\mathbb{R}^3$.
- For $i \in \{1, 2, 3\}$ determine the dual basis vectors $b_1^*, b_2^*, b_3^*$ with $b_i^*(b_j) = 1$ for $i = j$ and $b_i^*(b_j) = 0$ for $i \neq j$.
- Determine the kernel $\ker(b_i^*)$ and draw it in $\mathbb{R}^3$ for $i \in \{1, 2, 3\}$.
Solution
Solution sub-exercise 1:
The matrix of a linear map $f\colon \mathbb{R}^3 \to \mathbb{R}$ with respect to the canonical bases $(e_1, e_2, e_3)$ of $\mathbb{R}^3$ and $(1)$ of $\mathbb{R}$ is the uniquely determined $(1\times 3)$ matrix $M_f$ with $f(x) = M_f\,x$ for all $x \in \mathbb{R}^3$.
We are looking for the formula of the linear maps $b_i^*$, $i \in \{1, 2, 3\}$. That means, we determine the three corresponding representing matrices $M_1, M_2, M_3$ with respect to the canonical bases. By definition of the dual basis, the following should hold:
$$M_1 b_1 = 1,\qquad M_1 b_2 = 0,\qquad M_1 b_3 = 0,$$
and the same for $M_2$ and $M_3$. If we summarize these equations in matrix form, we get
$$\begin{pmatrix} M_1 \\ M_2 \\ M_3 \end{pmatrix}\begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix} = I_3.$$
We must therefore determine the inverse of the matrix $(b_1\ b_2\ b_3)$, which has the basis vectors in $B$ as columns.
The rows of this inverse are the desired dual basis vectors. We therefore have
$$b_i^*(x) = M_i\,x = \big((b_1\ b_2\ b_3)^{-1} x\big)_i \quad\text{for all } x \in \mathbb{R}^3.$$
Solution sub-exercise 2:
From the previous exercise we know that $\ker(b_1^*) = \operatorname{span}(b_2, b_3)$, $\ker(b_2^*) = \operatorname{span}(b_1, b_3)$ and $\ker(b_3^*) = \operatorname{span}(b_1, b_2)$. Plotted in $\mathbb{R}^3$, we obtain in each case a plane spanned by the two spanning vectors.
Instead of using the previous exercise, we can also calculate the kernels of the matrices $M_1, M_2, M_3$:
Proof step: $\ker(b_1^*)$
The kernel of $b_1^*$ contains all $x \in \mathbb{R}^3$ with $b_1^*(x) = 0$, i.e., with $M_1 x = 0$. So the following holds:
$$\ker(b_1^*) = \{x \in \mathbb{R}^3 \mid M_1 x = 0\}.$$
Note that this solution set is spanned by $b_2$ and $b_3$, so the result for the kernel is the same as in the previous exercise.
Proof step: $\ker(b_2^*)$
The kernel of $b_2^*$ contains all $x \in \mathbb{R}^3$ with $b_2^*(x) = 0$, i.e., with $M_2 x = 0$. So the following holds:
$$\ker(b_2^*) = \{x \in \mathbb{R}^3 \mid M_2 x = 0\}.$$
Here, also, the solution set is spanned by $b_1$ and $b_3$, so the result is the same as in the previous exercise.
Proof step: $\ker(b_3^*)$
The kernel of $b_3^*$ contains all $x \in \mathbb{R}^3$ with $b_3^*(x) = 0$, i.e., with $M_3 x = 0$. So the following holds:
$$\ker(b_3^*) = \{x \in \mathbb{R}^3 \mid M_3 x = 0\}.$$
Because this solution set is spanned by $b_1$ and $b_2$, this agrees with the result determined in the previous exercise.