
Dual space – "Math for Non-Geeks"


We have already seen the vector space of linear maps between two $K$-vector spaces $V$ and $W$. We will now consider the case where the vector space $W$ corresponds to the field $K$.

Motivation


Consider the following example: We want to buy apples and pears. An apple costs $a$ and a pear $b$. If $x$ is the number of apples and $y$ is the number of pears, how much do we have to pay in total? The formula for the total price is $a x + b y$. We can express this equation as the $\mathbb{R}$-linear map $f\colon \mathbb{R}^2 \to \mathbb{R},\ (x, y) \mapsto a x + b y$. Let's assume that the prices increase by half. To get the formula that gives the new total price, we need to multiply the old formula by $\tfrac{3}{2}$. The formula that gives this price would then be $\tfrac{3}{2}(a x + b y)$. The corresponding linear map is $\tfrac{3}{2} f$. We thus recognize that the new price function is simply a scalar multiple of the old one. Suppose now that the price of apples increases by $a'$ and the price of pears by $b'$. We obtain the corresponding formula for the total price by adding $a' x + b' y$ to the original formula, i.e. $(a + a')x + (b + b')y$. This can be understood as the addition of linear maps. We define $g\colon \mathbb{R}^2 \to \mathbb{R}$ by $g(x, y) = a' x + b' y$ and $h := f + g$. Then $h(x, y) = (a + a')x + (b + b')y$ holds true. So in this example, we simply added linear maps from $\mathbb{R}^2$ to $\mathbb{R}$ and multiplied them by scalars.

The total price is given by linear maps from $\mathbb{R}^2$ to $\mathbb{R}$. Such a map assigns a value, namely the price, to each vector. In other words, we can say that the map "measures" these vectors. This is why we call linear maps from a vector space $V$ to its field $K$ linear measurement functions. We have seen above that sums and scalar multiples of such maps are again linear maps. In other words, linear combinations of linear maps are again linear maps. So also on the set of linear maps from $\mathbb{R}^2$ to $\mathbb{R}$, we can find a vector space structure.

What about other vector spaces? Let's look at the $\mathbb{C}$-vector space of complex polynomials of degree at most $n$. There are a number of simple measurement functions here. These can, for example, assign to a polynomial $p$ its value at a point $z_0 \in \mathbb{C}$: $p \mapsto p(z_0)$. Alternatively, we can assign to a polynomial the value of its derivative at the point $z_0$: $p \mapsto p'(z_0)$. Since the coefficients of polynomials are scalars, we can use them to define further measurement functions, for example maps that assign to a polynomial one of its coefficients, or sums of such maps. We can also see here that sums of measurement functions are again measurement functions.

In general, we can also consider the space of linear measurement functions over an arbitrary $K$-vector space $V$. We will see that, as in the previous examples, this is a vector space. This space is called the dual space of $V$.

Definition


Definition (Dual space)

Let $V$ be a vector space over a field $K$. Then the space $V^* := \operatorname{Hom}(V, K)$ of linear mappings between the $K$-vector spaces $V$ and $K$ is called the dual space of $V$.

The following theorem states that the dual space is a vector space.

Theorem ($V^*$ is a vector space)

Let $V$ be a vector space over a field $K$. Then $V^*$, together with the two operations $(f + g)(v) := f(v) + g(v)$ and $(\lambda \cdot f)(v) := \lambda \cdot f(v)$, is a $K$-vector space.

Proof ($V^*$ is a vector space)

We know from the article on function spaces that for $K$-vector spaces $V$ and $W$, the set $\operatorname{Hom}(V, W)$ of linear maps is also a $K$-vector space. Since $K$ itself is a (1-dimensional) $K$-vector space, we know that for every vector space $V$, also $V^* = \operatorname{Hom}(V, K)$ is a vector space.

Examples of vectors in the dual space


Example (Characterization of $(\mathbb{R}^2)^*$)

The dual space of $\mathbb{R}^2$ is the vector space of all linear maps from $\mathbb{R}^2$ to $\mathbb{R}$. Each such linear map is given by multiplication with a $(1 \times 2)$ matrix, the representing matrix, and is therefore of the form $(x, y) \mapsto a x + b y$ for certain $a, b \in \mathbb{R}$. Thus, the elements in the dual space of $\mathbb{R}^2$ are described by linear expressions of the form $a x + b y$.

More generally, an element of $(\mathbb{R}^n)^*$ is represented by a $(1 \times n)$ matrix or, equivalently, by a linear expression of the form $a_1 x_1 + \dots + a_n x_n$ with coefficients $a_1, \dots, a_n \in \mathbb{R}$.
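To make the identification of measurement functions with row matrices concrete, here is a minimal numerical sketch; the functional and the vectors are arbitrary illustrative choices, not taken from the text. A linear measurement function on $\mathbb{R}^3$ is stored as a $(1 \times 3)$ row, and applying it is just a matrix product.

```python
import numpy as np

# A linear functional on R^3, written as a (1 x 3) row vector.
# The coefficients (2, -1, 3) are chosen arbitrarily for illustration.
a = np.array([[2.0, -1.0, 3.0]])      # representing (1 x n) matrix
v = np.array([[1.0], [4.0], [0.5]])   # a column vector in R^3

# Applying the functional is matrix multiplication: f(v) = a @ v.
value = (a @ v).item()
print(value)  # 2*1 - 1*4 + 3*0.5 = -0.5

# Linearity check: f(v + 2w) == f(v) + 2*f(w) for another vector w.
w = np.array([[0.0], [1.0], [2.0]])
assert np.isclose((a @ (v + 2 * w)).item(), (a @ v).item() + 2 * (a @ w).item())
```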

Example (Limit of convergent sequences)

Let $V$ be the space of convergent sequences $(a_n)_{n \in \mathbb{N}}$ of real numbers. Because sums and scalar multiples of convergent sequences are again convergent sequences, $V$ is a vector space. You can read a proof of the vector space properties here.

We consider the mapping $\lim\colon V \to \mathbb{R}$, which sends a sequence to its limit value. For example, the constant sequence $(1, 1, 1, \dots)$ is sent to $1$, and the sequence $(\tfrac{1}{n})_{n \in \mathbb{N}}$ is sent to $0$. From the limit theorems we know that $\lim\,(a_n + \lambda b_n) = \lim\,(a_n) + \lambda \lim\,(b_n)$ applies to all convergent sequences $(a_n), (b_n)$ and scalars $\lambda \in \mathbb{R}$. It follows that $\lim$ is a linear map and therefore $\lim \in V^*$ holds.

Example (Polynomial space and the evaluation mapping)

Let $K$ be a field. We consider the polynomial ring $K[X]$ as a $K$-vector space. For $\lambda \in K$ we define the mapping $\operatorname{ev}_\lambda\colon K[X] \to K,\ p \mapsto p(\lambda)$, which evaluates a polynomial at the position $\lambda$. For example, we have $\operatorname{ev}_0(X^2 + 1) = 1$ and $\operatorname{ev}_2(X^2 + 1) = 5$.

By direct computation, we can verify that this mapping is $K$-linear, i.e. an element of $K[X]^*$:

For $p, q \in K[X]$ and $\mu \in K$ we then have: $\operatorname{ev}_\lambda(p + \mu q) = (p + \mu q)(\lambda) = p(\lambda) + \mu\, q(\lambda) = \operatorname{ev}_\lambda(p) + \mu \operatorname{ev}_\lambda(q)$.
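The linearity computation above can also be checked symbolically for concrete polynomials. The following sketch uses SymPy with arbitrarily chosen polynomials, evaluation point, and scalar (illustrative assumptions only, not values from the text):

```python
import sympy as sp

X = sp.symbols('X')

def ev(point, p):
    """Evaluation functional: send the polynomial p to its value p(point)."""
    return p.subs(X, point)

# Two arbitrary example polynomials and a scalar (chosen for illustration only):
p = X**2 + 1
q = 3*X - 2
mu = 5

# Linearity of evaluation at the point 2:
left = ev(2, p + mu * q)
right = ev(2, p) + mu * ev(2, q)
print(left, right)                     # 25 25
assert sp.simplify(left - right) == 0  # ev_2(p + mu*q) == ev_2(p) + mu*ev_2(q)
```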

Example (Derivative)

Let $V$ be the space of continuously differentiable functions $f\colon \mathbb{R} \to \mathbb{R}$. Let $x_0 \in \mathbb{R}$ be fixed and consider the mapping $V \to \mathbb{R},\ f \mapsto f'(x_0)$, which sends a differentiable function to its derivative at the point $x_0$. For example, for $f(x) = x^2$ and $x_0 = 1$, the value of the mapping is $f'(1) = 2$. We verify by direct computation that the mapping (for fixed $x_0$) is linear: For $f, g \in V$ and $\lambda \in \mathbb{R}$ we have $(f + \lambda g)'(x_0) = f'(x_0) + \lambda g'(x_0)$. This follows from the properties of the derivative. So the mapping is an element of $V^*$.

Example (Integral)

Let $V$ be the space of continuous functions $f\colon [a, b] \to \mathbb{R}$. Consider the mapping $V \to \mathbb{R},\ f \mapsto \int_a^b f(x)\,\mathrm{d}x$, which sends a continuous function on $[a, b]$ to its integral over this interval. As an example, for $f(x) = x$ on $[0, 1]$, the mapping returns the value $\tfrac{1}{2}$. We verify by direct calculation that the mapping is linear: For $f, g \in V$ and $\lambda \in \mathbb{R}$ the following applies: $\int_a^b \big(f(x) + \lambda g(x)\big)\,\mathrm{d}x = \int_a^b f(x)\,\mathrm{d}x + \lambda \int_a^b g(x)\,\mathrm{d}x$. This follows from the properties of the integral. So the mapping is an element of $V^*$.
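A quick numerical plausibility check of this linearity can be done with SciPy's `quad` on the interval $[0, 1]$; the functions and the scalar below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

# Two arbitrary continuous functions on [0, 1] and a scalar (illustrative choices):
f = np.sin
g = np.exp
lam = 3.0

# The measurement function sends h to its integral over [0, 1].
integrate = lambda h: quad(h, 0.0, 1.0)[0]

# Linearity: the integral of (f + lam*g) equals the integral of f plus lam times the integral of g.
left = integrate(lambda x: f(x) + lam * g(x))
right = integrate(f) + lam * integrate(g)
print(left, right)
assert np.isclose(left, right)
```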

Dual Basis


We now know what the dual space $V^*$ of a $K$-vector space $V$ is: It consists of all linear maps from $V$ to $K$. Intuitively, we can understand these maps as linear maps that measure vectors from $V$. This is why we sometimes call elements of the dual space "(linear) measurement functions" in this article.

Motivated by this intuitive notion of "measurements", we ask ourselves: Is there a subset of measurement functions that can be used to uniquely determine vectors? In other words, is there a subset $M \subseteq V^*$ so that we can find a measurement function $f \in M$ with $f(v) \neq f(w)$ for every choice of vectors $v, w \in V$ with $v \neq w$?

Let's first consider what this means using an example:

Example (Unique determination of vectors using measurement functions)

Let us consider $V = \mathbb{R}^2$. Then the dual space is the vector space of all linear maps $\mathbb{R}^2 \to \mathbb{R}$. Consider the linear maps $f_1, f_2\colon \mathbb{R}^2 \to \mathbb{R}$ with $f_1(x, y) = x$ and $f_2(x, y) = 2x$. If $M = \{f_1\}$, we cannot use these functions to determine vectors uniquely: For $v = (1, 1)$ and $w = (1, 2)$, we have $f_1(v) = f_1(w) = 1$, but $v \neq w$.

Even with the measurement functions in $M = \{f_1, f_2\}$, the vectors $v$ and $w$ cannot be distinguished: We also have $f_2(v) = f_2(w) = 2$.

However, if we consider the subset of measurement functions $M' = \{f_1, f_3\}$ with $f_3(x, y) = y$ instead, then vectors in $\mathbb{R}^2$ are uniquely determined by the measurements in $M'$: Let $v = (v_1, v_2)$ and $w = (w_1, w_2)$ be any vectors with $v \neq w$. Assume that $f_1(v) = f_1(w)$ and $f_3(v) = f_3(w)$ apply. From $f_1(v) = f_1(w)$ we obtain $v_1 = w_1$. Together with $f_3(v) = f_3(w)$, we would then also get $v_2 = w_2$, i.e. $v = w$. This would mean that $v = w$, which is a contradiction to our assumption. Therefore, $f_1(v) \neq f_1(w)$ or $f_3(v) \neq f_3(w)$ (or both) applies. Hence, for each choice of different vectors in $\mathbb{R}^2$, at least one of the two measurements in $M'$ provides different values for $v$ and $w$. Vectors are therefore uniquely determined by the measurements in $M'$.

In summary, our question is: Does there exist a subset $M \subseteq V^*$ such that the following applies to all vectors $v, w \in V$: If $f(v) = f(w)$ applies to all measurements $f \in M$, then $v = w$ must be true.

We will first try to answer this question in $K^n$.

Measurement functions for unique determination of vectors


A vector $v = (v_1, \dots, v_n) \in K^n$ is uniquely determined by its entries $v_1, \dots, v_n$. If we select measurement functions from $(K^n)^*$ in such a way that their values provide us with the entries of a vector, then we have ensured that a vector is already uniquely determined by these values. Let us therefore consider the following mappings for $i \in \{1, \dots, n\}$: $x_i\colon K^n \to K,\ (v_1, \dots, v_n) \mapsto v_i$. You can check that the maps $x_i$ are linear. In addition, $x_i(v) = v_i$ holds for every $v = (v_1, \dots, v_n) \in K^n$. The map $x_i$ therefore provides the $i$-th entry of vectors in $K^n$. A vector is already uniquely determined by the values of $x_1, \dots, x_n$: Suppose we have vectors $v$ and $w$ in $K^n$ with equal function values among the $x_i$, i.e., with $x_i(v) = x_i(w)$ for all $i$. Then $v_i = w_i$ applies for all $i$ and therefore $v = w$. Thus, if $v, w \in K^n$ with $x_i(v) = x_i(w)$ for all $i$, then $v = w$ follows.
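In $\mathbb{R}^n$, the coordinate functionals $x_1, \dots, x_n$ are exactly the rows of the identity matrix. Here is a small sketch (with an arbitrary example vector, chosen for illustration) showing that their values reproduce the entries of a vector:

```python
import numpy as np

n = 3
# The coordinate functionals x_1, ..., x_n on R^n extract the entries of a vector.
# Written as (1 x n) rows, they are exactly the rows of the identity matrix.
coordinate_functionals = np.eye(n)

v = np.array([4.0, -1.0, 7.0])          # an arbitrary example vector
measurements = coordinate_functionals @ v
print(measurements)                      # [ 4. -1.  7.] -- exactly the entries of v

# Since the measurements reproduce the entries, equal measurements force equal vectors.
assert np.array_equal(measurements, v)
```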

It is also intuitively clear that we cannot omit any of the measurement functions in order to uniquely determine a vector by its measurement values. For example, if we omit $x_j$ for some $j \in \{1, \dots, n\}$, then for $v = e_j$ (the $j$-th standard basis vector) and $w = 0$ we have $x_i(v) = 0 = x_i(w)$ for all measurement functions $x_i$ with $i \neq j$, but nevertheless $v \neq w$. The measurement functions $x_i$ with $i \neq j$ therefore no longer uniquely determine a vector.

So the $x_i$ with $i \in \{1, \dots, n\}$ form a set of measurement functions that uniquely determine vectors from $K^n$. Further, they are minimal, because we cannot omit any of the functions.

Can we generalize this to a general vector space $V$? In $K^n$ we have used the fact that a vector $v$ is uniquely determined by its entries $v_1, \dots, v_n$. Now, the $v_i$ are precisely the coordinates of $v$ with respect to the standard basis $e_1, \dots, e_n$: $v = v_1 e_1 + \dots + v_n e_n$. In a general vector space $V$, we do not have a standard basis. However, as soon as we have chosen any basis $B$ of $V$, we can speak of the coordinates of a vector with respect to $B$ in the same way as in $K^n$. Just as in $K^n$ with the standard basis, in $V$ with the selected basis $B$, a vector is uniquely determined by its coordinates with respect to $B$. As soon as we have chosen a basis, we can try to proceed in the same way as in $K^n$.

In the following, we assume that $V$ is finite-dimensional, i.e. $\dim V = n < \infty$. Let $B = \{v_1, \dots, v_n\}$ be a basis of $V$. Then every vector $v \in V$ is of the form $v = \lambda_1 v_1 + \dots + \lambda_n v_n$ with uniquely determined coordinates $\lambda_1, \dots, \lambda_n \in K$. Analogous to $K^n$, we now define the linear measurement functions $v_i^*$ for $i \in \{1, \dots, n\}$ by $v_i^*\colon V \to K,\ \lambda_1 v_1 + \dots + \lambda_n v_n \mapsto \lambda_i$. One of the measurement functions $v_i^*$ therefore determines the $i$-th coordinate of vectors with respect to the basis $B$. Thus, $v = v_1^*(v)\, v_1 + \dots + v_n^*(v)\, v_n$ for every vector $v \in V$.
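For a concrete feeling of how the $v_i^*$ act, the following sketch computes coordinates with respect to a basis of $\mathbb{R}^3$; the basis matrix and the vector are arbitrary illustrative choices, not an example from the text. The $i$-th coordinate returned by the linear solve is exactly the value $v_i^*(v)$.

```python
import numpy as np

# A (hypothetical) basis B of R^3, written as the columns of a matrix.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 5.0])  # an arbitrary vector

# The coordinates of v with respect to B are the unique solution of B @ coords = v;
# the i-th dual basis function returns exactly the i-th coordinate.
coords = np.linalg.solve(B, v)
print(coords)

# Reconstructing v from its coordinates recovers the original vector.
assert np.allclose(B @ coords, v)
```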

Warning

Note that the definition of the $v_i^*$ depends on the selected basis $B$.

Since vectors in $V$ are already uniquely determined by their coordinates, they are also already uniquely determined by the values of $v_1^*, \dots, v_n^*$. In other words, for all $v, w \in V$ we have: If $v_i^*(v) = v_i^*(w)$ for all $i$, then $v = w$. For the same reason as with $K^n$, none of the $v_i^*$ can be omitted: If the $j$-th measurement function $v_j^*$, $j \in \{1, \dots, n\}$, is missing, then any two vectors for which only the $j$-th coordinate with respect to $B$ differs can no longer be distinguished.

Question: Which two vectors can you choose here?

We choose an example analogous to $K^n$ and set $v = v_j$ and $w = 0$. Then $v_i^*(v) = 0 = v_i^*(w)$ holds for all $i \neq j$, but nevertheless $v \neq w$. If the $j$-th measurement function is omitted, then vectors are no longer uniquely determined by the function values of the remaining $v_i^*$.

The measurement functions form a basis


Let $V$ be a vector space with a fixed basis $B = \{v_1, \dots, v_n\}$ and let the $v_i^*$ be defined as above. If you want to determine vectors uniquely using the values of $v_1^*, \dots, v_n^*$, you cannot do without any of the $v_i^*$. The reason for this is that the result of a measurement $v_i^*(v)$ (the $i$-th coordinate of $v$ with respect to $B$) cannot be deduced from the other measurements. That means, we cannot represent any of the measurement functions $v_i^*$ as a linear combination of the other $v_j^*$ ($j \neq i$). In other words, the measurement functions $v_1^*, \dots, v_n^*$ are linearly independent.

On the other hand, the values $v_1^*(v), \dots, v_n^*(v)$ already tell us everything there is to know about a vector $v$: its coordinates with respect to the selected basis $B$. Can all other measurement functions from $V^*$ therefore be combined from $v_1^*, \dots, v_n^*$? Any measurement function from $V^*$ is already uniquely determined by its values on the basis vectors according to the principle of linear continuation. For $f \in V^*$, let $\mu_i := f(v_i)$ be these values. Furthermore, $v_i^*(v_j) = 1$ for $i = j$ and $v_i^*(v_j) = 0$ for $i \neq j$ apply for all $i, j$. By inserting the $v_j$ we obtain that $f$ and $\mu_1 v_1^* + \dots + \mu_n v_n^*$ assume the same values on the basis vectors. According to the principle of linear continuation, the two linear maps are therefore identical. Thus, every $f \in V^*$ can be written as a linear combination of $v_1^*, \dots, v_n^*$. In other words, the measurement functions form a generating system of $V^*$.

Hence, $\{v_1^*, \dots, v_n^*\}$ is a basis of the dual space $V^*$ and we can prove the following theorem:

Theorem (Existence of a dual basis)

Let $V$ be a finite dimensional vector space and $B = \{v_1, \dots, v_n\}$ a basis of $V$. Then there exists a unique basis $\{v_1^*, \dots, v_n^*\}$ of $V^*$ such that $v_i^*(v_j) = 1$ for $i = j$ and $v_i^*(v_j) = 0$ for $i \neq j$ is true for all $i, j \in \{1, \dots, n\}$.

Proof (Existence of a dual basis)

Proof step: Existence and uniqueness of the $v_i^*$.

According to the principle of linear continuation, the linear maps $v_i^*$ exist and are uniquely determined by their values on the basis vectors of $V$.

Proof step: The $v_i^*$ are linearly independent.

Let $\lambda_1, \dots, \lambda_n \in K$ with $\lambda_1 v_1^* + \dots + \lambda_n v_n^* = 0$. Let further $j \in \{1, \dots, n\}$. Because $v_j^*(v_j) = 1$ and $v_i^*(v_j) = 0$ for $i \neq j$, we obtain the following by plugging in $v_j$: $0 = (\lambda_1 v_1^* + \dots + \lambda_n v_n^*)(v_j) = \lambda_j$. Because $j$ was arbitrary, we conclude $\lambda_1 = \dots = \lambda_n = 0$.

Proof step: The $v_i^*$ form a generating system.

Let $f \in V^*$ be arbitrary. For $i \in \{1, \dots, n\}$ we define $\mu_i := f(v_i)$ and set $g := \mu_1 v_1^* + \dots + \mu_n v_n^*$. Then, proceeding as in the proof of linear independence, we obtain $g(v_j) = \mu_j = f(v_j)$ for each $j$. Because $f(v_j) = g(v_j)$ applies to all $j$ and because a linear map is already uniquely determined by the images of its basis vectors, we have $f = g$. The $v_i^*$ therefore form a generating system.
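The formula from this proof step, namely that every $f \in V^*$ equals $f(v_1)\, v_1^* + \dots + f(v_n)\, v_n^*$, can be verified numerically in $\mathbb{R}^3$, where the dual basis vectors are the rows of the inverse of the basis matrix. The basis and the functional below are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative choices: a basis B of R^3 (columns) and a functional f (row).
B = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])
f = np.array([[3.0, -1.0, 2.0]])       # f(x, y, z) = 3x - y + 2z

# Dual basis functionals: the rows of B^{-1}; they satisfy v_i*(v_j) = 1 if i = j, else 0.
dual = np.linalg.inv(B)

# The coefficients of f in the dual basis are the values f(v_1), f(v_2), f(v_3).
coefficients = f @ B                    # row of values of f on the basis vectors
# Then f = sum_j f(v_j) * v_j*, i.e. the row f is recovered from the dual rows:
reconstructed = coefficients @ dual
print(coefficients)
assert np.allclose(reconstructed, f)
```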

We call the uniquely determined basis from the theorem the dual basis with respect to $B$ and denote its basis vectors by $v_1^*, \dots, v_n^*$.

Definition (Dual basis)

Let $V$ be a finite dimensional vector space with basis $B = \{v_1, \dots, v_n\}$. The uniquely determined basis $B^* = \{v_1^*, \dots, v_n^*\}$ of $V^*$ with $v_i^*(v_j) = 1$ for $i = j$ and $v_i^*(v_j) = 0$ for $i \neq j$ is called the dual basis of $B$.

Warning

Note that $B^*$ depends on the basis $B$ chosen for $V$. Furthermore, you cannot "dualize" individual vectors from $V$, but only entire bases.

What happens in infinite dimensions?


Above, we only considered the case $\dim V < \infty$. Can we proceed analogously if $V$ is infinite dimensional? To define the measurement functions, we must first choose a basis of $V$. Let $B = \{v_i \mid i \in I\}$ be a basis of $V$, where $I$ is an (infinite) index set. The principle of linear continuation also applies in infinite dimensions: For given values $\mu_i \in K$, $i \in I$, there is exactly one linear map $f\colon V \to K$ with $f(v_i) = \mu_i$ for all $i \in I$. Just as in the finite-dimensional case, we can therefore define the maps $v_i^*$ for $i \in I$ using the rule $v_i^*(v_j) = 1$ for $i = j$ and $v_i^*(v_j) = 0$ for $i \neq j$. We can then show that $\{v_i^* \mid i \in I\}$ is also a linearly independent subset of $V^*$ in infinite dimensions. The proof is analogous to the proof of linear independence in the theorem on the dual basis.

However, in infinitely many dimensions, $\{v_i^* \mid i \in I\}$ cannot be a generating system of $V^*$: One can consider the function $f \in V^*$ which assumes the value 1 on all basis vectors, i.e. $f(v_i) = 1$ for all $i \in I$. This function cannot be represented as a finite linear combination of the $v_i^*$, because any such finite linear combination vanishes on all but finitely many of the basis vectors.

So in infinitely many dimensions, the "dual basis" is not a basis of the dual space.

Exercises



Exercise (Determining the dual basis)

  1. Consider the basis of . Determine the basis which is dual to , that is, for determine the explicit form of the function
  2. Consider the basis of . Determine the basis dual to , i.e. for determine the explicit form of the function
  3. Consider the basis of . Determine the basis dual to , i.e. for determine the explicit form of the function

Solution (Determining the dual basis)

Solution sub-exercise 1:

Set , and . We are looking for linear maps whose values we only know on the basis vectors. We must define them for general vectors.

By definition of the dual basis, we already know the function values of each on the basis vectors in . Applying the principle of linear continuation, we can determine all function values: Because is a basis, there are coordinates for each such that . With the help of linearity we get We know the values by definition of the dual basis. We therefore only need to determine the coordinates of any vector with respect to . Then we can write out the .

Proof step: Determining the coordinates of any vector with respect to

We want to determine the coordinates with respect to of any vector . Let . We write The coordinates of with respect to the standard basis are therefore simply , and . If we write for the coordinate map, this means We can convert these into coordinates with respect to by multiplying the coordinate vector of from the left by the basis transition matrix that implements the transfer from to . Then In order to determine the basis transition matrix , we calculate the coordinates of the standard basis vectors with respect to . These form the columns of .

We start with : We are looking for such that For this we have to solve the linear system which yields , and . In the same way, we determine the coordinates of with respect to and the coordinates of with respect to . Then

Note: We could also have solved all three systems at once by summarizing the "right-hand sides" column by column, i.e. by taking the inverse of . This makes sense, because this matrix is the basis transition matrix from to the standard basis. Its inverse is therefore the matrix that transitions from to .

The coordinates of with respect to are therefore

Of course, it is also okay to guess the coordinates of with respect to by looking closely without solving systems of equations.

Proof step: Result for

We can now write any as Using linearity of and the definition of the dual basis, we obtain In the same way, we calculate and . In total, we have therefore determined the three basis vectors of the dual basis:

Solution sub-exercise 2:

We know what the map does with the basis vectors . To find out how acts on a general vector , we can express it in the basis via linear combination: This allows us to calculate the desired functions. For we have For we get So the function of is For we get In summary, we obtain the following functions

Solution sub-exercise 3:

We know the values of each when applied to the basis vectors and want to find the value for any matrix . To do this, we express as a linear combination of : Using the definition of the dual basis and the linearity of , we can now specify the solution: We have for and , so the following applies


Exercise (Dual basis and hyperplanes)

Let $V$ be an $n$-dimensional $K$-vector space.

  1. Let $f \in V^*$ with $f \neq 0$. Show that $\dim \ker(f) = n - 1$ holds.
  2. Let $U$ be an $(n-1)$-dimensional subspace of $V$. Show that there is an element $f \in V^*$ with $\ker(f) = U$.
  3. Assuming that $K$ has more than two elements, is it true that the $f$ from sub-exercise 2 is uniquely determined by the subspace $U$?

An $(n-1)$-dimensional subspace of an $n$-dimensional vector space $V$ is also called a hyperplane in $V$. For example, the hyperplanes in $\mathbb{R}^3$ are exactly the planes through the origin. The first part of the exercise thus shows that the kernel of a non-zero element in the dual space is a hyperplane in $V$.

Solution (Dual basis and hyperplanes)

Solution sub-exercise 1:

We can use the dimension formula to relate the dimension of the kernel to the dimension of the image of $f$: $\dim \ker(f) = \dim V - \dim \operatorname{im}(f) = n - \dim \operatorname{im}(f)$.

So we have shifted our problem to the calculation of $\dim \operatorname{im}(f)$. Now $\operatorname{im}(f)$ is a subspace of $K$, that is, $\dim \operatorname{im}(f) \leq \dim K = 1$. This means that the dimension of $\operatorname{im}(f)$ is either $0$ or $1$.

We know that $f \neq 0$, so there is a $v \in V$ with $f(v) \neq 0$. This means that $\operatorname{im}(f) \neq \{0\}$ and the dimension of $\operatorname{im}(f)$ cannot be $0$. Therefore, $\dim \operatorname{im}(f) = 1$ and we get $\dim \ker(f) = n - 1$.

Solution sub-exercise 2:

According to the principle of linear continuation, a linear mapping is determined by what it does on a basis. To be able to use this principle, we first choose a basis $\{u_1, \dots, u_{n-1}\}$ of $U$. The basis completion theorem then provides us with a vector $v \in V$ such that $\{u_1, \dots, u_{n-1}, v\}$ is a basis of $V$.

According to the principle of linear continuation, we can then define a candidate for the linear map $f$ by saying what happens on a basis of $V$. The vectors $u_1, \dots, u_{n-1}$ are elements of $U$. Since $U$ is to be the kernel of $f$, we must require $f(u_i) = 0$ for $i = 1, \dots, n-1$. The last basis vector $v$ is not in $U$. This means that $v$ must not lie in the kernel of $f$. For example, we can demand $f(v) = 1$. To summarize, we define $f$ as the linear map with $f(u_1) = \dots = f(u_{n-1}) = 0$ and $f(v) = 1$.

Since $U$ is generated by $u_1, \dots, u_{n-1}$, we have $U \subseteq \ker(f)$. We therefore only have to show that $\ker(f) \subseteq U$. For this, let $w \in \ker(f)$. Because $\{u_1, \dots, u_{n-1}, v\}$ is a basis of $V$, we find $\lambda_1, \dots, \lambda_{n-1}, \mu \in K$ with $w = \lambda_1 u_1 + \dots + \lambda_{n-1} u_{n-1} + \mu v$. Now we know that $0 = f(w) = \lambda_1 f(u_1) + \dots + \lambda_{n-1} f(u_{n-1}) + \mu f(v) = \mu$.

Hence $\mu = 0$ and $w = \lambda_1 u_1 + \dots + \lambda_{n-1} u_{n-1} \in U$. Therefore, we have $\ker(f) = U$.

Solution sub-exercise 3:

The mapping $f$ is not unique: We know that $U \neq V$ because $\dim U = n - 1 < n = \dim V$. Therefore, there exists a $v \in V$ with $v \notin U$. Because $K$ has more than two elements, there is an element $\lambda \in K$ with $\lambda \neq 0$ and $\lambda \neq 1$. Thus we can consider the linear map $g := \lambda f$. This map has the same kernel as $f$, because $g(w) = \lambda f(w) = 0$ holds if and only if $f(w) = 0$. This is the case since $\lambda \neq 0$.

Furthermore, $g \neq f$, because $g(v) = \lambda f(v) \neq f(v)$: we have $f(v) \neq 0$ since $v \notin U = \ker(f)$, and $\lambda \neq 1$. The linear map from the second part is therefore not unique.

In the last task, we required that $K$ has more than two elements because we needed an element in the proof that is neither $0$ nor $1$. The field $\mathbb{F}_2$ with two elements only consists of the elements $0$ and $1$. This means that if we want to construct a linear map $f\colon V \to \mathbb{F}_2$ that has an $(n-1)$-dimensional subspace $U$ as its kernel, then we must define it as $f(w) = 0$ for $w \in U$ and $f(w) = 1$ for $w \notin U$. This map is linear and it is the only way to obtain a linear map with kernel $U$. Thus, for $K = \mathbb{F}_2$ we arrive at a different result in the last sub-exercise: The map $f$ is then unique.


Exercise

Consider the basis of .

  1. For $i = 1, 2, 3$, determine the dual basis vectors $v_i^*$ with $v_i^*(v_j) = 1$ for $i = j$ and $v_i^*(v_j) = 0$ for $i \neq j$.
  2. Determine the kernel $\ker(v_i^*)$ and draw it in $\mathbb{R}^3$ for $i = 1, 2, 3$.

Solution

Solution sub-exercise 1:

The representing matrix of a linear map $f\colon \mathbb{R}^3 \to \mathbb{R}$ with respect to the canonical bases of $\mathbb{R}^3$ and of $\mathbb{R}$ is the uniquely determined $(1 \times 3)$ matrix $A$ with $f(v) = A \cdot v$ for all $v \in \mathbb{R}^3$.

We are looking for explicit formulas for the linear maps $v_1^*, v_2^*, v_3^*$. That means, we determine the three corresponding representing matrices with respect to the canonical bases. By definition of the dual basis, each $v_i^*$ should take the value $1$ on the $i$-th basis vector and the value $0$ on the other two basis vectors. If we summarize these equations in matrix form, the stacked representing matrices of $v_1^*, v_2^*, v_3^*$, multiplied by the matrix that has the basis vectors as columns, must give the identity matrix. We must therefore determine the inverse of the matrix which has the basis vectors in as columns.

Computing this inverse and reading off its rows gives the representing matrices of the desired dual basis vectors $v_1^*$, $v_2^*$ and $v_3^*$.
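The inversion step can be reproduced with NumPy. Since the exercise's basis vectors are not repeated here, the sketch uses a stand-in basis of $\mathbb{R}^3$ (an assumption for illustration only); the rows of the inverse are then the representing matrices of the dual basis.

```python
import numpy as np

# Stand-in basis of R^3 (not the basis from the exercise);
# the basis vectors are placed as the columns of the matrix.
basis_matrix = np.array([[1.0, 1.0, 1.0],
                         [0.0, 1.0, 1.0],
                         [0.0, 0.0, 1.0]])

# The rows of the inverse are the representing (1 x 3) matrices of the dual basis vectors.
dual_rows = np.linalg.inv(basis_matrix)
print(dual_rows)

# Check the defining property: v_i*(v_j) = 1 if i = j and 0 otherwise.
assert np.allclose(dual_rows @ basis_matrix, np.eye(3))
```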

Solution sub-exercise 2:

From the previous exercise we know the kernels of $v_1^*$, $v_2^*$ and $v_3^*$. Plotted in $\mathbb{R}^3$, each kernel is a plane spanned by two vectors.

Instead of using the previous exercise, we can also calculate the kernels of the matrices :

Proof step:

The kernel of contains all with , i.e., with . So the following holds: Note that , so the result for the kernel is the same as in the previous exercise.

Proof step:

The kernel of contains all with , i.e., with . So the following holds: Here, also , so the result is the same as in the previous exercise.

Proof step:

The kernel of contains all with , i.e. with . So the following holds: Because , this agrees with the result determined in the previous exercise.
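Each of these kernel computations can also be delegated to SciPy's `null_space`, which returns an orthonormal basis of the kernel of a matrix. The row below is a stand-in functional (an illustrative assumption), not the one from the exercise.

```python
import numpy as np
from scipy.linalg import null_space

# A stand-in functional on R^3, given by a (1 x 3) row (not the one from the exercise).
row = np.array([[1.0, -1.0, 2.0]])

# Its kernel is a plane through the origin; null_space returns an orthonormal basis of it.
kernel_basis = null_space(row)
print(kernel_basis.shape)        # (3, 2): two basis vectors, as expected for a hyperplane
assert np.allclose(row @ kernel_basis, 0)
```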

Exercise (Dual map)

Let $f\colon V \to W$ be a linear map between $K$-vector spaces. We define the map $f^*\colon W^* \to V^*,\ \varphi \mapsto \varphi \circ f$.

  1. Show that $f^*$ is linear.
  2. Show: $(\operatorname{id}_V)^* = \operatorname{id}_{V^*}$ and $(g \circ f)^* = f^* \circ g^*$ for linear maps $f\colon V \to W$ and $g\colon W \to X$.
  3. Show: If $f$ is surjective, then $f^*$ is injective.
  4. Show: If $f$ is injective, then $f^*$ is surjective.
  5. Show: If $f$ is bijective, then $f^*$ is bijective and the inverse is given by $(f^*)^{-1} = (f^{-1})^*$.

$f^*$ is called the dual mapping with respect to $f$. By definition, the dual map therefore receives linear mappings from $W$ to $K$ as input and turns them into linear mappings from $V$ to $K$. This is achieved by precomposition with $f$. A mapping $\varphi\colon W \to K$ therefore becomes $\varphi \circ f\colon V \to K$. In words, $f^*(\varphi)$ can be described as "execute $f$ first, then $\varphi$".
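Precomposition can be illustrated directly with plain functions. The maps in the following sketch (a linear map $\mathbb{R}^2 \to \mathbb{R}^3$ and a functional on $\mathbb{R}^3$) are arbitrary illustrative choices, not taken from the exercise.

```python
# A small sketch of the dual map as precomposition, using toy maps on tuples.

def f(v):
    """A linear map f: R^2 -> R^3, here f(x, y) = (x, y, x + y)."""
    x, y = v
    return (x, y, x + y)

def dual_map(f):
    """The dual map f*: it sends a functional phi on the codomain to phi after f."""
    return lambda phi: (lambda v: phi(f(v)))

# A functional phi on R^3, e.g. phi(a, b, c) = a + 2b - c.
phi = lambda w: w[0] + 2 * w[1] - w[2]

f_star = dual_map(f)
pullback = f_star(phi)           # this is phi composed with f, a functional on R^2

print(pullback((1.0, 3.0)))      # phi(1, 3, 4) = 1 + 6 - 4 = 3.0
```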

Solution (Dual map)

Solution sub-exercise 1:

For more clarity in the proof, we write $+_{V^*}$ and $+_{W^*}$ for the addition of linear maps in $V^*$ and $W^*$, and $+_K$ for the addition in the vector space $K$. We also write $\cdot_{V^*}$ and $\cdot_{W^*}$ for the scalar multiplication in $V^*$ and $W^*$, and $\cdot_K$ for the scalar multiplication in $K$.

Let $\varphi, \psi \in W^*$ and $\lambda \in K$. We have to show that $f^*(\varphi +_{W^*} \psi) = f^*(\varphi) +_{V^*} f^*(\psi)$ and $f^*(\lambda \cdot_{W^*} \psi) = \lambda \cdot_{V^*} f^*(\psi)$. We must therefore prove the equality of elements in $V^*$, i.e., of maps $V \to K$. To do this, we show $f^*(\varphi +_{W^*} \psi)(v) = \big(f^*(\varphi) +_{V^*} f^*(\psi)\big)(v)$ and $f^*(\lambda \cdot_{W^*} \psi)(v) = \big(\lambda \cdot_{V^*} f^*(\psi)\big)(v)$ for all $v \in V$.

Proof step: $f^*(\varphi +_{W^*} \psi) = f^*(\varphi) +_{V^*} f^*(\psi)$

Let $v \in V$. Then $f^*(\varphi +_{W^*} \psi)(v) = (\varphi +_{W^*} \psi)(f(v)) = \varphi(f(v)) +_K \psi(f(v)) = f^*(\varphi)(v) +_K f^*(\psi)(v) = \big(f^*(\varphi) +_{V^*} f^*(\psi)\big)(v)$. Because $v$ was arbitrary, this shows the equality of the maps $f^*(\varphi +_{W^*} \psi)$ and $f^*(\varphi) +_{V^*} f^*(\psi)$.

Proof step: $f^*(\lambda \cdot_{W^*} \psi) = \lambda \cdot_{V^*} f^*(\psi)$

Let $v \in V$. Then $f^*(\lambda \cdot_{W^*} \psi)(v) = (\lambda \cdot_{W^*} \psi)(f(v)) = \lambda \cdot_K \psi(f(v)) = \lambda \cdot_K f^*(\psi)(v) = \big(\lambda \cdot_{V^*} f^*(\psi)\big)(v)$. Because $v$ was arbitrary, this shows the equality of the maps $f^*(\lambda \cdot_{W^*} \psi)$ and $\lambda \cdot_{V^*} f^*(\psi)$.

Solution sub-exercise 2:

We show $(\operatorname{id}_V)^*(\varphi) = \varphi$ for all $\varphi \in V^*$. It then follows that $(\operatorname{id}_V)^*$ is the identity on $V^*$. Let $\varphi \in V^*$. By definition of the dual map, we have $(\operatorname{id}_V)^*(\varphi) = \varphi \circ \operatorname{id}_V = \varphi$. Since $\varphi$ was arbitrary, the statement is shown.

Now let $f\colon V \to W$ and $g\colon W \to X$. Then $g \circ f\colon V \to X$ applies, i.e. $(g \circ f)^*\colon X^* \to V^*$. Furthermore, $g^*\colon X^* \to W^*$ and $f^*\colon W^* \to V^*$ and therefore $f^* \circ g^*\colon X^* \to V^*$. To show the equality of the maps $(g \circ f)^*$ and $f^* \circ g^*$, we show that $(g \circ f)^*(\varphi) = (f^* \circ g^*)(\varphi)$ holds for all $\varphi \in X^*$. So if $\varphi \in X^*$, then we get $(g \circ f)^*(\varphi) = \varphi \circ (g \circ f) = (\varphi \circ g) \circ f = f^*(\varphi \circ g) = f^*(g^*(\varphi)) = (f^* \circ g^*)(\varphi)$. Because $\varphi$ was arbitrary, the statement is shown.

Solution sub-exercise 3:

Let $f$ be surjective. We want to show that $f^*$ is injective. Due to the linearity of $f^*$, it is sufficient to show that $\ker(f^*) = \{0\}$. Let $\varphi \in W^*$ with $f^*(\varphi) = 0$. This means that $\varphi$ maps from $W$ to $K$ and $\varphi \circ f$ is the zero mapping from $V$ to $K$. We want to conclude that $\varphi$ is the zero mapping in $W^*$, i.e. that $\varphi(w) = 0$ for all $w \in W$. For this, let $w \in W$ be arbitrary. Because $f$ is surjective, there exists a $v \in V$ with $f(v) = w$. It follows that $\varphi(w) = \varphi(f(v)) = \big(f^*(\varphi)\big)(v) = 0$. Because $w$ was arbitrary, we conclude $\varphi = 0$.

Solution sub-exercise 4:

Let $f$ be injective. We want to show that $f^*$ is surjective. So let $\psi \in V^*$ be arbitrary. This means that $\psi$ is a linear map from $V$ to $K$. We want to define a map $\varphi$ from $W$ to $K$ such that $f^*(\varphi) = \psi$.

Because $f$ is injective, the restriction of $f$ to the image of $f$ is an isomorphism. We denote this restriction by $\tilde{f}\colon V \to \operatorname{im}(f)$. Then $\tilde{f}^{-1}\colon \operatorname{im}(f) \to V$ exists and $\psi \circ \tilde{f}^{-1}$ is a linear map from $\operatorname{im}(f)$ to $K$. Because $\psi \circ \tilde{f}^{-1}$ is only defined on $\operatorname{im}(f)$, we extend it to a linear map $\varphi\colon W \to K$ (for example, by setting it to zero on a complement of $\operatorname{im}(f)$ in $W$) and obtain $f^*(\varphi) = \varphi \circ f = \psi \circ \tilde{f}^{-1} \circ f = \psi$. Because $\psi$ was arbitrary, the surjectivity of $f^*$ is shown.

Solution sub-exercise 5:

If $f$ is bijective, then it follows from the previous two sub-exercises that $f^*$ is also bijective. We calculate that $(f^{-1})^*$ is the inverse of $f^*$: From sub-exercise 2 we get $f^* \circ (f^{-1})^* = (f^{-1} \circ f)^* = (\operatorname{id}_V)^* = \operatorname{id}_{V^*}$. Analogously, one can show $(f^{-1})^* \circ f^* = (f \circ f^{-1})^* = (\operatorname{id}_W)^* = \operatorname{id}_{W^*}$.
