Endomorphism and Automorphism – "Math for Non-Geeks"
An endomorphism is a linear deformation of a vector space V. Formally, an endomorphism is a linear map f: V → V, i.e., a linear map from V to itself. A bijective endomorphism is called an automorphism. Intuitively, an automorphism is a linear deformation that can be undone.
Derivation
We already know linear maps. These are mappings between two vector spaces which are compatible with the (linear) vector space structure. We now examine a few examples of linear maps that we have already learned about in previous articles.
Examples in ℝ²
Stretching in x-direction
First, we consider the stretching of a vector in the plane by a fixed factor λ in the x-direction. Our mapping is thus f: ℝ² → ℝ², (x, y) ↦ (λx, y). One can easily verify that f is a linear map. We can illustrate f as follows: We place a checkerboard pattern in the plane and apply f to this checkerboard pattern.

The result is that the boxes are stretched by this factor in the x-direction.
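To make the example concrete, the stretching map and its linearity can be sketched in Python; the factor 2 and the sample vectors are arbitrary choices for illustration:

```python
# Stretching in the x-direction; the factor 2 is an arbitrary illustrative choice.
FACTOR = 2

def stretch(v):
    """Apply (x, y) -> (FACTOR * x, y)."""
    x, y = v
    return (FACTOR * x, y)

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

# Linearity: f(v + w) = f(v) + f(w) and f(c * v) = c * f(v).
v, w, c = (1, 2), (-3, 5), 4
assert stretch(add(v, w)) == add(stretch(v), stretch(w))
assert stretch(scale(c, v)) == scale(c, stretch(v))
```

The checks only test a few sample vectors, of course; the general proof of linearity works exactly as in the text.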
Rotation around the origin
We now consider a rotation by an angle α counter-clockwise, with the origin as center of rotation. This is a mapping f: ℝ² → ℝ² which assigns to each vector v the vector f(v) obtained by rotating v by the angle α.
In the introductory article on linear maps we have seen that rotations about the origin are linear. We can visualize f as in the first example by applying the mapping to the checkerboard pattern. The individual squares then retain their shape, but they are rotated.
Projection onto a line
At last, we consider a projection map g: ℝ² → ℝ². The map g "presses" vectors onto a straight line through the origin. You can easily check that g is a linear map. We also apply this linear map to the checkerboard pattern to visualize it.

The entire grid is "flattened" onto the straight line.
Linear deformations of an arbitrary vector space
In all the above examples, we were able to visualize the linear maps as distortions of the checkerboard pattern in ℝ². This was possible because all of the above functions map from ℝ² into itself. We can illustrate any such linear map as a deformation of a checkerboard. The deformation of the checkerboard shows us how the map acts on the standard basis vectors e₁ and e₂ of ℝ² and on integer multiples of them.
Any linear map f: ℝ² → ℝ² is a linear deformation of the space ℝ². Let us generalize this idea to general vector spaces V. We can think of linear maps from V to V as linear deformations or transformations of the vector space V. In contrast, a linear map f: V → W into a different vector space W is a transport of the vector space V to W. We give a separate name to a linear map which deforms the vector space, i.e., which maps from V to V: such a linear map is called an endomorphism. So endomorphisms are just those linear maps for which the domain of definition and the target space coincide.
Reversible deformations
In the examples in ℝ² we have seen that some deformations preserve the space and others "flatten" it in a certain sense. The mappings that preserve space can be undone. When the space is flattened, the map cannot be undone because information is lost. For example, in the above linear map "projection onto a line", the information about the component of the original vector perpendicular to the line is lost. It is not possible to recover the vector after applying the transformation. So there are deformations of space that can be undone, and some that cannot. One can undo a deformation exactly if the associated mapping is bijective, i.e., invertible. This gives us the definition of a reversible deformation of space, i.e., an invertible endomorphism. Such a mapping is called an automorphism.
Definition
Definition (Endomorphism and automorphism)
Let V be a K-vector space. A linear map f: V → V that maps V to itself is called an endomorphism. We denote the set of all endomorphisms of V by End(V).
We call a bijective endomorphism an automorphism. The set of all automorphisms of V is denoted by Aut(V).
Hint
Every automorphism is a bijective linear map and therefore also an isomorphism. But not every isomorphism is an automorphism. This is because isomorphisms can also map two different vector spaces to each other.
Examples
Examples in ℝ²
Reflection
We consider the linear map f: ℝ² → ℝ², (x, y) ↦ (x, −y). Since it maps the vector space ℝ² into itself, f is an endomorphism. The mapping preserves (1, 0) and sends (0, 1) to (0, −1). Thus, we can think of f as a reflection across the x-axis. We can undo a reflection by reflecting a second time. This means f is its own inverse mapping. Formally, this is denoted f ∘ f = id or f⁻¹ = f. Such a mapping is also called "self-inverse". Because f has an inverse, that is, it is invertible, it follows that f is bijective. Thus, f is also an automorphism.
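The self-inverse property can be sketched in Python, using the concrete reflection formula (x, y) ↦ (x, −y) from above; the sample vectors are arbitrary:

```python
def reflect(v):
    """Reflection across the x-axis: (x, y) -> (x, -y)."""
    x, y = v
    return (x, -y)

# Self-inverse: applying the reflection twice returns the original vector.
for v in [(1, 0), (0, 1), (3, -4)]:
    assert reflect(reflect(v)) == v
```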
Rotation by 90°
Next we consider the endomorphism f: ℝ² → ℝ², (x, y) ↦ (−y, x). This is a counter-clockwise rotation by 90 degrees. To convince yourself that f really is such a rotation, you may calculate how f acts on the standard basis vectors (1, 0) and (0, 1). If it is such a rotation on these two vectors, then by linearity, f must be such a rotation everywhere. A short calculation gives f((1, 0)) = (0, 1), as well as f((0, 1)) = (−1, 0), which is exactly the desired rotation. Again, we can easily specify an inverse by "turning back", i.e., rotating clockwise by 90 degrees. This rotation is given by g: ℝ² → ℝ², (x, y) ↦ (y, −x). We briefly check that g is indeed the inverse of f: (g ∘ f)((x, y)) = g((−y, x)) = (x, y) and (f ∘ g)((x, y)) = f((y, −x)) = (x, y) for all (x, y) ∈ ℝ². So g = f⁻¹ and f is also an automorphism in this example.
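A small Python sketch confirms the computation on the basis vectors and the inverse; the sample vectors are arbitrary:

```python
def rot90(v):
    """Counter-clockwise rotation by 90 degrees: (x, y) -> (-y, x)."""
    x, y = v
    return (-y, x)

def rot90_cw(v):
    """Clockwise rotation by 90 degrees: (x, y) -> (y, -x)."""
    x, y = v
    return (y, -x)

# Action on the standard basis vectors:
assert rot90((1, 0)) == (0, 1)
assert rot90((0, 1)) == (-1, 0)

# The clockwise rotation undoes the counter-clockwise one, and vice versa.
for v in [(1, 0), (0, 1), (2, 7)]:
    assert rot90_cw(rot90(v)) == v
    assert rot90(rot90_cw(v)) == v
```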
Shears
Let f: ℝ² → ℝ², (x, y) ↦ (x + y, y). The following animation shows how this mapping deforms the plane.

The transformation looks reversible, that is, it looks like an automorphism. We can check this by showing that f is both injective and surjective.
To show injectivity, let us look at the kernel of f, i.e., the set ker(f) = {(x, y) ∈ ℝ² : f((x, y)) = (0, 0)}. For a vector (x, y) in the kernel, (x + y, y) = (0, 0) holds. From this we directly get y = 0 and therefore also x = 0. Thus, the kernel consists only of the zero vector and hence f is injective.
To show surjectivity, we take any (a, b) ∈ ℝ² and find a suitable preimage. That means, we are looking for some (x, y) ∈ ℝ² with f((x, y)) = (a, b). It is directly clear that y = b must hold. Furthermore, x + y = a must be true. This can be transformed to x = a − b. So (a − b, b) is a preimage of (a, b). Since (a, b) was arbitrary, f is surjective.
Linear maps of the form (x, y) ↦ (x + λy, y) with λ ∈ ℝ are called shears. You can show as an exercise that a shear is always an automorphism, no matter what λ is.
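The surjectivity argument even gives an explicit inverse of a shear: read off y = b, then x = a − λb. A Python sketch of this, with a few arbitrary sample values of λ:

```python
def shear(v, lam=1):
    """Shear (x, y) -> (x + lam*y, y); lam = 1 matches the example above."""
    x, y = v
    return (x + lam * y, y)

def shear_inverse(v, lam=1):
    """Candidate inverse from the surjectivity argument: y = b, x = a - lam*b."""
    a, b = v
    return (a - lam * b, b)

# The preimage formula really inverts the shear, for any choice of lam.
for lam in [1, -2, 5]:
    for v in [(0, 0), (3, 4), (-1, 2)]:
        assert shear_inverse(shear(v, lam), lam) == v
        assert shear(shear_inverse(v, lam), lam) == v
```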
Flattening to the x-axis
Let us now consider the mapping f: ℝ² → ℝ², (x, y) ↦ (x, 0). This is an endomorphism from ℝ² to ℝ² that maps every point of the plane to a point on the x-axis. So we can think of f as "flattening" the 2-dimensional plane onto the 1-dimensional x-axis.
Since f maps the points in ℝ² exclusively onto the x-axis, f is not a surjective mapping. Nor is it injective, because different vectors are mapped to the same image point, e.g., f((x, 0)) = (x, 0) = f((x, 1)) for every x ∈ ℝ. So f is not an automorphism.

Example in ℝ³
Let us now consider an example in ℝ³. For this we look at the linear map f: ℝ³ → ℝ³, (x, y, z) ↦ (x + y, z, 0). Because f maps the vector space ℝ³ back to ℝ³, the mapping is an endomorphism.
We now want to check whether f is also an automorphism. To do this, we need to check surjectivity and injectivity. For injectivity, we consider the kernel of f, that is, the set ker(f) = {(x, y, z) ∈ ℝ³ : f((x, y, z)) = (0, 0, 0)}. Thus, for vectors from the kernel of f, (x + y, z, 0) = (0, 0, 0) holds. From this we can directly conclude that z = 0 and x + y = 0, so y = −x, must hold. We thus see that the kernel of f contains not only the zero vector, but also the set of all vectors (x, −x, 0) with x ∈ ℝ. Hence, f is not injective and therefore cannot be bijective. In particular, f is not an automorphism.
Visually, f compresses vectors onto the plane {(a, b, 0) : a, b ∈ ℝ}. Thus, information is lost. Given a vector (a, b, 0) in the image, it is no longer possible to say in an unambiguous way from which vector it arose under the mapping f, since there are very many ways to represent a as the sum of two numbers x + y. For example, 2 = 1 + 1 = 0 + 2 = 3 + (−1).
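A Python sketch of this loss of information; the map flatten3 below is one concrete flattening of this kind, chosen for illustration:

```python
def flatten3(v):
    """Illustrative flattening of R^3: (x, y, z) -> (x + y, z, 0)."""
    x, y, z = v
    return (x + y, z, 0)

# Every vector of the form (x, -x, 0) lies in the kernel ...
for x in [-2, -1, 0, 1, 5]:
    assert flatten3((x, -x, 0)) == (0, 0, 0)

# ... so different inputs collide: x and y cannot be recovered separately.
assert flatten3((1, 1, 0)) == flatten3((0, 2, 0)) == (2, 0, 0)
```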
Example in sequence space
There are also endomorphisms on vector spaces other than ℝ² and ℝ³. For instance, we may take an arbitrary field K and consider, as a vector space over it, the sequence space K^ℕ of all sequences (a₁, a₂, a₃, …) with members aᵢ ∈ K. Now, we take the mapping f: K^ℕ → K^ℕ, where f((a₁, a₂, a₃, a₄, …)) = (a₂, a₁, a₄, a₃, …). If we write out the first sequence members, the situation looks like this: a₁ and a₂ change places, a₃ and a₄ change places, and so on. Thus, the mapping f interchanges even and odd sequence members. We briefly justify why f is linear. Addition and scalar multiplication in the sequence space are understood component-wise, i.e., for a = (a₁, a₂, …) ∈ K^ℕ, b = (b₁, b₂, …) ∈ K^ℕ and λ ∈ K we have a + b = (a₁ + b₁, a₂ + b₂, …) and λ · a = (λa₁, λa₂, …). Since f only swaps the order of the components, f is linear. We can also check linearity of f explicitly.
Question: How do you directly show that f is linear?
We prove additivity and homogeneity of f. Let a = (a₁, a₂, a₃, a₄, …) ∈ K^ℕ, b = (b₁, b₂, b₃, b₄, …) ∈ K^ℕ and λ ∈ K. Then f(a + b) = f((a₁ + b₁, a₂ + b₂, a₃ + b₃, a₄ + b₄, …)) = (a₂ + b₂, a₁ + b₁, a₄ + b₄, a₃ + b₃, …) = (a₂, a₁, a₄, a₃, …) + (b₂, b₁, b₄, b₃, …) = f(a) + f(b) and f(λ · a) = f((λa₁, λa₂, λa₃, λa₄, …)) = (λa₂, λa₁, λa₄, λa₃, …) = λ · (a₂, a₁, a₄, a₃, …) = λ · f(a).
So f is an endomorphism of K^ℕ. Is f also an automorphism? To answer that, we need to verify that f can be undone. The mapping f swaps even and odd sequence members. So if we swap the sequence members back, f is undone. This second swap can be done by just applying f again. As with the reflection example above, f is self-inverse; in formulas, f ∘ f = id or f⁻¹ = f. Since f is invertible, the mapping is bijective. So f is an automorphism.
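We can sketch the swap and its self-inverse property in Python; for illustration, sequences are modelled by finite tuples of even length, which is an assumption of the sketch, since true elements of the sequence space are infinite:

```python
def swap_pairs(seq):
    """Swap neighbouring members: (a1, a2, a3, a4, ...) -> (a2, a1, a4, a3, ...).

    Sequences are modelled here by finite tuples of even length.
    """
    out = []
    for i in range(0, len(seq), 2):
        out.extend([seq[i + 1], seq[i]])
    return tuple(out)

a = (1, 2, 3, 4, 5, 6)
assert swap_pairs(a) == (2, 1, 4, 3, 6, 5)
# Self-inverse: swapping twice restores the original sequence.
assert swap_pairs(swap_pairs(a)) == a
```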
Endomorphisms form a ring with unit
In the article Vector space of a linear map we saw that the set of linear maps between two K-vector spaces V and W again forms a vector space. Since End(V) = Hom(V, V) holds, the set of endomorphisms is also a vector space. That is, we can add endomorphisms of a vector space and multiply them by scalars. In particular, we can connect two endomorphisms f and g by addition and obtain an endomorphism f + g. This sum is given by (f + g)(v) = f(v) + g(v) for all v ∈ V, where the + on the right denotes addition in the vector space V.
Can f and g be connected in another way? Intuitively, f and g are two deformations of the vector space V. We can now deform the space with g and then deform the result with f. This produces a new deformation of the vector space. That is, we again get an endomorphism of V. This resulting mapping, which corresponds to applying g and then f one after the other, is called the composition and is denoted f ∘ g. Thus, the composition of two endomorphisms is always an endomorphism. In summary, we can "connect" two endomorphisms f and g by forming the sum f + g or the composition f ∘ g.
Because we have composition as an operation in addition to addition, End(V) carries more structure than just the vector space structure. We will prove later that the set of endomorphisms of V forms a ring with these two operations. Here, the addition in the ring is the addition of the mappings and the multiplication in the ring is the composition of the mappings.
It is now an interesting question whether the ring End(V) has a unit element and is commutative. If, in addition, every nonzero element were invertible, we would even have a field and could build more vector spaces over it. Now, a unit element exists if there is a neutral element of multiplication, that is, if there is an e ∈ End(V) such that e ∘ f = f and f ∘ e = f hold for all f ∈ End(V). We already know a mapping that satisfies this property: the identity id_V: V → V, v ↦ v. This is a linear map and so id_V ∈ End(V) holds. So the ring End(V) has a unit element.
Is End(V) a commutative ring? To answer this, we need to check whether f ∘ g = g ∘ f holds for all f, g ∈ End(V). To get an intuition whether the statement is true or false, we consider again examples with V = ℝ². Let p be the projection onto the x-axis; that is, for (x, y) ∈ ℝ², p((x, y)) = (x, 0) holds. Furthermore, let r be the rotation by 90° clockwise (or by 270° counter-clockwise) about the origin; that is, r((x, y)) = (y, −x) holds. We want to investigate whether p ∘ r = r ∘ p. What do the maps p ∘ r and r ∘ p do visually? The map r ∘ p first pushes all of space onto the x-axis and then rotates it by 90° in a clockwise direction. So our result is on the y-axis.

The mapping p ∘ r first rotates the space by 90° clockwise and then pushes everything to the x-axis. So the result of the mapping lies on the x-axis.

Consequently, p ∘ r and r ∘ p are different mappings. Therefore, End(ℝ²) is not a commutative ring. More generally, for any vector space V with dim(V) ≥ 2, End(V) is not a commutative ring. We deal with this below in a corresponding exercise.
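The two compositions can be compared directly in Python; the sample vector is arbitrary:

```python
def project_x(v):
    """Projection onto the x-axis: (x, y) -> (x, 0)."""
    return (v[0], 0)

def rot_cw(v):
    """Clockwise rotation by 90 degrees: (x, y) -> (y, -x)."""
    return (v[1], -v[0])

v = (3, 5)
# r ∘ p: project first, then rotate; the result lands on the y-axis.
assert rot_cw(project_x(v)) == (0, -3)
# p ∘ r: rotate first, then project; the result lands on the x-axis.
assert project_x(rot_cw(v)) == (5, 0)
# The two compositions disagree, so composition is not commutative.
assert rot_cw(project_x(v)) != project_x(rot_cw(v))
```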
As announced above, we now prove that End(V) is always a ring:
Theorem (Endomorphism ring)
Let K be a field and V be a K-vector space. Then the set End(V) of endomorphisms of V together with addition and composition forms a ring with unit.
For two endomorphisms f, g ∈ End(V), the endomorphisms f + g and f ∘ g are defined by
(f + g)(v) := f(v) + g(v) and (f ∘ g)(v) := f(g(v)) for v ∈ V.
Here, the + on the right-hand side of the first equation is the addition on V.
Proof (Endomorphism ring)
In order that (End(V), +, ∘) forms a ring with unit, the following properties must be satisfied:
- (End(V), +) is an Abelian group
- (End(V), ∘) is a monoid, i.e., the composition is associative and has a neutral element
- the distributive laws hold
Before we start with the proof, let's keep the following simple fact in mind:
Let f, g ∈ End(V) and v ∈ V. The mappings f and g map elements of V to elements of V. Accordingly, f(v) and g(v) are elements of V. By premise, V is a vector space. Therefore, we can apply the computational rules that hold in the K-vector space V to the elements f(v), g(v) ∈ V.
Proof step: (End(V), +) is an Abelian group
Proof step: Associative law of addition
The associative law of addition reads: (f + g) + h = f + (g + h) for all f, g, h ∈ End(V).
We prove this equation by establishing for all f, g, h ∈ End(V) the equality ((f + g) + h)(v) = (f + (g + h))(v) for each vector v ∈ V.
So let f, g, h ∈ End(V) and v ∈ V. Then ((f + g) + h)(v) = (f + g)(v) + h(v) = (f(v) + g(v)) + h(v) = f(v) + (g(v) + h(v)) = f(v) + (g + h)(v) = (f + (g + h))(v), where we used the associativity of the addition in V. This shows the associative law of addition.
Proof step: Commutative law of addition
The commutative law of addition is: f + g = g + f for all f, g ∈ End(V). We prove this by establishing that for all f, g ∈ End(V) we have (f + g)(v) = (g + f)(v) for each v ∈ V.
So let f, g ∈ End(V) and v ∈ V. Then (f + g)(v) = f(v) + g(v) = g(v) + f(v) = (g + f)(v), where we used the commutativity of the addition in V. This shows the commutativity of the addition.
Proof step: Existence of an additive neutral element
We need to show the following statement: there is a mapping 0 ∈ End(V) with 0 + f = f for all f ∈ End(V).
We prove this statement by establishing: (0 + f)(v) = f(v) for all f ∈ End(V) and all v ∈ V.
For this, let us choose the zero map 0: V → V, v ↦ 0, which is linear and hence lies in End(V). We now show that it is the neutral element of the addition. For this, let f ∈ End(V) and v ∈ V. Then (0 + f)(v) = 0(v) + f(v) = 0 + f(v) = f(v).
The additive neutral element is therefore the zero mapping 0: V → V, v ↦ 0.
Proof step: Existence of additive inverses
We have to show the following statement: for every f ∈ End(V) there is a g ∈ End(V) with f + g = 0.
This is done by establishing: (f + g)(v) = 0(v) for all v ∈ V.
Since V is a K-vector space, any vector f(v) ∈ V has an additive inverse, namely −f(v). Therefore, for any endomorphism f, we can simply choose g := −f := (−1) · f, so that g(v) = −f(v). Now we still have to show that f + (−f) = 0. For this, let f ∈ End(V) and v ∈ V. Then (f + (−f))(v) = f(v) + (−f(v)) = 0 = 0(v).
So the additive inverse of a mapping f is given by −f = (−1) · f.
We have thus proved that (End(V), +) is an Abelian group.
We could have shown this statement differently. In the article Vector space of a linear map we considered the set of linear maps between two K-vector spaces V and W. We call this set Hom(V, W). We have seen that Hom(V, W) forms a K-vector space. It is then true that End(V) = Hom(V, V). So End(V) is also a vector space and thus, in particular, an Abelian group with respect to addition.
Proof step: (End(V), ∘) is a monoid
Proof step: Associative law of multiplication
The associative law of multiplication in End(V) is: (f ∘ g) ∘ h = f ∘ (g ∘ h) for all f, g, h ∈ End(V).
This is true because the composition of mappings is associative.
Proof step: Existence of a multiplicative neutral element
We have to establish the following statement: there is a mapping 1 ∈ End(V) with 1 ∘ f = f = f ∘ 1 for all f ∈ End(V).
This is proven by establishing the following statement: (1 ∘ f)(v) = f(v) = (f ∘ 1)(v) for all f ∈ End(V) and all v ∈ V.
We choose 1 := id_V, where id_V: V → V, v ↦ v is the identity on V. We now show that id_V is the neutral element of the multiplication. For this, let f ∈ End(V) and v ∈ V. Then (id_V ∘ f)(v) = id_V(f(v)) = f(v) and (f ∘ id_V)(v) = f(id_V(v)) = f(v). So the neutral element of the multiplication is given by the identity on V, i.e., 1 = id_V.
Proof step: Distributive laws
Proof step: Distributive law I
The distributive law I reads: f ∘ (g + h) = f ∘ g + f ∘ h for all f, g, h ∈ End(V).
We prove this equation by establishing for all f, g, h ∈ End(V) the equality (f ∘ (g + h))(v) = (f ∘ g + f ∘ h)(v) with v ∈ V. For this, let f, g, h ∈ End(V) and v ∈ V. Then (f ∘ (g + h))(v) = f((g + h)(v)) = f(g(v) + h(v)) = f(g(v)) + f(h(v)) = (f ∘ g)(v) + (f ∘ h)(v) = (f ∘ g + f ∘ h)(v), where we used the additivity of the linear map f. This establishes the distributive law I.
Proof step: Distributive law II
The distributive law II reads: (f + g) ∘ h = f ∘ h + g ∘ h for all f, g, h ∈ End(V).
We prove this equation by establishing the equality ((f + g) ∘ h)(v) = (f ∘ h + g ∘ h)(v) for all f, g, h ∈ End(V) and v ∈ V. So let f, g, h ∈ End(V) and v ∈ V. Then ((f + g) ∘ h)(v) = (f + g)(h(v)) = f(h(v)) + g(h(v)) = (f ∘ h)(v) + (g ∘ h)(v) = (f ∘ h + g ∘ h)(v). This establishes the distributive law II.
Automorphisms and flattening
The finite-dimensional case
Above we have already examined some examples of endomorphisms and automorphisms. We have seen that endomorphisms which "flatten" a vector space are not bijective and therefore not automorphisms. On the other hand, endomorphisms which do not "flatten" a vector space are indeed automorphisms.
Question: What does "not flattening" mean in mathematical language?
An endomorphism f "does not flatten a vector space" if there is no vector v ≠ 0 mapped to zero by f. That is, for all vectors v ≠ 0, f(v) ≠ 0 holds. Since f is a linear map, this is exactly the case if f is injective, i.e., if it is a monomorphism.
For endomorphisms of finite-dimensional vector spaces, being "non-flattening" is equivalent to being an automorphism: Let f: V → V be an endomorphism of an n-dimensional vector space V. If the mapping f is an automorphism, then it is injective. So f does not flatten V. Conversely, if we assume that f does not flatten V, it follows that f is injective. Thus, no information from V is lost when mapping with f. From this, we can conclude that the image f(V) is also n-dimensional. So f(V) = V must hold. Thus, f is also surjective and therefore an automorphism.
We have seen that an injective endomorphism of a finite-dimensional vector space is automatically surjective. Does the converse statement also hold? In other words: If f is a surjective endomorphism of an n-dimensional vector space V, does it follow that f is injective? If f is surjective, then f(V) = V and hence dim(f(V)) = n holds. Suppose f is not injective. Then there is a vector v ≠ 0 for which f(v) = 0. Thus, f "flattens the direction" in which v points. This means, when mapping by f, we lose at least one dimension of V. Consequently, we would have dim(f(V)) ≤ n − 1 < n. This is a contradiction to dim(f(V)) = n. Therefore, f must be injective. So if f is surjective, then f is also injective.
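If you already know determinants, this equivalence is easy to test for V = ℝ²: a linear map given by a 2×2 matrix is injective exactly when it is surjective, and both hold exactly when the determinant is nonzero. A sketch; the matrix encodings of the earlier shear and flattening examples are assumptions of this sketch:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return a * d - b * c

shear_m = ((1, 1), (0, 1))      # matrix of the shear (x, y) -> (x + y, y)
flatten_m = ((1, 0), (0, 0))    # matrix of the flattening (x, y) -> (x, 0)

# Nonzero determinant: injective and surjective, hence an automorphism.
assert det2(shear_m) == 1
# Zero determinant: neither injective nor surjective.
assert det2(flatten_m) == 0
```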
We show these statements again formally in the following theorem.
Theorem (Endomorphisms on finite-dimensional vector spaces)
Let V be a finite-dimensional vector space and f: V → V be an endomorphism. Then, the following statements are equivalent:
- f is an isomorphism
- f is a monomorphism
- f is an epimorphism
Proof (Endomorphisms on finite-dimensional vector spaces)
We already know that for two finite-dimensional vector spaces V and W with dim(V) = dim(W) and a linear map f: V → W, the three statements
- f is an isomorphism
- f is a monomorphism
- f is an epimorphism
are equivalent. So for an endomorphism f: V → V of a finite-dimensional vector space V, the three statements must also be equivalent.
The infinite-dimensional case
In the infinite-dimensional case, the above argument no longer works. We have exploited in the finite-dimensional case that for an n-dimensional vector space V and a subspace U ⊆ V it already follows from dim(U) = n that U = V. Above, we used this for U = f(V). However, in infinite-dimensional vector spaces this does not hold. Here, a paradoxical effect occurs: one can place an infinite-dimensional subspace into another infinite-dimensional space of the same size, without filling all of it (this is related to Hilbert's Hotel paradox).
So for endomorphisms f of an infinite-dimensional vector space V, it does not hold that f is surjective exactly when f is injective. To understand this better, we now examine concrete counterexamples.
Example (An injective endomorphism that is not surjective)
Let K^ℕ be the sequence space over a field K. We define the endomorphism f: K^ℕ → K^ℕ, (a₁, a₂, a₃, …) ↦ (0, a₁, a₂, a₃, …). You can easily verify that f is linear. Why is f injective? For a = (a₁, a₂, …) and b = (b₁, b₂, …) with f(a) = f(b) we have (0, a₁, a₂, …) = (0, b₁, b₂, …), so aᵢ = bᵢ for all i. Thus, a = b follows and f is injective.
Why is f not surjective? To see this, we need to find a vector in K^ℕ that is not "hit" by f. For instance, consider b = (1, 0, 0, …). No matter which a ∈ K^ℕ we choose, it holds for f(a) that the first sequence member is equal to 0. So b is never hit by f. Therefore, f is not surjective.
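A Python sketch of this shift; for illustration, a sequence (a₁, a₂, …) is modelled as a function from an index i ≥ 1 to its member, which is an assumption of the sketch:

```python
# Model a sequence (a1, a2, ...) as a function from index i >= 1 to its member.
def right_shift(a):
    """Injective but not surjective: (a1, a2, ...) -> (0, a1, a2, ...)."""
    return lambda i: 0 if i == 1 else a(i - 1)

a = lambda i: i          # the sequence 1, 2, 3, ...
fa = right_shift(a)

assert [fa(i) for i in range(1, 5)] == [0, 1, 2, 3]
# Every image sequence starts with 0, so (1, 0, 0, ...) is never hit.
assert fa(1) == 0
```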
Hint
The procedure in this example is often given in the context of the Hilbert Hotel paradox. There, one shifts all elements of the set ℕ by 1 via the mapping g: ℕ → ℕ, n ↦ n + 1. This mapping is also injective, but just like f it is not surjective. The difference is that f is a linear map between vector spaces, while g is only a map on the set ℕ. So infinite-dimensional vector spaces show some similarly weird effects as infinitely large sets.
Example (A surjective endomorphism that is not injective)
We consider again the sequence space K^ℕ over K. Now we define the endomorphism f: K^ℕ → K^ℕ, (a₁, a₂, a₃, …) ↦ (a₂, a₃, a₄, …). So f(a)ᵢ = aᵢ₊₁ for all i: the first member is deleted and the sequence is shifted one place to the left. Again, one can easily verify that f is linear.
First we verify that f is surjective. For this, let b = (b₁, b₂, …) ∈ K^ℕ be any vector. We want to find a vector a for which f(a) = b holds. This is true for a = (0, b₁, b₂, …), and thus f is surjective.
Why is f not injective? To see this, we need to find some a, b ∈ K^ℕ with a ≠ b and f(a) = f(b). An example is (again) given by a = (1, 0, 0, …) and b = (0, 0, 0, …). Then a ≠ b, but f(a) = (0, 0, 0, …) = f(b). Thus, f is not injective.
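The same model of sequences as index functions also lets us sketch the left shift; modelling sequences as functions on indices i ≥ 1 is again an illustrative assumption:

```python
def left_shift(a):
    """Surjective but not injective: (a1, a2, a3, ...) -> (a2, a3, a4, ...)."""
    return lambda i: a(i + 1)

# A preimage of b is (0, b1, b2, ...): prepend a zero, then shift back.
b = lambda i: 10 * i
preimage = lambda i: 0 if i == 1 else b(i - 1)
assert [left_shift(preimage)(i) for i in range(1, 5)] == [b(i) for i in range(1, 5)]

# Not injective: (1, 0, 0, ...) and the zero sequence have the same image.
e1 = lambda i: 1 if i == 1 else 0
zero = lambda i: 0
assert [left_shift(e1)(i) for i in range(1, 5)] == [left_shift(zero)(i) for i in range(1, 5)]
```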
The automorphism group
We know that the endomorphisms End(V) form a ring with unit. The automorphisms are exactly the invertible endomorphisms. In other words, the automorphisms of a vector space V are exactly the multiplicatively invertible elements of the endomorphism ring. Recall that the multiplication in the endomorphism ring is the composition of mappings. In the following theorem we show that Aut(V) is indeed a group with respect to this multiplication.
Theorem (Automorphism group)
Let K be a field and V a K-vector space. The set Aut(V) forms a group with respect to composition ∘.
Further, Aut(V) is exactly the group of units of the ring End(V).
Proof (Automorphism group)
We need to show that
- Aut(V) is closed with respect to ∘
- ∘ is associative
- There is a neutral element with respect to ∘ in Aut(V)
- Each element in Aut(V) has a multiplicative inverse.
Proof step: Closedness with respect to ∘
We prove that for all automorphisms f and g it also holds that f ∘ g ∈ Aut(V).
For this, let f, g ∈ Aut(V). Then f and g are also endomorphisms. Because End(V) forms a ring with multiplication ∘, the set End(V) is closed with respect to ∘. Thus, f ∘ g ∈ End(V) holds. So all we have to do is to justify that f ∘ g is bijective. Because f and g are bijective, there are respective inverses f⁻¹ and g⁻¹. We now show that g⁻¹ ∘ f⁻¹ is the inverse of f ∘ g. It holds that (f ∘ g) ∘ (g⁻¹ ∘ f⁻¹) = f ∘ (g ∘ g⁻¹) ∘ f⁻¹ = f ∘ f⁻¹ = id_V and (g⁻¹ ∘ f⁻¹) ∘ (f ∘ g) = g⁻¹ ∘ (f⁻¹ ∘ f) ∘ g = g⁻¹ ∘ g = id_V. So f ∘ g has an inverse and is therefore bijective. Therefore, f ∘ g is an automorphism.
Proof step: Associativity of ∘
We need to show that for all automorphisms f, g and h the following holds: (f ∘ g) ∘ h = f ∘ (g ∘ h).
Let f, g, h ∈ Aut(V), and let v ∈ V be any vector. Then ((f ∘ g) ∘ h)(v) = (f ∘ g)(h(v)) = f(g(h(v))) = f((g ∘ h)(v)) = (f ∘ (g ∘ h))(v). Since the mappings (f ∘ g) ∘ h and f ∘ (g ∘ h) coincide on all vectors v ∈ V, they are equal, i.e., (f ∘ g) ∘ h = f ∘ (g ∘ h).
Proof step: Existence of a neutral element
We need to find an element e ∈ Aut(V) such that for all f ∈ Aut(V) it is true that e ∘ f = f = f ∘ e.
We choose e := id_V. Then id_V(v) = v for all v ∈ V, so id_V is linear. Thus, id_V ∈ End(V). Further, id_V is bijective, so id_V ∈ Aut(V).
Let now f ∈ Aut(V). Then (id_V ∘ f)(v) = f(v) = (f ∘ id_V)(v) for all v ∈ V. So id_V is the neutral element in Aut(V).
Proof step: Existence of a multiplicative inverse
We prove that for every f ∈ Aut(V) there exists an automorphism g such that f ∘ g = id_V = g ∘ f.
Let f ∈ Aut(V) be an automorphism. Then f is bijective and thus there exists an inverse mapping f⁻¹, for which f ∘ f⁻¹ = id_V = f⁻¹ ∘ f. We only have to show that f⁻¹ ∈ Aut(V). What we know is that inverses of bijective linear maps are again linear. Consequently, f⁻¹, as the inverse of the linear map f, is also linear. Furthermore, f⁻¹ is bijective because f⁻¹ has an inverse, namely f. So f⁻¹ ∈ Aut(V) holds, and thus f⁻¹ is an inverse of f in Aut(V).
The automorphisms form a group, but no longer a ring. This is because Aut(V) no longer has an additive structure: if we have two automorphisms f and g of a vector space V, the sum f + g need not be an automorphism again. And indeed, there are counterexamples for this:
Example (Sum of automorphisms which is not an automorphism)
We consider the vector space ℝ² and define the automorphisms f := id and g := −id, i.e., f((x, y)) = (x, y) and g((x, y)) = (−x, −y). It is easy to show that f and g are linear and bijective, so f, g ∈ Aut(ℝ²). Now (f + g)(v) = f(v) + g(v) = v − v = 0 for all v ∈ ℝ², so f + g is the zero map. Since this mapping does not hit the vector (1, 0), it is not surjective. So f + g is not bijective and therefore not an automorphism.
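A Python sketch of this counterexample, with the pointwise sum built by hand:

```python
def identity(v):
    return v

def neg_identity(v):
    """(x, y) -> (-x, -y); bijective, hence an automorphism."""
    return (-v[0], -v[1])

def sum_of(f, g):
    """Pointwise sum (f + g)(v) = f(v) + g(v)."""
    return lambda v: (f(v)[0] + g(v)[0], f(v)[1] + g(v)[1])

h = sum_of(identity, neg_identity)
# id + (-id) is the zero map: every vector goes to (0, 0), so (1, 0) is never hit.
for v in [(1, 0), (2, 3), (-4, 5)]:
    assert h(v) == (0, 0)
```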
Hint
Aut(V) is never closed under addition, unless V = {0}: for any f ∈ Aut(V), the map −f is also an automorphism, but f + (−f) = 0 is not bijective.
For vector spaces V with dim(V) ≥ 2, the automorphism group is not commutative. As with the endomorphism ring, the composition of the mappings is not commutative. We demonstrate this non-commutativity in an exercise below.
Exercises
Hint
In the exercises we saw that End(V) is commutative if dim(V) = 1 and noncommutative if dim(V) ≥ 2. What is the situation for dim(V) = 0? If dim(V) = 0, then V = {0} is the null vector space. There is only one endomorphism of the null space, namely the null mapping 0: {0} → {0}. So End({0}) = {0} holds. Since the ring consists of only one element, End({0}) is commutative.
Thus, End(V) is commutative if and only if dim(V) ≤ 1.