and . If and , we have , so the only set with this composition law and identity is the set .
a)
b) , but in general. Consider for example the permutations , so , but .
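As a sanity check, the inversion rule can be verified numerically. The two transpositions below are illustrative stand-ins (the exercise's concrete permutations are not legible above); permutations are encoded as tuples with p[i] the image of i.

```python
def compose(p, q):
    """(p . q)(i) = p(q(i)), permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

# Two sample transpositions in S_3 (illustrative values).
a = (1, 0, 2)   # swaps 0 and 1
b = (0, 2, 1)   # swaps 1 and 2

# (ab)^{-1} = b^{-1} a^{-1}, but a^{-1} b^{-1} differs in general.
assert inverse(compose(a, b)) == compose(inverse(b), inverse(a))
assert inverse(compose(a, b)) != compose(inverse(a), inverse(b))
```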
a) is a subgroup of
b) is a subgroup of .
c) The positive integers are not a subgroup of , as they do not contain inverse elements, or even 0.
d) The positive reals form a subgroup of .
e) The set is not a subgroup of , as it does not contain the identity matrix.
Let be a subgroup of . If has an identity element (possibly different from the identity of ), then we have , and multiplying both sides by shows the identity elements are the same. Similarly if has an inverse in , we necessarily have implying the inverses are the same.
Using the standard Euclidean algorithm we get and .
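The concrete integers of the exercise are not legible above; as a hedged illustration, the extended Euclidean algorithm below computes both the gcd and its Bézout expression for sample inputs.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)   # sample inputs, not the exercise's numbers
assert g == 2 and 240 * x + 46 * y == g
```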
Let be positive integers such that is a prime number. Then, if , we have , so . Because we want to be prime, the only possibility is .
Let and such that . Then so .
Assume that a group doesn't have a proper subgroup. Then
- must be finite. If is infinite, take any such that doesn't generate , and consider the set . Since does not generate , is a proper subgroup of , as is easy to show.
- has to be cyclic. Indeed, assume is not cyclic, so no single element generates . Then, for any element , is a proper subgroup of , since is finite.
- has to have a prime number order. For contradiction, assume the order of is . Then, since is a finite cyclic group, generated by an element , we have that is a proper subgroup.
- Finally, if has a prime order, no element of generates a proper subgroup of . This is because any element is of the form for some element and integer . From Proposition 2.4.3 we know that the order of is , where is the order of the group and . In this case is a prime number, so the order of is either or 1 (in the case that ).
Let be a cyclic group. Take any subgroup . We assume is finite (the book focuses mainly on the finite case, and in fact the definition on page 46 implicitly assumes it), so that is finite as well. Let . Then, since is a subgroup, for every it holds that . Hence, for the exponents for which , it holds that . Theorem 2.3.3 tells us that we can write for some integer . This shows that any element of is of the form , so .
Elementary matrices of the first type are matrices with the entries , for exactly one pair of indices and 0 otherwise. It is easy to see from the determinant formula that the determinant of such matrices is always 1. From the product rule for the determinant it follows that all products of such matrices have determinant 1.
For the other direction, we consider the 2 by 2 case first and let with . Adding a scaled row to the other row and using the relation between the entries, we can manipulate the matrix as
.
Since we can arrive at the identity matrix using only operations corresponding to multiplication by matrices of the first type and the condition , the original matrix can be produced from the identity by reversing the operations. Hence, we can generate with only elementary matrices of the first type. The general case follows by induction: for any matrix with determinant 1, assume we can manipulate into the form
,
where and . It is easy to see that we can further manipulate as follows:
.
Note that in the second step the element manipulates to 1 as we eliminate the row entries of the vector , since otherwise the determinant of the manipulated matrix would not be equal to 1.
Elements of order 2 necessarily swap one pair of elements, or two disjoint pairs of elements. In total this gives 9 permutations of order 2.
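The count is easy to confirm by brute force over all of S_4:

```python
from itertools import permutations

# Count elements of order 2 in S_4: p != id with p(p(i)) = i for all i.
n = 4
idp = tuple(range(n))
count = sum(
    1
    for p in permutations(range(n))
    if p != idp and tuple(p[p[i]] for i in range(n)) == idp
)
assert count == 9   # 6 single swaps + 3 double swaps
```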
a) Any transposition that swaps two elements has a permutation matrix that is an identity matrix in which two rows have been swapped. These are elementary matrices of the second type. Any permutation matrix corresponding to a permutation in is an identity matrix whose rows have been permuted according to the permutation. From the permutation interpretation of we know that is invertible, so by Theorem 1.2.6 it is a product of elementary matrices. It is easy to see that only elementary matrices of type 2 are needed in such a product, as scaling rows or adding rows together is not required. Hence, can be decomposed into a product of elementary matrices of the second type, implying that the permutation is a product of transpositions.
b) First we show the following claim:
Claim: Any product of two transpositions can be written as a product of three-cycles.
Proof: Let be a product of two transpositions, where are some integers (possibly some equal). Then we can write .
Since any permutation in is generated by transpositions, and the determinant of a transposition matrix is , any permutation in is a product of an even number of transpositions. Let be a permutation, with a decomposition into transpositions . Then we can group the product as , where each product of two transpositions is also expressible as a product of three-cycles, as shown in the Claim.
Let so that . Then so is a group homomorphism. We have and .
Any homomorphism satisfies , so must be a linear function with integer coefficient. In other words, with . For to be injective, it suffices that . For to be surjective, we need , since clearly . The surjective functions are also bijective, so they are isomorphisms.
Two elements are conjugate, if for some , . It is easy to see that for the given matrices, any matrix of the form with does the job. This matrix is also invertible, as its determinant is . Since the determinant is negative, the matrices are not conjugate in .
Let , and be a relation given by if and only if there exists such that
Reflexivity: , so .
Symmetry: if , then , so . Hence , and vice versa.
Transitivity: if and , then and , so . Therefore .
- The set defines an equivalence relation on that is the same as the usual "=" relation on the reals.
- The relation defined by the empty set satisfies symmetry and transitivity, but fails reflexivity.
- The locus satisfies symmetry and transitivity, but fails reflexivity (for example is not true).
- The locus is reflexive ( because ), symmetric (since ) and transitive (If and are solutions to the equation, then by symmetry also and are. Therefore also and thus is a solution.)
The number of equivalence relations on a set is equal to the number of partitions of the set. If the set has 5 elements, we have the following partitions:
- Partitions into 1 set: 1
- Partitions into 2 sets of sizes 1, 4: 5
- Partitions into 2 sets of sizes 2, 3: 10
- Partitions into 3 sets of sizes 1, 1, 3: 10
- Partitions into 3 sets of sizes 1, 2, 2: 15
- Partitions into 4 sets of sizes 1, 1, 1, 2: 10
- Partitions into 5 sets of size 1 each: 1
The total number of partitions is thus 52.
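The total (the Bell number of a 5-element set) can be double-checked by enumerating all set partitions directly; `partitions` is a helper name introduced here.

```python
def partitions(s):
    """Yield all set partitions of the list s (each as a list of blocks)."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for p in partitions(rest):
        for i in range(len(p)):               # add `first` to an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                   # or start a new singleton block

num_partitions = sum(1 for _ in partitions(list(range(5))))
assert num_partitions == 52
```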
Let be a group such that for some prime . For any , is a subgroup of and by Lemma 2.8.7 has order where . If , we are done as we have found an element of order . Otherwise, consider and notice that , so has order .
For a group of 35 elements we have a few cases.
- If the group is cyclic, i.e., , we have that has order 7 and has order 5.
- If the group is not cyclic, then for every there is some integer such that is a subgroup of order . The order of divides the order of , so if is not the identity, the order of is either 5 or 7. Assume is of order 5 and there is another element of order 5 that is not a power of . Then the subgroup of generated by and has order 25. Indeed, all elements for are distinct (if not, we would have for that , which implies that is a power of ), and there are 25 ways to choose the exponents. But 25 does not divide 35, so there cannot be another subgroup of order 5, and thus any element that is not a power of must generate a cyclic group of order 7.
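Both cases are consistent with a quick computation in the cyclic model Z/35Z, where every non-identity element has order 5, 7, or 35:

```python
# Orders of elements in the additive cyclic group Z/35Z.
n = 35
orders = set()
for g in range(1, n):
    k, x = 1, g
    while x % n != 0:   # smallest k with k*g = 0 (mod 35)
        x += g
        k += 1
    orders.add(k)
assert orders == {5, 7, 35}
```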
If a group contains an element of order 6 and an element of order 10, then and are subgroups of and thus 6 and 10 divide the order of . Hence, we can say that .
Let be a subgroup such that . Then, since left (and right) cosets partition the group, we have for some . Because , we must have and so by Proposition 2.8.17 is normal.
For an example that this is not true for such that , consider and . Since , we have indeed . Then, and , so is not normal.
Claim 1: For all we have .
Proof: Assume that . Then , since . But since , we have , which contradicts the assumption that the cosets of partition .
Claim 2: For all we have .
Proof: Assume for some . Then we have for some . This implies , and by Claim 1 we have , so . But then , so unless we have a contradiction.
Claim 1 and Claim 2 together prove that is a subgroup of .
The numbers involved are so small that solving by brute force becomes viable. To solve , we note that , so , and thus .
To investigate the congruence , we note by checking every possibility that there is no such element modulo 6. Therefore the congruence has no solution.
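Since the moduli are tiny, the brute force is easy to script. The concrete values below are placeholders, as the original numbers are not legible above:

```python
def solve_congruence(a, b, n):
    """All x in {0, ..., n-1} with a*x = b (mod n), by brute force."""
    return [x for x in range(n) if (a * x - b) % n == 0]

# Placeholder values, not necessarily the exercise's own numbers:
assert solve_congruence(2, 3, 6) == []    # no solution: gcd(2, 6) does not divide 3
assert solve_congruence(5, 1, 6) == [5]   # 5 is its own inverse modulo 6
```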
Given equations
we can solve and substitute into the other congruence to obtain . This equation has a solution exactly when there exists modulo , or in other words an integer such that . This translates to saying that the equation must have an integer solution . But that is only possible when .
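The solvability criterion can be checked exhaustively for small moduli; the helper `has_solution` is introduced here for illustration:

```python
from math import gcd

def has_solution(r1, m1, r2, m2):
    """Brute-force: does x = r1 (mod m1), x = r2 (mod m2) have a solution?"""
    return any(x % m1 == r1 and x % m2 == r2 for x in range(m1 * m2))

# Matches the divisibility criterion: solvable iff gcd(m1, m2) divides r2 - r1.
for m1, m2 in [(4, 6), (3, 5)]:
    for r1 in range(m1):
        for r2 in range(m2):
            assert has_solution(r1, m1, r2, m2) == ((r2 - r1) % gcd(m1, m2) == 0)
```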
It is not clear from the problem statement what the answer should look like. Therefore, the following candidate for answer might not be what is intended.
We show that each cycle of the form has sign . This is easy to see, as we can decompose . Each transposition corresponds to an elementary matrix of type two, which has determinant . The product form of the cycle has transpositions, so the sign is as claimed. Therefore, if a permutation has a cycle decomposition into cycles, each having terms, the sign of the permutation is .
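The rule that a k-cycle contributes a factor of (-1)^(k-1) can be implemented directly by walking the cycles of a permutation (encoded as a tuple with p[i] the image of i):

```python
def sign(p):
    """Sign of a permutation: product of (-1)^(k-1) over its cycles."""
    seen, s = set(), 1
    for i in range(len(p)):
        if i in seen:
            continue
        k, j = 0, i
        while j not in seen:   # traverse one cycle, counting its length k
            seen.add(j)
            j = p[j]
            k += 1
        s *= (-1) ** (k - 1)
    return s

assert sign((1, 2, 0)) == 1      # single 3-cycle: even
assert sign((1, 0, 2)) == -1     # single transposition: odd
assert sign((1, 0, 3, 2)) == 1   # two transpositions: even
```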
Let and . We have , and has the subgroups and the subgroups . The correspondences given by are thus
Subgroup in | Subgroup in
Notice that the subgroups and do not contain .
Let and assume is generated by an element . Then, for every element of we would have for some . But since are infinite, there are no such that and , since the first condition would require and the second . The assumption that are infinite means that and .
a) is isomorphic to by the function given by which has the inverse . Seeing that is a homomorphism is a direct consequence of multiplication rules of real numbers and properties of absolute value.
b) Let be the set of invertible upper triangular matrices, the set of invertible diagonal matrices and the set of upper diagonal matrices with ones on the diagonal. In order to have a homomorphism , we must have
,
for appropriate reals and . Using this notation for we have
. (1)
On the other hand,
. (2)
For to be a homomorphism, we require these two expressions to be equal. However, the second coordinate in (1) depends on and , whereas (2) does not. Hence, (1) and (2) are not equal in general.
c) Let , (the angles of a circle with sum as the group operation) and . Then is isomorphic to via the homomorphism given by . That is a bijective homomorphism follows easily from the polar form representation of complex numbers.
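The polar-form homomorphism can be sanity checked numerically: multiplying radii and adding angles matches complex multiplication (sample values chosen arbitrarily):

```python
import cmath

def f(r, theta):
    """(r, theta) -> r * e^{i theta}: radii multiply, angles add."""
    return r * cmath.exp(1j * theta)

r1, t1, r2, t2 = 2.0, 0.7, 3.0, 1.9
assert abs(f(r1 * r2, t1 + t2) - f(r1, t1) * f(r2, t2)) < 1e-9
```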
Let be a subgroup of . Clearly is not normal, as . Then and . It is a straightforward computation to check that . This set has 4 elements, but all cosets must have size 2, so it is not a coset.
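A concrete instance (assuming the subgroup generated by a single transposition in S_3; the exercise's own symbols are not legible above) exhibits a product of two cosets with 4 elements:

```python
def compose(p, q):
    """(p . q)(i) = p(q(i)), permutations of {0, 1, 2} as tuples."""
    return tuple(p[q[i]] for i in range(3))

e, t = (0, 1, 2), (1, 0, 2)      # identity and a transposition
H = {e, t}                       # order-2 subgroup (assumed example)
c = (1, 2, 0)                    # a 3-cycle
cH = {compose(c, h) for h in H}  # a left coset, 2 elements

prod = {compose(x, y) for x in cH for y in cH}
assert len(prod) == 4            # too big to be a coset of H
```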
Let be the group of upper triangular matrices with .
a) Let be the subset of defined by , i.e., the set of invertible diagonal matrices. It is easy to see that is a subgroup, as the inverse of each such matrix is again a diagonal matrix, the product of two diagonal matrices is a diagonal matrix, and the identity is a diagonal matrix. However, is not a normal subgroup, since for any with inverse we can compute
,
which is not in when .
b) Let be the subset of defined by . Again, it is easy to see that is a subgroup of . As above, we can compute
,
so is a normal subgroup. The cosets are the sets containing the matrices
,
which are distinguished by the parameter , as the variables can be chosen arbitrarily (as long as ). Hence, the quotient group is isomorphic to . The homomorphism whose kernel is is given by .
c) Let be the subset of defined by . Then again is clearly a subgroup of . We have
,
so is a normal subgroup. To investigate the cosets of this group, we compute
.
Here we can choose as we like, so the top-right element doesn't impose any conditions on the cosets. Then, the elements of for which it holds that generate a coset of uniquely. This way we also obtain all cosets of , since for every choice of we can choose , and we see that this element is in the coset generated by .
The quotient group is again isomorphic to with the homomorphism whose kernel is given by .
a) We observe the following:
- Reflexivity: A point is connected to itself by the path . If , then the path is contained in .
- Symmetry: Let such that there exists a path from to contained in . Then the path is a path from to contained in .
- Transitivity: If such that is a path from to contained in and is a path from to contained in , then is a path from to contained in .
b) Path connection is an equivalence relation as shown in part a). Hence, path connected subsets partition the set . In particular, the transitivity property ensures that if two points can be path connected, any points connected to those two points can be connected with both of the points.
a) Let such that given by and given by . Then connects to . This path is also in , since both paths and are in and is a group.
b) We show that for any matrix , the matrix is also connected to the identity by a path, for any . If is the path joining and , simply consider the path , since . Hence, the matrices connected to the identity by a path form a normal subgroup.
a) Elementary matrices of the first type are matrices with the entries , for exactly one pair of indices and 0 otherwise. For each such , there is a path to obtained by setting to be the same matrix as , except that the element . Clearly this is a continuous path from to , and since is an elementary matrix of the first type, it also has determinant 1, and thus the path stays in . Now, if we have the elementary matrices of the first type , we have by M.7 a) that is connected to , as are connected to . We have now shown that any product of elementary matrices of type 1 is path connected to , which implies that is path connected, as the elementary matrices of type 1 generate .
b) Let such that . We can path connect to a matrix with determinant 1 by , so that . This path is a composition of continuous functions, since and always has a positive determinant, so it is a continuous path in . Therefore, the matrices with form a connected component of . By the same reasoning, if we instead assume that , the matrices with form a connected component of .
What remains is to show that these two components are not connected to each other. Let be a matrix with a positive determinant and a path connecting to a matrix with a negative determinant. Then the entries of are continuous functions. From the determinant formula (1.6.4) we see that the determinant is then also a continuous function of the entries, as it is a sum of products of continuous functions. Then, since and , the intermediate value theorem gives for some . Therefore is a path not contained in , and hence is a disjoint union of two connected components.
Double cosets partition a group. Indeed, for every , the double coset contains , since . On the other hand, if , then there are elements such that . This implies that , and since are subgroups, , so . This implies in turn that .
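The partition property can be verified exhaustively in a small group, for example S_3 with two sample subgroups:

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p(q(i)), permutations of {0, 1, 2} as tuples."""
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]   # {e, (0 1)}
K = [(0, 1, 2), (0, 2, 1)]   # {e, (1 2)}

# The distinct double cosets H g K, collected as frozensets.
double_cosets = {
    frozenset(compose(compose(h, g), k) for h in H for k in K) for g in G
}
# The blocks cover G without overlap: their sizes sum to |G|.
assert sum(len(c) for c in double_cosets) == len(G)
```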
Multiplying from the left with the matrices and their inverses corresponds to adding and subtracting rows. Multiplying from the right corresponds to adding and subtracting columns. Now, consider any .
Claim 1: We can bring the matrix to the form where or only by adding and subtracting rows.
Proof. If , we have since the matrix has determinant 1, so we can add the second column to the first one to obtain another matrix with determinant 1, whose entry on the first column, second row is non-zero. So we assume from now on that . Using the row operations, we can perform what is essentially the Euclidean algorithm on the first column by subtracting the smaller column 1 element from the larger. Since , we have , which implies that and so the row operation Euclidean algorithm terminates once it produces 1 on one of the rows of the 1st column.
Now, using the matrix produced in Claim 1, we can bring the matrix to the identity form using only row and column additions/subtractions. Hence, we have , where each is one of the matrices or their inverses. This can be solved to show that .
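A small worked instance (matrix values chosen for illustration) reduces a determinant-1 integer matrix to the identity using only row additions and subtractions, the moves provided by the two generators and their inverses:

```python
def row_add(M, i, j, c):
    """Row operation: row i += c * row j (an elementary generator move)."""
    M[i] = [M[i][0] + c * M[j][0], M[i][1] + c * M[j][1]]

M = [[2, 7], [1, 4]]     # determinant 2*4 - 7*1 = 1 (illustrative matrix)
row_add(M, 0, 1, -1)     # R0 -= R1    -> [[1, 3], [1, 4]]
row_add(M, 1, 0, -1)     # R1 -= R0    -> [[1, 3], [0, 1]]
row_add(M, 0, 1, -3)     # R0 -= 3*R1  -> identity
assert M == [[1, 0], [0, 1]]
```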