
## Agregation 2014 – Mathematiques Generales – Parts 1-3

This post contains the first three parts of the Mathematiques Generales paper of the French Agregation contest 2014.

Introduction and notations

For ${m \leq n}$ we denote ${ [m..n] =\{m,m+1,..,n\} }$. For an integer ${n \geq 1}$ we denote ${S_n}$ the group of permutations of ${[1..n]}$.

We say that a square matrix is lower (upper) unitriangular if it is lower (upper) triangular and all its diagonal elements are equal to ${1}$.

For two integers ${ n \geq 1}$ and ${k \geq 0}$ we denote ${\mathcal{P}_k(n)}$ the family of ${k}$-element subsets of ${[1..n]}$.

Let ${m,n}$ be two positive integers and ${A}$ an ${m\times n}$ matrix with elements in a field ${\Bbb{K}}$ (all fields are assumed commutative in the sequel). A minor of ${A}$ is the determinant of a square matrix extracted from ${A}$. We can define for ${k \in [1..\min(m,n)]}$ and ${(I,J) \in \mathcal{P}_k(m) \times \mathcal{P}_k(n)}$ the minor

$\displaystyle \left| \begin{matrix}a_{i_1,j_1} & ... & a_{i_1,j_k} \\ \vdots & \ddots & \vdots \\ a_{i_k,j_1} & ... & a_{i_k,j_k} \end{matrix} \right|$

where ${i_1,..,i_k}$ (respectively ${j_1,..,j_k}$) are the elements of ${I}$ (respectively ${J}$) arranged in increasing order. We denote this minor ${\Delta_{I,J}(A)}$.
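As a concrete illustration (helper names are ours, not the exam's), the minor ${\Delta_{I,J}(A)}$ can be computed directly from this definition; the sketch below uses the Leibniz formula for the determinant, which is fine for the small sizes involved:

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        # sign of the permutation = (-1)^(number of inversions)
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def minor(A, I, J):
    # Delta_{I,J}(A): rows I and columns J, 1-based, taken in increasing order
    rows, cols = sorted(I), sorted(J)
    return det([[A[i - 1][j - 1] for j in cols] for i in rows])

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(minor(A, {1, 3}, {2, 3}))  # |2 3; 8 10| = 20 - 24 = -4
```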

0 Classic results

Theorem A. (to be proved) Let ${E}$ be a vector space of finite dimension over a field ${\Bbb{K}}$ and ${F,G}$ linear subspaces of ${E}$. Then

$\displaystyle \dim (F+G)+\dim(F\cap G)=\dim(F)+\dim(G).$

Theorem B. The determinant of a matrix ${A}$, with coefficients in a field, which admits a block decomposition of the form

$\displaystyle A= \left( \begin{array}{c|c} B & C \\ \hline 0 & F \end{array} \right) \text{ or } A= \left( \begin{array}{c|c} B & 0 \\ \hline D & F \end{array} \right)$

where ${B}$ and ${F}$ are square matrices is given by ${\det(A)=\det(B)\det(F)}$.
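Theorem B is easy to sanity-check numerically; the sketch below (with arbitrary illustrative blocks) assembles a block upper triangular matrix and compares its determinant with ${\det(B)\det(F)}$:

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

B = [[2, 1], [1, 3]]   # det(B) = 5
F = [[4, 2], [1, 1]]   # det(F) = 2
C = [[7, 8], [9, 6]]   # arbitrary upper-right block

# assemble A = [[B, C], [0, F]] row by row
A = [B[0] + C[0], B[1] + C[1], [0, 0] + F[0], [0, 0] + F[1]]
print(det(A), det(B) * det(F))  # both equal 10
```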

Theorem C. Let ${\Bbb{K}}$ be a field and ${n \geq 2}$. If ${x_1,..,x_n}$ are elements of ${\Bbb{K}}$ then

$\displaystyle \left| \begin{matrix} 1& x_1 & x_1^2 & ... & x_1^{n-1}\\ 1& x_2 & x_2^2 & ... & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1& x_n & x_n^2 & ... & x_n^{n-1}\end{matrix} \right| =\prod_{1\leq i<j\leq n}(x_j-x_i).$
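A quick numerical check of the Vandermonde formula of Theorem C (a sketch using a brute-force Leibniz determinant):

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

x = [2, 3, 5, 7]
n = len(x)
V = [[xi ** k for k in range(n)] for xi in x]  # Vandermonde matrix, row i = (1, x_i, ..., x_i^{n-1})

lhs = det(V)
rhs = 1
for i in range(n):
    for j in range(i + 1, n):
        rhs *= x[j] - x[i]
print(lhs, rhs)  # both equal 240
```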

Theorem D. Let ${\Bbb{K}}$ be a field and ${n}$ a positive integer. Consider ${A \in \mathcal{M}_n(\Bbb{K})}$. The following propositions are equivalent:

(i) For every ${p \in [1..n]}$ the minor ${\Delta_{[1..p],[1..p]}(A)}$ is non-zero.

(ii) There exists in ${\mathcal{M}_n(\Bbb{K})}$ a unique factorization ${A=LDU}$ where ${D}$ is an invertible diagonal matrix and ${L,U}$ are, respectively, lower and upper unitriangular matrices.
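The factorization in (ii) can be computed by Gaussian elimination without pivoting, which succeeds precisely when condition (i) holds; a sketch in exact rational arithmetic (function names are ours, not the exam's):

```python
from fractions import Fraction

def ldu(A):
    # Doolittle elimination without pivoting: valid when all leading
    # principal minors Delta_{[1..p],[1..p]}(A) are non-zero.
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(1) if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # pivot U[k][k] is non-zero by hypothesis
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    D = [U[i][i] for i in range(n)]                               # invertible diagonal part
    Uu = [[U[i][j] / D[i] for j in range(n)] for i in range(n)]   # upper unitriangular factor
    return L, D, Uu

L, D, Uu = ldu([[2, 1], [4, 5]])
print(L, D, Uu)  # L = [[1,0],[2,1]], D = [2, 3], Uu = [[1, 1/2], [0, 1]]
```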

Part 1 – Totally positive matrices

1.1 Let ${\Bbb{K}}$ be a field and ${m,n,p}$ three positive integers. Consider ${A \in \mathcal{M}_{m,n}(\Bbb{K})}$ and ${B \in \mathcal{M}_{n,p}(\Bbb{K})}$. We form the product ${C=AB}$ and write ${A=(a_{h,i}),\ B=(b_{i,j}),\ C=(c_{h,j})}$ with ${h \in [1..m],\ i \in [1..n],\ j \in [1..p]}$.

Consider ${k \in [1..\min(m,p)]}$ and ${(H,J) \in \mathcal{P}_k(m)\times \mathcal{P}_k(p)}$. Denote ${h_1,..,h_k}$ and ${j_1,..,j_k}$ the elements of ${H}$ and ${J}$ in increasing order. Denote ${X_1,..,X_n}$ and ${Y_1,..,Y_k}$ the columns of the matrices

$\displaystyle \begin{pmatrix} a_{h_1,1} & ... & a_{h_1,n} \\ \vdots & \ddots & \vdots\\ a_{h_k,1} & ... & a_{h_k,n} \end{pmatrix} \text{ and } \begin{pmatrix} c_{h_1,j_1} & ... & c_{h_1,j_k} \\ \vdots & \ddots & \vdots\\ c_{h_k,j_1} & ... & c_{h_k,j_k} \end{pmatrix},$

respectively. These columns belong to ${\Bbb{K}^k}$.

(a) Express ${Y_1,...,Y_k}$ in terms of ${X_1,...,X_n}$ and the coefficients ${b_{i,j}}$ of ${B}$.

(b) Let ${f : (\Bbb{K}^k)^k \rightarrow \Bbb{K}}$ be an alternating ${k}$-linear form. Prove that

$\displaystyle f(Y_1,...,Y_k) = \sum_{\substack{\varphi : [1..k] \rightarrow [1..n]\\ \varphi \text{ injective}}} b_{\varphi(1),j_1}...b_{\varphi(k),j_k}f(X_{\varphi(1)},...,X_{\varphi(k)})$

(c) Under the hypothesis of the previous question prove that

$\displaystyle f(Y_1,...,Y_k)= \sum_{I \in \mathcal{P}_k(n)} \sum_{\sigma \in S_k} b_{i_{\sigma(1)},j_1}...b_{i_{\sigma(k)},j_k}f(X_{i_{\sigma(1)}},...,X_{i_{\sigma(k)}}),$

where ${i_1,..,i_k}$ are the elements of ${I}$ arranged in increasing order.

(d) Prove the Binet-Cauchy formula:

$\displaystyle \Delta_{H,J}(C)= \sum_{I \in \mathcal{P}_k(n)} \Delta_{H,I}(A)\Delta_{I,J}(B)$
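The formula is easy to test numerically; the sketch below (our own example, not the exam's) checks it for a ${2\times 3}$ times ${3\times 2}$ product with ${H=J=\{1,2\}}$:

```python
from itertools import combinations, permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

A = [[1, 2, 3], [4, 5, 6]]    # 2x3
B = [[1, 0], [0, 1], [1, 1]]  # 3x2
C = [[sum(A[h][i] * B[i][j] for i in range(3)) for j in range(2)] for h in range(2)]

# right-hand side: sum over all 2-element subsets I of the inner index set
rhs = 0
for I in combinations(range(3), 2):
    dA = det([[A[h][i] for i in I] for h in range(2)])  # Delta_{H,I}(A)
    dB = det([[B[i][j] for j in range(2)] for i in I])  # Delta_{I,J}(B)
    rhs += dA * dB

print(det(C), rhs)  # both equal -6
```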

We say that a square matrix with real coefficients is totally positive (in short TP) if every one of its minors is positive (${\geq 0}$).
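Total positivity can be tested by brute force over all minors (there are exponentially many, so this only works for small matrices); the checker below is an illustrative helper of ours, useful for experimenting with the questions that follow:

```python
from itertools import combinations, permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def is_totally_positive(A):
    # check every minor Delta_{I,J}(A) >= 0, brute force over all I, J
    n = len(A)
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            for J in combinations(range(n), k):
                if det([[A[i][j] for j in J] for i in I]) < 0:
                    return False
    return True

print(is_totally_positive([[1, 1], [1, 2]]))  # True
print(is_totally_positive([[0, 1], [1, 0]]))  # False: det = -1
```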

1.2 In this question the matrices are considered square and with real coefficients.

(a) Prove that the coefficients of a TP matrix are positive.

(b) Prove that the transpose of a TP matrix is also TP.

(c) Prove that for every ${n \geq 1}$ the identity matrix of size ${n}$ is TP.

(d) Prove that the product of two TP matrices is also TP.

(e) Is it true that the inverse of a TP matrix is also TP?

1.3 Let ${n \geq 1}$ be a positive integer. Denote ${\mathcal{C}_n \subset \Bbb{R}^n}$ the subset of ${n}$-tuples which are strictly increasing.

(a) Let ${(b_1,...,b_n) \in \mathcal{C}_n}$ and ${(\lambda_1,...,\lambda_n) \in \Bbb{R}^n}$. Prove that if the function defined by

$\displaystyle x \mapsto \lambda_1 e^{b_1x}+ ...+\lambda_n e^{b_n x}$

has ${n}$ distinct zeros then ${\lambda_1=...=\lambda_n=0}$.
(Hint: use induction and differentiation)

(b) Given two elements ${a=(a_1,...,a_n)}$ and ${b=(b_1,...,b_n)}$ of ${\mathcal{C}_n}$ we can construct the matrix ${E=(e_{i,j})}$ of size ${n \times n}$ with ${e_{i,j}=e^{a_ib_j}}$. Prove that this matrix is invertible.
(Hint: use the associated homogeneous system)

(c) Prove that ${\mathcal{C}_n}$ is connected.

(d) With the notations of question (b) prove that ${\det E>0}$ for every ${(a,b) \in \mathcal{C}_n^2}$.

1.4 We fix an integer ${n \geq 1}$. Denote ${\mathcal{G}=GL_n(\Bbb{R})}$. Denote ${\mathcal{G}_+}$ the set of TP matrices in ${\mathcal{G}}$. Denote ${\mathcal{G}_+^*}$ the set of matrices in ${\mathcal{G}}$ which have all minors strictly positive.

(a) Let ${A,B}$ be two elements of ${\mathcal{G}}$. Prove that if ${A \in \mathcal{G}_+}$ and ${B \in \mathcal{G}_+^*}$ then ${AB \in \mathcal{G}_+^*}$.

(b) Prove that for every ${\theta \in (0,1)}$ the matrix ${T=(t_{i,j})}$ of size ${n \times n}$ and coefficients ${t_{i,j} = \theta^{(i-j)^2}}$ is in ${\mathcal{G}_+^*}$.
(Hint: use question 1.3(d))
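For small ${n}$ the claim in (b) can be verified numerically; assuming floating-point arithmetic is accurate enough at this scale, the sketch below checks that every minor of ${T}$ is strictly positive for ${\theta=1/2}$ and ${n=3}$:

```python
from itertools import combinations, permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

theta, n = 0.5, 3
T = [[theta ** ((i - j) ** 2) for j in range(n)] for i in range(n)]

# smallest minor over all (I, J) with |I| = |J| = k, for every k
smallest = min(
    det([[T[i][j] for j in J] for i in I])
    for k in range(1, n + 1)
    for I in combinations(range(n), k)
    for J in combinations(range(n), k)
)
print(smallest > 0)  # True: every minor is strictly positive
```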

(c) Construct a sequence of matrices in ${\mathcal{G}_+^*}$ which converges to the identity matrix in ${\mathcal{G}}$.

(d) Prove that ${\mathcal{G}_+}$ is the closure of ${\mathcal{G}_+^*}$ in ${\mathcal{G}}$.

Part 2 – The factorisation LDU of an invertible TP matrix

The aim of this part is to prove that for all integers ${n>p \geq 1}$ and every TP matrix ${A \in \mathcal{M}_n(\Bbb{R})}$ we have

$\displaystyle \det(A) \leq \Delta_{[1..p],[1..p]}(A) \Delta_{[p+1..n],[p+1..n]}(A).\ \ \ (*)$

We will prove this inequality by induction on ${n}$, applying ${(*)}$ to a matrix ${D}$ of smaller size constructed from ${A}$. The core argument is Sylvester's identity, which allows us to express the minors of ${D}$ in terms of the minors of ${A}$.

2.1 Let ${\Bbb{K}}$ be a field and ${q,n}$ two integers such that ${n > q \geq 1}$. Consider ${A \in \mathcal{M}_n(\Bbb{K})}$ and define ${D \in \mathcal{M}_{n-q}(\Bbb{K})}$ by

$\displaystyle d_{i,j} = \Delta_{[1..q]\cup\{q+i\} , [1..q]\cup\{q+j\}}(A).$

In the questions (a),(b),(c) we suppose that the minor ${\Delta_{[1..q],[1..q]}(A)}$ is not zero.

(a) Prove that ${A}$ factorizes in a unique way as the product of block matrices of the type

$\displaystyle A= \left(\begin{array}{c|c}I_q & 0 \\ \hline E &I_{n-q} \end{array} \right)\left(\begin{array}{c|c}B & F\\ \hline 0 &C \end{array} \right),$

where ${B}$ is of size ${q \times q}$ and ${C}$ is of size ${(n-q) \times (n-q)}$.

(b) Express ${D}$ in terms of ${B}$ and ${C}$.
(Hint: Use the Cauchy-Binet formula proved in question 1.1(d))

(c) Prove the Sylvester identity: ${\det(D)=(\Delta_{[1..q],[1..q]}(A))^{n-q-1}\det(A)}$.

(d) Prove that Sylvester’s identity remains true even if ${\Delta_{[1..q],[1..q]}(A)=0}$.
(Note: We make the convention ${0^0=1}$.)
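Sylvester's identity is easy to check on a small example; the sketch below (helper names are ours) takes ${n=3}$, ${q=1}$ with exact integer arithmetic:

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz formula (adequate for small matrices)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
n, q = 3, 1

def delta(I, J):
    # Delta_{I,J}(A), with 1-based row/column index sets
    return det([[A[i - 1][j - 1] for j in sorted(J)] for i in sorted(I)])

# d_{i,j} = Delta_{[1..q] u {q+i}, [1..q] u {q+j}}(A)
D = [[delta(set(range(1, q + 1)) | {q + i}, set(range(1, q + 1)) | {q + j})
      for j in range(1, n - q + 1)] for i in range(1, n - q + 1)]

lhs = det(D)
rhs = delta(set(range(1, q + 1)), set(range(1, q + 1))) ** (n - q - 1) * det(A)
print(lhs, rhs)  # both equal -3
```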

In the rest of this part the matrices considered are with real coefficients.

2.2 Consider ${n>q \geq 1}$ two integers and ${A \in \mathcal{M}_n(\Bbb{R})}$. Construct ${D \in \mathcal{M}_{n-q}(\Bbb{R})}$ as in question 2.1. Prove that if ${A}$ is TP then ${D}$ is also TP.

2.3 In this question we prove ${(*)}$ using induction over ${n}$.

The case ${n=2}$ presents no difficulty. Necessarily we have ${p=1}$ and ${(*)}$ reads

$\displaystyle a_{1,1}a_{2,2}-a_{1,2}a_{2,1}\leq a_{1,1}a_{2,2}$

which is true by the positivity of ${a_{1,2}}$ and ${a_{2,1}}$.

We take ${n \geq 3}$ and we suppose the result is true for matrices of size strictly smaller than ${n}$. Let ${A \in \mathcal{M}_n(\Bbb{R})}$ be a TP matrix. Because ${(*)}$ is obviously true if ${\det(A)=0}$ we suppose that ${A}$ is invertible.

(a) Prove that ${a_{1,1}>0}$.
(Hint: Argue by contradiction and use the positivity of the minors ${\Delta_{\{1,i\},\{1,j\}}(A)}$, for ${i,j \geq 2}$)

(b) Prove that ${A}$ satisfies the inequality ${(*)}$ for ${p \in [2..n-1]}$.
(Hint: introduce the matrix ${D}$ from question 2.1 for ${q=1}$ and establish the inequality ${\Delta_{[p..n-1],[p..n-1]}(D)\leq (a_{1,1})^{n-p}\Delta_{[p+1..n],[p+1..n]}(A)}$)

(c) Treat the case ${p=1}$ by reducing it to the case ${p=n-1}$ for another matrix.

Let ${A}$ be an invertible TP matrix. The inequality ${(*)}$ implies that ${\Delta_{[1..p],[1..p]}(A)>0}$ for every ${p\in [1..n]}$. Theorem D implies that there is a unique factorization ${A=LDU}$ where ${D}$ is an invertible diagonal matrix and ${L,U}$ are, respectively, lower and upper unitriangular matrices. The coefficients of these matrices can be written as quotients of minors of ${A}$ and are, therefore, positive. It is possible to prove that the matrices ${L,D,U}$ are TP. This result reduces the study of invertible TP matrices to the study of unitriangular TP matrices. We will continue this study in Part 6.

Part 3 – Relative position of two flags

In this part ${n\geq 2}$ is an integer.

To every permutation ${\sigma \in S_n}$ we associate two tables: ${(p_{i,j}(\sigma))}$ for ${(i,j) \in [1..n]^2}$ and ${(d_{i,j}(\sigma))}$ for ${(i,j) \in [0..n]^2}$. These tables are defined as follows:

$\displaystyle p_{i,j}(\sigma)=\begin{cases} 1 & \text{ if }i=\sigma(j) \\ 0 & \text{ otherwise} \end{cases}$

$\displaystyle d_{i,j}(\sigma)=\begin{cases} \text{Card}([1..i]\cap \sigma^{-1}([1..j])) & i \geq 1,\ j \geq 1 \\ 0 & i=0 \text{ or }j=0. \end{cases}$

The matrix ${P_\sigma=(p_{i,j}(\sigma))}$ is called the permutation matrix of ${\sigma}$.
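Both tables are straightforward to compute. In the sketch below (a convention of ours, not the exam's) a permutation is given as the 1-based list ${(\sigma(1),...,\sigma(n))}$:

```python
def p_table(sigma):
    # p_{i,j} = 1 iff i = sigma(j); 1-based indices, returned as an n x n list
    n = len(sigma)
    return [[1 if sigma[j - 1] == i else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def d_table(sigma):
    # d_{i,j} = Card([1..i] intersect sigma^{-1}([1..j])) = #{k <= i : sigma(k) <= j},
    # with the convention d_{i,j} = 0 when i = 0 or j = 0
    n = len(sigma)
    return [[sum(1 for k in range(1, i + 1) if sigma[k - 1] <= j)
             for j in range(0, n + 1)] for i in range(0, n + 1)]

sigma = [3, 1, 2]  # sigma(1)=3, sigma(2)=1, sigma(3)=2
for row in d_table(sigma):
    print(row)
```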

3.1 Determine the table ${(d_{i,j}(\sigma))}$ associated to the permutation ${\sigma \in S_3}$ defined by

$\displaystyle \sigma(1)=2,\ \sigma(2)=3,\ \sigma(3)=1.$

3.2 (a) Prove that for every permutation ${\sigma \in S_n}$ we have

$\displaystyle d_{i,j}(\sigma) = \sum_{k=1}^i \sum_{l=1}^j p_{k,l}(\sigma)$

(b) Prove that for every permutation ${\sigma \in S_n}$ we have

$\displaystyle p_{i,j}(\sigma)= d_{i,j}(\sigma)-d_{i,j-1}(\sigma)-d_{i-1,j}(\sigma)+d_{i-1,j-1}(\sigma).$

(c) Consider a table of integers ${\delta_{i,j}}$ with ${(i,j) \in [0..n]^2}$ such that
(i) ${\delta_{i,0}=0}$ and ${\delta_{i,n}=i}$ for ${i\in [0..n]}$;
(ii) ${\delta_{0,j}=0}$ and ${\delta_{n,j}=j}$ for ${j \in [0..n]}$;
(iii) ${\delta_{i,j}-\delta_{i,j-1}-\delta_{i-1,j}+\delta_{i-1,j-1}\in \{0,1\}}$ for ${(i,j) \in [1..n]^2}$.
Prove that there exists a unique permutation ${\sigma \in S_n}$ such that ${\delta_{i,j} =d_{i,j}(\sigma)}$ for every ${(i,j) \in [0..n]^2}$.

In the rest of this part we consider a field ${\Bbb{K}}$ and a ${\Bbb{K}}$-vector space ${E}$ of dimension ${n}$.

We call a flag a sequence ${\Bbb{F}=(F_0,..,F_n)}$ of linear subspaces of ${E}$ such that ${\dim F_k=k}$ and ${F_{k-1} \subset F_k}$ for every ${k}$. Denote ${\mathcal{F}}$ the set of all flags.

Given a flag ${\Bbb{F}=(F_0,...,F_n)}$ and an automorphism ${g \in GL(E)}$, we can take the images of the linear subspaces ${F_k}$ by ${g}$ and obtain a new flag ${g \cdot \Bbb{F} = (g(F_0),...,g(F_n))}$. The map ${(g,\Bbb{F})\mapsto g \cdot \Bbb{F}}$ defines a group action of ${GL(E)}$ on ${\mathcal{F}}$.

3.3 Prove that the action of ${GL(E)}$ on ${\mathcal{F}}$ is transitive, which means that ${\mathcal{F}}$ contains only one orbit.

3.4 Let ${\Bbb{F}=(F_0,F_1,...,F_n)}$ and ${\Bbb{G}=(G_0,G_1,...,G_n)}$ be two flags. For ${(i,j) \in [0..n]^2}$ we denote ${\delta_{i,j} =\dim (F_i\cap G_j)}$.

(a) Prove that for every ${(i,j) \in [1..n]^2}$ we have

$\displaystyle \delta_{i,j}-\delta_{i,j-1}-\delta_{i-1,j}+\delta_{i-1,j-1}=\dim \left(\frac{F_i\cap G_j}{(F_i\cap G_{j-1})+(F_{i-1}\cap G_j)}\right).$

(b) Prove that there is a unique permutation ${\sigma \in S_n}$ such that ${\delta_{i,j}=d_{i,j}(\sigma)}$ for every ${(i,j) \in [0..n]^2}$.

We say that the pair ${(\Bbb{F},\Bbb{G})}$ is in position ${\sigma}$. In light of the previous questions we observe that

$\displaystyle p_{i,j}(\sigma)=\dim \left(\frac{F_i\cap G_j}{(F_i\cap G_{j-1})+(F_{i-1}\cap G_j)}\right).$

For every permutation ${\sigma \in S_n}$ we denote ${\mathcal{O}_\sigma}$ the set of pairs of flags ${(\Bbb{F},\Bbb{G})}$ in position ${\sigma}$.

3.5 Consider ${\Bbb{F}=(F_0,F_1,...,F_n)}$ and ${\Bbb{G}=(G_0,G_1,...,G_n)}$ two flags and ${\sigma \in S_n}$. Prove that the following statements are equivalent:

(i) ${(\Bbb{F},\Bbb{G}) \in \mathcal{O}_\sigma}$.

(ii) There exists a basis ${(e_1,...,e_n)}$ of ${E}$ such that for every ${(i,j) \in [0..n]^2}$ the linear subspace ${F_i}$ is spanned by ${\{e_1,...,e_i\}}$ and the linear subspace ${G_j}$ is spanned by ${\{e_{\sigma(1)},...,e_{\sigma(j)}\}}$.

We define an action of ${GL(E)}$ on ${\mathcal{F}^2}$ by ${g \cdot (\Bbb{F},\Bbb{G})=(g \cdot \Bbb{F},g\cdot \Bbb{G})}$.

3.6 Prove that the sets ${\mathcal{O}_\sigma}$ (for ${\sigma \in S_n}$) are the orbits of the action of ${GL(E)}$ on ${\mathcal{F}^2}$.

For ${k \in [1..n-1]}$ we denote ${\tau_k}$ the transposition which exchanges ${k}$ and ${k+1}$.

3.7 Let ${(\sigma,k) \in S_n \times [1..n-1]}$ and consider a pair of flags ${(\Bbb{F},\Bbb{G})}$ in position ${\sigma}$. We denote ${\mathcal{F}^\prime}$ the set of flags ${\Bbb{F}^\prime}$ which differ from ${\Bbb{F}}$ only in the linear subspace of dimension ${k}$.

(a) Prove that for every ${\Bbb{F}^\prime \in \mathcal{F}^\prime}$ the pair ${(\Bbb{F}^\prime, \Bbb{G})}$ is in position ${\sigma}$ or ${\tau_k \circ \sigma}$.

(b) Suppose that ${\sigma^{-1}(k)<\sigma^{-1}(k+1)}$. Prove that for every ${\Bbb{F}^\prime \in \mathcal{F}^\prime\setminus\{\Bbb{F}\}}$ the pair ${(\Bbb{F}^\prime,\Bbb{G})}$ is in position ${\tau_k \circ \sigma}$.