
Matrices - Notes, Concept and All Important Formula

MATRICES

1. INTRODUCTION:

A rectangular array of \(mn\) numbers arranged in \(m\) horizontal lines (called rows) and \(n\) vertical lines (called columns) is called a matrix of order \(m\) by \(n\), written as an \(m \times n\) matrix. In compact form, the matrix is represented by \(A=\left[a_{ij}\right]_{m \times n}\).




2. SPECIAL TYPE OF MATRICES :

(a) Row Matrix (Row vector) : \(A=\left[a_{11}, a_{12}, \ldots \ldots \ldots . a_{1 n}\right]\) i.e. row matrix has exactly one row.
(b) Column Matrix (Column vector) : \(A=\left[\begin{array}{c}a_{11} \\ a_{21} \\ \vdots \\ a_{m 1}\end{array}\right]\) i.e. column matrix has exactly one column.
(c) Zero or Null Matrix : \(\left(A=O_{m \times n}\right)\), An \(m \times n\) matrix whose all entries are zero.
(d) Horizontal Matrix : A matrix of order \(\mathrm{m} \times \mathrm{n}\) is a horizontal matrix if \(\mathrm{n}>\mathrm{m}\).
(e) Vertical Matrix : A matrix of order \(m \times n\) is a vertical matrix if \(\mathrm{m}>\mathrm{n}\)
(f) Square Matrix : (Order n) If number of rows \(=\) number of columns, then the matrix is a square matrix.

Note :
(i) The pair of elements \(a_{ij}\) & \(a_{ji}\) are called Conjugate Elements.
(ii) The elements \(a_{11}, a_{22}, a_{33}, \ldots\) are called Diagonal Elements. The line along which the diagonal elements lie is called the "Principal or Leading diagonal". The quantity \(\sum a_{ii}=\) trace of the matrix, written as \(\operatorname{tr}(A)\).
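As a quick illustration of the trace, here is a minimal NumPy sketch (the matrix entries are arbitrary, chosen only for the example) that reads off the diagonal elements and sums them:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

diagonal = np.diag(A)       # principal (leading) diagonal: [1, 5, 9]
trace = diagonal.sum()      # sum of a_ii

print(diagonal)             # [1 5 9]
print(trace, np.trace(A))   # 15 15 -- matches NumPy's built-in trace
```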



3. SQUARE MATRICES :

Note :
(i) Minimum number of zeros in triangular matrix of order \(\mathrm{n}\) \(=\mathrm{n}(\mathrm{n}-1) / 2\)
(ii) Minimum number of zeros in a diagonal matrix of order \(\mathrm{n}\) \(=\mathrm{n}(\mathrm{n}-1)\)
(iii) Null square matrix is also a diagonal matrix.



4. EQUALITY OF MATRICES :

Matrices \(A=\left[a_{ij}\right]\) & \(B=\left[b_{ij}\right]\) are equal if,
(a) both have the same order.
(b) \(a_{ij}=b_{ij}\) for each pair of \(i\) & \(j\).



5. ALGEBRA OF MATRICES :

(I) Addition : \(A+B=\left[a_{i j}+b_{i j}\right]\) where \(A \) & \(B\) are of the same order.

(a) Addition of matrices is commutative: \( A+B=B+A\)
(b) Matrix addition is associative: \((A+B)+C=A+(B+C)\)
(c) \(A+O=O+A=A\) (Additive identity)
(d) \(A+(-A)=(-A)+A=O\) (Additive inverse)

(II) Multiplication of A Matrix By A Scalar:

If \(A=\left[\begin{array}{lll}a & b & c \\ b & c & a \\ c & a & b\end{array}\right]\), then \(k A=\left[\begin{array}{lll}k a & k b & k c \\ k b & k c & k a \\ k c & k a & k b\end{array}\right]\)
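The addition and scalar-multiplication rules above are entrywise, so they are easy to check numerically. A small sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
k = 3

print(np.array_equal(A + B, B + A))                # commutativity of addition
print(np.array_equal(A + np.zeros_like(A), A))     # additive identity
print(np.array_equal(A + (-A), np.zeros_like(A)))  # additive inverse
print(k * A)                                       # every entry multiplied by k
```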

(III) Multiplication of matrices (Row by Column):

Let \(A\) be a matrix of order \(m \times n\) and \(B\) be a matrix of order \(p \times q\), then the matrix multiplication \(AB\) is possible if and only if \(n=p\).

Let \(A_{m \times n}=\left[a_{i j}\right]\) and \(B_{n \times p}=\left[b_{i j}\right]\), then order of \(A B\) is \(m \times p\) & \(\boxed{(\mathrm{AB})_{\mathrm{ij}}=\sum_{\mathrm{r}=1}^{\mathrm{n}} \mathrm{a}_{\mathrm{ir}} \mathrm{b}_{\mathrm{rj}}}\)
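The boxed formula \((AB)_{ij}=\sum_{r=1}^{n} a_{ir} b_{rj}\) translates directly into nested loops. A sketch for illustration only (in practice one would use the built-in matrix product):

```python
import numpy as np

def matmul(A, B):
    """Row-by-column product: (AB)_ij = sum_r a_ir * b_rj."""
    m, n = A.shape
    p, q = B.shape
    assert n == p, "AB is defined only when columns of A = rows of B"
    C = np.zeros((m, q))
    for i in range(m):
        for j in range(q):
            C[i, j] = sum(A[i, r] * B[r, j] for r in range(n))
    return C

A = np.array([[1, 2, 3], [4, 5, 6]])        # 2 x 3
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3 x 2
print(matmul(A, B))                         # 2 x 2 result
print(np.array_equal(matmul(A, B), A @ B))  # True
```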

(IV) Properties of Matrix Multiplication :

(a) \(AB=O \nRightarrow A=O\) or \(B=O\) (in general)
Note :
If \(A\) and \(B\) are two non-zero matrices such that \(A B=O\), then \(A\) and \(B\) are called the divisors of zero. If \(A\) and \(B\) are two matrices such that
(i) \(A B=B A\) then \(A\) and \(B\) are said to commute
(ii) \(\mathrm{AB}=-\mathrm{BA}\) then \(\mathrm{A}\) and \(\mathrm{B}\) are said to anticommute
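A concrete instance of non-zero divisors of zero (a standard example, verified numerically here):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

print(A @ B)                          # [[0 0], [0 0]] -- AB = O although A != O and B != O
print(np.array_equal(A @ B, B @ A))   # False: these particular A, B do not commute
```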
(b) Matrix Multiplication Is Associative :
If \(A, B \) & \(C\) are conformable for the product \(A B \) & \( B C\), then \((\mathrm{AB}) \mathrm{C}=\mathrm{A}(\mathrm{BC})\)
(c) Distributivity :
\(A(B+C)=AB+AC\) and \((A+B)C=AC+BC\), provided \(A, B\) & \(C\) are conformable for the respective products.

(V) Positive Integral Powers of A square matrix :

(a) \(A^{m} A^{n}=A^{m+n}\)
(b) \(\left(\mathrm{A}^{\mathrm{m}}\right)^{\mathrm{n}}=\mathrm{A}^{\mathrm{mn}}=\left(\mathrm{A}^{\mathrm{n}}\right)^{\mathrm{m}}\)
(c) \(\mathrm{I}^{\mathrm{m}}=\mathrm{I}, \mathrm{m}, \mathrm{n} \in \mathrm{N}\)



6. CHARACTERISTIC EQUATION:

Let \(A\) be a square matrix. Then the polynomial \(|A-x I|\) in \(x\) is called the characteristic polynomial of \(A\) & the equation \(|A-x I|=0\) is called the characteristic equation of \(A\).




7. CAYLEY - HAMILTON THEOREM :

Every square matrix \(A\) satisfies its characteristic equation, i.e. if \(a_{0} x^{n}+a_{1} x^{n-1}+\ldots+a_{n-1} x+a_{n}=0\) is the characteristic equation of matrix \(A\), then \(a_{0} A^{n}+a_{1} A^{n-1}+\ldots+a_{n-1} A+a_{n} I=O\)
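For a \(2 \times 2\) matrix the characteristic equation is \(x^{2}-\operatorname{tr}(A)\, x+|A|=0\), so Cayley-Hamilton gives \(A^{2}-\operatorname{tr}(A)\, A+|A|\, I=O\). A quick numerical check with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2., 1.],
              [3., 4.]])

tr = np.trace(A)         # 6
det = np.linalg.det(A)   # 2*4 - 1*3 = 5

# Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A)*A + det(A)*I = O
residual = A @ A - tr * A + det * np.eye(2)
print(np.allclose(residual, np.zeros((2, 2))))   # True
```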




8. TRANSPOSE OF A MATRIX : (Changing rows & columns)

Let \(A\) be any matrix of order \(m \times n\). Then the transpose of \(A\), denoted \(A^{T}\) or \(A^{\prime}\), is of order \(n \times m\) and \(\left(A^{T}\right)_{ij}=A_{ji}\).

Properties of transpose :

If \(A^{T}\) & \(B^{T}\) denote the transposes of \(A\) and \(B\), then
(a) \((\mathrm{A}+\mathrm{B})^{\mathrm{T}}=\mathrm{A}^{\mathrm{T}}+\mathrm{B}^{\mathrm{T}} ;\) note that \(\mathrm{A} \) & \(\mathrm{~B}\) have the same order.
(b) \((AB)^{T}=B^{T} A^{T}\) (Reversal law), provided \(A\) & \(B\) are conformable for the matrix product \(AB\).
(c) \(\left(\mathrm{A}^{\mathrm{T}}\right)^{\mathrm{T}}=\mathrm{A}\)
(d) \((\mathrm{kA})^{\mathrm{T}}=\mathrm{kA}^{\mathrm{T}}\), where \(\mathrm{k}\) is a scalar.
General: \(\left(A_{1} A_{2} \ldots A_{n}\right)^{T}=A_{n}^{T} \ldots A_{2}^{T} A_{1}^{T}\) (reversal law for transpose)
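The reversal law and the other transpose properties are easy to confirm numerically; a minimal sketch with arbitrary conformable matrices:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])             # 3 x 2

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
print(np.array_equal((A + A).T, A.T + A.T))   # True: (A+B)^T = A^T + B^T
print(np.array_equal((5 * A).T, 5 * A.T))     # True: (kA)^T = k A^T
```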



9. ORTHOGONAL MATRIX

A square matrix \(A\) is said to be an orthogonal matrix if \(A A^{T}=I\)

Note :
(i) The determinant of an orthogonal matrix is either 1 or \(-1\). Hence an orthogonal matrix is always invertible.
(ii) \(\mathrm{AA}^{\mathrm{T}}=\mathrm{I}=\mathrm{A}^{\mathrm{T}} \mathrm{A}\). Hence \(\mathrm{A}^{-1}=\mathrm{A}^{\mathrm{T}}\).
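A rotation matrix is a standard example of an orthogonal matrix; the sketch below checks \(AA^{T}=I=A^{T}A\), \(A^{-1}=A^{T}\) and \(|A|=\pm 1\) for an arbitrary angle:

```python
import numpy as np

t = 0.7                                    # arbitrary angle
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])    # 2D rotation matrix

print(np.allclose(A @ A.T, np.eye(2)))     # True: A A^T = I
print(np.allclose(A.T @ A, np.eye(2)))     # True: A^T A = I
print(np.allclose(np.linalg.inv(A), A.T))  # True: A^{-1} = A^T
print(round(np.linalg.det(A), 10))         # 1.0 (determinant is +1 or -1)
```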



10. SOME SPECIAL SQUARE MATRICES :

(a) Idempotent Matrix : A square matrix is idempotent provided \(\mathrm{A}^{2}=\mathrm{A} .\)

For idempotent matrix note the following:
(i) \(A^{n}=A\,\,  \forall \,\, n \in N\)
(ii) determinant value of idempotent matrix is either 0 or 1
(iii) If idempotent matrix is invertible then it will be an identity matrix i.e. I.
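One standard non-trivial idempotent matrix, checked numerically (its determinant comes out 0, consistent with note (ii)):

```python
import numpy as np

A = np.array([[ 2, -2, -4],
              [-1,  3,  4],
              [ 1, -2, -3]])

print(np.array_equal(A @ A, A))   # True: A^2 = A, hence A^n = A for all n
print(round(np.linalg.det(A)))    # 0  (idempotent => determinant is 0 or 1)
```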

(b) Periodic Matrix : A square matrix which satisfies the relation \(A^{k+1}=A\), for some positive integer \(k\), is a periodic matrix. The period of the matrix is the least value of \(k\) for which this holds true.
Note that the period of an idempotent matrix is 1.

(c) Nilpotent Matrix : A square matrix is said to be nilpotent matrix of order \(m, m \in N\), if \(A^{m}=O, A^{m-1} \neq O\)
Note that a nilpotent matrix will not be invertible.

(d) Involutory Matrix : If \(A^{2}=I\), the matrix is said to be an involutory matrix.
Note that \(A=A^{-1}\) for an involutory matrix.
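Small concrete instances of nilpotent and involutory matrices, verified numerically (the particular matrices are standard textbook examples):

```python
import numpy as np

N = np.array([[0, 1],
              [0, 0]])
print(np.array_equal(N @ N, np.zeros((2, 2))))  # True: N^2 = O but N != O (nilpotent, order 2)
print(round(np.linalg.det(N)))                  # 0, so N is not invertible

P = np.array([[0, 1],
              [1, 0]])
print(np.array_equal(P @ P, np.eye(2)))         # True: P^2 = I (involutory)
print(np.allclose(np.linalg.inv(P), P))         # True: P = P^{-1}
```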

(e) If \(A\) and \(B\) are square matrices of same order and \(A B=B A\) then
\((A+B)^{n}={ }^{n} C_{0} A^{n}+{ }^{n} C_{1} A^{n-1} B+{ }^{n} C_{2} A^{n-2} B^{2}+\ldots+{ }^{n} C_{n} B^{n}\)
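A quick check of the commuting-matrices binomial formula for \(n=2\), i.e. \((A+B)^{2}=A^{2}+2AB+B^{2}\), using a pair that commutes by construction (here \(B\) is a polynomial in \(A\), an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = 2 * A @ A + 3 * np.eye(2)      # any polynomial in A commutes with A

print(np.allclose(A @ B, B @ A))   # True: A and B commute
lhs = np.linalg.matrix_power(A + B, 2)
rhs = A @ A + 2 * (A @ B) + B @ B
print(np.allclose(lhs, rhs))       # True: (A+B)^2 = A^2 + 2AB + B^2
```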



11. SYMMETRIC & SKEW SYMMETRIC MATRIX :

(a) Symmetric matrix :

For a symmetric matrix \(\mathbf{A}=\mathbf{A}^{\mathbf{T}}\) i.e. \(a_{ij}=a_{ji} \,\, \forall \,\, i, j\)
Note : Maximum number of distinct entries in any symmetric matrix of order \(\mathrm{n}\) is \(\dfrac{\mathrm{n}(\mathrm{n}+1)}{2}\).

(b) Skew symmetric matrix :

Square matrix \(A=\left[a_{ij}\right]\) is said to be skew symmetric if \(A^{T}=-A\) i.e. \(a_{ij}=-a_{ji} \,\, \forall \,\, i \, \& \, j\). Hence if \(A\) is skew symmetric, then \(a_{ii}=-a_{ii} \Rightarrow a_{ii}=0 \,\, \forall \,\, i\)
Thus the diagonal elements of a skew symmetric matrix are all zero, but the converse need not hold.

(c) Properties of symmetric & skew symmetric matrix :

(i) Let \(A\) be any square matrix then, \(A+A^{T}\) is a symmetric matrix & \( A-A^{T}\) is a skew symmetric matrix.
(ii) The sum of two symmetric matrices is a symmetric matrix and the sum of two skew symmetric matrices is a skew symmetric matrix.

(iii) If \(A \) & \(B\) are symmetric matrices then,
(1) \(\mathrm{AB}+\mathrm{BA}\) is a symmetric matrix
(2) \(\mathrm{AB}-\mathrm{BA}\) is a skew symmetric matrix.

(iv) Every square matrix can be uniquely expressed as a sum or difference of a symmetric and a skew symmetric matrix.
\(\begin{aligned}A &=\underbrace{\frac{1}{2}\left(\mathrm{~A}+\mathrm{A}^{\mathrm{T}}\right)}_{\text {symmetric }}+\underbrace{\frac{1}{2}\left(\mathrm{~A}-\mathrm{A}^{\mathrm{T}}\right)}_{\text {skew symmetric }} \\\text { and } \quad A &=\frac{1}{2}\left(\mathrm{~A}^{\mathrm{T}}+\mathrm{A}\right)-\frac{1}{2}\left(\mathrm{~A}^{\mathrm{T}}-\mathrm{A}\right)\end{aligned}\)
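The decomposition in (iv) is direct to compute; a minimal sketch splitting an arbitrary example matrix into its symmetric and skew symmetric parts:

```python
import numpy as np

A = np.array([[1., 4., 7.],
              [2., 5., 8.],
              [3., 6., 9.]])

S = 0.5 * (A + A.T)           # symmetric part
K = 0.5 * (A - A.T)           # skew symmetric part

print(np.allclose(S, S.T))    # True: S is symmetric
print(np.allclose(K, -K.T))   # True: K is skew symmetric (zero diagonal)
print(np.allclose(S + K, A))  # True: A = S + K
```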



12. ADJOINT OF A SQUARE MATRIX :

Let \(A=\left[a_{i j}\right]=\left(\begin{array}{lll}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{array}\right)\) be a square matrix and let the matrix formed by the cofactors of \(\left[a_{i j}\right]\) in determinant \(|A|\) be \(\left(\begin{array}{lll}\mathrm{C}_{11} & \mathrm{C}_{12} & \mathrm{C}_{13} \\ \mathrm{C}_{21} & \mathrm{C}_{22} & \mathrm{C}_{23} \\ \mathrm{C}_{31} & \mathrm{C}_{32} & \mathrm{C}_{33}\end{array}\right) .\) Then \((\operatorname{adj} \mathrm{A})=\left(\begin{array}{ccc}\mathrm{C}_{11} & \mathrm{C}_{21} & \mathrm{C}_{31} \\ \mathrm{C}_{12} & \mathrm{C}_{22} & \mathrm{C}_{32} \\ \mathrm{C}_{13} & \mathrm{C}_{23} & \mathrm{C}_{33}\end{array}\right)\) = Transpose of cofactor matrix.
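A small sketch computing the adjoint as the transpose of the cofactor matrix (the helper names `cofactor_matrix` and `adjoint` are only for this illustration), then checking \(A(\operatorname{adj} A)=|A| I\):

```python
import numpy as np

def cofactor_matrix(A):
    """C[i, j] = (-1)^(i+j) * minor(i, j)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

def adjoint(A):
    return cofactor_matrix(A).T   # adj A = transpose of cofactor matrix

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [5., 6., 0.]])

print(np.allclose(A @ adjoint(A), np.linalg.det(A) * np.eye(3)))  # True: A (adj A) = |A| I
```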

Note :
If \(A\) be a square matrix of order \(n\), then
(i) \(\quad A(\operatorname{adj} A)=|A| I_{n}=(\operatorname{adj} A) \cdot A\)
(ii) \(\quad|\operatorname{adj} A|=|A|^{n-1}, n \geq 2\)
(iii) \(\operatorname{adj}(\operatorname{adj} A)=|A|^{n-2} A, \quad|A| \neq 0\).
(iv) \(\operatorname{adj}(\mathrm{AB})=(\operatorname{adj} B)(\operatorname{adj} \mathrm{A})\)
(v) \(\operatorname{adj}(\mathrm{KA})=\mathrm{K}^{\mathrm{n}-1}(\operatorname{adj} \mathrm{A})\), where \(\mathrm{K}\) is a scalar



13. INVERSE OF A MATRIX (Reciprocal Matrix) :

A square matrix \(A\) is said to be invertible (non-singular) if there exists a matrix \(B\) such that \(AB=I\) (note that \(AB=I \Leftrightarrow BA=I\)). \(B\) is called the inverse (reciprocal) of \(A\) and is denoted by \(A^{-1}\). Thus \(A^{-1}=B \Leftrightarrow AB=I=BA\)
We have, \(A \cdot(\operatorname{adj} A)=|A| I_{n}\)
\(\Rightarrow \mathrm{A}^{-1} \cdot \mathrm{A}(\operatorname{adj} \mathrm{A})=\mathrm{A}^{-1} \mathrm{I}_{\mathrm{n}}|\mathrm{A}|\)
\(\Rightarrow \mathrm{I}_{\mathrm{n}}(\operatorname{adj} \mathrm{A})=\mathrm{A}^{-1}|\mathrm{~A}| \mathrm{I}_{\mathrm{n}}\)
\(\therefore \quad \mathrm{A}^{-1}=\frac{(\operatorname{adj} \mathrm{A})}{|\mathrm{A}|}\)
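For a \(2 \times 2\) matrix \(\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)\) the adjoint is \(\left(\begin{array}{rr}d & -b \\ -c & a\end{array}\right)\), which makes the formula \(A^{-1}=\dfrac{\operatorname{adj} A}{|A|}\) easy to verify directly:

```python
import numpy as np

A = np.array([[4., 7.],
              [2., 6.]])            # |A| = 4*6 - 7*2 = 10 != 0

adjA = np.array([[ 6., -7.],
                 [-2.,  4.]])       # adj of a 2x2: swap the diagonal, negate the off-diagonal

A_inv = adjA / np.linalg.det(A)     # A^{-1} = adj A / |A|
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))      # True: A A^{-1} = I
```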

Note : The necessary and sufficient condition for a square matrix \(\mathrm{A}\) to be invertible is that \(|\mathrm{A}| \neq 0\)

Theorem : If \(A \) & \( B\) are invertible matrices of the same order, then \((\mathrm{AB})^{-1}=\mathrm{B}^{-1} \mathrm{~A}^{-1}\)

Note:
(i) If \(A\) be an invertible matrix, then \(A^{T}\) is also invertible & \(\left(\mathrm{A}^{\mathrm{T}}\right)^{-1}=\left(\mathrm{A}^{-1}\right)^{\mathrm{T}}\)

(ii) If \(A\) is invertible,
(a) \(\left(\mathrm{A}^{-1}\right)^{-1}=\mathrm{A}\)
(b) \(\left(A^{k}\right)^{-1}=\left(A^{-1}\right)^{k}=A^{-k} ; k \in N\)

(iii) \(\left|A^{-1}\right|=\dfrac{1}{|A|}\).
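The theorem and notes above can be confirmed numerically on arbitrary invertible matrices; a minimal sketch:

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])    # det = 1, so invertible
B = np.array([[1., 2.],
              [3., 7.]])    # det = 1, so invertible

Ainv = np.linalg.inv(A)

print(np.allclose(A @ Ainv, np.eye(2)))                            # A A^{-1} = I
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))  # (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose(np.linalg.inv(A.T), Ainv.T))                     # (A^T)^{-1} = (A^{-1})^T
print(np.isclose(np.linalg.det(Ainv), 1 / np.linalg.det(A)))       # |A^{-1}| = 1/|A|
```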




14. SYSTEM OF EQUATIONS & CRITERIA FOR CONSISTENCY (Gauss-Jordan method) :

Example :
\(a_{1} x+b_{1} y+c_{1} z=d_{1}\)
\(a_{2} x+b_{2} y+c_{2} z=d_{2}\)
\(a_{3} x+b_{3} y+c_{3} z=d_{3}\)

\(\Rightarrow\left[\begin{array}{l}a_{1} x+b_{1} y+c_{1} z \\ a_{2} x+b_{2} y+c_{2} z \\ a_{3} x+b_{3} y+c_{3} z\end{array}\right]=\left[\begin{array}{l}d_{1} \\ d_{2} \\ d_{3}\end{array}\right]\)\( \Rightarrow\left[\begin{array}{lll}a_{1} & b_{1} & c_{1} \\ a_{2} & b_{2} & c_{2} \\ a_{3} & b_{3} & c_{3}\end{array}\right]\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{l}d_{1} \\ d_{2} \\ d_{3}\end{array}\right]\)

\(\Rightarrow A X=B \quad \Rightarrow A^{-1} A X=A^{-1} B\), if \(|A| \neq 0\)

\(\Rightarrow X=A^{-1} B=\dfrac{\operatorname{Adj} A}{|A|} \cdot B\)
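A minimal sketch of the matrix method \(X=A^{-1} B\) for an arbitrary \(3 \times 3\) example system (the coefficients are chosen only for illustration):

```python
import numpy as np

# Example system:
#   x + 2y +  z =  4
#  2x -  y + 3z =  9
#   x +  y -  z = -2
A = np.array([[1.,  2.,  1.],
              [2., -1.,  3.],
              [1.,  1., -1.]])
B = np.array([[4.], [9.], [-2.]])

assert not np.isclose(np.linalg.det(A), 0)   # |A| != 0 => unique solution

X = np.linalg.inv(A) @ B                     # X = A^{-1} B (equivalently np.linalg.solve(A, B))
print(X.ravel())                             # the unique solution [x, y, z]
print(np.allclose(A @ X, B))                 # True: the solution satisfies AX = B
```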

Note:
(i) If \(|A| \neq 0\), system is consistent having unique solution
(ii) If \(|A| \neq 0 \) & \((\operatorname{adj} A) \cdot B \neq O\) (Null matrix), system is consistent having unique non-trivial solution.
(iii) If \(|A| \neq 0 \) & \((\operatorname{adj} A) \cdot B=O\) (Null matrix), system is consistent having trivial solution. 
(iv) If \(|A|=0\), the matrix method fails: if \((\operatorname{adj} A) \cdot B \neq O\) (Null matrix), the system is inconsistent (no solution), and if \((\operatorname{adj} A) \cdot B=O\), the system has either infinitely many solutions or no solution.







