## Introduction

We will see two special matrices in this chapter: the identity matrix and the inverse of a matrix. These concepts will be very useful in the next chapters. We will see at the end of this chapter that we can solve systems of linear equations by using the inverse matrix. So hang on!

## 2.3 Identity and Inverse Matrices

## Identity matrices

The identity matrix $\bs{I}_n$ is a special matrix of shape ($n \times n$) that is filled with $0$ except for the diagonal, which is filled with $1$.

*A 3 by 3 identity matrix*

An identity matrix can be created with the Numpy function eye() :
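For instance, a minimal sketch (the variable name is my own):

```python
import numpy as np

# Create a 3x3 identity matrix: ones on the diagonal, zeros elsewhere
I = np.eye(3)
print(I)
```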

When we *apply* the identity matrix to a vector, the result is this same vector:

#### Example 1.

$ \bs{I}_3 \times \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} $
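A quick sketch of this property, using an arbitrary example vector:

```python
import numpy as np

I = np.eye(3)
x = np.array([2., 5., 7.])  # an arbitrary example vector

# Applying the identity matrix (dot product) returns the same vector
result = I.dot(x)
print(result)  # -> [2. 5. 7.]
```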

### Intuition

You can think of a matrix as a way to transform objects in an $n$-dimensional space. It applies a linear transformation of the space. We can say that we *apply* a matrix to an element: this means that we do the dot product between this matrix and the element (more details about the dot product in 2.2). We will see this notion thoroughly in the next chapters but the identity matrix is a good first example. It is a particular example because the space doesn’t change when we *apply* the identity matrix to it.

The space doesn’t change when we *apply* the identity matrix to it

We saw that $\bs{I}_n\bs{x} = \bs{x}$.

## Inverse Matrices

#### Example 2.

The inverse of $\bs{A}$, denoted $\bs{A}^{-1}$, is the matrix such that $\bs{A}^{-1}\bs{A}=\bs{I}_n$. For this example, we will use the Numpy function linalg.inv() to calculate the inverse of $\bs{A}$. Let’s start by creating $\bs{A}$:

Now we calculate its inverse:

We can check that the product of $\bs{A}^{-1}$ and $\bs{A}$ gives the identity matrix:
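A sketch of the whole example; the specific matrix $\bs{A}$ here is my own choice, not necessarily the one from the original notebook:

```python
import numpy as np

A = np.array([[3., 0., 2.],
              [2., 0., -2.],
              [0., 1., 1.]])

# Compute the inverse with numpy.linalg.inv()
A_inv = np.linalg.inv(A)

# Check that A_inv . A gives the identity matrix (up to floating point error)
print(A_inv.dot(A))
```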

We will see that the inverse of a matrix can be very useful, for instance to solve a set of linear equations. We must note however that non-square matrices (matrices with more columns than rows or more rows than columns) don’t have an inverse.

## Solving a system of linear equations

An introduction to systems of linear equations can be found in 2.2.

The inverse matrix can be used to solve the equation $\bs{A}\bs{x}=\bs{b}$ by multiplying each side by $\bs{A}^{-1}$:

$ \bs{A}^{-1}\bs{A}\bs{x}=\bs{A}^{-1}\bs{b} $

Since we know by definition that $\bs{A}^{-1}\bs{A}=\bs{I}_n$, we have:

$ \bs{I}_n\bs{x}=\bs{A}^{-1}\bs{b} $

We saw that a vector is not changed when multiplied by the identity matrix. So we can write:

$ \bs{x}=\bs{A}^{-1}\bs{b} $

This is great! We can solve a set of linear equations just by computing the inverse of $\bs{A}$ and applying this matrix to the vector of results $\bs{b}$!

#### Example 3.

We will take a simple solvable example:

$ \begin{cases} y = 2x \\ y = -x + 3 \end{cases} $

We will use the notation that we saw in 2.2:

$ \begin{cases} 2x_1 - x_2 = 0 \\ x_1 + x_2 = 3 \end{cases} $

Here, $x_1$ corresponds to $x$ and $x_2$ corresponds to $y$. So we have:

$ \bs{A} = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix} $

And the vector $\bs{b}$ containing the results of the individual equations is:

$ \bs{b} = \begin{bmatrix} 0 \\ 3 \end{bmatrix} $

In matrix form, our system becomes:

$ \bs{A}\bs{x} = \bs{b} $

Let’s find the inverse of $\bs{A}$:

Since we saw that $\bs{x}=\bs{A}^{-1}\bs{b}$, we can compute $\bs{x}$ by applying $\bs{A}^{-1}$ to $\bs{b}$:

This is our solution!
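The steps above can be sketched in code. I am assuming the matrix A = [[2, -1], [1, 1]] and vector b = [0, 3] (my reconstruction of the example, consistent with the stated solution (1, 2)):

```python
import numpy as np

A = np.array([[2., -1.],
              [1., 1.]])
b = np.array([0., 3.])

# x = A^-1 . b
A_inv = np.linalg.inv(A)
x = A_inv.dot(b)
print(x)  # -> [1. 2.]
```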

This means that the point of coordinates (1, 2) is the solution and is at the intersection of the lines representing the equations. Let’s plot them to check this solution:

We can see that the solution (corresponding to the line crossing) is when $x=1$ and $y=2$. It confirms what we found with the matrix inversion!

### BONUS: Coding tip — Draw an equation

To draw the equation with Matplotlib, we first need to create a vector with all the $x$ values. Actually, since this is a line, only two points would have been sufficient. But with more complex functions, the length of the vector $x$ corresponds to the sampling rate. So here we used the Numpy function arange() (see the doc) to create a vector from $-10$ to $10$ (not included).

The first argument is the starting point and the second the ending point. You can add a third argument to specify the step:

Then we create a second vector $y$ that depends on the $x$ vector. Numpy will take each value of $x$ and apply the equation formula to it.

Finally, you just need to plot these vectors.
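A minimal sketch of the whole plot, assuming the two lines are y = 2x and y = -x + 3 (consistent with the crossing point (1, 2)); the Agg backend is used so the script runs without a display:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Vector of x values from -10 to 10 (10 not included)
x = np.arange(-10, 10)

# One y vector per equation: Numpy applies the formula to each x value
y1 = 2 * x     # y = 2x
y2 = -x + 3    # y = -x + 3

plt.figure()
plt.plot(x, y1)
plt.plot(x, y2)
plt.xlim(-2, 10)
plt.ylim(-2, 10)
plt.savefig("lines.png")
```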

## Singular matrices

Some matrices are not invertible. They are called **singular**.

## Conclusion

This introduces different cases according to the linear system because $\bs{A}^{-1}$ exists only if the equation $\bs{A}\bs{x}=\bs{b}$ has one and only one solution. The next chapter is almost all about systems of linear equations and numbers of solutions.

Feel free to drop me an email or a comment. The syllabus of this series can be found in the introduction post. All the notebooks can be found on Github.

This content is part of a series following the chapter 2 on linear algebra from the Deep Learning Book by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is constructed as my understanding of these concepts. You can check the syllabus in the introduction post.

## NumPy Inverse Matrix in Python

The NumPy linalg.inv() function in Python is used to compute the (multiplicative) inverse of a matrix. The inverse of a matrix is the matrix which, when multiplied with the original matrix, results in the identity matrix. In this article, I will explain how to use this function to compute the inverse of a matrix array.

### 1. Quick Examples of Inverse Matrix

If you are in a hurry, below are some quick examples of how to use Python NumPy inverse matrix.
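A couple of quick, self-contained examples (the array values are my own):

```python
import numpy as np

# Example 1: inverse of a 2x2 matrix
arr = np.array([[7., 2.], [3., -5.]])
arr_inv = np.linalg.inv(arr)
print(arr_inv)

# Example 2: round-trip check, arr . arr_inv is the identity
identity = arr.dot(arr_inv)
print(identity)
```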

### 2. Syntax of numpy.linalg.inv() Function

Following is the syntax of the numpy.linalg.inv() function:

numpy.linalg.inv(arr)

#### 2.1 Parameter of Inverse Matrix

Following are the parameters of the inverse matrix.

arr : This parameter represents the matrix to be inverted.

#### 2.2 Return Value of Inverse Matrix

This function returns the inverse of the matrix array.

### 3. Usage of numpy.linalg.inv() Function

You can use the Python numpy.linalg.inv() function to compute the inverse of a matrix.

#### 3.1 Use numpy.linalg.inv() Function

A matrix is a rectangular arrangement of data or numbers; in other words, it is a rectangular array of data in which the horizontal entries are called rows and the vertical entries are called columns. To compute the matrix inverse, we use the np.linalg.inv() function, which NumPy provides as an easy way to calculate the inverse of a given matrix.

### 4. Get the Inverse of a Matrix Using scipy.linalg.inv() Function

We can also use the scipy module to perform different scientific calculations using its functionalities. The scipy.linalg.inv() function returns the inverse of a given square matrix. It works the same way as the numpy.linalg.inv() function.
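A sketch of the same computation with scipy (assumes scipy is installed; the array values are my own):

```python
import numpy as np
from scipy import linalg

arr = np.array([[7., 2.], [3., -5.]])

# scipy.linalg.inv works the same way as numpy.linalg.inv
arr_inv = linalg.inv(arr)
print(arr_inv)
```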

Yields the same output as above.

### 5. Inverse of a Matrix NumPy Two Multi-Dimensional Arrays

We can also use the np.linalg.inv() function to compute the inverses of several matrices at once: given an array whose last two dimensions form square matrices (a stack of matrices), it computes the inverse of each matrix in the stack.
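A sketch, assuming a stack of two 2×2 matrices:

```python
import numpy as np

# A stack of two 2x2 matrices: shape (2, 2, 2)
stack = np.array([[[1., 2.], [3., 4.]],
                  [[2., 0.], [0., 2.]]])

# linalg.inv inverts each matrix in the stack
inverses = np.linalg.inv(stack)
print(inverses)
```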

### 6. Conclusion

In this article, I have explained how to use the numpy.linalg.inv() function to compute the inverse of a matrix array, with examples.

## Linear Algebra and Numpy

I’d like to present this blog post on linear algebra and how it benefits Machine Learning and Deep Learning. The goal of this post is to help beginners in the fields of Machine Learning and Deep Learning. We focus on the Numpy library, for those who don’t know about it: it is a Python package whose name stands for Numerical Python. I assume that you are familiar with basic Python programming. So, let’s start.

**What is Linear Algebra?**

Linear algebra is a branch of mathematics that is widely used throughout science and engineering. Yet because linear algebra is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. A good understanding of linear algebra is essential for understanding and working with many machine learning algorithms, especially deep learning algorithms.

In this module, we learn about the linear algebra topic of matrices and their operations, i.e. vectors, matrix multiplication, determinant, trace, and many more.

We will also learn how to implement these programmatically. For this, we will be using the Python package called Numpy.

Let's familiarize ourselves with numpy and its installation.

## Numpy —

NumPy is the fundamental package for scientific computing with Python. It adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

Numpy Contains—

- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- tools for integrating C/C++ and Fortran code
- useful linear algebra, Fourier transform, and random number capabilities

**Installation of Numpy** —

Numpy Installation via pip

Numpy Installation for Anaconda Jupyter Notebook user

To use the Numpy library in a program, all you need to do is to import it.

Don't be confused by np: it is just a conventional alias created by renaming the module **numpy** to **np** on import.

Now, the system is set up for writing the programs and learn Mathematics with hands-on Implementation.

So, Take a cup of coffee, wear your headphones and enjoy coding.

Let’s familiarize with the basics of Numpy —

### Basic of Numpy —

**Note: an array can be referred to as a matrix.**

**a) Create an Array and Print an Array —**

**b) Finding the Number of array dimensions —**

**c) Finding the shape of an Array —**

**d) Reshaping of an Array —**

**e) Some Mathematical Operation —**

**f) Finding the mean, median, variance and standard deviation of an array**:
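The operations (a)-(f) above can be sketched in one pass (values are my own):

```python
import numpy as np

# a) Create and print an array
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)

# b) Number of array dimensions
print(a.ndim)      # 2

# c) Shape of the array
print(a.shape)     # (2, 3)

# d) Reshaping the array
b = a.reshape(3, 2)
print(b)

# e) Some mathematical operations (elementwise)
print(a + 10)
print(a * 2)

# f) Mean, median, variance and standard deviation
print(np.mean(a), np.median(a), np.var(a), np.std(a))
```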

Now, we come to the main part i.e. Linear Algebra and we first discuss the Vectors.

## Vectors —

Vectors and vector spaces are fundamental to *linear algebra*, and they’re used in many machine learning models. Vectors describe spatial lines and planes, enabling you to perform calculations that explore relationships in multi-dimensional space.

### What is a Vector —

A vector is a numeric element that has both *magnitude* and *direction*. The magnitude represents a distance (for example, “2 miles”) and the direction indicates which way the vector is headed (for example, “East”). Vectors are defined by an n-dimensional coordinate that describes a point in space that can be connected by a line from an arbitrary origin.

Our vector can be written as **v** = (2,3). Run the code below in your cell to visualize the vector **v** (which, remember, is described by the coordinate (2,3)).

Make sure you install the matplotlib Python Package.

### Calculating Magnitude —

Calculating the magnitude of the vector from its cartesian coordinates requires measuring the distance between the arbitrary starting point and the vector head point. For a two-dimensional vector, we’re actually just calculating the length of the hypotenuse in a right-angled triangle — so we could simply invoke the Pythagorean theorem and calculate the square root of the sum of the squares of its components. So, here we can calculate the magnitude of a vector by two methods —

**Code 1** (simple) - by using the math module

**Code 2** (By using *linalg* library provided by numpy) —

In Python, *numpy* provides a linear algebra library named **linalg** that makes it easier to work with vectors — you can use the **norm** function in the following code to calculate the magnitude of a vector.

the output is the same as above
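A sketch of both methods for v = (2,3):

```python
import math
import numpy as np

v = np.array([2, 3])

# Code 1: Pythagorean theorem with the math module
mag1 = math.sqrt(v[0]**2 + v[1]**2)

# Code 2: numpy's linalg.norm function
mag2 = np.linalg.norm(v)

print(mag1, mag2)  # both are sqrt(13)
```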

### Calculating Direction —

To calculate the direction, or *amplitude*, of a vector from its cartesian coordinates, you must employ a little trigonometry. We can get the angle of the vector by calculating the *inverse tangent*, sometimes known as the *arctan* (the *tangent* expresses an angle as a ratio; the inverse tangent, or **tan-1**, recovers the angle in degrees from that ratio).

In any right-angled triangle, the tangent is calculated as the *opposite* over the *adjacent*. In a two-dimensional vector, this is the *y value over the x* value, so for our **v** vector (2,3):

=> tan(θ) = 3/2

=> θ = tan-1(3/2) = 56.309932474020215

Run the following Python code in your cell to confirm this:

and you will see the output —

tan = 1.5

inverse-tan = 56.309932474020215
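A sketch that reproduces those numbers:

```python
import math

v = (2, 3)

tan = v[1] / v[0]                     # opposite over adjacent
theta = math.degrees(math.atan(tan))  # inverse tangent, in degrees

print('tan =', tan)
print('inverse-tan =', theta)
```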

There is an added complication, however, because if the value for *x or y* (or both) is negative, the orientation of the vector is not standard, and a calculator can give you the wrong tan-1 value from this method. To ensure you get the correct direction for your vector, use the following rules:

- Both x and y are positive: use the tan-1 value.
- x is negative, y is positive: add 180 to the tan-1 value.
- Both x and y are negative: add 180 to the tan-1 value.
- x is positive, y is negative: add 360 to the tan-1 value.

In the previous Python code, we used the *math.atan* function to calculate the inverse tangent from a numeric tangent. The *numpy* library includes a similar **arctan** function. When working with numpy arrays, you can also use the *numpy.arctan2* function to return the inverse tangent of an array-based vector in *radians*, and you can use the *numpy.degrees* function to convert this to degrees. The *arctan2* function automatically makes the necessary adjustment for negative *x* and *y* values.

output —

v: 56.309932474020215

s: 146.30993247402023
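A sketch, assuming the second vector is s = (-3,2), which reproduces the output shown above:

```python
import numpy as np

v = np.array([2, 3])
s = np.array([-3, 2])  # assumed second vector

# arctan2 handles the quadrant adjustment for negative components
print('v:', np.degrees(np.arctan2(v[1], v[0])))
print('s:', np.degrees(np.arctan2(s[1], s[0])))
```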

### Vector Addition —

So far, we’ve worked with one vector at a time. What happens when you need to add two vectors?

Let’s take an example —

v = (2,1)

s = (-3,1)

The addition of v and s gives (-1, 2).

Run the cell below to create **v** and plot it together with **s**.

Now, we add the v and s vectors:

output —

[-1 2]
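A minimal sketch of the addition:

```python
import numpy as np

v = np.array([2, 1])
s = np.array([-3, 1])

# Vector addition is elementwise
total = v + s
print(total)  # [-1  2]
```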

Now, let’s see what this looks like (plotting):

Here, the blue arrow is vector s, the red arrow is vector v, and the green arrow is their sum.

## Vector Multiplication —

Vector multiplication can be performed in three ways:

- Scalar Multiplication
- Dot Product Multiplication
- Cross Product Multiplication

### Scalar Multiplication —

Let’s start with *scalar* multiplication — in other words, multiplying a vector by a single numeric value.

Suppose I want to multiply vector v by 3, which I could write like this: w = 3v.

So, basically, the value 3 multiplies each element of the vector. For example —

=> v = (-1,1)

=> w = 3v

=> w = (-3,3)

Let’s do it by running the code —

The same approach is taken for scalar division.

Try it for yourself — calculate a new vector named **b** based on the following definition: b = v/2.

The code for this problem is here.
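A sketch of both the scalar multiplication and the division:

```python
import numpy as np

v = np.array([-1, 1])

# Scalar multiplication: each component is multiplied by 3
w = 3 * v
print(w)  # [-3  3]

# Scalar division works the same way: b = v / 2
b = v / 2
print(b)  # [-0.5  0.5]
```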

### Dot Product Multiplication —

To get a scalar product, we calculate the *dot product*. This takes a similar approach to multiplying a vector by a scalar, except that it multiplies each component pair of the vectors and sums the results. To indicate that we are performing a dot product operation, we use the • operator:

So for vectors **v** (2,3) and **s** (-3,1), our calculation looks like this:

v⃗ ⋅ s⃗ = (2⋅−3) + (3⋅1) = −6 + 3 = −3

In Python, you can use the *numpy.dot* function to calculate the dot product of two vector arrays.

You will see the same output as we calculated: -3

In Python 3.5 and later, we can also use the **@** operator to calculate the dot product:
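A sketch of both forms:

```python
import numpy as np

v = np.array([2, 3])
s = np.array([-3, 1])

# Dot product: multiply component pairs and sum the results
print(np.dot(v, s))  # -3

# Python 3.5+: the @ operator does the same thing
print(v @ s)         # -3
```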

### The Cosine Rule —

A useful property of vector dot product multiplication is that we can use it to calculate the cosine of the angle between two vectors.

Here’s that calculation in Python:

Match your output with theta = 105.25511870305779
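A sketch of the calculation, assuming the same vectors v = (2,3) and s = (-3,1) as before:

```python
import numpy as np

v = np.array([2, 3])
s = np.array([-3, 1])

# cos(theta) = (v . s) / (||v|| * ||s||)
cos_theta = np.dot(v, s) / (np.linalg.norm(v) * np.linalg.norm(s))
theta = np.degrees(np.arccos(cos_theta))
print(theta)  # 105.25511870305779
```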

### Cross Product Multiplication —

To get the *vector product* of multiplying two vectors together, you must calculate the *cross product*. The result of this is a new vector that is at right angles to both the other vectors in 3D Euclidean space. This means that the cross-product only really makes sense when working with vectors that contain three components.

In Python, we can use the *numpy.cross* function to calculate the cross product of two vector arrays:

The output of this code: [-8 5 1]
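A sketch; the two vectors here are my own choice of 3-component examples, picked to reproduce the output above:

```python
import numpy as np

p = np.array([2, 3, 1])
q = np.array([1, 2, -2])

# Cross product: a new vector at right angles to both p and q
r = np.cross(p, q)
print(r)  # [-8  5  1]
```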

## Matrices —

A matrix is an array of numbers that are arranged into rows and columns.

We already learned how to create a matrix (array) and its basic operations in the Basics of Numpy section. Here, we learn more advanced matrix topics, i.e. transpose, inverse, eigenvectors, etc. But before we start, let’s do a simple program to create a matrix.

In Python, we can define a matrix as a 2-dimensional *numpy.array*, like this:

The output of this code is as you expect —

But we can also use the *numpy.matrix* type, which is a specialist **subclass** of an array:

The output of this code is the same as np.array() gives.
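Both ways of creating the matrix can be sketched together (note that current NumPy discourages np.matrix in favor of regular arrays):

```python
import numpy as np

# A matrix as a 2-dimensional numpy array
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A)

# The same matrix using the numpy.matrix subclass
M = np.matrix([[1, 2, 3],
               [4, 5, 6]])
print(M)
```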

There are some differences in behavior between the **array** and **matrix** types — particularly with regard to **multiplication** (which we’ll explore later). You can use either, but most experienced Python programmers who need to work with both vectors and matrices tend to prefer the **array** type for consistency. We also prefer an array.

### Matrix Transposition —

You can *transpose* a matrix, which switches the orientation of its rows and columns. You indicate this with a superscript **T**, like this:

In Python, both *numpy.array* and *numpy.matrix* have a **T** attribute:
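A minimal sketch:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# T switches rows and columns
print(A.T)
```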

### Matrix Multiplication —

To multiply two matrices together, you need to calculate the *dot product* of rows and columns. This means multiplying each of the elements in each row of the first matrix by each of the elements in each column of the second matrix and adding the results. We perform this operation by applying the *RC* rule — always multiplying **Rows by Columns**. For this to work, the number of **columns** in the first matrix must be the same as the number of **rows** in the second matrix so that the matrices are *conformable* for the dot product operation.

In Python, we can use the *numpy.dot* function or the **@** operator to multiply matrices and two-dimensional arrays.

As you can see, both **np.dot(A, B)** and **A @ B** give the same output, so you can use either one for multiplication.

Now, here is one case where there is a difference in behavior between **numpy.array** and **numpy.matrix**: you can also use the regular multiplication operator with a matrix, but not with an array (for arrays, * performs elementwise multiplication, not matrix multiplication).

You can compare the output of this code with the one above; both are the same.

Now, you have already read about the property of matrices that the commutative law does not apply to matrix multiplication. So, let’s prove it.

Analyze your output to check whether the commutative law applies or not.
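The points above can be sketched together (values are my own):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix multiplication: np.dot and @ give the same result
print(np.dot(A, B))
print(A @ B)

# For arrays, * is ELEMENTWISE multiplication, not matrix multiplication
print(A * B)

# The commutative law does not hold in general: A @ B != B @ A
print(np.array_equal(A @ B, B @ A))  # False
```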

### The Inverse of a Matrix —

So, how do you calculate the inverse of a matrix? For a 2×2 matrix, there are two options:

1) Use the Gauss-Jordan method (elementary row or column operations).

2) Follow this formula: for A = [[a, b], [c, d]], the inverse is (1/(ad − bc)) · [[d, −b], [−c, a]], provided ad − bc ≠ 0.

In Python, we can use the *numpy.linalg.inv* function to get the inverse of a matrix, for either an *array* or a *matrix* object:

Additionally, the *matrix* type has an **I** property that returns the inverse matrix:

For larger matrices, the process to calculate the inverse is more complex.

Steps to calculate the Inverse of a Matrix:

- Step 1: calculating the Matrix of Minors,
- Step 2: then turn that into the Matrix of Cofactors,
- Step 3: then the Adjugate(also called Adjoint), and
- Step 4: multiply that by 1/Determinant.

As you can see, this can get pretty complicated. But the program to calculate it is not so complicated.
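A sketch of both approaches, using an assumed invertible 2×2 matrix:

```python
import numpy as np

A = np.array([[6., 2.],
              [1., 2.]])

# numpy.linalg.inv works on arrays (and matrix objects)
A_inv = np.linalg.inv(A)
print(A_inv)

# The matrix type also exposes the inverse as the I property
M = np.matrix(A)
print(M.I)

# Either way, A times its inverse is the identity
print(A @ A_inv)
```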

So, till now we have learned multiplication, the inverse of matrices, etc.

Let’s work through a problem: solving a system of equations with matrices.

Can you do it yourself?

Here is the problem —

You have to solve these equations with the help of matrices.

Hint — your equation looks like this:

The solution code for this problem is here.

## Transformations —

Matrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We’re not going to cover the subject exhaustively here, but we’ll focus on a few key concepts that are useful to know when you plan to work with *machine learning*.

### Linear Transformations —

We can manipulate a vector by multiplying it with a matrix. The matrix acts as a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are *linear transformations* that transform the input vector into the output vector.

For example, consider this matrix **A** and vector **v**:

We can define a transformation **T** like this: T(v⃗) = A·v⃗

To perform this transformation, we simply calculate the dot product by applying the *RC* rule; multiplying each row of the matrix by the single column of the vector.

Here’s the calculation in Python:
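A sketch, with an assumed matrix A and vector v:

```python
import numpy as np

A = np.array([[2, 3],
              [5, 2]])
v = np.array([1, 2])

# T(v) = A . v: multiply each row of A by the column vector v
t = A @ v
print(t)  # [8 9]
```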

In this case, both the input vector and the output vector have 2 components — in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:

Note that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another — or in notation:

Here it is in Python:
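A sketch with an assumed 3×2 matrix, so the input vector is 2-dimensional and the output vector is 3-dimensional:

```python
import numpy as np

# A 3x2 matrix maps 2-dimensional vectors to 3-dimensional ones
A = np.array([[2, 3],
              [5, 2],
              [1, 1]])
v = np.array([1, 2])

t = A @ v
print(t)  # [8 9 3]
```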

### Transformations of Magnitude and Amplitude —

When we multiply a vector by a matrix, we transform it in at least one of the following two ways:

- Scale the length (*magnitude*) of the vector to make it longer or shorter.
- Change the direction (*amplitude*) of the vector.

For example, consider the following matrix and vector:

As before, we transform the vector **v** by multiplying it with the matrix **A**. In this case, the resulting vector has changed in length (*magnitude*) but has not changed its direction (*amplitude*).

Let’s visualize that in Python:

The original vector **v** is shown in orange, and the transformed vector **t** is shown in blue — note that **t** has the same direction (*amplitude*) as **v** but a greater length (*magnitude*).

Now let’s use a different matrix to transform the vector **v**:

This time, the resulting vector has been changed to a different amplitude but has the same magnitude.

Now let’s change the matrix one more time:

Now our resulting vector has been transformed into a new amplitude *and* magnitude — the transformation has affected both direction and scale.

### Affine Transformations —

An Affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as *bias*; like this:

This kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the *features*, the first vector is the *coefficients*, and the bias vector is the *intercept*.

here’s an example of an Affine transformation in Python:

## Eigenvectors and Eigenvalues —

So we can see that when you transform a vector using a matrix, we change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is the equivalent operation as some scalar multiplication of the vector.

For example, earlier we examined the following transformation that dot-multiplies a vector by a matrix:

We can achieve the same result by multiplying the vector by the scalar value **2**:

The following Python code performs both of these calculations and shows the results, which are identical.
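A sketch, assuming the matrix is A = [[2, 0], [0, 2]] (any matrix equivalent to scaling by 2 would do):

```python
import numpy as np

A = np.array([[2, 0],
              [0, 2]])
v = np.array([1, 3])

t1 = A @ v   # matrix transformation
t2 = 2 * v   # scalar multiplication

print(t1)  # [2 6]
print(t2)  # [2 6]
```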

In cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors. We generally indicate eigenvalues using the Greek letter lambda (λ), and the formula that defines eigenvalues and eigenvectors with respect to a transformation is:

T(v⃗) = λ·v⃗

Where the vector **v** is an eigenvector and the value **λ** is an eigenvalue for transformation **T**.

When the transformation **T** is represented as matrix multiplication, as in this case where the transformation is represented by matrix **A**, the equation becomes A·v⃗ = λ·v⃗, where **λ** is a scalar value called the ‘eigenvalue’. This means that the linear transformation A on the vector v⃗ is completely defined by *λ*.

*λ.*We can rewrite the equation as follows:

Where, I is the identity matrix of the same dimensions as A.

A matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. However, it’s generally easier to use a tool or programming language. For example, in Python, you can use the **linalg.eig** function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.

Here’s an example that returns the eigenvalue and eigenvector pairs for the following matrix:

So there are two eigenvalue-eigenvector pairs for this matrix, as shown here:

Let’s verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here’s the first pair:

So far so good. Now let’s check the second pair:

So our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.

Here’s the equivalent code in Python, using the **eVals** and **eVecs** variables you generated in the previous code cell:

The output of this code —
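A sketch of the computation, using an assumed example matrix:

```python
import numpy as np

A = np.array([[2., 0.],
              [0., 3.]])  # assumed example matrix

# linalg.eig returns the eigenvalues and the eigenvectors (as columns)
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)

# Verify A . v == lambda * v for each pair
for i in range(len(eVals)):
    v = eVecs[:, i]
    print(np.allclose(A @ v, eVals[i] * v))  # True
```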

You can use the following code to visualize these transformations:

Similarly, can you examine the following matrix transformation:

Hint: you can achieve the same result by multiplying the vector by the scalar value *2.*

The solution is here.

## Eigendecomposition —

So we’ve learned a little about eigenvalues and eigenvectors, but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.

Recall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or *basis*; and the same transformation can be applied in different *bases*.

We can decompose a matrix using the following formula:

A = QΛQ-1

Where **A** is a transformation that can be applied to a vector in its current basis, **Q** is a matrix of eigenvectors that defines a change of basis, and **Λ** is a matrix with eigenvalues on the diagonal that defines the same linear transformation as **A** in the basis defined by **Q**.

Let’s look at these in some more detail. Consider this matrix:

**Q** is a matrix in which each column is an eigenvector of **A**; as we’ve seen previously, we can calculate it using Python:

So for matrix **A**, **Q** is the following matrix:

**Λ** is a matrix that contains the eigenvalues for **A** on the diagonal, with zeros in all other elements; so for a 2×2 matrix, Λ will look like this:

In our Python code, we’ve already used the **linalg.eig** function to return the array of eigenvalues for **A** into the variable **l**, so now we just need to format that as a matrix:

So **Λ** is the following matrix:

Now we just need to find **Q-1**, which is the inverse of **Q**:

The inverse of **Q** is:

So what does that mean? Well, it means that we can decompose the transformation of *any* vector multiplied by the matrix **A** into the separate operations **QΛQ-1**:

To prove this, let’s take vector **v**:

Our matrix transformation using **A** is:

So let’s show the results of that using Python:

And now, let’s do the same thing using the **QΛQ-1** sequence of operations:

And you can see that the output of this code is the same as above. So **A** and **QΛQ-1** are equivalent.
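The equivalence can be sketched as follows (the matrix and vector are my own examples):

```python
import numpy as np

A = np.array([[2., 0.],
              [0., 3.]])  # assumed example matrix
v = np.array([1., 2.])

# Eigendecomposition: A = Q L Qinv
eVals, Q = np.linalg.eig(A)
L = np.diag(eVals)           # eigenvalues on the diagonal
Qinv = np.linalg.inv(Q)

# Applying A directly and applying Q L Qinv give the same result
t1 = A @ v
t2 = Q @ L @ Qinv @ v
print(t1)
print(t2)
```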

If we view the intermediary stages of the decomposed transformation, you can see the transformation using **A** in the original basis for **v** (orange to blue) and the transformation using **Λ** in the change of basis described by **Q** (red to magenta):

So from this visualization, it should be apparent that the transformation **Av** can be performed by changing the basis for **v** using **Q** (from orange to red in the above plot), applying the equivalent linear transformation in that basis using **Λ** (red to magenta), and switching back to the original basis using **Q-1** (magenta to blue).

## Rank of a Matrix —

The **rank** of a square matrix is the number of non-zero eigenvalues of the matrix. A **full rank** matrix has the same number of non-zero eigenvalues as the dimension of the matrix. A **rank-deficient** matrix has fewer non-zero eigenvalues than dimensions. A rank-deficient matrix is singular, and so its inverse does not exist.

Consider the following matrix **A**:

Let’s find its eigenvalues (**Λ**):

This matrix has full rank. The dimensions of the matrix are 2. There are two non-zero eigenvalues.

Now consider this matrix:

Note that the second and third columns are just scalar multiples of the first column.

Let’s examine its eigenvalues:

Note that the matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.
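A sketch, using an assumed matrix whose second and third columns are scalar multiples of the first:

```python
import numpy as np

# Columns 2 and 3 are scalar multiples of column 1 (assumed example)
B = np.array([[1., 2., 4.],
              [2., 4., 8.],
              [3., 6., 12.]])

eVals, eVecs = np.linalg.eig(B)
print(eVals)

# Only one eigenvalue is (effectively) non-zero, so B is rank-deficient
print(np.linalg.matrix_rank(B))  # 1
```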

### Inverse of a Square Full Rank Matrix —

We can calculate the inverse of a square full rank matrix by using the following formula:

A-1 = QΛ-1Q-1

Let’s apply this to matrix **A**:

Let’s find the matrices for **Q**, **Λ-1**, and **Q-1**:

Let’s calculate that in Python:

That gives us the result:

We can apply the **np.linalg.inv** function directly to **A** to verify this:

This will give you the same inverse of A as above.

Let’s end this here! All the code of this discussion is here.

So, congrats, you did it perfectly.

I know it was a little bit long, but I hope I have given you some idea about Numpy and linear algebra and how they apply in fields like Machine Learning, Deep Learning, and Computer Vision.