I will continue to share my knowledge of linear algebra, hoping we can progress together.


This diagram illustrates the augmented matrix, which some texts also call the expanded matrix.
Its core idea is very simple:
combine the coefficient matrix and the constant column of a system of linear equations into a single, more compact matrix.
The left side of the diagram is the matrix A, and the right side is the column vector B.
Appending B to the right of A gives the augmented matrix [A | B].
The vertical line | in the middle is important: it reminds you that
the left side holds the coefficients, and
the right side holds the constants from the right-hand side of each equation.
Therefore, augmented matrices are not randomly pieced together, but rather pieced together with a clear meaning.
Because the system of linear equations could originally be written out equation by equation.
Extracting the coefficients of the unknowns yields the coefficient matrix A; collecting the constants on the right gives the column vector B.
Therefore, this system can be written in matrix form as AX = B, where X is the column of unknowns,
and the corresponding augmented matrix is [A | B].
It can therefore be understood as
"a matrix notation for systems of linear equations."
Its greatest use:
it is especially convenient for Gaussian elimination and elementary row operations,
because you don't have to keep rewriting the unknowns; you just perform row operations on the matrix.
For any concrete system, you write down its augmented matrix once,
and from then on perform elimination on that matrix instead of repeatedly rewriting the entire set of equations.
Since there was originally only the coefficient matrix A, appending the constant column on the right "expands" the matrix by one column.
So:
A is the original coefficient matrix.
[A | B] is the "augmented" matrix.
"Augmented" means "added to or expanded upon."
The diagram shows a 3×3 matrix A and a 3×1 column vector B.
Therefore, the augmented result is a 3×4 matrix.
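The appending step is a one-liner in code. A minimal sketch (the matrix entries below are placeholders, since the diagram's actual numbers are not reproduced here):

```python
import numpy as np

# Placeholder 3x3 coefficient matrix A and 3x1 constant column B
A = np.array([[1, 1, 1],
              [2, -1, 1],
              [1, 2, -1]])
B = np.array([[6], [3], [2]])

# Append B as one extra column to the right of A: the augmented matrix [A | B]
aug = np.hstack([A, B])
print(aug.shape)  # (3, 4): 3 rows, 3 coefficient columns + 1 constant column
```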
This usually corresponds to:
3 equations
3 unknowns
because:
3 rows: represent the 3 equations
First 3 columns: represent the coefficients of the 3 unknowns
Last column: represents the constant terms on the right-hand side of each equation
The augmented matrix
is essentially a compressed, row-by-row record of the system of equations:
each row corresponds to one equation.
For example, the first row encodes the first equation:
its leading entries are that equation's coefficients, and its last entry is the constant term.
A regular matrix simply stores numbers.
However, the numbers in the augmented matrix have a clear role:
First few columns: Coefficients of the unknowns
Last column: Constant terms
Therefore, augmented matrices are actually "semantic" matrices.
The last column, in particular, should not be confused with the preceding coefficient columns; this is why the matrix is usually written as [A | B], with a separating bar,
instead of simply as a regular matrix.
After elimination, the augmented matrix can also help determine the solution of a system of equations.
For example:
if after simplification the last row becomes [0 0 0 | c] with c ≠ 0,
that row asserts 0·x + 0·y + 0·z = c,
that is, 0 = c.
This is impossible, therefore there is no solution.
If the left side can be simplified to the identity matrix,
each row then reads "one unknown = one number."
Therefore, there is a unique solution.
If free variables remain after simplification (some column ends up without a pivot),
the equations are not contradictory, but the constraints are insufficient: there are free variables, and therefore infinitely many solutions.
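These three cases can also be checked mechanically by comparing ranks. A small sketch (the function name `classify` and the rank-comparison formulation are my own, not from the text):

```python
import numpy as np

def classify(A, B):
    """Classify the linear system A X = B by comparing the rank of A
    with the rank of the augmented matrix [A | B]."""
    aug = np.hstack([A, B.reshape(-1, 1)])
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(aug)
    n_unknowns = A.shape[1]
    if rank_A < rank_aug:
        # a row like [0 0 0 | c] with c != 0 survives elimination
        return "no solution"
    if rank_A == n_unknowns:
        # the left side reduces to the identity
        return "unique solution"
    # consistent, but some unknowns are free
    return "infinitely many solutions"
```

For instance, `classify(np.eye(3), np.ones(3))` returns "unique solution".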
An augmented matrix is a matrix formed by combining the coefficient matrix and the constant column on the right-hand side of a system of linear equations. It is primarily used for Gaussian elimination and solving systems of linear equations.
The notation is [A | B], where:
A: the coefficient matrix
B: the constant column vector
Originally one would write out the full system of equations;
now it is packaged as the single array [A | B].
This makes it much better suited to "mechanized" elimination calculations.
We will now use a specific 3×3 integer system to walk through the entire process of "how to use an augmented matrix."
Consider a system of three equations in the unknowns x, y, z.
It can be written in matrix form AX = B,
and its augmented matrix is [A | B].
Each row corresponds to one equation:
the first row to the first equation, the second row to the second, and the third row to the third.
Our goal is to transform the left side into upper triangular form, or even into the identity matrix.
Keep the three rows of the augmented matrix in view as we work.
Step 1: Eliminate the entries below the pivot in the first column.
Since the first entry of the second row is 2, we want to change it to 0.
Compute R2 := R2 − 2·R1: first form 2·R1, then subtract it from the second row, entry by entry.
Similarly, compute R3 := R3 − R1 to clear the first entry of the third row.
After these two operations, the first column has zeros below the pivot.
Step 2: Eliminate the entry below the pivot in the second column.
The pivot is now the −3 in the second row, second column.
To eliminate the 1 in the second entry of the third row, we can do the following:
since that entry is 1 and the pivot above it is −3, compute R3 := R3 + (1/3)·R2.
First form (1/3)·R2, then add it to the third row.
The matrix is now upper triangular.
Now we solve from the third row upward (back substitution).
The third row gives z directly: z = 3.
The second row is an equation in y and z;
substituting z = 3 gives y = 2.
The first row is an equation in x, y, z;
substituting y = 2 and z = 3 gives the value of x.
This completes the solution of the system.
Notice what happened:
we hardly needed to write the unknowns at all during the entire process.
We solved a complete system while manipulating nothing but a table of numbers.
This is the greatest value of the augmented matrix:
it organizes the system into a uniform format,
makes elementary row operations convenient,
and makes it easy to decide whether the system has no solution, a unique solution, or infinitely many solutions.
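The forward-elimination and back-substitution steps above can be sketched in a few lines of code. Since the article's original matrix entries were not preserved, the system below is a hypothetical one chosen to echo the steps in the text (a leading 2 in row two, pivot −3, and solution y = 2, z = 3):

```python
def solve3(aug):
    """Gaussian elimination with back substitution on a 3x4 augmented
    matrix [A | B]; assumes the pivots encountered are non-zero."""
    m = [row[:] for row in aug]                # work on a copy
    # forward elimination: zero out the entries below each pivot
    for p in range(3):
        for r in range(p + 1, 3):
            factor = m[r][p] / m[p][p]         # e.g. R2 := R2 - 2*R1 when factor == 2
            m[r] = [x - factor * y for x, y in zip(m[r], m[p])]
    # back substitution, from the third row upward
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        known = sum(m[r][c] * sol[c] for c in range(r + 1, 3))
        sol[r] = (m[r][3] - known) / m[r][r]
    return sol

# Hypothetical system: x + y + z = 6, 2x - y + z = 3, x + 2y - z = 2
print(solve3([[1.0, 1.0, 1.0, 6.0],
              [2.0, -1.0, 1.0, 3.0],
              [1.0, 2.0, -1.0, 2.0]]))   # approximately [1, 2, 3]
```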
The transformations we just performed correspond to statements about the equations.
R2 := R2 − 2·R1 corresponds to
"the second equation minus twice the first equation,"
which does not change the solution set of the system.
Likewise,
R3 := R3 − R1 corresponds to
"subtract the first equation from the third equation,"
and it too does not change the solutions.
So:
row operations change the way a system of equations is written, but they do not change its solutions.
This is the fundamental reason why Gaussian elimination works.
We stopped at the upper triangular form, but we can simplify further into an even more standard form.
Starting from the upper triangular matrix:
first divide the third row by −7,
so that its pivot becomes 1 and the row reads z = 3 directly.
The second row contains a −1, so add the matching multiple of the third row to clear it.
The first row contains a 1, which is cleared the same way.
Continuing these back-eliminations turns the left side into the identity matrix,
and the last column then displays the solution.
In this reduced form, each row directly states the value of one unknown.
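This full reduction is exactly what a computer algebra system's `rref` does. A minimal sketch using sympy, with placeholder entries (the article's original numbers were not preserved):

```python
from sympy import Matrix

# Hypothetical augmented matrix; rref() performs the elimination and
# the back-elimination steps in one call.
aug = Matrix([[1, 1, 1, 6],
              [2, -1, 1, 3],
              [1, 2, -1, 2]])
reduced, pivot_cols = aug.rref()
# The left 3x3 block is now the identity; the last column holds the solution.
print(reduced)
```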
When you see an augmented matrix, you can immediately think of these points:
It consists of a "coefficient matrix + constant column".
Each row corresponds to one equation.
Performing row operations on an augmented matrix is equivalent to performing an equivalent transformation on a system of equations.
After simplifying it, the solution can be seen directly.
The augmented matrix can be viewed as:
Left side: The "Rules" section of the question.
Right side: The "Results" section of the question.
For example, in a row ending in | 6,
the coefficients on the left tell you how the variables are combined, and the 6 on the right tells you what that combination must equal.
An augmented matrix is a way to compress and store this information row by row.
Why do row operations on an augmented matrix not change the solutions?
This is really asking:
why is Gaussian elimination valid?
It explains why, after repeatedly "transforming" the original system of equations, the final answer is still a solution of the original system.
The three basic row operations performed on the augmented matrix do not change the solution set of the system of equations.
These three transformations are:
Swap two rows
Multiply a row by a non-zero constant
Add a multiple of one row to another row
They do not change which assignments are solutions; they only change how the system of equations is written.
Because each row of the augmented matrix essentially represents an equation,
every augmented matrix corresponds to a concrete system of equations.
Therefore, performing a "row operation" on the matrix is really performing an operation on the equations,
and it is valid as long as that operation does not change the solution set.
For example:
swapping the two rows merely lists the same two equations in the other order.
This, of course, does not change the solutions,
because a solution that satisfies both original equations must also satisfy the two equations after reordering.
This is like saying:
"Do question A first, then do question B."
And "Do question B first, then do question A"
The order of the questions has changed, but the content remains the same.
Therefore, the solution set remains unchanged.
For example, take a single equation and multiply the entire row by 3.
The original equation and the scaled one have exactly the same solutions,
because:
if some values of the unknowns satisfy the original equation, multiplying both sides by 3 shows they satisfy the scaled equation; conversely, dividing both sides by 3 recovers the original.
Therefore, they are equivalent equations.
The constant must be non-zero, though. If a row is multiplied by 0, it becomes 0 = 0:
there was a constraint before, and after the multiplication the constraint disappears, so the solution set naturally changes.
Therefore, a row may only be multiplied by non-zero constants.
This is the most crucial one.
Suppose we replace the second equation of a system with
"the second equation plus k times the first equation."
Why doesn't the solution change?
First, consider "every original solution is a new solution."
If an assignment satisfies the original system, it satisfies both the first equation and the second equation;
it must then also satisfy their combination,
which is exactly the new second equation.
Therefore, any solution of the original system is also a solution of the new system.
Conversely, the new second equation was obtained as
(old second equation) + k · (first equation),
so the old second equation can be recovered as (new second equation) − k · (first equation).
If an assignment satisfies the new system, it satisfies the first equation and the new second equation;
subtracting k times the first equation restores the original second equation,
so it also satisfies the original system.
Therefore, the solution sets on both sides are the same.
This shows that:
adding a multiple of one row to another is essentially just rearranging the information in the original system; it neither adds new conditions nor loses old ones.
As a sanity check, take any small system whose solution you know,
replace its second equation with "the second equation plus a multiple of the first,"
and solve again:
the solution is unchanged,
because you did not introduce any "new independent constraints"; you merely rewrote the original constraints.
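That check can be carried out concretely. The 2×2 system below (x + y = 3, x − y = 1) is my own hypothetical example, solved by Cramer's rule before and after the row operation:

```python
def solve2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Original system: x + y = 3, x - y = 1  (solution x = 2, y = 1)
before = solve2(1, 1, 1, -1, 3, 1)
# Row operation R2 := R2 + 2*R1 rewrites the second equation as 3x + y = 7
after = solve2(1, 1, 3, 1, 3, 7)
print(before == after)  # True: the row operation did not change the solution
```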
In two dimensions, a linear equation typically represents a straight line.
In a system of two such equations, each equation is a line,
and the solution of the system is the intersection of the two lines.
Now perform a row operation on the second equation:
the result is yet another straight line.
Although the second line has changed, it is a combination of the original two equations, so it still passes through their original intersection point.
Moreover, any point satisfying both lines of the new system also satisfies the original system.
So:
the way the lines are described has changed,
but their common intersection point is unchanged.
This is the geometric intuition behind "solution invariance."
Let the original augmented matrix be [A | B].
Performing an elementary row operation on it is equivalent to left-multiplying by an elementary matrix E, giving [EA | EB].
The corresponding system of equations is (EA)X = EB,
that is, E(AX) = EB.
If X is a solution of the original system, i.e., AX = B, then left-multiplying both sides by E necessarily yields EAX = EB.
Conversely, since elementary matrices are always invertible, AX = B can be recovered from EAX = EB
by left-multiplying both sides by E⁻¹.
So:
the solution sets are exactly the same.
This is the simplest and most standard theoretical explanation.
Because Gaussian elimination preserves the solution set at every step:
original system → system after the first transformation → system after the second transformation → ... → final upper triangular system.
Since no step changes the solutions, the solutions of the final system are exactly the solutions of the initial one.
Row transformations do not change the solution because they are all equivalent rewrites of the original system of equations: rearranging the equations, scaling a single equation proportionally, or using one equation to eliminate certain terms in another.
They neither create new information nor lose old information.
When you see row operations, remember these three sentences:
Swapping two rows: this simply changes the order of the equations.
Multiplying a row by a non-zero constant: this simply rewrites the same equation.
Adding a multiple of one row to another: this simply recombines existing equations.
Therefore, the wording has changed, but the solutions remain the same.

