computerages Posted February 26, 2008 Posted February 26, 2008 Hello, My teacher went over the proof to find the general [math]A^{-1}[/math] for 2x2 matrices. He told us that finding the general [math]A^{-1}[/math] for 3x3 matrices might take at least 5 pages, but that there's also another method that can do it in a few steps. I was wondering if anyone can provide me with a link to a web page where both methods are shown step-by-step, or at least guide me through finding the general rule if it's really too lengthy to show here. Thanks
ajb Posted February 26, 2008 Posted February 26, 2008 Did your teacher use Gauss-Jordan elimination? You can show that [math]A^{-1} = \frac{1}{\det A}A^{*}[/math] where [math]A^{*}[/math] is the adjugate (the classical adjoint, i.e. the transpose of the cofactor matrix). If you can show this in general, you can write down an explicit formula.
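The adjugate formula above can be turned directly into code. Here is a minimal sketch for the 3x3 case (the function name and the sample matrix are my own choices, not from the thread): each entry of [math]A^{-1}[/math] is a signed 2x2 minor divided by [math]\det A[/math].

```python
from fractions import Fraction

def inverse_3x3(A):
    """Invert a 3x3 matrix via A^{-1} = adj(A) / det(A),
    where adj(A) is the transpose of the cofactor matrix."""

    def minor_det(i, j):
        # Determinant of the 2x2 matrix left after deleting row i, column j.
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        return (A[rows[0]][cols[0]] * A[rows[1]][cols[1]]
                - A[rows[0]][cols[1]] * A[rows[1]][cols[0]])

    # Cofactor expansion of det(A) along the first row.
    det = sum((-1) ** j * A[0][j] * minor_det(0, j) for j in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A)[i][j] is the cofactor C[j][i]; note the transpose.
    return [[Fraction((-1) ** (i + j) * minor_det(j, i), det)
             for j in range(3)] for i in range(3)]

A = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]
Ainv = inverse_3x3(A)
```

Exact rational arithmetic (`Fraction`) is used so the result matches the hand-computed formula entry for entry, with no floating-point rounding.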
Country Boy Posted March 8, 2008 Posted March 8, 2008 A method I prefer is "row reduction". You are probably familiar with the concept but I will review it here. There are basically three kinds of "row operations" you can do to a matrix:

1) Multiply every number in a row by the same nonzero number.
2) Swap two rows.
3) Add a multiple of one row to another.

The crucial point is that every row operation corresponds to multiplication by a specific matrix: in fact, if you apply a row operation to the identity matrix, you get an "elementary" matrix, and multiplying any matrix by that elementary matrix is the same as applying the corresponding row operation to it. Write your matrix A and the identity matrix side by side. Apply row operations, one after another, to reduce A to the identity matrix. This can be done fairly mechanically. (If you get a row consisting entirely of 0s, you can't do this. That is exactly the situation in which A does not have an inverse.) As you apply each row operation to A, apply it also to the identity matrix beside it. By the time you have reduced A to the identity matrix, you will have changed the identity matrix into [math]A^{-1}[/math]. This is because applying the sequence of row operations to A is the same as multiplying A on the left by the corresponding elementary matrices. The product of all of those elementary matrices is the one matrix that, when it multiplies A, gives the identity matrix: [math]A^{-1}[/math]. Applying the same row operations to the identity matrix multiplies that product by the identity matrix, which is a quick way of accumulating the elementary matrices without ever having to write them down.
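The augmented-matrix procedure described above can be sketched in a few lines of code. This is my own minimal illustration (the function name and sample matrix are assumptions, not from the thread): it forms [A | I], then uses the three row operations to reduce the left half to the identity, at which point the right half is [math]A^{-1}[/math].

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^{-1}]."""
    n = len(A)
    # Augment A with the identity matrix on the right.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Row operation 2: swap in a row with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            # A column with no pivot means A has no inverse.
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Row operation 1: scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Row operation 3: subtract multiples of the pivot row
        # to clear every other entry in this column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The left half is now I, so the right half is A^{-1}.
    return [row[n:] for row in M]
```

For example, `invert([[2, 1], [1, 1]])` reproduces the 2x2 result [math]\begin{pmatrix}1 & -1\\ -1 & 2\end{pmatrix}[/math], and the same code handles 3x3 and larger matrices unchanged.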
DJBruce Posted June 26, 2008 Posted June 26, 2008 http://mathforum.org/library/drmath/view/55480.html