AllCombinations Posted August 6, 2015 What are coefficients, really? The elementary answer, for a polynomial of the form y=ax^2+bx+c, is that a and b are coefficients that are usually held constant (though in some cases they might be construed as variables), and that c is a constant which, in the case of a polynomial, is the y-intercept. But what ARE coefficients? What is the formal definition of each one, and of what they are as a set? That is, if we say that the coefficients of a polynomial (or any function/mapping) form a set S={a1,a2,...,an}, what is this set by itself? The answer should include polynomials but can include rational expressions or linear functions, though I know that in the linear case the numbers m and b, for instance, are the slope and y-intercept. But what do these numbers make up as a set? Maybe there is no deeper definition, but if there is one I would like to hear it. Thanks!
ajb Posted August 6, 2015 Like you say, coefficients are elements, usually from a ring or field, that are considered constant in a given expression.
AllCombinations (Author) Posted August 6, 2015 (edited) That is interesting, except I do not know what rings and fields are. In looking them up online I see that they appear to be concepts in abstract algebra, which I have yet to learn. I have only studied linear algebra. Is abstract algebra a prerequisite to comprehending how the coefficients of a function (polynomial, rational, transcendental, etc.) affect the function itself and how those coefficients behave together or might be considered as a set unto themselves? I am largely self-taught and I am looking for guidance in what to learn. Math is so vast a subject and I am not sure which direction to head in. Specifically, apart from wishing to comprehend and discuss coefficients in the manner I have mentioned, I have a textbook on tensor analysis (Tensor Analysis by Richard L. Bishop and Samuel I. Goldberg) and another that claims to be in-depth on the determinants of matrices (Determinants and Their Applications in Mathematical Physics by Robert Vein and Paul Dale). Would it be more beneficial to study these two books first and then an abstract algebra course, or would either one of these make decent prerequisites to abstract algebra? Or do none of these necessarily follow from the other? In particular, I suppose I am interested in how coefficient matrices might be studied. I am looking to save myself time by asking for guidance in how to approach these various subjects, how they relate to each other, and so on, and I am beginning to worry that I am repeating myself and being redundant, so I will stop here. Thank you for your help, by the way. Edited August 6, 2015 by AllCombinations
studiot Posted August 6, 2015 Coefficient = cooperating to produce a result, from the Latin. Coefficients mean nothing by themselves, but they modify variables in some way, cooperating with them to produce a result that neither could produce by itself. This is more common in Physics than in Maths. They can also act as 'filters', selecting a particular set of circumstances from the many available ones. This is more common in Maths than in Physics. For example, stress = a coefficient x strain (Hooke's law). This coefficient, called Young's modulus, converts units of strain to units of stress. Also, voltage = a coefficient x current (Ohm's law). This coefficient, called resistance, converts units of current to units of voltage. You mentioned first order equations y = ax + b. For any given a, the coefficient b selects a particular straight line from an infinite set of parallel lines covering the xy plane. Also, for any given b, the coefficient a selects a particular straight line from an infinite starburst of rays passing through (0, b), again all covering the plane.
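A tiny Python sketch of that 'filter' idea (my own illustration, not part of studiot's post): fixing the coefficient pair (a, b) in y = ax + b selects one particular line out of the whole family.

```python
def make_line(a, b):
    """Return the straight line y = a*x + b selected by the coefficients (a, b)."""
    return lambda x: a * x + b

family = [make_line(a, 1.0) for a in (-2, 0, 3)]   # three lines of the starburst through (0, 1)
print([f(2.0) for f in family])                    # [-3.0, 1.0, 7.0]
```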
Acme Posted August 6, 2015 Coefficient = cooperating to produce a result, from the Latin. Coefficients mean nothing by themselves but they modify variables in some way, cooperating with them to produce a result that neither could produce by themselves. This is more common in Physics than Maths ... Pascal's triangle binomial theorem cough cough.
studiot Posted August 6, 2015 Pascal's triangle binomial theorem cough cough What about it, go on when you have cleared your throat.
Acme Posted August 7, 2015 (edited) What about it, go on when you have cleared your throat. Just some gentle ribbing of you in defense of coefficients in mathematics having some meaning in-and-of themselves. Similarly, coefficients have meaning in linear algebra as matrix operands. Nothing to get hung about. Edited August 7, 2015 by Acme
AllCombinations (Author) Posted August 7, 2015 Well, according to Wikipedia at least, https://en.wikipedia.org/wiki/Polynomial_ring#The_polynomial_ring_K.5BX.5D the coefficients are elements of a field, whatever a "field" is. Learning is hard with that site, because the page defining a polynomial ring in terms of fields directs you to a page that says a field is a kind of ring. Kind of a catch-22, which you get a lot with Wikipedia. I don't have any books that talk about rings and fields and groups though.
fiveworlds Posted August 7, 2015 (edited)
What coefficients really are? Coefficients are our means of describing manipulations on functions. This might be difficult to see in school, where there is little use for them. However, say I am a farmer and I have 100 cows and 50 sheep. Let's say that on average each gives me approximately 1 litre of milk a day. The creamery pays me 0.3$ per litre of cow's milk (x) and 0.2$ per litre of sheep's milk (z). Working out how much money I will make is a simple matter of multiplication.
Daily Earnings = 0.3(x) + 0.2(z)
Daily Earnings = 0.3(100) + 0.2(50)
Now it is easy to notice that at the moment I get paid 0.3$ per litre for cow's milk, but this is subject to change over time. Then the government tells me they are taxing cow's milk up to 40 litres at 0.02$ per litre and at 0.04$ after that, with goat's milk being taxed at 0.1$ per litre. So I say ok, I will have to factor that in while doing my accounts.
Daily Earnings = (0.3-0.02)(40) + (0.3-0.04)(60) + (0.2-0.1)(50)
Next they tell me that I have to pay for all the water that I use at 0.03$ a litre. A cow on average drinks 8 litres a day. A goat drinks 4 litres a day.
Daily Earnings = (0.3-0.02-(0.03*8))(40) + (0.3-0.04-(0.03*8))(60) + (0.2-0.1-(0.03*4))(50)
Then I am told that the price of a bale of hay is 10$. I know that I go through approximately three bales of hay a day.
Daily Earnings = (0.3-0.02-(0.03*8))(40) + (0.3-0.04-(0.03*8))(60) + (0.2-0.1-(0.03*4))(50) - 10(≈3)
Then I am told that there is a tax on my daily earnings at 2%.
Daily Earnings = (0.3-0.02-(0.03*8))(40) + (0.3-0.04-(0.03*8))(60) + (0.2-0.1-(0.03*4))(50) - 10(≈3) - (Daily Earnings*0.02)
Normally after 60 days I sell my cattle to the slaughterhouse. The slaughterhouse pays 150$ for a cow and 200$ for a goat. However there is a 15% tax on the sale.
Earnings for 60 days = (Daily Earnings*60) + (150*0.85)(100) + (200*0.85)(50)
The government then tells me that for filling out my paperwork on time I get 2% of my total tax paid back, not including water charges.
Tax back = ( (0.02)(40)(60) + (0.04)(60)(60) + (0.1)(50)(60) + (150*0.15)(100) + (200*0.15)(50) ) * 0.02
Then the government says that it would be irresponsible to allow anybody to work on a farm without a degree in agricultural science. Now it won't affect farmers like myself, but it would mean that for my child to inherit my farm I would need to be able to pay to put my child through college. Not only that, but in order to get into these colleges my child must attain a minimum standard of education. These colleges cost approximately 4000$ per year over a four-year term. If I continue on the path I am on, given my current earnings, how large a deficit will I have in 20 years' time?
Edited August 7, 2015 by fiveworlds
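A rough Python sketch of the arithmetic in fiveworlds' example (my own code, using the post's figures; note that the post switches between sheep and goats, so they are treated as one herd here). The point is that every price, tax rate, and consumption figure is just a coefficient multiplying a quantity, and the 2% tax on earnings turns the last daily equation into one that has to be solved for Daily Earnings.

```python
def daily_earnings(cow_price=0.30, goat_price=0.20,
                   tax_low=0.02, tax_high=0.04, tax_goat=0.10,
                   water_price=0.03, cow_water=8, goat_water=4,
                   hay_price=10, hay_bales=3,
                   cow_litres=100, goat_litres=50, low_band=40,
                   income_tax=0.02):
    """Daily earnings from the post; every keyword argument is one of its coefficients.

    Each animal gives one litre a day, so per-animal water costs and
    per-litre costs coincide, exactly as in the post's formula.
    """
    high_band = cow_litres - low_band
    pre_tax = ((cow_price - tax_low - water_price * cow_water) * low_band
               + (cow_price - tax_high - water_price * cow_water) * high_band
               + (goat_price - tax_goat - water_price * goat_water) * goat_litres
               - hay_price * hay_bales)
    # "Earnings = pre_tax - 0.02 * Earnings", solved for Earnings:
    return pre_tax / (1 + income_tax)

print(round(daily_earnings(), 2))   # about -27.65: the farm runs at a daily loss
```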
Acme Posted August 7, 2015 Coefficients are our means of describing manipulations on functions. ... The slaughter house pays 150$ for a cow and $200 for a goat. ... Raising goats is the preferred cowefficient and mutton you can say will change my mind.
ajb Posted August 7, 2015 (edited)
Is abstract algebra a prerequisite to comprehending how the coefficients of a function (polynomial, rational, transcendental, etc.) affect the function itself and how those coefficients behave together or might be considered as a set unto themselves?
For example, if you have some polynomials in an abstract variable and you want the set of all such things to have some further algebraic properties, then you need the coefficients to have some algebraic properties themselves. The minimum one can usually take is that the coefficients come from a ring, but this depends on what you want exactly.
Specifically, apart from wishing to comprehend and discuss coefficients in the manner I have mentioned, I have a textbook on tensor analysis (Tensor Analysis by Richard L. Bishop and Samuel I. Goldberg) and another that claims to be in-depth on the determinants of matrices (Determinants and Their Applications in Mathematical Physics by Robert Vein and Paul Dale). Would it be more beneficial to study these two books first and then an abstract algebra course, or would either one of these make decent prerequisites to abstract algebra? Or do none of these necessarily follow from the other?
Knowing the basics of matrices over the reals would be helpful before reading about tensors. If you want to read up on abstract algebra, the things to really be comfortable with are the notions of a field and a vector space.
the coefficients are elements of a field, whatever a "field" is.
Learn by example. The real numbers form a field, and the complex numbers also form a field. Loosely, a field is a set for which all the arithmetic rules of the real numbers also hold.
Edited August 7, 2015 by ajb
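A small illustration of ajb's point (my own Python sketch, not from the thread): polynomial arithmetic reduces entirely to arithmetic on the coefficients, which is why the coefficients need to come from something you can add and multiply in, i.e. a ring.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists [a0, a1, a2, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj   # only additions and multiplications of coefficients
    return out

print(poly_mul([1, 1], [1, -1]))    # (1 + x)(1 - x) = 1 - x^2  ->  [1, 0, -1]
```

Here the coefficients happen to be integers; swapping in rationals, reals, or complex numbers changes nothing in the code, which is the sense in which the polynomial ring is built "over" whatever ring the coefficients live in.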
AllCombinations (Author) Posted August 8, 2015 Yeah... well, I am going to go study the abstract algebra series on youtube. I guess I will get back to you.
AllCombinations (Author) Posted November 11, 2015 Okay... so coefficients are usually over a field. A field is some set like the rationals, reals, or complex numbers. However, according to this book I found, "Linear Algebra via Exterior Products" by Sergei Winitzki, found at http://www.ime.unicamp.br/~llohann/Algebra Linear Verao 2013/Material extra/Linear algebra via exterior product.pdf (bottom of page 16 and into page 17; pages 22, 23 of the pdf file), there is an example where the coefficients of a polynomial are defined via the dual basis to the polynomial basis 1, x, x^2. It sounds as if, when 1, x, x^2 is a basis, the arbitrary coefficients a, b, c are the dual basis. Is this always the relation between variables/indeterminates and coefficients in algebraic and transcendental functions? Or am I not understanding what that document is saying? Thanks in advance.
studiot Posted November 11, 2015 (edited)
Okay... so coefficients are usually over a field. A field is some set like the rationals, reals, or complex numbers. However, according to this book I found, "Linear Algebra via Exterior Products" by Sergei Winitzki, found at http://www.ime.unica...ior product.pdf (bottom of page 16 and into page 17; pages 22, 23 of the pdf file), there is an example where the coefficients of a polynomial are defined via the dual basis to the polynomial basis 1, x, x^2. It sounds as if, when 1, x, x^2 is a basis, the arbitrary coefficients a, b, c are the dual basis. Is this always the relation between variables/indeterminates and coefficients in algebraic and transcendental functions? Or am I not understanding what that document is saying?
I don't know if your reading since August will have taken you this far, but I would say that your article is referring to linear functionals. Extract from Wikipedia:
Algebraic dual space. Given any vector space V over a field F, the dual space V∗ is defined as the set of all linear maps φ: V → F (linear functionals). The dual space V∗ itself becomes a vector space over F when equipped with the addition and scalar multiplication defined by (φ + ψ)(x) = φ(x) + ψ(x) and (aφ)(x) = a(φ(x)) for all φ and ψ ∈ V∗, x ∈ V, and a ∈ F. Elements of the algebraic dual space V∗ are sometimes called covectors or one-forms. The pairing of a functional φ in the dual space V∗ and an element x of V is sometimes denoted by a bracket: φ(x) = [φ,x] or φ(x) = ⟨φ,x⟩. The pairing defines a nondegenerate bilinear mapping [·,·] : V∗ × V → F.
Functional analysis is important, and was all the rage 50 years ago. But to appreciate it you need to know what functionals and the dual space are. Indeed you need to know what a space is.
So a set is simply a collection of members. A space is when we impose a 'structure' on that set. Roughly, a structure means rules and relationships between members. Sometimes we include more than one set in the space. This is the case with vector spaces. A vector space has two sets, the set of vectors and a second set of coefficients (scalars). A requirement of the second set is that it forms a 'field'. As you so rightly observe, certain specific number sets such as the reals form a field.
A functional is a function that associates each vector in the set of vectors with an element from the field set in the space. A common example is the definite integral. The set of all linear functionals is called the 'dual space' of the vector space.
Edited November 11, 2015 by studiot
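To make that concrete, here is a small Python sketch (my own toy example, not from the book or the posts): for quadratics with basis {1, x, x^2}, the dual basis is the triple of functionals that each extract one coefficient, and each one can be realized by evaluating the polynomial at a few points.

```python
def poly(a, b, c):
    """The polynomial p(x) = a + b*x + c*x**2, returned as a Python function."""
    return lambda x: a + b * x + c * x ** 2

# Dual-basis functionals: each maps a quadratic to one of its coefficients.
def e0(p):   # picks out a, since p(0) = a
    return p(0)

def e1(p):   # picks out b: for a quadratic, (p(1) - p(-1)) / 2 = b
    return (p(1) - p(-1)) / 2

def e2(p):   # picks out c: for a quadratic, (p(1) - 2*p(0) + p(-1)) / 2 = c
    return (p(1) - 2 * p(0) + p(-1)) / 2

p = poly(2, -3, 5)
print(e0(p), e1(p), e2(p))   # 2 -3.0 5.0: the coefficients come back out
```

Each e_i is linear and sends its own basis polynomial to 1 and the other two to 0, which is exactly the defining property of a dual basis; the coefficients a, b, c themselves are then the values these functionals take on a given polynomial.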
AllCombinations (Author) Posted November 11, 2015 Following your example, I ended up on the linear functional page of Wikipedia, which reads (in part) that a linear functional (a.k.a. linear form, one-form, or covector) is a linear map from a vector space to its field of scalars, and that in R^n, if vectors are represented as column vectors, then linear functionals are represented as row vectors. That is good enough for me. Then, given a vector space V over a field F, the dual space V* is the set of all linear maps from V to F... a.k.a. the linear functionals. So can I say that if we have a vector space V such as R^2 over the field of real numbers, and we have a couple of vectors v1, v2, and we arrange those vectors in a square array (a matrix), then if the columns of that matrix are in V, its rows are in V*? Does that mean that if the columns of the matrix correspond to some basis e1, e2, then the rows correspond to some "dual basis" e*1, e*2? Do I have that right?
studiot Posted November 11, 2015 So can I say that if we have a vector space V such as R^2 over the field of real numbers, and we have a couple of vectors v1, v2, and we arrange those vectors in a square array (a matrix), then if the columns of that matrix are in V, its rows are in V*? Does that mean that if the columns of the matrix correspond to some basis e1, e2, then the rows correspond to some "dual basis" e*1, e*2? Yes, sort of. You have the right idea, but be careful with the terminology. This only comes with practise. Is R2 just a vector space, for instance? Wouldn't that statement imply that all transformations in R2 are linear? Are they?
AllCombinations (Author) Posted November 11, 2015 Um... I don't know. No, I guess not. I am still trying to get used to speaking in such abstract terms. I guess I was just trying to formulate a more concrete example, such as a 2x2 matrix in R2. You're right though, I need to be careful with how things are phrased. According to Wikipedia's article on row and column vectors, "The column space can be viewed as the dual space to the row space, since any linear functional on the space of column vectors can be represented uniquely as an inner product with a specific row vector." And the column space is the space spanned by the columns of a matrix... therefore, if the columns of a matrix are linearly independent, then the columns span whatever space corresponds with the number of vectors. Again, for a 2x2 matrix that would be R2, assuming everything is linear. I don't know for sure how to be more careful in talking about such things. But suppose that the columns of that matrix are linearly independent and the rows are not. Then the space might be linearly independent but its dual space would not be. For instance, suppose the columns of the matrix correspond with the standard basis but the rows are identical. Hmmm... but that wouldn't work either, because then the columns would not be linearly independent. Does this imply that if the space is linearly independent then the dual space will be too? Or, to put it another way, that the space and the dual space will ALWAYS have the same dimension? Maybe I am getting a little off topic. Like I said, I am trying to adjust to thinking about things so abstractly. Maybe it would be better to find concrete examples first and then work from the specific to the general.
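A concrete check of that musing (my own numpy example, not from the posts): if the columns of a 2x2 matrix A form a basis of R^2, then the rows of A^{-1} act as the dual basis, and since the inverse always has the same shape as A, the dual basis has exactly as many elements as the basis.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 3.0]])      # columns v1 = (1, 1), v2 = (2, 3): a basis of R^2
dual = np.linalg.inv(A)         # the rows of A^{-1} are the dual-basis covectors

print(dual @ A)                 # identity matrix (up to rounding): e*_i(v_j) = delta_ij
print(dual[0] @ A[:, 0])        # 1.0: the first covector applied to v1
print(dual[0] @ A[:, 1])        # 0.0: the first covector applied to v2
```

For a finite-dimensional space this is the general picture: the dual space always has the same dimension as the space itself.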
studiot Posted November 11, 2015 Your musings about matrices are pretty close to the mark. Dual spaces turn up in all sorts of places and functional analysis is of great importance in applied maths. Keep up the good work.
AllCombinations (Author) Posted November 11, 2015
Thanks! And thanks for your encouragement. It's pretty rare on the Internet. Just talking about all of this stuff to someone else has helped me to think through it. Going back to the original example from that book I cited, we say that {1, x, x^2} is a basis and {a, b, c} is a dual basis. Suppose then that we wish to perform polynomial interpolation of a function of the form y = a + bx + cx^2 through the points, say, (1,1), (2,3), and (7,11). Then, setting up a system of equations, we pick
a + b + c = 1
a + 2b + 4c = 3
a + 7b + 49c = 11
Then the basis of the columns formed by the left side of the system would be {1, x, x^2}. When these values are solved for, we get a = -17/15, b = 33/15, and c = -1/15. Yet, keeping the array format of the equations, a, b, and c go down the rows when the system is solved by elimination or whatever. I am understanding it better, I think, at least in terms of matrices, rows, and columns. And this also gives me a better understanding of what the coefficients of an arbitrary function are. Thanks again.
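A quick numerical check of that interpolation (my own sketch using numpy, not part of the original post): the rows of the matrix are (1, x_i, x_i^2) evaluated at the three data points, so the unknown vector is exactly the coefficient triple (a, b, c).

```python
import numpy as np

V = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 7, 49]], dtype=float)   # rows: (1, x, x^2) at x = 1, 2, 7
y = np.array([1, 3, 11], dtype=float)

a, b, c = np.linalg.solve(V, y)
print(a, b, c)   # approximately -1.1333, 2.2, -0.0667, i.e. -17/15, 33/15, -1/15
```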
Keen Posted November 16, 2015
That is interesting, except I do not know what rings and fields are. In looking them up online I see that they appear to be concepts in abstract algebra, which I have yet to learn. I have only studied linear algebra. Is abstract algebra a prerequisite to comprehending how the coefficients of a function (polynomial, rational, transcendental, etc.) affect the function itself and how those coefficients behave together or might be considered as a set unto themselves? I am largely self-taught and I am looking for guidance in what to learn. Math is so vast a subject and I am not sure which direction to head in. Specifically, apart from wishing to comprehend and discuss coefficients in the manner I have mentioned, I have a textbook on tensor analysis (Tensor Analysis by Richard L. Bishop and Samuel I. Goldberg) and another that claims to be in-depth on the determinants of matrices (Determinants and Their Applications in Mathematical Physics by Robert Vein and Paul Dale). Would it be more beneficial to study these two books first and then an abstract algebra course, or would either one of these make decent prerequisites to abstract algebra? Or do none of these necessarily follow from the other? In particular, I suppose I am interested in how coefficient matrices might be studied. I am looking to save myself time by asking for guidance in how to approach these various subjects, how they relate to each other, and so on, and I am beginning to worry that I am repeating myself and being redundant, so I will stop here. Thank you for your help, by the way.
Algebra being my favorite subject in mathematics, I think I can give you some suggestions concerning what you can study. It's not very easy to get into abstract algebra, because structures like fields, groups, rings, and vector spaces are in some sense closely related, so considering any one of them by itself does not take you very far, and I do not think there is a perfect order in which one should learn about all these structures. But since one has to start somewhere, I would personally recommend learning in this order:
Start with some elementary group theory. Nothing complicated, just get the hang of notions like order, the symmetric group, group actions, etc.
Then learn about finite-dimensional vector spaces and linear maps and how they relate to matrices and systems of equations. Try to think about different examples: it seems you have already got the hang of polynomial functions, and that's great. Linear algebra is an extremely powerful tool in mathematics and I think it is important to familiarize oneself with as many linear structures as possible. Once you are familiar enough with the basic notions in linear algebra, you can study things like the scalar product, the determinant, and eigenvalues.
Then I think you will be ready to study the ring structure. The most important things here are to learn what ideals, quotient rings, and rings of polynomials are. Once you understand those, you can study fields and general linear algebra over abstract fields.
I come from a French academic background, so I am not sure I can recommend good books on those subjects in English, but you can try to start, for example, with J.S. Milne, Group Theory, Chapters 1-6. Then continue, for example, with this book on linear algebra, https://www.math.ucdavis.edu/~linear/linear-guest.pdf and you can end with http://www.maths.usyd.edu.au/u/bobh/UoS/rfwhole.pdf. Those are what I consider to be the basic notions in algebra.