Everything posted by Dr. Jekyll
-
Curvature and spatial closure of the universe.
Dr. Jekyll replied to Dr. Jekyll's topic in Astronomy and Cosmology
Thanks for the reply Martin! Since I'm a bit drunk again, let us divide: (13.7e9/sqrt(0.003))/1e9 = 250 billion light years. How is [math]\Omega_k[/math] or [math]\Omega_{tot}[/math] estimated? I don't remember the details, and weeding through the NASA articles to refresh is a bit tedious. [math]\Omega_{tot}=\Omega_m+\Omega_r+\Omega_k+\Omega_{\Lambda}[/math], correct? I mean, does WMAP data give an [math]\Omega_{tot}[/math] (loosely speaking), from which we can then derive [math]\Omega_k[/math]? Or are [math]\Omega_m[/math], [math]\Omega_k[/math], etc. measured individually and summed to give a value of [math]\Omega_{tot}[/math]? I have assumed the latter. -
Every time I get into this subject I get confused. All these [math]\Omega_j[/math] and [math]\Lambda_k[/math]. Hmm... now, where was I? Oh here, let us take the latest WMAP data. Maybe I got some decimals wrong (disregard that), but let us assume a density parameter of [math] \Omega_{tot}=1.003 \pm 0.010[/math]. I'm a bit drunk atm, but [math]\Omega_{tot}[/math] is a sum of different ratios (density ratios), also including a possible cosmological constant - [math]\Lambda_{something}[/math]? Now, assume, hypothetically, that [math]\Omega_{tot}=1.0000000...[/math]. As I get it, that only implies that the universe will expand on and on. However, the universe CAN still be closed. Which measured parameters determine whether the universe is spatially closed or open? Back in the mid 90s, when I studied physics, it was thought that it all depended on gravity vs. mass. But now we all know that is wrong due to dark energy. So, can someone (Martin?) elaborate on this subject?
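The division above can be checked numerically. A minimal sketch; the 13.7e9 light-year Hubble length and [math]\Omega_k=0.003[/math] are the rough round numbers from the post, not precise measurements:

```python
import math

# Rough numbers from the post (hypothetical round values):
# Hubble length ~ 13.7e9 light years, |Omega_k| ~ 0.003.
hubble_length_ly = 13.7e9
omega_k = 0.003

# Radius of spatial curvature: R = (Hubble length) / sqrt(|Omega_k|)
radius_ly = hubble_length_ly / math.sqrt(omega_k)
print(f"curvature radius ~ {radius_ly / 1e9:.0f} billion light years")  # ~250
```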
-
At present I would say that we do not have enough data, and/or are lacking some theory, to determine the case. But if I were to place a bet, I would go for a finite universe; that is what I "believe." (If we discard scenarios/assumptions such as, e.g., other, additional universes that could sum up to "infinitely many finite universes.")
-
Have there been any new discoveries/data regarding this? I got the impression from the latest WMAP articles, and statements from cosmologists regarding WMAP, that they seem "more confident" that the universe is flat/infinite.
-
Aterna's question---the unexpected cosmic horizon
Dr. Jekyll replied to Martin's topic in Astronomy and Cosmology
I read the SciAm article; I have seen it before but didn't pay much attention to the Hubble distance part. It is 100% clear that light from galaxies beyond the Hubble distance can reach us if the Hubble constant decreases. But I thought that the Hubble constant increases with time, since the expansion of the universe is accelerating. I mean, for a distance [math]d[/math] we have [math]v(t)=H(t)d[/math], and the accelerating expansion of the universe gives [math]v'(t)>0 \Rightarrow v'(t)=H'(t)d>0 \Rightarrow H'(t)>0[/math]. I guess I misunderstood something? Physics wasn't my major, math was! -
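For concreteness, the Hubble law [math]v=Hd[/math] fixes the Hubble radius as the distance where the recession speed reaches c. A sketch; the value H0 ≈ 70 km/s/Mpc is an assumed round number, not a figure from this thread:

```python
# Hubble radius: the distance at which v = H0 * d equals c.
# H0 ~ 70 km/s/Mpc is an assumed round value for illustration.
c_km_s = 2.998e5                # speed of light in km/s
H0 = 70.0                       # km/s per Mpc (assumed)
d_mpc = c_km_s / H0             # Hubble radius in Mpc
d_gly = d_mpc * 3.2616e6 / 1e9  # convert: 1 Mpc ~ 3.2616e6 light years
print(f"Hubble radius ~ {d_gly:.1f} billion light years")  # ~14
```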
Indeed! Hahaha . However, everyone who has tried to prove something has at one time or another ended up with 0=0, 1=1 or such. To defend the poster, I think it is a very important equation since it tells us we have ****ed up!
-
Trick question: what if [math]T=a[/math]? Quite surprised no one even mentioned that (it was my first thought); maybe I overlooked some defs.
-
I take a 2 by 3 case as an example; you can work out the general m by n case easily. Let [math]a= \left[\begin{array}{c} a_1 \\ a_2\\ \end{array} \right] [/math] and [math]b= \left[\begin{array}{c} b_1 \\ b_2\\ b_3\\ \end{array} \right] [/math] Then [math]ab^T=\left[\begin{array}{ccc} a_1b_1 & a_1b_2 & a_1b_3\\ a_2b_1 & a_2b_2 & a_2b_3\\ \end{array} \right] [/math] Each column vector is of the form [math]b_i \left[\begin{array}{c} a_1 \\ a_2\\ \end{array} \right], \ \ i=1,2,3.[/math] Thus all column vectors are linearly dependent, i.e., the column rank is 1. If we discard the knowledge that row and column rank are equal, the same applies to the row rank, as each row vector is of the form [math]a_i \left[\begin{array}{ccc} b_1 & b_2 & b_3\\ \end{array} \right], \ \ i=1,2.[/math] All rows are linearly dependent and the row rank is 1.
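The rank-1 claim is easy to verify numerically. A minimal NumPy sketch of the same 2 by 3 case, with arbitrary example vectors:

```python
import numpy as np

# Arbitrary nonzero example vectors (any such a, b give rank 1)
a = np.array([[1.0], [2.0]])         # 2x1 column vector
b = np.array([[3.0], [4.0], [5.0]])  # 3x1 column vector

M = a @ b.T                          # 2x3 outer product a b^T
print(np.linalg.matrix_rank(M))      # prints 1
```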
-
Aterna's question---the unexpected cosmic horizon
Dr. Jekyll replied to Martin's topic in Astronomy and Cosmology
I only got a BA in physics, and it was around 13 years ago that I studied physics; at that point the accelerated expansion of the universe wasn't even known. I know that the expansion of space does not contradict the usual "information being sent faster than light" restriction in relativity. I just thought that it is possible to set up the typical thought experiment "traveling train on a rail that emits light" and also include the expansion of space. I figure that the expansion of space does not play any role, and we still get the same SR formulas, but I haven't sat down and calculated it. Yes, I mixed up the Hubble radius with the cosmic horizon; I meant the Hubble radius. If we get signals from galaxies outside the Hubble radius, then if we traveled at the speed of light (hypothetically) we could go there? The converse is true, since those galaxies send information to us at the speed of light. Maybe I misunderstood your post where you say one could only reach z=1.7 galaxies when traveling at the speed of light; I got the impression that those galaxies are at the Hubble radius. I imagined that the Hubble radius would be kinda like the event horizon: when a galaxy exits the Hubble radius, we would only observe photons emitted at that boundary, forever and ever, similar to what you as an outside observer would see when an object hits the event horizon. I will check out the SciAm article; I figure it clears things up! -
First I thought it was a monomial, but that is without any coefficient. It seems a monomial [math]x^n[/math] with a coefficient is called a "term." http://mathworld.wolfram.com/Monomial.html http://mathworld.wolfram.com/Term.html
-
Aterna's question---the unexpected cosmic horizon
Dr. Jekyll replied to Martin's topic in Astronomy and Cosmology
Yes, I find this interesting. The galaxies (well, "any space") at the cosmic horizon travel away from us at the speed of light, and the galaxies beyond at an even greater speed. How does relativity theory handle these velocities, which arise due to the expansion of space? If we can observe galaxies beyond the cosmic horizon, we must be observing them as they were just before they exited the horizon, or? Is the cosmic horizon something like the event horizon of a black hole? Do all these questions have an answer? If not, why? -
Hello Pete! Can't one just re-measure the particles? I mean, then you should get the same result as the previous measurement. If the results are the same, the particles are no longer entangled, or they never were entangled.
-
Small scale PCR and the components.
Dr. Jekyll replied to Dr. Jekyll's topic in Biochemistry and Molecular Biology
Oki, thanks! I have no intuition regarding the size of such small quantities, e.g., 0.05 ml. Didn't even bother to make a simple calculation of it, since I figured it would be kinda like dipping a needle in a cup of water. But it is much larger than I thought: a 0.05 ml drop corresponds to a cube about [math](0.05\ \textrm{cm}^3)^{1/3}\approx 3.7\ \textrm{mm}[/math] on a side. -
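The conversion is a two-liner (0.05 ml = 0.05 cm³, and the cube-root of the volume gives the side of an equivalent cube):

```python
# Side length of a cube holding 0.05 ml (= 0.05 cm^3)
volume_cm3 = 0.05
side_cm = volume_cm3 ** (1 / 3)  # cube root of the volume
side_mm = side_cm * 10           # 1 cm = 10 mm
print(f"{side_mm:.1f} mm")       # prints 3.7 mm
```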
I'm interested in small scale PCR, but I'm just a layman in this area. I have some questions regarding the components needed, starting with the dNTPs. dNTP mixes (or the individual dATP, dCTP, dGTP, dTTP) are often sold in quantities of around 0.5 ml at a concentration of around 10-100 mM. As I understand it, quite often you only want to use 50 ul at a concentration of around (say) 50-100 uM per vial. If I did large scale PCRs I would probably dilute and use up most of the dNTP mix I bought. But if you're doing small scale PCRs with, say, just 1-2 vials at a time, how are the dNTPs handled/stored, since it is not wise to freeze/thaw dNTPs more than a couple of times? Is it possible to dilute the bought (say 0.5 ml, 100 mM) dNTPs and store/freeze them in quantities of 0.05 ml? Seems like a pretty tiny amount to freeze, but is this how to do it? Hopefully you get what I'm aiming at, and hopefully I've understood it correctly. That is, the bought dNTPs need to be diluted and stored in small quantities, since otherwise 90+% would go to waste when doing small scale PCR.
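The dilution arithmetic itself is just C1·V1 = C2·V2. A minimal sketch; the 10 mM working concentration and 0.05 ml aliquot size are hypothetical example numbers, not a protocol recommendation:

```python
def dilution_volume(c_stock_mM, c_target_mM, v_target_ml):
    """Volume of stock needed, from C1*V1 = C2*V2 (concentrations in mM, volumes in ml)."""
    return c_target_mM * v_target_ml / c_stock_mM

# Hypothetical example: 0.05 ml aliquots of a 10 mM working stock,
# made from the bought 100 mM stock.
v_stock_ml = dilution_volume(100.0, 10.0, 0.05)
print(f"{v_stock_ml * 1000:.1f} ul stock per 0.05 ml aliquot")  # 5.0 ul
```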
-
I was aiming mainly at Air's post, which talked about Omega in relation to the fate of the universe... and then Snail filled it in (correctly) by associating it with the geometry of the universe. For the rest, thanks for the lecture! You cleared up quite a bit for me. Thanks!
-
Are we not mixing up the definitions here? Typically, [math]\Omega[/math] stands for the curvature of the universe. This [math]\Omega[/math] is measured to be close to 1, which means a flat universe. The cosmological constant, on the other hand, is more commonly denoted [math]\Lambda[/math]; when connected to the calculation of the curvature of the universe, [math]\Omega_{\Lambda}[/math] is used. Now... [math]\Omega_{\Lambda}[/math], or [math]\Lambda[/math], describes the "destiny" of our universe (big freeze, big crunch, or just evening out), and as I read the WMAP data it is around [math]\Omega_{\Lambda}\approx0.75[/math] atm. The "real/standard cosmological constant [math]\Lambda[/math]," is it then [math]\Lambda=1-\Omega_{\Lambda}\approx0.25[/math]? It is a mess to read the WMAP articles, since I'm just a layman in the area... and when I took a couple of astronomy/cosmology courses 10+ years ago, dark matter wasn't even known.
-
Hmm, which [math]\Omega[/math] is that? NASA's latest WMAP data concludes that [math]0.9988\leq\Omega_{tot}\leq1.0116[/math]. Before that, earlier WMAP/SDSS/etc. data also indicated that it was around [math]1[/math].
-
Yes, I don't know what I was thinking with that sentence (I blame it on the beer hehe). I'm actually contradicting myself in the next sentences with Feynman diagrams. That is, an observer of space-time would not see any "randomness as we know it," at least not any random spontaneous decay, since they would see it kinda like a Feynman diagram. Maybe kinda OT...
-
Wouldn't causality (as we know it) be "non-existent" for anyone who can observe the whole of space-time? Like, e.g., spontaneous decay seems random. But if we draw a Feynman diagram, it seems as if we have causality. Wouldn't the same apply to a space-time observer?
-
This kind of thinking I like! But it drags me down to another touchy area. If we regard the whole of space-time as static (as a cone/unit ball/etc.), there is no randomness or "random spontaneous decay," "double slit experiment randomness," "entangled information being sent instantaneously," etc. Why not? It would be perfectly obvious to anyone who can observe the 4D space-time what will happen. Yes? There is no room for "no, the particle DIDN'T decay" if we already know it will do so, since we can observe the whole of space-time. Maybe I just had a dozen too many beers, but... Bite me!
-
Using the Lagrange multiplier to find extrema?
Dr. Jekyll replied to CalleighMay's topic in Analysis and Calculus
First note that you can choose to minimize [math]x^2+y^2[/math] instead of [math]\sqrt{x^2+y^2}[/math] (they have the same minimizer). Setting up the Lagrangian: [math] \mathcal{L}(x,y,\lambda)=x^2+y^2+\lambda(2x+4y-15) [/math] For an extreme value we need [math]\nabla \mathcal{L}=0[/math], i.e., [math]\nabla \mathcal{L}=[2x+2\lambda,\ 2y+4\lambda,\ 2x+4y-15]=[0,\ 0,\ 0][/math]. Three equations and three variables to solve for (then you might want to check whether it is a minimum or a maximum). -
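The three stationarity equations above happen to be linear, so they can be solved directly. A sketch with NumPy:

```python
import numpy as np

# Stationarity of L(x, y, lam) = x^2 + y^2 + lam*(2x + 4y - 15):
#   2x + 2*lam = 0
#   2y + 4*lam = 0
#   2x + 4y    = 15
# as a linear system A [x, y, lam]^T = b.
A = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 4.0],
              [2.0, 4.0, 0.0]])
b = np.array([0.0, 0.0, 15.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y)  # 1.5 3.0 -- the point on 2x + 4y = 15 closest to the origin
```

The minimum distance is then sqrt(1.5² + 3²) = sqrt(11.25) ≈ 3.354.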
Here is a link to a .doc file describing it: http://numericalmethods.eng.usf.edu/nbm/gen/07int/nbm_gen_int_txt_gaussquadrature.doc They only deal with lower orders, but the main procedure is there. I figure a proof of a general n-th order Gauss quadrature becomes quite messy, with lots of algebra. I don't think it was Gauss who invented this formula; it is only named after him. Anyone know who did, in that case (or have the stamina to Google)?
-
It probably was me! More seriously: I don't have this fresh in mind, but Gauss quadrature is merely the integral of a polynomial fitted to the given function points (of the function you wish to integrate). I.e., with two points you do a linear approximation, with three points a quadratic approximation (interpolation), etc. The weights and basis functions fall out in that way. You could probably derive the same formula yourself with the above knowledge and a load of stamina. More rigorously, they are given by setting up a linear system of equations, but that's not actually very interesting (since I don't have the details in my mind heheh), and I don't have the interest to refresh them either. More important is to know what the quadrature actually does, i.e., approximates the given integrand with a polynomial and computes the integral value. Maybe I just stated what you already knew; cheers anyway!
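A minimal sketch of what the rule buys you, using NumPy's built-in Gauss-Legendre nodes and weights: an n-point rule is exact for polynomials up to degree 2n-1 on [-1, 1].

```python
import numpy as np

# 3-point Gauss-Legendre rule on [-1, 1]: exact for polynomials
# up to degree 2*3 - 1 = 5.
nodes, weights = np.polynomial.legendre.leggauss(3)

f = lambda x: x**4               # degree 4; exact integral over [-1, 1] is 2/5
approx = np.sum(weights * f(nodes))
print(approx)                    # 0.4, exact up to floating-point rounding
```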