Everything posted by Bignose
-
I put that in my edit -- the language of the question is important. If the question were "which strategy wins more prizes?", then you have to use the expected value. ----------------------- The real question is whether Mokele or his wife won the debate?!?
-
This still isn't right. You are not taking into account the multiple wins in the expected value. The expected value of wins is equal to (1/10 per mini raffle)*(10 mini raffles) = 1. The binomial distribution gives exactly the same thing. Writing P(k,10) for the probability of k successes in 10 tries, the expected value = 0*P(0,10) + 1*P(1,10) + 2*P(2,10) + 3*P(3,10) + 4*P(4,10) + 5*P(5,10) + 6*P(6,10) + 7*P(7,10) + 8*P(8,10) + 9*P(9,10) + 10*P(10,10). Calculate this number out and you get 1. This says that, on average, every time this raffle occurs (this raffle consisting of 10 mini raffles where you have a 1 in 10 chance of winning each mini), you will win 1 prize. Again, it says nothing specific about any individual raffle -- you may win 5 prizes once, 0 prizes if it is repeated, 2 prizes if repeated again, etc. But, in the long run, if the raffle is repeated time and time again, the average will be 1 prize. Not 0.651 prizes, 1 prize. EDITED to add: this probably all comes down to the way Mokele asked the question. I am thinking that your answer answers his question better, because his question was what is the probability of winning something, which would be 1-P(0,10). Questions like these are always tricky; you have to be sure you are answering exactly what the asker wants.
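Both numbers in this exchange are easy to check directly. A quick sketch (assuming 10 independent mini raffles with a 1-in-10 chance each):

```python
from math import comb

p, n = 0.1, 10  # 1-in-10 chance per mini raffle, 10 mini raffles

def P(k, n):
    # P(exactly k wins) from the binomial distribution
    return comb(n, k) * p**k * (1 - p)**(n - k)

expected_wins = sum(k * P(k, n) for k in range(n + 1))
p_at_least_one = 1 - P(0, n)

print(expected_wins)    # ~1.0: on average, 1 prize per set of 10 mini raffles
print(p_at_least_one)   # ~0.651: the chance of winning *something*
```

The two answers in the thread are both right; they just answer different questions (average number of prizes vs. probability of at least one prize).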
-
I didn't insist on anything, I was trying to give a hint, and I still am not going to provide a detailed proof because I don't like spoon feeding anybody. If you really are driven to find someone else to do the work for you, these proofs can be found in the literature (and no, I'm not going to tell you which book).
-
You don't just use the binomial distribution for 1 success, because winning twice or three times or more should still count as a success in the winner's eyes. Your first calculation was correct. Each trial, you have a 1 in 10 chance of winning. And when you take 10 chances -- on average -- you will get 1 win. Just because the average is 1 does not mean it will happen every time, but it is the average. If the same raffle (10 mini-raffles, each with a 10% chance of winning a prize) were held 10,000 times, the total number of prizes at the end would be near 10,000 -- some raffles you would win 0 prizes, some raffles you would win 5 prizes, but in the end the average is 1 prize per set of 10 mini raffles that are run. As a related example: on average, if you roll a fair 6-sided die 6 times, how many times will the number 1 appear? Once. Same for all the numbers, because on average each number appears once every 6 rolls. The key phrase is "on average", because how any individual trial plays out is unknown. All we can do is calculate averages (and variances and other broad numbers), not specifics for any individual case. So, on average, it is better to play 1 ticket per cup (assuming that you can be reasonably sure that the cups will stay that way). The more interesting question is: if everybody else continues to distribute their tickets evenly, is it still better to spread yours around, or should you weigh more heavily on one side? Is there a certain number where a "break even" point is reached? I.e., at 86 tickets per cup you should spread your 10 around, but at 87 you should put all 10 into 1 cup to maximize your chance of at least 1 win?
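That closing question can be explored numerically. Under a simple model (my assumption, not anything stated in the thread: each cup is drawn independently, and every cup already holds n other tickets), spreading gives a 1/(n+1) win chance in each of 10 cups, while concentrating gives 10/(n+10) in one cup:

```python
def p_spread(n, tickets=10):
    # one of your tickets in each of 10 cups: win chance 1/(n+1) per cup,
    # cups drawn independently
    return 1 - (n / (n + 1)) ** tickets

def p_concentrate(n, tickets=10):
    # all of your tickets in a single cup holding n other tickets
    return tickets / (n + tickets)

for n in (9, 86, 87, 500):
    print(n, p_spread(n), p_concentrate(n))
```

For every n tried here, spreading keeps a small edge over concentrating for winning at least once, so under this particular model the hypothetical 86/87 break-even never appears; a different model of the raffle (e.g. cups that are not drawn independently) could change that.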
-
Proofs by contradiction usually work well for the first two -- that is, assume that there isn't a unique zero and show that this assumption leads to conclusions that break the axioms, and therefore there is a unique zero. Same with starting from a non-unique inverse. Whether it is homework for a class or not, the rule remains the same. Members can give clues or hints (like the above), but we still don't just do all the work for someone else. If you want to post your proof, or a few different ones, in the interest of discussing them, that's fine. But the forum, justifiably, will always err on the side of caution. And the cautious approach here is that this looks like homework, so I really hope that no one just posts answers.
-
No, that has nothing to do with it. You can't just manipulate "dy" like you can an x in x/x or an h in h/h and the like. The differential there obeys certain extra rules. Sometimes you can manipulate it like an algebraic term and sometimes you can't -- in this case, you can't. The expansion works because of the chain rule, not because of algebra. http://en.wikipedia.org/wiki/Chain_rule
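The chain rule is easy to check numerically. A small sketch (my example function, not one from the thread): the derivative of sin(x^2) is cos(x^2)*2x by the chain rule, and a central difference agrees with that, while no amount of "cancelling" differentials would produce the inner-derivative factor 2x:

```python
import math

def f(x):
    return math.sin(x * x)

def chain_rule(x):
    # outer derivative times inner derivative: cos(x^2) * 2x
    return math.cos(x * x) * 2 * x

x0, h = 0.7, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference approximation
print(numeric, chain_rule(x0))                 # the two agree closely
```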
-
Sure, they are. But x^2 and x^3 aren't equivalent. You can't represent one with the other. Your equation 4x^3 + C = 12x^2 is only true for certain values of x, not in general. That's the point: 4x^3 + C and 12x^2 aren't equivalent. The function A(x)=x^2 is only one dimensional because it only has one input, x. The area of a rectangle with sides x and y would be 2 dimensional, A(x,y) = xy, because it has two inputs. Just because the units of the terms in the equation have squared or cubed or some other power in them does not necessarily imply that the function has the same dimension. A(x)=x^2 is 1 dimensional because if x=5, A=25 and only 25. In this case, the inverse is true also: if A=16, then x can only equal 4 (restricting to x >= 0, since x is a length). Once you fix one of the values, the other is completely determined. Now, look at A(x,y)=xy. Fix x=5. That doesn't tell you anything about the area of that rectangle or what y is. That's because it is a 2-D function and you've only fixed one of its values. If you fix 2 of them, say x=5 and A=30, then y is fixed at 6. And, it doesn't matter what the units are. B(x,y) = x/y would be dimensionless if both x and y had the same units, but it is still a function with 2 degrees of freedom. Differentiation and integration don't necessarily change the degrees of freedom of a function, though they can. Let [math]f(x)=x^4 + 7[/math] [math]\frac{df}{dy} = 0[/math] because f is not a function of y. In this case, a function with 1 degree of freedom, when differentiated with respect to y, becomes a function with no degrees of freedom, because no matter what the input is, the output is 0.
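Both points above can be sketched numerically (plain Python, using the same example functions as the post):

```python
def f(x, y):
    # f depends only on x; y is along for the ride
    return x**4 + 7

h = 1e-6
x0, y0 = 2.0, 5.0
df_dy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
print(df_dy)      # 0.0: df/dy vanishes because f has no y dependence

def area(x, y):
    # two inputs, two degrees of freedom
    return x * y

# fixing x alone leaves the area undetermined; fixing x AND A pins down y
x_fixed, A_fixed = 5.0, 30.0
y_pinned = A_fixed / x_fixed
print(y_pinned)   # 6.0
```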
-
I don't know what you mean by re-stating. Because while x^2 and x^3 can be close to each other in value for a small range... they aren't the same. You cannot use one to express the other. They are both different functions. They are also both only 1 dimensional, because they have just the 1 input, x. To be 2 dimensional, it would have to be f(x,y). 3-D, f(x,y,z), and etc.
-
Also, it is "surveyor" not "survayer", though I like the pics. I think 50 to 75 years is a bit optimistic, however.
-
Are we at the edge of the universe already?
Bignose replied to rudolfhendrique's topic in Speculations
Please go through http://relativity.livingreviews.org/Articles/lrr-2006-3/ and show how every single experiment detailed in that excellent review paper is wrong, then. That paper surveys many of the experiments whose results confirm general relativity. If it is "easy" to show relativity wrong, then you should be able to provide contradictory evidence for each and every single experiment in that paper. If you cannot, it is only fair to ask you to retract that statement. -
Be careful with your notation here, because writing 1.0 = 6 is essentially meaningless -- it is simply a false statement. You meant something like f(1.0)=6, and you are trying to discover what f(x) is given f(1.0)=6, f(0.9)=3, f(0.3)=0.8. Then, as tree said, being limited to only three points, extrapolation or interpolation to other points is a very iffy proposition. I.e., you can find the best-fit line to those three points, but if the phenomenon is not linear, then using the line is going to give you erroneous results. These things are usually best worked on by attempting to determine the underlying form of the function first and then doing the fitting; it usually doesn't work the other way around, and it really won't work very well with only three points. Just as an example, there is a second order polynomial that will go through all three of those points. But, again, that certainly doesn't mean that the phenomenon is truly a second order polynomial. And, a second order polynomial will give you some farcically bad results -- it will turn back at some point, and that turning point is not a physically possible point. It fits your data, but doesn't mean anything.
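The warning about the quadratic can be made concrete with numpy (the location of the turning point is my calculation, not anything known about the real phenomenon):

```python
import numpy as np

# the three data points from the post: f(0.3)=0.8, f(0.9)=3, f(1.0)=6
xs = np.array([0.3, 0.9, 1.0])
ys = np.array([0.8, 3.0, 6.0])

a, b, c = np.polyfit(xs, ys, 2)   # the unique quadratic through 3 points
assert np.allclose(np.polyval([a, b, c], xs), ys)

vertex = -b / (2 * a)             # where the parabola turns back
print(a, b, c, vertex)
```

The fit is exact, but the parabola turns around near x = 0.55, right in the middle of the data: it "fits" yet predicts the quantity falling and then rising in a way the three points give no reason to believe.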
-
I didn't look for a source for this or anything, but I seem to recall a few years back that they opened jars of honey buried in the tombs of various Egyptian leaders and that the honey was just as good as on day one. The interest was in trying to figure out what flowers the bees were visiting back in the day, and they could still do it because the honey was still fresh.
-
Peer-review is not a guarantee Or why to always read sources critically.
Bignose replied to Mokele's topic in Other Sciences
Clearly, it means something that so far, only moderators and resident experts have posted in this thread. This thread almost needs to be copied to P&S -- those are the people that need to read it far, far, far more than the people who have responded to this thread to date! -
Is it really that hard to draw up a diagram as asked? I don't think anybody is looking for a professional-level diagram. Something scratched together in MS Paint would probably get the point across. Heck, something scratched on a cocktail napkin and scanned in would probably get the point across. I just think that you could really help your case by providing one. And, I don't appreciate the insulting tone I perceive in your response. I am showing you plenty of respect, as is everybody else in this thread; we deserve the same from you.
-
If you want us to understand, why don't you do what we've asked to help us understand? If you really want to get your point across, shouldn't you be very interested in trying to make it as clear to others exactly what your point is? Why the resistance to try to make it easy for the rest of us to see?
-
Why don't you just post a drawing so that everyone can see clearly what you're talking about? I've read through this thread, and I personally cannot tell what exactly you are talking about. J.C. has been a saint in this thread; most people would have given up a long time ago. The big thing, ABV, is that the onus is on you to demonstrate the issue, since you are the one claiming it. The onus is not on any of us to go to wikipedia and apply their generic picture to your situation. If you want to convince us of the issue, draw us a diagram of your specific situation, put in all the forces, and show us what the issue is. We shouldn't have to -- and most of us aren't going to -- piece together all the parts of the puzzle for you. You should have the puzzle completely assembled before you start, and give that to us; then we can look at the whole picture instead of piecing it together bit by bit.
-
Don't underestimate the simple fact that mathematics is an excellent environment in which to learn problem-solving skills -- skills that are obviously applicable to any other field. Math is a good environment for this because math has well-defined tools, and math problems (until you get to some very high-level ones) always have an answer, an answer that is clearly correct or incorrect. Most real-world problems aren't that clear cut; a real-world problem may not have a solution, or may have multiple solutions among which you have to weigh which is best. Nevertheless, the problem-solving skills that you hone in a mathematics class can still be used to solve real-world problems.
-
A weird eigenvalue problem
Bignose replied to ahmethungari's topic in Linear Algebra and Group Theory
This forum doesn't support using $ as the math delimiters like regular LaTeX. Wrap your expressions in [math] and [/math] tags in place of the opening and closing $'s, and the LaTeX will render correctly. -
I find it very funny that you complain of a lack of time to spend in cyberspace and yet keep responding... Anyhoo. ------------------------------------------- Just a thought -- starting a company making machines that use the ideas you have -- machines able to move several tons of material around (even if limited to non-daylight hours) without cranes or other heavy equipment -- would make you a boatload of money. Money which you could then spend to reach a lot more of the people you are trying to save, rather than just reaching a few over the Internet. The two goals aren't mutually exclusive. ------------------------------------- Anyhow, if this is your attitude, then you probably should just drop your claim, at least on this forum. Because you haven't posted any convincing evidence whatsoever, and until you do, you will be met with extreme skepticism. The current theory has been tested many thousands of times over and works pretty doggone well. Your theory has a few unvalidated stories behind it. I'm going to stick with the current theory.
-
I'll write out some more specific examples to try to make it clearer: Consider a distribution of particles, described by their volume, v. Let the distribution of particle volumes be denoted [math]f(v)[/math]. How that volume distribution changes over time is described by the population balance equation in this form: [math]\frac{\partial f}{\partial t} + \frac{\partial (\dot{V}f)}{\partial v} = h[/math] where [math]\dot{V}[/math] is the growth rate of the particles and h on the right hand side collects all birth and death functions (agglomeration, breakage, nucleation). Let's keep it simple and consider only a birth process of nucleation of particles that nucleate at a size [math]v_n[/math]. In this case, [math]h=\dot{n}(S)\delta(v-v_n)[/math]: [math]\frac{\partial f}{\partial t} + \frac{\partial (\dot{V}f)}{\partial v} = \dot{n}\delta(v-v_n)[/math] The terms mean that a change in the number distribution of particles of size v is due to the combined effects of the number of particles that grow into that size, the number of particles that grow out of that size, and the number of particles at that size that nucleate. Now, let's use a finite volume method (in the population balance literature typically called a sectional method) to solve this equation. That is, we are going to set up a number of bins 1,2,3,...,i, integrate the population balance over the volumes covered by each bin, and solve for the number of particles in each bin as a function of time. Let the volumes of particles in bin i be [math](x_{i-1},x_i)[/math], and let [math]N_i[/math] be the number of particles in each bin. I.e. 
[math]N_i = \int^{x_i}_{x_{i-1}}f(v)dv[/math] So, let's integrate the population balance equation over each bin: [math]\int^{x_i}_{x_{i-1}}\left( \frac{\partial f}{\partial t} \right)dv=\int^{x_i}_{x_{i-1}} \left( -\frac{\partial (\dot{V}f)}{\partial v} + \dot{n}\delta(v-v_n) \right) dv [/math] After doing the integrations, you will end up with a set of equations that looks like: [math]\frac{dN_i}{dt} = G_1(N_{i-1},\dot{V}) - G_2(N_i,\dot{V})[/math] where the G functions represent the growth (I didn't write out all the details, because it gets messy, and fixed-size bins usually yield inaccurate results anyway). They are meant to show that the change in the number of particles in bin i is due to particles from bin i-1 growing into bin i, and particles in bin i growing into the sizes covered by bin i+1. The exception is the smallest bin, where nucleation occurs; it has an equation that looks like: [math]\frac{dN_1}{dt} = - G_2(N_1,\dot{V}) + \dot{n}[/math] There is no smaller bin for particles to grow up from, but particles can nucleate into that size. In this equation, the nucleation, which is described by a delta function, becomes a source in the equation for the number of particles in the smallest bin. Is this totally realistic? No, because particles don't nucleate at just one size. And the whole description above assumes something like a perfectly mixed batch crystallizer -- no inhomogeneity in supersaturation anywhere, no inhomogeneity in the particle size distribution anywhere. But using a delta function to describe the nucleation events is pretty accurate -- accurate enough relative to all the other errors in the simulation, and that's why it works. 
----------------------------------- Let me show another one. I want to start with the convection equation in a fluid: [math]\rho \frac{\partial \phi}{\partial t} + \rho \mathbf{v} \cdot \nabla \phi = \nabla \cdot D \nabla \phi + S [/math] [math]\phi[/math] is the conserved substance -- this could be temperature, concentration of a solute, etc. v is the fluid velocity convecting the conserved quantity, D is the diffusion rate of the conserved quantity, and S is the source or sink of the conserved quantity. Let [math]\phi[/math] be the fluid temperature. Further, consider the flow over a hot-wire anemometer. A hot-wire anemometer is a device inserted into the fluid flow to meter the flow. It works by measuring how much electric current is drawn through it based on the cooling effect of the fluid flow around it. It is a very thin piece of wire, and in some situations (high turbulence) you can ignore its disturbance to the fluid flow. But it is a source of heat. If the above equation were written in 2-D (x & y), and the anemometer were placed in the fluid along the z direction, there would be a heat source in the fluid that is a point source at the given x & y location of the anemometer. That is, the source in the above equation would be [math]S=W\delta(\mathbf{x}-\mathbf{x}_a)[/math] where [math]\mathbf{x}_a[/math] is the location of the wire, and W is the rate of heat coming from the wire. Let's discretize the above equation using finite volumes. I'm not going to go through the details (you can find them in any good Computational Fluid Dynamics (CFD) text), but the space gets broken into little squares. The notation is pretty simple in that each discretized equation is written for the finite volume located at P. The cells along the x axis are labeled E (east) and W (west) -- not to be confused with the heat rate W above -- with W typically being the cell next to P with the smaller x value, and E being the cell next to P with the higher x value. 
Along the y axis, there are N (north) and S (south), with N being higher y and S being smaller y. You integrate the above equation over each cell to create a set of discrete equations: [math]\int^{x_E}_{x_W} \int^{y_N}_{y_S}(eqn)dy dx [/math] The discretized equations look like: [math]a_P\phi_P = a_W\phi_W + a_E\phi_E + a_N\phi_N + a_S\phi_S [/math] where the a's for E, W, N, & S take care of all the diffusion and convection, and a_P takes care of the time derivative. In words: the change of the conserved quantity [math]\phi[/math] in cell P is due to the convection and diffusion through the east face of the cell, the convection and diffusion through the west face, and similarly the north and south faces. The temperature gets convected and diffused around everywhere, but the finite volume containing the anemometer gets an extra term. The discrete equation for that cell looks like: [math]a_P\phi_P = a_W\phi_W + a_E\phi_E + a_N\phi_N + a_S\phi_S + W[/math] because the integrals above were over space, and the heat from the anemometer is treated as a point source that only shows up in one cell. Of all the discrete equations for the temperature across the 2-D solution space, only one of them has a source. That's where the delta function comes in -- to treat the heat source as a point source. ----------------- I showed two different examples where point sources have to be described using the delta function so that, when you discretize the solution space, the sources show up in the discretized equations correctly. This is where they are useful: they simplify the simulation method by putting the source in only one discretized cell. Realistically, this may not be correct -- in real life there is no such thing as a perfect point source -- but it may be good enough for the simulation accuracy to be all right. I hope that this explains what I mean better. 
I have many more examples where they are used all the time, but I don't want to spend the time writing them all out (the above took about an hour as it is) because I think I've explained it in a lot of detail already.
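The "delta source lands in exactly one cell" idea from both examples can be sketched in a few lines. This is a minimal 1-D diffusion example of my own construction (not the crystallizer or anemometer setups above): the source strength W is deposited entirely into the cell containing x_a, exactly as a delta function integrates in a finite volume scheme, and the total conserved quantity grows at exactly the rate W:

```python
import numpy as np

# 1-D diffusion on [0, 1] with zero-flux walls and a point source at x = xa
nx, L = 50, 1.0
dx = L / nx
D, W, xa = 0.01, 2.0, 0.37      # diffusivity, source strength, source location
ia = int(xa / dx)               # index of the cell containing the point source
dt = 0.2 * dx**2 / D            # stable explicit time step
steps = 200

phi = np.zeros(nx)
for _ in range(steps):
    flux = np.zeros(nx + 1)     # face fluxes; wall fluxes stay zero (no leakage)
    flux[1:-1] = -D * (phi[1:] - phi[:-1]) / dx
    phi -= dt * (flux[1:] - flux[:-1]) / dx
    phi[ia] += dt * W / dx      # the delta source lands in exactly one cell

total = phi.sum() * dx          # total conserved quantity
print(total)                    # grows at exactly rate W: total = W * steps * dt
```

Because the face fluxes telescope, diffusion moves the quantity around without creating or destroying any of it; only the single source cell adds to the total, which is the discrete statement of the delta function source.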
-
You didn't actually address any of my points (which is pretty common among the pseudoscientists), but if you "don't have too much time to spend in cyberspace" why are you bothering to post here? Again, why aren't you out demonstrating these things you have discovered and making yourself a boatload of cash? If everything works like you say it does, you will be fabulously wealthy, you won't have to work 40 hours a week, you can sleep 12 hours a day if you want, and pay other people to go to cyberspace and defend the ideas. Why aren't you out there doing it instead of trying to convince a bunch of random strangers on the Internet with unconvincing evidence? You know what would be convincing... actually making machines based on the principles in the book and selling them! Not stories about your concrete block, not stories about some dude who was malnourished, not stories about some magic coral stone thingy in Florida -- actually go out and do it! If it is as easy as you say, you should be able to make quite a lot of money in a very short amount of time. So, quit wasting time posting here and go start that company, make several million bucks, and then come back in 2 years and tell us all "I told you so!".
-
It doesn't really matter if they know what the Dirac delta function is or not: do they use any point sources in their simulations at all? Because those would be represented by delta functions. I know I do: there is a finite time over which a particle hits a solid object or two particles collide, but I still approximate the momentum impulse and energy change by a delta function because it makes the math easier. Particles that form during a nucleation process in a supersaturated solution don't actually appear at a finite size instantaneously; nevertheless, I still use a delta function to represent the nucleation process because the error introduced is negligible. When there is a reaction occurring on a particle or on a wall or anything else smaller than the discrete volume, the heat source and the mass sources and sinks are treated as point sources and sinks -- and hence are delta functions. Anything anywhere that can be treated as a point source is mathematically represented by a delta function.
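The "error introduced is negligible" claim can be illustrated with the sifting property: replace the delta with a narrow Gaussian and the integral of f(x) times the smeared delta converges to f(a) as the width shrinks (my example function, picked arbitrarily):

```python
import numpy as np

def smeared_delta(x, a, eps):
    # narrow Gaussian approximation to delta(x - a), unit area
    return np.exp(-((x - a) / eps) ** 2) / (eps * np.sqrt(np.pi))

a = 0.4
f = lambda x: np.cos(3 * x) + 2

x = np.linspace(-10, 10, 400001)
dx = x[1] - x[0]
for eps in (0.5, 0.1, 0.01):
    integral = np.sum(f(x) * smeared_delta(x, a, eps)) * dx
    print(eps, integral)     # approaches f(a) as eps shrinks
```

This is why an impulse or nucleation event of small but finite duration or size can be swapped for a true delta: the difference in any integral you care about vanishes with the width of the real event.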
-
DH, I said that already. I said you can only treat things as point masses when you are far away.
-
roshan, were the examples and explanations above unclear? And if they were, can you ask more specific questions?
-
You got one thing backwards: the longer the pipe, the more pressure drop there is, and the harder it is to move things pneumatically.
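The standard Darcy-Weisbach relation makes the point quantitative (the relation is standard; all the numbers below are made up for illustration): pressure drop scales linearly with pipe length, so doubling the pipe doubles the pressure you must supply to keep the same flow.

```python
def pressure_drop(length_m, diameter_m=0.1, velocity_ms=10.0,
                  density_kgm3=1.2, friction_factor=0.02):
    """Darcy-Weisbach: dp = f * (L/D) * rho * v**2 / 2 (illustrative numbers)."""
    return friction_factor * (length_m / diameter_m) * density_kgm3 * velocity_ms**2 / 2

print(pressure_drop(50.0))    # pressure drop in Pa for a 50 m run
print(pressure_drop(100.0))   # twice the length -> twice the pressure drop
```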