Cap'n Refsmmat Posted March 29, 2011 I recently performed an experiment in my physics lab course involving the use of a scintillator to count the number of radioactive decays that occur over a certain period. For example, I placed a radioactive sodium-22 source in front of the scintillator and observed a certain number of gamma decays in ten seconds. In my analysis, I need to estimate the uncertainty of this measurement. Now, I know radioactive decay is a random process, so in ten seconds I may observe 107 decays, or 95, or 111. As I count over longer intervals, the statistical fluctuation becomes less significant compared to the total count. My question is: how do I estimate the statistical uncertainty in the gamma ray count? Surely it's 97 ± n, where n is some number that depends on the counting time and decay rate, but I don't know how to calculate it.
swansont Posted March 29, 2011 The counting (sampling) error is [math]\sqrt{N}[/math] (assuming the noise is random), so 97 ± 10. (It's why polls are always crappily reported at about 3%: they sample roughly 1000 people, and [math]1/\sqrt{1000} \approx 3\%[/math], while completely ignoring the large biases in their methods.)
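For anyone who wants to convince themselves of the [math]\sqrt{N}[/math] rule, here is a minimal sketch (Python with NumPy; the rate of 100 counts per interval is purely illustrative, not from the experiment above) that simulates Poisson-distributed decay counts and checks that their spread matches [math]\sqrt{N}[/math]:

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = 100.0  # assumed mean decays per 10 s interval (illustrative only)
counts = rng.poisson(true_rate, size=100_000)  # many simulated 10 s counting runs

print("mean count:         ", counts.mean())            # ~100
print("observed std dev:   ", counts.std())             # ~10
print("sqrt(mean) estimate:", np.sqrt(counts.mean()))   # ~10, matching sqrt(N)

# For a single run that recorded N counts, the usual error bar is sqrt(N):
N = 97
print(f"single measurement: {N} ± {np.sqrt(N):.0f}")
```

The standard deviation of the simulated counts comes out near [math]\sqrt{100} = 10[/math], which is why a single counting measurement is conventionally quoted as N ± [math]\sqrt{N}[/math].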
Cap'n Refsmmat (Author) Posted March 29, 2011 Ah, I see. And since [math]\sqrt{N}[/math] increases more slowly than [math]N[/math], the relative error [math]\sqrt{N}/N = 1/\sqrt{N}[/math] decreases as the total count grows. Thanks.
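To see that scaling concretely, here is a short sketch under the same assumptions as above (a hypothetical source producing about 10 counts per second), counted over progressively longer intervals:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 10.0  # assumed decays per second, purely illustrative

for t in (10, 100, 1_000, 10_000):          # counting time in seconds
    N = int(rng.poisson(rate * t))          # one simulated run of length t
    rel_err = np.sqrt(N) / N                # = 1/sqrt(N)
    print(f"t = {t:6d} s: N = {N:7d}, relative error ≈ {rel_err:.1%}")
```

Each factor of 100 in counting time shrinks the relative error by a factor of 10, exactly the [math]1/\sqrt{N}[/math] behaviour described above.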