Hamster Posted January 27, 2010

Hi, I am not a student, but I felt this question was at such a basic level it could go here. I understand the concept of radioactive decay and obtaining a ratio of parent to decay product, but I feel I am missing something very basic. Let's say you are using uranium-lead to measure an age. Is all the lead in the sample assumed to be decay product? I suppose this is what I get for trying to learn this stuff off the internets.
Sisyphus Posted January 27, 2010 (edited)

From what I gather, it's something like this: material in molten form accumulates uranium but diffuses lead away, so no lead accumulates. This would be determined experimentally for each material. Once it cools and solidifies into crystal form, it becomes a closed system, and lead can begin to accumulate. Thus you have an identifiable "zero point" at which the clock is reset: the point at which the material last dropped below the critical temperature, i.e. the point at which the rock formed.

I know they have methods for determining whether there was contamination, but I don't know what they are. It does make sense, though, that the crystal structure would be a closed system with a uniform ratio of uranium to lead throughout, so deviations would be easy to detect. Also, you can cross-check between different dating methods (like U-235 vs. U-238), so the probability that various points in the same sample are contaminated uniformly by just the right different minerals to give the exact same wrong result for different methods would be ridiculously low.

Edited January 27, 2010 by Sisyphus
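To make the "closed system since the zero point" idea concrete, here is a minimal sketch of the standard decay-clock arithmetic: if no daughter lead was present at closure, the age follows from t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent atom ratio and λ is the decay constant from the published half-life. The half-lives below are the standard values for U-238 and U-235; the measured ratios are made-up numbers chosen only to illustrate the cross-check Sisyphus mentions (two independent clocks giving the same age).

```python
import math

# Decay constants (per year) from the standard half-lives:
# U-238 -> Pb-206, ~4.468e9 yr; U-235 -> Pb-207, ~7.04e8 yr
LAMBDA_U238 = math.log(2) / 4.468e9
LAMBDA_U235 = math.log(2) / 7.04e8

def age_from_ratio(daughter_per_parent, decay_constant):
    """Age of a closed system from the measured daughter/parent atom ratio.

    Assumes no daughter was present at closure (the 'zero point' above),
    so t = ln(1 + D/P) / lambda.
    """
    return math.log(1.0 + daughter_per_parent) / decay_constant

# Hypothetical measured ratios for a single crystal (illustrative numbers only):
pb206_per_u238 = 0.17
pb207_per_u235 = 1.68

t1 = age_from_ratio(pb206_per_u238, LAMBDA_U238)
t2 = age_from_ratio(pb207_per_u235, LAMBDA_U235)

print(f"U-238/Pb-206 age: {t1 / 1e9:.2f} Gyr")
print(f"U-235/Pb-207 age: {t2 / 1e9:.2f} Gyr")
# Both clocks give ~1.0 Gyr here. Agreement between the two independent
# decay chains is the cross-check described above; a mismatch would flag
# initial lead, lead loss, or contamination.
```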