
Posted

Let's take a classical example of an urn containing red and green balls. Balls are drawn from it without being returned. If you know that the urn holds r red balls out of a total of n, then the probability that a ball drawn at random is red is r/n.

 

Now let's assume you know nothing about the ratio of red and green balls in the urn. In that case all 101 possible values of r are equiprobable, and the subjective probability of drawing a red ball is 0.5.

 

But suppose you have drawn the first ball and it proved to be red: what is the chance that the next one will be red as well? I suppose it would be more than 0.5, since drawing a red ball somewhat increases the chance that there are more red balls in the urn. For example, if you've drawn 10 balls and all of them turned out red, then you become almost sure that there are more red balls in the urn, and the chance of drawing another red ball gets substantially higher than 0.5.
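That intuition can be checked exactly. A minimal sketch (assuming n = 100 balls, a uniform prior over the number of reds r, and drawing without replacement; exact arithmetic via fractions):

```python
from fractions import Fraction

def prob_next_red(n, k):
    """P(draw k+1 is red | first k draws were all red), with a uniform
    prior over r in {0, ..., n} and sampling without replacement."""
    num = den = Fraction(0)
    for r in range(n + 1):
        # Likelihood of k reds in a row given r reds: prod (r-i)/(n-i)
        like = Fraction(1)
        for i in range(k):
            like *= Fraction(r - i, n - i)
        den += like                           # uniform prior weight cancels
        num += like * Fraction(r - k, n - k)  # ...and then one more red
    return num / den

print(prob_next_red(100, 1))    # 2/3
print(prob_next_red(100, 10))   # 11/12
```

With a uniform prior this reproduces Laplace's rule of succession: after k reds in a row the predictive probability is (k + 1)/(k + 2), independent of n, so ten red draws push it to 11/12 ≈ 0.92.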

 

What I'd like to ask is how to quantify such things; that is, is there a way to calculate how the subjective probability changes when you get additional evidence? I suppose Bayes' theorem is applicable here, and I know how to apply it to calculate probabilities of discrete events, but how do I calculate the changes in subjective probability distributions (like the distribution of the probable number of red balls in my example)?

 

Thank you in advance.

Posted
Let's take a classical example of an urn containing red and green balls. Balls are drawn from it without being returned. If you know that the urn holds r red balls out of a total of n, then the probability that a ball drawn at random is red is r/n.

 

Now let's assume you know nothing about the ratio of red and green balls in the urn. In that case all 101 possible values of r are equiprobable, and the subjective probability of drawing a red ball is 0.5.

 

Where did you get this 101 number from?

 

Without any knowledge of the ratio whatsoever, any guess is as good as any other. There is nothing mathematics can tell you.

 

But suppose you have drawn the first ball and it proved to be red: what is the chance that the next one will be red as well? I suppose it would be more than 0.5, since drawing a red ball somewhat increases the chance that there are more red balls in the urn. For example, if you've drawn 10 balls and all of them turned out red, then you become almost sure that there are more red balls in the urn, and the chance of drawing another red ball gets substantially higher than 0.5.

 

What I'd like to ask is how to quantify such things; that is, is there a way to calculate how the subjective probability changes when you get additional evidence? I suppose Bayes' theorem is applicable here, and I know how to apply it to calculate probabilities of discrete events, but how do I calculate the changes in subjective probability distributions (like the distribution of the probable number of red balls in my example)?

 

Thank you in advance.

 

Bayes' theorem is one way to go. But the experiment as you described it is much harder to analyze: you have described sampling without replacement. Sampling with replacement is much easier, since the probabilities you are trying to discover don't change over time. In a similar way, it is probably easier to learn how to play blackjack if the deck is shuffled after each hand. That way you always know what the probabilities are, and you don't have to count cards.
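For the with-replacement case, the update really is a one-liner thanks to conjugacy. A sketch, assuming a uniform Beta(1, 1) prior on the unknown red fraction p (the function name is illustrative):

```python
def beta_update(reds, greens, a=1.0, b=1.0):
    """Sampling WITH replacement: a Beta(a, b) prior on the red fraction p
    updates to Beta(a + reds, b + greens). Returns P(next ball is red),
    the posterior predictive mean."""
    a_post, b_post = a + reds, b + greens
    return a_post / (a_post + b_post)

print(beta_update(10, 0))   # ~0.917: ten reds in a row
print(beta_update(5, 5))    # 0.5: the evidence balances out
```

Each observed ball just increments one of the two counts; the probabilities being estimated never change, which is what makes this case so much easier.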

 

The biggest thing is that with no prior knowledge whatsoever about an experiment, any guess is as good as any other. Mathematics has nothing to say if you give it no information.

Posted
Where did you get this 101 number from?

 

Sorry, that was my mistake. I meant n+1 possible values; with 100 balls in the urn that gives 101 (0, 1, 2, ..., 100).

 

Bayes' theorem is one way to go. But the experiment as you described it is much harder to analyze: you have described sampling without replacement. Sampling with replacement is much easier, since the probabilities you are trying to discover don't change over time.

 

Replacement doesn't matter much in my example. You can imagine an urn with a very large number of balls, so that replacing a few of them wouldn't significantly influence the probabilities.

 

The biggest thing is that with no prior knowledge whatsoever about an experiment, any guess is as good as any other. Mathematics has nothing to say if you give it no information.

 

You are right, at the beginning there is no information. But information comes in as you begin to draw balls. Imagine you take a shuffled pack of cards, draw a dozen of them, and all turn out to be spades: then your subjective probability of drawing a spade should be more than just 1/4, since you have a strong reason to suspect that spades make up more than 1/4 of the pack. My question was: how do you determine and calculate such probabilities?
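The card example can be handled the same way: put a prior on the number of spades s in the pack and update it with the likelihood of the draws observed. A sketch with a uniform prior over s (a real shuffled pack would justify a prior concentrated at s = 13, but the mechanics are identical):

```python
from math import comb

def posterior_over_spades(spades_seen, cards_drawn, deck=52):
    """Posterior P(s | data) over the number of spades s in the pack,
    uniform prior, drawing without replacement (hypergeometric likelihood)."""
    weights = [comb(s, spades_seen) * comb(deck - s, cards_drawn - spades_seen)
               for s in range(deck + 1)]
    total = sum(weights)
    return [w / total for w in weights]

post = posterior_over_spades(12, 12)   # a dozen draws, all spades
# Values of s below 12 are now impossible; the mass piles up at large s.
```

The returned list is exactly the "subjective probability distribution" being asked about: it starts flat and reshapes itself after every draw.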

Posted

Well, you said you know how to apply Bayesian analysis. Any first guess is as good as any other when no information is present. As more and more data comes in, your guess should carry less and less weight, and the actual results more and more.
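That shift from prior to data is visible in Laplace's rule of succession, which is what the Bayesian update reduces to under a uniform prior (a sketch):

```python
def rule_of_succession(reds, draws):
    """Posterior predictive P(next red) after seeing `reds` red balls in
    `draws` draws, starting from a uniform prior over the red fraction."""
    return (reds + 1) / (draws + 2)

print(rule_of_succession(0, 0))       # 0.5: no data, pure prior
print(rule_of_succession(7, 10))      # ~0.667: pulled toward the prior
print(rule_of_succession(700, 1000))  # ~0.700: the data dominates
```

With little data the estimate sits near the prior's 0.5; as the sample grows it converges to the empirical frequency, which is exactly the re-weighting described above.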

 

Maybe some of the books on Bayesian analysis could help. Amazon has several listed, including one called Subjective Probability: The Real Thing by Richard Jeffrey. If your library doesn't have it, they should be able to get it via interlibrary loan.
