

Posted

Correct me if my thinking is wrong. I have a series of independent events; in each of those the probability of a desired outcome is 0.1, for example. I need to find out the probability of getting a desired outcome at least once in a random series of n events.

 

It's been a while since I've done any probability theory. The only way of not getting a desired outcome is if all n events produce a non-desired outcome. The probability of a non-desired outcome in my example is (1 - 0.1) = 0.9, therefore failing n times has a probability of 0.9^n, and the probability of at least one desired outcome is then (1 - 0.9^n).
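As a quick sanity check (taking, say, n = 10, so the formula gives 1 - 0.9^10 ≈ 0.651), a short simulation seems to agree:

[code]
import random

p = 0.1           # probability of the desired outcome in one event
n = 10            # example number of events
trials = 100_000  # Monte Carlo repetitions

exact = 1 - (1 - p) ** n   # closed form: 1 - 0.9^n

# count runs of n events that contain at least one desired outcome
hits = sum(any(random.random() < p for _ in range(n)) for _ in range(trials))

print(exact)          # 0.6513...
print(hits / trials)  # comes out close to 0.65
[/code]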

 

a) Is this correct?

 

b) If the events are entirely independent, will the answer change depending on whether I take n successive events or n random events?

 

Posted

a.) Yes

 

b.) Not sure what you mean. The events are successive and random (and independent), that is, independent and identically distributed (i.i.d.), aren't they? Or do you mean that they are no longer independent, or no longer identically distributed?

Posted (edited)

a.) Yes

 

b.) Not sure what you mean. The events are successive and random (and independent), that is, independent and identically distributed (i.i.d.), aren't they? Or do you mean that they are no longer independent, or no longer identically distributed?

 

Thanks for the help. In the second question I'm talking about a situation where, let's say, I have a sample of 1000 events like in the original post. Would it matter in any way whether I take a series of consecutive events, say events 1-10, or a random sample of 10 events, for example #34, #121, #357, and so on?

Edited by pavelcherepan
Posted

Ah, I see. Essentially you are sampling from a population and wish to know whether taking sequential samples or random samples leads to different results. Interesting problem; I'd never thought about it.

 

I don't think it would make any difference, given that the original sequence is i.i.d. The original sequence is a Bernoulli process, and if you mix up the order of that process you still have a Bernoulli process. So if the [latex]X_i[/latex] are i.i.d. Bernoulli random variables, then the sequence [latex]X_1, X_2, X_3[/latex] has the same joint distribution as [latex]X_2, X_3, X_1[/latex]. I'd have to think about a more formal proof, but it'll give me something to procrastinate on later.
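If you want a numerical hint rather than a proof, here is a rough sketch (using the 0.1 probability from the first post and an arbitrary population of 1000 events): it estimates the chance of at least one success among the first 10 events versus among 10 randomly chosen events, and the two estimates come out essentially the same.

[code]
import random

p, N, k, reps = 0.1, 1000, 10, 20_000
seq_hits = rand_hits = 0

for _ in range(reps):
    # one realisation of the i.i.d. Bernoulli population
    pop = [random.random() < p for _ in range(N)]
    seq_hits  += any(pop[:k])                # the first k consecutive events
    rand_hits += any(random.sample(pop, k))  # k events chosen at random

# both should be close to 1 - 0.9**10 ≈ 0.65
print(seq_hits / reps, rand_hits / reps)
[/code]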

  • 2 weeks later...
Posted

Reminds me of a lottery system that I thought up when lotteries were still fairly new, one that could have represented an edge at times when the pool became large enough to make it worth playing.

 

The problem with large pools is that the chance of multiple winners could reduce your winnings considerably. In that case you should play only numbers that no one else would ever consider playing, so that you're less likely to have to split the winnings. For example, picking 1, 2, 3, 4, 5, 6 has the same odds of winning as playing 6 random numbers, but no one thinks that's true.
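(As a worked number for a hypothetical 6-from-49 draw: any single ticket, sequential or not, wins with probability [latex]1/\binom{49}{6}[/latex], i.e. about 1 in 13,983,816; the choice of numbers only affects how likely you are to share the jackpot, not how likely you are to hit it.)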

 

However, once I started telling people about the strategy, it was no longer a valid strategy. I read an article with the same idea about a year after I first started telling people about it to prove the point.

Posted

However, once I started telling people about the strategy, it was no longer a valid strategy.

 

Why not?

And I'm not entirely convinced it's true. Obviously, random numbers are picked more frequently than sequential numbers, but that's not what we are talking about here.

 

We are talking about the sequence 1-6 being more or less frequently picked than any specific ''random'' sequence. I'm not really convinced that the numbers 4, 7, 14, 15, 22, 27 are more commonly picked than the numbers 1, 2, 3, 4, 5, 6. Or that any ''random'' sequence is less frequently picked than those numbers.

Posted

 

However, once I started telling people about the strategy, it was no longer a valid strategy.

Why not?
Because once a unique strategy gets out, it's no longer unique. Just one other player using the same strategy cuts its value in half, since you're guaranteed to split two ways.

 

And I'm not entirely convinced it's true. Obviously, random numbers are picked more frequently than sequential numbers, but that's not what we are talking about here.

 

We are talking about the sequence 1-6 being more or less frequently picked than any specific ''random'' sequence. I'm not really convinced that the numbers 4, 7, 14, 15, 22, 27 are more commonly picked than the numbers 1, 2, 3, 4, 5, 6. Or that any ''random'' sequence is less frequently picked than those numbers.

It has to do with original thinking.

 

Free will is not expressed as a random series of events; choices are a completely biased series of events based on human nature, which in turn is biased by human experience. That experience creates intuitive conceptions which fail whenever the real context falls outside of our experience.

 

If nobody ever thought of that strategy as valid, then no one would pick such a series of numbers, because they would think it was guaranteed to fail.

 

Back then, it was a real chore explaining that concept to anyone. It's easier to explain today because people have likely had some experience in their lifetime that involved large numbers.

 

The real question is whether the idea was original or not. Since it was just after lotteries became legally run by the states, and nobody thought about large numbers back then except for some mathematicians, who would never consider playing the lottery even in a hypothetical, I'm pretty certain it was an original idea. I could have been wrong, but I doubt it.

Posted

If nobody ever thought of that strategy as valid, then no one would pick such a series of numbers, because they would think it was guaranteed to fail.

 

...

The real question is whether the idea was original or not.

 

Surely, that's not the only factor in this equation. Understanding that any combination of numbers has the same odds of being drawn as any other combination is enough to disprove your point.

People might have picked the numbers 1-6 not because they thought no one else would, but simply because they were too lazy to do anything else, since the odds are always the same.

 

So they didn't even have to think about sharing the prize to pick the ''obvious'' sequential combinations. Your premise is valid, but not for the numbers 1-6; it's too obvious for that. Perhaps a less obvious combination would work better, one that would be picked neither by people choosing random numbers nor by people choosing sequential ones. Something like 7, 8, 9, 10, 11, 12 sounds better than 1-6.

Posted (edited)

Surely, that's not the only factor in this equation. Understanding that any combination of numbers has the same odds of being drawn as any other combination is enough to disprove your point.

People might have picked the numbers 1-6 not because they thought no one else would, but simply because they were too lazy to do anything else, since the odds are always the same.

 

So they didn't even have to think about sharing the prize to pick the ''obvious'' sequential combinations. Your premise is valid, but not for the numbers 1-6; it's too obvious for that. Perhaps a less obvious combination would work better, one that would be picked neither by people choosing random numbers nor by people choosing sequential ones. Something like 7, 8, 9, 10, 11, 12 sounds better than 1-6.

Clearly you've never played poker. Human nature is not unpredictable; it's extremely predictable, and nobody played the lottery like that when it first got started.
Edited by TakenItSeriously
Posted

And how do you know that? The two friends who played the lottery told you so?

 

Also, I will repeat: of course the number of people who play random numbers is far greater than the number of people who play sequential numbers, but you need to compare a specific ''random'' sequence to your 1-6, not all random sequences together. Pick any random sequence for the lottery. Do it twice more. Are you sure those exact combinations were picked less frequently than your 1-6 sequence?

Posted

And how do you know that? The two friends who played the lottery told you so?

 

Also, I will repeat: of course the number of people who play random numbers is far greater than the number of people who play sequential numbers, but you need to compare a specific ''random'' sequence to your 1-6, not all random sequences together. Pick any random sequence for the lottery. Do it twice more. Are you sure those exact combinations were picked less frequently than your 1-6 sequence?

At that time, yes, I'd say that the chances were zero percent that anyone would have picked any well-ordered sequence of numbers rather than a more chaotic sequence, which was due to the misperception of what people understood about random patterns back then.

 

As I said before, human nature makes people much more predictable than you think at certain things, and sometimes it's not about what most people would pick but about what no one would ever have picked, which has more to do with psychology than probability.

 

The human psyche has evolved to look for patterns in non-random events, not random patterns, so people naturally assumed that random meant the absence of any kind of pattern.

Posted

At that time, yes, I'd say that the chances were zero percent that anyone would have picked any well-ordered sequence of numbers rather than a more chaotic sequence, which was due to the misperception of what people understood about random patterns back then.

 

Claiming that zero percent of people used sequential combinations of numbers is a bit preposterous. How can you be sure of that?

 

Of course, I would agree with you that more people use that now than in those days, but that doesn't say much and doesn't quantify the statistic.

 

 

which was due to the misperception of what people understood about random patterns back then.

 

I struggle to believe that no one understood this at that point. There are people who don't understand this today, but they don't represent all of the lottery players. This is something that is entirely logical and doesn't need to be taught. One could understand it without having been taught probability theory.

 

Again, I'm not claiming that a significant portion of people played sequential numbers (specifically 1-6, since you brought it up); I'm saying that it isn't clear to me that this sequence was picked less often than any other isolated ''random'' sequence, or most of them for that matter.

Posted

 

Again, I'm not claiming that a significant portion of people played sequential numbers (specifically 1-6, since you brought it up); I'm saying that it isn't clear to me that this sequence was picked less often than any other isolated ''random'' sequence, or most of them for that matter.

It was just an example, not a totality.

 

For example, picking 1, 2, 3, 4, 5, 6 has the same odds of winning as playing 6 random numbers, but no one thinks that's true.

Posted

Correct me if my thinking is wrong. I have a series of independent events; in each of those the probability of a desired outcome is 0.1, for example. I need to find out the probability of getting a desired outcome at least once in a random series of n events.

 

It's been a while since I've done any probability theory. The only way of not getting a desired outcome is if all n events produce a non-desired outcome. The probability of a non-desired outcome in my example is (1 - 0.1) = 0.9, therefore failing n times has a probability of 0.9^n, and the probability of at least one desired outcome is then (1 - 0.9^n).

 

a) Is this correct?

 

b) If the events are entirely independent, will the answer change depending on whether I take n successive events or n random events?

 

 

Hello pavel, good to see you back again; you often add something useful to threads.

 

:)

 

With regard to part (b) of your question, the answer is not so simple.

 

It depends upon what you are sampling.

 

Here are two examples.

 

1) You have divided a cornfield into squares for testing yield. Placing all the tests sequentially in one corner may hit fertile or stony ground.

 

2) You are a buyer checking a box of apples for % rotten apples. A rotten apple is much more likely to infect neighbours than remote apples.

 

In both of these cases, sampling sequentially may well produce different results from a proper spread of testing.

 

Sampling theory developed from real-life situations like these, and it is obviously important to ensure that the sample is as representative as possible of the whole sampled population.

This is why sampling theory is a huge area in its own right.
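As a toy illustration of the apple example (a made-up model, not real sampling practice: rot occurring in contiguous runs along a line of 1000 apples), consecutive samples give a much noisier estimate of the rot rate than a random spread:

[code]
import random

N = 1000      # apples laid out in a single line for simplicity
k = 20        # sample size
runs = 10     # hypothetical number of contiguous rot patches
run_len = 10  # apples per patch

def make_box():
    box = [False] * N
    for _ in range(runs):
        start = random.randrange(N - run_len)
        for i in range(start, start + run_len):
            box[i] = True   # a contiguous run of rotten apples
    return box

seq_rates, rand_rates = [], []
for _ in range(5000):
    box = make_box()
    seq_rates.append(sum(box[:k]) / k)                 # k consecutive apples
    rand_rates.append(sum(random.sample(box, k)) / k)  # k apples spread at random

def sd(xs):  # crude standard deviation of the estimates
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# both estimators are roughly unbiased, but the consecutive one is far noisier
print(sd(seq_rates), sd(rand_rates))
[/code]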

Posted

 


 

 

"2) You are a buyer checking a box of apples for % rotten apples. A rotten apple is much more likely to infect neighbours than remote apples."

 

I find this very intriguing. You are right of course, but the upshot is complicated. A corner apple has only three neighbours which might turn it rotten, while a centre apple has eight, and so on, assuming a rectangular array. Allowing for different numbers of generations of infection, you can contrast the odds of a centre apple remaining fresh, given one rotten apple in the box, with the odds of a corner apple remaining fresh; as the number of generations grows, this comparison moves from the same, to the centre apple more likely to be rotten, and back to the same again.

 

I know it will have been done before, but when I have a decent PC in front of me I think a little analysis and some Monte Carlo or bust is called for.
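For what it's worth, a bare-bones Monte Carlo along those lines might look like this (purely a sketch with made-up parameters: a 5 x 5 rectangular array, one randomly placed rotten apple to start, and an assumed 30% chance per generation of infecting each 8-connected neighbour):

[code]
import random

ROWS, COLS = 5, 5   # hypothetical box layout
P_INFECT = 0.3      # assumed per-neighbour infection chance per generation
GENERATIONS = 3
TRIALS = 50_000

def neighbours(r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
                yield r + dr, c + dc

corner_fresh = centre_fresh = 0
for _ in range(TRIALS):
    rotten = {(random.randrange(ROWS), random.randrange(COLS))}  # one rotten apple
    for _ in range(GENERATIONS):
        new = set()
        for (r, c) in rotten:
            for nb in neighbours(r, c):
                if nb not in rotten and random.random() < P_INFECT:
                    new.add(nb)
        rotten |= new
    corner_fresh += (0, 0) not in rotten   # corner apple: 3 neighbours
    centre_fresh += (2, 2) not in rotten   # centre apple: 8 neighbours

print(corner_fresh / TRIALS, centre_fresh / TRIALS)
[/code]

Varying GENERATIONS is where the comparison described above should show up.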
