
Posted

Hi guys, just doing some number crunching on some "data sets" that I'm creating using the random function (in Scilab). The values vary from 1000 to 1100, and I'm applying some basic logic to them. What strikes me as odd is that although I've done extensive tests, the results always give a similar mean; not a single run has thrown out, say, numbers over 1050 even 20 times in a row. I know it's the law of probability, but shouldn't I see some freak results at least once?

I would prefer not to apply my own logic to get irregularities if possible, but so far the "randomness" is pretty predictable.

Posted (edited)


It depends on the algorithm your tool uses to carry out the randomization. If the algorithm is developed correctly, then the likelihood of getting "freak" results should diminish.

Edited by Unity+
Posted (edited)

The larger the sample sizes (data sets), the more convergent your results will be.
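
A minimal sketch of that point, assuming the standard C rand() for illustration (the thread is about Scilab, but the behaviour is generic): the mean of N uniform draws from [1000, 1100] settles closer and closer to 1050 as N grows.

    /* Convergence sketch: print the sample mean of N uniform draws
       from [1000, 1100] for increasing N. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (long n = 10; n <= 1000000; n *= 10) {
            double sum = 0.0;
            for (long i = 0; i < n; i++)
                sum += 1000.0 + 100.0 * rand() / RAND_MAX;
            printf("N = %7ld   mean = %.2f\n", n, sum / n);
        }
        return 0;
    }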

Edited by MonDie
Posted (edited)

If it was random, there should be the same chance of getting each number. Have you graphed the frequency of each number?

I ran a macro looking at the random generator in Excel 2007, and over 300,000 events each number 0-9 was chosen very evenly.

"4" seemed to be a little light, coming in 259 below the average of 30,000 per number.

Over the next 300,000 throws it caught up.
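
For reference, a rough C equivalent of that kind of frequency check (not the original Excel macro, just a sketch of the same idea with the standard rand()):

    /* Tally how often each digit 0-9 comes up over 300,000 draws.
       rand() % 10 has a tiny modulo bias, which is negligible here. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        long counts[10] = {0};
        const long draws = 300000;

        srand((unsigned)time(NULL));          /* seed so each run differs */
        for (long i = 0; i < draws; i++)
            counts[rand() % 10]++;            /* map each draw to a digit */

        for (int d = 0; d < 10; d++)
            printf("%d: %ld (deviation from 30000: %+ld)\n",
                   d, counts[d], counts[d] - draws / 10);
        return 0;
    }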

Edited by Robittybob1
Posted (edited)

I have no idea what Scilab is, but in C/C++ the rand() function returns the same sequence of results every run. You have to seed it with srand(time(NULL)) or something similar to get genuinely different pseudo-random values on every run.
http://www.cplusplus.com/reference/cstdlib/rand/

http://www.cplusplus.com/reference/cstdlib/srand/

 

The typical pattern is to use rand() together with srand(), so try searching for how to feed srand() (the name is short for "seed random").

http://en.wikipedia.org/wiki/Random_seed
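
A minimal C sketch of that seeding point (assuming the standard C library rand()/srand(); Scilab's rand() has its own seeding mechanism, see the link below): with the srand() call removed, the program prints exactly the same numbers on every run.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* Remove this line and every run prints an identical sequence. */
        srand((unsigned)time(NULL));

        for (int i = 0; i < 5; i++)
            printf("%d\n", 1000 + rand() % 101);   /* values in [1000, 1100] */
        return 0;
    }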


Maybe this will help a bit

https://help.scilab.org/docs/5.3.3/en_US/rand.html


Description "Warning: without a seed, the sequence will remain the same from a session to the other."

Edited by Sensei
Posted

According to a PDF I googled (http://turing.une.edu.au/~amth142/Lectures/Lecture_13.pdf), Scilab's rand() function, which I assume you may be using, is a linear congruential random number generator. This type of generator is usually considered unsuitable for scientific use. I would not be too surprised if such a generator were technically incapable of throwing "heads" twenty times in a row. If you feel you need a better random number generator, use a better random number generator.
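
For illustration, here is a generic linear congruential generator in C. The multiplier and increment are the widely quoted "Numerical Recipes" constants, not necessarily the ones Scilab uses; the point is only that each output is a fixed function of the previous one, so the entire sequence is determined by the seed.

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t lcg_state = 12345u;   /* the seed */

    static uint32_t lcg_next(void)
    {
        /* x(n+1) = (1664525 * x(n) + 1013904223) mod 2^32,
           the modulus coming from unsigned 32-bit overflow */
        lcg_state = 1664525u * lcg_state + 1013904223u;
        return lcg_state;
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            /* scale into the 1000..1100 range discussed in this thread */
            printf("%u\n", (unsigned)(1000u + lcg_next() % 101u));
        return 0;
    }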

Posted

If it was random, there should be the same chance of getting each number.

 

That depends on the frequency distribution of the random number generator. Some might have a Gaussian distribution, which would greatly reduce the probability of extreme values.
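
A small C sketch of that point, using the standard rand() as a stand-in: values built by averaging several uniform draws cluster around the middle of the range (roughly bell-shaped), so extremes near 1100 become far rarer than with a single uniform draw.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const long draws = 100000;
        long uniform_extreme = 0, averaged_extreme = 0;

        srand((unsigned)time(NULL));
        for (long i = 0; i < draws; i++) {
            /* one uniform draw from [1000, 1100] */
            double u = 1000.0 + 100.0 * rand() / RAND_MAX;

            /* average of 12 such draws: same mean, much thinner tails */
            double sum = 0.0;
            for (int k = 0; k < 12; k++)
                sum += 1000.0 + 100.0 * rand() / RAND_MAX;
            double g = sum / 12.0;

            if (u > 1090.0) uniform_extreme++;
            if (g > 1090.0) averaged_extreme++;
        }
        printf("uniform draws above 1090:  %ld of %ld\n", uniform_extreme, draws);
        printf("averaged draws above 1090: %ld of %ld\n", averaged_extreme, draws);
        return 0;
    }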

Posted (edited)

Well, the high and low always get very close to 1000 and 1100 after 100 runs too; I've not seen a run with a low higher than 1010. The interface for Scilab is pretty poor for debugging purposes, but it's a very easy language to start doing math in quickly. I can use Python, Java and C, but I presumed this issue may be generic.

 

I can apply my own logic to the random numbers to get the freak results I require, but it just seemed strange how, over a large set of data, the random function seems somewhat predictable.

Edited by DevilSolution
Posted (edited)

Maybe it would help if you could specify what you are doing and what the "freak results" you expect are supposed to be. I did a check with the C++ random number generator to test whether a linear congruential generator can possibly create a series of 20 subsequent "tails" on a coin toss ("heads" and "tails" having been defined as the random number being in the upper or lower half of [0; RAND_MAX], respectively). I had my doubts. The little experiment indicates that the frequencies with which n subsequent "tails" appear are what you'd expect from basic probability theory (P(n+1)/P(n) = 1/2). So in this respect the linear congruential random number generator you are likely using seems sufficient.
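
For completeness, a rough reconstruction of that kind of run-length check in C (not the original code; exact counts will differ from run to run):

    /* Count how often maximal runs of n identical "coin" outcomes appear,
       classifying each rand() value as heads or tails depending on whether
       it falls in the upper or lower half of [0, RAND_MAX]. Each extra
       flip in a run should roughly halve the count. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const long draws = 10000000;
        long run_counts[25] = {0};   /* run_counts[n] = completed runs of length n */
        int prev = -1;
        int run = 0;

        srand((unsigned)time(NULL));
        for (long i = 0; i < draws; i++) {
            int flip = rand() > RAND_MAX / 2;   /* 1 = "heads", 0 = "tails" */
            if (flip == prev) {
                run++;
            } else {
                if (run >= 1 && run < 25)
                    run_counts[run]++;
                run = 1;
                prev = flip;
            }
        }
        if (run >= 1 && run < 25)
            run_counts[run]++;                  /* record the final run */

        for (int n = 1; n < 25; n++)
            if (run_counts[n] > 0)
                printf("runs of length %2d: %ld\n", n, run_counts[n]);
        return 0;
    }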

 

Btw.: the probability of not seeing a number lower than 1010 in n random numbers uniformly drawn from [1000; 1100] is roughly 0.9^n. The approximate number of runs you have to perform before seeing such a freak result is therefore 1/(0.9^n). In other words, if your sequence consists of n=100 numbers, you'd have to look at ~40000 runs to see the freak result "no number lower than 1010"; it is therefore expected not to see such a result in only 100 runs.
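
A quick numerical check of that estimate as a small C snippet:

    /* Probability that all 100 uniform draws from [1000, 1100] are >= 1010,
       and the expected number of 100-draw runs needed to see that once. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p_single = 90.0 / 100.0;          /* P(one draw >= 1010) */
        double p_run = pow(p_single, 100.0);     /* all 100 draws >= 1010 */
        printf("P(freak run)         = %g\n", p_run);        /* ~2.66e-05 */
        printf("expected runs needed = %g\n", 1.0 / p_run);  /* ~37600    */
        return 0;
    }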

 

In case I did not mention it: I strongly suggest being much more detailed/precise about what you are looking at. My feeling is that your questions/issues are very basic and that a lot of people in this forum could provide helpful comments if they knew what you are doing.

 

EDIT: As a remark: it is not necessarily the random function that is predictable. Random numbers are surprisingly predictable in large amounts, at least for some observables. This is why casinos and (some) lotteries work. More scientifically, a large fraction of our scientific theories are based on the predictability of randomness (statistical physics/thermodynamics and many data-analysis techniques for scientific experiments).

Edited by timo
  • 1 year later...
Posted

I believe it is highly improbable that a computer can create truly random numbers; all it has is its specific algorithm, which can be tailored to make the probabilities fit, or altered so that it gives less deterministic-looking results that are still widely spread with an ordinary deviation overall.

 

Essentially the only thing that can be used outside of the computer's logic is time, and unless you specifically read the time for each use of the randomization, even time is constructed from a logical input; hence the results are just a product of that logic and not actual randomness.

 

I know it's a strangely late update to a dead topic, but working within various languages I've still found nothing that is exclusively random, only deterministically random in general. If only one could mathematically define an equation for time.

 

Monte Carlo simulation brings some light to the un-probabilistic world we live in, but it is only brute-force randomness in computing terms.

 

I suppose you could always use a set of digits from pi, or the primes, but then the method by which the primes or pi are calculated becomes evident. I can only surmise that some self-altering morphology would have to be applied to a non-random series in order to seed any method.

Posted

Yeah, anything external can work to varying degrees.

 

You're not going to find a function that can do it, though; the outputs have to be independent of each other.
