Everything posted by TakenItSeriously
-
OK, fine, so when he pushes down on one side he is effectively imparting some of his own mass to that side, so after removing his hand: m₁ = m₂ ∴ y₁ = y₂.

I should add that a sine and cosine function should really be involved with the m and y variables, since the lever equation uses the perpendicular force while the movement is along an arc and the force of gravity is vertical. The fact remains, though, that when both masses and both lever lengths are equal, the y offsets must be equal.

Edit to add: Not that it matters, because there are only three terms on either side of the equation in this problem, but I thought I should check what the accepted equation for a balance was. After googling "balance equation" I found nothing but accounting references, and after googling "balance paradox" I was surprised to see some hits that seemed to present a paradox without any explanation. Hasn't anyone ever formalised an equation for the balance yet?
-
Ahh, I think I understand. You are wondering what allows the two sides to communicate and find an equilibrium point.

The paradoxical effect is created by a common mistake: forgetting about changes that we assume subconsciously due to repeatedly cancelling out L. We aren't aware of anything our subconscious does, which is why I call it a hidden assumption that the professor makes without even knowing it.

So the original problem is two levers that meet at the focal point with forces applied at each end:

M₁ × L₁ × Y₁ = M₂ × L₂ × Y₂

where M₁, M₂ are the two masses, L₁, L₂ are the horizontal distances from each mass to the focal point in the middle, and Y₁, Y₂ are the vertical offsets of each plate.

However, instead of writing the problem as shown above, he subconsciously cancels out L without even being aware that he did it, which is why I call it a hidden assumption:

M₁ × Y₁ = M₂ × Y₂

This form seems weird: it is supposed to model two masses that balance at equilibrium, yet the plates seem to be hovering in mid air and the information has no mechanism to cross from one plate to the other.

Another form of mistake that follows from hidden assumptions: the hidden L may cause a person to ignore the offset Y, since it no longer seems relevant without the crossbar. So you end up with M₁ = M₂. The illusion is completed because the absent-minded professor isn't aware that he just made these mistakes, but students are left to wonder if he's being serious about the paradox where unequal masses seem to be equal.

To answer the original question, the information travels across the crossbar to get to the other side as infinitesimal stresses that exist across the crossbar.
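Spelling out the cancellation chain described above (this just restates the post's own equation and assumptions; it is not the textbook torque-balance form):

\[ M_1 L_1 Y_1 = M_2 L_2 Y_2 \;\xrightarrow{\;L_1 = L_2\;}\; M_1 Y_1 = M_2 Y_2 \;\xrightarrow{\;M_1 = M_2\;}\; Y_1 = Y_2 \]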
-
Finding large Primes using Standing Wave Harmonics
TakenItSeriously replied to TakenItSeriously's topic in Mathematics
While I believe it is possible to factorize large semiprimes more efficiently than can currently be done, I have no interest in breaking security protocols, at least not until companies stop relying on prime-factor-based security. -
It was just an example, not a totality.
-
In the video, they mentioned that the series was used in 26-dimensional string theory as well, which is why I speculated that infinity could be realized in reality but requires extra dimensions. FWIW, certain aspects of infinity are included in my own interpretation of GR, which involves falling through the EH of BHs at the SoL.
-
At that time, yes, I'd say the chances were zero percent that anyone would have picked a well-ordered sequence of numbers over a more chaotic sequence, which was due to the misperception of what people understood about random patterns back then. As I said before, human nature makes people much more predictable than you think at certain things, and sometimes it's not about what most people would pick but about what no one would ever have picked, which has more to do with psychology than probability. The human psyche has evolved to look for patterns in non-random events, not random patterns, so people naturally assumed that random meant the absence of any kind of pattern.
-
Any Anomalies in Bell's Inequality Data?
TakenItSeriously replied to TakenItSeriously's topic in Quantum Theory
I agree that information must be exchanged, though it may be obfuscated. Consider this protocol. We know the following:

When Alice tests spin A and then Bob tests spin A, then if Alice retests spin A her results are always the same.
When Alice tests spin A and Bob tests spin B, then when Alice retests spin A the result is different 50% of the time.

So Alice and Bob have worked out a protocol to communicate where Alice initializes by testing A each time. Bob replies by testing either A for a 1 or B for a 0. Alice retests A, and if it changes, which happens 50% of the time, then she has just successfully received a 0. If it doesn't change after some number of repeated trials, then she may conclude with high probability that Bob has been testing A each time and his intent was to send a 1 bit. This process can continue to send any number of bits back and forth.
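A toy Monte Carlo sketch, in Python, of the protocol exactly as described above, just to make the claimed statistics concrete. It simply assumes the 50%-flip rule stated in the post (it is not a quantum-mechanical simulation), and the trial count is an arbitrary choice:

import random

TRIALS_PER_BIT = 20  # arbitrary; more retests = more confidence when reading a "1"

def alice_reads_bit(bob_sends_one: bool) -> int:
    """Alice keeps retesting A; any change in her result is read as a 0."""
    last = random.choice([0, 1])           # Alice's first measurement of A
    for _ in range(TRIALS_PER_BIT):
        if bob_sends_one:
            result = last                   # Bob also tested A: Alice's retest never changes
        else:
            result = random.choice([0, 1])  # Bob tested B: Alice's retest flips 50% of the time
        if result != last:
            return 0                        # a single flip is enough to conclude "0"
        last = result
    return 1                                # no flips seen: conclude "1" (error chance ~ 1/2^TRIALS_PER_BIT)

message = [1, 0, 1, 1, 0, 0, 1]
received = [alice_reads_bit(bit == 1) for bit in message]
print("sent:    ", message)
print("received:", received)
-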
Clearly you've never played poker. Human nature is not unpredictable, it's extremely predictable, and nobody played the lottery like that when it first got started.
-
I should have clarified that I don't believe in physical manifestations of infinity such as the type given in the OP. Conceptually, however, I agree: I think infinity is a very important concept as a boundary condition. IMHO, the proper way to treat infinity, at least for anything physical in the domain of the observable Universe, is as something approached but never reached, for purposes of asymptotes or approximations. My only exception is falling through the Event Horizon of a Black Hole at the speed of light, where infinity goes crazy but in a self-consistent, non-singular kind of way, which is a whole other topic that I can't talk about here and that probably won't be accepted for decades. I seriously doubt the world can wait that long before the current mistakes of humanity become irreversible.
-
One example: I saw a Numberphile YouTube video where an infinite series, something like 1 + 2 + 3 + 4 + ... = -1/12. I could be remembering it wrong, and it could have been the product instead of the sum, but the point is that the series logically expands to infinity, yet there was a proof that used a combination of other infinite series to show the -1/12 answer to be true. I tried to search for that video but couldn't find it.

Another, simpler example I heard when I was very young: if you lived in a world with an infinite population of immortals, where one immortal was born in some random location every year and everyone knew their own age, then from the point of view of any random immortal you happened to pick, what are the odds of him running into someone younger than he is? The answer is infinitesimally small, because there are an infinite number of older people while there are only finitely many younger, so it doesn't even matter how old he is; the odds against meeting someone younger are still infinitesimally small. And yet from the perspective of a third party, one person is always younger than the other whenever two people meet.

Before thinking about it too much, it could be like saying that if infinity could exist in the universe, then conservation laws would probably not be true, at least not in the form in which they are currently understood.
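For reference, the result alluded to above is usually stated through analytic continuation of the Riemann zeta function rather than as an ordinary convergent sum; in LaTeX:

\[ \zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1), \qquad \zeta(-1) = -\tfrac{1}{12}, \]

so the value assigned to 1 + 2 + 3 + ... is a regularized one; the series itself diverges.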
-
Why not? Because once a unique strategy gets out, it's no longer unique. Just one other player using the same strategy cuts its value in half, since you're guaranteed to split two ways.

It has to do with original thinking. Free will is not expressed as a random series of events; it is a completely biased series of events based on human nature, which in turn is biased by human experience creating intuitive conceptions, which are always false when the real context falls outside of our experience. If nobody ever thought of that strategy as being valid, then no one would pick such a series of numbers when they thought it was guaranteed to fail.

Back then, it was a real chore explaining that concept to those I tried to explain it to. It's easier to explain today because people have likely had some experience in their lifetime that had to do with large numbers. The real question is whether the idea was original or not. Since it was just after lotteries became legally run by the states, and nobody thought about large numbers back then except for some mathematicians, who would never consider playing the lottery even hypothetically, I'm pretty certain it was an original idea. I could have been wrong, but I doubt it.
-
Finding large Primes using Standing Wave Harmonics
TakenItSeriously replied to TakenItSeriously's topic in Mathematics
Regarding the wheel modification, it essentially ignored even numbers and redundant prime factors, since only one prime factor is required to define a composite number, and I had already accounted for all of those modifications in the Excel proof. It's difficult to display the algorithms contained in each cell directly, and Excel syntax is not very reader friendly. Also, the last completed version was corrupted by hackers, so it won't even open in the first place; right now I'm only recreating portions to be able to display examples.

The harmonic wave modification does much more than that and is a real game changer. One form of infinite compression is already embedded in the method when you consider that the SoE saves the primality data as absolute numbers, where a single number could take up hundreds of gigabytes of data. Using wave patterns, the same data can be stored as local coordinate data within a local matrix reference frame, with a single long index to the matrix datum.

Think of it like this: 2 is a prime factor of all numbers divisible by 2, 3 is a prime factor of all numbers divisible by 3, 5 is a prime factor of all numbers divisible by 5, and so on. What each of those series of prime factors has in common is that each occurs at a constant interval, i.e. each has a constant period, though each is unique from the others. Period is by definition the inverse of frequency, so each prime number is like the leading edge of a wave with a different frequency.

What this means is that by taking the product of the first n prime numbers you find a common node point for all n primes, after which the pattern for those primes repeats itself. So if we break the infinite number line into line segments that are each N numbers long and line them up side by side to form a matrix, then all composite numbers containing primes up to the nth prime will align and form columns. The composite numbers containing only prime factors larger than the nth prime will be arranged into diagonal patterns, because there will always be a little extra piece of the wave left over after reaching the end of one line segment that carries over to the start of the next line segment.

To understand it better, just take the smallest node: N = 2×3 = 6. You can then write the numbers down in rows of six and see what happens:

01 02 03 04 05 06
07 08 09 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
25 26 27 28 29 30
31 32 33 34 35 36

Note that this is a base-6 structure that I'm expressing in decimal, because no one thinks of primes in base 6. Because 3 goes into 6 twice, there are two columns that contain all numbers divisible by 3, and because 2 goes into 6 three times, there are three columns that contain all numbers divisible by 2. Note that one column is redundant, in that it contains numbers divisible by both two and three. This leaves only two out of six columns where all other numbers, both primes and composites of prime factors larger than 3, must exist.

Since the multiples of the prime factors greater than three form diagonal lines, they must intersect those two columns periodically, and each intersection marks the position of another composite number. Note that the columns containing the mix of primes and composites are vertical while the numbers increment horizontally within the matrix, so we gain the advantage of having a two-way reference system to mark the relative positions of those numbers in a 2D plane. In effect we have taken a 1D number line and converted it into a 2D region of space.
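A small Python sketch of the construction just described (my own illustration, not the Excel proof): it lays out 1..36 in rows of N = 6 and shows that multiples of 2 and 3 land in fixed columns, leaving only two columns of prime candidates:

N = 6  # node length = 2 * 3
rows = [list(range(start, start + N)) for start in range(1, 37, N)]
for row in rows:
    print(" ".join(f"{n:02d}" for n in row))

# Every entry in a given column shares the same residue mod 6, so divisibility
# by 2 or 3 is a property of the column, not of the individual number.
for col in range(1, N + 1):
    residue = col % N
    tags = [p for p in (2, 3) if residue % p == 0]
    label = "multiples of " + " and ".join(map(str, tags)) if tags else "prime candidates (plus 1)"
    print(f"column {col}: {label}")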
We could actually do this again to form a 3D region of space by multiplying another series of primes together, but I don't have the ability to demonstrate this in Excel or the bandwidth to create a new software application that could achieve it. However, if we keep it a 2D rectangle of infinite length, then we can use the next series of prime products, e.g. Ny = 5×7 = 35, to find a vertical node pattern that is 35 squares long, with those prime factors also repeating every 35 squares. The more primes we add to the second product, the longer the rectangle becomes, up to infinity. Now we can take any square, project it down the line to any point in space, and calculate the waves in that region using local coordinates instead of the large prime numbers themselves. This is the infinite, or unbounded if you prefer, compression that I was speaking about before. This can be further exploited in other forms to transmit data of any kind, which is another topic. -
Finding large Primes using Standing Wave Harmonics
TakenItSeriously replied to TakenItSeriously's topic in Mathematics
False alarm. You're right in part: it uses something like the SoE for the first line segment. However, it's quite a bit different after that.

The SoE works on a single infinite line, has only absolute references, and counts only the pitches for finding prime factors. This means that it must start from the first prime every time, must save actual number values to record primality information, and eventually runs out of space on the hard drive.

However, when breaking the line into segments at lengths equal to a resonant node, N = P₁×P₂×...×Pn, then it can leverage 2D wave-like patterns versus the 1D pitches the SoE depends on.

For primes ≤ Pn it creates vertical columns of composite multiples that can be ignored. That reduces the numbers that need to be acted on by 80–95%.

For primes > Pn, the composite multiples of those primes create consistent diagonal patterns that can be defined by a starting position along with a Δx and Δy slope. Finding the points of intersection in the remaining columns defines the remaining composite numbers, meaning all prime locations are what is left behind.

Therefore, rather than marking all prime and composite number locations by recording their large values, which could be hundreds of millions of digits long, on the hard drive, the new method finds the relative positions of primes and composite numbers, saving only the data of each column as a binary string. Furthermore, the 2D region can be localized using a datum, which can't be done in 1D. A datum makes the position references all based on local numbers instead of large numbers.

The format of the new method is optimized to need only a tiny fraction of the space to record all primality data, e.g. for any region of 44,100 numbers, it only needs 44 binary strings of 110 bits each, regardless of the sizes of the numbers involved.

Example: without the memory limitations, accessing large primes or even large Mersenne primes should just be a matter of indexing them in real time. Again, if that's true, then infinite compression may just be a step away, based on an idea that was told to me by a brilliant man whom I was introduced to through the Stanford professor who taught me EE.
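A rough Python sketch of the storage idea only, under my own assumptions (a plain segmented layout with trial division, not the Excel wave formulas): a segment of length N = 2·3·5·7 = 210 is addressed by a datum, and primality inside it is kept as a single 0/1 string over local coordinates, so nothing larger than the datum ever needs to be written down:

N = 2 * 3 * 5 * 7                  # node length: 210

def is_prime(n: int) -> bool:
    """Simple trial division; stands in for whatever test produces the bits."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def segment_bits(datum: int) -> str:
    """0/1 string for the numbers datum+1 .. datum+N, indexed by local position only."""
    return "".join("1" if is_prime(datum + local) else "0" for local in range(1, N + 1))

datum = 0                          # first segment covers 1..210
bits = segment_bits(datum)
print(bits[:30])                   # 011010100010100010100010000010
# Recovering a prime from storage is just datum + local index; nothing else is kept:
print([datum + i + 1 for i, b in enumerate(bits) if b == "1"][:10])
-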
I doubt that any math problem that involves infinity is a valid problem, unless you're talking about hidden parallel dimensions of some kind. You always seem to get two correct answers that are not consistent with each other, which is a paradox.
-
Reminds me of a lottery system that I thought up when lotteries were still fairly new, one that could have represented an edge at times when the pool became large enough to make it worth playing. The problem with large pools is that the chance of multiple winners can reduce your winnings considerably. In that case you should play only numbers that no one else would ever consider playing, so that you're less likely to have to split the winnings. For example, picking 1, 2, 3, 4, 5, 6 has the same odds of winning as playing 6 random numbers, but no one thinks that's true. However, once I started telling people about the strategy, it was no longer a valid strategy. I read an article with the same idea about a year after I first started telling people about it, which proves the point.
-
I discovered an improved method for finding large prime numbers by ignoring the prime numbers altogether and focusing only on the wave-like patterns of their prime factors, which is loosely related to the Sieve of Eratosthenes.

Figure 1: The Sieve of Eratosthenes

Prime factors occur at regular intervals: multiples of 2 recur at every other number, multiples of 3 recur at every third number, and so on. We can leverage this periodicity of prime factors to identify all non-prime positions within a predefined large range of natural numbers arranged in an array. This periodicity means that we can apply the concepts of standing wave harmonics to find all composite numbers within a given range based on these wave patterns, and therefore we will also know the relative positions of all prime numbers within the same range.

Figure 2: Patterns of Standing Wave Harmonics

These positions can be stored as a "0" for non-prime or "1" for prime rather than needing to store the entire number, thus alleviating the computational issues with large numbers.

The method

The key is to arrange the numbers into rows of N numbers, where N is defined by the product of the first n primes:

N = P₁ × P₂ × ... × Pn

The standing-wave-like effect of those first n primes causes their prime factors to align into periodic columns, which in turn causes the primes to align themselves within the remaining columns, though they will be intermingled with other composites whose prime factors are all greater than Pn.

For N = 2×3 = 6:

multiples of 2:
xx, 02, xx, 04, xx, 06, xx, 08, xx, 10, xx, 12, xx, 14, xx, 16, xx, 18, xx, 20, xx, 22, xx, 24, xx, 26, xx, 28, xx, 30, xx, 32, xx, 34, xx, 36

multiples of 3:
xx, xx, 03, xx, xx, 06, xx, xx, 09, xx, xx, 12, xx, xx, 15, xx, xx, 18, xx, xx, 21, xx, xx, 24, xx, xx, 27, xx, xx, 30, xx, xx, 33, xx, xx, 36

After we combine those multiples we get:
xx, 02, 03, 04, xx, 06, xx, 08, 09, 10, xx, 12, xx, 14, 15, 16, xx, 18, xx, 20, 21, 22, xx, 24, xx, 26, 27, 28, xx, 30, xx, 32, 33, 34, xx, 36

Therefore the prime numbers must be located within the remaining columns that are not already occupied by those composite numbers. For prime numbers greater than Pn, their prime factors form diagonal patterns which define the gaps between the prime numbers in those remaining columns:

xx, xx, xx, xx, 05, xx, xx, xx, xx, 10, xx, xx, xx, xx, 15, xx, xx, xx, xx, 20, xx, xx, xx, xx, 25, xx, xx, xx, xx, 30, xx, xx, xx, xx, 35, xx

or

xx, xx, xx, xx, xx, xx, 07, xx, xx, xx, xx, xx, xx, 14, xx, xx, xx, xx, xx, xx, 21, xx, xx, xx, xx, xx, xx, 28, xx, xx, xx, xx, xx, xx, 35, xx

Combining all waves we get:
xx, 02, 03, 04, 05, 06, 07, 08, 09, 10, xx, 12, xx, 14, 15, 16, xx, 18, xx, 20, 21, 22, xx, 24, 25, 26, 27, 28, xx, 30, xx, 32, 33, 34, 35, 36

Note that the leading edge of each wave (02, 03, 05, 07) is itself a prime, and the positions still marked xx, i.e. those with no prime divisors, are also primes. The exception is 01, which always shows up as a prime position, but you can just ignore it.

With the numbers arranged in a 2D array, we can treat it like a matrix and therefore ignore the values of the numbers themselves, defining only the relative positions of all composite numbers within the array, which of course also defines the relative positions of all the prime numbers within the array. Since we are only treating positions of the array as prime (1) or non-prime (0), we alleviate the issues with the computational complexity of long numbers by dealing only with primality and position.
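A small Python helper (my own illustration) that reproduces the wave lists above for any set of primes, in the same xx/number notation:

def wave(p: int, limit: int = 36) -> set:
    """Positions hit by the 'wave' of prime p: its multiples up to limit."""
    return set(range(p, limit + 1, p))

def show(hits: set, limit: int = 36) -> str:
    """Render a line in the xx/number style used in the post."""
    return ", ".join(f"{n:02d}" if n in hits else "xx" for n in range(1, limit + 1))

print("multiples of 2:  ", show(wave(2)))
print("multiples of 3:  ", show(wave(3)))
print("2 and 3 combined:", show(wave(2) | wave(3)))
print("all four waves:  ", show(wave(2) | wave(3) | wave(5) | wave(7)))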
Example: by arranging the numbers into rows of N numbers, you will notice that all primes become aligned into columns that are fewer in number than the columns of composite numbers.

E.g., for the first 2 primes, determine the positions of the first 11 primes within the first 36 numbers. N = 2×3 = 6:

0,1,1,0,1,0,
1,0,0,0,1,0,
1,0,0,0,1,0,
1,0,0,0,1,0,
0,0,0,0,1,0,
1,0,0,0,0,0,

For the first 3 primes we can define the positions of the first 29 primes within the first 100 numbers. N = 2×3×5 = 30:

0,1,1,0,1,0,1,0,0,0,1,0,1,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,1,0,
1,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,
1,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,
1,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,1...

where 1 = prime number position and 0 = composite number position.

We can scale the node ranges to the size of the prime numbers we are focused on by adding more primes to the composite node:

N₂ = 2×3 = 6, N₂² = 36
N₃ = N₂×5 = 30, N₃² = 900
N₄ = N₃×7 = 210, N₄² = 44,100
N₅ = N₄×11 = 2,310, N₅² = 5,336,100
N₆ = N₅×13 = 30,030, N₆² = 901,800,900
N₇ = N₆×17 = 510,510, N₇² = 260,620,460,100
N₈ = N₇×19 = 9,699,690, N₈² = 94,083,986,096,100
N₉ = N₈×23 = 223,092,870, N₉² = 49,770,428,644,836,900
N₁₀ = N₉×29 = 6,469,693,230, N₁₀² = 41,856,930,490,307,832,900
N₁₁ = N₁₀×31 = 200,560,490,130, N₁₁² = 40,224,510,201,185,827,416,900
N₁₂ = N₁₁×37 = 7,420,738,134,810, N₁₂² = 55,067,354,465,423,397,733,736,100
...
Nn

Below is an Excel spreadsheet that uses the first 4 prime numbers to define a node length: N = 2×3×5×7 = 210. This node was then used to find the first 8,555 prime number positions from within 88,200 natural numbers, by using the wave patterns of all prime factors to eliminate all composite numbers, leaving only primes behind. I was able to validate those prime numbers as 100% correct against downloaded samples. While this is not important for finding small primes, the method uses the same process for finding prime number positions without needing to access or operate on the large numbers themselves. All prime number positions can be predefined using prime-factor wave-like patterns, so that accessing large prime numbers should be usable in real time.

While the spreadsheet had over 70 sheets involved, I show sample sheets for two prime-factor patterns, for 11 and 101. The third sheet shows the prime number positions in red within a field of natural numbers shown in grey, although the numbers themselves are just there to provide context for the result. The actual numbers used in the formulas are simply a 0 or 1 to show primality, as shown in the fourth sheet.

Figure 3: Proof of concept in an Excel spreadsheet, which uses wave patterns of prime factors, shown as the patterns of "1" intersecting harmonic columns shown as the white cells, to derive the positions of the first 8,555 prime numbers within the first 88,200 natural numbers.

While creating this kind of matrix for the largest prime numbers in question would be a large undertaking, the memory and speed cost can be greatly reduced relative to current methods by dealing with relative prime number positions only, without needing to store or perform operations on large prime numbers directly. It may even be possible to model prime number positions using electronic signal waves, or perhaps light-wave frequencies at wavelengths that correspond to the prime-factor periodicity, in order to determine where they intersect with prime harmonic standing waves and so identify the prime number positions.
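A quick Python check of the node table above (my own sketch; as in the table, Nₖ is the product of the first k primes):

def first_primes(count: int) -> list:
    """Return the first `count` primes by simple trial division."""
    found = []
    n = 2
    while len(found) < count:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

N = 1
for k, p in enumerate(first_primes(12), start=1):
    N *= p
    print(f"N_{k} = {N:,}    N_{k}^2 = {N * N:,}")   # the post's table starts at N_2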
Ultimately, with access to large primes in near real time, it should be possible to use methods of infinite compression by taking large strings of binary data and converting them to small number keys, using universally accessible large Mersenne primes to compress and decompress the data at either end.
-
As in several web pages that draw files from a single database? You can assign each web page a prime number, then assign the product of those primes to each file (a semiprime when a file belongs to two pages). Example:

WP A: p1 = 2
WP B: p2 = 3
WP C: p3 = 5
...

Then, say you have a file that you want linked to A and C but not B. You can assign the file an index built from those primes, e.g. index = p1*p3 = 10. So each web page can use the mod function to filter, from the single DB, only the files that belong to it.

The idea was not originally mine, BTW. Unfortunately, I only heard about it second hand sometime back in the '80s and used it to create my first program, which was a phone book app.
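A tiny Python sketch of the scheme as described (the page/prime assignments and file names are made up for illustration):

PAGE_PRIMES = {"A": 2, "B": 3, "C": 5}      # one prime per web page

# Each file's index is the product of the primes of the pages it belongs to.
files = {
    "contacts.dat": 2 * 5,        # belongs to A and C -> index 10
    "report.pdf":   3,            # belongs to B only
    "readme.txt":   2 * 3 * 5,    # belongs to all three pages
}

def files_for_page(page: str) -> list:
    """A page keeps a file iff the page's prime divides the file's index (the mod filter)."""
    p = PAGE_PRIMES[page]
    return [name for name, index in files.items() if index % p == 0]

for page in PAGE_PRIMES:
    print(page, "->", files_for_page(page))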
-
I think you have a good point that sounds similar to my own line of thinking, only I have to think of it in a more mundane way.

For example, if I do long division by hand, the first operation to calculate the first digit of the quotient is an intuitive guess based on the two relative sizes in question, which is right most of the time, but when it's close, e.g. does it divide 8 times or 9 times?, it becomes more and more like a coin flip. But computers, being the most anal thinkers in the Universe, can't use intuitive guessing. So I'm just guessing that they must look at the number of digits, then the left-most digit, then the next and the next and so on, and then repeat the entire process for calculating the next digit, which makes it an exponential expansion. I always thought of that as the extra loading, though I don't actually know what the actual algorithm looks like, especially for binary numbers; there might be some mathematical rules I'm not aware of going on. Also, the operation seems to require looking at the entire number as a whole, whereas all other operations seem to be able to be broken down into pieces. So there is also the question of how it must be handled using different memory types, which is something else I don't know about, though Sensei's post touched on multiple class types. Possibly something is going on at the chip-logic level, such as with math coprocessors.

Edit to add: If someone can code a simple test, we could measure this to find out. Example generic code:

b = the largest integer your code allows
a = some big number with half the number of digits, minus 1 digit
c = average value of x in the first loop
n = some big number, to increase the resolution of the timers and to test a fair sample size of a

report StartTime
for i = 1 to n do
    x = b/a
    a = a+1
end
report SplitTime

re-initialize a

for i = 1 to n do
    y = a*c
    a = a+1
end
report EndTime

The timer should be the kind accurate down to 5 ms, not the second timer. This would test the processor time for the first 32 bits. I don't know how to code for testing the other class types.

Edit to add again: Also, Sensei's posts had gotten me to think more along the lines of binary numbers, which I suppose is the nature of C's efficiency advantage. Therefore I suppose there is an associated lag time that comes from converting decimals to binary that should be taken into account. However, I think we can look at that as a lag-time effect, an additive component that occurs for both operations, versus the multiplicative component that I'm actually more concerned about. I have some additional questions about binary numbers which I may post in a different thread; I don't want the topic to wander too far off base, though binary numbers may be too integral to the topic for us to separate the two.
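A concrete Python version of the proposed test (my own sizes and iteration counts; adjust to taste). It times n big-integer divisions against n big-integer multiplications of comparable size:

import time

n = 200_000                    # iterations; raise for better timer resolution
b = 10**600                    # stand-in for "the largest integer your code allows"
a = 10**299 + 7                # roughly half the digits of b, minus one digit
c = b // a                     # typical size of the quotient, reused as the multiplier

start = time.perf_counter()
x = a
for _ in range(n):
    _ = b // x                 # integer division of a big number
    x += 1
split = time.perf_counter()

x = a                          # re-initialize a, as in the pseudocode
for _ in range(n):
    _ = x * c                  # multiplication of numbers of comparable size
    x += 1
end = time.perf_counter()

print(f"division:       {split - start:.3f} s")
print(f"multiplication: {end - split:.3f} s")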
-
Forgive me, I made an error in saying the counter goes in the second loop. I should have said in the first loop, after the counter that counts odd numbers only; I think I was thinking in terms of being on the second level.

Also you're right, I was confused about binary even being an issue, which is ignorance on my part of not knowing the language in your example, much less the syntax, which seems to stress fewest characters over readability. A pet peeve of mine, since it all gets compiled, so why make it the actual code? But "tabs vs spaces" I guess (HBO's Silicon Valley reference). I do understand binary for the most part, in terms of 2ⁿ combinations, or in conversion to/from decimal, or even in terms of QM, but not in terms of how most operations are performed on binaries, or whether distinguishing individual digits of a decimal is practical in binary form. But I do get what you're saying; I just didn't realize that the "j++" syntax meant change all integers to binary, if I interpreted your post correctly.

I'm not in the software profession and worked as an EE on the hardware side, though I am naturally gifted at logic and enjoy programming in my own time for personal yet quite extensive projects. However, it's all self taught and limited to the languages I needed most, so basically, if I didn't need to use a feature, I probably still need to learn it. So in terms of dynamic type changes, it was mostly a feature I wished I had many times before it existed; after I had the feature, I've yet to need it, which is typical. So I understand why it's needed, but I probably wouldn't know it if I saw it, nor any particular details that are important to know about it.

In terms of user-defined classes, I only know a little from VBA experience, so thanks for the tip, it was very enlightening. I do have a follow-up question, if you don't mind, at the end.

Fortunately, I don't think binary is an issue once we correct my error and place the counter in the first loop, which, if I understood your post correctly, initializes the integer as a decimal in the first loop and changes all integers to binary in the second loop. At least I hope that's right. BTW, I assume you chose binary for its efficiency as the native language of computers, which I certainly wouldn't argue with either. I hope that clears up any confusion I may have created.

As to my question on the user-defined classes you showed: how do they pertain to the OS being 32-bit vs 64-bit, as well as the hardware having either 32 I/O signals or 64? I might see it as getting around the OS limitations, but I can't conceive how it gets around the physical limitation of the hardware having only a fixed number of physical signals, not even counting any overhead that may take up some bits. I guess that mostly pertains to that last declaration for over 64 bits. Is this coding for something beyond a typical PC?
-
I think mod is fine for small divisors, but since the numerator can get large, maybe add a counter in the second loop:

k += 1
if k = 5 then
    k = 0
    break
endif

IDK if it's any more efficient, though. BTW, not sure what language I just used, probably some hybrid, lol.

Edit to add: That's why I think mod isn't efficient for large numerators with large denominators. Think of a 64-digit number divided by a 32-digit number. The operations must expand exponentially, and the numbers of digits in that example are probably around 100x too small for 128 bits.
-
Thanks Sensei, that clarified a lot. BTW, I think you can skip every other 5th integer as well, since they all end in a 5; i.e. all primes > 10 end in 1, 3, 7, or 9 only. I guess that means every 5th iteration after the odd check.

Edit to add: So I guess I'm asking, what would a timer record if placed outside of that modulo check?

Off topic: why do carriage returns keep disappearing when I preview, reply or edit?
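A small Python sketch of the skip as I read it (the candidate range, the phase constant, and the trial-division stand-in are my own choices, not Sensei's code): step through odd numbers and use a cheap counter to drop every candidate ending in 5, so only numbers ending in 1, 3, 7, 9 get the expensive test:

def expensive_is_prime(n: int) -> bool:
    """Stand-in for the costly test being timed in this thread (simple trial division)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

start = 11
k = 2                             # phase the counter so the first odd ending in 5 (15) hits k == 5
primes = []
for n in range(start, 112, 2):    # the "first loop": odd candidates only
    k += 1
    if k == 5:                    # every 5th odd number ends in 5 -> skip it without any division
        k = 0
        continue
    if expensive_is_prime(n):
        primes.append(n)
print(primes)                     # 11, 13, 17, 19, ... 109; no candidate ending in 5 was ever tested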
-
I'm trying to get a feel for the scale of the negative impact that the division operations involved in the factorization of large primes have on computational time. Hypothetically speaking, if you could replace every division operation on large numbers with a multiplication operation on the same numbers instead, while assuming it achieved the same goals of course, what factor of increase in efficiency do you think we would gain?
-
Any Anomalies in Bell's Inequality Data?
TakenItSeriously replied to TakenItSeriously's topic in Quantum Theory
I never paid much attention before discovering the logical flaw; mostly just what I've read in general about a bunch of "loopholes" or invalid assumptions in theory and experiments, but nothing specific. See Wikipedia, Bell's Inequality, under Assumptions.

The flaw is based on Bob appearing to have 6 results:

test 1: up, down
test 2: up, down
test 3: up, down

But one test is an illusion, because it was based on Alice's action, not Bob's. The effect breaks the decision tree down in a strange binary way:

50% Alice: test n
    50% up
    50% down
50% Bob:
    50% test not-n(1)
        50% up
        50% down
    50% test not-n(2)
        50% up
        50% down
-
Were there any unexpected anomalies found in Bell's inequality test data, aside from those predicted by QM?

What was the frequency of Alice and Bob choosing the same test? Please don't answer unless you know, e.g. don't just assume it was 1/3. I'm not sure 1/3 is the correct expected result, and it may be significantly lower than that, such as 1 time in 4 when opposing testers chose the same measurements.

Have tests of analogous classical systems been used as a control, such as for what kind of variance to expect? What about Monte Carlo sims of classical systems? Any unexpected results from the classical results?

What kind of variance was experienced for all results? In general, I'm under the impression that variance was much higher than expected. Were there deviations between different experimental results? What reliability factors are involved? Any unexpected anomalies in variance? Were probability experts consulted?

Forgive my asking, but based on what I know, it seems like proper testing was not much of a concern, contrary to the rigorous attention normally applied to physics theory and experiments, especially given the logical nature of the problem, which mathematicians don't seem to appreciate much.