
PeterJ

Everything posted by PeterJ

  1. Ha ha. Yes, I can understand your nervousness. Francis Bradley cites Materialism and Theism as good examples of views that do not survive in metaphysics. They would be my first two choices as well.
  2. Right on brother! This failure to understand metaphysics allows countless ludicrous views to survive. But good luck trying to get physicists to see this. Generally it seems that metaphysics is considered to be not worth thinking about. This leaves us free to believe whatever nonsense we like. I blame this on academic metaphysicians who, as you say, have lost the plot. In reality, as you also say, the plot is quite simple.
  3. Popper opines somewhere that we act according to our beliefs. Pretty obvious really. If we believe that there's food in the fridge we go there when we believe that we're hungry. William James talks of the greatest discovery of his generation - 'that a human being can alter his life by altering his attitudes of mind'. I mean, really, this is daft. The discovery was made thousands of years earlier and has been widely and energetically promoted ever since. Still, perhaps these things have to be constantly rediscovered.
  4. Good stuff. The perennial philosophy, being perennial, has a vast and very repetitive literature.
  5. Yes. Interesting indeed. And also quite encouraging. Thus the Upanishads can claim that there is no consciousness after death and yet still be an optimistic vision of life and death. Kant and Hegel also arrive at this idea of (something like) a pre-intellectual awareness that stands outside of the cycle of life and death. Schrödinger never quite managed to make the idea respectable in physics despite forty years of trying, but it's a very common one elsewhere. It seems to me that much confusion is caused by casually associating 'intentional consciousness' with the sort of processless awareness that Kant defines as 'not an instance of a category'. As such, this cannot be in the category 'alive' or 'dead', and so this pristine awareness prior to the categories of thought must be either a mass meditative delusion suffered by countless thousands of Buddhists, Taoists, Sufis, Gnostics and other assorted hippies, or a real and timeless phenomenon. Hegel calls it a 'spiritual unity', a phrase that might tempt many to dismiss it as a superstition. But nobody who dismisses it as a superstition can show that they are right to do so, and so we are still allowed to be optimistic about life, death, the universe and everything.
  6. The difficulty is the word 'consciousness'. It is not quite the same word as 'awareness'. If intentional consciousness does not survive the destruction of the brain this does not mean that all awareness need cease. Meditators say that it is possible to visit a post-death state in order to confirm that it ain't so bad.
  7. I recommend Ulrich Mohrhoff and the 'Pondicherry' interpretation. I don't understand him well myself, but his approach will appeal to some folks here.
  8. Is it not also the case that its wave aspect is everywhere? In which case it is not travelling. Is this correct? If it is, then the problem seems much greater than can be solved by fiddling around with ideas about masses and charges.
  9. Fair enough. Strangely, I'd be very pleased if you could convince me you're right. Then I could forget the whole thing. But it niggles away, the idea that this might work. Thanks for clearing up a lot of my errors anyway.
  10. Hey Imatfaal - Have you given up in despair? I'd like to get to the point where I can be sure my approach doesn't work, and I'm not quite there yet.
  11. Great. Now we're going to measure progress by the size of our explosions. Go science.
  12. The theory seems to be ad hoc. If there is a God there is no need for it. Perhaps children are closer to Him, and better able to intuit His presence. Just causing trouble.
  13. Yes!! Very annoying. In any case, email discussions with strangers can be unexpectedly difficult. We probably could have sorted all this out in ten minutes over a pint. But some of the crossed lines will be my fault. I'm still figuring out how to explain how I'm coming at this problem, and keep leaving out things I should have mentioned. So yes, successive stages: calculating products of p working from 5 up to p1. This needs to be done only once, and then as p increases we just add one new prime factor to the calc, while the previous result still applies as to density but has to be scaled for the change in the size of R. E.g. once we've calculated the density of products for the factors up to p1, then for the next R the density is the same but over a different range, and there will be one more factor in R to take into account, which is p1 x p2. All other products of p1 are products of previous primes and already counted. So if there are 10 primes in R, and if the next R is larger than the previous R, then there must be >10 primes in the next R, less one to account for the new p1. There are some provisos to add to this, but as a general rule it seems correct.
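A rough sketch of the 'same density, different range' point above, done in Python rather than a spreadsheet (my own illustration; the function name is just made up for it). Over one full wavelength 6 x 5 x 7 x ... x p1 it measures what fraction of the 6n +/- 1 positions are hit by products of the primes up to p1; because the pattern of products repeats with exactly that wavelength, the same density applies to every later stretch of the number line.

```python
from math import prod

def product_density(small_primes):
    """Fraction of 6n +/- 1 positions divisible by at least one of small_primes,
    measured over one full period (wavelength) of the combination wave."""
    period = 6 * prod(small_primes)
    # the 6n +/- 1 positions are the residues 1 and 5 mod 6
    positions = [6 * n + r for n in range(period // 6) for r in (1, 5)]
    hit = sum(1 for x in positions if any(x % p == 0 for p in small_primes))
    return hit / len(positions)

print(product_density([5]))         # 2 in every 10 positions -> 0.2
print(product_density([5, 7]))      # 7 adds its products, minus the shared ones at 210n +/- 35
print(product_density([5, 7, 11]))  # the density over the 2310 wavelength
```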
  14. Okay. There are plenty of things that are not predictable about the primes and plenty that are. As you say, there are many buggeration factors. But I'm not looking for an algorithm to predict primes. I'm looking for a calculation, however complex it may be, to predict products of primes. Sorry - I can't understand point 3 here. It seems a strange calculation. But yes, to say that (6n x 5 x 7) +/- 35, i.e. 210n +/- 35, gives all the joint products of 5 & 7 that occur at 6n +/- 1 is a trivial observation. Still, it does look like a prediction to me. If we count the products of primes starting with 5, then when we count for 7 we just need to remove the previous products of 5, of which there will be 2 in every 210 numbers. It does not matter about any larger prime factors. They will be dealt with in their turn. So for joint products of 11 and any smaller primes we would need to deduct 2 in (6 x 11 x 5) + 2 in (6 x 11 x 7). This exhausts the relevant products of 5, 7 & 11. For my R this calc will be approximate because the end points of R do not line up with the combination wave of the products. For the range from 6n x 5 x 7 x 11 to 6(n+1) x 5 x 7 x 11, however, it will always be exact for any n. (This has to be adjusted where n = 0.)

Good. This is what I was banking on. This is an important point dealt with.

But they are entirely predictable. I want to argue about this. Perhaps 'predictable' has a mathematical meaning I'm misusing. I'd say they are predictable because the products of primes are predictable. It is just that the calculation is difficult. When I say 'predictable' I just mean that we could write out the prime sequence without having to do any factorisation. If we can say that the primes >3 occur only at 6n +/- 1, is this not a prediction, albeit a very trivial one?

175 = 210 - 35. Predictable. 205 = (6n x 5 x 41) +/- 205. Predictable. (It is irrelevant to anything that 41 is a divisor of 205 unless we are counting products of 41. When we are, we would want to deduct the previous products of 5. These occur at (6n x 5 x 41) +/- 205. Here 'n' can be zero, which produces the number 205. Note that the variable 'n' means the calc is infinitely repeatable.) 217 = 7 x 31. This is (6n x 7 x 31) +/- 217 with n = 0. Predictable.

(Text box failure) Imf - "But as I have been saying from the beginning - primes are neither mechanical nor predictable. You are moving from the agreed (all numbers are primes or multiples of primes AND multiples are cyclical) to the unproven (primes are cyclical)"

PJ - Primes are clearly not cyclical. But any finite quantity of primes will produce a combination wave of products which is precisely predictable and which repeats forever on its wavelength.

Imf - "it will not be less for the next R" - this is demonstrably false: 1 1 2 2 4 2 7 2 4 8 2 11 7 3 11 13 13 5. This is the number of twin primes between the squares of consecutive primes - the trend is certainly upward, but "it will not be less for the next R" - clearly refuted.

PJ - Apologies. I meant to say 'on average'. For a start, sometimes the next R is a lot smaller than the previous R. The first R that I'm concerned with yields 4, then 2, 7 etc. as per your list. The trend is bound to be upwards but yes, it can be smaller or larger in any instance. As p grows larger the difference between previous/next R will become ever larger and the trend will become ever more obvious. It is possible, I think, that there may come a point where from then on the next R will always contain more twins than the previous R, but that's a guess.
This list suggests that I have correctly seen the mechanism that causes this upward trend for twins in R. My original calculation was much too simple to show it, although it illustrates the idea, but despite your invaluable assistance I still cannot see why a decent mathematician wouldn't be able to make it accurate enough. Maybe I'm still missing something. (By the way, please be wary if you read my posts from the email notifications. I tend to make mistakes and have to edit them out, and I may have two or three goes at it.)
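A quick brute-force check of the 210n +/- 35 claim above, in Python (my own illustration, not part of the exchange): the numbers at 6n +/- 1 that are divisible by both 5 and 7 are exactly those of the form 6 x 5 x 7 x n +/- 35.

```python
# Check that the joint products of 5 and 7 lying at 6n +/- 1 positions
# are exactly the numbers 210n +/- 35, up to some limit.
LIMIT = 100_000

at_6n = [x for x in range(5, LIMIT) if x % 6 in (1, 5)]
joint_57 = {x for x in at_6n if x % 5 == 0 and x % 7 == 0}
predicted = {210 * n + s for n in range(LIMIT // 210 + 2) for s in (35, -35)}
predicted = {x for x in predicted if 5 <= x < LIMIT}

print(joint_57 == predicted)   # True
```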
  15. Yes, of course a proof has to be ironclad. I've not suggested otherwise. As it stands, yes. Why would it be impossible to make it work? The original calculation was wildly incorrect. My prediction as to where the relevant products of primes occur is solid as a rock.

It may yet be impossible for some reason to do what I'm suggesting. Certainly the calculation you mention doesn't work. But you've given me no reason to believe no calculation could ever work if it is made sufficiently sophisticated. The relevant products of 7 will occur as stated, and 2 in every 210 numbers will be the joint products with 5. It is always possible to account for doubling with smaller primes, albeit that for a specific range there will be an error term for reasons we've discussed.

Yes. This is why I'm exploring how to improve it. I can't see how a statistical approach could ever work. What might work, it seems to me, is a study of the behaviour of the products of primes. This is entirely predictable. Suppose we draw an empty number line and mark a zero point. Then we place a circle with a 6" circumference on the line, touching it at zero, and mark the circle where it meets the line. Roll the circle back 1" and mark the circle where it meets the line, and then forward to do the same. Now we have a circle marked off at zero, +1 and -1 (or at 4, 6 & 8 o'clock). If we roll the circle up the line, then each time the centre mark is on the line the two other marks will identify 6n +/- 1.

Now we have identified all the 'relevant' numbers. All we would need to do to create the twin prime sequence is to scale this circle for each prime. So to identify the products of 5 we would increase the circumference of the circle to 30 and roll it up the line, marking off 30n +/- 5 (that is, 6pn +/- p with p = 5). These are the only products of 5 that have any bearing on the twin primes. For the relevant products of 7 the circle needs a circumference of 42, etc. etc. For joint products we can do the same. For 5 and 7 the circumference would be 210. For 5, 7 & 11 it would be 2310. Etc. etc.

I'm not suggesting that this is a particularly useful thing to do, but I am suggesting that this mechanical and predictable behaviour can be measured, allowing us to make predictions about the density of twin primes. This is because for any finite quantity of prime factors the density of primes relative to these prime factors can be calculated, and if the correct range is chosen the prediction will be accurate. If the density is high enough then there will be twin primes in the range. The key point is that this is not a one-off calculation. For any finite quantity of prime factors the density of primes relative to them can be predicted to infinity. The products form a combination wave that repeats endlessly with a fixed wavelength. So where R is carefully chosen we can predict quite a lot.

I may have confused the issues by mentioning the idea of simply counting twin primes in R from a table of primes. This is not what I'm suggesting. But I find it interesting that whatever the quantity of twin primes in R turns out to be, it will not be less for the next R, and the next p will only add one product in R that is relevant. This and similar thoughts seem to offer a way in to the problem.
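A sketch of the rolling-circle picture in Python (my own illustration, not something from the thread): mark every 6n +/- 1 position, then for each prime p >= 5 roll the scaled circle of circumference 6p and strike out the positions 6pn +/- p. Whatever is left unstruck at 6n +/- 1 is prime, and the surviving pairs (6n - 1, 6n + 1) are the twin primes above 3.

```python
LIMIT = 10_000
# all 6n +/- 1 positions up to LIMIT, in increasing order
positions = [m for n in range(1, LIMIT // 6 + 1)
             for m in (6 * n - 1, 6 * n + 1) if m <= LIMIT]

struck = set()
for p in positions:
    if p * p > LIMIT:
        break
    if p in struck:                      # p is itself a product of smaller primes
        continue
    n = 1                                # roll the circle of circumference 6p:
    while 6 * p * n - p <= LIMIT:        # its marks land on 6pn +/- p
        struck.add(6 * p * n - p)
        struck.add(6 * p * n + p)
        n += 1

survivors = set(positions) - struck      # the primes > 3 up to LIMIT
twins = sorted((a, a + 2) for a in survivors if a + 2 in survivors)
print(twins[:8])
# [(5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103)]
```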
  16. I don't think so. They are not accurate enough, which is not quite the same thing. Of course. I know what needs to be done. I don't get this. My predictions are inevitable. They are not statistical or probabilistic. They are just not very accurate yet. Not at all. As p grows larger the fuzziness matters less and less. Yes. An overestimate of twins would be useless. But I'm not using statistics. I'm using the rules that govern the behaviour of prime products, which are predictable to infinity and back. Maybe we're still slightly at cross-purposes.
  17. It's okay. I'm predicting, not factoring. Because the relevant products of primes only occur at 6np +/- p (that is, at the 6n +/- 1 positions), I was once able to build an Excel prime checker that outstripped the machine. My basic approach is not wrong, it just might not work in this context.

Good point. But we wouldn't need to ever do this. The idea is to prove a principle, and this can be done using small primes. My proposition is that the calculation can be done for small primes, which is all we'd need to do.

What assumption? Is it not inevitable? True. In the case of all p it can vary by +/- 1. This is what I meant about it being a bit messy for small primes. But one product more or less doesn't matter where R is large. Don't forget I'm after a limit, not an exact count.

Damn. Text box gone again. Imatfaal - "If your 6n +/- 1 is divisible by 5 then you know this is the lowest prime factor. But if you know for instance that it is divisible by 17 - in order to compute a density you need to know if this is the lowest prime factor (ie to be sure it is not divisible by 5, 7, 11, 13). That is the number you need - you can be safe in estimating that 2 in 5 sites of 6n +/- 1 are divisible by 5; but to do the same for 7 you need to find the 7s and remove the fives. And that is complex."

Me - Exactly. It can be done, but it's a pain. But then, I don't need to do all the calculations. I only need to make the calculation good enough to work, not completely accurate.

Imatfaal - "Even if you do this - it is merely evidence not proof. You need a way to generalise it - to show that it always works."

Me - This is an ambiguous issue for me. It must be true that the products of primes that occur at 6n +/- 1 do so at 6np +/- p, and thus can be measured and counted, but I would have no idea how to go about proving this. I would probably have to draw a picture.
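Roughly what that old spreadsheet checker did, redone as a few lines of Python (my reconstruction, not the original sheet): since every prime above 3 sits at 6k +/- 1, trial division only needs 2, 3 and the 6k +/- 1 candidates up to the square root.

```python
def is_prime(n: int) -> bool:
    """Trial division using only 2, 3 and candidates of the form 6k +/- 1."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5                                    # first 6k - 1 candidate
    while d * d <= n:
        if n % d == 0 or n % (d + 2) == 0:   # test 6k - 1 and 6k + 1
            return False
        d += 6
    return True

print([n for n in range(2, 60) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
```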
  18. Right. I'm up to speed. I have no idea how I made such a stupid mistake. It beggars belief. My calculation is clearly not up to the job. This puzzles me, because a few years ago I spent a lot of time on this and it all seemed to work fine. Oh well. My apologies for wasting so much of your time, and many thanks imatfaal for sticking with it.

But there is still hope for the idea, and I'd like to explore it a bit further, just to see how much else I've got wrong.

The problem that has to be solved for my idea to work is getting rid of all the duplication errors when counting products in R. My calc takes no account of these. I can see that they can be calculated in principle, but actually doing it is beyond me. For example, when counting the products of 7 in R we can deduct those that are joint products of 5 and thus have already been counted. For products of 7, 2 in every 210 numbers are joint products of 5. This sort of calculation can be done for all the relevant prime factors, but it's massively complex. I end up with fancy hierarchical chains of reciprocals that quickly get out of control. It can be done in theory, however, and that is the main thing.

The principle is based on this thought. Suppose p1 and p2 are large and (say) 100 numbers apart, and let us calculate (even if it takes a month) that the behaviour of the multiples of the relevant primes is such that there must be at least two twin primes in R. We then know that the density of products of primes below p2 is insufficient to prevent there being infinitely many twin primes. When we increase p1 by one step, on average R will be larger. Let's say the next prime is also 100 numbers away. R will now be considerably larger than the previous R. So when we increase p by one step, so that the old p2 now becomes the new p1, we already know that there are more than two twin primes in R - unless, that is, the new p1 (the old p2) produces enough products to prevent more twins from occurring. Let's say this prime is 10^8 + 1. In this case it produces two 'relevant' products in every 6(10^8 + 1) numbers. This is not very many. What I was trying to do was show that it can never produce enough products to eliminate all the twin primes in R, since this seems bound to be true whether or not we can calculate it. R grows ever larger with each increase in p1, and the increase may be arbitrarily large, while the products of this new prime become ever more sparse as R becomes larger.

Say there are 10 twin primes in R. We can just count them off a table of primes. Because the primes become more sparse as they become larger, a step increase in p1 will on average make R larger, often a lot larger, while the new p1 will be the only prime factor available to increase the density of products in R. This will produce 2 in every 6p1 numbers, less any joint products, of which there will be many. If there were 10 twin primes in the previous R, then (on average) there will be many more than 10 in the next R, but only one more prime factor. Do you see what I'm getting at? If the three consecutive primes we're using are 101, 201, 301 (let's say they're primes), then a step increase in p will increase R from 201^2 - 101^2 = 30,200 to 301^2 - 201^2 = 50,200, and likewise (on average) the quantity of twin primes in R. Meanwhile the density of products in R will increase by only 2 in every 6 x 201 numbers, i.e. 1/603, less any joint products of lower primes, which gives a grand total of just one new product. All the rest of the products of the new p1 have already been counted. This is all very vague, but there does seem to be a mechanism at work that might allow us to rule out the idea of a highest twin prime.
Or have I got this all wrong as well?
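To illustrate the deduction step just described (my own sketch, not from the thread): in a stretch of the number line, count the products of 7 at 6n +/- 1, then remove the ones that are also products of 5, since those were already counted when 5 was done. The estimate is 2 per 42 numbers for 7, minus 2 per 210 for the overlap.

```python
lo, hi = 10_000, 50_000                            # an arbitrary stretch of the line
at_6n = [x for x in range(lo, hi) if x % 6 in (1, 5)]

prods_7 = [x for x in at_6n if x % 7 == 0]         # all products of 7 at 6n +/- 1
new_prods_7 = [x for x in prods_7 if x % 5 != 0]   # drop those already counted under 5

length = hi - lo
estimate = 2 * length / 42 - 2 * length / 210
print(len(new_prods_7), round(estimate, 1))
# exact count and estimate agree up to a small end-of-range error
```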
  19. Sorry John, but I really do not understand what you are saying. Imatfaal - I'm not that bad at arithmetic. As you say, it's the particular circumstances that make the two calcs different. Give me a little time to think. There's something here that I don't understand. I've done this calculation many times and wouldn't have posted anything here if I hadn't. I'll just check that we're doing the same calculation. If we are then I might have to leave the forum and never come back.
  20. Sorry, but the definition for R is the range between the squares of consecutive primes. It always has been. It is the entire basis for the calculation. He used the correct range. I don't understand your objection. Clearly I have explained this very badly. There may be an arbitrarily large quantity of multiples of primes >6 in R. How could it be otherwise?

John - "I'm not sure what you mean here. If the lower limit is never infinity, then for arbitrarily large ranges, the possibility still exists that only a finite number of twin primes exist. Even if the lower limit in some range is Graham's number or something, that means there could possibly be Graham's number plus two twin primes, and no more."

The limit is a lower limit for twin primes. As p increases this limit (may) approach infinity and R (will) approach infinity. But there can never be an infinite quantity of twin primes in R. I think you are not seeing what I'm saying yet. R is a well-defined range, and the highest prime having a multiple in R can be defined. This allows us to calculate (albeit in a very sloppy way) the relationship between available locations for twin primes and the quantity of multiples available to fill those locations as p increases. This is simply true. What is not certain is whether the calculation can be made sufficiently accurate to allow us to predict where this relationship goes.
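A small check of the point about R (mine, just for illustration): for R between p1^2 and p2^2 with p1, p2 consecutive primes, every composite in R already has a prime factor no bigger than p1, which is why the primes up to p1 are the only ones whose products need counting.

```python
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                                   # n itself is prime

def only_small_factors_needed(p1, p2):
    """For consecutive primes p1 < p2, confirm every composite strictly
    between p1^2 and p2^2 has a prime factor <= p1."""
    for n in range(p1 * p1 + 1, p2 * p2):
        f = smallest_prime_factor(n)
        if f != n and f > p1:                  # a composite with no factor <= p1?
            return False
    return True

print(only_small_factors_needed(31, 37), only_small_factors_needed(101, 103))  # True True
```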
  21. Okay. No point in discussing this further if you can make up your mind on the basis of this kind of thinking.
  22. It's trying to use the damn things that's frustrating. I like the ideas but can't do the arithmetic. But your sums are spot on. This is the calculation. Almost. A problem with your version is that you divide by 3p instead of using 6p and doubling the result. It looks the same on paper but it's not in practice. My way gives a lower total of 19. So 19 products and 19 locations gives 0 as a lower limit for TPs. Before I say anything about why the result is zero, which is a massive underestimate, can I just check something with you? It will save a lot of time if the result is not what I expect. When I did the spreadsheet calcs they came out fine, but it's all a bit messy for very small primes.

Do you have a spreadsheet set up? It sounded like it. Could you do me a favour and do the calculation once (but using 6p as the divisor of R) for two much larger primes (not so large it becomes a nuisance)? Your result will be trustworthy. My calcs worked, but maybe I built in an error. I did them a long time ago. If the result is less than one then you will have demonstrated mathematically that I am an idiot. My approach would still be sound, but the calculation would be a failure.
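My reading of the kind of calculation being discussed here, as a Python sketch (the exact bookkeeping in the thread may differ, and the names are mine): for R between p1^2 and p2^2, take the 6n locations in R as potential twin-prime sites, and for each prime p from 5 up to p1 charge two products per 6p numbers (R divided by 6p, doubled). Locations minus products is then a crude lower limit, with no correction yet for joint products counted more than once.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def crude_lower_limit(p1, p2):
    lo, hi = p1 * p1, p2 * p2
    length = hi - lo
    locations = len(range(lo + (-lo) % 6, hi, 6))            # 6n sites in R
    products = sum(2 * length // (6 * p) for p in primes_up_to(p1) if p >= 5)
    return locations, products, locations - products

print(crude_lower_limit(5, 7))       # small primes: messy, as noted above
print(crude_lower_limit(101, 103))   # the estimate goes negative here, because the
                                     # joint products are being counted more than once
```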
  23. Where p1 = 31, 287 is not in R. The definition for R is p1^2 to p2^2. Not upset, but frustrated. It is such a simple idea, but somehow I cannot get it across. My fault, I'm sure. It was presented rigorously. But he found that the original calculation left open the theoretical possibility of a highest twin prime, even though the likelihood was approximately zero. How can a limit reach infinity? Here it just grows larger. As p grows larger the limit cannot fall or stay the same.
  24. Yes, but they've already been counted as the products of primes below sqrt(p2^2), i.e. below p2. 205 was counted as a product of 5. If the prime factor is larger than p2 then it cannot add any products in R that have not been counted already. I can count and predict. It's what I'm doing. Why is this not obvious? I have, but obviously not effectively.

Damn. Text box gone again. "I can show what you have claimed (that between the squares of consecutive primes exists at least one twin prime) is true for the first 1600 primes - took me about 5 minutes on excel. With some time I could make a program that would check higher and higher - and I have no doubt whatsoever that I would not find a counter-example. But counting and checking does not make a proof - a proof needs to show that for all possible."

I did not suggest that counting makes a proof. I don't think you understand what I'm saying yet, since all these objections miss the mark. You can count it for the first million primes, it makes no difference. The mechanism is invariable. No need to check R for lots of different values. It's true where p = 5 and true for ever more. Sorry, I'm getting tetchy. Maybe you are as well. I think we are having a communication breakdown but I'm not sure why. The first version of this I sent to a mathematician at Uni of Bristol (editor of a maths journal) who told me it was correct but unrigorous. No quibbles or doubts about it. This is virtually the same calculation but it should now be rigorous. It cannot suddenly have become a load of nonsense.
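The kind of spot check imatfaal describes, redone in Python rather than Excel (my own version, and still just counting, not a proof): for each pair of consecutive primes p1 < p2, look for at least one twin-prime pair between p1^2 and p2^2.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 200                                   # check the first 200 consecutive-prime pairs
ps = primes_up_to(3000)                   # more than enough small primes
prime_set = set(primes_up_to(ps[N] ** 2 + 2))

for p1, p2 in zip(ps[:N], ps[1:N + 1]):
    twins = [a for a in range(p1 * p1, p2 * p2)
             if a in prime_set and a + 2 in prime_set]
    assert twins, f"no twin prime between {p1}^2 and {p2}^2"
print("at least one twin-prime pair found in every range checked")
```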
  25. It is no use stating what the monk knew and didn't know. You don't know. How can you make statements about this when it's just guesswork? Your idea of 'all-encompassing' is odd. This is not a person in spacetime. I think this idea will make no sense to you until you see what is actually being proposed. As Overtone says, the idea can be reached in logic or experience, but it has to be one or the other.