sandokhan
Everything posted by sandokhan
-
You cannot dismiss the fact that Gauss' Easter formula disproves the Gregorian calendar. This is the crux of the matter. Nor can you dismiss the acceleration of the moon's elongation paradox. As for the wall of China, be kind and consider the alternatives. Are you going to call Garry Kasparov's article absurd? http://www.revisedhistory.org/view-garry-kasparov.htm

Fine. Explain this then.

"With the Easter formula derived by C.F. Gauss in 1800, Nosovsky calculated the Julian dates of all spring full moons from the first century AD up to his own time and compared them with the Easter dates obtained from the Easter Book. He reached a surprising conclusion: three of the four conditions imposed by the First Council of Nicaea were violated until 784, whereas Vlastar had noted that “all the restrictions except the last one have been kept firmly until now.” When proposing the year 325, Scaliger had no way of detecting this fault, because in the sixteenth century the full-moon calculations for the distant past couldn’t be performed with precision.

“The Easter Rules make the two following restrictions: it should not be celebrated together with the Judaists, and it can only be celebrated after the spring equinox. Two more had to be added later, namely: celebrate after the first full moon after the equinox, but not any day – it should be celebrated on the first Sunday after the equinox. All of these restrictions, except for the last one, are still valid (in the times of Matthew Vlastar – the XIV century – Auth.), although nowadays we often celebrate on the Sunday that comes later. Namely, we always count two days after the Lawful Easter (that is, the Passover, or the full moon – Auth.) and end up with the subsequent Sunday. This didn’t happen out of ignorance or lack of skill on the part of the Elders, but due to lunar motion.”

Let us emphasize that the quoted Collection of Rules Devised by Holy Fathers is a canonical mediaeval clerical volume, which gives it all the more authority, since we know that up until the XVII century the Orthodox Church was very meticulous about the immutability of canonical literature and kept the texts exactly the way they were; any alteration was a complicated and widely discussed issue that would not have passed unnoticed.

So, by approximately 1330 AD, when Vlastar wrote his account, the last condition of Easter was violated: if the first Sunday happened to be within two days after the full moon, the celebration of Easter was postponed until the next weekend. This change was necessary because of the difference between the real full moon and the one computed in the Easter Book. The error, of which Vlastar was aware, is twenty-four hours in 304 years. Therefore the Easter Book must have been written around AD 722 (722 = 1330 - 2 x 304). Had Vlastar known of the Easter Book’s 325 AD canonization, he would have noticed the three-day gap that had accumulated between the dates of the computed and the real full moon in more than a thousand years. So he either was unaware of the Easter Book or knew the correct date when it was written, which could not be near 325 AD.
G. Nosovsky: So, why does the astronomical context of the Paschalia contradict Scaliger’s dating (alleged 325 AD) of the Nicaean Council where the Paschalia was canonized? This contradiction can easily be seen from the roughest of calculations.

1) The difference between the Paschalian full moons and the real ones grows at the rate of one day in 300 years.
2) A two-day difference had accumulated by the time of Vlastar, which is roughly dated 1330 AD.
3) Ergo, the Paschalia was compiled somewhere around 730 AD, since 1330 – (300 x 2) = 730.

It is understood that the Paschalia could only be canonized by the Council sometime later. But this fails to correspond to Scaliger’s dating of its canonization as 325 AD in any way at all! Let us emphasize that Matthew Vlastar himself doesn’t see any contradiction here, since he is apparently unaware of the Nicaean Council’s dating as the alleged year 325 AD. A natural hypothesis: this traditional dating was introduced much later than Vlastar’s age. Most probably, it was first calculated in Scaliger’s time.

The Council that introduced the Paschalia – according to the modern tradition as well as the mediaeval one, the Nicaean Council – could not have taken place before 784 AD, since this was the first year when the calendar date for the Christian Easter stopped coinciding with the Passover full moon due to slow astronomical shifts of lunar phases. The last such coincidence occurred in 784 AD, and after that year the dates of Easter and Passover drifted apart forever. This means the Nicaean Council could not have possibly canonized the Paschalia in the IV century AD, when the calendar Easter Sunday would coincide with the Passover eight (!) times – in 316, 319, 323, 343, 347, 367, 374, and 394 AD – and would even precede it by two days five (!) times, which is directly forbidden by the fourth Easter rule – that is, in 306 and 326 (allegedly already a year after the Nicaean Council), as well as the years 346, 350, and 370.

Thus, if we’re to follow the consensual chronological version, we’ll have to consider the first Easter celebrations after the Nicaean Council to blatantly contradict three of the four rules that the Council decreed specifically for this feast! The rules allegedly become broken the very next year after the Council decrees them, yet start to be followed zealously and in full detail five centuries (!) after that. Let us note that J.J. Scaliger could not have noticed this obvious nonsense during his compilation of the consensual ancient chronology, since computing true full moon dates for the distant past had not been a solved problem in his epoch. The above-mentioned absurdity was noticed much later, when the state of astronomical science became satisfactory for said purpose, but it was too late already, since Scaliger’s version of chronology had already been canonized, rigidified, and baptized “scientific”, with all major corrections forbidden.

Now, the ecclesiastical vernal equinox was set on March 21st because the Church of Alexandria, whose staff were reputed to have astronomical expertise, reckoned that March 21st was the date of the equinox in 325 AD, the year of the First Council of Nicaea. The Council of Laodicea was a regional synod of approximately thirty clerics from Asia Minor that assembled about 363–364 AD in Laodicea, Phrygia Pacatiana, in the official chronology. The major concerns of the Council involved regulating the conduct of church members.
The Council expressed its decrees in the form of written rules or canons. However, the most pressing issue – the fact that the calendar Easter Sunday would coincide with the Passover eight (!) times (in 316, 319, 323, 343, 347, 367, 374, and 394 AD) and would even precede it by two days five (!) times, which is directly forbidden by the fourth Easter rule (in 306 and 326, allegedly already a year after the Nicaean Council, as well as the years 346, 350, and 370) – was NOT presented during this alleged Council of Laodicea.

We are told that the motivation for the Gregorian reform was that the Julian calendar assumes that the time between vernal equinoxes is 365.25 days, when in fact it is about 11 minutes less. The accumulated error between these values was about 10 days (starting from the Council of Nicaea) when the reform was made, resulting in the equinox occurring on March 11 and moving steadily earlier in the calendar; also, by the 16th century AD the winter solstice fell around December 11. But, in fact, as we see from the information presented in the preceding paragraphs, the Council of Nicaea could not have taken place any earlier than the year 876-877 AD, which means that in the year 1582 the winter solstice would have arrived on December 16, not at all on December 11.

Papal Bull, Gregory XIII, 1582: Therefore we took care not only that the vernal equinox returns on its former date, from which it has already deviated approximately ten days since the Nicene Council, and so that the fourteenth day of the Paschal moon is given its rightful place, from which it is now distant four days and more, but also that there is founded a methodical and rational system which ensures, in the future, that the equinox and the fourteenth day of the moon do not move from their appropriate positions.

Given the fact that in the year 1582 the winter solstice would have arrived on December 16, not at all on December 11, this discrepancy could not have been missed by T. Brahe, or G. Galilei, or J. Kepler. Newton agrees with the date of December 11, 1582 as well; moreover, Britain and the British Empire adopted the Gregorian calendar in 1752 (official chronology); again, more fiction at work: no European country could have possibly adopted the Gregorian calendar reformation in the period 1582-1800, given the absolute fact that the winter solstice must have fallen on December 16 in the year 1582 AD, and not at all on December 11 (official chronology).

EXPLICIT DATING GIVEN BY MATTHEW VLASTAR

It is indeed amazing that Matthew Vlastar’s Collection of Rules Devised by Holy Fathers – the book that every Paschalia researcher refers to – contains an explicit dating of the time the Easter Book was compiled. It is even more amazing that none of the numerous researchers of Vlastar’s text appear to have noticed it (?!), despite the fact that the date is given directly after the oft-quoted place of Vlastar’s book about the rules of calculating the Easter date. Moreover, all quoting stops abruptly immediately before the point where Vlastar gives this explicit date. What could possibly be the matter? Why don’t modern commentators find themselves capable of quoting the rest of Vlastar’s text? We are of the opinion that they attempt to conceal from the reader the fragments of ancient texts that explode the entire edifice of Scaliger’s chronology. We shall quote this part completely:

Matthew Vlastar: “There are four rules concerning the Easter.
The first two are the apostolic rules, and the other two are known from tradition. The first rule is that the Easter should be celebrated after the spring equinox. The second is that it should not be celebrated together with the Judeans. The third: not just after the equinox, but also after the first full moon following the equinox. And the fourth: not just after the full moon, but the first Sunday following the full moon… The current Paschalia was compiled and given to the church by our fathers in full faith that it does not contradict any of the quoted postulates. (This is the place the quoting usually stops, as we have already mentioned – Auth.) They created it the following way: 19 consecutive years were taken, starting with the year 6233 since Genesis (= 725 AD – Auth.) and up until the year 6251 (= 743 AD – Auth.), and the date of the first full moon after the spring equinox was looked up for each one of them. The Paschalia makes it obvious that when the Elders were doing it, the equinox fell on the 21st of March” ([518]).

Thus, the Circle for Moon – the foundation of the Paschalia – was devised according to the observations from the years 725-743 AD; hence, the Paschalia couldn’t possibly have been compiled, let alone canonized, before that.

Both Pompeii and Herculaneum were destroyed in the 18th century. There was NO ancient Rome at all. The Colosseum and the Parthenon were built much more recently. Read Kasparov's article.
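For anyone who wants to check the computus argument above numerically, here is a minimal Python sketch (my own, not Nosovsky's code) of Gauss's Easter algorithm in its Julian-calendar form (constants M = 15, N = 6), together with the rough dating arithmetic quoted above; the function name is mine.

import math

def julian_easter(year):
    # Gauss's algorithm for Easter in the Julian calendar (M = 15, N = 6)
    a = year % 19                        # position in the 19-year lunar (Metonic) cycle
    b = year % 4
    c = year % 7
    d = (19 * a + 15) % 30               # days from March 21 to the Paschal full moon
    e = (2 * b + 4 * c + 6 * d + 6) % 7  # offset to the following Sunday
    day = 22 + d + e                     # Easter Sunday, counted from March 1
    return ("March", day) if day <= 31 else ("April", day - 31)

print(julian_easter(1330))   # Easter Sunday in Vlastar's time, Julian calendar

# Nosovsky's rough dating argument, as plain arithmetic: a two-day computus error
# by c. 1330, accumulating at roughly one day per ~300 years, points back to
print(1330 - 2 * 304)        # 722 AD (using the 304-year figure quoted above)
print(1330 - 2 * 300)        # 730 AD (using the rounder 300-year figure)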
-
But you should. My formula was peer reviewed and published by Professor Yeh in the journal Optics Letters. It is flawless. Can you understand the connection? You are asserting heliocentrism. Yet Michelson and Gale (and every ring laser gyroscope) did not record the SAGNAC EFFECT at all, only the CORIOLIS EFFECT of the ether drift. A most direct proof that I am correct.

Sure. The calendar we are using is set in the heliocentric context. That is why you have to add an extra day every four years. In 1900, the geocentric year had 364 days, yet each of those days is a bit longer than the corresponding heliocentric day; this also accounts for the extra leap day. You are constantly avoiding the fact that the Council of Nicaea could not possibly have taken place in the year 325 AD.

Dr. Anatoly Fomenko: We have cross-checked archaeological, astronomical, dendro-chronological, paleo-graphical and radiocarbon methods of dating of ancient sources and artefacts. We found them ALL to be non-independent, non-exact, statistically implausible, contradictory and inevitably viciously circular, because they are based or calibrated on the same consensual chronology. Unbelievable as it may seem, there is not a single piece of firm written evidence or artefact that could be reliably and independently dated earlier than the XI century. Classical history is firmly based on copies made in the XV-XVII centuries of 'unfortunately lost' originals. It just happens that there is no valid irrefutable scientific proof that ALL ‘ancient’ artefacts are much older than 1000 years, contrary to the self-fulfilling radiocarbon dating obligingly rubber-stamped by radiocarbon labs to the prescriptions of the mainstream historians. How heartbreaking it is that the oldest ORIGINAL written documents that can be reliably, irrefutably and unambiguously dated belong only to the 11th century! All dirty and worn-out originals have somehow disappeared in the Very Dark Ages, as illiterate but tidy monks kept only brand new copies. Better yet, most of the very old original documents of the 11th-13th centuries tell very peculiar stories completely out of line with the consensual history.

Radiocarbon method: Very sorry about c14 radiocarbon dating methods; poor Nobel laureate Libby must be turning in his grave after the ‘calibration’ of his method (pity that!). By ‘calibration’ on a statistically non-significant number of wood samples from Egypt with an ARBITRARILY suggested alleged age of 3100 B.C., the Arizona university radiocarbon team simply smuggled the consensual chronology into the c14 method of dating, turning it into a sheer fallacy. The c14 radiocarbon dating procedure runs as follows: an archaeologist sends an artefact to a radiocarbon dating laboratory with his idea of the age of the object, to get a ‘scientific’ rubber-stamp. The laboratory gladly complies and makes the required radio dating, confirming the date suggested by the archaeologist. Everybody’s happy: the lab makes good money by performing an expensive test, the archaeologist by reaping the laurels for his earth-shattering discovery. The in-built low precision (because of sensitivity) of this method allows cooking scientific-looking results desired by the customer archaeologist. The general public doesn’t realize that it was duped again. Just try to submit to any c14 lab a sample of organic matter and ask them to date it. The lab will ask your idea of the age of the sample, then it fiddles with lots of knobs (‘fine-tuning’) and gives you the result just as you’ve ‘expected’.
With the c14 dating method being so mind-bogglingly precise, c14 labs absolutely decline to make 'black box' tests of any kind. Nah, they assert that because their method is SO very sensitive, they must have maximum information about the sample. This much-touted method often produces dating of objects of organic origin with an 'exactitude' (that is, errors) of up to plus or minus 1500 years; therefore it is too crude for dating historical events in the 3000-year timeframe!

History: Fiction or Science? volume I, chapter 1, sections 15 and 16:
http://books.google.ro/books?id=YcjFAV4WZ9MC&printsec=frontcover&dq=history+science+or+fiction&cd=2&redir_esc=y#v=onepage&q=history%20science%20or%20fiction&f=false

Isotopic dating: science or fiction?
https://web.archive.org/web/20080514235945/http://www.atenizo.org/evolution-c14-kar.htm

Thermochronology/geochemical analysis errors:
http://tasc-creationscience.org/other/plaisted/www.cs.unc.edu/_plaisted/ce/dating2.html
https://answersingenesis.org/geology/radiometric-dating/u-th-pb-dating-an-example-of-false-isochrons/
https://web.archive.org/web/20110808123827/http://www.gennet.org/facts/metro14.html
http://www.cs.unc.edu/~plaisted/ce/dating.html (superb documentation)
http://web.archive.org/web/20110301201543/http://www.ridgecrest.ca.us/~do_while/sage/v8i9f.htm
http://itotd.com/articles/349/carbon-dating/
http://evolutionfacts.com/Ev-V1/1evlch07a.htm
http://evolutionfacts.com/Ev-V1/1evlch07b.htm
http://evolutionfacts.com/Appendix/a07.htm (must read)
http://www.parentcompany.com/great_dinosaur_mistake/tgdm9.htm

Spectroscopy methods errors:
http://www.theflatearthsociety.org/forum/index.php/topic,58190.msg1489346.html#msg1489346
http://www.ldolphin.org/univ-age.html

Ice core dating errors:
http://www.detectingdesign.com/ancientice.html

Collapsing Tests of Time:
http://grazian-archive.com/quantavolution/vol_03/chaos_creation_03.htm

The methods described above cannot be used to date anything. The only accurate and direct method is this: comets as luminous bodies MUST have limited lives.

When passing close to the sun, comets emit tails. It is assumed that the material of the tail does not return to the comet's head but is dispersed in space; consequently, the comets as luminous bodies must have a limited life. If Halley's comet has pursued its present orbit since late pre-Cambrian times, it must "have grown and lost eight million tails, which seems improbable." If comets are wasted, their number in the solar system must permanently diminish, and no comet of short period could have preserved its tail since geological times. But as there are many luminous comets of short period, they must have been produced or acquired at some time when other members of the system, the planets and the satellites, were already in their places. (from Worlds in Collision)

The age of the Solar System must be less than the estimated upper age of comets.

From the work Saturnian Comets: The usual explanation for the Saturnian and Jovian families of comets is that they had originally traveled on extremely elongated or even parabolic orbits and, passing close to one of the large planets, were changed into short-period comets, traveling on ellipses—it is usual to say that they were “captured.” However, the Russian astronomer
K. Vsekhsviatsky of the Kiev Observatory, one of the leading authorities on comets, has brought strong arguments to show that the comets of the solar system are very youthful bodies—only a few thousand years old—and that they originated in explosions from the planets, especially from the major planets Saturn and Jupiter or their moons. By comparing the observed luminosity of the periodic comets on their subsequent returns, he found it failing and their masses rapidly diminishing by loss of matter to the space through which they travel; the head of the comet emits tails on each passage close to the sun and then dissipates the matter of the tails without recovery. Thus Vsekhsviatsky concluded that comets of short duration originated in the solar system, were not captured from outside of that system—a point to which the majority of astronomers still adhere—and that they came into existence by explosion from Jupiter and Saturn, and to a smaller extent by explosion from the smaller planets, like Venus and Mars.

http://articles.adsabs.harvard.edu//full/1962PASP...74..106V/0000107.000.html (1962PASP...74..106V)

K. Vsekhsviatsky was the leading expert in comet astrophysics, as his works clearly demonstrate. Two months after the discovery of the ring around Jupiter, the Soviet Union claimed joint credit for the discovery, contending that Vsekhsviatsky had predicted the ring’s existence as early as 1960 in a journal called Izvestia of the Armenian Academy of Sciences. The passage from the relevant paper is as follows: ‘The existence of active ejection processes in the Jupiter system, demonstrated by comet astronomy, gives grounds for assuming that Jupiter is encircled by comet and meteorite material in the form of a ring similar to the ring of Saturn.’

PAGE 107: Halley's comet, for example, could not exist as a comet for more than 120 revolutions.

120 x 75 = 9000 years
-
The Great Wall of China was constructed quite recently.

http://de.geschichte-chronologie.de/index.php?option=com_content&view=article&id=83:chronological-revolution-part-1&catid=2:2008-11-13-21-58-51&Itemid=90 (glorious Chinese history is a fake section)
http://breakfornews.com/forum/viewtopic.php?p=27892#27892 (not so ancient china 1)
http://breakfornews.com/forum/viewtopic.php?p=27945#27945 (not so ancient china 2)
http://breakfornews.com/forum/viewtopic.php?p=27981#27981 (not so ancient china 3)

Damodar Kosambi, India's greatest historian of the 20th century: "There is virtually nothing of what we know as historical literature in India... all we have is a vague oral tradition and an extremely limited number of documented data, which is of a much greater value to us than that obtained from legends and myths. This tradition gives us no opportunity of reconstructing the names of all the rulers. The meagre remnants that we do possess are so nebulous that no date preceding the Muslim period [before the VIII century A.D.] can be regarded as precise... the works of the court chroniclers didn't reach our time... all of this leads some rather earnest and eminent scientists to claim that India has no history of its own." "Written memorials of the Indus culture defy decipherment to this day... not a single finding can be associated with an actual person or historical episode. We don't even know the language that was spoken by the inhabitants of the Indus valley."

We are told further on that many vital issues concerning the "ancient" history of India are based on manuscripts found as late as the XX century. It turns out, for instance, that "the main source of knowledge in what concerns the governmental system of India and the policy of the state in the epoch of Maghadhi's ascension is the Arthashastra - the book... that had only been found in 1905, after many a century of utter oblivion". It turns out that this book is basically an Indian version of the famous mediaeval oeuvre of Machiavelli. However, in this case the "ancient Indian Arthashastra" couldn't have been written before the Renaissance. This could have happened in the XVII-XVIII century, or even the XIX.

Emperor Ashoka, considered to be India's greatest ruler, never existed:
https://madhesi.wordpress.com/2008/09/24/did-ashoka-exist/

The geocentric year has 364 days; each day, though, is a little longer: it accounts for the extra heliocentric day (the leap day every four years) quite nicely. What you have to deal with is the absolute fact that the Council of Nicaea could not possibly have taken place in the year 325 AD, as proven by the Gauss Easter formula. Then, the claim made by Gregory XIII re: the ten extra days is completely false. Moreover, I can prove that Michelson and Gale did not record the SAGNAC EFFECT at all in 1925: all they registered was the CORIOLIS EFFECT of the ether drift, a huge difference.
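The 364-day claim above can be put into numbers. Here is a short sketch of my own; the 365.2422-day figure for the conventional tropical year is my assumption, not something stated in the post.

# Rough check of the 364-day claim above. The 365.2422-day value for the
# conventional (heliocentric) tropical year is my assumption.
tropical_year_in_24h_days = 365.2422
geocentric_days_per_year = 364

day_length_hours = tropical_year_in_24h_days * 24 / geocentric_days_per_year
print(day_length_hours)              # ~24.08 h: each geocentric day is slightly longer
print((day_length_hours - 24) * 60)  # ~4.9 extra minutes per geocentric day
print(4 * (tropical_year_in_24h_days - 365))  # ~0.97 days: the four-year surplus that
                                              # the conventional calendar covers with a leap day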
-
I did not see this reply. Go ahead and explain how the light of the Sun was blocked by the curvature of the Earth, while at the same time the light of the explosion was seen instantaneously from London.
-
No. In the geocentric calendar you do not have 366 days at all every four years. That extra day is accounted for in the context of those 364 days (each day, though, is a little longer). Five synodic periods of Venus equal 2919.6 days, whereas eight years of 365 days equal 2920 days, and eight Julian years of 365.25 days equal 2922 days. In other words, in four years there is a difference of approximately one day between the Venus and the Julian calendars. Therefore, apparently in a strange way, we are following a Venusian calendar, unknowingly. This is one of the main reasons for the entire Gregorian calendar reform hoax (the other being, of course, to falsify the chronology of history, as we have seen earlier).

And there is much more to be accounted for in the heliocentric setting. For example, the axial precession of the Earth.

"Calculated precession rates over the last 100 years show increasing precession rates which produce a declining precession cycle period. The precession rate goes up each year. The Astronomical Almanac gives a rate of 50.2564 (arc seconds) for the year 1900. In that year, the top astronomer in America, Simon Newcomb, used a constant of .000222 as the amount the precession rate will increase per year. The actual constant increase since that time is closer to .000330 (about 50% higher than expected) and it is increasing exponentially (faster each year)."

As can be seen from the chart below, the precession rate (now 50.29 arc seconds per year) has been accelerating over the last 100 years. This means the calculated time required to complete one precession cycle has been falling. Note that the precession rate was under 50.255 arc seconds before 1900, when Simon Newcomb first began to keep accurate records (meaning a complete precession cycle would have taken about 25,790 years), but now, just 100 years later, the rate is 50.29 arc seconds per year and the computed time to complete one full cycle is down under 25,770 years. That is a decline of 20 years of periodicity in just 100 years of record keeping. Also, the trend is fairly consistent year over year and it is accelerating. If the local gravity theory of lunisolar precession were correct, and this trend were extrapolated back a few hundred thousand years, then precession would have been virtually non-existent even though the Sun and Moon exerted about the same gravitational influence as they do now. And if this trend were extrapolated forward a few million years, the Earth might be wobbling so severely it would retrograde a day for every day it spins, and essentially stop moving or go into reverse!

Following is a chart with points representing the actual annual calculated precession rates for the last 100-plus years. The early calculations are by Simon Newcomb and the later by Williams or the Astronomical Almanac. We have drawn a line in the middle of the dots to show the slope of the trend. If precession were the result of our Sun's motion around another object (causing a reorientation of the Earth), then according to Kepler's laws any trend line would reflect the signature of an elliptical orbit.

Figure 1. Current trends in precession. Source: 1900-1980 The American Ephemeris and Nautical Almanac; 1981-2002 The Astronomical Almanac. United States Naval Observatory.

However, in the lunisolar model (local gravity) the changing trend in precession rates was entirely unexpected.
The fact of the matter is that the gravity of the Sun and Moon has been very stable for millions of years [according to the official theory of astrophysics] and there should be no reason in the lunisolar model for this significant upward trend in the wobble rate. If anything, it might be expected to slightly “decrease” under lunisolar theory as the Moon moves a fraction of an inch farther from Earth each year and as the Sun burns up a small fraction of its mass each year. But frankly these amounts are so negligible relative to the mass and scale involved that the precession rate should be noticeably stable year after year – if these masses are indeed the cause of the wobble. Lunisolar theorists not only need to find new inputs to the precession formula for the sake of accuracy, they need to offset these slight diminishments in gravitational forces and come up with larger effects in the opposite direction. – W. Cruttenden

Dr. Anatoly Fomenko - Dean of the Faculty of Mathematics-Mechanics, Moscow State University, author of 200 scientific publications and 28 treatises on advanced mathematics. But Fomenko proposes only the new chronology of history. I am an adherent of the much more radical new chronology of history. Fomenko's papers were published in respected journals; please see the references above.

Well done. Dr. Gunnar Heinsohn has already demonstrated in his best-known work that the entire historical period of 2100 BC - 600 BC was invented:
https://web.archive.org/web/20110517042728/http://www.specialtyinterests.net/heinsohn.html

"Heinsohn has made a very important contribution to the revisionist debate by focussing attention on the evidence of stratigraphy outside Egypt. Dayton had uncovered many examples in museums around the world where near identical ancient artefacts of very similar styles and manufacturing techniques were given dates which varied sometimes by as much as 1000-1500 years. Heinsohn, from an extensive study of archaeological reports from most of the better known sites across Asia Minor, showed how these anachronisms had arisen. At site after site, archaeologists had artificially increased the age of the lower strata by inserting, without supporting evidence, 'occupation gaps' of many centuries. They did this in order to meet the expectations of excessive antiquity among historians, who had used Biblically derived dates for Abraham (c. 2100), initially seen as broadly contemporary with the great Assyrian king Hammurabi. Using this elongated time frame, great empires of the past such as the Sumerians, Akkadians and Old Babylonians were invented by late 19th C and early 20th C scholars to fill the historical voids. The ancient Greek and Roman historians, not surprisingly, knew nothing of these ancient peoples. Sumerian, said Heinsohn, 'is the language of the well known Kassite/Chaldeans, whose literacy deserves its fame'. He showed that the Bronze Age started in China and Mesoamerica some 1500 years later than in the Near East and proposed this gap be largely closed by lowering the ages of the Mediterranean civilisations. He cited the Indus Valley where the early period civilisations, dated from Mesopotamian seals to c. 2400 BC, sit right underneath the Buddhist strata of 7-6C. Seals from Mesopotamia are found in the Indus valley and in Mesopotamia there are seals from the Indus Valley. So the excavators have to say they have an occupation gap of some 1700 years. Thus some sites only about 30 km apart have chronologies some 1500 years apart.
But in the same strata, supposedly 1500 years apart, they frequently find the same pottery. C&CR had insufficient space to provide a full forum for Heinsohn's work, but a volume entitled Ghost Empires of the Past was published in C&CR format in 1988, thanks to help from SIS stalwarts Birgit Liesching and Derek Shelley-Pearce. In this, Heinsohn set out many chronological 'problems' and 'riddles', and argued persuasively for equating, among others, the Mittani with the Medes and the Empire Hittites with the Late Chaldeans. His excellent paper on the archaeology of Hazor (C&CR 1996:1) revealed some important anachronisms. For example, two cuneiform tablets written in Old-Babylonian Akkadian and two more written in the Akkadian of the Amarna era were found in the upper layers of the site. Heinsohn asks 'How did tablets from the early second millennium end up in a stratum reaching its peak in the period of the Persian Empire (550-330 BC)?'. The tablets were, of course, immediately labelled 'heirlooms' by their finders. But, as Heinsohn pointed out, it seems strange that the later Hazoreans kept tablets for over 1000 years as heirlooms from the MBA or LBA, yet were apparently incapable of producing any texts of their own. Also, a clay jar inscribed in 23C Old-Akkadian was found in the Hyksos layer c17C. Yes, you've guessed - this was explained as yet another boring old 'heirloom'. Heinsohn makes a plea to archaeologists to 'set textbooks aside and allow oneself the liberty of following reason and hard stratigraphical evidence'. The textbook schemes 'separate by enormous time spans what is found in parallel stratigraphical locations, exhibiting very similar material cultures.' Unfortunately for archaeologists, the writers of the textbooks are often the 'Guardians of the Dogma' who control the funding for archaeological research. As a result, an archaeologist brave enough to confront conventional thinking may quickly find himself both professionally discredited and out of a job. Heinsohn has presented many well-researched papers exposing stratigraphical problems, and suggesting much lower chronologies for Near Eastern civilisations. His stratigraphy and stylistic-based chronologies and, more recently, his explanation for the 'lost' Persian layer throughout the Persian Empire have generated much debate and some unanswered controversy among revisionists."

A photograph with an exposure time of 20 seconds taken at 10.5 p.m., July 1, 1908 by George Embrey of Gloucester:
http://www.phenomena.org.uk/features/page88/page88.html

This is what we are talking about: the light from the explosion was seen over a distance of 5,200 km, while at the same time the light from the Sun could not be observed in that interval of time.

Now, let us see how the extended arctangent series was used for the Gizeh pyramid. Surely this will attract your attention.

TAN 51.8554° = TWO SACRED CUBITS

Reference #1: http://davidpratt.info/pyramid.htm - For example, the angle of slope of the Pyramid’s outer casing was 51.85 degrees.
Reference #2: The Pyramid Age, E.J. Sweeney,
Chapter 1, page 4 - This ratio provides a slope of 51.85 degrees (calculated).
Reference #3: http://stochasticprojectmanagement.com/?p=105 - ratio of height to width: 1.571 (one half of pi); slope: 51.85 degrees.
Reference #4: http://www.numberscience.me.uk/Giza.html - The slant angle of the face of the pyramid approximates to 51.85 degrees.

Nineteenth century archaeologists, and even modern researchers into the Gizeh pyramid phenomenon, have no idea what to look for, having succumbed to the official propaganda which tells us that the ancient Greeks introduced the π symbol/number.

The pi ratio in the pyramid is derived from the ratio of the pyramid baseline divided by the height. The average baseline is 9,068.8 inches. Divide this by the height (5776 +- 7 inches) and you get 1.5701. This value times two is 3.1402. A better approximation of pi is obtained using the angle of the slope of the faces of the pyramid. The angle for the north slope according to Petrie is 51 deg. 50 min. 40 sec. +- 1 min. 5 sec. The same ratios in a pyramid with this angle yield a value of 3.1427 +- 0.002. The pi value in the pyramid is an interesting feature, but the facts show that the value that can be found is not any more accurate than the value of 22/7 for pi (or 11/14 for pi/4) that is traditionally attributed to Archimedes. It is not at all clear that the Egyptians intended this pi relationship to be a design feature per se.

The builders of the Gizeh pyramid could not care less about π: the entire edifice is built according to the sacred cubit figure, which is the value of 2/π.

3.1427/2 = 1.57135
1/1.57135 = 0.636395, one of the exact values of the sacred cubit

The sacred cubit is designated in the form of a horseshoe projection, known as the "Boss", on the face of the Granite Leaf in the Ante-Chamber of the Pyramid. By application of this unit of measurement it was discovered to be subdivided into 25 equal parts, known now as Pyramid inches.

ONE SACRED CUBIT = 0.6356621 meters

Those who are seeking the ultimate proof that the builders of the Gizeh Pyramid knew advanced calculus need look no further than the following demonstration, which I discovered two years ago. As we have seen, the angle of the slope of the Pyramid’s outer casing is 51.85 degrees. However, in order to reach/know this value, the architects of the Gizeh Pyramid must have had at their disposal the extended arctangent series:

TAN 51.8554 DEGREES = TWO SACRED CUBITS = 1.27330478216 = 0.636652 x 2

In order to reach the value of 51.8554 degrees, the architects MUST have used the extended arctangent series to achieve the final result.

136.12 = actual height of the Gizeh Pyramid (without the masonry base)

The other angle of the triangle, 38.145 degrees, is also closely related to the sacred cubit:

38.13 = 60 sacred cubits

And 51.85/38.1 = 1.361 - therefore, all these measurements/dimensions must have been known well ahead of time to the architects of the Gizeh Pyramid; but in order to have the actual angle values, they needed to calculate the arctangent of two sacred cubits.
Further proof that the values of 51.8554 and 38.1446 are related to the sacred cubit:

51.8554/14.134725 = 11/3
1400/11 = 127.27272727
127.272727 = 63.63636363 x 2
51.8554 x 27 = 1400
51.8554 x 1.618034 = 83.904 (1.618034 = PHI)
83.904 x 0.6366 = 53.413
53.413 x 0.2548 = 13.61
0.02544 = one sacred inch (0.636/25)
136.1 = height of the Gizeh Pyramid without the masonry base

Just a "very good approximation" won't do it. One needs the correct value to the fifth decimal, something that can be achieved ONLY by using advanced calculus. There is no other way to calculate the inverse tangent function (without using a pocket calculator/computer) other than resorting to power series, that is, utilizing calculus. Moreover, one would need a clear understanding of the concept of the radian measure. The architects of the Giza Pyramid had these choices at their disposal in order to solve the following equation:

TAN X = 1.27330478216 = 0.636652 x 2

1. Maclaurin series in conjunction with the arctan reciprocal formula (equation #3)

51.8554° = 0.907045 radians
1/1.27330478216 = 0.78535

Substituting the value of 0.78535 in the Maclaurin arctan series and solving the reciprocal arctan equation, up to the O(x^11) term, we get 0.905045, which corresponds to a 51.983° value. Therefore, the builders of the Pyramid must have had at their disposal the notion of the derivative (either the Newton-Leibniz or the Madhava definitions) in order to obtain the arctan Maclaurin series, not to mention the reciprocal arctan equation; even in that case, they had to be able to compute powers of certain numbers, going perhaps all the way to the O(x^17) term (in the Maclaurin series) or even beyond, to obtain a meaningful accuracy.

2. Extended arctangent series

This is a result from advanced calculus.

3. Gauss-Pfaff-Borchardt-Carlson iterative formula

http://www.ams.org/journals/mcom/1972-26-118/S0025-5718-1972-0307438-2/S0025-5718-1972-0307438-2.pdf

This formula necessitates the use of the concept of derivatives for its mathematical proof.

https://books.google.ro/books?id=cGnSMGSE5Y4C&pg=PR20&lpg=PR20&dq=numerical+methods+that+work+forman+acton&source=bl&ots=_TWAL76eh8&sig=UoUEc2xjUGxLP0awbJv64HXJG14&hl=ro&sa=X&ved=0ahUKEwjCsci5h4_QAhUJaRQKHcR6CkoQ6AEIXTAH#v=onepage&q=numerical%20methods%20that%20work%20forman%20acton&f=false (pg 6-9)

Other variants of this formula:
http://files.ele-math.com/articles/jmi-09-73.pdf

A more advanced look at this approach:
https://www.math.ust.hk/~machiang/education/enhancement/arithmetic_geometric.pdf

4. My formula

ARCTAN v = 2^n x (2 - (2 + (2 + (2 + 2(1/(1 + v^2))^(1/2))^(1/2))^(1/2) ... )^(1/2))^(1/2), with n+1 nested radicals to be evaluated.
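For completeness, here is a short Python sketch of my own that evaluates the nested-radical formula quoted at point 4, using the identity 2·cos(arctan v) = 2/sqrt(1 + v^2); the function name and the choice of n are mine, and math.atan is used only as a cross-check.

import math

def arctan_nested_radicals(v, n=10):
    # arctan v ~= 2^n * sqrt(2 - sqrt(2 + sqrt(2 + ... + sqrt(2 + 2/sqrt(1 + v^2))))),
    # with n + 1 nested radicals of the form sqrt(2 +/- ...)
    a = 2.0 / math.sqrt(1.0 + v * v)   # innermost term: 2*cos(arctan v)
    for _ in range(n):                 # each step halves the angle: a becomes 2*cos(theta/2^k)
        a = math.sqrt(2.0 + a)
    # 2 - a equals 4*sin^2(theta/2^(n+1)), so the outer radical recovers the angle;
    # in double precision, pushing n much beyond ~15 loses accuracy through cancellation here
    return (2.0 ** n) * math.sqrt(2.0 - a)

two_sacred_cubits = 1.27330478216
theta = arctan_nested_radicals(two_sacred_cubits)
print(math.degrees(theta))                         # ~51.8554 degrees
print(math.degrees(math.atan(two_sacred_cubits)))  # cross-check with the library arctangent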
-
Numismatic dating problems:
https://books.google.ro/books?id=YcjFAV4WZ9MC&printsec=frontcover&dq=fomenko+history&hl=en&sa=X&ved=0ahUKEwixoYyC18XhAhXGpIsKHVvIC4QQ6AEIMzAC#v=onepage&q=fomenko history&f=false (pages 90 - 93)

In the geocentric setting, the year has 364 days; however, the duration of a single day is a bit longer. You are describing the year 1900 based on the conventional calendar of history. However, you have already seen that the claim made by Gregory XIII (10 extra days) is completely false.

Fine. Here are more proofs.

D'' PARAMETER: MOON'S ELONGATION PARADOX

The Moon's Acceleration

"Understanding the moon's orbit around Earth is a difficult mathematical problem. Isaac Newton was the first to consider it, and it took more than two centuries until the American mathematician George William Hill found a suitable framework in which to address this question. The concern is with the acceleration, D'', of the moon's elongation, which is the angle between the moon and the sun as viewed from Earth. This acceleration D'' is computable from observations, and its past behavior can be determined from records of eclipses. Its values vary between -18 and +2 seconds of arc per century squared. Also, D'' is slightly above zero and almost constant from about 700 BC to AD 500, but it drops significantly for the next five centuries, to settle at around -18 after AD 1000. Unfortunately this variation cannot be explained from gravitation, which requires the graph to be a horizontal line. Among the other experts in celestial mechanics who attacked this problem was Robert Newton from Johns Hopkins University. In 1979, he published the first volume of a book that considered the issue by looking at historical solar eclipses. Five years later, he came up with a second volume, which approached the problem from the point of view of lunar observations. His conclusion was that the behavior of D'' could be explained only by factoring in some unknown forces. Newton's results can be interpreted similarly: if we exclude the possibility of mysterious forces, his graph puts traditional ancient and medieval chronology in doubt."

https://web.archive.org/web/20120323153614/http://www.pereplet.ru/gorm/fomenko/dsec.htm

It is important for some computational astronomical problems to know the behaviour of D'' -- the second derivative of the Moon's elongation -- as a function of time, over a rather long segment of the time line. This problem, in particular, was talked about during the discussion organized in 1972 by the London Royal Society and the British Academy of Sciences. The scheme of the calculation of D'' is as follows: we are to fix the totality of ancient observations of eclipses, then calculate, on the basis of the modern theory, when these observations were made, and then compare the results of the calculations with the observed parameters to evaluate the Moon's acceleration.

Newton: "The most striking feature of Figure 1 is the rapid decline in D'' from about 700 to about 1300 ... . This decline means (Newton, 1972b) that there was a 'square wave' in the osculating value of D''... . Such changes in D'', and such values, unexplainable by present geophysical theories ... , show that D'' has had surprisingly large values and that it has undergone large and sudden changes within the past 2000 yrs".

D'' parameter, new chronology of history:

Dr. Robert Newton, Two Uses of Ancient Astronomy:
https://web.archive.org/web/20120531060430/http://www.pereplet.ru/gorm/atext/newton2.htm
Phil. Trans. R. Soc. Lond. A.
276, 99-110 (1974)

Dr. Robert Newton, Astronomical Evidence Concerning Non-Gravitational Forces in the Earth-Moon System:
https://web.archive.org/web/20120531054411/http://www.pereplet.ru/gorm/atext/newton1.htm
Astrophysics and Space Science 16 (1972) 179-200

Each and every astronomical recording supposedly made in the period 700 BC - 1000 AD is proven to be false. In the new radical chronology of history, each and every astronomical recording supposedly made in the period 1000 AD - 1750 AD is also proven to be false.

When was Ptolemy's Star Catalogue in 'Almagest' Compiled in Reality? Statistical Analysis:
https://web.archive.org/web/20131111204106/http://www.hbar.phys.msu.ru/gorm/fomenko/fomenko3.pdf
http://www.chronologia.org/en/es_analysis2/index.html

Appendix 2. When Was Ptolemy's Star Catalogue Really Compiled? Variable Configurations of the Stars and the Astronomical Dating of the Almagest Star Catalogue:
pages 346 - 375

The Dating of Ptolemy's Almagest Based on the Coverings of the Stars and on Lunar Eclipses:
https://web.archive.org/web/20131111203642/http://www.hbar.phys.msu.ru/gorm/fomenko/fomenko4.pdf
http://www.chronologia.org/en/es_analysis2/index.html
pages 376 - 381
https://web.archive.org/web/20131111203642/http://www.hbar.phys.msu.ru/gorm/fomenko/fomenko4.pdf (section 3: The Dating of the Lunar Eclipses and Appendix 2: The Table of the Almagest's Lunar Eclipses)
http://www.chronologia.org/en/es_analysis2/index.html (pages 382 - 389)
-
Again, you are dealing with the conventional calendar of 365.24219 days + one leap day added every four years. In the geocentrical context, you have to deal with the new radical theory of chronology, exactly what I have been trying to point out to you so far. Gauss' Easter formula is very scientific. Yes, I knew you'd say this, but I could not post all of that material over here; you seemed to be interested to find out more details. But they are in conflict over a distance of 5,200 km. http://www.nuforc.org/GNTungus.html“TO THE EDITOR OF THE TIMES.”“Sir,--I should be interested in hearing whether others of your readers observed the strange light in the sky which was seen here last night by my sister and myself. I do not know when it first appeared; we saw it between 12 o’clock (midnight) and 12:15 a.m. It was in the northeast and of a bright flame-colour like the light of sunrise or sunset. The sky, for some distance above the light, which appeared to be on the horizon, was blue as in the daytime, with bands of light cloud of a pinkish colour floating across it at intervals. Only the brightest stars could be seen in any part of the sky, though it was an almost cloudless night. It was possible to read large print indoors, and the hands of the clock in my room were quite distinct. An hour later, at about 1:30 a.m., the room was quite light, as if it had been day; the light in the sky was then more dispersed and was a fainter yellow. The whole effect was that of a night in Norway at about this time of year. I am in the habit of watching the sky, and have noticed the amount of light indoors at different hours of the night several times in the last fortnight. I have never at any time seen anything the least like this in England, and it would be interesting if any one would explain the cause of so unusual a sight.Yours faithfully, Katharine Stephen. Godmanchester, Huntingdon, July 1.”Let us remember that the first newspaper report about the explosion itself ONLY appeared on July 2, 1908 in the Sibir periodical.A report from Berlin in the New York Times of July 3 stated: 'Remarkable lights were observed in the northern heavens on Tuesday and Wednesday nights, the bright diffused white and yellow illumination continuing through the night until it disappeared at dawn...'On July 5, (1908) a New York Times story from Britain was entitled: 'Like Dawn at Midnight.' '...The northern sky at midnight became light blue, as if the dawn were breaking...people believed that a big fire was raging in the north of London...shortly after midnight, it was possible to read large print indoors...it would be interesting if anyone would explain the cause of so unusual a sight.'The letter sent by Mrs. Katharine Stephen is absolutely genuine as it includes details NOBODY else knew at the time: not only the precise timing of the explosion itself (7:15 - 7:17 local time, 0:15 - 0:17 London time), BUT ALSO THE DURATION OF THE TRAJECTORY OF THE OBJECT, right before the explosion, a fact uncovered decades later only by the painstaking research of Dr. Felix Zigel, an aerodynamics professor at the Moscow Institute of Aviation:The same opinion was reached by Felix Zigel, who as an aerodynamics professor at the Moscow Institute of Aviation has been involved in the training of many Soviet cosmonauts. 
His latest study of all the eyewitness and physical data convinced him that "before the blast the Tunguska body described in the atmosphere a tremendous arc of about 375 miles in extent (in azimuth)" - that is, it "carried out a maneuver." No natural object is capable of such a feat. Manotskov decided that the 1908 object, on the other hand, had a far slower entry speed and that, nearing the earth, it reduced its speed to "0.7 kilometers per second, or 2,400 kilometers per hour" - less than half a mile per second.

375 miles = 600 km, or 15 minutes of flight time, given the speed exemplified above.

"I do not know when it first appeared; we saw it between 12 o’clock (midnight) and 12:15 a.m."

LeMaire maintains the "accident-explanation is untenable" because "the flaming object was being expertly navigated" using Lake Baikal as a reference point. Indeed, Lake Baikal is an ideal aerial navigation reference point, being 400 miles long and about 35 miles wide. LeMaire's description of the course of the Tunguska object lends credence to the thought of expert navigation: the body approached from the south, but when about 140 miles from the explosion point, while over Kezhma, it abruptly changed course to the east. Two hundred and fifty miles later, while above Preobrazhenka, it reversed its heading toward the west. It exploded above the taiga at 60º55' N, 101º57' E (LeMaire 1980).

If the light from the Sun could not reach London due to curvature and/or any light reflection phenomena, then certainly NO LIGHT from an explosion which occurred at some 7 km altitude in the atmosphere could have been seen at all, at the same time, on a spherical earth.

A few formulas of interest (evaluated numerically in the sketch below).

CURVATURE
C = R(1 - cos[s/(2R)]), with the angle s/(2R) measured in radians
R = 6378.164 km
s = distance

VISUAL OBSTACLE
BD = (R + h)/{[2Rh + h^2]^(1/2) (sin(s/R))/R + cos(s/R)} - R
BD = visual obstacle
h = altitude of observer

It proves that those cities were recognized to be living, thriving places in the years listed on the maps: 1570, 1633, 1725. You should remember that both Pompeii and Herculaneum were supposed to be buried under many meters of ash some 1500 years before Ortelius.
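Returning to the two curvature formulas quoted above, here is a small Python sketch of my own that evaluates them with the post's radius value, for the ~7 km altitude and ~5,200 km distance discussed earlier; the function names are mine.

import math

R = 6378.164  # Earth radius in km, the value used in the post

def bulge(s):
    # midpoint curvature C = R*(1 - cos(s/(2R))) for an arc of length s, in km
    return R * (1.0 - math.cos(s / (2.0 * R)))

def visual_obstacle(h, s):
    # hidden height BD for an observer at altitude h (km) looking toward a point
    # at arc distance s (km), using the formula quoted above
    theta = s / R
    return (R + h) / (math.sqrt(2.0 * R * h + h * h) * math.sin(theta) / R
                      + math.cos(theta)) - R

print(bulge(5200))              # ~520 km of midpoint bulge over a 5,200 km arc
print(visual_obstacle(7, 5200)) # ~2,500 km geometrically hidden for a 7 km high observer at that distance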
-
But it is changed within the context of geocentrism, which is linked to the new radical chronology of history. Let me explain. The new chronology of history: history does not exist beyond 1000 AD; everything was forged/falsified after 1500 AD. The new radical chronology of history: history does not exist beyond 1660 AD; everything (including the Bible) was forged/falsified in the period 1780-1800 AD. Christ was crucified/resurrected much more recently than we have been led to believe, while the crucifixion itself took place in Constantinople, and not Jerusalem (plenty of proofs for this one too).

You are listing a historical "fact" which is valid only within the heliocentric setting. What you have to deal with is the proof, using Gauss' Easter formula, that the council of Nicaea could not possibly have taken place before the year 876-877 AD. Moreover, you have to deal with the absolute proof that Dionysius Exiguus' biography was forged during the Renaissance.

The maps I have provided show Pompeii and Herculaneum as cities listed on contemporary maps (1570, 1633, 1725). Now, imagine this scene: Ortelius, pondering whether to include Pompeii on his Neapoletanum Regnum map as a practical joke, while his assistants quietly point out to him that Pompeii was buried by the Vesuvius eruption some 1500 years earlier. But Ortelius, nonetheless, proceeds to draw Pompeii on the map. Imagine the scandal throughout southern Italy, the uproar from the other map makers of the day, not to mention the loss of potential clients who would not have been amused at all to find more practical jokes of the same sort on the maps they ordered from Ortelius.

Not the kind of glass featured at St. Gobain, as an example. The perfectly flat glass from Herculaneum necessitated the use of technology which was available only in the 17th century.

http://www.ilya.it/chrono/pages/pompejigallerydt.htm
http://www.ilya.it/chrono/images/gallery/pom13.jpg
http://www.ilya.it/chrono/images/gallery/pom14.jpg

In the window of the museum can be seen a lot of glass products, including bottles, flasks for perfumes, and multicolored glass of different shades. Particularly noteworthy are the absolutely transparent thin glass vases. The same glass vases are shown on Pompeian frescoes. Then, at the midpoint of the 15th century, Angelo Barovier produced what was to become known as vetro cristallo or cristallo veneziano. This was a pure, bright, completely transparent crystal glass. An early example of Venetian cristallo glass dating from 1580.

Figure 11: Italian or Pompeian Renaissance: Titian's reclining courtesan (below) and the reclining maenad from Pompeii (above). Illustration of the maenad from: Pietro Giovanni Guzzo, Pompei, Ercolano, Stabiae, Oplontis; Napoli 2003, 75.

The well-known painting by Titian copied perfectly at Pompeii... As Titian did not have at his disposal a space-time machine to take him back to the year 79 AD, we can only infer that the authors of both paintings/frescoes were contemporaries, perhaps separated only by a few decades in time.
On the way from Naples to the south to Torre Annunziata, 15 kilometers from Naples, one can see a monument on the façade of the Villa Pharao Mennela, an epitaph for the victims of the eruption of Vesuvius in 1631, on two stone slabs with the text in the Latin language , On one of these are the towns of Pompeii and Herculaneum, as well as Resina and Portici, in the list of destroyed cities.AT O VIII ET LX POST ANNO XVII CALEND (AS) IANUARII PHILIPPO IV REGE FUMO, FLAMMIS, BOATU CONCUSSO CINERE ERUPTIOHE HORRIFICUS, FERUS SI UNQUAM VESUVIUS NEC NOMEN NEC FASCES TANTI VIRI EXTIMUIT QUIPPE, EXARDESCENTE CAVIS SPECUBUS IGN, IGNITUS, FURENS, IRRUGIA, EXITUM ELUCTANS. COERCITUSAER, IACULATUS TRANS HELLESPONTUMDISIECTO VIOLENTER MONTISCULMINE, IMMANI ERUPIT HIATU POSTRIDGE, CINEREM PONE TRAFFIC AD EXPLENDAM VICEM PELAGUS IMMITE PELAGUS FLUVIOS SULPHUREOS FLAMMATUM BITUMEN, FOETAS ALUMINE CAUTES, INFORME CUIUSQUE METALLI RUDUS, MIXTUM AQUARUM VOIURINIBUS IGNEM FEBRVEM (QUE) UNDANTE FUMO CINEREM SESEQ (UE) FUNESTAMQ (UE) COLLLUVIEM IUGO MONTIS EXONERANS POMPEIOS HERCULANEUM OCTAVIANUM, PERSTRICTIS REАTINA ET PORTICU, SILVASQ (UE), VILLASQ (UE), (UE) MOMENTO STRAVIT, USSIT, DIRUIT LUCTUOSAM PRAEA SE PRAEDAM AGENS VASTUMQ (UE) TRIUNPHUM. PERIERAT HOC QUOQ (UE) MARMORALTE SEPQLUM CONSULTISSIMI NO MONUMENTUM PROREGIS. NE PEREAT EMMAHUEZL FONSECA ET SUNICA COM (ES), MONT IS RE (GIS) PROR (EX), QUA ANIMI MAGNITUDINE PUBLICA CALAMITATI EA PRIVATAE CONSULUIT EXTRACTUM FUNDITUS GENTIS SUI LAPIDEM. COELO RESTITUIT, VIAM RESTAURAVIT, FUMANTE ADHUC ET INDIGNANTE VESEVO. AN (NO) SAL (UTIS) MDCXXXV, PRAEFECTO VIARUM ANTONIO SUARES MESSIA MARCHI (ONE) VICI. http://www.ilya.it/chrono/images/gallery/pom01.jpg http://www.ilya.it/chrono/images/gallery/pom35.jpg http://www.ilya.it/chrono/images/gallery/pom36.jpg Pompeii Grafitti, gladiators with helmets which feature mobile visors, a XVth century invention (official chronology of history): Would you care to explain to your readers how objects A and B exert A PRESSURE (PUSHING FORCE) on the obstacle? By what mechanism? You want to use gravitons? Newton has other ideas: 4. When two bodies moving towards one another come near together, I suppose the aether between them to grow rarer than before, and the spaces of its graduated rarity to extend further from the superficies of the bodies towards one another; and this, by reason that the aether cannot move and play up and down so freely in the strait passage between the bodies, as it could before they came so near together.5. Now, from the fourth supposition it follows, that when two bodies approaching one another come so near together as to make the aether between them begin to rarefy, they will begin to have a reluctance from being brought nearer together, and an endeavour to recede from one another; which reluctance and endeavour will increase as they come nearer together, because thereby they cause the interjacent aether to rarefy more and more. But at length, when they come so near together that the excess of pressure of the external aether which surrounds the bodies, above that of the rarefied aether, which is between them, is so great as to overcome the reluctance which the bodies have from being brought together; then will that excess of pressure drive them with violence together, and make them adhere strongly to one another, as was said in the second supposition.Two bodies are pulled to each other by an external pressure. Perhaps I will. 
But then you are going to have to deal with the double forces of attractive gravitation paradox, with the Allais effect, with the precise explanation of how the telluric ether forces affect an object gravitationally, and with the fact that physicists cannot explain why a simple bathroom scale does not record/register the weight of the corresponding column of air (some 2,000 pounds). Then I would bring in the definite proofs in favor of ether drift: the Galaev experiments, the new formula for the Sagnac effect, the Podkletnov effect and much more, all the while disproving the attractive mechanism.

You can find the entire theory of the paleomagnetic dating of the artifacts from Pompeii/Herculaneum here:

https://www.theflatearthsociety.org/forum/index.php?topic=30499.msg1683846#msg1683846
https://www.theflatearthsociety.org/forum/index.php?topic=30499.msg1685184#msg1685184
https://www.theflatearthsociety.org/forum/index.php?topic=30499.msg1690028#msg1690028

Too much material to post here.

As for the word "flat", please remember that the explosion at Tunguska (7:10 am, June 30, 1908) was seen instantaneously from London, Stockholm, Antwerp, and Berlin, over a supposed curvature corresponding to a distance of some 5,200 km, while at the same time we are told that the light from the Sun could not reach London at that point in time due to the curvature of the Earth.
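For anyone who wants to check where that 2,000-pound figure comes from, it is roughly standard atmospheric pressure multiplied by the area of a scale platform. A minimal sketch in Python, assuming a 0.3 m x 0.3 m platform (the platform size is my assumption, for illustration only):

# Rough arithmetic behind the ~2,000 lb column-of-air figure quoted above.
# The platform size is an assumption made here for illustration, not a measured value.
P_ATM = 101325.0            # Pa, standard atmospheric pressure
AREA = 0.3 * 0.3            # m^2, assumed bathroom-scale platform (~1 square foot)
force_N = P_ATM * AREA      # force of the overlying air column on that area
print(force_N / 4.448)      # ~2,050 pounds-force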
-
The purpose of the thread is as follows: to prove, using Gauss' celebrated Easter formula, that the council of Nicaea could not possibly have taken place before the year 876-877 AD. This much was proven. You, then, brought up the leap year argument, while I pointed out that this sort of explanation makes sense only within the heliocentrical context. However, there are other explanations available, such as the geocentrical setting. Newton states clearly what he means. “In attractions, I briefly demonstrate the thing after this manner. Suppose an obstacle is interposed to hinder the meeting of any two bodies A, B, attracting one the other: then if either body, as A, is more attracted towards the other body B, than that other body B is towards the first body A, the obstacle will be more strongly urged by the pressure of the body A than by the pressure of the body B, and therefore will not remain in equilibrium: but the stronger pressure will prevail, and will make the system of the two bodies, together with the obstacle, to move directly towards the parts on which B lies; and in free spaces, to go forwards in infinitum with a motion continually accelerated; which is absurd and contrary to the first law.”the obstacle will be more strongly urged by the pressure of the body A He even offers more details in another famous passage, the letter to R. Boyle: 4. When two bodies moving towards one another come near together, I suppose the aether between them to grow rarer than before, and the spaces of its graduated rarity to extend further from the superficies of the bodies towards one another; and this, by reason that the aether cannot move and play up and down so freely in the strait passage between the bodies, as it could before they came so near together.5. Now, from the fourth supposition it follows, that when two bodies approaching one another come so near together as to make the aether between them begin to rarefy, they will begin to have a reluctance from being brought nearer together, and an endeavour to recede from one another; which reluctance and endeavour will increase as they come nearer together, because thereby they cause the interjacent aether to rarefy more and more. But at length, when they come so near together that the excess of pressure of the external aether which surrounds the bodies, above that of the rarefied aether, which is between them, is so great as to overcome the reluctance which the bodies have from being brought together; then will that excess of pressure drive them with violence together, and make them adhere strongly to one another, as was said in the second supposition.Two bodies are pulled to each other by an external pressure. But, as you said, this is not the subject of this thread. The Earth is not pulling at all on the rock. The pressure of the ether is exerting a force upon the rock. Certainly there is much more to be said on this very subject, but this is not the place. Fine. Abraham Ortelius was the finest map maker of the Renaissance. Yet, on his 1570 map Neapoletanum Regnum, Pompeii is featured as thriving city: Here are the maps drawn by Giovanni Mascolo, 1633: Here is the map dated 1725, again both Pompeii and Herculaneum featured as cities in full activity: http://halsema.org/people/theleonardifamily/history/mapsof15-18thcentitaly/images/fullsize/3.jpg The water conduit built by the architect/engineer Domenico Fontana starting with 1592 A.D. 
(official chronology), which runs EXACTLY through Pompeii. The water conduit passes through Via de Nocere, Pompeii:

https://translate.google.com/translate?sl=de&tl=en&js=y&prev=_t&hl=ro&ie=UTF-8&u=http%3A%2F%2Fwww.ilya.it%2Fchrono%2Fpages%2Fpompejidt.htm&edit-text=

The Fontana water conduit was built while POMPEII WAS A CITY IN FULL ACTIVITY:

https://www.youtube.com/watch?feature=player_embedded&v=_sc5PfjuCqQ#t=0
https://www.youtube.com/watch?feature=player_embedded&v=koKNBC-t51c#t=0

Two remarkable documentaries, signed A. Tschurilow, take the viewer on a journey through Pompeii, street by street, and demonstrate that the water conduit built by D. Fontana was constructed while Pompeii was a city in full activity.

Perfectly flat window glass at Herculaneum: it was in 1688, in France, that experts developed a new process for making flat glass, mainly used in mirrors. The process consisted of pouring molten glass onto a special table and rolling it flat; later, when cooled, it was polished using felt disks and then coated with reflective material to produce the mirrors.

https://books.google.ro/books?id=jXgnnCpz22QC&pg=PA3&lpg=PA3&dq=flat+window+glass+first+obtained+at+st.+gobain+1688&source=bl&ots=kADb-hHyu9&sig=CZw5-KyF8ZGQDxyrtHnG2SA7b90&hl=ro&sa=X&ei=Spw3VbvTNcWmsgHgsIDgCg&ved=0CEsQ6AEwBg#v=onepage&q=flat%20window%20glass%20first%20obtained%20at%20st.%20gobain%201688&f=false

"The use by Renaissance artists of identical details, the same color decisions, the same motives and general composition plans, the presence in the Pompeian frescoes of things that emerged in the 15th to 17th century, the presence in Pompeian paintings of genre painting, which is found only in the epoch of the Renaissance, and the presence of some Christian motifs on some frescoes and mosaics suggest that the Pompeian frescoes and the works of the artists of the Renaissance come from the same people, who lived in the same epoch."

Vitas Narvidas, "Pompeian Frescoes and the Renaissance: a comparison," Electronic Almanac Art & Fact 1 (5), 2007.

Archaeomagnetic dating of the artifacts at Pompeii:

https://translate.google.com/translate?depth=1&hl=ro&ie=UTF8&prev=_t&rurl=translate.google.com&sl=ru&tl=en&u=http://new.chronologia.org/volume6/tur_vez79.html

Dating the event "Vesuvius eruption of 79 AD" by the paleomagnetic characteristics of artifacts: all the artifacts tested belong to the 17th century (including a fresco attributed to "antiquity").

Figure 5. Dating of the event "the eruption of Vesuvius in 79" against the SIVC (AnTyur) calibration curve. The relevant detail of the SIVC (AnTyur) calibration curve is shown in magenta; the red circle shows the average value of the paleomagnetic parameters of the artifacts. The numbers near the points characterizing the paleomagnetic parameters of the artifacts of Pompeii and Herculaneum correspond to the sample numbers in Table 1.

Paleomagnetic parameters of the artifacts found at Pompeii and Herculaneum:

Table 1: Paleomagnetic parameters of the samples characterizing the event "the eruption of Vesuvius in 79"

https://translate.google.com/translate?depth=1&hl=ro&ie=UTF8&prev=_t&rurl=translate.google.com&sl=ru&tl=en&u=http://new.chronologia.org/volume6/tur_vez79.html

Paleomagnetic parameters, Southern Italy, 1600-2000 AD:

Figure 1. The actual data describing the evolution of the parameters of the geomagnetic field of Southern Italy over the last 400 years [Tanguy, 2005]. The results of instrumental measurements of the direction of the geomagnetic field vector, represented as the path of movement of the North Magnetic Pole, are shown by the dark yellow line.
Black circles show the directions of the residual magnetization vectors of samples of lava from eruptions of Etna (E) and Vesuvius (V); the size of each circle corresponds to the measurement error, and the number near each circle gives the year of the eruption. The blue line shows the path of movement of the North Magnetic Pole estimated from the paleomagnetic parameters of the products of the Etna and Vesuvius volcanoes.

The data coincide perfectly: the artifacts found at Pompeii and Herculaneum belong to the 17th century.

You still think it's a joke?
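For readers unfamiliar with the method described above, the matching step of archaeomagnetic dating can be sketched in a few lines of Python: the remanent-magnetization direction of a fired artifact is compared against a dated reference curve of the local field, and the best-matching epoch is taken as the firing date. The reference values below are placeholders invented purely for illustration, NOT the actual SIVC / Tanguy (2005) data linked above.

import math

reference_curve = [            # (year, declination_deg, inclination_deg) - placeholder values only
    (1600, 8.0, 62.0),
    (1700, 2.0, 64.0),
    (1800, -4.0, 63.0),
    (1900, -8.0, 60.0),
]

def angular_distance(d1, i1, d2, i2):
    # Angle between two field directions given as (declination, inclination), in degrees.
    def to_vec(d, i):
        d, i = math.radians(d), math.radians(i)
        return (math.cos(i) * math.cos(d), math.cos(i) * math.sin(d), math.sin(i))
    a, b = to_vec(d1, i1), to_vec(d2, i2)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

def date_sample(decl, incl):
    # Pick the epoch on the reference curve whose field direction is closest to the sample's.
    return min(reference_curve, key=lambda p: angular_distance(decl, incl, p[1], p[2]))[0]

print(date_sample(1.5, 63.8))   # -> 1700 with these placeholder numbers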
-
You are using the heliocentrical setting/context and the conventional calendar (365.24219 days + 1 leap day every four years). In the geocentrical setting, the chronology of history is much shorter: the new radical chronology of history. That is, history is much shorter than we have been led to believe.

Did you know that modern astronomy agrees that the interval of assured reliability for Newton's equations of gravitational motion, as they apply to planetary orbits, is at most three hundred years?

Dr. Robert W. Bass
Ph.D. (Mathematics), Johns Hopkins University, 1955 [Wintner, Hartman]
(A. Wintner, world's leading authority on celestial mechanics)
Post-Doctoral Fellow, Princeton University, 1955-56 [under S. Lefschetz]
Rhodes Scholar
Professor, Physics & Astronomy, Brigham Young University

Dr. W.M. Smart
Regius Professor of Astronomy at Glasgow University
President of the Royal Astronomical Society from 1949 to 1951

Dr. E.W. Brown
Fellowship, Royal Society
President of the American Mathematical Society
Professor of Mathematics, Yale University
President of the American Astronomical Society

Dr. Bass' basic discovery:

"In a resonant, orbitally unstable or "wild" motion, the eccentricities of one or more of the terrestrial planets can increase in a century or two until a near collision occurs. Subsequently the Principle of Least Interaction Action predicts that the planets will rapidly "relax" into a configuration very near to a (presumably orbitally stable) resonant, Bode's-Law type of configuration. Near such a configuration, small, non-gravitational effects such as tidal friction can in a few centuries accumulate effectively to a discontinuous "jump" from the actual phase-space path to a nearby, truly orbitally stable, path. Subsequently, observations and theory would agree that the solar system is in a quasi-periodic motion stable in the sense of Laplace and orbitally stable. Also, numerical integrations backward in time would show that no near collision had ever occurred. Yet in actual fact this deduction would be false."

"I arrived independently at the preceding scenario before learning that the dynamical astronomer E. W. Brown, president of the American Astronomical Society, had already outlined the same possibility in 1931."

Dr. Robert Bass, Stability of the Solar System:
https://web.archive.org/web/20120916174745/http://www.innoventek.com:80/Bass1974PenseeAllegedProofsOfStabilityOfSolarSystemR.pdf

The astronomers who rely upon Nekhoroshev's theorem regarding the stability of the solar system must understand that the threshold values of the small parameter ε obtained from the various statements of the theorem are, when applied to Solar System dynamics, very small, and can hardly be compared to the existing perturbations. Unfortunately, most attempts at applying Nekhoroshev's results have turned to frustration. Indeed, it is very hard to check whether the conditions for the application of Nekhoroshev's theorem are fulfilled (in particular the one requiring the non-integrability parameter to be small enough), and to compute analytically the value of the stability time. The results are often unrealistic.

Moreover, any computer-assisted program designed to aid in the verification of Nekhoroshev's theorem does not take into account Professor Bass' basic discovery: observations and theory would agree that the solar system is in a quasi-periodic motion stable in the sense of Laplace and orbitally stable; also, numerical integrations backward in time would show that no near collision had ever occurred.
Yet in actual fact this deduction would be false.

D.G. Saari's theorem (1971) on collisions in Newtonian gravitational systems suffers from a basic flaw: its very hypothesis stipulates that the inverse square law of attractive gravitation plays a crucial role in the proof of the result. A single counterexample to the attractive model (the Allais effect, the DePalma spinning ball experiment, the Kozyrev gyroscope experiment, the Biefeld-Brown effect) is sufficient to prove that the assertion that the force of gravity is attractive is false.

Moreover, we have the quote from the Principia. Let's see how Newton describes this force:

"In attractions, I briefly demonstrate the thing after this manner. Suppose an obstacle is interposed to hinder the meeting of any two bodies A, B, attracting one the other: then if either body, as A, is more attracted towards the other body B, than that other body B is towards the first body A, the obstacle will be more strongly urged by the pressure of the body A than by the pressure of the body B, and therefore will not remain in equilibrium: but the stronger pressure will prevail, and will make the system of the two bodies, together with the obstacle, to move directly towards the parts on which B lies; and in free spaces, to go forwards in infinitum with a motion continually accelerated; which is absurd and contrary to the first law."

"the obstacle will be more strongly urged by the pressure of the body A"

PRESSURE = PUSHING FORCE
ATTRACTION = PULLING FORCE

Newton's clear description again: "the obstacle will be more strongly urged by the pressure of the body A than by the pressure of the body B, and therefore will not remain in equilibrium: but the stronger pressure will prevail". Two bodies are pulled to each other by an external pressure.

https://books.google.ro/books?id=VW_CAgAAQBAJ&pg=PA34&lpg=PA34&dq=isaac+newton+In+attractions,+I+briefly+demonstrate+the+thing+after+this+manner.+Suppose+an+obstacle+is+interposed+to+hinder+the+meeting+of+any+two+bodies+A,+B,+attracting+one+the+other&source=bl&ots=eRsq4NaOYt&sig=ACfU3U3NMCiW4fsquNSq0t25is5H6aobrA&hl=en&sa=X&ved=2ahUKEwipgr6fw6fgAhWnAGMBHXZMAlQQ6AEwAXoECAkQAQ#v=onepage&q=isaac%20newton%20In%20attractions%2C%20I%20briefly%20demonstrate%20the%20thing%20after%20this%20manner.%20Suppose%20an%20obstacle%20is%20interposed%20to%20hinder%20the%20meeting%20of%20any%20two%20bodies%20A%2C%20B%2C%20attracting%20one%20the%20other&f=false

Of course, much more has to be added on the subject of the falsification of history: both Pompeii and Herculaneum were destroyed by the eruption of the Vesuvius volcano in the 18th century, and not in the first century AD.
-
The Council of Nicaea could not possibly have taken place before the year 784 AD. Therefore, the claim made in the papal bull by Gregory XIII, it has already deviated approximately ten days since the Nicene Council is completely false. Here are more proofs, using of course Gauss' Easter formula. Dionysius Exiguus, On Easter (translation from Latin to English)http://www.ccel.org/ccel/pearse/morefathers/files/dionysius_exiguus_easter_01.htmExiguus assigns the date of March 24, year 563 AD, for the Passover. http://www.staff.science.uu.nl/~gent0113/easter/easter_text4a.htmHowever, in the year 563 AD, the Passover fell on March 25.Dr. G.V. Nosovsky:Ecclesiastical tradition, in accordance with the New Testament, tells that Christ was resurrected on March 25 on Sunday, on the next day after Passover, which, therefore, fell in that time on March 24 (Saturday). These are exactly the conditions used by Dionisius in his calculation of the date of the First Easter. Dionysius supposedly conducted all these arguments and calculations working with the Easter Book. Having discovered that in the contemporary year 563 (the year 279 of the Diocletian era) the First Easter conditions held, he made a 532-year shift back (the duration of the great indiction, the shift after which the Easter Book entirely recurs) and got the date for the First Easter. But he did not know that Passover (the 14th moon) could not be shifted by 532 years (because of the inaccuracy of the Metonian cycle) and made a mistake: "Dionysius failed, though he did not know that. Indeed, if he really supposed that the First Easter fell on March 25, 31 A.D., then he made a rough mistake as he extrapolated the inaccurate Metonian cycle to 28 previous cycles (that is, for 532 years: 28 x 19 = 532). In fact, Nisan 15, the Passover festival, in the year 31 fell not on Saturday, March 24, but on Tuesday, March 27!". [335, pg. 243: I.A. Klimishin, Calendar and Chronology, in Russian, Nauka, Moscow, 1985]That is a modern reconstruction of what Dionysius the Little did in the 6th century. It would be all right, but it presupposes that near Dionysius' date of 563 A.D. the 14th moon (Passover) really fell on March 24. It could be that Dionysius was not aware of the inaccuracy of the Metonian cycle and made the mistake shifting Passover from 563 to the same day of March in 31 A.D.But he could not have been unaware of the date of Passover in the the almost contemporary year 563! To that end it was sufficient to apply the Metonian cycle to the coming 30-40 years; the inaccuracy of the Metonian cycle does not show up for such intervals. But in 563 Passover (the 14th moon) fell not on March 24, but on Sunday, March 25, that is, it coincided with Easter as determined by the Easter Book. As he specially worked with the calendar situation of almost contemporary year 563 and as he based his calculation of the era "since the birth of Christ" on this situation, Dionysius could not help seeing that, first, the calendar situation in the year 563 did not conform to the Gospels' description and, second, that the coincidence of Easter with Passover in 563 contradicts the essence of the determination of Easter the Easter Book is based on. Therefore, it appears absolutely incredible that the calculations of the First Easter and of the Birth of Christ had been carried out in the 6th century on the basis of the calendar situation of the year 563. It was shown in Sec. 
1 that the Easter Book, used by Dionysius, had not been compiled before the 8th century and had been canonized only at the end of the 9th century. Therefore, the calculations carried out by (or ascribed to) Dionysius the Little had not been carried out before the lOth century. www.chronologia.org/en/es_analysis2/index.html (pages 390 - 401 and 401 - 405)Exiguus, the central pillar of the official historical chronology, could not have made such a colossal mistake UNLESS his works/biography were forged/falsified at least five centuries later in time.In the official chronology, Bede, Syncellus, Scaliger, Blastares, and Petavius base their calculations on Exiguus' methods and data. Dr. G. Nosovsky went even further with his research into the falsified chronology of history: using Gauss' Easter formula he was able to show that the FIRST EASTER conditions, stipulated by Exiguus, WERE SATISFIED ONLY IN THE YEAR 1095 AD (Saturday, March 24, Paschal Moon).http://www.chronologia.org/en/es_analysis2/img408.pdfhttp://www.chronologia.org/en/es_analysis2/img409.pdfhttp://www.chronologia.org/en/es_analysis2/img410.pdfhttp://www.chronologia.org/en/es_analysis2/img411.pdfThis means that the biography of Dionysius Exiguus, the central pillar of modern chronology, was falsified at least after 1400 AD (anybody in the period 1095 + 300 = 1395 AD, could have used the Metonian cycle to verify that the conditions were fulfilled in the year 1095 AD), during the Renaissance. Dr. G. Nosovsky:We don’t have to observe the sky or perform astronomical calculations every time; compiling a table of March and April full moons for any given period of 19 years should suffice for further reference. The reason is that the phases of the moon recur every 19 years in the Julian calendar, and the recurrence cycle remains unaltered for centuries on end – that is, if the full moon fell on the 25th March any given year, it shall occur on the 25th of March in 19 years, in 38 (19 x 2) years, etc.The malfunctions in the cycle shall begin after 300 years, which is to say that if we cover 300 years in 19-year cycles, the full moon shall gradually begin to migrate to its neighbouring location in the calendar. The same applies to new moons and all the other phases of the moon. In the official chronology of history we find one of the most perplexing mysteries.Kepler advocated the adoption of the reformed calendar in a work entitled "Dialogue on the Gregorian Calendar" published in 1612.http://articles.adsabs.harvard.edu//full/1920PA.....28...18L/0000021.000.htmlIn 1613, the Emperor Matthias asked Kepler to attend the Reichstag at Regensburg to counsel on the issue of adopting the Gregorian calendar reform in Germany. In Germany, the Protestant princes had refused to accept the calendar on confessional grounds. Kepler believed that the new calendar was sufficiently exact to satisfy all needs for many centuries. Thus, he proposed that the Emperor issue a general imperial decree to implement the calendar.Moreover, the arch enemy of the Vatican, Galileo Galilei, also agrees with the changes instituted by the Gregorian calendar.Clavius was the senior mathematician on the commission for the reform of the calendar that led, in 1582, to the institution of the Gregorian calendar. From his university days, Galileo was familiar with Clavius's books, and he visited the famous man during his first trip to Rome in 1587. 
After that they corresponded from time to time about mathematical problems, and Clavius sent Galileo copies of his books as they appeared.http://books.google.ro/books?id=o6-8BAAAQBAJ&pg=PA24&lpg=PA24&dq=galileo+galilei+gregorian+calendar&source=bl&ots=ORPJHVLJB5&sig=MMjwonnPkIE6XYnFrcMCS3Yow20&hl=ro&sa=X&ei=UStiVO3mFY2zaczhgMAN&ved=0CB4Q6AEwADgK#v=onepage&q=galileo%20galilei%20gregorian%20calendar&f=falseThesaurus Temporum, published by Joseph Scaliger, which was based almost entirely on the calculations of Dionysius Exiguus and Matthew Blastares, received criticism from Johannes Kepler.However, it is absolutely impossible (and amazing at the same time) for Johannes Kepler to have agreed with the Gregorian calendar reform, given the fact that he was familiar with the popular work attributed to Matthew Blastares.It would have been perfectly simple for Kepler and Galilei to show the humongous errors inherent in the Gregorian calendar reform, to publicize these results, and thus have a very solid base on which to express their opinions regarding the planetary system.All Kepler had to do is to refer each and every historian/astronomer/researcher of his time to the familiar quote signed Matthew Blastares:"By about AD 1330, the medieval scholar Matthew Vlastar wrote the following about how to determine the anniversary of Christ's resurrection in the Collection of Rules of the Holy Fathers of the Church:The rule on Easter has two restrictions: not to celebrate together with the Israelites and to celebrate after the spring equinox. Two more were added by necessity: to have the festival after the very first full Moon after the equinox and not on any day but on the first Sunday after the full Moon. All the restrictions except the last one have been kept firmly until now, but now we often change for a later Sunday. We always count two days after the Passover [full Moon] and then turn to the following Sunday. This happened not by ignorance or inability of the Church fathers who confirmed the rules, but because of the lunar motion.In Vlastar's time, the last condition of Easter was violated: if the first Sunday took place within two days after the full moon, the celebration of Easter was postponed until the next weekend. This change was necessary because of the difference between the real full moon and the one computed in the Easter Book. The error, of which Vlastar knew, is twenty-four hours in 304 years. Therefore the Easter Book must have been written around AD 722. Had Vlastar been aware of the Easter Book's AD 325 canonization, he would have noticed the three-day gap that had accumulated between the dates of the real and the computed full moon in more than 1,000 years." And yet, to the amazement and uncomprehending stupor of modern historians, no such thing happened.Not only Kepler or Galilei, but every reader of Scaliger's works could have brought forward the quote from Blastares, and reveal the errors made by Luigi Lilio (the Gregorian reform of the calendar was carried out on the basis of the project of the Italian "physician and mathematician" Luigi Lilio).As we have seen, in the year 1582, the winter solstice would have arrived on December 16, not at all on December 11. 
Newton agrees with the date of December 11, 1582 as well; moreover, Britain and the British Empire adopted the Gregorian calendar in 1752 (official chronology).

http://articles.adsabs.harvard.edu//full/1920PA.....28...18L/0000024.000.html

No less a figure than Isaac Newton (1642-1727) also took an active interest in the field, publishing "The Chronology of Ancient Kingdoms Amended", a substantial monograph disputing several key conclusions in Scaliger's work. But Newton couldn't possibly have missed the work done by Blastares, and the quote attributed to the same author.

Benjamin Franklin told the readers of Poor Richard's Almanac to enjoy the extra eleven days in bed, and that losing eleven days did not worry him; after all, Europe had managed since 1582.

http://articles.adsabs.harvard.edu//full/1920PA.....28...18L/0000024.000.html

But in 1752 AD, the error/discrepancy between the false Gregorian calendar reform and the real calendar would have amounted to a full three days, a thing that could not have been missed by any researcher.

In 1806, Napoleon, we are told, ordered a return to the Gregorian calendar. In accordance with the Concordat with Pope Pius VII (1742-1823), signed July 15, 1801, a decree put an end to the revolutionary calendar: on 17 Brumaire Year 14 (November 8, 1805) the Minister of Finance announced the January 1, 1806, return to the Gregorian calendar, which had been outlawed in October 1793. But by 1806 AD, the error would have been at least a full two days, and no one could have missed this huge discrepancy.
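For reference, the drift figures used in this argument can be checked with a few lines of Python. This is only back-of-the-envelope arithmetic at the rate assumed in this thread (roughly one day of Julian drift per ~134 years, i.e. about 3 days per 400 years), applied to the two competing reference epochs: the official dating of Nicaea (325 AD) versus the dating argued for here (876-877 AD).

# Accumulated drift of the Julian calendar against the seasons, at the rate assumed
# in this thread (~1 day per 134 years).
RATE_YEARS_PER_DAY = 134.0

def drift_days(from_year, to_year):
    return (to_year - from_year) / RATE_YEARS_PER_DAY

print(drift_days(325, 1582))   # ~9.4 days -> the "approximately ten days" of the 1582 bull
print(drift_days(877, 1582))   # ~5.3 days -> a solstice near December 16 rather than December 11
print(drift_days(325, 1752))   # ~10.6 days, against the 11 days actually dropped by Britain in 1752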
-
Gauss' Easter formula is the most accurate astronomical dating tool at our disposal.A brief summary of the dating of the First Council of Nicaea and the startling conclusions following the fact that the Gregorian calendar reform never occurred in 1582 AD. "With the Easter formula derived by C.F. Gauss in 1800, Nosovsky calculated the Julian dates of all spring full moons from the first century AD up to his own time and compared them with the Easter dates obtained from the Easter Book. He reached a surprising conclusion: three of the four conditions imposed by the First Council of Nicaea were violated until 784, whereas Vlastar had noted that “all the restrictions except the last one have been kept firmly until now.” When proposing the year 325, Scaliger had no way of detecting this fault, because in the sixteenth century the full-moon calculations for the distant past couldn’t be performed with precision.Another reason to doubt the validity of 325 AD is that the Easter dates repeat themselves every 532 years. The last cycle started in 1941, and previous ones were 1409 to 1940, 877 to 1408 and 345 to 876. But a periodic process is similar to drawing a circle—you can choose any starting point. Therefore, it seems peculiar for the council to have met in 325 AD and yet not to have begun the Easter cycle until 345.Nosovsky thought it more reasonable that the First Council of Nicaea had taken place in 876 or 877 AD, the latter being the starting year of the first Easter cycle after 784 AD, which is when the Easter Book must have been compiled. This conclusion about the date of the First Council of Nicaea agreed with his full-moon calculations, which showed that the real and the computed full moons occurred on the same day only between 700 and 1000 AD. From 1000 on, the real full moons occurred more than twenty-four hours after the computed ones, whereas before 700 the order was reversed. The years 784 and 877 also match the traditional opinion that about a century had passed between the compilation and the subsequent canonization of the Easter Book." Dr. G. Nosovsky, Easter Issue: Easter, also known as Pascha, the Feast of the Resurrection, the Sunday of the Resurrection, or Resurrection Day, is the most important religious feast of the Christianity, observed between late March and late April by the Western and early April to early May in Eastern Christianity.It is assumed that the First Ecumenical Nicaean Council (Nicaea is a town in Bythinia, Asia Minor) had compiled and sanctioned a church calendar in the year 325 AD. The Christian church has deemed this Easter Book (in the West), also known as Paschalia (in the East), to be of the greatest importance ever since. The British Encyclopaedia names Joseph Justus Scaliger (1540-1609) and his follower Dionysius Petavius (1583 – 1652) as the founders of consensual chronology. This chronology stands on two pillars – the date of Jesus Christ’s Nativity and the date of the First Ecumenical Council in Nicaea, which is usually referred to as “The Nicaean Council”.Scaliger’s version of chronology is based on the datings of Christ’s birth and the First Ecumenical Council in Nicaea to a great extent, since it was primarily compiled as that of ecclesial history. 
Secular chronology of the ancient times was represented in his works as derivative, based on synchronisms with ecclesial events.We shall give here a detailed account of why one of these ground laying dates, that is the date of the First Ecumenical Council in Nicaea is definitely wrong.The principal method of the research we are relating here is that of computational astronomy. However, the understanding of the issue does not require a profound knowledge of astronomy or other special scientific issues.The founder of chronology Joseph Justus Scaliger considered himself a great mathematician. Pity, but his demonstrations were quite wrong – for instance, he boasted that he had solved the classical “ancient” mathematical ‘Quadrature of Circle’ problem that was subsequently proven insoluble.Calendarian issues are a part of chronology. The chronology belonged to the paradigm of mathematics and astronomy. This was the case in the XVI-XVII centuries, when the consensual Scaliger-Petavius version of chronology was created.Since then, the perception of chronology has changed, and in the XVIII century already, chronology was considered humanity. As its essence cannot be changed, it remains a subdivision of applied mathematics to this day.The historians are supposed to concern themselves with chronology. However, without a sufficient mathematical education – and in the case of chronological studies, sufficient means fundamental – the historians are forced to evade the solution and even the discussion of the rather complex chronological issues.Every historical oddness and contradiction becomes carefully concealed from the public attention; in dangerous and slippery places the historians put on a “professional” mien, saying that “everything is really okay” and they shall “give you a full explanation” later on.WHAT WE KNOW ABOUT THE NICAEAN COUNCIL TODAYNo deeds or acts of this Council have reached our time, but the historians report: “...the opuses of St. Athanasius of Alexandria, Socrates, Eusebius of Caesarea, Sozomenus, Theodoritus, and Rufinus contain enough details for us to get a good idea of the Council together with the 20 rules and the Council’s vigil… The Emperor (Constantine the Great – Auth.) arrived in Nicaea on the 4th or the 5th of July, and the next day the Council was called in the great hall of the Emperor’s palace… the council had solved the problem of determining the time of Easter celebration… and set forth the 20 rules… After the Council, the Emperor had issued a decree for convincing everyone to adhere to the confession proclaimed by the council.”[988], tome 41, pages 71-72.It is thus assumed that together with the proclamation of the united Orthodox-Catholic confession that got split up later, the Nicaean council had also determined the way Easter should be celebrated, or, in other words, developed the Paschalia Easter Book.Despite the fact that no original Easter edicts of the Nicaean council remain, it is said that the Council issued its edicts in the alleged year 325 AD, when the “the actual methods of calculating the Easter dates had already been well developed”, and the Easter date table “that had been used for centuries” had been compiled. The latter is quite natural, since “every 532 years, the Christian Easter cycle repeats from the very start… the Paschalian tables for each year of 532 were in existence” [817], page 4.Thus, the calculation of the new 532-year Easter table really comes down to a simple shift of the previous one by 532 years. 
This order is still valid: the last Great Indiction began in 1941 and is the shifted version of the previous Great Indiction (of the years 1409-1940), which, in its turn, is derived from the Great Indiction of the years 977-1408, etc. So, when we move the modern Easter table by an applicable factor divisible by 532, we should get exactly the same table as was introduced by the Nicaean council.Ergo, the primary form of the Paschalia Easter Book can be easily reconstructed, and we will show the reader how earliest possible date of compilation of Paschalia Easter Book can be deduced from it. Let us turn to the canonical mediaeval ecclesial tractate - Matthew Vlastar’s Collection of Rules Devised by Holy Fathers, or The Alphabet Syntagma. This rather voluminous book represents the rendition of the rules formulated by the Ecclesial and local Councils of the Orthodox Church.Matthew Vlastar is considered to have been a Holy Hierarch from Thessalonica, and written his tractate in the XIV century. Today’s copies are of a much later date, of course. A large part of Vlastar’s Collection of Rules Devised by Holy Fathers contains the rules for celebrating Easter. Among other things, it says the following:“The Easter Rules makes the two following restrictions: it should not be celebrated together with the Judaists, and it can only be celebrated after the spring equinox. Two more had to be added later, namely: celebrate after the first full moon after the equinox, but not any day – it should be celebrated on the first Sunday after the equinox. All of these restrictions, except for the last one, are still valid (in times of Matthew Vlastar – the XIV century – Auth.), although nowadays we often celebrate on the Sunday that comes later. Namely, we always count two days after the Lawful Easter (that is, the Passover, or the full moon – Auth.) and end up with the subsequent Sunday. This didn’t happen out of ignorance or lack of skill on the part of the Elders, but due to lunar motion” Let us emphasize that the quoted Collection of Rules Devised by Holy Fathers is a canonical mediaeval clerical volume, which gives it all the more authority, since we know that up until the XVII century, the Orthodox Church was very meticulous about the immutability of canonical literature and kept the texts exactly the way they were; with any alteration a complicated and widely discussed issue that would not have passed unnoticed.So, by approximately 1330 AD, when Vlastar wrote his account, the last condition of Easter was violated: if the first Sunday happened to be within two days after the full moon, the celebration of Easter was postponed until the next weekend. This change was necessary because of the difference between the real full moon and the one computed in the Easter Book. The error, of which Vlastar was aware, is twenty-four hours in 304 years.Therefore the Easter Book must have been written around AD 722 (722 = 1330 - 2 x 304). Had Vlastar known of the Easter Book’s 325 AD canonization, he would have noticed the three-day gap that had accumulated between the dates of the computed and the real full moon in more than a thousand years. So he either was unaware of the Easter Book or knew the correct date when it was written, which could not be near 325 AD.G. 
Nosovsky: So, why the astronomical context of the Paschalia contradicts Scaliger’s dating (alleged 325 AD) of the Nicaean Council where the Paschalia was canonized?This contradiction can easily be seen from the roughest of calculations.1) The difference between the Paschalian full moons and the real ones grows at the rate of one day in 300 years.2) A two-day difference had accumulated by the time of Vlastar, which is roughly dated 1330 AD.3) Ergo, the Paschalia was compiled somewhere around 730 AD, since1330 – (300 x 2) = 730.It is understood that the Paschalia could only be canonized by the Council sometime later. But this fails to correspond to Scaliger’s dating of its canonization as 325 AD in any way at all!Let us emphasize, that Matthew Vlastar himself, doesn’t see any contradiction here, since he is apparently unaware of the Nicaean Council’s dating as the alleged year 325 AD. A natural hypothesis: this traditional dating was introduced much later than Vlastar’s age. Most probably, it was first calculated in Scaliger’s time. The Council that introduced the Paschalia – according to the modern tradition as well as the mediaeval one, was the Nicaean Council – could not have taken place before 784 AD, since this was the first year when the calendar date for the Christian Easter stopped coinciding with the Passover full moon due to slow astronomical shifts of lunar phases.The last such coincidence occurred in 784 AD, and after that year, the dates of Easter and Passover drifted apart forever. This means the Nicaean Council could not have possibly canonized the Paschalia in IV AD, when the calendar Easter Sunday would coincide with the Passover eight (!) times – in 316, 319, 323, 343, 347, 367, 374, and 394 AD, and would even precede it by two days five (!) times, which is directly forbidden by the fourth Easter rule, that is, in 306 and 326 (allegedly already a year after the Nicaean Council), as well as the years 346, 350, and 370.Thus, if we’re to follow the consensual chronological version, we’ll have to consider the first Easter celebrations after the Nicaean Council to blatantly contradict three of the four rules that the Council decreed specifically for this feast! The rules allegedly become broken the very next year after the Council decrees them, yet start to be followed zealously and in full detail five centuries (!) after that.Let us note that J.J. Scaliger could not have noticed this obvious nonsense during his compilation of the consensual ancient chronology, since computing true full moon dates for the distant past had not been a solved problem in his epoch.The above mentioned absurdity was noticed much later, when the state of astronomical science became satisfactory for said purpose, but it was too late already, since Scaliger’s version of chronology had already been canonized, rigidified, and baptized “scientific”, with all major corrections forbidden.Now, the ecclesiastical vernal equinox was set on March 21st because the Church of Alexandria, whose staff were reputed to have astronomical expertise, reckoned that March 21st was the date of the equinox in 325 AD, the year of the First Council of Nicaea. The Council of Laodicea was a regional synod of approximately thirty clerics from Asia Minor that assembled about 363–364 AD in Laodicea, Phrygia Pacatiana, in the official chronology.The major concerns of the Council involved regulating the conduct of church members. 
The Council expressed its decrees in the form of written rules or canons.However, the most pressing issue, the fact that the calendar Easter Sunday would coincide with the Passover eight (!) times – in 316, 319, 323, 343, 347, 367, 374, and 394 AD, and would even precede it by two days five (!) times, which is directly forbidden by the fourth Easter rule, that is, in 306 and 326 (allegedly already a year after the Nicaean Council), as well as the years 346, 350, and 370 was NOT presented during this alleged Council of Laodicea.We are told that the motivation for the Gregorian reform was that the Julian calendar assumes that the time between vernal equinoxes is 365.25 days, when in fact it is about 11 minutes less. The accumulated error between these values was about 10 days (starting from the Council of Nicaea) when the reform was made, resulting in the equinox occurring on March 11 and moving steadily earlier in the calendar, also by the 16th century AD the winter solstice fell around December 11.But, in fact, as we see from the information presented in the preceeding paragraphs, the Council of Nicaea could not have taken place any earlier than the year 876-877 e.n., which means that in the year 1582, the winter solstice would have arrived on December 16, not at all on December 11.Papal Bull, Gregory XIII, 1582:Therefore we took care not only that the vernal equinox returns on its former date, of which it has already deviated approximately ten days since the Nicene Council, and so that the fourteenth day of the Paschal moon is given its rightful place, from which it is now distant four days and more, but also that there is founded a methodical and rational system which ensures, in the future, that the equinox and the fourteenth day of the moon do not move from their appropriate positions.Given the fact that in the year 1582, the winter solstice would have arrived on December 16, not at all on December 11, this discrepancy could not have been missed by T. Brahe, or G. Galilei, or J. Kepler. Newton agrees with the date of December 11, 1582 as well; moreover, Britain and the British Empire adopted the Gregorian calendar in 1752 (official chronology); again, more fiction at work: no European country could have possibly adopted the Gregorian calendar reformation in the period 1582-1800, given the absolute fact that the winter solstice must have falled on December 16 in the year 1582 AD, and not at all on December 11 (official chronology). https://www.scribd.com/document/74886881/Easter-Issue EXPLICIT DATING GIVEN BY MATTHEW VLASTARIt is indeed amazing that Matthew Vlastar’s Collection of Rules Devised by Holy Fathers – the book that every Paschalia researcher refers to – contains an explicit dating of the time the Easter Book was compiled. It is even more amazing that none of the numerous researchers of Vlastar’s text appeared to have noticed it (?!), despite the fact that the date is given directly after the oft-quoted place of Vlastar’s book, about the rules of calculating the Easter date. Moreover, all quoting stops abruptly immediately before the point where Vlastar gives this explicit date.What could possibly be the matter? Why don’t modern commentators find themselves capable of quoting the rest of Vlastar’s text? We are of the opinion that they attempt to conceal from the reader the fragments of ancient texts that explode the entire edifice of Scaliger’s chronology. We shall quote this part completely:Matthew Vlastar:“There are four rules concerning the Easter. 
The first two are the apostolic rules, and the other two are known from tradition. The first rule is that the Easter should be celebrated after the spring equinox. The second is that it should not be celebrated together with the Judeans. The third: not just after the equinox, but also after the first full moon following the equinox. And the fourth: not just after the full moon, but the first Sunday following the full moon… The current Paschalia was compiled and given to the church by our fathers in full faith that it does not contradict any of the quoted postulates. (This is the place the quoting usually stops, as we have already mentioned – Auth.). They created it the following way: 19 consecutive years were taken starting with the year 6233 since Genesis (= 725 AD – Auth.) and up until the year 6251 (= 743 AD – Auth.), and the date of the first full moon after the spring equinox was looked up for each one of them. The Paschalia makes it obvious that when the Elders were doing it, the equinox fell on the 21st of March” ([518]).

Thus, the Circle for Moon – the foundation of the Paschalia – was devised according to the observations from the years 725-743 AD; hence, the Paschalia couldn't possibly have been compiled, let alone canonized, before that.

Here is another proof. The Byzantine historian Leo Diaconus (ca. 950-994) observed the total eclipse of 22 December 968 from Constantinople (now Istanbul, Turkey). His observation is preserved in the Annales Sangallenses, and reads:

"...at the fourth hour of the day ... darkness covered the earth and all the brightest stars shone forth. And it was possible to see the disk of the Sun, dull and unlit, and a dim and feeble glow like a narrow band shining in a circle around the edge of the disk."

"When the Emperor was waging war in Syria, at the winter solstice there was an eclipse of the Sun such as has never happened apart from that which was brought on the Earth at the Passion of our Lord on account of the folly of the Jews. . . The eclipse was such a spectacle. It occurred on the 22nd day of December, at the 4th hour of the day, the air being calm. Darkness fell upon the Earth and all the brighter stars revealed themselves. Everyone could see the disc of the Sun without brightness, deprived of light, and a certain dull and feeble glow, like a narrow headband, shining round the extreme parts of the edge of the disc. However, the Sun gradually going past the Moon (for this appeared covering it directly) sent out its original rays, and light filled the Earth again."

This refers to the total solar eclipse seen at Constantinople on 22 December AD 968. From: Leo the Deacon, Historiae.

http://www.mreclipse.com/Special/quotes2.html

However, according to the official chronology the winter solstice in the year 968 MUST HAVE FALLEN on December 16, given the 10-day correction instituted by Gregory XIII, as we are told (a very simple calculation: the 11-minute excess in the length of the Julian year amounts to a full day for roughly every 134 years).
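Since everything in this post rests on Gauss' Easter formula, here is a minimal sketch of it in Python, in its Julian-calendar form (constants M = 15, N = 6). This is the standard textbook statement of the formula, written out by me for reference, not Nosovsky's own code. It returns the Julian-calendar date of Easter Sunday together with the ecclesiastical (Paschal) full moon, and it reproduces, for example, the figures quoted earlier in the thread for the year 1095 (Paschal full moon on March 24, Easter Sunday on March 25); the assert at the end checks the 532-year Great Indiction discussed above.

def julian_easter(year):
    # Gauss' Easter formula, Julian-calendar form (M = 15, N = 6).
    a = year % 19
    b = year % 4
    c = year % 7
    d = (19 * a + 15) % 30               # days from March 21 to the Paschal full moon
    e = (2 * b + 4 * c + 6 * d + 6) % 7  # offset to the following Sunday
    if d + e < 10:
        easter = (3, 22 + d + e)         # (month, day) in the Julian calendar; 3 = March
    else:
        easter = (4, d + e - 9)          # 4 = April
    full_moon = (3, 21 + d) if 21 + d <= 31 else (4, d - 10)   # ecclesiastical (Paschal) full moon
    return easter, full_moon

print(julian_easter(1095))               # ((3, 25), (3, 24)): Easter March 25, Paschal moon March 24

# The Easter table repeats with the 532-year Great Indiction (19 x 28 = 532),
# e.g. the cycles 345-876, 877-1408, 1409-1940 mentioned in this thread.
assert julian_easter(1095) == julian_easter(1095 + 532)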
-
Can anyone recommend any educational material
sandokhan replied to ALine's topic in Applied Mathematics
https://books.google.ro/books?id=fCwv7JlIE9IC&printsec=frontcover&dq=lin+segel+mathematics+applied&hl=en&sa=X&ved=0ahUKEwi4-rno3MPhAhWFlYsKHYjWAQEQ6AEIJzAA#v=onepage&q=lin segel mathematics applied&f=false -
Global algorithm for the strong Lehmer pairs

π/ln2 is the sacred cubit (basic unit of distance) of the Lehmer pairs. 1400/22 is the sacred cubit distance for regular zeta function zeros.

Infinite sequence formula for all Lehmer pairs:

{n ⋅ 2π/ln2 + n ⋅ 2π/ln2 + π/ln2}/2 = π/ln2 x (4n + 1)/2
{n ⋅ 2π/ln2 + π/ln2 + (n + 1) ⋅ 2π/ln2}/2 = π/ln2 x (4n + 3)/2

There are several choices for the optimum interval which can be used for the global subdivision algorithm to find the strong Lehmer pairs:

2π/ln2 x 10 = 90.6472...
2π/ln2 x 15 = 135.9708...
2π/ln2 x 75/2 = 75π/ln2 = 2.5 x 135.9708... = 339.9270106...
2π/ln2 x 100sc = 576.84583...

The best version is 75π/ln2 = 2.5 x 135.9708... = 339.9270106...

The subdivision proportions are as follows:

534 sc = 339.9270106...
160 sc = 101.81818...
135.9708 x sc = 86.52687538...
106.8 sc = 67.985402...
53.4 sc = 33.9927...

1018.1818 = 4 x 254.545454... = 16 x 63.636363...

For the 100 sc interval the subdivision proportions were:

534, 136.1, 80, 53.4, 26.7

and

534, 160, 136.1, 106.8, 53.4

List of strong Lehmer pairs:

https://www.slideshare.net/MatthewKehoe1/riemanntex (pg 64-88)

The first such values are: 415.018809755 and 415.455214996, 7005.062866175 and 7005.100564674, 17143.786536184 and 17143.821843505, 23153.514967223 and 23153.574227077...

http://www.dtc.umn.edu/~odlyzko/zeta_tables/zeros1

All of the zeros of the zeta function are generated by the five elements subdivision algorithm; therefore the location of all of the Lehmer pairs (including the strong Lehmer pairs) must be related to the subdivision values, but on a larger scale. The same global algorithm successfully employed before to find the zeta zeros on a 100 sc interval will be used again, featuring the two counterpropagating zeta functions.

33992.701: 10181.818, 8652.687, 6798.54, 3399.27

33992.701 - 10181.818 = 23810.883
23810.883: 7132.0626, 6060.952, 4762.1766, 2381.0883

23810.883 - 7132.0626 = 16678.82
16678.82: 4995.8, 4245.518, 3335.764, 1667.882
1667.882: 499.58, 424.5518, 333.5764, 166.7882

10181.818 + 7132.0626 = 17313.8806
10181.818 + 6060.952 = 16242.77
23810.883 - 6060.952 = 17749.931
23810.883 - 7132.0626 = 16678.82
7132.0626 - 6060.952 = 1071.1106
1071.1106: 320.83, 272.646, 214.222, 107.111

16242.77 + 320.83 = 16563.6
17749.931 - 320.83 = 17429.1
1071.1106 - 320.83 = 750.281
750.281: 224.73, 190.98, 150.056, 75.28

17429.1 - 224.73 = 17204.37
16563.6 + 224.73 = 16788.33

That is, upper and lower bounds are being obtained for the Lehmer pair located at 17143.786... We also have a lower bound estimate for the Lehmer pair located at 7005.1..., namely 6798.54, and an upper bound for the Lehmer pair located at 23153.5..., namely 23810.883.

The Lehmer phenomenon, a pair of zeros which are extremely close, is related to the close proximity of some of the values of the two subdivisions of the 63.6363... segment. In the same way, strong Lehmer pairs are related to the close proximity of some of the values of the two subdivisions of the 10^n x 339.927106... segment. The same algorithm can be applied for the 339927.106... segment or for the 33992710.6... segment; however, the calculations involving the two subdivision fractals will be more involved, since now we have to obtain many more significant digits.

On the extreme values of the zeta function:

https://www2.warwick.ac.uk/fac/sci/maths/people/staff/stefan_grosskinsky/rcssm/ws2/00ExtremeBehaviour.pdf
http://mat.uab.cat/~bac16/wp-content/uploads/2013/12/talk.bondarenko.pdf
https://heilbronn.ac.uk/wp-content/uploads/2016/05/Hughes_MaxZeta0T.pdf
http://siauliaims.su.lt/pdfai/2004/stesle-04.pdf
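For convenience, here is a short Python sketch that simply evaluates the quantities defined above (π/ln2, the two interleaved sequences π/ln2 x (4n+1)/2 and π/ln2 x (4n+3)/2, and the candidate intervals). It reproduces the arithmetic of this post and nothing more; whether these values locate the Lehmer pairs is the claim of the post itself.

import math

PI_OVER_LN2 = math.pi / math.log(2)      # ~4.5323601
TWO_PI_OVER_LN2 = 2 * PI_OVER_LN2        # ~9.0647203

def lehmer_sequence(n):
    # The two interleaved sequences above: pi/ln2*(4n+1)/2 and pi/ln2*(4n+3)/2.
    return PI_OVER_LN2 * (4 * n + 1) / 2, PI_OVER_LN2 * (4 * n + 3) / 2

# Candidate intervals quoted above
print(TWO_PI_OVER_LN2 * 10)     # 90.6472...
print(TWO_PI_OVER_LN2 * 15)     # 135.9708...
print(75 * PI_OVER_LN2)         # 339.92701...

for n in range(4):
    print(n, lehmer_sequence(n))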
-
Gran Sasso, Italy - GINGERino experiment

Latitude: 42.4166°
λ (He-Ne) = 632 nm
L = 3.6 m

The formula for a square ring laser interferometer located away from the center of rotation, derived in the previous message, could have been obtained directly from the global/generalized Sagnac formula, 2(v1l1 + v2l2)/c^2, by letting l1 = l2 = 2L.

Frequency formula for the CORIOLIS EFFECT at the Gran Sasso ring laser gyroscope:

4Aω/(λP) = Lω/λ    (A = L^2, P = 4L)

Frequency formula for the SAGNAC EFFECT at the Gran Sasso ring laser gyroscope:

4L(v1 + v2)/(λP) = 2v/λ    (v = Rω; since the sides of the square interferometer measure 3.6 meters in length, v1 practically equals v2)

(2v/λ) / (Lω/λ) = 2R/L

At the Gran Sasso latitude, R = 4,710 km = 4,710,000 meters and L = 3.6 meters, so 2R/L = 2,616,666.666.

The SAGNAC EFFECT frequency is larger than the CORIOLIS EFFECT frequency by a factor of 2,616,666.666. As we have seen earlier, for the Michelson-Gale experiment the SAGNAC EFFECT time phase difference is 21,000 times greater than the CORIOLIS EFFECT phase difference. (A numerical evaluation of both frequency formulas is sketched at the end of this post.)

The CORIOLIS EFFECT frequency formula is not always written in its full form, which must include the conversion factor from rad/s to Hz:

https://pos.sissa.it/318/181/pdf (the 2π factor is featured in the formula)
https://www.scitepress.org/papers/2015/54380/54380.pdf (the authors do not include the 2π conversion factor)
https://bura.brunel.ac.uk/bitstream/2438/7277/1/FulltextThesis.pdf (it includes the correct derivation of the CORIOLIS EFFECT frequency formula, pg. 39-40 and 60)

The huge error introduced by Albert Michelson in 1925 has not been noticed by any of the distinguished physicists who have published works on the SAGNAC EFFECT, including E.J. Post, who had no idea in 1967 that he was deriving and describing the CORIOLIS EFFECT formula.

http://www.orgonelab.org/EtherDrift/Post1967.pdf
http://signallake.com/innovation/andersonNov94.pdf
https://phys.org/news/2017-03-deep-earth-rotational-effects.html
https://agenda.infn.it/event/7524/contributions/68390/attachments/49528/58554/Schreiber.pdf
https://pdfs.semanticscholar.org/47ea/33bdc7d0247772658b1e29c3e9e2a4578d17.pdf
http://inspirehep.net/record/1468904/files/JPCS_718_7_072003.pdf

http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1925ApJ....61..137M&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf

The promise made by A. Michelson, "the difference in time required for the two pencils to return to the starting point will be...", never materialized mathematically. Instead of applying the correct definition of the Sagnac effect, Michelson compared TWO OPEN SEGMENTS/ARMS of the interferometer, and not the TWO LOOPS, as required by the exact meaning of the Sagnac experiment. As such, his formula captured the Coriolis effect upon the light beams.
By subtracting two different Sagnac phase shifts, each valid for a different segment, we obtain the CORIOLIS EFFECT formula. However, for the SAGNAC EFFECT we have a single CONTINUOUS CLOCKWISE PATH and a single CONTINUOUS COUNTERCLOCKWISE PATH, as the definition of the Sagnac effect entails.

HERE IS THE DEFINITION OF THE SAGNAC EFFECT: two pulses of light sent in opposite directions around a closed loop (either circular or a single uniform path), while the interferometer is being rotated. Loop = a structure, series, or process, the end of which is connected to the beginning. A single continuous pulse A > B > C > D > A, while the other one, A > D > C > B > A, goes in the opposite direction and carries the negative sign.

We can see at a glance each and every important detail. For the Coriolis effect, one has a formula which is proportional to the area; only the phase differences of EACH SIDE are being compared, and not the continuous paths. For the Sagnac effect, one has a formula which is proportional to the velocity of the light beam; the entire continuous clockwise path is compared to the continuous counterclockwise path, exactly as required by the definition of the Sagnac effect.

Experimentally, the Michelson-Gale test was a closed loop, but not mathematically. Michelson treated each of the longer sides/arms of the interferometer mathematically as a separate entity: no closed loop was formed at all. Therefore the mathematical description put forth by Michelson has nothing to do with the correct definition of the Sagnac effect (two pulses of light sent in opposite directions around a closed loop, either circular or a single uniform path). By treating each side/arm separately, Michelson was describing and analyzing the Coriolis effect, not the Sagnac effect.

Loop = a structure, series, or process, the end of which is connected to the beginning. Connecting the two sides through a single mathematical description closes the loop; treating each side separately does not. The Sagnac effect requires, by definition, a structure the end of which is connected to the beginning.
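As promised above, here is the arithmetic behind the 2,616,666 factor, in a few lines of Python. It simply evaluates the two frequency formulas exactly as they are written in this post, with the GINGERino parameters as given (L = 3.6 m, λ = 632 nm, R = 4,710 km) and ω taken as the Earth's sidereal rotation rate; which of the two formulas actually applies to a ring laser is, of course, the question being argued here, so this is arithmetic only.

# Evaluation of the two frequency formulas quoted above for the GINGERino parameters.
OMEGA = 7.2921e-5        # rad/s, Earth's sidereal rotation rate
LAM = 632e-9             # m, He-Ne wavelength as given in the post
L = 3.6                  # m, side of the square ring
R = 4.71e6               # m, value used in the post for the Gran Sasso latitude

f_coriolis = L * OMEGA / LAM       # 4*A*omega/(lambda*P) with A = L^2, P = 4L
f_sagnac = 2 * R * OMEGA / LAM     # 2*v/lambda with v = R*omega

print(f_coriolis)                  # ~415 Hz
print(f_sagnac)                    # ~1.09e9 Hz
print(f_sagnac / f_coriolis)       # = 2R/L ~ 2,616,666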
-
https://arxiv.org/pdf/1612.08627.pdf

Few mathematicians who study the zeta function remember, or have knowledge of the fact, that D.H. Lehmer proved the existence of an infinite number of Lehmer pairs:

https://projecteuclid.org/download/pdf_1/euclid.acta/1485892173
Lehmer, D. H., On the roots of the Riemann zeta-function, Acta Math. 95 (1956), 291-298.

H.M. Edwards acknowledges this proof in his treatise on the zeta function (Riemann's Zeta Function, section 8.3, pg. 179).

The Riemann hypothesis means that the de Bruijn-Newman constant is zero. The existence of infinitely many Lehmer pairs proves that the de Bruijn-Newman constant Λ is 0.

An infinite sequence of such Lehmer pairs is given by the formula (636.3 x 3n - 16.9 x n) x 2π/ln2, n = 1, 2, 3, ... Comparing with the zero ordinates from http://www.dtc.umn.edu/~odlyzko/zeta_tables/zeros1:

n = 1: 17150.45078 (nearby zeros: 17143.786536184, 17143.821843505)
n = 2: 34300.90155 (nearby zeros: 34295.104944255, 34295.371984027)
n = 3: 51451.35233 (nearby zeros: 51448.076349964, 51448.729475327, 51449.153911623)
n = 4: 68601.80311 (nearby zeros: 68597.636479797, 68597.971396943)
n = 5: 85752.25388 (nearby zeros: 85748.621773488, 85748.861163006)
n = 6: 102902.7047 (nearby zeros: 102907.166732245, 102907.475751344)
n = 7: 120053.1554 (nearby zeros: 120055.446373211, 120055.565321075)

It can be checked even at much greater heights on the critical line:

http://www.dtc.umn.edu/~odlyzko/zeta_tables/zeros4

However, what is still needed is an understanding of the nature of the strong Lehmer pairs, and of how they relate to the two counterpropagating zeta functions.
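A minimal Python sketch that reproduces the left-hand column above (the values of the formula itself); the comparison with the nearby zeros still has to be made by hand against the Odlyzko tables linked above.

import math

# Evaluates the sequence (636.3*3n - 16.9*n) * 2*pi/ln2 quoted above.
TWO_PI_OVER_LN2 = 2 * math.pi / math.log(2)

def lehmer_estimate(n):
    return (636.3 * 3 * n - 16.9 * n) * TWO_PI_OVER_LN2

for n in range(1, 8):
    print(n, round(lehmer_estimate(n), 5))   # n = 1 -> 17150.45078, n = 2 -> 34300.90155, ...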
-
There is another very interesting phenomenon: very close double Lehmer pairs.

http://www.slideshare.net/MatthewKehoe1/riemanntex (pg. 64-87)

441296.9992 / 441297.0149 and 441649.8183 / 441649.8273
506616.5065 / 506616.5305 and 506959.3064 / 506959.327
675064.2749 / 675064.2909 and 675149.609 / 675149.6213
692620.9588 / 692620.9806 and 692736.741 / 692736.7631
847172.8025 / 847172.8171 and 847263.1402 / 847263.1502
1055407.79 / 1055407.813 and 1055657.193 / 1055657.216
1438925.829 / 1438925.849 and 1439457.546 / 1439457.567
1579400.943 / 1579400.968 and 1579721.076 / 1579721.097
1662448.536 / 1662448.546 and 1662515.735 / 1662515.743

Even on a larger scale, this phenomenon can still be observed:

18580341990011.15934 and 18523741991636.36437

Since 2π/ln2 x 15 = 135.9708043... is the value of the "unscheduled excess of nines = 1" Lehmer-pair occurrence, we can use it as another shortcut formula for finding both regular and strong Lehmer pairs:

(2π/ln2 x 15) x k = 135.9708043 x k

The first strong Lehmer pair is 415.0188 and 415.455:
2π/ln2 x 15 x 3 = 407.9124

The most famous Lehmer pair is of course 17143.786 and 17143.8218:
2π/ln2 x 15 x 126 = 17132.321

35615956517.47854:
2π/ln2 x 15 x 261938273 = 35,615,956,530.4284

2414113624163.41943:
2π/ln2 x 15 x 17754647499 = 2,414,113,624,157.0292

7954022502373.43289015387:
2π/ln2 x 15 x 58498019445 = 7,954,022,502,352.206

13066434408794226520207.1895041619:
2π/ln2 x 15 x 96097356261743157503 = 13,066,434,408,794,226,520,208.9124

So this shortcut formula is able to detect some of the best known strong Lehmer pairs. However, this is still not enough: we need to discover the hidden pattern of the strong Lehmer pairs from a basic arithmetical point of view.

The average number of zeta zeros on the entire critical strip:

N(T) = (T/2π)ln(T/2π) - T/2π

T = 40
Let T1 = T + L(T), where L(T) = average spacing = 2π/ln(T/2π)
N(T1) = ((T + L(T))/2π)[ln((T + L(T))/2π) - 1] = N1

L(T) = 3.3945
T1 = 43.3945
N = 5.41765
N1 = 6.44

Zeta zeros:
37.586 = z1
40.9187 = z2

8(N + N1) - 2z1 = 19.6892
8(N + N1) - 2z2 = 13.0238
8(N + N1) - z1 - z2 = 16.3565
16N - z1 - z2 = 8.1777
16N1 - z1 - z2 = 24.5353

8(N + N1) - z1 - z2 = 2 x (16N - z1 - z2) = 32N - 2z1 - 2z2
z1 + z2 = 32N - 8(N + N1)
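A minimal sketch of the zero-counting arithmetic used just above (the smooth approximation N(T) = (T/2π)ln(T/2π) - T/2π and the average-spacing term), reproducing the T = 40 figures quoted in this post:

```python
import math

def N_smooth(T: float) -> float:
    """Smooth approximation to the zero-counting function used in the post."""
    x = T / (2 * math.pi)
    return x * math.log(x) - x

def avg_spacing(T: float) -> float:
    """Average spacing between consecutive zero ordinates near height T."""
    return 2 * math.pi / math.log(T / (2 * math.pi))

T = 40.0
L = avg_spacing(T)                   # ≈ 3.3945
T1 = T + L                           # ≈ 43.3945
N, N1 = N_smooth(T), N_smooth(T1)    # ≈ 5.41765 and ≈ 6.44

z1, z2 = 37.586, 40.9187             # the two zeros in this window, as quoted in the post
print(f"L(T) = {L:.4f},  T1 = {T1:.4f},  N = {N:.5f},  N1 = {N1:.4f}")
print(f"8(N + N1) - z1 - z2 = {8 * (N + N1) - z1 - z2:.4f}")
print(f"16N - z1 - z2       = {16 * N - z1 - z2:.4f}")
```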
-
https://arxiv.org/pdf/1704.05834.pdf
On large gaps between zeros of L-functions from branches

Andre LeClair (Cornell University) proves that the normalized gaps between consecutive ordinates tn of the zeros of the Riemann zeta function on the critical line cannot be arbitrarily large.

https://arxiv.org/pdf/1508.05870.pdf
Lehmer pairs revisited

The Riemann hypothesis means that the de Bruijn-Newman constant is zero. Unusually close pairs of zeros of the Riemann zeta function, the Lehmer pairs, can be used to give lower bounds on Λ. Soundararajan’s Conjecture B implies the existence of infinitely many strong Lehmer pairs, and thus that the de Bruijn-Newman constant Λ is 0.

The factor (1 - 2^(1 - s))^(-1) determines the exact locations of the zeta zeros, especially those of the Lehmer pairs.

2π/ln2 x 1 = 9.064720284
2π/ln2 x 2 = 18.129440568
2π/ln2 x 3 = 27.194160852
2π/ln2 x 4 = 36.258881136
2π/ln2 x 5 = 45.32360142
2π/ln2 x 6 = 54.388321704
2π/ln2 x 7 = 63.453041988
2π/ln2 x 8 = 72.517762272
2π/ln2 x 9 = 81.582482556
2π/ln2 x 10 = 90.64720284
2π/ln2 x 11 = 99.711923124
2π/ln2 x 12 = 108.776643408
2π/ln2 x 13 = 117.841363692
2π/ln2 x 14 = 126.906083976
2π/ln2 x 15 = 135.97080426
2π/ln2 x 16 = 145.035524544
(an addition by a factor of 1 to the decimal part after the multiplication by 16)

(k/2)(2π/ln2)

The zeta zeros cannot be generated at the odd or even integer values of k; they can be generated only at the fractional values of k. This is the starting point of the so-called "excess of nines = 1" theory as it applies to the Lehmer pairs.

"It is evident that all even or odd (whole number) values of k produce an excess of nines = 1 and therefore cannot generate a zeta function zero. Further, it is true that all zeros occur from the fractional values of the k's; when an unscheduled excess of nines = 1 occurs, so does a Lehmer event. The plotted data briefly passes through an excess of nines = 1, wavers, then becomes fractional again, crosses the real axis and produces a zero."

While these facts do explain the occurrence of the regular Lehmer pairs, a deeper explanation is needed to account for the existence of the strong (high quality) Lehmer pairs.

http://www.dtc.umn.edu/~odlyzko/doc/zeta.derivative.pdf

1.30664344087942265202071895041619 x 10^22
1.30664344087942265202071898265199 x 10^22

The average spacing of zeros at that height is 0.128, while the above Lehmer pair of zeros is separated by 0.00032 (about 1/400 of the average spacing).

So, the pattern of these strong Lehmer pairs has to be revealed/deciphered.
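A minimal sketch (using the mpmath library) of the identity behind the factor quoted above, ζ(s) = (1 - 2^(1-s))^(-1) η(s), where η is the alternating (eta) series; it also prints the first few multiples of 2π/ln2, which are the ordinates at which the factor 1 - 2^(1-s) vanishes on the line Re(s) = 1:

```python
from mpmath import mp, mpc, zeta, altzeta, power, pi, log

mp.dps = 30

# Identity behind the factor quoted above: zeta(s) = eta(s) / (1 - 2^(1-s)),
# where eta (mpmath's altzeta) is the alternating zeta series.
s = mpc('0.5', '14.134725141734693')      # near the first zero ordinate
print("zeta(s)                =", zeta(s))
print("eta(s) / (1 - 2^(1-s)) =", altzeta(s) / (1 - power(2, 1 - s)))

# The factor 1 - 2^(1-s) vanishes at s = 1 + 2*pi*i*k/ln2, i.e. at ordinates that are
# integer multiples of 2*pi/ln2 -- the constant used throughout this post.
step = 2 * pi / log(2)
for k in range(1, 6):
    print(f"2*pi/ln2 x {k} =", k * step)
```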
Highest zeta zero ever computed:

t ≈ 81029194732694548890047854481676712.9879 (n = 10^36 + 4242063737401796198)
https://arxiv.org/pdf/1607.00709.pdf

1273315917355388788579148020712834 x 63 = 80218902793389493680486325304908542
1273315917355388788579148020712834 x 0.63636363 = 810291939305055209561529176768134.23182742

81,029,194,732,694,548,890,047,854,481,676,676.23182742
81,029,194,732,694,548,890,047,854,481,676,739.86819105

81,029,194,732,694,548,890,047,854,481,676,692.40916072 (+16.1773333)
81,029,194,732,694,548,890,047,854,481,676,704.47514472 (+12.065984)
81,029,194,732,694,548,890,047,854,481,676,713.47347282 (+8.9983281)
81,029,194,732,694,548,890,047,854,481,676,709.78410162 (+5.3089569)
81,029,194,732,694,548,890,047,854,481,676,710.72208732 (+0.9379857)
81,029,194,732,694,548,890,047,854,481,676,711.42159955 (+0.69951223)
81,029,194,732,694,548,890,047,854,481,676,711.943267789 (+0.521668239)
81,029,194,732,694,548,890,047,854,481,676,712.332307089 (+0.3890393)
81,029,194,732,694,548,890,047,854,481,676,712.622437039 (+0.29012995)
81,029,194,732,694,548,890,047,854,481,676,712.838804339 (+0.2163673)

81,029,194,732,694,548,890,047,854,481,676,723.69085805 (-16.177333)
81,029,194,732,694,548,890,047,854,481,676,711.62487405 (-12.065984)
81,029,194,732,694,548,890,047,854,481,676,716.5720035 (-7.11885455)
81,029,194,732,694,548,890,047,854,481,676,715.3142453 (-1.2577582)
81,029,194,732,694,548,890,047,854,481,676,714.37625955 (-0.93798575)
81,029,194,732,694,548,890,047,854,481,676,713.6767473 (-0.69951225)
81,029,194,732,694,548,890,047,854,481,676,713.15507905 (-0.52166825)

s = r x θ
r = 68.1 (136.2/2; 22.7 = 1.362 x 16.66666, 38.136 = 1.362 x 28, 51.756 = 1.362 x 38, 68.1 = 1.362 x 50, 81.72 = 1.362 x 60, 98.064 = 1.362 x 72, 118.494 = 1.362 x 87)
θ = 136.12°
sin 136.12° = ln2
136.12° = 2.375742 radians
s = 161.78804

63.6363/16.1773 = 1/0.25422

2π/ln2 = (10s - 1000)/r
10 x 136.12° (in radians) - 1000/r = 2π/ln2
136.12° (in radians) x 3.819072 = 2π/ln2

That is, 2π/ln2 is the arc length corresponding to 136.12° expressed in radians, multiplied by 6 sacred cubits.
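A quick numerical check of the geometric identities quoted above (a sketch only; r = 68.1 and θ = 136.12° are the figures given in this post):

```python
import math

r = 68.1
theta = math.radians(136.12)          # ≈ 2.375742 radians

print("sin(136.12°)  =", math.sin(theta), "  vs  ln 2     =", math.log(2))
s = r * theta                         # arc length s = r·θ ≈ 161.788
print("s = r·θ       =", s)
print("(10s - 1000)/r =", (10 * s - 1000) / r, "  vs  2π/ln2 =", 2 * math.pi / math.log(2))
```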
-
"It is my belief that RH is a genuinely arithmetic question that likely will not succumb to methods of analysis. Number theorists are on the right track to an eventual proof of RH, but we are still lacking many of the tools."
J. Brian Conrey

"It is clearly a preliminary note and might not have been written if L. Kronecker had not urged him to write up something about this work (letter to Weierstrass, Oct. 26 1859). It is clear that there are holes that need to be filled in, but also clear that he had a lot more material than what is in the note. What also seems clear: Riemann is not interested in an asymptotic formula, not in the prime number theorem, what he is after is an exact formula!"
(Atle Selberg, commenting on Riemann's paper in a lecture given in Seattle in August 1996, on the occasion of the 100th anniversary of the proof of the prime number theorem: A. Selberg, The history of the prime number theorem, A Symposium on the Riemann Hypothesis, Seattle, Washington)

This exact formula has been obtained here: the five element subdivisions of the interval lead directly and precisely to the values of the zeta zeros, to the nth decimal precision desired. The Riemann-Siegel formula is a local expression, while the Five Element Subdivision algorithm is a global formula. It involves no transcendental or algebraic functions, only the four elementary operations of arithmetic.

Thus, all of the zeros of the zeta function must be located on the 1/2 critical line: if any of the non-trivial zeros of the Riemann zeta function ζ(s) were to lie off the critical line, at s = σ + it with σ = 1/2 - ε, then the values of all of the other zeros would have to be modified as well, all the way down to the first zero, 14.134725.

The sum of any two sides of a triangle is greater than the third side. The five elements sequence of proportions would be disrupted, since the distance from the previous zero to the zero lying off the critical line, plus the distance from that zero (on the σ = 1/2 - ε line) to the next zero, would be greater than the distances from that previous zero to the next two zeta zeros found on the critical line. Moreover, since there are two counter-propagating zeta function waves, there would have to be TWO zeros off the critical line within the same 63.6363... segment.

To see the issues involved, here are the first five element subdivisions (first upper and lower bounds) for the second and third zeta zeros.

Second zero, 21.022:
14.134725 + 6.3636 = 20.4975
14.134725 + 6.3636 + 0.80886 = 21.30656
(14.134725 + 63.6363) - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.732 - 2.7834 = 22.29945
(14.134725 + 63.6363) - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.732 - 2.7834 - 2.0757 = 20.22945

Third zero, 25.0108:
14.134725 + 9.5445 + 1.68632 = 25.36602
14.134725 + 9.5445 + 0.99492 = 24.67
(14.134725 + 63.6363) - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.732 = 25.099425
(14.134725 + 63.6363) - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.732 - 1.64 = 23.459

Even the slightest deviation from the 21.022039639 and 25.010857580 values would invalidate the entire five element subdivision algorithm. The values of the zeta zeros are a consequence of the precise five element subdivisions fractal. Thus, if a zero were located off the critical line, all of the values of the previous zeros would have to be modified as well, all the way down to the first zero, 14.134725.
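For readers who want to redo the bound arithmetic above, here is a minimal sketch using the same constants listed in this post (small differences in the last decimals are just rounding in the hand computation):

```python
# Bounds for the second zero (true value 21.022039639), arithmetic as in the post
z1 = 14.134725
first_interval_end = z1 + 63.6363          # right end of the 63.6363... segment

lower_1 = z1 + 6.3636                      # ≈ 20.4975  (first zeta function)
upper_1 = z1 + 6.3636 + 0.80886            # ≈ 21.3066

steps = [16.1773, 12.066, 8.998, 6.7106, 5.0045, 3.732, 2.7834]
upper_2 = first_interval_end - sum(steps)  # ≈ 22.2994  (second zeta function)
lower_2 = upper_2 - 2.0757                 # ≈ 20.2237

print(f"first zeta function  : lower {lower_1:.5f}, upper {upper_1:.5f}")
print(f"second zeta function : lower {lower_2:.5f}, upper {upper_2:.5f}")
```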
Currently, the values of the zeta zeros are thought to be totally random:

http://math.sun.ac.za/wp-content/uploads/2011/03/Bruce-Bartlett-Random-matrices-and-the-Riemann-zeros.pdf
https://pdfs.semanticscholar.org/fc82/c1f7e35f23eb1695b0c78830c366e1258c88.pdf

One of the best mathematicians in the world, Dr. Yuri Matiyasevich (who solved Hilbert's tenth problem), could not find a scientific journal willing to publish his results, which prove that there is a definite relationship between the values of the zeta zeros:

https://www.researchgate.net/publication/265478581_An_artless_method_for_calculating_approximate_values_of_zeros_of_Riemann's_zeta_function
https://phys.org/news/2012-11-supercomputing-superproblem-journey-pure-mathematics.html

The five element subdivision algorithm (fractal) creates the zeta zeros, which in turn are related to the distribution of the prime numbers.

"These zeros did not appear to be scattered at random. Riemann's calculations indicated that they were lining up as if along some mystical ley line running through the landscape."

The mystical ley line has been revealed here: it is the five element subdivision algorithm.

"Present an argument or formula which (even barely) predicts what the next prime number will be (in any given sequence of numbers)."

The relationship between log p and the values of the zeta zeros:

http://www.dam.brown.edu/people/mumford/blog/2014/RiemannZeta.html

The log-prime figures give oscillating terms whose discrete frequencies correspond to the true zeros of the zeta function, and this method can be extended to large primes. Since we now know that the five element subdivision algorithm creates the actual zeta zero values, those values can be anticipated in a very precise fashion, thus making possible the prediction of the next prime number.
<N(T)> = (T/2π) log(T/2πe) + 7/8

Let <N(T)> = n (an integer value), so that n - 7/8 = (T/2π) log(T/2πe).

http://mathworld.wolfram.com/LambertW-Function.html
The Lambert W function is the inverse of f(W) = We^W.

Main subdivision point =~ 2πe ⋅ e^W[(n - 7/8)/e]

Mathematica software for the Lambert W function:
http://functions.wolfram.com/webMathematica/FunctionEvaluation.jsp?name=ProductLog
(A short numerical sketch of this Lambert W inversion follows below.)

Global algorithm relating to the Lehmer pairs (corresponding <N(t)> values in parentheses).

7005.0629 and 7005.10056
7004.0437 (6707.487)
7005.0629 (6708.626)
7005.10056 (6708.667)
7006.74 (6710.498)
That is, the Lehmer pair will be located between the average number of zeta zeros values of 6708 and 6709.
Sacred cubit interval: 63.6363636363...
6999.999 to 7063.6363
For the first zeta function the values are:
7003.1814
7003.9903
7004.593
7005.0435
7005.17544
7005.379
The calculations for the second zeta function:
7004.541
7005.175
7005.48
Since 7005.175 and 7005.17544 are very close figures, a Lehmer pair must be located in the vicinity of these values.
Using 200/π = 63.66197724 as a sacred cubit interval the results are not as impressive: the nearest values are 7004.6825 and 7004.689, 7005.2652 and 7005.23.

63137.2115 and 63137.2324
63136.537 (82551.023)
63137.2115 (82552.013)
63137.2324 (82552.0434)
63138.2238 (82553.4973)
Sacred cubit interval: 63.6363636363...
63127.21 to 63190.846
For the first zeta function the values are:
63136.755
63137.42
63137.0866
(+9.5445, +0.66328, +0.33164)
The calculations for the second zeta function lead to these values:
63138.1574
63137.61
63137.063
(... -3.7322, -0.5474, -1.09478)
Since 63137.0866 and 63137.063 are very close figures, a Lehmer pair must be located in the vicinity of these values.
Using 200/π = 63.66197724 as a sacred cubit interval the results are not as impressive; the nearest values are 63137.32 and 63137.185.

71732.9012 and 71732.91591
71732.02 (95246.674)
71732.9012 (95247.984)
71732.91591 (95248.00608)
71734.097 (95249.76)
Sacred cubit interval: 63.6363636363...
71718.11 to 71781.75
For the first zeta function the values are:
71734.287
71727.655
71729.341
71730.6
71731.536
71732.235
71732.757
The calculations for the second zeta function:
71732.7936
71732.06
71731.326
71730.6
Two pairs of zeta zeros are very close: 71730.6 (appearing in both lists) and 71732.757 / 71732.7936. To distinguish between these choices, the second five element subdivision algorithm will be applied.

53.4  106.8  136.1  160  534

63.636363
19.091
16.1773
12.7272
6.363

63.63 - 19.091 = 44.5453
44.5453
13.363
11.3252
8.91
4.454

44.5453 - 13.363 = 31.1823
31.1823
9.3547
7.9278
6.23646
3.11823

31.1823 - 9.3547 = 21.8276
21.8276
6.5483
5.5494
4.36552
2.18276

21.8276 - 6.5483 = 15.2793
15.2793
4.5838
3.88461
3.05586
1.52793

For the first zeta function, the values are:
71737.201
71730.8372
71731.8722
71734.2873
71732.6
71732.938
71732.77
The calculations for the second zeta function:
71733.4
71731.865
71730.337
71732.935
71732.663
71732.721

Since 71730.8372 and 71730.337 are not as close to one another as the corresponding pair obtained with the first five element subdivision algorithm, we are not dealing with a Lehmer pair there; the same analysis applies to 71731.8722 and 71731.865 (the corresponding pair from the first five element subdivision algorithm is not this close).

Amazingly, the two five element subdivision algorithms have located the precise interval of the Lehmer pair: 71732.757 and 71732.7936, 71732.938 and 71732.935. The actual values are 71732.9012 and 71732.91591.

Using 200/π = 63.66197724 as a sacred cubit interval the results are not as impressive; the nearest values are 71732.9171 and 71732.909, after a
long series of calculations (more involved than using 63.6363636 as a sacred cubit interval).220538.853 and 220538.8702220537.0585 (332251.37)220537.4266 (332251.98)220538.853 (332254.36)220538.8702 (332254.39)220539.8528 (332256.0258)220538.853 is the 332254th zero, 220538.8702 is the 332255th zero, where the average spacing is 0.6.Sacred cubit interval: 63.6363636363...220499.78 220563.416For the first zeta function the values are:220537.0213220538.341220538.926220538.676220538.824The calculations for the second zeta function:220537.925220538.863Since 220538.824 and 220538.863 are very close figures, a Lehmer pair must be located in the vicinity of these values. Using the second five element subdivision algorithm, the following results are obtained:220538.471220538.901220538.81220538.64220535.415220539.871220538.534220538.738220538.98Again, the Lehmer pair must be located very close to the value of 220538.9.Using 200/π = 63.66197724 as a sacred cubit interval the results are not as impressive: the nearest values are: 220538.8085 and 220538.8266.435852.8393 and 435852.8572435851.967 (703890.467)435852.8393 (703892.015)435852.8572 (703892.046)435853.455 (703893.107)Sacred cubit interval: 63.6363636363...435845.45 435909.0865435851.814435852.623435852.743435853.6145435852.7981The Lehmer pair must be located around the value of 435852.78.555136.9163 and 555136.9315555136.284 (917905.02)555136.9163 (917906.17)555136.9315 (917906.195)555137.412 (917907.066)Sacred cubit interval: 63.63636363...555099.9944 555163.631555133.547555137.2357555136.385555136.6555136.763555136.8831555135.3877555140.3352555137.44555136.92555136.767773657.1461 and 773657.1559773656.6413773657.1461773657.1559773658.041Sacred cubit interval: 63.63636363773627.265 773.690.9014773655.51773.657.278773657.3665773657.35947107.8201 and 947107.8325947107.2485947107.8201947107.8325947108.2566Sacred cubit interval: 63.63636363947099.9905 947163.627947106.354947107.163947107.766947108.155947107.7468Both sets of five element subdivision algorithms are needed to detect the Lehmer pairs, the zeta zeros values which are most difficult to find. 
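Here is the promised minimal sketch of the Lambert W inversion: starting from <N(T)> = (T/2π)log(T/2πe) + 7/8 and solving for T gives T ≈ 2πe·e^W[(n - 7/8)/e]. It uses mpmath's lambertw; it is meant only to illustrate the inversion and the round trip, not to reproduce the parenthetical <N(t)> figures above digit for digit.

```python
from mpmath import mp, lambertw, exp, log, pi, e

mp.dps = 25

def N_avg(T):
    """<N(T)> = (T/2pi) * log(T/(2*pi*e)) + 7/8, as used in this post."""
    return (T / (2 * pi)) * log(T / (2 * pi * e)) + mp.mpf(7) / 8

def main_subdivision_point(n):
    """Invert <N(T)> = n via Lambert W: T ~ 2*pi*e * exp(W((n - 7/8)/e)).
    The principal branch is real here, since the argument is positive."""
    return 2 * pi * e * exp(lambertw((n - mp.mpf(7) / 8) / e).real)

for n in (6708, 6709, 82552, 95248):
    T = main_subdivision_point(n)
    print(f"n = {n}:  T ≈ {T}   (round trip <N(T)> = {N_avg(T)})")
```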
http://www.dtc.umn.edu/~odlyzko/doc/zeta.derivative.pdf1.30664344087942265202071895041619 x 1022 1.30664344087942265202071898265199 x 1022The average spacing of zeros at that height is 0.128, while the above Lehmer pair of zeros is separated by 0.00032 (1/400th times the average spacing).Using a very large number calculator:205329683587299385104.8399385104 = 13066434408794226520207/63.6363636312935770065999861261552 = 205329683587299385104 x 63130664342794365258601.54936752 = 205329683587299385104 x .63636363 53.45063194549 = 0.8399835 x 63.6363636313,066,434,408,794,226,520,153.54936752 13,066,434,408,794,226,520,217.18573115The calculations for the first zeta function:13,066,434,408,794,226,520,169.72670085(+16.17733333)13,066,434,408,794,226,520,181.79268471(+12.06598386)13,066,434,408,794,226,520,190.791012836(+8.998328126)13,066,434,408,794,226,520,197.501606019(+6.710593183)13,066,434,408,794,226,520,202.506097991(+5.004491972)13,066,434,408,794,226,520,206.238247924(+3.732149933)13,066,434,408,794,226,520,207.332996246(+1.094748322)13,066,434,408,794,226,520,206.785622085(+0.547374161)13,066,434,408,794,226,520,206.924786491(+0.139164406)13,066,434,408,794,226,520,207.028569738(+0.103783247)13,066,434,408,794,226,520,207.105967138(+0.0773974)13,066,434,408,794,226,520,207.163687018(+0.05771988)13,066,434,408,794,226,520,207.189083398(+0.02539638)The computations for the second zeta function:13,066,434,408,794,226,520,207.64027661(-9.54545454)13,066,434,408,794,226,520,207.308682645(-0.331593965)13,066,434,408,794,226,520,207.224378195(-0.08430445)0.247289515 interval13,066,434,408,794,226,520,207.187284768(-0.037093427)The second five element subdivision algorithm:13,066,434,408,794,226,520,172.64027661(+19.090909)13,066,434,408,794,226,520,186.00391297(+13.363636)13,066,434,408,794,226,520,195.35845842(+9.35454545)13,066,434,408,794,226,520,201.906640238(+6.548181818)13,066,434,408,794,226,520,206.490367508(+4.58372727)1.069536360.3208609090.2719190.213907270.10695363613,066,434,408,794,226,520,206.811228417(+0.320860909)13,066,434,408,794,226,520,207.035831017(+0.2246026)13,066,434,408,794,226,520,207.193052861(+0.1572218844)13,066,434,408,794,226,520,210.82209485(-6.363636)13,066,434,408,794,226,520,208.91300395(-1.9090909)13,066,434,408,794,226,520,207.576640314(-1.336363636)3.1181818113,066,434,408,794,226,520,207.264822133(-0.3118181)13,066,434,408,794,226,520,207.202458497(-0.062363636)0.079276650.06236360.016913030.0050739113,066,434,408,794,226,520,207.197384587(-0.00507391)0.01183910.00355173713,066,434,408,794,226,520,207.19383285(-0.0035517370 7954022502373.432890153877954022502373.43289494012t2 - t1 = 4.7863 x 10-6 = 0.00000478637,954,022,502,331.87047696 7,954,022,502,395.506840597,954,022,502,348.04781029(+16.17733333)7,954,022,502,360.11379415(+12.06598386)7,954,022,502,369.112122276(+8.998328126)7,954,022,502,373.071330025(+3.959207749)2.7513854380.699512230.4127078150.27513854380.137569277,954,022,502,373.3464685688(+0.2751385438)0.137569270.034975617,954,022,502,373.3814441788 (+0.03497561)0.03497561 = 1/28.5914, where 286.1 = 450 sc (1 sacred cubit = 2/π or 7/22)0.102593660.02608347,954,022,502,373.4075275788(+0.0260834)0.076510250.0194519657,954,022,502,373.4269795438(+0.019451965)0.0570582820.00570582827,954,022,502,373.432685372(+0.0057058282)7,954,022,502,395.506840597,954,022,502,379.32950729(-16.1773333)7,954,022,502,374.58360426(-4.74590303)2.3729515170.603299197,954,022,502,373.98030507 
(-0.60329919)7,954,022,502,373.530388664(-0.449916406)1.3197359160.1319735910.0659867957,954,022,502,373.464401869(-0.065986795)0.0659867950.0167764827,954,022,502,373.447625387(-0.016776482)0.049210310.012511237,954,022,502,373.435114157(-0.01251123)0.0366990827,954,022,502,373.43327920286(-0.00183495414)7,954,022,502,373.43281268412(-0.00046651874)The second five element subdivision algorithm calculations:7,954,022,502,350.96138605(+19.09090909)7,954,022,502,364.32502241(+13.36363636)7,954,022,502,373.67956786 (+9.35454545)7,954,022,502,376.4159315(-19.090909)4.454545451.33636367,954,022,502,375.0795679(-1.3363636)7,954,022,502,374.1441134(-0.9354545)2.182727270.654818180.55493660.436545450.21827277,954,022,502,373.48929522(-0.65481818)Using the Riemann-Siegel asymptotic formula, the sum will feature at least O(4.56 x 1010) terms (for t = 1.30664344087942265202071895041619 x 1022): Using the five element subdivision algorithms, we only need to translate/shift the 63.63636363 interval by a factor of k: k = [t/63.6363636363] x 63.6363636363, where [ x ] denotes the integer part, and then simply apply the five element partition process for the two zeta functions to detect both regular zeta zeros and Lehmer pairs, carefully keeping a check on the average number of zeros values which are close to an integer (these are the values which are equivalent to a five element subdivision figure). The zeta zeros are generated by the five element subdivision algorithm. These zeros, in turn, determine the distribution of the prime numbers. Mathematicians have concentrated for far too long on the RH, and have neglected the more important issues: what do these zeros actually represent? Is there a hidden pattern to these values? "The lack of a proof of the Riemann hypothesis doesn't just mean we don't know all the zeros are on the line x = 1/2 , it means that despite all the zeros we know of lying neatly and precisely smack bang on the line x = 1/2 , no one knows why any of them do, for if we had a definitive reason why the first zero 1/2 + 14.13472514 i has real value precisely 1/2 we would have a reason to know why they all do. Neither do we know why the imaginary parts have the values they do.Answers to such questions depend on a much more detailed knowledge of the distribution of zeros of the zeta function than is given by the RH. Relatively little work has been devoted to the precise distribution of the zeros."C. King
-
Global formulas for Lehmer pairs and Large Gaps

Zeta zeros distribution: regular zeros, large gaps between zeros, Lehmer pairs and strong Lehmer pairs. The most difficult aspect of the distribution of the zeta zeros is the location of the strong Lehmer pairs.

There can’t be a zero of ζ'(s) between every pair of zeros of ζ(s), because the density of zeros of ζ(s) is log(T/2π)/2π while the density of zeros of ζ'(s) is log(T/4π)/2π. So on average there is a “missing” zero of ζ'(s) in each interval of width 2π/log 2 ≈ 9.06 in T.

ROOTS OF THE DERIVATIVE OF THE RIEMANN ZETA FUNCTION
https://arxiv.org/pdf/1002.0372.pdf

On Small Distances Between Ordinates of Zeros of ζ(s) and ζ'(s)
http://math.boun.edu.tr/instructors/yildirim/paper/OnSmallDistancesBtwOrdinates.pdf

LEHMER PAIRS AND DERIVATIVES OF HARDY’S Z-FUNCTION
https://arxiv.org/pdf/1612.08627.pdf
The author has calculated that the first two million zeros include 4637 pairs of zeros which satisfy the first assertion, while 1901 pairs actually belong to the set L.

LEHMER PAIRS REVISITED
https://arxiv.org/pdf/1508.05870.pdf
In other words, strong Lehmer pairs tend to arise from a small gap between zeros of ζ(s), and from zeros of ζ'(s) very near the critical line. Figure 2 shows the argument of ζ'(s)/ζ(s), interpreted as a color, in a region which includes Lehmer’s example. The Riemann zeros 1/2 + iγ6709 and 1/2 + iγ6710 are now poles, while in between we see a zero of ζ'(s) at 0.50062354 + 7005.08185555i, very close to the critical line, even on the scale of this close pair of Riemann zeros.

Global formula for Lehmer pairs/close pairs of zeta zeros:

T =~ {n ⋅ 2π/ln2 + n ⋅ 2π/ln2 + π/ln2}/2
T =~ {n ⋅ 2π/ln2 + π/ln2 + (n + 1) ⋅ 2π/ln2}/2
n > 2

T will always be part of an infinite sequence of Lehmer pairs (which also includes strong Lehmer pairs).

Large gaps formula for the zeta function zeros:

32 + 25 x n
16 + 25 x n
8 + 24 x n

There will always be large gaps right next to these values on the critical line.

14.134725142 21.022039639 25.010857580 30.424876126 32.935061588 37.586178159 40.918719012 43.327073281 48.005150881 49.773832478 52.970321478 56.446247697 59.347044003 60.831778525 65.112544048 67.079810529 69.546401711 72.067157674 75.704690699 77.144840069 79.337375020 82.910380854 84.735492981 87.425274613 88.809111208 92.491899271 94.651344041 95.870634228 98.831194218 101.317851006 103.725538040 105.446623052 107.168611184 111.029535543 111.874659177 114.320220915

Large gaps at: 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128

399.985119876 401.839228601 402.861917764 404.236441800 405.134387460 407.581460387 408.947245502 410.513869193 411.972267804 413.262736070 415.018809755 415.455214996 418.387705790

Large gaps at 400, 408 and 416.
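A minimal sketch of the two global Lehmer-pair formulas above, written out exactly as stated; the n values below are chosen so that the outputs land near the interval midpoints worked out in the examples that follow (414.71, 7004.76, 17143.65, 35839.6):

```python
import math

A = 2 * math.pi / math.log(2)   # 2π/ln2 ≈ 9.0647
H = math.pi / math.log(2)       # π/ln2  ≈ 4.5324

def T1(n):
    # {n·2π/ln2 + n·2π/ln2 + π/ln2} / 2
    return (n * A + n * A + H) / 2

def T2(n):
    # {n·2π/ln2 + π/ln2 + (n + 1)·2π/ln2} / 2
    return (n * A + H + (n + 1) * A) / 2

# n chosen to land near the 415, 7005, 17143 and 35839 regions discussed in this post
for n in (45, 772, 1891, 3953):
    print(f"n = {n:5d}:  T1 ≈ {T1(n):10.3f}   T2 ≈ {T2(n):10.3f}")
```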
Examples:

415.018809755 and 415.455214996
8-unit interval: 408 to 416
2π/ln2 + π/ln2 interval: 403.38 to 412.445
2π/ln2 interval: 407.912 to 416.977
(416.977 + 412.445)/2 = 414.711

7005.062866175 and 7005.100564674
8-unit interval: 7000 to 7008
2π/ln2 interval: 6997.964 to 7007.029
2π/ln2 + π/ln2 interval: 7002.496 to 7011.56
(7007.029 + 7002.496)/2 = 7004.763

17143.786536184 and 17143.821843505
8-unit interval: 17136 to 17144
2π/ln2 interval: 17141.386 to 17150.451
2π/ln2 + π/ln2 interval: 17145.918 to 17154.98
(17145.918 + 17141.386)/2 = 17143.65

35839.415210178 and 35839.746238617
8-unit interval: 35832 to 35840
2π/ln2 interval: 35832.8 to 35841.9
2π/ln2 + π/ln2 interval: 35837.36 to 35846.42
(35837.36 + 35841.9)/2 = 35839.6

How to generate the 2π/ln2 intervals:
9.06472, 18.1294, 27.194, 36.258, 45.323, ...
(we simply multiply 2π/ln2 by n, n = 1, 2, 3...)

How to generate the 2π/ln2 + π/ln2 intervals:
(9.06472 + 18.1294)/2 = 13.5971
13.5971 - 2π/ln2 = 4.532360142
4.53236, 13.5971, 22.6618, 31.726, 41.4197, ...
(we simply shift the 2π/ln2 intervals by a factor of π/ln2)

2π/ln2 ⋅ 144(n + ε)
144 ⋅ ε = k, where k = 1, 2, 3, ..., 143
2π/ln2 ⋅ (144n + k)

The previous formulas featured 2π/ln2 multiplied by n; this global formula incorporates the decimal parts as well, which have special values.

Examples:

17143.7865
17143.7865/(2π/ln2) = 1891.2648
1891.2648/144 = 13.1338
144 x 0.1338 =~ 19
2π/ln2 x (144 x 13 + 19) = 17141.385
The equations derived previously are special cases of this global formula.

169872.853
2π/ln2 x (144 x 130 + 20) = 169872.85

45505.59
2π/ln2 x (144 x 34 + 124) = 45504.8944

45436.65
2π/ln2 x (144 x 34 + 117) = 45441.44

412597.295
2π/ln2 x (144 x 316 + 13) = 412598.86

555136.9163
2π/ln2 x (144 x 425 + 41) = 555132.52
2π/ln2 x (144 x 425 + 42) = 555141.58
Average = 555137.051

7954022502373.43289015387
2π/ln2 x (144 x 6093543501 + 75) = 7954022502369.544
2π/ln2 x (144 x 6093543501 + 76) = 7954022502378.53
Average = 7954022502374.037

2414113624163.41943
2π/ln2 x (144 x 1849442389 + 136) = 2414113624163.446

13066434408794226520207.1895041619
Since T is now very large (the average spacing is 0.128), the decimal parts of k can also be used (k = v + 1/2, v + 1/4, v + 3/4); in this case 144 x 0.1441 = 20.75. With 20.75, we get:
2π/ln2 x (144 x 10010140964026289815 + 20.75) = 13,066,434,408,794,226,520,207.14279

8847150598019.22359827
2π/ln2 x (144 x 6777765214 + 101) = 8847150598015.23188
2π/ln2 x (144 x 6777765214 + 102) = 8847150598024.2966
Average = 8847150598019.764243

2π/ln2 x 144 = 415.496 x π

7005.1
2π/ln2 x (144 x 5 + 53) = 7007.028

A shortcut formula for Lehmer pairs:

(636.3 x 3n - 16.9 x n) x 2π/ln2, n = 1, 2, 3...

n = 1: 17150.45078 (zeros 17143.786536184, 17143.821843505; http://www.dtc.umn.edu/~odlyzko/zeta_tables/zeros1)
n = 2: 34300.90155 (34295.104944255, 34295.371984027)
n = 3: 51451.35233 (51448.076349964, 51448.729475327, 51449.153911623)
n = 4: 68601.80311 (68597.636479797, 68597.971396943)
n = 5: 85752.25388 (85748.621773488, 85748.861163006)
n = 6: 102902.7047 (102907.166732245, 102907.475751344)
n = 7: 120053.1554 (120055.446373211, 120055.565321075)

2π/ln2 is the most important constant of the eta zeta function (the alternating series zeta function):
https://arxiv.org/pdf/math/0209393.pdf
https://arxiv.org/pdf/0706.2840.pdf

This would be the starting point for proving the shortcut formula, which really does provide the exact results.

Now, would using this shortcut formula for Lehmer pairs together with the global algorithm for finding zeta zeros be enough to prove the RH? Not yet, since we need to prove that the shortcut formula will ALWAYS include a strong Lehmer pair in the sequence of Lehmer pairs. However, we can accomplish something else.
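A minimal sketch of the 2π/ln2 x (144n + k) bookkeeping from the worked example above (17143.7865 → n = 13, k ≈ 19); the rounding rule for k is an assumption, since this post only shows the hand computation:

```python
import math

A = 2 * math.pi / math.log(2)   # 2π/ln2

def approximate(t: float):
    """Mirror the worked example in the post: t -> 2π/ln2 * (144n + k)."""
    q = t / A                        # e.g. 17143.7865 -> 1891.2648
    n = int(q // 144)                # e.g. 13
    k = round(q - 144 * n)           # e.g. ~19 (rounding rule assumed here)
    return n, k, A * (144 * n + k)

for t in (17143.7865, 169872.853, 412597.295, 2414113624163.41943):
    n, k, approx = approximate(t)
    print(f"t = {t}:  n = {n}, k = {k}, 2π/ln2·(144n + k) = {approx:.3f}")
```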
The existence of infinitely many Lehmer pairs implies that the de Bruijn-Newman constant Λ is equal to 0. Therefore, a constructive/computer-assisted proof of the Riemann hypothesis would be possible if further Lehmer pairs could be produced with little computational effort. I believe that the strong Lehmer pairs have shortcut formulas/special infinite sequences from which they can be generated with little effort; the (636.3 x 3n - 16.9 x n) x 2π/ln2 infinite sequence certainly suggests that other similar sequences exist.

Since we no longer have to rely on the Riemann-Siegel formula to produce the zeta zeros, the calculation of zeros around the 10^50, 10^300, 10^1000 heights on the 1/2 critical line becomes possible using the four subdivisions algorithm and the França-LeClair equation, ϑ(t_n) + lim_{δ→0+} arg ζ(1/2 + δ + it_n) = (n - 3/2)π, used in conjunction with Backlund's method and Gram points.

That is, further Lehmer pairs can be produced with very little effort, using the two infinite sequences above; further sequences exist which can capture the strong Lehmer pairs even better. These Lehmer pairs can then be used to produce better and better lower bounds for the de Bruijn-Newman constant, finally proving that Λ is equal to zero.

The values of the strong Lehmer pairs behave like regular zeta zeros in a way: they exhibit large gaps and double Lehmer pairs (two pairs located very close to each other). To understand the behavior of the regular zeta zeros, a certain interval was used (63.636363...), and the five elements subdivision algorithm was used to capture perfectly the value of each zeta zero. The Lehmer pairs (see the definition used earlier) occur every 2π/ln2 units, or at an average of the n x 2π/ln2 and (n + 1) x 2π/ln2 values. Even this information can be used to great advantage, together with the five elements subdivision algorithm or with the França-LeClair equation, to find the values of each and every Lehmer pair at very great heights on the 1/2 critical line, a feat which could not be accomplished before.

Strong Lehmer pairs tend to arise from a small gap between zeros of ζ(s), and from zeros of ζ'(s) very near the critical line.

Interval for the strong Lehmer pairs: 2π/ln2 x 100 sacred cubits. That is, we treat each 2π/ln2 value as a single unit of measure (a distance of 9.064720284... = one unit).
2π/ln2 x 100 sacred cubits = 576.84583...

Then, we subdivide this interval just like before, using the 26.7, 53.4, 80, 136.1, 534 subdivisions, looking for the location of the strong Lehmer pairs.

576.84583
146.657
86.52676
57.6845
28.842

576.84583 - 146.657 = 430.88
430.88
109.371
64.5282

430.88 - 109.371 = 320.817
320.817
81.5645
48.1225

320.817 - 81.5645 = 239.2525
239.2525
60.827

239.2525 - 60.827 = 178.4255
178.4255
45.388
26.71
17.84
8.92

178.4255 - 45.388 = 133.0375
133.0375
33.823
19.955
13.303
6.65

133.0375 - 33.823 = 99.2145
99.2145
25.224

With these values, we obtain very nice approximations for the Lehmer pairs located at 111.03 and 415.45: 416.26 (first zeta function) and 419.37 (second zeta function), and 113.08 (first zeta function).

To capture the values of the higher strong Lehmer pairs, 7005.1 and 17143.78, the interval is increased to 57684.52413 (2π/ln2 x 10000 sacred cubits). Using the same subdivision, we get 7048.6 and 16950.5 as the first values of an entire sequence of approximations.

For the strong Lehmer pairs which have 12 digits, the interval becomes 57684583623255.194152266 (2π/ln2 x 1 x 10^12 sacred cubits), or, to a better approximation, 57684583623255.1941522669588143649472078575.

This would be the only way to get approximate values of very large strong Lehmer pairs, and to gain an understanding of their location, which is not random but follows a very precise pattern, based on the 2π/ln2 x 100 sacred cubits interval.
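A minimal sketch of the interval subdivision just described (the proportions are taken from the 16.1773 / 9.5445 / 6.36363 / 3.1815 split of the 63.636363 base interval used in these posts); the printed figures come out close to, though not always identical with, the ones listed above:

```python
import math

SACRED_CUBIT = 0.63636363636
UNIT = 2 * math.pi / math.log(2)          # one unit = 2π/ln2 ≈ 9.0647
BASE = 63.636363636
PROPS = [16.1773 / BASE, 9.5445 / BASE, 6.3636363 / BASE, 3.1815 / BASE]

def subdivide(interval: float, steps: int = 7):
    """List the four proportional parts of the interval, then subtract the
    largest part and repeat -- the procedure used above for 576.84583."""
    for _ in range(steps):
        parts = [interval * p for p in PROPS]
        print(f"{interval:12.5f} ->", ", ".join(f"{x:10.4f}" for x in parts))
        interval -= parts[0]

# Interval for the strong Lehmer pairs: 2π/ln2 x 100 sacred cubits ≈ 576.84583
subdivide(UNIT * 100 * SACRED_CUBIT)
```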
-
Global algorithm for the zeros of the zeta function

The Riemann zeta function takes the prize for the most complicated and enigmatic function. The lack of a proof of the Riemann hypothesis doesn't just mean we don't know all the zeros are on the line x = 1/2; it means that, despite all the zeros we know of lying neatly and precisely smack bang on the line x = 1/2, no one knows why any of them do, for if we had a definitive reason why the first zero 1/2 + 14.13472514i has real value precisely 1/2, we would have a definitive reason to know why they all do.

Since the energy levels of atoms are reflected in the zeros of the Riemann zeta landscape (see the calculation of the moments which gives rise to the sequence of numbers 1, 2, 42, 24024..., verified by the quantum physicists and by the mathematicians), there must exist a certain interval which best captures all of the important features/formulas of the Riemann zeta function.

The value of the first zero is 14.134725...

List of zeros of the zeta function: http://www.dtc.umn.edu/~odlyzko/zeta_tables/index.html

14.134725 x 45 = 636.062625...
200/π = 63.661977...
1400/22 = 63.636363...

The interval which best captures/describes the zeta function is 63.63636363...

Now, here is the most crucial observation. Virtually all mathematicians have forgotten that there are TWO zeta functions to be investigated: only the zeros on the critical 1/2 line whose imaginary parts are positive (and the corresponding zeta function) have been researched, while the zeros on the critical 1/2 line whose imaginary parts are negative (and the corresponding zeta function) have been left aside, rarely even mentioned in most papers. In order to reach any kind of meaningful results, especially using ONLY the tools provided by basic arithmetic, we must use BOTH these zeta functions in a novel way. That is, we must translate the second zeta function to that interval: now we will have two zeta functions within the same interval.

"The zeta function is probably the most challenging and mysterious object of modern mathematics, in spite of its utter simplicity."

"We may – paraphrasing the famous sentence of George Orwell – say that 'all mathematics is beautiful, yet some is more beautiful than the other'. But the most beautiful in all mathematics is the zeta function. There is no doubt about it."

The basic interval of 63.63636363 will be divided like a fractal, according to these values/proportions:

26.7  53.4  80  136.1  534

Applying these five elements proportions to our fundamental distance, we get:

63.636363  16.1773  9.5445  6.36363  3.1815

That is, 534 is divided by 20 to obtain 26.7, and divided by 10 to get 53.4; it is divided by approximately 20/3 to get 80, and 534/136.1 =~ 1/(4 x 0.063636363...). We divide the 63.636363 interval in the same way (same proportional values) to obtain: 63.636363, 16.1773, 9.5445, 6.3636363, 3.1815.

From left to right, the first zeta function will include the zeros from 14.134725 to 77.771. From right to left, the second, translated zeta function will include the zeros from 14.134725 to 77.771 pointing the other way.

77.771... - 14.134725 = 63.63636363...

Here are the values of the zeta zeros on the first interval (from 14.134725... to 77.771 over a distance of 63.6363636...
units): 14.134725142 21.022039639 25.010857580 30.424876126 32.935061588 37.586178159 40.918719012 43.327073281 48.005150881 49.773832478 52.970321478 56.446247697 59.347044003 60.831778525 65.112544048 67.079810529 69.546401711 72.067157674 75.704690699 77.144840069 Now, we will further subdivide (in a way the basic interval will become a fractal) each subsequent interval according to the same values/proportions, as follows: 63.63636316.17739.54456.363633.18159.5445 - 6.36363 = 6.36363 - 3.1815 = 3.18153.18150.808860.4772250.318150.15907516.1773 - 9.5445 = 6.63286.63281.686320.994920.663280.3316463.6363 - 16.1773 = 47.45947.45912.0667.118854.74592.37347.459 - 12.066 = 35.39335.3938.9985.3093.53931.7735.393 - 8.998 = 26.39526.3956.71063.962.63951.3197526.395 - 6.7106 = 19.689419.6894 5.00452.952661.968940.9842219.6894 - 5.0045 = 14.6814.68 3.73722.2021.4680.73414.68 - 3.7372 = 10.947810.94782.78341.64221.094780.547410.9478 - 2.7834 = 8.16948.1694 2.07571.224660.816940.408228.1694 - 2.0757 = 6.08876.08871.5480.91330.604870.3044256.0887 - 1.548 = 4.54074.54071.15440.6811050.454070.2270354.5407 - 1.1544 = 3.38633.38630.8610.5080.338630.1693153.3863 - 0.861 = 2.52532.52530.6920.37880.252530.1262.5253 - 0.692 = 1.8331.8330.46610.2750.18330.0912.066 - 7.11885 = 4.9474.9471.25770.742050.49470.28735 Then, the values of the subdivision of our basic interval using the five elements ratios/proportions, will nearly coincide with the values of the zeroes of Riemann's zeta function. 14.134725142 21.022039639 20.497725 (14.134725 + 6.363) 25.010857580 23.679225 (14.134725 + 9.5445) 30.424876126 30.312025 (14.134725 + 16.1773) 32.935061588 32.685 (30.312025 + 2.373) 37.586178159 37.43 (30.312 + 7.11885) 40.918719012 40.5234 (77.7647 - 16.1773 - 12.066 - 8.998) 43.327073281 42.37 (30.312 + 12.066) 48.005150881 47.68 (42.37 + 5.309) 49.773832478 49.554 (77.7647 - 16.1773 - 12.066) 52.970321478 51.37 (42.37 + 8.998) 56.446247697 55.336 (51.37 + 3.96) 59.347044003 58.08 (51.37 + 6.7106) or 59.07 (51.37 + 6.7106 + 0.98422) 60.831778525 60.05 (58.08 + 1.96844) 65.112544048 65.06 (60.05 + 5.0045) 67.079810529 67.26 (65.06 + 2.202) 69.546401711 68.79 (65.06 + 3.7322) 72.067157674 71.575 (68.79 + 2.7834) 75.704690699 75.1988 (71.575 + 2.0757 + 1.548) 77.144840069 77.214 (75.1988 + 1.1544 + 0.861) To put it another way: For the first zeta function, the subdivision points will look like this: 14.1347 +3.1815 = 17.3162 *6.363 = 20.4947 *9.545 = 23.68 *16.1773 = 30.312 *2.373 = 32.685 *4.746 = 35.0587.1185 = 37.43 } *12.066 = 42.378 }1.77 = 44.148 *3.54 = 45.92 * midpoint5.309 = 47.687 *{8.998 = 51.376{1.319 = 52.6952.64 = 54.02 *3.96 = 55.336.7106 = 58.086 *0.984 = 59.07 *1.968 = 60.052.95 = 61.03 *5.0045 = 63.1 }0.734 = 63.8 }1.468 = 64.56 }2.2 = 65.3 }3.73 = 66.8 }1.64 = 68.46 }2.783 = 69.61.224 = 70.832.07 = 71.67 *1.548 = 73.221.154 = 74.38 *0.861 = 75.240.692 = 75.930.4661 = 76.40.3475 = 76.7450.26 = 77 For the second zeta function, right on the same interval, we will obtain: 14.1347 + 63.63 = 77.764777.7647 -3.1815 = 74.58 *6.363 = 71.4 *9.545 = 68.22 * }16.173 = 61.587 * }2.373 = 59.2 *4.746 = 56.84 *7.1185 = 54.469 * }12.066 = 49.52 }1.77 = 47.75 *3.54 = 45.98 * midpoint5.309 = 44.2 *8.998 = 40.52 }1.319 = 39.2 }2.64 = 37.88 }3.96 = 36.56 *6.7106 = 33.80.984 = 32.8 *1.968 = 31.842.95 = 30.86 *5.0045 = 28.81 0.734 = 28.071.468 = 27.342.2 = 26.613.73 = 25.081.64 = 23.43 *2.783 = 22.231.224 = 21.072.07 = 20.22 *1.548 = 18.671.154 = 17.52 * 0.861 = 16.660.692 = 15.970.4661 = 15.5 0.3475 = 15.15 0.26 = 
14.92π/log(t/2π) is the average gap/spacing formula. Now, we are ready for the global algorithm for the zeta zeros. z1 = 14.134725L(z1) = 7.7497714.134725 + 7.74977 = 21.884497Now, all of the previous results will be applied to obtain the value of the second zero of the zeta function, to four decimal places accuracy, using only the five elements subdivision applied to both zeta functions as a guide.63.63636316.17739.54456.363633.18159.5445 - 6.36363 = 6.36363 - 3.1815 = 3.18153.18150.808860.4772250.318150.15907514.134725 + 6.36363 = 20.4977 (lower bound)First estimate, using the zeta function directed to the left, the lower bound20.4977 + 0.80886 = 21.30656 (upper bound) 2.7834 22.29945 Upper bound of first estimate using the zeta function directed to the right.2.075720.22945Lower bound Both lower and upper bounds of the estimates both zeta functions will be used to refine the approximation.Since the lower bound of the second zeta function has a smaller value than the corresponding figure of the lower bound of the first zeta function, the next value in the five element subdivision is substracted to get an UPPER BOUND.10.94782.78341.64221.904780.54748.16442.0757 1.224660.816440408221.2246621.069425This is the new UPPER BOUND for the approximation.3.18150.808860.4772250.318150.15907520.4977 + 0.477225 = 20.9752.0757 - 1.22466 = 0.851040.851040.216370.1276560.0851040.042552Substracting these bottom four values successively from 21.069425:20.85305520.9417720.98432121.026873Since now 20.984321 exceeds the estimate from the other zeta function (20.975), this will be new LOWER BOUND of the approximation.So far:Lower bound: 20.984321Upper bound: 21.0694250.80886 - 0.477225 = 0.3316350.3316350.0843150.0497450.03316350.016582Adding 0.084315 to 20.975 will equal 21.0593, a figure which already exceeds the upper bound.Adding 0.049745 to 20.975 will equal 21.024745.Adding 0.0331635 to 20.975 will equal 21.0081635.21.024745 will be the new upper bound of the approximation.0.085104 - 0.042552 = 0.0425520.0425520.010818420.00638280.00425520.0021276Substracting the bottom four figures from 21.026873 we obtain:21.01605521.020521.0226221.02474521.024745 is the SAME VALUE obtained from the five element subdivision for the first zeta function, this is how we know it is the upper bound of the entire approximation.The lower bound is 21.016055.To get the lower bound for the first zeta function, we have to subdivide the interval further.The last estimate was 21.0081635.0.084315 - 0.049745 = 0.03457Using only the first two subdivision values:0.034570.008789121.024545 + 0.0087891 = 21.03353, a figure which is too large.0.049745 - 0.0331635 = 0.016582Again, using only the first two subdivision values:0.0165820.004215721.0081635 + 0.0042157 = 21.01238Continuing in this way we obtain:21.01786This will be the new lower bound of the entire approximation.Continuing even further:21.0226217 (this corresponds to the subdivision 0.00285253 and 0.00072523; the previous subdivision is 0.003825 and 0.00097247).This is the same value as the one obtained from the other subdivision.This will be new UPPER BOUND of the entire approximation.0.0063828 - 0.0042552 = 0.00212730.00212730.0005408450.00031810.000212730.00010636521.02262 - 0.000540845 = 21.0220791621.02262 - 0.00021273 = 21.02240727To get the new lower bound, a figure higher than 21.016055 has to be obtained from the first zeta function subdivision.0.0038250.00097250.000573750.00038250.00019125The value corresponding to 0.0009725 is 21.0218965.This now is the new lower 
bound.So far:21.0218965 = lower bound21.0226217 = upper bound0.00285253 - 0.00072523 = 0.0021273This is the same value as that obtained earlier from the second zeta function.Since 21.02207916 exceeds 21.0218965, it will become the new UPPER BOUND of the entire approximation.0.0009725 - 0.00057375 = 0.000398750.000398750.00010137821.0218965 + 0.000101378 = 21.02199788This figure will be the new lower bound.The true value for the second zeta zero is:21.022039639Already we have obtained a five digit/three decimal place approximation:21.02207916 z2 = 21.022L(z2) = 5.202621.022 + 5.2026 = 26.2246The third zeta zero, to four decimal places accuracy, using only the five elements subdivision applied to both zeta functions as a guide.63.63636316.17739.54456.363633.181514.134725 + 9.545 = 23.674716.1773 - 9.5445 = 6.63286.63281.686320.994920.663280.3316423.6747 + 1.68632 = 25.3660223.6747 + 0.99492 = 24.6746223.6747 is the first lower bound.25.36602 is the first upper bound. 3.732225.0994251.6423.4594251.0947824.0019450.547424.552025The values are taken from the subdivision:14.683.73222.2021.4680.73414.68 - 3.7322 = 10.947810.94782.78341.64221.094780.5474Upper bound of first estimate using the zeta function directed to the right:25.099425Lower bound:24.552025Just like before, we search for a higher lower bound in both subdivisions, and for a lower upper bound in both subdivisions (a comparison, in order to locate the precise and correct subdivision interval for the zeta zero).So far:Upper bound25.099425Lower bound24.674621.68632 - 099492 = 0.69140.69140.175780.103710.069140.03457Adding these bottom four values successively to 24.67462:24.850424.778324.7737624.70920.6914 - 0.17578 = 0.515620.515620.13110.0773430.0515620.025781Adding these bottom four values successively to 24.8504:24.981524.927724.90224.8762From the other zeta function:0.54740.1391710.082110.054740.02737Substracting these bottom four values successively from 25.099425:24.9602525.017325.04468525.07205524.96025 is the new LOWER BOUND.Since 24.9815 (from the other zeta function) is a higher lower bound, this value will become the new lower bound for the entire approximation.In order to obtain the new upper bound:0.51562 - 0.1311 = 0.384520.384520.097760.0576780.0384520.019226Adding these bottom three values successively to 24.9815:25.03917825.01995225.000726Then, 25.000726 becomes the new lower bound, while 25.019952 is the new upper bound for the first zeta function.Since 25.0173 (second zeta function) is a lower value than 25.019952, this then is the new UPPER BOUND for the entire approximation.So far:25.000726 is the lower bound25.0173 is the upper bound0.038452 - 0.019226 = 0.0192260.0192260.0048880.0028840.00192260.0009613Adding these bottom four values successively to 25.000726:25.00561425.0036125.00264925.00169Using the second zeta function:0.139171 - 0.08211 = 0.0570610.0570610.01450720.008560.00570610.00285305Substracting these bottom four values successively from 25.0173:25.002825.0087425.011625.014447The new lower bound is 25.00874 (a higher lower bound than 25.005614).The new upper bound is 25.0116.0.019226 - 0.004888 = 0.0143380.014338 0.00364530.00215070.00143380.0007169Adding these bottom four values successively to 25.005614:25.0092625.0077625.0070525.0063310.014338 - 0.0036453 = 0.01069270.01069270.00271850.0016040.001069270.000534635Adding these bottom four values to 25.00926:25.01225.01086425.0103325.009825.010864 is the new upper bound (a lower upper bound than 25.0116).25.01033 is the new lower bound.The true value 
for the third zeta zero is:25.01085758Already we have obtained a six digit/four decimal place approximation:25.010864 z3 = 25.0108L(z2) = 4.5483229.55912The fourth zeta zero, to three decimal places accuracy, using only the five elements subdivision applied to both zeta functions as a guide.63.63636316.17739.54456.363633.1815 14.134725 + 16.1773 = 30.31216.1773 + 2.373 = 32.6855.004528.812.9530.869430.312 is the first lower bound.30.8694 is the first upper bound.2.3730.603310.355950.23730.11865Adding the bottom three values to 30.312:30.66830.5530.430655.0045 - 2.95266 = 2.051842.051840.5097980.3077760.2051840.102592Substracting the bottom four values from 30.8694:30.359630.561630.66421630.7668130.3596 is the new lower bound.30.5616 is the new upper bound for the second zeta function.30.43065 is the new upper bound for the first zeta function; since this figure is smaller than 30.5616, it is the upper bound of the entire approximation.0.118650.03016560.01779750.0118650.0059325Adding the bottom four values to 30.312:30.3421630.329830.32430.3180.11865 - 0.0301656 = 0.0884844Using only the first two subdivisions (corresponding to 534 and 136.1):0.08848440.02249630.342 + 0.022496 = 30.36465The subdivisions for the second zeta function.0.509796 - 0.307776 = 0.2020220.2020220.05136230.5616 - 0.051362 = 30.510238In order to make a new comparison between the two zeta functions, we have to subdivide further in order to determine the correct upper and lower bounds using subdivisions which have a very close value.0.202022 - 0.051362 = 0.150660.150660.03830430.510238 - 0.038304 = 30.471930.15066 - 0.038304 = 0.1123560.1123560.028565430.47193 - 0.0285654 = 30.443360.112356 - 0.0285654 = 0.0837910.0837910.0213030.0125690.00837910.0041895Substracting the bottom four values from 30.44336:30.42205730.4307930.43530.43917The subdivisions for the first zeta function.0.08848440.02249630.342 + 0.022496 = 30.364650.0884844 - 0.022496 = 0.0659880.065988 0.01677730.36465 + 0.016777 = 30.381430.065988 - 0.016777 = 0.0492110.0492110.012511430.38143 + 0.0125114 = 30.393940.049211 - 0.0125114 = 0.03669960.03669960.0093305130.39394 + 0.00933051 = 30.403270.0366996 - 0.00933051 = 0.02736910.0273691 0.006958330.40327 + 0.0069583 = 30.4102280.0273691 - 0.0069583 = 0.02041080.02041080.00518924230.410228 + 0.005189242 = 30.415420.0204108 - 0.005189242 = 0.015221560.01522156 0.0038730.41542 + 0.00387 = 30.419280.01522156 - 0.00387 = 0.011351560.011351560.00288630.41928 + 0.002886 = 30.422176By comparison with the subdivisions obtained from the second zeta function, we can see that 30.422176 is the new lower bound.0.01135156 - 0.002886 = 0.00846560.00846560.002152330.422176 + 0.0021523 = 30.424328Returning to the subdivisions for the second zeta function.0.112356 - 0.0285654 = 0.0837910.0837910.0213030.0125690.00837910.0041895Substracting the bottom four values from 30.44336:30.42205730.4307930.43530.439170.021303 - 0.012569 = 0.008734 (this is the interval of the subidivision where the upper and lower bounds of the second zeta function are located)0.0087340.00222050.00131010.00087340.0004367Substracting the bottom values from 30.43079:30.4285730.4294830.42991730.43035Returning to the subdivisions for the first zeta function.0.00846560.002152330.422176 + 0.0021523 = 30.4243280.0084656 - 0.0021523 = 0.00631330.00631330.00160510.0009470.000631330.000315665Adding the bottom three values to 30.424328:30.42527530.42495930.424684Returning to the subdivisions for the second zeta function.0.008734 - 0.0022205 = 
0.00651350.00651350.00165630.42857 - 0.001656 = 30.42690.0065135 - 0.001656 = 0.00485750.00485750.00123530.4269 - 0.001235 = 30.425660.0048575 - 0.001235 = 0.00362550.00362550.0009210.000543830.000362550.0001813Substracting the four bottom values from 30.42566:30.4247430.4251230.425330.4254830.424684 is the new lower bound.30.424959 is the new upper bound.The true value for the fourth zeta zero is:30.424876126Already we have obtained a five digit/three decimal place approximation:30.424684 (to be continued) Let us remember that I am only using basic arithmetic to derive the values of the zeta zeros, in accordance with the optimum interval and subsequent subdivision, and using BOTH zeta functions on that same interval to obtain new upper and lower bounds for the zeros. z4 = 30.4247L(z4) = 3.9833134.408The fifth zeta zero, to three decimal places accuracy, using only the five elements subdivision applied to both zeta functions as a guide.63.63636316.17739.54456.363633.1815 16.1773 + 2.373 = 32.6854.7459 - 2.373 = 2.3732.3730.60330.3560.23730.118645Adding to the bottom four values to 32.685:33.288333.04132.922332.80361.96831.84940.98432.82946.710633.832.8294 is the first lower bound.Since 32.9223 is a higher lower bound, this value is the lower bound of the entire approximation.To find the first upper bound, we need to subdivide the intervals for the second zeta function further, in order to find a lower upper bound than 33.041.0.984220.2502333.8 - 0.25023 = 33.550.98422 - 0.25023 = 0.7340.7340.1866133.55 - 0.18661 = 33.3640.734 - 0.18661 = 0.54740.54740.13917133.364 - 0.139171 = 33.2250.5474 - 0.139171 = 0.408230.408230.10378833.225 - 0.103788 = 33.12120.40823 - 0.103788 = 0.3044420.3044420.077433.1212 - 0.0774 = 33.04380.304442 - 0.0774 = 0.2270420.2270420.0577233.0438 - 0.05772 = 32.986132.9861 is the new upper bound of the entire approximation.0.356 - 0.23729 = 0.118710.118710.03020.017810.0118710.0059355Adding the bottom four values to 32.9223:32.952532.940132.934232.92832.9401 is the new upper bound.Returning to the subdivisions for the second zeta function.0.227042 - 0.05772 = 0.169320.169320.0430532.9861 - 0.04305 = 32.943050.16932 - 0.04305 = 0.126270.126270.03210.018940.0126270.0063135Substracting the bottom four values from 32.94305:32.91132.924132.930432.9367332.93672 is the new upper bound.0.012627 - 0.0063135 = 0.00631350.00631350.00160520.0009470.000631350.000315675Substracting the bottom four values from 32.93673:32.93512532.93578332.936132.936414Returning to the subdivisions for the first zeta function.0.01781 - 0.011871 = 0.00593550.00593550.0015090.0008910.000593550.000297Adding the bottom four values to 32.9342:32.9357132.93509132.934832.9345Since 32.935091 is a lower value than 32.935125, this figure is the new upper bound of the entire approximation.0.0063135 - 0.0016052 = 0.00470830.00470830.001197040.0007062450.000470830.000235415Substracting the last figure from 32.935125 we obtain 32.93489.Since this is greater value than 32.9348, it becomes the new lower bound of the entire approximation.This is further proof that 32.935125 was an upper bound, and that 32.935091 is the new upper bound for the entire approximation.The true value for the fifth zeta zero is:32.935061588Already we have obtained a five digit/three decimal place approximation:32.935091Further subdivisions for greater accuracy.0.00047083 - 0.000235415 = 0.0002354150.000235415 0.0000598520.00003530.00002354150.000011771Substracting the bottom four values from 
32.935125:32.93506532.93508932.93510132.935113Returning to the subdivisions for the first zeta function.0.000891 - 0.00029745 = 0.000297450.000297450.00007562432.9348 + 0.000075624 = 32.93487560.00029745 - 0.000075624 = 0.0002218260.0002218260.000056432.9348756 + 0.0000564 = 32.934920.0001654260.00004205532.93492 + 0.000042055 = 32.9349620.000123370.00003136632.934962 + 0.000031366 = 32.93499340.0000923340.00002347532.9349934 + 0.000023475 = 32.935016880.0000688590.000017506732.93501688 + 0.0000175067 = 32.93503440.0000513530.00001305632.9350344 + 0.000013056 = 32.935047460.0000382970.0000097366332.93504746 + 0.00000973663 = 32.93505720.0000285610.0000072613532.9350572 + 0.00000726135 = 32.93506446This becomes the new upper bound of the entire approximation (a value smaller than 32.935065 obtained from the second zeta function subdivision).0.0000285610.000007261350.0000042841532.9350572 + 0.00000428415 = 32.93506148The true value for the fifth zeta zero is:32.935061588Already we have obtained an eight digit/six decimal place accuracy:32.93506148 z5 = 32.935L(z5) = 3.792736.7277The sixth zeta zero, to three decimal places accuracy, using only the five elements subdivision applied to both zeta functions as a guide.63.63636316.17739.54456.363633.1815 14.134725 + 16.1773 + 7.1185 = 37.4312.066 - 7.1185 = 4.94754.94751.25770.742050.494750.2473537.43 + 0.24735 = 37.677352.6437.87943.9636.5694237.6773 is the first upper bound.36.56942 is the first lower bound.0.247350.0628860.03710.0247250.012367Adding the bottom four values to 37.43:37.492937.46737.45537.4420.24735 - 0.062886 = 0.1844640.1844640.046937.4929 + 0.0469 = 37.540.184464 - 0.0469 = 0.1375640.1375640.0349740.020630.01375640.00688Adding the bottom four values to 37.54:37.57537.560637.55375637.5468837.575 is the new lower bound.3.96 - 2.6395 = 1.32051.32050.3357240.1980750.132050.066025Substracting the bottom values from 37.8794 (the upper bound for the second zeta function):37.5436737.68137.7473537.813370.335724 - 0.198075 = 0.1376490.1376490.034995810.020647350.01376490.00688245Substracting the bottom four values from 37.681:37.646004237.6603537.6672437.67410.137649 - 0.03499581 = 0.10265320.10265320.026137.6460042 - 0.0261 = 37.619910.1026532 - 0.0261 = 0.07655320.0765532 0.01946337.61991 - 0.019463 = 37.6004470.0765532 - 0.019643 = 0.05710.05710.01451460.0085650.00570.002855Substracting the bottom four values from 37.600447:37.585932437.59237.594737.597637.5859324 is the new lower bound of the entire approximation.Returning to the subdivisions for the first zeta function.0.137564 - 0.034974 = 0.102590.102590.02608250.01538850.0102590.0051285Adding the bottom four values to 37.575:37.60137.590437.5852637.5801337.5904 is the new upper bound.The true value for the sixth zeta zero is:37.586178159Already we have obtained a five digit/three decimal place approximation:37.5859324 The upper bound can also be used within the same derivation to obtain new estimates.14.134725 + 16.1773 + 7.1185 = 37.4312.066 - 7.1185 = 4.94754.94751.25770.742050.494750.2473537.43 + 0.24735 = 37.677350.247350.0628860.03710.0247250.012367Adding the bottom four values to 37.43:37.492937.46737.45537.442Now, the same values will be substracted from the upper bound, 37.6773:0.0628860.03710.0247350.012367Obtaining:37.614437.640237.652537.665The second lower bound for the second zeta function is 37.54367.Now, this value will be used to add the corresponding values belonging to the derivation of the first zeta function.0.0628860.03710.0247350.012367Adding these values 
Adding these values (belonging to the first zeta function subdivisions) to 37.54367: 37.6066, 37.581, 37.568, 37.556
Conversely, the upper bound of the first zeta function, 37.6773, will be used to get new estimates belonging to the second zeta function.
1.3205 → 0.335724, 0.198075, 0.13205, 0.066025
Subtracting these four values from 37.6773: 37.3416, 37.479, 37.545, 37.6113
The lower bound can also be used within the same derivation to obtain new estimates.
0.137649 → 0.03499581, 0.02064735, 0.0137649, 0.00688245
Adding these four values to 37.54367: 37.5786, 37.56432, 37.5574, 37.5505
The new estimates are: 37.545 as a lower bound, 37.6113 as an upper bound.
Then, 37.6066 becomes the new upper bound, while 37.581 is the new lower bound.
By observation, 37.600447 (the value previously calculated) becomes the new upper bound of the entire approximation.
Then, 37.5904, the value from the first zeta function subdivision, is the new upper bound, while 37.58526 is the new lower bound.
Thus, these new features/results greatly simplify the entire sequence of five element subdivision estimates: one can now also add to/subtract from the upper/lower bounds as needed, and use an estimate from the first zeta function (or from the second zeta function) as an upper/lower bound starting point in the subdivision calculations for the second zeta function (or for the first zeta function).

z6 = 37.586
L(z6) = 3.5126
41.098

The seventh zeta zero, to three significant digits accuracy, using only the five element subdivision applied to both zeta functions as a guide.

63.636363 → 16.1773, 9.5445, 6.36363, 3.1815
14.134725 + 16.1773 + 7.1185 = 37.43
12.066 - 7.1185 = 4.9475
4.9475 → 1.2577, 0.74205, 0.49475, 0.24735
37.43 + 1.2577 = 38.6877
4.9475 - 1.2577 = 3.6898
3.6898 → 0.9381, 0.55347, 0.36898, 0.1845
Adding these four values to 38.6877: 39.6258, 39.24117, 39.05668, 38.8722
5.309, 44.2, 8.998, 40.52
40.52 is the first lower bound.
44.2 is the first upper bound (even though 44.2 is greater than the eighth zeta zero, 43.327).
Now, the new features/results from the previous message on this page will be used.
8.998 - 5.309 = 3.689
3.689 → 0.9379, 0.55335, 0.3689, 0.18445
Adding these four values to 40.52: 41.458, 41.073, 40.889, 40.704
40.889 is the new lower bound.
41.073 is the new upper bound.
38.6877 + 3.6898 = 42.378
1.2577, 0.74205, 0.4947, 0.24735
Subtracting 1.2577 from 42.378 we obtain 41.1203.
3.6898 → 0.9381, 0.55347, 0.36898, 0.1845
Subtracting these four values from 41.1203: 40.1823, 40.567, 40.7513, 40.9358
Since 40.9358 is a smaller value than 41.073, 40.9358 is the new upper bound of the entire approximation.
The true value for the seventh zeta zero is 40.918719012.
Already we have obtained a three significant digit approximation: 40.9358
A more difficult approach, without using the new features/results, would be to use the five element subdivision algorithm, starting with 44.2 for the second zeta function (44.2, 43.262, 42.5632, 42.04172, 41.6526, 41.3624, 40.5112 and 41.146, 40.9846, 40.9136 and 40.93726) and continuing with the value of 38.6877 for the first zeta function (39.6258, 40.3254, 40.847126 and 40.9236, 40.86658, 40.8811, 40.892, 40.90007).
Once the four subdivision figures are obtained, there is no need to even bother to find the value of the corresponding zeta zero: all that matters are the five element subdivision points; then the zeta zero can be computed effortlessly if so desired.
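For readers who want to redo the bookkeeping, here is a minimal Python sketch of the five element subdivision step, with the ratios inferred from the worked values above (approximately 0.2542, 3/20, 1/10 and 1/20); the helper names are illustrative only, not part of the method.

# Minimal sketch of the five element subdivision bookkeeping used above.
# The ratios are inferred from the worked examples (e.g. 3.689 -> 0.9379,
# 0.55335, 0.3689, 0.18445); they are an assumption, not an official spec.

RATIOS = (16.1773 / 63.636363, 3.0 / 20.0, 1.0 / 10.0, 1.0 / 20.0)

def five_subdivision(T):
    """Return the four subdivision values of an interval of length T."""
    return [T * r for r in RATIOS]

def add_to_lower(lower, T):
    """Candidate refinements obtained by adding the subdivisions to a lower bound."""
    return [lower + v for v in five_subdivision(T)]

def subtract_from_upper(upper, T):
    """Candidate refinements obtained by subtracting the subdivisions from an upper bound."""
    return [upper - v for v in five_subdivision(T)]

if __name__ == "__main__":
    # Approximately reproduces the step "Adding these four values to 38.6877: 39.6258, ..."
    print([round(x, 5) for x in five_subdivision(3.6898)])
    print([round(x, 4) for x in add_to_lower(38.6877, 3.6898)])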
z7 = 40.9187
L(z7) = 3.353
44.272

The eighth zeta zero, to three significant digits accuracy, using only the five element subdivision applied to both zeta functions as a guide.

63.636363 → 16.1773, 9.5445, 6.36363, 3.1815
14.134725 + 16.1773 + 12.066 = 42.378
14.134725 + 16.1773 + 12.066 + 1.77 = 44.148
5.309, 44.2, 8.998, 40.51
1.77 → 0.45, 0.2655, 0.177, 0.0885
Adding these four values to 42.378 (which is the first lower bound): 42.83, 42.64, 42.555, 42.46
Subtracting these four values from 44.148 (the first upper bound), thus using the new results/features posted on this page: 43.7, 43.88, 43.97, 44.06
For the second zeta function:
8.998 - 5.309 = 3.689
3.689 → 0.9379, 0.55335, 0.3689, 0.18445
Subtracting these four values from 44.2: 43.262, 43.646, 43.8311, 44.015
Adding these four values to 42.378: 43.316, 42.93, 42.747, 42.562
42.83 is the new lower bound; since 43.316 has a greater value than 42.83, 43.316 is the new lower bound for the entire approximation.
43.7 is the new upper bound.
At this point, a three digit approximation has already been obtained (the true value of the eighth zeta zero is 43.327073281); however, the five element subdivision algorithm will be continued, in order to show the precise calculations.
Since 43.262 is the lower bound for the second zeta function (while 43.646 is the upper bound), we already know that the true value of the eighth zeta zero is to be found in the 0.9379 - 0.55335 interval.
3.689 - 0.9379 = 2.7511
2.7511 → 0.699, 0.4126, 0.27511, 0.1375
Subtracting these four values from 43.262: 42.563, 42.85, 42.987, 43.124
Adding these four values to 43.316: 44.015, 43.73, 43.6, 43.45
0.9379 - 0.55335 = 0.38455
0.38455 → 0.0977, 0.0577, 0.038455, 0.01923
43.646 - 0.0977 = 43.55
0.38455 - 0.0977 = 0.28685
0.28685 → 0.07293
43.55 - 0.07293 = 43.477
0.28685 - 0.07293 = 0.21392
0.21392 → 0.0544
43.477 - 0.0544 = 43.42
0.21392 - 0.0544 = 0.15952
0.15952 → 0.0405
43.42 - 0.0405 = 43.38
0.15952 - 0.0405 = 0.11902
0.11902 → 0.03026
43.38 - 0.03026 = 43.35
0.11902 - 0.03026 = 0.08876
0.08876 → 0.022566
43.35 - 0.022566 = 43.327 (a five digit approximation)
Returning to the calculations for the first zeta function.
1.77 - 0.45 = 1.32
1.32 → 0.3356, 0.198, 0.132, 0.066
Adding these four values to 42.83: 43.1656, 43.028, 42.962, 42.896
Subtracting these four values from 43.7: 43.364, 43.502, 43.568, 43.634
1.32 - 0.3356 = 0.9844
0.9844 → 0.2503, 0.14706, 0.09844, 0.04922
Adding these four values to 43.1656: 43.416, 43.315, 43.264, 43.215
Subtracting these four values from 43.364: 43.114, 43.216, 43.26, 43.315
0.9844 - 0.2503 = 0.7341
0.7341 → 0.1866, 0.11, 0.07341, 0.0367
Adding these four values to 43.416: 43.6, 43.526, 43.49, 43.45
Returning to the calculations for the second zeta function:
0.1375 → 0.034958, 0.0206, 0.01375, 0.006875
Adding these four values to 43.316: 43.351, 43.2826, 43.276, 43.268
Successively, the new upper bounds are: 43.634, 43.6, 43.568, 43.526, 43.502, 43.45, 43.416, 43.364.
The new lower bounds are: 42.83, 42.93, 42.987, 43.028, 43.215, 43.26, 43.316.
The true value for the eighth zeta zero is 43.327073281.
Already we have obtained a three significant digit approximation: 43.316

The fact that the five element subdivision algorithm can be applied to each separate 63.6363... segment can immediately be used to great advantage to calculate the zeta zeros for extremely large values of t (1/2 + it).
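The brackets obtained above can be checked against independently computed zeros. A minimal sketch using the mpmath library (an assumed external dependency; zetazero(n) returns the n-th nontrivial zero), with the bound pairs copied from the derivations above:

# Sketch: compare the bounds quoted above with independently computed zeros.
# Requires the mpmath library.
from mpmath import mp, zetazero

mp.dps = 20  # working precision (decimal digits)

# (zero index, lower bound, upper bound) as stated in the derivations above
brackets = [
    (5, 32.93489, 32.935091),
    (6, 37.5859324, 37.5904),
    (7, 40.889, 40.9358),
    (8, 43.316, 43.364),
]

for n, lo, hi in brackets:
    t = zetazero(n).imag  # imaginary part of the n-th zero on the 1/2 line
    print(n, float(t), lo <= t <= hi)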
So far, the computations of the Riemann zeta function for very high zeros have progressed to a dataset of 50,000 zeros in over 200 small intervals going up to the 10^36-th zero. The main problem is the calculation of the exponential sums in the Riemann-Siegel formula. However, the five element subdivision algorithm suffers from no such restrictions.

The 63.6363... segment can be shifted to any desired height, using arbitrary-precision arithmetic. Therefore, computations of zeros around the first Skewes number, 1.39822 x 10^316, become possible using the Schönhage-Strassen algorithm for the multiplication/addition of very large numbers. The Riemann-Siegel formula requires the addition of all of the terms in the formula, involving the evaluation of cosines, logarithms, square roots, and a complex set of remainders. With the five element subdivision algorithm, only the following calculations are required: k x 63.6363..., where k can be 1.39822 x 10^316 or 10^10,000 (1 followed by ten thousand zeros). No divisions are required, and no evaluation of elementary transcendental or algebraic functions is needed. The five element sequence of proportions is T, 63.6363... x T/250, 3T/20, T/10, T/20: simple multiplications by 63.6363/250, 3/20, 1/10 and 1/20.

The only figure remaining to be calculated very precisely is the actual value of the interval distance.
14/22 = 0.63636363...
2/π = 0.636619722...
286.1/450 = 0.6357777...
14.134725 x 45 = 636.062625...
π has been calculated to over one million digits, the first zeta zero to over 40,000 digits. The precise figure can be deduced by applying the five element subdivision algorithm at the following heights: 636.63, 6,363.63, 63,636.63, 636,363.63.

Here are two examples which prove that the 63.6363 segment can be shifted to higher intervals on the critical 1/2 line, with no previous knowledge of the values of the other zeta zeros.

Zeta zero: 79.337375020
14.134725 + 63.63 = 77.7647
L(77.7647) = 2.4975 (average spacing estimate 80.262)
77.7647 + 0.80886 = 78.57356
77.7647 + 3.1815 = 80.9462
141.3947 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 = 80.2836
141.3947 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 - 0.692 = 79.598
141.3947 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 - 0.692 - 0.4661 = 79.1318
3.1815 - 0.80886 = 2.3726
2.3726 → 0.60332, 0.3559, 0.23726, 0.11863
Adding these four values to 78.57356: 79.177, 78.929, 78.811, 78.6922
79.177 is the first lower bound.
2.3726 - 0.60322 = 1.7694
1.7694 → 0.45, 0.26541, 0.17694, 0.08847
Adding these four values to 79.177: 79.627, 79.442, 79.354, 79.2655
79.598 is the first upper bound.
0.4661, 0.275, 0.1833, 0.09
Subtracting these values from 79.598: 79.1318, 79.323, 79.4147, 79.508
79.323 is the new lower bound.
79.354 is the new upper bound.
0.17694 - 0.08847 = 0.08847
0.08847 → 0.0225
79.2655 + 0.0225 = 79.288
0.08847 - 0.0225 = 0.06597
0.06597 → 0.016772
79.288 + 0.016772 = 79.30437
0.06597 - 0.016772 = 0.0492
0.0492 → 0.01251
79.30437 + 0.01251 = 79.3173
0.0492 - 0.01251 = 0.03669
0.03669 → 0.009328
79.3173 + 0.009328 = 79.32663
79.32663 is the new lower bound.
0.03669 - 0.009328 = 0.027362
0.027362 → 0.0069565, 0.0041043, 0.0027362, 0.00131681
Adding these four values to 79.3266: 79.3336, 79.3307, 79.32937, 79.328
0.027362 - 0.0069565 = 0.0204055
0.0204055 → 0.0051879, 0.003061, 0.00204055, 0.00102
Adding these four values to 79.3336: 79.33879, 79.336661, 79.33564, 79.33462
0.0204055 - 0.0051879 = 0.0152176
0.0152176 → 0.003869, 0.00283, 0.00152176, 0.000761
Adding these four values to 79.33879: 79.34266, 79.3416, 79.3403, 79.3395
The calculations for the second zeta function.
0.275 - 0.1833 = 0.0917
0.0917 → 0.0233
79.4147 - 0.0233 = 79.3914
0.0917 - 0.0233 = 0.0684
0.0684 → 0.0174
79.3914 - 0.0174 = 79.374
0.0684 - 0.0174 = 0.051
0.051 → 0.012966
79.374 - 0.012966 = 79.361
0.051 - 0.012966 = 0.038034
0.038034 → 0.00967
79.361 - 0.00967 = 79.35133
79.35133 is the new upper bound.
0.038034 - 0.00967 = 0.028364
0.028364 → 0.00721
79.35133 - 0.00721 = 79.34412
0.028364 - 0.00721 = 0.021154
0.021154 → 0.0053782, 0.003173, 0.0021154, 0.001058
Subtracting these values from 79.34412: 79.338742, 79.34095, 79.342, 79.34306
0.021154 - 0.0053782 = 0.015776
0.015776 → 0.004011, 0.0023664, 0.0015776, 0.000789
Subtracting these four values from 79.338742: 79.33473, 79.3364, 79.337164, 79.33795
Now, the new features/results from the previous message will be used.
0.0917 → 0.0233, 0.013755, 0.00917, 0.004585
Adding these four values to 79.323: 79.3276, 79.3322, 79.3367, 79.3453
0.17694 - 0.08847 = 0.08847
0.08847 → 0.022493, 0.013271, 0.008847, 0.0044235
Subtracting these four values from 79.354: 79.33151, 79.34073, 79.3455, 79.3496
79.3367 is the new lower bound.
79.33879 is the new upper bound.
Since 79.337164 is a higher figure than 79.3367, 79.337164 is the new lower bound for the entire approximation.
Without any knowledge of the values of the previous zeta zeros, a five digit/three decimal place approximation of the zeta zero was obtained.

Zeta zero: 143.111845808
14.134725 + 63.63 + 63.63 = 141.3947
L(141.3947) = 2.018 (average spacing estimate 143.4126)
141.3947 + 0.80886 = 142.20356
142.20356 + 0.60322 = 142.8068
142.8068 + 0.45 = 143.2568
143.2568 + 0.335 = 143.592
205.0247 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 = 143.9187
205.0247 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 - 0.692 = 143.2277
205.0247 - 16.1773 - 12.066 - 8.998 - 6.7106 - 5.0045 - 3.7322 - 2.7834 - 2.0757 - 1.548 - 1.1544 - 0.861 - 0.692 - 0.4661 = 142.7618
142.8068 is the first lower bound.
143.2277 is the first upper bound.
1.7694 → 0.45, 0.26541, 0.17694, 0.08847
Adding these four values to 142.8068: 143.2568, 143.07221, 142.984, 142.8953
0.4661, 0.275, 0.1833, 0.09
Subtracting these values from 143.2277: 142.7618, 142.9527, 143.0444, 143.1377
Now, the new features/results from the previous message will be used.
1.7694 → 0.45, 0.26541, 0.17694, 0.08847
Subtracting the last three values from 143.2568: 142.9914, 143.0798, 143.16833
0.4661, 0.275, 0.1833, 0.09
Adding the last three values to 142.7618: 143.0368, 142.945, 142.8518
143.07221 is the new lower bound.
143.1377 is the new upper bound.
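The quantity L(t) quoted above (L(z4) = 3.98331, L(77.7647) = 2.4975, L(141.3947) = 2.018) is closely reproduced by the standard mean-spacing expression 2π/log(t/2π). Assuming that identification (an assumption on my part, not something stated earlier), here is a minimal Python sketch of the shifted-segment anchors and the corresponding spacing estimates:

# Sketch of the shifted-segment starting points and the average spacing L(t).
# Assumption: L(t) is the mean spacing 2*pi/log(t/(2*pi)), which closely
# reproduces the values quoted above (about 2.4975 at 77.7647, 2.018 at 141.3947).
import math

def L(t):
    return 2 * math.pi / math.log(t / (2 * math.pi))

def shifted_anchor(k, segment=63.63, first_zero=14.134725):
    # the two examples above use 63.63; the fuller value 63.636363... is quoted elsewhere
    return first_zero + k * segment

for k in (1, 2):
    t = shifted_anchor(k)
    print(k, round(t, 4), round(L(t), 4), round(t + L(t), 4))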
-
The key to longevity is represented by the virtues one develops/attains (purity, compassion, nobleness). Sexual energy can only be sublimated/transmuted by the higher emotions originating from the superconscious mind (higher self). Celibacy is one extreme. Full sexual continence is another (and even advanced monks/yogis have nocturnal ejaculations unless they use very advanced techniques such as Virya Mudra (inserting a tube through the urethra, starting at the age of 14, to the verumontanum area in order to activate the seminal sphincter; for the Vamacari yogis), or Kevala Pranayama, a dangerous and extremely difficult method attained by only a very few). Uddiyana Bandha and the Oli Mudras (the Sahajoli Mudrasana introduced by Gitananda) can only help so much; for full continence one needs those extremely advanced techniques, which cannot be practiced by most yogis. Therefore, one should ejaculate rarely (perhaps once every few months), while sublimating the sexual energy (generative force) to the higher self all the while. Qi/chi/prana is the energy related to the petals of a chakra. Gong/virtues/higher emotions are related to the geometrical symbol of a chakra, a much higher function.
-
"It is my belief that RH is a genuinely arithmetic question that likely will not succumb to methods of analysis. Number theorists are on the right track to an eventual proof of RH, but we are still lacking many of the tools." J. Brian Conrey

"...the Riemann Hypothesis will be settled without any fundamental changes in our mathematical thoughts, namely, all tools are ready to attack it but just a penetrating idea is missing." Y. Motohashi

"...there have been very few attempts at proving the Riemann hypothesis, because, simply, no one has ever had any really good idea for how to go about it." A. Selberg

"I still think that some major new idea is needed here." E. Bombieri

http://wwwf.imperial.ac.uk/~hjjens/Riemann_talk.pdf
Subtle relations: prime numbers, complex functions, energy levels and Riemann
Prof. Henrik J. Jensen, Department of Mathematics, Imperial College London

http://www.ejtp.com/articles/ejtpv10i28p111.pdf
Riemann Zeta Function and Hydrogen Spectrum

The year: 1972. The scene: Afternoon tea in Fuld Hall at the Institute for Advanced Study. The camera pans around the Common Room, passing by several Princetonians in tweeds and corduroys, then zooms in on Hugh Montgomery, boyish Midwestern number theorist with sideburns. He has just been introduced to Freeman Dyson, dapper British physicist.
Dyson: So tell me, Montgomery, what have you been up to?
Montgomery: Well, lately I've been looking into the distribution of the zeros of the Riemann zeta function.
Dyson: Yes? And?
Montgomery: It seems the two-point correlations go as... (turning to write on a nearby blackboard): 1 - (sin πu/πu)^2
Dyson: Extraordinary! Do you realize that's the pair-correlation function for the eigenvalues of a random Hermitian matrix? It's also a model of the energy levels in a heavy nucleus - say U-238.

The asymptotic formula developed by Riemann (discovered by C. Siegel in the early 1930s from the notes left by Riemann) is the most difficult asymptotic expansion ever attempted, certainly the most complex calculation of the 19th century. C. Siegel realized that no one else could have done it, and in 1930 Riemann was still ahead of every other mathematician involved in the study of the zeta function.

https://michaelberryphysics.files.wordpress.com/2013/06/berry483.pdf

Why would G.F.B. Riemann embark on such a colossal derivation of an asymptotic expansion (see H.M. Edwards, Riemann's Zeta Function, chapter 7) unless he was certain that all of the zeros do lie on the critical 1/2 line? The notes discovered by Siegel baffled the mathematicians, because Riemann used this most difficult asymptotic formula simply to obtain the values of the first few zeros of the zeta function. It is as if he had already proven that all of the zeros lie on the critical 1/2 line and he wanted to make sure that just the very first zeros have this property. In my opinion, Riemann must have used both his newly discovered zeta functional equation and other equations in the Nachlass to prove the RH. Then, and only then, did he embark on this very difficult derivation.

Let us now briefly explore the best papers published on the RH.
Two mathematicians from the Lomonosov Moscow State University have used the mollifier function introduced by N. Levinson in a novel way:

https://arxiv.org/pdf/1805.07741.pdf
100% OF THE ZEROS OF THE RIEMANN ZETA-FUNCTION ARE ON THE CRITICAL LINE

Earlier, they published another paper in which they showed that at least 47% of the zeros of the Riemann zeta function lie on the critical line (the previous records were Feng (41%), Conrey (40%) and Levinson (34%)).
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.758.4457&rep=rep1&type=pdf
https://arxiv.org/pdf/1403.5786.pdf (the original paper on the novel way of using mollifier functions)

https://arxiv.org/pdf/1207.6583.pdf
Limitations to mollifying ζ(s)

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=F7C33227D1D6635FFBC27972BA54E5A8?doi=10.1.1.36.9777&rep=rep1&type=pdf
Long mollifiers of the Riemann zeta function

https://arxiv.org/pdf/1604.02740.pdf
THE θ = ∞ CONJECTURE IMPLIES THE RIEMANN HYPOTHESIS

https://rjlipton.wordpress.com/2018/09/26/reading-into-atiyahs-proof/ (on Sir M. Atiyah's use of the Todd function for the Riemann hypothesis)

Mathematicians complain that 99% of the proofs submitted to the Annals are rejected because they make use of the zeta functional equation.

"The reason for this is (as has been known since the work of Davenport and Heilbronn) that there are many examples of zeta-like functions (e.g., linear combinations of L-functions) which enjoy a functional equation and similar analyticity and growth properties to zeta, but which have zeroes off of the critical line. Thus, any proof of RH must somehow use a property of zeta which has no usable analogue for the Davenport-Heilbronn examples."

However, the arguments used in the following papers are very well presented and make a lot of sense.

Riemann's Nachlass = manuscripts, lecture notes, calculation sheets and letters left by G.F.B. Riemann
https://www.researchgate.net/publication/281403728_To_unveil_the_truth_of_the_zeta_function_in_Riemann_Nachlass
The authors assert that not all of the formulas left by Riemann in his notes have been taken into consideration, and that these neglected equations were used by Riemann to actually prove the RH.

https://arxiv.org/ftp/arxiv/papers/0801/0801.4072.pdf
A Necessary Condition for the Existence of the Nontrivial Zeros of the Riemann Zeta Function
(a paper which shows that B. Riemann must have followed a similar kind of argument, using the newly discovered zeta functional equation, to reach the conclusion that all the nontrivial zeros are located on the ½ line)

On the computation of the zeta zeros

The complexity of the Riemann-Siegel coefficients: the Riemann-Siegel formula does not deal with the distribution of zeros, nor can it reveal the hidden pattern/structure of the zeta zeros. That is why, for very large values of the zeta zeros, the Euler-Maclaurin formula becomes competitive.

An alternative to the Riemann-Siegel formula, improving the convergence of the Euler-Maclaurin expansion and thereby greatly reducing the length of the main sum:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.758.2810&rep=rep1&type=pdf
Recently, A. LeClair and G. França introduced and proved a new formula for finding the values of the zeta zeros, which is faster than the Riemann-Siegel formula:

Transcendental equations satisfied by the individual zeros of Riemann ζ, Dirichlet and modular L-functions
https://arxiv.org/pdf/1502.06003.pdf

Statistical and other properties of Riemann zeros based on an explicit equation for the n-th zero on the critical line
https://arxiv.org/pdf/1307.8395.pdf

(to be continued; in the subsequent messages I will introduce the new global algorithm which only uses simple arithmetical operations to find the zeros of the zeta function, global formulas for the Lehmer pairs and large gaps, and other results)

De Bruijn-Newman constant

https://arxiv.org/pdf/1508.05870.pdf
Lehmer pairs revisited
The Riemann hypothesis means that the de Bruijn-Newman constant is zero. Unusually close pairs of zeros of the Riemann zeta function, the Lehmer pairs, can be used to give lower bounds on Λ. Soundararajan's Conjecture B implies the existence of infinitely many strong Lehmer pairs, and thus that the de Bruijn-Newman constant Λ is 0.

http://www.math.kent.edu/~varga/pub/paper_209.pdf
Lehmer pairs of zeros and the Riemann ξ-function

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.9492&rep=rep1&type=pdf
A new Lehmer pair of zeros and a new lower bound for the de Bruijn-Newman constant Λ

http://www.academia.edu/19018042/Lehmer_pairs_of_zeros_the_de_Bruijn-Newman_constant_and_the_Riemann_Hypothesis
Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann Hypothesis

http://www.dtc.umn.edu/~odlyzko/doc/debruijn.newman.pdf
An improved bound for the de Bruijn-Newman constant

https://www.ams.org/journals/mcom/2011-80-276/S0025-5718-2011-02472-5/S0025-5718-2011-02472-5.pdf
An improved lower bound for the de Bruijn-Newman constant

Recently, it was proven that the de Bruijn-Newman constant is non-negative:
https://arxiv.org/pdf/1801.05914.pdf
This means that an infinite sequence of Lehmer pairs of arbitrarily high quality (strong Lehmer pairs) will prove that the de Bruijn-Newman constant is equal to zero (Λ = 0).
https://terrytao.wordpress.com/2018/01/20/lehmer-pairs-and-gue/

If the de Bruijn-Newman constant is equal to zero, Λ = 0, then Riemann's hypothesis (all zeta zeros lie on the 1/2 critical line) is true. However, in order to prove that -10^-20 < Λ, at least 10^30 zeros would have to be examined. The total number of simple arithmetic operations that have been performed by all digital computers in history is only on the order of 10^23. Even with improvements in hardware, it cannot be hoped to compute 10^30 zeta zeros using existing methods.

Strong/high quality Lehmer pairs can be used to give lower bounds for Λ. The existence of infinitely many strong Lehmer pairs implies that the de Bruijn-Newman constant Λ is equal to 0.

Strong Lehmer pairs (1187 pairs):
http://www.slideshare.net/MatthewKehoe1/riemanntex (pg. 64-87)

Very interesting comments on the S(t) function:
https://arxiv.org/pdf/1407.4358.pdf (page 46)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.751.9485&rep=rep1&type=pdf
"The RH and (5.3) imply that, as t → ∞, the graph of Z(t) will consist of tightly packed spikes, which will be more and more condensed as t increases, with larger and larger oscillations. This I find hardly conceivable. Of course, it could happen that the RH is true and that (5.3) is not."
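To make the Lehmer-pair discussion concrete, here is a minimal mpmath sketch (assumed dependency; siegelz is the Riemann-Siegel Z function) around the classic Lehmer pair of zeros near t ≈ 7005.06 and t ≈ 7005.10, where Z(t) barely clears zero between two sign changes:

# Sketch: the classic Lehmer pair of zeta zeros near t = 7005.06 and t = 7005.10.
# Between the two zeros the Riemann-Siegel Z function stays very close to zero,
# which is what makes "strong" Lehmer pairs relevant to lower bounds on Lambda.
# Requires mpmath.
from mpmath import mp, siegelz

mp.dps = 15

lo, hi, n = 7005.00, 7005.20, 200
step = (hi - lo) / n
prev_t, prev_z = lo, siegelz(lo)
sign_changes = []
for i in range(1, n + 1):
    t = lo + i * step
    z = siegelz(t)
    if z * prev_z < 0:  # a zero of Z(t) lies in (prev_t, t)
        sign_changes.append((round(prev_t, 4), round(t, 4)))
    prev_t, prev_z = t, z

print("intervals containing zeros:", sign_changes)
# the extremum of Z between the two close zeros remains tiny:
between = [7005.065 + k * 0.001 for k in range(35)]
print("max |Z| between the pair:", max(abs(siegelz(t)) for t in between))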
NEW COMPUTATIONS OF THE RIEMANN ZETA FUNCTION ON THE CRITICAL LINE
https://arxiv.org/pdf/1607.00709.pdf
"As a byproduct of our search for large values, we also find large values of S(t). It is always the case in our computations that when ζ(1/2 + it) is very large there is a large gap between the zeros around the large value. And it seems that to compensate for this large gap the zeros nearby get 'pushed' to the left and right. A typical trend in the large values that we have found is that S(t) is particularly large and positive before the large value and large and negative afterwards."
The calculations involve more than 50,000 zeros in over 200 small intervals going up to the 10^36th zero.
S(t) is related to the large gaps between the zeta zeros where high extreme values of peaks occur, where it seems to protect the zeta function from attaining the tightly packed spikes conjectured by mathematicians.

http://www.dhushara.com/DarkHeart/RH2/RH.htm (one of the very best works on the Riemann zeta function and the RH)
http://www.math.sjsu.edu/~goldston/Tsang%20Ch2.pdf (on the function S(t))
https://www.sciencedirect.com/science/article/pii/0022314X8790059X
https://math.boku.ac.at/udt/vol10/no2/09OzSteu.pdf (on the distribution of the argument of the zeta function)
http://wayback.cecm.sfu.ca/~pborwein/TEMP_PROTECTED/book.pdf (a classic work on the Riemann zeta function, it includes all of the major papers published over the last 150 years on the subject)
http://www.dtc.umn.edu/~odlyzko/unpublished/zeta.10to20.1992.pdf (one of the best papers on the zeta function, it includes pertinent material on the S(t) function, pg. 11, 25, 29, 43, 68)

N(T) = (T/2π)(log(T/2π) - 1) + 7/8 + o(1) + Nosc(T)
Nosc(T) = S(T) = (1/π) Im log ζ(1/2 + iT), the oscillatory part of the formula
<N(T)> = N(T) - Nosc(T)
(a numerical illustration of this decomposition is given after the quotations below)

"We have all this evidence that the Riemann zeros are vibrations, but we don't know what's doing the vibrating."

"Maybe we have become so hung up on looking at the primes from Gauss's and Riemann's perspective that what we are missing is simply a different way to understand these enigmatic numbers. Gauss gave an estimate for the number of primes, Riemann predicted that the guess is at worst the square root of N off its mark, Littlewood showed that you can't do better than this. Maybe there is an alternative viewpoint that no one has found because we have become so culturally attached to the house that Gauss built." M. du Sautoy, The Music of the Primes

"The zeta function is probably the most challenging and mysterious object of modern mathematics, in spite of its utter simplicity... The main interest comes from trying to improve the Prime Number Theorem, i.e. getting better estimates for the distribution of the prime numbers. The secret to the success is assumed to lie in proving a conjecture which Riemann stated in 1859 without much fanfare, and whose proof has since then become the single most desirable achievement for a mathematician." M.C. Gutzwiller, Chaos in Classical and Quantum Mechanics, page 308

"Riemann showed the importance of study of [the zeta] function for a range of problems in number theory centering around the distribution of prime numbers, and he further demonstrated that many of these problems could be settled if one knew the location of the zeros of this function. In spite of continued assaults and much progress since Riemann's initial investigations this tantalizing question remains one of the major unsolved problems in mathematics."
D. Reed, Figures of Thought (Routledge, New York, 1995), p. 123

"...it is that incidental remark - the Riemann Hypothesis - that is the truly astonishing legacy of his 1859 paper. Because Riemann was able to see beyond the pattern of the primes to discern traces of something mysterious and mathematically elegant at work - subtle variations in the distribution of those prime numbers. Brilliant for its clarity, astounding for its potential consequences, the Hypothesis took on enormous importance in mathematics. Indeed, the successful solution to this puzzle would herald a revolution in prime number theory. Proving or disproving it became the greatest challenge of the age... It has become clear that the Riemann Hypothesis, whose resolution seems to hang tantalizingly just beyond our grasp, holds the key to a variety of scientific and mathematical investigations. The making and breaking of modern codes, which depend on the properties of the prime numbers, have roots in the Hypothesis. In a series of extraordinary developments during the 1970s, it emerged that even the physics of the atomic nucleus is connected in ways not yet fully understood to this strange conundrum. ...Hunting down the solution to the Riemann Hypothesis has become an obsession for many - the veritable 'great white whale' of mathematical research. Yet despite determined efforts by generations of mathematicians, the Riemann Hypothesis defies resolution." J. Derbyshire, from the dustjacket description of Prime Obsession (John Henry Press, 2003)

"Proving the Riemann hypothesis won't end the story. It will prompt a sequence of even harder, more penetrating questions. Why do the primes achieve such a delicate balance between randomness and order? And if their patterns do encode the behaviour of quantum chaotic systems, what other jewels will we uncover when we dig deeper? Those who believe mathematics holds the key to the Universe might do well to ponder a question that goes back to the ancients: What secrets are locked within the primes?" E. Klarreich, "Prime Time" (New Scientist, 11/11/00)

"Riemann's insight followed his discovery of a mathematical looking-glass through which he could gaze at the primes. Alice's world was turned upside down when she stepped through her looking-glass. In contrast, in the strange mathematical world beyond Riemann's glass, the chaos of the primes seemed to be transformed into an ordered pattern as strong as any mathematician could hope for. He conjectured that this order would be maintained however far one stared into the never-ending world beyond the glass. His prediction of an inner harmony on the far side of the mirror would explain why outwardly the primes look so chaotic. The metamorphosis provided by Riemann's mirror, where chaos turns to order, is one which most mathematicians find almost miraculous. The challenge that Riemann left the mathematical world was to prove that the order he thought he could discern was really there."

"For centuries, mathematicians had been listening to the primes and hearing only disorganised noise. These numbers were like random notes wildly dotted on a mathematical stave with no discernible tune. Now Riemann had found new ears with which to listen to these mysterious tones. The sine-like waves that Riemann had created from the zeros in his zeta landscape revealed some hidden harmonic structure."

"These zeros did not appear to be scattered at random. Riemann's calculations indicated that they were lining up as if along some mystical ley line running through the landscape."
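Returning to the N(T) decomposition quoted before the excerpts above, here is a minimal numerical sketch (mpmath assumed; nzeros(T) counts the zeros up to height T) in which the residual left after removing the smooth part is essentially S(T):

# Sketch: split N(T) into its smooth part and the oscillatory part S(T),
# following the formula quoted above. Requires mpmath.
from math import pi, log
from mpmath import mp, nzeros

mp.dps = 15

def smooth_part(T):
    return (T / (2 * pi)) * (log(T / (2 * pi)) - 1) + 7.0 / 8.0

for T in (100, 500, 1000):
    N = int(nzeros(T))          # exact count of zeros with 0 < Im(s) < T
    S = N - smooth_part(T)      # equals S(T) up to the o(1) term
    print(T, N, round(smooth_part(T), 3), round(S, 3))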
(to be continued)
-
Now, we can finally solve the mystery of the Michelson-Morley experiment.

In 1999 E. J. Post showed the equivalence between the Michelson-Morley experiment and the Sagnac experiment.
E. J. Post, A joint description of the Michelson Morley and Sagnac experiments. Proceedings of the International Conference Galileo Back in Italy II, Bologna 1999, Andromeda, Bologna 2000, p. 62

E. J. Post is the only person to notice the substantial identity between the 1925 experiment and that of 1887:
"To avoid possible confusion, it may be remarked that the beam path in the more well-known Michelson-Morley interferometer, which was mounted on a turntable, does not enclose a finite surface area; therefore no fringe shift can be expected as a result of a uniform rotation of the latter."
E. J. Post, Reviews of Modern Physics, Vol. 39, no. 2, April 1967

A. Michelson and E. Morley simply measured the Coriolis effect. The Coriolis effect can be registered/recorded either due to the rotation of the Earth or due to the rotation of the ether drift (Whittaker's potential scalar waves). The deciding factor is of course the Sagnac effect, which is much greater than the Coriolis effect, and which was never registered. Since MM did not use a phase-conjugate mirror or fiber optic equipment, the Coriolis force effects upon the light offset each other. The positive results (slight deviations from the null result) are due to a residual surface area enclosed by the multiple-path beam (the Coriolis effect registered by a Sagnac interferometer). Dayton Miller also measured the Coriolis effect of the ether drift in his experiments (Mount Wilson, 1921-1924 and 1925-1926, and Cleveland, 1922-1924).

Dr. Patrick Cornille (Essays on the Formal Aspects of Electromagnetic Theory, pg. 141):

Let us examine now the Sagnac interferometer using topology.
https://www.researchgate.net/publication/288491190_SAGNAC_EFFECT_A_consequence_of_conservation_of_action_due_to_gauge_field_global_conformal_invariance_in_a_multiply-joined_topology_of_coherent_fields
Dr. Terence W. Barrett (Stanford University)

Just like the Aharonov-Bohm experiment, the Sagnac interferometer is a multiply-connected domain in the presence of a topological obstruction. The Heaviside-Lorentz equations (the modified Maxwell set of equations) can only partially describe the rotating interferometer. Upon rotation, the Sagnac interferometer will exhibit a patch condition.

Dr. Terence W. Barrett: "Stated differently, with the rotation of the platform, the gauge symmetry is SU(2)/Z2 = SO(3), and on the stabilization of the platform the gauge symmetry is U(1). When rotated, a patch condition exists in the multiply-connected topology."

To put it differently, the Sagnac interferometer experiment cannot be described by vector fields (the usual Heaviside-Lorentz equations): it requires the use of quaternions, the mathematical language developed by Maxwell to describe his original set of EM equations.

"Maxwell's original EM theory was written in quaternions, which are an extension to the complex number theory and an independent system of mathematics. In short, since the quaternion is a hypernumber, Maxwell's theory was a hyperspatial theory -- not just the limited three-dimensional subset that was extracted and expressed by Heaviside and Gibbs in terms of an abbreviated, incomplete vector mathematics."

Quaternions have a vector and a scalar part, and have a higher topology than vector and tensor analysis.
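Since the argument turns on quaternions having a scalar part and a vector part, here is a minimal, self-contained sketch of the standard Hamilton product written in that scalar + vector form (for illustration of the algebraic structure only; it is not a model of the interferometer):

# Minimal illustration of quaternion structure: a scalar part plus a 3-vector part.
# The Hamilton product below is the standard (non-commutative) quaternion product:
# (s1, v1)(s2, v2) = (s1*s2 - v1.v2, s1*v2 + s2*v1 + v1 x v2)

def hamilton(q1, q2):
    s1, v1 = q1[0], q1[1:]
    s2, v2 = q2[0], q2[1:]
    dot = sum(a * b for a, b in zip(v1, v2))
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    scalar = s1 * s2 - dot
    vector = tuple(s1 * b + s2 * a + c for a, b, c in zip(v1, v2, cross))
    return (scalar,) + vector

# i * j = k while j * i = -k: the product is non-commutative.
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(hamilton(i, j))   # (0, 0, 0, 1)
print(hamilton(j, i))   # (0, 0, 0, -1)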
-
The variables are identified as required by the use of an interferometer which is located away from the center of rotation (as in, for example, the Michelson-Gale experiment, or the ring laser gyroscopes at Gran Sasso, Italy). If the interferometer is located away from the center of rotation, one will encounter two different lengths (of the arms of the interferometer) and two different velocities for the light beams. This situation, of course, is different from the context where one has an interferometer whose center of rotation coincides with its geometrical center: same lengths and same velocities.

So, for the interferometer located away from the center of rotation, the variables are as follows:

http://www.conspiracyoflight.com/Michelson-Gale_webapp/image002.png

Point A is located at the detector.
Point B is in the bottom right corner.
Point C is in the upper right corner.
Point D is in the upper left corner.
l1 is the upper arm.
l2 is the lower arm.
v1 and v2 are the velocities of the rotation of the Earth at the corresponding latitudes (since there are two latitudes, one will have two velocities, one for each latitude).
c = speed of light

Here are the variables used by Michelson: