Everything posted by Tristan L
-
Is there a proposition known to be undecidable?
Tristan L replied to Tristan L's topic in Mathematics
No, it's just a side-hobby of mine to help free speeches (languages) broadly, and the English tongue in particular, from the effects of linguistic imperialism, which I do by brooking (using) truly English words instead of ones that came into this language by way of speech-imperialism, in actual natural situations such as forum talks (mutatis mutandis for other victims of language-imperialism). Speech in general interests me, too, but not more than what it is brooked to stand for (which, spellbindingly, includes speech itself). The website to which the link leads is the English Wordbook. It is part of the Anglish Moot, which is all about liberating English from the effects of linguistic imperialism. The English Wordbook gives the right English equivalents to un-English words that came in through language-imperialism, to a big part, but not only, due to the Norman Conquest. For many proper English words which have not yet become widely used, when I brook them, I link their first instances in a text to the Wordbook so as to back my brooking of those words up, and I also write their foreign-derived equivalents in brackets after them (or vice versa). Brooking truly English words has the further boot (advantage) that they are usually shorter and thus more efficient than their foreign counterparts. For instance, "boot" is only a third as long (in terms of syllables) and therefore thrice as efficient as foreign-derived "advantage", and "often" has only two syllables while "frequently" has three. "Witcraft" is onefoldly (simply) the English synonym for "logic", the former being drawn from "wit" (from OE "witt", from PIE "*weyd-") and "craft" (from OE "cræft", from PG "*kraftuz"), while the latter is derived from Old Greek "logikḗ" (from PIE "*leǵ-"). Another properly English word for witcraft is "flitecraft", which stems from the Old English word "flītcræft" for witcraft. For more truly English technical language (craftspeak), see the Anglish Moot's leaf on Craftspeak. 
Yes, and after my first use of that word in my post, I fayed (added) "(properties)" to clarify for those not yet/anymore familiar with "ownship". This word is the exact English equivalent of the German (Þeech) word "Eigenschaft" for ownship; "own" and "eigen" both come from Orþeedish (Proto-Germanic) "*aiganaz", and "-ship" and "-schaft" both come from Orþeedish "*-skapiz", which is closely related (beteed) to the forebear of English "shape". And who bears no beteeing (relationship) to me (although the flitecrafta/logician would remind us that strictly speaking, everything bears some beteeing to everything else, e.g. the relation of standing to each other in such a way that 1=1) 😉. Huh?! That post is about the fact (deedsake) that for any current microstate, you can partition phase space in such a way that the current entropy with regard to that partition (for entropy is always relative to a partition of phase space) is low. Hence, I argue there (and please set me right if I'm wrong), there's almost always something interesting going on (often perhaps even life), though it is likely that the lifeforms of one partition can only detect what is interesting with respect to their own partition of phase-space. I just used particle-configurations that look like runes as an example; I could just as well have chosen tennis-rackets or Chinese characters or chess-pieces or whatever has structure. This shows us how weighty it is to read (or listen) and understand before one judges 😉. It's part of the great platform https://www.fandom.com/, formerly called "Wikia", which was co-made by Jimmy Wales, co-maker of Wikipedia. So what's the big deal? Those who thought that there was any conspiracy, be careful ⚠️ lest you become conspiracy-theorists. 🤣 Judging the canship (ability) or knowledge of another as wanting can result from one's own want of canship or knowledge, as seems to be the case here. Dost ðou even know ðy own speech? 
For instance, ðou shouldst not address an individual as "you", but as "ðou", for "you" is the plural accusative and dative (many-tale whon-case and whom-case) form of ðe English second personal pronoun, akin to Þeech "euch", while "ðou" is its singular nominative (one-tale who-case) shape, akin to German "du". Ðe sentence "Alice, can you please help me?" translates into Þeech literally as "Adelheid, kann euch mir bitte helfen?", which is such bad German ðat it would not even be easy to understand. Didst ðou know ðat? Sadly, the fair English speech has been messed up quite a bit, but we should at least try to tidy it up again. However, all that has little to do with the topic at hand, and I'd rather discuss it in a speechlore (linguistics) forum than in a mathematics one. -
Is there a proposition known to be undecidable?
Tristan L replied to Tristan L's topic in Mathematics
Oh, only now do I realize that what I said was ambiguous. Yes, it does. If I wanted to contradict you regarding second-step logic, I'd have said: "No, it does". The problem is that the English speech doesn't have an equivalent of German "doch", French "si", and Arabic "بَلَى" (and in fact, none of these three European languages has an equivalent of Arabic "كَلَّا"). So I guess that my question has been half-answered: Since second-order witcraft is incomplete, there are undecidable statements in it, of which CH is an example. However, while we know that second-order logic will always stay incomplete (if only finite proofs are allowed), we might some day find a further axiom that applies to the true set universe with which to decide CH. That's right. I should have been more specific: a statement expressible in the speech of the Dedekind-Peano axioms, th.i. second-step or some higher-step logic. Yes, that's what I mean, though sets of naturals and sets of sets of naturals and so on are also allowed. -
Is there a proposition known to be undecidable?
Tristan L replied to Tristan L's topic in Mathematics
Thank you very much for your detailed and informative (informatul) answer! 👍 Or rather, I was only talking about semantics. I like to think of an axiom-system as standing for an ownship, and of being a model of that axiom-system as having that ownship. Right. Yes, it does. Yes, and to avoid confusion with the much too weak first-order theory, I called the second-order theory "DP". Dedekind's Isomorphy Theorem indeed shows the categoricity of DP. True. You're right; instead of taking not-categorical first-order ZFC as our basis, we take categorical second-order analysis as our basis, with the true set-theory (about which ZFC doesn't give enough info) as the meta-logical (over-witcrafty) framework of second-step logic. Then CH is indeed a proposition of the kind that I was searching for. As I understand it, the word "complete" as brooked (used) here means what I'll call "semantic completeness": every sentence expressible in the speech is either true in all models or untrue in all models, as opposed to what I'll call "syntactic completeness": every sentence expressible in the speech is either derivable from the axioms with the logical calculus or its negation is. Well, the former has to follow from the latter if the logical calculus is sound, which it should be. I think that semantic completeness rather than syntactic completeness is meant, though, which makes sense: If the axiom-system has only one model, every sentence is either true in all models or it's false in all models. Most certainly your whole answer is! 😃 Yes, and more broadly, I was searching for statements which are semantically decidable in (either true in all models of or untrue in all models of) some axiom-system, but not syntactically decidable in the axiom-system. Am I right in thinking that these are the ones that show a true lack of ability to know on our part? -
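The two kinds of completeness distinguished in the post above can be set side by side; the notation here is mine, not from the original post. For an axiom-system \(AS\) and the sentences \(\varphi\) expressible in its speech:

```latex
% Semantic completeness: every sentence is settled by the models.
\forall\varphi:\ \bigl(\forall M \models AS:\ M \models \varphi\bigr)
  \ \lor\ \bigl(\forall M \models AS:\ M \models \lnot\varphi\bigr)

% Syntactic completeness: every sentence is settled by the calculus.
\forall\varphi:\ (AS \vdash \varphi)\ \lor\ (AS \vdash \lnot\varphi)
```

Categoricity (one model up to isomorphy) gives semantic completeness at once, while Gödel's theorems show that syntactic completeness can still fail for any sound finitary calculus strong enough for arithmetic.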
Is there a proposition known to be undecidable?
Tristan L replied to Tristan L's topic in Mathematics
Yes, exactly. However, the conjecture that there are odd perfect numbers is not what I seek, for since we know that (if it is true, we can show that it's true), we know that (if it's undecidable, it is false), so if we knew that it is undecidable, we'd know that it's false, making it decidable after all and thus leading to a contradiction. I seek a proposition \(A\) such that we know that we can't know whether \(A\) is true or false. -
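The step "if it is true, we can show that it's true" rests on the statement's being checkable by search: exhibiting one odd perfect number would prove the conjecture. A minimal sketch of that semi-decision procedure (function names mine):

```python
def is_perfect(n: int) -> bool:
    """A number is perfect if it equals the sum of its proper divisors."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def search_odd_perfect(limit: int):
    """Semi-decision procedure: if an odd perfect number exists, an
    unbounded search would eventually find one and thereby prove the
    conjecture; here the search is cut off at `limit`."""
    return [n for n in range(3, limit, 2) if is_perfect(n)]

print([n for n in range(2, 10000) if is_perfect(n)])  # [6, 28, 496, 8128]
print(search_odd_perfect(10000))                      # [] so far
```

If the conjecture is false, this search simply never halts with a witness, which is exactly why its undecidability would imply its falsity.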
Is there a proposition known to be undecidable?
Tristan L replied to Tristan L's topic in Mathematics
Yes, CH is undecidable in / independent of ZFC, but that's because ZFC has several not-isomorphic models: I'm asking whether there is a proposition which is known to be independent of an axiom-system with only one model (up to isomorphy), such as the Dedekind-Peano-axioms. -
To me, axiom-systems seem to basically be ownships (properties). For instance, the group-axiom-system is basically the ownship of being an ordered pair \((G, *)\) such that \(G\) is a set and \(*\) is a function from \(G\times G\) to \(G\) such that \(*\) is associative and has an identity element and each member of \(G\) has an inverse element with regard to \(*\). Just as the axiom-system itself is an ownship, so are what are called “propositions in the language/speech of the system” actually properties. For instance, when we say: “The proposition that the sum of the inner angles of a triangle is always 180° follows from the Euclidean axioms“, we actually mean that for every structure E, if E has the Euclidean axiom-system as a property, then E has the property that every triangle in it has an inner-angle-sum of 180°. Some axiom-systems, such as the group-axiom-system and the field-axiom-system, are had by several structures of which not all are isomorphic to each other. In other words, such axiom-systems have at least two models which aren’t isomorphic to each other. Let’s call these axiom-systems “not-characterizing”. Others, such as the Dedekind-Peano-axiom-system (called “DP” henceforth), are only had by one structure up to isomorphy – they have a model, and all their models are isomorphic to each other. Let’s call these axiom-systems “characterizing”. The only model of DP up to isomorphy (and indeed up to unique isomorphy), th.i. the only entity, up to (unique) isomorphy, which has DP as a property, is the structure of the natural numbers. The German mathematician Richard Dedekind showed this in 1888 with his Theorem of Isomorphy. Now, there are two ways in which a ‘proposition’ \(P\) (in reality a property, see above) can be undecidable (neither provable nor disprovable) in an axiom-system \(AS\). One way is that \(AS\) has several non-isomorphic models of which some have \(P\) and others don’t. 
For instance, being unendly (infinite) isn’t decidable from the field-axioms since there are finite (endly) fields as well as unendly ones. This underkind of undecidability is, I think, obviously not very interesting. The Continuum-Hypothesis (CH) is one example, for it’s true in some models of ZFC and false in others. Here, the lack of our knowledge stems from the fact (deedsake) that ZFC doesn’t contain all information about the set-universe to start with. The second way in which \(P\) can be undecidable in \(AS\) is that it either holds in all models of \(AS\) or its negation holds in all models of \(AS\), and yet we still can derive neither one from \(AS\) because our logical (witcrafty) tools are too weak. This reflects a true unableness on our part to get info out of \(AS\) which nevertheless is there. This is always the case if \(AS\) is characterizing (has only one model up to isomorphy). Such an axiom-system contains all info about its model, so undecidability in it means inability to get info which is nonetheless there. So while undecidableness in ZFC need not be interesting, undecidableness in DP always is. Since characterizing axiom-systems have just one model up to isomorphy, what we call “propositions in their speech” – in truth ownships – can actually be regarded as propositions, namely the propositions resulting from predicating those properties of the system’s unique model. Now, from Gödel’s Incompleteness Theorems, we know that there are undecidable propositions in DP, that is, not every statement about the naturals can be shown or disproven. However, my question is this: Is there an individual proposition about the naturals of which we know that we can neither prove it nor disprove it in DP?
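The contrast between characterizing and not-characterizing axiom-systems can be made concrete with the field example: the two-element structure below and the rationals are both models of the field-axioms, one endly and one unendly, so endliness is not decided by those axioms. A sketch (function names mine) that checks the field-axioms on a finite carrier set by brute force:

```python
from itertools import product

# GF(2), the two-element field: addition is XOR, multiplication is AND.
F = [0, 1]
add = lambda a, b: a ^ b
mul = lambda a, b: a & b

def is_field(elems, add, mul, zero, one):
    """Brute-force check of the field-axioms on a finite carrier set."""
    for a, b, c in product(elems, repeat=3):
        if add(add(a, b), c) != add(a, add(b, c)):          return False  # + associative
        if mul(mul(a, b), c) != mul(a, mul(b, c)):          return False  # * associative
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):  return False  # distributive
    for a, b in product(elems, repeat=2):
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a):
            return False                                                  # commutative
    for a in elems:
        if add(a, zero) != a or mul(a, one) != a:           return False  # identities
        if not any(add(a, b) == zero for b in elems):       return False  # additive inverse
        if a != zero and not any(mul(a, b) == one for b in elems):
            return False                                                  # multiplicative inverse
    return True

print(is_field(F, add, mul, 0, 1))  # True: a finite (endly) model of the field-axioms
```

Since this finite structure and the unendly rationals both satisfy the same axioms, no proof from the field-axioms alone can settle "the field is finite".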
-
Were the first synapsids scaly?
Tristan L replied to Tristan L's topic in Evolution, Morphology and Exobiology
However, the shared forebear of epidermal scales, feathers and hair need not itself have been a scale. -
I’ve read some claims that the synapsids (theropsids) had smooth, glandular skin rather than scaly skin like modern reptiles – Estemmenosuchus is often cited in this regard – and that the scale-like structures that they did have, their belly-scales, aren’t homologous to lepidosaur scales. While the latter claim may be the case, I do have qualms with the former claim. After all, this study (see https://www.sciencedaily.com/releases/2016/06/160624154658.htm for an overview) has shown that modern reptiles have anatomical placodes just as mammals and birds do, and that mammalian hair, modern reptile scales, and bird feathers are all homologous to each other and come from the skin appendages of the last shared forebear of mammals, modern reptiles, and birds. Estemmenosuchus is a derived synapsid (namely a therapsid), so its lack of scales may very well be secondary, right? Moreover, the lepidosaurian-like scales of the varanopid Ascendonanus (see https://www.researchgate.net/publication/323782950_First_arboreal_%27pelycosaurs%27_Synapsida_Varanopidae_from_the_early_Permian_Chemnitz_Fossil_Lagerstatte_SE_Germany_with_a_review_of_varanopid_phylogeny) show that varanopids had reptilian scales, and even lepidosaur-like ones. Of course, this isn’t necessarily evidence for synapsid scales since varanopids might be bird-line amniotes rather than mammal-line amniotes (though this suggestion has been contested and argued for and against). Still, doesn’t the homology of hair, scales, and feathers strongly indicate that the last common forebear of mammals, modern reptiles, and birds had scaly skin, and that early synapsids thus were scaly, too, in a way homologous to modern reptiles? If so, am I right in rejecting such claims as the one that early synapsids looked like naked lizards, or that the skin of our early ancestors was frog-like rather than reptile-like? 
The latter claim is false anyway since even if they didn’t have scales, their skin would still have been suited to a dry environment like reptiles’ and unlike frogs’, right? Is there other evidence for (or against) scaly skin being the plesiomorphic condition of all crown-group amniotes?
-
Right. It would indeed need a stretch of the imagination to see how lifeforms from the last aeon could get info into our aeon in a way that brings about living things. I find the hypothesis that life arose much more convincing. Then again, I just found out that my idea isn't new: it's the idea of information panspermia. Still, I find it very speculative and think the arising of life from scratch to be much likelier. But there are some very good solutions to the ILP which show that info isn't destroyed after all, aren't there? I firmly believe in the indestructibility of info (though I do think that on the other hand, info can be made), also because of quantum theory, but not only due to it. That's one reason for which I'm skeptical of CCC. Yes, I think we could, though I believe that it's not at all certain that we will. See also the idea of information panspermia - my idea turned out not to be so new after all.
-
Yes, those are some good points for life having arisen rather than always been there. But the properties of the CMB depend in part on those of the Big Bang, which in turn might be influenced by a previous aeon, I think. Is that right? For instance, see here for the possibility of the CMB containing info from an aeon before ours. As far as I know, the genetic code is very similar across all Earthly living beings and hasn’t changed much over billions of years. I don’t mean individual genomes or sets of genomes, but rather the genetic speech. Could our genetic speech have been written by organisms of the last aeon? In conformal cyclic cosmology (CCC), there is no rebound. So are these forum posts no information? Couldn’t a very mighty civilization use EM radiation to send info to the next aeon? But what about cyclic models?
-
Good points. I also think that life did arise. However, I think that we shouldn’t take this as a given fact without questioning it. Arguments such as yours are one of the things that I was searching for. Has the question of whether life did arise in the first place been discussed in the scientific literature? But what if those beings influenced life in our cycle more subtly? For example, maybe they knew that amino-acids and nucleotides would arise naturally, so they rather wrote the genetic code and sent it to our cycle’s raw materials. I don’t really believe such things; I’d just like to know whether there are good arguments against it (or perhaps for it for that matter). They likely couldn’t. Bear in mind, though, that according to Roger Penrose’s CCC, there is no collapse; rather, the (if you ask me boring) future time-like infinity of one cycle is the big bang of the next cycle. Mightn’t seeds survive that way, in the shape of EM signals, perhaps?
-
Many scientists, philosophers, and people of religion have sought to find the answer to the question: How did life arise? However, isn’t that question loaded? Shouldn’t we first ask: Did life arise? Mightn’t it be the case that the Universe is everlasting, and that life has always existed in this Universe? For instance, if Roger Penrose’s conformal cyclic cosmology is true, then isn’t it possible that intelligent living beings in each aeon manage to seed the next aeon with life, perhaps with signals of some sort? Something similar could be asked about other cyclical models.
-
I didn't and don't want to do that; rather, I only wanted and still want to make sure that there are no misunderstandings before I answer your other points and go on with the discussion. The important thing is that in this whole thread, I have only ever talked about partitioning the set of microstates into macrostates, not partitioning the system into subsystems. However, you seem to imply that I have done the latter. In reality, I have only ever talked about partitioning the phase space. In what way does it do that? Does that mean that without the equipartition theorem, one microstate could belong to more than one macrostate? What exactly do you mean by that? All members of a partition are pairwise disjoint by definition. Being sets, some partitions P, Q are disjoint (share no common subsets of the ground-set), while others are not. Partitions of what? Of course not, but as I said, I want to do away with any misunderstandings on my or your part before talking about your other points, including Caratheodory. Actually, I linked to the paper mainly because I find it interesting that there may be a way to outsmart the Second Law 😁.
-
Actually, that's not what I mean by "partitions". A partition P of a set Mi of all possible microstates is a way to split it up into not-empty, pairwise disjoint subsets whose union is Mi, and the elements of P are the macrostates of/w.r.t. P. For example, if we have exactly six possible microstates 1, 2, 3, 4, 5, 6, the set of microstates is Mi = {1, 2, 3, 4, 5, 6}, the set {{1}, {2, 3}, {4, 5, 6}} is one of the partitions of Mi, and {2, 3} is one of the macrostates w.r.t. that partition. The thermo-partition is the set of all thermodynamic macrostates, th.i. the way in which thermodynamics groups microstates. Mi usually carries a structure, so that we're dealing with (Mi, STRUCTURE) and not just Mi as an unstructured set. Partitions P, Q of Mi are isomorphic if there is an automorphism f of (Mi, STRUCTURE) such that we get Q from P by applying f to each of the elements (microstates) of the elements (macrostates) of P. Partitional isomorphy is isomorphy of partitions if STRUCTURE is trivial (th.i. if we deal with Mi as an unstructured set), so that any permutation of Mi is an automorphism as far as partitional isomorphy is concerned. That's at least how I use those words.
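The six-microstate example above is easy to sketch in code (function names mine); with all microstates taken as equally likely and the Boltzmann constant set to 1, the entropy of a microstate w.r.t. a partition is ln W, where W is the size of its macrostate:

```python
import math

Mi = {1, 2, 3, 4, 5, 6}          # the set of all possible microstates
P = [{1}, {2, 3}, {4, 5, 6}]     # one partition of Mi; its elements are macrostates

def is_partition(blocks, ground):
    """Not-empty, pairwise disjoint blocks whose union is the ground-set."""
    return (all(blocks)
            and set().union(*blocks) == set(ground)
            and sum(len(b) for b in blocks) == len(ground))

def macrostate_of(micro, blocks):
    """The unique macrostate (block) containing a given microstate."""
    return next(b for b in blocks if micro in b)

def entropy(micro, blocks):
    """Boltzmann-style entropy w.r.t. a partition, with k_B = 1 and
    all microstates assumed equally likely: S = ln W."""
    return math.log(len(macrostate_of(micro, blocks)))

print(is_partition(P, Mi))   # True
print(macrostate_of(2, P))   # {2, 3}
print(entropy(1, P))         # 0.0, a low-entropy macrostate
print(entropy(5, P))         # ln 3, the highest entropy this partition allows
```

The same microstate can thus have low entropy w.r.t. one partition and high entropy w.r.t. another, which is the point of "entropy is always relative to a partition".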
-
Answer to joigus: Yes, I have, and just like there will be Boltzmann brains (which don't live long) after a long enough time, there will be Boltzmann galaxies (which can sustain life for a long time) after an even longer enough time. In fact, it is almost certain (probability = 1) that this will happen endlessly often afaik. Right; I should have said that you had shown good reasons why my idea may well be wrong. I thought that I had written "likely", but apparently I was wrong. But if the Universe were a closed system with an endless past and an endless future, the structures which gave rise to them (solar nebulas? galaxies?) would be Poincaré recurrences, I think. However, 👍 Answer to studiot: Yes, I think so. I think that I misinterpreted your argument and analogy. My new interpretation is as follows: The one-to-one correspondence between the boards stands for partitional isomorphy, whereas the different laws of chess and checkers stand for the additional structure on the set Mi of microstates, e.g. the neighborhood-relation in the simple LED-system above. Many partitions which are partition-isomorphic to the thermo-partition aren't isomorphic to it in the stronger sense, which also takes the additional structure into account. For example, in the LED-system, the brightness-partition is strongly isomorphic to the brightness'-partition, but not to the merely partitionally isomorphic brighthood-partition. If that is what you mean, I fully agree with you. Regarding the units of entropy and the Boltzmann constant, I still cannot see how one quantity which is a constant multiple of another can obey different laws than it. Also, you can actually set the Boltzmann constant equal to 1, and in fact, the Planck unit system equates universal constants like the Boltzmann constant with 1 or a numeric multiple thereof. But I now think that you meant something else, namely that the existence of units indicates that there is more structure on Mi than just the sethood of Mi. 
If that's what you meant, I agree. Do you only mean that they have the same partitional structure (number of macrostates, number of microstates in each macrostate), th.i. are partitionally isomorphic? If yes, then that's in accordance with my interpretation of you above. However, if you mean that they are isomorphic in the strong sense, th.i. have the same number of microstates, the same corresponding microstate-transitions, and the same probabilities of corresponding transitions, then that contradicts my above interpretation, and I cannot follow you. For an informational system which has exactly the same microstate structure as the physical world (transition-correspondence, same probabilities, and all), the states of that info-system which correspond to the emergent and complex states of the physical world are the informational emergent phenomena you're looking for. So long as the differences are or result in structural (th.i. substantial) differences, you can indeed not equate the two things in question. However, if the two things have exactly the same structure, then you can regard them as essentially the same (though not selfsame, of course). For example, the set of all even positive whole numbers together with the x -> x+2 function has exactly the same structure as the set of all positive whole numbers with the x -> x+1 function. Therefore, it's meaningless to ask which of the two are the "true" natural numbers. Perhaps with quantum info theory? But as long as the quantum effects, e.g. the anomalously high ionisation energies, do not result in structural and informational differences, I don't really have to explain them. It's like with Turing machines; we don't have to care about what details (e.g. number of symbols used) distinguish one universal Turing machine from another. As long as they've been shown to be UTMs, that's the only thing we have to care about since they can perfectly simulate each other. Now I have, and I might look into the topic. 
With that, you've brought a really interesting problem to my attention. I guess that what you want to say is that entropy alone doesn't give us enough info to solve it; we need additional details about the physical world. Is that right? If so, then this shows that these details have a bearing on the informational structure of our world. When I started this thread, I originally wanted to bring up the following issue but decided that it was too far off-topic; apparently it is not. The issue is this: Actually, it doesn't really make sense to assign probabilities to states. It only makes sense to assign probabilities to state transitions or talk about conditional probabilities (which is basically the same, I think, though perhaps a bit broader). Therefore, since entropy assumes that states have likelihoods, it might not grasp all the informational structure of the system. Perhaps the piston problem shows that there is more to the informational structure of the physical world than state-probabilities. Anyway, the piston-problem has led me to this very interesting article: https://arxiv.org/ftp/physics/papers/0207/0207073.pdf Indeed. I hope that hasn't taken so much time and made so much entropy that it has hastened the coming of the heat death 🥵.
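One way to make the transition-probabilities remark concrete: given only transition probabilities, state probabilities can be recovered as the stationary distribution of the resulting Markov chain, and an entropy can then be read off from that derived distribution. A sketch with a made-up three-state chain (the numbers and names are mine, purely for illustration):

```python
import math

# Hypothetical 3-state system described only by transition probabilities:
# T[i][j] = P(next state = j | current state = i).
T = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.9]]

def stationary(T, steps=2000):
    """Derive state probabilities from transition probabilities:
    start in one state and iterate p <- p.T until it settles."""
    n = len(T)
    p = [0.0] * n
    p[0] = 1.0
    for _ in range(steps):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
    return p

def shannon_entropy(p):
    """Entropy (in nats) of the derived state distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

p = stationary(T)
print([round(x, 3) for x in p])       # this symmetric chain settles to the uniform distribution
print(round(shannon_entropy(p), 3))   # ln 3 ≈ 1.099
```

So state-probabilities here are not primitive but derived from the transition structure, which is the post's point; whether entropy built on them captures all of the informational structure is exactly the open question raised.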
-
Answer to joigus: From what you've said, I think that I finally get where the problem lies: The set of all possible microstates isn't a simple unstructured set Mi, but a highly structured set (Mi, (STRUCTURE, e.g. relations and functions)). Partitions Ma1, Ma2 are isomorphic if and only if they have a partition-isomorphism (are isomorphic as partitions) and that partition-isomorphism respects STRUCTURE. Also, only partitions which respect STRUCTURE are eligible as partitions into macrostates. For example, if STRUCTURE is made up of a linear order on Mi, only not-crossing partitions are allowed. In the case of our simple system, there is a "neighborhood"-relation on the set of microstates, which tells us which state can become which other states with only one LED turning on or off. The brightness-partition, the brightness'-partition, and the rest of the sixteen partitions which we get from the brightness-partition by defining for each microstate (f, u, Þ, a) a new brightness-measure b_(f, u, Þ, a) through b_(f, u, Þ, a)(x, y, z, w) := (f*x - (1-f)*(1-x), u*y - (1-u)*(1-y), Þ*z - (1-Þ)*(1-z), a*w - (1-a)*(1-w)), are isomorphic to each other in the strong sense that they and their isomorphisms respect the neighborhood-relation. However, simply exchanging e.g. (1, 1, 1, 0) with (0, 0, 0, 1) in the brightness-partition yields a forbidden partition (call it the partition in terms of "brighthood"), since the other microstates (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1) in the same brighthood-macrostate as (0, 0, 0, 1) only differ from the one and only brighthood=4 microstate (1, 1, 1, 1) by one LED, but (0, 0, 0, 1) differs from it by three LEDs. Likewise, the many partitions which are isomorphic to the thermo-partition in the partition-sense don't respect the additional structure (of which there is a lot) given by the things which you've mentioned. If I understand you in the right way, the one and only partition respecting all that additional structure is the thermo-partition. 
Is that right? They mean what they mean - sets of microstates, and they are sets of microstates allowed by the known laws of physics. Perhaps with a rune-shaped "sock" woven out of thin threads which tear when strong wind blows against them. As soon as an unusually big number of gas-particles assemble inside the sock, they will cause an outflowing wind that rips the sock apart. Yes, I think that you're right. Your three points have been important for my above analysis. Regarding the time-stopping, I think that I now get what you mean: There are vast swathes of time during which the thermo-entropy is maximal or almost maximal (after all, it's always slightly and randomly fluctuating), but since nothing interesting happens during these times, there's nothing and no one that observes them, so in effect, they're not-existent. So, as soon as life becomes impossible due to too high entropy, the Poincaré Recurrence Time will pass as if the blink of an eye since no one is there to observe it, and after the Universe has become interesting again, life can again take hold. So though you've shown that my idea of the Universe being interesting much of the time is wrong, you've also shown that the Universe is actually interesting most of the time since from a macroscopic POV, the boring times don't exist. Am I right? But after a very, very long time (which is nonetheless puny compared to Graham's number of years, for instance), everything will be as it once was by the Poincaré Recurrence Theorem. Therefore, time (in the macroscopic sense) will come back one day, and will in fact come back endlessly often. By the same theorem, runes will spontaneously appear in the gas, but it will take much longer than the age of the Universe, so we can't expect to see something like that happen in a practical experiment. 
But on the whole, the Universe will be interesting for an infinitely long macroscopic time (which isn't continuous, of course), and also boring for an infinitely long fundamental (but not macroscopic) time. Of course, that doesn't take the evolution of space-time itself into account (e.g. expansion, dark energy asf.). Your idea that time doesn't macroscopically exist when entropy is maximal or near-maximal has actually proven quite hope-giving, I hope. Answer to studiot: Actually, I'm bent on finding the truth, and I think that I might've come pretty close with my above analysis in this post. You claimed that thermo-entropy and info-entropy behave differently and obey different laws and, if I understand you in the right way, that this is so only because they're just proportional and not identical. You still owe me an explanation for that. Your likening of the Boltzmann constant to the constant of proportionality between stress and strain is not valid since the former is a universal constant whereas the latter is not. After all, we could measure temperature in joules, and then the Boltzmann constant would have no units. I never said that there is only one rule applying to my partitions. I only wondered whether there is only one partition which is isomorphic to the thermo-partition. In a purely partitional sense, that is certainly not the case, but my analysis above, based partly on what joigus has said, suggests that there may indeed be no other partition which is isomorphic to the thermo-partition in the sense of respecting the additional structure. The anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic are explained by QM, but as I said, I never said that one law was enough for explaining everything. I was only talking about isomorphy. This discussion is really interesting. Question for joigus and studiot: Even if the thermo-partition is the only one in its equivalence class w.r.t. "strong" isomorphy, is it really the only interesting one? 
Can we really be sure e.g. that no extremely complex computations are actually going on in the seemingly dull and boring air around us? After all, if Alice and Bob send each other encrypted messages, it looks like nonsense to us, but they may still be having a very meaningful discussion about statistical thermodynamics.
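The 4-LED toy system above can be checked mechanically. The sketch below (names mine) lists the sixteen microstates, builds the brightness-partition, and verifies that every brightness-3 microstate neighbors (1, 1, 1, 1) by a single LED-flip, while the "brighthood" swap of (1, 1, 1, 0) and (0, 0, 0, 1) breaks that neighborhood structure:

```python
from itertools import product

# The 4-LED toy system: 16 microstates, each a tuple of on/off bits.
micro = list(product((0, 1), repeat=4))

def hamming(a, b):
    """Number of LEDs in which two microstates differ; the neighborhood-
    relation holds exactly when this distance is 1 (one LED flips)."""
    return sum(x != y for x, y in zip(a, b))

# Brightness-partition: macrostate = number of LEDs that are on.
brightness = {}
for m in micro:
    brightness.setdefault(sum(m), set()).add(m)

print(sorted(len(brightness[k]) for k in brightness))  # [1, 1, 4, 4, 6]

# Every brightness-3 microstate is one LED-flip away from (1, 1, 1, 1):
print(all(hamming(m, (1, 1, 1, 1)) == 1 for m in brightness[3]))  # True

# Swapping (1,1,1,0) with (0,0,0,1) gives the "brighthood" macrostate; the
# swapped-in state differs from (1,1,1,1) by three flips, so this partition
# no longer respects the neighborhood-relation.
brighthood3 = (brightness[3] - {(1, 1, 1, 0)}) | {(0, 0, 0, 1)}
print(all(hamming(m, (1, 1, 1, 1)) == 1 for m in brighthood3))    # False
```

The brighthood-partition is partitionally isomorphic to the brightness-partition (same block sizes) yet fails to respect the extra structure, which is the distinction the post draws.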
-
First Answer to studiot: You're welcome. I'm sorry to have to point out that apparently, you do not understand your own analogy well enough. Therefore, let me make it clearer to you. Your flats being in a one-to-one correspondence with your pigeonholes is analogous to two games being isomorphic to each other, which is in turn like two partitions of the set of microstates being isomorphic to each other. Chess and checkers, however, are not isomorphic to each other; there is no one-to-one correspondence between their possible game configurations and allowed moves. That's why they work differently. Regarding the partitions, there are some that aren't isomorphic to each other, and others that are. The thermodynamic partition is isomorphic to every partition that we get by taking the thermo-partition and then applying an arbitrary permutation of the microstates. Not all of these partitions are distinct, but there are still many partitions isomorphic to the thermo-partition but distinct from it. So, there are many measures of entropy equivalent to thermo-entropy but distinct from it, and the system will much more often be 1. in a state of low entropy w.r.t. some partition isomorphic to the thermo-partition than 2. in a state of low entropy w.r.t. the thermo-partition itself. Thermodynamic entropy is just information entropy w.r.t. the thermo-partition multiplied by the Boltzmann constant, afaik. They are not only defined in terms of isomorphic partitions, but in terms of one and the same partition. One is just the other multiplied by a constant. Could you please tell me how you supposedly get conflicting results with them? As I've already said, chess and checkers are not isomorphic, unlike thermodynamic entropy and information entropy w.r.t. the thermo-partition. Thermodynamic entropy vs. information entropy w.r.t. the thermo-partition is like playing chess on some board with chess-pieces of a certain size vs. 
playing chess on a physically bigger board with bigger chess pieces, but with the number of squares and everything else kept the same. Therefore, we can safely equate the two and just talk of thermo-entropy, and to make the math a bit easier, we'll not use unneeded units that distract from the essence. I've already said why info-entropy w.r.t. the thermo-partition and thermo-entropy are essentially the same (not just isomorphic) and why they're very different from chess and checkers, which aren't even isomorphic. But I ask you again: Since when are info-entropy w.r.t. the thermo-partition and thermo-entropy subject to different laws? Please do tell. ************************************************************************************************************** Answer to joigus: Of course I'm aware of that. Also, don't get me wrong and think that I want to be right in order to be right. I want to be right since I don't like the heat death of the Universe at all. But of course, I won't let that make me bend results. From a purely scientific point of view, my being right and my being wrong are indeed both interesting, but from a life-loving perspective, I really do hope to be right. I find your thoughts very interesting. Actually, the number of partitions of a set with n elements is the Bell number B_n, and the sequence of Bell numbers does grow quite quickly. So if we have n microstates, there are B_n ways to define macrostates. So, while for a particular kind of choosing a partition, the number of macrostates in that partition might get overwhelmed by the number of microstates, for any number of microstates, there is a way of partitioning them such that the number of macrostates in that partition is not overwhelmed. Now, of course, not all partitions are isomorphic, but even the number of partitions isomorphic to a given partition is very big in many cases. I've calculated (hopefully rightly) that for any positive whole number k, sequence (l_1, ... 
, l_k) of positive whole numbers, and strictly rising sequence (m_1, ... , m_k) of positive whole numbers, there are (l_1*m_1+...+l_k*m_k)! / ( m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k! ) ways to partition a set with n = l_1*m_1+...+l_k*m_k members into l_1 sets of m_1 elements each, ..., and l_k sets of m_k elements each. Here, k, (l_1, ... , l_k) and (m_1, ... , m_k) uniquely determine an equivalence class of isomorphic partitions, if I'm right. This result is consistent with the first few Bell numbers. Thus, since the thermo-partition isn't trivial (k=1, l_1=1, m_1=n or k=1, l_1=n, m_1=1), there are many partitions isomorphic but not identical to the thermo-partition, and their number likely grows humongously as the number of microstates rises. Take the following very simple system, in which we'll assume time is discrete to make it even simpler: We have n bits which are either 1 or 0. In each step and for each bit, there's a probability of p that the bit will change. The bits change independently of each other. Let's interpret the bits as LEDs of the same brightness which are either on or off. The microstates of the system are the ways in which the individual LEDs are on or off. We can then define two microstates as belonging to the same macrostate if they both have the same overall brightness. If we take n = 4, for example, the microstates are (0, 0, 0, 0), (0, 0, 0, 1), ..., (1, 1, 1, 1), sixteen in total. The brightness-macrostates are {(0, 0, 0, 0)} (brightness = 0, probability = 1/16), {(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)} (brightness = 1, probability = 4/16), {(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), (1, 1, 0, 0)} (brightness = 2, probability = 6/16), {(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)} (brightness = 3, probability = 4/16), {(1, 1, 1, 1)} (brightness = 4, probability = 1/16). 
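The counting formula above can be checked mechanically. In this minimal sketch (the helper names are my own), summing n! / (m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k!) over all block-size types of an n-set reproduces the first few Bell numbers, and plugging in the block sizes 1, 4, 6, 4, 1 counts the partitions of the sixteen microstates that are isomorphic to the brightness-partition:

```python
import math
from collections import Counter

def count_of_type(block_sizes):
    """Number of partitions of an n-set whose block sizes form the given multiset.
    Implements n! / (m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k!) from the text."""
    n = sum(block_sizes)
    result = math.factorial(n)
    for m, l in Counter(block_sizes).items():  # block size m occurs l times
        result //= math.factorial(m) ** l * math.factorial(l)
    return result

def integer_partitions(n, max_part=None):
    """Yield every multiset of positive whole numbers summing to n."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in integer_partitions(n - part, part):
            yield [part] + rest

def bell(n):
    """Bell number B_n, obtained by summing the formula over all types."""
    return sum(count_of_type(p) for p in integer_partitions(n))

print([bell(n) for n in range(1, 7)])  # [1, 2, 5, 15, 52, 203]
print(count_of_type([1, 4, 6, 4, 1]))  # 12612600
```

If the formula is right, the brightness-partition of the sixteen microstates alone has 12,612,600 isomorphic siblings, which fits the claim that such partitions are plentiful even for tiny systems.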
Simple calculations show us that the system will on average evolve from brightness 0 or brightness 4 (low probability, low entropy) to brightness 2 (high probability, high entropy). However, when the system is in the brightness-macrostate of brightness 2, which has maximum brightness-entropy, e.g. by being in microstate (0, 1, 1, 0), we can simply choose a different measure of entropy which is low by choosing the partition into brightness'-macrostates, where the brightness' of a microstate (x, y, z, w) = the brightness of the microstate (x, 1-y, 1-z, w): {(0, 1, 1, 0)} (brightness' = 0, probability = 1/16), {(0, 1, 1, 1), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 0)} (brightness' = 1, probability = 4/16), {(0, 1, 0, 1), (0, 0, 1, 1), (1, 1, 1, 1), (0, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0)} (brightness' = 2, probability = 6/16), {(0, 0, 0, 1), (1, 1, 0, 1), (1, 0, 1, 1), (1, 0, 0, 0)} (brightness' = 3, probability = 4/16), {(1, 0, 0, 1)} (brightness' = 4, probability = 1/16). The system will also tend to change from low brightness'-entropy to high brightness'-entropy, but then I can choose yet another measure of brightness, brightness'', according to which the entropy is low. The thing is that at any time, I can choose a partition of the set of microstates into macrostates which is isomorphic to the brightness-partition and for which the current microstate has minimum entropy. But anyway, the system will someday return to the low-brightness-entropy state of brightness = 4. Since the system is so simple, we can even observe that spontaneous fall in brightness-entropy. Does that mean for our simple system that the microstates (0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), and (1, 1, 0, 0) somehow magically stop the flow of time? Entropy is emergent, right? So, how can it stop something as fundamental as time? You yourself said that microscopic changes will go on happening, which means that there must always be time. 
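The two partitions above can be generated and compared directly. This sketch (the names are my own) exploits the fact that brightness' is just brightness after XOR-ing with the mask (0, 1, 1, 0): the two partitions have identical macrostate sizes, so they are isomorphic, yet they give the microstate (0, 1, 1, 0) maximal and minimal entropy respectively:

```python
from itertools import product
from collections import defaultdict

microstates = list(product((0, 1), repeat=4))  # the 16 microstates

def brightness(state):
    """Number of LEDs that are on."""
    return sum(state)

def brightness_prime(state, mask=(0, 1, 1, 0)):
    """brightness'((x, y, z, w)) = brightness((x, 1-y, 1-z, w)): flip bits per the mask."""
    return brightness(tuple(bit ^ m for bit, m in zip(state, mask)))

def partition_by(key):
    """Group the microstates into macrostates sharing the same key value."""
    macrostates = defaultdict(set)
    for s in microstates:
        macrostates[key(s)].add(s)
    return dict(macrostates)

by_brightness = partition_by(brightness)
by_brightness_prime = partition_by(brightness_prime)

# Same macrostate sizes 1, 4, 6, 4, 1 -> the partitions are isomorphic ...
print(sorted(len(ma) for ma in by_brightness.values()))        # [1, 1, 4, 4, 6]
print(sorted(len(ma) for ma in by_brightness_prime.values()))  # [1, 1, 4, 4, 6]
# ... yet (0, 1, 1, 0) lies in the biggest (maximum-entropy) brightness-macrostate
# and in a singleton (minimum-entropy) brightness'-macrostate.
print(brightness((0, 1, 1, 0)), brightness_prime((0, 1, 1, 0)))  # 2 0
```

The same trick works for any current microstate s: XOR-ing with a mask that maps s to (0, 0, 0, 0) or (1, 1, 1, 1) yields a partition isomorphic to the brightness-partition in which s has minimum entropy.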
By the Poincaré recurrence theorem and the fluctuation theorem, the system will almost certainly go back to its original state of low entropy. It just needs a very, very long time to do that. After all, the Second Law isn't some law of magic which says that a magical property called entropy, defined in terms of some magically unique partition, must always rise, right? And spontaneous entropy falls have been observed in very small systems, haven't they? Again, I find your ideas very stimulating and fruitful. Second Answer to studiot: This is the unanswered question. +1 No longer. See my answer to that above.
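The recurrence is easy to watch in the toy system. A minimal simulation (the flip probability p = 0.1, the seed, and the step count are arbitrary choices of mine): in each step every bit flips independently with probability p, and the low-entropy state (1, 1, 1, 1) keeps being revisited:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def step(state, p=0.1):
    """One time step: each bit flips independently with probability p."""
    return tuple(bit ^ (random.random() < p) for bit in state)

state = (0, 0, 0, 0)
visits_to_all_on = 0
for _ in range(100_000):
    state = step(state)
    if state == (1, 1, 1, 1):
        visits_to_all_on += 1

# In the long run the chain spends about 1/16 of its time in each microstate,
# so the minimum-brightness-entropy state recurs again and again.
print(visits_to_all_on > 0)
```

In a macroscopic system the state space is astronomically larger, which is why the recurrence time is so absurdly long there; the mechanism, though, is the same.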
-
Just a quick correction of my correction: I forgot "the logarithm of" after "proportional to".
-
The units are only due to a constant of proportionality (the Boltzmann constant). However, in essence, every entropy is a number defined in terms of probability, including both thermodynamic entropy (defined statistical-mechanically and without unneeded constants of proportionality) and "runish entropy". What's essential about thermodynamic entropy is that it's defined in terms of thermodynamic macrostates. "Rune entropy", on the other hand, is defined in terms of how well the particles spell out runes. Of course I'd rather live in a flat, but that's only because I'm a human and not a pigeon. Translating this metaphor, it means that I'd rather live in a universe with low thermodynamic entropy than in one with low runic entropy, but only since I'm a thermodynamic lifeform and not a runish one. Maybe at a time in the far future when thermodynamic entropy is high but runish entropy is low, there will be an intelligent runish lifeform asking another one whether it likes to live in the low-rune-entropy universe it knows or is so unreasonable as to want to live in a universe with low thermodynamic entropy. The thermodynamic world is indeed very different from the runish world, but I see no reason for thermo-chauvinism. Low thermo-entropy is good for thermo-life, and low rune-entropy is good for runish life. Alice and Bob can have very different machines, where Alice's is built such that it uses a pressure difference between two chambers, and Bob's machine is built such that it extracts useful work from the Fehu-state I described above, e.g. by having tiny Fehu-shaped chambers in it or something. It's just that in our current world, Alice's machine is much more useful, as thermo-entropy is low while rune-entropy is high at the current time. Isn't that right? Yeah, that's right. My bad. I should have said that if all microstates are equally likely, the entropy of a macrostate is proportional to the probability of that macrostate. 
Corresponding changes have to be made throughout my text. However, that doesn't change anything about its basic tenets, regardless of whether the microstates are equally likely or not, does it? I hope and think not, but please correct me if I'm wrong. Exactly. My point is that if I choose, say, having the particles arranged so as to spell out runes rather than thermodynamic properties like pressure, temperature and volume, I get a very different entropy measure and thus also a different state of maximal entropy. So, rune-entropy can be low while thermo-entropy is high. Doesn't that mean that runish life is possible in a universe with low rune entropy? Why should e.g. temperature be more privileged than rune-spelling? Yes, I fully agree. On average, thermo-entropy increases with time, and when it has become very high, it will take eons to spontaneously become low again. The same thing goes for rune-entropy. However, since there are so humongously many measures of entropy, there will always be at least one that falls and one that is very low at any time. Therefore, life will always be possible. When thermodynamic entropy becomes too high, thermo-life stops, but then, e.g., rune-entropy is low, so rune-life starts. When rune-entropy has become too high, runish life ends and is again replaced by another shape of life. My point is that rather than being interesting and life-filled for very short whiles separated by huge boring lifeless intervals, the universe (imagine it to be a closed system, for expansion and similar stuff is another topic) will be interesting and life-filled for much of the time. It's not life itself that needs eons to come again, it's only each particular shape of life that takes eons to come again. That's my point, which I hope is right. Perhaps some of the entropy measures aren't as good as others, but is thermo-entropy really better than every other measure of entropy? As far as I can see, the other measures are just as good. Yes, it certainly does!
-
As I understand entropy and the Second Law of Thermodynamics, things stand as follows: A closed system has a set Mi of possible microstates between which it randomly changes. The set Mi of all possible microstates is partitioned into macrostates, resulting in a partition Ma of Mi. The members of Ma are pairwise disjoint subsets of Mi, and their union is Mi. The entropy S(ma) of a macrostate ma in Ma is the logarithm of the probability P(ma) of ma happening, which is in turn the sum Sum_{mi ∊ ma} p(mi) of the probabilities p(mi) of all microstates mi in ma. The entropy s_{Ma}(mi) of a microstate mi with respect to Ma is the entropy of the macrostate in Ma to which mi belongs. The current entropy s_{Ma} of the system with respect to Ma is the entropy, with respect to Ma, of the microstate in which the system currently is. The Second Law of Thermodynamics simply states that a closed system is more likely to pass from a less probable state into a more probable one than from a more probable state into a less probable one. Thus, it is merely a stochastic truism. By thermal fluctuations, the fluctuation theorem, the Poincaré recurrence theorem, and generally by basic stochastic laws, the system will someday go back to a low-entropy state. However, also by basic stochastic considerations, the time during which the system has a high entropy and is thus boring and hostile to life and information processing is vastly greater than the time during which it has a low entropy and is thus interesting and friendly to info-processing and life. Thus, there are vast time swathes during which the system is dull and boring, interspersed with tiny whiles during which it is interesting. Or so it might seem... Now, what caught my eye is that the entropy we ascribe to a microstate depends on which partition Ma of Mi into macrostates we choose. Physicists usually choose Ma in terms of thermodynamic properties like pressure, temperature and volume. 
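The definitions above translate directly into code. A minimal sketch (the helper names are my own; note that with S(ma) = log P(ma), entropies are non-positive, and the likeliest macrostate has the entropy closest to zero):

```python
import math

def macrostate_probability(ma, p):
    """P(ma): the sum of the probabilities p(mi) of the microstates mi in ma."""
    return sum(p[mi] for mi in ma)

def macrostate_entropy(ma, p):
    """S(ma) = log P(ma), per the definition in the text."""
    return math.log(macrostate_probability(ma, p))

def microstate_entropy(mi, Ma, p):
    """s_Ma(mi): the entropy of the macrostate in the partition Ma containing mi."""
    ma = next(m for m in Ma if mi in m)
    return macrostate_entropy(ma, p)

# Example: 16 equally likely microstates 0..15, partitioned by their bit count,
# giving macrostates of sizes 1, 4, 6, 4, 1.
p = {mi: 1 / 16 for mi in range(16)}
Ma = [frozenset(mi for mi in range(16) if bin(mi).count("1") == k) for k in range(5)]

print(microstate_entropy(0b0110, Ma, p))  # log(6/16): the maximal entropy
print(microstate_entropy(0b1111, Ma, p))  # log(1/16): the minimal entropy
```

Choosing a different partition of the same 16 microstates changes every s_Ma(mi) while leaving the microstate probabilities p(mi) untouched, which is exactly the freedom the rest of this post exploits.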
Let’s call this partition into macrostates “Ma_thermo”. However, who says that Ma_thermo is the most natural partition of Mi into macrostates? For example, I can also define macrostates in terms of, say, how well the particles in the system spell out runes. Let’s call this partition Ma_rune. Now, the system-entropy s_{Ma_thermo} with respect to Ma_thermo can be very different from the system-entropy s_{Ma_rune} with respect to Ma_rune. For example, a microstate in which all the particles spell out tiny Fehu-runes ‘ᚠ’ probably has a high thermodynamic entropy but a low rune entropy. What’s very interesting is that at any point in time t, we can choose a partition Ma_t of Mi into macrostates such that the entropy s_{Ma_t}(mi_t) of the system at t w.r.t. Ma_t is very low. Doesn’t that mean the following?: At any time-point t, the entropy s_{Ma_t} of the system is low with respect to some partition Ma_t of Mi into macrostates. Therefore, information processing and life at time t work according to the measure s_{Ma_t} of entropy induced by Ma_t. The system entropy s_{Ma_t} rises as time goes on until info-processing and life based on the Ma_t measure of entropy can no longer work. However, at that later time t’, there will be another partition Ma_t’ of Mi into macrostates such that the system entropy is low w.r.t. Ma_t’. Therefore, at t’, info-processing and life based on the measure s_{Ma_t’} of entropy will be possible. It follows that information processing and life are always possible; it’s just that different forms thereof happen at different times. Why, then, do we regard thermodynamic entropy as a particularly natural measure of entropy? Simply because we happen to live in a time during which thermodynamic entropy is low, so the life that works in our time, including us, is based on the thermodynamic measure of entropy. Some minor adjustments might have to be made. 
For instance, it may be the case that a useful partition of Mi into macrostates has to meet certain criteria, e.g. that the macrostates have some measure of neighborhood and closeness to each other such that the system can pass directly from one macrostate only to the same macrostate or a neighboring one. However, won’t there still be many more measures of entropy equally as natural as thermodynamic entropy? Also, once complex structures have been established, these structures will depend on the entropy measure which gave rise to them, even if the current optimal entropy measure is a little different. Together, these adjustments would lead to the following picture: During each time interval [t1, t2], there is a natural measure of entropy s1 with respect to which the system’s entropy is low at t1. During [t1, t2] – at least during its early part – life and info-processing based on s1 are therefore possible. During the next interval [t2, t3], s1 is very high, but another shape of entropy s2 is very low at t2. Therefore, during [t2, t3] (at least in the beginning), info-processing and life based on s1 are no longer possible, but info-processing and life based on s2 work just fine. During each time interval, the intelligent life that exists then regards as natural the entropy measure which is low in that interval. For example, at a time during which thermodynamic entropy is low, intelligent life forms (including humans) regard thermodynamic entropy as THE entropy, and at a time during which rune entropy is low, intelligent life (likely very different from humans) regards rune entropy as THE entropy. Therefore my question: Doesn’t all that mean that entropy is low, and that info-processing and life in general are possible, for a much greater fraction of time than previously thought?