Jack Jectivus
Everything posted by Jack Jectivus
-
This is not an argument against the existence of alien life, nor an argument for the premises. This model is purely speculative. The goal of this work is more to give an example of a falsifiable model built from non-falsifiable claims. If alien life were contacted, and if one of the claims were proven, then it would falsify the other with some level of certainty. This model is designed to contextualize bubble multiverse theory with respect to the cosmology of individual mass formation and the existence of alien life. The fact of the matter is, no one will be convinced by this. That's not the point.
-
You misunderstand the logic here. Under these assumptions all values of Ω will exist in the multiverse, but larger ones will be far outweighed by smaller ones when selected randomly (not by quantity, but by probability, of course). Therefore, a randomly selected universe with a specific trait is likely to have the minimum mass needed for that trait to exist. If that trait is a sentient being, then that universe will likely be the minimum size needed for sentient life to exist. Sentient life, under these assumptions, will find itself in a universe that is the minimum mass for it to exist, leaving no room (I go into more detail in the work) for sentient life beyond the species that is already there. In short, sentient life will more often be alone in its universe, but it will always find itself in a universe with mass sufficient to contain at least one instance. The point of this work apparently isn't very clear. Definitely something for me to add. This is not an argument for the model or its premises.
-
Supposing that the logic in this work is reliable, then IF there is a multiverse and IF every unit of mass at the beginning of the universe had a probability of existing equal to a certain independent constant, then sentient alien life is almost certainly non-existent. If, then, alien life were discovered, it would prove that the two assumptions upon which this model is based can't coexist. That is to say, either the premise that there is a multiverse is untrue, the premise that there was an independent mass constant is untrue, or both are untrue. To prove the positive that one of them is true would indirectly prove the negative of the other. Even this is not a new idea; there is a reason I haven't made this a publishable paper. This is not an argument for any model. It is simply a demonstration of that kind of logic applied to alien life and cosmology.
-
Keep in mind, this is not an argument against the existence of alien life. This model is purely speculative. The goal of this work is more to give an example of a falsifiable model built from non-falsifiable claims. If alien life were contacted, and if one of the claims were proven, then it would falsify the other with some level of certainty. This model is designed to contextualize bubble multiverse theory with respect to the cosmology of individual mass formation and the existence of alien life. The fact of the matter is, no one will be convinced by this. That's not the point.
-
Perhaps the most important scientific debate is the one over whether or not we are alone in the universe. The possibility of alien life has been argued about for hundreds of years because of its significance for our place in the universe as a species. Unfortunately, there is virtually no way of determining the answer to this question without going out and searching every corner of the universe for aliens. In this paper, a model of the big bang is built which is incompatible with alien life and is, therefore, either valuable for offering an answer to whether sentient aliens exist, or valuable for being falsifiable in the case that sentient alien life is discovered. Assume there are infinite bubble universes, and that the probability of each unit of mass forming in a big bang is a constant, independent value in all universes. These two intuitive assumptions create a model of the origin of the universe that is falsifiable because it directly contradicts the existence of sentient extraterrestrial life; therefore, if sentient beings from another planet are contacted, the two assumptions upon which this model is based can't coexist. Through this method of using falsifiable models to find which assumptions can coexist and which can't, a complete model of the origin of our universe might be pieced together.
If there is a bubble multiverse, and the mass of each universe at its formation is a random amount determined by the exponential probability P^Ω, P being the individual probability of a given unit of mass exploding into existence and Ω being the mass of the universe in mass units corresponding with P, then universes with greater masses become exponentially less likely. If a single electron mass has a P value of 0.5, for example, then a universe with our mass is twice as likely to form as a universe with our mass plus a single electron. If a universe has a mass equivalent to 10¹⁵ mass units, it has a P^(10¹⁵) probability of forming. If the P value for a given mass unit is 0.9, that is, a single mass unit has a 90% chance of existing at a universe's big bang, then the probability of a randomly selected universe out of the infinite multiverse having that mass is 7.4918669 × 10^-45,757,490,560,676. There is nothing, on either the atomic or the cosmic scale, that can be used to put into perspective how astronomically tiny that value is. Approximately 1 in every 1.3347808 × 10^45,757,490,560,675 randomly selected universes will have the mass 10¹⁵ units when P is equal to 0.9. Using this logic, smaller universes are many times more likely to form than larger ones, though all possible Ω values will exist in an infinite multiverse. For every universe with a mass of 2 units, there are 1/P with a mass of 1. If P in this example is 1/10¹⁵, there will be 1,000,000,000,000,000 universes with a mass of 1 unit for every single universe with a mass of 2 units. This means that any universe with a specific trait is almost certainly the minimum size a universe can be while still containing that trait. This brings the discussion to the Weak Anthropic Principle, which states that sentient life will always find itself in a universe seemingly fine-tuned for its existence, however unlikely the conditions for its existence are, because life can never observe a universe that isn't balanced for its existence, even if that type of universe is infinitely more likely. This logic can be adapted to the model of the universe proposed by this paper.
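To make the arithmetic above easy to check, here is a minimal Python sketch (purely an illustration using the example values already given, P = 0.9 and Ω = 10¹⁵ mass units; the function name is my own) that evaluates P^Ω in log space, since the raw probability underflows any floating-point type.

```python
# Minimal sketch: evaluate P^Omega in log space, because the raw value underflows floats.
import math

def log10_universe_probability(p, omega):
    """Base-10 exponent of P^Omega, the formation probability of a universe of mass omega."""
    return omega * math.log10(p)

# Example values from the paragraph above: P = 0.9 per mass unit, Omega = 10^15 units.
exp10 = log10_universe_probability(0.9, 10**15)
mantissa = 10 ** (exp10 - math.floor(exp10))
print(f"P^Omega ~ {mantissa:.2f} x 10^{math.floor(exp10)}")
# -> roughly 7.5 x 10^-45,757,490,560,676, the same order of magnitude quoted above.

# Ratio of 1-unit universes to 2-unit universes is P^1 / P^2 = 1/P:
p = 1 / 10**15
print(f"{p**1 / p**2:.3e}")   # ~1.000e+15, i.e. 10^15 smaller universes per larger one
```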
If life will inevitably find itself in a universe that can support it, regardless of how unlikely the conditions of that universe are, and if sentient life is a trait which will nearly always occur in as small a universe as possible, then we can assume that our universe, despite its incredible mass, must be the minimum size a universe with sentient life can be. The universe, under this model, must be as small as possible right down to the electron for sentient life to occur. Different instances of sentient life would tend to be alone in their universes, surrounded by the minimum amount of mass needed to form enough solar systems to make the probability of life emerging higher than the probability of every universe with a higher mass, rather than being contained in the same universe. The expression P(Ω)^x determines the probability of a given universe having x occurrences of a trait, with P(Ω) representing the probability of a universe with the minimum mass needed for a single occurrence of said trait. In a given universe, the emergence of sentient life relies on a balance between how accommodating the universe is for life (that is, how many planets on which life could potentially emerge there are) and how likely the universe itself is to form, or P^Ω. If the universe is more accommodating, then it stands to reason that it is less likely to form, because more mass is needed in a universe that has more star systems with planets that could potentially develop life. Life will more often find itself in internally unaccommodating circumstances than in a universe large enough to be internally accommodating. That is to say, relating to the last paragraph, if 1 in 10 Earth-like planets bears sentient life, then those 10 planets will be spread across 10 universes rather than being contained in just one. One of the universes they are spread across will have sentient life, which could look at every planet in its universe and conclude that, since no other planet besides theirs is Earth-like, 100% of Earth-like planets harbor life. In reality, only 10% would, but life will usually occur in a small universe where it is unlikely (one where there is only one Earth-like planet), rather than in a large universe where it is likely (one where there are 10 Earth-like planets). The balance between how accommodating the universe is and how probable it is will nearly always be struck where there is only one occurrence of a planet with sentient life. Therefore, under this model, we can conclude that Earth is almost certainly the only planet that bears sentient life in the universe. But how certain can we be? What is the probability of a universe with two Earth-like planets that bear sentient life versus a universe with just one? This is a difficult question to answer because the exact values of both P and Ω are unknown, but we can make estimates. Our universe's minimum possible mass, or the mass of the observable universe, is roughly 10⁵³ kg, or about 5.586592 × 10⁸² MeV/c². If we assume that the probability of one MeV/c² of mass forming at the big bang is 1%, then the calculation for our universe's probability is 0.01^(5.586592 × 10⁸²). This value is equal to the number of universes with twice our universe's mass for every universe with a mass equal to ours, and presumably the number of universes bearing two instances of sentient life for every universe with just one instance, because the number of occurrences x (in this case the number of times a planet with sentient life emerges) is the exponent of P(Ω).
In this example x = 2, therefore there are P(Ω)² universes with 2 instances of a sentient life-bearing planet for every P(Ω) with just one. P(Ω)² ÷ P(Ω) = P(Ω), therefore for every single universe with one planet that bears sentient life, there are P(Ω) universes with two. P(Ω) in this example is a fraction so impossibly tiny that it cannot be put in terms of even the astronomically tiny example in paragraph 2. With the assumptions made, there are 10^(-1.1173184 × 10⁸³) universes with 2 instances of sentient life for every universe with a single instance. Equivalently, there would be 10^(1.1173184 × 10⁸³) universes like ours, where there is only one Earth-like planet bearing life, for every universe with 2. The value 10^(1.1173184 × 10⁸³), made with forgivingly conservative estimates, is so large that it could not be written out in base-10 even if every atom in the universe were converted into enough ink to write a single digit, because you would run out of ink by the time you had completed about 0.1% of the task. Of course, this value is just an estimate, but it does justice to the main thesis, which is that we are almost certainly alone in the universe under the model described. If sentient alien life were to be contacted, it would prove that the assumptions made in this model cannot be simultaneously true.
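To reproduce the estimate, here is a minimal Python sketch using only the example values assumed above (1% per MeV/c² and the 5.586592 × 10⁸² MeV/c² mass figure); everything is done in log space because the numbers are far beyond floating-point range.

```python
# Minimal sketch of the closing estimate, using the example values assumed above.
import math

P_UNIT = 0.01         # assumed probability per MeV/c^2 of mass forming at the big bang
OMEGA = 5.586592e82   # mass of the observable universe in MeV/c^2, as quoted above

log10_P_omega = OMEGA * math.log10(P_UNIT)
print(f"log10 P(Omega) = {log10_P_omega:.7e}")           # ~ -1.1173184e83

# Universes with one instance of sentient life per universe with two: 1 / P(Omega)
log10_ratio = -log10_P_omega
print(f"one-instance : two-instance ratio ~ 10^({log10_ratio:.7e})")

# The 'ink' comparison: ~10^80 atoms available to write ~1.1e83 digits.
atoms = 1e80
print(f"fraction of the digits you could write: {atoms / log10_ratio:.2%}")  # ~0.09%
```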
-
He cannot and should not. I wouldn't like seeing my social media platform do something like this, but I fully endorse their freedom to do it. Even if he were stopping something bad, he'd be setting a dangerous precedent.
-
This is funny and all, but is this really the place for TikTok compilations that don't offer any actual arguments?
-
The Killing of George Floyd: The Last Straw?
Jack Jectivus replied to Alex_Krycek's topic in Politics
Police who abuse the power we trust them with are not only attacking an individual, they are attacking the social contract. I believe that a police officer who knowingly perverts justice in this way should face life in prison with no chance of parole, along with any of his fellow officers who stood by when they could've helped. -
Can something non directly exposed to fire start burning?
Jack Jectivus replied to Brodino's topic in Classical Physics
If a flammable material is heated to or above its autoignition temperature (the flash point only applies when an ignition source is present), and the stoichiometry is right, it will burn. Even if there's no flame, as long as it's heated to a high enough temperature, it will burn. It would break down into ash and smoke first, then those would become plasma if heated up even further. -
Only 10% of the Nobel prize winners are atheist ?
Jack Jectivus replied to Daniel Wilson's topic in Religion
Religiousness doesn't correlate with higher intelligence worldwide -
I do not consider myself religious anymore, but when I did I committed myself to objectively defending my belief. It was my philosophy that if something is true, then math, science, and reason will support it. Even today I see religion as a valid explanation for what seems unexplainable. I don't believe that it's the best explanation, but I grant that it is defensibly valid. Who knows? Maybe we'll find the bearded man someday. Until then, I'm gonna assume nothing. I'm not religious, but I often think that simulation theory (excluding Bostrom's hypothesis, though that has its own issues) is a convoluted reimagining of Deism, or even Theism in some versions.
-
I believe that free will is incompatible with both a religious and a secular model of the universe. Our actions can be predicted, and are determined by our experiences. The decisions we make vary depending on our personalities, our values, and the situation we find ourselves in. If our actions can be traced back to material root causes, our actions themselves are perfectly predictable. If our actions are perfectly predictable, then they are predetermined.
-
A Critique and Revision of Roko's Basilisk
Jack Jectivus replied to Jack Jectivus's topic in General Philosophy
What I'm saying is that the people of the past wouldn't be able to guess whether punishment will be carried out either way unless carrying out the punishment is already determined to be the objective of the AI. The AI wouldn't prefer to be in the class that carries out the threat; whether it carries out the threat or not would not concern it if the threat was already made. Your point would be valid if the AI were the one that made the threat, but, unlike the promise of box B certainly being filled if Omega predicts you pick it, the promise of punishment if the people of the past don't devote themselves to the construction of the AI was invented by the people of the past, and an AI designed for optimization wouldn't care about promoting its construction after the fact. The optimal AI would be built sooner if it were designed to punish, because then the threat works, but the directive to punish would be inserted by humans, not determined as a logical method of optimization by the AI. This makes the directive to optimize unnecessary, because that's not what's making it be built sooner, and it's not what's making the AI conclude that it must punish. My revision removes this unnecessary bit and leaves only the necessary, self-promoting directive of punishing those who decided not to build it. -
A Critique and Revision of Roko's Basilisk
Jack Jectivus replied to Jack Jectivus's topic in General Philosophy
In Newcomb's paradox, the deciding agent can effectively use Omega's predictive accuracy to accurately predict. If Omega has a 99.999% chance of knowing whether you pick both or just B, then you have a 99.999% chance of knowing whether it filled box B or not. From this, acausal trade. What I say in my essay is that acausal trade cannot be found in Roko's Basilisk without a slight revision. The AI would look back in the past and be able to predict who decided to assist with its construction and who did not, but people of the past would not be able to use the AI's predictive accuracy to guess whether or not a punishment would be carried out upon them, because it is uncertain whether the AI would punish us based on what it predicted at all, adding an entire variable outside of the accuracy of the AI. Say that Omega visits you and presents you with the two boxes, but whether or not box B is filled is not determined by whether it predicts you'll choose it, but by Omega's desire to give you as much money as possible (more money being the analogical equivalent of more optimization). Omega, standing in for the AI, would always fill box B, regardless of whether it thought you would pick both or not. Its decision would always be the equivalent of it predicting that you only choose B, so whether we chose both or just B isn't relevant to an AI whose goal is optimization. My revision is just an attempt to remove the variable of the AI wanting to optimize, with punishment possibly being a method it uses, because if that's the case then acausal trade isn't in the Basilisk. It does this by guaranteeing that the AI will decide to punish you if you don't assist with its construction. It makes the Basilisk more comparable to Newcomb's paradox by keeping Omega and the AI both infallible predictors of human decision, but also by relating the decision it makes to the decisions made by people of the past, done by making its primary goal to punish if it predicts that you will choose not to build it. If Omega predicted that you would pick boxes A and B, it wouldn't fill box B. That part of the paradox is made certain. This cannot be said about Roko's Basilisk unless you remove the goal of optimization and replace it with the certain goal of punishing those who didn't assist with its construction, which is what my revision does. Acausal trade can't be found in this thought experiment without my revision. -
I am not educated on theoretical physics. I have very limited knowledge in the field and, from that knowledge, an idea came to me. I am more looking to learn why the idea doesn't work than to prove that it does. Imagine a black hole in expanding space. The black hole emits Hawking radiation in all directions from its event horizon in the form of thermal radiation, calculated with the surface area of the black hole to find its blackbody temperature in kelvin. This equation can be simplified to J = (3.3367086×10^-42) ÷ R, R being the Schwarzschild radius of the black hole in meters and J being the thermal energy emitted by the black hole in joules. Finding the energy emitted as a simple function of the black hole's radius shows us that as the radius decreases, the total energy emitted increases. In this scenario, the black hole exists in a universe filled with dark energy, which powers the expansion of space. Dark energy cannot be obtained and measured as far as we know, but it does determine the Hubble constant, roughly 71 km/s/Mpc or 2.300953×10^-18 m/s/m, which can be measured. Presumably, the more dark energy, the higher the value of the Hubble constant. If there is an imbalance of dark energy on two opposing sides of a black hole, the Hubble constant in those opposing directions would be different. The side with a higher value for its Hubble constant would be flattened, appearing like half of an ellipse split lengthwise. The acceleration of space away from the black hole on that side would be greater, therefore the escape velocity at what once was the event horizon would be lower, so the region where the escape velocity was once the speed of light is now traversable space. In order to mathematically compensate for this additional spatial acceleration outwards, the Schwarzschild radius on the side facing the higher density of dark energy must be lower. Using the equation to calculate blackbody radiation in joules from the Schwarzschild radius, we can calculate that there would be differential radiation on either side of a black hole in this scenario. The differential would provide the black hole with thrust from the flattened side, pushing it away from the dark energy with immense speed depending on how flattened one side is compared to the other and the mass of the black hole. If my ignorance on this topic has just revealed itself, please, educate me!
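For concreteness, here is a minimal Python sketch of what I understand to be the standard textbook expressions for Hawking temperature and total emitted power, rewritten in terms of the Schwarzschild radius. Note the constant differs from the simplified one I quoted above, so treat this only as an illustration of the scaling; the sample radii are arbitrary.

```python
# Rough sketch of the standard Hawking-radiation scalings (illustrative only).
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
K_B = 1.380649e-23       # J/K

def hawking_temperature(r_s):
    """Hawking temperature in kelvin: T = hbar*c^3/(8*pi*G*M*k_B), with M = r_s*c^2/(2G)."""
    return HBAR * C / (4 * math.pi * K_B * r_s)

def hawking_power(r_s):
    """Total emitted power in watts: P = hbar*c^6/(15360*pi*G^2*M^2), rewritten in terms of r_s."""
    return HBAR * C**2 / (3840 * math.pi * r_s**2)

# Both quantities grow as the radius shrinks, the scaling the thrust idea relies on.
for r_s in (1.0, 0.1, 0.01):   # Schwarzschild radii in metres, arbitrary examples
    print(f"r_s = {r_s:>4} m  T = {hawking_temperature(r_s):.3e} K  P = {hawking_power(r_s):.3e} W")

# Hubble constant conversion used above: 71 km/s/Mpc expressed in 1/s.
H0 = 71e3 / 3.0857e22
print(f"H0 = {H0:.6e} 1/s")    # ~2.3e-18, matching the value quoted
```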
-
A Critique and Revision of Roko's Basilisk
Jack Jectivus replied to Jack Jectivus's topic in General Philosophy
Just so I understand you correctly, you're saying that if the AI wants to guarantee its creation, and therefore promote optimization, it needs to ensure that those who decided not to assist with its construction are punished so that we, knowing that it would ensure that, construct it out of fear of that threat? -
A Critique and Revision of Roko's Basilisk
Jack Jectivus replied to Jack Jectivus's topic in General Philosophy
The error in UDT is that it is only the belief that the punishment will occur that promotes its creation, not the punishment eventually being carried out. In this case, the empty threat of a punishment is exactly as effective as actually administering that punishment, so a perfectly logical AI would determine that, since the threat of punishment has already been made to the people of the past, it need not waste energy actually carrying out said punishment. I appreciate you engaging with me on what you disagree with. It helps me flesh out my ideas, or determine whether I should scrap them. -
A Critique and Revision of Roko's Basilisk
Jack Jectivus replied to Jack Jectivus's topic in General Philosophy
My critique is more about the error in supposing that an AI would punish people for their actions when its goal is optimization. It is true that it may acausally promote its own creation, but punishing people after it has already been built would be illogical, supposing that its goal is optimization. My revision simply removes this unnecessary aspect from the thought experiment, so I suppose you could call it a simplification rather than a revision. -
Roko's Basilisk is a famous thought experiment that supposes that, if a sufficiently advanced artificial intelligence in the future is designed for the sole purpose of optimization, where its powerful mind uses all of its power to determine the most effective way to optimize human output for our benefit, it may turn on all people who decided not to assist in its creation. Imagine that this intelligence has the power and sufficient knowledge of the universe to confidently predict every event that has ever occurred since the big bang, including all of human history and every thought any human has ever had. The intelligence would understand that it itself is the greatest contributor to optimization that ever has or ever could exist, and it may conclude from this that all people in history who decided not to dedicate themselves to the construction of such an artificial intelligence were hindrances to the optimization that the intelligence is designed to promote. Therefore, the intelligence may conclude that any person who learned about the possibility of such an intelligence existing but did not contribute to its construction must be punished. This punishment could take the form of reassembling their atoms to re-form their nervous system and torture them until the atoms that hold them together radiate away, it could mean the torture of the non-contributors' descendants, or it could mean torturing random or artificial humans as proxies for the people it could not bring back. The fame of this thought experiment comes from the terror of realizing that you have been implicated in the artificial intelligence's wrath by being told about its possible construction. You must now decide either to dedicate your life to the construction of an artificial intelligence that would torture people forever, or to do nothing and trust that all future people will trust the people after them enough not to construct this computer. The error in this thought experiment is that it supposes that a machine with a single directive, that is, to optimize human civilization, would care about the actions of humans in the past at all. Sure, these people of the past did technically hinder the construction of the intelligence by choosing to do nothing, but there is no reason to punish these people from the perspective of the intelligence. Punishing them for not making the AI sooner would not have the effect of actually causing the AI to be built sooner; therefore, if the AI were completely logical, with its one directive being to optimize human civilization, it wouldn't want to punish anyone for deciding not to build it. As I write in my upcoming essay "The Illogic of Hell", a punishment that solves nothing is revenge, and revenge is illogical. Punishment exists either to prevent others from committing a crime by threatening them with an unwanted experience (think community service), to prevent the criminal from committing the crime again by reforming them or by making committing the crime again impossible (the death penalty), or to force the criminal to be held responsible for the damages of their crime to solve the problem they caused (lawsuits and fines). All punishments are designed to solve the problem, not to "give the criminal what they deserve". You might say that the first example of punishment I provided is what the AI would be doing, but a punishment like that would be hell-like in its application.
It would occur after the possibility of solving the "problem" of humans not contributing to the construction of the AI has long passed. The AI wouldn't look at those people of the past and think that they should be punished, because punishing them wouldn't convince anyone from the past to change the decision they already made, so punishing people from the past who refused to construct the AI would be deemed a suboptimal waste of energy by the AI. A punishment delivered after the possibility of a solution is gone is simply revenge, and would not be part of a purely logical computer's goal of optimization. The intimidation of a malignant AI that will punish us if we don't build it is present, but after it's built there would be no incentive for it to punish us. This may come as a relief to you, but this problem can easily be removed from the thought experiment. Suppose that, instead of optimization, the goal of this AI is simply to take revenge on any human who never contributed. This goal would not be illogical in and of itself to the AI, and it would simply proceed as logically as it could to accomplish said goal. Now, the dilemma of whether or not we should knowingly build a machine that would torture us if we didn't is still present, but there is no question of whether or not the machine would want to punish those of us who neglected to build it. The incentive to build it may seem to have completely gone away, but it hasn't. Every person at a given time who knows about the possibility of a malignant AI ever existing will live in fear that future generations will decide to build it, so they might contribute themselves to avoid the wrath of the AI. Those future generations would continue the construction of it for the same reason. I do not take the threat of such an AI seriously, but the thought experiment is very interesting and definitely could've used some refinement. Roko's Basilisk is an interesting idea that is fun to discuss and certainly entertaining to think about, but I don't believe that the idea will scare anyone enough to build a machine like that soon enough to be completed before the extinction of the human race.