
Are purely logical governments best?  





Posted

We are ruled by politics, which, for all intents and purposes, is the art of compromise. Sadly, this includes compromise with illogical people. Therefore, politics can't be purely logical until everyone doing the politicking agrees that all arguments for a position should be free of logical fallacies. Thus politics remains a haphazard, illogical decision making process.

 

It is my belief that a logical government would govern best, and that law should represent a consistent formal logic system with prespecified and universally agreed upon axioms (which can change over time) from which every law can be derived through logical argumentation. I'm not saying this is practical. I'm saying this is best.

 

If axioms change in such a way as to invalidate the logic from which particular laws are derived, then those laws should automatically be rendered invalid.
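To make that concrete, here's a toy sketch of the mechanism I have in mind (the axiom and law names are invented, and a real system would need an actual theorem prover rather than a set-membership check): each law records the axioms its derivation depends on, and when the axiom set changes, any law whose premises are gone is automatically rendered invalid.

# Toy sketch only -- hypothetical axiom and law names, not a real legal ontology.
axioms = {"A1", "A2"}   # e.g. A1 = "people own their labour", A2 = "contracts must be honoured"

laws = [
    ("L1: wage agreements are enforceable", {"A1", "A2"}),   # derived from A1 and A2
    ("L2: taking someone's wages is theft", {"A1"}),         # derived from A1 alone
]

def valid_laws(axioms, laws):
    # a law stands only while every axiom its derivation rests on is still held
    return [name for name, premises in laws if premises <= axioms]

print(valid_laws(axioms, laws))   # both laws stand
axioms.discard("A2")              # the axiom set changes
print(valid_laws(axioms, laws))   # L1 is automatically invalid; L2 survives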

 

I think this is what all governmental systems throughout history have attempted to become. In America, for example, the Constitution would represent our set of axioms, the fundamental set of rules which all other laws must fall within and not invalidate. This set of rules is changeable, but the process is difficult.

 

Agree? Disagree?

Posted

It seems too easy for the axioms to be inconsistent. Let's say you had just two, freedom = good and unnatural death = bad; there is already a clear contradiction.

Posted

The question is simplistic and, forgive me, puerile.

 

Define best: If you mean the most efficient, then dictatorship is a strong contender. If you mean compassionate and caring, well, we all have our illogical beliefs on that.

 

There is no one definable logic in human nature (example: child vs. chimp).

 

I suspect Bascule's version of logical is whatever agrees with him, I hope therefore he would regard me as illogical.

 

If axioms can change, they are therefore false, and therefore illogical.

 

All present governments use axioms, most obviously the one that says "do what we must to retain power."

 

My basic opinion is that there never has been one iota of logic in government; the beast is incapable of it. The two mutually destruct: matter/anti-matter.

Posted

I agree that logical government would be the best, but I think that, in the same regard as Communism, it would function well only in theory. Every system can work beautifully, but I believe that humans just mess it up.

 

I mean can you even really say we live in a proper democracy?

 

Maybe 50 years from now we will have the technological power to do so. (25 years to develop it, 25 years for people to stop making movies about killer robots and genocidal AI.)

Posted

I like the idea, but I wonder if you might run into problems where correcting a bad side effect of an axiom would require too big a deviation to live with, especially if the problem is close to the foundation of the logic behind it.

 

But I'm not sure I really understand it anyway.

Posted
law should represent a consistent formal logic system with prespecified and universally agreed upon axioms

 

If axioms change in such a way as to invalidate the logic from which particular laws are derived, then those laws should automatically be rendered invalid.

 

I think this is what all governmental systems throughout history have attempted to become. In America, for example, the Constitution would represent our set of axioms, the fundamental set of rules which all other laws must fall within and not invalidate.

 

Agree? Disagree?

 

I tend to generally agree, but I have a real problem with the other major statement of basic American axioms, the Declaration of Independence. The document sets forth two major axioms:

1. all men are created equal

and

2. they are endowed by their Creator with certain unalienable Rights

 

My problems with these axioms are:

1. how can all men be created equal when no two of them are alike?

and

2. does existence really come with built in 'entitlements'?

 

Would you tend to agree with me that these two basic axioms are at the very least on very shaky logical ground?

aguy2

Posted
The question is simplistic and, forgive me, puerile.

 

If by simplistic you mean overly idealistic and impractical, I agree. However, I already conceded that, and that isn't what this topic is about.

 

Puerile? How so?

 

Define best: If you mean the most efficient

 

That's certainly not what I mean. I mean: possesses the best decision-making methodology given the available information.

 

then dictatorship is a strong contender.

 

Wandering into straw-man territory here. I'll assume it's from a legitimate misunderstanding. Although if you call Plato's Philosopher King a dictator, then so be it: yes, I'm describing a rule-by-logic dictatorship.

 

If you mean compassionate and caring, well, we all have our illogical beliefs on that.

 

There are also logical systems for compassion and caring; Utilitarianism is one.

 

There is no one definable logic in human nature (example: child vs. chimp).

 

I think the phraseology you're looking for is "There is no set of universally agreed upon governmental axioms."

 

I suspect Bascule's version of logical is whatever agrees with him, I hope therefore he would regard me as illogical.

 

From the above, I believe your understanding of the fundamental concepts behind logic is flawed.

 

If axioms can change, they are therefore false, and therefore illogical.

 

And now you've certainly demonstrated that you don't comprehend logic. The validity of axioms says nothing about the validity of logic systems constructed from them. Changing the axioms can render a logic system inconsistent, but that doesn't follow automatically from the change itself; it depends on how the system is constructed. It's entirely possible to change the axioms and still retain a consistent logical system from which a new set of conclusions follows. (Swap Euclid's parallel postulate for an alternative and you don't get nonsense; you get a consistent non-Euclidean geometry with different theorems.)

Posted

Am I right in thinking that the 'changing axioms' is an attempt to prevent the system from being incapable of adapting to changing situations and philosophies?

 

Also, how would such a system handle situations in which it encounters paradoxes whilst trying to solve them logically?

Posted
Am I right in thinking that the 'changing axioms' is an attempt to prevent the system from being incapable of adapting to changing situations and philosophies?

 

Yes

 

Also, how would such a system handle situations in which it encounters paradoxes whilst trying to solve them logically?

 

The system would have to dynamically adapt and restructure to accommodate changing axioms. Paradoxes would have to be resolved either by adapting arguments to ensure consistency, or by adding or changing axioms.

Posted

What of things that cannot be solved logically?

 

For example, what if the government were to decide that a referendum was necessary on an issue that didn't have a yes/no answer, BUT it has been proven logically impossible to design a voting system that accurately works out the most representative view of a group of people when more than two options are presented (Arrow's impossibility theorem)... what happens then, in a purely logical government?
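To make the voting example concrete, here's a rough illustration (the ballots are made up, and this is only the Condorcet cycle that motivates Arrow's result, not the theorem itself): with three options, majority voting can fail to produce any consistent "most representative" ranking at all.

# Rough illustration with hypothetical ballots: three voters ranking three options.
from itertools import combinations

ballots = [
    ["A", "B", "C"],   # voter 1 prefers A > B > C
    ["B", "C", "A"],   # voter 2 prefers B > C > A
    ["C", "A", "B"],   # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    # count the voters who rank x ahead of y
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, C over A, B over C -- a cycle, so there is no logically
# consistent "group preference" for a purely logical government to act on.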

 

Or, with law, one obviously has to frequently weigh someone's right/need/desire/whatever to do something against someone else's right/need/desire/whatever for them not to do it: it's possible, I think, that if this is approached logically, one could encounter the libertarian paradox, thus preventing its suitable (depending on how you define suitable) resolution by pure logic.

 

How would those situations be handled, logically?

Posted
What of things that cannot be solved logically?

 

For example, what if the government were to decide that a referendum was necessary on an issue that didn't have a yes/no answer, BUT it has been proven logically impossible to design a voting system that accurately works out the most representative view of a group of people when more than two options are presented (Arrow's impossibility theorem)... what happens then, in a purely logical government?

 

All right, time to whip off the thinly veiled disguise this thread has been wearing.

 

I hinted at Plato's Philosopher King concept. We've adopted decentralized models of government because we've discovered central regulation by a single person simply doesn't scale. The issues involved become too complex for any one person to possibly comprehend.

 

I still see the Philosopher King as embodying the ultimate form of leadership. Unfortunately, no human could be the sort of entity necessary to author and revise a purely logical system for regulating any sort of governed body.

 

Or, with law, one obviously has to frequently weigh someone's right/need/desire/whatever to do something against someone else's right/need/desire/whatever for them not to do it: it's possible, I think, that if this is approached logically, one could encounter the libertarian paradox, thus preventing its suitable (depending on how you define suitable) resolution by pure logic.

 

How would those situations be handled, logically?

 

A method for organizing and semantically annotating (into a unified ontological structure) the entire system of human knowledge (at least digitized human knowledge) would need to be assembled, as well as a consciousness capable of acting on that structure as a whole.

 

Such an intelligence (if benevolent) is the only sort of being who could possibly fulfill the role of the ultimate Philosopher King...

 

So, yeah, to sum it up I'm talking about friendly AI here...
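As a crude illustration of what I mean by semantically annotating knowledge into an ontological structure (every entity and relation name below is invented for the example), think of facts stored as subject-predicate-object triples that a reasoner can query and traverse:

# Crude toy sketch -- all names are invented; a real ontology would be vastly larger.
triples = {
    ("minimum_wage_law", "is_a", "law"),
    ("minimum_wage_law", "derived_from", "axiom_fair_compensation"),
    ("axiom_fair_compensation", "is_a", "axiom"),
}

def query(subject=None, predicate=None, obj=None):
    # return every triple matching whichever fields were given
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="minimum_wage_law", predicate="derived_from"))
# -> [('minimum_wage_law', 'derived_from', 'axiom_fair_compensation')]

Scaled up across all digitized human knowledge, that's the kind of structure such an intelligence would have to reason over.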

Posted
All right, time to whip off the thinly veiled disguise this thread has been wearing.

 

I hinted at Plato's Philosopher King concept. We've adopted decentralized models of government because we've discovered central regulation by a single person simply doesn't scale. The issues involved become too complex for any one person to possibly comprehend.


 

I don't think it's so much about "one person" being overwhelmed, but that one person has a limited capacity to experience different facets of life.

 

It's the same reason I take all my parents' advice with a grain of salt: I live my life 24/7 and they just hear about it on the phone, so while some of their advice applies, I am still in the best position to make the final call.

 

I still see the Philosopher King as embodying the ultimate form of leadership. Unfortunately, no human could be the sort of entity necessary to author and revise a purely logical system for regulating any sort of governed body.

 

 

 

A method for organizing and semantically annotating (into a unified ontological structure) the entire system of human knowledge (at least digitized human knowledge) would need to be assembled, as well as a consciousness capable of acting on that structure as a whole.

 

Such an intelligence (if benevolent) is the only sort of being who could possibly fulfill the role as the ultimate Philosopher King...

 

So, yeah, to sum it up I'm talking about friendly AI here...

 

Along the same line of reasoning, I think AI could be a great advisor and offer us solutions we would not otherwise find.

 

In the end, I will always value the right to screw my life right up and die penniless in a ditch. Then again, there is an outside chance that my work will make me millions, but for every big net entrepreneur winner there are 99,999 who don't.

Not very logical odds, but it seems like the only thing worth doing with my time these days.

Posted

I appear to have rattled Bascule's cage, for which I do not apologise.

 

My basic premise is that pure logic and politics are incompatible. I note that others are asking, in a less combative way, similar questions.

 

Here, to illustrate a point, is a cut-down Chinese proverb/story to show that logic does not always provide the answer:

 

A woman used two buckets to fetch water from the well. One was cracked, and lost half its load on the way home. Instead of repairing or replacing the cracked bucket, she used it for many years. One day the cracked bucket found a voice and spoke to the woman: "I am grateful that you have not discarded me as useless; it would have been the logical thing to do, but why?"

The woman replied: "Have you noticed the flowers growing along one side of the path, the side on which I carry you? There I sowed seeds, which flourished through your lost water. They brightened my life. Your weakness has given me strength. There are actions, benefits and consequences beyond logic." "Thank you."

 

Preserve me from being ruled by people whose religion is pure logic. As any craftsman will tell you, choose the right tool for the job. Logic is not the right tool for the job of politics.

Posted

 

A woman used two buckets to fetch water from the well. One was cracked, and lost half its load on the way home. Instead of repairing or replacing the cracked bucket, she used it for many years. One day the cracked bucket found a voice and spoke to the woman: "I am grateful that you have not discarded me as useless; it would have been the logical thing to do, but why?"

The woman replied: "Have you noticed the flowers growing along one side of the path, the side on which I carry you? There I sowed seeds, which flourished through your lost water. They brightened my life. Your weakness has given me strength. There are actions, benefits and consequences beyond logic." "Thank you."

 

 

Isn't that logical anyway?

 

It seems that the planted seeds were intentional. Let's say it's 10 meters from her home to the well, and 5 meters to the flowers at the midpoint. If she were to take two good buckets, fill them with water and return home, then go back and get water for the plants, then return home, it would take 40 meters total: 20 there and back, then 10 from home back to the well, 5 to the plants, and 5 home. If she were to water the plants first it would take 30 meters: 10 to arrive at the well, 5 to water the plants, 5 back to the well, then 10 back home. This way she still has two buckets of water and the flowers, and it would only cost 10 more meters.
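A quick sanity check of that arithmetic, using the distances I just assumed:

# Sanity check of the route lengths, assuming home-to-well = 10 m and the
# flowers at the midpoint (5 m from the well and 5 m from home).
home_well, well_flowers, flowers_home = 10, 5, 5

# Route 1: carry both buckets straight home, then make a second trip for the flowers.
route1 = home_well + home_well + home_well + well_flowers + flowers_home   # 40 m
# Route 2: water the flowers first, walk back to the well to refill, then go home.
route2 = home_well + well_flowers + well_flowers + home_well               # 30 m

print(route1, route2)   # 40 30 -- watering the flowers first saves 10 m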

 

I guess it depends on whether the second bucket of water is worth it, but why would she bring two originally if it wasn't? You could even sell the water and flowers and make a profit. :D

Posted
We are ruled by politics, which, for all intents and purposes, is the art of compromise. Sadly, this includes compromise with illogical people. Therefore, politics can't be purely logical until everyone doing the politicking agrees that all arguments for a position should be free of logical fallacies. Thus politics remains a haphazard, illogical decision making process.

 

It is my belief that a logical government would govern best, and that law should represent a consistent formal logic system with prespecified and universally agreed upon axioms (which can change over time) from which every law can be derived through logical argumentation. I'm not saying this is practical. I'm saying this is best.

 

If axioms change in such a way as to invalidate the logic from which particular laws are derived, then those laws should automatically be rendered invalid.

 

I think this is what all governmental systems throughout history have attempted to become. In America, for example, the Constitution would represent our set of axioms, the fundamental set of rules which all other laws must fall within and not invalidate. This set of rules is changeable, but the process is difficult.

 

Agree? Disagree?

 

The Courts have agreed that the Constitution is the supreme law of our country and that a law which violates the Constitution is invalid. However, I'm not sure if the Constitution represents our set of axioms. For many people, axioms are set by God, not man. People give varying degrees of thought to the question of what, stripped down, are the unshakable axioms which should guide their lives and our laws. Mine are in flux even in some of the debates we are having. I'm not sure what I want to be when I finally do grow up.

Posted
All right, time to whip off the thinly veiled disguise this thread has been wearing.

 

I hinted at Plato's Philosopher King concept. We've adopted decentralized models of government because we've discovered central regulation by a single person simply doesn't scale. The issues involved become too complex for any one person to possibly comprehend.

 

I also am not sure this just comes down to the complexity of scale. I think it has to do with an axiom in this country that there is inherent worth to the individual. We grant that individual the right to be left alone with his own decisions in certain parts of his life and give him the power to join with the collective to choose leaders who will make decisions about areas that are fit for collective decision. Even so, we do not give the leaders the right to intrude in certain spheres of the individual.

 

I still see the Philosopher King as embodying the ultimate form of leadership. Unfortunately, no human could be the sort of entity necessary to author and revise a purely logical system for regulating any sort of governed body.

 

A method for organizing and semantically annotating (into a unified ontological structure) the entire system of human knowledge (at least digitized human knowledge) would need to be assembled, as well as a consciousness capable of acting on that structure as a whole.

 

Such an intelligence (if benevolent) is the only sort of being who could possibly fulfill the role as the ultimate Philosopher King...

 

So, yeah, to sum it up I'm talking about friendly AI here...

 

I'm not sure why government after the singularity, if it comes, still couldn't be participatory. You would think that the logical ability of the members of the community would be enhanced and superstition would be minimized by such creatures.

Posted

I think there are too many wars for a purely logical form of government to work. In a hypothetical war between two countries, country A could lose 200,000,000 people whilst country B could lose 200,000,001. Using pure logic, country A won. Is there a logical construct for a Pyrrhic victory?

Posted

I think it would be more or less impossible to choose the initial axioms so that everyone would agree. Look at the differences of opinion when IMM tried to construct a set of axioms for morality.

 

And while I would probably accept the mistakes made by majority opinion, I would not accept the mistakes made by an AI's programming.

Posted

Here, to illustrate a point, is a cut-down Chinese proverb/story to show that logic does not always provide the answer:

 

A woman used two buckets to fetch water from the well. One was cracked, and lost half its load on the way home. Instead of repairing or replacing the cracked bucket, she used it for many years. One day the cracked bucket found a voice and spoke to the woman: "I am grateful that you have not discarded me as useless; it would have been the logical thing to do, but why?"

The woman replied: "Have you noticed the flowers growing along one side of the path, the side on which I carry you? There I sowed seeds, which flourished through your lost water. They brightened my life. Your weakness has given me strength. There are actions, benefits and consequences beyond logic." "Thank you."

This story doesn't really show how logic is flawed, or how the woman deciding to keep the bucket is "beyond logic." Logic is about making inferences from assumed premises, and I don't see how anything this woman did goes beyond that. She noted that a leaky bucket can be used to water seeds -- looks like an inference to me.

Posted

I think purely logical governments are futile. Why? Because people do not think logically. Usually, they arrive at an answer for psychological or emotional reasons, then justify it (often poorly). If a government makes logical decisions that the bulk of the people *really* disagree with on an emotional level, said emotional apes are going to do what emotional apes do best: yell, throw things, break things, and beat people to death.

 

While I agree with Sev about initial axioms, I don't think even that's the big issue: the big issue is that if at least some effort isn't made to please the populace, the populace will revolt.

 

Mokele

Posted
I think it would be more or less impossible to choose the initial axioms so that everyone would agree. Look at the differences of opinion when IMM tried to construct a set of axioms for morality.

 

And while I would probably accept the mistakes made by majority opinion, I would not accept the mistakes made by an AI's programming.

 

Although I tend to think that government by and through the auspices of AI will prove to be the way to go, I also think all that should be needed would be a 'benign neglect' attitude toward increasingly efficient, dependable, and equitable computerizations of government functions for at least a series of de facto AI government systems to evolve.

 

I would think that if these systems were all allowed to evolve naturally, the problems you are anticipating concerning 'forgiving' or tolerating inevitable mistakes would be largely mitigated.

aguy2

Posted

I'm hearing a lot of opinions being expressed here which... don't logically follow from a rule-by-logic system.

 

Here, to illustrate a point, is a cut-down Chinese proverb/story to show that logic does not always provide the answer

 

She's provided reasoning for her actions free of logical fallacies, and stated axioms from which she draws her conclusion. What she's expressed isn't a consistent formal logic system, but it certainly isn't illogical.

 

All that aside, you seem to argue that logic is inherently cruel, heartless, and evil. None of those concepts are in any way tied to logic. When we think of someone as being "cold and calculating", that's because the axioms they are operating upon don't include compassion.

 

Utilitarianism embodies a logical moral system with compassion as one of its fundamental axioms. (And I can only hope that our hypothetical Friendly AI ruler is a Utilitarian, or at least espouses a Utilitarian-derived moral philosophy)

 

While I agree with Sev about initial axioms, I don't think even that's the big issue: the big issue is that if at least some effort isn't made to please the populace, the populace will revolt.

 

My view is that in an age of information ubiquity, where all human knowledge can finally be sorted into consistent, logically constructed ontological systems, anyone attempting to argue contrary to the super Philosopher King AI system wouldn't have a leg to stand on.

 

I guess, in the end, compromises would have to be made in the form of axioms, and some participatory involvement would be required in order for Philosopher King AI to construct the axiomatic structure and choose what compromises are necessary to keep the entire system consistent.

 

And while I would probably accept the mistakes made by majority opinion, I would not accept the mistakes made by an AI's programming.

 

You seem to be confusing artificial intelligence with expert systems. While the logical system constructed by AI could be implemented in the form of an expert system, the degree of separation between the programmer and the AI will be as immense as the separation between groups of neurons and the mind.

 

If AI made a mistake, it would be no different than a human making a mistake. If AI remotely resembles human consciousness (I personally think AI will arise from a functional model of the neocortical column, once a project like BlueBrain manages to create one), then it will be based on immense collections of self-similar structures (the neocortical column is repeated millions of times in the brain, accounting for roughly 20 billion neurons). Failure to properly implement the self-similar structures would make the entire system fail completely (it couldn't construct a model of reality from input patterns alone). Logical errors on the part of AI would be due to a failure in reasoning. As the Internet evolves more and more into a semantically annotated ontological structure, the role of AI will be more to traverse (and collaboratively develop) this structure. Thus the logic errors will more than likely be the result of (repeated, distributed) errors in semantic annotations created by humans (or at least, resulting from human mistakes) rather than of a failure of the AI itself.

 

All that aside, once we have consciousness in a computer and can profile and eliminate bottlenecks as well as find ways to scale up the amount of data the system can keep in working memory, it will surely (if AI is fundamentally possible, of course) outperform the ability of any human programmer, and, having access to a program which describes its own inner workings, will have the capacity to detect logic errors in its own design (as well as continue work on eliminating bottlenecks and scaling its own capacity). This concept is called seed AI; a self-improving AI along these lines also features in William Gibson's novel Neuromancer.

 

In the end, I will always value the right to screw my life right up and die penniless in a ditch.

 

I find libertarians (The Cato Institute sort, at least) to be vocal proponents of logic. That doesn't make them good people, but there's no reason to assume that a Philosopher King would take an authoritarian position. Not saying that you're doing that here, necessarily, but I wanted to get that out there.

Posted

Hey gcol, do you know where I could find the full version of that story? I'm interested in seeing the rest of it.

 

Quote:

Originally Posted by padren

In the end, I will always value the right to screw my life right up and die penniless in a ditch.

 

 

I find libertarians (The Cato Institute sort, at least) to be vocal proponents of logic. That doesn't make them good people, but there's no reason to assume that a Philosopher King would take an authoritarian position. Not saying that you're doing that here, necessarily, but I wanted to get that out there.


 

 

I think that we have seen enough governments that have tried to put an end to hardship to know that it leads to a less productive, less motivated and less happy country.

 

Look at the USSR and the USA during the Cold War.

 

The USSR's GNP essentially stayed the same during the course of the Cold War, while the USA's increased dramatically, eventually allowing President Reagan to invest far more money in the military than the Soviet Union could and thus helping bring the Cold War to an end.

 

Many writers, when talking about the Soviet economy, have mentioned the lack of motivation that faced the Soviet workforce.

 

 

Essentially, the Cold War proved that a capitalist society where someone is allowed to sink or swim results in a far more productive workforce.

 

And personally, I like knowing that my future is in my own hands and mine alone.

Posted

Is that supposed to be a reply to my comment? It seems like you quoted me and then went off on a totally unrelated tangent.

Posted

C.P Luke:

 

The Chinese bucket/water/flowers story came to my wife via an email. I cannot find the original, but it was just padded out and literary, and must have been altered much in translation.

 

It may not have perfectly illustrated my point, but it made me wonder about the difference between axioms, aphorisms, received wisdom, proverbs etc.

 

I found that there was a lot of crossover in definition between axioms and aphorisms. An axiom seems to be a statement that can neither be proved nor disproved, so what its use is in non-partisan political logic escapes me.

 

A person can create an aphorism, which becomes a personal axiom, which then becomes a philosophical maxim. Who then decides which axioms should be used, and which altered over time? I suggest that politically based axioms are useless fodder for a logical system. Of course (?) I don't include mathematical axioms in this unhelpful group.

 

Also, just how many temporarily logical axioms are required to make a computerised system that caters completely for any given political situation?
