Forum:The most righteous people and government to support


Supposing someone wanted to go to live under the most benevolently righteous regime possible [that exists in the world today]. How would we judge this and where would that be? Phoney (talk) 21:19, 21 January 2011 (UTC)


Contents

Thread title | Replies | Last modified
The Monarchy of Me | 2 | 17:28, 17 April 2011
In the world today | 1 | 18:22, 29 January 2011
Relativism | 12 | 08:48, 24 January 2011
[Iain Banks' Culture] | 7 | 06:06, 23 January 2011

The Monarchy of Me

Clearly I would be the most benevolent and righteous ruler possible. You disagree? I'm sure my re-education camps will sort that out. A bullet to the head will remove any lingering doubt about how blessed you would be under my most gracious rule.

AMassiveGay (talk) 04:18, 22 January 2011

I second AMassiveGay's idea, though I would replace him with me in the position of righteous ruler.

P.S. "A Monarchy of Me" sounds like yet another lame way of expressing the web 2.0 movement (e.g. "The Time Person of the Year is... You!").

Radioactive afikomen (Please ignore all my awful pre-2014 comments.) 00:40, 24 January 2011

Of course. Because if you were the ruler you could set up a democracy or whatever you thought was beneficial/righteous. The question is, what place is most like this already?

Phoney (talk) 15:35, 29 January 2011
 
 

In the world today

I hear Andorra is nice and quiet.

Blue (is useful) 01:23, 23 January 2011

<Exasperated sigh> I actually made this so UncleHo could get with the positive vibration...

...instead of smashing it up in Zimbabwe.

Now that is fucking beautiful. Not sure if it is very intelligent.

Phoney (talk) 18:02, 29 January 2011
 

Relativism

Nice idea and a noble aim in principle, but a flawed question to be asking.

Your main problem lies - and this is the main criticism of living by the Golden Rule - with the fact that what someone regards as "most benevolently righteous" will vary from person to person and place to place. Ergo you can conclude that such a thing would be impossible. Even if you satisfied a majority, you'd still dissatisfy a minority - or even actively oppress a minority! There is no way to objectively say what would form a righteous and benevolent government, because the criteria you use to assess it are ultimately subjective. For instance, would you prefer a society where one person has a happiness of 10 and another has a happiness of 1 (arbitrary scales for illustration purposes), or a society where two people both have a happiness of 3? There are arguments in favour of both but no true test showing which is actually better. Thus one person may prefer the former and another the latter. Someone is going to be disappointed and not view your regime as benevolent or righteous.
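To put toy numbers on this, here's a minimal Python sketch; the happiness figures are the ones above, and the two aggregation rules are just two of many possible subjective criteria:

```python
# Toy comparison of two aggregation rules over the societies described
# above. The happiness numbers come from the paragraph; the two rules
# (total vs. worst-off) are only two of many possible subjective criteria.

society_a = [10, 1]  # one very happy person, one miserable one
society_b = [3, 3]   # two moderately happy people

def total_welfare(society):
    # Utilitarian-style rule: maximise the sum of happiness.
    return sum(society)

def worst_off(society):
    # Rawlsian-style rule: maximise the happiness of the worst-off person.
    return min(society)

for rule in (total_welfare, worst_off):
    a, b = rule(society_a), rule(society_b)
    print(f"{rule.__name__}: A={a}, B={b} -> society {'A' if a > b else 'B'} wins")

# total_welfare: A=11, B=6 -> society A wins
# worst_off: A=1, B=3 -> society B wins
```

Both rules are internally consistent; they simply disagree, and nothing inside the arithmetic tells you which rule to prefer.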

Your only practical solution to this would be to split society into ever smaller groups, whereby everyone in a group has a similar view and is governed only by the laws they view as benevolent. Eventually this would reduce everyone to living in a society of one, probably simulated to their idea of utopia. However, with the lack of conflict or friction, the desire to strive and improve is removed. Such a place may not be the most desirable place to live, because it would remove this key aspect of human motivation. Indeed, given the relativity of the definitions and the varied opinions involved, a regime that is objectively the "most" benevolent and righteous might still not be regarded as the most benevolent and righteous by any given individual.

Scarlet A (pathetic) 01:23, 22 January 2011

But the question asked for the "most benevolently righteous regime possible", not the most benevolent regime period. The most benevolently righteous regime possible (MBRRP) would necessarily be a compromise between differing notions about its nature, I think.

Blue (is useful) 01:44, 22 January 2011

The "most benevolent regime period" is practically synonymous with "the most benevolent regime possible" - the key is in the "most" part, implying that we can only get so far in either case.

Regardless, you would still have to define not only benevolence and righteousness by subjective criteria but also how you're going to measure them. Even saying "the regime that most people think is most benevolent" still isn't objectively the best criterion to judge by. Certainly the minority who didn't agree about the level of benevolence would complain about using a majority vote as a measure. Take the UK election system. The Liberal Democrats want to change to a proportional system, because if you define "best" like this, it's the best. The Tories want to keep it as is, because if you define "best" like that, it's the best. Even merely saying "compromise" can be tricky, because you run the risk of diluting the ideas down so far that they can't be properly executed and are ineffective. Such a compromise between all opinions would lead to a system that pleased no one, because everything that an individual wanted would be stopped short by another person's wants and needs. We see such examples all the time when proposed laws are compromised on and weakened, and we end up with final drafts that no one particularly wants. The law doesn't go far enough for its supporters and goes too far for its opponents - despite shining public faces of "well, we made a good compromise so it's all great", everyone, deep down, is actually unhappy about it.
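A hypothetical sketch of that point, with invented constituency numbers, shows the same ballots crowning a different "best" system depending on which counting rule you pick:

```python
# Hypothetical election: three constituencies, three parties.
# All vote counts are invented purely for illustration.

constituencies = [
    {"Tory": 40, "Labour": 35, "LibDem": 25},
    {"Tory": 40, "Labour": 35, "LibDem": 25},
    {"Tory": 10, "Labour": 30, "LibDem": 60},
]

# First-past-the-post: each constituency's plurality winner takes its seat.
fptp_seats = {}
for c in constituencies:
    winner = max(c, key=c.get)
    fptp_seats[winner] = fptp_seats.get(winner, 0) + 1

# A proportional measure: just add up the national vote totals.
totals = {}
for c in constituencies:
    for party, votes in c.items():
        totals[party] = totals.get(party, 0) + votes

print("FPTP seats:", fptp_seats)    # {'Tory': 2, 'LibDem': 1}
print("National totals:", totals)   # {'Tory': 90, 'Labour': 100, 'LibDem': 110}
```

Under first-past-the-post the Tories come out on top; counted by national totals, the Liberal Democrats do, and Labour beats the Tories despite winning no seats at all. Neither answer is objectively the right one.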

If you imagine "most" as the maximum on a curve, then it's easy to find where you want to be for any particular system, regime or otherwise. However, because the definitions are chosen by subjective criteria, and these change between people, there isn't just one curve but several, and their maxima are found in different locations. A compromise situation would end up with a curve with no clear maximum, just a nearly flat line, probably very low down.
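A quick numerical sketch of those curves (the shapes and peak positions are arbitrary assumptions, chosen only to illustrate the flattening):

```python
import numpy as np

# Two subjective "benevolence curves" over some policy axis x, peaked in
# different places. The shapes and peak positions are arbitrary assumptions.
x = np.linspace(-3, 3, 601)
curve_a = np.exp(-(x + 1.5) ** 2)   # one person's ideal sits near x = -1.5
curve_b = np.exp(-(x - 1.5) ** 2)   # another's sits near x = +1.5
compromise = (curve_a + curve_b) / 2

print(f"A's maximum:          {curve_a.max():.2f} at x = {x[curve_a.argmax()]:.1f}")
print(f"B's maximum:          {curve_b.max():.2f} at x = {x[curve_b.argmax()]:.1f}")
print(f"compromise's maximum: {compromise.max():.2f}")
# Each individual curve peaks at 1.00, but the averaged "compromise" curve
# never rises above about 0.50: pooling disagreeing preferences lowers the
# best achievable point for everyone.
```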

And that's without getting into specifics about defining a benevolent act. In one particular model, if you want to define "most benevolent" as just reducing harmful acts to an absolute minimum, then the "most benevolent regime" would, in fact, be no regime at all - as something that doesn't exist cannot be malevolent in any way.

Scarlet A (pathetic) 13:26, 22 January 2011

We let the super-intelligent "Minds" decide how to execute policy, with the possible exception of going to war. As to what policy should be, yes, there will always be disagreement, but ideally the mantra would be "if you think you'd be better off in a different system, don't interfere with those who don't, or go elsewhere."

Both of these would only be optimal or even possible in a society that lacked serious cultural/moral heterogeneity and had done away with money, poverty, illness... I think perhaps you're thinking more about the here and now, when I'm thinking more abstractly about the (distant) future.

Blue (is useful) 19:29, 22 January 2011

(I think my point is far more abstract than you seem to think it is!)

Which God decided that "intelligence" (regardless of how high) is a required criterion for deciding benevolence? Would they have a better idea about how someone would want to be treated than that person themselves, for instance? Again, it's all based on ultimately arbitrary criteria.

Scarlet A (pathetic) 20:00, 22 January 2011

I would think it would be more like this: each individual who opts to live in this "benevolent regime" would essentially decide for themselves how to live their own life, under the guiding principle that "my right to swing my fist ends where the other man's face begins." Ultimately you can't make everyone happy, so those who simply cannot abide this regime would be forced to leave it.

These criteria are arbitrary. It's arbitrarily an anarchist-communist utopia based on hedonism. However, I believe that under that arbitrary system the most benevolently righteous regime would be able to function, i.e., that is the best form of society.

I can't justify this point of view scientifically or logically, because doing so would require too many a priori assumptions. Why is intelligence necessary to make good policy? Without it, I suppose you could get lucky, but that's not a government I would put much faith in.

Blue (is useful) 01:20, 23 January 2011
 

"Great intelligence" is not so much required to decide what benevolence is, but to solve the logistical problems of carrying it out.

Phoney (talk) 05:57, 23 January 2011
 
 
 

Oop, my bad. I meant the most benevolently righteous regime that actually exists in the world today. I shall update it. I always seem to screw things up this way.

Phoney (talk) 01:18, 23 January 2011

In a word: oh.

Blue (is useful) 01:22, 23 January 2011

Carry on. We can make new threads.

Phoney (talk) 05:46, 23 January 2011
 
 
 

"However, with the lack of conflict or friction, the desire to strive and improve is removed" They could ask others to give them such and/or ask the system to limit their powers, including modifying their memories that they have done so, so that they can conflict and strive in a number of virtual worlds all they like.

Sen (talk) 02:34, 22 January 2011

With "equal omnipotence" there is still politics. There is no longer any physical limitation, so physics becomes a realm of artwork and social games.

But there are also mental/psychological limitations. There may be limitless frontiers, given the difficulty of acquiring knowledge without inducing suffering. That knowledge is required to know what we would want to do and how to improve our minds and experience. This may be a never-ending process of discovery. No matter how good we become, we could always be better.

Unicow (talk) 08:48, 24 January 2011
 
 

[Iain Banks' Culture]

Ideally most things would be taken care of by super-intelligent benevolent AIs, kind of like Iain Banks' Culture. Blue (is useful) 21:30, 21 January 2011 (UTC)


Why should the AIs be any more benevolent than their programmers?

Phoney (talk) 21:42, 21 January 2011

They would be conscious, of course.

Blue (is useful) 22:10, 21 January 2011

I never know what that means if I'm speaking with materialists.

Is "consciousness" supposed to help them to be benevolent? If they feel pleasure or pain it seems it may rather give them needs and interests of their own. Wouldn't they be more concerned with their own pleasure rather than the humans, or would the programmers override this with "altruistic intentions"?

Phoney (talk) 00:02, 22 January 2011

Programmers are not necessary. There doesn't have to be a "kill switch" or a safety mode a la Star Trek.

We would put our trust in Minds. Of course, this would only work if we had reached a sufficiently advanced level of technology, otherwise the Minds would not have a framework to operate in.

Blue (is useful) 00:39, 22 January 2011

Surely the most benevolent state would be a state of being totally free except from hurting others (anyone disagreeing with that is free to be hurt, as per his morality). However, we can't do that, because we are enslaved, be it to scarcity, or gravity, or entropy, or death. Philosophically, I would say that the most "benevolent" regime would be for everyone to be a God with infinite power to reshape a reality of his own, but no effect on others. "No effect on others" going down to the level of being allowed to have populations to torture and animals to hunt or whatever, but all those actually being composed of P-zombies with no actual consciousness behind them. (Bonus philosophy: can you have beings responding 100% realistically as if they are conscious, yet not actually be conscious?)

This, however, requires universe-building technology, which is a tad out of our reach yet (I have been told that I am an optimist), so the low-tech approach (that's the low tech; I have a funny definition of low tech) would be something like a computronium Dyson sphere around the sun, in which peeps (uploaded, obviously) can similarly create their own heavens, or hells-but-heavens-for-their-creators, as long as they cannot affect someone else. There could also be transactions (with mutual willing consent, yadda yadda) where people allow multiple control over a virtual reality, or go for a chit-chat and world exchange.

The good news is that by this point, someone would have simplified a civilization's problems regarding where to park the cars, whether smoking in public places should be allowed, whether we should have socialized health care, and all that crap. Here's your computronium, you go fuck off in it, end of story. The bad news is that such a civilization would still be bound by a couple of forms of scarcity: A) computational resources and B) energy. The first has to do with the inhabitants themselves (so no zooming in on fractals in 100% detail), and the second with the environment outside the sphere and the heat death of the universe.

A is actually rather linked with B, because even if you had fewer resources, if you simply slowed down your calculations, those resources could still appear plentiful to you. (Bonus philosophy: can a 386 processor, with lots of memory, energy and time, simulate a conscious human, and is that human actually feeling conscious, even if most of his time is spent in storage?) It is also linked because, if everyone wants to keep all their memories of their billion virtual-world adventures, then all this memory has to be, err, memorized somewhere so you can remember it. So even with clever storage techniques (like storing similar sensory experiences from many immortals together, etc.), they would still require more structure (aka energy) to continue storing their memories.
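As a toy back-of-envelope sketch of those two scarcities (every constant below is an invented assumption, not a real estimate):

```python
# Back-of-envelope sketch of scarcities A and B. Every constant below is
# an invented assumption for illustration, not a real estimate.

# (A) Compute: subjective time scales with computation, so with fewer
# resources you can simply run slower and feel no difference from inside.
OPS_PER_SUBJECTIVE_SECOND = 1e18  # assumed cost of simulating 1 s of experience
COMPUTE_BUDGET_OPS = 1e16         # assumed available ops per wall-clock second
slowdown = OPS_PER_SUBJECTIVE_SECOND / COMPUTE_BUDGET_OPS
print(f"Each subjective second takes {slowdown:.0f} wall-clock seconds")

# (B) Memory: experiences accumulate without bound, so the structure
# (and hence energy) needed to store them keeps growing.
SENSORY_BITRATE_BPS = 1e7         # assumed ~10 Mbit/s of experience worth keeping
SECONDS_PER_YEAR = 3.15e7
YEARS = 1e9                       # a billion years of virtual adventures
bytes_total = SENSORY_BITRATE_BPS * SECONDS_PER_YEAR * YEARS / 8
print(f"Raw memories per immortal: {bytes_total:.1e} bytes")  # ~3.9e22 bytes
```

The slowdown trick buys subjective time indefinitely, but the memory total only ever grows, which is why storage, not speed, is the binding constraint here.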

At which point they would have to either re-engineer the universe's laws (simples), escape the universe (also simples), or either a) meet other similar civilizations and have a galactic and then trans-galactic clusterfuck fighting for energy until there is only one left, and then die, or b) meet other civilizations and not have a galactic and then trans-galactic clusterfuck fighting for energy, but rather live until the end of the universe together, mutually appreciating the gift of consciousness ^^, oh, and then die.

There is also the thought experiment of what would happen if a civilization found out that it can escape the universe, but the technique to do so requires so much energy that only a limited number of consciousnesses can do so, at which point you could have another form of scarcity: post-heat-death survival, or not. But you know what, considering that so far we still haven't been arsed to put a city-grade solar panel in orbit, I think we can postpone that debate for now.

Sen (talk) 02:26, 22 January 2011
 

Someone must initially create the AIs. Those are the programmers. Do you expect they will all agree on how these beings will operate? We already have a disagreement. How can I know they are "conscious" or sentient, having subjective states of mind and qualia? Are they supposed to keep alive forever all the humans who exist when everyone switches over to the matrix, or do they make new humans? Are we supposed to "upload" our minds into a machine?

Phoney (talk) 05:43, 23 January 2011