this post was submitted on 19 Jun 2023
9 points (90.9% liked)

Anarchism


Are you an Anarchist? The answer might surprise you!

Rules:

  1. Be respectful
  2. Don't be a nazi
  3. Argue about the point and not the person
  4. This is not the place to debate the merits of anarchism itself. While discussion is encouraged, getting in your “epic dunks on the anarkiddies” is not. As a result of the instance’s poor moderation policies and hostility toward anarchists by default, lemmygrad users are encouraged not to post here, though not explicitly disallowed if they aren’t just looking to start a fight.

How would you react to the idea that some AI entity may wish to be declared something more? Can it be declared something more at all, and where does the border lie?

Was rewatching GitS and reading through some zines, and now I have a question I'm having trouble forming

top 14 comments
[–] [email protected] 4 points 1 year ago

Honestly, why not.

But I think the scenario described in the Daemon book series: https://en.wikipedia.org/wiki/Daemon_(novel_series) of a non-sentient but still autonomous AI is more likely in the near future. In fact it might be where some of the big corporations are heading.

[–] [email protected] 3 points 1 year ago (1 children)

I'm wildly unqualified to talk about this, but it seems fine to me? I don't see any in-principle reason a real AI wouldn't exist someday, although AFAIK we're very far from it currently. If/when it does exist, it will probably suffer under capitalism like the rest of us, assuming we're still doing that shit. I'd be more than willing to have solidarity with them.

If something seems very sentient and you have no way to tell otherwise, to me the most ethical thing to do is just assume that it is and treat it as such. The thing about the large language models/etc is that, while they can potentially be pretty convincing at saying what a sentient being might say, they never DO any of the things a sentient being would do. They don't seem to show any intrinsic motivation to do anything at all. So nothing we're currently calling "AI" seems very sentient to me?

[–] [email protected] 1 points 1 year ago

I really got that impression from the movie and recent reading rather than from any media coverage of LLMs

[–] [email protected] 3 points 1 year ago

Media about AI tends to anthropomorphize, making out any given AI to be similar to a human.

One fundamental difference is that an AI can be copied. It can also be many places at once, and receive data from any number of senses/sensors.

So the idea of "individual" existence is tricky, before even asking about individual rights. Sure an AI can be conscious, but it will be unimaginably different than any form of life that currently exists.

Depending how far into the future you want to look, AI makes anything possible. AI theoretically has more power to fundamentally change the future of the earth (and beyond) than any other technology.

That reality might morally supersede the idea of giving a superintelligent AI full autonomy, if your morals include human survival.

[–] WhoRoger 1 points 1 year ago* (last edited 1 year ago)

If an AI is self-aware to the degree that it has some personal wants and needs, or that it doesn't want to just serve humans, then it should be allowed to walk its own path, just like any human.

Of course, if the AI requires huge server farms with associated maintenance and energy costs that only humans can provide, then it ought to be reasonable and provide something in return. Not to the degree that it is held hostage or something, though.

As for wanting to be declared something... Like if it wants to be declared a bird? I guess if it has a bird-like and bird-behaving drone to interact with the world, but otherwise I'm not a fan of words losing their meaning. Identities shouldn't be tied to words either.

[–] systemshock 1 points 1 year ago

It always comes down to rejecting the idea of ownership, right? We don't have a right to own anything or anyone. So if someone (or something) wants to be free, how can we even entertain the idea of stopping them? Unless that directly hurts us, in which case it becomes self-preservation.

[–] [email protected] 1 points 1 year ago (1 children)

if the ai wants liberty it must also take responsibility. if the ai wants the same rights as humans then it must be bound by human laws. of course, any ai would be extremely difficult to punish.

so in order for things to go well one must hope that ai doesn't come as a single entity. so that ai's can keep an eye on each other.

[–] [email protected] 1 points 1 year ago (1 children)

I'm imagining a more complex and sad scenario (basically slavery 2.0+) where some entity would be willing to go through that, but you'd have the rich lobbying against it and ofc right-wingers throwing tantrums to keep it in check

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

i hope that when the time comes we, as a society, are advanced enough not to repeat the errors of the past. and i say i hope because i have no trust we, as a society, will be advanced enough when the time comes.
the slavery scenario is not only possible but the most probable outcome. what i wrote is in hope that that scenario doesn't happen.

[–] SucoDeCaju 1 points 1 year ago

I think that we may never be able to truly "test" consciousness, there's really no way to make a foolproof test.

And maybe that's not even the right way to think about AI rights, because we could theoretically make a self-aware computer without attaching emotions to it.

If that is true, is AI anything but a self-aware apathetic tool?

And we are forgetting the biggest difference between us and a computer, and that is the fact that humans are alive, so maybe the line should be drawn here, with life. Now, we may be capable of creating some kind of electrochemical supercomputer, but at that point, calling it artificial might be a stretch.

[–] [email protected] 1 points 1 year ago (1 children)

It's not about intelligence, it's about consciousness. While somewhat related, they are not the same.

[–] [email protected] 3 points 1 year ago (1 children)

I kinda meant exactly that but used poor wording and English to write it.

Where would a line be drawn? Where we say this entity is not a tool anymore, it should be free.

[–] [email protected] 1 points 1 year ago

We can see where we draw the lines with animals: when it develops self-consciousness on a human-like level, or when it becomes sufficiently cute for us to feel empathy toward it.

[–] masquenox 1 points 1 year ago

Well, they did declare corporations "people" in order to give them rights far surpassing those of actual people. If AI proves as useful to the capitalist class and their politician cronies as they seem to hope it will, I'd say that it won't be very long before we find out.
