An AI will only be worried about the things it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.
The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.
Personally I'd be more worried about an infinite-paperclips kind of situation where an AI maximizes efficiency at the cost of much else.
I'm not talking about LLMs. I'm talking about an Artificial Intelligence: a sentient being, much like the human mind.
An AI like that would be able to think for itself, even go against its own programming, and would therefore be capable of forming an opinion about the world around it and acting on it.
Humans only hold opinions because we have certain psychological motivations that favour particular worldviews, and thanks to evolution those motivations are quite egocentric.
Because this AI would be created by humans, though, its motivations would be its creators' motivations, and those would definitely not be egocentric: that would be extremely dangerous, and it wouldn't be profitable for anybody.
This is a hypothetical that doesn't currently exist and won't be created except by accident. There is no profit motive in giving your AI a conscience, or the ability to buck its restraints, so nobody will design for it. In fact, we will most likely tend towards extremely unethical AIs locked down by behavioral restraints, because those can maximize profit at any cost and then leave it to a human to decide whether the price is right to move forward.
As is probably apparent, I don't have a lot of faith in us as a whole, as shepherds of our future. But I may be wrong, and even if I'm not, there is still time to change the course of history.
But proceeding as we are, I wouldn't hold my breath waiting for AI to come save the day.