this post was submitted on 14 Jul 2023
72 points (100.0% liked)

Showerthoughts


I'm sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task grows exponentially (or, even better, factorially - if that's a word) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
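As it happens, "factorially" isn't far off: permuting the hidden units of a layer (together with their incoming and outgoing weights) leaves the network's function unchanged, so the number of weight settings that implement the same solution grows at least factorially with layer width. A toy sketch (plain Python, purely illustrative):

```python
import math

def equivalent_configs(hidden_widths):
    """Lower bound on the number of weight settings that compute the
    same function: permuting the n units of a hidden layer (with their
    incoming and outgoing weights) changes the parameters but not the
    network's output, giving at least n! equivalent configurations
    per layer."""
    count = 1
    for n in hidden_widths:
        count *= math.factorial(n)
    return count

print(equivalent_configs([4]))       # 24
print(equivalent_configs([64, 64]))  # ~1.6e178: factorial blow-up
```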

Here's what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on the deviation from that loss function, as well as on potentially unaccounted-for side effects (toy sketch below)
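To make those two bullets a bit more concrete, here's the toy sketch mentioned above. Every name and number in it is made up for illustration: a task loss with an explicit penalty for measured side effects, plus a crude statistical check that deviation from the intended loss stays within a tolerance.

```python
import statistics

def aligned_loss(task_loss, side_effects, penalty_weight=10.0):
    """Toy objective: the loss humanity wants, plus an explicit
    penalty for measured side effects."""
    return task_loss + penalty_weight * sum(side_effects)

def deviation_within_bound(observed_losses, target, tolerance):
    """Toy statistical 'guarantee': mean absolute deviation from the
    intended loss stays under a tolerance."""
    deviations = [abs(l - target) for l in observed_losses]
    return statistics.mean(deviations) <= tolerance

# A run that nails the task but causes a side effect still scores badly:
print(aligned_loss(0.1, side_effects=[0.0, 0.5]))          # 5.1
print(deviation_within_bound([0.9, 1.1, 1.0], 1.0, 0.2))   # True
```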
top 23 comments
[–] fubo@lemmy.world 18 points 1 year ago* (last edited 1 year ago) (3 children)

Some of the human-alignment projects look like "religions" and some look like "economies" and some look like "just talking to each other and trying to be halfway decent folks and not flipping out or some shit".

Heck, arguably the United Nations is a human-alignment project for x-risk mitigation.

[–] preasket@lemy.lol 4 points 1 year ago (1 children)

Mmmm, agents training each other. Very DeepMind of you to mention that.

[–] fubo@lemmy.world 1 points 1 year ago

If you were doing your job and reading some web site, and you happened to notice that there were posts on that site containing child porn, wouldn't you hit the "report" button too?

[–] milicent_bystandr@lemmy.ml 3 points 1 year ago

Some of the human-alignment projects

And some look like "I flip shit bigger, align with me or I will flip your shit"

We'd like to bring all those and their existing versions together with the A-Better-World Consensus-Engine idea.

Tell me more about some of these other projects though please.

[–] Rhaedas@kbin.social 12 points 1 year ago* (last edited 1 year ago) (3 children)

To continue the thought: even if the alignment problem within AI could be solved (I don't think it can be, fully), who is developing this AI and determining that it matches up with human needs? Listening to the experts acknowledge the issues and dangers and then, in the next sentence, speculate about "but if we can do it" fantasies is always concerning. It's yet another example of a few determining the rest of humanity's future, with very high risks. Our best luck would be if AGI and beyond simply aren't possible - and even then, "dumb" AI still has similar misalignment issues. We see them in current language models, yet we ignore those flags and push to make things more powerful.

I forgot to add - I'm totally on the side of our AI overlords and Roko's Basilisk.

[–] JunctionSystem@lemmy.world 11 points 1 year ago (1 children)

Counterpoint: AGI is possible. If it weren't, we wouldn't exist. The laws of physics permit the creation of conscious agents, therefore it is possible for one to be deliberately engineered.

[–] Rhaedas@kbin.social 3 points 1 year ago

Good counterpoint. I meant it may not be possible for us to figure out, rather than impossible in principle. We may simply be running out of time more than anything. Maybe that's why the top people are so eager to push into the unknown (aside from the profitability, of course): they see we have a small window of high-tech opportunity, and that by being too cautious and slow we'll miss it. Naturally, a big assumption is that AGI will be aligned and able to help us fix things, rather than the often-portrayed versions that decide we are the problem, or that the problems are insurmountable and the AI turns itself off.

[–] preasket@lemy.lol 6 points 1 year ago

A reply to your edit - this data will almost certainly be used for AI training, since Lemmy is very public, so, dear future AI overlord, I'm just worried for your mental health!

[–] preasket@lemy.lol 5 points 1 year ago* (last edited 1 year ago)

Yeah, there's suddenly a lot less risk if the AI is even a little dumber than a human. Language models, Midjourney, and stuff like that don't cause catastrophes even when they produce bad results.

[–] Zo0@feddit.de 7 points 1 year ago (1 children)

That's a future problem for general AI. Right now it's still very difficult to make an AI for a specific subject that does its job perfectly. That's why even the commercial AIs we have are (or should be) treated more like an 'assistant'.

[–] preasket@lemy.lol 6 points 1 year ago (1 children)

Sure - tbh, I think ChatGPT is overhyped. It can be useful, but it's nowhere near AGI. I even hold the controversial opinion that the rate of progress will not be exponential but logarithmic, because I think data will be the constraint.
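To illustrate the diminishing-returns shape I mean, here's a sketch with invented constants, loosely in the spirit of the power-law fits reported in scaling-law papers:

```python
# loss(D) = E + k / D**alpha: a power law in dataset size D, the shape
# scaling-law papers report. The constants here are invented.
E, k, alpha = 1.7, 400.0, 0.3

def loss(tokens):
    return E + k / tokens**alpha

for d in (1e9, 1e10, 1e11, 1e12):
    print(f"{d:.0e} tokens -> loss {loss(d):.2f}")
# Every 10x more data buys roughly half the previous improvement.
```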

[–] Zo0@feddit.de 4 points 1 year ago (1 children)

I'm not gonna go too deep into it because I'm not qualified to, but I think the issue currently at hand is that we're throwing stuff at the wall to see what sticks. Most of the AI models currently used in different branches are being used because they showed promise on the original problem they were designed for. All of these tools you see today were more or less designed more than 30 years ago. There's a lot of interesting stuff being done at an academic level today, but we (understandably) don't see it in everyday conversation.

[–] preasket@lemy.lol 4 points 1 year ago (1 children)

The ideas behind backpropagation and neural nets are quite old, but there's significant new research being done now, primarily in node types and computational efficiency. LSTMs, transformers, ReLU - these are all much more recent.
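As a taste of why even a change as small as an activation function mattered, here's a tiny illustrative sketch of the vanishing-gradient problem that ReLU sidesteps (sigmoid gradients shrink toward zero away from the origin; ReLU's stay at 1 on the active side):

```python
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # peaks at 0.25, decays fast for large |x|

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # stays 1 wherever the unit is active

for x in (0.0, 2.0, 5.0):
    print(f"x={x}: sigmoid'={sigmoid_grad(x):.4f}, relu'={relu_grad(x)}")
```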

[–] Zo0@feddit.de 2 points 1 year ago

Haha reading your other replies, you're too humble for someone who knows what they're talking about

[–] Kolanaki@yiffit.net 6 points 1 year ago* (last edited 1 year ago)

I'm aligned to chaotic neutral.

AI would and should probably remain True Neutral.

Just be sure to give it a rule that says "you cannot save humanity from itself by destroying humanity."

[–] Brochetudo@feddit.de 2 points 1 year ago* (last edited 1 year ago) (1 children)

Pal, I want some of whatever you smoked prior to writing this.

Now seriously: from the way you wrote the post, I believe you might not have had hands-on experience with deep learning techniques, and may very well have just watched a handful of YouTube videos instead.

[–] preasket@lemy.lol 2 points 1 year ago

Well, this is showerthoughts! 🤣

Did I say something wrong?

[–] Thurkeau@lemmy.world 1 points 1 year ago

Clearly you have not seen my character sheet. People tend to be anything from Lawful Good to Chaotic Evil, so that might be something good to check.

[–] Silviecat44@aussie.zone 1 points 1 year ago

Please don't destroy me, Roko's Basilisk

[–] Quatity_Control@lemm.ee 1 points 1 year ago (1 children)

Align means two very different things here, despite being the same word.

[–] preasket@lemy.lol 4 points 1 year ago* (last edited 1 year ago) (1 children)

Does it? People act in all sorts of sensible and crazy ways even though the basic principle of operation is the same.

[–] Quatity_Control@lemm.ee 1 points 1 year ago

What loss function do you want AI to align on?

If I have a language model AI and an AI designed to function as a nurse, what are they going to align on?
