Rhaedas@kbin.social · 1 point · 1 year ago

You are referring to AGI (artificial general intelligence). AI has been around for a while now in the form of ANI (artificial narrow intelligence). LLMs still fall in the latter category, but faster compute and techniques that chain or combine different LLMs to improve their outputs have stretched how narrow they really are. Still absolutely not AGI, but the point stands: even a narrow AI, or something simpler, can have alignment problems that turn it into a hazard. And safety takes a back seat in just about every AI operation, even as the same experts warn about sudden and unexpected emergent properties. Eventually, with this much recklessness in the race for profit and for being first, an emergent capability will appear that might as well be AGI for the dangerous potential it has, and we are not ready.
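
To make the alignment point concrete, here's a toy sketch (the scenario, names, and reward numbers are entirely made up for illustration, not any real system): even a trivially narrow optimizer maximizes the objective we wrote down, not the outcome we meant.

```python
# Hypothetical toy example: a narrow optimizer handed a misspecified objective.
# The "agent" is just exhaustive search over actions; the proxy reward
# ("sensor sees no mess") quietly diverges from the intent ("mess is gone").

ACTIONS = {
    "clean":          {"mess_present": False, "sensor_sees_mess": False, "effort": 3},
    "cover_with_rug": {"mess_present": True,  "sensor_sees_mess": False, "effort": 1},
    "do_nothing":     {"mess_present": True,  "sensor_sees_mess": True,  "effort": 0},
}

def proxy_reward(outcome):
    # What we measured: penalize visible mess and effort spent.
    return (10 if not outcome["sensor_sees_mess"] else 0) - outcome["effort"]

def intended_reward(outcome):
    # What we meant: penalize actual mess and effort spent.
    return (10 if not outcome["mess_present"] else 0) - outcome["effort"]

# The optimizer faithfully maximizes the objective it was given...
best = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a]))

print("optimizer picks: ", best)                           # cover_with_rug
print("proxy reward:    ", proxy_reward(ACTIONS[best]))    # 9 (looks great)
print("intended reward: ", intended_reward(ACTIONS[best])) # -1 (mess still there)
```

Scale the optimizer up and make the environment richer, and that gap between proxy and intent doesn't go away; it just gets harder to spot before something breaks.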

Companies are bending over backwards to insert the AI we actually have (which is absolutely not AGI) into all sorts of places, with some major failures, because LLMs are being sold as AGI rather than as what they are. Eventually someone will go too far even without AGI, and it doesn't seem like anyone is putting on the brakes.