
Feels like we've got a lot of tech-savvy people here, so this seems like a good place to ask. Basically, as a dumb guy who reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. On top of that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro, this is so scary how good we made this thing." Seems like bullshit.

I've seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC machining, I don't think I'd just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

tara@lemmy.blahaj.zone · 2 points · 1 year ago

So there are typically four main competing interpretations of what AI is:

  1. Acting like a human
  2. Thinking like a human
  3. Acting rationally
  4. Thinking rationally

These are from Russell and Norvig’s “Artificial Intelligence: A Modern Approach”.

Alan Turing’s “Turing Test” checks whether a given agent is artificially intelligent according to definition #1. The test has a human assessor converse with the agent via text messages and then decide whether the agent is human or not. Large language models, a form of machine learning, can produce chatbot agents that pass this test: suitably prompted instances of GPT-4 texting an assessor, for example. The assessor also occasionally interacts with real humans, so they are kept genuinely uncertain.
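
To make the protocol concrete, here's a minimal sketch of that setup in Python. Everything here is hypothetical scaffolding: `llm_reply` is a stand-in for whatever chatbot is being assessed, not a real API call.

```python
import random

def human_reply(prompt: str) -> str:
    # A hidden human participant types the response.
    return input("(hidden human) ")

def llm_reply(prompt: str) -> str:
    # Hypothetical stand-in: imagine a language-model call here.
    return "Good question! I'd say I'm doing pretty well today."

def run_trial(rounds: int = 3) -> bool:
    """One trial: the assessor chats blind with a randomly chosen
    agent, then guesses. Returns True if a machine passed as human."""
    agent = random.choice([human_reply, llm_reply])
    for _ in range(rounds):
        print("(agent)", agent(input("(assessor) ")))
    guess = input("Human or machine? ").strip().lower()
    return agent is llm_reply and guess == "human"
```

Mixing real humans into `random.choice` is what keeps the assessor honest; if every agent were a bot, "machine" would always be the safe guess.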

By this point, I think that machine learning in the form of an LLM can achieve artificial intelligence according to definition #1, but that isn’t what most non-tech non-academic people mean by AI.

The mainstream definition of AI is what we would call Artificial General Intelligence (AGI): an agent that meets one of Russell and Norvig’s criteria for AI across many scenarios and situations it has never encountered before.

Many would argue that LLMs like GPT-4 do not meet the criteria for AGI because they are not general enough: they cannot learn to play an Atari game, for example, or learn an entirely unseen language to fluency.

This is the difference between an LLM and a fictional AGI like GLaDOS or Skynet.

Additionally, there are forms of machine learning like k-means clustering whose only function is to identify related groups within a dataset (see the sketch below). I would assert these are not AI, although a weak argument could be made that they are thinking “rationally” enough to meet definition #4.
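
Here's a minimal sketch of what that looks like in practice, using scikit-learn's `KMeans` on made-up data; the point is that grouping points is the model's entire repertoire:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic blobs of 2-D points, one near (0, 0) and one near (5, 5).
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
])

# k-means partitions the points into k clusters -- and that's all it does.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)  # roughly (0, 0) and (5, 5)
print(kmeans.labels_[:5])       # cluster index assigned to each point
```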

Then there are forms of AI which are not machine learning, such as heuristic agents: agents whose reasoning is hard-coded by humans, like the chess engine Stockfish or the AI found in most video games (see the sketch below).
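
As an illustrative sketch of that style (not how Stockfish itself works; real engines search far more cleverly), here's a tic-tac-toe agent whose entire "intelligence" is a hand-written exhaustive minimax search, with zero learning involved:

```python
from typing import Optional

# All eight winning lines on a 3x3 board, as index triples.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board: list) -> Optional[str]:
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board: list, player: str) -> int:
    """Score a position for X: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player                       # try the move
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "                          # undo it
    return max(scores) if player == "X" else min(scores)

def best_move(board: list, player: str) -> int:
    """Exhaustively score every legal move and pick the best for `player`."""
    best_i, best_s = -1, None
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            s = minimax(board, "O" if player == "X" else "X")
            board[i] = " "
            if best_s is None or (s > best_s if player == "X" else s < best_s):
                best_i, best_s = i, s
    return best_i

# X has two in a row on top; the agent finds the winning move (index 2).
print(best_move(list("XX OO    "), "X"))
```

All the "reasoning" here was written by a human; the program just applies it mechanically, which is exactly the heuristic-agent category.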

Ultimately, “AI” can describe machine learning if AI is understood as anything that meets one or more of Russell and Norvig’s definitions. But since most people say AI when they mean AGI, I think “machine learning” is the better term: less undeserved hype, less marketing disinformation, and generally clearer about what is actually being discussed.

Thanks for taking the time and putting it in such a laconic way.