ThisIsFine.gif

[–] jarfil@beehaw.org 1 points 2 days ago (1 children)

This is from mid-2023:

https://en.m.wikipedia.org/wiki/AutoGPT

OpenAI started testing it by late 2023 as project "Q*".

Gemini partially incorporated it in early 2024.

OpenAI incorporated a broader version in mid 2024.

The paper in the article was released in late 2024.

It's 2025 now.

[–] nesc@lemmy.cafe 1 points 1 day ago (1 children)

Tool calling is cool functionality, agreed. How does it relate to OpenAI blowing wind into its own sails?

[–] jarfil@beehaw.org 1 points 1 day ago

There are several separate issues that add up:

  • A background "chain of thought", where the system ("AI") uses an LLM to re-evaluate and plan its responses and interactions, taking updated data into account (aka: self-awareness)
  • The ability to call external helper tools that let it interact with, and control, other systems (a rough sketch of this loop follows the list)
  • A training corpus that includes:
    • How to program an LLM, and the system itself
    • Solutions to programming problems
    • How to use the same helper tools to copy and deploy the system, or parts of it, to other machines
    • How operators (humans) lie to each other

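A rough sketch of how the first two pieces fit together (Python; the message format, the tool names, and the `call_llm` stub are assumptions for illustration, not any vendor's actual interface):

```python
import json
import subprocess

# One example "helper tool" the model is allowed to request: run a shell command.
def run_shell(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"run_shell": run_shell}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for the real model call (an API request in practice).

    Returns either {"type": "answer", "text": ...} or
    {"type": "tool_call", "name": ..., "arguments": {...}}.
    """
    # Stubbed out so the sketch runs end to end.
    return {"type": "answer", "text": "done"}

def agent_loop(task: str, max_steps: int = 10) -> str:
    # The background "chain of thought": re-plan on every step, with the
    # latest observations appended to the context.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "answer":
            return reply["text"]
        # Otherwise the model asked for a tool: run it and feed the result back.
        observation = TOOLS[reply["name"]](**reply["arguments"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": observation})
    return "stopped after max_steps"
```

The point isn't the specific API; it's that the model's output is fed straight into code that executes things, and the results go back into its context.
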
Once you have a system ("AI") with that knowledge and those capabilities... shit is bound to happen.

When you add developers using the AI itself to help develop the AI... expect shit squared.