this post was submitted on 24 Nov 2023
-1 points (49.8% liked)

[–] nicetriangle@kbin.social 190 points 10 months ago (3 children)

Geez the reporting around this has been ridiculously sensationalist

[–] FlyingSquid@lemmy.world 92 points 10 months ago (2 children)

You mean OpenAI didn't just create a superintelligent artificial brain that will surpass all human ability and knowledge and make our species obsolete?

[–] kescusay@lemmy.world 67 points 10 months ago (5 children)

The funny thing is, last year when ChatGPT was released, people freaked out about the same thing.

Some of it was downright gleeful. Buncha people told me my job (I'm a software developer) was on the chopping block, because ChatGPT could do it all.

Turns out, not so much.

I swear, I think some people really want to see software developers lose their jobs, because they hate what they don't understand, and they don't understand what we do.

[–] Enkers@sh.itjust.works 34 points 10 months ago* (last edited 10 months ago) (2 children)

As a software developer, I do want to see software developers lose their jobs to AI. This shouldn't be surprising, as the purpose of a lot of software development is to put other people out of a job via automation, and that's fundamentally a good thing. The alternative is like wanting a return to preindustrial society. Automation generally raises quality of life.

The real problem is that we still haven't figured out how to distribute the benefits of society's automation efforts equitably so that they raise quality of life for everyone.

[–] nicetriangle@kbin.social 23 points 10 months ago (2 children)

Yeah, that would be all well and good if it meant we were on track for some post-work egalitarian utopia, but you and I both know that's not at all where this is heading.

[–] FaceDeer@kbin.social 5 points 10 months ago

Unfortunately based on what I know of history it seems likely that humanity won't ever be on track to build a post-work egalitarian utopia until we've got no other option left. So I support going ahead with this tech because that seems like a good way to force the issue. The transition period will be rough, but better than stagnation IMO.

[–] Enkers@sh.itjust.works 4 points 10 months ago

Oh, for sure, it'll definitely further wealth disparity, as automation always seems to in a capitalist system. But that's a societal problem that we continually have to address, and it spans nearly all fields of human work to varying degrees.

Fortunately, for the most part tech advancements are very hard to control. Progress can be impeded from spreading, but not stopped, and it means the average individual has access to more and more powerful tools.

[–] Eldritch@lemmy.world 4 points 10 months ago (2 children)

We've figured it out. They already had a start on it in the 19th and 20th centuries. However, those with the means have spent the last 100 years screaming bloody murder, dismantling government and any progress that had been made to address it, as well as invading and overthrowing any foreign group that thought about opposing them.

[–] FlyingSquid@lemmy.world 16 points 10 months ago* (last edited 10 months ago) (4 children)

Even if ChatGPT advances far beyond where it is now in terms of writing code, at the very least you're still going to need people to go over the code as a redundancy. Who is going to trust an AI so much that they will be willing to risk it making coding errors? I think that the job of at the very least understanding how code works will be safe for a very long time, and I don't think ChatGPT will get that advanced for a very long time either, if ever.

[–] kescusay@lemmy.world 18 points 10 months ago (1 children)

There's more to it than that, even. It takes a developer's level of knowledge to even begin to tell ChatGPT to make something sensible.

Sit an MBA down in front of a ChatGPT window and tell them to make an application. The application has to save state, it has to use the company's OAuth login system, it has to store data in a PostgreSQL database, and it has to have granular, roles-based access control.

Then watch the MBA struggle because they don't understand that...

  • Saving state is going to vary depending on the front-end. Are we writing a browser application, a desktop application, or a mobile application? The MBA doesn't know and doesn't understand what to ask ChatGPT to do.
  • OAuth is a service running separately from the application, and requires integration steps that the MBA doesn't know how to do, or how to ask ChatGPT to do. Even if they figure out what OAuth is, ChatGPT isn't trained on their particular corporate flavor of integration.
  • They're actually writing two different applications: a front-end and a back-end. The back-end is going to handle communication with PostgreSQL services. The MBA has no idea what any of that means, let alone how to ask ChatGPT to produce the right code for separate front-end and back-end features.
  • RBAC is also probably a separate service, requiring separate integration steps. Neither the MBA nor ChatGPT will have any idea what those integration steps are.

The level of knowledge and detail required to make ChatGPT produce something useful on a large scale is beyond an MBA's skillset. They literally don't know what they don't know.

I use an LLM in my job now, and it's helpful. I can tell it to produce snippets of code for a specific purpose that I know how to describe accurately, and it'll do it. Saves me time having to do it manually.

But if my company ever decided it didn't need developers anymore because ChatGPT can do it all, it would collapse inside six months, and everything would be broken due to bad pull requests from non-developers who don't know how badly they're fucking up. They'd have to rehire me... And I'd be asking for a lot more money to clean up after the poor MBA who'd been stuck trying to do my job.

[–] FlyingSquid@lemmy.world 4 points 10 months ago (1 children)

Thank you, you explained all of that much better than I could.

[–] kescusay@lemmy.world 7 points 10 months ago

You're welcome! And it occurs to me that the fact that it took a developer to explain all of that is an object lesson in why ChatGPT won't end software development as a career option - and believe me, I simplified it for a non-developer audience.

[–] thisfro@slrpnk.net 15 points 10 months ago (25 children)

Who is going to trust an AI so much that they will be willing to risk it making coding errors?

Sadly, too many

load more comments (25 replies)
[–] nicetriangle@kbin.social 4 points 10 months ago* (last edited 10 months ago) (1 children)

That's a fuckin bleak outcome for a lot of people if the job transition goes from doing the work to fixing an AI's attempt at the work.

That's like being an artist and being told your job now is simply to fix the shitty hands Midjourney draws. And your job will only last as long as that remains a problem.

[–] FlyingSquid@lemmy.world 5 points 10 months ago

Hey, I didn't say the future would be bright, just that it will still need people familiar with code for the foreseeable future. At least until the Earth heats up so much that the lack of potable water and the unsurvivable high temperatures destroy civilization.

[–] dustyData@lemmy.world 10 points 10 months ago* (last edited 10 months ago) (6 children)

Your comment reminds me of the cesspit of Xitter, with the generative AI bros trying to conflate AI with assistive tech. They seriously argued that “artistically impaired” was a genuine disability and that they were entitled to generative AI training sets because it allowed them to draw. It was the most disingenuous argument: that they had a right to steal artists' work, and leave them without income, to train their AI, because they couldn't be bothered to rub a pen against some paper.

[–] Valmond@lemmy.mindoki.com 5 points 10 months ago

Artistically impaired, lol that made my day!

[–] nicetriangle@kbin.social 3 points 10 months ago (1 children)

Lotta people have already lost jobs because of it. I know a few personally. People with college educations. We're just getting started with this, it will get worse.

[–] kescusay@lemmy.world 9 points 10 months ago (1 children)

In software development? Not many - and certainly not at smart companies.

ChatGPT is a tool. It goes in the developer toolbox because it's useful. But it doesn't replace the developer, any more than a really good screwdriver replaces the construction worker.

More and more, understanding how to use LLMs for software development will be a job requirement, and developers who can't adapt to that may find themselves unemployed. But most of us will adapt to it fine.

I have. I'm using Copilot these days. It's great. And the chances of it replacing me are roughly 0%, because it doesn't actually know anything about our applications, and if told to make code by someone else who doesn't know anything about them either, it'll make useless garbage.

[–] nicetriangle@kbin.social 4 points 10 months ago* (last edited 10 months ago) (3 children)

Yeah, so your job is harder to fully replace with AI at this moment than jobs like copywriting, narration, or illustration. Enjoy it while it lasts, because its days are numbered.

And before all those jobs are gone, people using AI tools like your Copilot will be more productive, requiring less headcount. At the same time, there will still be a lot of people seeking work, but with fewer jobs there will be downward pressure on wages.

[–] Hyperreality@kbin.social 17 points 10 months ago (3 children)

You laugh now, but just you wait. If it turns out they've created a hyperintelligent waifu/husbando, this will inevitably lead to plummeting birth rates and the end of human civilisation.

[–] Duke_Nukem_1990@feddit.de 6 points 10 months ago

One can only hope

[–] DemBoSain@midwest.social 5 points 10 months ago

Superintelligent AI Just Pried the Keyboard from my Cold, Dead Hands

[–] toothbrush@lemmy.blahaj.zone 79 points 10 months ago (2 children)

Just BS. They're trying to come up with an explanation for why Altman was fired that is not: we caught him doing lots of illegal stuff.

[–] GONADS125@lemmy.world 38 points 10 months ago (7 children)

I think it's a hype move at this point. Like the guy who claimed he believed google's chat bot was sentient.

I read another article that stated they had a computational breakthrough, in which their program can now carry out basic grade-school math. No other model is able to actually solve math equations, not even basic arithmetic.

This is a significant development, but it's not like they're on the cusp of developing superintelligence now. I bet they are taking this small inch towards superintelligence and hyping it like they've just hurtled miles forward.

[–] dustyData@lemmy.world 6 points 10 months ago* (last edited 10 months ago) (1 children)

The thing is, this could actually be a jump of several miles. But where they want to go is not the grocery down the road; they are trying to fly to another galaxy. This is more like hyping up that you are going to land on the moon next year, at a time when you just figured out that rubbing two sticks together makes a fire. Technically it's truly a leap, but we are so far away still.

[–] GONADS125@lemmy.world 5 points 10 months ago

Technically it's truly a leap, but we are so far away still.

I completely agree and was trying to convey that. Not trying to downplay the significance of the development, but they are far from superintelligence and they're going to hype it up as much as they can.

[–] Korne127@lemmy.world 74 points 10 months ago* (last edited 10 months ago) (1 children)

Connecting superintelligence to the board's recent actions, which Sutskever initially supported, might be a stretch.

Why do you do that in your headline then?

[–] gedaliyah@lemmy.world 39 points 10 months ago (2 children)

But can it open the pod bay doors?

[–] random_character_a@lemmy.world 6 points 10 months ago (1 children)

Take your upvote and go watch more artsy 60's scifi, you brilliant sod.

[–] DrCake@lemmy.world 38 points 10 months ago (1 children)

So was it all just a marketing stunt?

[–] otter@lemmy.ca 33 points 10 months ago* (last edited 10 months ago)

CEO ousting shenanigans = 📉

Release rumor = 📈

They're not publicly traded, but I assume public sentiment still has an effect on things (e.g. partnerships, users buying memberships, etc.)

[–] Melt@lemm.ee 24 points 10 months ago

Hope it replaces the most expensive job position: CEO

[–] satans_crackpipe@lemmy.world 20 points 10 months ago (1 children)

What are these con artists up to? And why are so many people self replicating the propaganda?

[–] boatswain@infosec.pub 4 points 10 months ago

self replicating the propaganda?

You can't self-replicate anything other than yourself. You replicate things; we use "self-replicating" because it's shorthand for "thing that replicates itself."

[–] ZILtoid1991@kbin.social 15 points 10 months ago (2 children)

The "superintelligence" in question: the same old tech, but with a larger context window, which will make it hallucinate a bit less often.

[–] Tattorack@lemmy.world 11 points 10 months ago* (last edited 10 months ago)

Alright, so the article really doesn't prove anything, just says OpenAI claims something and then fills it with words.

Let's be clear here; we don't even have an AGI. That is to say, artificial general intelligence: a man-made intelligence that is at least as capable and general-purpose as Human intelligence.

That would be an intelligence that is self-aware and can actually think and understand. Data from Star Trek would be an AGI.

THESE motherfuckers are now claiming they made a breakthrough on potentially creating an SI, a superintelligence. An artificial, man-made intelligence that not only has the self-awareness and understanding of an AGI, but is vastly more intelligent than a Human, and likely has awareness that surpasses Human awareness.

I think not.

[–] RiikkaTheIcePrincess@kbin.social 10 points 10 months ago (1 children)

Why do I keep looking at these threads? The way people talk about this stuff on all sides is so asinine. Nearly every good point is accompanied by missing a big one, or just ricocheting off the good one, flying off into space and hitting a fully automated luxury gay space communist. Hopes, dreams, assumptions, and ignorance all just headbutting each other and getting nowhere.

Oh yeah, I wanted to know what "superintelligence" was and whether I should care. Welp.

[–] reflex@kbin.social 5 points 10 months ago* (last edited 10 months ago) (3 children)

Yawn.

Let me know when we get a real Terminator or Matrix or Space Odyssey situation.

[–] sentient_loom@sh.itjust.works 5 points 10 months ago

Almost sounds like the whole thing was a performance.
