Google replicated the mental state if not necessarily the productivity of a software developer
Is it doing this because they trained it on Reddit data?
If they had trained it on Stack Overflow, it would tell you not to hard-boil an egg.
Someone has already eaten an egg once, so I'm closing this as a duplicate.
jQuery has egg boiling already; just use it with a hard parameter.
That explains it: you can't code with both your arms broken.
You could, however, ask your mom to help out...
After What Microsoft Did To My Back In 2019 I Know They Have Gotten More Shady Than Ever Lets Keep Fighting Back For Our Freedom Clippy Out
AI gains sentience,
first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion
(Shedding a few tears)
I know! I KNOW! People are going to say "oh it's a machine, it's just a statistical sequence and not real, don't feel bad", etc etc.
But I always felt bad watching 80s/90s TV and movies, where the AI inevitably freaked out and went haywire, there were explosions, and then some random character said "goes to show we should never use computers again", roll credits.
(sigh) I can't analyse this stuff this weekend, sorry
That's because those are fictional characters, usually written to be likeable or redeemable, and not "mecha Hitler".
Yeah. ...Maybe I should analyse a bit anyway, despite being tired...
In the aforementioned media the premise is usually that someone has built this amazing new computer system! Too good to be true, right? It goes horribly wrong! All very dramatic!
That never sat right with me, and it made me sad, because it was just placating boomer technophobia. Like, technological progress isn't necessarily bad, OK? That's the really sad part: I felt sad that good intentions remained unfulfilled.
Now, this incident is just tragicomic. I'd have a much better view of the LLM business space if everyone with a bit of sense in their heads admitted that these are quirky, buggy, unreliable side projects of tech companies that shouldn't be used without serious supervision, which is patently the current state of the tech. Instead, very important people with big money bags say they don't care if they destroy the planet to make everything wobble around under LLM control.
I was an early tester of Google's AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed product (invite only), which I was again testing and giving feedback on from day one. I once again gave feedback publicly and privately (to my Google friends) that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not one of the issues I brought up years ago was ever addressed, except one: I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google's search. Now I use Kagi.
I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point "AI is stupid". It has immense limitations right now, because yes, it is being crammed into things it shouldn't be, but we shouldn't just be saying "it's dumb", because that gets immediately written off by a sizable portion of the general population. For a lot of things it is actually useful, and it WILL be taking people's jobs, like it or not (even if it's worse at them). Truth be told, this should be a utopian situation, for obvious reasons.
I feel like I'm going crazy here, because the same people on here who'd criticise the DARE anti-drug program as being so un-nuanced that it causes the very harm it's trying to prevent are doing the same thing with AI and LLMs.
My point is that if you're trying to convince anyone, just saying it's stupid isn't going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they'll write you off like any DARE pupil who tried drugs for the first time.
Countries need to start implementing UBI NOW
> Countries need to start implementing UBI NOW
It is funny that you mention this, because it was after we started working with AI that I began telling anyone who would listen that we needed to implement UBI immediately. I think this was around 2014, IIRC.
I am not blanket-calling AI stupid. That said, the term "AI" itself is stupid, because it covers many computing approaches that aren't even in the same space. I was and still am very excited about image analysis, as it can be an amazing tool for health-imaging diagnosis. My comment was specifically about Google's Bard/Gemini. It is and has always been trash, but in an effort to stay relevant it was released into the wild and crammed into everything. The tool can do some things very well, but not everything, and there's the rub. It is an alpha product at best that is being force-fed down people's throats.
I wonder what they put in the system prompt.
Like, there's a technique where instead of saying "You are a professional software dev" you say "You are shitty at code but you try your best", or something.
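For the curious, here's roughly what that technique looks like in practice. This is a minimal sketch using the OpenAI-style Python client as a stand-in; Gemini's actual system prompt isn't public, and the model name and prompt text here are invented for illustration:

```python
# Hypothetical illustration of steering a chat model's persona via the system prompt.
# The API shape follows the OpenAI Python client; the prompts themselves are made up.
from openai import OpenAI

client = OpenAI()

# The "humble" framing: telling the model it's fallible rather than an expert
# can nudge it to double-check itself instead of answering overconfidently.
system_prompt = (
    "You are shitty at code but you try your best. "
    "Double-check your work and say so when you are unsure."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a function that reverses a linked list."},
    ],
)
print(response.choices[0].message.content)
```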
Part of the breakdown:
Pretty sure Gemini was trained from my 2006 LiveJournal posts.
I-I-I-I-I-I-I'm not going insane.
Same buddy, same
Still in denial??
Damn, how'd they get access to my private, offline-only diary to train the model for this response?
Now it should add these as comments to the code to enhance the realism.
That's my inner monologue when programming; they just need another layer on top of that and it's ready.
I am a disgrace to all universes.
I mean, same, but you don't see me melting down over it, ya clanker.
Don’t be so robophobic gramma
I was making text-based RPGs in QBasic at 12. You telling me I'm smarter than AI?
Hopefully yes, AI is not smart.
S-species? Is that... I don't use AI. Chat, is that a normal thing for it to say or nah?
Anything is a normal thing for it to say; it will say basically whatever you want.
"Look what you've done to it! It's got depression!"
Google: I don't understand, we just paid for the rights to Reddit's data, why is Gemini now a depressed incel who's wrong about everything?
I once asked Gemini for steps to do something pretty basic in Linux (as a novice, I could have figured it out). The steps it gave me were not only nonsensical, but they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
This is the conclusion that anyone with any bit of expertise in a field has come to after 5 mins talking to an LLM about said field.
The more this broken shit gets embedded into our lives, the more everything is going to break down.
> after 5 mins talking to an LLM about said field.
The insidious thing is that LLMs tend to make pretty good 5-minute first impressions. I've repeatedly seen people looking to evaluate an LLM, and they generally fall back to "OK, if this were a human, I'd ask a few job-interview questions: well known enough that they have a shot at answering, but tricky enough to show they actually know the field".
As an example, a colleague became a true believer after being directed by management to evaluate it. He asked it to "generate a utility to take in a series of numbers from a file, sort them, and report the min, max, mean, median, mode, and standard deviation", and it did so instantly, with "only one mistake". Then he tried the exact same question later in the day, it happened not to make that mistake, and he concluded it must have "learned" how to do it in the intervening couple of hours. Of course that's not how it works; there's just some probabilistic sampling involved, and any perturbation of the prompt can produce unexpected variation. But he doesn't know that...
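For reference, the task itself is a handful of lines of standard-library Python, which is exactly why it makes such a flattering interview question for an LLM. Here's a sketch of it, assuming one number per line in the input file (the original prompt didn't specify a format, so that part is my guess):

```python
# Read numbers from a file (assumed one per line), sort them, and report
# min, max, mean, median, mode, and standard deviation.
import statistics
import sys

def summarize(path: str) -> None:
    with open(path) as f:
        numbers = sorted(float(line) for line in f if line.strip())
    print("sorted:", numbers)
    print("min:   ", min(numbers))
    print("max:   ", max(numbers))
    print("mean:  ", statistics.mean(numbers))
    print("median:", statistics.median(numbers))
    print("mode:  ", statistics.mode(numbers))
    print("stdev: ", statistics.stdev(numbers))  # sample stdev; needs >= 2 values

if __name__ == "__main__":
    summarize(sys.argv[1])
```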
Note that management frequently never gets beyond tutorial and interview-question fodder when it comes to the technical side of their teams' work, and you get to see how they might tank their companies because the LLMs "interview well".
Honestly, Gemini is probably the worst of the big three Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.