One of the few good uses I've seen for LLMs is as a sort of advanced grammar checker. I have a friend who is a subject matter expert, but English is not his first language. He uses ChatGPT to help him rewrite his technical statements into plainer, more fluent English. He doesn't rely on the model for any subject matter information, and when it hallucinates he's able to detect and correct the mistakes. Otherwise he finds it extremely useful as a writing tool.
I recently took some college classes, and they had us run our papers through Grammarly to check for errors and help our writing.
I hated it. It stripped the voice out of your writing almost completely, and every sentence was weighted to read like a standard textbook. Sure, all the same information was there, but by the time the AI said it was good... it sounded like it had been written by AI in the first place. Making it happy was worse than writing the paper itself, since the grammar portion of the grading was simply "run it through the AI and mark down for any errors it picks up."
In other unsurprising news: "tech bros are grifters."
Google search isn't a grift. Microprocessors aren't a grift. Video games aren't a grift. Social media isn't a grift.
Only some of the tech that comes from Silicon Valley tech bros is a grift. Some of it was genuinely revolutionary.
It's an important distinction: while tech (generally speaking) isn't a grift, there are a lot of grifters in the tech industry, and those are the tech bros the comment you're replying to mentions. They don't see tech as a tool for their users; they see it as a vessel for extracting money from investors until they have enough users to extract money from, at which point they don't need investment money anymore.
I don't think it applies to all Silicon Valley tech entrepreneurs anymore.
It's about time we call the hype machine what it is. Ed Zitron has been calling this out for more than a year in his newsletter and on his podcast. These charlatans pretend we're on the edge of thinking machines. Bullshit. They are statistical word generators. Can they be made useful beyond that? It appears so[0], but those other useful applications haven't been mass-adopted so far. Curing cancer certainly doesn't appear to be near.
If you ever tried to use AI for code, you'd know how dumb it is.
Generally it's OK; it'll get some stuff done, but if it thinks something is a certain way, you can't convince it otherwise. It hallucinates documentation, admits it made it up, then carries on telling you to use the made-up parts of the code.
Infuriating.
Like I said, though, it's generally pretty good at helping you learn a new language if you have some knowledge to start with.
People learning from scratch are cooked; it sometimes makes crazy decisions that compound over time and leave you with trash.
If you ever tried to use AI for code, you'd know how dumb it is.
If you ever tried using it for anything you're pretty familiar with, you'd know how dumb it is.
That's the only reason I think people still believe AI is great: they don't know shit, so they assume the AI is giving them good info when it's not.
I've actually started finding use cases for Copilot in Excel. It's still dumb, but it can mass-process data quickly, so if you write your prompt well it can save you a lot of time.
And that is exactly how "AI" should be used. It can boost productivity if used as the tool that it is.
I've actually tried to use these things to learn both Go and Rust (I've been writing Python for 17 years), and the experience was terrible. In both cases, it would generate code that referenced packages that didn't exist, used deprecated patterns, and didn't even compile. It was wholly useless as a learning tool.
In the end, what worked was what always works: I got a book and started on page 1. It was hard, but after a few hours I was actually learning.
I used Gemini for Go and was pleasantly surprised. It might be important to note that I don't ask it to generate a whole thing; it's more like "in Go, how do I ...?" and I sort of build up from there myself (example below).
ChatGPT and DeepSeek were a lot more failure-prone.
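For example, here's just a sketch of the kind of small building block you get from a question like "in Go, how do I read a file line by line?" (the filename is a placeholder for illustration, not anything Gemini specifically produced):

```go
// Minimal sketch: read a file line by line with bufio.Scanner.
// "input.txt" is a placeholder filename, not from the original comment.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("input.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fmt.Println(scanner.Text()) // process each line here
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```

Stitching pieces like that together yourself means you actually learn the idioms, instead of pasting in a whole generated program you can't debug.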
As an aside, I found Gemini very good at debugging Blender issues; the UI is very complex and unforgiving, and problems with it are super hard to search for (different versions, similarly named things, etc.).
But as soon as you hit something it won't accept has changed, it's basically useless. Still, it often got me to the point where I could find forum posts about "where did functionality x move to?"
Just like VR, I think the bubble will burst and it will remain a niche technology that can be fine-tuned for certain professions or situations.
People getting excited about ways for AI to control their PCs are probably in for a bad time.
About freaking time someone called them on it.
The “Artificial” part isn’t clue enough?
But I get it. The executives constantly hype up these Mad Libs machines as things they are not. Emotional intelligence? It has neither emotion nor intelligence. "Artificial intelligence" literally means it has the appearance of intelligence, not actual intelligence.
I used to be excited at the prospect of this technology, but at the time I naively expected people to be able to create and run their own. Instead, we got this proprietary, capital-chasing, kleptocratic corporate dystopia.
The “Artificial” part isn’t clue enough?
Imo, no. The face-value connotation of "artificial intelligence" is intelligence that's artificial: actual intelligence, just not biological. That's a lot different from "it kinda looks like intelligence as long as you don't look too hard at what's under the hood."
So far, examples of that exist only in sci-fi. That's part of why people are opposed to the bullshit generators being marketed as "AI": calling them "AI" in the first place is dishonest. And that goes way back. Video game NPCs, Microsoft's Clippy, etc. have all been incorrectly branded "AI" in marketing and casual conversation for decades, but those weren't stuffed into every product the way the current iteration is, watering down the quality of everything on the market, so beyond a mild pedantic annoyance, no one really gave a shit.
Nowadays the stakes are higher, since it's having an actual negative impact on people's lives.
If we ever come up with true AI - actual intelligence that's artificial - it's going to be a game changer for humanity, for better or worse.
This is actually not true. You are referring to Artificial General Intelligence (AGI), an artificially intelligent system that is able to function in any context.
Artificial intelligence as a field of computer science goes back to the 1950s, and it covers systems that appear intelligent, not systems that actually exhibit thinking. The entire purpose of the Turing test is to appear intelligent, with no requirement that the system actually is.
Rule-based systems and statistical models are examples of AI in the scientific sense, but the public perception of what AI should mean has been warped by science fiction portrayals of what it could mean.
The Turing test was a thought experiment claiming that if something seems intelligent, then it is intelligent. We have utterly disproved that by now. IMHO, it should only be taught as an example of an incomplete definition.
I mean, the can of worms has long since been opened, and there's the whole spiel about how definitions change over time with use, so... sure, I guess?
"AI" became synonymous with computing in general, and "AGI" moved the goalposts in an attempt to un-muddy the waters? Give it time; I'm sure marketing will fuck that one up too, and a couple of other randoms on the internet will be having this same conversation, but between AGI and whatever the new flavor is.