this post was submitted on 03 Mar 2024
46 points (100.0% liked)

TechTakes


starting out[0] with "I was surprised by the following results" and it just goes further down almost-but-not-quite Getting It Avenue

close, but certainly no cigar

choice quotes:

Why is it impressive that a model trained on internet text full of random facts happens to have a lot of random facts memorized? … why does that in any way indicate intelligence or creativity?

That’s a good point.

you don't fucking say

I have a website (TrackingAI.org) that already administers a political survey to AIs every day. So I could easily give the AIs a real intelligence test, and track that over time, too.

really, how?

As I started manually giving AIs IQ tests

oh.

Then it proceeds to mis-identify every single one of the 6 answer options, leading it to pick the wrong answer. There seems to be little rhyme or reason to its misidentifications

if this fuckwit had even the slightest fucking understanding of how these things work, it would be glaringly obvious

there's plenty more, so remember to practice stretching before you start your eyerolls

[–] Amoeba_Girl@awful.systems 13 points 4 months ago (3 children)

Another way of putting it: Out of 196 questions, ChatGPT-4 got about 5 more correct answers than a random guesser would (39 vs. 34.23).

What are the odds of that?

I'm too lazy to look through the tests he's administering, but IQ tests like the WAIS have vocabulary questions, which yes you would expect an LLM to be better at than random chance.
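For a rough sense of how (un)surprising that margin is, here's a back-of-envelope binomial sketch. It assumes 196 independent questions with 6 equally likely options each (the "6 answer options" figure comes from the post itself; the blog's 34.23 baseline presumably reflects the test's actual answer structure, so this is only an approximation):

```python
# Hypothetical sanity check: how likely is 39+ correct out of 196
# by pure chance, if each question has 6 equally likely options?
# Assumes independent questions, which real IQ-test items are not.
from math import comb

n, p = 196, 1 / 6
mean = n * p  # expected correct by uniform guessing, ~32.67

# P(X >= 39) for X ~ Binomial(196, 1/6)
p_at_least_39 = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(39, n + 1)
)
```

Under those (crude) assumptions the tail probability comes out somewhere around one in eight — i.e. a score of 39 is well within the range a guesser hits fairly often, which is the commenter's point.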

I've surely said it before, but when you see the sort of thinking on display from Mr Max Truth here, is it any wonder that rationalists are impressed with ChatGPT's reasoning faculties?

[–] Amoeba_Girl@awful.systems 16 points 4 months ago (1 children)

I asked ChatGPT-4 if cars in roundabouts in Ireland go clockwise or counterclockwise. It got it wrong. When I told it that, it apologized and gave the right answer. But then I trickily called it out on its right answer, and it apologized again and reverted to the wrong answer. Fundamentally, it knows that the Irish drive on the left side of the road, but it doesn’t understand how to apply that to a roundabout to find the circular direction.

lol you fucking idiot

[–] self@awful.systems 14 points 4 months ago (1 children)

this coin I’m flipping fundamentally knows everything about how the Irish drive, but it only seems to feel like giving me the right answer approximately half the time

this reminds me of very early in my programming career, when I discovered that an NPC I programmed to randomly either move forward or turn left every 10 seconds was surprisingly good at solving simple labyrinths. I used to instantiate like 100 of them and see which ones would win (or “fight” by colliding with each other, or escape the labyrinth by stacking on top of other instances). you’re telling me now I was a handful of incredibly stupid blog posts away from being a renowned AI researcher?

[–] swlabr@awful.systems 12 points 4 months ago

I used to instantiate like 100 of them and see which ones would win (or “fight” by colliding with each other

The basilisk will not take kindly to your desecration of AGI for sport.
