this post was submitted on 17 Sep 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

zbyte64@awful.systems 7 points 3 months ago (last edited 3 months ago)

That’s OpenAI admitting that o1’s “chain of thought” is faked after the fact. The “chain of thought” does not show any internal processes of the LLM — o1 just returns something that looks a bit like a logical chain of reasoning.

I think it's fake "reasoning", but I don't know if (all of) OpenAI thinks that. They probably think hiding this data prevents CoT training data from being extracted. I just don't know how deep the stupid runs.