Architeuthis

joined 1 year ago
[–] Architeuthis@awful.systems 2 points 1 month ago (1 children)

This was exactly what I had in mind, but for the life of me I can't remember the title.

[–] Architeuthis@awful.systems 9 points 1 month ago* (last edited 1 month ago) (1 children)

> why are all podcast ads just ads for other podcasts? It’s like podcast incest

I'm thinking it's a combination of you probably having set all your privacy settings to non serviam and most of their sponsors having opted out of serving their ads to non-US listeners.

I did once get some random Scandinavian-sounding ads, but for the most part it's the same for me, all iHeart podcast trailers.

[–] Architeuthis@awful.systems 17 points 2 months ago (4 children)

It had dumb scientists, a weird love-conquers-all theme, a bathetic climax that was also on the wrong side of believable, and an extremely tacked-on epilogue.

Wouldn't say that I hated it, but it was pretty flawed for what it was, magnificent black hole CGI notwithstanding.

[–] Architeuthis@awful.systems 16 points 2 months ago

> Summarizing emails is a valid purpose.

Or it would have been if LLMs were sufficiently dependable anyway.

[–] Architeuthis@awful.systems 10 points 2 months ago* (last edited 2 months ago) (1 children)

> But “It’s Greek to me” goes right back to the Romans.

The wiki seems to say the aphorism originates with medieval scribes and Shakespeare's Julius Caesar.

The actual ancient Romans are unlikely to have had such qualms, since at the time Greek was very widely understood among the educated classes, so much so that some important Roman works, like Marcus Aurelius' Meditations, were originally written in Greek, with the Latin versions being translations.

[–] Architeuthis@awful.systems 7 points 2 months ago

You’d think AI companies would have wised up by this point and gone through all their pre-recorded demos with a fine-tooth comb so that ~~marks~~ users at least make it past the homepage, but I guess not.

The target group for their pitch probably isn't people who have a solid grasp of coding, I'd bet quite the opposite.

[–] Architeuthis@awful.systems 21 points 2 months ago* (last edited 2 months ago) (2 children)

> On each step, one part of the model applies reinforcement learning, with the other one (the model outputting stuff) “rewarded” or “punished” based on the perceived correctness of their progress (the steps in its “reasoning”), and altering its strategies when punished. This is different to how other Large Language Models work in the sense that the model is generating outputs then looking back at them, then ignoring or approving “good” steps to get to an answer, rather than just generating one and saying “here ya go.”

Every time I've read an explanation of how chain-of-thought works in o1 it's been completely different, and I'm still not sure I understand what's supposed to be going on. Apparently you get a strike notice if you try too hard to find out how the chain-of-thought process works, so one might be tempted to assume it's something readily replicable by the competition (which they need to prevent for as long as they can) rather than any sort of notably important breakthrough.
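
For whatever it's worth, the least magical reading of that quoted paragraph I can come up with is reward-model-guided search over candidate "reasoning" steps rather than reinforcement learning proper. A toy sketch of what that could look like, with every name invented, since OpenAI has published none of this:

```python
# Entirely speculative sketch of "reward/punish each reasoning step":
# sample candidate steps, score them with a stand-in reward model,
# keep the best one or resample. Nothing here is OpenAI's actual design.
import random

def propose_steps(trace: str, n: int = 4) -> list[str]:
    """Stand-in for the model sampling n candidate next 'reasoning' steps."""
    return [f"{trace} -> candidate step {i}" for i in range(n)]

def reward(step: str) -> float:
    """Stand-in for a process reward model scoring a partial trace."""
    return random.random()

def solve(prompt: str, max_steps: int = 5, threshold: float = 0.5) -> str:
    trace = prompt
    for _ in range(max_steps):
        scored = [(reward(c), c) for c in propose_steps(trace)]
        best_score, best = max(scored)
        if best_score < threshold:
            continue      # "punished": discard this round's candidates
        trace = best      # "rewarded": keep the step and build on it
    return trace

print(solve("What is 2 + 2?"))
```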

From the detailed o1 system card PDF linked in the article:

> According to these evaluations, o1-preview hallucinates less frequently than GPT-4o, and o1-mini hallucinates less frequently than GPT-4o-mini. However, we have received anecdotal feedback that o1-preview and o1-mini tend to hallucinate more than GPT-4o and GPT-4o-mini. More work is needed to understand hallucinations holistically, particularly in domains not covered by our evaluations (e.g., chemistry). Additionally, red teamers have noted that o1-preview is more convincing in certain domains than GPT-4o given that it generates more detailed answers. This potentially increases the risk of people trusting and relying more on hallucinated generation.

Ballsy to just admit your hallucination benchmarks might be worthless.

The newsletter also mentions that the price for output tokens has quadrupled compared to the previous newest model, but the awesome part is, remember all that behind-the-scenes self-prompting that goes on while it arrives at an answer? Even though you're not allowed to see those intermediate outputs, according to Ed Zitron you sure as hell are paying for them (i.e., they're billed as output tokens), which is hilarious if true.
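
Back-of-the-envelope, with the hidden token count entirely invented since you're not allowed to see it:

```python
# Toy cost arithmetic. The $60 per 1M output tokens for o1-preview (vs. $15
# for GPT-4o, hence "quadrupled") matches the launch pricing as I recall it;
# the token counts are pure guesses.
PRICE_PER_M_OUTPUT = 60.00        # USD per 1M output tokens, o1-preview

visible_answer_tokens = 500       # what you actually get to read
hidden_reasoning_tokens = 5_000   # what you allegedly pay for anyway

billed = visible_answer_tokens + hidden_reasoning_tokens
cost = billed / 1_000_000 * PRICE_PER_M_OUTPUT
print(f"billed: {billed} tokens, cost: ${cost:.4f}")
# billed: 5500 tokens, cost: $0.3300 -- 11x the tokens you get to read
```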

[–] Architeuthis@awful.systems 20 points 2 months ago

"When asked about buggy AI [code], a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”

Strong "they cut all my deadlines in half and gave me an OpenAI API key, so fuck it" energy.

> He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.

You don't say.

[–] Architeuthis@awful.systems 16 points 2 months ago* (last edited 2 months ago) (2 children)

OpenAI manages to do an entire introduction of a new model without using the word "hallucination" even once.

Apparently it implements chain-of-thought, which either means they changed the RLHF dataset to force it to explain its 'reasoning' when answering or to do self-questioning loops, or that it reprompts itself multiple times behind the scenes according to some heuristic until it synthesizes a best result; it's not really clear.
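
If it's the reprompting reading, the shape would be something like the loop below, where only the final draft ever reaches the user. Every function here is a placeholder; this is a guess at the concept, not their implementation:

```python
# Speculative sketch of a hidden self-prompting loop. None of these names
# or heuristics come from OpenAI; they've published nothing of the sort.
import random

def generate(prompt: str) -> str:
    """Stand-in for one call to the underlying model."""
    return f"draft answer to [{prompt[:40]}]"

def good_enough(draft: str) -> bool:
    """Stand-in for whatever hidden heuristic scores a draft."""
    return random.random() > 0.7  # obviously not the real criterion

def answer(question: str, max_rounds: int = 8) -> str:
    draft = generate(question)
    for _ in range(max_rounds):
        if good_enough(draft):
            break
        # self-questioning: feed the draft back in and ask for a better one
        draft = generate(f"Question: {question} Previous attempt: {draft} Do better.")
    return draft  # the user only ever sees this, not the loop

print(answer("Which C# features actually exist?"))
```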

Can't wait to waste five pools of drinkable water to be told to use C# features that don't exist, but at least it got like 25.2452323760909304593095% better at solving math olympiad problems, as long as you allow it a few tens of tries for each question.

[–] Architeuthis@awful.systems 14 points 2 months ago* (last edited 2 months ago)

This is conceptually different: it just generates a few seconds of Doom-like video that you can slightly influence by sending inputs, and pretends that In The Future™ entire games could be generated from scratch and playable on Sufficiently Advanced™ autocomplete machines.
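
Mechanically, as far as I can tell, the idea is next-frame prediction conditioned on a window of recent frames plus the player's inputs. A sketch of just that loop, with the model and the input stream both faked:

```python
# Conceptual sketch only: a generative model hallucinating frames one at a
# time, with no game engine or game state anywhere in the loop.

def predict_next_frame(recent_frames: list[str], player_input: str) -> str:
    """Stand-in for the generative model; the real one emits pixels."""
    return f"frame conditioned on '{player_input}' and {len(recent_frames)} prior frames"

def play(seconds: int = 3, fps: int = 20) -> list[str]:
    frames: list[str] = []
    for t in range(seconds * fps):
        action = "strafe_left" if t % 2 else "fire"              # fake input stream
        frames.append(predict_next_frame(frames[-32:], action))  # bounded context window
    return frames  # just predicted pictures, not a game

print(len(play()), "frames of not-actually-a-game")
```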

[–] Architeuthis@awful.systems 18 points 2 months ago* (last edited 2 months ago)

Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph.

She seems to think the AI generates .WAD files.

I guess they fell victim to one of the classic blunders: assuming it can't be that stupid and that someone must be explaining it wrong.

[–] Architeuthis@awful.systems 0 points 2 months ago

> Did LLama3.1 solve the hallucination problem?

I bet we would have heard if it had, since it's the albatross hanging around the neck of this entire technology.
