this post was submitted on 13 Mar 2025
1738 points (99.7% liked)

People Twitter

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago

39 comments
[–] balderdash9@lemmy.zip 9 points 1 day ago (24 children)

Deepseek is pretty good tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow-up questions can clear that up.

[–] lalala@lemmy.world 2 points 1 day ago

I think AI has now reached the point where it can deceive people, even though it's not equal to humanity.

[–] OsrsNeedsF2P@lemmy.ml -1 points 1 day ago* (last edited 1 day ago) (4 children)

Oof, let's see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas around and find holes in my design planning before each session.

Does it make mistakes? Not really. It has a hard time getting creative with nuanced examples (e.g. if you ask it to "give practical examples where the time/accuracy tradeoff in Flink is important", it can't come up with more than one or two truly distinct examples), but it's never wrong.
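
For reference, here's a minimal sketch of the kind of tradeoff I mean, using Flink's bounded-out-of-orderness watermark API (the class name, input stream, and numbers are invented for illustration, not from any real system):

    import java.time.Duration;

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class LatenessTradeoff {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // (key, event-time millis, count) -- the 3s event arrives out of order.
            env.fromElements(
                    Tuple3.of("sensor-1", 1_000L, 1),
                    Tuple3.of("sensor-1", 9_000L, 1),
                    Tuple3.of("sensor-1", 3_000L, 1))
                // The Duration here is the tradeoff knob: a larger bound waits
                // longer for stragglers (more complete windows, higher latency);
                // a smaller bound emits sooner but drops late events.
                .assignTimestampsAndWatermarks(
                    WatermarkStrategy
                        .<Tuple3<String, Long, Integer>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, ts) -> event.f1))
                .keyBy(event -> event.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(5)))
                .sum(2)
                .print();

            env.execute("watermark-lateness-tradeoff");
        }
    }

The Duration passed to forBoundedOutOfOrderness is the whole game: wait longer and your windows are more accurate, emit sooner and late events get silently dropped.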

The only times it's blatantly wrong are when it hallucinates due to lack of context (or oversaturated context). But you can usually tell when something doesn't make sense and prod it with follow-ups.

Tl;dr funny meme, would be funnier if true

[–] spooky2092@lemmy.blahaj.zone 3 points 1 day ago

I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask it if what it gave me was actually real.

Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don't exist based on the prompt. I give it plenty of context on what I'm using and trying to do, and it makes up shit based on what it thinks I want to hear.
