this post was submitted on 13 Mar 2025
1814 points (99.7% liked)

People Twitter

6360 readers
2562 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] jsomae@lemmy.ml 1 points 3 hours ago (3 children)

I said it's not true in general. I don't know much about chemistry; it may be more true there.

Coding is different. In many situations it can be cheap to test or eyeball the output.

Crucially, in nearly any subject, it can give you leads. Nobody expects every lead to pan out. But leads are hard to find.
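To make "cheap to test" concrete, here's a hypothetical sketch: suppose the model hands back the run-length encoder below. The function and its inputs are invented for illustration, but the point is that a couple of asserts tell you in seconds whether the lead panned out.

```python
# Hypothetical example of a function an LLM might suggest:
# run-length encoding of a string, e.g. "aaabcc" -> "a3b1c2".
def rle(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1  # same character: bump the run count
        else:
            out.append([ch, 1])  # new character: start a fresh run
    return "".join(f"{c}{n}" for c, n in out)

# Eyeballing the output is a one-liner per case:
assert rle("aaabcc") == "a3b1c2"
assert rle("") == ""
```

That's the whole verification loop for a coding lead; there's no equivalent two-line check for, say, a proposed chemical synthesis.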

[–] tacobellhop@midwest.social 1 points 3 hours ago (2 children)

I imagine ChatGPT and code are a lot like air and water.

Both parts are in the other part. Meaning an LLM is probably more native at reading and writing code than it is at interpreting engineering standards worldwide and picking the exact thread pitch for a bolt you need to order thousands of. Go and thread one to verify.

[–] jsomae@lemmy.ml 1 points 3 hours ago (1 children)

This is possibly true due to the bias of the people who made it. But I reject the notion that because ChatGPT is made of code per se that it must understand code better than other subjects. Are humans good at biology for this reason?

[–] tacobellhop@midwest.social 1 points 3 hours ago

You might know better than me. If you ask ChatGPT to write the code for itself, I have no way to verify it. You would.