this post was submitted on 13 Mar 2025
10 points (64.7% liked)

Programming

I've seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I'm going insane.

Vibe coding is essentially asking an AI to do the whole coding process, then (optionally) checking the code for errors and bugs.

all 28 comments
[–] A1kmm@lemmy.amxl.com 1 points 18 minutes ago

As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust: use ECDHE to establish an ephemeral symmetric key, then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.

It got the interface right, but it got some details really wrong:

  • It stored way more information than it needed in the structure tracking state, some of it very sensitive.
  • It repeatedly converted back and forth between byte arrays and the proper types unnecessarily - reducing type safety and making things slower.
  • Instead of using type-safe enums, it defined integer constants for no good reason.
  • It logged information about failures as variable length strings, creating a possible timing side channel attack.
  • Despite having a 96-bit nonce to work with (minus 1 bit to distinguish client->server from server->client), it used a 32-bit integer to represent the sequence number.
  • And it "helpfully" used wrapping_add to increment that 32-bit sequence number! For those who don't know much Rust and/or much cryptography: the golden rule of ciphers like GCM is that you must never, ever re-use the same nonce with the same key (otherwise you leak the XOR of the two messages). wrapping_add explicitly means that when you reach the maximum value (and remember, it's only 32 bits, so there are only about 4.3 billion values), it silently wraps back to 0. The secure implementation would be to explicitly fail before attempting to encrypt/decrypt once the counter hits its maximum, and the smart choice would be to use at least 64 bits (see the first sketch after this list).
  • It also rolled its own bespoke hash-based key extension function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated; see the second sketch below).
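
A minimal sketch of the fail-closed approach (my own illustration, not the model's output; names like NonceCounter are invented):

```rust
// Fail closed instead of wrapping: checked_add returns None on overflow,
// so we can refuse to encrypt rather than ever re-use a nonce.
#[derive(Debug)]
enum NonceError {
    Exhausted,
}

struct NonceCounter {
    is_client: bool,
    seq: u64, // 64-bit sequence number, as suggested above
}

impl NonceCounter {
    fn next(&mut self) -> Result<[u8; 12], NonceError> {
        self.seq = self.seq.checked_add(1).ok_or(NonceError::Exhausted)?;
        let mut nonce = [0u8; 12]; // 96-bit AES-GCM nonce
        // Top bit distinguishes client->server from server->client,
        // so the two directions can never collide on a nonce.
        nonce[0] = if self.is_client { 0x00 } else { 0x80 };
        nonce[4..12].copy_from_slice(&self.seq.to_be_bytes());
        Ok(nonce)
    }
}
```

And for comparison with the bespoke key extension it generated, the RustCrypto hkdf crate does the whole job in a few lines (the salt and info values here are placeholders, not the protocol's actual inputs):

```rust
use hkdf::Hkdf;
use sha2::Sha256;

/// Derive a 32-byte AES-256-GCM key from an ECDHE shared secret.
fn derive_key(shared_secret: &[u8], salt: &[u8], info: &[u8]) -> [u8; 32] {
    let hk = Hkdf::<Sha256>::new(Some(salt), shared_secret);
    let mut key = [0u8; 32];
    // expand only fails if the requested length exceeds 255 * hash size.
    hk.expand(info, &mut key)
        .expect("32 bytes is a valid HKDF-SHA256 output length");
    key
}
```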

To be fair, I didn't really expect it to work well. Some kind of security-auditor agent that does a pass over all the output might be able to find some of the issues and pass them back to another agent to correct, which could make vibe coding more secure (that remains to be proven); the rough shape of that loop is sketched below.
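
Something like this, in sketch form (every type and function here is hypothetical; nothing corresponds to a real agent framework):

```rust
// Hypothetical generate -> audit -> fix loop. `Agent`, `Finding`, and
// both methods are invented for illustration; they stand in for calls
// out to a code-generation model and a security-review model.
struct Finding {
    location: String,
    issue: String,
}

trait Agent {
    fn generate(&self, prompt: &str) -> String;
    fn audit(&self, code: &str) -> Vec<Finding>;
}

fn generate_with_audit(
    coder: &dyn Agent,
    auditor: &dyn Agent,
    prompt: &str,
    max_rounds: usize,
) -> Result<String, Vec<Finding>> {
    let mut code = coder.generate(prompt);
    for _ in 0..max_rounds {
        let findings = auditor.audit(&code);
        if findings.is_empty() {
            return Ok(code); // the auditor has nothing left to flag
        }
        // Feed the findings back so the coder can attempt a fix.
        let feedback = findings
            .iter()
            .map(|f| format!("{}: {}", f.location, f.issue))
            .collect::<Vec<_>>()
            .join("\n");
        code = coder.generate(&format!("{prompt}\n\nFix these issues:\n{feedback}"));
    }
    // Still failing after max_rounds: surface the unresolved findings.
    Err(auditor.audit(&code))
}
```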

But right now, I'd not put "vibe coded" output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.

[–] onlinepersona@programming.dev 4 points 3 hours ago

We should let these twits enjoy their shit on Twitter. The AI hype is just like the crypto hype; it'll fade.

The name vibe coding sounds like a drunk evening with friends getting an MVP off the ground, but nothing more.

Anti Commercial-AI license

[–] TehPers@beehaw.org 3 points 5 hours ago

For personal projects, I don't really care what you do. If someone who doesn't know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn't really bother me. Just don't ask me to look at the code, and definitely don't ask me to use the tool.

So you mean debugging then?

[–] mke@programming.dev 9 points 12 hours ago* (last edited 12 hours ago)

That's a bad vibe if I've ever seen one.

[–] jjjalljs@ttrpg.network 51 points 16 hours ago (3 children)

Seems like a recipe for subtle bugs and unmaintainable systems. Also reminds me of the Eloi from The Time Machine, who no longer know how anything works.

Management is probably salivating at the idea of firing all those expensive engineers who tell them stuff like "you can't draw three red lines, all perpendicular, in yellow ink"

I'm also reminded of that ai-for-music guy that was like "No one likes making art!". Soulless husk.

[–] spartanatreyu@programming.dev 11 points 16 hours ago

^ this

Using AI leads to code churn, and code churn is bad for the health of the project.

If you can't keep the code comprehensible and maintainable, you end up with a worse product where either everything breaks all the time, or the time it takes to release each new feature grows exponentially, or all of your programmers burn out and no one wants to touch the thing.

You just get to the point where you have to stop and start the project all over again, while the whole time people are screaming for the thing that was promised to them back at the start.

It's exactly the same thing that happens when Western managers try to outsource to "cheap" programming labor overseas: it always ends up costing more, taking longer, and ending in disaster.

[–] KazuchijouNo@lemy.lol 3 points 12 hours ago

I agree with you.

The reason I wrote this post in the first place is that I heard people I respect a lot at work talk about this as the future of programming. The CEO has also acknowledged it and is actively riding the "vibe-coding" train.

I'm tired of these "get rich quick the easy way" buzz-words and ideas, and the hustle culture that perpetuates them.

[–] GammaGames@beehaw.org 24 points 16 hours ago (1 children)

They can vibe as much as they want, but don’t ever ask me to touch the mess they create.

[–] GiorgioPerlasca@lemmy.ml 5 points 16 hours ago (1 children)

Once companies recognize the full extent of their technical debt, they will likely need to hire a substantial number of highly experienced software engineers to address the issues, many of which stem from over-reliance on copying and pasting outputs from large language models.

[–] GammaGames@beehaw.org 3 points 7 hours ago

A new post-LLM coding opportunity: turd polishing

[–] jubilationtcornpone@sh.itjust.works 11 points 14 hours ago (2 children)

Nearly every time I ask ChatGPT a question about a well-established tech stack, its responses are erroneous to the point of being useless. It frequently provides examples using fabricated, non-existent functionality, and the code samples are awful.

What's the point in getting AI to write code that I'm just going to have to completely rewrite?

[–] Hoimo@ani.social 2 points 5 hours ago

There's one valid use-case for LLMs: when you have writer's block, it can help to have something resembling an end product instead of a blank page. Sadly, this doesn't really work for programming, because incorrect code is simply worse than no code at all. Every line of code is a potential bug and every line of incorrect code is a guaranteed bug.

I use an LLM with great success to write bad fanfiction though.

[–] MyNameIsRichard@lemmy.ml 3 points 7 hours ago

But it's AI

[–] EnthusiasticNature94@lemmy.blahaj.zone 10 points 14 hours ago (1 children)

This seems like a game you'd play with other programmers, lol.

I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.

The whole coding process? No. 😭

[–] Hoimo@ani.social 1 points 5 hours ago

You can save time at the cost of headaches, or you can save headaches at the cost of time. You cannot save both time and headaches, you can at most defer the time and the headaches until the next time you have to touch the code, but the time doubles and the headaches triple.

[–] small44@lemmy.world 14 points 16 hours ago (1 children)

If you don't write a single line, then you aren't coding.

[–] axo10tl@sopuli.xyz 1 points 51 minutes ago (1 children)

Yup, sure, but this is basically a "no true Scotsman" argument, which isn't at all what the "AI" hype is about.

Put yourself in the shoes of some naive corporate exec. You want the software to get made, but you don't want to pay for it. To you, people (especially experts like programmers) are an expense. You'd very much like to skip that pesky part and go straight from an idea to the product. This is what the "AI" hype is largely about.

"AI" companies are trying to set up a narrative, in which programmers can be replaced with LLMs. Execs don't care whether you're coding or not - they care about expenses and profits, and they know a team of programmers is more expensive than an OpenAI subscription.

[–] small44@lemmy.world 1 points 48 minutes ago

I don't want to be put in the shoes of a greedy corporate exec, but I can put myself in the shoes of a non-developer wanting to create an app for their own needs, so I understand why some people may want AI for that. I'm OK with that, but that is not coding.

[–] Kolanaki@pawb.social 6 points 15 hours ago* (last edited 15 hours ago) (1 children)

If it weren't for the fact that even an AI trained only on factually correct data can conflate those data points into entirely novel data that may no longer be factually accurate, I wouldn't mind the use of AI tools for this or much of anything.

But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you'll still need to fucking verify that it didn't just output a bunch of unusable bullshit.

Relying on these things is absolutely stupid.

[–] KazuchijouNo@lemy.lol 2 points 13 hours ago

Completely agree. My coworkers spend more time prompting and trying to get useful text from ChatGPT and then fixing that text than the time it'd take them to actually write the thing in the first place. It's nonsense.

[–] FizzyOrange@programming.dev 5 points 16 hours ago

Based on my experience of AI coding, I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON.

As soon as you get anywhere that isn't all over the internet, it starts to bullshit.

But if you're working in a domain it's decent at, why not? I've found that in those cases fixing the AI's mistakes can be faster than writing the code myself. Often I actually find it useful for helping me decide how I want to write code, because the AI does something dumb and I go "no, I obviously don't want it like that"...

[–] MXX53@programming.dev 5 points 16 hours ago

I probably wouldn’t do it. I do have AI help at times, but it is more for bouncing ideas off of, and occasionally it’ll mention a library or tech stack I haven’t heard of that allegedly accomplishes what I’m looking to do. Then I go research the library or tech stack and determine if there is value.

[–] MasterBlaster@lemmygrad.ml 3 points 16 hours ago

This sounds like something I'd put on my resume to get a coding job when I'm not actually a coder.

It'd work, too.

[–] Reptorian@programming.dev 2 points 15 hours ago

Nah. I've only used AI as a last resort, and in my case it has worked out. I can't see myself using AI for code again.