this post was submitted on 21 Sep 2024
84 points (71.4% liked)

Technology


Please remove it if unallowed

I see a lot of people in here who get mad at AI-generated code and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself. That would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent in writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. Now, I could have used any other language I knew, but I chose Bash because it made the most sense: Bash ships with most Linux distros out of the box, so you don't have to install another interpreter or compiler. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not easily find how to pass values into a function and return from one, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, or how to separate the letter and number parts of a string, etc.
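
The kind of thing I was after boils down to a handful of idioms like the ones below. This is just a rough sketch; the function name, paths, and strings are placeholders I made up for illustration.

    #!/usr/bin/env bash

    # Pass a value into a function and "return" a result via stdout.
    strip_trailing_slash() {
        local path="$1"
        printf '%s\n' "${path%/}"    # drop one trailing slash, if present
    }
    dir="$(strip_trailing_slash "/some/dir/")"

    # Loop over an array.
    files=("a.txt" "b.txt" "c.txt")
    for f in "${files[@]}"; do
        echo "processing $f"
    done

    # Catch an error from the previous command via its exit status.
    if ! cp "$dir/a.txt" /tmp/; then
        echo "copy failed" >&2
    fi

    # Separate the letter and number parts of a string like "abc123".
    s="abc123"
    letters="${s//[0-9]/}"     # -> abc
    numbers="${s//[^0-9]/}"    # -> 123

Every one of these is in the Bash manual somewhere, of course; the hard part as a beginner was not knowing the right terms to search for.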

That is where ChatGPT helped greatly. I would ask ChatGPT to write these pieces of code whenever I needed them, then test its code with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone who has zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT, I can just write all this quickly and move on. If I ever want to learn Bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experience do you have with AI chatbots that made you hate them?

top 50 comments
[–] Bougie_Birdie@lemmy.blahaj.zone 123 points 2 months ago (1 children)

A lot of the criticism comes from AI results being wrong a lot of the time while sounding convincingly correct. In software, things that appear to be correct but are subtly wrong lead to errors that can be difficult to decipher.

Imagine that your AI was trained on StackOverflow results. It learns from the questions as well as the answers, but the questions will often include snippets of code that just don't work.

The workflow of using AI resembles the relationship between a junior and a senior developer. The junior/AI generates code from a spec/prompt, and then the senior/prompter inspects the code for errors. If we replace the junior with AI, then entry-level developer jobs are slashed, and at the same time people aren't getting the experience required to reach the senior level.

Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

Another argument would be that if I have to take the time to review generated code and figure out what might be wrong with it, it might just be quicker and easier to write it correctly the first time.

Business often doesn't understand these subtleties. There's a ton of money being shovelled into AI right now. Not only for developing new models, but for marketing AI as a solution to business problems. A greedy executive that's only looking at the bottom line and doesn't understand the solution might be eager to implement AI in order to cut jobs. Everyone suffers when jobs are eliminated this way, and the product rarely improves.

[–] clif@lemmy.world 51 points 2 months ago (6 children)

Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

This really resonated with me and is an excellent point. I'm going to have to remember that one.

[–] boatswain@infosec.pub 62 points 2 months ago (3 children)

As a cybersecurity guy, it's things like this study, which said:

Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

[–] EncryptKeeper@lemmy.world 55 points 2 months ago (2 children)

If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.

The problem is it’s used often by beginners or people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with it.

If you’re trying to learn to code or code in a new language, would you try to learn from somebody who has only half a clue what he’s doing and will confidently tell you things that are objectively wrong? That’s much worse than just learning to do it properly yourself.

[–] leftzero@lemmynsfw.com 50 points 2 months ago (7 children)

The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.

I asked him to let me take the keyboard, typed "sql server string length" into Google, saw LEN in the excerpt from the first result, and went on to do what I'd wanted to do, while in another tab chatgpt was still spewing nonsense.

LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they're extremely good at convincing their users that they know what they're doing and what they're talking about.

That causes the people using them to blindly copy their useless buggy code (that even if it worked and wasn't incomplete and full of bugs would be intended to solve a completely different problem, since users are incapable of properly asking what they want and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone's time and learning nothing.

Not that blindly copying from Stack Overflow is any better, of course, but Stack Overflow or Reddit answers come with comments and alternative answers that, if you read them, go a long way toward telling you whether the code you're copying will work for your particular situation or not.

LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you'd do given different commented answers.

They'll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it's not.

[–] JasonDJ@lemmy.zip 8 points 2 months ago (1 children)

This is my big concern...not that people will use LLMs as a useful tool. That's inevitable. I fear that people will forget how to ask questions and learn for themselves.

[–] cy_narrator@discuss.tchncs.de 6 points 2 months ago

I can feel that frustrated look when someone uses chatGPT for such a tiny reason

[–] simplymath@lemmy.world 45 points 2 months ago* (last edited 1 month ago) (7 children)

People who use LLMs to write code (incorrectly) perceived their code to be more secure than code written by expert humans.

https://arxiv.org/abs/2211.03622

[–] WolfLink@sh.itjust.works 36 points 2 months ago (3 children)
  • AI Code suggestions will guide you to making less secure code, not to mention often being lower quality in other ways.
  • AI code is designed to look like it fits, not be correct. Sometimes it is correct. Sometimes it’s close but has small errors. Sometimes it looks right but is significantly wrong. Personally I’ve never gotten ChatGPT to write code without significant errors for more than trivially small test cases.
  • You aren’t learning as much when you have ChatGPT do it for you, and what you do learn is “this is what chat gpt did and it worked last time” and not “this is what the problem is and last time this is the solution I came up with and this is why that worked”. In the second case you are far better equipped to tackle future problems, which won’t be exactly the same.

All that being said, I do think there is a place for chat GPT in simple queries like asking about syntax for a language you don’t know. But take every answer it gives you with a grain of salt. And if you can find documentation I’d trust that a lot more.

[–] cy_narrator@discuss.tchncs.de 5 points 2 months ago

Yes, I completely forget how to solve the problem 5 minutes after ChatGPT writes its solution. So I wholeheartedly believe AI is bad for learning.

[–] unmagical@lemmy.ml 29 points 2 months ago

It gives a false sense of security to beginner programmers and doesn't offer a more tailored solution that a more practiced programmer might create. This can lead to a reduction in code quality and can introduce bugs and security holes over time. If you don't know the syntax of a language how do you know it didn't offer you something dangerous? I have copilot at work and the only thing I actually accept its suggestions for now are writing log statements and populating argument lists. While those both still require review they are generally faster than me typing them out. Most of the rest of what it gives me is undesired: it's either too verbose, too hard to read, or just does something else entirely.

[–] tabular@lemmy.world 27 points 2 months ago (6 children)

If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is largely considered a dick move, even by people who use AI.

Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program - therefore users get the same rights you did]

[–] bruhduh@lemmy.world 27 points 2 months ago* (last edited 2 months ago) (5 children)

That is the general reason. I use LLMs to help myself with everything, including coding, even though I know why it's bad.

[–] corroded@lemmy.world 27 points 2 months ago (2 children)

When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let's say you're tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it's slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color is out of the 0-255 bounds?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array or can we write back to the input array?), etc.

ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of "rubber ducky" debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that's just not true. Sure, ChatGPT can produce code that "works," but it will fail at edge cases and will generally be inefficient and slow.

[–] SergeantSushi@lemmy.world 25 points 2 months ago (4 children)

I agree AI is a godsend for non-coders and amateur programmers who need a quick and dirty script. As a professional, I find the quality of the code is oftentimes 💩, and I can write it myself in less time than it takes to describe what I want to an AI.

[–] MagicShel@programming.dev 5 points 2 months ago* (last edited 2 months ago)

I think the process of explaining what you want to an AI can often be helpful. Especially given the number of times I've explained things to junior developers and they've said they understood completely, but then when I see what they wrote they clearly didn't.

Explaining to an AI is a pretty good test of how well the stories and comments are written.

[–] lurch@sh.itjust.works 5 points 2 months ago

i love it when the AI declares and sets important sounding variables it then never uses 🙄

[–] MacStache@programming.dev 25 points 2 months ago (2 children)

For me it's because if the AI does all the work the person "coding" won't learn anything. Thus when a problem does arise (i.e. the AI not being able to fix a simple mistake it made) no one involved has the means of fixing it.

[–] cley_faye@lemmy.world 24 points 2 months ago* (last edited 2 months ago) (2 children)
  • issues with model training sources
  • businesses sending their whole codebase to a third party (Copilot etc.) instead of using local models
  • the time gained is not that substantial in most cases, as the actual "writing code" part is not the part that takes the most time; thinking about it and checking it is
  • "chatting" in natural language to describe something that has a precise spec is less efficient than just writing the code for most tasks, as long as you're half-competent. We've known that since customer/developer meetings have existed.
  • the dev has to actually be competent enough to review the changes/output. In a way, "peer reviewing" becomes mandatory; it's long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues)
  • some businesses, thinking that LLM outputs are "good enough", fire or move away the people who can actually do said review, leading to more issues down the line
  • actual debugging of non-trivial problems ends up sending me in a lot of directions; getting a useful output is unreliable at best
  • making new things will sometimes confuse LLMs, making them a waste of time at best, and sometimes producing even worse code
  • using a code chatbot to help with common, menial tasks is irrelevant, as these tasks have already been done and sort of "optimized out" into libraries and reusable code. At best you could pull some of this into your own codebase, making it worse to maintain in the long term

Those are the downside I can think of on the top of my head, for having used AI coding assistance (mostly local solutions for privacy reasons). There are upsides too:

  • sometimes, it does produce useful output in which I only have to edit a few parts to make it work
  • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
  • the chatbot turning short code into longer "natural language" explanations can sometimes act as a rubber duck, which helps with debugging

Note the "sometimes". I don't have actual numbers because tracking that would be like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downside, it's not even a matter of the tool becoming better, it's the usefulness to begin with that's uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare "gains".

[–] HakFoo@lemmy.sdf.org 23 points 2 months ago (1 children)

My objections:

  1. It doesn't adequately indicate "confidence". It could return "foo" or "!foo" just as easily, and if that's one term in a nested structure, you could spend hours chasing it.
  2. So many hallucinations-- inventing methods and fields from nowhere, even in an IDE where they're tagged and searchable.

Instead of writing the code now, you end up having to review and debug it, which is more work IMO.

[–] CarbonatedPastaSauce@lemmy.world 7 points 2 months ago

I stopped using it after the third time it just wholesale made up powershell cmdlets that don’t exist.

Until it has fidelity it’s just a toy.

[–] Grofit@lemmy.world 22 points 2 months ago

One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.

In most real world use cases though you are plugging code into larger code bases with design patterns and paradigms throughout that need to be followed.

An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current code base, etc.; we already do this daily with Stack Overflow.

An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.

So anyway, I don't see a problem with the tool itself; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than this and can do their job for them.

[–] bitwolf@lemmy.one 18 points 2 months ago (1 children)

We built a Durable task workflow engine to manage infrastructure and we asked a new hire to add a small feature to it.

I checked on them later and they expressed they were stuck on an aspect of the change.

I could tell the code was ChatGPT. I asked "you wrote this with ChatGPT didn't you?" And they asked how I could tell.

I explained that ChatGPT doesn't have the full context and will send you on tangents like it has here.

I gave them the docs for the engine and the integration point and said, "Try using only these, and ask me questions if you're stuck for more than 40 minutes."

They went on to become a very strong contributor and no longer use ChatGPT or Copilot.

I've tried it myself and it gives me the wrong answer 90% of the time. It could be useful, though: if ChatGPT were changed to find and link the docs it considers relevant, I would love it, but it never does that, even when asked.

[–] socialmedia@lemmy.world 7 points 2 months ago

Phind is better about linking sources. I've found that generated code sometimes points me in the right direction, but other times it leads me down a rabbit hole of obsolete syntax or other problems.

Ironically, if you're already familiar with the code, you can easily tell where the LLM went wrong and adapt its generated code.

But I don't use it much because it's almost more trouble than it's worth.

[–] AreaKode@lemmy.world 18 points 2 months ago (9 children)

I've found it to be extremely helpful in coding. Instead of trying to read huge documentation pages, I can just have a chatbot read them and tell me the answer. My coworker has been wanting to learn PowerShell. Using a chatbot, his understanding of the language has greatly improved. A chatbot can not only give you the answer, it can also break down how it reached that conclusion. It can be a very useful learning tool.

[–] Eldritch@lemmy.world 7 points 2 months ago (3 children)

It's great for regurgitating pre-written text. For generating new or usable code, it's largely useless. It doesn't have an actual understanding of what it says. It can recombine information and elements it's seen before, but it can't generate anything truly unique.

[–] sugar_in_your_tea@sh.itjust.works 14 points 2 months ago* (last edited 2 months ago) (2 children)

Two reasons:

  1. my company doesn't allow it - my boss is worried about our IP getting leaked
  2. I find them more work than they're worth - I'm a senior dev, and it would take longer for me to write the prompt than just write the code

I just don't know anything about Bash's syntax

That probably won't be the last time you write Bash, so do you really want to go through AI every time you need to write a Bash script? Bash syntax is pretty simple, especially if you understand the basic concept that everything is a command (i.e. syntax is <command> [arguments...]; like if <condition> where <condition> can be [ <special syntax> ] or [[ <test syntax> ]]), which explains some of the weird corners of the syntax.
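
To make that concrete, here's a tiny sketch (the grep pattern and log path are just placeholders): the if keyword simply runs a command and branches on its exit status, and the [ ... ] / [[ ... ]] tests are just what supply that exit status.

    # `if` runs a command and branches on its exit status.
    if grep -q "ERROR" /var/log/app.log; then
        echo "found an error"
    fi

    # `[ ... ]` is the ordinary test command; `[[ ... ]]` is bash's extended test.
    x=5
    if [ "$x" -gt 3 ]; then
        echo "greater than 3"
    fi
    if [[ $x -gt 3 && $x -lt 10 ]]; then
        echo "between 3 and 10"
    fi

Once that clicks, a lot of the weird corners of the syntax stop looking so weird.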

AI sucks for anything that needs to be maintained. If it's a one-off, sure, use AI. But if you're writing a script others on your team will use, it's worth taking the time to actually understand what it's doing (instead of just briefly reading through the output). You never know if it'll fail on another machine if it has a different set of dependencies or something.

What negative experience do you have with AI chatbots that made you hate them?

I just find dealing with them to take more time than just doing the work myself. I've done a lot of Bash in my career (>10 years), so I can generally get 90% of the way there by just brain-dumping what I want to do and maybe looking up 1-2 commands. As such, I think it's worth it for any dev to take the time to learn their tools properly so the next time will be that much faster. If you rely on AI too much, it'll become a crutch and you'll be functionally useless w/o it.

I did an interview with a candidate who asked if they could use AI, and we allowed it. They ended up making (and missing) the same mistake twice in the same interview because they didn't seem to actually understand what the AI output. I've messed around with code chatbots, and my experience is that I generally have to spend quite a bit of time to get what I want, and then I still need to modify and debug it. Why would I do that when I can spend the same amount of time and just write the code myself? I'd understand the code better if I did it myself, which would make debugging way easier.

Anyway, I just don't find it actually helpful. It can feel helpful because it gets you from 0 to a bunch of code really quickly, but that code will probably need quite a bit of modification anyway. I'd rather just DIY and not faff about with AI.

[–] PixelProf@lemmy.ca 14 points 2 months ago (2 children)

Lots of good comments here. I think there's many reasons, but AI in general is being quite hated on. It's sad to me - pre-GPT I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here's a few perspectives:

  • Training data is questionable/debatable ethics,
  • Amateur programmers don't build up the same "code muscle memory",
  • It's being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
  • The time saved writing code isn't being used to review and test the code more carefully than it was before,
  • The AI is being used for problem solving, where it's not ideal, as opposed to code-from-spec where it's much better,
  • Non-Local AI is scraping your (often confidential) data,
  • Environmental impact of the use of massive remote LLMs,
  • Can be used (according to execs, anyways) to replace entry level developers,
  • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
  • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what's important and don't see the skills they strengthen along the way to the answer.

I like coding with local LLMs and asking occasional questions to larger ones, but the code on larger code bases (with these small, local models) is often pretty nonsensical, though it improves with the right approach: provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.

I think there are a lot of reasons to hate on it, but that's because the ways to use it effectively are still being figured out.

Some of my academic colleagues still hate IDEs because tab completion, fast compilers, in-line documentation, and automated code linting (to them) mean you don't really need to know anything or follow any good practices, since your editor will do it all for you, so you should just use vim or Notepad. It'll take time to adopt and adapt.

[–] adespoton@lemmy.ca 6 points 2 months ago

Spot-on.

I spend a lot of time training people how to properly review code, and the only real way to get good at it is by writing and reviewing a lot of code.

With an LLM, it trains on a lot of code, but it does no review per se; unlike other ML systems, there are no negative and positive feedback systems in place to improve quality.

Unfortunately, AI is now equated with LLM and diffusion models instead of machine learning in general.

[–] count_dongulus@lemmy.world 14 points 2 months ago

It doesn't pass judgment. It just knows what "looks" correct. You need a trained person to discern that. It's like describing symptoms to WebMD. If you had a junior doctor using WebMD, how comfortable would you be with their assessment?

[–] small44@lemmy.world 10 points 2 months ago (2 children)

Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code. The other group of people who oppose it are those who think it will kill programming jobs.

[–] OpenStars@discuss.online 14 points 2 months ago

There is an enormous difference between:

rm -rf / path/file

vs.

rm -rf /path/file

[–] cm0002@lemmy.world 10 points 2 months ago (1 children)

Many lazy programmers may just copy paste without thinking too much about the quality of generated code

Tbf, they've been doing that LONG before AI came along

[–] john89@lemmy.ca 10 points 2 months ago

Personally, I've found AI is wrong about 80% of the time for questions I ask it.

It's essentially just a search engine with cleverbot. If the problem you're dealing with is esoteric and therefore not easily searchable, AI won't fare any better.

I think AI would be a lot more useful if it gave a percentage indicating how confident it is in its answers, too. It's very useless to have it constantly give wrong information as though it is correct.

[–] kibiz0r@midwest.social 10 points 2 months ago (3 children)

Basically this: Flying Too High: AI and Air France Flight 447

Description

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done-for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't - and that the consequences of asking the wrong question are disastrous.

[–] OmegaLemmy@discuss.online 7 points 2 months ago

I use AI, but whenever I do I have to modify its output, whether because it gives me errors, is slow, doesn't fit my current implementation, or starts off on the wrong foot.

[–] moon@lemmy.cafe 6 points 2 months ago

It's a tool just like everything else, but after all the hype, people are just now sobering up to the fact that it's incredibly wrong a lot of the time.

[–] Numuruzero@lemmy.dbzer0.com 5 points 2 months ago (1 children)

I have a coworker who is essentially building a custom program in Sheets using AppScript, and has been using CGPT/Gemini the whole way.

While this person has a basic grasp of the fundamentals, there's a lot of missing information that gets filled in by the bots. Ultimately after enough fiddling, it will spit out usable code that works how it's supposed to, but honestly it ends up taking significantly longer to guide the bot into making just the right solution for a given problem. Not to mention the code is just a mess - even though it works there's no real consistency since it's built across prompts.

I'm confident that in this case and likely in plenty of other cases like it, the amount of time it takes to learn how to ask the bot the right questions in totality would be better spent just reading the documentation for whatever language is being used. At that point it might be worth it to spit out simple code that can be easily debugged.

Ultimately, it just feels like you're offloading complexity from one layer to the next, and in so doing quickly acquiring tech debt.

[–] Eczpurt@lemmy.world 5 points 2 months ago

Sounds like it's just another tool in the coding arsenal! As long as you take care to verify things like you did, I can't see why it'd be a bad idea. It's when you trust it blindly that things go wrong.
