this post was submitted on 21 Sep 2024
84 points (71.4% liked)

Technology

Please remove this if it's not allowed.

I see a lot of people here who get mad at AI-generated code, and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write all the code by itself; that would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know Bash's syntax. Now, I could have used any other language I knew, but I chose Bash because it made the most sense: Bash ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler for another language. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and return from one easily, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, or how to separate the letters and numbers in a string, and so on.
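To give an idea, here is a rough sketch of the kinds of snippets I am talking about (the names, paths, and values are made up for illustration, not taken from my actual scripts):

    #!/usr/bin/env bash
    set -euo pipefail  # stop on errors instead of silently carrying on

    # Pass a value into a function; "return" a string via stdout.
    strip_trailing_slash() {
        local path="$1"
        printf '%s\n' "${path%/}"
    }

    dir=$(strip_trailing_slash "/some/dir/")  # -> /some/dir

    # Loop over an array.
    items=("disk1" "disk2" "backup3")
    for item in "${items[@]}"; do
        # Separate the letters and the number in a string like "disk1".
        letters="${item//[0-9]/}"    # delete every digit      -> disk
        number="${item//[!0-9]/}"    # delete every non-digit  -> 1
        echo "$item -> name=$letters index=$number"
    done

    # Catch a failure of the previous command explicitly.
    if ! cp "$dir/source.img" /tmp/; then
        echo "copy failed" >&2
    fi

Parameter expansions like ${path%/} and ${item//[0-9]/} were exactly the kind of thing I could not find collected in any one guide.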

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I needed them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write Bash that is fairly advanced. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT, I can just write all this quickly and forget about it. If I ever want to learn Bash and feel motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

[–] Rhaedas@fedia.io 1 points 2 months ago

Keep in mind that at its core an LLM is a probabilistic autocompletion mechanism built from the vast training data it was fed. A coding LLM is fine-tuned on data better suited to producing coding solutions, so when you ask it to generate code for a very specific purpose, it is much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request and you could get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn't understand the code it gives you, nor can it reason about whether it will function.

Think of an analogy where you Googled a coding question, took the first twenty hits, and merged all the results together into one answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy also explains why a much narrower coding query may work more often: if you Google a niche question you will find more accurate, or at least more relevant, results than if you run a general search and paste together anything that looks close.

Basically, if you can help the LLM home in on the better data from the start, you're more likely to get what may be good code.

[–] KairuByte@lemmy.dbzer0.com -1 points 2 months ago

As someone who just delved into a related but unfamiliar language for a small project, I found it relatively correct and easy to use.

There were a few times it got itself into a weird “loop” where it insisted on doing things in a ridiculous way, but prior knowledge of programming was enough for me to reword my prompts and “suggest” different, simpler solutions.

Would I ever have gotten to the end of that project without programming knowledge and my suggestions? Likely, but it would have taken a long time and the code would have been worse.

The irony is, without help from Copilot, I'd have taken at least three times as long.

[–] cy_narrator@discuss.tchncs.de -2 points 2 months ago (3 children)

Also, if you are interested, here are those scripts I wrote with ChatGPT:

https://gitlab.com/cy_narrator/lukshelper

[–] Smokeydope@lemmy.world -2 points 2 months ago* (last edited 2 months ago) (3 children)

It's not just AI code but AI stuff in general.

It boils down to Lemmy having a disproportionate number of leftist liberal-arts college student types. That's just the reality of this platform.

Those types tend to see AI as a threat to their independent creative businesses, as well as feeling slighted that their data may have been used to train a model.

It's understandable why lots of people denounce AI out of fear, spite, or ignorance. It's hard to remain fair and open to a new technology when it's threatening your livelihood and its early foundations may have scraped your data non-consensually for training.

So you'll see an AI-hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it's all just trendy nonsense. Don't debate them. Don't argue. Just let them vent and move on with your day.

[–] rolling_resistance@lemmy.world -2 points 2 months ago

I see you like it when something threatens your livelihood.

[–] socsa@piefed.social -3 points 2 months ago

Because most people on Lemmy have never actually had to write code professionally.

[–] NuXCOM_90Percent@lemmy.zip -5 points 2 months ago (3 children)

Lemmy is an outlier where anything "AI" immediately triggers the luddites to scream and rant (and occasionally send threats over PMs...) that it is bad because it is "AI" and so forth. So... massive grain of salt.

Speaking as (for simplicity's sake) a software engineer who wears both a coder and a manager hat?

"AI" is incredibly useful for charlie work. Back in the day you would hire an intern or entry level staff to write your unit tests and documentation and utility functions. But, for well over a decade now, documentation and even many unit tests can be auto-generated by scripts for vim or plugins for an IDE. They aren't necessarily great but... the stuff that Fred in Accounting's son wrote was pretty dogshit too.

What LLMs plus RAG do is step that up a few notches. You still aren't going to have them write the critical-path code, but you can farm off a LOT more charlie work, to the point where you just need to review the equivalent of an MR that came from a plugin rather than from a kid who thinks we don't know he reeks of weed.

And... that is good and bad. Good in that it means smaller companies and teams are capable of much bigger projects; bad because it means a lot fewer entry-level jobs to teach people how to code.

So that is the manager/mentor perspective. Let's dig a bit deeper into your example:

I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and return from one easily, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, or how to separate the letters and numbers in a string, and so on.

Honestly? That sounds to me like foundational issues. You already articulated what you need, but you wanted to find an all-in-one guide rather than googling "bash function input example" or "bash function return example" or "strip trailing slash from directory path linux" and so forth. Also, I am pretty sure I regularly find a guide that covers every one of those questions except string processing every time I forget the syntax of a for loop in bash and need to google it.

And THAT is the problem with relying on these tools. I know plenty of people who fundamentally can't write documentation because their IDE has always generated (completely worthless) Doxygen for them. And it sounds like you don't know how to self-educate on solving a problem.

Which is why, generally speaking:

I still prefer to offload the charlie work to newbies because it helps them learn (and it lets me justify their paycheck). Usually I tell them I want to "walk you through our SDLC, it is kind of annoying" as an excuse to watch over their shoulder and make sure they CAN do this by hand. Then... whatever. I don't care if they pass everything through whatever tools our IT/Cybersecurity departments deem legit.

Which... personally? I generally still prefer "dumb" scripts to generate the boilerplate for myself. And when I do ask ChatGPT or a "local" setup, I ask general questions. I don't paste our codebase in. I say "Hey ChatGPT, give me an example of setting the number of replicas of a pod based on specific metrics collected with Prometheus" and adapt that. Partially to make sure I understand what we are adding to our codebase, and mostly because I still don't trust those companies with my codebase and prompts. Which... probably means moving away from VSCode within the next year (yay Copilot) but... yeah.
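For illustration, a prompt like that comes back as something you still adapt by hand. A rough sketch of the general shape of the answer (the deployment name and metric are made up, and it assumes something like prometheus-adapter is already serving custom metrics to the cluster):

    # Sketch only: scale a (hypothetical) "my-app" deployment on a
    # Prometheus-derived custom metric exposed via the custom metrics API.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Pods
          pods:
            metric:
              name: http_requests_per_second
            target:
              type: AverageValue
              averageValue: "100"
    EOF

The point being: nothing company-specific goes into the prompt, and adapting it to our actual metrics and naming happens locally.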
