[–] amemorablename@lemmygrad.ml 9 points 1 month ago (5 children)

I admit I partly skimmed the article, so maybe they covered this point, but from everything I've seen of LLMs, I'm skeptical their impact will change much, unless it's to make things shittier by forcing them in where they aren't ready. AI as a whole could change a lot in ways that are hard to predict, because "AI" is roughly synonymous with automation and could mean many developments across many different kinds of technology. But the current crop of AI hype and what it's actually capable of? Where I most see it taking over is the capitalistic "content churn" industry. For anything that requires thinking beyond "cash in and move on to the next one", I don't see how it gets integrated very effectively.

Part of what makes me doubt it is efficiency. Despite some notable advances, such as DeepSeek's reduction in training cost, generative AI remains a resource-heavy technology; both training and inference are costly (environmentally, in GPUs, etc., not just in price tag). Another point is competence. The more complicated a task, the easier it is for the AI to make mistakes, some of which only an expert in the subject matter would catch, which makes even evaluating the AI's output to ensure it isn't doing more harm than good a high-competence task in itself. Another is learning. You could point at the competence example and say a human in training needs similar evaluation, but a human in training will usually learn from their mistakes, with correction, and make them less often in the future. The AI won't unless you retrain it, and even then it remains highly limited by its statistical, token-based nature. A final element is trust. The western market has a much larger vested interest than, say, China in selling the idea that AI as it is now will work, will integrate, and will therefore turn a profit; otherwise its gold-rush investments go to waste and the industry tanks. The fragility of that was already visible in how easily DeepSeek upset the equilibrium, or lack thereof.

I think programmers, and programming as a field, are in more danger (or danger of change, depending on how you want to look at it) from capitalists than from generative AI. The field has already zipped from a phase I can still remember, when people discussed FizzBuzz as a test of basic programming competence, to an internet stuffed to the brim with coding bootcamps and "master algorithms and data structures" doctrine. And that change happened before generative AI. I don't know the hard numbers, so I could be wrong about this, but by all appearances the "learn to code" push saturated the field, and coupled with companies cutting jobs in general, it has become significantly harder to get into and harder to stay in. And again, all of that before generative AI.

I don't mean this toward you, Yogthos, of course, but I think a certain number of programmers are in denial about the field being touched by capitalism at all: an unspoken belief that because programming is so important, the trend will simply continue and the field will remain a lucrative, cozy ivory tower to hang out in. But that won't stop capitalists from trying to reduce payroll as much as possible, whether it truly makes rational sense or not.

[–] yogthos@lemmygrad.ml 7 points 1 month ago (4 children)

AI seems to be a very polarizing topic: people tend to either think it'll do everything or dismiss it as pure hype. Typically, the reality of new tech's usefulness lies somewhere in between. I don't expect programmers to disappear as a profession in the foreseeable future. My view is that LLMs are becoming a genuinely useful tool, and they'll be increasingly able to take care of writing boilerplate, freeing developers up to do more interesting things.

For example, just the other day I had to create a SQL schema for an API endpoint, and I was able to throw sample JSON into DeepSeek R1 and get back a reasonable schema that needed practically no modification. It probably would have taken me a couple of hours to design and write by hand. I also find you can generally figure out how to do something faster with these tools than by searching sites like Stack Overflow or random blogs; even when the answer isn't fully correct, it can point you in the right direction. Another use I can see is having it search through codebases to find where specific functionality lives, which would be very helpful for finding your way around large projects. So my experience is that there are already a lot of legitimate time-saving uses for this tech. And as you note, it's hard to say where we start getting into diminishing-returns territory.
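To give a concrete flavor of that round trip, here's a minimal sketch; the JSON fields and table name below are invented for illustration, not the actual endpoint's data:

```sql
-- Hypothetical sample JSON pasted into the model:
--   { "id": 42, "email": "user@example.com",
--     "created_at": "2025-02-09T12:00:00Z", "tags": ["api", "schema"] }
--
-- The sort of Postgres-style schema it might hand back:
CREATE TABLE api_events (
    id         BIGINT PRIMARY KEY,
    email      TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    tags       TEXT[]  -- Postgres array type; a join table works elsewhere
);
```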

Efficiency of these things is still a valid concern, but I don't think we've really tried optimizing much yet. The fact that DeepSeek got such a huge improvement makes me think there's a lot of other low-hanging fruit to be plucked in the near future. I also think it's highly likely we'll combine LLMs with other types of AI, such as symbolic logic; this is already being tried with neurosymbolic systems. Different types of machine learning algorithms could tackle different types of problems more efficiently. There are also interesting things happening on the hardware side, with analog chips showing up: making the chip analog is far more efficient for this workload, since we're currently emulating analog systems on top of digital ones.

I very much agree that capitalism is a huge negative factor here. AI being used abusively is just one more reason to fight against this system.

[–] RedClouds@lemmygrad.ml 4 points 1 month ago (1 children)

Efficiency problems aside (hopefully R1 keeps us focused on increasing efficiency while still being useful), I find it super useful when you set a pattern and let it fill it out for you.

On a side project, I built out 10 or 15 structs, implemented one of them in a particular pattern, and then asked it to finish off the rest. I did maybe 10% of the work, but because I set the pattern, it finished everything else flawlessly.
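Roughly the shape of that workflow, as a minimal sketch (assuming Rust, since the language wasn't stated; the trait and structs are stand-ins, not the real ones):

```rust
// Shared behavior that every struct needs to implement.
trait Describe {
    fn describe(&self) -> String;
}

struct User {
    name: String,
}

struct Project {
    title: String,
}

// The one impl written by hand, establishing the pattern:
impl Describe for User {
    fn describe(&self) -> String {
        format!("User: {}", self.name)
    }
}

// The kind of repetitive impl the LLM then fills in for the
// remaining 10-15 structs:
impl Describe for Project {
    fn describe(&self) -> String {
        format!("Project: {}", self.title)
    }
}

fn main() {
    let user = User { name: "ada".into() };
    let project = Project { title: "lemmygrad".into() };
    println!("{}", user.describe());
    println!("{}", project.describe());
}
```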

[–] yogthos@lemmygrad.ml 3 points 1 month ago

Oh yeah, I noticed that too. Once you give it a few examples, it's good at iterating on them. And this is precisely the kind of drudgery I want to automate: a lot of the code you end up writing is just glue that holds things together, a repetitive task that LLMs can handle.
