yogthos

joined 5 years ago
[–] yogthos@lemmygrad.ml 3 points 1 month ago

Oh yeah, I noticed that too. Once you give it a few examples, it's good at iterating on that. And this is precisely the kind of drudgery I want to automate. There is a lot of code you end up having to write that's just glue that holds things together, and it's basically just a repetitive task that LLMs can automate.

[–] yogthos@lemmygrad.ml 7 points 1 month ago (4 children)

It seems like AI is a very polarizing topic: people tend to either think it'll do everything or dismiss it as pure hype. The reality of new tech typically lands somewhere in between. I don't expect programmers to disappear as a profession in the foreseeable future. My view is that LLMs are becoming a genuinely useful tool, and they'll increasingly take care of writing boilerplate, freeing up developers to do more interesting things.

For example, just the other day I had to create a SQL schema for an API endpoint, and I was able to throw sample JSON into DeepSeek R1 and get a reasonable schema out of it that needed practically no modification. It probably would've taken me a couple of hours to design and write by hand. I also find you can generally figure out how to do something quicker with these tools than by searching sites like Stack Overflow or random blogs. Even when the answer isn't fully correct, it can point you in the right direction. Another use I can see is having it search through codebases to find where specific functionality lives, which would be very helpful for getting oriented in large projects. So, my experience is that there are already a lot of legitimate time-saving uses for this tech. And as you note, it's hard to say where we start getting into diminishing-returns territory.
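To give a sense of why this task is such good LLM fodder, here's a toy sketch of the JSON-to-schema translation involved. The table name, sample JSON, and the SQLite-style type mapping are all hypothetical, and a real schema needs keys, constraints, and nested-object handling that this deliberately skips:

```python
import json

# Crude mapping from JSON value types to SQL column types (assumption: SQLite-ish).
TYPE_MAP = {str: "TEXT", int: "INTEGER", float: "REAL", bool: "BOOLEAN"}

def schema_from_json(table, sample):
    """Derive a flat CREATE TABLE statement from one sample JSON object."""
    cols = [f"  {key} {TYPE_MAP.get(type(val), 'TEXT')}" for key, val in sample.items()]
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

sample = json.loads('{"id": 1, "name": "Alice", "score": 9.5}')
print(schema_from_json("users", sample))
```

The mechanical part is exactly this kind of type-by-type translation; the judgment calls (keys, indexes, normalization) are what you still review by hand.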

Efficiency of these things is still a valid concern, but I don't think we've really tried optimizing much yet. The fact that DeepSeek was able to get such a huge improvement makes me think there's a lot of other low-hanging fruit to be plucked in the near future. I also think it's highly likely we'll be combining LLMs with other types of AI, such as symbolic logic; this is already being tried with neurosymbolic systems. Different types of machine learning algorithms could tackle different types of problems more efficiently. There are also interesting things happening on the hardware side, with analog chips starting to show up. Making the chip analog is far more efficient for this workload, since we're currently emulating analog systems on top of digital ones.

I very much agree regarding the point of capitalism being a huge negative factor here. AI being used abusively is just another reason to fight against this system.

[–] yogthos@lemmygrad.ml 1 points 1 month ago* (last edited 1 month ago) (9 children)

Sure, it's programming, but it's a different style of programming. Modern high-level languages are still primarily focused on the actual implementation details of the code; they're not really declarative in nature.

Meanwhile, as I wrote in my original comment, the LLM could use a gradient-descent-style approach to converge on a solution. For example, if you define a signature for what the API looks like as a constraint, it can keep iterating on the code until it gets there. In fact, you don't even need LLMs to do this. Barliman, for example, is a constraint solver that does program synthesis this way, and it's smart enough to reuse functions it has already implemented to build more complex ones. These kinds of approaches could plausibly be combined with LLMs in the future, where the LLM generates an initial solution and a solver refines it.
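The check-against-a-constraint loop is simple to sketch. This is a toy of my own, not how Barliman works internally: candidate functions stand in for LLM or solver proposals, and the spec is a list of input/output pairs the winner must satisfy:

```python
# Constraint-driven refinement sketch: test each proposed implementation
# against the spec and keep the first one that satisfies every constraint.

def satisfies(candidate, spec):
    return all(candidate(arg) == expected for arg, expected in spec)

def refine(proposals, spec):
    for candidate in proposals:  # stand-in for successive LLM/solver proposals
        if satisfies(candidate, spec):
            return candidate
    return None  # nothing met the spec; a real system would keep iterating

spec = [(2, 4), (3, 9)]  # constraint: f(2) == 4 and f(3) == 9
proposals = [lambda x: x + x, lambda x: x * x]
best = refine(proposals, spec)
print(best(5))  # the surviving candidate squares its input: prints 25
```

Note that `x + x` passes the first constraint (2 + 2 == 4) and only dies on the second, which is why a spec needs enough examples to pin the behavior down.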

Finally, the fact that LLMs fail at some tasks today does not mean that these kinds of tasks are fundamentally intractable. Progress is happening at a very quick pace right now, and we don't know where the plateau will be. I've been playing around with DeepSeek R1 for code generation, and a lot of the time what it outputs is clean and correct code that requires little or no modification. It's light years ahead of anything I tried even a year ago. I expect it's only going to get better going forward.

[–] yogthos@lemmygrad.ml 4 points 1 month ago (11 children)

I expect that programmers are going to increasingly focus on defining specifications while LLMs handle the grunt work. Imagine declaring what the program must do, e.g., "This API endpoint must return user data in <500ms, using ≤50MB memory, with O(n log n) complexity", and having an LLM generate solutions that adhere to those rules. It could be an approach similar to the way genetic algorithms work: the LLM tries some initial solutions, the ones closest to the spec are selected, and it iterates until the solution works well enough.
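A toy version of that generate-score-select loop, with a single integer standing in for a whole program and "output equals 42" standing in for the spec (the fitness function, mutation deltas, and seed population are all made up for illustration):

```python
# Genetic-style search sketch: score candidates against the spec,
# keep the fittest, and mutate them to seed the next generation.

def fitness(candidate, target=42):
    return -abs(candidate - target)  # spec: output should equal 42

def evolve(population, generations=100):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 0:
            break  # a candidate fully satisfies the spec
        survivors = population[:2]
        # "mutate" the survivors deterministically for the next round
        population = survivors + [s + d for s in survivors for d in (-3, 1, 5)]
    return population[0]

print(evolve([7, 90, 18, 64]))  # converges to 42
```

With real code generation the "mutation" step would be an LLM rewriting a candidate, and fitness would come from tests, benchmarks, and resource measurements rather than a distance function.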

I'd also argue that this is a natural evolution. We don't hand-assemble machine code today, most people aren't writing things like sorting algorithms from scratch, and so on. I don't think it's a stretch to imagine that future devs won't fuss with low-level logic either. LLMs can be seen as "constraint solvers" akin to a chess engine, but for code. It's also worth noting that modern tools already do this in pockets. AWS Lambda lets you declare "run this function in 1GB RAM, timeout after 15s"; imagine scaling that philosophy to entire systems.
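Concretely, that Lambda declaration is just a couple of flags on the real AWS CLI; the function name, handler, role ARN, and zip path below are placeholders, not real resources:

```shell
# Declare the resource envelope up front: 1 GB of memory, 15 s timeout.
aws lambda create-function \
  --function-name my-endpoint \
  --runtime python3.12 \
  --handler app.handler \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --zip-file fileb://function.zip \
  --memory-size 1024 \
  --timeout 15
```

You state the constraints and the platform enforces them; the spec-driven future is that same move applied to behavior, not just resources.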

[–] yogthos@lemmygrad.ml 3 points 1 month ago

That's fair. I'd say Stalingrad was the point where it became clear, even to the Germans themselves, that they had lost the war.

[–] yogthos@lemmygrad.ml 7 points 1 month ago

There was a sustained attack on it a few months ago actually.

[–] yogthos@lemmygrad.ml 6 points 1 month ago

And this kind of tariff hits people directly, so it becomes very visible how it's harming them. It's kind of funny that he didn't realize this to begin with.

[–] yogthos@lemmygrad.ml 4 points 1 month ago

yeah that's a good way to put it

[–] yogthos@lemmygrad.ml 8 points 1 month ago (2 children)

I find the specific dynamic is that you can say anything you want as long as it doesn't translate into material action. That is the line that cannot be crossed.

[–] yogthos@lemmygrad.ml 11 points 1 month ago (4 children)

not only that, but you can have all the freeze peach that you can handle

[–] yogthos@lemmygrad.ml 6 points 1 month ago

Exactly, and this is likely going to be the biggest barrier to being absorbed into the US. It would undermine the very semblance of identity that Canada was cobbled together around.

[–] yogthos@lemmygrad.ml 11 points 1 month ago

I think that's probably exactly what it is. He just needs to feel manly.
