yogthos

joined 5 years ago

Shadowrun is becoming a reality


The reality is that the US government has always been a private club for the billionaire class; they just used to have the decency to hide it behind closed doors. Musk's real crime is skipping the velvet rope and letting us all see the sausage being made. Bravo, Elon, for finally making oligarchy transparent.

https://www.cambridge.org/core/journals/perspectives-on-politics/article/testing-theories-of-american-politics-elites-interest-groups-and-average-citizens/62327F513959D0A304D4893B382B992B

[–] yogthos@lemmygrad.ml 31 points 1 month ago

I guess Ukraine did end up with a partisan movement; it just happens to be one against the western-backed regime.

[–] yogthos@lemmygrad.ml 2 points 1 month ago

Right, the reality is going to be nuanced. There will be niches where this tool will be helpful, and others where it doesn't really make sense. We're in a hype phase right now, and people are still figuring out good uses for it. It's also worth noting that people are already actively working on solutions to the hallucination problem and on actual reasoning. The most interesting approach I've seen so far is neurosymbolics, which combines a deep neural net with a symbolic logic engine. The neural net does what it's good at, which is parsing and classifying raw input data, and the symbolic logic system operates on the classified data. This way you can have the system actually reason through a problem, explain the steps, correct it, etc. This is a fun read about it: https://arxiv.org/abs/2305.00813
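
Here's a rough sketch of how that split looks in practice. The classifier is just a stub standing in for a real neural net, and the labels and rules are made up for illustration:

```python
# Minimal neurosymbolic sketch: a (stubbed) neural classifier turns raw input
# into symbolic facts, and a rule engine reasons over them, keeping a trace
# so every conclusion can be explained and corrected.

def neural_classifier(image_path):
    """Stand-in for a real neural net: returns symbolic labels with confidences."""
    return {("animal", 0.97), ("has_feathers", 0.91), ("can_fly", 0.88)}

# Hand-written rules: (premises, conclusion). Purely illustrative.
RULES = [
    ({"animal", "has_feathers"}, "bird"),
    ({"bird", "can_fly"}, "flying_bird"),
]

def infer(facts, rules, threshold=0.8):
    """Forward-chain over high-confidence facts, recording each step."""
    known = {label for label, conf in facts if conf >= threshold}
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                trace.append(f"{' & '.join(sorted(premises))} -> {conclusion}")
                changed = True
    return known, trace

conclusions, steps = infer(neural_classifier("photo.jpg"), RULES)
print(steps)  # human-readable derivation, e.g. "animal & has_feathers -> bird"
```

The point is that the reasoning lives in the symbolic half, so you can inspect the trace and fix a bad rule instead of retraining a black box.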

I do think AI might present a problem for the capitalist system as a whole, because if the vast majority of work really can be automated going forward, then the whole model of working for a living will fall apart. It will be very interesting to see how the capitalist world grapples with this, assuming it lasts that long to begin with.

[–] yogthos@lemmygrad.ml 9 points 1 month ago

Oh yeah, the image failed to upload.

[–] yogthos@lemmygrad.ml 0 points 1 month ago

I'd argue this is quite different because it's a tool actual developers can use, and it very much does save you time. There's a lot of hype around this tech, but I do think people will settle on good use cases for it as it matures.

[–] yogthos@lemmygrad.ml 2 points 1 month ago

Given enough constraints, sure; it's basically just logic programming at its core.
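
As a toy illustration of that framing (the candidate functions and constraints here are all invented), code generation under enough constraints is just a search for a program that satisfies them:

```python
# Toy sketch: with enough input/output constraints, "write the code" reduces
# to searching for a candidate that satisfies all of them.

candidates = {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

constraints = [(2, 4), (3, 6), (10, 20)]  # (input, expected output) pairs

solutions = [name for name, fn in candidates.items()
             if all(fn(i) == o for i, o in constraints)]
print(solutions)  # ['double']
```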

[–] yogthos@lemmygrad.ml 4 points 1 month ago

I think we're already largely there. Nobody really knows how the full computing stack works anymore; the whole thing is just too big to fit in your head. So, there is a lot of software out there that's already effectively a black box. There's a whole joke about how large legacy systems are basically generation ships, where new devs have no idea how or why the system was built that way, and they just plug holes as they go.

However, even if people forget how to write code, it's not like it's a skill that can't be learned again if it becomes needed. And if we do get to the point where LLMs are good enough that people forget how to write code, then LLMs simply become the way people write code. I don't see how that's different from people who only know how to use a high-level language today. A JS dev won't know how to work with pointers, do manual memory management, and so on. You can even take it up a level and look at it from the perspective of a non-technical person asking a developer to write a program for them. They're already in this exact scenario, and that's the vast majority of the population.

And given the specification-writing approach I described, I don't actually see that much of a problem with the code being a black box. You would basically write contracts and the LLM would fill them in, and that way you have some guarantees about the behavior of the system.
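
As a hand-rolled sketch of what such a contract could look like (the sort function below just stands in for LLM-generated code), you'd only accept an implementation once it passes the properties you declared:

```python
# Sketch of the contract idea: the human writes the properties, the LLM fills
# in the implementation, and the code is only accepted if every property holds.

from collections import Counter
import random

def llm_generated_sort(xs):
    # pretend this body came from an LLM
    return sorted(xs)

def satisfies_contract(impl, trials=100):
    """Two properties: output is ordered, and is a permutation of the input."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = impl(xs)
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        permutation = Counter(out) == Counter(xs)
        if not (ordered and permutation):
            return False
    return True

assert satisfies_contract(llm_generated_sort)
```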

It's possible people will start developing mysticism about software, but at this point most people already treat technology like magic. I expect there will always be people who have an inclination towards a scientific view of the world and who enjoy understanding how things work. I don't think LLMs are going to change that.

Personally, I kind of see a synthesis between AI tools and humans going forward. We'll be using this tech to augment our abilities, and we'll just focus on solving bigger problems together. I don't expect there's going to be some sort of intellectual collapse; rather, the opposite could happen, where people start tackling problems on a scale that seems unimaginable today.

[–] yogthos@lemmygrad.ml 3 points 1 month ago

Oh yeah, I noticed that too. Once you give it a few examples, it's good at iterating on them. And this is precisely the kind of drudgery I want to automate. There's a lot of code you end up having to write that's just glue holding things together, and it's basically a repetitive task that LLMs can automate.

[–] yogthos@lemmygrad.ml 7 points 1 month ago (4 children)

It seems like AI is a very polarizing topic, and people tend to either think it'll do everything or reject it as pure hype. Typically, the reality of the usefulness of new tech lies somewhere in between. I don't expect that programmers will disappear as a profession in the foreseeable future. My view is that LLMs are becoming a genuinely useful tool, and they will be increasingly able to take care of writing boilerplate, freeing up developers to do more interesting things.

For example, just the other day I had to create a SQL schema for an API endpoint, and I was able to throw sample JSON into DeepSeek R1 and get a reasonable schema out of it that needed practically no modification. It would probably have taken me a couple of hours to design and write myself. I also find you can generally figure out how to do something faster with these tools than by searching sites like Stack Overflow or random blogs. Even when it doesn't give a correct solution, it can point you in the right direction. Another use I can see is having it search through codebases to find where specific functionality lives, which would be very helpful for finding your way around large projects. So, my experience is that there are already a lot of legitimate time-saving uses for this tech. And as you note, it's hard to say where we start getting into diminishing-returns territory.
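
To give a sense of the kind of transformation involved (the payload and table name here are made up, not the actual endpoint), it's essentially a mechanical JSON-to-DDL mapping, which is exactly the sort of thing a model can grind through:

```python
# Sketch of the JSON -> SQL schema mapping an LLM can automate. The sample
# payload and type table are invented for illustration.

sample = {"id": 42, "email": "a@b.c", "active": True, "score": 3.14}

SQL_TYPES = {bool: "BOOLEAN", int: "INTEGER", float: "DOUBLE PRECISION", str: "TEXT"}

def json_to_ddl(table, payload):
    cols = ",\n  ".join(f"{k} {SQL_TYPES[type(v)]}" for k, v in payload.items())
    return f"CREATE TABLE {table} (\n  {cols}\n);"

print(json_to_ddl("users", sample))
```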

Efficiency of these things is still a valid concern, but I don't think we've really tried optimizing much yet. The fact that DeepSeek was able to get such a huge improvement makes me think there's a lot of other low-hanging fruit to be plucked in the near future. I also think it's highly likely we'll be combining LLMs with other types of AI, such as symbolic logic; this is already being tried with neurosymbolic systems. Different types of machine learning algorithms could tackle different types of problems more efficiently. There are also interesting things happening on the hardware side, with stuff like analog chips showing up. Making the chip analog is far more efficient for this workload, since we're currently emulating analog systems on top of digital ones.

I very much agree regarding the point of capitalism being a huge negative factor here. AI being used abusively is just another reason to fight against this system.

[–] yogthos@lemmygrad.ml 1 points 1 month ago* (last edited 1 month ago) (9 children)

Sure it's programming, but it's a different style of programming. Modern high-level languages are still primarily focused on the actual implementation details of the code; they're not really declarative in nature.

Meanwhile, as I wrote in my original comment, the LLM could use a gradient-descent-type approach to converge on a solution. For example, if you define a signature for what the API looks like as a constraint, it can keep iterating on the code to get there. In fact, you don't even need LLMs to do this: Barliman is a constraint solver that does program synthesis this way, and it's smart enough to reuse functions it has already implemented to build more complex ones. These kinds of approaches could be combined with LLMs in the future, where the LLM generates an initial solution and a solver refines it.
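
Here's a sketch of what that refinement loop could look like. The "LLM" is faked with canned candidates just to show the control flow, and a real setup would sandbox the generated code rather than exec it directly:

```python
# Converging on code via constraints: propose a candidate, run it against the
# signature's test cases, feed the failures back, repeat.

TESTS = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]  # ((args), expected) for add(a, b)

def fake_llm(feedback):
    """Pretend model: wrong on the first try, correct once it sees failures."""
    if feedback:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"

def failures(source):
    env = {}
    exec(source, env)  # for illustration only; never exec untrusted output
    fn = env["add"]
    return [(args, want, fn(*args)) for args, want in TESTS if fn(*args) != want]

feedback = []
for attempt in range(1, 6):
    candidate = fake_llm(feedback)
    feedback = failures(candidate)
    if not feedback:
        print(f"converged on attempt {attempt}")
        break
```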

Finally, the fact that LLMs fail at some tasks today does not mean those tasks are fundamentally intractable. The pattern has been that progress is happening at a very quick pace, and we don't know where the plateau will be. I've been playing around with DeepSeek R1 for code generation, and a lot of the time it outputs clean, correct code that requires little or no modification. It's light years ahead of anything I tried even a year ago, and I expect it's only going to get better.

[–] yogthos@lemmygrad.ml 4 points 1 month ago (11 children)

I expect that programmers are going to increasingly focus on defining specifications while LLMs handle the grunt work. Imagine declaring what the program must do, e.g., "This API endpoint must return user data in <500ms, using ≤50MB memory, with O(n log n) complexity", and having an LLM generate solutions that adhere to those rules. It could work similarly to a genetic algorithm: the LLM tries some initial solutions, the ones closest to the spec are selected, and it iterates until the solution works well enough.
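
A toy sketch of that selection step (the spec numbers come from the example above; the candidates and their measurements are faked, where a real system would benchmark the generated code):

```python
# Spec as a fitness function: candidates are scored on how far they overshoot
# the declared constraints, and the closest ones survive to the next round.

SPEC = {"max_latency_ms": 500, "max_memory_mb": 50}

candidates = [
    {"name": "v1", "latency_ms": 800, "memory_mb": 40},
    {"name": "v2", "latency_ms": 450, "memory_mb": 70},
    {"name": "v3", "latency_ms": 300, "memory_mb": 45},
]

def fitness(c):
    """Penalty for each constraint overshot; 0 means the spec is fully met."""
    penalty = max(0, c["latency_ms"] - SPEC["max_latency_ms"])
    penalty += max(0, c["memory_mb"] - SPEC["max_memory_mb"]) * 10  # weight memory
    return penalty

best = min(candidates, key=fitness)
print(best["name"], fitness(best))  # v3 0 -- meets the spec
```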

I'd also argue that this is a natural evolution. We don't hand-assemble machine code today, most people aren't writing things like sorting algorithms from scratch, and so on. I don't think it's a stretch to imagine that future devs won't fuss with low-level logic either. LLMs can be seen as "constraint solvers" for code, akin to a chess engine. It's also worth noting that modern tools already do this in pockets: AWS Lambda lets you declare "run this function with 1GB RAM, time out after 15s". Imagine scaling that philosophy to entire systems.

[–] yogthos@lemmygrad.ml 3 points 1 month ago

That's fair. I'd say Stalingrad was the point where it became clear, even to the Germans themselves, that they had lost the war.

[–] yogthos@lemmygrad.ml 7 points 1 month ago

There was a sustained attack on it a few months ago actually.
