mountainriver

joined 2 years ago
[–] mountainriver@awful.systems 2 points 59 minutes ago (1 children)

From experience in an IT department, I would say mainly a combination of management pressure and the need to make security problems manageable by choosing AI tools to push on users before too many of them start using third-party tools.

Yes, they will create security problems anyway, but maybe, just maybe, users won't copy-paste sensitive business documents into third-party web pages?

[–] mountainriver@awful.systems 3 points 2 hours ago

Can't they just re-release Kris i befolkningsfrågan ("Crisis in the Population Question")? Tried and tested solutions like full employment policies, cheap housing, and more support and money for parents.

Or are kids not all that important if it means having to improve conditions for ordinary people?

Clever. Writing up my pitch to OpenAI...

[–] mountainriver@awful.systems 2 points 1 day ago (5 children)

I started thinking about what kind of story you could tell with these impressive but incoherent bits. It wouldn't be a typical movie, but there's got to be a ton of money willing to back any movie that can claim to be "made with AI".

One would have to start from the technical limitations. The characters are inconsistent, so in order to tell any story one would need some marker that the technology can deliver at least a high percentage of the time to identify the protagonist and antagonist. Perhaps hats in different colours? Or film the protagonist and antagonist against a green screen and composite them into the clips? (That is cheating, but of course they would cheat.)

So what kind of story can you tell? A movie that perhaps has a lot of dream sequences? Or a drug trip? It would be very niche, but again the point would just be to be able to claim "made with AI".

[–] mountainriver@awful.systems 9 points 6 days ago (1 children)

I think in most EU countries - after lobbying from US copyright corporations - it is explicitly banned to make copies from an illegal original. This was in order to criminalise downloads from torrents whether you seed or not. And the potential punishment typically involves jail sentences, in order to give the police access to the surveillance necessary to prove the crime. Plus copyright violation being the only crime that in all EU countries also yields punitive damages.

Now I know this because I was against every single one of these disproportionate laws, but some copyright organisations over here should know this too. Just saying it would be fun if Meta got to pay out punitive damages. And even funnier if Zuckerberg got some jail time.

[–] mountainriver@awful.systems 5 points 1 week ago (3 children)

My suspicion, my awful awful newfound theory, is that there are people with a sincere and even kind of innocent belief that we are all just picking winners, in everything: that ideology, advocacy, analysis, criticism, affinity, even taste and style and association are essentially predictions. That what a person tries to do, the essential task of a person, is to identify who and what is going to come out on top, and align with it. The rest—what you say, what you do—is just enacting your pick and working in service to it.

Maybe. But I would counter that it's an attitude layered on top of their cynicism. Deep down they know their lies aren't true; they just consider lying in service of power a natural thing.

As an example, witness one Matthew Miller (the Biden press conference guy) who, after smirking his way through lies about how Israel is totally going to investigate itself after the latest atrocity, has now appeared in an interview saying he was just representing the administration, that it wasn't his own view. He knew he was lying in service of at least atrocities (he isn't ready to admit to it being a genocide); he just considers that natural.

It appears he has stopped smirking. I guess that was his tell that he was lying.

[–] mountainriver@awful.systems 42 points 2 weeks ago (2 children)

"We've set fire to a bunch of money - now you need to give us more" - tech companies "investing" in "AI" to their customers.

[–] mountainriver@awful.systems 6 points 2 weeks ago

I for one have not stopped finding it funny that 100 billion dollars in profit is their definition of AGI.

[–] mountainriver@awful.systems 8 points 2 weeks ago (2 children)

I find it a bit interesting that it isn't more wrong. Has it ingested large tables and derived a statistical relationship between certain large factors and certain answers? Or is there something else going on?

[–] mountainriver@awful.systems 3 points 2 weeks ago (1 children)

Which "Word" do you mean? Is it Microsoft 365 Copilot (formerly Office) desktop app Word or Microsoft 365 Copilot (formerly Office) online app Word? Or maybe another program, that is slightly different and also named Word? Maybe Microsoft has put a descriptor on it, perhaps the word "new", which won't at all be replaced by another "new" version in a couple of years.

All making it rather hard to search for solutions to problems with these oh-so-similar, yet when it comes down to their problems rather different, programs!

Ah well, we change what we can and rant about what we can't.

[–] mountainriver@awful.systems 10 points 2 weeks ago

Asimov being Asimov, the human consequences of the decline and fall of the galactic empire happen mostly off screen.

How exactly Trantor went, in a couple of hundred years, from a bustling planetary city to a planet where the last survivors scratch out a living farming the former imperial grounds is better left unexplored. If you are living in that world, you are much more likely to be among the masses where the stuff happens that will eventually be noted by Foundation scholars as "population decline" than to be a Foundation scholar.

[–] mountainriver@awful.systems 3 points 2 weeks ago

"Elsa" does not feature in "The Snow Queen". The kids in that story are Kai, who gets abducted by the Snow Queen, and Gerda, who rescues him after a long journey which she manages by being good and very, very Christian. It's also pretty racist, though tame by European 19th-century standards. I don't know who made up Elsa, but I guess they had long since signed over their rights to Disney.

As the purpose of the system is what it does, the purpose of copyright is to centralise ownership and control. But then again that is also the purpose of the AI bubble. So they will fight, and the public is likely to lose.

 

Capgemini has polled executives, customer service workers and consumers (but mostly executives) and found that customer service sucks, and working in customer service sucks even more. Customers apparently want prompt solutions to their problems. Customer service personnel feel that they are put in a position to upsell customers. For some reason this makes both sides unhappy.

Solution? Chatbots!

There is some nice rhetorical footwork going on in the report, so it was presumably written by a human. By conflating chatbots and live chat (you know, with someone actually alive) and never once asking whether the chatbots can actually solve the problems with customer service, they come to the conclusion that chatbots must be the answer. After all, lots of the surveyed executives think they will be the answer. And when have executives ever been wrong?

 

This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room and am a bit bored, so I'm writing from memory; no exact quotes will be had.

A recent thread mentioning "No Logo" in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI got me thinking. I think that in our late stage capitalism it's the consumer's duty to be relentlessly negative, until proven otherwise.

"No Logo" contained a history of capitalism and how we got from a goods-based industrial capitalism to a brand-based one. I would argue that "No Logo" was written at the end of a longer period that contained both of these: the period of profit-driven capital allocation. Profit, as everyone remembers from basic Marxism, is the surplus value the capitalist acquires by paying less for labour and resources than the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital and power.

Even in Marx's time, it was not only profits that built capital; new capital could be had from banks, jump-starting a business in exchange for future profits. Thus capital was still allocated by profit in the 1990s when "No Logo" was written, even if the profits had shifted from the goods to the brand. In this model, one could argue about ethical consumption, but that is no longer the world we live in, so I am just gonna leave it there.

In the 1990s there was also a tech bubble where capital allocation followed a different logic. The bubble logic is that capital formation is founded on hype, where capital is allocated to increase hype in hopes of selling to a bigger fool before it all collapses. The bigger the bubble grows, the more institutions are dragged in (by the greed and FOMO of their managers), like banks and pension funds. The bigger the bubble, the more it distorts the surrounding businesses and legislation. Notice how, now that the crypto bubble has burst, the obvious crimes of the perpetrators can be prosecuted.

In short, the bigger the bubble, the bigger the damage.

If in profit-driven capital allocation the consumer can deny corporations profit, then in hype-driven capital allocation the consumer can deny corporations hype. To point and laugh is damage minimisation.
