HedyL

joined 2 years ago
[–] HedyL@awful.systems 6 points 5 hours ago

This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody wants to do. There is probably still an oversupply of suitable people who would pass all the screening tests and genuinely want to become pilots. Some of them would even work for a fairly average salary (as many did in the past outside the big airlines). The real problem for the airlines is probably that they can no longer count on enough people being willing (and able!) to shoulder the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all their training. However, AI probably looks a lot more appealing to them...

[–] HedyL@awful.systems 5 points 2 days ago (1 children)

To me, those forced Google AI answers are even more disconcerting than all the rest. Sure, publishers have always resented content creators, because paying them ate into the profit margins from advertising. However, Google always got most of its content (the indexed webpages) for free anyway, so what exactly was their problem?

Also, how much more energy do these forced AI answers consume, compared with regular search queries? Has anyone done the math?

Furthermore, if many people really loved that feature so much, why not make it opt-in?

At the same time, as many people have already pointed out, prioritizing AI-generated answers will probably further disincentivize the creators of good original content, which means there will be even less usable material to feed to AI in the future.

Is it really all about pleasing Wall Street? Or about getting people to spend more time on Google itself rather than leave for other websites? Are they really confident that they will all stay and not disappear completely at some point?

[–] HedyL@awful.systems 7 points 2 days ago

The only reason the tool supposedly has value is because the websites are made to be bad on purpose so that they make more money.

Yes, and because AI apparently does ingest content from some of the better websites out there on occasion. However, without sources, you'll be unable to check whether that was the case for your specific query. At the same time, it is getting more and more difficult for us to access these better websites ourselves (see above), and sadly, the incentives for creators to post this type of high-quality content appear to be decreasing as well.

[–] HedyL@awful.systems 4 points 1 week ago

In terms of creativity, this seems almost on par with the ability to convert photos to the style of, say, impressionist paintings or pop art, which has been available in standard photo editing software for many years. These were also impressive gimmicks, but of very limited practical value.

[–] HedyL@awful.systems 10 points 1 week ago

FWIW, years ago, some people who worked for a political think tank approached me for expert input. They subsequently published a report that cited many of the sources I had mentioned, but the report's recommendations were exactly the opposite of what the cited sources said (and of what I had told them myself). As far as I know, there was no GenAI at the time. I think these people were simply betting that no one would check the sources.

This is not to defend the use of AI - on the contrary, I think this shows quite well what sort of people would use such tools.

[–] HedyL@awful.systems 17 points 3 weeks ago

It is admittedly only tangential here, but it recently occurred to me that at school, there are usually no demerit points for wrong answers. You can therefore - to some extent - “game” the system by doing as much guesswork as possible. However, my work is related to law and accounting, where wrong answers - of course - can have disastrous consequences. That's why I'm always alarmed when young coworkers confidently use chatbots whenever they are unable to answer a question by themselves. I guess in such moments, they are just treating their job like a school assignment. I can well imagine that this will only get worse in the future, for the reasons described here.

[–] HedyL@awful.systems 27 points 3 months ago (3 children)

In any case, I think we have to acknowledge that companies are capable of turning a whistleblower's life into hell without ever physically laying a hand on them.

[–] HedyL@awful.systems 6 points 5 months ago

I would argue that such things do happen, the "Heaven's Gate" cult probably being one of the most notorious examples. Thankfully, however, this is not a widespread phenomenon.

[–] HedyL@awful.systems 22 points 5 months ago

Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.

[–] HedyL@awful.systems 11 points 7 months ago (1 children)

From the original article:

Crivello told TechCrunch that out of millions of responses, Lindy only Rickrolled customers twice.

Yes, but how many of them received other similarly "useful" answers to their questions?