I don't understand why any journalism site would advertise that it's using AI. It just says they don't care about facts, research, or quality of writing. Journalism is not simply spewing out a handful of paragraphs of text about a random subject. It is research that can take weeks or months, double-checking facts, verifying sources, and putting it all together into a well-written article. AI texts have none of that. Quite the opposite.
Because a significant chunk of what gets passed off as journalism on such sites is just writing copy -- for example, regurgitating press releases, or repackaging the work of another outlet that actually did do the legwork of investigative journalism. I don't think there's anything inherently wrong with using AI tools to speed up the task of summarizing some other text for republishing, but I do question the value of such work in the first place.
It's going to be a long, long time until artificial intelligence can do the work of a true investigative journalist.
What outlets do tend to put out investigative journalism more often than writing copy? ~Strawberry
I don't know if there are many major outlets that are primarily investigative in the era of the 24/7 news cycle and the accompanying need to always have something fresh on the front page, but at least in the English-speaking world the various newspapers of record (think places like the New York Times or The Guardian) still have decent newsrooms and publish original investigative pieces. In audio formats, NPR and the various constellations of associated organizations like the Center for Investigative Reporting do excellent work as well. There are also organizations like Bellingcat that specialize in deep-dive investigations using open-source intelligence, presented in a "just-the-facts" format without editorialization.
The Wikipedia article on news agencies is pretty good: "Although there are many news agencies around the world, three global news agencies, Agence France-Presse (AFP), the Associated Press (AP), and Reuters have offices in most countries of the world, cover all areas of media, and provide the majority of international news printed by the world's newspapers." Scroll down and you'll also find a list of some smaller news agencies, which tend to focus on local news.
Because you have to have specific knowledge about how AI works to know this is a bad idea. If you don't have that knowledge, it just sounds futuristic, because AI is like a Star Trek thing.
This current AI craze is largely as big a deal as it is because so few people, including the people using it, have any idea what it is. A cousin of mine works for a guy who asked an AI about a problem, and it cited an article about how to fix it (I forget what the problem was). The boss asked my cousin to implement the solution proposed in that article. My cousin searched for it, discovered the article doesn't actually exist, and said so. After many rounds of back and forth, with the boss saying "this is the name of the article, this is who wrote it" and my cousin saying "that isn't a real thing, and that author did write about some related topics, but there's no actionable information there," the boss became convinced that this was a John Henry situation where my cousin was trying to make himself look more capable than the AI he felt threatened by. The argument ended with a shrug and an "Okay, if it's so important to you, then we can do something else, even though this totally would have worked."
There really needs to be large-scale education on what language models are actually doing to prevent people from using them for the wrong purposes.
I installed INCH on all my browsers. It's obviously not 100% accurate, but it is nice to get a visual cue that the article you're reading may very well be AI-generated.
I know news media is losing money fast, but if this is the solution they go with, I think it will have the opposite effect. People who still read the news are sure as hell going to stop paying if this becomes the norm. If they think readership is declining now, how is spamming out a mass of AI-generated junk going to help?
As if it had the potential to be anything else.
Yeah, this is happening all over. Here's another one, about io9 publishing an AI article with tons of mistakes and no chance to even edit it. Not sure who thought it would be a good idea. AI can be very hit or miss on the first go and needs editing before use at the very least.
Oh god, I read that article and thought "Well that's wrong, I wonder how they could mess that up." I saw the author was an AI and just laughed.
But they got the click in the end and I have an ad blocker so I have no idea who wins at that point.
So, basically, it's a complete and 1:1 replacement for most regular journalism.
It's a 1:1 replacement for the lowest-effort trash written by an un- or underpaid intern, but when companies start assuming (and they will) that it can take over well-researched reporting, it'll be crud for us all.