404 Media

22 readers
3 users here now

404 Media is a new independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox.

Don't post archive.is links or full text of articles, you will receive a temp ban.

founded 4 weeks ago
MODERATORS
76
 
 

The Real Future of AI Is Ordering Mid Chicken at Bojangles

Yesterday I ordered my lunch from an AI operating a drive-thru. It was fine. Banal. Boring even. A new experience that I think will become routine in the future.

The AI drive-thru operator isn’t cutting edge tech deployed in an upscale market to win over high value consumers. I live at the edge of a South Carolina city with a little more than 140,000 people. A booming metropolis with the best and the finest, it is not.

There are a lot of local fast food fried chicken joints here and one of them is Bojangles. It’s mid. Better than KFC and not as good as Popeyes, Bojangles is fine if you’re hungry but you’ll forget the meal as soon as it’s done and you’ll never yearn for it. Last year the restaurant said it would deploy an AI agent at its drive-thru windows. It’s called, I shit you not, Bo-Linda, and it’s made by the Israeli tech firm Hi-Auto.

According to the Bojangles website, “Bo-Linda™ can take guest orders 96+% of the time with no human intervention,” and “improve overall satisfaction by offloading order taking from team members and providing a consistent guest experience.”

When Bo-Linda finally arrived in South Carolina, I went to see what the fuss was about. It was crushingly dull. A preview of a time in the near future, I think, when the AI bubble retracts and the agents are common. It took my order with an efficiency that, I’ll be honest, is not typical of the typical fast food worker. The worst part was its constant attempts to up-sell me.


“Do you want to upgrade your drink to our new watermelon iced tea?” it asked.

“No thank you.”

“Would you like to add our new peach cobbler for $1.99?”

“No thank you.”

“May I get you anything else?”

“No, that’s it.”

“Would you like to round up for military scholarships?”

“No thank you.”

“You’re welcome. Thank you. Your total is $10.89.”

When 404 Media founder Joseph Cox watched the video of my interactions, he made fun of my “no thank yous.” What can I say? There’s an ingrained and often stifling politeness that’s bred into us in the American South. Even though I knew I was talking to a machine, I couldn’t not be nice to it.

My thought in the immediate aftermath was that the whole thing was painless. My order wasn’t complicated, but it was correct. The machine never stumbled over itself or asked for clarification. It knew what I wanted and the humans at the window gave it to me. A few conversations with friends and a quick scan of social media in the area show that other people have had much the same interactions with Bo-Linda.

The drive-thru AI, much like the chicken it sold me, is fine. Forgettable.

It was later, sitting at home and doing a little research for the story, that concerns popped up. OpenAI CEO Sam Altman has said that saying “please” and “thank you” to ChatGPT has cost the company tens of millions of dollars. How much water and energy had I burned being polite to Bo-Linda the chatbot?

Sometimes it feels like the answers to these questions don’t matter. We’re barreling forward into the AI future, whether we like it or not. Data centers are springing up across America and nuclear power plants are coming back online, so Bojangles can make a little more money and so people in the drive-thru can feel a little less friction before eating their meal.

This is how a new technology takes over, what it feels like right before it becomes ubiquitous. One day you wake up and the cameras are everywhere, able to recognize your face and chart your movements across the city you live in. One day you look up and everyone has their face buried in their phone. It happened by degrees, but so gradually you didn’t notice. There were signs along the way, dangers and warnings.

But mostly, it was fine, as boring and routine as ordering chicken at a drive-thru.


From 404 Media via this RSS feed

77
 
 

3D Printing Patterns Might Make Ghost Guns More Traceable Than We Thought

So-called 3D-printed ghost guns are untraceable firearms that can be assembled at home. But cutting-edge work from a forensic expert in California and researchers at the University of Central Oklahoma may soon show that investigators can trace a 3D-printed object to the specific printer that made it.

Weapons manufactured using 3D printers have been a subject of Biden-era legislation and recent Supreme Court scrutiny. It’s possible to download the blueprints for a firearm and build it in your home. There’s no serial number to track and no store to scrutinize your purchase. Luigi Mangione allegedly used a ghost gun to assassinate UnitedHealthcare CEO Brian Thompson.

Kirk Garrison, a forensics expert who works for the San Bernardino Sheriff’s department, told 404 Media he’s had early success matching 3D printed objects to the machines that made them. Garrison said his comments represent his own views and not those of the San Bernardino Sheriff’s department. He also cautioned that what he’s doing is in its infancy and it might be years before authorities can reliably match a gun to the machine that made it, if they can do it at all.

In 2018, Garrison started seeing a lot of 3D printed gun parts in his work at the Sheriff’s department. It was mostly 80% kits and automatic conversion kits, small 3D printed pieces of plastic that turn a semiautomatic pistol into an automatic one. Then he got his first case with a fully 3D printed gun frame. “That’s when I was like, ‘We might need to know a little bit more about this now if we’re actually going to be seeing this stuff and potentially have to testify to it,’” he told 404 Media.

A few years later Garrison attended a conference for forensic examiners in Atlanta and caught a talk by FBI lab tech Corey Scott. Scott had been 3D printing novelty items and noticed something. “He was just like, ‘Hey, I noticed on these 3D printed items, there’s these marks,’ but he was like, ‘I’m not actually a firearms or toolmark examiner.’”

A toolmark is a consistent scratch or impression a harder object leaves on a softer one. A screwdriver may produce the same scratches in the head of every screw it touches. A pair of bolt cutters will scratch up a length of chain in the same way every time. Matching tools to the objects they interacted with is one of the bedrocks of forensic science, and it’s something Garrison is an expert in.

So the question was: do 3D printers leave behind consistent toolmarks on the objects they make? When he got back to his San Bernardino lab following the conference, Garrison put the 3D printed weapon frame under the microscope. He noticed that the manufacturing process had left stria, or scratch marks, behind. If a 3D printer left behind the same pattern of stria on everything it printed, then it might be possible to match a printer to an object it printed.

From there, Garrison started printing simple blocks at home on his own 3D printer. He’d take them into the lab on his own time and examine them under a microscope. “That’s when I started seeing some of the consistency on two separate printed things,” he said. It was too early to tell, and it’s still too early to tell, but individual printers might leave behind unique toolmarks on every object they print.

A page from 'An exploratory study of topographical signatures within 3D fused deposition modelling using Polylactic Acid (PLA) filament.'

Most 3D printers work by heating up a filament—often, but not always, plastic—and extruding it through a metal nozzle. The nozzle puts down hundreds, or even thousands, of layers of the heated plastic to form a solid object. Each individual level of the print is called the print line. “So on the firearm, I’m seeing from the trigger guard—maybe print line 200—and the top of the magazine well—print line 400—the marks are staying consistent,” Garrison said.
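
To make the comparison idea concrete, here is a hypothetical sketch, not Garrison’s or the researchers’ actual method, of scoring the similarity of two one-dimensional surface profiles (think stria height measurements taken along two print lines) with a simple correlation. The profile data below is synthetic.

```python
# Hypothetical sketch: compare two 1-D surface profiles (e.g., stria height
# measurements along two print lines) with a simple correlation score.
# The profiles below are synthetic; real work would use microscope scan data.
import numpy as np

rng = np.random.default_rng(0)

# A "nozzle signature": a fixed pattern of fine scratches.
signature = rng.normal(size=500)

# Two print lines from the same printer share the signature plus noise;
# a third profile from a different printer does not.
line_a = signature + 0.3 * rng.normal(size=500)
line_b = signature + 0.3 * rng.normal(size=500)
other = rng.normal(size=500)

def similarity(x, y):
    # Pearson correlation of mean-centered, unit-variance profiles, in [-1, 1].
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

print("same printer:     ", round(similarity(line_a, line_b), 2))  # close to 1
print("different printer:", round(similarity(line_a, other), 2))   # close to 0
```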

It was an exciting discovery but it also wouldn’t be admissible as evidence in a criminal trial. Despite the promise that we may one day be able to match a printer to the object that made it, Garrison stressed that the work was in its very early days and that it would take years, perhaps even a decade, of science to work out the truth of toolmarks and 3D printers.

He was also studying this on his own time and still had a full caseload with the Sheriff’s department. Garrison published a study about his results in Forensic Science International that he co-authored with researcher Steven Pavlovich, but he knew there was more to do. “I’ve always been like, ‘Hey, someone who works at a university who gets paid to do this, you should totally do this right now,’” he said.

Enter Eric Law, an Assistant Professor at the University of Central Oklahoma Forensic Science Institute, and his graduate student Cooper Blair. Along with Garrison, the pair are the authors of a forthcoming research paper about the phenomenon of toolmarks in 3D printed objects. Once published, it’ll be the first of its kind.

Law and Blair’s focus is narrow. “So if we had a single printer and we had multiple nozzles, can we tell the difference between something printed on each of those different nozzles? And also, if we have different print bed surfaces, can we differentiate those print bed surfaces and tell what object was printed on which?” Law told 404 Media.

The nozzles used in 3D printing are often, but not always, made of metal, and objects are printed onto a strip of material called a print sheet or print bed. They studied print sheets first. Not all sheets are the same: some are smooth, some are textured, and they come in a variety of different materials. “So we looked at textured, because we figured if there's some texture to it, those characteristics might reproduce on the plastic, and might let us do that comparison a bit easier,” Law said. “So I looked at texture print beds, and we could differentiate those 100% of the time.” Meaning that, both by eye and using a computer, his team could match an object to the sheet it was printed on.

It’s a promising early finding. “The problem we get into there is we're looking at a specific area on the print bed, so you have to print something on the exact same region, because every area on that print bed is different,” Law said. “If we print something right in the center and then print that same object in the top right corner, those would be different from each other. So it has to be in the same location, which complicates things a little bit.”

He pointed to Glock switches, the conversion kits that turn a pistol into an automatic weapon. “Those are pretty small and on a 3D print bed you could align a bunch of those and print them all at once,” he said. “Which is what you would do to produce as many as you can, as quickly as you can. If you had two of those they might look like they're from different printers, but they might have just been from different sections of the same printer.”

Print sheets can also move between printers and can be easily discarded. Knowing that a Glock switch was printed out on a particular sheet is not a smoking gun. “So it shows promise. But there's a lot of potential issues too,” Law said.

Law and Blair succeeded in matching nozzles to printed objects in their study, but the results weren’t as promising as the print sheets. Law said the nozzle matches were correct about 75 percent of the time. “The algorithm could identify the correct nozzles, probably a little bit less than that with just visual examination,” he said. “It still shows promise, but is a bit more challenging.”

There are other issues too. All of Law and Blair’s tests were done with one kind of 3D printer—a Prusa MK4S. There are hundreds of different devices on the market that all behave differently. Law also pointed out that brass nozzles themselves warp over time and may produce different results after hundreds of prints, and that different nozzles made from different materials may work very differently. Law would also want an examiner error rate study—a formal scientific inquiry into false positives and examiner bias.

“There’s a lot of promise in what we’ve seen but there’s also a lot of questions still. Different nozzles, different print beds, how easy it is to swap those and whether they change,” Law said. He would not, at this point, be willing to testify in a criminal case as an expert on 3D printed forensics.

Garrison also said he wouldn’t be comfortable using any of this in a court but he was still excited. “Even if it doesn’t work, and this is not a possibility, we still found out new information. I’d be just as happy with that. ‘Hey cool, I was involved in finding out that you can’t do this,’” he said.


From 404 Media via this RSS feed

78
 
 

The Hyperpersonalized AI Slop Silo Machine Is Here

For a while, I have said that the AI slop endgame, for social media companies, is creating a hyper-personalized feed full of highly specific content about anything one could possibly imagine. Because AI slop is so easy to make and because social media algorithms are so personalized, Facebook, Instagram, TikTok, or YouTube can feed users anything the platforms perceive them to possibly want. This means that AI slop makers are exploring ever more niche areas of content.

Case in point: Facebook AI slop about the horrific and deadly Texas flood. Topical AI content about disasters, war, current events, and news stories is at this point so commonplace that it is now sadly barely notable, and AI-powered “misinformation” about horrible events is all over every social media feed I can think of. But as we document our descent into this hellhole, I thought some AI slop surfaced on Bluesky by Christina Stephens was particularly notable:

[Image: AI-generated Facebook post showing LSU coach Brian Kelly assisting in the Texas floods]

This is slop that shows Louisiana State University football coach Brian Kelly assisting in the Texas floods. Kelly is “famous” in that SEC football coaches are famousish, but he has no real connection to Texas and there is no reason for this content to exist other than the fact that it is being churned out by a Facebook page called LSU Gridiron Glory, which is specifically making AI slop about Kelly and other LSU football figures, including quarterback Garrett Nussmeier and some of his apparent girlfriends. In the grand scheme of things, Brian Kelly is a very minor figure.

This page is churning out slop that includes Brian Kelly’s reaction to last month’s tragic Air India crash and the supposedly amazing line of encouragement he said (this line is never shared, and, of course, the football coach in Louisiana has not had anything to say about a plane that crashed in India). There is slop of Kelly getting his lost wallet returned to him, donating to the homeless, slop of Kelly in the hospital with a rare illness, slop of Kelly being deported by Trump, talking to Apple CEO Tim Cook, and slop of Kelly secretly “paying off the debt owed by a struggling gardener.” The slop is so completely random and specific that I struggle to imagine how one would decide to fill this niche, and, yet, the AI slop economy has done so, anyway.


My point is that there is no reason for LSU football coach Brian Kelly flood rescue inspiration porn to exist on the internet because it did not happen and because it is so hyperspecific as to seem like there could not possibly be a market for such content. And yet someone has decided that ridiculously niche disaster content would get served up by the algorithm to someone who might interact with it.


Then consider that essentially the exact same thing exists, but for fans of the NBC show The Voice. A page called The Voice Fandom is showing AI slop of judge Blake Shelton saving dogs in the Texas flood, Shelton carrying a girl out of a medical clinic in Kerr County, fellow judge Luke Bryan donating to an animal rescue shelter, etc. As we have seen with previous slop factories on Facebook, many of these bizarre images link out to AI-generated “news” websites that are overloaded with ads. There are, surely, thousands of other similar pages that are doing the exact same thing with celebrities big and small, creating an internet where the LSU fans of the world can imagine their coach as first responder or the judge of their favorite TV show as dog savior or whatever.

Very little of this slop has much engagement on it, but one of the Blake Shelton photos has 18,000 likes and a few hundred comments. Slop has gotten so cheap and easy to produce, and Facebook is so easy to spam, that presumably the return is worth it. In covering these pages for months, I have learned that a single person can operate dozens or hundreds of pages and can keep them filled up with content, and so having something occasionally go viral can be enough to make the entire endeavor financially viable. There was a time a few months ago when I would click through these pages endlessly and marvel at the sheer volume of slop being posted, but the tactic has become so common at this point that we have become almost fully desensitized to it.


From 404 Media via this RSS feed

79
 
 

Saving the Lost Silent Zuckerberg Interview With the Amazing Power of AI

Yesterday, Silicon Valley trade publication The Information launched TITV, a live-streaming news program. For its first episode, the company’s founder Jessica Lessin interviewed Mark Zuckerberg about his recent attempts to hire AI executives away from other companies for huge sums of money.

It is impossible to say what actually happened in the interview, however, because there were audio problems with the livestream—the Zuckerberg segment was completely silent, giving us a minute and 50 seconds of video of Zuckerberg’s face moving with no sound coming out.

In recent weeks, however, Zuckerberg has really ramped up his rhetoric about “superintelligence,” throwing hundreds of millions of dollars at AI researchers in attempts to hire them at Meta, and beginning work on massive data centers around the country to power his AI. Zuckerberg and Meta have talked endlessly about AI as a game changer, while being unable or unwilling to keep AI-generated misinformation and slop off of Meta platforms.

Meanwhile, there has been increased discussion about the possible power of using AI for journalism, which we have written about extensively. I saw the silent interview with Zuckerberg as both an opportunity to check out the awesome abilities of Meta’s AI, and to test AI tools at a task some of them are supposedly good at—reading lips. If these technologies worked as well as their creators have promised, it might be possible to provide the world with information from The Lost Mark Zuckerberg Interview.

Here is how my experiment went.

Because Zuckerberg was obviously talking about Meta AI, I thought I would try his tool first. I took video from the interview and asked Meta AI to lip read it to tell me what they were saying:

[Screenshot: Meta AI declining to lip read the video]

“I’d love to help! However, I’m a text-based AI and don’t have capability to visually lip read videos,” Meta AI said. It then suggested that I “find a professional lip reader or transcription service that specializes in lip reading videos” or “use automated lip reading software.”

I then tried to upload still images to see if Meta AI could parse them:

[Screenshot: Meta AI unable to parse the still images]

When I uploaded additional images, Meta AI was also not able to provide any information about what was being said.

I then went to ChatGPT, because Zuckerberg is reportedly offering pay packages of up to $300 million to OpenAI staffers to come work at Meta. I uploaded the 1:50 video and ChatGPT told me “the video processing took too long and timed out.” I then uploaded a 25 second clip and it told me “the system is still timing out while trying to extract frames.” I then asked it to do the first five seconds and it said “even with the shorter clip and smaller scope (first 5 seconds), the system timed out.” I then asked for it to extract one single frame, and it said “it looks like the system is currently unable to extract even a single frame from the video file.” ChatGPT then asked me to take a screenshot of Zuckerberg. I sent it this:

[Screenshot of Zuckerberg sent to ChatGPT]

And ChatGPT said “the person appears to be producing a sound like ‘f’ or ‘v’ (as in ‘video’ or ‘very’),” but that “possibly ‘m’ or ‘b,’ depending on the next motion.” I then shared the 10 frames around that single screenshot, and ChatGPT said “after closely analyzing the progression of lip shapes and facial motion,” the “probable lip-read phrase” was “This is version.” I then uploaded 10 more frames and it said the “full phrase so far (high confidence): ‘This version is just.’”


I then decided to try to extract every frame from the video and upload it to ChatGPT.

I went to a website called frame-extractor.com and cut the video into 3,000 frames. After it had processed 700 of them, I tried to upload them to ChatGPT and it did not work. I then decided I would go 10 frames at a time from the beginning of the clip. Even though I sent an entirely different portion of the video and told ChatGPT we were starting from a different part of the video, it still said that the beginning of the video said “this version is.” I continued uploading frames, 10 at a time. These frames included both Lessin and Zuckerberg, not just Zuckerberg.
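
For reference, the same frame extraction can be done locally rather than through a website. Below is a minimal Python sketch using OpenCV that saves every frame of a clip and groups the files into batches of ten, mirroring how they were uploaded; the file and folder names are assumptions, and this is not the frame-extractor.com tool described above.

```python
# Minimal sketch: extract every frame from a video and group the frames
# into batches of ten, roughly mirroring the manual process described above.
# "zuck_clip.mp4" and the "frames" folder are assumed names.
import os
import cv2  # pip install opencv-python

VIDEO = "zuck_clip.mp4"
OUT_DIR = "frames"
os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO)
paths = []
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    path = os.path.join(OUT_DIR, f"frame_{index:05d}.png")
    cv2.imwrite(path, frame)
    paths.append(path)
    index += 1
cap.release()

# Batches of ten frames, matching how they were uploaded to ChatGPT.
batches = [paths[i:i + 10] for i in range(0, len(paths), 10)]
print(f"Extracted {len(paths)} frames into {len(batches)} batches of up to 10.")
```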

ChatGPT slowly began to create a surely accurate transcript of the lost audio of this interview: “This version is just that it we built,” ChatGPT said. As I added more and more frames, it refined the answer: “This version is what we’re going to do,” it said. Finally, it seemed to make a breakthrough. “Is this version of LLaMA more powerful than the one we released last year?” the ChatGPT transcript said. It was not clear about who was speaking, however. ChatGPT said "her mouth movements," but then explained that the "speaker is the man on the left" (Lessin, not Zuckerberg, was speaking in these frames).

I had uploaded 40 of a total of 3,000 frames. Zoom video is usually 30 fps, so in approximately 1.5 seconds, Lessin and/or Zuckerberg apparently said “Is this version of LLaMA more powerful than the one we released last year?” I then recorded this phrase at a normal speaking speed, and it took about four seconds. Just a data point.


I then got an error message from ChatGPT, and got rate-limited because I was uploading too much data. It told me that I needed to wait three hours to try again.


Finally, I did what Meta AI told me to do, and tried a bespoke AI lip reading app. I found one called ReadTheirLips.com, which is powered by Symphonic Labs. This is a tool that people have been trying to use in recent months to figure out what Donald Trump and Jeffrey Epstein were saying to each other in silent b-roll news footage, without much success.

I paid $10 for three minutes worth of transcription and asked it to lip read using its “Multiface Detection.” After waiting 10 minutes, I got an error message that said “Transcription failed, no credits have been used, try again later.” I then asked it to focus only on Zuckerberg, and actually got some text. I separately asked it to focus on Lessin.

Here is a transcript of what the AI says they were talking about. It has not been edited for clarity and I have no idea which parts, if any, are accurate:

LESSIN: Thanks for joining us again, TV. We're happy to have you already this morning. News that you've spent even more money with your big announcement about your new supercomputers. We'll get to that, but to start, you've been in huge scale like I.

ZUCKERBERG: Happy TO BE HERE. We're GOING TO TALK A LITTLE BIT ABOUT META'S AI STRATEGY. It's BEEN BUSY, YOU KNOW? I THINK THE MOST EXCITING THING THIS YEAR IS THAT WE'RE STARTING TO SEE EARLY GLIMPSES OF SELF-IMPROVEMENT WITH THE MODELS, WHICH MEANS THAT DEVELOPING SUPERINTELLIGENCE IS NOW.

LESSIN: You HAVE BEEN ON A PLANE OF AI HIRING, WHY AND WHY NOW?

ZUCKERBERG: Insight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with a lab is to deliver personal superintelligence to everyone in the world, so that way, you know, we can put that power in every individual's hand. I'm really excited about it.

LESSIN: I DON'T KNOW, I DON'T KNOW, I DON'T KNOW.

ZUCKERBERG: Than ONE OF THE OTHER LABS YOU'RE DOING, AND YOU KNOW MY VIEW IS THAT THIS IS GOING TO BE SOMETHING THAT IS THE MOST IMPORTANT TECHNOLOGY IN OUR LIVES. IT'S GOING TO UNDERPIN HOW WE DEVELOP EVERYTHING AND THE COMPANY, AND IT'S GOING TO AFFECT SOCIETY VERY WISELY. SO WE JUST WANT TO MAKE SURE WE GET THE BEST FOCUS.

LESSIN: Did YOU FEEL LIKE YOU WERE BEHIND WHAT WAS COMING OUT OF LAW BEFORE I'M NOT ADJUSTING.

ZUCKERBERG: On THIS FROM ENTREPRENEURS TO RESEARCHERS TO ENGINEERS WORKING ON THIS HIDDEN INFRASTRUCTURE, AND THEN OF COURSE WE WANT TO BACK IT UP WITH JUST AN ABSOLUTELY MASSIVE AMOUNT OF COMPUTER RESEARCH, WHICH WE CAN SUPPORT BECAUSE WE HAVE A VERY STRONG BUSINESS MODEL THAT THROWS OFF A LOT OF CAPITAL. LET'S TALK ABOUT.

LESSIN: Like THIS SUMMER, PARTICULARLY, YOU SWITCH GEARS A LITTLE BIT.

ZUCKERBERG: I THINK THE FIELD IS ACCELERATING, YOU KNOW, WE KEEP ON TRACK FOR WHERE WE WANT TO BE, AND THE FIELD KEEPS US MOVING FORWARD.

The video ends there, and it cuts back to the studio.

Update: The Information provided 404 Media with several clips (with audio) from Lessin's interview with Zuckerberg, as well as a real transcript of the interview. Here is the real segment of what was said. As you can see, the AI captured the gist of this portion of the interview, and actually did not do too badly:

Lessin: Mark, thanks for joining TITV. We're happy to have you here. Already this morning, [there’s] news that you've spent even more money with your big announcement about your new supercomputers. We'll get to that. But to start, you took a huge stake in ScaleAI. You have been on a blitz of AI hiring. Why, and why now?

Zuckerberg: Yeah, it's been busy. You know, I think the most exciting thing this year is that we're starting to see early glimpses of self-improvement with the models, which means that developing super intelligence is now in sight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal super intelligence to everyone in the world, so that way we can put that power in every individual's hand. And I'm really excited about it. It's a different thing than what the other labs are doing.

And my view is that this is going to be something that is the most important technology in our lives. It's going to underpin how we develop everything at the company, and it's going to affect society very widely. So we just want to make sure that we get the best folks to work on this, from entrepreneurs to researchers to engineers working on the data and infrastructure.

And then, of course, we want to back up with just an absolutely massive amount of compute which we can support, because we have a very strong business model that throws off a lot of capital.

Lessin: Did you feel like you were behind coming out of Llama 4? It seems like this summer, in particular, you switched gears a little bit.

Zuckerberg: I think the field is accelerating, you know, we keep on having goals for where we want to be. And then the field keeps on moving faster than we expect.

The rest of the interview is available at The Information.


From 404 Media via this RSS feed

80
 
 

Immigration Raid Tracking App ‘ICE Block’ Keeps Your Data Private, Researcher Finds

ICE Block, an app that lets users warn others about the location of ICE officers, and which for a short while was at the top of the App Store’s social media chart, does protect users’ privacy and doesn’t share their location with third parties, according to a recent analysis from a security researcher. ICE Block already claimed that it did not collect any data from the app; the analysis now corroborates that.

“It’s not uploading your location at all, when you make a report that report isn’t associated with your device in any way, and there are no third party services that it talks to or sends data to,” Cooper Quintin, senior public interest technologist at the Electronic Frontier Foundation (EFF), who analyzed the ICE Block app, told 404 Media.


From 404 Media via this RSS feed

81
 
 

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People

Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform that 404 Media reporting has shown was used for creating nonconsensual pornography, until Civitai banned them due to pressure from payment processors.

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded them “a short period of time” before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new “real people” model policy.

At the time of writing, the Discord channel has hundreds of members who are still finding and sharing models that have been removed from Civitai and are reuploading them to Hugging Face. Some users have even shared a piece of software, also hosted on Hugging Face, which allows users to automatically upload Civitai models to Hugging Face in batches.

Hugging Face did not respond to multiple requests for comment. It also did not respond to specific questions about how and whether it plans to moderate these models, given that they were previously hosted on a platform primarily used for generating AI pornography and that our reporting shows they were used to create nonconsensual pornography.

I found the Civitai models of real people that were reuploaded to Hugging Face thanks to a paper I covered where researchers scraped Civitai. The paper showed that the platform was primarily used for pornographic content, and that it deleted at least 50,000 AI models designed to recreate the likeness of real people once it changed its policy in May. The researchers, Laura Wagner and Eva Cetnic from the University of Zurich, provided me with a spreadsheet of all the deleted models, which included the names of the models (almost always the name of a female celebrity or lesser-known internet personality), links to where they were previously hosted on Civitai, and the SHA256 hashes Civitai uses to identify all the models hosted on its site.

The people who are reuploading the Civitai models to Hugging Face are seemingly trying to hide the purpose of those models on Hugging Face. On Hugging Face, these models have generic names and URLs like “LORA” or “Test model.” Users can’t tell that these models are used to generate the likeness of real people just by looking at their Hugging Face page, nor would they be able to find them by searching for the names of celebrities on Hugging Face. In order to find them, users can go to a separate website the Civitai archivists created. There, they can enter the name of a Civitai model, the link where it used to be hosted on Civitai before it was deleted, or the model’s SHA256 hash. All of these will lead users to a page that explains what the model is and shows its name, as well as several images showing the kind of images it can generate. At the bottom of that page is a link to one or more Hugging Face “mirrors” where the model has been reuploaded.
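
For context, a SHA256 hash is simply a fingerprint computed from a file’s bytes, so anyone holding a downloaded model file can derive the same lookup key, assuming the hash is computed over the model file itself. A minimal Python sketch of computing one (the file name is an assumption) might look like this:

```python
# Minimal sketch: compute the SHA256 hash of a local model file, the same
# kind of identifier the archive site accepts as a lookup key.
# "model.safetensors" is an assumed file name.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of_file("model.safetensors"))
```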

By using Wagner’s and Cetnic’s data and entering it into this Civitai archive site, I was able to find the Civitai models hosted on Hugging Face.

Hugging Face’s content policy bans “Unlawful, defamatory, fraudulent, or intentionally deceptive Content (e.g., disinformation, phishing, scams, inauthentic behavior),” as well as “Sexual Content used for harassment, bullying, or created without explicit consent.” Models that generate the likeness of real people don’t have to be used for unlawful or defamatory ends, and they only produce sexual content if people choose to use them that way. There’s nothing in Hugging Face’s content policy that explicitly forbids AI models that recreate the likeness of real people.

However, the Hugging Face Ethics & Society group, which is “committed to operationalizing ethics at the cutting-edge of machine learning,” has identified six “high-level categories for describing ethical aspects of machine learning work,” one of which is that AI should be “Consentful.”

“Consentful technology supports the self-determination of people who use and are affected by these technologies,” the company explains. Examples of this, the company says, include “Avoiding extractive, chauvinist, ‘dark,’ and otherwise ‘unethical’ patterns of engagement.”

Other AI models that recreate the likeness of real people could conceivably not violate any of these principles. For example, two of the deleted Civitai models that were reuploaded to Hugging Face were designed to recreate the likeness of Vladimir Putin, which in theory people would want to use in order to mock or criticize the Russian president. However, the vast majority of the models are of female celebrities, which my reporting has shown is being used to create nonconsensual sexual content, and which were deleted en masse from Civitai because of pressure from payment processors who didn’t want to be associated with that type of media.


From 404 Media via this RSS feed

82
 
 

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

In the two years that I’ve been reporting about Civitai, a platform for sharing AI image generation models that has been instrumental in the production of AI generated non-consensual porn, Civitai has consistently argued that the amount of adult content on the site has been overstated. But new research shows that, if anything, the amount of adult content on Civitai has been underestimated.

In their paper, “Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm,” researchers Laura Wagner and Eva Cetnic from the University of Zurich studied more than 40 million user-generated images on Civitai and over 230,000 models. They found “a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals” on the platform, they write in the paper.

“What began as a promising creative breakthrough in TTI [text-to-image] generation and model personalization, has devolved into a pipeline for the large-scale production of sensational, biased, and abusive content. The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes,” Wagner and Cetnic write in their paper. “Several indicators suggest a descent into a self-reinforcing feedback loop of platform decay. These include a dramatic increase in NSFW imagery, from 41% to 80% in two years, as well as the community’s normalization of deepfakes, misogynistic tropes, and other exploitative content.”

To visualize just how dominant adult content was on Civitai, check the chart below, which shows the distribution of images by “NSFW browsing levels” over time. These categories, which are inspired by the Motion Picture Association film rating system and are used by Civitai to tag images, show that adult content was always a significant portion of all images hosted on the site, but that the portion of “overtly sexual, or disturbing” content only grew as the site became more popular, and exploded starting in 2024. The chart is based on Civitai’s own numbers and categorization system, which the researchers scraped from the site. It likely undercounts the number of explicit images on the site since, as both the researchers and I observed during my reporting, not all adult content is tagged as such.

[Chart: distribution of Civitai images by NSFW browsing level over time]

In December 2023, Civitai CEO Justin Maier told VentureBeat that “less than 20% of the posted content is what we would consider ‘PG-13’ or above.” When I reached Maier for comment for this article, he told me that “The VentureBeat figure cited a December 2023 snapshot, when adult posts were a minority. The mix shifted in 2024 as many NSFW creators migrated from platforms that no longer allow that content.”

However, the data in the paper shows that by October of 2023, 56 percent of all images on the site were tagged as “NSFW” and were designated by Civitai as “PG-13” or above.

In May, Civitai announced it was banning all AI image generation models designed to recreate the likeness of real people because of pressure from payment processors. Since the authors of the paper were already tracking hundreds of thousands of models hosted on Civitai, they could easily see which models were removed, giving us a first clear look at how common those models were.

Overall, they saw that more than 50,000 models designed to AI-generate the likeness of real people were removed because of the ban. These are models that Civitai itself tagged as “person of interest,” the tag it uses to indicate a model recreates the likeness of a real person, so the actual number of models depicting real people is likely higher.

It’s hard to say if the most popular AI models on Civitai were all popular just because they were used to generate explicit images, because people could use models tagged as NSFW to generate non-nude images and vice versa. For example, according to the data collected by the researchers, the most popular AI image generation model on Civitai was EasyNegative, with almost 600,000 downloads. It’s not tagged or promoted as a model for generating pornography, but images that users created with it, which are shared on its Civitai model page, show it is commonly used that way.

Other very popular models on Civitai are clearly designed to generate explicit images. The sixth most popular model with 360,000 downloads is Nudify XL: Better Bodies, which its creator says is for “nude female frontals.” A model called Realistic Vaginas - God Pussy 1 had 256,000 downloads. The POV Squatting Cowgirl LoRA model, which Civitai tagged as a “sex” model, had 189,000 downloads.


The authors of the paper also conducted deeper analysis of the 40,000 most downloaded models on Civitai. In the 11,151 models where they could extract textual training data, meaning text that indicates what kind of images the models were trained on, they found “specifically abusive terms.” 5.6 percent included the keywords “loli” (558 models) and/or “shota” (69 models), Japanese terms commonly used to refer to sexualized depictions of pre-pubescent girls and boys. About 2.1 percent (189 models) included the keyword “rape.”
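
As a rough illustration of how such figures are tallied, here is a hypothetical sketch, not the researchers’ code, that counts keyword hits across scraped training-text strings and reports each keyword’s share; the sample records are invented placeholders.

```python
# Hypothetical sketch (not the researchers' code): count how many models'
# extracted training text contains given keywords and report each share.
# The sample records below are invented placeholders.
training_texts = {
    "model_001": "portrait, photorealistic, woman",
    "model_002": "anime style, loli, school uniform",
    "model_003": "landscape, oil painting",
}

KEYWORDS = ["loli", "shota", "rape"]

total = len(training_texts)
for keyword in KEYWORDS:
    # A real analysis would need word-boundary matching so, for example,
    # "rape" does not also count "drapes" or "grape".
    hits = sum(1 for text in training_texts.values() if keyword in text.lower())
    print(f"{keyword}: {hits} of {total} models ({100 * hits / total:.1f}%)")
```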

The data shows with clear numbers what we have long argued at 404 Media: adult content drives technological innovation and early adoption, and this has been especially true in the world of generative AI. Despite its protestations to the contrary, Civitai, which is one of the fastest growing platforms in that industry and which the influential Silicon Valley venture capital firm Andreessen Horowitz invested in, grew because of explicit content, much of which was nonconsensual.

“The rapid rise of NSFW content, the over-representation of young female subjects, and the prioritization of sensational content to drive engagement reflect an exploitative, even abusive dynamic,” the researchers wrote. “Additionally, structural discrimination embedded in today’s open-source TTI tools and models have the potential to cause significant downstream harm as they might become widely adopted and even integrated into future consumer applications.”

Adult content driving innovation and early adoption doesn’t have to be harmful. As the researchers write, it’s the choices platforms like Civitai make that give us these outcomes.

“The contingent nature of technology, shaped by online communities, platform operators, lawmakers, and society as a whole, also creates opportunities for intervention,” they write. “Model-sharing hubs and social media platforms both have the capacity to implement safeguards that can limit the spread of abusive practices such as deepfake creation and abusive imagery.”


From 404 Media via this RSS feed

83
 
 

Hackers Can Remotely Trigger the Brakes on American Trains and the Problem Has Been Ignored for Years

Many trains in the U.S. are vulnerable to a hack that can remotely lock a train’s brakes, according to the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the researcher who discovered the vulnerability. The railroad industry has known about the vulnerability for more than a decade but only recently began to fix it.

Independent researcher Neil Smith first discovered the vulnerability, which can be exploited over radio frequencies, in 2012.

“All of the knowledge to generate the exploit already exists on the internet. AI could even build it for you,” Smith told 404 Media. “The physical aspect really only means that you could not exploit this over the internet from another country, you would need to be some physical distance from the train [so] that your signal is still received.”


From 404 Media via this RSS feed

84
 
 

Swedish Prime Minister Pulls AI Campaign Tool After It Was Used to Ask Hitler for Support

The Moderate Party of Sweden has removed an AI tool from its website after people used it to generate videos of Prime Minister Ulf Kristersson asking Adolf Hitler for support.

The tool allowed users to generate videos of Kristersson holding an AI-generated message in an attempt to promote the candidate ahead of the general election in Sweden next year.

Swedish television station TV4 used the tool to generate a video of Kristersson on a newspaper above the headline “Sweden needs Adolf Hitler” after it noticed that it had no guardrails or filters.

In the video TV4 generated using the website, Kristersson makes his pitch over stock footage of old people embracing. A woman runs through a field, the camera focusing on flowers while the sun twinkles in the background. Cut to Kristersson. He turns a blue board around. “We need you, Adolf Hitler,” it says.

The Moderates removed the AI system from its website, but the videos of Ulf asking Hitler to join the Moderates remain on social media and TV4’s website.

In an attempt to bolster its ranks, the Moderate Party launched a website that allowed users to generate a custom video of Kristersson asking someone to join the party. The idea was probably to have party members plug in the names of friends and family members and share what appeared to be a personalized message from the PM asking for their support.

In the video, Kristersson stands in front of stairs, makes his pitch, and turns around a blue tablet that bears a personalized message to the viewer. The system apparently had no guardrails or filters, and Swedish television station TV4 was able to plug in the names Adolf Hitler, Ugandan dictator Idi Amin, and Norwegian mass murderer Anders Breivik.

The Moderate Party did not return 404 Media’s request for comment about the situation, but told TV4 it shut down the site as soon as it learned people were using it to generate messages with inappropriate names.

The Moderate Party’s AI-generated video was simple. It filmed the PM holding a blue board it could easily overlay with input from a user and then used AI to generate the fake newspaper and a few other slides. Preventing people from typing in “Hitler” or “Anders Breivik” would have been as simple as maintaining a list of prohibited names, words, and phrases, something that every video game and online service does. Users are good at bypassing guardrails, but the Moderates’ AI tool appeared to have none.
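
For illustration, the kind of denylist check described above takes only a few lines. The sketch below is a hypothetical example rather than the Moderates’ actual system, and the blocked terms are simply the names TV4 tested.

```python
# Hypothetical sketch of a name denylist for a form that personalizes a video.
# The blocked terms are the names TV4 tested, not an exhaustive real-world list.
import unicodedata

BLOCKED_TERMS = {"adolf hitler", "idi amin", "anders breivik"}

def normalize(text: str) -> str:
    # Strip accents, collapse whitespace, and lowercase so inputs like
    # "ADOLF   Hitler" or "Adölf Hitler" still match the list.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())

def is_allowed(name: str) -> bool:
    cleaned = normalize(name)
    return not any(term in cleaned for term in BLOCKED_TERMS)

if __name__ == "__main__":
    for candidate in ["Anna Svensson", "Adolf Hitler"]:
        print(candidate, "->", "allowed" if is_allowed(candidate) else "blocked")
```

A determined user can still get around a simple substring list with misspellings or spacing tricks, but as the article notes, this tool appeared to have no filter at all.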

Users making content you don’t want to be associated with is one of the oldest and most well known problems in AI. If you release a chatbot, generative photo system, or automated political greeting generator, someone will use it to reference the Nazis or make nonconsensual porn.

When Microsoft launched Tay in 2016, users turned it into a Hitler-loving white nationalist in a few hours. Eight years later, another Microsoft AI product had a loophole that let people make AI-generated nudes of Taylor Swift. Earlier this year, Instagram’s AI chatbots lied about being licensed therapists.


From 404 Media via this RSS feed

85
 
 

'Deportation Tok' Is Taking Off

As immigration raids roll out across the U.S., those affected are processing the experience in the normal 2025 way—via vertical video.

Across social media, people are uploading clips with uncanny-valley titles like “A normal day for me after being deported to Mexico” and “3 things I wish I knew before self-deporting from the US!” These posts have the normal shape, voiceovers, and fonts of influencer content, but their dystopian topic reflects the whiplash of the current historical moment.

Doomscrolling last week, a particular clip caught my eye. A man sits on the bottom bunk of a metal bed, staring down at the floor, with the caption “Empezando una nueva vida después de que me Deportaran a México” (“Starting a new life after being Deported to Mexico”).


From 404 Media via this RSS feed

86
 
 
The Media's Pivot to AI Is Not Real and Not Going to Work

On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.

From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT.

This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor that it is sending to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.
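
For context, one common way a publisher tries to block OpenAI’s crawlers is to refuse requests from the user agents OpenAI documents, such as GPTBot, either in robots.txt or at the server. The sketch below is a simplified server-side illustration in Python, not a description of 404 Media’s actual setup.

```python
# Simplified illustration: refuse requests from OpenAI's documented crawler
# user agents at the application layer. Real deployments usually do this in
# robots.txt or at the CDN/web server instead. Not 404 Media's actual setup.
BLOCKED_AGENTS = ("GPTBot", "ChatGPT-User")

def block_ai_crawlers(app):
    """Wrap a WSGI app so requests from blocked crawlers get a 403."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if any(bot in user_agent for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted."]
        return app(environ, start_response)
    return middleware
```

A robots.txt disallow for the same user agents is the politer first step; scrapers that ignore both may be why, as noted above, the block is not always working.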


This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.

Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.

But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is not a smart or sustainable business move, and therefore it is not a smart business strategy. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.

Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.

In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.”

Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told the company in an all-hands meeting, audio of which was obtained by 404 Media, that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”


From 404 Media via this RSS feed