Put something in robots.txt that isn't supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can't think of a better solution.
Good old honeytrap. I'm not sure, but I think that it's doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won't access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said... or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
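The log-and-ban idea can be sketched in a few lines. This is a rough illustration, not a real server: the trap path, the in-memory ban set, and `handle_request` are all made-up names, and a real deployment would push bans to a firewall or fail2ban instead of a Python set.

```python
# Sketch: ban any IP that requests a path disallowed in robots.txt.
# HONEYPOT_PATH would also appear in robots.txt as:
#   User-agent: *
#   Disallow: /secret-crawler-trap

HONEYPOT_PATH = "/secret-crawler-trap"  # illustrative; pick something unguessable

banned_ips: set[str] = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code; ban any IP that touches the trap."""
    if ip in banned_ips:
        return 403  # already banned
    if path == HONEYPOT_PATH:
        banned_ips.add(ip)  # a real setup would log and firewall this IP
        return 403
    return 200

# A well-behaved crawler reads robots.txt and never requests the trap;
# one that ignores it bans itself on the first hit.
```

A normal user never sees the link, so nearly every hit on the trap is a crawler that read robots.txt and ignored it anyway.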
I think I used to do something similar with email spam traps. Not sure if it's still around, but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div positioned way off the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
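The harvesting side really is that crude, which is why the trap works. A rough sketch of the kind of regex scrape spammers run over raw page source (the pattern and trap address are made up, and the regex is deliberately simplistic):

```python
import re

# Naive email harvester of the sort spammers run over raw HTML.
# It finds the trap address even though CSS pushes it off-screen,
# because it never renders the page - it only reads the source.
html = '''
<div style="position:absolute; left:-9999px">
  <a href="mailto:trap@example.com">trap@example.com</a>
</div>
'''

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
harvested = set(EMAIL_RE.findall(html))
print(harvested)  # the hidden trap address is scraped anyway
```

Any mail arriving at that address can only have come from a scraper, so the sender's IP goes straight onto the naughty list.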
I'd love to see something similar with robots.
Yup, it's the same approach as email spam traps. Except the naughty list, but... holy fuck a shareable bot IP list is an amazing addition, it would increase the damage to those web crawling businesses.
"Help, my website no longer shows up in Google!"
Yeah, this is a pretty classic honeypot method. Basically make something available but inaccessible to the normal user. Then you know anyone who accesses it is not a normal user.
I’ve even seen this done with Steam achievements before: there was a hidden game achievement which was only obtainable via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.
As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
Honestly it seems like in all aspects of society the social contract is being ignored these days, that's why things seem so much worse now.
It's abuse, plain and simple.
Governments could do something about it, if they weren't overwhelmed by bullshit from bullshit generators instead and led by people driven by their personal wealth.
Well the trump era has shown that ignoring social contracts and straight up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y’know.
The open and free web is long dead.
just thinking about robots.txt as a working solution to people that literally broker in people's entire digital lives for hundreds of billions of dollars is so ... quaint.
It's up there with Do-Not-Track.
Completely pointless because it's not enforced
I would be shocked if any big corpo actually gave a shit about it, AI or no AI.
if exists("/robots.txt"):
no it fucking doesn't
Robots.txt is in theory meant to be there so that web crawlers don't waste their time traversing a website in an inefficient way. It's there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.
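For context, a typical robots.txt in that cooperative spirit looks like this (the paths and sitemap URL are made up, and note that Crawl-delay is non-standard and ignored by some crawlers, Google included):

```
User-agent: *
Disallow: /search      # don't waste crawl budget on infinite query pages
Disallow: /admin
Crawl-delay: 10        # non-standard hint to go easy on the server
Sitemap: https://example.com/sitemap.xml
```

Nothing in it is enforceable; it only works if the crawler chooses to honor it, which is the whole point being made here.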
Alternative title: Capitalism doesn't care about morals and contracts. It wants to make more money.
Exactly. Capitalism spits in the face of the concept of a social contract, especially if companies themselves didn't write it.
We need laws mandating respect of robots.txt
This is what happens when you don't codify stuff.
It's a bad solution to a problem anyway. If we are going to legally mandate a solution I want to take the opportunity to come up with an actually better fix than the hacky solution that is robots.txt
AI companies will probably get a free pass to ignore robots.txt even if it were enforced by law. That's what they're trying to do with copyright and it looks likely that they'll get away with it.
you can't really make laws in the united states it's too hard
The battle cry of conservatives everywhere: It's too hard!
Except if it involves oppressing minorities and women. Then it's a moral imperative worth all the time and money you can shovel at it regardless of whether the desired outcome is realistic or not.
Most every other social contract has been violated already. If they don't ignore robots.txt, what is left to violate?? Hmm??
It's almost as if leaving things to social contracts vs regulating them is bad for the layperson... 🤔
Nah fuck it. The market will regulate itself! Tax is theft and I don't want that raise or I'll get in a higher tax bracket and make less!
This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If a $1/hr raise throws you above those cutoffs, that extra $160 could cost you $500 in food assistance, $5-$10/day for school lunch, or get you kicked out of government subsidized housing.
Yet another form of persecution that the poor actually suffer and the rich pretend to.
They didn't violate the social contract, they disrupted it.
True innovation. So brave.
Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
You cannot simply block crawlers lol
hide a link no one would ever click. if an ip requests the link, it's a ban
Except that it'd also catch people who use accessibility devices and might see the link anyway, or who use the keyboard to navigate a site instead of a mouse.
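If you do go the hidden-link route, you can at least keep the trap away from keyboard and screen-reader users. A rough HTML sketch (the trap URL is made up, and this is a mitigation, not a guarantee):

```html
<!-- Trap link: moved off-screen, unfocusable (tabindex="-1"),
     and hidden from screen readers (aria-hidden="true").
     Scrapers reading raw source still see the href. -->
<a href="/secret-crawler-trap"
   style="position:absolute; left:-9999px"
   tabindex="-1"
   aria-hidden="true"
   rel="nofollow">do not follow</a>
```

It's still wise to treat a single hit as a signal rather than an instant permanent ban, precisely because edge cases like this exist.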
I explicitly have my robots.txt set to block AI crawlers, but I don't know if anyone will actually observe the protocol. They should offer tools I can submit a sitemap.xml against to find out whether I've been parsed. Until they bother to address this, I can only assume their intent is hostile, and if anyone is serious about building a honeypot and exposing the tooling for us to deploy at large, my options are limited.
The funny (in a "wtf", not "haha" sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for stuff like accessing publicly available systems and changing a number in the URL to reach data that normally wouldn't be accessible, even after doing responsible disclosure.
Meanwhile, companies completely ignore the standards meant to say "you are not allowed to scrape this data" and then use OUR content/data to build up THEIR datasets, including AI etc.
That's not a "violation of a social contract" in my book, that's violating the terms of service for the site and essentially infringement on copyright etc.
No consequences for them though. Shit is fucked.
Remember Aaron Swartz
Strong "the constitution is a piece of paper" energy right there
No laws to govern so they can do anything they want. Blame boomer politicians not the companies.
Why not blame the companies ? After all they are the ones that are doing it, not the boomer politicians.
And in the long term they are the ones that risk being "punished". Just imagine people getting tired of this shit and starting to block them at the firewall level...
What social contract? When sites regularly have a robots.txt that says "only Google may crawl", they're effectively helping enforce a monopoly, and that's not a social contract I'd ever agree to.
This is a very interesting read. It's very rare that people on the internet agree to follow one thing without being forced.
Loads of crawlers don't follow it; I'm not quite sure why AI companies not following it is anything special. Really it's just there to stop Google indexing random internal pages that mess with your SEO.
It barely even works for all search providers.