The Agora
In the spirit of the Ancient Greek Agora, we invite you to join our vibrant community - a contemporary meeting place for the exchange of ideas, inspired by the practices of old. Just as the Agora served as the heart of public life in Ancient Athens, our platform is designed to be the epicenter of meaningful discussion and thought-provoking dialogue.
Here, you are encouraged to speak your mind, share your insights, and engage in stimulating discussions. This is your opportunity to shape and influence our collective journey, just like the free citizens of Athens who gathered at the Agora to make significant decisions that impacted their society.
You're not alone in your quest for knowledge and understanding. In this community, you'll find support from like-minded individuals who, like you, are eager to explore new perspectives, challenge their preconceptions, and grow intellectually.
Remember, every voice matters and your contribution can make a difference. We believe that through open dialogue, mutual respect, and a shared commitment to discovery, we can foster a community that embodies the democratic spirit of the Agora in our modern world.
Community guidelines
New posts should begin with one of the following:
- [Question]
- [Discussion]
- [Poll]
Only moderators may create a [Vote] post.
Server admins could add a policy that any AI scraping requires the prior permission of the copyright holders of the content (i.e., the users) when the scraping is done to exploit the data for profit. Also, robots.txt could be used to forbid AI crawlers from scraping the HTML.
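A minimal robots.txt along those lines might look like the sketch below. The user-agent tokens are a couple of crawlers that publish their identifiers; which ones an instance actually lists is the admins' call, and crawlers are free to ignore the file, so it's a polite notice rather than an enforcement mechanism:

```
# Ask known AI crawlers to stay out of the whole instance
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everything else (search engines, federation peers, regular clients) is unaffected
User-agent: *
Disallow:
```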
I don't think restrictions should be added at the protocol level, but maybe some declarative tags would be fine:
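For illustration, such declarative tags could look like the informal "noai" values some sites already emit in their HTML head; the exact names here are a de facto convention rather than a standard, so treat them as an assumption:

```html
<!-- Hypothetical per-page opt-out for AI training, using the informal "noai" convention -->
<meta name="robots" content="noai, noimageai">
```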
I think this would be the only way. It would be interesting to know how much traffic or how many requests this instance gets, to see if it's a real problem. Server admins could implement stricter rate limiting for non-members if it becomes an issue; a rough sketch is below. They could even implement something that lets them sort out which of their members are making the most requests, to get some visibility. I don't believe this is possible today from within the platform anyway.
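As a sketch of what stricter rate limiting for non-members could look like, assuming the instance sits behind nginx and authenticated users carry a `jwt` cookie (the cookie name, zone size, and rates are all illustrative assumptions, not this instance's actual setup):

```nginx
# http-context snippet; requests with an empty key are not counted by limit_req,
# so clients presenting an auth cookie are exempt from the limit
map $cookie_jwt $anon_key {
    ""      $binary_remote_addr;   # anonymous: limit per client IP
    default "";                    # logged-in member: not limited here
}

limit_req_zone $anon_key zone=anon:10m rate=30r/m;

server {
    location / {
        limit_req zone=anon burst=10 nodelay;
        proxy_pass http://127.0.0.1:8536;  # Lemmy backend's default port; adjust as needed
    }
}
```

Per-member visibility would still need something on the application side, since the reverse proxy only sees cookies and IPs, which matches the point that this isn't really possible from within the platform today.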
There are really two issues here:
Maybe @TheDude@sh.itjust.works would be open to sharing some insights regarding the number of requests the instance receives per month and how many resources they take up.
I don't need to be paid, I just don't want corpos profiting off of my data for any reason, so robots.txt works fine for me. The reason that's enough in my eyes: I don't hate capitalism, nor am I an anarchist or tankie. This is about halting enshittification, and one other reason:
"AI is fundamentally about giving the wealthy access to skill while depriving the skilled of the means to access wealth."
In short, eat the rich because they've ruined everything. They want capitalism? Then no more "laissez-faire" bullshit, you pay your fucking 90% tax on every dollar above 1 mil and shut it. Nobody needs 15 different colors of common Lamborghini and 1 Lambo out of less than 500. Nobody needs 5000 days of going to the mall to buy a dress every day. Nobody needs a personally-owned A380 private jet. Nobody needs 25%, 25 fucking percent, return on investment.
That also applies to the internet and tech companies as much as to the real world and banks. When I used Reddit, I never told them they could lock access behind a paywall, and they know it; they also knew I can't afford an international court case against an American tech giant. Now 90% of Google is locked behind Reddit, a company which shadow-banned me well before the API issues.
As long as Reddit, Google, Samsung, Microsoft, etc. can't legally make free money off of this, I'm happy.