this post was submitted on 04 Oct 2024
6 points (87.5% liked)

The Agora

1598 readers
1 user here now

In the spirit of the Ancient Greek Agora, we invite you to join our vibrant community - a contemporary meeting place for the exchange of ideas, inspired by the practices of old. Just as the Agora served as the heart of public life in Ancient Athens, our platform is designed to be the epicenter of meaningful discussion and thought-provoking dialogue.

Here, you are encouraged to speak your mind, share your insights, and engage in stimulating discussions. This is your opportunity to shape and influence our collective journey, just like the free citizens of Athens who gathered at the Agora to make significant decisions that impacted their society.

You're not alone in your quest for knowledge and understanding. In this community, you'll find support from like-minded individuals who, like you, are eager to explore new perspectives, challenge their preconceptions, and grow intellectually.

Remember, every voice matters and your contribution can make a difference. We believe that through open dialogue, mutual respect, and a shared commitment to discovery, we can foster a community that embodies the democratic spirit of the Agora in our modern world.

Community guidelines
New posts should begin with one of the following:

Only moderators may create a [Vote] post.

Voting History & Results

founded 1 year ago
MODERATORS
 

I'm curious to get all of your thoughts on this. It's no secret that AI has been growing explosively over the last year; new models seem to be released almost every other day. Many of these models need a tremendous amount of data to train on, and it's no secret that Reddit sells its users' interactions to the highest bidder. That was part of the reason behind the API limit changes that drove many of us to the fediverse in the first place.

My question is: how does everyone feel knowing that multi-billion-dollar companies are scraping this instance and others, creating extra load on the servers for nothing more than their own profit?

What can be done to continue providing a free, open network to users but prevent those who are only looking to profit from the data?

edit: fixed title typo

top 7 comments
[–] [email protected] 4 points 1 day ago

I don't care tbh. I am writing everything here as if everyone at any time could read it.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago) (2 children)

Scrape*, for your title.

Meanwhile, preventing unpaid scraping was a big part of Reddit's rationale for its enshittification, i.e., charging for API access.

I would rather train an AI indirectly for free than ask random Instances to run interference, which IRL works out to be pay-walling and selling user content.

By asking Lemmy Instances to "prevent AI from seeing my content", all you are really asking them to do is to slap a price-tag on it, and hire lawyers to pursue companies/users that don't pay. Not pay you or me, but them.

[–] [email protected] 2 points 1 day ago

Yeah, I'm more worried about the output of AI getting involved than anything regarding the input, at least as far as a public forums go.

[–] Zachariah 1 points 1 day ago

typos are mportant to undermine the scrapping

[–] AbouBenAdhem 2 points 1 day ago* (last edited 12 hours ago)

My main issue with the Reddit deal (and similar data grabs) is that major AI companies are hoarding user-generated content to give themselves a competitive advantage. I have less of an issue with them using non-exclusive public content like Wikipedia, fediverse comments, and public-domain historical works.

[–] [email protected] -1 points 1 day ago* (last edited 1 day ago) (1 children)

Server admins could add a policy that any AI scraping requires the prior permission of the copyright holders of the content (i.e., the users) when the scraping is done to exploit the data for profit. Also, robots.txt could be used to forbid AI HTML scraping.
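For the robots.txt route, a minimal sketch of what an instance could serve (GPTBot and CCBot are the user-agent tokens OpenAI and Common Crawl publicly document; note that compliance with robots.txt is entirely voluntary on the crawler's part):

```
# Disallow known AI-training crawlers site-wide;
# only cooperative bots will respect this.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else may crawl normally.
User-agent: *
Disallow:
```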

I don't think that restrictions should be added at the protocol level, but maybe some declarative tags would be fine:

{
"rich": "eat",
"about-meta": "fck-genocidal-and-youth-suicidal-promoter-zuckenberg",
"ai": "not-for-greed"
}
[–] [email protected] 2 points 1 day ago

I think this would be the only way. It would be interesting to know how much traffic or how many requests this instance gets, to see if it's a real problem. Server admins could implement stricter rate limiting for non-members if it becomes an issue. They could likely even implement something that would let them sort out which of their members are making the most requests, to get some visibility. I don't believe that's possible today from within the platform anyway.
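Stricter rate limiting for anonymous clients could be done at the reverse proxy rather than inside Lemmy itself. A rough nginx sketch (the zone name, rate, hostname, and upstream are all made up for illustration, not taken from any instance's actual config):

```nginx
# Throttle each client IP to ~5 requests/second on API routes;
# short bursts are queued, sustained scraping gets HTTP 429.
limit_req_zone $binary_remote_addr zone=api_perip:10m rate=5r/s;

server {
    listen 443 ssl;
    server_name lemmy.example;  # hypothetical instance hostname

    location /api/ {
        limit_req zone=api_perip burst=20;
        limit_req_status 429;
        proxy_pass http://lemmy_backend;  # upstream defined elsewhere
    }
}
```

Logged-in members could be exempted by keying the zone on a session variable instead of the raw IP, but that is a design choice each admin would have to weigh.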

There are really two issues here:

  1. Whether users are OK with, and even aware, that their public conversations are almost certainly going to be picked up and used to train future models.
  2. Whether the Lemmy instance admins are OK with potentially half of their traffic going to bots that are hoarding and scraping the data, causing additional load on the servers.

Maybe @[email protected] would be open to sharing some insight into how many requests the instance receives per month and how many resources that consumes.