this post was submitted on 09 Jul 2023
76 points (100.0% liked)

Technology

Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, is suing both OpenAI and Meta in a US District Court over dual claims of copyright infringement.

[–] magic_lobster_party@kbin.social 1 points 1 year ago (1 children)

It’s difficult to tell to what extent books are encoded into the model. The data might be there in some abstract form or another.

During training the model is, in effect, instructed to plagiarize the text it's given: the objective is basically "guess the next word of this unfinished excerpt". It probably won't memorize all of its input, but there's a nonzero chance it memorizes some significant excerpts.
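The "guess the next word" objective described above is next-token prediction, trained with a cross-entropy loss. A minimal sketch (the toy vocabulary and probabilities are made up for illustration, not taken from any real model):

```python
import math

def next_token_loss(predicted_probs, actual_next_word):
    # Cross-entropy for one position: -log p(actual next word).
    # Training nudges the model to assign higher probability to the
    # word that really came next in the training text.
    return -math.log(predicted_probs[actual_next_word])

# Hypothetical model output for the context "Call me":
predicted = {"Ishmael": 0.6, "later": 0.3, "maybe": 0.1}

# If the training text continues "...Ishmael", the loss is low only
# when the model predicts that exact continuation.
loss = next_token_loss(predicted, "Ishmael")  # -ln(0.6) ≈ 0.51
```

This is why the objective can look like rewarded copying: the lowest possible loss on a passage is achieved by predicting it word-for-word.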

[–] FaceDeer@kbin.social 2 points 1 year ago (1 children)

It’s difficult to tell to what extent books are encoded into the model. The data might be there in some abstract form or another.

This is a court case so the accusers are going to have to prove it.

The evidence provided is that ChatGPT can produce two-page summaries of the books. The summaries are of unknown accuracy; I haven't read the books myself, so I have no idea how much of them is hallucinated. This is very weak evidence.

[–] Doomhammer458@kbin.social 1 points 1 year ago (1 children)

They have to prove it, but if the case gets far enough they will have the right to discovery, and then they can see for themselves what was included in the training data. That's why it might just settle quietly, to avoid discovery.

[–] FaceDeer@kbin.social 1 points 1 year ago

The important question is not what was in the training data. The important question is what is in the model. The training data is not magically compressed into the model like some kind of physics-defying ultra-Zip; the model does not contain a copy of the training data.
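A rough back-of-the-envelope makes the size mismatch concrete. The figures below are public ballpark numbers (LLaMA 65B and its reported 1.4T training tokens, with a common ~4 bytes/token estimate), not anything from this thread:

```python
# Ballpark figures only -- meant to show orders of magnitude,
# not an exact accounting of any specific model.
params = 65e9            # model parameters (LLaMA 65B)
tokens = 1.4e12          # reported training tokens
bytes_per_param = 2      # 16-bit weights
bytes_per_token = 4      # rough average bytes of text per token

model_bytes = params * bytes_per_param   # ~130 GB of weights
data_bytes = tokens * bytes_per_token    # ~5.6 TB of training text
ratio = data_bytes / model_bytes         # ~43x more data than model
```

If those estimates are anywhere near right, the training text is tens of times larger than the model itself, so verbatim storage of all of it is not physically possible.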

There are open-source language models out there; you can experiment with training them yourself. Unless you massively over-fit on a specific source document (an error that real AI training procedures do everything they can to avoid), you won't be able to extract the source documents from the resulting model.
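A toy sketch of why over-fitting makes a source extractable. This uses a tiny Markov chain rather than a neural net (a deliberate simplification), "trained" on exactly one hypothetical document, so every context has only one continuation and generation reproduces the document verbatim:

```python
# Over-fitting in miniature: a model fit to a single document
# deterministically regenerates that document word-for-word.
def train(text):
    words = text.split()
    model = {}
    for a, b, c in zip(words, words[1:], words[2:]):
        # Record which word followed each two-word context.
        model.setdefault((a, b), []).append(c)
    return model, (words[0], words[1])

def generate(model, start, max_words):
    a, b = start
    out = [a, b]
    for _ in range(max_words):
        continuations = model.get((a, b))
        if not continuations:
            break
        # Only one continuation was ever seen, so generation is
        # forced down the memorized path.
        a, b = b, continuations[0]
        out.append(b)
    return " ".join(out)

doc = "the quick brown fox jumps over the lazy dog"
model, start = train(doc)
```

With a large and varied corpus, each context maps to many possible continuations and this verbatim replay breaks down, which is the over-fitting-avoidance point made above.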