No, that’s not how it works. It stores learned information like “word x is more likely to follow word y than word a” or “people from country x are more likely to consume food a than b”. That is what is distributed when the AI model is shared. To learn that, it reads books zillions of times and updates its table of likelihoods. Just like an artist might listen to a Lil Wayne album hundreds of times, and each time learn a little more about his rhyme style or how his beats work. It’s more complicated than that, but that’s a layperson’s explanation of how it works. The book isn’t stored in there somewhere, and the book’s contents aren’t transferred to other parties.
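To make that “table of likelihoods” idea concrete, here’s a toy sketch in Python. This is not how real LLMs work internally (they use neural networks, not literal tables), but the principle is the same: after training, the model holds statistics about what tends to follow what, not the text itself. The tiny corpus here is made up for illustration.

```python
from collections import defaultdict

# A tiny made-up "training corpus"
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Convert raw counts into likelihoods: P(next word | previous word)
probs = {
    prev: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
    for prev, nexts in counts.items()
}

print(probs["the"])  # "cat" is twice as likely as "mat" after "the"
```

Note that `probs` contains only fractions like 2/3 and 1/3, and nothing resembling the original sentences; that statistical summary is what gets shared when a model is distributed.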
The learning model is artificial, whereas a human is sentient. If a human learns from a piece of work, it’s fine for them to emulate its style in their own work. Sample that work directly, however, and the original artist is due compensation. This was a huge deal in the late 80s, when electronic music sampled earlier recordings, and there are several copyright cases that backed the original owners’ claims for royalties.
The lawsuits allege that the models used copyrighted work to learn. If that is so, writers are due compensation for their copyrighted work.
This isn’t litigation against the technology. It’s litigation around what a machine can freely use in its learning model. Had ChatGPT, Meta, etc., used works in the public domain this wouldn’t be an issue. Yet it looks as if they did not.
EDIT
And before someone mentions that the books may have been bought and then used in the model, it may not matter. The Birthday Song is a perfect example: its copyright kept several restaurant chains singing other tunes until it was invalidated in 2016. Every time the AI reproduces the copied work in its output, it may be subject to copyright.
I’ve glanced at these a few times now, and there are a lot of ifs, ands, and buts in there.
I’m not understanding how an AI itself infringes copyright, since (for GPT specifically) it has to be directed in its creation at this point. How is that any different from me using a program that finds a specific piece of text and copies it into my own document? In that case the document would be presented by me, so I would be the infringer, not the software. AI systems are, for the time being, simply software and incapable of infringement. And suing a company that makes an AI simply because it used data to train its software is not infringement, as the works are not copied verbatim from their original source unless a user specifically requests it. That would put the infringement on the user.
There’s a bit more nuance to your example. The company is liable for building a tool that allows plagiarism to happen. That’s not down to how people are using it, that’s just what the tool does.
So a company that makes lock-picking tools is liable when a burglar uses them to steal? Or a car manufacturer is liable when someone uses their car to kill? How about knives, guns, tools, chemicals, restraints, belts, or rope? I could go on through nearly every word in the English language, yet none of those manufacturers can be sued for someone misusing their products. You’d have to show malicious intent, which I just don’t see as possible in the context they’re seeking.
The reason GPT is different from those examples (not all of them, but I’m not going into that) is that in those cases the malicious action is on the part of the user. With GPT, the tool itself gives you an output that it has plagiarised. The user can take that output and submit it as their own, which is further plagiarism, but that doesn’t absolve GPT. The problem is that GPT doesn’t cite its sources, which would be very helpful for understanding where its information comes from and for fact-checking it.
The creator of ChatGPT is sentient. Why couldn’t it be said that this is their expression of the learned works?
https://crsreports.congress.gov/product/pdf/LSB/LSB10922