Not really, though it's hard to know what exactly is or is not encoded in the network. It likely retains more of the salient, highly referenced content, since those passages would come up in its training set more often. But memorizing entire works is basically impossible just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT's mode of operation mostly discourages long-form rote memorization. It's a statistical model, after all, which works against storing exact, "objective" state.
Furthermore, GPT isn't coherent enough for long-form content. With its small context window, it just has trouble keeping track of big things like books. And since it doesn't have access to any "senses" beyond text broken into tokens, concepts like pages or counting give it trouble.
None of the leaked prompts really mention "don't reveal copyrighted information" either, so it seems the creators really aren't concerned — which you'd think they would be if it did have this tendency. It's more likely to make up entire pieces of content from the summaries it does remember.
IANAL, but aren't their licenses being respected right up until the code is put into a codebase? At least insofar as Google is allowed to display code snippets in the preview when you look up a file in a GitHub repo, or you are allowed to copy a snippet into a StackOverflow discussion or ticket comment.
I do agree regulation is a very good idea, in more ways than just citation, given the potential economic impacts that we seem clearly unprepared for.