I've heard this theory. It feels like the wishful thinking of people who want AI to fail.
LLM processing will be a huge tool for pruning and labeling training sets. Humans can sample and validate the work. These better training sets will produce better LLMs.
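To make that concrete, here's a minimal sketch of what such a curation pipeline might look like. The `llm_quality_label` function is a hypothetical stand-in for whatever model call you'd actually make; the length check inside it exists only so the sketch runs.

```python
import random

def llm_quality_label(text: str) -> str:
    # Hypothetical stand-in for a real LLM call: in practice you'd prompt
    # a model with a labeling rubric ("is this coherent, factual, well
    # written?") and parse its answer. The word-count check below is just
    # a placeholder so this sketch is runnable.
    return "keep" if len(text.split()) > 5 else "discard"

def curate(corpus: list[str], sample_rate: float = 0.02) -> tuple[list[str], list[str]]:
    """Filter a corpus using LLM labels; queue a random sample for human review."""
    kept, human_review = [], []
    for doc in corpus:
        if llm_quality_label(doc) == "keep":
            kept.append(doc)
            # Humans spot-check a small random sample of the LLM's decisions.
            if random.random() < sample_rate:
                human_review.append(doc)
    return kept, human_review
```

The point of the random sample is that humans only need to audit a few percent of the labels to estimate the LLM's error rate, rather than reviewing the whole corpus.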
Who cares if a chunk of text was written by a human or not? Plenty of humans are shit writers who believe illogical or clearly incorrect things. The idea that human-origin text is inherently superior is a fantasy. ChatGPT is a better writer than 80% of humans today; in 10 years LLMs will be better than 99.9% of humans. There is no poison to be avoided.
ChatGPT has a recognizable style in its default mode, but you can already steer away from it with simple prompt tweaks. This whole thing is a non-issue.
I think OpenAI's own ChatGPT detector had double-digit false-positive and false-negative rates. As the diversity of LLMs grows, I expect detection to become even harder.