GPT-like AI is useful for what I'm doing (and for others in similar boats), because I'm doing lore and worldbuilding for a fictional setting that almost nobody but me knows about. If the lying machine lies, that's OK, because I can just choose whether or not to use what it gives me.
Actual research on real-world subjects should not use GPT-like AI. Researchers are trying to learn something genuinely unknown, and the lying machine is of course going to fill in the gaps with plausible-sounding bullshit.
Anyone who is both 1) paying attention and 2) not pushing an agenda already knows this.