And then, I had an epiphany: I was probably looking at the output of an ML-based language model, such as GPT-3. Such models have a remarkably good command of a variety of niche topics but lack higher-order critical thought. They are prone to vivid confabulation, occasionally spew out self-contradictory paragraphs, and often drift off-topic, especially when tasked with generating longer runs of text.

Source: Fake books written by computers

This is going to get worse before it gets better.