Good comment:

I feel like we know why ChatGPT makes things up: all it knows is the next most likely word based on a statistical model. To ChatGPT, there's no difference between saying something true and saying something false. There isn't even a difference between saying something found in its training set and saying something not found in its training set. There's just the next step in a statistical progression.

Source: OpenAI peeks into the black box of neural networks with new research
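The comment's point can be sketched in a toy next-token model. This is a purely illustrative sketch, not how ChatGPT actually works: the context, vocabulary, and probabilities below are made up, and real models operate over learned distributions rather than a hand-written table. The mechanism it shows is the one the comment describes: sampling picks a continuation by probability alone, with no notion of true versus false.

```python
import random

# Toy "language model": a context maps to a probability distribution
# over possible next words. A true continuation ("Paris") and a false
# one ("Mars") are just entries with different probabilities -- nothing
# in the data structure marks one as factual. (Hypothetical numbers.)
model = {
    "The capital of France is": {"Paris": 0.85, "Lyon": 0.10, "Mars": 0.05},
}

def next_word(context):
    """Sample the next word from the context's distribution."""
    dist = model[context]
    r = random.random()
    cumulative = 0.0
    for word, p in dist.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding

# The same sampling step that usually yields "Paris" emits the false
# continuation "Mars" about 5% of the time, by identical mechanics.
print(next_word("The capital of France is"))
```

In this sketch, "making things up" is not a separate failure mode; it is the normal sampling path landing on a lower-probability continuation.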