Well, it is technically new content, since the neural net effectively outputs a blend of whatever training examples are relevant to the prompt. The risk is that if a particular topic has little coverage in the training data, the model is more likely to reproduce content that is nearly identical to that training data if queried the right way. The more data it's trained on, the lower that risk, though it's always a possibility.
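To make the memorization concern concrete, here's a rough sketch (the snippet, n-gram size, and threshold are all illustrative assumptions, not a real detection tool) of how you could flag output that overlaps heavily with a known source text using a simple n-gram check:

```python
# Rough sketch: flag model output that overlaps heavily with known source text.
# The example strings and n-gram size below are illustrative assumptions only.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, source: str, n: int = 5) -> float:
    """Fraction of the output's n-grams that also appear verbatim in the source."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(source, n)) / len(out_grams)

if __name__ == "__main__":
    training_snippet = "the quick brown fox jumps over the lazy dog near the river bank"
    model_output = "the quick brown fox jumps over the lazy dog near the old barn"
    ratio = overlap_ratio(model_output, training_snippet)
    # A high ratio suggests the output is close to verbatim regurgitation.
    print(f"verbatim 5-gram overlap: {ratio:.0%}")
```

The point is just that when the training data on a topic is thin, the "blend" the model produces can collapse toward a single source, which is exactly what a check like this would catch.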
Also, to be clear, I agree that it's not a good idea to write full posts with ChatGPT. It's much better as a research/assistant tool: good for helping with planning and for surfacing points of view you might miss on your own.