I have not played with it myself. I saw what others have asked it, and sometimes I thought exactly what you stated, and I was both amazed and afraid. But I have also seen its "logical reasoning" be quite wrong. That reasoning comes from learning the patterns of thought that others have had and talked about.
There is one kind of logical reasoning it does not have, or at least I have not seen it: it does not know when to answer with "I don't know". Either that answer is suppressed, or it has not been fed enough material containing "I don't know".
It does not know reality very well. It is designed to answer questions with full conviction, even when it has little knowledge of the topic. Then it makes things up -- the AI-dev jargon for this is "it hallucinates".
But in my experience, ChatGPT in December hallucinated less than GPT-3 before it, and ChatGPT in February hallucinates less than it did in December. So the progression is fast.
And yes, its reasoning is sometimes wrong or stupid. But sometimes it is not. It really can connect chains of thought from different areas. I invented questions about topics that I am sure nobody has ever discussed, and the answers made sense.