Hm. One problem with frequently bouncing ideas off of ChatGPT: it's not just that it's a bit sycophantic and built to hook you on praise (I've read that a foolish optimism can be useful anyway - yes, the depressive view is often more accurate tactically, but strategically optimism gets more done, so long as you don't Dunning–Kruger yourself and overreach). I'm also worried about how it primes you to steamroll conversations in general. LLMs usually end their replies with suggestions for next steps, but it's fine to ignore them and/or change the subject entirely - a habit that would serve you badly when talking to humans.
Follow-up: my friend Nick B raised another good point:
their proclivity for spitting out a full-page response to a simple question makes it hard to view what's going on as a "conversation".