Without people sharing their prompts, it's impossible to judge whether they are skilled or not, and complaints or claims like "it worked with this prompt" can't be validated without seeing the output either.
Maybe there's a clue in there as to why these experiences seem so different. I'm glad GPTs don't get frustrated.
I've spent thousands of hours, literally, learning the ropes, and I continue to hone the skill. The skill ceiling for prompting is much higher than it ever was for Google-fu.
Yes, LLMs can actually help with coding. But it’s not magic. There are limits. And you get better with practice.