My experience has been different: I found Dragon, back in the day, to be about 90-95% accurate, but today I find Google to be about 50% accurate. And that's 50% accuracy on a limited set of voice commands, which is a much lower bar than the full voice dictation Dragon offered.
Dragon was trained for me; Google is trained for some mythical "everyone".
Who am I to argue with your experience? I can only offer the current literature on the topic [0]. (They report a 5.1% word error rate.)
> These systems typically use deep convolutional neural network (CNN) architectures ... driving the word error rate on the benchmark Switchboard corpus down from its mid-2000s plateau of around 15% to well below 10%.
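For anyone unfamiliar with the numbers being thrown around: word error rate (WER) is just word-level edit distance (substitutions, insertions, deletions) divided by the length of the reference transcript. A minimal sketch, assuming a plain dynamic-programming Levenshtein distance (the function name is mine, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six: WER ≈ 16.7%
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note that WER says nothing about *whose* speech the error rate was measured on, which is exactly the Dragon-vs-Google point: a benchmark average over "everyone" can hide much worse rates for any individual speaker.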