I'm not a huge drinker. I can't stand beer. I prefer liqueurs and on rare occasions, when the food pairing is appropriate, I may have a glass of wine (although I generally don't keep any stocked in my personal liquor cabinet).
I definitely have a few brands I prefer (the St. James peach wine can be very good, although it's not always consistent), but generally, if experimenting, I try to go by the ratings listed in the store. In general, these ratings have done very well for me. My absolute favorite brand of amaretto (Gozio) was one I discovered through its high rating.
However, a recently published study has shown that the wine ratings (which are presumably produced in the same manner as those for other liquors) may be substantially flawed.
In general, the ratings had a large spread, even from the same taster. Most tasters rated the same wines within +/- 4 points, but some had deviations as large as +/- 10 points! And before you chalk that up to differences in individual skill, the study also showed that the same tasters were inconsistent from year to year.
In short, the ratings aren't what they should be, and there are so many wines clustered in the high 80s and 90s that such a substantial spread makes the system virtually worthless from a statistical standpoint. And the wine makers aren't surprised.
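To see why the spread matters, here's a quick back-of-the-envelope simulation. The specific numbers are my own assumptions for illustration, not figures from the study: two wines whose "true" quality differs by 4 points, each scored with random taster noise of +/- 4 points.

```python
import random

random.seed(42)

# Hypothetical illustration (numbers assumed, not taken from the study):
# wine A's "true" score is 88, wine B's is 92, and each tasting adds
# uniform noise of +/- 4 points, roughly matching the typical spread.
TRIALS = 10_000
flips = 0
for _ in range(TRIALS):
    score_a = 88 + random.uniform(-4, 4)  # the "lesser" wine
    score_b = 92 + random.uniform(-4, 4)  # the "better" wine
    if score_a >= score_b:
        flips += 1

print(f"The 88-point wine out-scored the 92-point wine in "
      f"about {100 * flips / TRIALS:.0f}% of tastings")
```

Under these assumptions the nominally lesser wine wins roughly one tasting in eight, and with a +/- 10 point spread the two would be nearly indistinguishable. That's what "worthless from a statistical standpoint" looks like when most wines sit within a few points of each other.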
However, wine makers aren't ready to chuck the ratings altogether. They admit that a high rating still gives people a sort of placebo effect: a high rating makes people perceive the wine as better.
But with this new information, I think I may try experimenting a bit more when I pick up wine, and sticking with the ones I've come to like, regardless of the rating.