Much has been written, and many debates have taken place, about how to rate wine. The 100 point scale now seems to be regarded as "old guard," ineffective at communicating a wine's quality. There are of course other rating systems, and their effectiveness is also debatable. I don't want to spend time here summarizing the various arguments, and I don't have a definitive opinion on the best rating system for wine. But I do have some thoughts that I want to share.
I think that some wines are better than others. That might sound silly to say, but there are folks who think that endeavors in the world of art and craft cannot and should not be measured in an absolute sense. They point out that one person's Mozart is another's Black Sabbath, and that both are equally excellent to the individual beholder. And it is true that we each have our own preferences regarding things like paintings, film, music, wine, roast chicken, and so on. It's romantic to say that "the perfect wine is the one you drink with your lover at sunset in a cafe overlooking the ocean." But there is a difference between personal preference and objective quality, and this is the whole point of professional criticism. The critic is supposed to be able to put their personal preferences and experiences aside and evaluate based on a set of established criteria, and then tell the rest of us something definitive about objective quality. What I'm saying here is that DRC is better than Yellowtail. It is higher quality wine. There may be people who prefer the smell and taste of Yellowtail, or who cannot distinguish between the two, and those people are welcome to their preferences and should go forth in peace and be happy. But one is a better wine than the other, regardless of personal opinion or the cafe at sunset context.
If you agree that there is objective quality to wine, then you probably agree that there must be some way for a critic to measure a wine's quality and communicate this to the rest of us. This is the hard part.
Some things are easy to rate - things that can be expressed finitely in purely mathematical terms. If I wanted to know which brand is the best AA battery available on the market, I could find out the average number of minutes each one lasts, determine the average price of each brand, and create a statistic that tells me how many minutes-per-dollar-spent I can expect from each battery.
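The battery comparison above can be sketched in a few lines. The brand names, runtimes, and prices here are made up purely for illustration; only the minutes-per-dollar idea comes from the text.

```python
# Hypothetical AA battery data: average runtime in minutes and average price.
batteries = {
    "Brand A": {"minutes": 290, "price": 1.10},
    "Brand B": {"minutes": 340, "price": 1.50},
    "Brand C": {"minutes": 260, "price": 0.85},
}

# Minutes of runtime per dollar spent: a single number, higher is better.
value = {
    brand: stats["minutes"] / stats["price"]
    for brand, stats in batteries.items()
}

# Because the output is one number per brand, a strict ranking is trivial.
best = max(value, key=value.get)
for brand, v in sorted(value.items(), key=lambda kv: -kv[1]):
    print(f"{brand}: {v:.0f} minutes per dollar")
```

The point is that the ranking falls out mechanically once the metric is defined; no judgment is required after that.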
Rarely is it this simple, however, even when things can be expressed purely in mathematical terms. Think about rating cars or schools or baseball hitters. How do we know which hitter is the best? Batting average is a start - some are higher than others, and there is a highest each year. But is the person with the highest batting average the best hitter? Is someone who hits 10 singles in 20 trips to the plate a better hitter than someone who hits 8 doubles in 20 trips to the plate? What about someone who hits only 5 singles in 20 trips to the plate, but whose singles come at crucial points in the game and score runs for the team? It is possible to determine which hitter has the highest batting average or hit for the most total bases in a season, but determining which is the best hitter requires more than statistics.
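A quick calculation shows why the two standard statistics disagree about the hypothetical hitters above. Batting average counts hits per at-bat; slugging percentage counts total bases per at-bat. (The "clutch" hitter is deliberately left out, since no counting stat captures that.)

```python
# Two common counting stats, computed per at-bat.
def batting_average(hits, at_bats):
    return hits / at_bats

def slugging(total_bases, at_bats):
    return total_bases / at_bats

# Hitter 1: 10 singles in 20 at-bats -> 10 hits, 10 total bases.
# Hitter 2: 8 doubles in 20 at-bats -> 8 hits, 16 total bases.
print(batting_average(10, 20), slugging(10, 20))  # 0.5 0.5
print(batting_average(8, 20), slugging(16, 20))   # 0.4 0.8
```

Hitter 1 wins on batting average (.500 vs .400) while Hitter 2 wins on slugging (.800 vs .500), so even fully quantified data doesn't settle which hitter is "best."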
Painting, film, cooking, making music, wine...those things don't easily lend themselves to measurement in mathematical terms. But we have inherited a system of wine criticism that attempts to impose a mathematical framework on wine evaluation. The 100 point scale requires us to accept the idea that it is possible to measure something about wine, to assign a numeric value to one or more of its traits and arrive at a finite conclusion: that there is an objective qualitative difference between a 93 and a 92 point wine. Perhaps there is, but I'd like to see the rubric used to arrive at such a conclusion - how are those points generated?
To me, it makes sense not to try to impose finite mathematical rating systems when the subject matter does not itself generate outputs that can be measured using numbers. Why not relieve ourselves of the burden of ordering wines in such tiny groups (87 points, 88 points, 89 points, etc.) and instead work within larger groups, accepting that there are no exact measurements for wine quality? I would prefer a system in which the professional wine critic tells me which wines are of the highest quality, which are of high quality, which are above average, and so on, without attempting to distinguish between wines within each group.
Which are the highest quality wines of Meursault? For me, it would be enough to read a critic who tells me (and I'm making this up) that Coche-Dury, Comte Lafon, Pierre Morey, and Roulot make the highest quality wines of Meursault; François Jobard, Pierre Matrot, and Pierre Yves Colin-Morey make high quality wines; and so on. I also would like to read about which wines by Comte Lafon, for example, are the best. And I'm frustrated with the fact that Perrières gets 94 points, Charmes and Genevrières get 91-93 points, Gouttes d'Or gets 90-92 points, and Clos de la Barre gets 89-91 points. From that I understand that the critic rates the wines generally in that order (and every year, they all do), but I still don't understand the value of one point. Perrières is 94 points and Charmes is 93 points, so Perrières is one point better. But what generated that extra point? I accept the idea that Perrières might objectively be a better wine, but not the idea that the critic who awards the additional point experienced something in drinking the wine that can be measured and expressed by a 94 as opposed to a 93.
My guess is that Perrières, Charmes, and Genevrières are all highest quality wines. Perhaps we don't need to take it any further than that - they are all highest quality. There may in fact be some objective truth - one of them might be better than the others in a certain vintage, but it seems to me that the sensations the drinker experiences in coming to this conclusion are not quantifiable.
How, then, should the professional critic explain the criteria for "highest quality," "high quality," and so forth? Sorry, but I'm asking questions and don't have answers. Here, though, is one that makes a lot of sense to me (from Peter Liem's ChampagneGuide.net):
* One star denotes a wine of particular quality and distinctiveness of character, one that stands out among its peers in some significant way.
** Two stars means that this wine is outstanding in its class, showing a marked quality, expression and refinement of character.
*** Three stars indicates a champagne of the highest class, demonstrating a completeness and expression of character that places it among the very finest wines within its context. Needless to say, these wines are uncommon.
This sort of system puts wines in large groups and requires me to do some thinking on my own, and I like that. Really, he's just telling me which groups of wines he thinks are best - which are very good, which are good, and which are not as good - and the rest is up to me. There are over 1,000 wines reviewed on Peter's site, and 61 of them are awarded three stars. I'm sure Peter could tell me his favorites among those 61, but he would laugh at the idea that there is one "best" wine within this three star group, that it is possible to construct a strict ordering of those 61 wines. That said, he could explain what it is about each of those 61 wines that merits it being in the three star group, and why each of the 251 two star wines is not in the three star group.
I understand that my analysis here is incomplete, and I'm not trying to start an argument. I guess I'm just saying that in trying to impose a strict mathematical ordering on wine evaluation, we are barking up the wrong tree. If you have something thoughtful to say about this, I'd love to hear it. But spare us from rants about points and the evil culture of selling wine, and also from salt of the earth declarations about how beautiful the simplest country wine can be with fish just-plucked-from-the-sea. I'm starting with the notion that some wines are objectively better than others, and that there must be some way of measuring this. Just not the 100 point scale we've been using. How can this objective quality best be measured? And how should this measurement be communicated?