Monday, October 23, 2017

Ranking America’s most popular cheap wines

In a recent article in the Washington Post, 29 of America’s favorite cheap wines, ranked, Dave McIntyre looked at some of the mass-market wines available in the USA, and tried to decide which of them a serious wine writer could recommend.

To do this, he "assembled a group of tasters to sample 29 chardonnays, cabernets and sweet red blends that are among the nation’s most popular, plus a few of my favorite and widely available Chilean reds". He then ranked the wines in descending order of preference (reds and whites separately), and provided a few tasting notes.

This is a risky business, not least because the tasting notes tended to be along the lines of "smells of sewer gas" and "boiled potato skins, sauced with rendered cough drops", which might suggest that the exercise was not taken too seriously by the tasting panel. However, a more important point is that the general populace of wine drinkers might not actually agree with the panel.

[Two graphs: scores and rankings of the chardonnay wines and of the red wines]

The latter point can be evaluated rather easily, by looking at any of those web sites that provide facilities for community comments about wine. I chose CellarTracker for this purpose. I looked up each of the 29 wines, and found all but three of them, the missing ones being some of the bag-in-box wines. For each of the 26 wines, I simply averaged the CellarTracker quality scores for the previous five vintages for which there were scores available (in most cases there were >50 such scores).
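
For anyone who wants to try this sort of averaging themselves, here is a minimal sketch in Python with pandas. It assumes a hypothetical file, cellartracker_scores.csv, with one row per community score and columns wine, vintage and score; CellarTracker does not provide such a file, so the scores would have to be collated by hand.

```python
import pandas as pd

# Hypothetical input: one row per CellarTracker community score,
# with columns 'wine', 'vintage' and 'score' (collated by hand --
# CellarTracker does not export such a file).
scores = pd.read_csv("cellartracker_scores.csv")

def average_recent_vintages(df, n_vintages=5):
    """Average the scores over the most recent n vintages that have any scores."""
    results = {}
    for wine, group in df.groupby("wine"):
        # keep only the n most recent vintages for which scores exist
        recent = sorted(group["vintage"].unique())[-n_vintages:]
        subset = group[group["vintage"].isin(recent)]
        results[wine] = subset["score"].mean()
    return pd.Series(results, name="avg_score")

avg_scores = average_recent_vintages(scores)
print(avg_scores.sort_values(ascending=False))
```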

I have plotted the results in the two graphs above, where each point represents one of the wines. McIntyre's preference ranking is plotted horizontally, and the average CellarTracker scores are shown vertically. McIntyre's Recommended wines are in green, and the Not Recommended wines are in red.
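
The plotting itself is straightforward. Here is a rough sketch with matplotlib, using made-up placeholder numbers rather than the real data (the table and its column names are my own, purely for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up placeholder numbers, purely for illustration;
# one row per wine, with McIntyre's rank, the averaged
# CellarTracker score, and whether he recommended the wine.
wines = pd.DataFrame({
    "rank":        [1, 2, 3, 4, 5, 6],
    "avg_score":   [86.2, 84.5, 87.1, 83.0, 85.4, 84.9],
    "recommended": [True, True, True, False, False, False],
})

colours = wines["recommended"].map({True: "green", False: "red"})
plt.scatter(wines["rank"], wines["avg_score"], c=colours)
plt.xlabel("McIntyre preference ranking")
plt.ylabel("Average CellarTracker score")
plt.title("Scores and ranking of the wines")
plt.show()
```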

It seems to me that there is not much relationship between the McIntyre ranking and the CellarTracker users' rankings. In particular, there is no consistent difference in CellarTracker scores between those wines that McIntyre recommends and those that he rejects. In other words, the preferences of the populace and the preferences of the tasting panel have little in common.
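
One way to put a number on "not much relationship" would be a rank correlation between the two orderings. Here is a small sketch with scipy, again using placeholder numbers; since McIntyre's ranking is in descending order of preference, strong agreement with the CellarTracker scores would show up as a clearly negative correlation.

```python
from scipy.stats import spearmanr

# Placeholder numbers again; rank 1 = most preferred by the panel.
# If the panel and the CellarTracker users agreed, better-ranked wines
# would get higher scores, giving a clearly negative correlation;
# values near zero indicate little relationship between the two.
mcintyre_rank     = [1, 2, 3, 4, 5, 6]
cellartracker_avg = [86.2, 84.5, 87.1, 83.0, 85.4, 84.9]

rho, p_value = spearmanr(mcintyre_rank, cellartracker_avg)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```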

So, what exactly was the point of the Washington Post exercise? It may be laudable for wine writers to look at those wines that people actually drink, rather than those drunk by experts (e.g. Blake Gray describes it as "outstanding service journalism"). However, we already have CellarTracker to tell us what actual wine drinkers think about the wines; and we have a bunch of specialist web sites that taste and recommend a wide range of inexpensive wines (see my post on Finding inexpensive wines). These sources can be used any time we want; we don't need a bunch of sardonic tasting notes from professionals as well.

Personally, I would go with the CellarTracker scores, as a source of Recommended wines.

Comments:

  1. David,

    You write:

    "For each of the 26 wines, I simply averaged the CellarTracker quality scores for the previous five vintages for which there were scores available (in most cases there were >50 such scores)."

    Are you implicitly assuming that these previous five vintages had similar weather from the growing season through to harvest, which would have a negligible effect on a vertical tasting preference vote for the wines?

    Alternatively, are you implicitly assuming that the "commodity quality" of these low-priced, mass-produced wines -- manipulated by their producers to be consistent year after year -- has an unvarying house style?

    If these wines had come from France (say Bordeaux) during the five consecutive vintages 2009, 2010, 2011, 2012 and 2013, I would anticipate large swings in their relative quality and house style.

    Whereas in California we have more consistent weather from the growing season through to harvest, over a comparable five-year period.

    Bob

  2. Hej Bob!

    The averages don't really assume anything about the wines. They are simply a convenient means of summarizing the perceived quality of the wines over recent years. My intention is that they might be a more useful assessment of the wines than a single tasting from a panel.

    As for vintage consistency, most of the wines seem to have consistent CellarTracker scores from year to year. I presume that the producers are deliberately trying to provide consistent wines, so that the drinkers know what to expect from year to year.

    David

  3. Interesting approach... looking at the CellarTracker data (which, I agree, is a good representation of reality), the points are actually well-banded in the 82-88 point range. I think that if you were to plot all CellarTracker scores, you would probably find five groups: 93 pt or more (very small), 89-92 (large #), 85-88 (very large), 81-84 (small), and less than 80 (small). Wines are rated outstanding, very good, good, meh, or bad. Ergo, most of these wines are good or meh. Makes good sense to me and my palate. Trying to differentiate between 83 and 86 pts - that's useless, as there isn't the consistency of ratings to any absolute scale that would justify the separation. GIGO.

  4. Hej Joel!

    Thanks for your comment.

    I have never tried pooling all of the CellarTracker scores, although I have looked at them for individual wines (https://winegourd.blogspot.com/2016/12/are-there-biases-in-community-wine.html). Like you, I am not sure how people decide on a score of 83 versus 86. Maybe I should look into it a bit more.

    David
