I've long thought that the wine-tasting business is little more than a pretentious racket. I'm delighted to discover that there appears to be proof of this. The Telegraph reports:
A few years ago, a frustrated vintner named Robert Hodgson, who had a background in statistics, thought of a way of testing the testers. He wondered what would happen if he supplied 100 wines for consideration but, without the judges knowing, slipped each wine to them three different times. Would they notice? Surely, with their trained and articulate palates, they’d at least be consistent in how they rated the identical drinks?
Having sought the agreement of the chief judge, G M “Pooch” Pucilowski, Hodgson ran his study for four consecutive years. When the results were calculated, they staggered him, disappointed Pooch and infuriated others. “Some people think I’m whacko,” says Hodgson. “Some say I’m full of c---. None is a scientist.” The Hodgson studies have shaken the wine world, calling into doubt the promises of its most elevated masters.
. . .
“We gave each judge a flight of 30 wines,” he says. “They’d all have three samples of the same wine, but they didn’t know it. These samples were arranged randomly. When I first saw the results I could hardly believe them. They scored the identical wines like they were different. It was staggering.” Out of a 20-point scale (scored, for esoteric reasons, between 80-100), an identical drink would typically vary by four points from one tasting to the next. “About 10 per cent of the judges were really bad,” he says. Their judgments of these wines ranged between 16-18 points. “But about 10 per cent were quite good. We thought we’d be able to use these judges as mentors to teach the others how they did it.”
Extraordinary as it was to have realised that 90 per cent of the judges didn’t appear to have any real consistency in their judgments, at least they’d narrowed down an anointed few who were consistent. Well, that’s what Hodgson thought until he tracked their results the following year.
“It turned out they couldn’t maintain that performance. One year they might be really good, the next they were just in the middle of the group.”
. . .
As you might expect from a statistician, Hodgson’s numbers are big enough to count. It’s hard to argue with data extracted from hundreds of judges and thousands of wines over several years. He might be the first academic to have treated the pronouncements of the wine gurus so rigorously, but he’s not the first to have come to an embarrassing conclusion. One French academic, Frédéric Brochet, decanted the same ordinary bordeaux into a bottle with a budget label and one with that of a grand cru. When the connoisseurs tasted the “grand cru” they rhapsodised about its excellence while decrying the “table” version as “flat”. In the US, psychologists at the University of California, Davis, dyed a dry white various shades of red and lied about what it was. Their experts described the sweetness of the drink according to whether they believed they were tasting rosé, sherry, bordeaux or burgundy. A similar but no less sobering test was carried out in 2001 by Frédéric Brochet at the University of Bordeaux. His 54 experts didn’t spot that the red wine they were drinking was white with added food colouring.
There's more at the link.
Adding food coloring to white wine to fool the judges - and succeeding. That's priceless!
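For the statistically curious, here's a toy Python sketch of why the second part of Hodgson's finding - last year's "good" judges falling back into the pack - isn't so surprising. Every number in it is made up for illustration (the noise level and "true" score are arbitrary assumptions, not Hodgson's data or his actual method): the point is simply that if triplicate consistency is mostly luck, the judges who look best one year should look average the next.

import random

random.seed(42)

NUM_JUDGES = 100
TRUE_QUALITY = 90          # assumed "true" score of the wine being poured
NOISE = 2.0                # assumed per-tasting scatter (std dev)

def judge_triplicate(noise):
    """Score the same wine three times with random per-tasting error,
    clamped to the competition's 80-100 scale."""
    return [max(80, min(100, random.gauss(TRUE_QUALITY, noise)))
            for _ in range(3)]

def spread(scores):
    """Range of the three scores for identical pours - a simple
    stand-in for a consistency measure."""
    return max(scores) - min(scores)

# Year 1: measure each judge's spread on three identical pours.
year1 = [spread(judge_triplicate(NOISE)) for _ in range(NUM_JUDGES)]

# The "good" judges: the 10% with the smallest spread in year 1.
ranked = sorted(range(NUM_JUDGES), key=lambda j: year1[j])
top_decile = ranked[:NUM_JUDGES // 10]

# Year 2: the same judges, same underlying skill (pure noise).
year2 = [spread(judge_triplicate(NOISE)) for _ in range(NUM_JUDGES)]

avg_all = sum(year2) / NUM_JUDGES
avg_top = sum(year2[j] for j in top_decile) / len(top_decile)
print(f"Year-2 average spread, all judges:        {avg_all:.2f}")
print(f"Year-2 average spread, year-1 top decile: {avg_top:.2f}")

Run it a few times: the year-one top decile comes out no better than anybody else the following year. That's plain old regression to the mean, which is just what Hodgson reported when his star judges couldn't repeat their performance.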
Peter
3 comments:
Priceless! :D
I've been following this and responding to it since '09 or so, because I use some related techniques in my research.
I direct interested people to the most easily accessible textbook in the field (http://books.google.com/books?id=BTR7VEJPDWAC -- '97 edition currently $0.97 at Amazon).
There are two aspects to his research. One is that wines that do well in one competition might not place in another. This is like saying evaluations of cars are suspect because a Corvette did poorly in one competition when compared to a Ferrari and a stripped racing version of a Porsche, but was a gold medal winner when compared to a Veloster and a Scion FRS.
Another aspect is that the competition judges didn't consistently rate three examples of wine from the same bottle -- but this evaluation was done at the end of the day, when the tasters had already evaluated some 60 wines, and it is quite well known in the field that consistency will drop under such circumstances. So this is probably great research for showing how county wine competitions should improve their judging procedures, but it in no way implicates the wine reviews you see in magazines or the excellent work on wine quality done at places like UC Davis.
I guess this just reinforces my belief that I should drink wine because I like it, and not because someone I've never met and whose opinion I couldn't be bothered to care about likes it.
Either that, or drink beer.