Query: Grange Scores - Relative?
When looking through JO Wine Annual 2006, are scores assigned to Grange relative to other Grange vintages or are they relative to all wines in the annual?
For example, JO Wine Annual 2006: Seppelt's Chalambar Shiraz 2002 scored 96 points, cost ~$25.
JO Wine Annual 2006: Penfolds Grange 1999 scored 96 points, cost ~$300+.
My question is: do these equal scores suggest that JO feels the Chalambar tastes as good as the Grange? Or is the 96 points for the Grange relative to previous Grange vintages, so that it is still actually far better tasting than the Chalambar?
Cheers fellers!
Hi, new to the forum but an interested reader over the last couple of weeks...
I have occasionally wondered this also - another way of expressing it would be:
is the Grange 2000 vintage (87 points) equal in merit to the Jacob's Creek Merlot (87 points)?
Hardly think so, regardless of the bagging the 2000 Grange has received...
I suspect that most critics don't taste wine samples in a blind/double-blind format, so the ratings are likely to be very subjective and based on other wines of the same label. That said, I expect that the critics would evaluate what's in the glass they're tasting.
Gotta go - The Iron Chef's just started.
Cheers all
daz
Daryl Douglas wrote:I suspect that most critics don't taste wine samples in a blind/double-blind format, so the ratings are likely to be very subjective and based on other wines of the same label.
Very true. JO does not generally taste blind, at least for his own evaluations.
That said, I expect that the critics would evaluate what's in the glass they're tasting.
I guess you didn't get to proofread this, because of the Iron Chef.
Gotta go - The Iron Chef's just started.
"It is very hard to make predictions, especially about the future." Samuel Goldwyn
We all know (I think) there is a "fudge factor" in wine scores. This is largely due to the highly subjective nature of wine tasting (especially unblinded tasting) and the enormous number of uncontrolled variables. I think we all unconsciously or consciously take this into account when we read the scores and look at the source of the scores (the latter being very important).
One way of conceptualising this is to use the statistical concept of "confidence limits". I won't go into the detail of this statistical concept (and I am not a statistician) but it is a way of quantifying measurement error. To use the example in this thread, we start with the score for Grange of 87 pts. There is no way of knowing the true confidence limits for this measure in this instance, because it would require a statistical analysis of JO's scores for this wine on repeated occasions tasted in a blinded way. But let's say for the sake of the illustration that the 95% confidence limits for this measure are plus or minus 2. What this means is that there is a 95% probability that the "true" score (one free of any major bias apart from JO's wine preferences) is between 85 and 89.
So this is the fudge factor in wine scores which I think we all recognise. If we then introduce a similar fudge factor (or confidence limits) to the Jacob's Ck merlot, and note there is a 95% chance of its true score being between 85 and 89, occurrences of the type highlighted in this thread are easier to reconcile.
I agree with TORB's general thrust on this issue of scoring. The more I look at the whole concept, the more flawed it all appears. If one were to do it properly, it would actually require a lot of blinded tasting and complicated statistical analysis using tools such as confidence limits. I don't see this happening, but scoring is not going to go away, despite its problems. I just always keep in mind the fudge factor although quantifying it is guesswork and will vary from taster to taster.
If you think that wine critics are so good that their scores would never vary a single point for the same wine tasted under different circumstances, then there is no fudge factor and confidence limits are zero. I don't believe this is the case.
As well as looking at confidence limits for the same taster you can look at them for different tasters. The latter, if calculated, are likely to be so wide that it would make comparing wines on the basis of scores from different tasters nonsensical most of the time.
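To make the fudge-factor idea concrete, here is a minimal Python sketch. The repeated blind-tasting scores are entirely hypothetical (no such data exists for JO); the point is simply how a rough 95% confidence interval around a single published score could be estimated, and how easily the intervals for two different wines can overlap.

```python
# Minimal sketch with made-up data: a rough 95% confidence interval around
# a critic's score, estimated from imagined repeated blind tastings.
import statistics

def confidence_interval(scores, z=1.96):
    """Rough 95% CI for the mean score, using a normal approximation."""
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error of the mean
    return mean - z * sem, mean + z * sem

# Entirely hypothetical scores; only the spread matters for the illustration.
grange_2000 = [87, 89, 85, 88, 86, 87]
jacobs_creek_merlot = [87, 85, 88, 86, 89, 84]

for name, scores in [("Grange 2000", grange_2000),
                     ("Jacob's Ck Merlot", jacobs_creek_merlot)]:
    low, high = confidence_interval(scores)
    print(f"{name}: mean {statistics.mean(scores):.1f}, "
          f"95% CI roughly {low:.1f} to {high:.1f}")
```

With overlapping intervals like these, two very different wines landing on the same published score is exactly what you would expect from time to time.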
"It is very hard to make predictions, especially about the future." Samuel Goldwyn
I tend to agree with TORB's comments also. I am often surprised (and perhaps confused at times) by the absolute disparity between various wine journalists on certain wines.
I would pretty much tend to assume a wine is a winner if everyone in print is high-scoring it but often I see one wine that's a 95 in JO's book and an 83/4 in Halliday or Winefront or vice versa etc.
Given that reasoning, tasting notes alone seem to be the most legitimate approach.... but... even then (and perhaps even moreso) there are various wildly different readings / analyses of a given wine.... and so at the end of the day, it's just all opinion anyway.
Ultimately you should try for yourself and form your own opinion (if you can mentally charge past the Parkerpoints and the swathe of trumpeted tasting notes).
Multiply your scores by the speed of light, then divide by the square root of gravity. This only applies when drinking upside down in a total vacuum.
It's really quite simple.
After applying the algorithm to all wine scores given since 1975 it would seem that:
95+ = Cracker!!!
90-95 = Ripper!!
85-90 = Bonza!
80-85 = Quaffer
<80 = Buy some Quick Eze, stat.
Scores
Here is another point of view. Do you score the wine as if it is the best it could ever be, or how good it is relative to "the perfect wine"? For example, if you taste a great bottle of Grange and you cannot think of a better wine you give it 100 points. But what about a great white Zinfandel? If someone makes the perfect white Zin do you give it 100 points because there is no way a better one could be made? Or do you score it 86 points because relative to the Grange it is simply not as good a wine? I would go with the second option. Just a thought. Rick
Red Wine is the Blood of Life
A long time ago I worked out the ultimate scoring system, to take account of all the variables and complications when attempting to assign scores to wine. Don't know if I posted it here, but here is the Ultimate Solution, something I know Craig will appreciate. Disappointingly, I haven't seen it taken up by anyone, despite the clarity it offers:
"It's perfectly simple" (Basil Fawlty)
Honestly, I don't see what all the fuss is about. You're all struggling with such simple concepts. As an accountant, I feel I should be able to make numbers easily understood to everyone. So here's what we do.
1) We acknowledge that there appear to be two approaches to scoring - peer group, absolute, and the enjoyment factor. Three approaches! Our three approaches are...
2) a "peer group" might be determined by region, variety, price, style.
The ultimate score will be extremely useful to everyone if we combine all these elements together by scoring each of them - absolute and enjoyment scores preceded by 'A' and 'E' respectively, and the 4 peer scores by lower case 'r', 'v', 'p', and 's' (lower case for clarity - to indicate they're all subsets of 'Peer').
Thus, a tasting note might read
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92 E93 r98 v91 p88 s92
See how clear that is? There's only one problem. People may be tempted to add the scores together to give a total out of 600. This could be misleading. A way to discourage this, but which adds yet more user flexibility to the system, is to allow each grader to set the scale for each of the 'peer group'. Why? Well, we all know that the 100 point system is very good, but there's still the problem of palate calibration. Unfortunately, we can't all have Robert Parker's palate transplanted into our own heads. By varying the scale for the peer group components, the writer can indicate the relative importance to him/her of these aspects.
With Bordeaux, for example, the 'varietal' component may be less significant to me, and by scoring 'v' out of, say, 65 points, I am indicating this relativity - and you are therefore able to make a judgement of the validity of my note for your purposes. Also, with 'points inflation' (see half a dozen 100-point wines in the last WA alone) we can increase the 'Enjoyment' scale to 110, say, or 116. So our TN might now read:
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92/100 E95/108 r91/95 v58/65 p70/80 s90/96
See how much more useful this is? The more data we have, the better informed we are. I am concerned, however, that sometimes people just look at the notes and ignore the score. This bothers me, because the TN fails to indicate critical factors which affect the way the wine will perform. These are well known to keen tasters, and usually encompass the wine's temperature, whether it was decanted or not, and of course the tasting glass.
I therefore propose to add after a separator (*) the codes 't{ºC}', 'd{min}', and 'c{cc/xl5}', where ºC is the temperature in Centigrade, min is the time since decanting, and cc/xl5 indicates the capacity of the glass in cc, or that a standard tasting glass was used. So our Lynch Bages tasting note, decanted for an hour, and drunk at 19C from a Riedel Vinum Bordeaux would look like this:
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92/100 E95/108 r91/95 v58/65 p70/80 s90/96 * t19 d60 c670
This is way more useful than the simplistic systems I've seen used around the place, and clearly and easily covers the things we all need to know.
Some fool suggested to me that you can't properly describe the colour of a wine unless you know the light levels in the room, so I was going to incorporate a lux reading, but I began to wonder if my leg was being pulled...
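As an aside, taking the notation above at face value for a moment, a small Python sketch (purely illustrative; the component codes and scales are just the ones from the example score) could parse such a string and re-express each component as a percentage of its own scale, which at least makes the varying denominators comparable. The condition codes after the separator are simply ignored here.

```python
# Playful sketch: parse a multi-component score such as
# "A92/100 E95/108 r91/95 v58/65 p70/80 s90/96" and normalise each
# component to a percentage of its own scale. Components without an
# explicit scale (e.g. "A92") are assumed to be out of 100.
import re

def parse_score(score_string, default_scale=100):
    components = {}
    for code, value, _, scale in re.findall(r"([AErvps])(\d+)(/(\d+))?", score_string):
        scale = int(scale) if scale else default_scale
        components[code] = round(100 * int(value) / scale, 1)
    return components

print(parse_score("A92 E93 r98 v91 p88 s92"))
print(parse_score("A92/100 E95/108 r91/95 v58/65 p70/80 s90/96 * t19 d60 c670"))
```

Whether anyone should actually want to do this is, of course, another question entirely.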
cheers,
Graeme
"It's perfectly simple" (Basil Fawlty)
Honestly, I don't see what all the fuss is about. You're all struggling with such simple concepts. As an accountant, I feel I should be able to make numbers easily understood to everyone. So here's what we do.
1) We acknowledge that there appear to be two approaches to scoring - peer group, absolute, and the enjoyment factor. Three approaches! Our three approaches are...
2) a "peer group" might be determined by region, variety, price, style.
The ultimate score will be extremely useful to everyone if we combine all these elements together by scoring each of them - absolute and enjoyment scores preceded by 'A' and 'E' respectively, and the 4 peer scores by lower case 'r', 'v', 'p', and 's' (lower case for clarity - to indicate they're all subsets of 'Peer').
Thus, a tasting note might read
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92 E93 r98 v91 p88 s92
See how clear that is? There's only one problem. People may be tempted to add the scores together to give a total out of 600. This could be misleading. A way to discourage this, but which adds yet more user flexibility to the system, is to allow each grader to set the scale for each of the 'peer group'. Why? Well, we all know that the 100 point system is very good, but there's still the problem of palate calibration. Unfortunately, we can't all have Robert Parker's palate transplanted into our own heads. By varying the scale for the peer group components, the writer can indicate the relative importance to him/her of these aspects.
With Bordeaux, for example, the 'varietal' component may be less significant to me, and by scoring 'v' out of, say, 65 points, I am indicating this relativity - and you are therefore able to make a judgement of the validity of my note for your purposes. Also, with 'points inflation' (see half a dozen 100-point wines in the last WA alone) we can increase the 'Enjoyment' scale to 110, say, or 116. So our TN might now read:
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92/100 E95/108 r91/95 v58/65 p70/80 s90/96
See how much more useful this is? The more data we have, the better informed we are. I am concerned, however, that sometimes people just look at the notes and ignore the score. This bothers me, because the TN fails to indicate critical factors which affect the way the wine will perform. These are well known to keen tasters, and usually encompass the wine's temperature, whether it was decanted or not, and of course the tasting glass.
I therefore propose to add after a separator (*) the codes 't{ºC}', 'd{min}',and 'c{cc/xl5}, where °C is the temperature in Centigrade, min is the time since decanting, and cc/xl5 indicates the capacity of the glass in mm, or that a standard tasting glass was used. So our Lynch Bages tasting note, decanted for an hour, and drunk at 19C from a Riedel Vinum Bordeaux would look like this:
1985 Lynch Bages - pale garnet red, a brambly nose of faintly herbaceous blackcurrants, graphite and cedar. The palate is long and soft, with fine tannins mostly resolved (despite a faint astringency on the back palate).
Score: A92/100 E95/108 r91/95 v58/65 p70/80 s90/96 * t19 d60 c670
This is way more useful than the simplistic systems I've seen used around the place, and clearly and easily covers the things we all need to know.
Some fool suggested to me that you can't properly describe the colour of a wine unless you know the light levels in the room, so I was going to incorporate a lux reading, but I began to wonder if my leg was being pulled...
cheers,
Graeme
GraemeG wrote:Some fool suggested to me that you can't properly describe the colour of a wine unless you know the light levels in the room, so I was going to incorporate a lux reading, but I began to wonder if my leg was being pulled...
It's not just the light level, it's the spectrum coverage as well; old fluoros are particularly bad for judging colour...
PS: Nice one, Graeme. Who said accountants don't have a sense of humour?
Cheers
Brian
Life's too short to drink white wine and red wine is better for you too! :-)
Graeme
Your piece of work finds my applause as a beginner's attempt.
You have concentrated heavily on the environmental factors that negatively influence a steadfast scoring system.
However, you still fail to cover too many internal factors:
-What did the taster have for breakfast?
-How is his emotional state?
-Did his favourite sports team win recently?
-Did he get any last night?
-Has he enjoyed a social occasion recently?
-Has he seen a good movie?
-Is he chewing gum?
etc etc
With a little more work I'm sure you can incorporate this into your model and become immortalised in the worldwide wine geek hall of fame.
Score Lemmings will line up for your autograph, and your score will take up more real estate than any flowery, dreamy, poetic descriptions you care to pen in a half-cut state.
Again my congratulations, and again please consider the 109 point system, so that the working class amongst us can yet again afford to drink 95-100 point wines.
C.
Follow me on Vivino for tasting notes Craig Thomson
My personal preference is for a wine to be scored relative to its price positioning in the market.
ie: A $100 icon wine is scored comparatively with other $100 icon wines.
A $7 quaffer is scored comparatively with other $7 quaffers.
Therefore De Bortoli Sacred Hill can get 98 points if it is the best $7 quaffer you've ever tasted and a poor vintage of Grange can be given 87.
You still know that the Grange is a better wine but it's a poor wine within that price bracket.
Writers who rate both quality and value come unstuck when they rate a 97-point Grange as only 2 or 3 stars for value. Surely if it is 97 points compared to other $300+ wines then it should get 4 or 5 stars for value.
My favourite rating system going around is the Penguin Guide. Comes complete with a bit of winery background and a tasting note and often a comment on value as well as the rating. But the ratings are quite broad not getting down to 0.1 point of difference but you still get an idea of the level of quality they assess it at.
Honestly, what is the difference between a 92-point wine and a 91-point wine, or an 18.6 and an 18.7?
I am influenced by ratings when making purchasing decisions but I don't split hairs between a 91 point wine and a 94 point wine. I broadly classify the points as follows.
Under 90 - would want to be selling for under $15 for me to buy it because there are so many 90+ wines under $20 out there.
90-94 - A sound example of the brand or style - will buy if the price is right (and I like the style of course)
95+ - must be a standout vintage for this particular wine - search out other comments and tasting notes or get to a tasting - if they back this up then make sure you get some before it's sold out.
Ratcatcher wrote:You still know that the Grange is a better wine but it's a poor wine within that price bracket.
But how would you know? When you say within that price bracket, surely you do not mean just comparing $100 wines... and I don't think price is a guarantee of better quality...
I know I've recently had a bottle under $5 that was a lot better than an over-$15 wine (to my tastes).
I am in favour of TORB's scoring system where you don't split hairs over points. I would also rate value accordingly.
Although I have to say, I am tempted to 'score' that particular $5 wine lower because of its price. Anyone else have this problem?
Maybe what I will do is score wines to see whether I'll buy them (so value is important), then arrange the wines in price order to determine which way I drink them, and only put a higher priced wine below a lower priced wine if I feel there is a clear difference in quality.
Ratcatcher wrote:My personal preference is for a wine to be scored relative to it's price positioning in the market.
ie: A $100 icon wine is scored comparitively with other $100 icon wines.
A $7 quaffer is scored comparitively with other $7 quaffers.
Therefore De Bortoli Sacred Hill can get 98 points if it is the best $7 quaffer you've ever tasted and a poor vintage of Grange can be given 87.
So when Wynns cut the price of 1998 John Riddoch from $80 to $50 a few years ago the 'score' went from 86 to 93 points at the same time?
And do international writers' reviews come with an exchange rate/currency/local taxes score converter? Bin 389 gets 88 Parker points at US$20 (and 66% of the price of Yalumba's Signature), which equals ? at A$42 locally, where Signature is the same price...?
The last nexus I ever want is between price and points...
cheers,
Graeme
I think you've both been a little super-critical of my idea. It was just an off-the-top-of-my-head post; I hadn't sat down and nutted out the terms of reference, clauses and sub-clauses.
I'm just saying there are essentially 5 classes of red wine.
1. Cheap quaffer cask and the cheapest bottles
2. Everyday wines up to about $19
3. Quality wines $20 to $35
4. Premium $40 to $80
5. Super premium $85 and up
I just say rate them for quality within those parameters. Please note these are just rough figures off the top of my head.
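For what it's worth, here is a minimal Python sketch of that idea; the bracket boundaries are just the rough figures from the list above (with the small gaps between brackets smoothed over), not anyone's published system.

```python
# Rough sketch of the five suggested price classes for red wine
# (boundaries are approximate and purely illustrative).
def price_class(price_aud):
    if price_aud < 10:
        return "1. Cheap quaffer (cask and the cheapest bottles)"
    if price_aud < 20:
        return "2. Everyday wine"
    if price_aud < 36:
        return "3. Quality wine"
    if price_aud <= 80:
        return "4. Premium"
    return "5. Super premium"

for price in (7, 25, 50, 300):
    print(f"${price}: {price_class(price)}")
```

A writer would then score a wine only against the other wines in its class, which is the point of the suggestion.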
Just like wine shows have different classes why can't wine writers sort wines into different classes?
Surely a poor $400 Burgundy rated 87 is more than 2 points better than a decent bottle of Windy Peak rated at 85?
ie: If Winefront or Halliday rate a bunch of $7.95 wines between 81-85 what's the diff between an 82 and an 83 and what does the average punter get out of that? Not much.
But if they have a class of Cheap Quaffer and rate wines from 81-100 then the differences are clearer and people can ascertain which wines are clearly the best.
I suppose it just supports what Ric says that written tasting notes and comments are 97 times better than numerical ratings.
It's just that with so many brands and wineries out there, what writer can produce that many written comments? And they have to cater for consumers at all market levels.
If I were writing a Guide tomorrow I would be splitting wines into their market position and rating accordingly.
If a wine repositions itself in price then go ahead and reassess it in the new category. The example you gave would still have remained in the premium category.
This is what I would like as a wine consumer.