  Previous versions of gdanmitchell's message #12018290 « Why Do Canon Lenses Score so Low? »

  

gdanmitchell
Offline
Upload & Sell: On
Re: Why Do Canon Lenses Score so Low?


I think that quite a few of the observations made recently in this thread are roughly in line with my earlier points: tests often produce interesting raw results, but the simple rating values don't reveal the context or assumptions behind the tests. As I wrote earlier, I read and try to understand test results and their context, and it sounds like at least some participating in this thread do, too. The main concern arises when the final rating values (lens A scores value X) are assumed to tell us things that they don't really tell us - e.g. that lens A, which scores X, is not as "good" as lens B, which scores X+1.

DXO (along with others) no doubt provides a useful service in doing the testing, and a lot of interesting data results from their tests. The problem is that most of the context is missing when the final lens "numbers" are brought up in the way that they were at the start of this thread. The numbers don't mean much unless you understand what format, camera, and brand was tested and how the various factors that go into the ratings are weighted. The problem isn't that someone is testing things - it is that too many people make unwarranted presumptions based on the final rating values.
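To make the weighting point concrete, here is a tiny sketch (all lens names, factors, and numbers below are invented for illustration, not taken from any real test) showing how the choice of weights alone can flip which of two lenses "wins" a composite score:

```python
# Hypothetical illustration (numbers invented): how factor weighting
# alone can flip the ranking of two lenses in a composite score.

# Per-factor scores on a 0-100 scale for two imaginary lenses.
scores = {
    "lens_a": {"sharpness": 90, "distortion": 60, "vignetting": 70},
    "lens_b": {"sharpness": 80, "distortion": 85, "vignetting": 80},
}

def composite(lens, weights):
    """Weighted average of per-factor scores."""
    total = sum(weights.values())
    return sum(scores[lens][f] * w for f, w in weights.items()) / total

# A rating system that weights sharpness heavily favors lens A...
w1 = {"sharpness": 0.7, "distortion": 0.15, "vignetting": 0.15}
# ...while one that weights the factors evenly favors lens B.
w2 = {"sharpness": 1/3, "distortion": 1/3, "vignetting": 1/3}

print(composite("lens_a", w1), composite("lens_b", w1))  # A ahead
print(composite("lens_a", w2), composite("lens_b", w2))  # B ahead
```

In other words, the same raw measurements can produce opposite rankings depending on how the rating system chooses to weight them - which is exactly the context the single final number hides.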

I stand by my observation - which also seems to be shared by many in this thread - that, given what we know about various lenses from actually using them, and what we discover when we look more closely at the testing methodology and how the final values are calculated, the final lens ratings are often of very little value as a way of determining real-world lens performance, and are all too often over-rated.

If you want to use and understand lens ratings - and I often do - I think that the best way to go about getting useful information from them includes:

- checking the results of one rating system against several others. (I like the interactive blur charts from http://www.slrgear.com/, for example, recognizing their presumptions and limits, too. There are others, including several mentioned in this thread.) Look for overall trends and areas of general agreement among these tests.
- paying attention also to subjective ratings and evaluations, especially those by people whose opinions have some validity, and to reports that pool the feedback from larger groups of users. (Also learn to avoid those photo writers and bloggers who offer up inconsistent and unreliable opinions.)
- trying to understand the assumptions behind and context of the factors that go into the ratings - just what do those numbers actually mean?
- balancing the technical testing data against other important factors, including functionality and the reputation of the lens for design integrity, along with how these things relate to your photography.
- being aware that tests often speak to differences among things that are all very good, in which case small differences that are measurable on the test bench may be insignificant, and other factors will make a bigger difference.
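As a rough sketch of the cross-checking idea in the first point above (the sites and scores below are invented placeholders, not real test data), one way to compare across rating systems that use different scales is to look at ranks rather than raw numbers:

```python
# Hypothetical illustration: comparing lenses across several rating
# systems by rank rather than raw score, since the scales differ.
# All site names and numbers here are invented placeholders.

ratings = {
    "site_1": {"lens_a": 27, "lens_b": 31, "lens_c": 24},    # 0-40 scale
    "site_2": {"lens_a": 7.8, "lens_b": 8.9, "lens_c": 8.1}, # 0-10 scale
    "site_3": {"lens_a": 82, "lens_b": 94, "lens_c": 78},    # 0-100 scale
}

def ranks(site_scores):
    """Rank lenses within one system, 1 = best."""
    ordered = sorted(site_scores, key=site_scores.get, reverse=True)
    return {lens: i + 1 for i, lens in enumerate(ordered)}

per_site_ranks = {site: ranks(s) for site, s in ratings.items()}

# Average rank across systems is a rough consensus; broad agreement
# across independent tests means more than any single composite number.
lenses = ["lens_a", "lens_b", "lens_c"]
avg_rank = {l: sum(per_site_ranks[s][l] for s in ratings) / len(ratings)
            for l in lenses}
print(avg_rank)
```

Where the systems broadly agree on rank order, the trend is probably telling you something real; where they disagree, that disagreement is itself a clue that the composite numbers rest on different assumptions.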

Take care,

Dan



Dec 22, 2013 at 08:46 PM