Re: How does 5D3 track subjects? (Object recognition)
I will tell you what I think I know, or at least what I understand, about how AI Servo works in the Canon 5D Mk III. I am sure that others will post corrections to my understanding, and we will all be able to learn from that.
I believe this is substantially different from the more advanced tracking that the Canon 1DX does, which I won't be discussing here.
The best source of knowledge on the subject that I know of is the post from Chuck Westfall that Pixel Perfect gave us. The understanding I lay out below is somewhat different from what I take from Chuck's post, and I don't know how to reconcile the two.
Unfortunately, what makes its way to us, the end users, is not very technical in nature, so we have only a cursory understanding of the details. I bet somewhere there are patents that do a pretty good job of actually explaining the technical details. I don't have the energy to do a patent search on the subject, but perhaps someone else has the patents.
On to my understanding of AI Servo in the 5D Mk III.
First, some terms I will use to describe my understanding.
Subject movement can be in the X, Y and/or Z direction. X-direction subject movement is movement that is side to side in the viewfinder. Y-direction is movement that is up or down in the viewfinder. Z-direction is subject movement closer to or farther away from the camera - depth movement.
In AI Servo tracking in the Canon 5D Mk III (and in most other Canon cameras), the first step in the focusing process is for the photographer to select the focus point the camera is to use. For the sake of discussion, let's say the photographer selects the center focus point. That selection fixes the X and Y position of the focus point. The camera cannot track the subject in the X or Y directions; it only sees whatever is under the selected focus point, in this case the center one.
There is no tracking of the subject in the X or Y directions in AI Servo (and I won't get into AF Point Expansion or Zone Focus, but they don't actually track the subject either). I think you might be confusing AF point auto switching sensitivity, which lets the photographer select how readily focus is handed off from the selected focus point to one of the surrounding points, with true subject X-Y tracking, which it is not. If that is the confusion, we can discuss it in more detail.
AI Servo tracks the subject that is under the selected focus point in the Z direction, i.e. the distance between the subject and the camera. It does this by continuously using the phase-detect AF circuits. It takes multiple distance readings, and based on those readings (the last three, according to Chuck's information) and the camera's knowledge of how long it takes between the photographer pressing the shutter button and the image actually being taken (it's actually a little more complicated than that, but simplistically I will leave it at that for now), the camera calculates the predicted focus DISTANCE for the moment the picture will be taken. The camera commands the lens to focus at that distance, so when the image is captured the lens is focused at the proper distance.
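As a rough illustration of that prediction step, here is a minimal sketch in Python. It assumes the camera simply fits a straight line (constant subject speed in the Z direction) through the last three time/distance readings; the function and variable names are mine, not Canon's, and the real firmware is certainly far more sophisticated.

```python
# Hypothetical sketch of predictive AF in the Z direction: fit a line
# through the last three (time, distance) readings and extrapolate to
# the moment the shutter will actually fire. Names are illustrative.

def predict_focus_distance(readings, capture_time):
    """Least-squares linear fit of distance vs. time, extrapolated
    to capture_time. readings is a list of (time_s, distance_m)."""
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_d = sum(d for _, d in readings) / n
    # Slope of the fit = subject's speed along the Z axis (m/s,
    # negative when the subject is approaching the camera).
    num = sum((t - mean_t) * (d - mean_d) for t, d in readings)
    den = sum((t - mean_t) ** 2 for t, _ in readings)
    speed = num / den
    return mean_d + speed * (capture_time - mean_t)

# Subject approaching at 5 m/s; three readings 20 ms apart, and the
# image captured 60 ms after the last reading.
readings = [(0.00, 10.0), (0.02, 9.9), (0.04, 9.8)]
print(predict_focus_distance(readings, 0.10))  # ~9.5 m
```

The point of using three (or more) readings instead of two is noise rejection: a least-squares fit is less thrown off by a single bad phase-detect measurement than a raw two-point difference would be.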
I know Canon (and others) use advanced algorithms to make these calculations, but the basics are simple. Let's consider a simple situation where we have two distance measurements from the AF system. With the AF system in AI Servo, we have the first distance reading and the exact time it was taken. Then we have a second distance reading, along with the exact time it was taken. We also know the exact time (or almost the exact time) the image will be captured: the time the photographer presses the shutter, plus the shutter lag (again, an overly simplistic explanation). From that data (time and distance for point one, time and distance for point two, and the calculated capture time) it is rather simple math to calculate the predicted distance from the camera to the subject at the instant the photo will be taken. In reality the calculations are a lot more complex, because quite a number of other variables are taken into account, and Canon uses the three (or maybe more now) most recent distances rather than just the two I used in my simplified explanation.
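The two-reading case really is just a subtraction and an extrapolation. A worked example (my own made-up numbers, purely to show the arithmetic):

```python
# Two-reading predictive focus, matching the simplified walk-through
# above. All names and numbers are illustrative, not Canon's.

def predict_two_point(t1, d1, t2, d2, capture_time):
    speed = (d2 - d1) / (t2 - t1)            # Z-direction speed (m/s)
    return d2 + speed * (capture_time - t2)  # extrapolate to capture

# Subject at 8.0 m, then 7.8 m some 50 ms later (closing at 4 m/s);
# the image is captured 100 ms after the second reading.
print(predict_two_point(0.00, 8.0, 0.05, 7.8, 0.15))  # ~7.4 m
```

So the camera would command the lens to a focus distance of about 7.4 m rather than the 7.8 m it last measured, and the approaching subject arrives at that distance just as the shutter fires.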
I hope that makes some sense, and I am sure others will help me form a better understanding of how all of this really occurs.
I believe my understanding and explanation are not totally in sync with the Chuck Westfall post that Whayne posted. Perhaps Whayne or others can help me better understand what Chuck is saying versus my understanding.
My guess is that the OP actually wanted to know about subject tracking in the X and Y directions, something that the Canon 5D Mk III does not do (at least as I understand it). I believe that the Canon 1DX (and various Nikon cameras) actually have the ability to do this, but I am unsure of specifically how they do this. Here again, obtaining patents on the subject would be most helpful. Perhaps others can help us both to understand how this subject tracking in the Canon 1DX and various Nikon cameras is accomplished.