
How Good is My Glass: The Baseline


Before we start, I need to apologize. My day-job work has really ramped up recently and the optic reviews I promised were coming were...well...let's just say "late". But here it is. I finally got around to writing the code to pull out all the important information from my scopes and bino's, and this article is really meant to give you a taste of what to expect as I dive into more of my "higher end" glass in subsequent reviews. As a heads up, this review will be a little bit longer as it provides a baseline and a simple comparison, but the following reviews should be a lot more concise. After this first one, I can just give you the meat and potatoes and boil all the charts and data down so you don't feel like you are reading a bunch of gibberish.


My other article, "What to expect from my optics reviews," went over everything at a high level and should give you a good idea of what to expect. This article gives you the actual optical data, and more importantly the objective quality metrics, of the camera I use to take the images (the baseline), and of an inexpensive (and old) scope I was gifted, to show you how the results can change and be compared.


As a quick refresher, I use something called an Imatest eSFR chart, where eSFR stands for edge spatial frequency response. It is an optical standard used by many professional lens and optics assessors, and it provides all kinds of useful information you all should be concerned about when researching new scopes, binoculars, spotters, range finders, etc. The data I will give you through these reviews has the potential to save you from spending thousands of dollars on glass you end up not being happy with, and I can tell you why. In contrast to the literal thousands of reviews that use language like, "we got 10 of our friends together to look through this bunch of scopes.....", I will actually tell you, in no uncertain terms, why some glass looks better than other glass. And more importantly, by how much. Everyone's eyes are different, and I just got a little fed up with all the subjective analyses. This isn't an English paper. High quality glass should be measured against more than "Billy Bob told me it was good," though I am sure Billy is a good guy.


The Test Setup and Chart Anatomy

Alright, finally down to the details. Below you will see a photo of my test setup. Please ignore the pony keg and the cooler, and the fact that the lighting could be used for a p*%n shoot (I use it for high-speed video for work). It provides me with a controlled setting where I can accurately and repeatably set lighting conditions and light temperature, control diffraction, and exclude unwanted light pollution. But yes, it is in my basement :) Just a regular guy, after all.


Basic test setup showing lighting, camera, and eSFR chart.

I use the camera to zoom in on the Imatest chart and then import that image into my software for analysis. The imported image is below.

You'll notice there are a bunch of regions in the chart that have been enclosed in boxes and labeled with a color-coded number. These are called regions of interest (ROI's) and are identified and outlined by my image processing software. The 60 ROI's with green numbers are used to measure image sharpness and chromatic aberration. We all want nice, crisp, sharp edges (vertical and horizontal) so we perceive the most accurate representation of the scene we are looking at. You'll see later that this is one of the most critical measurements of an optic, and what really sets high quality glass apart from the rest. I'll spare you the details on why the ROI's are slanted, but if you are interested there is a wealth of information on the anatomy of this chart on the Imatest website here.

We can also measure local chromatic aberration, or color fringing, at each of these locations. Chromatic aberration will make certain objects, under certain lighting conditions, look like they have some kind of colored halo. Or, in really bad scenarios, it will turn an edge into a rainbow, making it distracting and naturally drawing your eye toward it. This pulls your attention away from what you actually want to look at. Again, high quality glass usually does not suffer from this.


The circular array of red-labeled boxes around the center of the chart is used for illuminance (Scene Illumination) and noise. Scene illumination is just an estimate of the color of light reflected back to your eye and will tell you whether more red, green, or blue light makes it through the lenses. This is what is happening when scenes have a reddish or bluish tint to them, and it is often affected by lens coatings and the like. Noise is more of a camera property: it is the amount of noise injected into the scene by digitizing it. Noisy images can look grainy or pixelated even though the original scene looks clean.


You will also notice the 16 color patches situated around the circular array. These measure color accuracy: how well the red, green, and blue values of the digital image compare with the actual colors printed on the test chart. Color accuracy can also be affected by lens coatings and image digitization, but generally your eye will perceive color shifts as scenes where trees (greens and yellows) look muted. Some vendors, like Swarovski, apply coatings to make natural scenes look more vibrant.


Finally, there are four circles divided into black and white quadrants which are used to automatically align the image and identify the ROI's. That is basically it in a nutshell, but we can extract a lot of information from this little chart.


The Baseline Data - Sony RX100 VII


I use a Sony RX100 VII compact camera to take all the images, and I use a set of baseline camera settings to keep everything consistent. I fix ISO, aperture, shutter speed, white balance, and a few other settings to eliminate variables in digitizing the images. All of these values will remain consistent as I test new glass. This allows us to back out how the test optics (i.e. scopes, bino's, etc.) affect the light received by the camera sensor (or our eyes). The above image was taken with just the camera, with no test optic in front. You'll notice the chart image looks crisp pretty much everywhere. The Sony RX100 is a solid camera in a small package, and the images it takes provide a good baseline for our optic performance reviews.


Sharpness and Contrast


Below is a plot of what is called the Spatial Frequency Response (SFR), also called the Modulation Transfer Function (MTF), of the 60 slanted edge ROI's in green. Don't worry too much about the terminology; just remember that SFR is a measure of how rapidly features of an image change across a certain distance. Extending from that, imagine a blurry edge in an image. If you follow the image pixels perpendicular to the edge, your eye will naturally notice that the edge gradually changes from one color or shade on one side to another color or shade on the other. This is indicative of blur. In contrast, those same colors or shades would transition between each other much more rapidly if the edge were in focus and crisp. Low spatial frequencies allow us to see only the larger details of an image, while higher frequencies allow our eyes to see finer details, like sharp edge transitions, distinct facial features (like eyelashes), etc. You get the idea. Higher SFR values (y-axis) indicate sharper image features, or higher sharpness and contrast. Take a look at the figures below, which show the average vertical and horizontal SFR for all 60 slanted edge ROI's.



Right now, these just look like some funky plots, but I want you to notice a few things:

1. The red, green, blue, and luminance channels are all basically right on top of each other, meaning the sharpness and contrast of all light channels comprising the image track really well with each other. Yes, there are actually four different tracks on that graph.

2. As we follow the graph from the left to the right, we notice the transition is clean and smooth. We want the slope of these lines to be as low, or as gradual, as possible, meaning we retain high Spatial Frequency Response at higher line pairs per pixel so we can resolve the finest details in our images.


In other words, take a look at the average vertical SFR at a spatial frequency of 0.2 (x-axis). The SFR value (y-axis) is right about 0.47-ish. If that number were higher, it would indicate a sharper image at that spatial frequency. In short, a shallower slope, or a more prolonged roll-off, on this plot indicates a sharper image.
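If you like seeing the idea in code, here is a minimal sketch of what an SFR/MTF measurement boils down to: take the pixel values across an edge, differentiate them to get the line spread function, and look at its frequency content. This is a simplified, single-row illustration only, not the actual Imatest slanted-edge algorithm (which oversamples along the tilted edge, among other refinements).

```python
import numpy as np

def edge_profile_to_mtf(edge_profile):
    """Rough MTF estimate from a 1-D pixel profile taken perpendicular to an edge."""
    esf = np.asarray(edge_profile, dtype=float)   # edge spread function (pixel values across the edge)
    lsf = np.gradient(esf)                        # line spread function = derivative of the ESF
    lsf = lsf * np.hanning(len(lsf))              # window to tame FFT edge effects
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                            # normalize so MTF = 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)      # cycles per pixel, from 0 out to 0.5 (Nyquist)
    return freqs, mtf
```

A crisp edge produces a narrow line spread function, which keeps the MTF high out to large spatial frequencies; a blurry edge does the opposite and rolls off quickly.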


I am sure we have all used rifle scopes that have a bit of blur at the edges. So, let's show SFR plots of ROI's 1 (upper left, vertical) and 60 (bottom right, horizontal) to see how sharp the edges are compared to the average. Just note that I focused the camera on the center focus object between ROI's 29-32, so that region should be the sharpest of all.

Hmmmm, the lines still track each other pretty well, but notice that if we trace up from 0.2 spatial frequency (x-axis) for ROI 1 like we did before for the average vertical SFR plot, the y-axis value is only about 0.25, instead of 0.47 for the vertical average. We can summarize that by saying the vertical edges at the periphery of the image are about 50% less sharp than the average, and probably quite a bit less sharp than the center. You will notice a similar trend for ROI 60 when compared to the horizontal SFR average (0.21 versus 0.58 at a spatial frequency of 0.2). In subsequent reviews I will provide actual numbers, commonly referred to as MTF70, MTF50, MTF30, and MTF10, for the test optics and compare them to baseline values (and to each other).

Just for fun, here is a table comparing the MTF70 and MTF30 values of ROI's 1 and 60 to the averages. We can definitively say the image is between 27 and 41 percent less sharp at the edges than the average (this is the meat and potatoes). These percentages would probably become even more pronounced if we compared them to the center ROI's, which are likely the sharpest because they are what the camera focused on. From experience, I will say that the Sony image is pretty darn crisp at the center, so even a 30-40 percent reduction in sharpness at the edges may not be readily apparent at first glance. You'll see more extreme cases later.
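For anyone who wants to see how an MTF30-style number and the "percent less sharp" comparison could be pulled from an SFR curve, here is a hedged sketch. The values at the bottom are hypothetical placeholders for illustration, not the measured data from this chart.

```python
import numpy as np

def mtf_xx(freqs, mtf, level):
    """Spatial frequency (cycles/pixel) where the MTF first falls to `level` (0.3 gives MTF30).

    Assumes the curve is normalized so mtf[0] == 1.
    """
    mtf = np.asarray(mtf, dtype=float)
    below = np.where(mtf <= level)[0]
    if len(below) == 0:
        return freqs[-1]          # never drops that low within the measured range
    i = below[0]
    # linear interpolation between the two points that bracket the crossing
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (level - m0) * (f1 - f0) / (m1 - m0)

# Hypothetical MTF30 frequencies for an average ROI and a corner ROI (illustration only)
avg_mtf30, corner_mtf30 = 0.22, 0.14
loss_pct = 100 * (avg_mtf30 - corner_mtf30) / avg_mtf30
print(f"Corner is {loss_pct:.0f}% less sharp than the average, judged by MTF30")
```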

Finally, also take a look at how the RGB channels track with one another. They definitely do not overlap as well as shown in the average SFR plots, but are still pretty good. Either way, I think you get the idea of how this works. So, we can move on to chromatic aberration.


Chromatic Aberration (CA)


Chromatic aberration is pesky, and I hate it when it shows up in my glass. Like sharpness, CA is really a local image property, but we could average it if we wanted to. Below is a graphic of how well the Sony RX100 handles CA at ROI 29, near the center of the test image.



Yes, there are all three color channels on that graph. Red, green, and blue tracks are all pretty much right on top of each other. This indicates very low CA. Most human eyes, as sensitive as they are, would not be able to detect any CA at all based on this plot. Let's take a look at CA at ROI 19 in the upper right of the eSFR chart.



Again, pretty freaking good. I can see a little bit of deviation between pixels 200-300, but overall it is a solid performance. In fact, the overall aberration score of 1.159 (look in the title of the plot for the average) is identical to what we saw at ROI 29. I will compare all of these ROI values to ROI's measured through test optics in subsequent reviews.
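For those curious what a lateral CA number actually captures, here is a simplified sketch: find where the edge sits in each color channel and report how far apart the red and blue edges land. The score Imatest reports is computed differently, so treat this purely as an illustration of the concept.

```python
import numpy as np

def edge_center(profile):
    """Edge location for one color channel: centroid of the line spread function."""
    lsf = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    x = np.arange(len(lsf))
    return float((x * lsf).sum() / lsf.sum())

def lateral_ca_pixels(roi_rgb):
    """Red-to-blue edge displacement, in pixels, for one row of pixels crossing a slanted edge.

    roi_rgb: array of shape (pixels_across_edge, 3) holding the R, G, B values of that row.
    """
    centers = [edge_center(roi_rgb[:, c]) for c in range(3)]
    return abs(centers[0] - centers[2])
```

If all three channels put the edge in the same place, this number is near zero and you see no color fringing.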


Noise and Illuminance


Noise and signal strength are two other important metrics we can measure. To be blunt, we want high signal intensity and a high signal-to-noise ratio (SNR). In other words, we want a lot of light coming in through the optic and low noise during digitization. There will be some noise injected into the image during digitization, but at a camera ISO of 400 it should be pretty low. Signal intensity will be affected by light transmission through the lens, coatings, and general lens quality. The graphs below use the 20 gray patch ROI's in the test image rather than the slanty boxes. The plots show the mean, or average, red, green, and blue SNR of those gray patches. Aside from that, the only other thing to know is that these plots will become truly useful once there is a test optic in front of the camera.
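As a rough sketch of what those plots are built from: the signal is just the mean pixel value inside a gray patch, the noise is the pixel-to-pixel scatter, and SNR is the ratio of the two per channel. This is a simplified illustration, not the exact Imatest computation.

```python
import numpy as np

def patch_signal_and_snr(patch_rgb):
    """Mean signal and SNR (mean / standard deviation) per RGB channel for one gray-patch ROI."""
    pixels = np.asarray(patch_rgb, dtype=float).reshape(-1, 3)   # flatten to (num_pixels, 3)
    mean = pixels.mean(axis=0)
    noise = pixels.std(axis=0)
    return mean, mean / noise    # higher SNR = cleaner, less grainy patch
```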

We can also use the 20 gray patch ROI's to measure the scene illumination and see if there is any deviation between red, green, and blue light transmission through the optics. For instance, our test image has relative RGB channel strengths of:


Red: 118.12

Green: 117.94

Blue: 118.81


All those numbers are pretty close together, so the camera does a good job of balancing the color channels to produce an accurate scene. In fact, these are exactly the numbers your camera would use to perform a white balance calibration and bring the relative channel intensities back in line with one another. We will be able to use the illuminance numbers to see how optics color-shift the incoming light to your eye through the use of coatings, lens doping, and other things.
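As a concrete illustration of that last point, here is roughly how those channel means would translate into white balance gains: scale red and blue so all three channels line up with green. This is a simplified sketch of the idea, not Sony's actual white balance pipeline.

```python
# Relative RGB channel means from the baseline test image (numbers quoted above)
r_mean, g_mean, b_mean = 118.12, 117.94, 118.81

# Simple white balance: scale red and blue so both channels line up with green
r_gain = g_mean / r_mean   # ~0.998
b_gain = g_mean / b_mean   # ~0.993
print(f"Red gain: {r_gain:.4f}, Blue gain: {b_gain:.4f}")
```

Both gains sit essentially at 1.0 here, which is another way of saying the baseline image is already well balanced.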


Color Accuracy


Finally, we arrive at color accuracy using the 16 color patch ROI's. Below is a color patch chart from our test image, showing how accurately the colors produced by the digitized image match the actual colors printed on the Imatest eSFR chart. Lower Delta_E values indicate better color accuracy. It looks like our camera, based on the settings I used, is producing slightly more vibrant colors. This is likely happening during the conversion from RAW to JPEG and the specific software Sony uses to make that conversion. They likely perform some filtering and enhancement to make the pictures taken by the camera look more appealing. We really don't care that the camera is doing that, as long as we know it is doing it and by how much. We can just back out those numbers later when we test other optics.
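For reference, Delta_E is just a distance between the measured color and the reference color in the CIELAB color space (the RGB-to-Lab conversion is omitted here, and more elaborate variants such as CIEDE2000 also exist). The patch values below are hypothetical, purely to show the calculation.

```python
import numpy as np

def delta_e_cie76(lab_measured, lab_reference):
    """CIE76 Delta_E: straight-line distance between two colors in CIELAB (L*, a*, b*) space."""
    diff = np.asarray(lab_measured, dtype=float) - np.asarray(lab_reference, dtype=float)
    return float(np.linalg.norm(diff))

# Hypothetical example: a slightly oversaturated red patch versus its chart reference
print(delta_e_cie76([42.0, 58.3, 30.1], [40.2, 54.1, 28.7]))   # ~4.8
```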


Another way of looking at color is by using a Chromaticity chart like the one below. The arrows point in the direction of color shift and the length of the arrow is equivalent to the magnitude of the shift (Delta_E in the above plot).


And that is basically it for evaluating the glass quality. I may eventually include other metrics for optics, like rifle scope turret tracking or durability, but right now I just want to focus on optical quality. It is what I value the most, especially when I can find excellent glass for a reasonable price. Suffice it to say, our little Sony RX100 is doing a great job of taking quality photos. Now let's see what happens when we slide a rifle scope in there.


Initial Comparison with an Old, Inexpensive Fixed Power Rifle Scope


We just presented a bunch of test data for the camera we plan to use to evaluate a variety of hunting (and related) optics. But so what? It doesn't really mean anything unless you can use that data to make educated decisions on what glass you want to buy. So I want to give you a hyperbolic (i.e. really bad) example of the same data taken through an old, circa 1980 4X fixed power scope (Savage Model 0433B). To my eyes, this scope is not even in the same league as the next few I have planned for review (the Nightforce NX8 1-8 F1 and the Zeiss LRP S5 3-18 FFP), but I will also note that this scope, handed down through three generations of hunters, has taken down more animals than I can count. It's simple and it works, but you'll see how the glass compares. I will go through these metrics pretty quickly since none of you will actually be able to buy this scope, but it should make the point of what I am trying to do.


First, let's look at the actual chart image. Right off the bat I notice the image is not as vibrant, and the focusing rings around the periphery are pretty blurry, especially in the corners. I took this through a camera adapter and then cropped the image to 3650 x 2105 pixels (7.68 MP). This image would still be good enough to produce a clean portrait-sized print for your home. We can obviously see the difference between this image and the first eSFR chart I showed you, but how can we quantitatively compare them?

Well, let's look at sharpness. Below is a plot of the average horizontal SFR of all 15 slanty boxes (30 ROI's). I'll spare you the average vertical SFR chart; let's just agree that it looks pretty much the same as this one (it does). You immediately notice there is less tracking between the four channels in the graph (the red, green, blue, and luminance channels are not right on top of each other). Just through visual inspection, they deviate by about 23 percent at a spatial frequency of 0.2. However, recall the Sony RX100 had an SFR value of 0.47 at a spatial frequency of 0.2. Here the blue channel has the highest SFR at that frequency, about 0.3, which is roughly a 36 percent decrease in average sharpness.

I don't know about you, but the corners of the Savage scope image look really blurry. Here is the SFR chart for ROI 1 (upper left corner) of the Savage scope compared to ROI 1 of the Sony RX100.

Yikes.....that group of nice, clustered tracks is the Sony. The Savage scope is creating such extreme image blur that we see a sharp roll-off initially and then some pretty nasty oscillation. This is indicative of the test object being dramatically out of focus. We focused pretty cleanly on the center focusing object between ROI's 29-32, but the scope just cannot keep the scene in focus at the periphery. Even at lower spatial frequencies around 0.05, the Savage scope shows about a 69% reduction in sharpness at ROI 1. You get the idea.


Chromatic aberration is a similar story. Using the same ROI 29 as in our previous example, we see significantly more color fringing in the center of the image. In fact, the aberration score is roughly 18 times worse than our camera alone (look in the titles of the plots: 1.159 for the Sony versus 20.29 for the Savage scope).

The channel deviation and SNR for the Savage are also all over the place. The signal intensity is not as high, nor is the SNR over the full range of gray patch ROI's. In fact, comparing the Sony and Savage signal intensities, I can tell you there is about a 10% reduction in intensity across all 20 ROI's. I'll provide percent differences in more detail in subsequent reviews.

For illuminance, we measure RGB channel intensities of 110.5, 107.1, and 100.7, respectively, versus roughly 118 for each channel of the Sony by itself. This loosely translates to a loss of about 7% in light transmission in the red channel, 9.2% in the green, and a whopping 15.3% in the blue channel. Not great by any stretch. You'll notice this works out to an average reduction of about 10.5% in light transmission, in line with what we approximated from the signal intensity plot, and that red has the highest intensity, followed by green, and then blue. This is not a coincidence.
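If you want to check the arithmetic yourself, here is a quick sketch using the channel means quoted above (the exact percentages shift slightly depending on how you round the baseline values).

```python
# Channel means through the Savage scope vs. the camera-only baseline (numbers from the text)
savage = {"red": 110.5, "green": 107.1, "blue": 100.7}
baseline = {"red": 118.12, "green": 117.94, "blue": 118.81}

losses = {ch: 100 * (1 - savage[ch] / baseline[ch]) for ch in savage}
for ch, loss in losses.items():
    print(f"{ch}: {loss:.1f}% reduction")                        # roughly 6-7% red, 9% green, 15% blue
print(f"average: {sum(losses.values()) / 3:.1f}% reduction")     # about 10%
```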


Finally, I present to you the color patch chart, which is pretty self-explanatory. I am not going to compare each color, but generally I think you'll find the Savage scope produces more muted colors, despite the camera trying to make them more vibrant. Future reviews will include a comparison table showing how each optic shifts colors relative to the Sony camera alone, and how the optics compare to one another in general.


Conclusions


This is starting to get pretty long, so I will conclude by saying I bet you can see how we can use this method to compare different glass. What you do with this data is entirely up to you, but it will give you numbers on which to base your purchasing decisions. Now I just hope I can pull together enough optics to test to keep you all interested. In the near future, the following reviews are on the docket:

Nightforce NX8 1-8 F1

Zeiss LRP S5 3-18 FFP

That's all for now. I hope you all found this information useful. There is certainly a lot of data to digest, so if it seems overwhelming at first, I hope you stick with it. I'll do my best to boil it down to what is important and present the data in a way that is maximally useful to you, without boring you. Optics are expensive, though, so ask yourself: "Is a 20 minute read worth it to potentially save a thousand dollars?" Because the data I present to you might do just that. Many blessings.



