Dr. Stuart Wright, Senior Scientist, EDAX
You don’t have to be a genius with a high IQ to recognize that IQ is an imperfect measure of intelligence, much less of EBSD pattern quality.
A Brief History of IQ
At the time we first came up with the idea of pattern quality, we were very focused on finding a reliable (and fast) image processing technique to detect the bands in EBSD patterns. Thus, we were using the term “image” more frequently than “pattern,” and the term “image quality” stuck. The first IQ metric we formulated was based on the Burns algorithm (cumulative detected edge length) that we were using to detect the bands in the patterns in our earliest automation work [1].
We presented this early work at the MS&T meeting in Indianapolis in October 1991. Niels Krieger Lassen showed some promising band detection results using the Hough Transform [2]. Even though the Burns algorithm was working well, we thought it would be good to compare it against the Hough Transform approach. During that time we decided to use the sum of the Hough peak magnitudes to define the IQ when using the Hough Transform [3]. The impetus for defining an IQ was to compare how well the Hough Transform approach performed versus the Burns algorithm as a function of pattern quality. In case you are curious, here is the result. Our implementation of the Hough Transform, coupled with the triplet indexing routine, clearly does a good job of indexing patterns of poor quality. Notice the relatively small Hough-based IQ values; this is because in this early implementation the average intensity of the pattern was subtracted from each pixel. This step was later dropped, probably simply to save time, which was critical when the cycle time was about four seconds per pattern.
After we did this work, we thought it might be interesting to make an image by mapping the IQ value to a grayscale intensity at each point in a scan. Here is the resulting map – our first IQ map (Hough-based IQ).
Not only did we explore ways of making things faster, we also wanted to improve the results. One by-product of those developments was that we modified the IQ to be the average of the detected Hough peak heights instead of their sum. A still later modification was to average over the number of peaks requested by the user instead of the number of peaks detected. This was done so that patterns where only a few peaks were found did not receive unduly high IQ values. These three definitions are sketched in the code below.
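To make that evolution concrete, here is a minimal sketch in Python of the three definitions described above. The function names and the example peak heights are mine, purely for illustration; this is not EDAX’s internal code.

```python
import numpy as np

def iq_sum(peak_heights):
    """Earliest Hough-based definition: the sum of the detected peak magnitudes."""
    return float(np.sum(peak_heights))

def iq_mean_detected(peak_heights):
    """Later definition: the average height of the detected peaks."""
    return float(np.mean(peak_heights))

def iq_mean_requested(peak_heights, n_requested):
    """Still later definition: divide by the number of peaks *requested*, so a
    pattern where only a few peaks were found does not get an unduly high IQ."""
    return float(np.sum(peak_heights) / n_requested)

# Example: only 3 of the 8 requested peaks were actually detected.
peaks = [120.0, 95.0, 80.0]
print(iq_sum(peaks))                # 295.0
print(iq_mean_detected(peaks))      # ~98.3
print(iq_mean_requested(peaks, 8))  # ~36.9 -- penalized for the missing peaks
```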
The next change came not from a modification in how the IQ was calculated, but from the introduction of CCD cameras with 12-bit dynamic range, which dramatically increased the IQ values.
In 2005, Tao and Eades proposed using other metrics for measuring IQ [4]. We implemented these different metrics and compared them with our Hough-based IQ measurement in a paper we published in 2006 [5]. One of the main conclusions of that paper was that, while the other metrics had value in some very specific instances, our standard Hough-based IQ was the best parameter for most cases. Interestingly, exploring the different IQ values was the seed for our PRIAS [6] ideas, but that is another story. Our competitors use other measures of IQ, but unfortunately these have not been documented – at least to my knowledge.
Factors Influencing IQ
While we have always tried to keep our software backward compatible, the IQ parameter has evolved, and thus comparing absolute IQ values from datasets obtained using older versions of OIM with results obtained using newer ones is probably not a good idea. Not only has the IQ definition evolved, but so has the Hough Transform itself. In fact, ever since we created the very first IQ maps we have realized that, while IQ maps are quite useful, they are only quantitative in the sense of relative values within an individual dataset. We have always cautioned against using absolute IQ values as a means of comparing different datasets, in part because we know a lot of factors affect the IQ values:
- Camera Settings
  - Binning
  - Exposure
  - Gain
- SEM Settings
  - Voltage
  - Current
- Hough Transform Settings
  - Pattern Size
  - Mask Size
  - Number of Peaks
  - Secondary factors (peak symmetry, minimum distance, vertical bias, …)
- Sample Prep
- Image Processing
In developing the next version of OIM, we thought it might be worthwhile to revisit the IQ parameter as implemented in our various software packages to see what we could learn about the absolute value of IQ. In that vein, I thought it would be particularly interesting to look at the Mask Size and the Number of Peaks selected. To do this, I used a dataset where we had recorded the patterns. Thus, we were able to rescan the dataset using different Hough settings to ascertain the impact of these settings on the IQ values. I also decided to add some Gaussian noise [7] to the patterns to see how the noise affected the IQ values for the different Hough settings.
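As an aside, adding the noise itself is straightforward. Here is a minimal sketch, assuming 8-bit stored patterns; the function name and the noise levels are hypothetical, chosen just for illustration.

```python
import numpy as np

def add_gaussian_noise(pattern, sigma, seed=None):
    """Return a copy of `pattern` with zero-mean Gaussian noise of standard
    deviation `sigma` added, clipped back to the valid range of the dtype."""
    rng = np.random.default_rng(seed)
    noisy = pattern.astype(float) + rng.normal(0.0, sigma, size=pattern.shape)
    info = np.iinfo(pattern.dtype)
    return np.clip(noisy, info.min, info.max).astype(pattern.dtype)

# Example: a synthetic 96x96 8-bit "pattern" at a few noise levels.
rng = np.random.default_rng(0)
pattern = rng.integers(80, 180, size=(96, 96), dtype=np.uint8)
for sigma in (5.0, 10.0, 20.0):
    noisy = add_gaussian_noise(pattern, sigma, seed=1)
    print(f"sigma={sigma}: std {pattern.std():.1f} -> {noisy.std():.1f}")
```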
It would be nice to scale the peak heights with the mask size. However, the “butterfly” masks have negative values in them, making it quite difficult to scale by the weights of the individual elements of the convolution masks. In the original 7×7 mask we selected the individual components so that their sum would equal zero, to provide some inherent scaling. However, as we introduced other mask sizes this became increasingly difficult, particularly for the smaller masks (intended primarily for more heavily binned patterns). Thus, we expected the peak heights to be larger for larger masks, simply due to the number of matrix components. This trend was confirmed and is shown by the red curves in the figure below. It should be noted that the smaller mask was used on a 48×48 pixel pattern, the medium mask on a 96×96 pixel pattern, and the larger mask on a 192×192 pixel pattern.
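To illustrate the zero-sum idea and the mask-size effect, here is a sketch using a made-up butterfly-style mask. These are not the actual OIM masks, and the synthetic Hough peak is just a Gaussian blob; the point is only that a mask with more elements produces a larger peak response for the identical peak.

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 5x5 butterfly-style mask: a positive core flanked by negative
# lobes above and below, with entries chosen so the mask sums to zero
# (a flat background then produces zero response).
mask_small = np.array([
    [-2, -4, -8, -4, -2],
    [ 1,  2,  4,  2,  1],
    [ 2,  4,  8,  4,  2],
    [ 1,  2,  4,  2,  1],
    [-2, -4, -8, -4, -2],
], dtype=float)
assert mask_small.sum() == 0

# A crude stand-in for a larger mask: the same shape with 4x as many elements.
mask_large = np.kron(mask_small, np.ones((2, 2)))  # 10x10, still sums to zero

# Synthetic Hough space containing a single Gaussian peak.
y, x = np.mgrid[0:64, 0:64]
hough = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 18.0)

for name, mask in (("small", mask_small), ("large", mask_large)):
    response = convolve2d(hough, mask, mode="same")
    print(f"{name} mask: peak response = {response.max():.1f}")
# The larger mask yields the larger peak response for the identical peak.
```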
We also decided to look at the effect of the number of peaks selected. We expected that, as we included more peaks, the measured pattern quality would decrease, since the weaker peaks drive the average Hough peak height down. This trend was also confirmed, as can be seen in the blue curves in the figure, and is illustrated numerically below.
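A quick numerical illustration of that trend (the peak heights are invented): averaging over more of the rank-ordered peaks pulls the IQ down.

```python
import numpy as np

# Hypothetical Hough peak heights, sorted strongest first.
peaks = np.array([120.0, 110.0, 95.0, 70.0, 45.0, 30.0, 22.0, 15.0])
for n in (3, 5, 8):
    print(f"IQ averaged over the top {n} peaks: {peaks[:n].mean():.1f}")
# Prints roughly 108.3, 88.0, 63.4 -- decreasing as more peaks are included.
```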
In theory, if all the settings are the same, then the absolute values of the IQ for a matrix of samples should be meaningful. However, it would be rare to use the same settings (camera, SEM, sample prep, …) for all materials in all states (e.g., deformed vs. recrystallized). In fact, this is one of the challenges of doing in-situ EBSD work for either a deformation experiment or a recrystallization/grain-growth experiment – it is not always easy to predict how the SEM parameters or camera settings need to change as an in-situ experiment progresses. In addition, any changes made to the hardware generally mean that changes to the software are needed as well. Keeping everything constant is a lot easier in theory than in practice.
In conclusion, the IQ metric is “relatively” straightforward, but it must “absolutely” be used with some intelligence. ☺
Bibliography
1. S.I. Wright and B.L. Adams (1992) “Automatic Analysis of Electron Backscatter Diffraction Patterns” Metallurgical Transactions A 23, 759-767.
2. N.C. Krieger Lassen, D. Juul Jensen and K. Conradsen (1992) “Image Processing Procedures for Analysis of Electron Back Scattering Patterns” Scanning Microscopy 6, 115-121.
3. K. Kunze, S.I. Wright, B.L. Adams and D.J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20, 41-54.
4. X. Tao and A. Eades (2005) “Errors, Artifacts, and Improvements in EBSD Processing and Mapping” Microscopy and Microanalysis 11, 79-87.
5. S.I. Wright and M.M. Nowell (2006) “EBSD Image Quality Mapping” Microscopy and Microanalysis 12, 72-84.
6. S.I. Wright, M.M. Nowell, R. de Kloe, P. Camus and T.M. Rampton (2015) “Electron Imaging with an EBSD Detector” Ultramicroscopy 148, 132-145.
7. S.I. Wright, M.M. Nowell, S.P. Lindeman, P.P. Camus, M. De Graef and M. Jackson (2015) “Introduction and Comparison of New EBSD Post-Processing Methodologies” Ultramicroscopy 159, 81-94.