“You guys are basically all the same”

Dr. Stuart Wright, Senior Scientist


At MS&T 2014 in Pittsburgh, I spoke with an EBSD user of one of our competitors’ systems. Regarding the data collection offerings from the different vendors, he commented that “you guys are basically all the same”. In the context of our discussion, his statement was not meant negatively; he was arguing against some of the claims surrounding rectangular versus circular phosphors. Nonetheless, his statement became a proverbial “burr under my saddle”.

Figure 1: Raw and background corrected pattern

As luck would have it, shortly after that conversation I received some EBSD data from one of our customers, who also has one of our competitors’ EBSD systems. The customer had collected a dataset on the competitor’s system with purposefully poor imaging conditions – high gain and low exposure time, as would be used for maximizing data collection speed. This was done to create a dataset and accompanying patterns for testing a newly developed indexing method1. As expected, the noisy patterns led to low-quality indexing results despite the material being typically straightforward for EBSD – recrystallized nickel. The customer actively sought out a representative of the competitor to verify that the optimal software settings were used when indexing the noisy patterns. In view of my previous conversation, I was eager to take advantage of this opportunity to compare the results of our indexing algorithms on these noisy patterns against those obtained by our competitor, to see if we as vendors really are “all the same” or not.

The patterns were recorded at 80×60 pixels. I cropped out a 60×60 circular pattern and applied background correction to each using a background formed by averaging all of the patterns in the dataset together. No additional image processing was performed. Standard operating parameters for the Hough transform and the indexing algorithm were used (10 bands were specified).
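The preprocessing described above – a static background built by averaging every pattern in the dataset, followed by a circular crop – can be sketched in a few lines of NumPy. This is a minimal illustration, not the OIM DC implementation; the array layout (patterns stacked along the first axis) and the division-based flat-fielding are my assumptions.

```python
import numpy as np

def correct_patterns(patterns):
    """Background-correct low-resolution EBSD patterns.

    patterns: float array of shape (n, 60, 80) -- hypothetical layout;
    the actual storage format depends on the acquisition software.
    """
    # Static background: the average of every pattern in the dataset.
    background = patterns.mean(axis=0)
    # Divide out the background (small epsilon guards against /0).
    corrected = patterns / (background + 1e-8)
    # Crop the central 60x60 square, then keep only a circular region.
    n, h, w = corrected.shape
    x0 = (w - h) // 2
    cropped = corrected[:, :, x0:x0 + h]
    yy, xx = np.mgrid[:h, :h]
    mask = (yy - h / 2 + 0.5) ** 2 + (xx - h / 2 + 0.5) ** 2 <= (h / 2) ** 2
    return np.where(mask, cropped, 0.0)
```

Averaging the whole dataset works here because the orientation varies from point to point, so the Kikuchi bands average out and only the slowly varying intensity gradient survives in the background.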

IPF maps as obtained from the competitor’s indexing results and from OIM DC are shown in Figures 2a and 2b. The competitor was only able to index 30.4% of the patterns. The indexing success rate from my OIM-based rescan of the same patterns is 90.3% – nearly three times better. Obviously, the different EBSD systems are not “basically all the same”.

Figure 2: IPF Maps constructed from data from (a) the competitor, (b) OIM DC (CI > 0.1) and (c) the dictionary method.

When I presented these results, the improvement was so dramatic that some assumed I had applied a clean-up process to achieve such good results. In fact, the indexing rate was calculated by first performing CI standardization2 and then excluding points with CI values below 0.1. I emphasize that the CI standardization process does not modify the orientation data – it only updates the CI values. In order to verify the validity of our indexing success rate metric, I compared my indexing results on a point-by-point basis against results obtained using a dictionary method1 pioneered through a collaborative effort between groups at the University of Michigan (A. Hero), Carnegie Mellon (M. De Graef), AFRL (J. Simmons) and BlueQuartz Software (M. Jackson). The IPF map in Figure 2c shows the extremely high fidelity of this approach even with these noisy patterns. While the dictionary method does an excellent job, it should be noted that it is very computationally intensive relative to standard indexing methods. The good news from my point of view is that 89.8% of the orientations obtained by OIM DC match those obtained by the dictionary method, whereas the competitor’s data matches only 30.3%. This confirms both the proficiency of the OIM DC indexing routines and the validity of our indexing success rate metric.
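The point-by-point comparison above amounts to counting how many paired orientations agree within some angular tolerance, optionally restricted to confidently indexed points. A minimal sketch of that bookkeeping follows; for brevity it ignores crystal symmetry (a real comparison for nickel would first reduce each misorientation by the 24 cubic symmetry operators), and the quaternion convention and 5° tolerance are my assumptions, not the values used in the study.

```python
import numpy as np

def match_fraction(quats_a, quats_b, tol_deg=5.0, ci=None, ci_min=0.1):
    """Fraction of points whose orientations agree within tol_deg.

    quats_a, quats_b: (n, 4) arrays of unit quaternions (w, x, y, z),
    one pair per scan point. Crystal symmetry is ignored here for
    brevity; nickel would require reduction by the cubic operators.
    """
    # Misorientation angle between paired quaternions: 2*acos(|qa . qb|).
    dots = np.abs(np.sum(quats_a * quats_b, axis=1)).clip(0.0, 1.0)
    angles = 2.0 * np.degrees(np.arccos(dots))
    ok = angles <= tol_deg
    if ci is not None:
        ok &= ci > ci_min  # count only confidently indexed points
    return ok.mean()
```

The absolute value on the dot product handles the double cover of rotations by quaternions (q and -q represent the same orientation).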

Figure 3: Image Quality Map

As an aside, it is clear that there is a horizontal band near the bottom of the scan and a vertical band near the right edge where the points are difficult to index – particularly in the region where the two bands overlap. This is due to poorer quality patterns in these regions, as is evident in the IQ map (Figure 3). The scan was part of a montage of overlapping scans, and the overlapping regions have additional hydrocarbon contamination leading to poorer quality patterns.

While I was pleased with our results relative to our competitor’s, it should be noted that this single dataset does not represent a test of the full EBSD system performance. However, it does provide insight into the relative capability of EDAX’s indexing routines. I was happy to verify that our triplet indexing method and our implementation of the Hough transform are both clearly very robust.
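For readers unfamiliar with how the Hough transform finds Kikuchi bands: each bright pixel votes for every line (rho, theta) that could pass through it, and bands show up as peaks in the accumulator. The toy sketch below illustrates the idea only; the binning, thresholding, and parameter counts are illustrative assumptions, not the standard operating parameters mentioned above.

```python
import numpy as np

def hough_accumulate(image, n_theta=90, n_rho=60):
    """Toy Hough transform: vote bright pixels into (rho, theta) bins.

    Kikuchi bands appear as bright, roughly straight lines, so peaks
    in the accumulator locate candidate bands. Bin counts here are
    illustrative, not production values.
    """
    h, w = image.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w) / 2.0
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(image > image.mean())  # bright pixels vote
    yc, xc = ys - h / 2.0, xs - w / 2.0        # origin at pattern center
    for t, theta in enumerate(thetas):
        rho = xc * np.cos(theta) + yc * np.sin(theta)
        idx = np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1)
        np.add.at(acc, (idx, t), image[ys, xs])  # intensity-weighted votes
    return acc, rhos, thetas
```

In a real indexing pipeline the strongest peaks (here, up to the 10 bands specified) are extracted from the accumulator, converted to band normals, and handed to the triplet indexing step.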

A special thanks to Michael Jackson of BlueQuartz Software (www.bluequartz.net) for providing the dictionary method results.
1. Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M. Jackson, J.P. Simmons, M. De Graef and A.O. Hero (2015). “A dictionary approach to the EBSD indexing problem.” Microscopy and Microanalysis, under review.
2. M.M. Nowell and S.I. Wright (2005). “Orientation effects on indexing of electron backscatter diffraction patterns.” Ultramicroscopy 103: 41-58.


  1. This seems an interesting approach. I look forward to seeing the published paper (reference 1). Can you also comment on the difference in processing time between the dictionary method and the normal routine?

  2. I’m sure the paper on the dictionary method I cited will have a lot more details, but from presentations I’ve seen so far on the subject I seem to remember it taking about 2-3 hours to do the calculations necessary to simulate all the patterns needed to fill the dictionary. The actual matching against the experimental patterns currently takes 1-2 hours on datasets of the size presented here (~50,000 points). I know the research groups involved with the dictionary method are working on optimizing the codes, so I’m sure it will be faster down the road. For comparison, acquiring data with patterns of this size (60×60 pixels) on a material like nickel would take just a few minutes in the normal live indexing mode.

  3. I got some more details for you on the dictionary method from those working on the dictionary code. The dataset shown was processed on a few different machines. Generating the dictionary took about 8 hours on a 3 GHz Mac Pro with 8 cores. The actual processing of the input data against the dictionary took about 3 hours to run. Since this dataset was processed, the authors of the dictionary paper have analyzed and simplified the algorithm and have managed to reduce the time for this step by about an hour. One of the original team members is optimizing the dictionary generation algorithms to run on GPUs, which should also yield a large speed gain. The second processing step was run on a 2009-era Mac Pro (2.6 GHz, 8 cores). The codes use BLAS/LAPACK; the sources are available and are written in C++ and Fortran. Once you generate a dictionary, you can reuse it for subsequent EBSD collections of the same material under similar operating conditions. The published paper will have considerably more detail when released.
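For readers curious what the expensive matching step conceptually looks like: each experimental pattern is compared against every simulated dictionary pattern and assigned the best match. The NumPy sketch below is my own illustration using cosine similarity on flattened patterns; the array shapes and similarity measure are assumptions, not the authors’ actual formulation – see the Chen et al. paper for that.

```python
import numpy as np

def dictionary_index(experimental, dictionary):
    """Assign each experimental pattern its best-matching dictionary entry.

    experimental: (n, p) flattened, background-corrected patterns.
    dictionary:   (m, p) flattened simulated patterns, one per sampled
                  orientation. Shapes are illustrative only.
    """
    # Normalize rows so the dot product becomes cosine similarity.
    e = experimental / np.linalg.norm(experimental, axis=1, keepdims=True)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = e @ d.T                  # (n, m) similarity matrix
    best = scores.argmax(axis=1)      # best dictionary orientation per point
    return best, scores[np.arange(len(best)), best]
```

The (n, m) matrix product is exactly the kind of dense linear algebra that BLAS/LAPACK accelerate, which is consistent with the hours-long run times mentioned above: the cost scales with the number of scan points times the number of dictionary orientations.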

  4. Great. Look forward to seeing a demo of it, if possible!
    Also, it would be a win-win situation for EDAX and its customers if you could release the codes to your customers at no cost – at least for a limited time.


  5. I assume you are referring to the dictionary method being developed by the groups at Michigan, CMU, AFRL and BlueQuartz; the source codes for that method are already available. While we have not made our Hough transform and triplet indexing codes publicly available, they are well documented in the following publications.

    “Automatic Analysis of Electron Backscatter Diffraction Patterns” S.I. Wright and B.L. Adams. Metallurgical Transactions A 23, 759-767 (1992).

    “Individual Lattice Orientation Measurements: Development and Applications of a Fully Automatic Technique” S.I. Wright. PhD Thesis, Yale University (1992).

    “Advances in Automatic EBSP Single Orientation Measurements” K. Kunze, S.I. Wright, B.L. Adams and D.J. Dingley. Textures and Microstructures 20, 41-54 (1993).

  6. Thanks for taking such an approach to what must have been a difficult comment! It’s very professional to remain so calm and simply validate capabilities. These days, it’s the results that count in the long run and you have shown that the equipment gets you there, even with imperfect data.
