Month: January 2015

“You guys are basically all the same”

Dr. Stuart Wright, Senior Scientist

Please click here to read Dr. Wright’s blog in Chinese.

At MS&T 2014 in Pittsburgh, I spoke with an EBSD user of one of our competitors’ systems. He commented, in regard to the data collection offerings from the different vendors, that “you guys are basically all the same”. In the context of our discussion, his statement was not meant negatively, as he was arguing against some of the claims surrounding rectangular versus circular phosphors. Nonetheless, his statement became a proverbial “burr under my saddle”.

Figure 1: Raw and background corrected pattern

As luck would have it, shortly after that conversation I received some EBSD data from one of our customers who also has one of our competitors’ EBSD systems. The customer had collected a dataset on the competitor’s system with purposefully poor imaging conditions – high gain and low exposure time, as would be used for maximizing data collection speed. This was done to create a dataset and accompanying patterns for testing a newly developed indexing method1. As expected, the noisy patterns led to low-quality indexing results despite the material being typically straightforward for EBSD – recrystallized nickel. The customer actively sought out a representative of the competitor in order to verify that the optimal software settings were used when indexing the noisy patterns. In view of my previous conversation, I was eager to take advantage of this opportunity to compare the results of our indexing algorithms on these noisy patterns against those obtained by our competitor, to see if we as vendors really are “all the same” or not.

The patterns were recorded at 80×60 pixels. I cropped out a 60×60 circular pattern and applied background correction to each using a background formed by averaging all of the patterns in the dataset together. No additional image processing was performed. Standard operating parameters for the Hough transform and the indexing algorithm were used (10 bands were specified).
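The background correction described above is conceptually simple. Here is a minimal sketch of dividing out a background formed by averaging all patterns in the dataset, assuming the raw patterns are stacked in a NumPy array; the function and variable names are illustrative only, not the OIM DC implementation:

```python
import numpy as np

def background_correct(patterns):
    """Divide each raw pattern by the dataset-average background.

    patterns: float array of shape (n_patterns, height, width).
    Returns background-corrected patterns rescaled to 8-bit range.
    """
    background = patterns.mean(axis=0)                   # average all patterns together
    corrected = patterns / np.maximum(background, 1e-6)  # guard against divide-by-zero
    # Rescale each pattern individually to the full 0-255 range
    lo = corrected.min(axis=(1, 2), keepdims=True)
    hi = corrected.max(axis=(1, 2), keepdims=True)
    return (255 * (corrected - lo) / np.maximum(hi - lo, 1e-6)).astype(np.uint8)

# Example with synthetic 80x60-pixel patterns (width 80, height 60)
rng = np.random.default_rng(0)
raw = rng.integers(0, 255, size=(100, 60, 80)).astype(float)
clean = background_correct(raw)
print(clean.shape, clean.dtype)
```

Because every pattern in the scan contributes to the average, the fixed-pattern response of the phosphor and camera divides out while the orientation-dependent band contrast survives.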

IPF maps as obtained from the competitor’s indexing results and the results from OIM DC are shown in Figures 2a and 2b. The competitor was only able to index 30.4% of the patterns. The indexing success rate from my OIM-based rescan using the patterns is 90.3% – nearly three times better. Obviously, the different EBSD systems are not “basically all the same”.

Figure 2: IPF Maps constructed from data from (a) the competitor, (b) OIM DC (CI > 0.1) and (c) the dictionary method.

When I presented these results, the improvement was so dramatic that it was assumed I had performed a clean-up process to achieve such good results. The indexing rate was calculated by first performing CI standardization2 and then excluding points with CI values below 0.1. I emphasize that the CI standardization process does not perform any modifications to the orientation data – it only updates the CI values. In order to verify the validity of our indexing success rate metric, I compared my indexing results on a point-by-point basis against results obtained using a dictionary method1 pioneered through a collaborative effort between groups at the University of Michigan (A. Hero), Carnegie Mellon (M. De Graef), AFRL (J. Simmons) and BlueQuartz Software (M. Jackson). The IPF map in Figure 2c shows the extremely high fidelity of this approach even with these noisy patterns. While the dictionary method does an excellent job, it should be noted that it is very computationally intensive relative to standard indexing methods. The good news from my point of view is that 89.8% of the orientations obtained by OIM DC match those obtained by the dictionary method, whereas the competitor’s data matches only 30.3%. This confirms both the proficiency of the OIM DC indexing routines and the validity of our indexing success rate metric.
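The success-rate metric itself is straightforward once CI standardization has been applied. A minimal sketch, assuming the per-point confidence index values are available as an array (illustrative code, not OIM’s):

```python
import numpy as np

def indexing_success_rate(ci, threshold=0.1):
    """Fraction of scan points whose confidence index exceeds the threshold.

    ci: array of CI values, one per scan point (after CI standardization).
    """
    ci = np.asarray(ci, dtype=float)
    return float((ci > threshold).mean())

# Toy example: 3 of 5 points clear the CI > 0.1 cut
ci_values = [0.05, 0.3, 0.8, 0.02, 0.5]
print(indexing_success_rate(ci_values))  # → 0.6
```

Note that thresholding on CI only flags points as reliably or unreliably indexed; it never alters the orientations themselves, which is why the point-by-point comparison against the dictionary results is a meaningful independent check.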

Figure 3: Image Quality Map


As an aside, there is clearly a horizontal band near the bottom of the scan and a vertical band near the right edge where the points are difficult to index – particularly in the region where the two bands overlap. This is due to poorer-quality patterns in these regions, as is evident in the IQ map (Figure 3). The scan was part of a montage of overlapping scans, and the overlapping regions have additional hydrocarbon contamination leading to poorer-quality patterns.

While I was pleased with our results relative to our competitor’s, it should be noted that this single dataset does not represent a test of the full EBSD system performance. However, it does provide insight into the relative capability of EDAX’s indexing routines. I was happy to verify that our triplet indexing method and our implementation of the Hough transform are both clearly very robust.

A special thanks to Michael Jackson of BlueQuartz Software (www.bluequartz.net) for providing the dictionary method results.
1. Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M. Jackson, J.P. Simmons, M. De Graef and A.O. Hero (2015). “A dictionary approach to the EBSD indexing problem.” Microscopy and Microanalysis, under review.
2. M.M. Nowell and S.I. Wright (2005). “Orientation effects on indexing of electron backscatter diffraction patterns.” Ultramicroscopy 103: 41-58.

The Next Big Thing in EDS – 2015

Tara Nylese, Global Applications Manager, EDAX

Late last year our applications group met with our marketing group to talk about the topics and content likely to be hot for the year ahead. At the time I was knee-deep in gathering data for a conference talk and had some compelling results on our new Spectrum Match feature. I had also recently returned from a trip where I met with several SEM manufacturers, showed them this function, and had excellent discussions with them on the applications and industries that would greatly benefit from this type of analysis. So, of course, what quickly came to mind was that Spectrum Match would be the hot topic for 2015!

This capability is not really new, as we’ve had it in our software for as far back as I can remember, nearly two decades at EDAX. The concept is quite simple: build a library of spectra of known materials, save that library, collect an unknown spectrum and, finally, click a button to match the unknown against the library. While it allows an analyst to get a quick result showing that the unknown is consistent with a known material, traditional routes had some drawbacks. For example, a user would often have to spend time collecting a dedicated library. Also, matching based on spectral intensities would not apply if there were changes in analytical conditions such as kV, geometry or even detector resolution, so there wasn’t much flexibility.
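To make the traditional, intensity-based approach concrete, here is a rough sketch using cosine similarity between raw channel counts; the material names and toy five-channel “spectra” are purely illustrative, and this is not how EDAX’s software implements matching:

```python
import numpy as np

def match_spectrum(unknown, library):
    """Score an unknown spectrum against each library entry by cosine similarity.

    unknown: 1-D array of channel counts.
    library: dict mapping material name -> 1-D array of channel counts.
    Returns (best_name, scores); scores lie in [0, 1] for non-negative counts.
    """
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    u = unit(unknown)
    scores = {name: float(unit(spec) @ u) for name, spec in library.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy 5-channel "spectra" for two library materials
library = {"BaSO4": [1, 5, 2, 8, 1], "Sb2S3": [6, 1, 4, 1, 3]}
unknown = [1, 4, 2, 9, 1]   # closest in shape to the BaSO4 entry
best, scores = match_spectrum(unknown, library)
print(best)  # → BaSO4
```

This kind of direct intensity comparison only works when the unknown and the library were acquired under the same conditions, which is exactly the inflexibility described above: change the kV or the detector and the peak intensities no longer line up.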

Modernizing the approach to library matching is exactly why our new method is so exciting. I can still remember when our programmer showed me how our first alpha version could search an entire PC for spectra meeting user-defined criteria and build the library from that search. Any of our users who add the Spectrum Match feature to an existing system can literally back-reference years’ worth of data to build their libraries. And we’ve also added a unique approach to the matching: comparing the quant results of the knowns to the quant results of the unknowns. Since the quant routine is inherently a great normalizer of changes in analytical and detector factors, a user can build a general library at any kV and, say, match a 20 kV library of alloys against a 30 kV spectrum of an unknown superalloy based on the resulting quant numbers.
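The idea of matching on quant results rather than raw intensities can be sketched as ranking library entries by the distance between quantified compositions. The alloy names and weight-percent numbers below are toy values for illustration only, not EDAX quant output:

```python
def quant_match(unknown_quant, library_quants):
    """Rank library entries by closeness of quantified composition (wt%).

    Compositions are dicts of element -> weight percent. Because quant
    results normalize away kV, geometry and detector differences, spectra
    acquired under different conditions can still be compared this way.
    """
    elements = set(unknown_quant)
    for q in library_quants.values():
        elements |= set(q)

    def distance(q):
        # Euclidean distance over the union of elements; absent elements count as 0 wt%
        return sum((unknown_quant.get(e, 0.0) - q.get(e, 0.0)) ** 2
                   for e in elements) ** 0.5

    return sorted(library_quants, key=lambda name: distance(library_quants[name]))

# A library built at 20 kV, matched against quant from a 30 kV unknown (toy numbers)
library = {
    "IN718": {"Ni": 52.5, "Cr": 19.0, "Fe": 18.5, "Nb": 5.1, "Mo": 3.0},
    "316L":  {"Fe": 68.0, "Cr": 17.0, "Ni": 12.0, "Mo": 2.5},
}
unknown = {"Ni": 51.8, "Cr": 19.4, "Fe": 19.0, "Nb": 4.9, "Mo": 3.2}
print(quant_match(unknown, library)[0])  # → IN718
```

Because the comparison happens in composition space rather than intensity space, the acquisition conditions of the library and the unknown no longer have to agree.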

But it gets even better than that, and here’s where the light bulbs will start turning on for our phase mapping users. Our highly popular phase mapping routine is built directly on spectra that are saved with every phase map. These phase mapping spectra can also then be used to build the spectrum matching library! Or vice versa. So, any phases created with an auto phase map can then be solved by a comparison to a library. Suddenly an image is created that shows all of the areas of interest and matches them to a known.

To put all of this in perspective, I’ll describe the work I did for the presentation I mentioned earlier and share some of the results here. My goal was to “solve” one of my favorite unknown samples, a brake pad, which is a composite of a wide variety of materials. I started by putting my mineral standard in the SEM and collecting a series of 20-second spectra at 15 kV and 20 K CPS with a resolution better than 125 eV.

Figure 1

Once I had collected about 15 spectra in the project (Figure 1), a few of them at different kVs to check robustness, I built my library (Figure 2).

Figure 2

I then loaded my “unknown” brake pad and collected the phase map seen here (Figure 3).

Figure 3

Unlike the spectra, I collected the map at about 100 K CPS and a resolution of 127 eV. I ran the auto phase map, which found the unique areas and saved those as phase spectra. I then pulled up a single phase spectrum and matched it against my library. I immediately had a match showing that the pink phase in the map was SbS, with an 80% match score compared to 62% for the next closest candidate! Sure enough, antimony sulfide is sometimes a component of brake pads, notably one that can form a potentially carcinogenic compound under friction and heat such as found with braking. In other words, you don’t want this in your brake pad*. I then also pulled a spectrum out of the map dataset by defining a bright area and extracting the spectrum (Figure 4a).

Figure 4a

This one matched barium sulfate at 93%, as shown in the overlay (Figure 4b).

Figure 4b

So, whether it’s an auto-generated phase spectrum or a spectrum from a user-selected area, you can match the spectra against a library to solve the unknowns in a map dataset.

I hope this casual review of some very relevant and powerful results will start you thinking about how spectrum matching will work for your own applications.  Be on the lookout for more on this exciting feature in the coming year.

* It’s worth mentioning that this was an older brake pad, not one from a customer, and that it was manufactured before current regulations took effect.