Month: June 2016

Why is There an Error in My Measurement?

Sia Afshari, Global Marketing Manager, EDAX

One interesting part of my job has been the opportunity to meet people of all technical backgrounds and engage in conversations about their interest in analytical instrumentation and ultimately what a measurement means!

I recall several years back on a trip to Alaska I met a group of young graduate students heading to the Arctic to measure the “Ozone Hole.”  Being an ozone lover, I started asking questions about the methodology they were going to use to accomplish this important task, especially their approach for comparative analysis of the expansion of the ozone hole!

I learned that analytical methods used for ozone measurement have changed over time, and the type of instrumentation utilized for this purpose has changed along with advances in technology.  I remember asking about a common reference standard that one would use among the various techniques over the years, to make sure the various instrumentation readings were within the specified operating parameters and the collected data represented the same fundamental criteria, obtained by different methods under different circumstances over time.  The puzzled looks these young scientists gave my questions about cross comparison, errors, calibration, reference standards, statistical confidence, etc. told me I’d better stop tormenting them and let them enjoy their journey!

Recently I had an occasion to discuss analytical requirements for an application that included both bulk and multilayer coating samples with a couple of analysts.  We talked about the main challenge for multi-layer analysis: the availability of a reliable type standard that truly represents the actual samples being analyzed.  It was noted that the attainable accuracy for specimens where type standards are not available needs to be evaluated by considering the errors involved and the propagation of those errors through the measurements, especially as one approaches “infinite thickness” conditions for some of the constituents!

As the demand for “turn-key” systems increases, and users become more interested in simply obtaining the “numbers” from analytical tools, it is imperative for us as a manufacturer and software developer to embed the fundamental measurement principles into our data presentation, so that a measurement result is qualified and quantified with a degree of confidence that is easily observable by an operator.  This is our goal as we embark on development of the next generation of intelligent analytical software.

The other part of our contribution as a manufacturer is training users in their understanding of measurement principles.  It is imperative to emphasize the basics and the importance of following a set of check sheets for obtaining good results!

My goal is to use a series of blogs as a venue for highlighting the parameters that influence a measurement, examining the underlying reasons for errors in general, and providing references for a better understanding of the expected performance of analytical equipment in general and x-ray analysis in particular!


So to start:

My favorite easy readings on counting statistics and errors are old ones, but classics never go out of style:
• Principles and Practices of X-ray Spectrometric Analysis, by Eugene Bertin, Chapter 11, “Precision and Error: Counting Statistics.”
• Introduction to the Theory of Error, by Yardley Beers (Addison-Wesley, 1953).  Yes, it is old!

Also, Wikipedia has a very nice write-up on the basic principles of measurement uncertainty, recommended by one of my colleagues, that could be handy for a novice and even an experienced user.  If you don’t believe in Wikipedia, at least the article has a number of linked reference documents for further review.
• https://en.wikipedia.org/wiki/Measurement_uncertainty

As food for thought on measurements, I would consider:
• What am I trying to do?
• What is my expectation of performance in terms of accuracy and precision?
• Are there reference standards that represent my samples?
• What techniques are available for measuring my samples?

With recognition of the facts that:
• There is no absolute measurement technique, all measurements are relative.  There is uncertainty in every measurement!
• The uncertainty in a measurement is a function of systematic errors, random errors, and bias.
• All measurements are comparative in nature, so there is a requirement for reference standards.
• Reference standards that represent the type of samples analyzed provide the best results.
• One cannot measure more accurately than the specified reference standard’s error range.  (Yes, reference standards have error too!)
• Fundamental Parameters (FP) techniques are expedients when type standards are not available but have limitations.
• A stated error for a measurement needs to be qualified with a degree of confidence as a number, i.e. in terms of standard deviations.
• Precision is a controllable quantity in counting statistics by extending the measurement time.
• What else is present in the sample often is as important as the targeted element in analysis. Sample matrix does matter!
• The more complex the matrix of the specimen being measured, the more convoluted are the internal interactions between the matrix atoms.
• Systematic errors are in general referred to as accuracy, and random errors as precision.
• The uncertainties/errors add in quadrature (the square root of the sum of squares).
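Two of these points, the counting-time dependence of precision and the quadrature addition of errors, lend themselves to a quick numerical sketch. The count rate and error values below are hypothetical, chosen only for illustration:

```python
import math

# Poisson counting statistics: for N accumulated counts, the standard
# deviation is sqrt(N), so relative precision improves as 1/sqrt(N),
# i.e. it is controllable by extending the measurement time.
# Hypothetical line intensity: 1000 counts per second.
rate_cps = 1000
for live_time_s in (10, 100):
    n = rate_cps * live_time_s
    rel_sigma = math.sqrt(n) / n  # = 1/sqrt(N)
    print(f"{live_time_s:>4} s: N = {n}, relative sigma = {100 * rel_sigma:.2f}%")

# Independent uncertainties add in quadrature
# (the square root of the sum of squares), not linearly:
def combine_in_quadrature(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. a 0.5% counting error combined with a 1.0% calibration error
total = combine_in_quadrature(0.5, 1.0)
print(f"combined uncertainty = {total:.2f}%")  # ~1.12%, not 1.5%
```

Note how a tenfold increase in counting time improves the relative precision only by a factor of √10, and how the combined error is dominated by its largest contributor.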

Till next time, when we will visit these topics and other relevant factors in more detail. Questions, suggestions, and related inputs are greatly appreciated. By the way, I am still thinking that the ozone layer may not be being measured scientifically these days!

Cleaning Up After EBSD 2016

Matt Nowell, EBSD Product Manager, EDAX

I recently had the opportunity to attend the EBSD 2016 meeting, the 5th topical conference of the Microanalysis Society (MAS) in a series on EBSD, held this year at the University of Alabama. This is a conference I am particularly fond of, as I have been able to attend and participate in all 5 of these meetings that have been held since 2008. This conference has grown significantly since then, from around 100 participants in 2008 to around 180 this year. This year there were both basic and advanced tutorials, with lab time for both topics. There have also been more opportunities to show live equipment, with demonstrations available all week for the first time. This is of course great news for EDAX, but I did feel a little bad that Shawn Wallace, our EBSD Applications guru in the US, had to stay in the lab while I was able to listen to the talks all week. For anyone interested or concerned, we did manage to make sure he had something to eat and some exposure to daylight periodically.

This conference also strongly encourages student participation, and offers scholarships (I want to say around 70) that allow students to travel and attend this meeting. It’s something I try to mention to academic users all the time. I’m at a stage in my career now that I am seeing that people, who were students when I trained them years ago, are now professors and professionals throughout the world. I’ve been fortunate to make and maintain friendships with many of them, and look forward to seeing what this year’s students will do with their EBSD knowledge.

There were numerous interesting topics and applications including transmission-EBSD, investigating cracking, both hydrogen and fatigue induced, HR-EBSD, nuclear materials (the sample prep requirements from a safety perspective were amazing), dictionary-based pattern indexing, quartz bridges in rock fractures, and EBSD on dinosaur fossils. There were also posters on correlation with Nanoindentation, atom probe specimen preparation, analysis of asbestos, ion milling specimen preparation, and tin whisker grain analysis. The breadth of work was great to see.

One topic in particular was the concept of cleaning up EBSD data. EBSD data clean up must be used carefully. Generally, I use a Grain CI Standardization routine, and then create a CI >0.1 partition to evaluate the data quality. This approach does not change any of my measured orientations, and gives me a baseline to evaluate what I should do next. My colleague Rene uses this image, which I find appropriate at this stage:

Figure 1: Cleanup ahead.

The danger here, of course, is that further cleanup will change the orientations away from the initial measurement. This has to be done with care and consideration. I mention all this because at the EBSD 2016 meeting, I presented a poster on NPAR, and people were asking what the difference is between NPAR and standard cleanup. I thought this blog would be a good place to address the question.

With NPAR, we average each EBSD pattern with all of the neighboring patterns to improve the signal to noise ratio (SNR) of the averaged pattern prior to indexing. Pattern averaging to improve SNR is not new to EBSD; we used it with analog SIT cameras years ago, but moved away from it as a requirement as digital CCD sensors improved pattern quality. However, if you are pushing the speed and performance of the system, or working with samples with low signal contrast, pattern averaging is useful. The advantage of the spatial averaging with NPAR is that one does not incur the time penalty associated with collecting multiple frames at a single location. A schematic of this averaging approach is shown here:

Figure 2: NPAR.
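The idea of spatial pattern averaging can be sketched in a few lines. This is a minimal illustration of neighbor averaging on synthetic data, not EDAX's NPAR implementation; for uncorrelated noise, averaging k patterns reduces the noise standard deviation by roughly √k:

```python
import numpy as np

def average_with_neighbors(patterns):
    """patterns: 4-D array (rows, cols, pat_h, pat_w) of EBSD patterns.
    Each pattern is replaced by the mean of itself and its immediate
    neighbors in the scan grid (a square-grid stand-in for the kernel)."""
    rows, cols = patterns.shape[:2]
    out = np.empty_like(patterns, dtype=float)
    for r in range(rows):
        for c in range(cols):
            # gather the neighborhood that lies inside the scan bounds
            block = patterns[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = block.mean(axis=(0, 1))
    return out

# noisy synthetic "patterns": constant signal plus Gaussian noise
rng = np.random.default_rng(0)
raw = 100.0 + rng.normal(0, 10, size=(5, 5, 32, 32))
avg = average_with_neighbors(raw)
# interior points average 9 patterns, so noise drops roughly threefold
print(raw.std(), avg.std())
```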

As an experiment, I used our Inconel 600 standard (nominally recrystallized) and found a triple junction. I then collected multiple patterns from each grain with a fast camera setting and a correspondingly lower SNR in the EBSD patterns. Representative patterns are shown below.

Figure 3: Grain Patterns.

Now if one averages patterns from the same grain with little deformation, we expect SNR to increase and indexing performance to improve. Here is an example from 7 patterns averaged from grain 1.

Figure 4: Frame Averaged Example.

That is easy though. Let’s take a more difficult case where, with our hexagonal measurement grid averaging kernel, we have 4 patterns from one grain and 3 patterns from another. The colors correspond to the orientation maps of the triple junction shown below.

Figure 5: Multiple Grains.

In this case, the orientation solution from this mixed averaged pattern was only 0.1° from the pattern from the 1st grain, with this solution receiving 35 votes out of a possible 84. What this indicated to me was that 7 of the 9 detected bands matched this 1st grain pattern. It’s really impressive what the triplet indexing approach accomplishes with this type of pattern overlap.

Finally, let’s try an averaging kernel where we have 3 patterns from one grain, 2 patterns from a second grain, and 2 patterns from a third grain, as shown here:

Figure 6: Multiple Grains.

Here the orientation solution was misoriented 0.4° from the pattern from the 1st grain, with this solution receiving 20 votes out of the possible 84. This indicates that 6 of the 9 detected bands matched this 1st grain pattern. These examples do show that we can deconvolute the correct orientation measurement from the strongest pattern within a mixed pattern, which can help improve the effective EBSD spatial resolution when necessary.

Now, to compare NPAR to traditional cleanup, I then set my camera gain to the maximum value, and collected an OIM map from this triple junction, with an acquisition speed near 500 points per second at 1nA beam current. I then applied NPAR to this data. Finally, I reduced the gain and collected a dataset at 25 points per second at the same beam current as a reference. The orientation maps are shown below with corresponding Indexing Success Rates (ISR) as defined by the CI > 0.1 fraction after CI Standardization. This is a good example of how clean up can be used to improve the initial noisy data, as NPAR provides a new alternative with better results.

Figure 7: Orientation Maps.

We can clearly see that the NPAR data correlated well with the slower reference data with the NPAR data collected ≈ 17 times faster than the traditional settings.

Now let’s see how clean up (or noise reduction, although I personally don’t like this term, as often we are not dealing with noise-related artifacts) compares to the NPAR results. To start, I used the grain dilation routine in OIM Analysis, which first determines a grain (I used the default 5° tolerance angle and 2-pixel minimum grain size), and then expands that grain out by one step per pass. The results from a single pass, a double pass, and dilation to completion (when all the grains are fully grown together) are shown below. If we compare this approach with the NPAR and As-Collected references, we see that dilation cleanup has brought the 3 primary grains into contact, but a lot of “phantom” artifact grains with low confidence index are still present (and therefore colored black).

Figure 8: Grain Dilation.
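The dilation rule itself is simple to picture. Here is a toy sketch, not the OIM Analysis implementation, with grains reduced to integer labels on a square grid and -1 marking unindexed (low confidence) points; each pass, an unindexed point that touches a grain adopts the most common neighboring label:

```python
import numpy as np

def dilate_pass(labels):
    """One dilation pass: unindexed points (-1) adjacent to a grain
    take the most common grain label among their neighbors."""
    out = labels.copy()
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue  # already indexed; dilation never changes it
            neighbors = labels[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            good = neighbors[neighbors != -1]
            if good.size:
                out[r, c] = np.bincount(good).argmax()
    return out

# two grains (0 and 1) separated by a band of unindexed points
grid = np.array([[0,  0, -1, 1],
                 [0, -1, -1, 1],
                 [0,  0, -1, 1]])
print(dilate_pass(grid))
# one pass fills the band from both sides:
# [[0 0 1 1]
#  [0 0 1 1]
#  [0 0 1 1]]
```

Running `dilate_pass` repeatedly until no -1 points remain corresponds to "dilation to completion", where all the grains are fully grown together.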

The other clean up routine I will commonly use is the Neighbor Orientation Cleanup routine, which in principle is similar to the NPAR neighbor relation approach. Here, instead of averaging patterns spatially, from each measurement point we compare the orientation measurements of all the neighboring points, and if 4 of the 6 neighbors have the same orientation, we change the orientation of the measurement point to this new neighbor orientation. Results from this approach are shown here.

Figure 9: Neighbor Orientation Correlation.
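The voting rule can be sketched as follows. This is a simplified illustration, not the OIM routine: orientations are reduced to integer labels, and a square grid with 6 chosen offsets stands in for the hexagonal measurement grid:

```python
import numpy as np

def neighbor_orientation_cleanup(labels, min_votes=4):
    """If at least min_votes of a point's (up to 6) neighbors share one
    orientation label that differs from the point's own, reassign the
    point to that orientation. Unlike dilation, this CAN overwrite an
    already-indexed measurement, hence the caution in the text."""
    out = labels.copy()
    rows, cols = labels.shape
    # 6 offsets as a square-grid stand-in for hexagonal neighbors
    offsets = ((-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (1, 1))
    for r in range(rows):
        for c in range(cols):
            nbrs = [labels[r + dr, c + dc] for dr, dc in offsets
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            if not nbrs:
                continue
            counts = np.bincount(nbrs)
            winner = counts.argmax()
            if winner != labels[r, c] and counts[winner] >= min_votes:
                out[r, c] = winner
    return out

# a lone mis-indexed point (label 9) inside grain 0 gets corrected
grid = np.zeros((3, 3), dtype=int)
grid[1, 1] = 9
print(neighbor_orientation_cleanup(grid))  # all zeros: the 9 is outvoted
```

A point on a real grain boundary keeps its orientation, since no single neighboring orientation reaches the vote threshold.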

Now of course the starting data is very noisy, and was intentionally collected at higher speeds with lower beam currents to highlight the application of NPAR. With initial data like this, traditional clean up routines will have limitations in representing the actual microstructure, and this is why we urge caution when using these procedures. However, clean up can be used more effectively with better starting data. To demonstrate this, a single pass of dilation and a single pass of neighbor orientation correlation were performed on the NPAR processed data. These results are shown below, along with the reference orientation map. In this case, the low confidence points near the grain boundary have been filled with the correct orientation, and more of the grain boundary interface has been filled in, which would allow better grain misorientation measurements.

Figure 10: NPAR Cleanup.

When I evaluate these images, I think the NPAR approach gives me the best representation relative to the reference data, and I know that the orientation is measured from diffraction patterns collected at or adjacent to each measurement point. I think this highlights an important concept when evaluating EBSD indexing, namely that one should understand how pattern indexing works in order to understand when it fails. Most importantly, I think (and this was also emphasized at the EBSD 2016 meeting) that it is good practice to always report what approach was used in measuring and presenting EBSD data to better interpret and understand the measurements relative to the real microstructure.