Time For A Change – New Perspectives in Grain Size Analysis


Dr. Stuart Wright, Senior Scientist EBSD, EDAX

For better or worse, I've long been involved in trying to set forth guidelines for the measurement of grain size using EBSD. This involvement has included serving on some of the standards committees, advising customers, and reading, reviewing and publishing papers [1]. However, I've always felt unsettled about the outcome of those efforts. Part of that discomfort comes from the fear that an inexperienced EBSD user could be misled into an incorrect conclusion by using a canned procedure. However, some recent experiments have given me more confidence in obtaining reliable grain size statistics using EBSD.

There are a couple of challenges associated with the measurement of grain size using EBSD. First, there are the usual factors associated with collecting good EBSD data: determining a polishing procedure that will produce good patterns, finding good SEM and EBSD camera settings, and ensuring the sample is in the expected geometry [2]. However, I will assume these factors are well under control so that good quality EBSD data is obtained, that is, data with a high indexing rate so that very little clean-up is required. The challenge is then making sure the choices for the parameters associated with grain size are appropriate. The first of these is the grain tolerance angle. I will focus on recrystallized materials at this point, so the choice of grain tolerance angle is not as critical as it is for a deformed material. In the case of deformed materials, I've found you need to experiment with different values to get a feel for the best choice. For recrystallized material, however, the default value of 5° generally works well. The next critical parameter is the choice of the minimum grain size in terms of the number of grid points. Once this value is selected, the analysis software will exclude grains containing fewer points than this value from the grain size distribution.

There are several approaches to selecting a good value for the minimum grain size – I will call it Nmin for brevity. It is entirely possible that a single point in an OIM scan could be a grain. Imagine the grain structure in three dimensions; it is easy to picture a single scan point being associated with the top or bottom tip of a grain just intersecting the sampling plane. Thus a single point in an OIM scan could represent a grain, especially if that point has a high confidence index. Of course, we have also seen many cases where an individual point located at a grain boundary has a low confidence index. This arises because the resulting pattern is a combination of competing patterns from the two grains on either side of the grain boundary (or three at a triple point), and thus the Hough transform will find bands from all the competing patterns, resulting in an incorrect indexing [3]. Another consideration that has gone into the choice of Nmin is how well a shape can be reconstructed from a given number of grid points. For example, consider a circle. The following figure shows how well a circle can be approximated by a given number of grid points. To approximate the area of a circle with less than 5% error requires about 30 points. However, if very many grains are measured, the errors in approximating any given grain will average out, so I personally don't think this argument should carry much weight in the choice of Nmin. In fact, I hope by the end of these ramblings to have convinced you that, with the right approach, the choice of Nmin is not as important as one might presume.


Figure 1: Approximation of the area of a circle using a square grid and a hexagonal grid.
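As a quick numerical check of the figure above, here is a small sketch (mine, not part of the original analysis) that counts square-grid points falling inside a circle and reports the relative error of the resulting area estimate. The function name and parameters are hypothetical; the exact error depends on where the grid happens to sit relative to the circle's center.

```python
# Sketch: estimate how well a square grid of points approximates the
# area of a circle. A point is counted if it falls inside the circle;
# the approximated area is (number of points inside) * step^2.
import numpy as np

def circle_area_error(radius, step):
    """Relative error of a square-grid approximation of a circle's area."""
    # Grid covering the bounding box of the circle, centered on it
    coords = np.arange(-radius, radius + step, step)
    xx, yy = np.meshgrid(coords, coords)
    inside = xx ** 2 + yy ** 2 <= radius ** 2
    approx_area = inside.sum() * step ** 2
    true_area = np.pi * radius ** 2
    return abs(approx_area - true_area) / true_area

# A circle covered by ~30 grid points lands within a few percent of the
# true area, consistent with the ~5% figure quoted above.
err = circle_area_error(radius=3.0, step=1.0)  # 29 points inside
```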

I’ve done some experiments with a very nice set of data provided by my colleague Matt Nowell. These measurements were performed as part of a Round-Robin test conducted for the development of ISO guidelines for the measurement of grain size using EBSD [4]. The figure below shows five data sets measured on one sample merged together. I have excluded the grains touching the edges from my analysis. On average, each field contains 750 grains. The step size is 2 microns and each scan contains almost 250,000 data points. The total number of grains analyzed was 3748. Prior to any analysis, a grain dilation clean-up was applied to each individual dataset using a 5° tolerance angle and a minimum grain size of two pixels, with grains required to span multiple grid rows. The percentage of points changed in the clean-up process was only 0.7%, confirming the high fidelity of the data.

Figure 2: Grain map of the merged data.


The grain size distributions for the individual data sets and the merged data set are shown in Figure 3 using both a linear axis and a log axis for the grain diameter (the horizontal axis). The vertical axis is the usual area fraction.

Figure 3: Log and linear grain size distributions of the individual data sets (colors) and the merged data set (black) 

The next step in such analyses is to calculate the average grain size (I will use the diameter). This is simply done by adding up the diameters of all the grains and then dividing by the total number of grains. Figure 4 shows the average grain diameter overlaid on the distribution for the merged data.
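The number average described above can be sketched in a few lines. This is my own illustration, not the OIM analysis code; the diameters below are toy values, and in practice each diameter would come from the reconstructed grain area via d = 2·sqrt(A/π).

```python
# Sketch of the number average: sum the grain diameters and divide by
# the number of grains. Toy data in microns, not the blog's dataset.
import numpy as np

def number_average_diameter(diameters):
    """Number (arithmetic) average of the grain diameters."""
    return float(np.mean(diameters))

diameters = np.array([10.0, 20.0, 30.0, 40.0])
d_num = number_average_diameter(diameters)  # -> 25.0
```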

Figure 4: Area fraction grain diameter distribution for the merged data overlaid with the average diameter.


As usual, the location of the average diameter does not correlate well with the center of the distribution. I always find this disconcerting. A common response is to simply attribute this to a need for more data, especially since the distribution curve is still a bit jagged even with 3748 grains. You can experiment with the number of bins used to create the distribution as well as the grain definition parameters, but the average grain size always ends up a bit to the left of where you think it should be. The reason for this mismatch is that the distribution is given in area fraction but the average is calculated as a number average. If you plot the distribution as a number fraction, then the number average appears to fit the data better. However, what originally approximated a Gaussian curve in the area fraction plot now becomes skewed to the left in the number fraction plot.
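The distinction between the two kinds of distribution plots can be sketched as follows. This is my own illustration (the function name is hypothetical): both histograms bin the same diameters, but the number fraction weights every grain equally, while the area fraction weights each grain by its area, which shifts the apparent distribution toward larger diameters.

```python
# Sketch: number fraction vs area fraction histograms of the same
# grain diameters, assuming roughly circular grains (A = pi d^2 / 4).
import numpy as np

def diameter_histograms(diameters, bins=10):
    diameters = np.asarray(diameters, dtype=float)
    areas = np.pi * (diameters / 2.0) ** 2
    # Number fraction: each grain counts once
    counts, edges = np.histogram(diameters, bins=bins)
    number_frac = counts / counts.sum()
    # Area fraction: each grain weighted by its area, same bin edges
    area_counts, _ = np.histogram(diameters, bins=edges, weights=areas)
    area_frac = area_counts / area_counts.sum()
    return edges, number_frac, area_frac
```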

Figure 5: Number fraction grain diameter distribution for the merged data overlaid with the average diameter.


Another way to overcome this mismatch between the average and the peak of the distribution is to use an area weighted average instead of a number average. The area weighted average is a relatively simple calculation given by:

$$\bar{d}_{\text{area}} = \frac{\sum_{i=1}^{n} A_i d_i}{\sum_{i=1}^{n} A_i}$$

where $n$ is the number of grains, $A_i$ is the area of grain $i$ and $d_i$ is its diameter.
Area weighted averaging leads to an average value that matches the one estimated “by eye”.
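The area weighted average is straightforward to compute directly from the per-grain data. The sketch below is mine (the function name is hypothetical): because the weights grow with grain area, the largest grains dominate the result.

```python
# Sketch of the area weighted average diameter:
# d_area = sum(A_i * d_i) / sum(A_i)
import numpy as np

def area_weighted_average_diameter(areas, diameters):
    areas = np.asarray(areas, dtype=float)
    d = np.asarray(diameters, dtype=float)
    return float((areas * d).sum() / areas.sum())
```

For two circular grains of diameter 10 and 100, the number average is 55 but the area weighted average is just over 99; the big grain carries almost all the weight.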

Figure 6: Area fraction grain diameter distribution for the merged data overlaid with the average diameter calculated using number averaging and area weighted averaging.


In fact, I’ve found that the area weighted average provides a very good seed for any automatic fitting of the distribution data. The advantage of the area average relative to a curve-fit determination is that the area average is calculated from the raw grain size data, independent of the binning used to create the distribution plot.

Figure 7: Gaussian distributions for the merged data grain diameter distributions.


So why do I bring up this whole area weighted averaging approach when the accepted approach is number averaging? What does it have to do with selecting an appropriate Nmin? The reason lies in the following plot of the area and number averages versus Nmin. In this plot the area average appears less sensitive to the choice of Nmin than the number average. This observation has held up across many different samples on which I’ve performed grain size analysis.


Figure 8: Plot of average grain diameter as a function of the choice of Nmin.

The sensitivity of the area average to Nmin seems to increase in the plot at a value of about 50. If we look at this same data plotted on a log scale, the increase in sensitivity at higher Nmin becomes more apparent. These curves start at Nmin = 5 because this was the minimum grain size I selected for the grain dilation clean-up.

Figure 9: Log plot of average grain diameter as a function of the choice of Nmin.


The argument in the grain size community is for selecting a much larger Nmin value than that typically used in the EBSD community (2 to 5 points). The ASTM standard states that the minimum grain included should contain 100 points and that the average grain should contain at least 500 points [5]. The standard evolved to this methodology primarily for historical consistency with optical measurements. However, these results show that the choice of Nmin is not so critical in determining the average value, provided it is not too large. Another way to look at this is to plot the data differently. If we change the horizontal axis to Nmin divided by the number of points corresponding to the average grain size, and change the vertical axis to the average diameter divided by the average diameter at Nmin equal to five, then we get the following plot.
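The Nmin sweep behind these plots can be sketched as below. This is my own reconstruction, not the OIM code: the per-grain point counts and diameters are synthetic, and with a square grid of step h each grain's area is simply its point count times h².

```python
# Sketch: recompute the number average and area weighted average while
# excluding grains smaller than Nmin points, for a range of Nmin values.
import numpy as np

def averages_vs_nmin(grain_points, grain_diameters, step, nmin_values):
    grain_points = np.asarray(grain_points)
    d = np.asarray(grain_diameters, dtype=float)
    areas = grain_points * step ** 2  # grid-point count -> area
    num_avg, area_avg = [], []
    for nmin in nmin_values:
        keep = grain_points >= nmin  # drop grains below the cutoff
        num_avg.append(d[keep].mean())
        area_avg.append((areas[keep] * d[keep]).sum() / areas[keep].sum())
    return np.array(num_avg), np.array(area_avg)
```

On synthetic lognormal grain structures, the number average drifts noticeably as Nmin grows while the area weighted average barely moves, matching the behavior in the figures.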

Figure 10: The normalized average grain size as a function of the ratio of the number of points in the minimum sized grain to the number of points in the average sized grain.


From this plot it appears that using a small number for Nmin is fine. The key is not to pick too large a number – it should not exceed 10% of the number of points in the average sized grain. However, this is with the proviso that area weighted averaging is used in the analysis. The contribution of the small grains to the average is much smaller for area weighted averaging than for number averaging. Thus using a small number, even just 1, will lead to nearly the same result as using a larger value such as the 5 I used in this example. Because we (the EBSD community) can measure grains of any size much more reliably than traditional optical methods, I believe this approach is the correct one despite its inconsistency with the historical approach.

References
[1] Wright, S. I. (2010). “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements.” Practical Metallography 47: 16-33.
[2] Nolze, G. (2007). “Image distortions in SEM and their influences on EBSD measurements.” Ultramicroscopy 107: 172-183.
[3] Wright, S. I., M. M. Nowell, R. de Kloe and L. Chan (2014). “Orientation Precision of Electron Backscatter Diffraction Measurements Near Grain Boundaries.” Microscopy and Microanalysis: (in press but available on First View).
[4] ISO 13067: Microbeam analysis – Electron backscatter diffraction – Measurement of average grain size.
[5] ASTM E2627-10: Standard practice for determining average grain size using electron backscatter diffraction (EBSD) in fully recrystallized polycrystalline materials.

5 comments

  1. Intriguing! Have you thought about using the median grain size instead of the mean? This approach was suggested by Ranalli (1984). It also solves the problem of the “average” grain size not falling in the center of the distribution and makes a lot more sense than using the mean since grain size distributions tend to be lognormally distributed. It does, like the traditional approach though, still mean you need good resolution of the smallest grains…

    Reference: Ranalli (1984) Grain size distribution and flow stress in Tectonites, Journal of Structural Geology, Volume 6, Issue 4, 1984, Pages 443–447

    Abstract:
    Flow stresses in dynamically recrystallized tectonites are usually determined by using empirically calibrated grain size-stress relations. As grain size adjusts locally to stress, the validity of the procedure is dependent on the assumption that the local stress, at grain or subgrain level, is equal to the externally applied tectonic stress. The local stress, however, is a stochastic variable with a distribution related to the tectonic stress: once this fact is recognised, the question becomes that of deciding which measure of grain size, and therefore of local stress, gives the best estimate of the tectonic stress.

    Current procedures implicitly assume that such a measure is the mean grain size. It is shown here that, on the basis of the most general probabilistic considerations, the local stress, and therefore the grain size, can be expected to have a lognormal distribution, and consequently that the median grain size, and not the mean, is the best indicator of tectonic stress. The lognormality of grain size has been confirmed by observations, both on metals and on rocks.

    The use of the mean, rather than the median, grain size introduces a further source of uncertainty in flow stress determinations. An expression for the error in stress is derived, and found to depend on the coefficients of variation (i.e. dispersions) in the grain size distribution of calibrating curve and field tectonite. If these two are the same (or in the trivial case in which they are both very small), no error arises from the use of mean grain size. But, if this condition is not fulfilled, an error of up to 10–20% in flow stress may occur.

  2. Thanks for the feedback, I was not aware of Ranalli’s approach. I’ll have to take a look at the paper. Of course, the resolution of the small grains always seems to be the issue.

    1. I looked into the possibility of using the median as a better approximation as well as similar metrics in log space. For the particular dataset in this blog the log normal median (28.3) was the best fit followed by the traditional median (30.9). The log normal median lines up very well with the peak of number fraction distribution. The log normal median is taken as e^mean(ln(diameters)).

    2. I calculated the median for the data shown in the blog and found a value of 30.9. However, I then calculated a “median” where the sum of the areas of the grains with diameters above the “median” covered 50% of the total area. This “median” came out to 50.2. This is quite similar to the values obtained for the number and area weighted average diameters I originally calculated.
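    The three medians discussed in this thread can be sketched as follows. This is my own illustration, not the code used for the values quoted above; it assumes roughly circular grains when converting diameters to areas.

    ```python
    # Sketch: number median, lognormal median e^mean(ln d), and an
    # area-weighted "median" (the diameter at which the grains below it
    # cover half of the total area), assuming A = pi d^2 / 4.
    import numpy as np

    def number_median(diameters):
        return float(np.median(diameters))

    def lognormal_median(diameters):
        return float(np.exp(np.mean(np.log(diameters))))

    def area_weighted_median(diameters):
        d = np.sort(np.asarray(diameters, dtype=float))
        areas = np.pi * (d / 2.0) ** 2
        cum = np.cumsum(areas) / areas.sum()
        # First diameter at which the cumulative area fraction reaches 50%
        return float(d[np.searchsorted(cum, 0.5)])
    ```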

  3. That was quick work on the median grain size; and an interesting article.
    Your opening para sums things up very well. The ISO standards were interesting experiences.
