
Old Eyes?

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

I was recently asked to write a “Tips & Tricks” article for the EDAX Insight Newsletter as a follow-up to an EDAX Webinar (www.edax.com/news-events/webinars) I had recently given on Texture Analysis. I decided to expand on one item I had emphasized in the Webinar: the need to sample enough orientations for statistical reliability when characterizing a texture. The important thing to remember is that it is the number of grain orientations, as opposed to the number of orientations measured, that matters. But that led to the introduction of the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room in the newsletter to go into the kind of detail I would have liked, so I’ve decided to use our Blog forum to cover some details about sub-sampling that I found interesting.

Consider the case where you not only want to characterize the texture of a material but also the grain size or some other microstructural characteristic requiring a relatively fine step size relative to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1], and to capture the grain size well you will want about 500 pixels per average grain [2]. Together, these guidelines call for a scan with approximately 5 million datapoints. Instead of calculating the texture using all 5 million data points, you can use a sub-set of the points to speed up the calculation. In our latest release of OIM Analysis, this is not as big a concern as it once was, because the texture calculations have been multithreaded and are fast even for very large datasets. Nonetheless, since it is very likely that you will want to calculate the grain size anyway, you can use the area-weighted average grain orientation for each grain, as opposed to all 5 million individual orientation measurements, for a quick texture calculation. Alternatively, a sub-set of the points obtained by random or uniform sampling of the scan area could be used.
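To make the two point-based schemes concrete, here is a minimal sketch in Python (the grid dimensions, the NumPy-based approach, and all variable names are my own illustration, not OIM code):

```python
import numpy as np

rng = np.random.default_rng(42)

n_rows, n_cols = 2000, 2500   # a ~5 million point scan grid
fraction = 0.01               # keep 1% of the measurements

# Random sub-sampling: draw points anywhere in the scan, without replacement
n_sub = int(n_rows * n_cols * fraction)
flat = rng.choice(n_rows * n_cols, size=n_sub, replace=False)
rand_rows, rand_cols = np.unravel_index(flat, (n_rows, n_cols))

# Uniform sub-sampling: take every k-th point along both axes,
# with k chosen so the kept fraction is roughly the same
k = int(round(1.0 / np.sqrt(fraction)))
unif_rows, unif_cols = np.mgrid[0:n_rows:k, 0:n_cols:k]

print(n_sub, unif_rows.size)  # ~50,000 points from either scheme
```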

Of course, you may wonder how well the sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture, as can be seen in the Normal Direction IPF map and the accompanying (110) pole figure. To compare textures, I have simply computed a normalized squared difference point-by-point through the Orientation Distribution Function (ODF), which we call the Texture Difference Index (TDI) in the software.
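For reference, a normalized point-by-point squared difference between two ODFs f1 and f2 evaluated on the cells g_i of the ODF grid would take the general form below; this is a sketch consistent with that description, and the exact normalization used in OIM Analysis may differ:

$$\mathrm{TDI}(f_1, f_2) = \frac{\sum_i \left[ f_1(g_i) - f_2(g_i) \right]^2}{\sum_i f_1(g_i)^2 + \sum_i f_2(g_i)^2}$$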


This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs. binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, 92.5% of which have a CI > 0.2 after CI Standardization. I have elected to use only points with CI > 0.2. The results are shown in the following figure.

As the step size is relatively coarse with respect to the grain size, I have experimented with the minimum grain size: requiring at least two pixels before a set of similarly oriented points is considered a grain versus allowing a single pixel to constitute a grain. This resulted in 9,981 and 25,437 grains, respectively. In both cases, the differences between the textures from these two grain-based sub-sampling approaches and the texture from the full dataset are small, with the one-pixel grain-based sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and the uniform sampling grids differ so much, particularly as the number of points in the sub-sampling gets large (i.e., at 25% of the dataset)?

TDI
The pole figure for the 1000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess is that if I were to calculate the textures at a lower rank, something like L = 8, then the TDIs would go down. The rank dependence is already part of the TDI calculation, so this is an easy thing to examine. For comparison, I have chosen to look at four different datasets: (a) all of the data in the dataset above (named “fine”), (b) a dataset from the same material with a coarser step size (“coarse”) containing approximately 150,000 data points, (c) a sub-sampling of the original dataset using 1000 randomly sampled datapoints (“fine-1000”), and (d) the “coarse” dataset rotated 90 degrees about the vertical axis in the pole figures (“coarse-rotated”). It is interesting to note that for textures that are similar “by eye”, the TDI shows a general increase as the series expansion rank increases. However, for very dissimilar textures (i.e., “coarse” vs. “coarse-rotated”) the jump to a large TDI is immediate.
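To make the rank dependence concrete, here is a hedged sketch of how a truncated texture difference could be accumulated rank by rank if the two textures are stored as harmonic coefficients (the dict-of-arrays layout and the normalization are my own assumptions for illustration, not the OIM implementation):

```python
import numpy as np

def truncated_texture_difference(c1, c2, max_rank):
    """Normalized squared difference between two textures using only
    series expansion terms up to max_rank (sketch, not OIM code).
    c1, c2: dicts mapping rank l -> array of harmonic coefficients."""
    num = den = 0.0
    for l in range(max_rank + 1):
        if l in c1 and l in c2:
            num += np.sum(np.abs(c1[l] - c2[l]) ** 2)
            den += np.sum(np.abs(c1[l]) ** 2) + np.sum(np.abs(c2[l]) ** 2)
    return num / den if den else 0.0

# Example with made-up coefficients: two textures that agree well at low
# rank but drift apart at high rank show a growing difference with rank.
rng = np.random.default_rng(0)
c1 = {l: rng.normal(size=2 * l + 1) for l in range(0, 23, 2)}
c2 = {l: c1[l] + 0.05 * l * rng.normal(size=2 * l + 1) for l in c1}
for L in (4, 8, 16, 22):
    print(L, round(truncated_texture_difference(c1, c2, L), 4))
```

With the made-up coefficients above, the printed difference grows with the truncation rank, mirroring the behavior described for the “by-eye similar” textures.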

Random vs Uniform Sampling
The differences between the random and uniform sampling were a bit curious, so I decided to check the random points to see how they were positioned in the x-y space of the scan. The figure below compares the uniform and random sampling for 4000 datapoints – any more than this is hard to show. Clearly the random sampling is reasonable, but it does show a bit of clustering and some gaps within the scan area. These small differences show up as higher TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences – at least subtle with respect to my personal “by-eye” judgment. It seems to me that my “by-eye” judgment is biased toward lower-rank series expansions.


Of course, another conclusion would be that my eyesight is getting rank with age. ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.

References
[1] S.I. Wright, M.M. Nowell & J.F. Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction”. Metallurgical and Materials Transactions A 38, 1845-1855.
[2] S.I. Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements”. Practical Metallography 47, 16-33.

Teaching is learning

Dr. René de Kloe, Applications Specialist, EDAX

Figure 1. Participants of my first EBSD training course in Grenoble in 2001.

Everybody is learning all the time. You start as a child at home and later in school, and that never ends. In your professional career you learn on the job, and sometimes you get the opportunity to receive dedicated training on some aspect of your work. I am fortunate that my job at EDAX involves a good deal of this type of training for our customers interested in EBSD. Somehow, I have found myself teaching for a long time without really aiming for it. Already as a teenager, when I worked at a small local television station in the Netherlands, I taught the technical side of making television programs: handling cameras, lighting, editing – basically everything, just as long as it was out of the spotlight. Then during my geology studies, I assisted in teaching students a variety of subjects ranging from palaeontology to physics and geological fieldwork in the Spanish Pyrenees. So, unsurprisingly, shortly after joining EDAX in 2001, when I was supposed to simply participate in an introductory EBSD course (Figure 1) taught by Dr. Stuart Wright in Grenoble, France, I quickly found myself explaining things to the other participants instead of just listening.

Teaching about EBSD often begins when I give a presentation or demonstration for someone new to the technique. The capabilities of EBSD are such that just listing the technical specifications of an EBSD system to a new customer does not do it justice. Later, once a system has been installed, I meet the customers again at the dedicated training courses and workshops that we organise and participate in all over the world.

Figure 2. EBSD IPF map of Al kitchen foil collected without any additional specimen preparation. The colour-coding illustrates the extreme deformation by rolling.

In such presentations we of course talk about the basics of the method and the characteristics of the EDAX systems, but the discussion always moves on to how EBSD can help in understanding the materials and processes that the customer is working with. There, teaching starts working the other way as well. With every customer visit I learn something more about the physical world around us. Sometimes this concerns the fundamental understanding of a physical process that I had never even heard of.

At other times it is about ordinary items that we see or use in our daily lives, such as aluminium kitchen foil, glass panes with special coatings, or the structure of biological materials like eggs, bone, or shells. Aluminium foil is a beautiful material that is readily available in most labs, and I use it occasionally to show EBSD grain and texture analysis when I do not have a suitable polished sample with me (Figure 2). At some point, a customer explained to me in detail how the foil is produced in a double layer, back to back, to get one shiny and one matte side – which explained why it produces EBSD patterns without any additional preparation. Something new learned again.

Figure 3. IPF map of austenitic steel microstructure prepared by additive manufacturing.

A relatively new development is additive manufacturing, or 3D printing, where a precursor powder is melted into place by a laser to create complex components and shapes as a single piece. This method produces fantastically intricate structures (Figure 3) that need to be studied to optimise the processing.

With every new application my mind starts turning to identify the specific functions in the software that would be especially relevant to its understanding. In some cases, this then turns into a collaborative effort to produce scientific publications on a wide variety of subjects, e.g. on zeolite pore structures (ref. 1, Figure 4), poly-GeSi films (ref. 2, Figure 5), or directional solidification by biomineralization in mollusc shells (ref. 3).

Figure 4. Figure taken from ref. 1 showing EBSD analysis of zeolite crystals.

Figure 5. Figure taken from ref. 2 showing a laser-crystallised GeSi layer on a substrate.

Such collaborations continuously spark my curiosity and it is because of these kinds of discussions that after 17 years I am still fascinated with the EBSD technique and its applications.

This fascination also shows during the EBSD operator schools that I teach. The teaching materials that I use slowly evolve as the systems change, but the courses are never simply repetitions. Each time, customers bring their own materials and experiences that we use to show the applications and discuss best practices. I feel that it is true that you only really learn how to do something when you teach it.

This variation in applications often enables me to show the full extent of the analytical capabilities of the OIM Analysis™ software, something that tends to get lost in the years after a system has been installed. I have seen many times that when a new system is installed, the users invest a lot of time and effort in getting familiar with it in order to get the most out of it. However, with time the staff originally trained on the equipment move on and new people are introduced to electron microscopy and all that comes with it. The original users then train their successors in the use of the system, and inevitably something is lost at this point.

When you are highly familiar with performing your own analysis, you tend to focus on the bits of the software and the settings that you need for that analysis. The bits that you do not use fade away and are not taught to the new user. This is something that I see regularly during the training courses that I teach. Of course, there are new functions that have been implemented in the software that users have not seen before, but even people who have been using the system for years and are very familiar with its general operation always find new ways of doing things and discover functions that could have helped them with past projects. During the latest EBSD course in Germany in September, a participant from a site that has had EBSD for many years remarked that he was going to recommend the course to his colleagues who have been using the system for a long time, as he had found that the system could do much more than he had imagined.

You learn something new every day.

1) J. Am. Chem. Soc. 130 (41), 13516-13517 (2008). doi: 10.1021/ja8048767.
2) ECS Journal of Solid State Science and Technology 1 (6), P263-P268 (2012).
3) Adv. Mater. 2018, e1803855. doi: 10.1002/adma.201803855. [Epub ahead of print]

A Little Background on Backgrounds

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

If you have attended an EDAX EBSD training course, you have seen the following slide in the Pattern Indexing lecture. This slide attempts to explain how to collect a background pattern before performing an OIM scan. The slide recommends that the background come from an area containing at least 25 grains.

Those of you who have re-indexed a scan with saved patterns in OIM Analysis 8.1 may have noticed that there is a background pattern for the scan data (as well as one for each partition). This can be useful when re-indexing a scan where the raw patterns were saved, as opposed to background-corrected patterns. This background pattern is formed by averaging 500 patterns randomly selected from the saved patterns. 500 is a lot more than the minimum of 25 recommended in the slide from the training lecture.

Recently, I was thinking about these two numbers – is 25 really enough? Is 500 overkill? With some of the new tools available for simulating EBSD patterns (Callahan, P.G. and De Graef, M., 2013. Dynamical electron backscatter diffraction patterns. Part I: Pattern simulations. Microscopy and Microanalysis, 19(5), pp. 1255-1265), I realized this might provide a controlled way to refine the number of orientations that need to be sampled for a good background. To this end, I created a set of simulated patterns for nickel randomly sampled from orientation space. The set contained 6,656 patterns. If you average all of these patterns together, you get the pattern at left in the following row of three patterns. The average patterns for 500 and for 25 random patterns are also shown. The average pattern for 25 random orientations is not as smooth as I would have assumed, but the one with 500 looks quite good.

I decided to take it a bit further and using the average pattern for all 6,656 patterns as a reference I compared the difference (simple intensity differences) between average patterns from n orientations vs. the reference. This gave me the following curve:
From this curve, my intuitive estimate that 25 grains is enough for a good background appears to be a bit optimistic, but 500 looks good. There are a few caveats to this: the examples I am showing here are at 480 x 480 pixels, which is a much higher resolution than would be used for typical EBSD scans. In addition, the simulated patterns I used are sharper and have better signal-to-noise ratios than we are able to achieve in experimental patterns at typical exposure times. These effects are likely to lead to more smoothing.
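Here is a minimal sketch of that comparison, using random stand-in arrays in place of the simulated nickel patterns (pattern size and count are reduced, and all names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 6,656 simulated 480x480 patterns: small random images
patterns = rng.random((1000, 64, 64))

# Reference background: the average over every pattern in the set
reference = patterns.mean(axis=0)

def background_error(n):
    """Mean absolute intensity difference between an n-pattern average
    and the all-pattern reference background."""
    pick = rng.choice(len(patterns), size=n, replace=False)
    return np.abs(patterns[pick].mean(axis=0) - reference).mean()

for n in (25, 100, 500):
    print(n, background_error(n))  # the error shrinks roughly as 1/sqrt(n)
```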

I recently saw Shawn Bradley, one of the tallest players ever to play in the NBA at 7’6” (229 cm). I recognized him because he was surrounded by a crowd of kids – you can imagine that he really stood out! This reminded me that these results assume a uniform grain size. If you have 499 tiny grains encircling one giant grain, then the average over these 500 grains will not work as a background, as it will be dominated by the Shawn Bradley grain!

Aimless Wanderin’ – Need a Map?

Dr. Stuart Wright, Senior Scientist, EDAX

Interacting with Rudy Wenk of the University of California, Berkeley to get his take on the word “texture” as it pertains to preferred orientation reminded me of some other terminology associated with orientation maps that Rudy helped me sort out several years ago.

Map reconstructed from EBSD data showing the crystal orientation parallel to the sample surface normal.

Joe Michael of Sandia National Lab has mentioned to me a couple of times his objection to the term “IPF map”. As you may know, the term is commonly used to describe a color map reconstructed from OIM data where the color denotes the crystallographic axis aligned with the sample normal, as shown below. Joe points out that the term “orientation map” or “crystal direction map” or something similar would be much more appropriate, and he is absolutely right.

The reason behind the name “IPF map” is that I hi-jacked some of my code for drawing inverse pole figures (IPFs) as a basis for writing the code to create the color-coded maps. Thus, we started using the term internally (it was TSL at the time – prior to EDAX purchasing TSL), and then it leaked out publicly and the name stuck – my apologies to Joe. We later added the ability to color the microstructure based on the crystal direction aligned with any specified sample direction, as shown below.

Orientation maps showing the crystal directions aligned with the normal, rolling and transverse directions at the surface of a rolled aluminum sheet.
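For the curious, here is a minimal sketch of how such direction-based coloring can work for cubic symmetry (a simplified scheme of my own for illustration, not the exact coloring used in OIM):

```python
import numpy as np

def ipf_color_cubic(direction):
    """Map a crystal direction to RGB for a cubic inverse pole figure.

    The absolute values of the components are sorted so the direction
    falls in the standard (001)-(101)-(111) stereographic triangle; the
    color then blends toward red at [001], green at [101], blue at [111].
    """
    a, b, c = np.sort(np.abs(direction) / np.linalg.norm(direction))
    rgb = np.array([c - b,  # weight toward the [001] corner -> red
                    b - a,  # weight toward the [101] corner -> green
                    a])     # weight toward the [111] corner -> blue
    return rgb / rgb.max()  # stretch so the strongest channel is 1

print(ipf_color_cubic([0, 0, 1]))  # [1. 0. 0.] - red
print(ipf_color_cubic([1, 0, 1]))  # [0. 1. 0.] - green
print(ipf_color_cubic([1, 1, 1]))  # [0. 0. 1.] - blue
```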

The idea for this map germinated from a paper I saw presented by David Dingley in which a continuous color coding scheme was devised by assigning red, green and blue to the three axes of Rodrigues-Frank space: D. J. Dingley, A. Day, and A. Bewick (1991) “Application of Microtexture Determination using EBSD to Non Cubic Crystals”, Textures and Microstructures 14-18, 91-96. In this case, the microstructure had been digitized and a single orientation measured for each grain using EBSD. Unfortunately, I only have gray-scale images of these results.

SEM micrograph of nickel, grain orientations in Rodrigues-Frank space, and an orientation map based on the Rodrigues vector coloring scheme. Source: Link labeled “Full-Text PDF” at www.hindawi.com/archive/1991/631843/abs/

IPF map of recrystallized grains in grain-oriented silicon steel from Y. Inokuti, C. Maeda and Y. Ito (1987) “Computer color mapping of configuration of Goss grains after an intermediate annealing in grain oriented silicon steel.” Transactions of the Iron and Steel Institute of Japan 27, 139-144.
Source: Link labeled “Full Text PDF” at www.jstage.jst.go.jp/article/isijinternational1966/27/4/27_4_302/_article

We didn’t realize it at the time, but an approach based on the crystallographic direction had already been used in Japan. In this work, the stereographic unit triangle (i.e. an inverse pole figure) was used in a continuous color coding scheme where red is assigned to the <110> direction, blue to <111> and yellow to <100>, and points lying between these three corners of the stereographic triangle are combinations of these three colors. This color coding was used to shade grains in digitized maps of the microstructure according to their orientation: Y. Inokuti, C. Maeda and Y. Ito (1986) “Observation of Generation of Secondary Nuclei in a Grain Oriented Silicon Steel Sheet Illustrated by Computer Color Mapping”, Journal of the Japan Institute of Metals 50, 874-878. The images published in this paper received awards in 1986 from the Japan Institute of Metals and TMS.

AVA map and pole figure from a quartz sample from “Gries am Brenner” in the Austrian alps south of Innsbruck. The pole figure is for the c-axis. (B. Sander (1950) Einführung in die Gefügekunde der Geologischen Körper: Zweiter Teil Die Korngefüge. Springer-Vienna)
Source: In the last chapter (Back Matter) in the Table of Contents there is a link labeled “>> Download PDF” at link.springer.com/book/10.1007%2F978-3-7091-7759-4

I thought these were the first colored orientation maps ever constructed, until Rudy later corrected me (not the first, nor certainly the last, time). He sent me some examples of mappings of orientation onto a microstructure made by “hatching” or coloring a pole figure and then using those patterns or colors to shade the microstructure as traced from micrographs: H.-R. Wenk (1965) “Gefügestudie an Quarzknauern und -lagen der Tessiner Kulmination”, Schweiz. Mineralogische und Petrographische Mitteilungen 45, 467-515, and even earlier in B. Sander (1950) Einführung in die Gefügekunde, Springer Verlag, 402-409. Sander called this type of mapping and analysis AVA (Achsenverteilungsanalyse in German, or Axis Distribution Analysis in English).

Such maps were the forerunners of the “IPF maps” of today (you could actually call them “PF maps”) with which we are now so familiar. It turns out our wanderin’s in A Search for Structure (Cyril Stanley Smith, 1991, MIT Press) have actually not been “aimless” at all but have helped us gain real insight into that etymologically challenged world of microstructure.

My Turn

Dr. Stuart Wright, Senior Scientist, EDAX

One of the first scientific conferences I had the good fortune of attending was the Eighth International Conference on Textures of Materials (ICOTOM 8) held in 1987 in Santa Fe, New Mexico. I was an undergraduate student at the time and had recently joined Professor Brent Adams’ research group at Brigham Young University (BYU) in Provo, Utah. It was quite an introduction to texture analysis. Most of the talks went right over my head but the conference would affect the direction my educational and professional life would take.

Logos of the ICOTOMs I’ve attended

Professor Adams’ research at the time was focused on orientation correlation functions. While his formulation of the equations used to describe these correlations was coming along nicely, the experimental side was quite challenging. One of my tasks for the research group was to explore using etch pits to measure orientations on a grain-by-grain basis. It was a daunting proposition for an inexperienced student. At the ICOTOM in Santa Fe, Brent happened to catch a talk by a Professor from the University of Bristol named David Dingley. David introduced the ICOTOM community to Electron Backscatter Diffraction (EBSD) in the SEM. Brent immediately saw this as a potential experimental solution to his vision for a statistical description of the spatial arrangement of grain orientations in polycrystalline microstructures.

At ICOTOMs through the years

After returning to BYU, Brent quickly went about arranging for David to come to BYU to install the first EBSD system in North America. Instead of etch pits, my Master’s thesis became a comparison of textures measured by EBSD with those measured with traditional X-ray pole figures. I had the opportunity to make some of the first EBSD measurements with David’s system. From those early beginnings, Brent’s group moved to Yale University, where we successfully built an automated EBSD system, laying the groundwork for the commercial EBSD systems we use today.

I’ve had the good fortune to attend every ICOTOM since that one in Santa Fe, over 30 years ago now. The ICOTOM community has helped germinate and incubate EBSD and continues to be a strong supporter of the technique. This is evident in the immediate rise in the number of texture studies undertaken using EBSD after the technique was introduced to the ICOTOM community.

The growth in EBSD in terms of the percentage of EBSD related papers at the ICOTOMs

Things have a way of coming full circle, and now I am part of a group of three (with David Fullwood of BYU and my colleague Matt Nowell of EDAX) whose turn it is to host the next ICOTOM in St. George, Utah in November 2017. The ICOTOM meetings are held every three years and generally rotate between Europe, the Americas and Asia. At ICOTOM 18 we will be celebrating 25 years since the first papers using OIM were published.
It is a humbling opportunity to pay back the texture community, in just a small measure, for the impact my friends and colleagues within this community have had both on EBSD and on me personally. It is exciting to consider what new technologies and scientific advances will be germinated by the interaction of scientists and engineers in the ICOTOM environment. All EBSD users would benefit from attending ICOTOM and I invite you all to join us next year in Utah’s southwest red rock country for ICOTOM 18! (http://event.registerat.com/site/icotom2017/)

Some of the spectacular scenery in southwest Utah (Zion National Park)

There’s A Hole in Your Analysis!

Dr. René de Kloe, Applications Specialist, EDAX

EBSD analysis is all about characterizing the crystalline microstructure of materials. When we analyze materials using EBSD, the goal is to perform a comprehensive analysis of the entire field of interest. We strive for the highest possible indexing rates, and when we do misindex points we feel compelled to replace or clean these with other “valid” measurements simply copied from neighboring points, so we do not have to show these failures in a report or paper.

But what should we do when we don’t expect data from certain spots in the first place? For example, if a sample is porous or contains non-crystalline patches, or perhaps phases that don’t produce patterns? Typically, we simply try to ignore them. We may state that a certain fraction of our scan field refuses to produce indexing results and show which pixels these are, but that’s about it.

Figure 1: Pearlitic cast iron with graphite nodules – 13.5% graphite.

And that is strange, as such areas where we don’t expect patterns are truly an integral part of the material and should also be characterized for a complete microstructural description. A traditional example of such a material is cast iron which, although not porous, contains graphite inclusions that typically do not produce indexable EBSD patterns (Figure 1). Another example is material produced by 3D printing of metals, where small metal particles are sintered together using localized laser heating. This process does not generate a fully dense product (Figure 2), and understanding the pore structure is important in predicting its mechanical response under stress.

Figure 2: Porous 3D printed steel – indexing success 97.2%.

Analyzing the non-indexed areas creates some challenges for the data treatment, especially any cleanup that you may want to do. You need to ensure that individual misindexed points, for example along grain boundaries or inside grains, are not counted as pores. For a full analysis we need to be able to treat pore space as a special type of grain: not one where pixels are grouped together based on similarity in measured orientation, but just the opposite, one where pixels are combined based on misfit. This poses a special challenge for cleaning your data. Where a typical clean-up acts like an in-situ grain growth experiment, with grains expanded to consume bad points, in porous materials cleanup needs to be done carefully to prevent the real grains from growing into real non-indexed space.

In general, EBSD data cleanup should be done in 3 steps:
1. Identify the good points,
2. Preserve the good points
and only then
3. Replace bad points.

For steps 1 and 2 we can use the patented Confidence Index in the OIM™ Analysis software. For step 1 we set up a filter to allow only correctly indexed points (typically with CI > 0.1). However, this may remove too many points along grain boundaries, for example, where patterns overlap and indexing is uncertain. In step 2 we apply Confidence Index standardization to retrieve the pixels that were indexed correctly but had a low CI value and were excluded in step 1. This step assumes that if the orientation of a pixel matches that of adjacent pixels with high CI values, it was correctly indexed and should be included. This step does not change any measured orientations.
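A schematic sketch of steps 1 and 2 might look as follows (my own illustration of the logic, not the patented OIM algorithm; crystal symmetry is ignored in the misorientation calculation for brevity):

```python
import numpy as np

def misorientation_deg(q1, q2):
    """Angle between two orientations given as unit quaternions,
    ignoring crystal symmetry to keep the sketch short."""
    d = min(abs(float(np.dot(q1, q2))), 1.0)
    return np.degrees(2.0 * np.arccos(d))

def standardize_ci(quats, ci, neighbors, ci_min=0.1, tol_deg=5.0):
    """Step 1: trust points with CI above ci_min. Step 2: also trust
    low-CI points whose orientation matches a trusted neighbor.
    No measured orientation is ever changed."""
    trusted = ci > ci_min
    rescued = trusted.copy()
    for i in np.where(~trusted)[0]:
        for j in neighbors[i]:
            if trusted[j] and misorientation_deg(quats[i], quats[j]) < tol_deg:
                rescued[i] = True
                break
    return rescued  # boolean mask of points to keep for analysis
```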

In step 3 we must be more careful as it is easy to accidentally replace too many points and shrink the non-indexed space (Figure 3):

Figure 3: Effect of too rigorous cleanup of partially crystalline material – Cu interconnects.

A cleanup method that verifies whether a minimum number of neighboring points belong to a single grain, such as neighbor orientation correlation, is preferred.
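A hedged sketch of such a conservative replacement rule (my own illustration; the actual neighbor orientation correlation cleanup in OIM Analysis differs in its details):

```python
import numpy as np
from collections import Counter

def replace_bad_points(quats, good, neighbors, grain_of, min_votes=4):
    """Step 3 sketch: replace a bad point only when at least min_votes of
    its neighbors belong to one and the same grain; the point then copies
    the orientation of one of those neighbors (illustration only)."""
    for i in np.where(~good)[0]:
        votes = Counter(grain_of[j] for j in neighbors[i] if good[j])
        if votes:
            gid, count = votes.most_common(1)[0]
            if count >= min_votes:
                donor = next(j for j in neighbors[i]
                             if good[j] and grain_of[j] == gid)
                quats[i] = quats[donor]
                good[i] = True
    return quats, good
```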

Now that we know where the holes in the material are, we can get serious about analyzing them. First we need to define the real grains. Grains in EBSD analysis are defined as groups of points with similar orientations and a minimum number of pixels, for example a maximum point-to-point misorientation of less than 5 degrees and a minimum size of 3 pixels. When you remove these grains from your partition, the leftover pixels that do not fit into any grain can be identified. Coherent clusters of these misfit pixels are then grouped together into what might be called antigrains (Figure 4).

Figure 4: Grain and antigrain definition.
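A minimal sketch of this grouping step, using SciPy's connected-component labeling on a toy grain map (the array contents are made up; -1 marks pixels that did not join any grain):

```python
import numpy as np
from scipy import ndimage

# Toy grain map: non-negative values are grain IDs, -1 marks misfit pixels
grain_id = np.array([[ 0,  0, -1, -1],
                     [ 0,  0, -1,  1],
                     [ 0, -1, -1,  1],
                     [ 2,  2,  1,  1]])

# Group connected clusters of misfit pixels into "antigrains"
antigrain_id, n_antigrains = ndimage.label(grain_id == -1)
print(n_antigrains)   # 1 pore-like cluster in this toy example
print(antigrain_id)

# Pore sizes in pixels, reusing ordinary grain-statistics machinery
sizes = ndimage.sum(np.ones_like(antigrain_id), labels=antigrain_id,
                    index=range(1, n_antigrains + 1))
print(sizes)          # [6.]
```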

But even when the pores are recognized this way, the antigrains are not characterized by an orientation, and as such their boundaries will not show up in a traditional misorientation boundary overlay, which only shows the misorientation between recognized grains (Figure 5a). To make the antigrains visible as well, a boundary type must be selected that does not use misorientation as a criterion but rather the positions of triple junctions. Between the triple junction nodes, vectors that follow all grain and antigrain interfaces are then constructed (Figure 5b).

Figure 5: a) Standard grain boundary overlay on an IQ map based on grain orientation recognition; non-indexed areas are white. b) IQ map with reconstructed boundaries including the antigrain edges.

Once the antigrains are fully defined, all the normal grain characterization tools become available to describe the pore properties, ranging from a basic size distribution (Figure 6) to a full analysis of pore elongation and alignment (Figure 7).

Figure 6: Pore size distribution with colored highlighting in a 3D printed iron sample.

Figure 7: Alignment of pore elongation direction.

With the non-indexed points now properly assigned to antigrains, a full microstructural description of materials that are not fully dense, or that contain areas that cannot be indexed, is possible.

Finally we can do a (w)hole EBSD analysis.

Intelligent Use of IQ

Dr. Stuart Wright, Senior Scientist, EDAX

You don’t have to be a genius with a high IQ to recognize that IQ is an imperfect measure of intelligence much less EBSD pattern quality.

A Brief History of IQ
At the time we first came up with the idea of pattern quality, we were very focused on finding a reliable (and fast) image processing technique to detect the bands in the EBSD patterns. Thus, we were using the term “image” more frequently than “pattern”, and the term “image quality” stuck. The first IQ metric we formulated was based on the Burns algorithm (cumulative detected edge length) that we were using to detect the bands in the patterns in our earliest automation work1.

We presented this early work at the MS&T meeting in Indianapolis in October 1991. Niels Krieger Lassen showed some promising band detection results using the Hough Transform3. Even though the Burns algorithm was working well, we thought it would be good to compare it against the Hough Transform approach. During that time we decided to use the sum of the Hough peak magnitudes to define the IQ when using the Hough Transform2. The impetus for defining an IQ was to compare how well the Hough Transform approach performed versus the Burns algorithm as a function of pattern quality. In case you are curious, here is the result. Our implementation of the Hough Transform, coupled with the triplet indexing routine, clearly does a good job of indexing patterns of poor quality. Notice the relatively small Hough-based IQ values; this is because in this early implementation the average intensity of the pattern was subtracted from each pixel. This step was later dropped, probably simply to save time, which was critical when the cycle time was about four seconds per pattern.


After we did this work we thought it might be interesting to make an image by mapping the IQ value to a gray scale intensity at each point in a scan. Here is the resulting map – our first IQ map (Hough based IQ).

Not only did we explore ways of making things faster, we also wanted to improve the results. One by-product of those developments was that we redefined the IQ as the average of the detected Hough peak heights instead of the sum. A still later modification was to use the number of peaks requested by the user instead of the number of peaks detected. This was done so that patterns where only a few peaks were found did not receive unduly high IQ values.
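In code form, the evolved definition looks roughly like the sketch below, using a Radon transform as a stand-in for the Hough transform of the pattern intensities and naive peak picking in place of the real butterfly-mask peak detection; it is schematic only, not EDAX's implementation:

```python
import numpy as np
from skimage.transform import radon

def hough_iq(pattern, n_peaks=9):
    """IQ sketch: average height of the strongest peaks in the Hough/Radon
    transform of a pattern, divided by the requested (not detected) peak
    count. Schematic illustration only."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(pattern.astype(float), theta=theta, circle=True)
    # Naive peak picking: strongest value in each angular column, then the
    # n_peaks largest of those (real code detects true local maxima)
    column_max = sino.max(axis=0)
    peaks = np.sort(column_max)[-n_peaks:]
    return peaks.sum() / n_peaks  # average over requested peaks
```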

The next change came not from a modification in how the IQ was calculated, but from the introduction of CCD cameras with 12-bit dynamic range, which dramatically increased the IQ values.
In 2005, Tao and Eades proposed other metrics for measuring IQ4. We implemented these different metrics and compared them with our Hough-based IQ measurement in a paper we published in 20065. One of the main conclusions of that paper was that, while the other metrics had some value in a few very specific instances, our standard Hough-based IQ was the best parameter for most cases. Interestingly, exploring the different IQ metrics was the seed for our PRIAS6 ideas, but that is another story. Our competitors use other measures of IQ, but unfortunately these have not been documented – at least to my knowledge.

Factors Influencing IQ
While we have always tried to keep our software chronologically compatible, the IQ parameter has evolved, and thus comparing absolute IQ values from datasets obtained using older versions of OIM with results obtained using newer ones is probably not a good idea. Not only has the IQ definition evolved, but so has the Hough Transform itself. In fact, ever since we created the very first IQ maps we have realized that, while IQ maps are quite useful, they are only quantitative in the sense of relative values within an individual dataset. We have always cautioned against using absolute IQ values for comparing different datasets, in part because we know a lot of factors affect the IQ values:

  • Camera Settings:
    • Binning
    • Exposure
    • Gain
  • SEM Settings:
    • Voltage
    • Current
  • Hough Transform Settings:
    • Pattern Size
    • Mask Size
    • Number of peaks
    • Secondary factors (peak symmetry, min distance, vertical bias,…)
  • Sample Prep
  • Image Processing

In developing the next version of OIM, we thought it might be worthwhile to revisit the IQ parameter as implemented in our various software packages to see what we could learn about the absolute value of IQ. In that vein, I thought it would be particularly interesting to look at the mask size and the number of peaks selected. To do this, I used a dataset where we had recorded the patterns. Thus, we were able to rescan the dataset using different Hough settings to ascertain the impact of these settings on the IQ values. I also decided to add some Gaussian noise7 to the patterns to see what effect the noise had on the Hough settings.
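The noise step itself is straightforward; here is a sketch of the degradation applied to a stored 8-bit pattern (the sigma values and the clip to 0-255 are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def add_gaussian_noise(pattern, sigma):
    """Return a copy of an 8-bit pattern degraded with zero-mean
    Gaussian noise, for rescanning with different Hough settings."""
    noisy = pattern.astype(float) + rng.normal(0.0, sigma, pattern.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# e.g. rescore the same stored pattern at several noise levels using the
# hough_iq sketch from earlier:
# for sigma in (0, 5, 10, 20):
#     print(sigma, hough_iq(add_gaussian_noise(pattern, sigma)))
```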

It would be nice to scale the peak heights with the mask size. However, the “butterfly” masks contain negative values, making it quite difficult to scale by the weights of the individual elements of the convolution masks. In the original 7×7 mask we selected the individual components so that their sum would equal zero, to provide some inherent scaling. However, as we introduced other mask sizes this became increasingly difficult, particularly for the smaller masks (intended primarily for more heavily binned patterns). Thus, we expected the peak heights to be larger for larger masks simply due to the number of matrix components. This trend was confirmed, as shown by the red curves in the figure below. It should be noted that the small mask was used on a 48×48 pixel pattern, the medium on a 96×96 pattern, and the large on a 192×192 pixel pattern.

We also decided to look at the effect of the number of peaks selected. As we include more peaks, we expect the IQ to decrease, since the weaker peaks drive the average Hough peak height down. This trend was also confirmed, as can be seen from the blue curves in the figure.

While these results went as expected, it can be harder to predict the effects of various image processing routines on IQ. The following plot shows the effect of several image processing routines on the IQ values. Perhaps someone with a higher IQ could have predicted these results, but to me not all of the trends were expected. Of course, we usually apply image processing to improve indexing, not IQ.

Conclusions
In theory, if all the settings are the same, then the absolute value of the IQ for a matrix of samples should be meaningful. However, it would be rare to use the same settings (camera, SEM, sample prep, …) for all materials in all states (e.g. deformed vs. recrystallized). In fact, this is one of the challenges of doing in-situ EBSD work for either a deformation experiment or a recrystallization/grain growth experiment – it is not always easy to predict how the SEM parameters or camera settings need to change as the experiment progresses. In addition, any changes made to the hardware generally mean that changes to the software settings are needed as well. Keeping everything constant is a lot easier in theory than in practice.

In conclusion, the IQ metric is “relatively” straightforward, but it must “absolutely” be used with some intelligence.☺

Bibliography
1. S.I. Wright and B.L. Adams (1992) “Automatic Analysis of Electron Backscatter Diffraction Patterns” Metallurgical Transactions A 23, 759-767.
2. K. Kunze, S.I. Wright, B.L. Adams and D.J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20, 41-54.
3. N.C. Krieger Lassen, D. Juul Jensen and K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6, 115-121.
4. X. Tao and A. Eades (2005) “Errors, artifacts, and improvements in EBSD processing and mapping” Microscopy and Microanalysis 11, 79-87.
5. S.I. Wright and M.M. Nowell (2006) “EBSD Image Quality Mapping” Microscopy and Microanalysis 12, 72-84.
6. S.I. Wright, M.M. Nowell, R. de Kloe, P. Camus and T.M. Rampton (2015) “Electron Imaging with an EBSD Detector” Ultramicroscopy 148, 132-145.
7. S.I. Wright, M.M. Nowell, S.P. Lindeman, P.P. Camus, M. De Graef and M. Jackson (2015) “Introduction and Comparison of New EBSD Post-Processing Methodologies” Ultramicroscopy 159, 81.