Recently I gave a webinar on dynamic pattern simulation. The use of a dynamic diffraction model [1, 2] allows EBSD patterns to be simulated quite well. One topic I introduced in that presentation was that of dictionary indexing [4]. You may have seen presentations on this indexing approach at some of the microscopy and/or materials science conferences. In this approach, patterns are simulated for a set of orientations covering all of orientation space. An experimental pattern is then tested against all of the simulated patterns to find the one that provides the best match. This approach does particularly well for noisy patterns.
I’ve been working on implementing some of these ideas into OIM Analysis™ to make dictionary indexing more streamlined for datasets collected using EDAX data collection software – i.e. OIM DC or TEAM™. It has been a learning experience and there is still more to learn.
As I dug into dictionary indexing, I recalled our first efforts to automate EBSD indexing. Our first attempt was a template matching approach [3]. The first step in this approach was to use a “Mexican Hat” filter to emphasize the zone axes in the patterns. This processed pattern was then compared against a dictionary of “simulated” patterns. The simulated patterns were simple – a white pixel (or set of pixels) for the major zone axes in the pattern, with everything else colored black. In this procedure the orientation sampling for the dictionary was done in Euler space. It seemed natural to go this route at the time, because we were using David Dingley’s manual on-line indexing software, which focused on the zone axes. In David’s software, an operator clicked on a zone axis and identified the <uvw> associated with it. Two zone axes needed to be identified, and then the user had to choose between a set of possible solutions. (Note – it was a long time ago and I think I remember the process correctly. The EBSD system was installed on an SEM located in the botany department at BYU. Our time slot for using the instrument was between 2:00-4:00 am, so my memory is understandably fuzzy!)
One interesting thing of note in those early dictionary indexing experiments was that the maximum step size in the sampling grid of Euler space that would still result in successful indexing was found to be 2.5°, quite similar to the maximum target misorientation for modern dictionary indexing. Of course, this crude sampling approach may have contributed to the lack of robustness of this early attempt at dictionary indexing. The paper proposed that the technique could be improved by weighting the zone axes by the sum of the structure factors of the bands intersecting at each zone axis. However, we never followed up on this idea, as we abandoned the template matching approach and moved to the Burns algorithm coupled with the triplet voting scheme [5], which produced more reliable results. Using this approach, we were able to get our first set of fully automated scans. We presented the results at an MS&T symposium (Microscale Texture of Materials Symposium, Cincinnati, Ohio, October 1991), where Niels Krieger Lassen also presented his work on band detection using the Hough transform [6]. After the conference, we hurried back to the lab to try out Niels’ approach for the band detection part of the indexing process [7].
Modern dictionary indexing applies an adaptive histogram filter to the experimental patterns (at left in the figure below) and the dictionary patterns (at right) prior to performing the normalized inner dot-product used to compare patterns. The filtered patterns are nearly binary, and seeing these triggered my memory of our early dictionary work, as they reminded me of the nearly binary “Sombrero” filtered patterns – Olé! We may not have come back full circle, but progress clearly goes in steps, and some bear an uncanny resemblance to previous ones. I doff my hat to the great work that has gone into the development of dynamic pattern simulation and its applications.
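The dictionary comparison step itself is conceptually simple: both patterns are treated as vectors, normalized to unit length, and compared with an inner product; the dictionary entry with the highest score wins. Below is a minimal sketch of that step (not EDAX’s implementation – the array shapes and names are illustrative):

```python
import numpy as np

def normalized_dot_product(experimental, dictionary):
    """Compare one experimental pattern against a stack of simulated
    dictionary patterns using the normalized inner (dot) product.

    experimental : 2D array (pattern image)
    dictionary   : 3D array (n_orientations, rows, cols)
    Returns the index of the best-matching dictionary pattern and the
    full array of similarity scores.
    """
    e = experimental.ravel().astype(float)
    e /= np.linalg.norm(e)                         # unit-length experimental vector
    d = dictionary.reshape(len(dictionary), -1).astype(float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # unit-length dictionary rows
    scores = d @ e                                 # cosine similarity per orientation
    return int(np.argmax(scores)), scores

# toy demo: the dictionary entry identical to the test pattern wins
rng = np.random.default_rng(0)
dictionary = rng.random((50, 8, 8))
best, scores = normalized_dot_product(dictionary[17], dictionary)
print(best)  # 17
```

In the real technique the dictionary holds dynamically simulated patterns for orientations sampled over all of orientation space, so the winning index maps directly to an orientation.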
[1] A. Winkelmann, C. Trager-Cowan, F. Sweeney, A.P. Day, P. Parbrook (2007) “Many-Beam Dynamical Simulation of Electron Backscatter Diffraction Patterns” Ultramicroscopy 107: 414-421.
[2] P.G. Callahan, M. De Graef (2013) “Dynamical Electron Backscatter Diffraction Patterns. Part I: Pattern Simulations” Microscopy and Microanalysis 19: 1255-1265.
[3] S.I. Wright, B.L. Adams, J.-Z. Zhao (1991) “Automated determination of lattice orientation from electron backscattered Kikuchi diffraction patterns” Textures and Microstructures 13: 2-3.
[4] Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M.A. Jackson, J.P. Simmons, M. De Graef, A.O. Hero (2015) “A dictionary approach to electron backscatter diffraction indexing” Microscopy and Microanalysis 21: 739-752.
[5] S.I. Wright, B.L. Adams (1992) “Automatic-analysis of electron backscatter diffraction patterns” Metallurgical Transactions A 23: 759-767.
[6] N.C. Krieger Lassen, D. Juul Jensen, K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6: 115-121.
[7] K. Kunze, S.I. Wright, B.L. Adams, D.J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20: 41-54.
John Haritos, Regional Sales Manager Southwest USA. EDAX
I recently had the opportunity to host a demo for one of my customers at our Draper, Utah office. This was a long-time EDAX and EBSD user, who was interested in seeing our new Velocity CMOS camera, and to try it on some of their samples.
When I started in this industry back in the late 90s, the cameras were running at a “blazing” 20 points per second and we all thought that this was fast. At that time, collection speed wasn’t the primary issue. What EBSD brought to the table was automated orientation analysis of diffraction patterns. Now users could measure orientations and create beautiful orientation maps with the push of a button, which was a lot easier than manually interpreting these patterns.
Fast forward to 2019, and with CMOS technology adapted from other industries to EBSD, we are now collecting at 4,500 pps. What took hours or even days to collect at 20 pps now takes a matter of minutes or seconds. Below is a nickel superalloy sample collected at 4,500 pps with our Velocity™ Super EBSD camera. This scan shows the grain and twinning structure and was collected in just a few minutes.
Figure 1: Nickel Superalloy
Of course, now that we have improved from 20 pps to 4,500 pps, it’s significantly easier to get a lot more data. So the question becomes, how do we analyze all this data? This is where OIM Analysis v8™ comes to the rescue for the analysis and post-processing of these large datasets. OIM Analysis v8™ was designed to take advantage of 64-bit computing and multithreading, so the software can handle large datasets. Below is a grain size map and a grain size distribution chart from an aluminum friction stir weld sample with over 7 million points, collected with the Velocity™ and processed using OIM Analysis v8™. This example is interesting because the grains on the left side of the image are much larger than the grains on the right side. With the fast collection speeds, a small (250 nm) step size could still be used over this larger collection area. This allows accurate characterization of grain size across the weld interface, and the bimodal grain size distribution is clearly resolved. With a slower camera, it may be impractical to analyze this area in a single scan.
Figure 2: Aluminum Friction Stir Weld
In the past, most customers would set up an overnight EBSD run. You could see the thoughts running through their minds: will my sample drift, will my filament pop, what will the data look like when I come back to work in the morning? Inevitably, the sample would drift or the filament would pop, and this would mean the dreaded “ugh” in the morning. With the Velocity™ and its fast collection speeds, you no longer need to worry about this. You can collect maps in a few minutes and avoid the issue in practice. It’s a hard thing to say in a brochure, but it’s easy to appreciate when seeing it firsthand.
For me, watching my customer see the analysis of many samples in a single day was impressive. These were not particularly easy samples. They were solar cell and battery materials, with a variety of phases and crystal structures. But under conditions similar to their traditional EBSD work, we could collect better quality data much faster. The future is now. Everyone is excited about what the CMOS technology can offer in the way of productivity and throughput for their EBSD work.
When you have been working with EBSD for many years it is easy to forget how little you knew when you started. EBSD patterns appear like magic on your screen, indexing and orientation determination are automatic, and you can produce colourful images or maps with a click of a mouse.
Image 1: IPF on PRIAS™ center EBSD map of cold-pressed iron powder sample.
All the tools to get you there are hidden in the EBSD software package that you are working with and as a user you don’t need to know exactly how all of it happens. It just works. To me, although it is my daily work, it is still amazing how easy it sometimes is to get high quality data from almost any sample even if it only produces barely recognisable patterns.
Image 2: Successful indexing of extremely noisy patterns using automatic band detection.
That capability did not just appear overnight. There is a combination of a lot of hard work, clever ideas, and more than 25 years of experience behind it that we sometimes just forget to talk about, or perhaps even worse, expect everybody to know already. And so it is that I occasionally get asked a question at a meeting or an exhibition where I think, really? For example, some years ago I got a very good question about the EBSD calibration.
Image 3: EBSD calibration is based on the point in the pattern that is not distorted by the projection. This is the point where the electrons reach the screen perpendicularly (pattern center).
As you probably suspect EBSD calibration is not some kind of magic that ensures that you can index your patterns. It is a precise geometrical correction that distorts the displayed EBSD solution so that it fits the detected pattern. I always compare it with a video-projector. That is also a point projection onto a screen at a small angle, just like the EBSD detection geometry. And when you do that there is a distortion where the sides of the image on the screen are not parallel anymore but move away from each other. On video projectors there is a smart trick to fix that: a button labelled keystone correction which pulls the sides of the image nicely parallel again where they belong.
Image 4: Trapezoid distortion before (left) and after (right) correction.
Unfortunately, we cannot tell the electrons in the SEM to move over a little bit in order to make the EBSD pattern look correct. Instead we need to distort the indexing solution so that it matches the EBSD pattern. And now the question I got asked was: do you actually adjust this calibration when moving the beam position on the sample during a scan? Because otherwise you cannot collect large EBSD maps. Apparently not everybody was doing that at the time, and it was being presented at a conference as the invention of the century that no EBSD system could do without. It was finally possible to collect EBSD data at low magnification! So, when do you think this feature will be available in your software? I was quiet for a moment before answering: well, eh, we actually already have such a feature, which we call pattern centre shift. And it had been in the system since the first mapping experiments in the early ’90s. We just did not talk about it, as it seemed so obvious.
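For illustration only, here is a highly simplified sketch of the idea behind a pattern centre correction: as the beam moves away from the calibrated position, the diffraction source point moves with it, so the pattern centre on the detector must shift too. The function name, the linear mapping, and the default values below are all assumptions made for the sketch – this is not the actual correction implemented in the software:

```python
# Simplified sketch (not the EDAX implementation): as the beam moves away
# from the calibrated position, the pattern centre (PC) on the detector
# shifts because the diffraction source point moves with the beam.
# Assumed geometry: sample tilted 70 deg, detector parallel to the tilt axis.
import math

def shifted_pattern_center(pc0, beam_shift_um, tilt_deg=70.0, px_size_um=50.0):
    """Return (PCx, PCy) in detector pixels for a beam displacement
    (dx, dy) in micrometres from the calibration point.

    pc0           : calibrated (PCx, PCy) in pixels
    beam_shift_um : (dx, dy) beam displacement on the sample surface
    """
    dx, dy = beam_shift_um
    pcx0, pcy0 = pc0
    # Along the tilt axis the source point tracks the beam one-to-one;
    # perpendicular to it the shift is foreshortened by the sample tilt.
    pcx = pcx0 - dx / px_size_um
    pcy = pcy0 - dy * math.cos(math.radians(tilt_deg)) / px_size_um
    return pcx, pcy

print(shifted_pattern_center((240.0, 180.0), (500.0, 0.0)))  # (230.0, 180.0)
```

Without a correction of this kind, solutions indexed far from the calibration point would be computed against the wrong projection geometry – which is exactly why large, low-magnification maps need it.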
There are more things like that hidden in the software that are at least as important, such as smart routines to detect the bands even in extremely noisy patterns, EBSD pattern background processing, 64-bit multithreading for fast processing of large datasets, and efficient quaternion-based mathematical methods for post-processing. These tools are quietly working in the background to deliver the results that the user needs.
There are some other original ideas that date back to the 1990’s that we actually do regularly talk about, such as the hexagonal scanning grid, triplet voting indexing, and the confidence index, but there is also some confusion about these. Why do we do it that way?
The common way in imaging and imaging sensors (e.g. CCD or CMOS chips) is to organise pixels on a square grid. That is easy, and you can treat your data as being written in a regular table with fixed intervals. However, pixel-to-pixel distances are different horizontally and diagonally, which is a drawback when you are routinely calculating average values around points. In a hexagonal grid the point-to-point distance is constant between all neighbouring pixels. Perhaps even more importantly, a hexagonal grid offers ~15% more points in the same area than a square grid, which makes it ideally suited to filling a surface.
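The ~15% figure follows directly from the geometry: rows of a hexagonal grid with step s are spaced s·√3/2 apart, so its point density is 2/√3 ≈ 1.155 times that of a square grid with the same step. A quick check:

```python
import math

# Point density comparison for the same step size s: a square grid has one
# point per s^2; hexagonal rows are vertically spaced s*sqrt(3)/2, so the
# hexagonal grid has one point per s^2*sqrt(3)/2.
step = 1.0
square_density = 1.0 / step**2
hex_density = 1.0 / (step**2 * math.sqrt(3) / 2)
extra = (hex_density / square_density - 1) * 100
print(f"hexagonal grid: {extra:.1f}% more points per unit area")  # ~15.5%
```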
Image 5: Scanning results for square (left) and hexagonal (right) grids using the same step size. The grain shape and small grains with few points are more clearly defined in the hexagonal scan.
This potentially allows improvements in imaging resolution and sometimes I feel a little surprised that a hexagonal imaging mode is not yet available on SEMs.
The triplet voting indexing method also has some hidden benefits. What we do there is calculate a crystal orientation for each group of three bands detected in an EBSD pattern. For example, when you set the software to find 8 bands, you can define up to 56 different band triangles, each with its own orientation solution.
Image 6: Indexing example based on a single set of three bands – triplet.
Image 7: Equation indicating the maximum number of triplets for a given number of bands.
This means that when a pattern is indexed, we don’t just find a single orientation; we find 56 very similar orientations that can all be averaged to produce the final indexing solution. This averaging effectively removes small errors in the band detection and allows excellent orientation precision, even in very noisy EBSD patterns. The large number of individual solutions for each pattern has another advantage: it does not hurt too much if some of the bands are wrongly detected from pattern noise, or when a pattern is collected directly at a grain boundary and contains bands from two different grains. In most cases the bands coming from one of the grains will dominate the solutions and produce a valid orientation measurement.
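The number of triplets is simply the number of ways to choose three bands from the n detected bands, C(n, 3), which is what the equation in Image 7 expresses. A quick check:

```python
from math import comb

# Number of unique band triplets (triangles) from n detected bands: C(n, 3)
for n in range(3, 11):
    print(n, comb(n, 3))

assert comb(8, 3) == 56  # 8 bands -> 56 triplet solutions, as in the text
```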
The next original parameter from the 1990’s is the confidence index which follows out of the triplet voting indexing method. Why is this parameter such a big deal that it is even patented?
When an EBSD pattern is indexed, several parameters are recorded in the EBSD scan file: the orientation, the image quality (a measure of the contrast of the bands), and a fit angle. This angle indicates the angular difference between the bands detected by the software and the calculated orientation solution. The fit angle can be seen as an error bar for the indexing solution. If the angle is small, the calculated orientation fits the detected bands very closely and the solution can be considered good. However, there is a caveat. What if there are different orientation solutions that would produce virtually identical patterns? This can happen within a single phase, where it is called pseudosymmetry. The patterns are then so similar that the system cannot detect the difference. Alternatively, you can have multiple phases in your sample that produce very similar patterns. In such cases we would typically use EDS information and ChI-Scan to discriminate the phases.
Image 8: Definition of the confidence index parameter. V1 = number of votes for the best solution, V2 = number of votes for the 2nd best solution, VMAX = maximum possible number of votes.
Image 9: EBSD pattern of silver indexed with the silver structure (left) and copper structure (right). The fit is 0.24°; the only difference is a minor variation in the band width matching.
In both these examples the fit value would be excellent for the selected solution, and in both cases the solution has a high probability of being wrong. That is where the confidence index or CI value becomes important. The CI value is based on the number of band triangles or triplets that match each possible solution. If there are two indistinguishable solutions, these will both have the same number of matching triangles and the CI will be 0. This means that there are two or more apparently valid solutions that may all have a good fit angle; the system just does not know which of them is correct, and so the measurement is rejected. If there is a difference of only 10% in matched triangles between alternative orientation solutions, in most cases the software is capable of identifying the correct solution. The fit angle on its own cannot identify this problem.
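Using the definition from Image 8, the CI is straightforward to compute. A small sketch (the vote counts below are made-up numbers for illustration):

```python
def confidence_index(v1, v2, v_max):
    """Confidence index as defined in the text: CI = (V1 - V2) / VMAX,
    where V1 and V2 are the votes (matching triplets) for the best and
    second-best orientation solutions and VMAX is the maximum possible
    number of votes."""
    return (v1 - v2) / v_max

# Two indistinguishable solutions (e.g. pseudosymmetry): CI = 0,
# even though both may have an excellent fit angle.
print(confidence_index(28, 28, 56))  # 0.0

# A clear winner: most triplets vote for one orientation.
print(confidence_index(50, 6, 56))   # ~0.79
```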
After 25 years these tools and parameters are still indispensable and at the basis of every EBSD dataset that is collected with an EDAX system. You don’t have to talk about them. They are there for you.
Don’t just read the title of this post and skip to the photos or you might think it is some soap opera drama about strained relations – instead, the title is, once again, my feeble attempt at a punny joke!
I was recently doing a little reference checking and ended up on the website for Microscopy and Microanalysis (the journal, not the conference). On my first glance, I was surprised to see my name in the bottom right corner. Looking closer, I noticed that the paper Matt Nowell, David Field and I wrote way back in 2011 entitled “A Review of Strain Analysis Using Electron Backscatter Diffraction” is apparently the most cited article in Microscopy and Microanalysis. I am pleased that so many readers have found it useful. I remember, at the time, that we were getting a lot of questions about the tools within OIM Analysis™ for characterizing local misorientation and how they relate to strain. It was also a time when HREBSD was really starting to gain some momentum and we were getting a lot of questions on that front as well. So, we thought it would be helpful to write a paper that hopefully would answer some practical questions on using EBSD to characterize strain. From all the citations, it looks as though we actually managed to achieve what we had strived for.
My co-authors on that paper have been great to work with professionally; but I also count them among my closest personal friends. David Field joined Professor Brent Adams’ research group at BYU way back in 1987 if my memory is correct. We both completed master’s degrees at BYU and then followed Brent to Yale in 1988 to do our PhDs together. David then went on to Alcoa and I went to Los Alamos National Lab. Brent convinced David to leave and join the new startup company TSL and I joined about a year later. David left TSL for Washington State University shortly after EDAX purchased TSL.
Before I joined TSL, Matt Nowell had joined the company, and he has been at TSL/EDAX ever since. Even with all the comings and goings, we’ve remained colleagues and friends.
I’ve been richly blessed by both their excellent professional talents and their fun spirited friendship. We’ve worked, traveled and attended conferences together. We’ve played basketball, volleyball and golf together. I must also brag that we formed the core of the soccer team to take on the Seoul National University students after ICOTOM 13 in Seoul. Those who attended ICOTOM 13 may remember that it was held shortly after the 2002 World Cup hosted jointly by Korea and Japan; in which Korea had such a good showing – finishing 4th. A sequel was played at SNU where the students pretty much trounced the rest of the world despite our best efforts 😊. Here are a few snapshots of us with our Korean colleagues at ICOTOM 13 – clearly, we were always snappy dressers!
We all give presentations. We write and review papers. Either way, we have to be critical of our data and how it is presented to others, both numerically and graphically.
With that said, I thought it would be nice to start this year with a couple of quick tips or notes that can help with mistakes I see frequently.
The most common thing I see is poorly documented cleanup routines and partitioning. Between the initial collection and the final presentation, a lot of things are done to the data. It needs to be clear what was done so that one can interpret it correctly (and other people can reproduce it). Cleanup routines can change the data in ways that are subtle (or not so subtle), but more importantly they can wrongly change your conclusions. The easiest routine to see this with is grain dilation, which can turn noisy data into a textured dataset pretty fast (fig. 1).
Figure 1. The initial data was just pure noise. By running it iteratively through the grain dilation routine, you can make both grains and textures.
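To see how easily a cleanup routine can manufacture structure, here is a toy experiment: a simple majority-vote smoothing (a stand-in for the actual grain dilation routine in OIM Analysis™, not its implementation) applied to pure orientation noise:

```python
import numpy as np

rng = np.random.default_rng(42)
labels = rng.integers(0, 10, size=(50, 50))  # pure orientation "noise"

def majority_smooth(grid):
    """One cleanup pass: every pixel adopts the most common label in its
    3x3 neighbourhood (a toy stand-in for grain dilation)."""
    padded = np.pad(grid, 1, mode="edge")
    out = np.empty_like(grid)
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 3, j:j + 3].ravel()
            out[i, j] = np.bincount(window).argmax()
    return out

cleaned = labels.copy()
for _ in range(5):
    cleaned = majority_smooth(cleaned)

# The noise pixels coalesce into contiguous "grains" -- an apparent
# microstructure manufactured entirely by the cleanup routine.
changed = np.mean(cleaned != labels)
print(f"{changed:.0%} of pixels were altered by cleanup")
```

Running this shows a large fraction of the pixels being rewritten by the routine, which is exactly why the cleanup steps and their parameters must be reported alongside the final maps.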
Luckily for us, OIM Analysis™ keeps track of most of what is done via the cleanup routines and partitioning in the summary window on either the dataset level or the partition level (fig. 2).
Figure 2. A partial screenshot of the dataset level summary window shows cleanup routines completed on the dataset, as well as the parameters used. This makes your processing easily repeatable.
The other common issue is not including all the information needed to interpret a map. I really need to look at three things to get the full picture for an EBSD dataset: the IPF map (fig. 3), the phase map (fig. 4), and the IPF legend (fig. 5) for those phases. This is very important because while the colors used are the same, the orientations they represent differ between the different crystal symmetries.
Figure 3. General IPF Map of a geological sample. Many phases are present, but the dataset is not complete without a legend and phase map. The colors mean nothing without knowing both the phase and the IPF legend to use for that phase.
Below is a multiphase sample with many crystal symmetries. All use red-green-blue as the general color scheme. By just looking at the general IPF map (fig. 3), I can easily get the wrong impression. Without the phase map, I do not know which legend I should be using to understand the orientation of each phase. Without the crystal-symmetry-specific legend, I do not know how the colors map onto orientation space. I really need all of these legends/maps to truly understand what I am looking at. One missing brick and the tower crumbles.
Figure 4. In this multiphase sample, multiple symmetries are present. I need to know which phase a pixel is to know which legend to use.
Figure 5. With all the information now presented, I can actually go back and interpret figure 3 using figures 4 and 5 to guide me.
Being aware of these two simple ideas alone can help you to better present your data to any audience. The fewer the questions about how you got the data, the more time you will have to answer more meaningful questions about what the data actually means!
I was recently asked to write a “Tips & Tricks” article for the EDAX Insight Newsletter as I had recently done an EDAX Webinar (www.edax.com/news-events/webinars) on Texture Analysis. I decided to follow up on one item I had emphasized in the webinar: the need to sample enough orientations for statistical reliability in characterizing a texture. The important thing to remember is that it is the number of grain orientations that matters, as opposed to the number of orientations measured. But that led to the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room to go into the kind of detail I would have liked, so I’ve decided to use our blog forum to cover some details about sub-sampling that I found interesting.
Consider the case where you not only want to characterize the texture of a material but also the grain size or some other microstructural characteristic, which requires a sampling step size that is relatively fine compared to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1], and about 500 pixels per average grain in order to capture the grain size well [2]. This would result in a scan with approximately 5 million datapoints. Instead of calculating the texture using all 5 million data points, you can use a sub-set of the points to speed up the calculation. In our latest release of OIM Analysis™, this is not as big a concern as it once was, as the texture calculations have been multithreaded, so they are fast even for very large datasets. Nonetheless, since it is very likely that you will want to calculate the grain size, you can use the area-weighted average orientation of each grain for a quick texture calculation, as opposed to using all 5 million individual orientation measurements. Alternatively, a sub-set of the points obtained by random or uniform sampling of the scan area could be used.
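The two sub-sampling strategies are straightforward to sketch. Here only the index selection is shown (the sizes are illustrative); the orientations at the selected indices would then feed the texture calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points = 5_000_000  # full scan, as in the example above
fraction = 0.01       # use 1% of the points for the texture

# Uniform sub-sampling: every k-th point of the scan
k = round(1 / fraction)
uniform_idx = np.arange(0, n_points, k)

# Random sub-sampling without replacement
random_idx = rng.choice(n_points, size=n_points // k, replace=False)

print(len(uniform_idx), len(random_idx))  # 50000 50000
```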
Of course, you may wonder how well the sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture, as can be seen in the normal direction IPF map and accompanying (110) pole figure. For these measurements I have simply done a normalized squared difference point-by-point through the Orientation Distribution Function (ODF), which we call the Texture Difference Index (TDI) in the software.
This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs. binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, with 92.5% of those having a CI > 0.2 after CI standardization. I have elected to use only points with CI > 0.2. The results are shown in the following figure.
As the step size is relatively coarse with respect to the grain size, I have experimented with requiring at least two pixels before considering a set of similarly oriented points a grain, versus allowing a single pixel to be a grain. This resulted in 9,981 grains and 25,437 grains respectively. In both cases, the differences in texture between these two grain-based sub-sampling approaches and the full dataset are small, with the one-pixel grain-based sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and the uniform sampling grids differ so much, particularly as the number of points in the sub-sampling gets large (i.e. at 25% of the dataset)?
The pole figure for the 1000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess was that if I were to calculate the textures at a lesser rank, something like L = 8, then the TDIs would go down. The rank is already part of the TDI calculation, so this is an easy thing to examine. For comparison I have chosen to look at four different datasets: (a) all of the data in the dataset above (named “fine”), (b) a dataset from the same material with a coarser step size (“coarse”) containing approximately 150,000 data points, (c) a sub-sampling of the original dataset using 1000 randomly sampled datapoints (“fine-1000”), and (d) the “coarse” dataset rotated 90 degrees about the vertical axis in the pole figures (“coarse-rotated”). It is interesting to note that for textures that look similar “by eye”, the TDI generally increases as the series expansion rank increases. However, for very dissimilar textures (i.e. “coarse” vs. “coarse-rotated”) the jump to a large TDI is immediate.
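As a rough illustration of a point-by-point ODF comparison, here is a hypothetical normalized squared difference between two ODFs sampled on the same grid. The exact normalization used for the TDI in OIM Analysis™ may differ; this is only a stand-in to show the idea that identical textures score 0 and dissimilar textures score high:

```python
import numpy as np

def texture_difference(odf1, odf2):
    """Hypothetical normalized squared difference between two ODFs
    sampled on the same grid -- a stand-in for the TDI; the actual
    normalization in OIM Analysis may differ."""
    odf1 = np.asarray(odf1, dtype=float)
    odf2 = np.asarray(odf2, dtype=float)
    return np.sum((odf1 - odf2) ** 2) / np.sqrt(
        np.sum(odf1**2) * np.sum(odf2**2))

# Identical textures give 0; dissimilar textures give a larger value.
f = np.random.default_rng(3).random(1000) + 0.5
print(texture_difference(f, f))  # 0.0
```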
Random vs Uniform Sampling
The differences between the random and uniform sampling were a bit curious, so I decided to check the random points to see how they were positioned in the x-y space of the scan. The figure below compares the uniform and random sampling for 4,000 datapoints – any more than this is hard to show. Clearly the random sampling is reasonable, but it does show a bit of clustering, with gaps within the scan area. Some of these small differences produce larger TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences – at least subtle with respect to my personal “by-eye” judgement. It seems that my “by-eye” judgement is biased toward lower-rank series expansions.
Of course, another conclusion would be that my eyesight is getting rank with age ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.
[1] S.I. Wright, M.M. Nowell, J.F. Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction” Metallurgical and Materials Transactions A 38: 1845-1855.
[2] S.I. Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements” Practical Metallography 47: 16-33.
Figure 1. Participants of my first EBSD training course in Grenoble in 2001.
Everybody is learning all the time. You start as a child at home, continue later in school, and that never ends. In your professional career you learn on the job, and sometimes you get the opportunity for dedicated training on some aspect of your work. I am fortunate that my job at EDAX involves a bit of this type of training for our customers interested in EBSD. Somehow, I have found myself teaching for a long time without really aiming for it. Already as a teenager, when I worked at a small local television station in The Netherlands, I taught the technical side of making television programs – handling cameras, lighting, editing – basically everything, as long as it was out of the spotlight. Then during my geology studies, I assisted in teaching students a variety of subjects ranging from palaeontology to physics and geological fieldwork in the Spanish Pyrenees. So, unsurprisingly, shortly after joining EDAX in 2001, when I was supposed to simply participate in an introductory EBSD course (fig. 1) taught by Dr. Stuart Wright in Grenoble, France, I quickly found myself explaining things to the other participants instead of just listening.
Teaching about EBSD often begins when I do a presentation or demonstration for someone new to the technique. And the capabilities of EBSD are such that just listing the technical specifications of an EBSD system to a new customer does not do it justice. Later when a system has been installed I meet the customers again for the dedicated training courses and workshops that we organise and participate in all over the world.
Figure 2. EBSD IPF map of Al kitchen foil collected without any additional specimen preparation. The colour-coding illustrates the extreme deformation by rolling.
In such presentations, of course we talk about the basics of the method and the characteristics of the EDAX systems, but then it always moves on to how it can help understand the materials and processes that the customer is working with. There, teaching starts working the other way as well. With every customer visit I learn something more about the physical world around us. Sometimes this is about a fundamental understanding of a physical process that I have never even heard of.
At other times it is about ordinary items that we see or use in our daily lives, such as aluminium kitchen foil, glass panes with special coatings, or the structure of biological materials like eggs, bone, or shells. Aluminium foil is a beautiful material that is readily available in most labs, and I use it occasionally to show EBSD grain and texture analysis when I do not have a suitable polished sample with me (fig. 2). At some point, a customer explained to me in detail how it is produced: rolled as a double layer, back to back, which gives one shiny and one matte side. That also explained why it produces EBSD patterns without any additional preparation. Something new learned again.
Figure 3. IPF map of austenitic steel microstructure prepared by additive manufacturing.
A relatively new development is additive manufacturing or 3D printing where a precursor powdered material is melted into place by a laser to create complex components/shapes as a single piece. This method produces fantastically intricate structures (fig 3) that need to be studied to optimise the processing.
With every new application my mind starts turning to identify specific functions in the software that would be especially relevant to understanding it. In some cases, this then turns into a collaborative effort to produce scientific publications on a wide variety of subjects, e.g. on zeolite pore structures (1, fig 4), poly-GeSi films (2, fig 5), or directional solidification by biomineralization of mollusc shells (3).
Figure 4. Figure taken from ref.1 showing EBSD analysis of zeolite crystals.
Figure 5. Figure taken from ref.2 showing laser crystallised GeSi layer on substrate.
Such collaborations continuously spark my curiosity and it is because of these kinds of discussions that after 17 years I am still fascinated with the EBSD technique and its applications.
This fascination also shows during the EBSD operator schools that I teach. The teaching materials that I use slowly evolve with time as the systems change, but still the courses are not simply repetitions. Each time customers bring their own materials and experiences that we use to show the applications and discuss best practices. I feel that it is true that you only really learn how to do something when you teach it.
This variation in applications often enables me to show the full extent of the analytical capabilities of the OIM Analysis™ software – something that often gets lost in the years after a system has been installed. I have seen many times that when a new system is installed, the users invest a lot of time and effort in getting familiar with it in order to get the most out of it. With time, however, the staff originally trained on the equipment move on and new people are introduced to electron microscopy and all that comes with it. The original users then train their successors in the use of the system, and inevitably something is lost at this point.
When you are highly familiar with performing your own analysis, you tend to focus on the bits of the software and the settings that you need for that analysis. The bits that you do not use fade away and are not taught to the new user. This is something that I see regularly during the training courses that I teach. Of course, there are new functions in the software that users have not seen before, but even people who have been using the system for years and are very familiar with its general operation regularly discover new ways of doing things during the courses – and functions that could have helped them with past projects. During the latest EBSD course in Germany in September, a participant from a site that has had EBSD for many years remarked that he was going to recommend the course to colleagues who have been using the system for a long time, as he had found that it could do much more than he had imagined.
You learn something new every day.
1) J. Am. Chem. Soc. 130 (41), 13516-13517 (2008). doi: 10.1021/ja8048767.
2) ECS J. Solid State Sci. Technol. 1 (6), P263-P268 (2012).
3) Adv. Mater. e1803855 (2018). doi: 10.1002/adma.201803855.
If you have attended an EDAX EBSD training course, you have seen the following slide in the Pattern Indexing lecture. This slide attempts to explain how to collect a background pattern before performing an OIM scan. The slide recommends that the background come from an area containing at least 25 grains.
Those of you who have performed re-indexing of a scan with saved patterns in OIM Analysis 8.1 may have noticed that there is a background pattern for the scan data (as well as one for each of the partitions). This can be useful when re-indexing a scan where the raw patterns were saved, as opposed to background-corrected patterns. This background pattern is formed by averaging 500 patterns randomly selected from the saved patterns. 500 is a lot more than the minimum of 25 recommended in the slide from the training lecture.
Recently, I was thinking about these two numbers – is 25 really enough? Is 500 overkill? With some of the new tools (Callahan, P.G. and De Graef, M., 2013. Dynamical electron backscatter diffraction patterns. Part I: Pattern simulations. Microscopy and Microanalysis, 19(5), pp. 1255-1265) available for simulating EBSD patterns, I realized this might provide a controlled way to refine the number of orientations that need to be sampled for a good background. To this end, I created a set of 6,656 simulated nickel patterns randomly sampled from orientation space. If you average all of these patterns together, you get the pattern at left in the following row of three patterns. The average patterns for 500 and for 25 random patterns are also shown. The average pattern for 25 random orientations is not as smooth as I would have assumed, but the one with 500 looks quite good.
I decided to take it a bit further: using the average of all 6,656 patterns as a reference, I compared the difference (simple intensity differences) between average patterns from n orientations and the reference. This gave me the following curve. From this curve, my intuitive estimate that 25 grains is enough for a good background appears to be a bit optimistic, but 500 looks good. There are a few caveats to this: the examples I am showing here are 480 x 480 pixels, which is much higher resolution than would be used for typical EBSD scans, and the simulated patterns are sharper and have better signal-to-noise ratios than we are able to achieve in experimental patterns at typical exposure times. Both effects are likely to lead to additional smoothing in practice.
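The convergence test described above can be mimicked with synthetic data. The sketch below uses random intensity images as stand-ins for the dynamically simulated patterns (the real experiment would of course use simulated nickel patterns); the function name and image size are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a dictionary of simulated EBSD patterns: random
# intensity images, one per sampled orientation. Real patterns would
# come from a dynamical simulation and be 480 x 480 in the post.
n_total, h, w = 6656, 60, 60
patterns = rng.random((n_total, h, w)).astype(np.float32)

# Reference background: the average over every pattern in the set.
reference = patterns.mean(axis=0)

def background_error(n, trials=20):
    """Mean absolute intensity difference between an n-pattern average
    background and the full-set reference, averaged over random draws."""
    errs = []
    for _ in range(trials):
        idx = rng.choice(n_total, size=n, replace=False)
        avg = patterns[idx].mean(axis=0)
        errs.append(np.abs(avg - reference).mean())
    return float(np.mean(errs))

# The error shrinks roughly as 1/sqrt(n) as more patterns are averaged,
# which is why 500 patterns gives a visibly smoother background than 25.
for n in (25, 100, 500):
    print(n, background_error(n))
```

With uniform random images the curve falls off smoothly; real patterns share strong band structure, so the detailed numbers differ, but the qualitative 1/sqrt(n) behavior is the same.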
I recently saw Shawn Bradley who is one of the tallest players to have played in the NBA, he is 7’6” (229cm) tall. I recognized him because he was surrounded by a crowd of kids – you can imagine that he really stood out! This reminded me that these results assume a uniform grain size. If you have 499 tiny grains encircling one giant grain, then the background from these 500 grains will not work as a background as it would be dominated by the Shawn Bradley grain!
Interacting with Rudy Wenk of the University of California, Berkeley to get his take on the word “texture” as it pertains to preferred orientation reminded me of some other terminology associated with orientation maps that Rudy helped me with several years ago.
Map reconstructed from EBSD data showing the crystal orientation parallel to the sample surface normal
Joe Michael of Sandia National Lab has mentioned to me a couple of times that he objects to the term “IPF map”. As you may know, the term is commonly used to describe a color map reconstructed from OIM data where the color denotes the crystallographic axis aligned with the sample normal, as shown below. Joe points out that the term “orientation map” or “crystal direction map” or something similar would be much more appropriate, and he is absolutely right.
The reason behind the name “IPF map” is that I hi-jacked some of my code for drawing inverse pole figures (IPFs) as a basis for the code to create the color-coded maps. Thus, we started using the term internally (it was TSL at the time – prior to EDAX purchasing TSL), and then it leaked out publicly and the name stuck – my apologies to Joe. We later added the ability to color the microstructure based on the crystal direction aligned with any specified sample direction, as shown below.
Orientation maps showing the crystal directions aligned with the normal, rolling and transverse directions at the surface of a rolled aluminum sheet.
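For cubic materials, the direction-to-color mapping behind such maps can be sketched in a few lines. This is a minimal illustration of the common red-green-blue scheme (red at <001>, green at <101>, blue at <111>), not the exact implementation in OIM Analysis; the function name is mine:

```python
import numpy as np

def ipf_color(direction):
    """Map a crystal direction to RGB using the familiar cubic IPF
    scheme: red at <001>, green at <101>, blue at <111>.
    A minimal sketch, not the OIM Analysis implementation."""
    # Cubic symmetry: sorting the absolute components folds any
    # direction into the standard stereographic triangle.
    v = np.sort(np.abs(np.asarray(direction, dtype=float)))
    v = v / np.linalg.norm(v)
    u, m, w = v                       # u <= m <= w
    rgb = np.array([w - m, m - u, u])  # barycentric weights of the corners
    return rgb / rgb.max()            # stretch so the largest channel is 1

print(ipf_color([0, 0, 1]))   # <001> -> pure red  [1, 0, 0]
print(ipf_color([1, 0, 1]))   # <101> -> pure green [0, 1, 0]
print(ipf_color([1, 1, 1]))   # <111> -> pure blue  [0, 0, 1]
```

Directions between the triangle corners get mixtures of the three colors, which is exactly the continuous shading seen in the maps above.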
The idea for this map germinated from a paper I saw presented by David Dingley in which a continuous color coding scheme was devised by assigning red, green and blue to the three axes of Rodrigues-Frank space: D. J. Dingley, A. Day, and A. Bewick (1991) “Application of Microtexture Determination using EBSD to Non Cubic Crystals”, Textures and Microstructures, 14-18, 91-96. In this case, the microstructure had been digitized and a single orientation measured for each grain using EBSD. Unfortunately, I only have gray scale images of these results.
SEM micrograph of nickel, grain orientations in Rodrigues-Frank space and orientation map based on color Rodrigues vector coloring scheme. Source: Link labeled “Full-Text PDF” at www.hindawi.com/archive/1991/631843/abs/
IPF map of recrystallized grains in grain oriented silicon steel from Y. Inokuti, C. Maeda and Y. Ito (1987) “Computer color mapping of configuration of goss grains after an intermediate annealing in grain oriented silicon steel.” Transactions of the Iron and Steel Institute of Japan 27, 139-144. Source: Link labeled “Full Text PDF” at www.jstage.jst.go.jp/article/isijinternational1966/27/4/27_4_302/_article
We didn’t realize it at the time, but an approach based on the crystallographic direction had already been published in Japan. In this work, the stereographic unit triangle (i.e. an inverse pole figure) was used in a continuous color coding scheme where red is assigned to the <110> direction, blue to <111> and yellow to <100>; points lying between these three corners of the stereographic triangle are combinations of these three colors. This color coding was used to shade grains in digitized maps of the microstructure according to their orientation: Y. Inokuti, C. Maeda and Y. Ito (1986) “Observation of Generation of Secondary Nuclei in a Grain Oriented Silicon Steel Sheet Illustrated by Computer Color Mapping”, Journal of the Japan Institute of Metals, 50, 874-8. The images published in this paper received awards in 1986 from the Japan Institute of Metals and TMS.
AVA map and pole figure from a quartz sample from “Gries am Brenner” in the Austrian alps south of Innsbruck. The pole figure is for the c-axis. (B. Sander (1950) Einführung in die Gefügekunde der Geologischen Körper: Zweiter Teil Die Korngefüge. Springer-Vienna) Source: In the last chapter (Back Matter) in the Table of Contents there is a link labeled “>> Download PDF” at link.springer.com/book/10.1007%2F978-3-7091-7759-4
I thought these were the first colored orientation maps constructed, until Rudy later corrected me (not the first, nor certainly the last time). He sent me some examples of mapping orientation onto a microstructure by “hatching” or coloring a pole figure and then using those patterns or colors to shade the microstructure as traced from micrographs: H.-R. Wenk (1965) “Gefügestudie an Quarzknauern und -lagen der Tessiner Kulmination”, Schweiz. Mineralogische und Petrographische Mitteilungen, 45, 467-515, and even earlier B. Sander (1950) Einführung in die Gefügekunde, Springer Verlag, 402-409. Sander called this type of mapping and analysis AVA (Achsenverteilungsanalyse in German, or Axis Distribution Analysis in English).
Such maps were forerunners of the “IPF maps” of today (you could actually call them “PF maps”) with which we are so familiar. It turns out our wanderin’s in A Search for Structure (Cyril Stanley Smith, 1991, MIT Press) have actually not been “aimless” at all but have helped us gain real insight into that etymologically challenged world of microstructure.
One of the first scientific conferences I had the good fortune of attending was the Eighth International Conference on Textures of Materials (ICOTOM 8) held in 1987 in Santa Fe, New Mexico. I was an undergraduate student at the time and had recently joined Professor Brent Adams’ research group at Brigham Young University (BYU) in Provo, Utah. It was quite an introduction to texture analysis. Most of the talks went right over my head but the conference would affect the direction my educational and professional life would take.
Logos of the ICOTOMs I’ve attended
Professor Adams’ research at the time was focused on orientation correlation functions. While his formulation of the equations used to describe these correlations was coming along nicely, the experimental side was quite challenging. One of my tasks for the research group was to explore using etch pits to measure orientations on a grain-by-grain basis. It was a daunting proposition for an inexperienced student. At the ICOTOM in Santa Fe, Brent happened to catch a talk by a Professor from the University of Bristol named David Dingley. David introduced the ICOTOM community to Electron Backscatter Diffraction (EBSD) in the SEM. Brent immediately saw this as a potential experimental solution to his vision for a statistical description of the spatial arrangement of grain orientations in polycrystalline microstructures.
At ICOTOMs through the years
After returning to BYU, Brent quickly went about preparing to get David to BYU to install the first EBSD system in North America. Instead of etch pits, my Master’s degree became comparing textures measured by EBSD and those measured with traditional X-Ray Pole Figures. I had the opportunity to make some of the first EBSD measurements with David’s system. From those early beginnings, Brent’s group moved to Yale University where we successfully built an automated EBSD system laying the groundwork for the commercial EBSD systems we use today.
I’ve had the good fortune to attend every ICOTOM since that one in Santa Fe, over 30 years ago now. The ICOTOM community helped germinate and incubate EBSD and continues to be a strong supporter of the technique. This is evident in the immediate rise in the number of texture studies undertaken using EBSD after the technique was introduced to the ICOTOM community.
The growth in EBSD in terms of the percentage of EBSD related papers at the ICOTOMs
Things have a way of coming full circle, and now I am part of a group of three (with David Fullwood of BYU and my colleague Matt Nowell of EDAX) whose turn it is to host the next ICOTOM in St. George, Utah in November 2017. The ICOTOM meetings are held every three years and generally rotate between Europe, the Americas and Asia. At ICOTOM 18 we will be celebrating 25 years since our first papers were published using OIM.
It is a humbling opportunity to pay back the texture community, in just a small measure, for the impact my friends and colleagues within this community have had both on EBSD and on me personally. It is exciting to consider what new technologies and scientific advances will be germinated by the interaction of scientists and engineers in the ICOTOM environment. All EBSD users would benefit from attending ICOTOM and I invite you all to join us next year in Utah’s southwest red rock country for ICOTOM 18! (http://event.registerat.com/site/icotom2017/)
Some of the spectacular scenery in southwest Utah (Zion National Park)