EBSD

Back to Basics

Dr. René de Kloe, Applications Specialist, EDAX

When you have been working with EBSD for many years it is easy to forget how little you knew when you started. EBSD patterns appear like magic on your screen, indexing and orientation determination are automatic, and you can produce colourful images or maps with a click of a mouse.

Image 1: IPF on PRIAS™ center EBSD map of cold-pressed iron powder sample.

All the tools to get you there are hidden in the EBSD software package that you are working with, and as a user you don’t need to know exactly how it all happens. It just works. To me, although it is my daily work, it is still amazing how easy it sometimes is to get high-quality data from almost any sample, even one that only produces barely recognisable patterns.

Image 2: Successful indexing of extremely noisy patterns using automatic band detection.

That capability did not just appear overnight. There is a combination of a lot of hard work, clever ideas, and more than 25 years of experience behind it that we sometimes just forget to talk about, or perhaps even worse, expect everybody to know already. And so it is that I occasionally get asked a question at a meeting or an exhibition where I think, really? For example, some years ago I got a very good question about the EBSD calibration.

Image 3: EBSD calibration is based on the point in the pattern that is not distorted by the projection. This is the point where the electrons reach the screen perpendicularly (pattern center).

As you probably suspect, EBSD calibration is not some kind of magic that ensures that you can index your patterns. It is a precise geometrical correction that distorts the displayed EBSD solution so that it fits the detected pattern. I always compare it with a video projector, which is also a point projection onto a screen at a small angle, just like the EBSD detection geometry. Such a projection introduces a distortion: the sides of the image on the screen are no longer parallel but diverge. On video projectors there is a smart trick to fix that, a button labelled keystone correction, which pulls the sides of the image nicely parallel again, where they belong.
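As a toy illustration (my own sketch, not projector firmware or EDAX code; the 'strength' parameter is hypothetical), a keystone correction can be written as a height-dependent rescaling of the horizontal coordinate:

```python
def keystone_correct(x, y, strength):
    """Toy keystone correction: for a projector hitting the screen at an
    angle, the image width grows linearly with height y (0 = bottom,
    1 = top). Dividing x by that scale pulls the sides parallel again.
    'strength' (hypothetical) is 0 for no correction."""
    return x / (1.0 + strength * y), y

print(keystone_correct(1.0, 1.0, 0.3))  # top corner pulled inward: (~0.77, 1.0)
print(keystone_correct(1.0, 0.0, 0.3))  # bottom edge unchanged: (1.0, 0.0)
```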

Image 4: Trapezoid distortion before (left) and after (right) correction.

Unfortunately, we cannot tell the electrons in the SEM to move over a little bit in order to make the EBSD pattern look correct. Instead we need to distort the indexing solution so that it matches the EBSD pattern. And now the question I got asked was: do you actually adjust this calibration when moving the beam position on the sample during a scan? Because otherwise you cannot collect large EBSD maps. Apparently not everybody was doing that at the time, and it was being presented at a conference as the invention of the century that no EBSD system could do without. It was finally possible to collect EBSD data at low magnification! So, when do you think this feature will be available in your software? I was quiet for a moment before answering: well, eh, we actually already have such a feature, which we call the pattern centre shift. And it had been in the system since the first mapping experiments in the early 1990s. We just did not talk about it as it seemed so obvious.
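For readers who like to see the geometry, here is a first-order sketch of what such a pattern centre shift could look like. The linear model, the function name, and the sign conventions are my assumptions for illustration, not the actual OIM implementation:

```python
import math

def shifted_pattern_center(pc, beam_shift_um, det_width_um, tilt_deg=70.0):
    """First-order sketch: update the pattern centre (x*, y*, z*), given in
    fractional detector-width units, for a beam displacement (dx, dy) in
    microns on the sample surface. Here x runs along the tilt axis and y
    down the tilted surface; the signs depend on the chosen conventions."""
    xs, ys, zs = pc
    dx, dy = beam_shift_um
    t = math.radians(tilt_deg)
    xs -= dx / det_width_um                # sideways slide of the projection
    ys -= dy * math.cos(t) / det_width_um  # in-plane component of the y move
    zs += dy * math.sin(t) / det_width_um  # out-of-plane component changes the
                                           # sample-to-detector distance
    return (xs, ys, zs)

# Example: a 500 um move down a 70-degree-tilted surface, 30 mm wide detector
print(shifted_pattern_center((0.5, 0.8, 0.7), (0.0, 500.0), 30000.0))
```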

There are more things like that hidden in the software that are at least as important, such as smart routines to detect the bands even in extremely noisy patterns, EBSD pattern background processing, 64-bit multithreading for fast processing of large datasets, and efficient quaternion-based mathematical methods for post-processing. These tools are quietly working in the background to deliver the results that the user needs.
There are some other original ideas that date back to the 1990’s that we actually do regularly talk about, such as the hexagonal scanning grid, triplet voting indexing, and the confidence index, but there is also some confusion about these. Why do we do it that way?

The common way in imaging and imaging sensors (e.g. CCD or CMOS chips) is to organise pixels on a square grid. That is easy, and you can treat your data as being written in a regular table with fixed intervals. However, the pixel-to-pixel distances are different horizontally and diagonally, which is a drawback when you are routinely calculating average values around points. In a hexagonal grid the point-to-point distance is constant between all neighbouring pixels. Perhaps even more importantly, for the same step size a hexagonal grid packs ~15% more points into the same area than a square grid, which makes it ideally suited to fill a surface.
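A quick sketch (hypothetical helpers, in Python) that builds both grids with the same step size over the same area and compares the point counts:

```python
import math

def square_grid(width, height, step):
    """All points of a square scan grid over a width x height area."""
    pts = []
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            pts.append((x, y))
            x += step
        y += step
    return pts

def hex_grid(width, height, step):
    """Hexagonal scan grid: rows are step*sqrt(3)/2 apart and every other
    row is offset by step/2, so all six neighbours of a point sit at the
    same distance 'step'."""
    pts = []
    row, y = 0, 0.0
    row_height = step * math.sqrt(3) / 2
    while y <= height:
        x = step / 2 if row % 2 else 0.0
        while x <= width:
            pts.append((x, y))
            x += step
        y += row_height
        row += 1
    return pts

sq, hx = square_grid(100, 100, 1.0), hex_grid(100, 100, 1.0)
print(len(hx) / len(sq))  # ~1.14 here; the ideal ratio is 2/sqrt(3) = 1.155
```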

Image 5: Scanning results for square (left) and hexagonal (right) grids using the same step size. The grain shape and small grains with few points are more clearly defined in the hexagonal scan.

This potentially allows improvements in imaging resolution, and sometimes I am a little surprised that a hexagonal imaging mode is not yet available on SEMs.
The triplet voting indexing method also has some hidden benefits. There, a crystal orientation is calculated for each group of three bands (a triplet) that is detected in an EBSD pattern. For example, when you set the software to find 8 bands, you can define up to 56 different band triangles, each with a unique orientation solution.

Image 6: Indexing example based on a single set of three bands – triplet.

Image 7: Equation indicating the maximum number of triplets for a given number of bands n: C(n,3) = n(n−1)(n−2)/6, which gives 56 for n = 8.

This means that when a pattern is indexed, we don’t just find a single orientation; we find up to 56 very similar orientations that can all be averaged to produce the final indexing solution. This averaging effectively removes small errors in the band detection and allows excellent orientation precision, even in very noisy EBSD patterns. The large number of individual solutions for each pattern has another advantage: it does not hurt too much if some of the bands are wrongly detected due to pattern noise, or when a pattern is collected directly at a grain boundary and contains bands from two different grains. In most cases the bands coming from one of the grains will dominate the solutions and produce a valid orientation measurement.
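As a sketch of the arithmetic and of the averaging step (the helper below is my illustration; the real voting engine is more involved):

```python
import numpy as np
from math import comb

print(comb(8, 3))  # 56 band triangles for 8 detected bands, as in the text

def average_orientation(quats):
    """Average a cluster of nearly identical unit quaternions, one per
    triplet solution. Since q and -q describe the same rotation, align
    signs to the first quaternion before taking the mean, then renormalize."""
    q0 = quats[0]
    aligned = np.array([q if np.dot(q, q0) >= 0 else -q for q in quats])
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)
```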

The next original parameter from the 1990’s is the confidence index which follows out of the triplet voting indexing method. Why is this parameter such a big deal that it is even patented?
When an EBSD pattern is indexed, several parameters are recorded in the EBSD scan file: the orientation, the image quality (a measure of the contrast of the bands), and a fit angle. This angle indicates the angular difference between the bands that have been detected by the software and the calculated orientation solution. The fit angle can be seen as an error bar for the indexing solution. If the angle is small, the calculated orientation fits the detected bands very closely and the solution can be considered good. However, there is a caveat. What if there are different orientation solutions that would produce virtually identical patterns? This may happen within a single phase, where it is called pseudosymmetry. The patterns are then so similar that the system cannot detect the difference. Alternatively, you can have multiple phases in your sample that produce very similar patterns. In such cases we would typically use EDS information and ChI-scan to discriminate the phases.

Image 8: Definition of the confidence index parameter, CI = (V1 − V2) / VMAX, where V1 = number of votes for the best solution, V2 = number of votes for the 2nd-best solution, and VMAX = maximum possible number of votes.

Image 9: EBSD pattern of silver indexed with the silver structure (left) and the copper structure (right). The fit is 0.24° in both cases; the only difference is a minor variation in the band-width matching.

In both these examples the fit value would be excellent for the selected solution. And in both cases the solution has a high probability of being wrong. That is where the confidence index or CI value becomes important. The CI value is based on the number of band triangles or triplets that match each possible solution. If there are two indistinguishable solutions, these will both have the same number of matching triangles and the CI will be 0. This means that there are two or more apparently valid solutions that may all have a good fit angle; the system just does not know which of them is correct, and thus the measurement is rejected. If there is a difference of only 10% in matched triangles between alternative orientation solutions, in most cases the software is capable of identifying the correct one. The fit angle on its own cannot identify this problem.
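In code, the definition from Image 8 is a one-liner; the vote counts in the example are made up for illustration:

```python
def confidence_index(v1, v2, v_max):
    """CI = (V1 - V2) / VMAX from Image 8. Two indistinguishable solutions
    get the same number of votes, so v1 == v2 and CI = 0 even though both
    may have an excellent fit angle."""
    return (v1 - v2) / v_max

print(confidence_index(56, 56, 56))  # 0.0 -> pseudosymmetry, measurement rejected
print(confidence_index(40, 12, 56))  # 0.5 -> clear winner
```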

After 25 years these tools and parameters are still indispensable and at the basis of every EBSD dataset that is collected with an EDAX system. You don’t have to talk about them. They are there for you.

What’s in a Name?

Matt Nowell, EBSD Product Manager, EDAX

The Globe Theatre

I recently had the opportunity to attend the RMS EBSD meeting, which was held at the National Physical Laboratory outside of London. It was a very enjoyable meeting, with lots of nice EBSD developments. While I was there, I was able to take in a bit of London as well. One of the places I visited was Shakespeare’s Globe Theatre. While I didn’t get a chance to see a show there (I saw School of Rock instead), it did get me thinking about one of the Bard’s more famous lines, “What’s in a name? That which we call a rose by any other word would smell as sweet” from Romeo and Juliet.

I bring this up because as EBSD Product Manager for EDAX, one of my responsibilities is to help name new products. Now my academic background is in Materials Science and Engineering, so understanding how to best name a product has been an interesting adventure.

TSL

The earliest product we had was the OIM™ system, which stood for Orientation Imaging Microscopy. The name came from a paper introducing EBSD mapping as a technique. At the time, we were TSL, which stood for TexSem Laboratories, which was short for Texture in an SEM. Obviously, we were into acronyms. We used a SIT (Silicon Intensified Target) camera to capture the EBSD patterns. We did the background processing with a DSP-2000 (Digital Signal Processor). We controlled the SEM beam with an MSC box (Microscope System Control).

Our first ‘mapped’ car.

For our next generation of products, we branched out a bit. Our first digital Charge-Coupled Device (CCD) camera was called the DigiView, as it was our first digital camera for capturing EBSD patterns instead of analog signals. Our first high-speed CCD camera was called Hikari. This one may not be as obvious, but it was named after the high-speed train in Japan, as Suzuki-san (our Japanese colleague) played a significant role in the development of this camera. Occasionally, we could find the best of both worlds. Our phase ID product was called Delphi. In Greek mythology, Delphi was home to the oracle consulted for important decisions (could you describe phase ID any better than that?). It also stood for Diffracted Electrons for Phase Identification.

Among our more recent products, PRIAS™ stands for Pattern Region of Interest Analysis System. Additionally, though, it is meant to invoke the hybrid use of the detector as both an EBSD detector and an imaging system. TEAM™ stands for Texture and Elemental Analysis System, which allowed us to bridge together EDS and EBSD analysis in the same product. NPAR™ stands for Neighbor Pattern Averaging and Reindexing, but I like this one as it sounds like I named it because of my golf game.
I believe these names have followed in the tradition of things like lasers (light amplification by stimulated emission of radiation), scuba (self-contained underwater breathing apparatus), and CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). It generates a feeling of being part of the club, knowing what these names mean.

Velocity™ EBSD Camera

The feedback I get, though, is that our product names should tell you what the product does. I don’t buy into this 100%, as my Honda Pilot isn’t a self-driving car, but it is the first recommendation on how to name a product (https://aytm.com/blog/how-to-name-a-product-10-tips-for-product-naming-success/). Following this logic, our latest and the world’s fastest EBSD camera is the Velocity™. It sounds fast, and it is.

Of course, even when using this strategy, there can be some confusion. Is it tEBSD (Transmission EBSD) or TKD (Transmission Kikuchi Diffraction)? Does HR-EBSD give us better spatial resolution? Hopefully as we continue to name new products, we can make our answer clear.

“Strained” Friendship

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

Don’t just read the title of this post and skip to the photos or you might think it is some soap opera drama about strained relations – instead, the title is, once again, my feeble attempt at a punny joke!

I was recently doing a little reference checking and ended up on the website for Microscopy and Microanalysis (the journal, not the conference). At first glance, I was surprised to see my name in the bottom right corner. Looking closer, I noticed that the paper Matt Nowell, David Field, and I wrote way back in 2011, entitled “A Review of Strain Analysis Using Electron Backscatter Diffraction”, is apparently the most cited article in Microscopy and Microanalysis. I am pleased that so many readers have found it useful. I remember that, at the time, we were getting a lot of questions about the tools within OIM Analysis™ for characterizing local misorientation and how they relate to strain. It was also a time when HR-EBSD was really starting to gain momentum, and we were getting a lot of questions on that front as well. So, we thought it would be helpful to write a paper that would answer some practical questions on using EBSD to characterize strain. From all the citations, it looks as though we managed to achieve what we strived for.

My co-authors on that paper have been great to work with professionally; but I also count them among my closest personal friends. David Field joined Professor Brent Adams’ research group at BYU way back in 1987 if my memory is correct. We both completed master’s degrees at BYU and then followed Brent to Yale in 1988 to do our PhDs together. David then went on to Alcoa and I went to Los Alamos National Lab. Brent convinced David to leave and join the new startup company TSL and I joined about a year later. David left TSL for Washington State University shortly after EDAX purchased TSL.

Before I joined TSL, Matt Nowell* had already joined the company, and he has been at TSL/EDAX ever since. Even with all the comings and goings, we’ve remained colleagues and friends.

I’ve been richly blessed by both their excellent professional talents and their fun-spirited friendship. We’ve worked, traveled, and attended conferences together. We’ve played basketball, volleyball, and golf together. I must also brag that we formed the core of the soccer team that took on the Seoul National University students after ICOTOM 13 in Seoul. Those who attended ICOTOM 13 may remember that it was held shortly after the 2002 World Cup hosted jointly by Korea and Japan, in which Korea had such a good showing, finishing 4th. A sequel was played at SNU, where the students pretty much trounced the rest of the world despite our best efforts 😊. Here are a few snapshots of us with our Korean colleagues at ICOTOM 13 – clearly, we were always snappy dressers!

* Don’t miss Matt’s upcoming webinar: “Applications of High-Speed CMOS Cameras for EBSD Microstructural Analysis”

A Lot of Excitement in the Air!

Sia Afshari, Global Marketing Manager, EDAX

After all these years I still get excited about new technologies and their resulting products, especially when I have had the good fortune to play a part in their development. As I look forward to 2019, there are new and exciting products on the horizon from EDAX, where the engineering teams have been hard at work innovating and enhancing capabilities across all product lines. We are on the verge of having one of our most productive years for product introduction with new technologies expanding our portfolio in electron microscopy and micro-XRF applications.

Our APEX™ software platform will have a new release early this year with substantial feature enhancements for EDS, to be followed by EBSD capabilities later in 2019. APEX™ will also spread its wings to µXRF, providing a new GUI and advanced quant functions for bulk and multi-layer analysis.

Our OIM Analysis™ EBSD software will also see a major update with the addition of a new Dictionary Indexing option.

A new addition to our TEM line will be a 160 mm² detector in a 17.5 mm diameter module that provides an exceptional solid angle for the most demanding applications in this field.

Elite T EDS System

Velocity™, EDAX’s low-noise CMOS EBSD camera, provides astonishing EBSD performance at greater than 3,000 fps with high indexing rates on a range of materials, including deformed samples.

Velocity™ EBSD Camera

Last but not least, being an old x-ray guy, I can’t help being so impressed with the amazing EBSD patterns we are collecting from a ground-breaking direct electron detection (DED) camera with such “Clarity™” and detail, promising a new frontier for EBSD applications!
It will be an exciting year at EDAX and with that, I would like to wish you all a great, prosperous year!

Common Mistakes when Presenting EBSD Data

Shawn Wallace, Applications Engineer, EDAX

We all give presentations. We write and review papers. Either way, we have to be critical of our data and how it is presented to others, both numerically and graphically.

With that said, I thought it would be nice to start this year with a couple of quick tips or notes that can help with mistakes I see frequently.

The most common thing I see is poorly documented cleanup routines and partitioning. Between the initial collection and the final presentation of the data, a lot of things are done to that data. It needs to be clear what was done so that one can interpret it correctly (and other people can reproduce it). Cleanup routines can change the data in ways that are subtle (or not so subtle), but more importantly, they could wrongly change your conclusions. The easiest routine to see this with is grain dilation. This routine can turn noisy data into a textured dataset pretty fast (fig. 1).
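To make the pitfall concrete, here is a schematic of one dilation-style cleanup iteration (my sketch, not the actual OIM Analysis™ routine): every un-indexed pixel adopts the most common grain label among its indexed neighbours, which is exactly how iterating on pure noise can manufacture "grains".

```python
import numpy as np
from collections import Counter

def dilate_once(labels, bad=-1):
    """One iteration of a dilation-style cleanup (schematic only): every
    'bad' (un-indexed) pixel adopts the most common grain label among its
    indexed 4-neighbours."""
    out = labels.copy()
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != bad:
                continue
            neighbours = [labels[rr, cc]
                          for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                          if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] != bad]
            if neighbours:
                out[r, c] = Counter(neighbours).most_common(1)[0][0]
    return out

# Iterating dilate_once on random labels will happily grow 'grains' out of
# noise, which is why the number of iterations belongs in your methods section.
```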

Figure 1. The initial data was just pure noise. By running it iteratively through the grain dilation routine, you can make both grains and textures.

Luckily for us, OIM Analysis™ keeps track of most of what is done via the cleanup routines and partitioning in the summary window on either the dataset level or the partition level (fig. 2).

Figure 2. A partial screenshot of the dataset level summary window shows cleanup routines completed on the dataset, as well as the parameters used. This makes your processing easily repeatable.

The other common issue is not including all the information needed to interpret a map. I really need to look at three things to get the full picture for an EBSD dataset: the IPF map (fig. 3), the phase map (fig. 4), and the IPF legend (fig. 5) for those phases. This is very important because, while the colors used are the same, the orientations they represent differ between the different crystal symmetries.

Figure 3. General IPF Map of a geological sample. Many phases are present, but the dataset is not complete without a legend and phase map. The colors mean nothing without knowing both the phase and the IPF legend to use for that phase.

Figure 3 shows a multiple-phase sample with many crystal symmetries. All use red-green-blue as the general color scheme. By just looking at the general IPF map (fig. 3), I can easily get the wrong impression. Without the phase map, I do not know which legend I should be using to understand the orientation of each phase. Without the crystal-symmetry-specific legend, I do not know how the colors change over the orientation space. I really need all these legends/maps to truly understand what I am looking at. One missing brick and the tower crumbles.

Figure 4. In this multiphase sample, multiple symmetries are present. I need to know which phase a pixel is to know which legend to use.

Figure 5. With all the information now presented, I can actually go back and interpret figure 3, using figures 4 and 5 to guide me.

Being aware of these two simple ideas alone can help you to better present your data to any audience. The fewer the questions about how you got the data, the more time you will have to answer more meaningful questions about what the data actually means!

Old Eyes?

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

I was recently asked to write a “Tips & Tricks” article for the EDAX Insight Newsletter, as I had recently done an EDAX Webinar (www.edax.com/news-events/webinars) on Texture Analysis. I decided to follow up on one item I had emphasized in the Webinar: namely, the need to sample enough orientations for statistical reliability when characterizing a texture. The important thing to remember is that it is the number of grain orientations, as opposed to the number of orientations measured, that matters. But that led to the introduction of the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room to go into the kind of detail I would have liked, so I’ve decided to use our blog forum to cover some details about sub-sampling that I found interesting.

Consider the case where you want to characterize not only the texture of a material but also the grain size, or some other microstructural characteristic that requires a step size that is fine relative to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1], and you need about 500 pixels per average grain to capture the grain size well [2]. This would result in a scan with approximately 5 million datapoints. Instead of calculating the texture using all 5 million data points, you can use a sub-set of the points to speed up the calculation. In our latest release of OIM Analysis™, this is not as big a concern as it once was: the texture calculations have been multithreaded, so they are fast even for very large datasets. Nonetheless, since it is very likely that you will want to calculate the grain size anyway, you can use the area-weighted average grain orientation for each grain, as opposed to all 5 million individual orientation measurements, for a quick texture calculation. Alternatively, a sub-set of the points obtained through random or uniform sampling of the points in the scan area could be used.
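A minimal sketch of the two point-based sub-sampling strategies (a hypothetical helper, not OIM Analysis™ code):

```python
import numpy as np

def subsample(points, fraction, mode="uniform", seed=0):
    """Return a subset of scan points for a quick texture calculation.
    'uniform' takes evenly spaced points; 'random' draws without replacement."""
    n = len(points)
    k = max(1, round(n * fraction))
    if mode == "uniform":
        idx = np.linspace(0, n - 1, k).astype(int)
    else:
        idx = np.random.default_rng(seed).choice(n, size=k, replace=False)
    return points[idx]

# e.g. 1% of a 5-million-point scan:
orientations = np.arange(5_000_000)    # stand-in for per-point orientations
quick = subsample(orientations, 0.01)  # 50,000 points
```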

Of course, you may wonder how well the sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture, as can be seen in the Normal Direction IPF map and the accompanying (110) pole figure. For these measurements I have simply calculated a normalized squared difference, point-by-point, through the Orientation Distribution Function (ODF), which we call the Texture Difference Index (TDI) in the software.
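In code, that point-by-point comparison could look like the sketch below; the exact normalization used for the TDI in OIM Analysis™ may differ:

```python
import numpy as np

def texture_difference_index(odf1, odf2):
    """Sketch of a TDI-style comparison: a normalized point-by-point squared
    difference between two ODFs sampled on the same grid of
    orientation-space cells."""
    diff = np.sum((odf1 - odf2) ** 2)
    norm = np.sum(odf1 ** 2) + np.sum(odf2 ** 2)
    return diff / norm  # 0 for identical textures, larger for dissimilar ones
```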


This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, with 92.5% of those having a CI > 0.2 after CI standardization. I have elected to use only the points with CI > 0.2. The results are shown in the following figure.

As the step size is relatively coarse with respect to the grain size, I have experimented with requiring at least two pixels before considering a set of similarly oriented points a grain, versus allowing a single pixel to be a grain. This resulted in 9,981 grains and 25,437 grains respectively. In both cases, the differences in texture between these two grain-based sub-sampling approaches and the full dataset are small, with the one-pixel-grain sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and uniform sampling grids differ so much, particularly as the number of points in the sub-sampling gets large (i.e. at 25% of the dataset)?

TDI
The pole figure for the 1000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess is that if I were to calculate the textures at a lower rank, something like L = 8, then the TDIs would go down. This is already part of the TDI calculation, so it is an easy thing to examine. For comparison I have chosen to look at four different datasets: (a) all of the data in the dataset above (named “fine”), (b) a dataset from the same material with a coarser step size (“coarse”) containing approximately 150,000 data points, (c) a sub-sampling of the original dataset using 1000 randomly sampled datapoints (“fine-1000”), and (d) the “coarse” dataset rotated 90 degrees about the vertical axis in the pole figures (“coarse-rotated”). It is interesting to note that the textures that are similar “by eye” show a gradual increase in the TDI as the series expansion rank increases. However, for very dissimilar textures (i.e. “coarse” vs “coarse-rotated”) the jump to a large TDI is immediate.

Random vs Uniform Sampling
The differences between the random and uniform sampling were a bit curious, so I decided to check the random points to see how they were positioned in the x-y space of the scan. The figure below compares the uniform and random sampling for 4000 datapoints; any more than this is hard to show. Clearly the random sampling is reasonable, but it does show a bit of clustering and some gaps within the scan area. These small differences show up as larger differences in TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences, at least subtle with respect to my personal “by-eye” judgement. It seems to me that my “by-eye” judgement is biased toward lower-rank series expansions.


Of course, another conclusion would be that my eyesight is getting rank with age ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.

References
[1] S.I. Wright, M.M. Nowell & J.F. Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction,” Metallurgical and Materials Transactions A, 38, 1845-1855.
[2] S.I. Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements,” Practical Metallography, 47, 16-33.

Welcome to Weiterstadt!

Dr. Michaela Schleifer, European Regional Manager, EDAX

The European team had a very exhausting but successful week last week. Some months ago, we discussed the possibility of holding a user meeting at our headquarters in Weiterstadt, Germany. During our stay in Wiesbaden it became a tradition to hold at least one user meeting or workshop a year. Because of our move to Weiterstadt and the development of a new structure in the European organization, it took quite some time to plan another user meeting. In the spring, we discussed how to serve the different areas in Europe regarding language and also how to transfer information about new technology to our distributors. We finally decided to organize three different meetings during the week of October 15th. The first two days were for our German-speaking customers in Europe, mid-week we invited our distributors, and on the last two days we offered a user meeting for our English-speaking customers. There was a lot of organization to be done, like making hotel reservations, preparing presentations, organizing hosting, and booking nice restaurants for the evening events. All of us were a bit nervous about whether everything would work, whether we had forgotten anything important, and whether our SEM and system would work properly. The week before the meetings we installed the Velocity™ camera, our new high-speed EBSD system, in our demo lab, and our application people were very happy with the performance and had fun playing around with it.

On Monday October 15th we started our first user meeting in the Weiterstadt office at around 1 pm with customers from the German-speaking area. Around 45 participants joined the meeting. At the beginning we gave an overview of our current products and explained that our complete SDD series uses the Amptek modules with Si3N4 windows. Using example spectra, we showed the improved light-element performance. After that Felix, one of our application specialists, showed our new user interface APEX™ live, and the discussion that arose showed the interest from our users. Although only some users are doing EDS on a TEM, we explained a little bit about the differences between EDS on a TEM and on an SEM. We finished the first day with a question and answer session and invited all the participants to a nice location in Darmstadt to have a typical German dinner together.

The next day was completely dominated by EBSD. Our EBSD product manager Matt Nowell, who came from Draper, USA to support us during our meetings, demonstrated the performance of our new Velocity™ EBSD camera. Matt also explained the differences in camera technology using CCD or CMOS chips and described direct electron detection. It was easy to get more than 3,000 indexed points per second while measuring a duplex steel with the Velocity™ camera. Our EBSD application specialist René de Kloe presented a lot of tips and tricks regarding EBSD measurements and the analysis of those measurements, and did not get tired of answering all the questions. At the end of our program all participants left with a good feeling, having learnt a lot and picked up some good ideas about how to improve their measurements or what they might try to measure on their own samples.

The next day we ran a shortened program for our distributors, explained our product range, and gave live demonstrations of the APEX™ software platform and the Velocity™ CMOS EBSD camera. This day was dominated by a lot of discussion with the group and by questions about our roadmap for 2019.

On Thursday and Friday we ran the same program for our English-speaking customers in Europe as we had for the German-speaking customers. We had around 15 participants.

During this week we had around 75 customers in our office in Weiterstadt. Each customer was different in their applications and in how they use our systems, but what we observed during the evenings was that most of them are very similar in what they like for dinner:

Late on Friday evening the whole European team was very happy that we managed the week with all the meetings and that based on the feedback we got it was a successful week. You may be sure that all of us went home and had a relaxing weekend!

I would like to thank Matt, René, Felix, Ana, Arie, Rudolf, Andreas and Paul, and especially our customers who gave some interesting presentations about their institutes and the work they are doing there.