The city has recently started burying a pipe down the middle of one of the roads into my neighborhood. There were already a couple of troublesome intersections on this road. The construction has led to several accidents in the past couple of weeks at these intersections and I am sure there are more to come.
A question from a reviewer on a paper I am co-authoring got me thinking about the impact of intersections of bands in EBSD patterns on the Hough transform. The intersections are termed ‘zone axes’ or ‘poles’ and a pattern is typically composed of some strong ones where several high intensity bands intersect as well as weak ones where perhaps only two bands intersect.
To get an idea of the impact of the intersections on the Hough transform, I have created an idealized pattern. The intensity of the bands in the idealized pattern is derived from the peak heights of the Hough transform applied to an experimental pattern. For a little fun, I have created a second pattern by blacking out the bands in the original idealized pattern, leaving behind only the intersections. I created a third pattern by blacking out the intersections, leaving behind only the bands. I have input these three patterns into the Hough transform. As I expected, you can see the strong sinusoidal curves from the pattern with only the intersections. However, you can also see peaks where these sinusoidal curves intersect, and these correspond (for the most part) to the bands in the pattern.
In the figure, the middle row of images shows the raw Hough transforms and the bottom row shows the Hough transforms after applying the butterfly mask. It is interesting to note how much the Hough peaks differ between the three patterns. It is clear that the intersections contribute positively to finding some of the weaker bands. This is a function not only of the band intensity but also of the number of zone axes along the length of the band in the pattern.
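For readers who have not worked with the transform directly, the core idea can be sketched in a few lines of Python. This is a minimal illustration, not the masked, production implementation used in EBSD software: each bright pixel votes along a sinusoidal curve in (rho, theta) space, and pixels lying on a common line pile their votes into a peak.

```python
import numpy as np

def hough_transform(img, n_theta=180):
    """Accumulate votes in (rho, theta) space for every bright pixel."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))              # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta))
    ys, xs = np.nonzero(img)                         # coordinates of bright pixels
    for theta_idx, theta in enumerate(thetas):
        # each pixel votes along its sinusoid: rho = x cos(theta) + y sin(theta)
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, theta_idx), 1)
    return acc, diag

# a toy "band": a vertical line of bright pixels at x = 20
img = np.zeros((50, 50))
img[:, 20] = 1.0

acc, diag = hough_transform(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(theta_idx, rho_idx - diag)  # strongest peak at theta = 0, rho = 20
```

A pattern containing only zone axes behaves like the isolated bright pixels here: each intersection still traces its full sinusoid, and wherever several sinusoids cross, a peak appears at the (rho, theta) of the band passing through those zone axes.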
Eventually the construction on my local road will be done and hopefully we will have fewer accidents. But clearly, intersections are more than just a necessary evil.
Recently I gave a webinar on dynamic pattern simulation. The use of a dynamic diffraction model [1, 2] allows EBSD patterns to be simulated quite well. One topic I introduced in that presentation was dictionary indexing. You may have seen presentations on this indexing approach at some of the microscopy and/or materials science conferences. In this approach, patterns are simulated for a set of orientations covering all of orientation space. Then, an experimental pattern is tested against all of the simulated patterns to find the one that provides the best match. This approach does particularly well for noisy patterns.
I’ve been working on implementing some of these ideas into OIM Analysis to make dictionary indexing more streamlined for datasets collected using EDAX data collection software – i.e. OIM DC or TEAM. It has been a learning experience and there is still more to learn.
As I dug into dictionary indexing, I recalled our first efforts to automate EBSD indexing. Our first attempt was a template matching approach. The first step in this approach was to use a “Mexican Hat” filter to emphasize the zone axes in the patterns. This processed pattern was then compared against a dictionary of “simulated” patterns. The simulated patterns were simple: a white pixel (or set of pixels) for each major zone axis in the pattern, with everything else colored black. In this procedure, the orientation sampling for the dictionary was done in Euler space. It seemed natural to go this route at the time, because we were using David Dingley’s manual on-line indexing software, which focused on the zone axes. In David’s software, an operator clicked on a zone axis and identified the <uvw> associated with it. Two zone axes needed to be identified, and then the user had to choose between a set of possible solutions. (Note – it was a long time ago and I think I remember the process correctly. The EBSD system was installed on an SEM located in the botany department at BYU. Our time slot for using the instrument was between 2:00-4:00 am, so my memory is understandably fuzzy!)
One interesting thing of note in those early dictionary indexing experiments was that the maximum step size in the sampling grid of Euler space that would still result in successful indexing was found to be 2.5°, quite similar to the maximum target misorientation for modern dictionary indexing. Of course, this crude sampling approach may have contributed to the lack of robustness of this early attempt at dictionary indexing. The paper proposed that the technique could be improved by weighting the zone axes by the sum of the structure factors of the bands intersecting at each zone axis. However, we never followed up on this idea, as we abandoned the template matching approach and moved to the Burns algorithm coupled with the triplet voting scheme, which produced more reliable results. Using this approach, we were able to get our first set of fully automated scans. We presented the results at an MS&T symposium (Microscale Texture of Materials Symposium, Cincinnati, Ohio, October 1991), where Niels Krieger Lassen also presented his work on band detection using the Hough transform. After the conference, we hurried back to the lab to try out Niels’ approach for the band detection part of the indexing process.
Modern dictionary indexing applies an adaptive histogram filter to the experimental patterns (at left in the figure below) and the dictionary patterns (at right) prior to computing the normalized dot product used to compare patterns. The filtered patterns are nearly binary, and seeing them triggered my memory of our early dictionary work, as they reminded me of the nearly binary “Sombrero” filtered patterns. Olé! We may not have come full circle, but progress clearly goes in steps, and some bear an uncanny resemblance to previous ones. I doff my hat to the great work that has gone into the development of dynamic pattern simulation and its applications.
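The comparison step itself can be sketched as follows. This is a toy illustration in which random arrays stand in for real simulated patterns; the actual dictionary indexing codes add pattern filtering and crystal symmetry handling on top. Each pattern is flattened and scaled to unit length, so the dot product measures only pattern shape, and the best match is simply the dictionary entry with the highest score.

```python
import numpy as np

rng = np.random.default_rng(0)

# a tiny stand-in "dictionary": 100 simulated patterns, 16x16 pixels each, flattened
dictionary = rng.random((100, 256))
truth = 42
experimental = dictionary[truth] + 0.1 * rng.random(256)   # noisy copy of pattern 42

def best_match(pattern, dictionary):
    # normalize every pattern to unit length so the dot product measures
    # only the shape of the pattern, not its overall brightness
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    p = pattern / np.linalg.norm(pattern)
    return int(np.argmax(d @ p))        # index of the highest normalized dot product

print(best_match(experimental, dictionary))  # recovers index 42 despite the noise
```

The brute-force scan over every dictionary entry is what makes the method robust to noise, and also what makes it computationally expensive compared with Hough-based indexing.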
[1] A. Winkelmann, C. Trager-Cowan, F. Sweeney, A.P. Day, P. Parbrook (2007) “Many-Beam Dynamical Simulation of Electron Backscatter Diffraction Patterns” Ultramicroscopy 107: 414-421.
[2] P.G. Callahan, M. De Graef (2013) “Dynamical Electron Backscatter Diffraction Patterns. Part I: Pattern Simulations” Microscopy and Microanalysis 19: 1255-1265.
[3] S.I. Wright, B.L. Adams, J.-Z. Zhao (1991) “Automated determination of lattice orientation from electron backscattered Kikuchi diffraction patterns” Textures and Microstructures 13: 2-3.
[4] Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M.A. Jackson, J.P. Simmons, M. De Graef, A.O. Hero (2015) “A dictionary approach to electron backscatter diffraction indexing” Microscopy and Microanalysis 21: 739-752.
[5] S.I. Wright, B.L. Adams (1992) “Automatic-analysis of electron backscatter diffraction patterns” Metallurgical Transactions A 23: 759-767.
[6] N.C. Krieger Lassen, D. Juul Jensen, K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6: 115-121.
[7] K. Kunze, S.I. Wright, B.L. Adams, D.J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20: 41-54.
Not too long ago I went to my optometrist to get an eye exam for some replacement glasses. My last pair had been stolen after my car was broken into in broad daylight during lunch at a restaurant in the Bay Area. (What the thief planned on doing with my prescription glasses is still a mystery to me.)
Figure 1: The old phoropter* (top) and the new phoropter** (bottom).
It had been at least a couple years since my last examination, but I was prepared to be guided through all the typical tests, culminating with that “giant-machine-with-multiple-lenses” pressed into my face to help the optometrist determine the prescription that would best correct the errors in my vision. I’d later learn that this machine is called a phoro-optometer, or more commonly a “phoropter.” And, contrary to my previous experiences with this instrument, it was now a super-sleek, slimmed down, digital version of the machine, using a computer controlled digital refraction system to cycle through the refraction options instead of using stacks of physical lenses that had to be manually cycled by the optometrist.
It was much smaller, quieter, faster, and easier than the version with which I was familiar. I was thoroughly impressed. But I was even more impressed when the instrument was pulled away and I saw the Ametek logo emblazoned on the side of it.
I couldn’t help but reflexively blurt out “Hey I work there!” to which the optometrist looked up from my file and began curiously interrogating me about my history in the eye care industry. Sadly, he quickly lost interest after I explained that I worked in a different division of Ametek that manufactures EDS, EBSD, and WDS systems.
After my exam, for some reason I felt a bit intimidated about not knowing more about Ametek’s business units outside of the EDAX niche to which I belong. I knew Ametek was a huge corporation, steadily growing larger over the decades, mainly by acquisition of smaller companies, but I’d never really grasped the sheer size and breadth of everything Ametek does. This wasn’t the first time I’d been in this type of situation. Prior to joining EDAX/Ametek I worked for another scientific instrumentation corporation, slightly smaller than Ametek but still a similar type of behemoth, with a wide range of companies making products that service comparable industries and applications. Even at that corporation my knowledge of the business outside of my business unit’s portfolio was very limited. These places are just so big!
Working at large corporations like these can, at times, be a little discouraging if you think of yourself as just a single cog in a machine with thousands of moving parts. Giant corporations certainly seem to have a bad reputation these days, and I’ll admit I’ve experienced my fair share of corporation-induced angst over the years. Working within a large bureaucracy can make completing the smallest internal tasks overwhelming. Being in a smaller company that is acquired (I’ve been through two acquisitions) can be disruptive to business and cause a lot of anxiety.
But is there a good side to these mega-corporations? I think so.
I can find some important benefits that could be argued to outweigh the negative aspects, not just to the cogs like myself but also to the markets that they serve. Whether or not these apply to other more prominent mega-corporations is debatable, but I think they seem to be reasonable positive characteristics, at least from my experience in the scientific instrumentation field.
Having the brand name recognition has always been an advantage. Customers (and their procurement departments) are typically more willing to do business with companies that have a long history of manufacturing products. Being in business for multiple decades with a proven track record of having the resources to reliably deliver products to the market and consistently service its user-base generates heaps of reassurance for customers that a younger or smaller company just can’t provide. It works similarly for vendors as well – it turns out that people are always more willing to sell you stuff if they’re confident that your company will pay for it.
Being in a large corporation also offers a huge advantage in the ability to research and develop new technology and product improvements. This can come by brute force – having deeper pockets to invest more money into R&D – or by utilizing the synergy between individual companies under the corporation’s umbrella. EDAX is a great example of this in a couple of ways. Ametek’s purchase of a new business unit in 2014 facilitated the development of EDAX’s groundbreaking Octane Elite and Octane Elect EDS systems, allowing for speed and sensitivity that had never been achieved in any other EDS system. Collaboration between EDAX and another sister company within the Materials Analysis Division of Ametek ushered in the release of EDAX’s new Velocity high-speed CMOS EBSD camera, by far the fastest EBSD system available. Realization of these two milestones of innovation would have been significantly delayed without the help of Ametek’s resources.
Figure 2: The Octane Elite (left) and the Velocity Super (right), two of EDAX’s products that were developed, in part, with the help of other business units inside Ametek.
But what I think tends to be the best part is that, as long as a company is meeting its targets and things are humming along nicely, corporations – at least the good ones, in my opinion – are usually happy to just let the business unit do its own thing. Having an “if it ain’t broke, don’t fix it” mentality is the ideal way to keep the key talent happy and keep the business growing and making money. It also makes it possible to retain some semblance of the original company culture that contributed to its success in the first place. This is the holy grail for us cogs: being able to keep that small-business feel while taking advantage of all the big-business benefits at the same time. Again, EDAX is a good example of this, with many of EDAX’s employees being legacy staff hired long before the EDAX acquisition. This tells me Ametek must be doing something right.
So, I guess it’s debatable. While we may be willingly marching our grandchildren into a dystopia where three or four companies own all the businesses in the world, there are some undeniable advantages to working for a big company as well. And I take some comfort in the fact that there are some very intelligent and innovative people behind the curtains, trying to do good things to make their customers happy and generally improve the lives of everyone in the world. We may or may not see all the things like the better phoropters out there, but our lives almost certainly benefit from them whether we realize it or not.
A recent conversation on a listserv discussed sloppiness in the use of words and how it can cause confusion. It made me consider that in the world of microanalysis, we are not immune. We are probably sloppiest with two words in particular: resolution and phase.
Let us start with how we use the word phase and how phases are commonly defined in microanalysis. In Energy Dispersive Spectroscopy (EDS), we use phase for everything: for example, phase mapping and the phase library. In Electron Backscatter Diffraction (EBSD), the usage is a little more straightforward.
So, what is a phase? Well to me, a geologist, a phase has both a distinct chemistry and a distinct crystal structure. Why does this matter to a geologist? Two different minerals with the same chemistry, but with different structures, can behave in very different ways and this gives me useful information about each of them.
The classic example for geologists is the Al2SiO5 system (figure 1). It has three members: kyanite, sillimanite, and andalusite. They each have the same chemistry but different structures. The structure of each is controlled by the pressure and temperature at which the mineral equilibrated. Simple chemistry tells me nothing; I need the structure to tease out that information.
Figure 1. Phase diagram of the Al2SiO5 system under geological conditions. Different minerals form at different pressures and temperatures, letting geologists know the depth and/or temperature at which the parent rock formed.**
EDS users use the term phase much more loosely. A phase is something that is chemically distinct. Our phase maps examine the spectrum at each pixel and compare it with the spectra at the other pixels; in the end, the software goes through the entire map and groups each pixel with like pixels. The phase library performs chi-squared fits to compare an unknown spectrum to the library entries (figure 2).
Figure 2. Our Spectrum Library Match uses a chi-squared fit to determine the best possible matches. This phase match is based on compositional data, not compositional and structural data.
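The matching step can be sketched as follows. The library entries, mineral names, and channel counts here are made up for illustration (real spectra have thousands of energy channels): the unknown spectrum is scored against each library entry with the classic chi-squared statistic, and the lowest score wins.

```python
import numpy as np

# hypothetical mini-library: counts in four energy channels for three candidate phases
library = {
    "quartz":   np.array([120.0,  40.0, 300.0,  10.0]),
    "feldspar": np.array([ 90.0, 150.0, 210.0,  60.0]),
    "calcite":  np.array([200.0,  30.0,  80.0, 140.0]),
}
unknown = np.array([118.0, 44.0, 295.0, 12.0])   # noisy quartz-like spectrum

def chi_squared(observed, expected):
    # classic chi-squared statistic, summed channel by channel
    return float(np.sum((observed - expected) ** 2 / expected))

scores = {name: chi_squared(unknown, ref) for name, ref in library.items()}
best = min(scores, key=scores.get)
print(best)   # -> quartz
```

Note that the score depends only on the counts, which is exactly the point of the section: two polymorphs with identical chemistry would produce identical scores, and only structural (EBSD) information could separate them.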
While the definition of phase is relatively straightforward, the meaning of resolution gets a little murkier. If you ask someone what the EDS resolution is, you may get different answers depending on whom you ask. The main way we use the term resolution when talking about EDS is spectral resolution, which defines how tight the peaks in a spectrum are (figure 3).
Figure 3. Comparison of EDS vs. WDS spectral resolution. WDS has much higher resolution (tighter peaks) than EDS, but yields fewer counts and requires more setup.
The other main use of resolution in EDS is the spatial resolution of the EDS signal itself (figure 4). Many factors determine this, but the main ones are the accelerating voltage and the sample characteristics. This resolution can range from nanometers to microns.
Figure 4. Distribution of the electron energy deposited in an aluminum sample (top row) and a gold sample (bottom row) at 15 kV (left column) and 5 kV (right column). Note the dramatic difference in penetration indicated by the scale bars at the right-hand side.
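A common back-of-the-envelope estimate of this interaction volume, shown here purely as an illustration (the figure above comes from a full electron-trajectory simulation, not from this formula), is the Kanaya-Okayama electron range:

```python
def kanaya_okayama_range(A, Z, rho, E_keV):
    """Kanaya-Okayama electron range estimate in micrometres.
    A: atomic weight (g/mol), Z: atomic number, rho: density (g/cm^3),
    E_keV: accelerating voltage in kV (beam energy in keV)."""
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.89 * rho)

# the two materials and two voltages from the figure
for material, (A, Z, rho) in {"Al": (26.98, 13, 2.70), "Au": (196.97, 79, 19.3)}.items():
    for E in (15, 5):
        print(f"{material} at {E} kV: {kanaya_okayama_range(A, Z, rho, E):.2f} um")
```

The estimate reproduces the trends in the figure: a few micrometres of penetration in aluminum at 15 kV, dropping to well under a micrometre in gold, and shrinking sharply for both at 5 kV.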
The final use of resolution for EDS is mapping resolution. This is by far the easiest to understand. It is just the step size of the beam while you are mapping.
Luckily for us, the easiest way to find out what people mean when they use the terms resolution or phase, is just to ask. Of course, the way to avoid any confusion is to be as precise as possible with your choice of words. I resolve to do my part and communicate as clearly as I can!
John Haritos, Regional Sales Manager, Southwest USA, EDAX
I recently had the opportunity to host a demo for one of my customers at our Draper, Utah office. This was a long-time EDAX and EBSD user who was interested in seeing our new Velocity CMOS camera and trying it on some of their samples.
When I started in this industry back in the late 90s, the cameras were running at a “blazing” 20 points per second and we all thought that this was fast. At that time, collection speed wasn’t the primary issue. What EBSD brought to the table was automated orientation analysis of diffraction patterns. Now users could measure orientations and create beautiful orientation maps with the push of a button, which was a lot easier than manually interpreting these patterns.
Fast forward to 2019: with CMOS technology adapted from other industries to EBSD, we are now collecting at 4,500 points per second (pps). What took hours or even days to collect at 20 pps now takes a matter of minutes or seconds. Below is a nickel superalloy sample collected at 4,500 pps with our Velocity Super EBSD camera. This scan shows the grain and twinning structure and was collected in just a few minutes.
Figure 1: Nickel Superalloy
Of course, now that we have improved from 20 pps to 4,500 pps, it’s significantly easier to get a lot more data. So the question becomes: how do we analyze all this data? This is where OIM Analysis v8 comes to the rescue for the analysis and post-processing of these large datasets. OIM Analysis v8 was designed to take advantage of 64-bit computing and multi-threading, so the software can handle large datasets. Below is a grain size map and a grain size distribution chart from an aluminum friction stir weld sample with over 7 million points, collected with the Velocity and processed using OIM Analysis v8. This example is interesting because the grains on the left side of the image are much larger than the grains on the right side. With the fast collection speeds, a small (250 nm) step size could still be used over this larger collection area. This allows for accurate characterization of grain size across the weld interface, and the bimodal grain size distribution is clearly resolved. With a slower camera, it may be impractical to analyze this area in a single scan.
Figure 2: Aluminum Friction Stir Weld
In the past, most customers would set up an overnight EBSD run. You could see the thoughts running through their minds: will my sample drift, will my filament pop, what will the data look like when I come back to work in the morning? Inevitably, the sample would drift or the filament would pop, and this would mean the dreaded “ugh” in the morning. With the Velocity and its fast collection speeds, you no longer need to worry about this. You can collect maps in a few minutes and avoid the issue entirely. It’s a hard thing to convey in a brochure, but it’s easy to appreciate when seeing it firsthand.
For me, watching my customer see the analysis of many samples in a single day was impressive. These were not particularly easy samples: solar cell and battery materials with a variety of phases and crystal structures. But under conditions similar to their traditional EBSD work, we could collect better quality data much faster. The future is now. Everyone is excited about what CMOS technology can offer in the way of productivity and throughput for their EBSD work.
When you have been working with EBSD for many years it is easy to forget how little you knew when you started. EBSD patterns appear like magic on your screen, indexing and orientation determination are automatic, and you can produce colourful images or maps with a click of a mouse.
Image 1: IPF on PRIAS center EBSD map of cold-pressed iron powder sample.
All the tools to get you there are hidden in the EBSD software package that you are working with and as a user you don’t need to know exactly how all of it happens. It just works. To me, although it is my daily work, it is still amazing how easy it sometimes is to get high quality data from almost any sample even if it only produces barely recognisable patterns.
Image 2: Successful indexing of extremely noisy patterns using automatic band detection.
That capability did not just appear overnight. There is a combination of a lot of hard work, clever ideas, and more than 25 years of experience behind it that we sometimes just forget to talk about, or perhaps even worse, expect everybody to know already. And so it is that I occasionally get asked a question at a meeting or an exhibition where I think, really? For example, some years ago I got a very good question about the EBSD calibration.
Image 3: EBSD calibration is based on the point in the pattern that is not distorted by the projection. This is the point where the electrons reach the screen perpendicularly (pattern center).
As you probably suspect EBSD calibration is not some kind of magic that ensures that you can index your patterns. It is a precise geometrical correction that distorts the displayed EBSD solution so that it fits the detected pattern. I always compare it with a video-projector. That is also a point projection onto a screen at a small angle, just like the EBSD detection geometry. And when you do that there is a distortion where the sides of the image on the screen are not parallel anymore but move away from each other. On video projectors there is a smart trick to fix that: a button labelled keystone correction which pulls the sides of the image nicely parallel again where they belong.
Image 4: Trapezoid distortion before (left) and after (right) correction.
Unfortunately, we cannot tell the electrons in the SEM to move over a little to make the EBSD pattern look correct. Instead, we need to distort the indexing solution so that it matches the EBSD pattern. And now the question I was asked was: do you actually adjust this calibration when moving the beam position on the sample during a scan? Because otherwise you cannot collect large EBSD maps. Apparently not everybody was doing that at the time, and it was being presented at a conference as the invention of the century that no EBSD system could do without. It was finally possible to collect EBSD data at low magnification! So, when do you think this feature will be available in your software? I stood quiet for a moment before answering: well, eh, we actually already have such a feature, which we call the pattern centre shift. It had been in the system since the first mapping experiments in the early 1990s. We just did not talk about it, as it seemed so obvious.
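As a rough illustration of what such a correction involves, consider a deliberately idealized geometry: a vertical phosphor screen facing a sample tilted 70° from horizontal. A beam displacement measured in the sample surface plane then shifts the pattern centre and changes the sample-to-screen distance. The sign conventions and the geometry here are my own simplifying assumptions for the sketch, not those of any particular EBSD system.

```python
import math

def pattern_center_shift(dx, dy, tilt_deg=70.0):
    """Shift of the pattern centre (same length units as dx, dy) for a beam
    displacement (dx, dy) in the sample surface plane.
    Idealized geometry: vertical screen facing a sample tilted `tilt_deg`
    from horizontal; axis and sign conventions are arbitrary."""
    t = math.radians(tilt_deg)
    d_pcx = -dx                  # horizontal beam shift maps 1:1 onto the screen
    d_pcy = -dy * math.sin(t)    # shift along the tilt direction is foreshortened...
    d_L   =  dy * math.cos(t)    # ...and also changes the sample-to-screen distance
    return d_pcx, d_pcy, d_L
```

Applying a correction like this at every beam position is exactly what makes large, low-magnification EBSD maps indexable without recalibrating at each point.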
There are more things like that hidden in the software that are at least as important, such as smart routines to detect the bands even in extremely noisy patterns, EBSD pattern background processing, 64-bit multithreading for fast processing of large datasets, and efficient quaternion-based mathematical methods for post-processing. These tools are quietly working in the background to deliver the results that the user needs.
There are some other original ideas that date back to the 1990’s that we actually do regularly talk about, such as the hexagonal scanning grid, triplet voting indexing, and the confidence index, but there is also some confusion about these. Why do we do it that way?
The common way in imaging and imaging sensors (e.g., CCD or CMOS chips) is to organise pixels on a square grid. That is easy, and you can treat your data as being written in a regular table with fixed intervals. However, the pixel-to-pixel distances are different horizontally and diagonally, which is a drawback when you are routinely calculating average values around points. In a hexagonal grid the point-to-point distance is constant between all neighbouring pixels. Perhaps even more importantly, for the same step size a hexagonal grid packs ~15% more points into the same area than a square grid, which makes it ideally suited to fill a surface.
Image 5: Scanning results for square (left) and hexagonal (right) grids using the same step size. The grain shape and small grains with few points are more clearly defined in the hexagonal scan.
This potentially allows improvements in imaging resolution, and sometimes I am a little surprised that a hexagonal imaging mode is not yet available on SEMs.
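The ~15% figure follows directly from the row spacing of a hexagonal grid, as a quick check shows: rows in a hexagonal grid sit sqrt(3)/2 of a step apart, so each point occupies a smaller cell than on a square grid with the same step size.

```python
import math

step = 1.0  # nominal point-to-point distance

# square grid: one point per (step x step) cell
square_density = 1.0 / step ** 2

# hexagonal grid: rows are step * sqrt(3)/2 apart, so each cell is smaller
hex_density = 1.0 / (step * step * math.sqrt(3) / 2)

ratio = hex_density / square_density
print(f"{ratio:.4f}")  # 2/sqrt(3) ~ 1.1547, i.e. ~15% more points per area
```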
The triplet voting indexing method also has some hidden benefits. What we do there is calculate a crystal orientation for each group of three bands detected in an EBSD pattern. For example, when you set the software to find 8 bands, you can define up to 56 different band triangles, each with a unique orientation solution.
Image 6: Indexing example based on a single set of three bands – triplet.
Image 7: Equation indicating the maximum number of triplets for a given number of bands.
This means that when a pattern is indexed, we don’t just find a single orientation; we find 56 very similar orientations that can all be averaged to produce the final indexing solution. This averaging effectively removes small errors in the band detection and allows excellent orientation precision, even in very noisy EBSD patterns. The large number of individual solutions for each pattern has another advantage. It does not hurt too much if some of the bands are wrongly detected from pattern noise or when a pattern is collected directly at a grain boundary and contains bands from two different grains. In most cases the bands coming from one of the grains will dominate the solutions and produce a valid orientation measurement.
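The count of 56 triangles is just the number of ways of choosing 3 bands out of 8, i.e. the binomial coefficient C(n, 3) that the equation in Image 7 expresses:

```python
from math import comb

# n detected bands give C(n, 3) unique band triplets (triangles)
for n in (3, 5, 8, 10):
    print(n, comb(n, 3))   # 8 bands -> 56 triplets, as in the text
```

The count grows quickly with the number of detected bands, which is why even a modest band-detection setting yields dozens of independent orientation votes per pattern.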
The next original parameter from the 1990’s is the confidence index which follows out of the triplet voting indexing method. Why is this parameter such a big deal that it is even patented?
When an EBSD pattern is indexed, several parameters are recorded in the EBSD scan file: the orientation, the image quality (which is a measure of the contrast of the bands), and a fit angle. This angle indicates the angular difference between the bands that have been detected by the software and the calculated orientation solution. The fit angle can be seen as an error bar for the indexing solution. If the angle is small, the calculated orientation fits very closely with the detected bands and the solution can be considered good. However, there is a caveat. What if there are different orientation solutions that would produce virtually identical patterns? This may happen within a single phase, where it is called pseudosymmetry. The patterns are then so similar that the system cannot detect the difference. Alternatively, you can also have multiple phases in your sample that produce very similar patterns. In such cases we would typically use EDS information and ChI-Scan to discriminate the phases.
Image 8: Definition of the confidence index parameter. V1 = number of votes for the best solution, V2 = number of votes for the 2nd best solution, VMAX = maximum possible number of votes.
Image 9: EBSD pattern of silver indexed with the silver structure (left) and the copper structure (right). The fit is 0.24°; the only difference is a minor variation in the band width matching.
In both these examples the fit value would be excellent for the selected solution. And in both cases the solution has a high probability of being wrong. That is where the confidence index, or CI value, becomes important. The CI value is based on the number of band triangles or triplets that match each possible solution. If there are two indistinguishable solutions, both will have the same number of matching triangles and the CI will be 0. This means that there are two or more apparently valid solutions, which may all have a good fit angle. The system just does not know which of these solutions is the correct one, and thus the measurement is rejected. If there is a difference of only 10% in matched triangles between alternative orientation solutions, the software is in most cases capable of identifying the correct one. The fit angle on its own cannot identify this problem.
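Using the definition from Image 8, the behaviour is easy to sketch: the CI compares the triplet votes for the two best solutions against the maximum possible number of votes, so a pseudosymmetric tie drops it to zero even when the fit angle looks excellent.

```python
def confidence_index(v1, v2, v_max):
    """CI = (V1 - V2) / VMAX, with V1 and V2 the triplet votes for the best
    and second-best orientation solutions and VMAX the maximum possible votes."""
    return (v1 - v2) / v_max

# 8 detected bands allow at most C(8, 3) = 56 triplet votes
print(confidence_index(50, 10, 56))   # clear winner: CI well above zero
print(confidence_index(28, 28, 56))   # pseudosymmetric tie: CI = 0, measurement rejected
```

The vote counts here are invented for illustration; in a real scan they come from the triplet voting described above.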
After 25 years these tools and parameters are still indispensable and at the basis of every EBSD dataset that is collected with an EDAX system. You don’t have to talk about them. They are there for you.
I recently had the opportunity to attend the RMS EBSD meeting, which was held at the National Physical Laboratory outside of London. It was a very enjoyable meeting, with lots of nice EBSD developments. While I was there, I was able to take in a bit of London as well. One of the places I visited was Shakespeare’s Globe Theatre. While I didn’t get a chance to see a show there (I saw School of Rock instead), it did get me thinking about one of the Bard’s more famous lines, “What’s in a name? That which we call a rose by any other word would smell as sweet,” from Romeo and Juliet.
I bring this up because as EBSD Product Manager for EDAX, one of my responsibilities is to help name new products. Now my academic background is in Materials Science and Engineering, so understanding how to best name a product has been an interesting adventure.
The earliest product we had was the OIM system, which stood for Orientation Imaging Microscopy. The name came from a paper introducing EBSD mapping as a technique. At the time, we were TSL, which stood for TexSem Laboratories, which was short for Texture in an SEM. Obviously, we were into acronyms. We used a SIT (Silicon Intensified Target) camera to capture the EBSD patterns. We did the background processing with a DSP-2000 (Digital Signal Processor). We controlled the SEM beam with an MSC box (Microscope System Control).
Our first ‘mapped’ car.
For our next generation of products, we branched out a bit. Our first digital charge-coupled device (CCD) camera was called the DigiView, as it was our first digital camera for capturing EBSD patterns instead of analog signals. Our first high-speed CCD camera was called Hikari. This one may not be as obvious, but it was named after the high-speed train in Japan, as Suzuki-san (our Japanese colleague) played a significant role in the development of this camera. Occasionally, we could find the best of both worlds. Our phase ID product was called Delphi. In Greek mythology, Delphi was the home of the oracle consulted for important decisions (could you describe phase ID any better than that?). It also stood for Diffracted Electrons for Phase Identification.
Among our more recent products, PRIAS stands for Pattern Region of Interest Analysis System. Additionally, though, it is meant to invoke the hybrid use of the detector as both an EBSD detector and an imaging system. TEAM stands for Texture and Elemental Analysis System, which allowed us to bridge together EDS and EBSD analysis in the same product. NPAR stands for Neighbor Pattern Averaging and Reindexing, but I like this one as it sounds like I named it because of my golf game.
I believe these names have followed in the tradition of things like lasers (light amplification by stimulated emission of radiation), scuba (self-contained underwater breathing apparatus), and CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). Knowing what these names mean generates a feeling of being part of the club.
Velocity EBSD Camera
The feedback I get, though, is that our product names should tell us what the product does. I don’t buy into this 100%, as my Honda Pilot isn’t a self-driving car, but it is the first recommendation on how to name a product (https://aytm.com/blog/how-to-name-a-product-10-tips-for-product-naming-success/). Following this logic, our latest and world’s fastest EBSD camera is the Velocity. It sounds fast, and it is.
Of course, even when using this strategy, there can be some confusion. Is it tEBSD (Transmission EBSD) or TKD (Transmission Kikuchi Diffraction)? Does HR-EBSD give us better spatial resolution? Hopefully as we continue to name new products, we can make our answer clear.
Don’t just read the title of this post and skip to the photos, or you might think it is some soap opera drama about strained relations – instead, the title is, once again, my feeble attempt at a punny joke!
I was recently doing a little reference checking and ended up on the website for Microscopy and Microanalysis (the journal, not the conference). On my first glance, I was surprised to see my name in the bottom right corner. Looking closer, I noticed that the paper Matt Nowell, David Field and I wrote way back in 2011, entitled “A Review of Strain Analysis Using Electron Backscatter Diffraction,” is apparently the most cited article in Microscopy and Microanalysis. I am pleased that so many readers have found it useful. I remember that, at the time, we were getting a lot of questions about the tools within OIM Analysis for characterizing local misorientation and how they relate to strain. It was also a time when HR-EBSD was really starting to gain some momentum, and we were getting a lot of questions on that front as well. So, we thought it would be helpful to write a paper that would answer some practical questions on using EBSD to characterize strain. From all the citations, it looks as though we actually managed to achieve what we strove for.
My co-authors on that paper have been great to work with professionally; but I also count them among my closest personal friends. David Field joined Professor Brent Adams’ research group at BYU way back in 1987 if my memory is correct. We both completed master’s degrees at BYU and then followed Brent to Yale in 1988 to do our PhDs together. David then went on to Alcoa and I went to Los Alamos National Lab. Brent convinced David to leave and join the new startup company TSL and I joined about a year later. David left TSL for Washington State University shortly after EDAX purchased TSL.
Before I joined TSL, Matt Nowell had joined the company, and he has been at TSL/EDAX ever since. Even with all the comings and goings, we’ve remained colleagues and friends.
I’ve been richly blessed by both their excellent professional talents and their fun-spirited friendship. We’ve worked, traveled and attended conferences together. We’ve played basketball, volleyball and golf together. I must also brag that we formed the core of the soccer team that took on the Seoul National University students after ICOTOM 13 in Seoul. Those who attended ICOTOM 13 may remember that it was held shortly after the 2002 World Cup hosted jointly by Korea and Japan, in which Korea had such a good showing – finishing 4th. A sequel was played at SNU, where the students pretty much trounced the rest of the world despite our best efforts. Here are a few snapshots of us with our Korean colleagues at ICOTOM 13 – clearly, we were always snappy dressers!
After all these years I still get excited about new technologies and their resulting products, especially when I have had the good fortune to play a part in their development. As I look forward to 2019, there are new and exciting products on the horizon from EDAX, where the engineering teams have been hard at work innovating and enhancing capabilities across all product lines. We are on the verge of having one of our most productive years for product introduction with new technologies expanding our portfolio in electron microscopy and micro-XRF applications.
Our APEX software platform will have a new release early this year with substantial feature enhancements for EDS, to be followed by EBSD capabilities later in 2019. APEX will also expand its wings to micro-XRF, providing a new GUI and advanced quant functions for bulk and multi-layer analysis.
Our OIM Analysis EBSD software will also see a major update with the addition of a new Dictionary Indexing option.
A new addition to our TEM line will be a 160 mm² detector in a 17.5 mm diameter module that provides an exceptional solid angle for the most demanding applications in this field.
Elite T EDS System
Velocity, EDAX’s low-noise CMOS EBSD camera, provides astonishing EBSD performance at greater than 3,000 fps with high indexing on a range of materials, including deformed samples.
Velocity EBSD Camera
Last but not least, being an old x-ray guy, I can’t help being so impressed with the amazing EBSD patterns we are collecting from a ground-breaking direct electron detection (DED) camera with such “Clarity” and detail, promising a new frontier for EBSD applications!
It will be an exciting year at EDAX and with that, I would like to wish you all a great, prosperous year!
We all give presentations. We write and review papers. Either way, we have to be critical of our data and how it is presented to others, both numerically and graphically.
With that said, I thought it would be nice to start this year with a couple of quick tips or notes that can help with mistakes I see frequently.
The most common thing I see is poorly documented cleanup routines and partitioning. Between the initial collection and the final presentation of the data, a lot of things are done to that data. It needs to be clear what was done so that one can interpret the data correctly (and so that other people can reproduce it). Cleanup routines can change the data in ways that are subtle (or not so subtle), but more importantly, they can wrongly change your conclusions. The easiest routine to see this with is the grain dilation routine, which can turn noisy data into a textured dataset pretty fast (fig. 1).
Figure 1. The initial data was just pure noise. By running it iteratively through the grain dilation routine, you can make both grains and textures.
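To see why dilation can manufacture grains out of noise, here is a minimal sketch of what one dilation pass does. This is a hypothetical simplification in Python, not the actual OIM Analysis implementation: each unindexed point simply adopts the most common grain ID among its indexed neighbors, so repeated passes grow grains outward until the noise is consumed.

```python
import numpy as np

def dilate_grains(ids, iterations=1):
    """Toy grain dilation on a 2D map of integer grain IDs.

    Unindexed pixels are marked -1. Each pass assigns every unindexed
    pixel the most common grain ID among its 4-connected indexed
    neighbors; iterating grows the grains step by step.
    """
    ids = ids.copy()
    rows, cols = ids.shape
    for _ in range(iterations):
        new = ids.copy()
        for r in range(rows):
            for c in range(cols):
                if ids[r, c] != -1:
                    continue  # already belongs to a grain
                # collect grain IDs of indexed 4-neighbors
                votes = [ids[nr, nc]
                         for nr, nc in ((r - 1, c), (r + 1, c),
                                        (r, c - 1), (r, c + 1))
                         if 0 <= nr < rows and 0 <= nc < cols
                         and ids[nr, nc] != -1]
                if votes:
                    # majority vote among neighboring grains
                    new[r, c] = max(set(votes), key=votes.count)
        ids = new
    return ids
```

Run enough iterations on an almost-empty map and whole “grains” appear where there was nothing, which is exactly why the number of iterations (and every other cleanup parameter) needs to be reported alongside the final maps.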
Luckily for us, OIM Analysis keeps track of most of what is done via the cleanup routines and partitioning in the summary window on either the dataset level or the partition level (fig. 2).
Figure 2. A partial screenshot of the dataset level summary window shows cleanup routines completed on the dataset, as well as the parameters used. This makes your processing easily repeatable.
The other common issue is not including the full information needed to interpret a map. I really need to look at three things to get the full picture of an EBSD dataset: the IPF map (fig. 3), the phase map (fig. 4), and the IPF legend (fig. 5) for those phases. This is very important because, while the colors used are the same, the orientations they represent differ between the different crystal symmetries.
Figure 3. General IPF Map of a geological sample. Many phases are present, but the dataset is not complete without a legend and phase map. The colors mean nothing without knowing both the phase and the IPF legend to use for that phase.
Below is a multi-phase sample with many crystal symmetries. All use red-green-blue as the general color scheme. By just looking at the general IPF map (fig. 3), I can easily get the wrong impression. Without the phase map, I do not know which legend to use to understand the orientation of each phase. Without the crystal-symmetry-specific legend, I do not know how the colors change over orientation space. I really need all of these legends/maps to truly understand what I am looking at. One missing brick and the tower crumbles.
Figure 4. In this multiphase sample, multiple symmetries are present. I need to know which phase a pixel is to know which legend to use.
Figure 5. With all the information now presented, I can actually go back and interpret figure 3 using figures 4 and 5 to guide me.
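As an illustration of why the symmetry-specific legend matters, here is a rough sketch of the kind of mapping an IPF legend encodes, using a commonly seen simplified red-green-blue scheme for cubic symmetry. The function name is hypothetical, and real EBSD packages use more refined color keys; other crystal symmetries use entirely different fundamental zones, which is the whole point.

```python
import numpy as np

def ipf_color_cubic(direction):
    """Map a crystal direction to a simplified IPF-style RGB triplet
    for cubic symmetry.

    The direction is folded into the standard stereographic triangle
    ([001]-[101]-[111]) by taking absolute values and sorting the
    components; red, green, and blue then measure closeness to the
    [001], [101], and [111] corners, respectively.
    """
    v = np.abs(np.asarray(direction, dtype=float))
    v /= np.linalg.norm(v)
    small, mid, large = np.sort(v)  # fold into the fundamental zone
    # corner weights: [001] -> red, [101] -> green, [111] -> blue
    rgb = np.array([large - mid, mid - small, small])
    return rgb / rgb.max()  # scale so the strongest channel is 1
```

Under this cubic key, [001] comes out red, [101] green, and [111] blue; under a hexagonal legend the very same RGB values would correspond to completely different orientations, which is why the IPF map, the phase map, and the per-phase legend all have to travel together.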
Being aware of these two simple ideas alone can help you to better present your data to any audience. The fewer the questions about how you got the data, the more time you will have to answer more meaningful questions about what the data actually means!