Recently I gave a webinar on dynamic pattern simulation. The use of a dynamic diffraction model [1, 2] allows EBSD patterns to be simulated quite well. One topic I introduced in that presentation was that of dictionary indexing [3]. You may have seen presentations on this indexing approach at some of the microscopy and/or materials science conferences. In this approach, patterns are simulated for a set of orientations covering all of orientation space. Then, each experimental pattern is tested against all of the simulated patterns to find the one that provides the best match. This approach does particularly well for noisy patterns.
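The core of that search step is simple enough to sketch. Below is a minimal illustration (not the actual implementation in any indexing package; the array names and the normalization are placeholders) of finding the best-matching dictionary pattern with a normalized dot product:

```python
import numpy as np

def dictionary_index(experimental, dictionary):
    """Brute-force dictionary indexing: return the index and score of the
    simulated pattern that best matches the experimental pattern."""
    exp = experimental.ravel().astype(float)
    exp /= np.linalg.norm(exp)                      # scale to unit length
    best_idx, best_score = -1, -np.inf
    for i, simulated in enumerate(dictionary):
        sim = simulated.ravel().astype(float)
        sim /= np.linalg.norm(sim)
        score = float(np.dot(exp, sim))             # normalized dot product
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```

In practice the dictionary can contain hundreds of thousands of patterns, so real implementations vectorize or otherwise accelerate this loop, but the principle is the same.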
I’ve been working on implementing some of these ideas into OIM Analysis™ to make dictionary indexing more streamlined for datasets collected using EDAX data collection software – i.e. OIM DC or TEAM™. It has been a learning experience and there is still more to learn.
As I dug into dictionary indexing, I recalled our first efforts to automate EBSD indexing. Our first attempt was a template matching approach [4]. The first step in this approach was to use a “Mexican Hat” filter. This was done to emphasize the zone axes in the patterns. This processed pattern was then compared against a dictionary of “simulated” patterns. The simulated patterns were simple – a white pixel (or set of pixels) for the major zone axes in the pattern and everything else was colored black. In this procedure the orientation sampling for the dictionary was done in Euler space. It seemed natural to go this route at the time, because we were using David Dingley’s manual on-line indexing software which focused on the zone axes. In David’s software, an operator clicked on a zone axis and identified the <uvw> associated with the zone axis. Two zone axes needed to be identified and then the user had to choose between a set of possible solutions. (Note – it was a long time ago and I think I remember the process correctly. The EBSD system was installed on an SEM located in the botany department at BYU. Our time slot for using the instrument was between 2:00-4:00am so my memory is understandably fuzzy!)
One interesting thing of note in those early dictionary indexing experiments was that the maximum step size in the sampling grid of Euler space that would result in successful indexing was found to be 2.5°, quite similar to the maximum target misorientation for modern dictionary indexing. Of course, this crude sampling approach may have led to the lack of robustness in this early attempt at dictionary indexing. The paper proposed that the technique could be improved by weighting the zone axes by the sum of the structure factors of the bands intersecting at the zone axes. However, we never followed up on this idea as we abandoned the template matching approach and moved to the Burns algorithm coupled with the triplet voting scheme [5], which produced more reliable results. Using this approach, we were able to get our first set of fully automated scans. We presented the results at an MS&T symposium (Microscale Texture of Materials Symposium, Cincinnati, Ohio, October 1991) where Niels Krieger Lassen also presented his work on band detection using the Hough transform [6]. After the conference, we hurried back to the lab to try out Niels’ approach for the band detection part of the indexing process [7].
Modern dictionary indexing applies an adaptive histogram filter to the experimental patterns (at left in the figure below) and the dictionary patterns (at right) prior to computing the normalized dot product used to compare patterns. The filtered patterns are nearly binary, and seeing them triggered my memory of our early dictionary work, as they reminded me of the nearly binary “Sombrero” filtered patterns. Olé! We may not have come back full circle, but progress clearly goes in steps, and some steps bear an uncanny resemblance to previous ones. I doff my hat to the great work that has gone into the development of dynamic pattern simulation and its applications.
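As a rough sketch of that preprocessing step, scikit-image’s CLAHE routine can stand in for the adaptive histogram filter (the clip limit and the rescaling are illustrative choices, not the parameters used in the software):

```python
import numpy as np
from skimage import exposure

def preprocess(pattern, clip_limit=0.03):
    """Adaptive histogram equalization followed by unit-length scaling,
    producing the nearly binary look described above (illustrative only)."""
    p = pattern.astype(float)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)   # rescale to [0, 1] for CLAHE
    p = exposure.equalize_adapthist(p, clip_limit=clip_limit)
    return p.ravel() / np.linalg.norm(p.ravel())      # unit length for the dot product

# similarity of two patterns after filtering:
# score = float(np.dot(preprocess(experimental), preprocess(simulated)))
```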
[1] A. Winkelmann, C. Trager-Cowan, F. Sweeney, A. P. Day, P. Parbrook (2007) “Many-Beam Dynamical Simulation of Electron Backscatter Diffraction Patterns” Ultramicroscopy 107: 414-421.
[2] P. G. Callahan, M. De Graef (2013) “Dynamical Electron Backscatter Diffraction Patterns. Part I: Pattern Simulations” Microscopy and Microanalysis 19: 1255-1265.
[3] Y.H. Chen, S. U. Park, D. Wei, G. Newstadt, M.A. Jackson, J.P. Simmons, M. De Graef, A.O. Hero (2015) “A dictionary approach to electron backscatter diffraction indexing” Microscopy and Microanalysis 21: 739-752.
[4] S.I. Wright, B. L. Adams, J.-Z. Zhao (1991) “Automated determination of lattice orientation from electron backscattered Kikuchi diffraction patterns” Textures and Microstructures 13: 2-3.
[5] S.I. Wright, B. L. Adams (1992) “Automatic analysis of electron backscatter diffraction patterns” Metallurgical Transactions A 23: 759-767.
[6] N.C. Krieger Lassen, D. Juul Jensen, K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6: 115-121.
[7] K. Kunze, S. I. Wright, B. L. Adams, D. J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20: 41-54.
John Haritos, Regional Sales Manager Southwest USA, EDAX
I recently had the opportunity to host a demo for one of my customers at our Draper, Utah office. This was a long-time EDAX and EBSD user, who was interested in seeing our new Velocity CMOS camera, and to try it on some of their samples.
When I started in this industry back in the late 90s, the cameras were running at a “blazing” 20 points per second and we all thought that this was fast. At that time, collection speed wasn’t the primary issue. What EBSD brought to the table was automated orientation analysis of diffraction patterns. Now users could measure orientations and create beautiful orientation maps with the push of a button, which was a lot easier than manually interpreting these patterns.
Fast forward to 2019, and with CMOS technology being adapted from other industries to EBSD, we are now collecting at 4,500 points per second (pps). What took hours or even days to collect at 20 pps now takes a matter of minutes or seconds. Below is a nickel superalloy sample collected at 4,500 pps with our Velocity™ Super EBSD camera. This scan shows the grain and twinning structure and was collected in just a few minutes.
Figure 1: Nickel Superalloy
Of course, now that we have improved from 20 pps to 4,500 pps, it’s significantly easier to get a lot more data. So the question becomes, how do we analyze all this data? This is where OIM Analysis v8™ comes to the rescue for the analysis and post-processing of these large datasets. OIM Analysis v8™ was designed to take advantage of 64-bit computing and multi-threading, so the software can handle large datasets. Below is a grain size map and a grain size distribution chart from an aluminum friction stir weld sample with over 7 million points, collected with the Velocity™ and processed using OIM Analysis v8™. This example is interesting because the grains on the left side of the image are much larger than the grains on the right side. With the fast collection speeds, a small (250 nm) step size could still be used over this larger collection area. This allows for accurate characterization of grain size across the weld interface, and the bimodal grain size distribution is clearly resolved. With a slower camera, it may be impractical to analyze this area in a single scan.
Figure 2: Aluminum Friction Stir Weld
In the past, most customers would set up an overnight EBSD run. You could see the thoughts running through their minds: will my sample drift, will my filament pop, what will the data look like when I come back to work in the morning? Inevitably, the sample would drift, or the filament would pop, and this would mean the dreaded “ugh” in the morning. With the Velocity™ and its fast collection speeds, you no longer need to worry about this. You can collect maps in a few minutes and avoid this issue in practice. It’s a hard thing to say in a brochure, but it’s easy to appreciate when seeing it firsthand.
For me, watching my customer analyze many samples in a single day was impressive. These were not particularly easy samples. They were solar cell and battery materials, with a variety of phases and crystal structures. But under conditions similar to their traditional EBSD work, we could collect better quality data much faster. The future is now. Everyone is excited about what CMOS technology can offer in the way of productivity and throughput for their EBSD work.
When you have been working with EBSD for many years it is easy to forget how little you knew when you started. EBSD patterns appear like magic on your screen, indexing and orientation determination are automatic, and you can produce colourful images or maps with a click of a mouse.
Image 1: IPF on PRIAS™ center EBSD map of cold-pressed iron powder sample.
All the tools to get you there are hidden in the EBSD software package that you are working with and as a user you don’t need to know exactly how all of it happens. It just works. To me, although it is my daily work, it is still amazing how easy it sometimes is to get high quality data from almost any sample even if it only produces barely recognisable patterns.
Image 2: Successful indexing of extremely noisy patterns using automatic band detection.
That capability did not just appear overnight. There is a combination of a lot of hard work, clever ideas, and more than 25 years of experience behind it that we sometimes just forget to talk about, or perhaps even worse, expect everybody to know already. And so it is that I occasionally get asked a question at a meeting or an exhibition where I think, really? For example, some years ago I got a very good question about the EBSD calibration.
Image 3: EBSD calibration is based on the point in the pattern that is not distorted by the projection. This is the point where the electrons reach the screen perpendicularly (pattern center).
As you probably suspect EBSD calibration is not some kind of magic that ensures that you can index your patterns. It is a precise geometrical correction that distorts the displayed EBSD solution so that it fits the detected pattern. I always compare it with a video-projector. That is also a point projection onto a screen at a small angle, just like the EBSD detection geometry. And when you do that there is a distortion where the sides of the image on the screen are not parallel anymore but move away from each other. On video projectors there is a smart trick to fix that: a button labelled keystone correction which pulls the sides of the image nicely parallel again where they belong.
Image 4: Trapezoid distortion before (left) and after (right) correction.
Unfortunately, we cannot tell the electrons in the SEM to move over a little bit in order to make the EBSD pattern look correct. Instead we need to distort the indexing solution just so that it matches the EBSD pattern. And now the question I got asked was, do you actually adjust this calibration when moving the beam position on the sample during a scan? Because otherwise you cannot collect large EBSD maps. Apparently not everybody was doing that at that time, and it was being presented at a conference as the invention of the century that no EBSD system could do without. It was finally possible to collect EBSD data at low magnification! So, when do you think this feature will be available in your software? I stood quiet for a moment before answering, well, eh, we actually already have such a feature that we call the pattern centre shift. And it had been in the system since the first mapping experiments in the early 90’s. We just did not talk about it as it seemed so obvious.
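To see why such a shift is needed at all, a simplified geometry sketch helps (this is not the calibration model used in the software; it assumes a vertical detector screen, a 70° sample tilt, x along the tilt axis and y down the tilted surface, and it ignores sign conventions and detector tilt):

```python
import numpy as np

def pattern_centre_shift(dx_um, dy_um, sample_tilt_deg=70.0):
    """Approximate pattern-centre shift (in microns on the detector plane)
    for a beam displacement (dx_um, dy_um) on the sample surface."""
    alpha = np.radians(90.0 - sample_tilt_deg)  # angle between sample surface and screen
    d_pc_x = dx_um                              # motion along the tilt axis maps directly
    d_pc_y = dy_um * np.cos(alpha)              # component parallel to the screen
    d_det_dist = dy_um * np.sin(alpha)          # component toward/away from the screen
    return d_pc_x, d_pc_y, d_det_dist

# A 1 mm move down a 70-degree tilted surface:
# pattern_centre_shift(0, 1000) -> (0, ~940 um along the screen, ~342 um in distance)
```

At low magnification the beam moves by millimetres, so without this correction the assumed pattern centre would be wrong by a large fraction of the screen size.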
There are more things like that hidden in the software that are at least as important, such as smart routines to detect the bands even in extremely noisy patterns, EBSD pattern background processing, 64-bit multithreading for fast processing of large datasets, and efficient quaternion-based mathematical methods for post-processing. These tools are quietly working in the background to deliver the results that the user needs.
There are some other original ideas that date back to the 1990’s that we actually do regularly talk about, such as the hexagonal scanning grid, triplet voting indexing, and the confidence index, but there is also some confusion about these. Why do we do it that way?
The common way in imaging and imaging sensors (e.g. CCD or CMOS chips) is to organise pixels on a square grid. That is easy, and you can treat your data as being written in a regular table with fixed intervals. However, pixel-to-pixel distances are different horizontally and diagonally, which is a drawback when you are routinely calculating average values around points. In a hexagonal grid the point-to-point distance is constant between all neighbouring pixels. Perhaps even more importantly, a hexagonal grid fits ~15% more points into the same area than a square grid with the same step size, which makes it ideally suited to filling a surface.
Image 5: Scanning results for square (left) and hexagonal (right) grids using the same step size. The grain shape and small grains with few points are more clearly defined in the hexagonal scan.
This potentially allows improvements in imaging resolution and sometimes I feel a little surprised that a hexagonal imaging mode is not yet available on SEMs.
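The numbers quoted above are easy to verify for equal step sizes (a quick back-of-the-envelope check, with the step size s as the only input):

```python
import numpy as np

s = 1.0  # step size (arbitrary units)

# Square grid: 4 nearest neighbours at s, 4 diagonal neighbours at s*sqrt(2);
# one measurement point per s*s of area.
square_density = 1.0 / s**2

# Hexagonal grid: all 6 neighbours at distance s; rows are spaced s*sqrt(3)/2,
# so one measurement point per s * (s*sqrt(3)/2) of area.
hex_density = 2.0 / (np.sqrt(3) * s**2)

print(f"diagonal spacing on the square grid: {s * np.sqrt(2):.3f}")
print(f"extra points per unit area on the hexagonal grid: "
      f"{100 * (hex_density / square_density - 1):.1f}%")   # ~15.5%
```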
The triplet voting indexing method also has some hidden benefits. What we do there is that a crystal orientation is calculated for each group of three bands that is detected in an EBSD pattern. For example, when you set the software to find 8 bands, you can define up to 56 different band triangles, each with a unique orientation solution.
Image 6: Indexing example based on a single set of three bands – triplet.
Image 7: Equation indicating the maximum number of triplets for a given number of bands.
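Written out, the counting behind Image 7 is just the number of ways of choosing three bands out of the n detected ones:

\[
N_{\text{triplets}} = \binom{n}{3} = \frac{n!}{3!\,(n-3)!},
\qquad
\binom{8}{3} = 56 .
\]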
This means that when a pattern is indexed, we don’t just find a single orientation, we find 56 very similar orientations that can all be averaged to produce the final indexing solution. This averaging effectively removes small errors in the band detection and allows excellent orientation precision, even in very noisy EBSD patterns. The large number of individual solutions for each pattern has another advantage. It does not hurt too much if some of the bands are wrongly detected from pattern noise or when a pattern is collected directly at a grain boundary and contains bands from two different grains. In most cases the bands coming from one of the grains will dominate the solutions and produce a valid orientation measurement.
The next original parameter from the 1990’s is the confidence index, which follows from the triplet voting indexing method. Why is this parameter such a big deal that it is even patented?
When an EBSD pattern is indexed, several parameters are recorded in the EBSD scan file: the orientation, the image quality (which is a measure of the contrast of the bands), and a fit angle. This angle indicates the angular difference between the bands that have been detected by the software and the calculated orientation solution. The fit angle can be seen as an error bar for the indexing solution. If the angle is small, the calculated orientation fits very closely with the detected bands and the solution can be considered to be good. However, there is a caveat. What if there are different orientation solutions that would produce virtually identical patterns? This may happen within a single phase, where it is called pseudosymmetry. The patterns are then so similar that the system cannot detect the difference. Alternatively, you can also have multiple phases in your sample that produce very similar patterns. In such cases we would typically use EDS information and ChI-Scan to discriminate the phases.
Image 8: Definition of the confidence index parameter. V1 = number of votes for the best solution, V2 = number of votes for the 2nd best solution, VMAX = maximum possible number of votes.
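Written out with the quantities defined in Image 8, the confidence index is

\[
\mathrm{CI} = \frac{V_1 - V_2}{V_{\mathrm{MAX}}},
\]

so two equally supported solutions (V1 = V2) give CI = 0 even when both fit the detected bands well.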
Image 9: EBSD pattern of silver indexed with the silver structure (left) and the copper structure (right). The fit is 0.24°; the only difference is a minor variation in the band width matching.
In both these examples the fit value would be excellent for the selected solution. And in both cases the solution has a high probability of being wrong. And that is where the confidence index or CI value becomes important. The CI value is based on the number of band triangles or triplets that match each possible solution. If there are two indistinguishable solutions, these will both have the same number of matching triangles and the CI will be 0. This means that there are two or more apparently valid solutions that may all have a good fit angle. The system just does not know which of these solutions is the correct one, and thus the measurement is rejected. If there is a difference of only 10% in matched triangles between alternative orientation solutions, in most cases the software is capable of identifying the correct solution. The fit angle on its own cannot identify this problem.
After 25 years these tools and parameters are still indispensable and at the basis of every EBSD dataset that is collected with an EDAX system. You don’t have to talk about them. They are there for you.
I was recently asked to write a “Tips & Tricks” article for the EDAX Insight Newsletter as I had recently done an EDAX Webinar (www.edax.com/news-events/webinars) on Texture Analysis. I decided to follow up on one item I had emphasized in the Webinar, namely the need to sample enough orientations for statistical reliability when characterizing a texture. The important thing to remember is that it is the number of grain orientations, as opposed to the number of orientations measured, that matters. But that led to the introduction of the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room to go into the kind of detail I would have liked, so I’ve decided to use our Blog forum to cover some details about sub-sampling that I found interesting.
Consider the case where you want to characterize not only the texture of a material but also the grain size or some other microstructural characteristic requiring a relatively fine step size relative to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1] and about 500 pixels per average grain in order to capture the grain size well [2]. This would result in a scan with approximately 5 million datapoints. Instead of calculating the texture using all 5 million data points, you can use a sub-set of the points to speed up the calculation. In our latest release of OIM Analysis, this is not as big of a concern as it once was, as the texture calculations have been multithreaded, so they are fast even for very large datasets. Nonetheless, since it is very likely that you will want to calculate the grain size, you can use the area-weighted average orientation of each grain, as opposed to using all 5 million individual orientation measurements, for a quick texture calculation. Alternatively, a sub-set of the points obtained through random or uniform sampling of the points in the scan area could be used.
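A minimal sketch of such sub-sampling (an illustrative helper, not the routines built into OIM Analysis; the orientation array layout is a placeholder) could look like this:

```python
import numpy as np

def subsample_orientations(orientations, fraction=0.05, method="random", seed=0):
    """Pick a subset of measured orientations for a quick texture calculation.

    orientations : (N, 3) array of Euler angles (or any per-point record)
    method       : "random" draws points anywhere in the scan,
                   "uniform" takes every k-th point in scan order.
    """
    n = len(orientations)
    k = max(1, int(round(n * fraction)))
    if method == "random":
        rng = np.random.default_rng(seed)
        idx = rng.choice(n, size=k, replace=False)
    else:  # uniform
        idx = np.arange(0, n, max(1, n // k))[:k]
    return orientations[idx]
```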
Of course, you may wonder how well the sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture, as can be seen in the normal direction IPF map and accompanying (110) pole figure. For these measurements I have simply computed a normalized squared difference point-by-point through the Orientation Distribution Function (ODF), which we call the Texture Difference Index (TDI) in the software.
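In formula form, the comparison amounts to integrating the squared difference between the two orientation distribution functions over orientation space (the exact normalization used for the TDI in the software is not reproduced here):

\[
\mathrm{TDI}(f_1, f_2) \;\propto\; \int_{SO(3)} \bigl[ f_1(g) - f_2(g) \bigr]^2 \, dg ,
\]

so identical textures give a TDI of 0 and increasingly different textures give larger values.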
This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, with 92.5% of those having a CI > 0.2 after CI standardization. I have elected to use only points with CI > 0.2. The results are shown in the following figure.
As the step size is relatively coarse with respect to the grain size, I have experimented with requiring at least two pixels before considering a set of similarly oriented points a grain, versus allowing a single pixel to be a grain. This resulted in 9,981 grains and 25,437 grains respectively. In both cases, the differences in the textures between these two grain-based sub-sampling approaches and the full dataset are small, with the one-pixel grain-based sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and the uniform sampling grids differ so much, particularly as the number of points in the sub-sampling gets large (i.e. at 25% of the dataset)?
TDI
The pole figure for the 1000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess is that if I were to calculate the textures at a lower rank, something like L = 8, then the TDIs would go down. This is already part of the TDI calculation and so it is an easy thing to examine. For comparison I have chosen to look at four different datasets: (a) all of the data in the dataset above (named “fine”), (b) a dataset from the same material with a coarser step size (“coarse”) containing approximately 150,000 data points, (c) a sub-sampling of the original dataset using 1000 randomly sampled datapoints (“fine-1000”), and (d) the “coarse” dataset rotated 90 degrees about the vertical axis in the pole figures (“coarse-rotated”). It is interesting to note that the textures that are similar “by-eye” show a general increase in the TDI as the series expansion rank increases. However, for very dissimilar textures (i.e. “coarse” vs “coarse-rotated”) the jump to a large TDI is immediate.
Random vs Uniform Sampling
The differences between the random and uniform sampling were a bit curious, so I decided to check the random points to see how they were positioned in the x-y space of the scan. The figure below compares the uniform and random sampling for 4000 datapoints – any more than this is hard to show. Clearly the random sampling is reasonable, but it does show a bit of clustering and some gaps within the scan area. These small differences show up as larger differences in TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences – at least subtle with respect to my personal “by-eye” judgement. It seems to me that my “by-eye” judgement is biased toward lower-rank series expansions.
Of course, another conclusion would be that my eyesight is getting rank with age ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.
References
[1] SI Wright, MM Nowell & JF Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction”. Metallurgical and Materials Transactions A, 38, 1845-1855
[2] SI Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements”. Practical Metallography, 47, 16-33.
Dr. Felix Reinauer, Applications Specialist Europe, EDAX
A few days ago, I visited the Schlossgrabenfest in Darmstadt, the biggest downtown music festival in Hessen and even one of the biggest in Germany. Over one hundred bands and 12 DJs played all kinds of different music, like pop, rock, independent, or house, on six stages. This year the weather was perfect on all four days, and a lot of people celebrated together with well-known and unknown artists alike. A really remarkable fact is the free entrance. The only official fee is for the annual plastic cup, which must be purchased once and is then used for any beverage you can buy in the festival area.
During the festival my friend and I listened to the music and enjoyed the good food and drinks sold at different booths in the festival grounds. In this laid-back atmosphere we started discussing the taste of the different kinds of beer available at the festival and throughout Germany. Beer from one brewery always tastes the same but you can really tell the difference if you try beer from different breweries. In Germany, there are about 1500 breweries offering more than 5000 different types of beer. This means it would take 13.5 years if you intended to taste a different beer every single day. Generally, breweries and markets must guarantee that the taste of a beer is consistent and that it stays fresh for a certain time.
In the Middle Ages a lot of people brewed their own beer and got sick due to bad ingredients. In 1516 the history of German beer started with the “Reinheitsgebot”, a regulation about the purity of beer. It says that only three ingredients, malt, water, and hops, may be used to make beer. This regulation must still be applied in German breweries. At first this sounds very unspectacular and boring, but over the years the process was refined to a great extent. Depending on the degree of barley roasting, the quantity of hops, and the brewing temperature, a great variety of tastes can be achieved. In the early days the beer had to be drunk immediately or cooled in cold cellars with ice. To take beer with you, special containers were invented to keep it drinkable for a few hours. Today beer is usually sold in recyclable glass bottles with a very tight cap, keeping it fresh for months without cooling. This cap protects the beer from oxidizing or turning sour.
Coming back to our visit to the Schlossgrabenfest; in the course of our discussions about the taste of different kinds of beer, we wondered how the breweries guarantee that the taste of the beer will not be influenced by storage and transport. The main problem is sealing the bottles gas-tight. We wondered what material the caps on the bottles are made of, whether the caps are as different as the breweries, and whether they are maybe even specific to a certain brewery.
I bought five bottles of beer from breweries located in the north, south, west, and east of Germany, and one close to the EDAX office in Darmstadt. After opening the bottles, a cross section of each cap was investigated by EDS and EBSD. To do so, the caps were cut in the middle, embedded in a conductive resin, and polished (thanks to René). The area of interest was the rounded region coming off the flat top surface. The EDS maps were collected so that the outer side of the cap was always on the left side and the inner side on the right side of the image. The EBSD scans were made on the inner Fe metal sheet.
Let’s get back to our discussion about the differences between the caps from different breweries. The EDS spectra show that all of them are made from Fe, with traces of Mn < 0.5 wt% and with Cr and Ni at the detection limit. The first obvious difference is the number of pores. The cap from the east only contains a few, the cap from the north contains the most, and the cap from the middle contains big ones, which are also located on the surface of the metal sheet. The EBSD maps were collected from the centers of the caps and were indexed as ferrite. The grains of the cap from the middle are a little bit smaller, with a larger size distribution (10 to 100 microns), than those of the others, which are all about 100 microns. A remarkable misorientation is visible in some of the grains in the cap from the north.
Now let’s have a look at the differences between the inside and outside of the caps. EDS element maps show carbon- and oxygen-containing layers on both sides of all the caps, probably from polymer coatings. Underneath, the cap from the east is coated with thin layers of Cr of different thicknesses on each side. On the inside a silicone-based sealing compound and on the outside a varnish containing Ti can also be detected. The cap from the south has protective coatings of Sn on both sides, and a silicone sealing layer can also be found on the inside. The composition of the cap from the west is similar to that of the cap from the east, but with the Cr layer only on the outside. The large pores in the cap from the middle are an interesting difference. Within the Fe metal sheet these pores are empty, but on both sides they are filled with silicon oxide. It seems that this silicon oxide filling is related to the production process, because the pores are covered by the Sn-containing protective layers. The cap from the north only contains a Cr layer on the inside. The varnish contains Ti and S.
In summary, we didn’t expect the caps to show such significant differences. The differences on the outside are probably due to the different varnishes used for the individual labels of each brewery. However, we didn’t think that the composition and microstructure of the caps themselves would differ significantly from each other. This study is far from complete and cannot be used as a basis for reliable conclusions. However, we had a lot of fun before and during this investigation and are now sure that the glass bottles can be sealed well enough to keep beer fresh and guarantee a great variety of tastes.
If you have attended an EDAX EBSD training course, you have seen the following slide in the Pattern Indexing lecture. This slide attempts to explain how to collect a background pattern before performing an OIM scan. The slide recommends that the background come from an area containing at least 25 grains.
Those of you who have performed re-indexing of a scan with saved patterns in OIM Analysis 8.1 may have noticed that there is a background pattern for the scan data (as well as one for each of the partitions). This can be useful when re-indexing a scan where the raw patterns were saved, as opposed to background-corrected patterns. This background pattern is formed by averaging 500 patterns randomly selected from the saved patterns. 500 is a lot more than the minimum of 25 recommended in the slide from the training lecture.
Recently, I was thinking about these two numbers – is 25 really enough, is 500 overkill? With some of the new tools available for simulating EBSD patterns (Callahan, P.G. and De Graef, M., 2013. Dynamical electron backscatter diffraction patterns. Part I: Pattern simulations. Microscopy and Microanalysis, 19(5), pp. 1255-1265), I realized this might provide a controlled way to refine the number of orientations that need to be sampled for a good background. To this end, I created a set of simulated patterns for nickel, randomly sampled from orientation space. The set contained 6,656 patterns. If you average all these patterns together, you get the pattern at left in the following row of three patterns. The average patterns for 500 and for 25 random patterns are also shown. The average pattern for 25 random orientations is not as smooth as I would have assumed, but the one with 500 looks quite good.
I decided to take it a bit further and, using the average pattern for all 6,656 patterns as a reference, I compared the difference (simple intensity differences) between average patterns from n orientations and the reference. This gave me the following curve. From this curve, my intuitive estimate that 25 grains is enough for a good background appears to be a bit optimistic, but 500 looks good. There are a few caveats to this: the examples I am showing here are at 480 x 480 pixels, which is much more than would be used for typical EBSD scans. In addition, the simulated patterns I used are sharper and have better signal-to-noise ratios than we are able to achieve in experimental patterns at typical exposure times. These effects are likely to lead to more smoothing.
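The numerical experiment itself is easy to reproduce in outline (a sketch assuming the patterns are available as a NumPy stack; the difference metric here is a plain mean absolute intensity difference, standing in for the “simple intensity differences” mentioned above):

```python
import numpy as np

def background_convergence(patterns, n_values, seed=0):
    """Compare the average of n randomly chosen patterns against the average
    of the full stack (the reference background)."""
    rng = np.random.default_rng(seed)
    reference = patterns.mean(axis=0)
    differences = {}
    for n in n_values:
        idx = rng.choice(len(patterns), size=n, replace=False)
        background = patterns[idx].mean(axis=0)
        differences[n] = float(np.mean(np.abs(background - reference)))
    return differences

# e.g. background_convergence(pattern_stack, [25, 100, 500, 2000])
```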
I recently saw Shawn Bradley, who is one of the tallest players to have played in the NBA; he is 7’6” (229 cm) tall. I recognized him because he was surrounded by a crowd of kids – you can imagine that he really stood out! This reminded me that these results assume a uniform grain size. If you have 499 tiny grains encircling one giant grain, then the background from these 500 grains will not work as a background, as it would be dominated by the Shawn Bradley grain!
A few weeks ago, I participated in a joint SEM and in-situ analysis workshop in Fuveau, France, with Tescan electron microscopes and Newtec (supplier of the heating-tensile stage). One of the activities during this workshop was to perform a live in-situ tensile experiment with simultaneous EBSD data collection to illustrate the capabilities of all the systems involved. In-situ measurements are a great way to track material changes during the course of an experiment, but of course in order to show what happens during such a deformation experiment you need a suitable sample. For the workshop we decided to use a “simple” 304L austenitic stainless-steel material (figure 1) that would nicely show the effects of the stretching.
Figure 1. Laser cut 304L stainless steel tensile test specimen provided by Newtec.
I received several samples a few weeks before the meeting in order to verify the surface quality for the EBSD measurements. And that is where the trouble started …
I was hoping to get a recrystallized microstructure with large grains and clear twin lamellae such that any deformation structures that would develop would be clearly visible. What I got was a sample that appeared heavily deformed even after careful polishing (figure 2).
Figure 2. BSE image after initial mechanical polishing.
This was worrying as the existing deformation structures could obscure the results from the in-situ stretching. Also, I was not entirely sure that this structure was really showing the true microstructure of the austenitic sample as it showed a clear vertical alignment that extended over grain boundaries.
And this is where I contacted long-time EDAX EBSD user Katja Angenendt at the MPIE in Düsseldorf for advice. Katja works in the Department of Microstructure Physics and Alloy Design and has extensive experience in preparing many different metals and alloys for EBSD analysis. From the images that I sent, Katja agreed that the visible structure was most likely introduced by the grinding and polishing that I did and she made some suggestions to remove this damaged layer. Armed with that knowledge and new hope I started fresh and polished the samples once more. And I had some success! Now there were grains visible without internal deformation and some nice clean twin lamellae (figure 3). But not everywhere. I still had lots of areas with a deformed structure and whatever I tried I could not get rid of those.
Figure 3. BSE image after optimized mechanical polishing.
Back to Katja. When I discussed my remaining polishing problems she helpfully proposed to give it a try herself using a combination of mechanical polishing and chemical etching. But even after several polishing attempts starting from scratch and deliberately introducing scratches to verify that enough material was removed we could not completely get rid of the deformed areas. Now we slowly started to accept that this deformation was perhaps a true part of the microstructure. But how could that be if this is supposed to be a recrystallised austenitic 304L stainless steel?
Table 1. 304/304L stainless steel composition.
Let’s take a look at the composition. In table 1 a typical composition of 304 stainless steel is given. The spectrum below (figure 4) shows the composition of my samples.
Figure 4. EDS spectrum with quantification results collected with an Octane Elite Plus detector.
All elements are in the expected range except for Ni, which is a bit low, and that could put the composition right at the edge of the austenite stability field. So perhaps the deformed areas are not austenite, but ferrite or martensite? This is quickly verified with an EBSD map, and indeed the phase map below confirms the presence of a bcc phase (figure 5).
Figure 5. EBSD map results of the sample before the tensile test, IQ, IPF, and phase maps.
Having this composition right at the edge of the austenite stability field actually added some interesting additional information to the tensile tests during the workshop. Because if the internal deformation in the austenite grains got high enough, we might just trigger a phase transformation to ferrite (or martensite) with ongoing deformation.
Figure 6. Phase maps (upper row) and Grain Reference Orientation Deviation (GROD) maps (lower row) for a sequence of maps collected during the tensile test.
And that is exactly what we have observed (figure 6). At the start of the experiments the ferrite fraction in the analysis field is 7.8% and with increasing deformation the ferrite fraction goes up to 11.9% at 14% strain.
So, after a tough start the 304L stainless steel samples made the measurements collected during the workshop even more interesting by adding a phase transformation to the deformation. If you are regularly working with these alloys this is probably not unexpected behavior. But if you are working with many different materials you have to be aware that different types of specimen treatment, either during preparation or during experimentation, may have a large influence on your characterization results. Always be careful that you do not only see what you believe, but ensure that you can believe what you see.
Finally I want to thank the people of Tescan and Newtec for their assistance in the data collection during the workshop in Fuveau and especially a big thank you to Katja Angenendt at the Max Planck Institute for Iron Research in Düsseldorf for helpful discussions and help in preparing the sample.
November seems to be the month when the industry tries to squeeze in as many events as possible before the winter arrives. I have had the opportunity to attend a few events and missed others; however, I want to share with you how much I enjoyed ICOTOM18*!
ICOTOM (International Conference on Textures of Materials) is held every three years, and this year it took place in St. George, Utah, the gateway to Zion National Park.
This was the first time I had ever attended ICOTOM, which is, for the most part, a highly technical conference dealing with the material properties that can be detected and analyzed by Electron Backscatter Diffraction (EBSD) and other diffraction techniques. What stood out to me this year were the depth and quality of the technical presentations made at this conference, especially from industry contributors. The presentations were up to date, data driven, and as scientifically sound as any I have ever seen in the past 25 years of attending more than my share of technical conferences.
Industrial adoption of this kind of technology is not new, since X-ray diffraction has been utilized for over half a century to evaluate the texture of crystalline materials. At ICOTOM I was most impressed by the current ‘out of the laboratory’ role of microanalysis, and especially EBSD, in the evaluation of anisotropic materials for quality enhancement.
The embracing of microanalysis as a tool for product enhancement means that we equipment producers need to develop new and improved systems and software for EBSD applications that will address these industrial requirements. It is essential that all technology providers recognize the evolving market requirements as they develop, so that they can stay relevant and supply current needs. If they can’t do this, then manufacturing entities will find their own solutions!
*In the interests of full disclosure, I should say that EDAX was a sponsor of ICOTOM18 and that my colleagues were part of the organizing committee.
Interacting with Rudy Wenk of the University of California, Berkeley to get his take on the word “texture” as it pertains to preferred orientation reminded me of some other terminology associated with orientation maps that Rudy helped me with several years ago.
Map reconstructed from EBSD data showing the crystal orientation parallel to the sample surface normal.
Joe Michael of Sandia National Lab has mentioned to me a couple of times his objection to the term “IPF map”. As you may know, the term is commonly used to describe a color map reconstructed from OIM data where the color denotes the crystallographic axis aligned with the sample normal, as shown below. Joe points out that the term “orientation map” or “crystal direction map” or something similar would be much more appropriate, and he is absolutely right.
The reason behind the name “IPF map” is that I hijacked some of my code for drawing inverse pole figures (IPFs) as a basis to start writing the code to create the color-coded maps. Thus, we started using the term internally (it was TSL at the time – prior to EDAX purchasing TSL), and then it leaked out publicly and the name stuck – my apologies to Joe. We later added the ability to color the microstructure based on the crystal direction aligned with any specified sample direction, as shown below.
Orientation maps showing the crystal directions aligned with the normal, rolling and transverse directions at the surface of a rolled aluminum sheet.
The idea for this map germinated from a paper I saw presented by David Dingley, where a continuous color coding scheme was devised by assigning red, green, and blue to the three axes of Rodrigues-Frank space: D. J. Dingley, A. Day, and A. Bewick (1991) “Application of Microtexture Determination using EBSD to Non Cubic Crystals”, Textures and Microstructures, 14-18, 91-96. In this case, the microstructure had been digitized and a single orientation measured for each grain using EBSD. Unfortunately, I only have gray scale images of these results.
SEM micrograph of nickel, grain orientations in Rodrigues-Frank space and orientation map based on color Rodrigues vector coloring scheme. Source: Link labeled “Full-Text PDF” at www.hindawi.com/archive/1991/631843/abs/
IPF map of recrystallized grains in grain oriented silicon steel from Y. Inokuti, C. Maeda and Y. Ito (1987) “Computer color mapping of configuration of goss grains after an intermediate annealing in grain oriented silicon steel.” Transactions of the Iron and Steel Institute of Japan 27, 139-144. Source: Link labeled “Full Text PDF button’ at www.jstage.jst.go.jp/article/isijinternational1966/27/4/27_4_302/_article
We didn’t realize it at the time, but an approach based on the crystallographic direction had already been used in Japan. In this work, the stereographic unit triangle (i.e. an inverse pole figure) was used in a continuous color coding scheme where red is assigned to the <110> direction, blue to <111>, and yellow to <100>, and points lying between these three corners of the stereographic triangle are combinations of these three colors. This color coding was used to shade grains in digitized maps of the microstructure according to their orientation. Y. Inokuti, C. Maeda and Y. Ito (1986) “Observation of Generation of Secondary Nuclei in a Grain Oriented Silicon Steel Sheet Illustrated by Computer Color Mapping”, Journal of the Japan Institute of Metals, 50, 874-8. The images published in this paper received awards in 1986 from the Japan Institute of Metals and TMS.
AVA map and pole figure from a quartz sample from “Gries am Brenner” in the Austrian alps south of Innsbruck. The pole figure is for the c-axis. (B. Sander (1950) Einführung in die Gefügekunde der Geologischen Körper: Zweiter Teil Die Korngefüge. Springer-Vienna) Source: In the last chapter (Back Matter) in the Table of Contents there is a link labeled “>> Download PDF” at link.springer.com/book/10.1007%2F978-3-7091-7759-4
I thought these were the first colored orientation maps constructed, until Rudy later corrected me (not the first, nor certainly the last time). He sent me some examples of mappings of orientation onto a microstructure by “hatching” or coloring a pole figure and then using those patterns or colors to shade the microstructure as traced from micrographs: H.-R. Wenk (1965) “Gefügestudie an Quarzknauern und -lagen der Tessiner Kulmination”, Schweiz. Mineralogische und Petrographische Mitteilungen, 45, 467-515, and even earlier in B. Sander (1950) Einführung in die Gefügekunde, Springer Verlag, 402-409. Sander called this type of mapping and analysis AVA (Achsenverteilungsanalyse in German, or Axis Distribution Analysis in English).
Such maps were forerunners of the “IPF maps” of today (you could actually call them “PF maps”) with which we are so familiar. It turns out our wanderin’s in A Search for Structure (Cyril Stanley Smith, 1991, MIT Press) have actually not been “aimless” at all but have helped us gain real insight into that etymologically challenged world of microstructure.
Call me old-fashioned, but when I want to relax I always try to go outdoors, away from computers and electronic gadgets. So when I go on vacation with my family, we look for quiet places where we can go hiking and, if possible, we visit places with interesting rocks that contain fossils. Last summer I spent my vacation with my family in the Hunsrück in Germany. The hills close to where we stayed consist of shales. These are strongly laminated rocks that have been formed by heating and compaction of fine-grained sediments, mostly clay, that were deposited under water in a marine environment. These rocks are perfect for the occurrence of fossils. When an organism dies and falls on such a bed of clay and is covered by a successive stack of mud layers, it can be beautifully preserved. The small grain size and airtight seal of the mud can give such good preservation that the shape of the plant or animal can be found millions of years later as a highly detailed fossil. Perhaps the most famous occurrence of such fossil-bearing shale is the Burgess Shale in British Columbia, Canada, which is renowned for the preservation of soft tissue of long-extinct creatures. The Hunsrück region in Germany may not be that spectacular, but it is a lot closer to home for me, and here also beautiful fossils have been found.
Figure 1. Crinoid or sea lily fossil found in the waste heap of the Marienstollen in Weiden, Germany.
So, when we went hiking during our stay, we just had to pack a hammer in our backpack to see if we would be lucky enough to find something spectacular of our own. What we found were fragments of a sea lily or crinoid embedded in the rock (Figures 1, 3), and, as is typical for fossils from the area, much of the fossilised remains had been replaced by shiny sulphide crystals (Figure 2). Locally it is said that the sulphides are pyrite, FeS2. So of course, once back home, I could not resist putting a small fragment of our find in the SEM to confirm the mineral using EDS and EBSD. The cross section that had broken off the fossil showed smooth fracture surfaces, which looked promising for analysis (Figure 4). EDS was easy and quickly showed that the sulphide grains were not iron sulphide, but instead copper-bearing chalcopyrite. Getting EBSD results was a bit trickier because, although EBSD bands were often visible, shadows cast by the irregular surface confused the band detection (Figure 5).
Figure 4. Cross section of shale with smooth sulphide grains along the fracture surface.
Figure 5. EBSD patterns collected from the fracture surface. Indexing was done after manual band selection. Surface irregularities are emphasized by the projected shadows.
Now the trick is getting these patterns indexed, and here I do like computers doing the work for me. Of course, you can manually indicate the bands and get the orientations of individual patterns, but that will not be very helpful for a map. The problem with a fracture surface is that the substrate has a variable tilt with respect to the EBSD detector. Parts of the sample might be blocking the path to the EBSD detector, which complicates the EBSD background processing.
The EDAX EBSD software has many functions to help you out of such tight spots when analyzing challenging samples. For example, in addition to the standard background subtraction that is applied to routine EBSD mapping there is a library of background processing routines available. These routines can be helpful if your specimen is not a “typical” flat, well-polished EBSD sample. This library allows you to create your own recipe of image processing routines to optimize the band detection on patterns with deviating intensity gradients or incomplete patterns due to shadowing.
The standard background processing uses an averaged EBSD pattern from more than ~25 grains, such that the individual bands are blended out. This produces a fixed intensity gradient that we use to remove the background from all the patterns in the analysis area. When the actual intensity gradient shifts due to surface irregularities, it is not enough to just use such a fixed average background. In that case you will need to add a dynamic background calculation method to smooth out the resulting intensity variations.
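A minimal two-step recipe along those lines (illustrative only; the routines and parameters in the background processing library will differ) could be:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_pattern(raw, static_background, dynamic_sigma=20):
    """Background correction sketch:
    1. divide by a static background averaged over many grains,
    2. divide by a heavily blurred copy of the pattern itself (a dynamic
       background) to flatten intensity shifts caused by local surface tilt."""
    p = raw.astype(float) / np.maximum(static_background.astype(float), 1e-6)
    dynamic = gaussian_filter(p, sigma=dynamic_sigma)
    p = p / np.maximum(dynamic, 1e-6)
    return (p - p.min()) / (p.max() - p.min() + 1e-12)  # rescale to 0..1
```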
This is illustrated in the EBSD mapping of the fossil in Figure 6. The first EBSD mapping of the fossil using standard background subtraction only showed those parts of the grains that happened to be close to the optimal orientation for normal EBSD. When the surface was pointing in another direction, the pattern intensity had shifted too much for successful indexing. Reindexing the map with optimised background processing tripled the indexable area on the fracture surface.
Figure 6. Analysis of the fracture surface in the fossil. -1- PRIAS center image showing the smooth sulphide grains, -2- Superimposed EDS maps of O(green), Al(blue), S(magenta), and Fe(orange) -3- EBSD IPF on IQ maps with standard background processing, -4- original IPF map, -5- EBSD IPF on IQ maps with optimized background processing, -6- IPF map with optimized background.
In addition to the pattern enhancements, the band detection itself can also be tuned to look at specific areas of the patterns. Surface shadowing mainly obscures the bottom part of the pattern, so when you shift the focus of the band detection to the upper half of the pattern, you can maximize the number of detected bands and minimize the disturbing effects of the edges of the shadowed area. It is unavoidable to pick up a false band or two when you have a shadow, but when there are still 7-9 correct bands detected as well, indexing is not a problem.
Figure 7. Band detection on shadowed EBSD pattern. Band detection in the Hough transform is focused at the upper half of the pattern to allow detection of sufficient number of bands for correct indexing.
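In outline, restricting the band search to the upper half of the pattern can look like the sketch below (a crude stand-in using scikit-image’s straight-line Hough transform and a simple intensity threshold, not the actual band detection used by the software):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def detect_bands_upper_half(pattern, num_bands=8):
    """Find straight-band candidates using only the upper half of the pattern,
    so that shadowed lower regions do not contribute false Hough peaks."""
    upper = pattern[: pattern.shape[0] // 2, :]
    band_mask = upper > np.percentile(upper, 90)   # crude bright-pixel mask
    accumulator, angles, distances = hough_line(band_mask)
    peaks = hough_line_peaks(accumulator, angles, distances, num_peaks=num_bands)
    return list(zip(*peaks))   # (accumulator value, angle, distance) per band
```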
In the images below are a few suggestions of background processing recipes that can be useful for a variety of applications.
Of course, you can also create your own recipe of image processing options such that perhaps you will be able to extract some previously unrecognized details from your materials.