
From Collecting EBSD at 20 Patterns per second (pps) to Collecting at 4,500 pps

John Haritos, Regional Sales Manager Southwest USA, EDAX

I recently had the opportunity to host a demo for one of my customers at our Draper, Utah office. This was a long-time EDAX and EBSD user, who was interested in seeing our new Velocity CMOS camera and trying it on some of their samples.

When I started in this industry back in the late 90s, the cameras were running at a “blazing” 20 patterns per second and we all thought that this was fast. At that time, collection speed wasn’t the primary issue. What EBSD brought to the table was automated orientation analysis of diffraction patterns. Now users could measure orientations and create beautiful orientation maps with the push of a button, which was a lot easier than manually interpreting these patterns.

Fast forward to 2019, and with CMOS technology adapted from other industries to EBSD, we are now collecting at 4,500 pps. What took hours or even days to collect at 20 pps now takes a matter of minutes or seconds. Below is a nickel superalloy sample collected at 4,500 pps with our Velocity™ Super EBSD camera. This scan shows the grain and twinning structure and was collected in just a few minutes.

Figure 1: Nickel Superalloy

Of course, now that we have improved from 20 pps to 4,500 pps, it’s significantly easier to collect a lot more data. So the question becomes: how do we analyze all this data? This is where OIM Analysis v8™ comes to the rescue for the analysis and post-processing of these large datasets. OIM Analysis v8™ was designed to take advantage of 64-bit computing and multi-threading, so the software can handle large datasets. Below is a grain size map and a grain size distribution chart from an aluminum friction stir weld sample with over 7 million points, collected with the Velocity™ and processed using OIM Analysis v8™. This example is interesting because the grains on the left side of the image are much larger than the grains on the right side. With the fast collection speeds, a small step size (250 nm) could still be used over this larger collection area. This allows accurate characterization of grain size across the weld interface, and the bimodal grain size distribution is clearly resolved. With a slower camera, it may be impractical to analyze this area in a single scan.

Figure 2: Aluminum Friction Stir Weld

In the past, most customers would set up an overnight EBSD run. You could see the thoughts running through their minds: will my sample drift, will my filament pop, what will the data look like when I come back to work in the morning? Inevitably, the sample would drift or the filament would pop, and this would mean the dreaded “ugh” in the morning. With the Velocity™ and its fast collection speeds, you no longer need to worry about this. You can collect maps in a few minutes and avoid this issue in practice. It’s a hard thing to say in a brochure, but it’s easy to appreciate when seeing it firsthand.

For me, watching my customer see the analysis of many samples in a single day was impressive. These were not particularly easy samples: solar cell and battery materials, with a variety of phases and crystal structures. But under conditions similar to their traditional EBSD work, we could collect better quality data much faster. The future is now. Everyone is excited about what CMOS technology can offer in the way of productivity and throughput for their EBSD work.

Back to Basics

Dr. René de Kloe, Applications Specialist, EDAX

When you have been working with EBSD for many years it is easy to forget how little you knew when you started. EBSD patterns appear like magic on your screen, indexing and orientation determination are automatic, and you can produce colourful images or maps with a click of a mouse.

Image 1: IPF on PRIAS™ center EBSD map of cold-pressed iron powder sample.

All the tools to get you there are hidden in the EBSD software package that you are working with and as a user you don’t need to know exactly how all of it happens. It just works. To me, although it is my daily work, it is still amazing how easy it sometimes is to get high quality data from almost any sample even if it only produces barely recognisable patterns.

Image 2: Successful indexing of extremely noisy patterns using automatic band detection.

That capability did not just appear overnight. There is a combination of a lot of hard work, clever ideas, and more than 25 years of experience behind it that we sometimes just forget to talk about, or perhaps even worse, expect everybody to know already. And so it is that I occasionally get asked a question at a meeting or an exhibition where I think, really? For example, some years ago I got a very good question about the EBSD calibration.

Image 3: EBSD calibration is based on the point in the pattern that is not distorted by the projection. This is the point where the electrons reach the screen perpendicularly (pattern center).

As you probably suspect, EBSD calibration is not some kind of magic that ensures that you can index your patterns. It is a precise geometrical correction that distorts the displayed EBSD solution so that it fits the detected pattern. I always compare it with a video projector. That is also a point projection onto a screen at a small angle, just like the EBSD detection geometry. When you do that, there is a distortion: the sides of the image on the screen are no longer parallel but move away from each other. Video projectors have a smart trick to fix that: a button labelled keystone correction, which pulls the sides of the image nicely parallel again where they belong.

Image 4: Trapezoid distortion before (left) and after (right) correction.

Unfortunately, we cannot tell the electrons in the SEM to move over a little bit in order to make the EBSD pattern look correct. Instead we need to distort the indexing solution just so that it matches the EBSD pattern. And now the question I got asked was: do you actually adjust this calibration when moving the beam position on the sample during a scan? Because otherwise you cannot collect large EBSD maps. Apparently not everybody was doing that at the time, and it was being presented at a conference as the invention of the century that no EBSD system could do without. It was finally possible to collect EBSD data at low magnification! So, when do you think this feature will be available in your software? I stood quiet for a moment before answering: well, eh, we actually already have such a feature, which we call the pattern centre shift. And it has been in the system since the first mapping experiments in the early 90s. We just did not talk about it as it seemed so obvious.
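The geometric idea behind the pattern centre shift can be sketched as follows. This is a minimal illustration, not EDAX's actual implementation: the sign conventions and the assumption that the scan x axis runs parallel to the detector rows (the tilt axis) while y runs down the tilted surface are mine.

```python
import math

def shifted_pattern_center(pc0, beam_xy_um, screen_width_um, tilt_deg=70.0):
    """Adjust the pattern centre (given as fractions of the screen width)
    for a beam displacement on a tilted sample.

    Assumed geometry (illustrative only): scan x is parallel to the
    detector rows / tilt axis, scan y runs down the tilted surface.
    """
    pcx, pcy, pcz = pc0
    bx, by = beam_xy_um              # beam offset from the scan origin, microns
    t = math.radians(tilt_deg)
    # Motion parallel to the screen shifts the projection point directly.
    pcx -= bx / screen_width_um
    # Motion along the tilted surface has one component parallel to the
    # screen (affects pcy) and one toward/away from it (affects pcz,
    # the detector distance).
    pcy += by * math.cos(t) / screen_width_um
    pcz += by * math.sin(t) / screen_width_um
    return (pcx, pcy, pcz)
```

Without a per-point correction of this kind, solutions at the edges of a low-magnification map would be matched against the wrong projection geometry.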

There are more things like that hidden in the software that are at least as important, such as smart routines to detect the bands even in extremely noisy patterns, EBSD pattern background processing, 64-bit multithreading for fast processing of large datasets, and efficient quaternion-based mathematical methods for post-processing. These tools are quietly working in the background to deliver the results that the user needs.
There are some other original ideas that date back to the 1990s that we actually do regularly talk about, such as the hexagonal scanning grid, triplet voting indexing, and the confidence index, but there is also some confusion about these. Why do we do it that way?

The common way in imaging and imaging sensors (e.g. CCD or CMOS chips) is to organise pixels on a square grid. That is easy, and you can treat your data as being written in a regular table with fixed intervals. However, the pixel-to-pixel distances are different horizontally and diagonally, which is a drawback when you are routinely calculating average values around points. In a hexagonal grid the point-to-point distance is constant between all neighbouring pixels. Perhaps even more importantly, a hexagonal grid with the same step size fits ~15% more points into the same area than a square grid, which makes it ideally suited to filling a surface.

Image 5: Scanning results for square (left) and hexagonal (right) grids using the same step size. The grain shape and small grains with few points are more clearly defined in the hexagonal scan.

This potentially allows improvements in imaging resolution and sometimes I feel a little surprised that a hexagonal imaging mode is not yet available on SEMs.
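The two grid geometries are easy to compare directly. The sketch below generates both grids over the same area with the same step size; the ~15% density gain and the constant neighbour distance of the hexagonal grid fall straight out of the geometry (hexagonal rows are offset by half a step and spaced √3/2 of a step apart).

```python
import math

def grid_points(width, height, step, hexagonal=False):
    """Generate measurement positions on a square or hexagonal scan grid."""
    pts = []
    # Hexagonal rows are packed closer together: step * sqrt(3)/2.
    row_step = step * math.sqrt(3) / 2 if hexagonal else step
    y, row = 0.0, 0
    while y <= height:
        # Every other hexagonal row is offset by half a step.
        x = step / 2 if (hexagonal and row % 2) else 0.0
        while x <= width:
            pts.append((x, y))
            x += step
        y += row_step
        row += 1
    return pts

square = grid_points(100, 100, 1.0)
hexag = grid_points(100, 100, 1.0, hexagonal=True)
# Density ratio approaches 2/sqrt(3) ~ 1.155 for large areas (edge effects
# make it slightly smaller here).
print(round(len(hexag) / len(square), 3))
# All six nearest neighbours of a hexagonal point sit at exactly one step:
print(math.dist((0.0, 0.0), (0.5, math.sqrt(3) / 2)))  # 1.0
```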
The triplet voting indexing method also has some hidden benefits. What we do there is calculate a crystal orientation for each group of three bands detected in an EBSD pattern. For example, when you set the software to find 8 bands, you can define up to 56 different band triangles, each with a unique orientation solution.

Image 6: Indexing example based on a single set of three bands – triplet.

Image 7: Equation indicating the maximum number of triplets for a given number of bands.

This means that when a pattern is indexed, we don’t just find a single orientation; we find 56 very similar orientations that can all be averaged to produce the final indexing solution. This averaging effectively removes small errors in the band detection and allows excellent orientation precision, even in very noisy EBSD patterns. The large number of individual solutions for each pattern has another advantage. It does not hurt too much if some of the bands are wrongly detected from pattern noise, or when a pattern is collected directly at a grain boundary and contains bands from two different grains. In most cases the bands coming from one of the grains will dominate the solutions and produce a valid orientation measurement.
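The counting behind this robustness is simple combinatorics: n bands give n(n−1)(n−2)/6 triplets, and a toy tally shows why one misdetected band cannot outvote the rest.

```python
from itertools import combinations
from math import comb

n_bands = 8
triplets = list(combinations(range(n_bands), 3))
# n(n-1)(n-2)/6 band triangles; 56 for the 8-band example in the text.
print(len(triplets), comb(n_bands, 3))

# Suppose one band (here band 7) came from noise or a neighbouring grain:
# every triplet avoiding it still votes for the correct orientation.
bad = {7}
good_votes = sum(1 for t in triplets if not (set(t) & bad))
bad_votes = len(triplets) - good_votes
print(good_votes, bad_votes)  # 35 vs 21: the correct solution still dominates
```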

The next original parameter from the 1990s is the confidence index, which follows from the triplet voting indexing method. Why is this parameter such a big deal that it is even patented?
When an EBSD pattern is indexed, several parameters are recorded in the EBSD scan file: the orientation, the image quality (a measure of the contrast of the bands), and a fit angle. This angle indicates the angular difference between the bands that have been detected by the software and the calculated orientation solution. The fit angle can be seen as an error bar for the indexing solution. If the angle is small, the calculated orientation fits very closely with the detected bands and the solution can be considered good. However, there is a caveat. What if there are different orientation solutions that would produce virtually identical patterns? This may happen within a single phase, where it is called pseudosymmetry. The patterns are then so similar that the system cannot detect the difference. Alternatively, you can have multiple phases in your sample that produce very similar patterns. In such cases we would typically use EDS information and ChI-Scan to discriminate the phases.

Image 8: Definition of the confidence index parameter. V1 = number of votes for the best solution, V2 = number of votes for the 2nd best solution, VMAX = maximum possible number of votes.

Image 9: EBSD pattern of silver indexed with the silver structure (left) and copper structure (right). The fit is 0.24° in both cases; the only difference is a minor variation in the band width matching.

In both of these examples the fit value would be excellent for the selected solution. And in both cases the solution has a high probability of being wrong. That is where the confidence index, or CI value, becomes important. The CI value is based on the number of band triangles or triplets that match each possible solution. If there are two indistinguishable solutions, these will both have the same number of triangles and the CI will be 0. This means that there are two or more apparently valid solutions that may all have a good fit angle. The system just does not know which of these solutions is the correct one, and thus the measurement is rejected. If there is a difference of only 10% in matched triangles between alternative orientation solutions, in most cases the software is capable of identifying the correct solution. The fit angle on its own cannot identify this problem.
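Using the vote counts defined in Image 8, the confidence index works out as (V1 − V2)/VMAX, which makes its behaviour easy to demonstrate (the vote numbers below are made up for illustration):

```python
def confidence_index(v1, v2, vmax):
    """CI from the vote counts of the two best solutions: (V1 - V2) / VMAX."""
    return (v1 - v2) / vmax

# Two indistinguishable (e.g. pseudosymmetric) solutions split the votes
# evenly, so CI = 0 and the point is rejected even though the fit is good.
print(confidence_index(28, 28, 56))  # 0.0
# A modest margin in matched triplets already separates the solutions.
print(confidence_index(35, 21, 56))  # 0.25
```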

After 25 years these tools and parameters are still indispensable and at the basis of every EBSD dataset that is collected with an EDAX system. You don’t have to talk about them. They are there for you.

Old Eyes?

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

I was recently asked to write a “Tips & Tricks” article for the EDAX Insight Newsletter, as I had recently done an EDAX Webinar (www.edax.com/news-events/webinars) on Texture Analysis. I decided to follow up on one item I had emphasized in the webinar: the need to sample enough orientations for statistical reliability when characterizing a texture. The important thing to remember is that it is the number of grain orientations that matters, as opposed to the number of orientations measured. But that led to the introduction of the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room to go into the kind of detail I would have liked, so I’ve decided to use our blog forum to cover some details about sub-sampling that I found interesting.

Consider the case where you want to characterize not only the texture of a material but also the grain size, or some other microstructural characteristic requiring a relatively fine step size relative to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1], and about 500 pixels per average grain are needed to capture the grain size well [2]. This would result in a scan with approximately 5 million datapoints. Instead of calculating the texture using all 5 million data points, you can use a subset of the points to speed up the calculation. In our latest release of OIM Analysis, this is not as big a concern as it once was, as the texture calculations have been multithreaded, so they are fast even for very large datasets. Nonetheless, since it is very likely that you will want to calculate the grain size, you can use the area-weighted average orientation of each grain, as opposed to all 5 million individual orientation measurements, for a quick texture calculation. Alternatively, a subset of the points obtained through random or uniform sampling of the scan area could be used.
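The arithmetic behind the scan size, and the random sub-sampling option, can be sketched as follows (the subset size of one point per expected grain is my choice for illustration):

```python
import random

# Rules of thumb from the text: ~10,000 grains for the texture [1]
# and ~500 pixels per average grain for the grain size [2].
grains_for_texture = 10_000
pixels_per_grain = 500
total_points = grains_for_texture * pixels_per_grain
print(total_points)  # 5000000 datapoints in the combined scan

# Sub-sampling: draw a random subset of the raw measurement indices
# (one point per expected grain, chosen here just for illustration).
rng = random.Random(0)
subset = rng.sample(range(total_points), k=grains_for_texture)
print(len(subset))  # 10000 points instead of 5 million
```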

Of course, you may wonder how well the sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture as can be seen in the Normal Direction IPF map and accompanying (110) pole figure. For these measurements I have simply done a normalized squared difference point-by-point through the Orientation Distribution Function (ODF) which we call the Texture Difference Index (TDI) in the software.


This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs. binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, 92.5% of which have a CI > 0.2 after CI Standardization. I have elected to use only points with CI > 0.2. The results are shown in the following figure.

As the step size is relatively coarse with respect to the grain size, I have experimented with requiring at least two pixels before a set of similarly oriented points is considered a grain, versus allowing a single pixel to be a grain. This resulted in 9,981 and 25,437 grains, respectively. In both cases, the differences in texture between these two grain-based sub-sampling approaches and the full dataset are small, with the one-pixel grain-based sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and the uniform sampling grids differ so much, particularly as the number of points in the sub-sampling gets large (i.e. at 25% of the dataset)?

TDI
The pole figure for the 1000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess is that if I were to calculate the textures at a lower rank, something like L = 8, then the TDIs would go down. This is already part of the TDI calculation, so it is an easy thing to examine. For comparison I have chosen to look at four different datasets: (a) all of the data in the dataset above (named “fine”), (b) a dataset from the same material with a coarser step size (“coarse”) containing approximately 150,000 data points, (c) a sub-sampling of the original dataset using 1000 randomly sampled datapoints (“fine-1000”), and (d) the “coarse” dataset rotated 90 degrees about the vertical axis in the pole figures (“coarse-rotated”). It is interesting to note that the textures that are similar “by eye” show a general increase in the TDI as the series expansion rank increases. However, for very dissimilar textures (i.e. “coarse” vs. “coarse-rotated”) the jump to a large TDI is immediate.
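The point-by-point comparison described above can be sketched as a normalized squared difference between two ODFs evaluated on the same orientation grid. This is a plausible form of the TDI, not necessarily the exact normalization used in OIM Analysis:

```python
import math

def texture_difference_index(odf_a, odf_b):
    """Point-by-point normalized squared difference between two ODFs
    sampled on the same orientation grid (a plausible form of the TDI;
    the exact normalization in OIM Analysis may differ)."""
    num = sum((a - b) ** 2 for a, b in zip(odf_a, odf_b))
    den = math.sqrt(sum(a * a for a in odf_a)) * math.sqrt(sum(b * b for b in odf_b))
    return num / den

# Identical textures give 0; the index grows as the ODFs diverge.
print(texture_difference_index([1.0, 2.0, 0.5], [1.0, 2.0, 0.5]))  # 0.0
```

Because the comparison happens on the sampled ODF values rather than on the raw orientations, it works across calculation methods (series expansion vs. binning), which is exactly the property used in this study.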

Random vs Uniform Sampling
The differences between random and uniform sampling were a bit curious, so I decided to check the random points to see how they were positioned in the x-y space of the scan. The figure below compares uniform and random sampling for 4,000 datapoints – any more than this is hard to show. Clearly the random sampling is reasonable, but it does show a bit of clustering and some gaps within the scan area. Some of these small differences show up as higher TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences – at least subtle with respect to my personal “by-eye” judgement. It seems to me that my “by-eye” judgement is biased toward lower-rank series expansions.
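The two sampling schemes can be sketched as below. The scan dimensions here are hypothetical (chosen to give roughly the 105,287 points of the dataset); a uniform sample takes every m-th point of the grid, while a random sample draws positions independently, which is what produces the clusters and gaps seen in the figure.

```python
import math
import random

def uniform_sample(nx, ny, k):
    """Evenly spaced sub-grid with roughly k points: every m-th point."""
    m = max(1, round(math.sqrt(nx * ny / k)))
    return [(x, y) for y in range(0, ny, m) for x in range(0, nx, m)]

def random_sample(nx, ny, k, seed=0):
    """k distinct scan positions drawn at random from the full grid."""
    rng = random.Random(seed)
    return rng.sample([(x, y) for y in range(ny) for x in range(nx)], k)

# Hypothetical 400 x 263 scan (~105k points), sub-sampled to ~4,000 points.
uni = uniform_sample(400, 263, 4000)
ran = random_sample(400, 263, 4000)
print(len(uni), len(ran))
# Every uniform point sits exactly m steps from its neighbours; the random
# set inevitably contains adjacent points (clusters) and empty regions (gaps).
```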


Of course, another conclusion would be that my eyesight is getting rank with age ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.

References
[1] SI Wright, MM Nowell & JF Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction”. Metallurgical and Materials Transactions A, 38, 1845-1855.
[2] SI Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements”. Practical Metallography, 47, 16-33.

Crown Caps = Fresh Beer?

Dr. Felix Reinauer, Applications Specialist Europe, EDAX

A few days ago, I visited the Schlossgrabenfest in Darmstadt, the biggest downtown music festival in Hessen and even one of the biggest in Germany. Over one hundred bands and 12 DJs played all kinds of different music, like Pop, Rock, Independent, or House, on six stages. This year the weather was perfect on all four days, and a lot of people celebrated together with famous and unknown artists. A really remarkable fact is the free entrance. The only official fee is for the annual plastic cup, which must be purchased once and is then used for any beverage you buy in the festival area.

During the festival my friend and I listened to the music and enjoyed the good food and drinks sold at different booths on the festival grounds. In this laid-back atmosphere we started discussing the taste of the different kinds of beer available at the festival and throughout Germany. Beer from one brewery always tastes the same, but you can really tell the difference if you try beer from different breweries. In Germany there are about 1,500 breweries offering more than 5,000 different types of beer. This means it would take 13.5 years if you intended to taste a different beer every single day. Generally, breweries and markets must guarantee that the taste of a beer is consistent and that it stays fresh for a certain time.

In the Middle Ages a lot of people brewed their own beer and got sick due to bad ingredients. In 1516 the history of German beer started with the “Reinheitsgebot”, a regulation about the purity of beer. It says that only three ingredients, malt, water, and hops, may be used to make beer. This regulation must still be applied in German breweries. At first this sounds very unspectacular and boring, but over the years the process was refined to a great extent. Depending on the degree of barley roasting, the quantity of hops, and the brewing temperature, a great variety of tastes can be achieved. In the early times the beer had to be drunk immediately or cooled in cold cellars with ice. To take beer with you, special containers were invented to keep it drinkable for a few hours. Today beer is usually sold in recyclable glass bottles with a very tight cap, keeping it fresh for months without cooling. This cap protects the beer from oxidizing or turning sour.

Coming back to our visit to the Schlossgrabenfest: in the course of our discussions about the taste of different kinds of beer, we wondered how the breweries guarantee that the taste will not be influenced by storage and transport. The main problem is sealing the bottles gas-tight. We wondered what material the caps on the bottles are made of, and whether they are as different as the breweries, maybe even specific to a certain brewery.

I bought five bottles of beer from breweries located in the north, south, west, and east of Germany, and one close to the EDAX office in Darmstadt. After opening the bottles, a cross section of each cap was investigated by EDS and EBSD. To do so, the caps were cut in the middle, embedded in a conductive resin, and polished (thanks to René). The area of interest was the rounded area coming down from the flat surface. The EDS maps were collected so that the outer side of the cap was always on the left side of the image and the inner side on the right. The EBSD scans were made on the inner Fe metal sheet.

Let’s get back to our discussion about the differences between the caps from different breweries. The EDS spectra show that all of them are made from Fe, with traces of Mn (< 0.5 wt%) and Cr and Ni at the detection limit. The first obvious difference is the number of pores. The cap from the east contains only a few, the cap from the north the most, and the cap from the middle has big ones, which are also located on the surface of the metal sheet. The EBSD maps were collected from the centers of the caps and were indexed as ferrite. The grains of the cap from the middle are a little smaller, with a larger size distribution (10 to 100 microns), than those of the others, which are all about 100 microns. A remarkable misorientation is visible in some of the grains in the cap from the north.

Now let’s have a look at the differences between the inside and outside of the caps. EDS element maps show carbon- and oxygen-containing layers on both sides of all the caps, probably from polymer coatings. Underneath, the cap from the east is coated with thin Cr layers of different thicknesses on each side. On the inside a silicone-based sealing compound, and on the outside a varnish containing Ti, can also be detected. The cap from the south has protective Sn coatings on both sides, and a silicone sealing layer can also be found on the inside. The composition of the cap from the west is similar to that of the cap from the east, but with the Cr layer only on the outside. The large pores in the cap from the middle are an interesting difference. Within the Fe metal sheet these pores are empty, but on both sides they are filled with silicon oxide. It seems that this silicon-oxide filling is related to the production process, because the pores are covered by the Sn-containing protective layers. The cap from the north only contains a Cr layer on the inside. Its varnish contains Ti and S.

In summary, we didn’t expect the caps to show such significant differences. The differences on the outside are probably due to the different varnishes used for the individual labels of each brewery. However, we didn’t think that the composition and microstructure of the caps themselves would differ significantly from each other. This study is far from complete and cannot be used as a basis for reliable conclusions. However, we had a lot of fun before and during this investigation, and we are now sure that glass bottles can be sealed to keep beer fresh and guarantee a great variety of tastes.

A Little Background on Backgrounds

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

If you have attended an EDAX EBSD training course, you have seen the following slide in the Pattern Indexing lecture. This slide attempts to explain how to collect a background pattern before performing an OIM scan. The slide recommends that the background come from an area containing at least 25 grains.

Those of you who have re-indexed a scan with saved patterns in OIM Analysis 8.1 may have noticed that there is a background pattern for the scan data (as well as one for each of the partitions). This can be useful when re-indexing a scan where the raw patterns were saved, as opposed to background-corrected patterns. This background pattern is formed by averaging 500 patterns randomly selected from the saved patterns. 500 is a lot more than the minimum of 25 recommended in the slide from the training lecture.

Recently, I was thinking about these two numbers: is 25 really enough? Is 500 overkill? With some of the new tools (Callahan, P.G. and De Graef, M., 2013. Dynamical electron backscatter diffraction patterns. Part I: Pattern simulations. Microscopy and Microanalysis, 19(5), pp. 1255-1265) available for simulating EBSD patterns, I realized this might provide a controlled way to refine the number of orientations that need to be sampled for a good background. To this end, I created a set of simulated nickel patterns randomly sampled from orientation space. The set contained 6,656 patterns. If you average all of these patterns together, you get the pattern at left in the following row of three patterns. The average patterns for 500 and for 25 random patterns are also shown. The average pattern for 25 random orientations is not as smooth as I would have assumed, but the one for 500 looks quite good.

I decided to take it a bit further: using the average of all 6,656 patterns as a reference, I compared the difference (simple intensity differences) between average patterns from n orientations and the reference. This gave me the following curve:
From this curve, my intuitive estimate that 25 grains is enough for a good background appears to be a bit optimistic, but 500 looks good. There are a few caveats. The examples I am showing here are at 480 x 480 pixels, which is much more than would be used for typical EBSD scans. In addition, the simulated patterns I used are sharper and have better signal-to-noise ratios than we are able to achieve in experimental patterns at typical exposure times. These effects are likely to lead to more smoothing.
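The convergence behaviour behind that curve can be reproduced with a toy model. Instead of simulated nickel patterns, each "pattern" below is a flat background plus random orientation-dependent structure, modelled as Gaussian noise; the residual difference between an n-pattern average and the all-pattern reference then shrinks roughly as 1/√n.

```python
import random

def mean_pattern(patterns):
    """Average a list of equally sized patterns pixel by pixel."""
    n = len(patterns)
    acc = [0.0] * len(patterns[0])
    for p in patterns:
        for i, v in enumerate(p):
            acc[i] += v
    return [a / n for a in acc]

# Toy stand-in for the 6,656 simulated patterns: flat background of 100
# counts plus Gaussian "structure" at each of 64 pixels.
rng = random.Random(42)
patterns = [[100.0 + rng.gauss(0.0, 10.0) for _ in range(64)] for _ in range(6656)]

reference = mean_pattern(patterns)       # the "all patterns" background
errs = {}
for n in (25, 500):
    sub = mean_pattern(patterns[:n])
    errs[n] = sum(abs(a - b) for a, b in zip(sub, reference)) / len(sub)
    print(n, round(errs[n], 3))          # residual error shrinks ~1/sqrt(n)
```

On this model the 25-pattern average is visibly rougher than the 500-pattern one, matching the comparison of the three averaged patterns above.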

I recently saw Shawn Bradley, who is one of the tallest players to have played in the NBA at 7’6” (229 cm) tall. I recognized him because he was surrounded by a crowd of kids – you can imagine that he really stood out! This reminded me that these results assume a uniform grain size. If you have 499 tiny grains encircling one giant grain, then the background from these 500 grains will not work, as it would be dominated by the Shawn Bradley grain!

Seeing is Believing?

Dr. René de Kloe, Applications Specialist, EDAX

A few weeks ago, I participated in a joint SEM and in-situ analysis workshop in Fuveau, France, with Tescan electron microscopes and Newtec (supplier of the heating-tensile stage). One of the activities during this workshop was to perform a live in-situ tensile experiment with simultaneous EBSD data collection to illustrate the capabilities of all the systems involved. In-situ measurements are a great way to track material changes during the course of an experiment, but in order to show what happens during such a deformation experiment you need a suitable sample. For the workshop we decided to use a “simple” 304L austenitic stainless steel (figure 1) that would nicely show the effects of the stretching.

Figure 1. Laser cut 304L stainless steel tensile test specimen provided by Newtec.

I received several samples a few weeks before the meeting in order to verify the surface quality for the EBSD measurements. And that is where the trouble started …

I was hoping to get a recrystallized microstructure with large grains and clear twin lamellae such that any deformation structures that would develop would be clearly visible. What I got was a sample that appeared heavily deformed even after careful polishing (figure 2).

Figure 2. BSE image after initial mechanical polishing.

This was worrying, as the existing deformation structures could obscure the results of the in-situ stretching. Also, I was not entirely sure that this structure really showed the true microstructure of the austenitic sample, as it had a clear vertical alignment that extended across grain boundaries.
And this is where I contacted long-time EDAX EBSD user Katja Angenendt at the MPIE in Düsseldorf for advice. Katja works in the Department of Microstructure Physics and Alloy Design and has extensive experience in preparing many different metals and alloys for EBSD analysis. From the images I sent, Katja agreed that the visible structure was most likely introduced by my grinding and polishing, and she made some suggestions for removing this damaged layer. Armed with that knowledge and new hope, I started fresh and polished the samples once more. And I had some success! Now there were grains without internal deformation visible, and some nice clean twin lamellae (figure 3). But not everywhere. I still had lots of areas with a deformed structure, and whatever I tried, I could not get rid of them.

Figure 3. BSE image after optimized mechanical polishing.

Back to Katja. When I discussed my remaining polishing problems, she helpfully proposed to give it a try herself using a combination of mechanical polishing and chemical etching. But even after several polishing attempts, starting from scratch and deliberately introducing scratches to verify that enough material was removed, we could not completely get rid of the deformed areas. Slowly, we started to accept that this deformation was perhaps a true part of the microstructure. But how could that be, if this is supposed to be a recrystallised austenitic 304L stainless steel?

Table 1. 304/304L stainless steel composition.

Let’s take a look at the composition. Table 1 gives a typical composition of 304 stainless steel. The spectrum below (figure 4) shows the composition of my samples.

Figure 4. EDS spectrum with quantification results collected with an Octane Elite Plus detector.

All elements are in the expected range except for Ni, which is a bit low; that could bring the composition right to the edge of the austenite stability field. So perhaps the deformed areas are not austenite, but ferrite or martensite? This is quickly verified with an EBSD map, and indeed the phase map below confirms the presence of a bcc phase (figure 5).

Figure 5. EBSD map results of the sample before the tensile test, IQ, IPF, and phase maps.

Having the composition right at the edge of the austenite stability field actually added some interesting information to the tensile tests during the workshop: if the internal deformation in the austenite grains got high enough, we might just trigger a phase transformation to ferrite (or martensite) with ongoing deformation.

Figure 6. Phase maps (upper row) and Grain Reference Orientation Deviation (GROD) maps (lower row) for a sequence of maps collected during the tensile test.

And that is exactly what we observed (figure 6). At the start of the experiment the ferrite fraction in the analysis field was 7.8%, and with increasing deformation it rose to 11.9% at 14% strain.

So, after a tough start, the 304L stainless steel samples made the measurements collected during the workshop even more interesting by adding a phase transformation to the deformation. If you work regularly with these alloys, this is probably not unexpected behavior. But if you work with many different materials, you have to be aware that different types of specimen treatment, either during preparation or during the experiment, may have a large influence on your characterization results. Always be careful that you do not only see what you believe, but ensure that you can believe what you see.

Finally, I want to thank the people of Tescan and Newtec for their assistance with the data collection during the workshop in Fuveau, and especially a big thank you to Katja Angenendt at the Max Planck Institute for Iron Research in Düsseldorf for helpful discussions and help in preparing the samples.

Looking At A Grain!

Sia Afshari, Global Marketing Manager, EDAX

November seems to be the month when the industry tries to squeeze in as many events as possible before winter arrives. I had the opportunity to attend a few events and missed others; however, I want to share with you how much I enjoyed ICOTOM18*!

ICOTOM (International Conference on Texture of Materials) is an international conference held every three years and this year it took place in St. George, Utah, the gateway to Zion National Park.

This was the first time I had ever attended ICOTOM, which is, for the most part, a highly technical conference dealing with the material properties that can be detected and analyzed by Electron Backscatter Diffraction (EBSD) and other diffraction techniques. What stood out to me this year was the depth and degree of the technical presentations, especially those from industry contributors. The presentations were up to date, data driven, and as scientifically sound as any I have seen in the past 25 years of attending more than my share of technical conferences.


The industrial adoption of the technology is not new, since X-ray diffraction has been utilized for over half a century to evaluate the texture properties of crystalline materials. At ICOTOM I was most impressed by the current ‘out of the laboratory’ role of microanalysis, and especially EBSD, in the evaluation of anisotropic materials for quality enhancement.

The embrace of microanalysis as a tool for product enhancement means that we equipment producers need to develop new and improved systems and software for EBSD applications that address these industrial requirements. It is essential that all technology providers recognize evolving market requirements as they develop, so that they can stay relevant and supply current needs. If they can’t do this, manufacturing entities will find their own solutions!

*In the interests of full disclosure, I should say that EDAX was a sponsor of ICOTOM18 and that my colleagues were part of the organizing committee.