Technical note

A Bit of Background Information

Dr. Jens Rafaelsen, Applications Engineer, EDAX

Any EDS spectrum has two distinct components: the characteristic peaks, which originate from transitions between the electronic states of the atoms in the sample, and the background (Bremsstrahlung), which is the continuum radiation emitted by electrons as they decelerate moving through the sample. The figure below shows a carbon coated galena sample (PbS) where the background is below the dark blue line while the characteristic peaks are above.

Carbon coated galena sample (PbS) where the background is below the dark blue line while the characteristic peaks are above.

Some people consider the background an artefact and something to be removed from the spectrum (either through electronics filtering or by subtracting it), but in the TEAM™ software we apply a model based on Kramers' law, which looks as follows:

$$N(E) = \varepsilon(E)\, A(E)\, \frac{E_0 - E}{E}\left(a + bE + cE^2\right)$$

where E is the photon energy, N(E) the number of photons, ε(E) the detector efficiency, A(E) the sample self-absorption, E0 the incident beam energy, and a, b, and c are fit parameters¹.
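As a rough illustration of how such a model behaves, here is a minimal sketch that evaluates a Kramers-type background of the form given above. This is a sketch only: the exact parameterization used in TEAM™ follows Eggert's paper¹, and the unit efficiency and absorption terms here are placeholders, not the real physics-based terms.

```python
import numpy as np

def kramers_background(E, E0, a, b, c, eps=None, absorb=None):
    """Evaluate a Kramers-type continuum background,
    N(E) = eps(E) * A(E) * (E0 - E)/E * (a + b*E + c*E**2).

    eps (detector efficiency) and absorb (sample self-absorption) default
    to 1 everywhere; a real model replaces them with physics-based terms.
    """
    E = np.asarray(E, dtype=float)
    eps_v = np.ones_like(E) if eps is None else eps(E)
    abs_v = np.ones_like(E) if absorb is None else absorb(E)
    return np.clip(eps_v * abs_v * (E0 - E) / E * (a + b * E + c * E**2), 0.0, None)

# A 20 kV spectrum range with arbitrary fit parameters a, b, c:
E = np.linspace(0.2, 20.0, 500)  # photon energy, keV
background = kramers_background(E, E0=20.0, a=100.0, b=-2.0, c=0.05)
```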

This means that the background is tied to the sample composition and detector characteristics, and that you can actually use the background shape and fit/misfit as a troubleshooting tool. Often if you have a bad background, it's because the sample doesn't meet the model requirements or the data fed to the model is incorrect. The example below shows the galena spectrum where the model has been fed two different tilt conditions, and an overshoot of the background can easily be seen with the incorrect 45° tilt. So, if the background is off in the low energy range, it could be an indication that the surface the spectrum came from was tilted, in which case the quant model will lose accuracy (unless it's fed the correct tilt value).


This of course means that if your background is off, you can easily spend a long time figuring out what went wrong and why, although it often doesn't matter too much. To get rid of this complexity we have included a different approach in our APEX™ software that is meant for the entry-level user. Instead of doing a full model calculation we apply a Statistics-sensitive Non-linear Iterative Peak-clipping (SNIP) routine². This means that you will always get a good background fit, though you lose some of the additional information you get from the Bremsstrahlung model. The images below show part of the difference: the full model includes the steps in the background caused by sample self-absorption, while the SNIP filter returns a flat background.
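For the curious, the peak-clipping idea at the heart of SNIP can be sketched in a few lines of Python. This is a simplified version of the algorithm described by Ryan et al.², not the actual APEX™ routine:

```python
import numpy as np

def snip_background(spectrum, iterations=24):
    """Estimate a spectrum background by iterative peak clipping (SNIP).

    At each pass m, every channel is clipped to the mean of its neighbors
    m channels away, which progressively shaves off peaks while leaving
    the slowly varying continuum. The log-log-sqrt (LLS) transform
    compresses the dynamic range so intense peaks and weak background
    are treated comparably.
    """
    y = np.asarray(spectrum, dtype=float)
    v = np.log(np.log(np.sqrt(y + 1.0) + 1.0) + 1.0)  # LLS transform
    for m in range(1, iterations + 1):
        clipped = v.copy()
        clipped[m:-m] = np.minimum(v[m:-m], 0.5 * (v[:-2 * m] + v[2 * m:]))
        v = clipped
    # Invert the LLS transform to get back to counts
    return (np.exp(np.exp(v) - 1.0) - 1.0) ** 2 - 1.0
```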

So, which one is better? Well, it depends on where the question is coming from. As a scientist, I would always choose a model where the individual components can be addressed individually and if something looks strange, there will be a physical reason for it. But I also understand that a lot of people are not interested in the details and “just want something that works”. Both the Bremsstrahlung model and the SNIP filter will produce good results as shown in the table below that compares the quantification numbers from the galena sample.

While there's a slight difference between the two models, the variation is well within what is expected based on statistics, especially considering that the sample is a bit oxidized (as can be seen from the oxygen peak in the spectrum). But the complexity of the SNIP background is significantly reduced relative to the full model and there's no user input, making it the better choice for the novice analyst or infrequent user.

¹ F. Eggert, Microchim. Acta 155, 129–136 (2006). DOI 10.1007/s00604-006-0530-0
² C.G. Ryan et al., Nuclear Instruments and Methods in Physics Research B 34 (1988) 396–402.

What an Eclipse can teach us about our EDS Detectors

Shawn Wallace, Applications Engineer, EDAX

A large portion of the US today saw a real-world teaching moment about something microanalysts think about every day.

Figure 1. Total solar eclipse. (Image credit: nasa.gov)

During today's solar eclipse, you could see two objects that subtend the same solid angle in the sky, assuming you were in the path of totality. Which is bigger, the Sun or the Moon? We all know that the Sun is bigger; its radius is nearly 400x that of the Moon.

Figure 2. How it works. (Image credit: nasa.gov)

Luckily for us nerds, it is also roughly 400x farther away from the Earth than the Moon is. This is what makes the solid angle of the two objects the same, so that from the perspective of viewers on the Earth, they take up the same area in the sphere of the sky.

The EDAX team observes the solar eclipse in NJ, without looking at the sun!

Why does all this matter for a microanalyst? We always want to get the most out of our detectors, and that means maximizing the solid angle. To maximize it, you really have two parameters to play with: how big the detector is and how close the detector is to the sample. 'How big is the detector?' is easy to play with. Bigger is better, right? Not always, as the bigger it gets, the more you start running into challenges with pushing charge around, which can lead to issues like incomplete charge collection, ballistic deficits, and other problems that many people never think about.

All these factors tend to lead to lower resolution spectra and worse performance at fast pulse processing times.
What about getting closer? Often, we aim for a take-off angle of 35° and want to ensure that the detector does not protrude below the pole piece, to avoid hitting the sample. On different microscopes, this can put severe restrictions on how and where the detector can be mounted, and we can end up in the situation where we need to move a large detector further back to make it fit within the constraining parameters. So, getting closer isn't always an option, and sometimes going bigger means moving further back.
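To put the trade-off in numbers: for a detector that is small compared to its distance from the sample, the solid angle is approximately the active area divided by the distance squared. The sketch below uses hypothetical sizes and distances, not specifications of any particular detector:

```python
def solid_angle_msr(active_area_mm2, distance_mm):
    """Approximate solid angle, in millisteradians, of a detector with the
    given active area at the given sample-to-detector distance, using
    Omega ~ A / d^2 (valid when the detector is small relative to d)."""
    return active_area_mm2 / distance_mm**2 * 1000.0

# A hypothetical 30 mm^2 sensor mounted close in versus a 100 mm^2 sensor
# pushed further back: the bigger chip ends up with the same solid angle.
print(solid_angle_msr(30.0, 45.0))   # ~14.8 msr
print(solid_angle_msr(100.0, 82.0))  # ~14.9 msr
```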

Figure 3. Schematic showing different detector sizes with the same solid angle. The detector size can govern the distance from the sample.

In the end, bigger is not always better. When looking at EDS systems, you have to compare the geometry just as much as anything else. The events happening today remind us of that. Sure, the Sun is bigger than the Moon, but the latter does just as good a job of making a part of the sky dark as the Sun does of making it bright.

For more information on optimizing your analysis with EDS and EBSD, see our webinar, ‘Why Microanalysis Performance Matters’.

Molecular Machines are the Future…

René Jansen, Regional Manager, Europe

The ground in the north of Holland was recently shaking and not because of an earthquake, but because Professor Ben Feringa from the University of Groningen has won the 2016 Nobel Prize in Chemistry for his work on the development of molecular machines.
Feringa developed the first light-driven rotary molecular motor, a feat widely recognized as a spectacular scientific breakthrough.

Electrically driven directional motion of a four-wheeled molecule on a metal surface

'Building a moving molecule is not that difficult in itself, but being able to steer it, to have control over it, is a different matter,' he said. Years ago he presented the first molecular motor, consisting of a molecule, part of which performed a full rotation under the influence of light and heat. He has designed many different engines since, including a molecular '4-wheel drive' car. By fixing the engine molecules to a surface, he developed a nano 'mill park' in which the mills rotate when exposed to light. And last year he described the world's first symmetrical molecular engine. Feringa has also succeeded in putting these molecular engines to work, having them turn a glass cylinder 10,000 times their size. Amazing.

Feringa is internationally recognized as a pioneer in the field of molecular engines. One of the potential applications of his engines is the delivery of medication inside the human body.

I recently heard an interview with him, in which he promoted the idea that universities should be playgrounds, where scientists must be able to do whatever they want to create real breakthroughs. Today, the ability of universities to create these playgrounds is limited due to a constant reduction of budgets over recent years. It would be interesting to know how the University of Groningen has managed to do this.

Another, less famous, department at the University of Groningen is working on the formation/deformation of materials exposed to high temperatures (> 1000 degrees Celsius). Measuring EBSD patterns while the temperature increases shows that new crystals form at a certain temperature. Now my hope is that this "playground" too will end up, a few years from now, with a Nobel Prize for a breakthrough in Materials Science.

Rotary Engines Go “Round and Round”

Dr. Bruce Scruggs, XRF Product Manager, EDAX

Growing up outside of Detroit, MI, I was surrounded by a culture of automobiles, particularly American muscle cars. I was never a car buff, but if I said little and nodded knowingly during these car discussions, I could at least survive. Engine displacement? Transmission? Gear ratios? Yep, just nod your head and grunt a little bit. Well, it turns out that working at EDAX I've run into a couple of serious car restoration experts. There always seems to be a common theme with these guys: how do I get more power out of this engine?

Recently, one of these restoration experts brought in a small section of the rotor housing of a Mazda engine from the early '80s. It turns out this guy likes to rebuild Mazda engines, tweak the turbocharging, and race them. As we all know, Mazda was famous for commercializing the Wankel engine, aka the rotary engine, to power its cars. Rotary engines are famous for their simplicity and the power one can generate from a relatively small engine displacement. These engines are also infamous for poor fuel consumption and emissions, which led Mazda to end general production in roughly 2012 with the last of the production RX-8s.

Now, one of the questions in rebuilding these engines is how to repair and resurface the oblong rotor housing. In older engines of this type, the surface of the rotor housing can suffer deep gouges. The gouges can be filled and then need to be resurfaced. Initially, we imaged the cross-section of the rotor housing block in an Orbis PC micro-XRF spectrometer to determine what was used to surface coat the rotor housing. If you read up on this engine (it's a 12A variant), the block is aluminum with a cast iron liner and a hard chromium plating. The internet buzz claims the liner is installed via a "sheet metal insert process", and when I google "sheet metal insert process", all I get are links to sheet metal forming and links referring to webpages which have copied the original reference to "sheet metal insert process".

In the following Orbis micro-XRF maps (Figures 1a and 1b), you can see the aluminum rotor housing block and the cast iron liner. Each row of the map is about 100 µm wide, with the iron liner being about 1.5 mm thick. If you look carefully, you can also see the chrome coating on the surface of the iron liner. On the cross-section, which was cut with a band saw, the chrome coating is about one map pixel across, so it's less than 100 µm thick. From web searches, hard chrome plating for high-wear applications starts at around 25 µm thick and ranges up to hundreds of microns; very thick coatings are ground or polished down after the plating process to achieve a more uniform application. So, what is found in the elemental map is consistent with the lower end of web-based information for a hard chrome coating, bearing in mind that the coating measured had well over 150k miles of wear and tear. If we had a rotor housing with less wear and tear, we could use XRF to make a more proper measurement of the chrome plating thickness and provide a better estimate of the original manufacturer's specification on the hard chrome thickness.

Figure 1a: Orbis PC elemental map

Overlay of 4 elements:
Fe: Blue (from the cast iron liner)
Al: Green (from the aluminum rotor housing block)
Cr: Yellow (coating on the cast iron liner)
Zn: Red (use unknown)

Figure 1b: Total counts map. Lighter elements such as Al generate fewer X-ray counts and appear darker than the brighter, heavier Fe-containing components.

We did have a look at the chrome coating by direct measurement with both XRF, looking for alloying elements such as Ti, Ni, W and Mo, and SEM-EDS, looking for carbides and nitrides. We found that it's simply a nominally pure chrome coating with no significant alloying elements. We did see some oxygen using SEM-EDS, but that would be expected on a surface that has been exposed to high heat and combustion for thousands of operating hours. Again, these findings are consistent with a hard chrome coating.

In some on-line forum discussions, there was even speculation that the chrome coating was micro-porous to hold lubricant. So, we also looked at the chrome surface under high SEM magnification (Figure 2). There are indeed some voids in the coating, but it doesn’t appear that they are there by design, but rather that they are simply voids associated with the metal grain structure of the coating or perhaps from wear. We specifically targeted a shallow scratch in the coating, looking for indications of sub-surface porosity. The trough of the scratch shows a smearing of the chrome metal grains but nothing indicating designed micro-porosity.

Figure 2: SEM image of the chrome plated surface of the rotor housing liner. The scratch running vertically in the image is about 120 µm wide.

The XRF maps in Figure 1 also provide some insight into the sheet metal insert process. The cast iron liner appears to be wrapped in ribbons of aluminum alloy and iron. The composition of the iron ribbon (approximately 1 wt% Mn) is about the same as the liner's. But the aluminum alloy ribbon is higher in copper content than the housing block. This can be seen in the elemental map (Figure 1a), where the aluminum ribbon is a slightly darker green (lower Al signal intensity) than the housing block itself. The map also shows a thread of some zinc-bearing component running through (what we speculate are) the wrappings around the liner. My best guess here is that it is some sort of joining compound. Ultimately, the sheet metal insert process involves a bit more than a simple press or shrink fit of a cylinder sleeve in a piston engine block. Nod knowingly and grunt a little.

Old Dogs and New Tricks!

Matt Nowell, Product Manager EBSD, EDAX

This year, three of us in the EBSD development group (Stuart Wright, Scott Lindeman, and myself) celebrated 20 years at EDAX. I consider myself quite fortunate to have gotten involved with EBSD so early in its commercial development, and it's been exciting and rewarding to see its growth, both in terms of the number of users and in the wide array of applications.

However, there are still some characterization challenges that we continue to revisit. One example is differentiating ferrite from martensite in different steel alloys. This phase differentiation application is challenging because martensite is crystallographically only slightly distorted from the body-centered cubic ferrite cell, and that distortion depends on carbon content and thermal processing history. This makes it difficult to differentiate these phases directly via crystallographic structure measurements. Because the martensitic phase is generally more strained, most differentiation work has focused on using the EBSD Image Quality value as the key differentiation metric [1-2].

Figure 1.

As new features are developed, it is enjoyable to see where these features can be applied, and what benefits might be gained from them beyond what was initially envisioned. One example is Neighbor Pattern Averaging and Reindexing, or NPAR. NPAR improves the signal-to-noise ratio of an EBSD pattern by averaging each pattern with all the neighboring patterns, as shown in Figure 1. NPAR was initially created as a method of successfully indexing some very noisy patterns we received from a customer, but we quickly found benefits trying this approach on a range of different materials and under different SEM operating conditions. More details can be found in an earlier blog post at http://edaxblog.com/2015/09/.
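The core operation is easy to sketch in code. Below is a minimal illustration of the averaging step (my own sketch, not the TEAM™ implementation), assuming the raw patterns are stored in a rows × cols × height × width array:

```python
import numpy as np

def npar_average(patterns):
    """Average each EBSD pattern with its neighbors on the scan grid.

    `patterns` is a (rows, cols, h, w) array of raw patterns. Each output
    pattern is the mean of the 3 x 3 neighborhood around it (edge points
    use whichever neighbors exist), which boosts signal-to-noise at the
    cost of mixing in signal from adjacent scan points.
    """
    rows, cols = patterns.shape[:2]
    out = np.empty(patterns.shape, dtype=float)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            out[r, c] = patterns[r0:r1, c0:c1].mean(axis=(0, 1))
    return out
```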

Figure 2a. Figure 2b.

Figure 2 shows EBSD Image Quality (IQ) maps collected on a dual-phase ferritic-martensitic steel sample. Figure 2a shows the IQ map collected under standard conditions, while Figure 2b shows the IQ map after NPAR processing. It can easily be seen that the phase contrast has increased after using NPAR. This is because the quality of the EBSD pattern from the martensitic phase is lower due to the internal strain, and not because of camera parameters. This means that the spatial pattern averaging of NPAR does not improve the IQ values for the martensitic phase at the same rate as it does for the ferritic phase, hence increasing the phase contrast.

Figure 3a. Figure 3b.

NPAR processing does have another effect that can be observed. Using NPAR, orientation precision is improved through better signal-to-noise levels in the EBSD pattern, resulting in more precise band detection. Because of this effect, the average misorientation (as measured here with the Kernel Average Misorientation metric) measured within each martensitic grain is lower with NPAR processing. The results with and without NPAR processing are shown in Figure 3. While NPAR does improve indexing and orientation precision performance, this improvement reduces the effectiveness of the KAM value in differentiating these phases. I think the fact that NPAR improves one indirect differentiation method while not helping another shows why this is a challenging characterization problem.
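For reference, the KAM metric itself can be sketched as follows. This simplified version ignores crystal symmetry operators, which a real implementation must apply when computing each misorientation:

```python
import numpy as np

def misorientation_deg(g1, g2):
    """Misorientation angle (degrees) between two orientation matrices.
    Crystal symmetry is deliberately ignored in this sketch."""
    t = (np.trace(g1 @ g2.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(t, -1.0, 1.0)))

def kam_map(orientations, max_deg=5.0):
    """Kernel Average Misorientation for a (rows, cols, 3, 3) orientation map.

    Each point gets the average misorientation to its 4-connected neighbors,
    skipping neighbors beyond `max_deg` so grain boundaries do not inflate
    the local value.
    """
    rows, cols = orientations.shape[:2]
    kam = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            angles = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    a = misorientation_deg(orientations[r, c], orientations[rr, cc])
                    if a <= max_deg:
                        angles.append(a)
            kam[r, c] = np.mean(angles) if angles else 0.0
    return kam
```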

In the end, while NPAR does offer some improvements, we still have not found a fully satisfactory solution to the ferrite-martensite differentiation problem.  I look forward to continuing to work on this and other characterization problems as we continue with EBSD product development.

[1] Wilson, A. W., J. D. Madison and G. Spanos (2001). “Determining phase volume fraction in steels by electron backscattered diffraction.” Scripta Materialia 45(12): 1335-1340.
[2] Nowell, M. M., S. I. Wright and J. O. Carpenter (2009). A Practical investigation into Identifying and Differentiating Phases in Steel Using Electron Backscatter Diffraction. Materials Processing and Texture. A. D. Rollett. Hoboken, NJ, John Wiley & Sons: 285-292.

To learn more about NPAR click here to see our video overview.

Notes from Madison: Atom Probe Tomography Users’ Meeting

Dr. Katherine Rice, Applications Scientist at CAMECA Instruments, Inc.

Dinner at the top of the Park with a view of the Wisconsin State Capitol

The Terrace at the University of Wisconsin

Last week was a great week up here in Madison for our bi-annual users’ meeting, with about 90 atom probe enthusiasts making the trek to Madison, WI to discuss the finer points of atom probe tomography (APT).   There were plenty of great sessions involving, for example, correlative microscopy, cryo-atom probe, and new ways to detect evaporated ions.  Lest anyone think that we are too serious up here in Wisconsin, we also enjoyed talks on atom probing rodent teeth and even beer, as well as having several social events where our attendees could sample local brews.

Demo attendees watching a map being taken

Many of the users have been implementing transmission EBSD (or TKD, as some folks prefer) on their needle-shaped atom probe specimens, which are typically shaped in a focused ion beam (FIB) microscope. This allows for identification of any grain boundaries present, and also helps position a grain boundary close to the specimen apex so there is a good chance it will be captured in an APT analysis. Atom probe specimens usually have a radius of ~100 nm, which makes them ideally sized for transmission EBSD at SEM voltages between 20 and 30 kV. The users' group meeting also marked another special event: the debut of Atom Probe Assist (APA) mode in the TEAM™ software. Transmission EBSD can be challenging, but APA mode makes the analysis faster and easier by implementing recipes for background subtraction developed by EDAX and by skipping the mapping of areas not intercepted by the specimen. We had about 20 users at the Tuesday demos of APA mode and another few at an additional demo on Friday. CAMECA's Dr. Yimeng Chen manned the FIB and quickly targeted a grain boundary for FIB milling, while our EDAX friend Dr. Travis Rampton took maps after each milling step to make sure the grain boundary was contained in the specimen.

Yimeng Chen and Travis Rampton present a poster.

Sample holders that work well for t-EBSD and FIB also made their debut at the meeting. Many of CAMECA's atom probe users mount each specimen to our Microtip coupons, which are 3 mm × 5 mm pieces of Si that hold 22 flat-topped posts. Our Microtip Holder (affectionately nicknamed the Moth) was developed to allow transmission EBSD on each of the 22 mounted specimens and then transfer of the stub portion directly into the atom probe. Even if you don't do APT, these microtip posts are a convenient way to mount multiple thin samples for transmission EBSD.

The moth sample holder containing a microtip coupon

It was incredible to see the explosion of transmission EBSD for atom probe, and the cool things that many LEAP users are discovering when they try it out on their atom probe samples.  Perhaps the greatest strength of this technique is how easy and integrated it is in the atom probe specimen preparation process.  You don’t even need to move your sample or the camera between steps when you are shaping a liftout wedge into a specimen that is atom probe ready.  I look forward to hearing about the new applications that are being discovered when combining t-EBSD and APT!

There is more here than meets the eye!

Dr. Bruce Scruggs, Product Manager Micro-XRF, EDAX

EDAX has introduced a product line of coating thickness measurement instruments based on XRF spectrometry.  These units were designed to measure coatings on samples varying in size from small parts to spools of metal sheet stock a mile long.  The markets for these products are generally in the areas of Quality Control/Quality Assurance and Process Control.

Recently, I received a simple, small electrical component, i.e. some type of solder contact or lug, and was asked to verify the coating thicknesses on the sample and check whether it was in specification or not.  It seemed like a simple enough task and I wasn’t expecting to learn anything special.

Figure 1: Electrical contact lug

I was given the following specifications:
• Sn / Ni / Al (substrate)
• Sn thickness:  5 µm +/- 1 µm
• Ni thickness:  2 µm +/- 1 µm
• eyelet is coated; tail is uncoated

I made some measurements on the eyelet and the tail, and these were consistent with the eyelet being coated with Sn and Ni and the tail section being an uncoated Al alloy. There were some irregularities that I was not expecting. I found trace Ga in the Al alloy, which I thought was rather odd because I don't see Ga that often. I also found strong peak intensities for Zn and Cu, which were completely inconsistent with the weak peaks found in the Al alloy. A "standardless" modeling quantification analysis of the Al alloy indicated Zn and Cu at 40 ppm and Ga at 110 ppm. Googling "Gallium in Aluminum alloys" produced numerous hits explaining that Ga is a trace element in bauxite, the raw material used to produce Al metal. Hence, Ga is a trace impurity in Al alloys. Incidentally, the following week, I saw trace Ga in every Al alloy I measured for another project.

Since the Zn and Cu peak intensities found in the measurement of the eyelet were much stronger than in the base alloy, the Zn and Cu had to be in the Sn/Ni coatings. After completing all the spectral measurements on the eyelet, I resorted to polishing an edge on the eyelet and examining the Sn and Ni layers in cross-section using SEM-EDS to determine their content. The Sn and Ni layers were smeared because the polishing was done very quickly, without embedding the sample in epoxy. But SEM-EDS clearly showed the Zn and Cu originating from the Ni layer and not the Sn layer. So now we had a layer system of Sn / Ni(Zn,Cu) / Al alloy. It wasn't clear to me whether the Zn and Cu represented a quality problem or not.

Figure 2: SEM image of the cross section of the edge of the eyelet. The Sn and Ni layers can be seen from left to right.

Now we come to the actual measurement of the coating thickness. Since Sn and Ni foils are commercially available for coating calibration, I decided to use stackable Sn and Ni foils (2.06 µm Sn on 1.04 µm Ni, sourced from Calmetrics Inc., Holbrook, NY, USA) on an Al substrate to calibrate the coating model. I also used pure Zn and Cu "infinites", i.e. samples thick enough that a further increase in thickness provides no increase in signal, to give the coating quantification model a point of reference for these two elements, which are not in my foil standards.

I built one coating quantification model based on the Sn(K), Ni(K), Zn(K) and Cu(K) lines, and another based on the Sn(L) lines as opposed to the Sn(K) lines. The Sn(K) lines, being more energetic, allow you to measure thicker layers, while the Sn(L) lines are more sensitive to layer variations for thinner layers. Both coating quantification models were calibrated with the same standard. But, to my surprise, measurements off the same point on the sample using these two different coating models didn't agree! This is a question our customers often ask: "Why are the results not the same if I use a different line series?"

Table 1: Initial coating thickness measurements on the eyelet.

I pondered this result for a while and then remembered that X-rays are penetrating; this is, after all, why XRF is an effective means of non-destructively measuring coatings. After measuring the overall thickness of the part, i.e. 0.8 mm, and doing a few quick calculations, I realized that the Al alloy substrate is not thick enough to stop Sn(K) X-rays. The website I like to use for these types of calculations is http://henke.lbl.gov/optical_constants/filter2.html.

0.8 mm of Al only absorbs about 30% of the Sn(K) X-rays at 25.2 keV, and this sample happens to be coated on BOTH sides of the substrate. (The absorption for Sn(L) at 3.4 keV and Ni(K) at 7.5 keV happens to be essentially 100%.) So, the measurement sees the Sn(K) from the top surface coating as well as from the opposite surface coating, while it only sees the Sn(L) and Ni(K) from the top surface. I thought it would be interesting to make the measurement again at the same spot after polishing off the coating on the opposing side of the part.
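The transmission check is a one-line Beer-Lambert calculation. In this sketch, the mass attenuation coefficient of Al at 25.2 keV (roughly 1.8 cm²/g) is an approximate tabulated value:

```python
import math

def transmission(mu_rho_cm2_g, density_g_cm3, thickness_cm):
    """Fraction of X-rays transmitted through a slab (Beer-Lambert law)."""
    return math.exp(-mu_rho_cm2_g * density_g_cm3 * thickness_cm)

# Sn(K) at 25.2 keV through 0.8 mm of Al, with mu/rho ~ 1.8 cm^2/g
# (approximate tabulated value) and rho(Al) = 2.70 g/cm^3:
t = transmission(1.8, 2.70, 0.08)
print(f"transmitted: {t:.2f}, absorbed: {1 - t:.2f}")  # ~0.68 transmitted, ~0.32 absorbed
```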

Table 2: Coating measurement at nominally same position as in Table 1 after removing the coating on the opposite side of the part.

Now the Sn (and Ni) results from the two models agree to better than 10%. In this case, the result for the Ni layer also changes because, given the same Ni intensity in each case, the quantitative X-ray modeling will predict that the Ni layer thickness must decrease as the Sn layer thickness decreases. You can also see that the Sn layer is well out of specification and that there is about 10 wt% Zn in the Ni layer. I still don't know if that's a quality problem or not. But I was definitely impressed with how much I learned from just measuring this simple electrical part.

What’s in Your EBSD Pattern?

Dr. Travis Rampton, Applications Engineer, EDAX

When collecting EBSD data, it is important to optimize the detector/camera to obtain the desired information, which is usually crystal orientation. However, many factors beyond orientation affect the creation of an EBSD pattern. Some of these are demonstrated in Figure 1. In this blog post we will examine some of those factors. In doing so, we will also take advantage of PRIAS imaging to illustrate a few of the different effects.
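Since PRIAS imaging comes up repeatedly below: it builds images from the intensity inside regions of interest (ROIs) positioned on the EBSD detector itself. A generic sketch of that ROI-imaging idea (an illustration, not the actual implementation) might look like this:

```python
import numpy as np

def roi_image(patterns, roi):
    """Build a PRIAS-style image from a stack of EBSD patterns.

    `patterns` is a (rows, cols, h, w) array of detector images and `roi`
    is a (row_slice, col_slice) pair selecting a region on the detector.
    The image value at each scan point is the mean intensity inside the
    ROI; ROIs near the top, center, and bottom of the detector tend to
    emphasize topographic, orientation, and atomic number contrast.
    """
    rs, cs = roi
    return patterns[:, :, rs, cs].mean(axis=(2, 3))

# Example: a top-of-detector ROI on a hypothetical 100 x 100 point scan
# of 120 x 120 pixel patterns (random data, just to show the mechanics).
patterns = np.random.rand(100, 100, 120, 120)
top_image = roi_image(patterns, (slice(0, 40), slice(40, 80)))
```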

Figure 1: Examples of EBSD patterns collected under varied conditions. The pattern on the left represents the ideal pattern, while the one in the middle is a mixed pattern and the pattern on the right is an unprocessed image.

A list of a few of the most important factors that affect EBSD patterns is given here. The author recognizes that all effects may not be accounted for and invites your additions in the comments section of this blog.

  • SEM kV and beam current
  • EBSD camera parameters (gain, exposure, image processing)
  • Sample/detector geometry
  • Material density
  • Surface structures (topography, defects, quality)
  • Crystal structure/orientation
  • Interaction volume
  • Magnetic domains

Not only do the listed factors affect entire EBSD patterns, but some manifest more apparently in certain regions of the image. This is often best seen in geological samples containing topography, atomic number differences, and orientation contrast (see Figure 2).

Figure 2: (Left) PRIAS image taken from the top of the EBSD detector, showing surface topography; (middle) PRIAS image taken from the center of the detector, showing orientation contrast; (right) PRIAS image taken from the bottom of the detector, showing some atomic number differences.

Some of the factors that affect EBSD patterns are apparent enough that they can be seen by eye; others are so subtle that they require more sensitive techniques. The final example that will be shown in this post is of magnetic domains. The effect of magnetic domains is not visible by just looking at the EBSD patterns; however, PRIAS imaging makes this effect visible. For this example we will look at a grain-oriented electrical steel. Figure 3 shows distinct magnetic regions, especially when compared to the SEM image.

Figure 3: (left) SEM image taken of steel and (right) PRIAS image taken from the left side of the detector showing magnetic domains.

The images in Figure 3 give one view of the magnetic domains. An additional set of views is seen in the full 5 x 5 array of PRIAS images shown in Figure 4. A close inspection of all 25 images reveals several varying structures. The differences between all of the ROIs are not fully understood at this time and are the subject of an ongoing study.

Figure 4: 5 x 5 grid of PRIAS images taken from grain oriented electrical steel. Each ROI image shows different structure.

These examples represent just a few of the factors that affect the formation of an EBSD pattern. Often these effects can be seen in the patterns alone, but at other times PRIAS imaging is required for clear visualization. While EBSD is a reliable method for measuring crystallographic orientation and phase information, there is often much more information in the EBSD pattern. So I pose the question: what's in your EBSD pattern?

How Low Can You Go?

Jens Rafaelsen, Applications Engineer, EDAX

My colleague René de Kloe's March blog contribution "Resolving Matters", on resolution in the world of EBSD, sparked a few thoughts along similar lines for EDS. Just like EBSD, we have several resolutions in play when we are discussing EDS data. The easiest one to deal with is the detector resolution, which is defined as the FWHM of the Mn Kα peak and has typical values in the range of 121-130 eV. This is a value that is pretty easy to understand, and the value at the different processing times can be seen directly in the TEAM™ interface. But often we are asked the question "how low can you go?", and when this comes up, we are typically talking about very different resolution parameters. It used to be that this question meant "how low a concentration of this or that element can you detect?", or in other words, what's the method/minimum detectable limit (MDL). But with mapping now being a standard data acquisition technique, it often means "how small are the things you can see in the maps?"

The first interpretation of the question is not as easy as one might think. Unlike the detector resolution, which is a fixed value regardless of your sample or microscope settings, the MDL is heavily dependent on the microscope settings, the sample composition, the detector resolution, and last but definitely not least, the number of counts in the spectrum. At the end of the day, the MDL comes down to whether we can reliably say that there is a peak at the energy of a certain element. The method to establish whether a peak is present is typically borrowed from the world of microprobes, where we simply look at the number of counts at the peak centroid and the number of background counts. If the number of peak counts is above the background by some level (often 3 standard deviations), we say the element is above the detectable limit. This means that we need a high degree of precision in our results to drive the noise levels down, which is where the number of counts comes into play; essentially, the longer the acquisition time is, the lower the MDL will be, all other parameters being the same. And since both background and peak counts are affected by the microscope parameters and the composition of the sample, an MDL measured on one sample would not be applicable to another sample unless the compositions of the two were very close and the same microscope parameters were used.

This also means that the MDL will differ significantly depending on which X-ray lines are used. If we consider the pyrite (FeS2) spectra shown below for the range of the Fe K-lines and L-lines, and simply use the counts in the highest channel and the corresponding background fit value, the peak-to-background ratios are 37 and 18 for the high energy and low energy peak respectively. While this approach is a little simplified, it basically means that the MDL when using the high energy peak is half of that for the low energy peak, simply due to the difference in peak-to-background ratio. And of course, if the processing time is changed, the detector resolution will change, and consequently the peak will become broader or narrower, which will change the counts in the highest channel.
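In code, the detectability criterion itself is only a couple of lines. This is a simplified illustration of the 3-standard-deviation test described above, with the background noise taken as the square root of the background counts:

```python
import math

def is_detectable(peak_counts, background_counts, n_sigma=3.0):
    """Microprobe-style detectability test: the net peak counts must exceed
    n_sigma standard deviations of the background, with the background noise
    taken as sqrt(B) from Poisson counting statistics."""
    net = peak_counts - background_counts
    return net > n_sigma * math.sqrt(background_counts)

# Acquiring 10x longer scales both peak and background up 10x, but the
# threshold only grows as sqrt(B), so a weak peak eventually clears it:
print(is_detectable(peak_counts=130, background_counts=100))    # False: net 30 vs. threshold 30
print(is_detectable(peak_counts=1300, background_counts=1000))  # True:  net 300 vs. threshold ~95
```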

The lesson to be learned here is that when we are reluctant to answer this aspect of the "how low can you go?" question, it's not because we are trying to hide something; it's simply because we would need to know the composition of the sample, the microscope parameters used, the X-ray line of interest, and the statistics/number of counts in the spectrum before we can even start to do the math or give you an educated guess. And it should also be taken into account that the answer that comes out of the calculations will only be applicable to that specific sample and is not a universal limit.

The other aspect of "how low can you go?" really comes down to the resolving power of X-ray maps. We can easily increase the size of the maps we acquire to, say, 4096×3200 pixels, but if the area/volume we get information from is larger than the single pixel size, this doesn't really give us any additional information. The interaction volume can be modeled in various ways, but since an image is worth a thousand words, I had been thinking about a good sample to illustrate this for a while. Luckily I was at Pittcon in New Orleans this March, and thanks to John Yorston from Zeiss, I learned that flicking a lighter and letting the sparks hit a stub with carbon tape will allow you to pick up particles rich in O, Mg, Fe, La, and Ce with a size distribution from tens of µm down to about 100 nm. Shortly after getting back into the lab, I "borrowed" an empty lighter from our Software Manager Divyesh Patel and popped the resulting sample into our FEI Nova Nanolab 200. The resulting SEM image acquired at 10 kV can be seen below.

SEM Image Acquired at 10 kV

The largest particle in the center of the image is roughly 700 nm in diameter, while the slightly smaller particle below it is about 300 nm, and the small particles scattered around are on the order of 100 nm. While these particles can easily be seen in the SEM image, there is typically a world of difference between the interaction volumes of secondary electrons and X-ray photons, so what would an X-ray image look like? In the TEAM™ software we also build up an image based on the X-ray counts per second (CPS) in each pixel when we are mapping the sample. The CPS image of the same region as the SEM image can be seen below, and one can immediately see that they are quite similar. Some shadowing can be seen to the right of the large particle where X-rays are being blocked, and a few features are slightly more blurry in the CPS image, but the CPS image also shows us features that were not visible in the SEM image (compare the top left corner in the two images) and clearly resolves the 100 nm particles.

CPS Map

So does this mean that the resolution of the X-ray maps is comparable to that of the SEM image? Well, yes and no. The secondary electrons that we use for the SEM image are typically defined as anything with an energy below 50 eV, which means that they can only escape from a region very close to the surface. X-rays, on the other hand, can travel through a significant distance of material, depending on the composition and photon energy. Shown below are two simulations (created using the CASINO software) of the energy deposited in the sample with a 100 nm layer of either CeLa or Mg on top of a carbon substrate.

Two simulations of the energy deposited in the sample with a 100 nm layer of either CeLa or Mg on top of a carbon substrate.

The simulations show that for the CeLa layer, the energy is pretty much confined to the layer/particle, while we get a significant penetration into the substrate if the layer is made of Mg. The data showed that the large center particle was primarily La and Ce while the smaller particles around it were mostly Fe and Mg (La and Ce maps shown below). So while we seemingly have quite good resolution in the CPS image, the depth from which we get information can vary dramatically with the composition of the particles.

La and Ce maps

Again we end up with the problem that there's no simple answer to this aspect of the "How low can you go?" question. It depends on several parameters, including the composition of the feature of interest, but simulations can go a long way towards helping us understand what is going on in the sample and give us an idea of what settings we should use for the data acquisition.
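For readers who want a quick feel for the scale of the interaction volume without running a full Monte Carlo simulation, the Kanaya-Okayama range is a common rule-of-thumb estimate. The sketch below uses handbook values for pure Mg and Ce metal, so it is only a rough guide for the mixed-composition particles discussed here:

```python
def kanaya_okayama_range_um(E0_keV, A, Z, rho_g_cm3):
    """Kanaya-Okayama estimate of the electron penetration depth in microns:
    R = 0.0276 * A * E0^1.67 / (Z^0.89 * rho)."""
    return 0.0276 * A * E0_keV**1.67 / (Z**0.89 * rho_g_cm3)

# 10 kV beam into light Mg versus heavy Ce (pure metals, handbook densities):
print(kanaya_okayama_range_um(10.0, A=24.3, Z=12, rho_g_cm3=1.74))   # ~2.0 um
print(kanaya_okayama_range_um(10.0, A=140.1, Z=58, rho_g_cm3=6.77))  # ~0.7 um
```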

While I have been somewhat non-committal about the details of the mapping resolution here, we will be discussing some of these topics in more detail in our upcoming webinar "Low Energy and High Spatial Resolution EDS Mapping"* on June 18, 2015, and looking at what differences we see when changing the parameters. As for the MDL side of the discussion, this might be something for a future webinar as well; or come see us August 2-6 at M&M in Portland, where at the very least we will have a poster covering part of this subject.

*Registration for this webinar is now open.

Fine-Tuning the Microstructure

Dr. Stuart Wright, Senior Scientist, EDAX

Since my blog about piano wires back in November 2014, I’ve continued to think about what it all means in terms of music. As I mentioned in my posting, my friend Keith Kopp provided the wires for me to look at. Keith is very generous with his time. I’ve seen him many times help out with music at various church and neighborhood socials. I’m always impressed that someone like Keith can clearly hear when an instrument is even slightly out of tune and also recognize how to fix the problem. I certainly don’t have the ear for that kind of thing. Keith mentioned to me that he can hear the difference between the two wires he supplied to me. I realized quickly I had no hope of picking that up with my insensitive ears. I then realized that Keith was saying he could hear the difference even when the two wires are tuned. I wondered what it was that he was hearing. I turned to Wikipedia for some insight and stumbled across an entry on “inharmonicity” which is “the degree to which the frequencies of overtones (also known as partials or partial tones) depart from whole multiples of the fundamental frequency”.  I realized that the sound waves are travelling through the wires at slightly different rates due to elastic anisotropy coupled with grain-to-grain differences in orientation. Thus, while the average pitch of the wire will be in tune there will actually be a spread about that pitch. I might be able to estimate that spread using the principles of elastic anisotropy.

For a single crystal, the elastic behavior is anisotropic, as illustrated in the plot below of the elastic modulus for an iron single crystal (courtesy of Megan Frary at Boise State University).

The elastic properties of a single crystal can be expressed in terms of a tensor. This is handy, because rotating the property tensor to reflect the grain orientation with respect to a set of sample axes is fairly straightforward (C is the fourth-order elasticity tensor and g is the orientation matrix):

$$C'_{ijkl} = g_{im}\, g_{jn}\, g_{ko}\, g_{lp}\, C_{mnop}$$
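For a cubic crystal, applying this rotation to the compliance tensor and extracting the uniaxial response leads to a well-known closed-form expression for the Young's modulus along any crystal direction, which the sketch below uses in place of the full tensor rotation. The stiffness constants here are illustrative bcc-iron-like values, not the Fe-Mn constants from the paper cited in the next paragraph:

```python
import numpy as np

def youngs_modulus_cubic(direction, C11, C12, C44):
    """Young's modulus of a cubic crystal along a crystal direction, using
    1/E = S11 - 2*(S11 - S12 - S44/2)*(l1^2*l2^2 + l2^2*l3^2 + l3^2*l1^2),
    where the l_i are direction cosines and the S_ij are compliances."""
    denom = (C11 - C12) * (C11 + 2.0 * C12)   # cubic stiffness -> compliance
    S11 = (C11 + C12) / denom
    S12 = -C12 / denom
    S44 = 1.0 / C44
    l = np.asarray(direction, dtype=float)
    l /= np.linalg.norm(l)
    J = (l[0] * l[1])**2 + (l[1] * l[2])**2 + (l[2] * l[0])**2
    return 1.0 / (S11 - 2.0 * (S11 - S12 - 0.5 * S44) * J)

# Illustrative bcc-iron-like stiffnesses in GPa (not the Fe-Mn values):
for d in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
    print(d, round(youngs_modulus_cubic(d, 230.0, 135.0, 117.0), 1), "GPa")
# -> roughly 130, 219, and 284 GPa: stiffest along <111>, softest along <100>
```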

The next step was to simply go to a reference volume and find the single crystal elastic constants for fcc and bcc iron, plug them into OIM Analysis and then "Bob's your uncle". However, I learned it wasn't nearly as straightforward as I thought. Once again, some searching on the internet led me to some papers on first-principles calculations of elastic constants, and I quickly discovered that estimating elastic constants at room temperature is not as simple as I would have thought. I found several papers by Levente Vitos and co-workers. Professor Vitos was kind enough to teach me a little about this field, and after some correspondence with him I decided to use the elastic constants for Fe-Mn found in Zhang, H., Punkkinen, M. P., Johansson, B., & Vitos, L. (2012). Elastic parameters of paramagnetic iron-based alloys from first-principles calculations. Physical Review B 85, 054107. My thinking was that while the absolute values of the constants were probably not accurate, the ratios between the components would be consistent enough to at least give me a rough idea of the distribution. (I tried some of the other constants in this paper and the results were all pretty similar.)

I then calculated the distribution of elastic moduli parallel to the longitudinal direction of the wire, thinking this might give me an idea of the differences in the distribution of pitches one might hear when a piano wire is struck. The results are shown below: on the left for actual elastic moduli, and on the right for how this might translate to a distribution of pitches. However, this second plot is only a schematic to illustrate my thinking. I have no idea how the elastic moduli variations would translate to pitch variations; the horizontal scale could be much wider, i.e. the flats and sharps could be much farther away from the center pitch, and it may also not be a linear relationship. Also, the choice of C is completely arbitrary; the actual pitch will depend on the diameter of and tension on the wire.


The bad wire (in terms of breakage), which according to Keith’s ear has a clearer sound than the wire less prone to breaking, has a narrower distribution of elastic moduli. Of course, I may be completely off-base as a “fuller” sound may correspond to the broader distribution as well. Perhaps what Keith can hear is the fine balance between clarity and fullness. So if my large set of assumptions is correct then, while I may not be able to hear the difference, I can at least see the difference in the texture data.