A large portion of the US today saw a real-world teaching moment about something microanalysts think about every day.
Figure 1. Total solar eclipse. Image credit: nasa.gov
With today’s solar eclipse, you could see two objects that subtend the same solid angle in the sky, assuming you were in the path of totality. Which is bigger, the Sun or the Moon? We all know that the Sun is bigger; its radius is nearly 400x that of the Moon.
Figure 2. How it works. Image credit: nasa.gov
Luckily for us nerds, it is also nearly 400x further away from the Earth than the Moon is. This is what makes the solid angle of the two objects the same, so that from the perspective of viewers on Earth, they take up the same area in the sphere of the sky.
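This equal-angle coincidence is easy to check numerically. The sketch below (approximate mean radii and distances, so treat the numbers as round figures) computes the solid angle each body subtends from Earth:

```python
import math

def solid_angle_sr(radius_km: float, distance_km: float) -> float:
    """Solid angle subtended by a sphere of radius R at distance d:
    Omega = 2*pi*(1 - cos(theta)), with sin(theta) = R/d.
    For small angles this reduces to ~pi*(R/d)**2."""
    theta = math.asin(radius_km / distance_km)
    return 2.0 * math.pi * (1.0 - math.cos(theta))

# Approximate mean values: Sun R ~696,000 km at ~1 AU; Moon R ~1,737 km
sun = solid_angle_sr(696_000, 149_600_000)   # ~6.8e-5 sr
moon = solid_angle_sr(1_737, 384_400)        # ~6.4e-5 sr
print(f"Sun/Moon solid angle ratio: {sun / moon:.2f}")  # ~1.06
```

Because the Moon’s orbit is elliptical, the ratio swings around 1 over the course of a month, which is why some eclipses are annular rather than total.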
The EDAX team observes the solar eclipse in NJ, without looking at the sun!
Why does all this matter for a microanalyst? We always want to get the most out of our detectors, and that means maximizing the solid angle. To maximize it, you really have two parameters to play with: how big the detector is and how close the detector is to the sample. ‘How big is the detector’ is easy to play with. Bigger is better, right? Not always, as the bigger it gets, the more you start running into challenges with pushing charge around, which can lead to issues like incomplete charge collection, ballistic deficits, and other problems that many people never think about.
All these factors tend to lead to lower resolution spectra and worse performance at fast pulse processing times.
What about getting closer? Often, we aim for a take-off angle of 35° and want to ensure that the detector does not protrude below the pole piece to avoid hitting the sample. On different microscopes, this can put severe restrictions on how and where the detector can be mounted, and we can end up in the situation where we need to move a large detector further back to make it fit within the constraining parameters. So, getting closer isn’t always an option, and sometimes going bigger means moving further back.
Figure 3. Schematic showing different detector sizes with the same solid angle. The detector size can govern the distance from the sample.
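The trade-off in Figure 3 can be put in numbers. For a detector whose active area is small compared with its distance to the sample, the solid angle is roughly area/distance². The sizes and distances below are purely illustrative, not the geometry of any particular detector:

```python
def detector_solid_angle_sr(active_area_mm2: float, distance_mm: float) -> float:
    """Small-detector approximation: Omega ~ A / d**2 (steradians)."""
    return active_area_mm2 / distance_mm**2

# Illustrative geometries: a 30 mm^2 sensor at 35 mm, and a 120 mm^2
# sensor that had to be pulled back to 70 mm to fit the chamber.
small = detector_solid_angle_sr(30.0, 35.0)
large = detector_solid_angle_sr(120.0, 70.0)
print(small, large)  # identical: 4x the area at 2x the distance cancels out
```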
In the end, bigger is not always better. When looking at EDS systems, you have to compare the geometry just as much as anything else. The events happening today remind us of that. Sure, the Sun is bigger than the Moon, but the latter does just as good a job of making a part of the sky dark as the Sun does of making it bright.
Dr. Patrick Camus, Director of Research and Innovation, EDAX
Stimulation for new research approaches and topics can come from odd origins and at the most unexpected times.
We recently held a Sales Meeting at the factory in Mahwah. During a presentation by Dr. Jens Rafaelsen, an Applications Scientist, he mentioned an unexpected EDS result: a brand new EDS Elite detector was collecting more x-rays than a larger, older Octane detector under the same geometry and SEM conditions. This result is quite unexpected and seems to violate physics and our typical ideas about x-ray detection. If confirmed, it has far-reaching implications for Sales and Marketing and would be exploited in the coming months. But the science behind the result was unknown at the time.
EDS spectrum and modelling of Mg-Calcite.
A further discussion with Jens after his presentation inspired me to jot down some notes on a scrap of paper I had on hand. From these notes, I drafted an approach to an x-ray detection modelling experiment that would require input from Jens and another scientist within the company. The experiment goes beyond the simple approach of associating detector performance with solid angle alone. That method may work when most of the sub-assemblies of the detection system are similar. However, for the latest generation of EDS detection systems, the use of modern materials requires a more complete system analysis.
Together, we will refine the model, compare its results to empirical data, and hope to produce both internal and external publications.
All of this work was sparked by a subtle but original observation by a coworker. Inspiration can come from unexpected sources and at unexpected times. Where have your inspirations come from?
BLOG UPDATE FROM PAT – March 23, 2016
A new result has been found while modelling different detector configurations. The thickness of the silicon support grid for the windows is significantly different for the traditional polymer (>300 µm) and the new Si-N (<50 µm) windows. This creates a different absorption of x-rays as a function of x-ray energy. This is illustrated in the following figure.
The predicted increase of the transparency of the Si-N window grid at intermediate x-ray energies has the potential to increase the total count rates of the detection system by a significant amount. More details to follow.
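A simple Beer-Lambert sketch illustrates the effect. The open-area fraction and the Si attenuation length used below are rough, assumed values for intermediate X-ray energies, meant only to show the direction and rough magnitude of the trend:

```python
import math

def grid_transmission(open_fraction: float, grid_thickness_um: float,
                      attenuation_length_um: float) -> float:
    """Fraction of X-rays passing a supported window: the open area
    transmits fully, while the grid bars attenuate per Beer-Lambert."""
    blocked = 1.0 - open_fraction
    return open_fraction + blocked * math.exp(-grid_thickness_um / attenuation_length_um)

# Assumed values: ~77% open area; Si attenuation length on the order of
# 100 um at intermediate (~10 keV) photon energies.
thick_grid = grid_transmission(0.77, 300.0, 100.0)  # thick bars: nearly opaque
thin_grid = grid_transmission(0.77, 50.0, 100.0)    # thin bars: partly transparent
print(f"{thick_grid:.2f} vs {thin_grid:.2f}")
```

At low energies both grids are effectively opaque outside the open area, and at very high energies both become transparent; the predicted gain shows up in between.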
One of the things I learned during the 2015 Microscopy and Microanalysis meeting was just how efficient plasma cleaners really are and this is a short story about how it saved the day for us. We had shipped our older Hitachi S3400N microscope from Mahwah to Portland for the show and had tested everything before it went on the truck. The meeting opened Monday August 3 at noon so Sunday was set aside for getting everything set up and calibrated. While our service group had done most of the work, I had a bit of data I wanted to collect for the days to follow. So I sat down at the microscope, turned on the beam, and stared at the current meter showing next to nothing. I checked the usual microscope settings and fidgeted with the apertures but still couldn’t get a decent current down through the column. Since we were a little short on time and the Hitachi booth was close by, we went over and looked sufficiently desperate for the Hitachi service guys to take pity on us and come to help.
I noticed the Hitachi guys going through the same steps I had done and ending up with the same problem, so at least it wasn’t just down to my shortcomings regarding microscope service. As the last step, they pulled out the aperture strip, and the black gunk covering all three apertures gave us a pretty good indication of the problem: the beam was being severely attenuated simply because the apertures were clogged up with carbon contamination. Of course the Hitachi guys’ immediate question was “Did you bring a new aperture strip?” and my answer was a meek “No…”. But then I remembered that I did bring a plasma cleaner. I didn’t really believe that it would be able to do much with the level of contamination that we had on the apertures, but it was still worth a shot. So I put the aperture strip in the cleaner chamber and ran it at a pressure of 2×10⁻² mbar with a power of 50 W.
I have to say that I was extremely surprised when the aperture strip looked as good as new after only 10 minutes of plasma exposure. Both the EDAX and Hitachi service guys were equally impressed and after mounting the strip back in the column we were up and running again. So 10 minutes of plasma cleaning saved us from having to either try to have an aperture strip shipped in overnight or run the microscope with no aperture and ensuing risk of sample damage and reduced imaging capability. Unfortunately I didn’t take any pictures before and after cleaning, as I honestly was not expecting it to work, but the picture below shows us busy running demos on the Hitachi during the show.
The EDAX booth at M&M 2015
At this point you might wonder why I had brought a plasma cleaner in the first place. Well, one of the things that we were highlighting with the new Octane Elite detector we launched at the show was the silicon nitride window and its durability. I had run a test on my office desk with a live detector mounted directly on an asher chamber (shown in Figure 1) that I borrowed from Vince Carlino of ibss Group, Inc. When the asher chamber is running, it looks like something out of a science fiction movie so we wanted to do something similar at the M&M meeting as a visual prop.
Since a full detector takes up space we simply put a single detector module directly in the asher chamber and started the cleaning process on Monday when the exhibition began. I took pictures of the controller for the system and the module at the start and end of each day as can be seen in the picture sequence below.
Figure 2: The controller and module at the start and end of each day.
After almost 76 hours of continuous plasma exposure, the silicon nitride window shows no signs of degradation, and knowing what plasma cleaning did to the aperture strip, I am pretty certain that there was absolutely no carbon contamination on the window. Of course, this is more of a show-and-tell kind of experiment, and the testing I did before this involved detailed monitoring of the module performance and temperature to detect any pin-holes that would not be visible by eye. That report will be available shortly.
The next step will be to try the same with a polymer window but I am still thinking about exactly how to design the experiment. Of course I could just clean it for an extended period of time and see if the window is still intact but it would be nice to have a metric of how fast the damage occurs (or not). One idea would be to use a bare window and correlate the ratio of the carbon and aluminum signal of the window to the silicon peak from the support grid in order to monitor any changes in thickness, but if anyone has other suggestions, I would be happy to hear them.
Even though the tabletop plasma cleaner has been around for a number of years, its usefulness is sometimes overlooked because it is a small piece of auxiliary equipment. Sometimes, however, the smallest of equipment can provide the largest benefit!
The last few months have been some of the most rewarding of my time at EDAX. Over the last year or more, we’ve been working on a series of new detector technology offerings, which we can now finally bring to our customers. These detector advancements are quite literally shattering past performance limits. And it’s not just one technology, but a combination of three technologies together that makes the Octane Elite launch one of the most exciting of my 20-year career here.
Two months ago, I sat at the system generating data that would give us an idea of the performance specifications we could associate with the product promotion as we went to launch. I had just achieved a never-before-reached input count rate of 2 million counts per second but was slightly hesitant to promote that, since it varies with SEM conditions and sample. So, I let it sit for a bit, and we went into a stellar M&M show with a strong set of performance specs, from low energy performance to grid materials and spectral resolutions at high speeds. Following the show, we had a webinar planned, which again focused on those performance limits. It’s been one of my goals this year to be very data-driven, using direct examples to let a story show itself, so a crucial piece of the webinar was to collect applied examples that illustrate the specs, and this made a great opportunity to revisit the 2 MCPS data collection.
Being efficient (much like our detectors!), I like to try to use one sample to tell multiple stories, so I grabbed a favorite ductile iron sample, which has both carbon for low energy performance and iron for high speed mapping. My first notable point was that I could run the count rate up to 750 K CPS input with max output at 60% dead time and still obtain an excellent carbon peak in a spectrum extracted from a map (Figure 1). At these high count rates, older technology detectors cannot maintain this type of performance, and we’ve even seen carbon dropping off at 500 K CPS, our previous best, which was also an industry high. And by dropping off, I really do mean that the spectrum no longer displays a carbon peak at all, or in some cases shows a highly distorted peak with little differentiation from the background. So, by achieving a carbon map at one and a half times the highest count rate ever achieved before, I felt I really shattered previous limits. I didn’t stop there, but pushed the count rate up to 1.5 M CPS input and was still able to detect the carbon peak, albeit with some degradation in the quality of the spectrum.
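To relate input and stored counts: at 60% dead time only 40% of the input rate ends up in the map, and the classic paralyzable dead-time model shows why throughput on older systems peaks and then collapses as the input rate climbs. The processing time below is an illustrative assumption, not an EDAX specification:

```python
import math

def stored_rate(input_cps: float, dead_time_fraction: float) -> float:
    """Counts actually stored: the input rate scaled by the live-time fraction."""
    return input_cps * (1.0 - dead_time_fraction)

def stored_rate_paralyzable(input_cps: float, tau_s: float) -> float:
    """Paralyzable dead-time model: stored = input * exp(-input * tau).
    Throughput peaks at input = 1/tau and falls off beyond it."""
    return input_cps * math.exp(-input_cps * tau_s)

print(stored_rate(750_000, 0.60))  # 750 kcps at 60% dead time -> 300000.0

tau = 0.5e-6  # assumed 0.5 us effective processing time (illustrative)
for rate_in in (5e5, 1e6, 2e6, 2.8e6):  # throughput peaks at 1/tau = 2 Mcps
    print(f"{rate_in:.1e} cps in -> {stored_rate_paralyzable(rate_in, tau):.2e} cps stored")
```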
Figure 1 clearly displays the quality of the low energy performance, even in a spectrum extracted from a high speed map collected at 750 KCPS.
But why stop there? As I was already ramping up the count rate, I figured I’d continue as far as I could, and I opened the aperture all the way on our thermal FEG. At this point, I was running our SEM at 20 kV and max aperture, which would mean a beam current at or above 100 nA. This is really not an achievement in itself, since almost all thermal FEGs can get there, and this SEM is 15 years old. The steel sample was conductive, of course, making it suitable for this condition, but it was mounted in a non-conductive mount, so I had it grounded simply with carbon tape.
Once I opened the aperture, I had to do a double take at the CPS since that was a lot of numbers, and I actually counted to make sure I had it right – we were at 2.8 M CPS input! The reason I had a hard time believing this is that normally at that count rate, the detector would saturate and this time it did not. I was certain at that moment that I had broken our new detector and what I was seeing must be noise, because even just getting those counts without the detector turning off is a feat. So, of course I had to collect data to see what the quality was. And while the dead time was high at >90%, I was still able to collect a phase map (Figure 2) where both the low energy elements and higher energy steel were solved by the phase map routine in just a few passes.
These detectors have a great many additional performance enhancements with the Silicon Nitride window, vacuum encapsulation and CUBE electronics, but this example serves as a good display of the payoff of all combined, and this work would not be possible without the benefits of all of these aspects together.
To also address the windowless comments that I’ve gotten since my webinar, in summary, that’s an altogether different product. Our Octane Elite is a mainstream, general purpose detector that has all of these performance benefits, while the windowless serves more of a niche set of applications. I’ve had a windowless detector in my lab for years now and I’ll be very honest, it sits unused on a bench and I only mount it when I have a special requirement. My detector of choice, given my unlimited detector options, is absolutely the Octane Elite.
Figure 2 shows the highest x-ray map ever collected at EDAX at 2.8 million counts per second at 20 kV with the Octane Elite detector technology. Steel matrix is shown in red and graphite nodules are blue.
On a side note – we’re currently looking to fill an EBSD apps position in our NJ lab, and as I describe the job to potential candidates, I’m always drawn to some of the real highlights that an applications position offers someone in the technology field. I hope this blog today captures it perfectly. As an apps person, we bridge the area between commercial and development, or customer and engineering. In fact, it’s even part of our mission statement that the applications group understands real-world customer needs and translates them into the product development process at EDAX for our future products. This, in turn, strengthens our products and services to meet the most important needs: those of our customers and those that further the technology into groundbreaking directions like this never-before-achieved detector performance.
My colleague René de Kloe’s March blog contribution “Resolving Matters” on resolution in the world of EBSD sparked a few thoughts along similar lines for EDS. Just like EBSD, we have several resolutions in play when we are discussing EDS data. The easiest one to deal with is the detector resolution, which is defined as the FWHM of the Mn Kα peak and has typical values in the range of 121-130 eV. This is a value that is pretty easy to understand, and the value at the different processing times can be seen directly in the TEAM interface. But often we are asked the question of “how low can you go?” and when this comes up, we are typically talking about very different resolution parameters. It used to be that this question meant “how low a concentration of this or that element can you detect?” or in other words, what’s the method/minimum detectable limit (MDL), but with mapping now being a standard data acquisition technique, it often means “how small a feature can you see in the maps?”
The first interpretation of the question is not as easy as one might think. Unlike the detector resolution, which is a fixed value regardless of your sample or microscope settings, the MDL is heavily dependent on the microscope settings, the sample composition, the detector resolution, and last but definitely not least, the number of counts in the spectrum. At the end of the day, the MDL comes down to whether we can reliably say that there is a peak at the energy of a certain element. The method to establish whether a peak is present is typically borrowed from the world of microprobes, where we simply look at the number of counts on the peak centroid and the number of background counts. If the number of peak counts is above the background by some level (often 3 standard deviations), we say the element is above the detectable limit. This means that we need a high degree of precision in our results to drive the noise levels down, which is where the number of counts comes into play; essentially, the longer the acquisition time, the lower the MDL, with all other parameters being the same. And since both background and peak counts are affected by the microscope parameters and the composition of the sample, an MDL measured on one sample would not be applicable to another unless the compositions of the two were very close and the same microscope parameters were used. This also means that the MDL will differ significantly depending on which X-ray lines are used. If we consider the pyrite (FeS2) spectra shown below for the range of the Fe K-lines and L-lines and simply use the counts in the highest channel and the corresponding background fit value, the peak-to-background ratios are 37 and 18 for the high energy and low energy peaks, respectively. While this approach is a little simplified, it basically means that the MDL when using the high energy peak is half of that for the low energy peak, simply due to the difference in peak-to-background ratio.
And of course, if the processing time is changed, the detector resolution will change, and consequently the peak will become broader or narrower, which will change the counts in the highest channel.
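The 3-standard-deviation criterion and the effect of acquisition time can be sketched in a few lines. The counts below are invented purely for illustration:

```python
import math

def element_detected(peak_counts: float, background_counts: float,
                     n_sigma: float = 3.0) -> bool:
    """Microprobe-style test: the net peak must exceed the background by
    n_sigma standard deviations (counting statistics: sigma ~ sqrt(background))."""
    net = peak_counts - background_counts
    return net > n_sigma * math.sqrt(background_counts)

# A marginal peak: doubling the acquisition time doubles all counts,
# but the noise only grows as sqrt(counts), so the element emerges.
print(element_detected(1090, 1000))   # False: net 90 < 3*sqrt(1000) ~ 95
print(element_detected(2180, 2000))   # True:  net 180 > 3*sqrt(2000) ~ 134
```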
The lesson to be learned here is, that when we are reluctant to answer this aspect of the “how low can you go?” question, it’s not because we are trying to hide something, it’s simply because we would need to know the composition of the sample, the microscope parameters used, the X-ray line of interest, and the statistics/number of counts in the spectrum, before we can even start to do the math or give you an educated guess. And it should also be taken into account that the answer that comes out of the calculations will only be applicable to that specific sample and not a universal limit.
The other aspect of “how low can you go?” really comes down to the resolving power of X-ray maps. We can easily increase the size of the maps we acquire to say 4096×3200 pixels, but if the area/volume we get information from is larger than the single pixel size, this doesn’t really give us any additional information. The interaction volume can be modeled in various ways, but since an image is worth a thousand words, I had been thinking about a good sample to illustrate this for a while. Luckily I was at Pittcon in New Orleans this March, and thanks to John Yorston from Zeiss, I learned that flicking a lighter and letting the sparks hit a stub with carbon tape will allow you to pick up particles rich in O, Mg, Fe, La, and Ce with a size distribution from tens of µm down to about 100 nm. Shortly after getting back into the lab, I “borrowed” an empty lighter from our Software Manager Divyesh Patel, and popped the resulting sample in our FEI Nova Nanolab 200. The resulting SEM image acquired at 10 kV can be seen below.
SEM Image Acquired at 10 kV
The largest particle in the center of the image is roughly 700 nm in diameter, while the slightly smaller particle below it is about 300 nm, and the small particles scattered around are on the order of 100 nm. While these particles can easily be seen in the SEM image, there is typically a world of difference between the interaction volume of secondary electrons and X-ray photons, so what would an X-ray image look like? In the TEAM software we also build up an image based on the X-ray counts per second (CPS) in each pixel when we are mapping the sample. The CPS image of the same region as the SEM image can be seen below, and one can immediately see that they are quite similar. Some shadowing can be seen to the right of the large particle where X-rays are being blocked, and a few features are slightly more blurry in the CPS image, but the CPS image also shows us features that were not visible in the SEM image (compare the top left corner in the two images) and clearly resolves the 100 nm particles.
So does this mean that the resolution of the X-ray maps is comparable to the SEM image? Well, yes and no. The secondary electrons that we use for the SEM image are typically defined as anything with an energy below 50 eV, which means that they can only escape from a region very close to the surface. X-rays, on the other hand, can travel through a significant distance of material, depending on the composition and photon energy. Shown below are two simulations (created using the CASINO software) of the energy deposited in the sample with a 100 nm layer of either CeLa or Mg on top of a carbon substrate.
Two simulations of the energy deposited in the sample with a 100 nm layer of either CeLa or Mg on top of a carbon substrate.
The simulations show that for the CeLa layer, the energy is pretty much confined to the layer/particle, while we get a significant penetration into the substrate if the layer is made of Mg. The data showed that the large center particle was primarily La and Ce while the smaller particles around it were mostly Fe and Mg (La and Ce maps shown below). So while we seemingly have quite good resolution in the CPS image, the depth from which we get information can vary dramatically with the composition of the particles.
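As a rough analytical cross-check on the CASINO results, the Kanaya-Okayama range formula gives the same qualitative picture. The material constants below are approximate handbook values:

```python
def kanaya_okayama_range_um(e_kev: float, a_gmol: float, z: int,
                            rho_gcm3: float) -> float:
    """Kanaya-Okayama electron range in um:
    R = 0.0276 * A * E^1.67 / (Z^0.89 * rho), with E in keV, rho in g/cm^3."""
    return 0.0276 * a_gmol * e_kev**1.67 / (z**0.89 * rho_gcm3)

# 10 kV beam into bulk Mg versus bulk Ce (approximate A, Z, density)
r_mg = kanaya_okayama_range_um(10.0, 24.3, 12, 1.74)   # ~2 um
r_ce = kanaya_okayama_range_um(10.0, 140.1, 58, 6.77)  # well under 1 um
print(f"Mg: {r_mg:.2f} um, Ce: {r_ce:.2f} um")
```

So a 100 nm Ce/La particle absorbs a large share of the deposited energy itself, while in Mg most of the beam punches straight through into the substrate.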
La and Ce maps
Again, we end up with the problem that there’s no simple answer to this aspect of the “How low can you go?” question. It depends on several parameters, including the composition of the feature of interest, but simulations can go a long way towards helping us understand what is going on in the sample and give us an idea of what settings we should use for the data acquisition.
While I have been somewhat non-committal in the details of the mapping resolution here, we will be discussing some of these topics in more detail in our upcoming webinar “Low Energy and High Spatial Resolution EDS Mapping”* on June 18, 2015 and look at what differences we see when changing the parameters. As for the MDL side of the discussion, this might be something for a future webinar as well, or come see us August 2-6 at M&M in Portland where at the very least we will have a poster covering part of this subject.