From Intern to Analyst – Studying the Impact of ‘Non-Ideal’ Samples on Quant Results

Kylie Simpson and Robert Rosenthal, 2016 Summer Interns at EDAX

Being surrounded by equipment worth more than your average college student can even fathom is incredibly daunting. Your heart still skips a beat at every hiss or beep that the microscope produces. Not to mention the fear of ramming into the pole piece while inserting the EDS detector (we later learned there was a hard stop to prevent this, but it never quite seemed to alleviate the fear). It’s hard to summarize all of the experiences from our internship at EDAX this summer. While it was only about two and a half months, the sheer amount of knowledge we gained through hands-on experience is unquantifiable. The five-day EDS training course in itself contained enough information to be taught over an entire college semester.

Working with the Applications team gave us a real feel for what EDAX is all about. Not only did we get to work on a summer-long project, we also got to work with the marketing, engineering, and software teams on a regular basis. We also helped with support for the new APEX software. This work setting provided us with a plethora of new knowledge, not only of the physics and programming behind EDAX software but also of the inner workings of the company and the crucial role that teamwork plays in accomplishing tasks. Having access to an electron microscope as well as the specialized knowledge of the members of the Applications team enabled us to get the most out of our summer here at EDAX. After sitting in on a meeting with other members of the Applications team, we were exposed to some of the real-world problems faced by customers on a regular basis and decided to investigate this further with our summer project.

When collecting quantification results for EDS, the ZAF matrix corrections are based on the assumption that the sample is flat, homogeneous, and infinitely thick to the electron beam. Although these are the ideal collection requirements, many customers run into problems when their samples do not meet these assumptions. We spent our time here testing the impact of ‘non-ideal’ samples on quant results while also determining ways for customers to improve the accuracy of quant results with these samples. We tested samples with rough topography by scratching up and polishing a stainless steel and a pyrite sample (Figure 1). By collecting a counts per second map for the steel (Figure 2), we were able to visualize the impact of rough samples and confirm the need for sample prep.

Figure 1. Pyrite particles and polished pyrite.

Figure 2. CPS maps of stainless steel surfaces.

We also tested inhomogeneous samples, including a lead-tin solder sample and a stainless steel sample (pictured below). By collecting spectra of these samples at different magnifications, we observed that lower magnifications yielded more accurate quant results.

Figure 3: Lead-Tin solder and stainless steel samples

Finally, we tested the impact of thin samples on quant results using an aluminum-coated piece of silicon. This sample was difficult to prepare, as we had to coat the silicon five separate times, but it yielded very interesting results (see the graph in Figure 4, left). Our results illustrated the influence of film thickness on quant results while also allowing us to back-calculate the thickness of each aluminum layer (Figure 4, right).

Figure 4.

Overall, we thoroughly enjoyed our summer at EDAX and will take away not only knowledge of EDS, EBSD, SEMs, computer programming, and teamwork, but also valuable problem solving skills applicable to classes, professions, and other real-world scenarios that we will encounter in the future.

Meet the Interns

Kylie Simpson: Kylie is currently a student at the Thayer School of Engineering at Dartmouth. She is participating in a dual-degree program with Colby College and Dartmouth College and is studying mechanical engineering and physics.

Robert Rosenthal: Robbie is currently a student at the University of Colorado at Boulder. He is going into his junior year studying mechanical engineering.

Training Classes and You

Shawn Wallace, Applications Engineer, EDAX

Over the last month or so, I have spent quite a bit of time training people on our systems. Between a workshop, the Lehigh Microscopy school, two webinars, and two in-house training courses, I have interacted with all levels of users. This had me thinking back to my experiences, years ago on the other side of the desk in the EDAX classroom and what I learned from the courses. With that in mind, I began thinking about what our customers/students can do to get the best out of our training sessions.

Lunch and Learn M&M 2016

The biggest thing they can do is to spend time familiarizing themselves with the general operation of their complete system: their SEM, our systems, and most importantly, their samples. Sit down, fiddle with things, and just learn how different settings interact: amp time and dead time for EDS, camera settings for EBSD (see my ‘Camera Optimization’ webinar). The main thing this does is make you start thinking about what these settings are doing and how they work with your samples. While you do this, you will start to formulate questions in your mind. Some of these questions you will be able to answer yourself. Some will be answered directly during the course. Others will click while you listen and make connections to your work, and I will see that ‘Aha!’ moment on your face as you figure out why that little trick worked or possibly failed miserably. By spending the time to figure things out on your own, you are getting in the right mindset to come to our courses and ask questions.

This leads to the second biggest thing you can do: Ask me questions! That is why engaging with your system is so important. You are setting yourself up to ask pertinent questions about your samples and your systems. You are finding your natural work flow, but our job is to help you to optimize it, to help you to understand what you are doing, and most importantly help you to understand why you should do it that way. This is why running your system with your samples is a very important thing to do before you come to our courses.

Another reason for asking questions is that you need to be an active learner and engage with your instructor (aka me). Ever sat in a college class and had the teacher just talk and talk and talk for hours on a subject as you sip your coffee to try to keep yourself from dozing off? Ever taught a class and looked at the faces of people sipping their coffee as their heads do that little nod as they fail to stay awake? It’s not fun for either person. I always start my training courses by saying that I want questions. I want you to be engaged and thinking during the entirety of my courses. I want it to not be a lecture, but a conversation. I want that instant feedback to help me understand what concepts you are struggling with and what topics are clicking, so that I can dive deeper into subjects that I need to.
That’s it. That is all you need to do to come to our courses and get the most out of them. Be prepared and be engaged. You will absorb the information we are giving you and you will be able to take it home and put it to use to get better and faster results, while understanding what the system is doing at a much deeper level.

With all that said, there is one more important step. You should never stop learning. Luckily for you, the applications team here at EDAX is always creating new resources for our customers to learn with. Sometimes it is a quick blog post about some neat new feature we have implemented; at other times it’s a webinar covering the most difficult aspects of microanalysis.

I hope to see you soon on the other side of a desk. Happy Learning in the meantime!

Click here for more information about upcoming EDAX training sessions.

“It’s not the size of the dog in the fight, it’s the size of the fight in the dog.” (Mark Twain)

Dr. Oleg Lourie, Senior Product Manager, EDAX

San Javier, Spain, October 18, 2015: Airbus A400M airlifter escorted by Spain’s Patrulla Águila squad at their 30th anniversary celebration event.

Many of us like to travel, and some people are fascinated by the view of gigantic A380 planes slowly navigating the tarmac with projected gracious and powerful determination. I too could not overcome a feeling of fascination every time I observed these magnificent planes; they are really, literally, big. The airline industry, however, seems to have a more practical perspective on this matter: A380 purchase volumes are in decline, and according to recent reports Airbus is considering reducing production based on a growing preference for smaller and faster airplanes. Although the connection may seem slightly tenuous, in my mind I see a fairly close analogy to this situation in the EDS market, when the discussion turns to the size of EDS sensors.

In modern microanalysis, where studies of compositional structure rapidly become dependent on a time scale, the use of large sensors can no longer be the sole solution to optimizing the signal. The energy resolution of an EDS spectrometer is tied to its signal detection capability: the signal-to-noise ratio ultimately determines the energy resolution of the detector. Fundamentally, to increase the signal-to-noise ratio one may choose to increase the signal, i.e. the number of counts, or alternatively to reduce the noise of the detector electronics and improve its sensitivity. The first methodology, based on a larger number of counts, is directly related to the amount of input X-rays determined by the solid angle of the detector and/or the acquisition time. A good example of this approach would be a large SDD sensor operating at long shaping times. A conceptually alternative methodology would be to employ a sensor with a) reduced electronics noise and b) higher efficiency in X-ray transmission, which implies fewer X-ray losses in transit from the sample to the recorded signal in the spectrum.

Using this methodology, the signal-to-noise ratio can be increased with a smaller, more transmissive sensor operating at higher count rates rather than a larger sensor operating at lower count rates.

To understand the advantage of using a small sensor at higher count rates, we can review a simple operational model for an SDD. The drift time of the charge generated by an X-ray in the Si body of the sensor can be modeled either with a simple linear trajectory or with a random walk model. In both cases, we arrive at an approximate l ~ √t dependence, where l is the distance traveled by the charge from cathode to anode and t is the drift time. With regard to sensor size, this means that the time to collect the charge from a single X-ray event is proportional to the sensor area. As an example, a simple calculation with an assumed electron mobility of 1500 cm²V⁻¹s⁻¹ and a bias of 200 V results in a drift time estimate of 1 µs for a 100 mm² sensor and 100 ns for a 10 mm² sensor. This implies that in order to collect the full charge in a large sensor, the rise time for the preamplifier needs to be in the range of 1 µs, vs the 100 ns rise time that can be used with a 10 mm² sensor. With a 10 times higher readout frequency, a 10 mm² sensor can collect a signal equivalent to that of a 100 mm² sensor.
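The drift-time estimate can be reproduced with a short back-of-the-envelope calculation. This sketch assumes the characteristic drift length is the radius of a disc of equal area and that the field is simply bias over drift length, illustrative simplifications rather than a rigorous device model:

```python
import math

def drift_time(area_mm2, mobility_cm2_per_Vs=1500.0, bias_V=200.0):
    """Rough SDD charge drift time estimate from the l ~ sqrt(t) scaling.

    The characteristic drift length l is taken as the radius of a disc of
    the same area (an illustrative assumption). With drift velocity
    v = mu * E and field E ~ V / l, the drift time is t = l^2 / (mu * V),
    so t scales linearly with sensor area.
    """
    area_cm2 = area_mm2 / 100.0
    l_cm = math.sqrt(area_cm2 / math.pi)  # radius of an equivalent disc
    return l_cm ** 2 / (mobility_cm2_per_Vs * bias_V)  # seconds

t_large = drift_time(100.0)  # ~1 us, matching the estimate in the text
t_small = drift_time(10.0)   # ~100 ns
print(f"100 mm^2: {t_large * 1e6:.2f} us, 10 mm^2: {t_small * 1e9:.0f} ns")
```

Note that the ratio of the two drift times is exactly 10, reflecting the t ∝ area scaling regardless of the geometric assumptions.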

What will happen if we run a large sensor at high count rates? Let’s assume that a 100 mm² sensor in this example can utilize the 100 ns rise time. In this case, since the rise time is much shorter than the charge drift time (~1 µs), not all of the electrons produced by an X-ray event will be collected. This shortage will result in an incomplete charge collection (ICC) effect, which introduces artifacts and degrades the energy resolution. A single characteristic X-ray for Cu L and Cu Kα will generate around 245 and 2115 electrons respectively in Si, which drift to the anode, forced by the applied bias, in quite large electron packets. Such large electron packets expand rapidly during the drift, with an ultimately linear expansion rate vs drift time. If the rise time used to collect the electron packet is too short, some of the electrons in the packet will be ‘left out’, which results in less accurate charge counting and consequently a less accurate readout of the X-ray energy. This artifact, called ‘ballistic deficit’ (BD), negatively affects the energy resolution at high count rates. It is important to note that both the ICC and BD effects for large sensors become more pronounced with increasing energy of the characteristic X-rays, which means the resolution stability will deteriorate even more rapidly for higher Z elements compared to the low energy/light element range.
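The electron-packet sizes quoted above follow from dividing the photon energy by the mean electron-hole pair creation energy in Si. A quick sketch, assuming w = 3.8 eV (values of 3.6–3.8 eV appear in the literature; 3.8 eV reproduces the numbers in the text) and taking Cu L as ~0.93 keV and Cu Kα as ~8.04 keV:

```python
# Electrons generated per X-ray photon absorbed in Si: n = E / w, where
# w is the mean electron-hole pair creation energy (~3.8 eV assumed here).
W_SI_EV = 3.8

def electrons_per_photon(energy_ev, w_ev=W_SI_EV):
    return round(energy_ev / w_ev)

print(electrons_per_photon(930.0))   # Cu L  (~0.93 keV) -> 245 electrons
print(electrons_per_photon(8040.0))  # Cu Ka (~8.04 keV) -> 2116 electrons
```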

Figure 1: Comparative Resolution at MnKα (eV) *

As a practical illustration of this topic, actual SDD performance for sensors with different areas is shown in Figure 1. It displays the effect of the acquisition rate on the energy resolution for EDS detectors with different sensor sizes and electronics designs. Two clear trends can be observed: a rapid deterioration of the energy resolution with increasing sensor size for the traditional electronics design, and much more stable resolution performance at high count rates for the sensor with the new CMOS-based electronics. In particular, the data for the Elite Plus with a 30 mm² sensor shows stable resolution down to a 0.96 µs shaping time, which corresponds to >200 kcps OCR.

In conclusion, employing a smaller sensor with optimized signal collection efficiency at higher count rates offers an attractive alternative: it can acquire an X-ray signal matching that of a large-area sensor, combined with high throughput and improved energy resolution. Ultimately, the ideal solution for low-flux applications will be a combination of several smaller sensors arranged in an array, which will combine all the benefits of smaller geometry, higher count rates, and higher transmissivity with a maximized solid angle.

* SDD performance data courtesy of the EDAX Applications Team.

How To Get the Maximum Benefit from Visiting the Show Floor at a Microanalysis Conference

Dr. Patrick Camus, Director of Research and Innovation, EDAX

Control 2016

This is the time of year when many analysts are scrambling to finalize details for the Microscopy & Microanalysis Conference – to be held this year in Columbus, OH. We too are striving to present our products in the best light for attendees to evaluate.

As conference attendees, you may well be coming with the task of evaluating and comparing software and equipment from a variety of vendors. Many will also be booking demonstrations, provided by the very capable application specialists of the representative companies. Their job, as well as mine, is to sell you the best product available, which obviously is from EDAX (wink, wink).

But what is your task for the week, and how should you prepare? I have a few universal topics that you might like to consider before you even hit the show floor.

Your primary task is to get enough information to make an educated decision about the best system at the fairest price to benefit the customers of your lab. That system may be the BEST IN THE WORLD system or it may have the absolute lowest price, but knowing the criteria before seeing the competing systems will help you balance the costs and benefits and select the best system for your lab.

Below I will present some criteria for system selection. I will use x-ray microanalysis systems as examples because that is the equipment that EDAX sells, but the approach is universal for all equipment purchases.

  • Understand and appreciate all the system specifications because they are the best indicators of system quality and performance, but emphasize those that you currently employ or realistically could implement. For instance, if you have a low-end SEM, do you or will you operate at the maximum beam current of the system? How often do you really operate under the conditions necessary to obtain the resolution specifications? Make sure you understand how the system operates AWAY from the conditions used for specifications. These deviations may be more indicative of how your users operate and how useful the system will be for them.
  • Appreciate aesthetics, but look beneath the system “skin” to actual technical performance substance. Do your current operators work that way or can they be retrained to work that way? Is the technology truly new or just re-skinned? The workflow may demo well, but do your operators work in that manner?
  • Ask about your projected local service engineers. Ask for an interview with them before the sale. Over the lifetime of the system, you will probably work with them more than anyone else at the company.
  • During a demo, perform tests under your typical or expected operating conditions to get a feeling for real-world performance in your lab. But also ask for suggested optimized conditions for better performance for future analyses. How much training is included or can be upgraded? Would training at your site or at the vendor site be more effective for those involved?

These are just a few of the topics that you should consider. This is a lot of preparation work to do before you even hit the show floor, but the answers to these topics will make your system selection that much more satisfying in the long run. And job satisfaction for you and your users goes a long way!

Why is There an Error in My Measurement?

Sia Afshari, Global Marketing Manager, EDAX

One interesting part of my job has been the opportunity to meet people of all technical backgrounds and engage in conversations about their interest in analytical instrumentation and ultimately what a measurement means!

I recall several years back, on a trip to Alaska, I met a group of young graduate students heading to the Arctic to measure the “Ozone Hole.”  Being an ozone lover, I started asking questions about the methodology they were going to use to accomplish this important task, especially the approach for comparative analysis of the expansion of the ozone hole!

I learned that the analytical methods used for ozone measurement have changed over time, and that the type of instrumentation used for this purpose has changed along with advances in technology.  I remember asking about a common reference standard that one would use across the various techniques over the years to make sure the various instrumentation readings were within the specified operating parameters and the collected data represented the same fundamental criteria obtained by different methods, under different circumstances, over time.  These young scientists looked so puzzled by my questions about cross-comparison, errors, calibration, reference standards, statistical confidence, etc. that I felt I’d better stop tormenting them and let them enjoy their journey!

Recently I had occasion to discuss analytical requirements for an application that included both bulk and multilayer coating samples with a couple of analysts.  We talked about the main challenge for multilayer analysis being the availability of a reliable type standard that truly represents the actual samples being analyzed.  It was noted that the expectation of attainable accuracy for specimens where type standards are not available needs to be evaluated by considering the errors involved and the propagation of such errors through measurements, especially when one approaches “infinite thickness” conditions for some of the constituents!

As the demand for more “turn-key” systems increases, where users are more interested in obtaining the “numbers” from analytical tools, it is imperative for us as a manufacturer and software developer to embed the fundamental measurement principles into our data presentation in such a manner that a measurement result is qualified and quantified with a degree of confidence that is easily observable by an operator.  This is our goal as we embark on the development of the next generation of intelligent analytical software.

The other part of our contribution as a manufacturer is training users in their understanding of measurement principles.  It is imperative to emphasize the basics and the importance of following a set of check sheets for obtaining good results!

My goal is to use a series of blogs as a venue for highlighting the parameters that influence a measurement and the underlying reasons for errors in general, and to provide references for a better understanding of the expected performance of analytical equipment in general and x-ray analysis in particular!


So to start:

My favorite easy readings on counting statistics and errors are old ones but classics never go out of style:
• Principles and Practices of X-ray Spectrometric Analysis, by Eugene Bertin, Chapter 11, “Precision and Error: Counting Statistics.”
• Introduction to the Theory of Error, by Yardley Beers (Addison-Wesley, 1953).  Yes, it is old!

Also, Wikipedia has a very nice write-up, recommended by a colleague, on the basic principles of measurement uncertainty that could be handy for a novice or even an experienced user.  If you don’t believe in Wikipedia, at least the article has a number of linked reference documents for further review.

As food for thought on measurements, I would consider:
• What am I trying to do?
• What is my expectation of performance in terms of accuracy and precision?
• Are there reference standards that represent my samples?
• What techniques are available for measuring my samples?

With recognition of the facts that:
• There is no absolute measurement technique, all measurements are relative.  There is uncertainty in every measurement!
• The uncertainty in a measurement is a function of systematic errors, random errors, and bias.
• All measurements are comparative in nature, so there is a requirement for reference standards.
• Reference standards that represent the type of samples analyzed provide the best results.
• One cannot measure more accurately than the specified reference standard’s error range.  (Yes reference standards have error too!)
• Fundamental Parameters (FP) techniques are expedients when type standards are not available but have limitations.
• A stated error for a measurement needs to be qualified with a degree of confidence expressed as a number, i.e. standard deviations.
• Precision is a controllable quantity in counting statistics by extending the measurement time.
• What else is present in the sample often is as important as the targeted element in analysis. Sample matrix does matter!
• The more complex the matrix of the specimen being measured, the more convoluted are the internal interactions between the matrix atoms.
• Systematic errors are generally associated with accuracy, and random errors with precision.
• The uncertainties/errors add in quadrature (the square root of the sum of squares).
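Two of these points, the √N precision of counting statistics and the quadrature addition of uncertainties, can be illustrated with a short Python sketch (illustrative values only):

```python
import math

# Poisson counting statistics: for N counted photons the standard
# deviation is sqrt(N), so the relative precision is 1/sqrt(N) and
# improves simply by counting longer.
def relative_precision(counts):
    return 1.0 / math.sqrt(counts)

# Independent uncertainties add in quadrature:
# the square root of the sum of squares.
def combine_in_quadrature(*errors):
    return math.sqrt(sum(e * e for e in errors))

print(relative_precision(10_000))       # 0.01  -> 1% at 10,000 counts
print(relative_precision(1_000_000))    # 0.001 -> 0.1% at 1,000,000 counts
print(combine_in_quadrature(3.0, 4.0))  # 5.0
```

Note that going from 1% to 0.1% precision requires 100 times more counts, which is why precision is ultimately a trade against measurement time.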

Until next time, when we will visit these topics and other relevant factors in more detail. Questions, suggestions, and related inputs are greatly appreciated. By the way, I am still wondering whether the ozone layer is being measured scientifically these days!

Cleaning Up After EBSD 2016

Matt Nowell, EBSD Product Manager, EDAX

I recently had the opportunity to attend the EBSD 2016 meeting, the 5th topical conference in the Microanalysis Society (MAS) series on EBSD, held this year at the University of Alabama. This is a conference I am particularly fond of, as I have been able to attend and participate in all 5 of these meetings held since 2008. The conference has grown significantly since then, from around 100 participants in 2008 to around 180 this year. This year there were both basic and advanced tutorials, with lab time for both topics. There were also more opportunities to show live equipment, with demonstrations available all week for the first time. This is of course great news for EDAX, but I did feel a little bad that Shawn Wallace, our EBSD Applications guru in the US, had to stay in the lab while I was able to listen to the talks all week. For anyone interested or concerned, we did manage to make sure he had something to eat and some exposure to daylight periodically.

This conference also strongly encourages student participation and offers scholarships (I want to say around 70) that allow students to travel to and attend the meeting. It’s something I try to mention to academic users all the time. I’m at a stage in my career now where I am seeing that people who were students when I trained them years ago are now professors and professionals throughout the world. I’ve been fortunate to make and maintain friendships with many of them, and I look forward to seeing what this year’s students will do with their EBSD knowledge.

There were numerous interesting topics and applications including transmission-EBSD, investigating cracking, both hydrogen and fatigue induced, HR-EBSD, nuclear materials (the sample prep requirements from a safety perspective were amazing), dictionary-based pattern indexing, quartz bridges in rock fractures, and EBSD on dinosaur fossils. There were also posters on correlation with Nanoindentation, atom probe specimen preparation, analysis of asbestos, ion milling specimen preparation, and tin whisker grain analysis. The breadth of work was great to see.

One topic in particular was the concept of cleaning up EBSD data. EBSD data clean up must be used carefully. Generally, I use a Grain CI Standardization routine, and then create a CI >0.1 partition to evaluate the data quality. This approach does not change any of my measured orientations, and gives me a baseline to evaluate what I should do next. My colleague Rene uses this image, which I find appropriate at this stage:

Figure 1: Cleanup ahead.

The danger here, of course, is that further cleanup will change the orientations away from the initial measurements. This has to be done with care and consideration. I mention all this because at the EBSD 2016 meeting I presented a poster on NPAR, and people were asking what the difference is between NPAR and standard cleanup. I thought this blog would be a good place to address the question.

With NPAR, we average each EBSD pattern with all of the neighboring patterns to improve the signal-to-noise ratio (SNR) of the averaged pattern prior to indexing. Pattern averaging to improve SNR is not new to EBSD; we used it with analog SIT cameras years ago but moved away from it as a requirement as digital CCD sensors improved pattern quality. However, if you are pushing the speed and performance of the system, or working with samples with low signal contrast, pattern averaging is useful. The advantage of the spatial averaging with NPAR is that one does not pay the time penalty associated with collecting multiple frames at a single location. A schematic of this averaging approach is shown here:

Figure 2: NPAR.

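In code, this spatial averaging amounts to a sliding-window mean over the scan grid. Here is a minimal sketch (a simplified illustration on a square grid with a 3×3 kernel; this is not EDAX’s actual implementation, which works with the measurement grid’s own neighbors):

```python
import numpy as np

def npar_average(patterns):
    """Average each EBSD pattern with its immediate neighbors.

    patterns: array of shape (rows, cols, h, w) of raw patterns on a
    square scan grid. Averaging n uncorrelated frames improves SNR
    roughly as sqrt(n), at no extra acquisition time.
    """
    rows, cols = patterns.shape[:2]
    out = np.empty_like(patterns, dtype=float)
    for r in range(rows):
        for c in range(cols):
            # Clip the 3x3 neighborhood at the map edges.
            neighbors = [patterns[rr, cc]
                         for rr in range(max(r - 1, 0), min(r + 2, rows))
                         for cc in range(max(c - 1, 0), min(c + 2, cols))]
            out[r, c] = np.mean(neighbors, axis=0)
    return out
```

On synthetic noisy patterns, the averaged output has visibly lower pixel noise than the raw input, which is the whole point of the approach.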
As an experiment, I used our Inconel 600 standard (nominally recrystallized), and found a triple junction. I then collected multiple patterns from each grain with a fast camera setting with corresponding lower SNR EBSD pattern. Representative patterns are shown below.

Figure 3: Grain Patterns.

Now if one averages patterns from the same grain with little deformation, we expect the SNR to increase and the indexing performance to improve. Here is an example of 7 patterns averaged from grain 1.

Figure 4: Frame Averaged Example.

That case is easy, though. Let’s take a more difficult one, where with our hexagonal measurement grid averaging kernel we have 4 patterns from one grain and 3 patterns from another. The colors correspond to the orientation maps of the triple junction shown below.

Figure 5: Multiple Grains

In this case, the orientation solution from this mixed averaged pattern was only 0.1° from the pattern from the 1st grain, with this solution receiving 35 votes out of a possible 84. What this indicated to me was that 7 of the 9 detected bands matched this 1st grain pattern. It’s really impressive what the triplet indexing approach accomplishes with this type of pattern overlap.

Finally, let’s try an averaging kernel where we have 3 patterns from one grain, 2 patterns from a second grain, and 2 patterns from a third grain, as shown here:

Figure 6: Multiple Grains.

Here the orientation solution was misoriented 0.4° from the pattern from the 1st grain, with this solution receiving 20 votes out of a possible 84. This indicates that 6 of the 9 detected bands matched the 1st grain’s pattern. These examples show that we can deconvolute the correct orientation measurement from the strongest pattern within a mixed pattern, which can help improve the effective EBSD spatial resolution when necessary.

Now, to compare NPAR to traditional cleanup, I set my camera gain to the maximum value and collected an OIM map from this triple junction, with an acquisition speed near 500 points per second at 1 nA beam current. I then applied NPAR to this data. Finally, I reduced the gain and collected a dataset at 25 points per second at the same beam current as a reference. The orientation maps are shown below with the corresponding Indexing Success Rates (ISR), defined as the CI > 0.1 fraction after CI Standardization. This is a good example of how cleanup can be used to improve initially noisy data, with NPAR providing a new alternative with better results.

Figure 7: Orientation Maps.

We can clearly see that the NPAR data correlated well with the slower reference data with the NPAR data collected ≈ 17 times faster than the traditional settings.

Now let’s see how cleanup (or noise reduction, although I personally don’t like this term, as often we are not dealing with noise-related artifacts) compares to the NPAR results. To start, I used the grain dilation routine in OIM Analysis, which first determines a grain (I used the default 5° tolerance angle and 2 pixel minimum grain size) and then expands that grain out by one step per pass. The results from a single pass, a double pass, and dilation to completion (when all the grains have fully grown together) are shown below. If we compare this approach with the NPAR and As-Collected references, we see that dilation cleanup has brought the 3 primary grains into contact, but a lot of “phantom” artifact grains with low confidence index are still present (and therefore colored black).

Figure 8: Grain Dilation.

The other cleanup routine I commonly use is the Neighbor Orientation Cleanup routine, which in principle is similar to the NPAR neighbor-relation approach. Here, instead of averaging patterns spatially, we compare each measurement point’s orientation with those of all its neighboring points, and if 4 of the 6 neighbors have the same orientation, we change the orientation of the measurement point to that neighbor orientation. Results from this approach are shown here.

Figure 9: Neighbor Orientation Correlation.

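As a toy illustration of this voting rule (not the actual OIM Analysis implementation), orientations can be reduced to integer grain IDs on a hexagonal scan grid, with each point reassigned when at least 4 of its 6 neighbors agree on a different ID:

```python
import numpy as np

# Neighbor offsets for an "even-r" offset hexagonal grid: each point has
# 6 neighbors, whose column offsets depend on the parity of the row.
HEX_NEIGHBORS_EVEN = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
HEX_NEIGHBORS_ODD = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]

def neighbor_orientation_cleanup(ids):
    """One pass of a toy neighbor-orientation vote on integer grain IDs."""
    rows, cols = ids.shape
    out = ids.copy()
    for r in range(rows):
        offsets = HEX_NEIGHBORS_EVEN if r % 2 == 0 else HEX_NEIGHBORS_ODD
        for c in range(cols):
            votes = {}
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    votes[ids[rr, cc]] = votes.get(ids[rr, cc], 0) + 1
            best, count = max(votes.items(), key=lambda kv: kv[1])
            # Reassign only when 4+ neighbors agree on a different ID.
            if count >= 4 and best != ids[r, c]:
                out[r, c] = best
    return out
```

For example, a single mis-indexed point surrounded by one grain gets absorbed into that grain, while points whose neighborhoods are genuinely mixed are left alone.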
Now of course the starting data is very noisy, and it was intentionally collected at higher speeds with lower beam currents to highlight the application of NPAR. With initial data like this, traditional cleanup routines will have limitations in representing the actual microstructure, and this is why we urge caution when using these procedures. However, cleanup can be used more effectively with better starting data. To demonstrate this, a single pass of dilation and a single pass of neighbor orientation correlation were performed on the NPAR-processed data. These results are shown below, along with the reference orientation map. In this case, the low-confidence points near the grain boundaries have been filled with the correct orientation, and more of the grain boundary interface has been filled in, which allows better grain misorientation measurements.

Figure 10: NPAR Cleanup.


When I evaluate these images, I think the NPAR approach gives me the best representation relative to the reference data, and I know that the orientation is measured from diffraction patterns collected at or adjacent to each measurement point. I think this highlights an important concept when evaluating EBSD indexing, namely that one should understand how pattern indexing works in order to understand when it fails. Most importantly, I think (and this was also emphasized at the EBSD 2016 meeting) that it is good practice to always report what approach was used in measuring and presenting EBSD data to better interpret and understand the measurements relative to the real microstructure.

From Third World to First World – Through Innovation, Technology and Manufacturing.

Koh Kwan Loke, Regional Sales Manager Asia, EDAX


Changi Airport, Singapore

Another Sunday, and I woke up early in the morning to have some local coffee before heading to the airport. When I reached Singapore Changi airport, I started to think over all the airports I have visited. After counting on my fingers for a couple of rounds, I realized that I have been to more than 20 countries and more than 60 airports around the world.

Over the years I have spent many hours waiting in airports, and I started to wonder why airports around the world spend so much money renovating and upgrading their older facilities. I have seen many transformations of other airports and tend to compare them with Singapore’s.

In China, one of the fastest growing countries, I have been to many local domestic airports. They are all built with fine architecture and a sense of eco-friendly design. The government is determined to improve infrastructure by building roads and highways to link airports to cities. There is an old saying that to connect the world, you need a good transportation system. Like the Romans 2000 years ago, who built roads for easy transport of goods and soldiers, China has spent enormous sums on advanced infrastructure that is beyond comparison, and this gives it a head start in joining the first world countries.

So this led me to think about the extensive changes which have taken place in my region over the last few years. Overall we have seen a transition in Asia from a 3rd world to a 1st world region in terms of innovation, technology, and manufacturing. This is due to investment from government and private sectors, and it ensures that Asia will be a key player in the world economy.

Kinetic Rain (Changi Airport Terminal 1)


Asia is a key market for Electron Microscopy, and EDAX has benefited too as users upgrade older systems for newer ones. EDAX now has installations on EM systems from all the principal global manufacturers. With the new products we have launched recently, we are confident that we can generate good traction for business in the various countries of the region.

China is always hungry for new technologies, and with our latest EDS and EBSD products there is a good flow of new inquiries. After the launch of the ELEMENT Silicon Drift Detector (SDD), the China team sold more than 30 units in 6 months. EDAX has been selling on average two EBSD systems per quarter, and this volume has generated a new breed of EBSD users. More and more EBSD applications have been presented and discussed at local conferences. EDAX can do its part by sending more experts from the factory for sharing sessions, either during conferences or through individual meetings.

EDAX has grown in India over the years and has become a top supplier of EDS and EBSD. We currently have more than 50% market share for EBSD and have been recognized by key tier 1 universities. We have been successful in improving our market position through consistency and persistence. There were challenges for EDAX, but we overcame them one at a time, and we now have support from all major Electron Microscope suppliers. We now have a good team in India, comprising sales and applications support for all the local day-to-day requirements.

Singapore has been a key location for some time, with many high end system purchases by industrial and academic customers. The influx of manufacturing companies setting up facilities in South East Asia creates good opportunities for EDAX products. One recent success was the sale of ORBIS µXRF analyzers into the forensic and electronics industries. We have also successfully penetrated Malaysia’s MJIT with EDS and EBSD on a JEOL SEM. This will be a good reference for future potential in the S.E.A. region.

Asia will continue to be a hub for research and manufacturing. We expect to see assembly facilities being set up in Vietnam, the Philippines, and Malaysia, creating new requirements for Electron Microscopes and EDS. The Indian government is determined, through its “Build in India” campaign, to attract foreign investment and improve the Indian economy.

All this development, which is so obvious in the airports and transportation systems of the region, can also be seen in many industries, including microscopy and microanalysis. If you would like to hear more, please give us a call or come and pay us a visit!