Don’t Beam Me Up Yet!

Sia Afshari, Global Marketing Manager, EDAX
This September is the 50th anniversary of the airing of the Star Trek series, and if you live around New York City, where this year’s convention is being held, you cannot help running into characters walking around in Starfleet uniforms or alien costumes.

The Star Trek series left its mark on the psyche of many, especially those who grew up in that era. Many inventions have been inspired by or attributed to the series: the Motorola flip phone, tablets, flat screens, the hypospray, stun guns, and the universal translator, all the way to the concept of a tractor beam, realized today as optical tweezers that trap and remove bacteria with a focused laser beam.

The most intriguing device in the series was the Tricorder concept. It performed medical, biological, geological, physical, and chemical analyses along with detecting spatial anomalies and alien life forms all in one handheld device!

Mr. Spock used his Tricorder to deliver results with an unquestionable degree of accuracy and confidence, covering the analytical capabilities of the following techniques in one package:
IR, entire photon spectrum detection, colorimetry, pressure sensor, humidity gauge, ultrasonic, particle analyses (e-, e+, n, p, ν, HP), XRF, XRD, ICP, AA, HPLC, medical X-rays, MRI, CAT-scan, PET-scan, Electron Microscopy, EDS, EBSD, Raman, to name a few.

And the Tricorder apparently did it all remotely, without any interaction with its subject, even though interaction, as we know, is the fundamental rule of a measurement.

As has been reported in the media this week, some of the Tricorder’s functionalities have come to fruition. NASA’s handheld device, LOCAD, measures microorganisms such as E. coli, fungi, and Salmonella onboard the International Space Station. Two handheld medical devices are on their way to helping doctors examine blood flow and check for cancer, diabetes, or bacterial infection. Loughborough University in England has announced a handheld photo-plethysmography device that can monitor heart function, and at Harvard Medical School a small device that uses technology similar to MRI can non-invasively inspect the body. China’s version of the Tricorder health monitor is reported to have cleared FDA approval for the US market!

Whether the Star Trek-inspired inventions are real or merely nostalgic, one cannot deny the lasting impact the series has had on the imaginations of those who saw the achievable possibilities of science and technology in the future. At the very least, it allowed our imaginations to run wild for that one hour.

In insomniac moments, besides wondering about the whereabouts of the Orion planet, one may ask: is there a yet-undiscovered signature force of matter that could be used to design a true Tricorder? Until then, focusing on EDS miniaturization for the next generation of portable electron microscopes is on my mind, with the hope that I will not be beamed up before the concept is realized! You hear that, Scotty?

Browsing for the Trekkies:
http://www.tricorderproject.org/tricorder-mark2.html
http://www.nature.com/nphoton/journal/v6/n2/full/nphoton.2011.322.html
https://spie.org/membership/spie-professional-magazine/spie-professional-archives-and-special-content/2016_january_archive/the-photonics-of-star-trek?WT.mc_id=ZTWZ
China’s Version Of A ‘Star Trek’ Tricorder Has Just Been Approved By The FDA
http://spectrum.ieee.org/biomedical/diagnostics/the-race-to-build-a-reallife-version-of-the-star-trek-tricorder

Metals, Minerals and Gunshot Residue

Dr. Jens Rafaelsen, Applications Engineer, EDAX

During a recent visit to a customer facility, I was asked what kinds of samples and applications I typically see. It would seem an easy question to answer, but I struggled to narrow it down to anything “typical”. Over the past three weeks I have spent a couple of days each week at customer facilities, and I think a brief description of each visit will explain why I had a hard time answering the question.

The first facility I went to was a university in the process of qualifying an integrated EDS/EBSD system on a combined focused ion beam (FIB) and scanning electron microscope (SEM). A system like this allows one to remove material layer by layer and reconstruct a full 3D model of the sample. The dataset in Figure 1 illustrates why this information can be crucial when calculating material properties based on the grain structure from an EBSD scan. If one looks at the image on the left in the figure, it seems obvious that there are a few large grains in the sample with the area between them filled by smaller grains. However, the reconstructed grain on the right shows that several of these smaller “grains” seen in the single slice are actually interconnected and form a very large grain that stretches outside the probed volume.

Figure 1: Single slice EBSD map (left) and single reconstructed grain from 3D slice set (right).

While we spent a good amount of time documenting exactly what kind of speed, signal-to-noise, resolution and sensitivity we could get out of the system, one of the customer’s goals was to measure strain to use as a basis for material modelling. We also discussed a potential collaboration since our EBSD applications engineer, Shawn Wallace, has access to meteorite samples through his previous position at the American Museum of Natural History in New York and a 3D measurement of the grains in a meteorite could make a very compelling study.

Next up was a government agency where the user’s primary interest was in mineral samples but also slag and biological materials retained in mineral matrices. Besides the SEM with EDS they had a microprobe in the next room and they would often investigate the samples in the SEM first before going to the microprobe for detailed analysis (when and if this is required is a different discussion, I would recommend Dale Newbury and Nicholas Ritchie’s article for more details: J Mater Sci (2015) 50:493–518 DOI 10.1007/s10853-014-8685-2).

A typical workflow would be to map out an area of the sample and identify the different phases present to calculate the area fraction and total composition. Since the users of the facility work with minerals all the time, they could easily identify the different parts of the sample by looking at the spectra and quantification numbers, but I have a physics background and will readily admit that I would be hard pressed to tell the difference between bustamite and olivine without the help of Google or a reference book. However, this specific system had the spectrum matching option, which eliminates a lot of the digging through books to find the right composition. The workflow is illustrated in Figure 2, where one starts by collecting an SEM image of the area of interest; when the EDS map is subsequently collected, the software automatically identifies areas with similar composition and assigns colors accordingly. The next step is then to extract the spectrum from one area and match it against a database of spectra. As we can see in the spectrum in Figure 2, the red phase of the EDS map corresponds to an obsidian matrix with slightly elevated Na, Al, and Ca contributions relative to the standard.
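At its core, spectrum matching is a nearest-neighbor search over a library of reference spectra. A minimal sketch, assuming a simple cosine-similarity metric and toy data (the actual matching algorithm in the software may well differ):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two spectra given as lists of channel counts."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(unknown, library):
    """Return the name of the library spectrum most similar to the unknown."""
    return max(library, key=lambda name: cosine_similarity(unknown, library[name]))

# Toy 4-channel "spectra" standing in for real 1024+ channel EDS data
library = {"obsidian": [10, 80, 30, 5], "olivine": [60, 10, 5, 40]}
print(best_match([12, 75, 28, 6], library))  # -> obsidian
```

A real implementation would also normalize for acquisition time and background before comparing, but the ranking idea is the same.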

Figure 2: Backscatter electron image (top left) and corresponding phase map (top right) showing different compositions in the sample. The bottom spectrum corresponds to the red phase and has an obsidian spectrum overlaid.

The last facility I visited was a forensic lab, where they had purchased an EDS system primarily for gunshot residue (GSR) detection. The samples are usually standard 12.7 mm round aluminum stubs with carbon tabs. The sticky carbon tabs are used to collect a sample from the hands of a suspect, carbon coated, and then loaded into the SEM. The challenge is then to locate particles that are consistent with gunshot residue amongst everything else that might be on the sample. The criterion, at least for traditional gunpowder, is that a particle must contain antimony, barium, and lead. Lead-free gunpowder is available, but it is significantly more expensive, and when I asked how often it is seen in the lab, I was told that apparently the criminal element is price conscious and not particularly environmentally friendly!

The big challenge with GSR is that the software has to search through the entire stub, separate carbon tape from particles down to less than 1 micron, and then investigate whether a particle is consistent with GSR based on the composition. The workflow is illustrated in Figure 3 and is done by collecting high resolution images, looking for particles based on greyscale value in the image, collecting a spectrum from each particle and then classifying the particle based on composition. Once the data is collected, the user can go in and review the total number of particles and specifically look for GSR particles, relocate them on the sample, and collect high resolution images and spectra for documentation in a potential trial.
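The classification step of this workflow boils down to a composition rule. A toy sketch of the criterion described above: the element set (Sb, Ba, Pb) comes from the text, but the dictionary format and the weight-percent threshold are illustrative assumptions, not the actual rules used by the software:

```python
# Elements that must all be present for a "characteristic GSR" call,
# per the criterion described in the text (traditional gunpowder).
GSR_ELEMENTS = {"Sb", "Ba", "Pb"}

def is_gsr_candidate(composition, min_wt_pct=1.0):
    """composition: dict mapping element symbol -> weight percent.

    The 1 wt% presence threshold is a hypothetical cutoff for illustration.
    """
    present = {el for el, wt in composition.items() if wt >= min_wt_pct}
    return GSR_ELEMENTS <= present  # all three must be present

print(is_gsr_candidate({"Sb": 12.0, "Ba": 30.0, "Pb": 45.0, "C": 13.0}))  # True
print(is_gsr_candidate({"Fe": 60.0, "Cr": 18.0, "Ni": 8.0}))              # False
```

In practice such a rule would run per particle after quantification, with additional morphology checks before a particle is flagged for review.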

Figure 3: Overview showing the fields collected from the full sample stub (top left), zoomed image corresponding to the red square in the overview image (top right) and gunshot residue particle from the red square in the zoomed image (bottom).

Three weeks, three very different applications and a very long answer to the question of what kind of samples and applications I typically see. Each of these three applications is typical in its own way although they have little in common. This brings us to the bottom line: most of the samples and applications we come by might be based in the same technique but often the specifics are unique and I guess the uniqueness is really what is typical.

From Intern to Analyst – Studying the Impact of ‘Non-Ideal’ Samples on Quant Results

Kylie Simpson and Robert Rosenthal, 2016 Summer Interns at EDAX

Being surrounded by equipment worth more than your average college student can even fathom is incredibly daunting. Your heart still skips a beat at every hiss or beep the microscope produces, not to mention the fear of ramming into the pole piece while inserting the EDS detector (we later learned there was a hard stop to prevent this, but it never quite alleviated the fear). It’s hard to summarize all of the experiences from our internship at EDAX this summer. While it was only about two and a half months, the sheer amount of knowledge we gained through hands-on experience is unquantifiable. The five-day EDS training course in itself contained enough information to be taught over an entire college semester.

Working with the Applications team gave us a real feel for what EDAX is all about. Not only did we get to work on a summer-long project, we also got to work with the marketing, engineering, and software teams on a regular basis. We also helped with support for the new APEX software. This work setting provided us with a plethora of new knowledge, not only of the physics and programming behind EDAX software but also of the inner workings of the company and the crucial role that teamwork plays in accomplishing tasks. Having access to an electron microscope as well as the specialized knowledge of the members of the Applications team enabled us to get the most out of our summer here at EDAX. After sitting in on a meeting with other members of the Applications team, we were exposed to some of the real-world problems faced by customers on a regular basis and decided to investigate this further with our summer project.

When collecting quantification results for EDS, the ZAF matrix corrections are based on the assumption that the sample is flat, homogeneous, and infinitely thick to the electron beam. Although these are the ideal collection requirements, many customers run into problems when their samples do not meet these assumptions. We spent our time here testing the impact of ‘non-ideal’ samples on quant results while also determining ways for customers to improve the accuracy of quant results with these samples. We tested samples with rough topography by scratching up and polishing a stainless steel and a pyrite sample (Figure 1). By collecting a counts per second map for the steel (Figure 2), we were able to visualize the impact of rough samples and confirm the need for sample prep.

Figure 1. Pyrite particles and polished pyrite.
Figure 2. CPS maps of stainless steel surfaces.

We also tested inhomogeneous samples, including a Lead-Tin solder sample and a stainless steel sample (pictured below). By collecting spectra of these samples at different magnifications, we observed the correlation between lower magnification and a higher accuracy of quant results.

Figure 3: Lead-Tin solder and stainless steel samples

Finally, we tested the impact of thin samples on quant results using an aluminum-coated piece of silicon. This sample was very hard to obtain, since we had to coat the silicon five separate times, but it yielded very interesting results (see the graph (left) in Figure 4 below). Our results illustrated the influence of sample thickness on the collected spectra while also allowing us to back-calculate the thickness of each aluminum layer (pictured in Figure 4 (right) below).

Figure 4.

Overall, we thoroughly enjoyed our summer at EDAX and will take away not only knowledge of EDS, EBSD, SEMs, computer programming, and teamwork, but also valuable problem solving skills applicable to classes, professions, and other real-world scenarios that we will encounter in the future.

Meet the Interns

Kylie Simpson: Kylie is currently a student at the Thayer School of Engineering at Dartmouth. She is participating in a dual-degree program with Colby College and Dartmouth College and is studying mechanical engineering and physics.

Robert Rosenthal: Robbie is currently a student at the University of Colorado at Boulder. He is going into his junior year, studying Mechanical Engineering.

Training Classes and You

Shawn Wallace, Applications Engineer, EDAX

Over the last month or so, I have spent quite a bit of time training people on our systems. Between a workshop, the Lehigh Microscopy school, two webinars, and two in-house training courses, I have interacted with all levels of users. This had me thinking back to my experiences, years ago on the other side of the desk in the EDAX classroom and what I learned from the courses. With that in mind, I began thinking about what our customers/students can do to get the best out of our training sessions.
The biggest thing they can do is to spend time familiarizing themselves with the general operation of their complete system: their SEM, our systems, and most importantly, their samples. Sit down, fiddle with things, and just learn how different settings interact: amp time and dead time for EDS, camera settings for EBSD (see my ‘Camera Optimization’ webinar). The main thing this does is make you start thinking about what these settings are doing and how they work with your samples. While you do this, you will start to formulate questions in your mind. Some of these questions you will be able to answer yourself. Some will be answered directly during the course. Others will click while you listen and make connections to your work, and I will see that ‘Aha!’ moment on your face as you figure out why that little trick worked or possibly failed miserably. By spending the time to figure things out on your own, you are getting in the right mindset to come to our courses and ask questions.

This leads to the second biggest thing you can do: Ask me questions! That is why engaging with your system is so important. You are setting yourself up to ask pertinent questions about your samples and your systems. You are finding your natural work flow, but our job is to help you to optimize it, to help you to understand what you are doing, and most importantly help you to understand why you should do it that way. This is why running your system with your samples is a very important thing to do before you come to our courses.

Another reason for asking questions is that you need to be an active learner and engage with your instructor (aka me). Ever sat in a college class and had the teacher just talk and talk and talk for hours on a subject as you sip your coffee to try to keep yourself from dozing off? Ever taught a class and looked at the faces of people sipping their coffee as their heads do that little nod as they fail to stay awake? It’s not fun for either person. I always start my training courses by saying that I want questions. I want you to be engaged and thinking during the entirety of my courses. I want it to not be a lecture, but a conversation. I want that instant feedback to help me understand what concepts you are struggling with and what topics are clicking, so that I can dive deeper into subjects that I need to.
That’s it. That is all you need to do to come to our courses and get the most out of them. Be prepared and be engaged. You will absorb the information we are giving you and you will be able to take it home and put it to use to get better and faster results, while understanding what the system is doing at a much deeper level.

With all that said, there is one more important step. You should never stop learning. Luckily for you, the applications team here at EDAX is always creating new resources for our customers to learn with. Sometimes it is a quick blog post about some neat new feature we have implemented; at other times it’s a webinar covering the most difficult aspects of microanalysis.

I hope to see you soon on the other side of a desk. Happy Learning in the meantime!

Click here for more information about upcoming EDAX training sessions.

“It’s not the size of the dog in the fight, it’s the size of the fight in the dog.” (Mark Twain)

Dr. Oleg Lourie, Senior Product Manager, EDAX

San Javier, Spain, October 18, 2015: Airbus A400M airlifter escorted by the Patrulla Águila squadron at their 30th anniversary celebration event.

Many of us like to travel, and some people are fascinated by the view of gigantic A380 planes slowly navigating the tarmac with gracious and powerful determination. I too cannot overcome a feeling of fascination every time I observe these magnificent planes; they are really, literally, big. The airline industry, however, seems to have a more practical perspective on the matter: A380 purchases are in decline, and according to recent reports Airbus is considering reducing production based on a growing preference for smaller and faster airplanes. Although the connection may seem slightly tenuous, in my mind I see a fairly close analogy to this situation in the EDS market when the discussion comes to the size of EDS sensors.

In modern microanalysis, where studies of compositional structure increasingly depend on the time scale, the use of large sensors can no longer be the single solution for optimizing the signal. The energy resolution of an EDS spectrometer is related to its signal detection capability, which determines the signal-to-noise ratio and, as a result, the energy resolution of the detector. Fundamentally, to increase the signal-to-noise ratio one may choose to increase the signal, i.e. the number of counts, or alternatively to reduce the noise of the detector electronics and improve its sensitivity. The first methodology, based on a larger number of counts, is directly related to the amount of input X-rays, determined by the solid angle of the detector and/or the acquisition time. A good example of this approach would be a large SDD sensor operating at long shaping times. A conceptually alternative methodology would be to employ a sensor with a) reduced electronics noise and b) higher efficiency in X-ray transmission, which implies fewer X-ray losses in transit from the sample to the recorded signal in the spectrum.
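The solid-angle part of this trade-off is easy to put numbers on. A rough sketch, assuming the small-angle approximation Ω ≈ A/d² (real detector geometry also involves the window, collimator, and take-off angle):

```python
def solid_angle_sr(active_area_mm2, sample_distance_mm):
    """Approximate detector solid angle in steradians (small-angle limit).

    Omega ~ A / d^2; valid when the detector is small compared with
    its distance from the sample.
    """
    return active_area_mm2 / sample_distance_mm**2

# Quadrupling the area at fixed distance quadruples the solid angle...
print(solid_angle_sr(100, 50))  # 0.04 sr
# ...but halving the sample-to-sensor distance is worth a 4x larger sensor.
print(solid_angle_sr(25, 25))   # 0.04 sr
```

This is why a small sensor placed close to the sample can collect as many counts as a much larger sensor further away.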

Using this methodology, the signal-to-noise ratio can be increased with a smaller, more transmissive sensor operating at higher count rates versus a larger sensor operating at lower count rates.

To understand the advantage of using a small sensor at higher count rates, we can review a simple operational model of an SDD. The time for the charge generated by an X-ray in the Si body of the sensor to drift to the anode can be modeled either with a simple linear trajectory or with a random walk model. In both cases, we arrive at an approximate l ~ √t dependence, where l is the distance traveled by the charge from cathode to anode and t is the drift time. With regard to sensor size, this means that the time to collect the charge from a single X-ray event is proportional to the sensor area. As an example, a simple calculation with an assumed electron mobility of 1500 cm2/(V·s) and a bias of 200 V gives a drift time estimate of 1 µs for a 100 mm2 sensor and 100 ns for a 10 mm2 sensor. This implies that to collect the full charge in a large sensor, the preamplifier rise time needs to be in the range of 1 µs, versus the 100 ns rise time that can be used with a 10 mm2 sensor. With a 10 times higher readout frequency, a 10 mm2 sensor will collect a signal equivalent to that of a 100 mm2 sensor.
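These estimates can be reproduced from the l ~ √(μVt) relation, i.e. t = l²/(μV). A small sketch, assuming (as a simplification) that the drift length l is the sensor radius and using the constants quoted above:

```python
import math

MOBILITY = 1500.0  # electron mobility in Si, cm^2/(V*s), as assumed in the text
BIAS_V = 200.0     # applied bias, V

def drift_time_s(area_mm2):
    """Estimate charge drift time from l ~ sqrt(mu*V*t), i.e. t = l^2/(mu*V),
    taking the drift length l as the sensor radius (a simplification)."""
    radius_cm = math.sqrt(area_mm2 / math.pi) / 10.0  # mm^2 area -> radius in cm
    return radius_cm**2 / (MOBILITY * BIAS_V)

print(f"100 mm2 sensor: {drift_time_s(100) * 1e9:.0f} ns")  # about 1 us, as in the text
print(f" 10 mm2 sensor: {drift_time_s(10) * 1e9:.0f} ns")   # about 100 ns
```

Note the 10x ratio between the two drift times falls straight out of the area ratio, which is the “collection time proportional to sensor area” statement above.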

What will happen if we run a large sensor at high count rates? Let’s assume that the 100 mm2 sensor in this example is run with the 100 ns rise time. In this case, since the rise time is much shorter than the charge drift time (~1 µs), not all the electrons produced by an X-ray event will be collected. This shortfall results in an incomplete charge collection (ICC) effect, which introduces artifacts and degrades the energy resolution. A single characteristic X-ray will generate around 245 electrons in Si for Cu L and around 2115 for Cu Kα; these drift to the anode, forced by the applied bias, in quite large electron packets. Such large electron packets expand rapidly during the drift, with an ultimately linear expansion rate versus drift time. If the rise time used to collect the electron packet is too short, some of the electrons in the packet will be ‘left out’, resulting in less accurate charge counting and consequently a less accurate readout of the X-ray energy. This artifact, called a ‘ballistic deficit’ (BD), negatively affects the energy resolution at high count rates. It is important to note that both the ICC and BD effects in large sensors become more pronounced with increasing energy of the characteristic X-rays, which means the resolution stability deteriorates even more rapidly for higher-Z elements compared with the low-energy/light-element range.
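The electron counts quoted above follow from dividing the X-ray energy by the mean energy needed to create one electron-hole pair in Si; the numbers in the text are consistent with a value of about 3.8 eV per pair. A quick check under that assumption:

```python
W_SI_EV = 3.8  # mean energy per electron-hole pair in Si implied by the text's numbers

def electrons_generated(x_ray_energy_eV):
    """Number of electrons in the charge packet from one absorbed X-ray."""
    return round(x_ray_energy_eV / W_SI_EV)

print(electrons_generated(930))   # Cu L  (~0.93 keV) -> 245 electrons
print(electrons_generated(8040))  # Cu Ka (~8.04 keV) -> 2116 (text quotes ~2115)
```

The roughly 9x larger packet for Cu Kα is why ICC and ballistic deficit bite hardest for higher-energy lines.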

Figure 1: Comparative Resolution at MnKα (eV) *

As a factual illustration of this topic, actual SDD performance for sensors with different areas is shown in Figure 1. It displays the effect of acquisition rate on energy resolution for EDS detectors with different sensor sizes and electronics designs. Two clear trends can be observed: a rapid deterioration of energy resolution with increasing sensor size for the traditional electronics design, and much more stable resolution performance at high count rates for the sensor with the new CMOS-based electronics. In particular, the data for the Elite Plus with a 30 mm2 sensor show stable resolution down to a 0.96 µs shaping time, which corresponds to >200 kcps OCR.

In conclusion, employing a smaller sensor with optimized signal collection efficiency at higher count rates offers an attractive alternative: an X-ray signal matching that of a large-area sensor, combined with high throughput and improved energy resolution. Ultimately, the ideal solution for low-flux applications will be several smaller sensors arranged in an array, combining all the benefits of smaller geometry, higher count rates, and higher transmissivity with a maximized solid angle.

* SDD performance data courtesy of the EDAX Applications Team.

How To Get the Maximum Benefit from Visiting the Show Floor at a Microanalysis Conference.

Dr. Patrick Camus, Director of Research and Innovation, EDAX

This is the time of year when many analysts are scrambling to finalize details for the Microscopy & Microanalysis Conference – to be held this year in Columbus, OH. We too are striving to present our products in the best light for attendees to evaluate.

As conference attendees, you may well be coming with the task of evaluating and comparing software and equipment from a variety of vendors. Many will also be booking demonstrations, provided by the very capable application specialists of the representative companies. Their job, as well as mine, is to sell you the best product available, which obviously is from EDAX (wink, wink).

But what is your task for the week, and how should you prepare? I have a few universal topics that you might like to consider before you even hit the show floor.

Your primary task is to get enough information to make an educated decision about the best system at the fairest price to benefit the customers of your lab. That system may be the BEST IN THE WORLD, or it may have the absolute lowest price, but knowing your criteria before seeing the competing systems will help in balancing the costs and benefits and selecting the best system for your lab.

Below I will present some criteria for system selection. I will use x-ray microanalysis systems as examples because that is the equipment that EDAX sells, but the approach is universal for all equipment purchases.

  • Understand and appreciate all the system specifications because they are the best indicators of system quality and performance, but emphasize those that you currently employ or could realistically implement. For instance, if you have a low-level SEM, do you or will you operate at the maximum beam current of the system? How often do you really operate under the conditions necessary to obtain the resolution specifications? Make sure you understand how the system operates AWAY from the conditions used for specifications. These deviations may be more indicative of how your users operate and how useful the system will be for them.
  • Appreciate aesthetics, but look beneath the system “skin” to actual technical performance substance. Do your current operators work that way or can they be retrained to work that way? Is the technology truly new or just re-skinned? The workflow may demo well, but do your operators work in that manner?
  • Ask about your projected local service engineers. Ask for an interview with them before the sale. Over the lifetime of the system, you will probably work with them more than anyone else at the company.
  • During a demo, perform tests under your typical or expected operating conditions to get a feeling for real-world performance in your lab. But also ask for suggested optimized conditions for better performance for future analyses. How much training is included or can be upgraded? Would training at your site or at the vendor site be more effective for those involved?

These are just a few of the topics that you should consider. This is a lot of preparation work to do before you even hit the show floor, but the answers to these topics will make your system selection that much more satisfying in the long run. And job satisfaction for you and your users goes a long way!

Why is There an Error in My Measurement?

Sia Afshari, Global Marketing Manager, EDAX

One interesting part of my job has been the opportunity to meet people of all technical backgrounds and engage in conversations about their interest in analytical instrumentation and ultimately what a measurement means!

I recall that several years back, on a trip to Alaska, I met a group of young graduate students heading to the Arctic to measure the “ozone hole.”  Being an ozone lover, I started asking questions about the methodology they were going to use to accomplish this important task, especially the approach for comparative analysis of the expansion of the ozone hole!

I learned that the analytical methods used for ozone measurement have changed over time, and the type of instrumentation used for this purpose has changed along with advances in technology.  I remember asking about a common reference standard that one would use across the various techniques over the years, to make sure the various instrument readings were within the specified operating parameters and the collected data represented the same fundamental criteria, obtained by different methods under different circumstances over time.  I was so taken by the puzzled looks of these young scientists at my questions about cross comparison, errors, calibration, reference standards, statistical confidence, etc. that I felt I had better stop tormenting them and let them enjoy their journey!

Recently I had occasion to discuss analytical requirements with a couple of analysts for an application that included both bulk and multilayer coating samples.  We talked about the main challenge for multilayer analysis being the availability of a reliable type standard that truly represents the actual samples analyzed.  It was noted that the attainable accuracy for specimens where type standards are not available needs to be evaluated by considering the errors involved and the propagation of those errors through the measurement, especially when one approaches “infinite thickness” conditions for some of the constituents!

As the demand for more “turn-key” systems increases, where users are mostly interested in obtaining the “numbers” from analytical tools, it is imperative for us as a manufacturer and software developer to embed fundamental measurement principles into our data presentation, so that a measurement result is qualified and quantified with a degree of confidence that is easily observable by an operator.  This is our goal as we set out to develop the next generation of intelligent analytical software.

The other part of our contribution as a manufacturer is training users in their understanding of measurement principles.  It is imperative to emphasize the basics and the importance of following a set of check sheets for obtaining good results!

My goal is to use a series of blogs as a venue for highlighting the parameters that influence a measurement and the underlying reasons for errors in general, and to provide references for a better understanding of the expected performance of analytical equipment in general and x-ray analysis in particular!

So to start:

My favorite easy readings on counting statistics and errors are old ones but classics never go out of style:
• Principles and Practices of X-ray Spectrometric Analysis, by Eugene Bertin, Chapter 11, “Precision and Error: Counting Statistics.”
• Introduction to the Theory of Error, by Yardley Beers (Addison-Wesley, 1953).  Yes, it is old!

Also, Wikipedia has a very nice write-up on the basic principles of measurement uncertainty, recommended by a colleague of mine, that could be handy for a novice or even an experienced user.  If you don’t believe in Wikipedia, the article at least has a number of linked reference documents for further review.
• https://en.wikipedia.org/wiki/Measurement_uncertainty

As food for thought on measurements, I would consider:
• What am I trying to do?
• What is my expectation of performance in terms of accuracy and precision?
• Are there reference standards that represent my samples?
• What techniques are available for measuring my samples?

With recognition of the facts that:
• There is no absolute measurement technique, all measurements are relative.  There is uncertainty in every measurement!
• The uncertainty in a measurement is a function of systematic errors, random errors, and bias.
• All measurements are comparative in nature, so there is a requirement for reference standards.
• Reference standards that represent the type of samples analyzed provide the best results.
• One cannot measure more accurately than the specified reference standard’s error range.  (Yes reference standards have error too!)
• Fundamental Parameters (FP) techniques are expedients when type standards are not available but have limitations.
• A stated error for a measurement needs to be qualified with a degree of confidence expressed as a number, i.e. a standard deviation.
• Precision is a controllable quantity in counting statistics by extending the measurement time.
• What else is present in the sample often is as important as the targeted element in analysis. Sample matrix does matter!
• The more complex the matrix of the specimen being measured, the more convoluted are the internal interactions between the matrix atoms.
• Systematic errors are generally associated with accuracy, and random errors with precision.
• The uncertainties/errors add in quadrature (the square root of the sum of squares).
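Two of the bullet points above translate directly into arithmetic: independent errors add in quadrature, and counting-statistics precision improves as 1/√N, so it can be bought with measurement time. A minimal sketch:

```python
import math

def combined_uncertainty(*components):
    """Add independent uncertainty components in quadrature:
    the square root of the sum of squares."""
    return math.sqrt(sum(c * c for c in components))

def relative_counting_error(total_counts):
    """Relative 1-sigma error from Poisson counting statistics: 1/sqrt(N)."""
    return 1.0 / math.sqrt(total_counts)

# A 1.0% counting error and a 0.5% calibration error combine to ~1.12%,
# not 1.5% -- the larger component dominates in quadrature.
print(f"{combined_uncertainty(1.0, 0.5):.3f} %")
# Counting 4x longer (4x the counts) halves the relative error:
print(relative_counting_error(10_000))  # 1% relative error at 10k counts
print(relative_counting_error(40_000))  # 0.5% at 40k counts
```

The 1/√N behavior is also why "precision is a controllable quantity": doubling the precision costs four times the measurement time.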

Until next time, when we will visit these topics and other relevant factors in more detail; questions, suggestions, and related input are greatly appreciated. By the way, I still suspect that the ozone layer may not be being measured scientifically these days!