My New Lab Partner

Matt Nowell, EBSD Product Manager, EDAX

It has been an exciting month here in our Draper, Utah lab, as we have received and installed our new FEI Teneo FEG SEM. We are a small lab focused on EBSD development and applications, and we have no loading dock, so timing was critical when scheduling the delivery. So, three months ago, we looked at the calendar to pick a day with sunshine and without snow. Luckily, we picked well.

Figure 1: Our new SEM coming off the truck.

Once we got the new instrument up and running, the next step, of course, was to start playing with it. This new SEM has many more imaging detectors than our older SEM, so I wanted to see what I could see with it. I chose a nickel superalloy turbine blade with a thermal barrier coating, as it contains many phases for imaging and microanalysis. The first image I collected was with the Everhart-Thornley Detector (ETD). For each image shown, I relied on the automatic contrast and brightness adjustment to optimize the image.

Figure 2: ETD image

With imaging, contrast is information. The contrast in this image is primarily phase contrast. On the left, gamma/gamma prime contrast is visible in the nickel superalloy, while distinct regions of the barrier coating are seen towards the right. The next image I collected was with the Area Backscatter Detector (ABS). This detector is positioned under the pole piece for imaging. With this detector, I can use the entire detector, the inner annular portion, or any of three regions towards the outer perimeter.

Figure 3: ABS Detector image.

I tried each of the different options and selected the inner annular ring portion of the detector. Each option provided similar contrast, as seen in Figure 3, but I went with this one based on personal preference. The contrast is similar to the ETD contrast in Figure 2. I also compared these with the imaging options using the detector in Concentric Backscatter (CBS) mode, where four different concentric annular detectors are available.

Figure 4: T1 Detector (a-b mode).

My next image used the T1 detector, which to my understanding is an in-lens detector. Here I selected the a – b mode, so the final image is obtained by subtracting the image from the b portion of the detector from the image from the a portion. I selected this image because the resultant contrast is reversed from the first couple of images: phases that were bright are now dark, and detail within the phases is suppressed.

Figure 5: T2 Detector.

My final SEM image was collected with the T2 detector, another in-lens detector option. Here we see the same general phase contrast, but the contrast range is more limited and the detail within regions is again suppressed.

I have chosen to show this set of images to illustrate how different detectors, and their positioning, can generate different images from the same area, and how the contrast and information obtained with each image can change. I have given only a cursory interpretation of the image contrast here; a better understanding comes from reading the manual and knowing the effects of the imaging parameters used.

Figure 6: Always Read the Manual!

Of course, I’m an EBSD guy, so I also wanted to compare this to what I can get using our TEAM™ software with Hikari EBSD detectors. One unique feature we have in our software is PRIAS™, which uses the EBSD detector as an imaging system. In the default imaging mode, it divides the phosphor screen into 25 different ROI imaging detectors and generates an image from each as the beam is scanned across the area of interest. Once these images are collected, they can be reviewed, mixed, added, subtracted, and colored to show the contrast of interest, similar to the SEM imaging approach described above.
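
For readers who think in code, here is a minimal sketch of the ROI-imaging idea, not EDAX’s implementation: assume a hypothetical 4D array holding one phosphor-screen image per beam position, and build one image per screen region by integrating the intensity inside that region at every scan point. All names are assumptions for illustration.

```python
# Conceptual sketch of the PRIAS-style ROI imaging idea; not EDAX code.
# `patterns` is a hypothetical 4D numpy array of shape
# (scan_rows, scan_cols, det_rows, det_cols): one phosphor-screen image
# per beam position.
import numpy as np

def roi_image(patterns, rows, cols):
    """One virtual-detector image: total screen intensity inside an ROI."""
    return patterns[:, :, rows, cols].sum(axis=(2, 3))

def roi_grid_images(patterns, n=5):
    """Split the screen into an n x n grid of ROIs (25 images for n = 5)."""
    _, _, det_rows, det_cols = patterns.shape
    rs, cs = det_rows // n, det_cols // n
    return [roi_image(patterns,
                      slice(i * rs, (i + 1) * rs),
                      slice(j * cs, (j + 1) * cs))
            for i in range(n) for j in range(n)]

# Example: mix ROI images after collection, e.g. subtract a bottom ROI
# from a top ROI to flip contrast, much like the a - b detector mode.
# images = roi_grid_images(patterns)
# flipped = images[2].astype(float) - images[22].astype(float)
```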

The three most common contrasts we see with PRIAS™ are phase, orientation, and topographic. To capture these, we also have a mode where three pre-defined regional detectors are collected during EBSD mapping, and the resulting images are available alongside the EBSD (and simultaneous EDS) data.

Figure 7: PRIAS™ Top Detector Image.

The first ROI is positioned at the top of the phosphor screen, and the resulting phase contrast is very similar to the contrast obtained with the ETD and ABS imaging modes on the SEM.

Figure 8: PRIAS™ Center Detector Image.

The second ROI is positioned at the center of the phosphor screen. This image shows more orientation contrast.

Figure 9: PRIAS™ Bottom Detector Image.

The third ROI is positioned at the bottom of the phosphor screen. This image shows more topographical contrast. All three of these images are complementary, both to each other and to the different SEM images. Each gives part of the total picture of the sample.

Figure 10: Defining Custom ROIs in PRIAS™.

With PRIAS™ it is also possible to define custom ROIs. In Figure 10, three different ROIs have been drawn within the phosphor screen area. The three corresponding images are then generated, and these can be reviewed, mixed, and selected. In this case, I selected an ROI that reversed the phase contrast, similar to the contrast seen with the T1 detector in Figure 4.

Figure 11: PRIAS™ Center Image with EDS Blend Map (Red – Ni, Blue – Al, Green – Zr).

Figure 12: PRIAS™ Center Image with Orientation Map (IPF Map Surface Normal Direction).

Of course, the PRIAS™ information can also be directly correlated with the EDS and EBSD information collected during the mapping. Figure 11 shows an RGB EDS map while Figure 12 shows an IPF orientation map (surface normal direction with the corresponding orientation key) blended with the PRIAS™ center image. Having this available adds more information (via contrast) to the total microstructural characterization package.

I look forward to using our new SEM to develop new ideas into tools and features for our users. I imagine a few new blog posts will come from it as well!

Considerations for your New Year’s Resolutions from Dr. Pat

Dr. Patrick Camus, Director of Research and Innovation, EDAX

The beginning of the new calendar year is a time to reflect and evaluate important items in your life. At work, it might also be a time to evaluate the age and capabilities of the technical equipment in your lab. If you are lucky, you may work in a newly refurbished lab where most of your equipment is less than three years old. If so, your colleagues in the field will be envious; they usually have equipment that is well over five years old, some of it possibly dating from the last century!

Old jalopy, circa 1970. EDAX windowless Si(Li) detector, circa early 1970s.

In my case, at home my phone is three years old and my three vehicles are 18, 16, and 3 years old. We are definitely evaluating the household budget this year to upgrade the oldest automobile. We need to decide which items are the highest priority and which are not so important for our usage. It’s often important to sort through the different features offered and decide what’s most relevant … whether that’s at home or in the lab.

Octane Elite Silicon Drift Detector, 2017. Dr. Pat’s possible new vehicle, 2017.

If your lab equipment is older than your vehicles, you need to determine whether the latest generation of equipment will improve either your throughput or the quality of your work. The latest generations of EDAX equipment can enormously increase throughput and improve the quality of your analysis over previous generations; it’s just a matter of convincing your boss that this has value for the company. There is no time like the present to gather your arguments into a proposal to get the budget for the new generation of equipment that will benefit both you and the company.
Best of luck in the new year!

Adding a New Dimension to Analysis

Dr. Oleg Lourie, Regional Manager A/P, EDAX

With every dimension we add to the volume of data, we believe we add a new perspective to our understanding and interpretation of that data. In microanalysis, adding a spatial or time dimension has led to the development of 3D compositional tomography and dynamic or in situ compositional experiments. 3D compositional tomography, or 3D EDS, is developing rapidly and gaining wider acceptance, although it still presents challenges, such as photon absorption associated with sample thickness, and a time-consuming acquisition process that requires a high level of stability, especially for TEM microscopes. After setting up a multi-hour experiment in a TEM to gain a 3D compositional EDS map, one may wonder: is there any shortcut to getting a ‘quick’ glimpse into the three-dimensional elemental distribution? The good news is that there is one, and compared to tilt-series tomography, it can be a ‘snapshot’ type of 3D EDS map.

3D distribution of Nd in steel.

To enable such 3D EDS mapping at the conceptual level, we would need at least two identical 2D TEM EDS maps acquired with photons of different energies – so you can slide along the energy axis (adding a new dimension?) and use photon absorption as a natural yardstick to probe the element distribution along the X-ray path. Since characteristic X-rays have discrete energies (K, L, and M lines), it might work if you subtract the K-line map from the L-line or M-line map to see an element distribution based on the different absorption between the maps. Ideally, one of the EDS maps should be acquired with high-energy X-rays, such as K lines for high atomic number elements, and another with low-energy X-rays where absorption has a significant effect, such as M lines. Indeed, for elements with a high atomic number, the K-line energies range into the tens of keV and suffer virtually zero absorption even in a thick TEM sample.
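
To make the yardstick idea concrete, here is a minimal sketch (not a tested reconstruction method) based on Beer–Lambert attenuation. The function name and all inputs are assumptions for illustration; real mass attenuation coefficients would come from standard tables such as NIST XCOM.

```python
# Minimal Beer-Lambert sketch of "absorption as a yardstick".
# The ratio of a strongly absorbed low-energy line map to a nearly
# unabsorbed high-energy map of the same element decays exponentially
# with the X-ray path length t:
#   I_low / I_high ~ r0 * exp(-(mu_low - mu_high) * rho * t)
# mu_* are mass attenuation coefficients (cm^2/g, e.g. from NIST XCOM),
# rho is density (g/cm^3), and r0 is the low/high ratio at zero
# absorption. Illustrative only; all names are assumptions.
import numpy as np

def path_length_map(map_low, map_high, mu_low, mu_high, rho, r0):
    ratio = map_low / np.clip(map_high, 1e-12, None)
    return -np.log(ratio / r0) / ((mu_low - mu_high) * rho)
```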

So, it all looks quite promising except for one important detail – current SDDs have an absorption efficiency for high-energy photons close to zero. Even if you made your SDD sensor as large as 150 mm², it would still be close to zero; increasing it to 200 mm² would keep it steadily close to zero. So, having a large silicon sensor for EDS does not seem to matter; what matters is the absorption properties of the sensor material. Here we add a material-selection dimension to generate a new perspective for 3D EDS. And indeed, by selecting a CdTe EDS sensor we would be able to acquire X-rays with energies up to 100 keV or more.

To summarize, using a CdTe sensor opens an opportunity for a ‘snapshot’ 3D EDS technique, which can add more insight into elemental volume distribution and sample topography, and will not be limited by sample thickness. It would clearly be most practical for elements with high atomic numbers. Although it might be applicable only to a wide yet selected range of samples, this concept could be a complementary and fast (!) alternative to 3D EDS tilt-series tomography.

Rotary Engines Go “Round and Round”

Dr. Bruce Scruggs, XRF Product Manager, EDAX

Growing up outside of Detroit, MI, I found that automobiles were ingrained in the culture, particularly American muscle cars. I was never a car buff, but if I said little and nodded knowingly during these car discussions, I could at least survive. Engine displacement? Transmission? Gear ratios? Yep, just nod your head and grunt a little bit. Well, it turns out that working at EDAX I’ve run into a couple of serious car restoration experts. There always seems to be a common theme with these guys: how do I get more power out of this engine?

Recently, one of these restoration experts brought in a small section of the rotor housing of a Mazda engine from the early ‘80s. It turns out this guy likes to rebuild Mazda engines, tweak the turbocharging, and race them. As we all know, Mazda was famous for commercializing the Wankel engine, aka the rotary engine, to power their cars. Rotary engines are famous for their simplicity and the power one can generate from a relatively small engine displacement. These engines are also infamous for poor fuel consumption and emissions, which led Mazda to end general production in roughly 2012 with the last of the production RX-8s.

Now, one of the questions in rebuilding these engines is how to repair and resurface the oblong rotor housing. In older engines of this type, the surface of the rotor housing can suffer deep gouges. The gouges can be filled but then need to be resurfaced. Initially, we imaged the cross-section of the rotor housing block in an Orbis PC micro-XRF spectrometer to determine what was used to surface coat the rotor housing. If you read up on this engine (it’s a 12A variant), the block is aluminum with a cast iron liner and a hard chromium plating. The internet buzz claims the liner is installed via a “sheet metal insert process”, but when I google “sheet metal insert process”, all I get are links to sheet metal forming and links to webpages that have copied the original reference.

In the following Orbis micro-XRF maps (Figures 1a and 1b), you can see the aluminum rotor housing block and the cast iron liner. Each row of the map is about 100 µm wide, with the iron liner being about 1.5 mm thick. If you look carefully, you can also see the chrome coating on the surface of the iron liner. In the cross-section, which was cut with a band saw, the chrome coating is about one map pixel across, so it’s less than 100 µm thick. From web searches, hard chrome plating for high-wear applications starts at around 25 µm thick and ranges up to hundreds of microns; very thick coatings are ground or polished down after plating to achieve a more uniform finish. So, what is found in the elemental map is consistent with the lower end of the web-based information for a hard chrome coating, bearing in mind that the coating measured had well over 150k miles of wear and tear. If we had a rotor housing with less wear and tear, we could use XRF to make a proper measurement of the chrome plating thickness and provide a better estimate of the original manufacturer’s specification for the hard chrome thickness.

Figure 1a: Orbis PC elemental map

Overlay of 4 elements:
Fe: Blue (from the cast iron liner)
Al: Green (from the aluminum rotor housing block)
Cr: Yellow (coating on the cast iron liner)
Zn: Red (use unknown)

Figure 1b: Total counts map: Lighter elements such as Al generate fewer X-ray counts and appear darker than the brighter, heavy Fe containing components.

We did have a look at the chrome coating by direct measurement, with both XRF, looking for alloying elements such as Ti, Ni, W, and Mo, and SEM-EDS, looking for carbides and nitrides. We found that it’s simply a nominally pure chrome coating with no significant alloying elements. We did see some oxygen using SEM-EDS, but that would be expected on a surface exposed to high heat and combustion for thousands of operating hours. Again, these findings are consistent with a hard chrome coating.

In some online forum discussions, there was even speculation that the chrome coating was micro-porous to hold lubricant. So, we also looked at the chrome surface under high SEM magnification (Figure 2). There are indeed some voids in the coating, but they do not appear to be there by design; rather, they are simply voids associated with the metal grain structure of the coating, or perhaps with wear. We specifically targeted a shallow scratch in the coating, looking for indications of sub-surface porosity. The trough of the scratch shows a smearing of the chrome metal grains, but nothing indicating designed micro-porosity.

Figure 2: SEM image of the chrome plated surface of the rotor housing liner. The scratch running vertically in the image is about 120 µm wide.

The XRF maps in Figure 1 also provide some insight into the sheet metal insert process. The cast iron liner appears to be wrapped in ribbons of aluminum alloy and iron. The composition of the iron ribbon (approximately 1 wt% Mn) is about the same as the liner, but the aluminum alloy ribbon is higher in copper content than the housing block. This can be seen in the elemental map (Figure 1a), where the aluminum ribbon is a little darker green (lower Al signal intensity) than the housing block itself. The map also shows a thread of some zinc-bearing component running through what we speculate are the wrappings around the liner. My best guess is that it is some sort of joining compound. Ultimately, the sheet metal insert process involves a bit more than a simple press or shrink fit of a cylinder sleeve in a piston engine block. Nod knowingly and grunt a little.

With Great Data Comes Great Responsibility

Matt Nowell, EBSD Product Manager, EDAX

First, I have to acknowledge that I stole the title above from a tweet by Dr. Ben Britton (@BMatB), but I think it applies perfectly to the topic at hand. This blog post has been inspired by a few recent events around the lab. First, our data server suffered multiple simultaneous hard drive failures. Nothing makes you appreciate your data more than no longer having access to it. Second, my colleague and friend Rene de Kloe wrote the preceding article in this blog, and if you haven’t had the opportunity to read it, I highly recommend it. Having been involved with EBSD sample analysis for over 20 years, I have drawers and drawers full of samples. Some of these are very clearly labeled. Some are not labeled, or the label has worn off or fallen off. One of them, we believe, is one of Rene’s missing samples, although both of us have spent time trying to find it. Some I can recognize just by looking; others need a sheet of paper with descriptions and details. Some are just sitting on my desk, either waiting for analysis or serving as visual props during talks. Here is a picture of some of these desk samples, including a golf club with a sample extracted from the face, a piece of a Gibeon meteorite that has been shaped into a guitar pick, a wafer I fabricated myself in school, a rod of tin I can bend and work harden and then hand to someone else to try, and a sample of a friction stir weld that I’ve used as a fine-grained aluminum standard.

Each sample leads to data. With high-speed cameras, it’s easier to collect more data in a shorter period of time. With simultaneous EDS collection, it’s more data still. With features like NPAR™, PRIAS™, HR-EBSD, and the OIM Analysis™ v8 reindexing functionality, there is also a driving force to save the EBSD patterns for each scan. With 3D EBSD and in-situ heating and deformation experiments, there are multiple scans per sample. Over the years, we have archived data with Zip drives, CDs, DVDs, and portable hard drives. Fortunately, the cost of storage has decreased dramatically in the last 20+ years. I remember buying my first USB storage stick in 2003, with 256 MB of storage. Now I routinely carry around multiple TBs of data, full of different examples for whatever questions might pop up.

How do we organize this plethora of data?
Personally, I sometimes struggle with this problem. My desk and office are often a messy conglomerate of different samples, golf training aids (they help me think), papers to read, brochures to edit, and other work to do. I’m often asked if I have an example of one material or another, so there is a strong driving force to be able to find things quickly. Previously, I used a database we wrote internally, which was nice but required all of us to enter accurate data into it. I also used photo management software and the batch processor in OIM Analysis™ to create a visual database of microstructures, which I could quickly review to recognize examples. Often, however, I ended up needing multiple pictures to capture all the information I wanted from this collection.

To help with this problem, the OIM Data Miner function was implemented in OIM Analysis™. This tool indexes the data on any given hard drive and provides a list of all the OIM scan files present; a screenshot of the Data Miner running on one of my drives is shown above. The Data Miner is accessed through an icon on the OIM Analysis™ toolbar. I can see the scan name, where it is located, the date associated with the file, the phases used, the number of points, the step size, the average confidence index, and the elements associated with any simultaneous EDS collection. From this tool, I can open a file of interest, or delete a file I no longer need. I can search by name, phase, or element, and I can display duplicate files. I have found this extremely useful for finding datasets and wanted to write a little about it in case you also have use for this functionality.

“It’s not the size of the dog in the fight, it’s the size of the fight in the dog.” (Mark Twain)

Dr. Oleg Lourie, Senior Product Manager, EDAX

San Javier, Spain, October 18, 2015: an Airbus A400M airlifter escorted by Spain’s Patrulla Águila squadron at their 30th anniversary celebration event.

Many of us like to travel, and some people are fascinated by the view of gigantic A380 planes slowly navigating the tarmac, projecting gracious and powerful determination. I too cannot overcome a feeling of fascination every time I observe these magnificent planes; they are really, literally, big. The airline industry, however, seems to have a more practical perspective on the matter: the volume of A380 purchases is in decline, and according to recent reports Airbus is considering reducing production based on a growing preference for smaller and faster airplanes. Although the connection may seem slightly tenuous, in my mind I see a fairly close analogy to this situation in the EDS market, when the discussion comes to the size of EDS sensors.

In modern microanalysis, where studies of compositional structure are rapidly becoming dependent on the time scale, the use of large sensors can no longer be the single solution for optimizing the signal. The energy resolution of an EDS spectrometer is related to its signal detection capability, which determines the signal-to-noise ratio. Fundamentally, to increase the signal-to-noise ratio one may choose to increase the signal (the number of counts), or alternatively to reduce the noise of the detector electronics and improve its sensitivity. The first approach, based on a larger number of counts, is directly related to the amount of input X-rays, determined by the solid angle of the detector and/or the acquisition time. A good example of this approach would be a large SDD sensor operating at long shaping times. A conceptually alternative methodology would be to employ a sensor with a) reduced electronics noise, and b) higher efficiency in X-ray transmission, which implies fewer X-ray losses in transit from the sample to the recorded signal in the spectrum.

Using this methodology, the signal-to-noise ratio can be increased with a smaller sensor having higher transmissivity and operating at higher count rates, versus a larger sensor operating at lower count rates.

To understand the advantage of using a small sensor at higher count rates, we can review a simple operational model for an SDD. The time for the charge generated by an X-ray in the Si body of the sensor to drift can be modeled either with a simple linear trajectory or a random walk model. In both cases, we arrive at an approximate l ~ √t dependence, where l is the distance traveled by the charge from cathode to anode and t is the drift time. With regard to sensor size, this means that the time to collect the charge from a single X-ray event is proportional to the sensor area. As an example, a simple calculation assuming an electron mobility of 1500 cm²/(V·s) and a bias of 200 V results in a drift time estimate of 1 µs for a 100 mm² sensor and 100 ns for a 10 mm² sensor. This implies that to collect the full charge in the large sensor, the preamplifier rise time needs to be in the range of 1 µs, versus the 100 ns rise time that can be used with the 10 mm² sensor. With a 10 times higher readout frequency, the 10 mm² sensor will collect a signal equivalent to that of the 100 mm² sensor.
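
Those two numbers are easy to reproduce under the simple model just described, taking the drift length as roughly the sensor radius and a uniform field E ≈ V/l, so t = l/(μE) = l²/(μV). This is a back-of-the-envelope sketch, not a device simulation:

```python
# Back-of-the-envelope drift-time estimate from the simple model above:
# t = l / (mu * E) with E ~ V / l, so t = l**2 / (mu * V), where the
# drift length l is taken as the sensor radius. Not a device simulation.
import math

def drift_time_s(area_mm2, mobility_cm2_Vs=1500.0, bias_V=200.0):
    radius_cm = math.sqrt(area_mm2 / math.pi) / 10.0  # mm -> cm
    return radius_cm ** 2 / (mobility_cm2_Vs * bias_V)

print(f"{drift_time_s(100) * 1e6:.2f} us")  # 100 mm2 sensor -> ~1 us
print(f"{drift_time_s(10) * 1e9:.0f} ns")   # 10 mm2 sensor -> ~100 ns
```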

What will happen if we run a large sensor at high count rates? Let’s assume that the 100 mm² sensor in this example is operated with the 100 ns rise time. In this case, since the rise time is much shorter than the charge drift time (~1 µs), not all the electrons produced by an X-ray event will be collected. This shortage results in an incomplete charge collection (ICC) effect, which introduces artifacts and degrades the energy resolution. A single characteristic X-ray for Cu L and Cu Kα generates around 245 and 2115 electrons respectively in Si, which drift to the anode, forced by the applied bias, in quite large electron packets. Such large electron packets expand rapidly during the drift, with an ultimately linear expansion rate versus drift time. If the rise time used to collect the electron packet is too short, some of the electrons in the packet will be ‘left out’, which results in less accurate charge counting and consequently a less accurate readout of the X-ray energy. This artifact, called ‘ballistic deficit’ (BD), negatively affects the energy resolution at high count rates. It is important to note that both the ICC and BD effects in large sensors become more pronounced with increasing characteristic X-ray energy, which means the resolution stability deteriorates even more rapidly for higher-Z elements compared to the low-energy/light-element range.
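
The packet sizes quoted above follow from dividing the photon energy by the mean energy needed to create one electron–hole pair in silicon; using w ≈ 3.8 eV (commonly quoted values are near 3.6–3.8 eV) reproduces the figures in the text:

```python
# Electron packet size per absorbed photon: n = E / w, where w is the
# mean energy per electron-hole pair in Si (w = 3.8 eV assumed here).
def electrons_per_photon(energy_eV, w_eV=3.8):
    return energy_eV / w_eV

print(round(electrons_per_photon(930)))   # Cu L,  ~0.93 keV -> ~245
print(round(electrons_per_photon(8040)))  # Cu Ka, ~8.04 keV -> ~2116
```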

Figure 1: Comparative Resolution at MnKα (eV) *

As a factual illustration of this topic, actual SDD performance for sensors with different areas is shown in Figure 1. It displays the effect of acquisition rate on energy resolution for EDS detectors with different sensor sizes and electronics designs. Two clear trends can be observed: a rapid deterioration of energy resolution with increasing sensor size for the traditional electronics design, and much more stable resolution performance at high count rates for the sensor with the new CMOS-based electronics. In particular, the data for the Elite Plus with a 30 mm² sensor shows stable resolution at shaping times down to 0.96 µs, which corresponds to >200 kcps OCR.

In conclusion, employing a smaller sensor with optimized signal collection efficiency at higher count rates offers an attractive alternative: an acquired X-ray signal matching that from large-area sensors, combined with high throughput and improved energy resolution. Ultimately, the ideal solution for low-flux applications will be a combination of several smaller sensors arranged in an array, which combines all the benefits of smaller geometry, higher count rates, and higher transmissivity with a maximized solid angle.

* SDD performance data courtesy of the EDAX Applications Team.

Why is There an Error in My Measurement?

Sia Afshari, Global Marketing Manager, EDAX

One interesting part of my job has been the opportunity to meet people of all technical backgrounds and engage in conversations about their interest in analytical instrumentation and ultimately what a measurement means!

I recall that several years back, on a trip to Alaska, I met a group of young graduate students heading to the Arctic to measure the “Ozone Hole.” Being an ozone lover, I started asking questions about the methodology they were going to use to accomplish this important task, especially the approach for comparative analysis of the expansion of the ozone hole!

I learned that the analytical methods used for ozone measurement have changed over time, and that the type of instrumentation utilized for this purpose has changed along with advances in technology. I remember asking about a common reference standard one would use across the various techniques over the years, to make sure the various instrumentation readings were within the specified operating parameters and the collected data represented the same fundamental criteria obtained by different methods, under different circumstances, over time. I was so struck by the puzzled looks of these young scientists at my questions regarding cross-comparison, errors, calibration, reference standards, statistical confidence, etc., that I felt I’d better stop tormenting them and let them enjoy their journey!

Recently I had occasion to discuss with a couple of analysts the analytical requirements for an application that included both bulk and multilayer coating samples. We talked about the main challenge for multilayer analysis being the availability of a reliable type standard that truly represents the actual samples being analyzed. We noted that, where type standards are not available, the expectation of attainable accuracy needs to be evaluated by considering the errors involved and the propagation of those errors through the measurement, especially when one approaches “infinite thickness” conditions for some of the constituents!

As the demand increases for “turn-key” systems, where users are more interested in obtaining the “numbers” from analytical tools, it is imperative for us as a manufacturer and software developer to embed the fundamental measurement principles into our data presentation in such a manner that a measurement result is qualified and quantified with a degree of confidence that is easily observable by an operator. This is our goal as we set out to develop the next generation of intelligent analytical software.

The other part of our contribution as a manufacturer is training users in their understanding of measurement principles. It is imperative to emphasize the basics and the importance of following a set of check sheets to obtain good results!

My goal is to use a series of blogs as a venue for highlighting the parameters that influence a measurement and the underlying reasons for errors in general, and to provide references for a better understanding of the expected performance of analytical equipment in general and X-ray analysis in particular!

So to start:

My favorite easy readings on counting statistics and errors are old ones, but classics never go out of style:
• Principles and Practices of X-ray Spectrometric Analysis, by Eugene Bertin, Chapter 11, “Precision and Error: Counting Statistics.”
• Introduction to the Theory of Error, by Yardley Beers (Addison-Wesley, 1953).  Yes, it is old!

Also, Wikipedia has a very nice write-up on the basic principles of measurement uncertainty, recommended by a colleague, that could be handy for a novice and even an experienced user. If you don’t believe in Wikipedia, the article at least has a number of linked reference documents for further review.
• https://en.wikipedia.org/wiki/Measurement_uncertainty

As food for thought on measurements, I would consider:
• What am I trying to do?
• What is my expectation of performance in terms of accuracy and precision?
• Are there reference standards that represent my samples?
• What techniques are available for measuring my samples?

With recognition of the facts that:
• There is no absolute measurement technique; all measurements are relative. There is uncertainty in every measurement!
• The uncertainty in a measurement is a function of systematic errors, random errors, and bias.
• All measurements are comparative in nature, so there is a requirement for reference standards.
• Reference standards that represent the type of samples analyzed provide the best results.
• One cannot measure more accurately than the specified reference standard’s error range.  (Yes reference standards have error too!)
• Fundamental Parameters (FP) techniques are expedients when type standards are not available but have limitations.
• A stated error for a measurement needs to be qualified with a degree of confidence expressed as a number, i.e., a standard deviation.
• Precision is a controllable quantity in counting statistics by extending the measurement time.
• What else is present in the sample is often as important in the analysis as the targeted element. Sample matrix does matter!
• The more complex the matrix of the specimen being measured, the more convoluted are the internal interactions between the matrix atoms.
• Systematic errors are generally associated with accuracy, and random errors with precision.
• Uncertainties/errors add in quadrature (the square root of the sum of squares); the short sketch below illustrates this and the precision vs. counting-time trade-off.
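
As a short sketch of the last two points above, and of the earlier note that precision is controlled by counting time, the following assumes simple Poisson counting statistics; the rates and times are made up for illustration:

```python
# Quadrature addition of independent uncertainties, and Poisson
# counting precision improving as 1/sqrt(N) with measurement time.
# Numbers are illustrative only.
import math

def combine_in_quadrature(*errors):
    return math.sqrt(sum(e * e for e in errors))

print(combine_in_quadrature(1.0, 0.5))  # 1% + 0.5% -> ~1.12% combined

def relative_precision(count_rate_cps, live_time_s):
    n = count_rate_cps * live_time_s  # total counts N
    return 1.0 / math.sqrt(n)         # relative 1-sigma uncertainty

print(relative_precision(1000, 10))  # 10,000 counts -> 0.01 (1%)
print(relative_precision(1000, 40))  # 4x the time halves it -> 0.005
```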

Till next time, when we will visit these topics and other relevant factors in more detail; questions, suggestions, and related input are greatly appreciated. By the way, I still think the ozone layer may not be being measured scientifically these days!