Applications

Is It Worth The Salt?

Felix Reinauer, Applications Specialist, EDAX

When you are in Sweden for Scandem 2019, it is the perfect time to order SOS as an appetizer or for dinner. It is made of smör, ost and sill (butter, cheese and herring) served together with potatoes. Sometimes the potatoes need a little improvement in taste, and it is very easy to reach for the salt found on almost every table and season them. Doing that, I started to think about how easy this is today and what I am really pouring on my potatoes.

Salt was very important in the past. In ancient times it was so important that the governments of Egypt and other countries set up salt taxes. Around 4,000 years ago in China, and during the Bronze Age in Europe, people started to preserve food using brine. The Romans had soldiers guarding and securing the transportation of salt, which was as expensive as gold. Sal is the Latin word for salt, and the soldiers used to receive their salarium; today you still get a salary. Later, 'Streets of Salt' were established to guarantee safe transportation all over the country, and cities along these roads grew wealthy. Some cities, like Munich, were even founded to make money from the salt tax. Salt also destroyed empires and caused great crises. Venice fought with Genoa over spices in the Middle Ages. In the 19th century, soldiers were sent out to conquer a mountain of salt of inconceivable value lying along the Missouri River. We all know the history of India's independence: Mohandas Gandhi organized a salt protest to demonstrate against the British salt tax. The importance of salt is also reflected in our languages: "Worth the salt", "Salz in der Suppe" or "Mettre son grain de sel".

The two principal ways of getting salt are from underground deposits and from the sea. Underground salt is extracted either by conventional mining or by solution mining: the salt is either dug directly out of the mountain and then dissolved to clean it, or hot water is pumped in to dissolve the salt and the resulting brine is pumped back up. Sea salt is produced in small pools, which are filled at high tide and then left so the water evaporates under sunny weather conditions.

Buying salt today is no longer that expensive, dangerous or difficult, but now a new problem arises. I am talking about salt for consumption, which usually means NaCl in nice white crystals. So, are there any advantages to using different kinds of salt? If we believe advertisements or gourmets, it is important where the salt we use came from and how it was produced. Today the most time-consuming issue is selecting the kind of salt you want in the supermarket!

For my analysis I chose three kinds of salt from three different areas. The first question was whether the differences would be big enough to detect using EDS, or whether they would be limited to minor trace elements that can only be seen with WDS. It was a surprise to me how large the differences are. I looked at several crystals from each sample; shown as examples are the typical analyses of the compounds and elements characteristic of each provenance.

First, the mined salts. I selected a salt from the oldest salt company in Germany, established over 400 years ago; one from Switzerland, manufactured in the middle of the Alps; and one from the Kalahari, to be as far away as possible from the others. The salt from Switzerland is the purest, containing only NaCl with some minor traces. The German salt contains more potassium, and the Kalahari salt more sulfur and oxygen (Figure 2).

Figure 2.

Secondly, I was interested in salt coming from the sea. I selected two types of salt from French coasts, one from the Atlantic Ocean in Brittany and another from the Mediterranean Sea; the third came from the German coast on the Baltic Sea. The first interesting impression is that all the sea salts contain many more elements. The Mediterranean salt contains the smallest amount of trace elements. The salts from the Atlantic Ocean and the Baltic Sea contain, besides the main NaCl, phases containing Ca, K, S, Mg and O; a difference between the two is the amount of Ca-containing compounds (Figure 3).

Figure 3.

Finally, I was interested in some uncommon types of salt. In magazines and on television, experts often publish recipes with special types supposedly offering a special taste, and advertising offers remarkable new kinds of 'healthy' salt. So, I was looking for three kinds that seemed unusual. I found two colored Hawaiian salts, one red and one black. The spectrum of the red salt shows nicely that Fe-containing minerals cause the red color; even titanium can be found, along with larger amounts of Al, Si and O. The black salt contains mainly the same elements, but instead of Fe, a high amount of C causes the black color. A designer salt is the pyramid finger salt, which is placed on top of meat to make it look nicer. Besides the shape, its only specialty is a higher amount of Ca, S and O (Figure 4).

Figure 4.

It was really interesting to see that salt is not simply salt. Just as the shapes of the crystals vary, so do their compositions. In principle it is NaCl, but it contains varying amounts of different compounds, or even coal to color it. The elements found in different amounts relate to the type of salt and the area it came from, and these compounds are located in a few very small areas in and on the crystals.
And finally, I pour salt onto my potatoes and think, OK, it is NaCl.

 

Hats Off/On to Dictionary Indexing

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

Recently I gave a webinar on dynamic pattern simulation. The use of a dynamic diffraction model [1, 2] allows EBSD patterns to be simulated quite well. One topic I introduced in that presentation was that of dictionary indexing [3]. You may have seen presentations on this indexing approach at some of the microscopy and/or materials science conferences. In this approach, patterns are simulated for a set of orientations covering all of orientation space. Then, an experimental pattern is tested against all of the simulated patterns to find the one that provides the best match with the experimental pattern. This approach does particularly well for noisy patterns.

I’ve been working on implementing some of these ideas into OIM Analysis™ to make dictionary indexing more streamlined for datasets collected using EDAX data collection software – i.e. OIM DC or TEAM™. It has been a learning experience and there is still more to learn.

As I dug into dictionary indexing, I recalled our first efforts to automate EBSD indexing. Our first attempt was a template matching approach [4]. The first step in this approach was to use a “Mexican Hat” filter. This was done to emphasize the zone axes in the patterns. This processed pattern was then compared against a dictionary of “simulated” patterns. The simulated patterns were simple – a white pixel (or set of pixels) for the major zone axes in the pattern and everything else was colored black. In this procedure the orientation sampling for the dictionary was done in Euler space.
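That original filter code is long gone, but the idea is easy to sketch. The following Python function (the name and parameters are my own illustrative choices, not the historical implementation) builds a zero-mean "Mexican hat" kernel; convolving a pattern with it emphasizes bright, point-like features such as zone axes:

```python
import numpy as np

def mexican_hat_kernel(size=15, sigma=2.0):
    """Build a 'Mexican hat' (Laplacian-of-Gaussian-style) kernel.

    The kernel has a positive central peak surrounded by a negative ring,
    so convolution responds strongly to bright spots (zone axes) and
    suppresses slowly varying background. Zero-mean so that flat regions
    of the pattern map to zero.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    kernel = (1.0 - r2) * np.exp(-r2)  # classic sombrero profile
    return kernel - kernel.mean()      # remove the DC component
```

A pattern would then be convolved with this kernel (e.g. via `scipy.signal.convolve2d`) before comparison against the zone-axis dictionary.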
It seemed natural to go this route at the time, because we were using David Dingley’s manual on-line indexing software which focused on the zone axes. In David’s software, an operator clicked on a zone axis and identified the <uvw> associated with the zone axis. Two zone axes needed to be identified and then the user had to choose between a set of possible solutions. (Note – it was a long time ago and I think I remember the process correctly. The EBSD system was installed on an SEM located in the botany department at BYU. Our time slot for using the instrument was between 2:00-4:00am so my memory is understandably fuzzy!)

One interesting thing of note in those early dictionary indexing experiments was that the maximum step size in the sampling grid of Euler space that would result in successful indexing was found to be 2.5°, quite similar to the maximum target misorientation for modern dictionary indexing. Of course, this crude sampling approach may have led to the lack of robustness in this early attempt at dictionary indexing. The paper proposed that the technique could be improved by weighting the zone axes by the sum of the structure factors of the bands intersecting at the zone axes.
However, we never followed up on this idea, as we abandoned the template matching approach and moved to the Burns algorithm coupled with the triplet voting scheme [5], which produced more reliable results. Using this approach, we were able to get our first set of fully automated scans. We presented the results at an MS&T symposium (Microscale Texture of Materials Symposium, Cincinnati, Ohio, October 1991), where Niels Krieger-Lassen also presented his work on band detection using the Hough transform [6]. After the conference, we hurried back to the lab to try out Niels' approach for the band detection part of the indexing process [7].
Modern dictionary indexing applies an adaptive histogram filter to the experimental patterns (at left in the figure below) and the dictionary patterns (at right) prior to performing the normalized inner dot-product used to compare patterns. The filtered patterns are nearly binary, and seeing them triggered my memory of our early dictionary work, as they reminded me of the nearly binary "Sombrero" filtered patterns – Olé!
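The matching step itself is simple to illustrate. Below is a minimal Python sketch of the normalized inner dot-product comparison, assuming the patterns have already been filtered; the function name is hypothetical and this is not the OIM implementation:

```python
import numpy as np

def best_dictionary_match(experimental, dictionary):
    """Find the dictionary pattern with the highest normalized inner
    (dot) product with the experimental pattern.

    experimental: 2D array, one filtered EBSD pattern
    dictionary:   3D array, one filtered simulated pattern per orientation

    Returns (index of best match, its similarity score in [-1, 1]).
    """
    exp = experimental.ravel().astype(float)
    exp /= np.linalg.norm(exp)  # unit length, so the dot product is a cosine
    scores = []
    for pattern in dictionary:
        vec = pattern.ravel().astype(float)
        vec /= np.linalg.norm(vec)
        scores.append(float(np.dot(exp, vec)))
    best = int(np.argmax(scores))
    return best, scores[best]
```

In a real dictionary indexing run, the dictionary holds hundreds of thousands of simulated patterns, so the loop is replaced by a single matrix-vector product, but the comparison is the same.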
We may not have come back full circle but progress clearly goes in steps and some bear an uncanny resemblance to previous ones. I doff my hat to the great work that has gone into the development of dynamic pattern simulation and its applications.

[1] A. Winkelmann, C. Trager-Cowan, F. Sweeney, A. P. Day, P. Parbrook (2007) “Many-Beam Dynamical Simulation of Electron Backscatter Diffraction Patterns” Ultramicroscopy 107: 414-421.
[2] P. G. Callahan, M. De Graef (2013) “Dynamical Electron Backscatter Diffraction Patterns. Part I: Pattern Simulations” Microscopy and Microanalysis 19: 1255-1265.
[3] Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M.A. Jackson, J.P. Simmons, M. De Graef, A.O. Hero (2015) “A dictionary approach to electron backscatter diffraction indexing” Microscopy and Microanalysis 21: 739-752.
[4] S.I. Wright, B.L. Adams, J.-Z. Zhao (1991) “Automated determination of lattice orientation from electron backscattered Kikuchi diffraction patterns” Textures and Microstructures 13: 2-3.
[5] S.I. Wright, B. L. Adams (1992) “Automatic-analysis of electron backscatter diffraction patterns” Metallurgical Transactions A 23: 759-767.
[6] N.C. Krieger Lassen, D. Juul Jensen, K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6: 115-121.
[7] K. Kunze, S. I. Wright, B. L. Adams, D. J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements.” Textures and Microstructures 20: 41-54.

Saying What You Mean and Meaning What You Say!

Shawn Wallace, Applications Engineer, EDAX

A recent conversation on a listserv discussed sloppiness in the use of words and how it can cause confusion. This made me realize that in the world of microanalysis we are not immune. We are probably sloppiest with two particular words: resolution and phase.

Let us start with how we use the word phase and how phases are commonly defined in microanalysis. In Energy Dispersive Spectroscopy (EDS), we use phase for everything: phase mapping, the phase library, and so on. In Electron Backscatter Diffraction (EBSD), the usage is a little more straightforward.

So, what is a phase? Well to me, a geologist, a phase has both a distinct chemistry and a distinct crystal structure. Why does this matter to a geologist? Two different minerals with the same chemistry, but with different structures, can behave in very different ways and this gives me useful information about each of them.
The classic example for geologists is the Al2SiO5 system (figure 1). It has three members: kyanite, sillimanite, and andalusite. They each have the same chemistry but different structures, and the structure of each is controlled by the pressure and temperature at which the mineral equilibrated. Simple chemistry tells me nothing; I need the structure to tease out that information.

Figure 1. Phase Diagram of the Al2SiO5 system in geological conditions. Different minerals form at different pressures and temperatures, letting geologists know how deep and/or the temperature at which the parent rock formed.**

EDS users use the term phase much more loosely: a phase is something that is chemically distinct. Our phase maps compare the spectrum at each pixel; in the end, the software goes through the entire map and groups each pixel with like pixels. The phase library uses chi-squared fits to compare the spectrum to the library (figure 2).

Figure 2. Our Spectrum Library Match uses a chi-squared fit to determine the best possible matches. This 'phase' is based on compositional data only, not compositional and structural data.
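As a rough illustration of this kind of matching (a sketch, not EDAX's actual implementation), a chi-squared comparison of a measured spectrum against a library might look like this in Python; the function names and the library structure are hypothetical:

```python
import numpy as np

def chi_squared(measured, reference):
    """Chi-squared misfit between a measured EDS spectrum and a library
    reference spectrum, channel by channel; lower means a better match.
    Channels with zero expected counts are skipped to avoid dividing by zero.
    """
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    mask = reference > 0
    return float(np.sum((measured[mask] - reference[mask]) ** 2
                        / reference[mask]))

def best_library_match(measured, library):
    """Return the name of the library entry with the smallest chi-squared.

    library: dict mapping phase name -> reference spectrum (array-like)
    """
    return min(library, key=lambda name: chi_squared(measured, library[name]))
```

Note that everything here is compositional; nothing about crystal structure enters the comparison, which is exactly the point made above.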

While the definition of phase is relatively straightforward, the meaning of resolution is murkier. If you ask someone what the EDS resolution is, you may get different answers depending on whom you ask. The main way we use the term in EDS is spectral resolution, which defines how tight the peaks in a spectrum are (figure 3).

Figure 3. Comparison of EDS vs. WDS spectral resolution. WDS has much higher resolution (tighter peaks) than EDS, but yields fewer counts and requires more set-up.

The other main use of resolution in EDS is the spatial resolution of the EDS signal itself (figure 4). Many factors determine this, but the main ones are the accelerating voltage and the sample characteristics; this resolution can range from nanometers to microns.

Figure 4. Distribution of the electron energy deposited in an aluminum sample (top row) and a gold sample (bottom row) at 15 kV (left column) and 5 kV (right column). Note the dramatic difference in penetration given by the right hand side scale bar.

The final use of resolution in EDS is mapping resolution. This is by far the easiest to understand: it is simply the step size of the beam while you are mapping.

Luckily for us, the easiest way to find out what people mean when they use the terms resolution or phase is just to ask. Of course, the way to avoid any confusion is to be as precise as possible with your choice of words. I resolve to do my part and communicate as clearly as I can!

** Source: Wikipedia

A New Light on Leonardo

Sue Arnell, Marcom Manager, EDAX

I recently spent 10 days' vacation back in the UK, but my visit "home" turned into somewhat of a busman's holiday when I visited the current exhibition at the Queen's Gallery in London: LEONARDO DA VINCI: A LIFE IN DRAWING. While all the drawings were very interesting, one poster in particular caught my eye.

Figure 1: Poster showing the use of X-ray Fluorescence (XRF) analysis on one of the drawings in the exhibition.

It may be hard to see in this small image, but the drawing in the bottom left corner of the poster showed two horses' heads, while the rest of the sheet showed very indistinct lines. When viewed under ultraviolet light, however, it became clear that an additional two horses were depicted on the same page.

Figure 2: Drawing of horses seen under ultraviolet light

A video on the exhibit site shows a similar result with a second page:

Figure 3: Hand study seen in daylight

Figure 4: Hand study seen under ultraviolet light

According to the poster, researchers* at the Diamond Light Source at Harwell in Oxfordshire used X-ray fluorescence, which is non-destructive and would therefore not harm the priceless drawing, to explain the phenomenon in the first drawing of the horses. Scanning a small part of the drawing to analyze individual metalpoint lines, they were able to extract the spectrum in Figure 5.

Figure 5: The results of XRF analysis on the drawing, showing the presence of copper (Cu) and zinc (Zn) in the almost invisible lines and almost no silver (Ag).

The conclusion was that Leonardo must have used a metalpoint based on a Cu/Zn alloy and that these metals have reacted over time to produce salts and render the lines almost invisible in daylight. However, under ultraviolet light, the full impact of the original drawings is still visible.

When I shared this analysis back in the EDAX office in Mahwah, NJ, Dr. Patrick Camus (Director of Engineering) had a few additional (more scientific) observations.

  • XRF may be useful in determining the fading mechanism by looking for elements associated with environmental factors, such as Cl (from possible contact with human fingertips) or S in the atmosphere from centuries of burning coal. It may also be related to exposure to sunlight.
  • Ultraviolet light as an incoming beam interacts with the material in a similar but slightly different way than X-rays, producing emissions at much lower energies. This process is called photoluminescence. The incoming beam excites valence electrons across an energy gap in the material to a higher energy level; during relaxation back to the ground state, a photon is released. The energy of these photons is typically 1-10 eV, much less than X-ray detectors can sense. Interestingly, this excitation does not occur in conductors/metals, providing more evidence that the drawing material is a band-gap (insulating) material like a salt.
  • This example shows that a single technique does not always provide a complete picture of the structure or composition of a sample, but the use of multiple techniques can provide information greater than the sum of the individual contributions.

From my point of view, I have been trying to explain, promote and market the EDAX products and analysis techniques for over eight years now, so it was very interesting to see the value of some of ‘our’ applications in a real-world situation.

* Dr. Konstantin Ignatyev, Dr. Giannantonio, Dr. Stephen Parry

Picture postcards from…

Dr. Felix Reinauer, Applications Specialist, EDAX

Display of postcards from my travels.

…L. A. – this is the title of a popular song by Joshua Kadison, which one may like or dislike, but at least three words in this title describe a significant part of my work at EDAX. Truth be told, I have never been to Los Angeles, but as an applications specialist, traveling in general is a big part of my job. I am usually on the move all over Europe, meeting customers for training or attending exhibitions and workshops. This part of my job gives me the opportunity to meet lots of people from different places and have fruitful discussions at the same time. If I am lucky, there is sometimes even some time left for sightseeing. The drawback of the frequent traveling is being separated from family and friends.

Nowadays it is easy to stay in touch thanks to social media. You send a quick text message or make a phone call, but these are short-lived. And here we get back to the title of this post and Joshua Kadison's pop song, because quite some time ago I started the tradition of sending picture postcards from the places I travel to. And yes, I am talking about the real ones made from cardboard, documenting the different cities and countries I get to visit. These cards are sweet notes, highly appreciated by the addressee, and are often pinned to a wall in our apartment for a while.

Within the last couple of years, I have noticed that it is getting harder to find postcards; this is especially true in the United States. Sometimes keeping up with my tradition feels like an Ironman challenge: first I run around to find nice picture postcards, then I have to look for stamps, and the last challenge is finding a mailbox. All these exercises must be done in a limited span of time, because the plane is leaving, the customer is waiting, or the shops are closing. But it is still worth it.

It is not only the picture on the front that is interesting; each postcard also holds one or more stamps – tiny pieces of artfully designed paper. Postage stamps were first introduced in Great Britain in 1840. The first one showed the profile of Queen Victoria and is called the "Penny Black" due to its black background and its value. Thousands of different designs have been created ever since, attracting collectors all over the world. Sadly, this tradition may be fading: nowadays, quick and easy machine-printed stamps with only the value on top seem to be becoming the norm. But the small stamps are often beautiful to look at and are full of interesting information, either about historical events, famous persons or remarkable locations.

A selection of postage stamps from countries I have visited.

As a chemist, I was also curious about the components of the stamps, just as a famous painting might be investigated by XRF to collect information about the pigments and how the artist used them. For these little pieces of art, the SEM in combination with EDS is ideally suited, since they can be investigated in low-vacuum mode without being damaged. The stamps I looked at are from my trips to Sweden, Great Britain, the Netherlands and the Czech Republic. In addition, I added one German stamp as a tribute to one of the most important chemists, Justus von Liebig, after whom the Justus Liebig University in Gießen is named, where he was a professor (1824 – 1852) and where I did my Ph.D. (a few years later).

All the measurements shown below were made under the same conditions: an acceleration voltage of 20 kV, a pressure of 30 Pa, and 40x magnification. With the multifield map option the entire stamp area was covered, using a single-field resolution of 64×48 and 128 frames per field.

EDS maps of stamps from the Czech Republic, Germany, the Netherlands, Sweden and the United Kingdom.

The EDS results show that modern paper is a composite material. The basic cellulose fibers are covered with a layer of calcium carbonate to ensure good absorption of the different pigments used, as illustrated by the phase mappings. Even after many kilometers of travel and all the hands handling the postcards, all the features of the stamps are still intact and can be detected. The element mappings show that the colors are not only based on organic compounds; the presence of metal ions indicates the use of inorganic pigments. Typical elements detected were Al, S, Fe, Ti and Mn, among others. The majority of the analysis work I do for EDAX and with EDAX customers is very specialized and involves materials that would not be instantly familiar to non-scientists. It was fun to be able to use the same EDS analysis techniques on recognizable, everyday objects and to come up with some interesting results.

Common Mistakes when Presenting EBSD Data

Shawn Wallace, Applications Engineer, EDAX

We all give presentations. We write and review papers. Either way, we have to be critical of our data and how it is presented to others, both numerically and graphically.

With that said, I thought it would be nice to start this year with a couple of quick tips or notes that can help with mistakes I see frequently.

The most common thing I see is poorly documented cleanup routines and partitioning. Between the initial collection and the final presentation of the data, a lot is done to that data, and it needs to be clear what was done so that others can interpret it correctly (or reproduce it). Cleanup routines can change the data in ways that are subtle or not so subtle, but more importantly, they can wrongly change your conclusions. The easiest routine to see this with is grain dilation, which can turn noisy data into a textured dataset pretty fast (fig. 1).

Figure 1. The initial data was just pure noise. By running it iteratively through the grain dilation routine, you can make both grains and textures.
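To see how dilation can manufacture grains out of noise, here is a deliberately simplified sketch (my own illustration, not the OIM Analysis™ routine): each bad pixel adopts the most common label among its good neighbors, and iterating the pass grows regions where originally there were none:

```python
import numpy as np
from collections import Counter

def dilate_once(labels, bad=-1):
    """One pass of a simplified grain-dilation cleanup.

    Every 'bad' pixel (value == bad) adopts the most common good label
    among its 4-connected neighbors. Repeated passes on noisy data are
    what grow artificial grains, as in fig. 1.
    """
    out = labels.copy()
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != bad:
                continue  # only bad pixels are modified
            neighbors = [labels[rr, cc]
                         for rr, cc in ((r - 1, c), (r + 1, c),
                                        (r, c - 1), (r, c + 1))
                         if 0 <= rr < rows and 0 <= cc < cols
                         and labels[rr, cc] != bad]
            if neighbors:
                out[r, c] = Counter(neighbors).most_common(1)[0][0]
    return out
```

Run this a few times on a mostly-bad map with a handful of seed labels and the seeds take over the whole map, which is exactly why the number of iterations belongs in your methods section.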

Luckily for us, OIM Analysis™ keeps track of most of what is done via the cleanup routines and partitioning in the summary window on either the dataset level or the partition level (fig. 2).

Figure 2. A partial screenshot of the dataset level summary window shows cleanup routines completed on the dataset, as well as the parameters used. This makes your processing easily repeatable.

The other common issue is not including all the information needed to interpret a map. I really need to look at three things to get the full picture of an EBSD dataset: the IPF map (fig. 3), the phase map (fig. 4), and the IPF legend (fig. 5) for those phases. This is very important because, while the colors used are the same, the orientations they represent differ between the different crystal symmetries.

Figure 3. General IPF Map of a geological sample. Many phases are present, but the dataset is not complete without a legend and phase map. The colors mean nothing without knowing both the phase and the IPF legend to use for that phase.

Below is a multiple-phase sample with many crystal symmetries, all using red-green-blue as the general color scheme. By just looking at the general IPF map (fig. 3), I can easily get the wrong impression. Without the phase map, I do not know which legend I should use to understand the orientation of each phase; without the crystal-symmetry-specific legend, I do not know how the colors change over orientation space. I really need all of these legends and maps to truly understand what I am looking at. One missing brick and the tower crumbles.

Figure 4. In this multiphase sample, multiple symmetries are present. I need to know which phase a pixel is to know which legend to use.

Figure 5. With all the information now presented, I can actually go back and interpret figure 3, using figures 4 and 5 to guide me.

Being aware of these two simple ideas alone can help you to better present your data to any audience. The fewer the questions about how you got the data, the more time you will have to answer more meaningful questions about what the data actually means!

Old Eyes?

Dr. Stuart Wright, Senior Scientist EBSD, EDAX

I was recently asked to write a "Tips & Tricks" article for the EDAX Insight Newsletter, as I had recently given an EDAX webinar (www.edax.com/news-events/webinars) on texture analysis. I decided to follow up on one item I had emphasized in the webinar: the need to sample enough orientations for statistical reliability in characterizing a texture. The important thing to remember is that it is the number of grain orientations that matters, as opposed to the number of orientations measured. That led to introducing the idea of sub-sampling a dataset to calculate textures when the datasets are very large. Unfortunately, there was not enough room to go into the detail I would have liked, so I have decided to use our blog forum to cover some details about sub-sampling that I found interesting.

Consider the case where you want to characterize not only the texture of a material but also the grain size, or some other microstructural characteristic requiring a step size that is relatively fine compared to the grain size. According to some previous work, to accurately capture the texture you will want to measure approximately 10,000 grains [1], and about 500 pixels per average grain to capture the grain size well [2]. This would result in a scan with approximately 5 million data points. Instead of calculating the texture using all 5 million points, you can use a subset to speed up the calculation. In our latest release of OIM Analysis, this is not as big a concern as it once was, as the texture calculations have been multithreaded and are fast even for very large datasets. Nonetheless, since it is very likely you will want to calculate the grain size anyway, you can use the area-weighted average orientation of each grain for a quick texture calculation, as opposed to using all 5 million individual orientation measurements. Alternatively, a subset of the points obtained by random or uniform sampling of the scan area could be used.
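The two sub-sampling schemes can be sketched in a few lines of Python; the function and its parameters are illustrative, not part of OIM Analysis:

```python
import numpy as np

def subsample_indices(n_points, fraction, mode="uniform", seed=0):
    """Pick a subset of scan-point indices for a quick texture calculation.

    'uniform' takes every k-th point across the scan; 'random' draws the
    same number of points without replacement. Returns sorted indices.
    """
    k = max(1, int(round(n_points * fraction)))
    if mode == "uniform":
        step = n_points / k                      # evenly spaced strides
        return np.unique((np.arange(k) * step).astype(int))
    rng = np.random.default_rng(seed)            # reproducible random draw
    return np.sort(rng.choice(n_points, size=k, replace=False))
```

The texture would then be computed from `orientations[subsample_indices(len(orientations), 0.25)]` rather than the full array; as discussed below, the two schemes do not always give identical results.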

Of course, you may wonder how well sub-sampling works. I have done a little study on a threaded rod from a local hardware store to test these ideas. The material exhibits a (110) fiber texture, as can be seen in the normal-direction IPF map and the accompanying (110) pole figure. For these comparisons I have simply computed a normalized squared difference, point by point, through the Orientation Distribution Function (ODF), which we call the Texture Difference Index (TDI) in the software.


This is a good method because it allows us to compare textures calculated using different methods (e.g. series expansion vs. binning). In this study, I have used the general spherical harmonics series expansion with a rank of L = 22 and a Gaussian half-width of 0.1°. The dataset has 105,287 points, with 92.5% of those having a CI > 0.2 after CI standardization. I have elected to use only points with CI > 0.2. The results are shown in the following figure.

As the step size is relatively coarse with respect to the grain size, I experimented with requiring at least two pixels before considering a set of similarly oriented points a grain, versus allowing a single pixel to be a grain. This resulted in 9,981 grains and 25,437 grains, respectively. In both cases, the differences between the textures from these two grain-based sub-sampling approaches and the texture from the full dataset are small, with the one-pixel grain-based sub-sampling being slightly closer, as would be expected. However, the figure above raised two questions for me: (1) what do the TDI numbers mean, and (2) why do the random and uniform sampling grids differ so much, particularly as the number of points in the sub-sample gets large (i.e. at 25% of the dataset)?

TDI
The pole figure for the 1,000 random points in the previous figure certainly captures some of the characteristics of the pole figure for the full dataset. Is this reflected in the TDI measurements? My guess was that if I were to calculate the textures at a lower rank, something like L = 8, then the TDIs would go down. The rank is already part of the TDI calculation, so this is an easy thing to examine. For comparison I have chosen four different datasets: (a) all of the data in the dataset above (named "fine"), (b) a dataset from the same material with a coarser step size ("coarse"), containing approximately 150,000 data points, (c) a sub-sample of the original dataset using 1,000 randomly sampled data points ("fine-1000"), and (d) the "coarse" dataset rotated 90 degrees about the vertical axis in the pole figures ("coarse-rotated"). It is interesting to note that the textures that are similar "by eye" show a general increase in the TDI as the series expansion rank increases. However, for very dissimilar textures (i.e. "coarse" vs. "coarse-rotated") the jump to a large TDI is immediate.
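For readers curious what a TDI-style number looks like mechanically, here is one plausible form of a normalized point-by-point squared difference between two ODFs sampled on a common grid; this is a sketch, and the exact normalization used in OIM Analysis may differ:

```python
import numpy as np

def texture_difference_index(odf_a, odf_b):
    """Normalized point-by-point squared difference between two ODFs
    evaluated on the same grid of orientation space.

    Returns 0 for identical textures; the value grows as the two
    intensity distributions diverge. A sketch of a TDI-style measure,
    not the exact OIM formula.
    """
    a = np.asarray(odf_a, dtype=float).ravel()
    b = np.asarray(odf_b, dtype=float).ravel()
    return float(np.sum((a - b) ** 2) / np.sum(a ** 2 + b ** 2))
```

Because the ODFs are reconstructed at a chosen series expansion rank before this comparison, the rank controls how much fine orientation detail the difference is sensitive to, which is exactly the effect explored above.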

Random vs Uniform Sampling
The differences between random and uniform sampling were a bit curious, so I decided to check how the random points were positioned in the x-y space of the scan. The figure below compares uniform and random sampling for 4,000 data points (any more than this is hard to show). Clearly the random sampling is reasonable, but it does show a bit of clustering and some gaps within the scan area. Some of these small differences show up as higher TDI values than I would expect. Clearly, at L = 22 we are picking up quite subtle differences – at least subtle with respect to my personal "by-eye" judgment. It seems to me that my "by-eye" judgment is biased toward lower-rank series expansions.


Of course, another conclusion would be that my eyesight is getting rank with age ☹ I guess that explains my increasingly frequent need to reach for my reading glasses.

References
[1] S.I. Wright, M.M. Nowell & J.F. Bingert (2007) “A comparison of textures measured using X-ray and electron backscatter diffraction” Metallurgical and Materials Transactions A 38: 1845-1855.
[2] S.I. Wright (2010) “A Parametric Study of Electron Backscatter Diffraction based Grain Size Measurements” Practical Metallography 47: 16-33.