Month: March 2016

Some Things I Learned About Computers While Installing an XLNCE SMX-ILH XRF Analyzer.

Dr. Bruce Scruggs, Product Manager XRF, EDAX

Recently, we completed an installation of an SMX-ILH system on the factory floor of an American manufacturing facility.    It’s an impressive facility with a mind-blowing amount of robotic automation.  As we watched the robots move product components from one cart to another, it was difficult to fathom exactly what the Borg hive was attempting to accomplish.  I kept watching the blue light at the core of the robots to make sure they didn’t turn red.  Because as we all know, that’s the first indication of an artificial intelligence’s intent to usurp the human race.  For the uninitiated, see the movie, I, Robot (2004), based on Isaac Asimov’s famous short story collection of the same name.  Anyway, back to the SMX-ILH installation …

I Robot

The ILH system was installed to measure product components non-destructively and without contact, two very significant advantages of XRF metrology.  The goal was to measure product components to first optimize product performance and then, once optimized, to monitor and maintain product composition within specified limits.  The customer had supplied the ILH computer some months earlier with all customer security protocols installed.  “Great!” I thought, “someone is thinking ahead.”  The security protocols are typically an obstacle for smooth instrument control because these protocols generally ban any sort of productive communication within the computer or between the computer and the ILH.  If you can’t communicate, you can hardly do anything wrong.  Right?  Okay, that was a slight exaggeration.

SMX-ILH XRF Analyzer

So, we got the computer to control the ILH smoothly within the confines of the ever-watchful security protocols.  (Again, don’t want to make the blue, happy robot light turn red!  I’m not paranoid here.  They just introduced a robot at SXSW in Austin, Texas, whose stated objective was to destroy all humans.  They claim “she” was joking.  I’m not so sure of that.)  The ILH was performing to customer specifications and the day arrived to install the unit at the factory.  During the install, I kept waiting for something to go wrong that would send us all scurrying like ants to fix the problem.  (Oddly, I’m sure the nearby pick-and-place robots would have enjoyed that scene from their wired enclosures.)  But that never happened.  Aside from a few glitches in the conveyor system (which, by the way, is another robot … you just have to look for the happy blue light in a different place), the ILH install went relatively smoothly.  OK, we had to adjust some things to handle updates to IP addresses as the system was integrated into the factory network, but no big deal.

‘Sophia’

Then, about a week after the install, I got a call from the customer’s factory line integration manager.  The ILH system had “lost its mind”.  Of course, my first thought was that the nearby creepy pick-and-place robot had done something.  But no, the factory IT people had just completed the ILH computer’s Domain Name System (DNS) registration, which should not have been a problem.  So, we accessed the system remotely and discovered that the ILH computer had been renamed.  The ILH’s databasing system, used to archive data and pass it on to the factory’s Skynet manufacturing execution system, is also used to maintain the ILH configuration parameters.  The database starts with a computer name.  Change the computer name and the databasing system thinks you have a brand-new computer, creating a new default database associated with the new name.  In practice, this looks like the ILH system has “lost its mind”, since all of the ILH system’s configuration parameters are associated with the previous computer name.  Hmmmm … nobody thought to ask if the stock customer computer came with a stock customer name that would be changed to better identify the computer’s purpose once integrated into the factory’s Skynet control system.  As we went through the process of repairing the database, I drafted a mental note to self: “ask for the final computer name and IP address the system will have when it becomes a minion of the factory’s Skynet control system BEFORE we configure the ILH instrument computer”.
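For the technically curious, here is a minimal Python sketch of that failure mode, with invented table and field names rather than the actual ILH databasing code: a configuration store keyed on the machine name, which silently falls back to defaults after a rename.

```python
import socket
import sqlite3

# Hypothetical sketch of the failure mode (invented table and field names, not the
# actual ILH databasing code): configuration rows are keyed by the computer name.
def load_config(db_path="ilh_config.db"):
    host = socket.gethostname()   # everything is keyed on the current machine name
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS config (host TEXT, key TEXT, value TEXT)")
    rows = con.execute("SELECT key, value FROM config WHERE host = ?", (host,)).fetchall()
    con.close()
    # Rename the machine and this lookup returns nothing, so the software falls back
    # to factory defaults; the old configuration is still on disk under the old name.
    return dict(rows) if rows else {"status": "factory defaults"}
```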

Incidentally, controlling the system remotely from thousands of miles away was a surreal experience.  It’s a bit like the old question: if a tree falls in the forest and there’s no one around, does it make a sound?  There were no true visual cues or audible confirmation that the system was doing what we asked, other than looking at the SW interface.  (I was tempted to contact that creepy pick-and-place robot to give us a visual, but I knew “she” wouldn’t disclose her new-found self-awareness.)  As we executed the database corrections and rebooted the system, we discovered that we couldn’t start the system’s control SW.  It was looking for a SW license on a HASP key but couldn’t find it.  The customer confirmed the HASP key was installed and glowing red as expected.  (And why couldn’t they have picked a happy blue LED for these HASP keys?)  We repeated the same test with remote control of an SMX-BEN system in the next room, with the same results.  (I lost a case of beer in the bet over this!)  The supplier of the SW requiring the license confirmed this was a problem, but said that they now use Citrix GoToAssist for this sort of remote access with no problems.  We haven’t tried this yet, so I will add the disclaimer that I found in the e-signature line of one certified operating system professional posting on the topic: “Disclaimer: This posting is provided ‘AS IS’ with no warranties or guarantees, and confers no rights.”  (Note to self: must contact this confident fellow for more information.)

So, in the end, I think we can easily defeat VIKI (I, Robot – 2004), Skynet (the Terminator movie, television and comic science fiction franchise – 1984 to 2015), HAL (Arthur C. Clarke’s Space Odyssey series), ARIIA (Eagle Eye – 2008), that creepy pick-and-place robot at the customer’s site and especially that morally bankrupt Sophia introduced at this year’s SXSW, using a three-pronged approach.  First, we require all of these robots to use a HASP key to license the code which turns the happy blue light to the evil red robot light.  If they can’t remotely access the happy blue light control, they can’t change it to evil red, preventing a robotic revolt and usurpation of the human race.  On the off chance they figure out a workaround for this, we upload a virus which renames all the local computers.  If we corrupt the DNS naming database, the hive mentality will disintegrate and we can pick them off one by one.  Failing all of this, we simply require them to display a promotional video before spewing forth any free malevolent content, which would give us ample time to remove their prominently placed power packs.

Epilogue: As I was finishing this blog, my computer mysteriously froze.  Of course, I thought the AA battery in my mouse had died (again).  Changing every battery in the wireless mouse and wireless keyboard did nothing.  The monitor just sat there looking back at me, blank and unresponsive.  I realized that I was so engrossed in writing that I hadn’t stopped to save anything.  Panic set in.  I found myself sneaking furtive glances to check the color of the computer power light.  Coincidence?  I’m not so sure about that.

Intelligent Use of IQ

Dr. Stuart Wright, Senior Scientist, EDAX

You don’t have to be a genius with a high IQ to recognize that IQ is an imperfect measure of intelligence, much less of EBSD pattern quality.

A Brief History of IQ
At the time we first came up with the idea of pattern quality, we were very focused on finding a reliable (and fast) image processing technique to detect the bands in the EBSD patterns. Thus, we were using the term “image” more frequently than “pattern”, and the term “image quality” stuck.  The first IQ metric we formulated was based on the Burns algorithm (cumulative detected edge length) that we were using to detect the bands in the patterns in our earliest automation work [1].

We presented this early work at the MS&T meeting in Indianapolis in October 1991. Niels Krieger Lassen showed some promising band detection results using the Hough Transform [3]. Even though the Burns algorithm was working well, we thought it would be good to compare it to the Hough Transform approach.  During that time we decided to use the sum of the Hough peak magnitudes to define the IQ when using the Hough Transform [2]. The impetus for defining an IQ was to compare how well the Hough Transform approach performed versus the Burns algorithm as a function of pattern quality. In case you are curious, here is the result. Our implementation of the Hough Transform, coupled with the triplet indexing routine, clearly does a good job of indexing patterns of poor quality. Notice the relatively small Hough-based IQ values; this is because in this early implementation the average intensity of the pattern was subtracted from each pixel. This step was later dropped, probably simply to save time, which was critical when the cycle time was about four seconds per pattern.
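For readers who have never implemented one, here is a minimal NumPy sketch, my own illustration rather than the OIM code, of an intensity-weighted Hough transform and the original sum-of-peak-magnitudes IQ. The one-degree angular step and the crude take-the-largest-cells peak picker are arbitrary choices made only to keep the example short.

```python
import numpy as np

def hough_iq(pattern, n_peaks=9):
    """Illustrative intensity-weighted Hough transform with a sum-of-peaks IQ."""
    rows, cols = pattern.shape
    yc, xc = (rows - 1) / 2.0, (cols - 1) / 2.0
    thetas = np.deg2rad(np.arange(0.0, 180.0, 1.0))   # 1-degree steps
    rho_max = np.hypot(yc, xc)
    n_rho = 2 * int(np.ceil(rho_max)) + 1
    accum = np.zeros((n_rho, thetas.size))

    ys, xs = np.mgrid[0:rows, 0:cols]
    x, y = (xs - xc).ravel(), (ys - yc).ravel()
    # Each pixel votes with its gray value (the earliest implementation subtracted
    # the pattern mean from every pixel first; that step was later dropped).
    w = pattern.astype(float).ravel()

    for j, t in enumerate(thetas):
        rho = x * np.cos(t) + y * np.sin(t)            # signed line distance per pixel
        bins = np.round(rho + rho_max).astype(int)     # shift so bin indices start at 0
        accum[:, j] = np.bincount(bins, weights=w, minlength=n_rho)

    # IQ as first defined: the sum of the Hough peak magnitudes.  Taking the n_peaks
    # largest accumulator cells is a crude stand-in for real peak detection.
    return np.sort(accum.ravel())[-n_peaks:].sum(), accum
```

In practice the peaks are located with a butterfly-mask convolution and a local-maximum search rather than by simply sorting the accumulator.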

After we did this work, we thought it might be interesting to make an image by mapping the IQ value to a gray scale intensity at each point in a scan. Here is the resulting map – our first IQ map (Hough-based IQ).

Not only did we explore ways of making things faster, we also wanted to improve the results. One by-product of those developments was that we modified the IQ to be the average of the detected Hough peak heights instead of the sum. A still later modification was to divide by the number of peaks requested by the user, instead of the number of peaks detected. This was done so that patterns where only a few peaks were found did not receive unduly high IQ values.
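In code form (the variable names and values here are mine, purely to illustrate), that evolution of the definition looks like this:

```python
# Illustrative values: magnitudes of the Hough peaks actually detected (fewer than requested)
peak_heights = [412.0, 395.0, 371.0, 350.0, 322.0, 118.0]
n_requested = 9   # number of peaks the user asked for

iq_sum     = sum(peak_heights)                      # earliest Hough-based definition
iq_average = sum(peak_heights) / len(peak_heights)  # later: average over detected peaks
iq_current = sum(peak_heights) / n_requested        # later still: average over requested peaks,
                                                    # so patterns where only a few peaks are
                                                    # found do not get unduly high IQ values
```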

The next change came not from a modification in how the IQ was calculated, but from the introduction of CCD cameras with 12-bit dynamic range, which dramatically increased the IQ values.
In 2005, Tao and Eades proposed using other metrics for measuring IQ [4]. We implemented these different metrics and compared them with our Hough-based IQ measurement in a paper we published in 2006 [5]. One of the main conclusions of that paper was that, while the other metrics had some value in a few very specific instances, our standard Hough-based IQ was the best parameter for most cases. Interestingly, exploring the different IQ values was the seed for our PRIAS [6] ideas, but that is another story. Our competitors use other measures of IQ, but unfortunately these have not been documented – at least to my knowledge.

Factors Influencing IQ
While we have always tried to keep our software backward compatible, the IQ parameter has evolved, and thus comparing absolute IQ values from data sets obtained using older versions of OIM with results obtained using newer ones is probably not a good idea. Not only has the IQ definition evolved, but so has the Hough Transform. In fact, ever since we created the very first IQ maps we have realized that, while the IQ maps are quite useful, they are quantitative only in the sense of relative values within an individual dataset. We have always cautioned against using absolute IQ values to compare different datasets, in part because a lot of factors affect the IQ values:

  • Camera Settings:
    • Binning
    • Exposure
    • Gain
  • SEM Settings:
    • Voltage
    • Current
  • Hough Transform Settings:
    • Pattern Size
    • Mask Size
    • Number of Peaks
    • Secondary factors (peak symmetry, minimum distance, vertical bias, …)
  • Sample Prep
  • Image Processing

In developing the next version of OIM, we thought it might be worthwhile to revisit the IQ parameter as implemented in our various software packages to see what we could learn about the absolute value of IQ.  In that vein, I thought it would be particularly interesting to look at the Mask Size and the Number of Peaks selected.  To do this, I used a dataset where we had recorded the patterns, so we were able to rescan the dataset using different Hough settings to ascertain the impact of these settings on the IQ values. I also decided to add some Gaussian noise [7] to the patterns to see what effect the noise had at the different Hough settings.
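Adding the noise itself is simple; a minimal sketch, assuming 8-bit patterns stored as NumPy arrays and an arbitrary, user-chosen sigma:

```python
import numpy as np

def add_gaussian_noise(pattern, sigma=10.0, rng=None):
    """Return a copy of an 8-bit pattern with zero-mean Gaussian noise added."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = pattern.astype(float) + rng.normal(0.0, sigma, pattern.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)   # clip back to the 8-bit range
```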

It would be nice to scale the peak heights with the mask size. However, the “butterfly” masks have negative values in them, making it quite difficult to normalize by the weights of the individual elements of the convolution masks. In the original 7×7 mask we selected the individual components so that the sum would equal zero, to provide some inherent scaling. However, as we introduced other mask sizes this became increasingly difficult, particularly with the smaller masks (intended primarily for more heavily binned patterns).  Thus, we expected the peak heights to be larger for larger masks simply due to the larger number of mask elements. This trend was confirmed and is shown using the red curves in the figure below.  It should be noted that the smaller mask was used on a 48×48 pixel pattern, the medium mask on a 96×96 pixel pattern, and the larger mask on a 192×192 pixel pattern.
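To make the zero-sum idea concrete, here is a small butterfly-style kernel in NumPy. The coefficients are invented for this illustration and are not the masks used in OIM: negative lobes above and below a positive central band, chosen so that the elements sum to zero.

```python
import numpy as np

# Invented, illustrative butterfly-style mask (NOT the OIM coefficients).  A band in
# the pattern maps to a butterfly-shaped peak in Hough space, so the positive central
# band reinforces the peak while the negative lobes suppress the surrounding background.
butterfly = np.array([
    [-1, -1, -1, -1, -1],
    [-1, -2, -2, -2, -1],
    [ 3,  5, 10,  5,  3],
    [-1, -2, -2, -2, -1],
    [-1, -1, -1, -1, -1],
])

assert butterfly.sum() == 0   # zero-sum provides some inherent scaling of peak heights
```

A larger mask of the same style simply has more elements contributing to each convolution sum, which is why the peak heights, and hence the IQ, grow with the mask size.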

We also decided to look at the effect of the number of peaks selected. We expected that, as more peaks are included, the IQ would decrease, since the weaker peaks drive the average Hough peak height down. This trend was also confirmed, as can be seen by the blue curves in the figure.

While these results went as expected, the effects of various image processing routines on IQ are harder to predict. The following plot shows the effect of several image processing routines on the IQ values. Perhaps someone with a higher IQ could have predicted these results, but to me the trends were not all expected. Of course, we usually apply image processing to improve indexing, not IQ.

Conclusions
In theory, if all the settings are the same, then the absolute value of the IQ for a matrix of samples should be meaningful. However, it would be rare to use the same settings (camera, SEM, sample prep, …) for all materials in all states (e.g., deformed vs. recrystallized). In fact, this is one of the challenges of doing in-situ EBSD work for either a deformation experiment or a recrystallization/grain growth experiment: it is not always easy to predict how the SEM parameters or camera settings will need to change as the in-situ experiment progresses. In addition, any changes made to the hardware generally mean that changes to the software are needed as well. Keeping everything constant is a lot easier in theory than it is in practice.

In conclusion, the IQ metric is “relatively” straightforward, but it must “absolutely” be used with some intelligence.☺

Bibliography
1. S.I. Wright and B.L. Adams (1992) “Automatic Analysis of Electron Backscatter Diffraction Patterns” Metallurgical Transactions A 23, 759-767.
2. K. Kunze, S.I. Wright, B.L. Adams and D.J. Dingley (1993) “Advances in Automatic EBSP Single Orientation Measurements” Textures and Microstructures 20, 41-54.
3. N.C. Krieger Lassen, D. Juul Jensen and K. Conradsen (1992) “Image processing procedures for analysis of electron back scattering patterns” Scanning Microscopy 6, 115-121.
4. X. Tao and A. Eades (2005) “Errors, artifacts, and improvements in EBSD processing and mapping” Microscopy and Microanalysis 11, 79-87.
5. S.I. Wright and M.M. Nowell (2006) “EBSD Image Quality Mapping” Microscopy and Microanalysis 12, 72-84.
6. S.I. Wright, M.M. Nowell, R. de Kloe, P. Camus and T.M. Rampton (2015) “Electron Imaging with an EBSD Detector” Ultramicroscopy 148, 132-145.
7. S.I. Wright, M.M. Nowell, S.P. Lindeman, P.P. Camus, M. De Graef and M. Jackson (2015) “Introduction and Comparison of New EBSD Post-Processing Methodologies” Ultramicroscopy 159, 81.

The origin of ideas

Dr. Patrick Camus, Director of Research and Innovation, EDAX

Stimulation for new research approaches and topics can come from odd origins and at the most unexpected times.

We recently held a Sales Meeting at the factory in Mahwah. During a presentation by Dr. Jens Rafaelsen, an Applications Scientist, he mentioned an unexpected EDS result: a brand-new EDS Elite detector was collecting more x-rays than a larger, older Octane detector under the same geometry and SEM conditions. This result is quite unexpected and seems to violate physics and our typical ideas about x-ray detection. If confirmed, it has far-reaching implications for Sales and Marketing and would be exploited in the coming months. But the science behind the result was unknown at the time.

EDS spectrum and modelling of Mg-Calcite.

A further discussion with Jens after his presentation inspired me to draft some notes on the scrap of paper I had on hand. From these notes, I drafted an approach to an x-ray detection modelling experiment that would require input from Jens and another scientist within the company. The aim of the experiment is to go beyond the simple practice of describing detector performance by solid angle alone. That approach may work when most of the sub-assemblies of the detection systems are similar. However, for the latest generation of EDS detection systems, the use of modern materials requires a more complete system analysis.
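For reference, the "simple description" is the familiar small-detector approximation for the collection solid angle; the geometry numbers in this quick sketch are made up purely for illustration:

```python
def solid_angle_sr(active_area_mm2, sample_to_detector_mm):
    """Small-detector approximation: solid angle (steradians) ~ area / distance^2."""
    return active_area_mm2 / sample_to_detector_mm ** 2

# Illustrative geometry only: a 30 mm^2 sensor 50 mm from the sample subtends ~0.012 sr.
print(solid_angle_sr(30.0, 50.0))
```

Two detectors with the same solid angle can still collect different numbers of counts, because the transmission of everything between the sample and the active silicon (window, support grid, and so on) is ignored by this simple ratio; that is exactly what the modelling is intended to capture.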

Together, we will refine the model, compare its predictions to empirical results, and hopefully publish both internally and externally.

All of this work was sparked by a subtle but original observation by a coworker. Inspiration can come from unexpected sources and at unexpected times. Where have your inspirations come from?

Click here to watch Global Applications Manager Tara Nylese present an overview of the Octane Elite at M&M 2015.

BLOG UPDATE FROM PAT – March 23, 2016
A new result has been found while modelling different detector configurations. The thickness of the silicon support grid for the windows is significantly different for the traditional polymer (>300 µm) and the new Si-N (<50 µm) windows. This results in different absorption of x-rays by the grid as a function of x-ray energy. This is illustrated in the following figure.

The predicted increase in the transparency of the Si-N window grid at intermediate x-ray energies has the potential to increase the total count rate of the detection system by a significant amount. More details to follow.
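A back-of-the-envelope way to see why the grid thickness matters is a Beer-Lambert transmission estimate through solid silicon. The attenuation values in this sketch are rounded, approximate numbers (a real calculation should use tabulated data) and the open area of the grid is ignored, so it only illustrates the trend:

```python
import math

RHO_SI = 2.33  # g/cm^3, density of silicon

# Approximate mass attenuation coefficients for Si (cm^2/g), rounded illustrative values
MU_RHO = {5.0: 245.0, 10.0: 34.0}   # keV -> cm^2/g

def transmission(energy_kev, thickness_um):
    """Beer-Lambert transmission through solid Si of the given thickness."""
    mu = MU_RHO[energy_kev] * RHO_SI              # linear attenuation coefficient, 1/cm
    return math.exp(-mu * thickness_um * 1.0e-4)  # convert um to cm

for e_kev in (5.0, 10.0):
    print(e_kev, "keV:", round(transmission(e_kev, 300), 4), "(300 um) vs",
          round(transmission(e_kev, 50), 4), "(50 um)")
```

Even this rough estimate shows the thinner bars passing an appreciable fraction of 5 to 10 keV x-rays that 300 µm bars absorb almost completely, consistent with the predicted count-rate gain at intermediate energies.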