Problems with the Analysis

Started by John Donovan, November 29, 2024, 11:56:14 AM

John Donovan

This is an extract from the Probe for EPMA User Reference manual (Appendix D) entitled "Problems with the Analysis", and I post it here because it does have some useful suggestions for when something doesn't seem right about one's analyses.

And who actually reads "the manual" these days?  🙁

"Problems with the Analysis

Quantitative analysis is always beset with many difficulties and it is often difficult to pinpoint the cause of "bad" analyses. What is a bad analysis? This is usually based upon one or two tests: are the weight percent elemental (or oxide) totals very close to 100%? And if the material should have a certain stoichiometry, does it?

More often than not, the effect is the sum of more than one problem.  Several things need to be checked. First of all, start by examining the standards.

Are the standards really "good" standards (both on their own, and for your particular suite of unknowns)? Have their compositions been determined by a reliable analytical method? Traditionally, major element concentrations of standards have been determined using a bulk chemical technique, commonly "wet chemistry". The greatest problem is with natural minerals and glasses, which may be somewhat heterogeneous, and may have a sample "split" that is of a slightly different composition compared to the "official" composition. It is in situations like this that the use of secondary standards can be really helpful in determining what is going on. If the primary and secondary standards do not agree with each other, there is a problem: which is correct?

Traditionally, analysts have run analyses of several standards, using another as the standard for a particular element, and then repeated with different standards, to see which gives the "most correct" values the most times. The Probe for EPMA "Evaluate" application takes this one step further and plots all the standards for one element against each other (stated composition vs. ZAF corrected intensity), giving a clear indication of which standards are "good" (consistent with each other) and which are not (i.e., off the 1:1 line).

Also, are the concentrations entered in the Standard.mdb database correct and without typographical errors? And were counts acquired on the actual intended standard? It is easy to get lost at 300-400x magnification when using a standard mount which contains many standards (EDS helps here).

Some other things to consider:

1. Is the operating voltage correctly specified? Is the correct x-ray line tuned for each element? Check the on-peak position offsets from the Peak/Scan Option dialog and see if they are reasonable. The program will usually print a warning if the actual and calculated peak positions are very different. Be sure that the spectrometer is not tuned on a nearby line of another element if using multi-element standards.

2. Be sure that no "bad" data points are in the standard samples used for the quantitative calibration. The best way to check for this is to analyze each standard and examine the results to look for points with obviously bad or low totals (epoxy, bad surface polish, bad carbon coat, etc.). If a bad point is found, one can disable it. Remember, one can always re-enable the data later on. A disabled point is simply not used in the analytical calculations but is still present. A good rule of thumb is to only disable points that have low totals, since generally most of the problems mentioned above will result in fewer x-ray counts. Avoid disabling points just to get better agreement between the primary and secondary standards. Points that have high totals should not be arbitrarily disabled; instead, it may be necessary to look for other problems, such as points with low totals in the primary standards.

3. Look for interferences on the analyzed elements. One easy way to do this is to use the Interferences menu item in the Standard program. One can also perform a wavelength scan and display possible interfering peak markers. If the element causing the interference is present in significant concentrations in your unknowns, and is not being analyzed for, it may be necessary to add the interfering element to the run by creating a new sample with the interfering element as an analyzed element. Be sure that the proper standards are available to use for the interference correction.

4. It would be useful to examine the precise choices of x-ray peak positions; did the software/firmware routine choose the best peak position? Sometimes the algorithm doesn't. You want to use the center of the peak quasi-plateau, where the effects of any small spectrometer drift or peak shape changes will be minimized. Probe for EPMA has a "post-scan" confirmation option where you can verify this. This issue matters mainly for the major elements in a multi-element phase, and is less important for minor or trace elements, where the determination of the background is the critical endeavor.

5. If none of the above suggestions seem to help, try acquiring the standards again. Probe for EPMA uses an automatic standard drift correction which can make a significant difference in situations where one or more of your analytical channels is drifting. Note that since the program will perform an automatic drift correction not only on the standards, but also the interference standards and the MAN background standards, it might also be necessary to run additional sets of those standards or MAN standards.

6. In the case of trace or minor elements, also check to see that none of the off-peak positions are interfered with by another peak. This can cause a reduction in the on-peak counts, sometimes enough to result in a negative k-ratio. Always run at least one wavelength scan on a sample, using the same count time as your quantitative analyses, and if a peak is seen interfering with the off-peak marker, use the Low and/or High buttons in the Graph Data window to select a new off-peak position that is not interfered with.

7. Detector electronics: Did you explicitly check/set the bias, gain, and baseline? If operating in differential mode, did you verify the window width? Are you aware that high count rates on a standard will give you artificially low counts, as the pulse distribution shifts to the left, potentially outside the lower window? To prevent this, always check your PHA settings by clicking the Graph PHA Distribution button in the PHA setting dialog.

8. If you have high totals, have you considered the possibility of secondary fluorescence, if you are measuring small phases within a matrix or near another phase? Normally this is a small effect on the total, but when measuring trace or minor elements it can be a source of large relative errors.

In addition, to solve problems with trace analysis, it is always useful to run at least one "blank" that is similar to your unknown sample, at least in terms of average atomic number, if not actual chemistry. For example, when analyzing trace elements in quartz, try running an unknown analysis on a "pure" synthetic quartz sample, one that is known to contain negligible (ideally zero) concentrations of the trace elements in question. The point is to try to determine how closely one can measure "zero" concentrations."
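
To make a few of the points in that extract more concrete, here are some quick toy sketches in Python. To be clear: none of this is Probe for EPMA code, and every number, tolerance, and composition in them is invented for illustration.

First, the two "bad analysis" tests mentioned at the top, a total near 100% and the expected stoichiometry, using an olivine-like composition:

```python
# Toy sanity check: oxide total near 100 wt% and expected stoichiometry
# (olivine: 3 cations per 4 oxygens). Composition and tolerances invented.

# oxide: (wt%, cations per oxide formula, oxygens per oxide formula)
analysis = {
    "SiO2": (40.8, 1, 2),
    "MgO":  (49.4, 1, 1),
    "FeO":  (9.5,  1, 1),
}
molar_mass = {"SiO2": 60.084, "MgO": 40.304, "FeO": 71.844}

total = sum(wt for wt, _, _ in analysis.values())

# Moles of cations and oxygens contributed by each oxide
cation_mol = sum(wt / molar_mass[ox] * n_cat for ox, (wt, n_cat, _) in analysis.items())
oxygen_mol = sum(wt / molar_mass[ox] * n_oxy for ox, (wt, _, n_oxy) in analysis.items())

# Normalize to 4 oxygens (olivine formula basis)
cations_per_4_oxygens = cation_mol * 4.0 / oxygen_mol

print(f"oxide total = {total:.2f} wt%   (expect ~98.5-100.5)")
print(f"cations per 4 oxygens = {cations_per_4_oxygens:.3f}   (expect ~3.00 for olivine)")
```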
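
Next, the idea behind the Evaluate intercomparison: compare each standard's stated composition against its measured, ZAF-corrected value and see which ones fall off the 1:1 line (this sketch just flags deviations numerically instead of plotting, with made-up iron standard data and an assumed 2% tolerance):

```python
# Sketch of the standards intercomparison idea (not the Evaluate source code):
# compare each standard's published value for an element with the value
# measured for it against one chosen primary standard, and flag standards
# that fall well off the 1:1 line. All data below are invented.

standards = {
    # name: (published wt% Fe, ZAF-corrected wt% Fe measured vs. the primary)
    "hematite":  (69.94, 69.70),
    "magnetite": (72.36, 72.50),
    "fayalite":  (54.81, 52.10),   # deliberately discrepant
    "Fe metal":  (100.00, 99.85),
}

TOLERANCE = 0.02  # assumed: flag anything >2% relative off the 1:1 line

for name, (published, measured) in standards.items():
    rel_dev = (measured - published) / published
    flag = "  <-- suspect" if abs(rel_dev) > TOLERANCE else ""
    print(f"{name:9s}  published={published:6.2f}  measured={measured:6.2f}  "
          f"dev={rel_dev:+.1%}{flag}")
```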
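
For point 4, one toy way to pick the center of a peak's quasi-plateau from a wavelength scan rather than the single highest channel (the synthetic peak and the 95% cutoff are arbitrary assumptions):

```python
import math

# Toy version of picking the center of a peak's quasi-plateau from a
# wavelength scan instead of the single highest channel. Data are synthetic.

positions = list(range(100))                 # spectrometer steps (arbitrary units)
counts = [1000.0 * math.exp(-((p - 52.4) / 8.0) ** 2) + 50.0 for p in positions]

peak = max(counts)
# Treat everything within 95% of the maximum as the quasi-plateau (assumed cutoff)
plateau = [p for p, c in zip(positions, counts) if c >= 0.95 * peak]
center = sum(plateau) / len(plateau)

print(f"highest single channel: {counts.index(peak)}")
print(f"quasi-plateau {plateau[0]}..{plateau[-1]}, center ~ {center:.1f}")
```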
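
For point 6, a numeric sketch of how an interfered off-peak position inflates the background estimate, lowering the net on-peak counts enough to give a negative k-ratio:

```python
# Numeric sketch: an interfered off-peak channel inflates the background
# estimate, which lowers the net on-peak counts and can make a trace
# element's k-ratio negative. All count rates are invented.

std_on, std_bkg = 5000.0, 40.0   # cps on the standard (peak and background)
unk_on = 52.0                    # cps on the unknown at the peak position
off_clean = 45.0                 # true background level near the peak
off_interfered = 80.0            # off-peak channel sitting on another element's peak

def k_ratio(on_cps, bkg_cps):
    """Background-corrected intensity ratio of unknown to standard."""
    return (on_cps - bkg_cps) / (std_on - std_bkg)

print(f"clean off-peak     : k = {k_ratio(unk_on, off_clean):+.5f}")
print(f"interfered off-peak: k = {k_ratio(unk_on, off_interfered):+.5f}  (negative!)")
```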
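
For point 7, a toy model of the PHA pulse shift: as the pulse distribution moves to lower voltages at high count rates, a growing fraction of pulses is lost below the baseline (the Gaussian pulse distribution and all voltages here are assumptions):

```python
import math

# Toy model of pulse height depression: at high count rates the PHA pulse
# distribution shifts to lower voltages, and pulses falling below the
# baseline (or outside a differential window) are lost.

def fraction_counted(mean_v, sigma_v, baseline_v):
    """Fraction of a Gaussian pulse distribution lying above the baseline."""
    z = (baseline_v - mean_v) / (sigma_v * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

sigma_v, baseline_v = 0.6, 1.0               # volts (assumed)
for mean_v in (4.0, 2.5, 1.5):               # peak shifting left as count rate rises
    counted = fraction_counted(mean_v, sigma_v, baseline_v)
    print(f"pulse peak at {mean_v:.1f} V -> {counted:.1%} of pulses counted")
```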
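
And on blanks, a sketch of what repeated analyses of a blank can tell you about how closely one can measure "zero" (the common 3-sigma criterion is used; the ppm values are invented):

```python
import statistics

# Repeated analyses of a blank: the mean shows any systematic offset from
# zero, and the scatter sets roughly the smallest concentration that can
# be distinguished from zero.

blank_ppm = [3.1, -2.4, 0.8, 1.9, -1.2, 2.6, -0.5, 1.0]

mean = statistics.mean(blank_ppm)
sdev = statistics.stdev(blank_ppm)

print(f"blank mean   = {mean:+.2f} ppm  (systematic offset from zero)")
print(f"blank st.dev = {sdev:.2f} ppm")
print(f"measurements above ~{3.0 * sdev:.1f} ppm are distinguishable from zero")
```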

The Probe for EPMA User Reference manual can also be accessed by hitting the F1 key in the application or in PDF format from the Help menu.

Another good resource for troubleshooting problems with your analyses is this webinar:

https://www.youtube.com/watch?v=mOca7-G4FvQ&ab_channel=ProbeSoftwareInc
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

John Donovan

One of the most difficult things about microanalysis is determining whether an analytical problem stems from the hardware, the software, or for that matter, the firmware (embedded code in our instrument electronics)!

Here's an example of a situation where two entirely unrelated instrumental problems appeared to cause problems in the data. In this example, a user reported that they had some low totals and also some spurious high totals during a long overnight run:

[Image: plot of analysis totals vs. elapsed hours in the run]

First note that the totals here, plotted against elapsed hours in the run, appear to be close to 100% at the start, then drift down to 98% or so, and then drift back up to 100% at the end of the run. Interestingly, the standard intensities at the beginning and end of the run appeared to be very similar, so whatever was causing the drift, it had recovered on its own by the end of the run.

I personally have seen this sort of thing when I ran the microprobe at UC Berkeley back in the 90s. It turned out that in order to save energy, the school started shutting down the heating system fans after 5 PM, but they had to keep the chemical hood fans running for safety. This caused the building to go excessively negative in air pressure, which did two things:

First, because Berkeley tends to get a thick cold fog, especially in the summer, the building sucked in cold air from outside, dropping the inside temperature dramatically overnight. This caused our Cameca microprobe to lose intensity from the "peaked" positions, especially for elements at low sin theta and/or on a PET crystal, which has a large coefficient of expansion:

https://smf.probesoftware.com/index.php?topic=210.0

The worst part of this story is that because the building was so cold by the morning, many of the staff ended up bringing in electric space heaters to put under their desks to warm themselves up. So when I asked the facilities people if we were saving any energy in the actual utility bills, it turned out there was essentially no difference. Finally I was able to convince them to leave the building heating system on overnight.

The second problem was that because the intake fans on the top of the building weren't on, the outside air that was sucked in came in unfiltered through doors, mostly at ground level, which over time made the building just filthy with grime. But I digress...

Anyway, these temperature changes in the lab overnight were a primary reason we implemented the automatic standard intensity drift correction in Probe for EPMA. However, please note that in order to correct for the sort of intensity drift seen above, one must re-standardize frequently enough to track the drift as a linear approximation. In the above case that means maybe every 4 or 5 hours. See here for a discussion on the standard intensity drift correction during automation:

https://smf.probesoftware.com/index.php?topic=168.0

https://smf.probesoftware.com/index.php?topic=1618.0

https://smf.probesoftware.com/index.php?topic=1618.msg12489#msg12489
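
Just to illustrate the concept, here is a sketch of the general idea of interpolating standard intensities between standardizations. This is not the actual Probe for EPMA implementation, and the times and count rates are invented:

```python
# General concept of a linear standard intensity drift correction:
# interpolate the standard count rate between standardizations and ratio
# each unknown to the interpolated value. All numbers invented.

std_times = [0.0, 5.0, 10.0]           # hours at which the standard was run
std_cps   = [1000.0, 960.0, 1005.0]    # standard count rates at those times

def interp_std(t):
    """Piecewise-linear interpolation of the standard intensity at time t."""
    for (t0, c0), (t1, c1) in zip(zip(std_times, std_cps),
                                  zip(std_times[1:], std_cps[1:])):
        if t0 <= t <= t1:
            return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the standardization interval")

unk_cps, t = 480.0, 7.5                # unknown measured between standardizations
print(f"standard at t={t} h -> {interp_std(t):.1f} cps")
print(f"k-ratio = {unk_cps / interp_std(t):.4f}")
```

Note that a linear interpolation like this can only follow the drift if the standardizations bracket it closely enough, which is the point about re-standardizing every few hours.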

In this user's case, they said they didn't see this all the time, but once in a while the HVAC in their building does something weird. My suggestion: become friends with the custodial staff. They tend to know what is going on in the building, especially at night!

The other issue that we can see in the totals plot is these strange spikes in the totals once in a while. It turns out that their Faraday cup is a little slower than normal, and so the Faraday "wait in" time needed to be increased slightly for accurate beam current measurements:

https://smf.probesoftware.com/index.php?topic=80.0 

Basically, it should be noted that Probe for EPMA makes *two* beam current measurements per analysis. One at the beginning of the analysis, and one at the end of the analysis. It then takes the average of these two measurements to correct for beam drift (an entirely different thing from the standard intensity drift correction we discussed above!).

Because the Faraday cup measurement at the beginning of the analysis is usually performed when the Faraday cup is already inserted, this measurement is usually not problematic. But for the beam current measurement at the end of the analysis, the cup has to be inserted and then allowed to stabilize before an accurate beam current measurement can be made.

On the Cameca and JEOL 8900 instruments, the required delay was usually under 1 second. But with the JEOL 8200/8500 instruments, the required delay was usually around 1.5 seconds or so, but on the new JEOL instruments (8203, 8530, iHP200F, etc.), this Faraday "wait in" delay may need to be as long as 2.5 seconds or even longer for an accurate beam current measurement.

In this user's case, the second Faraday cup measurement was occasionally low, causing the beam drift correction to overcorrect when the two beam current measurements were averaged, resulting in occasional high totals. When the Faraday "wait in" time was increased, the problem went away. See the Faraday application for quick testing of this parameter in the probewin.ini file.
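
To see why a low second reading produces high totals rather than low ones, here is a quick numeric sketch, assuming counts are normalized to a reference current using the average of the two readings (all values invented):

```python
# Numeric sketch of the over-correction: counts are normalized to the
# average of the two beam current readings, so a spuriously low second
# reading inflates every intensity in that analysis.

raw_cps = 1000.0
i_start = 30.0        # nA; cup already inserted, so this reading is reliable
i_end_good = 30.0     # nA; with a long enough "wait in" delay
i_end_low = 27.0      # nA; read before the inserted cup had stabilized

def beam_normalized(cps, i1, i2, i_ref=30.0):
    """Scale counts to a reference current using the averaged readings."""
    return cps * i_ref / ((i1 + i2) / 2.0)

print(f"good second reading: {beam_normalized(raw_cps, i_start, i_end_good):.1f} cps")
print(f"low second reading : {beam_normalized(raw_cps, i_start, i_end_low):.1f} cps  (~5% high)")
```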

Finally, note that for accurate *sample* (absorbed) current measurements, the parameter that may need to be adjusted is FaradayWaitOutTime. Obviously we would like these delay parameters to be as short as possible, but long enough to obtain accurate beam (and sample) current measurements.
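
For reference, the relevant probewin.ini entries might look something like the lines below. FaradayWaitOutTime is the parameter named above; the exact name of the "wait in" counterpart, the section it lives in, and the values shown are my assumptions, so check your own ini file:

```ini
[hardware]
FaradayWaitInTime=2.5      ; seconds; assumed name for the Faraday "wait in" delay
FaradayWaitOutTime=1.0     ; seconds; delay for sample (absorbed) current readings
```
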
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"