Blank Assignments

Started by Malcolm Roberts, March 31, 2015, 05:52:21 PM


Malcolm Roberts

Hi John
I usually pile a whole bunch of reference materials into a run to cross check calibration, and sometimes one discovers the need (or potential need) for a blank correction after the event, during data processing of unknowns. So one can bust into the Access database and reassign one of these as an unknown blank correction, which can stuff everything up if you don't know what you are doing. Or one could have a button to either reassign a standard analysis as an unknown for blank correction, OR even copy a standard to a new unknown for blank correction. In this latter case there would be less work, as the MAN assignments would not need retweaking.

Does this idea sound stupid?
Cheers,
Malc.

Probeman

Quote from: Malcolm Roberts on March 31, 2015, 05:52:21 PM
I usually pile a whole bunch of reference materials into a run to cross check calibration, and sometimes one discovers the need (or potential need) for a blank correction after the event, during data processing of unknowns. So one can bust into the Access database and reassign one of these as an unknown blank correction, which can stuff everything up if you don't know what you are doing. Or one could have a button to either reassign a standard analysis as an unknown for blank correction, OR even copy a standard to a new unknown for blank correction. In this latter case there would be less work, as the MAN assignments would not need retweaking.

Hi Malcolm,
It's not a bad idea, except that I worry about the blank correction getting misused.  By the way, one should *never* edit the MDB file in Microsoft Access, because it is just too easy to "stuff things up" as you describe... so many tables are dependent on other tables.  It's fun to open an MDB file in Access, but only to gaze in awe!    :D

The reason for only allowing an unknown for use as a blank standard is so that we can be relatively confident that the blank sample was run using the same conditions, the same elements, and the same counting times as the unknown it will be applied to.

Otherwise it could actually make the accuracy worse.  Trace elements are so sensitive to so many aspects of the measurement, that the blank should be identical to the unknown as much as possible.

I'm willing to discuss it though.  Can you provide an example of the situation you are in? 
The only stupid question is the one not asked!

Malcolm Roberts

Hi John
Yes.......
Pyrite analyses, for instance, with Bi that is not there. What I have done in most cases is to add a pyrite standard as an unknown for blank correction. I am finding similar cases with Bi and Pb in arsenopyrite. In both cases the corrections are being added under the same conditions as the unknowns, and the calibration is carried out under the same conditions as the unknowns, so it's not like I am changing conditions, for which your point is very valid. Another case I have noted is the presence of Fe in pure Ni when Fe is part of the package. As you say, the key to analysing something is whether you can analyse it when it is not there!! There are others. In cases where I have a well-characterised reference material identical to the unknowns, am running under identical conditions, have not analysed it as an unknown, have sufficient references in the run to provide a cross check, and where this material IS NOT BEING USED IN CALIBRATION, it might be very handy to be able to reassign it as a blank.
Cheers,
malc.

Probeman

#3
Quote from: Malcolm Roberts on March 31, 2015, 06:57:48 PM
Pyrite analyses, for instance, with Bi that is not there. What I have done in most cases is to add a pyrite standard as an unknown for blank correction. I am finding similar cases with Bi and Pb in arsenopyrite. In both cases the corrections are being added under the same conditions as the unknowns, and the calibration is carried out under the same conditions as the unknowns, so it's not like I am changing conditions, for which your point is very valid. Another case I have noted is the presence of Fe in pure Ni when Fe is part of the package. As you say, the key to analysing something is whether you can analyse it when it is not there!! There are others. In cases where I have a well-characterised reference material identical to the unknowns, am running under identical conditions, have not analysed it as an unknown, have sufficient references in the run to provide a cross check, and where this material IS NOT BEING USED IN CALIBRATION, it might be very handy to be able to reassign it as a blank.

I wonder if we can count on other probe operators to be as scrupulous an analyst as you are?      ;D

I need to think about how to protect data integrity if we allow this...
The only stupid question is the one not asked!

Mike Matthews

Hi John,

I've just tried using the blank correction and I don't quite understand what it's doing. I've used a pure uranium sample analysis as a blank (50 ppm) for C, setting this in the standard assignments for the C standard, and for the pure U and the carbide-containing U unknowns. But when I reprocess the data for the pure U (i.e. exactly the data point I've used as the C blank reference) I get -0.213 wt% C. Why doesn't it come out at the 50 ppm I've said is in that analysis point?

Mike

Mike Matthews

A subsidiary question prompted by a discussion with Ben Buse on blank corrections: Is the blank correction treated as a fixed amount regardless of the measured counts (counts vs wt% gradient fixed but offset) or proportional to the counts (so gradient changes to go through the calibration and blank points instead of the origin)?

John Donovan

#6
Hi Mike,
The blank correction simply takes the difference in the wt.% results between what is actually obtained on the blank (a standard acquired as an unknown) sample and the value (usually zero) that the user specifies as expected. Probe for EPMA then converts that concentration difference (in wt.%) into an x-ray intensity (based on the standard k-factor and standard intensity for that emission line), and subtracts that intensity difference from the k-ratio during the matrix physics iteration loop.  Here is the blank equation explanation outlined in red:



This is found on page 278 of my 2011 Amer. Min. paper:

http://epmalab.uoregon.edu/pdfs/3631Donovan.pdf

Technically this blank intensity offset is not applied in the matrix correction iteration loop itself, but instead is applied in the outer "compositionally dependent" corrections iteration loop (along with other compositionally dependent corrections such as the interference correction, binary APF correction, MAN bgd correction, etc.).
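The arithmetic described above can be sketched in a few lines. This is only an illustrative toy (the function name, arguments, and numbers are mine, not Probe for EPMA internals), but it shows the key point: the measured-minus-assigned blank concentration is converted to a k-ratio offset via the standard's concentration-to-intensity factor, and that fixed offset is subtracted from the unknown's k-ratio.

```python
def blank_corrected_k_ratio(k_unk, c_blank_measured, c_blank_assigned,
                            c_std, k_std):
    """Apply a blank intensity offset to an unknown's k-ratio (toy sketch).

    k_unk            : raw k-ratio measured on the unknown
    c_blank_measured : wt. fraction actually obtained on the blank
                       (a standard acquired as an unknown)
    c_blank_assigned : wt. fraction the user says the blank contains (often 0)
    c_std, k_std     : concentration and k-ratio of the primary standard,
                       giving a linear wt.%-to-k-ratio conversion factor
    """
    # Concentration error observed on the blank measurement
    delta_c = c_blank_measured - c_blank_assigned
    # Convert the concentration difference into a k-ratio offset
    delta_k = delta_c * (k_std / c_std)
    # Subtract the same fixed offset from the unknown's k-ratio
    return k_unk - delta_k

# Example: the blank reads 0.9 wt.% (0.009 wt. frac) where zero is expected,
# against a pure-element standard (c_std = 1.0, k_std = 1.0), so every
# unknown's k-ratio is reduced by 0.009 (here: 0.012 -> ~0.003)
print(blank_corrected_k_ratio(0.012, 0.009, 0.0, 1.0, 1.0))
```

Note that this also answers the fixed-versus-proportional question: the offset is the same regardless of the counts measured on the unknown, so the calibration gradient is unchanged and only the intercept shifts.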

I should also make clear that because CalcZAF only has the normal matrix correction loop, it does not perform any of the "secret sauce" (compositionally dependent) corrections that Probe for EPMA has- such as the blank correction. 

Which reminds me: one way you can get an over correction (negative wt.% result) with the blank correction is if you apply both the blank correction and also a spectral interference correction to the same element. For example if you apply a blank correction of uranium on carbon and *also* assign an interference correction of uranium on carbon.  The software should warn you if you accidentally assign both a blank and an interference correction to the same element, but it will allow it- though perhaps we shouldn't...

Also, when analyzing carbon (of course) one has to make sure not only that the unknown is not carbon coated, but also that the standard used for the blank correction is not carbon coated, because it is acquired as an unknown.  I haven't tried analyzing carbon in uranium, but I recently analyzed carbon in nitrided steel as seen here:

http://smf.probesoftware.com/index.php?topic=1005.msg6659#msg6659

For the blank correction I used a piece of pure Fe standard that was not carbon coated, just like the unknown steel.  I left the assigned carbon value at zero in this vacuum-remelted Fe standard, because I really don't know what it is (Johnson Matthey states a bulk analysis: oxygen 310 ppm, nitrogen 10 ppm). In the above link you can see the steel sample with and without the blank correction.  Without the blank correction we get a carbon average of around 1 wt.%, and with the blank correction on carbon we get very close to zero.

So in summary, the blank correction does not force the sample value to the blank value; rather, it merely applies a systematic intensity offset to the k-ratio, based on the analyzed difference in composition between the blank measurement and the blank value specified by the user.
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

John Donovan

#7
One other thought. 

I've observed highly variable behavior with regard to the carbon intensity changes over time (TDI effects) on various samples.  Sometimes the carbon intensity increases over time and sometimes it decreases over time.  The above link to the carbon quant in steel post discusses this issue a bit.

I assume you had the TDI acquisition turned on?  What sort of behavior did you see in the carbon intensities as a function of time on your uranium sample?  What about in the uranium standard blank?

Interestingly enough, in the nitrided steel I saw the carbon intensity decrease over time, while on the pure Fe standard, the carbon intensity over time was pretty constant as seen here:



I cannot explain these observations except to say that different samples (even with very similar compositions) seem to behave differently in this TDI respect.
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Probeman

#8
This TDI plot is from carbon measurements (off-peak, exponential fit) across an Fe-V diffusion profile. I did not clean the surface beforehand and I think that was a mistake, because I suspect it is slightly contaminated with hydrocarbons. The reason I think so is that instead of the carbon signal slowly increasing over time (as one might expect from electron beam induced carbon contamination), the signal starts high and *decreases* over time:



The extrapolated carbon concentrations on the blank sample are around 0.9 wt.% as described in this post:

https://smf.probesoftware.com/index.php?topic=354.msg7688#msg7688
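For readers unfamiliar with the TDI extrapolation mentioned above, here is a generic sketch of the idea (a plain log-linear least-squares exponential fit; the function and data are illustrative, not PfE's actual implementation): intensities sampled at successive times are fit to I(t) = a·exp(b·t), and the fit is extrapolated back to t = 0, the moment the beam first hit the spot.

```python
import math

def tdi_extrapolate(times, intensities):
    """Fit ln(I) = ln(a) + b*t by least squares; return I extrapolated to t=0.

    For a decaying carbon signal (b < 0), as with hydrocarbon erosion under
    the beam, the zero-time intercept is *higher* than any measured point.
    """
    n = len(times)
    y = [math.log(i) for i in intensities]
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    # Slope of the log-linear fit (ordinary least squares)
    b = (sum((t - t_mean) * (yy - y_mean) for t, yy in zip(times, y))
         / sum((t - t_mean) ** 2 for t in times))
    ln_a = y_mean - b * t_mean
    return math.exp(ln_a)  # intensity at t = 0

# Synthetic decaying signal: I(t) = 100 * exp(-0.05 t), sampled 5 times
ts = [0.0, 5.0, 10.0, 15.0, 20.0]
obs = [100.0 * math.exp(-0.05 * t) for t in ts]
print(tdi_extrapolate(ts, obs))  # recovers ~100.0
```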

Here is the TDI curve for the pure uncoated Fe metal blank:



Any speculations on why this might be occurring?  Next time I'm going to carefully clean the surface and try again to see what differences there are.
The only stupid question is the one not asked!

Mike Matthews

#9
A very timely question John, I've just had a paper on e-beam induced carbon erosion accepted for publication in MiMi (TDI mode in PfE came in very handy for the analyses!). Even if you see a carbon contamination ring around analysis points, the area under the primary beam impact can see net erosion. If you measure absorbed current during the analysis you may see a decrease as the surface carbon thins, bringing your higher Z substrate closer to the surface and consequently increasing the back scatter coefficient. Although your samples are uncoated I suspect you have deposited hydrocarbons before your analyses start, either by imaging the area prior to analysis or by rapid deposition of hydrocarbons present under the beam when you first turn the beam on.

Probeman

#10
Hi Mike,
Yes, I think you are correct.

Although I did not acquire slow-scan images prior to the analyses (I usually do that afterwards, because sometimes the analysis points are visible in the image, and it's also nice to compare those spots to the recorded/plotted stage positions based on the image X/Y and rotation calibration), I did have to perform some fast-scan imaging to calibrate the amount of image shift needed (relative to the optical image) to correct for the beam deflection by the magnetic sample, as described here:

https://smf.probesoftware.com/index.php?topic=354.msg7688#msg7688

And for trace carbon analyses yes, absolutely we want to avoid exposing the sample to the beam until we actually start acquiring quant analyses.

So next week I think I will try cleaning any hydrocarbon contamination off the sample just before I put it in the airlock.  Then do the image shift deflection for magnetism on a slightly different area and then acquire a carbon blank on a previously unexposed area of the sample.  Do you think that cleaning with 190 proof ethanol and scrubbing with a Kimwipe tissue would be effective for this?
The only stupid question is the one not asked!

Joe Boesenberg

John

Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, use the same or a similar mineral?

Thanks Joe
Joseph Boesenberg
Brown University
Electron Microprobe Manager/Meteoriticist

Probeman

Quote from: jboesenberg on March 28, 2023, 09:23:55 AM
Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, use the same or a similar mineral?

This is a really great question, and I have to say I haven't tested enough how much the matrix effect matters for the blank correction.

I would think it sort of depends on what exactly is causing the background artifact. For example, if the background artifact is in the Bragg crystal (secondary Bragg diffraction artifacts, e.g., PET crystal for Ti Ka) or the gas detector (absorption edge artifacts, e.g., Ar absorption edge), the zero blank matrix may not matter all that much.

But if the background artifact is related to the sample composition, e.g., measuring off-peak bgds across an absorption edge caused by an element in the sample, the blank matrix could matter a lot.

Clearly if we have a perfect matrix match blank standard that would be ideal in all circumstances. But exactly how much is needed depends on the cause of the background artifact.

This all reminds me of a discussion Mike Jercinovic had regarding creating a synthetic monazite blank:

https://smf.probesoftware.com/index.php?topic=928.msg8506#msg8506

The fact that a synthetic monazite with REEs would probably show compositional zoning as it grows is actually a useful thing. Just as long as we knew that there was no U, Th or Pb anywhere in the synthetic monazite.  And then we could test the ability of the blank to measure zero for these elements in these zones of different composition.
The only stupid question is the one not asked!

Probeman

Quote from: Joe Boesenberg on March 28, 2023, 09:23:55 AM
Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, use the same or a similar mineral?

So yes, to answer your last question, I think for testing of trace accuracy in olivines it would be worth trying a zero blank of any Al-free silicate. For trace Al accuracy in olivines the main problem is the sloped (and curved) shape of the background at the Al Ka peak position on TAP due to the large nearby Si Ka peak.  See fig 3 in this paper:

Donovan, John J., Heather A. Lowers, and Brian G. Rusk. "Improved electron probe microanalysis of trace elements in quartz." American Mineralogist 96.2-3 (2011): 274-282.

So any high purity material containing Si and no Al would probably work fine. If you don't have a synthetic high purity olivine such as Mg2SiO4, maybe try SiO2. Note that synthetic Mg2SiO4 is available commercially, and I think Will Nachlas has found sources for it.  The main problem with some of these synthetic materials is that they are intended for epitaxial crystal growth substrates, so the vendors normally want to sell you oriented and polished material, which of course isn't necessary for our use as microanalysis standards.

I would contact the vendors and see if they have "raw" material that they would be willing to provide to you.
The only stupid question is the one not asked!

Probeman

If anyone has performed trace element analyses (e.g., ICP-MS) of the synthetic Mg2SiO4 materials obtained from either MTI or Oxide Corp (or any other commercial source for that matter!) I would be very interested to hear about the results.
The only stupid question is the one not asked!