Hi John
I usually pile a whole bunch of reference materials into a run to cross check calibration and sometimes one discovers the need or potential need for a blank correction after the event during data processing of unknowns. So one can bust into the access database and reassign one of these as an unknown blank correction, which can stuff everything up if you don't know what you are doing. Or one could have a button to either reassign a standard analysis as an unknown for blank correction OR even copy standard to new unknown for blank correction. In this latter case, there would be less work as the MAN assignments would not need retweaking.
Does this idea sound stupid?
Cheers,
Malc.
Quote from: Malcolm Roberts on March 31, 2015, 05:52:21 PM
I usually pile a whole bunch of reference materials into a run to cross check calibration and sometimes one discovers the need or potential need for a blank correction after the event during data processing of unknowns. So one can bust into the access database and reassign one of these as an unknown blank correction, which can stuff everything up if you don't know what you are doing. Or one could have a button to either reassign a standard analysis as an unknown for blank correction OR even copy standard to new unknown for blank correction. In this latter case, there would be less work as the MAN assignments would not need retweaking.
Hi Malcolm,
It's not a bad idea, except that I worry about the blank correction getting misused. By the way, one should *never* edit the MDB file in Microsoft Access, because it is just too easy to "stuff things up" as you describe... so many tables are dependent on other tables. It's fun to open an MDB file in Access, but only to gaze in awe! :D
The reason for only allowing an unknown for use as a blank standard is so that we can be relatively confident that the blank sample was run using the same conditions, same elements and same counting times as the unknown that it will be applied to.
Otherwise it could actually make the accuracy worse. Trace elements are so sensitive to so many aspects of the measurement, that the blank should be identical to the unknown as much as possible.
I'm willing to discuss it though. Can you provide an example of the situation you are in?
Hi John
Yes.......
Pyrite analyses, for instance, with Bi that is not there. What I have done in most cases is to add a pyrite standard as an unknown for blank correction. I am finding similar cases with Bi and Pb in arsenopyrite. In both cases the corrections are being added under the same conditions as the unknowns, and the calibration is carried out under the same conditions as the unknowns, so it's not like I am changing conditions, for which your point is very valid. Another case I have noted is the presence of Fe in pure Ni when Fe is part of the package. As you say, the key to analysing something is whether you can analyse it when it is not there!! There are others. In cases where I have a well-characterised reference material identical to the unknowns I am running under identical conditions and have not analysed it as an unknown, where I have sufficient references in the run to provide a cross check, and where this material IS NOT BEING USED IN CALIBRATION, it might be very handy to be able to reassign it as a blank.
Cheers,
malc.
Quote from: Malcolm Roberts on March 31, 2015, 06:57:48 PM
Pyrite analyses, for instance, with Bi that is not there. What I have done in most cases is to add a pyrite standard as an unknown for blank correction. I am finding similar cases with Bi and Pb in arsenopyrite. In both cases the corrections are being added under the same conditions as the unknowns, and the calibration is carried out under the same conditions as the unknowns, so it's not like I am changing conditions, for which your point is very valid. Another case I have noted is the presence of Fe in pure Ni when Fe is part of the package. As you say, the key to analysing something is whether you can analyse it when it is not there!! There are others. In cases where I have a well-characterised reference material identical to the unknowns I am running under identical conditions and have not analysed it as an unknown, where I have sufficient references in the run to provide a cross check, and where this material IS NOT BEING USED IN CALIBRATION, it might be very handy to be able to reassign it as a blank.
I wonder if we can count on other probe operators to be as scrupulous an analyst as you are? ;D
I need to think about how to protect data integrity if we allow this...
Hi John,
I've just tried using the blank correction and I don't quite understand what it's doing. I've used a pure uranium sample analysis as a blank (50 ppm) for C, setting this in the standard assignments for the C standard, and for the pure U and the carbide-included U unknowns, but when I reprocess the data for the pure U (i.e., exactly the data point I've used as the C blank reference) I get -0.213 wt% C. Why doesn't it come out at the 50 ppm I've said is in that analysis point?
Mike
A subsidiary question prompted by a discussion with Ben Buse on blank corrections: Is the blank correction treated as a fixed amount regardless of the measured counts (counts vs wt% gradient fixed but offset) or proportional to the counts (so gradient changes to go through the calibration and blank points instead of the origin)?
Hi Mike,
The blank correction simply takes the difference in the wt.% results between what is actually obtained on the blank (standard acquired as an unknown) sample and the value (usually zero) that the user specifies as expected. Probe for EPMA then converts that concentration difference in wt.% into an x-ray intensity (based on the standard k-factor and standard intensity for that emission line), and subtracts that intensity (difference) from the k-ratio during the matrix physics iteration loop. Here is the blank equation explanation outlined in red:
(https://smf.probesoftware.com/gallery/1_31_01_18_8_41_38.png)
This is found on page 278 of my 2011 Amer. Min. paper:
http://epmalab.uoregon.edu/pdfs/3631Donovan.pdf
Technically this blank intensity offset is not applied in the matrix correction iteration loop itself, but instead is applied in the outer "compositionally dependent" corrections iteration loop (along with other compositionally dependent corrections such as the interference correction, binary APF correction, MAN bgd correction, etc.).
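The arithmetic described above can be sketched in a few lines. This is a first-order illustration only: the function name and the unity default k-factor are my own assumptions, not PFE's actual code, which applies the offset inside the compositionally dependent iteration loop.

```python
def blank_corrected_k(k_unknown, blank_measured_wtpct, blank_assigned_wtpct,
                      std_wtpct, std_kfactor=1.0):
    """First-order sketch of the blank correction: the difference between
    the measured and assigned concentrations on the blank is converted
    into a k-ratio offset via the standard concentration and k-factor,
    then subtracted from the unknown's k-ratio."""
    delta_c = blank_measured_wtpct - blank_assigned_wtpct  # wt.% excess seen on the blank
    delta_k = delta_c / (std_wtpct * std_kfactor)          # convert to k-ratio units
    return k_unknown - delta_k
```

For example, a 0.05 wt.% apparent concentration on a blank (assigned zero), with a pure (100 wt.%) standard, shifts the unknown's k-ratio down by 0.0005.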
I should also make clear that because CalcZAF only has the normal matrix correction loop, it does not perform any of the "secret sauce" (compositionally dependent) corrections that Probe for EPMA has- such as the blank correction.
Which reminds me: one way you can get an over correction (negative wt.% result) with the blank correction is if you apply both the blank correction and also a spectral interference correction to the same element. For example if you apply a blank correction of uranium on carbon and *also* assign an interference correction of uranium on carbon. The software should warn you if you accidentally assign both a blank and an interference correction to the same element, but it will allow it- though perhaps we shouldn't...
Also, when analyzing carbon (of course) one has to make sure not only that the unknown is not carbon coated, but also that the standard used for the blank correction is not carbon coated- because it is acquired as an unknown. I haven't tried analyzing carbon in uranium, but I recently analyzed carbon in nitrided steel as seen here:
http://smf.probesoftware.com/index.php?topic=1005.msg6659#msg6659
For the blank correction I used a piece of pure Fe standard that was not carbon coated- just like the unknown steel. I left the assigned carbon value at zero in this vacuum remelted Fe standard, because I really don't know what it is (Johnson Matthey states a bulk analysis: oxygen 310 ppm, nitrogen 10 ppm). In the above link you can see the steel sample with and without the blank correction. Without the blank correction we get a carbon average of around 1 wt.% and with the blank correction on carbon we get very close to zero.
So in summary, the blank correction does not force the sample value to the blank value, rather it merely applies a systematic intensity offset to the k-ratio based on the analyzed difference in composition between the blank measurement and the blank value specified by the user.
One other thought.
I've observed highly variable behavior with regard to the carbon intensity changes over time (TDI effects) on various samples. Sometimes the carbon intensity increases over time and sometimes it decreases over time. The above link to the carbon quant in steel post discusses this issue a bit.
I assume you had the TDI acquisition turned on? What sort of behavior did you see in the carbon intensities as a function of time on your uranium sample? What about in the uranium standard blank?
Interestingly enough, in the nitrided steel I saw the carbon intensity decrease over time, while on the pure Fe standard, the carbon intensity over time was pretty constant as seen here:
(https://smf.probesoftware.com/gallery/1_31_01_18_9_39_01.png)
I cannot explain these observations except to say that different samples (even with very similar compositions) seem to behave differently in this TDI respect.
This TDI plot is from carbon measurements (off-peak exponential fit) across an Fe-V diffusion profile. I did not clean the surface beforehand and I think that was a mistake, because I suspect it is slightly contaminated with hydrocarbons. The reason I think so is that instead of the carbon signal slowly increasing over time (as one might expect from electron beam induced carbon contamination), the signal starts high and *decreases* over time:
(https://smf.probesoftware.com/gallery/395_10_10_18_2_01_18.png)
The extrapolated carbon concentrations on the blank sample are around 0.9 wt.% as described in this post:
https://smf.probesoftware.com/index.php?topic=354.msg7688#msg7688
Here is the TDI curve for the pure uncoated Fe metal blank:
(https://smf.probesoftware.com/gallery/395_10_10_18_2_59_26.png)
Any speculations on why this might be occurring? Next time I'm going to carefully clean the surface and try again to see what differences there are.
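For readers unfamiliar with TDI, the idea is to fit the intensity trend over the counting interval and extrapolate back to t = 0. Here is a minimal sketch of a log-linear (exponential) least-squares fit, in the spirit of the TDI exponential fit mentioned above; this is an illustration, not PFE's actual implementation.

```python
import math

def tdi_extrapolate(times, counts):
    """Extrapolate a time-dependent intensity back to t = 0 via a simple
    log-linear least-squares fit (i.e., assuming exponential growth or
    decay of the count rate during the acquisition)."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) \
            / sum((t - tbar) ** 2 for t in times)
    intercept = lbar - slope * tbar
    return math.exp(intercept)  # intensity extrapolated to t = 0
```

A decreasing trend (negative slope), as seen on the contaminated surface, extrapolates to a t = 0 intensity higher than any measured value, which is exactly why surface hydrocarbons inflate the apparent carbon concentration.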
A very timely question, John. I've just had a paper on e-beam induced carbon erosion accepted for publication in MiMi (TDI mode in PfE came in very handy for the analyses!). Even if you see a carbon contamination ring around analysis points, the area under the primary beam impact can see net erosion. If you measure absorbed current during the analysis you may see a decrease as the surface carbon thins, bringing your higher Z substrate closer to the surface and consequently increasing the backscatter coefficient. Although your samples are uncoated, I suspect you have deposited hydrocarbons before your analyses start, either by imaging the area prior to analysis or by rapid deposition of hydrocarbons present under the beam when you first turn the beam on.
Hi Mike,
Yes, I think you are correct.
Although I did not acquire slow scan images prior to the analyses (I usually do that afterwards because sometimes the analysis points are visible in the image, and also it's nice to compare those spots in the image to the recorded/plotted stage positions based on the image X/Y and rotation calibration), I did have to perform some fast scan imaging just to calibrate the amount of image shift necessary (relative to the optical image), to correct for the beam deflection by the magnetic sample as described here:
https://smf.probesoftware.com/index.php?topic=354.msg7688#msg7688
And for trace carbon analyses yes, absolutely we want to avoid exposing the sample to the beam until we actually start acquiring quant analyses.
So next week I think I will try cleaning any hydrocarbon contamination off the sample just before I put it in the airlock. Then do the image shift deflection for magnetism on a slightly different area and then acquire a carbon blank on a previously unexposed area of the sample. Do you think that cleaning with 190 proof ethanol and scrubbing with a Kimwipe tissue would be effective for this?
John
Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, one should use the same or similar mineral?
Thanks Joe
Quote from: jboesenberg on March 28, 2023, 09:23:55 AM
Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, one should use the same or similar mineral?
This is a really great question and I have to say I haven't sufficiently tested how much the matrix effect matters for the blank correction.
I would think it sort of depends on what exactly is causing the background artifact. For example, if the background artifact is in the Bragg crystal (secondary Bragg diffraction artifacts, e.g., PET crystal for Ti Ka) or the gas detector (absorption edge artifacts, e.g., Ar absorption edge), the zero blank matrix may not matter all that much.
But if the background artifact is related to the sample composition, e.g., measuring off-peak bgds across an absorption edge caused by an element in the sample, the blank matrix could matter a lot.
Clearly if we have a perfect matrix match blank standard that would be ideal in all circumstances. But exactly how much is needed depends on the cause of the background artifact.
This all reminds me of a discussion Mike Jercinovic had regarding creating a synthetic monazite blank:
https://smf.probesoftware.com/index.php?topic=928.msg8506#msg8506
The fact that a synthetic monazite with REEs would probably show compositional zoning as it grows is actually a useful thing. Just as long as we knew that there was no U, Th or Pb anywhere in the synthetic monazite. And then we could test the ability of the blank to measure zero for these elements in these zones of different composition.
Quote from: Joe Boesenberg on March 28, 2023, 09:23:55 AM
Had a question for you with respect to the idea of checking the zero concentrations on standard calibrations when doing trace elements, to ensure your backgrounds are placed properly. Barring an overlap interference, would it matter which standard you used to check the zero concentration? In my case, where I have been measuring trace Al in olivine, could I use an Fe or Si metal standard, or some oxide standard, to check the zero concentration for Al, or do I have to use a synthetic, Al-free olivine? Obviously the matrix corrections will change, so I assume, whenever possible, one should use the same or similar mineral?
So yes, to answer your last question, I think for testing of trace accuracy in olivines it would be worth trying a zero blank of any Al-free silicate. For trace Al accuracy in olivines the main problem is the sloped (and curved) shape of the background at the Al Ka peak position on TAP due to the large nearby Si Ka peak. See fig 3 in this paper:
Donovan, John J., Heather A. Lowers, and Brian G. Rusk. "Improved electron probe microanalysis of trace elements in quartz." American Mineralogist 96.2-3 (2011): 274-282.
So any high purity material containing Si and no Al would probably work fine. If you don't have a synthetic high purity olivine such as Mg2SiO4, maybe try SiO2. Note that synthetic Mg2SiO4 is available commercially, and I think Will Nachlas has found sources for it. The main problem with some of these synthetic materials is that they are intended for epitaxial crystal growth substrates, so the vendors normally want to sell you oriented and polished material, which of course isn't necessary for our use as microanalysis standards.
I would contact the vendors and see if they have "raw" material that they would be willing to provide to you.
If anyone has performed trace element analyses (e.g., ICP-MS) of the synthetic Mg2SiO4 materials obtained from either MTI or Oxide Corp (or any other commercial source for that matter!) I would be very interested to hear about the results.
John
I did the zero concentration test on Al2O3 on my Fo100 today. Using my OPX2525 standard for Al2O3, I come up with -0.0060 wt% Al2O3 (I am low by 60 ppm) on what should be zero. I assume the use of my background multiplier (slope) of 1.2 is a bit high and probably needs to be in the range of 1.1 to 1.15 to get to zero. Using the same 1.2 slope, but using corundum, I get -0.0075 wt% Al2O3 on Fo100, so either my opx is slightly off in its composition or my spectrometer is having a dead time issue. I suspect the latter is more likely.
Thanks Joe
Quote from: Joe Boesenberg on April 05, 2023, 11:48:17 AM
I did the zero concentration test on Al2O3 on my Fo100 today. Using my OPX2525 standard for Al2O3, I come up with -0.0060 wt% Al2O3 (I am low by 60 ppm) on what should be zero. I assume the use of my background multiplier (slope) of 1.2 is a bit high and probably needs to be in the range of 1.1 to 1.15 to get to zero. Using the same 1.2 slope, but using corundum, I get -0.0075 wt% Al2O3 on Fo100, so either my opx is slightly off in its composition or my spectrometer is having a dead time issue. I suspect the latter is more likely.
Hi Joe,
It's great that you are investigating this issue. Please confirm that you are using Al2O3 as a primary standard for Al Ka and the Fo100 as a zero blank for Al in olivine (though we don't know exactly what the Al level in the Fo100 synthetic is, do we?).
Your problem is either your background slope (you'll have a much easier time using the polynomial or exponential background fit in PFE, because you won't have to guess at the slope), or your Al value is off in your opx standard. I would say either or both.
It's probably not the dead time, as dead time tends to be an insignificant correction at trace levels. Yes, dead time affects the measurement on your Al2O3 primary standard, but you can run the Al2O3 at a lower beam current than your unknown. Since your opx is only 4 wt%, any error in that value gets propagated to your unknown, so use Al2O3 as the primary Al standard for trace Al.
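To make the "slope" discussion concrete, here is a minimal sketch of a two-point off-peak background with a multiplier applied. The function name is hypothetical and this is simple linear interpolation in spectrometer position; PFE's actual options also include polynomial and exponential background fits, which avoid guessing the slope.

```python
def linear_offpeak_background(pos_peak, pos_hi, pos_lo, i_hi, i_lo, multiplier=1.0):
    """Linearly interpolate the background at the on-peak spectrometer
    position from the high and low off-peak measurements, then apply a
    background (slope) multiplier as discussed above."""
    frac = (pos_peak - pos_lo) / (pos_hi - pos_lo)
    return multiplier * (i_lo + frac * (i_hi - i_lo))
```

A multiplier of 1.2 versus 1.1 changes the subtracted background by ~10% relative, which at trace Al levels on a sloped continuum is easily tens of ppm, consistent with the offsets Joe reports.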
Here's a post I made just now where I attempt to explain these considerations on standard selection that Anette von der Handt and I are working on:
https://smf.probesoftware.com/index.php?topic=610.msg11752#msg11752
Hi
When using a blank assignment and exporting the peak counts and background counts, if I calculate the error manually in Excel from them I get the wrong error, as the peak and background counts are from before the blank correction.
I wondered how the blank correction is propagated through the error in PFE, and whether the necessary data, such as the magnitude of the blank correction, could be exported to allow manual calculations in Excel.
Thanks
Quote from: Ben Buse on October 23, 2023, 08:43:47 AM
Hi
When using a blank assignment and exporting the peak counts and background counts, if I calculate the error manually in Excel from them I get the wrong error, as the peak and background counts are from before the blank correction.
I wondered how the blank correction is propagated through the error in PFE, and whether the necessary data, such as the magnitude of the blank correction, could be exported to allow manual calculations in Excel.
Thanks
It's a good question.
Since the blank correction only affects accuracy, I would output the error values without the blank correction as those error values should be correct as far as counting statistics goes.
Then utilize the blank corrected concentration values for the actual concentrations.
But now, thinking about it a bit more, the error statistics from the raw data are calculated without the blank correction (e.g., using the data button in the Analyze! window), so those should be unaffected by the blank correction.
Hi John,
Here's an example - you'll see the change in error given by PFE when the blank correction is selected. In the upper dataset the blank correction is turned off for Ti, and in the lower dataset it is turned on for Ti. For Al it's on in both.
What's happening is that the same absolute error becomes a different relative error for a changed composition.
(https://smf.probesoftware.com/gallery/453_24_10_23_3_54_08.png)
To confirm, your question is: why are the calculated analytical sensitivities slightly different when the blank correction is used or not?
Quote from: Ben Buse on October 24, 2023, 03:56:20 AM
Hi John,
Here's an example - you'll see the change in error given by PFE when the blank correction is selected. In the upper dataset the blank correction is turned off for Ti, and in the lower dataset it is turned on for Ti. For Al it's on in both.
What's happening is that the same absolute error becomes a different relative error for a changed composition.
(https://smf.probesoftware.com/gallery/453_24_10_23_3_54_08.png)
Aurelien and I looked into this and, as you report, there are small differences in the reported analytical sensitivity calculations (Scott and Love, 1983 method) when the blank correction is applied, due to the way we calculate it (though this behavior has now been changed as described further below). See the Probe for EPMA User Reference Manual for details on the Scott and Love (1983) calculation.
The reason for this previous behavior is that the Scott and Love (1983) calculation requires knowing the raw counts from the unknown peak and unknown background. Originally we calculated these raw counts from the peak and background counts after they had been corrected for dead time, beam drift and background (the interpolated off-peak or MPB or calculated MAN intensities). This is mostly because one cannot simply use the raw high and low off-peak measurements: in the case of the MAN (and to some extent the MPB), it is the interpolated or calculated background intensities that must be utilized, and these background intensity values are calculated during the quantification.
The problem with using these calculated background values is that they need to be de-normalized for dead time, beam drift and counting time (to obtain raw photon counts) in order to determine the standard deviations (SQRT). But some time ago we began utilizing the original stored raw intensities for simple statistics such as the variance on the on-peak and high/low off-peak intensities, skipping the de-normalization steps we previously utilized, and re-normalizing them again for dead time, beam drift and counting time if those display normalizations were selected.
However, for statistical calculations such as the analytical sensitivities this approach will not work as we don't have the raw intensities stored for the calculated background intensities, as they are only calculated as needed and of course they are always in normalized intensity (not raw count) units. The raw on-peak photon count intensities are of course available, just not the raw calculated background intensities, and that is why we continued to utilize the de-normalization, then take the square root, then the re-normalization method.
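The de-normalize/take-square-root/re-normalize sequence described above can be sketched as follows. This is a deliberately simplified illustration with a single multiplicative dead time factor; the real code also handles beam drift and, as noted below, a logarithmic dead time expression.

```python
import math

def sigma_from_normalized(i_cps, count_time, dead_time_factor=1.0):
    """De-normalize a normalized intensity (counts/sec, already scaled by a
    multiplicative dead time factor) back to raw photon counts, take the
    square root for the counting-statistics sigma, then re-normalize the
    sigma back into counts/sec units."""
    raw_counts = (i_cps / dead_time_factor) * count_time  # raw photon counts
    return (math.sqrt(raw_counts) * dead_time_factor) / count_time
```

For example, 100 cps counted for 100 seconds is 10,000 raw photons, giving a one-sigma uncertainty of 100 counts, or 1 cps after re-normalization.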
The problem with this method as you might already have realized is that the blank correction (and interference correction, etc) is applied during the quantification process, which is great if one wants to export k-ratios that are already corrected for everything, say for off-line thin film calculations. But then there will be (very) small differences in the peak intensities from these corrections, and that is what Ben is seeing as quoted above.
So, not a big deal, but I understand why Ben was wondering what was the cause of this. Of course in my opinion, one should not even concern themselves with analytical sensitivity calculations for small values. That is because as the peak intensity approaches the background, the analytical sensitivity calculation gets very large, indicating of course that one's analytical sensitivity is worse than the background variance. In fact I believe one really should focus more on analytical sensitivity for higher concentrations, and be more concerned with the detection limits for low concentrations...
But as Aurelien and I were thinking about this issue, we also noticed that we were missing code in the analytical sensitivity calculation to de-normalize for dead time prior to taking the square root (it's the only place it was missing as we checked the other code!). At low count rates this missing de-normalization has almost no effect but of course at high count rates it does. But at high count rates the analytical sensitivity calculation is not sensitive to the background intensity, so that's why we never noticed this!
So we added the dead time de-normalization code and then immediately found a problem when de-normalizing small intensities for dead time using the logarithmic dead time correction. It's because the maximum value that one can handle when de-normalizing using the exponential math function is 10^308 in double precision, and small background intensities can generate values larger than this!
That's when Aurelien and I came up with the idea that for obtaining the raw background (interpolated/calculated) count intensity, we could simply apply the peak to background ratio values obtained from the quant calculation to the raw on-peak intensity value and, voila, we now have a raw background intensity and can take the square root directly without any de-normalization. Nice!
Here are the detection limits and analytical sensitivities for some Ti in synthetic SiO2 (20 keV, 200 nA, 1920 sec on and 1920 sec off-peak), first without the blank correction:
Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):
ELEM: Ti Ti Ti Ti Ti
271 .00049 .00033 .00032 .00054 .00046
272 .00048 .00033 .00032 .00054 .00046
273 .00049 .00033 .00032 .00054 .00046
274 .00049 .00033 .00032 .00054 .00046
275 .00049 .00033 .00031 .00054 .00046
AVER: .00049 .00033 .00032 .00054 .00046
SDEV: .00000 .00000 .00000 .00000 .00000
SERR: .00000 .00000 .00000 .00000 .00000
Percent Analytical Relative Error (One Sigma, Single Line):
ELEM: Ti Ti Ti Ti Ti
271 129.7 15.5 5.2 53.2 673.5
272 501.4 15.4 5.6 136.4 25.6
273 198.3 15.2 5.3 41.2 331.3
274 192.8 12.7 5.4 51.1 47.4
275 94.2 12.9 5.1 36.3 47.3
AVER: 223.3 14.3 5.3 63.6 225.0
SDEV: 161.5 1.4 .2 41.3 280.8
SERR: 72.2 .6 .1 18.5 125.6
And here with the blank correction:
Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):
ELEM: Ti Ti Ti Ti Ti
271 .00049 .00033 .00032 .00054 .00046
272 .00048 .00033 .00032 .00054 .00046
273 .00049 .00033 .00032 .00054 .00046
274 .00049 .00033 .00032 .00054 .00046
275 .00049 .00033 .00031 .00054 .00046
AVER: .00049 .00033 .00032 .00054 .00046
SDEV: .00000 .00000 .00000 .00000 .00000
SERR: .00000 .00000 .00000 .00000 .00000
Percent Analytical Relative Error (One Sigma, Single Line):
ELEM: Ti Ti Ti Ti Ti
271 655.7 40.7 592.7 395.7 42.5
272 238.3 40.6 67.1 70.8 72.3
273 875.4 42.1 181.7 346.5 45.4
274 1001.7 96.0 103.7 545.7 251.0
275 226.5 81.3 978.9 163.5 249.1
AVER: 599.5 60.1 384.8 304.5 132.1
SDEV: 357.3 26.6 392.9 188.9 108.3
SERR: 159.8 11.9 175.7 84.5 48.5
Now the sharp-eyed among you will have noticed that the analytical sensitivities are still different (generally larger) with the blank correction applied. And why is that? Well it's pretty much the same reason it was before.
With the previous de-normalize/re-normalize method, the background counts didn't change depending on the blank correction, but the peak counts did because the blank correction is applied to the (corrected) peak counts!
In the new code based on the raw peak counts and the calculated peak to background method, the raw peak counts didn't change (because they are based on the measured count rate), but the background counts did change because the blank correction affects the calculated peak to background ratio.
Specifically, the peak to background ratio went from 0.998234 (for the first line of channel 1), to 0.999650 (closer to unity) when the blank correction is applied.
So after all that we are sort of where we were but the analytical sensitivity calculation is more accurate because now we are utilizing raw intensities for the Scott and Love calculation. I still think for trace elements the detection limit is what one should be considering though...
I went ahead and posted the new analytical sensitivity code here for those interested:
https://smf.probesoftware.com/index.php?topic=1307.msg12149#msg12149
Oh, and one last thing: previously we always reported negative analytical sensitivities when the concentrations were negative (it happens when one is attempting to measure zero!). But someone (?) asked a while back why we did this, and now that we've thought about it a bit more, we now take the absolute value of the analytical sensitivity for reporting purposes. This also makes the average of the analytical sensitivities more sensible. We can change this back if anyone has a reasonable objection.
Hi John,
I don't entirely follow.
But as I understand it there would be two components:
1. Counting statistics error - as described by Scott & Love - which is the measurement error before the blank correction is applied.
2. Systematic error - associated with the blank correction - which is partly the counting statistics on the sample used as the blank correction standard, and partly unquantifiable: whether the assigned blank correction value itself is correct.
The Scott and Love 1983 method for analytical sensitivity is essentially based on the peak to background ratios of the standard and unknown. If the peak to background ratio changes in the unknown because of the blank correction being applied (which it will for a trace element), the calculated analytical sensitivity changes (according to Scott and Love).
But I will say it again, for trace elements you should not be looking at the analytical sensitivity but rather the detection limit. Note that the detection limit for trace elements does not significantly change when the blank correction is applied as one would expect.
Analytical sensitivity calculations, on the other hand, are really more appropriate for major and minor elements and used to discern whether two values are statistically significantly different from one another. At trace levels all values are basically not statistically significantly different from each other. The more appropriate question for trace elements is: are these trace levels statistically significant relative to the background level.
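To see why the analytical sensitivity blows up at trace levels, consider the common counting-statistics form of the one-sigma percent relative error (a textbook approximation, not necessarily the exact Scott and Love expression implemented in PFE):

```python
import math

def percent_relative_error(n_peak, n_bgd):
    """One-sigma percent analytical relative error from counting
    statistics. As the peak counts approach the background counts, the
    denominator (net counts) vanishes and the relative error blows up,
    which is why detection limits are the better metric at trace levels."""
    return 100.0 * math.sqrt(n_peak + n_bgd) / (n_peak - n_bgd)
```

With 10,000 peak counts and negligible background the relative error is 1%, but with 10,100 peak counts on a 10,000 count background it is already around 140%.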
I tried to say this previously in the post above.
Quote
So, not a big deal, but I understand why Ben was wondering what was the cause of this. Of course in my opinion, one should not even concern themselves with analytical sensitivity calculations for small values. That is because as the peak intensity approaches the background, the analytical sensitivity calculation gets very large, indicating of course that one's analytical sensitivity is worse than the background variance. In fact I believe one really should focus more on analytical sensitivity for higher concentrations, and be more concerned with the detection limits for low concentrations...