An alternate means of calculating detector dead time

Started by Brian Joy, June 17, 2022, 09:44:45 PM


Brian Joy

#15
I've recently made determinations of the dead time constant via the ratio method using the Mo Lα and Mo Lβ3 peaks on uncoated, recently cleaned Mo metal in the same manner as in previous runs, though I increased the number of calculated ratios within a given set (60 in the first and 70 in the second).  I chose the Lβ3 peak instead of Lβ1 because I wanted to obtain a ratio of peak count rates between about 15 and 20 on a given spectrometer such that the source of noticeably non-linear correction within each calculated ratio would be accounted for essentially solely by the Mo Lα measurement.  While Si Kα/Kβ might have worked for this purpose just as well, the ratio of peak heights is even greater (so that it would have been unreasonably time-consuming to obtain an adequate number of Si Kβ counts to keep counting error low).  Unfortunately, Ti Kβ cannot be accessed on PETH, which I'd forgotten about.  For the first measurement set on Mo, I collected data at PCD currents ranging from 5 to 600 nA, and, for the second set, from 5 to 700 nA.  The maximum observed count rate (for Mo Lα on PETH at IPCD = 700 nA) was roughly 240 kcps.  On channel 2/PETL at the same current, observed Mo Lβ3 count rate was about 13 kcps.

Notation for the two measurement sets is as follows:

N'11 = measured Mo Lα count rate on channel 2/PETL
N'21 = measured Mo Lβ3 count rate on channel 3/PETJ
N'31 = measured Mo Lβ3 count rate on channel 5/PETH

N'12 = measured Mo Lβ3 count rate on channel 2/PETL
N'22 = measured Mo Lα count rate on channel 3/PETJ
N'32 = measured Mo Lα count rate on channel 5/PETH

I was hoping that my calculated dead times would be virtually identical to those I've previously determined, as I no longer believe that X-ray counter dead time varies systematically with X-ray energy.  (But maybe I'm wrong considering the pattern possibly emerging in the calculated dead times.)  Perhaps my PHA settings weren't quite right?  If so, considering that PHA shifts should be smaller for lower X-ray energies, my most recent determinations on Ti and Mo should be the most accurate.  The age of the counters could be a factor as well, as anomalies certainly are present in the pulse amplitude distributions, at least under certain conditions.  I examined these distributions carefully, though, across a wide range of count rates, and I don't think I made any serious errors.  Maybe I'll run through the whole process again on Cu or Fe and see if I get the same results as before.  Currently, I have dead time constants for my channels 2, 3, and 5 set at 1.45, 1.40, and 1.40 μs, respectively, but I might eventually lower the values for channels 2 and 3 a little – channel 5 has been more consistent.  At any rate, the values for Mo aren't drastically lower and are most similar to those obtained from Ti.  Continuing the same pattern as before, channel 2 gives the largest dead time constant using the ratio method:

Mo Lα/Mo Lβ3 ratio method [μs]:
channel 2/PETL:  1.38, 1.43     (Ti: 1.43, 1.46; Fe: 1.44, 1.48; Cu: 1.50, 1.46)
channel 3/PETJ:  1.33              (Ti: 1.37; Fe: 1.41; Cu: 1.45)
channel 5/PETH:  1.37             (Ti: 1.42; Fe: 1.41; Cu: 1.42)
Current-based method, Mo Lα [μs]:
channel 2/PETL:  1.27
channel 3/PETJ:  1.17
channel 5/PETH:  1.27
Current-based method, Mo Lβ3 [μs]:
channel 2/PETL:  0.18
channel 3/PETJ:  -1.15
channel 5/PETH:  0.20





The apparent picoammeter anomaly was somewhat less pronounced than it was during my measurements on Ti.  Its lower magnitude accounts for the slightly larger Mo Lα dead time values (compared to those for Ti Kα) calculated using the current-based method below 85 kcps (measured):



Channel 3 shows the ugliness a little better due to its low count rate at high current:



As an aside, I'd like to emphasize that operating sealed Xe counters routinely at count rates greater than tens kcps constitutes abuse of them and shortens their useful lifespans, even if anode bias is kept relatively low (~1600-1650 V).

Below I've fit both a straight line and a parabola (dashed curve) to the uncorrected data for N'12/N'32 (where N'12 corresponds to Mo Lβ3 on channel 2/PETL and N'32 to Mo Lα on channel 5/PETH).  Although the parabola fits the data very nicely (R² = 0.9997), easy calculation of τ1 and τ2 relies on use of a correction function linear in τ, such as N'/N = 1 – τN'.  Although the two dead time constants cannot be extracted readily from the second-degree equation, the Mo Lβ3 count rate on channel 2/PETL does not exceed 13 kcps (at IPCD = 700 nA), so essentially all non-linear behavior is contributed to the ratio by Mo Lα on channel 5/PETH, and it alone accounts for the quadratic term.  The regression line (R² = 0.991) that provides the linear approximation at measured count rates below ~85 kcps is roughly tangent to the parabola at N'12 = N12 = 0, and so both intersect the vertical axis at or close to the true ratio, N32/N12 (subject to counting error, of course).  The size of the parabola is such that, below ~85 kcps, it is approximated very well by a straight line (considering counting error).  Further, the dead time constant determined in the effectively linear correction region should work just as well in equations of higher degree or in any equation in which the true count rate, N, is calculated (realistically) as a function of the measured count rate, N'.  If the dead time constant can be calculated simply and accurately in the linear correction region on a single material (and only at the peak), then there is no need to make the process more complicated, which would only add propagated counting error and potential systematic errors.
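
For anyone who wants to reproduce this kind of regression, here is a minimal sketch (not my actual spreadsheet; the numbers are invented) of how τ falls out of the linear model N'/N = 1 – τN' when the measured ratio is plotted against the measured count rate of the high-count-rate line and the other line's dead time losses are negligible:

# Ratio-method regression under the linear model N'/N = 1 - tau*N'.
# Assumes the beta-line losses are negligible, so that
# R' = N'_alpha/N'_beta ~ (N_alpha/N_beta)*(1 - tau_alpha*N'_alpha).
# The data below are illustrative, not measured values.
import numpy as np

nprime_alpha = np.array([5e3, 20e3, 40e3, 60e3, 80e3])        # measured alpha rates (cps)
ratio_meas   = np.array([17.87, 17.50, 16.99, 16.49, 15.98])  # measured alpha/beta ratios

slope, intercept = np.polyfit(nprime_alpha, ratio_meas, 1)
true_ratio = intercept             # extrapolation to zero count rate
tau_alpha  = -slope / intercept    # in seconds, since the rates are in cps
print(f"true ratio ~ {true_ratio:.2f}, tau ~ {tau_alpha*1e6:.2f} us")

The intercept estimates the true ratio and τ = –slope/intercept; this is the sense in which a regression restricted to the effectively linear region yields the dead time constant directly.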



And here I've magnified the above plot to show the region of relatively low count rates:


Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

Quote from: Brian Joy on July 17, 2022, 07:50:36 PM
As an aside, I'd like to emphasize that operating sealed Xe counters routinely at count rates greater than tens kcps constitutes abuse of them and shortens their useful lifespans, even if anode bias is kept relatively low (~1600-1650 V).

This comment brings to mind something that Colin MacRae said to me during one of my visits to Australia a while back. He said that xenon detectors will either get contaminated or pumped out after a few years, so that he replaces his xenon detectors every 3 to 4 years on their JEOL instruments at CSIRO.

Quote from: Brian Joy on July 17, 2022, 07:50:36 PM
I was hoping that my calculated dead times would be virtually identical to those I've previously determined, as I no longer believe that X-ray counter dead time varies systematically with X-ray energy.  (But maybe I'm wrong considering the pattern possibly emerging in the calculated dead times.)  Perhaps my PHA settings weren't quite right?

This is also something that I've been looking at, so I started a new topic on this question. I hope we can share data and try to come to some sort of conclusion on this very interesting question:

https://smf.probesoftware.com/index.php?topic=1475.0
The only stupid question is the one not asked!

Brian Joy

#17
I've collected data to determine the dead time constant via the ratio method using Si Kα and Si Kβ diffracted by TAPJ on my channels 1 and 4.  I calculated a total of 80 ratios in each measurement set, with measurements made at PCD currents ranging from 1 to 136 nA.  On channel 1, the Si Kα count rate at 136 nA was 226 kcps; on channel 4, Si Kα count rate at the same current was 221 kcps.

Notation for the two measurement sets is as follows:

N'11 = measured Si Kα count rate on channel 1/TAPJ
N'21 = measured Si Kβ count rate on channel 4/TAPJ

N'12 = measured Si Kβ count rate on channel 1/TAPJ
N'22 = measured Si Kα count rate on channel 4/TAPJ

Results for the dead time constant are as follows:

Si Kα/Si Kβ ratio method [μs]:
channel 1/TAPJ:  1.44
channel 4/TAPJ:  1.07
Current-based method, Si Kα [μs]:
channel 1/TAPJ:  1.38
channel 4/TAPJ:  1.12
Current-based method, Si Kβ [μs]:
channel 1/TAPJ:  0.64
channel 4/TAPJ:  -0.42

Notice that the results for the current-based method using Si Kα are very similar to those for the ratio method.  This is likely because picoammeter nonlinearity was minimized by the high count rates attained at relatively low current.  The Si Kα count rate was in the vicinity of 70 kcps at 30 nA and around 90 kcps at 40 nA.  Using Si Kβ for the same purpose results in the usual anomalously low (or negative) dead time values, with count rates reaching only ~11 kcps at IPCD = 136 nA.





In the ratio plots above, I've propagated counting error in each ratio used in my regressions and have plotted lines at two standard deviations above and below the linear fit (i.e., I am assuming that the line represents the mean).  The propagated error is calculated for the left-hand plot as

σratio = (N'11/N'21) * ( 1/(N'11*t11) + 1/(N'21*t21) )^(1/2)

where each N' carries units of s⁻¹ and where t is count time in seconds.  The small, abrupt shifts in the calculated uncertainty are due to instances in which I changed (decreased) the count time as I increased the beam current.  To be absolutely clear, I've plotted the counting error using the measured number of X-ray counts along with the Si Kα count rate corrected for dead time using the linear model.  Within the effectively linear region, all ratios except for one (at N'11 = 15.4 kcps) plot within two standard deviations of the line (noting that one value out of every twenty should be expected to plot outside the envelope).  This suggests that counting error is the dominant source of uncertainty and that the linear model corrects adequately for dead time at measured count rates up to several tens kcps.  Not a big surprise.
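
As a concrete illustration of that propagation (invented numbers, not the measured data):

# Propagated counting error for a ratio of two measured count rates,
# sigma_ratio = (N'11/N'21) * sqrt(1/(N'11*t11) + 1/(N'21*t21)),
# and the corresponding 2-sigma envelope.  Values are illustrative only.
import numpy as np

n11, t11 = 70e3, 10.0    # Si Ka: measured count rate (cps) and count time (s)
n21, t21 = 3.5e3, 60.0   # Si Kb: measured count rate (cps) and count time (s)

ratio = n11 / n21
sigma = ratio * np.sqrt(1.0/(n11*t11) + 1.0/(n21*t21))
print(f"ratio = {ratio:.2f} +/- {sigma:.3f} (1 sigma); 2-sigma envelope: +/-{2*sigma:.3f}")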

Ultimately, the goal is for the corrected data to plot along (within counting error of) the horizontal line representing the true ratio as shown in each plot below.  When I apply the two-term (Willis) and six-term (Donovan et al.) equations, the models can be seen not to fit the data particularly well at high count rates when using the dead time constant determined using the ratio method; further, they result in positive departures in the effectively linear range on the left-hand plot and negative departures on the right-hand plot, which is physically impossible, as values are pushed beyond the limit set by the true ratio.  Further, using the plot of N12/N22 versus N12 (i.e., Si Kβ count rate on channel 1 represents the horizontal axis) as an example, if I increase the value of τ2 from 1.07 to 1.20 μs, then the fit (solid line in the second set of plots) deteriorates badly, particularly within the effectively linear region, while the overall fit is only marginally improved (judging by eye).  At this point, the dead time constant has been stripped of all physical meaning and serves simply as a "fudge factor."
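
The test itself is easy to sketch: correct both measured rates with a chosen expression and check whether the corrected ratio is flat with count rate.  In the sketch below the numbers are illustrative, and the synthetic "measured" rates are generated with the linear model purely to show what a flat result looks like (so it does not adjudicate between models); the two-term and six-term expressions would slot into the same comparison.

# Does a given dead time expression flatten the ratio?  Illustrative sketch only.
import numpy as np

def correct_linear(n_obs, tau):
    return n_obs / (1.0 - tau * n_obs)                  # N = N'/(1 - tau*N')

def correct_log(n_obs, tau):
    return n_obs / (1.0 + np.log(1.0 - tau * n_obs))    # N = N'/(1 + ln(1 - tau*N'))

tau1, tau2 = 1.44e-6, 1.07e-6                        # channel 1 / channel 4 values quoted above
n1_true = np.array([5e3, 50e3, 100e3, 150e3, 200e3]) # assumed true Si Ka rates (cps)
n2_true = n1_true / 20.0                             # assumed true Si Kb rates, true ratio = 20
n1_obs  = n1_true / (1.0 + tau1 * n1_true)           # synthetic "measured" rates (linear model)
n2_obs  = n2_true / (1.0 + tau2 * n2_true)

for name, f in [("linear", correct_linear), ("log", correct_log)]:
    print(name, np.round(f(n1_obs, tau1) / f(n2_obs, tau2), 3))  # flat = adequate correction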



In the plots below showing the results of modifying τ2, the right-hand plot is an enlargement of the left-hand plot.  Because the measured Si Kβ count rate (N'12) is relatively low, changing the value of τ1 has only a minimal effect on the plot.  (The arrow on each plot, above and below, points at the highest count rate used in the linear regression.)



In summary, while the two-term and six-term equations may appear to provide better fits than the linear equation over a very wide range of count rates, the functional forms of the corrections are easily observed to be physically unrealistic.  Graphically, the effect of the added terms is simply to allow the function to bend or pivot in an apparently favorable direction, but at the risk of producing variations in slope that are inexplicable in terms of physical processes.  In contrast, the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I'm totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N').  For the "best" correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

#18
Quote from: Brian Joy on August 09, 2022, 04:51:33 PM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I'm totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N').  For the "best" correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is that (as you say) it does not account for more than a single coincident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:

[attached equation: N = N'/(1 + ln(1 − τN'))]
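
For reference, writing the series out (assuming the nth coincidence term is (τN')^n/n, which is consistent with the code posted later in this topic), the sum has a simple closed form:

\frac{N'}{N} \;=\; 1 - \sum_{n=1}^{\infty} \frac{(\tau N')^{n}}{n} \;=\; 1 + \ln\!\left(1 - \tau N'\right)
\qquad\Longrightarrow\qquad
N \;=\; \frac{N'}{1 + \ln\left(1 - \tau N'\right)}

The n = 1 truncation recovers the traditional linear form, N'/N = 1 − τN'.
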
Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://smf.probesoftware.com/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.
The only stupid question is the one not asked!

Brian Joy

Quote from: Probeman on August 09, 2022, 07:41:35 PM
Quote from: Brian Joy on August 09, 2022, 04:51:33 PM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I'm totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N').  For the "best" correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is that (as you say) it does not account for more than a single coincident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:



Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://smf.probesoftware.com/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.

OK, the equation is now written in a nicer form, but it still does not accurately describe the necessary correction to the measured count rates.  Did you not look at my plots above?  I keep looking for a mistake in my spreadsheet, but I can't find one.  (I'm not joking.)  If your model worked, then it should correct the ratios such that they plot along a horizontal line.

We just had a discussion about hypothesis testing.  You seem to be completely sold on your model.  Why are you not trying to probe it for flaws?  That task should be your primary focus.  In a sense, I'm doing your work for you.

I'll probably post again in a few days with some more plots  that I hope will make my points more clear.
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

#20
Quote from: Brian Joy on August 09, 2022, 10:07:39 PM
Quote from: Probeman on August 09, 2022, 07:41:35 PM
Quote from: Brian Joy on August 09, 2022, 04:51:33 PM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I'm totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N').  For the "best" correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is that (as you say) it does not account for more than a single coincident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:



Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://smf.probesoftware.com/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.

OK, the equation is now written in a nicer form, but it still does not accurately describe the necessary correction to the measured count rates.  Did you not look at my plots above?  I keep looking for a mistake in my spreadsheet, but I can't find one.  (I'm not joking.)  If your model worked, then it should correct the ratios such that they plot along a horizontal line.

We just had a discussion about hypothesis testing.  You seem to be completely sold on your model.  Why are you not trying to probe it for flaws?  That task should be your primary focus.  In a sense, I'm doing your work for you.

I'll probably post again in a few days with some more plots  that I hope will make my points more clear.

You're funny. I didn't realize that "nicer form" was an important scientific criterion.   :)   What's nice about it is that it actually captures the situation of multiple photon coincidence. As you stated yourself: "the two-term and six-term equations may appear to provide better fits than the linear equation over a very wide range of count rates"!  I personally would call that *more physically realistic* simply because it agrees with the empirical data!  Maybe you've heard about scientific models?

Yes, you mentioned science, and I responded that I know a little bit about testing hypotheses (you can look Donovan up on Google Scholar!). In fact, hypothesis testing is exactly how we arrived at these new dead time correction expressions. We have performed (and are continuing to make) k-ratio measurements over a wide range of beam currents for a number of emission lines on a number of primary and secondary standards.

And what we saw (please read from the beginning of the topic to see our hypotheses evolve over time, because you know, science...) was that the traditional expression breaks down (as you have already admitted) at count rates attainable at even moderate beam currents.  So after examining alternative expressions in the literature (Willis, 1993), we realized they could be further extended (six term) and eventually integrated into a simple logarithmic expression (above).

https://smf.probesoftware.com/index.php?topic=1466.0

I don't know what you're doing wrong with your evaluations, as these new (six term and log) expressions are accurately correcting for count rates up to 400K cps.  OK, here's a thought: I suspect that you're using a dead time constant calibrated for the traditional method and trying to apply the same value to these more precise expressions.  As pointed out in this post, using a dead time constant calibrated using the traditional expression will cause one to overestimate the actual dead time constant:

https://smf.probesoftware.com/index.php?topic=1466.msg11013#msg11013

Maybe you should try the constant k-ratio method with these newer expressions and you'll see exactly where you're going wrong?  Plus, as mentioned in the constant k-ratio method topic, you can utilize the same k-ratio data set not only to find the correct dead time constants, but also to test your picoammeter accuracy/linearity and, finally, to check simultaneous k-ratios to be sure your spectrometers/crystals are properly aligned.

It's a very intuitive method, and clearly explained so you can easily follow it. The pdf procedure is attached to this post below.

To summarize: both the six term and the log expression give essentially the same results as expected because they correctly handle cases of multiple photon coincidence at higher count rates.  As you admitted in your previous post, the traditional dead time expression does not handle these high count rate situations.

I'm not sure why you can't grasp this basic physical concept of multiple photon coincidence. Nevertheless I'm certainly not asking you to do any work for me!  That's on you!    ;D

Instead I can only thank you for your comments and criticisms as they have very helpfully propelled our manuscript forward.
The only stupid question is the one not asked!

sem-geologist

#21
For starters, I will say you are both wrong  8). I said in another post that this log equation is probably one of the most important advancements (I imagined it did the right thing), but now, after clearly seeing it in this nice form (yes, a mathematical formula is much nicer to read than someone's code, especially code written in VB; were it in Julia, maybe it could be argued to be readable enough), I have changed my mind. I am starting to be very skeptical.

This problem has been known for ages, as proportional counters are used not only on EPMA, yet no one has come up with a working equation. Could this one be useful? Maybe yes, but it needs to be thoroughly tested before being released to the public, and the nomenclature in this model needs to be changed, as it currently brings in confusion.

Let's start with a definition of what the dead time actually is (which should have been there at the very beginning of all the posts proposing new methods). If we can't define what it is, then all the rush with calibration efforts is doomed.

I could talk in electronics and signal-processing terms (and I probably will when I find time to continue the thread about counting pipelines), but (from my grim experience) I will instead explain this with the absurd fictional story below, to be sure everyone will understand. (Initially it was dark and gruesome, with deaths (how do you get a story about dead time without any deaths? Huh?), but I changed it to some other folk-removal mechanics, like trap doors that send folks down a water-park-style tube to the nearest river - nothing lethal; no one dies in this story, OK?)

I put my story into a quote, in case someone wants to skip it.
Quote
In the city of Opium-Sco-Micro, on some very large hills, there were a few gang clubs (there was the "EDS gang", the "WDS gang", and also other gangs which this story will omit). The clubs wanted to keep everything under control and wanted to know exactly how many people came into the club. And so the club entrances have guards armed not with guns but with a remote button, which controls a trap door in the ground under the entry that, when activated, sends the unfortunate folk(s) on a wheeeeeeeeeeeee slide-trip through a water tube to the closest river.

The guard stands perpendicular to the line of folks trying to enter the club. His field of view covers a few meters of folks approaching, the crossing of the entrance, and a few meters of movement inside the club. Unfortunately, guards are not very capable of fast counting, nor do they have perfect sight. Thus there is a sign above the entry saying that people should keep at least a 2 meter distance (or more) in the line if they don't want to be flushed to the river while trying to enter the club without keeping the required distance. Folks arrive at random; however, none of them know how to read, and all of them move at the same constant speed.

How does that work? This is where EDS gang guards and WDS gang guards differ. An EDS gang guard would flush anyone who does not keep those 2 meters from the person ahead, even if the person in front of him in line is a folk who is at that very moment being flushed. A WDS gang guard would care only about the last folk who got in, and would measure these 2 meters from that last passed-in folk, ignoring flushed folks in between.

If more folks come at random, sometimes two people will happen to walk side by side (a row), and the guard, having bad sight, will be unable to distinguish whether that is one person or two, and will take the row for a single person (a pile-up). So, if no one is in front of them, both are let in as a single person and the guard just notes 1 person passing into the club; otherwise both are flushed to the river.

Now, if a moderately large crowd of folks wanting to enter the club arrives, there will be more combinations where 2, 3, or more folks are side by side, and the bad-sighted guard will be unable to identify whether it is a single person or more (oh, these sneaky bastards). If a very large crowd comes, where no one keeps their distance, the EDS guard will let in the first row (and count 1), and will keep the trap door open, sending the rest of the crowd down to the river (EDS dead time = 100%). Unlike the EDS guards, the WDS gang club guard will let in folks/rows every 2 meters, and will flush everyone in between (non-expandable dead time).

Everyone knows that you should not joke with the EDS gang (they have "expandable dead time"), and so people should not be provoked into coming in too-large crowds (keep your dead time below 25%, they say).

It seems that guards from the EDS and WDS gangs flush people a bit differently, and they also count a bit differently.
The WDS guard has a nice counting system where he pushes a button to add +1 to the total count, whereas the EDS gang has to replace (reset) its guards every nth minute, as they can only count in their heads up to 100 (or some other settable low number), and while the guards are being replaced the trap door is automatically open, so no one can enter - anyone who tries just falls into the trap and is flushed to the river (an additional part of EDS dead time at the "charge-sensitive pre-amplifier" level, a dead time which WDS does not have). For some internal reason, the gang announces that the club can only be entered during a 10 minute window (or 5, or 3, or 7, or... you know), with the entrance shut before and completely shut after. Interestingly, the administration would like to know how many people wanted to enter the club in those 10, 5, 3, 7, or however many minutes, and thus is not so much interested in the total number of folks who got in, but in the rate of folks coming to the club (i.e., when "Asbestos-Y" was visiting the club, 200.4 folks per minute were entering, but when "Rome Burned & ANSI-111" came to present their new album "Teflon 3", the rate was 216.43 folks per minute); knowing the speed of the folks and the distance posted on the "keep distance" sign, they can calculate that. At least, such was the initial idea.

Anyway, at some point the WDS gang got a bit confused.
Someone noticed that last night there had to have been more people inside the club than were counted at the entrance; there were drinks for one thousand people, but all the bottles and barrels emptied pretty fast, as if there had been two thousand. Some gang members would say, "Maybe our guards have a bad sense of how much those `2 meters` are. You know, I hear that the WDS gang in another city requires only a 1 meter distance to be kept - we are probably more affected by this problem, as our requirements are higher." Some other gang member, fluent in statistics and physics, would then say: "Maybe the problem is that the guard is unable to judge those 2 meters correctly; we should calibrate his '2 meters' against how many drinks are consumed at the club. Let's do an experiment, moving our club from city to city with the same DJs to get different-sized crowds wanting to visit; from these results we will get a regression line pointing to the correct size of the guard's `2 meters`."

After a few attempts it would become obvious that the results are a bit weird. It would be noticed that the calculations show this guard's "2 meters" shrinking and growing depending on the size of the group. Someone would say, "But at least you see! - all the results indicate that it is inaccurate; in some cases it is less/more than 2 meters. We should at once send a warning to all the other gangs in other cities that they should calibrate their guards' sight and ability to judge distance." A few gang members reported that they think these 2 meters shrink depending on which band or DJ was sound-checking that day. Some folks thought this was nonsense, as the guards are deaf and that should not influence them at all.

Then an idea would appear: actually, we should not calibrate those "2 meters" against a single event, but against the proportions of two similar happenings (i.e., compare the rate of club visitors when "Asbestos-Y" plays vs. the rate when the band "There Comes the Riot" plays), and in different cities with different populations (that would be k-ratios vs. beam current outside of this story). After some more investigative experiments, someone at last would correctly notice that the size of the crowd probably interferes with the guard's perception of the "2 meters" because of the rows being formed - "oh, those sneaky folks". First, someone took into account the possibility of 1 additional folk sneaking in; later someone enlarged that to a total of 6; but the latest iteration of the equation decided not to discriminate and replaced it with a natural logarithm, to account for rows with an infinite number of folks. That equation itself is then used to find the probability of multi-person rows appearing as a function of the size of the crowd, by tinkering with the "constant" of "2 meters", and if it fits the found size of "2 meters", then it is deemed correct. Some voices also raised the idea that the guards go mental with age, and that their "2 meters" grows (or shrinks) in size as they get older.

This new finding and calibration practice would hastily be spread to other gang clubs across countries and cities... while overlooking (or absolutely ignoring) how the guards actually keep those 2 meters. That is, the EDS club guard has three lines drawn on the floor (with very precisely measured distances): the main line directly at the entrance with the trap doors to flush the folk(s), then one line 2 meters past the main line inside the club, and another 2 meters outside the club. The guard opens the trap the moment a folk (or a row with sneaky folks) passes the main line, and closes it only when both areas (2 m before and 2 m after the entrance) are clear.

The WDS gang club guard has a simplified system, with only two lines drawn on the floor: the first, main line with the flushing trap door is under the entrance, and the other line is 2 meters further inside the club. There is no third line in front of the club. If there are no folks between these two lines, the flushing trap is closed (a folk can pass).

What the gang statisticians and physicists missed is that the observed discrepancy in the folk count rate can't depend only on the distance the folks need to keep between themselves (2 meters), but also on the poor eyesight of the guards and their delay in pushing the button after the first folk crosses the line. They need to delay the opening of the trap door by some milliseconds, or else the first folk in line would be caught and flushed to the river, and absolutely no one would be able to enter the club. Because the guard has poor eyesight and some reaction delay in pushing the button, those multiple (piled-up) sneaky folks are able to enter the club while being counted as a single person. The equation did not account for the delay and the resolution of the guard's eyesight, and thus the resulting calibration and formula do not represent what they claim to represent. There is also a risk that it will not work correctly if the guards change the required distance between folks (settable dead time). The "2 meters" in this story correspond to the dead time constant outside the story. Taking into consideration the constant movement speed of all folks, the dead time is the portion of time during which the trap door was open, preventing anyone from entering the club. Let's say the thought-up formula can predict the real rate of folks trying to get into the club if calibrated "correctly"; but even then the "calibrated value" has nothing in common with the real, set distance between the lines drawn at the club entrance, and should naturally be called something else.

So, all your formulas (Brian, yours too) try to calculate something while excluding a crucial value (a variable in the case of EDS systems, a static value in WDS systems) - the shaping time of the shaping amplifier, on which the result highly depends and which is absolutely independent of the set(table) dead time. Make the formulas work with the pre-set, known and declared dead time constants of the instrument (physically measured and precisely tweaked with an oscilloscope at the factory by real engineers); please stop treating instrument producers like some kind of Nigerian scam gang, that is not serious. If these formulas are left for publication as is, please consider replacing the tau letters with something else, and please stop calling it the dead-time constant - in its current form it is not that at all. I propose to call it "p" - the factor of sneaked-in pile-ups at fixed dead time. I would also be interested in your formula if tau were replaced with the product of the real tau (the settable, OEM-declared dead time) and an experimentally calculable "p" - the factor of sneaked-in pile-ups, which depends on the shaping time. I think that would be reasonable because not everyone is able to open covers, take note of the relevant chips and electronic components, and find and read the corresponding datasheets to get the pulse-shaping time correctly, or measure it directly with an oscilloscope (which, indeed, is not so hard to do on Cameca SX probes).
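
To write the proposal out as a formula (just a sketch of the idea, using the log expression as the example): keep the hardware blanking time explicit and let the fitted quantity be a separate, dimensionless pile-up factor,

N \;=\; \frac{N'}{1 + \ln\left(1 - p\,\tau_{\mathrm{hw}}\,N'\right)}

where τ_hw is the set, OEM-declared hardware dead time and p is the experimentally calibrated factor of sneaked-in pile-ups.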

Currently, I have seen no results presented using the proposed formulas with the dead time (the real hardware dead time) set to anything other than 3 µs. Try your formula with the hardware set to 1 µs, 5 µs, 7 µs, 11 µs... If it works with all of those, then the formula can be considered closer to correct. Oh no'es - you need to redo the factor-finding when changing the hardware dead time? You see, that is the reason why tau should be divided into two variables: a settable, fully controllable dead time variable (tau) and the proposed constant "p" - so that the formula would work with any hardware setting without needing to be re-calibrated. But you probably won't be interested in that, as PfS is currently limited to a single hardware dead time per detector and XTAL (as far as I can track the advancements and configuration file formats of PfS; please correct me if I am wrong). I even see that, at this moment, my strategy of not crossing 10 kcps (where the simple dead time correction works reasonably) and using Cameca PeakSight gives me more freedom. On PeakSight I can switch between the hardware dead time constants like crazy depending on what I need (i.e., if I need better PHA precision because I want to use differential mode, I switch to 7 µs; if I need more throughput and don't care about PHA precision (integral mode), I switch to a 1 µs DT (+0.3 µs in all modes to account for the additional dead time of the "pulse-hold" chips)).

I tried this log formula just for fun, and...
I get negative values if I enter 198,000 cps at 3.3 µs into the log formula, and it also strongly underestimates count rates if I change the hardware DT to 1 µs.

Going above 150 kcps (i.e., those unrealistic 400 kcps that were mentioned) is not very wise, and knowing how the PHA looks above that point (and after seeing the real oscilloscope picture, which gives a much better view of what is going on), I really advise not going there. Actually, I strongly urge staying away from anything >100 kcps, as the counting gets really complicated and is hard to model (this oversimplified log formula certainly can't account for count losses due to baseline shifting of the pulse-shaping amplifier output). Even worse, this calibration process calibrates the factor against a region that should not be used, wrongly biasing the juicy middle region of the detector (10-100 kcps, or 10 to 70 kcps if low pressure).


BTW, this is not a Cameca problem (please stop comparing dead time constants between JEOL and Cameca; it is comparing apples and oranges. JEOL is forced to use shorter constants, as its WDS electronics have more work to do, needing to push background measurements through the PHA, whereas Cameca instruments have the luxury of not caring about background noise, as it is cleverly cut out and does not block the pipeline. So Cameca engineers could put in a somewhat lengthier shaping amplifier for more precise PHA while producing throughput similar to JEOL instruments). So pile-up is a general problem of all detectors going BIG (or really, severely, absolutely too big. Why? Because it looks better on paper and sells better? Like everything.). This problem appeared on WDS due to large diff XTALs, but EDS on the current market is not better at all. We are looking for a new SEM configuration. Jeez, sales people look at me like someone coming out of a cave when I ask for an EDS SDD with 10 mm² or even less active area. If I do any EDS standards-based analysis, the dead time should not be more than 2% (as pile-ups will kick in, ruining the analysis), and I still want the BSE image nice and smooth at the same time, not some static noise (which is what you get when using the very low beam currents needed to keep these big EDS SDDs at 2% DT). And this requirement is hard to meet with the minimum offered 30 mm²... Well, they initially offer 70 mm² or 100 mm² (or even more). What the heck? Counting electronics have not improved nearly enough to move even to 30 mm² for precise quantitative analysis. Their main argument: "You can produce these nice colorful pictures in an instant - you will be more productive." Yeah, I guess I will be more productive... producing garbage. OK. ME: "What about pile-ups?" THEY: "We have some built-in correction..." ME: "Which I guess works nicely with an Fe metal alloy..." THEY: "How did you know this very confidential information..." ME: "What about your pile-up correction for this REE mineral?"...

Probeman

#22
Ha, OK. Whatever.   ::)    Everybody's a contrarian apparently.

You mention an observed count rate of 198K cps and a dead time of 3.3 usec and getting a negative count rate. Of course you do!  Talk about a "non physical" situation! At 3.3 usec, an observed count rate over 190K cps implies an actual count rate of over 10^7 cps!    :o

User specified dead time constant in usec is: 3.3
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
  190000   509383.3      0.373    1076881   0.1764355    7272173   2.612699E-02   1.374501E+07   0.0138232   
  191000   516635.1     0.3697    1116561   0.171061   1.073698E+07   1.778898E-02   3.869025E+07   4.936644E-03   
  192000   524017.4     0.3664    1158892   0.1656756   2.043884E+07   9.393877E-03   -4.764755E+07   -4.029588E-03   
  193000     531534     0.3631    1204149   0.1602792   2.050771E+08   9.411094E-04   -1.47588E+07   -1.307694E-02   
  194000   539188.4     0.3598    1252647   0.154872   -2.562786E+07   -7.569886E-03   -8736024   -0.0222069   
  195000   546984.6     0.3565    1304750   0.1494539   -1.208202E+07   -1.613968E-02   -6206045   -3.142098E-02   
  196000   554926.4     0.3532    1360876   0.1440249   -7913163   -2.476886E-02   -4813271   -4.072075E-02   
  197000     563018     0.3499    1421510   0.138585   -5887980   -3.345799E-02   -3931522   -5.010782E-02   
  198000   571263.7     0.3466    1487221   0.1331343   -4691090   -4.220768E-02   -3323049   -5.958384E-02   

Jesus, Mary and Joseph and the wee donkey they rode in on... can we get back to reality?  Any physically existing proportional detector is completely paralyzed at actual count rates above 400 to 500K cps. So please just stop with the nonsense extrapolations.   :)

Seriously, I mean did you not know that at sufficiently high count rates and sufficiently large dead times, even the *traditional* dead time expression will produce negative count rates?  I posted this weeks ago:

https://smf.probesoftware.com/index.php?topic=1466.msg11032#msg11032

Hint: take a look at the green line in the plot!   

Anette and I (and our other co-authors) are merely looking at the empirical data from *both* JEOL and Cameca instruments and finding that these new dead time expressions allow us to obtain high accuracy analyses at count rates that were previously unattainable using the traditional dead time expression (and of course they all give the same count rates at low count rates).

In my book that's a good thing.  But you guys can keep on doing whatever it is that you do. Now maybe you coded something wrong, or maybe, as I suspect Brian is doing, you're utilizing the same dead time constant for the different expressions:

https://smf.probesoftware.com/index.php?topic=1466.msg11013#msg11013

The dead time constant *must* change if one is accounting for multiple photon coincidence! In fact the dead time constant *decreases* with the more precise expressions. Here's the code we are using for the logarithmic dead time expression in case it helps you:

' Logarithmic expression (from Moy)
If DeadTimeCorrectionType% = 4 Then
    If dtime! * cps! < 1# Then
        temp# = 1# + Log(1 - dtime! * cps!)
        If temp# <> 0# Then cps! = cps! / temp#
    End If
End If
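
For a rough sense of the difference (a quick sketch with an illustrative τ of 1.4 µs, not anyone's calibrated value), the two expressions agree at low count rates and diverge at high ones:

# Compare the traditional and logarithmic dead time corrections; tau is illustrative.
import math

def n_traditional(n_obs, tau):
    return n_obs / (1.0 - tau * n_obs)                  # N = N'/(1 - tau*N')

def n_log(n_obs, tau):
    return n_obs / (1.0 + math.log(1.0 - tau * n_obs))  # N = N'/(1 + ln(1 - tau*N'))

tau = 1.4e-6   # 1.4 us, illustrative
for n_obs in (10e3, 50e3, 100e3, 200e3):
    print(f"{n_obs/1e3:6.0f} kcps observed -> traditional {n_traditional(n_obs, tau)/1e3:7.1f} kcps, "
          f"log {n_log(n_obs, tau)/1e3:7.1f} kcps")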


Finally you mention these new expressions should be tested, and that is exactly what we have been doing in this topic:

https://smf.probesoftware.com/index.php?topic=1466.0

There is copious data there for anyone who wants to review it, and we further encourage additional testing by everyone.  In Probe for EPMA this is easy because all four dead time correction expressions are available for selection with a click of the mouse:



But hey, if nothing else, I'm sure this is all very entertaining for everyone else.    ;D
The only stupid question is the one not asked!

sem-geologist

#23
In the post which you linked there is this:
Quote from: Probeman on July 21, 2022, 12:18:17 PM
...
We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).
...
You asked "anyone" and I gave You example with supposedly 2.4Mcps input count measurement (950nA on SXFiveFE, Cr Ka on LPET high pressure P10, partly PHA shift corrected for this experiment). Had you not expected someone would come with 4Mcps like we live in caves or what? Why Would You ask in a first place? If You Want I can go to our SX100 and try putting 1.5µA to get closer to those 4Mcps and report the results.


This is my Python code, functionally doing exactly the same thing as your VB code (without checking for negative values):

import numpy as np

def aurelien(raw_cps, dt):
    dt = dt / 1000000  # because I want to enter dead time in µs
    in_cps = raw_cps / (1 + np.log(1 - raw_cps * dt))
    return in_cps
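
For instance, plugging in the cases I discuss below:

# quick check of the cases discussed below (outputs rounded)
print(aurelien(198_000, 3.3))    # ~ -3.3e6 cps: a negative "input" rate at 3.3 us
print(aurelien(315_000, 1.035))  # ~ 5.2e5 cps: far below the ~2.4 Mcps expected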


So at first I just entered my hardware dead time. If it is set to 3 µs, the real dead time can only be a bit larger (due to some signaling delays in the traces between chips - i.e., the pulse-hold chip). If your equation and calibration break that, it simply exposes itself as false (i.e., a dead time of 2.9 µs while the hardware is set to 3 µs) or falsifies what it claims to do...
As I said, call it something different - for example a "factor". Dead time constants are constants, and constants do not change - that is why they are called "constants" in the first place. You can't calibrate a constant: if its value can be tweaked or influenced by time or setup, then it is not a constant in the first place but a factor or variable. With a digital clock of 16 MHz, 1 µs will be 16 clock cycles, 2 µs will be 32 clock cycles, 3 µs will be 48 clock cycles, and so on; it can't be tweaked to 15 or 17 cycles randomly. If you imply that digital clocks in all probes can suddenly decalibrate themselves to such a bizarre degree, on machines maintained at stable humidity and temperature, while other technology full of digital clocks roams outside and shows no such signs - something is wrong. Should digital clocks be failing at such an unprecedented rate as proposed for all our probes, we should see most 20-year-old ships sinking, massive planes exploding in the air as turbines go boom with desynced digital clocks, power plants exploding, satellites falling... but we don't see any of these things, because digital clocks are one of those simple and resilient technological miracles and cornerstones of our technological age. The proposition of dead times lower than the set hardware clock cycles is simply unrealistic in any acceptable way whatsoever, as in addition to those set clock cycles there should be some delay in signaling between chips.

That said, the 3 µs stays there (the red line), and the final dead time (unless we stop calling it that) can't go below that value - it is counting electronics, not a damn time-traveling machine! What about additional hardware time? Cameca PeakSight by default adds 0.3 µs to any set dead time (it can be set to other values by the user) - the pulse-hold chip has 700 ns, and I had always thought that this additional 0.3 µs was the difference needed to round up and align the counting to the digital clock at 1 µs boundaries. That is why, in the end, I entered 3.3 µs into the equation and found the negative number.
This whole endeavor was very beneficial for my understanding, as I finally realized that it is the pulse-hold chip which decides whether the signal pipeline needs to be blocked from incoming pulses. Thus, in reality, adding +0.3 or +(0.7 + 0.3) to the set integer blanking time constant of 3 µs has no basis, as the pulse holding in the chip works in parallel with the blocking of the pipeline from the incoming signal.

OK, in that case, increasing the dead time to 3.035 µs makes these 198 kcps of output give the expected result (about 2430 kcps of input). So 3 µs is the blanking time, and 35 ns is the signal traveling from the pulse-hold chip to the FPGA so that it can block the pipeline (this is a huge simplification skipping some steps) - generally, taking LVTTL into consideration, this timing looked a bit short to me initially, but with a well-designed system such a delay, and even smaller digital signaling delays, are achievable (the new WDS board is more compact than the older board; the traces are definitely shorter and narrower - less capacitance - faster switching times). However, these 35 ns could also represent half of a clock cycle. The arrival of a pulse can be read by the FPGA only with clock resolution (the period at 16 MHz is 0.0625 µs), so before the FPGA triggers the 3 µs pipe blockade, some additional random fraction of a clock cycle has to pass, the average being about half a cycle. We also don't know at what time resolution the FPGA works, as it can multiply the input clock frequency for internal use. Anyway, now we know that, on top of the integer part, the other processing takes 0.035 µs, and thus the same addition should apply to the other integer time constants. It's time to test this equation for a dead time constant of 1 µs (+0.035).

The measured output count rate was 315 kcps with only the dead time changed to 1 µs. The input count rate stays absolutely the same, as the conditions are identical.
Calling the log function with 1.035 µs gave me about 520 kcps of input counts.
OK, let's find which dead time value would satisfy the log formula to give the expected input count rate...
It looks like 1.845 µs would work - which makes no physical sense at all.
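
(That value can be reproduced numerically, e.g. with SciPy's brentq root finder and the aurelien function above; the bracket endpoints are just values that straddle the root:)

from scipy.optimize import brentq

target_in = 2.43e6    # input rate (cps) implied by the 3.035 us case above
raw_out   = 315_000   # measured output rate (cps) at the 1 us hardware setting

# dead time (in us) that makes the log expression hit the target input rate
tau_fit = brentq(lambda tau_us: aurelien(raw_out, tau_us) - target_in, 1.0, 1.95)
print(tau_fit)        # ~ 1.845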

But what should I expect from models which completely ignore how the hardware works? You say it is based on empirical data? Maybe you have too small a set of empirical data, and thus it fails miserably at different set dead times.

But then it gets really funny, and it explains why the hardware's internal workings are not considered in this proposed formula:
Quote from: Probeman on August 11, 2022, 08:49:39 AM
Jesus, Mary and Joseph and the wee donkey they rode in on... can we get back to reality?  Any physically existing proportional detector is completely paralyzed at actual count rates above 400 to 500K cps. So please just stop with the nonsense extrapolations.   :)

Are these input or output rates? If those are output rates, 198 kcps is well below them, so where is the extrapolation?
In any case, that reply makes no sense, because on Cameca instruments the maximum output rate you can squeeze out of the counting system is just a bit above 320 kcps, at input count rates of more than 2.5 Mcps and with the hardware dead time constant set to 1 µs. And if JEOL's dead time constant is not settable and is fixed at 2 µs, those mentioned 400 kcps or 500 kcps output count rates are not achievable in any way. They would be achievable only if the dead time were settable below 1 µs. Again, I am talking about real measurements, not the extrapolations or interpolations you try to smear me with.

But wait a moment - after rereading more of the post, I realized you are indeed talking about input count rate limitations of proportional counters. Then why bother asking "4 Mcps anyone" in the linked post in the first place? Is there any proof for that far-fetched claim of limitations? I am curious, because I have direct physical proof of the contrary - real oscilloscope measurements/recordings, which directly demonstrate that proportional counters (at least those mounted on Cameca WDS) have no direct paralyzable behaviour going all the way up to 2.5 Mcps. The output of the shaping amplifier gets a bit mingled from being overcrowded at higher count rates, but I have not noticed any paralyzable behaviour of the proportional counter itself. Once the atmospheric influence on pressure and humidity is dealt with, these are incredible devices - with better counting electronics they would leave the performance of SDDs in the dust. Even on an SDD, the detector chip and charge-sensitive preamplifier have no paralyzable behaviour! The amount of X-rays generated on our e-beam instruments is insignificant for producing any real self-inflicted feedback cut-off of pulses on those detectors. It is the EDS pulse counting system which introduces such behaviour, and WDS has no such paralyzable mode inside its counting system (the output count rate increases with increasing beam current and never starts decreasing; OK, it increases at a slower and slower pace at higher beam currents, but the curve never reverses, unless in diff mode).

I suggest reading the story in my previous post to understand the differences; I deliberately included a personification of the EDS system there to demonstrate clearly the functional difference from WDS.

Under normal circumstances, the streamer and Geiger regions are totally out of reach for a high-pressure counter (a self-quenched streamer can introduce some dead time, and Geiger events can certainly block detection for hundreds of ms). A low-pressure counter is a bit of a different story, and I probably would not want to find out, but if the bias on those is not pushed high, the amount of generated X-rays reaching such a counter is just a scratch and won't affect the performance at all. So where did you get this misinformation about the paralyzing behavior of proportional counters?

It would not hurt to get familiar with the hardware before creating math models for the behavior of that hardware; otherwise you have a model far detached from physical reality. Modeling pile-ups from the dead time constant is a poor choice, when it is the pulse-shaping time which decides how small the time difference needs to be for sneaky pile-up peaks to make the pulse-hold chip include them in the amplitude measurement. The dead time does not influence that at all, as it is triggered only after the pulse-hold chip has read in the amplitude.

It is often misunderstood that the dead time in counting electronics is not there for pile-up correction, but mainly to filter out overlapped pulses (which the pulse-hold chip can often still differentiate) for more precise amplitude estimation - which for EDS is crucial. The dead time in WDS, without extendable behavior, gives very little to no benefit (thus you get the PHA shifts but keep the throughput) and is rather an excuse to cut board costs (shared ADC, buses, etc.).

Any model is normally backed by some empirical data, but is that enough to say that such a math model is true? Especially when it fails in some normal and edge cases and breaks very basic laws? A model with numerous anomalies?

Probeman

#24
Quote from: sem-geologist on August 11, 2022, 03:57:30 PM
So at first I just entered my hardware dead time. If it is set to 3 µs, the real dead time can only be a bit larger (due to some signaling delays in the traces between chips - i.e., the pulse-hold chip). If your equation and calibration break that, it simply exposes itself as false (i.e., a dead time of 2.9 µs while the hardware is set to 3 µs) or falsifies what it claims to do...

I've already explained that you are assuming that the hardware calibration is accurate.  It may not be.

Quote from: sem-geologist on August 11, 2022, 03:57:30 PM
As I said, call it something different - for example a "factor". Dead time constants are constants, and constants do not change - that is why they are called "constants" in the first place. You can't calibrate a constant: if its value can be tweaked or influenced by time or setup, then it is not a constant in the first place but a factor or variable.

And once again we fall back to the nomenclature argument.

Clearly it's a constant in the equation, but equally clearly it depends on how the constant is calibrated.  If one assumes that there are zero multiple coincident photons, then one will obtain one constant, but if one does not assume there are zero multiple coincident photons, then one will obtain a different constant. At sufficiently high count rates of course.

Quote from: sem-geologist on August 11, 2022, 03:57:30 PM
But what should I expect from models which completely ignore how the hardware works? You say it is based on empirical data? Maybe you have too small a set of empirical data, and that is why it fails miserably at different set dead times.

Yes, we are using the same equation for both JEOL and Cameca hardware, and the constant k-ratio data from both instruments seem to support this hypothesis.  So if we are correcting for multiple photon coincidence (and that is all we claim to correct for), then the hardware details should not matter.  As for detector behavior at even higher count rates, I cannot comment on that.  But if we have extended our useful quantitative count rates from tens of thousands of cps to hundreds of thousands of cps, then I think that is a useful improvement.

I really think you should consider this carefully, because it is clear to me that you are missing some important concepts.  For example, the traditional dead time expression is non-physical because it does not account for multiple photon coincidence.  Wouldn't it be a good idea to deal with these multiple photon coincidence events, since they are entirely predictable (at sufficiently high count rates and sufficiently large dead time constants)?

Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.
The only stupid question is the one not asked!

sem-geologist

#25
Quote from: Probeman on August 11, 2022, 05:14:40 PM
Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.

You already know my answer from the other post about matrix correction vs. matrix-matched standards, and to repeat it: absolutely, certainly yes! And I agree that both need to be considered: 1) the range of count rates and 2) the range of beam currents, because e.g. PET will have a different pulse density than LPET at the same beam current. In fact I am even stricter: I would add 3) the same spectrometer (the same proportional counter) with different types of XTALs, because the counter gas pressure and take-off angle (including any very small tilt of the sample) are then the same.

Don't get me wrong: I am also very aware that the classical dead time correction equation cannot provide a satisfactory estimate of the input count rates from the observed output count rates, and I absolutely agree that better equations are needed.

This is not a new field; this paper nicely summarizes some previous attempts: https://doi.org/10.1016/j.net.2018.06.014.
...And your proposed equation is just another non-universal approach, as it fails to address the pulse shaping time, although it has the potential to work satisfactorily for EPMA with some restrictions. That is not bad, and it is certainly much better than continuing to use the classical simple form of the correction.

You talk as if I were clueless about coincidences.
Listen, I was into this before it was even cool!
I saw the need to account for coincidences well before you guys started working on it here.

Look, I even made a Monte Carlo simulation to address this more than a year ago and found that it predicts the observed counts very well (albeit it has 0 downloads – no one ever cared to check it out).

Quote from: sem-geologist on April 20, 2021, 06:32:06 PM
The modeled values agree well with the spectrometer count rate capping at different set dead time values.
E.g. with the dead time set to 1 µs the count rate caps at ~300 kcps, while at 3 µs it caps at about 200 kcps, which this modeling predicts very well.

The only thing is that I failed to derive a working equation that takes the raw output counts and returns correctly predicted input counts. My attempts at automatic function fitting to those MC-generated datasets failed (or I am simply really bad at function fitting).
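For readers who want to reproduce the idea, here is a minimal sketch of one way such a Monte Carlo can be set up: Poisson photon arrivals, pulses that merge (pile up) when they land within one shaping time of each other, and a non-extending dead time started by each counted pulse. The 0.5 µs shaping time is an assumed, illustrative value, and this sketch is not my actual simulation, so it will not necessarily reproduce the exact capping values quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_rate(true_rate, dead_time, shaping_time, duration=0.05):
    """Poisson photon arrivals; pulses closer than one shaping time merge
    (pile up) into a single shaped pulse; each counted pulse then starts a
    non-extending dead time in the counting chain."""
    gaps = rng.exponential(1.0 / true_rate, int(true_rate * duration * 1.3) + 100)
    arrivals = np.cumsum(gaps)
    arrivals = arrivals[arrivals < duration]

    counted = 0
    pulse_end = -1.0   # end time of the current shaped pulse
    ready_at = 0.0     # time at which the counting chain accepts the next pulse
    for t in arrivals:
        if t < pulse_end:              # lands on the previous shaped pulse: pile-up
            pulse_end = t + shaping_time
            continue
        pulse_end = t + shaping_time   # a new, separately resolved shaped pulse
        if t >= ready_at:              # counting chain is live: register it
            counted += 1
            ready_at = t + dead_time
    return counted / duration

for tau in (1e-6, 3e-6):
    for n_in in (100e3, 300e3, 1e6, 3e6):
        out = observed_rate(n_in, tau, shaping_time=0.5e-6)
        print(f"tau = {tau*1e6:.0f} us, input {n_in/1e3:5.0f} kcps"
              f" -> output {out/1e3:6.1f} kcps")
```

Varying the set dead time and the shaping time independently in a sketch like this is also a quick way to see which of the two actually controls the pile-up rate.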

At the time I was missing some knowledge, and in particular a few key points which I have gathered since then and which are crucial because they simplify things a lot. I have some hope of finishing this work in the future and constructing a completely transparent, universal equation for proportional counters and their counting chain.

So the mentioned crucial points:

  • There is practically no dead time in the gas-flow proportional counters mounted on WDS – not at the small, insignificant X-ray intensities obtainable even with the highest possible beam currents and the most intense X-ray lines on these electron microprobes (at least for Cameca; I can't speak for JEOL, where things could be a bit different).
  • It is important to recognize where in the pipeline the coincidences appear, because the probability of coincidence depends not on the dead time but on the pulse width, which differs at different points of the pipeline (a few ns in the proportional counter, hundreds or thousands of ns after the pulse shaping amplifier).

So you call it multiple photon coincidence, as if it were happening massively in the detector, while I call it pulse pile-up, which happens not at the photon level but at the electronic signal level.

Have a MEME:


This problem of pulse pile-up is a signal-processing problem, not a problem of the proportional counters. Whether two (or more) photons hit the detector with a time difference of 1 ns or 400 ns (the first would look like a pile-up in the raw voltage on the counter wire, the second like two well-separated pulses) makes no difference at all while the shaping amplifier (SA) is set to a 500 ns shaping time – in both cases your counting electronics will see just a single piled-up peak coming from the SA, unless the time difference between the photon arrivals is larger than that shaping time. Only if the shaping amplifier had a shaping time shorter than half the pulse width of the raw charge-collection signal from the proportional counter would you need to care about photon coincidence itself (but seriously, a shaping amplifier in such a configuration would be not just completely useless but a signal-distorting thing).
So please stop repeating this "multiple photon coincidence" mantra; what you should say is that your log equation addresses the pulse pile-up problem. But it does not simply solve it – along the way it masquerades several phenomena as a single, simply understood dead time constant, blaming the manufacturers and effectively spitting in the face of physics (and of electrical engineering in particular) by indirectly stating that digital clocks drift completely off with time. That is what I am not at all comfortable with.

Look, I salute you for getting more constant k-ratios across multiple count rates/beam currents and for coming up with a very elegant, simple formula. I am genuinely fine with it probably being quickly adopted by many PfS users. But I am completely appalled by this masquerading of the dead time constant, when in reality it is a fusion of three things in disguise: the real dead time constant (a digital-clock-based constant), the probability of pile-ups (which depends on the density of events, i.e. the range of readable raw cps, and can change with the age of the detector wire), and the shaping time (a constant value). You call it "calibration", but what I actually see is "masquerading" and blaming the original, unchangeable dead time constant.

Another thing: without changing the nomenclature of your "tau", you are stating that you can derive the probability of coincidence from this dead time constant alone, which is nonsense, because the probability of pile-up does not depend on the dead time at all. It depends on the shaping amplifier's shaping time and the density of pulses.
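Under the simplest Poisson picture this is easy to quantify: the chance that another photon lands within one shaping time of a given pulse is 1 − exp(−N·ts), and the set dead time never appears in it. A small sketch (the 0.5 µs and 2 µs shaping times are illustrative values, not measured Cameca numbers):

```python
import math

def p_pileup(rate, shaping_time):
    """Poisson arrivals: chance that another photon lands within one shaping
    time of a given pulse, so the two shaped pulses overlap (pile up)."""
    return 1.0 - math.exp(-rate * shaping_time)

for rate in (10e3, 50e3, 100e3, 300e3):          # true pulse densities, cps
    print(f"{rate/1e3:5.0f} kcps: ~{p_pileup(rate, 0.5e-6):.2%} at 0.5 us shaping, "
          f"~{p_pileup(rate, 2.0e-6):.2%} at 2 us shaping")
```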

Why am I so outraged by the wrong nomenclature? Yes is yes, no is no, an apple is an apple, an orange is an orange. We keep science in a maintainable state (sometimes I start to doubt that, especially seeing things like this) by calling things what they are. We normally try to speak the language of reason. If you suddenly start renaming things, or masquerade something else under the skin of previously well-known and well-defined nomenclature, that is no longer science – that is basically modern politics.
The problem I foresee is that someone in the future will waste time because they read between the lines of your paper that dead time constants can change with aging. Someone trying to improve X-ray counting even further who stumbles across your equation will have countless sleepless nights trying to understand what the heck is going on.

Some will pick up this nonsense (of a fluid dead time constant) and, without giving it deeper thought, will try applying it in other metrological fields. I have seen countless times how errors spread like fire through dry grass, while brilliant, carefully thought-out, precise and accurate work stays somewhere on the fringes. Sometimes the authors come to their senses and try to correct their initial error later, but guess what – by that point often absolutely no one cares, and the initial error keeps spreading.

If you changed tau in the equation to some other letter (I have already proposed "p" (for pile-ups); I also think "f" (for "fused") or "c" (for "coincidence") would fit) and explained: "Look folks, we came up with this elegant equation; we ditch the hitherto-used strict, deterministic dead time constant (we no longer care about it in this method – we found we can do without that knowledge) and instead find a fused factor which combines the dead time, the shaping time and the pile-up probabilities into a single neat value by a simple method. How cool is that? (BTW, we even managed to convince sem-geologist to agree that that actually is pretty cool!) The outstanding feature of our new method is that we can very easily calibrate this combined factor without even knowing the dead time and shaping time of the electronic components – you won't need to open the black box to calibrate it!" – if it were something like that, I would say "Godspeed with your publication!" But it is not.

BTW, I agree that with the current state of the counting electronics it is not very wise to wander off to very high beam currents. I am actually stricter about it and would not go above ~100 kcps (high-pressure counter) or ~80 kcps (low-pressure counter) of raw counts in differential mode. In integral mode, however, there should be no problem going up to 1 Mcps (input) with a proper equation, which corresponds to about 305 kcps of raw counts at the 1 µs setting and about 195 kcps at the 3 µs setting on Cameca instruments. Beyond that point the uncertainty starts to outgrow the precision benefit of the huge number of counts.

Probeman

Quote from: sem-geologist on August 12, 2022, 04:12:02 AM
(BTW, we even managed to convince sem-geologist to agree that that actually is pretty cool!).

SG: Great post, I very much enjoyed reading it.  Here is a response that I think you will also agree is "pretty cool": 

https://smf.probesoftware.com/index.php?topic=1466.msg11100#msg11100

:)
The only stupid question is the one not asked!

Brian Joy

Quote from: sem-geologist on August 11, 2022, 08:08:21 AM
So, all your formulas (Brian, yours too) try to calculate something while excluding a crucial value (a variable in the sense of EDS systems, a static value in WDS systems): the shaping time of the shaping amplifier, on which the result strongly depends and which is completely independent of the set(table) dead time. Make the formulas work with the pre-set, known and declared dead time constants of the instrument (physically measured and precisely tweaked with an oscilloscope at the factory by real engineers); please stop treating the instrument manufacturers like some kind of Nigerian scam gang – that is not serious. If these formulas are left for publication as is, please consider replacing the tau with some other letter and please stop calling it the dead-time constant, as in its current form it is not that at all. I propose calling it "p" – the factor of sneaked-in pile-ups at fixed dead time. I would also be interested in your formula if tau were replaced with the product of the real tau (the settable, OEM-declared dead time) and an experimentally determinable "p" – the factor of sneaked-in pile-ups, which depends on the shaping time. That, I think, would be reasonable, because not everyone is able to open the covers, note the relevant chips and electronic components, find and read the corresponding datasheets to get the pulse shaping time correctly, or measure it directly with an oscilloscope (which, indeed, is not so hard to do on Cameca SX probes).

I get rather weary of your armchair criticisms.  You pontificate on various subjects, yet you provide no data or models.  If you have data or a model that sheds light on the correction for dead time or pulse pileup, then please present it.  A meme is not an adequate substitute.

On the instrument that I operate (JEOL JXA-8230), the required correction to the measured count rate is in the vicinity of 5% at 30 kcps.  Under these conditions, how significant is pulse pileup likely to be?  I have never advocated for application of the correction equation of Ruark and Brammer (1937) outside the region in which calculated ratios plot as an ostensibly linear function of measured count rate.
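As a rough, back-of-the-envelope answer to that question (the 0.5 µs shaping/resolving time below is an assumption for illustration, not a measured value for this instrument):

```python
import math

n_true = 30e3        # approximate true count rate, cps
tau = 1.5e-6         # dead time constant of the order used in this thread, s
t_shaping = 0.5e-6   # assumed shaping (resolving) time -- illustrative only

dead_time_loss = n_true * tau / (1.0 + n_true * tau)     # non-extending dead time loss
pileup_fraction = 1.0 - math.exp(-n_true * t_shaping)    # pulses with a companion within t_shaping

print(f"dead-time correction scale ~ {dead_time_loss:.1%}")   # ~4-5 %, consistent with the text
print(f"pile-up fraction scale     ~ {pileup_fraction:.1%}")  # ~1-2 % under this assumption
```

Under these assumptions, pileup at 30 kcps is an effect of order one percent, compared with the ~5% total correction.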
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Brian Joy

#28
It's clear that the linear correction equation, N(N'), of Ruark and Brammer (1937, Physical Review 52:322-324, eq. 2) as used by Heinrich et al. (1966) and many others is not perfectly accurate – I never claimed that it was.  It fails badly at high count rates and actually begins deviating from reality at any count rate greater than zero, yet it still provides a very reasonable approximation of the corrected count rate up to several tens of kcps.  The equation is pragmatically simple, and this is adequate justification for its use.  It does not need to be perfectly accurate to provide valuable results.  This is also true, for instance, when applying any of the modern matrix correction models.
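For orientation, here is the size of the correction the linear expression itself calls for as a function of measured count rate (τ = 1.5 µs is an illustrative value of the order determined in this thread); the factor grows rapidly and diverges as N' approaches 1/τ, which is one way to see why the expression is only trusted at the lower count rates:

```python
def linear_correction_factor(n_meas, tau):
    """Ruark & Brammer: N = N'/(1 - N'*tau); returns the ratio N/N'."""
    return 1.0 / (1.0 - n_meas * tau)

tau = 1.5e-6   # illustrative dead time constant, s
for n_meas in (10e3, 30e3, 50e3, 100e3, 300e3, 600e3):   # measured rates, cps
    f = linear_correction_factor(n_meas, tau)
    print(f"N' = {n_meas/1e3:5.0f} kcps -> N/N' = {f:.3f}  ({f - 1.0:+.1%})")
```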

In order to determine the dead time constant using the linear model, a linear regression must be performed using data (ratios) that fall within the region of effectively linear correction.  The dead time constant may not be adjusted arbitrarily as was done in construction of the plot in the quote below.  When such an approach is taken, i.e., when it is applied improperly, the linear model can be made to behave poorly.  Notice the minimum that forms at about 60 nA (at whatever corrected count rate that produces):

Quote from: Probeman on August 12, 2022, 10:43:54 AM
So we saw in the previous post that the traditional dead time correction expression works pretty well in the plot above at the lowest beam currents, but starts to break down at around 40 to 60 nA, which on Ti metal is around 100K (actual) cps.

If we increase the dead time constant in an attempt to compensate for these high count rates we see this:



Once the dead time constant is determined by means of the linear equation within the region of effectively linear behavior, it sets a physical limit that cannot be violated either in the positive or negative direction on the plot.  The statement that the dead time constant needs to be adjusted to suit a particular model is patently false.  It is not a model-dependent quantity but instead constitutes an intrinsic property of the X-ray counter and pulse processing electronics.

Let me present once again a plot of ratios (uncorrected and corrected) calculated from data collected in my measurement set 1 for Si.  In addition to the data plotted as a function of measured count rate, I've also plotted the linear correction model, the Willis model, and the Donovan et al. six-term model (as functions of corrected count rate).  For the Donovan et al. model, I've also plotted (as a line) the result that is obtained by decreasing the dead time constant to 1.36 μs from 1.44 μs (the latter being the result from the linear regression), which appears to force the model to produce a reasonably accurate correction over a wide range of count rates:



In order to examine behavior of the Donovan et al. six-term model more closely, I'll define a new plotting variable to which I'll simply refer as "delta."  For my measurement set 1, it looks like this:

Δ(N11/N21) = (N11/N21)6-term – (N11/N21)linear

Since formation of the difference requires evaluation of the linear correction equation, the delta value should generally be applied within the region of the plot in which the uncorrected ratio, N'11/N'21, plots as an ostensibly linear function of the uncorrected count rate.  Below is a plot of delta for various values of the dead time constant within the linear region:



Realistically, delta values should in fact always be positive relative to the linear correction.  This is the case if the dead time constant from the linear regression (1.44 μs) is used.  Unfortunately, use of this dead time constant leads to over-correction via the six-term model.  If instead the function is allowed to pivot downward by decreasing the value of the dead time constant so as to fit the dataset better as a whole, then delta must become negative immediately as count rate increases from zero.  This is not physically reasonable, as it would imply that fewer interactions occur between X-rays as the count rate increases and that a minimum is present – in this case – at ~40 kcps.  Note that, on the previous plot, which extends to higher count rate, the Donovan et al. model is seen to produce a corrected ratio that eventually passes through a maximum.
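A minimal sketch of how such a delta curve can be generated is given below. The exact form of the six-term expression is assumed here to be the first six terms of the series whose infinite sum gives the logarithmic expression, and the Si Kα/Kβ count-rate ratio of ~18 is an illustrative value; the linear ratio is evaluated at the regression value of 1.44 µs while τ is varied in the six-term model, as in the plots above:

```python
import numpy as np

def corr_linear(nm, tau):
    """Ruark & Brammer: N = N'/(1 - N'*tau)."""
    return nm / (1.0 - nm * tau)

def corr_six_term(nm, tau):
    """Assumed six-term form: N = N'/(1 - sum_{n=1..6} (N'*tau)^n / n)."""
    x = nm * tau
    return nm / (1.0 - sum(x**n / n for n in range(1, 7)))

tau_lin = 1.44e-6                       # linear-regression dead time for the Si Ka channel
n_ka = np.linspace(1e3, 100e3, 50)      # measured Si Ka count rates, cps
n_kb = n_ka / 18.0                      # Si Kb rates, assuming a ~18:1 ratio (illustrative)

ratio_lin = corr_linear(n_ka, tau_lin) / corr_linear(n_kb, tau_lin)
for tau6 in (1.36e-6, 1.40e-6, 1.44e-6):       # dead times tried in the six-term model
    ratio_6 = corr_six_term(n_ka, tau6) / corr_six_term(n_kb, tau6)
    delta = ratio_6 - ratio_lin
    i40 = np.argmin(np.abs(n_ka - 40e3))
    print(f"tau6 = {tau6*1e6:.2f} us: delta at ~40 kcps = {delta[i40]:+.3f}, "
          f"delta at ~100 kcps = {delta[-1]:+.3f}")
```

With equal dead times the sketch gives a delta that is positive and grows with count rate; lowering τ in the six-term model pushes delta negative near zero and produces the minimum described above.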

I'll now turn my attention to my measurement set 2, in which problems with the six-term model are much more obvious.  As I did when I posted my results initially, I've plotted results of the six-term model using the dead time constant determined in the linear regression (1.07 μs) and also using a dead time constant (1.19 μs) that allows the six-term model to fit the data better overall:



I'll now define a new "delta" for measurement set 2 as

Δ(N12/N22) = (N12/N22)6-term – (N12/N22)linear

Here is the plot of delta for various values of the dead time constant:



On this plot, a negative deviation indicates over-correction of the channel 4 Si Kα count rate.  Note that varying the dead time constant for Si Kβ measured on channel 1 will produce little change in the plot, as the Si Kβ count rate requires only minimal correction.  Even if I use the dead time constant determined in the linear regression, the Donovan et al. model over-predicts the correction.  The situation gets worse as the dead time constant is adjusted so that the model fits the data better.  For τ2 = 1.19 μs, the six-term model intersects the 2σ envelope at Si Kβ count rate roughly equal to 2200 cps, which corresponds to measured Si Kα count rate around 60 kcps.  Obviously, the six-term model does not correct the data accurately.

As a final note, consider the plot in the quote below.  It illustrates a correction performed with the Donovan et al. model obtained by varying the dead time constant arbitrarily.  Although it is clouded somewhat by counting error, it is difficult to escape noticing that a minimum is present on the plot (as I've noted elsewhere).  As I've shown above, the presence of such a feature is a likely indication of failure of the model, with pronounced error possibly present even in the region of low count rates, in which region the linear model displays superior performance:

Quote from: Probeman on August 12, 2022, 10:43:54 AM
So, we simply adjust our dead time constant to obtain a *constant* k-ratio, because as we all already know, we *should* obtain the same k-ratios as a function of beam current!  So let's drop it from 1.32 usec to 1.28 usec. Not a big change, but at these count rates the DT constant value is very sensitive:



In conclusion, I urge people in the strongest possible terms not to use the model of Donovan et al.  Not only does it produce physically unrealistic corrections at all count rates, but it is substantially out-performed by the linear model of Ruark and Brammer (1937) at the relatively low count rates (no greater than several tens of kcps) at which the linear model is applicable.
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

Quote from: Brian Joy on August 14, 2022, 07:47:25 PM
It's clear that the linear correction equation, N(N'), of Ruark and Brammer (1937, Physical Review 52:322-324, eq. 2) as used by Heinrich et al. (1966) and many others is not perfectly accurate – I never claimed that it was.  It fails badly at high count rates and actually begins deviating from reality at any count rate greater than zero, yet it still provides a very reasonable approximation of the corrected count rate up to several tens of kcps.  The equation is pragmatically simple, and this is adequate justification for its use.  It does not need to be perfectly accurate to provide valuable results.

Wow, really?  So you're going down with the ship, hey?   :P

Let me get this straight: you're saying that although the logarithmic expression is more accurate over a larger range of count rates, we should instead limit ourselves to the traditional expression because it's, what did you say, "pragmatically simple"?   ::)

Yeah, the geocentric model was "pragmatically simple" too!   ;D

Quote from: Brian Joy on August 14, 2022, 07:47:25 PM
In conclusion, I urge people in the strongest possible terms not to use the model of Donovan et al.  Not only does it produce physically unrealistic corrections at all count rates, but it is substantially out-performed by the linear model of Ruark and Brammer (1937) at the relatively low count rates (no greater than several tens of kcps) at which the linear model is applicable.

I really don't think you understand what the phrase "physically unrealistic" means!  Accounting for multiple photon coincidence is "physically unrealistic"?

As for saying the logarithmic expression "is substantially out-performed by the linear model... at... relatively low count rates", well the data simply does not support your claim at all.

Here is a plot of the Ti Ka k-ratios for the traditional (linear) expression at 1.32 usec and the logarithmic expression at both 1.28 and 1.32 usec, plotting only k-ratios produced from 10 to 40 nA.  You know, the region where you say the logarithmic model is "substantially out-performed"?



First of all, what everyone (except you, apparently) will notice is that the k-ratios measured at 10 nA and 20 nA are statistically identical for all the expressions!

Next, what everyone (except you, apparently) will also notice is that although the linear model at 1.32 usec (red symbols) and the logarithmic model at 1.28 usec (green symbols) are statistically identical at 10 nA and 20 nA, the wheels start to come off the linear model (red symbols) beginning around 30 and 40 nA.

You know, at the typical beam currents we use for quantitative analysis! So much for "is substantially out-performed by the linear model... at... relatively low count rates"!   :o

As for your declaration by fiat that we cannot adjust the dead time constant to suit an expression that actually corrects for multiple photon coincidence, it's just ridiculous.  You do realize (or maybe you don't) that multiple photon coincidence occurs at all count rates, to varying degrees depending on the count rate and the dead time value.  So an expression that actually corrects for these events (at all count rates) improves accuracy under all conditions.  Apparently you're against better accuracy in science...   ::)
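For scale, here is the simple Poisson arithmetic behind that statement, taking the coincidence window to be the dead time τ (the τ and rates below are illustrative values only):

```python
import math

def p_at_least_k_more(rate, window, k):
    """Poisson arrivals: chance that at least k further photons arrive
    within the given window after a photon is detected."""
    lam = rate * window
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

tau = 1.3e-6   # illustrative dead time constant, s
for rate in (10e3, 50e3, 100e3, 200e3, 400e3):        # true count rates, cps
    p1 = p_at_least_k_more(rate, tau, 1)   # at least one coincident photon
    p2 = p_at_least_k_more(rate, tau, 2)   # "multiple" coincidence (two or more)
    print(f"{rate/1e3:5.0f} kcps: P(>=1) = {p1:6.2%}, P(>=2) = {p2:6.3%}")
```

The probabilities are nonzero at every count rate and grow quickly with it, which is the point being made here.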

Now here's the full plot showing k-ratios from 10 nA to 140 nA.



Now maybe you want to arbitrarily limit your quantitative analyses to 10 or 20 nA, but I think most analysts would prefer the additional flexibility in their analytical setups, especially when performing major, minor and trace element analysis at the same time at moderate to high beam currents, while producing statistically identical results at low beam currents.

I think we should have choices in our scientific models. We have 10 different matrix corrections in Probe for EPMA, so why not 4 dead time correction models?  No scientific model is perfect, but some are better and some are worse.  The data shows us which ones are which.  At least to those who aren't blind to progress.
The only stupid question is the one not asked!