New method for calibration of dead times (and picoammeter)

Started by Probeman, May 26, 2022, 09:50:09 AM


sem-geologist

#135
Quote from: Probeman on December 13, 2022, 03:58:47 PM
I am not sure about a "similar result".

Hmm... After looking once again, I finally spotted the crucial difference between our simulations. Mine includes a rudimentary correction for the additional pulse length (remember, a pulse can be sensed only while rising up from 0 V). It is "rudimentary" because it adds a fixed 1 µs to the hardware dead time, which lets my MC simulation arrive at count rates similar to the physical observations at high count rates (I am talking about >1 Mcps and >800 nA currents).

OK, what am I babbling about here? Maybe some illustrations will clear things up.

It is the pulse sensing which misses pulses and messes up the measurements. The detector, pre-amplifier and shaping amplifier forward all the information to the gain, pulse-sensing and PHA electronics (basically the detector itself produces no dead time). Yes, the shaping amplifier tends to convolute the information a bit (although no information is lost, and it can be deconvoluted with the right signal processing techniques), and that prevents the oversimplified pulse-sensing circuit from correctly recognizing every pulse as the pulse density increases.

So, at least on Cameca hardware, the gain electronics invert the signal (pulses are negative). There are different designs using a sample-and-hold together with a comparator for pulse sensing, and it looks like Cameca used the worst possible one (I see some reason why they do it that way - they kind of shoot two foxes with a single bullet: it does the pulse sensing and it keeps noise from being detected as a pulse... but when there are lots of "foxes", this shooting blows the hunter's own foot off :) ). Three elements work together to sense a pulse:
1) a comparator, which compares two analog signals;
2) a sample-and-hold chip, which has two modes: in the first mode it passes a slightly delayed copy of the analog input (the gain-amplified signal); the second mode is triggered when its S/H pin is set HIGH - it takes a sample (which takes a short time) and holds it (outputting it) for as long as the S/H pin is kept HIGH;
3) a D flip-flop, the component which glues together the feedback between the comparator, the FPGA (or microcontroller) and the sample-and-hold chip.

The comparator's output goes to the clock input of the D flip-flop. This kind of flip-flop reacts only to the rising edge of the comparator output (as a clock input, it ignores the falling edge). An additional constraint is that it reacts to such a rising edge only if its state is cleared, that is, when the flip-flop's CLR pin has been triggered and D is set HIGH. The rising clock edge then sets its output Q HIGH, and that state is forwarded to two devices: to the FPGA, where it increments the integral-mode counter by one, and to the sample-and-hold chip, where it holds the S/H pin HIGH so that the S/H chip stops tracking the input and instead samples and holds the amplitude of the pulse. The FPGA then pulls CLR and D LOW, so the flip-flop is made inactive and does not react to any further clock input from the comparator. It does that for the selectable amount of hardware-enforced dead time (which on Cameca is an integer number of µs). The comparator compares the gain-amplified, inverted signal (the main input of this pulse-sensing circuit) with the output of the sample-and-hold chip (which takes the inverted, gain-amplified signal and inverts it once more, so its output is upright); the comparator output is set HIGH if the output of the S/H chip is higher than the inverted main input (some comparators require a threshold difference to switch their output state).

The illustrations below show these events on a unit-less time line. Each contains two parallel time axes: on top, the two analog inputs of the comparator; below, the digital (TTL) output of the comparator corresponding to the relation between those two analog inputs. First, let's look at the worst-case scenario:

The dashed blue line here marks how the signal would look if S/H were not pulled HIGH. The TTL logic level depends on the generation of the electronics: in the old days it was 5 V (TTL), newer designs moved to (LV)TTL, that is 3.3 V.

The crucial thing is that the FPGA unblocks the flip-flop after the set enforced hardware dead time times out, but in some cases the comparator is still held in the HIGH state, prolonging the effective blocking of the sensing of any new pulses. This creates additional hardware dead time which depends on two things. First, the pulse length on the positive side (measured at 0 V, not at half amplitude), which for Cameca pulses is about 1 µs; in the worst-case scenario presented above that adds about 1 µs. Second, the probability of such a pulse appearing in a position overlapping the end of the enforced dead time - and that depends on the count rate.

The best-case scenario is that there is no other pulse too close to the moment the enforced dead time is lifted, and the comparator can be put to the LOW state with only a very small delay after the hardware-enforced dead time ends. I emphasize "small delay" - it will not be 0, and it will depend on two factors: 1) the amplitude being held (the higher the held amplitude, the longer the way down to 0 V, so the bigger the delay); 2) how fast the output can recover to tracking the input after the S/H pin is lowered. So, indeed, this delay will depend on GAIN and HV BIAS, but since it is small, and its effects statistically interfere only over some range of count rates, no straightforward linkage was ever found.

The best case scenario:


And then at medium count rates there will be lots of intermediate situations:

Here, as we see, the pulse which prolongs the dead time arrived during the enforced dead time; its long pulse duration, however, extends past that enforced time and prolongs the dead time afterwards.

As can be seen in the pictures above, this additional dead time depends on the pulse density and the pulse length. Were the S/H chip, comparator and flip-flop connected in a better way, the time span of the pulse would have no influence on the result.

So my initial model adds +1 µs to the dead time, and it reproduces the observable count rate at very high count rates (>1 Mcps); it probably (I need to check) underestimates the observable counts in the mid-range. I am rewriting that part to address that.

Edit: just to correct my claim about 1 µs - the pulse length measured at 0 V is actually not 1 µs but 1.2 µs.
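
A minimal sketch of the idea in those figures (purely illustrative, not the code of my actual model): a Poisson stream of pulses, a 3 µs enforced dead time and a 1.2 µs positive pulse lobe, where any pulse arriving inside the enforced window whose tail outlives it extends the blocked window. The function name and all parameter values are just example assumptions.

import numpy as np

rng = np.random.default_rng(1)

def counted_rate(true_rate_cps, tau_enforced=3.0, pulse_len=1.2, duration_us=2e5):
    """Toy counter: enforced dead time (µs) that a late-arriving pulse's
    1.2 µs positive lobe can stretch past its nominal end."""
    n = rng.poisson(true_rate_cps * 1e-6 * duration_us)
    arrivals = np.sort(rng.uniform(0.0, duration_us, n))
    counted, blocked_until = 0, -1.0
    for t in arrivals:
        if t >= blocked_until:
            counted += 1
            blocked_until = t + tau_enforced
        elif t + pulse_len > blocked_until:
            # pulse arrives while blocked, but its tail keeps the comparator busy longer
            blocked_until = t + pulse_len
    return counted / (duration_us * 1e-6)   # back to counts per second

for cps in (50e3, 500e3, 2e6):
    print(f"{cps/1e3:7.0f} kcps in -> {counted_rate(cps)/1e3:6.1f} kcps counted")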

Probeman

Quote from: sem-geologist on December 14, 2022, 04:41:09 AM
Hmm... After looking once again, I finally spotted the crucial difference between our simulations. Mine includes a rudimentary correction for the additional pulse length (remember, a pulse can be sensed only while rising up from 0 V). It is "rudimentary" because it adds a fixed 1 µs to the hardware dead time, which lets my MC simulation arrive at count rates similar to the physical observations at high count rates (I am talking about >1 Mcps and >800 nA currents).

OK, what am I babbling about here? Maybe some illustrations will clear things up.

It is the pulse sensing which misses pulses and messes up the measurements. The detector, pre-amplifier and shaping amplifier forward all the information to the gain, pulse-sensing and PHA electronics (basically the detector itself produces no dead time). Yes, the shaping amplifier tends to convolute the information a bit (although no information is lost, and it can be deconvoluted with the right signal processing techniques), and that prevents the oversimplified pulse-sensing circuit from correctly recognizing every pulse as the pulse density increases.

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://smf.probesoftware.com/index.php?topic=1466.msg11391#msg11391

and here:

https://smf.probesoftware.com/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?
The only stupid question is the one not asked!

sem-geologist

#137
Quote from: Probeman on December 14, 2022, 08:12:21 AM

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://smf.probesoftware.com/index.php?topic=1466.msg11391#msg11391

and here:

https://smf.probesoftware.com/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?

No. I thought I had already answered that rectilinearity of pulses is not there (it is like wishing that apples were square - of course, from the point of view of transporting apples to market, square apples would make more sense, less space wasted in logistics; however, Nature had a slightly different idea of how it should work...) and that it does not solve anything. I am starting to feel a bit worn down arguing about this rectilinearity stuff again. Maybe being a non-native English speaker stands like a huge wall preventing me from communicating my knowledge properly and efficiently in words and sentences... So I retreat to drawing pictures. Maybe that way I can show how pointless it (this rectilinearity) is. Here are those three figures remade with rectilinear-like shapes, presented in the same sequence.
Worst case scenario:


Best case scenario:


mid-case scenario:


So does rectilinearity solve anything? The outcome (the comparator output) looks exactly identical, so it does not change anything.

So my message is that in some pulse-sensing designs (i.e. Cameca's) the pulse length (the duration of the pulse) is also crucial for correct dead time correction, and that is included in my MC in an exaggerated way (the worst-case scenario is applied to every pulse, adding +1 µs to the hardware-enforced dead time in every case). That is wrong (yes, my MC is wrong too :) haha) as it overestimates the dead time in the mid-count-rate range (10 kcps - 500 kcps, I guess), but it lets me more correctly predict the ceiling of the count rate which a Cameca probe can spit out (the raw count rate). Aurelien's model supposes that a pulse lasts only 10 ns and that the pulse has no consequences afterwards. That is unrealistic on many levels, as the raw Townsend avalanche on its own takes about 200 ns; the shaping amplifier convolutes its duration to roughly 2 µs, and the following differentiation shortens it to about 1.2 µs. Aurelien's MC comes to the same values as the equation because both of them ignore the pulse duration and how it affects the counting.
Try extending Aurelien's MC to a 2 Mcps input count rate and you will see it produces physically unachievable values.
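
Just as a back-of-the-envelope illustration of that ceiling (assuming a simple non-paralyzable model and a total dead time of ~3.2 µs, i.e. 2 µs enforced plus the ~1.2 µs pulse length; these are example values, not a claim about the exact hardware numbers), the counted rate can never exceed 1/τ:

tau = 3.2e-6  # assumed total dead time in s (enforced + pulse length)
for n_true in (5e5, 1e6, 2e6):
    n_obs = n_true / (1 + n_true * tau)  # non-paralyzable (blocking) model
    print(f"{n_true/1e3:6.0f} kcps true -> {n_obs/1e3:5.0f} kcps counted (ceiling {1/tau/1e3:.0f} kcps)")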

My message is that dead time is not a detector problem but a pulse counting (and sensing) problem, and both the correction equation and the MC modeling need to know exactly how the counting hardware works and how it can fail to sense pulses. Were the counting design different, the equation could be different. For example, I know a different type of pulse sensing with a comparator and S/H chip (which, some time ago, I naively and wishfully thought Cameca uses). Such a design would have no dynamic addition of dead time that depends on the count rate itself. The key difference is that both comparator inputs are upright, and it uses one of the S/H chip's best features - the delay of the signal:


Such a design has its weak points as well; however, its "additional dead time" would be much less dynamic than in the current Cameca pulse-sensing design - and in that way the equation and Aurelien's MC would actually be more applicable to the real hardware. Additionally, a PHA shift would not make pulses undetectable, as they would still be sensed. The integral mode of such a design would be the real, true integral mode.

And then there is the alternative of an FPGA-based real-time deconvolution design - which would render this whole effort pointless...

Probeman

#138
Quote from: sem-geologist on December 14, 2022, 09:54:28 AM
Quote from: Probeman on December 14, 2022, 08:12:21 AM

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://smf.probesoftware.com/index.php?topic=1466.msg11391#msg11391

and here:

https://smf.probesoftware.com/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?

No. I thought I had already answered that rectilinearity of pulses is not there (it is like wishing that apples were square - of course, from the point of view of transport management that would make more sense, less space wasted in logistics, but Nature had a slightly different idea of how it should work) and that it does not solve anything. I am starting to feel a bit worn down arguing about this rectilinearity stuff again. Maybe being a non-native English speaker stands like a huge wall preventing me from communicating my knowledge properly and efficiently in words and sentences...

Yes, I think we have a language problem because you are missing the point I am trying to make.  Let me try again: We are not claiming that the pulses in our detection systems are rectilinear.

Instead, we are saying that *if* the pulses were rectilinear *or* if they are spaced far enough apart in time that it doesn't matter what shape they are, then that might be the reason why, at low count rates, the traditional expression appears to properly correct for single *and* multiple photon coincidence in these systems. That is what Aurelien's Monte Carlo modeling appears to support, since he does assume rectilinear pulse shapes, and when he does, the response of the system is completely linear.

But at high count rates where the pulses start to overlap with each other, as you point out correctly I think, the actual non-rectilinear shape of these pulses appears to cause a non-linear response of the system, by creating an "effective" dead time with a larger value, as you showed in your previous post.  Hence the reason the logarithmic expression seems to perform better at these high count rates, when these non-rectilinear pulses begin to overlap in time.
The only stupid question is the one not asked!

sem-geologist

#139
It is the pulse duration, not its shape, which matters. Look again at the figures: rectilinear pulses (of the same duration) would give exactly the same outcome for this kind of detection system (Cameca pulse sensing). It is not so important what the pulse shape is, but how the pulse is sensed. Aurelien's MC and the classical equation could be partly applicable to the alternative system proposed in my last post (which is actually the well-known, classical pulse sensing, not some unnecessary invention of Cameca; I somewhat suspect JEOL probably uses that classical type, and that is why it is less affected at higher count rates). The way Cameca senses the pulses introduces additional, dynamic dead time. It is not the rectangular shape in Aurelien's MC which is the problem. Actually, as far as I understood from Aurelien's reply and from looking at his VBA code, there is no shape modeling at all; the rectangularity of the pulses is a moot point. The problem is the unrealistically short pulse duration of only 10 ns or even less, and the assumption that pulses hitting the detector while it is "dead" have absolutely no consequences afterwards. The problem is treating the event as having the same length as the MC time resolution.

As for the equation, it does a really poor job with coincidences, as it overestimates the count rate at very low rates and underestimates it at high rates. Because of these contradictions, at mid-rates it more or less fits the observations, as the contradictory behaviours compensate each other, and thus it is accepted. Your log function is much better, as it overestimates the count rate less at low rates and underestimates it less at high rates.
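
For what it's worth, here is a tiny sketch of how far the two corrections diverge at the same raw count rate. I am assuming the traditional expression N_true = N_raw/(1 - N_raw*tau) and, for the logarithmic expression discussed in this topic, the form N_true = N_raw/(1 + ln(1 - N_raw*tau)); if the actual published form differs, read this only as an illustration of the growing divergence at high count rates (tau here is an arbitrary 1.5 µs):

import math

tau = 1.5e-6  # illustrative dead time, s
for n_raw in (5e4, 1e5, 2e5, 3e5):
    trad = n_raw / (1.0 - n_raw * tau)                   # traditional expression
    log_ = n_raw / (1.0 + math.log(1.0 - n_raw * tau))   # assumed logarithmic form
    print(f"{n_raw/1e3:6.0f} kcps raw -> traditional {trad/1e3:7.1f} kcps, log {log_/1e3:7.1f} kcps")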

Probeman

#140
Quote from: sem-geologist on December 14, 2022, 10:28:19 AM
It is the pulse duration, not its shape, which matters. Look again at the figures: rectilinear pulses (of the same duration) would give exactly the same outcome for this kind of detection system (Cameca pulse sensing). It is not so important what the pulse shape is, but how the pulse is sensed. Aurelien's MC and the classical equation could be partly applicable to the alternative system proposed in my last post (which is actually the well-known, classical pulse sensing, not some unnecessary invention of Cameca; I somewhat suspect JEOL probably uses that classical type, and that is why it is less affected at higher count rates). The way Cameca senses the pulses introduces additional, dynamic dead time.

So you are saying that the reason the logarithmic expression performs better at high count rates is that the way the Cameca system senses the pulses introduces additional dead time?  So why do you think the JEOL system also performs better using the logarithmic expression?

Quote from: sem-geologist on December 14, 2022, 10:28:19 AM
It is not the rectangular shape in Aurelien's MC which is the problem. Actually, as far as I understood from Aurelien's reply and from looking at his VBA code, there is no shape modeling at all. The problem is the unrealistically short pulse duration of only 10 ns or even less, and the assumption that pulses hitting the detector while it is "dead" have absolutely no consequences afterwards.

I think the reason he's using such a short input pulse is that I thought we all agreed the output from the detector itself is very short, and that the primary formation of dead time is a result of the pulse processing system, whether it is due to pulse shape or, as you say, pulse sensing.
The only stupid question is the one not asked!

sem-geologist

#141
Quote from: Probeman on December 14, 2022, 10:37:01 AM
I think he's using such a short input pulse because I thought we all agreed the output from the detector itself is very short, and that the primary formation of dead time is a result of the pulse processing system, whether it is due to pulse shape or, as you say, pulse sensing.

Pulse shape is not pulse sensing. The pulse shape is produced by the shaping amplifier - an analog device that works continuously, without any hiccup or interruption. Again, typical measured Townsend events in proportional counters have a duration of about 200 ns - neither 5 ns nor 3 ns, but about 200 ns (more or less). That is why Cameca chose the A203 IC with a fixed shaping time constant of 250 ns, as that is the optimal shaping time for proportional-counter signals (it fully integrates all the collected charge, so there is no problem of incomplete charge collection). That 250 ns is a 1-sigma value, so the generated pulse duration is about 2 µs near its base (not at FWHM); actually, due to the asymmetric shape, it is more like 4 µs, and after differentiation it is shortened to ~1 µs (measuring only the positive side of the bipolar pulse). That is far from 1 or 10 ns.

"Sensing" is making digital sense of train of such pulses (it happens on different board dedicated for that) - recognizing and sending the digital impulse(s) that there was a pulse spotted in the continues analog signal incoming from the detector (From the shapping amplifier to be more precise), so that digital counter of pulses could be increased by 1, and also measurement of pulse amplitude be triggered. This sensing interrupts itself and thus the dead time is generated here.

I also got the feeling (while reading Aurelien's VBA code) that Aurelien's MC generalizes dead time as being a detector property.
So I want to point out some crucial facts.
The gas-flow proportional counter (and the sealed one as well) itself has no dead time (in contrast to G-M counters) because:
1) it has a quenching gas in the mixture (which prevents the avalanche from spreading uncontrollably through a loop of secondary ionisation and UV propagation);
2) the Townsend avalanche happens in a limited space and on a very limited fraction of the wire (it does not take up the whole wire, whereas in a G-M tube it spreads along the whole wire);
3) the voltage drop on the wire is insignificant compared to the bias voltage (which is why we need a charge-sensitive preamplifier to sense it at all; in a G-M tube the voltage drop can be so big that it lowers the bias enough to interrupt/terminate the ongoing avalanche on its own).

Looking further down the pipeline, the charge-sensitive preamplifier and the shaping amplifier are analog devices working continuously. The gain amplifiers are also analog devices (setting the gain is digital, but the amplification itself is analog and continuous, without interruptions). The interruptions appear only in the pulse-sensing circuits, because this is where the analog signal has to cross into the digital domain, and the digital domain is not so continuous.

I can only guess how JEOL does things, as I have no hardware access to those machines. As I said, the classical pulse sensing (comparing the incoming and delayed signals of the same polarity) also has its weaknesses, which would show up with increasing count rate (it would have race conditions where the comparator sends a rising edge to the D flip-flop before the flip-flop has been reset) - however, such conditions would be rare with a shorter enforced dead time.

Aurelien Moy

#142
I have been playing with Sem-geologist's Python code and it indeed leads to the same results as my MC code when using the same parameters: same aimed count rate, same total dead time and same time interval (1µs in the Python code).

I also replaced the function generating random pulses, "detector_raw_model", with the NumPy built-in Poisson distribution (just to see how they compared). I did that by replacing it with:
import numpy as np

count_rate = 500000   # aimed count rate in c/s
delta_t = 1e-6        # simulation time interval, here 1 µs
# number of photons arriving in each of the 1,000,000 intervals (i.e. 1 s total)
ts = np.random.poisson(count_rate * delta_t, 1000000)


Below are the results I obtained with my MC code, with Sem-geologist's MC code using the original detector_raw_model function, and with the NumPy Poisson distribution. I also plotted the results obtained with the original dead time correction formula.



As we can see, the MC codes give similar results. However, they differ from the original dead time formula. This is because the time interval Sem-geologist chose, 1 µs, is too large compared to the total dead time of 3 µs (2 µs + 1 µs), and therefore does not correctly account for how many photons are missed (not counted) by the detector.

To confirm this, I have modified the Python code to have a time interval of 0.1 µs. This was done by increasing dt ten times in the dead_time_model function, just after dt = software_set_deadtime + 1:

dt = dt * 10  # dead time is now expressed in 0.1 µs steps instead of 1 µs steps

and by increasing the size of the time_space array from 1 million to 10 million, as well as the values used to generate the batch array:

    time_space = np.zeros(10000000, dtype=np.int64)  # 10 million bins of 0.1 µs (10 MHz sampling) = 1 s
    for i in range(aimed_pulse_rate // incremental_count_size):
        batch = np.where(np.random.randint(0, 10000000, 10000000) < incremental_count_size, 1, 0)
        time_space += batch
    return time_space


Here are the results I obtained with this smaller simulation time interval:



Note that I also changed the simulation time interval in my code to be 0.1 µs.

We can see that both codes are producing similar results and they are much closer to what the traditional dead time expression produces.

The only visible discrepancy is for a count rate of 500,000 c/s. This is because taking a simulation time interval of 0.1 µs is still not small enough compared to the 3 µs total dead time that we have considered. If I change the simulation time interval to 0.01 µs, we get:



I only did the calculations for a count rate of 500,000 c/s; lower count rates were not visibly different from the previous plot. Now we can see that all the methods agree with each other.
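
Just to illustrate the bin-width effect (a toy sketch of a binned dead-time MC; it is not either of our actual codes, and the 3 µs total dead time and 500 kcps rate are only example values): the same Poisson stream is binned with progressively finer time steps, and at most one pulse is counted per dead window.

import numpy as np

rng = np.random.default_rng(0)

def binned_mc(true_rate_cps, dead_time_s=3e-6, bin_s=1e-6, total_s=0.1):
    """Toy binned dead-time MC: photons are dropped into fixed bins and at
    most one pulse is counted per dead window."""
    n_bins = int(round(total_s / bin_s))
    counts = rng.poisson(true_rate_cps * bin_s, n_bins)   # photons per bin
    dead_bins = int(round(dead_time_s / bin_s))
    observed, next_free = 0, 0
    for i in np.flatnonzero(counts):
        if i >= next_free:
            observed += 1
            next_free = i + dead_bins
    return observed / total_s

for bin_s in (1e-6, 1e-7, 1e-8):
    print(f"bin = {bin_s*1e6:5.2f} µs -> {binned_mc(5e5, bin_s=bin_s)/1e3:6.1f} kcps observed")

With the coarse 1 µs bins, several photons can land in one bin and the dead window is quantized, so the counted rate comes out too low; shrinking the bins brings the result toward the analytical value.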

sem-geologist

#143
Thanks Aurelien,

I am now absolutely convinced that I suck at math :D. I was previously unable to understand how to use NumPy's poisson function for random generation, and thus was "reinventing the wheel". Generating random events directly with the Poisson function is a whole order of magnitude faster, simpler and less error prone. This will speed up a more advanced MC that applies randomized amplitudes and a common pulse shape, and simulates the detection with other, lesser count-hiding effects.

Still, I am scratching my head and can't understand why there is such a huge difference between coarse- and fine-grained time steps (1 µs vs 10 or 1 ns), because the real counting system can't physically respond to anything above 500 kHz, and the pulses themselves have a 1.2 µs duration at counting... How exactly does decreasing the step down to 10 ns help the simulation? Maybe it does not bring the simulation results more in line with what we observe on the machine, but closer to the theoretical calculation, which is why it fits the equation so well.

I am staying home from work today, and I see I forgot to take the data on the raw achievable count rate limits at 1300 nA for Cr on LPET (it should be about 2.8 Mcps) and at lesser ultra-high count rates. The initial simulation, when changing the enforced dead time constant, could reproduce exactly the observed achievable limits at such very high current (that is ~326 kcps at 1 µs enforced dead time, 276 kcps at 2 µs, 199 kcps at 3 µs... 109 kcps at 7 µs). However, at a 500 kcps input count rate it clearly and largely underestimated the detected count rates observed on the machine. I could also anecdotally observe a diminishing count rate when increasing the gain at very high count rates, which has an explanation in my previous post describing how GAIN and BIAS can influence the additional dead time (the sample/hold chip has a limited voltage slew rate, so the higher the dominant pulse amplitude, the larger the voltage difference down to 0 V and the longer it takes to drop).

Probeman

#144
Quote from: sem-geologist on December 15, 2022, 11:18:54 AM
I am now absolutely convinced that I suck at math :D. I was previously unable to understand how to use NumPy's poisson function for random generation, and thus was "reinventing the wheel". Generating random events directly with the Poisson function is a whole order of magnitude faster, simpler and less error prone. This will speed up a more advanced MC that applies randomized amplitudes and a common pulse shape, and simulates the detection with other, lesser count-hiding effects.

Aurelien is amazing, isn't he?

First some definitions:
Incident photon: a single (initial) photon event which creates a nominal dead time period.
Coincident photon: one or more photons that arrive within the dead time interval of a previous incident (initial) photon.

So would you say that the cause of these increasingly non-linear effects at moderate to high count rates is due to the manner in which the pulse processing electronics responds to photon coincidence at high count rates?

Specifically when a photon arrival is coincident within the incident (initial) photon's dead time interval, does it somehow cause an "extending" of the nominal dead time interval from the incident photon? 

Furthermore, is it possible that this "extension" of the dead time at these higher count rates could depend on the number of photons coincident with the incident (initial) photon?  In other words, could two (or more) coincident photons create a longer "extension" of the dead time interval than a single coincident photon?
The only stupid question is the one not asked!

sem-geologist

#145
Quote from: Probeman on December 15, 2022, 02:16:20 PM
Aurelien is amazing, isn't he?

Yes I agree, I think he is :)

Quote from: Probeman on December 15, 2022, 02:16:20 PM
First some definitions:
Incident photon: a single (initial) photon event which creates a nominal dead time period.
Coincident photon: one or more photons that arrive within the dead time interval of a previous incident (initial) photon.

So would you say that the cause of these increasingly non-linear effects at moderate to high count rates is due to the manner in which the pulse processing electronics responds to photon coincidence at high count rates?

Yes. And it depends very much on how the pulse recognition/sensing is implemented. Cameca has quite a weird signal sensing/recognition implementation. The problem is that it does not change linearly and predictably with the count rate. Also, because of the bipolar pulses, the overlap can be destructive at some time positions and constructive at others; that is, the addition can reduce the pulse amplitude or increase it. It is not a simple relation where we get more dead time if we increase or decrease the hardware-enforced dead time and/or the count rate. It is quite over-complicated, and I am still trying to figure out most of the corner cases and details.
Not all datasheets contain sufficient information, and the brief properties available in datasheets allow constructing only a rough model.

Quote from: Probeman on December 15, 2022, 02:16:20 PM
Specifically when a photon arrival is coincident within the incident (initial) photon's dead time interval, does it somehow cause an "extending" of the nominal dead time interval from the incident photon?

It can cause an extension if the pulses interfere constructively and the pulse duration (not counting the after-pulse of reversed polarity) overlaps with the end of the enforced dead time (the integer time enforced by the FPGA; let's call it tau_he, for hardware enforced). If there are no coincident photons, the enforced dead time is normally prolonged by the additional dead time caused by the S/H amplifier being switched back to sampling mode and catching up with the input signal (the signal coming from the gain amplifier). That catching up can take about an additional 300 ns (which is why the default additional dead time set by Cameca in PeakSight is 0.3 µs - I finally see the reason behind the default number). Let's call this additional dead time tau_shr (sample-and-hold restore). This additional dead time will vary with the PHA distribution: 300 ns applies only when the PHA distribution is centered in the middle. If the distribution is pushed far toward 5 V (by increasing bias and gain), then tau_shr rises up to 600 ns (at low count rates), but with the PHA shift to the left (at higher count rates) it goes down to 300 ns and less. However, tau_shr can also be shortened by exploiting the bipolar pulse's negative after-pulse, i.e. by setting tau_he to 1 µs: the comparator will then be reset faster than 300 ns, because the bipolar pulse coming from the gain amplifier is inverted, so the bipolar after-pulse (positive in that inverted polarity) has an amplitude much closer to the value held by the S/H amplifier, and there is less voltage difference to cut through to switch the comparator state to LOW, which effectively prepares it to sense the next pulse. That makes a kind of paradox. It also convinces me that for integral mode a 1 µs enforced time is even better than 2 µs.

When the incident photon's pulse is overlapped by a coincident photon's pulse - thanks to the bipolar pulse nature - this does not extend the dead time in perpetuity (as in the well-known EDS design), but extends it by at most about 1.2 µs (let's call this tau_ppu, for pulse pile-up) in the worst-case scenario. Nevertheless, if it hits at the right moment it can also shorten tau_shr - so it is a very complicated matter with no definitive answer yet.
Quote from: Probeman on December 15, 2022, 02:16:20 PM
Furthermore, is it possible that this "extension" of the dead time at these higher count rates could depend on the number of photons coincident with the incident (initial) photon?  In other words, could two (or more) coincident photons create a longer "extension" of the dead time interval than a single coincident photon?
It is not only the number but also the position, as it could extend the dead time by a bit more than one pulse width (if it interferes constructively) or shorten it (if it interferes destructively). tau_he cannot be shortened in any way by coincident-photon-induced pulses, but tau_shr can. So the dead time can range (per single event) from tau = tau_he + tau_shr to tau = tau_he + tau_ppu.

tau_ppu competes with tau_shr, so whichever is longer overshadows the other process.
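
A tiny sketch of the per-event dead time under these assumptions (tau_he, tau_shr and tau_ppu take the example values discussed above; the function is purely illustrative and ignores the destructive-interference shortening just mentioned):

def effective_dead_time(coincident_arrivals, tau_he=2.0, tau_shr=0.3, tau_ppu=1.2):
    """Per-event dead time in µs: enforced window plus whichever of the
    S/H restore time or the pile-up extension ends up dominating.
    coincident_arrivals: arrival times (µs after the counted pulse) of
    pulses landing inside the enforced window."""
    extension = tau_shr
    for t in coincident_arrivals:
        if t < tau_he and t + tau_ppu > tau_he:
            # pile-up pulse whose tail outlives the enforced window
            extension = max(extension, t + tau_ppu - tau_he)
    return tau_he + extension

print(effective_dead_time([]))      # no coincidence: tau_he + tau_shr = 2.3
print(effective_dead_time([1.5]))   # tail extends past the window: 2.0 + 0.7 = 2.7
print(effective_dead_time([1.9]))   # close to the worst case: 2.0 + 1.1 = 3.1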

The worst thing is that the workings of this sample/hold amplifier are a bit secretive; I find the documentation not very informative. I will probably end up buying one chip to experiment with (it is quite expensive).

Probeman

I've had a few off-line conversations recently regarding the subject of this topic and what seems to help people the most is to consider this simple question:

"Given a primary standard and a secondary standard, should the k-ratio of these two materials (ideally) be constant (within counting statistics) over a range of beam currents?"

Think about it and if your answer is yes, then you have already grasped most of what is necessary to understand the fundamentals of this topic.

Next, combine that understanding of the constant k-ratio with the selection of two materials with significantly different concentrations (i.e., count rates), and you will understand precisely how we have arrived at not only this new method for calibrating our dead time "constants", but also improved dead time expressions that account for the observable non-linear behavior of our pulse processing electronics at moderate to high count rates.
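
Purely as an illustration of that k-ratio check (all numbers below are invented; in practice you would use the measured, dead-time-corrected intensities of the primary and secondary standards at each beam current):

# Hypothetical corrected count rates (cps) for primary and secondary standards
# measured at several beam currents (nA); values are invented for illustration.
currents = [10, 40, 80, 120, 160, 200]
primary = [1200, 4805, 9610, 14390, 19230, 24010]     # e.g. Si metal
secondary = [770, 3090, 6170, 9260, 12350, 15420]     # e.g. orthoclase

k_ratios = [s / p for s, p in zip(secondary, primary)]
for i_beam, k in zip(currents, k_ratios):
    print(f"{i_beam:4d} nA  k-ratio = {k:.4f}")
# If the dead time constant and expression are right, these k-ratios should be
# constant within counting statistics; a trend with current indicates a problem.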
The only stupid question is the one not asked!

Dan R

#147
Quote from: Probeman on January 04, 2023, 12:12:08 PM
...But I quickly want to point out that this dead time calibration method is now considered obsolete as it is only good to around 50 kcps. Instead you should be utilizing the much discussed "constant k-ratio" method (combined with the new logarithmic dead time expression), which allows one to perform quantitative analyses at count rates up to 300 or 400 kcps. More information on the "constant k-ratio" can be found in this topic and you might want to start with this post here, though the entire topic is worth a read:

https://smf.probesoftware.com/index.php?topic=1466.msg11416#msg11416

In fact, if you have a fairly recent version of Probe for EPMA, the step by step directions for this "constant k-ratio" method are provided as a pdf from the Help menu...

Thanks probeman, I will try the constant k-ratio method.

@John Donovan -- I did see the following error when I attempted the procedure in the constant k-ratio.pdf

I am using PFE v 13.1.8 and my probewin.ini file is changed to:
[software]
DeadtimeCorrectionType=4      ; normal deadtime forumula=1, high-precision dt formula=2

but I get an error saying that this new value is out of range:
InitINISoftware
DeadtimeCorrectionType keyword value out of range in ...


FYI it works as it should if I set the value to 3 (i.e., logarithmic deadtime correction)


John Donovan

#148
Quote from: Dan Ruscitto on January 05, 2023, 10:16:49 AM
@John Donovan -- I did see the following error when I attempted the procedure in the constant k-ratio.pdf

I am using PFE v 13.1.8 and my probewin.ini file is changed to:
[software]
DeadtimeCorrectionType=4      ; normal deadtime forumula=1, high-precision dt formula=2

but I get an error saying that this new value is out of range:
InitINISoftware
DeadtimeCorrectionType keyword value out of range in ...

FYI it works as it should if I set the value to 3 (i.e., logarithmic deadtime correction)

It's because your Probe for EPMA version is way out of date!  Version 13.1.8 is all the way back from July this summer!   :)

Actually a value of 3 in the probewin.ini file is the six term expression (the options in the Analysis Options dialog are zero indexed, so 4 is the logarithmic expression).

[software]
DeadtimeCorrectionType=3   ; 1 = normal, 2 = high precision deadtime correction, 3 = super high precision, 4 = log expression (Moy)

The pdf instructions are correct, but you need to update PFE using the Help menu to v. 13.2.3.

In practice the six term and the logarithmic expressions are almost identical. The logarithmic expression is just very elegant!  See here:

https://smf.probesoftware.com/index.php?topic=1466.msg11041#msg11041
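
A quick numeric sketch of that near-identity, under my reading of the posts above that the six-term expression is the six-term partial sum of the same series whose full sum gives the logarithmic form (both formulas below are therefore assumptions, and tau is an arbitrary illustrative value):

import math

tau = 1.5e-6  # illustrative dead time, s
for n_raw in (5e4, 1e5, 2e5, 3e5):
    x = n_raw * tau
    six_term = n_raw / (1.0 - sum(x**n / n for n in range(1, 7)))   # assumed six-term form
    log_form = n_raw / (1.0 + math.log(1.0 - x))                    # assumed logarithmic form
    print(f"{n_raw/1e3:5.0f} kcps raw: six-term {six_term/1e3:7.1f} kcps, log {log_form/1e3:7.1f} kcps")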
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Dan R

Alright, I performed the newer constant k-ratio deadtime calculations as discussed using Si Ka on TAP/PETJ for all 5 spectrometers on Orthoclase | Si metal.
Here is how my values (in µs) compare:
                  Initial DT (2013)   2023 (Carpenter, full-fit)   2023 (Constant k-ratio)
Sp1 (TAP, FC)           1.72                    1.85                      1.5185
Sp2 (TAP, FC)           1.49                    1.25                      1.1650
Sp3 (PETJ, Xe)          1.70                    1.43                      1.2000
Sp4 (PETJ, Xe)          1.65                    1.42                      1.4000
Sp5 (TAP, FC)           1.68                    1.70                      1.4450

Now, for some of my flow counters, I got some odd behavior, shown in the attached image. Any idea what's going on here? This is as good as I could get it...