Setting dead times on Cameca SX50/SX100/SXFive

Started by lucaSX50, August 13, 2013, 12:12:45 PM


Paul Carpenter

#15
Ok, I just saw what Donovan posted.  Several comments from the Si Excel sheet you posted:

First, the plot of abs/probe shows ~20% variation so something is not right about the fundamental setting and/or measurement of probe current or sample conductivity. 

For Si use only data from the TAP-equipped spectrometers because you probably are not going to get a high enough count rate for Si on a PET spectrometer unless you go to very high probe currents. So use the data from TAP2 and TAP4.

The two Si PET plots exhibit sigmoidal curves (apparently). I don't know the origin of this.

Secondly, you need to use a count rate range of, for example, ~1000 cps up to ~200 kcps. Donovan's sheet shows a plot using the LTAP for which the lowest count rate is ~50 kcps, and it exhibits paralyzable behavior at the high end. The two linear fits show very different deadtime values and the paralyzable portion has a much steeper slope, so you don't want to use a data range like that. Again, if you were doing X-ray mapping using the LTAP at high probe current, you will likely observe different intensities on Si-rich phases that will not be correct unless a deadtime correction is applied.

I start out these runs by determining the low and high probe currents needed for the range, dividing it into the number of measurements, and using that increment to span the count rate range.  This is done using the Excel remote interface to PFE, but your probe has to be able to set the probe current correctly over the required range.  John has used this to collect the data, hence the values for the probe current being used.
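For anyone who wants to try the same fit outside of Excel, here is a minimal sketch (my own illustration with synthetic numbers, not the PFE spreadsheet itself), assuming the classic non-paralyzable model N_m = N_t/(1 + N_t*tau) with the true rate proportional to probe current. Rearranging gives N_m/I = k - k*tau*N_m, so a straight-line fit of N_m/I against N_m yields tau = -slope/intercept:

import numpy as np

# Synthetic example data (made up to be self-consistent with tau = 1.1 us):
# measured count rates (cps) at several probe currents (nA).
current_nA = np.array([10.0, 20.0, 40.0, 80.0, 120.0, 160.0, 200.0])
rate_cps = np.array([9.891e3, 1.957e4, 3.8314e4, 7.3529e4,
                     1.0601e5, 1.3605e5, 1.6393e5])

# Non-paralyzable model: N_m = k*I / (1 + k*I*tau),
# rearranged: N_m/I = k - k*tau*N_m (a straight line in N_m).
slope, intercept = np.polyfit(rate_cps, rate_cps / current_nA, 1)

tau = -slope / intercept  # dead time in seconds
print(f"dead time ~ {tau * 1e6:.2f} us")  # ~1.10 us for this synthetic data

If the points bend away from the line at the high end (paralyzable behavior), restrict the fit to the lower count rate range as discussed above.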

John, thanks for posting that data.

Paul
Paul Carpenter
Washington University St. Louis

John Donovan

Quote from: Paul Carpenter on December 19, 2014, 11:16:41 AM
First, the plot of abs/probe shows ~20% variation so something is not right about the fundamental setting and/or measurement of probe current or sample conductivity. 

Yes, sorry, I should have pointed that out. We've had bad sample current measurements for about 5 years now and our engineer is sure that the connector inside the stage isn't making good contact when the sample holder is inserted.  For example, I'll be at 20 nA on the faraday cup, then pull out the cup and the sample current goes to 200 nA, which is impossible of course!

The engineer is supposed to drop the stage and deal with that but in the last 5 years we've only dropped the stage once for some other reason and he forgot to check it! But we don't use absorbed current for anything else so it hasn't been a priority... but it should be!

But I think the deadtime data attached above is pretty good because the specimens were pure Ti and pure Si on conductive mounts.

Quote from: Paul Carpenter on December 19, 2014, 11:16:41 AM
For Si use only data from the TAP-equipped spectrometers because you probably are not going to get a high enough count rate for Si on a PET spectrometer unless you go to very high probe currents. So use the data from TAP2 and TAP4.

Yes, ideally that would be best.  One of the PETs was an LPET, so that has 3x better intensity.  I really will never buy another instrument that doesn't have all large-area crystals (except for the 4-crystal spectros of course).

Quote from: Paul Carpenter on December 19, 2014, 11:16:41 AM
Secondly, you need to use a count rate range of, for example, ~1000 cps up to ~200 kcps. Donovan's sheet shows a plot using the LTAP for which the lowest count rate is ~50 kcps, and it exhibits paralyzable behavior at the high end. The two linear fits show very different deadtime values and the paralyzable portion has a much steeper slope, so you don't want to use a data range like that. Again, if you were doing X-ray mapping using the LTAP at high probe current, you will likely observe different intensities on Si-rich phases that will not be correct unless a deadtime correction is applied.

Fortunately CalcImage does perform a full deadtime correction for quant!   :)
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

John Donovan

#17
Quote from: Paul Carpenter on December 19, 2014, 11:16:41 AM
Again, if you were doing X-ray mapping using the LTAP at high probe current, you will likely observe different intensities on Si-rich phases that will not be correct unless a deadtime correction is applied.

This is a little off-topic, but I should mention it since the issue of quant came up. In addition to all quant data in Probe for EPMA and CalcImage being deadtime corrected (along with all the other required corrections of course!), we also apply deadtime and beam drift corrections to all displayed raw intensities, for both point and X-ray map intensities.

That is, unless one has disabled these corrections from the Analytical | Analysis Options menu dialog as seen here:

John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Philipp Poeml

#18
Hi Paul, John,

Thanks for all the info. When I get to the office on Monday, I'll post my Excel sheets -- I don't have them here. Maybe together we can figure out how to determine the deadtime for our SX100R.

Sorry for not posting them earlier!

Edit: Now attached to this post!!!!

Best
Philipp

UofO EPMA Lab

Quote from: Philipp Poeml on December 21, 2014, 05:26:21 AM
Thanks for all the info. When I get to the office on Monday, I'll post my Excel sheets -- I don't have them here. Maybe together we can figure out how to determine the deadtime for our SX100R.

Hi Philipp,
I took a quick look at the first spreadsheet and it looks pretty good.  You should update the comment on the first row as it still has my conditions noted there!

I assume these were acquired with 1 usec enforced deadtimes?  If so, then I would set those Cameca integer values in the SCALERS.DAT file to 4, 3, 2, 3 usec, remeasure, and then use the new deadtime values for the software deadtimes for those crystals in the lower section of the SCALERS.DAT file.
john
UofO MicroAnalytical Facility

Philipp Poeml

Hi John,

Thanks for checking this out. It would be great to get Paul's opinion as well.

So, I re-measured everything, now with the DTIM values you suggested. It looks better now; I attach the two xls files, the 1,1,1,1 one and the 4,3,2,3 one.

What do you both think? Would it be a good idea to also check on Ti metal?

Which values would you finally set in PfE then? I understand that 4,3,2,3 should go into line 35. From the xls values, do I take the worst one for line 13?

Happy Christmas!

John Donovan

#21
Quote from: Philipp Poeml on December 23, 2014, 04:27:05 AM
What do you both think? Would it be a good idea to also check on Ti metal?

Absolutely.  In fact ideally one should measure an emission line on each crystal, so you can populate the SCALERS.DAT lines starting at line 72 with values for each crystal. 

In other words, because deadtime is somewhat dependent on photon energy (and detector bias) as Paul stated above, having a different deadtime value for each crystal (think of it as an energy range) means one can nicely compensate for this variation.

Quote from: Philipp Poeml on December 23, 2014, 04:27:05 AM
Which values would you finally set in PfE then? I understand that 4,3,2,3 should go into line 35. From the xls values, do I take the worst one for line 13?

The values from your spreadsheet calculations go into lines 72-76.  Line 13 should be ignored, it is obsolete. Yes, the 4,3,2,3 values go in line 35 for the enforced hardware values.
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Ben Vos

Quote from: Mike Jercinovic on August 14, 2013, 12:50:08 PM

The second issue is with the picoammeter.  No matter how well tuned it may have been, this will go out of spec at some point, and may need tuning frequently.  If the current measurements are not linear, then your deadtime tests can give you a wrong apparent deadtime, or, in many cases, different apparent deadtimes depending on the current regime you measure.  There are five gain loops in the picoammeter for each current range: <0.5nA, 0.5-5nA, 5-50 nA, 50-500nA, and 500-10000nA.  In a perfect world, these are all linearized.  In actuality, the gains and offsets tend to fall out of linearity, so you may see a completely different deadtime slope in the 5-50 range compared to the 50-500 range.  Unfortunately, the 5-50 range resistors do not have trimmers (so you have to change resistors to affect the gain/slope), the others do, so they are all trimmed to match the 5-50 range.  Mike J.
Hi Mike,

I recently calibrated deadtime on our SX100.
Some results are in the attached file.
1) At the moment I use a Cameca integer deadtime of 3 µs for the 4 spectrometers.
Can I reduce this value to 2 µs, as the pulse stretcher circuit already seems to be active at this value for all 4 spectrometers?
2) From the graph it's clear that the 5-50 nA and 50-500 nA ranges are not trimmed correctly at the moment.
- Are the deadtime values that I put in the table only valid for the 50-500 nA range? What should I do if I want to calibrate on a standard (pure element) in the 5-50 nA range and measure a sample (minor element) in the 50-500 nA range?
- Can I trim/tune the 50-500 nA range myself? How would I do it?

Greetings,

Benedict Vos

Ben Vos

Hi all,

I was at EMAS 2019 in Trondheim, Norway for the 16th European Workshop on modern developments and applications in microbeam analysis.  My poster presentation was about a "calibration device for accurate current measurement on a CAMECA SX100 EPMA".
The abstract and poster are attached.

With this calibration device, I'm sure that both the 5-50 nA and 50-500 nA ranges are very well trimmed now.

For more details feel free to contact me.

Greetings,

Ben Vos

John Donovan

Quote from: Ben Vos on June 11, 2019, 01:49:23 AM
Hi all,

I was at EMAS 2019 in Trondheim, Norway for the 16th European Workshop on modern developments and applications in microbeam analysis.  My poster presentation was about a "calibration device for accurate current measurement on a CAMECA SX100 EPMA".
The abstract and poster are attached.

With this calibration device, I'm sure that both the 5-50 nA and 50-500 nA ranges are very well trimmed now.

For more details feel free to contact me.

Greetings,

Ben Vos

Hi Ben,
Very interesting. Would you be willing to attach your poster as a pdf here?
john
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

sem-geologist

#25
I was scratching my head for years (actually only 6 years, I am quite fresh in this field) about this dead time and the PHA (the two are tightly connected, and all the counts missing at very high beam currents on intense lines are missing because of them, as I will explain further). Unfortunately, most of the available educational material on these subjects is lacking, and some of it is completely misleading.

The "improvements" by vendors (i.e. enforced cutting off values bellow some energy in PHA, that is cutting out Ar esc-peak) hides the mechanism-behind away from such newcomers as me even more further. I should mention that our SXFIVE FE is equipped with 5 spectrometers where 4 have large xtals. Due to being a FEG machine it has only single condenser lens, which makes changing between high and low currents during single analysis not practical, the power of condenser needs to be changed enormous (compared to 2-lense system, i.e. sx100), and requires thermal equilibrium time to stabilize. So for this reason when we want to measure trace elements with high or very high (max available current, or "more power, Scotty") current, we also analyze major elements with same conditions. I had done similar experiments as here a few years ago: with total counts/real time vs cnts/beam current  and wondered with few things: non linearity of that, and PHA peak deformation and shift. My "cure" for this problem was (and is up to now,  but  maybe it will change) making separate calibrations at high current - this minimizes the effect, but does not completely eliminates it, and generally is cumbersome.

I should say that lately I am very much into electronics (probably inherited genes; I have good memories of my dad building synthesizers, and I love the smell of solder fumes, i.e. burnt rosin flux). This whole last month I had an excuse to apply my hobby obsession to our SX100, as it had broken down. Due to the Covid crisis we had no clients for a month, and I had plenty of time without being bothered: just the machine and me. After fixing the major problems (the lens supply) I got into hunting down and troubleshooting small issues which had accumulated over the years. The Covid situation also showed that remote control of the machine is not easy, ergonomic or efficient, particularly since at home I have a 3 Mb/s internet connection, which simply sucks. I know that plenty of our customers have similar connectivity problems, so to overcome this I was looking into options for custom hardware (+ software) which would send lossless video through the network with the lowest possible latency and bandwidth. There is an unused EDS slot on the electronics motherboard (we have no full version of Bruker Esprit; with a full license that slot would be used by the cable to the EDS card for video and external scan generation). While my main aim was extracting the video signal, led by curiosity I probed (with an oscilloscope) the WDS signal pins exposed there. They output counts as 5 V pulses. Wait... was I really going to design a chip which uses only the video pins and leaves those other signals in peace? All kinds of possibilities came to mind: there are tons of limitations in the Cameca PeakSight software and hardware acquisitions (limited mapping resolution, for example, or limited resolution of WDS scans), and these pins, with the right hardware and software, could at last let me get around them.

So the digital WDS pulse has a 500 ns width (I had no idea about the WDS schematics at that point; nothing had broken there). And so many important "what if" questions formed in my head. If I wanted to count pulses, what kind of counter chip would I need? 500 ns pulses packed side by side would make 2 MHz. Can two pulses sit side by side (which would rule out a counter chip working on rising edges, and would require a more expensive, complicated chip/system)? To answer these questions I set the beam to a burning 3 microamps to find out how dense these pulses can get! The Ni Ma line on TAP, from the X-rays of the stage, was just enough. The X-ray meter on the monitor showed the E6 level (white) - just excellent.

So, with persistence set on the oscilloscope for 2 seconds, I found that the digital pulses are aligned perfectly on 1 μs steps. The closest pulse (rising edge) to the triggered pulse was 4 μs away, counting from the rising edge of the triggered pulse. That is consistent with the software-set dead time of 3 µs. So despite the digital pulse being only 500 ns wide, it represents a 1 μs step, and thus setting the deadtime to non-integer values makes no sense. Changing the dead time changes the length of the gap between the closest pulses; in other words, it enforces the rejection of anything within an interval of 1 + set_dead_time μs counting from the rising edge of the last counted pulse. And then I asked myself the politically incorrect question: could there be "pile-up" peaks in a proportional counter?

The short answer is.... (drumroll... dramatic music) yes.  :o

So, led by curiosity, I hung an oscilloscope probe on the raw spectrometer signal (by raw I mean the signal coming out of the spectrometer, not processed by the spectrometer board). The dead time set in the GUI does not affect anything there, and with a high intensity beam the closest pulse from a triggered rising edge is 2 μs away. The pulses coming from the pre-amplifier (and a mild opamp, for the spectrometer-to-intermediate-board transfer) are about 1 μs wide. So the smallest gap between two pulses is... 1 μs. Interestingly, looking at a small time scale, it is clear that the pulses (and gaps) are aligned on 1 μs steps. Now let's stop here: I understand why the digital pulses are aligned like that, but the analog pulses? Would that imply that the electron beam is pulsing at 1 MHz?

Anyway, this is more evidence that setting the dead time to non-integer values makes no sense. So if the minimal observed gap is 1 μs, and thus the physical dead time of the counter is 1 μs, why not simply set the dead time to 1 μs and leave it at that? We could do that if only there were no pile-ups and no higher-order diffracted X-rays... because the physical dead time depends on the amplitude of the pulse it follows. A higher-order, more energetic pulse (or a pile-up pulse) produces a following voltage drop proportional to its amplitude, and the relaxation time is proportional too. It does not matter whether the PHA is set to diff or integral: that filtering is applied much later in the processing pipeline, and does not eliminate the physical influence and physical dead time of those higher-energy pulses.

So let's talk about the pile-ups. I wouldn't have believed it if I hadn't seen it. While running at low current with a zoomed-out view on the oscilloscope, it is clear that the dominant pulses have very similar amplitudes. As the current increases and the density of pulses grows, pulses of double amplitude start to appear, and with further increase of current triple amplitudes and more appear (i.e. with a dominant amplitude of ~300 mV at 20 nA, after going to very high current (2 μA) some pulses with 1.5 V amplitude appeared - that is a quadruple pile-up!).

But that is not all. Remember the Ar escape peaks? Filtered out, they said... Single Ar escape events are filtered out, but with a high intensity beam there are so many Ar escape pulses that they pile up on themselves, on the analyzed line pulses, and on already piled-up pulses. And this is where you get the right-wing tail in the PHA peak, and the gap between peaks is smoothed out by Ar escape pile-up.

So, probing the raw spectrometer output with the oscilloscope, I checked how well the auto PHA works at high current. It does not. It sets the bias too high, which leads to a multitude of argon peaks, which get hidden ("good" job, Cameca, for dumbing the PHA down). With oscilloscope probing, I could reduce the bias to the level where the Ar escape peaks just disappear.

According to an answer about the PHA from one of the Cameca engineers, in integral mode pulses up to 15 V are summed (5.5 to 15 V is not probed and thus not shown by the PHA).

So why do we get PHA peak shifts depending on the current? I guess there is some smoothing algorithm applied before the raw pulses are processed; with Ar escape pulses becoming very numerous, the smoothed background would be much higher, and thus would decrease the relative amplitude of the main peaks.

So my final words:
The 3 μs default is set as the best compromise universal dead time. It can safely be lowered to 1 μs if working at low current and no high-order overlapping peaks are at the position. However, if a high-order, high-energy overlap is expected (regardless of diff or integral mode), it is a good idea to increase the dead time. At high counting rates, due to piled-up pulses, the physical dead time can randomly exceed 3 μs, and it would be safe to increase the dead time (actually increasing the dead time is always the safest option, though not the most efficient). That still won't fix the missing counts, which are the product of counting pile-up pulses as a single pulse (integral mode) or discarding them completely (diff mode).

sem-geologist

#26
Followup:

My recent thought is that maybe the proper way would be to use a tight window in diff mode, discarding everything else (pile-ups, Ar + pile-up, Ar + Ar...), but then those discarded pulses should also be added to the dead time. So the proper way would be to calculate the dead time for the accepted pulses as:

diff_counts * dead_time + (integral_counts - diff_counts) * (1 + dead_time)
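In code form, the bookkeeping I have in mind would look something like this (a sketch only; the function name and the ~1 µs pulse width are my assumptions from the observations above):

def corrected_rate(diff_counts, integral_counts, real_time_s, dead_time_us=3.0):
    # Accepted pulses each block the counter for the set dead time;
    # pulses rejected by the window (seen in integral but not in diff)
    # still occupy the detector for their ~1 us width plus the dead time.
    dt_s = dead_time_us * 1e-6
    busy_s = diff_counts * dt_s + (integral_counts - diff_counts) * (1e-6 + dt_s)
    return diff_counts / (real_time_s - busy_s)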

Of course, preventing the Ar escape peaks from being generated in the first place is important too, to prevent the peak shift in the PHA. However, doing that with this "improved" PHA, and without an oscilloscope, is a huge challenge.

sem-geologist

#27
Unfortunately (or actually fortunately) I need to correct some of my statements above. I re-watched my film material from that experiment, and I see that my first impression was somewhat off. My general advice above stays the same; however, after re-watching I see that there is no minimal gap between physical pulses in the raw X-ray signal. So physically there is actually no dead time after most pulses (that is the advantage of P-10 gas vs pure Ar), unless the height of a pulse is so huge that the following relaxation voltage drop is intense enough to prevent an avalanche. (2022 update: it looks like I had too limited a set of observations of this pulse behavior. After looking more, I found numerous cases where other pulses still arrive within that depression - thus in reality, at these X-ray intensities, the proportional counter has practically no dead time at all.)

And my guess about argon escape peaks causing the PHA shifts is probably not right. It is more likely that the shifting is due to closely-following pulses, where the next pulse starts while the charge on the detector wire has still not recovered from the previous pulse; so while the height of the pulse relative to its local baseline is correct, the absolute height (relative to the 0 V reference) is smaller, and after further amplification and digitization in the WDS card it lands at lower voltages in the PHA graph. At high counting rates (E5, E6 level), such closely packed pulses are very common. I think it is worth finding a day to make concise video and photo documentation of how this works, and maybe I could upload it here (I mentioned I have some recordings, but they are bad quality). I guess this would benefit the community. I am not sure about the legal aspects - this basically reveals a lot of the internal workings of SX detectors. But on the other hand, this is a fundamental piece of the methodology to understand.

The more I think about this, the more I am tempted to replace the proportional counters with an SDD (a tiny EDS).

sem-geologist

#28
Recently I stumbled upon this article:
"Radiation detector deadtime and pile up: A review of the status of science"
https://doi.org/10.1016/j.net.2018.06.014

It can be used to take the discussion on dead times further; it demonstrates that the old approaches to dead time are not well thought out. It also supports my previous claim that the Cameca SX100/Five detectors (+ electronics) suffer from pile-up effects.
One way forward is to calculate the dead time correctly, once its origin is well understood.

Another way is hardware modification: souping up the electronics so that dead time is no longer a concern.
I mentioned that it is tempting to replace the proportional counter with a tiny EDS (it was done on some custom-tailored JEOL probe; pardon, I can't remember all the details). I have, however, changed my view on that: it is not actually a matter of SDD superiority over the gas proportional counter (EDS windowing vs WDS PHA), but of modern state-of-the-art counting electronics versus ancient counting electronics. The spectrometer pre-amplifier electronics probably have not changed since the age of the SX50 (this needs confirmation). The A203 charge preamplifier from AMPTEK that is used was state of the art a few decades ago, but AMPTEK has since gone through a few generations of improvements. The A203 signals (shaped pulses) have an enormously long tail, so pile-up at high counting rates is inevitable. That tail was a wanted feature for the slow, low-resolution ("fast" by old standards) ADC used in the MCA. To achieve better performance, I think the least invasive modification would be to take the shaped and amplified signal from the spectrometer (without modifications), convert it with a high-speed, high-resolution (at least 12-bit) ADC, and feed that to DSP cores on some low/middle-end FPGA (no need to go high-end). That should be able to process the signal without any hiccups/dead time, with full pile-up recovery in real time. I wonder what additional increase in performance could be achieved using a modern state-of-the-art charge preamplifier (like the A250); that would require replacing part of the pre-amplifying electronics. While this would be more complicated, it still looks like peanuts compared to a complete mechanical replacement and fitting of an SDD EDS in place of the proportional counter.
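To make the ADC + FPGA idea concrete: once the shaped signal is digitized fast enough, separating piled-up pulses becomes a software problem. A toy sketch (nothing Cameca-specific; scipy's generic peak finder stands in for what the DSP cores would do) on a synthetic trace:

import numpy as np
from scipy.signal import find_peaks

# Synthetic shaped signal: two ~1 us wide pulses only 1.5 us apart,
# sampled at 100 MS/s (10 ns per sample). Amplitudes are arbitrary.
t = np.arange(0.0, 10e-6, 10e-9)

def pulse(t0, amp=0.3, sigma=0.4e-6):
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

trace = pulse(3.0e-6) + pulse(4.5e-6)

# A slow counter sees one broad blob; a fast ADC plus a peak finder
# resolves both pulses (require >= 0.5 us separation and a height cut).
peaks, _ = find_peaks(trace, height=0.15, distance=50)
print(f"resolved {len(peaks)} pulses at {t[peaks] * 1e6} us")

A real implementation would do the equivalent in FPGA fabric at line rate, but the principle is the same.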

I wish I had access to a spare SX100 (or SX50) to experiment with...

sem-geologist

#29
Looking through those spreadsheets I see the approach of fitting a linear regression (fit all or fit last), but I want to point out that due to two processes (not one) this is not linear. One is dead time - counts that are not measured; the other - no less important - is peak pile-up. At higher count rates the pile-up process dominates the "missing counts". Going up to 200 kcps and using this linear regression approach looks quite reasonable, although the fun already starts at 100 kcps. Deadtime is clearly tricky for Cameca PeakSight, as PeakSight 6.4 has an option for an additional dead time correction with a current threshold setting. That is clearly the wrong approach, as the effect depends on count rate, not directly on current, but it illustrates that the problem is not straightforward to understand.
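To see why a straight line stops working, compare the two textbook dead time models (from the review I linked earlier; these are the generic formulas, not Cameca's actual electronics): non-paralyzable N_m = N_t/(1 + N_t*tau) and paralyzable N_m = N_t*exp(-N_t*tau). For tau = 3 us, both bend hard above ~100 kcps:

import numpy as np

tau = 3e-6  # 3 us dead time

for n_true in (1e4, 1e5, 3e5, 1e6):  # true rate, cps
    n_nonpar = n_true / (1.0 + n_true * tau)  # non-paralyzable model
    n_par = n_true * np.exp(-n_true * tau)    # paralyzable model
    print(f"true {n_true:>9.0f} cps -> "
          f"non-paralyzable {n_nonpar:>7.0f} cps, paralyzable {n_par:>7.0f} cps")

Both already lose over 20% of the counts at 100 kcps, and pile-up losses come on top of that.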

At least with PeakSight 6.4 it becomes clear how PeakSight (and the interpreter, and so I guess the dll that Probe Software uses to communicate with the machine) calculates counts per second from counts with the dead time applied. To the set integer deadtime (in µs) an additional 0.3 µs is added (it is shown in the config window). So if we set it to 1 µs the count rate will be: cps = cts / (real_time - cts * (1 µs + 0.3 µs)).
Why 0.3 µs? I guess because without it the discrepancy between cps at different nA was too high to accept. As I already mentioned, the digital count pulses are emitted by the WDS board aligned to an exact 1 µs time grid. Pulses from the preamplifier come with a 1 µs width - even if it were possible to force the dead time to 0, there is no way to register another pulse at the falling edge of a pulse; counting needs the rising edge. So why 0.3 µs? There is a chip which holds the amplified signal for 700 ns so that the slow prehistoric ADC can read the amplitude... and that adds up nicely: 0.7 + 0.3 + set_dead_time = the time from the rising edge of a pulse after which the electronics is ready to register the next one.
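In code, the rate calculation as I understand it (my reconstruction from the config window, not vendor documentation):

def cameca_cps(counts, real_time_s, set_dead_time_us):
    # Count rate as PeakSight appears to compute it: the set integer
    # dead time plus the fixed 0.3 us, applied per counted pulse.
    dt_s = (set_dead_time_us + 0.3) * 1e-6
    return counts / (real_time_s - counts * dt_s)

# e.g. 150 kcts in 1 s of real time at DT = 1 us -> ~186 kcps
print(f"{cameca_cps(150_000, 1.0, 1):.0f} cps")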

This 0.3 µs, in my opinion, is there to compensate for the pulse pile-up process, which Cameca does not recognize as such, and it works miserably (is insufficient) at high count rates (>100 kcps).

So, to understand exactly how this dead time works, I made a MC model. The model has two functions. The first function generates a randomized, simplified model of the signal emitted by the preamplifier, 1 second long with 1 µs resolution (this simplification can be assumed good enough, as the electronics emits digital pulses aligned to a 1 µs grid, and pulses from the pre-amplifier come with ~1 µs width). It requires the desired count rate produced by the detector/preamplifier (or the targeted raw counts). If it is modeled for a low count rate there are only single pulses with vast time in between, but as the density grows, pile-ups start to appear. The randomization is semi-precise; a few computational corners are cut to boost modeling speed (it is written in Python), so it does not create the signal pulse by pulse but in batch vectorized (numpy array) form, i.e. by summing a number of generated randomized arrays. That produces patterns which compare well with those seen on the oscilloscope when probing the pre-amplifier signals.

The second function processes this signal the way the counting electronics on the WDS card would, and accounts for every pulse: it iterates through the signal from beginning to end, pooling counts by pulse height into different variables (so that in the end we can see how many pure pulses there were, and how many double, triple and higher pile-ups). At the end it produces the number of counts that the Cameca software/firmware would produce, and calculates the count rate as the Cameca software would with the given software dead time. I checked its results against the SXFiveFE and it agrees very well with the observations. What is left to do is to fit a non-linear regression to correctly address the pile-ups. The model demonstrates that up to a real 500 kcps the linear approach can more or less work, with a small overestimation at lower count rates. However, to make it work close to 1 Mcps and up to 1.5 Mcps, the linear approach is not sufficient. E.g. Cr Ka on LPET on Cr2O3 will produce about 300 raw kcps, and after dead time correction up to ~500 kcps, while in reality there should be three times more (a quantitative analysis would give a very low concentration of Cr, with totals about 30% at 650 nA, versus 100% at 14 nA). The modeled values agree well with the spectrometer count rate capping at different set dead time values: at a dead time of 1 µs the count rate caps at ~300 kcps, while at 3 µs it caps at about 200 kcps, which this model predicts very well.
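For those who don't want to open the notebook, here is a much-condensed re-sketch of the two functions (not the attached notebook itself - that one is vectorized and also pools counts by pulse height):

import numpy as np

rng = np.random.default_rng(42)

def counted_pulses(true_cps, set_dt_us=3, real_time_s=1.0):
    # Poisson arrivals binned on a 1 us grid; a slot with more than one
    # photon is a pile-up and registers as a single pulse; after each
    # counted pulse the electronics is blind for set_dt_us further slots.
    n_slots = int(real_time_s * 1e6)
    photons = rng.poisson(true_cps / 1e6, n_slots)  # photons per 1 us slot

    counts, blind_until = 0, -1
    for slot in np.flatnonzero(photons):  # slots holding >= 1 photon
        if slot > blind_until:
            counts += 1                   # pile-ups in one slot merge
            blind_until = slot + set_dt_us
    return counts

for cps in (1e4, 1e5, 3e5, 1e6):
    print(f"true {cps:>9.0f} cps -> counted {counted_pulses(cps)}")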

I am attaching the modeling notebook (an IPython notebook; it requires numpy, and matplotlib for plotting), so this can be examined and experimented with. I am thinking about what kind of non-linear equation could precisely convert back to the real counts.

I know most of you don't use such high currents, but our SXFiveFE is crippled by a bug where recalling saved column conditions sets the suppressor to a hardcoded 300 V, ignoring the value saved in the conditions (and our good-as-new FEG tip, which has been running like new for more than 2 years, is adjusted by me to a 275 V suppressor). That (+ the non-repeatability of the C3 lens) makes a multi-condition approach impossible, and these kinds of currents do wonders for measuring trace elements in minerals which can withstand such harsh beam conditions (i.e. rutile).
Alongside the Python notebook, I am attaching a few charts. A few lines unfortunately overlap completely: the cps as would be calculated by Cameca for 1 µs DT and 3 µs DT (the blue line is under the red - the same cps). The curves with "guessed" DT, using a dead time of 2.56 µs for software DT = 1 µs and 4.56 µs for software DT = 3 µs, also overlap (the pink line is under the dark yellow). So in general the theoretical DT is the software DT + 1.56 µs (chosen so the curves cross at 1M raw vs 1M counted counts; it should be smaller for a better fit in the 100k count rate range).

It also shows that diff mode flattens out much sooner than integral mode, and even goes into "paralyzable dead time" behavior. However, the model does not predict that precisely, since in reality this is due to PHA shifting, while the model only follows the decrease as single (non-piled-up) pulses become rare, most pulses being piled up at high count rates. In reality, with the diff method, the pulses shift toward lower energies in the PHA, and the window will catch higher pile-up orders. For Cr Ka on LPET the PHA pulse (at low current) is normally centered to the right, so even at 650 nA the counts do not diminish, as all first-order (non-piled-up) pulses are still above the background cutoff value. Counts in integral mode, in both the model and the real instrument, stabilize and won't go down with increasing current.

The big surprise is that from model it is clear that setting software DT to 1µs is better than to 3µs, as that has better slope at 1M - 1.3M raw counts, thus recalculation to raw counts at such range will have less uncertainty.