
Use manufacturer-provided detector efficiency spectrum in detector definition?

Started by Clueless Micky, November 14, 2025, 03:39:19 PM


Clueless Micky

Hello all,

To my understanding, DTSA-II calculates the detector efficiency spectrum from the transmission spectrum of the chosen X-ray window type, and from the dimensions provided in the "Crystal parameters" section of the detector settings dialog.

For my detector, I have a measured efficiency spectrum supplied by the manufacturer, and I'm wondering whether there is a way to use this in DTSA-II instead of the calculated efficiency spectrum.

For example, is it possible to insert the efficiency table into a *.xdet file, or can I use a Python script to overwrite an existing det.sensitivity array?

Thank you for your help!

Nicholas Ritchie

You might know that Java executable .jar files are actually ZIP archives with a renamed extension.  Inside the epq-??????.jar file, at the path gov/nist/microanalysis/EPQLibrary/Detector/custom.csv, is a comma-separated-value file that defines a custom detector efficiency (channel by channel).  You can replace this file in the .jar with one containing the desired detector efficiency.  Restart DTSA-II and this becomes the new "Custom Table" detector calibration.
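
A minimal sketch of that replacement using Python's standard zipfile module. The function name and the CSV filename below are my own choices; the entry path is the one given above. ZIP archives cannot be edited in place, so the jar is rewritten from a backup copy:

```python
import shutil
import zipfile

# Entry path inside the EPQ jar, as described above.
ENTRY = "gov/nist/microanalysis/EPQLibrary/Detector/custom.csv"

def replace_jar_entry(jar_path, entry, new_file):
    """Rewrite jar_path so that `entry` holds the contents of `new_file`."""
    backup = jar_path + ".bak"
    shutil.copy(jar_path, backup)              # keep the original jar
    with zipfile.ZipFile(backup) as src, \
         zipfile.ZipFile(jar_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != entry:         # copy every other entry verbatim
                dst.writestr(item, src.read(item.filename))
        dst.write(new_file, entry)             # drop in the measured table
```

Usage would be something like `replace_jar_entry("epq-<version>.jar", ENTRY, "measured_efficiency.csv")` against your own installation's jar, followed by a restart of DTSA-II.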
"Do what you can, with what you have, where you are"
  - Teddy Roosevelt

Clueless Micky

This method works very well once I set the detector metal coating thicknesses to zero and make the detector effectively infinitely thick (50 mm).

Thank you very much!

Nicholas Ritchie

Be careful.  The detector thickness is important for modeling spectra accurately.  Most SDDs are about half a millimeter thick, and X-rays above about 10 keV pass right through a fraction of the time and are not detected.

Clueless Micky

Quote from: Nicholas Ritchie on November 20, 2025, 10:33:16 AM
Be careful.  The detector thickness is important to modeling spectra accurately.  Most SDDs are about half-a-millimeter thick and X-rays above about 10 keV do pass right through a fraction of the time and are not detected.
After replacing the original "custom.csv" with one that contains the measured efficiency of my detector, I extracted the efficiency calculated by DTSA-II using the function getEfficiency().

I then plotted the efficiency spectrum that I supplied together with the one I read back. The result is that if I choose a detector thickness of 0.5 mm, the two spectra strongly deviate from each other above ~10 keV. They differ by a factor ~3 at 20 keV.

On the other hand, if I choose an infinitely thick detector, say 50 or 100 mm, the two spectra coincide perfectly over the entire energy range.

For now, my understanding is that the X-ray losses above 10 keV due to the finite detector thickness are already accounted for in the measured efficiency spectrum of my detector, and that if I then choose a detector thickness of 0.5 mm or so in DTSA-II, I'm accounting for these losses twice.

Having said that, I'll make sure to run some systematic spectrum simulation tests to check if this understanding is indeed correct.
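
The size of the deviation can be checked with a back-of-envelope calculation. This is a rough sketch, not DTSA-II internals: the mass attenuation coefficient for Si near 20 keV is an approximate tabulated value, and a bare slab model is assumed. If the measured efficiency already includes the crystal's finite-thickness absorption, applying a 0.5 mm thickness again suppresses the high-energy end by the intrinsic absorption factor:

```python
import math

mu_rho = 4.5   # cm^2/g, approximate mass attenuation coefficient of Si at 20 keV
rho = 2.33     # g/cm^3, density of Si
t = 0.05       # cm, i.e. a 0.5 mm thick crystal

# Fraction of 20 keV X-rays stopped in the crystal (simple slab model).
absorbed = 1.0 - math.exp(-mu_rho * rho * t)
print(f"absorbed fraction at 20 keV: {absorbed:.2f}")        # ~0.41
print(f"suppression if double-counted: {1.0 / absorbed:.1f}x")  # ~2.5x
```

A suppression of roughly 2.5x at 20 keV is the same order as the ~3x deviation reported above, which is consistent with the double-counting interpretation.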

Nicholas Ritchie

I think you are correct.  The "custom.csv" file is intended to hold the window transparency.  If you are entering the measured efficiency as the "window transparency" then setting the detector thickness to 0.5 mm would account for this twice.  It seems like you are correct to set an "infinite" detector thickness.