I'm trying to understand how the emitted characteristic X-ray counts in DTSA-II's simulation report table relate to the counts in the simulated spectrum.
I carried out a Monte Carlo bulk-sample simulation of SiO2 and am having trouble matching the emitted characteristic oxygen and silicon X-ray counts in the simulation report table to those in the simulated spectrum. For example, using the Si(Li) detector provided with DTSA-II, the O X-ray counts in the spectrum are significantly higher (~195,000) than in the table (~105,000, based on summing O K-L2 and O K-L3). Are they not meant to be roughly the same?
The spectrum counts were estimated using approximate keV peak ranges and the "Integrate peak (background corrected)" tool. I have attached the spectrum bitmap and the report HTML to this message.
I started looking into this because the same pattern occurs with the detector file I created for our SEM. Any help would be appreciated!
Thank you,
Haydee
The short answer is that the tabulated values are reported as flux per milli-steradian and don't account for detector efficiency.
If you account for the solid angle subtended by the detector and the efficiency of the detector (the fraction of incident X-rays that are measured, as a function of X-ray energy), the numbers will agree.
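To make the conversion concrete, here is a rough sketch of the arithmetic. All of the numeric values below (detector area, distance, efficiency at O K) are hypothetical placeholders, not the actual parameters of the bundled Si(Li) detector:

```python
# Sketch: converting a tabulated emitted intensity (per milli-steradian)
# into expected detected counts. All numbers are hypothetical placeholders;
# substitute your detector's actual geometry and efficiency curve.

detector_area_mm2 = 10.0     # hypothetical active crystal area
detector_distance_mm = 40.0  # hypothetical sample-to-detector distance

# Solid angle via the small-angle approximation (area / distance^2),
# converted from steradians to milli-steradians to match the table units.
solid_angle_msr = 1000.0 * detector_area_mm2 / detector_distance_mm**2

# Hypothetical detector efficiency at O K (~0.525 keV); at low energies,
# window absorption keeps the efficiency well below unity.
efficiency_o_k = 0.35

# Tabulated emitted O K intensity from the report (per milli-steradian).
tabulated_per_msr = 105_000.0

expected_counts = tabulated_per_msr * solid_angle_msr * efficiency_o_k
print(f"Expected detected O K counts: {expected_counts:,.0f}")
```

Note that with a detector subtending a few milli-steradians, the product of solid angle (in msr) and efficiency can exceed one, which is why the counts in the spectrum can come out higher than the per-milli-steradian entry in the table.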
The tabulated data are intended to expose the generated and emitted intensities in a form that is useful for types of measurement other than EDS, such as WDS.