Recent posts

#1
Discussion of General EPMA Issues / Gun Scans
Last post by Joe Boesenberg - November 09, 2024, 02:59:10 PM
Hi All,

Both the Cameca and JEOL microprobes have a feature called "gun scan" (Cameca has both "gun scan high" and "gun scan low"), which allows you to see and image the electron distribution (I think) at a certain point in the column. For the Cameca you can image near the filament (for the high) and then much lower in the column after the beam has been focused. The JEOL appears to only have one gun scan, and I think it is only the one near the filament. Can anybody explain to me how this is done and what exactly I am looking at? I have asked both Cameca and JEOL engineers and no one seems to be able to answer this question. I know it requires the secondary electron detector, but I am not sure how.   Thanks Joe
#2
Discussion of General EPMA Issues / Re: Light Element Crystal Refr...
Last post by John Donovan - November 08, 2024, 10:12:45 AM
Two questions on JEOL LDE crystals:

1. What is the 2d spacing for the JEOL LDEC? I see 100A but also 200A elsewhere. Which is correct?  😕

2. What is the refractive index (k) for LDEB?  The default K value (0.01) I have for LDEB is clearly too low! 
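
For context on why the 2d value matters, here is a quick first-order Bragg's law sketch (a hedged illustration only: it assumes a generic spectrometer sin(θ) range of roughly 0.2 to 0.8, ignores the refractive index correction, and uses approximate Kα wavelengths):

# Minimal sketch: first-order Bragg coverage (lambda = 2d * sin(theta), n = 1)
# for the two quoted 2d values. The sin(theta) range is an assumption; check
# your spectrometer. The refractive index (k) correction is ignored here.
SIN_THETA_MIN, SIN_THETA_MAX = 0.2, 0.8

# Approximate K-alpha wavelengths in Angstroms
lines = {"O Ka": 23.6, "N Ka": 31.6, "C Ka": 44.7, "B Ka": 67.6, "Be Ka": 114.0}

for two_d in (100.0, 200.0):
    lo, hi = two_d * SIN_THETA_MIN, two_d * SIN_THETA_MAX
    reachable = [name for name, wl in lines.items() if lo <= wl <= hi]
    print(f"2d = {two_d:.0f} A covers {lo:.0f}-{hi:.0f} A: {reachable}")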
#3
Probe for EPMA / Re: Latest version changes for...
Last post by John Donovan - November 08, 2024, 06:41:55 AM
As many of you know, Probe for EPMA can display the acquired analysis positions on backscatter, secondary or CL images acquired in Probe for EPMA:

https://smf.probesoftware.com/index.php?topic=138.0

Recently a colleague of ours noted that in a very large probe run with over 4000 data points, if the "All Positions" option was selected, it took 10 or 15 seconds for all the points to load.

The culprit turned out to be the call to load the beam sizes for each analysis position, so we modified the code to speed things up; the display of the analysis positions now only takes a few seconds or less.
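
As a rough illustration of the pattern (hypothetical Python, not the actual Probe for EPMA code; the function and table names are assumptions): the slow path issues one lookup per analysis position inside the display loop, while the fix amounts to fetching all the beam sizes in a single batched call before plotting.

# Hypothetical sketch of the speed-up described above (not PfE source code).
import sqlite3

def load_beam_sizes_batched(db_path, position_ids):
    """Fetch beam sizes for all analysis positions in one query, instead of
    issuing one query per position inside the display loop."""
    placeholders = ",".join("?" for _ in position_ids)
    query = f"SELECT position_id, beam_size FROM positions WHERE position_id IN ({placeholders})"
    with sqlite3.connect(db_path) as conn:
        return dict(conn.execute(query, position_ids).fetchall())

# Slow pattern (one round trip per point, ~4000 times):
#   for pid in position_ids:
#       draw_point(pid, query_beam_size(db_path, pid))   # per-point call (hypothetical)
#
# Fast pattern (one round trip, then draw):
#   sizes = load_beam_sizes_batched(db_path, position_ids)
#   for pid in position_ids:
#       draw_point(pid, sizes[pid])                      # draw_point is hypothetical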

The same applies to the display of positions in the PictureSnap window as shown here:

https://smf.probesoftware.com/index.php?topic=14.msg12850#msg12850

Whether they are already acquired analysis positions from the Analyze! window or digitized positions from the Automate! window, you should see a much faster display of the stage positions with the latest version of Probe for EPMA.

Update as usual from the Help menu...
#4
CalcZAF and Standard / Importing standard data as ASC...
Last post by Alejandro Cortes - November 07, 2024, 03:59:33 AM
Hi everyone,

We recently installed PfE at NHM in London. Although the Standard app has an option to load standard info from JEOL, the JEOL files do not have as much info as I would like them to include. So I wrote a Python script that I am happy to share here if somebody wants to use it. It is based on the way we are now storing standard data at NHM, but it can be easily adjusted if needed (I tried to comment the script as clearly as possible). If you find any issues, let me know.

I think attaching a script might be flagged, so I just copy-paste it here.

Edit by John to paste script into a code control:

# ============================================================================
# This python script helps extracting information from a standard database in csv format
# then the data is written in the format that can be read by PfE
# ============================================================================

# ============================================================================
# Example of Standard Format in PfE ASCII .DAT File:
# This block explains the format of the ASCII .DAT file that the script generates.

# 100            "Calcite"          : std number in PfE, standard name in ""
# "CaCO3"                            : the following six lines are just additional standard info
# Locality: WealGrey_Cornwall
# ID_EPMA: A01
# ID_EPMA2: CAL2_221
# BM: BM32890
# Reference: ChemLab_11076"
# -1            8                  : flag to calculate as oxides (-1) or as elements (0),
#                                      the number of elements in the standard
# 2.71                              : standard density
#    "Mg"  "O"  "Ca"  "P"  "Fe"  "Mn"  "Ti"  "C"                : list of elements in the standard in ""
#      1      1      1      2      1      1      1      1      : cations in oxide or 1 if element
#      1      0      1      5      1      1      2      2    : oxygens in oxide or 0 if element
#  0.082  47.960  37.450  0.017  0.024  0.996  0.022  12.000  : element wt.% in the standard
# "Carbonate"        0            0            ""            "3"      : standard type in "", -1 calculate formula/ 0 doesn't,
#                                                                      number of cations per formula unit (cat p.f.u), formulas,
#                                                                      and standard block in ""
#                                                                      (Note: formula calculations are not included in this code, so these values are set to 0)
# ============================================================================

# ============================================================================
# Example of NHM Standard Database CSV Structure:
# The CSV structure should include the following columns but they can be changed accordingly:
# Name, Formula, Density, ID_EPMA, ID_PfE, ID_Old, BM, Type, Block, Locality, Reference, Mg... (and other elements)
# Example entry (for Calcite):
# Calcite,CaCO3,2.71,A01,100,CAL2_221,BM32890,Carbonate,3,WealGrey_Cornwall,ChemLab_11076,0.0816
# Example entry (for Chromium):
# Chromium,Cr,7.19,B01,101,PCR_117,Element,6,Synthetic,ChemLab
# When there is no reported information or value for an argument, it will be shown as empty (which is read as NaN).
# ============================================================================

# ============================================================================
# Libraries needed to run the script (pandas is used to create dataframes)
# ============================================================================
import pandas as pd

# ============================================================================
# Function: convert_ascii_std
# Purpose: Converts a single row of standard data into a PfE-readable ASCII .DAT format.
# Arguments:
#  - row: The row containing standard information from the input CSV.
#  - element_composition: A list of column names representing element compositions (weight %).
#  - cat: A series representing the number of cations for each element.
#  - oxy: A series representing the number of oxygens for each element.
# Returns:
#  - A formatted string containing the .DAT content for a single standard entry.
# ============================================================================
def convert_ascii_std(row, element_composition, cat, oxy):
    # --- Extract Standard Metadata ---
    name = row["Name"]              # Name of the standard
    formula = row["Formula"]        # Formula of the standard
    density = row["Density"]        # Density of the standard
    id_pfe = row["ID_PfE"]          # Standard number in PfE
    id_EPMA = row["ID_EPMA"]        # EPMA ID of the standard
    id_Old = row["ID_Old"]          # Old ID (if available)
    BM = row["BM"]                  # Museum entry number (if applicable)
    type = row["Type"]              # Type of standard (e.g., carbonate, silicate, etc.)
    block = row["Block"]            # Block number in the database
    locality = row["Locality"]      # Locality or source of the standard
    reference = row["Reference"]    # Reference or source information
   
    # --- Extract Element Composition Data ---
    wt_Element = row[element_composition]  # Weight percentages of elements in the standard
   
    # Remove elements that have zero values in the weight percent (i.e., ignore them)
    non_zero_elements = wt_Element[wt_Element > 0]
   
    # --- Filter Cations and Oxygens for Non-Zero Elements ---
    cat_clean = cat[wt_Element > 0]  # Corresponding cation values for non-zero elements
    oxy_clean = oxy[wt_Element > 0]  # Corresponding oxygen values for non-zero elements

    # --- Determine Flag for Oxides vs Elements ---
    display_ox = -1 if "O" in row and row["O"] > 0 else 0  # Flag: -1 for oxides, 0 for elements

    # --- Count Non-Zero Elements ---
    elements_shown = len(non_zero_elements)  # Number of elements with non-zero weight percentages
   
    # --- Generate Element Symbols List ---
    element_symbols = [f'"{elem}"' for elem in non_zero_elements.index]  # List of element symbols for non-zero elements
   
    # --- Generate Element Content Data ---
    element_content = [f'{val:.3f}' for val in non_zero_elements]  # Weight % of each element formatted to 3 decimals
    cat_values = [f'{val}' for val in cat_clean]  # Cation values
    oxy_values = [f'{val}' for val in oxy_clean]  # Oxygen values
   
    # --- Construct ASCII .DAT Content ---
    output = f'''
{id_pfe}            "{name}"
"{formula}
Locality: {locality}
ID_EPMA: {id_EPMA}
ID_EPMA2: {id_Old}
BM: {BM}
Reference: {reference}"
{display_ox}            {elements_shown}
{density:.2f}
'''
    # Add the element symbols (e.g., "Mg", "Ca", etc.)
    output += "  " + "  ".join(element_symbols) + "\n"
   
    # Add the corresponding cations for the elements
    output += "      " + "      ".join(cat_values) + "\n"
   
    # Add the corresponding oxygens for the elements
    output += "      " + "      ".join(oxy_values) + "\n"
   
    # Add the weight % content for the elements
    output += "  " + "  ".join(element_content) + "\n"
   
    # Add additional standard details (e.g., type, formula, block)
    output += f'"{type}"        0            0            "{formula}"            "{block}"'
   
    return output

# ============================================================================
# Function: process_csv
# Purpose: Reads an input CSV file and generates a single .DAT file for each standard.
# Arguments:
#  - input_csv: Path to the input CSV file containing the standard information.
#  - output_file: Path to the output .DAT file where the converted data will be saved.
#  - catox_csv: Path to the CSV file containing cation and oxygen data for each element.
# ============================================================================
def process_csv(input_csv, output_file, catox_csv):
    # --- Read Input CSV Files ---
    df = pd.read_csv(input_csv)  # Standard information CSV (contains name, formula, etc.)
    dfcatoxy = pd.read_csv(catox_csv)  # Cation and oxygen data CSV (contains element cations and oxygens)
   
    # --- Identify Element Composition Columns ---
    element_composition = df.columns[11:]  # The columns containing the element composition data (adjust if needed)
   
    # --- Extract Cation and Oxygen Data ---
    cat = dfcatoxy.iloc[0]  # Cation values (first row of the cation/oxygen file)
    oxy = dfcatoxy.iloc[1]  # Oxygen values (second row of the cation/oxygen file)
   
    # --- Open Output File for Writing ---
    with open(output_file, 'w') as out_file:
        # Iterate over each row in the DataFrame and convert to ASCII format
        for idx, row in df.iterrows():
            # Generate the .DAT content for the current row
            dat_content = convert_ascii_std(row, element_composition, cat, oxy)
            # Write the generated content to the output file
            out_file.write(dat_content)
   
    # --- Remove First Blank Line ---
    # This step removes the first blank line that may cause issues when reading the file in PfE
    with open(output_file, 'r') as file:
        lines = file.readlines()
   
    with open(output_file, 'w') as file:
        file.writelines(lines[1:])  # Rewrite the file without the first blank line

    print(f"Generated {output_file} with the first line removed")

# ============================================================================
# Main Execution Block
# Purpose: Process the input CSV and generate the output .DAT file for PfE.
# ============================================================================
# Define input and output file paths
input_csv = 'EPMA_Standards.csv'  # Replace with the actual file path to your standard CSV
catox_csv = 'cat_ox.csv'  # Replace with the actual file path to your cation/oxygen data CSV
output_file = 'standardsforPfE.dat'  # Output file name for the converted .DAT file

# Process the CSV and generate the .DAT file
process_csv(input_csv, output_file, catox_csv)
#5
Discussion of General EPMA Issues / Re: New method for calibration...
Last post by John Donovan - November 06, 2024, 01:52:41 PM
With regard to Probeman's post above, we've updated our Constant K-Ratio method document, which is attached below (remember to log in to see attachments), to emphasize the importance of constructing the k-ratio with both the primary and secondary standards measured at the same beam current for each beam current measurement.
#6
Discussion of General EPMA Issues / Re: New method for calibration...
Last post by Probeman - November 06, 2024, 01:31:57 PM
Recently, a colleague of ours was acquiring some standards to check their dead time calibrations and had some difficulties with the k-ratios. It turned out that they had acquired the standards in the wrong order!  Allow me to explain:

When these k-ratios are constructed, the intensity of the primary standard is utilized for the denominator and the intensity of the secondary standard is utilized for the numerator as usual. However, in Probe for EPMA the default is to interpolate the intensities, based on acquisition time, between two standardizations (one primary standard acquisition before the secondary standard and one after).

But with the constant k-ratio method we do not want to interpolate intensities acquired at two different beam current conditions, in case there is any non-linearity of the picoammeter; therefore we want to turn off the standard intensity drift correction in the Analytical | Analysis Options menu dialog.

When the standard intensity drift correction is turned off, the software only looks for a single standard acquired *before* the secondary standard (or unknown), and does not attempt a drift interpolation.

But this means that the primary standard must be acquired *before* the secondary standard and at the same beam current!  Makes sense, right?  :)
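
A minimal sketch of the two behaviors (hypothetical Python, not PfE code): with the drift correction on, the primary intensity in the denominator is time-interpolated between the bracketing standardizations; with it off, only the most recent primary standardization acquired before the secondary measurement is used.

# Hypothetical sketch of the k-ratio denominator logic described above (not PfE source).
def primary_intensity(std_acqs, t_secondary, drift_correction=False):
    """std_acqs: list of (time, intensity) for the primary standard, sorted by time.
    Returns the primary standard intensity used for a secondary measurement at t_secondary."""
    before = [(t, i) for t, i in std_acqs if t <= t_secondary]
    after = [(t, i) for t, i in std_acqs if t > t_secondary]
    if not before:
        raise ValueError("Primary standard must be acquired before the secondary standard")
    t0, i0 = before[-1]                  # most recent standardization acquired before
    if drift_correction and after:
        t1, i1 = after[0]                # first standardization acquired after
        frac = (t_secondary - t0) / (t1 - t0)
        return i0 + frac * (i1 - i0)     # interpolate on acquisition time
    return i0                            # drift correction off: prior standard only

# Constant k-ratio check (drift correction off, same beam current for both):
# k = I_secondary / primary_intensity(primary_acqs, t_secondary)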

Here is a visual example of the order of acquisition required if the primary standard is Si metal and the secondary standard is SiO2:



However, if the SiO2 standard is listed first in the Automate! window, one could simply assign the SiO2 as the primary standard and allow the Si to be the secondary standard. The actual value of the k-ratio isn't important, as we only want to confirm that the k-ratio is constant as a function of beam current.
#7
Probe Image / Re: Latest version changes for...
Last post by John Donovan - November 06, 2024, 10:23:38 AM
One of our users recently had an issue where the x-ray maps acquired in Probe Image weren't being saved automatically.

It turns out the user had entered some invalid characters into the sample names, and Windows didn't like that.  For example, you shouldn't use the following characters in file names: "\", "/", ":", "&", etc.

https://www.mtu.edu/umc/services/websites/writing/characters-avoid/

If any of these invalid characters are entered by the user, Probe Image will now automatically convert them to allowed characters.  Funny that we'd never run into this issue before.
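
A rough sketch of this kind of sanitizing step (hypothetical Python; Probe Image itself is not written in Python, and the character list and replacement character are assumptions):

# Hypothetical sketch of sanitizing a sample name for use in a Windows file name.
INVALID_CHARS = '<>:"/\\|?*&'   # characters disallowed (or avoided) in file names in this sketch

def sanitize_sample_name(name, replacement="_"):
    """Replace characters that are invalid in Windows file names."""
    return "".join(replacement if c in INVALID_CHARS else c for c in name)

print(sanitize_sample_name("olivine: core/rim"))   # -> "olivine_ core_rim"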

Please update to v. 1.4.8 of Probe Image:

https://www.probesoftware.com/resources/

Contact us at Probe Software if you need the unzip password...
#8
Probe for EPMA / Re: Bulk import wavescans into...
Last post by John Donovan - November 06, 2024, 07:48:26 AM
Quote from: Jens Andersen on November 06, 2024, 06:56:12 AMThanks Jon, that's very helpful. I will proceed with that model for now, it sounds like a good way for me. When you load a file setup, does it automatically load the wavescans as well?

Using Load File Setup only loads the selected sample setup and, optionally, the standards.  Wavescans can still be loaded one at a time from another run, as you are already doing, I suspect.

I think what Jon is suggesting is that instead of using Load File Setup, you create a "base" run with your sample setup, then peak all the elements, and acquire all your wavescans and optionally your standards. 

Then when you want to start a new run, use the File | Save As menu to save that "base" run to a new probe run. That gives you a new run with your sample setup and wavescans (and optionally any standards).

Then in the new run you can re-peak if you want, and acquire standards. Sometimes only the major element standards need to be re-peaked/re-acquired, because for trace elements the standards aren't the most important thing; rather, the backgrounds on the unknowns are much more important. MAN background standards also tend to be stable since they are not acquired on a peak.  See here for more on this topic:

https://smf.probesoftware.com/index.php?topic=1535.msg12121#msg12121

And also my webinar on YouTube:

https://www.youtube.com/watch?v=mOca7-G4FvQ&t=25s

The idea is that since the wavescans are really there only for avoiding off-peak interferences, they usually don't need to be re-peaked and re-acquired, whereas major element standards should be re-peaked before re-acquiring, as they can shift slightly, especially if there have been some crystal flips since the "base" run was created.

Finally, for the next new run on another project, again open the "base" run, do File | Save As, and re-peak and re-standardize as necessary in the new file. 

Of course, one can also create multiple "base" probe runs for different sample types, e.g., silicates, sulfides, Fe alloys, etc. And of course one can always add or delete elements as needed in the new probe run.
#9
Probe for EPMA / Re: Bulk import wavescans into...
Last post by Jens Andersen - November 06, 2024, 06:56:12 AM
Thanks Jon, that's very helpful. I will proceed with that model for now, it sounds like a good way for me. When you load a file setup, does it automatically load the wavescans as well?
#10
Probe for EPMA / Re: Bulk import wavescans into...
Last post by JonF - November 06, 2024, 06:51:06 AM
Quote from: Jens Andersen on November 06, 2024, 05:22:51 AMPerhaps I should do a single file with all of the standards and wavescans (but no sample data) and use that as the template for the different variant setups that I need to do (with "save as" to a different file name).

This is similar to how I work: I have a "standard silicate" MDB file with all the elements (inc bias/gain settings, BG positions, standards etc) I'm likely to be interested in for a routine silicate run, plus wavescans over the standards.

For a new run, I'll import the file setup via the Load File Setup button in New Sample/Setup in the Acquire! window, remove any elements I don't need, re-peak (and check PHA) and then re-standardise. You do get the option to import the standard intensities too, but they'll likely be well out of date by that point.