I am not sure where this discussion should go, but I thought it would be nice to have a section on post-processing data analysis and possibly automation of these tasks. I am sure we all have scripts and other programmes we use to look at the data. Ben has started this with his postings of some scripts:
Beam diameter:
http://smf.probesoftware.com/index.php?topic=775.msg4814#msg4814
Surfer script to convert grid images for imagej:
http://smf.probesoftware.com/index.php?topic=739.msg4816#msg4816
Searchable Excel L-value table:
http://smf.probesoftware.com/index.php?topic=537.msg2996#msg2996
In this spirit I have attached some of the R code I use for k-means clustering and plotting of data. A lot of this is fairly basic, but it should save others time when trying to input and output data using R: https://www.r-project.org/
Included is k-means clustering (already part of PfE) but also cluster validation. This is a tricky subject, but it is important when determining what k should be in k-means clustering. Currently I have no clear answer on how best to determine k, but there are excellent examples here: http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters. Personally, as you will see from the attached script, I use the within-groups sum of squares (the elbow method) for this. You will see that there are many others; however, most do not work on image datasets, as the calculations cannot be done on a desktop computer due to the sheer size of the data. It can also be a good idea to look at the clusters in more than one way, so I have added a dendrogram at the end of the script.
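To give a flavour of the elbow method, here is a minimal sketch on made-up data (the attached script applies the same idea to the map data):
set.seed(1)
# made-up data: three clusters in two dimensions
d <- rbind(matrix(rnorm(200, 0), ncol = 2),
           matrix(rnorm(200, 4), ncol = 2),
           matrix(rnorm(200, 8), ncol = 2))
# within-groups sum of squares for k = 1 to 10
wss <- sapply(1:10, function(k) kmeans(d, centers = k, nstart = 10)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "number of clusters k",
     ylab = "within-groups sum of squares")  # look for the elbow (here k = 3)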
I hope that this will promote discussion and collaboration, but it could also be a resource for students.
Best wishes
Gareth
Quote from: Gareth D Hatton on September 27, 2016, 02:05:27 AM
I am not sure where this discussion should go, but I thought it would be nice to have a section on post-processing data analysis and possibly automation of these tasks. I am sure we all have scripts and other programmes we use to look at the data. Ben has started this with his postings of some scripts:
Beam diameter:
http://smf.probesoftware.com/index.php?topic=775.msg4814#msg4814
Surfer script to convert grid images for imagej:
http://smf.probesoftware.com/index.php?topic=739.msg4816#msg4816
Searchable Excel L-value table:
http://smf.probesoftware.com/index.php?topic=537.msg2996#msg2996
In this spirit I have attached some of the R code I use for k-means clustering and plotting of data. A lot of this is fairly basic, but it should save others time when trying to input and output data using R: https://www.r-project.org/
Included is k-means clustering (already part of PfE) but also cluster validation. This is a tricky subject, but it is important when determining what k should be in k-means clustering. Currently I have no clear answer on how best to determine k, but there are excellent examples here: http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters. Personally, as you will see from the attached script, I use the within-groups sum of squares (the elbow method) for this. You will see that there are many others; however, most do not work on image datasets, as the calculations cannot be done on a desktop computer due to the sheer size of the data. It can also be a good idea to look at the clusters in more than one way, so I have added a dendrogram at the end of the script.
I hope that this will promote discussion and collaboration, but it could also be a resource for students.
Hi Gareth,
This is really informative and helpful, but also very heady stuff! :o
It's going to take me a while to digest it all!
john
After a conversation with Ben Buse I have had a crack at loading .grd files into R, and have the following, which was surprisingly simple:
library(rgdal)
# read a Surfer .grd map and write it back out as a GeoTIFF
grd <- readGDAL("C:/Filepath/Sample_imageNo_WDS1_Al_TAP__Quant.grd")
writeGDAL(grd, fname = "C:/Filepath/file.tif", drivername = "GTiff", type = "Float32")
These files are readable by ImageJ.
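If you have a whole folder of maps, the conversion is easy to automate; a minimal sketch, assuming all the .grd files sit in one directory:
library(rgdal)
indir <- "C:/Filepath"  # folder containing the .grd maps
for (f in list.files(indir, pattern = "\\.grd$", full.names = TRUE)) {
  grd <- readGDAL(f)
  writeGDAL(grd, fname = sub("\\.grd$", ".tif", f),
            drivername = "GTiff", type = "Float32")
}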
What I really want is an automated way of getting these images into an RGB file in R.
I have had a crack at it using the best tool (Google) and came up with:
#import tiffs
library(raster)
AAl <- "C:/Filepath/AAl.tif"
ACe <- "C:/Filepath/ACe.tif"
ABa <- "C:/Filepath/ABa.tif"
# stack the three element maps and display them as an RGB composite
rgbRaster <- stack(AAl, ACe, ABa)
plotRGB(rgbRaster, r=3, g=2, b=1, scale=800, stretch="Lin")
The problem I now have is that these should be square images and they are not, as can be seen in the attached image. I cannot find anything on the internet to help - has anyone got any ideas? I suspect it is something to do with the coordinates...
Quote from: Gareth D Hatton on September 29, 2016, 09:12:05 AM
The problem I now have is that these should be square images and they are not, as can be seen in the attached image. I cannot find anything on the internet to help - has anyone got any ideas? I suspect it is something to do with the coordinates...
Could it be a page size (orientation) issue? Can you specify "Landscape" mode and see if it elongates the image in the other direction?
john
Quote from: Gareth D Hatton on September 27, 2016, 02:05:27 AM
Included is k-means clustering (already part of PfE) but also cluster validation. This is a tricky subject, but it is important when determining what k should be in k-means clustering. Currently I have no clear answer on how best to determine k, but there are excellent examples here: http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters. Personally, as you will see from the attached script, I use the within-groups sum of squares (the elbow method) for this. You will see that there are many others; however, most do not work on image datasets, as the calculations cannot be done on a desktop computer due to the sheer size of the data. It can also be a good idea to look at the clusters in more than one way, so I have added a dendrogram at the end of the script.
Hi Gareth,
Am I right in saying that with k-means clustering the trick is to identify the cluster centres, and that this can either be done with an algorithm as in PFE (where you specify the number of clusters), or by telling it the cluster centres? For an unknown material you want k-means to identify the clusters and run without any prior knowledge - which PFE is good at. With k-means clustering I think the outcome is largely determined by the identification of the initial clusters. I wondered, if you already know the clusters or phases present, whether you could specify the clusters and then run the k-means clustering to identify the phases within the sample.
Many rock samples are complex to the algorithm but simple to the geologist. You get mixed analyses from grain boundaries, phases that are too small at the analytical conditions, and phases which occupy a small area proportion (a small proportion of the dataset) but are crucial for interpretation - so with more clusters you don't always get the subdivisions you're looking for. But the geologist knows what phases to look for, or should do! Is the k-means cluster identification weighted towards the highly abundant phases?
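For what it's worth, base R's kmeans() does accept a matrix of starting centres in place of a plain k, so known phase compositions could seed the clustering - a minimal sketch on made-up two-element data, not real maps:
set.seed(1)
# made-up pixels in (Si, Fe) wt.% space: one abundant phase, one sparse phase
pixels <- rbind(cbind(rnorm(950, 25, 1), rnorm(950, 5, 1)),
                cbind(rnorm(50, 18, 1), rnorm(50, 30, 1)))
# seed the run with approximate known phase compositions instead of random starts
seeds <- rbind(c(25, 5), c(18, 30))
km <- kmeans(pixels, centers = seeds)
table(km$cluster)  # the sparse phase keeps its own cluster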
Thanks
Ben
Quote from: Gareth D Hatton on September 29, 2016, 09:12:05 AM
The problem I now have is that these should be square images and they are not, as can be seen in the attached image. I cannot find anything on the internet to help - has anyone got any ideas? I suspect it is something to do with the coordinates...
Hi Gareth,
I've finally got round to checking, and using either plot or levelplot I get the correct aspect ratio (the greyscale is the tif from ProbeImage).
(https://smf.probesoftware.com/gallery/453_23_03_17_12_57_49.png)
On the left is levelplot, which gives the nicer image - and it's not flipped.
The code was:
library(rgdal)
library(raster)
setwd('O:\\Documents\\PFE_Data\\admin\\PFE_1-5')
grd <- readGDAL('1-5_00004_WDS1_Ca_PETJ__Quant.grd')
rgrd <- raster(grd)
# base plot, forcing a 1:1 aspect ratio
plot(rgrd, axes=T, asp=1, col=heat.colors(256))
png('plot.png')
plot(rgrd, axes=T, asp=1, col=heat.colors(256))
dev.off()
library(rasterVis)
# levelplot, with the axis limits reversed so the image is not flipped
levelplot(rgrd, margin=FALSE, ylim=c(ymax(rgrd), ymin(rgrd)), xlim=c(xmax(rgrd), xmin(rgrd)))
png('levelplot.png')
levelplot(rgrd, margin=FALSE, ylim=c(ymax(rgrd), ymin(rgrd)), xlim=c(xmax(rgrd), xmin(rgrd)))
dev.off()
I did find, when extracting the data for points plotted on the map, that I needed to set the coordinate system so that I could specify the radius of the points extracted.
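A sketch of that step: assigning any planar coordinate system means the buffer radius in extract() is taken in map units (the file name, CRS and radius here are made up):
library(rgdal)
library(raster)
grd <- readGDAL("map.grd")                        # hypothetical map file
r <- raster(grd)
crs(r) <- CRS("+proj=utm +zone=30 +datum=WGS84")  # arbitrary planar CRS
pts <- cbind(x = 0.5, y = 0.5)                    # a point on the map (map units in mm)
extract(r, pts, buffer = 0.01, fun = mean)        # mean within a 0.01 mm (10 um) radius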
Hi,
I had a request regarding the code I used to generate principal component maps in the paper "Evaluating X-Ray Microanalysis Phase Maps Using Principal Component Analysis" https://doi.org/10.1017/S1431927618000090, which addresses the questions raised above. Principal component maps are a great way of checking whether the phase-mapping algorithm has done a good job of characterising the sample. Here's the method I used for creating an RGB image of the first three principal components.
It's not very elegant.
First I converted the maps into x,y,intensity text files by loading them into ImageJ and saving as XY coordinates; these I then imported into R. (You can also import grid files into ImageJ directly - see the posts above.) Using the oxide map files is best (it saves converting to oxide).
MapTi<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS5_Ti_PETL__Quant.txt",sep="\t")
MapSi<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS1_Si_TAP__Quant.txt",sep="\t")
MapNa<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS2_Na_TAP__Quant_2.txt",sep="\t")
MapCa<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS3_Ca_PETH__Quant.txt",sep="\t")
MapFe<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS4_Fe_LIFH__Quant.txt",sep="\t")
MapMg<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS1_Mg_TAP__Quant.txt",sep="\t")
MapAl<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS2_Al_TAP__Quant.txt",sep="\t")
MapK<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS3_K_PETH__Quant.txt",sep="\t")
MapMn<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS4_Mn_LIFH__Quant.txt",sep="\t")
# combine the element maps into one matrix (one column per element)
Maps<-cbind(MapNa[,3],MapMg[,3],MapAl[,3],MapSi[,3],MapK[,3],MapCa[,3],MapTi[,3],MapFe[,3],MapMn[,3])
pcaMaps<-prcomp(Maps)
# write each of the first three PC score maps back out as x,y,score text files
xyp<-matrix(nrow=length(pcaMaps$x[,1]),ncol=3)
xyp[,1]<-MapCa[,1]
xyp[,2]<-MapCa[,2]
xyp[,3]<-pcaMaps$x[, 1]
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_1.txt",sep="\t")
xyp[,3]<-pcaMaps$x[, 2]
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_2.txt",sep="\t")
xyp[,3]<-pcaMaps$x[, 3]
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_3.txt",sep="\t")
Then in Surfer: Grid from Data, save the grid file as .img, then import into ImageJ, create a stack and make a composite image.
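An alternative sketch that stays entirely in R (reusing the MapCa and pcaMaps objects above): rebuild the first three PC score maps as rasters, stretch each to 0-255 and write a single RGB GeoTIFF, skipping the Surfer round trip.
library(raster)
# rebuild one PC score map as a raster from the x,y coordinates
pc_raster <- function(pc) rasterFromXYZ(data.frame(x = MapCa[, 1], y = MapCa[, 2],
                                                   z = pcaMaps$x[, pc]))
# stretch a raster to 0-255 for display
to8bit <- function(r) {
  mn <- cellStats(r, "min"); mx <- cellStats(r, "max")
  calc(r, function(x) round(255 * (x - mn) / (mx - mn)))
}
rgbPC <- stack(to8bit(pc_raster(1)), to8bit(pc_raster(2)), to8bit(pc_raster(3)))
plotRGB(rgbPC, r = 1, g = 2, b = 3)
writeRaster(rgbPC, "pca_rgb.tif", format = "GTiff", datatype = "INT1U", overwrite = TRUE)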
Ben
Hi Ben,
Let me make sure I understand what you're doing.
So you're taking the oxide wt.% results from CalcImage, converting them into phases using PCA, and then taking the first three PCA components and displaying them as a composite RGB?
Is that correct?
john
Hi John,
No - I convert the maps into phases using either KNN (k-nearest neighbour) or k-means clustering. Then I do principal component analysis and plot the first three principal components (PCs) as an RGB PC map to graphically show the chemical variation. Then I can compare the RGB PC map to the phase map and see whether the phase map accurately identifies the chemical variation, or whether it fails to identify minor phases which can be seen on the RGB PC map. The idea is to have a method of assessing how well the phase classification algorithm performs on a sample.
Ben
Quote from: Ben Buse on April 26, 2018, 03:47:08 AM
Hi John,
No - I convert the maps into phases using either KNN (k-nearest neighbour) or k-means clustering. Then I do principal component analysis and plot the first three principal components (PCs) as an RGB PC map to graphically show the chemical variation. Then I can compare the RGB PC map to the phase map and see whether the phase map accurately identifies the chemical variation, or whether it fails to identify minor phases which can be seen on the RGB PC map. The idea is to have a method of assessing how well the phase classification algorithm performs on a sample.
Ben
Hi Ben,
OK, that makes sense. Have you ever tried our modified k-means clustering method in CalcImage, as described here:
http://smf.probesoftware.com/index.php?topic=1071.0
How does it compare to what you're doing, at least in the first steps of your procedure?
john
Hi John,
Yes, that's what spurred on the investigation. Many papers show that k-means clustering works well where you have spherical clusters of similar size; problems come in when you deviate from this, particularly where you're using random initial cluster positions, as the result is strongly dependent on the initial cluster positions. What I found was that it did a good job for the large (abundant) phases, but the small and sparse phases were sometimes not recognised - I did try it on a difficult sample! Increasing the number of clusters tends to subdivide the large phases rather than identify the sparse phases. Improvements were found when initial cluster positions were specified, or maximum intensity was used for the initial positions, or KNN was used with a subset of the data. The other point is that the phase classification need not be the end of the process; further data processing can be done, subdividing phases or examining the chemical variation within phases.
I tried the sample with other proprietary phase algorithms and they also had problems identifying the minor phases whilst maintaining the image resolution. The bottom line is that no algorithm is 100% foolproof, which comes back to making a critical assessment of the results of phase classification.
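For the KNN route, a minimal sketch using the class package, on made-up data with a handful of hand-labelled training pixels (the real maps would supply the pixel rows):
library(class)
set.seed(1)
# made-up two-element map pixels: an abundant phase plus a 5% sparse phase
allPixels <- rbind(cbind(rnorm(950, 25, 1), rnorm(950, 5, 1)),
                   cbind(rnorm(50, 18, 1), rnorm(50, 30, 1)))
# a few hand-labelled training pixels per phase
trainPixels <- rbind(c(25, 5), c(25.5, 4.8), c(18, 30), c(17.8, 30.4))
trainLabels <- factor(c("phaseA", "phaseA", "phaseB", "phaseB"))
phase <- knn(train = trainPixels, test = allPixels, cl = trainLabels, k = 1)
table(phase)  # the sparse phase is recovered despite its low abundance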
Ben
Hi everyone,
A separate post-processing topic: when averaging a wide line, why do microscopy software packages use the mean?
In the example below, using ImageJ, imagine a noisy x-ray map with counts increasing upwards towards an interface. To reduce noise I want to make an averaged profile, but the mean is strongly affected by inclusions.
(https://smf.probesoftware.com/gallery/453_01_05_18_8_32_44.png)
If I take the average wide line:
(https://smf.probesoftware.com/gallery/453_01_05_18_8_35_03.png)
The profile is strongly affected by the "inclusions" because the mean is used.
(https://smf.probesoftware.com/gallery/453_01_05_18_8_36_41.png)
I've tried thresholding and setting the inclusions to NaN, but the profile won't average over/ignore the NaNs; instead it gives breaks in the profile.
Now I have a tedious piece of code, which I paste into R, that extracts a series of individual lines along an interface (adjusting for the interface angle) and calculates the median. This works well; I wondered what other people's experience was.
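A toy illustration of why the median holds up (made-up numbers, not the real map data):
# ten background pixels at ~2 wt.% plus one inclusion pixel across the wide line
pixels <- c(rep(2, 10), 40)
mean(pixels)    # 5.45 - dragged up by the single inclusion
median(pixels)  # 2    - unaffected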
Hi Ben,
I can't help you with ImageJ, but did you try running any of the "strip" map averaging methods in CalcImage?
http://smf.probesoftware.com/index.php?topic=41.msg897#msg897
http://smf.probesoftware.com/gallery/1_25_11_17_4_27_57.png
You can specify horizontal or vertical strips of a specified micron width and the Surfer software crunches through the map pixels by row (vertical) or by column (horizontal) and averages the data values.
I'm sure I'm the one missing something, but I'm just wondering if you had somehow missed this feature in CalcImage...
john
Hi John,
My main point was: when averaging, do you use the mean or the median? (For the actual project I don't use CalcImage, as I'm dealing with subareas in a map that are non-orthogonal.)
Thanks
Ben
So here's the R code (for two elements, here Zn and Si maps; provided the interface is sloping in the correct direction, just off horizontal, with the right of the image lower). It extracts a box 600 um wide and 250 um high, for a pixel spacing of 1 um. (To change from 600 um wide, change the 1:600 lines. To change from 250 um high, change the 250 for the vertical start line.) Units are in mm.
Attached is a generalised version of the code where the width and height are set using boxheight and boxwidth; it can also work in either direction from horizontal. The automatic script .r file guides you through it.
Initialize and read the maps from the .grd files:
library(rgdal)
library(raster)
library(rasterVis)
#enter directory
setwd('')
Enter the Zn and Si map grid file names:
grdZn<-readGDAL('')
grdSi<-readGDAL('')
rgrdZn<-raster(grdZn)
rgrdSi<-raster(grdSi)
Calculate the angle of the interface by clicking on two points along the interface, then right-clicking to end:
windows.options(width=20, height=10)
plot(rgrdSi,asp=1)
#grid(3,5)
bs<-click(rgrdSi,xy=T) # click on boundary in two places to determine angle
xs<-max(bs[,1])-min(bs[,1])
ys<-max(bs[,2])-min(bs[,2])
xs100<-(.001/xs)*xs #.001 is 1um - so move 100 pixels in x
ys100<-(.001/xs)*ys
a<-max(bs[,1])-min(bs[,1])
o<-max(bs[,2])-min(bs[,2])
sang<-((atan(o/a))*180)/pi
sang_rad<-atan(o/a)
Determine the start position of the extraction box [i.e. the first line to extract: 250 um long, orientated perpendicular to the interface]:
#determine vertical second point
plot(rgrdSi,asp=1)
st<-click(rgrdSi,xy=T) # click on start point
ed<-cbind(st[,1],st[,2]+.25)
st<-cbind(st[,1],st[,2])
#op<-tan(sang_rad)*0.25 #shift along slope for vertical of .25 or 250um
hop<-sin(sang_rad)*0.25#shift along slope for vertical of .25 or 250um
vop<-tan(sang_rad)*hop
#ed<-cbind(ed[,1]+op,ed[,2]-vop)
ed<-cbind(ed[,1]+hop,ed[,2]-vop)
#h<-sqrt(xs^2+ys^2) #pythagoras calculate hypotenuse which #set to step
#hxs<-(1/h)*xs
#hys<-(1/h)*ys
#ed<-cbind(ed[,1]-(hxs*op),ed[,2]-(hys*op))
#check
a<-ed[1,2]-st[1,2]
o<-ed[1,1]-st[1,1]
((atan(o/a))*180)/pi
90-((atan(o/a))*180)/pi
#create line
fL<-rbind(st,ed) #create first line
sfL<-SpatialLines(list(Lines(Line(fL),ID="a")))
p<-(levelplot(rgrdSi,at=seq(minValue(rgrdSi),5,length=100),margin=FALSE,ylim=c(ymax(rgrdSi),ymin(rgrdSi)),xlim=c(xmax(rgrdSi),xmin(rgrdSi)),main=list("Si Wt %",cex=0.8)))
p+layer(sp.lines(sfL,col="red",lwd=1))
Extract the data from the box area as a series of individual lines:
lxxyyc<-fL
for(i in 1:600){
assign(paste("L",i,sep="."),cbind(lxxyyc[,1]-xs100,lxxyyc[,2]+ys100))
assign(paste("spL",i,sep="."),SpatialLines(list(Lines(Line(eval(parse(text=paste("L",i,sep=".")))),ID="a"))))
lxxyyc<-cbind(lxxyyc[,1]-xs100,lxxyyc[,2]+ys100)
# to check extracting correct area, comment out following lines, to speed up for test extract
assign(paste("eLSi",i,sep="."),extract(rgrdSi,eval(parse(text=paste("spL",i,sep=".")))))
assign(paste("eLZn",i,sep="."),extract(rgrdZn,eval(parse(text=paste("spL",i,sep=".")))))
}
# angle between lines
o<-fL[1,1]-L.1[1,1]
a<-L.1[1,2]-fL[1,2]
((atan(o/a))*180)/pi
-90-((atan(o/a))*180)/pi
# plot lines on map
LTop<-rbind(L.1[1,],L.600[1,])
LBottom<-rbind(L.1[2,],L.600[2,])
SpTop<-SpatialLines(list(Lines(Line(LTop),ID="a")))
SpBottom<-SpatialLines(list(Lines(Line(LBottom),ID="a")))
png("Position of line extraction_box.png",height=512,width=1024,units="px")
p<-(levelplot(rgrdSi,at=seq(minValue(rgrdSi),5,length=100),margin=FALSE,ylim=c(ymin(rgrdSi),ymax(rgrdSi)),xlim=c(xmin(rgrdSi),xmax(rgrdSi)),colorkey=F))
p<-p+layer(sp.lines(eval(parse(text=paste("spL",1,sep="."))),col="red",lwd=3))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",100,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",200,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",300,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",400,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",500,sep="."))),col="red",lwd=1))
p<-p+layer(sp.lines(eval(parse(text=paste("spL",600,sep="."))),col="red",lwd=3))
p<-p+layer(sp.lines(SpTop,col="red",lwd=3))
p<-p+layer(sp.lines(SpBottom,col="red",lwd=3))
p
dev.off()
Stack the extracted lines, average them using the median, and save the results:
eLSi_stack<-eLSi.1[[1]]
for(i in 2:600){
eLSi_stack<-rbind(eLSi_stack,eval(parse(text=paste("eLSi",i,sep=".")))[[1]][])
}
eLZn_stack<-eLZn.1[[1]]
for(i in 2:600){
eLZn_stack<-rbind(eLZn_stack,eval(parse(text=paste("eLZn",i,sep=".")))[[1]][])
}
library(matrixStats)
#save extracted data
avg_res_Si<-cbind(seq(1,length(eLSi_stack[1,]),1),colMedians(eLSi_stack),colMeans(eLSi_stack))
write.table(avg_res_Si,'Si line extract median and mean.csv',row.names=F,sep=",")
avg_res_Zn<-cbind(seq(1,length(eLSi_stack[1,]),1),colMedians(eLZn_stack),colMeans(eLZn_stack))
write.table(avg_res_Zn,'Zn line extract median and mean.csv',row.names=F,sep=",")
write.table(eLSi_stack,'Si 600 lines extracted.csv',sep=",")
write.table(eLZn_stack,'Zn 600 lines extracted.csv',sep=",")
Plot the results as a graph and save:
#view
sy <-250
sx<-5
sxm<-1
windows.options(width=20, height=10)
plot(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),pch='',xlim=c(-0.3,sx),ylim=c(0,sy),ylab="distance (microns)",xlab="(wt. %)",cex.lab=1.5,cex.axis=1.5)
lines(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),col="red",lwd=3)
lines(colMedians(eLZn_stack),seq(1,length(eLSi_stack[1,]),1),col="blue",lwd=3)
legend(sx-sxm,sy,legend=c("Si","Zn"),col=c("red","blue"),lty=1:1,cex=1.5,lwd=3)
#save
png("Zn Si extract lines horiz.png",height=512,width=1024,units="px")
plot(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),pch='',xlim=c(-0.3,sx),ylim=c(0,sy),ylab="distance (microns)",xlab="(wt. %)",cex.lab=1.5,cex.axis=1.5)
lines(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),col="red",lwd=3)
lines(colMedians(eLZn_stack),seq(1,length(eLSi_stack[1,]),1),col="blue",lwd=3)
legend(sx-sxm,sy,legend=c("Si","Zn"),col=c("red","blue"),lty=1:1,cex=1.5,lwd=3)
dev.off()
Quote from: Ben Buse on May 02, 2018, 02:37:51 AM
Hi John,
My main point was: when averaging, do you use the mean or the median? (For the actual project I don't use CalcImage, as I'm dealing with subareas in a map that are non-orthogonal.)
Thanks
Ben
Hi Ben,
We're just using mean averaging in our Surfer strip scripts, but they could easily be modified for median averaging. Very fancy that you're dealing with non-orthogonal directions with your strip averaging! Cool!
john
Here are the results from the cartoon example shown above; you can see the difference between the mean and the median. (Although I realised that with the R extract function, for a non-parallel line there is a slight distortion of the distance scale, which must be corrected.)
(https://smf.probesoftware.com/gallery/453_02_05_18_9_38_59.png)
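One first-order way to correct it, sketched here assuming the eLSi_stack matrix from the script above, is to rescale the sample index so the profile spans the known 250 um line length:
library(matrixStats)
n <- length(eLSi_stack[1, ])
dist_um <- seq(0, 250, length.out = n)  # true distance axis for the 250 um lines
plot(colMedians(eLSi_stack), dist_um, type = "l",
     xlab = "(wt. %)", ylab = "distance (microns)")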
Last week at the UK JEOL Users Group meeting we were chatting about scripts in ImageJ. Now, with the Convert to ASCII function in CalcImage, it's very easy to import maps into ImageJ. I attach a simple script which opens all the '.TXT' files in a folder (i.e. the grd files converted to ASCII), adds a false colour table, an intensity scale and a colour scale, and saves the files. It currently uses the 16_colors LUT, but this can easily be changed. If you wanted to specify the scale width and the number of decimal places on the intensity scale, these could all be added as user prompts. If you don't want to close all the images at the end, remove the 'Close All' line.
input = getDirectory("Input directory");
output = getDirectory("Output directory");
suffix = ".TXT"; //you only want to apply to tiff images, no need to ask
pixelsize = getNumber("pixel size in microns", 1);
processFolder(input);
function processFolder(input) {
list = getFileList(input);
for (i = 0; i < list.length; i++) {
if(File.isDirectory(input + list[i])) //if it's a directory, go to subfolder
processFolder("" + input + list[i]);
if(endsWith(list[i], suffix)) //if it's a .TXT image, process it
processFile(input, output, list[i], pixelsize);
//if it's neither a .TXT file nor a directory, do nothing
}
}
function processFile(input, output, file,pixelsize) {
//here you put the real processing
run("Text Image... ", "open="+"'"+input+file+"'");
run("Set Scale...", "distance=1 known="+pixelsize+" pixel=1 unit=um");
w = getWidth();
h = getHeight();
w1 = w+100;
h1 = h+50;
run("16_colors");
run("Canvas Size...", "width="+w1+" height="+h1+" position=Top-Left");
setForegroundColor(253, 253, 253);
makeRectangle(w, 0, 100, h1);
run("Fill","slice");
makeRectangle(0, h, w1, h1);
run("Fill","slice");
run("Calibration Bar...", "location=[Upper Right] fill=White label=Black number=5 decimal=3 font=12 zoom=1");
scalewidth = d2s((w/2)*pixelsize,0);
run("Scale Bar...", "width="+scalewidth+" height=4 font=14 color=Black background=None location=[Lower Left]");
name = getTitle;
saveAs("Jpeg", output+name);
}
run ("Close All");
It is based on the scripts described on these webpages:
https://imagej.net/Batch_Processing
https://forum.image.sc/t/batch-processing-several-folder/6636
Ben
I've just noticed that on Ubuntu the 16-colors LUT is:
run("16 Colors");
Ben
Thanks Ben,
I understand this a lot better than the R script I posted!
Cheers
Glenn
Quote from: Ben Buse on December 19, 2018, 04:45:00 AM
Last week at the UK JEOL Users Group meeting we were chatting about scripts in ImageJ. Now, with the Convert to ASCII function in CalcImage, it's very easy to import maps into ImageJ. I attach a simple script which opens all the '.TXT' files in a folder (i.e. the grd files converted to ASCII), adds a false colour table, an intensity scale and a colour scale, and saves the files. It currently uses the 16_colors LUT, but this can easily be changed. If you wanted to specify the scale width and the number of decimal places on the intensity scale, these could all be added as user prompts. If you don't want to close all the images at the end, remove the 'Close All' line.
It is based on the scripts described on these webpages:
https://imagej.net/Batch_Processing
https://forum.image.sc/t/batch-processing-several-folder/6636
Ben
Absolute hero - you've just saved me hours of having to work this out for myself!
Following on from Ben's lead, I've been working on expanding the macro to automate the import and conversion of text images from ProbeImage into ImageJ and out again as JPEGs with scale and calibration bars.
In lieu of a change log, some of the changes for this version include:
- black background instead of white (personal preference)
- the option to open either ProbeImage ascii .TXT files or JEOL matrix map .csv exports
- choice of several lookup tables (ones that are supplied with my current version of Fiji, at least). The default LUT is the one listed first, so if you prefer to use a different default - or one not listed - just change the order they appear in the script. I'm having a bit of a cyan hot moment, myself
- the option to process and export all the maps automatically or to go through the individual maps and alter the brightness/contrast to suit, or to simply input min and max values (I assume I'm not the only person who gets asked about that one?)
- offset the scale bar slightly to stop it overlapping images (particularly small ones), and made it white on the black background
- rounded the scale bar to (hopefully) the nearest 'nice' number, e.g. a 500um scale bar instead of 512um, 1000um instead of 1024um, etc. (the code for this bit isn't the nicest, so feel free to suggest an alternative, elegant method!)
- prevented the scale bar exceeding 50% of the image width, e.g. I was getting a 10um scale bar on a 12um map, which looked a bit silly
I've also annotated the script so that it should be easy for anyone to change things to their preference.
Hopefully it should work for everyone else (I've tested it on two machines, at least...).
//ImageJ macro to format ProbeImage .txt outputs.
input = getDirectory("Input directory");
output = getDirectory("Output directory");
Dialog.create("Probe Image Processing");
Dialog.addNumber("pixel size in microns", 1);
Dialog.addChoice("Select file type:", newArray(".TXT", ".csv"));
Dialog.show();
pixelsize = Dialog.getNumber();
suffix = Dialog.getChoice();
//Set the LUT to use for the images, default Cyan Hot
Dialog.create("Probe Image Processing");
Dialog.addChoice("What lookup table (LUT) would you like to use?", newArray("Cyan Hot","16 colors","Fire","Grays","cool","mpl-viridis","Rainbow RGB","Thermal","Yellow Hot"));
Dialog.show();
LUT = Dialog.getChoice();
//Set whether to manually process brightness/contrast for each map or to run through automatically. Default yes.
Dialog.create("Probe Image Processing");
Dialog.addChoice("Would you like to process individual images?", newArray("Yes","No"));
Dialog.addCheckbox("Use preset Min/Max values?", false);
Dialog.show();
ProcLUT = Dialog.getChoice();
PresetMinMax = Dialog.getCheckbox();
if(ProcLUT == "Yes") {
setBatchMode(false);
}
else {
setBatchMode(true);
}
if(PresetMinMax == true) {
ProcLUT = "No";
}
//Run processFolder function which in turn calls processFile function
processFolder(input);
function processFolder(input) {
list = getFileList(input);
for (i = 0; i < list.length; i++) {
if(File.isDirectory(input + list[i])) //if it's a directory, go to subfolder
processFolder("" + input + list[i]);
if(endsWith(list[i], suffix)) //if it's a .txt image, process it
processFile(input, output, list[i], pixelsize);
//if it's neither a .txt nor a directory, do nothing
}
}
function processFile(input, output, file,pixelsize) {
//imports .txt file as text image
run("Text Image... ", "open="+"'"+input+file+"'");
//sets the scale according to the microns per pixel prompt above
run("Set Scale...", "distance=1 known="+pixelsize+" pixel=1 unit=um");
w = getWidth();
h = getHeight();
//add room for calibration bar
w1 = w+100;
//add room for scale bar (h1), plus 10px gap to avoid overlapping the image (h2)
h1 = h+50;
h2 = h+10;
//apply chosen LUT
run(LUT);
//if processing individual maps, wait for the user to set the min and max pixel values for each image before adding the scale bar etc.
if(ProcLUT == "Yes") {
run("Brightness/Contrast...");
title = "Adjust Brightness/Contrast";
msg = "Set the min/max for each channel and Apply, then press OK to proceed";
waitForUser(title, msg);
}
//gives the user the option to add min and max values to maps on a per map basis
if (PresetMinMax == true) {
Dialog.create("Set Min Value");
Dialog.addString("Please enter the Min value", 0);
Dialog.show();
min = Dialog.getString();
Dialog.create("Set Max Value");
Dialog.addString("Please enter the Max value", 100);
Dialog.show();
max = Dialog.getString();
}
else {
getMinAndMax(min,max); // get display range
}
setMinAndMax(min,max);
//set foreground and background colour to black (personal preference - this could be made another option)
setBackgroundColor(0, 0, 0);
setForegroundColor(0, 0, 0);
//increase the image size to make room for scale and calibration bars
run("Canvas Size...", "width="+w1+" height="+h1+" position=Top-Left");
makeRectangle(w, 0, 100, h1);
run("Fill","slice");
makeRectangle(0, h, w1, h1);
run("Fill","slice");
run("Calibration Bar...", "location=[Upper Right] fill=Black label=White number=5 decimal=2 font=12 zoom=1");
//determine scale bar size
scalewidth = d2s((w/2)*pixelsize,0);
//determine the denominator to be used to get a rounded scale bar size e.g. 500um rather than 512
if (pixelsize >= 10) {
scaledenom = 1000;
}
else if (pixelsize <=0.2) {
scaledenom = 10;
}
else scaledenom = 100;
//round the scale bar to an appropriate size
scalewidthR = scaledenom*(round(scalewidth/scaledenom));
//stops the scale bar being bigger than half the image size e.g. a 10um scale bar for a 10um map
if (scalewidthR >= ((w*pixelsize)/2)) {
scalewidthR2 = scalewidthR/2;
}
else {
scalewidthR2 = scalewidthR;
};
//creates scale bar
makeRectangle(5,h2,1,1);
run("Scale Bar...", "width="+scalewidthR2+" height=4 font=14 color=White background=None location=[At Selection]");
//name the file and save as a jpeg
name = getTitle;
saveAs("Jpeg", output+name);
}
run ("Close All");
Quote from: JonF on January 11, 2019, 09:53:14 AM
Following on from Ben's lead, I've been working on expanding the macro to automate the import and conversion of text images from ProbeImage into ImageJ and out again as JPEGs with scale and calibration bars.
Hi Jon,
Very cool. Some questions:
1. When you say "conversion of text images from ProbeImage" do you mean you are reading and converting the .PrbImg Probe Image files directly?
2. Does your code work for both Cameca and JEOL "flavors" of .PrbImg Probe Image files?
john
Quote from: Probeman on January 12, 2019, 09:33:40 AM
1. When you say "conversion of text images from ProbeImage" do you mean you are reading and converting the .PrbImg Probe Image files directly?
2. Does your code work for both Cameca and JEOL "flavors" of .PrbImg Probe Image files?
john
Nothing so fancy, I'm afraid - it simply imports the ASCII text files as exported from CalcImage (not Probe Image, my mistake!) or the CSV files as exported by the JEOL mapping software. These can be either raw intensity data or concentrations.
I use this script to replace some of the functionality of Surfer (e.g. making the pretty final images), as most users don't have it (yes, they could use the offline processing PC, but learning Surfer seems overkill for most of what people would use it for). You could then use ImageJ to export line profiles etc. - although a lot of this functionality is covered by the new polygon extraction features!
Quote from: JonF on January 13, 2019, 11:54:23 PM
Quote from: Probeman on January 12, 2019, 09:33:40 AM
1. When you say "conversion of text images from ProbeImage" do you mean you are reading and converting the .PrbImg Probe Image files directly?
2. Does your code work for both Cameca and JEOL "flavors" of .PrbImg Probe Image files?
john
Nothing so fancy, I'm afraid - it simply imports the ASCII text files as exported from CalcImage (not Probe Image, my mistake!) or the CSV files as exported by the JEOL mapping software. These can be either raw intensity data or concentrations.
I use this script to replace some of the functionality of Surfer (e.g. making the pretty final images), as most users don't have it (yes, they could use the offline processing PC, but learning Surfer seems overkill for most of what people would use it for). You could then use ImageJ to export line profiles etc. - although a lot of this functionality is covered by the new polygon extraction features!
Hi Jon,
OK, that makes sense now. That will allow everyone to process quant x-ray maps from CalcImage in ImageJ, which is very nice.
Yes, Surfer is great for making fine adjustments for high-quality presentations of x-ray map quant data, especially for publication, but it's not a cheap package, and really it's overkill for most basic operations such as exporting selected pixel areas and/or min/max element filtering, as you say.
In fact, I think we (and our students) should thank Gareth Seward again for working with us to implement the polygon extraction and pixel filtering features in CalcImage. We think it came out quite nicely, and of course it is available to all students (and all their computers!):
https://smf.probesoftware.com/index.php?topic=1151.0
john
Here's a variant on the theme: open all the Oxide.TXT files and save them as a scaled stack for further processing or interrogation in ImageJ. It can easily be changed to open all the Quant.TXT files by changing the suffix.
Ben
input = getDirectory("Input directory");
output = getDirectory("Output directory");
suffix = "Oxide.TXT"; //you only want to apply to tiff images, no need to ask
pixelsize = getNumber("pixel size in microns", 1);
processFolder(input);
function processFolder(input) {
list = getFileList(input);
for (i = 0; i < list.length; i++) {
if(File.isDirectory(input + list[i])) //if it's a directory, go to subfolder
processFolder("" + input + list[i]);
if(endsWith(list[i], suffix)) //if it's an oxide .TXT image, process it
processFile(input, output, list[i], pixelsize);
//if it's neither an oxide .TXT file nor a directory, do nothing
}
}
function processFile(input, output, file,pixelsize) {
//import each text image; the images are combined into a stack after the folder has been processed
run("Text Image... ", "open="+"'"+input+file+"'");
}
StackName = getString("Enter name of Stack, no spaces", "Oxide_Stack");
run("Images to Stack", "name="+StackName+" title=[] use");
run("Set Scale...", "distance=1 known="+pixelsize+" pixel=1 unit=um");
name = getTitle;
saveAs("Tiff", output+name);
Hi, I've been playing with stacks, and looking at using a stack of channels.
In ImageJ a regular stack has a slice for each image; if you change it to a channel for each image (using colour), each channel gets its own LUT whilst retaining the original values. (I've attached an avi showing it.)
Below is the change I've made to the code above to create a channel stack and apply auto brightness/contrast to each channel:
StackName = getString("Enter name of Stack, no spaces", "Oxide_Stack");
run("Images to Stack", "name="+StackName+" title=[] use");
run("Set Scale...", "distance=1 known="+pixelsize+" pixel=1 unit=um");
run("Stack to Hyperstack...", "order=xyczt(default) channels="+nSlices+" slices=1 frames=1 display=Color");
Stack.getDimensions(width,height,channels,slices,frames);
max2=channels+1;
for (i=1; i<max2; i++) {
Stack.setChannel(i);
run("Enhance Contrast", "saturated=0.35");
}
name = getTitle;
saveAs("Tiff", output+name);
In this case, to measure the intensity within a selected area (box, circle, polygon) for the whole stack, we use the following:
Stack.getDimensions(width,height,channels,slices,frames);
max2=channels+1;
for (i=1; i<max2; i++) {
Stack.setChannel(i);
run("Measure");
}
Similarly, macros written to plot or export the profile data for all the images in the stack can easily be modified; I've attached the modified versions.
Quote from: Ben Buse on November 04, 2020, 03:15:22 AM
In ImageJ a regular stack has a slice for each image; if you change it to a channel for each image (using colour), each channel gets its own LUT whilst retaining the original values. (I've attached an avi showing it.)
Ah, great spot. That's been bugging me for a while!
Thanks Ben!