azvoleff / glcm
Calculate textures from grey-level co-occurrence matrices (GLCMs) in R
An error occurs if a window size other than the default c(3,3) is chosen.
> require(raster)
> glcm(raster(L5TSR_1986, layer=1),window=c(5,5))
error: Mat::submat(): indices out of bounds or incorrectly used
Error: Mat::submat(): indices out of bounds or incorrectly used
This is perhaps more of a clarification issue. In areas of an image where the window and the shifted window all contain the same value (i.e. everything falls in the same bin), the variance is 0, and thus correlation cannot be calculated (division by zero). I ran glcm on WorldView satellite data in both R and ENVI, and these areas (often water) returned NA in R but 1.0 in ENVI. So it appears the R package is coded to return no data, while ENVI assigns a value of exactly 1.0. This may be worth clarifying in the documentation. Thanks.
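For what it's worth, the zero-variance case is easy to reproduce outside of glcm. The sketch below (Python/NumPy, purely to illustrate the math, not glcm's internals; the function name and shift convention are my own) builds a single-window GLCM and shows the correlation denominator vanishing for a constant window:

```python
import numpy as np

def glcm_correlation(patch, levels, shift=(1, 1)):
    """GLCM correlation for one window; NaN when the variance is zero."""
    dr, dc = shift
    rows, cols = patch.shape
    P = np.zeros((levels, levels), dtype=float)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[patch[r, c], patch[r2, c2]] += 1
    P /= P.sum()  # normalize counts to joint probabilities
    i = np.arange(levels)
    mu_i = (P.sum(axis=1) * i).sum()   # marginal mean over rows
    mu_j = (P.sum(axis=0) * i).sum()   # marginal mean over columns
    var_i = (P.sum(axis=1) * (i - mu_i) ** 2).sum()
    var_j = (P.sum(axis=0) * (i - mu_j) ** 2).sum()
    denom = np.sqrt(var_i * var_j)
    if denom == 0:            # constant window: correlation is undefined
        return float('nan')   # glcm returns NA here; ENVI reports 1.0
    num = ((i[:, None] - mu_i) * (i[None, :] - mu_j) * P).sum()
    return num / denom

flat = np.zeros((3, 3), dtype=int)       # e.g. a water-pixel neighbourhood
print(glcm_correlation(flat, levels=4))  # nan
```

A non-constant window gives a well-defined value in [-1, 1]; only the all-one-bin case hits the 0/0.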
I'm getting values outside the bounds that should be possible for correlation. Correlation should be constrained to -1 to +1 (when scale_factor=1), but about 1% of the resulting non-NA values fall outside this range when using the glcm function on my raster data. The resulting correlation raster has min and max values of -Inf and +Inf, and also contains finite values outside the -1 to +1 bounds, ranging from about 2 to 47.
(Running R version 3.4.1 on Windows 10)
I was unable to install the glcm package from CRAN:
install.packages("glcm")
Installing package into ‘/home/paul/R/x86_64-pc-linux-gnu-library/3.5’
(as ‘lib’ is unspecified)
Warning in install.packages :
  package ‘glcm’ is not available (for R version 3.5.3)
The CRAN website for the glcm package ( https://cran.r-project.org/web/packages/glcm/index.html ) reports:
Package ‘glcm’ was removed from the CRAN repository.
Formerly available versions can be obtained from the archive.
Archived on 2019-03-22 as check problems were not corrected in time.
Installing the glcm development version from GitHub (e.g. with remotes::install_github("azvoleff/glcm")) worked for me.
The help page ?glcm initially states:
"The default textures are calculated using a 45 degree shift. "
But in the example, it says (twice):
"Calculate using default 90 degree shift"
So which shift is actually the default?
Also, I'm confused about the actual nature of the "shift". Initially I understood it as setting the step of the sliding window, but after reading the help page I tend to think it refers to how adjacency is defined within the window, i.e. for shift = c(1,1) the co-occurrence is checked between pixel (i, j) and pixel (i+1, j+1) within the window. If that is the case (please confirm), I understand that the sliding window always moves one column, and then one row, at a time (please confirm as well). For large images and large windows, moving the window by more than 1 pixel could be an acceptable simplification with a considerable gain in speed.
The documentation states that to calculate GLCM textures over 'all directions' (in the terminology of commonly used remote sensing software), one should use shift=list(c(0,1), c(1,1), c(1,0), c(1,-1)), which averages the GLCM textures over shifts of 0, 45, 90, and 135 degrees. However, I believe that is only true if the GLCM is constructed symmetrically (which is why ENVI averages over 8 directions). For example, with a symmetric GLCM I'd expect a 45 degree shift and a 225 degree shift to produce equivalent results, but I get different results. Additionally, this introduces ambiguity into the mean and variance (and into correlation, which is calculated from both), since there are separate µi and µj for the mean and σi and σj for the variance; these are only equivalent for a symmetric GLCM (see the formulas in the Hall-Beyer texture tutorial).
library(raster)
library(terra)
library(glcm)

r <- raster(rast(volcano,
                 extent = ext(2667400, 2667400 + ncol(volcano) * 10,
                              6478700, 6478700 + nrow(volcano) * 10),
                 crs = "EPSG:27200"))

stats <- c("mean", "variance", "homogeneity", "contrast", "dissimilarity",
           "entropy", "second_moment", "correlation")
t1a <- glcm(r, statistics = stats, n_grey = 32, window = c(3, 3),
            shift = c(1, 1), na_opt = "any")   # 45 degree shift
t1b <- glcm(r, statistics = stats, n_grey = 32, window = c(3, 3),
            shift = c(-1, -1), na_opt = "any") # 225 degree shift

plot(t1a - t1b) # difference between the two shifts
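To illustrate the point, here is a NumPy sketch of the underlying co-occurrence counting (not glcm's actual code; glcm_counts is a made-up helper). Reversing the shift transposes the raw GLCM, so an asymmetric matrix gives different textures for 45 and 225 degrees, while adding the transpose (the symmetric construction) makes the two directions agree:

```python
import numpy as np

def glcm_counts(img, shift, levels):
    """Raw (asymmetric) co-occurrence counts for a (row, col) shift."""
    dr, dc = shift
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P

rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(8, 8))

P45  = glcm_counts(img, (1, 1), 4)    # 45 degree shift
P225 = glcm_counts(img, (-1, -1), 4)  # 225 degree shift

print(np.array_equal(P225, P45.T))    # True: opposite shift = transpose
sym45  = P45 + P45.T                  # symmetric construction
sym225 = P225 + P225.T
print(np.array_equal(sym45, sym225))  # True: symmetric GLCMs agree
```

Any texture computed from the asymmetric matrix that treats rows and columns differently (µi vs µj, σi vs σj) will therefore differ between opposite shifts.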
Hi,
I'm using the glcm function in RStudio, but the result differs from what I obtain with the GLCM tool in other software (SNAP from ESA). I noticed that the GLCM tool in SNAP has a drop-down menu with two options, 'equal distance quantizer' and 'probabilistic quantizer', and the probabilistic quantizer works best for me. However, when I use glcm in R, the result resembles SNAP's equal-distance output. Is there a way to use a probabilistic quantizer in R?
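As far as I can tell, glcm only offers an equal-interval quantizer (it cuts the values into n_grey equal-width bins, optionally between min_x and max_x), which would explain why your results match SNAP's 'equal distance' option. You can approximate a probabilistic (equal-probability) quantizer by rebinning the raster values yourself before calling glcm. The difference between the two binning rules, sketched in Python/NumPy purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=10, size=1000)  # skewed data, like many radiance bands

n_grey = 8

# Equal-distance quantizer: bins of equal width over the data range
edges_eq = np.linspace(x.min(), x.max(), n_grey + 1)
q_eq = np.digitize(x, edges_eq[1:-1])     # grey levels 0..n_grey-1

# Probabilistic (equal-probability) quantizer: bins hold ~equal pixel counts
edges_prob = np.quantile(x, np.linspace(0, 1, n_grey + 1))
q_prob = np.digitize(x, edges_prob[1:-1])

# Skewed data piles into the low equal-width bins...
print(np.bincount(q_eq, minlength=n_grey))
# ...while quantile bins are nearly uniform
print(np.bincount(q_prob, minlength=n_grey))
```

In R, the same rebinning can be done on the raster values before calling glcm, e.g. with quantile() and findInterval().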