Image Classification

Learning outcomes:

  • Carry out a supervised classification on a series of raster layers
  • Assess the user and producer accuracy of the classification

Introduction

We will carry out a supervised classification using Sentinel-2 data for the Gewata region in Ethiopia. Atmospherically corrected Level 2A data acquired on 27 December 2020 are used in this exercise. The data were downloaded from ESA’s online data hub (https://scihub.copernicus.eu/dhus), part of the Copernicus European Programme. As it is freely available, Sentinel data is commonly used alongside Landsat data for environmental monitoring.

Sentinel-2 bands in comparison to Landsat bands

Data exploration

Download the data to your computer and open your preferred R IDE in the directory of this tutorial.

After downloading the data we begin with visualization. The data consist of all the Sentinel-2 bands at a spatial resolution of 20 m. We will also make use of training polygons for the land cover classification.

We will be making use of the RasterBrick object.

Set up the environment

# Make sure the renv.lock file is in the same directory as the Rmd file, then run renv::restore() (or renv::init()) in the console to download the packages needed for this tutorial.
# When asked whether you want to proceed, type 'y' and press Enter.


## We will load the libraries in the code as we get to the sections which use them.
library(raster)
library(sf)
## Load data
data_dir<-"C:/Data/M4L/Core_IC/"
Gewata<-brick(file.path(data_dir, "S2B2A_T36NZP_20201227T075239_20m_gewata_crop.tif"))
names(Gewata)<-readLines(file.path(data_dir, "S2B2A_T36NZP_20201227T075239_20m_gewata_bands.txt"))
# The image is cloud-free, so drop the cloud mask layer
Gewata<-dropLayer(Gewata, "SCL")
## Check out the attributes
Gewata$B02
## Some basic statistics using cellStats() from the raster package
cellStats(Gewata$B02, stat=max)
cellStats(Gewata$B02, stat=mean)
# This is equivalent to:
maxValue(Gewata$B02)
## What is the maximum value of all three bands?
max(c(maxValue(Gewata$B02), maxValue(Gewata$B8A), maxValue(Gewata$B11)))
## summary() is a useful function for a quick overview
summary(Gewata$B02)

You might get different values for cellStats() and maxValue(). The reason is that cellStats() looks at all the pixel values, whereas maxValue() only takes a subset of the pixel values. As a result, maxValue() is faster than cellStats().

# histograms for all the bands in one window (automatic, if a RasterBrick is supplied)
hist(Gewata, maxpixels=1000)

Note that the values of these bands have been rescaled by a factor of 10,000. This is done for file storage considerations. For example, a value of 0.5643 stored as a float takes up more disk space than a value of 5643 stored as an integer. If you prefer reflectance values in their original scale (from 0 to 1), this can easily be done using raster algebra or calc().
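
For example, a minimal sketch of the conversion (the object name GewataRefl is just illustrative; we will perform this same conversion with calc() later in the tutorial):

## Convert the stored integers back to reflectance (0 to 1) using raster algebra
GewataRefl <- Gewata / 10000
## Equivalent, and more memory-friendly for large rasters, using calc()
GewataRefl <- calc(Gewata, fun = function(x) x / 10000)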

A scatterplot matrix can be helpful in exploring relationships between raster layers. This can be done with the pairs() function of the raster package, which (like hist()) is a wrapper for the same function found in the graphics package.

pairs(Gewata, maxpixels=1000)

Note that both hist() and pairs() compute histograms and scatterplots based on a random sample of raster pixels. The size of this sample can be changed with the argument maxpixels in either function.

Calling pairs() on a RasterBrick reveals potential correlations between the layers themselves. In the case of the bands of the Gewata subset, we can see that bands 3 and 5 (in the visible part of the EM spectrum) and bands 6 and 7 (in the near-infrared part of the EM spectrum) are highly correlated. A similar correlation also exists between bands 11 and 12, while band 8A contains significant non-redundant information.

Question 1: Given what we know about the location of these bands along the EM spectrum, how could these scatterplots be explained?

A commonly used metric for assessing vegetation dynamics, the normalized difference vegetation index (NDVI), can be calculated and used as another variable for the classification. NDVI can be calculated using direct raster algebra, calc() or overlay(). Since we will be using NDVI again later in this tutorial, let’s calculate it and store it in our workspace using overlay().

par(mfrow = c(1, 1)) # reset plotting window
ndvi <- overlay(Gewata$B8A, Gewata$B04, fun=function(x,y){(x-y)/(x+y)})
plot(ndvi)

Aside from their advantages regarding memory usage, an additional advantage of calc() and overlay() is that the result can be written to file immediately by including the filename = "..." argument, after which it can be reloaded in subsequent sessions without having to repeat the analysis.
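
For example, a hedged sketch of recomputing NDVI while writing it to disk (the output filename is only an illustration):

## Write the NDVI result to file while it is being computed
ndvi <- overlay(Gewata$B8A, Gewata$B04,
                fun = function(x, y) {(x - y) / (x + y)},
                filename = file.path(data_dir, "ndvi_gewata.tif"),
                overwrite = TRUE)
## In a later session, reload it instead of recomputing
# ndvi <- raster(file.path(data_dir, "ndvi_gewata.tif"))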

Question 2: What is the advantage of including the NDVI layer in the classification?

Classifying raster data

One of the most important tasks in the analysis of remote sensing imagery is image classification. In classifying the image, we use the information contained in the various bands (possibly including synthetic bands such as NDVI or principal components) to assign a class label to each pixel. There are two approaches to image classification: supervised and unsupervised. In this tutorial we will explore supervised classification based on the Random Forest method.

Random Forest classification

The Random Forest classification algorithm is an ensemble learning method that is used for both classification and regression. In our case, we will use the method for classification purposes. Here, the Random Forest method takes random subsets from a training dataset and constructs classification trees using each of these subsets. Trees consist of branches and leaves.

Branches represent nodes of the decision trees, which are often thresholds defined for the measured (known) variables in the dataset. Leaves are the class labels assigned at the termini of the trees. Sampling many subsets at random will result in many trees being built. Classes are then assigned based on classes assigned by all of these trees based on a majority rule, as if each class assigned by a decision tree were considered to be a vote.
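
As a toy illustration of the majority rule (the vote vector below is made up and not actual model output):

## Hypothetical class votes from five individual trees for one pixel
votes <- c("forest", "forest", "cropland", "forest", "wetland")
## The pixel is assigned the class with the most votes
names(which.max(table(votes)))
## [1] "forest"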

The figure below gives a simple demonstration of how the random forest method works in principle. For an introduction to the Random Forest algorithm, see this presentation. For more information on random forest implementation in R see this tutorial.

Schematic showing how the Random Forest method constructs classification trees from random subsets of a training dataset. Each tree determines the labels assigned based on the training dataset. Once all trees are assembled, classes are assigned to unknown pixels based on the class which receives the majority of votes based on all the decision trees constructed.

One major advantage of the Random Forest method is that an out-of-bag (OOB) cross-validation error estimate and an estimate of variable importance are computed automatically. For each classification tree assembled, a fraction of the training data is left out and used to compute the error for that tree, by predicting the class associated with those values and comparing with the already known class. This process results in a confusion matrix, which we will explore in our analysis. In addition, an importance score is computed for each variable in two forms: the mean decrease in accuracy for each variable, and the Gini impurity criterion, which will also be explored in our analysis.

To perform the classification in R, it is best to assemble all covariate layers (i.e. those layers containing predictor variable values) into one RasterBrick object. In this case, we can simply append the new layer (NDVI) to our existing RasterBrick (currently consisting of the different bands).

First, let’s convert the values back to their original reflectance scale. This step is not required for the RF classification, but it might help with the interpretation if you are used to thinking of reflectance as a value between 0 and 1. (On the other hand, for very large raster bricks it might be preferable to leave them in their integer scale, but we won’t go into more detail about that here.)

gewata <- calc(Gewata, fun=function(x) x / 10000)
## Make a new RasterStack of covariates by 'stacking' together the existing gewata brick and NDVI
covs <- stack(gewata, ndvi)
plot(covs)

You’ll notice that we didn’t give our NDVI layer a name yet. It’s good to make sure that the raster layer names make sense, so you don’t forget which band is which later on. Let’s change all the layer names (make sure you get the order right!).

names(covs) <- c(names(Gewata),"NDVI")
names(covs)
##  [1] "B02"  "B03"  "B04"  "B05"  "B06"  "B07"  "B11"  "B12"  "B8A"  "NDVI"

Training data preparation

For this exercise, we will do a very simple classification for 2020 using three classes: forest, cropland and wetland. While for other purposes it is usually better to define more classes (and possibly fuse classes later), a simple classification like this one could be useful, for example, to construct a forest mask for the year 2020.

## we load the training polygons as a csv file using st_read:
trainingPoly <- st_read("data/trainingPoly.csv")
## Reading layer `trainingPoly' from data source `C:\Data\M4L\Core_IC\AdvancedRasterAnalysis\data\trainingPoly.csv' using driver `CSV'
## Simple feature collection with 16 features and 3 fields
## Geometry type: POLYGON
## Dimension:     XY
## Bounding box:  xmin: 813300.2 ymin: 822891.5 xmax: 846719.1 ymax: 848830.4
## CRS:           NA
## Superimpose training polygons onto NDVI plot
par(mfrow = c(1, 1)) # reset plotting window
plot(ndvi)
plot(trainingPoly, add = TRUE)

The training classes are labelled with character strings. For this exercise, we will need to work with integer classes, so we first need to ‘relabel’ our training classes. There are several approaches that could be used to convert these classes to integer codes. In this case, we will convert the ‘Class’ column to a factor and use the factor levels to derive integer codes.

## Inspect the trainingPoly object
trainingPoly<-trainingPoly[,c(2:4)] #remove an unused column
trainingPoly
## Simple feature collection with 16 features and 2 fields
## Geometry type: POLYGON
## Dimension:     XY
## Bounding box:  xmin: 813300.2 ymin: 822891.5 xmax: 846719.1 ymax: 848830.4
## CRS:           NA
## First 10 features:
##    OBJECTID   Class                       geometry
## 1         1 wetland POLYGON ((821935.6 833523.3...
## 2         2 wetland POLYGON ((824714 831022.8, ...
## 3         3 wetland POLYGON ((830913.4 832637.4...
## 4         4 wetland POLYGON ((832451.1 833023.5...
## 5         5 wetland POLYGON ((834905.6 837919.2...
## 6         6  forest POLYGON ((833159.4 846635.4...
## 7         7  forest POLYGON ((831922.1 848830.4...
## 8         8  forest POLYGON ((842202 832724.6, ...
## 9         9  forest POLYGON ((840860.9 829199.5...
## 10       10  forest POLYGON ((839926.8 824919.4...
# The 'Class' column is a character but should be converted to factor 
summary(trainingPoly$Class)
##    Length     Class      Mode 
##        16 character character
trainingPoly$Class <- as.factor(trainingPoly$Class)
summary(trainingPoly$Class)
## cropland   forest  wetland 
##        6        5        5
# We can make a new 'Code' column by converting the factor levels to integer by using the as.numeric() function,
trainingPoly$Code <- as.numeric(trainingPoly$Class)
# Inspect the new 'Code' column
summary(trainingPoly$Code)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   1.000   2.000   1.938   3.000   3.000
## Plotting
# Define a colour scale for the classes (as above)
# corresponding to: cropland, forest, wetland
cols <- c("orange", "dark green", "light blue")
### Superimpose training polygons (colour by class) onto NDVI plot
plot(ndvi)
plot(trainingPoly["Class"], add = TRUE, pal=cols)
## Add a customised legend
legend("topright", legend=c("cropland", "forest", "wetland"), fill=cols, bg="white")

Our goal in preprocessing these data is to obtain a table of values representing all layers (covariates) with known values/classes. To do this, we first need to know the values of the covariates at our training polygon locations. We can use the extract() function of the raster package for this. Next we convert these data to a data.frame representing all training data.

## Extract pixel values below the polygons
trainingData<-extract(covs, trainingPoly)
# Check structure
str(trainingData)
# It is a matrix (rows are pixels, columns are bands) per polygon
# Convert each matrix to a data.frame and add the class column from the polygons
valuetable <- lapply(1:length(trainingData), function(polygonID) cbind.data.frame(trainingData[[polygonID]], Class=trainingPoly$Class[polygonID]))
# Unlist into one long data.frame
valuetable <- do.call(rbind, valuetable)

This data.frame will be used as an input into the RandomForest classification function. We then inspect the first and last 10 rows.

head(valuetable, n = 10)
##       B02    B03    B04    B05    B06    B07    B11    B12    B8A      NDVI
## 1  0.0492 0.0746 0.0680 0.1105 0.1999 0.2280 0.2425 0.1496 0.2608 0.5863747
## 2  0.0518 0.0759 0.0705 0.1168 0.2093 0.2397 0.2505 0.1576 0.2706 0.5866315
## 3  0.0523 0.0772 0.0710 0.1189 0.2073 0.2418 0.2501 0.1567 0.2717 0.5856434
## 4  0.0482 0.0714 0.0674 0.1120 0.1971 0.2265 0.2331 0.1394 0.2617 0.5903981
## 5  0.0476 0.0732 0.0675 0.1126 0.1959 0.2257 0.2364 0.1421 0.2586 0.5860166
## 6  0.0481 0.0716 0.0671 0.1117 0.1932 0.2231 0.2360 0.1433 0.2550 0.5833592
## 7  0.0483 0.0726 0.0678 0.1128 0.1997 0.2287 0.2398 0.1483 0.2559 0.5810936
## 8  0.0517 0.0760 0.0700 0.1161 0.2057 0.2356 0.2484 0.1558 0.2678 0.5855536
## 9  0.0528 0.0757 0.0717 0.1176 0.2049 0.2360 0.2501 0.1570 0.2699 0.5802108
## 10 0.0502 0.0745 0.0679 0.1145 0.2065 0.2351 0.2480 0.1502 0.2676 0.5952310
##      Class
## 1  wetland
## 2  wetland
## 3  wetland
## 4  wetland
## 5  wetland
## 6  wetland
## 7  wetland
## 8  wetland
## 9  wetland
## 10 wetland
tail(valuetable, n = 10)
##          B02    B03    B04    B05    B06    B07    B11    B12    B8A      NDVI
## 87195 0.0352 0.0544 0.0563 0.0986 0.1997 0.2351 0.2193 0.1324 0.2652 0.6497667
## 87196 0.0377 0.0586 0.0651 0.1053 0.1789 0.2219 0.2357 0.1534 0.2433 0.5778210
## 87197 0.0400 0.0594 0.0736 0.1108 0.1637 0.1842 0.2491 0.1729 0.2144 0.4888889
## 87198 0.0445 0.0750 0.0807 0.1316 0.2074 0.2370 0.2476 0.1602 0.2707 0.5406944
## 87199 0.0334 0.0537 0.0528 0.0960 0.2105 0.2668 0.2187 0.1242 0.2930 0.6946212
## 87200 0.0385 0.0653 0.0605 0.1127 0.2108 0.2402 0.2329 0.1482 0.2709 0.6348823
## 87201 0.0475 0.0758 0.0835 0.1279 0.2009 0.2237 0.2624 0.1716 0.2552 0.5069383
## 87202 0.0578 0.0941 0.1053 0.1617 0.2302 0.2592 0.2904 0.1873 0.2959 0.4750748
## 87203 0.0342 0.0559 0.0585 0.0957 0.1683 0.1825 0.2304 0.1466 0.2152 0.5725247
## 87204 0.0364 0.0588 0.0689 0.1169 0.1619 0.1887 0.2472 0.1672 0.2074 0.5012667
##          Class
## 87195 cropland
## 87196 cropland
## 87197 cropland
## 87198 cropland
## 87199 cropland
## 87200 cropland
## 87201 cropland
## 87202 cropland
## 87203 cropland
## 87204 cropland

We now have our training dataset as a data.frame with the Class column stored as a factor. If this column were an integer, a random forest regression would be run instead of a classification, so it is good to check this.
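
A quick check on the valuetable we just built:

## The response should be a factor, otherwise ranger will fit a regression
is.factor(valuetable$Class)
## [1] TRUE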

Now we have a convenient training data table which contains, for each of the three defined classes, values for all covariates. Let’s visualize the distribution of some of these covariates for each class. To make this easier, we will create 3 different data.frames for each of the classes. This is just for plotting purposes, and we will not use these in the actual classification.

val_crop <- subset(valuetable, Class == "cropland")
val_forest <- subset(valuetable, Class == "forest")
val_wetland <- subset(valuetable, Class == "wetland")

## NDVI
par(mfrow = c(3, 1))
hist(val_crop$NDVI, main = "cropland", xlab = "NDVI", xlim = c(0, 1), col = "orange")
hist(val_forest$NDVI, main = "forest", xlab = "NDVI", xlim = c(0, 1),  col = "dark green")
hist(val_wetland$NDVI, main = "wetland", xlab = "NDVI", xlim = c(0, 1), col = "light blue")
par(mfrow = c(1, 1))

Other covariates such as the bands can also be plotted like above.

## 3. Bands 8a and 11 (scatterplots)
plot(B8A ~ B11, data = val_crop, pch = ".", col = "orange", xlim = c(0, 0.4), ylim = c(0, 0.5))
points(B8A ~ B11, data = val_forest, pch = ".", col = "dark green")
points(B8A ~ B11, data = val_wetland, pch = ".", col = "light blue")
legend("topright", legend=c("cropland", "forest", "wetland"), fill=c("orange", "dark green", "light blue"), bg="white")

Question 3: Try to produce the same scatterplot as above for other pairs of bands. Try B02 & B05, B07 & NDVI (you might have to adjust the xlim to incorporate the NDVI values) and another pair of your choice. What can you say about the relationships between these bands? Which ones give a clear distinction between classes, and where is this less clear?

We can see from these distributions that these covariates may do well in classifying forest pixels, but we may expect some confusion between cropland and wetland (although the individual bands may help to separate these classes). You can save the training data using the write.csv() command, in case something goes wrong after this point and you need to start over again.
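
For example (the filename is just a suggestion):

## Save the training table so the extraction step does not need to be repeated
write.csv(valuetable, file.path(data_dir, "valuetable.csv"), row.names = FALSE)
## Reload in a later session; remember to convert Class back to a factor
# valuetable <- read.csv(file.path(data_dir, "valuetable.csv"))
# valuetable$Class <- as.factor(valuetable$Class)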

Run Random Forest classification

We build the Random Forest model using the training data. For this, we will use the ranger package in R. There is also the randomForest package available in R; however, ranger is implemented in C++ with multithreading and is therefore much faster. Using the ranger() function, we will build a model based on a matrix of predictors or covariates (i.e. the first 10 columns of valuetable) related to the response (the Class column of valuetable).

Construct a random forest model.

Covariates (x) are found in columns 1 to 10 of valuetable. Training classes (y) are found in the ‘Class’ column of valuetable. Caution: this step takes fairly long, but it can be shortened by setting importance=FALSE.

library(ranger)
modelRF <- ranger(x=valuetable[, 1:(ncol(valuetable)-1)], y=valuetable$Class,
                  importance = "permutation", seed=0xfedbeef)

Since the random forest method involves the building and testing of many classification trees (the ‘forest’), it is a computationally expensive step (and could take a lot of memory for especially large training datasets). When this step is finished, it would be a good idea to save the resulting object with the saveRDS() command. Any R object can be saved as an .rds file and reloaded into future sessions using readRDS().
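
For example (the filename is illustrative):

## Save the fitted model to disk
saveRDS(modelRF, file.path(data_dir, "modelRF.rds"))
## Reload it in a later session without refitting
# modelRF <- readRDS(file.path(data_dir, "modelRF.rds"))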

Note: there is similar functionality in the save() and load() commands, but those can store more than one object and restore them under their original names without telling you what those names are, so you need to know them in advance. That is why saveRDS()/readRDS() is preferred, although in this tutorial load() is still used in several places.

The resulting object from the ranger() function is a specialized object of class ranger, which is a large list-type object packed full of information about the model output. Elements of this object can be called and inspected like any list object.

## Inspect the structure and element names of the resulting model
modelRF
class(modelRF)
#str(modelRF)
names(modelRF)
## Inspect the confusion matrix of the OOB error assessment
modelRF$confusion.matrix

Earlier we provided a brief explanation of the OOB error. Though it can be a valuable metric for evaluating your model, it can overestimate the true prediction error depending on the parameters present in the model.
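
For reference, the overall OOB prediction error is also stored directly in the model object:

## Overall fraction of OOB samples that were misclassified
modelRF$prediction.error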

Since we set importance="permutation", we now also have information on the statistical importance of each of our covariates which we can retrieve using the importance() command.

importance(modelRF)
##        B02        B03        B04        B05        B06        B07        B11 
## 0.34201378 0.17655381 0.21133002 0.09419055 0.07904610 0.08446786 0.11889551 
##        B12        B8A       NDVI 
## 0.21624068 0.08891411 0.12864989

The output above shows the variable importance for the Random Forest model, i.e. the mean decrease in accuracy for each variable.

The mean decrease in accuracy indicates the amount by which the classification accuracy decreases, based on the OOB assessment, when the values of that variable are permuted. In this case, it seems that bands 2 and 12 have the highest impact on accuracy. For large datasets, it may be helpful to know this and to leave out less important variables for subsequent runs of the ranger() function.

Since the NDVI layer scores relatively low according to the mean decrease in accuracy criterion, try to construct an alternative Random Forest model as above, but excluding this layer. You could name it something like ‘modelRF2’.
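
A minimal sketch of how such a model could be set up (the object names are just suggestions):

## Keep all covariate columns except NDVI (and the Class response)
covCols <- setdiff(names(valuetable), c("NDVI", "Class"))
modelRF2 <- ranger(x = valuetable[, covCols], y = valuetable$Class,
                   importance = "permutation", seed = 0xfedbeef)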

Question 4: What effect does this have on the overall accuracy of the results (hint: compare the confusion matrices of the original and new outputs)? What effect does leaving this variable out have on the processing time (hint: use system.time())?

Now we apply this model to the rest of the image and assign classes to all pixels. Note that for this step, the names of the raster layers in the input brick (here covs) must correspond exactly to the column names of the training table. We will use the predict() function from the raster package. This function uses a pre-defined model to predict values of raster cells based on other raster layers. This model can be derived by a linear regression, for example. In our case, we will use the model provided by the ranger() function.

## Double-check layer and column names to make sure they match
names(covs)
##  [1] "B02"  "B03"  "B04"  "B05"  "B06"  "B07"  "B11"  "B12"  "B8A"  "NDVI"
names(valuetable)
##  [1] "B02"   "B03"   "B04"   "B05"   "B06"   "B07"   "B11"   "B12"   "B8A"  
## [10] "NDVI"  "Class"
## Predict land cover using the RF model
predLC <- raster::predict(covs, modelRF, fun=function(...) predict(...)$predictions)
## Plot the results
# recall: 1 = cropland, 2 = forest, 3 = wetland
cols <- c("orange", "dark green", "light blue")
plot(predLC, col=cols, legend=FALSE)
legend("bottomright",
       legend=c("cropland", "forest", "wetland"),
       fill=cols, bg="white")

Note that the predict() function also takes arguments that are passed on to writeRaster() (e.g. filename = "..."), so it is a good idea to write the output to file as you perform this step (rather than keeping all of the output in memory).
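
For example, a sketch of the same prediction call writing the result directly to file (the filename is only an illustration):

## Write the classified map to disk while predicting
predLC <- raster::predict(covs, modelRF,
                          fun = function(...) predict(...)$predictions,
                          filename = file.path(data_dir, "predLC.tif"),
                          overwrite = TRUE)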

Assessing the accuracy of the classification

Previously, we saw the classification accuracy based on the OOB error within the modelRF object. As those accuracy metrics are based on the training data itself, it is important to assess the classification using a separate validation dataset. Here we use such a validation dataset to assess the classification quality. Generally, you can select random sample locations based on the classified map and collect validation data at those locations.

## Load the validation data
validationPoints<-readRDS("data/validation")
names(validationPoints)<-c("x","y","refLC")
#extract the value of land cover map on the validation locations
validationPoints$mapLC<-extract(predLC, validationPoints)
head(validationPoints)
#remove NA values
validationPoints<-validationPoints[!is.na(validationPoints$mapLC),]
dim(validationPoints)
##        x      y refLC mapLC
## 1 826560 846870     1     1
## 2 826470 847080     1     1
## 3 823380 837690     1     1
## 4 814650 838290     1     1
## 5 822180 846960     1     2
## 6 825030 851760     1    NA
## [1] 1574    4

Once we have the mapped and reference land cover types in the same data.frame, we can create the confusion matrix (also referred to as an error matrix) to calculate the map accuracy.

#confusion matrix
cm<-table(validationPoints$mapLC, validationPoints$refLC)
row.names(cm)<-c("cropland", "forest", "wetland")
colnames(cm)<-c("cropland", "forest", "wetland")
print(cm)
#calculate overall accuracy
o.a<-sum(diag(cm))/sum(cm)*100
print(paste0("Overall accuracy: ", round(o.a,1)))
#user's accuracy
user.ac<-diag(cm)/rowSums(cm)*100
print("User's accuracy")
print(round(user.ac,1))
#producer's acc
producer.ac<-diag(cm)/colSums(cm)*100
print("Producer's accuracy")
print(round(producer.ac,1))
##           
##            cropland forest wetland
##   cropland      485    176     211
##   forest         31    311      12
##   wetland        10     38     300
## [1] "Overall accuracy: 69.6"
## [1] "User's accuracy"
## cropland   forest  wetland 
##     55.6     87.9     86.2 
## [1] "Producer's accuracy"
## cropland   forest  wetland 
##     92.2     59.2     57.4

Question 5: Where is most of the error coming from in the classification? (Hint: it may also be helpful to look back at the plots in the first part of the tutorial.) How can our classification be further improved?

The overall accuracy based on the validation data is about 70%. As expected, it is lower than the OOB estimate, because a separate validation dataset is used.

Today’s summary

We learned about:

  • Dealing with Sentinel data
  • How to:
    • Perform a supervised classification using R scripts
    • Assess the accuracy of a classification

For more information