Machine learning benchmarking for land cover map production

Land cover map validation is a complex task. If you read French, you can check this post by Vincent Thierion, which shows how the 2016 land cover map of France produced by CESBIO stands with respect to data sources independent from those used for its production. But this is only one aspect of the validation. A land cover map is a map, and therefore there are other issues than checking whether individual points belong to the correct class. Incidentally, being sure that the correct class is known is not so easy either.

 

In this epoch of machine learning hype 1, it is easy to fall into the trap of thinking that optimising a single metric accounts for all the issues in map validation. Typical approaches used in machine learning contests are far from sufficient for this complex task. Let's have a look at how we proceed at CESBIO when we assess the quality of a land cover map produced by classification.

 

Supervised classification, training, testing, etc.

The iota2 processing chain is highly configurable in terms of how the images are pre-processed, how the reference data is prepared, how the classifiers are parameterised, etc. We also continuously add new approaches. During this development work, we need to assess whether a change to the workflow performs better than the previous approaches. In order to do this, we use standard classification metrics derived from the confusion matrix (Overall Accuracy, κ coefficient, F-Score). The confusion matrix is of course computed using samples which are not used for training, but we go a little further than that by splitting the train and test sets at the polygon level. Indeed, our reference data is made of polygons which correspond to agricultural plots, forests, urban settlements, etc. Since images have strong local correlation, pixels belonging to the same polygon have a high likelihood of being very similar. Therefore, allowing a polygon to provide pixels for both the training and the test sets yields optimistic performance estimates.
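In practice, a polygon-level split is easy to implement with a group-aware splitter. The sketch below is not the actual iota2 code and the data layout is hypothetical; it uses scikit-learn's GroupShuffleSplit with the polygon id as the group key, so that no polygon contributes pixels to both sets:

```python
# Polygon-level train/test split: pixels are grouped by the polygon
# they were sampled from, and GroupShuffleSplit keeps every polygon
# entirely on one side of the split.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_pixels = 1000
X = rng.normal(size=(n_pixels, 8))                 # per-pixel features
y = rng.integers(0, 5, size=n_pixels)              # class labels
polygon_id = rng.integers(0, 100, size=n_pixels)   # source polygon of each pixel

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=polygon_id))

# No polygon appears on both sides of the split
assert set(polygon_id[train_idx]).isdisjoint(polygon_id[test_idx])
```

A plain pixel-level shuffle would instead scatter near-identical pixels of the same plot across both sets, which is exactly the optimistic bias described above.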

 

Most of our tests are performed over very large areas (at least 25% of metropolitan France, often more than that), which means that, using reference data from Corine Land Cover, we have many more samples than we can deal with. Even in this situation, we perform several runs of training and testing by drawing different polygons for each run, which allows us to estimate confidence intervals for all our metrics and therefore assess the significance of the differences in performance between different parameter settings.
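As a toy illustration of the multi-run procedure (the accuracy values below are made up, and the interval assumes approximate normality of the run means rather than reproducing the exact iota2 computation):

```python
# Confidence interval for Overall Accuracy from several independent
# train/test runs with different polygon draws.
import math

run_oa = [0.853, 0.861, 0.849, 0.858, 0.856]   # OA from 5 hypothetical runs
n = len(run_oa)
mean = sum(run_oa) / n
std = math.sqrt(sum((x - mean) ** 2 for x in run_oa) / (n - 1))
# Rough 95% interval on the mean OA
half_width = 1.96 * std / math.sqrt(n)
print(f"OA = {mean:.3f} +/- {half_width:.3f}")
```

If the intervals of two parameter settings overlap, the observed difference in OA cannot be called significant.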

 

All this is well and good, but it is not enough for assessing the quality of the results of a particular algorithm.

 

Beyond point-wise validation

The data we feed to the classifier are images, and they are pre-processed so that application-agnostic machine learning approaches can deal with them. In iota2, we perform an eco-climatic stratification, which can introduce artifacts around strata boundaries. We also perform temporal gapfilling followed by a temporal resampling of all the data, so that all pixels have the same number of features regardless of the number of available clear acquisitions. After that, we sometimes compute contextual features which take into account the neighbourhood of each pixel; in Convolutional Neural Networks, a patch size plays a similar role, etc.
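A minimal sketch of what gapfilling plus temporal resampling does for a single pixel (linear interpolation here; the actual iota2 implementation details may differ):

```python
# Toy temporal gapfilling + resampling: cloudy (NaN) acquisitions are
# linearly interpolated, then the series is resampled onto a common date
# grid so that every pixel ends up with the same number of features.
import numpy as np

dates = np.array([10, 30, 55, 80, 120])            # acquisition days of year
ndvi = np.array([0.2, np.nan, 0.6, np.nan, 0.3])   # NaN = cloudy acquisition

valid = ~np.isnan(ndvi)
filled = np.interp(dates, dates[valid], ndvi[valid])   # gapfill cloudy dates
target = np.arange(10, 121, 10)                        # common 10-day grid
resampled = np.interp(target, dates, filled)           # temporal resampling
print(resampled.shape)
```

Note that the interpolated values are synthetic: a classifier can end up learning such gapfilling artifacts, as discussed later in this post.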

 

All these pre-processing steps have an influence on the final result, but most of the time their effect can't be observed in the global statistics computed from the confusion matrix. For instance, contextual features may produce a smeared-out image, but since most of the validation pixels are inside polygons and not on their edges, the affected pixels will not be used for the validation. In our case, the reference data polygons are eroded in order to compensate for possible misregistrations between the reference data and the images. Therefore, we have no pixels on the boundaries of the objects.
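The erosion itself amounts to a negative buffer on the polygon geometry. A minimal sketch with shapely (the 20 m plot and the 5 m erosion distance are illustrative values, not those used in iota2):

```python
# Eroding a reference polygon so that no training/validation pixel lies on
# an object boundary: a negative buffer shrinks the polygon inward.
from shapely.geometry import Polygon

plot = Polygon([(0, 0), (20, 0), (20, 20), (0, 20)])  # a 20 m x 20 m plot
eroded = plot.buffer(-5)  # shrink by 5 m on every side
print(plot.area, eroded.area)  # the eroded plot covers roughly 10 m x 10 m
```

Pixels sampled from `eroded` are guaranteed to sit away from the plot boundary, which is why boundary artifacts escape point-wise validation.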

 

In our paper describing the iota2 methodology, we presented some analysis of the spatial artifacts caused by image tiling and stratification, but we lack a metric for them. The same happens when using contextual features or CNNs. The global point-wise metrics increase when the size of the neighbourhoods increases, but the maps produced are not acceptable from the user's point of view. The two images below (produced by D. Derksen, a CESBIO PhD candidate) illustrate this kind of issue. The image on the right has higher values for the classical point-wise metrics (OA, κ, etc.), but its lack of spatial accuracy is unacceptable for most users.

 

Even if we had an exhaustive reference data set (labels for all the pixels), the pixels affected by the over-smoothing are a small percentage of the whole image, and they would carry little weight in the global metrics. We are working on the development of quantitative tools to measure these effects, but we don't have a satisfactory solution yet.

 

How good is your reference data?

All of the above does not consider the quality of the reference data itself. At CESBIO, we have learned many things over the years about the different ways in which the quality of reference data matters, both in classifier training and in the map validation step. We have people here who collect data in the field every year on hundreds of agricultural plots. We also have a bit of experience using off-the-shelf reference data. The quality of the results is much better when we use the data collected by our colleagues, and we have a rather good understanding of what happens during training and validation. Ch. Pelletier recently defended her PhD, and most of her work dealt with this issue. For instance, she analysed the impact of mislabelled reference data on classifier training and showed that Random Forests are much more robust than SVMs. She also developed techniques for detecting errors in the reference.
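As a rough illustration of that robustness question (a synthetic toy, not Ch. Pelletier's protocol or data), one can inject random label noise into a training set and compare the two classifiers on a clean test set:

```python
# Randomly relabel ~20% of the training samples, then compare a Random
# Forest and an SVM trained on the noisy labels, evaluated on clean labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
noisy = ytr.copy()
flip = rng.random(len(noisy)) < 0.2               # pick ~20% of samples
noisy[flip] = rng.integers(0, 4, size=flip.sum())  # give them random labels

accs = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    accs[name] = clf.fit(Xtr, noisy).score(Xte, yte)
print(accs)
```

On real satellite features the gap between the two classifiers can be much larger than on this synthetic toy; the point is only to show the experimental setup.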

We also use simple ways to clean the reference data. For instance, when using Corine Land Cover polygons, which have a minimum mapping unit (MMU) of 25 hectares, we use information coming from other databases, as described from slide 34 of this presentation. An illustration of the results is shown below.

The reasons for having label noise in the reference data can be many, but the two main ones we face are the MMU and the changes that have occurred since the reference data was collected.

 

For our 2016 map, we used Corine Land Cover 2012; we may therefore assume that more than 5% of the samples are wrong because of the changes since 2012. Hence, when validating with these data, if for some classes we obtain accuracies higher than 95%, we must be doing something wrong. If we add the MMU issue to that, for the classes for which we don't perform the cleansing procedure illustrated above, accuracies higher than 90% should trigger an alarm.
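The arithmetic behind these alarm thresholds is simple: if a fraction e of the validation labels is wrong, even a perfect map scores at most 1 − e, since its "errors" are exactly the mislabelled samples. A one-liner makes the bound explicit:

```python
# Upper bound on the accuracy a perfect map can reach when measured
# against a validation set with a given label-noise rate.
def max_measurable_accuracy(label_noise_rate):
    return 1.0 - label_noise_rate

print(max_measurable_accuracy(0.05))  # changes since CLC 2012: cap near 0.95
print(max_measurable_accuracy(0.10))  # adding MMU noise: cap near 0.90
```

Any reported accuracy above these caps means the map reproduces the errors of the validation data rather than the ground truth.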

 

Our ML friends like to play with data sets to improve their algorithms. Making domain-specific data available is a very good idea, since ML folks get something to compete on (this is why they work for free for Kaggle!) and they provide us with state-of-the-art approaches to choose from. This is the idea of D. Ienco and R. Gaetano with the TiSeLaC contest: they used iota2 to produce gapfilled Landsat image time series and reference data such as the ones we use at CESBIO to produce our maps (a mix of Corine Land Cover and the French Land Parcel Information System, RPG), and provided something the ML community can easily use: CSV files with labelled pixels for training and validation.

 

The test site is Reunion Island, which is more difficult to deal with than metropolitan France, mainly due to the cloud cover. Even with the impressive (ahem…) temporal gapfilling from CESBIO that they used, the task is difficult. Add to that the quality of the reference data set, which is based on CLC 2012 for a 2014 image time series, and the result is a daunting task.

 

Even with all these difficulties, several teams achieved F-Scores higher than 94%, and two of them were above 99%. It seems that Deep Learning can generalise better than other approaches, and I guess that the winners used this kind of technique, so I will assume that these algorithms achieve perfect learning and generalisation. In this case, the map they produce is perfect. The issue is that the data used for validation is not perfect, which means that an algorithm which achieves nearly 100% accuracy not only has the same amount of error as the validation data, but also that its errors fall on exactly the same samples!

 

I don't have the full details on how the data was generated and, from the contest web site, I can't know how the different algorithms work 2, but I can speculate on how an algorithm can achieve 99% accuracy in this case. One reason is over-fitting 3, of course. If the validation and training sets are too similar, the validation does not measure generalisation capabilities; it rather gives the same results as on the training set. Several years ago, when we were still working on small areas, we had this kind of behaviour due to a correlation between the spatial distribution of the samples and local cloud patterns: although training and test pixels came from different polygons, for some classes they were close to each other and cloudy on the same dates, and the classifier was learning the gapfilling artifacts rather than the class behaviour. We made this mistake because we were not looking at the maps, but only optimising the accuracy metrics. Once we looked at the classified images, we understood the issue.

 

Conclusion

In this era of kaggleification of data analysis, we must be careful and make sure that the metrics we optimise are not too simplistic. It is not an easy task, and for some of the problems we address, we don't have the perfect reference data. In other situations, we don't even have the metrics to measure the quality.

The solutions we use to solve mapping problems need an additional validation beyond the standard machine learning metrics.

Footnotes:

2: Please correct me if my assumptions are wrong!

3: Deep neural networks are able to fit random labels by memorising the complete data set.

The land cover classification for France in 2016 is available


Poster_OSO_V4.png

Just this once, we are ahead of time. Well, nearly. We had promised the 2016 land cover map of France before the end of the first quarter of 2017. It exists and is available here. Its resolution is 10 m, with the same 17-class nomenclature that we used for the 2014 Landsat land cover map.

The map is mainly based on the Sentinel-2 data acquired from the end of 2015 to the end of 2016, but we have also processed LANDSAT 8 data. We provide some details below.


The first 2016 land cover map of France with Sentinel-2


(article copied from the OSO blog)

Poster_OSO_V4.png

Just this once, we are ahead of schedule. Well, nearly. We had promised a 2016 land cover map of metropolitan France before the end of the first quarter of 2017. It exists and is available here. It is a map at 10 m resolution, with the same nomenclature as the one used for the latest 17-class Landsat prototype products.

The map is mainly based on Sentinel-2 data ranging from the end of 2015 to the end of 2016, but we also used Landsat-8 data. We give the details of the production procedure below.


The iota2 Land cover processor has processed some Sentinel-2 data


You have already heard about the iota2 processor, and you may know that it can process LANDSAT 8 time series and deliver land cover maps for whole countries. In the last few days, Arthur Vincent completed the code that allows processing Sentinel-2 time series. Even if atmospherically corrected Sentinel-2 data are not yet available over the whole of France, we used the demonstration products delivered by Theia to test our processor.

 

Everything seems to work fine, and the 10 m resolution of Sentinel-2 seems to allow seeing much more detail. The images below show two extracts near Avignon, in Provence, which illustrate the differences between Landsat 8 and Sentinel-2. Please look only at the level of detail, and not at the differences in terms of classes: the maps were produced using different time periods (limited to winter and the beginning of spring for Sentinel-2), and the learning databases are also different. Please don't draw conclusions too fast about the thematic quality of the maps.

 

The first extract shows a natural vegetation zone, with some farmland (top: LANDSAT 8, bottom: Sentinel-2).

coudoux.png


The iota2 land cover processing chain can now handle Sentinel-2


You already know the iota2 chain, and you know that it can process Landsat 8 time series and generate land cover maps. In the last few days, Arthur Vincent finished the code that makes it possible to use Sentinel-2 time series. Even if we do not yet have Sentinel-2 series over the whole of France (they should arrive soon), we used demonstration products provided by THEIA to validate the processing chain.

 

Everything seems to work fine, and the 10 m resolution of Sentinel-2 brings much more detail to the produced maps. Here are two extracts (near Avignon) which show the difference between Landsat 8 (top) and Sentinel-2 (bottom). Note that the comparison only makes sense in terms of spatial detail: the maps do not correspond to the same acquisition periods, only a few months of Sentinel-2 data were used, without the summer period, and the reference data are slightly different. So do not draw conclusions about the thematic quality of these maps.

 

The first extract shows a natural vegetation zone with a little agriculture (top: LANDSAT 8, bottom: Sentinel-2).

coudoux.png


New version of fully automatic land cover map of France for 2014 from LANDSAT8


Over the last few months, we have worked a lot on our method for land cover map production. Three main topics (1) were studied with Arthur Vincent and David Morin at CESBIO:

  1. porting and validating the iota2 processor on the CNES High Performance Computing (HPC) facilities;
  2. enhancing the method for reference data preparation (reference data are used both for training and validation);
  3. developing a stratification method which makes it possible to train and apply classifiers per eco-climatic area, for instance.

Using all these new features, we produced a lot (really, a lot!) of maps of continental France. We have just released the four following examples, produced using all the available LANDSAT 8 data for 2014:

  • regarding the reference data:
    1. including 4 classes of artificial surfaces: continuous urban, discontinuous urban, road surfaces, and commercial and industrial areas (2);
    2. only one artificial class that gathers the 4 above (3);
  • regarding the stratification method:
    1. using eco-climatic areas (4);
    2. without stratification, but using a fusion of several classifiers trained over different sets of tiles.

The pink urban spot, in the center of the brown zone, is the village of Chateauneuf du Pape, which is famous for its wine; the brown colour is the vineyard class. Validated!


New version of the OSO land cover products for France in 2014


We have worked a lot on the land cover map generation procedure over the last months. Three main axes (1) were tackled by Arthur Vincent and David Morin at CESBIO:

  1. Porting and validating the iota2 processing chain on the CNES High Performance Computing (HPC) infrastructure.
  2. Improving the procedure for preparing the reference data used for training the classifiers and validating the produced maps.
  3. Developing the stratification that makes it possible to specialise the classification algorithms per eco-climatic zone, for instance.

Using all these new features, we produced a lot (really, a lot!) of maps of metropolitan France. We have just put online a few examples for the year 2014, using all the available Landsat 8 data. We chose to show you the 4 cases corresponding to the following combinations:

  • regarding the reference data:
    1. use of 4 classes of artificial surfaces (loosely called "built-up"): continuous urban, discontinuous urban, "road" surfaces, and industrial and commercial areas (2);
    2. a posteriori grouping of these 4 classes (3);
  • regarding the stratification mode:
    1. with stratification by eco-climatic zone (4);
    2. without stratification, but with a fusion of several (10) classifiers trained on different image tiles.

The pink village in the center of the brown zone is Chateauneuf du Pape, and the brown zone around the village is vineyards! No ground truth is needed to check this, but we would still be happy to go and verify.

Arthur has put together a rather handy interface for visualising and comparing the different maps. You can access it here. The icon at the top right lets you select the maps to display. On the left, below the zoom buttons, you can select 2 of the maps for which the quality statistics (per-class F-Score (5)) will be displayed below the visualisation area. This lets you appreciate the differences between the approaches.

 

To the 4 new maps, we added the version we had published at the beginning of the year, whose quality is lower. If you look at the Overall Accuracy of that map, you will see that it is actually higher than that of the new maps. This is because, in the old version, we used many water pixels for the validation, and water is very easy to classify. The main problem with that old version is the over-classification of urban areas at the expense of natural mineral surfaces and orchards. This has been improved thanks to the work on the preparation of the reference data.

 

To compare the maps, it is useful to look at the per-class F-Scores. You will see that the eco-climatic stratification brings important improvements in the mean values and in the confidence intervals.

 

If you want to download the full GeoTiff files (warning: they are large!), you can use the following links:

Do not hesitate to give us feedback. We keep working on improving the methods.

Notes:

(1) Many other tasks were carried out, including the preparation of the ingestion of Sentinel-2 data, for example.

(2) These 4 classes correspond to the Corine Land Cover nomenclature, whose 2012 polygons were refined using a procedure developed by David and Marcela and described in this presentation (from slide 33 onward).

(3) Training and classification are still done with the 4 separate classes, but they are merged at the end, which increases the accuracy of the map in exchange for a loss of thematic detail. However, the 30 m Landsat pixels do not allow us to be very precise for these classes.

(4) We used the map published by Joly et al.

(5) We use this metric because it combines omission and commission errors.

 

Land cover maps quickly obtained using SPOT4 (Take5) data for the Sudmipy site


At CESBIO, we are developing land cover map production techniques for high resolution image time series, similar to those which will soon be provided by Venµs and Sentinel-2. As soon as the SPOT4 (Take5) data were available over our study area (the Sudmipy site in South-West France), we decided to assess our processing chains on those data sets. The first results were quickly presented during the Take5 users' meeting held last October.

1. Experiments

In this post we describe the work carried out in order to produce these first land cover classifications with the SPOT4 (Take5) Sudmipy images (East and West areas), and we compare the results obtained over the region common to the two areas.

 

Prior to the work presented here, we organised a field data collection campaign synchronous with the satellite acquisitions. These data are needed to train the classifier and to validate the classification. The field work was conducted in 3 study areas (figure 1), which were visited 6 times between February and September 2013, for a total of 2000 agricultural plots. This allowed us to monitor the crop cycle of winter crops, summer crops and their irrigation attribute, grasslands, forests and built-up areas. The final nomenclature consists of 16 land cover classes.

 

The goal was to assess the results of a classification using field data which are limited in quantity but also in spatial spread. We also wanted to check whether the East and West SPOT4 (Take5) tracks could be merged. To this end, we used the field data collected on the area common to the two tracks (in pink in the figure) and 5 level 2A images for each track, acquired with a one-day shift.

 

West          East
2013-02-16    2013-02-17
2013-02-21    2013-02-22
2013-03-03    2013-03-04
2013-04-17    2013-04-13
2013-06-06    2013-06-07
2. Results

The first results of supervised SVM classification (using the Orfeo Toolbox) can be considered very promising, since they yield more than 90% correctly classified pixels for both the East and the West tracks, and since the continuity between the two swaths is excellent. Some confusion can be observed between bare soils or mineral surfaces and summer crops, but these errors should be reduced by using LANDSAT 8 images acquired during the summer, when the summer crops develop.

Merging of the land cover maps obtained on the East and West Sudmipy tracks (the cloudy areas were cropped out). The comparison against the ground truth (the black dots on the map, to the south-west of Toulouse) results in a kappa coefficient of 0.89 for the West and 0.92 for the East.
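For reference, the kappa coefficient quoted above is derived from the confusion matrix as (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal sketch on a toy two-class matrix (not the actual Sudmipy data):

```python
# Cohen's kappa from a confusion matrix
# (rows = reference labels, columns = classified labels).
import numpy as np

cm = np.array([[90, 10],
               [ 5, 95]], dtype=float)
n = cm.sum()
po = np.trace(cm) / n                                 # observed agreement (OA)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)
print(round(kappa, 3))  # -> 0.85
```

Unlike Overall Accuracy, kappa discounts the agreement that random labelling would already achieve.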

 

West / East

This zoom compares the results obtained on the common area of the two tracks (West on the left and East on the right). The two classifications were obtained independently, using the same method and the same training data, but with images acquired on different dates and with different viewing angles. The main errors are maize plots labeled as bare soil, which is not surprising, since this crop was just emerging when the last image was acquired. There are also confusions between wheat and barley, but even in the field one has to be a specialist to tell them apart.


3. Feedback and retrospective

After performing these experiments, we were very satisfied with the operational readiness of our tools. Given the data volume to be processed (about 10 GB of images), we could have expected very long computation times or memory limitations in the software used (after all, we are just scientists in a lab!). You will not be surprised to learn that our processing chains are based on the Orfeo Toolbox. More precisely, the core of the chain uses the applications provided with OTB for supervised training and image classification. One just has to build a multi-channel image where each channel is a classification feature (reflectances, NDVI, etc.) and provide vector data (a shapefile, for instance) containing the training (and validation) data. Then, a command line for the training (see the end of this post) and another one for the classification (idem) are enough.

Computation times are very reasonable: several minutes for the training and several tens of minutes for the classification. One big advantage of OTB applications is that they automatically use all the available processors (our server has 24 cores, but any off-the-shelf PC has between 4 and 12 cores nowadays!).

We are going to continue using these data, since we have other field data which are better spread over the area. This should allow us to obtain even better results. We will also use the Summer LANDSAT 8 images in order to avoid the above-mentioned errors on Summer crops.

4. Command line examples

We start by building a multi-channel image with the SPOT4 (Take5) data, not accounting for the cloud masks in this example :

otbcli_ConcatenateImages -il SPOT4_HRVIR_XS_20130217_N1_TUILE_CSudmipyE.TIF \
  SPOT4_HRVIR_XS_20130222_N1_TUILE_CSudmipyE.TIF \
  SPOT4_HRVIR_XS_20130304_N1_TUILE_CSudmipyE.TIF \
  SPOT4_HRVIR_XS_20130413_N1_TUILE_CSudmipyE.TIF \
  SPOT4_HRVIR_XS_20130607_N1_TUILE_CSudmipyE.TIF \
  -out otbConcatImg_Spot4_Take5_5dat2013.tif

We compute the statistics of the images in order to normalize the features :

otbcli_ComputeImagesStatistics -il otbConcatImg_Spot4_Take5_5dat2013.tif \
  -out EstimateImageStatistics_Take5_5dat2013.xml

We train a SVM with an RBF (Gaussian) kernel :

otbcli_TrainSVMImagesClassifier -io.il otbConcatImg_Spot4_Take5_5dat2013.tif \
  -io.vd DT2013_Take5_CNES_1002_Erod_Perm_Dissolve16cl.shp -sample.vfn "Class" \
  -io.imstat EstimateImageStatistics_Take5_5dat2013.xml -svm.opt 1 -svm.k rbf \
  -io.out svmModel_Take5Est_5dat2013_train_6.svm

And voilà! We perform the classification:

otbcli_ImageSVMClassifier -in otbConcatImg_Spot4_Take5_5dat2013.tif \
  -mask EmpriseTake5_CnesAll.tif -imstat EstimateImageStatistics_Take5_5dat2013.xml \
  -svm svmModel_Take5Est_5dat2013_train_6.svm -out ClasSVMTake5_5dat_16cl_6.tif

Land cover map production: how it works


Land cover and land use maps

Although different, the terms land use and land cover are often used as synonyms. From Wikipedia: "Land cover is the physical material at the surface of the earth. Land covers include grass, asphalt, trees, bare ground, water, etc. There are two primary methods for capturing information on land cover: field survey and analysis of remotely sensed imagery." And: "Land use is the human use of land. Land use involves the management and modification of natural environment or wilderness into built environment such as fields, pastures, and settlements. It also has been defined as 'the arrangements, activities and inputs people undertake in a certain land cover type to produce, change or maintain it' (FAO, 1997a; FAO/UNEP, 1999)."


Precise knowledge of land use and land cover is crucial for many scientific studies and for many operational applications. This accurate knowledge requires frequent updates, but one may also need to go back in time in order to perform trend analysis and to suggest evolution scenarios.

 

Satellite remote sensing offers the possibility to have a global point of view over large regions with frequent updates, and therefore it is a very valuable tool for land cover map production.

 

However, for those maps to be available in a timely manner and with a good quality, robust, reliable and automatic methods are needed for the exploitation of the available data.

 

 

 

Classical production approaches

The automatic approaches to land cover map production using remote sensing imagery are often based on image classification methods.

 

This classification can be:

  • supervised: areas for which the land cover is known are used as learning examples;
  • unsupervised: the image pixels are grouped by similarity and the classes are identified afterwards.

Supervised classification often yields better results, but it needs reference data which are difficult or costly to obtain (field campaigns, photo-interpretation, etc.).

 

 

 

What time series bring

Until recently, fine-scale land cover maps have been produced nearly exclusively from a small number of acquisition dates, because dense image time series were not available.

 

The focus was therefore on using spectral richness in order to distinguish the different land cover classes. However, this approach is not able to differentiate classes which may have a similar spectral signature at the acquisition time but a different spectral behaviour at another point in time (bare soils which will become different crops, for instance). In order to overcome this problem, several acquisition dates can be used, but this needs a specific date selection depending on the map nomenclature.

 

For instance, in the left image, which is acquired in May, it is very difficult to tell where the rapeseed fields are since they are very similar to the wheat ones. On the right image, acquired in April, blooming rapeseed fields are very easy to spot.

 

May image. Light green fields are winter crops, mainly wheat and rapeseed. But which are the rapeseed ones?

April image. Blooming rapeseed fields are easily distinguished in yellow while wheat is in dark green.

 

If one wants to build generic (independent from the geographic sites, and therefore also from the target nomenclatures) and operational systems, regular and frequent image acquisitions have to be ensured. This will soon be made possible by the Sentinel-2 mission, and it is already the case today with demonstration data provided by Formosat-2 and SPOT4 (Take5). Furthermore, it can be shown that a high temporal resolution is more interesting than a high spectral diversity. For instance, the following figure shows the classification performance (in terms of the κ index, the higher the better) as a function of the number of images used. Formosat-2 images (4 spectral bands) and simulated Sentinel-2 (13 bands) and Venµs (12 bands) data have been used. It can be seen that, once enough acquisitions are available, the spectral richness is caught up by a fine description of the temporal evolution.

kappaVFS.png

 

 

What we can expect from Sentinel-2

Sentinel-2 has unique capabilities in the Earth observation systems landscape:

  • 290 km swath;
  • 10 to 60 m spatial resolution depending on the bands;
  • 5-day revisit cycle with 2 satellites;
  • 13 spectral bands.

Systems with similar spatial resolution (SPOT or Landsat) have longer revisit periods and fewer, broader spectral bands. Systems with a similar temporal revisit have either a lower spatial resolution (MODIS) or narrower swaths (Formosat-2).

 

The kind of data provided by Sentinel-2 allows us to foresee the development of land cover map production systems which should be able to update the information monthly at a global scale. The temporal dimension will make it possible to distinguish classes whose spectral signatures are very similar during long periods of the year. The increased spatial resolution will make it possible to work with smaller minimum mapping units.

 

However, the operational implementation of such systems will require particular attention to the validation procedures of the produced maps, and also to the huge data volumes. Indeed, the land cover maps will have to be validated at the regional or even at the global scale. Also, since the reference data (i.e. ground truth) will be available only in limited amounts, supervised methods will have to be avoided as much as possible. One possibility consists of integrating prior knowledge (about the physics of the observed processes, or via expert rules) into the processing chains.

 

Last but not least, even if the acquisition capabilities of these new systems are increased, there will always be temporal and spatial data holes (clouds, for instance). Processing chains will have to be robust to this kind of artefact.

 

 

Ongoing work at CESBIO

 

Danielle Ducrot, Antoine Masse and a few CESBIO interns have recently produced a large land cover map over the Pyrenees using 30 m resolution multi-temporal Landsat images. This map, which is real craftsmanship, contains 70 different classes. It is made of 3 different parts, using nearly cloud-free images acquired in 2010.

 

70-class land cover map obtained from multi-temporal Landsat data.

In his PhD work, Antoine develops methods for selecting the best dates for a classification. At the same time, Isabel Rodes is looking into techniques that use all available acquisitions over very large areas, dealing both with missing data (clouds, shadows) and with the fact that not all pixels are acquired on the same dates.
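As a rough illustration of the gap-filling problem that this kind of work addresses, here is a minimal sketch (the NDVI values and dates are synthetic, not taken from any actual processing chain) that fills cloudy dates of a single-pixel time series by linear interpolation between clear acquisitions:

```python
import numpy as np

# Hypothetical NDVI time series for one pixel; np.nan marks cloudy acquisitions.
dates = np.array([0, 10, 20, 30, 40, 50], dtype=float)  # days since first image
ndvi  = np.array([0.2, np.nan, 0.5, np.nan, 0.8, 0.7])

valid = ~np.isnan(ndvi)
# Linear interpolation of the missing dates from the clear ones.
filled = np.interp(dates, dates[valid], ndvi[valid])
print(filled)  # gaps at days 10 and 30 filled with 0.35 and 0.65
```

Real chains must of course also handle pixels whose valid dates differ from one location to another, which is precisely what makes the large-area problem hard.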

 

These two approaches are complementary: one can target very detailed nomenclatures but needs some human intervention, while the other is fully automatic but less ambitious in terms of nomenclature.

 

A third approach is being investigated at CESBIO in the PhD work of Julien Osman: the use of prior knowledge, both quantitative (from historical records) and qualitative (expert knowledge), to guide automatic classification systems.

 

We will give you more detailed information about all these approaches in upcoming posts on this blog.

Land cover map production: how does it work?

Land cover maps

According to Wikipedia, land cover refers, for the FAO (1998), to "the (bio-)physical cover of the Earth's land surface", and thus to the type of use (or non-use) that humans make of the land. The landscape mosaic is mapped by identifying homogeneous types of environments (e.g. artificial areas, agricultural areas, forests or moorland, wetlands, etc.).


Accurate knowledge of land cover is a crucial issue for much research work and for many operational applications. Accurate knowledge requires frequent updates of this information, but it may also be necessary to go back in time in order to analyse trends and propose evolution scenarios.

 

The ability of satellite remote sensing to provide a recurrent overview of large regions is therefore a major asset for land cover map production.

 

However, for these maps to be available within reasonable time and with sufficient quality, robust and reliable automatic methods are needed, capable of exploiting the available data efficiently.

 

 

Classical production approaches

Automatic approaches for producing land cover maps from remote sensing images are often based on image classification methods.

 

This classification can be:

  • supervised: areas for which the land cover is known are used as training examples;
  • unsupervised: the image pixels are grouped by similarity and the classes are identified afterwards.

Supervised classification often gives better results, but it requires reference data for training, which are costly to obtain (field campaigns, photo-interpretation, etc.). It is nevertheless the approach used in current work at CESBIO, such as the production of an annual land cover map over France (with Landsat 8, while waiting for Sentinel-2).
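To make the supervised option concrete, here is a minimal nearest-centroid sketch on synthetic pixels (all class names and reflectance values are illustrative; operational chains use more elaborate classifiers, but the train-then-predict logic is the same):

```python
import numpy as np

# Synthetic labeled reference pixels: 50 samples per class, 4 spectral bands.
rng = np.random.default_rng(0)
train = {"crop": rng.normal(0.6, 0.05, (50, 4)),
         "water": rng.normal(0.1, 0.05, (50, 4))}

# "Learning" step: one mean spectrum per class.
centroids = {c: x.mean(axis=0) for c, x in train.items()}

def classify(pixel):
    # Assign the class whose training centroid is closest.
    return min(centroids, key=lambda c: np.linalg.norm(pixel - centroids[c]))

print(classify(np.array([0.58, 0.62, 0.61, 0.59])))  # -> crop
```

The unsupervised alternative would instead cluster the pixels first and attach class labels afterwards, which is why it needs less reference data but more interpretation effort.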

 

 

The contribution of multi-temporal data

Until recently, land cover maps at fine cartographic scale were almost exclusively produced from a small number of dates, mainly because of the lack of dense multi-temporal series from sensors with high spatial resolution. The emphasis was therefore put on the spectral richness of the images in order to distinguish the different land cover classes.

 

However, this single-date approach cannot distinguish classes that have the same spectral signature at the acquisition date but a different signature at another date (for instance, bare soils that will later become different crops). To overcome this difficulty, several dates can be used, but this requires a specific selection of dates depending on the target nomenclature.

 

For example, in the image on the left, acquired in May, it is very difficult to tell where the rapeseed plots are and which plots are wheat. In the image on the right, acquired in April, the flowering rapeseed plots are very easy to distinguish from the bright green wheat plots.
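The rapeseed/wheat example can be put into toy numbers (the reflectance values below are purely illustrative, chosen only to mimic the "alike in May, distinct in April" situation):

```python
import numpy as np

# Illustrative per-class reflectance at two dates.
may   = {"wheat": 0.45, "rapeseed": 0.46}  # both plots look green and alike
april = {"wheat": 0.40, "rapeseed": 0.75}  # flowering rapeseed stands out

# Inter-class distance with the May image alone:
d_may = abs(may["wheat"] - may["rapeseed"])
# Inter-class distance once the April date is added (2-D feature space):
d_both = np.hypot(may["wheat"] - may["rapeseed"],
                  april["wheat"] - april["rapeseed"])
print(round(d_may, 2), round(d_both, 2))  # 0.01 0.35
```

Adding the second date multiplies the separation between the two classes, which is exactly what a classifier needs.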

 


April image: the rapeseed plots in full bloom are clearly visible, appearing in yellow. May image: the light green plots are winter crops, mainly wheat or rapeseed. Where are the rapeseed fields?

 

If we want to set up operational and generic systems (independent of the mapped sites and hence of the target nomenclatures), frequent and regular image acquisition must be ensured. This will be made possible by the Sentinel-2 mission, and is already the case with the demonstration data from Formosat-2 and SPOT4 (Take 5). Moreover, it can be shown that a high temporal resolution can be more valuable than a great spectral diversity. For example, the following figure shows classification performance ( \kappa index; the higher, the better) as a function of the number of dates used for the classification. We used Formosat-2 images (4 spectral bands) together with Venµs (12 bands) and Sentinel-2 (13 bands) simulations. We can see that, beyond a sufficient number of dates, the spectral richness of Venµs and Sentinel-2 is caught up by the fine description of the temporal behaviour obtained with the simpler Formosat-2 sensor.
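The κ index used in that comparison corrects overall accuracy for chance agreement; it can be computed from a confusion matrix as follows (the matrix below is made up purely for illustration):

```python
import numpy as np

def cohen_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement (overall accuracy)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement from the marginals
    return (po - pe) / (1 - pe)

cm = [[40, 5],   # e.g. 40 wheat pixels correct, 5 confused with rapeseed
      [10, 45]]  #      10 rapeseed confused with wheat, 45 correct
print(round(cohen_kappa(cm), 2))  # 0.7
```

Unlike overall accuracy (0.85 here), κ stays low when a classifier merely reproduces the class proportions, which makes it a fairer score for comparing runs with different numbers of dates.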

Figure: classification performance (κ) as a function of the number of dates, for Formosat-2, Venµs and Sentinel-2.

 

 
