Feedback on the irrigation scheduling experiment using remote sensing images


CESBIO contributes to an international joint laboratory in Morocco, called TREMA ("Télédétection et Ressources en Eau en Méditerranée semi-Aride", i.e. "Remote Sensing and Water Resources in the Semi-Arid Mediterranean"). This year, this laboratory embarked on an ambitious experiment in irrigation scheduling from satellite imagery, on a wheat plot near Marrakech. This experiment was already described in March, and it gave very promising results.

The main objective of the experiment was to determine whether the logistics of irrigation scheduling by a water balance model were feasible in real conditions. For this, a farmer agreed to play along on two four-hectare wheat plots: irrigation of the reference plot was managed by the farmer in the usual way, while irrigation of the test plot was driven by our tool SAMIR (an FAO-56 model forced by satellite imagery).
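As an illustration of the principle, a minimal daily water-balance loop in the FAO-56 spirit might look like the sketch below. The parameter names and default values (total available water, depletion fraction) are illustrative only and do not reproduce the actual SAMIR implementation:

```python
def daily_water_balance(et0, kc, rain, irrigation, taw=120.0, p=0.55):
    """Track root-zone depletion (mm) day by day and flag the days when
    irrigation is advised (depletion beyond the readily available water).

    et0, kc, rain, irrigation: day-by-day sequences of equal length
    (mm, except kc which is dimensionless).
    taw: total available water in the root zone (mm).
    p:   allowable depletion fraction (readily available water = p * taw).
    """
    depletion, advice = 0.0, []
    for et0_d, kc_d, rain_d, irr_d in zip(et0, kc, rain, irrigation):
        etc = kc_d * et0_d                                  # crop evapotranspiration
        depletion = min(max(depletion + etc - rain_d - irr_d, 0.0), taw)
        advice.append(depletion > p * taw)                  # True -> irrigate soon
    return advice
```

In a real scheduling chain, kc would be derived from the NDVI time series and the advice would be turned into a suggested irrigation date.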


From sowing in late December to harvest in early June, a weather station installed on a reference crop provided us with daily reference evapotranspiration measurements. In addition, to check a posteriori the quality of our estimates of irrigation water requirements, two flux measurement stations were set up. We also acquired a series of SPOT5 images early in the season to compensate for the slightly late start of the SPOT4 (Take5) experiment, which began in February.


In addition to clear weather throughout the season, we benefited from the excellent work of the SPOT4 (Take5) team, which provided us with georeferenced images very quickly. The NDVI time series were thus available within a relatively short time. As the end user, the Haouz Agricultural Office allowed us to irrigate the test plot under the best conditions, while remaining subject to the constraints of the canal system.

On the ground, everything did not go as well as planned. Following misunderstandings with the farmer, we completely missed the second irrigation, and the fertilizer application was not timely. The study plot lies on a heavy clay soil that forms a crust; we were not aware that, a few days after sowing, a specific irrigation is needed to help the plants emerge. Moreover, the farmer applied nitrogen fertilizer to both plots just after an irrigation of the reference plot, and relatively far from any irrigation of the test plot. Under these conditions the nitrogen dissolves poorly, and our test plot lacked fertilizer.

Our experiment was thus seriously hampered by the misunderstandings with the farmer, but despite the bad start, it was pursued to its end.


This plot shows, over the course of the experiment, the water inputs from rainfall and irrigation, the evapotranspiration ETobs measured in the field, and the evapotranspiration ET estimated by the SAMIR model using the vegetation status derived from SPOT4 (Take5) images. On this plot, the irrigation dates were those suggested by the model.


To our surprise, the results are extremely promising. Despite a 20% lower biomass compared to the plot managed by the farmer, we obtained an equivalent grain yield. This can be explained by the fact that, although the average number of wheat blades was much lower on the test plot, the reference plot, irrigated in the traditional way, very likely suffered water stress in late March that limited grain filling.



This full-scale experiment finally turned out to be very instructive. First, the imaging/weather/irrigation logistics worked great: the weather data transmission, the reception and the geometric and radiometric correction of images, the model runs and the irrigation decisions were largely automated. The SPOT4 (Take5) data, which prefigure those of Sentinel-2, proved perfectly suited to this application. Unfortunately, the clay crust severely limited crop emergence. This phenomenon, well known to our farmer, taught us to cultivate humility ;-) , and we will consider introducing this risk into a decision support system. Finally, the functional constraints of the gravity irrigation system taught us that our tool should be more flexible and recommend an irrigation period instead of a single date, and that we should link the service to weather forecasts.


Following this experiment, we started developing a Web service (SAT-IRR) that should shortly provide the essential functions of irrigation decision support through a simplified interface.


Adjacency effects: how they work

As explained in the post about atmospheric effects, the scattering of light by molecules and aerosols in the atmosphere brings about several effects: scattering adds haze to the images (the atmospheric reflectance), attenuates the signal from the surface (the atmospheric transmission), and blurs the images (the adjacency effects). This post is about the adjacency effects; the other aspects have already been briefly explained in the post linked above.


The figure on the right shows the types of paths that light can follow before reaching the satellite. Path 1 corresponds to the atmospheric reflectance. Path 2 is the one that interacts with the target; it is the useful path for determining the surface reflectance. Paths 3 and 4 contribute to the total reflectance but interact with the surface away from the target; these paths are thus the cause of adjacency effects, and they blur the images.



If not corrected, adjacency effects may cause large errors. Let's take the case of a fully developed irrigated field surrounded by bare soil. For such a case, the second figure on the right shows the relative error (in percent) on reflectances and NDVI as a function of aerosol optical thickness, when the adjacency effect is not corrected.




An approximate correction can be applied, but it requires knowing the aerosol optical thickness. Here is how it works in our MACCS processor:


  1. We first correct the images under the assumption that the landscape is uniform. We obtain a surface reflectance under the uniform landscape assumption, noted  \rho_{s,unif} .
  2. We compute the neighbourhood reflectance (  \rho_{s,adj} ) using a convolution filter with a 2 km radius, which averages the neighbourhood reflectance weighted by the distance to the target. To be fully rigorous, this filter should depend on the optical thickness and on the viewing and sun angles (the fewer the aerosols, the larger the radius), but as we have not worked out an accurate model, we use a constant radius.
  3. We correct for the contribution of paths 3 and 4 using:

 \rho_{s}=\frac{\rho_{s,unif}\,T^{\uparrow}\,\frac{1-\rho_{s,adj}\,s}{1-\rho_{s,unif}\,s}-\rho_{s,adj}\,T_{dif}^{\uparrow}}{T_{dir}^{\uparrow}}

  • where  T^{\uparrow}=T_{dif}^{\uparrow}+T_{dir}^{\uparrow}  is the total upward transmission, the sum of the diffuse and direct upward transmissions, and s is the atmospheric spherical albedo. These quantities depend on the wavelength, on the aerosol model and on the AOT. They are computed using look-up tables based on radiative transfer calculations.
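As a minimal sketch, the step-3 correction can be transcribed directly; the function below is illustrative, not the MACCS code, and writes the multiple-scattering ratio with the neighbourhood reflectance in the numerator:

```python
def adjacency_correction(rho_unif, rho_adj, t_dir, t_dif, s):
    """Remove the neighbourhood contribution (paths 3 and 4).

    rho_unif: surface reflectance under the uniform landscape assumption
    rho_adj:  neighbourhood reflectance from the convolution filter
    t_dir, t_dif: direct and diffuse upward transmissions
    s: atmospheric spherical albedo
    """
    t_total = t_dir + t_dif
    return (rho_unif * t_total * (1 - rho_adj * s) / (1 - rho_unif * s)
            - rho_adj * t_dif) / t_dir
```

A handy sanity check: when the neighbourhood reflectance equals the target reflectance (a truly uniform landscape), the formula leaves the reflectance unchanged.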


As this processing uses a convolution with a large radius, it accounts for a significant share of the atmospheric processing time.


Result examples

The images below show 3 stages of the atmospheric processing, for 2 Formosat-2 images acquired over Montreal (Canada) two days apart. The first image was acquired on a hazy day (aerosol optical thickness (AOT) of 0.47 according to the MACCS estimate), and the second one on a clear day (AOT = 0.1).

  • The first line corresponds to the top-of-atmosphere images, without atmospheric correction. The left image is obviously blurred compared to the right one.
  • The second line corresponds to the atmospheric correction under the uniform landscape assumption (as in step 1). The left image is still obviously blurred compared to the right one.
  • The third line shows the same images after adjacency effect correction. This time, the left image is not blurred any more; it may even be slightly overcorrected, as it seems somewhat sharper than the right one.

TOA Images (On the left, the hazy image)

Surface reflectance under uniform landscape assumption (on the left, the hazy image)


Surface reflectance after adjacency effect correction (on the left, the hazy image)


The pixel-wise comparison of reflectances is another way to show the improvement due to the adjacency effect correction. The plots below compare the images of both dates corrected under the uniform landscape assumption (on the left) and after adjacency effect correction (on the right). You may observe that the dots are closer to the black diagonal on the right. On the hazy image (May 27th), the high reflectances are a little too low, while the low reflectances are a little too high, which is the symptom of a loss of contrast.

Four posthumous SPOT4 (Take5) sites in France?

SPOT4 died last night, on June 29th, 2013, after using up all its propellant to lower its altitude. But as with the great composers or great writers, SPOT4 already lets us discover a few posthumous works:

We indeed have the possibility to add 4 more sites to the list of Take5 sites in France. These sites had not been requested initially, but they were added automatically by the programming system and were observed every 5 days, like the others, because of the satellite's operating constraints:

  • when both SPOT4 instruments were in use, they had to operate simultaneously;
  • the onboard recorder could not be switched off for short periods, so filler scenes had to be acquired.

Thanks to the bad weather in France, we bought fewer images than planned, and part of the budget is left over. These data could therefore be ordered, provided of course that there are users for them. The footprint of the four sites is shown below: do not hesitate to write to me if you need these data, by sending me (olivier.hagolle, a shape or kml file with your zone of interest, together with a short description of your work on this zone.

How to estimate Aerosol Optical Thickness


Caution! This post contains formulas!

Aerosols play a major role in the atmospheric effects. Aerosols are particles suspended in the atmosphere, which can be of several types: sand or dust, soot from combustion, sulfates or sea salt surrounded by water... Their size ranges between 0.1 micron and a few microns, depending on the type of aerosol and on the air moisture. Their quantity is also extremely variable: rain can suddenly reduce their abundance (quantified as the "aerosol optical thickness"). Variations in their abundance cause large day-to-day variations in the reflectances observed from the top of the atmosphere, and it is therefore necessary to know the quantity and type of aerosols in order to correct their effects.


Unfortunately, there is no global aerosol observation network to correct the effects of aerosols; the only available data are local observations from the few hundred sites of the Aeronet network. This network therefore cannot be used operationally to correct satellite images over large areas.

Weather forecast models are just starting to predict aerosol amounts, based on satellite observations and on the modelling of the sources, sinks and wind transport of aerosols, but these data do not yet seem accurate enough to be used for the atmospheric correction of images.


Our atmospheric correction method, named MACCS, is therefore based on estimating the aerosol optical thickness from the images themselves. To understand how this method works, one must first understand the effects of aerosols on radiation. We saw in this post that scattering effects can be modelled as follows (assuming gaseous absorption has already been corrected):

ρTOA = ρatm +Td ρsurf

The reflectance at the top of the atmosphere ρTOA (Top of Atmosphere) is the sum of the atmospheric reflectance ρatm and of the surface reflectance ρsurf transmitted by the atmosphere. We seek the surface reflectance, but for each measurement made at the top of the atmosphere, there are three unknowns to determine. To separate the atmospheric effects from the surface effects, we must use other information.


Dark pixel method

When the image includes a surface whose reflectance is nearly zero, the reflectance observed at the top of the atmosphere becomes ρTOA = ρatm. We can therefore deduce the atmospheric reflectance and, using a radiative transfer model, the aerosol optical thickness (AOT). Knowing the AOT, we can compute the diffuse transmission and finally calculate ρsurf. An even simpler and more approximate version of this method consists in subtracting the reflectance of the dark pixel (i.e. ρatm) directly from the entire image, neglecting the transmission [Chavez, 1988].
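The simplest variant [Chavez, 1988] can be sketched in a few lines; the code below is a toy illustration on a list of TOA reflectances for one band:

```python
def dark_object_subtraction(toa_band):
    """Crude dark-pixel correction: assume the darkest pixel owes its whole
    TOA reflectance to the atmosphere, and subtract that value from the
    entire band (transmission neglected)."""
    rho_atm = min(toa_band)              # darkest pixel ~ atmospheric reflectance
    return [rho - rho_atm for rho in toa_band]
```

After correction the darkest pixel has zero reflectance by construction, which is exactly the assumption the method makes.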


However, this method assumes that there is a very dark area in the image (which is not always the case), and that the reflectance of the dark surface is known. The method also assumes that the amount of aerosols is constant over the image, and it neglects the effect of terrain. The results obtained by this method can therefore be quite inaccurate. In our method (MACCS), we nevertheless use the dark pixel method to determine an upper bound on the optical thickness over the zone.


Multi Spectral Method, called "DDV"

If you know the type of aerosols present in the atmosphere, it is possible to deduce the optical properties of the aerosols in one spectral band from their properties in another spectral band.


With two spectral bands, there are two measurements (ρTOA in each band) and three unknowns (the two surface reflectances in these bands, and the amount of aerosols). An additional equation can be obtained if we know the relationship between the surface reflectances of the two bands.


The method named "Dark Dense Vegetation" (DDV) is based on assumptions about the relationships between surface reflectances over dense vegetation, exploiting the fact that the spectrum of dense green vegetation is fairly constant. The best-known version of this method is the one used by NASA for the MODIS project [Remer 2005]. It relates the surface reflectances in the blue and red bands to those in the SWIR. This provides two equations, which can be used to estimate the aerosol type and the optical thickness. This method works well in temperate and boreal zones, but not in arid areas, where dense vegetation is hard to find. Early versions used the following equations:


ρBlue = 0.5 ∗ ρSWIR

ρRed = 0.25 ∗ ρSWIR


The later versions of the MODIS DDV algorithm are a bit more complicated, but follow the same principle. Our work has shown that using the equation below allows a more accurate determination of the optical thickness, down to less dense vegetation cover (NDVI down to 0.2), because brown bare soils also follow this relationship:


ρBlue = 0.5 * ρRed

(the exact value of the coefficient is adjusted according to the spectral bands of the instrument)

This version of the method, however, does not allow determination of the aerosol model. In the case of SPOT4 (Take5), the absence of a blue band prevents us from using this equation, resulting in a slight loss of accuracy.
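The DDV principle can be illustrated with a toy one-dimensional search: scan candidate AOT values and keep the one for which the corrected reflectances satisfy ρBlue = 0.5 ρRed. The linear "atmosphere" and its coefficients below are invented stand-ins for the real radiative transfer look-up tables:

```python
def toy_correct(toa, aot, k_atm, k_trans):
    """Invented linear atmosphere: rho_toa = k_atm*aot + (1 - k_trans*aot)*rho_surf."""
    return (toa - k_atm * aot) / (1 - k_trans * aot)

def estimate_aot_ddv(toa_blue, toa_red, aot_grid):
    """Keep the AOT that best satisfies rho_blue = 0.5 * rho_red after correction."""
    def error(aot):
        rho_blue = toy_correct(toa_blue, aot, k_atm=0.12, k_trans=0.10)  # blue scatters more
        rho_red = toy_correct(toa_red, aot, k_atm=0.05, k_trans=0.10)
        return abs(rho_blue - 0.5 * rho_red)
    return min(aot_grid, key=error)
```

The real algorithm first selects the DDV pixels and works on their average reflectances; the scan over a discrete AOT grid mimics the inversion against look-up tables.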

This diagram shows that the correlation between surface reflectances over vegetation is much better for the (blue, red) pair of spectral bands than for pairs including the SWIR.




Multi Temporal Method

In most cases, the reflectance of the land surface changes slowly over time, while the aerosol optical properties vary quickly from one day to the next. We can therefore assume that what changes from one image to the next (apart from special cases, often linked to human intervention) is due to aerosols, deduce the aerosol properties, and then correct the atmospheric effects. This method is too complex to be explained in detail here; interested readers may refer to [Hagolle 2008].
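The principle can be caricatured in a few lines: scan candidate AOT values and keep the one whose correction brings today's image closest to the surface reflectance of a previous clear date, assumed unchanged. The linear "atmosphere" and its coefficients are invented stand-ins for the real radiative transfer model:

```python
def toy_correct(toa, aot, k_atm=0.08, k_trans=0.10):
    """Invented linear atmosphere: rho_toa = k_atm*aot + (1 - k_trans*aot)*rho_surf."""
    return (toa - k_atm * aot) / (1 - k_trans * aot)

def estimate_aot_multitemporal(toa_today, rho_surf_previous, aot_grid):
    """Keep the AOT whose correction best matches the previous clear-date reflectance."""
    return min(aot_grid,
               key=lambda aot: abs(toy_correct(toa_today, aot) - rho_surf_previous))
```

The actual method works on many pixels at once and discards those that really changed (ploughing, harvest...), but the matching criterion is the same idea.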


For the surface reflectance to be nearly constant from one image to the next, however, the images must be acquired under a constant viewing angle. Indeed, reflectances depend on the viewing angles: these are the so-called directional effects. This method therefore only applies to satellite observations obtained at constant angles. It does not apply to standard SPOT data, but this condition holds for SPOT4 (Take5) data. It also applies to Landsat, Venμs and Sentinel-2.


In summary:


Validation of aerosol optical thickness (AOT) estimates from time series of Formosat-2 images, as a function of the method (multi-spectral, multi-temporal, combined), compared with in-situ measurements from the Aeronet network. The multi-spectral method works best on vegetated sites and is much less accurate on arid sites, while the multi-temporal method performs slightly worse on green sites but much better on arid sites. The combination of the two methods retains the best of both.

The MACCS/MAJA method, used for the SPOT4 (Take5) experiment, and also for LANDSAT, VENμS and Sentinel-2 data, combines the three methods described above to obtain robust estimates of the aerosol optical thickness. These methods work in many cases, but sometimes fail when the assumptions they rely on prove incorrect. They generally tend to work better on vegetated areas than on arid ones. For now, they assume the aerosol model is known; in the coming years, we will look for reliable ways to identify the aerosol type.


References :
Chavez Jr, P. S. (1988). An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sensing of Environment, 24(3), 459-479.

Remer, L. A., and Coauthors, 2005: The modis aerosol algorithm, products, and validation. J. Atmos. Sci., 62, 947–973.

Hagolle, O., and co-authors, 2008: Correction of aerosol effects on multi-temporal images acquired with constant viewing angles: application to Formosat-2 images. Remote Sensing of Environment, 112(4).

Hagolle, O.; Huc, M.; Villa Pascual, D.; Dedieu, G. A Multi-Temporal and Multi-Spectral Method to Estimate Aerosol Optical Thickness over Land, for the Atmospheric Correction of FormoSat-2, LandSat, VENμS and Sentinel-2 Images. Remote Sens. 2015, 7, 2668-2691.


Land cover map production: how it works


Land cover and land use maps

Although different, the terms land use and land cover are often used as synonyms. From Wikipedia: "Land cover is the physical material at the surface of the earth. Land covers include grass, asphalt, trees, bare ground, water, etc. There are two primary methods for capturing information on land cover: field survey and analysis of remotely sensed imagery." And: "Land use is the human use of land. Land use involves the management and modification of natural environment or wilderness into built environment such as fields, pastures, and settlements. It has also been defined as 'the arrangements, activities and inputs people undertake in a certain land cover type to produce, change or maintain it' (FAO, 1997a; FAO/UNEP, 1999)."

A precise knowledge of land use and land cover is crucial for many scientific studies and for many operational applications. This knowledge requires frequent updates, but one may also need to go back in time in order to perform trend analysis and to suggest evolution scenarios.


Satellite remote sensing offers a global point of view over large regions with frequent updates, and is therefore a very valuable tool for land cover map production.


However, for these maps to be available in a timely manner and with good quality, robust, reliable and automatic methods are needed to exploit the available data.




Classical production approaches

The automatic approaches to land cover map production using remote sensing imagery are often based on image classification methods.


This classification can be:

  • supervised: areas for which the land cover is known are used as learning examples;
  • unsupervised: the image pixels are grouped by similarity and the classes are identified afterwards.

Supervised classification often yields better results, but it needs reference data, which are difficult or costly to obtain (field campaigns, photo-interpretation, etc.).
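The contrast between the two modes can be sketched on toy one-band "pixels" (e.g. a single NDVI value per pixel). The nearest-centroid rule and the 2-means loop below are deliberately bare-bones illustrations, not the classifiers used in operational chains:

```python
def supervised_classify(train_x, train_y, x):
    """Supervised: class centroids are learnt from labelled pixels."""
    classes = sorted(set(train_y))
    centroids = {c: sum(v for v, y in zip(train_x, train_y) if y == c) /
                    train_y.count(c) for c in classes}
    return min(classes, key=lambda c: abs(x - centroids[c]))

def unsupervised_two_means(points, iters=10):
    """Unsupervised: pixels grouped by similarity (a bare-bones 2-means);
    the meaning of each group must be identified afterwards."""
    c0, c1 = min(points), max(points)            # crude initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in points]
        group0 = [p for p, l in zip(points, labels) if l == 0]
        group1 = [p for p, l in zip(points, labels) if l == 1]
        if group0:
            c0 = sum(group0) / len(group0)       # update centroids
        if group1:
            c1 = sum(group1) / len(group1)
    return labels
```

Note that the unsupervised version only returns anonymous groups; attaching a land cover label to each group is a separate, manual step.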




What time series bring

Until recently, fine-scale land cover maps have been produced almost exclusively from a small number of acquisition dates, because dense image time series were not available.


The focus was therefore on exploiting spectral richness in order to distinguish the different land cover classes. However, this approach cannot differentiate classes that have a similar spectral signature at the acquisition time but a different spectral behaviour at another point in time (bare soils that will become different crops, for instance). To overcome this problem, several acquisition dates can be used, but this requires a specific date selection that depends on the map nomenclature.


For instance, in the left image, acquired in May, it is very difficult to tell where the rapeseed fields are, since they are very similar to the wheat fields. In the right image, acquired in April, blooming rapeseed fields are very easy to spot.


May image. Light green fields are winter crops, mainly wheat and rapeseed. But which are the rapeseed ones?

April image. Blooming rapeseed fields are easily distinguished in yellow while wheat is in dark green.


If one wants to build generic (independent of the geographic site, and therefore also of the target nomenclature) and operational systems, regular and frequent image acquisitions have to be ensured. This will soon be made possible by the Sentinel-2 mission, and it is already the case with demonstration data provided by Formosat-2 and SPOT4 (Take5). Furthermore, it can be shown that a high temporal resolution is more valuable than a high spectral diversity. For instance, the following figure shows the classification performance (in terms of the  \kappa index, the higher the better) as a function of the number of images used. Formosat-2 images (4 spectral bands) and simulated Sentinel-2 (13 bands) and Venµs (12 bands) data were used. It can be seen that, once enough acquisitions are available, the spectral richness is caught up by a fine description of the temporal evolution.
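The \kappa index mentioned above can be computed from a confusion matrix; this is the standard Cohen's kappa, which rescales the observed agreement by the agreement expected by chance:

```python
def kappa(confusion):
    """Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    n = sum(sum(row) for row in confusion)
    p_observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(row[j] for row in confusion) for j in range(len(confusion[0]))]
    p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

Kappa is 1 for perfect agreement and 0 when the agreement is no better than chance, which makes it a fairer score than raw accuracy when class proportions are unbalanced.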




What we can expect from Sentinel-2

Sentinel-2 has unique capabilities in the Earth observation systems landscape:

  • 290 km swath;
  • 10 to 60 m spatial resolution depending on the band;
  • 5-day revisit cycle with 2 satellites;
  • 13 spectral bands.

Systems with a similar spatial resolution (SPOT or Landsat) have longer revisit periods and fewer, broader spectral bands. Systems with a similar revisit frequency have either a lower spatial resolution (MODIS) or narrower swaths (Formosat-2).


The kind of data provided by Sentinel-2 makes it possible to foresee land cover map production systems able to update the information monthly at a global scale. The temporal dimension will make it possible to distinguish classes whose spectral signatures are very similar during long periods of the year. The increased spatial resolution will make it possible to work with smaller minimum mapping units.


However, the operational implementation of such systems will require particular attention to the validation procedures of the produced maps, and also to the huge data volumes. Indeed, the land cover maps will have to be validated at the regional or even the global scale. Also, since reference data (i.e. ground truth) will only be available in limited amounts, supervised methods will have to be avoided as much as possible. One possibility consists in integrating prior knowledge (about the physics of the observed processes, or via expert rules) into the processing chains.
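As a toy illustration of the expert-rule idea (not the actual CESBIO processing chains), the kind of knowledge behind the rapeseed example at the top of this post, namely that rapeseed blooms yellow in April while wheat stays green, could be encoded as simple threshold rules. All feature names and threshold values below are hypothetical:

```python
def classify_pixel(yellowness_april, ndvi_may):
    """Toy expert-rule classifier; features and thresholds are hypothetical."""
    if ndvi_may > 0.5 and yellowness_april > 0.3:
        return "rapeseed"  # still green in May, with a yellow bloom in April
    if ndvi_may > 0.5:
        return "wheat"     # green in May, but no April bloom
    return "other"

print(classify_pixel(0.6, 0.7))  # an April-blooming, May-green pixel -> "rapeseed"
```

Such rules need no training samples at all, which is precisely their appeal when ground truth is scarce, although real chains would of course combine them with statistical evidence.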


Last but not least, even if the acquisition capabilities of these new systems are increased, there will always be temporal and spatial holes in the data (clouds, for instance). Processing chains will have to be robust to this kind of artefact.
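One simple way to make a chain robust to such holes is to interpolate each pixel's time series over the cloudy or missing dates. The sketch below uses plain linear interpolation, which is a simplification of what operational chains do, but conveys the idea:

```python
import numpy as np

def fill_gaps(dates, ndvi, valid):
    """Linearly interpolate an NDVI series over invalid (cloudy/missing) dates.

    dates : 1-D array of acquisition times (e.g. day of year)
    ndvi  : 1-D array of NDVI values, same length
    valid : boolean mask, False where the pixel was cloudy or not acquired
    """
    dates = np.asarray(dates, float)
    ndvi = np.asarray(ndvi, float)
    valid = np.asarray(valid, bool)
    # np.interp rebuilds the full series from the valid samples only
    return np.interp(dates, dates[valid], ndvi[valid])

doy   = [10, 30, 50, 70, 90]
ndvi  = [0.2, 0.0, 0.6, 0.0, 0.4]   # zeros stand for cloudy observations
valid = [True, False, True, False, True]
print(fill_gaps(doy, ndvi, valid))  # cloudy dates replaced by interpolated values
```

The same mechanism also lets pixels acquired on different dates be resampled onto a common time grid before classification.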



Ongoing work at CESBIO


Danielle Ducrot, Antoine Masse and a few CESBIO interns have recently produced a large land cover map over the Pyrenees using 30 m resolution multi-temporal Landsat images. This map, a real piece of craftsmanship, contains 70 different classes. It is made of 3 different parts, using nearly cloud-free images acquired in 2010.


70-class land cover map obtained from multi-temporal Landsat data.

In his PhD work, Antoine is developing methods for selecting the best dates for a classification. At the same time, Isabel Rodes is looking into techniques enabling the use of all available acquisitions over very large areas, dealing with both missing data (clouds, shadows) and the fact that not all pixels are acquired on the same dates.


These two approaches are complementary: one can target very detailed nomenclatures but needs some human intervention, while the other is fully automatic but less ambitious in terms of nomenclature.


A third approach is being investigated at CESBIO in the PhD work of Julien Osman: the use of prior knowledge, both quantitative (from historical records) and qualitative (expert knowledge), to guide automatic classification systems.


We will give you more detailed information about all those approaches in coming posts on this blog.

First Level 2A time series of SPOT4 (Take5) images

(aerosol images have been added at the end of the post)

The verification of the various steps of our SPOT4 (Take5) processing chain is going on. On Thursday we received our first time series, I orthorectified it on Friday, and we were then able to start testing our Level 2A processor. The series displayed below was obtained over the CESBIO site in the Tensift valley: Marrakech is near the centre of the image, while the Atlas mountains are in the south-east part.

The images in the left column are orthorectified and expressed in top-of-atmosphere reflectance (Level 1C products), while the right column displays the same images after atmospheric correction and cloud detection (Level 2A products), processed by Mireille Huc (CESBIO).

We quickly figured out that cloud detection would be easy on these very clear images. Even on February 10th, several diffuse plane contrails that can hardly be seen by eye are partially detected, as well as some of their shadows (clouds are circled in red, shadows in black). No false cloud detection is visible. Water bodies and snow are also correctly detected on this first try (circled in blue and purple respectively).
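The principle behind the multi-temporal cloud detection can be sketched as follows: clouds are bright in the blue band, so a pixel whose blue reflectance increases sharply with respect to the most recent clear observation of the same place is flagged as cloudy. This is a strong simplification of the operational method, and the threshold value is hypothetical:

```python
import numpy as np

def detect_clouds(blue_now, blue_ref, threshold=0.03):
    """Flag pixels whose blue reflectance rose sharply since the last clear date.

    blue_now  : blue-band TOA reflectance of the current image (2-D array)
    blue_ref  : blue reflectance of the most recent cloud-free composite
    threshold : minimal increase interpreted as a cloud (hypothetical value)
    """
    return (np.asarray(blue_now) - np.asarray(blue_ref)) > threshold

current  = np.array([[0.08, 0.30],
                     [0.09, 0.10]])   # one bright (cloudy) pixel, top right
previous = np.array([[0.08, 0.09],
                     [0.08, 0.09]])   # last clear observation
print(detect_clouds(current, previous))
```

The approach works because land surfaces evolve slowly between two close dates, whereas a cloud arriving over a pixel changes its blue reflectance abruptly; shadows are found the same way, looking for sudden darkening in the geometric direction opposite to the sun.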

The atmospheric correction is based on a multi-temporal method that estimates the aerosol content. It detected that the image of February 5th was hazier than those of January 31st and February 10th: the February 5th image (left column) has a subtle bluish haze compared to the other dates. In the right column, the tint is roughly constant from one image to the next, which means that the aerosol detection and the atmospheric correction are working well. The aerosol images provided below are also very consistent, with the Atlas mountains playing their role of physical barrier, blocking the aerosols on either side of the range. There is an aerosol measurement station on this site, but it broke down at the end of January, just at the start of the experiment: Murphy's law...

So, we have reviewed and tested all the steps of the processing, but we still have to check that our methods are robust enough to correctly handle the very diverse situations offered by the 42 sites. How do you say "ce n'est pas une mince affaire" ("it's no small matter") in English?

Level 1C products expressed in reflectance at the top of atmosphere.
(c) CNES, processing : CESBIO
Level 2A products expressed in surface reflectance after atmospheric correction
(c) CNES, processing : CESBIO

Aerosol optical thickness images are displayed below. One can note that the image of February 5th is consistent, with a lot of aerosols north of the Atlas and nearly none to the south. The mountains often act as barriers for aerosols, which usually stay at low altitude. The orange dots correspond to the snow mask, whereas the red ones correspond to the cloud mask. The brighter spots in the last image may be artifacts.

THEIA : A new French Data Centre dedicated to Land Surfaces

(French Version)

The THEIA Land Data Centre is a French inter-agency initiative designed to promote the use of satellite data, primarily for environmental research on land surfaces but also for public policy monitoring and for management of environmental resources. Its objective is to foster the use of remote sensing data to measure the impact of human pressure and climate on ecosystems and local areas, to observe, quantify and model water and carbon cycles, to follow the evolution of societies and of their activities, including agricultural practices, and to understand the dynamics of biodiversity.


Within the Land Data Centre, CNES has set up a production centre named MUSCATE. This centre aims at providing users with ready-to-use products derived from time series of images acquired over large areas. Sentinel-2 will of course be the spearhead of the production centre, but before its launch, MUSCATE will already produce data from the SPOT4 (Take 5) experiment. At the same time, the processing centre is also preparing the production of all Landsat data acquired over mainland France from 2009 to 2011.


The MUSCATE production centre already exists in the form of a prototype developed by CNES with strong support from Cap Gemini. This prototype already handles LANDSAT, SPOT, FORMOSAT-2, Sentinel-2 and Venμs data, using processors developed by CNES for geometric processing [1], and by CESBIO for cloud detection [2] and atmospheric correction [3]. Simultaneously, an operational production facility is being specified.

Products provided by the MUSCATE Centre are:

Simulations of SPOT4 (Take5) products from Formosat-2 data:
  • Level 1C (orthorectified reflectance at the top of the atmosphere);
  • Level 2A (orthorectified surface reflectance after atmospheric correction, along with a mask of clouds and their shadows, as well as a mask of water and snow);
  • Level 3A (bi-monthly or monthly composites of surface reflectance, obtained as the weighted average of the non-cloudy surface reflectances observed during the period). Up to now, the Level 3A chain is only available for the Venμs satellite.
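The Level 3A compositing rule, a weighted average of the non-cloudy surface reflectances observed during the period, can be sketched as follows. The array shapes and the uniform default weights are assumptions for illustration, not the actual MUSCATE implementation:

```python
import numpy as np

def monthly_composite(reflectances, cloud_free, weights=None):
    """Weighted average of cloud-free surface reflectances over a period.

    reflectances : stack of images, shape (n_dates, rows, cols)
    cloud_free   : boolean stack of the same shape, True where usable
    weights      : optional per-date weights (e.g. favouring mid-period dates)
    """
    refl = np.asarray(reflectances, float)
    mask = np.asarray(cloud_free, float)
    if weights is None:
        weights = np.ones(refl.shape[0])  # assumption: all dates weigh the same
    w = np.asarray(weights, float)[:, None, None] * mask
    with np.errstate(invalid="ignore"):
        return (w * refl).sum(axis=0) / w.sum(axis=0)  # NaN where never clear

# 3 dates over a 1x2 pixel scene; each pixel is cloudy on one date
refl = [[[0.2, 0.2]], [[0.4, 0.6]], [[0.6, 0.1]]]
mask = [[[True, False]], [[False, True]], [[True, True]]]
print(monthly_composite(refl, mask))
```

Pixels that are cloudy on every date of the period come out as NaN, which is why longer compositing periods (monthly rather than bi-monthly) trade temporal detail for spatial completeness.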

The data produced by MUSCATE will be freely distributed to research laboratories on the one hand, and to French public institutions on the other; if possible, they will also be distributed freely to a wider community. The Land Data Centre is also building a distribution server to make all these data available.


Further reading about these products :

[1]: Baillarin, S., P. Gigord, and O. Hagolle. 2008. "Automatic Registration of Optical Images, a Stake for Future Missions: Application to Ortho-Rectification, Time Series and Mosaic Products". In Geoscience and Remote Sensing Symposium, 2008, 2:II-1112–II-1115. doi:10.1109/IGARSS.2008.4779194.

[2]: Hagolle, O., M. Huc, D. Villa Pascual, and G. Dedieu. 2010. "A multi-temporal method for cloud detection, applied to FORMOSAT-2, VENµS, LANDSAT and SENTINEL-2 images". Remote Sensing of Environment 114 (8): 1747–1755. doi:10.1016/j.rse.2010.03.002.

[3]: Hagolle, O., G. Dedieu, B. Mougenot, V. Debaecker, B. Duchemin, and A. Meygret. 2008. "Correction of aerosol effects on multi-temporal images acquired with constant viewing angles: Application to Formosat-2 images". Remote Sensing of Environment 112 (4): 1689–1701. doi:10.1016/j.rse.2007.08.016.