SPOT4 (Take5) after 3 months


The SPOT4 (Take5) experiment is already 3 months old and will only last another month and a half. For the in-situ measurements carried out by the user teams, only one month is left as well.

 

To mark the occasion, we have just created a quicklook page that shows all the images acquired between January 31st and April 15th.

 

With the help of Vincent Poulain (of Thales, funded by the CNES Image Quality service (thank you!)), large improvements have been made to the settings of the SIGMA ortho-rectification software, in order to eliminate most of the poor correlations that disrupted our measurements. With the exception of two very uniform rainforest sites (Borneo and Sumatra), the registration performance is now excellent for all sites. The selection thresholds are, however, a little too strict for two other rainforest sites (Gabon and Congo), where most images are rejected. Work is ongoing...

 

In this April 6th image of Cameroon, one of the rainforest sites whose ortho-rectification was greatly improved, small cumulus clouds are present above the land but not above the sea. If a meteorologist passes by, an explanation of this frequently encountered phenomenon would be welcome.

Despite very bad weather in many places, especially in France early this year, SPOT4 managed to take some beautiful cloud-free images of almost all sites. The quicklooks of the images clear enough to be ortho-rectified are now available on this blog. On many of these sites (in Brittany, Alsace, Sumatra, China), partially cloudy images will have to be used to obtain a correct repetitivity. You may judge for yourself that the 5-day repetitivity is absolutely necessary (and in some cases even a little insufficient) to properly monitor vegetation growth.

 

On the CNES side, the Take5 production centre was installed on April 30th. Official production can now start (I guess we will have to experiment a little to configure all the settings). The distribution server of the French Land Data Centre is also ready; it is just waiting for the opening of its domain name (ptsc.fr), which requires a few signatures. A special data server for Take5 is also being finalised. The next weeks will be busy, but the data should be released in June as planned.
Finally, CNES and Astrium have agreed on a very liberal licence for SPOT4 (Take5) data, which should be issued in the coming days.

There is an issue with Paris


When Mireille Huc steps into my office saying "il y a un problème" (there is an issue), it is generally because of a new case that is not handled by our Level 2A methods. This time, she said: "there is an issue with Paris".

 

Such issues happen regularly, each time we observe a new type of landscape. That is why it is so important to process a wide range of landscapes and images to qualify our methods. Here are a few examples of the issues we encounter once in a while:

  • The very turbid Gironde estuary classified as snow, because it is very bright in the visible and dark in the SWIR, just like snow
  • False clouds due to drying white soils whose reflectance increases
  • False cloud shadows after fields were irrigated in Mexico, whose reflectance decreases quickly
  • Snow in the middle of the desert in southern Tunisia (a dry salt lake, bright in the visible and dark in the SWIR)

 

This time, on this SPOT4 (Take5) time series near Versailles, the clouds are well classified and the atmospheric correction seems to work, but... the zone outlined in blue in the north-east corner is the centre of Paris. It means the region is flagged as water, but if Paris had been under water for two months, we would have known (given the fuss when there is only one inch of snow). Mireille checked the data, and it turns out that central Paris meets all the criteria we use to detect water (computed at 200 m resolution); a small code sketch of this test is given further below:

  • NDVI < 0
  • NDWI < 0
  • Red reflectance < 0.1

SPOT4 (Take5) time series near Versailles. Colour composite: (R,G,B) = (NIR, Red, Green). The clouds are circled in green, their shadows in black, and water bodies and central Paris in blue. Click twice on each image to see it at 40 m resolution.
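For readers who want to reproduce this test, here is a minimal Python sketch of the three criteria above. The band names, the NDWI convention ((NIR − Green)/(NIR + Green), which is negative over water) and the resampling to 200 m are assumptions on my part; only the thresholds come from this post, and this is not the exact code of our processor.

```python
import numpy as np

def water_mask(green, red, nir):
    """Minimal sketch of the water test described above (not the operational code).

    green, red, nir: surface reflectance arrays, already averaged to ~200 m.
    """
    ndvi = (nir - red) / (nir + red + 1e-6)      # < 0 over water
    ndwi = (nir - green) / (nir + green + 1e-6)  # assumed convention, < 0 over water
    return (ndvi < 0.0) & (ndwi < 0.0) & (red < 0.1)

# Toy example: a "water-like" pixel (low NIR, moderate green, low red)
print(water_mask(np.array([0.05]), np.array([0.04]), np.array([0.02])))
```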

 

It is the first city for which we observe this issue, maybe because of the high building density in Paris, because of its slate roofs, and because in winter the trees have no leaves.

A solution would be to process the water mask at a higher resolution (100 m, 60 m). Or could we just say that our "water" mask is in fact a "water and dense city centre with slate roofs" mask? Or, even better, ask Parisians to grow plants on their roofs.

 

Meanwhile, one can note on this time series that, despite a repetitivity of 5 days, we only got one partially clear image in March; but this month was exceptionally cloudy, they say in Paris. The onset of vegetation can also be clearly seen in the last image of the series, after nice weather came back at the beginning of April.

 

 

 

SPOT4 (Take5): Like clockwork, but no bed of roses...


I have written that SPOT4 (Take5) was working like clockwork, but I have to admit that the ortho-rectification of SPOT4 images is not as easy as I initially thought.

 

This plot shows the location error along the West-East axis (x) and along the North-South axis (y) for each image, with a different colour for each ten-day period. A strong bias is observed, particularly during the last ten days of February and the first ten days of March, for sites in or near Europe. This location error is corrected by the ortho-rectification.

The location error of an image is the average difference between the actual position of the pixels of the image and the position computed from the position of the satellite, its orientation (in space science, we say "attitude"), the orientation of the mirror and the location of the detectors in the instrument. While the location performance of SPOT4 images, as measured at CNES, usually has a standard deviation of 450 metres, we found more than fifty scenes with location errors greater than 1000 metres. Most of them were acquired close to Europe.

 

We do not yet have an explanation for this issue, which is still within the SPOT4 requirements (1500 m RMS). On recent satellites, the attitude is measured very precisely by star trackers. These sensors are small optical instruments that identify stars whose positions are known in order to determine the attitude of the satellite, just as a walker lost at night can use the North Star to find his or her way. But when SPOT4 was designed in the early 1990s, star trackers were not yet operational, and SPOT4 used another type of sensor, the Earth sensor. This device works in the thermal infrared: it scans the Earth's horizon and deduces the position of the centre of the Earth. However, its accuracy is degraded by the presence or absence of high clouds, which slightly modify the horizon. For this reason, Earth sensors are less accurate than star trackers.

 

In short, the location of SPOT4 (Take5) images is sometimes quite poor, and when we search for a ground control point, we need to look for its match within a range that can reach 2.5 kilometres. Searching over such a long distance increases the probability of matching similar neighbourhoods that do not actually correspond to the same place. Therefore, within the set of ground control points that we use for the ortho-rectification, we may obtain erroneous points more frequently than usual, and because of that some images might be misregistered.
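To make the false-GCP problem more concrete, here is a toy sketch of ground control point matching by normalised cross-correlation over a search window. It is not the SIGMA algorithm we use operationally; the patch size, search radius and score are illustrative assumptions, but it shows why a larger search window (here, up to the equivalent of ~2.5 km) gives more opportunities for a wrong, look-alike match.

```python
import numpy as np

def find_shift(ref, img, center, patch=32, search=50):
    """Toy GCP matching by normalised cross-correlation (not the SIGMA software).

    ref, img : 2-D arrays (reference image and new image), same geometry.
    center   : (row, col) of the candidate GCP in the reference image.
    search   : half-width of the search window in pixels; with large location
               errors this window must be wide, which raises the risk of
               matching a similar but wrong neighbourhood.
    Assumes the patch and the search window stay inside both images.
    """
    r, c = center
    p = ref[r - patch:r + patch, c - patch:c + patch].astype(float)
    p = (p - p.mean()) / (p.std() + 1e-9)
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            q = img[r + dr - patch:r + dr + patch,
                    c + dc - patch:c + dc + patch].astype(float)
            q = (q - q.mean()) / (q.std() + 1e-9)
            score = float((p * q).mean())
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best   # a low best score is a hint of a false GCP
```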

 

SPOT4 (Take5) multi-temporal registration accuracy, for the images of NASA's Maricopa site, which is observed twice every 5 days under two different angles. The accuracy is given for 80% of the pixels; the remaining 20% of the measurements are considered to be due to registration measurement errors. The registration is slightly better when the images are observed with the same viewing angle.
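As a side note, the "accuracy for 80% of pixels" statistic mentioned in the caption can be obtained by taking the 80th percentile of the per-pixel registration residuals, so that the worst 20% (assumed to be measurement outliers) are left out. A quick illustration with synthetic numbers:

```python
import numpy as np

# Hypothetical per-pixel registration residuals (in pixels) between two dates
# of the same site; the values are synthetic, for illustration only.
residuals = np.abs(np.random.default_rng(0).normal(0.2, 0.15, 10000))

accuracy_80 = np.percentile(residuals, 80)
print(f"80% of pixels are registered to better than {accuracy_80:.2f} pixel")
```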

With the help of some CNES colleagues (Cécile Déchoz, Stéphane May, Sylvia Sylvander), I have spent the last month tuning a parameter set that minimises the number of false GCPs by selecting them carefully, without removing too many of the good ones, so that images with a large cloud cover can still be ortho-rectified. Results are now improving, but misregistered images are still encountered once in a while.

 

Same plot for the JRC site in Tanzania. This site is much cloudier than Maricopa, but the performance is equivalent.

Finally, in most cases the registration of Take5 data should be quite good, with most pixels within 0.5 pixel accuracy, but some images may have larger errors. The ortho-rectification diagnostics enable us to detect these cases, as in the image below, and such images will not be delivered at Level 1C.

An example of an image (Sumatra) for which the registration error is larger: the cloud cover is high, the surface is quite uniform, and the LANDSAT reference image itself is quite cloudy.

 

Land cover map production: how it works


Land cover and land use maps

Although different, the terms land use and land cover are often used as synonyms. From Wikipedia: "Land cover is the physical material at the surface of the earth. Land covers include grass, asphalt, trees, bare ground, water, etc. There are two primary methods for capturing information on land cover: field survey and analysis of remotely sensed imagery." and "Land use is the human use of land. Land use involves the management and modification of natural environment or wilderness into built environment such as fields, pastures, and settlements. It also has been defined as 'the arrangements, activities and inputs people undertake in a certain land cover type to produce, change or maintain it' (FAO, 1997a; FAO/UNEP, 1999)."


A precise knowledge of land use and land cover is crucial for many scientific studies and for many operational applications. This knowledge needs frequent updates, but it may also be necessary to go back in time in order to perform trend analysis and to suggest evolution scenarios.

 

Satellite remote sensing offers a global point of view over large regions with frequent updates, and it is therefore a very valuable tool for land cover map production.

 

However, for those maps to be available in a timely manner and with good quality, robust, reliable and automatic methods are needed to exploit the available data.

 

 

 

Classical production approaches

The automatic approaches to land cover map production using remote sensing imagery are often based on image classification methods.

 

This classification can be:

  • supervised: areas for which the land cover is known are used as learning examples;
  • unsupervised: the image pixels are grouped by similarity and the classes are identified afterwards.

Supervised classification often yields better results, but it needs reference data, which are difficult or costly to obtain (field campaigns, photo-interpretation, etc.).
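As an illustration of the supervised approach, here is a minimal sketch using a random forest on stacked multi-temporal reflectances. The data are synthetic and the classifier choice is only an example; this is not the operational CESBIO processing chain.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch of supervised pixel classification on a time series.
# X: one row per pixel, features = reflectances of all bands at all dates
#    (the temporal profile is simply stacked into the feature vector).
# y: land cover label of the reference (training) pixels.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 4 * 10))   # e.g. 4 bands x 10 dates (synthetic)
y_train = rng.integers(0, 5, 1000)     # 5 hypothetical classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_image = rng.random((5000, 4 * 10))   # all image pixels, same feature layout
predicted_map = clf.predict(X_image)   # one class label per pixel
```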

 

 

 

What time series bring

Until recently, fine-scale land cover maps have been produced almost exclusively from a small number of acquisition dates, because dense image time series were not available.

 

The focus was therefore on the use of spectral richness to distinguish the different land cover classes. However, this approach cannot differentiate classes which have a similar spectral signature at the acquisition time but a different spectral behaviour at another point in time (bare soils which will become different crops, for instance). To overcome this problem, several acquisition dates can be used, but this requires a specific date selection that depends on the map nomenclature.

 

For instance, in the left image, which was acquired in May, it is very difficult to tell where the rapeseed fields are, since they are very similar to the wheat ones. In the right image, acquired in April, blooming rapeseed fields are very easy to spot.

 

May image. Light green fields are winter crops, mainly wheat and rapeseed. But which are the rapeseed ones?

April image. Blooming rapeseed fields are easily distinguished in yellow while wheat is in dark green.

 

If one wants to build generic (independent of the geographic sites and therefore also of the target nomenclatures) and operational systems, regular and frequent image acquisitions have to be ensured. This will soon be made possible by the Sentinel-2 mission, and it is already the case with demonstration data provided by Formosat-2 and SPOT4 (Take5). Furthermore, it can be shown that a high temporal resolution is more valuable than a high spectral diversity. For instance, the following figure shows the classification performance (in terms of the κ index, the higher the better) as a function of the number of images used. Formosat-2 images (4 spectral bands) and simulated Sentinel-2 (13 bands) and Venµs (12 bands) data have been used. It can be seen that, once enough acquisitions are available, the spectral richness is caught up by a fine description of the temporal evolution.

Figure: κ index as a function of the number of images used, for Formosat-2, simulated Sentinel-2 and simulated Venµs data (kappaVFS.png).
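For reference, the κ index used in this figure can be computed from a confusion matrix as follows (standard Cohen's kappa; the 3-class matrix is made up for illustration):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: prediction)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                               # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Toy 3-class example
print(kappa([[50, 2, 3],
             [4, 40, 6],
             [5, 3, 45]]))
```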

 

 

What we can expect from Sentinel-2

Sentinel-2 has unique capabilities in the Earth observation systems landscape:

  • 290 km swath;
  • 10 to 60 m spatial resolution depending on the band;
  • 5-day revisit cycle with 2 satellites;
  • 13 spectral bands.

Systems with a similar spatial resolution (SPOT or Landsat) have longer revisit periods and fewer, broader spectral bands. Systems with a similar revisit time have either a lower spatial resolution (MODIS) or narrower swaths (Formosat-2).

 

The kind of data provided by Sentinel-2 makes it possible to foresee land cover map production systems able to update the information monthly at a global scale. The temporal dimension will make it possible to distinguish classes whose spectral signatures are very similar during long periods of the year. The increased spatial resolution will make it possible to work with smaller minimum mapping units.

 

However, the operational implementation of such systems will require particular attention to the validation procedures of the produced maps and to the huge data volumes. Indeed, the land cover maps will have to be validated at the regional or even at the global scale. Also, since reference data (i.e. ground truth) will only be available in limited amounts, supervised methods will have to be avoided as much as possible. One possibility consists of integrating prior knowledge (about the physics of the observed processes, or via expert rules) into the processing chains.

 

Last but not least, even if the acquisition capabilities of these new systems are increased, there will always be temporal and spatial data holes (due to clouds, for instance). Processing chains will have to be robust to this kind of artefact.

 

 

Ongoing work at CESBIO

 

Danielle Ducrot, Antoine Masse and a few CESBIO interns have recently produced a large land cover map over the Pyrenees using 30 m resolution multi-temporal Landsat images. This map, which is real craftsmanship, contains 70 different classes. It is made of 3 different parts, using nearly cloud-free images acquired in 2010.

 

70-class land cover map obtained from multi-temporal Landsat data.

In his PhD work, Antoine works on methods for selecting the best dates for a classification. At the same time, Isabel Rodes is looking into techniques enabling the use of all available acquisitions over very large areas, by dealing both with missing data (clouds, shadows) and with the fact that not all pixels are acquired on the same dates.

 

These two approaches are complementary: one allows targeting very detailed nomenclatures but needs some human intervention, while the other is fully automatic but less ambitious in terms of nomenclature.

 

A third approach is being investigated at CESBIO in the PhD work of Julien Osman: the use of both quantitative (from historical records) and qualitative (expert knowledge) prior knowledge to guide the automatic classification systems.

 

We will give you more detailed information about all those approaches in coming posts on this blog.

We will play Take5 until the end of Spring !


The end of the SPOT4 (Take5) experiment was initially planned for the 28th of May, but CNES has just decided to extend it until the end of spring. The last SPOT4 images should be acquired around June 21st. In France, and in many other countries, June 21st is "Music Day": we will be playing Take5 until Music Day.

 

In France, this extra time will enable us to monitor the end of the winter crops and the start of the summer crops; we will also see the end of the snow melt in the mountains, and we will have more data to validate our algorithms. The total duration of the experiment will be around 5 months.

 

Many thanks to our CNES colleagues !

Offre spéciale / Special offer

 

No, we have not yet decided to fund our work through adverts! This is simply a reminder that communication is an important part of the SPOT4 (Take5) project, to help promote the use of high-resolution time series. This blog can not only present the excellent (?) results obtained at CNES and CESBIO, but can also showcase the applications of time series developed by its users.

Users who send us, for this blog, a text presenting their project based on SPOT4 (Take5) will benefit from dedicated, early processing and delivery of preliminary SPOT4 (Take5) time series over their sites.

Some of you have already done so:

 

Atmospheric effects: how they work


Earth surface observations by space-borne optical instruments are disrupted by the atmosphere. Two atmospheric effects combine to alter the images :

  • the light absorption by air molecules
  • the light scattering by molecules and aerosols

Here are two SPOT4 (Take5) images, acquired 5 days apart over Morocco. Because of atmospheric effects, the second image has less contrast and is "hazier" than the first one.

 

 

Light Absorption :
Atmospheric absorption : in blue, the surface reflectance of a vegetation pixel, as a function of wavelength. In red, the reflectance of the same pixel at the top of atmosphere.

The air molecules absorb the light within narrow absorption bands. Within these bands, the reflectance measured by the satellite is reduced, and in some cases the light may be completely absorbed, so that the apparent top-of-atmosphere (TOA) reflectance is zero (for instance at 1.4 µm in the figure on the right; this property is used to detect high clouds with Sentinel-2 or Landsat 8).
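As an aside, here is a minimal, hypothetical sketch of how such an absorption band can be used as a high-cloud test (around 1.38 µm, e.g. Sentinel-2 band B10). The threshold value is indicative only and is not taken from this post.

```python
def high_cloud_mask(rho_toa_1380nm, threshold=0.03):
    # Over most surfaces the 1.38 µm signal is absorbed by water vapour before
    # reaching the sensor, so a "bright" pixel in this band is very likely a
    # high cloud sitting above most of the water vapour.
    return rho_toa_1380nm > threshold
```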

Thankfully, satellite designers usually choose to locate the spectral bands away from strong absorption bands (but beware of satellite designers ;-) ). Within the satellite channels, the absorption is generally low enough that an approximate knowledge of the abundance of the absorbing gases is sufficient to obtain an accurate correction. Information on the concentration of absorbing gases (ozone, water vapour) may be found in weather analyses.

 

Light scattering

The air molecules scatter the light: a photon that passes close to a molecule is deflected in another direction. As air molecules are very small compared to visible wavelengths, they mainly scatter short wavelengths (in the blue range). The blue sky results from the scattering of sunlight by air molecules: the blue part of the solar spectrum is strongly scattered, while the other wavelengths are mostly transmitted to the ground. A cloud also scatters the light, but its large particles (droplets, crystals) scatter all wavelengths, which explains its white colour.

 

Apart from clouds and air molecules, scattering may be due to aerosols. Aerosols are particles of diverse nature (sulphates, soot, dust...) suspended in the atmosphere. Their abundance, type and size are extremely variable, and so is their effect on light. Small aerosols mostly scatter blue light, while larger aerosols scatter all wavelengths. Some aerosols may also absorb light. All this variability makes the correction of their effects very tricky.

The above video, provided by NASA, gives an idea of how aerosol properties may change from one day to the next, over a two-year period. The colour indicates the aerosol type, while the colour intensity indicates the aerosol optical thickness.

Simplified model :

In a very simplified way, atmospheric effects may be modelled as follows :

ρTOA = Tg (ρatm + Td ρsurf)

where :

  • ρTOA is the Top of Atmosphere reflectance
  • ρsurf is the earth surface reflectance
  • ρatm is the atmospheric reflectance
  • Tg is the gaseous transmission of the air molecules (Tg < 1)
  • Td is the transmission due to scattering (Td<1)

When the aerosol quantity increases, ρatm increases while Td decreases. These two variables also depend on the viewing and sun angles: the closer they are to vertical, the lower ρatm and the higher Td.
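As a toy illustration of this simplified model, and of how it can be inverted to perform a rough atmospheric correction, assuming ρatm, Tg and Td are known (the numerical values below are illustrative, not from a real radiative transfer computation):

```python
def toa_reflectance(rho_surf, rho_atm, T_g, T_d):
    """Simplified model from the post: rho_TOA = T_g * (rho_atm + T_d * rho_surf)."""
    return T_g * (rho_atm + T_d * rho_surf)

def surface_reflectance(rho_toa, rho_atm, T_g, T_d):
    """Inverse of the simplified model, i.e. a (very) rough atmospheric correction."""
    return (rho_toa / T_g - rho_atm) / T_d

print(surface_reflectance(rho_toa=0.12, rho_atm=0.03, T_g=0.95, T_d=0.85))
```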

 

Adjacency effects :

The above model should only be applied to a uniform landscape. Over a non-uniform landscape, a heavily loaded atmosphere will also blur the images: these adjacency effects are explained in another post.

Models, corrections.

Several models may be used to perform atmospheric correction. For approximate corrections, the SMAC model is one of the simplest; it can be downloaded from the CESBIO site. The difficulty in using any atmospheric correction model lies in providing the necessary information on aerosol properties. We will talk about that in another post.

Other, more accurate models may be used. In our case, in the MACCS processor, we pre-compute look-up tables using an accurate radiative transfer code (Successive Orders of Scattering), which simulates the propagation of light through the atmosphere. But the use of a complex model is only justified if an accurate knowledge of the aerosol optical properties can be obtained.

SPOT4(Take5) : Cloud statistics after one month


We have now received all the L1A images of the SPOT4 (Take5) experiment taken between January 31st and March 10th for which at least part of the surface is visible. We ortho-rectify these images to obtain Level 1C products, but sometimes the cloud cover is still too high for the image to be processed. We can use these productions to derive some statistics about cloud cover.

 

Proportion of images processed at Level 1A and Level 1C for the sites selected by each agency.
| Institution | Images acquired | L1A processed | L1C processed | % L1A | % L1C |
|-------------|-----------------|---------------|---------------|-------|-------|
| CNES        | 324             | 184           | 157           | 56 %  | 49 %  |
| JRC         | 54              | 29            | 27            | 53 %  | 50 %  |
| ESA         | 84              | 41            | 34            | 49 %  | 40 %  |
| NASA        | 48              | 26            | 26            | 54 %  | 54 %  |
| CCRS        | 6               | 1             | 1             | 17 %  | 17 %  |

 

Between 40% and 50% of the acquired images are sufficiently clear for the ortho-rectification to be feasible. When the production of all the cloud masks (Level 2A) is finished, we will be able to compute the number of cloud-free observations for each pixel.

After having looked at all the images in Europe and North Africa, we can confirm that every pixel of these sites has been observed at least once without clouds, except for 3 sites: CAlsace, EBelgium and CTunisia (!). For the site in Alsace, we had to wait until the 4th of March, and until the 10th of March for the site in Tunisia. And up to now, only a small part of the site in Belgium has been observed, on the 8th of March.

 

Number of images acquired in February, as a function of their cloud cover

| Site        | Clouds < 10% | 10% < Clouds < 50% | 50% < Clouds < 80% | Clouds > 80% |
|-------------|--------------|--------------------|--------------------|--------------|
| Alpes       | 2            | 0                  | 2                  | 2            |
| Alsace      | 0            | 0                  | 0                  | 6            |
| Ardèche     | 1            | 1                  | 0                  | 4            |
| Loire       | 1            | 0                  | 3                  | 2            |
| Bretagne    | 1            | 0                  | 1                  | 4            |
| Languedoc   | 0            | 2                  | 2                  | 2            |
| Provence    | 2            | 3                  | 1                  | 0            |
| SudmipyO    | 1            | 1                  | 1                  | 3            |
| SudmipyE    | 1            | 1                  | 1                  | 3            |
| VersaillesE | 2            | 0                  | 1                  | 3            |

In France, despite a very cloudy month of February, the 5-day repetitivity made it possible to observe nearly every site at least once. But if SPOT4 had imaged only one out of every two overpasses, only the sites in Versailles, Provence and the Alps would have been observed in any case.

 

This result confirms that it is absolutely necessary to launch both Sentinel-2 satellites within a short time interval, to enable the numerous operational applications that need to rely on a monthly clear observation. And it would be a pity if the recent GMES/Copernicus budget cuts resulted in delaying the Sentinel-2B satellite, reducing the repetitivity to only 10 days for several long years.

SPOT4(Take5) first cloud masks


Now that you know almost everything about our cloud detection method and our shadow detection method, we can show you the first results obtained by Mireille Huc (CESBIO) with SPOT4 (Take5) time series. As the method is multi-temporal, it needs an initialisation phase, and we had to wait until we had a sufficient number of images to produce the masks. These first results are not (yet) perfect, but they are already quite presentable.
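For readers who have not read those posts, the core idea of the multi-temporal test can be sketched as follows: a pixel whose blue reflectance has increased markedly since the most recent cloud-free observation is flagged as a cloud. This is only a rough, simplified illustration with made-up thresholds, not the exact rules of our processor.

```python
import numpy as np

def multitemporal_cloud_test(blue_now, blue_ref, days_since_ref,
                             base_threshold=0.03, relax_per_10_days=0.01):
    """Rough sketch of a multi-temporal cloud test (illustrative thresholds).

    blue_now       : blue reflectance of the current date
    blue_ref       : blue reflectance of the most recent cloud-free composite
    days_since_ref : age of the reference, used to relax the threshold, since
                     the surface itself may have changed in the meantime
    """
    threshold = base_threshold + relax_per_10_days * (days_since_ref / 10.0)
    return (blue_now - blue_ref) > threshold

# Example: a pixel that brightened by 0.08 in the blue since a 10-day-old reference
print(multitemporal_cloud_test(np.array([0.15]), np.array([0.07]), 10))
```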

 

The images shown below are a series of 6 Level 1C images, expressed in top-of-atmosphere reflectance, with the contours of several masks overlaid: the clouds are circled in green, their shadows in black, and the water and snow masks are circled in blue and pink respectively. You may click twice on the images to see the details of the masks. These images were acquired in Provence (France); each of them is made from four (60x60 km²) SPOT images acquired on the same day, ortho-rectified, then merged.

 

Most clouds are detected, including very thin ones, while the number of false cloud detections is very low. Most large cloud shadows are also detected, even if a few of them were missed. The water mask is also quite accurate, with nearly no false detections, considering that it is produced at 200 m resolution. The snow is well classified when the snow cover is high, but pixels with a moderate snow cover are often classified as clouds. This is a classical difficulty with snow masks.

 

However, we know that your sharp eyes will have noticed some very thin clouds partly missed by our classification in the north-east of the first image, a few false cloud detections in the 3rd and 5th images (the ground dries and becomes brighter and whiter), and some missed cloud shadows for small clouds once in a while (we know why; it is an initialisation problem, but it would take long to explain...). The cloud detection threshold for water pixels (the method is different from the cloud detection over land) is maybe a little too low, as some bright Camargue lakes are wrongly classified as cloudy. But after all, for a first run, the result is not bad, and we will refine all the parameters when we have a sufficient number of images.

In the fourth image, only two of the four (60x60 km²) scenes are available, because the two others were too cloudy to be ortho-rectified, as we need to see the surface to take ground control points. In fact, the ortho-rectification step is the first step of our cloud masking process.

 

The clouds are circled in green, their shadows in black, and the water and snow masks are circled in blue and pink respectively. You may click twice on the images to see the details of the masks.