Estimating soil moisture from satellite data

Soil moisture estimates can help farmers assess plant health, aid in predicting crop yields, help governments alleviate and protect against flooding, measure urban water use for city planners, and more.

Ground sensors are the most accurate tools for tracking soil water, but they are expensive and difficult to relocate quickly. Satellite data, however, provide an economical way to monitor large and remote areas and can be invaluable for assessing change and understanding trends, for example in drought assessments or river overflows.

Mackenzie River, Canada. Source: Landsat program / USGS / NASA

How to assess soil moisture from space

The thermal-optical trapezoid model (TOTRAM) is a widely used model for estimating soil moisture from satellite data. It combines measurements from thermal and optical sensors, building on the fact that ground surface temperature is correlated with soil water content.

Scientists use the optical data to look at vegetation and soil type, as well as other parameters, determining the relationship between ground temperature and soil moisture in a specific area, on a specific day. Then the thermal data is used to calculate moisture based on that relationship.
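The general shape of this kind of trapezoid calculation can be sketched in a few lines of Python. Everything below is illustrative rather than taken from the original publications: the linear dry/wet "edges" and the coefficient values are placeholder assumptions standing in for the per-scene calibration described above.

```python
import numpy as np

def totram_moisture(lst, ndvi, i_d, s_d, i_w, s_w):
    """Normalized soil moisture (0 = dry, 1 = wet) from a
    thermal-optical trapezoid. The dry (hot) and wet (cool) edges
    are modeled as linear functions of vegetation cover (NDVI):
        T_dry = i_d + s_d * NDVI
        T_wet = i_w + s_w * NDVI
    lst is the land surface temperature from the thermal sensor,
    and the pixel's position between the two edges gives moisture."""
    t_dry = i_d + s_d * ndvi
    t_wet = i_w + s_w * ndvi
    return float(np.clip((t_dry - lst) / (t_dry - t_wet), 0.0, 1.0))

# Hypothetical edge coefficients for one scene on one date:
w = totram_moisture(lst=310.0, ndvi=0.4,
                    i_d=320.0, s_d=-10.0,   # dry edge
                    i_w=300.0, s_w=-5.0)    # wet edge
```

Note that the edge coefficients are only valid for the scene and date they were calibrated on, which is exactly the cost discussed below.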


Removing temperature from the equation

Some satellites, such as Landsat-8, carry both thermal and optical sensors. But not all do, which can limit the available data sources when looking at a given area or date range. This is why Utah State University scientists Morteza Sadeghi and Scott Jones, in collaboration with their colleagues Ebrahim Babaeian and Markus Tuller at the University of Arizona, decided to create a new model that uses only optical measurements [Sadeghi et al. 2017. The optical trapezoid model: A novel approach to remote sensing of soil moisture applied to Sentinel-2 and Landsat-8 observations. Remote Sensing of Environment, 198, 52-68]. With it, they are able to use data from both Landsat-8 and Sentinel-2, even though Sentinel-2 does not capture thermal data.

The researchers also found the previous model, TOTRAM, somewhat cumbersome, as it required additional calibration for each environment and each observation date. For example, scientists had to account for near-surface air temperature, relative humidity, wind speed, and other environmental factors that could change the relationship between ground temperature and soil moisture.

A ground reflectance model

To counter these limitations, Sadeghi and colleagues decided to use ground reflectance instead of temperature to assess the amount of water in the surface soil layer. The new optical trapezoid model (OPTRAM) is built on a recently established physical relationship between soil moisture and shortwave infrared transformed reflectance. The concept is that water, even in the ground, reflects certain wavelengths in a specific way, and this signature is evident in the data.

Sadeghi and colleagues used ground measurements of soil moisture at several US locations to validate and calibrate the model and to gauge its accuracy. Ground testing soon showed the new model (OPTRAM) to be just as accurate as the old one (TOTRAM).

The relationship between reflectance and moisture is also less affected by the environment than the temperature-moisture relationship. This means the new model only needs to be calibrated once per location and is much less dependent on when the data were captured.

With this new model, Sadeghi and colleagues can now track soil moisture using not only Landsat-8 but also Sentinel-2 – and, in principle, any optical satellite covering these spectral bands. This offers scientists another helpful resource for tracking soil moisture. With less calibration required, the model is also far less costly in computing resources.


Expanding the model

Now that the initial research, funded by the U.S. National Science Foundation, is published, Sadeghi and colleagues are continuing to improve the OPTRAM model's calibration and to look for additional remote-sensing datasets, such as MODIS, that could be used to infer soil moisture.


You might also be interested in


Modelling pollutant exposure from space

Particulate matter – also referred to as aerosols or, loosely, air pollution – consists of microscopic particles suspended in Earth's atmosphere. Some occur naturally, during wildfires and volcanic eruptions, and some result from human activities, such as car exhaust and power plants, forming through chemical reactions of gases such as sulphur dioxide (SO2) and nitrogen oxides (NOx: nitric oxide (NO) and nitrogen dioxide (NO2)). Measures to reduce the emissions of these precursor gases are therefore often effective in reducing overall levels of particulate matter.

Particulate matter is especially important to monitor because of its impacts on climate, as well as its adverse effects on human health, having been linked to asthma, lung cancer, prenatal complications, and a number of respiratory diseases.

Looking at pollution from space

Air pollution is often hard to see from the ground, so it can come as a surprise that it is quite visible from space.

The Ozone Monitoring Instrument (OMI), aboard the NASA Aura satellite, provides measurements of NO2, one of the precursor gases responsible for particulate matter pollution. Additionally, most of the NO2 captured by the OMI instrument is right near the surface. As a result, patterns in the NO2 data strongly reflect what is happening at the surface in terms of emissions. This explains why OMI is one of the most widely used instruments for researchers looking at air pollution from space.

For example, thanks to data from OMI, highways and shipping lanes are clearly visible from space. “As the problem of pirates off the coast of Somalia became more pronounced in 2009 relative to 2004, researchers at the University of Bremen pointed out that the pollution tracks from ships moved further and further out to sea as they started to go further away from the coast to avoid pirates,” recalled Daven Henze, associate professor of mechanical engineering at the University of Colorado Boulder, US.

Other applications of NO2 measurements include monitoring city emissions to assess socio-economic activity – a strong wealth marker – and identifying causes of daily or weekly variations in output. For example, a country’s predominant religion can explain drops in emissions on a certain day of the week.

Scientists have also been looking at global trends and estimates for emissions of NOx – the nitrogen oxides most relevant to air pollution, NO and NO2 – comparing results with national pollution reduction goals and trying to understand the impact of emissions control policies over time.

NO2 map placed on top of the shipping route map. Copyright: CLS / KNMI / ESA

Building pollution models to find better mitigation strategies and improve human health and climate

Most distribution models for aerosols – or particulate matter – are built for the global scale, with spatial resolutions spanning several hundred kilometers, making it difficult to assess exposure at the city level. This is why Henze and his team are working on improving the models’ precision – down to 10 km – for the UN Climate and Clean Air Coalition (CCAC).

According to Henze, “If you used global model resolution – 200-300 km scale models – to try and estimate aerosol exposure, you can incur an error of up to 30-40% because the spatial scale is so different from the spatial scales of gradients in population densities”, i.e. the scales of transitions from urban to rural areas.

This is why, in 2015, Henze decided to increase the accuracy of the global distribution model by incorporating satellite data identifying population concentrations, correcting the aerosol concentrations to more closely match where humans are present – adjusting, for example, for urban and rural areas.
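The effect of weighting concentrations by where people live can be illustrated with a toy two-cell grid. The numbers below are invented for illustration only; they are not from the CCAC work.

```python
import numpy as np

def population_weighted_exposure(pm25, population):
    """Population-weighted mean concentration over a grid:
    sum(C_i * P_i) / sum(P_i)."""
    pm25 = np.asarray(pm25, dtype=float)
    pop = np.asarray(population, dtype=float)
    return float((pm25 * pop).sum() / pop.sum())

# One polluted urban cell and one clean rural cell (invented values):
pm25 = [40.0, 10.0]         # concentrations in µg/m³
pop = [900_000, 100_000]    # most people live in the urban cell

simple_mean = sum(pm25) / len(pm25)                  # 25.0 µg/m³
weighted = population_weighted_exposure(pm25, pop)   # 37.0 µg/m³
```

Averaging over area alone understates exposure here by roughly a third, which is the same order as the 30-40% error Henze describes for coarse global grids.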

The new model, developed in collaboration with Randall Martin’s group at Dalhousie University, was built using a particulate matter concentration model derived from data collected by the MODIS, MISR, SeaWiFS, and CALIPSO satellite instruments, which estimate surface fine particulate air pollution – or PM2.5 – at high resolution.

Using this new combined model to calculate exposure at a much finer resolution than coarse global simulations, the team built a toolkit for the CCAC, called LEAP-IBC, that estimates relationships between pollutant emissions and exposure.

The toolkit is offered to member nations seeking UN funding for pollution reduction policies. Not all countries have the resources to do detailed modeling and air pollution health impact analysis. Now, countries can use open satellite data with this kit to gauge the impact that specific mitigation strategies, such as power plant emissions reductions, would have on human health and climate.

Nitrogen dioxide pollution, averaged yearly from 2005-2011, has decreased across the United States. Image credit: NASA Goddard's Scientific Visualization Studio / T. Schindler


Saving the world, one satellite at a time

The World Health Organization (WHO) estimates that "fine particulate air pollution – or PM2.5 –, causes about 3% of mortality from cardiopulmonary disease, about 5% of mortality from cancer of the trachea, bronchus, and lung, and about 1% of mortality from acute respiratory infections in children under 5 years, worldwide."  

As satellite sensors continue to improve and the number of satellites increases, toolkits such as the one developed by Henze and Martin will only become more precise, offering countries the ability to monitor the efficiency of their mitigation strategies on a daily, or even hourly, basis.

Using satellite data to find new ways to reduce pollution exposure could help save millions of lives every year.


Interested in using this data? Simply visit the SEDAC website.




Using satellite and aerial imagery to estimate crop surfaces in developing countries

Agricultural statistics are essential for monitoring production changes, planning government interventions and future investments, and estimating crop outputs for policymakers, researchers, and organizations. Poor agricultural data can lead to disastrous misallocations of resources and unsuccessful policies, with dire impacts on populations and farmers alike.

Rwanda fields


Jacques Delincé, a veteran agricultural statistician and former head of the Agrilife and MARS units at the European Commission, is currently working as a consultant for the Food and Agriculture Organization – or FAO – on the Global Strategy to Improve Agricultural and Rural Statistics. The team is looking for more cost-efficient methods for agricultural statistics in developing countries and, in particular, comparing the accuracy and costs of list and area frames for farmer surveys.

Estimating crop areas by interviewing farmers involves collecting data through regular household questionnaires, asking farmers to estimate the area planted with each crop for an individual field or farm. Area frame surveys, on the other hand, produce an overall estimate drawn from a sample of well-defined land units.
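A simple point-based area frame estimate can be sketched as follows. The sample counts are invented for illustration; real surveys use carefully designed sampling frames and variance estimators on top of this basic idea.

```python
def area_frame_estimate(points_on_crop, total_points, region_area_km2):
    """Point-frame estimator: the share of randomly placed sample points
    that fall on a given crop, scaled by the total region area,
    estimates that crop's planted area."""
    fraction = points_on_crop / total_points
    return fraction * region_area_km2

# If 120 of 1,000 random sample points fall on maize in a
# 25,000 km² region, the estimated maize area is 3,000 km².
estimate = area_frame_estimate(120, 1000, 25_000)
```

The appeal of the approach is that classifying a fixed set of sample points – whether on the ground or in imagery – is far cheaper than enumerating every field.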

In Nepal and Brazil, the FAO ran ground data collection surveys. Ground surveys have many strengths, but they can be costly, and strict quality control procedures are needed to ensure data integrity. And, despite rigorous statistical modeling approaches, accuracy remains an issue, as cost considerations often restrict sample sizes.

This could drastically change in the next few years as Earth observation data becomes more accessible and affordable satellite imagery can be used to supplement ground-based systems. And in areas where surveys are unsafe due to civil wars and violence, aerial images may be the only approach.

In the interest of saving costs while working in Rwanda, Delincé and his team opted to use ground surveys from the current year and combine them with recent remote-sensed data. If the team could obtain crop area results similar enough to the ground surveys by analyzing aerial images from the same location, then Delincé could apply the same discriminating methodology to satellite images of the entire country to get an accurate estimate of crop surface areas.

For their analysis, the Global Strategy team started with images from Sentinel-2, Landsat-8, and Sentinel-1. Rwanda is around 25,000 km2, roughly the size of Maryland. For a country of this size, it is often more economical to use satellite imagery than drones or planes.

Drones can be a great tool for covering small, defined areas, but regulations can vary greatly by location. For example, rules such as line-of-sight requirements – the pilot must be able to see the drone at all times – or large no-fly zones around buildings such as airports or hospitals with helipads represent a serious hurdle to full area coverage.

Planes, on the other hand, efficiently cover very large surfaces, are less susceptible to the cloud coverage issues that often plague satellites, and offer significant savings over very high-resolution satellite images. But, as with drones, military or government restrictions prevent statisticians and scientists from flying over certain areas.

Accurately estimating crop areas from aerial images is never easy and is even more of a challenge in the context of African farming systems. Crop areas in Sub-Saharan Africa are often characterized by smallholder farms that produce a wide range of diverse crops, non-uniform plots in a wide range of sizes – sometimes on the order of a few square meters – and intercropping, where farmers plant different crops within the same field.

Rwanda is no exception, and the preliminary results from the study were inconclusive: the spatial resolution of the open data used, at 15 and 10 m, was too coarse to properly distinguish between crops. Despite this, through Global Strategy’s research, the Rwandan government and NGOs, both global and local, will be able to better estimate future crop allocation, develop assistance plans for farmers, and even model the impact of specific food subsidies on the local economy.

To further improve accuracy and widen applicability, the team is now looking into sub-meter satellite data. As satellite sensors continue to sharpen and Earth observation data access improves over time, programs like the FAO Global Strategy will develop even more cost-efficient methods to improve agricultural and rural statistics in developing countries.




What resolution do I need when using satellite Earth observation data?

Trick question: it depends on what you are trying to do.

What does resolution mean?

When it comes to Earth observation, you might hear about spatial resolution, spectral resolution, and temporal resolution. While all three need to be considered when looking for satellite data, most often, when people ask about resolution, they mean “spatial resolution”.

Spatial resolution is the size of one pixel on the ground. Pixel stands for 'picture element' – the smallest individual 'block' that makes up the image. With a finer spatial resolution – 30 cm, for example, where each pixel represents a 30 x 30 cm area for optical data – you can distinguish details such as houses or cars. With a coarser resolution, an image of a similar digital size covers a much larger surface on Earth, and smaller features become harder to distinguish.

Note from the SkyWatch SAR expert: The above definition only applies to optical data. Synthetic aperture radar data (SAR) is not acquired at nadir like optical data but rather on a slant. Therefore the data is in slant range and the pixels on the ground are not square.
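To get a feel for these numbers, the pixel count needed to cover a given area at a given resolution is easy to compute. This helper is a back-of-the-envelope sketch (the function and example area are ours, not from any product), and per the note above it assumes square, nadir-looking optical pixels.

```python
def pixels_to_cover(area_km2, resolution_m):
    """Approximate pixel count to cover an area at a given spatial
    resolution, assuming square pixels (optical data, not slant-range SAR)."""
    area_m2 = area_km2 * 1_000_000
    return area_m2 / resolution_m ** 2

scene = 1.0  # km², an illustrative city-block-scale area
fine = pixels_to_cover(scene, 0.3)   # ~11.1 million pixels at 30 cm
coarse = pixels_to_cover(scene, 15)  # ~4,400 pixels at 15 m
```

The roughly 2,500-fold gap between those two counts is why resolution choice drives storage, download time, and processing cost.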

To better illustrate, here are two satellite pictures of the same location (Burj Khalifa, Dubai) taken with sensors of different spatial resolutions: on the left, a 30 cm resolution image from the TripleSat constellation; on the right, a 15 m resolution image from Landsat-8.

Dubai: comparison of high-resolution vs. coarse-resolution satellite data

Spectral resolution relates to how finely the satellite sensors divide up the portion of the electromagnetic spectrum they capture. A finer spectral resolution can discriminate between narrower bands of wavelengths, differentiating, for example, between red, green, and blue bands and allowing for coloured images.

Wavelength spectrum

Satellite sensors are able to capture data that would be invisible to the naked eye and a higher spectral resolution can provide us with a different view of objects and landscapes. For example, the shortwave infrared ranges enable highly effective geological mapping, because rocks and minerals have their own spectral pattern in these bands.

Temporal resolution refers to the time elapsed between viewings of the same area on Earth at the same angle. It can range from continuous coverage for geostationary platforms – such as a weather satellite fixed over one point on the Earth’s surface – to several days between revisits for low Earth orbit (LEO) platforms. A higher temporal resolution means a shorter revisit time.

What resolutions are available in 2017?

Spatial resolution for Earth observation satellites in the early 1980s was around 80 m – as was available on Landsat-4. Now, you can purchase remote-sensing data with spatial resolutions as fine as 30 cm. For open data, some of the finest sensors capture 15 m resolution images.

The spectral resolution has also improved drastically over the past few decades, as sensors were refined and more bands became available for study. Some of the most recent satellite sensors can now capture information in more than 1,000 different spectral bands.

As for temporal resolution, it still varies greatly from satellite to satellite. However, if you are interested in data for one specific area, the sheer number of satellites launched has increased your chances of obtaining multiple usable, cloud-free pictures.

Why a higher spatial resolution is not always better

Higher costs

Higher spatial and spectral resolutions require the most recent technology. This usually entails heavy investments, and the data can be extremely pricey. Additionally, thanks to the Copernicus and Landsat programs, large archives of coarser-resolution satellite data are available for viewing and download. Open satellite data can be obtained for free through the SkyWatch API.

Less consistency

Newer commercial satellites often work on a “tasking” basis, which means clients can request a specific satellite to cover a certain area at a certain time. If a priority account tasks the satellite you relied on, the data you needed might be delayed or simply not available. With larger government programs, like Landsat and Sentinel, images are systematically acquired rather than tasked – these satellites follow consistent paths and schedules – which means you can expect to always get an image in the same mode without worrying about conflicts.

Smaller areas

Covering the same surface area at a 50 cm spatial resolution renders an image with 400 times as many pixels as the same surface covered by a 10 m sensor – 20 times as many along each axis, squared. This means an image of the same area will be a much larger file and, of course, take longer to download. This can be an important consideration when building an app.
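The pixel-count ratio between two resolutions is the square of the linear resolution ratio, which is worth checking explicitly (this helper is illustrative, not part of any API):

```python
def pixel_count_ratio(coarse_res_m, fine_res_m):
    """How many times more pixels the finer sensor needs to cover the
    same ground area: the square of the linear resolution ratio."""
    return (coarse_res_m / fine_res_m) ** 2

ratio = pixel_count_ratio(10.0, 0.5)  # (10 / 0.5)^2 = 400.0
```

Uncompressed file size scales by roughly the same factor, since each extra pixel carries the same number of bands and bits.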

Lower availability

While some of the most recent satellites offer imagery at up to 25 cm resolution, a large number of satellites currently circling the Earth carry sensors that only offer coarser resolutions – by today’s standards. Additionally, data collected over the past decades by older satellites from the Landsat and Copernicus programs have been made freely available. As a result, specifying a coarser spatial resolution is likely to drastically enhance your chances of obtaining more than one image of the same area.

Shorter time period

With the improvements in sensor technology, IKONOS made sub-meter resolution images available for purchase for the first time in the early 2000s. Numerous satellites offering commercially available very high resolution images have since been successfully launched. However, studies of longer trends will require data that can be 10, 20, or 30 years old and has a coarser resolution.


Numerous satellite data applications, such as climate studies, look at larger patterns and global trends. In such cases, short revisit rates and high spectral resolution are key to answering questions, and global data from Sentinel-3 and MODIS are more valuable than sub-meter imagery. A coarser spatial resolution is actually preferred.

How to decide which spatial resolution you need?

Our advice, when doing remote-sensing data analysis or building a space app: think first about what you are trying to achieve and what resolutions you need to solve your business problem. The most detailed spatial resolution may not always be the best.

For example, a consortium led by the Joint Research Centre (JRC) has successfully combined data from multiple coarser-resolution satellites to monitor forest fires, using each satellite to compensate for the deficiencies of the other sensor – cloud perturbations for Sentinel-2 and sensitivity to ground moisture for Sentinel-1.

Continuous improvements in sensors, as well as the growing amount of available satellite data, have helped dramatically expand the applications of satellite imagery, and the possibilities are now almost limitless.




Monitoring forest fires

Between November 2015 and April 2016, over 36,000 hectares of forest were burnt in the Republic of Congo. The main commercial activity in the area is the extraction of round wood from areas leased from the national authorities by private companies. These leases cover an extensive part of the forest in the north of the country, including the Marantaceae forests, which were affected by the fires.

While it is usually too humid for forests in the area to burn, a very strong El Niño caused a high number of fires to spread at the beginning of 2016. At their peak, the fires were spreading as fast as 1,600 hectares a day, making them especially dangerous to track by conventional means.

The Congolese forests are not only the primary habitat of many large mammals, such as gorillas and forest elephants, but also an important carbon store for the world.

In order to determine the source of the fires and monitor their spread – and so better allocate resources to fighting and prevention – a consortium led by the Joint Research Centre used satellite data from both Sentinel-1 and Sentinel-2, each satellite's sensor compensating for the weakness of the other (cloud perturbations for Sentinel-2 and sensitivity to ground moisture for Sentinel-1).

"Burnt areas mapped by Sentinel-1's synthetic aperture radar and Sentinel-2's multispectral imagery highlighted that the origin of the fires correlates with accessibility to the forest, suggesting they were caused by human activity.

With a temporal resolution of 10 days and a spatial resolution of 10 m, Sentinel-2A images allow the timing and extent of fire events to be mapped precisely. This is an improvement on the temporal resolution of 16 days and spatial resolution of 30 m that Landsat-8 provides."

Source: Verhegghen et al., 2016


