
Misconceptions about UAV-collected NDVI imagery and the Agribotix experience in ground truthing these images for agriculture

While UAVs or drones represent excellent tools for collecting images to identify problems and supplement crop scouting in agriculture (a blog post is coming about how our customers use Agribotix drones for their crop scouting needs), a consensus has not been reached on how to extract the most valuable information from these images. Many have suggested that Normalized Difference Vegetation Index (NDVI) images taken using either purpose-built or modified consumer cameras could serve as excellent indicators of crop health. In theory, these NDVI images could be used for everything from prescribing fertilizer application to estimating yields to identifying weed patches. However, the state of the art currently lags far behind these possibilities, so a blog post detailing the Agribotix approach to NDVI imagery is in order.

First off, what is NDVI? Substantial confusion exists on this subject, but it is actually quite straightforward. We consistently see high-profile websites refer to NDVI as a measure of chlorophyll content, water content, or something else entirely, but NDVI is simply the near infrared (NIR) reflectivity minus the red reflectivity (VIS), divided by the NIR plus the VIS.

NDVI = (NIR - VIS) / (NIR + VIS)
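For those who want to play with the numbers themselves, here is a minimal sketch of that calculation in Python, assuming the NIR and red (VIS) reflectance are already available as arrays; the reflectance values in the example are made up for illustration.

```python
# A minimal sketch of the NDVI calculation described above, assuming NIR
# and red (VIS) reflectance are available as floating-point arrays.
import numpy as np

def ndvi(nir, vis, eps=1e-9):
    """Return (NIR - VIS) / (NIR + VIS) per pixel.

    `eps` guards against division by zero for pixels with no signal at all.
    """
    nir = np.asarray(nir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    return (nir - vis) / (nir + vis + eps)

# Illustrative (made-up) reflectance values:
print(ndvi(0.50, 0.08))  # healthy leaf -> ~0.72
print(ndvi(0.30, 0.25))  # bare soil    -> ~0.09
```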

Specifically, NDVI was developed by a NASA scientist named Compton Tucker in a 1977 paper entitled “Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.” He examined 18 different combinations of NIR (Landsat MSS 7, 800-1100 nm), red (Landsat MSS 5, 600-700 nm), and green (Landsat MSS 4, 500-600 nm) and compared these results with the density of both wet and dry biomass in an attempt to determine which combination correlated best. His findings were that NIR/red, SQRT(NIR/red), NIR-red, (NIR-red)/(NIR+red), and SQRT((NIR-red)/(NIR+red)+0.5) were all very similar for estimating the density of photosynthetically active biomass.

This is a classic NDVI image of the Earth taken by one of the Landsats. The NDVI equation was developed for assessing the amount of vegetation on Earth using, at the time, brand new satellite technology. The original Landsat had 4 bands and a 60 m resolution.

These results are not surprising. Plants reflect strongly in the near infrared because of a spongy layer found on the underside of the leaf, but not strongly in the red (plants are green after all, meaning they reflect green light). Soil, on the other hand, reflects both. However, when a plant becomes dehydrated or sickly, the spongy layer collapses and the plant ceases to reflect as much NIR light. Thus, a linear combination of the NIR reflectivity and red reflectivity should provide excellent contrast between plant and soil, and even between healthy plants and sickly plants. It turns out that the exact combination is not particularly important, but the NDVI index of (NIR-red)/(NIR+red) was particularly effective at normalizing for different irradiation conditions, and Compton had to pick one, so it stuck. Compton proceeded to publish more than a hundred papers using this index, including a 1979 paper following corn and soybean development that we will discuss later.

The basic principle of NDVI relies on the fact that, due to the spongy layer found on their undersides, leaves reflect a lot of light in the near infrared, in stark contrast with most non-plant objects. When the plant becomes dehydrated or stressed, the spongy layer collapses and the leaves reflect less NIR light, but the same amount in the visible range. Thus, mathematically combining these two signals can help differentiate plant from non-plant and healthy plant from sickly plant.

Now we fast-forward 30 years. Landsat-1 was retired decades ago, Landsat-8 has been upgraded to 11 bands, and NDVI has largely been displaced by the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) and other indices for vegetation research, but it has become (relative to 1977) very inexpensive to modify a consumer camera to collect infrared bands and fly it aboard a small UAV. However, the bands collected by these consumer infrared cameras can vary widely from the original bands used to develop these indices, the ground resolution of UAV imagery is more than one thousand times higher, the reflected radiation does not have to travel through the entire atmosphere to be collected, and the incident light is dramatically more varied. These considerations mean that the vegetation index images produced by UAVs may not exactly reflect those taken by satellite or handheld device.

While any consumer camera can collect blue (400-500 nm), green (500-600 nm), and red (600-700 nm) bands that approximate the Landsat-1 bands well, collecting the NIR band requires camera modification. Because the silicon detectors in digital cameras absorb out to about 1200 nm, removing the infrared filter found on all consumer cameras and replacing it with another allows the camera to detect the infrared light necessary for producing NDVI images. Two general strategies exist. Once the infrared filter is removed, both the blue and the red channels absorb significantly in the NIR, so it is possible to use either a notch filter that blocks red, reassigning the red channel to NIR and the blue channel to blue + NIR, or a long-pass filter that blocks blue, reassigning the blue channel to NIR and the red channel to red + NIR. Note that in both of these scenarios an unadulterated red channel is not available.

However, NDVI was developed without these limitations. It is not a gold standard for its own sake, but rather an index that worked well given the data available. It turns out that, while plants obviously reflect more green light than any other visible color, the amount of any visible light reflected is actually very small compared to the amount of NIR light reflected. Thus, while neither filter option preserves a pure red channel, any visible channel has been shown to work. In other words, non-plant matter like soil generally reflects all visible bands equally and, because plants reflect so much more NIR than any visible channel, differences between visible channels are unimportant. As a practical matter, this translates into Agribotix using a blue notch filter in our cameras and the green channel to serve as the visible signal in vegetation index calculations.
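For illustration only, here is a rough sketch of how an image from such a modified camera might be split into bands in Python. The channel assignment below (blue channel carrying NIR, green channel as the visible reference) and the file name are assumptions for the example, not a description of our actual processing pipeline.

```python
# Illustrative only: one way to derive NIR and visible bands from a modified
# consumer camera image. The band assignment below (blue channel -> NIR,
# green channel -> visible) and the file name are assumptions.
import numpy as np
from PIL import Image

def vegetation_bands(path, nir_channel=2, vis_channel=1):
    """Split an RGB image into (NIR, VIS) arrays scaled to 0-1."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    return rgb[..., nir_channel], rgb[..., vis_channel]

nir, vis = vegetation_bands("field.jpg")      # hypothetical file name
vi_map = nir - vis                            # the simple NIR-VIS index
ndvi_map = (nir - vis) / (nir + vis + 1e-9)   # guarded against zero division
```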

Since selecting this camera configuration, we and our customers have flown tens of thousands of acres, and we have spent significant time determining whether the standard NDVI equation is right for UAV-collected imagery. After analyzing and ground truthing a number of fields, we have come to the conclusion that the answer is probably not. While dividing through by NIR+VIS may have been an excellent normalization factor for comparing large swaths of earth illuminated uniformly, a small denominator can make the vegetation index blow up. Practically, this means something with low visible and NIR reflectance will paradoxically produce a very strong NDVI signal. For example, the shadows cast by crops on a small scale and clouds on a larger scale dramatically affect these images, a problem that researchers working with satellite data never had to consider. An example of this can be seen below. The raw image is shown along with the NDVI image and the NIR-VIS image. Notice that the low reflectance values dominate the NDVI image, showing the strongest vegetation signal from the tractor. In contrast, simply subtracting the visible from the NIR yields a very robust vegetation index. We have observed this trend again and again: because of incident light variations, the NDVI image often returns erratic results.
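To make the denominator problem concrete, here is a toy comparison of NDVI against the simple NIR-VIS difference for a few invented reflectance values (these numbers are for illustration only, not measurements from our fields).

```python
# A toy comparison of NDVI and the plain NIR-VIS difference, using invented
# reflectance values, to show why dark pixels can dominate an NDVI map.
pixels = {
    "healthy wheat": (0.50, 0.10),   # (NIR, VIS)
    "bare soil":     (0.30, 0.25),
    "dark shadow":   (0.03, 0.01),   # almost no light, tiny denominator
}

for name, (nir, vis) in pixels.items():
    ndvi = (nir - vis) / (nir + vis)
    diff = nir - vis
    print(f"{name:13s}  NDVI = {ndvi:5.2f}   NIR-VIS = {diff:5.2f}")

# The shadow's NDVI (0.50) rivals the healthy wheat (0.67) even though almost
# no light was reflected; the NIR-VIS difference stays near zero as expected.
```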

The NIR-VIS index returns a very reproducible vegetation map. Here we’ve applied a false coloring that scales from green (dense vegetation) to yellow to red to grey (no vegetation). Notice the tractors, houses, and roads have a low signal, while the fields have a higher signal.

The NIR-VIS image is in sharp contrast to the NDVI image. Here, the tractors look like vegetables! These normalization problems are an artifact of the varied light conditions we experience on Earth that the Landsat simply doesn’t have to deal with.

The raw image taken with a blue notch filter gives a general idea about the scene. Feel free to download it and play with all of the vegetation indices you see thrown around.

The reason Compton Tucker divided through by NIR+VIS is that he wanted to normalize for the incident light so that images could be compared over time and across seasons. However, this kind of analysis requires not only very well calibrated images, but also in-depth knowledge of crop reflectivity through the plant cycle. In 1979, Compton wrote a paper using his new index to study plant development. Not surprisingly for those familiar with the topic, the NDVI signal varied wildly over time for both corn and soy. Thus a strengthening NDVI signal is an excellent indicator of progress for the first fifty days of plant development, but a weakening signal serves the same purpose for the last fifty days.

This figure, taken from Compton Tucker’s 1979 paper, “Monitoring Corn and Soybean Crop Development with Hand-Held Radiometer Spectral Data,” shows the NDVI (called vegetation index here, the term NDVI did not catch on until later) signal of corn over time. The Julian Date refers to the number of days after January 1st. Notice the signal will vary not only with crop health and incident light, but also with time of year and stage in plant development, making comparisons between fields very difficult.

Clearly, between the developmental changes, the effect of incident light (the GreenSeeker uses a laser, eliminating this problem), and the fact that almost all cameras and image processing programs stretch image histograms to some extent to allow easier visualization, there are significant hurdles to overcome in collecting NDVI images that can be reliably compared across fields and over time. In response to these hurdles, and because our customers are generally interested in comparing the signal within a single field, Agribotix has decided to use NIR-VIS as a vegetation index, an index that is well supported by the vegetation sensing community and much less sensitive to differences in incident light than NDVI.

With all that discussion out of the way, let’s see how these UAV collected images compare to what’s really happening on the ground. For our test plot, we’ll look at the organic, dryland wheat farm shown below. On the left is a standard color image, on the right is the NIR-VIS vegetation index, and in the center is an NDVI for comparison. You’ll notice that under normal circumstances, the NIR-VIS looks identical to the NDVI. It’s only when signals get low due to shadows or absorbing materials that the NDVI signal yields strange results.

Color, NDVI, and NIR-VIS (VI) maps of a dryland, organic farm. Winter wheat is growing in all five middle fields.

You’ll notice the image contains the entirety of five fields and the fringes of five more to the North and several more to the South. Let’s look at them one by one to see how the vegetation index image (NIR-VIS), which we’ll call VI from now on, compares to the images on the ground, but first a couple of gross observations. The field at top left is planted with oats that are just sprouting and their VI, consistent with what Compton reported for corn and soy, is very low relative to the more mature wheat. The trapezoidal field next to it is wild grasses, which have very low VI signals. The blotchy signal is cheat grass (previously misidentified as poverty weed several blog posts below), which does reflect a strong signal. The sliver of a field in the top right is a more mature oats field. The five fields wholly contained in the middle of the image are, from left to right, wheat-after-wheat, wheat-after-fallow, fallow, unplanted (and plowed shortly after we took these), and wheat-after-fallow. We won’t worry about the fields to the South. Without any ground-truthing, it is clear that the VI images Agribotix collects are reflective of the general state of the different fields and can already be used to make gross assessments of the states of each.

The difference between wheat-after-wheat and wheat-after-fallow is striking from both the air and the ground. The image below is a color and VI image taken at the border. You can guess which field is which.

Color image of the border between the wheat-after-wheat and wheat-after-fallow fields.

VI image of the border between the wheat-after-wheat and wheat-after-fallow fields. The stark contrast between the fields can be easily seen from the aerial imagery.

In the previous example, the grower already knew that the wheat-after-fallow field would produce more vegetation, so the UAV imagery recognizing this merely served as a reality check. However, Agribotix uncovered many examples where the grower was unaware of dramatic differences in vegetation. Below is a color image from the road. It is very difficult to see that the wheat closer to the road is coming in thicker and healthier than the wheat several hundred feet away.

From the road it is very difficult to identify where the wheat is coming in thicker or sparser.

A UAV flight combined with VI images revealed this striking difference, which is confirmed by the images from the ground. We have seen this many times (providing more examples might test the patience of our audience), and we confidently conclude that UAV-collected VI images can be used to make robust vegetation maps that solidly identify areas of dense and sparse vegetation. We will have to wait until the end of the growing season to determine how early these images can predict important information like yields and fertilizer requirements, but stay tuned for the next blog post on how we use these images for crop scouting and weed detection.

Agribotix UAV-collected VI imagery recognized that the wheat density observed from the road was not indicative of the whole field. The red areas will likely produce less wheat.

We walked to this location to ground-truth the aerial images and found much sparser rows in the red areas shown in the image at left.

(Source – http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collected-ndvi-imagery-and-the-agribotix-experience-in-ground-truthing-these-images-for-agriculture)
