Original research articles

Estimation of Leaf Area Index in vineyards by analysing projected shadows using UAV imagery

Abstract

A few decades ago, farmers could precisely monitor their croplands just by walking over the fields, but this task becomes more difficult as farm size increases. Precision viticulture can help better understand the vineyard and measure some key structural parameters, such as the Leaf Area Index (LAI). Remote Sensing is a typical approach to monitoring vegetation which measures the spectral information directly emitted and reflected from vegetation. This study explores a new method for estimating LAI which measures the projected shadows of plants using UAV (unmanned aerial vehicle) imagery. A flight mission over a vineyard was scheduled in the afternoon (15:30 to 16:00 solar time), which is the optimal time for the projection of vine shadows on the ground. Real LAI was measured destructively by removing all the vegetation from the area. Then, the projected shadows in the image were detected using machine learning methods (k-means and random forest) and analysed at pixel level using a customised R code. A strong linear relationship (R² = 0.76, RMSE = 0.160 m² m-2 and MAE = 0.139 m² m-2) was found between the shaded area and the LAI per vine. This is a quick and simple method, which is non-destructive and gives accurate results; moreover, flights can be scheduled during other periods of the day than solar noon, such as in the morning or afternoon, thus enabling pilots to extend their working day. Therefore, it may be a viable option for determining LAI in vineyards trained on Vertical Shoot Positioned (VSP) systems.

Introduction

According to European Union policies (Zarco-Tejada et al., 2014), Precision Agriculture is a farming management concept based on observing, measuring, and responding to inter- and intra-field variability in crops, and in which the spatial variability of vineyards plays a vital role. Vineyards are characterised by a strong spatial structure that is generally stable over time (Bramley and Lamb, 2003) and is affected by several factors, some of which are quite constant over time and can be dealt with through differentiated crop management (Bramley and Hamilton, 2004). Although Precision Agriculture can improve yields, its most significant advantage is the reduction of yield variations over time, which leads to more stability and resilience to a changing climate (Yost et al., 2016). Until a few decades ago, farmers could precisely monitor their croplands just by walking over the fields, but with increasing farm size this task is becoming more difficult without using technology (Balafoutis et al., 2017). All of this has led to a growing interest in precision viticulture and further research efforts (Santesteban, 2019), but results can vary significantly depending on knowledge of the crop. It is therefore necessary to use methods that precisely define key variables in order to avoid problems such as over-cropping, which leads to a canopy with too few healthy active leaves to produce enough sugar to ripen all clusters to the desired level, resulting in grapes lacking aroma/flavour and/or the desirable phenolic compounds (Reynolds, 2010). Key variables to be defined include cluster, flower or berry number, which directly affect yield and ripening, and leaf area, which is an important variable to monitor because the ripening of grape clusters depends on the leaf area/fruit weight ratio (Keller, 2015; Jackson, 2020). Leaves are organs specialised in intercepting radiation, which is necessary for photosynthesis. Leaf area is commonly measured using the Leaf Area Index, LAI (Lambers and Oliveira, 2019), which is closely related to the assimilation of photosynthetically active radiation (Pessarakli, 2014) and is defined as the vegetative development of a crop per unit area of land (Watson, 1947).

1. Leaf Area Index in viticulture

In viticulture, LAI is one of the most used parameters to represent canopy area (Delrot et al., 2010); it is a key indicator, since leaf area directly correlates with other critical parameters, such as transpiration, root development and photosynthetic capacity, thus limiting yield (Keller, 2015). Furthermore, LAI is related to canopy transpiration and water use, and can thus influence irrigation decisions (Netzer et al., 2008; Munitz et al., 2019), and it is affected by management and environmental factors, such as irrigation, nutrient management, and training systems (Oliveira and Santos, 1995).

2. Current Remote Sensing methods for estimating leaf area index

LAI can be measured using traditional methods, such as the Carbonneau method (Carbonneau, 1976), the adapted Point Quadrat (Smart and Robinson, 1991), the Lopes and Pinto method (Lopes and Pinto, 2005), or by measuring specific parameters related to leaf shape (Williams and Martinson, 2003). However, these methods are inefficient, time-consuming and, in some cases, destructive.

Remote Sensing is a quicker way of estimating LAI; different technologies can be used, such as field spectroradiometers (Wang et al., 2019), multispectral and hyperspectral data (Mananze et al., 2018), satellite imagery (Meyer et al., 2019; Dube et al., 2019) and thermal imagery (Neinavaz et al., 2019). These technologies can vary in accuracy and complexity:

i) Different results have been reported for satellite imagery; for example, Johnson et al. (2003) showed a significant correlation between image-based leaf area and ground-based leaf area (R² > 0.7), while Beeri et al. (2020) found a weak relationship between satellite information and LAI (R² > 0.3).

ii) Specific tools have been developed exclusively for indirect LAI estimation based on the measurement of radiation extinction through the foliage, such as the Plant Canopy Analyzer (PCA, LAI-2000 and 2200, Li-Cor Inc., Lincoln, NE, USA). Johnson and Pierce (2004) have reported high accuracy using PCA in viticulture, but even though this tool has been specifically developed to measure LAI, it is not easy to employ it correctly, since there are different protocols for its use (White et al., 2018).

iii) General-purpose sensors such as conventional RGB cameras have also shown great potential for measuring LAI using various methods. For example, Fuentes et al. (2014) used a camera mounted on a pole to acquire downward-looking digital images from the middle of the plant, and De Bei et al. (2016) developed an app (VitiCanopy) to assess canopy architecture parameters in vineyards using upward-looking imagery. Diago et al. (2012) used an RGB camera mounted on a tripod set normal to the canopy plane, 2 m from the row axis and 1.05 m aboveground. All of these authors used an RGB camera successfully, reporting high accuracy. Furthermore, RGB cameras can be mounted on unmanned aerial vehicles (UAVs), thus widening their range of application and improving the accuracy of the results. Kalisperakis et al. (2015) used a low-cost standard RGB camera to estimate LAI, obtaining good results (R² > 0.7), and improved the results (R² > 0.8) by mounting a hyperspectral VNIR imaging sensor on a UAV.

iv) Hyperspectral and multispectral sensors are an improvement on RGB cameras. Using multispectral airborne images, Hall et al. (2008) found close relationships between the planimetric canopy area and ground-based measurements of LAI at several phenological stages, whereas no significant relationships were found between NDVI and LAI. Towers et al. (2019) used a multispectral sensor to capture nadir-view images of the canopy, reporting varying accuracies depending on whether soil values were used or not, and showing that soil backscattering can contribute more to the signal than vegetation cover can. In a similar way, but capturing the images from a UAV, Comba et al. (2020) used a multispectral camera combined with point cloud creation and 3D modelling using SfM (Structure from Motion) to estimate LAI, obtaining good results (R² > 0.8). Mathews and Jensen (2013) captured images from nadir and varying oblique angles, obtaining less accurate results than Comba et al. (2020) (R² > 0.5), and showing that increasing the number of differing angles/perspectives with overlap improves the SfM product. Kalisperakis et al. (2015) reported good accuracy (R² > 0.8) by creating a 3D model via 3D triangulation; they used aerial RGB images to generate a dense point cloud by employing dense stereo and multi-image matching algorithms. Therefore, even when the same sensors and technology (RGB cameras) are used, different methods can result in different levels of accuracy.

v) An alternative approach involves the use of specific tools that can capture the three dimensions (3-D) of the area. For example, Arnó et al. (2013) computed geometric and structural parameters using a tractor-mounted LiDAR system to measure vines (using TAI, tree area index) in a transverse direction along rows with high accuracy (R² > 0.8).

Each method has its advantages and limitations. Some are cheap to implement (e.g., RGB cameras (Fuentes et al., 2014; Diago et al., 2012)), but may have different drawbacks depending on the method; for example, Diago et al. (2012) had problems using RGB cameras in terms of distance to the remnant foliage when the defoliation process was performed over highly dense canopies. Del-Moral-Martínez et al. (2016) reported that LiDAR needs to be very precisely set up, otherwise it can generate incorrect LAI values for vines with poor leaf development, recording zones in the canopy containing a considerable percentage of gaps as effective leaf wall area. Mathews and Jensen (2013) reported that the SfM approach needs more time than other UAV missions due to the higher number of images, angles or overlap. In addition, some methods are more expensive than others due to the technologies involved, such as multispectral (Towers et al., 2019; Comba et al., 2020) and hyperspectral cameras (Kalisperakis et al., 2015) or a tractor for mounting the LiDAR system on. Even a low-cost version of LiDAR (Arnó et al., 2013) can still be much more expensive than a standard RGB camera.

For most of these methods, strong correlations with LAI can generally be found (R² > 0.7), and a key requirement of most of them is to measure data around midday to minimise the effects of shadowing between the vine rows. The development of new methods that will complement current techniques and help expand target areas and extend the work to other periods of the day could be very beneficial for the application of these techniques in practice.

3. Remote Sensing approach to shadows

In Precision Agriculture, Remote Sensing is typically used for monitoring vegetation by measuring the spectral information directly emitted and reflected from the vegetation; each band is analysed separately, or indices are calculated, such as NDVI (Rouse et al., 1973), which is related to vineyard vegetation (Vélez et al., 2020b). In these approaches, shadows are frequently treated as non-desirable information (Ma et al., 2008; Zhang and Chen, 2010; Wu and Bauer, 2013; Aboutalebi et al., 2018) and are usually removed from datasets (Poblete-Echeverría et al., 2017; Jiang et al., 2018). Nevertheless, some problems can arise when the images are not obtained close to solar noon, as a result of the shaded and sunlit parts of leaves having different reflectance values. In addition, shaded areas can generate noisy data, which significantly affect the estimation of vegetation parameters (Zhang et al., 2015).

4. Proposed approach

The leaf area of the plant is correlated with intercepted light and can be estimated by measuring its shadow (Baeza et al., 2010). Therefore, a newer and less time-consuming approach to estimating LAI could be the study of the shaded soil area within the vineyard using UAVs. Each plant projects its own shadow and image analysis can be used to measure the lateral leaf area of the vines in the shaded inter-row space.

UAV platforms have been extensively used for studying and exploring vineyards, offering valuable technology for estimating numerous vineyard parameters (e.g., Mathews and Jensen, 2013; Zarco-Tejada et al., 2013; Matese et al., 2016; Santesteban et al., 2017; Weiss and Baret, 2017; Poblete-Echeverría et al., 2017). In general, UAVs offer the possibility of obtaining precise, high-resolution multispectral imagery, which is critical for measuring the shape of the vine shadows correctly. Furthermore, they allow the time of the flight to be chosen (Vélez et al., 2020a), which was a crucial factor in this study, as the flight needed to be scheduled according to sun elevation, period of the year and desired shadow size (related to plant height). Other authors have hypothesised that shadows can be used as an indirect measure of leaf area and canopy characteristics (Zheng and Moskal, 2009), or have even suggested the possibility of using optical Remote Sensing close to solar noon to monitor and map vineyard shaded area (Johnson and Scholasch, 2005). However, to our knowledge, this is the first study to follow and evaluate a field protocol for measuring vine leaf area using the shaded area under real conditions. This study takes advantage of advances in image capturing and UAV technologies, combining previous knowledge in viticulture with Remote Sensing and machine learning methods.

In the present study, LAI is estimated by capturing plant shadows using UAV imagery. This method is compatible with previous methods because they are all based on the need to find a quick, non-destructive, and accurate way of measuring leaf area. However, as previously stated, these methods usually require measurements to be carried out around midday to minimise the effect of shadowing between vine rows.

Materials and methods

1. Experimental site

A large-scale field experiment was carried out in a vineyard (cv. Cabernet-Sauvignon) located in 'Zamadueñas estate' (coordinates: 41.7013º N, 4.7088º W, Valladolid, Spain), which belongs to the Agricultural Technology Institute of Castilla y León (ITACyL). According to the WRB classification (FAO), the soil is medium-coarse textured (FLc) Calcaric Fluvisol + (FLe) Eutric Fluvisol (Nafría et al., 2013). The climate is Csb (temperate with dry or temperate summer - Köppen-Geiger Climate Classification) and is characterised by dry summers and mild, wet winters (AEMET, 2011). Table 1 summarises the meteorological data from the study site collected by 'VA101 Finca Zamadueñas' weather station. The information is available at http://www.inforiego.org/.

Table 1. Climatic characterisation of the study area. Data collected between 1 January and 31 July 2019.


Variable | Average | Total value
Daily average temperature (ºC) | 12.0 | 2550.8
Daily max temperature (ºC) | 20.1 | 4250.8
Daily min temperature (ºC) | 4.7 | 989.6
Radiation (MJ/m2) | 19.1 | 4059.0
Cumulative rain (mm) | 2.2 | 133.1

The vineyard was trained on a vertical shoot positioned trellis (VSP), and the vines were spur pruned (with bilateral cordon training), with eight spurs per plant, two buds per spur, 2.5 m x 1.2 m row and plant spacing respectively, and 1.8 m average vine height. The row orientation was NE-SW (northeast-southwest), 35 ° to the north. The soil was kept free of any weeds that would have affected image processing (Fountas et al., 2014).

2. Field determination of LAI

In June 2019, the sampled vines were removed from the vineyard. The vines were geo-positioned (Figure 1a) using a Triumph-2 JAVAD GNSS receiver (Triumph-2, JAVAD GNSS Inc, San Jose, California, USA; Figure 1b) with centimetre accuracy to mark the vines in the field to be removed within the study area. The Triumph-2 has 216 channels of dual-frequency GPS and GLONASS and can be connected to a mobile phone via Bluetooth and Wi-Fi to access the local GNSS Reference Station Network. Each grapevine was cut in the lower-middle part (Figure 1c) and all the material was extracted from the vineyard.

The real LAI (Leaf Area Index) was determined on a sample of 36 vines. The total leaf area of each removed vine was measured using the EasyLeafArea application (Easlon and Bloom, 2014). The EasyLeafArea app automatically calculates leaf area from green leaf and red scale areas.

It is important to note that the shadow area analysed in this study represents PAI (Plant Area Index), as it captures all plant structures and not only the leaves. However, the term LAI is used in the present manuscript because, like other indirect image-based methods, the method employed in this study does not distinguish between leaf and non-leaf material in the analysis; LAI was therefore used for consistency with the real leaf area values obtained using the EasyLeafArea application and with previous literature (Fuentes et al., 2014; Kalisperakis et al., 2015; De Bei et al., 2016; Towers et al., 2019; Beeri et al., 2020). In addition, there is a strong relationship between PAI and LAI from May to October: Doring et al. (2014) observed that estimated PAI did not differ substantially from directly measured LAI in a vineyard, showing a remarkably high correlation between the two (R² = 0.93). Moreover, Arnó et al. (2013) found a very high correlation between TAI and LAI (R² = 0.99), using 'TAI' (tree area index) instead of 'PAI' since LiDAR does not distinguish between green and non-green elements.

Figure 1. (a) GPS Triumph-2 JAVAD GNSS, (b) Vine geo-positioning, (c) Vine removal, (d) 3DR UAV.

3. Image acquisition

3.1 Flight campaign/survey

The UAV images were acquired on 27 June 2019. Considering the average vine height and using basic trigonometry (Figure 2a), the target sun elevation was set at β = 45 °. As a result, the flight was scheduled for 15:30 to 16:00 solar time (18:00 local time) using the NOAA Solar Calculator (https://www.esrl.noaa.gov/gmd/grad/solcalc/), under 1 okta cloud cover conditions and at an azimuth of α = 265 °. This time was chosen to maximise the projection of the vine shadows on the ground.
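As an illustration of this trigonometric planning, the minimal sketch below (in R, the language used for the analysis code in this study) computes the shadow length projected on the ground for a given vine height and sun elevation; the function name is ours, not part of the study's code.

```r
# Shadow length projected on the ground (Figure 2a): for a vine of height h
# and sun elevation beta, shadow length = h / tan(beta).
shadow_length <- function(vine_height_m, sun_elevation_deg) {
  vine_height_m / tan(sun_elevation_deg * pi / 180)
}

shadow_length(1.8, 45)  # 1.8 m: at 45 degrees the shadow equals vine height
shadow_length(1.8, 30)  # ~3.1 m: a lower sun casts a longer shadow, risking
                        # overlap with the adjacent row (2.5 m away)
```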

Figure 2. (a) Azimuth and relationship between sun elevation and shadows (b) Planned flight mission.

Yellow line: UAV path. Red points: GCPs.

3.2. Unmanned Aerial System (UAS)

Before the UAV flight, a set of 12 ground control points (GCPs) was located in the vineyard and georeferenced using a real-time kinematic (RTK) Triumph-2 JAVAD GPS to improve the geometric accuracy of the image mosaicking process (Figure 2b). Toffanin (2019) defines a Ground Control Point (GCP) as a position measurement made on the ground, which can be set using existing structures such as pavement corners or lines in a parking lot. A minimum of five GCPs is enough when they include points near the corners and within the study area. In this study, clearly distinguishable field structures, such as lampposts, maintenance holes and vineyard posts, were used (Figure 2b).

An Unmanned Aerial System (UAS) composed of a 3DR quadcopter platform (3DR SOLO, 3D Robotics, Berkeley, California, USA, Figure 1d) was used to fly autonomously on a previously planned mission using open-source autopilot software (ArduCopter 3.3, Ardupilot) installed on a Pixhawk 2.0 mainboard (3D Robotics, Berkeley, California, USA). The images were acquired using a MAPIR® low-cost RGB+RGN system (Survey3W, MAPIR® Inc, San Diego, California, USA) commanded by the drone's flight controller and composed of two cameras with 12-megapixel rolling shutter CMOS sensors. These sensors had a focal length of 3.37 mm, a horizontal field of view (HFOV) of 87 °, a -1 % extreme low distortion glass lens and a dual-band silicon filter, sensitive in the Visible and Near-Infrared spectrum from about 400 to 1200 nm. The system captures Blue (450 nm), Green (550 nm), Red (660 nm) and Near-Infrared (850 nm) light. Each camera had an f/2.8 aperture and was factory calibrated. The images were automatically geotagged using an attached GNSS system (NEO-m8, u-blox, Thalwil, Switzerland).

Flight paths (Figure 2b) were designed using Mission Planner (version 1.3.68, Michael Oborne, GNU license), and flight control was provided by the Tower Ground Control V.4.0.0 open-source app. The UAV horizontal speed was 3 m/s, and the flying height was 22 m above ground level (AGL), with 75 % forward and 80 % side overlap. Moreover, it is crucial to configure the camera settings (shutter speed, ISO and EV) correctly so that no pixels reach the maximum pixel value, since any information above that value is lost. Pixel values range from a minimum to a maximum determined by the image bit depth: the higher the bit depth, the more information can be stored in the image. A sensor captures each image in a RAW format and then either saves the RAW file or converts it to a more standard (typically compressed) format. Based on the information provided by the manufacturer, the Survey3 cameras capture 16-bit RAW photos per RGB/RGN channel, which means that each pixel can take one of 2^16 (65,536) values, ranging from 0 to 65,535. In this work, since the reflectance of light was captured to analyse differences between pixels and identify shadows, the RAW format was used. In order to keep the pixels from reaching the maximum value mentioned above, the camera was configured as follows: shutter speed 1/1000 s, ISO 50, and exposure +0.0. The Ground Sample Distance (GSD) estimated by the software (Mission Planner) for the camera profile used (SURVEY 3W) and a flight height of 22 m was 1.01 cm/px.
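The reported GSD can be roughly cross-checked from the flight height, the HFOV and the image width. The sketch below assumes a 4000-pixel image width (a typical 4:3 layout for a 12-megapixel sensor); the exact sensor geometry may differ, so this is an approximation, not the software's calculation.

```r
# Approximate GSD (cm/px) from flight height, horizontal field of view and
# image width: the sensor sees a ground strip 2 * h * tan(HFOV / 2) wide.
gsd_cm_per_px <- function(height_m, hfov_deg, image_width_px) {
  ground_width_m <- 2 * height_m * tan((hfov_deg / 2) * pi / 180)
  100 * ground_width_m / image_width_px
}

gsd_cm_per_px(22, 87, 4000)  # ~1.04 cm/px, close to the reported 1.01 cm/px
```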

3.3. Orthomosaic generation

The 16-bit raw images were first converted to TIFF format using MAPIR® Camera Control (MCC) software. Each pixel was corrected using known reflectance values by capturing a photo of the 'MAPIR® Camera Reflectance Calibration Ground Target Package' just before the survey; the photo contained four targets that had been measured along incremental wavelengths by a calibrated spectrometer (based on the information provided by the manufacturer, reflectance measurements were made at 1 nm increments from 350 nm to 1100 nm using multiple Shimadzu spectrophotometers with an integrating sphere). The pixel values of the captured target image were then compared with the known reflectance values of the targets, and using this information in MCC, the pixel values were transformed, thereby calibrating the survey images. The images were then imported into image mosaicking software (Agisoft Metashape 1.5.2, Agisoft LLC, St Petersburg, Russia) based on the structure-from-motion (SfM) algorithm (Westoby et al., 2012) to generate the orthophotos. The Exif metadata of each image from the GNSS was first used to help in the image alignment process. Then the locations of the GCPs from the JAVAD GNSS were manually identified and added to the aligned images to optimise camera positions and orientation data, and in turn to improve orthophoto accuracy.

4. Data analysis

The data analysis pipeline is presented in Figure 3. Shadows were detected using remote sensing data via three methods: i) Image reclassification (sensitivity method/manual), ii) K-means, and iii) Random Forest. Each plant area was then cropped and the shadow within the area was measured and correlated with real LAI.

Figure 3. LAI estimation workflow.

4.1. Grid definition

A vector grid was developed in order to isolate the shadow of each vine. First, azimuth and row orientation were used to calculate the grid angles, resulting in a parallelogram with angles of 95 ° (360 - 265 = 95 °) and 35 ° (Figure 4a) and sides of 1.2 and 1.8 m. The next step was to replicate the grid across the whole vineyard (Figure 4b), excluding a 0.7 m-wide strip corresponding to the vegetation.

Once the grid was designed (Figure 4c), the points taken in the field using GPS were used to indicate which grid cell corresponds to each vine. Finally, the image was cropped using the grid to isolate the area belonging to each vine.
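As an illustration of this grid construction, the sketch below builds the parallelogram cell for a single vine from its coordinates. It is a minimal sketch assuming local metric coordinates and using the sf package; the function and variable names are ours, not the study's code.

```r
library(sf)

# Parallelogram ground cell for one vine: one side follows the row
# direction (plant spacing, 1.2 m), the other the shadow direction
# (1.8 m). Bearings are degrees clockwise from north; the 130-degree
# shadow bearing leaves the 95-degree interior angle used in the study.
vine_cell <- function(x, y, row_bearing = 35, shadow_bearing = 130,
                      spacing = 1.2, shadow_len = 1.8) {
  to_vec <- function(bearing, len) {
    len * c(sin(bearing * pi / 180), cos(bearing * pi / 180))
  }
  u <- to_vec(row_bearing, spacing)        # step along the row
  v <- to_vec(shadow_bearing, shadow_len)  # step towards the shadow
  p <- rbind(c(x, y), c(x, y) + u, c(x, y) + u + v, c(x, y) + v, c(x, y))
  st_polygon(list(p))
}

cell <- vine_cell(0, 0)  # vine at a hypothetical local origin
st_area(st_sfc(cell))    # 1.2 * 1.8 * sin(95 deg) ~ 2.15 m2
```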

Figure 4. (a) Area definition, (b) Ground area of one vine, (c) Vineyard grid.

4.2. Shadow recognition models

4.2.1 Pixel value analysis

Once the image corresponding to each vine was isolated, a random sampling strategy was manually carried out at various points where there was shade, obtaining the R, G, B and NIR values. Subsequently, a sensitivity threshold was defined by the 'a' coefficient:

f(x) = (1±a) x

whereby the detected shaded area is a function of the band value x modified by the sensitivity coefficient a. As a result, a range was defined by an upper and a lower limit, depending on the sensitivity value; within this range, a pixel is considered 'shadow positive'. It is important to consider that not all shadows have the same intensity, owing to differences in vegetative development modifying the intercepted radiation (Zheng and Moskal, 2009; Baeza et al., 2010); therefore, in order to establish an optimal sensitivity value, a visual analysis of the threshold effect on the image was performed. The coefficient varied from 0.1 to 1.0 (10 % to 100 %) at intervals of 0.1 (10 levels in total), covering the whole range of shadows, from low to high shadow intensity. This process was carried out for each band, reclassifying the image with the values corresponding to the range of shadows, where '1' corresponds to 'shadow' and '0' indicates 'absence of shadow' (Figure 5a).
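A minimal sketch of this single-band reclassification is shown below, assuming that 'ref' is the mean value of the manually sampled shadow pixels and that the band is held as a numeric matrix; the names are illustrative, not the study's code.

```r
# Reclassify one band into shadow (1) / non-shadow (0): a pixel is
# 'shadow positive' when its value falls between (1 - a) * ref and
# (1 + a) * ref, where 'ref' is a sampled reference shadow value and
# 'a' the sensitivity coefficient.
reclassify_band <- function(band, ref, a) {
  ifelse(band >= (1 - a) * ref & band <= (1 + a) * ref, 1L, 0L)
}

set.seed(1)
band <- matrix(runif(100, 0, 65535), 10, 10)  # toy 16-bit band
mask <- reclassify_band(band, ref = 9000, a = 0.3)
sum(mask)  # number of shadow-positive pixels in the vine cell
```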

Figure 5. (a) Pixel reclassification method, (b) Single-vine shadow, Reclassified single-vine shadow (multi-band) and Reclassified final product.

Subsequently, the bands were combined to create an RGB or RGN product comprising a combination of the corresponding bands (Figure 5b); the corresponding shaded area was calculated from the total area of the vine and a product with the following shaded area was obtained:

f(x, y, z) = (1 ± a)x · (1 ± b)y · (1 ± c)z

where x, y and z are the band values (RGB or RGN) and a, b and c are the sensitivity coefficients for each band (10 levels ranging from 0.1 to 1.0). For each RGB and RGN product, 10³ = 1000 cases were calculated. This information was validated with the real LAI values in order to find the closest relationship between the detected shadows and the LAI. The best result was used as input for the machine learning algorithms.

Finally, the potential of the method is shown by the classification of the vines depending on shaded area into three LAI levels (high, medium, and low), thus demonstrating that the method could be useful for real applications, such as zoning for differential management, irrigation, or fertilisation.

4.2.2. K-means clustering (Unsupervised machine learning)

K-means clustering is an unsupervised machine learning method that classifies the input data objects into multiple classes based on their inherent distance from each other, thus dividing the input data set into k clusters previously defined by the user (Hung et al., 2005). The clustering problem can be formulated as an optimisation problem, described by:

$$\mathrm{P}: \quad \text{minimise} \; z(W, M) = \sum_{i=1}^{n} \sum_{j=1}^{k} w_{ij} \, d(x_i, \mu_j)$$

$$\text{subject to} \quad \sum_{j=1}^{k} w_{ij} = 1, \quad \text{for } i = 1, \ldots, n,$$

$$w_{ij} = 0 \text{ or } 1, \quad \text{for } i = 1, \ldots, n, \text{ and } j = 1, \ldots, k,$$

where w_ij = 1 implies that object x_i belongs to cluster G(j), and d(x_i, μ_j) denotes the Euclidean distance between x_i and μ_j, for i = 1, …, n and j = 1, …, k (Pérez-Ortega et al., 2020). The standard version of the algorithm consists of four steps: 1) centroids (k points) are randomly generated in the space, 2) each point is assigned to its closest centroid, according to the distance from all the centroids, 3) new centroids are calculated using the mean value of the objects that belong to each cluster, and 4) the process is repeated from step 2 until equilibrium is reached (i.e. when the number of points remains stable within each cluster).

By using k-means it is possible to classify pixels automatically without training samples. The number of k clusters was set from 2 to 6 to determine the effect of different clusters in the shadow classification process. Subsequently, the cluster with the best visual coincidence with the shadow locations was assigned to the shadow class. The initial location of the centroids was set randomly, and the maximum number of iterations was set to 1000.
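A minimal sketch of this unsupervised classification with base R's kmeans() is given below; the pixel matrix is simulated, and picking the darkest centroid as the shadow cluster is our illustrative shortcut for the visual assignment described above.

```r
# Unsupervised shadow classification: k-means on RGN pixel values.
set.seed(7)
pixels <- matrix(runif(3000, 0, 65535), ncol = 3,
                 dimnames = list(NULL, c("R", "G", "NIR")))  # toy pixels

km <- kmeans(pixels, centers = 6, iter.max = 1000)  # k = 2..6 in the study

# Shadows are dark in all bands, so the cluster with the lowest centroid
# values is a reasonable first guess for the shadow class (in the study,
# the assignment was checked visually against the shadow locations).
shadow_cluster <- which.min(rowSums(km$centers))
shadow_mask    <- km$cluster == shadow_cluster
sum(shadow_mask)  # shadow-positive pixels
```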

4.2.3. Random Forest classification (Supervised machine learning)

Random Forest is a supervised machine learning method that uses an ensemble of decision trees for classification and prediction. The algorithm fits many classification trees to a dataset and then combines the predictions from all the trees. Each tree can be computed separately from the others, because each tree is independently constructed using a bootstrap sample of the data set (Kuhn, 2008).

The Random Forest algorithm works in four steps: 1) many bootstrap samples from the data are selected, 2) a classification tree is fit to each bootstrap sample, 3) each tree is used to predict the out-of-bag observations, and 4) the predicted class of an observation is calculated by majority vote of the out-of-bag predictions for that observation (Cutler et al., 2007). The observations in the original dataset that do not occur in a bootstrap sample are the out-of-bag observations. In this study, the number of trees was set to 500, and the training set was divided into four classes: soil, shadows, shaded vegetation, and vegetation.
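The sketch below shows this classification with the randomForest package used in the study (500 trees, four classes); the labelled training pixels are simulated here, so the data and column names are illustrative only.

```r
library(randomForest)

# Supervised shadow classification: Random Forest with 500 trees and the
# four classes used in the study (toy labelled pixels).
set.seed(3)
train <- data.frame(
  R     = runif(400, 0, 65535),
  G     = runif(400, 0, 65535),
  NIR   = runif(400, 0, 65535),
  class = factor(rep(c("soil", "shadow", "shaded_vegetation", "vegetation"),
                     each = 100))
)

rf <- randomForest(class ~ R + G + NIR, data = train, ntree = 500)
print(rf)  # out-of-bag (OOB) error estimate and confusion matrix

# Each new pixel is assigned by majority vote across the 500 trees:
predict(rf, train[1:5, c("R", "G", "NIR")])
```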

4.3. Model comparison

Calibration (cross-validated, 50 % split) and validation were used to assess and compare the ability of each model to predict actual LAI. The coefficient of determination (R²), root mean square error (RMSE) and mean absolute error (MAE) were used to define the model's performance.

Some authors recommend MAE over RMSE as the most natural measure of average error magnitude: measures based on the sum of squared errors, such as RMSE, are functions of the average error (MAE), the distribution of error magnitudes (or squared errors) and n^(1/2), and therefore do not describe average error alone; MAE is also less sensitive than RMSE to the effect of outliers as an indicator of model performance (Willmott and Matsuura, 2005). However, RMSE is also reported, since it is commonly used in the Remote Sensing literature (López-Lozano et al., 2009; Li et al., 2014; Darvishzadeh et al., 2019; Beeri et al., 2020; Campos et al., 2021).
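For reference, a minimal sketch of the three agreement statistics, computed on toy observed/predicted LAI vectors (illustrative values only):

```r
# R-squared, RMSE and MAE between observed and predicted LAI.
r_squared <- function(obs, pred) cor(obs, pred)^2
rmse      <- function(obs, pred) sqrt(mean((obs - pred)^2))
mae       <- function(obs, pred) mean(abs(obs - pred))

obs  <- c(1.2, 1.5, 1.8, 2.1)  # toy observed LAI (m2 m-2)
pred <- c(1.3, 1.4, 1.9, 2.0)  # toy predicted LAI (m2 m-2)
c(R2 = r_squared(obs, pred), RMSE = rmse(obs, pred), MAE = mae(obs, pred))
# MAE never exceeds RMSE; a large gap between them flags outlying errors.
```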

All image and data analyses were carried out using AutoCAD® (version 2021 R.47.X, Autodesk, Inc., San Rafael, California, USA), QGIS (version 3.14.X, QGIS Development Team, 2020), customised code written in R (version 3.6.X, R Core Team, 2019) and the R packages stats, caret and randomForest obtained from the Comprehensive R Archive Network (CRAN).

Results

1. Shadow pixel analysis

The sensitivity of the thresholds for the shadow detection of each band was adjusted separately (Figure 6a). Most of the bands have a maximum correlation with LAI at a sensitivity level of around 30 %.

Figure 6. Correlation values (y-axis) for each sensitivity level (x-axis).

Sensitivity values from 0.1 to 1.0, in intervals of 0.1 (10 levels in total). (a) single-band, and (b) multi-band.

Once the maximum R² value is reached, the correlation decreases progressively. As previously explained, the band combinations were analysed using sensitivity values from 0.1 to 1.0, at intervals of 0.1 (10 levels in total). A strong linear correlation can be observed between the real LAI and the area obtained from the shadows. For the RGB product, using a sensitivity of 0.3 for all coefficients, the maximum R² value was 0.57. For the RGN product, the maximum was R² = 0.64 (Table 2), using a sensitivity of 0.4 (Figure 6b).

Table 2. Maximum R² values for each product and combination.


BAND | R² | p-value | a | b | c
R | 0.36 | < 0.01 | 0.3 | - | -
G | 0.46 | < 0.01 | 0.3 | - | -
B | 0.36 | < 0.01 | 0.2 | - | -
NIR | 0.60 | < 0.01 | 0.3 | - | -
RGB | 0.57 | < 0.01 | 0.3 | 0.3 | 0.3
RGN | 0.64 | < 0.01 | 0.4 | 0.4 | 0.4
RGB | 0.61 | < 0.01 | 0.3 | 0.3 | 0.4
RGN | 0.68 | < 0.01 | 0.3 | 0.5 | 0.4

a, b and c are the sensitivity coefficients used for each band.

To increase accuracy, combinations of different coefficients were formed for each band, showing that the highest correlation for RGB was f(x, y, z) = (1 ± 0.3)R · (1 ± 0.3)G · (1 ± 0.4)B, with R² = 0.61, and for RGN it was f(x, y, z) = (1 ± 0.3)R · (1 ± 0.5)G · (1 ± 0.4)N, with R² = 0.68 (Table 2).

2. Shadow pixels classification

RGN was the input selected for the machine learning models, since it achieved the maximum correlation with real LAI. Figure 7 shows the k-means clustering results for each cluster number. Since the input data are classified automatically by an unsupervised machine learning method, the products obtained differed substantially; however, for any number of clusters, beginning from k = 2, k-means was able to identify shadows. Regarding Random Forest, the algorithm was able to identify the four required classes, and the shadows were appropriately classified (Figure 8).

Figure 7. K-means classification results.

The number of k clusters was set from 2 to 6. (a) Original image, (b) k-means, k = 2, (c) k-means, k = 3, (d) K-means, k = 4, (e) k-means, k = 5, and (f) k-means, k = 6.

Figure 8. Random Forest classification results.

3. Observed vs predicted LAI values

First, RGB and RGN datasets were compared, because they had the highest correlation with LAI. Once the model was defined, the predicted LAI was compared with the observed LAI (Figure 9).

For the RGB product, the best result was obtained using coefficients a = 0.3, b = 0.3 and c = 0.4 (y = 1.005x + 0.741), with R² = 0.71, RMSE = 0.141 m² m-2 and MAE = 0.110 m² m-2. For RGN, the highest accuracy was obtained using coefficients a = 0.3, b = 0.5 and c = 0.4 (y = 1.163x + 0.642), with R² = 0.74, RMSE = 0.146 m² m-2 and MAE = 0.123 m² m-2.

Regarding the machine learning models (Figure 10), Random Forest slightly improved on the previous RGN accuracy. It showed the highest accuracy for predicting LAI (R² = 0.76, RMSE = 0.160 m² m-2 and MAE = 0.139 m² m-2), matched by k-means with k = 6 (R² = 0.76, RMSE = 0.165 m² m-2 and MAE = 0.142 m² m-2); however, Random Forest had slightly lower error values. The remaining k-means classifications ranged from R² = 0.62 to 0.75, with the worst correlation found for k-means with k = 4.

Figure 9. Observed vs Predicted LAI. (a) RGB, (b) RGN.

Figure 10. Observed vs. Predicted LAI. (a) Random Forest, (b) k-means, k = 2, (c) k-means, k = 3, (d) k-means, k = 4, (e) k-means, k = 5, and (f) k-means, k = 6.

Discussion

Aerial orthophotography provides information about the canopy size in two dimensions, but shadows comprise a projection in the third dimension, providing additional valuable information. The shadows depend not only on the shape of the vegetation but also on the lighting. Therefore, it is imperative to have an optimal light source (sunny days). Shadows vary throughout the day depending on the position of the sun; therefore, it is possible to plan the flight at the optimum time when the sun is in the desired position (e.g., in the afternoon). However, images can be captured at other times of the day, as long as there are shadows, but the accuracy will probably be related to the size and quality of the shadows.

Our results show that it is possible to accurately plan a flight at a given time by calculating the azimuth and elevation of the sun so that the information extracted from the shadows can be maximised. This study shows that, in terms of shadow extraction, the afternoon is a good time to take images; however, the exact time should be adjusted depending on vineyard characteristics. At the optimal time, shadows effectively cover the ground between rows, but they are neither affected by the vegetation in the adjacent line, nor by the lack of precision due to an absence of shade. A balance can thereby be achieved, and the maximum amount of information can be obtained from the shadows generated on the ground. The optimum time for taking the image will be determined by the height of the vines and the distance between the rows of plants.

Overall, the results showed a significant positive relationship between the plant shadows and the LAI. According to the results of the RGB and RGN analysis, a correctly identified shadow resulted in a positive and significant relationship of up to R² = 0.74. Figure 6 shows a maximum correlation between the shadows and the LAI at around 30 % sensitivity: if the sensitivity is too low, the plant's shadows cannot be easily detected, whereas if it is too high, the algorithm will identify pixels as shadows when they are not (Figure 11). Therefore, errors can occur by overestimating or underestimating the shadows. However, the curve is much more stable after reaching the maximum value of R²; thus, in order to detect shadows, it is better to overestimate than to underestimate the value of the pixels. In this study, the models tend to underestimate at high LAI values and overestimate at low LAI values (Figures 9 and 10), which is probably due to an increase in leaf overlap in big canopies.

When comparing RGB and RGN products, the results show RGN to be a better approach, probably because NIR can identify the shaded vegetation, thus helping to discern it from shaded soil.

Figure 11. Effect of the sensitivity level

Regarding the machine learning methods, Random Forest improved the RGN accuracy for predicting LAI, reaching R² = 0.76; this was matched by k-means with k = 6 (R² = 0.76). The rest of the k-means classifications showed good accuracy, even in the worst case, with R² ranging from 0.62 to 0.75. The best accuracy was obtained for k = 3 and k = 6, and the worst for k = 4, showing that in k-means a higher number of clusters is not strictly related to a better accuracy level.

1. Method considerations

In order to ensure that this method works optimally, several essential factors must be taken into consideration, for example:

i) Sun position, which influences the shape of the shadows; it is, therefore, essential to consider azimuth and elevation.

ii) Grapevine management. Trimming the vegetation will modify the vineyard shadows, and depending on the phenological stage, new leaves may cover the gaps. Moreover, errors in shadow detection can occur if the vegetation is not within the trellis, for two main reasons:

→ a. The vegetation covers the shade, which therefore goes undetected (Figure 12a)

→ b. The vegetation casts shaded areas onto itself (Figure 12b). NIR can help identify them, since shade values in NIR differ from those in RGB, which explains why NIR is better for detecting vegetation.

iii) Weed management. Weeds can add noise to the image and render shade detection difficult.

iv) Vineyard orientation. The optimum orientation for late-day image capture in the study zone is close to SSE-NW, since this plane is orthogonal to the path of the sun. The applicability of this method decreases as the difference between the real plane and the optimum plane increases. However, this should not be an issue in viticulture, since rows are generally oriented close to north-south to maximise light interception on both sides of the canopy for part of the day (Jackson, 2020).

Figure 12. Shadow recognition issues: (a) vegetation over the shadows, and (b) shadows in the vegetation.

2. Limitations of the method

There are some limitations and important points to raise related to the vineyard characteristics and the method itself:

i) The vineyard vegetation growing in the assigned area extends along the whole length of the vertical trellis, so that it is possible for the vegetation of one grapevine to mix with the adjacent one and thus be wrongly assigned to the other vine. However, this is inherent to the farming system, and it is not a source of error specific to this method.

ii) If the vineyard dimensions are not exact, an equal and regular mesh of parallelograms can under- or overestimate the vegetation at certain points, due to the error introduced by the designated area. This could be overcome by delineating a boundary (box) around and above each vine trunk using a centroid approach (with prior knowledge of vine spacing).

iii) Proper overlap is extremely important in order to obtain good quality data and avoid defects such as blotchy artefacts or errors in image alignment.

iv) The effect of the slope on the shadows could be a problem; however, the effects of topography can be addressed by proper flight/mission planning and the use of DEMs/DCMs. The terrain slope effect was ignored in this work because the vineyard was on flat terrain.

v) Light intensity. This study was carried out under good light conditions, and according to our results, a clear sky is needed. If the light intensity varies because of clouds or diffuse radiation, the shadows will not be defined, and the results will be affected.

vi) Structures, such as VSP wires, poles or irrigation pipes project shadows which could get mixed up with the shadows of the vines.

vii) The method might not work in high-density vineyards/orchards due to the mutual shading of vines/trees.

3. Comparison with other Remote Sensing methods to estimate LAI

As previously discussed, each method has its advantages and disadvantages. Table 3 briefly compares some previously described methods. The comparison considered similar factors to those used by Jin et al. (2021), including 'economic cost', which attempts to represent the cost of the operation including all the equipment involved, and 'time investment', which is the estimated time required for data collection in the field.

In terms of time investment, our proposed method is much quicker than methods that employ specific instrumentation such as PCAs or ceptometers, which are carried on foot to survey plants one at a time (Johnson and Pierce, 2004; López-Lozano and Casterad, 2013; White et al., 2018). LiDAR must be mounted on a ground vehicle (Arnó et al., 2013), as is generally the case for equipment in any field survey method (Diago et al., 2012; Fuentes et al., 2014; Towers et al., 2019). Compared to other methods that use similar technologies (UAV/Airborne + camera), our method involves similar input time (Hall et al., 2008; Kalisperakis et al., 2015; Comba et al., 2020) and is quicker than SfM methods, which require more images (Mathews and Jensen, 2013). However, it is still necessary to go to the vineyard to collect the data; therefore, satellite methods are even quicker (Johnson, 2003; Beeri et al., 2020).

The proposed method is low-cost as basic UAV and normal RGB+RGN cameras can be used. The best performance was obtained with the RGN camera, which is slightly more expensive than an RGB camera, such as that used in Diago et al. (2012) and Fuentes et al. (2014); however, it is cheaper than the multispectral sensors used by Towers et al. (2019) and Comba et al. (2020) or the hyperspectral sensor used by Kalisperakis et al. (2015). Specific sensors such as PCA or ceptometer are also more expensive. Moreover, when considering the cost of the platform, the UAV used in the study is much cheaper than a ground vehicle (Arnó et al., 2013).

As regards accuracy, our method is in line with other suitable methods in studies reporting similar correlation values. Furthermore, to our knowledge, this is the first time a method of this kind has been developed, and it is therefore likely to be improved in further experiments, as has been the case in other studies; for example, Mathews and Jensen (2013) reported R² > 0.5 accuracy, but in subsequent studies Kalisperakis et al. (2015) and Comba et al. (2020) reported higher accuracy (R² > 0.8).

In summary, the method described in this article is cheaper and less time-consuming than other methods for calculating LAI and similar levels of accuracy can be obtained (Table 3). Lastly, other methods usually require the experiment to be carried out close to solar noon, or they try to at least avoid shadows; the present method thus allows the flight to be scheduled during other periods of the day than solar noon, such as morning or afternoon, enabling pilots to extend their working day.

Table 3. Comparison between current Remote Sensing LAI measurement methods and shadow measurement methods.


Method | Platform | Economic cost | Time invested | LAI Correlation | Authors
Shadow measurement | UAV | Low | Low | R² > 0.7 | Present study
3D SfM | UAV | Low | Low | R² > 0.5 | Mathews and Jensen (2013)
3D SfM | UAV | Low | Low | R² > 0.8 | Kalisperakis et al. (2015)
3D SfM | UAV | Medium | Low | R² > 0.8 | Comba et al. (2020)
2D RGB or multispectral | UAV | Low | Low | R² > 0.7 | Kalisperakis et al. (2015)
2D RGB or multispectral | Ground | Low | High | R² > 0.8 | Diago et al. (2012)
2D RGB or multispectral | Ground | Medium | High | R² > 0.9 | Fuentes et al. (2014)
2D RGB or multispectral | Ground | Medium | High | R² > 0.5 | Towers et al. (2019)
2D RGB or multispectral | Airborne | Medium | Low | R² > 0.5 | Hall et al. (2008)
LiDAR | Ground | High | Medium | R² > 0.8 | Arnó et al. (2013)
Hyperspectral camera | UAV | High | Low | R² > 0.8 | Kalisperakis et al. (2015)
Plant canopy analyser | Ground | Medium | High | R² > 0.7 | Johnson and Pierce (2004); White et al. (2018)
Satellite | Satellite | Low | Very low | R² > 0.3 | Beeri et al. (2020)
Satellite | Satellite | Low | Very low | R² > 0.7 | Johnson et al. (2003)

The R² values are those reported by the authors. The 'Economic cost' and 'Time invested' values are based on the methods described by the authors compared to the present study's method. For 'Economic cost': 'low' = total cost up to $3,000; 'medium' = up to $6,000; 'high' = more than $6,000.

It should be noted that this comparison only aims to highlight the potential and accuracy of existing methods versus the usefulness of the method proposed in this paper. The results presented in the literature are not fully comparable, because they have been obtained from studies carried out under different experimental conditions. The reference LAI used was not obtained in the same way in all the experiments. In the present study, real LAI was used, which was measured destructively to validate the estimated results, but other authors have used LAI estimated by other methods, such as with a Li-COR PCA (Johnson, 2003; Fuentes et al., 2014) or an AccuPAR LP-80 ceptometer (Mathews and Jensen, 2013).

4. Potential applications

This method can be applied to other woody crops, such as olive or walnut trees, in which the mutual shading of consecutive trees has a smaller effect, depending on the planting distance. However, in these cases (bigger canopies), the internal leaf overlap is higher, an effect which would need to be considered in the application/calibration of the proposed method. In addition, given that the vegetation of one plant can mix with that of the adjacent plant in vertical-trellis trained crops, this method may be even more effective when applied to non-trellis trained crops.

Images could also be taken several times a year to monitor the growth of the vegetation (for plant vigour). From the image history of the crop, it would be possible to create a temporal model of the evolution of the shadows, and therefore of the LAI, and thereby adjust vineyard management accordingly (e.g., irrigation, treatments, and leaf removal). However, the number of woody structures covered by the leaves will certainly affect the accuracy of the method.

Finally, to illustrate the real applicability of the method, a cluster analysis was carried out: the dataset was split into three groups, with each plant being classified according to shaded area into three LAI levels (high, medium, and low), using 0.88, 0.75 and 0.63 m² of shaded soil area as centroids, corresponding to LAI values of 1.79, 1.54 and 1.31 for high, medium, and low respectively. Figure 13a shows the three clusters in which the vines are grouped according to LAI; k-means classification was used for this, but other machine learning methods could be applied. Figure 13b shows the result of the classification on a map, illustrating that this method could be used, for example, for zoning the vineyard for differential irrigation or fertilisation management.
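A minimal sketch of this zoning step is shown below, with simulated per-vine shaded areas standing in for the real measurements (the reported centroids are quoted in the comments for context only):

```r
# Zoning: k-means on per-vine shaded area, three LAI levels. The study
# reported centroids of 0.88, 0.75 and 0.63 m2 of shaded soil; the values
# below are simulated stand-ins.
set.seed(11)
shaded_area_m2 <- runif(36, 0.5, 1.0)
zones <- kmeans(shaded_area_m2, centers = 3)

# Order the clusters by centroid so labels follow shaded-area magnitude.
level <- c("low", "medium", "high")[rank(zones$centers)]
vine_level <- level[zones$cluster]
table(vine_level)  # vines per LAI zone, ready to map (Figure 13b)
```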

Figure 13. (a) Classification according to LAI level using k-means and (b) Mapping/zoning according to LAI level.

The factors discussed above show that shadows can be used to estimate the LAI. Furthermore, the results show that, by using simple equations and machine learning methods and by controlling the limiting factors, leaf surface area can be estimated from the shadow of the plants. Therefore, all advantages and limitations considered, the shadow captured in UAV images is a good estimator of LAI, provided that the method used to obtain the shadows is the appropriate one.

Further research is needed to assess the temporal stability of the relationship between plant shadows and LAI, and the applicability of the method to other woody crops and farming systems or to vineyards planted with cover crops. Moreover, it would be interesting to explore whether this method can be used to estimate other vineyard parameters, or to extract more information about canopy structure and vegetation density; for example, Figure 11 shows gaps in the detected vegetation at different sensitivity levels (a, b and c) due to differing shadow intensities for each pixel, indicating that such information may indeed be recoverable. Other authors (Zheng and Moskal, 2009; Baeza et al., 2010) have studied the relationships between the incident radiation intercepted by the vine, LAI and aboveground biomass, concluding that the measurement of the gap fraction is a way of analysing canopy structure and that it can be parameterised by LAI and leaf angle distribution. Additional research is required in this area.

Conclusions

This study presents a simple and effective method for estimating LAI in vineyards using machine learning tools and projected shadows captured by a UAV, combining principles known for centuries with modern image recognition techniques.

A strong positive relationship was observed between the shaded area and the leaf area of the vines, resulting in a high level of accuracy for the LAI estimation (R² = 0.76, RMSE = 0.160 m² m-2 and MAE = 0.139 m² m-2). However, to estimate LAI accurately, shadow areas must be correctly identified, otherwise errors can occur due to over- or underestimation of the shadows. In addition, it is essential to manage the parameters that influence the method (e.g., the position of the sun, vineyard management or weed management) to avoid noise in the image.

The results of the study are useful because LAI is an essential parameter for vineyard management and zoning. Moreover, this method is fully compatible with other Remote Sensing methods and can be incorporated into a flexible working day.

Acknowledgements

This study was possible thanks to the financial support of Junta de Castilla y León (Spain), the project INIA RTA2014-00077-C02, FPI-INIA2016-017 and FEDER funding. We would like to thank the staff of Viticulture for their cooperation in the vineyard operations.

References

  • Aboutalebi, M., Torres-Rua, A. F., McKee, M., Kustas, W., Nieto, H., & Coopmans, C. (2018). Behavior of vegetation/soil indices in shaded and sunlit pixels and evaluation of different shadow compensation methods using UAV high-resolution imagery over vineyards. Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III. https://doi.org/10.1117/12.2305883
  • AEMET (2011). Agencia Estatal de Meteorología. Iberian climate Atlas. ISBN: 978-84-7837-079-5
  • Arnó, J., Escolà, A., Vallès, J. M., Llorens, J., Sanz, R., Masip, J., Palacín, J., & Rosell-Polo, J. R. (2013). Leaf area index estimation in vineyards using a ground-based LiDAR scanner. Precision Agriculture, 14(3), 290–306. https://doi.org/10.1007/s11119-012-9295-0
  • Baeza, P., Sánchez-De-Miguel, P., & Lissarrague, J. R. (2010). Radiation Balance in Vineyards. Methodologies and Results in Grapevine Research, 21–29. https://doi.org/10.1007/978-90-481-9283-0_2
  • Balafoutis, A. T., Beck, B., Fountas, S., Tsiropoulos, Z., Vangeyte, J., van der Wal, T., Soto-Embodas, I., Gómez-Barbero, M., & Pedersen, S. M. (2017). Smart Farming Technologies – Description, Taxonomy and Economic Impact. Progress in Precision Agriculture, 21–77. https://doi.org/10.1007/978-3-319-68715-5_2
  • Beeri, O., Netzer, Y., Munitz, S., Mintz, D. F., Pelta, R., Shilo, T., Horesh, A., & Mey-tal, S. (2020). Kc and LAI Estimations Using Optical and SAR Remote Sensing Imagery for Vineyards Plots. Remote Sensing, 12(21), 3478. https://doi.org/10.3390/rs12213478
  • Bramley, R. & Lamb, D. (2003). Making sense of vineyard variability in Australia. In: Ortega, R. and Esser, A. (Eds) Precision Viticulture. Proceedings of an international symposium held as part of the IX Congreso Latinoamericano de Viticultura y Enologia, Chile. Centro de Agricultura de Precisión, Pontificia Universidad Católica de Chile, Facultad de Agronomía e Ingenería Forestal, Santiago, Chile. pp. 35-54.
  • Bramley, R., & Hamilton, R. (2004). Understanding variability in winegrape production systems. Australian Journal of Grape and Wine Research, 10(1), 32–45. https://doi.org/10.1111/j.1755-0238.2004.tb00006.x
  • Carbonneau, A. (1976). Principes et méthodes de mesure de la surface foliaire. Essai de caractérisation des types de feuilles dans le genre Vitis, Ann. Amélior. Plantes, vol.26, issue.2, pp.327-343.
  • Campos, J., García-Ruíz, F., & Gil, E. (2021). Assessment of Vineyard Canopy Characteristics from Vigour Maps Obtained Using UAV and Satellite Imagery. Sensors, 21(7), 2363. https://doi.org/10.3390/s21072363
  • Comba, L., Biglia, A., Ricauda Aimonino, D., Tortia, C., Mania, E., Guidoni, S., & Gay, P. (2020). Leaf Area Index evaluation in vineyards using 3D point clouds from UAV imagery. Precision Agriculture, 21(4), 881–896. https://doi.org/10.1007/s11119-019-09699-x
  • Cutler, D. R., Edwards, T. C., Beard, K. H., Cutler, A., Hess, K. T., Gibson, J., & Lawler, J. J. (2007). Random Forests for classification in ecology. Ecology, 88(11), 2783–2792. https://doi.org/10.1890/07-0539.1
  • Darvishzadeh, R., Wang, T., Skidmore, A., Vrieling, A., O'Connor, B., Gara, T., Ens, B., & Paganini, M. (2019). Analysis of Sentinel-2 and RapidEye for Retrieval of Leaf Area Index in a Saltmarsh Using a Radiative Transfer Model. Remote Sensing, 11(6), 671. https://doi.org/10.3390/rs11060671
  • De Bei, R., Fuentes, S., Gilliham, M., Tyerman, S., Edwards, E., Bianchini, N., Smith, J., & Collins, C. (2016). VitiCanopy: A Free Computer App to Estimate Canopy Vigor and Porosity for Grapevine. Sensors, 16(4), 585. https://doi.org/10.3390/s16040585
  • Del-Moral-Martínez, I., Rosell-Polo, J., Company, J., Sanz, R., Escolà, A., Masip, J., Martínez-Casasnovas, J., & Arnó, J. (2016). Mapping Vineyard Leaf Area Using Mobile Terrestrial Laser Scanners: Should Rows be Scanned On-the-Go or Discontinuously Sampled? Sensors, 16(1), 119. https://doi.org/10.3390/s16010119
  • Delrot, S., Medrano, H., Or, E., Bavaresco, L., & Grando, S. (Eds.). (2010). Methodologies and Results in Grapevine Research. Springer. https://doi.org/10.1007/978-90-481-9283-0
  • Diago, M. P., Correa, C., Millán, B., Barreiro, P., Valero, C., & Tardaguila, J. (2012). Grapevine Yield and Leaf Area Estimation Using Supervised Classification Methodology on RGB Images Taken under Field Conditions. Sensors, 12(12), 16988–17006. https://doi.org/10.3390/s121216988
  • Doring, J., Stoll, M., Kauer, R., Frisch, M., & Tittmann, S. (2014). Indirect Estimation of Leaf Area Index in VSP-Trained Grapevines Using Plant Area Index. American Journal of Enology and Viticulture, 65(1), 153–158. https://doi.org/10.5344/ajev.2013.13073
  • Dube, T., Pandit, S., Shoko, C., Ramoelo, A., Mazvimavi, D., & Dalu, T. (2019). Numerical Assessments of Leaf Area Index in Tropical Savanna Rangelands, South Africa Using Landsat 8 OLI Derived Metrics and In-Situ Measurements. Remote Sensing, 11(7), 829. https://doi.org/10.3390/rs11070829
  • Easlon, H. M., & Bloom, A. J. (2014). Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area. Applications in Plant Sciences, 2(7), 1400033. https://doi.org/10.3732/apps.1400033
  • Fountas, S., Anastasiou, E., Balafoutis, A., Koundouras, S., Theoharis, S. & Theodorou, N. (2014). The influence of vine variety and vineyard management on the effectiveness of canopy sensors to predict winegrape yield and quality. In Proceedings of the International Conference of Agricultural Engineering, Zurich, Switzerland, 6–10 July 2014.
  • Fuentes, S., Poblete-Echeverría, C., Ortega-Farias, S., Tyerman, S., & de Bei, R. (2014). Automated estimation of leaf area index from grapevine canopies using cover photography, video and computational analysis methods. Australian Journal of Grape and Wine Research, 20(3), 465–473. https://doi.org/10.1111/ajgw.12098
  • Hall, A., Louis, J., & Lamb, D. (2008). Low-resolution remotely sensed images of winegrape vineyards map spatial variability in planimetric canopy area instead of leaf area index. Australian Journal of Grape and Wine Research, 14(1), 9–17. https://doi.org/10.1111/j.1755-0238.2008.00002.x
  • Hung, M.-C., Wu, J., Chang, J.-H., & Yang, D.-L. (2005). An Efficient K-Means Clustering Algorithm Using Simple Partitioning. Journal of Information Science and Engineering, 21(6), 1157–1177.
  • Jackson, R. (2020). Wine Science: Principles and Applications, 5th ed., Elsevier: Cambridge. ISBN 978-0-12-816118-0.
  • Jiang, H., Wang, S., Cao, X., Yang, C., Zhang, Z., & Wang, X. (2018). A shadow-eliminated vegetation index (SEVI) for removal of self and cast shadow effects on vegetation in rugged terrains. International Journal of Digital Earth, 12(9), 1013–1029. https://doi.org/10.1080/17538947.2018.1495770
  • Jin, X., Zarco-Tejada, P. J., Schmidhalter, U., Reynolds, M. P., Hawkesford, M. J., Varshney, R. K., Yang, T., Nie, C., Li, Z., Ming, B., Xiao, Y., Xie, Y., & Li, S. (2021). High-Throughput Estimation of Crop Traits: A Review of Ground and Aerial Phenotyping Platforms. IEEE Geoscience and Remote Sensing Magazine, 9(1), 200–231. https://doi.org/10.1109/mgrs.2020.2998816
  • Johnson, L. F. (2003). Temporal stability of an NDVI-LAI relationship in a Napa Valley vineyard. Australian Journal of Grape and Wine Research, 9(2), 96–101. https://doi.org/10.1111/j.1755-0238.2003.tb00258.x
  • Johnson, L. F., & Pierce, L. L. (2004). Indirect measurement of leaf area index in California North Coast vineyards. HortScience, 39, 236–238.
  • Johnson, L., Roczen, D., Youkhana, S., Nemani, R., & Bosch, D. (2003). Mapping vineyard leaf area with multispectral satellite imagery. Computers and Electronics in Agriculture, 38(1), 33–44. https://doi.org/10.1016/s0168-1699(02)00106-0
  • Johnson, L., & Scholasch, T. (2005). Remote Sensing of Shaded Area in Vineyards. HortTechnology, 15(4), 859–863. https://doi.org/10.21273/horttech.15.4.0859
  • Kalisperakis, I., Stentoumis, C., Grammatikopoulos, L., & Karantzalos, K. (2015). Leaf Area Index estimation in vineyards from UAV hyperspectral data, 2D image mosaics and 3D canopy surface models. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-1/W4, 299–303. https://doi.org/10.5194/isprsarchives-xl-1-w4-299-2015
  • Keller, M. (2015). The Science of Grapevines: Anatomy and Physiology. Second edition. Elsevier/AP, Academic Press. Amsterdam, Boston. ISBN: 9780124199873.
  • Kuhn, M. (2008). Building Predictive Models in R Using the caret Package. Journal of Statistical Software, 28(5), 1–26. https://doi.org/10.18637/jss.v028.i05
  • Lambers, H., & Oliveira, R. S. (2019). Plant Physiological Ecology. Springer. https://doi.org/10.1007/978-3-030-29639-1
  • Li, X., Zhang, Y., Bao, Y., Luo, J., Jin, X., Xu, X., Song, X., & Yang, G. (2014). Exploring the Best Hyperspectral Features for LAI Estimation Using Partial Least Squares Regression. Remote Sensing, 6(7), 6221–6241. https://doi.org/10.3390/rs6076221
  • Lopes, C., & Pinto, P. A. (2005). Easy and Accurate Estimation of Grapevine Leaf Area with Simple Mathematical Models. Vitis, 44(2), 55–61. https://doi.org/10.5073/vitis.2005.44.55-61
  • López-Lozano, R., Baret, F., García De Cortázar-Atauri, I., Bertrand, N., & Casterad, M. A. (2009). Optimal geometric configuration and algorithms for LAI indirect estimates under row canopies: The case of vineyards. Agricultural and Forest Meteorology, 149(8), 1307–1316. https://doi.org/10.1016/j.agrformet.2009.03.001
  • López-Lozano, R., & Casterad, M. (2013). Comparison of different protocols for indirect measurement of leaf area index with ceptometers in vertically trained vineyards. Australian Journal of Grape and Wine Research, 19(1), 116–122. https://doi.org/10.1111/ajgw.12005
  • Ma, H., Qin, Q., & Shen, X. (2008). Shadow Segmentation and Compensation in High Resolution Satellite Images. IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium. https://doi.org/10.1109/igarss.2008.4779175
  • Mananze, S., Pôças, I., & Cunha, M. (2018). Retrieval of Maize Leaf Area Index Using Hyperspectral and Multispectral Data. Remote Sensing, 10(12), 1942. https://doi.org/10.3390/rs10121942
  • Matese, A., di Gennaro, S. F., & Berton, A. (2016). Assessment of a canopy height model (CHM) in a vineyard using UAV-based multispectral imaging. International Journal of Remote Sensing, 38(8–10), 2150–2160. https://doi.org/10.1080/01431161.2016.1226002
  • Mathews, A., & Jensen, J. (2013). Visualising and Quantifying Vineyard Canopy LAI Using an Unmanned Aerial Vehicle (UAV) Collected High Density Structure from Motion Point Cloud. Remote Sensing, 5(5), 2164–2183. https://doi.org/10.3390/rs5052164
  • Meyer, L. H., Heurich, M., Beudert, B., Premier, J., & Pflugmacher, D. (2019). Comparison of Landsat-8 and Sentinel-2 Data for Estimation of Leaf Area Index in Temperate Forests. Remote Sensing, 11(10), 1160. https://doi.org/10.3390/rs11101160
  • Munitz, S., Schwartz, A., & Netzer, Y. (2019). Water consumption, crop coefficient and leaf area relations of a Vitis vinifera cv. "Cabernet Sauvignon" vineyard. Agricultural Water Management, 219, 86–94. https://doi.org/10.1016/j.agwat.2019.03.051
  • Nafría, D. A., Garrido, N., Álvarez, M. V., Cubero, D., Fernández, M., Villarino, I., Gutiérrez, A., & Abia, I. (2013). Atlas agroclimático de Castilla y León [Agroclimatic atlas of Castilla y León]. NIPO 281-13-008-5.
  • Neinavaz, E., Darvishzadeh, R., Skidmore, A., & Abdullah, H. (2019). Integration of Landsat-8 Thermal and Visible-Short Wave Infrared Data for Improving Prediction Accuracy of Forest Leaf Area Index. Remote Sensing, 11(4), 390. https://doi.org/10.3390/rs11040390
  • Netzer, Y., Yao, C., Shenker, M., Bravdo, B. A., & Schwartz, A. (2008). Water use and the development of seasonal crop coefficients for Superior Seedless grapevines trained to an open-gable trellis system. Irrigation Science, 27(2), 109–120. https://doi.org/10.1007/s00271-008-0124-1
  • Oliveira, M., & Santos, M. (1995). A semi-empirical method to estimate canopy leaf area of vineyards. American Journal of Enology and Viticulture, 46, 389–391.
  • Pérez-Ortega, J., Almanza-Ortega, N. N., Vega-Villalobos, A., Pazos-Rangel, R., Zavala-Díaz, C., & Martínez-Rebollar, A. (2020). The K-Means Algorithm Evolution. In K. Sud, P. Erdogmus, & S. Kadry (Eds.), Introduction to Data Science and Machine Learning. IntechOpen. ISBN 978-1-83880-333-9.
  • Pessarakli, M. (Ed.). (2014). Handbook of Plant and Crop Physiology (3rd ed.). CRC Press, Taylor & Francis Group, Boca Raton, FL. ISBN 978-1-4665-5329-3.
  • Poblete-Echeverría, C., Olmedo, G., Ingram, B., & Bardeen, M. (2017). Detection and Segmentation of Vine Canopy in Ultra-High Spatial Resolution RGB Imagery Obtained from Unmanned Aerial Vehicle (UAV): A Case Study in a Commercial Vineyard. Remote Sensing, 9(3), 268. https://doi.org/10.3390/rs9030268
  • Reynolds, A.G. (2010). Managing Wine Quality. Volume 1: Viticulture and wine quality. Woodhead Publishing. ISBN 9781845694845
  • Rouse, J. W., Jr., Haas, R. H., Schell, J. A., & Deering, D. W. (1973). Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS Symposium, NASA SP-351, Vol. 1 (pp. 309–317). U.S. Government Printing Office, Washington, DC, USA, 10–14 December 1973.
  • Santesteban, L., di Gennaro, S., Herrero-Langreo, A., Miranda, C., Royo, J., & Matese, A. (2017). High-resolution UAV-based thermal imaging to estimate the instantaneous and seasonal variability of plant water status within a vineyard. Agricultural Water Management, 183, 49–59. https://doi.org/10.1016/j.agwat.2016.08.026
  • Santesteban, L. G. (2019). Precision viticulture and advanced analytics. A short review. Food Chemistry, 279, 58–62. https://doi.org/10.1016/j.foodchem.2018.11.140
  • Smart, R., & Robinson, M. (1991). Sunlight into Wine. Winetitles. ISBN 9781875130108.
  • Toffanin, P. (2019). OpenDroneMap: The Missing Guide. First edition. UAV4GEO, MasseranoLabs LLC.
  • Towers, P. C., Strever, A., & Poblete-Echeverría, C. (2019). Comparison of Vegetation Indices for Leaf Area Index Estimation in Vertical Shoot Positioned Vine Canopies with and without Grenbiule Hail-Protection Netting. Remote Sensing, 11(9), 1073. https://doi.org/10.3390/rs11091073
  • Vélez, S., Barajas, E., Rubio, J. A., Poblete-Echeverría, C., & Olmedo, G. F. (2020a). Remote Sensing in the Digital Viticulture Era. Chapter 10 in Vitis: Biology and Species. Nova Publishers. ISBN 978-1-53618-308-5.
  • Vélez, S., Barajas, E., Rubio, J. A., Vacas, R., & Poblete-Echeverría, C. (2020b). Effect of Missing Vines on Total Leaf Area Determined by NDVI Calculated from Sentinel Satellite Data: Progressive Vine Removal Experiments. Applied Sciences, 10(10), 3612. https://doi.org/10.3390/app10103612
  • Wang, L., Chang, Q., Li, F., Yan, L., Huang, Y., Wang, Q., & Luo, L. (2019). Effects of Growth Stage Development on Paddy Rice Leaf Area Index Prediction Models. Remote Sensing, 11(3), 361. https://doi.org/10.3390/rs11030361
  • Watson, D. J. (1947). Comparative Physiological Studies on the Growth of Field Crops: I. Variation in Net Assimilation Rate and Leaf Area between Species and Varieties, and within and between Years. Annals of Botany, 11(1), 41–76. https://doi.org/10.1093/oxfordjournals.aob.a083148
  • Weiss, M., & Baret, F. (2017). Using 3D Point Clouds Derived from UAV RGB Imagery to Describe Vineyard 3D Macro-Structure. Remote Sensing, 9(2), 111. https://doi.org/10.3390/rs9020111
  • Westoby, M., Brasington, J., Glasser, N., Hambrey, M., & Reynolds, J. (2012). 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology, 179, 300–314. https://doi.org/10.1016/j.geomorph.2012.08.021
  • White, W. A., Alsina, M. M., Nieto, H., McKee, L. G., Gao, F., & Kustas, W. P. (2018). Determining a robust indirect measurement of leaf area index in California vineyards for validating remote sensing-based retrievals. Irrigation Science, 37(3), 269–280. https://doi.org/10.1007/s00271-018-0614-8
  • Williams, L., & Martinson, T. E. (2003). Non-destructive leaf area estimation of 'Niagara' and 'DeChaunac' grapevines. Scientia Horticulturae, 98(4), 493–498. https://doi.org/10.1016/s0304-4238(03)00020-7
  • Willmott, C., & Matsuura, K. (2005). Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 30, 79–82. https://doi.org/10.3354/cr030079
  • Wu, J., & Bauer, M. (2013). Evaluating the Effects of Shadow Detection on QuickBird Image Classification and Spectroradiometric Restoration. Remote Sensing, 5(9), 4450–4469. https://doi.org/10.3390/rs5094450
  • Yost, M. A., Kitchen, N. R., Sudduth, K. A., Sadler, E. J., Drummond, S. T., & Volkmann, M. R. (2016). Long-term impact of a precision agriculture system on grain crop production. Precision Agriculture, 18(5), 823–842. https://doi.org/10.1007/s11119-016-9490-5
  • Zarco-Tejada, P., Guillén-Climent, M., Hernández-Clemente, R., Catalina, A., González, M., & Martín, P. (2013). Estimating leaf carotenoid content in vineyards using high resolution hyperspectral imagery acquired from an unmanned aerial vehicle (UAV). Agricultural and Forest Meteorology, 171–172, 281–294. https://doi.org/10.1016/j.agrformet.2012.12.013
  • Zarco-Tejada, P., Hubbard, N., & Loudjani, P. (2014). Precision agriculture: an opportunity for EU farmers – potential support with the CAP 2014–2020. Joint Research Centre (JRC) of the European Commission, Monitoring Agricultural ResourceS (MARS), Unit H04, Brussels, Belgium.
  • Zhang, Z., & Chen, F. (2010). A shadow processing method of high spatial resolution remote sensing image. 2010 3rd International Congress on Image and Signal Processing. https://doi.org/10.1109/cisp.2010.5646850
  • Zhang, L., Sun, X., Wu, T., & Zhang, H. (2015). An Analysis of Shadow Effects on Spectral Vegetation Indexes Using a Ground-Based Imaging Spectrometer. IEEE Geoscience and Remote Sensing Letters, 12(11), 2188–2192. https://doi.org/10.1109/lgrs.2015.2450218
  • Zheng, G., & Moskal, L. M. (2009). Retrieving Leaf Area Index (LAI) Using Remote Sensing: Theories, Methods and Sensors. Sensors, 9(4), 2719–2745. https://doi.org/10.3390/s90402719

Authors


Sergio Vélez

velmarse@itacyl.es

https://orcid.org/0000-0001-9004-2877

Affiliation : Instituto Tecnológico Agrario de Castilla y León (ITACyL), Unidad de Cultivos Leñosos y Hortícolas. Valladolid

Country : Spain


Carlos Poblete-Echeverría

Affiliation : South African Grape and Wine Research Institute (SAGWRI), Department of Viticulture and Oenology, Faculty of AgriSciences, Stellenbosch University, Private Bag X1, Matieland 7602

Country : South Africa


José Antonio Rubio

Affiliation : Instituto Tecnológico Agrario de Castilla y León (ITACyL), Unidad de Cultivos Leñosos y Hortícolas. Valladolid

Country : Spain


Rubén Vacas

Affiliation : Instituto Tecnológico Agrario de Castilla y León (ITACyL), Unidad de Cultivos Leñosos y Hortícolas. Valladolid

Country : Spain


Enrique Barajas

Affiliation : Instituto Tecnológico Agrario de Castilla y León (ITACyL), Unidad de Cultivos Leñosos y Hortícolas. Valladolid

Country : Spain

Attachments

No supporting information for this article
