In these operations plants grow in distinct rows, and the wheels of the autonomous vehicles must drive only inside the space between rows. Examples include open-field row crops; orchards with trees, vines or shrubs and their support structures; greenhouses and indoor farms. Crop-relative auto-guidance is necessary in the situations described above. Researchers have used various sensors, such as onboard cameras and laser scanners, to extract features from the crops themselves and used them to localize the robot relative to the crop lines or tree rows in order to auto-steer. Crop-relative guidance in open fields and orchards is still more of a research endeavor than mature, commercial technology. Researchers have used monocular cameras in the visible or near-infrared spectrum, or multiple spectra, to segment crop rows from soil based on various color transformations and greenness indices aimed at increasing segmentation robustness against variations in luminance due to lighting conditions. Recently, U-Nets, a version of Fully Convolutional Networks, were used to segment straw rows in images in real time. Other approaches do not rely on segmentation but rather exploit a priori knowledge of the row spacing, either in the spatial frequency domain – using band-pass filters to extract all rows at once – or in the image domain. An extension of this approach models the crop as a planar parallel texture; it does not identify crop rows per se, but computes the offset and heading of the robot with respect to the crop lines. Once candidate crop row pixels have been identified, various methods have been used to fit lines through them. Linear regression has been used, where the participating pixels are restricted to a window around the crop rows. A single-line Hough transform has also been applied per independent frame, or in combination with recursive filtering of successive frames.
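As an illustration of the greenness-index segmentation mentioned above, a minimal sketch is shown below. It uses the Excess Green index (ExG = 2g − r − b on chromaticity-normalized channels), one common choice among the many indices in the literature; the 0.1 threshold and the toy image are illustrative assumptions, not parameters from any particular study:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green (ExG) index, 2g - r - b, computed on
    chromaticity-normalized channels; a common greenness index
    used to separate crop pixels from soil."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0               # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2.0 * g - r - b

def segment_crop(rgb, threshold=0.1):
    """Binary crop/soil mask: True where ExG exceeds the threshold."""
    return excess_green(rgb) > threshold

# toy 1x2 image: one 'green' crop pixel and one brownish 'soil' pixel
img = np.array([[[40, 180, 30], [120, 100, 80]]], dtype=np.uint8)
mask = segment_crop(img)
```

In practice the binary mask would then be cleaned with morphological operations before candidate row pixels are passed to a line-fitting stage.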
In an effort to increase robustness, a pattern Hough transform was introduced that utilizes data from the entire image and computes all lines at once. Researchers have also used stereo vision for navigation. In one study, an elevation map was generated and the maximum value of the cross-correlation of its profile with a cosine function was used to identify the target navigation point for the vehicle.

Most reported work was based on monocular cameras, with limited use of stereo vision and 2D/3D lidars. One reason is that in early growth stages the crops can be small in area and short in height; hence, height information is not always reliable. Given the increasing availability of real-time, low-cost 3D cameras, extensions of some of the above methods to combine visual and range data are conceivable and could improve robustness and performance in some situations. Also, given the diversity of crops, cropping systems and environments, it is possible that crop- or application-targeted algorithms can be tuned to perform better than ‘generic’ ones, with the appropriate algorithm selected based on user input about the current operation. The generation of publicly available datasets with accompanying ground truth for crop lines would also help evaluate and compare approaches.

Orchard rows are made of trees, vines or shrubs. If these plants are short and the auto-guided robot is tall enough to straddle them, the view of the sensing system will include several rows and the guidance problem will be very similar to crop-row-relative guidance. When the plants are tall or the robot is small and cannot straddle the row, the view of the sensing system is limited to two tree rows when the robot travels inside an alley, or one row if it is traveling along an edge of the orchard. Multiple rows may be visible only when the robot is at a headland during an entrance or exit maneuver.
In this situation the images captured by the sensing system look very different from those captured between tree rows, and therefore the row-following sensing and guidance techniques cannot be used. The main approach is to detect the tree rows, compute geometrical lines in the robot’s coordinate frame, and use them for guidance.
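As a sketch of this last step, the lateral offset and heading error of the robot with respect to a detected row line can be computed directly from two points on the line. The frame convention assumed here (robot at the origin, x pointing forward) is an illustrative choice, not taken from any cited system:

```python
import math

def offset_and_heading(p1, p2):
    """Given two points (x, y) on a detected row line, expressed in the
    robot frame (robot at the origin, x forward), return the signed
    lateral offset of the robot from the line and the heading error of
    the robot relative to the line direction, in radians. The sign of
    the offset depends on the direction p1 -> p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    # signed perpendicular distance from the origin to the line through p1, p2
    offset = (dx * p1[1] - dy * p1[0]) / length
    # heading error: angle of the line w.r.t. the robot's forward (x) axis
    heading = math.atan2(dy, dx)
    return offset, heading

# a row line parallel to the direction of travel, 0.5 m to the left
off, hdg = offset_and_heading((1.0, 0.5), (2.0, 0.5))
```

A steering controller would then drive both quantities toward the desired offset (e.g., the alley centerline) and zero heading error.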
Robustness and accuracy are very important, because erroneous line calculations could cause the robot to drive into trees and damage itself, the trees and the orchard infrastructure. Although the problem seems well defined and structured, the following conditions present significant challenges: the presence of cover crops or weeds on the ground can make it difficult to discriminate based only on color; tall vegetation can hide tree trunks that are often used as targets for row-detection systems; trunks from neighboring rows are often visible too; and variability in illumination during daytime or nighttime operations, as well as environmental conditions, affects sensing. Trees grow at different rates and may be pruned/hedged manually, resulting in non-uniform tree row geometries. Also, there is a large variety in tree shapes, sizes and training systems, and orchard layouts, which makes it difficult to design ‘universal’ guidance algorithms that rely on specific features. For example, Figure 4a shows a recently established orchard with small trees straight from a nursery, where canopies are small and sparse; Figure 4b shows younger trellised pear trees; Figure 4c shows high-density fruit-wall type trellised apple trees; Figure 4d shows old open-vase pear trees in winter; Figure 4e shows large, open-vase cling-peach trees; Figure 4f shows a row of table grape vines.

In a tree trunk-based approach, visual point features from tree trunks are tracked and a RANSAC algorithm selects a number of inlier points whose locations are reconstructed in 3D using wheel odometry and the vehicle kinematic model. Then, lines are fitted to the points and an Extended Kalman filter integrates the vanishing point with these lines to improve their estimate. In a sky-based approach, the high contrast between tree canopies and the sky was used to extract the boundary of the portion of the sky visible from the camera, and from that the vehicle heading.
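The RANSAC inlier-selection step used in the trunk-based approach can be illustrated with a minimal 2D sketch: candidate lines are hypothesized from random point pairs and the hypothesis with the most inliers is kept. The iteration count, inlier tolerance and toy data below are arbitrary illustration values, not those of the cited work:

```python
import random
import math

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit a line to noisy 2D points with RANSAC: repeatedly sample two
    points, count the points within `tol` of the candidate line, and
    keep the candidate with the most inliers. Returns (p1, p2, inliers)."""
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(iters):
        p1, p2 = rng.sample(points, 2)
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue                     # degenerate sample, skip
        inliers = [p for p in points
                   if abs(dy * (p[0] - p1[0]) - dx * (p[1] - p1[1])) / norm <= tol]
        if len(inliers) > len(best[2]):
            best = (p1, p2, inliers)
    return best

# trunk detections roughly along y = 2, plus one gross outlier
pts = [(float(i), 2.0 + 0.001 * (i % 3)) for i in range(10)] + [(5.0, 6.0)]
p1, p2, inl = ransac_line(pts)
```

The same hypothesize-and-verify structure extends to 3D point reconstructions; a final least-squares refit over the inlier set usually follows.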
In another approach, the image is segmented into classes such as terrain, trees and sky. Then, a Hough transform is applied to extract the features required to define the desired central path for the robot. The fact that even young trees in commercial orchards extend much higher than the ground has resulted in heavier use of ranging sensors for orchard guidance than has been reported for row-crop guidance. In one system, a 2D laser scanner was placed horizontally 70 cm above the ground, and a Hough transform was used to fit lines through the points sensed from trunks to the left and right of the robot. Linear regression has also been used to fit lines, combined with filtering to improve the robustness of line parameter estimation.
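A minimal version of such a line regression, fitting a row line to 2D trunk points on one side of the robot, might look like the following; the coordinate convention (x forward, y lateral) and the example offsets are assumptions for illustration:

```python
import numpy as np

def fit_row_line(points):
    """Least-squares fit of y = m*x + c through 2D trunk points
    (x forward, y lateral in the robot frame). Returns (m, c):
    m reflects the row's heading relative to the robot, and c its
    lateral offset at the robot's position."""
    pts = np.asarray(points, dtype=float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return m, c

# trunk returns of the right-hand row, at a lateral offset of about -1.5 m
right = [(0.5, -1.5), (1.5, -1.49), (2.5, -1.51), (3.5, -1.5)]
m, c = fit_row_line(right)
```

In a guidance loop these parameters would be filtered over successive scans (e.g., with a Kalman filter, as in the work cited above) rather than used raw, since a single scan with occluded trunks can yield a poor fit.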
Line-based SLAM was also proposed to simultaneously estimate rows and localize the robot with respect to them. 3D lidar has also been used to obtain range measurements from the surroundings, given that 2D lidars can only scan at a certain height above the ground, where tall vegetation or vigorous canopy growth may partially occlude or even hide trunks. In one approach, the point cloud of each lidar scan is registered with odometry and combined with recent previous ones in a single frame of reference. Then, the left and right line equations for tree rows are computed with a RANSAC algorithm operating on the entire point cloud, and an Extended Kalman filter is used to improve the robustness of line parameter estimation. The lateral offsets of the fitted lines are refined further by using points from heights that correspond to trunks. Off-the-shelf, low-cost 3D cameras have also been used to detect the orchard floor and tree trunks. Random sampling and RANSAC were used to reduce the number of points and exclude outliers in the point cloud, and a plane was fitted to the data to extract the ground, whereas trees were detected by their shadows in the generated point cloud.

Sensor fusion has also been reported for auto-guidance in orchards. In one deployment, an autonomous multi-tractor system was presented that was used extensively in commercial citrus orchards for mowing and spraying operations. A precise orchard map was available depicting tree rows, fixed obstacles, roads and canals. An RTK GPS was the primary guidance sensor for each autonomous tractor. However, tree growth inside orchard rows often necessitates that the robot deviate from pre-planned paths. A 3D lidar and high-dynamic-range color cameras were used to build a 3D occupancy grid, and a classifier differentiated between voxels with weeds and trees, thus keeping only voxels representing empty space and tree canopies.
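The ground-extraction step mentioned above can be sketched with a simple RANSAC plane fit: sample three points, form the plane normal from a cross product, and keep the plane supported by the most points. The synthetic point cloud, tolerance and iteration count are illustrative assumptions, not the cited system's parameters:

```python
import numpy as np

def fit_ground_plane(points, iters=100, tol=0.05, seed=0):
    """RANSAC plane fit for ground extraction. Returns (n, d, mask)
    for the plane n . p = d, where mask marks the inlier (ground)
    points; the remaining points are candidate trunks/canopy."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best = (np.array([0.0, 0.0, 1.0]), 0.0, np.zeros(len(pts), dtype=bool))
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)       # plane normal from the sampled triple
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                     # collinear sample, skip
        n /= norm
        mask = np.abs(pts @ n - n @ a) <= tol
        if mask.sum() > best[2].sum():
            best = (n, float(n @ a), mask)
    return best

# synthetic cloud: a 10x10 patch of flat ground at z = 0 plus a vertical 'trunk'
ground = [(x * 0.3, y * 0.3, 0.0) for x in range(10) for y in range(10)]
trunk = [(1.0, 1.0, z * 0.2) for z in range(1, 6)]
n, d, mask = fit_ground_plane(ground + trunk)
```

Once the ground inliers are removed, the remaining above-ground points can be clustered or sliced by height to localize trunks, as in the approaches described above.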
The row guidance algorithm used GPS to move towards the waypoint at the end of the row, and the 3D grid to find the lateral offset that must be added to the original planned path to keep the tractor from running into trees on either side. Robust, accurate and repeatable turning at the end of a row using relative positioning information with respect to the trees is very difficult and has not been addressed adequately. A successful turn involves detecting the approach and the end of the current row, initiating and executing the turning maneuver, and detecting the entrance of the target row to terminate the turn and enter the next row. One approach is to introduce easily distinguishable artificial landmarks at the ends of tree rows. Landmarks can be used to create a map, detect the end of the current row and the entrance of the next row, and localize the robot during turning using dead reckoning. In one approach, end-of-row detection utilized a 2D lidar, a camera and a tree-detection algorithm. Turning maneuvers were executed using dead reckoning based on wheel odometry. Dead reckoning with slip compensation has also been used.
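A minimal dead-reckoning update of the kind used for such turning maneuvers, assuming a differential-drive vehicle and a midpoint (unicycle) odometry model, can be sketched as follows; the track width and wheel increments are illustrative values only:

```python
import math

def dead_reckon(pose, d_left, d_right, track_width):
    """One dead-reckoning update for a differential-drive vehicle from
    left/right wheel travel increments. pose = (x, y, theta)."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0               # distance traveled by the midpoint
    dtheta = (d_right - d_left) / track_width  # heading change over the step
    # midpoint approximation: move along the mean heading of the step
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# simulate a headland U-turn: 100 equal increments summing to a pi rotation
pose = (0.0, 0.0, 0.0)
steps = 100
for _ in range(steps):
    pose = dead_reckon(pose, 0.01, 0.01 + math.pi / steps, 1.0)
```

Because wheel odometry drifts with slip, such open-loop turns are typically bounded in length and re-anchored as soon as the entrance of the target row (or a landmark) is detected.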
Crop status and growth are governed by the interaction of plant genetics with the biotic and abiotic environment of the crop, which are shaped by uncontrolled environmental factors and agricultural management practices. The biotic environment consists of living organisms that affect the plant, such as neighboring plants of the same crop or antagonistic plants, bacteria, fungi, viruses, insect and animal pests, etc. The abiotic environment includes all nonliving entities affecting the plant, i.e., surrounding air, soil, water and energy. The environment can cause biotic or abiotic crop physiological stresses, i.e., alterations in plant physiology that have a negative impact on plant health and consequently on yield or quality. Examples include plant stress due to fungal diseases, water stress due to deficit irrigation, reduced yields due to weeds or drought, and crop damage due to excessive temperatures, intense sunlight, etc. The environment and potential stressors strongly affect crop physiological processes and status, which are expressed through the plant’s biochemical and biophysical properties, some of which can be measured directly or indirectly. Examples of such biochemical properties are the number and types of volatile organic compounds emitted from leaves. Examples of biophysical properties include leaf properties such as chlorophyll content, relative water content and water potential, stomatal conductance and nitrogen content, as well as canopy structure properties. Canopy structure is defined as “the organization in space and time, including the position, extent, quantity, type and connectivity, of the above ground components of vegetation”. Components can be vegetative, such as leaves, stems and branches, or reproductive, i.e., flowers and fruits. Canopy structure properties can be based on individual components, on indices that characterize ensembles of components, or on indices that characterize entire plants, such as a canopy’s leaf area index.
Finally, a special property, that of plant ‘species’, is of particular importance because it is used to distinguish crops from weeds and to classify weed species for appropriate treatment. Estimation of crop and environmental biophysical and biochemical properties is based on measurements that can be made through contact or remote sensing. Contact measurements are mostly associated with the assessment of soil physical and chemical properties, and involve soil penetration and the measurement of quantities like electrical conductivity or resistance. Contact sensing for crops has been very limited so far, partly due to plant tissue sensitivity and the difficulties of robotic manipulation.