When walking through a pomegranate orchard, farmers often rely on experience and a quick visual check to judge fruit size and ripeness. Researchers, on the other hand, typically turn to calipers and manual sampling—accurate, but time-consuming and limited. In this study, Rosa Pia Devanna, Francesco Vicino, Simone Pietro Garofalo, Gaetano Alessandro Vivaldi, Simone Pascuzzi, Giulio Reina, and Annalisa Milella present a farmer robot capable of spotting pomegranates on the tree and estimating their size automatically using an affordable RGB-D camera.
What makes the work particularly interesting is how the team designed the system to handle real orchard conditions—uneven sunlight, dense canopies, and fruits partially hidden by leaves. Instead of depending on large, perfectly labeled datasets, they used a multi-stage training process that gradually teaches the robot from simple lab images to real field scenes. Once the fruit is detected in the RGB image, the robot pairs it with depth information to recover its actual 3D shape and calculate the polar and equatorial diameters.
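To make the sizing step concrete, here is a minimal sketch of how a detected fruit can be converted from pixels to millimetres. It assumes a standard pinhole-camera model with an RGB-aligned depth map; the function name, bounding-box format, and calibration values are illustrative placeholders, not details taken from the paper.

```python
# Hypothetical sketch of the RGB-D sizing step: a detector returns a fruit
# bounding box in the RGB image, the aligned depth map gives the fruit's
# distance, and a pinhole-camera model turns the box's pixel extent into a
# metric diameter. All names and numbers here are illustrative assumptions.

def estimate_diameters_mm(box, depth_mm, fx, fy):
    """Convert a detection box (x1, y1, x2, y2) in pixels plus the fruit's
    depth (mm) into equatorial (horizontal) and polar (vertical) diameters
    in millimetres."""
    x1, y1, x2, y2 = box
    width_px = x2 - x1
    height_px = y2 - y1
    # Pinhole model: metric size = pixel size * depth / focal length (px).
    equatorial = width_px * depth_mm / fx
    polar = height_px * depth_mm / fy
    return equatorial, polar

# Example: a 120 x 130 px box at 900 mm depth, focal lengths fx = fy = 1200 px.
eq, po = estimate_diameters_mm((100, 50, 220, 180), 900.0, 1200.0, 1200.0)
print(round(eq), round(po))  # prints "90 98"
```

In practice the depth value would be taken robustly (e.g. the median depth inside the box) to reduce the effect of leaves or background pixels leaking into the detection.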
The tests were carried out in a commercial orchard and included 96 pomegranates whose sizes were also measured manually. The robot’s estimates were impressively close, on average within about 1 cm of the caliper measurements. The qualitative examples in the paper also show how the robot handles fruits at different distances, detects empty or problematic trees, and copes with challenging lighting.
For farmers, this kind of technology could make routine monitoring far less laborious—improving yield predictions, supporting harvest planning, and reducing the need for manual sampling. For AgRibot’s research community, the study demonstrates how relatively low-cost sensors and robust perception algorithms can serve as building blocks for more advanced autonomous tasks.
If you’re curious to dive deeper into the methods and results, the full publication is available on Zenodo.
