6th IFAC Conference on Sensing, Control and Automation for Agriculture
December 4-6, 2019. Sydney, Australia

Available online at www.sciencedirect.com

ScienceDirect

IFAC PapersOnLine 52-30 (2019) 312–317

3D Vision for Precision Dairy Farming

Niall O’ Mahony*, Sean Campbell*, Anderson Carvalho*, Lenka Krpalkova*, Daniel Riordan*, Joseph Walsh*

* IMaR Research Centre, Institute of Technology Tralee, Tralee, Co. Kerry, Ireland (e-mail: niall.omahony@research.ittrelee.ie)

Abstract: 3D vision systems will play an important role in next-generation dairy farming due to the sensing capabilities they provide in the automation of animal husbandry tasks such as the monitoring, herding, feeding, milking and bedding of animals. This paper will review 3D computer vision systems and techniques that are and may be implemented in Precision Dairy Farming. This review will include evaluations of the applicability of Time of Flight and Stereoscopic Vision systems to agricultural applications as well as a breakdown of the categories of computer vision algorithms which are being explored in a variety of use cases. These use cases range from robotic platforms such as milking robots and autonomous vehicles which must interact closely and safely with animals to intelligent systems which can identify dairy cattle and detect deviations in health indicators such as Body Condition Score and Locomotion Score. Upon analysis of each use case, it is apparent that systems which can operate in unconstrained environments and adapt to variations in herd characteristics, weather conditions, farmyard layout and different scenarios in animal-robot interaction are required. Considering this requirement, this paper proposes the application of techniques arising from the emerging field of research in Artificial Intelligence that is Geometric Deep Learning.

© 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2019.12.555

Keywords: Automation and Robotics in Agriculture; Sensing and Automation in Animal Farming;
1. INTRODUCTION

Precision Livestock Farming (PLF) can be defined as real-time monitoring technologies aimed at managing the temporal variability of the smallest manageable production unit, known as ‘the per animal approach’ (Halachmi et al., 2016). With intense advancements in Computer Vision (CV) and Artificial Intelligence (AI), there has arisen an array of opportunities for these technologies to become even more useful in monitoring the needs and behaviour of every animal and also to allow robotics to interact with animals safely. Applications include the automatic monitoring of cattle by intelligent camera surveillance technology, and the automation of tasks such as herding, milking, feeding and bedding.

Recent works involving the use of 3D vision in agricultural applications have also been reviewed based on the recent economic affordability and technological capabilities on offer from optical 3D sensors (Vázquez-Arellano et al., 2016). The objective of this paper is to review existing 3D CV systems and techniques that are being implemented in dairy farming applications such as mobile robotics, animal monitoring and automatic milking. These use cases will be analysed with regard to hardware and software aspects. Consequently, we propose the implementation of recent innovations in AI specific to 3D camera data which have not yet been exploited in dairy farming.

2. 3D VISION TECHNOLOGY

2.1 3D Vision Hardware

A 3D camera captures a depth image where the amplitude of each pixel in the image corresponds to the length of the ray extending from the centre of the camera through that pixel until it collides with an obstacle. There has been a notable increase in the availability of 3D/Range-Imaging/Depth-cameras in recent years, with several technologies having been developed and commercialised. Each technology offers its own advantages for certain applications over others, but in general, 3D cameras provide a data-dense representation of the environment that is particularly useful for determining where obstacles are in physical relation to the camera.

In a previous paper we reviewed four primary competing technologies for depth sensing: Time of Flight (ToF), Stereovision, Structured Light and LiDAR (O’ Mahony et al., 2018a). In this paper we confine our review to the two which are most used in PLF applications; ToF and Stereovision.

The principle of operation of ToF cameras is to measure the phase shift of the light returning from a modulated light source. Contrarily, stereoscopic vision systems utilize two cameras and accompanying stereo depth algorithms to mimic the binocular ability of animal eyes to determine depth. The distinctive characteristics of 3D cameras have proved to give important advantages in several fields and are currently the subject of many research projects in PLF.
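To make the two sensing principles above concrete, the following minimal Python sketch shows the textbook range equations behind each modality: continuous-wave ToF recovers distance from the phase shift of the returning modulated light, a rectified stereo pair recovers depth from pixel disparity, and either depth map can be back-projected into a 3D point through the pinhole model. All numeric values (modulation frequency, focal length, baseline) are hypothetical, chosen only for illustration; no particular camera or cited work is implied.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Continuous-wave ToF: range from the phase shift of the returning
    modulated light. Unambiguous only up to c / (2 * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Rectified stereo pair: depth Z = f * B / d, where d is the
    disparity between matched pixels in the left and right images."""
    return focal_px * baseline_m / disparity_px

def back_project(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of one depth pixel to a 3D point on the
    ray through pixel (u, v). Assumes the sensor reports z-depth; if it
    reports ray length (as defined in Section 2.1), divide by the
    ray-direction norm first."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical example values:
print(tof_depth(1.2, 20e6))             # ~1.43 m (ambiguity range ~7.5 m)
print(stereo_depth(700.0, 0.12, 35.0))  # 2.4 m
print(back_project(400, 300, 2.4, 700.0, 700.0, 320.0, 240.0))
```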
Since each task will be application dependent, so too will the types of sensors that are used in each system. For an automatic cattle monitoring/assessment system, for example, it is necessary to determine the shape of the cow’s volume with mm-level or cm-level accuracy, in real-time and in very challenging environmental and lighting conditions. ToF cameras best fit such applications as they provide their own light source and deliver reliable measurements in real-time with the required precision (as proven by (Salau et al., 2017)).

On the other hand, for mobile robot navigation, it may only be necessary to guide an autonomous vehicle across the terrain and avoid obstacles, which requires decimetre-level accuracy and low power consumption in less stringent lighting conditions. For these applications, stereovision may be more applicable: being a passive depth-sensing technique, it requires very little power, provides adequate accuracy for navigation (Vázquez-Arellano et al., 2016) and can be incorporated with 2D high-definition cameras for the identification of a wide variety of different objects.

2.2 3D Vision Data Analysis

For the sake of brevity, this section will just outline that 3D data analysis techniques emanate from one of two fields of research (a minimal sketch of the second category follows the list):

1. Traditional 3D processing techniques, which consist of manually defining 3D descriptors which may be local (e.g. Point Feature Histogram (PFH), Signature of Histogram of Orientations (SHOT) and Rotational Projection Statistics (RoPS)) or global (e.g. Ensemble of Shape Functions (ESF) and Viewpoint Feature Histogram (VFH)) (Alexandre, 2012) (Ioannidou et al., 2017).
2. 3D Deep Learning, also known as Geometric Deep Learning, an emerging field which applies deep learning to 3D data. Most of this work involves 3D Convolutional Neural Networks (3D-CNNs) and techniques to make them more efficient and more invariant to rigid translation and rotation.
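As a concrete flavour of the second category, the sketch below is a minimal PointNet-style classifier in PyTorch: a shared per-point MLP followed by a symmetric max-pool, which makes the output invariant to point ordering, with centroid-centering providing translation invariance (rotation invariance requires further techniques). This is an illustrative toy, not the architecture of any work cited here; layer sizes and class count are arbitrary.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy PointNet-style classifier for (batch, n_points, 3) clouds."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 convolutions.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # Centroid-centering: removes dependence on absolute position.
        pts = pts - pts.mean(dim=1, keepdim=True)
        feats = self.point_mlp(pts.transpose(1, 2))   # (B, 128, N)
        # Symmetric max-pool: invariant to the ordering of the points.
        pooled = feats.max(dim=2).values              # (B, 128)
        return self.head(pooled)

logits = TinyPointNet()(torch.randn(2, 1024, 3))      # shape (2, 5)
```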
A discussion of the pros and cons of these two strands of research may be found in our previous paper, (O’ Mahony et al., 2019c), and a more in-depth review of 3D deep learning can be found in (O’ Mahony et al., 2018b). To summarise, greater accuracy is achievable with deep learning, given training data of sufficient volume and variety.

Although 3D deep learning has an inherent edge over its 2D counterpart, it has not benefited from the recent developments in deep learning due to the unavailability of large training datasets as well as large test datasets. Therein lies a gap in the research, not only for 3D datasets for agricultural applications but for 3D CV applications in general.

3. AGRICULTURAL APPLICATIONS

In order to optimize herd management and cater for the needs of every individual cow, precision agriculture uses new technologies such as computing, electronics and imaging. Two types of imagery can thus be used: proxy-detection and remote sensing. The conception of a proxy-detection system is motivated by the need for better resolution, accuracy and temporality, and lower cost, compared to remote sensing. The use of CV techniques allows this information to be obtained automatically with objective measurement, in contrast with the difficulty and subjectivity of visual or manual acquisition.

One can classify the wide range of applications where 3D sensors are used by considering their scenario of application, yielding scene-related tasks, object-related tasks and applications involving interaction with animals. Scene-related tasks generally involve mobile robots and large displacements, while object-related tasks instead involve robotic arms or humanoid-like robots, and small depths. Finally, applications involving interaction with animals involve dynamic response capabilities and safety considerations. In the context of this review, 3D vision for perception in mobile robotics falls under the scene-related category, automatic cattle monitoring & feeding falls under the object-related category, and systems for automatic milking & herding can be categorised as applications involving interaction with animals.

3.1 Mobile Robot Perception

To enable mobile robots to move autonomously in a variety of unstructured environments, 3D information about the environment is essential. Several studies of the accuracy and usability of 3D sensors for mobile robots have been published, as summarised by (Rauscher et al., 2014). Their paper also presents an experimentally derived model of the expected standard deviation, dependent on the distance to the measured object as well as on measured intensity and environmental illumination, for several 3D sensors, which is important to know for probabilistic approaches in robotics. The accuracy of a perception system is important in mapping, localization and obstacle classification problems, whilst obstacle avoidance problems place higher priority on the availability and reliability of the measured data. We discuss localisation and object detection techniques, and dealing with sources of uncertainty in 3D vision including limited availability due to limited field-of-view and limited accuracy arising from variations in distance & lighting, in a previous paper (O’ Mahony et al., 2019a).

For the most part, much of the work done in mobile robot navigation has been involved in crop management applications where 3D CV has been used for crop phenotyping (Vázquez-Arellano et al., 2016). These techniques may be extended to applications in PLF where a robot may have to traverse terrain to herd/monitor cattle or carry materials around the farm, e.g. (Grimstad et al., 2017), (Salah Sukkarieh et al., 2019). Such an application has been implemented by Lely’s Shrimp robotic platforms with the use of line scanners for 3D perception (Wendel et al., 2017). With respect to safe navigation using 3D vision, (Kragh et al., 2017) have developed a multimodal dataset (including 3D sensors) intended for training/testing of intelligent navigation systems in farm environments. To date, mobile robotics has either focused on indoor applications or outdoor applications; however, transitioning from outdoor
navigation to indoor navigation, and adapting to changes in sensor uncertainties in a multi-modal perception system, is not well studied.

3.2 Animal Monitoring

Real-time identification of cattle attributes is challenging, but the increasing availability and sophistication of technology make automated monitoring of animals with CV practicable. The following section reviews CV-based approaches and algorithms which have been implemented for automated scoring of cattle with respect to animal identification, lameness and Body Condition Score (BCS) classification. We focus here on techniques which use 3D cameras, but a more comprehensive review of the applications can be found in our previous paper (O’ Mahony et al., 2019b).

3.2.1 Animal Identification

As we reviewed in a previous paper, the use of machine vision for animal identification has progressed from reading ear tags to identifying facial/muzzle/coat pattern features to identifying 3D shape features (Rind Thomasen et al., 2018).

Although traditional 2D visual identification systems are advantageous in terms of applicability, cost-effectiveness and precision, most of the image processing algorithms used in these solutions are influenced by external conditions such as variable light conditions, reflections and lens fouling. Therefore, image segmentation is problematic in real farm conditions. The use of depth information from a 3D camera makes changing backgrounds and shadows less intrusive and enhances the automatic segmentation of different cows, as demonstrated by (Van Nuffel et al., 2015).

Another critique of 2D image-based methods is that many of the systems developed have only been proven on Holstein Friesian cattle, which are distinctive for exhibiting individually unique black & white (or brown & white) coat patterns, patches and markings over their bodies. Such a system is unlikely to be effective on cattle breeds without individually unique coat patterns. Changes in animal posture and change in size according to distance are other possible issues. To overcome these challenges 3D cameras seem to offer a solution, and preliminary work done by (Arslan et al., 2014) proves the applicability of 3D cameras to the problem, albeit with a small test set.

The problem of having to identify individual cows in any given herd also presents the challenge of accommodating new cows joining the herd, such that they are distinguishable from the rest of the herd and recognisable the next time they appear before the system. In this way, the problem is similar to the person re-identification problem in security and authentication domains, and to the best of our knowledge the potential of deep learning techniques specifically tailored to this problem has yet to be exploited in agricultural applications.
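One plausible way to borrow from the person re-identification literature, sketched below, is metric learning: train a network to map each animal’s image (or point cloud) to an embedding, pull observations of the same cow together and push different cows apart with a triplet loss, and enrol new herd members simply by storing their embeddings. This is a hypothetical PyTorch illustration, not a method from the works cited above; `backbone` stands in for any feature extractor.

```python
import torch
import torch.nn as nn

# Hypothetical backbone: any feature extractor producing an embedding.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor = backbone(torch.randn(8, 1, 64, 64))    # cow A, one view
positive = backbone(torch.randn(8, 1, 64, 64))  # cow A, another view
negative = backbone(torch.randn(8, 1, 64, 64))  # a different cow

loss = loss_fn(anchor, positive, negative)      # train the backbone on this

# Enrolment/recognition: the nearest stored embedding identifies the cow,
# so a new animal only requires storing its embedding, not retraining.
gallery = {"cow_17": positive[0].detach()}
query = anchor[0].detach()
best = min(gallery, key=lambda k: torch.dist(query, gallery[k]).item())
```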
3.2.2 Lameness Detection

As lame cows produce less milk and tend to have other health problems, finding and treating lame cows is very important for farmers. The active monitoring and prevention of lameness at a herd level often requires the involvement of a vet or some other trained observer. However, lame cows are often undiagnosed until the problem has become severe. Automated monitoring solutions that measure behaviours associated with lameness in cows can help by alerting the farmer to those cows in need of treatment, while avoiding animal interference or handling and observer bias. The state-of-the-art in automatic lameness detection is summarised in Table 1 under categories such as the size of the test dataset, the degree of automatization and the accuracy achieved.

Table 1: Past 3D Vision-based Locomotion Scoring Research

Features | Reference | Number of cows tested | Camera used | Accuracy (%)
Hook bones and spine | (Abdul Jabbar et al., 2017) | 23 | ASUS Xtion PRO | 95.7
A body movement pattern score individualized to each cow | (Viazzi et al., 2014) | 951 | Kinect v1 | 90
Curvature around hip joints and novel back posture measurement | (Van Hertem et al., 2014), (van Hertem et al., 2018) | 186 | Kinect v1 | 90
Hoof location determined by CNN | (Gardenier et al., 2018a) | 223 | Kinect v2 | Hoof tracking only
Arching of the cow’s spine | (Hansen et al., 2018a) | 23 | ASUS Xtion PRO | 83

No efficient automated lameness detection system is available on the market yet; however, there is a clear trend towards systems which are more automatable. The need for automatability has put the use of 3D vision at the forefront of the latest research, as it allows systems to be deployed in a greater variety of scenarios and to be more robust to variables such as breed of cow and lighting conditions.

A good method can do more with less, i.e. the system should not have too many constraints in relation to cow pose and motion or camera pose. For example, a system that has greater constraints on how the cow needs to be positioned, on lighting conditions or on the layout of walkways is more likely to impact on the herd’s daily activities or to constrain the installation of the system to certain locations.
Similarly, data-driven CV techniques which have been trained with sufficient variability across deployment scenarios have fewer rigid constraints compared to hard-coded techniques.

There have been a lot of studies relating lameness to parameters which can be quantified with CV. Several research projects have taken advantage of 3D vision to extract the curvature of the cow’s spine or to measure between prominent features such as hook bones and hip bones from the depth image. These methods follow similar processing pipelines, which may include background removal, height thresholding, smoothing (taking care not to destroy curvature information), curvedness calculation on the high peaks, and a binary image representation of the curvedness threshold to track the most distinctly curved/highest convex regions (i.e. spine, hooks and pins); a minimal sketch of such a pipeline follows.
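The numpy/scipy sketch below illustrates the generic shape of such a depth-image pipeline under stated assumptions: an overhead camera, depth in metres, and entirely illustrative threshold values. It is not the exact pipeline of any cited study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def back_candidates(depth, floor_depth=3.0, min_height=1.2, sigma=2.0):
    """Overhead depth image -> labelled high convex regions (spine/hook/
    pin candidates). All thresholds are illustrative, not tuned values."""
    height = floor_depth - depth              # convert depth to height map
    height[depth <= 0] = 0.0                  # drop invalid/background pixels
    cow = height > min_height                 # height thresholding
    smooth = gaussian_filter(height, sigma)   # mild smoothing only
    # Crude curvedness proxy: Laplacian magnitude restricted to the cow mask.
    gy, gx = np.gradient(smooth)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    curvedness = np.abs(gxx + gyy) * cow
    # Binary representation of the most distinctly curved regions.
    peaks = curvedness > np.percentile(curvedness[cow], 95) if cow.any() else cow
    regions, n = label(peaks)                 # connected curved regions
    return regions, n

regions, n = back_candidates(np.random.uniform(1.5, 3.0, (240, 320)))
```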
In an effort to identify temporal signatures of lameness, (Gardenier et al., 2018b) propose the use of object detection CNNs to accurately detect the cow’s hoofs and thereby track hoof placement, which has been demonstrated to be an effective indicator of lameness (Bahr et al., 2008). Such an approach based on deep learning has proven, at least in other fields of research, to be more robust compared to hand-crafted classical CV approaches, which contain parameters fine-tuned by CV engineers that do not tend to generalise well.

Locomotion scoring is very complex because a cow might not exhibit these traits consistently, or at all in certain scenarios (e.g. if a camera is deployed in an area where cattle are eager to get to feed or are being herded). This makes deployment very difficult, as a setup which may be effective in one location may not work in another location, even on the same farm. Some works have tracked traits over time, establishing a sliding window over which each cow’s characteristic traits are determined, which then allows deviations from the norm to be detected. Another approach is to use an assessment methodology which is flexible enough to be employed in a wide range of scenarios, where the increased frequency of observations increases the chances of detection. This is where geometric deep learning may play a vital role, with techniques which learn from shape alone and are robust to noise and to rigid translation and rotation of the point cloud.
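The sliding-window idea above amounts to a per-cow rolling baseline with an alert on sustained deviation. A minimal pandas sketch, with an entirely hypothetical trait series and thresholds:

```python
import pandas as pd

# Hypothetical daily trait measurements for one cow (e.g. a spine-arch
# score); in practice these would come from the vision system.
trait = pd.Series([0.31, 0.29, 0.30, 0.32, 0.30, 0.45, 0.47, 0.50])

window = 5                                   # sliding window (days)
baseline = trait.rolling(window, min_periods=3).median().shift(1)
spread = trait.rolling(window, min_periods=3).std().shift(1)

# Flag days where the trait deviates from the cow's own recent norm.
alerts = (trait - baseline).abs() > 2.0 * spread
print(alerts)  # the jump to ~0.45+ is flagged relative to this cow
```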
3.2.3 Body Condition Scoring

Body Condition Score (BCS) is an indirect estimation of the level of body reserves, and its variation reflects cumulative variation in energy balance. It interacts with reproductive and health performance, which are important to consider in dairy production but not easy to monitor. Manual visual BCS is subjective, time-consuming and requires experienced employees. The state-of-the-art in automatic BCS estimation is summarised in Table 2 under categories such as the size of the test dataset, the degree of automatization, the type of camera used and the accuracy of scores within 0.25 of the manual BCS reference.

The 3D vision-based animal monitoring implementations demonstrate the capability to recognise cows, which allows real-time insights about animal health to be obtained and incorporated with feeding systems to manage cow nutrition more effectively, and alerts the farmer to ill health promptly.

For both Locomotion Scoring and BCS, many of the techniques reviewed have used pre-processed depth images as input, where the 3D spatial relationship between all points measured by the 3D camera is not captured. We propose that techniques which operate on the raw pointcloud data, such as (Qi et al., 2017), would yield better accuracy and reliability, as this allows geometric pre-processing and noise reduction techniques to be applied, and the shapes present in the pointcloud would be invariant to camera pose, light amplitude and background light.
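One such geometric pre-processing step, sketched below, is normalising a raw point cloud’s pose before feature extraction: centre it on its centroid (translation) and align its principal axes via PCA (rotation, up to an axis-sign ambiguity), so that downstream shape features see a canonical pose. A plain numpy sketch; the cited works may normalise differently.

```python
import numpy as np

def normalise_pose(cloud: np.ndarray) -> np.ndarray:
    """Canonicalise an (N, 3) point cloud: centroid-centre, then rotate
    so the principal axes of the shape align with x, y, z. Axis signs
    remain ambiguous; a real system would fix them with a convention
    (e.g. head-to-tail direction)."""
    centred = cloud - cloud.mean(axis=0)          # translation invariance
    cov = np.cov(centred.T)                       # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    axes = eigvecs[:, ::-1]                       # largest variance first
    return centred @ axes                         # rotation normalisation

cloud = np.random.randn(2048, 3) * [2.0, 0.5, 0.2] + [10.0, -3.0, 1.5]
print(normalise_pose(cloud).mean(axis=0).round(6))  # ~[0, 0, 0]
```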
Table 2: Past 3D Vision-based Body Condition Scoring Research

Features | Reference | Size of test dataset (images) | Camera used | Accuracy (% within 0.25)
Height measurements around cow’s back + age + weight | (Spoliansky et al., 2016) | 2650 | Kinect v1 | 74
Sphere fitted to surface of cow’s back | (Hansen et al., 2018b) | 95 | Asus Xtion Pro | 66
Features determined by CNN | (Rodríguez Alvarez et al., 2018) | 503 | Kinect v2 | 78
De Laval’s proprietary system | (Mullins et al., 2019) | 344 | De Laval camera | 76

3.3 Applications involving Interaction with Animals

3.3.1 Robotic Milking

There are several offerings on the market for Automatic Milking Systems (AMS). Recent activity in the area has included the integration of more modern sensors/vision systems, the addition of animal monitoring functionalities such as those discussed in Section 3.2, and the integration of robots into rotary milking parlours.

Time of Flight (ToF) depth sensing cameras have been used in many recent developments in AMSs. Acceptable accuracy (teat location detection within 5mm) has been achieved when the search space is limited to a region of interest 150mm wide by 80mm deep (Duffy et al., 2006). Teat detection and tracking using algorithmic solutions from depth images and point-cloud data was also achieved by (Van Der Zwan et al., 2015).
The method can robustly handle occlusions, variable poses and geometries of the tracked shape, and yields a correct tracking rate of over 90% for tests involving real-world images obtained from an industrial robot. Other vision technologies have also been investigated, for example in (Rastogi et al., 2017) using a Kinect structured light camera and a Haar-cascade classifier, and in (Ben Azouz et al., 2015) using a combination of thermal imaging and stereovision techniques.
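A common thread in these teat-detection systems is restricting the search to a small region of interest around the expected teat position, as in the 150mm by 80mm search space reported above. The numpy sketch below shows that kind of ROI filter on a point cloud; the box centre, the 100mm height and the axis conventions are assumptions for illustration only.

```python
import numpy as np

def roi_filter(cloud, centre, width=0.150, depth=0.080, height=0.100):
    """Keep only points of an (N, 3) cloud, in metres, inside a box
    around `centre` (e.g. a predicted teat position). The box echoes
    the 150mm x 80mm search-space idea; x=width / y=depth / z=height
    is an assumed convention."""
    half = np.array([width, depth, height]) / 2.0
    offset = np.abs(cloud - np.asarray(centre))
    return cloud[np.all(offset <= half, axis=1)]

cloud = np.random.uniform(-0.5, 0.5, (5000, 3))
teat_region = roi_filter(cloud, centre=[0.0, 0.1, -0.2])
print(len(teat_region), "points kept for teat detection")
```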
The task of attaching the milking clusters to cows’ teats is a challenging one, as the shape of an udder is variable between cows and between different stages of lactation. The udder is also a moving target if cows are restless, in which case there is a risk for the robot to harm the cow and, vice versa, for the cow to throw the robot out of calibration. Cycle time of the milking operation must also be minimal so that both cow and farmer can be more productive with their time. Therefore, intelligent control of the robot is required, using visual feedback to navigate the cluster onto the cow accurately, safely and quickly. A system which can milk cows with unusually shaped udders, or cows that do not take to a robot milker and would otherwise have to be culled, as is the case with some existing AMS, would also be a considerable advantage.

This functionality could be achieved through the implementation of AI that can generalise well to a wider variety of udder shapes in challenging sensing conditions and dynamically assess the status and viability of teat cup placement during the procedure so as to avoid stressing the cow. Such a perception system must be capable of coping with obstructions and dynamic targets and of avoiding collisions, and therefore requires a number of components including teat, tail and leg identification, target tracking and situation awareness. 3D point cloud processing techniques are an ideal candidate as they are less subject to pose-dependent ambiguity and allow for geometric reasoning over segmented 3D shapes.
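As an indication of what such geometric reasoning can look like, the sketch below segments teat candidates from an udder-region point cloud by Euclidean clustering with scikit-learn’s DBSCAN and then reasons about each cluster’s extent. The parameter values are assumptions for illustration, not values from any cited system; a production system would add temporal tracking of the detections.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def teat_candidates(cloud, eps=0.01, min_points=30, max_len=0.08):
    """Cluster an (N, 3) udder-region cloud (metres) and keep clusters
    whose bounding box is teat-sized. eps/min_points/max_len are
    illustrative values, not tuned on real data."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(cloud)
    candidates = []
    for k in set(labels) - {-1}:                  # -1 marks noise points
        cluster = cloud[labels == k]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        if np.all(extent < max_len):              # reject leg/tail-sized blobs
            candidates.append(cluster.mean(axis=0))  # candidate teat centre
    return candidates

cloud = np.random.normal(0, 0.01, (400, 3))       # toy stand-in data
print(teat_candidates(cloud))
```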
4. CONCLUSIONS

This paper has identified the applicability of 3D vision in farm automation tasks. The diversity of applications will require the implementation of CV algorithms and deep learning models which deliver accuracy, generalisation, low computational cost and customisability. For example, cow health assessment systems must make accurate measurements of subtle features such as posture and body fat thickness while also being customisable to individual cow and herd characteristics.

The use of 3D vision in the implementation of automated systems for Precision Dairy Farming has been prevalent in recent years. This has enabled real-time, accurate 3D measurements which are so important to the applications in question. A further trend has been the development of algorithms which make use of AI for improved accuracy and reliability in a diverse range of conditions, considering that physical traits can vary greatly depending on stage of lactation and cattle breed, and also that poor lighting and lens fouling are likely to occur in farmyard environments.

Extensive testing is necessary to ascertain the automatability of current research. No two farms are the same, and deploying vision-guided robotic systems to the end user will require systems which can operate in unconstrained environments and adapt to variations in herd characteristics, weather conditions, farmyard layout and different scenarios in animal-robot interaction.

This project intends to address this concern through the use of 3D deep learning techniques to derive features which are robust to rigid transformation (e.g. rotation and translation) and therefore allow more flexible installation options for the animal monitoring vision system and real-time situation awareness in robotics applications. Secondly, the learning of characteristics specific to individual cattle (e.g. the appearance of each cow for cow identification) has also been identified as a research gap which this project intends to fill.

ACKNOWLEDGEMENTS

This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero - the Irish Software Research Centre (www.lero.ie).

REFERENCES

Abdul Jabbar, K. et al. (2017) ‘Early and non-intrusive lameness detection in dairy cows using 3-dimensional video’, Biosystems Engineering, 153. doi: 10.1016/j.biosystemseng.2016.09.017.

Alexandre, L. A. (2012) 3D Descriptors for Object and Category Recognition: a Comparative Evaluation.

Arslan, A. C., Akar, M. and Alagoz, F. (2014) ‘3D cow identification in cattle farms’, in 2014 22nd Signal Processing and Communications Applications Conference (SIU). IEEE, pp. 1347–1350. doi: 10.1109/SIU.2014.6830487.

Bahr, C. et al. (2008) ‘Automatic detection of lameness in dairy cattle by vision analysis of cow’s gait’, in International Conference on Agricultural Engineering. Hersonissos, Crete, Greece: European Society of Agricultural Engineers (AgEng).

Ben Azouz, A. et al. (2015) ‘Development of a teat sensing system for robotic milking by combining thermal imaging and stereovision technique’, Computers and Electronics in Agriculture. Elsevier B.V., 110, pp. 162–170. doi: 10.1016/j.compag.2014.11.004.

Duffy, A. H. et al. (2006) ‘Teat Detection for an Automated Milking System’.

Gardenier, J., Underwood, J. and Clark, C. (2018a) ‘Object Detection for Cattle Gait Tracking’, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2206–2213. doi: 10.1109/ICRA.2018.8460523.
Gardenier, J., Underwood, J. and Clark, C. (2018b) ‘Object Detection for Cattle Gait Tracking’, 2018 IEEE International Conference on Robotics and Automation (ICRA), (May), pp. 2206–2213. doi: 10.1109/ICRA.2018.8460523.

Grimstad, L. and From, P. J. (2017) ‘Thorvald II - a Modular and Re-configurable Agricultural Robot’, IFAC-PapersOnLine. Elsevier, 50(1), pp. 4588–4593. doi: 10.1016/J.IFACOL.2017.08.1005.

Halachmi, I. and Guarino, M. (2016) ‘Editorial: Precision livestock farming: a “per animal” approach using advanced monitoring technologies’, animal, 10(09), pp. 1482–1483. doi: 10.1017/S1751731116001142.

Hansen, M. F. et al. (2018a) ‘Automated monitoring of dairy cow body condition, mobility and weight using a single 3D video capture device’, Computers in Industry, 98, pp. 14–22. doi: 10.1016/j.compind.2018.02.011.

Hansen, M. F. et al. (2018b) ‘Automated monitoring of dairy cow body condition, mobility and weight using a single 3D video capture device’, Computers in Industry. Elsevier, 98, pp. 14–22. doi: 10.1016/J.COMPIND.2018.02.011.

van Hertem, T. et al. (2018) ‘Implementation of an automatic 3D vision monitor for dairy cow locomotion in a commercial farm’, Biosystems Engineering. Academic Press, 173, pp. 166–175. doi: 10.1016/J.BIOSYSTEMSENG.2017.08.011.

Van Hertem, T. et al. (2014) ‘Automatic lameness detection based on consecutive 3D-video recordings’, Biosystems Engineering, 119, pp. 108–116. doi: 10.1016/j.biosystemseng.2014.01.009.

Ioannidou, A. et al. (2017) ‘Deep learning advances in computer vision with 3D data: A survey’, ACM Computing Surveys, 50(2). doi: 10.1145/3042064.

Kragh, M. et al. (2017) ‘FieldSAFE: Dataset for Obstacle Detection in Agriculture’, Sensors. Multidisciplinary Digital Publishing Institute, 17(11), p. 2579. doi: 10.3390/s17112579.

Mullins, I. L. et al. (2019) ‘Validation of a Commercial Automated Body Condition Scoring System on a Commercial Dairy Farm’, Animals, 9(6), p. 287. doi: 10.3390/ani9060287.

Van Nuffel, A. et al. (2015) ‘Lameness detection in dairy cows: Part 1. How to distinguish between non-lame and lame cows based on differences in locomotion or behavior’, Animals. MDPI AG, pp. 838–860. doi: 10.3390/ani5030387.

O’ Mahony, N. et al. (2018a) ‘Computer Vision for 3D Perception: A review’, in Intelligent Systems Conference (IntelliSys) 2018. Springer, pp. 1–9.

O’ Mahony, N. et al. (2018b) ‘Convolutional Neural Networks for 3D Vision System Data: A review’, in 12th International Conference On Sensing Technology 2018.

O’ Mahony, N. et al. (2019a) ‘Adaptive Multimodal Localisation Techniques for Mobile Robots in Unstructured Environments: A Review’, in IEEE 5th World Forum on Internet of Things (WF-IoT).

O’ Mahony, N. et al. (2019b) ‘Automatic Cattle Monitoring Systems for Precision Livestock Farming (unpublished)’, in 9th European Conference on Precision Livestock Farming (ECPLF).

O’ Mahony, N. et al. (2019c) ‘Deep Learning vs. Traditional Computer Vision’, in Advances in Computer Vision. Springer Nature Switzerland AG, pp. 128–144.

Qi, C. R. et al. (2017) ‘PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space’, arXiv preprint arXiv:1706.02413v1.

Rastogi, A. et al. (2017) ‘Teat detection mechanism using machine learning based vision for smart Automatic Milking Systems’, in 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). IEEE, pp. 947–949. doi: 10.1109/URAI.2017.7992872.

Rauscher, G., Dube, D. and Zell, A. (2014) ‘A Comparison of 3D Sensors for Wheeled Mobile Robots’.

Rind Thomasen, J. et al. (2018) ‘Individual cow identification in a commercial herd using 3D camera technology’, in World Congress on Genetics Applied to Livestock Production, p. 613.

Rodríguez Alvarez, J. et al. (2018) ‘Body condition estimation on cows from depth images using Convolutional Neural Networks’, Computers and Electronics in Agriculture, 155, pp. 12–22. doi: 10.1016/j.compag.2018.09.039.

Salah Sukkarieh and Matthew Truman (2019) Agriculture and the environment - Faculty of Engineering. Available at: https://sydney.edu.au/engineering/our-research/robotics-and-intelligent-systems/australian-centre-for-field-robotics/agriculture-and-the-environment.html (Accessed: 15 July 2019).

Salau, J. et al. (2017) ‘A multi-Kinect cow scanning system: Calculating linear traits from manually marked recordings of Holstein-Friesian dairy cows’, Biosystems Engineering, 157. doi: 10.1016/j.biosystemseng.2017.03.001.

Spoliansky, R. et al. (2016) ‘Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera’, Journal of Dairy Science. Elsevier, 99(9), pp. 7714–7725. doi: 10.3168/JDS.2015-10607.

Vázquez-Arellano, M. et al. (2016) ‘3-D Imaging Systems for Agricultural Applications: A Review’, Sensors (Basel, Switzerland). Multidisciplinary Digital Publishing Institute (MDPI), 16(5). doi: 10.3390/s16050618.

Viazzi, S. et al. (2014) ‘Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows’, Computers and Electronics in Agriculture. Elsevier, 100, pp. 139–147.

Wendel, A. and Underwood, J. (2017) ‘Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern’. doi: 10.3390/s17112491.

Van Der Zwan, M. and Telea, A. (2015) ‘Robust and Fast Teat Detection and Tracking in Low-Resolution Videos for Automatic Milking Devices’.