
Available at www.sciencedirect.com

INFORMATION PROCESSING IN AGRICULTURE 5 (2018) 124–133

journal homepage: www.elsevier.com/locate/inpa

Cattle behaviour classification from collar, halter, and ear tag sensors

A. Rahman a,*, D.V. Smith a, B. Little b, A.B. Ingham b, P.L. Greenwood c, G.J. Bishop-Hurley b

a Analytics Program, Data61, CSIRO, Australia
b Productive and Adaptive Livestock Systems, Agriculture and Food, CSIRO, Australia
c NSW Department of Primary Industries Beef Industry Centre, Australia

A R T I C L E I N F O

Article history:
Received 21 June 2017
Received in revised form 17 October 2017
Accepted 18 October 2017
Available online 3 November 2017

Keywords:
Sensor data analytics
Cattle behaviour classification
Sensors for cattle behaviour tracking

A B S T R A C T

In this paper, we summarise the outcome of a set of experiments aimed at classifying cattle behaviour based on sensor data. Each animal carried sensors generating time series accelerometer data, placed on a collar on the neck at the back of the head, on a halter positioned at the side of the head behind the mouth, or on the ear using a tag. The purpose of the study was to determine how well sensor data from the different placements can classify a range of typical cattle behaviours. Data were collected and animal behaviours (grazing, standing or ruminating) were observed over a common time frame. Statistical features were computed from the sensor data and machine learning algorithms were trained to classify each behaviour. Classification accuracies were computed on separate independent test sets. The behaviour classification experiments revealed that the different sensor placements can achieve good classification accuracy if the feature space (representing motion patterns) of the training and test animals is similar. The paper discusses these analyses in detail and can act as a guide for future studies.

© 2018 China Agricultural University. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

Animals alter their behaviour to enable them to deal with stressors such as infection, satiety, or social and environmental changes. This behaviour is often consistent and predictable but cannot be measured at scale because of the labour required to physically monitor large numbers of animals continuously. Wearable sensor technologies offer a possible solution to this problem, enabling measurement at scale, but this can only be successful if sensor outputs can be interpreted accurately and in real time. Monitoring cattle behaviour using wearable sensors is becoming an important option for farm management and genetic selection programs, reflecting a greater emphasis on individual wellbeing and performance rather than the more traditional herd based approach. Examples of precision agriculture (PA) management and genetic improvement strategies are seen across the agricultural spectrum including cropping [1,2], dairy/beef [3] and the aquaculture industry [4–6].

In the livestock sector, behaviour analysis can provide insight into (i) Animal health: animal behaviour patterns can be linked to animal health [9–13]; for example, early detection of sickness was identified when rumination and general activity decreased below expected levels; (ii) Feed intake and satiety:

* Corresponding author.
E-mail address: ashfaqur.rahman@data61.csiro.au (A. Rahman).
Peer review under responsibility of China Agricultural University.
https://doi.org/10.1016/j.inpa.2017.10.001
2214-3173 © 2018 China Agricultural University. Publishing services by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

behaviours like grazing, chewing and feeding are indicators of feed intake. The percentage of time spent on grazing related behaviours can assist in understanding the amount of feed intake compared to the amount of pasture or supplements offered, animal preference and satiety state [14–16]; (iii) Heat/Estrus events: this refers to the period of sexual receptivity and fertility in female mammals. The heat event has been shown to be detectable through changes in restlessness (activity) [12]. The detection of these periods indicates the appropriate time for artificial insemination.

Commercial and research systems presented in [9–21] continuously and automatically monitor the rumination time [9–11], grazing time [10] and activity intensity [9–13] of individual animals. Current behaviour monitoring systems are commonly comprised of: (a) an individual sensor or combination of sensors fitted to each animal (Fig. 1) - these sensors can include accelerometers, magnetometers, gyroscopes, compasses, GPS, pressure sensors and microphones; (b) a sensor node to process, store and transmit sensor observations; and (c) a model or set of models [17,18] to infer an animal's behaviour from the raw sensor observations.

Sensors can be placed on different parts of the animal, and it is not known if, or how, location might influence classification accuracy. We therefore devised an experiment to better understand how sensor placement influences behaviour classification accuracy. In this study, we collected data simultaneously from accelerometers placed on three different parts of the animal body: neck (collar), head (halter) and ear (using an ear tag). We developed separate behaviour classification models based on sensor data from these three locations. We utilised two different testing approaches: (i) training on data from a set of animals and testing models on data from animals that are not part of the training process, and (ii) mixing data from all the animals, training on part of the mixed data set and testing models on data that are not part of the training process. Analysis results reveal that the feature distribution between training and test data is an important factor for accurate behaviour classification for any sensor placement.

2. Feature extraction for classification

In previous studies, machine learning based cattle behaviour classifiers [17,18] employed a standard approach to model development without considering the potential value of state of the art classifiers and feature representations. The standard workflow (Fig. 2) in developing behaviour models involves partitioning time series data into short time windows and, for each window, extracting a small set of statistical features (i.e. first to fourth order statistical moments). The combination of statistical features and corresponding behaviour annotations was used to train a classifier. A set of statistical features was computed from the time series sensor data in [19–21] and showed potential to classify cattle behaviour with high accuracy. In this study we computed statistical features only for the classification experiments.

An important step in a classification framework (Fig. 2) is feature extraction. For the experiments conducted as part of this study, a set of statistical features was computed from the time and frequency representations associated with each window of the input series. Frequency domain representations of the time series data were obtained using the Discrete Fourier Transform (DFT). Let x_t be the t-th element of the time series. The k-th element of the DFT frequency domain representation is obtained as:

f_k = Σ_{t=0}^{n−1} x_t e^(−2πitk/n)    (1)

where n is the length of the vector. The interpretation is that x represents the signal level at various points in time and f represents the signal level at various frequencies. The DC component of the DFT (f_0: the component corresponding to 0 frequency) is retained as a feature. The remaining statistical features are then computed from the spectrum after its DC component has been removed. The statistical features used include the mean, standard deviation, skewness, and kurtosis as presented in Table 1.

We also used the minimum and maximum of the series x_t and f_k (with k > 0) as features. The standard deviation, minimum and maximum features were used to represent the motion intensity. Along with these features, the period of the signal within each time window was computed in the time domain using the method presented in [6]. The period feature is included because some behaviours, such as grazing, walking and ruminating, have repetition within their motion patterns. A total of fourteen statistical features were computed.
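A window of the magnitude series therefore maps to a 14-dimensional feature vector. The sketch below illustrates the computation; this is our illustration, not the study's code (which used MATLAB/WEKA), and the exact period estimator of [6] is replaced by a simple dominant-DFT-bin approximation.

```python
import numpy as np

def moment_features(v):
    """Mean, std, skewness, kurtosis, min, max of a series,
    using (our reading of) the formulas in Table 1."""
    nv = len(v)
    d = v - v.mean()
    s2, s3, s4 = (d**2).sum(), (d**3).sum(), (d**4).sum()
    std = np.sqrt(s2 / nv)
    skew = nv * np.sqrt(nv - 1) / (nv - 2) * s3 / s2**1.5
    kurt = nv * (nv + 1) * (nv - 1) / ((nv - 2) * (nv - 3)) * s4 / s2**2
    return [v.mean(), std, skew, kurt, v.min(), v.max()]

def window_features(x):
    """The 14 features for one window of the acc_m series."""
    n = len(x)
    t_feats = moment_features(x)          # 6 time-domain features
    f = np.abs(np.fft.fft(x))             # magnitude spectrum
    dc = f[0]                             # DC component kept as a feature
    f_feats = moment_features(f[1:])      # 6 features with DC removed
    k = 1 + np.argmax(f[1:n // 2])        # dominant non-DC bin
    period = n / k                        # crude stand-in for the period of [6]
    return np.array(t_feats + [dc] + f_feats + [period])
```

Each 200-sample window then yields one row of the design matrix used later in the classification experiments.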

3. Sensors for data collection

Fig. 1 – Collar, halter and ear tag sensors used for cattle behaviour monitoring.

The results presented here are based on trial data collected at the FD McMaster Laboratory in Armidale, NSW, Australia in November 2014. Accelerometer sensors were placed in a collar, an ear tag, and a halter at the same time, and data were collected from these three different sources. A video camera recorded the animal's behaviours. A domain expert coded the videos to identify a range of behaviours. The time-stamped data from the three sources were

Fig. 2 – Cattle behaviour classification framework.

Table 1 – Features computed from the time and frequency domain representations of the signal.

Mean:
  μ_T = (1/n) Σ_t x_t
  μ_F = (1/(n−1)) Σ_{k>0} f_k
Standard deviation:
  σ_T = √[(1/n) Σ_t (x_t − μ_T)²]
  σ_F = √[(1/(n−1)) Σ_{k>0} (f_k − μ_F)²]
Skewness:
  γ_T = [n√(n−1)/(n−2)] · Σ_t (x_t − μ_T)³ / [Σ_t (x_t − μ_T)²]^(3/2)
  γ_F = [n√(n−1)/(n−2)] · Σ_{k>0} (f_k − μ_F)³ / [Σ_{k>0} (f_k − μ_F)²]^(3/2)
Kurtosis:
  β_T = [n(n+1)(n−1)/((n−2)(n−3))] · Σ_t (x_t − μ_T)⁴ / [Σ_t (x_t − μ_T)²]²
  β_F = [n(n+1)(n−1)/((n−2)(n−3))] · Σ_{k>0} (f_k − μ_F)⁴ / [Σ_{k>0} (f_k − μ_F)²]²
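Read this way, the skewness entry of Table 1 is exactly the standard bias-corrected sample estimator, and the kurtosis entry differs from the bias-corrected (non-Fisher) estimator only by a known additive term. A quick numerical check against SciPy (the typesetting of the original table is damaged, so treat the exact prefactors as our assumption):

```python
import numpy as np
from scipy import stats

x = np.random.default_rng(0).normal(size=200)   # one hypothetical window
n = len(x)
d = x - x.mean()
s2, s3, s4 = (d**2).sum(), (d**3).sum(), (d**4).sum()

gamma_T = n * np.sqrt(n - 1) / (n - 2) * s3 / s2**1.5
beta_T = n * (n + 1) * (n - 1) / ((n - 2) * (n - 3)) * s4 / s2**2

# Skewness: identical to SciPy's bias-corrected estimator
assert np.isclose(gamma_T, stats.skew(x, bias=False))

# Kurtosis: SciPy's corrected estimator plus 3(3n-5)/((n-2)(n-3))
assert np.isclose(beta_T, stats.kurtosis(x, fisher=False, bias=False)
                  + 3 * (3 * n - 5) / ((n - 2) * (n - 3)))
```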

aligned to these behaviour labels and three different labelled data sets were produced.

The devices deployed in this experiment were designed by engineers within CSIRO's Sensor Technology Group. Two models were deployed. The first model was the "Camazotz" device, with two deployed on each animal, one on a halter and the other in an ear tag. A detailed description of the devices can be found in [7]. For this study, the accelerometers (STMicroelectronics LSM303 3-axis accelerometer/magnetometer) were sampled at a frequency of 30 Hz. An ear tag housing was 3D printed to hold the Camazotz and attached to the left ear of the animal using industry standard tools. For the halter mounted sensor, a housing was also printed using a 3D printer and attached to the cheek strap of the halter using cable ties. The third device, a Fleck, was housed in a box attached to each animal by a collar. The accelerometer in the CSIRO monitoring collars [8] was a piezoelectric microelectromechanical system (MEMS) chip containing a 3-axis accelerometer and a 3-axis magneto-resistive sensor (HMC6343, Honeywell, Plymouth, MN). The accelerometer chip was programmed to collect data at a frequency of 12 Hz. The box containing the electronics was located under the animal's neck. The resulting data were logged to an on-board MicroSD card. At completion of the field experiment, all the devices were removed from the animals and the MicroSD cards removed. The SD cards were copied to a computer and the stored data converted from binary format to "CSV" files, which include variables for date, time and X/Y/Z accelerometer readings – referred to henceforth as the "Data Log File" (DLF). A second data file, the "Annotation File" (AF), was created from field observations of the animal behaviour while wearing the devices. The AF file can also be created by a custom application (the AF APP) that allows an operator to code behaviours while viewing video of the animal recorded in the field. Finally, the AF and the DLF are read by the machine learning software to create the windows of data for each behaviour type.

4. Experimental setup

Sensor data obtained from the accelerometer were utilised in this experiment. The acceleration magnitude (acc_m) was computed as acc_m = √(acc_x² + acc_y² + acc_z²), where acc_x, acc_y and acc_z are the three axes of the accelerometer. The acc_m series was the only time series used to compute features. Our previous study [?] on other cattle behaviour data sets indicates that the acceleration magnitude is sufficient to achieve good classification accuracy on the behaviours studied in the paper. Hence we used the acceleration magnitude and computed features from that series.

Six behaviour classes were recorded during the field trial: Grazing, Resting, Walking, Standing, Ruminating and Other. Fig. 3 summarises the distribution of the different classes from the three different sources. Note that the Walking, Resting and Other classes have a small representation. We thus concentrated on, and developed classifiers for, the Grazing, Standing and Ruminating classes only. The acc_m series was partitioned into windows of 200 samples and statistical features were extracted from each time window. Windows were extracted 20 samples apart along the time series. Fourteen statistical features were computed from each time window. The features were used to train a Random Forest [22] classifier, which produced the best classification results for our dataset. A random forest is a collection of decision tree classifiers. Each decision tree is generated from a random subset of the features. Given

Fig. 3 – Class distribution from the data sets [collated from all animals] collected from sensors placed at collar, halter and ear tag.
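The magnitude computation and windowing described in this section can be sketched as follows (an illustrative numpy version; the study's own processing chain ran in MATLAB, and the toy data below are not real recordings):

```python
import numpy as np

def acc_magnitude(acc_xyz):
    """Combine the x/y/z accelerometer axes into the acc_m series."""
    return np.sqrt((acc_xyz**2).sum(axis=1))

def sliding_windows(series, width=200, step=20):
    """Windows of 200 samples, extracted 20 samples apart."""
    starts = range(0, len(series) - width + 1, step)
    return np.stack([series[s:s + width] for s in starts])

# Toy data standing in for one logged session
acc = np.random.default_rng(1).normal(size=(1000, 3))
windows = sliding_windows(acc_magnitude(acc))
# 1000 samples -> (1000 - 200) // 20 + 1 = 41 overlapping windows
```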

a test sample, each decision tree in the forest produces a class decision, and these decisions are fused into a single decision using majority voting. We used the WEKA [23] implementation. All experiments were conducted in MATLAB.

A set of three binary classifiers was trained such that each target behaviour was classified against a combined class of all remaining behaviour classes. The classification accuracies are reported for each target class separately in the results section. Binary classifiers were developed because often only one behaviour class needs to be inferred for a particular management practice, and hence, multi-class classifiers are not always required. If multiple behaviours need to be classified for a particular application, the corresponding set of binary classifiers can be combined.

We measured the classification accuracy using the F-score (FScore), defined as:

F1 = 2 · (precision · recall) / (precision + recall)    (2)

where precision and recall are defined as:

precision = true positive / (true positive + false positive)    (3)

and

recall = true positive / (true positive + false negative)    (4)

Replacing precision and recall in the F-score definition gives us

F1 = 2 · true positive / (2 · true positive + false negative + false positive)    (5)

We used the above formula to compute the classification performance of the binary classifiers.

5. Results and analysis

We conducted a series of experiments to find an answer to the research question posed in this paper. We tried to understand how effectively different behaviours can be classified using the statistical features computed from devices attached to different locations on the animal's head. We developed a machine learning model for each behaviour separately, i.e.

we developed a binary classifier for each behaviour where the target behaviour became one class and the remaining behaviours were combined into the other class. For each sensor placement, we used two approaches to evaluate the classification performance:

(i) the N-Fold Stratified Cross Validation (SCV) approach: for each device data source, we combined the data from all the animals. We then split the data into N folds so that the class distribution of the folds remains close to that of the combined data set. After that, each fold becomes a test set and the remaining folds combine into one training set. The process was repeated so that each fold becomes a test set in turn and classification accuracy (FScore) is computed on each test set; and
(ii) the Leave–Out–One–Animal (LOOA) validation approach, where data from one animal becomes the test set and data from the remaining animals combine into the training set. Each animal becomes a test animal in turn and classification accuracy is computed on each test set.

SCV enforces similarity between training and test sets. The idea behind the LOOA approach is to see how well such similarity is maintained between the training and test sets in a more realistic environment, and how this influences the classification accuracy.

We first evaluated the performance of the binary classifiers using the SCV approach. We computed the average accuracy (FScore) over the folds. Table 2 presents the average FScore computed on three different behaviours from the different data sources using the SCV approach. We can observe two things:

(a) In general, halter data was classified with higher accuracy compared to collar and ear tag data.
(b) Classification accuracy (FScore) is in general high.

We also evaluated the performance of the binary classifiers using the LOOA validation approach. The average accuracy (FScore) was computed over the test sets. Table 3 presents the average FScore computed for the three behaviours from the different devices using the LOOA approach. We can observe two things:

(a) In general, halter data is classified with better accuracy compared against collar and ear tag data.
(b) Classification accuracy (FScore) is in general low.

Overall, classification accuracy using the SCV approach was better than with the LOOA approach. To understand the difference in performance between the SCV and LOOA approaches, we analysed a number of results. As data from the halter showed relatively better performance than from the collar and ear tag devices, we will confine the discussion to the halter data; the conclusions from the halter are applicable to the other device locations.

Class distribution differences between training and test sets across multiple folds were investigated first. Fig. 4 presents the training set and test set class distributions between the target and other class while validating using the LOOA approach. In general, the class distributions of the training and test sets do not match very well. Also, note that the training set class distribution for the Standing and Ruminating classes is imbalanced. As concluded in [24], classification accuracy measures like FScore can be biased if there is a mismatch between the class distributions of the training and test data sets. This becomes even more critical if the class distribution is imbalanced [24]. This partly explains the poor performance of the LOOA approach. The class distribution between training and test sets is almost the same when using the SCV approach (Fig. 5). This is one possible reason for the performance difference between the SCV and LOOA approaches.

The second issue we investigated is how the feature space distributions of the training and test sets vary under the SCV and LOOA approaches. Fig. 6 and Fig. 7 present the Bhattacharyya distance [25] between the training and test distributions across the fourteen features using the LOOA and SCV approaches, respectively. It is clear that the feature distributions match very closely between training and test data in SCV, leading to a low Bhattacharyya distance (Fig. 7). There is a higher mismatch between the training and test set feature distributions when using LOOA, leading to a higher Bhattacharyya distance (Fig. 6). Generally, machine learning models require training and test features to be similarly distributed in order for classification to be successful. Consequently, the greater mismatch in the feature space distributions of the LOOA approach (compared with the SCV approach) contributes to its poorer classification performance.

To find out what can be done to improve the classification accuracy of the LOOA approach, and what happens under an identical situation with the SCV approach, another set of experiments was conducted. For each training–test scenario, a learning curve was constructed. A learning curve reveals the classification performance relationship between the training and test data sets as more training samples are added for
Table 2 – FScore summary using Stratified Cross Validation.

          Grazing   Standing   Ruminating
Collar    0.809     0.874      0.913
Halter    0.914     0.89       0.932
Ear Tag   0.805     0.86       0.895

Table 3 – FScore summary using Leave Out One Animal validation.

          Grazing   Standing   Ruminating
Collar    0.3596    0.4342     0.149
Halter    0.7967    0.5096     0.6211
Ear Tag   0.4808    0.3675     0.1704
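The entries in Tables 2 and 3 are the F-scores of Eqs. (2)–(5); the combined form (5) is algebraically identical to the precision/recall form (2), which a few lines confirm (the confusion counts below are hypothetical):

```python
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)                             # Eq. (3)
    recall = tp / (tp + fn)                                # Eq. (4)
    return 2 * precision * recall / (precision + recall)   # Eq. (2)

def f_score_direct(tp, fp, fn):
    return 2 * tp / (2 * tp + fn + fp)                     # Eq. (5)

# Hypothetical confusion counts for one binary classifier
assert abs(f_score(80, 10, 30) - f_score_direct(80, 10, 30)) < 1e-12
```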

Fig. 4 – Class Distribution between training and test animals while using LOOA approach on Halter data stream. Here Cxxx
represents the ID of the test animal.

Fig. 5 – Class Distribution between training and test animals while using SCV approach on Halter data stream.
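The two validation schemes compared in Figs. 4 and 5 can be sketched as index generators (a minimal numpy illustration of the SCV and LOOA splits; not the study's actual code):

```python
import numpy as np

def looa_splits(animal_ids):
    """Leave-Out-One-Animal: all windows of one animal form the test set."""
    ids = np.asarray(animal_ids)
    for a in np.unique(ids):
        yield np.where(ids != a)[0], np.where(ids == a)[0]

def scv_splits(labels, n_folds=5, seed=0):
    """Stratified folds: each class is spread evenly over the folds so
    every fold keeps roughly the pooled class distribution."""
    labels = np.asarray(labels)
    fold = np.empty(len(labels), dtype=int)
    rng = np.random.default_rng(seed)
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        fold[idx] = np.arange(len(idx)) % n_folds
    for f in range(n_folds):
        yield np.where(fold != f)[0], np.where(fold == f)[0]
```

With SCV, every test fold mirrors the pooled class distribution by construction; with LOOA, the test distribution is whatever the held-out animal happened to do, which is exactly the mismatch Figs. 4 and 5 visualise.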

model generation. The learning curve provides insight into the model's suitability for a classification task, in particular, whether the model is suffering from bias (underfitting) or variance (overfitting). This informs the developer what needs to be done to improve the model. We first construct a learning curve for the model developed with the SCV approach. Fig. 8 presents the learning curve of a representative fold using the halter data set. Note that under an identical class distribution, the training error is very low across all sample sizes, while the test error is monotonically decreasing with training

Fig. 6 – Feature distribution difference between training and test set for different behaviour classes using the LOOA approach. In the 2D histograms, the x-axis presents the 14 features and the y-axis presents the 10 bins for each feature. Each feature is normalised in the range 0–1 and split into 10 equal-range bins [0–0.1], [0.1–0.2], . . ., [0.9–1]. Each bin on the y-axis represents the concentration of values of a feature in that bin w.r.t. the other bins for that feature. The concentrations are displayed using a heat map and the colour code on the right of each histogram denotes the level of concentration. The Bhattacharyya distance between the training and test 2D histograms is presented at the bottom for each behaviour.

Fig. 7 – Feature distribution difference between training and test set for different behaviour classes using the SCV approach. In the 2D histograms, the x-axis presents the 14 features and the y-axis presents the 10 bins for each feature. Each feature is normalised in the range 0–1 and split into 10 equal-range bins [0–0.1], [0.1–0.2], . . ., [0.9–1]. Each bin on the y-axis represents the concentration of values of a feature in that bin w.r.t. the other bins for that feature. The concentrations are displayed using a heat map and the colour code on the right of each histogram denotes the level of concentration. The Bhattacharyya distance between the training and test 2D histograms is presented at the bottom for each behaviour.
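The distance reported at the bottom of Figs. 6 and 7 can be computed from the binned feature distributions; a sketch under the same binning scheme (10 equal-range bins over the samples' common range; the sample data are synthetic stand-ins):

```python
import numpy as np

def bhattacharyya_distance(a, b, bins=10):
    """Bhattacharyya distance between the histograms of two samples
    of a feature, binned over their common range."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p = np.histogram(a, bins=edges)[0] / len(a)
    q = np.histogram(b, bins=edges)[0] / len(b)
    bc = np.sqrt(p * q).sum()   # Bhattacharyya coefficient, 1 if identical
    return -np.log(bc)          # distance: 0 for identical histograms

rng = np.random.default_rng(0)
train_feat = rng.normal(0.0, 1.0, 2000)
test_feat = rng.normal(0.8, 1.0, 2000)   # shifted, e.g. a different animal
```

A well-matched train/test pair (the SCV case) gives a distance near zero; a shifted feature distribution (the LOOA case) gives a larger value.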

Fig. 8 – Learning curve on Fold 1 (representative fold) using the SCV approach on Halter data.
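A learning curve such as Fig. 8 can be produced by training on growing subsets of the training data and recording the train and test errors at each size; a sketch using scikit-learn's random forest as a stand-in for the WEKA implementation, on synthetic stand-in features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def learning_curve(X_tr, y_tr, X_te, y_te, sizes):
    """Train on the first m training samples for each m in sizes and
    return (m, training error, test error) triples."""
    rows = []
    for m in sizes:
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X_tr[:m], y_tr[:m])
        rows.append((m,
                     1.0 - clf.score(X_tr[:m], y_tr[:m]),   # training error
                     1.0 - clf.score(X_te, y_te)))          # test error
    return rows

# Synthetic, easily separable stand-in for windowed features
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 14))
y = (X[:, 0] > 0).astype(int)
curve = learning_curve(X[:300], y[:300], X[300:], y[300:], [25, 100, 300])
```

A shrinking train–test gap (as here) is the well-behaved SCV pattern of Fig. 8; a persistent gap as samples are added is the variance problem discussed for Fig. 9.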

sample size. The test error often approaches zero as more training samples are added. This suggests the classifier is well behaved and does not require modification.

For LOOA validation, Fig. 9 presents the learning curves for some folds (animals) using the halter data set. Note that as we add more training data there is a big gap between the training and test set errors when compared against Fig. 8. This suggests the model is suffering from a variance problem. The possible reasons are:

(a) the machine learning models have been over-trained (suffering from an overfitting problem), or
(b) the feature set is not representative of the particular classification problem.

If we compare the learning curve of the SCV approach developed under an identical scenario (i.e. the same type of model and feature set), the classification error of the SCV appears to be significantly smaller than that of the LOOA approach for an identical number of training samples. The SCV based classifier is both accurate and well behaved, suggesting that (i) the model has not overfit the classification problem and (ii) its feature set is representative of the classification problem. This suggests that when motion data from the same steer are included in both the training and test sets, the feature set is representative. However, when the training and test sets consist of different cattle, as in the case of the LOOA validation, the feature set does not appear to generalise well between cattle. The motion pattern variation between the cattle (as represented by the features) can be attributed to differences in the physical movement of individuals and minor differences in IMU positioning (whether this occurs during deployment or because the IMU shifts position after deployment).

In summary, when comparing the classification performance of different sensor placements, the key issue is the similarity of the feature space distribution between training and test animals. When tested on a different animal (LOOA validation), the feature spaces of the training and test sets are not similar, resulting in poor classification performance for all sensor placements. When feature space distribution similarity is enforced using the SCV approach, classification performance on the test set was very high for all sensor placements. Note that the statistical features were good enough to secure high classification accuracy with the SCV approach. Thus simple features can do well if the feature space is similar between training and test sets for any sensor placement (i.e. collar, halter, and ear tag).

6. Conclusions

In this paper, we studied cattle behaviour classification from collar, halter and ear tag sensor data using machine learning algorithms. We conducted a series of experiments using

Fig. 9 – Learning curve on some folds (i.e. animals) using the LOOA approach on Halter data. Here Cxxx represents the ID of the individual animal. A missing square means no test data was available for that animal.

two validation approaches under the same model development conditions: Leave–Out–One–Animal (LOOA) and Stratified Cross Validation (SCV). In LOOA the class and feature distributions of the training and test data sets were found to be quite dissimilar, resulting in poor behaviour classification performance. With SCV, both the class and feature distributions of the training and test data sets were quite similar, resulting in far higher classification accuracy. The source of the performance difference between the SCV and LOOA based classifiers was related to the motion variations across different cattle for the same behaviour.

From the results obtained in this study (Table 3), none of the sensor placements revealed high classification accuracy, because of the lack of correspondence between the feature spaces of the training and test sets (in a practical setup with LOOA). However, when feature space correspondence is enforced (using SCV), all sensor placements lead to high classification accuracy (Table 2). This suggests that the performance of machine learning based behaviour models improves if there is similarity in the cattle motion distribution (i.e. feature space), for any sensor placement (i.e. collar, halter, and ear tag).

In a practical deployment of cattle behaviour models, LOOA is likely to be the most realistic validation approach. It is expected that we would develop models based on historical data collected from a set of animals and then test them on a different set of animals in a new trial. To deal with the feature distribution variance that degrades performance when applying a classifier to new animals, alternative/reduced feature representations and transfer learning approaches need to be investigated. Transfer learning is an active research area in machine learning [26–29] that involves algorithm development for the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. In future, we will investigate such approaches to attempt to reduce the discrepancy between training and test distributions by learning feature representations that offer greater invariance across different cattle (domain adaptation).

R E F E R E N C E S

[1] Seelan SK, Laguette S, Casady G, Seielstad G. Remote sensing applications for precision agriculture: a learning community approach. Remote Sensing Environ 2003;88(1–2):157–69.
[2] McBratney AB, Whelan BM, Shatar T. Variability and uncertainty in spatial, temporal and spatio-temporal crop yield and related data. In: Lake JV, Bock GR, Goode JA, editors. Precision agriculture: spatial and temporal variability of environmental quality. England: Wiley & Sons; 1997. p. 141–60.
[3] Berckmans D. Automatic on-line monitoring of animals by precision livestock farming. In: Proc. ISAH Conference on

Animal Production in Europe: The Way Forward in a Changing World. Saint-Malo, France; 2004. p. 27–31.
[4] Mallekh R, Lagarde JP, Eneau JP, Clotour C. An acoustic detector of turbot feeding activity. Aquaculture 2003;221(1–4):481–9.
[5] Rahman A, Shahriar MS, D'Este C, Smith G, McCulloch J, Timms G. Time–series prediction of shellfish farm closure: a comparison of alternatives. Inform Process Agric 2014;1(1):42–51.
[6] Hellicar A, Rahman A, Smith D, Smith G, McCulloch J, Andrewartha S, et al. An algorithm for the automatic detection of heart rate and variability for an oyster sensor. IEEE Sens J 2015;15(8):4480–7.
[7] Jurdak R, Kusy B, Sommer P, Kottege N, Crossman V, McKeown A, et al. Camazotz: multimodal activity-based GPS sampling. In: Proc. 12th International Conference on Information Processing in Sensor Networks (IPSN). Philadelphia, USA; 2013. p. 67–78.
[8] González LA, Bishop-Hurley G, Handcock RN, Crossman C. Behavioural classification of data from collars containing motion sensors in grazing cattle. Comput Electron Agric 2015;110:91–102.
[9] Allflex. Cow Intelligence. Link: <http://www.scrdairy.com/cow-intelligence>; 2016.
[10] CowManager. The Cow Manager System. Link: <https://www.cowmanager.com/en-us/>; 2017.
[11] Dairymaster. Moo Monitor. Link: <http://www.dairymaster.com/heat-detection/>; 2017.
[12] Shahriar MS, Smith D, Rahman A, Freeman M, Hills J, Rawnsley R, et al. Detecting heat events in dairy cows using accelerometers and unsupervised learning. Comput Electron Agric 2016;128:20–6.
[13] IceRobotics. CowAlert. Link: <http://www.icerobotics.com/products/#cowalert>; 2017.
[14] Greenwood PL, Valencia P, Overs L, Paull DR, Purvis IW. New ways of measuring intake, efficiency and behaviour of grazing livestock. Animal Prod Sci 2014;54:1796–804.
[15] Greenwood PL, Bishop-Hurley GJ, Gonzalez LA, Ingham AB. Development and application of a livestock phenomics platform to enhance productivity and efficiency at pasture. Animal Prod Sci 2016;56:1299–311.
[16] Greenwood PL, Paull DR, McNally J, Kalinowski T, Ebert D, Little B, et al. Use of sensor-determined behaviours to develop algorithms for pasture intake by individual grazing cattle. Crop Pasture Sci 2016. https://doi.org/10.1071/CP16383.
[17] Robert JB, White B, Renter D, Larson R. Evaluation of three-dimensional accelerometers to monitor and classify behaviour patterns in cattle. Comput Electron Agric 2009;67(1–2):80–4.
[18] Gonzalez LA, Bishop-Hurley GJ, Handcock R, Crossman C. Behavioural classification of data from collars containing motion data. Comput Electron Agric 2015;110:91–102.
[19] Smith D, Dutta R, Hellicar A, Bishop-Hurley GJ, Rawnsley R, Henry D, et al. Bag of Class Posteriors, a new multi-variate time series classifier applied to animal behaviour identification. Expert Syst Appl 2015;42(7):3774–84.
[20] Smith D, Little B, Greenwood PL, Valencia P, Rahman A, Ingham AB, et al. A study of sensor derived features in cattle behaviour classification models. In: Proc. IEEE Sensors. Busan, Korea; 2015. https://doi.org/10.1109/ICSENS.2015.7370529.
[21] Rahman A, Smith D, Henry D, Rawnsley R. A comparison of autoencoder and statistical features for cattle behaviour classification. In: Proc. IEEE Joint International Conference on Neural Networks (IJCNN). Vancouver, Canada; 2016. p. 2954–60.
[22] Breiman L. Random Forests. Mach Learn 2001;45(1):5–32.
[23] Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The WEKA data mining software: an update. ACM SIGKDD Explorat Newsl 2009;11(1):10–8.
[24] Forman G, Scholz M. Apples-to-apples in cross-validation studies: pitfalls in classifier performance measurement. ACM SIGKDD Explorat Newsl 2010;12(1):49–57.
[25] Derpanis KG. The Bhattacharyya measure. Link: <http://www.cse.yorku.ca/~kosta/CompVis_Notes/bhattacharyya.pdf>; 2017.
[26] Ben-David S, Blitzer J, Crammer K, Pereira F. Analysis of representations for domain adaptation. In: Proc. Conference on Neural Information Processing Systems (NIPS). Cambridge; 2007. p. 137–44.
[27] Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 2013;35(8):1798–828.
[28] Glorot X, Bordes A, Bengio Y. Domain adaptation for large-scale sentiment classification: a deep learning approach. In: Proc. International Conference on Machine Learning (ICML). Bellevue, Washington; 2011. p. 97–110.
[29] Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010;22(10):1345–59.
