Atanbori, John
(2017)
Automatic classification of flying bird species using computer vision techniques.
PhD thesis, University of Lincoln.
Full text: 28629 Atanbori John - Computer Science - June 2017.pdf
Item Type: Thesis (PhD)
Item Status: Live Archive
Abstract
Bird species are recognised as important biodiversity indicators: they are responsive to
changes in sensitive ecosystems, whilst population-level changes in behaviour are both
visible and quantifiable. They are monitored by ecologists to determine the factors causing
population fluctuations and to help conserve and manage threatened and endangered
species. Every five years, the health of bird populations found in the UK is reviewed
based on data collected from various surveys.
Currently, the techniques used in surveying species include manual counting, bioacoustics
and computer vision. The latter is still under development by researchers. Hitherto,
no computer vision technique has been fully deployed in the field for counting species,
as existing techniques rely on high-quality, detailed images of stationary birds; this makes
them impractical for deployment in the field, where most birds are in flight and
sometimes distant from the camera's field of view. Manual and bioacoustic techniques
are the most frequently used, but they too can become impractical, particularly
when counting densely populated migratory species. Manual techniques are labour intensive,
whilst bioacoustics may be unusable for species that emit little or no
sound.
There is a need for automated systems that identify species using computer
vision and machine learning techniques, specifically for surveying densely populated
migratory species. However, most current systems are not fully automated and use
only appearance-based features to identify species. Moreover, in the field,
appearance-based features such as colour may fade with distance, whilst motion-based features
remain discernible. Thus, to achieve full automation, existing systems will have
to combine both appearance and motion features. The aim of this thesis is to contribute to
this problem by developing computer vision techniques which combine appearance and
motion features to robustly classify species in flight. It is believed that, once this is
achieved and with additional development, the approach will be able to support species
surveys and studies of bird behaviour.
The first focus of this research was to refine appearance features previously used in
other related work for the automatic classification of species in flight. Bird appearance
was described using a group of seven proposed appearance features, which
had not previously been used for bird species classification. The proposed features improved
the classification rate compared to state-of-the-art systems based
on appearance (colour) features alone.
The second step was to extract motion features from videos of birds in flight, which
were used for automatic classification. The motion of birds was described using a group
of six features, which had not previously been used for bird species classification. The
proposed motion features, when combined with the appearance features, improved classification
rates compared with appearance or motion features alone.
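
To make the combination step concrete, the following is a minimal sketch of how per-clip appearance and motion descriptors could be concatenated and fed to a classifier. The synthetic feature values, the class count and the choice of an SVM are illustrative assumptions, not the exact configuration used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips = 200

# Placeholder per-clip descriptors: 7 appearance features and 6 motion features,
# matching the group sizes described above (values here are synthetic).
appearance_feats = rng.random((n_clips, 7))
motion_feats = rng.random((n_clips, 6))
labels = rng.integers(0, 7, n_clips)  # e.g. 7 species classes

# Combine the two descriptor groups by simple concatenation into one feature vector.
combined = np.hstack([appearance_feats, motion_feats])

# An RBF-kernel SVM is an illustrative classifier choice, not necessarily the one
# used in the thesis; any standard classifier can consume the combined vector.
scores = cross_val_score(SVC(kernel="rbf"), combined, labels, cv=5)
print("Mean cross-validated accuracy:", scores.mean())
```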
The classification rates were further improved using feature selection techniques.
There was an increase of 2-6% in correct classification rates across all classifiers,
which may be attributable directly to the use of motion features. The only motion features
selected, irrespective of the selection method used, were the wing beat frequency and
vicinity features, which shows how important these groups of features were to species
classification. Further analysis also revealed specific improvements in identifying species
with similar visual appearance, and that using the optimal motion features improves
classification accuracy significantly.
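
As an illustration of the selection step, the sketch below applies recursive feature elimination to a combined 13-dimensional feature vector. The thesis does not commit to a single selection method in this passage; RFE, the synthetic data and the target feature count here are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
combined = rng.random((200, 13))      # 7 appearance + 6 motion features (synthetic)
labels = rng.integers(0, 7, 200)

# Recursive feature elimination: repeatedly drop the weakest feature according
# to the linear SVM's coefficients until 8 features remain (count is arbitrary here).
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=8)
selector.fit(combined, labels)

# Boolean mask over the 13 combined features; columns flagged True survive selection.
print(selector.support_)
reduced = selector.transform(combined)
```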
We attempted a further improvement in classification accuracy using majority voting,
which aggregates classification results across a set of video sub-sequences and
improved classification rates considerably. The results using the combined features
with majority voting outperform those without majority voting by 3% and 6% on the
seven-species and thirteen-class datasets respectively.
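
The aggregation itself can be summarised in a few lines: classify each sub-sequence of a video independently, then take the most common label as the video-level decision. The helper name, the k-NN classifier and the synthetic data below are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def majority_vote_label(clf, subsequence_features):
    """Predict a label for each sub-sequence of one video, then return the majority vote."""
    per_subseq = clf.predict(subsequence_features)
    return Counter(per_subseq).most_common(1)[0][0]

# Toy illustration: fit on placeholder training data, then classify one video
# that has been split into five sub-sequences of 13-dimensional feature vectors.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 13)), rng.integers(0, 7, 100)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

video_subsequences = rng.random((5, 13))
print(majority_vote_label(clf, video_subsequences))
```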
Finally, a video dataset against which future work can be benchmarked has been
collated. This dataset covers 13 species, allowing effective evaluation of automated
species identification to date and providing a benchmark for further work in this area
of research. The key contribution of this research is a species classification system
that combines motion and appearance features, evaluated against existing
appearance-only methods. This is not only the first work to combine features in this way
but also the first to apply a voting technique to improve classification performance
across an entire video sequence.