The 73rd Annual Meeting of the Ecological Society of Japan (March 2026, Kyoto) | ESJ73 Abstract
Symposium S17-4 (Presentation in Symposium)
Technological advances, including sensor technology, have led to a rapid increase in the volume of field data collected during wildlife surveys. Analyzing image and video data from camera traps requires enormous human effort, highlighting the need for efficient processing, and automated methods based on deep learning have been developed to address this challenge. Animal-borne cameras, which have become increasingly popular in recent years, record the scene from the perspective of the individual wearing the camera as image and video data, enabling diverse observations of the surrounding environment. Unlike fixed camera traps, however, the shooting conditions of animal-borne cameras change drastically with the animal's movement and migration, making it difficult to apply camera trap methods directly. Furthermore, previous studies using animal-borne cameras have focused primarily on localized analyses, such as detecting other individuals or specific plants; even wide-area analyses have typically been restricted to identifying vegetation cover types, leaving much of the recorded information unused. Here we introduce new applications and approaches for processing complex animal-borne camera image data, supported by practical examples.
We equipped Mongolian gazelles (Procapra gutturosa), herbivorous ungulates inhabiting Mongolia, with animal-borne cameras. By applying multiple image processing approaches to the resulting data, we built an automated framework for quantitatively evaluating the surrounding environment (e.g., vegetation and snow cover). Instead of the object detection employed in existing automation and animal-borne camera studies, this framework uses image classification and segmentation. This permits a broader, more multifaceted analysis of each image when evaluating the environment, rather than detecting specific plants and discarding the remaining information. Moreover, the implementation chains multiple machine learning techniques sequentially and simplifies the processing at each stage, enabling the handling of complex data. Specifically, we first extracted images with clearly visible surroundings from the noisy animal-borne camera data. Next, we segmented each image into rough regions such as ground and sky, identifying the ground area. Finally, we classified the ground area into more detailed categories, i.e., vegetation, bare ground, and snow cover. From these results, we calculated values representing the individual's surrounding environment, such as vegetation cover and snow cover. In this way, we constructed a highly reliable framework, achieving over 80% accuracy at each step. The framework not only processes animal-borne camera data efficiently but also allows continuous quantification of the environment surrounding the individual. This environmental information, distinct from that obtained by traditional remote sensing methods for quantitative environmental assessment such as satellite imagery, is expected to provide new insights for wildlife research.
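The final step of the staged pipeline above can be sketched in code. The following is a minimal illustration (not the authors' implementation) of how per-pixel labels from the segmentation and ground-classification stages might be reduced to cover values; the label codes and function name are hypothetical, and the upstream deep learning models are assumed to have already produced the label map.

```python
import numpy as np

# Hypothetical label codes for the per-pixel output of the final
# ground-classification stage (sky pixels come from the earlier
# ground/sky segmentation stage and are excluded from cover values).
SKY, VEGETATION, BARE_GROUND, SNOW = 0, 1, 2, 3

def cover_fractions(label_map: np.ndarray) -> dict:
    """Compute vegetation, bare-ground, and snow cover as fractions
    of the ground area in one image's per-pixel label map."""
    ground = label_map[label_map != SKY]  # keep only ground pixels
    if ground.size == 0:
        # No visible ground (e.g., camera pointed at the sky).
        return {"vegetation": 0.0, "bare_ground": 0.0, "snow": 0.0}
    return {
        "vegetation": float(np.mean(ground == VEGETATION)),
        "bare_ground": float(np.mean(ground == BARE_GROUND)),
        "snow": float(np.mean(ground == SNOW)),
    }

# Toy 2x4 label map: top row is sky; the ground is half vegetation,
# half snow, so vegetation cover and snow cover are each 0.5.
toy = np.array([[SKY, SKY, SKY, SKY],
                [VEGETATION, VEGETATION, SNOW, SNOW]])
print(cover_fractions(toy))
```

Applied frame by frame, such a reduction yields a continuous time series of cover values along the individual's track, which is what distinguishes this approach from a one-off satellite snapshot.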