Methods and Protocols
Below you will find a set of recommended documents to support the development of remote camera or ARU programs. Protocols for ARUs have been developed and tested by the Bioacoustic Unit; remote camera protocols have been developed and tested by the Alberta Biodiversity Monitoring Institute and the Caribou Monitoring Unit. Please contact us at info@wildtrax.ca if you have additional standardized methods or protocols to add.
A zip file containing instructions on programming and setting up a Wildlife Acoustics ARU to record on the Bioacoustic Unit standard schedule. This schedule is designed to optimize the detection of vocalizing species over long deployments so that as many species as possible are captured.
This protocol provides best practices for maintaining your SM4 ARUs used for remote monitoring. The protocol also provides a checklist for inspection.
This protocol provides best practices for maintaining your SM3 ARUs used for remote monitoring. The protocol also provides a checklist for inspection.
This protocol provides best practices for maintaining your SM2 ARUs used for remote monitoring. The protocol also provides a checklist for inspection.
This protocol outlines step-by-step instructions for deploying ARUs in terrestrial environments.
This protocol outlines step-by-step instructions for deploying and retrieving remote cameras and ARUs in forested, grassland, and wetland areas.
Video demonstrating how to set up an ARU in the field.
(Password: ABMI)
Video demonstrating how to set up a remote camera in the field.
(Password: ABMI)
This protocol outlines step-by-step instructions for deploying ARUs in forested, grassland, and wetland areas.
Datasheet to be filled out when ARUs are deployed in the field.
Datasheet to be filled out when ARUs are picked up from the field or serviced at the end of the field season or study duration.
Datasheet to be filled out when remote cameras are retrieved from the field at the end of the field season or study duration.
Provincial metadata standards for remote cameras in Alberta.
This report outlines how the ABMI calculates animal density from remote camera trap image data. It describes the components of this density estimate in detail (how the ABMI collects the necessary information, results, assumptions, and tests), outlines other factors that need to be considered in some study designs, and further discusses the basic assumptions made and how to deal with skewed sampling distributions.
This report addresses several questions about ARU listening effort using a variety of datasets in which multiple recordings from ARUs placed in the same area were listened to for extended periods. The report is structured as a series of questions, each with methods and results specific to that question. A general conclusion based on the answers to these questions is presented at the end, along with a discussion of next steps for settling on a standardized protocol for ARU listening.
Point counts are one of the most commonly used methods for assessing bird abundance. Autonomous recording units (ARUs) are increasingly being used as a replacement for human-based point counts. Previous studies have compared the relative benefits of human- versus ARU-based point count methods, primarily with the goal of understanding differences in species richness and the abundance of individuals over an unlimited distance. What has not been done is an evaluation of how to standardize these two types of data so that they can be compared in the same analysis, especially when there are differences in the area sampled. We compared detection distances between human observers in the field and four commercially available recording devices (Wildlife Acoustics SM2, SM3, RiverForks, and Zoom H1) by simulating vocalizations of various avian species at different distances and amplitudes. We also investigated the relationship between sound amplitude and detection to simplify ARU calibration. We used these data to calculate correction factors that can be used to standardize detection distances of ARUs relative to each other and human observers. In general, humans in the field could detect sounds at greater distances than ARUs, although detectability varied depending on species’ song characteristics. We provide correction factors for four commonly used ARUs and propose methods for calibrating ARUs relative to each other and human observers.
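The sketch below illustrates how a distance-based correction factor of this kind could be derived; the data are simulated, the helper function is invented for illustration, and the paper's actual analysis is more involved. The idea: fit a logistic detection curve against distance for each device, take the effective detection radius (EDR) where detection probability falls to 50%, and compare the areas effectively sampled.

```python
# Sketch: deriving a detection-distance correction factor (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def effective_detection_radius(distances, detected):
    """Distance at which modeled detection probability falls to 0.5."""
    model = LogisticRegression().fit(distances.reshape(-1, 1), detected)
    # Solve p = 0.5  =>  intercept + coef * d = 0
    return -model.intercept_[0] / model.coef_[0][0]

# Simulated playback trials: 1 = detected, 0 = missed.
distances = rng.uniform(10, 300, 500)                # metres from the source
p_human = 1 / (1 + np.exp((distances - 180) / 30))   # humans detect farther
p_aru = 1 / (1 + np.exp((distances - 120) / 30))     # ARU detects less far
human_det = rng.binomial(1, p_human)
aru_det = rng.binomial(1, p_aru)

edr_human = effective_detection_radius(distances, human_det)
edr_aru = effective_detection_radius(distances, aru_det)

# Area-based correction factor: ratio of effectively sampled areas.
correction = (edr_human / edr_aru) ** 2
print(f"EDR human: {edr_human:.0f} m, EDR ARU: {edr_aru:.0f} m, "
      f"count correction: {correction:.2f}")
```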
Automated recognition is increasingly used to extract species detections from audio recordings; however, the time required to manually review each detection can be prohibitive. We developed a flexible protocol called “validation prediction” that uses machine learning to predict whether recognizer detections are true or false positives and can be applied to any recognizer type, ecological application, or analytical approach. Validation prediction uses a predictable relationship between recognizer score and the energy of an acoustic signal but can also incorporate any other ecological or spectral predictors (e.g., time of day, dominant frequency) that will help separate true from false-positive recognizer detections. First, we documented the relationship between recognizer score and the energy of an acoustic signal for two different recognizer algorithm types (hidden Markov models and convolutional neural networks). Next, we demonstrated our protocol using a case study of two species, the Common Nighthawk (Chordeiles minor) and Ovenbird (Seiurus aurocapilla). We reduced the number of detections that required validation by 75.7% and 42.9%, respectively, while retaining at least 98% of the true-positive detections. Validation prediction substantially improves the efficiency of using automated recognition on acoustic data sets. Our method can be of use to wildlife monitoring and research programs and will facilitate using automated recognition to mine bioacoustic data sets.
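A minimal sketch of the validation-prediction idea follows, assuming simulated hits and a simple logistic model (the paper's implementation differs in its details): model the probability that a hit is a true positive from recognizer score and relative sound level, then auto-accept hits above a cutoff chosen so the accepted set still holds at least 98% of the true positives.

```python
# Sketch of "validation prediction" with simulated data (not the
# authors' code): predict true vs. false positives from score and
# relative sound level, then choose an auto-accept cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
score = rng.uniform(40, 100, n)               # recognizer score
level = rng.normal(-40, 10, n)                # relative sound level (dB)
p_true = 1 / (1 + np.exp(-(0.15 * (score - 60) + 0.05 * (level + 40))))
is_true = rng.binomial(1, p_true)             # manual validation labels

X = np.column_stack([score, level])
clf = LogisticRegression().fit(X[:1000], is_true[:1000])  # validated subset
p_hat = clf.predict_proba(X[1000:])[:, 1]     # remaining, unvalidated hits
truth = is_true[1000:]                        # unknown in practice

# Highest cutoff whose auto-accepted set keeps >= 98% of true positives.
for c in np.sort(np.unique(p_hat))[::-1]:
    keep = p_hat >= c
    if truth[keep].sum() >= 0.98 * truth.sum():
        print(f"auto-accept {100 * keep.mean():.0f}% of hits; "
              f"only the remainder needs manual review")
        break
```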
Automated recognition is increasingly used to extract information about species vocalizations from audio recordings. During processing, recognizers calculate the probability of correct classification (“score”) for each acoustic signal assessed. Our goal was to investigate the implications of recognizer score for ecological research and monitoring. We trained four recognizers with clips of Common Nighthawk (Chordeiles minor) calls recorded at different distances: near, midrange, far, and mixed distances. We found distance explained 49% and 41% of the variation in score for the near and mixed-distance recognizers, but only 3% and 6% of the variation for the midrange and far recognizers.
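As a rough illustration of how such a figure is obtained (synthetic numbers, not the study's data), the variance in score explained by distance can be estimated with a simple linear regression:

```python
# Sketch: variance in recognizer score explained by distance
# (hypothetical data; the study reports R^2 of 3-49% depending on
# the distances of the training clips).
import numpy as np

rng = np.random.default_rng(0)
distance = rng.uniform(0, 300, 400)                   # metres to the bird
score = 90 - 0.15 * distance + rng.normal(0, 8, 400)  # near-trained recognizer

slope, intercept = np.polyfit(distance, score, 1)
pred = slope * distance + intercept
r2 = 1 - np.sum((score - pred) ** 2) / np.sum((score - np.mean(score)) ** 2)
print(f"distance explains {100 * r2:.0f}% of score variation")
```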
Wildlife practitioners are increasingly moving to non-invasive and passive monitoring technology, such as autonomous recording units (ARUs) to survey wildlife. Additionally, recent trends in ecological research are to investigate patterns at scales much larger than a single monitoring program is typically capable of (i.e. regional or continental scales). These large-scale studies often require collaboration and the sharing or integration of data from multiple sources to address research questions and objectives over large areas.
Bioacoustic recordings are often used to conduct auditory surveys, in which human listeners identify vocalising animals on recordings. In these surveys, animals are typically counted regardless of their distance from the survey point. When these surveys are carried out in patchy habitat or near edges, detected individuals may frequently occur in a different land-cover type than the survey point itself, which introduces uncertainty regarding species-habitat associations. We propose a method to restrict detections from single microphones to within a pre-specified survey radius. The method uses logistic regression to select a sound level threshold corresponding to the desired distance threshold. We applied this method to acoustic data from the centre of 21 1-ha oil wellsites in northern Alberta.
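A minimal sketch of that thresholding approach follows, assuming simulated detections with known distances (the study's actual calibration is more careful): fit a logistic regression of P(distance ≤ radius | sound level), then keep only detections louder than the level at which that probability crosses 0.5.

```python
# Sketch: truncating single-microphone detections to a survey radius
# (simulated data, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
RADIUS = 100  # desired survey radius in metres

# Training set: detections with known distances and measured levels.
dist = rng.uniform(5, 300, 600)
level = -20 * np.log10(dist) + rng.normal(0, 3, 600)  # dB, spreading loss
within = (dist <= RADIUS).astype(int)

model = LogisticRegression().fit(level.reshape(-1, 1), within)
threshold = -model.intercept_[0] / model.coef_[0][0]  # level where p = 0.5
print(f"keep detections louder than {threshold:.1f} dB")

# Apply to new detections of unknown distance:
new_levels = np.array([-55.0, -38.0, -42.5])
print(new_levels >= threshold)
```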
What is automated species recognition?
Biologists are increasingly using autonomous recording units (ARUs) in the field to determine the presence/absence and abundance of bird species. Unlike humans, these recorders can be left in the field for extended periods of time, allowing data to be collected over much greater spatiotemporal scales. The trade-off, however, is the labour-intensive nature of processing such vast datasets. Here, automated species detection provides a path forward by shifting the burden of sifting through hours of audio recordings from the technician to the computer.
Put simply, automated (acoustic) species recognition is the process of training a computer to recognize, detect, and evaluate the acoustic signature of a target species’ vocalization. For example, a computer model can be trained to recognize the distinctive “who-cooks-for-you” vocalization of the Barred Owl (Strix varia).
Such a model, commonly referred to as a “recognizer”, can then be applied to acoustic datasets to detect signals that resemble the trained model. Recognizers can be built for all types of sounds, from the chucks and whines of a frog to the drumming of a woodpecker.
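As a toy illustration of the underlying idea (synthetic audio, deliberately simplified; real recognizers are far more sophisticated), a template of the target call can be slid across a recording's spectrogram and each position scored by correlation:

```python
# Toy recognition by spectrogram cross-correlation (synthetic audio).
import numpy as np
from scipy import signal

sr = 22050
t = np.linspace(0, 10, 10 * sr, endpoint=False)
audio = 0.01 * np.random.randn(t.size)                       # background noise
call = np.sin(2 * np.pi * 600 * np.arange(0, 0.5, 1 / sr))   # 600 Hz "call"
audio[3 * sr:3 * sr + call.size] += call                     # call at 3 s

f, times, spec = signal.spectrogram(audio, fs=sr, nperseg=512)
# In practice the template comes from separate annotated training clips.
template = spec[:, (times > 2.9) & (times < 3.5)]

# Slide the template across the spectrogram and score each position.
width = template.shape[1]
scores = [np.corrcoef(template.ravel(), spec[:, i:i + width].ravel())[0, 1]
          for i in range(spec.shape[1] - width)]
hit = times[int(np.argmax(scores))]
print(f"best match near {hit:.1f} s")                        # ~3 s
```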
Why are automated recognizers useful?
Recognizers can make processing acoustic datasets more efficient
As mentioned above, by automating the species detection process, datasets can be processed more efficiently. This is especially true for rare or uncommon species because the amount of effort required to identify those species manually can be substantial.
Recognizers provide many different kinds of data
The most basic information that can be obtained from automated recognition is presence/absence or occurrence data. When coupled with estimates of detection probability, occupancy may also be modeled. Recent methods are exploring the possibility of using clustered recording units to localize an individual bird in time and space. In the future, this could lead to estimates of density, particularly for rare or uncommon birds. Automated species recognizers can also provide information on vocalization phenology, calling rates, and intra-specific variation in calls.
How are recognizers built?
There are many approaches to automated acoustic species recognition (summarized briefly in Knight et al., 2017). Generally, the BU has implemented two major approaches to building recognizers: (i) supervised learning algorithms and (ii) neural networks. The former will be discussed below.
By supervised learning algorithm, we mean that the user monitors the computer during the training stage, where the computer is ‘learning’ what a particular species’ vocalization sounds like. This is typically done using software called SongScope, which is fed example annotations of a species’ vocalization to train on. Where high-quality training data are available, recognizers can be very accurate at discriminating signal from noise.
During the training stage, a number of parameters have their values informed by the user (e.g., the number of syllables or the range of permitted frequencies). This is where the ‘supervised’ part of the algorithm comes into play. By setting these parameters using biologically informed priors (e.g., we know that the vocalization usually has seven syllables, or we can measure the extreme frequency ranges recorded for that vocalization), the user guides the computer to the parameter space it will use to search through real datasets. Once preliminary assessments deem the recognizer satisfactory, quality and score thresholds can be set to optimize false-positive and false-negative rates.
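For readers who want a concrete picture, here is a minimal sketch of the general supervised-learning workflow using open tooling rather than SongScope; all data, feature choices, and numbers are invented for illustration.

```python
# Sketch of supervised recognizer training (generic open tooling, not
# SongScope): summarize annotated clips as spectral features and fit a
# classifier separating target calls from other sounds.
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier

sr = 22050
rng = np.random.default_rng(3)

def features(clip):
    """Coarse spectral summary of a short audio clip."""
    f, _, spec = signal.spectrogram(clip, fs=sr, nperseg=256)
    power = spec.mean(axis=1)
    peak = f[np.argmax(power)]                 # dominant frequency (Hz)
    bandwidth = np.sqrt(np.average((f - peak) ** 2, weights=power))
    return [peak, bandwidth, power.max(), power.mean()]

def synth(freq):
    """Stand-in for an annotated clip: a tone in noise."""
    t = np.arange(0, 0.5, 1 / sr)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

# "Annotations": target song near 4 kHz vs. other, lower-pitched sounds.
X = [features(synth(4000 + rng.normal(0, 50))) for _ in range(50)]
X += [features(synth(rng.uniform(500, 3000))) for _ in range(50)]
y = [1] * 50 + [0] * 50

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([features(synth(4000))]))   # expect [1]
```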
How do you validate recognizer results?
"All models are wrong but some are useful". This classic scientific adage applies to recognizers. While recognizers can be very accurate there will always be false positives (recognizer says that a vocalization is species X when it is really species Y) and false negatives (recognizers fails to find species X even though it was vocalizing).
With large acoustic datasets, the recognizer returns so many hits that you can't be 100% sure the computer is correct without checking the vocalizations yourself. This checking is called validation or verification. There are many ways to validate data, and how much you need to validate depends on your question. If you are primarily interested in whether a species is present or absent over an entire season of recordings, much less validation is needed than if you want to count every song given by a species. The BU has a number of papers that discuss ways to reduce validation time that can be found here. Tools like species verification in WildTrax can save time and help you manage and share your data outputs as well.
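One common way to scale validation effort, sketched below with made-up numbers, is to manually validate a random sample of hits within each recognizer-score bin and estimate per-bin precision; high-precision bins may then need little further checking, while low-precision bins get fuller review.

```python
# Sketch: estimating per-bin precision from a validated sample
# (simulated hits; in practice "truth" comes from manual checking).
import numpy as np

rng = np.random.default_rng(5)
scores = rng.uniform(40, 100, 5000)            # recognizer hits
p_true = (scores - 40) / 60                    # higher score, more reliable
truth = rng.binomial(1, p_true)                # unknown without validation

bins = np.digitize(scores, [60, 80])           # three score bins
for b, label in enumerate(["40-60", "60-80", "80-100"]):
    idx = np.flatnonzero(bins == b)
    sample = rng.choice(idx, size=100, replace=False)  # validate 100 hits
    precision = truth[sample].mean()
    print(f"score {label}: estimated precision {precision:.2f} "
          f"({idx.size} hits in bin)")
```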
What else should I consider when building a recognizer?
This flow chart shows the process the BU uses when developing and using recognizers, with links to papers that provide more detail. A key element of the flow chart is the quality of the recordings used to build a recognizer. There are trade-offs between building a recognizer only from high-quality clips recorded very close to the species of interest and building it from recordings of varying quality made at different distances. When trained only with high-quality, close-range clips, a recognizer not only identifies the species but also "estimates" distance, in that it is more likely to find vocalizations close to the ARU and to miss those farther away. Training the recognizer with vocalizations recorded farther from the ARU can improve (though not always) its ability to find a species in a recording, because it is trained to detect the weaker signal-to-noise ratios that come from more distant animals. We prefer recognizers built from high-quality recordings made near the recording device because of the statistical benefits of knowing distance; recognizers built from vocalizations of mixed quality and distance often produce more false positives.
How can I use my recognizer outputs in WildTrax?
As long as the recognizer output you generated contains the appropriate metadata, you can upload the media, create tasks and upload the hits as tags. Using species verification, you can then quickly verify the hits. See the chapters of the Guide on ARU projects and Species verification to learn more.
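As an illustration only, recognizer hits might be packaged as a simple table before upload. The column names below are hypothetical; consult the Guide's ARU-project chapter for the actual metadata fields WildTrax expects.

```python
# Hypothetical sketch: writing recognizer hits to a CSV for upload.
# Column names are illustrative, not WildTrax's required schema.
import csv

hits = [
    {"location": "ABMI-123", "recording": "ABMI-123_20230612_050000.wav",
     "species": "OVEN", "start_s": 41.2, "score": 72.5},
    {"location": "ABMI-123", "recording": "ABMI-123_20230612_050000.wav",
     "species": "CONI", "start_s": 305.7, "score": 88.1},
]

with open("recognizer_tags.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=hits[0].keys())
    writer.writeheader()
    writer.writerows(hits)
```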
Ruffed Grouse (in development)
Northern Goshawk (in development)
Pileated Woodpecker (in development)
Brown-headed Cowbird (in development)
Bay-breasted Warbler (in development)