The widespread availability of relatively cheap commercial off-the-shelf (COTS) camera traps, alongside improvements in digital design and analytical techniques, has seen the use of camera traps grow exponentially over the last 10 years. Despite this popularity, most COTS camera traps have not actually been designed for research, limiting their usefulness for professional wildlife research.
The WiSE project is researching new techniques to enable the remote capture of imagery for monitoring in a rural environment, and has designed a series of stand-alone digital camera traps and a remote platform. A drawback of any video-based monitoring system is that it can capture large volumes of content of little interest, requiring manual selection of interesting material for data extraction and coding, or processing for inclusion in an archive. The generation of large volumes of ‘non-information’ has been seen as the single biggest disadvantage of camera traps as an environmental monitoring tool. We are piloting new adaptive algorithms that react to scenarios of interest and adapt the imagery collected based on the current system status.
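The idea of adapting capture to system status can be illustrated with a minimal sketch. The function name, the mode tiers and every threshold below are assumptions made for illustration, not the deployed WiSE algorithm:

```python
# Hypothetical sketch of an adaptive capture policy: modes, tiers and
# thresholds are illustrative only, not the deployed WiSE algorithm.

def capture_policy(battery_pct, events_last_hour):
    """Choose an imaging mode from the current system status.

    High recent activity and a healthy battery favour video; otherwise
    fall back to stills, or low-rate stills, to conserve power.
    """
    if battery_pct < 20:
        # Nearly flat: capture sparse low-resolution stills only.
        return {"mode": "still", "interval_s": 600, "resolution": "640x480"}
    if events_last_hour >= 5 and battery_pct >= 50:
        # Busy scene and ample power: record short video clips.
        return {"mode": "video", "clip_s": 30, "resolution": "1280x720"}
    # Default: regular still capture.
    return {"mode": "still", "interval_s": 60, "resolution": "1280x720"}
```

In a sketch like this, the same policy function can be re-run whenever the battery reading or event rate changes, so the collected imagery tracks the system status.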
The WiSE platform employs advanced coding methods to detect and compress relevant imagery, and advanced transmission techniques to remotely retrieve the collected imagery. Methods are also explored to allow live access to video and images at reduced quality (e.g. for the public) and on-demand download of high-quality imagery. The WiSE platform combines many of the advantages of “wired” Internet cameras and remote digital camera traps, while also providing a platform for new digital techniques and new applications. The flexibility of the platform represents a significant advance over existing methods.
Video imagery significantly increases the volume of data captured, so there is a clear need to avoid excessive demands on data transmission, storage and analysis. To provide an effective solution, next-generation video compression techniques are being used, together with automatic identification of scene changes that performs on a par with, or better than, the sensors used in existing digital camera trap technology. The system trades quality of capture against the power budget and the available communications resource.
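One way to picture this trade-off is as a table of capture profiles, from which the system picks the best quality that still fits the remaining energy and link budget. All profile names, sizes and energy figures below are invented for the sketch:

```python
# Illustrative quality/power trade-off: profiles are ordered best
# quality first; all sizes and energy costs are assumed numbers.

PROFILES = [  # (name, est_size_mb, est_energy_mwh)
    ("hd_video", 40.0, 120.0),
    ("sd_video", 12.0, 60.0),
    ("hq_still", 2.0, 8.0),
    ("lq_still", 0.4, 3.0),
]

def choose_profile(energy_mwh, link_mb_budget):
    """Return the highest-quality profile affordable within the
    remaining energy and transmission budgets, or None to skip."""
    for name, size_mb, cost_mwh in PROFILES:
        if cost_mwh <= energy_mwh and size_mb <= link_mb_budget:
            return name
    return None  # insufficient resources: skip this capture
```

A scheme of this shape degrades gracefully: as the battery drains or the link budget shrinks, capture quality steps down rather than stopping outright.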
The monitoring system employs cameras connected to a Gateway that coordinates the monitoring and provides backhaul connectivity to the Internet. It intelligently selects relevant imagery for transmission, filtering spurious detections. A network of sensors detects movement and triggers recording of video or still images. Image capture may be tailored to the current battery level, the required quality and the transmission cost. Multiple views of a scene could be fused and interpreted to: (a) remove spurious detections (i.e. false positives), and (b) select significant scenes. The selected imagery is transmitted via the backhaul link, requiring new approaches to transmission and bandwidth management. The methods seek to conserve battery power, reduce transmission cost and provide the required image quality; they are also expected to reduce the need for subsequent human interpretation of the imagery.
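The fusion step (a) can be sketched as a simple coincidence filter: a detection counts as genuine only when a quorum of distinct sensors fire within a short window. The function, window and quorum values are hypothetical, not the fielded design:

```python
# Hypothetical coincidence filter for removing false positives:
# a detection is confirmed only if at least `quorum` distinct sensors
# fire within `window_s` seconds. Parameters are illustrative.

def genuine_events(detections, window_s=5.0, quorum=2):
    """detections: list of (timestamp_s, sensor_id) pairs.
    Returns the start timestamps of confirmed events."""
    detections = sorted(detections)
    confirmed = []
    for t0, _ in detections:
        # Distinct sensors that fired inside the window starting at t0.
        sensors = {s for t, s in detections if t0 <= t <= t0 + window_s}
        if len(sensors) >= quorum:
            confirmed.append(t0)
    return confirmed
```

With two sensors agreeing, the event at time 0 below is kept, while the lone detection at time 100 is discarded as a likely false positive.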
On 6 September 2012, the first prototype camera trap was tested ahead of its first deployment, in an outdoor test location at the University. The salient features of the prototype are:
The camera trap designed for the actual outdoor deployment began testing on 18 February 2013. This version has a custom-designed board to reduce power consumption, and aimed to improve performance by reducing false triggers and redundant pictures.
After its tests and trials, the camera trap was deployed on 24 May 2013 to the upland site. The results enabled comparison with a commercial camera trap, and version 1B was found to be more responsive to events.
The initial Arduino-based versions 1A and 1B are followed by a Raspberry Pi-based version 1C. This design explores the feasibility of integrating digital components and subassemblies to realize an open-source, battery-powered camera trap suitable for a range of deployment scenarios. This approach presented challenges, but has key benefits: the design can be modified, and the software is more flexible and can easily be extended. This flexibility can support new modes of monitoring, e.g. combining multiple sensors to trigger capture. Perhaps most significantly, the on-board storage and processing are shown to be sufficient for real-time processing of the captured images, both to control when images are taken and to help indicate the useful content of an image. A key focus of the work was to reduce the post-processing normally required of users, who must review imagery to eliminate images with no useful content (false triggers).
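On-board filtering of false triggers can be sketched with simple frame differencing. A real deployment would process camera frames; here frames are plain greyscale pixel lists so the sketch stays self-contained, and both thresholds are assumed values:

```python
# Minimal frame-differencing sketch of on-board false-trigger filtering.
# Frames are flat lists of greyscale values (0-255); both thresholds
# are illustrative assumptions, not the deployed settings.

def changed_fraction(prev, curr, pixel_threshold=25):
    """Fraction of pixels whose value changed by more than the threshold."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_threshold)
    return changed / len(curr)

def keep_image(prev, curr, scene_threshold=0.02):
    """Keep the capture only if enough of the scene actually changed."""
    return changed_fraction(prev, curr) >= scene_threshold
```

Discarding near-identical frames at the point of capture is what reduces the volume of content a user must later review by hand.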
There was a need to supplement the coverage of the IP camera at the WiSE 2 site (see below) with camera-equipped sensor nodes. The design of 1C was extended to operate over PoE (Power over Ethernet), and sensors were added to record environmental data.
The extensive digital platform based on the architecture shown above was deployed on 3 July 2013. The system combines still and video capture to monitor a remote location, with sensors and remote processing accessible via satellite Internet access. Night-vision-capable cameras and sensors can be flexibly combined, allowing the triggering methods used to capture imagery to evolve as operational experience is gained.
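One way such evolving trigger logic could be structured is as small predicates over a sensor snapshot that operators combine and swap without rewriting the system. The combinators, rule names and sensor fields (`pir`, `noise_db`, `lux`) below are assumptions for the sketch:

```python
# Illustrative trigger-rule combinators: predicates over a sensor
# snapshot can be AND/OR-combined and replaced as experience is gained.
# All rule names and sensor fields are assumptions for this sketch.

def any_of(*rules):
    """Fire if any sub-rule fires (logical OR)."""
    return lambda snapshot: any(rule(snapshot) for rule in rules)

def all_of(*rules):
    """Fire only if every sub-rule fires (logical AND)."""
    return lambda snapshot: all(rule(snapshot) for rule in rules)

pir_active = lambda s: s["pir"]          # passive-infrared detection
loud = lambda s: s["noise_db"] > 60      # acoustic event
dark = lambda s: s["lux"] < 10           # night conditions

# Example policy: trigger on PIR, or on a loud noise after dark.
trigger = any_of(pir_active, all_of(loud, dark))
```

Revising the policy then means recomposing predicates, e.g. tightening the night-time rule, rather than changing the capture code itself.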
The WiSE 2 satellite system enables effective remote monitoring, but moving it to another deployment site requires major effort. After considering various options, it was decided to develop a Raspberry Pi-based system with day/night cameras and motion sensors, communicating over the mobile network instead of by satellite. A solar panel will be used to sustain the system for a year-long operation. The system design thus enables quick and easy transport to any site of interest to stakeholders.