animl v1.0.0

Animl comprises a variety of machine learning tools for analyzing ecological data. The package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos.

Table of Contents

  1. Camera Trap Classification
  2. Models
  3. Installation

Camera Trap Classification

Below are the steps required for automatic identification of animals within camera trap images or videos.

1. File Manifest

First, build the file manifest of a given directory.

library(animl)

imagedir <- "examples/TestData"

# Create save-file placeholders and working directories
setupDirectory(imagedir)

# Read exif data for all images within base directory
files <- buildFileManifest(imagedir)

# Set Region/Site/Camera names based on folder hierarchy
files <- setLocation(files, imagedir)

# Process videos, extract frames for ID
imagesall <- imagesFromVideos(files, outdir = vidfdir, frames = 5)
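
A quick sanity check on the manifest at this point can catch path or EXIF problems before the slow detection step; a minimal sketch using base R (the Frame column is referenced later in this workflow, other columns vary):

# Inspect the first few rows of the manifest
head(imagesall)

# Total number of images and extracted frames to process
nrow(imagesall)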

2. Object Detection

This produces a dataframe of images, including frames extracted from any videos, to be fed into the classifier. The authors recommend a two-step approach: first use Microsoft's 'MegaDetector' object detector to identify potential animals, then apply a second classification model trained on the species of interest.

A version of MegaDetector compatible with TensorFlow can be obtained from our server.

More info on MegaDetector.

# Load the MegaDetector model
mdsession <- loadMDModel("/path/to/megaDetector/mdv5_.pb")

#+++++++++++++++++++++
# Classify a single image to make sure everything works before continuing
testMD(imagesall, mdsession)
#+++++++++++++++++++++

# Obtain crop information for each image, checkpointing MegaDetector results every 2500 images
mdres <- classifyImagesBatchMD(mdsession, imagesall$Frame, resultsfile = paste0(datadir, mdresults), checkpoint = 2500)

# Add crop information to dataframe
imagesall <- parseMDsimple(imagesall, mdres)
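
Before filtering, it can be useful to tally how many frames fall into each MegaDetector category (MegaDetector's convention is 1 = animal, 2 = person, 3 = vehicle); this is an optional check, not part of the package workflow:

# Count frames per MegaDetector detection category
table(imagesall$max_detection_category)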

3. Classification

Then feed the crops into the classifier. We recommend only classifying crops identified by MD as animals.

# Pull out animal crops
animals <- imagesall[imagesall$max_detection_category == 1, ]

# Separate out the crops MegaDetector labeled as human, vehicle, or empty
empty <- setEmpty(imagesall)


modelfile <- "/Models/Southwest/EfficientNetB5_456_Unfrozen_01_0.58_0.82.h5"

# Obtain predictions for each animal crop
pred <- classifySpecies(animals, modelfile, resize = 456, standardize = FALSE, batch_size = 64, workers = 8)

# Apply human-readable class names to the dataframe
# Classes are stored as a text file
# Returns a table with the number of crops identified for each species
alldata <- applyPredictions(animals,empty,"/Models/Southwest/classes.txt",pred, counts = TRUE)

# Lastly, pool crops to get one prediction per file
alldata <- poolCrops(alldata)
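
At this point you will likely want to persist the results table; a minimal sketch using base R, where the output file name is an assumption:

# Save the pooled, per-file predictions alongside the other outputs
write.csv(alldata, file = paste0(datadir, "predictions.csv"), row.names = FALSE)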

Models

All of our pre-trained classification models can be obtained at [https://]

Geographical regions represented:

  • South America
  • African Savanna
  • Southwest United States

Installation

Requirements

  • R >= 4.0
  • Python >= 3.7
  • TensorFlow >= 2.5

We recommend running animl on a computer with a dedicated GPU.
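To verify that TensorFlow can actually see the GPU from R, a quick check with the tensorflow package (assuming the Python environment described below is already configured):

library(tensorflow)

# Lists the GPU devices visible to TensorFlow; an empty list means CPU-only
tf$config$list_physical_devices("GPU")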

Python

animl depends on Python and, when installed via CRAN, will install its Python package dependencies automatically if they are not already available.
However, we recommend setting up a conda environment using the provided config file.

Instructions to install conda

The file animl-env.yml describes the Python version and the various dependencies with specific version numbers. To create the environment, run the following line in a terminal from within the animl directory:

conda env create -f animl-env.yml

This creates the environment from the specification file and only needs to be done once. The same environment is also required for the Python version of animl.
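
Once the environment exists, you can point R at it with reticulate before loading animl; a sketch, assuming the environment defined in animl-env.yml is named animl-env:

library(reticulate)

# Use the conda environment created above for all Python calls from R
use_condaenv("animl-env", required = TRUE)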

Contributors

Kyra Swanson
Mathias Tobler
Edgar Navarro
Josh Kessler
Jon Kohler
