
Home Page: https://animl.camera


animl-frontend's Introduction

Animl Frontend

A frontend web app for viewing & labeling camera trap data by The Nature Conservancy.

Animl screenshot

Related repos

Overview

Animl is an open, extensible, cloud-based platform for managing camera trap data. We are developing this platform because no existing software tools allow organizations using camera traps to:

  • ingest data from a variety of camera trap types (wireless, SD card based, IP, etc.)
  • systematically store and manage images in a single centralized, cloud-based repository
  • upload custom object detection and species classification ML models and configure automated, assisted-labeling pipelines
  • view images, review ML-assisted labels, and perform manual labeling in a frontend web application
  • perform advanced querying and analysis of camera trap data through an API
  • export ML model training data

This repository contains the frontend web application for viewing and interacting with the camera trap data. It is a React app, using Redux (specifically Redux Toolkit) for state management and Vite for tooling.

Available Scripts

In the project directory, you can run:

npm start

Runs the app in development mode.
Open http://localhost:5173 to view it in the browser.

The page will reload if you make edits.
You will also see any lint errors in the console.

npm run build

Builds the app for production to the build folder.
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.
Your app is ready to be deployed!

npm run build:staging

Builds the app for deployment to the staging environment.
It will request backend resources that are also in their respective staging environments.

npm run deploy-dev & npm run deploy-prod

Builds the app and deploys it to the dev or production environment, respectively.

Prod deployment

Use caution when deploying to production, as the application involves multiple stacks (animl-ingest, animl-api, animl-frontend), and often the deployments need to be synchronized. For major deployments to prod in which there are breaking changes that affect the other components of the stack, follow these steps:

  1. Set the frontend IN_MAINTENANCE_MODE to true (in animl-frontend/src/config.js), deploy to prod, then invalidate its CloudFront cache. This will temporarily prevent users from interacting with the frontend (editing labels, bulk uploading images, etc.) while the rest of the updates are being deployed.

  2. Set ingest-image's IN_MAINTENANCE_MODE to true (in animl-ingest/ingest-image/task.js) and deploy to prod. While in maintenance mode, any images from wireless cameras that happen to get sent to the ingestion bucket will be routed instead to the animl-images-parkinglot-prod bucket so that Animl isn't trying to process new images while the updates are being deployed.

  3. Wait for messages in ALL SQS queues to wind down to zero (i.e., if there's currently a bulk upload job being processed, wait for it to finish).

  4. Backup prod DB by running npm run export-db-prod from the animl-api project root.

  5. Deploy animl-api to prod.

  6. Set IN_MAINTENANCE_MODE back to false in animl-frontend and animl-ingest, deploy both to prod, and invalidate the CloudFront cache.

  7. Copy any images that happened to land in animl-images-parkinglot-prod while the stacks were being deployed to animl-images-ingestion-prod, and then delete them from the parking lot bucket.
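Steps 1 and 6 toggle a flag in the frontend config. A minimal sketch of what that flag looks like (the real config.js contains other settings, and its exact shape may differ):

```javascript
// Sketch of the maintenance-mode flag in animl-frontend/src/config.js.
// Step 1: flip to true and deploy; step 6: flip back to false and redeploy.
const IN_MAINTENANCE_MODE = true;
```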

animl-frontend's People

Contributors

dependabot[bot], jbmarsha, jue-henry, nathanielrindlaub, nevoniuk, oliverroick


animl-frontend's Issues

Give users option to explicitly include/exclude future cameras/deployments/labels into filter selections & views

Currently, when creating a view that includes all currently available cameras & deployments, the filters saved for that view for cameras & deployments will be null, which means that any new cameras/deployments added after that will also be included in that view.

Similarly, if you have ANY deployments or labels toggled off (for example, if you created a view that excluded empty images), any new labels added after the view was created would NOT be included in that view.
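A sketch of the two filter semantics described above (the function and field names are hypothetical, not taken from the actual view schema):

```javascript
// null  => "match everything, including cameras added later"
// array => "match only these, excluding anything added later"
function matchesCameraFilter(filter, cameraId) {
  if (filter.cameras === null) return true; // implicit include-all
  return filter.cameras.includes(cameraId); // explicit allow-list
}
```

Giving users an explicit include/exclude choice would mean persisting the array even when it happens to contain every current camera.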

Make image loading faster

It was abundantly clear after watching how Juli reviews images (very fast, locally), that we need to get image load time down to an absolute minimum.

The first step is to create a medium-resolution version of each image in animl-ingest and use that from the front end (we're currently loading the full-size image, which is way too large).

Other ideas:

  • pre-fetch & cache images in the browser (e.g. this medium post) as soon as we get the next page of images. Also bump up the # of image records fetched per page.
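The pre-fetch idea could look roughly like this (a sketch; `mediumUrl` is a hypothetical field name for the medium-resolution variant, and the real record shape may differ):

```javascript
// Pick which URLs to warm as soon as the next page of image records
// arrives, so the browser cache is hot when the user advances.
function urlsToPrefetch(nextPageRecords, limit = 20) {
  return nextPageRecords.slice(0, limit).map((rec) => rec.mediumUrl);
}

// In the browser, assigning `src` kicks off the request; the response
// lands in the HTTP cache even if the element is never rendered.
function prefetch(urls) {
  return urls.map((url) => {
    const img = new Image();
    img.src = url;
    return img;
  });
}
```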

Build UI/UX for applying "empty" object to image

It would be cumbersome and not intuitive to require users to create a bounding box and give it an "empty" label manually, so it would be nice to provide UI/UX for making that easy to do both in label review mode and via point-and-click UI.

For the point-and-click UI, I think a "mark as empty" button next to the "add object" button might do the trick. Clicking it would automatically add a new object of the same dimensions as the image and give it an "empty" label.

For label review mode, empty labels get validated/invalidated like any other. Maybe if a human invalidates all other labels/objects on an image and then moves away from it, we could display a pop-up that asks "mark as empty?", which users could click or use arrow keys to confirm/reject.

See: tnc-ca-geo/animl-api#31
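The "mark as empty" button could boil down to something like this sketch (the object/label shapes are simplified guesses at the schema, assuming a relative, whole-image bounding box):

```javascript
// Append a full-frame object labeled "empty" to an image. The bbox is
// assumed to be relative coordinates spanning the entire frame; adjust
// to the real animl-api schema.
function markImageEmpty(image) {
  const emptyObject = {
    bbox: [0, 0, 1, 1], // spans the whole image
    locked: false,
    labels: [{ category: 'empty', validation: null, type: 'manual' }],
  };
  return { ...image, objects: [...(image.objects || []), emptyObject] };
}
```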

Implement label review UX

  • Ability to toggle between images and labels for unit of iteration
  • Ability to delete, add, edit labels & bounding boxes
  • Rapid label validation / invalidation with keys?

Fix bbox styling bugs

  • bbox doesn't rescale if you have the loupe open and close the filter panel
  • bbox has a higher z-index than modals

Fix progress bar bug & figure out how to visualize images' current position within a view

Currently, the percentage progress on the progress bar at the bottom of the loupe is calculated with (focusIndex.image / imageCount) * 100.

This works fine if the imageCount isn't changing as we review images. However, each time we fetch a new page, we get a new count back for total # of images that match. So if the 'reviewed' filter is off, for example, the count will get lower and lower as we review images, causing situations in which the focusIndex.image is larger than the remaining imageCount. The count could also increase if (a) new images are added while reviewing that match the filters or (b) a filter is on for a specific label, and the user is adding that label to images as they go.

Using the focusIndex.image to figure out where you are within the range of images in a view is a little dicey in general, especially if we implement the ability to jump to the middle of a list of images and move both forward and backward from there.

Some possible solutions:

  • On a fresh view/filter change, store the total image count returned by the first page somewhere in state with a name like originalImageCount, and use that instead of the count that might change with each new page fetch to calculate progress. However, the focusIndex might get out of sync with this too if the true count is increasing over the course of the review.
  • Maybe only show the progress bar if the 'reviewed' filter is off, and use the progress bar to show the # of images left to review? Could grab the originalImageCount as described above and compare it to the true count. That's the safest option in my mind but not ideal as progress would only change after each new page load...
  • Really what we're looking for is some way to let a user know where they are relative to the whole list of images returned by the view. For every image, it would be great to know how many images came before it within that view, and how many come after. You could calculate that on the back end each time a new page is fetched (i.e., get the page, grab its first image, build a filter that is the same as the view's plus only images before the value of that image's paginated field, then do the same for the last image of the page looking forwards), return those counts, and from there keep track of an index relative to each image's page on the front end... but that seems overly complicated.

This requires more thought.
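The first option above might be sketched like this (a sketch only; the source names just focusIndex.image and imageCount, so originalImageCount and the clamping behavior are assumptions):

```javascript
// Compute progress against the count snapshotted from the first page of a
// fresh view/filter change, so later page fetches (which can return a
// shrinking or growing count) don't make the bar jump around.
function progressPercent(focusImageIndex, originalImageCount) {
  if (!originalImageCount) return 0;
  // focusIndex can exceed a shrinking live count, but never exceed 100%
  // of the snapshot taken at the start of the review session.
  return Math.min(100, ((focusImageIndex + 1) / originalImageCount) * 100);
}
```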

Make views shareable/router

I imagine that as interesting images come in, people will want to share and discuss them in meetings, IM, and email. It would be great if there were an easy way to reference them so people can look at them together.
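Making views shareable would probably mean encoding the selection into the URL. A minimal sketch with URLSearchParams (the route and parameter names are invented, not Animl's actual routing scheme):

```javascript
// Build a shareable link for a given view + focused image.
function shareableUrl(base, { viewId, imageId }) {
  const params = new URLSearchParams();
  if (viewId) params.set('view', viewId);
  if (imageId) params.set('img', imageId);
  return `${base}?${params.toString()}`;
}
```

On load, the router would parse the same params back out and restore the view and focused image.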

Fetch users from Cognito User Pool

It would be useful for some features to have a list of other users in a project (e.g., so when configuring email alerts, admins could select from a list of other user's email addresses, or know who has access to their project and what their permissions levels are).

I gave this a shot, but in order to get a list of users through AWS Amplify, you need to set up the Amplify CLI and use it to set up an "Admin Actions" REST endpoint to query for things like listUsers. It seemed cumbersome, and there was no obvious way to make this setup into a CloudFormation template, so I abandoned it.

The alternative would be to use the AWS SDK for JavaScript's Cognito Identity Provider Client, but it seemed awkward to have to authenticate two clients. Perhaps there is a way to pass the tokens or credentials returned by Amplify's Auth workflow into a new CognitoIdentityProviderClient()? I don't know; requires more investigation.
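Whichever client ends up fetching the list, the shape of Cognito's ListUsers response (Users, each with a Username and an Attributes name/value array) is fixed, so flattening it for a project-users dropdown is straightforward. A sketch (the helper name and output shape are hypothetical):

```javascript
// Flatten a Cognito ListUsers response into { username, email } records
// for display in the UI (e.g., an email-alert recipient picker).
function toProjectUsers(listUsersResponse) {
  return (listUsersResponse.Users || []).map((u) => ({
    username: u.Username,
    email: (u.Attributes || []).find((a) => a.Name === 'email')?.Value ?? null,
  }));
}
```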

Make spinners consistent

And maybe show a message below the spinner to indicate what it's doing (e.g. "Saving view", "Fetching images", etc.). The Things Stack has a nice implementation of that.

Add map UI

Having a map would be nice for visualizing camera deployments, but it also could be really helpful to allow users to draw an area on the map & save that GeoJSON polygon as a filter in a view. It would partially address the chicken-or-the-egg issue of adding new cameras but not having a record of them yet in the DB, so you can't immediately include them in views or route their images through inference pipelines: e.g., I could draw a box around SCI, save it as a view with an automation rule, and any new cameras added that have locations set and fall within that box would automatically get routed through that automation rule.

I believe MongoDB supports geospatial queries.
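MongoDB does support geospatial queries (via $geoWithin on 2dsphere-indexed fields). A sketch of what the saved-polygon filter could translate to (the location field name and the coordinates are illustrative assumptions, not the actual schema):

```javascript
// A GeoJSON polygon drawn on the map and saved with the view...
const boundingPolygon = {
  type: 'Polygon',
  coordinates: [[
    [-119.93, 33.94], [-119.51, 33.94],
    [-119.51, 34.09], [-119.93, 34.09],
    [-119.93, 33.94], // GeoJSON rings must close: first == last position
  ]],
};

// ...becomes a $geoWithin clause when querying deployments or images.
function geoFilter(polygon) {
  return { location: { $geoWithin: { $geometry: polygon } } };
}
```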

Misc. label review improvements

  • if you add an object to an image that has an empty label object, automatically invalidate the empty label
  • Add "e" as a hotkey for marking as empty?
  • if both rodent and fox are predicted, put fox first in array (it's more common)
  • Fix adding-object bug: it's hard to create a new object at the right-hand edge of the image frame (releasing the mouse outside the image frame should create the object, but it doesn't, and sometimes the object gets created off the frame entirely)
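The first bullet (auto-invalidating the "empty" label when a real object is added) could be a small reducer-style helper. A sketch with guessed schema fields:

```javascript
// When a user adds a new object, invalidate any existing "empty" labels
// on that image — an image can't be both empty and contain an object.
function addObjectAndInvalidateEmpty(image, newObject, userId) {
  const objects = (image.objects || []).map((obj) => ({
    ...obj,
    labels: obj.labels.map((lbl) =>
      lbl.category === 'empty'
        ? { ...lbl, validation: { validated: false, userId } }
        : lbl
    ),
  }));
  return { ...image, objects: [...objects, newObject] };
}
```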

Potential Refactors

  • Serverless to SAM
  • Stitches to Styled Components
  • Redux to a GraphQL client
  • regular JS to TypeScript

Make image loading less jarring

Instead of clearing images in the table when we're loading new ones (e.g., after a filter change), show a semi-opaque overlay with a loading spinner.

Add refresh button

Especially helpful b/c current active filters aren't baked into the URL (yet), so refreshing to get new images causes you to lose all of your filter selections.

Provide way for users to quickly view temporal context of images

When demoing the app for Juli, Will, and Lara, all of them said that being able to see images from the same camera that were taken just before and just after any given photo was often very helpful for ID'ing animals. The issue is, when using Animl, a lot of the time users might not be reviewing a single camera's full image set in perfect chronological order (they might have multiple cameras toggled on, filters set to weed out empties, etc.).

I think we need to think through a UX that allows users to leverage the filters, but also gives them the option to quickly see images that were created just before and after any given image that's currently focused in the loupe. Right now I'm imagining a button in the loupe akin to the "live photo" button on iPhone's photo app: maybe when users click and hold this button, we query for that camera's images that were created within a ~5 min buffer before and after the focused image, and while holding the "context" button the arrow keys don't advance the focused image and instead allow users to navigate the contextual images. The UI should change somehow to make it clear we're in context-inspection mode, maybe some kind of carousel?
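The press-and-hold "context" query could be as simple as a time-window filter on the focused image's camera, ignoring all other active filters. A sketch (field and parameter names are illustrative):

```javascript
// Build a filter for images from the same camera taken within
// `bufferMins` before and after the focused image.
function contextWindowFilter(focusedImage, bufferMins = 5) {
  const t = new Date(focusedImage.dateTimeOriginal).getTime();
  const buffer = bufferMins * 60 * 1000;
  return {
    cameraId: focusedImage.cameraId,
    createdStart: new Date(t - buffer).toISOString(),
    createdEnd: new Date(t + buffer).toISOString(),
  };
}
```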

Display errors to user

  • display message to users when their filters return 0 items (distinguish between 0 images matching query vs error)
    • nonefound images display
    • nonefound filters display (labels, deployments)
    • nonefound cameras display (in cameras admin modal)
    • nonefound projects
    • nonefound models (in automation rule form)
  • display message to users when the image passed into the URL to fetch and pre-focus isn't found

Can add but not remove "mark empty"

I have been playing around with the interface somewhat more intensively and found a combination of labels and "empty" that I was not able to correct. I would like to send you a link to the photo, but that is a little difficult; see issue #49.

Improve label review styling

  • make label coloring more intelligent
  • Improve styling of label pills/objects in table - make difference between locked/unlocked label pills more clear
  • Improve styling of bounding boxes - make currently focused bounding box more clear

Add “filter summaries” to filter headers

E.g.: "5 cameras [x]" (the [x] button clears the filters). For date ranges, if the end date is today, use "last x days" as the text.

At the very least, or perhaps as an interim solution, add "select all" and "remove all" buttons to each filter section. I'd say that's pretty high priority from a UX perspective.
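The date-range summary text could be derived like this (a sketch; the app may well use a date library instead, and the fallback format is an assumption):

```javascript
// Summarize a date-range filter: "last x days" when the range ends today,
// otherwise an explicit range string.
function summarizeDateRange(start, end, today = new Date()) {
  const DAY = 24 * 60 * 60 * 1000;
  const sameDay = (a, b) => a.toDateString() === b.toDateString();
  if (sameDay(end, today)) {
    const days = Math.round((end - start) / DAY);
    return `last ${days} days`;
  }
  return `${start.toLocaleDateString()} to ${end.toLocaleDateString()}`;
}
```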
