tnc-ca-geo / animl-frontend
A frontend web app for viewing & labeling camera trap data by The Nature Conservancy.
Home Page: https://animl.camera
License: Other
I imagine that as interesting images come in, people will want to share and discuss them in meetings, IM, and email. It would be great if there were an easy way to reference specific images so everyone can look at them together.
It would be useful for some features to have a list of the other users in a project (e.g., so when configuring email alerts, admins could select from a list of other users' email addresses, or know who has access to their project and what their permission levels are).
I gave this a shot, but in order to get a list of users through AWS Amplify, you need to set up the Amplify CLI and use it to stand up an "Admin Actions" REST endpoint to query for things like listUsers. That seemed cumbersome, and there was no obvious way to fold the setup into a CloudFormation template, so I abandoned it.
The alternative would be to use the AWS SDK for JavaScript's Cognito Identity Provider Client, but it seemed awkward to have to authenticate two clients. Perhaps there is a way to pass the tokens or credentials returned by Amplify's Auth workflow into a new CognitoIdentityProviderClient()? I don't know; it requires more investigation.
Mark an image as "needs review" if there are unlocked objects on it? I.e., all objects would need to be locked to consider the image 'reviewed'.
I have been playing around with the interface somewhat more intensively and found a combination of labels and 'empty' that I was not able to correct. I would like to send you a link to the photo, but that is a little difficult; see issue #49.
Especially helpful because currently active filters aren't baked into the URL (yet), so refreshing to get new images causes you to lose all of your filter selections.
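Baking filters into the URL could be as simple as serializing the active filter state into query params. A minimal sketch, assuming a hypothetical filter shape (these field names are guesses, not the app's actual state):

```javascript
// Serialize active filters into a query string so a refresh (or a shared
// link) can restore them. Null filters (meaning "include all") are omitted.
function filtersToQueryString(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value !== null && value !== undefined) {
      params.set(key, Array.isArray(value) ? value.join(',') : String(value));
    }
  }
  return params.toString();
}

filtersToQueryString({ cameras: ['cam1', 'cam2'], reviewed: false, labels: null });
// => "cameras=cam1%2Ccam2&reviewed=false"
```

On load, the inverse (parsing the query string back into filter state) would restore the selections before the first image fetch.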
Even I have trouble keeping track of what they are.
Having a map would be nice for visualizing camera deployments, but it could also be really helpful to allow users to draw an area on the map & save that GeoJSON polygon as a filter in a view. It would partially address the chicken-or-the-egg issue of adding new cameras that don't yet have a record in the DB, which means you can't immediately include them in views or route their images through inference pipelines. E.g., I could draw a box around SCI, save it as a view with an automation rule, and any new cameras that are added with locations falling within that box would automatically get routed through that automation rule.
I believe MongoDB supports geospatial queries.
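MongoDB does support geospatial queries on GeoJSON fields (with a 2dsphere index). A sketch of what the saved-polygon filter might look like; the field name and the polygon coordinates here are hypothetical, not the actual schema:

```javascript
// A user-drawn GeoJSON polygon (roughly boxing Santa Cruz Island).
// Note that the first and last positions of the ring must match.
const sciBoundingBox = {
  type: 'Polygon',
  coordinates: [[
    [-119.93, 33.94],
    [-119.51, 33.94],
    [-119.51, 34.08],
    [-119.93, 34.08],
    [-119.93, 33.94],
  ]],
};

// MongoDB filter matching documents whose deployment location falls
// inside the polygon ("deployment.location" is an assumed field name).
const geoFilter = {
  'deployment.location': {
    $geoWithin: { $geometry: sciBoundingBox },
  },
};
```

The same filter object could be stored on a view and applied both when querying images and when deciding whether a newly added camera matches the view's automation rule.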
Currently, when creating a view that includes all currently available cameras & deployments, the filters saved for that view for cameras & deployments will be null, which means that all new cameras/deployments added afterward will be included in that view.
Conversely, if you have ANY deployments or labels toggled off (for example, if you created a view that excluded empty images), any new labels added after the view was created would NOT be included in that view.
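The null-filter semantics described above can be sketched as a tiny matching helper (a hypothetical illustration, not the actual app code):

```javascript
// null filter => "include everything", including items added later;
// an explicit array => a selection frozen at view-creation time.
function matchesFilter(filterValues, itemId) {
  if (filterValues === null) return true;
  return filterValues.includes(itemId);
}

// A view saved with all cameras selected stores null...
matchesFilter(null, 'new-camera-123');               // => true
// ...but a view with any camera toggled off stores an explicit list.
matchesFilter(['cam-a', 'cam-b'], 'new-camera-123'); // => false
```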
This would be worth holding off on until we have some feedback from users & can work with them to design an ergonomic label review UX.
AWS tutorial on how to upload to S3 using the JS SDK (S3.ManagedUpload) and using Cognito for auth.
react-window, react-window-infinite-loader, and react-virtualized-auto-sizer to only render nodes that are visible and load new images just-in-time.
And here's how to use react-window with react-table using virtualized rows.
A relevant GitHub issue (see the bottom post for how to implement virtual scrolling with traditional table elements rather than divs).
When demoing the app for Juli, Will, and Lara, all of them said that being able to see images from the same camera that were taken just before and just after any given photo was often very helpful for ID'ing animals. The issue is, when using Animl, a lot of the time users might not be reviewing a single camera's full image set in perfect chronological order (they might have multiple cameras toggled on, filters set to weed out empties, etc.).
I think we need to think through a UX that allows users to leverage the filters, but also gives them the option to quickly see images that were created just before and after any given image that's currently focused in the loupe. Right now I'm imagining a button in the loupe akin to the "live photo" button on iPhone's photo app: maybe when users click and hold this button, we query for that camera's images that were created within a ~5 min buffer before and after the focused image, and while holding the "context" button the arrow keys don't advance the focused image and instead allow users to navigate the contextual images. The UI should change somehow to make it clear we're in context-inspection mode, maybe some kind of carousel?
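The "context" query described above could be sketched as a helper that computes the ±5 minute window around the focused image (field names here are assumptions):

```javascript
const CONTEXT_BUFFER_MS = 5 * 60 * 1000; // ~5 min buffer on each side

// Given the focused image, build the query parameters for fetching the
// same camera's images created just before and just after it.
function contextWindow(focusedImage, bufferMs = CONTEXT_BUFFER_MS) {
  const t = new Date(focusedImage.dateTimeOriginal).getTime();
  return {
    cameraId: focusedImage.cameraId,
    createdStart: new Date(t - bufferMs).toISOString(),
    createdEnd: new Date(t + bufferMs).toISOString(),
  };
}

const win = contextWindow({
  cameraId: 'cam-42',
  dateTimeOriginal: '2021-06-01T12:00:00.000Z',
});
// win.createdStart === '2021-06-01T11:55:00.000Z'
// win.createdEnd   === '2021-06-01T12:05:00.000Z'
```

While the "context" button is held, arrow keys would page through the results of this query instead of advancing the focused image.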
So that users know what it is...
Easy lift.
And maybe show a message below the spinner to indicate what it's doing (e.g. "Saving view", "Fetching images", etc.). The Things Stack has a nice implementation of that.
Depends on tnc-ca-geo/animl-api#3
Currently, the percentage progress on the progress bar at the bottom of the loupe is calculated with (focusIndex.image / imageCount) * 100.
This works fine if the imageCount isn't changing as we review images. However, each time we fetch a new page, we get a new count back for the total # of images that match. So if the 'reviewed' filter is off, for example, the count will get lower and lower as we review images, causing situations in which the focusIndex.image is larger than the remaining imageCount. The count could also increase if (a) new images are added while reviewing that match the filters, or (b) a filter is on for a specific label and the user is adding that label to images as they go.
Using the focusIndex.image to figure out where you are within the range of images in a view is a little dicey in general, especially if we implement the ability to jump to the middle of a list of images and move both forward and backward from there.
Some possible solutions:
- Store an originalImageCount and use that, instead of the count that might change with each new page fetch, to calculate progress. However, the focusIndex might get out of sync with this too if the true count is increasing over the course of the review.
- Store the originalImageCount as described above and compare it to the true count. That's the safest option in my mind, but not ideal, as progress would only change after each new page load... This requires more thought.
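The first option above, with a clamp so a shrinking live count can't push progress past 100%, might look like this (names are assumptions, a sketch rather than the actual implementation):

```javascript
// Calculate progress against the count cached from the first page fetch,
// clamped to [0, 100] so a stale or shrinking count can't overflow the bar.
function progressPercent(focusIndex, originalImageCount) {
  if (!originalImageCount) return 0; // avoid divide-by-zero on empty views
  const pct = (focusIndex / originalImageCount) * 100;
  return Math.min(100, Math.max(0, pct));
}

progressPercent(25, 100);  // => 25
progressPercent(120, 100); // => 100 (focusIndex outran a shrinking count)
```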
It was abundantly clear after watching how Juli reviews images (very fast, locally), that we need to get image load time down to an absolute minimum.
The first step is to create a medium-resolution version of each image in animl-ingest and use that from the front end (we're currently loading the full-size image, which is way too large).
Other ideas:
e.g., select a bunch of images and add a label to every object in them, or validate them all.
Instead of clearing images in the table when we're loading new ones (e.g., when a filter changes), create a semi-opaque overlay and show a loading spinner.
Can't figure out what's causing that.
It would be cumbersome and not intuitive to require users to create a bounding box and give it an "empty" label manually, so it would be nice to provide UI/UX for making that easy to do both in label review mode and via point-and-click UI.
For the point-and-click UI, I think a "mark as empty" button next to the "add object" button might do the trick. Clicking it would automatically add a new object of the same dimensions as the image and give it an "empty" label.
For the label review mode, empty labels get validated/invalidated like any other. Maybe if a human invalidates all other labels/objects on an image and then moves away from it, we could display a pop-up that asks "mark as empty?", which users could click or use arrow keys to confirm/reject.
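The "mark as empty" action could be sketched as creating a full-frame object with an "empty" label. The object shape and the relative-coordinate bbox format here are assumptions, not the actual schema:

```javascript
// Add a new object spanning the whole image with an "empty" label.
// Bbox is assumed to be [ymin, xmin, ymax, xmax] in relative coordinates,
// so [0, 0, 1, 1] covers the full frame.
function markAsEmpty(image) {
  return {
    ...image,
    objects: [
      ...(image.objects || []),
      {
        bbox: [0, 0, 1, 1],
        locked: false,
        labels: [{ category: 'empty', validation: null }],
      },
    ],
  };
}

const updated = markAsEmpty({ _id: 'img-1', objects: [] });
// updated.objects.length === 1
// updated.objects[0].labels[0].category === 'empty'
```

The same helper could back both the point-and-click "mark as empty" button and the confirm step in the label-review pop-up.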
Add obvious display somewhere for total image count that matches the current set of filters.
Think about what we'd want to learn & track
Sometimes you click one deployment checkbox and multiple get unchecked/checked.
E.g.: "5 cameras [x]" (an [x] button to clear the filters). For date ranges, if the end date is today, use "last x days" as the text.
At the very least, or perhaps as an interim solution, add "select all" and "remove all" buttons to each filter section. I'd say that's pretty high priority from a UX perspective.
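The filter-chip text described above could be sketched as a small formatter (a hypothetical helper; the filter shape is an assumption):

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Summarize a filter as short chip text: "5 cameras" for camera filters,
// "last N days" for a date range ending today, otherwise the raw range.
function chipText(filter, today = new Date()) {
  if (filter.type === 'cameras') return `${filter.ids.length} cameras`;
  if (filter.type === 'dateRange') {
    const end = new Date(filter.end);
    if (end.toDateString() === today.toDateString()) {
      const days = Math.round((end - new Date(filter.start)) / DAY_MS);
      return `last ${days} days`;
    }
    return `${filter.start} - ${filter.end}`;
  }
  return '';
}

chipText({ type: 'cameras', ids: ['a', 'b', 'c', 'd', 'e'] }); // => "5 cameras"
```

Each chip would also render the [x] button that clears its filter section.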