
kitware / minerva

Minerva: client/server/services for analysis and visualization

License: Apache License 2.0

JavaScript 12.90% Python 84.26% CMake 1.83% HTML 0.13% Mako 0.82% CSS 0.08%
python kitware visualization analysis girder gis geovisualization scientific earthscience science

minerva's Introduction


Introduction

Minerva is a geospatial application and framework that enables users to upload, visualize, and analyze small to large geospatial datasets from a web interface. Minerva builds on modern web and database technologies such as WebGL and the NoSQL database MongoDB, and is designed for big-data, cloud-enabled analysis and visualization.

Highlights of Minerva include the web-enabled data management platform Girder, used to manage data, metadata, and sessions, and the high-performance geospatial visualization library GeoJS, which provides fast, interactive visualization of geospatial data on a map. Minerva's backend uses open-source tools such as Gaia, GDAL, Shapely, and Fiona for geospatial data I/O, filtering, and spatial analysis.

Minerva Documentation

Documentation for Minerva can be found at http://minervadocs.readthedocs.org.

Contact

Contact [email protected] for questions regarding using Minerva and deploying it in a cloud environment.

License

Copyright 2017 Kitware Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

minerva's People

Contributors

aashish24, brianhelba, cjh1, danlamanna, danlipsa, dorukozturk, ebradyjobory, jbeezley, justincampbell, kotfic, manthey, matthewma7, mbertrand, mgrauer


minerva's Issues

Tie available analyses into a group

Analyses will live in an "analysis" folder under a "minerva" collection. These should be available to everyone in a specific group, most likely named "minerva".

From #29.

change defaults of folder search methods in minerva_utility to create=True

Currently the default for finding the named folders in the minerva_utility methods is create=False, meaning no folder is created if one isn't found. This should be changed to create=True so that, for example, when a user creates a Dataset, a user/minerva/dataset folder exists for them.

If the system allows them to register a user, they should probably be allowed to create datasets and sources.

If we want to prevent users from taking up resources, that can be handled with quotas or by preventing them from registering. With the current default, normal Minerva operations simply fail for new users.
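The find-or-create behavior being proposed can be sketched independently of Girder's actual model API (the `store` dict and `find_folder` helper below are illustrative stand-ins, not Minerva code):

```python
def find_folder(store, parent, name, create=False):
    """Return the child folder of ``parent`` named ``name``.

    With create=False (the current minerva_utility default) a missing
    folder yields None and downstream operations fail; with create=True
    the folder is made on demand.  ``store`` is a stand-in for the real
    database: a dict mapping (parent, name) -> folder record.
    """
    key = (parent, name)
    if key not in store:
        if not create:
            return None
        store[key] = {"parent": parent, "name": name}
    return store[key]
```

With create=True as the default, a new user's user/minerva/dataset folder would simply be created the first time it is looked up.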

create a bsve analysis json

We need a folder somewhere for storing the JSON of analyses, so that the import-analysis utility script can be run on them.

Probably at the top level of the project, as a sibling to utility, create an analyses dir.

Then create a specific one for bsve. When the import-analysis script is run on it, it will create a bsve search analysis item in the minerva collection under the analysis folder. The script may need to be updated so that the item gains the "minerva" metadata key (so the analysis shows up in the analysis panel of the client UI), along with some specifics for the bsve analysis.

cache travis dependencies

info from this comment

There are a few options to handle it:

  • Do what I did for romanesco and serve the dependencies as a tarball at data.kitware.com.
  • Use something like mason which does essentially the same thing but better. This will probably require a minor change to push builds to girder rather than s3.
  • Use CircleCI, which allows you to persist directories between builds. I have used it as part of cdatweb testing to cache a very large Docker image, and it works pretty well.

Clean up library packaging and inclusion

Right now the libraries are a mess on the input side: some come from CDNs, some are included in the project source, and some come from npm. They are also a mess on the output side: some are packaged together and some separately, though there are tricky issues here in terms of loading order. There are also libs and CSS files from experiments that probably should not be included in a released version.

Before a first release, this should be addressed.

create csv => geojson conversion on the server side

Currently this is only implemented in JavaScript; the web client converts CSV to GeoJSON. The CSV mapper is probably broken now as well.

A better approach would be a unified streaming GeoJSON creator on the server that accepts JSON or CSV along with a mapping and produces GeoJSON. This further depends on design decisions about storing features in Mongo: do we store them as JSON or GeoJSON?
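A minimal sketch of such a server-side converter, assuming point data with explicit coordinate columns (the field names and mapping shape are assumptions for illustration, not Minerva's API):

```python
import csv
import io


def csv_to_geojson(csv_text, lon_field="longitude", lat_field="latitude"):
    """Convert CSV rows with coordinate columns into a GeoJSON
    FeatureCollection.  The lon/lat field names act as the "mapping";
    all remaining columns become feature properties.
    """
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        lon = float(row.pop(lon_field))
        lat = float(row.pop(lat_field))
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": row,
        })
    return {"type": "FeatureCollection", "features": features}
```

A real implementation would stream features out row by row rather than building the full list in memory, which matters for the large datasets Minerva targets.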

Clarify client/server/services in Readme

As this repo is getting more complex, we should standardize on some terms and clean up the Readme.

  • client: the web client of Minerva, written in Backbone.js
  • server: the server-side REST endpoints of Minerva, written in Python and based on Girder
  • services: the set of services that host and process datasets, including:
    * the GeoNode/GeoServer stack
    * Celery tasks for downloading and processing data
    * GeoNode templates for specific tasks
    * provisioning scripts with Ansible

Permissions issue for non-administrative users

Creating a non-administrative user in Minerva appears to trigger a permissions error. As far as I can tell, the user does not have access to the session folder that is created for them.

Failed to load resource: the server responded with a status of 403 (Forbidden)
    :8081/static/built/app.min.js:1605
403 Forbidden {"message": "Read access denied for folder 55c9f03cf4b1493c06784ba3 (user 55c9f03cf4b1493c06784b9f).", "type": "access"}
    defaults.error @ :8081/static/built/app.min.js:1605

Refactor isWmsSource in DatasetModel once SourceListView is created

Most likely the thing to do is remove isWmsSource and have something specific to WmsSource and WmsDataset render correctly in the appropriate list (Source or Dataset, respectively).

For now this function is necessary because Sources and Datasets are listed in the same view.

add local job endpoint and romanesco endpoint for analyses

Currently under minerva_analysis there is a bsveSearchAnalysis endpoint. This should be refactored to a local job endpoint and a romanesco endpoint. The client should just pass in what is needed for the job, as this endpoint does nothing specific for the particular analysis.

Update jsonpath js dependency

The current version is 0.10.0; update to v0.11.0, which renames the eval function to evaluation so we no longer need jshint exceptions. Note that v0.11.0 isn't published to npm yet.

refactor server side geojson dataset creation

  • should have dataset_type set to geojson
  • original_files should be a single element array of the geojson
  • source_id can be empty
  • dataset endpoint should be modeled on wms_dataset
  • dataset endpoint should return full item
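Taken together, the bullets above suggest item metadata shaped roughly like the following (a hypothetical sketch; any key or value not named in the bullets is a guess):

```python
# Hypothetical "minerva" metadata for a refactored geojson dataset item.
# Field names follow the bullets above; the file-record shape is assumed.
geojson_dataset_meta = {
    "minerva": {
        "dataset_type": "geojson",
        # single-element array holding the geojson file record
        "original_files": [{"name": "upload.geojson"}],
        # may be empty, since an uploaded geojson has no source
        "source_id": "",
    }
}
```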

create new geojson dataset from upload

This may be the trickiest part of this milestone.

Do we allow the user to upload a file, then after the item is created, if the type of file is geojson, "upgrade" the item to a geojson dataset? If so, need to ensure the endpoint created in #99 can handle that.

Or should we add an additional UI step asking the user what kind of file they've uploaded, and if they select geojson, at that point upgrade the item to a geojson dataset?

Also, how to handle cases of pushing up multiple files, and other possible boundary conditions?

refactor client side geojson dataset

  • should split out from the Dataset model
  • should be added to DatasetCollection model function
  • should be able to set the correct GeoJS renderer and render on the map
  • should be able to download its own data
  • should supply the isRenderable function

API keys get rate limited

Every time the app is loaded somewhere, a connection to the Twitter API is opened. If enough people open the app, or one person reloads enough times, the key's rate limit will be hit; the app then appears to work but produces no data.

It's unclear how to get around this. For demo days and the like, we can have @aashish24 regenerate his tokens so they are fresh, which should cover the demo period.

In the longer run we may need to cache a single stream within the streaming service, so that each person requesting a Tangelo stream of tweets gets data from one shared Twitter stream. It's not obvious how to do this, but it should be possible.
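One way to sketch the shared-stream idea: open a single upstream connection and fan it out to every client, so the rate-limited API sees only one consumer. This is illustrative only; a real service would bound the buffer, evict old items, and handle client disconnects.

```python
class SharedStream:
    """Fan one upstream iterator out to many readers.

    Only one connection to the rate-limited API is ever opened; each
    reader keeps its own cursor over a shared buffer of items.
    """

    def __init__(self, upstream):
        self._upstream = upstream  # e.g. the single Twitter stream
        self._buffer = []          # unbounded here; bound it in practice

    def reader(self):
        """Generator giving one client its own view of the stream."""
        pos = 0
        while True:
            while pos >= len(self._buffer):
                # Pull from upstream only when a reader is ahead of
                # the buffer, so the upstream is consumed exactly once.
                self._buffer.append(next(self._upstream))
            yield self._buffer[pos]
            pos += 1
```

Two clients calling `reader()` both see the same items, but the upstream iterator is advanced only once per item.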

Cannot delete NEX-DCP30 S3 assetstore

Trying to delete an asset store with the following values:

Type: S3
Bucket: nasanex
Path prefix: NEX-DCP30
Access key ID:
Secret access key:
Service:

I receive a 403 error with the following response:

message: "S3ResponseError: S3ResponseError: 403 Forbidden↵<?xml version="1.0" encoding="UTF-8"?>↵<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>F7620A9CA788F786</RequestId><HostId>Ek3ChCUDpiF9Nyko8YVgdsmhJgpbMxJE62e914dqoauzVhqa7gtqQCT5LoluxWJo5Hd1PCLWX/4=</HostId></Error>"
trace:
0: ["girder/api/rest.py", 295, "endpointDecorator", "val = fun(self, args, kwargs)"]
1: ["girder/api/rest.py", 777, "DELETE", "return self.handleRoute('DELETE', path, params)"]
2: ["girder/api/rest.py", 583, "handleRoute", "val = handler(**kwargs)"]
3: ["girder/api/access.py", 34, "accessDecorator", "return fun(*args, **kwargs)"]
4: ["girder/api/rest.py", 243, "wrapped", "return fun(*args, **kwargs)"]
5: ["girder/api/v1/assetstore.py", 244, "deleteAssetstore", "self.model('assetstore').remove(assetstore)"]
6: ["girder/models/assetstore.py", 82, "remove", "adapter.untrackedUploads([], 'delete')"]
7: ["girder/utility/s3_assetstore_adapter.py", 478, "untrackedUploads", "multipartUpload.cancel_upload()"]
8: ["/home/kotfic/.venvs/NEX/lib/python2.7/site-packages/boto/s3/multipart.py", 330, "cancel_upload",…]
9: ["/home/kotfic/.venvs/NEX/lib/python2.7/site-packages/boto/s3/bucket.py", 1820,…]
type: "internal"

I am logged in as the admin user. All items have been removed that refer to the asset store and the delete button is 'clickable.'

Contour analysis does not validate input

Initially this can validate type based on input datatype contained in GeoContourWidget.js

Later we will want to open an issue for validating based on S3 dataset metadata.

Jobs that complete immediately do not always update their status in JobPanel

Some jobs that complete very quickly do not update their status in the job panel correctly (they continue to show the spinning wheel). This has happened several times, but it is intermittent.

Using S3 data import with the following values:

name: NASA DEMO
s3 bucket: nasanex
prefix: CMIP5/CommonGrid/gfdl-esm2g/rcp45/mon/r1i1p1/pr
check read only
