odm2 / odm2pythonapi

A set of Python functions that provides data read/write access to an ODM2 database by leveraging SQLAlchemy.

Home Page: http://odm2.github.io/ODM2PythonAPI/

License: BSD 3-Clause "New" or "Revised" License

Languages: PLpgSQL 83.45%, PLSQL 8.51%, Python 7.68%, PowerShell 0.18%, Shell 0.12%, Batchfile 0.06%

Topics: odm2, odm2-python-api, python

odm2pythonapi's Introduction

ODM2

The next version of the Observations Data Model.

For more information about the ODM2 development project, visit the wiki.

Have a look at the ODM2 paper in Environmental Modelling & Software. It's open access!

Horsburgh, J. S., Aufdenkampe, A. K., Mayorga, E., Lehnert, K. A., Hsu, L., Song, L., Spackman Jones, A., Damiano, S. G., Tarboton, D. G., Valentine, D., Zaslavsky, I., Whitenack, T. (2016). Observations Data Model 2: A community information model for spatially discrete Earth observations, Environmental Modelling & Software, 79, 55-74, http://dx.doi.org/10.1016/j.envsoft.2016.01.010

If you are interested in learning more about how ODM2 supports different use cases, have a look at our recent paper in the Data Science Journal.

Hsu, L., Mayorga, E., Horsburgh, J. S., Carter, M. R., Lehnert, K. A., Brantley, S. L. (2017), Enhancing Interoperability and Capabilities of Earth Science Data using the Observations Data Model 2 (ODM2), Data Science Journal, 16(4), 1-16, http://dx.doi.org/10.5334/dsj-2017-004.

Getting Started with ODM2

SQL scripts for generating blank ODM2 databases can be found at the following locations:

View Documentation of ODM2 Concepts

For more information on ODM2 concepts, examples, best practices, the ODM2 software ecosystem, etc., visit the Documentation page on the wiki.

View Diagrams and Documentation of the ODM2 Schema

Schema diagrams for the current version of the ODM2 schema are at:

Data Use Cases

The following data use cases are available. We have focused on designing ODM2 to support these use cases, and the available code and documentation show how each was mapped to ODM2.

  • Little Bear River - Hydrologic time series and water quality samples from an ODM 1.1.1 database. Implements an ODM2 database in Microsoft SQL Server.
  • PRISM-XAN - Water quality depth profiles and samples from Puget Sound. Implements an ODM2 database in PostgreSQL.

Our Goal with ODM2

We are working to develop a community information model to extend the interoperability of spatially discrete, feature-based earth observations derived from sensors and samples, and to improve the capture, sharing, and archival of these data. This information model, called ODM2, is being designed from a general perspective, with extensibility for achieving interoperability across the multiple disciplines and systems that support publication of earth observations.

ODM2 Schematic

Credits

This work was supported by National Science Foundation Grant EAR-1224638. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

ODM2 draws heavily from our prior work with the CUAHSI Hydrologic Information System and ODM 1.1.1 (Horsburgh et al., 2008; Horsburgh and Tarboton, 2008), our experiences working on the Critical Zone Observatory Integrated Data Management System (CZOData), and our experiences with the EarthChem systems (e.g., Lehnert et al., 2007; Lehnert et al., 2009). It also extensively uses concepts from the Open Geospatial Consortium's Observations & Measurements standard (Cox, 2007a; Cox, 2007b; Cox, 2011a; Cox, 2011b; ISO, 2011).

References

See a full list of ODM2-related references.

Cox, S.J.D. (2007a). Observations and Measurements - Part 1 - Observation schema, OGC Implementation Specification, OGC 07-022r1. 73 + xi. http://portal.opengeospatial.org/files/22466.

Cox, S.J.D. (2007b). Observations and Measurements - Part 2 - Sampling Features, OGC Implementation Specification, OGC 07-002r3. 36 + ix. http://portal.opengeospatial.org/files/22467.

Cox, S.J.D. (2011a). Geographic Information - Observations and Measurements, OGC Abstract Specification Topic 20 (same as ISO 19156:2011), OGC 10-004r3. 54. http://dx.doi.org/10.13140/2.1.1142.3042.

Cox, S.J.D. (2011b). Observations and Measurements - XML Implementation, OGC Implementation Standard, OGC 10-025r1. 66 + x. http://portal.opengeospatial.org/files/41510 (accessed September 16, 2014).

Horsburgh, J.S., D.G. Tarboton, D.R. Maidment, and I. Zaslavsky (2008). A relational model for environmental and water resources data, Water Resources Research, 44, W05406, http://dx.doi.org/10.1029/2007WR006392.

Horsburgh, J.S., D.G. Tarboton (2008). CUAHSI Community Observations Data Model (ODM) Version 1.1.1 Design Specifications, CUAHSI Open Source Software Tools, http://www.codeplex.com/Download?ProjectName=HydroServer&DownloadId=349176.

ISO 19156:2011 - Geographic information -- Observations and Measurements, International Standard (2011), International Organization for Standardization, Geneva. http://dx.doi.org/10.13140/2.1.1142.3042.

Lehnert, K.A., Walker, D., Vinay, S., Djapic, B., Ash, J., Falk, B. (2007). Community-Based Development of Standards for Geochemical and Geochronological Data, Eos Trans. AGU, 88(52), Fall Meet. Suppl., Abstract IN52A-09.

Lehnert, K.A., Walker, D., Block, K.A., Ash, J.M., Chan, C. (2009). EarthChem: Next developments to meet new demands, American Geophysical Union, Fall Meeting 2009, Abstract #V12C-01.

odm2pythonapi's People

Contributors

aufdenkampe, castronova, cdesyoun, denvaar, elijahwalkerwest, emiliom, horsburgh, kwuz, lsetiawan, ocefpaf, sreeder, valentinedwv


odm2pythonapi's Issues

Add ability to get objects by UUID

Modify the functions in the API to accept UUIDs to get objects corresponding to those UUIDs for objects that have them. These include:

  • SamplingFeatures
  • Results
  • Datasets
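For methods like these, the incoming value could be normalized before querying, so callers may pass either a `uuid.UUID` object or its string form. `normalize_uuid` below is a hypothetical helper sketch, not part of the current API:

```python
import uuid

def normalize_uuid(value):
    """Return the canonical lowercase string form stored in *UUID columns
    (e.g. SamplingFeatureUUID), accepting either a uuid.UUID or a str.
    Raises ValueError for strings that are not valid UUIDs."""
    if isinstance(value, uuid.UUID):
        return str(value)
    # uuid.UUID() both validates the string and canonicalizes its case
    return str(uuid.UUID(value))
```

A get-by-UUID method could then filter its query on the normalized string.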

NoForeignKey error on "SamplingFeatureExtensionPropertyValues.SamplingFeatureObj"

I'm just starting to get my feet wet with ODM2API. I'm starting with Demo.py, and tweaked it to use my own postgresql-based ODM2 database.

I've run into iterations of this error:

NoForeignKeysError: Could not determine join condition between parent/child tables 
on relationship SamplingFeatureExtensionPropertyValues.SamplingFeatureObj - 
there are no foreign keys linking these tables.  Ensure that referencing columns are associated 
with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.

with core_read.getVariables() or alternatively sampfeat_read.getAllSites(). But I've confirmed that the foreign key relationship does exist in my ODM2 database; and exploring the SamplingFeatureExtensionPropertyValues object (after doing from src.api.ODM2.models import SamplingFeatureExtensionPropertyValues), it looks like the foreign key link really is there:

SamplingFeatureExtensionPropertyValues.__table__
Table('samplingfeatureextensionpropertyvalues', MetaData(bind=None), 
Column('bridgeid', Integer(), table=<samplingfeatureextensionpropertyvalues>, primary_key=True, nullable=False), 
Column('samplingfeatureid', NullType(), ForeignKey('odm2.samplingfeatures.samplingfeatureid'), table=<samplingfeatureextensionpropertyvalues>, nullable=False), 
Column('propertyid', Integer(), ForeignKey('odm2.extensionproperties.propertyid'), table=<samplingfeatureextensionpropertyvalues>, nullable=False), 
Column('propertyvalue', String(length=255), table=<samplingfeatureextensionpropertyvalues>, nullable=False), 
schema='odm2')

I don't know enough about the ODM2API or Sqlalchemy yet to be able to probe further.

But as far as using Demo.py goes, I can't get beyond this error, so I'm not able to do anything useful.
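The error message itself suggests one workaround: declare the relationship with an explicit primaryjoin, so SQLAlchemy does not have to infer the join from (possibly NullType) foreign-key columns. Below is a minimal, self-contained sketch with table and column names borrowed from the report; it is not the actual odm2api model code, and the 'odm2' schema is omitted so it runs against SQLite:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
try:  # SQLAlchemy >= 1.4
    from sqlalchemy.orm import declarative_base, relationship, Session
except ImportError:  # older SQLAlchemy
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, Session

Base = declarative_base()

class SamplingFeatures(Base):
    __tablename__ = 'samplingfeatures'
    SamplingFeatureID = Column('samplingfeatureid', Integer, primary_key=True)

class SamplingFeatureExtensionPropertyValues(Base):
    __tablename__ = 'samplingfeatureextensionpropertyvalues'
    BridgeID = Column('bridgeid', Integer, primary_key=True)
    SamplingFeatureID = Column('samplingfeatureid', Integer,
                               ForeignKey('samplingfeatures.samplingfeatureid'),
                               nullable=False)
    PropertyValue = Column('propertyvalue', String(255), nullable=False)
    # The explicit primaryjoin removes the ambiguity that
    # NoForeignKeysError complains about:
    SamplingFeatureObj = relationship(
        SamplingFeatures,
        primaryjoin='SamplingFeatureExtensionPropertyValues.SamplingFeatureID'
                    ' == SamplingFeatures.SamplingFeatureID')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```

If the reflected column really does carry a proper ForeignKey (as the `__table__` dump above suggests), the NullType on `samplingfeatureid` is still suspicious; the primaryjoin sidesteps the inference either way.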

Linux MySQL requires the option lower-case-table-names for the variable lower_case_table_names

It took many builds to finally get this right. Create an option file using the proper name (for MySQL 5.5 the option is spelled lower-case-table-names, even though the variable is named lower_case_table_names):

cat $HOME/.my.cnf

[mysqld]
lower-case-table-names = 1

Check

mysql --verbose -e "show variables like 'lower%';" --user=root

--------------
show variables like 'lower%'
--------------
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| lower_case_file_system | OFF   |
| lower_case_table_names | 1     |
+------------------------+-------+

createTimeSeriesResult fails

The following exception is raised when inserting a TimeSeriesResult record into a SQLite database: FlushError: Instance <TimeSeriesResults at 0x108ddecd0> has a NULL identity key. The snippet of code below duplicates this error.

# create a result record
result = models.Results
result.ResultUUID = uuid.uuid4()
result.FeatureActionID = featureaction.FeatureActionID
result.ResultTypeCV = 'time series'
result.VariableID = variable.VariableID
result.UnitsID = unit.UnitsID
result.ProcessingLevelID = processinglevel.ProcessingLevelID
result.ValueCount = len(dates)
result.SampledMediumCV = 'unknown'

# create time series result
# FlushError: Instance <TimeSeriesResults at 0x1174b5fd0> has a NULL identity key.
timeseriesresult = self.write.createTimeSeriesResult(result=result,
                                                     aggregationstatistic='unknown',
                                                     timespacing=timestepvalue,
                                                     timespacing_unitid=timestepunit.UnitsID)
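One thing worth checking in the snippet above: `models.Results` is referenced without parentheses, so the attribute assignments land on the class itself and no instance with its own identity is ever flushed. The difference, illustrated with a plain stand-in class (not the real ODM2 model):

```python
class Results(object):
    """Stand-in for models.Results, just to show the pitfall."""
    pass

shared = Results            # missing parentheses: this binds the class object
shared.ValueCount = 10
assert Results.ValueCount == 10     # the attribute leaked onto the class

record = Results()          # an actual instance, which is what the ORM needs
record.ValueCount = 25
assert record.ValueCount == 25
assert Results.ValueCount == 10     # class attribute untouched
```

If the real code does the same, instantiating the model (`models.Results()`) before setting its attributes would be the first thing to try.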

one more pass at branch/fork unification?

We've done a pretty significant amount of code reorganization and cleanup since mid November, and that's all now pretty stable in the master branch. But there are two possible additional loose ends that we should tackle for merging into master:

  1. @denvaar, your commits to the setup branch since late November or so. It looks like you've fixed several miscellaneous bugs. It'd be great to merge those into master, then delete the setup branch.
  2. @cdesyoun and @valentinedwv, is the ODM2REST API using the current odm2api master, or does it rely on your own fork? If it's using your fork, let's try to identify bug fixes and feature additions you've made, and merge them into master. If you are using master, that'll be great to confirm, too! I suspect you must be using your own fork, because master doesn't seem to have any methods for Measurements result types, but ODM2REST API does handle that result type in my "Marchantaria" use case.

Thanks!

In a file, "ODM2PythonAPI/ODM2/models.py"

Would you please fix the problems below?
-- please fix the spelling error (ResultDateTimeUTCOfffset ==> ResultDateTimeUTCOffset)
-- please add the "cv_relationshiptype" class:

class CVRelationshipType(Base):
    __tablename__ = 'cv_relationshiptype'
    __table_args__ = {u'schema': 'odm2'}

    Term = Column('term', String(255), nullable=False)
    Name = Column('name', String(255), primary_key=True)
    Definition = Column('definition', String(1000))
    Category = Column('category', String(255))
    SourceVocabularyUri = Column('sourcevocabularyuri', String(255))

    def __repr__(self):
        return "<CV('%s', '%s', '%s', '%s')>" % (self.Term, self.Name, self.Definition, self.Category)

-- please add the type "Integer" in the class "SamplingFeatureExternalIdentifiers":
"BridgeID = Column('bridgeid', Integer, primary_key=True, nullable=False)"
-- please replace the "BIT" type with the "Boolean" type in the class "Specimens":
"IsFieldSpecimen = Column('isfieldspecimen', Boolean, nullable=False)"

createTimeSeriesResultValues

The following code should be generic, but there seem to be some capitalization issues that need to be addressed for all db types:

def createTimeSeriesResultValues(self, datavalues):
    try:
        datavalues.to_sql(name="TimeSeriesResultValues",
                          schema=TimeSeriesResultValues.__table_args__['schema'],
                          if_exists='append',
                          chunksize=1000,
                          con=self._session_factory.engine,
                          index=False)
        self._session.commit()
    except Exception:
        # (not in the original paste) roll back on failure so the try block is complete
        self._session.rollback()
        raise

Query Sampling Feature Always Returns Nonetype

When querying a SamplingFeature (e.g. getSamplingFeatureById, getSamplingFeatureByGeometry), None is always returned. When querying all SamplingFeatures TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' is raised. See sample code below to replicate this error:

from osgeo import gdal, ogr
point = ogr.Geometry(ogr.wkbPoint)
point.AddPoint(1198054.34, 648493.09)

samplingFeatureID = self.insert_sampling_feature(type='site',geometryType=point.GetGeometryName(), WKTgeometry=point.ExportToWkt())

# Querying SamplingFeatures Fails?!
# However, querying the database directly or with sqlite3 works fine.
samplingfeature = self.read.getSamplingFeatureByGeometry(geom.wkt)
test = self.read.getSamplingFeatureById(samplingFeatureID)

# raises an exception that is not caught
all = self.read.getSamplingFeatures()

### Function to insert sampling feature geometries
def insert_sampling_feature(self, type='site',code='',name=None,description=None,geometryType=None,elevation=None,elevationDatum=None,WKTgeometry=None):
        '''
        Inserts a sampling feature.  This function was created to support the insertion of Geometry object since this
        functionality is currently lacking from the ODM2PythonAPI.
        :param type: Type of sampling feature.  Must match FeatureTypeCV, e.g. "site"
        :param name: Name of sampling feature (optional)
        :param description: Description of sampling feature (optional)
        :param geometryType: String representation of the geometry type, e.g. Polygon, Point, etc.
        :param elevation: Elevation of the sampling feature (float)
        :param elevationDatum: String representation of the spatial datum used for the elevation
        :param WKTgeometry: Geometry of the sampling feature (WKT string)
        :return: ID of the sampling feature which was inserted into the database
        '''
        UUID=str(uuid.uuid4())
        FeatureTypeCV=type
        FeatureCode = code
        FeatureName=name
        FeatureDescription=description
        FeatureGeoTypeCV=geometryType
        FeatureGeometry=WKTgeometry
        Elevation=elevation
        ElevationDatumCV=elevationDatum

        # get the last record index
        res = self.spatialDb.execute('SELECT SamplingFeatureID FROM SamplingFeatures ORDER BY SamplingFeatureID DESC LIMIT 1').fetchall()
        ID = res[0][0]  # get the last id value
        ID += 1 # increment the last id

        values = [ID,UUID,FeatureTypeCV,FeatureCode,FeatureName,FeatureDescription, FeatureGeoTypeCV, FeatureGeometry, Elevation,ElevationDatumCV]
        self.spatialDb.execute('INSERT INTO SamplingFeatures VALUES (?, ?, ?, ?, ?, ?, ?, geomFromText(?), ?, ?)'
                               ,values)

        self.spatial_connection.commit()

        pts = self.spatial_connection.execute('SELECT ST_AsText(FeatureGeometry) from SamplingFeatures').fetchall()

        return ID

ODMconnection fails for MySQL with No Password

Currently, the ODMconnection classes restrict drivers, which can cause issues.
dbconnection.createConnection assumes a password, so a password of None causes an error, as does an empty password ''.

self = <tests.test_connection.Connection instance at 0x7f2e3e711e18>
request = <SubRequest 'setup' for <Function 'test_connection[setup0]'>>

    def __init__(self, request):
        db = request.param
        print("dbtype", db[0], db[1])
        session_factory = dbconnection.createConnection(db[1], db[2], db[3], db[4], db[5], echo=True)

>       assert session_factory is not None, ("failed to create a session for ", db[0], db[1])
E       AssertionError: ('failed to create a session for ', 'mysql_odm2_root', 'mysql')
E       assert None is not None

You end up needing to directly create a session using the string: ['mysql+pymysql://root:@localhost/odm2']

Will fix on the setup branch
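A sketch of how the connection-string assembly could tolerate a missing password (`build_mysql_url` is a hypothetical helper, not the actual dbconnection code); a password of None or '' should yield exactly the URL that worked when the session was created directly:

```python
def build_mysql_url(user, password, host, database, driver='pymysql'):
    """Build a SQLAlchemy MySQL URL, treating None and '' passwords alike.

    The 'user:@host' form (empty password) is valid, and is the string
    reported to work above: mysql+pymysql://root:@localhost/odm2
    """
    password = password or ''
    return 'mysql+%s://%s:%s@%s/%s' % (driver, user, password, host, database)
```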

Questions regarding the dependencies for the conda package

@emiliom the current requirements.txt has

pyodbc
pymysql
six
sqlalchemy
geoalchemy
#https://github.com/ODM2/geoalchemy/archive/v0.7.3.tar.gz
shapely
dateutils
pandas
#psycopg2  # Commented out because I could not pip install it.
#matplotlib
#sqlalchemy-migrate

The forked geoalchemy is easy to solve in a conda package, but I have a few concerns regarding other packages.

  • dateutils is outdated and unavailable in Python > 2.7. Maybe it is worth revisiting the code that uses dateutils and adapting it to a modern library;
  • psycopg2 is commented out and I don't know if it is an optional or mandatory dependency. Also, it is unavailable on Windows. (But it might be available soon. See conda-forge/staged-recipes#101);
  • matplotlib is also commented out. I am guessing that is an optional dependency;
  • sqlalchemy-migrate same as above. There is no conda package for this one. We will need to add it here.

I usually recommend adding the optional dependencies when packaging with conda. What do you want to do here?

CreateService Session closes on failed create

Adding tests based on @Castronova's branch, and seeing that if a constraint is invalid, the session becomes invalid.

Need to check whether the session is still open before attempting to create.

  # goal of this is to see that if we force errors like a null value, or duplicate that the session does not fail

    # create some people
    setup.odmcreate.createPerson(firstName="tony",
                                 lastName='castronova',
                                 middleName='michael')

    with pytest.raises(Exception) as excinfo:
        # this one should fail due to a not null constraint
        setup.odmcreate.createPerson(firstName=None,
                                     lastName='castronova',
                                     middleName='michael')

    assert 'People.PersonFirstName may not be NULL' in str(excinfo.value)

    # now add again
    setup.odmcreate.createPerson(firstName="tony",
                                 lastName='castronova',
                                 middleName=None)
2016-04-14 12:24:44,496 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-04-14 12:24:44,496 INFO sqlalchemy.engine.base.Engine INSERT INTO people (personfirstname, personmiddlename, personlastname) VALUES (?, ?, ?)
2016-04-14 12:24:44,496 INFO sqlalchemy.engine.base.Engine ('tony', 'michael', 'castronova')
2016-04-14 12:24:44,497 INFO sqlalchemy.engine.base.Engine COMMIT
2016-04-14 12:24:44,499 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-04-14 12:24:44,499 INFO sqlalchemy.engine.base.Engine INSERT INTO people (personfirstname, personmiddlename, personlastname) VALUES (?, ?, ?)
2016-04-14 12:24:44,499 INFO sqlalchemy.engine.base.Engine (None, 'michael', 'castronova')
2016-04-14 12:24:44,500 INFO sqlalchemy.engine.base.Engine ROLLBACK
F
setup = <class tests.test_odm2.odmConnection at 0x00000000088B34C8>

    def test_SessionNotFailed(setup):
        # goal of this is to see that if we force errors like a null value, or duplicate that the session does not fail

        # create some people
        setup.odmcreate.createPerson(firstName="tony",
                                     lastName='castronova',
                                     middleName='michael')

        with pytest.raises(Exception) as excinfo:
            # this one should fail due to a not null constraint
            setup.odmcreate.createPerson(firstName=None,
                                         lastName='castronova',
                                         middleName='michael')

        assert 'People.PersonFirstName may not be NULL' in str(excinfo.value)

        # now add again
        setup.odmcreate.createPerson(firstName="tony",
                                     lastName='castronova',
>                                    middleName=None)

tests\test_odm2.py:75: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
odm2api\ODM2\services\createService.py:226: in createPerson
    self._session.commit()
build\bdist.win-amd64\egg\sqlalchemy\orm\session.py:801: in commit
    ???
build\bdist.win-amd64\egg\sqlalchemy\orm\session.py:390: in commit
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <sqlalchemy.orm.session.SessionTransaction object at 0x00000000088EC668>
prepared_ok = True, rollback_ok = False, deactive_ok = False
closed_msg = 'This transaction is closed'

>   ???
E   InvalidRequestError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (sqlite3.IntegrityError) People.PersonFirstName may not be NULL [SQL: u'INSERT INTO people (personfirstname, personmiddlename, personlastname) VALUES (?, ?, ?)'] [parameters: (None, 'michael', 'castronova')]

build\bdist.win-amd64\egg\sqlalchemy\orm\session.py:214: InvalidRequestError
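The fix the InvalidRequestError asks for, rolling the session back after a failed flush so later creates can proceed, might look like this generic sketch (`safe_create` is hypothetical, not the actual createService code):

```python
def safe_create(session, obj):
    """Add and commit obj; on failure, roll the session back so it can
    be reused by the next create, then re-raise the original error."""
    try:
        session.add(obj)
        session.commit()
        return obj
    except Exception:
        session.rollback()
        raise
```

With this pattern, the failed `createPerson(firstName=None, ...)` call would leave the session in a usable state for the third create.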

add init function to models

Add __init__ functions to the model classes. Currently, creation happens like this:

(screenshot: 2016-03-23, 10:18 am)

I think this would be easier:

(screenshot: 2016-03-23, 10:20 am)

or this:

(screenshot: 2016-03-23, 10:21 am)
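The proposal, sketched on a plain class (hypothetical; the actual ODM2 models are SQLAlchemy declarative classes, and the attribute names here are modeled on the People table):

```python
class People(object):
    """Sketch of an explicit __init__ for a model class."""
    def __init__(self, PersonFirstName, PersonLastName, PersonMiddleName=None):
        self.PersonFirstName = PersonFirstName
        self.PersonLastName = PersonLastName
        self.PersonMiddleName = PersonMiddleName

# one-line creation instead of instantiating and then setting each attribute:
p = People('tony', 'castronova', PersonMiddleName='michael')
```

Note that SQLAlchemy's declarative base already generates a keyword-argument constructor (`People(PersonFirstName='tony', ...)`); an explicit `__init__` mainly adds positional arguments, defaults, and required-field enforcement.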

Modify API functions to accept arrays of inputs

For some of the API methods it makes sense to pass an array rather than a single item. For example, getSamplingFeatures() accepts a SamplingFeatureID as an argument, but it should also accept an array of SamplingFeatureIDs. We need to list the functions and arguments for which this makes sense.
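A common pattern for this is to normalize the argument up front so each method body only ever deals with a list; `as_id_list` below is a hypothetical helper sketch:

```python
def as_id_list(value):
    """Accept a single SamplingFeatureID or a sequence of them and
    always return a list (empty for None)."""
    if value is None:
        return []
    if isinstance(value, (list, tuple, set)):
        return list(value)
    return [value]
```

A method could then filter with an IN clause, e.g. `query.filter(SamplingFeatures.SamplingFeatureID.in_(as_id_list(ids)))`.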

Implement Use Case Models

We have use cases:

  • TimeSeries
  • Measurements
  • and possibly PetDB

If these were objects that the use cases populate, and that in turn use the API to store and retrieve data, it would speed adoption of the API and also isolate the use cases from the data model.

help specifying a build requirement from a github repo release (for geoalchemy fork)

@ocefpaf, if you have a bit of time (right!) I'd like to bug -- or beg? 😺 -- you for help with specifying a package dependency. This isn't conda packaging, yet; that'll be a separate conversation in the future. And it's not IOOS related, at least not yet.

The ODM2PythonAPI (odm2api) package has a dependency on a slightly customized fork of the old and no-longer-maintained geoalchemy "1"; FYI, that's the original geoalchemy, not the actively maintained geoalchemy2. The reasons for having that dependency and tweaking the old code are beyond the scope of this issue, but trust me that they are valid! I can fill you in offline if you really care.

We'd like to be able to update setup.py and/or requirements.txt so that our geoalchemy fork will be installed automatically when odm2api is pip-installed via pip install git+https://github.com/ODM2/ODM2PythonAPI.git. Currently I can complete the installation with conda by setting up a conda env with all other dependencies already available via the anaconda main channel or the ioos channel (e.g., conda create -n odm2api python=2.7 sqlalchemy psycopg2 pymysql pyodbc). Then, in that env, I pip install git+https://github.com/ODM2/geoalchemy.git, and then pip install git+https://github.com/ODM2/ODM2PythonAPI.git. We'd like to specify the setup/requirements so that the last step carries out the right geoalchemy installation automatically. What we've tried hasn't worked.

We're not interested in pushing our geoalchemy fork to pypi or anything more formal, for now.

I've just created a new release for our geoalchemy fork (v0.7.3), off the master branch. We don't anticipate any additional changes in the foreseeable future.

I know you're very busy, so if this looks too complicated, no worries if you have to pass. But if it's easy and you can help us, that'd be fantastic! We are a bit out of our depth here.
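For what it's worth, recent pip versions accept a direct archive URL as a requirements line, which is one way to pin the fork's tagged release (a sketch; the tarball URL matches the commented-out line already in requirements.txt):

```
# requirements.txt -- point pip at the tagged release tarball of the fork
https://github.com/ODM2/geoalchemy/archive/v0.7.3.tar.gz
```

The older setup.py dependency_links mechanism for this is deprecated in modern pip; on newer tooling a PEP 508 direct reference in install_requires ("geoalchemy @ https://...") is the supported alternative.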

Custom type

The custom type for feature geometry is not saving to the database correctly for any of the db types.

Align pypi and anaconda channel packaging and setup efforts

@ocefpaf and I are working on a new ODM2 anaconda channel for easy, multi-OS distribution of ODM2 packages (odm2api initially, others later on). We have the ODM2 channel already created with odm2api and its dependencies, already installable on Linux and OSX (some Windows glitches are being worked on).

@valentinedwv is working on PyPI packaging, and already has this on the test pypi environment. He also has overhauled requirements.txt and setup.py in the setup_clean branch and has a PR (#46) waiting in the wings with these changes (I asked to delay merging it for now).

@valentinedwv also has refactored the code to make individual RDBMS packages (eg, psycopg2, pyodbc) optional user-driven installs, so a MSSQL user never has to care about or install PostgreSQL requirements. See issue #44.

In addition, there's a raft of great new odm2api overhauls that @sreeder has done and I just reviewed, also waiting in the wings. And we're now using tagged (pre) releases (thanks, @sreeder!).

These are all fantastic developments!! But it's time to coordinate so we don't trip over each other, we leverage each other's efforts and expertise, and we make the best joint decisions.

I'll stop here and add a follow-up, next-step comment in a few minutes. I'll ping @aufdenkampe and @horsburgh directly, b/c I think this is important enough that they should be directly aware or involved at least initially.

package loading error & generating database schema from the base model

I tried to create the database schema for SQLite and PostgreSQL using the SQLAlchemy data model (models.py). My code is below:

import sys
sys.path.append('ODM2PythonAPI')

from sqlalchemy import create_engine, event
from sqlalchemy.schema import CreateSchema
from src.api.base import modelBase

def createDBschema(db_type):
    if db_type == 'postgresql':
        event.listen(modelBase.metadata, 'before_create', CreateSchema('odm2'))
        engine = create_engine('postgresql+psycopg2://............')
        modelBase.metadata.create_all(engine, checkfirst=True)
    if db_type == 'sqlite':
        engine = create_engine('sqlite:///:memory:', echo=True)
        modelBase.metadata.create_all(engine)

createDBschema('sqlite')

When running the above code, I got the error messages below, related to "LikeODM1":

Traceback (most recent call last):
  File "create_schema.py", line 8, in <module>
    from src.api.base import modelBase
  File "ODM2PythonAPI/src/api/__init__.py", line 1, in <module>
    from .ODMconnection import SessionFactory, dbconnection
  File "ODM2PythonAPI/src/api/ODMconnection.py", line 6, in <module>
    from .versionSwitcher import ODM, refreshDB  #import Variable as Variable1
  File "ODM2PythonAPI/src/api/versionSwitcher.py", line 8, in <module>
    import ODM2.LikeODM1.models as ODM2
  File "ODM2PythonAPI/src/api/ODM2/LikeODM1/__init__.py", line 34, in <module>
    import models
  File "ODM2PythonAPI/src/api/ODM2/LikeODM1/models.py", line 41, in <module>
    class Site(Base):
  File "ODM2PythonAPI/src/api/ODM2/LikeODM1/models.py", line 46, in Site
    id = site_join.c.odm2_sites_samplingfeatureid
  File "/Users/cyoun/PycharmProjects/venv_odm/lib/python2.7/site-packages/sqlalchemy/util/_collections.py", line 211, in __getattr__
    raise AttributeError(key)
AttributeError: odm2_sites_samplingfeatureid

After commenting out these parts in the "ODMconnection.py" file, I could get past this error.

Based on this SQLAlchemy data model for ODM2, there are some issues with generating the database schema directly:

  1. For SQLite, the current model uses the schema "odm2" and the custom "geometry" type for the "featuregeometry" column of the "samplingfeatures" table, which is a problem for generating the database schema from the model: SQLite does not support schemas. I overrode the "get_col_spec" function of the Geometry type to accept the column object, but even when the table was created, index creation still failed. After commenting out the schema name "odm2" and the "featuregeometry" column in models.py, I could generate the db schema.
  2. For PostgreSQL, we assume the "postgis" tables are installed. Also, the current model classes use the SQL collation "SQL_Latin1_General_CP1_CI_AS", which is specific to MS SQL Server; the PostgreSQL server rejects it, raising errors when generating the db schema. After deleting this collation, I could generate the db schema on this server.

If we had a generic SQLAlchemy data model for ODM2 that solved the issues above, we could generate the db schema simply, using short code like the above.

MSSQL FreeTDS Linux issues

Figure out how to get FreeTDS working on a Linux machine.

The present code works on a Mac (or the soon-to-be-reverted code does).

SamplingFeature read failures with SQLite

In #24 we went back and forth with some changes to ODM2/models.py, and possibly ODM2/services/readService.py, regarding SamplingFeatures and the use of geoalchemy 1. A couple of merges were involved. These changes sort of continued in #27, and have some earlier linkage to #13.

Before that, we got Examples/Sample.py to work top to bottom with its SQLite sample database (FYI, I never tested the SamplingFeature creation code). Here's a working IPython notebook version I made on 1/21

Now, I'm unable to get the ReadODM2.getSamplingFeature methods to work anymore. They work on PostgreSQL, but not SQLite. With @horsburgh's ODM2 Little Bear River sqlite sample file, a method request such as getSamplingFeaturesByType('Site') leads to this error:

(sqlite3.OperationalError) no such function: AsBinary [SQL: u'SELECT 
samplingfeatures.samplingfeatureid AS odm2_samplingfeatures_samplingfeatureid, 
samplingfeatures.samplingfeatureuuid AS odm2_samplingfeatures_samplingfeatureuuid, 
samplingfeatures.samplingfeaturetypecv AS odm2_samplingfeatures_samplingfeaturetypecv, 
samplingfeatures.samplingfeaturecode AS odm2_samplingfeatures_samplingfeaturecode, 
samplingfeatures.samplingfeaturename AS odm2_samplingfeatures_samplingfeaturename, samplingfeatures.samplingfeaturedescription AS odm2_samplingfeatures_samplingfeaturedescription, 
samplingfeatures.samplingfeaturegeotypecv AS odm2_samplingfeatures_samplingfeaturegeotypecv, 
samplingfeatures.elevation_m AS odm2_samplingfeatures_elevation_m, 
samplingfeatures.elevationdatumcv AS odm2_samplingfeatures_elevationdatumcv, 
AsBinary(samplingfeatures.featuregeometry) AS odm2_samplingfeatures_featuregeometry
\nFROM samplingfeatures 
\nWHERE samplingfeatures.samplingfeaturetypecv = ?'] [parameters: ('site',)]

Note the problem with the AsBinary SQL function. Probing a bit further, I see the error comes from sqlalchemy/engine/default.py, specifically line 450 below:

449     def do_execute(self, cursor, statement, parameters, context=None):
450         cursor.execute(statement, parameters)

I've gone through the relevant commit history for ODM2/models.py and ODM2/services/readService.py since early December, and can't find anything that's different from a time in mid January when things were working fine. I'm stumped.

@denvaar has moved to greener pastures and @sreeder is still away on maternity leave. So, for now I'm creating this issue as documentation of the problem. I'll keep tackling it, and will add more information in a bit.

Getting Examples/Sample.py to work with LBR ODM2 SQLite db

@denvaar here's what I'm trying. I have Jeff's ODM2.sqlite db for the Little Bear River, with a single timeseries result. I'm trying to run Examples/Sample.py. I was hoping it would work!

I can connect to (read) the database, and query some objects. But I'll jump to the core and major problem I'm running into.

FYI, the sqlite db only has 1 result. No sweat. I run the statement tsResult = read.getTimeSeriesResultByResultId(1) at line 130. The resulting object closely matches the table definition of TimeSeriesResults. But it's clear that Sample.py expects an object that includes the associated Results; for example, there are statements like these:

tsResult.ResultTypeCV
tsResult.ProcessingLevelObj.Definition

But tsResult doesn't have any of those Results properties.

Ok, moving on, I try to get the time series result values, like this: tsValues = read.getTimeSeriesResultValuesByResultId(1). It doesn't work, and returns None.

read.getResultById(1) does return the single Results record.

The sqlite database looks fine to me, when I inspect its tables via a SQLite browser.

I also have other problems that seem puzzling. For example, read.getAllAffiliations() returns None, not the affiliations record.

Note that I'm not running Sample.py in full. I'm taking each distinct block and running it in a Jupyter notebook. I'm skipping the "add sampling feature" block, b/c I'm not interested in writing to the database at this time.

Thanks!

Add function to populate/update CVs in an existing database

There is a script in the ODM2 repository now that does the initial population of the CVs for a blank ODM2 database. We should add a function to the ODM2 API that updates the CVs:

UpdateCVs(IncludeUnits=True)

There should be an option for including the Units table (or not), since Units is not a CV. We might also want to add a flag that specifies what to do with terms that are already in the database but don't match terms in the master CV.
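A rough sketch of the merge logic such a function might use; all names here (update_cv_terms, on_unmatched) are hypothetical illustrations, not part of the actual API:

```python
# Hypothetical sketch of the core merge step behind an UpdateCVs-style
# function, for a single CV table. Names are illustrative only.

def update_cv_terms(db_terms, master_terms, on_unmatched='keep'):
    """Merge master CV terms into the terms already in a database CV table.

    on_unmatched controls local terms absent from the master CV:
    'keep' leaves them in place, 'drop' removes them.
    Returns (final_terms, terms_to_insert, unmatched_local_terms).
    """
    master = set(master_terms)
    local = set(db_terms)
    to_insert = master - local          # terms the update would INSERT
    unmatched = local - master          # local terms not in the master CV
    final = master | (unmatched if on_unmatched == 'keep' else set())
    return sorted(final), sorted(to_insert), sorted(unmatched)
```

A real implementation would read the master CV from the existing population script in the ODM2 repository and issue the inserts through the session, with Units handled separately when IncludeUnits is set.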

ODM2 Spatialite support

We need to get the code to work with SpatiaLite databases. It currently tries to connect with plain SQLite, which has none of the geospatial functions.

Querying Simulations fails

Querying for simulations does not work when using any of the readService functions. Exceptions are silenced when calling these functions and None is returned. I suggest that the exception handling at least print the exception message (possibly even raise Exception(e)). By printing the exception message we can see that all of these functions are looking for a table called Simulation:

global name 'Simulation' is not defined

However, this table does not exist; these functions should instead be looking for the table called Simulations, as defined in models.py.

In the following functions, self._session.query(Simulation)... needs to be replaced with self._session.query(Simulations)...:

getAllSimulations
getSimulationByName 
getSimulationByActionID
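Beyond the rename, the silent exception handling could be wrapped so failures are at least visible. A minimal sketch (the helper name safe_query is hypothetical, not part of the API):

```python
# Sketch: surface exceptions from read-service queries instead of silently
# returning None, so a NameError like the one above becomes visible.

def safe_query(query_fn):
    """Run a query callable; print any exception rather than swallowing it."""
    try:
        return query_fn()
    except Exception as e:
        print('query failed: %r' % e)
        return None

# e.g. safe_query(lambda: self._session.query(Simulations).all())

def buggy_query():
    # stand-in for a readService function hitting the undefined name
    raise NameError("global name 'Simulation' is not defined")

result = safe_query(buggy_query)   # prints the NameError, returns None
```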

High level query functions

Here is a list of the high-level query functions that have been requested for the ODM2 API.

Get Functions:

  • getSamplingFeatureInfo( ) - given a SamplingFeatureID, returns a SamplingFeature object and its list of Results (without data values). Similar to the WaterOneFlow getSiteInfo function

Create Functions:

  • createAffiliations( )

Creating tables via the SQLAlchemy model base in SQLite

I tried to create all of the tables via the ODM2 model base, like this:

from odm2api.base import *
modelBase.metadata.create_all(engine)

Because the current ODM2 SQLAlchemy model declares the table schema "odm2", I got the error message below when creating tables in SQLite:

sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database "odm2" [SQL: u'PRAGMA "odm2".table_info("samplingfeatureexternalidentifiers")']

When using the modified ODM2 model for SQLite, https://github.com/ODM2/ODM2PythonAPI/blob/master/odm2api/ODM2/models_sqlite.py, it went well.
Is there any way to skip the schema tag in the current ODM2 SQLAlchemy model for databases that don't support schemas, such as SQLite and MySQL?
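One possible workaround, shown here with only the stdlib sqlite3 module to illustrate the mechanics: SQLite has no CREATE SCHEMA, but ATTACH-ing a database under the alias odm2 makes schema-qualified names (and the failing PRAGMA) resolve. This is a sketch of the idea, not a tested fix for the full model:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# Attach a second database under the alias "odm2" so that names like
# odm2.samplingfeatures and PRAGMA "odm2".table_info(...) resolve.
conn.execute("ATTACH DATABASE ':memory:' AS odm2")
conn.execute('CREATE TABLE odm2.samplingfeatures ('
             'samplingfeatureid INTEGER PRIMARY KEY, samplingfeaturecode TEXT)')
cols = [row[1] for row in
        conn.execute('PRAGMA "odm2".table_info("samplingfeatures")')]
print(cols)
```

With SQLAlchemy this ATTACH could be issued in a connect-event listener before create_all(); the alternative is stripping the schema, i.e. setting each Table's schema to None for dialects without schema support.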

Problem creating connection to PostgreSQL database

After installing ODM2PythonAPI using the overhauled setup I described in issue #11 (with the setup_em branch I've created), I got an error trying to make a test connection to one of my local PostgreSQL ODM2 databases:

from odm2api.ODMconnection import dbconnection
from odm2api.ODM2.services import *

session_factory = dbconnection.createConnection('postgresql', 'localhost', 'odm2_rivers', 
                                                 'myuser', 'mypwd')
**** {'engine': 'postgresql', 'password': 'mypwd', 'db': 'odm2_rivers', 'user': 'myuser', 'address': 'localhost'}
****** postgresql+psycopg2://myuser:mypwd@localhost/odm2_rivers
Connection was unsuccessful  (psycopg2.ProgrammingError) relation "Variables" does not exist
LINE 2: FROM "Variables"

The messages relation "Variables" does not exist and LINE 2: FROM "Variables" suggest that it's issuing a query with a case-sensitive (double-quoted) table name, "Variables". I was able to track the error to (probably) this line in odm2api/ODMconnection.py, but I wasn't able to go beyond that.

@sreeder, can you look into it? I can help fix it if it's a case issue and you can point me to where the problem is. BTW, this may be related to the case handling issues discussed a few months ago in issue #2. Back then I was able to change the code and get it to work for me with postgresql, but that was before you overhauled the case handling.

Pyodbc SQL Server Driver Connection String

@AmberSJones and I have been running into some issues when connecting to a SQL Server database from a windows machine. The driver portion of the connection string (SQL+Server+Native+Client+10.0) does not always match the name of the driver on each machine.

For example, my machine only had drivers named SQL+Server and ODBC+Driver+11+for+SQL+Server. The names of the drivers will not always be exactly the same. I have some code that can grab the names of the user's installed drivers. I am going to add that to ODMconnection.py so that it inserts the appropriate name into the connection string.
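A sketch of what that selection might look like. With pyodbc the installed names come from pyodbc.drivers(); here the list is passed in as an argument so the logic stands on its own (the function name pick_sqlserver_driver is hypothetical):

```python
def pick_sqlserver_driver(installed):
    """Return the most specific installed SQL Server ODBC driver name."""
    preferred = [
        'ODBC Driver 11 for SQL Server',
        'SQL Server Native Client 10.0',
        'SQL Server',                      # generic fallback shipped with Windows
    ]
    for name in preferred:
        if name in installed:
            return name
    raise RuntimeError('no SQL Server ODBC driver found')

# in ODMconnection.py this would be fed by pyodbc.drivers()
driver = pick_sqlserver_driver(['SQL Server', 'ODBC Driver 11 for SQL Server'])
conn_str = 'mssql+pyodbc://user:pwd@host/db?driver=' + driver.replace(' ', '+')
```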

Installing as package, from "setup" branch

@sreeder and @horsburgh, following up on our call from Monday, I've installed ODM2API. I think it needs more work, but here's what I did that was successful.

  • I created a conda environment with all high-level dependencies, plus some more. This wasn't necessary, strictly speaking, but I definitely wanted to test this installation in a virtual environment that wouldn't mess with my system Python. Plus, staging things with conda makes for a more controlled environment. FYI: conda create -n odm2_apitest1 python=2.7 ipython-notebook pandas seaborn sqlalchemy psycopg2 pymysql pyodbc.
  • I followed that step with pip install geoalchemy2 (geoalchemy2 is not currently available as a conda package), b/c geoalchemy2 turned out to be a dependency in the ODM2API setup git branch that's not handled by the python setup.py develop step below. I found out the hard way, when I tried to run import api after installing ODM2API.
  • Made sure I was on the setup ODM2API git branch.
  • cd'd to the ODM2API src directory (where setup.py is found), and per Stephanie's instructions, ran python setup.py develop. This went very smoothly and quickly, as I already had all the dependencies in place.

So far so good. Per Stephanie's instructions, I was then able to run import api w/o errors.

The first, main problem I see (in addition to the unhandled geoalchemy2 dependency) is that there's no umbrella "odm2api" package that's installed. Having to import a package called "api" is kind of dangerous and confusing. I don't have much Python package configuration chops, so I can't help with this in the very short term, and I don't know why the package is being named 'api'.

Beyond that, it's pretty involved to have to do a local git clone, then do python setup.py develop based off the right directory path. It would be best if ODM2API could be packaged as a "pip installable" package that can be installed directly from github, like this (where setup is the setup branch):

pip install git+https://github.com/ODM2/ODM2PythonAPI.git@setup

FYI, I tried that and got this error message:

Collecting git+https://github.com/ODM2/ODM2PythonAPI.git@setup
  Cloning https://github.com/ODM2/ODM2PythonAPI.git (to setup) to /tmp/pip-yFXVBb-build
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 18, in <module>
    IOError: [Errno 2] No such file or directory: '/tmp/pip-yFXVBb-build/setup.py'

I have a hunch the problem has to do with setup.py being under the src directory rather than at the base of the repo, but again, I'm reaching the depth of my understanding.

I really think it'd be time well spent if we (I'm volunteering) invested the effort very soon to turn this into a pip installable package with a proper name/namespace (e.g., "odm2api", or just "odm2"). That would make it easier for others to try it. Maybe Choonhan (and Dave V?) can help us with this?

Done for now. Have a great Thanksgiving!

Like ODM1

There are still a few issues with the LikeODM1 library:

  • the series is not being generated correctly
  • there are no relationships or foreign keys between the other LikeODM1 tables.

Consolidate List of API Functions

After looking at the existing code, we have decided on a major consolidation of the functions in the API to avoid complexity and repetition. For a description of the planned signatures (inputs and outputs) for each of the functions, click here.

Consolidated List of ODM2 API Low-level Get Functions

  • getActions()
  • getAffiliations()
  • getDatasets()
  • getEquipment()
  • getMethods()
  • getModels()
  • getOrganizations()
  • getPeople()
  • getProcessingLevels()
  • getRelatedActions()
  • getRelatedModels()
  • getRelatedSamplingFeatures()
  • getResults()
  • getResultValues()
  • getSamplingFeatures()
  • getSimulations()
  • getUnits()
  • getVariables()

Consolidated List of ODM2 API Low-level Create Functions

  • createVariable()
  • createMethod()
  • createProcessingLevel()
  • createSamplingFeature()
  • createUnit()
  • createOrganization()
  • createPerson()
  • createAffiliation()
  • createDataset()
  • createDatasetResults()
  • createAction()
  • createRelatedAction()
  • createResult()
  • createResultValues()
  • createSamplingFeature()
  • createSpatialReference()
  • createModel()
  • createRelatedModel()
  • createSimulation()

For Reference - Functions that are being removed

getVariableById - Covered by getVariables (passing in a VariableID)
getVariableByCode - covered by getVariables (passing in a VariableCode)
getResultById - covered by getResults (passing in a ResultID)
getMethodById - covered by getMethods (passing in a MethodID)
getMethodByCode - covered by getMethods (passing in a MethodCode)
getProcessingLevelById - covered by getProcessingLevels (passing in a ProcessingLevelID)
getProcessingLevelByCode - covered by getProcessingLevels (passing in a ProcessingLevelCode)
getSamplingFeatureById - Covered by getSamplingFeatures (passing in a SamplingFeatureID)
getSamplingFeatureByCode - Covered by getSamplingFeatures (Passing in a SamplingFeatureCode)
getSamplingFeaturesByType - Covered by getSamplingFeatures (Passing in a SamplingFeatureType)
getSamplingFeatureByGeometry - Covered by getSamplingFeatures (passing in a GeometryType)
getGeometryTest - I don't know what this is and don't think we need it
getUnitById - Covered by getUnits (passing in a UnitsID)
getUnitByName - Covered by getUnits (passing in a UnitsName)
getOrganizationById - Covered by getOrganizations (passing in an OrganizationID)
getOrganizationByCode - Covered by getOrganizations (passing in an OrganizationCode)
getPersonById - Covered by getPeople (passing in a PersonID)
getPersonByName - Covered by getPeople (passing in a name)
getAffiliationByPersonAndOrg - why do we need this? Covered by getAffiliations?
getAffiliationsByPerson - Covered by getAffiliations (passing in a PersonID)
getResultByActionID - Covered by getResults (passing in an ActionID)
getResultByID - Covered by getResults
getResultAndGeomByID - Covered by getResults (passing in a ResultID)
getResultAndGeomByActionID - covered by getResults (passing in an ActionID)
getResultValidDateTime - covered by getting the result and querying the metadata
getDatasetByCode - Covered by getDatasets (passing in a DatasetCode)
getAllDataQuality - what is this for?
getAllEquipment - covered by getEquipment
getCitations - why do we need this independently?
getTimeSeriesResults - covered by getResults
getTimeSeriesResultByResultId - Covered by getResults (passing in a ResultID)
getTimeSeriesResultbyCode - Results don't have a code, so not sure what this would do anyway
getTimeSeriesResultValues - covered by getResultValues
getTimeSeriesResultValuesByResultId - covered by getResultValues (passing in a ResultID)
getTimeSeriesResultValuesByCode - results don't have a code so this wouldn't work anyway
getTimeSeriesResultValuesByTime - covered by getResultValues (passing in a ResultID and a time period)
getAllSites - Covered by getSamplingFeatures (passing a type of "Site")
getSiteBySFId - covered by getSamplingFeatures (passing in a SamplingFeatureID)
getSiteBySFCode - covered by getSamplingFeatures (passing in a SamplingFeatureCode)
getSpatialReferenceByCode - don't think we need this
getAllDeploymentAction - covered by getActions (passing in an ActionType)
getDeploymentActionById - covered by getActions (passing in an ActionID)
getDeploymentActionByCode - Actions do not have code, so this one wouldn't work anyway
getAllModels - covered by getModels
getModelByCode - covered by getModels (passing in a ModelCode)
getAllSimulations - covered by getSimulations
getSimulationByName - covered by getSimulations (passing in a Simulation Name)
getSimulationByActionID - covered by getSimulations (passing in an ActionID)
getRelatedModelsByID - covered by getRelatedModels (Passing in a ModelID)
getRelatedModelsByCode - covered by getRelatedModels (passing in a ModelCode)
getResultsBySimulationID - covered by getResults (passing in an ActionID)
createTimeSeriesResult - covered by createResult
createTimeSeriesResultValues - covered by createResultValues
createSite - covered by createSamplingFeature
createDeploymentAction - covered by createAction

send list into get functions

Add the capability to send in a list of IDs, codes, or types to a get query, instead of just a single value. For example, getResults() could accept either a single ID or a list of IDs. (The original issue illustrated the two call styles with screenshots from 2016-03-24.)
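A sketch of the argument normalization that would let each get function accept either a scalar or a list (the helper name as_list is illustrative, not existing API):

```python
def as_list(value):
    """Wrap a scalar in a list; pass sequences through; leave None alone."""
    if value is None:
        return None
    if isinstance(value, (list, tuple, set)):
        return list(value)
    return [value]

# a SQLAlchemy filter could then always use, e.g.:
#   query.filter(Results.ResultID.in_(as_list(ids)))
print(as_list(1), as_list([1, 2, 3]))
```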

Read service fails for simulation and model functions

The readService functions fail for most Simulation, Model, and RelatedModel functions.

  • getAllModels
  • getModelByCode
  • getAllSimulations
  • getSimulationByName
  • getSimulationByActionID
  • getRelatedModelsByID
  • getRelatedModelsByCode
  • getResultsBySimulationID

Geoalchemy raises AssertionError

Geoalchemy raises an AssertionError when using the API's getDetailedResultInfo method. As a work-around, I can comment out line 88 in base.py, but then it will not actually create a geometry.

  File "/home/denver/Documents/ODM2PythonAPI/src/api/ODM2/services/readService.py", line 195, in getDetailedResultInfo
    for r,s,m,v,p,u in q.all():
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2398, in all
    return list(self)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 84, in instances
    util.raise_from_cause(err)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 72, in instances
    for row in fetch]
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 426, in _instance
    loaded_instance, populate_existing, populators)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 484, in _populate_full
    dict_[key] = getter(row)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/geoalchemy/geometry.py", line 27, in process
    return DialectManager.get_spatial_dialect(dialect).process_result(value, self)
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/geoalchemy/mssql.py", line 311, in process_result
    return MSPersistentSpatialElement(WKBSpatialElement(value, type.srid))
  File "/home/denver/miniconda/envs/SDL-env/lib/python2.7/site-packages/geoalchemy/base.py", line 88, in __init__
    assert isinstance(desc, (basestring, buffer))
AssertionError

createSimulation fails

createService.createSimulation fails with:

OperationalError: (OperationalError) table simulations has no column named outputdatasetid u'INSERT INTO simulations (actionid, simulationname, simulationdescription, simulationstartdatetime, simulationstartdatetimeutcoffset, simulationenddatetime, simulationenddatetimeutcoffset, timestepvalue, timestepunitsid, inputdatasetid, outputdatasetid, modelid) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' (1, 'My Simulation', 'Some model descipription', '2014-03-01', -6, '2014-03-01', -6, 1.0, 1, None, None, 1)

It is looking for a column called outputdatasetid; however, this column doesn't exist in the ODM2 schema: https://github.com/ODM2/ODM2/blob/master/schemas/ODM2_DBWrench_Schema.xml.

To fix:
Remove OutputDataSetID from the models.Simulations table
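The failure can be reproduced in miniature with the stdlib sqlite3 module (the table is stripped down to two columns for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# schema WITHOUT the column the model tries to INSERT
conn.execute('CREATE TABLE simulations (actionid INTEGER, simulationname TEXT)')
try:
    conn.execute('INSERT INTO simulations (actionid, simulationname, outputdatasetid) '
                 'VALUES (?, ?, ?)', (1, 'My Simulation', None))
except sqlite3.OperationalError as e:
    message = str(e)
    print(message)   # table simulations has no column named outputdatasetid
```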

Make specific RDBMS packages optional rather than required installs

odm2api currently requires the Python packages for all the RDBMS that it supports. In the long run, we should work towards making each of the RDBMS packages optional (except for SQLite, but that's a non-issue); users of odm2api who are never going to use, say, postgresql and mysql, shouldn't have to install those dependencies, even if we make their installation trivial as auto-installed dependencies on the upcoming odm2api conda package.

This is a low-priority issue, for now. I'm just creating it for reference.

createDataSet fails to create dataset

The createDataset function in createService.py fails.

from odm2api.ODM2.services.createService import CreateODM2

writer = CreateODM2(my_local_connection)
myDataset = writer.createDataset(dstype='my dataset',
                                 dscode='my_dataset',
                                 dstitle='this is my dataset',
                                 dsabstract='some description')

IntegrityError: (IntegrityError) NOT NULL constraint failed: Datasets.DatasetUUID u'INSERT INTO datasets (datasetuuid, datasettypecv, datasetcode, datasettitle, datasetabstract) VALUES (?, ?, ?, ?, ?)' (None, None, None, None, 'test')

The createDataset function sets values for the following variables DatasetTypeCV, DatasetCode, DatasetTitle, DatasetAbstract, DatasetUUID. However, these variables do not exist in the models.py DataSets class, instead they are named DataSet*.

To fix, change the variables that are set in createService.createDataset to match the variables defined in models.DataSets
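A plain-Python analogy of why the NOT NULL constraint fires (the class is stripped to two attributes; the real mapped class is in models.py):

```python
# Setting Dataset* names on an object whose mapped attributes are named
# DataSet* just creates new, unmapped attributes; the mapped columns stay
# None, so the INSERT sends NULLs and violates the NOT NULL constraint.
class DataSets(object):
    DataSetUUID = None       # mapped column attribute (DataSet* casing)
    DataSetTypeCV = None

ds = DataSets()
ds.DatasetUUID = 'example-uuid'   # wrong casing: silently makes a NEW attribute
print(ds.DataSetUUID)             # still None
```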

Python 3.4.3 Geoalchemy Error

Attempting to install odm2api on a CentOS 7 machine running Python 3.4.3, using the source code from the Python 3.x compatibility pull request. While odm2api installs fine, it depends on geoalchemy, which raises the following ImportError:

>>> import odm2api
utils imported
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.4/site-packages/odm2api/__init__.py", line 1, in <module>
    from odm2api.ODMconnection import SessionFactory, dbconnection
  File "/usr/lib/python3.4/site-packages/odm2api/ODMconnection.py", line 6, in <module>
    from .ODM2.models import Variables as Variable2, setSchema
  File "/usr/lib/python3.4/site-packages/odm2api/ODM2/models.py", line 7, in <module>
    from geoalchemy import GeometryDDL, GeometryColumn
  File "/usr/lib64/python3.4/site-packages/geoalchemy/__init__.py", line 2, in <module>
    from geoalchemy.base import *
  File "/usr/lib64/python3.4/site-packages/geoalchemy/base.py", line 7, in <module>
    from utils import from_wkt
ImportError: cannot import name 'from_wkt'

This is documented as geoalchemy issue #41; a pull request was created to fix it in November 2015, but it hasn't been touched since January 2016.

Here is the suggested fix from #41:

In base.py change

from utils import ...
from functions import ...

to

from geoalchemy.utils import ...
from geoalchemy.functions import ...

In base.py change any mention of ColumnProperty.ColumnComparator to ColumnProperty.Comparator

In geometry.py replace any calls to the has_key method with the corresponding 'in' expression.
