marshmallow-code / marshmallow-sqlalchemy

SQLAlchemy integration with marshmallow

Home Page: https://marshmallow-sqlalchemy.readthedocs.io

License: MIT License



marshmallow-sqlalchemy's Issues

dump_only ignored for relationship fields

After declaring a relationship field as dump_only, the data is still loaded back into the object.

Model and schema:

class Course(rod.model.db.Model):
    __tablename__ = 'course'

    id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, primary_key=True)
    title = sqlalchemy.schema.Column(sqlalchemy.types.String())

    # Levels of this course
    levels = sqlalchemy.orm.relationship(
        'Level',
        back_populates='course'
    )


class CourseSchema(rod.model.BaseSchema):
    class Meta(rod.model.BaseSchema.Meta):
        model = rod.model.course.Course

    levels = marshmallow_sqlalchemy.field_for(rod.model.course.Course, 'levels', dump_only=True)

Loading the object:

data = {
    'title': 'Test Course',
    'levels': [1, 2, 3]  # Should be ignored
}
course = CourseSchema().load(data).data

print(course.levels)

Expected:

None

Actual:

{InstrumentedList} [<Level object>, <Level object>, <Level object>]
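For reference, the expected dump_only semantics can be sketched with a toy loader (a simplification, not marshmallow's actual implementation): keys declared dump-only should simply be dropped during deserialization.

```python
def toy_load(data, dump_only=()):
    """Toy deserializer: drop any key declared dump-only, as the reporter expects."""
    return {k: v for k, v in data.items() if k not in dump_only}

course = toy_load(
    {"title": "Test Course", "levels": [1, 2, 3]},
    dump_only=("levels",),
)
# course == {"title": "Test Course"}
```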

Generate schemas for abstract models

This is admittedly a very narrow use case, but users may occasionally want to create schemas for abstract models. Since by definition abstract models don't have mappers, the logic used to list columns and relationships won't work. This could be handled easily enough by listing model attributes without using the mapper, although it would be less elegant:

for key in dir(model):
    attr = getattr(model, key)
    if isinstance(attr, (Column, MapperProperty)):
        # ingest column
        ...

The main plausible use case is a user who wants to define an ABC with several concrete subclasses, but only wants to serialize columns defined on the ABC. @sloria is this too narrow to support?
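A runnable toy version of that loop, with stand-in classes for SQLAlchemy's Column and MapperProperty, shows the idea of collecting mapped attributes directly off the class rather than through a mapper:

```python
class Column:           # stand-in for sqlalchemy.Column
    pass

class MapperProperty:   # stand-in for sqlalchemy.orm.interfaces.MapperProperty
    pass

class AbstractModel:
    __abstract__ = True
    id = Column()
    title = Column()
    helper = "not a mapped attribute"

# Collect column-like attributes without consulting a mapper
ingested = {
    key: getattr(AbstractModel, key)
    for key in dir(AbstractModel)
    if isinstance(getattr(AbstractModel, key), (Column, MapperProperty))
}
# ingested contains exactly the Column attributes: 'id' and 'title'
```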

Use with sqlalchemy-1.1 and JSON field type

If you use sqlalchemy 1.1 (b2 in my case) with the new JSON field type, you'll get a type conversion error:

marshmallow_sqlalchemy.exceptions.ModelConversionError: Could not find field column of type <class 'sqlalchemy.sql.sqltypes.JSON'>.

In convert.py, I added sa.JSON: fields.Raw to the end of SQLA_TYPE_MAPPING in the ModelConverter class, and that seems to do the trick (at least so far).

Using marshmallow schema to restrict update fields.

I'm developing an API in Flask.

In my update functions I would like to restrict the fields that can be updated, e.g. I don't want users to be able to change their email at the moment.

To achieve this I have set up a schema (UserSchema) with its fields restricted by a tuple (UserSchemaTypes.UPDATE_FIELDS). The tuple does not include email.

The problem I am having is that email is a required field for User rows in my database.

So when I create a User model object using the schema (users_schema.load(user_json)), an illegal object is added to the SQLAlchemy session.

# Schema to validate the posted fields against
users_schema = UserSchema(only=UserSchemaTypes.UPDATE_FIELDS)
# Attempt to deserialize the posted JSON to a User model object using the schema
user_data = users_schema.load(user_json)
if not user_data.errors:  # update data passed validation
    user_update_obj = user_data.data
    User.update(user_id, vars(user_update_obj))

In my update function itself I then have to remove this illegal object from the session via db.session.expunge_all() as if I do not I receive an OperationalError.

@staticmethod
def update(p_id, data):
    db.session.expunge_all()  # hack I want to remove
    user = User.query.get(p_id)
    for k, v in data.items():
        setattr(user, k, v)
    db.session.commit()

OperationalError received when db.session.expunge_all() is removed:

OperationalError: (raised as a result of Query-invoked autoflush; consider
using a session.no_autoflush block if this flush is occurring prematurely)
(_mysql_exceptions.OperationalError) (1048, "Column 'email' cannot be null") [SQL: u'INSERT INTO user (email, password, active, phone, current_login_at, last_login_at, current_login_ip, last_login_ip, login_count) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: (None, None, 1, '0444', None, None, None, None, None)]

ModelSchema.load(instance=None) won't override .instance

As in ModelSchema.load:

self.instance = instance or self.instance

Because I instantiate schema classes at module level and reuse them in multiple places and across requests, the .instance value set by the first load() persists in subsequent load() calls, even with load(instance=None). This triggers a nasty bug whose outcome depends on request order.

Is there a good reason for this behavior, rather than just setting self.instance from the parameter? Or should I instantiate a new schema object before every load or dump call?
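The difference can be demonstrated with a toy class (a simplification of the behavior described above, not the real ModelSchema):

```python
class SchemaLike:
    """Toy model of the load() behavior discussed above."""
    def __init__(self):
        self.instance = None

    def load_current(self, instance=None):
        # Current behavior: a previously set instance survives load(instance=None)
        self.instance = instance or self.instance
        return self.instance

    def load_proposed(self, instance=None):
        # Proposed behavior: the parameter always wins
        self.instance = instance
        return self.instance

s = SchemaLike()
s.load_current(instance="object from request 1")
stale = s.load_current(instance=None)    # still "object from request 1"

s2 = SchemaLike()
s2.load_proposed(instance="object from request 1")
fresh = s2.load_proposed(instance=None)  # None, as the reporter expects
```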

Allow overriding class with field_for

For example, I have a JSONB field on the model which is currently mapped to Raw. I would like it to map to Dict instead. Ideally I could write:

content = field_for(MyModel, 'content', klass=fields.Dict)

Generate schemas for tables

I would like to be able to do something like this:

users = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
    Column('fullname', String),
)

class UserSchema(TableSchema):
    class Meta:
        table = users

Because sometimes you want to use SQLAlchemy, but you don't want an ORM.

Validating multi-table json

Is there anyway of using marshmallow-sqlalchemy to validate json which contains data for multiple tables?

Example:
Given 2 tables, Question and InputQuestion, where InputQuestion is an extension of Question,
the following JSON is posted:

{  
    "question": {
        "question_type_id": 1,
        "client_date_created": "17:25:12.517000",
        "company_id": 74
         },
    "input_question": {
        "input_question_type_id": 1,
        "require_input_if_positive": false,
        "device_input_type_id": 1,
        "require_photo_if_positive": false
    }
}

My naive and unsuccessful attempt to validate this was to set up a schema with nested instances of ModelSchema classes.

class Linked_Checklist_Input_Question_Schema(Schema):
    question = fields.Nested(Checklist_Question_Schema)
    input_question = fields.Nested(Checklist_Input_Question)

# Test the link schema
schema = Linked_Checklist_Input_Question_Schema()
# Load the example JSON
result = schema.load(data)
# Which gives us as result.errors:
{u'_schema': [u'Invalid type.']}


Why is session part of the declarative API?

Hi,

My SQLAlchemy models are totally decoupled from the ways they are accessed (so from both an engine and a session). My session objects are created depending on the Python process that makes use of the models (for instance, I use a scoped_session for my web app and a plain Session object for one-off scripts).

This makes it impossible to use this library. Why is it a requirement to pass the session to the model declaration?

Non-default column name causes get_primary_key to fail

Say you have a model:

class Author(Base):
    __tablename__ = 'Author'

    author_id = Column("AuthorId", Integer, primary_key=True)
    name = Column("Name", Unicode(160), nullable=False)

get_primary_key will return AuthorId as the primary key, but the attribute that's actually desired is author_id. When I go to serialize a BookSchema (with a many relationship on authors), the list of Author primary keys incorrectly gets filled with None. Using the snippet below instead solves the issue for me:

from sqlalchemy.orm import ColumnProperty, class_mapper

def get_primary_columns(model):
    """Get primary key columns for a SQLAlchemy model.

    :param model: SQLAlchemy model class
    """
    primary_keys = []
    for primary_key in class_mapper(model).primary_key:
        for prop in class_mapper(model).iterate_properties:
            if isinstance(prop, ColumnProperty):
                if prop.columns:
                    if prop.columns[0].compare(primary_key):
                        primary_keys.append(prop)
                        break
    return primary_keys

Would love some input here on whether this all makes sense.

schema.load no longer takes the model's default values into consideration

As of today (12/9), loading a dictionary into a model that has default fields no longer uses the SQLAlchemy column defaults and throws errors.

Example below (Python 3):

from datetime import datetime

from app import db
from app.models.base import Base as SQLBASE
from app.schemas import Base

__all__ = ["TestModel"]

class TestModel(SQLBASE):
    __tablename__ = 'test_table'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Unicode(255), nullable=False, unique=True)
    created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    updated_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)

class TestModelSchema(Base):

    class Meta(Base.Meta):
        dateformat = "%Y-%m-%d"
        model = TestModel

test_schema = TestModelSchema()

class TestSchemaFunctions:

    def test_no_defaults(self):
        dict_model = {"name": "TestName"}
        results = test_schema.load(dict_model)
        print(results.errors, results.data)
        # (Pdb) results.errors
        # {'updated_at': ['Missing data for required field.'], 'created_at': ['Missing data for required field.'], 'id': ['Missing data for required field.']}
        # (Pdb) results.data
        # {'name': 'TestName'}

Length validation fails on UUID fields

It appears that schemas for models with custom column types whose python_type is uuid.UUID get a Length validator set on them. This fails when validator.Length.__call__ calls len() on the UUID.

The column type is basically the example from the SQLAlchemy docs, with the addition of a python_type property that returns uuid.UUID.

I could just use the sqlalchemy.dialects.postgresql.UUID, but I don't want to tie myself to just postgresql. Is there a recommended way for handling custom columns with this library?

Stack trace:

../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow_sqlalchemy/schema.py:188: in validate
    return super(ModelSchema, self).validate(data, *args, **kwargs)
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/schema.py:574: in validate
    _, errors = self._do_load(data, many, postprocess=False)
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/schema.py:603: in _do_load
    index_errors=self.opts.index_errors,
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/marshalling.py:283: in deserialize
    index=(index if index_errors else None)
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/marshalling.py:65: in call_and_store
    value = getter_func(data)
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/marshalling.py:276: in <lambda>
    data
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/fields.py:263: in deserialize
    self._validate(output)
../../../../.virtualenvs/server/lib/python3.5/site-packages/marshmallow/fields.py:195: in _validate
    if validator(value) is False:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Length(min=None, max=None, error=None)>, value = UUID('a6f4f8ed-c20d-4f69-b335-88f2292d21e9')

    def __call__(self, value):
>       length = len(value)
E       TypeError: object of type 'UUID' has no len()
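A minimal guard illustrates the failure mode: uuid.UUID has no __len__, so any length-style check must first confirm the value is sized. This is a sketch of a possible workaround, not the library's fix:

```python
import uuid

def is_sized(value):
    """Return True if len(value) is defined. uuid.UUID is not sized,
    which is exactly what raises the TypeError in the trace above."""
    try:
        len(value)
    except TypeError:
        return False
    return True

# is_sized("abc") -> True; is_sized(uuid.uuid4()) -> False
```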

get_schema_for_field using field.root can create multilevel nesting issues

Given a schema hierarchy of album->track->genres, if track.genres is a Related field, tracks.genres.model returns the model of the top level schema (Album) rather than the model of the Related field's parent (Track). This causes an attribute error, as Related.model tries to find the genres attribute on the Album model rather than the Track model.

I can't seem to come up with any justification or advantage to using field.root rather than field.parent in get_schema_for_field. Obviously I can override this problem pretty easily, and am happy to do so, but wanted to try and get some insight as to if there was a specific reason for using field.root.

What is the use of inspect.getmro?

Hi,

I'm currently building a module to integrate marshmallow with mongoengine (see: https://github.com/touilleMan/marshmallow-mongoengine). To do so I started from the codebase of marshmallow-sqlalchemy.

Thus, I was wondering what the use of inspect.getmro is in the schema.get_declared_fields function (see https://github.com/marshmallow-code/marshmallow-sqlalchemy/blob/dev/marshmallow_sqlalchemy/schema.py#L39), given that the loop uses opts = klass.opts (i.e. the real class's options) instead of the current base class's ones (something like opts = base.opts).

Is it an error, or am I missing something?

Deserialize Association Objects

Hi,

I've created an Association Table as described in the SQLAlchemy manual, however when I try to load on a POST request I get this response:

RuntimeError: Model <class 'app.products.models.SoldProducts'> has multiple primary keys

Which is true, as the association table creates a compound key. How should I proceed?

Thanks!

Initial Update

Hi!

This is my first visit to this fine repo, but it seems you have been working hard to keep all dependencies updated so far.

Once you have closed this issue, I'll create separate pull requests for every update as soon as I find one.

That's it for now!

Happy merging!

Custom TYPE_MAPPING

I am extending marshmallow's Schema class for my schemas and extending its TYPE_MAPPING attribute, but because of these lines here, the ModelConverter doesn't pick up my changes: it hardcodes marshmallow's Schema class.

from datetime import datetime
from marshmallow import Schema

[...]

class BaseSchema(Schema):

    TYPE_MAPPING = Schema.TYPE_MAPPING.copy()
    TYPE_MAPPING.update({
        datetime: APIDateTime,
    })

    def __init__(self, *args, **kwargs):
        if 'strict' not in kwargs:
            kwargs['strict'] = True
        super(BaseSchema, self).__init__(*args, **kwargs)

How do I override the mapping in marshmallow-sqlalchemy for datetime => my own custom field?
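The reported behavior can be reproduced with plain classes (stand-ins, not the real marshmallow classes): a converter that reads TYPE_MAPPING off a hardcoded base class never sees a subclass's copy.

```python
class Schema:                                   # stand-in for marshmallow.Schema
    TYPE_MAPPING = {"datetime": "DateTime"}

class BaseSchema(Schema):
    TYPE_MAPPING = dict(Schema.TYPE_MAPPING)
    TYPE_MAPPING["datetime"] = "APIDateTime"    # the override the converter ignores

# What the reporter describes: the converter consults the hardcoded base class...
seen_by_converter = Schema.TYPE_MAPPING["datetime"]      # "DateTime"
# ...instead of the schema class actually in use:
intended = BaseSchema.TYPE_MAPPING["datetime"]           # "APIDateTime"
```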

Invalid syntax

After upgrading to 0.6.0 I get an "Invalid Syntax" exception on line 133 in schema.py:

filters = {
    column.key: data.get(column.key)
    for column in columns
}

"for column in columns" is not a valid syntax.

ModelConversionError with sqlalchemy_utils.types.json

I'm using the json field from sqlalchemy_utils (https://sqlalchemy-utils.readthedocs.org/en/latest/data_types.html#module-sqlalchemy_utils.types.json) and after a recent upgrade (from 0.2 to 0.8, wow) marshmallow_sqlalchemy throws a ModelConversionError.

Traceback:

  File "./manager.py", line 2, in <module>
    from dashboard.tasks import manager
  File "/Users/zz/Dropbox/Workspace/python/trackoji_db/dashboard/tasks.py", line 7, in <module>
    app = create_app()
  File "/Users/zz/Dropbox/Workspace/python/trackoji_db/dashboard/__init__.py", line 38, in create_app
    register_resources(api)
  File "/Users/zz/Dropbox/Workspace/python/trackoji_db/dashboard/__init__.py", line 66, in register_resources
    from .resources.widget import WidgetResource
  File "/Users/zz/Dropbox/Workspace/python/trackoji_db/dashboard/resources/widget.py", line 11, in <module>
    class WidgetSchema(ma.ModelSchema):
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow/schema.py", line 115, in __new__
    dict_cls=dict_cls
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/schema.py", line 57, in get_declared_fields
    declared_fields = mcs.get_fields(converter, opts)
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/schema.py", line 90, in get_fields
    include_fk=opts.include_fk,
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 100, in fields_for_model
    field = self.property2field(prop)
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 118, in property2field
    field_class = self._get_field_class_for_property(prop)
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 177, in _get_field_class_for_property
    field_cls = self._get_field_class_for_column(column)
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 146, in _get_field_class_for_column
    return self._get_field_class_for_data_type(column.type)
  File "/Users/zz/.virtualenvs/trackoji_db/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 169, in _get_field_class_for_data_type
    'Could not find field column of type {0}.'.format(types[0]))
marshmallow_sqlalchemy.exceptions.ModelConversionError: Could not find field column of type <class 'sqlalchemy_utils.types.json.JSONType'>.

Model:

class Widget(db.Model, BaseModel):

    id = db.Column(db.Integer, primary_key=True)
    params = db.Column(JSONType)

Versions:

flask-marshmallow==0.6.2
marshmallow==2.6.0
marshmallow-sqlalchemy==0.8.0
sqlalchemy-utils==0.31.6

Marshmallow not converting *some* datetime objects

map_field takes an id and a field, fetches and converts an object to a dict using marshmallow, and extracts the specified field.

In [59]: map_field('foo', 30)
Out[59]: '2015-08-05T06:49:57.347259+00:00'

In [60]: map_field('foo', 40)
Out[60]: datetime.datetime(2015, 8, 4, 11, 56, 41, 97946)

I have no clue why this is happening, but it occurs only with dates.

Fails when using SQLAlchemy @hybrid_property instead of normal @property

I'm using Flask-SQLAlchemy, Flask-Marshmallow, Marshmallow-SQLAlchemy and Marshmallow.

A couple of my SQLAlchemy models have @hybrid_properties.

Here's my field:

count_direct_child_items = field_for(GearCategory, 'count_direct_child_items', dump_only=True)

When it tries to fetch the hybrid property, it fails because the hybrid property is not in the mapper.

all_orm_descriptors may be useful for fixing this, as mentioned in this SQLAlchemy mailing list issue.

Here's the actual traceback:

Traceback (most recent call last):
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 1804, in get_property
    return self._props[key]
KeyError: 'count_recursive_child_items'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "manage.py", line 15, in <module>
    from app.serializers.gear_serializers import (GearCategorySchema, GearItemSchema,
  File "/Users/jeffwidman/Code/rc/api_rc_flask/app/serializers/gear_serializers.py", line 9, in <module>
    class GearCategorySchema(marshmallow.ModelSchema): # TODO switch to HyperlinkModelSchema
  File "/Users/jeffwidman/Code/rc/api_rc_flask/app/serializers/gear_serializers.py", line 25, in GearCategorySchema
    count_recursive_child_items = field_for(GearCategory, 'count_recursive_child_items', dump_only=True)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/marshmallow_sqlalchemy/convert.py", line 117, in field_for
    prop = model.__mapper__.get_property(property_name)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 1807, in get_property
    "Mapper '%s' has no property '%s'" % (self, key))
sqlalchemy.exc.InvalidRequestError: Mapper 'Mapper|GearCategory|gear_category' has no property 'count_recursive_child_items'

Missing author id serialization from the example

This is basically the example from here: http://marshmallow-sqlalchemy.readthedocs.org/en/latest/

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, relationship
from sqlalchemy import event
from sqlalchemy.orm import mapper

engine = sa.create_engine('sqlite:///:memory:')
session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()

class Author(Base):
    __tablename__ = 'authors'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

    def __repr__(self):
        return '<Author(name={self.name!r})>'.format(self=self)

class Book(Base):
    __tablename__ = 'books'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String)
    author_id = sa.Column(sa.Integer, sa.ForeignKey('authors.id'))
    author = relationship("Author", backref='books')

    def __repr__(self):
        return '<Book(title={self.title!r})>'.format(self=self)

Base.metadata.create_all(engine)

author = Author(name='Chuck Paluhniuk')
session.add(author)

book = Book(title='Fight Club', author=author)
session.add(book)

session.commit()

from marshmallow_sqlalchemy import ModelSchema

class BookSchema(ModelSchema):
    class Meta:
        model = Book
        sqla_session = session


class AuthorSchema(ModelSchema):
    class Meta:
        model = Author
        sqla_session = session

author_schema = AuthorSchema()
book_schema = BookSchema()

# Print the author to show that it's definitely there
print(book.author)

dump_data = author_schema.dump(author).data
print(dump_data)
# {'books': [123], 'id': 321, 'name': 'Chuck Paluhniuk'}

print(author_schema.load(dump_data).data)

# Everything seems fine until:
print(book_schema.dump(book).data)
# {'title': u'Fight Club', 'id': 1, 'author': 1}

Result of the last print:

{'title': u'Fight Club', 'id': 1, 'author': None}

The author should not be None; it should be 1.

Foreign key value disregarded during .load()

Hello,

With roughly the following models:

class Tariff(rod.model.db.Model, rod.model.PersistentMixin):
    id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, primary_key=True)
    title = sqlalchemy.schema.Column(sqlalchemy.types.String())

    # Course this tariff belongs to
    course_id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, sqlalchemy.schema.ForeignKey('course.id'))
    course = sqlalchemy.orm.relationship(
        'Course',
        back_populates='tariffs'
    )

    # Price of this payment plan
    price = sqlalchemy.schema.Column(sqlalchemy.types.Integer)


class Course(rod.model.db.Model, rod.model.PersistentMixin):
    id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, primary_key=True)
    title = sqlalchemy.schema.Column(sqlalchemy.types.String())

    levels = sqlalchemy.orm.relationship(
        'Level',
        back_populates='course'
    )

    tariffs = sqlalchemy.orm.relationship(
        'Tariff',
        back_populates='course'
    )

And the following schemas:

class CourseSchema(rod.model.BaseSchema):
    class Meta(rod.model.BaseSchema.Meta):
        model = rod.model.course.Course

    tariffs = marshmallow.fields.Nested('TariffSchema', many=True, exclude=('course',))

class TariffSchema(rod.model.BaseSchema):
    class Meta(rod.model.BaseSchema.Meta):
        model = rod.model.tariff.Tariff

    course = marshmallow.fields.Nested(CourseSchema)

I'm POSTing the following body:
{"course":{"id":1},"course_id":1,"title":"uu","price":"222"}

To my handler:

    tariff_obj = rod.model.schemas.TariffSchema().load(flask.request.json).data
    rod.model.db.session.add(tariff_obj)
    rod.model.db.session.commit()

    return flask.jsonify(rod.model.schemas.TariffSchema().dump(tariff_obj).data)

However, for some reason, after load()ing the JSON dump, my tariff_obj.course_id is None, and so is tariff_obj.course.id, even though the rest of tariff_obj.course looks like a proper (albeit new) Course instance.

Is this a bug?

Thanks.

API to customize the list of TYPE_MAPPING

I use sqlalchemy_utils, which provides a ChoiceType field (and many others).

I'm able to create a new marshmallow field (a Field subclass) to handle this type with proper _serialize/_deserialize functions, but instead of overriding the attribute one by one in each schema I define, I want to update the type mapping in ModelConverter.

I didn't find how to access it properly. ModelConverter accepts a schema_cls attribute at init
https://github.com/marshmallow-code/marshmallow-sqlalchemy/blob/dev/marshmallow_sqlalchemy/convert.py#L76
but I'm not able to figure out how to use it properly with all these metaclasses ;)

So for now I rely on a monkey patch:

ModelConverter.SQLA_TYPE_MAPPING.update({
    ChoiceType: ChoiceTypeSchema
})

PS: I use flask-marshmallow in my app.
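As an alternative to the monkey patch, a subclass keeps the library-wide mapping untouched. Sketched here with a stand-in class (not the real ModelConverter), assuming the SQLA_TYPE_MAPPING class attribute mentioned above:

```python
class ModelConverter:                 # stand-in for marshmallow_sqlalchemy's converter
    SQLA_TYPE_MAPPING = {"Integer": "Int"}

class MyConverter(ModelConverter):
    # Copy first, so the shared class-level mapping is left untouched
    SQLA_TYPE_MAPPING = dict(ModelConverter.SQLA_TYPE_MAPPING)
    SQLA_TYPE_MAPPING["ChoiceType"] = "ChoiceField"

# The base mapping is unchanged; only the subclass sees the new entry
```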

Support object polymorphism

In my application, I have an API endpoint like so:

/api/activities/

In my database, I have several different types of Activity models, all inheriting from a common class. I made a schema like so:

class ActivitySchema(ModelSchema):
    class Meta:
        model = models.Activity

And then have flask-restful handle the API endpoint itself, querying the database and passing the objects to marshmallow-sqlalchemy.

However, marshmallow-sqlalchemy serializes the objects as if they were all of the base Activity type. It would be nice if there was some support like this:

class ActivitySchema(ModelSchema):
    class Meta:
        model = models.Activity
        generate_polymorphic_schemas = True

Which would use SQLAlchemy's inspection to view all polymorphic subclasses of Meta.model, store them on ActivitySchema, and then use the appropriate class when serializing/deserializing.

Thoughts?

If this is a desirable feature, I can try to make a PR implementing such functionality.

marshmallow-sqlalchemy does not handle NULL foreign keys

marshmallow-sqlalchemy breaks if a foreign key column is NULL/None.

Use the main example from the project's description, insert a book without an author, and try to serialize it.

>>> book = Book(title="Fight Club")
>>> session.add(book)
>>> session.commit()
>>> book_schema = BookSchema()
>>> book_schema.dump(book).data
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/schema.py", line 564, in dump
    **kwargs
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/marshalling.py", line 137, in serialize
    index=(index if index_errors else None)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/marshalling.py", line 56, in call_and_store
    value = getter_func(data)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/marshalling.py", line 131, in <lambda>
    getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/fields.py", line 221, in serialize
    return self._serialize(value, attr, obj)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow/fields.py", line 1180, in _serialize
    value = self.keygetter(value)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/marshmallow_sqlalchemy/convert.py", line 18, in get_pk_from_identity
    _, key = identity_key(instance=obj)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/sqlalchemy/orm/util.py", line 275, in identity_key
    mapper = object_mapper(instance)
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/sqlalchemy/orm/base.py", line 266, in object_mapper
    return object_state(instance).mapper
  File "/home/shreyder/.virtualenvs/mtest/lib/python3.4/site-packages/sqlalchemy/orm/base.py", line 288, in object_state
    raise exc.UnmappedInstanceError(instance)
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'builtins.NoneType' is not mapped

ModelSchema doesn't respect class Meta: ordered=True

When I use ModelSchema and set ordered=True in class Meta, the output isn't in the same order as my schema fields:

Demo:

import marshmallow_sqlalchemy as mas

class GearReviewSchema(mas.ModelSchema):
    class Meta:
        ordered = True

    # schema fields...

gear_review_schema = GearReviewSchema()

review = db.session.query(GearReview).get(3)

gear_review_schema.dump(review) 
# BUG: generates an OrderedDict, but ordering is different than schema fields

However, when I use the normal Schema, it works as expected:

import marshmallow as ma
class GearReviewSchema(ma.Schema):
    class Meta:
        ordered = True

    # schema fields... 

gear_review_schema = GearReviewSchema()

review = db.session.query(GearReview).get(3)

gear_review_schema.dump(review)
# generates an OrderedDict, with ordering same as schema fields

I first noticed this issue using Flask-Marshmallow alongside Marshmallow-SQLAlchemy, but after playing around with it, it looks like the problem is located in Marshmallow-SQLAlchemy.

Feature Request: built-in validator that object already exists in the db (useful for foreign keys)

Some of my APIs let people create new objects, and some of the fields are optional foreign keys. Currently, when a foreign key gets passed in, I have some custom validators that check the database to verify that the FK object actually exists. I could catch the exception, but would rather check it on the initial POST/PATCH.

This strikes me as something that would be useful as a built-in validator for marshmallow-sqlalchemy

I think the implementation would be fairly straightforward, although it's an open question in my mind whether to use session.query(foreign_object).options(load_only(pk_field)).get(pk_id) or something a little more fancy, like:

if not db.session.query(db.exists([GearCategory.id]).where(GearCategory.id == category_id)).scalar():
    raise Exception("Gear Category ID [%s] does not exist" % category_id)

The get() will be far faster when the object is already in the session, but slower than the EXISTS if the query actually has to hit the DB.
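A validator factory along these lines could look like the following sketch, with the database lookup abstracted behind a callable so the EXISTS-vs-get() choice stays pluggable. The names here (fk_exists, lookup) are hypothetical, not a marshmallow-sqlalchemy API:

```python
def fk_exists(lookup):
    """Build a validator: `lookup(value)` returns True if the referenced row exists."""
    def _validate(value):
        if not lookup(value):
            raise ValueError("foreign key %r does not exist" % (value,))
        return True
    return _validate

# Usage with an in-memory stand-in for the database query:
known_category_ids = {1, 2, 3}
validate_category = fk_exists(lambda pk: pk in known_category_ids)
validate_category(2)        # passes
# validate_category(99)     # would raise ValueError
```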

Handle columns without `python_type`

To determine the field class to be used for a SQLAlchemy column, we sometimes check its python_type. But some columns, e.g. Postgres TSVECTOR columns, don't have a python_type and raise a NotImplementedError on trying to access. Some possible fixes:

  • Add TSVECTOR to SQLA_TYPE_MAPPING. This would kind of fix the immediate problem I'm running into, but what's the right marshmallow type to use? We could use Str, but that's not exactly right. Also, this doesn't help if other columns are missing python_type, which might or might not be the case.
  • Allow developers to exclude unusual fields from model conversion, e.g. via the Meta.exclude option in marshmallow. This is more flexible but requires developers to understand that they can / should exclude columns that don't convert to fields.
  • Don't raise an exception on failing to convert a column to a field. Possibly add a strict option to fields_for_model, such that ModelConversionErrors are only raised when strict is true. This is also flexible, but allows developers to silence potentially heterogeneous kinds of exceptions.

I'm submitting a patch for the second option, mostly so I can get some unrelated work done, but I think the third also makes sense, and it might not be bad to use both. What do you think @sloria?

Expose Model @property

I can't work out how to expose model properties created using @property. E.g. as from your examples:

@property
def url(self):
    return url_for('author', id=self.id)

I might want to include this property when returning data. However, the best help I could find from the API reference is using property2field, but in trying to use:

url = property2field(Author.__mapper__.get_property('url'))

I get the following error:

sqlalchemy.exc.InvalidRequestError: Mapper 'Mapper|Post|posts' has no property 'url'

It seems like this ought to be possible, but I can't work it out. Any help appreciated.

Declarative configuration

Another idea that is straight from ColanderAlchemy.

I believe this could be implemented relatively easily, but there are numerous different ways one could go about it, so I think perhaps some discussion is more appropriate than forging ahead with a PR which might turn out not to be the best way at all.

It would be really nice to be able to do something like this:

    id = Column(Integer,
                primary_key=True,
                info={
                    'marshmallow_sqlalchemy': {
                        'dump_only': False
                     }
                })

These would then be processed when constructing the schemas to override the settings. Potentially anything that marshmallow can do could be declared there.

I hacked this in quickly:

    def _add_column_kwargs(self, kwargs, column):
        """Add keyword arguments to kwargs (in-place) based on the passed in
        `Column <sqlalchemy.schema.Column>`.
        """
        if column.nullable:
            kwargs['allow_none'] = True

        if hasattr(column.type, 'enums'):
            kwargs['validate'].append(validate.OneOf(choices=column.type.enums))

        # Add a length validator if a max length is set on the column
        if hasattr(column.type, 'length'):
            kwargs['validate'].append(validate.Length(max=column.type.length))

        if hasattr(column.type, 'scale'):
            kwargs['places'] = getattr(column.type, 'scale', None)

        # Primary keys are dump_only ("read-only")
        if getattr(column, 'primary_key', False):
            kwargs['dump_only'] = True

        if 'marshmallow_sqlalchemy' in column.info:
            settings = column.info['marshmallow_sqlalchemy']

            if 'dump_only' in settings:
                kwargs['dump_only'] = settings['dump_only']

            if 'load_only' in settings:
                kwargs['load_only'] = settings['load_only']

and it worked quite well. It didn't support inheritance, though, which I think we definitely should. That can be achieved relatively easily too: for each property we have the list of applicable columns, so we just need to iterate over them in order from super to sub and apply the settings found in info, giving ordinary inheritance-override behaviour. The same thing could be applied to relationships as well.

Thoughts?

`BIT` column type raises NotImplementedError

I'm using SQLAlchemy to process MySQL tables for an API I'm building, and one of the columns in my table happens to be of type BIT. This apparently isn't supported by Marshmallow-SQLAlchemy, but it is a valid column type in SQLAlchemy's MySQL dialect. Any chance of a fix, or is there an easier workaround in the Marshmallow-SQLAlchemy connector that I'm missing?

deserialize json model in model with marshmallow python

I am trying to deserialize a JSON model.

Models:

class Users(Base):
    __tablename__ = 'users'

    ID = Column(Integer, primary_key=True, autoincrement=True)
    Username = Column(String(50), nullable=False, unique=True)
    Email = Column(String(200), nullable=False, unique=True)
    Name = Column(String(20))
    Password = Column(String(50))
    Date = Column(Date)

class Logs(UserBase):
    __tablename__ = 'logs'

    ID = Column(Integer, primary_key=True, autoincrement=True)
    UserLog = Column(String(50))
    log_id = Column(Integer, ForeignKey('users.ID'))
    log = relationship("Users", lazy="joined", backref=backref('logs'))

Schemas:

class LogsSchema(ModelSchema):
    class Meta:
        model = Logs
        sqla_session = Session

class UsersSchema(ModelSchema):
    logs = fields.Nested(LogsSchema,exclude=('log', ))
    class Meta:
        model = Users
        sqla_session = Session

The JSON model request:

<QueryDict: {u'Username': [u'ramin world'], u'logs': [u'[UserLog=test]'], u'Date': [u'null'], u'Password': [u'1234'], u'Email': [u'[email protected]'], u'Name': [u'Farajpour']}>

The result of loading the JSON request:

{'Username': u'ramin world', 'Password': u'1234', 'Name': u'Farajpour', 'Email': u'[email protected]'}

As you can see, the u'logs' value [u'[UserLog=test]'] is missing from the result.

Idea: fields.Relationship

I often use fields.Nested to handle relationships (fields.Related doesn't do what I need). This idea for a fields.Relationship field creates a fields.Nested whose schema is derived from the parent schema and the SQLAlchemy class of the relation. It sets "many" automatically, and also excludes the foreign key back to the parent (which is redundant). Example usage:

class OrderSchema(ModelSchema):
    class Meta:
        model = db.Order
        sqla_session = db.session
    items = Relationship()

This is the code I'm using:

class Relationship(fields.Nested):
    def __init__(self, **kwargs):
        super(Relationship, self).__init__(None, **kwargs)

    @property
    def schema(self):
        if not hasattr(self, '_schema'):
            property = getattr(self.root.Meta.model, self.attribute or self.name).property
            self.many = property.uselist
            class InnerSchema(ModelSchema):
                class Meta(self.root.Meta):
                    model = property.mapper.class_
                    exclude = [c.name for c in property.remote_side]
            self._schema = InnerSchema()
        return self._schema

What I would really like to do - and I'm not sure how at the minute - is have this span multiple levels of relationship.

Access to relationship model

Hi,

author = Author(name='Chuck Paluhniuk')
book = Book(title='Fight Club', author=author)
session.add(author)
session.add(book)
session.commit()

author_schema.dump(author).data

output model book:

# {'books': [123], 'id': 321, 'name': 'Chuck Paluhniuk'}

How do I access the Book models when dumping a list of authors? For example:

lst = session.query(Author).all()
author_schema.dump(lst, many=True).data

output :

# {'books': [123], 'id': 321, 'name': 'Chuck Paluhniuk'}

But I need access to the Book models themselves, not just their ids.

Custom SQLAlchemy types with a python_type that is not a class fail on init

Let me start by saying this might not be a bug.

Looks like kelvinhammond@12da838 breaks compatibility with sqlalchemy-utils (particularly PhoneNumberType). It seems like a bug in sqlalchemy-utils, where python_type is a bound method (rather than a class).

I have a pull request which should fix the issue:
kvesteri/sqlalchemy-utils#248

I think this is a bug in sqlalchemy-utils, but it might also be reasonable to be a little more forgiving by gracefully handling cases where issubclass will fail. Tough call - on the one hand it is good to be accommodating but on the other hand we should trust that people are adhering to what appears to be a pretty definitive interface definition:

http://docs.sqlalchemy.org/en/latest/core/type_api.html#sqlalchemy.types.TypeEngine.python_type

Here is the relevant stack trace:

  File "marshmallow_sqlalchemy/convert.py", line 219, in _add_column_kwargs
    if not python_type or not issubclass(python_type, uuid.UUID):
TypeError: issubclass() arg 1 must be a class

(Pdb) python_type
<bound method PhoneNumberType.python_type of PhoneNumberType(length=20)>
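A defensive guard on the converter side could check isinstance(..., type) before calling issubclass, so a bound-method python_type is treated as "not a UUID class" instead of raising. A minimal sketch (PhoneNumberLike mimics the sqlalchemy-utils behaviour; it is not the real class):

```python
import uuid

def is_uuid_type(python_type):
    # issubclass() raises TypeError unless its first argument is a class,
    # so filter out non-classes (e.g. bound methods) first
    return isinstance(python_type, type) and issubclass(python_type, uuid.UUID)

class PhoneNumberLike:
    # mimics the buggy interface: python_type defined as a method,
    # not a property returning a class
    def python_type(self):
        return str
```

With this guard, `is_uuid_type(PhoneNumberLike().python_type)` quietly returns False rather than raising TypeError.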

Interface for customizing relationship columns

Back in 0.1, we had a basic interface for customizing the behavior of relationship fields via a keygetter passed to QuerySelect or QuerySelectList. That wound up being a less than ideal way to model relationships, which is why we introduced the Related field. Still, it would still be helpful to expose some interface for customizing related fields without overriding every Related by hand. For example, users may want to use something like StringRelatedField from DRF or Relationship from marshmallow-jsonapi.

At the moment, it seems like you'd subclass ModelConverter and override some combination of _get_field_class_for_property and _add_relationship_kwargs to do this. Which works, but seems kind of complicated, and requires users to know a lot about private APIs. What about adding options to ModelSchemaOpts for setting the field class for relationships, defaulting to Related? Or adding an optional related_factory that's invoked on converting relationships?

sqlalchemy marshmallow avoid loading data into session

Is there a way to avoid adding data to the session when using marshmallow-sqlalchemy's load()?

We manage the objects ourselves and add them to the session only when required, but for validation alone I still need to use the load() method provided by marshmallow-sqlalchemy.

Simple use case unsupported

Hello,

Imagine a trivial use case in which we:

  • Read a model from the database
  • Serialize it into JSON
  • Receive updated JSON (from the web)
  • Deserialize it into an SQLAlchemy object
  • Persist the updated object in the database

Let's take a sample model:

class Group(rod.model.db.Model):
    __tablename__ = 'group'

    id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, primary_key=True)
    title = sqlalchemy.schema.Column(sqlalchemy.types.String(100))

    teacher_id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, sqlalchemy.schema.ForeignKey('staff.id'))
    teacher = sqlalchemy.orm.relationship(
        'Staff',  # References the Staff class below
        back_populates='groups'
    )

class Staff(rod.model.db.Model, rod.model.PersistentMixin):
    __tablename__ = 'staff'

    id = sqlalchemy.schema.Column(sqlalchemy.types.Integer, primary_key=True)

    # Personal Information
    name = sqlalchemy.schema.Column(sqlalchemy.types.String)
    groups = sqlalchemy.orm.relationship(
        'Group',
        back_populates='teacher'
    )

Since marshmallow-sqlalchemy converts relation fields to IDs between serialization and deserialization, the JSON string {"teacher": 1}, when deserialized on Group, becomes Group.teacher: Object<Teacher>, which looks like a new Teacher object, although it may already exist in the database.

The result, when you try to simply save the Group instance, is this:

FlushError: New instance <Group at 0x1088fd5d0> with identity key (<class 'rod.model.group.Group'>, (26,)) conflicts with persistent instance <Group at 0x1089c74d0>
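With plain SQLAlchemy, one way around this FlushError is session.merge(), which reconciles a transient object carrying an existing primary key with the persistent row instead of treating it as new (marshmallow-sqlalchemy's load(..., instance=...) is another route). A minimal sketch with an in-memory SQLite database:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Group(Base):
    __tablename__ = "group"
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String(100))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Group(id=26, title="Original title"))
session.commit()

# A deserialized payload becomes a *transient* object with an existing PK...
updated = Group(id=26, title="Updated title")
# ...so session.add(updated) would raise FlushError; merge() reconciles instead
merged = session.merge(updated)
session.commit()
```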

Expand docs clarifying the benefits of using `field_for()` rather than typical `fields.Str()`

I'm still trying to understand the benefits of using marshmallow-sqlalchemy above what marshmallow already gives me.

The obvious one is auto-generation of fields, but for my schemas, most of the fields require additional arguments such as dump_only or required, so this doesn't add much for me.

I checked the docs, but couldn't find much. However, I was just reading the code and noticed that a length validator is automatically included when sqlalchemy has a length constraint on the underlying db column. Similarly, #47 (comment) mentions that fields that aren't allowed to be null have marshmallow required added.

Both are clever optimizations--and I think it'd be worth mentioning in the docs.

Ultimately, it would be nice if there were a clear summary of 'here are the benefits of this extension over and above vanilla marshmallow', as well as 'here's what field_for(column) provides over and above a typical fields.Str() or fields.Integer()'.

Schema deserialisation with defined session and nested field throws an exception

Not sure if this is intended functionality, however the code below throws ValueError: Deserialization requires a session when trying to load data into a nested object.

I would expect the session passed to load() to propagate to nested fields; however, this doesn't seem to be the case. Is this by design?

If i remove followup (the nested field) from the only= property, the data loads fine from the non-nested field.

class FollowUp(ModelSchema):
    reasonCode = fields.String(attribute='reason_code')
    comment = fields.String()

class Occurence(ModelSchema):
    #... other fields before ...
    followup = fields.Nested(FollowUp())
    status = fields.String()

# Inside my request handler:
obj = DBOccurence.query.get_or_404(occ_id)
res = Occurence(only=['status', 'followup']).load(
            request.json,
            instance=obj,
            session=current_app.db.session
        )

Incorrect SQLAlchemy dependency

The convert.py file, at line 58 says:

postgresql.JSONB: fields.Raw

But postgresql.JSONB is not supported until SQLAlchemy 0.9.7. Yet, the setup.py lists SQLAlchemy>=0.7. So, any version between 0.7 and 0.9.6 is not really supported.

Using Python built-in Enum type in sqlalchemy.Enum column type produces wrong oneOf validation

class Choices(enum.Enum):
    a = 'a'
    b = 'b'
    c = 'c'

class MyModel(db.Model):
    # ...
    selected_choice = db.Column(db.Enum(Choices), nullable=False)

class MyModelSchema(ModelSchema):
    class Meta:
        model = MyModel
        fields = ['selected_choice']

The MyModelSchema end up having

selected_choice.validate[0].choices == (<enum 'ChoicesEnum'>,)

(notice a tuple), because MyModel.selected_choice.type.enums stores a tuple of an enum for some reason...

BTW, constructing a marshmallow field with oneOf(ChoicesEnum) (omit the unnecessary tuple wrapping) doesn't help much:

>>> f = fields.Str(validate=[
            validate.OneOf(ChoicesEnum),
         ])
>>> f.serialize('a', ChoicesEnum.a)
'ChoicesEnum.a'

And I couldn't get it to deserialize.

A related discussion that I have found: #2

ModelSchema default behaviour needs a lot of overriding

The current behaviour of ModelSchema is not that useful to me. Where I have, say, an "order" many-to-one relationship, by default it dumps a field "order" with the order_id, and this serialised data does not then load. It would be more helpful to dump this as field "order_id". one-to-many relationships are similarly not that useful.

Right now I am using a lot of excludes and manually defined fields, but this is unnecessary. In fact, marshmallow-sqlalchemy already has much more helpful default logic, on TableSchema, especially with include_fk=True. I can't directly use TableSchema as it only supports dump and I also need to load.

However, a one line change to schema.py really helps:

return converter.fields_for_model(opts.model, fields=opts.fields, exclude=opts.exclude)
-->
return converter.fields_for_table(opts.model.__table__, include_fk=True, fields=opts.fields, exclude=opts.exclude)

I realise this would break compatibility, but it seems a much more helpful default to me.

unnecessary selects with relationships...

It looks like marshmallow-sqlalchemy makes no use of objects eager-loaded with lazy='joined'?

I'm dumping a schema with many=True to create a list of all objects, and I eager-load the only relationship (a list of URLs) with a joined load so it doesn't issue a query per row, but a SELECT is emitted anyway.

Checking unique constraints

Hi,

I have just a small question: is it actually possible to check a unique constraint during validation?

In my current application, loading the JSON data goes fine, but when I try to persist the data I get an IntegrityError, since the unique field's value is already taken by another row.

ps: Thank you for this module that is really useful
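One pragmatic approach is to probe for an existing row before persisting, e.g. from a schema-level validator. A sketch with plain SQLAlchemy and a hypothetical User model (note the check is inherently racy, so the database-level unique constraint should stay as the backstop):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):  # hypothetical model with a unique column
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    email = sa.Column(sa.String(200), unique=True, nullable=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(email="taken@example.com"))
session.commit()

def email_is_taken(session, email):
    # cheap EXISTS probe, suitable for raising a ValidationError early
    return session.query(sa.exists().where(User.email == email)).scalar()
```

A validator would call `email_is_taken()` during load and raise a ValidationError instead of letting the commit fail with an IntegrityError.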

Polymorphic Relationships only return fields defined in the abstracted class

Currently, when querying and serializing against a polymorphic model, the returned attributes are only the ones defined in the parent schema, and not those of any child schema, which could include specific validations or other post_/pre_ processing steps that need to run. We've implemented our own solution with our own models: essentially a new custom field on the abstract model's schema that selects the current polymorphic target. However, it is specific to our codebase. I can generalize it and provide a pull request.
