Giter Site home page Giter Site logo

dataclasses-json's Introduction

Dataclasses JSON

This library provides a simple API for encoding and decoding dataclasses to and from JSON.

It's very easy to get started.

README / Documentation website. Features a navigation bar and search functionality, and should mirror this README exactly -- take a look!

Quickstart

pip install dataclasses-json

from dataclasses import dataclass
from dataclasses_json import dataclass_json


@dataclass_json
@dataclass
class Person:
    name: str


person = Person(name='lidatong')
person.to_json()  # '{"name": "lidatong"}' <- this is a string
person.to_dict()  # {'name': 'lidatong'} <- this is a dict
Person.from_json('{"name": "lidatong"}')  # Person(1)
Person.from_dict({'name': 'lidatong'})  # Person(1)

# You can also apply _schema validation_ using an alternative API
# This can be useful for "typed" Python code

Person.from_json('{"name": 42}')  # This is ok. 42 is not a `str`, but
                                  # dataclass creation does not validate types
Person.schema().loads('{"name": 42}')  # Error! Raises `ValidationError`

What if you want to work with camelCase JSON?

# same imports as above, with the additional `LetterCase` import
from dataclasses import dataclass
from dataclasses_json import dataclass_json, LetterCase

@dataclass_json(letter_case=LetterCase.CAMEL)  # now all fields are encoded/decoded from camelCase
@dataclass
class ConfiguredSimpleExample:
    int_field: int

ConfiguredSimpleExample(1).to_json()  # {"intField": 1}
ConfiguredSimpleExample.from_json('{"intField": 1}')  # ConfiguredSimpleExample(1)

Supported types

It's recursive (see caveats below), so you can easily work with nested dataclasses. In addition to the supported types in the py to JSON table, this library supports the following:

  • any arbitrary Collection type is supported. Mapping types are encoded as JSON objects and str types as JSON strings. Any other Collection types are encoded into JSON arrays, but decoded into the original collection types.

  • datetime objects. datetime objects are encoded to float (JSON number) using timestamp. As specified in the datetime docs, if your datetime object is naive, it will assume your system local timezone when calling .timestamp(). JSON numbers corresponding to a datetime field in your dataclass are decoded into a datetime-aware object, with tzinfo set to your system local timezone. Thus, if you encode a datetime-naive object, you will decode into a datetime-aware object. This is important, because encoding and decoding won't strictly be inverses. See this section if you want to override this default behavior (for example, if you want to use ISO).

  • UUID objects. They are encoded as str (JSON string).

  • Decimal objects. They are also encoded as str.

The latest release is compatible with both Python 3.7 and Python 3.6 (with the dataclasses backport).

Usage

Approach 1: Class decorator

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Person:
    name: str

lidatong = Person('lidatong')

# Encoding to JSON
lidatong.to_json()  # '{"name": "lidatong"}'

# Decoding from JSON
Person.from_json('{"name": "lidatong"}')  # Person(name='lidatong')

Note that the @dataclass_json decorator must be stacked above the @dataclass decorator (order matters!)

Approach 2: Inherit from a mixin

from dataclasses import dataclass
from dataclasses_json import DataClassJsonMixin

@dataclass
class Person(DataClassJsonMixin):
    name: str

lidatong = Person('lidatong')

# A different example from Approach 1 above, but usage is the exact same
assert Person.from_json(lidatong.to_json()) == lidatong

Pick whichever approach suits your taste. Note that there is better support for the mixin approach when using static analysis tools (e.g. linting, typing), but the differences in implementation will be invisible in runtime usage.

How do I...

Use my dataclass with JSON arrays or objects?

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Person:
    name: str

Encode into a JSON array containing instances of my Data Class

people_json = [Person('lidatong')]
Person.schema().dumps(people_json, many=True)  # '[{"name": "lidatong"}]'

Decode a JSON array containing instances of my Data Class

people_json = '[{"name": "lidatong"}]'
Person.schema().loads(people_json, many=True)  # [Person(name='lidatong')]

Encode as part of a larger JSON object containing my Data Class (e.g. an HTTP request/response)

import json

response_dict = {
    'response': {
        'person': Person('lidatong').to_dict()
    }
}

response_json = json.dumps(response_dict)

In this case, we do two steps. First, we encode the dataclass into a python dictionary rather than a JSON string, using .to_dict.

Second, we leverage the built-in json.dumps to serialize our dataclass into a JSON string.

Decode as part of a larger JSON object containing my Data Class (e.g. an HTTP response)

import json

response_dict = json.loads('{"response": {"person": {"name": "lidatong"}}}')

person_dict = response_dict['response']

person = Person.from_dict(person_dict)

In a similar vein to encoding above, we leverage the built-in json module.

First, call json.loads to read the entire JSON object into a dictionary. We then access the key of the value containing the encoded dict of our Person that we want to decode (response_dict['response']).

Second, we load in the dictionary using Person.from_dict.

Encode or decode into Python lists/dictionaries rather than JSON?

This can be by calling .schema() and then using the corresponding encoder/decoder methods, ie. .load(...)/.dump(...).

Encode into a single Python dictionary

person = Person('lidatong')
person.to_dict()  # {'name': 'lidatong'}

Encode into a list of Python dictionaries

people = [Person('lidatong')]
Person.schema().dump(people, many=True)  # [{'name': 'lidatong'}]

Decode a dictionary into a single dataclass instance

person_dict = {'name': 'lidatong'}
Person.from_dict(person_dict)  # Person(name='lidatong')

Decode a list of dictionaries into a list of dataclass instances

people_dicts = [{"name": "lidatong"}]
Person.schema().load(people_dicts, many=True)  # [Person(name='lidatong')]

Encode or decode from camelCase (or kebab-case)?

JSON letter case by convention is camelCase, in Python members are by convention snake_case.

You can configure it to encode/decode from other casing schemes at both the class level and the field level.

from dataclasses import dataclass, field

from dataclasses_json import LetterCase, config, dataclass_json


# changing casing at the class level
@dataclass_json(letter_case=LetterCase.CAMEL)
@dataclass
class Person:
    given_name: str
    family_name: str
    
Person('Alice', 'Liddell').to_json()  # '{"givenName": "Alice"}'
Person.from_json('{"givenName": "Alice", "familyName": "Liddell"}')  # Person('Alice', 'Liddell')

# at the field level
@dataclass_json
@dataclass
class Person:
    given_name: str = field(metadata=config(letter_case=LetterCase.CAMEL))
    family_name: str
    
Person('Alice', 'Liddell').to_json()  # '{"givenName": "Alice"}'
# notice how the `family_name` field is still snake_case, because it wasn't configured above
Person.from_json('{"givenName": "Alice", "family_name": "Liddell"}')  # Person('Alice', 'Liddell')

This library assumes your field follows the Python convention of snake_case naming. If your field is not snake_case to begin with and you attempt to parameterize LetterCase, the behavior of encoding/decoding is undefined (most likely it will result in subtle bugs).

Encode or decode using a different name

from dataclasses import dataclass, field

from dataclasses_json import config, dataclass_json

@dataclass_json
@dataclass
class Person:
    given_name: str = field(metadata=config(field_name="overriddenGivenName"))

Person(given_name="Alice")  # Person('Alice')
Person.from_json('{"overriddenGivenName": "Alice"}')  # Person('Alice')
Person('Alice').to_json()  # {"overriddenGivenName": "Alice"}

Handle missing or optional field values when decoding?

By default, any fields in your dataclass that use default or default_factory will have the values filled with the provided default, if the corresponding field is missing from the JSON you're decoding.

Decode JSON with missing field

@dataclass_json
@dataclass
class Student:
    id: int
    name: str = 'student'

Student.from_json('{"id": 1}')  # Student(id=1, name='student')

Notice from_json filled the field name with the specified default 'student' when it was missing from the JSON.

Sometimes you have fields that are typed as Optional, but you don't necessarily want to assign a default. In that case, you can use the infer_missing kwarg to make from_json infer the missing field value as None.

Decode optional field without default

@dataclass_json
@dataclass
class Tutor:
    id: int
    student: Optional[Student] = None

Tutor.from_json('{"id": 1}')  # Tutor(id=1, student=None)

Personally I recommend you leverage dataclass defaults rather than using infer_missing, but if for some reason you need to decouple the behavior of JSON decoding from the field's default value, this will allow you to do so.

Handle unknown / extraneous fields in JSON?

By default, it is up to the implementation what happens when a json_dataclass receives input parameters that are not defined. (the from_dict method ignores them, when loading using schema() a ValidationError is raised.) There are three ways to customize this behavior.

Assume you want to instantiate a dataclass with the following dictionary:

dump_dict = {"endpoint": "some_api_endpoint", "data": {"foo": 1, "bar": "2"}, "undefined_field_name": [1, 2, 3]}
  1. You can enforce to always raise an error by setting the undefined keyword to Undefined.RAISE ('RAISE' as a case-insensitive string works as well). Of course it works normally if you don't pass any undefined parameters.
from dataclasses_json import Undefined

@dataclass_json(undefined=Undefined.RAISE)
@dataclass()
class ExactAPIDump:
    endpoint: str
    data: Dict[str, Any]

dump = ExactAPIDump.from_dict(dump_dict)  # raises UndefinedParameterError
  1. You can simply ignore any undefined parameters by setting the undefined keyword to Undefined.EXCLUDE ('EXCLUDE' as a case-insensitive string works as well). Note that you will not be able to retrieve them using to_dict:
from dataclasses_json import Undefined

@dataclass_json(undefined=Undefined.EXCLUDE)
@dataclass()
class DontCareAPIDump:
    endpoint: str
    data: Dict[str, Any]

dump = DontCareAPIDump.from_dict(dump_dict)  # DontCareAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'})
dump.to_dict()  # {"endpoint": "some_api_endpoint", "data": {"foo": 1, "bar": "2"}}
  1. You can save them in a catch-all field and do whatever needs to be done later. Simply set the undefined keyword to Undefined.INCLUDE ('INCLUDE' as a case-insensitive string works as well) and define a field of type CatchAll where all unknown values will end up. This simply represents a dictionary that can hold anything. If there are no undefined parameters, this will be an empty dictionary.
from dataclasses_json import Undefined, CatchAll

@dataclass_json(undefined=Undefined.INCLUDE)
@dataclass()
class UnknownAPIDump:
    endpoint: str
    data: Dict[str, Any]
    unknown_things: CatchAll

dump = UnknownAPIDump.from_dict(dump_dict)  # UnknownAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'}, unknown_things={'undefined_field_name': [1, 2, 3]})
dump.to_dict()  # {'endpoint': 'some_api_endpoint', 'data': {'foo': 1, 'bar': '2'}, 'undefined_field_name': [1, 2, 3]}

Notes:

  • When using Undefined.INCLUDE, an UndefinedParameterError will be raised if you don't specify exactly one field of type CatchAll.
  • Note that LetterCase does not affect values written into the CatchAll field, they will be as they are given.
  • When specifying a default (or a default factory) for the the CatchAll-field, e.g. unknown_things: CatchAll = None, the default value will be used instead of an empty dict if there are no undefined parameters.
  • Calling init with non-keyword arguments resolves the arguments to the defined fields and writes everything else into the catch-all field.
  1. All 3 options work as well using schema().loads and schema().dumps, as long as you don't overwrite it by specifying schema(unknown=<a marshmallow value>). marshmallow uses the same 3 keywords 'include', 'exclude', 'raise'.

  2. All 3 operations work as well using __init__, e.g. UnknownAPIDump(**dump_dict) will not raise a TypeError, but write all unknown values to the field tagged as CatchAll. Classes tagged with EXCLUDE will also simply ignore unknown parameters. Note that classes tagged as RAISE still raise a TypeError, and not a UndefinedParameterError if supplied with unknown keywords.

Override the default encode / decode / marshmallow field of a specific field?

See Overriding

Handle recursive dataclasses?

Object hierarchies where fields are of the type that they are declared within require a small type hinting trick to declare the forward reference.

from typing import Optional
from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Tree():
    value: str
    left: Optional['Tree']
    right: Optional['Tree']

Avoid using

from __future__ import annotations

as it will cause problems with the way dataclasses_json accesses the type annotations.

Use numpy or pandas types?

Data types specific to libraries commonly used in data analysis and machine learning like numpy and pandas are not supported by default, but you can easily enable them by using custom decoders and encoders. Below are two examples for numpy and pandas types.

from dataclasses import field, dataclass
from dataclasses_json import config, dataclass_json
import numpy as np
import pandas as pd

@dataclass_json
@dataclass
class DataWithNumpy:
    my_int: np.int64 = field(metadata=config(decoder=np.int64))
    my_float: np.float64 = field(metadata=config(decoder=np.float64))
    my_array: np.ndarray = field(metadata=config(decoder=np.asarray))
DataWithNumpy.from_json("{\"my_int\": 42, \"my_float\": 13.37, \"my_array\": [1,2,3]}")

@dataclass_json
@dataclass
class DataWithPandas:
    my_df: pd.DataFrame = field(metadata=config(decoder=pd.DataFrame.from_records, encoder=lambda x: x.to_dict(orient="records")))
data = DataWithPandas.from_dict({"my_df": [{"col1": 1, "col2": 2}, {"col1": 3, "col2": 4}]})
# my_df results in:
# col1  col2
# 1    2    
# 3    4
data.to_dict()
# {"my_df": [{"col1": 1, "col2": 2}, {"col1": 3, "col2": 4}]}

Marshmallow interop

Using the dataclass_json decorator or mixing in DataClassJsonMixin will provide you with an additional method .schema().

.schema() generates a schema exactly equivalent to manually creating a marshmallow schema for your dataclass. You can reference the marshmallow API docs to learn other ways you can use the schema returned by .schema().

You can pass in the exact same arguments to .schema() that you would when constructing a PersonSchema instance, e.g. .schema(many=True), and they will get passed through to the marshmallow schema.

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Person:
    name: str

# You don't need to do this - it's generated for you by `.schema()`!
from marshmallow import Schema, fields

class PersonSchema(Schema):
    name = fields.Str()

Briefly, on what's going on under the hood in the above examples: calling .schema() will have this library generate a marshmallow schema for you. It also fills in the corresponding object hook, so that marshmallow will create an instance of your Data Class on load (e.g. Person.schema().load returns a Person) rather than a dict, which it does by default in marshmallow.

Performance note

.schema() is not cached (it generates the schema on every call), so if you have a nested Data Class you may want to save the result to a variable to avoid re-generation of the schema on every usage.

person_schema = Person.schema()
person_schema.dump(people, many=True)

# later in the code...

person_schema.dump(person)

Overriding / Extending

Overriding

For example, you might want to encode/decode datetime objects using ISO format rather than the default timestamp.

from dataclasses import dataclass, field
from dataclasses_json import dataclass_json, config
from datetime import datetime
from marshmallow import fields

@dataclass_json
@dataclass
class DataClassWithIsoDatetime:
    created_at: datetime = field(
        metadata=config(
            encoder=datetime.isoformat,
            decoder=datetime.fromisoformat,
            mm_field=fields.DateTime(format='iso')
        )
    )

Extending

Similarly, you might want to extend dataclasses_json to encode date objects.

from dataclasses import dataclass, field
from dataclasses_json import dataclass_json, config
from datetime import date
from marshmallow import fields

dataclasses_json.cfg.global_config.encoders[date] = date.isoformat
dataclasses_json.cfg.global_config.decoders[date] = date.fromisoformat

@dataclass_json
@dataclass
class DataClassWithIsoDatetime:
    created_at: date
    modified_at: date
    accessed_at: date

As you can see, you can override or extend the default codecs by providing a "hook" via a callable:

  • encoder: a callable, which will be invoked to convert the field value when encoding to JSON
  • decoder: a callable, which will be invoked to convert the JSON value when decoding from JSON
  • mm_field: a marshmallow field, which will affect the behavior of any operations involving .schema()

Note that these hooks will be invoked regardless if you're using .to_json/dump/dumps and .from_json/load/loads. So apply overrides / extensions judiciously, making sure to carefully consider whether the interaction of the encode/decode/mm_field is consistent with what you expect!

What if I have other dataclass field extensions that rely on metadata

All the dataclasses_json.config does is return a mapping, namespaced under the key 'dataclasses_json'.

Say there's another module, other_dataclass_package that uses metadata. Here's how you solve your problem:

metadata = {'other_dataclass_package': 'some metadata...'}  # pre-existing metadata for another dataclass package
dataclass_json_config = config(
            encoder=datetime.isoformat,
            decoder=datetime.fromisoformat,
            mm_field=fields.DateTime(format='iso')
        )
metadata.update(dataclass_json_config)

@dataclass_json
@dataclass
class DataClassWithIsoDatetime:
    created_at: datetime = field(metadata=metadata)

You can also manually specify the dataclass_json configuration mapping.

@dataclass_json
@dataclass
class DataClassWithIsoDatetime:
    created_at: date = field(
        metadata={'dataclasses_json': {
            'encoder': date.isoformat,
            'decoder': date.fromisoformat,
            'mm_field': fields.DateTime(format='iso')
        }}
    )

A larger example

from dataclasses import dataclass
from dataclasses_json import dataclass_json

from typing import List

@dataclass_json
@dataclass(frozen=True)
class Minion:
    name: str


@dataclass_json
@dataclass(frozen=True)
class Boss:
    minions: List[Minion]

boss = Boss([Minion('evil minion'), Minion('very evil minion')])
boss_json = """
{
    "minions": [
        {
            "name": "evil minion"
        },
        {
            "name": "very evil minion"
        }
    ]
}
""".strip()

assert boss.to_json(indent=4) == boss_json
assert Boss.from_json(boss_json) == boss

Performance

Take a look at this issue

Versioning

Note this library is still pre-1.0.0 (SEMVER).

The current convention is:

  • PATCH version upgrades for bug fixes and minor feature additions.
  • MINOR version upgrades for big API features and breaking changes.

Once this library is 1.0.0, it will follow standard SEMVER conventions.

Python compatibility

Any version that is not listed in the table below we do not test against, though you might still be able to install the library. For future Python versions, please open an issue and/or a pull request, adding them to the CI suite.

Python version range Compatible dataclasses-json version
3.7.x - 3.12.x 0.5.x - 0.6.x
>= 3.13.x No official support (yet)

Roadmap

Currently the focus is on investigating and fixing bugs in this library, working on performance, and finishing this issue.

That said, if you think there's a feature missing / something new needed in the library, please see the contributing section below.

Contributing

First of all, thank you for being interested in contributing to this library. I really appreciate you taking the time to work on this project.

  • If you're just interested in getting into the code, a good place to start are issues tagged as bugs.
  • If introducing a new feature, especially one that modifies the public API, consider submitting an issue for discussion before a PR. Please also take a look at existing issues / PRs to see what you're proposing has already been covered before / exists.
  • I like to follow the commit conventions documented here

Setting up your environment

This project uses Poetry for dependency and venv management. It is quite simple to get ready for your first commit:

  • Install latest stable Poetry
  • Navigate to where you cloned dataclasses-json
  • Run poetry install
  • Create a branch and start writing code!

dataclasses-json's People

Contributors

2ynn avatar andersk avatar askogvold avatar deansg avatar dpausp avatar fabaff avatar faraseer avatar george-zubrienko avatar glandos avatar happytreebeard avatar lidatong avatar matt035343 avatar nathan5280 avatar nathanmsmith avatar obi1kenobi avatar pawelwilczewski avatar pjcampi avatar r-richmond avatar rakanalh avatar rpmcginty avatar s-vitaliy avatar sloria avatar ssheftel avatar stevenbedrick avatar stevenj avatar sumnerevans avatar thepabloaguilar avatar tristanspeakeasy avatar tstehr avatar ussx-hares avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

dataclasses-json's Issues

schema().dumps not working properly with field initializers

Consider the following dataclass:

from typing import List

from dataclasses import dataclass, field
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Bar:
    foos: List[str] = field(default_factory=list)

using Bar.schema().dumps(Bar()) gives me the following error:

Traceback (most recent call last):
  File "dataclass_test.py", line 20, in <module>
    Bar.schema().dumps(Bar())
  File "~/.venv/venv/lib/python3.6/site-packages/dataclasses_json/api.py", line 100, in schema
    infer_missing)
  File "~/.venv/venv/lib/python3.6/site-packages/dataclasses_json/mm.py", line 90, in _make_default_fields
    cls)
  File "~/.venv/venv/lib/python3.6/site-packages/dataclasses_json/mm.py", line 105, in _make_default_field
    type_arg = type_.___args__[0]
AttributeError: type object 'List' has no attribute '___args__'

Process finished with exit code 1

However, it works when the type of foos is a List of another dataclass_json, like this:

@dataclass_json
@dataclass
class Foo:
    one: str


@dataclass_json
@dataclass
class Bar:
    foos: List[Foo] = field(default_factory=list)


if __name__ == "__main__":
    print(Bar.schema().dumps(Bar()))

So something seems to be off with types and field initializers

I am using dataclasses-json==0.2.1

Does not work with some class fields

1.json

{
    "kind": "youtube#searchListResponse",
    "etag": "\"XI7nbFXulYBIpL0ayR_gDh3eu1k/KWWeP6ixot61qGYewP3pbjpukmk\"",
    "nextPageToken": "CDIQAA",
    "regionCode": "RU",
    "pageInfo": {
        "totalResults": 191659,
        "resultsPerPage": 50
    },
    "items": [
        {
            "kind": "youtube#searchResult",
            "etag": "\"XI7nbFXulYBIpL0ayR_gDh3eu1k/OKEpMDlF7p6035MRuZvZZNd4fQc\"",
            "id": {
                "kind": "youtube#video",
                "videoId": "xTpwkSB-w4Q"
            },
            "snippet": {
                "publishedAt": "2018-10-17T14:50:39.000Z",
                "channelId": "UCIYF1orTg4nDvv29ORoKXyA",
                "title": "INSIDE: \u041a\u0430\u043a\u0430\u044f \u043a\u0440\u0438\u043f\u0442\u043e\u0432\u0430\u043b\u044e\u0442\u0430 \u0441\u0434\u0435\u043b\u0430\u0435\u0442 \u0438\u043a\u0441\u044b \u043a Bitcoin? l Crypto Miner \u041f\u0420\u0418\u0425\u0412\u0410\u0422\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u043b\u0438) l \u041e\u0442\u0432\u0435\u0442 \u0445\u0435\u0439\u0442\u0435\u0440\u0430\u043c",
                "description": "INSIDE: \u041a\u0430\u043a\u0430\u044f \u043a\u0440\u0438\u043f\u0442\u043e\u0432\u0430\u043b\u044e\u0442\u0430 \u0441\u0434\u0435\u043b\u0430\u0435\u0442 \u0438\u043a\u0441\u044b \u043a Bitcoin? l Crypto Miner \u041f\u0420\u0418\u0425\u0412\u0410\u0422\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u043b\u0438) l \u041e\u0442\u0432\u0435\u0442 \u0445\u0435\u0439\u0442\u0435\u0440\u0430\u043c WAX https://coinmarketcap.com/currenc...",
                "thumbnails": {
                    "default": {
                        "url": "https://i.ytimg.com/vi/xTpwkSB-w4Q/default.jpg",
                        "width": 120,
                        "height": 90
                    },
                    "medium": {
                        "url": "https://i.ytimg.com/vi/xTpwkSB-w4Q/mqdefault.jpg",
                        "width": 320,
                        "height": 180
                    },
                    "high": {
                        "url": "https://i.ytimg.com/vi/xTpwkSB-w4Q/hqdefault.jpg",
                        "width": 480,
                        "height": 360
                    }
                },
                "channelTitle": "\u0412\u043b\u0430\u0434\u0438\u0441\u043b\u0430\u0432 \u0421\u0442\u0435\u0448\u0435\u043d\u043a\u043e \u043f\u0440\u043e \u041a\u0440\u0438\u043f\u0442\u043e\u0432\u0430\u043b\u044e\u0442\u044b \u0438 \u041c\u0430\u0439\u043d\u0438\u043d\u0433",
                "liveBroadcastContent": "none"
            }
        },
        {
            "kind": "youtube#searchResult",
            "etag": "\"XI7nbFXulYBIpL0ayR_gDh3eu1k/gig54JPD7QLzEONpAED1doF4dSU\"",
            "id": {
                "kind": "youtube#video",
                "videoId": "tc1-V4jh4DE"
            },
            "snippet": {
                "publishedAt": "2017-03-17T17:11:32.000Z",
                "channelId": "UCjE90zX9e_WukGg4BmDcKTg",
                "title": "\u041a\u0440\u0438\u043f\u0442\u043e\u0432\u0430\u043b\u044e\u0442\u0430, \u0447\u0442\u043e \u044d\u0442\u043e? \u0414\u043e\u0445\u043e\u0434\u0447\u0438\u0432\u043e \u0438 \u044f\u0441\u043d\u043e. \u0411\u0443\u0434\u0443\u0449\u0435\u0435 blockchain, Bitcoin, Namecoin, Zerocash...",
                "description": "\u0427\u0442\u043e \u0442\u0430\u043a\u043e\u0435 \u043a\u0440\u0438\u043f\u0442\u043e\u0432\u0430\u043b\u044e\u0442\u0430, blockchain \u0438 \u0438\u0445 \u0431\u0443\u0434\u0443\u0449\u0435\u0435. \u0414\u043e\u0445\u043e\u0434\u0447\u0438\u0432\u043e \u0438 \u044f\u0441\u043d\u043e. Bitcoin, Litecoin, Namecoin, Ethereum, Zerocash...",
                "thumbnails": {
                    "default": {
                        "url": "https://i.ytimg.com/vi/tc1-V4jh4DE/default.jpg",
                        "width": 120,
                        "height": 90
                    },
                    "medium": {
                        "url": "https://i.ytimg.com/vi/tc1-V4jh4DE/mqdefault.jpg",
                        "width": 320,
                        "height": 180
                    },
                    "high": {
                        "url": "https://i.ytimg.com/vi/tc1-V4jh4DE/hqdefault.jpg",
                        "width": 480,
                        "height": 360
                    }
                },
                "channelTitle": "SunandreaS",
                "liveBroadcastContent": "none"
            }
        }
    ]
}

test.py

import datetime

from dataclasses import dataclass
from typing import List
from dataclasses_json import dataclass_json


@dataclass_json
@dataclass
class YouTubeSearchListItemSnippedThumbnailsThumbnail:
    url: str = ''
    width: int = 0
    height: int = 0


@dataclass_json
@dataclass
class YouTubeSearchListItemSnippedThumbnails:
    default: YouTubeSearchListItemSnippedThumbnailsThumbnail = YouTubeSearchListItemSnippedThumbnailsThumbnail
    medium: YouTubeSearchListItemSnippedThumbnailsThumbnail = YouTubeSearchListItemSnippedThumbnailsThumbnail
    high: YouTubeSearchListItemSnippedThumbnailsThumbnail = YouTubeSearchListItemSnippedThumbnailsThumbnail
    standard: YouTubeSearchListItemSnippedThumbnailsThumbnail = YouTubeSearchListItemSnippedThumbnailsThumbnail
    maxres: YouTubeSearchListItemSnippedThumbnailsThumbnail = YouTubeSearchListItemSnippedThumbnailsThumbnail


@dataclass_json
@dataclass
class YouTubeSearchListItemSnipped:
    publishedAt: datetime.datetime or None = None
    channelId: str = ''
    title: str = ''
    description: str = ''
    thumbnails: YouTubeSearchListItemSnippedThumbnails = YouTubeSearchListItemSnippedThumbnails
    channelTitle: str = ''
    liveBroadcastContent: str = ''


@dataclass_json
@dataclass
class YouTubeSearchListItemId:
    kind: str = ''
    videoId: str = ''
    channelId: str = ''
    playlistId: str = ''


@dataclass_json
@dataclass
class YouTubeSearchListItem:
    kind: str = ''
    etag: str = ''
    id: YouTubeSearchListItemId = YouTubeSearchListItemId
    snippet: YouTubeSearchListItemSnipped = YouTubeSearchListItemSnipped


@dataclass_json
@dataclass
class YouTubeSearchListPageInfo:
    totalResults: int = 0
    resultsPerPage: int = 0


@dataclass_json
@dataclass
class YouTubeSearchListResponse:
    kind: str = ''
    etag: str = ''
    nextPageToken: str = ''
    prevPageToken: str = ''
    regionCode: str = ''
    pageInfo: YouTubeSearchListPageInfo = YouTubeSearchListPageInfo
    items: List[YouTubeSearchListItem] = List[YouTubeSearchListItem]


with open('1.json', 'r') as file:
    print(YouTubeSearchListResponse.from_json(file.read()))

When starting, we get an error TypeError: Type List cannot be instantiated; use list() instead.

Comment out standard and maxres in YouTubeSearchListItemSnippedThumbnails and it works.

In json there are no these fields, but the server can return json with such fields. https://developers.google.com/youtube/v3/docs/search#resource

Support for dataclass fields of type Optional[List[X]]

Summary

The library currently breaks for dataclasses with fields of type Optional[List[X]], returning the following error:

  File "dcj_bug/ex.py", line 66, in <module>
    Adult.schema().loads(test3, many=True)  # Fail
  File "<home_path>/.cache/pypoetry/virtualenvs/dcj-bug-py3.7/lib/python3.7/site-packages/dataclasses_json/api.py", line 100, in schema
    infer_missing)
  File "<home_path>/.cache/pypoetry/virtualenvs/dcj-bug-py3.7/lib/python3.7/site-packages/dataclasses_json/mm.py", line 86, in _make_default_fields
    cls)
  File "<home_path>/.cache/pypoetry/virtualenvs/dcj-bug-py3.7/lib/python3.7/site-packages/dataclasses_json/mm.py", line 103, in _make_default_field
    cons = _type_to_cons[cons_type]
KeyError: typing.List[__main__.Child]

I've attached some code below that demonstrates the issue.

Please let me know if there's any information or assistance I can provide; I dug into the code a bit in mm.py where it's breaking, but wasn't able to figure out what the correct logic should be in mm._make_default_field.

Example

import json

from dataclasses import dataclass
from typing import Optional, List

from dataclasses_json import DataClassJsonMixin


@dataclass
class Child(DataClassJsonMixin):
    name: str


@dataclass
class Adult(DataClassJsonMixin):
    name: str
    children: Optional[List[Child]] = None


test1 = """
{
    "name": "Foo"

}
""".strip()

test2 = """
{
    "name": "Bar",
    "children": [
        {
            "name": "Baz"
        },
        {
            "name": "Bat"
        }
    ]
}
""".strip()

test3 = """
[
    {
        "name": "Foo"
    },
    {
        "name": "Bar",
        "children": [
            {
                "name": "Baz"
            },
            {
                "name": "Bat"
            }
        ]
    }
]
""".strip()

test4 = json.loads(test3)

Adult.from_json(test1)  # OK
Adult.from_json(test2)  # OK

# Adult.from_json(test3)  # Breaks
Adult.schema().loads(test3, many=True)  # Breaks
# Adult.schema().load(test4, many=True)  # Breaks

Pipenv gets broken

Pipenv get broken when using dataclasses-json.
In the setup.py, replacing marshmallow>=3.0.0b13 by marshmallow==3.0.0b13 should make it work.
And it's safer anyway (if there's a non-backward compatible version of marshmallow that gets released, it will currently break dataclasses-json.

Support for default_factory option

Currently, from_json() method can not decode a dataclass that containing fields with default_factory option. Could you fix this problem?

$ docker run -it python:3.7.0 bash
root@910ab65b7f15:/# pip install -U ipython
root@910ab65b7f15:/# pip install git+https://github.com/lidatong/dataclasses-json
root@910ab65b7f15:/# ipython
Python 3.7.0 (default, Oct 16 2018, 07:10:55)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.0.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from dataclasses import dataclass, field
   ...: from dataclasses_json import dataclass_json
   ...: from typing import List
   ...: @dataclass_json
   ...: @dataclass
   ...: class Person:
   ...:     name: str
   ...:     friends: List[str] = field(default_factory=list)
   ...:

In [2]: Person(name='foo')
Out[2]: Person(name='foo', friends=[])

In [3]: Person.from_json('{"name": "foo"}')
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-3-2d41a7d5bbe5> in <module>
----> 1 Person.from_json('{"name": "foo"}')

/usr/local/lib/python3.7/site-packages/dataclasses_json/api.py in from_json(cls, s, encoding, parse_float, parse_int, parse_constant, infer_missing, **kw)
     60                                  parse_constant=parse_constant,
     61                                  **kw)
---> 62         return _decode_dataclass(cls, init_kwargs, infer_missing)
     63
     64     @classmethod

/usr/local/lib/python3.7/site-packages/dataclasses_json/core.py in _decode_dataclass(cls, kvs, infer_missing)
     24     init_kwargs = {}
     25     for field in fields(cls):
---> 26         field_value = kvs[field.name]
     27         if field_value is None and not _is_optional(field.type):
     28             warning = (f"value of non-optional type {field.name} detected "

KeyError: 'friends'

In [4]:

AttributeError: 'Field' object has no attribute 'Field'

Hello,

Tried the package on a project i am working on and the package was showing this stacktrace:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rakan/.pyenv/versions/3.7.0/envs/raiden/lib/python3.7/site-packages/dataclasses_json/api.py", line 76, in schema
    Schema = build_schema(cls, DataClassJsonMixin, infer_missing, partial)
  File "/home/rakan/.pyenv/versions/3.7.0/envs/raiden/lib/python3.7/site-packages/dataclasses_json/mm.py", line 114, in build_schema
    schema_ = schema(cls, mixin, infer_missing)
  File "/home/rakan/.pyenv/versions/3.7.0/envs/raiden/lib/python3.7/site-packages/dataclasses_json/mm.py", line 99, in schema
    t = build_type(type_, options, mixin, field, cls)
  File "/home/rakan/.pyenv/versions/3.7.0/envs/raiden/lib/python3.7/site-packages/dataclasses_json/mm.py", line 74, in build_type
    return inner(type_, options)
  File "/home/rakan/.pyenv/versions/3.7.0/envs/raiden/lib/python3.7/site-packages/dataclasses_json/mm.py", line 73, in inner
    return field.Field(**options)
AttributeError: 'Field' object has no attribute 'Field'

Mostly caused by using the wrong package name. PR incoming

AttributeError when decoding to Optional[str]?

I've been getting an AttributeError when trying to decode a dict to a dataclass with optional str arguments, i.e. something like:

@dataclass
class A:
    a: Optional[str] = None

Stepping through this with PDB I've noticed that _is_supported_generic() evaluates to True for str, causing this branch in dataclasses_json.core to call _decode_generic() with type_ = str and then failing on line 73 because str has no __args__ attribute.

This example should illustrate the problem more clearly:

In [1]: import dataclasses_json, typing

In [2]: dataclasses_json.core._decode_generic(typing.Optional[str], 'foo', False)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-ef8bcdf59b9e> in <module>()
----> 1 dataclasses_json.core._decode_generic(typing.Optional[str], 'foo', False)

<project_dir>/venv/lib/python3.7/site-packages/dataclasses_json/core.py in _decode_generic(type_, value, infer_missing)
     89             res = _decode_dataclass(type_arg, value, infer_missing)
     90         elif _is_supported_generic(type_arg):
---> 91             res = _decode_generic(type_arg, value, infer_missing)
     92         else:
     93             res = value

<project_dir>/venv/lib/python3.7/site-packages/dataclasses_json/core.py in _decode_generic(type_, value, infer_missing)
     71         # type_arg is a typevar we need to extract the reified type information
     72         # hence the check of `is_dataclass(value)`
---> 73         type_arg = type_.__args__[0]
     74         if is_dataclass(type_arg) or is_dataclass(value):
     75             xs = (_decode_dataclass(type_arg, v, infer_missing) for v in value)

AttributeError: type object 'str' has no attribute '__args__'

Is this a bug, or am I misunderstanding something?

Decoding enums with str mixin fails

Enums can be defined with mixins, for example:

class MyStrEnum(str, Enum):
    a = 'aa'
    b = 'bb'

Decoding values for that enum fails because _decode_generic handles them incorrectly:

AttributeError                            Traceback (most recent call last)

---> 17 ElectionConfig.from_json(jso)

~/.local/share/virtualenvs/ekklesia-portal-GhysHmkk/lib/python3.7/site-packages/dataclasses_json/api.py in from_json(cls, s, encoding, parse_float, parse_int, parse_constant, infer_missing, **kw)
     60                          parse_constant=parse_constant,
     61                          **kw)
---> 62         return _decode_dataclass(cls, kvs, infer_missing)
     63 
     64     @classmethod

~/.local/share/virtualenvs/ekklesia-portal-GhysHmkk/lib/python3.7/site-packages/dataclasses_json/core.py in _decode_dataclass(cls, kvs, infer_missing)
    112             init_kwargs[field.name] = _decode_generic(field.type,
    113                                                       field_value,
--> 114                                                       infer_missing)
    115         elif _issubclass_safe(field.type, datetime):
    116             # FIXME this is a hack to deal with mm already decoding

~/.local/share/virtualenvs/ekklesia-portal-GhysHmkk/lib/python3.7/site-packages/dataclasses_json/core.py in _decode_generic(type_, value, infer_missing)
    149             xs = zip(ks, vs)
    150         else:
--> 151             xs = _decode_items(type_.__args__[0], value, infer_missing)
    152 
    153         # get the constructor if using corresponding generic type in `typing`

/nix/store/6x7zqpvbj07rx837jjvc9cc62qr56fs8-python3-3.7.2-env/lib/python3.7/enum.py in __getattr__(cls, name)
    342         """
    343         if _is_dunder(name):
--> 344             raise AttributeError(name)
    345         try:
    346             return cls._member_map_[name]

AttributeError: __args__

datetime and/or overriding serializers

Hi, great library.

It works except I can't serialise a datetime object on my dataclass. Is there a way to pass in my own serialiser?

EDIT: I can pass a fallback function using "default"

dataclass_obj.to_json(default=json_serial)

def json_serial(obj):
    """JSON serializer for objects not serializable by default json code"""

    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError ("Type %s not serializable" % type(obj))

Recursive encoding of dataclasses does not use overrides

Hello,

We recently found an issue with the encoding of nested dataclass objects where the nested dataclass has a specific encoder. The problem is that the nested dataclasses's encoder is not used when converting the field value to JSON. A simple example is shown below:

@dataclass_json
@dataclass
class Foo:
    dt: datetime.datetime = field(
        metadata={'dataclasses_json': {
            'encoder': dt.datetime.isoformat,
            'decoder': iso8601.parse_date,
            'mm_field': fields.DateTime(format='iso')
        }})


@dataclass_json
@dataclass
class FooCtr:
    l: List[Foo]


f = Foo.from_json('{"dt": "2019-01-02T12:34:56Z"}')

print(f.to_json())
# {"dt": "2019-01-02T12:34:56+00:00"}

fc = FooCtr([f])

print(fc.to_json())
# {"l": [{"dt": 1546432496.0}]}

The timestamp in the fc variable is using the default datetime encoder in core.py which is to make a timestamp. The reason for this is, and a possible fix is given in a pull request.

dump/load gives Invalid input type

Using schema().load() does not seem to work for classes that contain lists. I can reproduce this even with the Boss/Minion example from the documentation

@dataclass_json
@dataclass(frozen=True)
class Minion:
    name: str


@dataclass_json
@dataclass(frozen=True)
class Boss:
    minions: List[Minion]

boss = Boss([Minion('evil minion'), Minion('very evil minion')])
boss_dict = Boss.schema().dump(boss)
Boss.schema().load(boss_dict)
marshmallow.exceptions.ValidationError: {'_schema': ['Invalid input type.']}

For what it's worth, going all the way to JSON and back seems to work fine:

boss_json = boss.to_json()
Boss.from_json(boss_json)

Support json to dictionary

Hey i really like your dataclass json package. Currently I like to have the feature for remember a json to an dictonary and backwards. An example:

MinionId = NewType('MinionId', int)

@dataclass_json
@dataclass
class Minion:
    minion_id: MinionId
    name: str

@dataclass_json
@dataclass
class Datas:
    minions: Dict[MinionId, Minion]
    some_value: int
    some_name: int

From the json or to the json:

{
    "minions": [
        {
            "minion_id": 1,
            "name": "minion1"
        },
        {
            "minion_id": 2,
            "name": "minion2"
        }
    ],
    "some_value": int,
    "some_name": int
}

Maybe there is already an solution to fix this.

Fails to build schema if the class have enum field

Example:

from dataclasses import dataclass
from enum import Enum
from typing import *

from dataclasses_json import dataclass_json


class NodeType(Enum):
    ACTION1 = 'ACTION1'
    ACTION2 = 'ACTION2'


@dataclass_json
@dataclass(unsafe_hash=True)
class Node:
    type: NodeType

if __name__ == '__main__':
   print(Node.schema())

Library is not compatible with "from __future__ import annotations" (PEP 563)

PEP 563 means that we cannot expect type annotations to actually be class objects. Example code that fails (Python 3.7):

from __future__ import annotations

from dataclasses import dataclass
from dataclasses_json import dataclass_json


@dataclass
class T:
    f: str

@dataclass_json
@dataclass
class U:
    t: T

obj = U(T('foo'))
print(obj)
s = obj.to_json()
print(s)
obj2 = U.from_json(s)
print(obj2)

which outputs

U(t=T(f='foo'))
{"t": {"f": "foo"}}
U(t={'f': 'foo'})

The expected behavior is that t is of type T and not a dict. This works as expected when from future import __annotations__ is commented out.

When converting to dictionary, nested object is not handled recursively

When converting to dictionary, nested object is not handled recursively

Sample Code:

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class RegionDetails:
    region_name: str
    min_x: int
    min_y: int
    width: int
    height: int

@dataclass_json
@dataclass
class FooSetting:
    last_detection_time: int
    region_details: RegionDetails
    foo: int

def test_main():
    foo = FooSetting(foo=15345, last_detection_time=1234567,
                     region_details = RegionDetails(height=333, width=555, min_x=10, min_y=10,
                                                             region_name="abc"))

    print(foo)
    print(FooSetting.schema().dump(foo))
    print(type(FooSetting.schema().dump(foo).get("region_details")))
    print(foo.to_json(indent=4))

test_main()

Output:

FooSetting(last_detection_time=1234567, region_details=RegionDetails(region_name='abc', min_x=10, min_y=10, width=555, height=333), foo=15345)
{'region_details': RegionDetails(region_name='abc', min_x=10, min_y=10, width=555, height=333), 'last_detection_time': 1234567, 'foo': 15345}
<class '__main__.RegionDetails'>
{
    "last_detection_time": 1234567,
    "region_details": {
        "region_name": "abc",
        "min_x": 10,
        "min_y": 10,
        "width": 555,
        "height": 333
    },
    "foo": 15345
}

When converting this "FooSetting" object into a Dictionary, "region_details" nested object does not get converting to a Dictionary recursively. The original "RegionDetails" type object shows up under "region_detalis" field instead.

However, when serializing it into JSON text, the recursive is done correctly.

Field init=False is not ignored when restoring from JSON

Hello

from dataclasses import dataclass, field
from typing import Optional

from dataclasses_json import DataClassJsonMixin


@dataclass
class A(DataClassJsonMixin):
    field_one: int
    field_two: Optional[int] = field(init=False)


def main():
    a = A(1)

    json = A.schema().dumps(a)
    a_restored = A.schema().loads(json)


if __name__ == '__main__':
    main()

The following example would fail with the following stacktrace:

Traceback (most recent call last):
  File "dc_json_init.py", line 21, in <module>
    main()
  File "dc_json_init.py", line 17, in main
    a_restored = A.schema().loads(json)
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/marshmallow/schema.py", line 737, in loads
    return self.load(data, many=many, partial=partial, unknown=unknown)
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/marshmallow/schema.py", line 708, in load
    postprocess=True,
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/marshmallow/schema.py", line 857, in _do_load
    original_data=data,
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/marshmallow/schema.py", line 1036, in _invoke_load_processors
    data=data, many=many, original_data=original_data,
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/marshmallow/schema.py", line 1158, in _invoke_processors
    data = processor(data)
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/dataclasses_json/mm.py", line 112, in make_instance
    return _decode_dataclass(cls, kvs, partial)
  File "/home/rakan/.pyenv/versions/playground/lib/python3.7/site-packages/dataclasses_json/core.py", line 137, in _decode_dataclass
    return cls(**init_kwargs)
TypeError: __init__() got an unexpected keyword argument 'field_two'

Problem is that the dump'ed JSON contained the field_two and it was being used for initializing the instance on restoration.

PR to resolve this incoming.

"Type[xxx]" has no attribute "from_json"

One of the downsides of @dataclass_json is that mypy doesn't know about the extra from_json and to_json methods. Any chance this can be supported, somehow?

The docs seem to suggest there is a plugin system in mypy, albeit experimental: https://mypy.readthedocs.io/en/latest/extending_mypy.html#extending-mypy-using-plugins

@dataclass_json
@dataclass
class BulkApmm:
    id: str
    frequency_ms: int
    pmms: List[Apmm]

...

    bulk_apmm = BulkApmm.from_json(await request.text())

betslipdisp/views.py:70: error: "Type[BulkApmm]" has no attribute "from_json"

Add ability to encode numpy datatypes

Hi,

in my program most of calculations are done with numpy and thus values are e.g of type np.int32 or numpy.ndarrays.

Consider following class:

@dataclass_json
@dataclass
class Results:
     value: int
     values: List[int]

results = Results()
results.value = some_numpy_computation_returning_numpy_int()
results.values = some_numpy_computation_returning_numpy_array()
json_data = results.to_json()

When running this code I would get a conversion error.

It would be very convenient if dataclassjson would be able to handle those errors, by passing in an extended json encoder which handles numpy datatypes.

In the dataclassjson code some cases are already dealt with the _ExtendedEncoder.

class _ExtendedEncoder(json.JSONEncoder):
    def default(self, o) -> JSON:
        result: JSON
        if _isinstance_safe(o, Collection):
            if _isinstance_safe(o, Mapping):
                result = dict(o)
            else:
                result = list(o)
        elif _isinstance_safe(o, datetime):
            result = o.timestamp()
        elif _isinstance_safe(o, UUID):
            result = str(o)
        elif _isinstance_safe(o, Enum):
            result = o.value
        else:
            result = json.JSONEncoder.default(self, o)
        return result

However, I cannot extend this decoder to support my needs, since cls=_ExtendedEncoder is not in the argument list of to_json.

 class DataClassJsonMixin(abc.ABC):
    """
    DataClassJsonMixin is an ABC that functions as a Mixin.

    As with other ABCs, it should not be instantiated directly.
    """

    def to_json(self,
                *,
                skipkeys: bool = False,
                ensure_ascii: bool = True,
                check_circular: bool = True,
                allow_nan: bool = True,
                indent: Optional[Union[int, str]] = None,
                separators: Tuple[str, str] = None,
                default: Callable = None,
                sort_keys: bool = False,
                **kw) -> str:
        return json.dumps(_asdict(self),
                          cls=_ExtendedEncoder,
                          skipkeys=skipkeys,
                          ensure_ascii=ensure_ascii,
                          check_circular=check_circular,
                          allow_nan=allow_nan,
                          indent=indent,
                          separators=separators,
                          default=default,
                          sort_keys=sort_keys,
                          **kw)

Would it be possible to change it to:

 class DataClassJsonMixin(abc.ABC):
    """
    DataClassJsonMixin is an ABC that functions as a Mixin.

    As with other ABCs, it should not be instantiated directly.
    """

    def to_json(self,
                *,
                skipkeys: bool = False,
                ensure_ascii: bool = True,
                check_circular: bool = True,
                allow_nan: bool = True,
                indent: Optional[Union[int, str]] = None,
                separators: Tuple[str, str] = None,
                default: Callable = None,
                sort_keys: bool = False,
                cls: json.JSONEncoder=_ExtendedEncoder,
                **kw) -> str:
        return json.dumps(_asdict(self),
                          cls=cls,
                          skipkeys=skipkeys,
                          ensure_ascii=ensure_ascii,
                          check_circular=check_circular,
                          allow_nan=allow_nan,
                          indent=indent,
                          separators=separators,
                          default=default,
                          sort_keys=sort_keys,
                          **kw)

I know about the possibility to define an encoder via metadata, but instead of adding this to all of my fields of the dataclasses, I'd rather prefer a global solution.

Option many=true

How can I parse List not Dictionary? Is there any function? Unfortunately I could not find hints in neither documentation and tests

Promote sharing encoding/decoding across multiple dataclasses

Problem

The current API only allows providing custom encoder/decoder on a field-by-field basis using metadata. The options for user code are not good:

  1. Repeat metadata for each usage of a custom type
  2. Define function for each custom type that returns a dataclasses.Field instantiated with the metadata

1 is obviously not good. 2 leads to more verbose code, e.g.

@dataclass_json
@dataclass
class Container:
    d: Data = data_field()
    d2: Data = data_field(default=1)

given an implementation like

from dataclasses import dataclass, field
from dataclasses_json import dataclass_json

class Data:
    ...

def data_field(**kwargs):
    metadata = kwargs.setdefault('metadata', {})
    metadata.setdefault('dataclasses_json', {
        'encoder': lambda x: None,
        'decoder': Data,
        'mm_type': None
    })
    return field(**kwargs)

even in the default case (d) we cannot omit the data_field().

Desired solution

  1. Encoder/decoder should be trivial to specify in one place per type with little to no boilerplate in user code
  2. Fields within a dataclass that would otherwise not need to use field() (e.g. d: dict = field(default_factory=dict)) should not have to in order to specify encoder/decoder. In the example above it should look like d: Data only.
  3. It should be possible to override encoding/decoding of a type for the rendering of several classes while still maintaining 1 and 2.

Union type deserialize to dict from json

The the Union type on a dataclass cannot be deserialized correctly from json.
Example:

from dataclasses import dataclass
from dataclasses_json import dataclass_json
from typing import *


@dataclass_json
@dataclass(unsafe_hash=True)
class Gofy:
    p1: str

@dataclass_json
@dataclass(unsafe_hash=True)
class Foo:
    p2: str

@dataclass_json
@dataclass(unsafe_hash=True)
class Agg:
    inst: Union[Gofy, Foo]


if __name__ == '__main__':
    a = Agg(inst=Foo(p2='p2'))
    json_str = a.to_json()
    a_restored = Agg.from_json(json_str)
    print(type(a_restored.inst) ) # this will print the dict type
    assert a == a_restored # this check fails
    assert isinstance(a_restored.inst, Foo) # this check fails

AttributeError: type object 'str' has no attribute '__args__'

I have a very small snippet:

@dataclass_json
@dataclass
class Test:
    entry: List[str]

t = Test(entry=[])
t.entry.append('mytest')
t = Test.from_json(t.to_json())

but even this is failing with this traceback:

  File "dc.py", line 25, in <module>
    t = Test.from_json(t.to_json())
  File "/private/tmp/testdc/env/lib/python3.7/site-packages/dataclasses_json/api.py", line 62, in from_json
    return _decode_dataclass(cls, init_kwargs, infer_missing)
  File "/private/tmp/testdc/env/lib/python3.7/site-packages/dataclasses_json/core.py", line 47, in _decode_dataclass
    infer_missing)
  File "/private/tmp/testdc/env/lib/python3.7/site-packages/dataclasses_json/core.py", line 83, in _decode_generic
    res = _get_type_cons(type_)(xs)
  File "/private/tmp/testdc/env/lib/python3.7/site-packages/dataclasses_json/core.py", line 77, in <genexpr>
    xs = (_decode_generic(type_arg, v, infer_missing) for v in value)
  File "/private/tmp/testdc/env/lib/python3.7/site-packages/dataclasses_json/core.py", line 73, in _decode_generic
    type_arg = type_.__args__[0]
AttributeError: type object 'str' has no attribute '__args__'

This issue might be duplicate of issue 2.

Different behaviour when calling class wtih nested dataclasses

When I define a dataclass A with a list of nested dataclasses B, by calling the class using a json dictionary, which has also lists inside, the recursive mechanism does not work, leaving the list of dataclasses unrendered, although they are converted to python objects if from_json is called.

Example

The json:

data = {'attachments': [],
 'vesselIds': [9731444],
 'globalServiceOptions': [{'globalServiceOptionId': {'id': 2,
    'agentServiceId': {'id': 2,
     'agentName': 'Manual Import Agent',
     'serviceName': 'Bunker Analysis',
     'fileParserEnabled': True,
     'fileSplitterEnabled': False,
     'messageToFileEnabled': False,
     'scheduleAllowed': True,
     'vesselDemanded': True},
    'serviceOptionName': 'Time',
    'value': 'T_last',
    'required': False,
    'defaultValue': 'T_last',
    'description': 'UTC Timestamp',
    'serviceOptionType': 'TIME'},
   'name': 'Time',
   'value': None}]}

Result of Converting it with data unrendered:

In [10]: FileParserRequest(**data)
Out[10]: FileParserRequest(vesselIds=[9731444], attachments=[], globalServiceOptions=[{'globalServiceOptionId': {'id': 2, 'agentServiceId': {'id': 2, 'agentName': 'Manual Import Agent', 'serviceName': 'Bunker Analysis', 'fileParserEnabled': True, 'fileSplitterEnabled': False, 'messageToFileEnabled': False, 'scheduleAllowed': True, 'vesselDemanded': True}, 'serviceOptionName': 'Time', 'value': 'T_last', 'required': False, 'defaultValue': 'T_last', 'description': 'UTC Timestamp', 'serviceOptionType': 'TIME'}, 'name': 'Time', 'value': None}])

Result of converting it with from_json():

In [9]: FileParserRequest.from_json(json.dumps(data))
Out[9]: FileParserRequest(vesselIds=[9731444], attachments=[], globalServiceOptions=[ServiceOption(globalServiceOptionId=GlobalServiceOptionId(id=2, agentServiceId=AgentServiceId(id=2, agentName='Manual Import Agent', serviceName='Bunker Analysis', fileParserEnabled=True, fileSplitterEnabled=False, messageToFileEnabled=False, scheduleAllowed=True, vesselDemanded=True), serviceOptionName='Time', value='T_last', required=False, defaultValue='T_last', description='UTC Timestamp', serviceOptionType='TIME'), name='Time', value=None)])

Schema generation is broken

  • #60 comes from a typo in the schema code.

  • #58 is due to the code not unwrapping the type typing.List[__main__.child] to list and so blows up. This is because the code unwraps Optional last.

  • #57 and #65 are because the created schema says minions is of type Minion not List[Minion].

  • #67 is because allow_none isn't passed true if default or Optional are used.

  • Fixing #60 reveals another issue that marshmallow.fields.List doesn't take the arguments provided.

  • You only create a schema field for values that are:

    1. Supplied via mm_field.

    2. Is of type DC, List[DC] or Optional[DC]. (It unwraps List as if it were Optional.)

    3. If a default is specified:

      test: List[str] = None
      
    4. Has the type datetime.

From this I can see a couple more issues:

  1. test: Optional[List[DC]] doesn't create a field.

  2. test: Optional[List[DC]] = None errors as it unwraps the Optional, but not the List.

    If you move the code to unwrap the Optional above the code to work with list, then the following error would be raised:

    TypeError: Unsupported typing.List[__main__.Relative] detected. Is it a supported JSON type or dataclass_json instance?
    
  3. test: Dict[str, List[str]] = None is created as a List[str].

  4. test: Dict[int, Dict[int, int]] = None is created as a List[int].

  5. test: List[List[str]] raises the same TypeError as above.

  6. Any working DC code doesn't have the correct default set, field.default or field.default_factory.

  7. All code uses missing rather than default. Meaning infer_missing is almost un-needed.

  8. No field has it's allow_none argument changed. This is really strange since you set Optional fields to it when no other is set.

  9. A lot of fields aren't created, and so their type is implied when creating the Schema, meaning marshmallow can't accurately verify the output.


To fix these you should merge _overrides, _make_nested_fields, _make_default_fields and the datetime_fields creation into one function. From this you should then move all the code that creates types into its own recursive function, so you can build nested types.

coerce_keys option

The py-to-json conversion table specifies that dict keys must be of a basic type: str, int, float, bool or None

However in practical usage, it's quite convenient to have objects as dictionary keys (especially, in lieu of data classes, leveraging the frozen=True option). Having the basic type restriction on keys makes encoding unnecessarily boilerplate-y.

Creating this issue for a potential coerce_keys kwarg to add to the public api. This gives the user the option of supplying a function to coerce non-basic type keys into a str.

Support dashes in JSON keys

As far as i can tell there is no way to handle json keys with dahes in the name.

from dataclasses import dataclass, field
from dataclasses_json import DataClassJsonMixin, dataclass_json
from marshmallow import fields


@dataclass
class MyClass(DataClassJsonMixin):
    some_field: str = field(metadata={'dataclasses_json': {
        'mm_field': fields.String(attribute='some-field'),
        'encoder': str,
        'decoder': str
    }})

print(MyClass.from_json('{"some-field": "hello world"}')) # KeyError: 'some_field'

apologies if i messed something in the docs.

Invalid code example in ReadMe

It seems that subclass resolution is broken.
This was fixed with an update to the latest version. However, the following still takes place
I've run code from ReadMe, and it is not working

Code from ReadMe

# peter@peterpc:~$ python3.7
# Python 3.7.2 (default, Feb 25 2019, 14:16:03) 
# [GCC 5.4.0 20160609] on linux
# Type "help", "copyright", "credits" or "license" for more information.
>>> from dataclasses import dataclass
>>> from dataclasses_json import dataclass_json
>>> import json
>>> from typing import Optional
>>> 
>>> @dataclass_json
... @dataclass
... class Student:
...     id: int
...     name: str = 'student'
... 
>>> Student.from_json('{"id": 1}')  # Student(id=1, name='student')
Student(id=1, name='student')
>>> 
>>> @dataclass_json
... @dataclass
... class Tutor:
...     id: int
...     student: Optional[Student]
... 
>>> Tutor.from_json('{"id": 1}')  # Tutor(id=1, student=None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/dataclasses_json/api.py", line 60, in from_json
    return _decode_dataclass(cls, kvs, infer_missing)
  File "/usr/local/lib/python3.7/site-packages/dataclasses_json/core.py", line 91, in _decode_dataclass
    field_value = kvs[field.name]
KeyError: 'student'
>>> 

I am using latest version of package (0.2.4) on python3.7

peter@peterpc:~$ python3.7 -m pip show dataclasses-json
Name: dataclasses-json
Version: 0.2.4
Summary: Easily serialize dataclasses to and from JSON
Home-page: https://github.com/lidatong/dataclasses-json
Author: lidatong
Author-email: [email protected]
License: Unlicense
Location: /usr/local/lib/python3.7/site-packages
Requires: marshmallow, marshmallow-enum
Required-by: 

user-supplied overrides

both #40 #41 raise the usecase of user-supplied overrides.

  • class level overrides: parameterize decorator
  • field level overrides: metadata field

0.1.0 datetime handling does not conform to iso 8601

In JSON serialization it is generally considered standard to use string-like datetime format. For example, .NET Newtonsoft.JSON serializes datetime values as strings: {"Now":"2018-11-14T15:47:11.881469+03:00"}, same as javascript's JSON library.

Prior to 0.1.0 release it was possible to write our own post_init method to decode such datetime fields:

@dataclass_json
@dataclass(frozen=True)
class ExampleObject:
    SomeDate: datetime

    def __post_init__(self):
        current_value = self.__dict__['SomeDate']
        if isinstance(current_value, str):
            self.__dict__['SomeDate'] = dateutil.parser.parse(current_value)

Release 0.1.0 breaks this behaviour and generally makes it impossible to work with commonly-accepted string-like rfc/iso datetime formats by always forcing datetime representation to be int-like due to this code.

TypeError : an integer is required (got type str)
at
...
File "api.py", line 65, in from_json
return _decode_dataclass(cls, init_kwargs, infer_missing)
File "core.py", line 69, in _decode_dataclass
infer_missing)
File "core.py", line 84, in _decode_dataclass
dt = datetime.fromtimestamp(field_value, tz=tz)

As of now we are forced to use 0.0.25 release, however it would be of great help if we could rely on dataclasses-json to natively handle datetime representations without our own hacks.

Btw, using string-like representation also has inherent benefit of allowing you to store timezone info in the serialized value, ensuring that encoding->decoding->encoding operation is strictly identity (note that d->e->d still will not be strict identity because serialization format may change, but that is okay as you still keep all timezone-related information)

Add a `many` option to `DataClassJsonMixin.from_json`

Currently the only way to get a List[T] with your code is with the example:

Person.schema().loads(people_json,` many=True)

I however can't get this to work with infer_missing when my REST endpoints elect to remove the key on optional return values. Since this would change the return type of from_json from A to Union[A, List[A]], you may want to make a function from_jsons instead.

License

Hi I saw this package in PIP but there is no license listed. Would you be kind enough to add one ? Maybe MIT ? Thanks

Import of dataclasses_json under Python 3.6 raises a TypeError: metaclass conflict

Importing dataclasses_json 0.2.5 in Python 3.6 throws a metaclass conflict related TypeError

$ python3.6
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dataclasses_json
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../lib/python3.6/site-packages/dataclasses_json/__init__.py", line 1, in <module>
    from dataclasses_json.api import (DataClassJsonMixin,
  File ".../lib/python3.6/site-packages/dataclasses_json/api.py", line 5, in <module>
    from dataclasses_json.mm import build_schema, SchemaHelper, JsonData
  File ".../lib/python3.6/site-packages/dataclasses_json/mm.py", line 105, in <module>
    class SchemaHelper(Schema, typing.Generic[T]):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases```

Hacky workaround for forward references

Currently dataclasses's fields helper function returns a type object that is a str when fields are forward references rather than an actual typing object (I imagine it's due to the chicken-and-egg problem, but they might use better methods in the future to resolve the infinite expansion, e.g make it lazy).

I'm planning to put in a hacky workaround to do str comparisons against the type names to handle forward references (given there's only a finite, small set of json types).

If you think you might have a better solution, feel free to respond to this issue / submit a PR.

schema fails to deserialize fields that are optional

HI,

schema().loads failed to load Optional fields correctly. It raises:
marshmallow.exceptions.ValidationError: {'b': ['Field may not be null.']}

Example code:

from typing import Optional
from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class A:
    a: int
    b: Optional[int]

print(A.from_json('{"a": 4, "b": 5}'))
print(A.from_json('{"a": 4, "b": null}'))
print(A.schema().loads('{"a": 4, "b": 5}'))
print(A.schema().loads('{"a": 4, "b": null}'))

The last line raises ValidationError.
I am using version 0.2.1.

Thanks.

Encode default values from dataclass

The README.md discusses using default values when decoding JSON but can a dataclass object encode to JSON using default values set by the dataclass constructor?

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass_json
@dataclass
class Person:
    name: str = 'x'

# Encoding to JSON using default name value
Person().to_json()  # '{"name": "x"}'

Support for dataclass fields of List[Optional[Repository]]

Similar to the issue, #58, but in this case List[Optional[Repository]] fails with an exception. Here is some sample code. The error occurred with dataclasses-json version 0.2.2

Error

/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py:70:
                UserWarning: Unknown type <class 'NoneType'> at Breaks.nodes: typing.List[typing.Union[__main__.Repository, NoneType]]
                It's advised to pass the correct marshmallow type to `mm_field`.
                warnings.warn(f"Unknown type {type_} at {cls.__name__}.{field.name}: {field.type} "
        Traceback (most recent call last):
          File "<input>", line 1, in <module>
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/api.py", line 76, in schema
            Schema = build_schema(cls, DataClassJsonMixin, infer_missing, partial)
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 113, in build_schema
            schema_ = schema(cls, mixin, infer_missing)
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 98, in schema
            t = build_type(type_, options, mixin, field, cls)
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 73, in build_type
            return inner(type_, options)
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 66, in inner
            args = [inner(a, {}) for a in getattr(type_, '__args__', [])]
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 66, in <listcomp>
            args = [inner(a, {}) for a in getattr(type_, '__args__', [])]
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 66, in inner
            args = [inner(a, {}) for a in getattr(type_, '__args__', [])]
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 66, in <listcomp>
            args = [inner(a, {}) for a in getattr(type_, '__args__', [])]
          File "/Users/PC/miniconda3/envs/Flask_async/lib/python3.7/site-packages/dataclasses_json/mm.py", line 72, in inner
            return field.Field(**options)

Code example:

from dataclasses import dataclass
from dataclasses_json import dataclass_json
from typing import List, Optional

@dataclass_json
@dataclass
class Repository:
    name: str
    stargazers: str

@dataclass_json
@dataclass
class Breaks:
    errors_with_trace = ''"
    nodes: List[Optional[Repository]]


@dataclass_json
@dataclass
class Works:
    nodes: Optional[List[Repository]]

try:
    Breaks.schema()
except AttributeError as exc:
    print(exc)

Works.schema()

Invalid input type

Hello,

The title shows the error i've been having with this library. I've put together a scenario which reproduces the problem:

from dataclasses import dataclass
from typing import List

from dataclasses_json import dataclass_json


@dataclass_json
@dataclass
class Relative:
    name: str

@dataclass_json
@dataclass
class Person:
    name: str
    relatives: List[Relative]


people_json = '[{"name": "lidatong", "relatives": [{"name": "relative 1"}, {"name": "relative 2"}]}]'
Person.schema().loads(people_json, many=True)

The stacktrace i get is:

Traceback (most recent call last):
  File "main.py", line 20, in <module>
    Person.schema().loads(people_json, many=True)
  File "/home/rakan/.pyenv/versions/proj/lib/python3.7/site-packages/marshmallow/schema.py", line 724, in loads
    return self.load(data, many=many, partial=partial, unknown=unknown)
  File "/home/rakan/.pyenv/versions/proj/lib/python3.7/site-packages/marshmallow/schema.py", line 695, in load
    postprocess=True,
  File "/home/rakan/.pyenv/versions/proj/lib/python3.7/site-packages/marshmallow/schema.py", line 857, in _do_load
    raise exc
marshmallow.exceptions.ValidationError: {0: {'relatives': {'_schema': ['Invalid input type.']}}}

ignore unknown properties in nested classes. How to propagate schema config to nested classes?

I want to ignore unknown properties in nested classes when I parse json with unknown(not mapped) fields.
Parsing unknown properties works with Boss.from_json(boss_json). But there a lot of code where It use marshmallow schemas directly like Boss.schema().loads(boss_json, unknown="exclude")

How can I define classes with @dataclass_json to ignore unknown properties in nested schemas by default?

from dataclasses import dataclass
from dataclasses_json import dataclass_json
from typing import List

@dataclass_json
@dataclass(frozen=True)
class Minion:
    name: str


@dataclass_json
@dataclass(frozen=True)
class Boss:
    minions: List[Minion]

boss_json = """
{
    "minions": [
        {
            "name": "evil minion", 
            "UNKNOWN_PROPERTY" : "value"
        },
        {
            "name": "very evil minion"
        }
    ],
    "UNKNOWN_PROPERTY" : "value"
}
""".strip()
Boss.schema().loads(boss_json, unknown="exclude")

RESULT:
marshmallow.exceptions.ValidationError: {'minions': {0: {'UNKNOWN_PROPERTY': ['Unknown field.']}}}
Expected:
Boss(minions=[Minion(name='evil minion'), Minion(name='very evil minion')])

API features / improvements

Creating this parent issue to track API improvements / upgrades

  1. Support forward references (which will enable recursive dataclasses): #5
  2. Full typing support: #23
  3. coerce_keys kwarg for encoding: #29
  4. user-supplied overrides #42
  5. Sharing encoder / decoder in wider scopes (currently must be per-field. type, class, global are all potential scopes) #139

Doubt regarding Type Annotation in "from_json"

In api.py, for the class method from_json since the input cls parameter is class and output is its object. shouldn't it be like this:

from typing import Type

@classmethod
def from_json(cls: Type[A],  # A to Type[A]
              s: str,
              *,
              encoding=None,
              parse_float=None,
              parse_int=None,
              parse_constant=None,
              infer_missing=False,
              **kw) -> A:

can someone please help me understand this? Thanks in advance.

Infer missing unexpected behavior

Hi, here's the test that should pass :

@dataclass
class Address(DataClassJsonMixin):
    street: str
    no: int


@dataclass
class Person(DataClassJsonMixin):
    name: str
    address: Address


class TestInferMissingJson(unittest.TestCase):
    # this should be the expected behavior and it is not
    def test_given_no_address_when_from_json_then_address_attribute_is_None(self) -> None:
        json = '{"name": "Bob"}'

        result = Person.from_json(json)
 
        self.assertIsNone(result.address)

Currently, it treats it as :

{
    'name': 'Bob',
    'address': {
        'no': None,
        'street': None
    }
}

This can be fixed in core.py line 67:

def _decode_dataclass(cls, kvs, infer_missing):
    kvs = {} if kvs is None and infer_missing else kvs
    missing_fields = {field for field in fields(cls) if field.name not in kvs}
    for field in missing_fields:
        if field.default is not MISSING:
            kvs[field.name] = field.default
        elif infer_missing:
            kvs[field.name] = None

    init_kwargs = {}
    for field in fields(cls):
        field_value = kvs[field.name]
        if field_value is None: <-------------------------- By adding this --------- 
            init_kwargs[field.name] = field_value <-------- By adding this --------- 
        elif is_dataclass(field.type): <------------------- By modifying this ------
            init_kwargs[field.name] = _decode_dataclass(field.type,
                                                        field_value,
                                                        infer_missing)
        elif _is_supported_generic(field.type) and field.type != str:
            init_kwargs[field.name] = _decode_generic(field.type,
                                                      field_value,
                                                      infer_missing)
        else:
            init_kwargs[field.name] = field_value
    return cls(**init_kwargs)

Support for optional dataclass attributes

Good job, but _decode_dataclass(cls, kvs) should be aware of optional attributes.
Consider the following code:

import typing

from dataclasses import dataclass
from dataclasses_json import DataClassJsonMixin


@dataclass
class Foo(DataClassJsonMixin):
    bar: float
    baz: typing.Optional[float]


Foo.from_json('{"bar": 7.45}')

This throws KeyError despite baz being optional.

Enums as names

Hi.

Is there any option to encode enums by their name not their value?
I.e., is there any options to make this code:

from enum import Enum, auto
from dataclasses import dataclass
from dataclasses_json import DataClassJsonMixin

class Status(Enum):
    Alive = auto()
    Dead = auto()
    Comatose = auto()

@dataclass
class Human(DataClassJsonMixin):
    name: str
    status: Status = Status.Alive

if (__name__ == '__main__'):
    santa = Human("John")
    barbara = Human("Joan", Status.Comatose)
    forever = Human("Helen", Status.Dead)
    
    print(santa.to_json())
    print(barbara.to_json())
    print(forever.to_json())

... instead of printing:

{"name": "John", "status": 1}
{"name": "Joan", "status": 3}
{"name": "Helen", "status": 2}

... print:

{"name": "John", "status": "Alive"}
{"name": "Joan", "status": "Comatose"}
{"name": "Helen", "status": "Dead"}

And do so for the Human.from_json() as well.
For me, it looks like additional encoder/decoder flag, something like enums_by_id: bool = False
(default should be False to save the remain backwards-compatible)

Thank you!

P.S.
Could try to implement this later.

Troubble with boolean fields

I want to have some fields excluded, so I am using call b) below. But it throws an exception for the boolean field. Is this a bug?

`@dataclass_json
@DataClass
class Foo:
bar: bool = False
zoo: str = "Exclude me"

d = Foo()

a)

print(d.to_json())

b)

print(Foo.schema(exclude=["zoo"]).dumps(d))`

default value ignored when infer_missing=True

When infer_missing is set to True, it by-pass default argument.
Consider the following code :

import json
from dataclasses import dataclass, field
from dataclasses_json import DataClassJsonMixin

@dataclass()
class MyObject(DataClassJsonMixin):
    id: str
    name: str
    category: str = field(default="my_object")


s1 = MyObject(id="12345678-5ef8-49d5-be8b-44fcb6907c16", name="test_me")
print(json.dumps(s1.__dict__))

s2 = MyObject.from_json('{"id": "12345678-5ef8-49d5-be8b-44fcb6907c16", "name": "test_me"}', infer_missing=True)
print(s2.to_json())

Result is :

{"id": "12345678-5ef8-49d5-be8b-44fcb6907c16", "name": "test_me", "category": "my_object"}
{"id": "12345678-5ef8-49d5-be8b-44fcb6907c16", "name": "test_me", "category": null}

The category is replaced by null, not by the default value my_object

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    ๐Ÿ–– Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. ๐Ÿ“Š๐Ÿ“ˆ๐ŸŽ‰

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google โค๏ธ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.