tortoise / tortoise-orm
Familiar asyncio ORM for Python, built with relations in mind
Home Page: https://tortoise.github.io
License: Apache License 2.0
Is your feature request related to a problem? Please describe.
Two actually:
Describe the solution you'd like
Update PyPika to allow parametrised queries.
Describe alternatives you've considered
There isn't really an alternative. Building our own would be too much work for negative gain.
Additional context
We can't guarantee that parameters will be presented in the SQL query in the order we specify, so I feel the simplest solution might be to send the parameters as per usual, but then have a to_parametrised_query() method that returns (str, query parameter objects) so we can use parametrised queries, and PyPika is then in charge of managing order.
Then we need to update our code to use the parameters.
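As a rough illustration of the idea (the function name and shapes below are hypothetical, not PyPika's actual API): the query builder renders the placeholders and hands back the parameter values in the exact order it emitted them, so ordering is never our problem.

```python
# Hypothetical sketch of a to_parametrised_query()-style API: the
# builder returns the SQL with placeholders plus the parameters in
# emission order.
def to_parametrised_query(columns, table, filters):
    clauses = []
    params = []
    for column, value in filters:  # filters: list of (column, value)
        clauses.append('"%s" = ?' % column)
        params.append(value)
    sql = 'SELECT %s FROM "%s" WHERE %s' % (
        ", ".join('"%s"' % c for c in columns),
        table,
        " AND ".join(clauses),
    )
    return sql, params

sql, params = to_parametrised_query(["id"], "tournament", [("name", "New")])
```

The caller then passes `sql` and `params` straight to the driver, which takes care of escaping.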
Done:
- BinaryField
- LIKE
When I use a nonexistent parameter in a query filter, I get this error message:
tortoise.exceptions.FieldError: unknown filter param non_existing_attribute
Is it possible to show allowed values to ease debugging like:
unknown filter parameter 'non_existing_attribute'. Allowed values are ['attribute1', 'attribute2', ...].
Another possibility is to clarify error message with something like:
You can only filter on existing attributes for the model. <non_existing_parameter> is invalid.
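A minimal sketch of the friendlier message (the helper name is made up for illustration):

```python
def unknown_filter_message(param, allowed):
    # Hypothetical helper: list the allowed filter parameters in the
    # error text, sorted so the output is stable.
    return "unknown filter parameter '%s'. Allowed values are %s." % (
        param,
        sorted(allowed),
    )

msg = unknown_filter_message("non_existing_attribute", {"attribute1", "attribute2"})
```

Raising `FieldError(unknown_filter_message(...))` at the existing error site would be enough.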
When I try to filter a model by a boolean property with an SQLite DB, I get this error:
Traceback (most recent call last):
File "buggy.py", line 31, in <module>
loop.run_until_complete(init())
File "/usr/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
return future.result()
File "buggy.py", line 23, in init
result = await Tournament.filter(bool_field=True)
File "…/Code/tortoise-orm/tortoise/queryset.py", line 479, in _execute
self.query, custom_fields=list(self._annotations.keys())
File "…/Code/tortoise-orm/tortoise/backends/base/executor.py", line 24, in execute_select
raw_results = await self.connection.execute_query(str(query))
File "…/Code/tortoise-orm/tortoise/backends/sqlite/client.py", line 23, in wrapped
raise OperationalError(exc)
tortoise.exceptions.OperationalError: no such column: true
You can reproduce this bug by executing this code:
import asyncio
from tortoise import Tortoise
from tortoise.models import Model
from tortoise import fields

class Tournament(Model):
    id = fields.IntField(pk=True)
    bool_field = fields.BooleanField()

    def __str__(self):
        return str(self.bool_field)  # the model has no `name` field

async def init():
    await Tortoise.init(
        db_url='sqlite://:memory:',
        modules={'models': ['__main__']}
    )
    await Tortoise.generate_schemas()
    result = await Tournament.filter(bool_field=True)
    await Tortoise.close_connections()

loop = asyncio.get_event_loop()
loop.run_until_complete(init())
This is the same problem with related-model filters.
I was expecting a filtered queryset containing only Tournament rows with bool_field set to True.
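The root cause is that SQLite has no boolean literal, so the generated SQL must render True as 1. A standalone sketch of the dialect-aware rendering that is needed (not Tortoise's actual code):

```python
def render_bool(value, dialect):
    # SQLite stores booleans as integers, so a bare `true` is treated
    # as a column name there; PostgreSQL and MySQL accept true/false.
    if dialect == "sqlite":
        return "1" if value else "0"
    return "true" if value else "false"
```

With this, `filter(bool_field=True)` compiles to `"bool_field" = 1` on SQLite instead of the invalid `= true`.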
When the max_length argument is missing, the error message is
tortoise.exceptions.ConfigurationError: max_digits must be >= 1
and doesn't show the true problem.
To reproduce it execute this code:
from tortoise.models import Model
from tortoise import fields

class AModel(Model):
    id = fields.IntField(pk=True)
    some_attribute = fields.CharField()
I think the error message should be instead:
tortoise.exceptions.ConfigurationError: missing 'max_length' parameter
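A sketch of how the check could look inside a CharField-like constructor (the class name and exact wording are illustrative):

```python
class ConfigurationError(Exception):
    pass

class CharFieldSketch:
    # Hypothetical sketch: validate max_length explicitly so the error
    # names the real problem instead of an unrelated max_digits check.
    def __init__(self, max_length=None, **kwargs):
        if max_length is None:
            raise ConfigurationError("missing 'max_length' parameter")
        if max_length < 1:
            raise ConfigurationError("'max_length' must be >= 1")
        self.max_length = max_length
```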
I'm proposing to start with an init handler like so:
def init(...) -> TortoiseManager:
    ...

manager = tortoise.init(
    modules=['app', 'app.sub'],
    databases={
        'default': <some db handler>,
        'notes': <some db handler>
    },
)
# Models are discoverable (and introspectable)
manager.models →
[<app.Projects>, <app.sub.Notes>, <app.sub.Tasks>]
# Actions are discoverable
manager.actions →
{
    'tortoise': {
        'syncdb': { describe itself and parameters here },
        'makemigrations': ...,
        'migrate': ...,
        'info': ...
    },
    'app.sub': {
        'import_tasks': ...
    }
}
# Actions are executed
await manager.action('syncdb', param1=value1, ...)
# And then functionality useful for testing:
await manager.snapshot(...)
await manager.rollback(...)
await manager.teardown()
The TortoiseManager object would be a class that contains all the discovered models, and access to some management commands (a bit like Django, but simpler at first).
Management action functions get the TortoiseManager object as an explicit parameter. The idea is to be able to do all introspection through a standard interface, to try and minimise the incentive to access private functions.
We could provide a tortoise shell command, which would be a wrapper around tortoise.init and TortoiseManager. Doing it like that allows any system it is embedded in to easily manage it with their own tools.
For testing I would need the ability to tear down the Tortoise environment, and to handle both application-wide models and test-local models. We can do something like:
from tortoise.testing import TortoiseTest

class TestThis(TortoiseTest):
    <some config here>
    ...
Where it would automatically build a test db if models are test-local (and tear it down after the tests complete), or use the globally configured test database.
For our own testing we will probably use test-local models, and real applications would probably use global models. So the use case should be optimised (in terms of convenience) for the global-model use case, and test-local models can be a bit rough.
Automatic configuration will be quite important, so I propose a set of "sane" defaults, such as ability to use environment variables/config file to do init automatically.
For that to work we also need to have the ability to initialise databases off a single string.
e.g. using a URI schema like: https://github.com/kennethreitz/dj-database-url
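A sketch of what single-string configuration could parse out, using only the standard library (the returned keys are assumptions, not Tortoise's actual credential names):

```python
from urllib.parse import urlparse

def parse_db_url(url):
    # Hypothetical sketch of single-string database configuration,
    # in the spirit of dj-database-url.
    parsed = urlparse(url)
    return {
        "engine": parsed.scheme,
        "host": parsed.hostname,
        "port": parsed.port,
        "user": parsed.username,
        "password": parsed.password,
        "database": parsed.path.lstrip("/"),
    }

cfg = parse_db_url("postgres://user:pass@localhost:5432/mydb")
```

An environment variable like `DATABASE_URL` could then feed straight into automatic init.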
Any comments/suggestions?
Describe the bug
I'm trying to use Tortoise with Sanic, but I always get an error when running the code and calling Model.all():
AttributeError: 'NoneType' object has no attribute 'cursor'
Why does this error occur?
I'm using Python 3.7.2 and Sanic.
import asyncio
import uvloop
import aiotask_context as context
from sanic import Sanic
from sanic.response import json
from tortoise import Tortoise, run_async
from tortoise.models import Model
from tortoise import fields

class TbEstados(Model):
    id = fields.IntField(pk=True)
    nome = fields.TextField()
    uf = fields.TextField()
    codigoUf = fields.IntField(null=True)
    regiao = fields.IntField(null=True)

    class Meta:
        table = 'TBEstados'

    def __str__(self):
        return self.nome  # the model has no `name` field

sanic = Sanic(__name__)

@sanic.route("/")
async def test(request):
    result = await TbEstados.all()
    return json(result)

async def run():
    await Tortoise.init(
        db_url='mysql://user:pass@ip:3306/database',
        modules={'models': ['__main__']}
    )
    await Tortoise.generate_schemas()

async def main():
    await run()

if __name__ == "__main__":
    run_async(main())
    asyncio.set_event_loop(uvloop.new_event_loop())
    server = sanic.create_server(host="0.0.0.0", port=8000)
    loop = asyncio.get_event_loop()
    loop.set_task_factory(context.task_factory)
    task = asyncio.ensure_future(server)
    try:
        loop.run_forever()
    except:
        loop.stop()
If I try to use a db_url with a subdirectory like "sqlite://data/db.sqlite" I get the following error:
Traceback (most recent call last):
File "buggy.py", line 23, in <module>
loop.run_until_complete(init())
File "/usr/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
return future.result()
File "buggy.py", line 16, in init
modules={'models': ['buggy']}
File ".venv/lib/python3.6/site-packages/tortoise/__init__.py", line 301, in init
await cls._init_connections(connections_config, _create_db)
File ".venv/lib/python3.6/site-packages/tortoise/__init__.py", line 166, in _init_connections
await connection.create_connection()
File ".venv/lib/python3.6/site-packages/tortoise/backends/sqlite/client.py", line 45, in create_connection
await self._connection._connect()
File ".venv/lib/python3.6/site-packages/aiosqlite/core.py", line 190, in _connect
self._connection = await self._execute(self._connector)
File ".venv/lib/python3.6/site-packages/aiosqlite/core.py", line 183, in _execute
raise result
File ".venv/lib/python3.6/site-packages/aiosqlite/core.py", line 155, in run
result = fn()
File ".venv/lib/python3.6/site-packages/aiosqlite/core.py", line 304, in connector
return sqlite3.connect(str(database), **kwargs)
sqlite3.OperationalError: unable to open database file
I use tortoise-orm version 0.10.9.
You can reproduce by copy-pasting the following code into a buggy.py file:
import asyncio
from tortoise import Tortoise
from tortoise.models import Model
from tortoise import fields

class Tournament(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()

    def __str__(self):
        return self.name

async def init():
    await Tortoise.init(
        db_url='sqlite://data/data_test.sqlite',
        modules={'models': ['buggy']}
    )
    # Generate the schema
    await Tortoise.generate_schemas()

loop = asyncio.get_event_loop()
loop.run_until_complete(init())
Subdirectories should be supported in Tortoise-ORM, as aiosqlite supports them.
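A workaround sketch at the application level: create the parent directory yourself before initialising, since sqlite3 will not create intermediate directories on its own.

```python
import os
import sqlite3
import tempfile

def connect_with_dirs(path):
    # sqlite3 creates the database file, but not missing parent
    # directories, so make them first.
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    return sqlite3.connect(path)

with tempfile.TemporaryDirectory() as tmp:
    db_path = os.path.join(tmp, "data", "db.sqlite")
    conn = connect_with_dirs(db_path)
    conn.execute("CREATE TABLE t (id INTEGER)")
    created = os.path.exists(db_path)
    conn.close()
```

The same `os.makedirs` call inside the SQLite backend's create_connection would fix the reported error.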
As a workaround I use the config object (which seems to be as yet undocumented).
class Tournament(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()

    def __str__(self):
        return self.name

config = {
    'connections': {
        'default': {
            'engine': 'tortoise.backends.sqlite',
            'credentials': {
                'file_path': 'data/test.sqlite'
            }
        },
    },
    'apps': {
        'models': {
            'models': ['buggy'],
            'default_connection': 'default',
        }
    }
}

async def init():
    await Tortoise.init(config=config)
    # Generate the schema
    await Tortoise.generate_schemas()

loop = asyncio.get_event_loop()
loop.run_until_complete(init())
Is your feature request related to a problem? Please describe.
At the moment, there is no way to know whether a field is required (i.e. cannot be left empty when creating a model instance) on a Model.
Describe the solution you'd like
Alter the behavior of default: if not provided to the Field's constructor, it should be set to a sentinel value, e.g. DEFAULT_NOT_PROVIDED. Then, we can have required (or blank, to follow the Django API?) be a property computed as return self.default is not DEFAULT_NOT_PROVIDED.
The API would look like this:
from tortoise.models import Model
from tortoise import fields

class BlogPost(Model):
    title = fields.CharField(max_length=100)
    description = fields.TextField(default=None, null=True)

assert BlogPost._meta.fields_map["title"].blank
assert not BlogPost._meta.fields_map["description"].blank
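An implementation sketch of the sentinel approach (class names are illustrative; the exact name and polarity of required/blank is still open):

```python
class _NotProvided:
    def __repr__(self):
        return "<DEFAULT_NOT_PROVIDED>"

DEFAULT_NOT_PROVIDED = _NotProvided()

class FieldSketch:
    # Sketch only: `default` falls back to a sentinel, and `required`
    # is derived from whether a real default (or null=True) was given.
    def __init__(self, default=DEFAULT_NOT_PROVIDED, null=False):
        self.default = default
        self.null = null

    @property
    def required(self):
        return self.default is DEFAULT_NOT_PROVIDED and not self.null

title = FieldSketch()                                # no default -> required
description = FieldSketch(default=None, null=True)   # optional
```

Using an identity check against a unique sentinel keeps `default=None` distinguishable from "no default given".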
Describe alternatives you've considered
Add a blank parameter to Field's __init__() and simply store it on the field. However, we don't have any validation mechanism for now, so we wouldn't be able to act upon blank, which may be counter-intuitive. I suppose for now it's better to compute it from whether a default was passed.
Additional context
I'm playing around with building Django REST Framework-like serialization on top of Tortoise, with Pydantic as a validation backend. My approach is to dynamically build a Pydantic model out of a Tortoise Model. Pydantic has built-in validation for required/non-required fields, so knowing whether a field is required is a blocker for now.
Also, I am willing to work on this issue. 😄 I'd be glad to have pointers on where I should be careful when altering the behavior of default (e.g. assigning a sentinel value), although from what I've seen it should be quite easy to figure out.
from tortoise.models import Model
from tortoise.fields import *

class Tournament(Model):
    id = IntField(pk=True)
    name = TextField()

if __name__ == "__main__":
    from tortoise import Tortoise, run_async

    async def init():
        # Here we create a SQLite DB using file "otc.db"
        # also specify the app name of "models"
        # which contain models from "app.models"
        await Tortoise.init(
            db_url='sqlite://otc.db',
            modules={'models': ['model']}  # pass the module name; this module is named "model"
        )
        # Generate the schema
        await Tortoise.generate_schemas()

    run_async(init())

    async def dps():
        await Tournament.create(name='Another Tournament')
        # Now search for a record
        tour = await Tournament.filter(name__contains='Another').first()
        print(tour.name)

    run_async(dps())
==============================
tortoise.exceptions.ConfigurationError: No DB associated to model
Why does this error occur?
I want to add a _d attribute, a few classmethods, and some regular methods to models by creating a subclass of tortoise.models.Model that users of my framework can use. However, subclassing Model would create a new database model, which is not what I want. From what I've seen, the Model class contains a lot of logic, and it seems infeasible to create a new class with metaclass=ModelMeta. What are my options here? Am I missing something?
Is your feature request related to a problem? Please describe.
There's currently no way to automate the synchronisation of the database's structure with the Models, except when there are only entirely new Models that need to be synced (Tortoise.generate_schemas(safe=True)).
Describe the solution you'd like
It would be nice if new fields that do not have pk set to True, and that have default set to something or nullable set to True, could be integrated into the database as well when Tortoise.generate_schemas is called with safe=True, as this is a safe operation as far as I'm aware.
Describe alternatives you've considered
Not having this feature makes it difficult to build highly modular applications/libraries such as the one I'm building. An alternative I have actually considered is to use Django's migration system for migrations, but that would be messy at best. Another alternative would be to port Django's migration system to tortoise so we have at least some migration system for now (it should be fine for the migration system to be sync I guess), but I don't really have the time to do that right now, unfortunately.
Describe the bug
Expected behavior
It should autocommit. https://docs.djangoproject.com/en/2.1/topics/db/transactions/#autocommit-details
Is your feature request related to a problem? Please describe.
There's at the moment no way to inspect the actual SQL query that a QuerySet would execute when evaluated. This can however be very useful for debugging.
Describe the solution you'd like
Add an .sql property to QuerySet that could be used like this:
>>> print(Book.all().sql)
SELECT * FROM "books";
Implementation-wise, I believe return self._make_query().get_sql() should work.
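A stub-level sketch of the property shape, with _make_query() faked out since pypika isn't needed to show the idea:

```python
class QuerySetSketch:
    # Sketch: .sql delegates to whatever _make_query() returns,
    # which is assumed to expose a pypika-style get_sql().
    def __init__(self, table):
        self._table = table

    def _make_query(self):
        table = self._table

        class _FakeQuery:  # stand-in for the pypika query object
            def get_sql(self):
                return 'SELECT * FROM "%s"' % table

        return _FakeQuery()

    @property
    def sql(self):
        return self._make_query().get_sql()

rendered = QuerySetSketch("books").sql
```

Making it a property (rather than a method) matches the Django `QuerySet.query` ergonomics the issue compares against.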
Describe alternatives you've considered
We could have named this .query, but it would be unclear whether it returns a plain string (as this issue suggests) or the actual Query object from pypika. I don't think the second option is a good idea, because using pypika should be an implementation detail to our users.
Additional context
Follow-up on @abondar's suggestion in #91
In trying to solve the DB → Python type mapping, and with the advent of typing, a possible solution is to define types that then get evaluated into both Python and DB types.
This also raises the question of a cleaner (more modern) model definition, by defining just the type. Since Py3.6 we have typed variables:
class Author(Model):
    name: str[255]
    books: reverse_fk[Book]

class Book(Model):
    title: str[255]
    author: Author
    published: datetime
    logo: bytes = None
    pages: int
    extra: JSONField = {}
    price: Decimal[18,5]
    created_at: datetime[auto_now_add]
This is already standard with Py3.7's @dataclass.
We could extend type definitions so we can build model definitions off of types.
e.g. str[100] = 'some default' → CharField(max_length=100, default='some default')
The syntax is not comprehensive, but we could probably use type declarations for a large portion of use cases.
This would require Py3.6+ to function, so dropping Py3.5 & PyPy.
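A toy sketch of the direction, reading plain class annotations and mapping them to field names. (The mapping is illustrative; note the `str[255]` item-syntax above would need custom types, since subscripting builtins like `str` is not valid today.)

```python
import datetime

class Book:
    title: str
    pages: int
    published: datetime.datetime

# Illustrative annotation-to-field mapping, not Tortoise's actual one.
TYPE_TO_FIELD = {
    str: "TextField",
    int: "IntField",
    datetime.datetime: "DatetimeField",
}

def build_fields(model_cls):
    # Derive field definitions from the class's type annotations.
    return {
        name: TYPE_TO_FIELD[tp]
        for name, tp in model_cls.__annotations__.items()
    }

fields = build_fields(Book)
```

Defaults assigned at class level (like `logo: bytes = None`) could similarly be read off the class and fed into the field constructor.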
While testing my code using tortoise-orm, I created a specific event_loop and dropped the default thread event loop. As a consequence the method get_event_loop() does not work as expected.
The problem is that the initializer and finalizer methods use the asyncio method to get the event loop, and there is no other way.
I would like to be able to pass the event loop as an optional argument, with a default value of whatever get_event_loop() returns.
Current plan is to add mysql support through https://github.com/aio-libs/aiomysql
(edited by @grigi )
Migrations is currently planned as a post-v1.0 feature.
Some useful resources for getting this working right now:
Best guess at this time for a complete solution, not a "quick" solution:
- Have describe_model() contain all the data one needs to generate DDL from.
- Generate high-level DDL instructions from describe_model().
- Change generate_schema to use these high-level DDL instructions to generate a schema in the requested dialect. (This would require some decoupling from the driver instance, look at #72.)
- Generate a schema from a series of high-level DDL instructions (as in, read the migration persistence model).

I'm not particularly happy about many migration systems storing state in the DB itself, as for some environments that won't work, but it is a very good place from a consistency PoV. We should have a backup for when we generate DDL that one would pass on to the DB team, in cases where DDL changes have an established process (e.g. enterprise/consulting).
Proper data migration management makes this very useful:
- Diff describe_model() data to detect changes.
- There are some cases where backwards migrations cannot be done, as a forward migration clobbered/destroyed some data.
- In cases where we detect that we need missing data, request a sane default.

What do we do about data migrations? Is it safe to just emit a warning?
# Model like this
class Permission(Model):
    id = IntField(pk=True)
    name = CharField(max_length=160)

    def __str__(self):
        return self.name

class Role(Model):
    id = IntField(pk=True)
    name = CharField(max_length=160)
    permissions = ManyToManyField('models.Permission', related_name='perm')
    create_time = DatetimeField(auto_now=True)

    def __str__(self):
        return self.name

# Handler like this
class RoleHandler(RequestHandler):
    async def get(self, *args, **kwargs):
        roles = await Role.all()
        for r in roles:
            print(r.permissions)
How do I get values from ManyToManyRelationManager objects?
Currently there is only an implementation for Postgres using jsonb. It would be a good thing to implement a generic field that could be used for basic storing of JSON.
Is your feature request related to a problem? Please describe.
EXPLAIN is an SQL operation that outputs the execution plan and allows one to know exactly how the database will perform a query. It's very useful for debugging.
Describe the solution you'd like
Calling .explain()
on a queryset should return the execution plan as a string:
>>> plan = Tournament.filter(id=4).explain() # type: str
As it is meant for debugging, the actual content of the execution plan can (and will) vary across databases.
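A minimal sketch of what the dialect-dependent part boils down to: prefix the already-generated SQL with the keyword each database expects.

```python
# Illustrative mapping; the real implementation would live on each
# backend/dialect rather than in a module-level dict.
EXPLAIN_PREFIX = {
    "sqlite": "EXPLAIN QUERY PLAN",
    "mysql": "EXPLAIN",
    "postgres": "EXPLAIN",
}

def explain_sql(dialect, sql):
    # Prepend the dialect-appropriate EXPLAIN keyword to the query.
    return "%s %s" % (EXPLAIN_PREFIX[dialect], sql)
```

The returned string is then executed and the raw rows are handed back to the caller as-is.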
I've also noted that:
- MySQL and PostgreSQL use EXPLAIN, while SQLite uses EXPLAIN QUERY PLAN (it also has EXPLAIN, but it returns lower-level information).
- The output of an EXPLAIN query may vary between databases. For example, MySQL specifies the EXPLAIN output columns, while SQLite says it may even vary across releases.
- MySQL supports two EXPLAIN formats: TRADITIONAL and JSON.
- asyncpg has a built-in explain (connection.prepare().explain()) that hardcodes the JSON, VERBOSE format. However, it should be easy to re-implement explain() with a variable format ourselves, if needed.

Describe alternatives you've considered
Because of the above, I'm not sure whether letting users specify an EXPLAIN format (e.g. queryset.explain(format="YAML")) would be worth it because, again, .explain() would be for debugging, so I'm afraid users may start relying on the actual output if we let them specify a format.
Additional context
I wanted to use it to test field indexes (#89), but I am running into issues because the exact format is database-dependent, so I figured a separate issue would be better.
Describe the bug
The derived model does not inherit the parent model's fields.
To Reproduce
example
Logs:
[2019-02-26 16:58:50,616] DEBUG [asyncio.__init__:53] Using selector: EpollSelector
[2019-02-26 16:58:50,648] DEBUG [db_client.create_connection:64] Created connection <asyncpg.connection.Connection object at 0x7f5a6238b2d0> with params: user=postgres database=testdb host=localhost port=5432
[2019-02-26 16:58:50,649] DEBUG [db_client.execute_script:118] CREATE TABLE IF NOT EXISTS "something" ("name" TEXT NOT NULL, "id" SERIAL NOT NULL PRIMARY KEY);
[2019-02-26 16:58:50,667] DEBUG [db_client.close:76] Closed connection <asyncpg.connection.Connection object at 0x7f5a6238b2d0> with params: user=postgres database=testdb host=localhost port=5432
Expected behavior
Derived model includes all parent model fields (created, updated) as described here.
Additional context
Installed versions:
Python 3.7.2
tortoise-orm==0.11.4
asyncpg==0.18.3
PostgreSQL 11.2 (Debian 11.2-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
This was mentioned in the GIS discussion but never made it into a separate issue.
Having a client being aware of the capabilities of its connection is beneficial in a number of situations.
I am considering how to do this + #72 together since they are somewhat related and thought some discussion would be good first around how the two would interact. Some unordered thoughts about how it would work:
- The {dialect}+{engine}://... syntax is nice:
  postgis+asyncpg://...
  sqlite+aiosqlite://...
  mysql+odbc://...
- A plain sqlite:// (with no engine specified) should still work.
- Capabilities could advertise dialect features such as DISTINCT ON or JSON fields.
- If the user specifies a nonsensical combination (e.g. postgis+mysql://...) should we warn them or let it fail? Maybe an engine should define which dialects it supports (and postgis would just subclass pg).
- Each dialect could map to a default engine, e.g. mysql -> aiomysql.
- A backends.engines package, and a backends.dialects package that contains the supported dialects and their features.

Then, with the GIS module in mind, it would be nice to be able to register new dialects. For example, having spatialite subclass sqlite to add the location capability.
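A sketch of what dialect registration with inherited capabilities could look like, with plain dicts standing in for dialect classes (all names are illustrative):

```python
# Registry of dialect name -> capability flags. A new dialect can
# name a base dialect and inherit its capabilities, overriding or
# extending them.
DIALECTS = {}

def register_dialect(name, capabilities, base=None):
    caps = dict(DIALECTS.get(base, {}))  # start from the base's caps
    caps.update(capabilities)
    DIALECTS[name] = caps

register_dialect("sqlite", {"json": False, "distinct_on": False})
register_dialect("spatialite", {"location": True}, base="sqlite")
```

With this, a GIS extension only declares what it adds; everything else comes from the parent dialect.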
Currently, information on how to generate the schema for a field is stored in schema_generator, and in order to make a custom field for some DB type that we don't cover, a user would have to create a custom SchemaGenerator and a DBClient pointing to it, which is quite a hard deal.
We could simplify SchemaGenerator by delegating the description of the schema to the models and fields themselves. Fields would have a method get_sql_type that returns just the string that goes into the schema-generating SQL. Another advantage of this approach is that fields like DecimalField would be generated much more elegantly than they are now.
A problem with this approach is how to alter that string depending on the dialect. I think we could pass the dialect (or capabilities) down to the get_sql_type functions, so that they can alter behaviour based on dialect.
Also I think it should be developed alongside with #39
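A sketch of the delegation, with DecimalField as the example and the dialect passed down (the class and type strings are illustrative, not Tortoise's actual output):

```python
class DecimalFieldSketch:
    # Sketch: each field knows its own SQL type, and may vary it per
    # dialect, so the schema generator needs no per-field logic.
    def __init__(self, max_digits, decimal_places):
        self.max_digits = max_digits
        self.decimal_places = decimal_places

    def get_sql_type(self, dialect):
        if dialect == "sqlite":
            # illustrative fallback: SQLite has no native DECIMAL type
            return "VARCHAR(40)"
        return "DECIMAL(%d,%d)" % (self.max_digits, self.decimal_places)

field = DecimalFieldSketch(18, 5)
```

The schema generator would then just join `'"%s" %s' % (name, field.get_sql_type(dialect))` fragments.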
When I do
await IntFields.all().order_by('intnum').limit(10).delete()
I expected it to delete the first 10 IntFields items ordered by lowest value of intnum.
I read up, and just about all databases support deleting on a limit, as long as one doesn't do a join.
I feel we should decide what should be allowed, and then raise an exception if you build a queryset that does e.g. offset and delete, or join+limit and delete?
And should we make the same limitations apply to count()?
Describe the bug
Delete does not return anything.
To Reproduce
res = await Projects.filter(id=id).delete()
res is NoneType, not DeleteQuery
Expected behavior
The tortoise-orm docs say: "Delete all objects in QuerySet. Return type: DeleteQuery"
Additional context
update() does this too.
Please, is it possible to create a tool to reverse-engineer a legacy database and generate Tortoise models?
.filter(intnum_null=None) does not work as expected.
Instead you have to do:
.filter(intnum_null__isnull=True)
This can cause rather hard to debug issues, especially if one has a variable that may be None.
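One possible fix, sketched standalone: rewrite None-valued filter kwargs into the isnull form before building the SQL (the helper name is made up):

```python
def normalize_filter(kwargs):
    # Rewrite `field=None` filters into the explicit
    # `field__isnull=True` form before SQL generation, so a variable
    # holding None behaves as users expect.
    out = {}
    for key, value in kwargs.items():
        if value is None and not key.endswith("__isnull"):
            out[key + "__isnull"] = True
        else:
            out[key] = value
    return out
```

This would make `.filter(intnum_null=maybe_none_var)` do the right thing either way.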
It looks like there is no BigIntField. I need it because Discord IDs have 19 digits and are thus too big for regular INTEGERs.
Why is there no Field for file upload?
https://tortoise-orm.readthedocs.io/en/latest/fields.html
Such as FileField. What should I do if I need to upload files now?
As per test tortoise.tests.test_two_databases.TestTwoDatabases.test_two_databases_relation
- generate_schema EXACTLY the same as SHOW CREATE TABLE in MySQL (#237)
- Q (filter, not-Q, bad-Q)
- values() & values_list() (with relation, multi-key)
- BinaryField type
- OneToOneField (#236)
- .values() and .values_list() (#20)
- db_url resolver (#25)
- Model
- Tortoise singleton
- _meta
Define a method on Tortoise singleton and/or Model classes that will allow one to discover the exact schema, and any extra metadata.
This should be usable for generating migrations, serialisers and hypothesis strategies and similar.
Follow up from #14
Would be great if we could serialise/deserialise the Objects to a dict (and eventually JSON).
It would be perfectly fine if it could not jump a ManyToMany relationship at all.
I really like the way that attrs does this, where you can send in arbitrary data that gets passed to a conversion function which then instantiates the actual object. E.g. you can send in an ISO8601 string and configure a converter that auto-converts it to a datetime.datetime object for you.
Particularly, we need something that would make a CRUDL API simple to implement. I feel actual serialisation could be left as a later concern.
Maybe consider making this NOT be part of the core, but allow an external module to inspect Models easily via a schema-like interface?
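The attrs-style converter idea above, sketched with the standard library only:

```python
import datetime

def iso8601_converter(value):
    # Accept either a datetime or an ISO8601 string and normalise to
    # datetime.datetime, like an attrs converter would on assignment.
    if isinstance(value, datetime.datetime):
        return value
    return datetime.datetime.fromisoformat(value)
```

A deserialiser would run such converters per-field while instantiating the model, so API clients can send plain JSON strings.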
I need to add a unique key on 2 fields of a model. Django offers Meta.unique_together for this. How can I do it with Tortoise?
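Whatever the model-level API ends up being, the composite constraint it compiles to is plain SQL; a sketch of rendering it (the helper and naming scheme are hypothetical):

```python
def unique_together_ddl(table, columns):
    # Render a composite UNIQUE constraint as it could appear in
    # generated DDL; the "uid_" prefix is just an illustrative
    # naming convention.
    return 'CONSTRAINT "uid_%s" UNIQUE ("%s")' % (
        table,
        '", "'.join(columns),
    )
```

A Django-style `Meta.unique_together = (("room", "day"),)` declaration could feed straight into such a renderer.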
I miss the ability to mark fields as indexed. I don't know if this is a limitation of pypika. I need this feature to ensure that DB queries are decently fast. :)
values and values_list should also support getting fields from related models. An important feature for values would be overriding the name of a field in the resulting dict.
Like this:
await Event.all().values_list('id', 'name', 'tournament__name')
await Event.all().values(
    'id',
    'name',
    tournament='tournament__name'
)
Is your feature request related to a problem? Please describe.
Doing work for a client, we requested access to a PostgreSQL database and were provided an ODBC interface; when we asked for a direct network connection, we were told that it was "against policy".
Considering the current architecture if we were using Tortoise it would have been quite the pain point.
Describe the solution you'd like
Separate the SQL Dialect and the DB Connector, so that they are loosely connected.
This way we can allow the user to mix & match them as needed.
This should make sharing a connector for both GEO and non-GEO use easy, and provides backup routes for DB connections (e.g. falling back to aiopg in case asyncpg is not available).
We could allow a dialect to provide a preferred connector, but allow overriding it manually: postgres+aiopg:// or postgres+odbc://
To allow the postgres dialect to pick either asyncpg or aiopg depending on availability, we should make the dialect optionally return the default connector from a callable. We could use the same mechanism to return capabilities.
Describe alternatives you've considered
Lots of code duplication, so not much really.
Additional context
It would require a much larger test matrix, which is already getting ridiculously large. It might be useful to split test runs up by base primitive, e.g. DB-variants (on default Python) + Python-versions (doing a testall equivalent), to compress the matrix.
Right now we have 13 runs, which using the proposed compression we can reduce to 9.
If we add the hypothetical aiopg + odbc connectors, it would be 25 vs 12.
Describe the solution you'd like
Really been enjoying using this so far, but I'm now in the need of GIS data. I'd be interested in developing PostGIS and Spatialite support, whether that be included in the base ORM or as an extension (as geoalchemy2 is to sqlalchemy). There is an extension to pypika called pypika-gis that would presumably do some of the heavy lifting, so I was wondering if I could get some pointers on how to get started?
Describe alternatives you've considered
I considered using sqlalchemy, sqlalchemy-aio, geoalchemy2, but it feels like a bit of a "hack" to run it in a separate thread.
Python 3.7 requires any async iterator to not only define __aiter__ but also __anext__.
It seems the AsyncIteratorWrapper defined in tortoise/utils.py can't just be returned, as is currently happening in tortoise/queryset.py and tortoise/fields.py.
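For reference, the shape Python 3.7 expects, sketched independently of Tortoise's actual AsyncIteratorWrapper:

```python
import asyncio

class AsyncIteratorWrapperSketch:
    # Py3.7-compatible shape: __aiter__ is a plain (non-coroutine)
    # method returning self, and __anext__ does the actual stepping.
    def __init__(self, items):
        self._items = iter(items)

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self._items)
        except StopIteration:
            raise StopAsyncIteration

async def collect():
    return [x async for x in AsyncIteratorWrapperSketch([1, 2, 3])]

result = asyncio.run(collect())
```

The reported TypeError comes from __aiter__ being a coroutine; making it synchronous and moving the awaiting into __anext__ resolves it.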
More info:
File "/usr/lib64/python3.7/unittest/case.py", line 59, in testPartExecutor
yield
File ".tox/py37-postgres/lib/python3.7/site-packages/asynctest/case.py", line 297, in run
self._run_test_method(testMethod)
File ".tox/py37-postgres/lib/python3.7/site-packages/asynctest/case.py", line 354, in _run_test_method
self.loop.run_until_complete(result)
File ".tox/py37-postgres/lib/python3.7/site-packages/asynctest/case.py", line 224, in wrapper
return method(*args, **kwargs)
File "/usr/lib64/python3.7/asyncio/base_events.py", line 568, in run_until_complete
return future.result()
File "tortoise/tests/test_relations.py", line 23, in test_relations
async for team in event.participants:
TypeError: 'async for' received an object from __aiter__ that does not implement __anext__: coroutine
tortoise/tests/test_relations.py:23: RuntimeWarning: coroutine 'RelationQueryContainer.__aiter__' was never awaited
async for team in event.participants:
This was with CPython 3.7.0 installed and tested by running:
tox -e py37-postgres
(and disabling pylint, which seems broken in 3.7)
Currently Tortoise works normally only if your PK is an int field named id. It should instead work with a pk field like in Django, which references the real PK field that can have various types and names.
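A sketch of the aliasing this asks for (class and attribute names are illustrative): `pk` resolves to whatever the real primary-key field is, regardless of its name or type.

```python
class ModelSketch:
    # The metaclass would discover the real PK field at class-creation
    # time; here it is hard-coded for illustration.
    _pk_attr = "uuid"

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    @property
    def pk(self):
        # Generic alias: always points at the actual PK attribute.
        return getattr(self, self._pk_attr)

obj = ModelSketch(uuid="abc-123", name="thing")
```

Filters like `.get(pk=...)` would be rewritten to the real field name the same way.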
ODBC is a bit of a crap shoot. It can sometimes help in a pinch, though.
We should formalise our stance on ODBC.
E.g. use aioodbc and support the basic features of an ODBC driver for a select set of DBs only.
ODBC is the only way from the asyncio Python world to connect to the Microsoft/Oracle databases, and to warehousing databases (such as Vertica/Teradata/IQ).
For the first set I could see that someone might want to use a Python ORM over MS-SQL, but the latter cases, for warehouse db's... They likely don't want to be used like a relational database, so an ORM doesn't make sense.
Please provide support for this feature. Currently the ModelMeta metaclass ignores source_field for ForeignKey fields (it just appends '_id' to the field name). I've managed to fix this with a one-liner directly inside the tortoise-orm source code (I can't create an abstract subclass of ModelMeta/Model, because Tortoise.init isn't setting the _db field on all internal objects of the ORM), but this does not seem like a reliable fix.
Example where this can be helpful:
class Lambda(Model):
    id = fields.IntField(pk=True, null=False)

class Related(Model):
    lambda_ = fields.ForeignKeyField(
        'models.A',
        source_field='lambda'  # !
    )
For the sake of optimisation and more complex queries it would be great to implement subqueries.
I see their design like this:
await Event.filter(tournament=Tournament.filter(name__icontains='First'))
It should be quite easy to understand, and the code itself should be intelligent enough to get the id from a queryset by default, but it should also support some other values by passing a flat values_list, like this:
await Event.filter(
    name__icontains=Tournament.filter(is_finished=True).values_list('name', flat=True),
)
I have tried to enable connection pools as part of the test suite, but there is a very very odd issue with connection pools and transactions.
When using a single connection, you always get the same connection, so we don't have to worry about which connection is busy in what transaction, but with a pool, one needs to lock the connection for the duration of the connection.
It looks like it should, but it isn't.
Symptoms I have found so far are:
- .update uses the connection very differently than .filter. Why do we have code sections that reserve a single connection, and some that don't?

Things I have tried:
This all leaves me quite confused as to why we have so many different ways of wrapping connections, and why they just don't seem to work with pools.
I want a clear idea before I actually consider ripping things apart.
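The invariant the pool code must preserve can be shown with a toy queue-backed pool (everything below is a sketch, not Tortoise's actual pooling): a transaction pins one connection until it completes.

```python
import asyncio

class PoolSketch:
    # Toy pool: free connections wait in a queue; acquire removes one,
    # release puts it back.
    def __init__(self, size=2):
        self._free = asyncio.Queue()
        for i in range(size):
            self._free.put_nowait("conn-%d" % i)

    async def acquire(self):
        return await self._free.get()

    def release(self, conn):
        self._free.put_nowait(conn)

async def run_transaction(pool, statements):
    # Key property: one connection is pinned for the whole transaction,
    # so every statement runs on the same connection.
    conn = await pool.acquire()
    try:
        return [(conn, stmt) for stmt in statements]
    finally:
        pool.release(conn)

async def main():
    pool = PoolSketch()
    return await run_transaction(pool, ["BEGIN", "UPDATE t SET x = 1", "COMMIT"])

results = asyncio.run(main())
```

Code paths that grab a fresh connection per statement (as `.update` vs `.filter` seem to) silently break this pinning once a pool is in play.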
Hi, I want to check whether a delete action succeeded, but filter().delete() returns NoneType. Thanks.
Right now I don't have a good idea of how to structure the tests.
I think I could make a dummy backend for easy single-query tests, and use SQLite for more complex ones, but I'm still looking for better ideas.
Hi,
I have been looking for a sane async Python ORM, and this is the first one I found that ticks my boxes in terms of design.
Clearly this is a work in progress, but as it is, it's actually workable for simpler projects 👍
Could you please share your goals for this project, and how can I help?