prisma / prisma-engines
Engine components of Prisma ORM
Home Page: https://www.prisma.io/docs/concepts/components/prisma-engines
License: Apache License 2.0
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
Spec: The relevant part of the spec on the Prisma Schema Language can be found here.
Description: The Prisma Schema Language allows the user to define a combination of fields that must be unique across all records of a model with the @@unique directive. How this is implemented is an implementation detail of the connector. Usually a connector will leverage an index with a unique constraint.
Example:
The following example shows a User model where the combination of firstName and lastName is unique.
```prisma
model User {
  id        String @id @default(cuid())
  firstName String
  lastName  String

  @@unique([firstName, lastName], name: "UniqueFirstAndLastNameIndex")
}
```
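On a SQL connector, a @@unique directive like the one above typically becomes a compound unique index. The engine itself is Rust, so the following is only an illustrative sketch of the resulting database behavior, using Python's sqlite3 with the table and index names from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (id TEXT PRIMARY KEY, firstName TEXT, lastName TEXT)")
# @@unique([firstName, lastName], name: "UniqueFirstAndLastNameIndex")
# maps to a compound unique index:
conn.execute('CREATE UNIQUE INDEX "UniqueFirstAndLastNameIndex" ON User (firstName, lastName)')

conn.execute("INSERT INTO User VALUES ('1', 'Ada', 'Lovelace')")
conn.execute("INSERT INTO User VALUES ('2', 'Ada', 'Byron')")  # same firstName alone is fine
try:
    # Duplicate (firstName, lastName) pair violates the unique constraint.
    conn.execute("INSERT INTO User VALUES ('3', 'Ada', 'Lovelace')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Only the combination of both fields is constrained; each field on its own may repeat.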
Currently, the Rust binaries are built for specific platforms like Netlify, ZEIT Now, etc.
Instead, we should build OS-based distributions with different OpenSSL versions, because different operating systems ship with different default OpenSSL versions.
We need:
This will add proper support for:
Note: When these binaries are shipped, we need to adapt the fetch-engine in photonjs. See #TODO for tracking.
When done, we can remove:
For a general overview, see prisma/prisma#533
Implement conversion of schema returned by database-introspection to data model.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
database-introspection: Don't mark SQLite columns with an incompatible type (e.g. TEXT) as auto-increment, even if they're part of the primary key.
NB: Keep in mind that SQLite column types are only symbolic; find out how this works in practice.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
This is a followup task from this issue.
Idea: We should instrument the Prisma server to measure every single request and set up Prometheus/Grafana for better time-scale graphs.
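As a sketch of the per-request measurement idea (the names and the in-process store are hypothetical; a real setup would export these timings to Prometheus and graph them in Grafana):

```python
import time
from collections import defaultdict

# Hypothetical in-process metrics store: request name -> list of durations (s).
request_timings = defaultdict(list)

def timed(name, fn, *args, **kwargs):
    """Run fn, record its wall-clock duration under `name`, return its result."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        request_timings[name].append(time.perf_counter() - start)

# Example: time one hypothetical "request".
result = timed("simple_query", lambda: sum(range(1000)))
```

Each request handler would be wrapped this way so every request produces one timing sample.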
Product Epic: The corresponding product epic can be found here.
Description: The Prisma Schema Language allows the user to define custom types that should be used for a given field.
Example:
The following example shows a User model where firstName is backed by a varchar column.
```prisma
datasource pg1 {
  ..
}

model User {
  id        String @id @default(cuid())
  firstName String @pg1.varchar(123)
  lastName  String
}
```
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
This issue covers Rust work related to prisma/prisma-client-js#239
Goal: This issue describes how we plan to implement the feature of cascading deletes available in the Prisma Schema Language.
Idea: We would like to leverage SQL's ON DELETE CASCADE
feature wherever possible. However, it does not support all the use cases of Prisma's cascading deletes. The idea is to use the SQL feature as often as possible and to implement shims in the query engine where necessary. The parts that need to be shimmed are indicated below.
Problems:
@relation(onDelete: CASCADE) on a field implies a SQL-level ON DELETE CASCADE on the column of the related field. Either side of a relation can carry the onDelete annotation, but on the SQL level there is only one column, and therefore only the behavior for one side can be expressed on the SQL level. This first case could work purely on the SQL level and could also be introspected from the DDL.
Prisma schema:
```prisma
model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent // this references the parent id
}
```
corresponding SQL:
```sql
CREATE TABLE Parent ( id TEXT );
CREATE TABLE Child (
  id TEXT,
  parent TEXT REFERENCES Parent(id) ON DELETE CASCADE
);
```
Semantics:
This cannot be expressed on the SQL level and would need to be handled by Prisma. We could also not introspect this case.
Prisma schema:
```prisma
model Parent {
  children Child[]
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}
```
corresponding SQL:
```sql
CREATE TABLE Parent ( id TEXT );
CREATE TABLE Child (
  id TEXT,
  parent TEXT REFERENCES Parent(id)
);
```
Semantics:
This cannot be fully expressed on the SQL level. We would need a mix of Prisma level and SQL level handling. We could therefore also not introspect this case.
Prisma schema:
```prisma
model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}
```
corresponding SQL:
```sql
CREATE TABLE Parent ( id TEXT );
CREATE TABLE Child (
  id TEXT,
  parent TEXT REFERENCES Parent(id) ON DELETE CASCADE
);
```
Semantics:
Here the SQL-level ON DELETE CASCADE statements merely ensure that there are no dangling relation entries after one of the connected nodes is deleted; they have no connection to the Prisma-level semantics. The Prisma-level cascades cannot be expressed in the database and therefore also cannot be introspected. Here, for all cases (CASCADE on child, CASCADE on parent, CASCADE on both), the implementation needs to happen on the Prisma level.
Prisma schema:
```prisma
model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parents Parent[]
}
```
corresponding SQL:
```sql
CREATE TABLE Parent ( id TEXT );
CREATE TABLE Child ( id TEXT );
CREATE TABLE _ParentToChild (
  parent TEXT REFERENCES Parent(id) ON DELETE CASCADE,
  child  TEXT REFERENCES Child(id) ON DELETE CASCADE
);
```
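The ON DELETE CASCADE behavior these DDL snippets rely on can be exercised directly; a small sqlite3 sketch for illustration (SQLite enforces foreign keys only when the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE Parent (id TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE Child (
    id TEXT PRIMARY KEY,
    parent TEXT REFERENCES Parent(id) ON DELETE CASCADE)""")

conn.execute("INSERT INTO Parent VALUES ('p1')")
conn.execute("INSERT INTO Child VALUES ('c1', 'p1')")

# Deleting the parent cascades to every referencing child row,
# with no application-level code involved.
conn.execute("DELETE FROM Parent WHERE id = 'p1'")
remaining_children = conn.execute("SELECT COUNT(*) FROM Child").fetchone()[0]
```

This is exactly the part the database can do for us; the directions it cannot express are the ones that need shims in the query engine.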
Semantics:
Spec: The relevant part of the spec on the Prisma Schema Language can be found here.
Description: The Prisma Schema Language allows the user to define named or unnamed embeds.
Example:
The following example shows a User model where the associated account is an embed.
```prisma
model User {
  id      String @id @default(cuid())
  account Account
}

embed Account {
  accountNumber Int
}
```
Goal: We want to compare the performance of synchronous and asynchronous database drivers.
Idea: Build a very simple HTTP server that reads from a Postgres database. Through an interface it should be easy to swap the database driver that is being used.
Requirements:
- The server answers on /{path}, where the path is the name of the query to be executed. The server holds a map of static queries mapped to their name.
- Two queries against information_schema: one query that returns one result and one that returns many results.

Interface Draft:
```rust
use std::future::Future;
use std::pin::Pin;

pub trait Testee {
    /// Executes the query with the given name.
    /// The result is serialized to JSON and returned.
    fn query(&self, query_name: &str) -> Pin<Box<dyn Future<Output = serde_json::Value> + '_>>;
}
```
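To make the named-query map concrete, here is a Python stand-in for the trait (the SqliteTestee name, the queries, and the use of sqlite3 instead of Postgres are illustrative assumptions):

```python
import json
import sqlite3

class SqliteTestee:
    """Stand-in for the Rust Testee trait: runs a named static query
    and returns the rows serialized to JSON."""
    def __init__(self, conn, queries):
        self.conn = conn
        self.queries = queries  # name -> SQL string

    def query(self, query_name):
        sql = self.queries[query_name]
        rows = self.conn.execute(sql).fetchall()
        return json.dumps(rows)

conn = sqlite3.connect(":memory:")
testee = SqliteTestee(conn, {
    "one_result": "SELECT 1",
    "many_results": "SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3",
})
single = json.loads(testee.query("one_result"))
many = json.loads(testee.query("many_results"))
```

The benchmark server would resolve /{path} to `query(path)`, so swapping drivers only means swapping the Testee implementation.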
This is a followup task from this issue.
A read-only query does not really need a transaction. Hence we should not start one in that case, to avoid unnecessary overhead.
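A minimal sketch of the idea, assuming for illustration that read-only statements can be detected by a SELECT prefix (a real engine would consult the parsed query instead of a string heuristic):

```python
import sqlite3

def execute(conn, sql, params=()):
    """Run reads without a transaction; wrap writes in an explicit one."""
    # Heuristic sketch only: a real engine would inspect the parsed query.
    if sql.lstrip().upper().startswith("SELECT"):
        return conn.execute(sql, params).fetchall()
    conn.execute("BEGIN")
    try:
        conn.execute(sql, params)
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
execute(conn, "CREATE TABLE t (x INTEGER)")
execute(conn, "INSERT INTO t VALUES (?)", (42,))
rows = execute(conn, "SELECT x FROM t")
```

Reads skip the BEGIN/COMMIT round-trips entirely, which is the overhead this issue wants to avoid.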
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
When introspecting MySQL, support compound primary keys.
Also fix this test afterwards.
Test introspection versus old system.
This is a followup task from this issue.
Idea: Threaded connection pool with poll_fn/blocking, async/await all the way up.
Child of prisma/prisma#393
We are approaching a point where we will need strong backwards compatibility of the persisted state of the migration engine.
To avoid breaking backwards compatibility of the migrations, we can write end-to-end tests that do:
For the snapshots, we could use the insta crate.
Every time a snapshot breaks, we must manually change the test suite so it tests with the new snapshot in addition to the old (ideally, this would just be adding a file name to a static list). That way, we test that old migrations keep producing the same results.
It's unambiguous that the contents of the migration table must be backwards-compatible, but I am less sure for RPC. We should probably isolate the results that get persisted to the migrations folder, since we can evolve the API in sync with the Lift CLI.
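The record-then-compare approach can be sketched as follows (an insta-style check; the in-memory dict is a stand-in for the snapshot files, and the names are hypothetical):

```python
import json

# Hypothetical snapshot store: snapshot name -> previously recorded output.
snapshots = {}

def assert_snapshot(name, value):
    """insta-style check: record on the first run, compare on later runs."""
    serialized = json.dumps(value, sort_keys=True)
    if name not in snapshots:
        snapshots[name] = serialized  # first run: record the snapshot
        return "recorded"
    if snapshots[name] != serialized:
        raise AssertionError(f"snapshot {name!r} changed")
    return "matched"

first = assert_snapshot("migration_steps", {"steps": ["CreateModel"]})
second = assert_snapshot("migration_steps", {"steps": ["CreateModel"]})

# A changed output trips the comparison instead of silently diverging.
try:
    assert_snapshot("migration_steps", {"steps": ["DeleteModel"]})
    changed = False
except AssertionError:
    changed = True
```

When a break is intentional, the old snapshot stays in the suite and a new one is added, so old migrations keep being tested against their original output.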
Related issue: prisma/migrate#151
@timsuchanek wants to use this to validate that the JavaScript code and the downloaded engines are in sync. We often had cases where this did not work properly, and debugging is unnecessarily hard.
Spec: The relevant part of the spec on the Prisma Schema Language can be found here.
Description: The Prisma Schema Language allows the user to define a combination of fields as the id criterion. How this is implemented is an implementation detail of the connector.
Example:
The following example shows a User model where the combination of firstName and lastName is the id criterion.
```prisma
model User {
  firstName String
  lastName  String

  @@id([firstName, lastName])
}
```
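On a SQL connector, @@id([firstName, lastName]) would typically map to a compound primary key. A sqlite3 sketch of the resulting constraint, for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# @@id([firstName, lastName]) maps to a compound PRIMARY KEY.
conn.execute("""CREATE TABLE User (
    firstName TEXT,
    lastName TEXT,
    PRIMARY KEY (firstName, lastName))""")

conn.execute("INSERT INTO User VALUES ('Ada', 'Lovelace')")
conn.execute("INSERT INTO User VALUES ('Ada', 'Byron')")  # same firstName alone is fine
try:
    # Repeating the full (firstName, lastName) pair violates the primary key.
    conn.execute("INSERT INTO User VALUES ('Ada', 'Lovelace')")
    duplicate_id_rejected = False
except sqlite3.IntegrityError:
    duplicate_id_rejected = True
```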
Affected Components
This is a followup task from this issue.
We should run our new benchmarks on a continuous basis, either daily or on each commit.
This is a followup task from this issue.
During benchmarking we found out that our current logger implementation is very slow and in some cases can consume 50% of CPU cycles while processing a simple query. We want to replace the current logger. The idea is to evaluate tokio's tracing, for which we need to write our own JSON output formatter.
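As a sketch of what such a JSON output formatter produces, here is a minimal Python analogue (the Rust version would be a formatter for tracing's subscriber; the field names are assumptions, not the actual output format):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render a log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "target": record.name,
            "message": record.getMessage(),
        })

# Build a record by hand to show the output shape.
record = logging.LogRecord("query_engine", logging.INFO, __file__, 1,
                           "query took %dms", (12,), None)
line = JsonFormatter().format(record)
parsed = json.loads(line)
```

Structured one-line JSON output keeps formatting cheap and makes the logs machine-parseable for later analysis.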
Spec: There's no mention in the Prisma Schema Language spec yet.
Description: Currently we always map fields of type Int @id to an int column that is backed by a sequence. It is not possible to have an int column as primary key that is not backed by a sequence.
Example:
The following example currently always maps to an int id field that is backed by a sequence.
```prisma
model User {
  id Int @id
}
```
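SQLite can illustrate the two behaviors this issue distinguishes (as a rough analogue of Postgres sequences): an id aliasing the rowid is auto-assigned, while a plain int primary key must be supplied by the client, which is the mapping this issue asks to also allow.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "INTEGER PRIMARY KEY" aliases SQLite's rowid, so the id is auto-assigned:
# roughly analogous to an int column backed by a sequence.
conn.execute("CREATE TABLE auto_id (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO auto_id (id) VALUES (NULL)")
auto = conn.execute("SELECT id FROM auto_id").fetchone()[0]

# A plain "INT PRIMARY KEY" column is not auto-assigned: the value must be
# supplied by the client.
conn.execute("CREATE TABLE manual_id (id INT PRIMARY KEY NOT NULL)")
conn.execute("INSERT INTO manual_id (id) VALUES (7)")
manual = conn.execute("SELECT id FROM manual_id").fetchone()[0]
```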
Spec: The relevant part of the spec on the Prisma Schema Language can be found here.
Description: The PSL defines several core data types that must be implemented by each connector. We must make sure that we support exactly these types, no less and no more.
Example:
The following example shows a User model with a field of each core data type.
```prisma
model User {
  id       String @id @default(cuid())
  string   String
  bool     Boolean
  int      Int
  float    Float
  dateTime DateTime
}
```
The DeadlockSpec in the connector test kit suddenly started failing. The message I see in the logs in the case of Postgres is:

Error querying the database: db error: FATAL: sorry, too many clients already, application: prisma

It might be a side effect of the latest changes to connection pooling. We should fix this spec.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
When introspecting, expose whether a column is marked as unique.
Replace SQL command string formatting with safe parameter injection in introspection component.
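The difference can be sketched with sqlite3 (table name and data are hypothetical): string formatting splices input straight into the SQL text, while a bound parameter is always treated as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tables_meta (name TEXT)")
conn.execute("INSERT INTO tables_meta VALUES ('User')")
conn.execute("INSERT INTO tables_meta VALUES ('Post')")

table_name = "User' OR '1'='1"  # hostile input

# Unsafe: the input becomes part of the SQL and changes its meaning.
unsafe_sql = "SELECT COUNT(*) FROM tables_meta WHERE name = '%s'" % table_name
unsafe_count = conn.execute(unsafe_sql).fetchone()[0]  # matches every row

# Safe: a bound parameter is sent as data and never parsed as SQL.
safe_count = conn.execute(
    "SELECT COUNT(*) FROM tables_meta WHERE name = ?", (table_name,)
).fetchone()[0]  # matches nothing, as intended
```

Note that placeholders only work for values; identifiers such as schema or table names still need quoting/allow-listing, which matters for an introspection component.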
This is a note to myself, but I can't assign issues to myself currently.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
If the back relation field is unspecified and the specified field is:
Supersedes prisma/specs#164.
The Migration Engine needs to be able to diff two data models in order to generate the steps.json that is central to Lift. Currently we diff the data structures in the dml module of the datamodel crate. I think we should instead diff the data structures of the ast module.
This should make our code easier to maintain and at the same time enable diffing of arbitrary directives, which is required to enable custom types.
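A toy sketch of why AST-level diffing handles arbitrary directives uniformly: if each field carries its directives as opaque strings (as the AST does), a custom directive like @pg1.varchar(123) diffs exactly like a built-in one. The step names and data shapes here are hypothetical, not the actual steps.json format.

```python
def diff_models(old, new):
    """Diff two models represented as field name -> list of directive strings."""
    steps = []
    for field in new:
        if field not in old:
            steps.append(("CreateField", field))
        elif old[field] != new[field]:
            # Directives are compared verbatim, so custom directives
            # need no special handling.
            steps.append(("UpdateField", field))
    for field in old:
        if field not in new:
            steps.append(("DeleteField", field))
    return steps

old = {"id": ["@id"], "name": []}
new = {"id": ["@id"], "name": ["@pg1.varchar(123)"], "email": ["@unique"]}
steps = diff_models(old, new)
```

Diffing typed dml structures, by contrast, requires the differ to know about every directive in advance.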
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
There were fixes to the sql-schema-describer after users submitted issues about failing migrations (#47). We want to make sure Prisma 2 works on both MySQL 8 and older versions, so that PR sets up the Rust tests to run on both versions using docker-compose locally (docker-compose.yml).
We want to have the same MySQL 8 setup on CI. It is identical to the MySQL 5.7 setup, except for the host port (3307), the container name, and the service name (both mysql8).
This issue is for tracking the CI setup work.
Related issue: #49
Goals:
Steps:
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
The introspection connector for SQL is the component that introspects a SQL database and returns a Prisma schema. This is implemented by our freelancer Arve.
Work packages are:
- DatabaseSchemaCalculator in the migration engine.

Corresponding prisma2 issue: prisma/migrate#107
We should:
PR: #47
Description: We want to convert the benchmarking suite that we used for Prisma 1 to run against Prisma 2, to know where we stand performance-wise.
Spec:
timescaledb
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.
Check the related epic for an initial breakdown of work packages.
Please supply a more detailed task list in a comment.