prisma / prisma-engines

🚂 Engine components of Prisma ORM

Home Page: https://www.prisma.io/docs/concepts/components/prisma-engines

License: Apache License 2.0

Makefile 0.13% Shell 0.12% Rust 99.21% HTML 0.01% Nix 0.09% TSQL 0.01% Ruby 0.01% Dockerfile 0.01% TypeScript 0.39% JavaScript 0.01%
Topics: prisma, rust

prisma-engines's Issues

EPIC: Multi field unique

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define a combination of fields that must be unique across all records of a model, via the @@unique directive. How this is enforced is an implementation detail of each connector; usually a connector will leverage a unique index, as illustrated below the example.

Example:
The following example shows a User model where the combination of firstName and lastName is unique.

model User {
  id        String @id @default(cuid())
  firstName String
  lastName  String
  @@unique([firstName, lastName], name: "UniqueFirstAndLastNameIndex")
}
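
For a SQL connector this typically maps to a unique index over the listed columns. A hedged illustration of the DDL (table and index names taken from the example; the migration engine's actual output may differ):

// Hedged illustration only; the actual DDL emitted by a connector may differ.
const CREATE_UNIQUE_INDEX: &str = r#"
CREATE UNIQUE INDEX "UniqueFirstAndLastNameIndex"
    ON "User" ("firstName", "lastName");"#;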

Links to actual Work Packages

Build OS-based binaries instead of platform-based

Currently, the Rust binaries are built for specific platforms like netlify, zeit/now, etc.

Instead, we should build OS-based distributions with different openssl versions, because different operating systems ship with different default openssl versions.

We need:

  • debian-based builds with libssl 1.0.x and 1.1.x
    This will support all major Debian versions, all major Ubuntu versions, probably most or even all Debian- and Ubuntu-based distributions (e.g. Linux Mint), and cloud platforms that run on Ubuntu, like Netlify.
  • centos-based builds with libssl 1.0.x and 1.1.x

This will add proper support for:

  • CentOS 7
  • Debian 8, 9, 10
  • Ubuntu 14.04, 16.04, 18.04, 19.04
  • debian and ubuntu based distributions
  • Fedora 28-30
  • Most cloud platforms like zeit/now, netlify, lambda

Note: When these binaries are shipped, we need to adapt the fetch-engine in photonjs. See #TODO for tracking.

When done, we can remove:

  • platform-specific builds like zeit, netlify, lambda, etc., once they work out of the box with the new OS-based builds
  • ubuntu16.04, because it was only built as a workaround

For a general overview, see prisma/prisma#533

Implement conversion from introspected schema to data model

Implement conversion of the schema returned by database-introspection into a data model.

  • Implement field uniqueness
  • Implement field id_info
  • Implement foreign keys
  • Implement sequences
  • Implement field scalar_list_strategy
  • Implement field defaults
  • Implement enums

Query Engine: Automated Benchmarks

This is a followup task from this issue.

Idea: We should instrument the Prisma server to measure every single request and set up Prometheus/Grafana for proper time-series graphs.
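
As a starting point, a minimal sketch of such instrumentation using the prometheus crate (metric and label names are assumptions, not the engine's actual code):

use once_cell::sync::Lazy;
use prometheus::{register_histogram_vec, HistogramVec};

// Hedged sketch: record the duration of every request in a histogram that
// Prometheus can scrape and Grafana can graph over time.
static REQUEST_SECONDS: Lazy<HistogramVec> = Lazy::new(|| {
    register_histogram_vec!(
        "prisma_request_duration_seconds", // metric name is an assumption
        "Time spent processing a single request",
        &["query_name"]
    )
    .unwrap()
});

fn handle_request(query_name: &str) {
    let timer = REQUEST_SECONDS.with_label_values(&[query_name]).start_timer();
    // ... execute the query ...
    timer.observe_duration();
}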

EPIC: Native Types

Product Epic: The corresponding product epic can be found here.

Description: The Prisma Schema Language allows the user to define custom types that should be used for a given field.

Example:
The following example shows a User model where the firstName field is backed by a varchar column.

datasource pg1 { 
  .. 
}

model User {
  id        String @id @default(cuid())
  firstName String @pg1.varchar(123)
  lastName  String
}

Links to actual Work Packages

Cascading Deletes - Implementation Strategy for SQL connector

Goal: This issue describes how we plan to implement the feature of cascading deletes available in the Prisma Schema Language.

Idea: We would like to leverage SQL's ON DELETE CASCADE feature wherever possible. However, it does not support all the use cases of Prisma's cascading deletes. The idea is to use the SQL feature as often as possible and implement shims in the query engine where necessary. The parts that need to be shimmed are indicated with a 🚨 below.

Problems:

  • Contrary to intuition, a @relation(onDelete: CASCADE) on a field implies a SQL-level ON DELETE CASCADE on the column of the related field. 💥
  • Prisma provides a field on both sides of a relation that can take an onDelete annotation. On the SQL level there is only one column, and therefore only the behavior of one side can be expressed on the SQL level.
  • For many-to-many relations we cannot express the behavior we want (deletion traversing the join table) on the SQL level.

Analysis of where SQL ON DELETE CASCADE works and where it does not

One To Many: Cascade from the Parent.

This could work purely on the SQL level and could also be introspected from the DDL.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) ON DELETE CASCADE );

Semantics:

  • delete parent: children get deleted ✅
  • delete child: parent remains ✅

One To Many: Cascade from the Child.

This cannot be expressed on the SQL level and would need to be handled by Prisma. We could also not introspect this case.

Prisma schema:

model Parent {
  children Child[] 
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) );

Semantics:

  • delete parent: children remain ✅
  • delete child: parent gets deleted 🚨

One To Many: Cascade from both Sides

This cannot be fully expressed on the SQL level. We would need a mix of Prisma level and SQL level handling. We could therefore also not introspect this case.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) ON DELETE CASCADE );

Semantics:

  • delete parent: children get deleted ✅
  • delete child: parent gets deleted 🚨

Many to Many

Here the SQL-level ON DELETE CASCADE statements merely ensure that there are no dangling relation entries after one of the connected nodes is deleted; they have no connection to the Prisma-level semantics. The Prisma-level semantics cannot be expressed in the database and therefore also cannot be introspected. In all cases (CASCADE on child, CASCADE on parent, CASCADE on both) the implementation needs to happen on the Prisma level; a sketch of such a shim follows the example below.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parents Parent[]
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text );
Create Table _ParentToChild (
  parent Text REFERENCES Parent(id) ON DELETE CASCADE,
  child Text REFERENCES Child(id) ON DELETE CASCADE
);

Semantics:

  • delete parent: children get deleted 🚨
  • delete child: parents remain ✅
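
For illustration, a minimal sketch of what the Prisma-level shim for "delete parent" could look like for the schema above (not the actual query-engine code; real code would bind parameters instead of formatting them into the SQL):

// Hedged sketch of the shim: delete the children reachable through the join
// table before deleting the parent. The join-table rows themselves are
// cleaned up by the SQL-level ON DELETE CASCADE.
fn cascade_parent_delete(parent_id: &str) -> Vec<String> {
    vec![
        format!(
            "DELETE FROM Child WHERE id IN (SELECT child FROM _ParentToChild WHERE parent = '{}')",
            parent_id
        ),
        format!("DELETE FROM Parent WHERE id = '{}'", parent_id),
    ]
}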

EPIC: Embeds

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define named or unnamed embeds.

Example:
The following example shows a User model where the associated account is an embed.

model User {
  id        String @id @default(cuid())
  account Account
}
embed Account {
  accountNumber Int
}

Links to actual Work Packages

  • not created yet because it is unclear when we will do this.

Build a prototype to compare performance of synchronous and asynchronous database drivers

Goal: We want to compare the performance of synchronous and asynchronous database drivers.

Idea: Build a very simple HTTP server that reads from a Postgres database. Through an interface it should be easy to swap the database driver that is being used.

Requirements:

  • easily extendable: it must be easy to add new queries to the prototype
    • idea: the server exposes a generic HTTP route for GET /{path} where the path is the name of the query to be executed. The server holds a map of static queries mapped to their name.
  • Queries: We can start with some really simple queries that do not require a special database. We should start testing with two simple queries against the information_schema. One query that returns one result and one that returns many results.
  • benchmark with the Vegeta benchmarking tool
  • Build two implementations of the database interface: One based on the sync postgres driver that uses a thread pool to asyncify request processing. Another based on the async postgres driver.

Interface Draft:

use std::future::Future;
use std::pin::Pin;

pub trait Testee {
    /// Executes the query with the given name and returns the result serialized to JSON.
    fn query(&self, query_name: &str) -> Pin<Box<dyn Future<Output = serde_json::Value> + Send + '_>>;
}
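
A hedged sketch of how an implementation could combine the Testee trait above with the static query map described in the requirements (all names below are illustrative, not part of the draft):

use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;

// Illustrative skeleton: named, static queries as proposed for the generic
// GET /{path} route; the actual driver call is stubbed out.
struct SyncPgTestee {
    queries: HashMap<&'static str, &'static str>,
}

impl SyncPgTestee {
    fn new() -> Self {
        let mut queries = HashMap::new();
        // one query returning a single row, one returning many rows
        queries.insert("one_row", "SELECT 1");
        queries.insert("many_rows", "SELECT table_name FROM information_schema.tables");
        SyncPgTestee { queries }
    }
}

impl Testee for SyncPgTestee {
    fn query(&self, query_name: &str) -> Pin<Box<dyn Future<Output = serde_json::Value> + Send + '_>> {
        match self.queries.get(query_name).copied() {
            Some(sql) => {
                // Here the synchronous driver would run `sql` on a thread pool;
                // stubbed out with a constant response for the sketch.
                let _ = sql;
                Box::pin(async { serde_json::json!({ "ok": true }) })
            }
            None => Box::pin(async { serde_json::json!({ "error": "unknown query" }) }),
        }
    }
}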

Test introspection

Test introspection against the old system.

  1. npm install -g prisma2@alpha
  2. Create a folder and cd into it.
  3. run prisma2 init. This starts a dialog. In the following steps, choose Sqlite, Lift, Typescript and From Scratch.
  4. Then run prisma2 lift save --name test && prisma2 lift up to migrate the database
  5. Introspect(?)

Query Engine: Asyncify

This is a followup task from this issue.

Idea: Threaded connection pool with poll_fn/blocking, async/await all the way up.
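
A minimal sketch of the shape, using tokio's spawn_blocking as a stand-in for the poll_fn/blocking pattern mentioned above (the pool and row types are stubs, not the real driver types):

use std::sync::Arc;
use tokio::task::spawn_blocking;

// Stand-in types for a synchronous driver behind a connection pool.
struct SyncPool;
struct Rows;

impl SyncPool {
    fn get_and_query(&self, _sql: &str) -> Rows {
        // blocking checkout from the pool + blocking driver call happen here
        Rows
    }
}

// Hedged sketch: push the blocking call onto the blocking thread pool so the
// rest of the call stack can stay async/await all the way up.
async fn query_async(pool: Arc<SyncPool>, sql: String) -> Rows {
    spawn_blocking(move || pool.get_and_query(&sql))
        .await
        .expect("blocking task panicked")
}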

Discussion: Backwards-compatibility of migrations table/files contents

We are approaching a point where we will need strong backwards compatibility of the persisted state of the migration engine.

To avoid breaking backwards compatibility of the migrations, we can write end to end tests that do:

  • snapshot testing of RPC inputs/outputs (json)
  • snapshot testing of migrations table (json?)

For the snapshots, we could use the insta crate.
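
A minimal sketch of what such a snapshot test could look like with insta (the RPC helper and the snapshot name are hypothetical):

// Hypothetical helper standing in for an actual RPC round-trip.
fn run_infer_migration_steps_rpc() -> serde_json::Value {
    serde_json::json!({ "steps": [] })
}

#[test]
fn infer_migration_steps_output_is_stable() {
    // insta records the snapshot on the first run and diffs against it on
    // later runs, so old outputs keep being checked as the code evolves.
    let output = run_infer_migration_steps_rpc();
    insta::assert_json_snapshot!("infer_migration_steps_v1", output);
}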

Every time a snapshot breaks, we must manually change the test suite so it tests with the new snapshot in addition to the old (ideally, this would just be adding a file name to a static list). That way, we test that old migrations keep producing the same results.

It's unambiguous that the contents of the migration table must be backwards-compatible, but I am less sure for RPC. We should probably isolate the results that get persisted to the migrations folder, since we can evolve the API in sync with the Lift CLI.

Related issue: prisma/migrate#151

@mavilein @timsuchanek

EPIC: Multi field id criterion (Multi column PKs)

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define a combination of fields as the id criterion. How this is implemented is an implementation detail of each connector.

Example:
The following example shows a User model where the combination of firstName and lastName is the id criterion.

model User {
  firstName String
  lastName  String
  @@id([firstName, lastName])
}
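
For a SQL connector this would typically map to a composite primary key. A hedged illustration of the DDL (column names from the example; actual output may differ):

// Hedged illustration only; the actual DDL emitted by a connector may differ.
const CREATE_USER_TABLE: &str = r#"
CREATE TABLE "User" (
    "firstName" TEXT NOT NULL,
    "lastName"  TEXT NOT NULL,
    PRIMARY KEY ("firstName", "lastName")
);"#;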

Affected Components

  • Datamodel Parser
  • Migration Engine
  • Query Engine
  • Introspection Engine

Links to actual Work Packages

Query Engine: Find a replacement for our current logger (performance)

This is a followup task from this issue.

During benchmarking we found out that our current logger implementation is very slow and in some cases can consume 50% of CPU cycles while processing a simple query. We want to replace the current logger. The idea is to evaluate tokio's tracing, for which we would need to write our own JSON output formatter.
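
As a point of reference, a minimal sketch of wiring up tracing with JSON output via tracing-subscriber's built-in formatter (the issue proposes writing a custom JSON formatter; the built-in one here is only a stand-in):

fn main() {
    // Hedged sketch: structured JSON log output with tokio's tracing stack.
    tracing_subscriber::fmt()
        .json()
        .with_max_level(tracing::Level::INFO)
        .init();

    tracing::info!(query = "findManyUser", duration_ms = 12, "query executed");
}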

EPIC: Id Strategies + Sequences

Spec: There's no mention in the Prisma Schema Language spec yet.

Description: We currently always map a field of type Int @id to an int column backed by a sequence. It is not currently possible to have an int primary-key column that is not backed by a sequence.

Example:
The following example currently always maps to an int id field that is backed by a sequence.

model User {
  id Int @id
}
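
On Postgres, for example, the current mapping amounts to something like the following (hedged illustration; the actual DDL may differ):

// Hedged illustration: SERIAL is an integer column backed by an implicitly
// created sequence, which is what the current mapping always produces.
const CREATE_USER_TABLE: &str = r#"
CREATE TABLE "User" (
    "id" SERIAL PRIMARY KEY
);"#;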

Links to actual Work Packages

  • not created yet because it is unclear when we will do this.

EPIC: Prisma Core Primitive Types

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The PSL defines several core data types that must be implemented by each connector. We must make sure that we support exactly those types, no less and no more.

Example:
The following example shows a User model using each of the core types.

model User {
  id        String @id @default(cuid())
  string String
  bool Boolean
  int Int
  float Float
  dateTime DateTime
}

Links to actual Work Packages

  • not created yet

Fix DeadlockSpec

The DeadlockSpec in the connector test kit suddenly started failing. The message I see in the logs in the case of Postgres is:

Error querying the database: db error: FATAL: sorry, too many clients already, application: prisma

It might be a side effect of the latest changes to connection pooling. We should fix this spec.

Query Engine: Native Types

Check the related epic for an initial breakdown of work packages.

Please supply a more detailed task list in a comment.

Replace SQL command string formatting

Replace SQL command string formatting with safe parameter injection in the introspection component.

This is a note to myself, but I can't assign issues to myself currently.
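
For illustration, the before/after shape using the postgres crate's parameter binding (a hedged sketch; the introspection component's actual driver calls may differ):

// Hedged sketch: replace string formatting with bound parameters so values
// cannot inject SQL.
// Before (unsafe): format!("... WHERE table_name = '{}'", table)
fn fetch_columns(
    client: &mut postgres::Client,
    table: &str,
) -> Result<Vec<postgres::Row>, postgres::Error> {
    client.query(
        "SELECT column_name FROM information_schema.columns WHERE table_name = $1",
        &[&table],
    )
}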

Migration Engine: Implement new diffing approach to enable custom types

The Migration Engine needs to be able to diff two data models in order to generate the steps.json that is central to Lift. Currently we diff the data structures in the dml module of the datamodel crate. I think we should instead diff the data structures of the ast module.
This should make our code easier to maintain and at the same time enable diffing of arbitrary directives, which is required to enable custom types.
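
A toy sketch of the direction (the types are hypothetical stand-ins, not the actual ast module): diffing directive lists by name into create/delete steps, so arbitrary directives all go through the same code path.

// Toy sketch with hypothetical types; the real ast module differs.
#[derive(Clone)]
struct Directive {
    name: String,
    arguments: Vec<String>,
}

enum MigrationStep {
    CreateDirective(Directive),
    DeleteDirective(String),
}

fn diff_directives(prev: &[Directive], next: &[Directive]) -> Vec<MigrationStep> {
    let mut steps = Vec::new();
    for d in next {
        if !prev.iter().any(|p| p.name == d.name) {
            steps.push(MigrationStep::CreateDirective(d.clone()));
        }
    }
    for d in prev {
        if !next.iter().any(|n| n.name == d.name) {
            steps.push(MigrationStep::DeleteDirective(d.name.clone()));
        }
    }
    steps
}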

Buildkite: set up MySQL 8 container for tests

There were fixes to the sql-schema-describer after users submitted issues about failing migrations (#47). We want to make sure Prisma 2 works on both MySQL 8 and older versions, so that PR sets up the Rust tests to run on both versions using docker-compose locally (docker-compose.yml).

We want to have the same MySQL 8 setup on CI. It is identical to the MySQL 5.7 setup, except for the host port (3307), the container name, and the service name (both mysql8).

This issue is for tracking the CI setup work.

Related issue: #49

Refactoring Write Execution of Query Engine

Goals:

  • Enable working with required relations in all cases (prisma/migrate#98)
  • Simplify the AST for writing to the database significantly.
  • This will also enable us to move a large portion of the code out of the connectors and into the core. Most notably, the code for handling required relations has to be duplicated per connector in the current architecture.
  • Query execution will move in a direction where executing a query becomes akin to executing a program in a LISP-like programming language.

Steps:

  • Concrete steps are unclear right now. We must do some exploration work to come up with concrete steps.

EPIC: new Introspection Engine

The introspection connector for SQL is the component that introspects a SQL database and returns a Prisma schema. This is being implemented by our freelancer Arve.

Work packages are:

  • Build the command driven by unit tests, based on the idea of simply inverting the behavior of the DatabaseSchemaCalculator in the migration engine.
  • Build a binary that takes a Prisma connection URL and introspects the database, printing the result to stdout. This allows trying out the introspection internally.
  • Add a JSON-RPC API to the introspection engine.
  • Create integration tests that test complex schemas in an end-to-end fashion. The foundation for those tests can be either our collection of database schema examples or the tests in the TypeScript version of this command.

EPIC: Benchmarking 1/x

Description: We want to convert the benchmarking suite that we used for Prisma 1 to run against Prisma 2, so we know where we stand performance-wise.

Spec:

  • Convert the existing benchmarking tooling to run against Prisma 2.
  • Explore whether we can store the results in TimescaleDB.
  • Explore whether Grafana is a good fit to visualize test results.
  • Make it convenient to profile applications locally, e.g. by integrating flamegraphs.
  • Create small reports on our findings, e.g.: Where are we worse and where are we better? Can we pinpoint the underlying issues?
  • Create an automated setup so that we can see how benchmarks change over time, e.g. by running them daily or on each commit.
