haskell-beam / beam

A type-safe, non-TH Haskell SQL library and ORM

Home Page: https://haskell-beam.github.io/beam/

Topics: haskell, sql, postgresql, postgres, sqlite, orm

beam's Introduction

Beam: a type-safe, non-TH Haskell relational database library and ORM


If you use beam commercially, please consider a donation to make this project possible: https://liberapay.com/tathougies

Beam is a Haskell interface to relational databases. Beam uses the Haskell type system to verify that queries are type-safe before sending them to the database server. Queries are written in a straightforward, natural monadic syntax. Combinators are provided for all standard SQL92 features, and a significant subset of SQL99, SQL2003, and SQL2008 features. For your convenience a thorough compatibility matrix is maintained here.
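
To give a flavour of how schemas are encoded in the type system, here is a minimal, purely illustrative sketch; the table, database, and field names are invented for this page (they are not part of beam or its examples) and are reused by the later sketches below. Queries over this schema appear in the next sketch.

```haskell
{-# LANGUAGE DeriveGeneric, FlexibleInstances, MultiParamTypeClasses, TypeFamilies #-}

import Data.Text (Text)
import Database.Beam

-- A table is an ordinary Haskell record parameterised over a column tag.
data UserT f = User
  { _userEmail :: Columnar f Text
  , _userName  :: Columnar f Text
  , _userScore :: Columnar f (Maybe Double)  -- a nullable column
  } deriving Generic

instance Beamable UserT

instance Table UserT where
  data PrimaryKey UserT f = UserKey (Columnar f Text) deriving Generic
  primaryKey = UserKey . _userEmail

instance Beamable (PrimaryKey UserT)

-- A database is a record of table entities.
data ExampleDb f = ExampleDb
  { _exampleUsers :: f (TableEntity UserT)
  } deriving Generic

instance Database be ExampleDb

exampleDb :: DatabaseSettings be ExampleDb
exampleDb = defaultDbSettings
```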

Beam is standards compliant but not naive. We recognize that different database backends provide different guarantees, syntaxes, and advantages. To reflect this, beam maintains a modular design. While the core package provides standard functionality, beam is split up into a variety of backends which provide a means to interface Beam's data query and update DSLs with particular RDBMS backends. Backends can be written and maintained independently of this repository. For example, the beam-mysql and beam-firebird backends are packaged independently.

Recognizing that over-abstraction frequently means caving in to the lowest common denominator, Beam does not do connection or transaction management. Rather, the user is free to perform these functions using the appropriate Haskell interface library for their backend of choice. Additionally, beam backends provide a significant portion of backend-specific functionality which seamlessly fits into the beam ecosystem.

For example, the beam-postgres backend is built off of the postgresql-simple interface library. When using beam-postgres, the user manages connections and transactions with postgresql-simple. The user is free to issue queries directly with postgresql-simple, only using beam when desired. Postgres offers a number of rich data types on top of the standard SQL data types. To reflect this, beam-postgres offers pluggable support for postgres-specific data types and features.
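
As a rough sketch of this division of labour (the connection string is a placeholder, and the UserT/ExampleDb types are the illustrative ones sketched above), postgresql-simple owns the connection and the transaction while beam builds and runs the query:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.Beam
import Database.Beam.Postgres (runBeamPostgres)
import qualified Database.PostgreSQL.Simple as Pg

fetchUsers :: IO [UserT Identity]
fetchUsers = do
  conn <- Pg.connectPostgreSQL "host=localhost dbname=example"  -- placeholder
  users <- Pg.withTransaction conn $
    runBeamPostgres conn $
      runSelectReturningList $ select $
        all_ (_exampleUsers exampleDb)
  Pg.close conn
  pure users
```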

For more information, see the user guide.

For questions, feel free to join our mailing list or head over to #haskell-beam on freenode.

A word on testing

beam-core has in-depth unit tests to test query generation over an idealized ANSI SQL-compliant backend. You may be concerned that there are no tests in either beam-sqlite or beam-postgres. Do not be alarmed. The documentation contains many, many examples of queries written over the sample Chinook database, the schema for which can be found at beam-sqlite/examples/Chinook/Schema.hs. The included mkdocs configuration and custom beam_query Python Markdown extension automatically run every query in the documentation against a live database connection. Any errors in serialization/deserialization or invalid syntax are caught while building the documentation. Feel free to open pull requests with additional examples/tests.

Tests are written like this:

!beam-query
```haskell
!example <template-name> <requirements>
do x <- all_ (customer chinookDb) -- chinookDb available under chinook and chinookdml examples
   pure x
```

The !beam-query declaration indicates that this is a Markdown code block containing beam query code. The !example declaration indicates that the example should be built against applicable backends and included in the documentation. The template name is either chinook or chinookdml (depending on whether you are writing a query or a DML statement). For chinook, the included code should produce a Q query. For chinookdml, the included code should be a monadic action in a MonadBeam. The requirements can be used to select which backends to run the example against. See the documentation for examples.
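
As a further hedged illustration of the markup (the requirements field is omitted here; see the documentation for its exact syntax), a DML example might look like the following. The delete is deliberately a no-op so that it does not depend on any particular Chinook column names.

!beam-query
```haskell
!example chinookdml
-- A chinookdml example is a monadic action in a MonadBeam.
runDelete $ delete (customer chinookDb) (\_ -> val_ False)
```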

Building the documentation

Beam uses mkdocs for its documentation generation. The included build-docs.sh script can take care of building the documentation and serving it locally. In order to use the tool, though, make sure you have a Python installation with the mkdocs module installed. You can do this by creating a virtualenv and pip installing mkdocs, or by using a Nix shell with nix-shell docs.

The documentation uses a custom Markdown preprocessor to automatically build examples against the canonical Chinook database. By default, beam will build examples for every beam backend it knows about, including ones not in the main source tree (see docs/beam.yaml for the full configuration). This means you will need to have an instance of all these database servers running and available. This is usually not what you want.

To only build examples for a particular backend, modify mkdocs.yaml and set the enabled_backends configuration setting for the docs.markdown.beam_query preprocessor. For example, to only build docs for beam-sqlite, change

  - docs.markdown.beam_query:
      template_dir: 'docs/beam-templates'
      cache_dir: 'docs/.beam-query-cache'
      conf: 'docs/beam.yaml'
      base_dir: '.'

to

  - docs.markdown.beam_query:
      template_dir: 'docs/beam-templates'
      cache_dir: 'docs/.beam-query-cache'
      conf: 'docs/beam.yaml'
      base_dir: '.'
      enabled_backends:
        - beam-sqlite

beam's People

Contributors

3noch, adetokunbo, alexbiehl, alexfmpe, aske, dependabot[bot], ericson2314, jaspa, jkachmar, judah, kmicklas, laurentrdc, lpsmith, luigy, mightybyte, mulderr, nsluss, phadej, reactormonk, rimmington, roberth, runeksvendsen, sajidanower23, samprotas, srid, tathougies, thomasjm, timhabermaas, tomjaguarpaw, zzantares


beam's Issues

How to use beam-migrate-cli with nix

When attempting to use beam-migrate (the nexgen branch) with this command:

beam-migrate database add postgres Database.Beam.Postgres.Migrate <connection> --package-path /nix/store/01fsyywf1wzn2b06bmayx5zrw4749dls-beam-postgres-0.1.0.0/lib/ghc-8.0.2/package.conf.d

I get this error:

beam-migrate: user error (Plugin load error: UnknownError "flags: '--package-db /nix/store/01fsyywf1wzn2b06bmayx5zrw4749dls-beam-postgres-0.1.0.0/lib/ghc-8.0.2/package.conf.d' not recognized")

What am I doing wrong?

Postgres support

What's the state of Postgres support? In the tutorial it says that it should work with all HDBC databases, but when I pass the output of dumpSchema to Postgres, I get errors. Is this a bug, or does it still need a Postgres backend? If the latter, what would be necessary to do that?

Foreign key column name incorrect

Hi @tathougies!

I'm investigating using Beam in our project. While having a play around with it, I came across a behaviour whereby oneToMany_ generates the foreign key name incorrectly: it seems to append __id no matter what I do.

I've defined my foreign key as interviewCandidateId :: PrimaryKey CandidateT f and a helper function like so:

candidateInterviews :: OneToMany AppDb s CandidateT InterviewT
candidateInterviews =
  oneToMany_ (interviews appDb) interviewCandidateId

Which results in an error saying something like this:

Exception: Postgres error: Just "ERROR:  column t0.candidate_id__id does not exist\nLINE 1: SELECT \"t0\".\"id\" AS \"res0\", \"t0\".\"candidate_id__id\" AS \"res1..

Am I just doing it wrong ™? Is there something I'm missing? I've tried both the travis/beam-0500 branch as well as master.

Using postgres in the tutorial (a few problems not present in sqlite)

Hi @tathougies!

First of all, amazing library, I'm so glad I found it 😃

I tried the tutorial from the docs with postgres and ran into some issues; the ones up through tutorial 2 were easily resolved, but in tutorial 3 I hit a compiler error that I just couldn't figure out.

I created a repository for it.

I tried to be pretty thorough, putting important stuff in commits and including an org file that pretty much goes through the tutorial step by step including any compiler errors I encountered along the way.

If you aren't an emacser/org user I can reformat into something easier for you to look at.

If you are, the org file has todos marked, and even some tags for extra notes that I wanted to ask you.

The master branch has everything working and compiling through tutorial 2.

There are separate branches for each tutorial, and obviously tutorial 3 isn't going to compile and/or work.

Since my foray into the tutorial with postgres is pretty well chronicled, I was hoping I could add a postgres backend tutorial to the docs, if you wouldn't mind (once some issues are ironed out and I can continue with the third tutorial).

On a separate note, if you need some other help with implementations, or other documentation I could also help with that if you would like.

The very last commit has the summary of outstanding items in the org file so I won't make this any longer by copying that in here.

Not sure if you prefer to talk about this here, or on the mailing list or through irc, but whatever medium is easier for you works for me.

Thanks for writing this; it seems awesome, and playing with it for the first time doesn't disappoint at all!

Add to Stackage

:D

This will also get the project into nixpkgs.haskellPackages without it having to be added manually.

How do I use `insertFrom`?

I'm trying to figure out how to build a query in the FROM part of the INSERT that will properly match the column names of the table. I have not been successful.
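
For reference, the rows produced by the inner query are matched against the target table's row type (i.e., positionally against the table's Haskell record) rather than against column names in the SELECT. A hedged sketch against a recent beam-postgres, reusing the illustrative UserT/ExampleDb types from the introduction (re-inserting the same rows is only to show the shape):

```haskell
import Database.Beam
import Database.Beam.Postgres (Pg)

copyUsers :: Pg ()
copyUsers =
  runInsert $
    insert (_exampleUsers exampleDb) $
      insertFrom $ do
        -- The select feeding insertFrom must produce rows of the target
        -- table's type, here UserT (QExpr ...).
        user <- all_ (_exampleUsers exampleDb)
        pure user
```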

GHC Warnings

I get quite a few GHC warnings when compiling beam. I suspect some (most) of them are caused by using an old version of GHC. I can address them, but didn't know what your version policy was. I see a few different options:

  • keep things as is since it compiles
  • use the CPP extension to conditionally import things based on GHC version
  • fix the warnings, which would bump the version bounds on base (at the very least)

I'm OK contributing any of the above, of course.

Rule based table field modifications

I use a different scheme for naming records than what Beam supports. Because of this I have to override almost all of my table fields. This gets old fast. However, if I could provide a function to specify how to do the renaming, I could remove all of the boilerplate. How hard would it be to allow the renaming rule to be overridden at a DB or table level?
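
For context, a hedged sketch of the per-field override mechanism being described (reusing the illustrative schema from the introduction; the column names on the right are invented). A rule-based hook would replace having to enumerate every field like this:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.Beam

exampleDbCamel :: DatabaseSettings be ExampleDb
exampleDbCamel = defaultDbSettings `withDbModification`
  dbModification
    { _exampleUsers = modifyTableFields
        tableModification
          { _userEmail = fieldNamed "emailAddress"  -- every deviating field
          , _userName  = fieldNamed "fullName"      -- must be listed by hand
          }
    }
```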

Use of OFFSET and LIMIT together are not enforced

When doing an OFFSET in SQLite, the LIMIT must be set explicitly; if no limit is wanted, a negative number (typically -1) is used.
This also means that in Beam, when using offset_, the limit_ function must always be used as well; otherwise it is guaranteed to cause a run-time exception.
On a higher level, the solution, I suppose, is that Beam could treat a missing limit_ as limit_ (-1), unless this can be ensured at the type level.
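
For illustration, a hedged sketch of pairing the two combinators against beam-sqlite, reusing the illustrative schema from the introduction:

```haskell
import Database.Beam
import Database.Beam.Sqlite (Sqlite)

-- Always supply limit_ alongside offset_ so SQLite gets both clauses.
pagedUsers :: Integer -> Integer -> SqlSelect Sqlite (UserT Identity)
pagedUsers pageSize page =
  select $
    limit_ pageSize $
      offset_ (pageSize * page) $
        all_ (_exampleUsers exampleDb)
```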

Ergonomics and guidance for sharing queries

I realize Beam is not fully released or fully documented, so take this with a grain of salt. So far I'm extremely impressed with Beam. Many rough edges are just things that need to get done, not at all design flaws with the library. The only thing that is repeatedly and painfully frustrating is my complete inability to know what the actual type of a query is so as to write it down. I have been pervasively using PartialTypeSignatures because I simply can't seem to come up with the types of queries. Even if I were to copy them verbatim from GHC's warning messages, the size of the type is daunting, and often larger than the query itself! What's more, I sometimes try to slice a piece of code out of one function and share it between two, only to find I can't make the type system happy enough to get rid of all the ambiguities.

What I'd like, at least for now, is some sort of EVERYTHING constraint (or alias) that just lets me say "This is a PG query that can do anything" and that way I can just avoid the issue when I'm in a pinch. However, also having more, say, ergonomic ways of handling the types would be ideal. It's quite possible this already exists and I'm not aware of it.

Couldn't match kind '* -> *' with *

I'm seeing this compile error from time to time:

Couldn't match kind '* -> *' with '*'
When matching the kind of 'UserT'
Expected type: m (Maybe User)
  Actual type: m (Maybe (UserT Identity))

I have type User = UserT Identity so I have no idea why this error makes sense. I'm on GHC 8.0.2.

Support CONCAT

I'd like to be able to use like_ on the result of CONCAT.
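
A hedged sketch, assuming the concat_ combinator for SQL99 CONCAT that later beam releases expose, and reusing the illustrative schema from the introduction:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Database.Beam
import Database.Beam.Postgres (Postgres)

-- LIKE against a pattern assembled with CONCAT.
usersMatching :: Text -> Q Postgres ExampleDb s (UserT (QExpr Postgres s))
usersMatching fragment = do
  user <- all_ (_exampleUsers exampleDb)
  guard_ (_userName user `like_` concat_ [val_ "%", val_ fragment, val_ "%"])
  pure user
```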

How to do `QGenExpr _ Int -> QGenExpr _ (Auto Int)`

This seems like just_ but it's for Auto. I have a table with an Int and I want to join it on a table based on PK. However, the PK is Auto Int. At the normal "record-level" this is easy, but I'm not sure how to do it in queries.

How to LEFT JOIN an aggregate

I am not able to make the compiler happy when moving from this

do
  u <- all_ (db^.users)
  x <-
    (all_ (db^.userFilters)
      & filter_ (^.userfilterEnabled)
      & aggregate_ (\uf -> (group_ $ uf^.userfilterForUser, as_ @Int countAll_))
     )
  ...

to this

do
  u <- all_ (db^.users)
  x <- leftJoin_
    (all_ (db^.userFilters)
      & filter_ (^.userfilterEnabled)
      & aggregate_ (\uf -> (group_ $ uf^.userfilterForUser, as_ @Int countAll_))
     )
     undefined -- placeholder for now
  ...

It gives this error:

    • No instance for (Database.Beam.Schema.Tables.Retaggable
                         (QGenExpr
                            QValueContext
                            beam-postgres-0.1.0.0:Database.Beam.Postgres.Syntax.PgExpressionSyntax
                            (Database.Beam.Query.Internal.QNested
                               (Database.Beam.Query.Internal.QNested
                                  Database.Beam.Query.QueryInaccessible)))
                         (QGenExpr
                            QValueContext
                            beam-postgres-0.1.0.0:Database.Beam.Postgres.Syntax.PgExpressionSyntax
                            (Database.Beam.Query.Internal.QNested
                               (Database.Beam.Query.Internal.QNested
                                  Database.Beam.Query.QueryInaccessible))
                            Int))
        arising from a use of ‘leftJoin_’

Lower operator precedence of common infix functions

Operators like ==. have a precedence of 4, which makes them convenient in the presence of lens operators (e.g. user^.userName ==. val_ "Jim"). However, it looks like at least some of the other common infix functions are using the default precedence of 9, so you end up with, e.g., (user^.userName) `like_` val_ "Jim" or (user^.userName) `in_` [val_ "Jim"].

Wanting to implement Firebird backend support

Hi! I forked the master branch at the beginning of March and hadn't noticed that you have started working on the library again. I would like to know whether I can already try to adapt my preliminary Firebird backend to the modifications in the branch travis/beam-0500.

And I would be glad to be one of the contributors responsible for the Firebird backend support.
Thanks!

Can't create "all_" query

I'm probably missing something, but I could not create a basic "all_" query, following the tutorial.

Attaching a sample project, the compilation error is shown below. Please see "app/Main.hs" for a failing line.

I'm using "nextgen-beam-migrate" branch, the DB schema is generated by the "beam-migrate".

/home/nickolay/workspace/Haskell/simple/app/Main.hs:10:9: error:
        • Couldn't match type ‘Database.Beam.Backend.SQL.SQL92.Sql92FromExpressionSyntax
                                 (Database.Beam.Backend.SQL.SQL92.Sql92SelectTableFromSyntax
                                    (Database.Beam.Backend.SQL.SQL92.Sql92SelectSelectTableSyntax
                                       select0))’
                         with ‘Database.Beam.Backend.SQL.SQL92.Sql92SelectTableExpressionSyntax
                                 (Database.Beam.Backend.SQL.SQL92.Sql92SelectSelectTableSyntax
                                    select0)’
            arising from a use of ‘all_’
          The type variable ‘select0’ is ambiguous
        • In the expression: all_ (_table2 db)
          In an equation for ‘query’: query = all_ (_table2 db)
        • Relevant bindings include
            query :: Q select0
                       be
                       db0
                       s
                       (table0 (QExpr
                                  (Database.Beam.Backend.SQL.SQL92.Sql92SelectTableExpressionSyntax
                                     (Database.Beam.Backend.SQL.SQL92.Sql92SelectSelectTableSyntax
                                        select0))
                                  s))
              (bound at app/Main.hs:10:1)

simple.tar.gz

Should boolean column types be supported?

src/UserService/DB.hs:27:10:
    No instance for (FieldSchema Bool)
      arising from a use of ‘beam-0.3.0.0:Database.Beam.Schema.Tables.$gdmtblFieldSettings’
    In the expression:
      beam-0.3.0.0:Database.Beam.Schema.Tables.$gdmtblFieldSettings
    In an equation for ‘tblFieldSettings’:
        tblFieldSettings
          = beam-0.3.0.0:Database.Beam.Schema.Tables.$gdmtblFieldSettings
    In the instance declaration for ‘Table UserT’

Nullable IsSqlExpressionSyntaxStringType?

In PostgreSQL, string_agg allows you to have a nullable column as the first argument expression, so should the following instance also exist?

-- Database/Beam/Postgres/Syntax.hs
instance IsSqlExpressionSyntaxStringType PgExpressionSyntax a =>
    IsSqlExpressionSyntaxStringType PgExpressionSyntax (Maybe a)

PVP violation?

It seems that beam violates the PVP. Version 0.3.0.0 has QExpr with one type parameter. In version 0.3.2.0 it has two.

SQLite Backend

There are issues with compiling beam on Windows because of its dependency on sqlite3. Would it be possible to move the sqlite backend into a separate package?

Domain types

Tracking issue for domain types in beam-core and beam-migrate

[0500] Haddocks broken?

Haddock seems to dislike this:

"beam-core" Database.Beam.Backend.SQL.AST line 154

data DataType
  = DataTypeChar Bool {-^ Varying -} (Maybe Int)
  | DataTypeNationalChar Bool (Maybe Int)
Database/Beam/Backend/SQL/AST.hs:154:38: error:
    parse error on input ‘(’

Idea: Optimize simple boolean filters

I often have conditional filtering in my queries, which causes beam to produce WHERE clauses that look like WHERE ((true) AND (true)) AND (true).... It seems it would be easy to clean this up when building the queries by just recognizing that a known (val_ True) combined with &&. can be simplified away to one side or the other. I'm not sure if this is in scope for this project, but if it were easy, it would make the resulting queries cleaner.

I could see an argument against this if one wanted to have a very clear 1-to-1 correspondence between the beam DSL and generated SQL.
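
For reference, a hedged sketch of the kind of conditional filtering that produces those clauses, reusing the illustrative schema from the introduction. Each optional criterion degrades to val_ True, which is what ends up as a redundant (true) term in the generated WHERE clause:

```haskell
import Data.Text (Text)
import Database.Beam
import Database.Beam.Postgres (Postgres)

searchUsers :: Maybe Text -> Maybe Text
            -> Q Postgres ExampleDb s (UserT (QExpr Postgres s))
searchUsers mName mEmail = do
  user <- all_ (_exampleUsers exampleDb)
  guard_ (maybe (val_ True) (\n -> _userName user ==. val_ n) mName)
  guard_ (maybe (val_ True) (\e -> _userEmail user ==. val_ e) mEmail)
  pure user
```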

beam-postgres: Widen aeson bound to include 0.11

GHCJS is held at aeson 0.11, and in order to maintain safe communication between client and server, those of us using it also want to pin aeson to 0.11 on the server. However, beam-postgres doesn't allow for it. Looking at the history, I don't see any particular reason not to try it.

Control.Lense.TH.makeFields fails to compile for tables

{-# LANGUAGE TemplateHaskell #-}

import Control.Lens.TH (makeFields)
import Database.Beam

data ConfigT f = Config{
    _configTId :: Columnar f (Auto Int)
  } deriving Generic
makeFields ''ConfigT

produces

  * Illegal type synonym family application in instance:
     Columnar f_a2iSV (Auto Int)
  * In the instance declaration for
      'HasId (ConfigT f_a2iSV) (Columnar f_a2iSV) (Auto Int))'

I've tried with Text instead of Auto Int with the same result.

I'm using GHC 8.0.2.

Documentation on Hackage

You've written a great tutorial on Beam, but still there's no API documentation. Could you add it, please?

branch beam-0500: Updates with the field referenced at the right hand side of the assignment

With the code below:
runUpdate $ update ordersT (\order -> [_orderAmount order <-. (_orderAmount order *100)]) (\order -> _orderTakenBy order ==. (pk tomJones))

It shows the error:
Error: • Couldn't match type ‘QField s Scientific’ with ‘QGenExpr QValueContext FirebirdExpressionSyntax s Scientific’
Expected type: QExpr FirebirdExpressionSyntax s Scientific
Actual type: Columnar (QField s) Scientific

  • In the first argument of ‘(*)’, namely ‘_orderAmount order’
    In the second argument of ‘(<-.)’, namely ‘(_orderAmount order * 100)’
    In the expression:
    _orderAmount order <-. (_orderAmount order * 100)
  • Relevant bindings include
    order :: OrderT (QField s) (bound at example\EmployeesFB.hs:211:19)
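
For reference, on that branch (and in released beam versions) the fields handed to the update callbacks are assignable QFields, and the right-hand side of an assignment has to convert them back into expressions with current_. A hedged sketch of the fix, keeping the names from the snippet above:

```haskell
-- current_ turns the assignable QField back into a QExpr usable on the
-- right-hand side of (<-.).
runUpdate $ update ordersT
  (\order -> [_orderAmount order <-. current_ (_orderAmount order) * 100])
  (\order -> _orderTakenBy order ==. pk tomJones)
```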

Possible to use aggregates over nullable columns?

Is it possible to use aggregates directly over nullable columns? Of course one can manage by filtering and coalescing, but this seems like an unnecessary complication. In particular, it's easy to get avg_ wrong when you coalesce first because each row will still count toward the average. Other aggregates like sum_ don't have this danger.
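
To make the pitfall concrete, here is a hedged sketch reusing the illustrative schema from the introduction (whose _userScore column is nullable): filtering out the NULL rows first means the coalesce fallback never contributes rows to the average, whereas coalescing alone would count every NULL row as 0.

```haskell
import Database.Beam
import Database.Beam.Postgres (Postgres)

averageScore :: SqlSelect Postgres (Maybe Double)
averageScore =
  select $
    aggregate_ avg_ $ do
      user <- all_ (_exampleUsers exampleDb)
      guard_ (isJust_ (_userScore user))           -- drop NULL rows first
      pure (coalesce_ [_userScore user] (val_ 0))  -- then satisfy avg_'s non-nullable argument
```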

Data.Default instances

What do you think about adding a dependency on data-default and providing Default instances for Auto and others?

Implement a "pure" backend

SQLite lets you start an "in-memory" database. If we wrapped that in an ST-like monad, we could implement a "pure" backend that could be used in tests without dropping into IO.

This would require a rewrite of the sqlite bindings, but seems otherwise feasible.
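
For comparison, here is a hedged sketch of what is already possible today with the existing sqlite-simple-based bindings (still in IO, and with schema creation omitted), reusing the illustrative schema from the introduction:

```haskell
import Database.Beam
import Database.Beam.Sqlite (runBeamSqlite)
import qualified Database.SQLite.Simple as Sqlite

runAgainstFreshDb :: IO [UserT Identity]
runAgainstFreshDb = do
  -- The in-memory database lives only as long as this connection.
  conn <- Sqlite.open ":memory:"
  -- Schema creation (via beam-migrate or raw DDL) would go here.
  users <- runBeamSqlite conn $
    runSelectReturningList $ select $ all_ (_exampleUsers exampleDb)
  Sqlite.close conn
  pure users
```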

MySQL Support

I forked beam earlier today and did the minimum possible to get the example to work with MySQL. I still need to break everything off into a separate library which depends on beam, but I wanted to see what else was required before I could be confident that my branch does what the original does.

@tathougies, would you mind giving some guidance in that regard? There isn't a test suite, so I don't have anything pre-existing to go on. Another idea I had was to follow the tutorial, but using MySQL to see if I ran into any speed bumps.

Also, I ended up creating a stack.yaml as I prefer it to cabal. Would you be interested in having a current stack.yaml in the main repo, or should I just keep it with my code?
