
SqlHydra

SqlHydra is a set of NuGet packages for working with databases in F# with an emphasis on type safety and convenience.

SqlHydra.Cli

SqlHydra.Cli is a dotnet tool that generates F# types and readers for SQL Server, PostgreSQL, Oracle and SQLite databases.

SqlHydra.Query

SqlHydra.Query provides strongly typed Linq queries against generated types.

Notes

  • The generated code can be used alone or with any query library for creating strongly typed table records and data readers.
  • SqlHydra.Query is designed to be used with SqlHydra generated types. (If you would prefer to create your own types over using generated types, then I would recommend checking out Dapper.FSharp.)
  • SqlHydra.Query uses SqlKata internally to generate provider-specific SQL queries.
  • All SqlHydra NuGet packages will be released with matching major and minor version numbers.

Contributors ✨

Thanks goes to these wonderful people:

This project follows the all-contributors specification. Contributions of any kind welcome!

Contributing

  • This project uses the VS Code Remote-Containers extension to spin up a dev environment that includes databases for running the Tests project.
  • Alternatively, you can manually run the docker-compose file to load the development databases along with your IDE of choice.
  • Contributing Wiki

SqlHydra.Cli

Local Install (recommended)

Run the following commands from your project directory:

  1. dotnet new tool-manifest
  2. dotnet tool install SqlHydra.Cli

Configure and Run

Run the tool from the command line, passing in a database provider: mssql, npgsql, sqlite, oracle

dotnet sqlhydra mssql

  • If no .toml configuration file is detected, a configuration wizard will ask you some questions to create a new .toml configuration file for you, and will then generate code using the new config.
  • If a .toml configuration file already exists, it will generate code.
  • The generated .fs file will automatically be added to your .fsproj as Visible="false".
  • By default, the generated toml file will be named sqlhydra-{provider}.toml

TOML Creation Wizard

The wizard will prompt you for the following input:

- Enter a database Connection String:

This is the connection string that SqlHydra can use to query table and column metadata.

- Enter an Output Filename (Ex: AdventureWorks.fs):

This is the filename that your generated types will be added to. (This file will be automatically added to your fsproj.)

- Enter a Namespace (Ex: MyApp.AdventureWorks):

This is the namespace that your generated table record types will be created in.

- Select a use case:

> SqlHydra.Query integration (default)
  Other data library
  Standalone      

Selecting a use case will set the base configuration options in your TOML file.

  • SqlHydra.Query integration (default) should be chosen if you plan on using the SqlHydra.Query NuGet package to query your database using the generated types. This option will generate additional metadata that is utilized by the SqlHydra.Query package to recognize things like provider-specific parameter types. This use case also generates a HydraReader class that SqlHydra.Query depends on for reading data into the generated types.
  • Other data library should be chosen if you plan on using a 3rd party data library (ex: Dapper.FSharp, Donald, Npgsql.FSharp, ADO.NET, and many others). This use case only generates the table record types. No HydraReader class is generated.
  • Standalone should be chosen if you will only be using the generated types and readers on their own. This use case creates the table record types and the HydraReader for reading them. (It does not create the additional metadata used by SqlHydra.Query.)

For more details, see the .toml configuration.

Build Event (optional)

To regenerate after a Rebuild (only when in Debug mode), you can run SqlHydra from an fsproj build event:

  <!-- Regenerate entities on Rebuild in Debug mode -->
  <Target Name="SqlHydra" BeforeTargets="Clean" Condition="'$(Configuration)' == 'Debug'">
    <Exec Command="dotnet sqlhydra mssql" />
  </Target>

Support for Postgres Enums

Postgres enum types are generated as CLR enums! You will, however, still need to manually "register" your custom enums.

If using Npgsql v7 or later:

// Global mapping should occur only once at startup:
// `ext.mood` is the generated enum, and "ext.mood" is the "{schema}.{enum}".
let dataSourceBuilder = NpgsqlDataSourceBuilder(DB.connectionString)
dataSourceBuilder.MapEnum<ext.mood>("ext.mood") |> ignore

If using Npgsql v6 or earlier:

// Global mapping should occur only once at startup:
// `experiments.mood` is the generated enum, and "experiments.mood" is the "{schema}.{enum}".
Npgsql.NpgsqlConnection.GlobalTypeMapper.MapEnum<experiments.mood>(nameof experiments.mood) |> ignore

💥 Npgsql v8.0.0 fails when inserting an enum.

Support for Postgres Arrays

SqlHydra.Cli supports text[] and integer[] column types.

Sqlite Data Type Aliases

Sqlite stores all data as either an INTEGER, REAL, TEXT or BLOB type. Fortunately, you can also use aliases for data types more commonly used in other databases in your table definitions and Sqlite will translate them to the appropriate type. Using these type aliases also allows SqlHydra.Cli to generate the desired .NET CLR property type.

Here is a list of valid data type aliases (or "affinity names"): https://www.sqlite.org/datatype3.html#affinity_name_examples

SQL Server Troubleshooting

The following exception may occur with the latest version of Microsoft.Data.SqlClient:

Microsoft.Data.SqlClient.SqlException (0x80131904): 
A connection was successfully established with the server, but then an error occurred during the login process. 
(provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)

The simplest way to resolve this is to append ;TrustServerCertificate=True to the connection string in your .toml configuration file. UPDATE: This behavior has been fixed in Microsoft.Data.SqlClient v4.1.1.

Generated Table Types for AdventureWorks

// This code was generated by SqlHydra.SqlServer.
namespace SampleApp.AdventureWorks

module dbo =
    type ErrorLog =
        { ErrorLogID: int
          ErrorTime: System.DateTime
          UserName: string
          ErrorNumber: int
          ErrorMessage: string
          ErrorSeverity: Option<int>
          ErrorState: Option<int>
          ErrorProcedure: Option<string>
          ErrorLine: Option<int> }

    let ErrorLog = table<ErrorLog>

    type BuildVersion =
        { SystemInformationID: byte
          ``Database Version``: string
          VersionDate: System.DateTime
          ModifiedDate: System.DateTime }

    let BuildVersion = table<BuildVersion>

module SalesLT =
    type Address =
        { City: string
          StateProvince: string
          CountryRegion: string
          PostalCode: string
          rowguid: System.Guid
          ModifiedDate: System.DateTime
          AddressID: int
          AddressLine1: string
          AddressLine2: Option<string> }

    let Address = table<Address>

    type Customer =
        { LastName: string
          PasswordHash: string
          PasswordSalt: string
          rowguid: System.Guid
          ModifiedDate: System.DateTime
          CustomerID: int
          NameStyle: bool
          FirstName: string
          MiddleName: Option<string>
          Title: Option<string>
          Suffix: Option<string>
          CompanyName: Option<string>
          SalesPerson: Option<string>
          EmailAddress: Option<string>
          Phone: Option<string> }

    let Customer = table<Customer>
    
    // etc...

Strongly Typed Data Readers

The generated HydraReader class works in tandem with SqlHydra.Query for reading queried entities, but it can also be used on its own with any query library that returns an IDataReader.
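
For instance, here is a minimal sketch (mirroring the Custom SQL Queries example further below) that reads dbo.ErrorLog rows from a plain ADO.NET command; the ``dbo.ErrorLog`` reader property is assumed to exist on the generated HydraReader, by analogy with the ``dbo.Product`` property shown later.

// A minimal sketch: using the generated HydraReader over a plain SqlDataReader.
// Assumes the generated AdventureWorks sample types and a ``dbo.ErrorLog`` reader property.
let getErrorLogs (conn: SqlConnection) = task {
    use cmd = new SqlCommand("SELECT * FROM dbo.ErrorLog", conn)
    use! reader = cmd.ExecuteReaderAsync()
    let hydra = HydraReader(reader)

    return [
        while reader.Read() do
            hydra.``dbo.ErrorLog``.Read()
    ]
}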

TOML Configuration Reference

Generating Multiple TOML Files

It is also possible to have more than one .toml file in the same project. By default, SqlHydra will create a .toml file named after the database provider used. For example, running dotnet sqlhydra sqlite will generate sqlhydra-sqlite.toml.

However, you can also specify a name for your .toml file: dotnet sqlhydra sqlite -t "shared.toml" This can be useful for various use cases, such as:

  • data migrations where you want to generate types for a source and a target database.
  • generating record types with different schema/table filters in separate files.

Supported Frameworks

.NET 6 - .NET 8 are currently supported. (If you still need support for .NET 5, use the deprecated SqlHydra.SqlServer, SqlHydra.Sqlite, SqlHydra.Npgsql or SqlHydra.Oracle tools.)

.NET 6 and Greater

The new .NET 6 System.DateOnly and System.TimeOnly types are now supported by all generators.

SqlHydra.Query

SqlHydra.Query wraps the powerful SqlKata query generator with F# computation expression builders for strongly typed query generation. SqlHydra.Query can be used with any library that accepts a data reader; however, it is designed to pair well with SqlHydra generated records and readers!

Creating a Query Context

/// Opens a connection and creates a QueryContext that will generate SQL Server dialect queries
let openContext() = 
    let compiler = SqlKata.Compilers.SqlServerCompiler()
    let conn = new SqlConnection("Replace with your connection string")
    conn.Open()
    let ctx = new QueryContext(conn, compiler)
    #if DEBUG
    // Writes your queries and parameters to the console
    ctx.Logger <- printfn "SQL: %O" 
    #endif
    ctx

Select Builders

There are two main select builders that you can use to create queries:

  • selectTask - creates a self-executing query that returns a Task<'T> of query results.
  • selectAsync - creates a self-executing query that returns an Async<'T> of query results.

let getErrorNumbers () =
    selectTask HydraReader.Read openContext {
        for e in dbo.ErrorLog do
        select e.ErrorNumber
    }

Input:

  1. You must pass in the HydraReader.Read static method (which is generated by SqlHydra.Cli when the "Generate HydraReader?" option is selected).
  2. You must pass in either an existing QueryContext or a function that will create a new one.

Example 1: Creating a new QueryContext

In this case, the selectAsync expression will create the QueryContext using the given openContext function, and it will also dispose it for you automatically.

let openContext() = 
    let compiler = SqlKata.Compilers.SqlServerCompiler()
    let conn = new SqlConnection("Replace with your connection string")
    conn.Open()
    new QueryContext(conn, compiler)

let getErrorNumbers () =
    selectAsync HydraReader.Read openContext {
        for e in dbo.ErrorLog do
        select e.ErrorNumber
    }

Example 2: Sharing a QueryContext

In this case, you manually create a shared QueryContext and pass it to each selectTask expression. You are also responsible for disposing the QueryContext manually.

let getUserOrders email =
    task {
        use ctx = openContext ()

        let! userInfo = 
            selectTask HydraReader.Read ctx {
                for user in dbo.Users do
                where (user.Email = email)
            }

        let! orderHeaders = 
            selectTask HydraReader.Read ctx {
                for o in dbo.OrderHeaders do
                where (o.CustomerEmail = email)
            }

        return { User = userInfo; Orders = orderHeaders }
    }

Notes on Select Builders

  • Technically, the second argument to the selectTask and selectAsync expressions is a ContextType discriminated union. However, you can now implicitly pass a QueryContext or a function that returns a QueryContext in place of the ContextType DU.
  • The selectTask and selectAsync computation expressions are self-executing and do not require being wrapped within an async or task computation expression.
  • The selectTask and selectAsync computation expressions require the HydraReader.Read static method to be passed in (which is generated by SqlHydra.Cli when the "Generate HydraReader?" option is selected).
  • The selectTask and selectAsync builders also require you to pass in either an existing QueryContext (which manages the DbConnection and executes the various types of queries), or a function that returns a QueryContext.
    • If you pass in an existing QueryContext, you will be responsible for disposing it.
    • You can also pass in a function that will return a QueryContext, which will be disposed on your behalf (see the sketch after this list). The following are valid create functions:
    • unit -> QueryContext
    • unit -> Async<QueryContext>
    • unit -> Task<QueryContext>
  • The selectTask and selectAsync computation expression builders also provide the following custom operations that are applied to the queried results (after the query data is returned):
    • toArray
    • toList
    • mapArray
    • mapList
    • tryHead
    • head
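
For example, here is a minimal sketch of a create function with the unit -> Task<QueryContext> shape (assuming the same SQL Server setup shown above); because the builder creates the context itself, it will also dispose it for you.

// A minimal sketch of a `unit -> Task<QueryContext>` create function.
let openContextAsync () =
    task {
        let compiler = SqlKata.Compilers.SqlServerCompiler()
        let conn = new SqlConnection("Replace with your connection string")
        do! conn.OpenAsync()
        return new QueryContext(conn, compiler)
    }

let getErrorNumbers () =
    selectTask HydraReader.Read openContextAsync {
        for e in dbo.ErrorLog do
        select e.ErrorNumber
    }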

Creating a Custom selectAsync or selectTask Builder

If the redundancy of passing the generated HydraReader.Read static method into the selectAsync and selectTask builders bothers you, you can easily create your own builder that has it baked in:

let selectTask' ct = selectTask HydraReader.Read ct

// Usage:
let! distinctCustomerNames = 
    selectTask' openContext {
        for c in SalesLT.Customer do
        select (c.FirstName, c.LastName)
        distinct
    }

Selecting city and state columns only:

let getCities (cityFilter: string) = 
    selectTask HydraReader.Read openContext {
        for a in SalesLT.Address do                             // Specifies a FROM table in the query
        where (a.City = cityFilter)                             // Specifies a WHERE clause in the query
        select (a.City, a.StateProvince) into selected          // Specifies which entities and/or columns to SELECT in the query
        mapList (                                               // Transforms the query results
            let city, state = selected
            $"City, State: %s{city}, %s{state}"
        )
    }

Special where filter operators (see the sketch after this list):

  • isIn or |=|
  • isNotIn or |<>|
  • like or =%
  • notLike or <>%
  • isNullValue or = None
  • isNotNullValue or <> None
  • subqueryMany
  • subqueryOne
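
Here is a minimal sketch combining two of these operators, using the SalesLT sample types shown above:

let getNamedCustomersNotStartingWithA () =
    selectTask HydraReader.Read openContext {
        for c in SalesLT.Customer do
        // `<> None` filters out NULL company names; `<>%` is the NOT LIKE operator
        where (c.CompanyName <> None && c.LastName <>% "A%")
        select (c.CustomerID, c.LastName, c.CompanyName)
    }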

Select Address entities where City starts with S:

let getAddressesInCitiesStartingWithS () = 
    selectAsync HydraReader.Read openContext {
        for a in SalesLT.Address do
        where (a.City =% "S%")
        select a
    }

NOTE: You should always explicitly select the table or tables. As of v2.5.0, selecting the table(s) will explicitly select all columns in the table. Otherwise, it will issue a SELECT * query which is slightly less performant as it will require a table scan.

Try to select a single row (this example returns a decimal option):

let tryGetOrderTotal (orderId: int) = 
        selectAsync HydraReader.Read openContext {
            for o in SalesLT.Order do
            where (o.Id = orderId)
            select o.Total
            tryHead
        }

Joins

Select top 10 Product entities with inner joined category name:

let getProductsWithCategory () = 
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        join c in SalesLT.ProductCategory on (p.ProductCategoryID.Value = c.ProductCategoryID)
        select (p, c.Name)
        take 10
    }

Select Customer with left joined Address where CustomerID is in a list of values: (Note that left joined tables will be of type 'T option, so you will need to use the .Value property to access join columns.)

let getCustomerAddressesInIds (customerIds: int list) =
    selectAsync HydraReader.Read openContext {
        for c in SalesLT.Customer do
        leftJoin ca in SalesLT.CustomerAddress on (c.CustomerID = ca.Value.CustomerID)
        leftJoin a  in SalesLT.Address on (ca.Value.AddressID = a.Value.AddressID)
        where (c.CustomerID |=| customerIds)
        orderBy c.CustomerID
        select (c, a)
    }

When selecting individual columns from a left joined table, you can force non-optional columns to be optional by wrapping them in Some:

let getCustomerZipCodes (customerId: int) =
    selectAsync HydraReader.Read openContext {
        for c in SalesLT.Customer do
        leftJoin ca in SalesLT.CustomerAddress on (c.CustomerID = ca.Value.CustomerID)
        leftJoin a  in SalesLT.Address on (ca.Value.AddressID = a.Value.AddressID)
        where (c.CustomerID = customerId)
        orderBy c.CustomerID
        select (c, Some a.Value.ZipCode)
    }

To create a join query with multi-columns, use tuples:

select {
    for o in SalesLT.OrderHeaders do
    join d in SalesLT.OrderDetails on ((o.SalesOrderID, o.ModifiedDate) = (d.SalesOrderID, d.ModifiedDate))
    select (o, d)
}

💥 The join on clause only supports simple column = column comparisons. Constant value parameters are not supported. Any custom filters that you might normally put in the on clause, especially those involving input parameters, will need to be moved to the where clause. This is because the F# join on syntax does not support complex filter clauses.
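
For example, a minimal sketch of moving a parameterized filter out of the on clause and into where (this assumes a ModifiedDate column on SalesLT.Product, which is not shown in the generated sample above):

let getRecentProductsWithCategory (minModified: System.DateTime) =
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        join c in SalesLT.ProductCategory on (p.ProductCategoryID.Value = c.ProductCategoryID)
        where (p.ModifiedDate >= minModified)   // parameterized filter lives here, not in the `on` clause
        select (p, c.Name)
    }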

Transforming Query Results

To transform the query results, use the mapSeq, mapArray or mapList operations.

    let! lineTotals =
        selectTask HydraReader.Read openContext {
            for o in SalesLT.OrderHeaders do
            join d in SalesLT.OrderDetails on (o.SalesOrderID = d.SalesOrderID)
            where (o.OnlineOrderFlag = true)
            select (o, d) into selected
            mapList (
                let o,d = selected
                {| 
                    ShipDate = 
                        match o.ShipDate with
                        | Some d -> d.ToShortDateString()
                        | None -> "No Order Number"
                    LineTotal = (decimal d.OrderQty) * d.UnitPrice
                |}
            )
        }

If a custom subset of entities and/or columns has been selected in the query, you will need to project them into a new binding using the into operation:

    let! lineTotals =
        selectTask HydraReader.Read openContext {
            for o in SalesLT.OrderHeaders do
            join d in SalesLT.OrderDetails on (o.SalesOrderID = d.SalesOrderID)
            where (o.OnlineOrderFlag = true)
            select (o, d.OrderQty, d.UnitPrice) into selected  // project selected values so they can be mapped
            mapList (
                let o, qty, unitPrice = selected               // unpack the selected values for use in transform
                {| 
                    ShipDate = 
                        match o.ShipDate with
                        | Some d -> d.ToShortDateString()
                        | None -> "No Order Number"
                    LineTotal = (decimal qty) * unitPrice
                |}
            )
        }

You can also use mapSeq in conjunction with tryHead to map a single result:

        selectAsync HydraReader.Read openContext {
            for o in SalesLT.Order do
            where (o.Id = orderId)
            select o.Total
            mapSeq {| GrandTotal = o.Total |}
            tryHead
        }

Aggregates

Aggregate functions (can be used in select, having and orderBy clauses):

  • countBy
  • sumBy
  • minBy
  • maxBy
  • avgBy

/// Select categories whose min product price is > 500 and max product price is < 1000
let getCategoriesWithHighAvgPrice () = 
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        where (p.ProductCategoryID <> None)
        groupBy p.ProductCategoryID
        having (minBy p.ListPrice > 500M && maxBy p.ListPrice < 1000M)
        select (p.ProductCategoryID, minBy p.ListPrice, maxBy p.ListPrice) into selected
        mapList (
            let catId, minPrice, maxPrice = selected
            $"CatID: {catId}, MinPrice: {minPrice}, MaxPrice: {maxPrice}"
        )
    }
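
Here is another minimal sketch, this time using countBy and sumBy with groupBy (again using the SalesLT sample types above):

let getCategoryStats () =
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        where (p.ProductCategoryID <> None)
        groupBy p.ProductCategoryID
        // COUNT of rows per category and SUM of their list prices
        select (p.ProductCategoryID, countBy p.ProductCategoryID, sumBy p.ListPrice)
    }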

Alternative Row Count Query:

let! customersWithNoSalesPersonCount =
    selectTask HydraReader.Read openContext {
        for c in SalesLT.Customer do
        where (c.SalesPerson = None)
        count
    }

💥 In some cases, when selecting an aggregate of a non-NULL column, the database will still return NULL if the query result set is empty, for example when selecting the MAX of an INT column in an empty table. This is not supported and will throw an exception. If your query might return NULL for the aggregate of a non-NULL column, wrap the column in Some within the aggregate so the NULL can be parsed as an Option value:

โŒ INCORRECT:

/// Select the minimum item price above a threshold
let getNextLowestPrice threshold = 
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        where (p.ListPrice > threshold)
        select (minBy p.ListPrice)
    }

✅ CORRECT:

/// Select the minimum item price above a threshold
let getNextLowestPrice threshold = 
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        where (p.ListPrice > threshold)
        select (minBy (Some p.ListPrice))
    }

WHERE Subqueries

Use the subqueryMany function for subqueries that return multiple rows for comparison:

// Create a subquery that gets top 5 avg prices by category ID:
let top5CategoryIdsWithHighestAvgPrices = 
    select {
        for p in SalesLT.Product do
        where (p.ProductCategoryID <> None)
        groupBy p.ProductCategoryID
        orderByDescending (avgBy p.ListPrice)
        select p.ProductCategoryID
        take 5
    }

// Get category names where the category ID is "IN" the subquery:
let! top5Categories =
    selectTask HydraReader.Read openContext {
        for c in SalesLT.ProductCategory do
        where (Some c.ProductCategoryID |=| subqueryMany top5CategoryIdsWithHighestAvgPrices)
        select c.Name
    }

Use the subqueryOne function for subqueries that return a single value for comparison:

// Create a subquery that gets the avg list price (a single value):
let avgListPrice = 
    select {
        for p in SalesLT.Product do
        select (avgBy p.ListPrice)
    } 

// Get products with a price > the average price
let! productsWithAboveAveragePrice =
    selectTask HydraReader.Read openContext {
        for p in SalesLT.Product do
        where (p.ListPrice > subqueryOne avgListPrice)
        select (p.Name, p.ListPrice)
    }

Correlated Subqueries

If the subquery is correlated with the parent query (i.e., the subquery references a row variable from the parent query), use the correlate keyword in the subquery to introduce the correlated variable. Note: the variable name in the subquery must match the variable name in the parent query, because it determines the table alias in the generated SQL query.

// Create a subquery that gets the min price for this product line,
// referencing a row variable "outer" from the parent query:
let lowestPriceByProductLine = 
    select {
        for inner in Production.Product do
        correlate outer in Production.Product
        where (inner.ProductLine = outer.ProductLine)
        select (minBy inner.ListPrice)
    }

// Get the products whose price is the lowest of all prices in its product line.
// The name "outer" needs to match the subquery.
let! cheapestByProductLine = 
    selectTask HydraReader.Read openContext {
        for outer in Production.Product do
        where (outer.ListPrice = subqueryOne lowestPriceByProductLine)
        select (outer.Name, outer.ListPrice)
    }

Distinct Query:

let! distinctCustomerNames = 
    selectTask HydraReader.Read openContext {
        for c in SalesLT.Customer do
        select (c.FirstName, c.LastName)
        distinct
    }

Dos and Don'ts

💥 The select clause only supports tables and fields for the sake of modifying the generated SQL query and the returned query type 'T. Transformations (i.e. calling .ToString() or any other function) are not supported and will throw an exception.

💥 The where clause will automatically parameterize your input values. However, similar to the select clause, the where clause does not support calling transformations (i.e. .ToString()). So you must prepare any parameter transformations before the builder.

✅ CORRECT:

let getCities () =
    let city = getCity() // DO prepare where parameters above and then pass into the where clause
    selectTask HydraReader.Read openContext {
        for a in SalesLT.Address do
        where (a.City = city)
        select (a.City, a.StateProvince) into (city, state)
        mapList $"City: %s{city}, State: %s{state}"   // DO transforms using the `mapSeq`, `mapArray` or `mapList` operations
    }

โŒ INCORRECT:

let getCities () =
    selectTask HydraReader.Read openContext {
        for a in SalesLT.Address do
        where (a.City = getCity()) // DO NOT perform calculations or translations within the builder
        select ($"City: %s{city}, State: %s{state}")   // DO NOT transform results within the builder 
    }

Insert Builder

Simple Inserts

For simple inserts with no identity column and no included/excluded columns, use the into _ syntax:

let! rowsInserted = 
    insertTask openContext {
        into Person.Person
        entity 
            {
                dbo.Person.ID = Guid.NewGuid()
                dbo.Person.FirstName = "Bojack"
                dbo.Person.LastName = "Horseman"
                dbo.Person.LastUpdated = DateTime.Now
            }
    }

printfn "Rows inserted: %i" rowsInserted

Insert with an Identity Field

If you have an Identity column or if you want to specify columns to include/exclude, use the for _ in _ do syntax. By default, all record fields will be included as insert values, so when using an identity column, you must handle it in one of two ways:

  1. Mark it with getId. This will prevent it from being added as an insert value, and it will also select and return the identity field.
  2. Mark it with excludeColumn to prevent it from being added as an insert value (see the sketch after the example below).

let! errorLogID =
    insertTask openContext {
        for e in dbo.ErrorLog do
        entity 
            {
                dbo.ErrorLog.ErrorLogID = 0 // Adding `getId` below will ignore this value.
                dbo.ErrorLog.ErrorTime = System.DateTime.Now
                dbo.ErrorLog.ErrorLine = None
                dbo.ErrorLog.ErrorMessage = "TEST"
                dbo.ErrorLog.ErrorNumber = 400
                dbo.ErrorLog.ErrorProcedure = (Some "Procedure 400")
                dbo.ErrorLog.ErrorSeverity = None
                dbo.ErrorLog.ErrorState = None
                dbo.ErrorLog.UserName = "jmarr"
            }
        getId e.ErrorLogID
    }

printfn "ErrorLogID Identity: %i" errorLogID

Multiple Inserts

To insert multiple entities in one query, use the entities operation in conjunction with the AtLeastOne type to ensure that at least one item exists in the collection. (The AtLeastOne forces you to handle the case where an empty collection is passed to entities which would throw a runtime exception.)

NOTE: getId is not supported for multiple inserts with entities! So if you are inserting multiple entities that have an identity field, you must use excludeColumn on the identity column (see the sketch after the example below).

let currenciesMaybe = 
    [ 0..2 ] 
    |> List.map (fun i -> 
        {
            Sales.Currency.CurrencyCode = $"BC{i}"
            Sales.Currency.Name = "BitCoin"
            Sales.Currency.ModifiedDate = System.DateTime.Now
        }
    )
    |> AtLeastOne.tryCreate

match currenciesMaybe with
| Some currencies ->
    do! insertTask openContext {
            into Sales.Currency
            entities currencies
        } :> Task // upcast to Task if you want to ignore the resulting value
| None ->
    printfn "Skipping insert because entities seq was empty."

Upsert

Upsert support has been added for Postgres and Sqlite only because they support ON CONFLICT DO ___ which provides atomic upsert capabilities. (Unfortunately, SQL Server and Oracle only have MERGE which can suffer from concurrency issues. For SQL Server bulk operations, please try my SqlBulkTools.Fsharp library.)

Postgres: open SqlHydra.Query.NpgsqlExtensions

Sqlite: open SqlHydra.Query.SqliteExtensions

Example Usage:

    /// Inserts an address or updates it if it already exists.
    let upsertAddress address = 
        insertTask openContext {
            for a in Person.Address do
            entity address
            onConflictDoUpdate a.AddressID (
                a.AddressLine1,
                a.AddressLine2,
                a.City,
                a.StateProvince,
                a.CountryRegion,
                a.PostalCode,
                a.ModifiedDate
            )
        }

Or, if you have multiple addresses to upsert:

    /// Inserts multiple addresses or updates them if they already exist.
    let upsertAddress addresses =
        match addresses |> AtLeastOne.tryCreate with
        | Some addresses -> 
            insertTask openContext {
                for a in Person.Address do
                entities addresses
                onConflictDoUpdate a.AddressID (
                    a.AddressLine1,
                    a.AddressLine2,
                    a.City,
                    a.StateProvince,
                    a.CountryRegion,
                    a.PostalCode,
                    a.ModifiedDate
                )
            }
        | None ->
            printfn "No addresses to insert."
            0

    /// Tries to insert an address if it doesn't already exist.
    let tryInsertAddress address = 
        insertTask openContext {
            for a in Person.Address do
            entity address
            onConflictDoNothing a.AddressID
        }

Update Builder

Update Individual Fields

To update individual columns, use the set operation.

do! updateAsync openContext {
        for e in dbo.ErrorLog do
        set e.ErrorNumber 123
        set e.ErrorMessage "ERROR #123"
        set e.ErrorLine (Some 999)
        set e.ErrorProcedure None
        where (e.ErrorLogID = 1)
    } :> Task // upcast to Task if you want to ignore the resulting value

Update Entire Record

To update an entire record, use the entity operation. You may optionally use includeColumn to specify an allow list of one or more columns on the record to include in the update. You may optionally use excludeColumn to specify a deny list of one or more columns on the record to exclude from the update. NOTE: You may use includeColumn or excludeColumn multiple times - once for each column to include/exclude.

let! rowsUpdated = 
    updateTask openContext {
        for e in dbo.ErrorLog do
        entity 
            {
                dbo.ErrorLog.ErrorLogID = 0 // Add `excludeColumn` below to ignore an identity column
                dbo.ErrorLog.ErrorTime = System.DateTime.Now
                dbo.ErrorLog.ErrorLine = None
                dbo.ErrorLog.ErrorMessage = "TEST"
                dbo.ErrorLog.ErrorNumber = 400
                dbo.ErrorLog.ErrorProcedure = (Some "Procedure 400")
                dbo.ErrorLog.ErrorSeverity = None
                dbo.ErrorLog.ErrorState = None
                dbo.ErrorLog.UserName = "jmarr"
            }
        excludeColumn e.ErrorLogID // Exclude the identity column
        where (e.ErrorLogID = errorLog.ErrorLogID)
    }
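
Conversely, a minimal sketch using includeColumn to update only an allow list of columns (assuming the same errorLog record as in the example above):

let! rowsUpdated = 
    updateTask openContext {
        for e in dbo.ErrorLog do
        entity errorLog
        includeColumn e.ErrorMessage  // only these two columns are included in the UPDATE
        includeColumn e.ErrorSeverity
        where (e.ErrorLogID = errorLog.ErrorLogID)
    }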

If you want to apply an update to all records in a table, you must use the updateAll keyword or else it will throw an exception (it's a safety precaution that may save you some trouble. 😊):

update {
    for c in Sales.Customer do
    set c.AccountNumber "123"
    updateAll
}

Delete Builder

do! deleteTask openContext {
        for e in dbo.ErrorLog do
        where (e.ErrorLogID = 5)
    } :> Task // upcast to Task if you want to ignore the resulting value

If you want to delete all records in a table, you must use the deleteAll keyword in lieu of a where statement or else it will not compile:

let! rowsDeleted = 
    deleteTask openContext {
        for c in Sales.Customer do
        deleteAll
    }
    
printfn "Rows deleted: %i" rowsDeleted

Custom SqlKata Queries

SqlKata supports a lot of custom query operations, many of which are not supported by SqlHydra query builders. The kata custom operation allows you to manipulate the underlying SqlKata query directly. For example, you could use this to conditionally add columns to the WHERE or ORDER BY clauses:

let getCustomers filters = 
  select {
      for c in main.Customer do
      where (c.FirstName = "John")
      kata (fun query -> 
          match filters.LastName with
          | Some lastName -> query.Where("c.LastName", lastName)
          | None -> query
      )
      kata (fun query -> 
          query.OrderBy(filters.SortColumns)
      )
  }

Custom SQL Queries

Sometimes it is easier to just write a custom SQL query. This can be helpful when you have a very custom query, or are using SQL constructs that do not yet exist in SqlHydra.Query. You can do this while still maintaining the benefits of the strongly typed generated HydraReader.

let getTop10Products(conn: SqlConnection) = task {
    let sql = $"SELECT TOP 10 * FROM {nameof dbo.Product} p"
    use cmd = new SqlCommand(sql, conn)
    use! reader = cmd.ExecuteReaderAsync()
    let hydra = HydraReader(reader)

    return [
        while reader.Read() do
            hydra.``dbo.Product``.Read()
    ]
}

See more examples of using the generated HydraReader: https://github.com/JordanMarr/SqlHydra/wiki/DataReaders


sqlhydra's Issues

Feature Request: Upserting multiple records to a single statement

Hi Jordan,

First, as always, thank you for your hard work on this package. I've benefited professionally and personally from SqlHydra. It's not much, but I threw you $50.00 for the great work you've done.

According to the Postgres documentation, multiple records can be upserted in a single SQL statement. They provide a simple example:

INSERT INTO distributors (did, dname)
    VALUES (5, 'Gizmo Transglobal'), (6, 'Associated Computing, Inc')
    ON CONFLICT (did) DO UPDATE SET dname = EXCLUDED.dname;

I would like to do this in SqlHydra, by perhaps using the entities keyword in either the insertAsync or upsertAsync computation expression, to upsert multiple records in a single sql statement.

.net6 DateOnly issues in an Update statement

Sorry to bombard you, but I've got another issue with the DateOnly stuff, this time on an update query. I included a DateOnly in a where clause of an update statement, and got the following exception:

Unhandled exception. System.AggregateException: One or more errors occurred. (One or more errors occurred. (No mapping exists from object type System.DateOnly to a known managed provider native type.))
 ---> System.AggregateException: One or more errors occurred. (No mapping exists from object type System.DateOnly to a known managed provider native type.)
 ---> System.ArgumentException: No mapping exists from object type System.DateOnly to a known managed provider native type. 
   at Microsoft.Data.SqlClient.MetaType.GetMetaTypeFromValue(Type dataType, Object value, Boolean inferLen, Boolean streamAllowed)
   at Microsoft.Data.SqlClient.SqlParameter.Validate(Int32 index, Boolean isCommandProc)
   at Microsoft.Data.SqlClient.SqlCommand.BuildParamList(TdsParser parser, SqlParameterCollection parameters, Boolean includeReturnValue)
   at Microsoft.Data.SqlClient.SqlCommand.BuildExecuteSql(CommandBehavior behavior, String commandText, SqlParameterCollection parameters, _SqlRPC& rpc)
   at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean 
returnStream, Boolean isAsync, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry, String method)
   at Microsoft.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, Boolean sendToPipe, Int32 timeout, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry, String methodName)
   at Microsoft.Data.SqlClient.SqlCommand.BeginExecuteNonQueryInternal(CommandBehavior behavior, AsyncCallback callback, Object stateObject, Int32 timeout, Boolean inRetry, Boolean asyncWrite)
   at Microsoft.Data.SqlClient.SqlCommand.BeginExecuteNonQueryAsync(AsyncCallback callback, Object stateObject)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl(Func`3 beginMethod, Func`2 endFunction, Action`1 endAction, Object 
state, TaskCreationOptions creationOptions)
   at System.Threading.Tasks.TaskFactory`1.FromAsync(Func`3 beginMethod, Func`2 endMethod, Object state)
   at Microsoft.Data.SqlClient.SqlCommand.InternalExecuteNonQueryAsync(CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at Microsoft.FSharp.Control.AsyncResult`1.Commit() in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 393
   at Microsoft.FSharp.Control.AsyncPrimitives.QueueAsyncAndWaitForResultSynchronously[a](CancellationToken token, FSharpAsync`1 computation, FSharpOption`1 timeout) in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 1045
   at Microsoft.FSharp.Control.AsyncPrimitives.RunSynchronously[T](CancellationToken cancellationToken, FSharpAsync`1 computation, FSharpOption`1 timeout) in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 1071
   at Microsoft.FSharp.Control.FSharpAsync.RunSynchronously[T](FSharpAsync`1 computation, FSharpOption`1 timeout, FSharpOption`1 cancellationToken) in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 1366

When I generate the schema with .net5 and run the same query, I get no errors.

The query looked like:

    do! Async.Ignore <| updateAsync (ContextType.Create (openContext connStr)) {
      for row in table<MyDB.dbo.MyTable> do
      set row.timestamp updated.timestamp
      where (row.date = updated.date) // <- row.date here is a DateOnly
    }

Use of DateTimeOffset means you must use SqlDataReader

Dapper returns an IDataReader from its extension method on top of the SqlConnection. Unfortunately, this does not contain the GetDateTimeOffset member that SqlDataReader does, so the generated SqlHydra code does not work.

Is there an alternative for the use of this member that would work with an IDataReader?

SQL Lock Tables

Hello and thanks for this awesome library!

While we have been successful in utilizing it for setup, type generation and querying / inserting data into our database, we were wondering if there was any documentation or features that allow for the locking of a table when using SQLHydra.Query. Any insight on a feature set, documentation or configuration that may need to occur for this to be possible (if it even is or is planned) would be greatly appreciated :)

Thank you!

MSSQL (Microsoft.Data.SqlClient): can't retrieve DateOnly

I'm using .net 6, SQL Hydra v. 1.0.0, and Microsoft.Data.SqlClient v. 4.1.0 to connect to an MSSQL database that includes some DATE columns. SQL Hydra tries to use System.DateOnly for this, and it can write dates to the database just fine, but as soon as it tries to read a date from the database, it crashes in the SqlHydra-generated Reader code with

System.InvalidCastException: Unable to cast object of type 'System.DateTime' to type 'System.DateOnly'.
   at Microsoft.Data.SqlClient.SqlDataReader.GetFieldValueFromSqlBufferInternal[T](SqlBuffer data, _SqlMetaData metaData, Boolean isAsync)
   at Microsoft.Data.SqlClient.SqlDataReader.GetFieldValueInternal[T](Int32 i, Boolean isAsync)
   at Microsoft.Data.SqlClient.SqlDataReader.GetFieldValue[T](Int32 i)
   at [email protected](Int32 arg00) in C:\Git\MyRepo\MyProject\MyDatabase.fs:line 537
   at MyProject.MyDatabase.RequiredColumn`2.Read(FSharpOption`1 alias) in C:\Git\MyRepo\MyProject\MyDatabase.fs:line 11
   at MyProject.MyDatabase.dbo.MyTableReader.Read() in C:\Git\MyRepo\MyProject\MyDatabase.fs:line 542

It seems like GetFieldValue<DateOnly> doesn't work? Can we just use GetDateTime >> DateOnly.FromDateTime?

Fails to insert entity with auto-increment in Postgres

I tried inserting an entity into a table with auto-increment primary key (serial) in Postgres and followed the example from the docs where you just set 0 on the id-field, but it ignores the auto-increment sequence and just inserts the value as it is, which of course fails the second time when there already exists a row with id 0.

Is there something I am missing from the example? My query looks like this:

// auto-generated types
module my_schema =
      [<CLIMutable>]
      type my_table =
          { id: int
            name: string
        }

// code inserting an entity
let myRecord: my_schema.my_table = { id = 0; name = "Test" } 
use ctx = openContext dbConnection
    insert {
        for e in myTable do
        entity myRecord
    }
    |> ctx.InsertAsync

The sql schema looks like this:

create table my_table (
    id serial primary key,
    name text not null
);

Thanks!

SqlServer: Cannot use `getId` against a `UNIQUEIDENTIFIER` column

I have the following table

CREATE TABLE [dbo].[SqlHydraRepro]
(
    [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    -- descriptive data
    [EmailAddress] NCHAR(50) NOT NULL
)
GO

If I try to insert a new value with auto generation of the Id column I get an error:

insertAsync (Create DbContext.create) {
    for user in Tables.sqlHydraRepro do

        entity
            {
                dbo.SqlHydraRepro.Id = Guid.NewGuid()
                dbo.SqlHydraRepro.EmailAddress =
                    "requestValues.EmailAddress"
            }

        getId user.Id
}

Result:

System.InvalidCastException: Invalid cast from 'System.DBNull' to 'System.Guid'.
   at System.Convert.DefaultToType(IConvertible value, Type targetType, IFormatProvider provider)
   at System.DBNull.System.IConvertible.ToType(Type type, IFormatProvider provider)
   at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
   at <StartupCode$SqlHydra-Query>[email protected](Object identity)
   at Microsoft.FSharp.Control.AsyncPrimitives.CallThenInvokeNoHijackCheck[a,b](AsyncActivation`1 ctxt, b result1, FSharpFunc`2 userCode) in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 465
   at Microsoft.FSharp.Control.Trampoline.Execute(FSharpFunc`2 firstAction) in D:\a\_work\1\s\src\fsharp\FSharp.Core\async.fs:line 104

Missing oracle timestamp datatypes

I'm noticing that when generating objects for an Oracle table that includes timestamp columns, I'm not seeing the timestamp fields getting generated. Looking closer, I can see that the fields' datatypes are e.g. TIMESTAMP(3) instead of TIMESTAMP, which is what the supportedTypeMappings in OracleDataTypes.fs would expect.

Looking at the link included below, and grabbing samples out of a default EBS Oracle database (version 19c), it looks like:

TIMESTAMP
TIMESTAMP WITH LOCAL TIME ZONE
TIMESTAMP WITH TIME ZONE
INTERVAL DAY TO SECOND

will be identified as having types

TIMESTAMP(x) .. where x can be a value between 0 and 9
TIMESTAMP(x) WITH LOCAL TIME ZONE .. where x can be a value between 0 and 9
TIMESTAMP(x) WITH TIME ZONE .. where x can be a value between 0 and 9
INTERVAL DAY(x) TO SECOND(y) .. where x and y can each be a value between 0 and 9 (not necessarily the same)

You can read more about it here: https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements001.htm#i54330

CLOB and NCLOB are also string types that are omitted from the supported types map. I suppose there may be some others, but these are the ones I'm running across while working on a POC at the moment.

Npgsql.PostgresException syntax error at or near "["

Hello @JordanMarr,

in our project we are moving away from SQLServer to go to PostgreSQL.

I installed SqlHydra.Npgsql tool and generated the new database files.

I also installed Npgsql and removed the System.Data.SqlClient library.

My project is now compiling; however, when running the tests I get the following error:

System.AggregateException: One or more errors occurred. (42601: syntax error at or near "["

POSITION: 13)
 ---> Npgsql.PostgresException (0x80004005): 42601: syntax error at or near "["

POSITION: 13
   at Npgsql.Internal.NpgsqlConnector.<ReadMessage>g__ReadMessageLong|225_0(NpgsqlConnector connector, Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
   at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
   at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteReader(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteReader(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteNonQuery(Boolean async, CancellationToken cancellationToken)
  Exception data:
    Severity: ERROR
    SqlState: 42601
    MessageText: syntax error at or near "["
    Position: 13
    File: scan.l
    Line: 1176
    Routine: scanner_yyerror
   --- End of inner exception stack trace ---

I feel like the [ syntax error is because the generated SQL is for SQLServer instead of PostgreSQL.

The query which trigger the error is:

deleteAsync (Shared ctx) {
    for _ in table<dbo.EncounterForms> do
        deleteAll
}

Is there something I am missing? Is it possible to see the SQL generated by SqlHydra to try to understand the problem?

`where` clause with direct boolean column does not work

It would be nice if you could write the following (where is_latest is a boolean column):

selectAsync {
    for foo in someTable do
    where foo.is_latest
    select foo
    tryHead
}

instead, you have to manually check for true:

selectAsync {
    for foo in someTable do
    where (foo.is_latest = true)
    select foo
    tryHead
}

Conditional where's, orderBy's, and thenBy's

I saw this issue a while ago and thought I'd contribute.
If you'd like a PR instead, I'd be happy to send one over.

type SelectQueryBuilder<'Selected, 'Mapped> with

  /// Sets the WHERE condition if applyFilter is true
  [<CustomOperation("whereIf", MaintainsVariableSpace = true)>]
  member this.WhereIf(state: QuerySource<'T, Query>, applyFilter, [<ProjectionParameter>] whereExpression) =
      if applyFilter then
          this.Where(state, whereExpression)
      else
          state

  /// Sets the ORDER BY for single column if applyOrderBy is true
  [<CustomOperation("orderByIf", MaintainsVariableSpace = true)>]
  member this.OrderByIf(state: QuerySource<'T, Query>, applyOrderBy, [<ProjectionParameter>] propertySelector) =
      if applyOrderBy then
          this.OrderBy(state, propertySelector)
      else
          state

  /// Sets the ORDER BY DESC for single column if applyOrderByDescending is true
  [<CustomOperation("orderByDescendingIf", MaintainsVariableSpace = true)>]
  member this.OrderByDescendingIf
      (
          state: QuerySource<'T, Query>,
          applyOrderByDescending,
          [<ProjectionParameter>] propertySelector
      ) =
      if applyOrderByDescending then
          this.OrderByDescending(state, propertySelector)
      else
          state

  /// Sets the ORDER BY for single column if applyThenBy is true
  [<CustomOperation("thenByIf", MaintainsVariableSpace = true)>]
  member this.ThenByIf(state: QuerySource<'T, Query>, applyThenBy, [<ProjectionParameter>] propertySelector) =
      if applyThenBy then
          this.ThenBy(state, propertySelector)
      else
          state

  /// Sets the ORDER BY DESC for single column if applyThenByDescending is true
  [<CustomOperation("thenByDescendingIf", MaintainsVariableSpace = true)>]
  member this.ThenByDescendingIf
      (
          state: QuerySource<'T, Query>,
          applyThenByDescending,
          [<ProjectionParameter>] propertySelector
      ) =
      if applyThenByDescending then
          this.ThenByDescending(state, propertySelector)
      else
          state

How to use a correlated subquery?

I'm trying to write a query with a subquery that references rows from the parent query. For example, say I want SQL for getting the latest weather value from a table of weather data that looks roughly like

SELECT * FROM weather w
WHERE 
  w.time >= '2023-02-01' AND w.time <= '2023-02-10' 
  AND w.timestamp = (
    SELECT MAX(timestamp) FROM weather inner 
    WHERE inner.location = w.location AND inner.instrument = w.instrument AND inner.time = 
w.time
  )

First I tried

let lBound,uBound = (DateTime(2023,02,01), DateTime(2023,02,10))
select { 
  for w in table<weather> do
  where (
    w.time >= lBound && w.time <= uBound && w.timestamp = subqueryOne (
      select { 
        for inner in table<weather> do
        where (inner.location = w.location && inner.instrument = w.instrument && inner.time = w.time)
        select (maxBy inner.timestamp)
      }
    )
  )
}

but that doesn't let me use a select to start a subquery inside another one. So then I tried

let lBound,uBound = (DateTime(2023,02,01), DateTime(2023,02,10))
let selectInner = select
select { 
  for w in table<weather> do
  where (
    w.time >= lBound && w.time <= uBound && w.timestamp = subqueryOne (
      selectInner { 
        for inner in table<weather> do
        where (inner.location = w.location && inner.instrument = w.instrument && inner.time = w.time)
        select (maxBy inner.timestamp)
      }
    )
  )
}

which compiles, but gave me

System.NotImplementedException : The method or operation is not implemented.
Stack Trace:
at SqlHydra.Query.LinqExpressionVisitors.visit@212(FSharpFunc`2 qualifyColumn, Expression exp, Query query)
at SqlHydra.Query.LinqExpressionVisitors.visit@212(FSharpFunc`2 qualifyColumn, Expression exp, Query query)
at SqlHydra.Query.SelectBuilders.SelectBuilder`2.Where[T](QuerySource`2 state, Expression`1 whereExpression)

I thought I'd try

let lBound,uBound = (DateTime(2023,02,01), DateTime(2023,02,10))
let sub = select {
  for inner in table<weather> do
  groupBy (inner.location, inner.instrument, inner.time)
  select (inner.location, inner.instrument, inner.time, maxBy inner.timestamp)
}
select { 
  for w in table<weather> do
  where (
    w.time >= lBound && w.time <= uBound && 
    (w.location, w.instrument, w.time, w.timestamp) = subqueryOne sub
  )
}

which also gives me a NotImplementedException (not surprising, and I think there's another open issue about using tuples in a where clause)
and

let lBound,uBound = (DateTime(2023,02,01), DateTime(2023,02,10))
let sub = select {
  for inner in table<weather> do
  groupBy (inner.location, inner.instrument, inner.time)
  select (inner.location, inner.instrument, inner.time, maxBy inner.timestamp)
}
select { 
  for w in table<weather> do
  for (location, instrument, time, timestamp) in subqueryMany sub do
  where (
    w.time >= lBound && w.time <= uBound && 
    w.location = location && w.instrument = instrument && w.time = time && w.timestamp = timestamp
  )
}

but that fails to compile.

Any suggestions how to get this sort of query to work?

SQLHydra's maxBy causes SqlHydra code to fail if querying an empty table

Noticed this when trying to use a maxBy on an empty table. We were able to work around it with a sortByDescending into take 1 (we could also grab a count and branch on whether that returned anything), but wanted to suggest that it might be a bit more intuitive to have maxBy handle the null without breaking, perhaps by returning an option?

This problem was encountered using SqlHydra.SqlServer.

I believe this should be reproducible by creating a database with an empty table, creating the schema for said table, and querying for a maxBy on one of the fields. Our particular field was a dateTime2(6) if that is relevant.

Reading Option<byte[]> is broken

I have the following table:

CREATE TABLE account
(
    account_pk uuid NOT NULL DEFAULT gen_random_uuid(),
    inserted_on timestamptz NOT NULL DEFAULT current_timestamp,
    updated_on timestamptz,
    email varchar(200) NOT NULL,
    name varchar(200) NOT NULL,
    password_salt bytea,
    password_hash bytea,
    password_hash_parameters json,
    
    CONSTRAINT account__pk PRIMARY KEY (account_pk),
    CONSTRAINT account__email__nonempty                         CHECK (length(email) > 0),
    CONSTRAINT account__email__unique                           UNIQUE (email),
    -- In other words, these three fields must either all be set or all be null 
    CONSTRAINT account__password_all_or_nothing_set             CHECK (((password_salt IS NULL) = (password_hash IS NULL)) AND ((password_hash IS NULL) = (password_hash_parameters IS NULL)))
);

And I query it like this:

selectAsync () {
    for a in Tables.account do
        where (a.email =% email)
        select (a.account_pk, a.email, a.name, a.password_salt, a.password_hash, a.password_hash_parameters)
        tryHead
}

But for rows where password_salt or password_hash are not null, SqlHydra throws this exception:

System.AggregateException: One or more errors occurred. (Object of type 'Microsoft.FSharp.Core.FSharpOption`1[System.Object]' cannot be converted to type 'Microsoft.FSharp.Core.FSharpOption`1[System.Byte[]]'.)
 ---> System.ArgumentException: Object of type 'Microsoft.FSharp.Core.FSharpOption`1[System.Object]' cannot be converted to type 'Microsoft.FSharp.Core.FSharpOption`1[System.Byte[]]'.
   at System.RuntimeType.CheckValue(Object& value, ParameterCopyBackAction& copyBack, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
   at System.Reflection.MethodBase.CheckArguments(Span`1 copyOfParameters, IntPtr* byrefParameters, Span`1 shouldCopyBack, ReadOnlySpan`1 parameters, RuntimeType[] sigTypes, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
   at System.Reflection.RuntimeConstructorInfo.InvokeWithManyArguments(RuntimeConstructorInfo ci, Int32 argCount, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at <StartupCode$Sessionizer-Data>.$Database.Read@298.Invoke(Unit unitVar0) in /Users/jwostenberg/Code/Sessionizer/src/Sessionizer.Data/Database.fs:line 300
   at <StartupCode$SqlHydra-Query>.$QueryContext.ReadAsyncWithOptions@120-3.Invoke(DbDataReader reader)
   at Microsoft.FSharp.Control.AsyncPrimitives.CallThenInvoke[T,TResult](AsyncActivation`1 ctxt, TResult result1, FSharpFunc`2 part2) in D:\a\_work\1\s\src\FSharp.Core\async.fs:line 510
   at Microsoft.FSharp.Control.Trampoline.Execute(FSharpFunc`2 firstAction) in D:\a\_work\1\s\src\FSharp.Core\async.fs:line 112
   --- End of inner exception stack trace ---
   at Fable.Remoting.Server.Proxy.makeEndpointProxy@87-2.MoveNext()
   at Fable.Remoting.Server.Proxy.memberVisitor@143-3.MoveNext()

The problem can be fixed by changing lines in GetPrimitiveReader (in Database.fs) from:

else if t = typedefof<byte []> then Some(wrap reader.GetValue)

to:

else if t = typedefof<byte []> then Some(wrap reader.GetFieldValue<byte[]>)

I'm working around this for now by manually making that change, but a fix in the code generator itself would be great.

I suspect that all array types are affected, not just byte[] in particular.

DateOnly / TimeOnly Support

Microsoft.Data.SqlClient supports DateOnly and TimeOnly as of 5.1.0. See the release notes: https://github.com/dotnet/SqlClient/blob/main/release-notes/5.1/5.1.0.md

Would it be possible to support these as well? Maybe it's as simple as removing the "convertIfDateOnlyTimeOnly" function?

As always, thanks!

Question: Can you exclude a column from being in a generated type?

We would like to add a created_at column to a table in our PostgreSQL database that is always set by the DB.
Is there a way to exclude this column when generating the type that represents this row, since we never use it in our program? The column is only used for debugging purposes.

Error when using use ctx = openContext()

I am trying to connect to a PostgreSQL database. Here is my code

let openContext () =
     let compiler = SqlKata.Compilers.PostgresCompiler()
     let conn = new Npgsql.NpgsqlConnection(Helpers.connStr)
     conn.Open()
     new QueryContext(conn, compiler)
 
 let getPerson name =
     use ctx = openContext() // this line is important
     select {
         for person in Table.person do 
         where (person.name = name)
         select (person.name)
     }
     |> ctx.ReadAsync MyDb.HydraReader.Read
     |> Async.AwaitTask
     |> Async.map(Seq.exactlyOne)

If I do use ctx = openContext() I get the error below. However, if I do let ctx = openContext(), it works fine. My worry is that this way the context never gets disposed correctly.

Here is the error

Npgsql.NpgsqlException (0x80004005): Exception while reading from stream
---> System.NotSupportedException: The ReadAsync method cannot be called when another read operation is pending.
at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](TIOAdapter adapter, Memory`1 buffer)
at System.Net.Security.SslStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at Npgsql.Internal.NpgsqlReadBuffer.g__EnsureLong|41_0(NpgsqlReadBuffer buffer, Int32 count, Boolean async, Boolean readingNotifications)
at Npgsql.Internal.NpgsqlReadBuffer.g__EnsureLong|41_0(NpgsqlReadBuffer buffer, Int32 count, Boolean async, Boolean readingNotifications)
at Npgsql.Internal.NpgsqlConnector.g__ReadMessageLong|213_0(NpgsqlConnector connector, Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
at Npgsql.Internal.NpgsqlConnector.g__ReadMessageLong|213_0(NpgsqlConnector connector, Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
at Npgsql.NpgsqlDataReader.Consume(Boolean async)
at Npgsql.NpgsqlDataReader.Close(Boolean connectionClosing, Boolean async, Boolean isDisposing)
at Npgsql.Internal.NpgsqlConnector.CloseOngoingOperations(Boolean async)
at Npgsql.NpgsqlConnection.CloseAsync(Boolean async)
at Npgsql.NpgsqlConnection.Close()
at Npgsql.NpgsqlConnection.Dispose(Boolean disposing)
at System.ComponentModel.Component.Dispose()
at SqlHydra.Query.QueryContext.System.IDisposable.Dispose()
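
A likely explanation (not confirmed in this thread) is that use disposes the context as soon as getPerson returns, which happens before the pending async read has completed; scoping the use inside the async workflow keeps the connection alive until the query finishes. A minimal sketch of that change:

// Sketch: dispose the context only after the query has completed
// by moving `use` inside the async workflow.
let getPerson name =
    async {
        use ctx = openContext ()
        let! people =
            select {
                for person in Table.person do
                    where (person.name = name)
                    select (person.name)
            }
            |> ctx.ReadAsync MyDb.HydraReader.Read
            |> Async.AwaitTask
        return people |> Seq.exactlyOne
    }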

Adding Postgres' "UPSERT" to SqlHydra

According to Postgres' insert documentation, the ON CONFLICT clause can be used to determine what should happen if a record already exists in the database: whether to ignore the conflict, replace the whole record, or update only a few select columns.

It'd be sweet if SqlHydra had this feature. I understand that this might have to be a pull request if I want to see this feature. If I wanted to create a pull request for this feature, would you be receptive to it? If so, what advice can you give me - where should I look first in the code?

Thanks for any and all feedback.

Allow to generate the list of tables?

Maintaining the list of Tables is a bit tedious.

When a table is removed it is easy to update the list because there is a compiler error.

However, when adding a table the compiler doesn't provide any help. Would it be possible to add support for generating the list of tables?

The things that would need to be configurable are the output and the namespace (which could be inherited from general.namespace).

In my project we generate it using this convention:

namespace Antidote.Database

[<RequireQualifiedAccess>]
module Tables =

    open SqlHydra.Query

    let acsIdentities = table<dbo.AcsIdentities> |> inSchema (nameof dbo)
    let acsTokens = table<dbo.AcsTokens> |> inSchema (nameof dbo)
    let callQueueEntries = table<dbo.CallQueueEntries> |> inSchema (nameof dbo)

    let callRatingImprovementsByCallRating =
        table<dbo.CallRatingImprovementsByCallRating> |> inSchema (nameof dbo)

    let callRatings = table<dbo.CallRatings> |> inSchema (nameof dbo)

Having Tables marked as RequireQualifiedAccess makes the consumer experience easy.

dotnet sqlhydra sqlite fails

Hey there, I want to give this a spin for an embedded db to keep track of a few things in one of my projects, and I wanted to try it in a console application first to get familiar with it.

I installed SqlHydra.Cli locally in a clean .NET 7 console project as the readme suggests and ran it, but I get an exception.

dotnet sqlhydra sqlite
daniel@ubuntu:~/shared/repos/Sqoldo$ dotnet sqlhydra sqlite
- SqlHydra.Sqlite
- v2.0.0.0
- `sqlhydra-sqlite.toml` does not exist. Starting configuration wizard...
- Enter a database Connection String: sample.db
- Enter an Output Filename (Ex: AdventureWorks.fs): SqoldoData.fs
- Enter a Namespace (Ex: MyApp.AdventureWorks): Sqoldo.Data
- Select a use case: SqlHydra.Query integration (default)
- sqlhydra-sqlite.toml has been created!
- Please install the `System.Data.SQLite` NuGet package in your project.
- Please install the `SqlHydra.Query` NuGet package in your project.
Unhandled exception: System.DllNotFoundException: Unable to load shared library 'SQLite.Interop.dll'....

I'm not sure if the CLI is supposed to perform anything else, but the exception happens again each time I run the command.

Full Exception
daniel@ubuntu:~/shared/repos/Sqoldo$ dotnet sqlhydra sqlite
- SqlHydra.Sqlite
- v2.0.0.0
- `sqlhydra-sqlite.toml` does not exist. Starting configuration wizard...
- Enter a database Connection String: sample.db
- Enter an Output Filename (Ex: AdventureWorks.fs): SqoldoData.fs
- Enter a Namespace (Ex: MyApp.AdventureWorks): Sqoldo.Data
- Select a use case: SqlHydra.Query integration (default)
- sqlhydra-sqlite.toml has been created!
- Please install the `System.Data.SQLite` NuGet package in your project.
- Please install the `SqlHydra.Query` NuGet package in your project.
Unhandled exception: System.DllNotFoundException: Unable to load shared library 'SQLite.Interop.dll' or one of its dependencies. In order to help diagnose loading problems, consider using a tool like strace. If you're using glibc, consider setting the LD_DEBUG environment variable: 
/home/daniel/.dotnet/shared/Microsoft.NETCore.App/7.0.5/SQLite.Interop.dll.so: cannot open shared object file: No such file or directory
/home/daniel/.nuget/packages/sqlhydra.cli/2.0.0/tools/net7.0/any/SQLite.Interop.dll.so: cannot open shared object file: No such file or directory
/home/daniel/.dotnet/shared/Microsoft.NETCore.App/7.0.5/libSQLite.Interop.dll.so: cannot open shared object file: No such file or directory
/home/daniel/.nuget/packages/sqlhydra.cli/2.0.0/tools/net7.0/any/libSQLite.Interop.dll.so: cannot open shared object file: No such file or directory
/home/daniel/.dotnet/shared/Microsoft.NETCore.App/7.0.5/SQLite.Interop.dll: cannot open shared object file: No such file or directory
/home/daniel/.nuget/packages/sqlhydra.cli/2.0.0/tools/net7.0/any/SQLite.Interop.dll: cannot open shared object file: No such file or directory
/home/daniel/.dotnet/shared/Microsoft.NETCore.App/7.0.5/libSQLite.Interop.dll: cannot open shared object file: No such file or directory
/home/daniel/.nuget/packages/sqlhydra.cli/2.0.0/tools/net7.0/any/libSQLite.Interop.dll: cannot open shared object file: No such file or directory

   at System.Data.SQLite.UnsafeNativeMethods.sqlite3_config_none(SQLiteConfigOpsEnum op)
   at System.Data.SQLite.SQLite3.StaticIsInitialized()
   at System.Data.SQLite.SQLiteLog.PrivateInitialize(String className)
   at System.Data.SQLite.SQLiteLog.Initialize(String className)
   at System.Data.SQLite.SQLiteConnection..ctor(String connectionString, Boolean parseViaFramework)
   at System.Data.SQLite.SQLiteConnection..ctor(String connectionString)
   at SqlHydra.Sqlite.SqliteSchemaProvider.getSchema(Config cfg) in C:\_github\SqlHydra\src\SqlHydra.Cli\Sqlite\SqliteSchemaProvider.fs:line 14
   at [email protected](Config cfg)
   at SqlHydra.Console.run(Args args) in C:\_github\SqlHydra\src\SqlHydra.Cli\Console.fs:line 140
   at SqlHydra.Program.handler(String provider, FSharpOption`1 tomlFile) in C:\_github\SqlHydra\src\SqlHydra.Cli\Program.fs:line 25
   at [email protected](Tuple`2 tupledArg)
   at FSharp.SystemCommandLine.CommandBuilders.SetHandlerUnit@163-2.Invoke(InvocationContext ctx)
   at System.CommandLine.Invocation.AnonymousCommandHandler.Invoke(InvocationContext context)
   at System.CommandLine.Invocation.InvocationPipeline.<>c__DisplayClass4_0.<<BuildInvocationChain>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass17_0.<<UseParseErrorReporting>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass12_0.<<UseHelp>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass22_0.<<UseVersionOption>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass19_0.<<UseTypoCorrections>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c.<<UseSuggestDirective>b__18_0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass16_0.<<UseParseDirective>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c.<<RegisterWithDotnetSuggest>b__5_0>d.MoveNext()
--- End of stack trace from previous location ---
   at System.CommandLine.Builder.CommandLineBuilderExtensions.<>c__DisplayClass8_0.<<UseExceptionHandler>b__0>d.MoveNext()

Finish support for DateOnly/TimeOnly types

DateOnly Support

  • SqlHydra.Query will need to manually convert DateTime parameter values to DateOnly.
  • SqlHydra.* provider packages will need to generate and use a custom GetDateOnly reader extension method for .NET 6 and above.

TimeOnly Support

  • SqlHydra.Query will need to manually convert TimeSpan parameter values to TimeOnly.
  • SqlHydra.* provider packages will need to generate and use a custom GetTimeOnly reader extension method for .NET 6 and above.

DateOnly tests

TimeOnly tests

Add Support for tuple in where clauses

Thanks for the great library. Another thing I've found: tuple comparisons in where clauses don't appear to be supported when generating queries.

        selectTask HydraReader.Read ctx {
          for a in draftItemTable do
            where (a.topic = topic
                   && (a.timestamp, a.sequence) >= (fromTimestamp, fromSequence)
                   && (a.timestamp, a.sequence) <= (toTimestamp, toSequence)
                   && (loadExecuted = true || isNotIn a.status [|draft_item_status.executed; draft_item_status.executing|])
                   )
            take limit
            mapArray (toDomainModel a)
        }

should roughly translate to

        select * from draft_item
        where
          topic = @topic
          and (timestamp, sequence) >= (@fromTimestamp, @fromSequence)
          and (timestamp, sequence) <= (@toTimestamp, @toSequence)
          and (@loadExecuted = true or status not in ('executed', 'executing'))
        order by (timestamp, sequence)
        limit @limit

However at the moment, it gives:

System.InvalidOperationException: The binary operator GreaterThanOrEqual is not defined for the types 'System.Tuple`2[System.DateTime,System.Int64]' and 'System.Tuple`2[System.DateTime,System.Int64]'.
   at System.Linq.Expressions.Expression.GetUserDefinedBinaryOperatorOrThrow(ExpressionType binaryType, String name, Expression left, Expression right, Boolean liftToNull)
   at System.Linq.Expressions.Expression.GetComparisonOperator(ExpressionType binaryType, String opName, Expression left, Expression right, Boolean liftToNull)
   at System.Linq.Expressions.Expression.GreaterThanOrEqual(Expression left, Expression right, Boolean liftToNull, MethodInfo method)
   at System.Linq.Expressions.Expression.GreaterThanOrEqual(Expression left, Expression right)
   at Microsoft.FSharp.Linq.RuntimeHelpers.LeafExpressionConverter.ConvExprToLinqInContext@367-3.Invoke(Tuple`2 tupledArg)
   at Microsoft.FSharp.Linq.RuntimeHelpers.LeafExpressionConverter.ConvExprToLinqInContext(ConvEnv env, FSharpExpr inp) in D:\a\_work\1\s\src\FSharp.Core\Linq.fs:line 367
   at Microsoft.FSharp.Linq.RuntimeHelpers.LeafExpressionConverter.ConvExprToLinqInContext(ConvEnv env, FSharpExpr inp) in D:\a\_work\1\s\src\FSharp.Core\Linq.fs:line 282
   at Microsoft.FSharp.Linq.RuntimeHelpers.LeafExpressionConverter.ConvExprToLinqInContext(ConvEnv env, FSharpExpr inp) in 
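
As a stopgap, the row-value comparisons can be expanded into explicit column comparisons using only operators that already work in where clauses; a sketch that should be logically equivalent to the tuple form above:

// Expanding (a.timestamp, a.sequence) >= (fromTimestamp, fromSequence)
// and       (a.timestamp, a.sequence) <= (toTimestamp, toSequence)
// into plain column comparisons.
where (a.topic = topic
       && (a.timestamp > fromTimestamp
           || (a.timestamp = fromTimestamp && a.sequence >= fromSequence))
       && (a.timestamp < toTimestamp
           || (a.timestamp = toTimestamp && a.sequence <= toSequence)))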

Could not find file '...\schema.json'.

First, let me say:
great library and thanks for your hard work ๐Ÿ™ Using it with Dapper.FSharp is awesome.

I am testing the library SqlHydra.SqlServer (following the readme), and I get the following error (in AdventureWorks.fs):

namespace rec global

module SqlServerGeneratorFailure =
    let (!CompilationError) =
        "Could not find file 'C:\dev\MyTest\schema.json'."

I suspected my connection string but... If (instead of using the Nuget package) I clone the repo and replace the connection in myriad.toml with my connection, everything works.

If needed, I can paste my configs (they are pretty basic, as I am following the readme). But I guess there is something wrong with the NuGet package.

Disallow entering empty seq to `insert` `entities`

Because the insert entities operation fails if an empty collection is passed in, I think entities should be restricted to a new type that forces the user to handle this common scenario ahead of time rather than receiving an exception. (I know it's common because not handling this case just caused an error for a client.)

module AtLeastOne =
    type AtLeastOne<'T> = private { Items : 'T seq }

    let tryInit<'T> (items: 'T seq) = 
        if Seq.isEmpty items
        then None
        else Some { Items = items }

    let getItems { Items = atLeastOne } = 
        atLeastOne

module Test = 
    let doStuff (atLeastOne: AtLeastOne.AtLeastOne<int>) = 
        let items = atLeastOne |> AtLeastOne.getItems 
        printfn $"{Seq.length items} items found"

    let testWith () =
        match AtLeastOne.tryInit [ 1 ] with
        | Some items -> doStuff items
        | None -> failwith "no items"

    let testWithout () =
        match AtLeastOne.tryInit [] with
        | Some items -> doStuff items
        | None -> failwith "no items"

Joining more than once on the same table produces invalid SQL

First of all - super awesome library!

Just running into a problem relating to joining on the same table more than once in a query. For example, given the following postgresql schema:

CREATE TABLE public.user
(
    user_pk uuid NOT NULL DEFAULT gen_random_uuid(),
    username character varying NOT NULL,
    PRIMARY KEY (user_pk)
);

CREATE TABLE public.message
(
    message_pk uuid NOT NULL DEFAULT gen_random_uuid(),
    from_fk uuid NOT NULL,
    to_fk uuid NOT NULL,
    body character varying NOT NULL,
    CONSTRAINT message__from_fk__fk FOREIGN KEY (from_fk)
        REFERENCES public."user" (user_pk) MATCH SIMPLE
        ON UPDATE CASCADE,
    CONSTRAINT message__to_fk__fk FOREIGN KEY (to_fk)
        REFERENCES public."user" (user_pk) MATCH SIMPLE
        ON UPDATE CASCADE
);

And the following code:

let userTable = table<``public``.user>
let messageTable = table<``public``.message>

let! messages =
    selectTask Dto.HydraReader.Read (Shared queryCtx) {
        for m in messageTable do
            join fromUser in userTable on (m.from_fk = fromUser.user_pk)
            join toUser in userTable on (m.to_fk = toUser.user_pk)
            select (m.body, fromUser.username, toUser.username)
    }

You end up with the following SQL error: table name "user" specified more than once

This is because it gets compiled to the following SQL:

"message"."body", "user"."username", "user"."username" FROM "message" 
INNER JOIN "user" ON ("message"."from_fk" = "user"."user_pk")
INNER JOIN "user" ON ("message"."to_fk" = "user"."user_pk")

SqlHydra could solve this by adding AS clauses to disambiguate the two different usages of the table. It could either be an arbitrary alias, or it could even use the same name as the variable in the F# query (in this case fromUser and toUser).

It should be running something more like this:

"message"."body", "fromUser"."username", "toUser"."username" FROM "message" 
INNER JOIN "user" AS fromUser ON ("message"."from_fk" = "user"."user_pk")
INNER JOIN "user" AS toUser ON ("message"."to_fk" = "user"."user_pk")

Unfortunately I can't figure out a workaround, so I am having to drop to a lower level and generate a query directly.

Order of columns appears to be in a different order on re-generation

Hi Jordan,

I've had this issue fairly early on in trying out SqlHydra. When I looked into the source previously, I thought I'd seen the columns being ordered alphabetically before being generated and assumed the difference was down to my older version of the tool. However, upon generation this time I've got them in a different order, using SqlHydra 0.503.0.0 with SQL Server.

I jumped in to take a look and, looking through SqlServerSchemaProvider.fs, I can't see any ordering; have I missed it? That got me thinking about what the correct behaviour should be: is it alphabetical, or should it be the order in which the table defines the columns?

Using a where clause with leftJoin

Hi Jordan,

First, thank you for the great product!
Second, I was wondering if you have any suggestions for using a where clause after leftJoin'ing tables, like in the example below. When I try to dot into it with .Value in the where clause, I get this error: "Nested property value extraction is not supported in 'where' statements. Try manually unwrapping and passing in the value." I would appreciate any advice you might have. Thanks.
(screenshot of the example query omitted)

Conditional Where clause

I was trying to add a conditional where clause in the CE but hitting dead ends. If I was using SqlKata directly I'd be able to wrap it in an if statement.

if (condition == true) {
    query.Where("Field", "Value")
}

I tried to do similar in the CE but the compiler tells me:

A custom operation may not be used in conjunction with 'use', 'try/with', 'try/finally', 'if/then/else' or 'match' operators within this computation expression.

I started to play around with adding a new custom operation that received a type which included the condition, but I ran into compiler errors I don't understand and quickly realised how little I know about computation expressions.

Firstly, is there a way of doing this with the library as it is today? If not, did you have an idea of how this might be possible? I'm happy to give it a go if pointed in the right direction.
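
One possible workaround, untested, is to fold the condition into the predicate itself, borrowing the pattern from the tuple issue earlier in this section where a captured boolean appears directly inside the where clause (all names below are illustrative):

// `condition` is an ordinary bool captured from outside the query expression.
// When condition is false, the whole predicate is true for every row,
// so the filter is effectively disabled; when true, it filters on Field.
select {
    for row in myTable do
        where (condition = false || row.Field = "Value")
        select row.Field
}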

Types for views are not generated

We are using a Postgres database and it seems like SqlHydra doesn't generate types for views.
Is there anything extra we need to add to the configuration file? Or is it not supported by the library?

Thanks!

Filters for large databases

I'm working with a very large database (Oracle, in this case), from which I want to pull only a few tables. After setting filters in the .toml file and trying to generate the library, I'm finding that it takes far too long.

I wonder if the filtering logic could be partially applied earlier, when the GetSchema() methods are called. The GetSchema() method has an overload for specifying restrictions that might offer some performance gains here.

Here's a link to the Oracle docs, but I believe it's the same idea in each of the providers SqlHydra is using.

https://docs.oracle.com/cd/E85694_01/ODPNT/ConnectionGetSchema3.htm
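
For illustration, the ADO.NET GetSchema overload that takes restriction values looks roughly like this; the collection name and restriction positions vary by provider, so treat it as a sketch of the idea rather than SqlHydra's actual implementation:

open System.Data.Common

// Sketch: ask the provider for a restricted "Tables" collection up front
// instead of fetching everything and filtering afterwards.
// The restriction positions (owner, table name) follow the Oracle docs linked above.
let getFilteredTables (conn: DbConnection) (owner: string) =
    conn.Open()
    let restrictions = [| owner; null |]
    conn.GetSchema("Tables", restrictions)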

`SqlHydra.ProviderDbType` Attribute requires SqlHydra.Query

Issue

(Raised by @isaacabraham)

Table records generated by SqlHydra may contain SqlHydra.ProviderDbType attributes.
This attribute is declared in the SqlHydra.Query package and is added to generated types to designate database-specific parameter types for SqlHydra.Query to use when creating query parameters.

Proposed Solution

Add a new configuration option, provider_db_type_attributes, to the .toml config file under the [sqlhydra_query_integration] section to allow users to exclude the [<ProviderDbType>] attributes used by SqlHydra.Query. (This will default to true if not present.)

Ex:

[general]
connection = "Server=localhost,12019;Database=AdventureWorks;User=sa;Password=Password#123;TrustServerCertificate=True"
output = "SqlServer/AdventureWorksNet5.fs"
namespace = "SqlServer.AdventureWorksNet5"
cli_mutable = true
[sqlhydra_query_integration]
provider_db_type_attributes = false
[readers]
reader_type = "Microsoft.Data.SqlClient.SqlDataReader"
[filters]
include = [ "*" ]
exclude = [ "*/v*" ]

sqlhydra-npgsql generation tool doesn't pick up "array" columns

It appears that for a schema like the following:

create table draft_item
(
    topic     text        not null,
    id        text        not null,
    timestamp timestamptz not null,
    sequence  bigint      not null,
    payload   jsonb       not null,
    errors    text[]      not null,
    constraint pk_draft_items primary key (id, topic)
);

It simply skips over the array-typed errors column, generating the following:

    [<CLIMutable>]
    type draft_item =
        { topic: string
          id: string
          timestamp: System.DateTime
          sequence: int64
          [<SqlHydra.ProviderDbType("Jsonb")>]
          payload: string }

Generate types in separate module

It would be nice if we could generate the Readers and row types in separate modules/files.

My primary intent is to reuse the row types in a Fable client.

Unable to use strings with isNotIn |<>| operation?

Hello,

I have tried the following with both .NET 6 and .NET 7 and am receiving an error.

  selectTask HydraReader.Read (Create openContext) {
    for dl in dlTable do
      where (isNotIn dl.SystemName ignoreSystemNames)
      select (dl.DataListID, dl.Name, dl.SystemName) into selected
      mapList (//...
      )
  }

dl.SystemName is a nvarchar(120) column in the DB. ignoreSystemNames has type seq<string option>.

When I try to run the program, it fails immediately with the following exception. My question is - is isNotIn supported with strings? If so, how can I get it to work?

PS C:\projects2\graph\ConsoleApp1\bin\Debug\net6.0> .\mCaseDiagramGenerator.exe
Unhandled exception. System.TypeInitializationException: The type initializer for '<StartupCode$mCaseDiagramGenerator>.$com.znprojects.app.Program' threw an exception.
 ---> System.AggregateException: One or more errors occurred. (One or more errors occurred. (One or more errors occurred. (No mapping exists from object type Microsoft.FSharp.Core.FSharpOption`1[[System.String, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]] to a known managed provider native type.)))
 ---> System.AggregateException: One or more errors occurred. (One or more errors occurred. (No mapping exists from object type Microsoft.FSharp.Core.FSharpOption`1[[System.String, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]] to a known managed provider native type.))
 ---> System.AggregateException: One or more errors occurred. (No mapping exists from object type Microsoft.FSharp.Core.FSharpOption`1[[System.String, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]] to a known managed provider native type.)
 ---> System.ArgumentException: No mapping exists from object type Microsoft.FSharp.Core.FSharpOption`1[[System.String, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]] to a known managed provider native type.
   at Microsoft.Data.SqlClient.SqlCommand.<>c.<ExecuteDbDataReaderAsync>b__207_0(Task`1 result)
   at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
   at System.Threading.Tasks.Task.<>c.<.cctor>b__272_0(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at System.Threading.Tasks.Task`1.get_Result()
   at com.znprojects.apps.DatabaseConnection.buildModel() in C:\projects2\mCase-graph\ConsoleApp1\DatabaseConnection.fs:line 117
   at <StartupCode$mCaseDiagramGenerator>.$com.znprojects.app.Program..cctor() in C:\projects2\mCase-graph\ConsoleApp1\Program.fs:line 9
   --- End of inner exception stack trace ---
   at com.znprojects.app.Program.main(String[] argv) in C:\projects2\mCase-graph\ConsoleApp1\Program.fs:line 193
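
For what it's worth, the exception suggests the string option values themselves cannot be mapped to SQL parameters. Assuming SystemName is generated as a plain (non-nullable) string, one possible workaround is to unwrap the options before passing them in; a sketch:

// Unwrap the option values so only plain strings are sent as parameters.
// (Assumes dl.SystemName is generated as string, not string option.)
let ignoreNames : string list =
    ignoreSystemNames |> Seq.choose id |> List.ofSeq

selectTask HydraReader.Read (Create openContext) {
    for dl in dlTable do
        where (isNotIn dl.SystemName ignoreNames)
        select (dl.DataListID, dl.Name, dl.SystemName)
}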

Cutting off DateTime precision?

Hi, I haven't been able to verify whether this is a SqlHydra issue or a SqlKata issue, but some of my DateTimes are getting truncated to the second in my queries.
I've created a query expression to log a select query when executing it:

let private stringifyQuery (query:SqlKata.Query) =
    let compiler = SqlKata.Compilers.SqlServerCompiler()
    let sql = compiler.Compile(query)
    let bindings = 
      String.concat "; " 
        [ 
          for KeyValue (k, v) in sql.NamedBindings -> 
            match v with 
            | :? DateTime as d -> sprintf "(%A: %s)" k (d.ToString("yyyy-MM-dd hh:mm:ss.fffffff"))
            | _ -> sprintf "(%A: %A)" k v
        ]
    sprintf "'%s' with parameters '%s'" sql.Sql bindings

type LoggingSelectBuilder<'a, 'b, 'c when 'c :> Common.DbDataReader> (reader : ('c -> unit -> 'a), ct:ContextType) = 
  inherit SelectAsyncBuilder<'a, 'b, 'c>(reader, ct)

  member this.Run(state: QuerySource<'a, SqlKata.Query>) =
    printfn "%s" <| stringifyQuery state.Query
    base.Run(state)
    
let loggingSelect reader connStr = 
  LoggingSelectBuilder(reader, (ContextType.Create (openContext connStr)))

I have a query like:

  let! myCount = 
    loggingSelect MyDB.HydraReader.Read connStr {
      for record in table<MyDB.dbo.MyTable> do
      where (record.entry_timestamp > ts)
      select (minBy record.entry_timestamp)
    } 
    |>> Array.ofSeq
    |>> Array.head

  printfn "results in: %s" (myCount.ToString("yyyy-MM-dd hh:mm:ss.fffffff"))

which outputs:

'SELECT MIN([MyTable].[entry_timestamp]) FROM [MyTable] WHERE ([MyTable].[entry_timestamp] > @p0)' with parameters '("@p0": 2022-07-15 09:40:42.6078010)'

results in: 2022-07-15 09:40:42.6077917

It seems like I have to add a full second to the timestamp in the query to get the results I want.
Do you think this is a SqlKata issue or a SqlHydra issue? Do you have any recommendations for how to debug this? A SqlKata developer claims in this issue that SqlKata sends raw DateTime parameters to the DB server unaltered, which seems correct as far as I can tell from the SqlKata source. But I also can't see anywhere in the SqlHydra source where the truncation would be happening.

User Defined Functions / Table Valued Functions

Hello,

I was wondering if you would be open to the idea of adding and/or receiving a PR for User Defined Functions and Table Valued Functions? I use MS SQL Server and would very much like to use SqlHydra in conjunction with UDFs/TVFs. I might know someone who can help with the PR if that's the route you decide to go.

Thanks so much.

Fails to insert jsonb values in Postgres

We have a table with a field of type jsonb, which becomes string in the generated type, but inserting a string containing JSON doesn't seem to work; the exception I get is:

Npgsql.PostgresException (0x80004005): 42804: column "test" is of type jsonb but expression is of type text
Exception data:
    Severity: ERROR
    SqlState: 42804
    MessageText: column "test" is of type jsonb but expression is of type text
    Hint: You will need to rewrite or cast the expression.
    Position: 97
    File: parse_target.c
    Line: 588
    Routine: transformAssignedExpr

when trying to insert """{"publisher": "test"}""".
Would you be able to help with this?

`deleteAll` failing with SqlHydra.Query 1.1.0

Using the following code with SqlHydra.Query 1.1.0 result in an error at runtime:

module Tables =

    let encounterForms = table<dbo.EncounterForms>

    let! _ =
        deleteAsync (Shared ctx) {
            for _ in Tables.encounterForms do
                deleteAll
        }
System.NotImplementedException: The method or operation is not implemented.
   at SqlHydra.Query.QuotationVisitor.visit@10(FSharpExpr expr)
   at SqlHydra.Query.QuotationVisitor.visitFor[T](FSharpExpr`1 f)
   at SqlHydra.Query.DeleteBuilders.DeleteBuilder`1.For[T](QuerySource`1 state, FSharpExpr`1 forExpr)
   at [email protected](QueryContext _arg1) in /Users/mmangel/Workspaces/Github/Antidote-AI/antidote-apps/Server/Antidote.Server.Tests/Database.fs:line 73
   at Microsoft.FSharp.Control.AsyncPrimitives.CallThenInvoke[T,TResult](AsyncActivation`1 ctxt, TResult result1, FSharpFunc`2 part2) in D:\a\_work\1\s\src\FSharp.Core\async.fs:line 510
   at Microsoft.FSharp.Control.Trampoline.Execute(FSharpFunc`2 firstAction) in D:\a\_work\1\s\src\FSharp.Core\async.fs:line 112

Downgrading to SqlHydra 1.0.5 with the following code:

module Tables =

	let encounterForms = table<dbo.EncounterForms> |> inSchema (nameof dbo)

    let! _ =
        deleteAsync (Shared ctx) {
            for _ in Tables.encounterForms do
                deleteAll
        }

works.

Connection disposed when disposing QueryContext

QueryContext disposes the database connection here when it is disposed, and it also takes care of transactions. Do we really want to dispose the connection once the context (and its transaction) is disposed? Our understanding of QueryContext is that it is a lightweight object created for every db operation, with a transaction where necessary, but it would be quite expensive to create a new db connection for each query context. Does it make sense to either remove the connection disposal, or maybe even require the connection to be passed to each method instead? That way it is the consumer's responsibility to make sure the connection is properly disposed.

Support for insert many?

I can't seem to find a way to insert multiple entities in one query; it does seem to be supported by SqlKata.
Is there a way to achieve this with the insert computation expression?
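
For reference, the readme note quoted later in this document covers exactly this case via the entities operation together with AtLeastOne. A minimal sketch based on that note (Tables.procedures, rowsToInsert, and openContext are illustrative names):

// Insert several rows in one statement using `entities` + AtLeastOne.
match AtLeastOne.tryCreate rowsToInsert with
| Some rows ->
    insertTask (Create openContext) {
        for p in Tables.procedures do
            entities rows
    }
    |> ignore
| None ->
    () // nothing to insert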

Should CLI tools be consolidated into one?

Rather than having separate packages:

  • SqlHydra.SqlServer: dotnet sqlhydra-mssql
  • SqlHydra.Npgsql: dotnet sqlhydra-npgsql
  • SqlHydra.Oracle: dotnet sqlhydra-oracle
  • SqlHydra.Sqlite: dotnet sqlhydra-sqlite

It seems like it would be more convenient to just have one package to rule them all:

  • SqlHydra.Cli:
    • dotnet sqlhydra mssql
    • dotnet sqlhydra npgsql
    • dotnet sqlhydra oracle
    • dotnet sqlhydra sqlite

Since it's a dotnet tool that isn't included directly as a sln project, it shouldn't really matter if it references multiple db provider packages. It would also reduce the amount of project maintenance in terms of conditional dependencies to tweak during .NET version upgrades, versions to set when deploying, duplicate info on the readme page, etc.

The downside would be fairly minimal (swapping to the new tool and removing the '-' from the command name).

Am I overlooking any big issues with this approach?

Working with arbitrary queries?

Is there a way to use SqlHydra for arbitrary queries? For example, if you have a folder of .sql queries, can it generate a type and/or a strongly typed data reader for each query file it finds?

Autogenerated code infers wrong type

I think I found a regression between versions 1.0.4 and 1.2.0. In version 1.0.4, the type <table>Reader class was defined right below the type <table> record. In version 1.2.0, the table records are all defined at the top and the reader classes are all defined below that. If two tables happen to have the same column names, the type inference in the <table>Reader class will be wrong in the .Read() method.

E.g., I have one table defined as { id : byte; name : string }, and another table defined as { id : int16; name : string }. After running dotnet sqlhydra-mssql, the generated code won't compile because the Reader class for one of the tables is inferred to have the return type of the other table (and I get a compile error about type int16 not matching type byte). I think this could be resolved by either moving the Reader classes back to right below the table record they match, or by just annotating the return type of the Reader.Read() method.

Question about multiple inserts, possibility to not enforce the "match"?

On the readme there is this note:

To insert multiple entities in one query, use the entities operation in conjunction with the AtLeastOne type to ensure that at least one item exists in the collection. (The AtLeastOne forces you to handle the case where an empty collection is passed to entities which would throw a runtime exception.)

I am perhaps naive but would it not be possible for SqlHydra to check if the collection is empty or not?

If the collection is empty it could resolve directly without hitting the database.

If we consider the workflow HTTP Request -> Validate -> Do stuff with data (insert into DB) -> HTTP Response,

I think that if the list of elements to insert must not be empty, then the user should handle that in their own code during the validation phase.

This would avoid forcing the user to write a match for something they have already handled.

let private myQuery (request : Request.MyQuery) : Async<Response.MyQuery> =                

    task {
        match Request.MyQuery.validate request with
        | Ok requestValues ->
            let proceduresToInsert =
                requestValues.Procedures
                |> List.map (fun procedure -> {
                    dbo.Procedures.Id = -1
                    dbo.Procedures.Procedure = procedure
                }
                )
                |> AtLeastOne.tryCreate

            match proceduresToInsert with
            | Some proceduresToInsert ->
                do!
                    insertTask (Create DbContext.create) {
                        for c in Tables.procedures do
                            entities proceduresToInsert
                            excludeColumn c.Id
                    }
                    :> Task

            | None -> () // Redundant because already handled in the validation phase

        | Error errors ->
            return Response.MyQuery.InvalidRequest errors
    }
    |> Async.AwaitTask

would become

let private myQuery (request : Request.MyQuery) : Async<Response.MyQuery> =                

    task {
        match Request.MyQuery.validate request with
        | Ok requestValues ->
            let proceduresToInsert =
                requestValues.Procedures
                |> List.map (fun procedure -> {
                    dbo.Procedures.Id = -1
                    dbo.Procedures.Procedure = procedure
                }
                )

            do!
                insertTask (Create DbContext.create) {
                    for c in Tables.procedures do
                        entities proceduresToInsert
                        excludeColumn c.Id
                }
                :> Task

        | Error errors ->
            return Response.MyQuery.InvalidRequest errors
    }
    |> Async.AwaitTask

Support for cancellation token in db operations

It seems like there is no support for passing a cancellation token to async operations against QueryContext.
Do the native drivers support this? Would it be difficult to implement? What do you think?
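
For context, the underlying ADO.NET drivers do expose cancellation on their async APIs, so the question is mostly about plumbing the token through SqlHydra.Query. A plain ADO.NET sketch (not SqlHydra.Query code):

open System.Data.Common
open System.Threading

// Plain ADO.NET: the async command/reader APIs accept a CancellationToken.
let readFirstValue (conn: DbConnection) (sql: string) (token: CancellationToken) =
    task {
        use cmd = conn.CreateCommand()
        cmd.CommandText <- sql
        use! reader = cmd.ExecuteReaderAsync(token)
        let! hasRow = reader.ReadAsync(token)
        return if hasRow then Some (reader.GetValue 0) else None
    }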
