
Exqlite

An Elixir SQLite3 library.

If you are looking for the Ecto adapter, take a look at the Ecto SQLite3 library.

Documentation: https://hexdocs.pm/exqlite
Package: https://hex.pm/packages/exqlite

Caveats

  • Prepared statements are not cached.
  • Prepared statements are not immutable. Be careful when manipulating statements and binding values to them. Do not manipulate a statement from multiple processes concurrently; keep each statement isolated to a single process.
  • Simultaneous writing is not supported by SQLite3 and will not be supported here.
  • All native calls are run through the Dirty NIF scheduler.
  • Datetimes are stored without offsets. This is due to how SQLite3 handles dates and times. If you would like to store a timezone, you will need to create a second column that stores the timezone name, and shift the value when you read it from the database. This is more reliable than storing an offset such as +03:00, because an offset does not respect daylight saving time.
  • When storing BLOB values, you have to use {:blob, the_binary}, otherwise it will be interpreted as a string.
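For example, a minimal sketch of inserting binary data (the table and value are illustrative, not from the library's docs):

```elixir
{:ok, conn} = Exqlite.Sqlite3.open(":memory:")
:ok = Exqlite.Sqlite3.execute(conn, "create table files (data blob)")
{:ok, statement} = Exqlite.Sqlite3.prepare(conn, "insert into files (data) values (?1)")

# Wrapped in {:blob, ...}, the value is stored as a BLOB;
# passed as a bare binary, it would be stored as TEXT.
:ok = Exqlite.Sqlite3.bind(conn, statement, [{:blob, <<0x61, 0x62, 0x00>>}])
:done = Exqlite.Sqlite3.step(conn, statement)
```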

Installation

defp deps do
  [
    {:exqlite, "~> 0.21"}
  ]
end

Configuration

Runtime Configuration

config :exqlite, default_chunk_size: 100
  • default_chunk_size - The chunk size used when multi-stepping if no chunk size is specified explicitly.
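For instance, a chunk size can also be passed per call when fetching rows in bulk (a sketch; this assumes Exqlite.Sqlite3.multi_step/3 accepts the chunk size as its last argument):

```elixir
{:ok, conn} = Exqlite.Sqlite3.open(":memory:")
:ok = Exqlite.Sqlite3.execute(conn, "create table nums (n integer)")
{:ok, statement} = Exqlite.Sqlite3.prepare(conn, "select n from nums")

# Fetch up to 50 rows per call, overriding default_chunk_size for this query.
{:done, _rows} = Exqlite.Sqlite3.multi_step(conn, statement, 50)
```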

Compile-time Configuration

In config/config.exs,

config :exqlite, force_build: false
  • force_build - Set to true to opt out of using precompiled artefacts. This option only affects the default configuration; with any advanced configuration, the library is always compiled from source.

Advanced Configuration

Defining Extra Compile Flags

You can enable certain features by doing the following:

export EXQLITE_SYSTEM_CFLAGS=-DSQLITE_ENABLE_DBSTAT_VTAB=1

Or you can pass extra environment variables using the Elixir config:

config :exqlite,
  force_build: true,
  make_env: %{
    "EXQLITE_SYSTEM_CFLAGS" => "-DSQLITE_ENABLE_DBSTAT_VTAB=1",
    "V" => "1"
  }

Listing Flags Used For Compilation

If you export V=1, the flags used for compilation will be printed to stdout.

Using System Installed Libraries

This will vary depending on the operating system.

# tell exqlite that we wish to use some other sqlite installation. this will prevent sqlite3.c and friends from compiling
export EXQLITE_USE_SYSTEM=1

# Tell exqlite where to find the `sqlite3.h` file
export EXQLITE_SYSTEM_CFLAGS=-I/usr/include

# tell exqlite which sqlite implementation to use
export EXQLITE_SYSTEM_LDFLAGS=-L/lib -lsqlite3

After exporting those variables you can then invoke mix deps.compile. Note that if you change and re-export those values, you will need to recompile the exqlite dependency to pick up the changes.
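A typical sequence, assuming the environment variables above are already exported in the current shell:

```shell
# Clean the previously built artifacts, then rebuild so the new
# environment variables take effect.
mix deps.clean exqlite
mix deps.compile exqlite
```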

Database Encryption

As of version 0.9, exqlite supports loading database engines at runtime rather than compiling sqlite3.c itself. This can be used to support database level encryption via alternate engines such as SQLCipher or the Official SEE extension. Once you have either of those projects installed on your system, use the following environment variables during compilation:

# tell exqlite that we wish to use some other sqlite installation. this will prevent sqlite3.c and friends from compiling
export EXQLITE_USE_SYSTEM=1

# Tell exqlite where to find the `sqlite3.h` file
export EXQLITE_SYSTEM_CFLAGS=-I/usr/local/include/sqlcipher

# tell exqlite which sqlite implementation to use
export EXQLITE_SYSTEM_LDFLAGS=-L/usr/local/lib -lsqlcipher

Once you have exqlite configured, you can use the :key option in the database config to enable encryption:

config :exqlite, key: "super-secret"

Usage

The Exqlite.Sqlite3 module is fairly straightforward to use.

# We'll just keep it in memory right now
{:ok, conn} = Exqlite.Sqlite3.open(":memory:")

# Create the table
:ok = Exqlite.Sqlite3.execute(conn, "create table test (id integer primary key, stuff text)")

# Prepare a statement
{:ok, statement} = Exqlite.Sqlite3.prepare(conn, "insert into test (stuff) values (?1)")
:ok = Exqlite.Sqlite3.bind(conn, statement, ["Hello world"])

# Step is used to run statements
:done = Exqlite.Sqlite3.step(conn, statement)

# Prepare a select statement
{:ok, statement} = Exqlite.Sqlite3.prepare(conn, "select id, stuff from test")

# Get the results
{:row, [1, "Hello world"]} = Exqlite.Sqlite3.step(conn, statement)

# No more results
:done = Exqlite.Sqlite3.step(conn, statement)

# Release the statement.
#
# It is recommended that you release the statement after use to reclaim the
# memory as soon as possible, instead of waiting for the garbage collector to
# eventually release it.
#
# Under high load, issuing thousands of statements without releasing them can
# exhaust memory or put significant pressure on it.
:ok = Exqlite.Sqlite3.release(conn, statement)
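If you would rather read all remaining rows at once instead of stepping manually, something like the following should work (a sketch; the arity and return shape of Exqlite.Sqlite3.fetch_all have varied across versions):

```elixir
# Re-prepare the select and fetch every remaining row in one call.
{:ok, statement} = Exqlite.Sqlite3.prepare(conn, "select id, stuff from test")
{:ok, rows} = Exqlite.Sqlite3.fetch_all(conn, statement)

# Release the statement as above once done.
:ok = Exqlite.Sqlite3.release(conn, statement)
```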

Using SQLite3 native extensions

Exqlite supports loading run-time loadable SQLite3 extensions. A selection of precompiled extensions for popular CPU types / architectures is available by installing the ExSqlean package. This package wraps SQLean: all the missing SQLite functions.

alias Exqlite.Basic
{:ok, conn} = Basic.open("db.sqlite3")
:ok = Basic.enable_load_extension(conn)

# load the regexp extension - https://github.com/nalgeon/sqlean/blob/main/docs/re.md
Basic.load_extension(conn, ExSqlean.path_for("re"))

# run some queries to test the new `regexp_like` function
{:ok, [[1]], ["value"]} = Basic.exec(conn, "select regexp_like('the year is 2021', ?) as value", ["2021"]) |> Basic.rows()
{:ok, [[0]], ["value"]} = Basic.exec(conn, "select regexp_like('the year is 2021', ?) as value", ["2020"]) |> Basic.rows()

# prevent loading further extensions
:ok = Basic.disable_load_extension(conn)
{:error, %Exqlite.Error{message: "not authorized"}, _} = Basic.load_extension(conn, ExSqlean.path_for("re"))

# close connection
Basic.close(conn)

It is also possible to load extensions using the Connection configuration. For example:

arch_dir =
  System.cmd("uname", ["-sm"])
  |> elem(0)
  |> String.trim()
  |> String.replace(" ", "-")
  |> String.downcase() # => "darwin-arm64"

config :myapp, arch_dir: arch_dir

# global
config :exqlite, load_extensions: [ "./priv/sqlite/#{arch_dir}/rotate" ]

# per connection in a Phoenix app
config :myapp, Myapp.Repo,
  database: "path/to/db",
  load_extensions: [
    "./priv/sqlite/\#{arch_dir}/vector0",
    "./priv/sqlite/\#{arch_dir}/vss0"
  ]

See Exqlite.Connection.connect/1 for more information. When using extensions for SQLite3, they must be compiled for the environment you are targeting.

Why SQLite3

I needed an Ecto3 adapter to store time series data for a personal project. I didn't want to go through the hassle of setting up a Postgres or MySQL database when I just wanted to explore data ingestion and some map-reduce problems.

I also noticed that other SQLite3 implementations didn't really fit my needs. At some point I also wanted to use this in a Nerves project on an embedded device that would be resilient to power outages while still maintaining state that ETS cannot provide.

Under The Hood

We use the Dirty NIF scheduler to execute the SQLite calls. The rationale is that maintaining a command pool for each SQLite connection is complicated and error prone.

Compiling NIF for Windows

When compiling on Windows, you will need the Build Tools or an equivalent toolchain. Please make sure you have the correct environment variables set, including the paths to the compiler and linker, and an architecture that matches erl.exe (likely x64).

You may also need to invoke vcvarsall.bat amd64 before running mix.

A guide is available at guides/windows.md

Contributing

Feel free to check the project out and submit pull requests.

exqlite's People

Contributors

alanj853, andyl, cocoa-xu, colacheng, connorrigby, cw789, dmitriid, dominicletz, dvic, eiji7, elepedus, greg-rychlewski, gregors, jeregrine, jjcarstens, kevinlang, kianmeng, liamdiprose, lostkobrakai, mbuhot, mellkior, mhanberg, mikestok, mindreframer, mwhitworth, reachfh, ruslandoga, sebastianszturo, voltagex, warmwaffles

exqlite's Issues

Better handling of nulls

Given:

defmodule CMS.Page do
  use Ecto.Schema
  import Ecto.Changeset

  schema "pages" do
    field :rendered, :string
    field :slug, :string
    field :title, :string
    field :published, :boolean, default: false
    field :published_at, :naive_datetime

    timestamps()
  end

  @doc false
  def changeset(page, attrs) do
    page
    |> cast(attrs, [:title, :slug, :rendered])
    |> validate_required([:title, :slug, :rendered])
    |> unique_constraint(:slug)
  end
end

And the following data in the db:

INSERT INTO "pages" ("id", "title", "slug", "rendered", "inserted_at", "updated_at", "published", "published_at") VALUES
('1', 'Hello, world', 'hello-world', '<p>You wat <em>mate</em></p>', '2020-10-10 10:10:10', '2020-10-10 10:10:10', '0', NULL);

The following will fail:

> Repo.all(Pages)

no function clause matching in Calendar.ISO.parse_naive_datetime/1

Called with 1 arguments
    :undefined

Attempted function clauses (showing 2 out of 2)
   def parse_naive_datetime(<<"-", string::binary()>>) when is_binary(string)
   def parse_naive_datetime(string) when is_binary(string)

This is called from exqlite.ex:186:

  defp naive_datetime_decode(val) do
    case NaiveDateTime.from_iso8601(val) do
      {:ok, dt} -> {:ok, dt}
      _ -> :error
    end
  end

Fix on_conflict handling

We currently have two excluded tags:

:with_conflict_target,
:without_conflict_target,

These tags are excluded because the underlying tests fail. SQLite supports fairly robust conflict handling, so we should be able to make these tests pass. See the upsert-clause documentation at https://sqlite.org/lang_insert.html

Fix (composite) foreign key constraint

For the :composite_foreign_key tag, we fail due to the child table FK to the parent table being generated like so:

CREATE TABLE composite_child (id INTEGER PRIMARY KEY, parent_key_id INTEGER, parent_id INTEGER CONSTRAINT composite_child_parent_id_fkey REFERENCES composite_parent(id));

This completely drops the other key.

It should instead be something like:

CREATE TABLE composite_child2 (id INTEGER PRIMARY KEY, parent_key_id INTEGER, parent_id INTEGER, FOREIGN KEY(parent_key_id, parent_id) REFERENCES composite_parent(key_id, id));

Thanks for this!

Hello, current maintainer of sqlite_ecto2 here.

I just wanted to express my gratitude for you putting time into this. This is something the community will really benefit from.

Add Windows CI

Once #26 is fixed, we will want to add a Windows CI to ensure that it stays building properly.

A reasonably weird table schema

I haven't had time to convert it to Elixir, and it's more code creating stuff than the actual queries, but it's all I have so far.

This is based on a real database on on real db data, that's why some fields might not make sense (like "timestamps" that are varchars. They are references to something else in real DB)

CREATE TABLE `entity` (
      `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      `parts_ref` varchar(255) DEFAULT NULL,
      `locale` varchar(255) DEFAULT NULL,
      `name` varchar(255) DEFAULT NULL,
      `description` varchar(255) DEFAULT NULL,
      `original_name` varchar(255) DEFAULT NULL,
      `valid` bit(1) NOT NULL DEFAULT b'0',
      PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1171083 DEFAULT CHARSET=utf8mb4;

CREATE TABLE `entity_type` (
     `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
     `identifier` varchar(13) NOT NULL,
     `short_key` varchar(255) DEFAULT NULL,
     `format_key` varchar(255) DEFAULT NULL,
     `version` varchar(255) DEFAULT NULL,
     `ingest_timestamp_ref` varchar(255) DEFAULT NULL,
     `provisional_timestamp_ref` varchar(255) DEFAULT NULL,
     `provider_id` bigint(20) unsigned DEFAULT NULL,
     `entity_section_id` bigint(20) unsigned DEFAULT NULL,
     `valid` bit(1) NOT NULL DEFAULT b'0',
     `provider_external_id` bigint(20) unsigned DEFAULT NULL,
     `selector` varchar(255) DEFAULT NULL,
     `source` varchar(255) DEFAULT NULL,
     `uri` varchar(2048) DEFAULT NULL,
     PRIMARY KEY (`id`),
     UNIQUE KEY `UNIQUE_entity_type_identifier` (`identifier`),
     KEY `FK_entity_type_entity_section_id` (`entity_section_id`),
     KEY `FK_entity_type_provider_id` (`provider_id`),
     KEY `FK_entity_type_provider_external_id` (`provider_external_id`),
     CONSTRAINT `FK_entity_type_entity_section_id` FOREIGN KEY (`entity_section_id`) REFERENCES `entity_section` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
     CONSTRAINT `FK_entity_type_provider_external_id` FOREIGN KEY (`provider_external_id`) REFERENCES `provider` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
     CONSTRAINT `FK_entity_type_provider_id` FOREIGN KEY (`provider_id`) REFERENCES `provider` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1170174 DEFAULT CHARSET=utf8mb4;

CREATE TABLE `entity_section` (
       `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
       `entity_id` bigint(20) unsigned DEFAULT NULL,
       `name` varchar(255) DEFAULT NULL,
       `description` varchar(255) DEFAULT NULL,
       `sequence` varchar(255) DEFAULT NULL,
       `tags` varchar(255) DEFAULT NULL,
       `tags_external` varchar(255) DEFAULT NULL,
       `valid` bit(1) NOT NULL DEFAULT b'0',
       `category` varchar(255) DEFAULT NULL,
       PRIMARY KEY (`id`),
       KEY `FK_entity_section_entity_id` (`entity_id`),
       CONSTRAINT `FK_entity_section_entity_id` FOREIGN KEY (`entity_id`) REFERENCES `entity` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1171004 DEFAULT CHARSET=utf8mb4;

CREATE TABLE `owner` (
       `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
       `name` varchar(255) DEFAULT NULL,
       `description` varchar(255) DEFAULT NULL,
       `slug` varchar(255) DEFAULT NULL,
       `valid` bit(1) NOT NULL DEFAULT b'0',
       PRIMARY KEY (`id`),
       UNIQUE KEY `UNIQUE_owner_slug` (`slug`)
) ENGINE=InnoDB AUTO_INCREMENT=423497 DEFAULT CHARSET=utf8mb4;

CREATE TABLE `entity_type_owner` (
     `entity_type_id` bigint(20) unsigned NOT NULL,
     `owner_id` bigint(20) unsigned NOT NULL,
     `role` varchar(255) NOT NULL,
     PRIMARY KEY (`entity_type_id`,`owner_id`,`role`),
     KEY `FK_entity_type_owner_owner_id` (`owner_id`),
     CONSTRAINT `FK_entity_type_owner_entity_type_id` FOREIGN KEY (`entity_type_id`) REFERENCES `entity_type` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
     CONSTRAINT `FK_entity_type_owner_owner_id` FOREIGN KEY (`owner_id`) REFERENCES `owner` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `provider` (
     `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
     `name` varchar(255) DEFAULT NULL,
     `slug` varchar(255) DEFAULT NULL,
     `valid` bit(1) NOT NULL DEFAULT b'0',
     PRIMARY KEY (`id`),
     UNIQUE KEY `UNIQUE_provider_slug` (`slug`)
) ENGINE=InnoDB AUTO_INCREMENT=23289 DEFAULT CHARSET=utf8mb4;



INSERT INTO `entity` (`id`, `parts_ref`, `locale`, `name`, `description`, `original_name`, `valid`) VALUES
('1000', NULL, '5932912314679296', '6594827038752768', '5305814070853632', '4509837178175488', b'1');

INSERT INTO `entity_section` (`id`, `entity_id`, `name`, `description`, `sequence`, `tags`, `tags_external`, `valid`, `category`) VALUES
('1000', '1000', NULL, NULL, NULL, '5616609716600832', '5807113662103552', b'1', NULL);

INSERT INTO `entity_type` (`id`, `identifier`, `short_key`, `format_key`, `version`, `ingest_timestamp_ref`, `provisional_timestamp_ref`, `provider_id`, `entity_section_id`, `valid`, `provider_external_id`, `selector`, `source`, `uri`) VALUES
('1000', '112233', '5453216544194560', '6427234059419648', '5132703804424192', '6016166497615872', '4904996265000960', '2159', '1000', b'1', NULL, NULL, '5089374199349248', NULL);

INSERT INTO `provider` (`id`, `name`, `slug`, `valid`) VALUES
('2159', '5995019538268160', 'someone-or-other', b'1');

INSERT INTO `entity_type_owner` (`entity_type_id`, `owner_id`, `role`) VALUES
('1000', '798', 'person'),
('1000', '799', 'company');

INSERT INTO `owner` (`id`, `name`, `description`, `slug`, `valid`) VALUES
('798', '5237804690833408', NULL, 'some-person', b'1'),
('799', '4809767713243136', NULL, 'some-company', b'1');

--
-- Given :entityId = 1000
--

SELECT
    cf.id AS Id,
    cf.valid AS Valid,
    cf.entity_section_id AS EntitySectionId,
    cf.identifier AS identifier,
    cf.short_key AS Short,
    cf.source AS Source,
    cf.format_key AS FormatIdentifier,
    cf.version AS Version,
    cf.ingest_timestamp_ref AS IngestDate,
    cf.provisional_timestamp_ref AS ProvisionalDate,
    (SELECT id FROM provider p WHERE p.id = cf.provider_id AND p.valid = 1) AS ProviderId,
    (SELECT id FROM provider p WHERE p.id = cf.provider_external_id AND p.valid = 1) AS ProviderExternalId,
    cf.selector AS ProviderSelector,
    cf.uri AS Link
FROM entity_type cf
WHERE cf.id IN (SELECT id FROM entity_type WHERE entity_section_id IN (SELECT id FROM entity_section WHERE entity_id = :entityId))
  AND cf.valid = 1;

-- should return
-- | Id   | Valid |  EntitySectionId |    identifier | Short            | Source           | FormatIdentifier | Version          | IngestDate       | ProvisionalDate  | ProviderId  | ExternalId| ProviderSelector  | Link |
-- |------|-------|------------------|---------------|------------------|------------------|------------------|------------------|------------------|------------------|-------------|-----------|-------------------|------|
-- | 1000 |     1 |             1000 |        112233 | 5453216544194560 | 5089374199349248 | 6427234059419648 | 5132703804424192 | 6016166497615872 | 4904996265000960 |        2159 |           |                   |      |

SELECT
    c.id AS Id,
    c.valid AS Valid,
    cfc.role AS Role,
    cfc.entity_type_id AS EntityTypeId
FROM entity_type_owner cfc
         JOIN owner c ON c.id = cfc.owner_id
WHERE cfc.entity_type_id IN (SELECT id FROM entity_type WHERE entity_section_id IN (SELECT id FROM entity_section WHERE entity_id = :entityId));

-- should return
-- | Id  | Valid | Role     |       EntityTypeId |
-- |-----|-------|----------|--------------------|
-- | 798 |     1 | person   |               1000 |
-- | 799 |     1 | company  |               1000 |

Query cache should be LRU based

Right now the query cache Exqlite.Queries currently just grows infinitely. I think the solution we should look at is have this be a configurable value something like query_cache_size: 100 to set a max value, and setting to 0 would disable query caching.

Busy ecto sandbox

@kevinlang suggested I open an issue here in regards to the ecto sandbox testing.

I just tried my example from a few days ago with 0.3.6 and I still get busy errors using the following intentionally crafted example:

defmodule Test1Test do
  use PhxSqlite.DataCase, async: true

  test "abc" do
    import Ecto.Query
    query = from m in "mytable", select: %{id: m.id, name: m.name}
    IO.inspect("Start 1")
    now = NaiveDateTime.utc_now()
    Repo.insert_all("mytable", [%{name: "test", inserted_at: now, updated_at: now}])
    Repo.all(query) |> IO.inspect(label: "1")
    Process.sleep(10000)
    Repo.all(query) |> IO.inspect(label: "1")
  end
end

defmodule Test2Test do
  use PhxSqlite.DataCase, async: true

  test "abc" do
    import Ecto.Query
    IO.inspect("Start 2")
    Process.sleep(4000)
    query = from m in "mytable", select: %{id: m.id, name: m.name}
    Repo.all(query) |> IO.inspect(label: "2")

    now = NaiveDateTime.utc_now()
    Repo.insert_all("mytable", [%{name: "test 2", inserted_at: now, updated_at: now}])
  end
end
"Start 1"
1: [%{id: 1, name: "test"}]
"Start 2"
2: []


  1) test abc (Test2Test)
     test/phx_sqlite_web/test_1_test.exs:19
     ** (Exqlite.Error) Database busy
     code: Repo.insert_all("mytable", [%{name: "test 2", inserted_at: now, updated_at: now}])
     stacktrace:
       (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:751: Ecto.Adapters.SQL.raise_sql_call_error/1
       (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:660: Ecto.Adapters.SQL.insert_all/8
       (ecto 3.5.8) lib/ecto/repo/schema.ex:54: Ecto.Repo.Schema.do_insert_all/6
       test/phx_sqlite_web/test_1_test.exs:27: (test)

1: [%{id: 1, name: "test"}]
.

Finished in 10.0 seconds
2 tests, 1 failure

It seems the reading does actually work, but writing is the issue here.

Awesome!

Was just looking into reviving my ill-fated earlier attempt at this, xqlite3, which got nowhere due to the busy errors you mentioned in the README (and also because it was my first foray into Elixir programming, probably not the best thing to start with).

You may want to post about it on the Elixir Forum as things get somewhat stable, as a decent SQLite3 ecto3 driver has been wanted for some time. Here is a recent thread, for example:
https://elixirforum.com/t/help-with-debugging-sqlites-busy-error-with-ecto3/33613/9

(edit: just realized one of the contributors was the one who started that thread)

It would be awesome to one day see the ecto driver upstreamed into the ecto repo itself once things get stable, as the other drivers have been (postgrex, myxql, etc).

Cheers,

Upstreaming the ecto-related parts

I was going to start a new conversation in the elixir-ecto Google group to see what the path forward is for upstreaming most of the ecto related stuff into ecto_sql, now that nearly all of the integration tests are onboarded and stable.

In that scenario, this exqlite repo would mainly contain the core non-ecto driver parts, like the db_connection stuff and the NIF.

Does that sound good, @warmwaffles ? Any reservations or thoughts?

Add telemetry support

Since I am unsure of how effective cached prepared statements are versus not caching them at all, we should probably add some telemetry support.

Description on hex.pm should include "adapter"

Firstly, thanks for open-sourcing this library! SQLite is a great database, especially for smaller applications, and it seems crazy to not have an ecto3 adapter for it in the standard Ecto library.

In order to improve discoverability of this library for the community, I would like to suggest that the description on hex.pm be changed from:

An Sqlite3 Elixir library.

to:

An SQLite3 Elixir library with adapter for Ecto3

json_extract_path integration tests fail

If you uncomment Code.require_file "#{ecto}/integration_test/cases/type.exs", __DIR__ and run

EXQLITE_INTEGRATION=true mix test --only json_extract_path --seed 0

You'll see the errors.

Markdown formatting of CHANGELOG

As the CHANGELOG is written in Markdown syntax (keep a changelog),
we should probably rename it to CHANGELOG.md.
It will make the readability much better within GitHub.

Additionally we then could add it to the docs.

insert_all not returning number of inserted rows

Test w/ repo:

    test "insert_all" do
      timestamp = NaiveDateTime.utc_now() |> NaiveDateTime.truncate(:second)
      account = %{
        name: "John",
        inserted_at: timestamp,
        updated_at: timestamp,
      }
      {1, _} = TestRepo.insert_all(Account, [account], [])
    end

This will fail because the right-hand side is {0, []}.

Verify this compiles on ARM systems

Currently I have no idea if this will compile and play nice on ARM. In theory it should because nothing is dependent on x86 / x64 specific things.

When connection times out a SIGSEGV is encountered

For a large table of 66,000 records

Company
|> select([c], c)
|> Repo.all()

Encounter

14:25:34.215 [error] Exqlite.Connection (#PID<0.446.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.536.0> timed out because it queued and checked out the connection for longer than 15000ms

#PID<0.536.0> was at location:

    (exqlite 0.5.1) lib/exqlite/sqlite3.ex:107: Exqlite.Sqlite3.fetch_all/3
    (exqlite 0.5.1) lib/exqlite/connection.ex:486: Exqlite.Connection.get_rows/2
    (exqlite 0.5.1) lib/exqlite/connection.ex:439: Exqlite.Connection.execute/4
    (db_connection 2.3.1) lib/db_connection/holder.ex:316: DBConnection.Holder.holder_apply/4
    (db_connection 2.3.1) lib/db_connection.ex:1272: DBConnection.run_execute/5
    (db_connection 2.3.1) lib/db_connection.ex:1359: DBConnection.run/6
    (db_connection 2.3.1) lib/db_connection.ex:557: DBConnection.parsed_prepare_execute/5
    (db_connection 2.3.1) lib/db_connection.ex:550: DBConnection.prepare_execute/4
    (ecto_sqlite3 0.5.3) lib/ecto/adapters/sqlite3/connection.ex:62: Ecto.Adapters.SQLite3.Connection.prepare_execute/5
    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:692: Ecto.Adapters.SQL.execute!/4
    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5
    (ecto 3.5.8) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.5.8) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3
    (stdlib 3.14) erl_eval.erl:680: :erl_eval.do_apply/6
    (elixir 1.11.3) src/elixir.erl:280: :elixir.recur_eval/3
    (elixir 1.11.3) src/elixir.erl:265: :elixir.eval_forms/3
    (iex 1.11.3) lib/iex/evaluator.ex:261: IEx.Evaluator.handle_eval/5
    (iex 1.11.3) lib/iex/evaluator.ex:242: IEx.Evaluator.do_eval/3
    (iex 1.11.3) lib/iex/evaluator.ex:220: IEx.Evaluator.eval/3

fish: Job 1, 'iex -S mix' terminated by signal SIGSEGV (Address boundary error)

This may be why esqlite added bulk fetching of rows instead of simply stepping. Although bulk fetching would be nice, the issue arises when an error happens midway through stepping; how to communicate that error back is a bit tricky.

Improve on_conflict: nothing and un-fork repo.exs tests

Right now we have a caveat about us returning an invalid PK id when hitting a conflict during ON CONFLICT DO NOTHING.

This is due to limitations around the last insert id, as mentioned in the README.

This also caused us to have to fork the repo.exs cases so that we can exclude one of the tests around this particular scenario.

Once #76 is supported, and we use that as the primary mechanism for getting the last insert ids, we should be able to eliminate this caveat and un-fork our repo.exs test.

I was able to confirm that the RETURNING * clause returns nothing in the ON CONFLICT DO NOTHING case, which is what we want.

create table foo(bar);
create unique index u on foo(bar);
insert into foo values(1) returning *;
=> 1
insert into foo values(1) on conflict do nothing returning *;
(nothing returned)

Support FROM in when running UPDATE

There are two tests that are failing at the moment.

The following SQL is valid according to the SQLite documentation:

WITH target_rows AS (
  SELECT 
    s0.id AS id 
  FROM schema AS s0 
  ORDER BY s0.id
  LIMIT 10
) 
UPDATE schema AS s0 
  SET x = 123 
  FROM target_rows AS t1 
  WHERE (t1.id = s0.id)

Make busy_timeout configurable

Useful to configure in itself, but also for our tests that rely on testing it. Those tests currently hang for about 2 seconds each because of the hardcoded 2000 ms value.

Make last_insert_id call from Exqlite skippable, or otherwise remove it

We no longer need it in the ecto layer thanks to RETURNING support added in #81. We can avoid another NIF call by not doing it when driving Exqlite via the Ecto adapter.

We should probably keep it there and configurable for those who want to use Exqlite directly (without Ecto). Though they should probably also just use RETURNING themselves, so maybe it makes sense to cut it out altogether.

Onboard transaction.exs test suite

This is the most reliable reproduction of #58. My bet is that once we have all the tests passing here, #58 will be fine, too.

Labeling as bug since these failing tests indicate an underlying bug that will be fixed at the same time the tests are onboarded.

How best to deal with timezone aware datetimes

The original sqlite_ecto2 library used TEXT_DATETIME to preserve the offset in the datetime. All datetimes stored in sqlite are shifted to UTC.

My personal opinion is that all datetimes should be stored in UTC, and the named timezone they need to be shifted to should be stored in a separate column elsewhere. A simple +03:00 offset does not convey daylight saving time.
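A sketch of that approach in Elixir, storing the UTC value and the zone name in separate columns and shifting on read (assumes a time zone database such as tzdata is configured; column names are illustrative):

```elixir
# Writing: persist the instant as a naive UTC datetime plus the zone name.
utc_column = DateTime.utc_now() |> DateTime.to_naive()
zone_column = "America/New_York"

# Reading: reattach the zone when loading from the database.
{:ok, local} =
  utc_column
  |> DateTime.from_naive!("Etc/UTC")
  |> DateTime.shift_zone(zone_column)
```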

Improve compilation speed for rebuilds

I would like to explore compiling sqlite3 to an object file and then linking it rather than recompiling everything at once. I dread making changes to the nif because the compilation takes so long.

Error :arguments_wrong_length on preload

Hey

Nice to see an Ecto 3 SQLite library. I'm thrilled about this! :)

I just tried to use it with an existing project and after some modifications (remove ALTER TABLE migrations, remove :map type field) it seemed to work.

Until I hit this error.

Git repo with branch: https://github.com/lawik/noted/tree/exqlite

Trying to run this code:

  def list_notes(user_id) do
    query =
      from n in Note,
        where: [user_id: ^user_id],
        preload: [:tags, :files]

    Repo.all(query)
  end
iex(2)> Noted.Notes.list_notes(1)
[debug] QUERY OK source="notes" db=0.0ms queue=0.1ms idle=1857.2ms
SELECT n0.id, n0.body, n0.title, n0.user_id, n0.inserted_at, n0.updated_at FROM notes AS n0 WHERE (n0.user_id = ?) [1]
[debug] QUERY ERROR source="files" db=11.7ms queue=0.2ms idle=28.1ms
SELECT f0.id, f0.mimetype, f0.path, f0.size, f0.note_id, f0.inserted_at, f0.updated_at, f0.note_id FROM files AS f0 WHERE (f0.note_id IN (?,?)) ORDER BY f0.note_id [[2, 1]]
[error] GenServer #PID<0.411.0> terminating
** (DBConnection.ConnectionError) client #PID<0.487.0> stopped: ** (WithClauseError) no with clause matching: {:error, %Exqlite.Error{message: :arguments_wrong_length}, %Exqlite.Connection{db: #Reference<0.1830523906.2284716050.207239>, path: "database.db", status: :busy, transaction_status: :idle}}
    (exqlite 0.4.1) lib/exqlite/connection.ex:443: Exqlite.Connection.execute/4
    (db_connection 2.3.1) lib/db_connection/holder.ex:316: DBConnection.Holder.holder_apply/4
    (db_connection 2.3.1) lib/db_connection.ex:1272: DBConnection.run_execute/5
    (db_connection 2.3.1) lib/db_connection.ex:1359: DBConnection.run/6
    (db_connection 2.3.1) lib/db_connection.ex:557: DBConnection.parsed_prepare_execute/5
    (db_connection 2.3.1) lib/db_connection.ex:550: DBConnection.prepare_execute/4
    (exqlite 0.4.1) lib/ecto/adapters/exqlite/connection.ex:46: Ecto.Adapters.Exqlite.Connection.prepare_execute/5
    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:692: Ecto.Adapters.SQL.execute!/4
    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5
    (ecto 3.5.8) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
    (ecto 3.5.8) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3
    (ecto 3.5.8) lib/ecto/repo/preloader.ex:229: Ecto.Repo.Preloader.fetch_query/8
    (elixir 1.11.3) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
    (elixir 1.11.3) lib/task/supervised.ex:35: Task.Supervised.reply/5
    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3

    (db_connection 2.3.1) lib/db_connection/connection.ex:183: DBConnection.Connection.handle_cast/2
    (connection 1.1.0) lib/connection.ex:810: Connection.handle_async/3
    (stdlib 3.13) gen_server.erl:680: :gen_server.try_dispatch/4
    (stdlib 3.13) gen_server.erl:756: :gen_server.handle_msg/6
    (stdlib 3.13) proc_lib.erl:236: :proc_lib.wake_up/3 
Last message: {:"$gen_cast", {:stop, #Reference<0.1830523906.2284716033.212081>, %DBConnection.ConnectionError{message: "client #PID<0.487.0> stopped: ** (WithClauseError) no with clause matching: {:error, %Exqlite.Error{message: :arguments_wrong_length}, %Exqlite.Connection{db: #Reference<0.1830523906.2284716050.207239>, path: \"database.db\", status: :busy, transaction_status: :idle}}\n    (exqlite 0.4.1) lib/exqlite/connection.ex:443: Exqlite.Connection.execute/4\n    (db_connection 2.3.1) lib/db_connection/holder.ex:316: DBConnection.Holder.holder_apply/4\n    (db_connection 2.3.1) lib/db_connection.ex:1272: DBConnection.run_execute/5\n    (db_connection 2.3.1) lib/db_connection.ex:1359: DBConnection.run/6\n    (db_connection 2.3.1) lib/db_connection.ex:557: DBConnection.parsed_prepare_execute/5\n    (db_connection 2.3.1) lib/db_connection.ex:550: DBConnection.prepare_execute/4\n    (exqlite 0.4.1) lib/ecto/adapters/exqlite/connection.ex:46: Ecto.Adapters.Exqlite.Connection.prepare_execute/5\n    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:692: Ecto.Adapters.SQL.execute!/4\n    (ecto_sql 3.5.4) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5\n    (ecto 3.5.8) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4\n    (ecto 3.5.8) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3\n    (ecto 3.5.8) lib/ecto/repo/preloader.ex:229: Ecto.Repo.Preloader.fetch_query/8\n    (elixir 1.11.3) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2\n    (elixir 1.11.3) lib/task/supervised.ex:35: Task.Supervised.reply/5\n    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3\n", reason: :error, severity: :error}, %Exqlite.Connection{db: #Reference<0.1830523906.2284716050.207239>, path: "database.db", status: :busy, transaction_status: :idle}}}
State: {Exqlite.Connection, %Exqlite.Connection{db: #Reference<0.1830523906.2284716050.207239>, path: "database.db", status: :busy, transaction_status: :idle}}

I haven't had a chance to sit down and work out whether this affects all WHERE x IN y queries, whether it is specific to Ecto preloads, or whether my schemas are screwy. It worked fine on Postgres and explodes here.

If you want to reproduce this with my code: set up the app, get a Telegram bot API key (not too complicated if you have Telegram; just talk to the BotFather bot), put the key and the bot name in environment variables as noted in the README, use the bot to authenticate, and then send a photo. That breaks the list_notes function call with the error above.
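For what it's worth, :arguments_wrong_length suggests the IN parameters reached the driver as a nested list ([[2, 1]], one argument for two ? placeholders) rather than a flat list. A minimal sketch of the distinction against the low-level API (the bind/3 arity is an assumption based on exqlite 0.4.x and may differ in other versions):

```elixir
{:ok, conn} = Exqlite.Sqlite3.open(":memory:")
:ok = Exqlite.Sqlite3.execute(conn, "CREATE TABLE files (note_id INTEGER)")

{:ok, stmt} =
  Exqlite.Sqlite3.prepare(conn, "SELECT * FROM files WHERE note_id IN (?, ?)")

# What the preloader appears to be sending: one nested list for two
# placeholders. This is the shape that fails with :arguments_wrong_length.
Exqlite.Sqlite3.bind(conn, stmt, [[2, 1]])

# What SQLite expects: one flat value per `?` placeholder.
:ok = Exqlite.Sqlite3.bind(conn, stmt, [2, 1])
```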

Onboard "embed with custom type" test by fixing issue related to Ecto select projection

This is the last test from type.exs that is marked with our :onboarding tag.

See the notes in #101. I think Ecto may have an issue where, if the query selects a specific column, the respective loader never gets called, which is why that test is failing. If my theory is correct, this would manifest pretty frequently in day-to-day usage: every time a specific column is selected that would benefit from a loader pass, which in SQLite is most of them!

Edit: also, if my theory is correct, the fix for this is upstream.
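If the theory is right, the difference should be visible with a plain pair of Ecto queries like the following (the schema and field names are hypothetical; this only illustrates which query shape would skip the loader):

```elixir
import Ecto.Query

# Selecting the whole struct: Ecto runs each field through its type's loader.
Repo.all(from n in Note, select: n)

# Selecting a specific column: if the theory is correct, the loader for the
# field's type is skipped and the raw SQLite value (e.g. a TEXT datetime)
# comes back as-is.
Repo.all(from n in Note, select: n.inserted_at)
```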

Repo.delete_all does not return number of affected rows

A simple case in my kcl-delete-2 branch:

  describe "delete" do
    test "delete_all" do
      TestRepo.insert!(%Product{name: "hello"})
      assert [_] = TestRepo.all(Product)

      assert {1, _} = TestRepo.delete_all(Product)
      assert [] == TestRepo.all(Product)
    end
  end

The match on {1, _} will always fail.
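SQLite does track the count: sqlite3_changes() reports the rows affected by the most recent statement, and exqlite exposes it as Exqlite.Sqlite3.changes/1 (assuming that function exists in the version in use), so the adapter could source the affected-row count from there. A sketch:

```elixir
{:ok, conn} = Exqlite.Sqlite3.open(":memory:")
:ok = Exqlite.Sqlite3.execute(conn, "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
:ok = Exqlite.Sqlite3.execute(conn, "INSERT INTO products (name) VALUES ('hello')")

:ok = Exqlite.Sqlite3.execute(conn, "DELETE FROM products")

# The count that the {1, _} match expects; the adapter currently drops it.
{:ok, 1} = Exqlite.Sqlite3.changes(conn)
```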

Look into supports_ddl_transaction? causing busy issues

This is what caused a ton of issues for other Ecto adapters:

  @impl true
  def supports_ddl_transaction?(), do: false

If you set this to true, the initial migrations for the test database will not work. Something is happening with transactions not being committed and it hangs when trying to insert the schema_migration record.

Connection pooling needs to be altered

The connection pooling needs to be altered so that only one connection can make modifications while the others can only query. I'm unsure how to implement this, but DBConnection does allow you to specify or build your own pooling method.

Some solutions or ideas on how to go about this would be excellent.
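One common workaround while a custom pool doesn't exist is to run two repos against the same file: a writer with pool_size: 1 and a larger read-only pool. The names below are hypothetical; this is a sketch of the idea, not a tested configuration:

```elixir
# Hypothetical: a single-connection writer repo plus a larger reader pool,
# both pointed at the same SQLite file.
config :my_app, MyApp.Repo,
  database: "database.db",
  pool_size: 1

config :my_app, MyApp.ReadRepo,
  database: "database.db",
  pool_size: 10
```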

Sandbox does not recover properly from errors, leading to busy issues

If you go to integration_test/exqlite/all_test.exs and comment out the line for:

Code.require_file "#{ecto}/integration_test/cases/interval.exs", __DIR__

and run the test suite, the first test fails for some reason, and then the rest of the tests fail with "Database busy". That means the failed test is not rolling back its transaction correctly.

I believe db_connection has a :disconnect return status that some of the other adapters make use of when hitting errors. I think we need to use it for the errors we cannot recover from, so that the entire connection is closed and the database does not stay in an unreachable state until the whole application is restarted!

To be clear, this may not be scoped only to the sandbox. This probably represents an error that could occur in normal "production" application use.
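The shape db_connection expects is a {:disconnect, exception, state} return from the connection callbacks, which makes the pool close and re-open that connection. A hypothetical sketch of how Exqlite.Connection.handle_execute/4 could use it (the do_execute/3 and recoverable?/1 helpers are invented for illustration):

```elixir
@impl true
def handle_execute(query, params, _opts, state) do
  case do_execute(query, params, state) do
    {:ok, result, state} ->
      {:ok, query, result, state}

    {:error, %Exqlite.Error{} = err, state} ->
      if recoverable?(err) do
        # Recoverable error: keep the connection alive in the pool.
        {:error, err, state}
      else
        # Unrecoverable error: tell db_connection to close this connection so
        # the database handle is not left busy until the app restarts.
        {:disconnect, err, state}
      end
  end
end
```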
