sqlstreamstore / sqlstreamstore

Stream Store library targeting RDBMS-based implementations for .NET

License: MIT License

Languages: C# 94.84%, TSQL 3.71%, PLpgSQL 1.33%, Dockerfile 0.05%, Shell 0.02%, Batchfile 0.02%, PowerShell 0.02%
Topics: streams, cqrs, event-store, event-sourcing, sql-server, postgresql, c-sharp, dotnet

sqlstreamstore's Introduction

SQL Stream Store

⚠️ These libraries are no longer actively maintained.

A stream store library for .NET that specifically targets SQL-based implementations. Primarily used to implement Event Sourced applications.

| Package | Install |
| --- | --- |
| SqlStreamStore (includes in-memory version for behaviour testing) | NuGet |
| MS SQL Server / Azure SQL Database | NuGet |
| PostgreSQL / AWS Aurora | NuGet |
| MySQL / AWS Aurora | NuGet |
| Sqlite | up for grabs |
| HTTP Wrapper API | On CI Feed |
| Schema Creation Script Tool | NuGet |

CI Packages available on Feedz.

Design considerations:

  • Designed to only ever support RDBMS/SQL implementations.
  • Subscriptions are eventually consistent.
  • API is influenced by (but not compatible with) EventStore.
  • Async only.
  • JSON-only event and metadata payloads (usually just a string / varchar / etc.).
  • No support for System.Transactions, enforcing the concept of the stream as the consistency and transaction boundary.
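
For orientation, here is a minimal usage sketch against the in-memory store. The names used (InMemoryStreamStore, NewStreamMessage, AppendToStream, ReadStreamForwards) follow the 1.x API, but verify them against the package version you install:

    using System;
    using System.Threading.Tasks;
    using SqlStreamStore;
    using SqlStreamStore.Streams;

    public static class QuickStart
    {
        public static async Task Main()
        {
            // The in-memory store ships in the core package, for behaviour testing.
            using (var store = new InMemoryStreamStore())
            {
                // Payloads are JSON strings; the store does not interpret them.
                var message = new NewStreamMessage(Guid.NewGuid(), "ItemAdded", "{ \"sku\": \"abc\" }");

                var result = await store.AppendToStream("orders-123", ExpectedVersion.NoStream, new[] { message });

                var page = await store.ReadStreamForwards("orders-123", StreamVersion.Start, 10);
                Console.WriteLine($"Read {page.Messages.Length} message(s); next expected version is {result.CurrentVersion}.");
            }
        }
    }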

Building

Building requires Docker. The solution and tests run in a Linux container with .NET Core, using SQL Server, Postgres and MySQL as sibling containers.

  • Windows, run .\build.cmd
  • Linux, run ./build.sh

Note: build does not work via WSL.

Help & Support

Ask questions in the #sql-stream-store channel in the ddd-cqrs-es slack workspace. (Join here).

Licences

Licenced under MIT.

sqlstreamstore's People

Contributors

adamralph, bartelink, benniecopeland, cobelapierre, cumpsd, damianh, danbarua, dealproc, dependabot-preview[bot], erwinvandervalk, huysentruitw, jchannon, jefclaes, jruizaranguren, leecampbell, mausch, mihasic, mnf, nordfjord, osoykan, pgermishuys, ragingkore, rikbosch, robvdlv, rvdkooy, thefringeninja, tpresthus, vmachacek, ylorph, yreynhout


sqlstreamstore's Issues

Make public HTTP Wrapper API

The readme says the HTTP Wrapper API is under development. As I have a similar requirement, it would be very interesting to see its current state, so that I can use what's done so far and possibly contribute.
Please make the HTTP Wrapper API branch public.

Use regular merges for PRs

Using these repo options can make things a bit awkward for contributors:

When a squash/rebase merge is used, the commits which are pushed to master are not those which are on my PR branch.

I have a git alias that automatically updates my clone and prunes any branches which have been merged. Since these branches have not been merged, they remain present.

Also, consider that the commits in my PR are GPG-signed and are therefore displayed as "Verified" by GitHub.

Because of the rebase, a new, joint-authored commit is created on master which is not GPG-signed by me.

I suggest switching off both of these options and using regular merges for PRs.

Postgres 1.2.0-build00028 exception when appending to stream

I was testing the latest Postgres version (1.2.0-build00028), but I get an "Object reference not set" exception when calling AppendToStream. Reading from the stream did not throw.

Using this store initializer:

    using (var store = new PostgresStreamStore(new PostgresStreamStoreSettings(Settings.Default.ESConnectionString) { Schema = "EventStore".ToLower() }))

I also tried without providing a schema to PostgresStreamStoreSettings, and I also ran await store.CreateSchema() again.

Also downgraded both the SqlStreamStore and SqlStreamStore.Postgres packages to version 16 => same error.

Using the postgres 10.5 Docker image.

I have a public schema and an eventstore schema in the Postgres database.

Does the $deleted stream obey the metadata settings?

I am using version 1.1.2.
I have a stream ($checkpoint) which I write to, and whose metadata I have set to MaxCount 100.
I have set the $deleted stream's MaxCount to 10.
I can see the metadata entries in the Streams table with the correct values.

As I write to $checkpoint, only 100 rows are retained, as expected; however, the $deleted stream grows continually.

I have not debugged it, but it looks like DeleteEventInternal appends new entries to the $deleted stream but does not act on the returned MsSqlAppendResult to trim any overflow.
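
For reference, the metadata in question is set roughly like this (a sketch assuming the 1.x IStreamStore.SetStreamMetadata signature; note that direct manipulation of $-prefixed stream ids may be restricted):

    // Cap the application stream at 100 messages (MaxCount is enforced on append)...
    await store.SetStreamMetadata("$checkpoint", maxCount: 100);

    // ...and, per the report, cap the deleted-messages stream at 10.
    await store.SetStreamMetadata("$deleted", maxCount: 10);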

StreamEvent(s?)Received

When subscribing to a stream, SqlStreamStore starts by reading the events you haven't seen yet, based on the position you provide. It then invokes the StreamEventReceived delegate for each event.

If you want to remember the last message you've processed, you end up checkpointing (to a database, a file, ...) after processing each message. Besides checkpointing, each message that you're interested in triggers a reaction (a DML statement, publishing to a bus, ...) with some cost attached to it.

When your projection is catching up, I think it would be beneficial to pass a list of events to a StreamEventsReceived delegate instead of a single message, as sketched below. This would allow impactful optimisations, such as executing DML statements or dispatching messages in bulk.
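
A sketch of the proposed shape (hypothetical; StreamEvent here is a stand-in for the library type, and the helpers are illustrative stubs):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Existing: invoked once per message.
    public delegate Task StreamEventReceived(StreamEvent streamEvent);

    // Proposed: invoked with a batch while the projection is catching up.
    public delegate Task StreamEventsReceived(IReadOnlyList<StreamEvent> streamEvents);

    // Stand-in for the library's StreamEvent.
    public class StreamEvent
    {
        public long Checkpoint { get; set; }
        public string JsonData { get; set; }
    }

    public class BulkProjection
    {
        // Checkpoint once per batch instead of once per message.
        public async Task OnStreamEventsReceived(IReadOnlyList<StreamEvent> batch)
        {
            await ApplyInBulk(batch);                                // e.g. one bulk DML statement
            await SaveCheckpoint(batch[batch.Count - 1].Checkpoint); // one checkpoint write per batch
        }

        private Task ApplyInBulk(IReadOnlyList<StreamEvent> batch) => Task.CompletedTask; // stub
        private Task SaveCheckpoint(long checkpoint) => Task.CompletedTask;               // stub
    }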

MsSQL: Validate the schema

A consumer should be able to check that the schema of a store is valid. Something along the lines of:

    MsSqlStore.ValidateSchema();

The use case is people who mix the stream store schema with other schemas in a single database.
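
A sketch of how a consumer might use such a check (hypothetical shape; ValidateSchema and its result members are assumed names from this proposal, not an existing API):

    // Hypothetical: fail fast at startup if the schema does not match.
    public static async Task EnsureSchema(MsSqlStreamStore store, CancellationToken cancellationToken)
    {
        var result = await store.ValidateSchema(cancellationToken); // assumed method
        if (!result.IsMatch)
        {
            throw new InvalidOperationException(
                $"Store schema mismatch: expected version {result.ExpectedVersion}, found {result.CurrentVersion}.");
        }
    }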

documentation :-)

hi,

just wondering: if i'm looking for info on architectural decisions/approaches, should i look at the neventstore docs as inspiration?

i'm looking for a simple ES solution on top of postgres + geteventstore, so this seems to be perfect.

Request: Autodispose subscriptions when disposing an IStreamStore instance.

Problem space

At the moment, when one disposes a stream store instance, subscriptions only find out about this when they next try to read a page from that instance. End users who set up subscriptions via the IAllStreamSubscription SubscribeToAll(long? continueAfterPosition, AllStreamMessageReceived streamMessageReceived, AllSubscriptionDropped subscriptionDropped = null, HasCaughtUp hasCaughtUp = null, string name = null) part of the API get notified via the AllSubscriptionDropped callback, which has the signature delegate void AllSubscriptionDropped(IAllStreamSubscription subscription, SubscriptionDroppedReason reason, Exception exception = null). In this particular case the SubscriptionDroppedReason is StreamStoreError, not Disposed, which is reserved for disposal of the subscription itself (AFAICT). To figure out whether one should resubscribe, one has to inspect - next to the reason - the exception, and determine whether it is an ObjectDisposedException whose ObjectName property matches the name of the specific IStreamStore implementation (see the sketch after the solution list below). It should go without saying that this is a bit leaky and difficult to figure out. Please note, I have not investigated subscriptions to a specific stream; my wild guess is they behave pretty much the same.

In essence, it should be clear and easy to figure out when resubscribing is appropriate and when it is not.

Solution space

  • IF a stream store were to track and dispose of its subscriptions, the SubscriptionDroppedReason could become Disposed again, since that is effectively what would be going on.
  • A simpler, short-term solution might be to make the SubscriptionDroppedReason less ambiguous, namely:
  public enum SubscriptionDroppedReason
  {
    SubscriptionDisposed,
    StreamStoreDisposed,
    SubscriberError,
    StreamStoreError,
  }
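
For illustration, the brittle resubscribe check that the current design forces on subscribers looks roughly like this (a sketch; the delegate shapes come from the library, and the disposal probe is the leaky part):

    public class Resubscriber
    {
        // IAllStreamSubscription and SubscriptionDroppedReason are library types.
        public void OnAllSubscriptionDropped(
            IAllStreamSubscription subscription,
            SubscriptionDroppedReason reason,
            Exception exception)
        {
            // Leaky: has to know the concrete store type's name to detect disposal.
            var storeWasDisposed =
                reason == SubscriptionDroppedReason.StreamStoreError
                && exception is ObjectDisposedException disposed
                && disposed.ObjectName == "MsSqlStreamStore";

            if (!storeWasDisposed)
            {
                Resubscribe(); // only resubscribe while the store is still alive
            }
        }

        private void Resubscribe() { /* re-create the subscription */ }
    }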

buckets for natural stream partitions

I'm aware it is pretty soon to be adding issues here, but I'll ask anyway 😄

Do you plan to add a bucket column to make it easy to partition streams in the RDBMS (as in NEventStore)? Or do you plan to avoid it in order to stay closer to GES compatibility?

Purge $deleted

When a message or stream is deleted, a message is appended to the $deleted stream. The primary purpose of this is to allow subscribers or a (future) mirroring capability to handle the deletion. The current design retains these deleted messages indefinitely, whereas in most cases one would only want to retain them for a period of time.

Throwing out some thoughts:

  1. Leverage the metadata infra to be able to set a maxAge and/or maxLength. Currently, the API does not allow direct manipulation of streams whose Id starts with $, by design. I'd like to keep that in place, so this approach would require an explicit API. The slight potential problem is that for maxAge the purge happens on read, so a ReadAllStreamForwards(null...) would be required to actually remove the $deleted stream messages. If that never happens... MaxCount is handled on append, so that is not a concern.

  2. Expose a specific operation such as PurgeDeleted(TimeSpan olderThan) that the host application can invoke on its own schedule. Should this have its own implementation, or should it underneath just read the $deleted stream and let the MaxAge purge do its thing?

Limit message payload size, or not?

Currently the JsonData payload in NewStreamMessage is unbounded. Or rather, it is bounded by the limits the underlying store can support; for MS SQL Server (nvarchar(max), text) that limit is rather large. This allows other patterns, such as storing "documents" in the streams. However, EventStore has a limit of 64KB per message, so not having this limit further reduces portability.

While full portability with GES is not a goal, I thought that mentioning this decision here would be a good idea. :)

At this moment my preference is not to limit the size, but to leave that to the user, if they need it.

Cannot build locally due to version conflict

I cannot build locally since this PR was merged: #183

I get this error every single time I try:

    /src/SqlStreamStore.Http.Tests/SqlStreamStore.Http.Tests.csproj : error NU1107: Version conflict detected for SqlStreamStore. Install/reference SqlStreamStore directly to project SqlStreamStore.Http.Tests to resolve this issue.  [/src/SqlStreamStore.sln]
    /src/SqlStreamStore.Http.Tests/SqlStreamStore.Http.Tests.csproj : error NU1107:  SqlStreamStore.Http.Tests -> SqlStreamStore.TestUtils -> SqlStreamStore  [/src/SqlStreamStore.sln]
    /src/SqlStreamStore.Http.Tests/SqlStreamStore.Http.Tests.csproj : error NU1107:  SqlStreamStore.Http.Tests -> SqlStreamStore.HAL 1.0.0-rc2-build00035 -> SqlStreamStore (>= 1.2.0-build00124). [/src/SqlStreamStore.sln]

[MSSQL] N+1 issue on reading messages

Each read action is followed by FilterExpired(), which tries to get the max age out of the stream metadata. There is an internal cache, but it does not get many hits.

Proposed fix: put the latest metadata (including MaxAge and MaxCount) on the stream itself and read it together with the regular message read.

Possible workaround: unseal MsSqlStreamStore so that GetStreamMetadataInternal can be overridden in consumer code to avoid the SQL query (at the cost of MaxAge support).

Getting to version 1.0 checklist

Outstanding items to get to a v0.1 release:

  • Message Metadata - string with arbitrary json vs Dictionary<string, string>? It's just a string; you can insert your own json.
  • Stream Metadata api - string with arbitrary json vs Dictionary<string, string>? Special message object with known properties MaxAge and MaxCount + user json. Store as independent stream or not? Yes. Expose via all stream? Yes. Support MaxAge and MaxCount.
  • Stream deletion (support tombstoning?)
  • Message deletion
  • Scavenging - "Older than X", "Prior to Version Y"
  • Rename. A) StreamStore or B) SqlStreamStore
  • XML-comment all the public things
  • Logging
  • Performance metrics. Added a load test project; perf counters / metrics in a subsequent version.
  • Docs - readthedocs?
  • Sample project

Post 0.1 (just noting for now)

  • Postgres
  • Stream Migration

Cedar.EventStore.MsSql2008 doesn't store events in the correct order

What we're seeing is a mismatch between Ordinal and StreamVersion for a stream, which leads to the highest Ordinal not having the highest StreamVersion.

Full explanation and code: https://gist.github.com/bwaterschoot/77362620bbf058b58285

This happens because the events aren't stored in the correct order; see the link below:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5cf485cc-a005-4f70-9cad-78b091d669cd/table-valued-parameters-order-of-records-to-stored-procedure?forum=transactsql

A solution would be for ReadStreamForward to use StreamVersion instead of Ordinal in the ORDER BY clause, because you can't be sure SQL Server writes the events in the same order. The other Read SQL statements would probably need a similar change.

You can reproduce the issue with the code below; you might have to run it several times, because sometimes it does work...

    namespace Cedar.EventStore
    {
        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;
        using System.Linq;
        using Cedar.EventStore.Streams;
        using Shouldly;
        using Xunit;

        public partial class EventStoreAcceptanceTests
        {
            [Fact]
            public async Task Given_large_event_stream_can_be_read_back_in_pages()
            {
                using (var fixture = GetFixture("dbo"))
                {
                    using (var eventStore = await fixture.GetEventStore())
                    {
                        var eventsToWrite = CreateNewStreamEvents();

                        await eventStore.AppendToStream("stream-1", ExpectedVersion.NoStream, eventsToWrite);

                        var readEvents = await new PagedEventStore(eventStore).GetAsync("stream-1");

                        readEvents.Count().ShouldBe(eventsToWrite.Length);
                    }
                }
            }

            private NewStreamEvent[] CreateNewStreamEvents()
            {
                var eventsToWrite = new List<NewStreamEvent>();
                var largeStreamCount = 7500;
                for (int i = 0; i < largeStreamCount; i++)
                {
                    var envelope = new NewStreamEvent(Guid.NewGuid(), $"event{i}", "{}", $"{i}");

                    eventsToWrite.Add(envelope);
                }

                return eventsToWrite.ToArray();
            }
        }

        public class PagedEventStore
        {
            private readonly IEventStore _eventStore;

            public PagedEventStore(IEventStore eventStore)
            {
                _eventStore = eventStore;
            }

            public async Task<IEnumerable<StreamEvent>> GetAsync(string streamName)
            {
                var start = 0;
                const int BatchSize = 500;

                StreamEventsPage eventsPage;
                var events = new List<StreamEvent>();

                do
                {
                    eventsPage = await _eventStore.ReadStreamForwards(streamName, start, BatchSize);

                    if (eventsPage.Status == PageReadStatus.StreamDeleted)
                    {
                        throw new Exception("Stream deleted");
                    }

                    if (eventsPage.Status == PageReadStatus.StreamNotFound)
                    {
                        throw new Exception("Stream not found");
                    }

                    events.AddRange(eventsPage.Events);

                    start = eventsPage.NextStreamVersion;
                }
                while (!eventsPage.IsEndOfStream);

                return events;
            }
        }
    }

HTTP API

So far Damian, myself, Jef, Bart and Mat are interested in this.

Be able to List Streams

It would be nice to be able to iterate over the collection of StreamIds instead of having to ReadAll and skip over messages. Also nice would be some pattern matching, perhaps startsWith initially. The primary use case is performing stream migrations; see the sketch after the examples below.

Potential examples:

    var streams = streamStore.ListStreams(cancellationToken);

    var startsWith = "foo/bar/";
    var streams = streamStore.ListStreams(startsWith, maxCount: 100, cancellationToken);
    streams = streams.GetNext();
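
A fuller paging loop over the proposed shape might look like this (hypothetical API; StreamIds and Next are assumed member names, and MigrateStream is an illustrative helper):

    // Hypothetical: migrate every stream whose id starts with "foo/bar/".
    var page = await streamStore.ListStreams("foo/bar/", maxCount: 100, cancellationToken);
    while (page.StreamIds.Length > 0)
    {
        foreach (var streamId in page.StreamIds)
        {
            await MigrateStream(streamId, cancellationToken); // the stream-migration use case
        }
        page = await page.Next(cancellationToken); // assumed continuation method
    }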

Missing null check for MsSqlEventStore CreateEventStoreNotifier

If you create a new store with

    var store = new MsSqlEventStore(connectionString, null);

and later try to add a subscription

    store.SubscribeToAll(null, se => { DoStuff(se); });

it fails with a generic

    System.NullReferenceException: Object reference not set to an instance of an object.

The exception originates here:

    public MsSqlEventStore(
        string connectionString,
        CreateEventStoreNotifier createEventStoreNotifier,
        string schema = "dbo",
        string logName = "MsSqlEventStore")
        : base(logName)
    {
        Ensure.That(connectionString, nameof(connectionString)).IsNotNullOrWhiteSpace();

        _createConnection = () => new SqlConnection(connectionString);
        _eventStoreNotifier = new AsyncLazy<IEventStoreNotifier>(
            async () => await createEventStoreNotifier(this).NotOnCapturedContext()); // <--- HERE
        _scripts = new Scripts(schema);
    }

It would be nice to have a better error message along the lines of "Cannot create notifier because supplied createEventStoreNotifier was null"...

Stylistically I'm not sure how you want to handle this.

Maybe something like

    // untested code
    _eventStoreNotifier = new AsyncLazy<IEventStoreNotifier>(
        async () =>
        {
            if (createEventStoreNotifier == null)
            {
                throw new ArgumentNullException(
                    nameof(createEventStoreNotifier),
                    "Cannot create notifier because the supplied createEventStoreNotifier was null");
            }
            return await createEventStoreNotifier(this).NotOnCapturedContext();
        });
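
Alternatively, a sketch of an eager guard in the constructor, next to the existing connectionString check, so the failure surfaces at construction time rather than on first subscription (assuming the Ensure helper supports reference-type null checks):

    Ensure.That(connectionString, nameof(connectionString)).IsNotNullOrWhiteSpace();
    Ensure.That(createEventStoreNotifier, nameof(createEventStoreNotifier)).IsNotNull();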

AppendToStream causes StackOverFlow

The overload of EventStoreExtensions.AppendToStream that takes an IEnumerable<NewStreamEvent> calls itself, instead of the interface-declared overload that takes a NewStreamEvent[].

    public static Task AppendToStream(
        this IEventStore eventStore,
        string streamId,
        int expectedVersion,
        IEnumerable<NewStreamEvent> events)
    {
        // Just calls itself and stack-overflows.
        return eventStore.AppendToStream(streamId, expectedVersion, events);
    }

Should be

    public static Task AppendToStream(
        this IEventStore eventStore,
        string streamId,
        int expectedVersion,
        IEnumerable<NewStreamEvent> events)
    {
        return eventStore.AppendToStream(streamId, expectedVersion, events.ToArray());
    }

Design question: Should the MsSql CreateSchema() method really be async?

Currently the method is async, but when would it be called outside an application's composition root, which is generally synchronous? I'm using SimpleInjector, which does not take async lambda expressions, so I specifically have to call Wait(), since I want to make sure the database is ready before I start calling into it.

container.Register<IStreamStore>(() =>
{
    var connectionString = this.Configuration.GetConnectionString("EventStore");
    var settings = new MsSqlStreamStoreSettings(connectionString);

    var store = new MsSqlStreamStore(settings);
    store.CreateSchema().Wait();

    return store;
}, Lifestyle.Singleton);

HasCaughtUp not called in an empty store

When subscribing to All (I have not tested individual streams) while the store is empty, the HasCaughtUp callback is never invoked. I expected it to be called with true.

Projections

Do you have any plans for dealing with projections, or is this logic left to the consumers of subscriptions as they are currently implemented through SubscribeToStream?

Non-deterministic Position values when subscribing to All Streams.

As briefly discussed on https://ddd-cqrs-es.slack.com/messages/sql-stream-store/ :

With the isolation level set to READ COMMITTED, the Position values are non-deterministic in the sense that two or more concurrent write operations can end up creating "holes" in the Position sequence if one of the operations fails. This in turn affects reads against the table: it is impossible to know whether all new events are being picked up by the poller without keeping state per subscriber / stream.

I assume that the intention of SqlStreamStore is to be usable with READ COMMITTED as the isolation level; SERIALIZABLE reads and writes against the Streams table would probably result in very poor performance.

Question: will you be supporting linking a message to multiple streams?

Hi, I see that you are basing SqlStreamStore on GetEventStore, and I was wondering whether you are planning to support linking a message created in one stream into another stream, as can be done in GetEventStore using JavaScript projections?

We are quite keen to move away from GetEventStore and to use SqlStreamStore in its place, so as to get the advantages of SQL Server, and this is one possible issue that may delay our adoption.

SQLStreamStore vs NEventStore vs EventStore

Your project is relatively new and exists in the same area as the more mature Event Store and NEventStore (and a few others).

Can you explain the reasons why this project was created? What are the benefits (and disadvantages) of using your library? Why should I (as someone new to the event store area) choose your library?

It would be good to add the answer to the wiki.
P.S. @damianh, I found your post http://dhickey.ie/2015/04/stepping-down-from-neventstore/, but it also doesn't explain why "NEventStore's current design doesn't work" for you.

It is possible for subscriptions to miss messages when chasing the tail under high concurrent load

Why

When inserting messages into SqlStreamStore, it is possible to get gaps in the stream, e.g. from a transaction rolling back, or from a transaction that is not yet ready to commit.

In the first case the gap is legitimate, but in the second case any subscribers should wait until the gap has been filled. Upon seeing a gap, it is not possible to canonically determine which scenario produced it, so SqlStreamStore uses the following algorithm:

  1. Read a page of messages.
  2. Check that every message's position is monotonically incrementing.
  3. If any message is missing, wait DefaultReloadInterval ms, and reread a batch.
  4. Return the new batch, without checking for gaps.
    The reason for ignoring gaps in step 4 is that if the gap is legitimate (e.g. a tx was rolled back), not ignoring it would cause the subscriber to reload the page continuously.

The underlying issue is that during the DefaultReloadInterval before a batch is reloaded, subsequent messages may be written to the message queue. These new messages may also contain gaps, but these new messages will never be checked for inconsistencies.

e.g., consider batch size is 100

First read. 5 Messages in the messages table, batch size is 100, and message 3 has not yet committed (but will be)

ReadAllForwards sees messages [1, 2, *, 4, 5] // Message 3 is missing, so it will reload

Second Read. Now 10 messages from messages table, but message 7 has not been committed (but will be)

ReadAllForwards sees messages [1, 2, 3, 4, 5, 6, *, 8, 9, 10] // Message 7 is missing

ReadAllForwards will not re-validate this batch and will pass it on to the subscription for processing, permanently hiding message 7 from the subscription.

What

Alter the reloading strategy such that, upon reloading a page due to gaps, the reloaded page is re-validated, taking into account that a message may never exist (see the sketch after this list):

  • Load a page
  • If a gap is detected:
    • Wait for a delay
    • Reload the page
    • Check for new gaps. Any previously existing gaps can be ignored as abandoned messages (e.g. after a db tx rollback)
    • Loop until no new gaps exist
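
A sketch of that strategy (illustrative only; Page, FindGaps and the read delegate are stand-ins, not SqlStreamStore internals):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public static class GapHandling
    {
        // Stand-in for a page of messages read from the all-stream.
        public class Page
        {
            public IReadOnlyList<long> Positions { get; set; }
        }

        public static async Task<Page> ReadValidatedPage(
            Func<long, Task<Page>> readPage, long fromPosition, TimeSpan reloadDelay)
        {
            var page = await readPage(fromPosition);
            var seenGaps = new HashSet<long>(FindGaps(page));
            var hasNewGaps = seenGaps.Count > 0;

            while (hasNewGaps)
            {
                // Give in-flight transactions time to commit, then re-read the page.
                await Task.Delay(reloadDelay);
                page = await readPage(fromPosition);

                // Gaps that survive a reload are treated as abandoned (e.g. a rolled-back
                // transaction); only gaps not seen before force another wait-and-reload pass.
                hasNewGaps = false;
                foreach (var gap in FindGaps(page))
                {
                    if (seenGaps.Add(gap))
                    {
                        hasNewGaps = true;
                    }
                }
            }

            return page;
        }

        // A gap is any missing value in the otherwise monotonically incrementing positions.
        private static List<long> FindGaps(Page page)
        {
            var gaps = new List<long>();
            for (var i = 1; i < page.Positions.Count; i++)
            {
                for (var p = page.Positions[i - 1] + 1; p < page.Positions[i]; p++)
                {
                    gaps.Add(p);
                }
            }
            return gaps;
        }
    }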

SqlStreamStore.MsSql 1.0.1 is broken

The default dependency of SqlStreamStore.MsSql 1.0.1 (SqlStreamStore 1.0.1) does not exist. Please publish SqlStreamStore 1.0.1, or a fixed SqlStreamStore.MsSql that references SqlStreamStore 1.0.0.

Be able to opt out of deletion tracking

By default, the stores track stream and message deletes by appending a message to the $deleted stream. This allows downstream subscribers to handle deletions appropriately and provides a mechanism for audit tracking. However, it comes at a cost (the write and the storage), especially when a large number of streams have MaxCount or MaxAge applied and there is a large volume of appends. Some scenarios don't require deletion tracking (e.g. using the store as a KV store without subscribers, or when the user implements their own tombstoning pattern).

Log level upon disposing subscriptions

The log level seems to be Error when some subscriptions get disposed. Since disposing subscriptions is expected behavior, it might be useful to lower the log level.

    2017-02-10 14:31:53.635 +01:00 [Error] Error Disposed in <subscription name here/>. It will shut down.

Documentation

Hi. Just having a look at this repo; it looks interesting and potentially something we would use. However, I can't find much in the way of docs, particularly on how to get up and running with a proper SQL version (although I managed to muddle my way through the in-memory version without too much difficulty). Is there anywhere I've missed?

Feature Request: Change signature of AppendToStream such that it returns Task<AppendResult>

Currently the signature of AppendToStream on IStreamStore looks like this:

        Task AppendToStream(
            string streamId,
            int expectedVersion,
            NewStreamMessage[] messages,
            CancellationToken cancellationToken = default(CancellationToken));

What I would like is for it to look like this:

        Task<AppendResult> AppendToStream(
            string streamId,
            int expectedVersion,
            NewStreamMessage[] messages,
            CancellationToken cancellationToken = default(CancellationToken));

The use case for this is the following:

    public async Task Save(MyAggregateRootEntity root)
    {
        var name = StreamName.For<MyAggregateRootEntity>(root.Identifier);
        var result = await _connection.AppendToStream(
            name,
            root.ExpectedVersion,
            root
                .GetChanges()
                .Select(@event =>
                {
                    var serialized = EventSerialization.Instance.Serialize(@event);
                    return new NewStreamMessage(Guid.NewGuid(), serialized.EventContractName, serialized.EventJsonData, null);
                })
                .ToArray());
        //!!THIS LINE BELOW IS WHY WE ARE INTERESTED IN THE RESULT!!
        root.ExpectedVersion = result.NextExpectedVersion;
        root.ClearChanges();
    }

In essence, this relieves the stream store client API consumer from computing the next expected version; the store tells you what it is.

Why should I use SQLStreamStore instead of simple SQL implementation?

Our team wants to implement an event store, and I proposed using the SQLStreamStore library.
But our architect is asking what the benefits of the library are compared to a simple SQL implementation:

    Insert new messages into a SQL table with an auto-incremental identity key,
    and read a page of records from the table starting from a specified identity key value,
    with an optional filter on streamId.

Note that we will use Azure SQL Database, so support for multiple types of databases is not important for us.

How would you answer such a question? Which advantages of using the library can I point to?

Sorry if the question is offensive to you. Hopefully the answer could be added to your documentation.
