supabase / realtime

Broadcast, Presence, and Postgres Changes via WebSockets

Home Page: https://supabase.com/realtime

License: Apache License 2.0

Elixir 92.72% Dockerfile 0.48% HTML 3.52% Shell 0.43% Makefile 0.30% CSS 1.26% JavaScript 1.28% Batchfile 0.01%
elixir postgres postgresql realtime phoenix phoenix-framework cdc change-data-capture crdt distributed-systems

realtime's Introduction


Supabase Logo

Supabase Realtime

Send ephemeral messages, track and synchronize shared state, and listen to Postgres changes all over WebSockets.
Multiplayer Demo · Request Feature · Report Bug

Status

Features            v1   v2   Status
Postgres Changes    ✔    ✔    GA
Broadcast                ✔    Beta
Presence                 ✔    Beta

This repository focuses on version 2, but you can still access the previous version's code and Docker image. For the latest Docker images, go to https://hub.docker.com/r/supabase/realtime.

The codebase is under heavy development and the documentation is constantly evolving. Give it a try and let us know what you think by creating an issue. Watch releases of this repo to get notified of updates. And give us a star if you like it!

Overview

What is this?

This is a server built with Elixir using the Phoenix Framework that enables the following functionality:

  • Broadcast: Send ephemeral messages from one client to other clients with low latency.
  • Presence: Track and synchronize shared state between clients.
  • Postgres Changes: Listen to Postgres database changes and send them to authorized clients.

For a more detailed overview head over to Realtime guides.
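
If you'd like to see all three features in code, here is an illustrative sketch (not official documentation) using the @supabase/realtime-js v2 channel API; the URL, token variable, channel name, and table name are assumptions for local development:

// Sketch: Broadcast, Presence, and Postgres Changes on one channel.
const { RealtimeClient } = require('@supabase/realtime-js')

const client = new RealtimeClient('ws://realtime-dev.localhost:4000/socket', {
  params: { apikey: process.env.REALTIME_TOKEN }, // a JWT with exp and role claims (hypothetical env var)
})
client.connect()

const channel = client.channel('room-1')
channel
  // Broadcast: ephemeral client-to-client messages
  .on('broadcast', { event: 'cursor' }, (payload) => console.log('cursor', payload))
  // Presence: shared state synchronized across clients
  .on('presence', { event: 'sync' }, () => console.log('online:', channel.presenceState()))
  // Postgres Changes: database changes streamed to authorized clients
  .on('postgres_changes', { event: 'INSERT', schema: 'public', table: 'todos' },
    (payload) => console.log('new row', payload))
  .subscribe((status) => {
    if (status === 'SUBSCRIBED') {
      channel.send({ type: 'broadcast', event: 'cursor', payload: { x: 1, y: 2 } })
      channel.track({ user: 'alice' }) // announce presence to other clients
    }
  })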

Does this server guarantee message delivery?

The server does not guarantee that every message will be delivered to your clients, so keep that in mind as you're using Realtime.

Quick start

You can check out the Multiplayer demo that features Broadcast, Presence and Postgres Changes under the demo directory: https://github.com/supabase/realtime/tree/main/demo.

Client libraries

Server Setup

To get started, spin up your Postgres database and Realtime server containers defined in docker-compose.yml. As an example, you may run docker-compose -f docker-compose.yml up.

Note Supabase runs Realtime in production with a separate database that keeps track of all tenants. However, a schema, _realtime, is created when spinning up containers via docker-compose.yml to simplify local development.

A tenant has already been added on your behalf. You can confirm this by checking the _realtime.tenants and _realtime.extensions tables inside the database.

You can add your own tenant by making a POST request to the server. You must change both name and external_id; you may update other values as you see fit:

  curl -X POST \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiIiLCJpYXQiOjE2NzEyMzc4NzMsImV4cCI6MTcwMjc3Mzk5MywiYXVkIjoiIiwic3ViIjoiIn0._ARixa2KFUVsKBf3UGR90qKLCpGjxhKcXY4akVbmeNQ' \
  -d $'{
    "tenant" : {
      "name": "realtime-dev",
      "external_id": "realtime-dev",
      "jwt_secret": "a1d99c8b-91b6-47b2-8f3c-aa7d9a9ad20f",
      "extensions": [
        {
          "type": "postgres_cdc_rls",
          "settings": {
            "db_name": "postgres",
            "db_host": "host.docker.internal",
            "db_user": "postgres",
            "db_password": "postgres",
            "db_port": "5432",
            "region": "us-west-1",
            "poll_interval_ms": 100,
            "poll_max_record_bytes": 1048576,
          }
        }
      ]
    }
  }' \
  http://localhost:4000/api/tenants

Note The Authorization token is signed with the secret set by API_JWT_SECRET in docker-compose.yml.
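
For illustration, the management token can be minted with any JWT library; here is a minimal Node.js sketch using the jsonwebtoken package (the secret value is a placeholder, and the claims mirror what the decoded sample token above carries):

const jwt = require('jsonwebtoken')

// Must match API_JWT_SECRET from docker-compose.yml (placeholder shown here).
const API_JWT_SECRET = 'your-api-jwt-secret'

// The sample Bearer token above decodes to iss/iat/exp/aud/sub claims, so sign similarly.
const token = jwt.sign({ iss: '', aud: '', sub: '' }, API_JWT_SECRET, { expiresIn: '1h' })
console.log(token) // pass as: -H "Authorization: Bearer <token>"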

If you want to listen to Postgres changes, you can create a table and then add the table to the supabase_realtime publication:

create table test (
  id serial primary key
);

alter publication supabase_realtime add table test;

You can start playing around with the Broadcast, Presence, and Postgres Changes features either with the client libraries (e.g. @supabase/realtime-js) or with the built-in Realtime Inspector at http://localhost:4000/inspector/new (make sure the port matches your development environment).

The WebSocket URL must contain, as its subdomain, the external_id of the tenant in the _realtime.tenants table, and the token must be signed with the jwt_secret that was inserted along with the tenant.

If you're using the default tenant, the URL is ws://realtime-dev.localhost:4000/socket (make sure the port is correct for your development environment), and you can use eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MDMwMjgwODcsInJvbGUiOiJwb3N0Z3JlcyJ9.tz_XJ89gd6bN8MBpCl7afvPrZiBH6RB65iA1FadPT3Y for the token. The token must have exp and role (database role) keys.
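
As a sketch, a valid client token can be minted with the jsonwebtoken package and used with @supabase/realtime-js; the secret below is the default tenant's jwt_secret from the tenant payload above:

const jwt = require('jsonwebtoken')
const { RealtimeClient } = require('@supabase/realtime-js')

// `expiresIn` adds the mandatory `exp` claim; `role` is the database role.
const token = jwt.sign({ role: 'postgres' }, 'a1d99c8b-91b6-47b2-8f3c-aa7d9a9ad20f', {
  expiresIn: '1h',
})

const socket = new RealtimeClient('ws://realtime-dev.localhost:4000/socket', {
  params: { apikey: token },
})
socket.connect()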

ALL RELEVANT OPTIONS

Note Realtime server is tightly coupled to Fly.io at the moment.

PORT                       # {number}      Port that clients/listeners connect to
DB_HOST                    # {string}      Database host URL
DB_PORT                    # {number}      Database port
DB_USER                    # {string}      Database user
DB_PASSWORD                # {string}      Database password
DB_NAME                    # {string}      Postgres database name
DB_ENC_KEY                 # {string}      Key used to encrypt sensitive fields in the _realtime.tenants and _realtime.extensions tables. Recommended: 16 characters.
DB_AFTER_CONNECT_QUERY     # {string}      Query that is run after the server connects to the database.
API_JWT_SECRET             # {string}      Secret used to sign tokens that manage tenants and their extensions via HTTP requests.
FLY_ALLOC_ID               # {string}      Auto-set when deploying to Fly. Otherwise, set to any string.
FLY_APP_NAME               # {string}      Name of the server.
FLY_REGION                 # {string}      Name of the region that the server is running in. Fly auto-sets this on deployment. Otherwise, set to any string.
SECRET_KEY_BASE            # {string}      Secret used by the server to sign cookies. Recommended: 64 characters.
ERL_AFLAGS                 # {string}      Set to "-proto_dist inet_tcp" or "-proto_dist inet6_tcp" depending on whether your network uses IPv4 or IPv6, respectively.
ENABLE_TAILSCALE           # {string}      Use Tailscale for private networking. Set to either 'true' or 'false'.
TAILSCALE_APP_NAME         # {string}      Name of the Tailscale app.
TAILSCALE_AUTHKEY          # {string}      Auth key for the Tailscale app.
DNS_NODES                  # {string}      Node name used when running the server in a cluster.
MAX_CONNECTIONS            # {string}      Soft maximum for WebSocket connections. Defaults to '16384'.
MAX_HEADER_LENGTH          # {string}      Maximum header length for connections (in bytes). Defaults to '4096'.
NUM_ACCEPTORS              # {string}      Number of server processes that relay incoming WebSocket connection requests. Defaults to '100'.
DB_QUEUE_TARGET            # {string}      Maximum time to wait for a connection from the pool. Defaults to '5000' (5 seconds). See https://hexdocs.pm/db_connection/DBConnection.html#start_link/2-queue-config for more info.
DB_QUEUE_INTERVAL          # {string}      Interval used to check whether all connections were checked out under DB_QUEUE_TARGET. If all connections surpassed the target during this interval, then the target is doubled. Defaults to '5000' (5 seconds). See https://hexdocs.pm/db_connection/DBConnection.html#start_link/2-queue-config for more info.
DB_POOL_SIZE               # {string}      Number of connections in the database pool. Defaults to '5'.
SLOT_NAME_SUFFIX           # {string}      Appended to the replication slot name to allow a custom slot name. May contain lowercase letters, numbers, and underscores. Together with the default `supabase_realtime_replication_slot`, the slot name should be at most 64 characters long.

WebSocket URL

The WebSocket URL is in the following format for local development: ws://[external_id].localhost:4000/socket/websocket

If you're using Supabase's hosted Realtime in production, the URL is wss://[project-ref].supabase.co/realtime/v1/websocket?apikey=[anon-token]&log_level=info&vsn=1.0.0

WebSocket Connection Authorization

WebSocket connections are authorized via symmetric JWT verification. Only JWTs signed with one of the following algorithms are supported:

  • HS256
  • HS384
  • HS512

Verify JWT claims by setting JWT_CLAIM_VALIDATORS:

e.g. {'iss': 'Issuer', 'nbf': 1610078130}

Then the JWT's "iss" value must equal "Issuer" and its "nbf" value must equal 1610078130.

Note:

JWT expiration is checked automatically. exp and role (database role) keys are mandatory.

Authorizing Client Connection: You can pass in the JWT by following the instructions under the Realtime client lib. For example, refer to the Usage section in the @supabase/realtime-js client library.

License

This repo is licensed under Apache 2.0.

Credits

realtime's People

Contributors

0xflotus, abc3, awalias, benmccann, burmecia, chasers, darora, dependabot[bot], dhenson02, dragarcia, egor-romanov, filipecabaco, fracek, giovannibonetti, hf, inian, j0, johnoscott, kiwicopple, mats16, mul1sh, ngryman, samtgarson, soedirgo, sweatybridge, vkartk, w3b6x9, williamyeny, ziinc

realtime's Issues

Set up with Layer CI - https://show.layerci.com/

Our batch mates just released this: https://show.layerci.com/

We can put this on their page by:

  1. Log in
  2. click "Submit project"
  3. Post title: "Supabase Realtime"

For the configuration, it is probably easiest to modify the config used for Redash:

FROM vm/ubuntu:18.04

# install docker
RUN apt-get update && \
    apt-get install -y apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && \
    apt-get update && \
    apt-get install -y docker-ce
    
# install compose
RUN apt-get update && \
    apt-get install -y python3-dev libffi-dev libssl-dev gcc build-essential

RUN curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose && \
    chmod 755 /usr/local/bin/docker-compose

# copy files from the repository into this staging server
WORKDIR /opt/redash
COPY . .

# set up environment
RUN rm docker-compose.yml && wget https://raw.githubusercontent.com/getredash/setup/master/data/docker-compose.yml
RUN echo "REDASH_REDIS_URL=redis://redis:6379/0" >> env && \
    echo "POSTGRES_PASSWORD=password" >> env && \
    echo "REDASH_COOKIE_SECRET=cookie" >> env && \
    echo "REDASH_SECRET_KEY=secretkey" >> env && \
    echo "REDASH_DATABASE_URL=postgresql://postgres:password@postgres/postgres" >> env
    
# pre-pull everything to improve build times
RUN docker-compose pull && \
    grep -E 'FROM\s+\S+' Dockerfile | awk '{print $2}' | xargs -I {} docker pull {}

# create database & start everything
RUN docker-compose run --rm server create_db
RUN docker-compose up -d
EXPOSE WEBSITE http://localhost:5000


realtime_web npm error in docker

This is what I get when trying to run this project according to the readme:

web_1       | npm ERR! path /usr/src/service/node_modules/npm/node_modules/cliui/node_modules/ansi-regex/package.json.1314949609
web_1       | npm ERR! code ENOENT
web_1       | npm ERR! errno -2
web_1       | npm ERR! syscall open
web_1       | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/service/node_modules/npm/node_modules/cliui/node_modules/ansi-regex/package.json.1314949609'
web_1       | npm ERR! enoent This is related to npm not being able to find a file.
web_1       | npm ERR! enoent
web_1       |
web_1       | npm ERR! A complete log of this run can be found in:
web_1       | npm ERR!     /root/.npm/_logs/2020-05-29T21_44_32_747Z-debug.log
realtime_web_1 exited with code 254

The db and realtime images work fine.
Any advice?
Thanks

unable to filter delete realtime messages

Bug report

Describe the bug

The Realtime filter doesn't work properly on a DELETE subscription; it seems to filter out all messages.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

supabase.from(`${table}:${column}=eq.${value}`)
  .on('DELETE', () => console.log("random text"))
  .subscribe()

Expected behavior

Only messages respecting the condition should go through.

Screenshots

n/a

Event parser

Feature request

Right now we are blindly sending all events over the websockets. For various reasons (primarily security), a developer may want to filter which events are sent out. In the future this could also extend to include JSON transformer discussed here.

Describe the solution you'd like

We already have the Configuration manager for adding webhooks:

config = [
  %Configuration.Webhook{
    event: "*",
    relation: "public:todos"
  }
]

we can probably re-use this for Realtime:

config = [
  %Configuration.Realtime{
    event: "*",
    relation: "public:todos"
  },
  %Configuration.Realtime{
    event: "*",
    relation: "public:users:id"
  }
]

If this config is present, the realtime server will stop sending events for everything, and only send events matching the specified rules, although I can see cases where people want to allow everything and then use these rules to enrich some routes.

In this case the Realtime server will only send events that happen on the todos table (including all the columns? ie: todos:id.eq.1), and on the public:users:id.eq.* channel.

This is a workaround using security through obscurity. For example, if the user ID is a uuid, then it will be fairly difficult to listen to other users' data - supabase.from('users:id = eq.21a285d7-6028-4967-a2a9-fb5afdc9a4b4').on('*', () => {})

Finally, in the future we may extend this with a transformer:

config = [
  %Configuration.Realtime{
    event: "*",
    relation: "public:todos",
    transformer: '.[0] | {message: .commit.message, name: .commit.committer.name}' # fake, using jq filter
  }
]

Describe alternatives you've considered

None come to mind, but I'm not particularly happy with this part:

I can see cases where people want to allow everything, and then use these rules to enrich some routes

Also I don't particularly like how the "Channels" config is now mixed in with the "Connectors" config.

Additional context

edit: @awalias changed:
supabase.from('users:id.eq.21a285d7-6028-4967-a2a9-fb5afdc9a4b4').on('*', () => {}) to
supabase.from('users:id = eq.21a285d7-6028-4967-a2a9-fb5afdc9a4b4').on('*', () => {})

KeyError key record not found

Possibly introduced in https://github.com/supabase/realtime/tree/951ef2350465d42eb6f741f4659ed5b6fda4cd7b

Realtime version 7.0.5 @ 951ef23

2020-06-04 08:21:11.454 [info] "realtime:public"
2020-06-04 08:21:11.454 [info] "realtime:public:product"
2020-06-04 08:21:11.458 [error] GenServer Realtime.SubscribersNotification terminating
** (KeyError) key :record not found in: %Realtime.Adapters.Changes.DeletedRecord{columns: [%Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "id", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "name", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "ean", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "manufacturer", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "asin", type: "text", type_modifier: 4294967295}], commit_timestamp: ~U[2020-05-31 00:00:48Z], old_record: %{"asin" => nil, "ean" => nil, "id" => "0dfbf2c9-6e9f-494b-bbef-fb35295aad9c", "manufacturer" => nil, "name" => nil}, schema: "public", table: "product", type: "DELETE"}
    (realtime 0.7.1) lib/realtime/subscribers_notification.ex:62: anonymous fn/3 in Realtime.SubscribersNotification.notify_subscribers/1
    (elixir 1.10.3) lib/enum.ex:2111: Enum."-reduce/3-lists^foldl/2-0-"/3
    (realtime 0.7.1) lib/realtime/subscribers_notification.ex:43: Realtime.SubscribersNotification.notify_subscribers/1
    (realtime 0.7.1) lib/realtime/subscribers_notification.ex:24: Realtime.SubscribersNotification.handle_call/3
    (stdlib 3.13) gen_server.erl:706: :gen_server.try_handle_call/4
    (stdlib 3.13) gen_server.erl:735: :gen_server.handle_msg/6
    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message (from #PID<0.2402.0>): {:notify, %Realtime.Adapters.Changes.Transaction{changes: [%Realtime.Adapters.Changes.DeletedRecord{columns: [%Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "id", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "name", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "ean", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "manufacturer", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "asin", type: "text", type_modifier: 4294967295}], commit_timestamp: nil, old_record: %{"asin" => nil, "ean" => nil, "id" => "0dfbf2c9-6e9f-494b-bbef-fb35295aad9c", "manufacturer" => nil, "name" => nil}, schema: "public", table: "product", type: "DELETE"}], commit_timestamp: ~U[2020-05-31 00:00:48Z]}}
2020-06-04 08:21:11.465 [error] GenServer #PID<0.2402.0> terminating
** (stop) exited in: GenServer.call(Realtime.SubscribersNotification, {:notify, %Realtime.Adapters.Changes.Transaction{changes: [%Realtime.Adapters.Changes.DeletedRecord{columns: [%Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "id", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "name", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "ean", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "manufacturer", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "asin", type: "text", type_modifier: 4294967295}], commit_timestamp: nil, old_record: %{"asin" => nil, "ean" => nil, "id" => "0dfbf2c9-6e9f-494b-bbef-fb35295aad9c", "manufacturer" => nil, "name" => nil}, schema: "public", table: "product", type: "DELETE"}], commit_timestamp: ~U[2020-05-31 00:00:48Z]}}, 5000)
    ** (EXIT) an exception was raised:
        ** (KeyError) key :record not found in: %Realtime.Adapters.Changes.DeletedRecord{columns: [%Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "id", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "name", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "ean", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "manufacturer", type: "uuid", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [], name: "asin", type: "text", type_modifier: 4294967295}], commit_timestamp: ~U[2020-05-31 00:00:48Z], old_record: %{"asin" => nil, "ean" => nil, "id" => "0dfbf2c9-6e9f-494b-bbef-fb35295aad9c", "manufacturer" => nil, "name" => nil}, schema: "public", table: "product", type: "DELETE"}
            (realtime 0.7.1) lib/realtime/subscribers_notification.ex:62: anonymous fn/3 in Realtime.SubscribersNotification.notify_subscribers/1
            (elixir 1.10.3) lib/enum.ex:2111: Enum."-reduce/3-lists^foldl/2-0-"/3
            (realtime 0.7.1) lib/realtime/subscribers_notification.ex:43: Realtime.SubscribersNotification.notify_subscribers/1
            (realtime 0.7.1) lib/realtime/subscribers_notification.ex:24: Realtime.SubscribersNotification.handle_call/3
            (stdlib 3.13) gen_server.erl:706: :gen_server.try_handle_call/4
            (stdlib 3.13) gen_server.erl:735: :gen_server.handle_msg/6
            (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
    (elixir 1.10.3) lib/gen_server.ex:1023: GenServer.call/3
    (realtime 0.7.1) lib/realtime/replication.ex:84: Realtime.Replication.process_message/2
    (realtime 0.7.1) lib/realtime/replication.ex:56: Realtime.Replication.handle_info/2
    (stdlib 3.13) gen_server.erl:680: :gen_server.try_dispatch/4
    (stdlib 3.13) gen_server.erl:756: :gen_server.handle_msg/6
    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {:epgsql, #PID<0.2403.0>, {:x_log_data, 24207728, 24207728, <<67, 0, 0, 0, 0, 0, 1, 113, 97, 64, 0, 0, 0, 0, 1, 113, 97, 112, 0, 2, 73, 229, 35, 177, 168, 88>>}}
2020-06-04 08:21:11.509 [info] Application realtime exited: shutdown
{"Kernel pid terminated",application_controller,"{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...x_log_data, 24207728, 24207728, <<67, 0, 0, 0, 0, 0, 1, 113, 97, 64, 0, 0, 0, 0, 1, 113, 97, 112, 0, 2, 73, 229, 35, 177, 168, 88>>}}
2020-05-31 00:00:48.677 [info] Application realtime exited: shutdown
{"Kernel pid terminated",application_controller,"{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Restructure repo

We should use this repo for all of our realtime code. eg:

  • server
  • client
    • realtime-js
    • realtime-contrib-nodered
    • realtime-py

crash: replication slot already exists

07:19:50.702 [info] Running RealtimeWeb.Endpoint with cowboy 2.6.3 at :::4000 (http)
07:19:50.703 [info] Access RealtimeWeb.Endpoint at http://localhost:4000
07:19:50.799 [info] Application realtime exited: Realtime.Application.start(:normal, []) returned an error: shutdown: failed to start child: Realtime.Replication
    ** (EXIT) an exception was raised:
        ** (MatchError) no match of right hand side value: {:error, {:error, :error, "42710", :duplicate_object, "replication slot \"pid0_2356_0\" already exists", [file: "slot.c", line: "251", routine: "ReplicationSlotCreate", severity: "ERROR"]}}
            (realtime) lib/adapters/postgres/epgsql_implementation.ex:24: Realtime.Adapters.Postgres.EpgsqlImplementation.init/1
            (stdlib) gen_server.erl:374: :gen_server.init_it/2
            (stdlib) gen_server.erl:342: :gen_server.init_it/6
            (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Kernel pid terminated (application_controller) ({application_start_failure,realtime,{{shutdown,{failed_to_start_child,'Elixir.Realtime.Replication',{{badmatch,{error,{error,error,<<"42710">>,duplicate
{"Kernel pid terminated",application_controller,"{application_start_failure,realtime,{{shutdown,{failed_to_start_child,'Elixir.Realtime.Replication',{{badmatch,{error,{error,error,<<\"42710\">>,duplicate_object,<<\"replication slot \\"pid0_2356_0\\" already exists\">>,[{file,<<\"slot.c\">>},{line,<<\"251\">>},{routine,<<\"ReplicationSlotCreate\">>},{severity,<<\"ERROR\">>}]}}},[{'Elixir.Realtime.Adapters.Postgres.EpgsqlImplementation',init,1,[{file,\"lib/adapters/postgres/epgsql_implementation.ex\"},{line,24}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,374}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]}}},{'Elixir.Realtime.Application',start,[normal,[]]}}}"}
Crash dump is being written to: erl_crash.dump...done

Broadcast changes to relations

At the moment we are broadcasting *

A customer should be able to "listen" to changes to individual tables. To achieve this, we should also broadcast based on the relation (schema + table), and allow the customer to listen based on this criteria.

For example:

// 'schema' is optional, and defaults to 'public'
Realtime.addListener({schema: 'public', table: 'chats'}, change => {
  console.log(change)
})

Changes required:

  • Server: should broadcast both on * AND the schema+table
  • JS client: should accept an object as the first param

Row level filtering

Users should be able to subscribe to events only on specific rows, based on column criteria:

supabase
.from('analytics')
.eq('customer_id', 3)
.subscribe()

Dynamic Configuration

The realtime module supports quasi-dynamic configuration from a json file. In the future it should be completely dynamic so that users can add/remove webhooks and other connectors without restarting the application. This last requirement rules out using Config.Provider.

Some issues that came up while implementing #46:

  • If users can add webhooks from the web application, where do we store this data and how do we update realtime to know about it? How can realtime reload the config if it crashes?
  • Should all configuration be stored in the same way or should we allow multiple configuration options? If the latter, how do we join the configurations?
  • At the moment, the connectors genserver queries the configuration manager every time it needs to notify the subscribers. This has the implication that if the configuration manager crashes (for example because the json file is malformed), the webhook manager will stop sending out notifications. We should replace the configuration manager with an Agent to hold the configuration struct, and a genserver to update it. If the latter crashes, other components can still access the old configuration.

Add self-hosting instructions

AWS: https://aws.amazon.com/marketplace/pp/B089N4FH7N/
DO: https://marketplace.digitalocean.com/apps/supabase-realtime
Docker: https://hub.docker.com/r/supabase/realtime

Realtime was released to the marketplaces, which means it is now simpler to get started. We should start documenting the process for hosting and installing Realtime.

I think we should move this documentation onto our website, but using the same conventions that we used for postgres: https://github.com/supabase/postgres/wiki

We should end up with 5 pages:

- Realtime
  |- About
  |- Docker
  |- AWS
  |- Digital Ocean
  |- Build from scratch

Authentication

Users should be able to use Postgres row-level security.

Possible approach:

  • follow postgrest authentication
  • give customer ability to pass through their own keys/spec
  • do this on our own API gateway (won't work for self hosted) <--- favourite for now
supabase
.from('analytics')
.subscribe('users', 3, 'token', {...})

Refactor the notifiers logic and plan for connectors

At the moment our decoder gen_server is doing a lot of work. We should (maybe) refactor it into smaller gen_servers, specifically this part here:

defp notify_subscribers(%State{transaction: {_current_txn_lsn, txn}}) do

This part is where we parse the data changes and then send it to various channel topics:

[diagram: parsing data changes and sending them to channel topics]

It would be better to separate this out, because we eventually want this structure (comments/improvements welcome):

[diagram: proposed structure]

Database disk usage issue?

I connect the supabase/realtime server to a managed postgres db on DigitalOcean.
There seems to be an issue with disk usage which is related to the realtime server: if I boot up the realtime server, the db's disk usage will at some point begin rising. After reaching around 96% it will drop down, but it will rise again shortly after. If I shut the server down, the disk usage will drop back to around 20% and stay there.

[screenshot 2020-08-04: DigitalOcean database disk usage graph]

I can't tell if it has significant implications for database usability/performance, as I'm not yet using it in production, but it is annoying, as I always get warnings from DigitalOcean like this one:

Your database XXXXX is low on resources. Hello, Your postgres service, XXXXX, is running low on disk. If your service runs out of disk space, writes will be denied and other functionality could be rendered unavailable. Either delete unnecessary data or upgrade to a larger plan to avoid service outage.

500 error report

service ref vkvjnvalmrvcmmlggpfr

2020-10-06 06:44:12.538 [error] GenServer #PID<0.2624.0> terminating
** (CaseClauseError) no case clause matching: 3
    (realtime 0.7.7) lib/decoder/decoder.ex:189: Realtime.Decoder.decode_message_impl/1
    (realtime 0.7.7) lib/realtime/replication.ex:52: Realtime.Replication.handle_info/2
    (stdlib 3.13) gen_server.erl:680: :gen_server.try_dispatch/4
    (stdlib 3.13) gen_server.erl:756: :gen_server.handle_msg/6
    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {:epgsql, #PID<0.2625.0>, {:x_log_data, 207641560, 207641560, <<84, 0, 0, 0, 1, 3, 0, 0, 64, 138>>}}
2020-10-06 06:44:12.938 [info] Application realtime exited: shutdown
{"Kernel pid terminated",application_controller,"{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...controller,"{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...ion_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump......dump....{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...lication_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump....mp..."{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...ation_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...04.935 [info] JOINED realtime:public:project in 144µs
  Parameters: %{}
2020-10-05 04:50:05.034 [info] Application realtime exited: shutdown
{"Kernel pid terminated",application_controller,"{application_terminated,realtime,shutdown}"}
Kernel pid terminated (application_controller) ({application_terminated,realtime,shutdown})

Crash dump is being written to: erl_crash.dump...ange, :epgsql_idatetime}, 1186 => {:type, 1186, :interval, false, :undefined, :epgsql_codec_datetime, :epgsql_idatetime}, 1115 => {:type, 1115, :timestamp, true, 1114, :epgsql_codec_datetime, :epgsql_idatetime}, 2951 => {:type, 2951, :uuid, true, 2950, :epgsql_codec_uuid, []}, 1014 => {:type, 1014, :bpchar, true, 1042, :epgsql_codec_bpchar, []}, 1183 => {:type, 1183, :time, true, 1083, :epgsql_codec_datetime, :epgsql_idatetime}, 651 => {:type, 651, :cidr, true, 650, :epgsql_codec_net, []}, 3926 => {:type, 3926, :int8range, false, :undefined, :epgsql_codec_intrange, []}, 1000 => {:type, 1000, :bool, true, 16, :epgsql_codec_boolean, ...}, 1187 => {:type, 1187, :interval, true, 1186, ...}, 1043 => {:type, 1043, :varchar, false, ...}, 774 => {:type, 774, :macaddr8, ...}, 3904 => {:type, 3904, ...}, 1040 => {:type, ...}, 3802 => {...}, ...}, %{{:bpchar, true} => 1014, {:timestamptz, true} => 1185, {:timetz, false} => 1266, {:timestamptz, false} => 1184, {:text, true} => 1009, {:date, true} => 1182, {:float4, false} => 700, {:int4range, true} => 3905, {:uuid, false} => 2950, {:char, false} => 18, {:tsrange, false} => 3908, {:cidr, false} => 650, {:varchar, false} => 1043, {:inet, true} => 1041, {:json, true} => 199, {:int8, true} => 1016, {:tstzrange, false} => 3910, {:daterange, true} => 3913, {:int8, false} => 20, {:timestamp, false} => 1114, {:int4, true} => 1007, {:float4, true} => 1021, {:tsrange, true} => 3909, {:time, false} => 1083, {:text, false} => 25, {:timestamp, true} => 1115, {:int2, true} => 1005, {:date, false} => 1082, {:macaddr8, false} => 774, {:macaddr, false} => 829, {:char, true} => 1002, {:bytea, false} => 17, {:cidr, true} => 651, {:daterange, false} => 3912, {:float8, true} => 1022, {:tstzrange, ...} => 3911, {...} => 1270, ...}}}, {[], []}, :undefined, :undefined, :undefined, :undefined, [{"application_name", ""}, {"client_encoding", "UTF8"}, {"DateStyle", "ISO, MDY"}, {"integer_datetimes", "on"}, {"IntervalStyle", "postgres"}, {"is_superuser", "on"}, {"server_encoding", "UTF8"}, {"server_version", "12.3 (Ubuntu 12.3-1.pgdg18.04+1)"}, {"session_authorization", "supabase_admin"}, {"standard_conforming_strings", "on"}, {"TimeZone", "UTC"}], [], [], true, 73, :undefined, {:repl, :undefined, 0, 0, :undefined, :undefined, :undefined, #PID<0.2628.0>, false}}, :undefined)
    (epgsql 4.3.0) /opt/supabase/server/deps/epgsql/src/epgsql_sock.erl:294: :epgsql_sock.command_handle_message/3
    (epgsql 4.3.0) /opt/supabase/server/deps/epgsql/src/epgsql_sock.erl:378: :epgsql_sock.loop/1
    (stdlib 3.13) gen_server.erl:680: :gen_server.try_dispatch/4
    (stdlib 3.13) gen_server.erl:756: :gen_server.handle_msg/6
    (stdlib 3.13) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {:tcp, #Port<0.17>, <<69, 0, 0, 0, 116, 83, 69, 82, 82, 79, 82, 0, 86, 69, 82, 82, 79, 82, 0, 67, 53, 53, 48, 48, 54, 0, 77, 114, 101, 112, 108, 105, 99, 97, 116, 105, 111, 110, 32, 115, 108, 111, 116, 32, 34, 114, 101, ...>>}

v0.7.3 & v0.7.4 doesn't include the whole `mix release` artifacts

I should've checked this beforehand, but I just found out that mix release doesn't produce a single file binary, but rather a shell script that requires many other artifacts in its parent directories to work. 🤦‍♂️

I'll figure out how this should be done as I work on the AMI/DO images, but for now I'll remove the artifacts until I get this right.

guide for dropping this into an existing phoenix app?

I'm currently building an MVP using Phoenix. Subscribing to database changes is something I've been looking into, and I was researching Debezium for that purpose. This looks like a much better fit, however. Are there any existing docs on how to integrate this into an existing Phoenix app?

Also, do you guys have clients for iOS/Android?

Transaction Filter Definition

In #33 we started discussing how a transaction filter should look and work.
I think the first step is to discuss how you see users using this feature.

It's a good idea to start by defining what we want to achieve with the filter and only then define its syntax. If we get the core idea right, we can avoid a ridiculously complex parser and make it easy to extend in the future.

The language should filter on:

  • Schema by name
  • Tables by name
  • Columns by value; it should have equality and inequality for sure, but I'm not sure about comparison. For example, it doesn't make much sense to say buffer1 > buffer2 (i.e. comparing two byte columns), so we either type-check the filters or allow some weird behaviour.

It should maybe implement:

  • Logical conditions, e.g. user.id=4 or user.id=10, user.role in ['admin', 'super-admin']

It should not filter:

  • Dynamically, e.g. with conditions like column1 value equals column2 value
  • Using data from other tables

Nice to have:

  • Warn user if we are filtering on a column that does not exist. This would help catch spelling mistakes and make it more user friendly in general.
  • Warn about malformed filter definitions as soon as possible, i.e. before the filter is used.

Prior Art

Most logging services provide a similar filtering language. We should look at a couple of examples to see how they solve this problem.

Use case: syncing to Elasticsearch/Solr

#40 (comment)

Can you elaborate a bit on your particular use-case? It will help with our planning and improvements. Feel free to start a new issue if I haven't correctly addressed your question here

Hi @kiwicopple. I'm currently in an exploratory stage following a requirement to sync data from PostgreSQL to Solr or Elasticsearch and seeing what's out there. I knew logical replication was the only way to go but needed to explore existing solutions. I heard about supabase realtime a year ago so been kicking the tyres to see what it could do.

My thinking was to subscribe to tables of interest and update or insert records to ES/Solr. Guaranteed delivery is obviously preferred for this. I don't like the idea of having to occasionally sync hundreds of thousands of emails that might be out of sync.

While exploring supabase/realtime however, I see how it might be beneficial for other aspects of a product I'm working on.

• Subscribing to tables instead of creating lots of NOTIFY triggers (as we currently do in production).

One thing that jumped out is that the payload size, including the types object (while useful), really adds up over thousands of records (even more if you're receiving the OLD record). I think I'd prefer it if, when subscribing, I could request specific data.

This seems to be partly discussed in #47, but that seems to just be conditionally filtering data (which is good). What I would find useful would be requesting a subset of the data, so that optionally only the bare minimum needs to be transmitted between the realtime server and client. A JSON query and transformation language like https://github.com/schibsted/jslt or https://github.com/jsonata-js/jsonata would be ideal, as this could be passed as an argument on subscription and transform the data on the server side. I don't know whether equivalent libraries are available in Elixir.
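
For illustration only, such a transform might look like this JSONata sketch in JavaScript (the payload shape is simplified and the field names are hypothetical; note that jsonata's evaluate returns a Promise in recent versions):

const jsonata = require('jsonata')

// A simplified, hypothetical change payload.
const change = {
  schema: 'public',
  table: 'emails',
  type: 'INSERT',
  record: { id: 1, subject: 'hello', body: '...', internal_flags: 'x' },
}

// Keep only the fields the subscriber asked for.
const expression = jsonata('{ "id": record.id, "subject": record.subject }')
expression.evaluate(change).then((slim) => console.log(slim)) // { id: 1, subject: 'hello' }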

Question:
I'm assuming the preferred approach to reduce the load on the server and the streaming output from PostgreSQL is to only create a publication for a subset of tables rather than CREATE PUBLICATION supabase_realtime FOR ALL TABLES;?

One more thing: we are planning the benchmark test as a requirement to move from alpha to beta.

BTW, the checkboxes in the README for this repo suggest you're already in Beta (https://github.com/supabase/realtime#status), but the paragraph below suggests otherwise. I didn't read the paragraph originally, just the checkboxes, so I thought you were in Beta. :)

Include date in log timestamps

Currently logs look like:
02:11:23.876 [error] GenServer #PID<0.9788.0> terminating

It would be useful to change the timestamp to a datetime instead of just a time.

Docker-compose won't start - eafnosupport (address family not supported by protocol family)

I am getting an error when following the instructions at https://supabase.io/docs/realtime/docker:

docker-compose up
Starting rnbackend_db_1 ... done
Starting rnbackend_realtime_1 ... done
Attaching to rnbackend_db_1, rnbackend_realtime_1
db_1        | 
db_1        | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1        | 
db_1        | 2020-07-21 06:25:46.161 UTC [1] LOG:  starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1        | 2020-07-21 06:25:46.163 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1        | 2020-07-21 06:25:46.164 UTC [1] LOG:  could not create IPv6 socket for address "::": Address family not supported by protocol
db_1        | 2020-07-21 06:25:46.173 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1        | 2020-07-21 06:25:46.207 UTC [25] LOG:  database system was shut down at 2020-07-21 06:25:24 UTC
db_1        | 2020-07-21 06:25:46.214 UTC [1] LOG:  database system is ready to accept connections
realtime_1  | 06:25:50.598 [error] Failed to start Ranch listener RealtimeWeb.Endpoint.HTTP in :ranch_tcp:listen([{:cacerts, :...}, {:key, :...}, {:cert, :...}, :inet6, {:port, 4000}]) for reason :eafnosupport (address family not supported by protocol family)
realtime_1  | 
realtime_1  | 06:25:50.599 [info] Application realtime exited: Realtime.Application.start(:normal, []) returned an error: shutdown: failed to start child: RealtimeWeb.Endpoint
realtime_1  |     ** (EXIT) shutdown: failed to start child: {:ranch_listener_sup, RealtimeWeb.Endpoint.HTTP}
realtime_1  |         ** (EXIT) shutdown: failed to start child: :ranch_acceptors_sup
realtime_1  |             ** (EXIT) {:listen_error, RealtimeWeb.Endpoint.HTTP, :eafnosupport}
realtime_1  | {"Kernel pid terminated",application_controller,"{application_start_failure,realtime,{{shutdown,{failed_to_start_child,'Elixir.RealtimeWeb.Endpoint',{shutdown,{failed_to_start_child,{ranch_listener_sup,'Elixir.RealtimeWeb.Endpoint.HTTP'},{shutdown,{failed_to_start_child,ranch_acceptors_sup,{listen_error,'Elixir.RealtimeWeb.Endpoint.HTTP',eafnosupport}}}}}}},{'Elixir.Realtime.Application',start,[normal,[]]}}}"}
realtime_1  | Kernel pid terminated (application_controller) ({application_start_failure,realtime,{{shutdown,{failed_to_start_child,'Elixir.RealtimeWeb.Endpoint',{shutdown,{failed_to_start_child,{ranch_listener_sup
realtime_1  | 
realtime_1  | Crash dump is being written to: erl_crash.dump...done
rnbackend_realtime_1 exited with code 1

Socket is not a constructor

Bug report

Describe the bug

When following the node example in this repo, I'm receiving the following error on the socket.connect() line:

"TypeError: Socket is not a constructor"

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

Follow the example here:

https://github.com/supabase/realtime/blob/master/examples/node-js/src/server.js

Use an app.supabase.io hosted API URL for the process.env.REALTIME_URL variable.

Expected behavior

The socket should connect successfully.

System information

  • OS: Windows 10
  • Browser (if applies) N/A
  • Version of realtime-js: 1.0.6
  • Version of Node.js: 14.15.0

Additional context

I'm assuming that if we're using app.supabase.io to host our DB and API, we should be able to connect the realtime-js client to that, rather than self-hosting the realtime API using Docker as explained in the examples?

:unchanged_toast is causing errors

Just as a gentle reminder, the latest Docker image for supabase/realtime still has this bug. I tracked it down to this issue, rebuilt from the latest code in the repo using the provided Dockerfile, and the problem was resolved.

Originally posted by @kwakwaversal in #50 (comment)

Special character pg passwords

Some combinations of special characters passed as DB_PASSWORD seem to be causing connection failures - the password may need to be URI-encoded before trying to connect to Postgres.
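
As a sketch of the suggested workaround (the values are hypothetical; the open question is whether the server should do this itself):

// Sketch: URI-encode a password before placing it in a connection string.
const rawPassword = 'p@ss#w0rd/&?'
const encoded = encodeURIComponent(rawPassword)
const connectionString = `postgres://postgres:${encoded}@localhost:5432/postgres`
console.log(connectionString)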

How do you work with authentication and authorization?

Question 1:
I saw that #44 mentioned that this is supposed to be a standalone server. How do we handle authentication in this case?

Question 2:
If a User has_many Blogs, how do I ensure that the current_user is only subscribing to changes of blogs that belong to them?

Question 3:
Is there a recommended way to handle business logic?

Large inserts choke micro instances

Inserting 1 million rows at the same time causes excessive CPU usage for an extended period afterwards, along with ~50% RAM:

[screenshot: CPU and RAM usage]

test code here: https://raw.githubusercontent.com/supabase/benchmarks/feat/firestore-benchmarks/db/migrations/20201124141199_read_data.sql

Ruled out COPY being the issue, since the following snippet also results in the same behaviour:

do $$
begin
for r in 1..1000000 loop
insert into public.read(id, slug) values(r, r);
end loop;
end;
$$;

Restarting the service (process) does not resolve the issue

JavaScript examples don't work for me

Hey,

just wanted to let you know that there seems to be an error in the JS examples.
I'm trying to get this to work on a Node.js server (not a JS expert).
The below code block does not work for me.

import { Socket } = '@supabase/realtime-js'

const REALTIME_URL = process.env.REALTIME_URL || 'http://localhost:4000'

var socket = new Socket(REALTIME_URL)
socket.connect()

// Listen to only INSERTS on the 'users' table in the 'public' schema
var allChanges = this.socket.channel('realtime:public:users')
  .join()
  .on('INSERT', payload => { console.log('Update received!', payload) })

What does work instead is the following:

const { Socket } = require('@supabase/realtime-js'); // related to node/browser difference?

const REALTIME_URL = process.env.REALTIME_URL || 'http://localhost:4000/socket' // "/socket" had to be added

var socket = new Socket(REALTIME_URL) 
socket.connect()

var allChanges = socket.channel('realtime:public:users') // removed "this." here (related to node/browser difference?)
  .join()
  .channel // added ".channel" because "join()" returns Push object
  .on('INSERT', payload => { console.log('Record inserted!', payload) })

Thanks for the great lib :)

Phoenix Channels is appending /websocket after query params

Expected Behaviour

When implementing web sockets, the expected resultant web socket url would be:
ws://world.supabase.co/socket/websocket?apikey=xxxxx.

Current Behaviour

When passing the url to the library phoenix-channels, the resultant web socket becomes:
ws://world.supabase.co?apikey=xxxxx/websocket

This leads to a 404 error.

Possible Solution

Fork the library phoenix-channels and edit it in order to achieve the above-mentioned expected behaviour.
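
For illustration, the desired construction appends the transport segment to the path before the query string; a hypothetical sketch (the real fix belongs inside the phoenix-channels library):

// Sketch: `/websocket` must land before the query string, not after it.
const endpoint = 'ws://world.supabase.co/socket'
const apikey = 'xxxxx'
const url = `${endpoint}/websocket?apikey=${encodeURIComponent(apikey)}`
// → ws://world.supabase.co/socket/websocket?apikey=xxxxx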

Pick up from last wal_position

When we start the server, it reads from wal_position = {"0", "0"}:

wal_position: {"0", "0"}, # You can provide a different WAL position if desired, or default to allowing Postgres to send you what it thinks you need

We need a way for the server to keep track of its position and then pick up where it last read from (if the server dies).

A Rust implementation?

I noticed that there is a Rust implementation of the Postgres client, but not for the server. I think that the server would benefit from the speed a Rust implementation (possibly more than the client) could bring.
This should maybe be in the realtime repository instead.

All data/changes are being "stringified"

See this issue here: cainophile/pgoutput_decoder#1

In summary, the decoded data is in the format:

[debug] Received binary message: <<66, 0, 0, 0, 0, 3, 97, 225, 16, 0, 2, 54, 112, 113, 117, 141, 181, 0, 0, 3, 183>>

[debug] Decoded message: %PgoutputDecoder.Messages.Begin{commit_timestamp: #DateTime<2019-09-26 09:48:41Z>, final_lsn: {0, 56746256}, xid: 951}

[debug] Received binary message: <<85, 0, 0, 68, 8, 78, 0, 7, 116, 0, 0, 0, 1, 50, 116, 0, 0, 0, 12, 84, 101, 115, 116, 32, 112, 97, 114, 116, 110, 101, 114, 116, 0, 0, 0, 7, 49, 50, 51, 52, 53, 54, 55, 110, 110, 116, 0, 0, 0, 23, 50, 48, 49, 57, 45, 48, 57, 45, 50, 54, 32, 48, 57, 58, 52, 56, 58, 50, 49, 46, 48, 51, 55, 116, 0, 0, 0, 23, 50, 48, 49, 57, 45, 48, 57, 45, 50, 54, 32, 48, 57, 58, 52, 56, 58, 52, 49, 46, 49, 50, 53>>

[debug] Decoded message: %PgoutputDecoder.Messages.Update{changed_key_tuple_data: nil, old_tuple_data: nil, relation_id: 17416, tuple_data: {"2", "Test partner", "1234567", nil, nil, "2019-09-26 09:48:21.037", "2019-09-26 09:48:41.125"}}

The tuple_data has a value of "2" for the ID, instead of 2.

Extract out replication functionality

I'd love to use just the replication / data change functionality directly in Elixir so that I can do stuff with the data (no need to publish the data out). Is there any chance this functionality could be extracted out into its own library? Or can you recommend how I can go about it myself?

Connecting to a distant RDS seems to fail

With my local machine in Singapore, I couldn't get this to connect to an RDS instance in the US.

Steps to replicate:

  1. Go to RDS (somewhere far away)
  2. Go to Parameter groups
  3. Create a Postgres11 parameter group, and set the rds.logical_replication to 1
  4. Create a new RDS (free tier is fine).
  5. Update the Security Group to be fully permissive - allow all IPs (just to be safe)
  6. Try to run this app with the connection info. Eg, move into the server folder and run:
DB_HOST=instance-name.accountid.us-west-1.rds.amazonaws.com \
DB_NAME=postgres \
DB_USER=postgres \
DB_PASSWORD=postgres \
DB_PORT=5432 \
APP_PORT=4000 \
APP_HOSTNAME=localhost \
SECRET_KEY_BASE=SOMETHING_SECRET \
DB_SSL=true \
mix phx.server

This will probably fail with the following error:

[info] Running RealtimeWeb.Endpoint with cowboy 2.6.3 at 0.0.0.0:4000 (http)
[info] Access RealtimeWeb.Endpoint at http://localhost:4000
[info] Application realtime exited: Realtime.Application.start(:normal, []) returned an error: shutdown: failed to start child: Realtime.Replication
    ** (EXIT) :invalid_authorization_specification
** (Mix) Could not start application realtime: Realtime.Application.start(:normal, []) returned an error: shutdown: failed to start child: Realtime.Replication
    ** (EXIT) :invalid_authorization_specification

However, if you try to connect with something like pgAdmin, it will succeed.

Multiple builds

At the moment we only offer this as a Docker image. At a minimum, we probably want these to be built, versioned, and the artifacts stored:

Now:

Next:

  • Digital ocean image
  • AWS image

Recommendation:

  • Packer: we use Packer on @supabase/postgres and our KPS server. It would be good if we could use the same here, so that we don't have to learn new tools.

Realtime dropping connection intermittently

Running the test script in supabase-js https://github.com/supabase/supabase/blob/master/libraries/supabase-js/test/integration/testRealtime.js

I get intermittent "Network timeout. Still waiting..." and "REALTIME DISCONNECTED" errors

When I inspect the server logs, there are a couple of errors logged:

127.0.0.1 - - [25/Feb/2020:17:26:17 +0000] "POST /messages HTTP/1.1" 201 - "" ""
17:26:18.082 [info] "realtime:public"
17:26:18.082 [info] "realtime:public:messages"
17:26:18.083 [info] "realtime:public:messages:channel_id=eq.1"
17:26:18.085 [error] GenServer #PID<0.2564.0> terminating
** (ArgumentError) argument error
    :erlang.bit_size(nil)
    (realtime) anonymous fn/3 in Realtime.Replication.notify_subscribers/1
    (elixir) lib/enum.ex:789: anonymous fn/3 in Enum.each/2
    (stdlib) maps.erl:232: :maps.fold_1/3
    (elixir) lib/enum.ex:1964: Enum.each/2
    (realtime) lib/realtime/replication.ex:237: anonymous fn/3 in Realtime.Replication.notify_subscribers/1
    (elixir) lib/enum.ex:1948: Enum."-reduce/3-lists^foldl/2-0-"/3
    (realtime) lib/realtime/replication.ex:217: Realtime.Replication.notify_subscribers/1
Last message: {:epgsql, #PID<0.2565.0>, {:x_log_data, 23616408, 23616408, <<67, 0, 0, 0, 0, 0, 1, 104, 91, 104, 0, 0, 0, 0, 1, 104, 91, 152, 0, 2, 66, 104, 141, 229, 100, 110>>}}

and

17:26:18.151 [info] CONNECTED TO RealtimeWeb.UserSocket in 178µs
  Transport: :websocket
  Serializer: Phoenix.Socket.V1.JSONSerializer
  Connect Info: %{}
  Parameters: %{"vsn" => "1.0.0"}
17:26:18.153 [error] GenServer #PID<0.2569.0> terminating
** (KeyError) key :record not found in: %Realtime.Adapters.Changes.DeletedRecord{columns: [%Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "id", type: "int8", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "inserted_at", type: "timestamp", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "updated_at", type: "timestamp", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "data", type: "jsonb", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "message", type: "text", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "user_id", type: "int8", type_modifier: 4294967295}, %Realtime.Decoder.Messages.Relation.Column{flags: [:key], name: "channel_id", type: "int8", type_modifier: 4294967295}], commit_timestamp: ~U[2020-02-25 17:26:18Z], old_record: %{"channel_id" => "1", "data" => nil, "id" => "23", "inserted_at" => "2020-02-25 17:26:18.071593", "message" => "delete test", "updated_at" => "2020-02-25 17:26:18.071593", "user_id" => "1"}, schema: "public", table: "messages", type: "DELETE"}
    (realtime) lib/realtime/replication.ex:237: anonymous fn/3 in Realtime.Replication.notify_subscribers/1
    (elixir) lib/enum.ex:1948: Enum."-reduce/3-lists^foldl/2-0-"/3
    (realtime) lib/realtime/replication.ex:217: Realtime.Replication.notify_subscribers/1
    (realtime) lib/realtime/replication.ex:82: Realtime.Replication.process_message/2
    (realtime) lib/realtime/replication.ex:54: Realtime.Replication.handle_info/2
    (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4
    (stdlib) gen_server.erl:711: :gen_server.handle_msg/6
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: {:epgsql, #PID<0.2570.0>, {:x_log_data, 23616632, 23616632, <<67, 0, 0, 0, 0, 0, 1, 104, 92, 72, 0, 0, 0, 0, 1, 104, 92, 120, 0, 2, 66, 104, 141, 229, 243, 123>>}}

Not sure if these are responsible, but probably worth cleaning up anyway.

Document the `slot_name` var

Bug report

Describe the bug

We don't have any documentation for slot_name: #40 (comment)

Additional context

We should add some documentation that:

  1. shows users how to use the slot_name var
  2. documents what it is (it allows Postgres to track the "last position")
  3. explains how to get multiple servers connecting to postgres, without conflicts
