subzerocloud / subzero-starter-kit

Starter Kit and tooling for authoring GraphQL/REST API backends with subZero

Home Page: https://subzero.cloud

License: MIT License

Languages: PLpgSQL 39.84% · Shell 22.49% · JavaScript 26.40% · HTML 11.27%
Topics: subzero, postgrest, postgresql, api, rest, graphql, openresty, docker, boilerplate, starter-kit

subzero-starter-kit's Introduction

subZero GraphQL/REST API Starter Kit

Base project and tooling for authoring data API backends with subZero.

Runs Anywhere

Run subZero stack as a hassle-free service (free plan available) or deploy it yourself anywhere using binary and docker distributions.

Features

✓ Out of the box GraphQL/REST/OData endpoints created by reflection over a PostgreSQL schema
✓ Authentication using email/password or using 3rd party OAuth 2.0 providers (google/facebook/github preconfigured)
✓ Auto-generation of SSL certificates with "Let's Encrypt"
✓ Uses PostgREST+ with features like aggregate functions (group by), window functions, SSL, HTTP2, custom relations
✓ Cross-platform development on macOS, Windows or Linux inside Docker
✓ PostgreSQL database schema boilerplate with authentication and authorization flow
✓ Debugging and live code reloading (sql/configs/lua) functionality using subzero-cli
✓ Full migration management (migration files are automatically created) through subzero-cli
✓ SQL unit tests using pgTAP
✓ Integration tests with SuperTest / Mocha
✓ Community support on Slack
✓ Scriptable proxy level caching using nginx proxy_cache or Redis

Directory Layout

.
├── db                        # Database schema source files and tests
│   └── src                   # Schema definition
│       ├── api               # Api entities available as REST and GraphQL endpoints
│       ├── data              # Definition of source tables that hold the data
│       ├── libs              # A collection of modules used throughout the code
│       ├── authorization     # Application level roles and their privileges
│       ├── sample_data       # A few sample rows
│       └── init.sql          # Schema definition entry point
├── html                      # Place your static frontend files here
├── tests                     # Tests for all the components
│   ├── db                    # pgTap tests for the db
│   ├── graphql               # GraphQL interface tests
│   └── rest                  # REST interface tests
├── docker-compose.yml        # Defines Docker services, networks and volumes
└── .env                      # Project configurations

Installation

Prerequisites

To run the stack you need Docker and docker-compose; the tooling described below additionally uses Node.js with yarn (for the test suites) and subzero-cli (for the development dashboard and migrations).

Create a New Project

Click the green [Use this template] button, choose a name, description and public/private visibility for your new repository, then click the [Create repository from template] button. Check out the step-by-step guide if you encounter any problems.

After this, clone the newly created repository to your computer. In the root folder of the application, run docker-compose:

docker-compose up -d

The API server will become available at http://localhost:8080, with the REST interface under /rest/ and the GraphQL interface under /graphql/ (e.g. /graphql/simple).

Try a simple request

curl http://localhost:8080/rest/todos?select=id,todo
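Filtering and ordering can be combined in the same request; a sketch, assuming standard PostgREST query syntax (the column names come from the sample schema):

```shell
# Select two columns and order the result by id (PostgREST-style parameters)
curl 'http://localhost:8080/rest/todos?select=id,todo&order=id'
```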

Try a GraphQL query in the integrated GraphiQL IDE

{
  todos{
    id
    todo
  }
}
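The same query can also be sent over plain HTTP as JSON; judging from the curl examples elsewhere on this page, the simple GraphQL endpoint accepts a POST with a query field (a sketch, assuming the default port and endpoint):

```shell
# POST the GraphQL query to the "simple" endpoint as a JSON document
curl -XPOST http://localhost:8080/graphql/simple \
     -d '{"query": "{todos{id todo}}"}'
```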

Development workflow and debugging

Execute subzero dashboard in the root of your project.
You can then view the logs of all the stack components (SQL queries are also logged), and any edit to a sql/conf/lua file in your project is applied immediately.

Unit and integration tests

The starter kit comes with a preconfigured testing infrastructure. You can write pgTAP tests that run directly in your database, which is useful for testing the logic that resides there (user privileges, Row Level Security, stored procedures). Integration tests are written in JavaScript.

Here is how you run them locally:

yarn install         # Install test dependencies
yarn test            # Run all tests (db, rest, graphql)
yarn test_db         # Run pgTAP tests
yarn test_rest       # Run rest integration tests
yarn test_graphql    # Run graphql integration tests

All the tests are also executed on git push (on GitHub).

Deployment

Deployment is done using a GitHub Actions workflow. The deploy action pushes your migrations to the production database using sqitch and copies the static files with scp. The deploy step is triggered only by git tags of the form v1.2.

Note that the deploy action pushes the database migrations (db/migrations/) to production, not the current database schema definition (db/src/), so you'll need to execute subzero migrations init --with-roles before the first deploy; when iterating, create new migrations using subzero migration add <migration_name>.
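A typical iteration might therefore look like this (a sketch using the subzero-cli commands named above; the migration name is a placeholder):

```shell
# One-time setup: capture the current schema as the initial migration
subzero migrations init --with-roles

# After changing files under db/src/, record the diff as a new migration
subzero migration add <migration_name>

# Trigger the deploy workflow by pushing a tag of the form v<major>.<minor>
git tag v1.2
git push origin v1.2
```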

You'll also need to configure the following "secrets" for your GitHub deploy action:

SUBZERO_EMAIL
SUBZERO_PASSWORD
APP_DOMAIN
APP_DB_HOST
APP_DB_PORT
APP_DB_NAME
APP_DB_MASTER_USER
APP_DB_MASTER_PASSWORD
APP_DB_AUTHENTICATOR_USER
APP_DB_AUTHENTICATOR_PASSWORD
APP_JWT_SECRET

While the deploy action is written for subzero.cloud and Fargate (DEPLOY_TARGET: subzerocloud), it can easily be adapted for other deploy targets that run the subzero stack.

If you wish to deploy to AWS Fargate, you'll need to additionally configure the secrets starting with AWS_*.

If you have a preexisting database, you can also deploy the container to AWS Fargate by clicking the button below. The CloudFormation stack will launch the container in Fargate and create a DNS record for it in a Route 53 zone. You'll only need to update the domain registration to use the Amazon Route 53 name servers from the domain's zone.

Contributing

Anyone and everyone is welcome to contribute.

Support and Documentation

License

Copyright © 2017-2021 subZero Cloud, LLC.
The source code in this repository is licensed under the MIT license.
Components implementing the GraphQL interface (customized PostgREST+ and OpenResty docker images) are available under a commercial license.
The documentation of the project is licensed under the CC BY-SA 4.0 license.

subzero-starter-kit's People

Contributors

akagomez · angelinatarapko · coolzilj · futtetennista · gavrilyak · kljensen · numtel · pjlindsay · ruslantalpa · soyuka · steve-chavez · stonecypher · synapseradio · wildsurfer


subzero-starter-kit's Issues

Is Range ignored with subzero?

GET /people HTTP/1.1
Range-Unit: items
Range: 0-19

The PostgREST documentation describes this behavior. Is the Range header ignored with subzero?

$ git clone https://github.com/subzerocloud/subzero-starter-kit test
$ cd test
$ docker-compose up -d
$ curl --verbose --http1.1 'http://localhost:8080/rest/todos?select=id' # This works as expected

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying ::1:8080...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /rest/todos?select=id HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.66.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: openresty
< Date: Tue, 29 Oct 2019 09:49:17 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< Content-Range: 0-2/*
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Location: /rest/todos?select=id
< Request-Time: 0.020
< Method: GET
< Cache-Engine: "nginx"
< Cache-Status: BYPASS
< Cache-Key: 56002b0d5a4e057c37780181f18d7f4c
< Cache-TTL: 60
< 
{ [72 bytes data]

100    61    0    61    0     0   2772      0 --:--:-- --:--:-- --:--:--  2772
* Connection #0 to host localhost left intact
[{"id":"dG9kbzox"}, 
 {"id":"dG9kbzoz"}, 
 {"id":"dG9kbzo2"}]

$ curl --verbose --http1.1 'http://localhost:8080/rest/todos?select=id' -H 'Range-Unit: items' -H 'Range: 0-1' # This fetches more items than the requested range.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying ::1:8080...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /rest/todos?select=id HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.66.0
> Accept: */*
> Range-Unit: items
> Range: 0-1
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: openresty
< Date: Tue, 29 Oct 2019 10:07:35 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< Content-Range: 0-2/*
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Location: /rest/todos?select=id
< Request-Time: 0.003
< Method: GET
< Cache-Engine: "nginx"
< Cache-Status: BYPASS
< Cache-Key: 0951892c2e83a4ef06701453128df4e0
< Cache-TTL: 60
< 
{ [72 bytes data]

100    61    0    61    0     0  20333      0 --:--:-- --:--:-- --:--:-- 20333
* Connection #0 to host localhost left intact
[{"id":"dG9kbzox"}, 
 {"id":"dG9kbzoz"}, 
 {"id":"dG9kbzo2"}]
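If the proxy layer does not forward the Range headers, limiting can usually also be expressed as query parameters (assuming standard PostgREST behavior, which accepts limit/offset in the query string):

```shell
# Equivalent of "Range: 0-1" expressed as query parameters
curl 'http://localhost:8080/rest/todos?select=id&limit=2&offset=0'
```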

Unable to receive messages using the starter kit

Hi there, I'm trying out this starter kit to see if it fits my requirements.

What I want to do is receive notifications over a websocket when a "todo" changes.
To test that I'm using two very simple Python programs: the first authenticates as alice, connects to /rabbitmq/ws and listens for incoming messages; the second authenticates as alice and sends POST requests to /rest/todos to add new data.

The problem is that after running those 2 files

python3 subscribe.py
python3 addtodo.py

I don't receive any message on the websocket connection.
I'm sure I'm authenticated in the subscribe code, because I manually added log statements in the nginx Lua code that print AUTHENTICATED when execution reaches this point:

https://github.com/subzerocloud/subzero-starter-kit/blob/master/openresty/nginx/rabbitmq.conf#L159

I created a gist with the files I was using. Can you please tell me what I am doing wrong, or whether I need to do something extra in order to receive RabbitMQ messages over the websocket connection?

https://gist.github.com/riccardodivirgilio/83025d11f706c546151c819e6640d69a

Thank you very much

Refresh Token Documentation

Hello! Great work with Subzero, much easier to work with than other REST platforms I've tried!

I'm a little hung up on refresh tokens at the moment. It looks like subzero is set up to accept both cookies and Authorization headers for identification. For local dev, since I can't get cookies to work, I'm also returning the JWT in the login response, so that I can add it to future requests as an Authorization: Bearer ${token} header.

That works great; subzero parses the header correctly even without a cookie present. But as far as I can tell, there's nothing automated for refreshing the token. If you manually call auth/token_refresh, it looks like it just checks whether the token is still valid and, if it is, creates a new token and sets a new cookie. As far as I can tell there's no longer-lived, separate refresh token, correct? Typically a request is made and, if the JWT is expired, the refresh token is used to create a new JWT without user intervention.

So I'm wondering how this is handled with subzero. Are you checking expiry times on every request and refreshing the cookie when time is almost up? And if it's all manual, is there a way to pass back the token in the refresh_token response instead of just true? I know it is sent back as an Authorization header, but it looks like that can't be accessed in client-side JavaScript?

Any help is greatly appreciated, thanks!

graphql not working with postgres11.2

Since the graphql endpoint did not work on the master branch, I ran a black-box test, including commit 5eeacef, which I had been using. It turned out the failures are caused by Postgres 11.2.
I could not make sense of the logs, but if possible, please support version 11.2.

git clone https://github.com/subzerocloud/subzero-starter-kit
cd subzero-starter-kit

git checkout 5eeacef416d # the commit I had been using
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # success
docker-compose down

git checkout 5e5adad0b25 # latest
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # 500
docker-compose down
git show 5eeacef416d:docker-compose.yml > docker-compose.yml
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # success
docker-compose down
sed 's/9.6/11.2/g' -i docker-compose.yml
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # 500
docker-compose down
sed 's/11.2/10.8/g' -i docker-compose.yml
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # success
docker-compose down
git checkout docker-compose.yml # restore the version from the current branch
sed 's/11.2/10.8/' -i .env
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # success
docker-compose down


# ---
mkdir /tmp/test
cd /tmp/test
npm init
npm install -D subzero-cli
subzero base-project
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # 500
docker-compose down
sed 's/11.2/10.8/' -i .env
docker-compose up -d
curl -XPOST http://localhost:8080/graphql/simple -d '{"query": "{todos{id}}"}' # success
docker-compose down

Mismatches in return types of response lib functions

Trying to add the response lib to my environment, I ran into the following error:

2020-08-18 09:17:06.958 UTC [71] ERROR:  return type mismatch in function declared to return void
2020-08-18 09:17:06.958 UTC [71] DETAIL:  Actual return type is text.
2020-08-18 09:17:06.958 UTC [71] CONTEXT:  SQL function "set_header"
2020-08-18 09:17:06.958 UTC [71] STATEMENT:  create or replace function response.set_header(name text, value text) returns void as $$
        select set_config(
            'response.headers',
            jsonb_insert(
                (case coalesce(current_setting('response.headers',true),'')
                when '' then '[]'
                else current_setting('response.headers')
                end)::jsonb,
                '{0}'::text[],
                jsonb_build_object(name, value))::text,
            true
        );
    $$ stable language sql;
psql:/docker-entrypoint-initdb.d/libs/response/schema.sql:35: ERROR:  return type mismatch in function declared to return void
DETAIL:  Actual return type is text.
CONTEXT:  SQL function "set_header"

I assume that is because the return type of the set_config function is actually text.
What do you think could be the cause of me running into this problem?

Edit: same problem with the set_cookie and delete_cookie functions.

tutorial -> GraphQL example

Hi,

Thank you for your project; I played with it a little and it's really cool!
I have one question about the GraphQL example in the tutorial. I get an error when I try this query:

{
  projects(where: {name: {ilike: "*3*"}}) {
    id
    name
    tasks(where: {completed: {eq: false}}){
      id
      name
    }
  }
}

Here is the error:

{
  "data": {
    "projects": null
  },
  "errors": [
    {
      "hint": null,
      "details": null,
      "code": "42703",
      "message": "column projects.tasks does not exist"
    }
  ]
}

Shouldn't getting objects with relationships across views work?

Thank you,
Fabien

Using a subzero backend with a create-react-app

As discussed in the Slack channel, it's not straightforward to get a create-react-app based project running that uses GraphQL with auth via cookies.

I played around with the cookie configuration in hooks.lua

local session_cookie = require 'subzero.jwt_session_cookie'
session_cookie.configure({
    -- rest_prefix = '/internal/rest/',
    -- login_uri = 'rpc/login',
    -- logout_uri = 'rpc/logout' ,
    -- refresh_uri = 'rpc/refresh_token',
    -- session_cookie_name = 'SESSIONID',
    -- session_refresh_threshold = (60*55) -- (expire - now < session_refresh_threshold),
    -- path = '/',
    -- domain = nil,
    -- secure = false,
    -- httponly = true,
    samesite = "Lax",
    -- extension = nil,
})

But no luck as of yet. I think, following your input on the Slack channel:

"@tamebadger the short answer is that you are having problems because your frontend and your api live on different domains so the api domain can not set cookies for the frontend domain, plus in production you always want then on the same domain otherwise the browser will be making OPTIONS request all the time. In production, you would put your html here https://github.com/subzerocloud/subzero-starter-kit/tree/master/openresty/nginx/html so your app url will be http:/localhost:8080/ and the api endpoint http:/localhost:8080/graphql/simple. However, in development, i understand what you are asking. You want to retain the tooling and automation of create-react-app (that's why you have them on different ports). As it happens, i'll be doing exactly what you are doing in a short while so i will come up with a solution. Please leave an issue here https://github.com/subzerocloud/subzero-starter-kit and i'll come up with something"

I'll see if I can just script copying over the CRA output into the nginx/html directory for the interim ;)

How to get search paths from datafiller? They are missing with the default schema.

If I run the datafiller example on the default todos project, it generates data.sql without any search_path. This causes failures:

psql:/src/db/src/sample_data/data.sql:14: ERROR:  relation "user" does not exist

psql:/src/db/src/sample_data/data.sql:14: ERROR:  relation "user" does not exist

The generated output looks like

-- fill table user (2)
\echo # filling table user (2)
COPY "user" (id,name,email,"password","role") FROM STDIN (FREEZE ON);
1	name_2_2	email_1_1_1_1_1_	password_1_1_	webuser
2	name_1_1_1_1_1	email_2_2_2	password_1_1_	webuser
\.
-- 
-- fill table todo (2)
\echo # filling table todo (2)
COPY todo (id,todo,private,owner_id) FROM STDIN (FREEZE ON);
1	todo_1_1_1	TRUE	2
2	todo_2_2_2_2_	FALSE	1
\.
-- 
-- restart sequences
ALTER SEQUENCE "user_id_seq" RESTART WITH 3;
ALTER SEQUENCE todo_id_seq RESTART WITH 3;
-- 
-- analyze modified tables
ANALYZE "user";
ANALYZE todo;

It should be data."user", like in the committed sample file at:

https://github.com/subzerocloud/subzero-starter-kit/blob/master/db/src/sample_data/data.sql#L13
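As a hypothetical workaround until datafiller emits the qualification itself, a search_path line could be prepended to the generated file so the unqualified names resolve to the data schema (demonstrated on a scratch copy; the real file is db/src/sample_data/data.sql, and GNU sed is assumed):

```shell
# Work on a scratch copy; apply the same sed command to db/src/sample_data/data.sql
printf 'COPY "user" (id,name) FROM STDIN;\n' > /tmp/data.sql
# Insert the search_path as the first line (GNU sed "1i" one-line form)
sed -i '1i set search_path = data, public;' /tmp/data.sql
head -n 1 /tmp/data.sql
```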

Error: Invalid or corrupt jarfile /usr/local/bin/apgdiff.jar when running migrations init

Hi folks,

I'm following the tutorial page here: https://docs.subzero.cloud/managing-migrations/ and I get the error Error: Invalid or corrupt jarfile /usr/local/bin/apgdiff.jar when trying to run the managing-migrations section.

$ subzero migrations init
Writing database dump to /Users/jambo/PycharmProjects/udemy_pc/tmp/dev-initial.sql
Created sqitch.conf
Created sqitch.plan
Created deploy/
Created revert/
Created verify/
Created deploy/0000000001-initial.sql
Created revert/0000000001-initial.sql
Created verify/0000000001-initial.sql
Added "0000000001-initial" to sqitch.plan
Diffing /Users/jambo/PycharmProjects/udemy_pc/tmp/dev-initial.sql and /Users/jambo/PycharmProjects/udemy_pc/tmp/prod-initial.sql
Writing the result to /Users/jambo/PycharmProjects/udemy_pc/db/migrations/revert/0000000001-initial.sql
Error: Invalid or corrupt jarfile /usr/local/bin/apgdiff.jar

Copying /Users/jambo/PycharmProjects/udemy_pc/tmp/dev-initial.sql to /Users/jambo/PycharmProjects/udemy_pc/db/migrations/deploy/0000000001-initial.sql

I know you use apgdiff to generate the migration files. I could download it to /usr/local/bin, but there must be a more elegant way that also keeps apgdiff up to date.
Thanks!
