amazon-archives / aws-appsync-rds-aurora-sample

An AWS AppSync Serverless resolver for the Amazon Aurora relational database.

License: MIT No Attribution

JavaScript 100.00%
aws amazon-web-services appsync appsync-resolvers rds mysql amazon-aurora aws-lambda nodejs

aws-appsync-rds-aurora-sample's Introduction

AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda

Aurora Serverless Data Sources

As of 11/20/2018, AWS AppSync supports Aurora Serverless as a native data source. More information is available here: Relational database data sources

Introduction

This sample application creates all of the AWS resources you need for an AppSync GraphQL API that fronts an Amazon Aurora (RDS) cluster via a Lambda function. All resources are created with AWS CloudFormation.

Specifically, it will provision the AppSync API (including the schema, data source, and all resolvers), the Cognito user pool used for authorization, an RDS cluster that runs on Amazon Aurora MySQL, and a Lambda function to serve as the go-between. Both the Lambda function and the RDS resources will reside in a newly created VPC.

This sample works out of the box in every region where AppSync is available, and it shows how you can use GraphQL to interact with your database, as well as how the requests translate into SQL. You can use this sample application for learning purposes or adapt the resources to meet your own needs.

Features

The schema is a lightweight blog design; the out-of-the-box types are 'Posts' and 'Comments'.

  • GraphQL Mutations

    • Create new posts
    • Create comments on existing posts
    • Increment view counts on existing posts
    • Upvote comments on existing posts
    • Downvote comments on existing posts
  • GraphQL Queries

    • Get a post
    • Get all posts by an author
    • Get all of the comments on a post
    • Get the number of comments on a post
    • Get all comments by an author
  • GraphQL Subscriptions

    • Real time updates for comments added to a post
    • Real time updates for comments added by an author
    • Real time updates for posts added by an author
  • Authorization

    • This sample app uses Cognito User Pools as the authorization mechanism
  • Dependency Versions

    • The Lambda code runs Node.js 8.10
    • The RDS cluster runs Aurora MySQL 5.7
    • The RDS instance is a t2.medium

Setting up the Sample

Create CloudFormation Stack

The sample spins up several AWS resources via CloudFormation, so step one is creating the CloudFormation stack. Note the AppSync GraphQL API name you choose when creating it; it's a stack input and you'll need it later. You can create the stack in any of the following ways:

From Here

AWS Region | Short name | Launch
US East (Ohio) | us-east-2 | Launch Stack
US East (N. Virginia) | us-east-1 | Launch Stack
US West (Oregon) | us-west-2 | Launch Stack
EU (Ireland) | eu-west-1 | Launch Stack
EU (Frankfurt) | eu-central-1 | Launch Stack
Asia Pacific (Tokyo) | ap-northeast-1 | Launch Stack
Asia Pacific (Sydney) | ap-southeast-2 | Launch Stack
Asia Pacific (Singapore) | ap-southeast-1 | Launch Stack
Asia Pacific (Mumbai) | ap-south-1 | Launch Stack

From the AppSync Console

The AppSync console also includes this sample, titled 'Blog App'. It spins up the exact same template that is linked above and included in this repository.

(Screenshot: the 'Blog App' sample tile in the AppSync console)

Customize Your Own

If you want to customize your own version of the template, you can easily do so using this repository: download the template, customize it to fit your needs, then upload it to the CloudFormation console.
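
If you'd rather create the stack from a script than through the console, a minimal sketch using the AWS SDK for JavaScript v2 might look like the following. The bucket, template key, and parameter name are placeholders, so check the template in this repository for the actual parameter keys.

// Illustrative sketch only: the template URL and parameter key are placeholders.
const AWS = require('aws-sdk');

const cloudformation = new AWS.CloudFormation({ region: 'us-east-1' });

cloudformation.createStack({
  StackName: 'appsync-rds-aurora-sample',
  TemplateURL: 'https://s3.amazonaws.com/YOUR_BUCKET/your-customized-template.yaml',
  Parameters: [
    // Use the parameter key actually defined in the template for the API name.
    { ParameterKey: 'APIName', ParameterValue: 'MyAppSyncRDSApi' }
  ],
  // The sample creates IAM roles, so CloudFormation needs this capability
  // (adjust if the template requires CAPABILITY_NAMED_IAM).
  Capabilities: ['CAPABILITY_IAM']
}, (err, data) => {
  if (err) console.error('Stack creation failed to start:', err);
  else console.log('Stack creation started:', data.StackId);
});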

Customizing the Lambda

By default, the CloudFormation template pulls the Lambda code from an S3 bucket owned by AppSync. It's possible to simply update the Lambda code once it's deployed, which may be quicker for testing, but if your needs are more complex or require multiple stack builds, you may want to customize the code that gets deployed. Everything pulled from the AppSync S3 bucket lives in the src/lambdaresolver directory of this repository; pull the contents of that directory and customize index.js as needed.

Lambda requires a specific format for the code it executes: it must be a zip file containing only the contents of the lambdaresolver directory, not a zip of the lambdaresolver directory itself. Upload that zip file to an S3 bucket in your own account, then update the CloudFormation template to point to that S3 bucket and key. Also note that if you rename index.js, you'll need to update the 'Handler' field in the same resource. Both of these fields are in the 'AppSyncRDSLambda' resource in the template.
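
For reference, the Handler field takes the form <filename>.<exported function>. For example, a Handler value of index.handler (illustrative; check the template for the actual value) tells Lambda to load index.js and call its exported handler function:

// index.js -- with Handler set to "index.handler", Lambda invokes this export.
// If you rename this file to resolver.js, the Handler must become "resolver.handler".
exports.handler = async (event) => {
  // ... resolver logic ...
  return event;
};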

This sample uses some SQL to set up the database and necessary tables during the Lambda function's execution. You'll likely want to remove that code when customizing it.

If you do change the SQL and/or GraphQL schema, you'll likely want to look at the resolver mapping templates in the AppSync console. These handle the logic that translates GraphQL requests into SQL. The example below is the request mapping template used to get a post:

{
    "version" : "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "sql":"SELECT * FROM posts WHERE id = :POST_ID",
        "variableMapping": {
            ":POST_ID" : "$context.arguments.id"
        }
    }
}
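
To connect the dots, the following is a minimal sketch of how a Lambda resolver might consume that payload, assuming the mysql client package and database credentials supplied through hypothetical DB_* environment variables; it is illustrative only, and the actual implementation lives in src/lambdaresolver/index.js.

// Illustrative sketch only; see src/lambdaresolver/index.js for the real code.
// Assumes the "mysql" npm package and hypothetical DB_* environment variables.
const mysql = require('mysql');

exports.handler = (event, context, callback) => {
  // The request mapping template sends { sql, variableMapping } as the payload.
  let sql = event.sql;
  const variableMapping = event.variableMapping || {};

  // Substitute each placeholder (e.g. ":POST_ID") with its escaped value.
  Object.keys(variableMapping).forEach((key) => {
    sql = sql.replace(key, mysql.escape(variableMapping[key]));
  });

  const connection = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
  });

  connection.query(sql, (err, results) => {
    connection.end();
    if (err) {
      callback(err);
    } else {
      callback(null, results);
    }
  });
};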

Setting Up Authorization

The sample uses Cognito User Pools for authorization. Now that the sample stack has been created, the next step is creating a Cognito user to sign in with. You can do this programmatically (a sketch is shown after the user-creation steps below), but the easiest way is via the Cognito console. Go there, then click 'Manage User Pools'; you'll see a list of the user pools under your account. By default, the sample creates one named 'AppSyncRDSLambdaPool'; click into that. You'll need two things from this console:

Client Id

The CloudFormation stack automatically creates the client resource for you. You can find it by clicking 'App clients'; the value is listed under 'App client id'.

A Created User

You'll need a user under this pool to sign in with on the AppSync console. To create one, click 'Users and groups' on the created user pool's page.

(Screenshot: creating a user)

Enter a username and a temporary password for the user. Enter your phone number (with the country code, in the format +15555555555) in the 'Phone Number' field, check 'SMS (default)', and uncheck 'Mark email as verified'. Cognito will text your phone the username and temporary password for the user. If you don't enter a temporary password, a random one is auto-generated for you. The user is created with status 'FORCE_CHANGE_PASSWORD', which is Cognito's way of making sure an admin-created user changes their password before they're able to sign in successfully. The AppSync console will take care of this for you.
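
If you'd prefer the programmatic route mentioned earlier, a rough sketch using the AWS SDK for JavaScript v2 and its adminCreateUser call might look like the following; the pool ID, username, password, and phone number are placeholders.

// Hypothetical programmatic alternative to creating the user in the Cognito console.
// All values shown are placeholders; use your own pool ID and user details.
const AWS = require('aws-sdk');

const cognito = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

cognito.adminCreateUser({
  UserPoolId: 'us-east-1_XXXXXXXXX',   // the AppSyncRDSLambdaPool's pool ID
  Username: 'shaggy',
  TemporaryPassword: 'TempPassword123!',
  DesiredDeliveryMediums: ['SMS'],     // texts the temporary credentials
  UserAttributes: [
    { Name: 'phone_number', Value: '+15555555555' }
  ]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Created user with status:', data.User.UserStatus); // FORCE_CHANGE_PASSWORD
});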

Authorizing on the AppSync Console

There's one last step before you're ready to play with the sample. Go to the AppSync console and click on the API that was created with the stack, then click 'Queries' on the side of the page. At the top, you'll see a button to log in with User Pools; click that.

(Screenshot: signing in)

Once you sign in with the client id, username, and temporary password, the console will prompt you to set a new password. This is normal for admin-created users; the password must be changed once to move the user out of the 'FORCE_CHANGE_PASSWORD' state. Use whatever password you like; this one will be permanent.

(Screenshot: changing the password)

Using the Sample

Now you're free to use the sample! Here are some queries and mutations to get you started:

Hypothetical author Shaggy creates a new post:

mutation CreatePost{
  createPost(author:"Shaggy", content:"Hello there"){
    id
    author
    content
    views
  }
}

Let's say the id that was returned was "123"

Hypothetical reader Nadia pulls up that post, running both of the commands below:

mutation IncrementViewCount{
  incrementViewCount(id:"123"){
    id
    views
  }
}

query GetPost{
  getPost(id:"123"){
    id
    author
    content
    views
    comments {
      id
      author
      content
    }  
  }
}

Now, let's say Nadia liked the post enough to leave a comment on it:

mutation CreateComment{
  createComment(postId:"123", author:"Nadia", content:"Great stuff"){
    id
    author
    postId
  }
}

Let's pretend Shaggy saw this comment (which was given id "456") and wanted to upvote it:

mutation UpvoteComment{
  upvoteComment(id:"456"){
    id
    upvotes
    downvotes
  }
}

Now let's say Shaggy wanted to see all of Nadia's comments:

query GetCommentsByAuthor{
  getCommentsByAuthor(author:"Nadia"){
    id
    author
    content
    upvotes
    downvotes
  }
}

Say Shaggy liked what Nadia had to say, and decided to get all of her future comments:

subscription AddedCommentByAuthor{
  addedCommentByAuthor(author:"Nadia"){
    id
    author
    content
  }
}

Note: subscriptions are long-running connections that won't show anything until a mutation runs. You'll need two tabs in the AppSync console: one to start the subscription and a second to execute the mutation. Additionally, AppSync requires that the fields a subscription filters on appear in the mutation's response, so createComment must return author for this subscription to receive events.

License Summary

This sample code is made available under a modified MIT license. See the LICENSE file.

aws-appsync-rds-aurora-sample's People

Contributors

alpacagoescrazy, chriscoombs, gavinkilbride, guerrerocarlos, jbailey2010

aws-appsync-rds-aurora-sample's Issues

Custom Lambda Function Not updating.

I uploaded the Lambda code to S3 following this instruction:

Lambda requires a specific format for code it executes - it must be inside a zip file with only the contents of the lambdaresolver directory, not a zip of the lambdaresolver directory.

When I update my Lambda function's index.js file with a new table, the change is not picked up by the CloudFormation stack change set.

What would be a possible solution to this?

Sample fails during creation script

I've tried to deploy this example as my first foray into AppSync. I merely select the 'blog' option and wait for deployment. After some time in the creating status, it says a rollback is in progress and then that the rollback is complete.

In the events log for 'AppSyncRDSLambda' it says "The runtime parameter of nodejs8.10 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs12.x) while creating or updating functions. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: f8f22b02-ecef-40f8-876f-705c569e6933)"

It then declares failure and rolls back.

testing example

Is there a suggested way to write automated, local tests for AppSync backed by Postgres Aurora? I'd like to run them as part of a CI pipeline and not have to call the actual service.

[Question] Build the database based on the GraphQL schema

Thank you so much for this tutorial! I started off using DynamoDB and realized I needed a relational database, which led me to this example.

Is there a way to build the database based on the GraphQL schema? When I was using DynamoDB it would build the database for me, which was great, similar to what Sequelize does.

Thanks again!

How to identify the number of active subscribers per subscription in AppSync

I need to keep a count of how many subscribers are connected to each subscription.

If UserA and UserB are connected to SubscriptionA, the total subscriber count is 2.
If only UserA is connected to SubscriptionB, the total subscriber count is 1.

Can I know when UserA subscribes, unsubscribes, disconnects, or reconnects from AppSync?

Subscription query hangs

The subscription query in the guide hangs:

subscription AddedCommentByAuthor{
  addedCommentByAuthor(author:"Nadia"){
    id
    author
    content
  }
}

1-Click launch for us-west-1 has an invalid region name and a template validation error

  1. The Launch Stack button and naming are wrong; I'm not sure whether the intended region is Oregon or not. The short name for US West (Oregon) is us-west-2, but it's showing us-west-1, which is US West (N. California).
    Region: US West (Oregon)
    Short name: us-west-1

  2. When you select the Launch Stack button for the US West (Oregon) region, you get the following validation error:
    Template validation error: Template format error: Unrecognized resource types: [AWS::AppSync::GraphQLApi, AWS::AppSync::Resolver, AWS::Cognito::UserPool, AWS::AppSync::DataSource, AWS::AppSync::GraphQLSchema, AWS::Cognito::UserPoolClient]

SQL queries in resolvers

Is this really the right way to store SQL queries in resolvers and then just invoke them against the DB in a single Lambda?

When you have resolvers for a field within a type and you query for a list of those types, you will face the 'N+1 queries' problem. The SQL-in-resolvers approach seems to rule out implementing BatchInvoke resolvers, which would provide batched fetching of data (solving the N+1 queries issue), because each event in the batch arrives as a separate query to the Lambda function.

Or is there some other way around this problem?

[Question] Batch insert support with Aurora Data Source

Is it possible to perform a batch execution insert using AppSync with an Aurora Data Source?

There is a 'Tutorial: DynamoDB Transaction Resolvers', but I could not find the equivalent for Aurora.

Here's an example of a schema I would like to use:

        type LambdaMetric {
          id: ID!
          documentClassificationId: ID!
          timestamp: Int!
          isActive: Boolean!
          stackName: String!
          functionName: String!
          version: Int!
          elapsedTime: Int!
        }

        input LambdaMetricInput {
          documentClassificationId: ID!
          timestamp: Int!
          isActive: Boolean!
          stackName: String!
          functionName: String!
          version: Int!
          elapsedTime: Int!
        }

        type Mutation {
          addLambdaMetrics(input: [LambdaMetricInput!]!): [LambdaMetric!]!
        }

I'm currently using boto3's rds-data batch_execute_statement and would like to port my application to AppSync + Aurora.

Wondering how you viewed the data in the DB

First of all, I love this sample that you created. It's by far one of the easiest to follow for somebody like me who is in his first couple of days with AWS products, and I really appreciate something like this.

So in Amazon RDS, I can find the database information, where it shows CPU utilization and other neat things about usage. But I can't find anywhere that visually shows the data within. Also, whenever I click on 'Query Editor', it says "Currently, query editor only supports Aurora Serverless databases".

The other thing I wanted to ask was about scalability. I've read that Aurora is an on demand, auto-scaling database. Is this by default or is there some sort of setting?

autoId (128 bits) vs varchar(64)

Hello,

I see that the resolvers use autoId, which returns a 128-bit (16-byte) UUID.
However, in the SQL creation statements, I see that the id type is varchar(64).
Shouldn't that be varchar(16) or binary(16)?

Thanks.
