
aegis's Introduction

Aegis


Aegis Documentation

Aegis is both a simple deploy tool and a framework. Its primary goal is to help you write microservices in the AWS cloud quickly and easily. The deploy tool and the framework can be used independently of one another.

Aegis is not intended to be an infrastructure management tool. It will never be as feature rich as tools like Terraform. Its goal is to assist in the development of microservices - not the maintenance of infrastructure.

Likewise, the framework is rather lightweight. It may never have helpers and features for every AWS product under the sun. It provides a conventional framework to help you build serverless microservices faster, and it removes a lot of boilerplate.

Getting Started

You'll need an AWS account, of course. You'll also want to have your credentials set up as you would for using the AWS CLI. Note that you can also pass AWS credentials via the CLI or by setting environment variables.

Get Aegis, of course, with the usual go get github.com/tmaiaroto/aegis. Ensure the aegis binary is in your executable path. You can build a fresh copy from the code in this repository or download the binary from the releases section of the GitHub project site. If you want to use the framework, though, you'll need to use go get anyway.

You can find some examples in the examples directory of this repo. Aegis also comes with a command to set up some boilerplate code in a clean directory: aegis init. Note that it will not overwrite any existing files.

Work with your code and check settings in aegis.yaml. When you're ready, you can deploy with aegis deploy to upload your Lambda and setup some resources.

Aegis' deploy command will set up the Lambda function, an optional API Gateway, IAM roles, CloudWatch event rules, and various other triggers and permissions for your Lambda function. You can also choose a specific IAM role if you like; just set it in aegis.yaml.

If you're deploying an API, the CLI output will show you the URL for it along with other helpful information.

The Aegis framework works by handling events (which is how anything using AWS Lambda works). It does this via "routers," which means a single Lambda can actually handle multiple types of events if you so choose.

Many people will want to write one handler for one Lambda, but that's not a mandate of Lambda. So feel free to architect your microservices how you like.

There are several types of routers. You can handle incoming HTTP requests via API Gateway using various HTTP methods and paths. You can handle incoming S3 events. You can handle scheduled Lambda invocations using CloudWatch rules. You can even handle invocations from other Lambdas ("RPCs").
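To make the router concept concrete, here's a minimal sketch of a routed Aegis Lambda, modeled on the examples directory; handler signatures may differ between versions, so treat this as illustrative rather than definitive:

package main

import (
	"context"
	"net/url"

	aegis "github.com/tmaiaroto/aegis/framework"
)

func main() {
	// Route API Gateway events by HTTP method and path; unmatched
	// requests fall through to the handler given to NewRouter.
	router := aegis.NewRouter(fallThrough)
	router.Handle("GET", "/", root)

	AegisApp := aegis.New(aegis.Handlers{Router: router})
	AegisApp.Start()
}

func fallThrough(ctx context.Context, d *aegis.HandlerDependencies, req *aegis.APIGatewayProxyRequest, res *aegis.APIGatewayProxyResponse, params url.Values) error {
	res.StatusCode = 404
	return nil
}

func root(ctx context.Context, d *aegis.HandlerDependencies, req *aegis.APIGatewayProxyRequest, res *aegis.APIGatewayProxyResponse, params url.Values) error {
	res.JSON(200, map[string]interface{}{"message": "hello world"})
	return nil
}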

Building

It's easiest to download a binary to use Aegis, though you may wish to build it for your specific platform. The project uses Go Modules, so the easiest thing to do after cloning is:

GO111MODULE=on go mod download

Then build:

GO111MODULE=on go build

Unfortunately, a plain go build won't work because of one of the packages used; you'll get errors. So using Go Modules is the way.

Contributing

Please feel free to contribute (see CONTRIBUTING.md). Outside of actual pull requests with code, please file issues. If you notice something broken, speak up. If you have an idea for a feature, put it in an issue. Feedback is perhaps one of the best ways to contribute, so don't feel compelled to code.

Keep in mind that not all ideas can be implemented. There is a design direction for this project and only so much time. Though it's still good to share ideas.

Running Tests

Goconvey is used for testing; just be sure to exclude the docs directory. For example: goconvey -excludedDirs docs

Otherwise, tests will also run against the docs folder, which will likely have problems.

Alternatively, run tests from the framework directory.


aegis's Issues

Allow `deploy` to accept Git repo URL

Allow the deploy command to deploy from a Git repo. This will allow for easy deployment of other microservices/apps/APIs. The one thing to note here is that function and API names are going to need to be unique. Technically, API names don't need to be, but duplicates will cause some confusion.

Perhaps a prompt to help notify about conflicts?

Maybe, when deploying from a repo, prefix/suffix the function name with a hash of the repo URL or something, perhaps with additional phrasing, to help people easily spot those functions (and APIs in API Gateway) deployed from Git repos. One possible naming scheme is sketched below.
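For illustration, one such scheme could be as simple as the following sketch; the functionNameForRepo helper is hypothetical, not part of the CLI:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// functionNameForRepo suffixes the base function name with a short hash
// of the repo URL so repo-deployed functions stay unique and are easy to spot.
func functionNameForRepo(base, repoURL string) string {
	sum := sha1.Sum([]byte(repoURL))
	return base + "-" + hex.EncodeToString(sum[:])[:8]
}

func main() {
	fmt.Println(functionNameForRepo("myapi", "https://github.com/user/repo"))
}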

GraphQL endpoint?

Given the popularity of GraphQL, it might be nice to have this. As a dedicated API Gateway resource and handler.

While this is trivial to implement with the Router and the existing API Gateway {proxy+} path, it comes at a penalty. Lambda integrations with the proxy paths are a little slower than defining resources and using Lambda on those with defined responses, etc. With GraphQL we know the path and method and also the response type. So the deploy command can easily set that up, there's no mind reading involved.

The question is, does this belong with Aegis? Or is it out of scope? Is it even worth doing?
What kind of support exists around building GraphQL APIs?
There is this package: https://github.com/graphql-go/graphql

It works well, but defining a schema can be annoying. It would be nice to create helpers to simplify this process. There's potential for using graphql tags on structs to help automatically build the schema, for example.

Needless to say this could be a rather large rabbit hole.

Add binary support

API Gateway now has binary support. It just needs to know which content-type headers to treat as binary media types. This should be configurable.

Custom Services and related work

The idea of "Services" are really just handler dependencies that get injected, but also can be configured. This configuration happens with a closure that receives context and the event. So it allows for a great deal of flexibility with what gets injected into a handler.

Currently only the Cognito service (which is a more involved "helper" of sorts) exists. The ability to add custom services is sort of there -- there's a Custom field on the Services struct, but it's not fully implemented. That is, nothing will get configured.

For example, look at what happens with the Cognito service:

if sCfg, ok := a.Services.configurations["cognito"]; ok && a.Services.Cognito == nil {
    cognitoCfg := sCfg(ctx, evt).(*CognitoAppClientConfig)
    cognitoCfg.TraceContext = a.TraceContext
    cognitoCfg.AWSClientTracer = a.AWSClientTracer
    // ...
}

The current Lambda context and event are given to the configuration function (held on
a configurations map). This allows the Cognito service to use that context and event to configure itself.

Currently, users could write some sort of configuration "New()" function and run it in each of the handlers they wish to use context and event data from... But that's not so much dependency injection; that's taking the event and instantiating a new service after the fact. We can make it a little cleaner. Not saying it's the only way it should or could be done... But it provides another option that may become more important when it comes to using 3rd party packages.

First off: Refactor the check for cognito service configuration. That should be in its own function that not only checks for that, but also other services.

Second: Also take into account Custom user services.

The following is how the Cognito service gets configured:
AegisApp.ConfigureService("cognito", func(ctx context.Context, evt map[string]interface{}) { ... })

That should already work well for every other service in the future as well. Custom services should be something like ConfigureService("custom.myservice", func...)

That function adds to the configurations map which gets used in the base aegisHandler(). It just needs to work for more than just the Cognito service now.

So the groundwork is all in place. There's a clear path on how to implement these things; it just needs to happen. It was never really added, though the intent for custom services was always there.

The priority? That's really hard to say, because there are already ways to use and configure services for your handlers -- just do it inside the handler. This is mostly an enhancement and quality-of-life improvement that allows you to write less code (or potentially share more code with others).
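As a sketch of where custom services could land, registration might mirror the Cognito pattern; the MyServiceConfig type is hypothetical, and the interface{} return type is inferred from how the configurations map is used above:

package main

import (
	"context"

	aegis "github.com/tmaiaroto/aegis/framework"
)

// MyServiceConfig is a hypothetical config struct for a user-defined service.
type MyServiceConfig struct {
	Endpoint string
}

func main() {
	AegisApp := aegis.New(aegis.Handlers{})

	// Hypothetical: the closure receives the Lambda context and raw event,
	// and returns the configured dependency for injection into handlers.
	AegisApp.ConfigureService("custom.myservice", func(ctx context.Context, evt map[string]interface{}) interface{} {
		return &MyServiceConfig{Endpoint: "https://example.com"}
	})

	AegisApp.Start()
}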

Circuit breaker (and subsequent monitoring/registry)

It would be nice to "roll back" versions of each Lambda to a known working version in the event of failures from a newly pushed version (after a set number of failures).

This would likely require another Lambda to monitor CloudWatch for each Lambda pushed by Aegis. So aegis up would ensure this main, watchful Lambda was also deployed.

Alternatively, it may be possible in the Node.js wrapper to catch errors because it is separate from the actual Go binary. Should the Go binary return with some sort of panic or failure, that Node.js shim could perhaps perform a rollback in some way.

This overarching Lambda would also then be very capable of much more than failure and rollback detection. It could serve as a registry/discovery service. It could also provide additional monitoring and health checks. It may also expose an API for such useful information: logging, etc.

This is definitely a feature that will take some time and consideration.
Versioning support will need to be added too.

S3 object trigger/handler

Allow S3 object operations (put, delete, etc.) to trigger the Lambda and provide handler helpers. This will allow aegis.yaml to contain a list of S3 buckets that will have (for now, full) operation triggers applied to them to invoke the Lambda. The handler will use the bucket and key/path as named handlers. The event message will contain information about the file, etc.

So something like:

s3Uploader.Handle("PutObject", "mybucket", "mypath", handlePut)

func handlePut(ctx context.Context, evt map[string]interface{}) error {
  // ...receives event messages from the S3 put operation; includes file info, etc.
  return nil
}

All operations/event types:

"ListObjects",
"ListObjectVersions",
"PutObject",
"GetObject",
"HeadObject",
"CopyObject",
"GetObjectAcl",
"PutObjectAcl",
"CreateMultipartUpload",
"ListParts",
"UploadPart",
"CompleteMultipartUpload",
"AbortMultipartUpload",
"UploadPartCopy",
"RestoreObject",
"DeleteObject",
"DeleteObjects",
"GetObjectTorrent"

AppSync/GraphQL support

I'd really like to add AppSync support. Then have a router for resolvers.

  • handle resolvers with a new router
  • import/export/push/sync with AppSync

The Amplify CLI toolchain is great and all, but it's very heavy and does a lot. It also uses CloudFormation, and as such there are a lot of stateful issues and limitations. For example, there is no ability to import an existing GraphQL AppSync API into the toolchain. However, the SDK has everything needed: https://docs.aws.amazon.com/cli/latest/reference/appsync/index.html#cli-aws-appsync

I see no tool out there that pulls down your AppSync API, allows you to update the VTL, and then pushes back changes. So if something was created via the web console, for example... you're hosed. Amplify CLI wouldn't be able to deal with it. A sync (no pun intended) would really help.

I'm using AppSync GraphQL more and more, and it will replace a lot of my API needs. Not all, but a lot. Auth checks using Cognito are great, and you can make multiple queries in the VTL templates (transactions, etc.), so checking for authorization based on data in RDS or something is pretty easy too. This eliminates a tremendous amount of need for Lambda functions.

Want to send an e-mail if a user invites another user to their group in your app? Sure, you're looking at Lambda. So there's no way to completely avoid it, but AppSync is a very powerful thing that people are missing out on; they aren't connecting all the dots (and the examples don't really show the potential either).

So being able to manage this from aegis fits in pretty naturally. Funnily enough, one could then use aegis to build an entire API and never write any Go code at all. Technically speaking. Practically, of course, you'd have more going on and would write some Go functions.

Add scheduled task "cron" support

Allow the config to automatically execute the Lambda. The event passed to the Lambda is very specific, so it needs to be re-created for the scheduled invocation. Or maybe a special event gets passed that the aegis Lambda handler can look for and process separately.

It would be nice to keep the same semantics for handling these invocations. Something like:

router.Handle("CRON", handlerFn)

Or:

cron.Handle(handleFn)

There may be several cron tasks to perform so there may even be a name in the event that helps handle each one. The Go code won't necessarily be configuring and setting up the actual schedule (that's best left to the YML config) but it can handle the tasks in a conventional manner while co-existing with the rest of the app.

Or... maybe the Go application code does do the scheduling and defines everything. Then, when calling up with a single config parameter of cron: true, the Lambda will be invoked in a special manner to execute all of the cron configurations and set them up.

There are many ways to do it; I'm not sure yet how to handle it... But it's a very nice feature. It would be nice to build it into configuration to avoid additional commands on the CLI.
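A self-contained sketch of the named-task idea above; every name here (CronRouter, the "_taskName" event key) is hypothetical, not the framework's actual API:

package main

import (
	"context"
	"fmt"
)

type CronHandler func(ctx context.Context, evt map[string]interface{}) error

// CronRouter routes a scheduled invocation to a named handler, mirroring
// the convention used by the other routers.
type CronRouter struct {
	handlers map[string]CronHandler
}

func NewCronRouter() *CronRouter {
	return &CronRouter{handlers: map[string]CronHandler{}}
}

func (c *CronRouter) Handle(name string, h CronHandler) {
	c.handlers[name] = h
}

// Dispatch inspects the special scheduled event for a task name;
// "_taskName" stands in for whatever marker that event would carry.
func (c *CronRouter) Dispatch(ctx context.Context, evt map[string]interface{}) error {
	if name, ok := evt["_taskName"].(string); ok {
		if h, ok := c.handlers[name]; ok {
			return h(ctx, evt)
		}
	}
	return nil
}

func main() {
	cron := NewCronRouter()
	cron.Handle("nightlyReport", func(ctx context.Context, evt map[string]interface{}) error {
		fmt.Println("running nightly report")
		return nil
	})
	_ = cron.Dispatch(context.Background(), map[string]interface{}{"_taskName": "nightlyReport"})
}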

Method to read/write encrypted config values

More and more there are needs to deal with sensitive credentials. Be it a Cognito secret key or database connection info. Not everything can be handled by IAM and SDK calls.

I made a project a long while ago called discfg. That exposed an HTTP API interface and used DynamoDB as well. I'm thinking something simpler.

I'm thinking a new command for the Aegis CLI that lets one read and write key values to perhaps just S3. This would avoid the use of DynamoDB (just another thing to provision and such). The reason being that these config values won't often change. There's no concern with write speed. Read speed? Ehh, should be ok. Especially considering many of these variables will only be read at start time. Warm Lambda invocations won't be affected at all.

The CLI would expose three commands: read, readFull, and write

It would use KMS to encrypt the stored values in S3. A bucket would need to be configured (in aegis.yaml). The file name would come from configuration too (or conventionally be the name of the Lambda, though the ability to change that without losing the config path is preferable).

So all the Lambda would need at build time is the bucket/path. The deploy command would have already set up permissions to use KMS. Then it's just a matter of a simple helper function in the Lambda function to get the values.

These would be as simple as setting environment variables. Perhaps the helper even pulls the entire file and sets the values as environment variables. That way the process can be the same both when testing/developing locally and when running in AWS, or environment variables can optionally be used instead. It keeps things a bit more standard and flexible. Think CI tools, etc. -- things that might not have KMS access yet still need to run the app or run an integration test.

The read and readFull commands are nice too. The normal read would only show part of the stored value, with asterisks protecting the bulk of it. For example, DB_PASSWORD=a1kj******ak should be enough for someone to quickly see what is currently set and whether it's correct (in most cases). The readFull command would show the value completely unencrypted. This is just a security convenience: if someone is in a position where they don't want everything to appear on their screen unencrypted, they now have a choice.
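The masking itself is trivial. A sketch, assuming the first four and last two characters stay visible as in the example above:

package main

import (
	"fmt"
	"strings"
)

// maskValue shows the first four and last two characters and masks the rest,
// e.g. a1kj3x9q7zak -> a1kj******ak.
func maskValue(v string) string {
	if len(v) <= 6 {
		return strings.Repeat("*", len(v))
	}
	return v[:4] + strings.Repeat("*", len(v)-6) + v[len(v)-2:]
}

func main() {
	fmt.Println("DB_PASSWORD=" + maskValue("a1kj3x9q7zak"))
}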

What this whole thing does, though, is make it harder for developers to push sensitive credentials to online repositories. The aegis.yaml file should theoretically be able to contain nothing sensitive. The Lambda environment variables and API Gateway stage variables set there shouldn't be sensitive passwords or keys, because it's so incredibly easy to commit that aegis.yaml file. Many people will want to commit that file too, instead of sharing it among people, because it contains the function name, memory allocation, API name, etc. It should be able to be pushed to a public repo without concern.

I'm not keen on making some sort of dot file that magically sets variables, because again we're in the position of accidental publishing. There's still the need to then share that file with team members.

S3 is central. It's easier to provision than DynamoDB, and there's no management of capacity units to consider. All team members should have an AWS account with access to S3 and KMS -- the specific KMS key and S3 bucket needed in this case. It provides one central point to control access to the credentials, and it leaves absolutely no way to accidentally publish sensitive information.
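A minimal sketch of the proposed write path using the AWS SDK for Go; the key alias, bucket, and object key are placeholders:

package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())

	// Encrypt the value with a KMS key before it ever leaves the machine.
	enc, err := kms.New(sess).Encrypt(&kms.EncryptInput{
		KeyId:     aws.String("alias/aegis-config"),
		Plaintext: []byte("DB_PASSWORD=supersecret"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Store only the ciphertext in S3; the Lambda then needs just the
	// bucket/path and KMS decrypt permission to read the values back.
	_, err = s3.New(sess).PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-aegis-config-bucket"),
		Key:    aws.String("myfunction/config"),
		Body:   bytes.NewReader(enc.CiphertextBlob),
	})
	if err != nil {
		log.Fatal(err)
	}
}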

Update command

Create an update command for the CLI. This will update the Lambda function code, but not re-deploy everything. An alternative might be to look through aegis.yaml and the tasks directory (if that exists), etc. and determine if everything must be deployed... But it's easier to just use a different command. It also provides users with more flexibility and helps remove concerns about working with multiple people who might have different aegis.yaml files.

Local run/testing

Allow Lambdas and the API to run locally. This will allow for a good way to preview the API locally before deploying to AWS.

I've begun work on this already. The details are still a little fuzzy with regard to what data to pass along in the context and event outside of the HTTP request, since this isn't Lambda, of course. There is no request ID, for example... but should there be? Should there be one for the sake of logging and debugging locally? I'm leaning toward yes right now... However, it's still not implemented.

I want to address logging as well when running locally. This may involve some settings in the YAML config file, or I might lean a little more toward convention over configuration and assume a log file right next to the binary (which is likely in the Go project's directory, of course) is just fine.

At any rate, this works locally. I just need to wrap it up and push the changes. I of course decided to do this in master, but as more watchers come here, and especially as I start doing actual releases... I'll be putting more of these features into proper feature branches.

How will this work? It's opt-in and available with a new router method: router.Gateway() ... That will essentially mimic what API Gateway is doing. It will take an HTTP request (port 9999 for now, but I'll leave that as an option to set/configure) and transform it to a Lambda Event. Then Aegis will run like normal and pass that to the router to handle. The ProxyResponse (in the format Lambda Proxy needs) will then again be transformed on the way out and sent back through as a normal HTTP response.

It works quite well and allows for an easy way to locally test and run your Lambda. I have some small concern about the belief that it may be suitable to simply use as-is on a server (not using Lambda at all)... I don't think it'd be the worst thing in the world, but it's neither the intent nor the design, and there are far better multiplexers and routers out there for traditional APIs.

I would highly suggest using this feature as a way to test and debug locally... But I realize a door may have been opened.
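Usage might look like the following sketch. Only router.Gateway() comes from the description above; the environment-variable check for switching between local and Lambda modes is an assumption:

package main

import (
	"context"
	"net/url"
	"os"

	aegis "github.com/tmaiaroto/aegis/framework"
)

func main() {
	router := aegis.NewRouter(func(ctx context.Context, d *aegis.HandlerDependencies, req *aegis.APIGatewayProxyRequest, res *aegis.APIGatewayProxyResponse, params url.Values) error {
		res.JSON(200, map[string]interface{}{"local": true})
		return nil
	})

	// AWS_LAMBDA_FUNCTION_NAME is set by the Lambda runtime; when it's
	// absent, assume a local run and serve HTTP (port 9999 per the issue).
	if os.Getenv("AWS_LAMBDA_FUNCTION_NAME") == "" {
		router.Gateway()
	} else {
		aegis.New(aegis.Handlers{Router: router}).Start()
	}
}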

Custom Zip

Is it possible to create a custom zip and include non go files like templates and such?

I saw in the config file that you can provide a custom zip file, which i created by generating the go binary and placing it in a zip with my template but no dice.

Any suggestions?

increase code coverage, documentation, and examples

Sprinted on features for a while here and built up a debt of unit tests. Need to increase code coverage a good bit.

Same goes for documentation and examples. There are so many features, but they are mostly unexplained unless you dig into the code.

CLI Reports from XRay

Following up on #13, it would be nice to visualize some of the data recorded in XRay without combing through the AWS web console.

The most obvious thing here is a command to return the success/failure rates of an application's handlers along with timing information (min/max/avg execution time).

XRay has an API for getting this information; it just needs to be hooked into a command.

The hard part will be identifying all of the subsegment and annotation values. I'm not sure what the XRay API offers just yet, so I don't know if there's an index of them all or if we'd need to hold a list of valid values to look for. That list would change for each application, so it's hard. If XRay doesn't have any sort of aggregation or reporting support in its API, it may not make sense to do this.
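On the API side, pulling raw trace summaries with the AWS SDK for Go is the easy part; a minimal sketch (aggregating per-handler stats from these summaries is the open question above):

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/xray"
)

func main() {
	svc := xray.New(session.Must(session.NewSession()))

	// Fetch summaries for the last hour; each one carries response time
	// and error/fault flags that a report command could aggregate.
	out, err := svc.GetTraceSummaries(&xray.GetTraceSummariesInput{
		StartTime: aws.Time(time.Now().Add(-1 * time.Hour)),
		EndTime:   aws.Time(time.Now()),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range out.TraceSummaries {
		fmt.Println(aws.StringValue(s.Id), aws.Float64Value(s.ResponseTime), aws.BoolValue(s.HasError))
	}
}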

S3 Object Router - routing by event?

I'm not entirely sure whether it's routing by event or not. I have to double-check this, but if I'm reading it correctly, only the object key is being matched (a glob match) along with the bucket (or no bucket at all). We would also need to add a check for the actual S3 object event name itself (which is available on the routed handler's rule, but doesn't appear to be checked).
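If that's right, the fix might look roughly like this sketch; the rule fields here are assumptions, not the framework's actual internals:

package framework

// S3Rule is a hypothetical stand-in for the routed handler's rule.
type S3Rule struct {
	Bucket    string
	KeyGlob   string
	EventName string // e.g. "ObjectCreated:Put"
}

// ruleMatches adds the event-name comparison alongside the existing
// bucket and key (glob) matching.
func ruleMatches(rule S3Rule, bucket, key, eventName string) bool {
	if rule.Bucket != "" && rule.Bucket != bucket {
		return false
	}
	// ...glob match on rule.KeyGlob vs. key, as it works today...
	if rule.EventName != "" && rule.EventName != eventName {
		return false // the check that appears to be missing
	}
	return true
}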

Cognito handlers and helpers

This is a big one, but it's very common and important for most apps.

There should be handlers for each trigger described here: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-lambda-trigger-examples.html

There should be middleware that can be used with the router.

We can actually return HTML pages with Aegis. So either work with JSON to allow a custom app to design sign-up forms, etc., or return a default simple HTML form/template. I imagine 80%+ of the time apps would make all the sign-up forms themselves. However, using redirects or even iframes could be valuable, especially for speed.

This raises an interesting concept -- what about HTML templates held in S3? Aegis could load those to render, so the design could be customized to a good degree. Of course, lengthy forms for profile updates, etc. would be something to work on later, or left completely out of the equation. Aegis-hosted HTML through API Gateway could simply be basic signup/login -- no extra form fields about phone numbers, birthdate, etc. It's much better to use CloudFront for hosting than S3 through Lambda through API Gateway (assuming templates are hosted there and not bundled with the Lambda... but actually, why not bundle with deploy?). Though for speed/convenience... something to think about.

Pools have to be created outside of deploy. They can't have their attributes updated once created -- so having a list of attributes in the config would be weird, because they'd be one-time use. Also, it's just a lot here; it's bloating deploy. API Gateway and roles and such made sense (especially given the usage of API Gateway was opinionated), but maybe even some of that could benefit from separate commands for certain things. The first pass of Cognito in aegis could certainly require all Cognito setup to occur outside of the tool. Using the web console is easy enough, though a comprehensive CLI tool (that is likely to be quite opinionated) is a long-term goal.

Lambda permissions

There still seem to be some issues with Lambda invoke permissions for the API Gateway when updating functions. Very strange.

Default aegis_lambda_role needs VPC policy

aegis deploy won't work if specifying VPC options for the Lambda without having the proper access. A workaround for now is to manually assign the VPC policy to the default role that was created (aegis_lambda_role), or to specify a role with proper authorization in aegis.yaml.

aegis deploy error

I created a blank project using aegis init but am getting an error when trying to deploy.

$ aegis deploy
There was a problem building the Go app for the Lambda function.
exit status 1

Is there anything I should change in order to deploy this project?

Set up integration tests

Changes in various things can create issues from time to time. Having an integration test to ensure basic build and deploy works would be great. Having it periodically run automatically would be better.

Maybe run this from a Lambda? 😀

How to install

Sorry if I missed something, but how do I install this project? I've searched for a Go and an npm package, but there was no result. Do I have to clone and build this project to get a binary for the aegis CLI?

Inter-service communication (RPC)

Allow one Lambda to invoke another with a passed event message. The interface will be very much like the Router and Task handlers.

Aegis event filters & router level middleware

Potentially add beforeFilter, afterServiceFilter, and afterFilter hooks (perhaps slices of functions to run) that are application-wide event filters.

Basically, a Lambda event intercept: before/after all services are configured and after all handlers have finished.

These wouldn't necessarily return anything. They'd be given pointers, and they wouldn't stop the flow of execution (though they could ruin it by emptying out the event map, for example, or altering certain data in a way that would render the registered handler useless).

So it's not really "middleware" in the sense of an HTTP request router, but rather hooks/filters.

The problem is, I can't think of the need right now. So does it even make sense?

I feel like there are many reasons someone might want to modify a Lambda event before it hits the registered handlers. Someone may want to always stick certain data onto every single event, so whether it's an event from API Gateway or a scheduled task, it would get the same treatment. Perhaps very useful for RPCs. Router middleware can check "cookies" for auth tokens (what's a cookie, though? It's still just a key in an event map in the case of Lambda), but RPCs have no such middleware. No ability to check for auth, not outside each individual RPC handler. What if there are several calls in the Lambda? Why repeat all those checks?

Maybe someone wants to put a function on an event... Remember that Lambda events are JSON, but at this point they are map[string]interface{}, so we could put anything on the event map. It just won't be marshaled back out if there's a return/response.

There's one problem with middleware right now: it does not have access to any configured services (like Cognito). It does not see the Aegis interface. It's a completely different scope. The only way it can use the Aegis interface and its configured services is by having it defined in the main app, with the Aegis interface pulled outside of main(). This means two things:

  1. Any router middleware checking auth has to be defined in the main app
  2. That middleware has to be applied to every single route handler that needs it

This would help solve that kind of limitation. The question is, does it matter?

Auth is the big reason why it might matter here... Although if a router-level middleware is applied, then the user only needs to write and apply one middleware (instead of one on each handled route). That solves the second issue... but it still doesn't avoid the first issue of scope.

Of course, not all routers/handlers have the idea of "middleware" either. It only applies to events from API Gateway, because those are "HTTP-like" events, and when writing an application/API the natural inclination is to use middleware for various concerns (we have to keep reminding ourselves these are NOT HTTP requests at all). Aegis' router "middleware" also doesn't call any sort of next() function. While it has access to the "request" and "response" (again, NOT HTTP) pointers, it does not have access to the next middleware function. These just return true/false in order to continue or stop the flow of execution; upon stopping, a response is always returned. This is why middleware is in quotes here.

Not sure any other router type needs "middleware" either, though. These hooks may be enough. Though they would be called for every type of event, whereas more specific hooks/filters/middleware would only be called for and applied to certain types of events. But everything is an event anyway. What's the difference?

The difference here is flexibility and concerns. It's context -- how and when things are applied. These filters and middlewares aren't magic, and they shouldn't be confusing. They let developers reduce repetition in their code, share functionality across Lambdas, and know exactly when such functionality gets applied.

Currently, "middleware" (and the yet to be built filters) aren't traced in XRay. Ensure that they are. That will help with debugging, timing, etc.

So this proposal would add additional router level middleware and Aegis app hooks or event filters. There may be other "hooks" as well. Right now this just deals with those that apply to events.
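A sketch of the hook shape being proposed; the names and slice-of-functions layout here are hypothetical:

package framework

import "context"

// EventFilter mutates the event via pointer; it returns nothing and
// cannot stop the flow of execution.
type EventFilter func(ctx context.Context, evt *map[string]interface{})

// Filters holds application-wide hooks around service configuration
// and handler execution.
type Filters struct {
	Before       []EventFilter // beforeFilter: before services are configured
	AfterService []EventFilter // afterServiceFilter: after services, before handlers
	After        []EventFilter // afterFilter: after all handlers have finished
}

func runFilters(ctx context.Context, fns []EventFilter, evt *map[string]interface{}) {
	for _, f := range fns {
		f(ctx, evt)
	}
}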

3rd Party Add-Ons

I'm going to keep a running list of 3rd party add-on ideas here. These will not be part of this repository because all 3rd party things will be separate packages. Aegis only deals with AWS. However, there is a sense of interop and a desire to use other services.

Tracing

This one may even make sense to include in the core framework package (since logrus is). It's a vendor-neutral bridge; however, it does not include X-Ray, so I'm on the fence about it. On the other hand, it also makes me think about moving logrus out. Regardless, I think this is very important, and I think it can use the existing TraceStrategy interface to adapt calls, perhaps with some additional methods on top specifically for it.

Auth

This is such a popular service it seems like there should be a 3rd party package to work with.

Add "queue" support

Much like cron tasks, there can also be a "task master" type facility that leverages SQS, with just a single cron/hook that gets called on a set schedule (configured by up... maybe every 5 minutes, for example, or even configured in the YAML) and handled by the Go app code.

This is a little different from cron. While a scheduled job invokes the Lambda to process the queue in SQS, it still means that a certain number of messages will be read and processed at once. So there may be a few configuration options to consider here. At any rate, it's a queue that can even support retries, unlike a cron task. It's still very helpful.

nil Tracer

When adding unit tests and providing a tracing strategy that did nothing, I inadvertently broke some things =(

Simple fix in a bunch of places. Already underway. Along with it will come SQS routing support.

Unable to build new project with go modules enabled

Hi there, thanks for the great framework. I tried to build the sample application and noticed that when I have Go modules on, I get the following error:

# github.com/tmaiaroto/aegis/framework
../../go-projects/pkg/mod/github.com/tmaiaroto/[email protected]/framework/tracer.go:169:8: subseg.CloseAndStream undefined (type *xray.Segment has no field or method CloseAndStream)   

Any suggestions on how to work around this? Thanks!

Ensure Cognito rate limit is not hit

In a production app, it was discovered that a Cognito service rate limit was hit. This was mostly around automatically retrieving ID values in order to get the app client secret.

In that particular case, re-using the same client prevented the rate limit. Lambda, of course, is ephemeral and parallel. So while it's nice to automagically "figure it out" for the user, it's better to simply require the configured values so that it doesn't hit rate limits when it goes to "figure it out" a bunch of times.

So we should require the domain name and also the app client secret. Understanding that we might not want to check secret values into repos, etc., AWS Secrets Manager could be used by the user.

https://github.com/tmaiaroto/aegis/blob/master/framework/cognito_client.go#L101
to get: https://github.com/tmaiaroto/aegis/blob/master/framework/cognito_client.go#L178

and
https://github.com/tmaiaroto/aegis/blob/master/framework/cognito_client.go#L113
to get:
https://github.com/tmaiaroto/aegis/blob/master/framework/cognito_client.go#L127

RPC based middleware or MOM?

The other hooks and the thoughts of interceptors and AOP are making me curious about concerns outside of an individual microservice/Lambda. What about microservice coordination and distributed design considerations?

Allow custom policy/role JSON to be provided

Currently the config can supply an existing IAM policy, but it would also be nice to allow a local JSON file with the policy to be referenced. This way, each project can more easily tune its access to other AWS services without requiring a trip to the AWS web console.

Implement XRay Strategy

XRay has been added to router and task handlers automatically, but users can't control how it works. This is all well and good, but it creates problems when running locally, as well as for testing. It also leaves a little to be desired.

So the idea of "strategy interfaces" will be added. Not just for XRay but for other features in the framework as well (such as a circuit breaker for example). This will allow users to change how certain features are implemented, while still enjoying a sane default/convention.

This also then solves the problem found when trying to write test cases.

The way it will work is simple: you pass a struct that implements the strategy interface in configuration, and it will be used instead of the default.

So the default XRay strategy might send traces to XRay along with annotations about the name of the task handler, the request path the router handler received from the API Gateway event, etc. However, your custom strategy may instead send traces to a completely different service, or simply (as in the case of testing) do nothing at all.

This architecture decision will play a bigger role in the future and will become more commonplace. The goal here is not to add support for other cloud vendors, but instead to provide a choice in how certain functionality is implemented.
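To illustrate the testing case, a no-op strategy might look like the sketch below. The method set here is invented for illustration; it is not the framework's actual TraceStrategy interface:

package tracing

import "context"

// Strategy is an assumed shape for a pluggable tracing interface.
type Strategy interface {
	BeginSegment(ctx context.Context, name string) context.Context
	CloseSegment(ctx context.Context, err error)
	Annotate(ctx context.Context, key string, value interface{})
}

// NoOp satisfies Strategy and does nothing at all; useful for tests
// and for running outside of AWS.
type NoOp struct{}

func (NoOp) BeginSegment(ctx context.Context, name string) context.Context { return ctx }
func (NoOp) CloseSegment(ctx context.Context, err error)                   {}
func (NoOp) Annotate(ctx context.Context, key string, value interface{})   {}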

Allow role permissions from yaml config

Aegis deploys Lambda functions and API Gateways with limited/basic role permissions. It's basically enough to run the function, the API Gateway, get events from CloudWatch, and log out to CloudWatch.

This approach was for simplicity and expediency. The yaml config does allow another IAM role to be used instead of course. However, that leaves some work for someone to do within AWS Console or CLI or Terraform, etc.

The goal of Aegis is not to be a cloud resource management tool, but it is a deploy tool and this does make a bit of sense. It's also not terribly complex to work in either. The createAegisRole() function can simply take input from the yaml if it exists and add additional role permissions to the iamPolicy ... Or it could be that the yaml simply points to a JSON file that has the role policy defined in its entirety. Then the Aegis tool will pull that in to use instead of the default.

Definitely want to keep it simple and provide sane defaults, but as Amazon's list of services continues to grow, it's nice to be able to quickly address this.

There is one caveat: Currently it skips creating a role that already exists. So if the role policy changes, we might open a can of worms trying to account for the differences...Do we remove things that were there previously but now aren't? Do a wholesale replace? If the role were to be deleted and re-created again, what other functions now need to update because they were depending on that old role ID? Have to give some more thought here.

Add support for AWS Athena?

AWS just announced Athena (wow, the project names really work well together here, just randomly). This just may solve the one outstanding serverless concern: the database. There's an opportunity for a serverless ORM here.

By storing JSON in S3 and querying it via Presto with Athena, it just may be possible to build a completely serverless application.

I need to look into the costs and feasibility and performance...But $5/terabyte scanned? That may not be so bad. If your average web application database is only a couple hundred megabytes, that's really cheap. Plus there's caching and other solutions such as DynamoDB. So it just may work well.

Or maybe not. Either way, something to investigate.

Github landing page documentation

Is it possible to change the GitHub landing page to a page that looks like the AWS Chalice documentation? The goal is to get users to create and deploy a 'hello world' Lambda in a few steps. Writing and deploying a successful starter hello world is worth a thousand words.

https://aws.github.io/chalice/quickstart.html

  1. Quickstart
  2. Credentials
  3. Creating your 'Hello World' Lambda project
  4. Deploying
  5. Cleaning up
  6. Next steps

Find infrastructure management solution

Managing infrastructure is a necessity, to a degree, but is not a core goal for Aegis. Aegis is first and foremost a developer tool for building applications. The CLI portion of Aegis exists solely to further those efforts. Deploying the function and provisioning an API Gateway speed up the development process for example. It pushes your code out.

However, when it comes to serverless, infrastructure management is a consideration. It's actually also something that many people are doing. There's no need to re-invent wheels here.

So what I'd like to do is leverage an existing solution (Terraform, Serverless, Pulumi, etc.). Essentially outsource the problem. The CLI commands should still exist though. That "developer experience" is still essential. Reducing the number of tools to get, etc. That's all super important.

It's not a "go get this, now this, and configure that, and pray the moon and stars are in alignment. great, now run these 5 commands."

It's more of a "get the single aegis binary and run this single command."

How to do that while leveraging another tool for infrastructure management? TBD.

Add flags to accept AWS credentials

Typically, it's a lot less painful to simply set up your credentials in ~/.aws/credentials, and most people using the AWS CLI tool should already have this... The exception here is that, for those who work with multiple profiles, there's no way to use a different profile with Aegis right now (or maybe there is, by setting an environment variable on the same line when running the up command).

So at a bare minimum, a --profile flag should be added... But really, it's not so bad to add support for all AWS credentials. I'm probably going to stop at credentials, not include region, etc., and leave that to the YAML. I would discourage putting credentials in the YAML.
