Comments (8)
Hey Jaymell,
Thanks for the feedback. Really appreciate it ^_^
I often run into these very same use cases. So far there have been fewer and fewer that I haven't been able to solve using Qaz and some tactical Cloudformation-fu.
For example, given your KMS use case: in the past I've created the KMS key id in an initial stack and Exported it, then had the subsequent stack import the value and inject it directly into my instance user-data to perform the necessary encryption/decryption. For things like KMS keys, or any values that will be useful to future stacks that may not be part of the same deployment, it's good to Export them, so those values can be imported by any future stack deployed in that region without special scripts to re-fetch them.
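As a rough sketch of that pattern (resource names and the export name here are purely illustrative):

```yaml
# Stack 1: create the key and Export its id
Resources:
  AppKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key used by the app for encrypt/decrypt
Outputs:
  AppKeyId:
    Value: !Ref AppKey
    Export:
      Name: app-kms-key-id

# Stack 2: import the value and inject it into user-data
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678   # placeholder
      UserData:
        Fn::Base64: !Sub
          - |
            #!/bin/bash
            KEY_ID=${KeyId}
            # ...call kms encrypt/decrypt using KEY_ID...
          - KeyId: !ImportValue app-kms-key-id
```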
That being said, there are definitely situations where one needs a good old-fashioned script or API call between stack deployments. For Qaz, this needs to be implemented while adhering to the following:
- Keep everything as Cloud-native as possible
- Maintain minimal abstraction from the underlying AWS platform
- Run-from-anywhere; that is, the app should be able to run by pulling its config remotely. Deployments should have no explicit local dependencies.
Given the above, my intention was to handle this by implementing AWS Lambda hooks that Qaz can trigger before/after a stack deployment. All the logic needed for a deployment can live in a single function that performs various actions based on the event JSON passed in, or it can be split across multiple simple functions.
In config, this would look something like this:
stacks:
  autoscaling:
    deploy_hook:
      pre:
        - lambda_name: '{some:json}'
      post:
        - lambda_name: '{some:json}'
    delete_hook:
      pre:
        - lambda_name: '{some:json}'
      post:
        - lambda_name: '{some:json}'
Given the above, you'd be able to trigger any number of events pre/post stack deploy/delete operations (I may even add one for updates).
The functionality is still being mapped out in my head, but I'm open to suggestions for how this should/could work.
--
I am quite keen on Custom Resources as a solution as well, both because it's the most Cloud-native way of achieving this and because it allows you to Export the result of any special operations to the global Cloudformation space, making it accessible to all future stack deployments. Since it's raw Cloudformation and AWS services, you're not locked into a particular tool or limited by one.
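For reference, a minimal Custom Resource wiring would look something like this (the function ARN, property name and attribute name are placeholders; the attribute must match a key in the Data map the function returns):

```yaml
Resources:
  DeployTask:
    Type: Custom::DeployTask
    Properties:
      # Lambda function that receives the Create/Update/Delete events
      ServiceToken: arn:aws:lambda:eu-west-1:123456789012:function:deploy-task
      SomeInput: some-value
Outputs:
  TaskResult:
    # GetAtt attribute names map to keys in the Data object
    # of the custom resource response
    Value: !GetAtt DeployTask.Result
    Export:
      Name: deploy-task-result
```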
--
Another alternative currently available in Qaz is the template Deploy-Time/Gen-Time function invoke. This lets you invoke a Lambda function while generating or deploying a template. For example:
{{ invoke "some_function" `{"some":"json"}` }}
The above will invoke a function and write the response into your template before deploying. In this way, you're able to dynamically trigger actions in AWS via Lambda and Export the outputs via Cloudformation.
For example:
Outputs:
  functionResponse:
    Description: Lambda Function Response
    Value: {{ invoke "some_function" `{"some":"json"}` }}
    Export:
      Name: some-export-name
This doesn't help much with post deployment ops though. :-/
--
Let me know your thoughts. :-)
Apologies for the long winded response ^_^
from qaz.
Thanks for the detailed response. I think this sounds like a good approach and could work well.
The only thing that gives me pause about relying heavily on Lambda functions is that deploying them adds more complexity to the overall deployment process. I think the "Cloud native" approach is a good one overall, but from a pragmatic perspective, I get impatient with the additional work of packaging and deploying Lambda functions and find a subshell call easier. Do you have any thoughts about keeping the Lambda deployments themselves relatively painless?
There is also one thing that generally has to be done prior to deploying/running Lambda or Cloudformation: creating an s3 bucket to hold your CF templates and Lambda code before deploying them. Because the apps I deploy span multiple AWS accounts, I've found it easiest to use app-environment-specific infrastructure buckets to hold these artifacts. Furthermore, given some of the annoyances of dealing with s3 buckets within Cloudformation, I often choose to just use Ansible for creating all my applications' s3 buckets, to avoid having to worry about deleting and re-creating a stack and thereby losing any data within s3 buckets that may have been part of that stack.... Anyway, all of this is just a long-winded way of saying that it would be helpful if the app could idempotently create at least that first turtle -- I mean, s3 bucket :).
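For what it's worth, the data-loss half of that problem can at least be softened with a DeletionPolicy (this still doesn't bootstrap the very first bucket, and the bucket name below is made up):

```yaml
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    # Retain keeps the bucket and its contents even if the stack is deleted
    DeletionPolicy: Retain
    Properties:
      BucketName: my-app-dev-artifacts
```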
Point taken on using cross-stack references. I put together most of the automation scripts and CF template snippets I used prior to cross-stack references being available so haven't used them as much as I probably should. I also sort of dislike the 'global' nature of the exports, but that's a relatively minor gripe.
Anyway, I've just changed jobs and no longer have access to my old automation tools, which is the perfect opportunity to start fresh. I've been contemplating whether to build my own fairly minimal Python framework to do so, but I think you've already done a better job than I would be able to do myself. Besides, I'm more enamored with Golang these days :) -- meaning, I'd love to contribute to the project if it looks like qaz will be a good fit for the needs of my current employer.
Gave it some thought today and you're completely right! Given the use case above, I imagine it would get extremely tedious deploying Lambda functions to different accounts for each build. Further, managing those functions adds extra admin work to your day.
All-in-all, I liked the Lambda idea, but given your input I believe it won't scale well. The hook logic needs to be pushed from wherever Qaz sits, giving a more centralised management solution.
So I'm wide open to a feature-set that supports local scripts as hooks.
I think it would be great to have values from the config file passed into scripts; that way, scripts can be dynamically generated templates that differ based on values from the config.
--
I'm always happy for feedback and pull requests; as you've just proven, I don't know everything :-) So I love any contribution, great or small.
You've given me some stuff to re-think but let me know if you have any ideas on how to approach this, or fork and implement it and I can test it out.
Thanks again Jaymell :-)
Thanks for the feedback. I'm still undecided as to whether the Lambda deployment overhead is a worthy trade-off for a deployment that has minimal local dependencies. To the extent that the Lambda functions are reusable across multiple applications and not just one-off solutions to a specific application's deployment, it's probably worth taking the Lambda-based solution you initially proposed.
I'm hoping to spend much of the next couple weeks figuring that out and seeing if qaz will fit our needs. Also, this is not necessarily related, but I wanted to at least mention some other use cases I'm trying to think about possibly being able to accomplish:
- Ability to assume roles instead of relying on profiles for deploying some or all stacks (say I need to put a DNS record in Route 53 in another AWS account) or running the discussed tasks
- Easy multi-region deployments (ideally, I could pass region name on command-line rather than having to hard-code it in the config files)
If I feel like I have any good solutions to these problems, I will fork and try to get some code in place for them. Thanks again for your feedback. Cheers!
I'll give some thought to both solutions definitely.
Role switching is already supported by specifying the roles in your AWS config. For example, here's my AWS CLI config file:
~/.aws/config
[default]
aws_access_key_id = oxoxoxoxoxoxoxoxoxoxox
aws_secret_access_key = oxoxoxoxoxoxoxoxoxoxoxoxoxoxoxo
region = eu-west-1

[profile billing]
role_arn = arn:aws:iam::9999999999:role/myrolename
source_profile = default
region = eu-west-1
I'm then able to specify the billing profile in my config and Qaz will perform the role switch in the background. So the setup will work with Keys and Roles. Qaz handles this in the same way aws-cli does.
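On the Qaz side, calling that profile for a stack would look something like this (the stack name and source path are made up; the profile key is the relevant part):

```yaml
stacks:
  billing_alerts:
    profile: billing                       # role switch happens behind the scenes
    source: templates/billing_alerts.yml
```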
--
As for the region issue, I'm currently working on version 0.50-beta, which partially addresses it. The region keyword is going to be superseded by the AWS CLI config. This means the region you specify in your CLI config for each profile will be used when deploying a stack.
For example, the region defined for billing in my CLI config above is eu-west-1. If I call this profile in Qaz config without specifying a region in Qaz, it will use the region defined for the profile. Effectively, you won't need to specify a region in Qaz config at all.
--
I'll also give some thought to passing values via the CLI; I'd planned to do this for things like stack parameters and maybe CF values.
Let me know if you have any questions on the Role switching and region stuff.
Thanks :-)
Thanks for the reply. The region config definitely makes sense, and for the most part I think it should work fine. There are two potential issues I see with relying on the AWS profile config:
- Blue/green deployments -- though I've never actually gotten a full application to do this (we can all dream!), I've always intended to be able to do this by deploying the same stack across different regions. I think a runtime option to specify region would be easiest, rather than hard-coding it in the credentials file.
- On centralized build servers, I generally use instance profiles to define permissions. Since this eliminates the need for API keys, ideally I could just specify the name of a role to assume rather than separately creating an AWS credentials file on the server.
--James
Inclined to agree on passing the region in as a flag for multi-region Blue/Green deployments. My original use case for multi-account/region deployments was simultaneous or cross-account deployments and dependencies: I needed to be able to provision multiple separate accounts with the same stacks at the same time and have another account read stack outputs from all of them.
Given the above, it wouldn't be hard to add a region flag that overrides the config values. The hard part is being able to do it for individual stacks. If we go with the region flag, it'll have to be global, but based on the use case you've outlined, I think that should be OK.
The central build stuff was from an external build server or workstation point of view. You're definitely right: if running from EC2, it wouldn't make sense to define a profile. That being said, I'm pro having a role flag; we'd then use the instance profile to assume the role.
The commands then could look something like this:
qaz deploy some_stack --role="some/arn" --region=eu-west-1
qaz deploy --all --role="some/arn" --region=eu-central-1
Note: when dealing with roles, it has to be the full ARN, which won't be fun to type out, especially if dynamically generated. It may still be worth having the option to store it in config as well.
Let me know if that's the direction you were thinking.
After giving this one some thought, I believe qaz has sufficient functionality via Lambda to run arbitrary tasks.
- Using Lambda as a template source also allows other tasks to be run before returning the Template
stacks:
  my_stack:
    source: lambda:{"some":"event"}@myfunction
- Lambda can be called at both Gen-Time & Deploy-Time within generation operations, allowing actions to be run and the responses written directly into the template.
{{ $resp := invoke "somefunction" `{"some":"event"}` }}
My response is {{ $resp.response }}
- Custom Resource Hooks can be used to run arbitrary tasks on create, update and delete
I'm open to a PR that enhances the arbitrary tasks model but will actively avoid calling local scripts.
Closing this one for now.
Thanks