claranet / terraform-aws-lambda
Terraform module for AWS Lambda functions
License: MIT License
The ternary (conditional) operator can now return any type. Rather than machinations with slice/list/etc., just return the desired list value directly.
https://www.terraform.io/docs/configuration/expressions.html#conditional-expressions
Originally posted by @lorengordon in #49
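For example, under Terraform 0.12 syntax a conditional expression can yield the list directly (variable names are illustrative):

```hcl
# Terraform 0.12+: a conditional may return any type,
# so the branches can be lists themselves.
locals {
  policy_arns = var.attach_policy ? [var.policy_arn] : []
}
```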
I started using this module for a Java 11 Lambda function and had a hard time getting the function to execute: I kept getting an error saying that my class was not found.
I later realized that the module zips the JAR I provided, and this causes it to break.
I spent some time looking at the code and realized that a simple workaround is to use the build_command property like so:
build_command = "cp '$source' '$filename'"
Improving the documentation would help with deploying functions that do not need to be zipped, like Java, or that are already zipped.
When running make plan on Debian, the pip command that is run to install the Python modules:
pip install -r requirements.txt -t .
seems to trigger the problem documented here: https://stackoverflow.com/questions/4495120/combine-user-with-prefix-error-with-setup-py-install
It appears this can be partially fixed by setting up ~/.pydistutils.cfg to disable the prefix option; however, for that to work, the pip install command needs to be run with --user, which the module does not specify.
Should we just add --user to the command? If so, only on Debian or on all distros?
Thanks,
Alan Jenkins
I have the following code:
environment {
  variables {
    BUCKET_NAME = "${aws_s3_bucket.s3_bucket.id}"
  }
}
and I'm getting the following error:
22: environment {
Blocks of type "environment" are not expected here. Did you mean to define
argument "environment"? If so, use the equals sign to assign it a value.
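If environment is exposed as an argument (a map) rather than a block, the fix the error message suggests would look like this (a sketch, assuming that variable shape):

```hcl
# Assign with "=" since "environment" is an argument, not a nested block.
environment = {
  variables = {
    BUCKET_NAME = aws_s3_bucket.s3_bucket.id
  }
}
```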
It would be nice if pipenv was supported: https://pipenv.readthedocs.io/en/latest/
It doesn't have a requirements.txt file; instead it has Pipfile and Pipfile.lock files.
pipenv run pip install -r <(pipenv lock -r) --target .
Source: pypa/pipenv#746 (comment)
would be enough to install all of its dependencies.
Instead of installing requirements.txt using pip2 or pip3, the module currently always installs using 'pip'. This results in undefined behaviour, as pip can point to either pip3 or pip2, and the wrong version of the required modules may be installed for the Lambda function.
Just leaving this out there: this module cannot run on Terraform Enterprise because the zip command is not present. Not sure if there is a workaround for this...
* module.taskPlanner_cron.module.lambda.data.external.built: data.external.built: failed to execute "/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/built.py": Traceback (most recent call last):
File "/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/build.py", line 146, in <module>
create_zip_file(temp_dir, absolute_filename)
File "/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/build.py", line 100, in create_zip_file
run('zip', '-r', target_file, '.')
File "/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/build.py", line 70, in run
subprocess.check_call(args, **kwargs)
File "/usr/lib/python3.5/subprocess.py", line 576, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python3.5/subprocess.py", line 557, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'zip'
Traceback (most recent call last):
File "/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/built.py", line 31, in <module>
subprocess.check_output(build_command, shell=True)
File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
**kwargs).stdout
File "/usr/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '/terraform/scheduling-service/.terraform/modules/c205777ab1dadf9cdd9611349aa2639d/build.py=[redacted]' returned non-zero exit status 1
There's a small problem I'm having when using the module: The CloudWatch Log Group is only created when the Lambda Function tries to send its first logs. This poses a challenge when you want to use that Log Group somewhere else which relies on its existence (in my use case, Kinesis Data Stream).
I'll gladly put in a PR to create the Log Group if enable_cloudwatch_logs is true, but was just wondering whether this was something that has come up before?
Code example:
module "lambda" {
  source        = "github.com/claranet/terraform-aws-lambda"
  function_name = "foo"
  description   = "foo"
  handler       = "foo.lambda_handler"
  runtime       = "python3.7"
  timeout       = 300
  source_path   = "${path.module}/files/foo.py"
}
resource "aws_kinesis_stream" "bar" {
  name        = "bar"
  shard_count = "1"
}
resource "aws_cloudwatch_log_subscription_filter" "foo_to_bar" {
  name            = "foo-to-bar"
  log_group_name  = "/aws/lambda/foo"
  filter_pattern  = ""
  destination_arn = "${aws_kinesis_stream.bar.arn}"
}
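A possible fix sketched in HCL: pre-create the log group with Terraform so other resources can depend on its existence (the retention value is illustrative):

```hcl
# Pre-create the log group that Lambda would otherwise create lazily
# on first invocation; the name must match "/aws/lambda/<function_name>".
resource "aws_cloudwatch_log_group" "foo" {
  name              = "/aws/lambda/foo"
  retention_in_days = 14
}
```

The subscription filter could then reference aws_cloudwatch_log_group.foo.name instead of a hard-coded string, making the dependency explicit.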
Hello!
When setting timeout and memory_limit as Lambda parameters, Terraform does not apply these params.
I am adapting the module to HCL 2.0, namely Terraform version 0.12.0, but I have difficulty understanding the path variable in archive.tf:
locals {
module_relpath = "${substr(path.module, length(path.cwd) + 1, -1)}"
}
What does it really mean? Shouldn't it be:
module_relpath = "${path.module}"
Thanks
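For context, substr(path.module, length(path.cwd) + 1, -1) strips the current working directory prefix (plus the trailing slash) from the module path, yielding a path relative to where Terraform is run. A worked example under assumed paths:

```hcl
# Assuming:
#   path.cwd    = "/home/user/project"                            (length 18)
#   path.module = "/home/user/project/.terraform/modules/lambda"
# then substr(..., 18 + 1, -1) skips "/home/user/project/" and keeps
# everything to the end (-1), giving:
#   module_relpath = ".terraform/modules/lambda"
locals {
  module_relpath = "${substr(path.module, length(path.cwd) + 1, -1)}"
}
```

So it is not the same as path.module, which would be the absolute (or root-relative) module path rather than a path relative to the working directory.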
Currently this project downloads dependencies using pip install, but if you rely on any packages which require a native extension you'll have problems unless you are building from a machine with the same platform (in pip's parlance).
The documentation indicates that a requirements.txt can be used to automatically build with the necessary python modules. Where is requirements.txt placed? How is it referenced? Documentation could be expanded a bit for this portion.
Thanks
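A sketch of how this is commonly laid out (paths are illustrative, and the assumption is that the module picks up a requirements.txt found in the source_path directory):

```hcl
# Illustrative layout:
#   files/foo/
#     foo.py
#     requirements.txt   # installed into the package during the build step
module "lambda" {
  source        = "github.com/claranet/terraform-aws-lambda"
  function_name = "foo"
  handler       = "foo.lambda_handler"
  runtime       = "python3.7"
  source_path   = "${path.module}/files/foo" # a directory, not a single file
}
```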
The Terraform config currently uses the ambiguous 'python' command to call hash.py. This can be either python3 or python2, depending on the default on the user's Linux distribution.
hash.py is known to work fine with python2, so I suggest we just change python to python2 here:
terraform-aws-lambda/archive.tf
Line 4 in 7b8d42e
I think it would be a good idea to allow for directory exclusion.
Just as an example, sometimes you have a local copy of a node_modules/ directory, but you have no intention of uploading it (or some of its modules).
Make it create an alarm for invoke errors and notify any SNS topics that have been passed in via variables.
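A rough sketch of such an alarm (thresholds and the SNS input variable are illustrative, not part of the module today):

```hcl
# Alarm on any invocation errors for the function, notifying
# SNS topics passed in via a hypothetical var.sns_topic_arns.
resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "${var.function_name}-invoke-errors"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"
  dimensions = {
    FunctionName = var.function_name
  }
  alarm_actions = var.sns_topic_arns
}
```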
Hi! Latest commit to master branch is this: f9ff6ee
However, latest release tag is this: https://github.com/claranet/terraform-aws-lambda/commits/v1.1.0 and it's pointing to this commit: ace2bc9 which is one commit behind f9ff6ee (master).
Could you create a new release tag for the latest commit in master branch? Thanks!
resource/aws_lambda_function: Setting reserved_concurrent_executions to 0 will now disable Lambda Function invocations, causing downtime for the Lambda Function. Previously reserved_concurrent_executions accepted 0 and below for unreserved concurrency, which means it was not previously possible to disable invocations. The argument now differentiates between a new value for unreserved concurrency (-1) and disabling Lambda invocations (0). If previously configuring this value to 0 for unreserved concurrency, update the configured value to -1 or the resource will disable Lambda Function invocations on update. If previously unconfigured, the argument does not require any changes. See the Lambda User Guide for more information about concurrency.
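In HCL terms, the distinction described above (the resource name is illustrative; the values follow the provider changelog quoted here):

```hcl
resource "aws_lambda_function" "example" {
  # ...
  reserved_concurrent_executions = -1 # unreserved concurrency (the new default)
  # reserved_concurrent_executions = 0  # would disable all invocations!
}
```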
Discovered an issue in the new custom build script implementation that causes failures on Windows... I believe there are some differences in how shells or sys.argv are interpreting arguments with quotes around them. Here is the error:
null_resource.archive (local-exec): Traceback (most recent call last):
null_resource.archive (local-exec): File "build.py", line 129, in <module>
null_resource.archive (local-exec): with cd(source_dir):
null_resource.archive (local-exec): File "C:\Python36\lib\contextlib.py", line 81, in __enter__
null_resource.archive (local-exec): return next(self.gen)
null_resource.archive (local-exec): File "build.py", line 23, in cd
null_resource.archive (local-exec): os.chdir(path)
null_resource.archive (local-exec): OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: ''C:\\...
You can see the leading ' in the directory name, there at the end. Note that there is no closing quote. I'm obviously removing the full path, but there is indeed no closing quote by the time the value gets to os.chdir(path).
I threw in some debug statements to see where there are differences between Linux and Windows, and I can see that hash.py is outputting the JSON the same way on both (thankfully). The difference appears to be where build.py is reading the arguments from sys.argv.
diff --git a/build.py b/build.py
index c32737e..38de3ad 100644
--- a/build.py
+++ b/build.py
@@ -108,6 +108,7 @@ filename = sys.argv[1]
runtime = sys.argv[2]
source_path = sys.argv[3]
+print('source_path = {0}'.format(source_path))
absolute_filename = os.path.abspath(filename)
# Create a temporary directory for building the archive,
On linux, that will output the following, with no quotes at all:
null_resource.archive (local-exec): source_path = /home/...
On windows, it outputs the following, complete with opening and closing quotes:
null_resource.archive (local-exec): source_path = 'C:\...'
Some further debugging reveals that the closing quote is stripped by source_dir = os.path.dirname(source_path).
The behavior of os.path.dirname is the same on Windows and Linux, so the failure is simply due to how the value of sys.argv[3] is not wrapped in quotes on Linux but is wrapped in quotes on Windows. The quotes come from the default value in variables.tf.
Still noodling possible fixes...
Hi, is there a version of this that supports a Go workflow?
Cheers!
Since Terraform doesn't support using count on modules (open issue), it is rather difficult to optionally include a module in a Terraform config. The pattern I've seen to work around this is to expose a new variable, maybe something like create_module or create_lambda, and use it to interpolate the count in every resource within the module. Yeah, it's a bit ugly.
Here's an example of what I mean:
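A hedged sketch of the pattern (variable and resource names are illustrative):

```hcl
variable "create_lambda" {
  default = true
}

# Inside the module, every resource gets the interpolated count,
# so setting create_lambda = false creates nothing.
resource "aws_lambda_function" "lambda" {
  count = "${var.create_lambda ? 1 : 0}"
  # ...
}
```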
In particular, I find I need to do this rather often when writing configs that need to work both in the commercial regions and in the GovCloud regions. Since GovCloud is missing a lot of services, I need a way to make certain resources optional when it's the target region.
I'd be willing to work this if you like.
Any thoughts on running something like npm install --production to create the node_modules directory prior to zipping up a directory for Node runtimes? Generally, the directory is added to a .gitignore, so the dependencies are not present unless installed first.
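One possible approach, reusing the module's existing build_command hook (a sketch; the exact command is an assumption, though '$source' and '$filename' are the placeholders already shown elsewhere in these issues):

```hcl
module "lambda" {
  source        = "github.com/claranet/terraform-aws-lambda"
  function_name = "foo"
  handler       = "index.handler"
  runtime       = "nodejs10.x"
  source_path   = "${path.module}/files/app"
  # Install production dependencies into the source tree, then zip it.
  build_command = "cd '$source' && npm install --production && zip -r '$filename' ."
}
```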
If you delete the lambda function from the AWS console, and try to apply using this module again, it will try to create the lambda function using a filename that may or may not exist on disk. It might not exist because it was generated on someone else's laptop, or it was created on disk a long time ago and then cleaned up.
A workaround is to edit the lambda source (e.g. add a comment), apply, undo, apply.
Thanks for this awesome module! I tend to flip between platforms a lot, developing on Windows for Linux-based systems. If I can get this working on Windows, would you be open to a pull request? Looks like some folks have been recommending/doing things that definitely wouldn't work, like using python2/python3 executables, relying on the shebang, etc... So, if you want to keep that, then I won't bother and will just keep using the Windows subsystem for Linux.