seanroy / lambda-maven-plugin
A Maven plugin to facilitate lambda deployments as part of your Maven build/dev process.
License: Apache License 2.0
The annotation scanner cannot find annotations in shaded jars; it looks like the annotations are being proxied. A fix is coming shortly.
AWS added SQS to the list of supported trigger sources in June 2018:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
The plugin is missing this new integration option.
Add a destroy target that allows the caller to delete the lambda function, if it exists.
First off, thanks for writing this plugin, it's pretty handy.
I like the inlined trigger support, but it looks like if you rename a trigger the old ones remain (at least CloudWatch Events do - didn't try SNS).
It would be nice if there was some way to mark a trigger "deleted" which would be applied the next time the plugin is run. This would eliminate the need to go and remove an orphaned trigger manually.
One possibility might be to add an optional "deleted" attribute (default: false) to the triggers JSON, which, if true, would remove the trigger. For example:
"triggers": [
{ "integration": "CloudWatch Events - Schedule",
"ruleName": "every-minute",
"deleted": true },
{ "integration": "CloudWatch Events - Schedule",
"ruleName": "an-active-trigger",
"ruleDescription": "runs lambda every minute",
"scheduleExpression": "rate(1 minutes)" }
]
Here, every-minute would be deleted if it exists, while an-active-trigger would be created/updated.
This would enable a workflow where you could mark a trigger deleted for some amount of time, until you're sure the lambda has been deployed across all environments and the orphaned trigger removed. Then the deleted trigger rule could be removed from the triggers array entirely.
Looks like UpdateFunctionCode does not wait for the lambda function to complete its update, so the subsequent UpdateFunctionConfiguration call breaks. See: https://docs.aws.amazon.com/lambda/latest/dg/functions-states.html
[INFO] About to update functionCode for midori-jpdfc3-dev-gergelyjuhasz-headless-browser
[INFO] About to update functionConfig for midori-jpdfc3-dev-gergelyjuhasz-headless-browser
[ERROR] Error during processing
com.amazonaws.services.lambda.model.ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:974356111243:function:midori-jpdfc3-dev-gergelyjuhasz-headless-browser (Service: AWSLambda
; Status Code: 409; Error Code: ResourceConflictException; Request ID: 844687bd-de05-4d3b-abb9-dd4d5dcb9bd3)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse (AmazonHttpClient.java:1639)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest (AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper (AmazonHttpClient.java:1056)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute (AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer (AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute (AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500 (AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute (AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute (AmazonHttpClient.java:513)
at com.amazonaws.services.lambda.AWSLambdaClient.doInvoke (AWSLambdaClient.java:2488)
at com.amazonaws.services.lambda.AWSLambdaClient.invoke (AWSLambdaClient.java:2464)
at com.amazonaws.services.lambda.AWSLambdaClient.executeUpdateFunctionConfiguration (AWSLambdaClient.java:2428)
at com.amazonaws.services.lambda.AWSLambdaClient.updateFunctionConfiguration (AWSLambdaClient.java:2402)
at com.github.seanroy.plugins.DeployLambdaMojo.lambda$new$36 (DeployLambdaMojo.java:147)
at java.util.function.Function.lambda$andThen$1 (Function.java:88)
at java.util.function.Function.lambda$andThen$1 (Function.java:88)
at java.util.function.Function.lambda$andThen$1 (Function.java:88)
at java.util.function.Function.lambda$andThen$1 (Function.java:88)
at com.github.seanroy.plugins.DeployLambdaMojo.lambda$null$115 (DeployLambdaMojo.java:834)
at java.util.Optional.map (Optional.java:215)
at com.github.seanroy.plugins.DeployLambdaMojo.lambda$new$116 (DeployLambdaMojo.java:828)
at java.util.function.Function.lambda$andThen$1 (Function.java:88)
at com.github.seanroy.plugins.DeployLambdaMojo.lambda$execute$34 (DeployLambdaMojo.java:101)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept (ForEachOps.java:184)
at java.util.stream.ReferencePipeline$3$1.accept (ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining (ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto (AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto (AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential (ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential (ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate (AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach (ReferencePipeline.java:418)
at com.github.seanroy.plugins.DeployLambdaMojo.execute (DeployLambdaMojo.java:97)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
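Until the plugin waits on the function state natively, the general shape of a fix is a poll-until-ready loop between the two calls. Below is a minimal, self-contained sketch of such a helper (not the plugin's actual code); in the real plugin, the readiness check would be a GetFunctionConfiguration call reporting LastUpdateStatus of "Successful":

```java
import java.util.function.Supplier;

public class Retry {
    // Repeatedly evaluate `ready` until it returns true or attempts run out.
    // Returns true if the condition became true, false if we gave up.
    public static boolean waitUntil(Supplier<Boolean> ready, int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            if (ready.get()) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Stand-in for polling the function configuration until the update settles:
        // here the "service" becomes ready on the third poll.
        int[] calls = {0};
        boolean ok = waitUntil(() -> ++calls[0] >= 3, 10, 1);
        System.out.println(ok + " after " + calls[0] + " polls");  // true after 3 polls
    }
}
```

The same loop, placed between UpdateFunctionCode and UpdateFunctionConfiguration, would avoid the ResourceConflictException above.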
This tool was working fine for me to create a new lambda function, but any time I would run it to update the same function, it would fail with a message such as:
[ERROR] Failed to execute goal com.github.seanroy:lambda-maven-plugin:2.3.2:deploy-lambda (default-cli) on project infoplus-lambada: null (Service: AWSLambda; Status Code: 500; Error Code: InternalFailure; Request ID: e2b57c26-c102-11e8-9270-ebb4ff7eac70) -> [Help 1]
Eventually I found that I had a trailing comma in the vpcSubnetIds field of my pom.xml, as in:
<vpcSubnetIds>subnet-1c8d3ef4,</vpcSubnetIds>
Deleting that trailing comma allows me to both create and update lambdas.
Ideally, there'd either be a specific error message about invalid content in this field, or it would work even with an incorrect trailing comma there.
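The tolerant behaviour asked for here amounts to dropping empty entries when splitting the comma-separated list. A minimal sketch (not the plugin's actual parsing code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SubnetIds {
    // Split a comma-separated id list, trimming whitespace and dropping empty
    // entries so a stray trailing comma (e.g. "subnet-1c8d3ef4,") never
    // produces an empty subnet id that the AWS API rejects.
    public static List<String> parse(String raw) {
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parse("subnet-1c8d3ef4,"));  // [subnet-1c8d3ef4]
    }
}
```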
Hi there! First off all thanks for the awesome plugin.
I'm using your plugin to upload an Alexa Skill to S3 and update its lambda function, which is working pretty well. However, every time the plugin executes, the triggers are created again on the lambda function (the old one is not deleted), which means that if you execute it 10 times, 10 triggers are added.
I guess your plugin is currently best suited to adding a new lambda function rather than updating an existing one, but I would really appreciate full support for updating existing functions.
Best Regards
Fabian
Would it be possible to support setting http_proxy and https_proxy from the plugin configuration?
Hello,
I really appreciate this Maven plugin. Unfortunately, every deployment creates a new Lambda version.
Is there a way to update $LATEST instead?
Introduce "triggers" in JSON configuration. This should reflect "Add trigger" wizard in AWS console.
I'm trying to get this plugin working under a multimodule project, example structure:
services (root)
-- service1 (child)
-- service2 (child)
-- service3 (child)
What I want to be able to do is either go into one of the services and deploy it to Lambda, or stand in the root and deploy all of them, using mvn package shade:shade lambda:deploy-lambda.
I'm trying to set it up using pluginManagement, but I'm miserably failing: whatever I do, it tries to deploy my root module to AWS Lambda, while the other plugins I use ignore the root project when they're under the <pluginManagement/> tag.
In addition to the permissions listed at https://github.com/SeanRoy/lambda-maven-plugin#credentials, the user deploying through this plugin needs:
s3:ListAllMyBuckets
Due to:
https://github.com/SeanRoy/lambda-maven-plugin/blob/2.3.4-SNAPSHOT/src/main/java/com/github/seanroy/plugins/AbstractLambdaMojo.java#L309
In some complex projects with modules, it is sometimes more efficient to reference the code with a relative path that begins with "../". This currently fails with an error about an invalid S3 path. As a workaround you can use ${project.basedir}/../, but this is not obvious to many Maven non-experts.
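The workaround mentioned above looks like this in the plugin configuration (the module and jar names here are placeholders):

```xml
<configuration>
  <!-- Anchoring the relative path at this module's directory works today;
       a bare "../shared-module/target/shared.jar" currently does not. -->
  <functionCode>${project.basedir}/../shared-module/target/shared.jar</functionCode>
</configuration>
```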
Would be nice to have plugin support for Environment variables, ref http://docs.aws.amazon.com/lambda/latest/dg/env_variables.html
Hello,
It appears environment variables filled in manually at https://console.aws.amazon.com/lambda/home are trashed on each lambda deployment. I think the plugin should not remove existing environment variables when the plugin config has no environment variables (and when the config does have them, I suppose they should be pushed to AWS Lambda without removing the existing ones).
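The merge behaviour this report asks for can be stated precisely: keep what is already on the function, let plugin-configured values win on conflicts, and never wipe the map just because the config is empty. A self-contained sketch (not the plugin's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class EnvMerge {
    // Start from the function's existing variables, then overlay the
    // plugin-configured ones; configured values win on key conflicts,
    // and an empty configuration leaves the existing variables untouched.
    public static Map<String, String> merge(Map<String, String> existing,
                                            Map<String, String> configured) {
        Map<String, String> merged = new HashMap<>(existing);
        merged.putAll(configured);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("DB_HOST", "db.internal");        // set manually in the console
        Map<String, String> configured = new HashMap<>();
        configured.put("LOG_LEVEL", "DEBUG");          // set in the plugin config
        System.out.println(merge(existing, configured));
    }
}
```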
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Failed to configure plugin parameters for: com.github.seanroy:lambda-maven-plugin:2.1.5
Cause: Class 'java.util.List' cannot be instantiated
[INFO] ------------------------------------------------------------------------
Reproduces with Maven 2:
mvn --version
Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
Java version: 1.8.0_121
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux" version: "3.13.0-113-generic" arch: "amd64" Family: "unix"
I switched to Maven 3 for the same pom and I get past this issue.
Just trying out this plugin for the first time.
However, when I try and perform a deployment I get the following error:
A required class was missing while executing com.github.seanroy:lambda-maven-plugin:2.3.3:deploy-lambda: javax/xml/bind/JAXBException
Solved by adding a dependency as follows:
<plugin>
<groupId>com.github.seanroy</groupId>
<artifactId>lambda-maven-plugin</artifactId>
<version>2.3.3</version>
<dependencies>
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.3.1</version>
</dependency>
</dependencies>
....
</plugin>
Add an integration to respond to S3 events. Allow for a list of buckets and events:
{ "integration": "S3",
  "buckets": [
    { "bucket": <bucket name>,
      "events": ["s3:ObjectRemoved:*", "s3:ObjectCreated:*"],
      "filter": {
        "key": {
          "filter_rules": [
            { "name": "prefix",
              "value": "<value>" }
          ]
        }
      }
    }
  ]
}
'nuff said.
Environment variables may contain sensitive information such as passwords. Provide a means by which the user may specify that some variables are to be encrypted, and a KMS encryption key ARN which will be used to encrypt them. Perhaps in the future we will also allow users to specify their own master keys.
This requires further investigation. To reproduce, modify region parameter in src/test/resources/test-project/basic-pom.xml and run 'mvn test'
Use GetBucketLocationRequest instead of looping through buckets.
Sean,
I do most of my dev on a Mac, but am trying to use your nifty plugin on a Windows box, and I'm getting the following error:
Caused by: java.util.regex.PatternSyntaxException: Unexpected internal error near index 1
^
at java.util.regex.Pattern.error(Unknown Source)
at java.util.regex.Pattern.compile(Unknown Source)
at java.util.regex.Pattern.(Unknown Source)
at java.util.regex.Pattern.compile(Unknown Source)
at java.lang.String.split(Unknown Source)
at java.lang.String.split(Unknown Source)
at com.github.seanroy.plugins.LambduhMojo.execute(LambduhMojo.java:98)
...
I think this can be resolved by changing line 98 of your code from:
String[] pieces = functionCode.split(File.separator);
to a form that escapes the separator, since on Windows File.separator is "\", which is not a valid regular expression on its own (and split has no char overload):
String[] pieces = functionCode.split(Pattern.quote(File.separator));
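To illustrate why quoting matters: String.split treats its argument as a regex, and a lone backslash is a broken pattern (hence the PatternSyntaxException above). A minimal demonstration, simulating a Windows-style path regardless of the current OS:

```java
import java.util.regex.Pattern;

public class SeparatorSplit {
    // Pattern.quote escapes the separator so it is matched literally,
    // which works for both "/" and "\" separators.
    public static String[] split(String path, String separator) {
        return path.split(Pattern.quote(separator));
    }

    public static void main(String[] args) {
        // "target\\my-function.jar" is the path target\my-function.jar.
        String[] pieces = split("target\\my-function.jar", "\\");
        System.out.println(pieces.length + " pieces, last = " + pieces[1]);
        // 2 pieces, last = my-function.jar
    }
}
```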
This plugin is great for development workflow and quickly uploading code and configurations to Lambda.
The one exception is "Event Sources". Currently this is something we have to manually set on each upload - in our case, it's the "Alexa Skills Kit".
Is this something that could be configured via lambduh?
cc @philipmw
Each deploy adds policy permission for lambda function. Deploy fails when policy size exceeds the limit.
com.amazonaws.services.lambda.model.PolicyLengthExceededException: The final policy size (20786) is bigger than the limit (20480). (Service: AWSLambda; Status Code: 400; Error Code: PolicyLengthExceededException; Request ID: 6bedc15d-9ceb-11e6-8889-dd864af63382)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1529)
Hi all,
keyPrefix appears to be either deprecated or not working. Is there any way to upload the deployment jar to a particular folder inside an S3 bucket?
thanks,
Kalpa
"concurrency" is a new Lambda setting and is not yet available in the plugin.
Hi,
would like to give it a try and contribute so API endpoints are created also. Is it something you had already in mind ?
When running the deploy-lambda mojo for a redeploy, an NPE occurs in method cleanUpOrphanedAlexaSkillsTriggers at line 537:
lambdaFunction.getExistingPolicy().getStatements().stream()
The lambda function I was trying to deploy has no policy, so getExistingPolicy() returns null and the subsequent getStatements() call throws an NPE.
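One way to make the cleanup null-safe is to treat "no policy" the same as "policy with no statements". A sketch with stand-in types (the Policy class below is a hypothetical stub, not the AWS SDK class):

```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;

public class PolicyGuard {
    // Hypothetical stand-in for the policy object, for illustration only.
    public static class Policy {
        private final List<String> statements;
        public Policy(List<String> statements) { this.statements = statements; }
        public List<String> getStatements() { return statements; }
    }

    // Returns the policy's statements, or an empty list when there is no
    // policy, so callers can stream over the result without an NPE.
    public static List<String> statementsOf(Policy existingPolicy) {
        return Optional.ofNullable(existingPolicy)
                .map(Policy::getStatements)
                .orElse(Collections.emptyList());
    }

    public static void main(String[] args) {
        System.out.println(statementsOf(null).size());  // 0, no NPE
        Policy p = new Policy(Collections.singletonList("AllowInvoke"));
        System.out.println(statementsOf(p).size());     // 1
    }
}
```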
Use CloudWatch scheduling to keep your lambda function resident in AWS. Without this there can be a bit of a delay while a container is provisioned for the function; if the function hasn't been hit in a while, AWS destroys the container.
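A keep-warm rule could reuse the triggers JSON shape shown in other reports on this page; the rule name and rate below are placeholders:

```json
"triggers": [
  { "integration": "CloudWatch Events - Schedule",
    "ruleName": "keep-warm",
    "ruleDescription": "pings the function so AWS keeps a container provisioned",
    "scheduleExpression": "rate(5 minutes)" }
]
```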
mvn clean install -DskipTests gives the following error:
Generating /home/dean/Downloads/src/lambda-maven-plugin/target/apidocs/index.html...
Generating /home/dean/Downloads/src/lambda-maven-plugin/target/apidocs/help-doc.html...
[INFO] Building jar: /home/dean/Downloads/src/lambda-maven-plugin/target/lambda-maven-plugin-2.2.2-javadoc.jar
[INFO]
[INFO] --- maven-gpg-plugin:1.6:sign (sign-artifacts) @ lambda-maven-plugin ---
gpg: no default secret key: secret key not available
gpg: signing failed: secret key not available
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.618 s
[INFO] Finished at: 2017-08-11T15:17:17-06:00
[INFO] Final Memory: 39M/776M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-gpg-plugin:1.6:sign (sign-artifacts) on project lambda-maven-plugin: Exit code: 2 -> [Help 1]
[ERROR]
Currently, with version 2.3.2, I cannot publish a new version if only the lambda code package changes.
When we release our project we set publish to true and forceUpdate to false. We do not want to force update, because that would deploy and publish a new version even when the code/config have not changed.
What I propose is to additionally publish a new version if publish is true and the code package has been updated due to checksum differences.
Edit: My motivation is production deployments only when there are actual λ code/config changes. Otherwise, publishing different versions with the same effective config/code is redundant and confusing later.
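The proposed rule is simple to state in code: publish when the flag is set and the artifact's checksum differs from what is already deployed. A self-contained sketch (the plugin compares MD5 hashes against S3; the helper below mirrors that idea, not the plugin's actual implementation):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PublishGate {
    // MD5 hex digest of the artifact bytes, comparable to a stored hash.
    public static String md5Hex(byte[] artifact) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(artifact);
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is a mandatory JVM algorithm", e);
        }
    }

    // Proposed rule: publish when publish=true AND the code actually changed.
    public static boolean shouldPublish(boolean publish, String localMd5, String remoteMd5) {
        return publish && !localMd5.equals(remoteMd5);
    }

    public static void main(String[] args) {
        String deployed = md5Hex("v1".getBytes(StandardCharsets.UTF_8));
        String rebuilt  = md5Hex("v2".getBytes(StandardCharsets.UTF_8));
        System.out.println(shouldPublish(true, rebuilt, deployed));   // true: code changed
        System.out.println(shouldPublish(true, deployed, deployed));  // false: nothing changed
    }
}
```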
[INFO] ---- Create or update LambdaChatUpdaterDevelopment_sean -----
[INFO] Cleaning up orphaned triggers.
[ERROR] Error during processing
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException
Requested resource not found: Stream: arn:aws:dynamodb:us-east-1:280237693431:table/chatDevelopment_sean/stream/2017-06-23T19:06:22.628 not found (Service: AmazonDynamoDBStreams; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: C6L5KJQA3NDGKSEEP44UUSMH13VV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)
The stream it's complaining about exists but is disabled. The code must be updated to deal with this.
If the function doesn't already exist, an exception is thrown when it tries to get the configuration for the function.
I'm getting the following error when I try to use the plugin. The plugin is installed in my local repo (~/.m2).
$ mvn package lambda:deploy-lambda
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] account-customer-service
[INFO] account-service
[INFO] customer-service
Downloading: https://repo.maven.apache.org/maven2/org/codehaus/mojo/maven-metadata.xml
Downloading: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-metadata.xml
Downloaded: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-metadata.xml (13 KB at 31.8 KB/sec)
Downloaded: https://repo.maven.apache.org/maven2/org/codehaus/mojo/maven-metadata.xml (20 KB at 46.5 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] account-customer-service ........................... SKIPPED
[INFO] account-service .................................... SKIPPED
[INFO] customer-service ................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.860 s
[INFO] Finished at: 2017-08-13T11:19:39-06:00
[INFO] Final Memory: 14M/605M
[INFO] ------------------------------------------------------------------------
[ERROR] No plugin found for prefix 'lambda' in the current project and in the plugin groups [org.apache.maven.plugins, org.codehaus.mojo] available from the repositories [local (/home/dean/.m2/repository), central (https://repo.maven.apache.org/maven2)] -> [Help 1]
[ERROR]
Here is the developer workflow:
1. mvn install to create the Lambda function
2. mvn install to update the Lambda function
The first run shows:
[INFO] Function KubernetesChatbot created. Function Arn: arn:aws:lambda:us-east-1:ACCOUNT_ID:function:KubernetesChatbot
[INFO] Alias 1-0-SNAPSHOT created for KubernetesChatbot with version $LATEST
The second run shows:
[INFO] Cleaning up orphaned triggers.
[INFO] Config hasn't changed for KubernetesChatbot
How can a Continuous Deployment of Lambda functions be achieved using this plugin?
Currently, the Lambda function has to be explicitly deleted before the updated code can be uploaded.
There seems to be no intuitive way to attach a policy to a Lambda function at this time.
Maybe there is a way, but at least no example could be found.
Commit d01e78a broke uploading to S3. Now, when the file is not in S3 or its hash is different, the plugin raises this exception:
[ERROR] The Content-MD5 you specified was invalid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidDigest; Request ID: 6D82591B56A1FAC9)
Using version 2.2.1 or 2.2.0
My lambda is configured with a "CloudWatch Events - Schedule" trigger.
Each time I run mvn lambda:deploy-lambda, a new trigger is added to the lambda, pointing to the same event.
Configuration details (extract):
<plugin>
<groupId>com.github.seanroy</groupId>
<artifactId>lambda-maven-plugin</artifactId>
<version>${lambda-maven-plugin.version}</version>
<configuration>
<functionCode>${project.build.directory}/${project.build.finalName}.jar</functionCode>
<version>dev</version>
<s3Bucket>${s3-bucket}</s3Bucket>
<lambdaRoleArn>${lambda-role}</lambdaRoleArn>
<region>eu-west-1</region>
<runtime>java8</runtime>
<timeout>60</timeout>
<memorySize>256</memorySize>
<lambdaFunctionsJSON>
[
{
"functionName": "EC2Backup",
"handler": "com.example.EC2Backup",
"triggers": [
{
"integration": "CloudWatch Events - Schedule",
"ruleName": "daily-weekday-5am",
"ruleDescription": "5am on weekdays",
"scheduleExpression": "cron(0 5 ? * MON-FRI *)"
}
]
}
]
</lambdaFunctionsJSON>
</configuration>
</plugin>
Maven logs
$ mvn lambda:deploy-lambda
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building backups 4.0.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- lambda-maven-plugin:2.2.1:deploy-lambda (default-cli) @ backups ---
[INFO] backups-4.0.0.jar exists in S3 with MD5 hash 000000000000000000
[INFO] backups-4.0.0.jar is up to date in AWS S3 bucket example. Not uploading...
[INFO] ---- Create or update EC2Backup -----
[INFO] Cleaning up orphaned triggers.
[INFO] About to update functionCode for EC2Backup
[INFO] About to update functionConfig for EC2Backup
[INFO] Alias dev updated for EC2Backup with version 1
[INFO] About to create or update CloudWatch Events - Schedule trigger for daily-weekday-5am
[INFO] Created CloudWatch Events - Schedule trigger arn:aws:events:eu-west-1:0000000000:rule/daily-weekday-5am
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.282 s
[INFO] Finished at: 2017-06-21T12:53:40+02:00
[INFO] Final Memory: 17M/123M
[INFO] ------------------------------------------------------------------------
Thanks for the great plugin!
When setting up, I found that the permissions list in the README was missing quite a few permissions, causing errors when trying to deploy. Eventually I found that I needed the following much larger set of permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunction",
"lambda:ListAliases",
"lambda:GetFunctionConfiguration",
"lambda:UpdateAlias",
"s3:PutObject",
"s3:GetObject",
"lambda:UpdateFunctionCode",
"iam:PassRole",
"lambda:AddPermission",
"events:ListRuleNamesByTarget",
"lambda:GetPolicy",
"lambda:CreateAlias"
],
"Resource": [
"arn:aws:s3:::<bucket>/*",
"arn:aws:lambda:*:*:function:<functionName>",
"arn:aws:iam::*:role/service-role/<role>",
"arn:aws:events:*:*:rule/*"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"lambda:CreateFunction",
"events:PutTargets",
"s3:ListAllMyBuckets",
"ec2:DescribeVpcs",
"events:PutRule",
"lambda:ListEventSourceMappings",
"lambda:UpdateFunctionConfiguration",
"sns:ListSubscriptions",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups"
],
"Resource": "*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": "s3:CreateBucket",
"Resource": "arn:aws:s3:::<bucket>"
}
]
}
Is this to be expected or have I done something wrong? Is it just that the documentation needs updating?
Thanks!
Hi!
How do I properly deploy the same jar twice with different configurations?
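One approach, sketched from the lambdaFunctionsJSON shape shown in the configuration example above, is to declare the same handler under two function names (the environmentVariables field here is an assumption about the plugin's schema, used only to illustrate differing configs):

```json
"lambdaFunctionsJSON": [
  { "functionName": "EC2Backup-dev",
    "handler": "com.example.EC2Backup",
    "environmentVariables": { "STAGE": "dev" } },
  { "functionName": "EC2Backup-prod",
    "handler": "com.example.EC2Backup",
    "environmentVariables": { "STAGE": "prod" } }
]
```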