aws / aws-sdk
Landing page for the AWS SDKs on GitHub
Home Page: https://aws.amazon.com/tools/
License: Other
AWS allows you to create a copy-on-write clone of an existing Aurora cluster (docs). When this is done, it takes a few minutes before the cluster is available. It would be great if an extra rds wait command that works for clusters were added to the current list. Example usage:
rds wait db-cluster-available --db-cluster-identifier <cluster-id>
This would make scripting things easier. Currently I have a poor workaround:
status=unknown
while [[ "$status" != "available" ]]; do
  sleep 10
  status=$(aws rds describe-db-clusters --db-cluster-identifier "$CLUSTERNAME" --query 'DBClusters[0].Status' --output text)
done
Class com.amazonaws.services.cloudformation.model.StackEvent has a method for retrieving the resource status:
/**
* <p>
* Current status of the resource.
* </p>
*
* @return Current status of the resource.
* @see ResourceStatus
*/
public String getResourceStatus() {
return this.resourceStatus;
}
But it can actually return values from the StackStatus enum, e.g. "ROLLBACK_IN_PROGRESS" and "ROLLBACK_COMPLETE". That causes the exception "Cannot create enum from <...> value!" when calling ResourceStatus.fromValue(String) with getResourceStatus() as the argument.
If this is intentional, then the method name and documentation are misleading - status values for resources of type "AWS::CloudFormation::Stack" actually come from the StackStatus enum, not ResourceStatus.
Short example (in Scala):
import scala.collection.JavaConverters._
import com.amazonaws.services.cloudformation.model._
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder

object Test {
  def main(args: Array[String]): Unit = {
    val cfClient = AmazonCloudFormationClientBuilder.defaultClient()
    val name = "" //stack name with "ROLLBACK_*" events
    val events = cfClient.describeStackEvents(new DescribeStackEventsRequest().withStackName(name)).getStackEvents.asScala
    //here we fail
    events.map(e => ResourceStatus.fromValue(e.getResourceStatus))
  }
}
Please fill out the sections below to help us address your issue.
Version of AWS SDK for Go? Any
Version of Go (go version)? 1.13.4
CloudFormation.DescribeStacks says that it will return an AmazonCloudFormationException
if the stack does not exist, but there is no trace of this exception in the code.
e.g.
DescribeStacks API operation for AWS CloudFormation.
Returns the description for the specified stack; if no stack name was specified, then it returns the description for all the stacks created.
If the stack does not exist, an AmazonCloudFormationException is returned.
Returns awserr.Error for service API and SDK errors. Use runtime type assertions with awserr.Error's Code and Message methods to get detailed information about the error.
See the AWS API reference guide for AWS CloudFormation's API operation DescribeStacks for usage and error information. See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStacks
I searched through the AWS documentation as well as aws-sdk-go and found no reference to this error on the Go side. If it is returned from the API, there is no way to tell that this error occurred and handle it in code.
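As a workaround today, callers typically match on the error code and message rather than a typed exception. Below is a sketch of that check; it assumes (based on observed service behavior, not a documented SDK contract) that a missing stack surfaces as a "ValidationError" code whose message contains "does not exist":

```go
package main

import (
	"fmt"
	"strings"
)

// isStackNotExist reports whether an error code/message pair from
// DescribeStacks looks like the "stack does not exist" case. The code and
// message would come from awserr.Error's Code() and Message() methods; the
// exact values matched here are an observed convention, not a documented one.
func isStackNotExist(code, message string) bool {
	return code == "ValidationError" && strings.Contains(message, "does not exist")
}

func main() {
	fmt.Println(isStackNotExist("ValidationError", "Stack with id foo does not exist"))
}
```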
There is no method available to rename a particular file in the AWS S3 SDK.
The suggested approach, copying the object's content to a new object, is time consuming.
Version of AWS SDK for Go? github.com/aws/aws-sdk-go v1.26.8
Version of Go (go version)? go1.13.6 darwin/amd64
The model for mq is out of line with how the AWS MQ API sets the EngineType field - https://github.com/aws/aws-sdk-go/blob/master/models/apis/mq/2017-11-27/api-2.json#L1679
The validation on this is set to "ACTIVEMQ", which is in line with what all the AWS documentation states for this value.
$ aws mq describe-broker-engine-types
{
"BrokerEngineTypes": [
{
"EngineType": "ACTIVEMQ",
"EngineVersions": [
....
}
The issue is that the AWS MQ API internally sets the configured resource value to ActiveMQ, regardless of the input value. This is causing problems with Terraform in this case (and also with how it interacts with tflint).
$ aws mq create-broker --broker-name example-mq --engine-version 5.15.9 --host-instance-type mq.t2.micro --security-groups "sg-12345" --users ConsoleAccess=true,Password=admin12233443,Username=admin --engine-type AcTiVeMq --deployment-mode SINGLE_INSTANCE
{
"BrokerArn": "arn:aws:mq:eu-west-1:XXXXXXXXX:broker:example-mq:b-9b75111e-1b20-4c70-a697-3e031a037f28",
"BrokerId": "b-9b75111e-1b20-4c70-a697-3e031a037f28"
}
$ aws mq describe-broker --broker-id b-9b75111e-1b20-4c70-a697-3e031a037f28
{
"AutoMinorVersionUpgrade": false,
"BrokerArn": "arn:aws:mq:eu-west-1:XXXXXXXXXXX:broker:example-mq:b-9b75111e-1b20-4c70-a697-3e031a037f28",
"BrokerId": "b-9b75111e-1b20-4c70-a697-3e031a037f28",
"BrokerInstances": [],
"BrokerName": "example-mq",
"BrokerState": "CREATION_IN_PROGRESS",
"Configurations": {
"History": [],
"Pending": {
"Id": "c-5f6ded1d-8461-46cc-ae88-13ac9c0828af",
"Revision": 1
}
},
"Created": "2020-01-29T15:51:03.64Z",
"DeploymentMode": "SINGLE_INSTANCE",
"EncryptionOptions": {
"UseAwsOwnedKey": true
},
"EngineType": "ActiveMQ",
.....
}
The issue was seen initially with tflint validation on the Amazon MQ broker resource. I don't want to go into loads of detail here about the specific Terraform issue, so see this issue for more Terraform-specific context:
Initially thinking the issue was with tflint, before fully doing my homework I opened an issue there. I've since realised the problem lies further up the chain.
TL;DR
- aws_mq_broker states ActiveMQ is the allowed value - https://www.terraform.io/docs/providers/aws/r/mq_broker.html#engine_type
- tflint only allows ACTIVEMQ, as defined in the linked SDK model for mq
- The API stores ActiveMQ regardless of input
- ACTIVEMQ is used as suggested by tflint/SDK API validation
Via CLI:
$ aws mq create-broker --broker-name example-mq --engine-type AcTiVeMq --engine-version 5.15.9 --host-instance-type mq.t2.micro --security-groups SG_ID --users ConsoleAccess=true,Password=admin12233443,Username=admin --deployment-mode SINGLE_INSTANCE
$ aws mq describe-broker --broker-id BROKER_ID
Via SDK (missing a few env specific inputs)
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mq"
)

func main() {
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
		Profile:           "",
	}))
	mqSvc := mq.New(sess)
	create_result, create_err := mqSvc.CreateBroker(&mq.CreateBrokerRequest{
		BrokerName:       aws.String("mq-test-engine-type"),
		DeploymentMode:   aws.String("SINGLE_INSTANCE"),
		EngineType:       aws.String("ACTIVEMQ"),
		EngineVersion:    aws.String("5.15.9"),
		HostInstanceType: aws.String("mq.t2.micro"),
		SecurityGroups: []*string{
			aws.String(""),
		},
		Users: []*mq.User{
			{
				ConsoleAccess: aws.Bool(true),
				Password:      aws.String("admin12233443"),
				Username:      aws.String("admin"),
			},
		},
	})
	if create_err != nil {
		fmt.Println("Error", create_err)
		os.Exit(1)
	}
	brokerId := create_result.BrokerId
	fmt.Printf("Created Broker %s\n", *brokerId)
	describe_result, describe_err := mqSvc.DescribeBroker(&mq.DescribeBrokerInput{
		BrokerId: aws.String(*brokerId),
	})
	if describe_err != nil {
		fmt.Println("Error", describe_err)
		os.Exit(1)
	}
	fmt.Printf("Broker %s : MQ Engine Type - %s\n", *brokerId, *describe_result.EngineType)
}
Confirm by changing [ ] to [x] below to ensure that it's a bug:
Describe the bug
There appear to be two missing CloudWatch Logs Insights query statuses:
- cloudwatchlogs.QueryStatusUnknown, which is documented in the QueryInfo page referred to by DescribeQueries.
- cloudwatchlogs.QueryStatusTimeout, which is noticeably missing from QueryInfo but is mentioned by GetQueryResults. That page also has a discrepancy between its two lists for the status key documentation.
We have personally observed a status of Timeout being returned by the SDK. We haven't seen Unknown.
Version of AWS SDK for Go?
v1.29.32
Version of Go (go version)?
go version go1.14 linux/amd64
To Reproduce (observed behavior)
Run an Insights query via StartQuery
that times out. I'm not sure exactly how to induce that, but we have observed it in production a number of times.
Expected behavior
That cloudwatchlogs.QueryStatusUnknown and cloudwatchlogs.QueryStatusTimeout are defined.
The documentation for s3.HeadObject mentions that there are two possible error codes that can come back if the object doesn't exist.
If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
I can see there is an s3.ErrCodeNoSuchKey string constant, but I don't see an equivalent one for the "access denied" error. Am I missing something?
Hello,
I am posting a new bug because the original one was locked, and I can't comment on it or reopen it. The original can be found here:
It should not have been closed, because the solution mentioned in the original bug was a workaround, not a real fix. We should be able to query for a subset of the standard attributes.
Describe the issue with documentation
When calling the GetProducts function from the pricing package, you will run into this error:
InvalidParameterException: Input Parameters are invalid. serviceCode cannot be null or empty
To Reproduce (observed behavior)
This is the code from the documentation:
svc := pricing.New(session.New())
input := &pricing.GetProductsInput{
Filters: []*pricing.Filter{
{
Field: aws.String("ServiceCode"),
Type: aws.String("TERM_MATCH"),
Value: aws.String("AmazonEC2"),
},
{
Field: aws.String("volumeType"),
Type: aws.String("TERM_MATCH"),
Value: aws.String("Provisioned IOPS"),
},
},
FormatVersion: aws.String("aws_v1"),
MaxResults: aws.Int64(1),
}
result, err := svc.GetProducts(input)
Expected behavior
I expected the output to be
{
FormatVersion: "aws_v1",
NextToken: "OooVlVbZ49QkMIpijFV6yA==:FVAwHedYW0hDyeuAdtqIsXiLAPyW+SF7rZvcoq3ZD88Ybj0fOEsrXuz4YMzSR0zeqAD3IBkmp02s0kT+cVVIUl1bIjWzct/+N/W3howB91Lo8/x67MOZsbpbcfrcQ4M/09bdb/34L3wdXTTI7qORPMbpDJ6STgtXnwiM4yHIjctwENzRlE66M0P+t2aDEfAV",
PriceList: [{
product: map[attributes:map[location:Asia Pacific (Singapore)
...
Add the function call input.SetServiceCode("AmazonEC2") and it works without problems.
Hi. The List-Unsubscribe header could be made into a param in the args list for sendEmail. Right now, if we want to add the List-Unsubscribe header, we have to use the sendRawEmail function instead. Current solutions are like so: https://stackoverflow.com/questions/35278372/how-to-implement-list-unsubscribe-header-in-emails-sent-by-aws-ses-with-the-php/38684545#38684545
Code readability is heavily compromised.
Looks like the AWS CLI supports this feature: https://docs.aws.amazon.com/credref/latest/refdocs/setting-s3-max_bandwidth.html. Just wondering if it is or will be supported in this SDK? Thanks.
Version of AWS SDK for Go? 1.27.4
Version of Go (go version)? 1.13.5
I tried to filter errors coming from Invoke for "RequestTooLargeException", but I actually got a "RequestEntityTooLargeException":
2020/01/15 14:51:45 Error invoking lambda function
2020/01/15 14:51:45 RequestEntityTooLargeException: Request must be smaller than 6291456 bytes for the InvokeFunction operation
status code: 413, request id: 4f90b35e-c583-4c6e-bccb-9956291dd896
My error handling code looked like:
log.Println("Error invoking lambda function")
log.Println(err)
awsErr, ok := err.(awserr.Error)
if ok && awsErr.Code() == lambda.ErrCodeRequestTooLargeException {
log.Fatal("Request too large") // never gets called, because the error codes do not match
}
The SDK defines lambda.ErrCodeRequestTooLargeException, but not lambda.ErrCodeRequestEntityTooLargeException. So, am I looking in the wrong place for the error code constant? What's the difference between RequestTooLargeException and RequestEntityTooLargeException? (The descriptions seem to describe the same thing.)
Workaround was just to use a string instead of referencing a constant from the package.
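Until a constant is added, one workaround is to accept either code when classifying the error. Below is a sketch; it assumes the two codes really do describe the same condition, and the string "RequestEntityTooLargeException" is taken from the observed output above rather than from any SDK constant:

```go
package main

import "fmt"

// requestTooLargeCodes lists both spellings of the "request too large" error
// seen from Lambda Invoke: the code the SDK defines a constant for
// ("RequestTooLargeException", i.e. lambda.ErrCodeRequestTooLargeException)
// and the code actually observed on the wire, which has no constant.
var requestTooLargeCodes = map[string]bool{
	"RequestTooLargeException":       true,
	"RequestEntityTooLargeException": true,
}

// isRequestTooLarge classifies a code string as returned by awserr.Error.Code().
func isRequestTooLarge(code string) bool {
	return requestTooLargeCodes[code]
}

func main() {
	fmt.Println(isRequestTooLarge("RequestEntityTooLargeException"))
}
```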
Version of AWS SDK for Go? v1.29.3
Version of Go (go version)? 1.13
ValidationException is not in errors.go, but it is an exception that the service returns:
output, err := a.db.Query(queryInput)
if err != nil {
	if e, ok := err.(awserr.Error); ok {
		switch e.Code() {
		case "ValidationException": // I have to use a custom string for this case
			return nil, "", &InvalidQueryError{err, "bad input"}
		case dynamodb.ErrCodeConditionalCheckFailedException:
			return nil, "", &InvalidQueryError{err, "failed things"}
		}
	}
}
I am encountering an error when trying to use the "arrayValue" parameter with the RDSDataService; it results in an "Array parameters are not supported." error.
My request looks like this
{
  query: xxx,
  continueAfterTimeout: false,
  includeResultMetadata: true,
  parameters: [
    { name: company_id, value: { arrayValue: { longValues: [11] } } }
  ]
}
This is how the documentation seems to format the packet, am I doing something wrong?
Describe the bug
I'm doing the following (simplified):
const creds = new AWS.ChainableTemporaryCredentials({
params: {
RoleArn: roleArn,
},
stsConfig: { ... },
masterCredentials: await this.defaultCredentials(),
});
(Code can be observed in the wild here)
When the role indicated via roleArn does not exist, this is the error message I receive:
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Wow! That is super obtuse!
Is the issue in the browser/Node.js?
Node.js
If on Node.js, are you running this on AWS Lambda?
No
Details of the browser/Node.js version
Node v12.12.0
SDK version number
"version": "2.676.0",
To Reproduce (observed behavior)
See snippet above.
Expected behavior
I expect the error message to tell me that the indicated role does not exist, instead of telling me that I need to set an environment variable (which definitely would not have helped), or that there are no credentials in my config (which is somewhat true, but valid masterCredentials were definitely given to the assume-role credentials).
I suggest getting rid of the JSON trust relationship document.
I'm always frustrated when I have to provide a JSON assume role policy document to create a role or make other related requests, e.g.:
private static final String TRUST_RELATIONSHIP = "{\n" +
    "  \"Version\": \"2020-01-01\",\n" +
    "  \"Statement\": [\n" +
    "    {\n" +
    "      \"Effect\": \"Allow\",\n" +
    "      \"Principal\": {\n" +
    "        \"Service\": \"lambda.amazonaws.com\"\n" +
    "      },\n" +
    "      \"Action\": \"sts:AssumeRole\"\n" +
    "    }\n" +
    "  ]\n" +
    "}";
// ...
CreateRoleRequest createRoleRequest = CreateRoleRequest.builder()
    .roleName("foo-bar")
    .assumeRolePolicyDocument(TRUST_RELATIONSHIP)
    .build();
iamClient.createRole(createRoleRequest);
It would be much nicer to have a typed builder for the trust relationship instead, e.g.:
TrustRelationshipStatement relationshipStatement = TrustRelationshipStatement.builder()
    .effect(Effect.ALLOW)
    .action(Action.ASSUME_ROLE)
    .principal(ServicePrincipal.AWS_LAMBDA)
    .build();
List<TrustRelationshipStatement> relationshipStatements = new ArrayList<>();
relationshipStatements.add(relationshipStatement);
TrustRelationship trustRelationship = TrustRelationship.builder()
    .version("2020-01-01")
    .statements(relationshipStatements)
    .build();
CreateRoleRequest createRoleRequest = CreateRoleRequest.builder()
    .roleName("foo-bar")
    .assumeRolePolicyDocument(trustRelationship)
    .build();
iamClient.createRole(createRoleRequest);
I had an issue where I was sending an SSM command and then calling list-command-invocations too quickly. This ends up returning a 200 and an empty list, because the command ID was not found yet.
If I sleep for a second, it works as intended. This is counterintuitive, since a 200 de facto means the request was successful.
A successful request means the command ID was accepted, and in this case it was clearly not. This doesn't meet the criteria for a successful request, ergo it should return a 404.
Describe the bug
Forwarding remote ports to local ports does not work through the SDK.
Is the issue in the browser/Node.js?
Node.js
If on Node.js, are you running this on AWS Lambda?
No
Details of the browser/Node.js version
v12.14.0
SDK version number
v2.638.0
To Reproduce (observed behavior)
const params = {
DocumentName: "AWS-StartPortForwardingSession",
Target: "i-<my_instance>",
Parameters: {
portNumber: ["8200"],
localPortNumber: ["8200"],
}
};
await ssm.startSession(params).promise();
The above code runs, and the promise returns successfully without throwing any error. However, the specified local port is not bound to.
Expected behavior
I expected that local port 8200 would be bound to, just like if I had run
aws ssm start-session \
--target "i-<my_instance>" \
--document-name "AWS-StartPortForwardingSession" \
--parameters '{"portNumber": ["8200"], "localPortNumber": ["8200"]}'
which does bind to my local port.
Let's assume that we need to get the last 5 objects.
According to the docs, there's no way to get results sorted in descending order.
So we have to fetch all objects, even though there is a MaxKeys param to limit the number of objects returned. In that case, MaxKeys does not help.
It would be better to add a new param to choose ascending/descending order for the sort.
Thanks in advance.
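Until such a parameter exists, the usual workaround is to list everything and sort client-side. Below is a sketch of taking the last N keys; it assumes keys sort lexicographically in the order you care about (e.g. timestamp-prefixed names), and `lastNKeys` is a name I chose for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// lastNKeys returns the n lexicographically greatest keys, in descending
// order. With timestamp-prefixed key names this yields the newest objects.
func lastNKeys(keys []string, n int) []string {
	sorted := append([]string(nil), keys...) // copy so the caller's slice is untouched
	sort.Sort(sort.Reverse(sort.StringSlice(sorted)))
	if n > len(sorted) {
		n = len(sorted)
	}
	return sorted[:n]
}

func main() {
	keys := []string{"2020-01-01/a", "2020-03-01/c", "2020-02-01/b"}
	fmt.Println(lastNKeys(keys, 2)) // the two newest keys, newest first
}
```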
It appears that an SQS ARN has all of the information needed to construct a queue URL. A function to do this mapping without having to make an API call would be useful.
Use-case example:
I can create a pull request if this idea is deemed valuable.
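The mapping itself is straightforward. Below is a sketch that assumes the standard public-partition URL format https://sqs.&lt;region&gt;.amazonaws.com/&lt;account&gt;/&lt;name&gt;; other partitions (GovCloud, China) use different endpoints and would need extra handling:

```go
package main

import (
	"fmt"
	"strings"
)

// queueURLFromARN converts an SQS queue ARN
// (arn:aws:sqs:<region>:<account-id>:<queue-name>) into a queue URL.
// Only the "aws" partition's default endpoint is handled here.
func queueURLFromARN(arn string) (string, error) {
	parts := strings.SplitN(arn, ":", 6)
	if len(parts) != 6 || parts[0] != "arn" || parts[2] != "sqs" {
		return "", fmt.Errorf("not an SQS ARN: %q", arn)
	}
	region, account, name := parts[3], parts[4], parts[5]
	return fmt.Sprintf("https://sqs.%s.amazonaws.com/%s/%s", region, account, name), nil
}

func main() {
	url, err := queueURLFromARN("arn:aws:sqs:us-east-1:123456789012:my-queue")
	fmt.Println(url, err)
}
```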
Issue is about usage on:
Platform/OS/Hardware/Device
What are you running the cli on?
root@Alexey-HP:~# aws --version
aws-cli/2.0.27 Python/3.7.3 Linux/4.4.0-19041-Microsoft botocore/2.0.0dev31
Describe the question
I have deleted default VPC (default security group has been deleted automatically) and created new default VPC (default security group has been added automatically).
Unfortunately there is no rule to allow inbound 'ssh' traffic by default, so I have to create one:
root@Alexey-HP:~# aws ec2 authorize-security-group-ingress \
    --group-name sg-040a24e0f8d5aca4c \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0
An error occurred (InvalidGroup.NotFound) when calling the AuthorizeSecurityGroupIngress operation: The security group 'sg-040a24e0f8d5aca4c' does not exist in default VPC 'vpc-0965ff934a4aaecaf'
root@Alexey-HP:~# aws ec2 describe-security-groups
SECURITYGROUPS default VPC security group sg-040a24e0f8d5aca4c default 466641473194 vpc-0965ff934a4aaecaf
IPPERMISSIONS -1
USERIDGROUPPAIRS sg-040a24e0f8d5aca4c 466641473194
IPPERMISSIONSEGRESS -1
IPRANGES 0.0.0.0/0
root@Alexey-HP:~# aws ec2 describe-vpcs
VPCS 172.31.0.0/16 dopt-a7e848ce default True 466641473194 available vpc-0965ff934a4aaecaf
CIDRBLOCKASSOCIATIONSET vpc-cidr-assoc-03a8f45687c587484 172.31.0.0/16
CIDRBLOCKSTATE associated
TAGS Name pgpro-vpc
What does this error mean? How can the default security group not exist in the default VPC? How do I create the inbound ssh rule?
Logs/output
Get the full traceback and error logs by adding --debug to the command.
Hi,
I am wondering if there are plans to add elbv2 wait commands for registering and de-registering Targets in Target Groups?
/ Carl
There seems to be no provision to retrieve the final URL of the output asset file (like, for HLS, the main index.m3u8 file location). It can be predicted on the client side, but that has limitations, as the output location could be configured to use a variable expression in the destination URL, like s3://x-vod-output-bucket/processedpoutputs/$dt$/ . In such cases the playable asset ends up in a dynamic location. How would the client code know, unless it is returned in the job response?
The ReturnValuesOnConditionCheckFailure enum provides the ALL_OLD and NONE values.
However, in Update.withReturnValuesOnConditionCheckFailure, it is stated that NONE, ALL_OLD, UPDATED_OLD, ALL_NEW, and UPDATED_NEW are valid values.
ReturnValuesOnConditionCheckFailure must be extended with the missing enum values.
I need to get a list of emails that end with a particular domain (or domains) from Cognito by using cognito.listUsers({}).
Right now I can only provide a filter type for an exact match using =, or for a prefix match using ^=, which is extremely limited to say the least.
I (along with possibly many other developers) would like the ability to list users from Cognito based on a match with a regex pattern.
Or have options like 'contains', 'not contains', 'ends with', 'exists' and more, to be able to list exactly the users we want.
Right now, I have to retrieve each and every user in the Cognito User Pool using pagination and then filter them based on what I need, which of course is no good when the total number of users goes into the thousands (given there is a limit of 60 users per page and a rate limit on the number of requests).
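Until server-side suffix filtering exists, the filtering step of the pagination workaround can at least be factored out. Below is a sketch of that client-side step (`filterByDomain` is an illustrative name; the listing itself still has to page through every user):

```go
package main

import (
	"fmt"
	"strings"
)

// filterByDomain keeps only the emails ending with one of the given domains,
// case-insensitively. This is the client-side step that would run on each
// page of ListUsers results.
func filterByDomain(emails []string, domains ...string) []string {
	var out []string
	for _, e := range emails {
		for _, d := range domains {
			if strings.HasSuffix(strings.ToLower(e), "@"+strings.ToLower(d)) {
				out = append(out, e)
				break
			}
		}
	}
	return out
}

func main() {
	emails := []string{"a@example.com", "b@other.org", "c@Example.com"}
	fmt.Println(filterByDomain(emails, "example.com"))
}
```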
I use local DynamoDB for testing my project, and it throws this exception. According to this thread, aws/aws-sdk-php#1687, it looks like the Docker image that I use for dynamodb-local may not support the transactWrite API at all.
Any update from the AWS team? When can we have an update for the JavaScript dynamodb-local?
The listProtections API does not support pagination. On researching the issue, it was found that the pagination configuration is empty here - https://github.com/aws/aws-sdk-js/blob/master/apis/shield-2016-06-02.paginators.json.
We could retrieve all the records only after updating the paginators JSON with the below:
{
  "pagination": {
    "ListProtections": {
      "input_token": "NextToken",
      "limit_key": "MaxResults",
      "output_token": "NextToken",
      "result_key": "Protections"
    }
  }
}
The ~/.aws/credentials file is unencrypted by default.
Several mitigations can be put in place, e.g.:
- File permissions (chmod 600) to protect the file from being accessed by other users of the computer
- sts assume-role in conjunction with MFA, reducing the impact of losing the credentials, since the IAM user contains no permissions until assuming a role, and the MFA device would also need to be lost
But it just struck me as odd. I was looking for a way to better secure the credentials file, when I found this issue: naftulikay/aws-env#10
The utility simply looks for a credentials.gpg file and, if it exists, decrypts it and uses it instead of the credentials file. If GPG requires a physical card to decrypt the credentials, then the user would be prompted.
Is that something that the AWS CLI could do?
The AWS SDK for Java includes AmazonSQSBufferedAsyncClient, which accesses Amazon SQS. It allows up to 10 requests to be buffered and sent as a batch request, decreasing the cost of using Amazon SQS and reducing the number of sent requests.
It would be useful to have a solution that allows for simple request batching using client-side buffering, where calls made from the client are first buffered and then sent as a batch request to Amazon SQS.
I'm currently using the go-cloud SDK; under the hood it batches every call.
https://github.com/google/go-cloud/blob/master/pubsub/awssnssqs/awssnssqs.go#L107
I know that it is currently possible to specify mfa_serial on profiles, and the CLI tool will automatically ask for the MFA token when you make a call under such a profile with assume role.
I'm curious whether there is any reason not to support MFA on the default (source) profile, e.g. to be able to attach this policy to all admin users:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowIfMFAPresent",
"Effect": "Allow",
"Resource": "*",
"Action": "*",
"Condition": {
"Bool": {
"aws:MultiFactorAuthPresent": "true"
}
}
}
]
}
And after that allow cli tool to use config like
[default]
region = us-west-2
aws_access_key_id = YYY1
aws_secret_access_key = XXX1
mfa_serial = ZZZ1
So in that case I would be able to call
aws s3 ls
and it would ask me for the MFA token if it has expired or has not been set.
Is there a way to include N lines before and/or after a matching pattern in AWS CloudWatch Logs?
Let's say I have this query and would like 3 lines before and after each match.
aws logs filter-log-events --log-group-name my-group --filter-pattern "mypattern"
The only workaround I have at the moment is to remove the filter pattern and use grep:
aws logs filter-log-events --log-group-name my-group | grep -A 3 -B 3 mypattern
However, I would like to only stream the log events I need and do it as part of the aws log events query.
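As another client-side stopgap, the context expansion itself is simple to implement over fetched events. Below is a sketch that keeps every line containing a pattern plus N lines of context on each side, mimicking grep's -A/-B behavior (`filterWithContext` is an illustrative name):

```go
package main

import (
	"fmt"
	"strings"
)

// filterWithContext returns the lines that contain pattern, plus n lines of
// context before and after each match, preserving order without duplicates.
func filterWithContext(lines []string, pattern string, n int) []string {
	keep := make([]bool, len(lines))
	for i, line := range lines {
		if strings.Contains(line, pattern) {
			// Mark the match and its surrounding window, clamped to bounds.
			for j := i - n; j <= i+n; j++ {
				if j >= 0 && j < len(lines) {
					keep[j] = true
				}
			}
		}
	}
	var out []string
	for i, k := range keep {
		if k {
			out = append(out, lines[i])
		}
	}
	return out
}

func main() {
	lines := []string{"a", "b", "mypattern here", "c", "d", "e"}
	fmt.Println(filterWithContext(lines, "mypattern", 1))
}
```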
Hi,
Is there a waitFor event planned for volume modification status? Or is there a better way to return the final state once the modification is completed? (failed or modified).
As of now, DescribeVolumesModifications returns 4 states: modifying, optimizing, completed or failed.
Thank you
Version of AWS SDK for Go? 50ba1df
It would be useful to have an ssm.WaitUntilCommandInvocationCompleted method or similar, so one can send an SSM command and wait before trying to collect the output.
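A generic version of such a waiter is easy to sketch. In the illustration below, the `check` function stands in for polling GetCommandInvocation's status, and the retry policy (5 attempts, doubling delay) is an arbitrary choice, not what the SDK's generated waiters use:

```go
package main

import (
	"fmt"
	"time"
)

// waitUntil polls check until it reports done, retrying up to maxAttempts
// times with exponentially increasing delays starting at initial.
func waitUntil(check func() (done bool, err error), maxAttempts int, initial time.Duration) error {
	delay := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if attempt < maxAttempts {
			time.Sleep(delay)
			delay *= 2
		}
	}
	return fmt.Errorf("not done after %d attempts", maxAttempts)
}

func main() {
	calls := 0
	err := waitUntil(func() (bool, error) {
		calls++ // stand-in for checking GetCommandInvocation's Status field
		return calls == 3, nil
	}, 5, time.Millisecond)
	fmt.Println(calls, err)
}
```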
Add waiter methods for servicecatalog like those that exist for cloudformation.
aws/aws-sdk-go#3148
Describe the bug
Calling ec2.WaitUntilPasswordDataAvailableWithContext always exceeds the timeout and returns ResourceNotReady, even when the instance's password data is ready and available
Version of AWS SDK for Go?
v1.33.5
Version of Go (go version)?
go1.14.2 darwin/amd64
To Reproduce (observed behavior)
Launch a new Windows EC2 t2.micro instance in the us-east-1 region, providing a key-pair. I used AMI ID ami-05bb2dae0b1de90b3. Wait until the instance is running and verify its password data is available in the AWS Management Console via the "Connect" button, then the "Get Password" button.
Run the following code
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Initialize a session in us-east-1 that the SDK will use to load
	// credentials from the shared credentials file ~/.aws/credentials.
	sess, _ := session.NewSession(aws.NewConfig().WithRegion("us-east-1"))
	ec2Client := ec2.New(sess)
	// Verify the password data is available and start a waiter to test that functionality
	instanceId := "i-019ca9f16bb7d5b0e"
	passwordDataInputRequest := &ec2.GetPasswordDataInput{
		InstanceId: aws.String(instanceId),
	}
	getPasswordDataOutput, _ := ec2Client.GetPasswordData(passwordDataInputRequest)
	if len(*getPasswordDataOutput.PasswordData) > 0 {
		fmt.Printf("password data is ready for %s. starting waiter to test that functionality...\n", instanceId)
		ctx := aws.BackgroundContext()
		err := ec2Client.WaitUntilPasswordDataAvailableWithContext(ctx, passwordDataInputRequest, request.WithWaiterMaxAttempts(5))
		if err != nil {
			aerr, ok := err.(awserr.Error)
			if ok && aerr.Code() == request.WaiterResourceNotReadyErrorCode {
				fmt.Fprintf(os.Stderr, "timed out while waiting for password data to become available for %s\n", instanceId)
			}
			panic(fmt.Errorf("failed to wait for password data to become available for %s:\n%v", instanceId, err))
		}
	}
}
Expected behavior
I expect the waiter to return almost immediately, as it's clear the instance's password data does exist.
Additional context
Nothing to add
I've been playing around with RDS instances and modifying them, but I have noticed an issue that may affect others. What I'm trying to do is upgrade one of my RDS instances and, once the upgrade is complete, continue with the rest of the code:
return RDS.modifyDBInstance({
DBInstanceIdentifier: instanceId,
DBInstanceClass: instanceClass,
ApplyImmediately: true
}).promise().then(() => {
return RDS.waitFor('dBInstanceAvailable', {
DBInstanceIdentifier: instanceId
}).promise();
});
The problem that I'm having with the above is that the RDS.waitFor promise is resolved straight away, because the ApplyImmediately settings seem to take a few seconds to actually kick in. As a result, to make the code work as expected, I'm having to manually delay the call to waitFor (with Bluebird's .delay):
return RDS.modifyDBInstance({
DBInstanceIdentifier: instanceId,
DBInstanceClass: instanceClass,
ApplyImmediately: true
}).promise().then(() => {
return Promise.delay(5000).then(() => {
return RDS.waitFor('dBInstanceAvailable', {
DBInstanceIdentifier: instanceId
}).promise();
});
});
Currently, the waitFor function seems to run straight away and then every 30 seconds. Could we have an option to wait 30 seconds before the first check?
A number of the Media Services products (MediaLive, MediaPackage, MediaStore & MediaTailor) have operations that are longer running (i.e. creating or starting up a MediaLive channel).
It would be great to have waiters/waitFor support for these products.
I would like to be able to waitFor an image to exist in ECR, similarly to how you can waitFor a file to exist on S3. Is this possible, or does it require a service-specific implementation?
In my use-case, both represent me waiting on CI services -- one that builds .zip bundles for Lambda functions, and another that builds docker images for ECS tasks.
The XDG spec defines where config files and credential files should be placed.
https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
Instead of writing to $HOME, the aws config directory should be $XDG_CONFIG_HOME/aws (and $XDG_DATA_HOME/aws/ for creds).
In AWS console it is possible to specify a source template and version when creating a new launch template. However, when I try to create one using AWS java SDK it is only possible by specifying a LaunchTemplateData.
It might not have been a problem, but there are two different objects, RequestLaunchTemplateData and ResponseLaunchTemplateData, and I wasn't able to find any way to map one to the other.
Unfortunately, there's no reasonable workaround for this issue. Copying the fields (and constantly maintaining that code) is the last thing I would want to do.
v1.12.67
Not all errors have corresponding constants defined for them.
For CloudFormation, the list of common errors is here: https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/CommonErrors.html
It actually says it is a list of common errors for the actions of all AWS APIs - but why is it in the CloudFormation section, then? Anyway, there are no constants for these codes, neither in the cloudformation package nor in some common package.
If you search for ValidationError, it is in the sagemaker package - should it be moved to a list of common error constants?
It would be great to have waitFor in ElasticBeanstalk service.
My current use case is calling ElasticBeanstalk.createApplicationVersion and when that process finishes I'd like to get notified via callback to make a subsequent call to ElasticBeanstalk.updateEnvironment.
Right now the only approach that I see for this is to poll (calling ElasticBeanstalk.describeApplicationVersions) the status of the application version until it's "Processed".
Thank you!
Describe the bug
Using an ed25519 public key causes a validation error.
SDK version number
aws-cli/2.0.16 Python/3.8.2 Darwin/19.4.0 botocore/2.0.0dev20
Platform/OS/Hardware/Device
Darwin laptop 19.4.0 Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64
To Reproduce (observed behavior)
aws ec2-instance-connect send-ssh-public-key --instance-id <instance_id> --availability-zone us-east-1a --instance-os-user <ec2_user> --ssh-public-key file://~/.ssh/id_ed25519.pub
Expected behavior
ed25519 public keys should be supported and not cause a validation error
Logs/output
2020-05-28 15:48:31,066 - MainThread - awscli.clidriver - DEBUG - Client side parameter validation failed
Traceback (most recent call last):
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/awscli/clidriver.py", line 335, in main
return command_table[parsed_args.command](remaining, parsed_args)
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/awscli/clidriver.py", line 507, in __call__
return command_table[parsed_args.operation](remaining, parsed_globals)
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/awscli/clidriver.py", line 682, in __call__
return self._operation_caller.invoke(
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/awscli/clidriver.py", line 805, in invoke
response = self._make_client_call(
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/awscli/clidriver.py", line 817, in _make_client_call
response = getattr(client, xform_name(operation_name))(
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/botocore/client.py", line 208, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/botocore/client.py", line 499, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/botocore/client.py", line 547, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/usr/local/Cellar/awscli/2.0.16/libexec/lib/python3.8/site-packages/botocore/validate.py", line 297, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid length for parameter SSHPublicKey, value: 107, valid range: 256-inf
Parameter validation failed:
Invalid length for parameter SSHPublicKey, value: 107, valid range: 256-inf
`describeInstanceTypes` and `describeInstanceTypeOfferings` are missing pagination features. The EC2 paginators JSON does not reference them:
https://github.com/aws/aws-sdk-js/blob/master/apis/ec2-2016-11-15.paginators.json#L165
SDK version number
v2.584.0
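Until the paginator definitions are added, the NextToken loop has to be written by hand. A minimal sketch, in Python for illustration (the stub stands in for a real describeInstanceTypes call):

```python
def paginate_instance_types(call, **params):
    """Manually follow NextToken, as one must while the paginator is missing."""
    token = None
    while True:
        page = call(**params, NextToken=token) if token else call(**params)
        yield from page.get("InstanceTypes", [])
        token = page.get("NextToken")
        if not token:
            break

# Stub standing in for the EC2 describeInstanceTypes operation:
pages = {None: {"InstanceTypes": ["t3.micro"], "NextToken": "p2"},
         "p2": {"InstanceTypes": ["m5.large"]}}

def fake_call(NextToken=None, **_):
    return pages[NextToken]

print(list(paginate_instance_types(fake_call)))  # ['t3.micro', 'm5.large']
```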
Is there a way to add a new revision to an ECS task definition?
In my case I want to update the container image URL in my CD pipeline using the command line. Either the documentation does not cover how to do that, or it is currently only possible through the management console?
See aws/amazon-ecs-cli#91 (but I am not using Docker Compose).
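The usual flow is to fetch the current task definition, swap the container image, strip the read-only fields, and register the result as a new revision. A minimal sketch, in Python for illustration (the helper name is hypothetical; plain dicts stand in for the describe/register payloads):

```python
# Fields returned by describeTaskDefinition that registerTaskDefinition
# does not accept as input:
READ_ONLY_FIELDS = {"taskDefinitionArn", "revision", "status",
                    "requiresAttributes", "compatibilities",
                    "registeredAt", "registeredBy"}

def new_revision_input(task_def, container_name, new_image):
    """Build a registerTaskDefinition payload with an updated image."""
    payload = {k: v for k, v in task_def.items() if k not in READ_ONLY_FIELDS}
    payload["containerDefinitions"] = [
        {**c, "image": new_image} if c["name"] == container_name else c
        for c in payload["containerDefinitions"]
    ]
    return payload

current = {"family": "web", "revision": 7,
           "containerDefinitions": [{"name": "app", "image": "repo/app:1"}]}
print(new_revision_input(current, "app", "repo/app:2"))
```

Registering the returned payload yields revision 8 of the same family.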
Confirm by changing [ ] to [x] below:
Describe the question
I use the CostExplorer class and the getCostAndUsage function. In the AWS console there is an option to get a report with "Show only untagged resources". What needs to be specified in the filters to get only untagged resources with the AWS SDK?
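If I understand the Cost Explorer API correctly, a Tags filter with the ABSENT match option selects costs where the given tag key is not set. A minimal sketch of such a filter expression, in Python for illustration (the tag key "Team" is a made-up example):

```python
def untagged_filter(tag_key):
    """Build a getCostAndUsage Filter expression selecting costs where
    `tag_key` is absent - the SDK analogue of "Show only untagged resources"
    for that key, assuming the ABSENT match option behaves as documented."""
    return {"Tags": {"Key": tag_key, "MatchOptions": ["ABSENT"]}}

print(untagged_filter("Team"))
# {'Tags': {'Key': 'Team', 'MatchOptions': ['ABSENT']}}
```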
When using `startTime` or `endTime` of `GetMetricStatisticsRequest.Builder`, the `Instant` must be truncated to seconds, otherwise an error `CloudWatchException: timestamp must follow ISO8601` is produced.
The following code fails regardless of when it is run:
Instant end = Instant.now();
Instant start = end.minus(1, ChronoUnit.DAYS);
int SIX_HOURS_IN_SEC = 6 * 60 * 60; // period, in seconds
CloudWatchClient client = CloudWatchClient.create();
GetMetricStatisticsRequest request = GetMetricStatisticsRequest.builder()
        .namespace("AWS/ApiGateway").metricName("Count").statistics(Statistic.SUM)
        .startTime(start).endTime(end).period(SIX_HOURS_IN_SEC).build();
client.getMetricStatistics(request);
The code above ends with a `CloudWatchException` with the message `timestamp must follow ISO8601`.
Stack trace:
software.amazon.awssdk.services.cloudwatch.model.CloudWatchException: timestamp must follow ISO8601 (Service: CloudWatch, Status Code: 400, Request ID: <REDACTED>)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:123)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:79)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:189)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:121)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:147)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:101)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
at software.amazon.awssdk.services.cloudwatch.DefaultCloudWatchClient.getMetricStatistics(DefaultCloudWatchClient.java:1388)
When the HTTP request is examined in a debugger, it contains an ISO8601 timestamp that includes nanoseconds, e.g. 2020-05-07T11:03:24.098123000Z.
The mitigation is to replace the first line of the example code with:
Instant end = Instant.now().truncatedTo(ChronoUnit.SECONDS);
Run the example code above.
I would expect the SDK to automatically truncate the instant, or to produce a meaningful error, since the formatting is not part of the input and the user cannot change it. As it stands, the SDK's behavior is unclear and forces the user to run a debugger.
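The Java fix above drops sub-second precision with `truncatedTo(ChronoUnit.SECONDS)`. The same idea sketched in Python for illustration (the helper name is hypothetical), using the nanosecond timestamp observed in the HTTP request:

```python
from datetime import datetime, timezone

def to_whole_seconds(dt):
    """Drop sub-second precision so the rendered ISO8601 string is accepted
    by GetMetricStatistics."""
    return dt.replace(microsecond=0)

# The timestamp observed in the failing request, 2020-05-07T11:03:24.098123000Z:
end = datetime(2020, 5, 7, 11, 3, 24, 98123, tzinfo=timezone.utc)
print(to_whole_seconds(end).isoformat())  # 2020-05-07T11:03:24+00:00
```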
When validating CloudFormation templates containing Fn::Transform, the returned ValidateTemplateResult object does not report the CAPABILITY_AUTO_EXPAND capability.
This has a side effect in the Eclipse Toolkit, which cannot create CFN stacks that use macros.
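Until ValidateTemplate reports it, a caller can detect macro usage itself and add the capability before creating the stack. A minimal sketch, in Python for illustration (the helper name is hypothetical; a plain dict stands in for the parsed template):

```python
def needed_capabilities(template):
    """Return the capabilities a createStack call should pass for this
    template; since ValidateTemplate omits CAPABILITY_AUTO_EXPAND for
    macro-using templates, detect Transform/Fn::Transform ourselves."""
    caps = set()

    def walk(node):
        if isinstance(node, dict):
            if "Fn::Transform" in node:
                caps.add("CAPABILITY_AUTO_EXPAND")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(template)
    if template.get("Transform"):  # top-level macro, e.g. a SAM transform
        caps.add("CAPABILITY_AUTO_EXPAND")
    return sorted(caps)

tpl = {"Resources": {"Bucket": {"Type": "AWS::S3::Bucket",
                                "Metadata": {"Fn::Transform": {"Name": "MyMacro"}}}}}
print(needed_capabilities(tpl))  # ['CAPABILITY_AUTO_EXPAND']
```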
When I use `rds.downloadDBLogFilePortion` to download a large log file (200MB ~ 3GB), I can almost always see `[Your log message was truncated]` several (5 ~ 40) times in the downloaded file.
According to the AWS docs, `rds.downloadDBLogFilePortion`:
Downloads all or a portion of the specified log file, up to 1 MB in size.
My guess is that the 1MB limit can cut a log line in the middle.
I tried setting the `NumberOfLines` option to a small number (200). Most of the time this works without truncating logs. However, under rare circumstances, an extremely long log line can still get truncated. Another problem with limiting `NumberOfLines` is that it slows down the download.
Is there a way I can avoid truncated log messages (ideally without compromising download speed)? Thank you!
My code is as follows:
function downloadLogFile(databaseId, logFileName, outputStream) {
  return new Promise(function(resolve, reject) {
    outputStream.on('error', function(err) {
      reject(err)
    })
    // bound w/ AWS#Response
    // recurses itself if more pages
    // ends the outputStream when done
    function pageCb(err, data) {
      if (err) {
        return reject(err)
      }
      const logData = data.LogFileData
      if (!logData) {
        outputStream.end('\n')
        return resolve()
      }
      outputStream.write(logData)
      if (this.hasNextPage()) {
        return this.nextPage(pageCb)
      }
      // otherwise we've completed this file
      outputStream.end('\n')
      return resolve()
    }
    rds.downloadDBLogFilePortion({
      DBInstanceIdentifier: databaseId,
      LogFileName: logFileName,
      Marker: '0'
    }, pageCb)
  })
}
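Since the truncation marker is inserted by the service itself, the lost bytes cannot be recovered client-side; but the downloaded file can at least be scanned for the marker so those regions can be re-requested with a smaller `NumberOfLines`. A minimal sketch, in Python for illustration (the helper name is hypothetical):

```python
MARKER = "[Your log message was truncated]"

def truncated_line_numbers(log_text):
    """Report 1-based line numbers carrying the service's truncation marker,
    so those regions can be re-requested with a smaller NumberOfLines."""
    return [i for i, line in enumerate(log_text.splitlines(), 1)
            if MARKER in line]

sample = ("ok line\n"
          "very long line [Your log message was truncated]\n"
          "ok again\n")
print(truncated_line_numbers(sample))  # [2]
```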
The `AWS_ROLE_ARN` environment variable was recently added with the introduction of the web identity credential provider. It would be great if the `AWS_ROLE_ARN` environment variable could also be used with the environment credential provider. This would allow environments where disk access is unavailable or read-only to assume a role without a shared configuration file.
An example workflow, given the following environment:
AWS_ACCESS_KEY_ID=AK...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE_ARN=arn:aws:iam::123456789012:role/example
The environment credential provider would use the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` credentials to assume the given `AWS_ROLE_ARN`.
Our current workaround is creating our own application-specific environment variable(s) (e.g. `AWS_ROLE_ARN` or `TF_AWS_ROLE_ARN`) to trigger assuming a role automatically, at the risk of conflicting with the `AWS_` namespace and default AWS Go SDK behavior.
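The proposed decision logic can be sketched as follows, in Python for illustration (the function name is hypothetical; the environment values are the ones from the example workflow above):

```python
def role_to_assume(env):
    """Return the role ARN when static credentials plus AWS_ROLE_ARN are all
    set - i.e. when the proposed environment credential provider should chain
    into an STS AssumeRole call - and None otherwise."""
    needed = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_ROLE_ARN")
    if all(env.get(key) for key in needed):
        return env["AWS_ROLE_ARN"]
    return None

env = {"AWS_ACCESS_KEY_ID": "AK...",
       "AWS_SECRET_ACCESS_KEY": "secret",
       "AWS_ROLE_ARN": "arn:aws:iam::123456789012:role/example"}
print(role_to_assume(env))  # arn:aws:iam::123456789012:role/example
```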