
Official repository for the aws-sdk-rails gem, which integrates the AWS SDK for Ruby with Ruby on Rails.

License: Other

Ruby 89.96% JavaScript 1.81% HTML 6.73% CSS 0.37% SCSS 0.70% Dockerfile 0.21% Shell 0.22%

aws-sdk-rails's Introduction

AWS SDK for Ruby Rails Plugin


A Ruby on Rails plugin that integrates AWS services with your application using the latest version of the AWS SDK for Ruby.

Installation

Add this gem to your Rails project's Gemfile:

gem 'aws-sdk-rails'

This gem also brings in the following AWS gems:

  • aws-sdk-ses
  • aws-sdk-sesv2
  • aws-sdk-sqs
  • aws-record
  • aws-sessionstore-dynamodb

If you want to use other services (such as S3), you will still need to add them to your Gemfile:

gem 'aws-sdk-rails', '~> 3'
gem 'aws-sdk-s3', '~> 1'

You will need to provide credentials for the SDK to use. See the latest AWS SDK for Ruby Docs for details.

If you're running your Rails application on Amazon EC2, the AWS SDK will check Amazon EC2 instance metadata for credentials to load. Learn more: IAM Roles for Amazon EC2

Features

AWS SDK uses the Rails logger

The AWS SDK is configured to use the built-in Rails logger for any SDK log output. The logger is configured to use the :info log level. You can change the log level by setting :log_level in the Aws.config hash.

Aws.config.update(log_level: :debug)

Rails 5.2+ Encrypted Credentials

If you are using Rails 5.2+ Encrypted Credentials, the credentials will be decrypted and loaded under the :aws top level key:

# config/credentials.yml.enc
# viewable with: `rails credentials:edit`
aws:
  access_key_id: YOUR_KEY_ID
  secret_access_key: YOUR_ACCESS_KEY

Encrypted Credentials will take precedence over any other AWS Credentials that may exist in your environment (eg: credentials from profiles set in ~/.aws/credentials).

If you are using ActiveStorage with S3 then you do not need to specify your credentials in your storage.yml configuration: they will be loaded automatically.
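
For illustration, your config/storage.yml can then reference S3 without any key material; a minimal sketch (the bucket name and region are placeholders):

# config/storage.yml
amazon:
  service: S3
  region: us-east-1
  bucket: your-bucket-name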

DynamoDB Session Store

You can configure session storage in Rails to use DynamoDB instead of cookies, allowing access to sessions from other applications and devices. You will need to have an existing Amazon DynamoDB session table to use this feature.

You can generate a migration file for the session table using the following command (<MigrationName> is optional):

rails generate dynamo_db:session_store_migration <MigrationName>

The session store migration generator command will generate two files: a migration file, db/migrate/#{VERSION}_#{MIGRATION_NAME}.rb, and a configuration YAML file, config/dynamo_db_session_store.yml.

The migration file will create and delete a table with default options. These options can be changed prior to running the migration and are documented in the Table class.

To create the table, run migrations as normal with:

rails db:migrate

Next, configure the Rails session store to be :dynamodb_store by editing config/initializers/session_store.rb to contain the following:

# config/initializers/session_store.rb
Rails.application.config.session_store :dynamodb_store, key: '_your_app_session'

You can now start your Rails application with session support.

Configuration

You can configure the session store with code, YAML files, or ENV, in this order of precedence. To configure in code, you can directly pass options to your initializer like so:

# config/initializers/session_store.rb
Rails.application.config.session_store :dynamodb_store,
  key: '_your_app_session',
  table_name: 'foo',
  dynamo_db_client: my_ddb_client

Alternatively, you can use the generated YAML configuration file config/dynamo_db_session_store.yml. YAML configuration may also be specified per environment, with environment configuration having precedence. To do this, create config/dynamo_db_session_store/#{Rails.env}.yml files as needed.

For configuration options, see the Configuration class.
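
As a sketch, a generated config/dynamo_db_session_store.yml might be edited to look like the following; the option names shown are common aws-sessionstore-dynamodb settings and the values are placeholders:

# config/dynamo_db_session_store.yml
table_name: your_app_sessions
table_key: session_id
consistent_read: true
max_age: 86400 # seconds; used when cleaning up stale sessions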

Rack Configuration

DynamoDB session storage is implemented in the `aws-sessionstore-dynamodb` gem. The Rack middleware inherits from the `Rack::Session::Abstract::Persisted` class, which also includes additional options (such as :key) that can be passed into the Rails initializer.

Cleaning old sessions

By default sessions do not expire. See config/dynamo_db_session_store.yml to configure the max age or stale period of a session.

You can use the DynamoDB Time to Live (TTL) feature on the expire_at attribute to automatically delete expired items.

Alternatively, a Rake task for garbage collection is provided:

rake dynamo_db:collect_garbage

Amazon Simple Email Service (SES) as an ActionMailer Delivery Method

This gem will automatically register SES and SESV2 as ActionMailer delivery methods. You simply need to configure Rails to use it in your environment configuration:

# for e.g.: config/environments/production.rb
config.action_mailer.delivery_method = :ses # or :sesv2

Override credentials or other client options

Client options can be overridden by re-registering the mailer with any set of SES or SESV2 Client options. You can create a Rails initializer config/initializers/aws.rb with contents similar to the following:

require 'json'

# Assuming a file "path/to/aws_secrets.json" with contents like:
#
#     { "AccessKeyId": "YOUR_KEY_ID", "SecretAccessKey": "YOUR_ACCESS_KEY" }
#
# Remember to exclude "path/to/aws_secrets.json" from version control, e.g. by
# adding it to .gitignore
secrets = JSON.load(File.read('path/to/aws_secrets.json'))
creds = Aws::Credentials.new(secrets['AccessKeyId'], secrets['SecretAccessKey'])

Aws::Rails.add_action_mailer_delivery_method(
  :ses, # or :sesv2
  credentials: creds,
  region: 'us-east-1',
  # some other config
)

Using ARNs with SES

This gem uses `Aws::SES::Client#send_raw_email` and `Aws::SESV2::Client#send_email` to send emails. This operation allows you to specify a cross-account identity for the email's Source, From, and Return-Path. To set these ARNs, use any of the following headers on your Mail::Message object returned by your Mailer class:

  • X-SES-SOURCE-ARN

  • X-SES-FROM-ARN

  • X-SES-RETURN-PATH-ARN

Example:

# in your Rails controller
message = MyMailer.send_email(options)
message['X-SES-FROM-ARN'] = 'arn:aws:ses:us-west-2:012345678910:identity/[email protected]'
message.deliver

Active Support Notification Instrumentation for AWS SDK calls

To add ActiveSupport::Notifications Instrumentation to all AWS SDK client operations call Aws::Rails.instrument_sdk_operations before you construct any SDK clients.

Example usage in config/initializers/instrument_aws_sdk.rb

Aws::Rails.instrument_sdk_operations

Events are published for each client operation call with the event name: <operation>.<service>.aws. For example, S3's put_object has an event name of: put_object.S3.aws. The service name will always match the namespace of the service client (e.g. Aws::S3::Client => 'S3'). The payload of the event is the request context.

You can subscribe to these events as you would other ActiveSupport::Notifications:

ActiveSupport::Notifications.subscribe('put_object.S3.aws') do |name, start, finish, id, payload|
 # process event
end

# Or use a regex to subscribe to all service notifications
ActiveSupport::Notifications.subscribe(/S3[.]aws/) do |name, start, finish, id, payload|
 # process event
end

AWS SQS Active Job

This package provides a lightweight, high performance SQS backend for ActiveJob.

To use AWS SQS ActiveJob as your queuing backend, simply set active_job.queue_adapter to :amazon or :amazon_sqs (:amazon has been used for a number of other Amazon Rails adapters, such as ActiveStorage, so it is carried forward as a convention here). For details on setting the queuing backend, see: ActiveJob: Setting the Backend. To use the non-blocking (async) adapter, set active_job.queue_adapter to :amazon_sqs_async. If you have a lot of jobs to queue, or you need to avoid the extra latency of an SQS call in your request, consider using the async adapter. However, you may also want to configure an async_queue_error_handler to handle errors that may occur when queuing jobs (see the sketch after the example below). See the Aws::Rails::SqsActiveJob::Configuration documentation for details.

# config/application.rb
module YourApp
  class Application < Rails::Application
    config.active_job.queue_adapter = :amazon_sqs # note: can use either :amazon or :amazon_sqs
    # To use the non-blocking async adapter:
    # config.active_job.queue_adapter = :amazon_sqs_async
  end
end

# Or to set the adapter for a single job:
class YourJob < ApplicationJob
  self.queue_adapter = :amazon_sqs
  #....
end
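
If you use the async adapter, a minimal sketch of wiring up an error handler for failed enqueues could look like the following; the handler's argument list here is an assumption, so check the Aws::Rails::SqsActiveJob::Configuration documentation for the exact contract:

# config/initializers/aws_sqs_active_job.rb
Aws::Rails::SqsActiveJob.configure do |config|
  # Invoked when queuing a job asynchronously raises an error.
  # The (error, job, options) signature is assumed for illustration.
  config.async_queue_error_handler = lambda do |error, job, _options|
    Rails.logger.error("Failed to enqueue #{job.class.name}: #{error.message}")
  end
end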

You also need to configure a mapping of ActiveJob queue name to SQS Queue URL. For more details, see the configuration section below.

# config/aws_sqs_active_job.yml
queues:
  default: 'https://my-queue-url.amazon.aws'

To queue a job, you can just use standard ActiveJob methods:

# To queue for immediate processing
YourJob.perform_later(args)

# or to schedule a job for a future time:
YourJob.set(wait: 1.minute).perform_later(args)

Note: Due to limitations in SQS, you cannot schedule jobs for later than 15 minutes in the future.

Retry Behavior and Handling Errors

See the Rails ActiveJob Guide on Exceptions for background on how ActiveJob handles exceptions and retries.

In general, you should configure retries for your jobs using retry_on. When configured, ActiveJob will catch the exception and reschedule the job for re-execution after the configured delay. This will delete the original message from the SQS queue and requeue a new message.

By default, SQS ActiveJob is configured with retry_standard_error set to true and will not delete messages for jobs that raise a StandardError and that do not handle that error via retry_on or discard_on. These job messages will remain on the queue and will be re-read and retried following the SQS queue's configured retry and DLQ settings. If you do not have a DLQ configured, the message will continue to be attempted until it reaches the queue's retention period. In general, it is a best practice to configure a DLQ to store unprocessable jobs for troubleshooting and redrive.

If you want failed jobs that do not have retry_on or discard_on configured to be immediately discarded and not left on the queue, set retry_standard_error to false. See the configuration section below for details.
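
For example, a job can declare its own retry and discard rules with the standard ActiveJob API; the job class and error choices below are purely illustrative:

class ProcessOrderJob < ApplicationJob
  queue_as :default

  # Requeue on transient timeouts, up to 5 attempts with a delay between each.
  retry_on Timeout::Error, wait: 30.seconds, attempts: 5

  # Drop jobs whose arguments can no longer be deserialized.
  discard_on ActiveJob::DeserializationError

  def perform(order_id)
    # business logic here
  end
end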

Running workers - polling for jobs

To start processing jobs, you need to start a separate process (in addition to your Rails app) with bin/aws_sqs_active_job (an executable script provided with this gem). You need to specify the queue to process jobs from:

RAILS_ENV=development bundle exec aws_sqs_active_job --queue default

To see a complete list of arguments use --help.

You can kill the process at any time with CTRL+C - the processor will attempt to shut down cleanly and will wait up to :shutdown_timeout seconds for all actively running jobs to finish before killing them.

Note: When running in production, it is recommended that you use a process supervisor such as foreman, systemd, upstart, daemontools, launchd, runit, etc.
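
For example, with foreman you could declare the poller alongside your web process in a Procfile; a minimal sketch (the queue name is a placeholder):

# Procfile
web: bundle exec rails server
worker: bundle exec aws_sqs_active_job --queue default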

Performance

AWS SQS ActiveJob is a lightweight and performant queueing backend. Benchmark performed using Ruby MRI 2.6.5,
shoryuken 5.0.5, aws-sdk-rails 3.3.1 and aws-sdk-sqs 1.34.0 on a 2015 MacBook Pro dual-core i7 with 16GB RAM.

AWS SQS ActiveJob (default settings): Throughput 119.1 jobs/sec
Shoryuken (default settings): Throughput 76.8 jobs/sec

Serverless workers: processing ActiveJobs using AWS Lambda

Rather than managing the worker processes yourself, you can use Lambda with an SQS trigger. With Lambda Container Image Support and the Lambda handler provided with aws-sdk-rails, it's easy to use Lambda to run ActiveJobs for your dockerized Rails app (see below for some tips). All you need to do is:

  1. Include the aws_lambda_ric gem.
  2. Push your image to ECR.
  3. Create a Lambda function from your image (see the Lambda docs for details).
  4. Add an SQS trigger for the queue(s) you want to process jobs from.
  5. Set the ENTRYPOINT to /usr/local/bundle/bin/aws_lambda_ric and the CMD to config/environment.Aws::Rails::SqsActiveJob.lambda_job_handler - this will load Rails and then use the Lambda handler provided by aws-sdk-rails. You can do this either as function config or in your Dockerfile (see the sketch below).
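
A sketch of step 5 done in the Dockerfile, assuming your image installs gems under the default /usr/local/bundle path:

# Dockerfile (final lines)
ENTRYPOINT ["/usr/local/bundle/bin/aws_lambda_ric"]
CMD ["config/environment.Aws::Rails::SqsActiveJob.lambda_job_handler"]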

There are a few limitations/requirements for Lambda container images: the default Lambda user must be able to read all the files, and the image must be able to run on a read-only file system. You may need to disable bootsnap, set a HOME env variable, and set the logger to STDOUT (which Lambda will record to CloudWatch for you).

You can use RAILS_ENV to control the environment. If you need to execute specific configuration in the Lambda, you can create a Ruby file and use it as your entrypoint:

# app.rb
# some custom config

require_relative 'config/environment' # load rails

# Rails.config.custom....
# Aws::Rails::SqsActiveJob.config....

# no need to write a handler yourself here, as long as
# aws-sdk-rails is loaded, you can still use the
# Aws::Rails::SqsActiveJob.lambda_job_handler

# To use this file, set CMD:  app.Aws::Rails::SqsActiveJob.lambda_job_handler

Elastic Beanstalk workers: processing ActiveJobs using worker environments

Another option for processing jobs without managing the worker process is hosting the application in a scalable Elastic Beanstalk worker environment. This SDK includes Rack middleware that can be added conditionally and which will process requests from the SQS Daemon provided with each worker instance. The middleware will forward each request and parameters to their appropriate jobs.

To add the middleware on application startup, set the AWS_PROCESS_BEANSTALK_WORKER_REQUESTS environment variable to true in the worker environment configuration.

To protect against forgeries, daemon requests will only be processed if they originate from localhost or the Docker host.

Periodic (scheduled) jobs are also supported with this approach without requiring any additional dependencies. Elastic Beanstalk workers support the addition of a cron.yaml file in the application root to configure this.

Example:

version: 1
cron:
 - name: "MyApplicationJob"
   url: "/"
   schedule: "0 */12 * * *"

Where 'name' must be the case-sensitive class name of the job.

Configuration

For a complete list of configuration options see the Aws::Rails::SqsActiveJob::Configuration documentation.

You can configure AWS SQS Active Job either through the YAML file or through code in your config/environments/*.rb files or initializers.

For file based configuration, you can use either:

  1. config/aws_sqs_active_job/<RAILS_ENV>.yml
  2. config/aws_sqs_active_job.yml

The YAML file supports ERB.
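
For example, a YAML file could read the queue URL from the environment via ERB; the environment variable name below is a placeholder:

# config/aws_sqs_active_job.yml
queues:
  default: <%= ENV['SQS_DEFAULT_QUEUE_URL'] %>
max_messages: 10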

To configure in code:

Aws::Rails::SqsActiveJob.configure do |config|
  config.logger = ActiveSupport::Logger.new(STDOUT)
  config.max_messages = 5
  config.client = Aws::SQS::Client.new(region: 'us-east-1')
end

Using FIFO queues

If the order in which your jobs execute is important, consider using a FIFO queue. A FIFO queue ensures that messages are processed in the order they were sent (first-in, first-out) and provides exactly-once processing (duplicates are never introduced into the queue). To use a FIFO queue, simply set the queue URL (which will end in ".fifo") in your config.

When using FIFO queues, jobs will NOT be processed concurrently by the poller to ensure the correct ordering. Additionally, all jobs on a FIFO queue will be queued synchronously, even if you have configured the amazon_sqs_async adapter.
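
For example, pointing the default queue at a FIFO queue is just a matter of the URL; the URL below is a placeholder:

# config/aws_sqs_active_job.yml
queues:
  default: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-app-jobs.fifo'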

Message Deduplication ID

FIFO queues support Message deduplication ID, which is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.

Customize Deduplication keys

If necessary, the deduplication key used to create the message deduplication ID can be customized:

Aws::Rails::SqsActiveJob.configure do |config|
  config.excluded_deduplication_keys = [:job_class, :arguments]
end

# Or to set deduplication keys to exclude for a single job:
class YourJob < ApplicationJob
  include Aws::Rails::SqsActiveJob
  deduplicate_without :job_class, :arguments
  #...
end

By default, the following keys are used for deduplication keys:

job_class, provider_job_id, queue_name, priority, arguments, executions, exception_executions, locale, timezone, enqueued_at

Note that job_id is NOT included in the deduplication keys because it is unique for each initialization of the job, and the run-once behavior must be guaranteed for ActiveJob retries. job_id is implicitly excluded from the deduplication keys even if you do not add it to excluded_deduplication_keys.

Message Group IDs

FIFO queues require a message group id to be provided for the job. It is determined by:

  1. Calling message_group_id on the job, if it is defined (see the sketch below).
  2. If message_group_id is not defined or returns nil, the default value will be used. You can optionally specify a custom value in your config as the default that will be used by all jobs.
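
A sketch of point 1, defining message_group_id on a job; the class name, adapter line, and per-order grouping value are illustrative choices:

class ProcessOrderJob < ApplicationJob
  self.queue_adapter = :amazon_sqs

  # Jobs returning the same group id are processed in order on a FIFO queue.
  def message_group_id
    "order-#{arguments.first}"
  end

  def perform(order_id)
    # business logic here
  end
end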

AWS Record Generators

This package also pulls in the `aws-record` gem and provides generators for creating models and a rake task for performing table config migrations.

Setup

You can invoke the generator directly by calling rails g aws_record:model ...

If DynamoDB will be the only datastore you plan on using, you can also set aws-record-generator to be your project's default ORM with:

config.generators do |g|
  g.orm :aws_record
end

This will cause aws_record:model to be invoked by the Rails model generator.

Generating a model

Generating a model can be as simple as: rails g aws_record:model Forum --table-config primary:10-5. aws-record-generator will automatically create a uuid:hash_key field for you, and a table config with the provided read/write units:

# app/models/forum.rb

require 'aws-record'

class Forum
  include Aws::Record

  string_attr :uuid, hash_key: true
end

# db/table_config/forum_config.rb

require 'aws-record'

module ModelTableConfig
  def self.config
    Aws::Record::TableConfig.define do |t|
      t.model_class Forum

      t.read_capacity_units 10
      t.write_capacity_units 5
    end
  end
end

More complex models can be created by adding more fields to the model as well as other options:

rails g aws_record:model Forum post_id:rkey author_username post_title post_body tags:sset:default_value{Set.new}

# app/models/forum.rb

require 'aws-record'

class Forum
  include Aws::Record

  string_attr :uuid, hash_key: true
  string_attr :post_id, range_key: true
  string_attr :author_username
  string_attr :post_title
  string_attr :post_body
  string_set_attr :tags, default_value: Set.new
end

# db/table_config/forum_config.rb
# ...

Finally, you can attach a variety of options to your fields, and even add ActiveModel validations to the models:

rails g aws_record:model Forum forum_uuid:hkey post_id:rkey author_username post_title post_body tags:sset:default_value{Set.new} created_at:datetime:db_attr_name{PostCreatedAtTime} moderation:boolean:default_value{false} --table-config=primary:5-2 AuthorIndex:12-14 --required=post_title --length-validations=post_body:50-1000 --gsi=AuthorIndex:hkey{author_username}

Which results in the following files being generated:

# app/models/forum.rb

require 'aws-record'
require 'active_model'

class Forum
  include Aws::Record
  include ActiveModel::Validations

  string_attr :forum_uuid, hash_key: true
  string_attr :post_id, range_key: true
  string_attr :author_username
  string_attr :post_title
  string_attr :post_body
  string_set_attr :tags, default_value: Set.new
  datetime_attr :created_at, database_attribute_name: "PostCreatedAtTime"
  boolean_attr :moderation, default_value: false

  global_secondary_index(
    :AuthorIndex,
    hash_key: :author_username,
    projection: {
      projection_type: "ALL"
    }
  )
  validates_presence_of :post_title
  validates_length_of :post_body, within: 50..1000
end

# db/table_config/forum_config.rb
# ...

To migrate your new models and begin using them you can run the provided rake task: rails aws_record:migrate

Docs

The syntax for creating an aws-record model follows:

rails generate aws_record:model NAME [field[:type][:opts]...] [options]

The possible field types are:

Field Name                       aws-record attribute type
bool | boolean                   :boolean_attr
date                             :date_attr
datetime                         :datetime_attr
float                            :float_attr
int | integer                    :integer_attr
list                             :list_attr
map                              :map_attr
num_set | numeric_set | nset     :numeric_set_attr
string_set | s_set | sset        :string_set_attr
string                           :string_attr

If a type is not provided, the generator will assume the field is of type :string_attr.

Additionally, a number of options may be attached to the field as a comma-separated list:

Field Option Name                     aws-record option
hkey                                  marks an attribute as a hash_key
rkey                                  marks an attribute as a range_key
persist_nil                           will persist nil values in an attribute
db_attr_name{NAME}                    sets a secondary name for an attribute; these must be unique across attribute names
ddb_type{S|N|B|BOOL|SS|NS|BS|M|L}     sets the dynamo_db_type for an attribute
default_value{Object}                 sets the default value for an attribute

The standard rules apply for using options in a model. Additional reading can be found here

Command Option Names                                               Purpose
[--skip-namespace], [--no-skip-namespace]                          Skip namespace (affects only isolated applications)
[--disable-mutation-tracking], [--no-disable-mutation-tracking]    Disables dirty tracking
[--timestamps], [--no-timestamps]                                  Adds created/updated timestamps to the model
--table-config=primary:R-W [SecondaryIndex1:R-W]...                Declares the r/w units for the model as well as any secondary indexes
[--gsi=name:hkey{ field_name }[,rkey{ field_name },proj_type{ ALL|KEYS_ONLY|INCLUDE }]...]   Allows for the declaration of secondary indexes
[--required=field1...]                                             A list of attributes that are required for an instance of the model
[--length-validations=field1:MIN-MAX...]                           Validations on the length of attributes in a model
[--table-name=name]                                                Sets the name of the table in DynamoDB, if different than the model name
[--skip-table-config]                                              Doesn't generate a table config for the model
[--password-digest]                                                Adds a password field (note that you must have bcrypt as a dependency) that automatically hashes and manages the model password

The included rake task aws_record:migrate will run all of the migrations in app/db/table_config

aws-sdk-rails's People

Contributors

alextwoods, amw, awood45, bongole, c960657, chiastolite, cjyclaire, dependabot[bot], hughevans, hyandell, jonathanhefner, jterapin, kakubin, kyto64, mrwellington, mullermp, nilpoona, nov, ohbarye, osyoyu, rdubya, roharon, sgomez17, tetsuya-ogawa, tsuwatch

aws-sdk-rails's Issues

Active Job logs are not output realtime to Cloudwatch logs on ECS fargate

aws sqs active job is running in an ECS fargate environment.

A container is built using the dockerfile that comes standard with rails 7.1.1.

CMD in the dockerfile is overwritten with the "command" in the ECS task-definition, and aws_sqs_active_job is started with the following settings.

[
  "bundle",
  "exec",
  "aws_sqs_active_job",
  "--queue",
  "default"
]

At this time, logs that should have been output to Cloudwatch Logs via STDOUT were not output.

After that, logs were output to Cloudwatch Logs at the time the container was killed for deployment, etc.

We would like to see JOB logs in real time if possible.

A rails server running in a similar environment is able to check logs in cloudwatch logs in real time.

The following are the settings for aws sqs active job.

queues:
  default: 'https://sqs.xxx.amazonaws.com/xxxx/xxxx'
threads: 5
max_messages: 5
shutdown_timeout: 110

Is there any other way to configure the logs so that they can be viewed in real time?

version:
aws-sdk-rails (3.9.0)

Add DynamoDB support for ActiveRecord

I just want my models to use DynamoDB, rather than MySQL or PostgreSQL via RDS. This was started in the v1 aws-sdk gem back in 2013, but never made it to maturity.

To simplify, Rails 4.2 and above.

Aws::SES::Client 403 » Mailer does not raise error

Currently seeing a 403 error on Aws::SES::Client.

Aug 18 15:33:24 energylink-production production.log:  I, [2016-08-18T05:33:23.954349 #18197]  INFO -- : [AWS SimpleEmailService 200 1.134861 0 retries] send_raw_email(:destinations=>[EMAILS],:raw_message=>{:data=> ... (45260 bytes)>})  
Aug 18 15:33:24 production.log:  I, [2016-08-18T05:33:23.954749 #18197]  INFO -- : 
Aug 18 15:33:24 production.log:  Sent mail to  (1180.5ms)

My reading of the documentation is that Aws SDK should be rotating Aws::InstanceProfileCredentials automagically as required without manual intervention.

If you're running your Ruby on Rails application on Amazon Elastic Compute Cloud, keep in mind that the AWS SDK for Ruby will automatically check Amazon EC2 instance metadata for credentials.

http://docs.aws.amazon.com/sdkforruby/api/Aws/InstanceProfileCredentials.html
amazon-archives/aws-sdk-core-ruby#193

Sender name gets dropped from email_from when using SESv2

Following the introduction of SESV2, we have noticed that the name part of the sender gets dropped when the email is sent using SESV2.
So if the email comes from Some Name <[email protected]>, the recipient only sees the email address '[email protected]', but the 'Some Name' part is dropped.
We tested the same email with SES (v1), and that works ok and the email is received with both parts of the sender email (name + address).

Authentication problem occurs while using SES

unable to sign request without credentials set error message

Our system uses
config.action_mailer.delivery_method = :aws_sdk
and Aws::Rails.add_action_mailer_delivery_method(), using the SDK as an option.

When an error occurs, the system sends a large number of e-mails to up to N users via asynchronous jobs.
At this point, the above error message occurs (it works well in normal situations).

As far as I know, it occurs when the rate limit of authentication is exceeded.
For other AWS services we made the AWS client a singleton, so the problem has been solved there. Is there any way to fix that?


  • aws-sdk-rails (2.1.0)
  • aws-sdk-ses (1.24.0)
  • rails (6.1.7.5)

Extending support for pinpoint's send_email API

Amazon has been pushing Pinpoint a lot lately; it seems like it should be doable to extend support for Pinpoint as well. I assumed my Pinpoint project would automatically just pick up my existing integration with SES (via this gem + Rails mailers), but it looks like that's not the case. That's probably because Pinpoint injects a pixel for analytics, so they probably need this endpoint to hijack/inject the content.

https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/PinpointEmail/Client.html#send_email-instance_method

I haven't looked at the method signatures for SES to see how they compare, but i'm guessing they are similar-ish

Installing aws-sdk-rails causes version resolution issue

Following the documentation to set up SES v2 as my Action Mailer:

config.action_mailer.delivery_method = :sesv2

Resulted in:

RuntimeError (Invalid delivery method :sesv2):

Digging in, I realized that Bundler had installed 3.1.0 from 2020 instead of the current 3.10.0 release, which predated the addition of the SES v2 mailer (as well as many other features). Attempting to run bundle update didn't change anything.

I'm no stranger to this class of bug, where an older release with more forgiving version resolution constraints results in users being served a confusingly/bewilderingly old version that doesn't work or match the README any more. I solved this by yanking older versions of standard which were at that point extremely out of date and no longer in much use.

Anyway, in addition to the above issue, the hard deps in the gemspec result in this for me when I do pin the aws-sdk-rails version to 3.10.0:

Fetching rack 2.2.8 (was 3.0.8)
Fetching aws-sdk-dynamodb 1.100.0
Fetching aws-sdk-sqs 1.69.0
Fetching aws-sdk-sesv2 1.43.0
Installing rack 2.2.8 (was 3.0.8)
Installing aws-sdk-dynamodb 1.100.0
Fetching aws-record 2.13.0
Fetching rack-session 1.0.2 (was 2.0.0)
Fetching aws-sessionstore-dynamodb 2.1.0
Fetching rackup 1.0.0 (was 2.1.0)
Installing aws-sdk-sqs 1.69.0
Installing aws-sdk-sesv2 1.43.0
Installing aws-record 2.13.0
Installing rack-session 1.0.2 (was 2.0.0)
Installing aws-sessionstore-dynamodb 2.1.0
Installing rackup 1.0.0 (was 2.1.0)
Fetching aws-sdk-rails 3.10.0 (was 3.1.0)
Installing aws-sdk-rails 3.10.0 (was 3.1.0)
Bundle complete! 27 Gemfile dependencies, 126 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.

Tracking this down, the culprit is the dynamodb gem, which has a hard dep on Rack 2:

aws-sessionstore-dynamodb (2.1.0)
      aws-sdk-dynamodb (~> 1, >= 1.85.0)
      rack (~> 2)

Giving up Rack 3 in order to adopt this gem for the SES integration in my Rails 7.1 app seems like a Bad Idea.

I actually only found the gem through lots of googling which actually landed me at this issue, which raised the challenges posed not only by the gem's name, but its wide surface area on other AWS dependencies: #22

In that thread, @mullermp said:

I would hope not. It's a decision that we made earlier last year. aws-sdk-rails will pull in extra AWS dependencies for its features. Rails itself might be considered "bloat" as it has many dependencies and features too. If you only need aws-sdk-ses you could install that gem instead and write a custom action mailer adapter similar to the one in aws-sdk-rails.

I definitely understand this sentiment, but because this gem is a toolkit of a half dozen different integrations with Rails, each with potentially version-constrained transitive dependencies on other things, using this gem seems to pose a pretty significant risk to keeping Rails (and one's other gems) up-to-date. For example, I only want to use the SES mailer functionality of the gem (so as to avoid SMTP), but because the gem also includes DynamoDB integration, even users who don't use DynamoDB will have to sacrifice Rack 3, which will eventually be required by Rails, which means a dependency on this gem could prevent upgrading Rails itself (to say nothing of Rack).

My advice would be to:

  1. Start by documenting this issue in the README so other new users don't get thrown off, since Bundler will default to resolving to the very-old 3.1.0 for anyone who's on the latest version of Rails (currently 7.1.3).
  2. Consider a release that either:
    a. Removes the hard dependency on the other AWS SDK gems, updates the documentation to indicate that users should state the dependencies themselves, and instead detects whether those gems are installed and loadable at runtime (warning/erroring if the associated feature is configured and the gem is missing or an unsupported version)
    b. Spins this gem out into N gems (one per AWS feature) so that the surface area for end-users is constrained to only the features they need, reducing their dependency exposure (and associated security/support risks due to not being able to upgrade other dependencies)

I think 2(b) is probably closer to the "right" answer, even though it'd present more work. 2(a) is something I have done for plugin gems in the past but would be a breaking change for existing users and would require the gem to effectively manually do the job of version resolution insofar as validating the Gem::Version range matches your expectations at runtime

Rename to ses-mailer?

I was Googling for "Action Mailer SES", "SES Mailer Rails" etc, and I didn't find this gem - I found only this one, which apparently does not use the aws-sdk, therefore I don't want to use it.

It seems the only thing aws-sdk-rails does is the Action Mailer support for SES. With that, if this gem had a more specific name, it would be easier to find it and also reduce the time spent checking its code. I only wanted Action Mailer support, I didn't want anything different than that, since it could conflict with my current aws-sdk setup.

Any thoughts on renaming it or extracting it to a new repo?

how to lock down the aws-sdk version

"This dependency will automatically pull in version 2 of the AWS SDK for Ruby."

What does this mean exactly? How can I specify a version of the SDK? Right now my Gemfile looks like this. Is that correct, or is that going to lead to conflicts?

gem 'aws-sdk', '2.2.8'
gem 'aws-sdk-rails', '1.0.0'

Using SESv2

Hey guys! Only found out about this gem recently and I love it. Thanks for the effort!

I noticed the built in ActionMailer integration uses SES instead of the newer SESv2 API. Is this going to make a huge difference? Are there plans to switch over?
Would you guys accept pull requests?

Thanks!

[Documentation] Credential configuration specific to client library - Using ec2 IAM roles

I prefer the method of not managing any access keys/secrets as part of my deployment, and this feature sounds fantastic, but it was really hard to understand how to actually implement it.

  1. Create EC2 role with desired policies βœ…
  2. Attach IAM role to ec2 instance βœ…
  3. Configure aws-sdk-rails to use policy. ❓

Looking here, you can see a blurb for links with new information, but this is just general information, nothing to do with how to interact with this setup using the gem.
http://docs.aws.amazon.com/sdk-for-ruby/v2/developer-guide/setup-config.html#aws-ruby-sdk-credentials-iam

After a bunch of digging I found
Aws::InstanceProfileCredentials.new.credentials.access_key_id and Aws::InstanceProfileCredentials.new.credentials.secret_access_key

Which now allows me to do the following:

Aws::Credentials.new(
                    Aws::InstanceProfileCredentials.new.credentials.access_key_id,
                    Aws::InstanceProfileCredentials.new.credentials.secret_access_key)

Are there any other convenience methods I'm missing? It would be nice if the documentation gave implementation examples similar to the others (shared credentials, environment variables, etc.)

Default `visibility_timeout` does not pick up the value from the SQS Queue in AWS

Currently, the visibility_timeout defaults to 60 seconds in poller.rb and 120 seconds in configuration.rb (this is confusing in its own right) but I would expect the default to be what we set on the SQS queue directly in AWS.

Setting the visibility_timeout on the SQS queue gives the assumption that all messages would follow the visibility_timeout from the SQS queue, not from the workers.

  1. Could I get more information on why that decision was made and if we could update to have the default pulled from the SQS queue directly?
  2. Can we update the defaults from poller.rb and configuration.rb for the visibility_timeout to be the same?

Thanks and please let me know if you need more information

DynamoDB Local support

Hi,

For testing purposes, I can't find a way to configure the host and port of the database in order to use AWS' DynamoDB Local. In Dynamoid, I can easily configure an endpoint: config.endpoint = 'http://localhost:8000'

How to configure that with aws-sdk-rails?

SES SDK Rails6 not picking up Region

I have set up my SES to initialise per:

Rails.application.reloader.to_prepare do
    ActionMailer::Base.add_delivery_method :ses, AWS::SES::Base,
      access_key_id: ENV['AMAZON_ACCESS_ID'],
      secret_access_key: ENV['AMAZON_SECRET_KEY'],
    region: ENV['AWS_REGION']
end

However, I get an error message indicating that it is not using the AWS_REGION environment variable I specified.

The following identities failed the check in region US-EAST-1

my AWS_REGION is actually ap-southeast-2

Is there any reason why my region: setting should be getting ignored?

rails logging level

How do I set the logging level output for the aws calls? Are they only setup to go to the debug level?

Cannot use cross account access with SourceArn

The SDK does not seem to allow setting the SourceArn and so we are unable to use this gem for cross account access.

It should be updated to allow using SourceArn when sending emails.

SES mail delivery stopped working

I'm having trouble sending mail through SES after updating Rails and aws-sdk-rails. Before the update:

Rails version: 5.2.4.3
aws-sdk-rails version: 1.0.1

config.action_mailer.delivery_method = :aws_sdk

after the update:

Rails version: 6.1.4
aws-sdk-rails version: 3.6.1

config.action_mailer.delivery_method = :ses

The app is running on EC2 via Elastic Beanstalk. The docs say that, without providing credentials, it takes them from the instance metadata. This seems to have worked fine before the update. Now I get an exception:

Errno::EADDRNOTAVAIL: Cannot assign requested address - connect(2) for "localhost" port 25

Which looks like it is trying to send mail via SMTP at localhost. Locally, I can send mail through SES, if I set the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY environment variables. So, I guess it is an issue with discovering the credentials on EC2. The app is running in a Docker container on Docker running on 64bit Amazon Linux 2/3.4.9, which I updated from 64bit Amazon Linux 2018.03 v2.17.1 running Docker 20.10.7-ce, just to make sure it's not some version incompatibility issue between the gem and the platform, but the issue persists. Any hints at how to debug this issue would be appreciated. 🙏

Separate settings for each queue

Are there any plans to make it possible to configure SQS Active Job settings on a per queue basis, as shown below?

queue1:
  max_messages: 10
  visibility_timeout: 30
queue2:
  max_messages: 5
  visibility_timeout: 100

Aws::SES::Client Credentials are not refreshed

As a root cause of the issues highlighted in #11, AWS credentials are not being refreshed when using Aws::InstanceProfileCredentials.

My reading of the documentation is that Aws SDK should be rotating Aws::InstanceProfileCredentials automagically as required without manual intervention.

If you're running your Ruby on Rails application on Amazon Elastic Compute Cloud, keep in mind that the AWS SDK for Ruby will automatically check Amazon EC2 instance metadata for credentials.

http://docs.aws.amazon.com/sdkforruby/api/Aws/InstanceProfileCredentials.html
amazon-archives/aws-sdk-core-ruby#193

Region Mismatch

For anyone having issues with sending, you might want to check the region is correctly set. You can manually add the environment variable AWS_REGION to set it. This might be useful information in the readme.

Setting `log_level` to `:error` does not stop Aws client from outputting

The README says:

The AWS SDK is configured to use the built-in Rails logger for any SDK log output. The logger is configured to use the :info log level. You can change the log level by setting :log_level in the Aws.config hash.

Aws.config.update(log_level: :debug)

However, setting it to :error (Aws.config.update(log_level: :error)) does not stop the Aws clients from outputting :info logs.

[Aws::SecretsManager::Client 200 0.204598 0 retries] get_secret_value(secret_id: <hidden>)

Only setting the logger to nil works:

Aws.config.update(logger: nil)

I would assume setting Aws.config.log_level would apply to all Aws modules. Is this the recommended way? Can we update docs to reflect what is recommended here?

DynamoDB Session Store does not work with Rails 7

DynamoDB Session Store (dynamodb_store) does not work with Rails 7.

The most simple setup (a fresh Rails 7.0.1 project with an empty action) would generate the following error:

NoMethodError (undefined method `enabled?' for {"_csrf_token"=>"..."}:Rack::Session::Abstract::SessionHash):

actionpack (7.0.1) lib/action_dispatch/middleware/flash.rb:62:in `commit_flash'
actionpack (7.0.1) lib/action_controller/metal.rb:189:in `dispatch'
actionpack (7.0.1) lib/action_controller/metal.rb:251:in `dispatch'
actionpack (7.0.1) lib/action_dispatch/routing/route_set.rb:49:in `dispatch'
actionpack (7.0.1) lib/action_dispatch/routing/route_set.rb:32:in `serve'
actionpack (7.0.1) lib/action_dispatch/journey/router.rb:50:in `block in serve'
actionpack (7.0.1) lib/action_dispatch/journey/router.rb:32:in `each'
actionpack (7.0.1) lib/action_dispatch/journey/router.rb:32:in `serve'
actionpack (7.0.1) lib/action_dispatch/routing/route_set.rb:850:in `call'
(snip)

The direct trigger is this change rails/rails@ca7c820 which accesses session.enabled? in the request dispatching.

However, the root cause seems to lie in dynamodb_store's implementation. When dynamodb_store is used, the session variable is a Rack::Session::Abstract::SessionHash, where it should be an ActionDispatch::Request::Session. session.enabled? is implemented only in ActionDispatch::Request::Session, thus resulting in the NoMethodError.

SQS Poller pulls in messages even if `max_number_of_messages` = 1 and previous messages are invisible

I don't have any logs to provide, but I am seeing that messages enqueued after existing messages become "not visible" get polled by SQS workers but do not commence work. It seems like they are polled and sit around waiting for something (existing messages maybe?).

I would expect that workers do not pick up any new messages while the current one is being worked on (or in other words, polled). If they are polled, they start work immediately and do not hold on to other messages. Can I confirm that this is the expected behavior?

You can see that in the picture below, we had lots of messages that were "not visible" and we queued more messages when the "message visible" was 0. That message gets polled by a worker immediately but never actually starts work. I would have expected one of the available workers to start work on it while we had hundreds of invisible messages.

(screenshot of the SQS console omitted)

Rails.application.credentials.aws reserved for aws credentials πŸ‘Ž

Recently in our application that uses this gem, we migrated from previous Rails.secrets mechanism to the new Rails.application.credentials. Unfortunately, after moving the same keys from config/secrets.yml.enc to new encrypted credentials we saw an error about invalid configuration option: ':rds_instance', which we stored under Rails.application.credentials.aws.

I tried to look in aws-sdk-ruby for code that automatically loads these credentials. With no luck there, I assumed it was Rails itself that's responsible. I was very happy to see that this was not the case.

Reserving keys in the standard credentials mechanism is bad for many reasons:

  1. It's magic and unexpected
  2. It was not present in the previous Rails.application.secrets, making the migration difficult
  3. This invalid configuration option error is difficult to debug (validation happens late in a different gem and aws-sdk-rails does not appear in the stack trace)
  4. Most importantly, if more gems started reserving their own keys in Rails.application.credentials, it would be a mess

I would want to call for this behavior to be deprecated and eventually removed from defaults.

How to specify `configuration_set`

If I set up a Configuration Set for SES open/click notifications, how can I use it with this gem? Is that a config option that can be passed into Aws::Rails.add_action_mailer_delivery_method? Or even better, can it be specified in the mailer somewhere directly so I could potentially use different configuration sets based on the mailer?

Cannot load other sdks besides S3

Rails version: 6.0.0

Besides aws-sdk-s3, no other aws-sdk-* wants to cooperate with this gem. Here's my gemfile:

gem 'aws-sdk-rails', '~> 3'
gem 'aws-sdk-s3', '~> 1'
gem 'aws-sdk-sqs', '~> 1'

Gemfile.lock:

aws-sdk-rails (3.0.5)
      aws-sdk-ses (~> 1)
      railties (>= 5.2.0)
    aws-sdk-s3 (1.50.0)
      aws-sdk-core (~> 3, >= 3.61.1)
      aws-sdk-kms (~> 1)
      aws-sigv4 (~> 1.1)
    aws-sdk-ses (1.26.0)
      aws-sdk-core (~> 3, >= 3.61.1)
      aws-sigv4 (~> 1.1)
    aws-sdk-sqs (1.22.0)
      aws-sdk-core (~> 3, >= 3.61.1)
      aws-sigv4 (~> 1.1)

However, I get these errors when trying to access SQS from the rails console (using a fresh console session, I've restarted console sessions and the server multiple times to make sure I wasn't using a stale session):

irb(main):001:0> Aws::SQS
Traceback (most recent call last):
        1: from (irb):1
NameError (uninitialized constant Aws::SQS
Did you mean?  Aws::SES
               Aws::STS)
irb(main):002:0> Aws.constants
=> [:ClientStubs, :Partitions, :RefreshingCredentials, :Xml, :Query, :AssumeRoleCredentials, :Stubbing, :STS, :Credentials, :AssumeRoleWebIdentityCredentials, :Resources, :ParamValidator, :EventEmitter, :Rest, :Structure, :AsyncClientStubs, :CredentialProviderChain, :TypeBuilder, :Rails, :EndpointCache, :EventStream, :EagerLoader, :EmptyStructure, :SharedCredentials, :Json, :Plugins, :ClientSideMonitoring, :ProcessCredentials, :ECSCredentials, :InstanceProfileCredentials, :Waiters, :Errors, :PageableResponse, :IniParser, :Log, :Pager, :ParamConverter, :Binary, :S3, :KMS, :Util, :Sigv4, :SES, :Deprecations, :CredentialProvider, :CORE_GEM_VERSION, :SharedConfig]

Cannot override region in config/environments/*.rb

If I don't set AWS_REGION as an env variable and pass it manually instead, like:

Aws::Rails.add_action_mailer_delivery_method(
  :ses,
  credentials: creds,
  region: 'us-east-1'
)

Whenever I try to send an email, I receive the following error: Aws::Errors::MissingRegionError. According to the documentation, this should be working.

Edit:

In fact, I cannot override the credentials either. If I have no AWS credentials in Rails.credentials, then the SES client seems to be instantiated with nil credentials, and every email attempt fails with: Aws::SES::Errors::SignatureDoesNotMatch

Support for forwarding Elastic Beanstalk SQS Daemon requests to Active Job

Hi everyone, just discovered this gem recently. So far has been a great productivity tool especially as someone new to Rails and AWS, appreciate the good work!

I am currently hosting an application via Elastic Beanstalk and using your Active Job queue adapter to send messages to SQS. To process the queue I'm looking to host the app in an EB worker environment. Those instances come with an SQS Daemon that retrieves messages and forwards them to localhost over HTTP. Instead of using the provided aws_sqs_active_job process to retrieve messages (which I believe would conflict with the daemon), I would like the ability to listen for messages from the daemon and forward them to the appropriate jobs.

The existing library Active Elastic Job provides this capability using a lightweight Rack middleware that is only added with the presence of a specific environment variable. However that gem is largely out of date (still requires version 2.x of the core SDK) and requires the use of their queuing backend as well (I'd rather use yours).

I thought the approach of using middleware to achieve this was sound, and for the time being I currently have a stripped-down version of that library running successfully in my source code. However I thought this could be a nice addition to the Rails SDK itself that others could benefit from. Is this an idea you would be willing to implement or accept contributions for?

Seems to not do RFC-2047 encoding

Hi, I'm using this gem to send emails via SES and am finding that it breaks when I use email addresses with non-ASCII characters, e.g. 用户@例子.广告

Aws::SES::Errors::InvalidParameterValue (Local address contains control or whitespace):

From the docs, it seems that the email address must be encoded according to RFC-2047. Is there a way to handle this case easily?

Support FIFO Queues for ActiveJob

The current ActiveJob does not support FIFO SQS Queues - MessageGroupId needs to be supported. This was intended as a follow up to the initial release.

Creating this Issue to track.

Feature Request: ActiveSupport Notifications

Hi,

In the current aws-sdk-rails and in aws-sdk-ruby there is no instrumentation support (unless I've missed it?)

Is it possible to publish an ActiveSupport::Notification for every AWS call so we can subscribe to these events?

thanks

Missing SQS message argument when queued for ActiveJob

I am struggling to integrate Aws::Rails::SqsActiveJob so that I can pull events from an AWS SQS Queue. I keep getting the error 'wrong number of arguments (given 0, expected 1)' when ActiveJob.perform is called with a queued message

Is this a serialization issue? Is the message not being serialized for ActiveJob.perform_later? Do I need a custom serializer? It appears the message argument from the queued event is not being sent to the ApplicationJob perform() function. I do restart the aws_sqs_active_job process between attempts. Here are some of the things I have tried, all with the same issue:

  • various function argument signatures such as a keyword job_data:
  • inheriting from ActiveJob::Base and ApplicationJob
  • using :amazon_sqs, :amazon_sqs_async and :shoryuken for config.active_job.queue_adapter
  • using both FIFO and standard SQS queues

The gemfile contains:

ruby '2.6.6'
gem 'aws-sdk-rails'
gem 'rails', '~> 5.2'

Here is the activejob class for testing the issue:

class CartsUpdateJob < ApplicationJob
  queue_as :default

  rescue_from ActiveJob::DeserializationError do |ex|
    Rails.logger.error ex
    Rails.logger.error ex.backtrace.join("\n")
  end

  def perform(job_data)
    Rails.logger.info "data: " + job_data.inspect
    Rails.logger.info "data: " + job_data['job_class']
  end
end

I know that the message payload is arriving from the SQS queue successfully because the event's json hash value of ['job_class'] is being received by the JobRunner since JobRunner knows which ActiveJob class to use.

Starting Poller with options={:threads=>12, :max_messages=>1, :visibility_timeout=>120, :shutdown_timeout=>15, :backpressure=>10, :queues=>{:default=>"https://sqs.us-west-2.amazonaws.com/redacted/default"}, :logger=>#<ActiveSupport::Logger:0x00000000095b4920 @level=0, @progname=nil, @default_formatter=#<Logger::Formatter:0x00000000095b4718 @datetime_format=nil>, @formatter=#<ActiveSupport::Logger::SimpleFormatter:0x00000000095bfe88 @datetime_format=nil>, @logdev=#<Logger::LogDevice:0x00000000095b4420 @shift_period_suffix="%Y%m%d", @shift_size=1048576, @shift_age=0, @filename="log/aws.log", @dev=#<File:log/aws.log>, @mon_mutex=#<Thread::Mutex:0x00000000095b42b8>, @mon_mutex_owner_object_id=78488080, @mon_owner=nil, @mon_count=0>, @local_levels=#<Concurrent::Map:0x00000000095bf960 entries=0 default_proc=nil>>, :message_group_id=>"SqsActiveJobGroup", :config_file=>#<Pathname:C:/Users/KG/theapp/config/aws_sqs_active_job.yml>, :client=>#<Aws::SQS::Client>, :queue=>"default", :environment=>"development"}
Polling on: default => https://sqs.us-west-2.amazonaws.com/redactedid/default
Processing batch of 1 messages
Running job: [CartsUpdateJob]
Error processing job [CartsUpdateJob]: wrong number of arguments (given 0, expected 1)
C:/Users/KG/theapp/app/jobs/carts_update_job.rb:9:in `perform'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:39:in `block in perform_now'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:109:in `block in run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/i18n-1.8.10/lib/i18n.rb:314:in `with_locale'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/translation.rb:9:in `block (2 levels) in <module:Translation>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `instance_exec'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `block in run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/logging.rb:26:in `block (4 levels) in <module:Logging>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/notifications.rb:168:in `block in instrument'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/notifications/instrumenter.rb:23:in `instrument'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/notifications.rb:168:in `instrument'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/logging.rb:25:in `block (3 levels) in <module:Logging>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/logging.rb:46:in `block in tag_logger'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/tagged_logging.rb:71:in `block in tagged'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/tagged_logging.rb:28:in `tagged'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/tagged_logging.rb:71:in `tagged'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/logging.rb:46:in `tag_logger'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/logging.rb:22:in `block (2 levels) in <module:Logging>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `instance_exec'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `block in run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:136:in `run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:38:in `perform_now'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:24:in `block in execute'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:109:in `block in run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/railtie.rb:28:in `block (4 levels) in <class:Railtie>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/execution_wrapper.rb:87:in `wrap'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/reloader.rb:73:in `block in wrap'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/execution_wrapper.rb:87:in `wrap'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/reloader.rb:72:in `wrap'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/railtie.rb:27:in `block (3 levels) in <class:Railtie>'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `instance_exec'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:118:in `block in run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:136:in `run_callbacks'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:22:in `execute'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/aws-sdk-rails-3.6.0/lib/aws/rails/sqs_active_job/job_runner.rb:17:in `run'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/aws-sdk-rails-3.6.0/lib/aws/rails/sqs_active_job/executor.rb:30:in `block in execute'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:363:in `run_task'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:352:in `block (3 levels) in create_worker'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:335:in `loop'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:335:in `block (2 levels) in create_worker'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:334:in `catch'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:334:in `block in create_worker'

The events are placed on an SQS queue by a Ruby Lambda function that is triggered from an EventBridge rule. The test message is 20.56 KB; here are some pertinent parts of the JSON, taken from the SQS "send and receive messages" console utility:

{
  "version": "0",
  ...
  "region": "us-west-2",
  "resources": [],
  "detail": {
    "metadata": {
	...
    }
  },
  "webhook": {
    "line_items": [
	...
    ],
    "note": null,
    "updated_at": "2021-04-16T00:03:52.020Z",
    "created_at": "2021-04-10T00:05:43.173Z"
  },
  "job_class": "CartsUpdateJob"
}

I am able to simulate a successful ActiveJob handoff and processing, with my business logic added back into CartsUpdateJob, by calling perform_now from the Rails console with the same test event:

# Fetch one message from the queue and run the job inline with its parsed body
client = Aws::SQS::Client.new
queue_url = client.get_queue_url(queue_name: "theapp-dev-aws-queue")
resp = client.receive_message(queue_url: queue_url.queue_url)
job_data = JSON.parse(resp.messages[0].body)
CartsUpdateJob.perform_now(job_data)

but when I run the same code in the console with .perform_later instead of .perform_now:

Enqueued CartsUpdateJob (Job ID: e84635b8-75bd-462b-bd83-4606fb9cfa54)
to AmazonSqs(default) with arguments: ...

I can see the same argument error in the worker process output:

8:18:51 PM aws.1 |  Running job: e84635b8-75bd-462b-bd83-4606fb9cfa54[CartsUpdateJob]
8:18:51 PM aws.1 |  Running job: [CartsUpdateJob]
8:18:51 PM aws.1 |  Error processing job [CartsUpdateJob]: wrong number of arguments (given 0, expected 1)
8:18:51 PM aws.1 | C:/Users/KG/theapp/app/jobs/carts_update_job.rb:5:in `perform'
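
For reference, the ArgumentError itself only means the worker invoked perform with no arguments while the job declares one. A minimal reproduction outside of SQS, with the job reduced to just its signature (an assumption; the real job's body is not shown here):

# The job declares one required argument, so calling it with none raises
# ArgumentError: wrong number of arguments (given 0, expected 1)
class CartsUpdateJob < ActiveJob::Base
  queue_as :default

  def perform(job_data)
    # business logic elided
  end
end

CartsUpdateJob.perform_now       # raises the ArgumentError above
CartsUpdateJob.perform_now({})   # runs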

Thank you for any insight or help you can provide.

Temporary credentials from an assumed role expire and requests start failing

I am using STS to assume a role and assign the temporary credentials to Aws.config[:credentials] in an initializer, as described in this guide. I am able to call AWS (SES) to send emails until the credentials expire; after that I receive 403 errors. Should I not be setting my credentials in an initializer? I tried moving the logic into my ActionMailer base class so it would fetch new credentials, but did not succeed.

Is there some obvious thing I am missing? I am not a Rails expert, so if there is an obvious fix please forgive my lack of experience.

My initializer file is below. I understand why this times out, but how do I retrieve new credentials for the role?

# Assume the role once at application boot. The credentials returned here
# are static and expire after duration_seconds; nothing refreshes them.
sts = Aws::STS::Client.new
role = sts.assume_role(
  role_arn: ENV['ROLE_ARN'],
  role_session_name: format('%{appname}_%{stage}',
                            appname: ENV['APP_NAME'],
                            stage: ENV['RACK_ENV']),
  duration_seconds: 900
)

Aws.config[:credentials] = Aws::Credentials.new(
  role.credentials.access_key_id,
  role.credentials.secret_access_key,
  role.credentials.session_token
)
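
For comparison, the SDK also ships a refreshing provider, Aws::AssumeRoleCredentials, which re-calls assume_role as the temporary credentials near expiry, so the initializer would not need to be re-run. A minimal, untested sketch reusing the same environment variables as above:

# Aws::AssumeRoleCredentials wraps an STS client and refreshes the
# temporary credentials automatically as they approach expiry.
Aws.config[:credentials] = Aws::AssumeRoleCredentials.new(
  client: Aws::STS::Client.new,
  role_arn: ENV['ROLE_ARN'],
  role_session_name: format('%{appname}_%{stage}',
                            appname: ENV['APP_NAME'],
                            stage: ENV['RACK_ENV']),
  duration_seconds: 900
)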

Gem dependencies are not listed correctly

The README states that this gem includes aws-sdk-sesv2, but in order to use any functionality beyond the default mailer (like ListManagement), you'd need to include gem "aws-sdk-sesv2" in the Gemfile, as shown below.
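
For example (no version constraint is implied here; any declaration that makes Bundler load the gem directly works):

# Gemfile
gem 'aws-sdk-sesv2' # declared directly so SESv2 APIs such as ListManagement are available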

Table name prefix in `Aws::Record`

Hey everyone! This gem is great so I'm using it.

However, in my setup, a prefix is added to the DynamoDB table name for each environment (a sketch of what this looks like per model follows below).
Are you planning to add the ability to prefix the table name? (Please let me know if you have already added it.)
Would you accept pull requests?
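
As an illustration of what I mean, aws-record models can already set an explicit table name, so today the per-environment prefix has to be repeated on every model (Cart is just a placeholder model name):

# Each model sets its own table name, so the environment prefix is hand-rolled.
class Cart
  include Aws::Record
  set_table_name "#{Rails.env}_carts" # e.g. "production_carts"

  string_attr :id, hash_key: true
end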

Thanks!

Support for `ActiveJob.perform_all_later` (Rails 7.1)

Hi,

Thanks for the well-written gem. We have been using it mainly for ActiveJob, and it has been pleasantly performant and unsurprising with our production workload 😌

I stumbled upon ActiveJob.perform_all_later, which was added in Rails 7.1.

Are there any plans to support it in aws-sdk-rails?

Specifically, to support creating many SQS messages in a single SendMessageBatch call for increased throughput; a rough sketch follows the PR link below.

The Rails PR for reference: rails/rails#46603
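
To make the ask concrete, here is the shape I have in mind. It is only an illustration, not the gem's implementation; MyJob, the queue name, and the payloads are placeholders:

# Caller side (Rails 7.1+): enqueue several jobs in one call.
jobs = [MyJob.new(1), MyJob.new(2)]
ActiveJob.perform_all_later(*jobs)

# Adapter side (sketch only): one SendMessageBatch request instead of one
# SendMessage request per job. SQS caps a batch at 10 entries, so larger
# sets would need chunking.
client = Aws::SQS::Client.new
queue_url = client.get_queue_url(queue_name: 'my-queue').queue_url
client.send_message_batch(
  queue_url: queue_url,
  entries: jobs.each_with_index.map do |job, i|
    { id: i.to_s, message_body: job.serialize.to_json }
  end
)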

SQS ActiveJobs are retried if uncaught exceptions are raised or retry attempts are exceeded

https://guides.rubyonrails.org/active_job_basics.html#retrying-or-discarding-failed-jobs notes that failed jobs are not retried unless configured otherwise; however, SQS-backed ActiveJobs that raise exceptions that are not explicitly discarded or retried continue to be run. This occurs with both the amazon_sqs and amazon_sqs_async adapters. For example, a simple job such as:

class SampleJob < ActiveJob::Base
  queue_as :default

  def perform
    raise Exception, "testing"
  end
end

never gets deleted from the queue, and will be fetched and run again after the message's visibility_timeout expires. I am not sure whether this is the desired behavior, but it caught me off guard because it differs from the note above in the Rails ActiveJob docs. A similar test with the Resque gem (the only other queueing backend I am familiar with and actively using) removed the job after a single failed run.

If this is the intended behavior of the SQS ActiveJob backend, a note in the README that it differs from the behavior described in the Rails guides would be helpful.

Versions:
Rails 7.1.3
Ruby 3.2.2
aws-sdk-rails 3.10.0
SQS standard queue (not FIFO)
OSX Ventura 13.6.4

edit:
It looks like even if retry_on Exception, wait: 40.seconds, attempts: 2 is added to the job (shown in the sketch below), the message is not removed after 2 attempts as expected. Are retries with a limited number of attempts not supported with the SQS backend?
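
For clarity, this is the retry configuration from the edit applied to the sample job above; nothing here is new beyond the retry_on line:

class SampleJob < ActiveJob::Base
  queue_as :default

  # Expected: retries stop after 2 attempts; observed: the SQS message stays
  # on the queue and keeps being re-run after each visibility timeout.
  retry_on Exception, wait: 40.seconds, attempts: 2

  def perform
    raise Exception, "testing"
  end
end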
