flexera-public / right_aws
RightScale Amazon Web Services Ruby Gems
License: MIT License
Hi, I'd been looking for a way to discover the names of folders inside buckets when I eventually came across the :common_prefixes attribute and the following snippet of code in RightAWS::S3:
def keys_and_service(options={}, head=false)
  # ... some code ....
  thislist.each_key do |key|
    service_data[key] = thislist[key] unless (key == :contents || key == :common_prefixes)
  end
  [list, service_data]
end
Why on earth is the :common_prefixes being removed when it is so useful and has no additional cost in passing it through?
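To illustrate what the attribute contains, here is a local sketch (plain Ruby, no AWS calls; the helper name is made up) of what S3 reports as :common_prefixes for a delimiter listing:

```ruby
# Compute the distinct leading "folder" segments of a set of keys,
# which is exactly what a delimiter listing returns as :common_prefixes.
def common_prefixes(keys, delimiter = '/')
  d = Regexp.escape(delimiter)
  keys.map { |k| k[/\A[^#{d}]*#{d}/] }.compact.uniq
end

common_prefixes(%w[photos/a.jpg photos/b.jpg docs/x.txt readme])
# => ["photos/", "docs/"]
```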
Cheers, sam
The branch referenced below fixes deprecation warnings stemming from use of an older SaxParser API:
http://github.com/devver/right_aws/tree/libxml-update
It would be great if support for these could be added to right_aws. CloudFormation's JSON format is very simple from an API point of view. From RightScale's point of view it is probably not in their interest to support Amazon's new PaaS features, but as this is an open source product they are going to be added sooner or later. CloudFormation has tremendous potential as a common API that could be implemented across clouds: right_aws CloudFormation plus Chef could become a great cloud-admin approach, since both are Ruby- and JSON-based.
It seems the Python SQS library does base64 encoding to avoid this problem.
http://mark.koli.ch/2010/10/json-xml-and-ampersands-with-amazons-sqs-simple-queuing-service.html
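A minimal Ruby sketch of the same workaround (assumption: producer and consumer both agree to encode):

```ruby
require 'base64'

# Base64-wrap the message body so ampersands, XML-invalid characters, and
# binary bytes survive the SQS XML transport untouched.
body    = '{"msg": "a & b < c"}'
encoded = Base64.encode64(body)     # safe to embed in the XML payload
decoded = Base64.decode64(encoded)  # the consumer reverses it
decoded == body                     # round-trips exactly
```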
Animoto submitted the following patch. We may want to add something similar (the patch below is NOT appropriate for cut & paste into our gems; rewrite it!)

$GLOBAL_HTTP_COUNT = {}

module Net
  class HTTP
    def request_with_logging(req, body = nil, &block)
      res = request_without_logging(req, body, &block)
      # Identify the peer we actually talked to (fall back to the address)
      (domain_type, peer_port, peer_name, peer_ip) = @socket.io.peeraddr rescue []
      peer = self.address
      peer = "#{peer} #{peer_ip}:#{peer_port}" if peer_ip
      Thread.exclusive do
        # ||= so counts accumulate instead of being reset on every request
        $GLOBAL_HTTP_COUNT[peer] ||= Hash.new(0)
        $GLOBAL_HTTP_COUNT[peer][res.code] += 1
      end
      res
    end
    alias_method :request_without_logging, :request
    alias_method :request, :request_with_logging
  end
end
...
[2009-04-28 19:56:06] [Ec2ImageReplicationWorker] : Request was: /Ubuntu8.04_i386_V4_3_5.manifest.xml
[2009-04-28 19:56:06] [Ec2ImageReplicationWorker] : Response was: 307 -- Temporary Redirect --
[2009-04-28 19:56:06] [Ec2ImageReplicationWorker] : Error (4): TypeError:wrong argument type nil (expected String). Next attempt in 3 seconds...
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : ##### RightAws::S3Interface redirect requested: 307 Temporary Redirect #####
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : ##### New location: https://kd-eu.s3-external-3.amazonaws.com/?prefix=Ubuntu8.04_i386_V4_3_5.manifest.xml #####
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : ##### Retry #1 is being performed due to a redirect. ####
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : Closing HTTPS connection to kd-eu.s3.amazonaws.com:443
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : Opening new HTTPS connection to kd-eu.s3-external-3.amazonaws.com:443
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : Closing HTTPS connection to kd-eu.s3-external-3.amazonaws.com:443
[2009-04-28 19:56:09] [Ec2ImageReplicationWorker] : Opening new HTTPS connection to kd-eu.s3.amazonaws.com:443
[2009-04-28 19:56:10] [Ec2ImageReplicationWorker] : TypeError: wrong argument type nil (expected String)
/home/rails/right_gems/current/right_aws/lib/awsbase/right_awsbase.rb:791:in `string='
/home/rails/right_gems/current/right_aws/lib/awsbase/right_awsbase.rb:791:in `parse'
/home/rails/right_gems/current/right_aws/lib/awsbase/right_awsbase.rb:612:in `check'
/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'
/home/rails/right_gems/current/right_gogrid/lib/benchmark_fix.rb:30:in `add!'
/home/rails/right_gems/current/right_aws/lib/awsbase/right_awsbase.rb:610:in `check'
/home/rails/right_gems/current/right_aws/lib/awsbase/right_awsbase.rb:404:in `request_info_impl'
/home/rails/right_gems/current/right_aws/lib/s3/right_s3_interface.rb:180:in `request_info'
/home/rails/right_gems/current/right_aws/lib/s3/right_s3_interface.rb:613:in `head'
/home/rails/right_gems/current/right_aws/lib/s3/right_s3.rb:587:in `head'
/home/rails/right_gems/current/right_aws/lib/s3/right_s3.rb:571:in `refresh'
./lib/workers/ec2_image_replication_worker.rb:100:in `do_work'
./lib/workers/ec2_image_replication_worker.rb:160:in `retriable_s3_call'
./lib/workers/ec2_image_replication_worker.rb:99:in `do_work'
/home/rails/rightscale/releases/20090422043040/lib/awsd_logger.rb:173:in `each_with_index'
./lib/workers/ec2_image_replication_worker.rb:73:in `each'
./lib/workers/ec2_image_replication_worker.rb:73:in `each_with_index'
./lib/workers/ec2_image_replication_worker.rb:73:in `do_work'
./script/daemon/../../lib/daemon/workers_daemon.rb:200:in `do_work'
./script/daemon/../../lib/daemon/workers_daemon.rb:168:in `loop'
./script/daemon/../../lib/daemon/workers_daemon.rb:168:in `do_work'
./script/daemon/../../lib/daemon/workers_daemon.rb:57:in `execute'
./script/daemon/../../lib/daemon/daemon.rb:410:in `execute_worker'
./script/daemon/../../lib/daemon/daemon.rb:346:in `start_daemon'
./script/daemon/../../lib/daemon/daemon.rb:344:in `fork'
./script/daemon/../../lib/daemon/daemon.rb:344:in `start_daemon'
./script/daemon/../../lib/daemon/daemon.rb:343:in `times'
./script/daemon/../../lib/daemon/daemon.rb:343:in `start_daemon'
./script/daemon/../../lib/daemon/daemon.rb:110:in `daemonize'
./script/daemon/action.rb:61
/home/rails/rightscale/current/script/daemon/restart:27:in `require'
/home/rails/rightscale/current/script/daemon/restart:27
[2009-04-28 19:56:10] [Ec2ImageReplicationWorker] : Request was: /Ubuntu8.04_i386_V4_3_5.manifest.xml
[2009-04-28 19:56:10] [Ec2ImageReplicationWorker] : Response was: 307 -- Temporary Redirect --
[2009-04-28 19:56:10] [Ec2ImageReplicationWorker] : Exception: wrong argument type nil (expected String)
RightAws::S3Interface#copy now seems to be completely broken. We've reproduced it consistently. I'll try again tomorrow and see if this is Amazon's problem, but something certainly changed.
Here's a stack trace:
NoMethodError: private method `clone' called for "\n":String
/usr/local/jruby/lib/ruby/1.8/rexml/parsers/baseparser.rb:451:in `unnormalize'
/usr/local/jruby/lib/ruby/1.8/rexml/parsers/streamparser.rb:28:in `parse'
/usr/local/jruby/lib/ruby/1.8/rexml/document.rb:201:in `parse_stream'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/awsbase/right_awsbase.rb:901:in `parse'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/awsbase/right_awsbase.rb:445:in `request_info_impl'
/usr/local/jruby/lib/ruby/1.8/benchmark.rb:293:in `measure'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/awsbase/benchmark_fix.rb:30:in `add!'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/awsbase/right_awsbase.rb:445:in `request_info_impl'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/s3/right_s3_interface.rb:184:in `request_info'
/var/www/studio/shared/bundled_gems/jruby/1.8/gems/right_aws-2.0.0/lib/s3/right_s3_interface.rb:657:in `copy'
In lib/awsbase/support.rb, constantize and camelize are added to String in the event that ActiveSupport::CoreExtensions has not (yet) been defined. It seems that in some cases these nonetheless can override the Rails versions of the methods.
Worse, the versions in support.rb are broken (or at least antiquated): they do not handle namespaces correctly, so the router is unable to route to any namespaced controllers when these are String's inflectors.
The currently published gem (2.0.0) tries to add ActiveSupport's camelize if ActiveSupport is not there, but it assumes Rails 2's version of ActiveSupport. See http://yehudakatz.com/tags/ruby-2-0/ for more explanation.
I'd be willing to offer a patch if I can find where the 2.0.0 commit left off. It'd be nice to have a 2.0.1 release with the patch so that the gem works with the current version of Rails.
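As a sketch of the kind of patch I have in mind (hypothetical code, not the gem's actual implementation): only define the inflector when String lacks one, and handle namespaces the way ActiveSupport does ('/' becomes '::'):

```ruby
class String
  unless method_defined?(:camelize)
    # Namespace-aware camelize: 'admin/admins_helper' => 'Admin::AdminsHelper'
    def camelize
      split('/').map { |part| part.split('_').map(&:capitalize).join }.join('::')
    end
  end
end

'admin/admins_helper'.camelize  # => "Admin::AdminsHelper"
```

The `method_defined?` guard is the important part: it ensures a full ActiveSupport implementation is never shadowed by the fallback.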
require 'right_aws'
sqs = RightAws::SqsGen2.new('xxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxx')
=begin
On Windows with the right_aws-1.1.0 gem installed via: gem install right_aws
I get the following error message:
uninitialized constant RightAws::SqsGen2 (NameError)
=end
After updating right_aws gem to version 2.1.0, our service stopped working (based on Eucalyptus).
After digging for a while, I found that the prepare_instance_launch_params
method defined in ec2/right_ec2_instances.rb
sets a ClientToken parameter, which is not supported in Eucalyptus.
Here is the error:
##### RightAws::Ec2 returned an error: 400 Bad Request
Failure: 400 Bad Request
Failed to bind the following fields:
ClientToken = 1301984140-617558-1ISow-nzbUH-Jnyvb-pblkZ
A solution could be to automatically disable this parameter when the eucalyptus option is set to true.
I can prepare a patch for this.
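A sketch of the guard I have in mind (the method name, option name, and token format here are hypothetical, not the gem's actual code):

```ruby
# Only add ClientToken for real EC2; Eucalyptus rejects the parameter.
def prepare_launch_params(params, eucalyptus: false)
  params = params.dup
  params['ClientToken'] ||= "token-#{Time.now.to_i}-#{rand(10_000)}" unless eucalyptus
  params
end

prepare_launch_params({}, eucalyptus: true)  # => {} (no ClientToken sent)
```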
What do you think?
This was regressed in the 2.0 version a year ago.
The earlier 1.x version had the method
describe_snapshots(list=[],owner=nil,restorable_by=nil,tags=[])
The regression removed the owner=nil and restorable_by=nil parameters, which meant a loss of functionality: you could no longer list snapshots by owner.
Please see the reproduce code here: https://gist.github.com/904753
When I create a launch configuration using right_aws, the user data is returned base64-encoded, both in the instance (http://169.254.169.254/latest/user-data) and api as shown in the gist.
Creating a launch configuration using amazon's command line tools and also specifying user data in aws console (both giving actual user data and base64-encoded user data) all work properly.
For now I ended up trying to base64-decode the user data in my initscript as a workaround if the proper retrieved user data does not look valid.
This is on the latest gem version (2.1.0).
Looking at the built request I cannot find an obvious problem.
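The workaround in my initscript boils down to this heuristic (a sketch; it assumes the real user data is plain text, and can misfire if the plain content happens to be valid Base64 itself):

```ruby
require 'base64'

# If the retrieved user data round-trips through Base64 exactly, assume it
# was double-encoded and decode it once; otherwise leave it alone.
def maybe_decode(user_data)
  decoded = Base64.decode64(user_data)
  if Base64.encode64(decoded).delete("\n") == user_data.delete("\n")
    decoded
  else
    user_data
  end
end
```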
Newer versions of libxml define a method 'on_start_element_ns' rather than 'on_start_element', causing this error on parsing:
NoMethodError: undefined method 'on_start_element_ns' for RightAws::RightSaxParserCallback:0x10460c570
This is fixed here:
http://github.com/ericworking/right_aws/commit/352ee916cbd501c397da47f6b63a1e6c56e44750
The old callback methods on_start_element and on_end_element remain in place to support older versions.
Reposted from http://rightscaleforum.com/showthread.php?t=513:
Amazon just released support for Consistent Reads and Conditional Puts in SimpleDB: http://developer.amazonwebservices.c...xternalID=3572.
Is there any plan to add support for these features in the RightAWS library?
If I call:
acw = RightAws::AcwInterface.new(key_goes_here, secret_goes_here)
stats = acw.list_metrics
I get:
400 Bad Request

<ErrorResponse xmlns="http://monitoring.amazonaws.com/doc/2009-05-15/">
  <Error>
    <Type>Sender</Type>
    <Code>DelegationFailure</Code>
    <Message>Malformed token</Message>
  </Error>
  <RequestId>002b8b87-48bc-11e0-bc36-0542cab5ae1c</RequestId>
</ErrorResponse>
This is also the case for AWS' javascript scratchpad (their demo doesn't work).
However, if you use the command line tool "mon-list-metrics", it gives you a token, which you can then use to get access to the metrics via right_aws & the javascript scratchpad.
According to https://forums.aws.amazon.com/thread.jspa?threadID=42887&tstart=0#167316, users of the Java API were noticing the issue too.
This is clearly an AWS bug, which they still haven't fixed, but a workaround seems to be setting the user-agent string.
Unfortunately, the user-agent doesn't seem to be settable on AcwInterface. Any chance we can get this exposed? As it stands, the interface is unusable.
This bug was noted here:
http://forums.rightscale.com/showthread.php?p=1973#post1973
and still exists
I've been trying to use what I believe to be the thread safe mode of S3Interface. Earlier RDoc listed this as a :thread_safe => true option to the initializer, but that no longer appears to be supported.
I found the :connections parameter which defaults to :shared and appeared to use thread-specific connection pools, and assumed that meant S3Interface is supposed to be thread-safe.
However, I've been battling a very strange bug on JRuby in Rails thread-safe mode, and it appears to be caused by thread safety problems with S3Interface:
http://jira.codehaus.org/browse/JRUBY-5267
Switching to thread-specific S3Interfaces solved the problem. So first I'm curious if S3Interface is actually intended to be thread safe or not, and if it is, does it require some configuration I'm not doing?
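The workaround we settled on, sketched here without the gem itself (`client_for_thread` and `make_client` are made-up names; `make_client` stands in for RightAws::S3Interface.new):

```ruby
# Cache one client per thread in thread-local storage, so no connection
# state is ever shared between threads.
def client_for_thread(&make_client)
  Thread.current[:s3_client] ||= make_client.call
end

# Each worker thread lazily builds (and then reuses) its own client:
# s3 = client_for_thread { RightAws::S3Interface.new(key, secret) }
```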
2011-03-18T09:22:32-07:00 app[web.1]: /app/6d05e000-94be-4d75-b04d-730d64bbdf55/home/.bundle/gems/ruby/1.9.1/gems/right_aws-2.0.0/lib/awsbase/support.rb:44:in `constantize': "Admin/adminsHelper" is not a valid constant name! (NameError)
2011-03-18T09:22:32-07:00 app[web.1]: from /app/6d05e000-94be-4d75-b04d-730d64bbdf55/home/.bundle/gems/ruby/1.9.1/gems/actionpack-3.0.5/lib/abstract_controller/helpers.rb:149:in `block in modules_for_helpers'
2011-03-18T09:22:32-07:00 app[web.1]: from /app/6d05e000-94be-4d75-b04d-730d64bbdf55/home/.bundle/gems/ruby/1.9.1/gems/actionpack-3.0.5/lib/abstract_controller/helpers.rb:144:in `map!'
...
rds connection fails in Ruby 1.9.1
I, [2010-11-27T17:42:21.817452 #5744]  INFO -- : New RightAws::RdsInterface using shared connections mode
I, [2010-11-27T17:42:21.819452 #5744]  INFO -- : Opening new HTTPS connection to rds.amazonaws.com:443
W, [2010-11-27T17:42:24.250591 #5744]  WARN -- : Rightscale::HttpConnection : request failure count: 1, exception: #<OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed>
http://www.ruby-forum.com/topic/176626 suggests adding the following, just next to http.use_ssl:
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
Apparently it's now possible to grant access to an S3 bucket by email address (naming the email address of another S3 user). See http://docs.amazonwebservices.com/AmazonS3/latest/index.html?S3_ACLs.html. A user submitted this patch to enable that: http://github.com/francois/right_aws/commit/ff8065c92ee794bb26b8bcd1495aff9292417d8d. I'm not 100% sure from the docs that there aren't additional cases we should consider.
I can't find any reference that this API call was implemented.
Would you mind implementing it in the next release?
Thank you in advance.
It's an endless source of pain and headache that the gem returns symbols, so it'd be a handy little option. :)
The rdoc for RightAws::S3Generator::Key#get seems to be incorrect. It says:
Generate link to GET key data.
bucket.get('logs/today/1.log', 1.hour) #=> https://s3.amazonaws.com:443/my_awesome_bucket/logs%2Ftoday%2F1.log?Signature=h...M%3D&Expires=1180820032&AWSAccessKeyId=1...2
...but to do this properly one must do:
RightAws::S3Generator::Key.new(bucket_object, 'key_string').get(2.hours)
That is, unless I'm missing something?
I believe you have a 'bug' in your right_aws gem (either by design or
by accident). It is in file right_ec2.rb around line 533 in method:
launch_instances(image_id, lparams={})
The line in question is:
lparams[:user_data].strip!
The problem is that when trying to pass user data that includes binary
hex values, the strip! has the potential to corrupt the file. In my
example, I tried passing a zip file that included 4 bytes of 0x00 (in
hex) at the end, which the strip! then removed and corrupted the
resulting file when trying to unzip it inside the ec2 image that was
launched.
Like I mentioned, I don't know if this is by design or accident, but I
had to comment this line out for the gem to work in my environment.
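The corruption is easy to demonstrate in plain Ruby, because String#strip! removes trailing NUL bytes as well as whitespace:

```ruby
# Binary payload ending in four 0x00 bytes, as in the zip-file case above.
data     = "PK\x03\x04payload\x00\x00\x00\x00"
stripped = data.dup
stripped.strip!                 # silently drops the trailing NULs
stripped.bytesize < data.bytesize  # the payload is now corrupted
```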
Since March 2009 SimpleDB supports a BatchPutAttributes method to update multiple items at once. The branch referenced below implements support for BatchPutAttributes:
http://github.com/devver/right_aws/tree/batch-put-attributes
Thanks to this patch, the hard work is done:
http://github.com/lomography/right_aws/commit/34fe2512ebdf759ac3b7a671e8b8eb818a987249
Please pull and merge.
It may be a feature we want to support, as at least one customer is trying to use it:
S3interface authorized-read get_link with cloudfront
Is there any way to do this? No matter how I configure the bucket and path names, it will not give me a valid, working, authorized-read link. I've tried:
s3 = RightAws::S3Interface.new(AWS_CREDS[:access_key], AWS_CREDS[:secret_access_key], :port => 80, :protocol => 'http', :server => '<cloudfront cname>')
s3.get_link(AWS_CREDS[:bucket],<path to resource less bucket name>)
I've tried aliasing the cloudfront distribution to <bucket name>.<domain name> and leaving the bucket name off of the cloudfront cname when I declare the server in S3Interface.new. Nothing works. I've looked at the code, and am not clear why this is failing.
It's delivering large video files that have to be authorized-read, so speed is important. It works fine so long as I stay with an S3 link.
in 2.0 it was
describe_snapshots(list=[],owner=nil,restorable_by=nil)
now in 2.1 it is:
describe_snapshots(*list_and_options)
Now, I appreciate that you can use filters for owner and restorable_by, but
between versions you really need to keep the method signatures the same, or extend them rather than breaking them. I hope this is not too harsh, but it would reduce folks cloning the code. Also, releasing the Ruby gem once a year is too slow; it needs a release at least every 6 months, and every 3 months would be even better.
Neill
right_aws-1.10.0
in right_awsbase.rb:229
@logger.info "New #{self.class.name} using #{@params[:multi_thread] ? 'multi' : 'single'}-threaded mode"
Can this be @logger.debug instead? It seems more like a debugging statement to ensure you have the correct mode running.
We're getting broken pipe errors when uploading to S3.
Things to note:
Now, it could just be S3/EC2 flakiness or it could be an issue with connection management. I'm not sure and it's quite difficult to diagnose. Anyone have any tips for tracking this down?
Sometimes when I try to access an S3 bucket, I get a SignatureDoesNotMatch error. I added some logging to the S3Interface#generate_rest_request method to see what was going on. Here is the log, edited for clarity:
I, [2010-06-01T08:40:12.897096 #18502] INFO -- : Signed string: GET\n\n\nTue, 01 Jun 2010 15:40:12 GMT\n/com.talentspring.attachments/
I, [2010-06-01T08:40:12.897206 #18502] INFO -- : Signature: StRFAW/7qxD8z2e5847fzTOu88w=
I, [2010-06-01T08:40:12.929960 #18502] INFO -- : Closing HTTPS connection to s3.amazonaws.com:443
I, [2010-06-01T08:40:12.931784 #18502] INFO -- : Opening new HTTPS connection to com.talentspring.attachments.s3.amazonaws.com:443
I, [2010-06-01T08:40:12.932062 #18502] INFO -- : Signed string: GET\n\n\nTue, 01 Jun 2010 15:40:12 GMT\n/
I, [2010-06-01T08:40:12.932926 #18502] INFO -- : Signature: cpeR9+WCyjbRs+6PUyqHXZl0O1w=
I, [2010-06-01T08:40:12.934483 #18502] INFO -- : Opening new HTTPS connection to s3.amazonaws.com:443
GET /?prefix=ts%2FEvent%2F57359%2Fsnapshot HTTP/1.1
Host: s3.amazonaws.com
User-Agent:
Accept: */*
Connection: close
Authorization: AWS 1ATYZYK25V9A292840R2:StRFAW/7qxD8z2e5847fzTOu88w=
Content-Type:
Date: Tue, 01 Jun 2010 15:40:12 GMT
HTTP/1.X 403 Forbidden
X-Amz-Id-2: aNHZVshi4dgk8m9PIhin4yqRPUsLGUAXyv7fwj3iTkmNP/p6s5b5ob2kdHfmgVaO
Connection: close
Content-Type: application/xml
Server: AmazonS3
Date: Tue, 01 Jun 2010 15:40:12 GMT
X-Amz-Request-Id: 32D0CFE56BF4EA55
Transfer-Encoding: chunked
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<StringToSignBytes>47 45 54 0a 0a 0a 54 75 65 2c 20 30 31 20 4a 75 6e 20 32 30 31 30 20 31 35 3a 34 30 3a 31 32 20 47 4d 54 0a 2f</StringToSignBytes>
<RequestId>32D0CFE56BF4EA55</RequestId>
<HostId>aNHZVshi4dgk8m9PIhin4yqRPUsLGUAXyv7fwj3iTkmNP/p6s5b5ob2kdHfmgVaO</HostId>
<SignatureProvided>StRFAW/7qxD8z2e5847fzTOu88w=</SignatureProvided>
<StringToSign>GET\n\n\nTue, 01 Jun 2010 15:40:12 GMT\n/</StringToSign>
<AWSAccessKeyId>1ATYZYK25V9A292840R2</AWSAccessKeyId>
</Error>
For some reason RightAWS is signing two different requests, and then submitting one of the requests with the signature for the other one. Any idea why this is happening? Is this a threading issue?
I just accidentally deleted my entire S3 bucket. Reading the code I'd just pasted into irb, I find the command "bucket.delete key" which apparently actually went to the method RightAws::S3::Bucket.delete(force=false). Instead of a single key being deleted, my key was interpreted as a true boolean for the force parameter.
Dang.
Also, I tried to register for a RightScale.com forum account but it looks like I couldn't do that without giving you my AWS credentials. I thought that was over the top for just posting a "bug" report. The IRC channel #ruby-lang suggested I come here with the report.
Anyway, I think your API might use some work to make this kind of error less likely. It'd be nice to be able to post in the gem's forum without giving away my AWS keys too.
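For illustration only (a toy model, not a proposed patch to the gem's actual API): making force a keyword argument would turn this class of mistake into an immediate error, because a stray key can no longer be taken as the force flag:

```ruby
# Toy bucket whose delete refuses positional arguments entirely.
class SafeBucket
  def initialize(keys)
    @keys = keys
  end

  def delete(force: false)
    raise ArgumentError, 'bucket not empty (pass force: true)' if !@keys.empty? && !force
    @keys.clear
    true
  end
end

# SafeBucket.new(%w[a b]).delete('my_key')  # raises ArgumentError instead
#                                           # of force-deleting the bucket
```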
Hi.
When trying to insert some tags at the same time using "create_tags" method it fails with
"The parameter 'Value' may only be specified once."
The HTTP request seems to be malformed. It mixes keys and values:
Tag.1.Key.1=myKey1&
Tag.1.Key.2=myValue1&
Tag.1.Value.1=myKey2&
Tag.1.Value.2=myValue2&
Tag.2.Key.1=default&
Tag.2.Key.2=&
Tag.2.Value=
Code sample and full error output => https://gist.github.com/789496
It seems like a bug, but maybe I am missing something obvious here.
Any help is appreciated.
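For comparison, the EC2 CreateTags request should carry one indexed Tag.N.Key / Tag.N.Value pair per tag, rather than the mixed layout above. A sketch of a correct encoder (a standalone helper, not the gem's code):

```ruby
# Build Tag.N.Key / Tag.N.Value query parameters, one indexed pair per tag.
def tag_params(tags)
  params = {}
  tags.each_with_index do |(key, value), i|
    params["Tag.#{i + 1}.Key"]   = key.to_s
    params["Tag.#{i + 1}.Value"] = value.to_s
  end
  params
end

tag_params('myKey1' => 'myValue1', 'myKey2' => 'myValue2')
# => {"Tag.1.Key"=>"myKey1", "Tag.1.Value"=>"myValue1",
#     "Tag.2.Key"=>"myKey2", "Tag.2.Value"=>"myValue2"}
```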
On December 13th, RightAws::VERSION was bumped to 1.11.0. Are there any plans to release a gem of that version soon?
I'm building a gem that depends on right_aws and for the next version needs functionality that exists in 1.11. I'd rather depend on right_aws than on a copy.
How can I make it list all of them?
All my logs and spec output get polluted because RAILS_DEFAULT_LOGGER is still being used.
Please just change 'RAILS_DEFAULT_LOGGER' to 'Rails.logger' in right_awsbase.rb:300.
Currently right_aws doesn't correctly set the "LoadBalancerNames" parameter when creating an autoscaling group.
This causes the error:
RightAws::AwsError: MalformedInput: Top level element may not be treated as a list
My patch is pretty trivial; it adds ".members" to the LoadBalancerNames param:
http://github.com/ktheory/right_aws/commits/fix_autoscaling_load_balancer
To reproduce the bug, try to create an autoscaling group with an elastic load balancer.
# Set up credentials
access_key_id = ''
secret_access_key = ''
ssh_key_name = ''
# Create an ELB
elb = RightAws::ElbInterface.new(access_key_id, secret_access_key)
elb.create_load_balancer('test-lb', ['us-east-1c'], [{:protocol => 'HTTP', :load_balancer_port => 80, :instance_port => 80}])
# Create a launch config
# NB: ami-4234de2b = Ubuntu 10.04 64-bit ami
as = RightAws::AsInterface.new(access_key_id, secret_access_key)
as.create_launch_configuration('test-config', 'ami-4234de2b', 'm1.large', {:security_groups => ['default'], :key_name => ssh_key_name})
# Create autoscaling group using launch config and ELB
as.create_auto_scaling_group('test-asg', 'test-config', ['us-east-1c'], {:min_size => 0, :max_size => 0, :load_balancer_names => ['test-lb']})
The last line raises the error. With my patch, the autoscaling group is correctly created with the right load balancer(s).
We're happy to announce that we're moving active development on our gems here to GitHub!
So please, feel free to file any bugs or feature requests you have here.
I have a key that contains forward slashes, single quotes, apostrophes, spaces, and some valid UTF-8 characters, and CGI::escape mangles them all.
URI::escape handles this just fine.
Is there a reason to prefer CGI::escape over URI::escape?
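To show the difference concretely: CGI::escape does form-encoding, so it escapes '/' and turns spaces into '+', which is wrong for a key that should stay path-like:

```ruby
require 'cgi'

key = "dir/it's ok"
CGI.escape(key)  # => "dir%2Fit%27s+ok" -- slash escaped, space becomes '+'
# URI-style escaping would leave the path separators intact and use %20
# for spaces, which is what S3 keys need.
```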
I have an app which is constantly sending requests to SDB but I'm
getting regular SignatureDoesNotMatch errors. A bit of Googling I
found this:
http://developer.amazonwebservices.com/connect/message.jspa?messageID=77430
I have a feeling there is something not quite right about the URL
encoding that's happening in the generate_request method of
right_sdb_interface.rb
I think the Signature occasionally contains characters which are not
encoded properly, but beyond that I'm not sure.
There is a thread in your forum about the issues which hasn't had a
response: http://forums.rightscale.com/showthread.php?p=295
Have you guys got any ideas? This is the specific error I'm getting:
SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your AWS Secret Access Key and
signing method. Consult the service documentation for details.
/usr/lib/ruby/gems/1.8/gems/right_aws-1.9.0/lib/awsbase/right_awsbase.rb:280:in `request_info_impl'
/usr/lib/ruby/gems/1.8/gems/right_aws-1.9.0/lib/sdb/right_sdb_interface.rb:116:in `request_info'
/usr/lib/ruby/gems/1.8/gems/right_aws-1.9.0/lib/sdb/right_sdb_interface.rb:314:in `put_attributes'
/usr/lib/ruby/gems/1.8/gems/right_aws-1.9.0/lib/sdb/active_sdb.rb:657:in `save'
/usr/lib/ruby/gems/1.8/gems/right_aws-1.9.0/lib/sdb/active_sdb.rb:475:in `create'
the following line fails
acwint.list_metrics[{:measure_name =>'CPUUtilization', :namespace => "AWS/EC2"}]
This is the output request:
I am trying to use right_aws + right_http_connection to use one persistent connection per-process to reduce the overhead of dealing with S3.
I've got a module in lib/onehub.rb that keeps the connection and bucket objects.
https://gist.github.com/7ef90619fbd331479c6a
Then from my models I'll call something like Onehub.bucket.put to upload the file in background task, with the idea that this should be a persistent connection since these background workers are simply uploaders.
What I get quite frequently is 'hung' sockets. The socket doesn't get written to for > 15 minutes, but eventually this recovers (maybe related to a thread not getting scheduled?). The problem is that the request is signed and we've now left the 15 minute grace period that S3 will tolerate so I get the exception: RightAws::AwsError: RequestTimeTooSkewed: The difference between the request time and the current time is too large.
Here is a backtrace:
https://gist.github.com/26ddd66d2cc5de223c9c
Is there a better way to handle a per-process persistent connection? Is this some subtle threading issue where the thread that writes isn't being scheduled by the interpreter? I am not using this in a multi-threaded environment. Is this because S3 hangs up after 60 seconds but the library expects the connection to still be open?
We diagnosed the issue by instrumenting the PUT operations and dumping to a log file, but could never create a case that reliably reproduced it.
Text from Customer Ticket:
/opt/local/lib/ruby/gems/1.8/gems/right_aws-1.10.0/lib/s3/right_s3_interface.rb:248:in `put_logging': uninitialized constant RightAws::S3Interface::S3TrueParser (NameError)
from
/opt/local/lib/ruby/gems/1.8/gems/right_aws-1.10.0/lib/s3/right_s3.rb:200:in `disable_logging'
from bin.rb:5
Diving into the code led me here:
right_aws-1.10.0/lib/s3/right_s3_interface.rb:248
def put_logging(params)
  AwsUtils.mandatory_arguments([:bucket,:xmldoc], params)
  AwsUtils.allow_only([:bucket,:xmldoc, :headers], params)
  params[:headers] = {} unless params[:headers]
  req_hash = generate_rest_request('PUT', params[:headers].merge(:url=>"#{params[:bucket]}?logging", :data => params[:xmldoc]))
  request_info(req_hash, RightHttp2xxParser.new)
rescue
  on_exception
end
Hi,
Can you please add support for Default root object?
http://aws.amazon.com/about-aws/whats-new/2010/08/05/cloudfront-adds-default-root-object-capability/
Thanks!
With version 2.0.0 I get 'hostname was not match with the server certificate' errors. I have a US bucket, and there are periods in the bucket name. Could that be the problem?