
sshkit's Introduction


SSHKit is a toolkit for running commands in a structured way on one or more servers.


Example

  • Connect to 2 servers
  • Execute commands as deploy user with RAILS_ENV=production
  • Execute commands in serial (default is :parallel)
require 'sshkit'
require 'sshkit/dsl'
include SSHKit::DSL

on ["1.example.com", "2.example.com"], in: :sequence do |host|
  puts "Now executing on #{host}"
  within "/opt/sites/example.com" do
    as :deploy  do
      with RAILS_ENV: 'production' do
        execute :rake, "assets:precompile"
        execute :rails, "runner", "S3::Sync.notify"
      end
    end
  end
end

Many other examples are in EXAMPLES.md.

Basic usage

The on() method is used to specify the backends on which you'd like to run the commands. You can pass one or more hosts as parameters to run commands via SSH, or pass :local to run commands locally. By default SSHKit runs the commands on all hosts in parallel.
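For example, a minimal sketch contrasting the two (hostnames are placeholders):

# Run over SSH on two remote hosts (in parallel by default):
on ["1.example.com", "2.example.com"] do
  execute :uptime
end

# Run the same block on the local machine instead:
on :local do
  execute :uptime
end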

Running commands

All backends support the execute(*args), test(*args) & capture(*args) methods for executing a command. You can call any of these methods in the context of an on() block.

Note: In SSHKit, the first parameter of the execute / test / capture methods has a special significance. If the first parameter isn't a Symbol, SSHKit assumes that you want to execute the raw command, and the as / within / with methods, SSHKit.config.umask and the command map have no effect.

Typically, you would pass a Symbol for the command name and its args as follows:

on '1.example.com' do
  if test("[ -f somefile.txt ]")
    execute(:cp, 'somefile.txt', 'somewhere_else.txt')
  end
  ls_output = capture(:ls, '-l')
end

By default the capture method strips whitespace. If you need to preserve whitespace you can pass the strip: false option: capture(:ls, '-l', strip: false)

Transferring files

All backends also support the upload! and download! methods for transferring files. For the remote backend, the file is transferred with scp by default, but sftp is also supported. See EXAMPLES.md for details.

on '1.example.com' do
  upload! 'some_local_file.txt', '/home/some_user/somewhere'
  download! '/home/some_user/some_remote_file.txt', 'somewhere_local', log_percent: 25
end

Users, working directories, environment variables and umask

When running commands, you can tell SSHKit to set up the context for those commands using the following methods:

as(user: 'un', group: 'grp') { execute('cmd') } # Executes sudo -u un -- sh -c 'sg grp cmd'
within('/somedir') { execute('cmd') }           # Executes cd /somedir && cmd
with(env_var: 'value') { execute('cmd') }       # Executes ENV_VAR=value cmd
SSHKit.config.umask = '077'                     # Executes umask 077 && cmd

The as() / within() / with() methods are nestable in any order, repeatable, and stackable.
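For example, a sketch of stacked contexts (nested within() paths are joined, so the inner block runs in /opt/sites/example.com):

on "1.example.com" do
  within "/opt/sites" do
    within "example.com" do          # effective directory: /opt/sites/example.com
      as :deploy do
        with rails_env: "production" do
          execute :rake, "db:migrate"
        end
      end
    end
  end
end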

When used inside a block in this way, as() and within() will guard the block they are given with a check.

In the case of within(), an error-raising check will be made that the directory exists; for as(), a simple call to sudo -u <user> -- sh -c <command> is wrapped in a check for success, raising an error if unsuccessful.

The directory check is implemented like this:

if test ! -d <directory>; then echo "Directory doesn't exist" 1>&2; false; fi

And the user switching test is implemented like this:

if ! sudo -u <user> whoami > /dev/null; then echo "Can't switch user" 1>&2; false; fi

By default, any command that exits with a status other than 0 raises an error (this can be changed). The body of the error message includes whatever the process wrote to stdout and stderr. The 1>&2 redirects the standard output of echo to the standard error channel, so that the diagnostic message appears as the command's stderr in the raised error rather than as its regular output.

Helpers such as runner() and rake(), which expand to execute(:rails, "runner", ...) and execute(:rake, ...), are convenience methods for Ruby- and Rails-based apps.
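For example, a brief sketch using those helpers (the task and runner expression are illustrative):

on "1.example.com" do
  within "/opt/sites/example.com" do
    rake   "db:migrate"      # equivalent to execute :rake,  "db:migrate"
    runner "User.count"      # equivalent to execute :rails, "runner", "User.count"
  end
end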

Verbosity / Silence

  • raise verbosity of a command: execute "echo DEAD", verbosity: :ERROR
  • hide a command from output: execute "echo HIDDEN", verbosity: :DEBUG

Parallel

Note the in: :sequence option passed to the on() call above; the following will do what you might expect:

on(in: :parallel) { ... }
on(in: :sequence, wait: 5) { ... }
on(in: :groups, limit: 2, wait: 5) { ... }

The default is to run in: :parallel, which has no limit. If you have 400 servers, this might be a problem, and you may get better results by running in groups or in sequence.

Groups were designed to relieve problems (such as mass Git checkouts) where you rely on a contended resource that you don't want to overwhelm by hitting it too hard.

Sequential runs were intended to be used for rolling restarts, amongst other similar use-cases.

The default runner can be set with the SSHKit.config.default_runner option. For example:

SSHKit.config.default_runner = :parallel
SSHKit.config.default_runner = :sequence
SSHKit.config.default_runner = :groups
SSHKit.config.default_runner = MyRunner # A custom runner

If more control over the default runner is needed, the SSHKit.config.default_runner_config can be set.

# Set the runner and then the config for the runner
SSHKit.config.default_runner = :sequence
SSHKit.config.default_runner_config = { wait: 5 }

# Or just set everything once
SSHKit.config.default_runner_config = { in: :sequence, wait: 5 }

Synchronisation

The on() block is the unit of synchronisation: one on() block will wait for all servers to complete before it returns.

For example:

all_servers = %w{one.example.com two.example.com three.example.com}
site_dir    = '/opt/sites/example.com'

# Let's simulate a backup task, assuming that some servers take longer
# than others to complete
on all_servers do |host|
  within site_dir do
    execute :tar, '-czf', "backup-#{host.hostname}.tar.gz", 'current'
    # Will run: "/usr/bin/env tar -czf backup-one.example.com.tar.gz current"
  end
end

# Now we can do something with those backups, safe in the knowledge that
# they will all exist (all tar commands exited with a success status, or
# we would have raised an exception if one of them had failed).
on all_servers do |host|
  within site_dir do
    backup_filename = "backup-#{host.hostname}.tar.gz"
    target_filename = "backups/#{Time.now.utc.iso8601}/#{host.hostname}.tar.gz"
    puts capture(:s3cmd, 'put', backup_filename, target_filename)
  end
end

The Command Map

It's often a problem that programmatic SSH sessions don't have the same environment variables as interactive sessions.

A problem often arises when calling out to executables expected to be on the $PATH. Under conditions without dotfiles or other environmental configuration, $PATH may not be set as expected, and thus executables are not found where expected.

To try to solve this, there is the with() helper, which takes a hash of variables and makes them available to the environment.

with path: '/usr/local/bin/rbenv/shims:$PATH' do
  execute :ruby, '--version'
end

Will execute:

( PATH=/usr/local/bin/rbenv/shims:$PATH /usr/bin/env ruby --version )

By contrast, the following won't modify the command at all:

with path: '/usr/local/bin/rbenv/shims:$PATH' do
  execute 'ruby --version'
end

This will execute the command verbatim, without mapping the environment variables or querying the command map:

ruby --version

(This behaviour is sometimes considered confusing, but it has mostly to do with shell escaping: in the case of whitespace in your command, or newlines, we have no way of reliably composing a correct shell command from the input given.)

It is often preferable to use the command map instead.

The command map is used by default when instantiating a Command object.

The command map exists on the configuration object and is, in principle, quite simple: it's a Hash structure with a default key factory block specified. For example:

puts SSHKit.config.command_map[:ruby]
# => /usr/bin/env ruby

To make it clear that the environment is being deferred to, the /usr/bin/env prefix is applied to all commands. Although this is what happens anyway when you simply attempt to execute ruby, making it explicit hopefully leads people to explore the documentation.

One can override the hash map for individual commands:

SSHKit.config.command_map[:rake] = "/usr/local/rbenv/shims/rake"
puts SSHKit.config.command_map[:rake]
# => /usr/local/rbenv/shims/rake

Another option is to add command prefixes:

SSHKit.config.command_map.prefix[:rake].push("bundle exec")
puts SSHKit.config.command_map[:rake]
# => bundle exec rake

SSHKit.config.command_map.prefix[:rake].unshift("/usr/local/rbenv/bin exec")
puts SSHKit.config.command_map[:rake]
# => /usr/local/rbenv/bin exec bundle exec rake

One can also override the command map completely. This may not be wise, but it is possible, for example:

SSHKit.config.command_map = Hash.new do |hash, command|
  hash[command] = "/usr/local/rbenv/shims/#{command}"
end

This would effectively make it impossible to call any commands which didn't provide an executable in that directory, but in some cases that might be desirable.

Note: All keys should be symbolised, as the Command object will symbolize its first argument before attempting to find it in the command map.

Interactive commands

(Added in version 1.8.0)

By default, commands against remote servers are run in a non-login, non-interactive ssh session. This is by design, to try to isolate the environment and make sure that things work as expected, regardless of any changes that might happen on the server side. This means that, although the server may be prompting for input and waiting for it, you cannot send data to the server by typing into your terminal window. Wherever possible, you should call commands in a way that doesn't require interaction (e.g. by specifying all options as command arguments).
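For example, rather than letting a command prompt for confirmation, pass the flag that answers it up front (a sketch; the command and paths are illustrative):

on "1.example.com" do
  execute :rm, "-f", "/tmp/stale.lock"   # -f avoids rm's interactive prompt
end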

However in some cases, you may want to programmatically drive interaction with a command and this can be achieved by specifying an :interaction_handler option when you execute, capture or test a command.

It is neither necessary nor desirable to enable Netssh.config.pty in order to use the interaction_handler option. Only enable Netssh.config.pty if the command you are calling won't work without a pty.
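If you do need a pty, it is enabled on the backend configuration (a one-line sketch, assuming the Netssh backend):

SSHKit::Backend::Netssh.config.pty = true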

An interaction_handler is an object which responds to on_data(command, stream_name, data, channel). The interaction_handler's on_data method will be called each time stdout or stderr data is available from the server. Data can be sent back to the server using the channel parameter. This allows scripting of command interaction by responding to stdout or stderr lines with any input required.

For example, an interaction handler that changes the password of your Linux user using the passwd command could look like this:

class PasswdInteractionHandler
  def on_data(command, stream_name, data, channel)
    puts data
    case data
    when '(current) UNIX password: '
      channel.send_data("old_pw\n")
    when 'Enter new UNIX password: ', 'Retype new UNIX password: '
      channel.send_data("new_pw\n")
    when 'passwd: password updated successfully'
      # nothing to send; the command is finishing
    else
      raise "Unexpected output on #{stream_name}: #{data}"
    end
  end
end

# ...

execute(:passwd, interaction_handler: PasswdInteractionHandler.new)

Using the SSHKit::MappingInteractionHandler

Often, you want to map directly from a short output string returned by the server (either stdout or stderr) to a corresponding input string (as in the case above). For this case, you can specify the interaction_handler option as a hash. This is used to create an SSHKit::MappingInteractionHandler, which provides similar functionality to the expect utility:

execute(:passwd, interaction_handler: {
  '(current) UNIX password: ' => "old_pw\n",
  /(Enter|Retype) new UNIX password: / => "new_pw\n"
})

Note: the hash keys are matched against the server output data using the case equality operator (===). This means that regexes and any objects which define === can be used as hash keys.

Hash keys are matched in order, which allows for default wildcard matches:

execute(:my_command, interaction_handler: {
  "some specific line\n" => "specific input\n",
  /.*/ => "default input\n"
})

You can also pass a Proc object to map the output line from the server:

execute(:passwd, interaction_handler: lambda { |server_data|
  case server_data
  when '(current) UNIX password: '
    "old_pw\n"
  when /(Enter|Retype) new UNIX password: /
    "new_pw\n"
  end
})

MappingInteractionHandlers are stateless, so you can assign one to a constant and reuse it:

ENTER_PASSWORD = SSHKit::MappingInteractionHandler.new(
  "Please Enter Password\n" => "some_password\n"
)

execute(:first_command, interaction_handler: ENTER_PASSWORD)
execute(:second_command, interaction_handler: ENTER_PASSWORD)

Exploratory logging

By default, the MappingInteractionHandler does not log, in case the server output or input contains sensitive information. However, if you pass a second log_level parameter to the constructor, the MappingInteractionHandler will log information about what output is being returned by the server, and what input is being sent in response. This can be helpful if you don't know exactly what the server is sending back (whitespace, newlines etc).

  # Start with this and run your script
  execute(:unfamiliar_command, interaction_handler: MappingInteractionHandler.new({}, :info))
  # INFO log => Unable to find interaction handler mapping for stdout:
  #             "Please type your input:\r\n" so no response was sent

  # Add missing mapping:
  execute(:unfamiliar_command, interaction_handler: MappingInteractionHandler.new(
    {"Please type your input:\r\n" => "Some input\n"},
    :info
  ))

The data parameter

The data parameter of on_data(command, stream_name, data, channel) is a string containing the latest data delivered from the backend.

When using the Netssh backend for commands where a small amount of data is returned (eg prompting for sudo passwords), on_data will normally be called once per line and data will be terminated by a newline. For commands with larger amounts of output, data is delivered as it arrives from the underlying network stack, which depends on network conditions, buffer sizes, etc. In this case, you may need to implement a more complex interaction_handler to concatenate data from multiple calls to on_data before matching the required output.
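For example, a minimal sketch of such a buffering handler (the prompt string is hypothetical, and channel.send_data assumes the Netssh backend):

class BufferingInteractionHandler
  def initialize
    @buffer = ''
  end

  def on_data(command, stream_name, data, channel)
    @buffer << data
    # Only respond once the whole (hypothetical) prompt has arrived
    if @buffer.include?('Are you sure you want to continue? [y/N]')
      channel.send_data("y\n")
      @buffer = ''
    end
  end
end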

When using the Local backend, on_data is always called once per line.

The channel parameter

When using the Netssh backend, the channel parameter of on_data(command, stream_name, data, channel) is a Net::SSH Channel. When using the Local backend, it is a Ruby IO object. If you need to support both sorts of backends with the same interaction handler, you need to call methods on the appropriate API depending on the channel type. One approach is to detect the presence of the API methods you need, e.g. channel.respond_to?(:send_data) # Net::SSH channel and channel.respond_to?(:write) # IO. See the SSHKit::MappingInteractionHandler for an example of this.
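A sketch of that duck-typing approach (the prompt and response are hypothetical):

class PortableInteractionHandler
  def on_data(_command, _stream_name, data, channel)
    return unless data.include?('Password:')
    response = "secret\n"
    if channel.respond_to?(:send_data)    # Net::SSH channel
      channel.send_data(response)
    elsif channel.respond_to?(:write)     # IO (Local backend)
      channel.write(response)
    end
  end
end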

Output Handling


By default, the output format is set to :pretty:

SSHKit.config.use_format :pretty

However, if you prefer non-colored text you can use the :simpletext formatter. If you want minimal output, there is also a :dot formatter which will simply output red or green dots based on the success or failure of commands. There is also a :blackhole formatter which does not output anything.
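For example, each is selected with the same use_format call shown above (the formatter symbols are assumed to match the names listed here):

SSHKit.config.use_format :simpletext   # plain, uncolored text
SSHKit.config.use_format :dot          # one dot per command
SSHKit.config.use_format :blackhole    # discard all output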

By default, formatters log to $stdout, but they can be constructed with any object which implements <<, for example any IO subclass, String, Logger, etc.:

# Output to a String:
output = String.new
SSHKit.config.output = SSHKit::Formatter::Pretty.new(output)
# Do something with output

# Or output to a file:
SSHKit.config.output = SSHKit::Formatter::SimpleText.new(File.open('log/deploy.log', 'wb'))

Output & Log Redaction

If necessary, redact can be used on a section of your execute arguments to hide it from both STDOUT and the capistrano.log. It supports the majority of data types.

# Example from capistrano-postgresql gem
execute(:psql, fetch(:pg_system_db), '-c', %Q{"CREATE USER \\"#{fetch(:pg_username)}\\" PASSWORD}, redact("'#{fetch(:pg_password)}'"), %Q{;"})

Once wrapped, SSHKit logging will replace the actual pg_password with a [REDACTED] value. The created database user will still receive the real value from fetch(:pg_password).

# STDOUT
00:00 postgresql:create_database_user
      01 sudo -i -u postgres psql -d postgres -c "CREATE USER \"db_admin_user\" PASSWORD [REDACTED] ;"
      01 CREATE ROLE
    ✔ 01 user@localhost 0.099s

# capistrano.log
INFO [59dbd2ba] Running /usr/bin/env sudo -i -u postgres psql -d postgres -c "CREATE USER \"db_admin_user\" PASSWORD [REDACTED] ;" as user@localhost
DEBUG [59dbd2ba] Command: ( export PATH="$HOME/.gem/ruby/2.5.0/bin:$PATH" ; /usr/bin/env sudo -i -u postgres psql -d postgres -c "CREATE USER \"db_admin_user\" PASSWORD [REDACTED] ;" )
DEBUG [529b623c] CREATE ROLE

Certain commands will require that no space exists between a string and the value you want hidden. Because SSHKit inserts a space between each argument of execute, this can be dealt with by wrapping both parts in a single redact call:

# lib/capistrano/tasks/systemd.rake
execute :sudo, :echo, redact("CONTENT_WEB_TOOLS_PASS='#{ENV['CONTENT_WEB_TOOLS_PASS']}'"), ">> /etc/systemd/system/#{fetch(:application)}_sidekiq.service.d/EnvironmentFile", '"'

Output Colors

By default, SSHKit will color the output using ANSI color escape sequences if the output you are using is associated with a terminal device (tty). This means that you should see colors if you are writing output to the terminal (the default), but you shouldn't see ANSI color escape sequences if you are writing to a file.

Colors are supported for the Pretty and Dot formatters, but for historical reasons the SimpleText formatter never shows colors.

If you want to force SSHKit to show colors, you can set the SSHKIT_COLOR environment variable:

ENV['SSHKIT_COLOR'] = 'TRUE'

Custom formatters

Want custom output formatting? Here's what you have to do:

  1. Write a new formatter class in the SSHKit::Formatter module. Your class should subclass SSHKit::Formatter::Abstract to inherit conveniences and common behavior. For a basic example, check out the Pretty formatter.
  2. Set the output format as described above. E.g. if your new formatter is called FooBar:
SSHKit.config.use_format :foobar

All formatters that extend from SSHKit::Formatter::Abstract accept an options Hash as a constructor argument. You can pass options to your formatter like this:

SSHKit.config.use_format :foobar, :my_option => "value"

You can then access these options using the options accessor within your formatter code.

For a much more full-featured formatter example that makes use of options, check out the Airbrussh repository.
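For orientation, a rough sketch of such a class, assuming the write(obj) contract used by the built-in formatters (obj being an SSHKit::Command or SSHKit::LogMessage):

module SSHKit
  module Formatter
    class FooBar < Abstract
      # Called by SSHKit with each loggable object.
      def write(obj)
        case obj
        when SSHKit::Command
          original_output << "[#{obj.host}] #{obj}\n"
        when SSHKit::LogMessage
          original_output << "#{obj}\n"
        end
      end
      alias :<< :write
    end
  end
end

SSHKit.config.use_format :foobar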

Output Verbosity

By default, calls to capture() and test() are not logged: they are used so frequently by backend tasks to check environmental settings that logging them would produce a large amount of noise. They are tagged with a verbosity option of Logger::DEBUG on their Command instances. The default output verbosity can be overridden with SSHKit.config.output_verbosity=, and defaults to Logger::INFO. Another way is to provide a hash containing {verbosity: Logger::INFO} as the last parameter of the method call.
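For example:

# Log everything, including capture()/test() calls:
SSHKit.config.output_verbosity = Logger::DEBUG

# Or raise the verbosity of a single call:
ls_output = capture :ls, '-l', verbosity: Logger::INFO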

At present, Logger::WARN, ERROR and FATAL are not used.

Deprecation warnings

Deprecation warnings are logged directly to stderr by default. This behaviour can be changed by setting the SSHKit.config.deprecation_output option:

# Disable deprecation warnings
SSHKit.config.deprecation_output = nil

# Log deprecation warnings to a file
SSHKit.config.deprecation_output = File.open('log/deprecation_warnings.log', 'wb')

Connection Pooling

SSHKit uses a simple connection pool (enabled by default) to reduce the cost of negotiating a new SSH connection for every on() block. Depending on usage and network conditions, this can add up to a significant time savings. In one test, a basic cap deploy ran 15-20 seconds faster thanks to the connection pooling added in recent versions of SSHKit.

To prevent connections from "going stale", an existing pooled connection will be replaced with a new connection if it hasn't been used for more than 30 seconds. This timeout can be changed as follows:

SSHKit::Backend::Netssh.pool.idle_timeout = 60 # seconds

If you suspect the connection pooling is causing problems, you can disable the pooling behaviour entirely by setting the idle_timeout to zero:

SSHKit::Backend::Netssh.pool.idle_timeout = 0 # disabled

Tunneling and other related SSH themes

In order to do special gymnastics with SSH (tunneling, aliasing, complex options, etc.) it is possible to use the underlying Net::SSH API. However, in many cases it is preferable to use the system SSH configuration file at ~/.ssh/config. This allows you to have personal configuration tied to your machine that does not have to be committed with the repository. If this is not suitable (for example, everyone on the team needs a proxy command, or some special aliasing), a file in the same format can be placed in the project directory at ~/yourproject/.ssh/config; this will be merged with the system settings in ~/.ssh/config, and with any configuration specified in SSHKit::Backend::Netssh.config.ssh_options.

These system-level files are the preferred way of setting up tunneling and proxies, because the system implementations of these things are faster and better than the Ruby implementations you would get if you were to configure them through Net::SSH. In cases where that is not possible (Windows?), it should be possible to make use of the Net::SSH APIs to set up tunnels and proxy commands before deferring control to Capistrano/SSHKit.
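If you do need to set options from Ruby, a sketch of the Net::SSH-level configuration mentioned above (values are placeholders):

SSHKit::Backend::Netssh.configure do |ssh|
  ssh.ssh_options = {
    user: 'deploy',
    keys: %w(/home/deploy/.ssh/id_rsa),
    forward_agent: false,
    auth_methods: %w(publickey)
  }
end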

Proxying

To connect to the target host via a jump/bastion host, use a Net::SSH::Proxy::Jump:

host = SSHKit::Host.new(
  hostname: 'target.host.com',
  ssh_options: { proxy: Net::SSH::Proxy::Jump.new("proxy.bar.com") }
)
on [host] do
  execute :echo, '1'
end

SSHKit Related Blog Posts

Embedded Capistrano with SSHKit


sshkit's Issues

`background` hangs when run in `within` blocks

Hi,

Thanks a lot for the great software!
I'm getting more and more excited as I see how flexible SSHKit (and of course Capistrano 3) is :)

I have found that background hangs when used within within blocks.
I'm running Capistrano 3.0.1 with SSHKit on Ubuntu 12.04.

With the code below, run cap test will_hang1 to reproduce the hangs. You can also run cap test wont_hang to see how to fix those.

As far as I know, the problem here is that nohup seems to hang when executed after cd.
We can probably avoid the hangs by changing background to emit shell commands like cd the_dir; ( ( nohup the_command &>/dev/null ) & ).

I suspect that the issue is arising from Net::SSH (ref). So I believe that we can do a work-around on the SSHKit side or just document it in Usage Examples.

# config/deploy/test.rb

task :will_hang1 do
  on roles(:local) do
    within '/home/vagrant' do
      background 'nc -l 12345'
    end
  end
end

# FYI, Don't do the below, too!
task :will_hang2 do
  on roles(:local) do
    execute "( ( cd /home/vagrant; nohup nc -l 12345 &>/dev/null ) & )"
  end
end

task :wont_hang do
  on roles(:local) do
    execute "cd /home/vagrant; ( ( nohup nc -l 12345 &>/dev/null ) & )"
  end
end

Regards,
Yusuke

multiline bash with if

I have the following bash block:

  execute <<-EOBLOCK
    set -e
    if [ -L #{current_path} ]; then
      service #{service_name} stop
    fi
    rm -f #{current_path}
    ln -s #{release_path} #{current_path}
    service #{service_name} start
  EOBLOCK

What sshkit/capistrano runs on the server is:

/usr/bin/env if [ -L /var/play/hello-b/release/CURRENT ]; then; service play-hello-b stop; fi; rm -f /var/play/hello-b/release/CURRENT; ln -s /var/play/hello-b/release/hello-b-1.0-SNAPSHOT /var/play/hello-b/release/CURRENT; service play-hello-b start

The problem seems to be the semicolon added after the then statement (then;), which causes an error.

I am not sure if I am doing something wrong or if this is a problem with the multiline command conversion in sshkit.

within should wrap command in parentheses

When we use

within '/my/directory' do
  background(:sleep, 10)
end

it won't return immediately since it will generate this command
cd /my/directory && /usr/bin/env nohup sleep 10 &> /dev/null &
and the && takes precedence over the background &

We can make this work by simply adding parentheses around the command in the within block, giving something like
cd /my/directory && (/usr/bin/env nohup sleep 10 &> /dev/null &)

Basic Sanity test

Is there an example we can run to validate both connection and receipt of output via commands? I'm connecting fine, and actually have full deployments for AWS instances with nginx/rails/unicorn working just fine. But whenever I change anything I have to debug from server logs.

I have no 'output' coming back from any commands.
So if for instance I enter bundle exec cap production console, I enter the console, and can execute commands, but I get no output from them. This makes it hard to debug things like setting environment vars or other little deployment helpers.

If there is a test suite or verification step I can run that would tell me I 'should' be getting output from commands, that would be very helpful.

Bundler is broken

root@Ubuntu-1204-precise-64-minimal ~ # bundle --trace
Unknown switches '--trace'
root@Ubuntu-1204-precise-64-minimal ~ # echo $?
0

That is written on stderr, and then it exits with a success status.

GIT_SSH Wrapper (Prevents strict host checking error on first host connect)

I saw this technique for the first time today and it's really smooth. It's used in Chef's deployment resource, which works quite well, and this Ruby implementation is also really neat.

This delivers two features:

  1. Specifying the key explicitly avoids the chance that the ssh subsystem will figure out the wrong key, or fail on agent forwarding, etc.
  2. Enables the user to disable strict host key checking, which is actually the point of these things in Chef, at least.

Resources:

:wait option is broken

Using 1.1.0. The wait: n option for in: :sequence/:groups is being ignored. I looked over the code and can't see on's options being used anywhere, except for :in. I'm happy to submit a patch, but I wanted to verify that I wasn't overlooking something first.

warning: already initialized constant StandardError

I'm using Capistrano 3.0.0 with jRuby 1.7.4 and I'm getting the warning below every time I run the cap command.

  $ cap install
  /gems/sshkit-0.0.34/lib/sshkit.rb:4 warning: already initialized constant StandardError
  mkdir -p config/deploy
  create config/deploy.rb
  create config/deploy/staging.rb
  create config/deploy/production.rb
  mkdir -p lib/capistrano/tasks
  Capified

Any ideas?

A case for assert_shell_equal

I'm working on the CommandContext spike in a branch right now, and notice that the formatting specifics of the shell commands are getting in the way of real work.

I'd propose that we write something like assert_shell_equal (_equivalent might be better?), with an implementation something like this:

class ShellCommand < String
  def initialize(string)
    @string = string
  end
  # Naïve implementation, probably need to build a stateful parser to
  # do a decent job of this without blowing up file paths/etc
  def to_minified_s
    @string.gsub(/\n/, '; ')    # newlines become semicolons
           .gsub(/;\s?;/, ';')  # strip double semicolons (empty commands)
           .gsub(/\s+/, ' ')    # collapse duplicate whitespace
  end
  def to_formatted_s
    # imagine this makes it beautiful and highlights for ANSI with pygments, or something
  end
end

Usage via assert_shell_equal

def assert_shell_equal(a, b)
  assert_equal ShellCommand.new(a).to_minified_s, ShellCommand.new(b).to_minified_s
end

Then one would be able to write a test such as:

def test_equal_things_are_equal
  assert_shell_equal "echo 'Hello World'\n\ntrue", "echo 'Hello World'; true"
end
end

Support project-local known hosts file.

I didn't know (until reading the docs) that SSH supports multiple known hosts files, how cool would it be if we could check in to source control (with all the integrity benefits that come with it) the list of known hosts for a given project, and refer to that list when deploying/working with sshkit?

command_map for piped commands

Given:
The SSHKit (capistrano) command in question is:
execute :curl, "-s", fetch(:composer_download_url), "|", :php

My PHP binary is in the following path and set so:
SSHKit.config.command_map[:php] = '/usr/local/bin/php54'

Problem:
The :curl symbol is correctly replaced from the command map, but the :php symbol is not.

I've tested replacement by changing the :curl value in the command_map and it succeeded, so I'm guessing :php is not replaced because it follows a pipe, or because it's the second substitution?

with() escapes paths too aggressively

Trying to augment the path with:

with path: '/usr/bin/weird:$PATH' do
  #  ....
end

becomes:

( PATH=/usr/bin/weird:\$PATH  ..... )

(note the backslashed $ in the $PATH)

`test()` in `within(dir)` should run after cd into dir

Given a test() in a within(),

within('/home/vagrant/app1/current') do
  if test '[ -e tmp/pids/server.pid ]'
     execute :kill, '-INT', 'tmp/pids/server.pid'
  end
end

Does it make sense to generate the following commands?

cd /home/vagrant/app1/current && [ -e tmp/pids/server.pid ]
cd /home/vagrant/app1/current && (/usr/bin/env kill -INT tmp/pids/server.pid)

However, it currently doesn't cd into the given directory when generating the test() command:

[ -e tmp/pids/server.pid ]
cd /home/vagrant/app1/current && (/usr/bin/env kill -INT tmp/pids/server.pid)

Nesting 'with' fails

I have some example code below from my Capistrano v3 tasks for Django. It seems that nesting 'with' calls is broken. Is this functionality supposed to be supported? The docs state "One will notice that it's quite low level, but exposes a convenient API, the as()/within()/with() are nestable in any order, repeatable, and stackable."

My current work-around is to wrap the body of with_virtualenv in a begin-rescue block that rescues the NameError and checks for the message "instance variable @_env not defined".

Note that my intention is to keep the Django and virtualenv tasks in separate files/gems, hence my not simply merging the with calls.

def with_virtualenv(&block)
  with path: "#{fetch(:virtualenv_dir)}/bin:$PATH" do
    block.call
  end
end

def django_manage(*args)
  within release_path do
    with_virtualenv do
      with app_environment: fetch(:stage) do
        execute :python, 'manage.py', args
      end
    end
  end
end

# Results in "instance variable @_env not defined"
namespace :django do
  task :migrate do
    on primary fetch(:migration_role) do
      django_manage 'syncdb', '--noinput', '--migrate'
    end
  end
end

SSHKit::DSL breaks RSpec feature specs

After adding the sshkit gem in a Rails project Gemfile and running RSpec feature specs, I get really strange errors.

group :test do
  gem 'sshkit'
end

If I comment out the following line https://github.com/leehambley/sshkit/blob/master/lib/sshkit/dsl.rb#L19 and use the non-DSL API then everything starts working again. I imagine that somehow the on method overrides some other class's or module's on method.

Here is the error when running the specs:

An error occurred in an after hook
  ActionView::Template::Error: no block given
  occurred at /Users/damselem/.rvm/gems/ruby-2.0.0-p247/gems/sshkit-0.0.34/lib/sshkit/backends/netssh.rb:42:in `instance_exec'`

Support project-local SSH options.

This can probably be done with the :config option for Net::SSH.start.

From their documentation:

:config => set to true to load the default OpenSSH config files (~/.ssh/config, /etc/ssh_config), or to false to not load them, or to a file-name (or array of file-names) to load those specific configuration files. Defaults to true.

That sounds like we should override that to always be Net::SSH::Config.default_files + [__dir__ + "/config/ssh"]. (Or something.)

On exit<>0 netssh backend deletes all command output

_execute() in the netssh backend assigns '' to stdout and stderr when the exit code is non-zero, which automatically produces the "command stderr: Nothing written" & "command stdout: Nothing written" logs. Is there a reason stdout/stderr are nixed?

This is confusing especially if the command has been dutifully printing things on stdout.

Also throwing an exception on exit<>0 by default totally destroys the usefulness of capture(). capture() should probably add raise_on_non_zero_exit: false in the arguments like test() does.

The point is that in most cases capturing the output is significant when there is an error - the successful case is the boring one. As SSHKit's API stands at the moment, it does not seem to provide an easy way to grab the output and determine that a command failed.

The naive approach would be to let capture() return a tuple: [bool,output]
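A sketch of a per-call workaround, assuming capture() accepts raise_on_non_zero_exit in the same way test() does:

output = capture(:some_command, raise_on_non_zero_exit: false)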

Net::SSH::Shell

I think we should consider using this to offer stateful shells to the on() block. Otherwise this doesn't do what a user expects

on 'example.com' do
  cd "/tmp"
  run "rm -rf *"
end

In this first example, the user will have just blown away the whole server, as the state of cd isn't preserved, the shell is non-interactive.

The problem is slightly mitigated by the following:

on 'example.com' do
  in "/tmp" do
    run "rm -rf *"
  end
end

Which will in effect run cd /tmp && rm -rf * as one command, but it might be saner to look at keeping the shell state using Net::SSH::Shell in order to allow the other use-case (although, that said, I think the on(), in(), as(), with() API is very powerful for almost every case).

Multiple ENV vars don't stack

with foo: 'bar', baz: 'boo' do
  # .....
end

The command is expanded to ( FOO=barBAZ=boo ........ ) which is obviously wrong.

Please explain command.rb:94:in `exit_status=' in Capistrano run

Sorry for posting a Capistrano issue here, but it really belongs to SSHKit, imho.
So, I've made a custom Capistrano task for running background delayed_job processes:

namespace :delayed_job do
  def rails_env
    fetch(:stage) ? "RAILS_ENV=#{fetch(:stage)}" : ''
  end

  desc "Restarts all delayed jobs"
  task :restart do
    on roles(:all) do
      SSHKit.config.command_map[:delayed_job] = "#{ rails_env } script/delayed_job"
      # It's just for running this simple command
      # cd /srv/myapp/current && RAILS_ENV=parsing script/delayed_job --queues=mailer restart -i "mailers"
      within current_path do
            execute :delayed_job, "--queues=mailer restart -i \"mailers\""
      end
    end
  end  

end

As I've mentioned, I just want to run cd /srv/myapp/current && RAILS_ENV=parsing script/delayed_job --queues=mailer restart -i "mailers" and it works as needed when run directly on the server. But not with Capistrano, and I keep getting this error:

 INFO [2ddcee9f] Running RAILS_ENV=parsing script/delayed_job --queues=mailer restart -i "mailers" on 123.123.123.123
DEBUG [2ddcee9f] Command: cd /srv/myapp/current && RAILS_ENV=parsing script/delayed_job --queues=mailer restart -i "mailers"
cap aborted!
delayed_job stdout: Nothing written
delayed_job stderr: Nothing written
.../vendor/bundle/gems/sshkit-1.0.0/lib/sshkit/command.rb:94:in `exit_status='

How can I remap this command?

Please note that I just want to migrate this Capistrano v2 construct to v3: run "cd #{current_path};#{rails_env} script/delayed_job --queues=mailer restart -i \"mailers\""

Thanks in advance.

in: :sequence not working when used with capistrano roles

Not sure if I am doing this right, or whether it is Capistrano- or SSHKit-related:

 on (roles :app), in: :sequence do |host|

does execute on all servers in parallel, while

 on %w{appserver-1, appserver-2, appserver-3}, in: :sequence do |host|

does execute sequentially.

Not sure if I am doing something wrong here...

Capistrano: run with options --dry-run

cap local deploy --dry-run --trace
** Invoke local (first_time)
** Execute local
** Invoke load:defaults (first_time)
** Execute load:defaults
** Invoke deploy (first_time)
** Execute deploy
** Invoke deploy:starting (first_time)
** Execute deploy:starting
** Invoke deploy:check (first_time)
** Execute deploy:check
** Invoke git:check (first_time)
** Invoke git:wrapper (first_time)
** Execute git:wrapper
cap aborted!
undefined method `verbosity' for "/usr/bin/env #<StringIO:0x007ff7a9827118> /tmp/git-ssh.sh":String
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/formatters/pretty.rb:10:in `write'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/backends/printer.rb:14:in `block in execute'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/backends/printer.rb:13:in `tap'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/backends/printer.rb:13:in `execute'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/capistrano-3.0.0/lib/capistrano/tasks/git.rake:11:in `block (3 levels) in <top (required)>'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/backends/printer.rb:9:in `instance_exec'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/backends/printer.rb:9:in `run'
/Users/stamm/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/sshkit-1.1.0/lib/sshkit/runners/parallel.rb:12:in `block (2 levels) in execute'
Tasks: TOP => git:check => git:wrapper

In https://github.com/leehambley/sshkit/blob/ffbce7622ae57bb960d4a75375f5afb18b9228b7/lib/sshkit/backends/printer.rb#L14

output << cmd.to_s

This line calls SSHKit::Formatter::Pretty.write() with a String object

If I temporarily comment out this line
https://github.com/leehambley/sshkit/blob/ffbce7622ae57bb960d4a75375f5afb18b9228b7/lib/sshkit/formatters/pretty.rb#L10
I will get messages:

Output formatter doesn't know how to handle String

In 5 minutes I changed the line to:

return if obj.respond_to?(:verbosity) && obj.verbosity < SSHKit.config.output_verbosity

And added one more condition:

when ::String then original_output << obj + "\n"

It looks like what I want.

cap local deploy --dry-run
/usr/bin/env #<StringIO:0x007f92cb827368> /tmp/git-ssh.sh
/usr/bin/env chmod +x /tmp/git-ssh.sh
/usr/bin/env git ls-remote [email protected]:stamm/grape.git
/usr/bin/env mkdir -pv /var/www/shared /var/www/releases
/usr/bin/env mkdir -pv /var/www/shared/log /var/www/shared/tmp/pids /var/www/shared/tmp/cache /var/www/shared/tmp/sockets
/usr/bin/env mkdir -pv /var/www/shared/config
/usr/bin/env [ -f /var/www/shared/config/thin.yml ]
/usr/bin/env [ -f /var/www/repo/HEAD ]
 INFO The repository mirror is at /var/www/repo
/usr/bin/env if test ! -d /var/www/repo; then echo "Directory does not exist '/var/www/repo'" 1>&2; false; fi
/usr/bin/env git remote update
/usr/bin/env if test ! -d /var/www/repo; then echo "Directory does not exist '/var/www/repo'" 1>&2; false; fi
/usr/bin/env mkdir -p /var/www/releases/20131028002432
/usr/bin/env git archive master | tar -x -C /var/www/releases/20131028002432
/usr/bin/env mkdir -pv /var/www/releases/20131028002432/config
/usr/bin/env [ -L /var/www/releases/20131028002432/config/thin.yml ]
/usr/bin/env mkdir -pv /var/www/releases/20131028002432 /var/www/releases/20131028002432/tmp /var/www/releases/20131028002432/tmp /var/www/releases/20131028002432/tmp
/usr/bin/env [ -L /var/www/releases/20131028002432/log ]
/usr/bin/env [ -L /var/www/releases/20131028002432/tmp/pids ]
/usr/bin/env [ -L /var/www/releases/20131028002432/tmp/cache ]
/usr/bin/env [ -L /var/www/releases/20131028002432/tmp/sockets ]
/usr/bin/env if test ! -d /var/www/releases/20131028002432; then echo "Directory does not exist '/var/www/releases/20131028002432'" 1>&2; false; fi
/usr/bin/env bundle --gemfile /var/www/releases/20131028002432/Gemfile --path /var/www/shared/bundle --deployment --quiet --binstubs /var/www/shared/bin --without development test
/usr/bin/env rm -rf /var/www/current
/usr/bin/env ln -s /var/www/releases/20131028002432 /var/www/current
/usr/bin/env if test ! -d /var/www/releases/20131028002432; then echo "Directory does not exist '/var/www/releases/20131028002432'" 1>&2; false; fi
/usr/bin/env /var/www/shared/bin/thin restart -C /var/www/releases/20131028002432/config/thin.yml
/usr/bin/env ls -x /var/www/releases
/usr/bin/env if test ! -d /var/www/releases; then echo "Directory does not exist '/var/www/releases'" 1>&2; false; fi
/usr/bin/env echo "Branch master deployed as release 20131028002432 by stamm; " >> /var/www/revisions.log

I knew that this was a bad idea.
What do you think about this problem?

Running multiple commands simultaneously on a single role/server

In the capistrano-resque gem, we were previously using threads to start multiple Resque workers at the same time:

# This code is in a loop for X number of pids
threads << Thread.new(pid) do |pid|
  on roles(role) do
    info "Starting worker for QUEUE: #{queue}" 
    within current_path do
      execute :rake, %{RAILS_ENV=#{fetch(:rails_env)} QUEUE="#{queue}" PIDFILE=#{pid} BACKGROUND=yes VERBOSE=1 INTERVAL=#{fetch(:interval)} #{"environment" if fetch(:resque_environment_task)} resque:work}
    end
  end
end

# After the loop, we wait on the threads
threads.each(&:join)

Assuming 2 workers, it starts 2 threads. It's now resulting in output like this, and capistrano hangs waiting for the commands to finish.

 INFO Starting worker(s) with QUEUE: foo
DEBUG [88e9068c] Running /usr/bin/env if test ! -d /data/www/capistrano-resque-test-app/current; then echo "Directory does not exist '/data/www/capistrano-resque-test-app/current'" 1>&2; false; fi on ec2-2.petefowler.com
 INFO Starting worker(s) with QUEUE: foo
DEBUG [88e9068c] Command: if test ! -d /data/www/capistrano-resque-test-app/current; then echo "Directory does not exist '/data/www/capistrano-resque-test-app/current'" 1>&2; false; fi
DEBUG [d78b4902] Running /usr/bin/env if test ! -d /data/www/capistrano-resque-test-app/current; then echo "Directory does not exist '/data/www/capistrano-resque-test-app/current'" 1>&2; false; fi on ec2-2.petefowler.com
DEBUG [d78b4902] Command: if test ! -d /data/www/capistrano-resque-test-app/current; then echo "Directory does not exist '/data/www/capistrano-resque-test-app/current'" 1>&2; false; fi

As far as I can tell, the within block first tests the current_path, then executes the code within. However, that directory test doesn't seem to be completing.

This worked fine with SSHKit 1.2.0, so I assume it has something to do with reusing SSH connections in 1.3.0? Is there a "correct" way to be handling simultaneous commands, or are we stuck running them sequentially for now?

Block functionality lost when using a heredoc (or string)

In the execute, if you have a string or heredoc, the surrounding blocks are ignored.
Was that done intentionally? Kind of like, if you don't use a symbol to start the command line statement, you don't get any syntactical sugar? I wasn't sure if this is really an issue that you're interested in, but I have a use case for this and it seems like others could too.

My case is like below:

on 'google.com' do
  within release_path do
    execute <<-SCRIPT
      bundle exec build.mysql2 --with-mysql-include=/blah/blah
      bundle exec build.nokogiri --with-xml-lib=/blah/blah
    SCRIPT
  end
end

Thanks for taking a look at this.

Connection Pool Implementation

I've been trying to consider how to implement the worker pools, given the following:

class InstallBundlerRubyGem
  attr_reader :all_hosts
  def initialize(hosts)
    @all_hosts = hosts
  end
  def perform
    on hosts_which_need_bundler do
      as('root') { run "gem install bundler --no-rdoc --no-ri" }
    end
  end
  private
    def hosts_which_need_bundler
      [].tap do |hosts_which_need_bundler|
        on all_hosts do |host|
          if capture("gem list bundler") !~ /bundler/  # bundler not installed
            hosts_which_need_bundler.push host
          end
        end
      end
    end
end

Usage Example:

hosts = (0..20).collect { |n| "#{n}.example.com" }
InstallBundlerRubyGem.new(hosts).perform

There's a few things going on in this that I'd like to draw attention to:

  1. Should on() always make a new connection pool for the given hosts? If it should, it should also hang them up at the end of the block; the overhead of opening all these connections shouldn't be underestimated, but it makes the implementation clearer than the alternative, which is finding some global space in which to store connection pools, attempting to find a connection pool which has the suitable hosts, and duplicating those connections into a new pool with the valid subset.
  2. I think on() should yield the current host to the block, given the nature of the implementation I'd like to achieve, I think this would be a win, and needn't complicate the implementation.
  3. In writing this contrived example, have I stumbled upon something sane for a plugin architecture? Maybe in this example the class could < Deploy::Extension, which could inherit settings such as default environment, default user, etc. (just a thought)

run(command, options = {}, &block) Command

Note: This should only be available inside the scope of on() (also yet to be written).

This command should work like this:

  1. Block until there is a free worker (dependent on the implementation of the on() feature, expect that it's implemented with a worker/connection-pool)
  2. Immediately execute the command, honoring where suitable the in(), as(), with() settings. It remains to be seen whether on() will really be a command context such as the others are, but I think that would be a reasonable assumption.
  3. run() should wait for the command to finish, I'm not excluding the idea of run('something', nohup: true), but I don't know enough about how that might work (it would be another command context, to wrap it in a no-hup script) to confidently make it a requirement.
  4. run() should yield to a block, passing the stdin, stdout and stderr to the block, for each line of input (on any I/O stream), the block should be executed (if given) - in the block anything written to stdin should be funneled back to the running process.

Method signature: run('command', options = {}, &block)

Similar commands, not implemented yet: stream() and capture().

High level API

This is a summary of what I've had in mind for this:

End user can proceed with:

    > deploy my_app production master

    > setup my_app staging develop

So lib/deploy/production.rb

    require 'deploy/rails' # standard deploy included in gem
    require 'deploy/my_data_centre' # my own gem extensions

    module Deploy
      class Production
        include Deploy::Rails
        include MyDataCentre::Roles
        include Deploy::Notifier::Twitter
      end
    end

Rails users could get a version of the file above with:

    rails g install deploy

Which would generate:

    require 'deploy/rails'
    # comments/instructions here
    module Deploy
      class Production
        include Deploy::Rails
      end
    end

Allowing either rake task or executable to call

    configuration = Deploy::Configuration.new(parse_config)
    Deploy::Production.new(configuration).deploy

I would imagine configuration to be used for over-riding default configuration for symlinks, and setting up shared folders.

            {
              :name => 'my_app',
              :shared_directories => ['system', 'config', 'bundle', 'bin'],
              :normal_symlinks => ['config/database.yml', 'bin', 'log'],
              :weird_symlinks =>  {
                'system' => 'public/system',
                'pids'   => 'tmp/pids',
                'bundle' => 'vendor/bundle',
                'cache' => 'tmp/cache',
                'sockets'    => 'tmp/sockets'
              }
            }

But when using defaults, it could be as simple as

    { :name => 'my_app' }

Allowing the code above to become:

    config = Deploy::Configuration.new(:name => 'my_app')
    Deploy::Production.new(config).deploy

Going back to the first bit of code, I imagine the Deploy::Rails module to look something like this:

    module Deploy
      # class per env
      class Rails
        include Deploy::SCM::Git
        include Deploy::DependencyMangement::Bundler
        include Deploy::AppServer::Unicorn
        include Deploy::Command::Rake

        def deploy
          update
          bundle
          symlink
          migrate
          restart
        end

        # replace 'cold'
        # prompt for yml configuration not in repo
        # or allow 'accept defaults' which can be read from a local file
        def setup
          build_directory_structure
          clone
          symlink
          bundle
          migrate
          cleanup #(fix permissions/ownership, touch assets)
          notify
        end
      end
    end

A rack app version could implement deploy with just update and restart.

You could imagine a scenario where PAAS customers could deploy with a lib/deploy/production.rb file that just contains:

    require 'deploy/engine_yard'

I imagine the roles to be something like this, allowing the module to be shared between applications

module MyDataCentre
  module Roles
    def web
      %w{10.1.1.1 10.1.1.2}
    end

    def app

    end

    def database

    end 
  end
end

All the modules would need to conform to an interface per type, so switching unicorn for mongrel, or nginx for apache, is trivial.

    module Deploy
      class SCM::Git < SCM
        def update

        end

        def clone

        end
      end
    end

    module Deploy
      class Unicorn < AppServer
        def start

        end

        def stop

        end

        def status

        end

        def restart

        end
      end
    end

Or maybe something like this

module Deploy

    class Production
      include SomeComannds
      include Dispatchable

      def deploy
        Dispatcher.new(self).start!
      end
    end

    class Dispatcher
      attr_accessor :state

      def initialize(deploy)
      end

      def start!
        commands.each { command.perform }
      end
    end

    class Command
      #as, on, in
      attr_accessor :identity, :target, :context, :command

     def perform
       #
     end
    end
  end

`#as` and `#with` don't work together

The following script triggers the error:

require 'sshkit/dsl'

SSHKit.config.output_verbosity = Logger::DEBUG

run_locally do
  as :root do
    with foo: 'bar' do
      execute :env
    end
  end
end

It doesn't print the FOO environment variable. This happens because environment variables should be set after calling /usr/bin/env, not before everything else in the command, as it is.

For example, the command generated by the script above is:

$ FOO=bar sudo su root -c "/usr/bin/env env"

It should be:

$ sudo su root -c "/usr/bin/env FOO=bar env"

For this reason, I believe the first example in README is broken:

require 'sshkit/dsl'

on %w{1.example.com 2.example.com}, in: :sequence, wait: 5 do
  within "/opt/sites/example.com" do
    as :deploy  do
      with rails_env: :production do
        rake   "assets:precompile"
        runner "S3::Sync.notify"
      end
    end
  end
end

Maybe this line in README is about this issue:

No environment handling (sshkit might not need to care)

If that is the case, I believe it would be nice to make it clear.

If this is really an issue, please let me know so I can work to fix it.

upload!() should honor within()

The upload!() (and download!()) methods should both honor within() for relative paths.

Example:

within '/tmp/' do
  upload! '/etc/hosts', 'should-be-in-tmp'
end

I expect the file should-be-in-tmp to be in /tmp/ but instead it is in the user's home directory.

Ciao!

GPL copyleft affects any software deployed with Capistrano?

Hi,

Perhaps this is a question for a lawyer rather than a GitHub issue, but I thought I would ask here anyway:

If my (proprietary) software requires Capistrano for deployment, and Capistrano (v3) in turn requires SSHKit, which is GPL software, does that mean that my software is bound by the terms of the GPL? After all, SSHKit is in my Gemfile.lock and "incorporated" into my program for the purposes of deployment.

For some perspective, popular deployment tools like Chef and Puppet are notably not GPL, and in fact Puppet switched from GPL to Apache license in 2011 specifically to address concerns from companies wanting to use Puppet but worried about licensing.

Maybe I am being paranoid here. What is your stance on this?

Thanks.

Mapped commands aren't re-mapped

Ran into a weird one today, I had something like:

SSHKit.config.command_map[:rake]   = "/usr/local/rbenv/shims/bundle exec rake"
SSHKit.config.command_map[:ruby]   = "/usr/local/rbenv/shims/ruby"

I had a problem with Rake x.x.x is already activated, bundle exec might help so I tried mapping:

SSHKit.config.command_map[:bundle] = "/usr/local/rbenv/shims/bundle"

Then I realised it would have been nice to be able to have have mapped:

SSHKit.config.command_map[:rake]   = "bundle exec rake"

and have the bundle part of that be somehow re-expanded.

In the end I worked around by using:

SSHKit.config.command_map[:bundle] = "/usr/local/rbenv/shims/bundle"
SSHKit.config.command_map[:rake]   = "/usr/local/rbenv/shims/bundle exec rake"
SSHKit.config.command_map[:ruby]   = "/usr/local/rbenv/shims/ruby"

Which is still absolutely OK, but weird that I never thought of this when designing the command map.

Re-using connections between on() calls.

Seems like a recurring problem that Capistrano is tripping warnings about opening too many connections.

Things to consider are:

  • What if the cached connection has died (re-open?)
  • Closing connection at teardown?

SSHKit::Configuration.format doesn't pick up on "BlackHole"

Gives the following error.

uninitialized constant SSHKit::Formatter::Blackhole
/sshkit/lib/sshkit/configuration.rb:47:in `const_get'
/sshkit/lib/sshkit/configuration.rb:47:in `formatter'
/sshkit/lib/sshkit/configuration.rb:29:in `format='

Fix handling of output streams, including in cases of error.

In SSHKit we do some weird stuff with stream handling depending on the packets (sic) we receive back from Net::SSH.

I'd like to write a test case script that produces buffered, and unbuffered output, line- and character-wise, on standard error, and standard out to test the correct usage.

With a pathologically badly behaved script, we could find a way to make this work reliably, I am sure.

on(hosts, options = {}, &block) Command

This is the main entry point for doing anything with deploy.rb, here's a run-down of how it should work:

Given the following

hosts = (0..20).collect { |n| "#{n}.example.com" }
on( hosts , in: :sequence, limit: 2) do
   # ... snip ...
end
  1. All addresses should be resolved ahead of time. In the case of unreachable hosts, an exception of a unique, catchable type should be raised.

  2. A new instance of connection pool should be instantiated, this component may be responsible for connecting to the hosts, and raising the exception.

  3. A new instance of something to manage the connection pool (worker pool?) should be instantiated, this should be responsible for enforcing the limit of two (limit: 2) hosts, and operating in sequence (in: :sequence). Other options might be in: :parallel (the default) and (limit: nil).

  4. In the context of this piece of code, the on(), in(), as(), and with() commands should be executed once per host. Importantly, the block cannot be somehow resolved and then passed in completion (as a string to run) to its host; the block given should be yielded in this context, so that the block may do something such as:

    on(hosts) do
      if capture('uptime').split(" ")[2] > 365 # we've been up too long, fake some downtime
        as :root { run "reboot -h now" }
      end
    end
    
  5. At the creation of the connection pool, it should be added to a list of connection pools to be closed, an at_exit handler somewhere should be responsible for hanging up the connections before the program terminates.

within(dir) not applied for text commands

I'm trying to run "foreman export" command on a server. This seems to be an intersection where capistrano 3, rvm and bundler aren't coming together for me.

I'd like to just do this in a capistrano "on" block:

on roles(:thingy) do
  execute :foreman, "export ...etc..."
end

The interactive session in the server works just fine using 'bundle exec foreman', and ruby is set up via the login shell.

Whenever I use execute with a raw string, then all of the environment variables (with) and current directories (within) are not applied. Is this normal?

What I want it to generate is this command (as an example):

ssh user@server "cd /apps/mobiledataanywhere/current && ~/.rvm/bin/rvm 2.0.0 do bundle exec foreman help export"

This works, but I cannot find how to prefix a command with bundle exec, prefix that with rvm ... do, prefix that with environment variables, and then put it all in a directory.

Is there a way???

Anyway, the point of this issue was that in a within(dir) block, commands that are mapped (such as :rake) get the directory prefixed into the command correctly, but string-only commands do not, e.g.:

This code:

          within(release_path) do
              execute :ruby, '-e', %Q["puts 'hello'"]
              set :rvm_do, -> { "#{fetch(:rvm_path)}/bin/rvm #{fetch(:rvm_ruby_version)} do" }
              execute %Q[#{fetch(:rvm_do)} ruby -e "puts 'hello'"]
          end

Generate these commands:

DEBUG [849c50cc] Command: cd /my/release/path && ~/.rvm/bin/rvm 2.0.0 do ruby -e "puts 'hello'"
DEBUG [7a309cb5] Command: ~/.rvm/bin/rvm 2.0.0 do ruby -e "puts 'hello'"

Notice the missing directory in the second command.

FloatDomainError downloading empty file

Eg.

testfile='/tmp/arandomfoobarfile'
on 'somehost' do
  execute "rm -f #{testfile}; touch #{testfile}"
  download! testfile, testfile
end

raises a FloatDomainError:

lib/sshkit/backends/netssh.rb:84:in `to_i': NaN (FloatDomainError)

Thanks!
