
toolbox's Introduction

Toolbox

Toolset used in Semaphore 2.0 jobs.

Installation

# Install binaries
bash ~/.toolbox/install-toolbox

# Source functions into current session
source ~/.toolbox/toolbox

# Add toolbox to your bash_profile to activate it in every SSH session
echo 'source ~/.toolbox/toolbox' >> ~/.bash_profile

toolbox's People

Contributors

addersuk, bobvanderlinden, bogyo210, bogyo2102000, commanderk5, d-stefanovic, damjanbecirovic, darkofabijan, hamir-suspect, iret, lucaspin, mactsouk, markoa, mattrym, mimimalizam, miselin, radwo, semaphore-uncut, shiroyasha, skipi, thomas-nedap, veljkomaksimovic


toolbox's Issues

Cache: parallel tar and upload/unpack and download

Cache operations can take a long time; for our builds they in fact take up the majority of the time. Right now the process works roughly as follows:

  • On restore
    • A compressed archive is downloaded to a location
    • After that is complete, the tarball is unpacked into position
  • On store
    • A compressed archive is created and written to a temporary location
    • That tempfile is then uploaded to the cache

For example, this is the code that actually does the restore:

func downloadAndUnpackKey(storage storage.Storage, metricsManager metrics.MetricsManager, key string) {
	downloadStart := time.Now()
	fmt.Printf("Downloading key '%s'...\n", key)

	compressed, err := storage.Restore(key)
	utils.Check(err)

	downloadDuration := time.Since(downloadStart)
	info, _ := os.Stat(compressed.Name())
	fmt.Printf("Download complete. Duration: %v. Size: %v bytes.\n", downloadDuration.String(), files.HumanReadableSize(info.Size()))
	publishMetrics(metricsManager, info, downloadDuration)

	unpackStart := time.Now()
	fmt.Printf("Unpacking '%s'...\n", compressed.Name())

	restorationPath, err := files.Unpack(metricsManager, compressed.Name())
	utils.Check(err)

	unpackDuration := time.Since(unpackStart)
	fmt.Printf("Unpack complete. Duration: %v.\n", unpackDuration)
	fmt.Printf("Restored: %s.\n", restorationPath)

	err = os.Remove(compressed.Name())
	if err != nil {
		fmt.Printf("Error removing %s: %v", compressed.Name(), err)
	}
}

The archives are tar files, which support streaming (de)compression. Thus, it should be possible to interleave the download and unpacking (as well as the packing and upload), resulting in less overall latency.

In the bash version this would've been as simple as piping the sftp output to tar -x and vice-versa for uploads; in Go this might be slightly more tricky but in general possible and an easy win for faster builds.
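The pipe-based approach described above can be sketched as follows. This is an illustration, not the toolbox's code: a local file stands in for the sftp download, and the paths are made up.

```shell
# Streaming restore sketch: the "download" (here a cat of a local archive,
# standing in for the sftp transfer) is piped straight into tar, so
# unpacking overlaps the transfer instead of waiting for it to finish.
workdir=$(mktemp -d)
mkdir -p "$workdir/src" "$workdir/restore"
echo "cached content" > "$workdir/src/file.txt"
tar -czf "$workdir/key.tgz" -C "$workdir" src

# No intermediate tempfile between download and unpack:
cat "$workdir/key.tgz" | tar -xzf - -C "$workdir/restore"
cat "$workdir/restore/src/file.txt"   # prints "cached content"
```

In Go, the same interleaving is achievable by connecting the download's reader directly to a streaming gzip/tar reader instead of writing the archive to disk first.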

Cache command does not handle connection issues

(Screenshot: cache list failure output, 2019-02-15)

Two outputs are visible in the screenshot:

  • The standard cache list output that fails in a weird way
  • The output from the command that is executed internally

As you can see, the underlying command is providing an error message that is not handled in the cache list command.

Would a check of the exit status after that command help with this issue?
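It likely would. A minimal sketch of such a check follows; `false` is only a stand-in for the real sftp invocation, and the function name is made up for illustration:

```shell
# Check the underlying command's exit status instead of relying on its
# (possibly empty or error-laden) output. `false` stands in for the
# actual sftp call the cache CLI runs internally.
list_cache() {
  local out
  if ! out=$(false 2>&1); then
    echo "cache list failed: could not reach the cache server" >&2
    return 1
  fi
  printf '%s\n' "$out"
}

if ! list_cache 2>/dev/null; then
  echo "handled connection failure"
fi
```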

Cache: Filter temporary uploads

When a cache store command is canceled, or the cache archive is not uploaded successfully for any other reason, temporary upload files remain in the cache and are visible with cache list. This might confuse users, so it would be good to filter these entries out of the cache list output.

Note: Temporary upload files have a limited lifetime and are automatically removed after 2 hours.
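The filtering itself would be trivial once the temporary entries are identifiable. A sketch, with the caveat that the actual naming scheme for temporary uploads is not documented here; the `.tmp` suffix below is purely illustrative:

```shell
# Drop entries whose key carries a (hypothetical) temporary-upload marker
# before printing the cache list.
printf '%s\n' 'bundle-abc' 'bundle-def.tmp' 'bundle-ghi' | grep -v '\.tmp$'
```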

`sem-version`: `change-python-version`: command not found

Trying to use sem-version python 2.5 generates the following error:

sem-version python 2.5
--
  | [16:05 26/09/2018]: Changing 'python' to version 2.5
  | change-python-version: command not found
  | exit code: 220 duration: 0s
  | Job Finished

Libcheckout: Add support to optionally merge on PRs

In the case of PRs, checkout just checks out the PR. It would be nice if there were an option to also merge the PR into the base branch, so that issues caused by changes merged into the base branch after the PR branch was created get surfaced. This entails that PRs with conflicts won't build at all.
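The requested behavior can be demonstrated on a throwaway repository (branch and file names are illustrative, and this is not the toolbox's code):

```shell
# Build a tiny repo where the base branch moved ahead after the PR branch
# was created, then merge the base branch into the PR branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email ci@example.com
git config user.name ci
echo a > file.txt && git add file.txt && git commit -qm 'base'
git checkout -qb pr-branch
echo b > pr-file.txt && git add pr-file.txt && git commit -qm 'pr change'
git checkout -q main
echo c > base-file.txt && git add base-file.txt && git commit -qm 'base moved on'
git checkout -q pr-branch

# The optional merge step checkout could perform: fail fast on conflicts.
git merge --no-edit -q main || { echo "PR has conflicts" >&2; exit 1; }
ls base-file.txt   # base branch changes are now visible on the PR branch
```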

checkout of PR does not retry and gives unhelpful error if git clone fails

Problem

When running checkout on a PR-triggered workflow, I get this unhelpful error:

bash: cd: myproj: No such file or directory
Revision: 53efa4c0626c054a9c305dfe40dc929a95f1142f not found .... Exiting

After doing some digging, it seems that the underlying git clone is failing, due to some intermittent network failures with GitHub. If I repeatedly run git clone directly, I sometimes see:

Cloning into 'myproj'...
kex_exchange_identification: read: Connection reset by peer
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

In other words, the root cause is apparently GitHub in this case (it is dropping the connection). However, Semaphore's checkout is hiding this error, because stderr for the git clone command is sent to /dev/null, like this:

git clone --depth $SEMAPHORE_GIT_DEPTH $SEMAPHORE_GIT_URL $SEMAPHORE_GIT_DIR 2>/dev/null

Instead, checkout assumes that the clone succeeded (it doesn't check the exit status), and tries to change into the cloned directory, which doesn't exist. So the error the user sees in this case is:

bash: cd: myproj: No such file or directory

Desired behavior

  1. checkout should abort when a git clone operation fails, and show a helpful error message; ideally the stderr output of the underlying failed git command.
  2. In addition, ideally checkout would provide a retry mechanism to gracefully recover from intermittent connection failures. Or the documentation should explain how retry and checkout can be used together (a naive retry checkout doesn't work).
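Both points can be sketched in shell. Variable names mirror the checkout environment, but this is an illustration of the desired behavior, not the toolbox's actual code:

```shell
# Keep git's stderr visible, check the exit status, and retry a few
# times before giving up, instead of `git clone ... 2>/dev/null`
# followed by an unchecked `cd`.
safe_clone() {
  local url=$1 dir=$2 attempt
  for attempt in 1 2 3; do
    if git clone --depth "${SEMAPHORE_GIT_DEPTH:-50}" "$url" "$dir"; then
      return 0
    fi
    echo "Clone attempt $attempt of 3 failed for $url" >&2
    sleep 1
  done
  return 1
}

# Usage sketch: safe_clone "$SEMAPHORE_GIT_URL" "$SEMAPHORE_GIT_DIR" || exit 1
```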

Support asdf .tool-versions

We started using asdf as a runtime version manager. It's a project with 18k GitHub stars. Would you mind adding asdf to your toolbox?

I see the following advantages for your users:

  • Less context knowledge about the CI runtime is needed. It's enough to know that asdf is available.
  • Caching is easier to manage because runtimes install their environment into the same folder (~/.asdf/installs).

`sem-version node 18.16.1` fails but prints "success" with exit status 0 anyway

Switching node versions to 18.16.1 doesn't work:

semaphore@semaphore-vm:~$ sem-version node 18.16.1

[16:03 22/06/2023]: Changing 'node' to version 18.16.1
Version '18.16.1' not found - try `nvm ls-remote` to browse available versions.
N/A: version "v18.16.1" is not yet installed.

You need to run `nvm install 18.16.1` to install and use it.

changed 59 packages in 619ms

4 packages are looking for funding
  run `npm fund` for details

[16:03 22/06/2023]: Switch successful.

Confirming it didn't work (this should print v18.16.1):

semaphore@semaphore-vm:~$ node -v
v18.15.0

The sem-version command exits with status 0, so the build proceeds even though the version switch didn't work.
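Until sem-version propagates the failure, a defensive check after switching can catch this in the job itself. The helper below is hypothetical, not part of the toolbox:

```shell
# Fail the job if the interpreter that is actually active does not match
# the version we asked for.
assert_version() {
  local want=$1 got=$2
  if [ "$got" != "v$want" ]; then
    echo "Version switch failed: wanted v$want, got $got" >&2
    return 1
  fi
}

# Usage after switching: assert_version 18.16.1 "$(node -v)"
```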

Toolbox installation in a container with non-root user

I'm using a custom Docker image for my Semaphore CI workflow. For that image I intentionally create a non-root user to avoid executing the workflow as root. The problem, however, is that at the beginning of the workflow the toolbox installation runs, which executes commands with sudo:

`sudo $@`

This is not possible in my context. My options now are to allow my user to run sudo without a password, or to use root after all, both of which I wanted to avoid in the first place.

What would my options now be? Is it possible to skip the installation of the toolbox at runtime and do that once when I build the image?

LICENSE file

I would like to see a LICENSE file in the repo

Cache is slow if tar file is stored

In our test case, one of the steps involves caching a docker image into a file and restoring it later.

The problem is that if we cache a tar file, the script compresses it again, a time-consuming operation that doesn't really bring much value.

Default cache command, provided by semaphore:

semaphore@semaphore-vm:~/app_cache_dir$ time tar czPf /tmp/app_image_cache-cac3c3500fc4aa1a4f8dece0edb86e57ad36510e.tar app-image.tar 
real	0m34.813s
user	0m33.855s
sys	0m1.230s

without compression:

semaphore@semaphore-vm:~/app_cache_dir$ time tar cPf /tmp/app_image_cache-cac3c3500fc4aa1a4f8dece0edb86e57ad36510e.2.tar app-image.tar 
real	0m0.708s
user	0m0.049s
sys	0m0.655s

A possible solution would be a flag on cache, e.g. cache store --compress=false my_file_or_directory, which would add or drop the z option from the tar command.
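The flag would only need to toggle tar's z option. A sketch of the idea (the --compress flag and the COMPRESS variable are proposed here, not existing cache options):

```shell
# Choose tar flags based on a hypothetical compression setting; "z" is
# what makes the store step slow for already-packed inputs.
compress=${COMPRESS:-true}
if [ "$compress" = "true" ]; then
  tar_flags="czPf"
else
  tar_flags="cPf"
fi
echo "tar -${tar_flags} /tmp/archive.tar my_file_or_directory"
```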

Calling `retry checkout` causes `checkout::refbased: command not found`

As mentioned in the title, when I call retry checkout in my task, the log shows the following errors. How can I solve this issue?

environment: line 11: checkout::refbased: command not found
environment: line 17: checkout::metric: command not found
[1/3] Execution Failed with exit status 127. Retrying.
environment: line 11: checkout::refbased: command not found
environment: line 17: checkout::metric: command not found
[2/3] Execution Failed with exit status 127. Retrying.
environment: line 11: checkout::refbased: command not found
environment: line 17: checkout::metric: command not found
[3/3] Execution Failed with exit status 127. No more retries.

My prologue:

commands:
  - sem-version go 1.15
  - mkdir -vp "${SEMAPHORE_GIT_DIR}" "$(go env GOPATH)/bin"
  - retry checkout
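One possible workaround, assuming the root cause is that retry runs the command in a child shell where checkout's internal helper functions (checkout::refbased, checkout::metric) are not defined, is to retry in the current shell with an inline helper:

```shell
# Retry checkout with a helper that runs in the current shell, where the
# sourced toolbox functions remain visible. The helper name is made up.
retry_checkout() {
  local attempt
  for attempt in 1 2 3; do
    checkout && return 0
    echo "checkout attempt $attempt failed" >&2
    sleep 5
  done
  return 1
}
```

In the prologue, call retry_checkout in place of retry checkout.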
