ktomk / pipelines

Pipelines - Run Bitbucket Pipelines Wherever They Dock

Home Page: https://ktomk.github.io/pipelines/

License: GNU Affero General Public License v3.0

Languages: PHP 88.83%, Shell 8.80%, Makefile 0.66%, CSS 0.25%, Python 1.18%, HTML 0.27%
Topics: pipelines, local-build, docker, bitbucket-pipelines, pipeline-runner

pipelines's Introduction

Pipelines

Run Bitbucket Pipelines Wherever They Dock


Command line pipeline runner written in PHP. Available from Github or Packagist.

Usage | Environment | Exit Status | Details | References

Usage

From anywhere within a project or (Git) repository with a Bitbucket Pipeline file:

$ pipelines

Runs pipeline commands from bitbucket-pipelines.yml [BBPL].
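
For orientation, a minimal, illustrative bitbucket-pipelines.yml (image and commands are placeholders, not from this project):

image: alpine:3.16

pipelines:
  default:
    - step:
        name: Build
        script:
          - echo "running the default pipeline"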

Memory and time limits are ignored. Press ctrl + c to quit.

The Bitbucket limit of 100 (previously 10) steps per pipeline is ignored.

Exit status is from the last pipeline script command; if a command fails, the following script commands and steps are not executed.

The default pipeline is run; if there is no default pipeline in the file, pipelines says so and exits with non-zero status.

To execute a different pipeline use the --pipeline <id> option where <id> is one of the ids listed by the --list option. Even more information about the pipelines is available via --show. Both --list and --show print their output and exit.
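
To illustrate (the pipeline id is hypothetical):

$ pipelines --list
$ pipelines --pipeline custom/deploy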

Use --steps <steps> to specify which step(s) to execute (also in which order).

If the next pipeline step has a manual trigger, pipelines stops the execution and prints a short note about it on standard error. Manual triggers can be ignored with the --no-manual option.

Run the pipeline as if a tag/branch or bookmark has been pushed with --trigger <ref> where <ref> is tag:<name>, branch:<name>, bookmark:<name> or pr:<branch-name>[:<destination-branch>]. If there is no tag, branch, bookmark or pull-request pipeline with that name, the name is compared against the patterns of the referenced type and if found, that pipeline is run.

Otherwise the default pipeline is run; if there is no default pipeline, no pipeline is run at all and the command exits with non-zero status.

--pipeline and --trigger can be used together: --pipeline overrides the pipeline selected by --trigger, but --trigger still influences the container environment variables.
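
For example (ref names and pipeline id are illustrative):

$ pipelines --trigger tag:v1.2.3
$ pipelines --trigger branch:feature/login --pipeline custom/deploy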

To use a different file, pass the --basename <basename> or --file <path> option, and/or set the working directory with --working-dir <path>; the file is looked up there unless an absolute path is given with --file <path>.
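
For example (paths are illustrative):

$ pipelines --file ci/bitbucket-pipelines.yml
$ pipelines --working-dir ../other-project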

By default pipelines operates on the current working tree, which is copied into the container to isolate running the pipeline from the working directory (implicit --deploy copy).

Alternatively the working directory can be mounted into the pipelines container by using --deploy mount.

Use the --keep flag to keep containers after the pipeline has finished, for further inspection. By default all containers are destroyed. During development it is sometimes interesting to keep containers on error only; the --error-keep flag is for that.

In any case, if a pipeline runs again and it finds an existing container with the same name (generated from the pipeline name etc.), the existing container is re-used. This can be very useful to iterate quickly.

Manage leftover containers with --docker-list (show all pipeline containers), --docker-kill (kill running containers) and --docker-clean (remove stopped pipeline containers). Use them in combination for a full clean-up, e.g.:

$ pipelines --docker-list --docker-kill --docker-clean

Or just use the all-in-one shortcut:

$ pipelines --docker-zap

to kill and remove all pipeline containers at once (without showing a list). "zap" is pipelines' "make clean" equivalent for --keep.

All containers run by pipelines are labeled to ease maintaining them.

Validate your bitbucket-pipelines.yml file with --show, which highlights errors found.

For schema-validation use --validate [<file>]. Schema validation might show errors that are not an issue when executing a pipeline (--show and/or --dry-run are better for that), but it validates against a schema aligned with the one Atlassian/Bitbucket provides (the schema is more lax than upstream in the cases known to offer a better practical experience). Use it e.g. for checks in your CI pipeline, or for linting files before push in a pre-commit hook or your local build.
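
For example, as a lint step in a pre-commit hook or CI check (the second path is illustrative):

$ pipelines --validate
$ pipelines --validate=ci/bitbucket-pipelines.yml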

Inspect your pipeline with --dry-run, which processes the pipeline but does not execute anything. Combine with -v (--verbose) to show verbatim the commands that would have run, which helps to understand how pipelines actually works. Nothing to hide here.
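
For example:

$ pipelines --dry-run -v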

Use --no-run to not run the pipeline at all; this can be used to test the utility's options.

Pipeline environment variables can be passed/exported to or set for your pipeline by name or file with the -e, --env and --env-file options.

Environment variables are also loaded from the dot env files .env.dist and .env, processed in that order before the environment options. --no-dot-env-files prevents the automatic loading, --no-dot-env-dot-dist only for the .env.dist file.
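
For example (variable name, value and file name are illustrative):

$ pipelines -e DEPLOY_ENV=staging --env-file .env.ci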

More information on pipelines environment variables can be found in the Environment section below.

Help

A full display of the pipelines utility options and arguments is available via -h, --help:

usage: pipelines [<options>] --version | -h | --help
       pipelines [<options>] [--working-dir <path>] [--file <path>]
                 [--basename <basename>] [--prefix <prefix>]
                 [--verbatim] [--[no-|error-]keep] [--no-run]
                 [(-e | --env) <variable>] [--env-file <path>]
                 [--no-dot-env-files] [--no-dot-env-dot-dist]
                 [--docker-client <package>] [--ssh]
                 [--user[=<name|uid>[:<group|gid>]]]
                 [--deploy mount | copy ] [--pipeline <id>]
                 [(--step | --steps) <steps>] [--no-manual]
                 [--trigger <ref>] [--no-cache]
       pipelines [<options>] --service <service>
       pipelines [<options>] --list | --show | --images
                 | --show-pipelines | --show-services
                 | --step-script[=(<id> | <step>[:<id>])]
                 | --validate[=<path>]
       pipelines [<options>] --docker-client-pkgs
       pipelines [<options>] [--docker-list] [--docker-kill]
                 [--docker-clean] [--docker-zap]

Generic options
    -h, --help            show usage and help information
    --version             show version information
    -v, --verbose         be more verbose, show more information and
                          commands to be executed
    --dry-run             do not execute commands, e.g. invoke docker or
                          run containers, with --verbose show the
                          commands that would have run w/o --dry-run
    -c <name>=<value>     pass a configuration parameter to the command

Pipeline runner options
    --basename <basename> set basename for pipelines file, defaults to
                          'bitbucket-pipelines.yml'
    --deploy mount|copy   how files from the working directory are
                          placed into the pipeline container:
                          copy     (default) working dir is copied into
                                 the container. stronger isolation as
                                 the pipeline scripts can change all
                                 files without side-effects in the
                                 working directory
                          mount    the working directory is mounted.
                                 fastest, no isolation
    --file <path>         path to the pipelines file, overrides looking
                          up the <basename> file from the current
                          working directory, use '-' to read from stdin
    --trigger <ref>       build trigger; <ref> can be either of:
                          tag:<name>, branch:<name>, bookmark:<name> or
                          pr:<branch-name>[:<destination-branch>]
                          determines the pipeline to run
    --pipeline <id>       run pipeline with <id>, use --list for a list
                          of all pipeline ids available. overrides
                          --trigger for the pipeline while keeping
                          environment from --trigger.
    --step, --steps <steps>
                          execute not all but this/these <steps>. all
                          duplicates and orderings allowed, <steps> are
                          a comma/space separated list of step and step
                          ranges, e.g. 1 2 3; 1-3; 1,2-3; 3-1 or -1,3-
                          and 1,1,3,3,2,2
    --no-manual           ignore manual steps, by default manual steps
                          stop the pipeline execution when not the first
                          step in invocation of a pipeline
    --verbatim            only give verbatim output of the pipeline, do
                          not display other information like which step
                          currently executes, which image is in use ...
    --working-dir <path>  run as if pipelines was started in <path>
    --no-run              do not run the pipeline
    --prefix <prefix>     use a different prefix for container names,
                          default is 'pipelines'
    --no-cache            disable step caches; docker always caches

File information options
    --images              list all images in file, in order of use, w/o
                          duplicate names and exit
    --list                list pipeline <id>s in file and exit
    --show                show information about pipelines in file and
                          exit
    --show-pipelines      same as --show but with old --show output
                          format without services and images / steps are
                          summarized - one line for each pipeline
    --show-services       show all defined services in use by pipeline
                          steps and exit
    --validate[=<path>]   schema-validate file, shows errors if any,
                          exits; can be used more than once, exit status
                          is non-zero on error
    --step-script[=(<id> | <step>[:<id>])]
                          write the step-script of pipeline <id> and
                          <step> to standard output and exit

Environment control options
    -e, --env <variable>  pass or set an environment <variable> for the
                          docker container, just like a docker run,
                          <variable> can be the name of a variable which
                          adds the variable to the container as export
                          or a variable definition with the name of the
                          variable, the equal sign "=" and the value,
                          e.g. --env NAME=<value>
    --env-file <path>     pass variables from environment file to the
                          docker container
    --no-dot-env-files    do not pass .env.dist and .env files as
                          environment files to docker
    --no-dot-env-dot-dist do not pass .env.dist as environment file to
                          docker only

Keep options
    --keep                always keep docker containers
    --error-keep          keep docker containers if a step failed;
                          outputs non-zero exit status and the id of the
                          container kept and exit w/ container exec exit
                          status
    --no-keep             do not keep docker containers; default

Container runner options
    --ssh                 ssh agent forwarding: if $SSH_AUTH_SOCK is set
                          and accessible, mount SSH authentication
                          socket read only and set SSH_AUTH_SOCK in the
                          pipeline step container to the mount point.
    --user[=<name|uid>[:<group|gid>]]
                          run pipeline step container as current or
                          given <user>/<group>; overrides container
                          default <user> - often root, (better) run
                          rootless by default.

Service runner options
    --service <service>   runs <service> attached to the current shell
                          and waits until the service exits, exit status
                          is the one of the docker run service
                          container; for testing services, run in a
                          shell of its own or background

Docker service options
    --docker-client <package>
                          which docker client binary to use for the
                          pipeline service 'docker' defaults to the
                          'docker-19.03.1-linux-static-x86_64' package
    --docker-client-pkgs  list all docker client packages that ship with
                          pipelines and exit

Docker container maintenance options
      usage might leave containers on the system. either by interrupting
      a running pipeline step or by keeping the running containers
      (--keep, --error-keep)

      pipelines uses a <prefix> 'pipelines' by default, followed by '-'
      and a compound name based on step-number, step-name, pipeline id
      and image name for container names. the prefix can be set by the
      --prefix <prefix> option and argument.

      three options are built-in to monitor and interact with leftovers,
      if one or more of these are given, the following operations are
      executed in the order from top to down:
    --docker-list         list prefixed containers
    --docker-kill         kills prefixed containers
    --docker-clean        remove (non-running) containers with
                          pipelines prefix

      for ease of use:
    --docker-zap          kill and remove all prefixed containers at
                          once; no show/listing

Less common options
    --debug               flag for trouble-shooting (fatal) errors,
                          warnings, notices and strict warnings; useful
                          for trouble-shooting and bug-reports

Usage Scenario

Give your project and pipeline changes a quick test run from the staging area. As pipelines are normally executed far away on remote infrastructure, setting them up locally is cumbersome. The guide in the Bitbucket Pipelines documentation [BBPL-LOCAL-RUN] has some helpful hints, but it is not about an actual bitbucket pipelines runner.

This is where the pipelines command jumps in.

The pipelines command closes the gap between local development and remote pipeline execution by running any configured pipeline on your local development box. As long as Docker is accessible locally, the bitbucket-pipelines.yml file is parsed and pipelines takes care of executing all steps and their commands within the container of choice.

Pipelines YAML file parsing, container creation and script execution are done as closely as possible to the Atlassian Bitbucket Pipelines service. Environment variables can be passed into each pipeline as needed. You can even switch to a different CI/CD service like Github/Travis with little integration work, fostering your agility and vendor independence.

Features

Features include:

Dev Mode

Pipeline from your working tree like never before. Pretend to be on any branch, tag or bookmark (--trigger) even in a different repository or none at all.

Check if the reference matches a pipeline or just run the default (default) or a specific one (--list, --pipeline). Use a different pipelines file (--file) or swap the "repository" by changing the working directory (--working-dir <path>).

If a pipeline step fails, the step's container can be kept for further inspection with the --error-keep option. The container id is then shown, which makes it easy to spawn a shell inside:

$ docker exec -it $ID /bin/sh

Containers can always be kept for debugging and manual testing of a pipeline with --keep, and, as said, with --error-keep on error only. Kept containers are re-used by their name regardless of any --keep (or --error-keep) option.

Continue on a (failed) step with the --steps <steps> argument; <steps> can be any step number or sequence (1-3), multiple separated by comma (3-,1-2), and you can even repeat steps or reverse the order (4,3,2,1).

For example, if the second step failed, continue with --steps 2- to re-run the second and all following steps (--steps 2 or --step 2 runs only that step, for a step-by-step approach).

Afterwards manage leftovers with --docker-list|kill|clean or clean up with --docker-zap.

Debugging options to dream of; benefit from the local build, the pipeline container.

Container Isolation

There is one container per step, like it is on Bitbucket.

Files are isolated by being copied into the container before the pipeline step script is executed (implicit --deploy copy).

Alternatively files can be mounted into the container with --deploy mount, which is normally faster on Linux, but the working tree might be changed by the container script, which can cause unwanted side-effects. Docker runs system-wide and containers do not isolate users (e.g. root is root).

For --deploy mount with peace of mind, use Docker in rootless mode, where files manipulated in the pipeline container remain accessible to your own user account (container root is automatically mapped to your user).

Pipeline Integration

Export files from the pipeline by making use of artifacts; these are copied back into the working tree while in (implicit) --deploy copy mode. Artifact files are always created by the user running pipelines. This also (near) perfectly emulates the artifacts section of the file format, with the benefit/downside that you might want to prepare a clean build in a pipeline step script while being able to keep artifacts from pipelines locally. This trade-off has turned out to be acceptable over the years.
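
A step exporting artifacts might look like this (step name, command and glob pattern are placeholders):

- step:
    name: Build
    script:
      - make dist
    artifacts:
      - dist/**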

Wrap pipelines in a script for clean checkouts, or wait for future options to stage first (git-deployment feature). In any case, control your build first of all.

Ready for Offline

On the plane? Riding Deutsche Bahn? Or just a rainy day on a remote location with broken net? Coding while abroad? Or just Bitbucket down again?

Before going into offline mode, read about Working Offline; you'll love it.

Services? Check!

The local pipeline runner runs service containers on your local box/system (that is, your pipelines' host). This is similar to using services and databases in Bitbucket Pipelines [BBPL-SRV].

Even before any pipeline step makes use of a service, a service definition can be tested on its own with the --service option, turning setting up services in pipelines into a new experience. It is a good way to test service definitions and to get an impression of the additional resources being consumed.
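
For example, assuming the file defines a service named mysql under definitions/services:

$ pipelines --service mysql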

Default Image

The pipelines command uses the same default image as Bitbucket Pipelines ("atlassian/default-image"). Get started out of the box, but keep in mind it weighs roughly 1.4 GB.

Pipelines inside Pipeline

As a special feature, and by default, pipelines mounts the docker socket into each container (on systems where the socket is available). This allows launching pipelines from within a pipeline, as long as pipelines and the Docker client are available in the pipeline's container. pipelines takes care of providing the Docker client as /usr/bin/docker as long as the pipeline has the docker service (services: [docker]).
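
A step that enables the docker service, and thus could invoke pipelines inside the pipeline, might look like this sketch (the script commands are illustrative):

- step:
    script:
      - docker version
      - pipelines --version
    services:
      - docker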

This feature is similar to running Docker commands in Bitbucket Pipelines [BBPL-DCK].

The pipelines inside pipeline feature serves pipelines itself well for integration testing of the project's build. In combination with --deploy mount, the original working directory is mounted from the host (again). Additional protection against endless recursion is implemented to prevent accidental pipelines inside pipeline invocations from running endlessly.

Environment

Pipelines mimics "all" of the Bitbucket Pipelines in-container environment variables [BBPL-ENV], also known as environment parameters:

  • BITBUCKET_BOOKMARK - conditionally set by --trigger
  • BITBUCKET_BUILD_NUMBER - always set to "0"
  • BITBUCKET_BRANCH - conditionally set by --trigger
  • BITBUCKET_CLONE_DIR - always set to deploy point in container
  • BITBUCKET_COMMIT - faux as no revision triggers a build; always set to "0000000000000000000000000000000000000000"
  • BITBUCKET_REPO_OWNER - current username from environment or if not available "nobody"
  • BITBUCKET_REPO_SLUG - base name of project directory
  • BITBUCKET_TAG - conditionally set by --trigger
  • CI - always set to "true"

All of these (except BITBUCKET_CLONE_DIR) can be set in the environment pipelines runs in and are taken over into the container environment. Example:

$ BITBUCKET_BUILD_NUMBER=123 pipelines # build no. 123

More information on (Bitbucket) pipelines environment variables can be found in the Pipelines Environment Variable Usage Reference.

Additionally pipelines sets some environment variables for introspection:

  • PIPELINES_CONTAINER_NAME - name of the container itself
  • PIPELINES_ID - <id> of the pipeline that currently runs
  • PIPELINES_IDS - space separated list of md5 hashes of the so far running <id>s; used to detect pipelines inside pipeline recursion, preventing execution that would otherwise continue until system failure.
  • PIPELINES_PARENT_CONTAINER_NAME - name of the parent container if one was already set when the pipeline started (pipelines inside pipeline "pip").
  • PIPELINES_PIP_CONTAINER_NAME - name of the first (initial) pipeline container. Used by pipelines inside pipelines ("pip").
  • PIPELINES_PROJECT_PATH - path of the original project as it would be used for --deploy with copy or mount, so that inside a pipeline it is possible to --deploy mount even when the current container did not mount. A mount always requires the path of the project directory on the system running pipelines; with no existing mount (e.g. --deploy copy) it would otherwise be unknown. Manipulating this parameter within a pipeline leads to undefined behaviour and can have system security implications.

These environment variables are managed by pipelines itself. Some of them can be injected which can lead to undefined behaviour and can have system security implications as well.

Next to these special purpose environment variables, any other environment variable can be imported into or set in the container via the -e, --env and --env-file options. These behave exactly as documented for the docker run command [DCK-RN].

Instead of specifying custom environment parameters for each invocation, pipelines by default automatically uses the .env.dist and .env files from each project, supporting the same file format for environment variables as docker.

Exit Status

Exit status on success is 0 (zero).

A non-zero exit status denotes an error:

  • 1 : An argument supplied (also a missing one) caused the error.
  • 2 : An error is caused by the system not being able to fulfill the command (e.g. a file could not be read).
  • 127: Running pipelines inside pipelines was aborted after detecting an endless loop.

Example

Not finding a file normally causes exit status 2 (two), as a file could not be read. With a switch like --show, however, the exit status might still be 1 (one): the error surfaces (indirectly) while showing all pipelines of that file, so it is reported as an argument-level error rather than a direct file-read error.

Details

Requirements | User Tests | Installation | Known Bugs | Todo

Requirements

Pipelines works best on a POSIX compatible system having a PHP runtime.

Docker needs to be available locally as the docker command, as it is used to run the pipelines. Rootless Docker is supported.

A recent PHP version is favored; the pipelines command needs PHP to run. It should work with PHP 5.3.3+. A development environment should use PHP 7+, which is especially suggested for future releases. PHP 8+ is supported as well.

Installing the PHP YAML extension [PHP-YAML] is highly recommended as it greatly improves parsing of the pipelines file; otherwise a bundled YAML parser is used as a fall-back, which is not bad at all. There are subtle differences between these parsers, so why not have both at hand?

User Tests

Successful use on Ubuntu 16.04 LTS, Ubuntu 18.04 LTS, Ubuntu 20.04 LTS and Mac OS X Sierra and High Sierra with PHP and Docker installed.

Known Bugs

  • The command ":" in the pipelines exec layer is never really executed but emulated, with exit status 0 and no standard or error output. It is intended for pipelines testing.

  • Brace expansion (used for glob patterns with braces) is known to fail in some cases. This could affect matching pipelines and collecting asset paths, and did affect building the phar file.

    For the first two, this has never been reported nor experienced; for building the phar file the workaround was to spell out the larger parts of the pattern.

  • The sf2yaml based parser does not support a backslash at the end of a line (folding without a space) in double quoted strings.

  • The libyaml based parser does not support dots (".") in anchor names.

  • The libyaml based parser does not support folded scalar (">") as block style indicator. Suggested workaround is to use literal style ("|").

  • NUL bytes ("\0") are not supported verbatim in step-scripts due to defense-in-depth protection on passthru in the PHP-runtime to prevent Null character injection.

  • When the project directory is large (e.g. a couple of GBs) and is being copied into the pipeline container, it may appear as if pipelines hangs while the copy operation is ongoing and taking a long time.

    Pressing ctrl + c may stop pipelines but not the copying operation. Kill the process of the copy operation (a tar pipe to docker cp) to stop it.

Installation

Phar (Download) | Composer | Phive | Source (also w/ Phar) | Full Project (Development)

Installation is available by downloading the phar archive from Github, via Composer/Packagist or with Phive, and it should always work from source, which includes building the phar file.

Download the PHAR (PHP Archive) File

Downloads are available on Github. To obtain the latest released version, use the following URL:

https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar

Rename the phar file to just "pipelines", set the executable bit and move it into a directory where executables are found.
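
A sketch of these steps, assuming $HOME/bin exists and is in $PATH:

$ curl -fsSL -o pipelines https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar
$ chmod +x pipelines
$ mv pipelines "$HOME/bin/pipelines"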

Downloads from Github are available since version 0.0.4. All releases are listed on the following website:

https://github.com/ktomk/pipelines/releases

Install with Composer

It is suggested to install it globally (and to have the global composer vendor/bin directory in $PATH) so that it can be called with ease and there are no dependencies in a local project:

$ composer global require ktomk/pipelines

This will automatically install the latest available version. Verify the installation by invoking pipelines and printing the version:

$ pipelines --version
pipelines version 0.0.19

To uninstall remove the package:

$ composer global remove ktomk/pipelines

Take a look at Composer from getcomposer.org [COMPOSER], a Dependency Manager for PHP. Pipelines supports composer based installations, which might include upstream patches; composer 2 is supported as well.

Install with Phive

Perhaps the easiest way to install when phive is available:

$ phive install pipelines

Even if your PHP version does not have the Yaml extension, this should work out of the box. If you use composer and you're a PHP aficionado, dig into phive for your systems and workflow.

Take a look at Phive from phar.io [PHARIO], the PHAR Installation and Verification Environment (PHIVE). Pipelines has full support for phar.io/phar based installations, including support for the phive utility and its upstream patches.

Install from Source

To install from source, check out the source repository and symlink the executable file bin/pipelines into a directory in $PATH, e.g. your $HOME/bin directory or similar. Verify the installation by invoking pipelines and printing the version:

$ pipelines --version
pipelines version 0.0.19 # NOTE: the version is exemplary
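
A sketch of the checkout and symlink, assuming $HOME/bin is in $PATH (repository URL as on Github):

$ git clone https://github.com/ktomk/pipelines.git
$ ln -s "$PWD/pipelines/bin/pipelines" "$HOME/bin/pipelines"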

To create a phar archive from sources, invoke the build script from within the project's root directory:

$ composer build
building 0.0.19-1-gbba5a43 ...
pipelines version 0.0.19-1-gbba5a43
file.....: build/pipelines.phar
size.....: 240 191 bytes
SHA-1....: 9F118A276FC755C21EA548A77A9DBAF769B93524
SHA-256..: 0C38CBBB12E10E80F37ECA5C4C335BF87111AC8E8D0490D38683BB3DA7E82DEF
file.....: 1.1.0
api......: 1.1.1
extension: 2.0.2
php......: 7.2.16-[...]
uname....: [...]
count....: 62 file(s)
signature: SHA-1 E638E7B56FAAD7171AE9838DF6074714630BD486

The phar archive then is (as written in the output of the build):

build/pipelines.phar

Check the version by invoking it:

$ build/pipelines.phar --version
pipelines version 0.0.19-1-gbba5a43
# NOTE: the version is exemplary

PHP Compatibility and Undefined Behaviour

The pipelines project aims to support PHP 5.3.3 up to PHP 8.1.

Using any of its PHP functions or methods with named parameters falls into undefined behaviour.

Reproducible Phar Builds

The pipelines project has practiced reproducible builds since its first phar build. The build is self-contained, which means the repository ships with all required files to build with only a few dependencies:

Reproducible builds of the phar file would be incomplete without the fine work from the composer project's phar-utils (Seldaek/Jordi Boggiano), which the pipelines project forks in Timestamps.php, keeping the original license with the file (MIT) and providing bug-fixes to upstream under that license (see Phar-Utils #2 and Phar-Utils #3).

This file is used to set the timestamps inside the phar file to those of the release, as otherwise they would be the time of the build. This is the same as the Composer project does (see Composer #3927).

Additionally, in the pipelines project that file is used to change the access permissions of the files in the phar. That is because the behaviour has changed across PHP versions, and so the build is kept backwards and forwards compatible. As this was noticed later in the project's history, the build might show different binaries depending on which PHP version is used (see PHP #77022 and PHP #79082) and the patch state of the timestamps file.

Install Full Project For Development

When working with git, clone the repository and then invoke composer install. The project is then set up for development.

Alternatively it's possible to do the same via composer directly:

$ composer create-project --prefer-source --keep-vcs ktomk/pipelines
...
$ cd pipelines

Verify the installation by invoking the local build:

$ composer ci

It should exit with status 0 when everything went fine and non-zero when there is an issue; Composer tells which individual script failed.

Follow the instructions in Install from Source to use the development version for pipelines.

Todo

  • Support for private Docker repositories
  • Inject docker client if docker service is enabled
  • Run specific steps of a pipeline (only) to put the user back into command on errors w/o re-running everything
  • Stop at manual steps (--no-manual to override)
  • Support BITBUCKET_PR_DESTINATION_BRANCH with --trigger pr:<source>:<destination>
  • Pipeline services
  • Run as current user with --user (--deploy mount should no longer enforce the container default user [often "root"] for project file operations), however the Docker utility still requires you (the current user) to be root-like, so technically there is little win (see Rootless Pipelines for what works better in this regard)
  • Have caches on a per-project basis
  • Copy local composer cache into container for better (offline) usage in PHP projects (see Populate Caches)
  • Run scripts with /bin/bash if available (#17) (bash-runner feature)
  • Support for BITBUCKET_DOCKER_HOST_INTERNAL environment variable / host.docker.internal hostname within pipelines
  • Count BITBUCKET_BUILD_NUMBER on a per project basis (build-number feature)
  • Option to not mount docker.sock
  • Limit projects' paths below $HOME, excluding dot . directory children.
  • More accessible offline preparation (e.g. --docker-pull-images, --go-offline or similar)
  • Check Docker existence before running a pipeline
  • Pipes support (pipe feature)
    • Show scripts with pipe/s
    • Fake run script with pipe/s showing information
    • Create test/demo pipe
    • Run script with pipe/s
  • Write about differences from Bitbucket Pipelines
  • Write about the file format support/limitations
  • Pipeline file properties support:
    • step.after-script (after-script feature)
    • step.trigger (--steps / --no-manual options)
    • step.caches (to disable use --no-cache option)
    • definitions
      • services (services feature)
      • caches (caches feature)
    • step.condition (#13)
    • clone (git-deployment feature)
    • max-time (never needed this for local run)
    • size (likely neglected for local run, limited support for Rootless Pipelines)
  • Get VCS revision from working directory (git-deployment feature)
  • Use a different project directory --project-dir <path> to specify the root path to deploy into the container, which currently is the working directory (--working-dir <path> works already)
  • Run on a specific revision, reference it (--revision <ref>); needs a clean VCS checkout into a temporary folder which then should be copied into the container (git-deployment feature)
  • Override the default image name (--default-image <name>; never needed this for local run)

References

pipelines's Issues

Validator fails on pipes in steps

Background

I'm trying to validate my pipelines with pipelines --show bitbucket-pipeline.yaml.

Using latest .phar release as of time of writing.

bitbucket-pipelines.yml:

pipelines:

  branches:
    develop:
    - step:
        name: Deploy to staging
        deployment: staging
        script:
          - pipe: atlassian/rsync-deploy:0.4.2
            variables:
              USER: 'ec2-user'
              SERVER: '${SERVER_HOST}'
              REMOTE_PATH: '/home/ec2-user/app'
              LOCAL_PATH: '$PWD'
              EXTRA_ARGS: '--exclude=".git*"'
          - 'echo "Hello, World!"' 

Expected Results

Expected this file to pass with flying colours - verified its correctness with the official online validator.

Actual Results

pipelines --show bitbucket-pipelines.yml 
PIPELINE ID         IMAGES    STEPS                                                         
branches/develop    ERROR     'script' requires a list of commands, step #0 is not a command

Does this tool support pipes?

I am getting this weird output:

+ echo "pipe: atlassian/git-secrets-scan:1.4.0 (pending feature)" # pipe feature is pending
printf %s '  FILES_IGNORED (**/node_modules): '; printf '%s ' **/node_modules; printf '\n' 

pipe: atlassian/git-secrets-scan:1.4.0 (pending feature)
  FILES_IGNORED (**/node_modules): **/node_modules 

pipelines: not a readable file:bitbucket-pipelines.yml

Hello,
This is some great work!
But I am stuck at the first step, while running the YAML file. it says it's not a readable file. My YAML file uses a scala-sbt image with sbt commands in the script.
Could you please help me with this issue I am facing. Do I do some things differently?

Thanks,
Prathamesh

/bin/sh: 5: source: not found

Our images in the bitbucket-pipelines have bash as the default runner. Can this be adhered to?

This does not only affect the source command, which we could easily replace by '.'. Environment variables that should be present are also missing.

Default pipeline with parallel fails validation

Hi thanks for an awesome utility

I've a problem when defining a default pipeline with parallel step it just stops with

$ pipelines --debug
pipelines: error: pipeline id 'default'
pipelines: file parse error: Missing required property 'step'
--------
class....: Ktomk\Pipelines\File\ParseException
message..: file parse error: Missing required property 'step'
code.....: 2
file.....: phar:///~/bin/pipelines/src/File/ParseException.php
line.....: 27
backtrace:
#0 phar:///~/bin/pipelines/src/File/Pipeline.php(106): Ktomk\Pipelines\File\ParseException::__()
#1 phar:///~/bin/pipelines/src/File/Pipeline.php(91): Ktomk\Pipelines\File\Pipeline->step()
#2 phar:///~/bin/pipelines/src/File/Pipeline.php(33): Ktomk\Pipelines\File\Pipeline->parseSteps()
#3 phar:///~/bin/pipelines/src/File/File.php(231): Ktomk\Pipelines\File\Pipeline->__construct()
#4 phar:///~/bin/pipelines/src/Utility/App.php(438): Ktomk\Pipelines\File\File->getById()
#5 phar:///~/bin/pipelines/src/Utility/App.php(141): Ktomk\Pipelines\Utility\App->getRunPipeline()
#6 phar:///~/bin/pipelines/src/Utility/ExceptionHandler.php(48): Ktomk\Pipelines\Utility\App->run()
#7 phar:///~/bin/pipelines/src/Utility/App.php(86): Ktomk\Pipelines\Utility\ExceptionHandler->handle()
#8 phar:///~/bin/pipelines/bin/pipelines(26): Ktomk\Pipelines\Utility\App->main()
#9 /~/bin/pipelines(14): require('phar:///~/...')
#10 {main}
--------

This is the smallest config I can reproduce the error with

bitbucket-pipelines.yml

image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        script:
          - echo "Step 1"
    - parallel:
        - step:
            script:
              - echo "Step 2.1"
        - step:
            script:
              - echo "Step 2.2"

`step.clone-path` does not match current Bitbucket behavior

The default config parameter for step.clone-path mentioned in this doc is /app whereas when a pipeline runs in Bitbucket (as of 10/29/21) the mount point is /build. This discrepancy should be resolved so that users do not have to manually configure this option to mimic the full Bitbucket behavior

Dockerfile for pipelines?

Hi there,

Is there a Dockerfile with the pipelines command and PHP already installed? The use for this would be to allow users to run bitbucket pipelines through docker using your tool. Something like:

docker run -v "path/to/docker.sock:..." <pipelines-image-tag> [PIPELINES OPTIONS] <PIPELINES ARGS>

If not, have you given any thought to whether this is possible? My team is looking for a way to run BBP locally, and this tool looks promising. I might be able to contribute a Dockerfile like this if we end up going that way.

More Docker Client Versions

Hi @ktomk!

I would like to say I've been using your tool to a great degree and really appreciate your supporting it!

I wanted to ask if we could get more versions of docker? Right now the supported versions are

docker-17.12.0-ce-linux-static-x86_64
docker-18.09.1-linux-static-x86_64
docker-19.03.1-linux-static-x86_64

according to pipelines --docker-client-pkgs. Some scripts require more recent flags, such as --all-tags of docker push. I have used this flag successfully in bitbucket pipelines and therefore think some newer versions of the docker client would be helpful. I am not sure exactly which version is used in bitbucket.

Also I would like to state that I am using pipelines version 0.0.68 which appears to be the most recent tag.

'image' required in service definition

This failure occurs when I have defined 'docker' as a service in order to increase the RAM size.

bitbucket-pipelines.yml

....
definitions: 
  services:
    docker:
      memory: 3072

....

Error

    container......: pipelines-1.Format-Validation.default.pts_objects
pipelines: file parse error: 'image' required in service definition

Working with private repos

Can't make it work with a private repository

Failed to clone the [email protected]:.....git repository, try running in interactive mode so that you can enter your Bitbucket OAuth consumer credentials

Is there a way to share my local ssh credentials with the docker container?

Parallel step support for fail-fast and steps properties

When a parallel step is defined with the fail-fast and steps properties, pipelines shows it as invalid.

Here is a sample that is perfectly valid and runs in BitBucket:

pipelines:
  custom:
    'custom-pipeline':
      - step: *stepA
      - parallel:
          fail-fast: true
          steps:
            - step: *stepB
            - step: *stepC

Pipelines does not allow the above and expects parallel steps to be defined only as an array, like this:

pipelines:
  custom:
    'custom-pipeline':
      - step: *stepA
      - parallel:
          - step: *stepB
          - step: *stepC

Dealing with SSH stuff

It would be great to have a way to copy SSH stuff (public/private keys and host key verification) into the container. This way we could copy the same SSH keys used by the real pipeline.

What do you think?

Failed to parse validated YAML file

The following simple bitbucket-pipelines.yml is rejected with "verify the file contains valid YAML: Unable to parse at line 2 (near "sleep 10;")."

.test-sleep: &test-sleep
  sleep 10;
  sleep 20;
  
.test-step: &test-sleep-step
  step:
    name: Simple Test
    script:
      - date;
      - *test-sleep
      - date;

pipelines:
  default:
    - parallel:
        - <<: *test-sleep-step

This file was validated by https://yamlchecker.com/ and https://bitbucket-pipelines.atlassian.io/validator and executes properly in Atlassian Bitbucket Pipelines.

bitbucket-pipelines.zip

script non-zero exit status: 243

  • What are you trying to do?
    Just want to execute bitbucket-pipelines.yml on my local machine before pushing the code to the repo.

  • So what problem are you facing?
    In execution, everything is going well until I trigger the script.

  • You mean, you are able to successfully complete all operations without triggering the script?
    Yes, if I don't trigger the script the "pipelines" complete execution successfully.

  • Okay fine, show me your bitbucket-pipelines.yml file.

image: node:16.15.1
pipelines:
    default:
      - step:
          name: test
          caches:
            - node
          script:
            - apt-get update
            - apt-get install postgresql-client -y
            - cd functions
            - npm install
            - sh create-db.sh test
            - sh run_migration.sh test
            - echo "Completed"
          services:
            - postgres
definitions:
  services:
    postgres:
      image: postgres:latest
      variables:
        POSTGRES_USER: 'root'
        POSTGRES_PASSWORD: 'root'
  • Cool, let me show the execution part.

rentsher@rentsher-ThinkPad-P14s-Gen-1:/work/napses/backend-core$ pwd
/home/rentsher/work/napses/backend-core
rentsher@rentsher-ThinkPad-P14s-Gen-1:
/work/napses/backend-core$ ls
bitbucket-pipelines.yml Dockerfile functions package-lock.json pipe.sh README.md start_api.sh
rentsher@rentsher-ThinkPad-P14s-Gen-1:~/work/napses/backend-core$ pipelines
+++ step #1

name...........: "test"
effective-image: node:16.15.1
container......: pipelines-1.test.default.backend-core
container-id...: 923bded7d99b

+++ copying files into container...

+++ populating caches...

  • node node_modules (hit)
  • apt-get update
    Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
    Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
    Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7911 kB]
    Get:5 http://security.debian.org/debian-security buster/updates/main amd64 Packages [336 kB]
    Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [8788 B]
    Fetched 8494 kB in 3s (3241 kB/s)
    Reading package lists...

  • apt-get install postgresql-client -y
    Reading package lists...
    Building dependency tree...
    Reading state information...
    The following additional packages will be installed:
    distro-info-data lsb-release postgresql-client-11 postgresql-client-common
    Suggested packages:
    lsb postgresql-11 postgresql-doc-11
    The following NEW packages will be installed:
    distro-info-data lsb-release postgresql-client postgresql-client-11
    postgresql-client-common
    0 upgraded, 5 newly installed, 0 to remove and 15 not upgraded.
    Need to get 1594 kB of archives.
    After this operation, 6613 kB of additional disk space will be used.
    Get:1 http://deb.debian.org/debian buster/main amd64 distro-info-data all 0.41+deb10u4 [6880 B]
    Get:2 http://security.debian.org/debian-security buster/updates/main amd64 postgresql-client-11 amd64 11.16-0+deb10u1 [1413 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 lsb-release all 10.2019051400 [27.5 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 postgresql-client-common all 200+deb10u4 [85.1 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 postgresql-client all 11+200+deb10u4 [61.1 kB]
    debconf: delaying package configuration, since apt-utils is not installed
    Fetched 1594 kB in 0s (6349 kB/s)
    Selecting previously unselected package distro-info-data.
    (Reading database ... 23988 files and directories currently installed.)
    Preparing to unpack .../distro-info-data_0.41+deb10u4_all.deb ...
    Unpacking distro-info-data (0.41+deb10u4) ...
    Selecting previously unselected package lsb-release.
    Preparing to unpack .../lsb-release_10.2019051400_all.deb ...
    Unpacking lsb-release (10.2019051400) ...
    Selecting previously unselected package postgresql-client-common.
    Preparing to unpack .../postgresql-client-common_200+deb10u4_all.deb ...
    Unpacking postgresql-client-common (200+deb10u4) ...
    Selecting previously unselected package postgresql-client-11.
    Preparing to unpack .../postgresql-client-11_11.16-0+deb10u1_amd64.deb ...
    Unpacking postgresql-client-11 (11.16-0+deb10u1) ...
    Selecting previously unselected package postgresql-client.
    Preparing to unpack .../postgresql-client_11+200+deb10u4_all.deb ...
    Unpacking postgresql-client (11+200+deb10u4) ...
    Setting up postgresql-client-common (200+deb10u4) ...
    Setting up distro-info-data (0.41+deb10u4) ...
    Setting up postgresql-client-11 (11.16-0+deb10u1) ...
    update-alternatives: using /usr/share/postgresql/11/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
    Setting up lsb-release (10.2019051400) ...
    Setting up postgresql-client (11+200+deb10u4) ...

  • cd functions

  • npm install

up to date, audited 922 packages in 1s

89 packages are looking for funding
run npm fund for details

found 0 vulnerabilities

  • sh create-db.sh test
  • [ -z test ]
  • echo Creating db...
  • NODE_ENV=test npx env-cmd -f ./env/.env.test npx sequelize-cli db:create --env=test
    Creating db...

script non-zero exit status: 243
rentsher@rentsher-ThinkPad-P14s-Gen-1:~/work/napses/backend-core$

YAML Anchors

Would it be possible to have this work with YAML anchors? We use them extensively, and it would be nice to have this tool automatically expand them if found in the file as a pre-step, since it bombs like this when ancors are used:
branches/dev ERROR Missing required property 'step' (this is raised when anchors are used)

Pipelines Fatal Environment Definition Error is puzzling

Hi @ktomk! Sorry if I have raised too many issues. Hopefully this is not something caused by my configuration as it runs in bitbucket. I run pipelines in a repository with the following bitbucket configuration :

image: python:3.10

options :
  docker : true

definitions :
  services :
    mysql :
      variables :
        MYSQL_RANDOM_ROOT_PASSWORD : 'yes'
        MYSQL_DATABASE : 'mve_brain_sqlalchemy_tests'
        MYSQL_USER : 'adrian'
        MYSQL_PASSWORD : 'somepassword'
      image : mysql
  steps :
    - step : &create_env_test
        name : Create .env.test for testing since this file should never be commited.
        script :
          - export ENV_TEST_PATH=".env.test"
          - touch $ENV_TEST_PATH
          - echo BRAIN_MYSQL='{"database":"mve_brain_sqlalchemy_tests","host":"localhost","port":3306,"username":"adrian","password":"somepassword","drivername":"mysql+asyncmy"}' >> $ENV_TEST_PATH 
          - echo BRAIN_AUTH0='{"client_id":"$BRAIN_AUTH0__CLIENT_ID","client_secret":"$BRAIN_AUTH0__CLIENT_SECRET","secret_key":"$BRAIN_AUTH0__SECRET_KEY","audience":"http://localhost:8000/","issuer":"$BRAIN_AUTH0__ISSUER"}' >> $ENV_TEST_PATH
          - echo BRAIN_UVICORN='{"host":"0.0.0.0","port":8000,"reload":true}' >> $ENV_TEST_PATH
          - echo BRAIN_AUTHDUMMY='{"secret_key":"$BRAIN_AUTH_DUMMY_SECRET_KEY","client_secret":"$BRAIN_AUTH_DUMMY__CLIENT_SECRET","admin_secret":"$BRAIN_AUTH_DUMMY__ADMIN_SECRET","token_timeout":3600}' >> $ENV_TEST_PATH
          - echo BRAIN_API_INCLUDE_UNSAFE_ROUTES=1 >> $ENV_TEST_PATH
        artifacts :
          - .env.test

    - step : &invoke_tests
        name : Run tests for non-fetchers items.
        caches :
          - pip
        script :
          - pip install -r requirements.txt 
          - pip install -r requirements.dev.txt
          - BRAIN_API_PURPOSE=test python -m pytest
        services :
          - mysql
 
    - step : &build_containers
        name : Building prod and fetchers docker images (without an enironment files)
        caches :
          - pip
        script :
          - export ACR_URI="$ACR_NAME.azurecr.io"
          - export IMAGE_API="$ACR_URI/api:$BITBUCKET_COMMIT" 
          - export IMAGE_FETCHERS="$ACR_URI/fetchers:$BITBUCKET_COMMIT"
          - docker login $ACR_URI --username $BBSP_USERNAME --password $BBSP_PASSWORD

          - docker build -t "$IMAGE_API" -f "Dockerfile.prod" --target "prod" .
          - docker build -t "$IMAGE_FETCHERS" -f "Dockerfile.prod" --target "fetcher_runner" . 
          
          - docker run --name test1 --detach "$IMAGE_API" 
          - sleep 30 
          - docker stop test1
                    
          - docker push $IMAGE_API 
          - docker push $IMAGE_FETCHERS

    - step : &typecheck_source
        name : See if code passes mypy's type checking.
        caches :
          - pip
        script :
          - python -m pip install mypy sqlalchemy[mypy]
          - if ( python -m mypy . > results_mypy 2>&1 ); then echo 1; else echo 0; fi
        artifacts :
          - results_mypy

    - step : &lint_source
        name : See if code passes flake8. Try to autolint.
        caches :
          - pip 
        script :
          - pip install flake8 
          - if ( python -m flake8 . > results_flake8 2>&1 ); then echo 1; else echo 0; fi
        artifacts :
          - results_flake8

    - step  : &scan_source
        name : Check for secrets etc.
        caches : 
          - pip
        script :
          - pip install bandit
          - if ( bandit . > results_bandit 2>&1 ); then echo 1; else echo 0; fi
        artifacts :
          - results_bandit

  basic : &basic
    - step : 
        <<: *create_env_test
    - step :
        <<: *invoke_tests

  everything : &everything

    - parallel :
      - step : 
          <<: *scan_source
      - step :
          <<: *lint_source
      - step :
          <<: *typecheck_source
    - step : 
        <<: *create_env_test
    - step :
        <<: *invoke_tests

  everythingandbuild  : &everythingandbuild

    - parallel :
      - step : 
          <<: *scan_source
      - step :
          <<: *lint_source
      - step :
          <<: *typecheck_source
    - step : 
        <<: *create_env_test
    - step :
        <<: *invoke_tests
    - step :
        <<: *build_containers

pipelines:
  default : *basic
  branches :
    master : *everythingandbuild
    refactoring :  *everythingandbuild

  pull-requests : 
    '**' : *everything

But I get the following issue when I run 'pipelines':

pipelines: fatal: Variable definition error: '  "database" : "mve_brain_sqlalchemy_tests",'

I believe this to be a yaml parsing error, due to the dictionary/json like structure. I pushed my changes to bitbucket and everything worked correctly. I am using pipelines 0.0.35 ( EDIT: THIS IS WRONG, I AM USING 0.65 ) and installed it onto WSL2 ubuntu using composer version 2.3.4.

Thanks again for your help @ktomk! This project is very helpful to my productivity.

cannot parse yaml files with dash as indentation

Yaml spec 1.2 says:

The “-”, “?” and “:” characters used to denote block collection entries are perceived by people to be part of the indentation. This is handled on a case-by-case basis by the relevant productions.

However pipelines fails when the file uses dash as indentation as follows:

pipelines:
  default:
  - step:
      name: Build and Test

it says there are no steps defined for the default pipeline

PHP Deprecated: trim(): ... no pipeline to run!

Just installed pipelines today and I'm brand new to this util. I've installed it using phive on macOS and set up my pipelines executable globally. I do have a bitbucket-pipelines.yml file set up and configured correctly. When just running the pipelines command with no additional flags in the same folder as the yaml file, it fails showing this error:

PHP Deprecated:  trim(): Passing null to parameter #1 ($string) of type string is deprecated in phar:///Users/user/.phive/phars/pipelines/vendor/ktomk/symfony-yaml/Inline.php on line 49

Deprecated: trim(): Passing null to parameter #1 ($string) of type string is deprecated in phar:///Users/user/.phive/phars/pipelines/vendor/ktomk/symfony-yaml/Inline.php on line 49

[the same deprecation warning repeated many more times]
pipelines: no pipeline to run!

🙋🏻‍♂️ How do I fix this deprecation issue?

PHP Info

php --version
PHP 8.1.8 (cli) (built: Jul  8 2022 12:51:36) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.8, Copyright (c) Zend Technologies
    with Zend OPcache v8.1.8, Copyright (c), by Zend Technologies
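
A possible workaround until the bundled ktomk/symfony-yaml copy (see the phar paths above) is clean under PHP 8.1: invoke pipelines with deprecations excluded from error_reporting. Resolving the executable via command -v is an assumption about how Phive put pipelines on the PATH:

php -d error_reporting='E_ALL & ~E_DEPRECATED' "$(command -v pipelines)"

Note that the final pipelines: no pipeline to run! line is pipelines' own message and may be a separate issue from the deprecation noise.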

(Some) Anchors Do Not Work - Raises Invalid YAML

Thanks for the awesome tool; it has saved, and will save, me a lot of time. I was running it on a new pipeline like

cd <my_project>
pipelines

and it says

bitbucket-pipelines.yml; verify the file contains valid YAML

But I can parse the file with Python:

from yaml import safe_load

with open('bitbucket-pipelines.yml') as file:
    safe_load(file)

and it passes the Bitbucket validator; when I run the pipeline on Bitbucket, it works. I could provide my YAML, but it would need some amendments, as it is internal to my organization, so I am hesitant. If I figure out what is upsetting the parser, I will add that to my post here.

It may also be worth noting that I am using the Windows Subsystem for Linux (WSL).

Thank you again for any time, it is really appreciated.

EDIT: Debug output

Running `pipelines --debug` gives:

pipelines: file parse error: YAML error: /mnt/c/MVE/.../bitbucket-pipelines.yml; verify the file contains valid YAML
pipelines: version 0.0.62-composer w/ php 7.4.3 (libyaml: n/a)
--------
class....: Ktomk\Pipelines\File\ParseException
message..: file parse error: YAML error: /mnt/c/.../<my_project>/bitbucket-pipelines.yml; verify the file contains valid YAML
code.....: 2
file.....: /home/adr1an/.config/composer/vendor/ktomk/pipelines/src/File/File.php
line.....: 53
backtrace:
#0 /home/adr1an/.config/composer/vendor/ktomk/pipelines/src/Utility/App.php(135): Ktomk\Pipelines\File\File::createFromFile()
#1 /home/adr1an/.config/composer/vendor/ktomk/pipelines/src/Utility/ExceptionHandler.php(50): Ktomk\Pipelines\Utility\App->run()
#2 /home/adr1an/.config/composer/vendor/ktomk/pipelines/src/Utility/ExceptionHandler.php(65): Ktomk\Pipelines\Utility\ExceptionHandler->handle()
#3 /home/adr1an/.config/composer/vendor/ktomk/pipelines/src/Utility/App.php(90): Ktomk\Pipelines\Utility\ExceptionHandler->handleStatus()
#4 /home/adr1an/.config/composer/vendor/ktomk/pipelines/bin/pipelines(37): Ktomk\Pipelines\Utility\App->main()
#5 /home/adr1an/.config/composer/vendor/bin/pipelines(112): include('/home/adr1an/.c...')
#6 {main}
--------

EDIT

It turns out that the parser does not like anchors; this is the root of the problem.
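
For reference, a minimal, hypothetical example of the kind of anchor/alias usage meant here — an anchor defined under definitions and reused in a step:

definitions:
  steps:
    - step: &build
        name: Build
        script:
          - make build

pipelines:
  default:
    - step: *build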

Support for `condition`

Hi

From what I can see, the library doesn't support `condition` and `changesets`.

I saw that you have a test YAML file with the structure test/data/yml/condition.yml, but it doesn't seem to be used for anything other than verifying that it's a valid pipeline config.

So, is this something that would be possible to implement, or would you be open to a pull request?


Yes, we use Git, and we only use it in Bitbucket.

How the detection works in Bitbucket:

  • In the branches section, Bitbucket looks at the last commit and checks whether any files match the includePaths; if so, it runs the step. This should be fairly simple to implement.
  • In the pull-requests section, it looks across all commits for changes that match the includePaths. This might be harder, but maybe we could require an option naming a destination branch and do the check against that diff? (See the git sketch after this list.)
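
A rough sketch of the two checks with plain git; the destination ref (master here) is an assumption, and the listed paths would still need to be matched against the includePaths patterns:

# branches: files touched by the last commit
git diff-tree --no-commit-id --name-only -r HEAD

# pull-requests: files changed relative to the destination branch
git diff --name-only master...HEAD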

Usage example:

pipelines:
  pull-requests:
    '**': # runs as the default for any branch not matched elsewhere
      - step:
          script:
            - phpunit
          condition:
            changesets:
              includePaths:
                - src/
                - composer.lock

      - step:
          script:
            - eslint
          condition:
            changesets:
              includePaths:
                - js/

  branches:
    master:
      - step:
          script:
            - phpunit
          condition:
            changesets:
              includePaths:
                - src/
                - composer.lock

BITBUCKET_REPO_SLUG holds the `.git` extension

In Bitbucket Pipelines, BITBUCKET_REPO_SLUG does not hold the .git extension.

It's an inconsistency, but it's not documented anywhere by Atlassian that it should not or cannot hold the .git extension.
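
For illustration (my-repo is a hypothetical slug), the difference looks like this:

# on Bitbucket Cloud
BITBUCKET_REPO_SLUG=my-repo
# in pipelines locally, per this report
BITBUCKET_REPO_SLUG=my-repo.git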

Mounting Local Volumes to allow application to access local files

Hello Sir,
I have been using your pipelines package to run my YAML file.
My application needs to access certain files on the local machine, which it cannot do unless a volume has been mounted.
I can see some --mount functionality in your readme, but I am not sure how to use it to mount certain volumes so my application can interact with them.
Could you please help me with this? I don't want to keep every file in the application's working directory.

Thank you.
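
A hedged note: the readme option referred to is presumably --deploy mount, which only covers the project's working directory, e.g.:

pipelines --deploy mount

Mounting arbitrary additional host paths is a separate matter and not addressed by that option.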

Support for 'docker' in 'options'

Hi again @ktomk. Sorry for raising another issue. I found that the options section does nothing for my pipelines, e.g.

image: node:16
options:
  docker: true
...

See for instance the YAMLs in #14. I don't know if this is a big ask; my experience makes me imagine it might require some Docker magic. I think I could do it by using a different image than the one I am using, but it would be easier to just have it in options.
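
Until options.docker is honoured, a hedged alternative (assuming the runner supports step-level services, which is not verified here) is Bitbucket's per-step form of the same switch:

image: node:16

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - docker version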
