f1bonacc1 / process-compose

Process Compose is a simple and flexible scheduler and orchestrator to manage non-containerized applications.

Home Page: https://f1bonacc1.github.io/process-compose/

License: Apache License 2.0

Makefile 0.66% Go 94.86% Shell 3.51% Nix 0.81% Python 0.17%
go golang open-source orchestration orchestrator processes tui workflows docker

process-compose's People

Contributors: adrian-gierakowski, airwoodix, anmonteiro, dependabot[bot], enobayram, f1bonacc1, johnalotoski, joshuabehrens, r-vdp, srid, stijnveenman, thenonameguy, tomhoule

process-compose's Issues

Reference to non-existing `self.overlays.${system}`

Since #80, it's no longer possible to nix run the flake:

➜ nix run 'github:f1bonacc1/process-compose' --
error:
       … while evaluating the attribute 'packages.x86_64-linux.process-compose'

         at /nix/store/x13pm9vsklcq0iis25w8k28n8x2w24ml-source/flake.nix:17:36:

           16|       in {
           17|         packages = { inherit (pkgs) process-compose; };
             |                                    ^
           18|         defaultPackage = self.packages."${system}".process-compose;

       … while evaluating a branch condition

         at /nix/store/hbxj731z8hd22lva84px6h9ppl5qjzgx-source/lib/trivial.nix:386:27:

          385|   */
          386|   throwIfNot = cond: msg: if cond then x: x else throw msg;
             |                           ^
          387|

       (stack trace truncated; use '--show-trace' to show the full trace)

       error: attribute 'x86_64-linux' missing

       at /nix/store/x13pm9vsklcq0iis25w8k28n8x2w24ml-source/flake.nix:14:24:

           13|           inherit system;
           14|           overlays = [ self.overlays."${system}".default ];
             |                        ^
           15|         };

Custom Layout - Grouping or Sorting

It would be nice if we could define a custom group for each process.
Or, at the very least, a persistent boolean flag, so that non-persistent processes appear at the bottom of the list.

Sample 1:

Group 1
   - Process 1
   - Process 2
Group 2
   - Process 3
   - Process 4

ps: a group name is not required; a simple divider between groups is enough

Sample 2

Process 2
Process 3
Process 1 (non persistent)
Process 4 (non persistent)

Combination of both would be nice as well.

Start/Restart action should start depended-on disabled services

If you have a service config like:

    depends_on:
      kafka:
        condition: process_healthy
      sftp:
        condition: process_started

for a seldom-used subset of your system (and therefore using disabled: true for both the service and its dependencies, kafka and sftp), starting/restarting the service currently does not visibly do anything.

Expectation: start sftp/kafka so that the depends_on conditions have a chance to be met.

Bridge from docker-compose

I made a similar tool a while ago https://github.com/sordina/logody#logody but ended up not using it very much mainly because of this one feature that I wished would exist - interoperability with docker-compose.

Feel free to close this issue if it's beyond the scope of your project but I think it would be extremely useful and popular if you liked the idea.

In essence I wished that docker compose would be able to not only spin up dockerized processes, but also native processes and express them together in the same services section. You'd be able to have them networked by name and have their lifecycle managed by the docker-compose tool.

One way I imagine that this could be achieved is by having two programs:

  • A dockerized-bridge process that acts as a docker image for reference by a compose file to describe what native processes you want to run
  • A local process manager process that the bridge can communicate with to manage native processes

This second program could be process-compose!

The main thing missing that would enable this is a network API for lifecycling processes. If this was present then it would be possible to create the bridge program to allow description of processes in its configuration to be communicated to the process-compose running on the host natively.

Anyway, just thought you might think the idea is interesting. As I mentioned, no worries if it is out of scope! Happy to discuss if you like.

Feature Request: Run a "sanity-check" program

Hello!

This is a great project! I have been looking for a much smaller but still capable process supervisor compared to PM2, and this looks like exactly what I need :)

I would like to request/suggest a feature: Allow the user to specify a "sanity check". For instance, imagine this shell script:

#!/bin/bash

check() {
  if ! command -v "$1" >/dev/null; then
    echo "$1 was not found!"
    exit 1
  fi
}

check go
check node

If either of these checks fails, the script exits with code 1.

What I would like is to run this before any of the processes are invoked, as a way to introduce a script that verifies that everything in the environment is exactly where it should be. :)
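Until such a hook exists, one possible approximation (a sketch on my part; the checked tools and the final exec line are placeholders) is a wrapper script that runs the checks before starting the supervisor:

```shell
#!/usr/bin/env bash
# Run the environment sanity checks; start the supervisor only if all pass.
set -eu

check() {
  # Fail loudly if a required tool is missing from PATH.
  command -v "$1" >/dev/null 2>&1 || { echo "$1 was not found!" >&2; return 1; }
}

check sh          # substitute the tools your project needs, e.g. `check go`
echo "environment OK"
# exec process-compose "$@"   # uncomment in a real wrapper
```

The same effect could also be modeled as a first process that the others `depends_on` with `process_completed_successfully`.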

Again thanks a lot for this project, been playing around with it today a lot and it's great!

Kind regards,
Ingwie

Process not stopped

I am starting a PHP-FPM server using process-compose, and after stopping process-compose it is still active.

My config:

processes:
  phpfpm-web:
    command: /Users/shyim/Downloads/start.sh

My start.sh

#!/usr/bin/env bash

exec /nix/store/770sz2xzliv4xf3z7mgmhj1kbc3asy60-php-with-extensions-8.1.13/bin/php-fpm -F -y /nix/store/ib4r50ck8gnk3jviscz4d0rbr08x3qsh-phpfpm-web.conf -c /nix/store/13gh8w0v5ps7xj5lqwndld0in3bmzsc0-php.ini

The -F parameter disables daemonization of php-fpm, but for some reason it is not stopped.

When I started process-compose and killed the process on my own, it worked.

Too much log output seems to cause processes to freeze

Hi! We use this tool in our team to orchestrate a web application on our development machines. It's been very pleasant so far.

One of the tools in our stack managed by process-compose is Hasura. Specifically, the process is a shell script that sets environment variables and then executes graphql-engine.

For the longest time, we've observed graphql-engine occasionally freezing up for no apparent reason. A process-compose restart is enough to get it to behave again. Today I discovered something new - the problem is:

  • Reliably reproducible by having graphql-engine output more logs (when running it through process-compose)
  • Impossible to reproduce with a lower graphql-engine verbosity (when running it through process-compose)
  • Impossible to reproduce when running graphql-engine outside of process-compose

One way that I can run graphql-engine through process-compose without any freezing is by redirecting stdout to a log file, i.e. adding 1>graphql-engine.log to the process shell script. Setting a log_location and modifying log_level in process-compose.yaml did not seem to fix freezing.

I can't say if the sheer amount of logs is what causes process-compose to choke or if something else is going on. If there exists an error log of process-compose itself or any other information that may be useful, let me know and I'll try to provide.

I'll expand this issue if I discover anything more.

Create intermediate directories of log_location if they do not exist

Feature Request

I'd like process-compose to create intermediate directories to log_location.
Currently, setting log_location to a path inside a missing directory results in an error on startup.

Use Case:

Improving DX when using custom log locations. Currently this may require additional steps or a wrapper script when running a given configuration for the first time.

Proposed Change:

create intermediate directories for all log_location's

Who Benefits From The Change(s)?

people using custom log_location

Alternative Approaches

wrapper script or extra manual steps which need to be documented
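The wrapper-script alternative can indeed be tiny; a minimal sketch (the log path below is illustrative, not from any real configuration):

```shell
# Create the parent directories of the configured log_location up front,
# then start process-compose as usual.
LOG_LOCATION="$(mktemp -d)/logs/web/out.log"   # stand-in for the YAML value
mkdir -p "$(dirname "$LOG_LOCATION")"
# process-compose up   # would now start without the missing-directory error
```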

Logging only to file by default is confusing

I was trying to get set up with process-compose, and I had an error in my YAML. I spent far too long scratching my head over why it was exiting 1 with no output, and had to do a fair amount of source-code digging to discover that it was logging an error message to $TMPDIR/process-compose-daisy.yaml.

IMO it would be good if the default was to log to stdout, at least until the TUI starts up, or failing that to print a message explaining where the log file is.
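Until the default changes, a quick way to surface that hidden log (the path pattern is taken from the /tmp/process-compose-$USER.log default reported elsewhere in this tracker; the TMPDIR handling is my assumption) is:

```shell
# Look for process-compose's own log file in the usual location.
log="${TMPDIR:-/tmp}/process-compose-${USER:-$(id -un)}.log"
if [ -r "$log" ]; then
  tail -n 20 "$log"
else
  echo "no process-compose log found at $log"
fi
```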

allow customising global process compose log file

Feature Request

I'd like to be able to run multiple instances of pc concurrently and have each use a distinct log file for process-compose's own logs. Currently all instances share the same file (/tmp/process-compose-$USER.log), which causes the file to be truncated each time a new instance starts and makes it difficult to debug individual pc instances.

Example of the shared log file being truncated while 2 instances are started:

> tail -f /tmp/process-compose-$USER.log
23-07-19 11:39:37.693 INF Process Compose v0.51.4
23-07-19 11:39:37.693 INF Global shell command: bash -c
23-07-19 11:39:37.693 INF Loaded project from /home/adrian/code/rhinofi/rhino-core/process-compose.yaml
23-07-19 11:39:37.693 INF start http server listening :8080
23-07-19 11:39:37.693 DBG Spinning up 1 processes. Order: ["log-test"]
23-07-19 11:39:37.694 INF log-test started
tail: /tmp/process-compose-adrian.log: file truncated
23-07-19 11:39:42.180 INF Process Compose v0.51.4
23-07-19 11:39:42.180 INF Global shell command: bash -c
23-07-19 11:39:42.181 INF Loaded project from /home/adrian/code/rhinofi/rhino-core/process-compose.yaml
23-07-19 11:39:42.181 INF start http server listening :8080
23-07-19 11:39:42.181 DBG Spinning up 1 processes. Order: ["log-test"]
23-07-19 11:39:42.181 INF log-test started
23-07-19 11:39:46.085 INF log-test exited with status -1
23-07-19 11:39:46.085 INF Project completed
23-07-19 11:39:46.085 DBG process log-test is in state Completed not shutting down
23-07-19 11:39:47.086 INF Thank you for using process-compose
23-07-19 11:39:47.147 INF log-test exited with status -1
23-07-19 11:39:47.147 INF Project completed
23-07-19 11:39:48.148 INF Thank you for using process-compose

Use Case:

running multiple instances of pc concurrently

Proposed Change:

Add a way to configure the log files location.

Who Benefits From The Change(s)?

Anyone who wants to run multiple instances of pc concurrently

Alternative Approaches

Each instance could be given a name, and the name could be prepended to each log line in the shared file. By default, the port could be used instead of a name (if none is given), since it should be unique per instance. In this case, it would also be good to be able to disable truncation of the log file on startup.

Another option could be to add a PC_LOG_POSTFIX env var and use it (if defined) instead of USER in the log location.

allow setting port, config file and tui via env vars

for example:

PROC_C_PORT=9999 process-compose

instead of:

process-compose -p 9999

why?

This could be set by tools like direnv or devenv. When I'm working on multiple projects, I can have multiple instances of process-compose running on different ports; entering each project directory would set the appropriate env vars, so I could simply run process-compose in the project dir and it would know which port to use, without my having to pass the correct port manually on each invocation.

Occasional segfault with v0.24.0

Example stack trace:

zerolog: could not write event: write /tmp/process-compose-kszabo.log: file already closed
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xa9f67c]

goroutine 1 [running]:
github.com/f1bonacc1/process-compose/src/tui.(*pcView).handleShutDown(0x0)
	github.com/f1bonacc1/process-compose/src/tui/view.go:132 +0x9c
github.com/f1bonacc1/process-compose/src/tui.Stop(...)
	github.com/f1bonacc1/process-compose/src/tui/view.go:389
github.com/f1bonacc1/process-compose/src/cmd.runTui(0xc00042bf40)
	github.com/f1bonacc1/process-compose/src/cmd/root.go:119 +0xda
github.com/f1bonacc1/process-compose/src/cmd.glob..func3(0x1a1c520?, {0xbea3a3?, 0x2?, 0x2?})
	github.com/f1bonacc1/process-compose/src/cmd/root.go:72 +0x387
github.com/spf13/cobra.(*Command).execute(0x1a1c520, {0xc000032190, 0x2, 0x2})
	github.com/spf13/[email protected]/command.go:920 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x1a1c520)
	github.com/spf13/[email protected]/command.go:1040 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/[email protected]/command.go:968
github.com/f1bonacc1/process-compose/src/cmd.Execute({0xd13d28?, 0xc0000101a8?})
	github.com/f1bonacc1/process-compose/src/cmd/root.go:87 +0x4a
main.main()
	github.com/f1bonacc1/process-compose/src/main.go:37 +0x98

show open ports of running processes on linux

Feature Request

Show ports on which processes listen

Use Case:

So users can find out ports to connect to

Proposed Change:

Make these visible via the CLI or TUI.

Who Benefits From The Change(s)?

Docker has the same feature: it shows all listening ports.

Alternative Approaches

Use native Linux tooling, though that is not as easy.
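The native-tooling route looks roughly like this on Linux (a sketch; `ss -p` needs sufficient privileges to show the owning process, and the /proc fallback prints raw hex addresses):

```shell
# List listening TCP sockets; with -p, ss also shows the owning PID/name,
# which is the socket-to-process mapping a TUI column would need.
if command -v ss >/dev/null 2>&1; then
  ss -tlnp
else
  # Raw kernel socket table as a fallback (local_address is hex ip:port).
  cat /proc/net/tcp
fi
```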

macOS support

I noticed that the project's README explicitly states that macOS is unsupported. Is it unsupported in the sense that no support will be provided to Mac users, or does the project flat out not work under macOS?

To be clear, I haven't even attempted to run this on a mac but this project caught my eye because I think it would be interesting to attempt to convert some of my existing docker development workflows on mac to use process-compose instead as it wouldn't require a VM running in the background and gobbling up RAM to operate.

Log formatting

Feature Request

Log formatting

  • process-compose writes log lines like:
    {"level":"error","process":"someapp","replica":0,"message":"some application message"}
  • I think we could use those keys (level, process, ...) to format the log

Use Case:

in process-compose.yml

log_location: logs/log.log
log_level: debug
log_format: "[{process}-{replica}] {asctime}  {level} --- {message}"

-->

[someapp-0] 2023-09-21  error --- some application message

Proposed Change:

Who Benefits From The Change(s)?

Alternative Approaches
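As an interim workaround (assuming the JSON shape quoted above, and that jq is available; no {asctime} key appears in the sample line, so it is omitted here), the existing keys can be reformatted outside process-compose:

```shell
# Render process-compose's JSON log lines in the requested layout with jq.
jq -r '"[\(.process)-\(.replica)] \(.level) --- \(.message)"' <<'EOF'
{"level":"error","process":"someapp","replica":0,"message":"some application message"}
EOF
# → [someapp-0] error --- some application message
```

In practice this would be piped from `tail -f` on the configured log_location.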

Add support for configuring keybindings for TUI

The current keybinding is somewhat cumbersome on Fn-lock laptop keyboards (causing airplane mode to be hit instead of Start 😄).
It would be nice if the process-compose.yml had some optional top-level action->keybind map that could override the current default bindings.

Question: Where is documentation for "indented/nested" processes?

UPDATE: Please disregard, those are not indents, but pid 0s.

Hello!

First of all, this is a slick application. I thought about something similar for a long time, and I'm glad someone executed on it well.

Quick question... The image in the README.md seems to indicate there is an ability to indent/nest processes (see image).

[Screenshot 2023-09-19 at 11 07 41 AM]

This appears to be an old version, but does this feature still exist? Where can I find the docs for it?

Thanks!

add option to stop all other processes on one process completion

Feature Request

Similar to exit_on_failure, but also exits if a given process exits with a 0 status.

Use Case:

Running integration tests, where multiple supporting services (databases etc.) need to be started first, followed by the test-executor process. Once the test executor completes (regardless of exit code), all other processes should be shut down and PC should exit with the code returned by the test executor.

Proposed Change:

Add a boolean flag: availability.exit_on_end.
The reason this is not just another option in availability.restart is that you might want to restart a process on failure up to a certain number of times, and only once retries are exhausted (or the process completes successfully) shut down all processes.

Who Benefits From The Change(s)?

Anyone who wishes to run a one-shot process that needs other processes running in the background while the main process executes.

Alternative Approaches

One could execute process-compose process stop PROCESS_NAME for each supporting process once the main process finishes.

`go install` doesn't work?

I found another use for process-compose: orchestrating running rclone instances on my OpenWrt router, which lacks a more solid init system that can handle automatic restarts. Technically procd can, but I would have to create that for an absurd number of scripts...

So, instead, I will be using process-compose. Well, I tried to install it via Go's install command, but:

root@FriendlyWrt ~/.c/rclone [1]# go install "github.com/f1bonacc1/process-compose@latest"
go: downloading github.com/f1bonacc1/process-compose v0.29.0
go: github.com/f1bonacc1/process-compose@latest: module github.com/f1bonacc1/process-compose@latest found (v0.29.0), but does not contain package github.com/f1bonacc1/process-compose

Mind looking into this? :)

fails to run in docker because user is nil

Defect

Make sure that these boxes are checked before submitting your issue -- thank you!

  • Included the relevant configuration snippet
  • Included the relevant process-compose log (log location: process-compose info)
  • Included a Minimal, Complete, and Verifiable example (https://stackoverflow.com/help/mcve)

Version of process-compose:

 docker run -it --entrypoint bash composablefi/devnet-xc:f110a75b84068c72c0d6589d964649547c0f9c74 
bash-5.2# cat devnet-xc-background
#!/nix/store/a7f7xfp9wyghf44yv6l6fv9dfw492hd3-bash-5.2-p15/bin/bash
set -o errexit
set -o nounset
set -o pipefail

export PATH="/nix/store/k01vbwj28s01da6f95r7qkiw2c56p52f-process-compose-0.51.4/bin:$PATH"

cat /nix/store/f50cr9qh6lc7cawbz8dbmf6nj9pl7k5k-devnet-xc-background.yaml
export PC_CONFIG_FILES=/nix/store/f50cr9qh6lc7cawbz8dbmf6nj9pl7k5k-devnet-xc-background.yaml
export PC_DISABLE_TUI=true
exec process-compose -p 8080 "$@"

bash-5.2# PATH="/nix/store/k01vbwj28s01da6f95r7qkiw2c56p52f-process-compose-0.51.4/bin:$PATH"
bash-5.2# 
bash-5.2# process-compose    
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:27:45Z","message":"Failed to retrieve user info"}
bash-5.2# process-comepose --version
bash: process-comepose: command not found
bash-5.2# ^[[A^C
bash-5.2# process-compose --version
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:28:00Z","message":"Failed to retrieve user info"}
bash-5.2# process-compose version      
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:28:09Z","message":"Failed to retrieve user info"}
bash-5.2# process-compose -v       
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:28:18Z","message":"Failed to retrieve user info"}
bash-5.2# process-compose -version 
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:28:22Z","message":"Failed to retrieve user info"}
bash-5.2# process-compose --help   
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:28:28Z","message":"Failed to retrieve user info"}
bash-5.2# 

OS environment:

Docker.

Steps or code to reproduce the issue:

 docker run -it  composablefi/devnet-xc:main 

Expected result:

Runs or other error.

Actual result:

actions-runner@Ubuntu-2204-jammy-amd64-base:~$ docker run -it  composablefi/devnet-xc:f110a75b84068c72c0d6589d964649547c0f9c74 
processes:
  centauri:
    availability:
      restart: on_failure
    command: /nix/store/4l88isl2fhib4zsdq94xx3sspjs70jsg-centaurid-gen/bin/centaurid-gen
    log_location: /tmp/composable-devnet/centauri.log
    readiness_probe:
      failure_threshold: 3
      http_get:
        host: 127.0.0.1
        path: /
        port: 26657
        scheme: http
      initial_delay_seconds: 0
      period_seconds: 10
      success_threshold: 1
      timeout_seconds: 3
  centauri-init:
    availability:
      restart: on_failure
    command: /nix/store/65jjn2ckp8qa32k935ill8liqifx4l4x-centaurid-init/bin/centaurid-init
    depends_on:
      centauri:
        condition: process_healthy
    log_location: /tmp/composable-devnet/centauri-init.log
  composable:
    availability:
      restart: on_failure
    command: /nix/store/07q9wz30dchv8s64zxqfdzhb4nbbvcvw-zombienet-composable-centauri-b/bin/zombienet-composable-centauri-b
    log_location: /tmp/composable-devnet/composable.log
    readiness_probe:
      exec:
        command: |
          curl --header "Content-Type: application/json" --data '{"id":1, "jsonrpc":"2.0", "method" : "assets_listAssets"}' http://127.0.0.1:32201
      failure_threshold: 8
      initial_delay_seconds: 32
      period_seconds: 8
      success_threshold: 1
      timeout_seconds: 2
  composable-picasso-ibc-channels-init:
    availability:
      restart: on_failure
    command: "HOME=\"/tmp/composable-devnet/composable-picasso-ibc\"\nexport HOME       \nRUST_LOG=\"hyperspace=info,hyperspace_parachain=debug,hyperspace_cosmos=debug\"\nexport RUST_LOG\n/nix/store/f6anaxjgf6i33380mz03njaks17hcvb3-hyperspace-composable-rococo-picasso-rococo/bin/hyperspace create-channel --config-a /tmp/composable-devnet/composable-picasso-ibc/config-chain-a.toml --config-b /tmp/composable-devnet/composable-picasso-ibc/config-chain-b.toml --config-core /tmp/composable-devnet/composable-picasso-ibc/config-core.toml --delay-period 10 --port-id transfer --version ics20-1 --order unordered\n"
    depends_on:
      composable-picasso-ibc-connection-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/composable-picasso-ibc-channels-init.log
  composable-picasso-ibc-connection-init:
    availability:
      restart: on_failure
    command: "HOME=\"/tmp/composable-devnet/composable-picasso-ibc\"\nexport HOME                \nRUST_LOG=\"hyperspace=info,hyperspace_parachain=debug,hyperspace_cosmos=debug\"\nexport RUST_LOG      \n/nix/store/f6anaxjgf6i33380mz03njaks17hcvb3-hyperspace-composable-rococo-picasso-rococo/bin/hyperspace create-connection --config-a /tmp/composable-devnet/composable-picasso-ibc/config-chain-a.toml --config-b /tmp/composable-devnet/composable-picasso-ibc/config-chain-b.toml --config-core /tmp/composable-devnet/composable-picasso-ibc/config-core.toml --delay-period 10\n"
    depends_on:
      composable-picasso-ibc-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/composable-picasso-ibc-connection-init.log
  composable-picasso-ibc-init:
    availability:
      restart: on_failure
    command: /nix/store/ag4h4pzbsfsybxfkq9vwxnhrvpy44lbc-composable-picasso-ibc-init/bin/composable-picasso-ibc-init
    depends_on:
      composable:
        condition: process_healthy
      picasso:
        condition: process_healthy
      picasso-centauri-ibc-channels-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/composable-picasso-ibc-init.log
  composable-picasso-ibc-relay:
    availability:
      restart: on_failure
    command: /nix/store/rps5vp8r12ppn268hq7zldk4hx42jb5c-composable-picasso-ibc-relay/bin/composable-picasso-ibc-relay
    depends_on:
      composable-picasso-ibc-channels-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/composable-picasso-ibc-relay.log
  osmosis:
    command: /nix/store/vpg6764j5wagzmlqml3vmsh3288yh8ig-osmosisd-gen/bin/osmosisd-gen
    log_location: /tmp/composable-devnet/osmosis.log
    readiness_probe:
      failure_threshold: 3
      http_get:
        host: 127.0.0.1
        path: /
        port: 36657
        scheme: http
      initial_delay_seconds: 0
      period_seconds: 10
      success_threshold: 1
      timeout_seconds: 3
  osmosis-centauri-hermes-init:
    availability:
      restart: on_failure
    command: /nix/store/pbglg0a7f9a52b0f7af15g3r8cc8698m-osmosis-centauri-hermes-init/bin/osmosis-centauri-hermes-init
    depends_on:
      centauri-init:
        condition: process_completed_successfully
      osmosis:
        condition: process_healthy
      picasso-centauri-ibc-channels-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/osmosis-centauri-hermes-init.log
  osmosis-centauri-hermes-relay:
    availability:
      restart: on_failure
    command: /nix/store/dal3sw1j21vvrv1ib6l8hxcsirc1na6r-osmosis-centauri-hermes-relay/bin/osmosis-centauri-hermes-relay
    depends_on:
      osmosis-centauri-hermes-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/osmosis-centauri-hermes-relay.log
  osmosis-init:
    availability:
      restart: on_failure
    command: /nix/store/ggrnlgq6rfwhp5r9h067l8kwi358qf8s-osmosisd-init/bin/osmosisd-init
    depends_on:
      osmosis:
        condition: process_healthy
    log_location: /tmp/composable-devnet/osmosis-init.log
  picasso:
    availability:
      restart: on_failure
    command: /nix/store/3khkmb88vyw376agfczi0k0j2hfqr62h-zombienet-rococo-local-picasso-dev/bin/zombienet-rococo-local-picasso-dev
    log_location: /tmp/composable-devnet/picasso.log
    readiness_probe:
      exec:
        command: |
          curl --header "Content-Type: application/json" --data '{"id":1, "jsonrpc":"2.0", "method" : "assets_listAssets"}' http://127.0.0.1:32200
      failure_threshold: 8
      initial_delay_seconds: 32
      period_seconds: 8
      success_threshold: 1
      timeout_seconds: 2
  picasso-centauri-ibc-channels-init:
    availability:
      restart: on_failure
    command: /nix/store/w9c4fcnzkkwjvqzlxms63fzlp4warksd-picasso-centauri-ibc-channels-init/bin/picasso-centauri-ibc-channels-init
    depends_on:
      picasso-centauri-ibc-connection-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/picasso-centauri-ibc-channels-init.log
  picasso-centauri-ibc-connection-init:
    availability:
      restart: on_failure
    command: /nix/store/g0kknc92nbn99h5wgk7sxf8zlw4v9rb2-picasso-centauri-ibc-connection-init/bin/picasso-centauri-ibc-connection-init
    depends_on:
      picasso-centauri-ibc-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/picasso-centauri-ibc-connection-init.log
  picasso-centauri-ibc-init:
    availability:
      restart: on_failure
    command: /nix/store/parh7f1mvlkjyfs1kz0xxmp64lqmv96n-picasso-centauri-ibc-init/bin/picasso-centauri-ibc-init
    depends_on:
      centauri:
        condition: process_healthy
      centauri-init:
        condition: process_completed_successfully
      picasso:
        condition: process_healthy
    log_location: /tmp/composable-devnet/picasso-centauri-ibc-init.log
  picasso-centauri-ibc-relay:
    availability:
      restart: on_failure
    command: /nix/store/9nwl712053i5zgb8qdxfammx288091gi-picasso-centauri-ibc-relay/bin/picasso-centauri-ibc-relay
    depends_on:
      picasso-centauri-ibc-channels-init:
        condition: process_completed_successfully
    log_location: /tmp/composable-devnet/picasso-centauri-ibc-relay.log
shell:
  shell_argument: -c
  shell_command: /nix/store/a7f7xfp9wyghf44yv6l6fv9dfw492hd3-bash-5.2-p15/bin/bash
{"level":"fatal","error":"user: lookup userid 0: no such file or directory","time":"2023-08-04T00:26:18Z","message":"Failed to retrieve user info"}
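The fatal error comes from a user lookup for uid 0; my assumption is that the Nix-built image ships no /etc/passwd entry for root, which makes Go's os/user lookup fail. A hedged workaround is to add such an entry before launching:

```shell
# If the image has no /etc/passwd entry for uid 0, append a minimal one.
# (No-op on systems where the entry already exists.)
grep -q '^root:' /etc/passwd 2>/dev/null || \
  echo 'root:x:0:0:root:/root:/bin/sh' >> /etc/passwd
```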

commands which are meant to connect to "process-compose" via REST API fail silently if server is not accessible or returns unexpected result

Defect

running commands like:

process-compose process list

doesn't exit with non-zero status if server is not accessible, or returns unexpected result

Version of process-compose:

Process Compose
Version:        v0.51.4
Commit:         e3cc52e

OS environment:

nixos

Steps or code to reproduce the issue:

make sure there is no process-compose running on default port and run

process-compose process list

Expected result:

exits with non-zero status, and prints an error that it could not connect to server on given port

If some server is accessible but returns an unexpected result, this should also yield a non-zero status and a message that the connection was established but requests failed, suggesting that a process other than process-compose might be running on the given host/port.

Actual result:

Exits silently with status 0.

Feature request: per-process shutdown command / trap support

For my use case, most of the commands I'm managing via process-compose eventually yield a process into the background and exit cleanly with 0, with a no-retry policy.
Other processes use the process_completed dependency feature to realize that the given process (let's say a postgres init script) has concluded and now the postgres server is ready in the background.
So far so good 🎉

Now, the only problem with these daemon-launching scripts is that if the process-compose process is killed, the underlying daemons are not shut down.
This leaves the environment in an inconsistent state (unlike calling docker-compose down / Ctrl-C'ing the up command).

Proposal:

  • add processes.$PROCESSNAME.shutdown_command: string to the schema
  • handle SIGTERM in process-compose itself, exec.Command these commands in parallel, and wait for them to terminate.

stretch goal (kinda unrelated, but nice to have):

  • revisit the current SIGKILL way of killing the underlying process (use SIGTERM instead), so the command has a chance to run any kind of bash-trap-like functionality

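The stretch goal above is what would make bash traps usable; a sketch of the script side (the daemon and cleanup names are illustrative), which only helps if the supervisor sends SIGTERM rather than SIGKILL:

```shell
#!/usr/bin/env bash
# A managed script that cleans up its background daemon on SIGTERM/SIGINT.
cleanup() {
  echo "stopping background daemon"    # e.g. kill "$daemon_pid"
}
trap cleanup TERM INT
echo "process running; waiting for shutdown signal"
```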

Expose CLI RPC in addition to HTTP

While migrating repositories from docker-compose to process-compose, a new need came up:
replacing docker-compose restart $SERVICE with the process-compose equivalent in a bash setting.

Currently you need to do something like:

curl -X 'PATCH' 'http://localhost:$PORT/process/stop/$SERVICE' 
curl -X 'PATCH' 'http://localhost:$PORT/process/start/$SERVICE' 

Given that process-compose is not a singleton daemon, you need to somehow differentiate between the potentially many instances running on a given host.
Given that Go already includes an HTTP client, it would make sense to provide a CLI interface that calls the exposed HTTP endpoints.

Something like:

-h --host String default: localhost
-p --port String required
-o --output json|text default json (no work needed translating the HTTP interface)

process-compose -p $PORT ps # /processes
process-compose -p $PORT process stop 
process-compose -p $PORT process start 

Additionally, a bit unrelated it would be awesome to:

  • Expose an atomic/blocking restart operation
  • Redirect GET / to /swagger/index.html
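In the meantime, the two curl calls above can be wrapped in a small helper (the function name and argument handling are mine, not an existing interface):

```shell
# Restart a process by name via the existing HTTP endpoints.
pc_restart() {
  port=$1
  svc=$2
  curl -sf -X PATCH "http://localhost:${port}/process/stop/${svc}" &&
    curl -sf -X PATCH "http://localhost:${port}/process/start/${svc}"
}
# usage: pc_restart 8080 my-service
```

Note this is stop-then-start, not the atomic restart the bullet above asks for.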

process stop to support list of processes

Feature Request

I can do

process stop my1
process stop my2

I cannot

process stop my1 my2

Use Case:

process list | xargs process stop

like with docker

docker ps -aq | xargs docker stop 

Proposed Change:

Who Benefits From The Change(s)?

Like docker.

Alternative Approaches

for loop or something more complicated than xargs
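The for-loop workaround can be sketched like this (pure illustration; it only prints the commands it would run instead of executing them):

```shell
# Expand a list of process names into one `process stop` invocation each.
# Printing instead of executing keeps the sketch side-effect free.
stop_all() {
  for name in "$@"; do
    printf 'process-compose process stop %s\n' "$name"
  done
}
```

Piping `process-compose process list` into `xargs -n1 process-compose process stop` achieves the same today, at the cost of one invocation per process.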

Implement -t flag from docker run (pseudo-terminal support)

Feature Request

Certain processes check if they are running within a terminal (bound std* streams, TERM env var, etc.)
A request came up on the devenv Discord to have a similar functionality to docker-compose, which simulates TTYs by default.

Use Case:

Running tailwindcss --watch via process-compose should not exit (as the underlying esbuild process checks for a pseudo-TTY).

Proposed Change:

Who Benefits From The Change(s)?

@jcf

Potentially could allow supplying stdin input in the TUI to interactive processes (shells).

Alternative Approaches
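One possible workaround until native PTY support lands: wrap the command with util-linux `script(1)`, which allocates a pseudo-terminal for its child. A sketch (flags: `-q` quiet, `-e` propagate the exit code, `-c` command; `/dev/null` discards the typescript file):

```shell
# Run a command under a pseudo-terminal using util-linux script(1).
run_with_pty() {
  script -qec "$*" /dev/null
}
```

In a process definition this would look like `command: script -qec "tailwindcss --watch" /dev/null`.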

Split log files by day or size (log rolling)

Feature Request

Split log files by day or size (log rolling)

Use Case:

  • Prevent the log file from becoming too large.
    • If the application is too talkative, there is a risk that the log file size will grow infinitely.
  • Effective log management.
    • Quick access to logs from a specific date.

Proposed Change:

in process-compose.yml

log_location: logs/log.txt
log_level: debug

to

log_location: logs  # it must be a directory
log_level: debug
log_rolling:
  format: log-{day}.txt  # or log-{index}.txt / log-{day}_{index}.txt ...
  day: 1  # Interval for rolling log files. [logs/log.txt-2023-09-21] [[logs/log.txt-2023-09-22] ... 
  size: 100MB  # File size for rolling files. [logs/log.txt-1] [logs/log.txt-2] ...

Who Benefits From The Change(s)?

People who will keep this application running for a long time.

Alternative Approaches

I think it would be great if this amazing app has more options to manage log.
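As an external alternative until native rolling lands, logrotate can already manage a process-compose log file (a sketch; the path is a placeholder, and `copytruncate` matters because the writing process keeps the file open):

```
/path/to/logs/log.txt {
    daily
    size 100M
    rotate 7
    compress
    missingok
    copytruncate
}
```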

donate button in crypto

I use this cool tool in crypto, get paid in crypto, so I can donate in any crypto of choice :)

thank you for considering adding such button :)

equivalent of docker-compose run: start and attach to process (stdin\stdout) while running deps in background

Feature Request

I'd like to be able to run a selected process in the foreground, with stdin/stdout attached, so that I see its logs and can interact with it from the command line. All of its dependent processes should run in the background (no log output shown in the terminal).

This would be similar to docker-compose run $SERVICE_NAME

For example, given the following docker-compose.yaml:

services:
  main:
    image: busybox
    command:
      - sh
      - -c
      - |
        echo running main; while true; read -p "enter some text: " reply && echo $$reply; done
    depends_on:
      - deps

  deps:
    image: busybox
    command:
      - sh
      - -c
      - while true; do date > deps; sleep 1; done

when I run docker-compose run main, I see the following:

[+] Running 1/0
 ⠿ Container example-deps-1  Created                                                                                                                                          0.0s
[+] Running 1/1
 ⠿ Container example-deps-1  Started                                                                                                                                          0.2s
running main
enter some text: 

which then allows me to provide input at the prompt

the deps container keeps running in the background until the main process exits.

Use Case:

I'd like to be able to run tests as the main process and a bunch of deps (databases) as other processes in the background. The main process needs to wait for the deps before it starts. I am only interested in seeing the test logs in the terminal, and I'd also like to provide input to the test process while it's running: for example, the jest test runner, when run in --watch mode, can be controlled with various keyboard shortcuts which allow rerunning all tests, or only the ones that failed.

Proposed Change:

Addition of:

  • top level run command, taking 1 process name as argument: this would start all processes defined in the config file, while attaching to the named process
  • run sub-command to the process command, taking 1 process name as argument: this would start (and attach to) only the given process, with all processes in its dependency tree running in the background

a --no-deps flag could be nice to have

Who Benefits From The Change(s)?

People who'd like to use PC to interactively run some process, while also running other process on which the main one depends

Alternative Approaches

One could run the deps in PC and the main process outside of it, using a wrapper script to start/stop PC as needed.

Also, a workaround I am currently using in CI, when running the test process and its deps via PC (with a config similar to this), is the following bash script:

process-compose >/dev/null 2>&1 &          # run PC headless, discard its output
pc_pid=$!

process-compose process logs tests -f &    # stream only the tests process logs
test_logs_pid=$!

wait $pc_pid                               # block until PC (and the tests) finish
kill $test_logs_pid                        # stop the log follower

This way only output of tests process is shown in CI logs.
However this would not work if interaction with the tests process was required, like when running tests via jest --watch.

Cannot copy TUI text on Mac

Using the TUI with selection on, there seems to be no keyboard shortcut to copy the selection.

CTRL-C attempts to kill the process.
CMD-C just blips.

Any idea how this can be done?

Allow runnning executables directly without shell wrapper

Feature Request

I’d like to be able to specify a command as a list of [executable, args…] and have it run directly instead of via a shell.

Note that I’m willing to implement this.

Use Case:

Reducing the dependencies needed to run the processes and avoiding double-wrapping in a shell when my executable is already a shell script that calls other executables.

Proposed Change:

Allow process.command to be an array, in which case it would be invoked directly
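A sketch of the proposed shape (this is the proposal, not current behavior):

```yaml
processes:
  build:
    # Proposed: a list-form command would be exec'd directly,
    # bypassing the `$SHELL -c` wrapper entirely.
    command: ["./scripts/build.sh", "--fast"]
```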

Who Benefits From The Change(s)?

People wanting to minimise the deps of their projects.

Alternative Approaches

Not sure.

readiness_probe does not respect working_dir

Defect

Make sure that these boxes are checked before submitting your issue -- thank you!

  • Included the relevant configuration snippet
  • Included the relevant process-compose log (log location: process-compose info)
  • Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)

Version of process-compose:

OS environment:

Steps or code to reproduce the issue:

Config:

processes:
  clientA:
    working_dir: "clientA"
    is_daemon: true
    command: "sleep 10 && touch ready"
    shutdown:
      command: "rm ready"
    readiness_probe:
      exec:
        command: "echo $(pwd) > ready-check && test -f ready"
  clientB:
    command: "echo all done!"
    depends_on:
      clientA:
        condition: process_healthy

Expected result:

  • clientA process becomes ready and clientB prints "all done"
  • File ready-check exists in folder clientA

Actual result:

clientA process never becomes ready and log says:

process-compose.log

23-07-19 09:41:34.127 INF Process Compose v0.51.4
23-07-19 09:41:34.127 INF Global shell command: bash -c
23-07-19 09:41:34.127 INF Loaded project from /var/home/joe/Projects/code/process-compose-working_dir-mcve/process-compose.yaml
23-07-19 09:41:34.127 INF start http server listening :8080
23-07-19 09:41:34.127 DBG Spinning up 2 processes. Order: ["clientA" "clientB"]
23-07-19 09:41:34.127 INF clientB is waiting for clientA to be healthy
23-07-19 09:41:34.127 DBG Shortcuts loaded from /var/home/joe/.config/process-compose/shortcuts.yml
23-07-19 09:41:34.128 INF clientA started
23-07-19 09:41:34.128 DBG clientA_ready_probe started monitoring
23-07-19 09:41:44.130 INF clientA exited with status 0
23-07-19 09:41:54.129 INF clientA is not ready anymore - exit status 1
23-07-19 09:41:54.129 DBG terminating clientA with timeout 10 ...
23-07-19 09:41:54.129 ERR Process clientA was aborted and won't become ready
23-07-19 09:41:54.130 ERR Error: process clientB depended on clientA to become ready, but it was terminated
23-07-19 09:41:54.130 ERR Error: process clientB won't run
23-07-19 09:41:54.131 ERR terminating clientA with timeout 10 failed - exit status 1
23-07-19 09:41:54.131 INF Project completed
23-07-19 09:41:55.131 INF Thank you for using process-compose

  • File ready-check exists in parent folder, not in clientA
  • clientB never prints "all done" since clientA never becomes ready
❯ ls clientA/
ready
❯ ls .
clientA/  process-compose.yaml  ready-check

MCVE: https://github.com/joefiorini/process-compose-working_dir-mcve
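The expected semantics can be illustrated with a tiny helper that cd's into the process's `working_dir` in a subshell before running the probe command (an illustration of the expectation, not process-compose's actual code):

```shell
# Run a probe command from a given working directory without
# affecting the caller's cwd (hence the subshell parentheses).
run_probe_in() {
  ( cd "$1" && eval "$2" )
}
```

With this behavior, `ready-check` would be created inside `clientA` and `test -f ready` would find the file the daemon touched.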

Document support for optionally startable services

Problem:
I have relatively large process-compose service graphs, some with services that are seldom needed to be running.
Still, whenever they are needed, it would be nice to have the dependency mechanism/retries of process-compose.

Proposal:
Allow process-compose services to be in "Not Started" state by some config parameter, so the TUI user can start/stop them manually.
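A sketch of what such a config flag could look like (the field name and behavior are assumptions from this proposal):

```yaml
processes:
  migrations:
    command: ./run-migrations.sh
    # Proposed: stays in "Not Started" state until started via the TUI/API
    disabled: true
```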

After C-c process the tasks are still nohup

process-compose version  
Process Compose
Version:        v0.40.0
Commit:         db93aa4
Date (UTC):     20230122201755
License:        Apache-2.0

Case: running a Flask Python web app.

[//example/composeJobs/hello_1  ] *** Operational MODE: single process ***
[//example/composeJobs/hello_1  ] WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xd2a750 pid: 1541517 (default app)
[//example/composeJobs/hello_1  ] *** uWSGI is running in multiple interpreter mode ***
[//example/composeJobs/hello_1  ] spawned uWSGI worker 1 (and the only) (pid: 1541517, cores: 1)
^C%                                                                                                                      

guangtao in 🌐 Desktop in cells-lab on  main [!?] via 🐍 v3.10.9 took 20s 
❯ ps aux | grep ./hello.py

guangtao 1541517  0.5  0.0 259096 31536 pts/2    S    22:26   0:00 /nix/store/r09q26lws8grvk6vasj398ig02nhpbmi-uwsgi-2.0.21/bin/uwsgi --plugin=python3 --http :9091 -H /nix/store/mfsp48c1khbv29wd23rqcwagr28bryis-python3-3.10.9-env --callable app --wsgi-file ./hello.py

guangtao 1541518  0.0  0.0  18640  5896 pts/2    S    22:26   0:00 /nix/store/r09q26lws8grvk6vasj398ig02nhpbmi-uwsgi-2.0.21/bin/uwsgi --plugin=python3 --http :9091 -H /nix/store/mfsp48c1khbv29wd23rqcwagr28bryis-python3-3.10.9-env --callable app --wsgi-file ./hello.py

Expected:

  • after Ctrl-C, or when process-compose exits, the related tasks should be terminated (hung up) as well.

not working on Windows Server 2019

Defect

Make sure that these boxes are checked before submitting your issue -- thank you!

  • Included the relevant configuration snippet
  • Included the relevant process-compose log (log location: process-compose info)
  • Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)

Version of process-compose: v0.60.0

OS environment: Windows Server 2019 Datacenter

Steps or code to reproduce the issue: process-compose.exe version

Expected result:

Process Compose
Version:        v0.60.0
Commit:         79d0cbd
Date (UTC):     2023-07-21T21:40:41Z
License:        Apache-2.0

Written by Eugene Berger

Actual result:

panic: open C:\Users\ADMINI~1\AppData\Local\Temp\2\process-compose-WIN-2B62Q54U8RC\Administrator.log: The system cannot find the path specified.

goroutine 1 [running]:
github.com/f1bonacc1/process-compose/src/cmd.Execute()
        /home/eugene/projects/go/process-compose/src/cmd/root.go:54 +0x111
main.main()
        /home/eugene/projects/go/process-compose/src/main.go:9 +0x17

Full command

C:\Users\Administrator\Downloads>process-compose.exe version
panic: open C:\Users\ADMINI~1\AppData\Local\Temp\2\process-compose-WIN-2B62Q54U8RC\Administrator.log: The system cannot find the path specified.

goroutine 1 [running]:
github.com/f1bonacc1/process-compose/src/cmd.Execute()
        /home/eugene/projects/go/process-compose/src/cmd/root.go:54 +0x111
main.main()
        /home/eugene/projects/go/process-compose/src/main.go:9 +0x17

When I manually create the C:\Users\ADMINI~1\AppData\Local\Temp\2\process-compose-WIN-2B62Q54U8RC directory, it works.

I think there's a problem creating the directory!

BTW, thank you for this beautiful app. It is exactly what I was looking for! 😆

Add an option to not send the shutdown signal to a whole process group.

Feature Request

Add an option to send the shutdown signal only to the process started by process-compose instead of its whole process group.

Use Case:

Consider the following Python program (Linux 6.1, Python 3.11):

import os
import signal
import threading
import multiprocessing as mp


stop = threading.Event()


def handler(signum, _frame):
    print(f"Process {os.getpid()} received signal {signum}.", flush=True)
    stop.set()


if __name__ == "__main__":
    print("main:", os.getpid())

    signal.signal(signal.SIGTERM, handler)
    
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=1) as _pool:
        stop.wait()

    print("Pool terminated", flush=True)

This is a small variation on the first example of the documentation of the multiprocessing module.

The relevant part of the process tree is:

  PGID     PID    PPID COMMAND
 177518  177518   98725 /usr/bin/python proc.py
 177518  177542  177518  \_ /usr/bin/python -c from multiprocessing.resource_tracker import main;main(5)
 177518  177543  177518  \_ /usr/bin/python -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=6, pipe_handle=12) --multiprocessing-fork

When terminated with

kill -TERM 177518

the process terminates properly as expected:

main: 177518
Process 177518 received signal 15.
Pool terminated

However, when the termination signal is sent to the whole process group:

kill -TERM -177518

the Pool's context hangs on exit. My understanding is that Pool.__exit__ (the method called when exiting the context) calls Pool.terminate, which hangs because the associated process was already terminated by the signal sent to the process group.

Since, on UNIX, process-compose always sends the registered shutdown signal to the whole process group (process_unix.go:23), such a program cannot be properly terminated by process-compose.

Proposed Change:

Add an option to the shutdown section of the configuration schema to send the shutdown signal to the managed process' PID only, instead of the whole process group.

For example:

shutdown:
    signal: 15
    # ...
    kill_process_group: yes  # default to keep backwards compatibility

Who Benefits From The Change(s)?

Processes spawned by process-compose that actively manage their children processes.
I feel like the current approach is quite invasive since it also circumvents any shutdown ordering the parent process might want to do on the children.

Alternative Approaches

This may well be a bug/feature of the Python standard library. However, while killing the whole process group makes sense when mimicking what docker-compose does (this seems to be the spirit of #49), a less aggressive shutdown strategy is more predictable (and would reflect better what the readme says: "In case only shutdown.signal is defined [1..31] the running process will be terminated with its value.").

parsing of PC_DISABLE_TUI should be consistent with -t flag

Defect

Currently all of the following will result in TUI being disabled:

PC_DISABLE_TUI=false process-compose
PC_DISABLE_TUI=true process-compose
PC_DISABLE_TUI=1 process-compose
PC_DISABLE_TUI=0 process-compose
PC_DISABLE_TUI=whatever process-compose
PC_DISABLE_TUI= process-compose

So the only way to override PC_DISABLE_TUI once it's set is by un-setting the env var, or by adding the -t flag.

By comparison, when using the -t flag, the following result in the TUI being disabled:

process-compose -t=0
process-compose -t=false

while the following result in it being enabled:

process-compose -t=1
process-compose -t=true
process-compose -t

anything else (like -t=whatever) is an error

Version of process-compose:

Version:        v0.51.4
Commit:         e3cc52e
Date (UTC):     2023-06-16T18:14:56Z

OS environment:

nixos

Steps or code to reproduce the issue:

Run process-compose with PC_DISABLE_TUI set to any of the values listed above.

Expected result:

I would expect the parsing of PC_DISABLE_TUI to be consistent with the parsing of -t.

Btw, I think it's a bit confusing that PC_DISABLE_TUI is the reverse of -t. I would deprecate PC_DISABLE_TUI and add PC_TUI instead.

Actual result:

setting PC_DISABLE_TUI to any value is interpreted as true
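For reference, Go's `-t` flag follows `strconv.ParseBool` semantics; a shell sketch of the same acceptance rules that PC_DISABLE_TUI could adopt:

```shell
# Mirror Go's strconv.ParseBool: these are the only accepted spellings,
# everything else is a parse error.
parse_bool() {
  case "$1" in
    1|t|T|TRUE|true|True)    echo true ;;
    0|f|F|FALSE|false|False) echo false ;;
    *)                       echo error ;;
  esac
}
```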

`$` (dollar sign) is being removed from `command`

I'm using https://github.com/Platonic-Systems/process-compose-flake. My config is:

process-compose."dev:web-node" = {
  debug = true;
  settings = {
    shell.shell_command = "nu";
    processes.web = {
      command = "print $env.PWD; print $PWD; print (pwd)";
      working_dir = "./node/web";
    };
  };
};

results in

processes:
  web:
    command: print $env.PWD; print $PWD; print (pwd)
    working_dir: ./node/web
shell:
  shell_argument: -c
  shell_command: nu
[web_1  ] .PWD
[web_1  ] /home/foxpro/craft/sferadel/next
[web_1  ] /home/foxpro/craft/sferadel/next/node/web

so $env.PWD breaks, but $PWD is okay.
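Assuming process-compose follows the docker-compose interpolation convention here (an assumption, not verified against the docs), doubling the dollar sign may pass a literal `$` through to the shell:

```yaml
processes:
  web:
    # Assumption: `$$` escapes interpolation, as in docker-compose
    command: print $$env.PWD; print $$PWD; print (pwd)
    working_dir: ./node/web
```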

Use process group-based killing to make sure frequently starting processes don't remain running after shutdown

Description

I only have anecdotal evidence for this, but I'm still sometimes getting the effects of #43:
After stopping process-compose some of the processes in the DAG still keep running.

Proposal

Inspiration: watchexec
One sure-fire way to guarantee that ALL processes belonging to process-compose receive the signal is to let the kernel handle it, by passing in the process group ID (= PID of process-compose) to kill. This could be added at the end of ShutDownProject with a SIGKILL, as at that point all custom signaling should have happened.
This would make sure that any subprocesses are killed, even ones spawned after the parent received the configured shutdown signal (say, by an incorrectly implemented bash script that does not forward signals to its subprocesses).
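The kernel-level mechanism being proposed is just kill(2) with a negated process-group ID; a shell sketch (`setsid` puts the child in its own group so the example doesn't kill the invoking shell):

```shell
# Start a child in its own process group, then SIGKILL the whole group
# by passing the negated PGID to kill, as the proposal suggests.
setsid sleep 60 &
pgid=$!                 # setsid makes the child its own process-group leader
sleep 0.2               # stand-in for the custom-signal grace period
kill -KILL -- "-$pgid"  # negative PID = signal the entire process group
```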

running "process-compose" with the same port twice makes the second instance inaccessible via REST API

Defect

When multiple instances of process-compose are run concurrently, only the first instance is accessible via the REST API. This can lead to confusion if one forgets that an instance is already running in the background, starts a new one, and then tries to access it via REST or a process-compose command (like process-compose process list, etc.).

This is also the case if any other process is already using the port process-compose is supposed to use.

IMHO, process-compose should exit with an error if it cannot bind to the port. If one doesn't intend to use the REST interface, a flag to disable the REST server could be provided.

Version of process-compose:

Process Compose
Version:        v0.51.4
Commit:         e3cc52e

OS environment:

nixos

Steps or code to reproduce the issue:

Expected result:

Actual result:

Add support for colored logs

Currently ANSI color codes are lost in the TUI process log view.
It would be nice to have an option to actually interpret them in the terminal, as an opt-in flag in the process definition (no breakage).

do not evaluate `log_location` until decide to start process

Feature Request

Evaluate log_location only when the process is about to start.

Use Case:

A process with log_location depends on another process that creates the log folder first.

Proposed Change:

Only try to create the log file right before the process starts.

Maybe write a warning on overall compose start instead.

Who Benefits From The Change(s)?

me

Alternative Approaches

A wrapper around compose that runs some script to set up the folders first, like mounts in docker-compose?

Attempting to install v0.28.0 from Nix results in error regarding `pkg` variable

error: undefined variable 'pkg'

       at /nix/store/1r89xk6v6jhhr4nlixqa9d9pvxllka8j-source/default.nix:8:21:

            7|   pkg = "github.com/f1bonacc1/process-compose";
            8|   ldflags = [ "-X ${pkg}/src/config.Version=v${version} -s -w" ];
             |                     ^
            9|

Command used was nix --extra-experimental-features "nix-command flakes" run github:F1bonacc1/process-compose

How to show stderr and/or stdout in tui?

Hey,

I like the project, thank you! It is quite useful for "local testing" of complex projects, and I really enjoy using it!
One thing I cannot figure out is how to "select" what the tui output window shows.
I want to print the stderr (in red) and stdout for a selected process and view it in the tui.
But I cannot figure out how to do it.
The documentation also seems to be missing this part: https://github.com/F1bonacc1/process-compose#capture-stdout-output

Thanks!

PS: Using latest V0.60.0
