
loft-sh / devpod


Codespaces but open-source, client-only and unopinionated: Works with any IDE and lets you use any cloud, kubernetes or just localhost docker.

Home Page: https://devpod.sh

License: Mozilla Public License 2.0

Go 65.58% Shell 1.53% JavaScript 0.09% HTML 0.06% Rust 5.22% TypeScript 27.43% Dockerfile 0.09%
cloud devcontainer devcontainers developer-tools development docker hacktoberfest ide kubernetes remote-development remote-development-environment vscode

devpod's People

Contributors

89luca89, aacebedo, alexandradragodan, amitds1997, aunali321, dirien, eduardodbr, fabiankramm, frangio, hrittikhere, inhumantsar, jerempy, joycebabu, kianmeng, lizardruss, lukasgentele, matskiv, mpetason, mrsimonemms, mukerjee, neogopher, pascalbreuninger, pbialon, plars, pleclech, shanman190, syedzubeen, thomask33, titilambert, tnbozman


devpod's Issues

Devcontainers implementation doesn't adhere to spec for the lifecycle script commands

What happened?

I've got a workspace running using the DigitalOcean provider and it seems to be running fine. When I run any of the demo repos (e.g. the Rust one) it works fine. When I run anything of my own (e.g. my website) with my own devcontainer (which works fine locally) I get some weird errors, and even with --debug enabled they don't make much sense. It looks like a collection of errors. Clearly the "unsupported type" is the important bit, but it doesn't tell me which type I'm using (I don't have type specified in my devcontainer.json), so I can't tell what to stop using.

This works fine when using devcontainers locally (on 0.40.0), so I'm wondering whether DevPod uses an older version of devcontainers internally.

I'm not necessarily asking for help solving the problem; I'm after help working out what the problem is, because the logs seem to be throwing an unhandled error.

Logs
[11:00:04] fatal exit status 1
agent error: Error trying to reach docker daemon: docker ps: exit status 1
Rerun as root: /home/devpod/.devpod/devpod agent workspace up --workspace-info H4sIAAAAAAAA/+xUXW/aSBT9K6vRPtowM/72GwKUetNARJxKbbVC83EN3mCP5RnTVIj/vhoDFYlC6O5LX/pka+acO/ec+7FD31T7pBsmAKU7VEqUoqrVZaVqqCrtrkqz7rhbKuSgrr+VYYwJ+KFLZMBdnzDqJsCFC5wLEYesYAlHDirURkKLUjRcqwqG2kYbDiRsGyWHQtUGno0eSihYtzHDH0no4aXXm1KYrgWUorUxjU6HQ9VAvWpZsx4ccExrMHogVDWkmIecUEm9JPZYRCAiWGKMMUSSC1oQjEPmM04xB+ZLiHyPRX4QgFckXhSQ8yzO/48vDQ4ZtWpb9iJ3qGaVTU2Wq9KwjRLAarR3UMXEuqx7a4+/We9hb4Or16wF6b4mlRLOQm61UBLsuVZdeyjTqjQLaJQujWq/nzlyzM46cF3A3kGiBWZKVedlBdqwqkEpoph6Lg5dHOY4Tr0wpf4X5KAN0+ZRg3yJIDjFOMXEIo5F7eX1VX1lQHlF+X/tmWNoe/BOzGtFcpBqrAXaAkY301m+nIzy0fJ+lH+wR1u26eBHSsdcTjmxFdS9zAPxpziHjyVNsofbZXY3upmek6QST9C6FLvYt0Onob0/aJAoNW0HJ+ZD9uUFMcCX4NlsNM6zT1n+eZlnd9P5Y37OI7hCPeiv6ThfTubj2+liOV5MJ9NZno0+PpxjbcAz8E2WX0PejcYfstl0mX++f5GtdulWNJ3rr/iltBfTm2w+OydtVE0uofP57XT20shmuSVLKpMiimKv8EjAZVJQj2FPeEx6cSI4BSkKHtOCBkUQF4FHqUeKOCJJhEVAolC+/d7+p6aHkjSgF2fj0D3pDjXMrK+0i4MkM+z+HeAhmoOk+lZvFJOPi49vb4aNKoyr1yd+Cxtg2o7RkTjc4gEZJMhBZc2EKbel+W4lqs4cu8Xe/APC3JRm3IKE2pRso09FP91O+kZ+EwDPIKxwve6MfRalX9Gfu8l8eb+Yf8om08X+D21Ug/7eO4iXNWtL6OfzDIHSrzukbMxNWXfPyEGsFdYcVsnQDo5Yg3jSXWWPcEFi7EWcEEok9hLuhwVgmUQBIYwEkR9GTAALQh4kYVAECSFJJEkSQsiFjOwaOTh/1U73tG9e7KG3TcYD8i7L7ZW5B0F754LctnotlzBBaFzggoeMxT4XvhcmEeWelUl8RqiIgiSmAUkSGnqeSIRkgU+LSEqeEP8Xy+0F7W3pnzoObQ2mL/7eOa5G+7//vbZ/r+3/tbb/BQAA//8BAAD//wc4rq7xCgAA --debug
Use /home/devpod/.devpod/agent/contexts/default/workspaces/mrsimonemms-github-io as workspace dir
unsupported type
github.com/loft-sh/devpod/pkg/types.init
        /home/runner/work/devpod/devpod/pkg/types/types.go:12
runtime.doInit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:6329
runtime.doInit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:6306
runtime.doInit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:6306
runtime.doInit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:6306
runtime.doInit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:6306
runtime.main
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:233
runtime.goexit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594
parsing devcontainer.json
github.com/loft-sh/devpod/pkg/devcontainer.(*Runner).prepare
        /home/runner/work/devpod/devpod/pkg/devcontainer/run.go:68
github.com/loft-sh/devpod/pkg/devcontainer.(*Runner).Up
        /home/runner/work/devpod/devpod/pkg/devcontainer/run.go:112
github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).devPodUp
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:374
github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).up
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:164
github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:98
github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:60
github.com/spf13/cobra.(*Command).execute
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
github.com/spf13/cobra.(*Command).Execute
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
github.com/loft-sh/devpod/cmd.Execute
        /home/runner/work/devpod/devpod/cmd/root.go:71
main.main
        /home/runner/work/devpod/devpod/main.go:12
runtime.main
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:250
runtime.goexit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594
devcontainer up
github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:100
github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
        /home/runner/work/devpod/devpod/cmd/agent/workspace/up.go:60
github.com/spf13/cobra.(*Command).execute
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
github.com/spf13/cobra.(*Command).Execute
        /home/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
github.com/loft-sh/devpod/cmd.Execute
        /home/runner/work/devpod/devpod/cmd/root.go:71
main.main
        /home/runner/work/devpod/devpod/main.go:12
runtime.main
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:250
runtime.goexit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594
error parsing workspace info: rerun as root: exit status 1

github.com/loft-sh/devpod/pkg/agent.InjectAgentAndExecute
        /home/runner/work/devpod/devpod/pkg/agent/inject.go:119
github.com/loft-sh/devpod/cmd.(*UpCmd).devPodUpMachine.func1
        /home/runner/work/devpod/devpod/cmd/up.go:262
runtime.goexit
        /opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594

What did you expect to happen instead?

I would expect it to open. Or at least give me something useful to debug.

How can we reproduce the bug? (as minimally and precisely as possible)

Run my website

My devcontainer.json:

{
  "name": "devcontainer",
  "image": "ghcr.io/mrsimonemms/devcontainers/full",
  "features": {},
  "customizations": {
    "vscode": {
      "settings": {},
      "extensions": [
        "donjayamanne.git-extension-pack",
        "EditorConfig.EditorConfig",
        "waderyan.gitblame",
        "esbenp.prettier-vscode",
        "svelte.svelte-vscode",
        "GitHub.vscode-github-actions"
      ]
    }
  },
  "postStartCommand": {
    "pre-commit": "pre-commit install --install-hooks -t pre-commit -t commit-msg"
  }
}

Local Environment:

  • DevPod Version: 0.1.7
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Cloud Provider: digitalOcean

Anything else we need to know?

Slack thread

Q: Is it possible to support Visual Studio?

Is your feature request related to a problem?
No

Which solution do you suggest?
N/A

Which alternative solutions exist?
N/A

Additional context
I would like to move a lot of my peers into our K8S. A lot of people are used to Visual Studio. Is it possible to support Visual Studio?

I have updated the JetBrains gateway to the latest version, but every time a dialog box pops up asking me to download the latest JetBrains gateway

What happened?
I have updated the JetBrains Gateway to the latest version, but a dialog box still pops up every time asking me to download the latest JetBrains Gateway.

What did you expect to happen instead?

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
    "name": "...",
    ...
}

Local Environment:

  • DevPod Version: [use devpod --version]

  • Operating System: mac
  • ARCH of the OS: AMD64 | ARM64 | i386

DevPod Provider:

  • Cloud Provider: google | aws | azure | digitalOcean
  • Kubernetes Provider: [use kubectl version]
  • Local/remote provider: docker | ssh
  • Custom provider: provide imported provider.yaml config file

Anything else we need to know?

SSH remote doesn't surface errors

What happened?

Tried to use a project with an SSH remote. It appeared to hang on agent init.

What did you expect to happen instead?

I expected it to work, or at least to get an error, without having to turn on debug mode.

Turning on debug I saw: debug zsh:87: command not found: sudo -E sh -c

and then when I changed shell to bash:

Inject Error: sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
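One way the agent could surface this instead of hanging: probe for passwordless sudo up front, since `sudo -n` fails immediately rather than prompting when a password is required. This is an illustrative sketch of that check, not DevPod's actual injection code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// canSudoNonInteractive reports whether sudo can elevate without a
// password prompt. "sudo -n" (non-interactive) exits with an error
// instead of blocking on a password, so this never hangs.
func canSudoNonInteractive() bool {
	return exec.Command("sudo", "-n", "true").Run() == nil
}

func main() {
	if canSudoNonInteractive() {
		fmt.Println("passwordless sudo available")
	} else {
		// Surface a clear error to the user instead of appearing to hang.
		fmt.Println("sudo needs a password; agent injection cannot elevate non-interactively")
	}
}
```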

How can we reproduce the bug? (as minimally and precisely as possible)

ssh remote
apt install -y zsh
chsh -s /bin/zsh
# and/or
sudo visudo  # and drop any "NOPASSWD" bits

My devcontainer.json: n/a; I think this applies to anything.

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: mac
  • ARCH of the OS: arm64

DevPod Provider:

  • Cloud Provider: own hardware
  • Local/remote provider: ssh to Ubuntu 22.04 machine

Anything else we need to know?

hostRequirements, pod nodeselectors in kubernetes provider

Is your feature request related to a problem?

Nope. Loving devpod so far!

Which solution do you suggest?

Please add support to kubernetes provider for devcontainer "hostRequirements" (I see some support exists in a few other providers).

Related for my needs (though unrelated in implementation): kubernetes provider options to pass through pod node selectors would be super useful too.

Which alternative solutions exist?

Additional context

In our k8s cluster, we dynamically provision nodes using Karpenter. Karpenter needs some information about how 'big' the pod needs to be, which I think maps to devcontainer hostRequirements. It can also use node selectors for additional hints like EC2 instance type/spot/etc.

Also on k8s, it's important to tie the pod to the correct OS/architecture, since there may be mixed nodes in the same cluster. I think DevPod knows this information and can set the os/arch node selectors automatically.
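To make the request concrete, here is a sketch of how hostRequirements plus user-supplied node selectors could be translated into a pod spec fragment. Plain maps stand in for the Kubernetes API types so the sketch is self-contained; none of this is DevPod's actual provider code, and the merge behavior shown (user selectors layered over os/arch defaults) is an assumption.

```go
package main

import "fmt"

// HostRequirements mirrors the devcontainer spec's hostRequirements
// fields relevant to sizing (cpus, memory).
type HostRequirements struct {
	CPUs   int
	Memory string
}

// podSpecFragment builds the resource requests and node selectors a
// kubernetes provider could attach to the workspace pod. DevPod already
// knows the target os/arch; user selectors are merged on top so they
// can add hints like Karpenter capacity type.
func podSpecFragment(req HostRequirements, userSelectors map[string]string, goos, goarch string) map[string]any {
	selectors := map[string]string{
		"kubernetes.io/os":   goos,
		"kubernetes.io/arch": goarch,
	}
	for k, v := range userSelectors {
		selectors[k] = v
	}
	return map[string]any{
		"nodeSelector": selectors,
		"resources": map[string]map[string]string{
			"requests": {
				"cpu":    fmt.Sprintf("%d", req.CPUs),
				"memory": req.Memory,
			},
		},
	}
}

func main() {
	frag := podSpecFragment(HostRequirements{CPUs: 4, Memory: "8Gi"},
		map[string]string{"karpenter.sh/capacity-type": "spot"}, "linux", "amd64")
	fmt.Println(frag["nodeSelector"].(map[string]string)["kubernetes.io/arch"]) // amd64
}
```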

feature: custom image for docker provider

Add a configuration option to the docker provider to use a custom image instead of the one inferred from or chosen by devcontainer.json.

This could be useful when the user has personalized development images that they want to keep consistent across multiple projects without necessarily modifying the devcontainer.json in each repo.
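The requested behavior is a simple override: prefer a provider-level option when set, otherwise fall back to whatever devcontainer.json resolves to. A minimal sketch (the option name CUSTOM_IMAGE is invented for illustration; it is not an existing DevPod option):

```go
package main

import "fmt"

// resolveImage returns the provider-level override when present,
// otherwise the image devcontainer.json would have chosen.
// CUSTOM_IMAGE is a hypothetical option name used for this sketch.
func resolveImage(fromDevcontainer string, opts map[string]string) string {
	if img, ok := opts["CUSTOM_IMAGE"]; ok && img != "" {
		return img
	}
	return fromDevcontainer
}

func main() {
	opts := map[string]string{"CUSTOM_IMAGE": "ghcr.io/example/my-dev-image"}
	fmt.Println(resolveImage("mcr.microsoft.com/devcontainers/base", opts)) // ghcr.io/example/my-dev-image
}
```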

(aws): Ability to specify a subnet

Is your feature request related to a problem?
For anything other than the default VPC, you need to specify the subnet the instance should be launched in as part of the RunInstances API call. At the moment the VPC ID can be set, but the subnet cannot.

Which solution do you suggest?
Expose an optional subnet field that propagates all the way into the RunInstances API call.
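The plumbing being asked for is small: an optional option that, when non-empty, is copied into the RunInstances input. The sketch below uses a local struct mirroring the relevant field so it is self-contained (aws-sdk-go's ec2.RunInstancesInput does carry a SubnetId field, but the option name AWS_SUBNET_ID here is an assumption):

```go
package main

import "fmt"

// runInstancesInput mirrors the fields of the EC2 RunInstances call that
// matter for this sketch; the real provider would use the AWS SDK type.
type runInstancesInput struct {
	SubnetID *string // maps to ec2.RunInstancesInput.SubnetId
}

// applySubnet sets the subnet only when the (hypothetical) AWS_SUBNET_ID
// option was provided, preserving today's default-VPC behavior otherwise.
func applySubnet(in *runInstancesInput, subnet string) {
	if subnet != "" {
		in.SubnetID = &subnet
	}
}

func main() {
	var in runInstancesInput
	applySubnet(&in, "subnet-0abc123")
	fmt.Println(*in.SubnetID) // subnet-0abc123
}
```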

Which alternative solutions exist?
None

Additional context

debug Inject Error: /bin/bash: line 1: helper: command not found

What happened?

Tried the demo Go project and it fails with this output:

[19:08:42] info Creating devcontainer...
[19:08:42] debug Inject and run command: //?/C:/Program Files/DevPod/devpod-cli.exe agent workspace up --workspace-info 'H4sIAAAAAAAA/7ySXW+bPBTHv8qjcw0xBBJcbh5VaadFe2lUkUmbkKqDfSBWDUa2SVtVfPeJ0HWd1Gh3u7TOOfD7vzzDg7H3rkdBkD+DkpDD0QkjKbTUGk+ht09hYyCA4TRcRtFFmmUypCpeh2myqkO+lBjGJBJMRSQTKSCA2mhJFnLY5GW5d2RdWUq0lspyIenYG1mWwnSeHv00oRoH7cvyFcaV5RmMXgk/WIIcDt73LmfM9NQ1FvvDolH+MFToHHm3EKZlSZwuMcULrDLkFec1yiolzsWac8wwo1WC0Qpjvl5lMllTlMi04rLmFylP6ypmrRLWOFN7dg7HmqM6SX2GDtuJSxpxTxYCML1XpnPT6Opm8+n69m53WXycnkfUwx+rgyO7mz8lIfd2oDGA7dfLTbH9ti2+3xXbL9c3+2K6fWd1HANoURxUN6U4BqAkvQFqjMZOwhiAM4Odk26Uv6XeOOWNfXpj5mzhyby/aB8DEJZwkliolpzHtj8VZJmE0SqMsyJa5hHP0+gHBKDR+b0jeX7jpQ6TK3Mdpj9gQ52feHv0B8iBsf/ZJmc7axqL7X8flCbHrui4M5LNvQqFVgt6JAhAmodOG5T728/vK9Sm9qE7vFwyS5rQkWO/DtkxWsSLJQRAjyRma++HimxHntz8fonwN+JrpqpzHrWGHGrUjmBK6R9W4icAAAD//wEAAP//OqRaQtsDAAA=' --debug
[19:08:42] debug execute inject script
[19:08:42] debug Run command provider command: ${DEVPOD} helper sh -c "${COMMAND}"
[19:08:42] debug /bin/bash: line 1: helper: command not found
[19:08:42] debug done exec
[19:08:42] debug done inject
[19:08:42] debug done injecting
[19:08:42] debug Inject Error: /bin/bash: line 1: helper: command not found
EOF
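The injected template in the log is `${DEVPOD} helper sh -c "${COMMAND}"`, and bash complains that `helper` itself is not found. That is consistent with `${DEVPOD}` expanding to nothing, or to an unquoted path containing spaces (`C:/Program Files/...`) that the shell word-splits. Quoting the expansion when the command string is built would rule out the word-splitting half of the problem; a minimal sketch (illustrative only, DevPod's real templating may differ):

```go
package main

import "fmt"

// buildInjectCommand quotes the binary path and the wrapped command so
// that a path with spaces (e.g. under "C:/Program Files") survives shell
// word-splitting. Hypothetical helper; not DevPod's actual code.
func buildInjectCommand(devpodPath, command string) string {
	return fmt.Sprintf("%q helper sh -c %q", devpodPath, command)
}

func main() {
	fmt.Println(buildInjectCommand("C:/Program Files/DevPod/devpod-cli.exe", "echo hi"))
	// "C:/Program Files/DevPod/devpod-cli.exe" helper sh -c "echo hi"
}
```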

What did you expect to happen instead?

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
    "name": "...",
    ...
}

Local Environment:

  • DevPod Version: [use devpod --version]
  • Operating System: windows
  • ARCH of the OS: AMD64

DevPod Provider:

  • Local/remote provider: docker

Anything else we need to know?

DevPod starts with blank window and nothing works

What happened?
I installed the DevPod app from the website https://devpod.sh/ (Apple Silicon); it starts with a blank window and nothing works.

What did you expect to happen instead?
The app to start.

How can we reproduce the bug? (as minimally and precisely as possible)
 🤷‍♀️

Local Environment:

  • DevPod Version: 0.1.4 [use devpod --version]
  • Operating System: mac
  • ARCH of the OS: AMD64 | ARM64 | i386

DevPod Provider:

  • Cloud Provider: not setup
  • Kubernetes Provider: [use kubectl version]
  • Local/remote provider: docker | ssh
  • Custom provider: provide imported provider.yaml config file

Anything else we need to know?

Logs

default	10:00:08.852132-0600	DevPod	Received configuration update from daemon (initial)
default	10:00:08.852930-0600	DevPod	CHECKIN: pid=84938
default	10:00:08.869647-0600	DevPod	CHECKEDIN: pid=84938 asn=0x0-0x5e55e5 foreground=1
default	10:00:08.871766-0600	DevPod	FRONTLOGGING: version 1
default	10:00:08.871828-0600	DevPod	Registered, pid=84938 ASN=0x0,0x5e55e5
default	10:00:08.874679-0600	DevPod	BringForward: pid=84938 asn=0x0-0x5e55e5 bringForward=1 foreground=1 uiElement=0 launchedByLS=1 modifiersCount=1 allDisabled=0
default	10:00:08.874783-0600	DevPod	BringFrontModifier: pid=84938 asn=0x0-0x5e55e5 Modifier 0 hideAfter=0 hideOthers=0 dontMakeFrontmost=0 mouseDown=0/0 seed=0/0
default	10:00:08.874903-0600	DevPod	BringForward: pid=84938 asn=0x0-0x5e55e5
default	10:00:08.874933-0600	DevPod	SetFrontProcess: asn=0x0-0x5e55e5 options=0
default	10:00:08.880038-0600	DevPod	Current system appearance, (HLTB: 1), (SLS: 0)
default	10:00:08.880985-0600	DevPod	No persisted cache on this platform.
default	10:00:08.882098-0600	DevPod	Post-registration system appearance: (HLTB: 1)
default	10:00:08.914007-0600	DevPod	NSApp cache appearance:
-NSRequiresAquaSystemAppearance: 0
-appearance: (null)
-effectiveAppearance: <NSCompositeAppearance: 0x6000010c8c80
 (
    "<NSAquaAppearance: 0x6000010c8e80>",
    "<NSSystemAppearance: 0x6000010d1780>"
)>
default	10:00:08.967426-0600	DevPod	0x109038c40 - [PID=0] WebProcessCache::updateCapacity: Cache is disabled because process swap on navigation is disabled
default	10:00:08.970657-0600	DevPod	-[SOAuthorization init]  on <private>
default	10:00:08.970828-0600	DevPod	-[SOAuthorizationCore init]  on <private>
default	10:00:08.971060-0600	DevPod	<SOServiceConnection: 0x600003ed26c0>: new XPC connection
default	10:00:08.971148-0600	DevPod	0x109034a00 - [PID=0] WebProcessProxy::constructor:
default	10:00:08.971278-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=0] WebPageProxy::constructor:
default	10:00:08.971491-0600	DevPod	0x109034a00 - [PID=0] WebProcessProxy::addExistingWebPage: webPage=0x129044e18, pageProxyID=5, webPageID=6
default	10:00:08.971515-0600	DevPod	0x109038c40 - [PID=0] WebProcessCache::updateCapacity: Cache is disabled by client
default	10:00:08.972763-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=0] WebPageProxy::loadRequest:
default	10:00:08.972790-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=0] WebPageProxy::launchProcess:
default	10:00:08.972800-0600	DevPod	0x109034a00 - [PID=0] WebProcessProxy::removeWebPage: webPage=0x129044e18, pageProxyID=5, webPageID=6
default	10:00:08.973021-0600	DevPod	0x109034d00 - [PID=0] WebProcessProxy::constructor:
default	10:00:08.974970-0600	DevPod	0x109011c20 - [PID=0, throttler=0x109034e78] ProcessThrottler::Activity::Activity: Starting background activity / 'WebProcess initialization'
default	10:00:08.976299-0600	DevPod	<PKDiscoveryDriver:0x600001ad0420> created discovery driver
default	10:00:08.976336-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 0, point: com.apple.services
default	10:00:08.976403-0600	DevPod	<PKDiscoveryDriver:0x600001acd860> created discovery driver
default	10:00:08.976435-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 0, point: com.apple.ui-services
default	10:00:08.977468-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 0
default	10:00:08.977494-0600	DevPod	<PKDiscoveryDriver:0x600001ad0420> delivering update to host (0 plugins)
default	10:00:08.977596-0600	DevPod	<PKDiscoveryDriver:0x600001ad0420> installing watchers for continuous discovery
default	10:00:08.982161-0600	DevPod	<<<< Alt >>>> fpSupport_GetVideoRangeForCoreDisplayWithPreference: displayID 3 reported potentialHeadRoom=1 wideColorSupported=NO marz=NO almd=NO deviceAllowsHDR=YES isBuiltinPanel=NO externalPanel=YES prefersHDR10=NO
default	10:00:08.992259-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 2
default	10:00:08.992355-0600	DevPod	<PKDiscoveryDriver:0x600001acd860> delivering update to host (2 plugins)
default	10:00:08.996840-0600	DevPod	<PKDiscoveryDriver:0x600001acd860> installing watchers for continuous discovery
default	10:00:08.998337-0600	DevPod	<<<< Alt >>>> fpSupport_GetVideoRangeForCoreDisplayWithPreference: displayID 2 reported potentialHeadRoom=1 wideColorSupported=NO marz=NO almd=NO deviceAllowsHDR=YES isBuiltinPanel=NO externalPanel=YES prefersHDR10=NO
default	10:00:08.998785-0600	DevPod	0x109034a00 - [PID=0] WebProcessProxy::destructor:
default	10:00:08.998816-0600	DevPod	0x109034b78 - [PID=0] ProcessThrottler::invalidateAllActivities: BEGIN (foregroundActivityCount: 0, backgroundActivityCount: 0)
default	10:00:08.998827-0600	DevPod	0x109034b78 - [PID=0] ProcessThrottler::invalidateAllActivities: END
default	10:00:08.998838-0600	DevPod	0x109034d00 - [PID=0] WebProcessProxy::addExistingWebPage: webPage=0x129044e18, pageProxyID=5, webPageID=6
default	10:00:09.000911-0600	DevPod	[0x129045e20] CVCGDisplayLink::setCurrentDisplay: 0
default	10:00:09.001009-0600	DevPod	[0x129045e00] CVDisplayLinkCreateWithCGDisplays count: 1 [displayID[0]: 0x0] [CVCGDisplayLink: 0x129045e20]
default	10:00:09.001032-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=0] WebPageProxy::loadRequestWithNavigationShared:
default	10:00:09.001396-0600	DevPod	nw_path_evaluator_start [7D890091-2AC3-4B37-B664-75A911E2BCD7 <NULL> generic, attribution: developer]
	path: satisfied (Path is satisfied), interface: en0, ipv4, dns
default	10:00:09.002006-0600	DevPod	Faulting in CFHTTPCookieStorage singleton
default	10:00:09.002017-0600	DevPod	Creating default cookie storage with process/bundle identifier
default	10:00:09.002813-0600	DevPod	Faulting in NSHTTPCookieStorage singleton
default	10:00:09.003835-0600	DevPod	[0x13a826620] CVCGDisplayLink::setCurrentDisplay: 3
default	10:00:09.003860-0600	DevPod	[0x13a826600] CVDisplayLinkCreateWithCGDisplays count: 1 [displayID[0]: 0x3] [CVCGDisplayLink: 0x13a826620]
default	10:00:09.004352-0600	DevPod	SetFrontProcess: asn=0x0-0x5e55e5 options=1
default	10:00:09.005404-0600	DevPod	Registering for test daemon availability notify post.
default	10:00:09.005511-0600	DevPod	notify_get_state check indicated test daemon not ready.
default	10:00:09.005584-0600	DevPod	notify_get_state check indicated test daemon not ready.
default	10:00:09.040220-0600	DevPod	SignalReady: pid=84938 asn=0x0-0x5e55e5
default	10:00:09.041178-0600	DevPod	SIGNAL: pid=84938 asn=0x0x-0x5e55e5
default	10:00:09.058645-0600	DevPod	Initializing connection
default	10:00:09.058681-0600	DevPod	Removing all cached process handles
default	10:00:09.058726-0600	DevPod	Sending handshake request attempt #1 to server
default	10:00:09.058743-0600	DevPod	Creating connection to com.apple.runningboard
default	10:00:09.059586-0600	DevPod	Handshake succeeded
default	10:00:09.059600-0600	DevPod	Identity resolved as app<application.sh.loft.devpod.66206459.66206475(502)>
default	10:00:09.069626-0600	DevPod	0x109038c40 - [PID=0] WebProcessCache::setApplicationIsActive: (isActive=1)
default	10:00:09.093915-0600	DevPod	SetFrontProcess: asn=0x0-0x5e55e5 options=1
default	10:00:09.097908-0600	DevPod	filteredItemsFromItems:<private> [84938]--> <private>
default	10:00:09.097933-0600	DevPod	Requesting sharingServicesForFilteredItems:<private> mask:<private>
default	10:00:09.097950-0600	DevPod	Query extensions (async) for items: <private> onlyViewerOrEditor:1
default	10:00:09.098009-0600	DevPod	filteredItemsFromItems:<private> [84938]--> <private>
default	10:00:09.098026-0600	DevPod	Requesting sharingServicesForFilteredItems:<private> mask:<private>
default	10:00:09.098035-0600	DevPod	Query extensions (async) for items: <private> onlyViewerOrEditor:1
default	10:00:09.098179-0600	DevPod	0x109034d00 - [PID=84965] WebProcessProxy::didFinishLaunching:
default	10:00:09.101305-0600	DevPod	filteredItemsFromItems:<private> [84938]--> <private>
default	10:00:09.101322-0600	DevPod	Requesting sharingServicesForFilteredItems:<private> mask:<private>
default	10:00:09.101336-0600	DevPod	Query extensions (async) for items: <private> onlyViewerOrEditor:1
default	10:00:09.101388-0600	DevPod	Matching dictionary: <private>, attributesArray: <private>
default	10:00:09.101406-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.101391-0600	DevPod	Matching dictionary: <private>, attributesArray: <private>
default	10:00:09.101613-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.102190-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.ui-services
default	10:00:09.102217-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.ui-services
default	10:00:09.102324-0600	DevPod	Matching dictionary: <private>, attributesArray: <private>
default	10:00:09.102361-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.102422-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.ui-services
default	10:00:09.108971-0600	DevPod	0x1090741a0 - NetworkProcessProxy is taking a background assertion because a web process is requesting a connection
default	10:00:09.116625-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 2
default	10:00:09.116790-0600	DevPod	2 plugins found
default	10:00:09.117600-0600	DevPod	Service with identifier <private> passes activation rule: 1
default	10:00:09.122828-0600	DevPod	UNIX error exception: 17
default	10:00:09.124845-0600	DevPod	Plugin <private> not enabled, skip it.
default	10:00:09.124881-0600	DevPod	1 compatible services found for attributes <private>
default	10:00:09.124938-0600	DevPod	Discovery done
default	10:00:09.124979-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.125148-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.services
default	10:00:09.132025-0600	DevPod	UNIX error exception: 17
default	10:00:09.139817-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 2
default	10:00:09.140399-0600	DevPod	2 plugins found
default	10:00:09.140741-0600	DevPod	Service with identifier <private> passes activation rule: 0
default	10:00:09.140840-0600	DevPod	Service dictionary for plugin <private> not available, skip it.
default	10:00:09.141035-0600	DevPod	Plugin <private> not enabled, skip it.
default	10:00:09.141139-0600	DevPod	0 compatible services found for attributes <private>
default	10:00:09.141258-0600	DevPod	Discovery done
default	10:00:09.141371-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.141566-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.services
default	10:00:09.142391-0600	DevPod	UNIX error exception: 17
default	10:00:09.153220-0600	DevPod	0x109011c20 - [PID=0, throttler=0x109034e78] ProcessThrottler::Activity::invalidate: Ending background activity / 'WebProcess initialization'
default	10:00:09.160839-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 2
default	10:00:09.161227-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 0
default	10:00:09.161319-0600	DevPod	2 plugins found
default	10:00:09.161387-0600	DevPod	0 plugins found
default	10:00:09.161475-0600	DevPod	0 compatible services found for attributes <private>
default	10:00:09.161591-0600	DevPod	Discovery done
default	10:00:09.161638-0600	DevPod	Service with identifier <private> passes activation rule: 0
default	10:00:09.161680-0600	DevPod	Completed querying extensions: <private>
default	10:00:09.161700-0600	DevPod	Service dictionary for plugin <private> not available, skip it.
default	10:00:09.161830-0600	DevPod	Plugin <private> not enabled, skip it.
default	10:00:09.161858-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 0
default	10:00:09.161879-0600	DevPod	0 compatible services found for attributes <private>
default	10:00:09.161973-0600	DevPod	0 plugins found
default	10:00:09.162074-0600	DevPod	Discovery done
default	10:00:09.162101-0600	DevPod	0 compatible services found for attributes <private>
default	10:00:09.162137-0600	DevPod	Discover extensions with attributes <private>
default	10:00:09.162153-0600	DevPod	Discovery done
default	10:00:09.162213-0600	DevPod	Completed querying extensions: <private>
default	10:00:09.162252-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Beginning discovery for flags: 1024, point: com.apple.services
default	10:00:09.162900-0600	DevPod	[d <private>] <PKHost:0x600002bc12c0> Completed discovery. Final # of matches: 0
default	10:00:09.162970-0600	DevPod	0 plugins found
default	10:00:09.163026-0600	DevPod	0 compatible services found for attributes <private>
default	10:00:09.163115-0600	DevPod	Discovery done
default	10:00:09.163182-0600	DevPod	Completed querying extensions: <private>
default	10:00:09.164932-0600	DevPod	Sorted services: <private>
default	10:00:09.165090-0600	DevPod	Sorted services: <private>
default	10:00:09.165179-0600	DevPod	Sorted services: <private>
default	10:00:09.175189-0600	DevPod	[0x129045e00] CVDisplayLinkStart
default	10:00:09.175223-0600	DevPod	[0x129045e20] CVDisplayLink::start
default	10:00:09.175282-0600	DevPod	[0x6000015ca0d0] CVXTime::reset
default	10:00:09.179903-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::decidePolicyForNavigationAction: frameID=4, navigationID=1
default	10:00:09.180051-0600	DevPod	0x1090118f0 - SOAuthorizationCoordinator::tryAuthorize
default	10:00:09.180798-0600	DevPod	-[SOConfigurationClient init]  on <private>
default	10:00:09.180876-0600	DevPod	<SOServiceConnection: 0x600003e88300>: new XPC connection
default	10:00:09.181039-0600	DevPod	0x1090118f0 - SOAuthorizationCoordinator::tryAuthorize: Cannot authorize the requested URL.
default	10:00:09.181304-0600	DevPod	[0x13a826600] CVDisplayLinkStart
default	10:00:09.181361-0600	DevPod	[0x13a826620] CVDisplayLink::start
default	10:00:09.181483-0600	DevPod	[0x6000015ca3e0] CVXTime::reset
default	10:00:09.181467-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::decidePolicyForNavigationAction: listener called: frameID=4, navigationID=1, policyAction=0, safeBrowsingWarning=0, isAppBoundDomain=0
default	10:00:09.181596-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::receivedNavigationPolicyDecision: frameID=4, navigationID=1, policyAction=0
default	10:00:09.181641-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::decidePolicyForNavigationAction: keep using process 84965 for navigation, reason=Process has not yet committed any provisional loads
default	10:00:09.185024-0600	DevPod	client.trigger:#N CCFG for cid 0x62 has # of profiles: 0
default	10:00:09.211201-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didStartProvisionalLoadForFrame: frameID=4
default	10:00:09.211221-0600	DevPod	0x109034d00 - [PID=84965] WebProcessProxy::didStartProvisionalLoadForMainFrame:
default	10:00:09.212255-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didNavigateWithNavigationDataShared:
default	10:00:09.212707-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didCommitLoadForFrame: frameID=4
default	10:00:09.223455-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didFinishLoadForFrame: frameID=4
default	10:00:09.278764-0600	DevPod	<<<< Alt >>>> fpSupport_GetVideoRangeForCoreDisplayWithPreference: displayID 3 reported potentialHeadRoom=1 wideColorSupported=NO marz=NO almd=NO deviceAllowsHDR=YES isBuiltinPanel=NO externalPanel=YES prefersHDR10=NO
default	10:00:09.278896-0600	DevPod	<<<< Alt >>>> fpSupport_GetVideoRangeForCoreDisplayWithPreference: displayID 2 reported potentialHeadRoom=1 wideColorSupported=NO marz=NO almd=NO deviceAllowsHDR=YES isBuiltinPanel=NO externalPanel=YES prefersHDR10=NO
default	10:00:09.279020-0600	DevPod	0x109098380 - GPUProcessProxy is taking a background assertion because a web process is requesting a connection
default	10:00:09.384742-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didNavigateWithNavigationDataShared:
default	10:00:09.384773-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didSameDocumentNavigationForFrame: frameID=4
default	10:00:09.389856-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didFinishDocumentLoadForFrame: frameID=4
default	10:00:09.440335-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didNavigateWithNavigationDataShared:
default	10:00:09.440390-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::didSameDocumentNavigationForFrame: frameID=4
default	10:00:09.464831-0600	DevPod	0x129044e18 - [pageProxyID=5, webPageID=6, PID=84965] WebPageProxy::updateActivityState: view visibility state changed 0 -> 1
default	10:00:09.712385-0600	DevPod	[0x129045e00] CVDisplayLinkStop
default	10:00:09.712446-0600	DevPod	[0x129045e20] CVDisplayLink::stop
default	10:00:09.845570-0600	DevPod	[0x13a826600] CVDisplayLinkStop
default	10:00:09.845679-0600	DevPod	[0x13a826620] CVDisplayLink::stop
default	10:00:09.968003-0600	DevPod	[0x13a826600] CVDisplayLinkStart
default	10:00:09.968030-0600	DevPod	[0x13a826620] CVDisplayLink::start
default	10:00:09.968099-0600	DevPod	[0x6000015ca3e0] CVXTime::reset
default	10:00:10.346743-0600	DevPod	[0x13a826600] CVDisplayLinkStop
default	10:00:10.346813-0600	DevPod	[0x13a826620] CVDisplayLink::stop
default	10:00:10.391412-0600	DevPod	[0x13a826600] CVDisplayLinkStart
default	10:00:10.391474-0600	DevPod	[0x13a826620] CVDisplayLink::start
default	10:00:10.391597-0600	DevPod	[0x6000015ca3e0] CVXTime::reset
default	10:00:10.779378-0600	DevPod	[0x13a826600] CVDisplayLinkStop
default	10:00:10.779468-0600	DevPod	[0x13a826620] CVDisplayLink::stop
default	10:00:11.234520-0600	DevPod	0x109038c40 - [PID=0] WebProcessCache::setApplicationIsActive: (isActive=0)
default	10:00:11.739481-0600	DevPod	[0x13a826600] CVDisplayLinkStart
default	10:00:11.739537-0600	DevPod	[0x13a826620] CVDisplayLink::start
default	10:00:11.740319-0600	DevPod	[0x6000015ca3e0] CVXTime::reset

AWS Provider shouldn't use static IAM keys if possible

Is your feature request related to a problem?
I would love to make use of this tool, but we explicitly don't support or allow creating IAM users, and thus don't allow setting access keys directly. The alternative is a time-limited STS token, but digging those out of the ~/.aws/credentials file is tedious and repetitive.

Which solution do you suggest?
Using IAM Roles is the preferred option, and is often accomplished through the use of profiles:

https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
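For illustration, a profile that assumes a role via STS can be declared in ~/.aws/config like this (account ID, role, and profile names are placeholders):

```ini
# ~/.aws/config -- placeholder names, for illustration only
[profile devpod]
role_arn = arn:aws:iam::123456789012:role/DevPodRole
source_profile = base
region = us-east-1

[profile base]
region = us-east-1
```

Tools built on the AWS SDKs resolve such profiles, including role assumption and credential refresh, automatically (e.g. via AWS_PROFILE=devpod), which is what makes profiles preferable to copying STS tokens around.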

Which alternative solutions exist?
I'm sure there are a lot of other tools, but as this comes from the vendor, I would hope that most tools would lean on the options offered OOTB from Amazon.

Additional context

Docker Desktop not recognized on macOS

What happened?
I was using Rancher Desktop. Since DevPod didn't recognize it, I uninstalled it and installed Docker Desktop instead. However, DevPod still does not recognize Docker as a provider.

What did you expect to happen instead?
The GUI should report that it is able to detect Docker on my MacBook.

How can we reproduce the bug? (as minimally and precisely as possible)
N/A

Local Environment:

  • DevPod Version: 0.1.7
  • Operating System: mac
  • ARCH of the OS: i386

DevPod Provider:

  • Local provider: docker

Panic when creating a workspace for a private gitlab repository

What happened?

A panic occurs when creating a workspace for a private repository on gitlab.com

07:55:34 fatal panic: runtime error: index out of range [1] with length 1 goroutine 1 [running]:
runtime/debug.Stack()
	/home/jsiebens/sdk/go1.20.1/src/runtime/debug/stack.go:24 +0x65
github.com/loft-sh/devpod/cmd.Execute.func1()
	/workbench/workspaces/oss/projects/devpod/cmd/root.go:61 +0x3d
panic({0x14481a0, 0xc0003011b8})
	/home/jsiebens/sdk/go1.20.1/src/runtime/panic.go:884 +0x213
github.com/loft-sh/devpod/pkg/workspace.getProjectImage({0x7ffd18f84241?, 0x26?})
	/workbench/workspaces/oss/projects/devpod/pkg/workspace/workspace.go:444 +0x285
github.com/loft-sh/devpod/pkg/workspace.resolve(0xc000609200, 0xc0003f8000, {0x7ffd18f84241, 0x26}, {0xc000630340, 0x6}, {0xc000401940, 0x39}, 0x0)
	/workbench/workspaces/oss/projects/devpod/pkg/workspace/workspace.go:368 +0x385
github.com/loft-sh/devpod/pkg/workspace.createWorkspace({0x172e658, 0xc0000440b8}, 0xc0003f8000, {0xc000630340, 0x6}, {0x7ffd18f84241, 0x26}, {0x0, 0x0}, {0x20dd450, ...}, ...)
	/workbench/workspaces/oss/projects/devpod/pkg/workspace/workspace.go:243 +0x125
github.com/loft-sh/devpod/pkg/workspace.resolveWorkspace({0x172e658, 0xc0000440b8}, 0xc0003f8000, {0xc00063c240?, 0xc0006434d8?, 0x0?}, {0x0, 0x0}, {0x0, 0x0}, ...)
	/workbench/workspaces/oss/projects/devpod/pkg/workspace/workspace.go:218 +0x325
github.com/loft-sh/devpod/pkg/workspace.ResolveWorkspace({0x172e658, 0xc0000440b8}, 0x0?, {0x0, 0x0}, {0x20dd450, 0x0, 0x0}, {0xc00063c240, 0x1, ...}, ...)
	/workbench/workspaces/oss/projects/devpod/pkg/workspace/workspace.go:151 +0x170
github.com/loft-sh/devpod/cmd.NewUpCmd.func1(0xc00040b800?, {0xc00063c240, 0x1, 0x1})
	/workbench/workspaces/oss/projects/devpod/cmd/up.go:62 +0x1a8
github.com/spf13/cobra.(*Command).execute(0xc00040b800, {0xc00063c200, 0x1, 0x1})
	/workbench/workspaces/oss/projects/devpod/vendor/github.com/spf13/cobra/command.go:916 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0xc000004f00)
	/workbench/workspaces/oss/projects/devpod/vendor/github.com/spf13/cobra/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/workbench/workspaces/oss/projects/devpod/vendor/github.com/spf13/cobra/command.go:968
github.com/loft-sh/devpod/cmd.Execute()
	/workbench/workspaces/oss/projects/devpod/cmd/root.go:71 +0x59
main.main()
	/workbench/workspaces/oss/projects/devpod/main.go:12 +0xca
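The trace points at an unguarded index into a split result inside getProjectImage ("index out of range [1] with length 1"). As a language-neutral sketch of the defensive pattern (in Python, with a hypothetical helper name, not DevPod's actual code), the fix is to guard the length before indexing:

```python
def project_image_tag(image_ref: str) -> str:
    """Return the tag of an image reference, or 'latest' if none is present.

    Guards the split result instead of assuming two parts, which is the
    kind of check whose absence causes 'index out of range [1] with length 1'.
    """
    parts = image_ref.rsplit(":", 1)
    if len(parts) < 2 or "/" in parts[1]:
        # No tag present (or the colon belonged to a registry port).
        return "latest"
    return parts[1]
```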

What did you expect to happen instead?

The workspace is created correctly for such a private repository.

How can we reproduce the bug? (as minimally and precisely as possible)

Local Environment:

  • DevPod Version: v0.1.8
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Cloud Provider: google | aws | azure | digitalOcean
  • Local/remote provider: docker | ssh
  • Custom provider: provide imported provider.yaml config file

Anything else we need to know?

Unable to open second workspace in VS Code

What happened?
I am unable to open a second workspace in VS Code.
VS Code Remote will hang at the point where it is trying to copy data via scp:
(screenshot: VS Code Remote hanging while copying data via scp)

Any attempt to open a new workspace results in this issue.

What did you expect to happen instead?
To be able to open new workspaces in VS Code.

How can we reproduce the bug? (as minimally and precisely as possible)

  • Create a workspace and set it to open in VS Code.
  • Create a second workspace and try to open it in VS Code.

My devcontainer.json:

{
	"name": "devpod-test-2",
	"build": {
		"dockerfile": "Dockerfile"
	}
}

My Dockerfile:

FROM ubuntu:jammy

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Cloud Provider: aws

Anything else we need to know?

  • I can delete and then create my original workspace again without any problem. VS Code does not get stuck during the Remote setup process.
  • Even with the original workspace deleted, I am still unable to open any new workspaces in VS Code.

HTTP proxy

Is your feature request related to a problem?

Yes: I'm unable to use DevPod in a corporate environment.

Which solution do you suggest?

Add support for an HTTP proxy, via settings or (on Linux) by supporting the standard variables http_proxy and no_proxy and/or their uppercase versions.
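These variables are a long-standing convention; Go's standard library already honors them via http.ProxyFromEnvironment (whether DevPod's download paths go through it is an assumption). A minimal Python illustration of the same environment contract:

```python
import os
import urllib.request

# http_proxy/https_proxy select a proxy; no_proxy lists hosts that bypass it.
os.environ["http_proxy"] = "http://proxy.example:3128"
os.environ["no_proxy"] = "internal.example"

proxies = urllib.request.getproxies()  # re-reads the environment on each call
bypass = urllib.request.proxy_bypass_environment("internal.example")

print(proxies["http"])  # http://proxy.example:3128
print(bool(bypass))     # True
```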

Which alternative solutions exist?

Additional context

The error occurred when trying to download https://github.com/gitpod-io/openvscode-server/releases/download/openvscode-server-v1.76.2/openvscode-server-v1.76.2-linux-x64.tar.gz

QEMU provider

Hello! Is it feasible to create a provider for QEMU? I work with embedded software, and access to physical devices is critical. Docker and Podman have limitations that make the general case impossible. A full VM like QEMU (with full access to its features) provides most of what embedded developers need. If I understand the DevPod model correctly, it should be feasible to create a provider for QEMU. If so, is that on the roadmap, or something the community has already expressed interest in?

Cheers!

Allow specifying DOCKER_HOST for docker provider

Is your feature request related to a problem?
Docker is configured on my local machine to use a remote host running docker (i.e. DOCKER_HOST=ssh://user@my-docker-host). However when trying to use the Docker provider I see a message saying

Seems like docker is not reachable on your system.
Please make sure docker is installed and running.
You can verify if docker is running correctly via 'docker ps'
init: exit status 1

Which solution do you suggest?
Allow for specifying DOCKER_HOST in the Advanced Options section or have the provider check to see if that is configured at the system level instead of (my assumption) looking for a running docker process.

Which alternative solutions exist?

I'm sure I could get something similar to what I want running using the ssh provider? I just haven't made it that far yet 😄

Additional context

I have a handful of remote systems running docker that I use for dev machines and it would be great to be able to continue to use them in this manner with devpod.
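A sketch of the suggested check, in Python with a hypothetical function name: resolve the endpoint the way the docker CLI does, with DOCKER_HOST taking precedence over the default local socket (Unix default shown; Windows would use a named pipe):

```python
import os

def resolve_docker_host(env=None) -> str:
    """Resolve the docker endpoint the way the docker CLI does:
    DOCKER_HOST wins, otherwise fall back to the default local socket."""
    if env is None:
        env = os.environ
    host = env.get("DOCKER_HOST", "").strip()
    if host:
        return host  # e.g. ssh://user@my-docker-host or tcp://host:2376
    return "unix:///var/run/docker.sock"

print(resolve_docker_host({"DOCKER_HOST": "ssh://user@my-docker-host"}))
```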

Failed to start PHP devpod

What happened?
Failed to start a remote PHP devpod using the ssh provider.

[10:12:18] info Workspace php-devpod already exists
[10:12:18] info Creating devcontainer...
[10:12:18] debug Inject and run command: '/opt/devpod/agent' agent workspace up --workspace-info 'H4sIAAAAAAAA/+xS7W6jOBR9l/ub7xAg/NoozabsdpMqJauZEVJk7EugAWzZJm0V8e4jQjOTSpX6AvMPm3N8zz3nnOGFy6MShCLEZ6gYxCBKYTI8Cc7AgO5y5Uxms0lEctMlGJp+4FGTsCg0g4IEfjgNnZBOwYCC1wwlxLCIs2ynUKoseyZHfMkya3wxyyhvNb5qlWUMC9LVOst+SVBZ9mG4qKjuJEIMpdZCxbbNBbYHSURpHSpddjlRCrWyKG/sSV7kzpQyL/do7jLHY+E0L/LA9SZh4fszWtCZV/gREjINiyhgUcTQ853CiWjkE7SbikqueKHtk6KcoSmx4RpNLd9MUYpBj+Sn6rLhGVrSDMKUKsEALnTFWzXcz1fLdbp/nKf3w+lE6m6A2Vxoe9zLJgdsNfQGLL+l2/n+74f56gnic2/A/eYpvWU1RL7+5c48yw0iy7VcxxkSUSgfRyEMYi077A1I1vNFmvyfpN/3afLfcrNLxxeT9T/LRbq/2yz+XW73i+3ybrlOk/nD0+2Y4Q34DV4l6VfIx832g1DPg77vDWgILasWx9kVwxujRk8HsuKdHOt2qPQWBVeV5vLtJuUx20uqX4XSG0AlksH+tGpQadKIQZDjTUxnarpR6gbxNIid6AcYUBOld2rw7QMijF0vdi+I93pCDO/1HCaMicVnEESXn6ZpAOMvbc0J220fPt+k5oU2VXnlSayRKFT2lWifHMu1fDCgap+R6lWlFxIZtroitbqaf/17x+kR5acAfEU6JnDscpQtalTjmV1Iw3f/p7NjZ38CAAD//wEAAP//7lkGkQEFAAA=' --debug
[10:12:18] debug execute inject script
[10:12:18] debug Run command provider command: ssh -oStrictHostKeyChecking=no \
    -p ${PORT} \
    ${EXTRA_FLAGS} \
    "${HOST}" \
    "${COMMAND}"
[10:12:28] debug done inject
[10:12:28] debug done injecting
[10:12:28] info Waiting for devpod agent to come up...
[10:12:28] debug Inject Error: context deadline exceeded
[10:12:28] debug done exec
...
[10:17:28] fatal context deadline exceeded
timeout waiting for instance connection
github.com/loft-sh/devpod/pkg/agent.InjectAgentAndExecute
        D:/a/devpod/devpod/pkg/agent/inject.go:117
github.com/loft-sh/devpod/cmd.(*UpCmd).devPodUpMachine.func1
        D:/a/devpod/devpod/cmd/up.go:262
runtime.goexit
        C:/hostedtoolcache/windows/go/1.19.9/x64/src/runtime/asm_amd64.s:1594

What did you expect to happen instead?
I expected the devpod to start.

How can we reproduce the bug? (as minimally and precisely as possible)
Install DevPod for Windows, create an SSH provider, and try to launch a PHP workspace.

My devcontainer.json:

{
    "name": "...",
    ...
}

Local Environment:

  • DevPod Version: 0.1.4
  • Operating System: windows
  • ARCH of the OS: x86_64

DevPod Provider:

  • Local/remote provider: ssh

Allow changing HOST and PORT bindings for openvscode ide

Is your feature request related to a problem?
I am trying to run a workspace on a remote system, using the CLI on that system:

devpod-cli up https://github.com/microsoft/vscode-remote-try-go --id=testing --ide openvscode --provider=docker --ide-option OPEN=false

But then I also have to set up port forwarding from my local machine to the remote system so I can access http://localhost:10800/?folder=/workspaces/testing

ssh -X -A -L 10800:localhost:10800 myuser@myremoteserver

Which solution do you suggest?
Allow changing the host:port used for openvscode ide, something like:

--ide-option OPEN=false,HOST=0.0.0.0,PORT=9999

Which alternative solutions exist?
Using a proxy server on the remote server to direct requests to the 127.0.0.1:10800 default binding

Additional context
My goal is a self-hosted VSCode workspace I can access from a ChromeOS laptop where I can't install anything (not even the lightweight DevPod client).

fatal agent error: devcontainer up: start container: build and extend docker-compose

What happened?

fatal agent error: devcontainer up: start container: build and extend docker-compose: inspect image base: get image config remotely: retrieve image base: GET https://index.docker.io/v2/library/base/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/base Type:repository]]

What did you expect to happen instead?

Work

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
  "name": "Ruby",
  "workspaceFolder": "/app",
  "dockerComposeFile": [
    "../docker-compose.yml",
    "../docker/docker-compose.local.yml"
  ],
  "service": "web",
  "forwardPorts": [
    3000
  ],
  "customizations": {
    "vscode": {
      "settings": {},
      "extensions": []
    }
  }
}

My docker-compose.yml:

version: "3.4"

services:
  web:
    restart: always
    image: "ruby:latest"
 [...]

Local Environment:

  • DevPod Version: v0.1.4
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Local/remote provider: docker

VSCode Browser is OpenVSCode Browser; can we also have VSCode Browser?

Is your feature request related to a problem?
Kinda; the UI says "Launch with VSCode Browser", but it should read "Launch with OpenVSCode Browser". So you ask: what is the difference? I only know of one small detail, which is rather annoying in the day-to-day: you cannot use the GitHub Copilot extension with OpenVSCode, only with VSCode.

Which solution do you suggest?
VSCode itself also has a browser-mode (which already seems to be used for installing extensions in the VSCode IDE code). Like, if I start the DevPod as VSCode, I can go into it and run:

.vscode-server/bin/code-server serve-local --accept-server-license-terms --disable-telemetry

This seems to be working fine and do what I would expect.

Which alternative solutions exist?
VSCode tunnel, via their vscode.dev (code --tunnel). Which works fine btw, but it can be rather laggy, as you bounce via their infrastructure. It isn't always the best experience.

And yes, there are also alternatives to GitHub Copilot, so one could use OpenVSCode with those. But I would prefer not to :)

Additional context
Now I am not sure here, as Microsoft is a bit vague here .. there was a private beta for the browser-mode of VSCode, and that still seems to work fine, but there isn't a word about it on their official documentation that I could find. So I am not sure they are deprecating this, or if they just don't announce it, or anything like that. I also don't know if it works for my account as I am part of that beta, or if it works for everyone.

So this feature request might be a terrible idea .. or it might just work fine for everyone :) I leave that to people who are more involved in the VSCode eco-system :)

devpod up --recreate flag does not work as documented

What happened?
The --recreate flag does not appear to work as documented here: https://devpod.sh/docs/developing-in-workspaces/devcontainer-json#devcontainerjson-development-flow

I made an error when composing my devcontainer.json file, and DevPod returned an error message:
fatal agent error: devcontainer up: parsing devcontainer.json: json: invalid number literal, trying to unmarshal "\"8081:80\"" into Number.

I fixed the error and then attempted to recreate the container. DevPod returned the same error message despite the error being resolved:
fatal agent error: devcontainer up: parsing devcontainer.json: json: invalid number literal, trying to unmarshal "\"8081:80\"" into Number.

What did you expect to happen instead?
I expected the container to be recreated from scratch and the modifications in devcontainer.json to be used when creating the new container.

How can we reproduce the bug? (as minimally and precisely as possible)

  • Bring a container up using devpod up.
  • Modify the devcontainer.json contents.
  • Issue devpod up ./ --recreate.

My original devcontainer.json (with syntax error):

{
	"name": "devpod-test",
	"build": {
		"dockerfile": "Dockerfile"
	},
	"forwardPorts": ["8081:80"]
}

My updated devcontainer.json (without syntax error):

{
	"name": "devpod-test",
	"build": {
		"dockerfile": "Dockerfile"
	},
	"forwardPorts": [80]
}

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Cloud Provider: aws

Anything else we need to know?
I have also noticed that changes made to the Dockerfile are not picked up when using the --recreate flag.

SSH should prompt to change ~/.ssh/config

devpod cli v0.1.7

I was surprised to find my ~/.ssh/config file was modified without my permission. I understand the need, but that action should ask for confirmation before modifying.

It might be better to store config entries in their own file so ~/.ssh is not touched at all, using the -F flag like so:

ssh -F ~/.config/devpod/ssh.config SOME_NAME
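Both ideas can be combined: keep every generated entry in a DevPod-owned file and reference it from ~/.ssh/config with a single, explicitly confirmed line (paths illustrative; OpenSSH has supported Include since 7.3):

```ssh_config
# ~/.ssh/config -- the only line DevPod would need to add, after asking:
Include ~/.config/devpod/ssh.config
```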

Workspace cleanup does not happen after a failed clone

What happened?

I tried running devpod up $REPO and it failed because of an auth issue. When I ran it again, the workspace already existed but was empty, so devpod defaulted to just adding a basic devcontainer.json.

What did you expect to happen instead?

The clone should be attempted again.

How can we reproduce the bug? (as minimally and precisely as possible)

# clone and let it fail
❯ devpod up github.com/i/do-not-exist.git
09:35:42 info Creating devcontainer...
09:35:42 info Cloning into '/home/user/.devpod/agent/contexts/default/workspaces/do-not-exist/content'...
09:35:42 info fatal: could not read Username for 'https://github.com': terminal prompts disabled
09:35:42 fatal agent error: error cloning repository: exit status 128
: exit status 1

# retry and it passes
❯ devpod up github.com/i/do-not-exist.git
09:35:45 info Workspace do-not-exist already exists
09:35:45 info Creating devcontainer...
09:35:45 info Couldn't find a devcontainer.json
09:35:45 info Try detecting project programming language...
09:35:45 info Detected project language 'None'
09:35:45 info 41e79b26886cbb359faad34efa14fffc95bfa15b704eff782c3f369bcc75ae80
09:35:45 info Setup container...
09:35:45 info Chown workspace...
09:35:45 info Run 'ssh do-not-exist.devpod' to ssh into the devcontainer
09:35:45 info Starting VSCode...
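One way to avoid the half-initialized workspace is to treat the content directory as transactional: create it, attempt the clone, and remove it again on failure so a retry starts clean. A Python sketch (names hypothetical, not DevPod's actual code):

```python
import shutil
from pathlib import Path

def clone_into(workspace: Path, clone) -> bool:
    """Run `clone(target_dir)`; if it fails, remove the directory again so
    a later retry does not mistake the empty leftover for a finished clone."""
    content = workspace / "content"
    content.mkdir(parents=True, exist_ok=True)
    try:
        clone(content)
        return True
    except Exception:
        shutil.rmtree(content, ignore_errors=True)
        raise
```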

Local Environment:

  • DevPod Version: v0.1.2
  • Operating System: Fedora Linux 38
  • ARCH of the OS: AMD64

Rancher Desktop / nerdctl / containerd

Is your feature request related to a problem?
Use with Rancher Desktop and containerd+nerdctl

Which solution do you suggest?
How to use DevPod with nerdctl instead of Docker?

Which alternative solutions exist?
/

Additional context
I'm not using Docker; I use Rancher Desktop on macOS instead, which ships with containerd and nerdctl by default.
I already aliased all docker commands to nerdctl (e.g. docker run ... -> nerdctl run ..., docker compose -> nerdctl compose ...) in my .zshrc, but shell aliases don't apply to DevPod.

Any documentation how I can do this?

Allow selection of alternate devcontainer.json from UI

Is your feature request related to a problem?
Currently the only way to pick which devcontainer to use is via the CLI. This would allow the UI to detect all devcontainer configurations in a workspace and allow the user to choose which one to build.

Which solution do you suggest?
When devcontainer.json detection happens, look for all possible devcontainer.json files and if there is more than one, ask the user which one they would like to apply to the workspace (and possibly optionally set it as default for the future)
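The devcontainer spec allows configurations at .devcontainer/devcontainer.json, .devcontainer.json, and .devcontainer/<folder>/devcontainer.json one level deep, so the detection step could enumerate all of them before prompting. A Python sketch:

```python
from pathlib import Path

def find_devcontainers(workspace: Path) -> list[Path]:
    """Enumerate every devcontainer.json location the spec allows."""
    candidates = [
        workspace / ".devcontainer" / "devcontainer.json",
        workspace / ".devcontainer.json",
        # one subfolder deep, e.g. .devcontainer/backend/devcontainer.json
        *sorted((workspace / ".devcontainer").glob("*/devcontainer.json")),
    ]
    return [p for p in candidates if p.is_file()]
```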

Which alternative solutions exist?
Currently this can be done manually via the CLI with the --devcontainer-path flag, however there is no way to supply this in the UI.

Additional context

forwardPorts does not support "host:port" syntax

What happened?

When I try to use "[service_name]:[port]" with a docker-compose config, I get: json: invalid number literal, trying to unmarshal "\"db:5432\"" into Number

See https://containers.dev/implementors/json_reference/#general-properties
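Per that reference, a forwardPorts entry may be either an integer port or a "host:port" string, so the parser has to accept both instead of coercing every entry to a number. A Python sketch of the tolerant rule (not DevPod's actual code):

```python
def parse_forward_port(entry):
    """Accept an integer port or a 'host:port' string, per the spec."""
    if isinstance(entry, int):
        return None, entry
    host, sep, port = str(entry).rpartition(":")
    if not sep:                      # plain "8080" given as a string
        return None, int(entry)
    return host, int(port)

print(parse_forward_port(3000))       # (None, 3000)
print(parse_forward_port("db:5432"))  # ('db', 5432)
```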

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
    ...
    "forwardPorts": [
        3000,
        "db:5432"
    ],
    ...
}

Local Environment:

  • DevPod Version: v0.1.4
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Local/remote provider: docker

Allow Specifying Storage Class in provider.yaml

It would be beneficial to have the ability to specify the name of the storage class used for creating PVCs in the Kubernetes provider.yaml file. Currently, it defaults to the default kubernetes storage class, which may not always be the most appropriate choice for development environments.

In our case, we have a default storage class based on SSDs, primarily intended for production operations. Meanwhile, we utilize a Ceph/Rook storage class for our development environments. Therefore, being able to specify the storage class within the provider.yaml file would provide better flexibility and optimization in managing resources. Thanks! 🙏
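As a sketch only: provider options in provider.yaml are declared with a description and a default, so such a setting could hypothetically look like the following (STORAGE_CLASS is an illustrative option name, not necessarily an existing one):

```yaml
options:
  STORAGE_CLASS:
    description: Storage class used for workspace PVCs (empty = cluster default)
    default: ""
```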

DevPod fails to start with incorrect devcontainer.json options, but does not show an appropriate error message

What happened?

I updated my custom devcontainer.json to map forwardPorts. I think I used a wrong configuration ("8083:8080"), as shown in the devcontainer.json below. When I tried to rebuild my workspace, DevPod presented the following messages:

[11:40:19] info Workspace my-namespace already exists
[11:40:19] info Creating devcontainer...
[11:40:19] info Workspace my-namespace already exists
[11:40:19] info Creating devcontainer...

And nothing else. The IDE was not started, and the workspace showed the status "Error". Clicking on "Error" shows the log:

[11:46:19] info Workspace my-workspace already exists
[11:46:19] info Creating devcontainer...

And nothing more. So I enabled the "Use --debug" option in the Settings tab and tried to rebuild the workspace again. This time I got the following messages (and the same behavior as before):

[11:39:45] debug Created logger
[11:39:45] debug Received ping from agent
[11:39:45] debug Workspace Folder already exists
[11:39:45] debug Using docker command 'docker'
[11:39:45] debug json: invalid number literal, trying to unmarshal "\"8083:8080\"" into Number
[11:39:45] debug parsing devcontainer.json

I updated my devcontainer.json to use only 8080 in forwardPorts and reran the workspace rebuild process. This time DevPod behaved as expected: it opened the IDE, and the workspace is in the running state.

What did you expect to happen instead?

Of course it is normal that DevPod does not start because of the incorrect option in the devcontainer.json file. But the normal log (without the "Use --debug" option) should show an appropriate message so that it is easy for the user to identify the error.

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/java
{
    "name": "Java",
    // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
    "image": "mcr.microsoft.com/devcontainers/java:0-17",
    "features": {
        "ghcr.io/devcontainers/features/java:1": {
            "version": "none",
            // "installMaven": "true",
            "mavenVersion": "3.8.6",
            "installGradle": "true",
            "gradleVersion": "7.6.1"
        }
    },
    // Configure tool-specific properties.
    "customizations": {
        // Configure properties specific to VS Code.
        "vscode": {
            "settings": {},
            "extensions": [
                "streetsidesoftware.code-spell-checker",
                "richardwillis.vscode-gradle-extension-pack"
            ]
        }
    },
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [
        "8083:8080"
    ]
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "java -version",
    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    // "remoteUser": "root"
}

Local Environment:

  • DevPod Version: 0.1.5
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Local/remote provider: docker

Anything else we need to know?
DevPod looks very promising

Clone/Duplicate existing Provider

Is your feature request related to a problem?
I want to make two AWS Providers with slightly different settings, but I have to re-enter all the info manually.

Which solution do you suggest?
Provide a clone/copy/duplicate option in an existing Provider so I can reuse the values.

Which alternative solutions exist?
None that I know of. I can't easily copy text from one provider to another (AMIs, keys, etc.) because I can't have two windows or two providers open at once.

Additional context

DevPod provider for HashiCorp Nomad

It would be nice to have support for a HashiCorp Nomad provider.

Let me know if I can help with integrating it.

AWS Provider Error: "VPCIdNotSpecified: No default VPC for this user"

What happened?
Cannot create workspace using AWS Provider with a specific VPC ID.

We already deleted the default VPC that comes from AWS in favor of using our own custom settings.

The existing VPC ID was supplied along with the key and secret.

But when creating the workspace, an error is thrown: VPCIdNotSpecified: No default VPC for this user

What did you expect to happen instead?
Create the workspace successfully

How can we reproduce the bug? (as minimally and precisely as possible)

Delete the default VPC provided by AWS and create a new one. Add the VPC ID when creating the workspace.

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Cloud Provider: aws
  • Local/remote provider: docker

Add VSCode Insiders to the list of Default IDE

Is your feature request related to a problem?
I currently have only the VSCode Insiders edition installed, not VSCode, so I can only use the ssh option. I would like the ability to work out of VSCode Insiders directly.

Which solution do you suggest?
Add VSCode Insiders as an option for Default IDE

Which alternative solutions exist?
Currently just connecting over ssh works

Additional context
None

AppImage UI: kubernetes provider permission denied

What happened?

When creating a workspace with the kubernetes provider, the pod spins up and VSCode is able to connect to it, but the workspace is empty.
The status in the UI says "Error" and shows the following logs (by the way, how can I check the logs from the devpod CLI?):

[12:21:59] fatal error retrieving container status: bash: Unable to set terminal process group (1382).: Inappropriate IOCTL (I/O control) for the device
bash: No job control in this shell.
bash: Cannot set the terminal's process group (1382).: Inappropriate IOCTL (I/O control) for the device.
bash: No job control in this shell.
find dev container: find pvc: cannot open path of the current working directory: Permission denied
exit status 1

I guess devpod tries to copy or mount the workspace into the Pod's PV, and this command fails?

What did you expect to happen instead?

Devpod copies or mounts the workspace content into the Pod

How can we reproduce the bug? (as minimally and precisely as possible)

I have a feeling it might be a problem with my local environment, so I don't know if it is reproducible.

devpod up --provider kubernetes --debug --id test --ide vscode .

My devcontainer.json:

{
    "name": "Kubernetes Test",
    "image": "mcr.microsoft.com/devcontainers/java:0-17",
    "features": {
        "ghcr.io/devcontainers/features/docker-in-docker:2": {}
    },
    "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=cached",
    "workspaceFolder": "/workspace",
    "postCreateCommand": "bash /workspace/.devcontainer/create-workspace.sh"
}

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: linux
  • ARCH of the OS: AMD64

DevPod Provider:

  • Kubernetes Provider:

    Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-18T02:15:29Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4+k0s", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-17T13:46:59Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
    

Anything else we need to know?

Hangs trying to configure any provider

What happened?
Trying to add any provider hangs on "Loading ..."

What did you expect to happen instead?
Anything

How can we reproduce the bug? (as minimally and precisely as possible)
AppImage on Debian Bookworm

Local Environment:

  • DevPod Version: 0.1.5
  • Operating System: linux
  • ARCH of the OS: AMD64

Anything else we need to know?
Bottom of DevPod window also has incorrect system information. "Version 0.1.5 | unknown platform | unknown arch"

Web Client

Is your feature request related to a problem?
Can't use in browser. That's the point of codespaces.

Which solution do you suggest?
A web client.

Which alternative solutions exist?
Using GitHub Codespaces

Additional context

SSH Provider does not work

What happened?
I added a new SSH provider and then created a workspace, but the workspace was never created. Instead it just keeps waiting.

What did you expect to happen instead?
I thought it would start the dev container

How can we reproduce the bug? (as minimally and precisely as possible)
I just downloaded the app on my M1 Mac. I can connect to the SSH provider.

Local Environment:

  • DevPod Version: 0.12
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Local/remote provider: ssh

Anything else we need to know?

IntelliJ/Jetbrains IDE fails to start with custom docker image

What happened?
With an existing devcontainer (which works fine with VSCode), I attempted to open the workspace with IntelliJ Ultimate, but the IDE failed to start.

What did you expect to happen instead?
I expected the IDE to start successfully. It seems there should be some kind of process to ensure all the dependencies required by the JetBrains Gateway remote system are met.

Here is the log from the Jetbrains Gateway connection:

Jetbrains Gateway log
2023-05-17 08:31:52,247	INFO	uname -sm
	stdout:
	Linux aarch64

2023-05-17 08:31:52,261	INFO	echo $SHELL
	stdout:
	/bin/bash

2023-05-17 08:31:52,382	INFO	uname -sm
	stdout:
	Linux aarch64

2023-05-17 08:31:52,499	INFO	echo $SHELL
	stdout:
	/bin/bash

2023-05-17 08:31:52,647	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ echo\ \$HOME
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root

2023-05-17 08:31:52,767	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ echo\ \$XDG_CACHE_HOME
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:52,887	WARN	exit code: 1	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ test\ -f\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:53,008	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ dirname\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.cache/JetBrains/RemoteDev/remote-dev-worker

2023-05-17 08:31:53,126	WARN	exit code: -1	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ ls\ -la\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker

2023-05-17 08:31:53,149	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ mkdir\ -p\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:53,286	INFO	dd of=/root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3.41227306678416.tmp
	stdout:
	stdout used for binary data
	stderr:
	4736+0 records in
	4736+0 records out
	2424832 bytes (2.4 MB, 2.3 MiB) copied, 0.0803297 s, 30.2 MB/s

2023-05-17 08:31:53,481	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ mv\ -v\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3.41227306678416.tmp\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	renamed '/root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3.41227306678416.tmp' -> '/root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3'

2023-05-17 08:31:53,599	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ chmod\ 755\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:53,717	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ test\ -f\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:53,840	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ test\ -x\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:53,973	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-sha256\ --path=/root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3

2023-05-17 08:31:54,097	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=cache
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.cache/JetBrains

2023-05-17 08:31:54,215	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=config
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.config/JetBrains

2023-05-17 08:31:54,336	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ exists\ --path=/root/.cache/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	true

2023-05-17 08:31:54,453	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ exists\ --path=/root/.config/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	false

2023-05-17 08:31:54,575	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ create-dir\ --path=/root/.config/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:54,696	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ lock-support
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_

2023-05-17 08:31:54,816	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ available-memory
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	32828612

2023-05-17 08:31:54,942	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ cpu-count
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	5

2023-05-17 08:31:55,069	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ port-forwarding-test
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	34351

2023-05-17 08:31:55,188	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ readlink\ --path=/workspaces/backstage
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/workspaces/backstage

2023-05-17 08:31:55,297	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=cache
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.cache/JetBrains

2023-05-17 08:31:55,410	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=config
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.config/JetBrains

2023-05-17 08:31:55,524	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ exists\ --path=/root/.cache/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	true

2023-05-17 08:31:55,643	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ exists\ --path=/root/.config/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	true

2023-05-17 08:31:55,761	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=cache
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.cache/JetBrains

2023-05-17 08:31:55,883	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ get-path\ --path=config
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	/root/.config/JetBrains

2023-05-17 08:31:56,007	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ available-space\ --path=/root/.cache/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	87085367296

2023-05-17 08:31:56,128	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ available-space\ --path=/root/.config/JetBrains
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	87085363200

2023-05-17 08:31:56,259	WARN	exit code: 1	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ host-status\ --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij\ --project-path=/workspaces/backstage
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	Stdout: 
	Stderr: 
	{"type":"error","errorCode":"CommandError","data":"fork/exec /home/root/.cache/JetBrains/RemoteDev/dist/intellij/bin/remote-dev-server.sh: no such file or directory"}

	[command is repeated 53 more times]

2023-05-17 08:32:57,229	INFO	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ backend-status-alive\ --project-path=/workspaces/backstage
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	false

2023-05-17 08:32:57,353	WARN	exit code: 1	/bin/bash -lc echo\ REMOTE_EXEC_OUTPUT_MARKER_\ \&\&\ /root/.cache/JetBrains/RemoteDev/remote-dev-worker/remote-dev-worker_6d6fd444f51603f0cc5a6f8fe6627b75da5d18d44d77559a9b06693f44b33fd3\ product-code\ --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij
	stdout:
	REMOTE_EXEC_OUTPUT_MARKER_
	{"type":"error","errorCode":"CommandError","data":"/home/root/.cache/JetBrains/RemoteDev/dist/intellij/build.txt doesn't exist"}


==== FAILURES ====

The following exception failed the deployment
com.jetbrains.gateway.ssh.deploy.DeployException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
	at com.jetbrains.gateway.ssh.DeployFlowUtil$fullDeployCycleImpl$2.invokeSuspend(DeployFlowUtil.kt:280)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1$3.invokeSuspend(BackgroundProgressCoroutineUtil.kt:172)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
	at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
	at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
	at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1.invoke(BackgroundProgressCoroutineUtil.kt:168)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1.invoke(BackgroundProgressCoroutineUtil.kt:158)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$createModal$1$1.run(BackgroundProgressCoroutineUtil.kt:203)
	at com.intellij.openapi.progress.impl.CoreProgressManager.startTask(CoreProgressManager.java:429)
	at com.intellij.openapi.progress.impl.ProgressManagerImpl.startTask(ProgressManagerImpl.java:114)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcessWithProgressSynchronously$9(CoreProgressManager.java:513)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$new$0(ProgressRunner.java:84)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$3(ProgressRunner.java:252)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:186)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$executeProcessUnderProgress$13(CoreProgressManager.java:604)
	at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:679)
	at com.intellij.openapi.progress.impl.CoreProgressManager.computeUnderProgress(CoreProgressManager.java:635)
	at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:603)
	at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:60)
	at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:173)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$4(ProgressRunner.java:252)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:702)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:699)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:699)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: com.jetbrains.gateway.ssh.deploy.DeployException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
	at com.jetbrains.gateway.ssh.DeployFlowUtil$fullDeployCycleImpl$2.invokeSuspend(DeployFlowUtil.kt:275)
	... 35 more
Caused by: com.jetbrains.gateway.ssh.deploy.DeployException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
	at com.jetbrains.gateway.ssh.DeployFlowUtil$fullDeployCycleImpl$2.invokeSuspend(DeployFlowUtil.kt:273)
	... 35 more
Caused by: com.jetbrains.gateway.ssh.RemoteCommandException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.anErrorOccurred(GoHighLevelHostAccessor.kt:184)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.callAndReportError(GoHighLevelHostAccessor.kt:173)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.access$callAndReportError(GoHighLevelHostAccessor.kt:29)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor$callAndReportError$1.invokeSuspend(GoHighLevelHostAccessor.kt)
	... 35 more



==== ENVIRONMENT ====

INSTALLED PRODUCTS

	


AVAILABLE MEMORY

	31.31GB



==== DIAGNOSTIC ERRORS ====

Collecting fixes for fixer com.jetbrains.gateway.ssh.deploy.fixes.LogDownloadDiagnosticProvider@512985fa failed with an exception: com.jetbrains.gateway.ssh.RemoteCommandException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
com.jetbrains.gateway.ssh.RemoteCommandException: 

Details:
An error occurred while executing command: 'product-code --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij'
Exit code: 1
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.anErrorOccurred(GoHighLevelHostAccessor.kt:184)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.callAndReportError(GoHighLevelHostAccessor.kt:173)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor.access$callAndReportError(GoHighLevelHostAccessor.kt:29)
	at com.jetbrains.gateway.ssh.GoHighLevelHostAccessor$callAndReportError$1.invokeSuspend(GoHighLevelHostAccessor.kt)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1$3.invokeSuspend(BackgroundProgressCoroutineUtil.kt:172)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
	at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
	at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
	at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1.invoke(BackgroundProgressCoroutineUtil.kt:168)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$create$task$1.invoke(BackgroundProgressCoroutineUtil.kt:158)
	at com.intellij.openapi.rd.util.CoroutineProgressContext$Companion$createModal$1$1.run(BackgroundProgressCoroutineUtil.kt:203)
	at com.intellij.openapi.progress.impl.CoreProgressManager.startTask(CoreProgressManager.java:429)
	at com.intellij.openapi.progress.impl.ProgressManagerImpl.startTask(ProgressManagerImpl.java:114)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcessWithProgressSynchronously$9(CoreProgressManager.java:513)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$new$0(ProgressRunner.java:84)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$3(ProgressRunner.java:252)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:186)
	at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$executeProcessUnderProgress$13(CoreProgressManager.java:604)
	at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:679)
	at com.intellij.openapi.progress.impl.CoreProgressManager.computeUnderProgress(CoreProgressManager.java:635)
	at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:603)
	at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:60)
	at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:173)
	at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$4(ProgressRunner.java:252)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:702)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:699)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
	at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:699)
	at java.base/java.lang.Thread.run(Thread.java:833)


Could not get host jstack: 

Details:
An error occurred while executing command: 'get-jstack --ide-path=/home/root/.cache/JetBrains/RemoteDev/dist/intellij --project-path=/workspaces/backstage'
Exit code: 1

Looking through that, it seems JetBrains expects a /home/root directory to exist (I'm not totally sure what is going on there; perhaps because the container runs as root, a home folder under /home is assumed to exist?). However, the actual home directory for root is /root, so this may be an incorrect assumption on the JetBrains side.

I understand this is an issue on the JetBrains side. However, given that DevPod is already injecting things into the container, maybe detecting where the root home directory is and ensuring that a symbolic link to it exists at /home/root isn't out of the question?

I was able to confirm that a symbolic link at /home/root pointing to /root fixes the connection.

I also confirmed that using a non-root user did NOT fix the problem; in that case there was no error, and Gateway would just silently kick me back to the connection screen.
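The symlink workaround can be sketched as follows (shown against a scratch directory so it can run unprivileged; inside the actual container, running as root, the equivalent would be `mkdir -p /home && ln -sfn /root /home/root`, e.g. as a RUN step in the Dockerfile):

```shell
# Demonstrate the /home/root -> /root symlink fix using a scratch prefix.
# /tmp/demo stands in for the container's filesystem root.
mkdir -p /tmp/demo/root /tmp/demo/home
ln -sfn /tmp/demo/root /tmp/demo/home/root   # the actual fix: ln -sfn /root /home/root
readlink /tmp/demo/home/root                 # -> /tmp/demo/root
```

With the link in place, Gateway's hard-coded /home/root path resolves to root's real home directory.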

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
  "build": {
    "dockerfile": "Dockerfile"
  },
  "name": "devpod-test"
}

My Dockerfile:

FROM node:16-bullseye-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends git openssh-client libsqlite3-dev python python3 python3-pip cmake g++ build-essential && \
    pip3 install mkdocs-techdocs-core==1.0.2 && \
    yarn config set python /usr/bin/python3

Local Environment:

  • DevPod Version: 0.1.2 (note: there is no devpod --version, only devpod version)
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Local/remote provider: docker

Anything else we need to know?

On Windows, remote init script using SSH failed

What happened?
Using DevPod on Windows with SSH and the bundled inner shell, the installation fails with the following error:

[14:13:30] debug execute inject script
[14:13:30] debug Run command provider command: ssh -oStrictHostKeyChecking=no \
    -p ${PORT} \
    ${EXTRA_FLAGS} \
    "${HOST}" \
    "${COMMAND}"
[14:13:40] debug done inject
[14:13:40] debug done injecting
[14:13:40] info Waiting for devpod agent to come up...
[14:13:40] debug Inject Error: context deadline exceeded
[14:13:40] debug done exec
[14:13:43] debug execute inject script
[14:13:43] debug Run command provider command: ssh -oStrictHostKeyChecking=no \
    -p ${PORT} \
    ${EXTRA_FLAGS} \
    "${HOST}" \
    "${COMMAND}"
 : invalid option: line 2: set: -
[14:13:44] debug set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
[14:13:44] debug bash: line 3: $'\r': command not found
[14:13:44] debug bash: line 6: $'\r': command not found
[14:13:44] debug bash: line 10: $'\r': command not found
[14:13:44] debug bash: line 13: $'\r': command not found
 » : not a valid identifier: line 15: read: « DEVPOD_PING
[14:13:44] debug bash: -c: line 21: syntax error near unexpected token `$'{\r''
[14:13:44] debug bash: -c: line 21: `command_exists() {
[14:13:44] debug done exec
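The `$'\r'` errors in the log above are the classic symptom of a script with Windows (CRLF) line endings being fed to bash; a minimal sketch of the failure mode, and of the `tr` workaround described later in this report:

```shell
# A script saved with CRLF endings: each \r survives into the parsed line.
printf 'VAR=1\r\necho "VAR=$VAR"\r\n' > crlf.sh
bash crlf.sh                   # VAR is assigned "1<CR>", so the output carries stray CRs
tr -d '\r' < crlf.sh > lf.sh   # strip carriage returns (the tr workaround)
bash lf.sh                     # prints VAR=1
```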

What did you expect to happen instead?
I expected the script to run to completion normally.

How can we reproduce the bug? (as minimally and precisely as possible)

  1. To bypass any bash installed on my machine, I created an empty bash.cmd inside the DevPod installation directory.
  2. I created a custom SSH provider that lets me use my .ssh/config.
  3. I created a workspace from one of the provided samples using my custom provider.

Local Environment:

  • DevPod Version: v0.1.2
  • Operating System: windows
  • ARCH of the OS: AMD64

DevPod Provider:

  • Custom provider:
name: atoc-ssh
version: ##VERSION##
description: |-
  DevPod on SSH
icon: https://devpod.sh/assets/ssh.svg
optionGroups:
  - options:
      - USE_SSH_CONFIG
      - PORT
      - EXTRA_FLAGS
    name: "SSH options"
    defaultVisible: true
  - options:
      - AGENT_PATH
      - INACTIVITY_TIMEOUT
      - INJECT_DOCKER_CREDENTIALS
      - INJECT_GIT_CREDENTIALS
    name: "Agent options"
    defaultVisible: false
options:
  INACTIVITY_TIMEOUT:
    description: "If defined, will automatically stop the container after the inactivity period. Example: 10m"
  AGENT_PATH:
    description: The path where to inject the DevPod agent to.
    default: /opt/devpod/agent
  INJECT_GIT_CREDENTIALS:
    description: "If DevPod should inject git credentials into the remote host."
    default: "true"
  INJECT_DOCKER_CREDENTIALS:
    description: "If DevPod should inject docker credentials into the remote host."
    default: "true"
  HOST:
    required: true
    description: "The SSH Host to connect to. Example: [email protected]"
    default: "atoc-dev"
  USE_SSH_CONFIG:
    description: "If checked use you sshconfig entry"
    default: "true"
    type: boolean
  PORT:
    description: "The SSH Port to use. Defaults to 22"
    default: "22"
  EXTRA_FLAGS:
    description: "Extra flags to pass to the SSH command."
agent:
  inactivityTimeout: ${INACTIVITY_TIMEOUT}
  injectGitCredentials: ${INJECT_GIT_CREDENTIALS}
  injectDockerCredentials: ${INJECT_DOCKER_CREDENTIALS}
  path: ${AGENT_PATH}
  docker:
    path: /usr/bin/podman
    install: false
exec:
  init: |-
    if [ "$USE_SSH_CONFIG" = "true" ]; then
      OUTPUT=$(ssh -oStrictHostKeyChecking=no \
                   ${EXTRA_FLAGS} \
                   "${HOST}" \
                   "sh -c 'echo DevPodTest'")
    else
      OUTPUT=$(ssh -oStrictHostKeyChecking=no \
                   -p ${PORT} \
                   ${EXTRA_FLAGS} \
                   "${HOST}" \
                   "sh -c 'echo DevPodTest'")
    fi
    if [ "$OUTPUT" != "DevPodTest" ]; then
      >&2 echo "Unexpected ssh output."
      >&2 echo "Please make sure you have configured the correct SSH host"
      >&2 echo "and the following command can be executed on your system:"
      >&2 echo ssh -oStrictHostKeyChecking=no -p "${PORT}" "${HOST}" "sh -c 'echo DevPodTest'"
      exit 1
    fi

  command: |-
    if [ "$USE_SSH_CONFIG" != "true" ]; then
      ssh -oStrictHostKeyChecking=no \
          -p ${PORT} \
          ${EXTRA_FLAGS} \
          "${HOST}" \
          "${COMMAND}"
    else
      ssh -oStrictHostKeyChecking=no \
          ${EXTRA_FLAGS} \
          "${HOST}" \
          "${COMMAND}"
    fi

Anything else we need to know?

  • The remote system is a Linux machine.
  • I managed to circumvent the error by removing the \r characters with the tr program, which unfortunately is not provided by the inner shell.
    The command that worked for me:
  command: |-
    cc=$(printf '%s' "${COMMAND}" | tr -d '\r')
    if [ "$USE_SSH_CONFIG" != "true" ]; then
      ssh -oStrictHostKeyChecking=no \
          -p ${PORT} \
          ${EXTRA_FLAGS} \
          "${HOST}" \
          "${cc}"
    else
      ssh -oStrictHostKeyChecking=no \
          ${EXTRA_FLAGS} \
          "${HOST}" \
          "${cc}"
    fi

Using Podman, DevPod fails to find the container at creation time

What happened?
I created a workspace using the provided example, but with Podman instead of Docker.
After a while, once everything was created, Podman failed with an error:

find container: Error: invalid argument "label=devcontainer.metadata=[{\"id\":\"ghcr.io/devcontainers/features/common-utils:2\"},{\"id\":\"./local-features/apache-config\"},{\"id\":\"ghcr.io/devcontainers/features/node:1\",\"customizations\":{\"vscode\":{\"extensions\":[\"dbaeumer.vscode-eslint\"]}}},{\"id\":\"ghcr.io/devcontainers/features/git:1\"},{\"remoteUser\":\"vscode\",\"customizations\":{\"vscode\":{\"extensions\":[\"xdebug.php-debug\",\"bmewburn.vscode-intelephense-client\",\"mrmlnc.vscode-apache\"],\"settings\":{\"php.validate.executablePath\":\"/usr/local/bin/php\"}}}},{\"customizations\":{\"vscode\":{\"extensions\":[\"streetsidesoftware.code-spell-checker\"],\"settings\":{}}}}]" for "-f, --filter" flag: parse error on line 1, column 31: bare " in non-quoted-field
See 'podman ps --help'

What did you expect to happen instead?
I expected the installation script to continue normally.

How can we reproduce the bug? (as minimally and precisely as possible)
Just create a workspace from one of the provided samples, using Podman as the agent runtime. For the time being I use:

podman version
Client:       Podman Engine
Version:      4.5.0
API Version:  4.5.0
Go Version:   go1.20.4
Git Commit:   268511680f4a72b4a0595497a37e4d6a7da0215c
Built:        Thu May 11 11:30:51 2023
OS/Arch:      linux/amd64

Local Environment:

  • DevPod Version: v0.1.2
  • Operating System: windows
  • ARCH of the OS: AMD64

DevPod Provider:

  • Local/remote provider: docker | ssh
  • Custom provider: based on ssh using podman as agent instead of docker

Anything else we need to know?
I think the culprit is here: find after creation.
Containers are searched for using labels that contain config.DockerIDLabel and metadata.ImageMetadataLabel.
Since the ID in config.DockerIDLabel is unique, I assume only config.DockerIDLabel could be used for the search; that is what is used at the start of the function:
init labels
first find container
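If the analysis above is right, filtering on the unique ID label alone would sidestep the parse error. The error message ("bare \" in non-quoted-field") indicates Podman parses each `--filter` value as CSV, so a label value that itself contains double quotes (like the JSON in devcontainer.metadata) trips the parser, while a quote-free value does not. The label keys below are illustrative, not DevPod's actual keys:

```shell
# Fails: the JSON metadata value contains bare double quotes, which
# podman's CSV filter parser rejects.
podman ps -a --filter 'label=devcontainer.metadata=[{"id":"example"}]'

# Works: filter only on a unique, quote-free ID label.
podman ps -a --filter 'label=dev.containers.id=my-workspace'
```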

/usr/local/bin/devpod not found

Hello!
I am using Linux.
I did the following:

git clone https://github.com/loft-sh/devpod.git
cd devpod
go build main.go
mkdir ./desktop/src-tauri/bin
cp main ./desktop/src-tauri/bin/devpod-cli-x86_64-unknown-linux-gnu
cd desktop
yarn
cd src-tauri
cargo update
cd ..
yarn tauri build

Afterwards I had the following executables (among others) in src-tauri/target/release:
dev-pod devpod-cli

I ran dev-pod and set up the Docker provider, which went well. I then tried to run the https://github.com/microsoft/vscode-remote-try-python sample and received the following:
I tried both copying and symlinking the executable to /usr/local/bin, and I also checked the install procedure:

curl -L -o devpod "https://github.com/loft-sh/devpod/releases/latest/download/devpod-linux-amd64" && sudo install -c -m 0755 devpod /usr/local/bin && rm -f devpod
No success. What could the problem be?

Build Repository registry credentials

What happened?

I'm testing the Kubernetes provider on a local Rancher Desktop (K3s) cluster. I couldn't find anything obvious about how to specify container registry credentials for pushes to the build repository.

Error I'm getting using a Docker registry:

...
[13:15:52] info #5 pushing layers
[13:15:58] info #5 pushing layers 6.4s done
[13:15:58] info #5 ERROR: failed to push xxx:devpod-4c3cab74a0cec348ae8be1f977469860: push access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
...

What did you expect to happen instead?

I expected my Docker credentials to be used so that pushes to my private repository succeed.
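For context, the standard Kubernetes mechanism for registry credentials is a docker-registry secret. Whether (and how) DevPod's Kubernetes provider consumes such a secret for build pushes is exactly the open question in this report, so the snippet below is only background, with placeholder names:

```shell
# Create a registry credential secret in the cluster (placeholder values).
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password=my-password
```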

How can we reproduce the bug? (as minimally and precisely as possible)

NA

Local Environment:

  • DevPod Version: v0.1.5
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • Cloud Provider: NA
  • Kubernetes Provider:
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:58:30Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6+k3s1", GitCommit:"9176e03c5788e467420376d10a1da2b6de6ff31f", GitTreeState:"clean", BuildDate:"2023-01-26T00:30:33Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/arm64"}
  • Local/remote provider:
  • Custom provider: NA

Anything else we need to know?

[FEATURE] Windows install with Winget

Is your feature request related to a problem?
Not really; it's more of a quality-of-life feature for Windows users.

Which solution do you suggest?
Add devpod to the Winget packages so it can be installed more easily on recent versions of Windows.

Which alternative solutions exist?
Currently, devpod can already be installed by downloading the MSI file and running the installer. The CLI can also be downloaded as a standalone binary; however, since Windows doesn't have a "common" place to store standalone binaries, some PATH configuration is needed.
This only applies to the standalone CLI, as the full installer can add the binary to the PATH with a single click in the UI.

Additional context
In order to help, here are the manifests created with the YamlCreate.ps1 tool from the winget-pkgs repo.

Here's the log of the creation process:
winget-devpod-install-log.txt

Here are the manifest files for version 0.1.2:
loft-sh.devpod.installer.zip

The following steps explain how to test it:

  • Unzip the files
  • In a terminal, go to the directory containing the files
  • Install devpod from the manifest with the command winget install -m .
  • Verify that the installation was successful by running devpod

As shown in the log, I didn't create a PR for adding it to the winget-pkgs repo, as I thought you might want to review the different fields and perhaps even automate the process through a GitHub Action.
Last but not least, only the first version will require a "longer" process, as there's a faster creation process for updates.

Hope this helps.
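For readers unfamiliar with winget manifests, a heavily trimmed, illustrative installer-manifest fragment might look like the following. This is not the submitted manifest: the package identifier, MSI filename, and URL pattern are assumptions for illustration only.

```yaml
# Illustrative winget installer manifest fragment (NOT the submitted one).
PackageIdentifier: loft-sh.devpod        # assumed identifier
PackageVersion: 0.1.2
InstallerType: msi
Installers:
  - Architecture: x64
    # URL and filename assumed from typical GitHub release asset naming
    InstallerUrl: https://github.com/loft-sh/devpod/releases/download/v0.1.2/DevPod.msi
    InstallerSha256: <sha256 of the MSI>
ManifestType: installer
ManifestVersion: 1.4.0
```

YamlCreate.ps1 generates the full set of files (version, installer, and locale manifests) with the real values filled in.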

Can't create a workspace using SSH provider due to a tty issue

Disclaimer: I am not entirely sure whether it's a bug or just me missing some detail 😅

What happened?
I am trying to set up a remote workspace using the SSH provider. When creating the workspace (with the --debug flag), I keep seeing the following error:

[11:00:35] info Creating devcontainer...
[11:00:35] debug Inject and run command: '/home/<redacted>/devpod/agent' agent workspace up --workspace-info '<long-token>' --debug
[11:00:35] debug execute inject script
[11:00:35] debug Run command provider command: ssh -oStrictHostKeyChecking=no \
    -p ${PORT} \
    ${EXTRA_FLAGS} \
    "${HOST}" \
    "${COMMAND}"
[11:00:35] debug Pseudo-terminal will not be allocated because stdin is not a terminal.
[11:00:35] debug sudo: sorry, you must have a tty to run sudo
[11:00:35] debug done exec
[11:00:35] debug done inject
[11:00:35] debug done injecting
[11:00:35] debug Inject Error: Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
EOF

What did you expect to happen instead?
I expected the workspace to be created successfully.

How can we reproduce the bug? (as minimally and precisely as possible)

My devcontainer.json:

{
    "name": "Test",
    "image": "quay.io/qiime2/core:2023.2",
    "forwardPorts": []
}

The SSH provider configuration:
All default settings except agent path (set to /home/<redacted>/devpod/agent, even though the same behaviour was observed with the default value).
I also tried adding the -t SSH flag, but it did not make a difference.

Local Environment:

  • DevPod Version: v0.1.7
  • Operating System: mac
  • ARCH of the OS: ARM64

DevPod Provider:

  • remote provider: ssh

Anything else we need to know?
I am using zsh on the remote.
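Not part of the original report, but the error "sudo: sorry, you must have a tty to run sudo" is commonly caused by the `requiretty` default in `/etc/sudoers` on some distributions (e.g. older CentOS/RHEL). A sketch of a sudoers drop-in that disables it for a hypothetical user `devpoduser` (the username is a placeholder; whether this is appropriate depends on your security policy):

```
# /etc/sudoers.d/devpod  -- edit with: visudo -f /etc/sudoers.d/devpod
# Disable the tty requirement for this user so non-interactive SSH
# commands (where no pseudo-terminal is allocated) can still run sudo.
Defaults:devpoduser !requiretty
```

Alternatively, forcing pseudo-terminal allocation on the client side (ssh -tt) can work around the same restriction without touching sudoers.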
