plandex-ai / plandex

An AI coding engine for building complex, real-world software with LLMs

Home Page: https://plandex.ai

License: GNU Affero General Public License v3.0

Go 97.93% Shell 1.05% PLpgSQL 0.78% TypeScript 0.24%
ai ai-developer-tools gpt-4 llm openai ai-agents ai-tools cli developer-tools polyglot-programming

plandex's People

Contributors

abrl91, chasnechok, danenania, eltociear, harleypig, kalil0321, knno, mrdev023, nathanwaldman, ryansonshine, slahser, tachyonicbytes, verytrap

plandex's Issues

Allow setting an OPENAI_ENDPOINT

Currently, the application is hardcoded to use the default OpenAI endpoint. This limits users to only using the official OpenAI models. To provide more flexibility and allow users to leverage alternative Large Language Models (LLMs), we should introduce a configuration option to set a custom OpenAI endpoint.

By allowing users to set a custom OPENAI_ENDPOINT, they can integrate with other LLM providers, such as AWS Bedrock via LiteLLM. This would enable users to take advantage of different LLM models and potentially benefit from cost savings or performance improvements offered by alternative providers.
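For illustration, here is a minimal sketch of what honoring such an override could look like, assuming a Go server that builds its client with the sashabaranov/go-openai library (an assumption; this is not the actual Plandex code):

package main

import (
	"os"

	openai "github.com/sashabaranov/go-openai"
)

// newOpenAIClient builds a client that honors a hypothetical OPENAI_ENDPOINT
// override, falling back to the default OpenAI base URL when it is unset.
func newOpenAIClient() *openai.Client {
	config := openai.DefaultConfig(os.Getenv("OPENAI_API_KEY"))
	if endpoint := os.Getenv("OPENAI_ENDPOINT"); endpoint != "" {
		// Point the client at an alternative provider, e.g. a LiteLLM
		// proxy in front of AWS Bedrock.
		config.BaseURL = endpoint
	}
	return openai.NewClientWithConfig(config)
}

func main() {
	_ = newOpenAIClient()
}

With something like this in place, pointing OPENAI_ENDPOINT at a LiteLLM proxy would let any OpenAI-compatible backend serve requests.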

Access applied changes in Plandex Cloud

Hi sir,

Thank you for sharing this amazing programming engine in the AI community! I gave it a try yesterday to ask it to generate a new code snippet based on the given instructions. However, after I committed the changes, I did not see anywhere to access the newly generated code. Am I expected to access it on Plandex Cloud, and if so, how can I access it through the terminal?

Not a bad start for a beta version, but a lot of issues

It's incredibly ineffective at actually creating proper large projects. It fills tens of files with broken code or indentation errors, and then you have to painstakingly write "plandex tell" for each error again and again. Why doesn't it test its own output at any point?

It basically creates a file for every single function, which makes it supremely hard and annoying to fix the errors manually, and it always, always fails at setting up paths correctly for Windows; it often writes them like "EProjectTestplan", with the path separators stripped out.

I got an error that has proved impossible for it to fix after 5 tries now: ValueError: The filepath provided must end in .keras (Keras model format). Received: filepath=E:\Project\testplan\model_weights.h5. Literally the only thing it had to do was change .h5 to .keras, and it couldn't figure that out 5 times in a row with clear explanations on my end.

Not a bad beta, but definitely very far from a self-sustained service.

help appears twice in available commands

plandex -h
Usage:
plandex [command] [flags]
plandex [command]

Available Commands:
apply Apply a plan to the project
branches List plan branches
build Build pending changes
cd Set current plan by name or index
changes View, copy, or manage changes for the current plan
checkout Checkout an existing plan branch or create a new one
clear Clear all context
completion Generate the autocompletion script for the specified shell
connect Connect to an active stream
continue Continue the plan
convo Display complete conversation history
current Get the current plan
delete-branch Delete a plan branch by name or index
delete-plan Delete a plan by name or index, or delete all plans with --all flag
help Display help for Plandex
help Help about any command
invite Invite a new user to the org
load Load context from various inputs
log Show plan history
ls List everything in context
models Show model settings
new Start a new plan
plans List all available plans
ps List plans with active or recently finished streams
revoke Revoke an invite or remove a user from the org
rewind Rewind the plan to an earlier state
rm Remove context
set-model Update model settings
sign-in Sign in to a Plandex account
stop Connect to an active stream
tell Send a prompt for the current plan
update Update outdated context
users List all users and pending invites and the current org
version Print the version number of Plandex

Flags:
-h, --help help for plandex

Use "plandex [command] --help" for more information about a command.

baseUrl config

In the OpenAI client I can configure a baseUrl; is it possible to configure that here as well?

Tokens usage possibly high (Openai 429 error)

Hey there, awesome project 👍

I hit a 429 error on the OpenAI API and wasn't sure if this was to be expected based on my usage. I used plandex load to add a .cc and a .h file (<500 lines total) for context. My goal was to convert these files into Rust. It seems like I hit 57 requests and 629,169 tokens from gpt-4-turbo. Is this to be expected? It seems a bit high, but I'm not too familiar with the API usage. Plandex didn't fully translate the code and left many functions unimplemented, so I wanted to run it again, but it has already cost me about 10 bucks via OpenAI.

My question might come down to a general misunderstanding/confusion about how all the pieces work, so if everything is as expected, could a bit more documentation be added on this?
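For rough context, assuming gpt-4-turbo's spring-2024 list pricing of $10 per 1M input tokens and $30 per 1M output tokens (an assumption; check the current pricing page):

629,169 tokens x $10 / 1,000,000 ≈ $6.29 if they were all input tokens
629,169 tokens x $30 / 1,000,000 ≈ $18.88 if they were all output tokens

So roughly $10 for a mostly-input mix is consistent with that token count; the real question is whether ~630k tokens for a <500-line conversion is expected, not the per-token price.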

'host' key is not being set when 'Another host' is selected

Using docker-compose.

$ pdx sign-in
Use Plandex Cloud or another host? Another host
✔ Host: … http://localhost:8080
✔ Your email: … [email protected]
✉️  You'll now receive a 6 character pin by email. It will be valid for 5 minutes.
✔ Please enter your pin: … ******                                   
✔ Your name: … Alan Young
🧐 You don't have access to any orgs yet.

To join an existing org, ask an admin to either invite you directly or give your whole email domain access.

Otherwise, you can go ahead and create a new org.
Create a new org now? (y)es | (n)o> y
✔ Org name: … harleypig
With domain auto-join, you can allow any user with an email ending in @nosuch.com to auto-join this org.
Enable auto-join for nosuch.com? (y)es | (n)o> n
🚨 Error signing in
  → Error signing in to new account
    → Error resolving org
      → Error prompting no orgs
        → Error creating org
          → Error sending request
            → Post "/orgs"
              → Unsupported protocol scheme ""

The log from docker output.

plandex-server-1    | 2024/04/03 01:47:21 Received request for CreateEmailVerificationHandler
plandex-server-1    | 2024/04/03 01:47:21 Development mode: Verification pin is 2V0aPB for email [email protected]
plandex-server-1    | 2024/04/03 01:47:21 Successfully created email verification
plandex-server-1    | 2024/04/03 01:47:41 Received request for CreateAccountHandler
plandex-server-1    | 2024/04/03 01:47:41 No orgs found for user
plandex-server-1    | 2024/04/03 01:47:41 Successfully created account

The accounts and auth json files.

$ jq -S '.' /home/harleypig/.plandex-home/{accounts,auth}.json
[
  {
    "email": "[email protected]",
    "host": "",
    "isCloud": false,
    "isTrial": false,
    "token": "token",
    "userId": "userid",
    "userName": "Alan Young"
  }
]
{
  "email": "[email protected]",
  "host": "",
  "isCloud": false,
  "isTrial": false,
  "orgId": "",
  "orgName": "",
  "token": "token",
  "userId": "userid",
  "userName": "Alan Young"
}

If I manually add the host (http://localhost:8080), then I'm able to finish the sign-up process.

$ pdx sign-in
Select an account: <Alan Young> [email protected]
🧐 You don't have access to any orgs yet.

To join an existing org, ask an admin to either invite you directly or give your whole email domain access.

Otherwise, you can go ahead and create a new org.
Create a new org now? (y)es | (n)o> y
✔ Org name: … harleypig
With domain auto-join, you can allow any user with an email ending in @nosuch.com to auto-join this org.
Enable auto-join for nosuch.com? (y)es | (n)o> n
✅ Signed in as <Alan Young> [email protected] | Org: harleypig

 plandex new  👉 start a new plan
 plandex (pl)ans  👉 list plans

docker output

plandex-server-1    | 2024/04/03 02:03:17 Received request for ListOrgsHandler
plandex-server-1    | 2024/04/03 02:03:17 authenticating request
plandex-server-1    | 2024/04/03 02:03:17 No orgs found for user
plandex-server-1    | 2024/04/03 02:03:17 Successfully listed orgs
plandex-server-1    | 2024/04/03 02:03:26 Received request for CreateOrgHandler
plandex-server-1    | 2024/04/03 02:03:26 authenticating request
plandex-server-1    | 2024/04/03 02:03:26 Successfully created org
plandex-server-1    | 2024/04/03 02:03:26 Received request for GetOrgSessionHandler
plandex-server-1    | 2024/04/03 02:03:26 authenticating request
plandex-server-1    | 2024/04/03 02:03:26 UserId: userid, Email: [email protected], OrgId: orgid
plandex-server-1    | 2024/04/03 02:03:26 Successfully got org session
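The "Unsupported protocol scheme" error suggests the host entered at the prompt never makes it into accounts.json/auth.json, so the Post ends up being issued against a bare "/orgs" path. A hypothetical guard of the kind that would catch this early (illustrative only, not the actual Plandex code):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeHost rejects an empty host and adds a scheme when one is missing,
// so later requests never go out with an empty protocol scheme.
func normalizeHost(host string) (string, error) {
	host = strings.TrimSpace(host)
	if host == "" {
		return "", fmt.Errorf("host must not be empty")
	}
	if !strings.HasPrefix(host, "http://") && !strings.HasPrefix(host, "https://") {
		host = "https://" + host
	}
	if _, err := url.ParseRequestURI(host); err != nil {
		return "", fmt.Errorf("invalid host %q: %w", host, err)
	}
	return host, nil
}

func main() {
	fmt.Println(normalizeHost("http://localhost:8080"))
}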

Login Issue

I have installed plandex on my Windows PC, but I am unable to log in.
I am getting this type of error while logging in.
How can I solve this problem?

plandex tell not keeping changes

Hello there 👋

Want to try out your project

install via curl bash
export key/org
plandex new
plandex load src/**/*.ts

I tried 3 times, but nothing was recorded or written into a file 😢

plandex created this directory, so I don't think this is a permission issue 🤔

LM Studio support

Is there a way to set it up with LM Studio?

Unable to build cli/v0.8.1

Hello, I attempted to install the Plandex CLI by following the provided steps. However, I encountered an error during the build process:

2024-04-03 22:21:31 egasdad@DESKTOP-QD0B380:~/git/plandex/app/cli$ go build
term/select.go:8:2: github.com/plandex-ai/survey/[email protected]: replacement directory ../../../survey does not exist

Additionally, here is the version of Go I am using:

2024-04-03 22:21:34 egasdad@DESKTOP-QD0B380:~/git/plandex/app/cli$ go version
go version go1.21.3 linux/amd64

Commit:

2024-04-03 22:21:44 egasdad@DESKTOP-QD0B380:~/git/plandex/app/cli$ git show
commit 1d0d8e0d495ec454a5170cf9d2a7157f76ee793a (HEAD -> main, tag: cli/v0.8.1, origin/main, origin/HEAD)
Author: Dane Schneider <[email protected]>
Date: Wed Apr 3 07:52:20 2024 -0700

    Bump version to 0.8.1
Could anyone provide assistance or guidance on resolving this issue?
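Judging from the error, app/cli/go.mod most likely carries a replace directive along these lines (reconstructed from the message; the exact version is unknown), pointing at a directory that sits outside the repository checkout:

replace github.com/plandex-ai/survey/v2 => ../../../survey

If that is the case, building requires either checking out the survey fork at that relative path, or dropping/repointing the directive (e.g. go mod edit -dropreplace=github.com/plandex-ai/survey/v2) so the published module is used instead, assuming a published version of the fork exists.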

panic: runtime error: invalid memory address or nil pointer dereference

This is almost 100% my fault. I signed in with my email (josh@ my username .com) then ctrl-c'd out because I wanted to use a different email address, for my business, before changing my mind again and using my personal account. I wiped the .plandex-home directory since I was already logged in and re-installed (which recreated the directory) but I keep getting this error.

josh@MacBook-Pro frontend % plandex new
? 👋 Hey there!
It looks like this is your first time using Plandex on this computer.
What would you like to do? Sign in, accept an invite, or create an account
? Use Plandex Cloud or another host? Plandex Cloud
✔ Your email: … <REMOVED>
✉️  You'll now receive a 6 character pin by email. It will be valid for 5 minutes.
✔ Please enter your pin: … ******
🧐 You don't have access to any orgs yet.

To join an existing org, ask an admin to either invite you directly or give your whole email domain access.

Otherwise, you can go ahead and create a new org.
Create a new org now? (y)es | (n)o> y
✔ Org name: … <removed>
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x18 pc=0x1009544d4]

goroutine 1 [running]:
plandex/auth.createOrg()
	/Users/ds/projects/plandex/app/cli/auth/org.go:67 +0x84
plandex/auth.promptNoOrgs()
	/Users/ds/projects/plandex/app/cli/auth/org.go:55 +0xc4
plandex/auth.resolveOrgAuth({0x0?, 0x14000028288?, 0x14?})
	/Users/ds/projects/plandex/app/cli/auth/org.go:16 +0x98
plandex/auth.signIn({0x14000028288, 0x14}, {0x1400000f110, 0x6}, {0x0, 0x0})
	/Users/ds/projects/plandex/app/cli/auth/account.go:217 +0x9c
plandex/auth.promptSignInNewAccount()
	/Users/ds/projects/plandex/app/cli/auth/account.go:178 +0x278
plandex/auth.SelectOrSignInOrCreate()
	/Users/ds/projects/plandex/app/cli/auth/account.go:53 +0x38
plandex/auth.promptInitialAuth()
	/Users/ds/projects/plandex/app/cli/auth/account.go:33 +0x124
plandex/auth.MustResolveAuth(0x1)
	/Users/ds/projects/plandex/app/cli/auth/auth.go:26 +0x330
plandex/auth.MustResolveAuthWithOrg(...)
	/Users/ds/projects/plandex/app/cli/auth/auth.go:13
plandex/cmd.new(0x1400050a900?, {0x10138c440?, 0x4?, 0x1009df147?})
	/Users/ds/projects/plandex/app/cli/cmd/new.go:35 +0x2c
github.com/spf13/cobra.(*Command).execute(0x101331dc0, {0x10138c440, 0x0, 0x0})
	/Users/ds/go/pkg/mod/github.com/spf13/[email protected]/command.go:987 +0x814
github.com/spf13/cobra.(*Command).ExecuteC(0x101332f00)
	/Users/ds/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/ds/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039
plandex/cmd.Execute()
	/Users/ds/projects/plandex/app/cli/cmd/root.go:23 +0x24
main.main()
	/Users/ds/projects/plandex/app/cli/main.go:55 +0x90

Related: trying to switch to the plan with an anonymous account is now giving me this:

josh@MacBook-Pro frontend % plandex new
? 👋 Hey there!
It looks like this is your first time using Plandex on this computer.
What would you like to do? Start an anonymous trial on Plandex Cloud (no email required)
🚨 Error creating plan
  → Project does not exist in org
josh@MacBook-Pro frontend % plandex new
🚨 Error creating plan
  → Project does not exist in org

I assume there is some other config I need to delete to do a full "reset". It was working at one point (anon account), but I deleted the home folder so I could log in (I hit enter by accident when setting it up again; yeah, I'm batting 1000 today...) and now I can't get back to that working anonymous state either.

Sorry, this is 100% user-error.

Can't build for NixOS: ../../../survey does not exist

I get the following error.

       > go: plandex/term imports
       >        github.com/plandex-ai/survey/v2: github.com/plandex-ai/survey/[email protected]: replacement directory ../../../survey does not exist

Package file

{
  lib,
  fetchFromGitHub,
  buildGoModule
}:

buildGoModule {
  pname = "plandex-cli";
  version = "v0.8.1";
  vendorHash = lib.fakeHash;
  src = fetchFromGitHub {
    owner = "plandex-ai";
    repo = "plandex";
    rev = "cli/v0.8.1";
    hash = "sha256-+1EETcqjvyi9W+y6nhcEFbc2bv2EjFEBxoxMrWPz7Ro=";
  };
  
  modRoot = "app/cli";
  
  meta = with lib; {
    description = "An AI coding engine for complex tasks";
    homepage = "https://github.com/plandex-ai/plandex";
    license = licenses.agpl3Only;
  };
}

When running in docker the pin cannot be copied to the clipboard

I'm running this docker-compose.yml:

version: '3.8'

services:
  plandex-postgres:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
    networks:
      - plandex-network
  plandex-server:
    build:
      context: .
      dockerfile: Dockerfile.server
    volumes:
      - "$HOME/projects/plandex-server:/plandex-server"
      - "/tmp/.X11-unix:/tmp/.X11-unix"
    ports:
      - "8088:8088"
    environment:
      DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@plandex-postgres:5432/${POSTGRES_DB}?sslmode=disable"
      GOENV: ${GOENV}
      DISPLAY: ${DISPLAY}
    networks:
      - plandex-network

networks:
  plandex-network:
    driver: bridge

I've changed the Dockerfile.server to the following:

FROM --platform=linux/amd64 golang:1.21.3

RUN apt-get update && \
  apt-get install -y git xsel libnotify-bin

WORKDIR /app

COPY ./shared/go.mod ./shared/go.sum ./shared/
RUN cd shared && go mod download

COPY ./server/go.mod ./server/go.sum ./server/
RUN cd server && go mod download

COPY ./server ./server
COPY ./shared ./shared

WORKDIR /app/server

ENV GOOS=linux
ENV GOARCH=amd64
RUN (rm plandex-server || true) && go build -o plandex-server .

ENV PORT=8088
EXPOSE 8088

CMD ["./plandex-server"]

Health check is working

$ GET -e http://localhost:8088/health
200 OK
Connection: close
Date: Sat, 30 Mar 2024 08:57:58 GMT
Content-Length: 2
Content-Type: text/plain; charset=utf-8
Client-Date: Sat, 30 Mar 2024 08:57:58 GMT
Client-Peer: 127.0.0.1:8088
Client-Response-Num: 1

OK

Even when running xhost +local: and xhost +local:docker (which I don't really want to do) it won't work.

Would you be willing to accept a PR that either prints the pin to the log (only in development mode) or writes it to a file in the /plandex-server directory that the user can access?
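For what it's worth, the docker log in the "'host' key is not being set" issue above already shows a line like "Development mode: Verification pin is ... for email ...", so some form of dev-only pin logging may exist. A minimal sketch of the requested behavior, gated on the GOENV variable the compose file already passes through (hypothetical, not the actual server code):

package main

import (
	"log"
	"os"
)

// logPinInDev surfaces the verification pin outside of email delivery, but
// only when the server runs in development mode (GOENV=development).
func logPinInDev(pin, email string) {
	if os.Getenv("GOENV") == "development" {
		log.Printf("Development mode: Verification pin is %s for email %s", pin, email)
	}
}

func main() {
	logPinInDev("2V0aPB", "someone@example.com")
}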

Whitepaper Writing - plandex has trouble with staging changes to markdown files?

Dane, I LOVE this. I want to invest. :)

I asked plandex to flesh out a plan to fill out a white paper, and it did a reasonable job of marshaling GPT-4 into suggestions and some step by step plans. All good.

% plandex tell "Finish up the whitepaper in detail"         
✅ Context is up to date

 💬 User prompt 👇 

  Finish up the whitepaper in detail

 🤖 Plandex reply 👇 

  Given the nature of the task to finish up the whitepaper in detail, and     
  considering the provided context, this task involves expanding and          
  finalizing the document based on the sections outlined in the provided      
  markdown files. This task does not involve coding or system administration, 
  so the response will not include code blocks or file updates. Instead, I    
  will provide a detailed plan for completing the whitepaper, focusing on the 
  content and structure.                                                      
                                                                              
  ## Plan to Finish the Whitepaper in Detail                                  
                                                                              
  1. Expand the "Motivation" Section [snip]                   

However, it doesn't stage any changes -- I speculate that it's been designed for working with a codebase first?

% plandex ch                                       
🤷‍♂️ No changes pending

Am I doing something wrong, or is this an edge case for plandex? There doesn't seem to be a good way to allow it to introspect, e.g. when I try:

plandex tell "Update the white paper per our conversation"
✅ Context is up to date

 💬 User prompt 👇 

  Update the white paper per our conversation

 🤖 Plandex reply 👇 

  I need more information or context to make a plan for this task.            
  Specifically, could you please clarify the updates or changes you would like
  to make to the whitepaper based on our conversation?                        

 plandex (ch)anges  👉 review plan changes
 plandex (ap)ply  👉 apply plan changes to project files
 plandex log  👉 show log of plan updates
 plandex rewind (rw)  👉 rewind to a previous state

Postgres in docker compose unable to run init script

Getting the following error when running docker compose up:

plandex-postgres-1  | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01-init.sh
plandex-postgres-1  | /usr/local/bin/docker-entrypoint.sh: /docker-entrypoint-initdb.d/01-init.sh: /bin/bash: bad interpreter: Permission denied

This leads to the database and user not being created.

Stream buffer tokens too high for file 'go.mod'

I got the error:

....
Next, "All tasks have been completed."
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 🏗  Built plan
📄 cmd/server/main.go ✅ | 📄 go.mod 7144 🪙 | 📄 internal/config/config.go ✅ | 📄 internal/jwt/jwt.go ✅ | 📄 internal/keycloak/client.go ✅
📄 internal/middleware/auth.go ✅ | 📄 internal/models/users.go ✅ | 📄 internal/scheduler/userinfo_updater.go ✅ | 📄 internal/server/server.go ✅

🚨 Server error
  → Stream buffer tokens too high for file 'go.mod'

while it was building my pdx tell prompt.

I'm not sure if this needs a new issue or not. I'll add it here and create a new one if you think it needs it.


I noticed the path is not what I expected.

      → /root/plandex-server/orgs/0368f56f-0794-4a83-8324-0392b07e4338/plans/3b6546fe-6408-4d3b-bfa3-b62df1cb3d3f, err

Shouldn't this be /plandex-server not /root/plandex-server? I don't think this is how I'm building the image.

$ docker inspect --format='{{json .Mounts}}' app-plandex-server-1 | jq '.' ; sudo ls -l $HOME/projects/plandex-data/plandex-data
[
  {
    "Type": "bind",
    "Source": "/home/harleypig/projects/plandex-data/plandex-data",
    "Destination": "/plandex-server",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  }
]
[sudo] password for harleypig: 
Executing: sudo ls -l /home/harleypig/projects/plandex-data/plandex-data 
total 0

_Originally posted by @harleypig in https://github.com/plandex-ai/plandex/issues/8#issuecomment-2030916651_

Edit plan before it's built

First off, plandex is fantastic. Thank you for open sourcing it!

A few times I've found myself wanting to edit a generated plan before it's built. Things like removing a step, or changing it slightly. For example, when working in a swift codebase and writing unit tests I have wanted it to generate a protocol for the object being mocked rather than use subclassing.

Aborting and editing the "tell" is a bit more cumbersome and indirect.

Continually getting OPENAI_API_KEY environment variable is not set

Doesn't seem to matter that the environment variable is set:

$ echo $OPENAI_API_KEY
sk-*************************
$ plan tell "Do some stuff"

🚨 OPENAI_API_KEY environment variable is not set.

Set it with:

export OPENAI_API_KEY=your-api-key

Then try again.

👉 If you don't have an OpenAI account, sign up here → https://platform.openai.com/signup

🔑 Generate an api key here → https://platform.openai.com/api-keys

I tried deleting my plan, making sure the env var is set, then created a new plan. Same result.
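One thing worth ruling out: echo $OPENAI_API_KEY prints the value even when the variable was set without export, but a child process such as the plandex binary only sees exported variables. A tiny Go check (hypothetical, just for diagnosis) makes the difference visible:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Only exported variables reach a child process's environment.
	if v, ok := os.LookupEnv("OPENAI_API_KEY"); ok {
		fmt.Println("visible to child processes, length:", len(v))
	} else {
		fmt.Println("not visible: the variable may be set in the shell but not exported")
	}
}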

Consistent Server Error Encountered When Interacting with Plandex

I am experiencing a consistent server error while attempting to interact with Plandex. Whenever I try to use the plandex tell command with a prompt, I receive the following error message:

🚨 Server error
→ Error getting plan, context, convo, or summaries

This error occurs consistently and prevents me from utilizing the Plandex service effectively. I have attempted to retry the command multiple times and have waited for some time before trying again, but the issue persists.

Expected Behavior:
I expect to receive assistance or suggestions from Plandex based on the provided prompt, as intended.

Actual Behavior:
Instead, I consistently encounter a server error, as described above.

Additional Information:

Operating System: macOS
Plandex Version: latest
Internet Connection: Wi-Fi, Ethernet

plandex-server mapping not working as expected

$ pdx load bin/* lib/*
🚨 Failed to load context
  → Error committing changes
    → Error committing files to git repository for dir
      → /root/plandex-server/orgs/0368f56f-0794-4a83-8324-0392b07e4338/plans/3b6546fe-6408-4d3b-bfa3-b62df1cb3d3f, err
        → Exit status 128, output
          → Author identity unknown

*** Please tell me who you are.

Run

  git config --global user.email "[email protected]"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal
            → Unable to auto-detect email address (got 'root@224f0a0ed4c9.(none)')

I noticed the path is not what I expected.

      → /root/plandex-server/orgs/0368f56f-0794-4a83-8324-0392b07e4338/plans/3b6546fe-6408-4d3b-bfa3-b62df1cb3d3f, err

Shouldn't this be /plandex-server not /root/plandex-server? I don't think this is how I'm building the image.

$ docker inspect --format='{{json .Mounts}}' app-plandex-server-1 | jq '.' ; sudo ls -l $HOME/projects/plandex-data/plandex-data
[
  {
    "Type": "bind",
    "Source": "/home/harleypig/projects/plandex-data/plandex-data",
    "Destination": "/plandex-server",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  }
]
[sudo] password for harleypig: 
Executing: sudo ls -l /home/harleypig/projects/plandex-data/plandex-data 
total 0

runtime error: index out of range [1] with length 1

Caught this one just now after a "tell" to plandex, while it was thinking.

Restoring terminal...

goroutine 88 [running]:
runtime/debug.Stack()
        /usr/local/go/src/runtime/debug/stack.go:24 +0x64
runtime/debug.PrintStack()
        /usr/local/go/src/runtime/debug/stack.go:16 +0x1c
github.com/charmbracelet/bubbletea.(*Program).Run.func1()
        /Users/ds/go/pkg/mod/github.com/charmbracelet/[email protected]/tea.go:478 +0x88
panic({0x101978bc0?, 0x14000024168?})
        /usr/local/go/src/runtime/panic.go:920 +0x26c
plandex/stream_tui.streamUIModel.doRenderBuild({0x0, {{{0x140001e8a30, 0x1, 0x1}, {{...}, {...}}, 0x0}, {{0x140001e8a50, 0x1, 0x1}, ...}, ...}, ...}, ...)
        /Users/ds/projects/plandex/app/cli/stream_tui/view.go:142 +0xbb0
plandex/stream_tui.streamUIModel.renderBuild(...)
        /Users/ds/projects/plandex/app/cli/stream_tui/view.go:60
plandex/stream_tui.(*streamUIModel).getViewportDimensions(0x1400073ec00)
        /Users/ds/projects/plandex/app/cli/stream_tui/update.go:156 +0x114
plandex/stream_tui.(*streamUIModel).updateViewportDimensions(...)
        /Users/ds/projects/plandex/app/cli/stream_tui/update.go:143
plandex/stream_tui.(*streamUIModel).streamUpdate(0x1400073ec00, 0x140005b5b80)
        /Users/ds/projects/plandex/app/cli/stream_tui/update.go:311 +0x3f4
plandex/stream_tui.streamUIModel.Update({0x0, {{{0x140001e8a30, 0x1, 0x1}, {{...}, {...}}, 0x0}, {{0x140001e8a50, 0x1, 0x1}, ...}, ...}, ...}, ...)
        /Users/ds/projects/plandex/app/cli/stream_tui/update.go:38 +0x534
github.com/charmbracelet/bubbletea.(*Program).eventLoop(0x140003176c0, {0x1019bc2d0?, 0x14000516000?}, 0x1019bb408?)
        /Users/ds/go/pkg/mod/github.com/charmbracelet/[email protected]/tea.go:411 +0x6bc
github.com/charmbracelet/bubbletea.(*Program).Run(0x140003176c0)
        /Users/ds/go/pkg/mod/github.com/charmbracelet/[email protected]/tea.go:543 +0x6d8
plandex/stream_tui.StartStreamUI({0x16f0a6e02, 0x702}, 0x0?)
        /Users/ds/projects/plandex/app/cli/stream_tui/run.go:40 +0x240
plandex/plan_exec.TellPlan.func1.1()
        /Users/ds/projects/plandex/app/cli/plan_exec/tell.go:111 +0x38
created by plandex/plan_exec.TellPlan.func1 in goroutine 1
        /Users/ds/projects/plandex/app/cli/plan_exec/tell.go:110 +0x490
panic: interface conversion: tea.Model is nil, not streamtui.streamUIModel

goroutine 88 [running]:
plandex/stream_tui.StartStreamUI({0x16f0a6e02, 0x702}, 0x0?)
        /Users/ds/projects/plandex/app/cli/stream_tui/run.go:52 +0x824
plandex/plan_exec.TellPlan.func1.1()
        /Users/ds/projects/plandex/app/cli/plan_exec/tell.go:111 +0x38
created by plandex/plan_exec.TellPlan.func1 in goroutine 1
        /Users/ds/projects/plandex/app/cli/plan_exec/tell.go:110 +0x490
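The first panic ("index out of range [1] with length 1" in view.go's doRenderBuild) reads like a slice that can legitimately have a single element being indexed at position 1. Purely as an illustration of the pattern that avoids this kind of crash (not the actual Plandex code, whose data shapes are only a guess here):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical: a status line that usually has two fields but sometimes only one.
	fields := strings.Fields("go.mod")

	name := fields[0]
	detail := ""
	if len(fields) > 1 { // guard before indexing the optional second field
		detail = fields[1]
	}
	fmt.Println(name, detail)
}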

Some generated code is missing from file

When Plandex runs, it generates code in the terminal. I then apply the update, and only part of the generated code ends up in the file.

This is in a new project with no files to start with.

The file it generated
main.txt

Plandex output:
$gtrends_analysis$ plandex new
✅ Started new plan draft and set it to current plan

plandex (l)oad 👉 load files, dirs, urls, notes or piped data into context
plandex (t)ell 👉 describe a task, ask a question, or chat
plandex (pl)ans 👉 list plans
plandex (cu)rrent 👉 show current plan
$gtrends_analysis$ plandex tell
✅ Context is up to date

💬 User prompt 👇

I want to create a python3 script that does the following.

The inputs should be hardcoded in the beginning of the script

Inputs: data csv file, start_date, end_date (defaults to the current day), N_overlap (the number of days that are overlapped), and a keywords file with a list of keywords to get from Google Trends.

outputs: updated data csv file
the csv file should be named data_gtrend.csv

Make sure a csv file that contains keywords from Google Trends is up to date. The data should be fetched in batches that correspond to calendar-quarter intervals, i.e. they start at the calendar quarter and end at the calendar quarter. The latest quarter will of course end at the current date. If the last quarter in the data file has not ended, it should be removed and replaced with new data. You can add a blank row in the csv file to keep track of where the last quarter begins.

The data should be normalized using overlapping days. This means the data needs to be fetched N days before the quarter begins. This will be used to scale the new data so it has the same amplitude as the old data. All the non-empty overlap days should be used to calculate the correction factor. However, they should be divided into 4 parts and then compared as a sanity check, so they are within a maximum percentage distance. After that, the average correction factor should be calculated and used to scale the new data.

Print information about the current stage of the fetching and any errors.
The oldest data should be fetched first.

The columns should be
date, keyword1, keyword2 etc

🤖 Plandex reply 👇

Let's break up this task into smaller subtasks to create a Python script
that fetches Google Trends data for given keywords, normalizes the data
using overlapping days, and updates a CSV file accordingly. The script will
handle inputs such as the data CSV file path, start and end dates, the
number of overlap days, and a file containing keywords. The output will be
an updated CSV file named data_gtrend.csv .

  1. Set up the Python environment and install necessary libraries: Determine
    the libraries needed for CSV file manipulation, fetching Google Trends data,
    and handling dates.
  2. Define hardcoded inputs: Code the section where inputs like the data CSV
    file path, start and end dates, the number of overlap days, and the keywords
    file path are hardcoded.
  3. Read keywords from the keywords file: Implement the functionality to read
    keywords from the provided file.
  4. Fetch Google Trends data: Develop the function to fetch Google Trends
    data for the given keywords in quarterly batches, including handling the
    overlap days for normalization.
  5. Normalize the fetched data: Code the logic to normalize the fetched data
    using the overlap days, including the division into four parts for a sanity
    check and calculating the average correction factor.
  6. Update the CSV file: Implement the functionality to update the existing
    CSV file with the new, normalized data, ensuring the correct format and
    handling of quarters.
  7. Logging and error handling: Add print statements throughout the script to
    inform the user of the current process stage and handle any potential errors
    gracefully.

Given the complexity and the need for thorough implementation, we'll start
with the first subtask: setting up the Python environment and installing
necessary libraries. This involves identifying the libraries required for
CSV manipulation, fetching Google Trends data, and date handling. We'll need
pandas for CSV file operations, pytrends for accessing Google Trends
data, and possibly datetime for handling dates, which is part of the
standard library.

Next, we will define hardcoded inputs in the script.

👉 To continue with the plan, we'll start by defining the hardcoded inputs
in our Python script. This includes the path to the data CSV file, the start
and end dates (with defaults to the current day), the number of overlap days
(N_overlap), and the path to the file containing the list of keywords.

• main.py:

import pandas as pd                                                       
from datetime import datetime, timedelta                                  
from pytrends.request import TrendReq                                     
                                                                          
# Hardcoded inputs                                                        
data_csv_path = 'data_gtrend.csv'                                         
start_date = datetime(2020, 1, 1)  # Example start date, adjust as needed 
end_date = datetime.now()  # Defaults to the current day                  
N_overlap = 5  # Number of overlap days                                   
keywords_file_path = 'keywords.txt'                                       
                                                                          
# Initialize Google Trends API                                            
pytrends = TrendReq(hl='en-US', tz=360)                                   

Next, we'll implement the functionality to read keywords from the provided
file. This step is crucial as these keywords will be used to fetch data from
Google Trends.

• main.py:

def read_keywords(file_path):                                             
    with open(file_path, 'r') as file:                                    
        keywords = [line.strip() for line in file.readlines()]            
    return keywords                                                       
                                                                          
keywords = read_keywords(keywords_file_path)                              

After setting up the initial inputs and reading the keywords, the next step
involves fetching Google Trends data. This requires handling the logic for
quarterly data fetching, including the overlap days for normalization. We'll
also need to ensure that we're fetching data in batches that correspond to
calendar quarterly intervals.

• main.py:

def fetch_quarterly_trends(keywords, start_date, end_date, N_overlap):
    # Calculate the start of the current quarter
    current_quarter_start = start_date - timedelta(days=start_date.day - 1)
    current_quarter_start = current_quarter_start - timedelta(months=(current_quarter_start.month - 1) % 3)

    # Adjust the start date for overlap
    adjusted_start_date = current_quarter_start - timedelta(days=N_overlap)

    # Placeholder for fetched data
    trends_data = []

    # Fetch data in quarterly batches
    while adjusted_start_date < end_date:
        quarter_end = adjusted_start_date + timedelta(days=90)  # Approximate quarter length
        if quarter_end > end_date:
            quarter_end = end_date

        # Fetch data for the current batch
        pytrends.build_payload(keywords, timeframe=f'{adjusted_start_date.strftime("%Y-%m-%d")} {quarter_end.strftime("%Y-%m-%d")}')
        data = pytrends.interest_over_time()

        trends_data.append(data)

        # Move to the next quarter
        adjusted_start_date = quarter_end

    return pd.concat(trends_data)

# Placeholder for normalization and CSV update logic

This code snippet sets up the basic structure for fetching Google Trends
data in quarterly intervals, including the overlap days. The next steps will
involve normalizing the fetched data using the overlap days and updating the
existing CSV file with the new, normalized data. These steps will handle the
normalization logic, including the division into four parts for a sanity
check, calculating the average correction factor, and updating the CSV file
accordingly.

Next, we will implement the normalization of the fetched data and update the
CSV file.

👉 Continuing from the data fetching, the next step involves normalizing the
fetched data using the overlap days. This process includes dividing the
overlap days into four parts for a sanity check, ensuring they are within a
maximum percentage distance, and then calculating the average correction
factor to scale the new data.

• main.py:

def normalize_data(fetched_data, N_overlap):
    # Assuming fetched_data is a DataFrame with the first column as 'date' and the rest as keywords
    # The data is already sorted by date in ascending order

    # Calculate the correction factor for each keyword
    correction_factors = {}
    for keyword in fetched_data.columns[1:]:  # Skip the 'date' column
        # Divide the overlap period into four parts for sanity check
        overlap_data = fetched_data.head(N_overlap)[keyword]
        parts = np.array_split(overlap_data, 4)
        part_means = [part.mean() for part in parts]

        # Sanity check: ensure the means are within a maximum percentage distance
        max_distance = 0.2  # 20% maximum distance
        if all(abs(part_means[i] - part_means[i+1])/part_means[i] < max_distance for i in range(len(part_means)-1)):
            # Calculate the average correction factor
            correction_factors[keyword] = sum(part_means) / len(part_means)
        else:
            print(f"Sanity check failed for {keyword}. Skipping normalization.")
            correction_factors[keyword] = 1  # No scaling

    # Apply the correction factor to scale the new data
    for keyword, factor in correction_factors.items():
        fetched_data[keyword] *= factor

    return fetched_data

After normalizing the data, the final step is to update the existing CSV
file with the new, normalized data. This involves checking for the last
quarter in the existing data, removing it if it has not ended, and appending
the new data.

• main.py:

def update_csv(data_csv_path, new_data):
    # Try to read the existing data
    try:
        existing_data = pd.read_csv(data_csv_path)
    except FileNotFoundError:
        existing_data = pd.DataFrame()

    # If existing data is not empty, remove the last quarter
    if not existing_data.empty:
        last_quarter_start = existing_data['date'].max() - pd.offsets.QuarterBegin(startingMonth=1)
        existing_data = existing_data[existing_data['date'] < last_quarter_start.strftime('%Y-%m-%d')]

    # Append the new data
    updated_data = pd.concat([existing_data, new_data])

    # Save the updated data to CSV
    updated_data.to_csv(data_csv_path, index=False)
    print(f"Data updated successfully in {data_csv_path}")

# Placeholder for the main logic that orchestrates the fetching, normalization, and updating process

This completes the implementation of the core functionality for the Python
script. The remaining task is to orchestrate the fetching, normalization,
and updating process within a main function or script block that utilizes
the defined functions to achieve the desired outcome.

All tasks have been completed.
─────────────────────────────────────────────────────────────────────────────────────────────────────
🏗 Built plan
📄 main.py ✅

plandex (ch)anges 👉 review plan changes
plandex (ap)ply 👉 apply plan changes to project files
plandex log 👉 show log of plan updates
plandex rewind (rw) 👉 rewind to a previous state
$gtrends_analysis$ plandex log
📝 Update 99a16a0 | Today | 7:23:05pm CEST
🏗️ Build pending changes
✏️ Implement data normalization and CSV update logic
• edit → main.py

📝 Update 1ed4daf | Today | 7:22:53pm CEST
Message #4 | 🤖 Plandex reply | 656 🪙

📝 Update 73baf89 | Today | 7:22:30pm CEST
🏗️ Build pending changes
✏️ Implemented initial setup, keyword reading, and quarterly data fetching
• new file → main.py
• edit → main.py

📝 Update 3ec8f3e | Today | 7:22:25pm CEST
Message #3 | 🤖 Plandex reply | 697 🪙

📝 Update 449ace1 | Today | 7:21:56pm CEST
Message #2 | 🤖 Plandex reply | 409 🪙

📝 Update e8cc20a | Today | 7:21:41pm CEST
Message #1 | 💬 User prompt | 320 🪙
$gtrends_analysis$ plandex ap
Apply changes to 1 file? (y)es | (n)o> y
✅ Applied changes, 1 file updated

🚨 Error resolving auth → Error selecting auth option → Unexpected escape sequence from terminal → ['\x1b' '\\']

Problem Description

Error Message

🚨 Error resolving auth
→ Error selecting auth option
→ Unexpected escape sequence from terminal
→ ['\x1b' '\']

Steps Taken

  1. Added OpenAI key.
  2. Ran the plandex new command.

Output

👋 Hey there!
It looks like this is your first time using Plandex on this computer.
What would you like to do?

Start an anonymous trial on Plandex Cloud (no email required)
Sign in, accept an invite, or create an account
🚨 Error resolving auth
→ Error selecting auth option
→ Unexpected escape sequence from terminal
→ ['\x1b' '\']

Additional Information

  • Please provide assistance in resolving this issue.


Self-hosted server: plandex new fails to create a project

Server log:

2024/04/09 14:18:13 Error creating plan: error initializing plan dir: error initializing git repo: error initializing git repository with 'main' as default branch for dir: /plandex-server/orgs/df5ad43f-fea2-408e-a0ab-5e69877c47d9/plans/f47fe1c8-b9b0-4ed4-beb9-914bc9782039, err: exit status 129, output: Unknown option: -C
usage: git [--version] [--help] [-c name=value]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

git version: 1.8.3.1
os: CentOS 7
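The "Unknown option: -C" output is consistent with CentOS 7's stock git 1.8.3.1: the -C flag was only added in git 1.8.5. Upgrading git on the host is the simplest fix; on the server side, the same effect can be had without -C by setting the command's working directory, as in this sketch (illustrative, not the actual Plandex code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	planDir := "/plandex-server/orgs/example-org/plans/example-plan" // hypothetical path

	// Equivalent to "git -C <dir> init", but works on git versions older
	// than 1.8.5 because the directory is set on the process instead.
	cmd := exec.Command("git", "init")
	cmd.Dir = planDir
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}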

claude integration

So happy to find this, I wrote and am using something similar, which I'll now ditch for this.

I used an external command line tool (https://llm.datasette.io/) so I could use different LLMs and compare them, and Claude Opus was better than GPT-4 probably 80% of the time (I was doing pretty much all my tests in both in parallel). When Claude wasn't better, they were roughly even. There might have been one or two instances where GPT-4 was better, but nothing notable.

Anyway, wanted to mention this because you might want to bump up the priority of supporting Claude, or using a third party lib like I did to abstract the LLMs away. This will give your project a real boost... until GPT5 switches it around inevitably of course

Simple CRUD request takes multiple steps and deletes other code that is already present or was created in a previous step

Hi,

A simple CRUD request creates the functions in several steps, but deletes other existing functions and/or those created in the previous step in the same file.

Example of a user model:

  • Step 1: 14 lines added, 50 lines that already existed deleted
  • Step 2: the previously created 14 lines deleted and 6 new ones added, but different functions (CreateUser and the database connection deleted, UpdateUser added)
  • Step 3: the previously added UpdateUser replaced with a DeleteUser function

The result is currently unusable.

Permission error while running install script

Hey, very interesting project!

I tried running the install script curl -sL https://plandex.ai/install.sh | bash on an M1 Mac and got the following error:

Copyright (c) 2024 Plandex Inc.

Downloading Plandex tarball from https://github.com/plandex-ai/plandex/releases/download/cli%2Fv0.8.1/plandex_0.8.1_darwin_arm64.tar.gz
mv: rename plandex to /usr/local/bin/plandex: Permission denied

`set-model` command isn't parsing arguments correctly to set a new model for a specific role

plandex set-model with no arguments works correctly and prompts the user to update model settings or select a role to set a new model for, but when passing arguments like plandex set-model planner gpt-4, it isn't updating correctly.

This is likely a problem with argument parsing in the CLI command, since a problem on the server would affect the no-arguments form as well.
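Since the CLI is built on spf13/cobra (visible in the stack traces elsewhere in this list), the expected shape of the fix is roughly: when positional arguments are present, use them directly instead of falling back to the interactive prompt. A hypothetical sketch, not the actual command definition:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// Hypothetical helpers standing in for the real update/prompt logic.
func updateModelSetting(role, model string) { fmt.Println("set", role, "->", model) }
func promptForModelSettings()               { fmt.Println("interactive prompt") }

func main() {
	setModelCmd := &cobra.Command{
		Use:  "set-model [role] [model]",
		Args: cobra.MaximumNArgs(2),
		Run: func(cmd *cobra.Command, args []string) {
			if len(args) == 2 {
				// With two positional args, update directly instead of prompting.
				updateModelSetting(args[0], args[1])
				return
			}
			promptForModelSettings()
		},
	}
	setModelCmd.SetArgs([]string{"planner", "gpt-4"})
	_ = setModelCmd.Execute()
}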

Auth error when creating a new plan

When I try to make a new project with version 0.8.1, I enter 'plandex new', but it immediately produces the following output with no opportunity for user input and then exits.

$ plandex new
👋 Hey there!
It looks like this is your first time using Plandex on this computer.
What would you like to do?
> Start an anonymous trial on Plandex Cloud (no email required)
  Sign in, accept an invite, or create an account
🚨 Error resolving auth
  → Error selecting auth option
    → Incorrect function.

Using Git Bash for Windows

$ git --version
git version 2.44.0.windows.1

git config user and email not set

This may or may not be related to running this under docker compose.

$ pdx load bin/* lib/*
🚨 Failed to load context
  → Error committing changes
    → Error committing files to git repository for dir
      → /root/plandex-server/orgs/0368f56f-0794-4a83-8324-0392b07e4338/plans/3b6546fe-6408-4d3b-bfa3-b62df1cb3d3f, err
        → Exit status 128, output
          → Author identity unknown

*** Please tell me who you are.

Run

  git config --global user.email "[email protected]"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal
            → Unable to auto-detect email address (got 'root@224f0a0ed4c9.(none)')
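A common way to avoid depending on a global git identity inside the server container is to pass one explicitly on each commit with -c (supported even by old gits, as the usage text quoted in the CentOS issue shows). A sketch of the idea (hypothetical, not the actual Plandex code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	repoDir := "/plandex-server/orgs/example-org/plans/example-plan" // hypothetical path

	// Provide an identity per invocation so the container needs no ~/.gitconfig.
	// The name and email values here are placeholders.
	cmd := exec.Command("git",
		"-c", "user.name=Plandex Server",
		"-c", "user.email=server@plandex.local",
		"commit", "-m", "Update context")
	cmd.Dir = repoDir
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}

As a quick workaround, running git config --global user.email and user.name inside the running server container (or baking them into Dockerfile.server) also clears the error.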

tmux

When using it in tmux, plandex apply gives the following:

🚨 Failed to get confirmation user input
  → Failed to get user input
    → Failed to open keyboard
      → Error while reading terminfo data:termbox
        → Unsupported terminal

Today I found out there are CLI nerds out there who don't use tmux!

Benchmark on SWE-Bench

It would be interesting to measure performance on the SWE-bench benchmark, so that this project can be more clearly differentiated from the increasing number of other coding agents.

Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, Karthik Narasimhan
Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We consider real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. We therefore introduce SWE-bench, an evaluation framework including 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts and perform complex reasoning that goes far beyond traditional code generation. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. Claude 2 and GPT-4 solve a mere 4.8% and 1.7% of instances respectively, even when provided with an oracle retriever. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.

Docker Compose user authentication failed

Hey, I was trying this out and got stuck on this issue.

 ! plandex-server The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested                  0.0s 
Attaching to plandex-postgres-1, plandex-server-1
plandex-postgres-1  | 
plandex-postgres-1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
plandex-postgres-1  | 
plandex-postgres-1  | 2024-04-05 18:14:48.264 UTC [1] LOG:  starting PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
plandex-postgres-1  | 2024-04-05 18:14:48.264 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
plandex-postgres-1  | 2024-04-05 18:14:48.264 UTC [1] LOG:  listening on IPv6 address "::", port 5432
plandex-postgres-1  | 2024-04-05 18:14:48.265 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
plandex-postgres-1  | 2024-04-05 18:14:48.269 UTC [30] LOG:  database system was shut down at 2024-04-05 18:14:09 UTC
plandex-postgres-1  | 2024-04-05 18:14:48.278 UTC [1] LOG:  database system is ready to accept connections
plandex-server-1    | wait-for-it.sh: waiting 15 seconds for plandex-postgres:5432
plandex-server-1    | wait-for-it.sh: plandex-postgres:5432 is available after 0 seconds
plandex-postgres-1  | 2024-04-05 18:14:52.543 UTC [35] FATAL:  password authentication failed for user "user"
plandex-postgres-1  | 2024-04-05 18:14:52.543 UTC [35] DETAIL:  Role "user" does not exist.
plandex-postgres-1  | 	Connection matched file "/var/lib/postgresql/data/pg_hba.conf" line 128: "host all all all scram-sha-256"
plandex-server-1    | 2024/04/05 18:14:52 Error initializing database: pq: password authentication failed for user "user"

I set my .env to this:

POSTGRES_PASSWORD=password
PLANDEX_DB=plandex
PLANDEX_USER=user
PLANDEX_PASSWORD=password
PGDATA_DIR=./datadir
PLANDEX_DATA_DIR=./pdatadir
GOENV=development
PIN2LOG=true

I tried different variations/values, but it's still the same issue. Can someone please help me figure out what's wrong?

null value in column "org_role_id" of relation "orgs_users" violates not-null constraint on initial setup

Here is my terminal output

plandex ls
? 👋 Hey there!
It looks like this is your first time using Plandex on this computer.
What would you like to do? Sign in, accept an invite, or create an account
? Use Plandex Cloud or another host? Plandex Cloud
✔ Your email: … CENSORED
✉️  You'll now receive a 6 character pin by email. It will be valid for 5 minutes.
✔ Please enter your pin: … ******
✔ Your name: … CENSORED
🧐 You don't have access to any orgs yet.

To join an existing org, ask an admin to either invite you directly or give your whole email domain access.

Otherwise, you can go ahead and create a new org.
Create a new org now? (y)es | (n)o> y
✔ Org name: … CENSORED
With domain auto-join, you can allow any user with an email ending in @oboe.email to auto-join this org.
Enable auto-join for CENSORED? (y)es | (n)o> y
🚨 Error resolving auth
  → Error selecting or signing in to account
    → Error signing in to new account
      → Error resolving org
        → Error prompting no orgs
          → Error creating org
            → Error adding org domain users
              → Error adding org users
                → Pq: null value in column "org_role_id" of relation "orgs_users" violates not-null constraint

I used real and valid data instead of CENSORED; I just don't want to post it publicly here.

load and rm are not intuitive

pdx load for adding files and URLs and pdx rm for removing them is not intuitive.

Either load/unload or add/rm (or both, i.e., load with an alias of add) would be clearer and more intuitive.
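On the alias idea: cobra commands support an Aliases field, so load could also answer to add (and rm to unload or similar) without breaking existing usage. A hypothetical sketch, not the actual command definition:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	loadCmd := &cobra.Command{
		Use:     "load",
		Aliases: []string{"add"}, // "plandex add" would behave exactly like "plandex load"
		Short:   "Load context from various inputs",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("loading:", args)
		},
	}
	loadCmd.SetArgs([]string{"tests/*"})
	_ = loadCmd.Execute()
}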

Also, a mv option would be nice. A use case: I realized I'd accidentally created a tests directory under my tests directory, put a bunch of files there, and then loaded them. I know I can do pdx rm tests/tests/* and then pdx add tests/*.
