
ai-shell's Introduction

AI Shell logo

A CLI that converts natural language to shell commands.

Current version

Gif Demo

Inspired by the GitHub Copilot X CLI, but open source for everyone.


AI Shell

Setup

The minimum supported version of Node.js is v14

  1. Install ai shell:

    npm install -g @builder.io/ai-shell
  2. Retrieve your API key from OpenAI

    Note: If you haven't already, you'll have to create an account and set up billing.

  3. Set the key so ai-shell can use it:

    ai config set OPENAI_KEY=<your token>

    This will create a .ai-shell file in your home directory.

Usage

ai <prompt>

For example:

ai list all log files

Then you will get an output like this, where you can choose to run the suggested command, revise the command via a prompt, or cancel:

◇  Your script:
│
│  find . -name "*.log"
│
◇  Explanation:
│
│  1. Searches for all files with the extension ".log" in the current directory and any subdirectories.
│
◆  Run this script?
│  ● ✅ Yes (Lets go!)
│  ○ 📝 Revise
│  ○ ❌ Cancel
└

Special characters

Note that some shells treat certain characters, such as ? or * or anything that looks like a file path, specially. If you are getting strange behavior, you can wrap the prompt in quotes to avoid issues, like below:

ai 'what is my ip address'

Chat mode

Chat demo

ai chat

With this mode, you can engage in a conversation with the AI and receive helpful responses in a natural, conversational manner directly through the CLI:

┌  Starting new conversation
│
◇  You:
│  how do I serve a redirect in express
│
◇  AI Shell:

In Express, you can use the `redirect()` method to serve a redirect. The `redirect()` method takes one argument, which is the URL that you want to redirect to.

Here's an example:

```js
app.get('/oldurl', (req, res) => {
  res.redirect('/newurl');
});
```

Silent mode (skip explanations)

You can skip the explanation section by using the -s or --silent flag:

ai -s list all log files

or save the option as a preference using this command:

ai config set SILENT_MODE=true

Custom API endpoint

You can use a custom OpenAI API endpoint by setting OPENAI_API_ENDPOINT (default: https://api.openai.com/v1):

ai config set OPENAI_API_ENDPOINT=<your proxy endpoint>

Set Language

Language UI

The AI Shell's default language is English, but you can easily switch to your preferred language by using the corresponding language keys, as shown below:

Language              Key
English               en
Simplified Chinese    zh-Hans
Traditional Chinese   zh-Hant
Spanish               es
Japanese              jp
Korean                ko
French                fr
German                de
Russian               ru
Ukrainian             uk
Vietnamese            vi
Arabic                ar
Portuguese            pt
Turkish               tr

For instance, if you want to switch to Simplified Chinese, you can do so by setting the LANGUAGE value to zh-Hans:

ai config set LANGUAGE=zh-Hans

This will set your language to Simplified Chinese.

Config UI

To use a more visual interface to view and set config options you can type:

ai config

To get an interactive UI like below:

◆  Set config:
│  ○ OpenAI Key
│  ○ OpenAI API Endpoint
│  ○ Silent Mode
│  ● Model (gpt-3.5-turbo)
│  ○ Language
│  ○ Cancel
└

Upgrading

Check the installed version with:

ai --version

If it's not the latest version, run:

npm update -g @builder.io/ai-shell

Or just use AI shell:

ai update

Common Issues

429 error

Some users are reporting a 429 from OpenAI. This is due to incorrect billing setup or excessive quota usage. Please follow this guide to fix it.

You can activate billing at this link. Make sure to add a payment method if not under an active grant from OpenAI.

Motivation

I am not a bash wizard, and am dying for access to the copilot CLI, and got impatient.

Contributing

If you want to help fix a bug or implement a feature listed in Issues (tip: look out for the help wanted label), check out the Contribution Guide to learn how to set up the project.

Credit

  • Thanks to GitHub Copilot for their amazing tools and the idea for this
  • Thanks to Hassan and his work on aicommits which inspired the workflow and some parts of the code and flows

Community

Come join the Builder.io Discord and chat with us in the #ai-shell room.



Made with love by Builder.io

ai-shell's People

Contributors

alaeddineboughanmmi, beauwilliams, burnoutdev, eltociear, georgesstein, hdkiller, incerta, irian-adappty, iwasherd, jpxd, karasushin, lzhgus, mindgitrwx, omt66, qihangnet, rafaelcv7, revanapriyandi, sbezludny, snilan, steve8708, thelastmayday, wkaisertexas, zeldocarina


ai-shell's Issues

Support windows

Right now this assumes you use bash on macOS or Linux. It would be great to support Windows too, e.g. by detecting it and specifying that in the prompt we generate.
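A hedged sketch of what that detection could look like; the prompt wording and function name are illustrative, not the project's actual prompt:

```ts
import os from 'node:os';

// Hedged sketch: describe the user's platform in the prompt sent to the model,
// so it emits PowerShell syntax on Windows and POSIX syntax elsewhere.
function platformHint(): string {
  if (process.platform === 'win32') {
    return 'The user is on Windows; target PowerShell.';
  }
  return `The user is on ${os.type()} ${os.release()}; target bash/zsh.`;
}

const systemPrompt =
  'You convert natural language into a single shell command. ' + platformHint();

console.log(systemPrompt);
```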

How do I customize the endpoint that responses stream from?

How do I customize the endpoint that responses stream from? (For example: I am a developer with my own server running a ChatGPT proxy. I want users to send requests to my server, which then queries the proxy and returns the data to the client.)

"payload.replaceAll is not a function"

ai check ip address

┌  AI Shell
│
◇  Your script:

(node:72142) UnhandledPromiseRejectionWarning: TypeError: payload.replaceAll is not a function
    at parseContent (file:///usr/local/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:296:27)
    at file:///usr/local/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:281:19
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:72142) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)

(node:72142) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Is there any information I should provide?

Using Cloudflare proxy as OPENAI_API_ENDPOINT fails to generate correct script

I have configured the OpenAI API Endpoint address to use Cloudflare proxy API (e.g. https://github.com/x-dr/chatgptProxyAPI), and I have encountered an issue where the generated script is incorrect.

Upon investigation, I found that this may be because a single "data:" line was split across multiple chunks, a situation the code in stream-to-iterable.ts does not account for. This causes errors in subsequent processing, leading to the incorrect script.
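A minimal sketch of how that case could be handled, buffering partial lines across chunks before looking for "data:" prefixes; this is illustrative, not the project's actual stream-to-iterable.ts:

```ts
// Hedged sketch: accumulate chunks and only emit complete "data:" lines, so a
// line split across two chunks is reassembled before it is parsed.
async function* sseDataLines(chunks: AsyncIterable<string>): AsyncGenerator<string> {
  let buffer = '';
  for await (const chunk of chunks) {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep the trailing partial line for the next chunk
    for (const line of lines) {
      const trimmed = line.trim();
      if (trimmed.startsWith('data:')) {
        yield trimmed.slice('data:'.length).trim();
      }
    }
  }
  if (buffer.trim().startsWith('data:')) {
    yield buffer.trim().slice('data:'.length).trim(); // flush any final line
  }
}
```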

Thank you for your attention and support, and I look forward to your response.

ai update command steals the whole input

Hello,

The "update" command steals the user input:

ai update all packages

Running: npm update -g @builder.io/ai-shell


up to date in 426ms

18 packages are looking for funding
  run `npm fund` for details

Another example:

 ai update current time

Running: npm update -g @builder.io/ai-shell

It should only run when no other arguments are passed.
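A minimal sketch of the guard being suggested, assuming a plain process.argv check (the real CLI's argument handling may differ):

```ts
// Hedged sketch: only treat "update" as the self-update command when it is
// the sole argument; otherwise hand the whole input to the prompt flow.
const args = process.argv.slice(2);

if (args.length === 1 && args[0] === 'update') {
  console.log('Running: npm update -g @builder.io/ai-shell');
} else {
  console.log(`Treating "${args.join(' ')}" as a natural-language prompt`);
}
```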

Allow language choice for responses

I have noticed that the Explanation currently only responds in English. It would be good to allow users to choose the language of the response, to make the tool more user-friendly for more people.

command not found: ai

I tried following the readme to install this tool. But after installation, I get command not found. I restarted iTerm after installation. Any tips? Thanks!

macOS Monterey 12.6.3 (Intel chip)
iTerm2 Build 3.4.19

zsh --version                                                                                                     
zsh 5.8.1 (x86_64-apple-darwin21.0)

node --version                                                                                                      
v19.8.1

npm --version                                                                                                       
8.1.0

ls /usr/local/Cellar/node/19.8.1/lib/node_modules/@builder.io/ai-shell                                              
LICENSE  README.md  dist  node_modules  package.json

brew --version                                                                                                    
Homebrew 4.0.13
Homebrew/homebrew-core (git revision 39002442006; last commit 2023-03-11)

npm install -g @builder.io/ai-shell    

changed 40 packages, and audited 42 packages in 2s

13 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

ai         
zsh: command not found: ai

ai-shell                                                                                               
zsh: command not found: ai-shell

Improve error handling

I got a 429 back and it was just blobbed on the screen, hard to read what was happening.
Also, it's weird I got a 429 back as this has been my first query!

image

Option to use GPT 4

Would be nice to offer an option to use GPT-4 specifically for the shell script output. We should still use 3.5 for the description, since we want that to be fast and cheap, but the shell scripts are only a few characters, so GPT-4 could be a nice option or even the default.

RFC: Complete refactor for ai-shell

I would like to completely refactor this library and contribute the changes to this repo, but it would take some time to complete, and the code would be incompatible with any new changes implemented in the original repo in the meantime. Therefore, would it be reasonable to contribute the refactored source code to this repo at all, or should I just fork it into a completely separate project?

The reason behind the refactor is that I would like to change how some things work and implement some additional features that would be too complicated to implement considering the current structure of the source code.

Add option for silent mode

It would be nice to offer the option of a silent mode so we don't output the explanation:

 
 $ ai --silent  list all log files
 | find . -name "*.log"
 | [run, revise, cancel]
 

SSL error

macOS.
After setting OPENAI_KEY,
calling 'ai list all log file' in the terminal gives:

write EPROTO 4543126976:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:94:16)

ai-shell v0.1.12

Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new

Stream responses from OpenAI API

Instead of making the user wait for the entire shell code or description to finish before seeing it, it would be great to stream the response word by word to the terminal, as is done on sites like ChatGPT.
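For reference, a hedged sketch of what streaming from the Chat Completions endpoint can look like with Node 18+ fetch; the endpoint and payload follow the public OpenAI API, while key handling and error handling are deliberately simplified:

```ts
// Hedged sketch: request a streamed chat completion and print tokens as they
// arrive, instead of waiting for the full response. Requires Node 18+ fetch.
async function streamCompletion(prompt: string, apiKey: string): Promise<void> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial line for the next read
    for (const line of lines) {
      const data = line.replace(/^data: /, '').trim();
      if (!data || data === '[DONE]') continue;
      const token = JSON.parse(data).choices[0]?.delta?.content ?? '';
      process.stdout.write(token);
    }
  }
}
```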

Add error message for missing argument

λ npm exec ai
C:\Users\eltab\AppData\Local\npm-cache\_npx\afbc0d24890c4360\node_modules\ai\bin\ai.js:5
if (!args || args[0].split("/").length !== 2)

I think it should show a friendly message if the user doesn't pass any argument.

Allow users to specify a `-y` argument for not asking to execute the script

Hello,

Thank you very much for your program.

I would like to create a script with, for example, this command:

#!/bin/bash
ai -s open nemo in the home folder

Unfortunately it doesn't work because of the prompt:

◆ Run this script?
│ ● ✅ Yes (Lets go!)
│ ○ 📝 Edit
│ ○ 🔁 Revise
│ ○ 📋 Copy
│ ○ ❌ Cancel

Would it be possible to add a -y argument to skip this prompt?

#!/bin/bash
ai -s -y open nemo in the home folder

Thanks ;-)
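A minimal sketch of how such a flag could be detected before the confirmation step; the flag names and the two branches are illustrative, not the project's actual CLI behavior:

```ts
// Hedged sketch: when --yes/-y is passed, skip the "Run this script?" prompt
// and execute the generated script directly.
const rawArgs = process.argv.slice(2);
const autoConfirm = rawArgs.includes('-y') || rawArgs.includes('--yes');
const promptText = rawArgs.filter((a) => !a.startsWith('-')).join(' ');

console.log(`Prompt: ${promptText}`);
if (autoConfirm) {
  console.log('Would run the generated script without asking.');
} else {
  console.log('Would show the usual Run/Revise/Copy/Cancel prompt.');
}
```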

Crash on versions 0.1.14 and up

I'm on Linux Mint distribution and the package works great on version 0.1.13 but any higher version crashes on any command.
Output of any command I try starting with 'ai':

cat: /etc/upstream-release: Is a directory
node:internal/errors:867
  const err = new Error(message);
              ^

Error: Command failed: cat /etc/*release
cat: /etc/upstream-release: Is a directory

    at checkExecSyncError (node:child_process:885:11)
    at execSync (node:child_process:957:15)
    at osReleaseSync (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/lib/distro.js:21:12)
    at module.exports.get (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/lib/distro.js:47:25)
    at Object.name (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/legacy.js:4:20)
    at getOperationSystemDetails (file:///home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:488:13)
    at file:///home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:493:37
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25) {
  status: 1,
  signal: null,
  output: [
    null,
    Buffer(475) [Uint8Array] [
       68,  73,  83,  84,  82,  73,  66,  95,  73,  68, 61,  76,
      105, 110, 117, 120,  77, 105, 110, 116,  10,  68, 73,  83,
       84,  82,  73,  66,  95,  82,  69,  76,  69,  65, 83,  69,
       61,  50,  48,  46,  51,  10,  68,  73,  83,  84, 82,  73,
       66,  95,  67,  79,  68,  69,  78,  65,  77,  69, 61, 117,
      110,  97,  10,  68,  73,  83,  84,  82,  73,  66, 95,  68,
       69,  83,  67,  82,  73,  80,  84,  73,  79,  78, 61,  34,
       76, 105, 110, 117, 120,  32,  77, 105, 110, 116, 32,  50,
       48,  46,  51,  32,
      ... 375 more items
    ],
    Buffer(43) [Uint8Array] [
       99,  97, 116,  58,  32,  47, 101, 116,  99,
       47, 117, 112, 115, 116, 114, 101,  97, 109,
       45, 114, 101, 108, 101,  97, 115, 101,  58,
       32,  73, 115,  32,  97,  32, 100, 105, 114,
      101,  99, 116, 111, 114, 121,  10
    ]
  ],
  pid: 55561,
  stdout: Buffer(475) [Uint8Array] [
     68,  73,  83,  84,  82,  73,  66,  95,  73,  68, 61,  76,
    105, 110, 117, 120,  77, 105, 110, 116,  10,  68, 73,  83,
     84,  82,  73,  66,  95,  82,  69,  76,  69,  65, 83,  69,
     61,  50,  48,  46,  51,  10,  68,  73,  83,  84, 82,  73,
     66,  95,  67,  79,  68,  69,  78,  65,  77,  69, 61, 117,
    110,  97,  10,  68,  73,  83,  84,  82,  73,  66, 95,  68,
     69,  83,  67,  82,  73,  80,  84,  73,  79,  78, 61,  34,
     76, 105, 110, 117, 120,  32,  77, 105, 110, 116, 32,  50,
     48,  46,  51,  32,
    ... 375 more items
  ],
  stderr: Buffer(43) [Uint8Array] [
     99,  97, 116,  58,  32,  47, 101, 116,  99,
     47, 117, 112, 115, 116, 114, 101,  97, 109,
     45, 114, 101, 108, 101,  97, 115, 101,  58,
     32,  73, 115,  32,  97,  32, 100, 105, 114,
    101,  99, 116, 111, 114, 121,  10
  ]
}

Things I've tried:

  • Installed several versions, and all of them crash if higher than 0.1.13 (have tried 0.1.14, 16 and 20)
  • Tried the following Node versions, applying the fix from this answer on #48: Node v19.9.0, 18.16.0, and 16.19.0

My OS details:

Distributor ID:	Linuxmint
Description:	Linux Mint 20.3
Release:	20.3
Codename:	una

Linux linux 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Video of the issue: https://contecnika-my.sharepoint.com/:v:/g/personal/irian_caigo_es/EU1juhHdlB1BugdA3iTy0hAB5TRO_mNWaf9pgVwP8ONa2Q?e=IDzBxM

please support proxy

✖ read ECONNRESET
at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20)

ai-shell v0.1.13

I got this error. I have to use a SOCKS5 or HTTP proxy; could the proxy be causing it?
Please help, thanks!

Error on using ai-shell

I am getting this error running on my Mac Studio M1 (arm64). I have set my OPENAI_KEY properly with the command: ai-shell config set OPENAI_KEY=<your token>

Versions:

$ node -v
v14.21.3
$ npm -v
9.6.4
$ ai-shell --version
0.0.17
$ ai 'what is my ip address'                    

┌   ai-shell 
│
◐  Loading   
(node:6025) UnhandledPromiseRejectionWarning: Error: Request failed with status code 429
    at createError (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/core/settle.js:17:12)
    at RedirectableRequest.handleResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/adapters/http.js:278:9)
    at RedirectableRequest.emit (events.js:400:28)
    at RedirectableRequest._processResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/follow-redirects/index.js:356:10)
    at ClientRequest.RedirectableRequest._onNativeResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/follow-redirects/index.js:62:10)
    at Object.onceWrapper (events.js:520:26)
    at ClientRequest.emit (events.js:400:28)
    at HTTPParser.parserOnIncomingClient (_http_client.js:647:27)
    at HTTPParser.parserOnHeadersComplete (_http_common.js:127:17)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:6025) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:6025) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit co ◑  Loading... 

It got stuck at this last line.

Refactoring opportunity, maybe? 🤷🏾‍♂️

Hey, I was thinking that we can refactor this piece of code in prompt.ts. The value property on each option could be an anonymous function that performs the action specified in the if sequence. That way we can get rid of the "const x = answer === y" part and the different if branches, and you would only need to call answer() to execute the action.

Unless I'm missing something, it seems like a good idea to me, and I could implement it if you want. Please let me know if I'm wrong or if you want me to do it. 😄

const answer = await p.select({
    message: nonEmptyScript ? 'Run this script?' : 'Revise this script?',
    options: [
      ...(nonEmptyScript
        ? [
            { label: '✅ Yes', value: 'yes', hint: 'Lets go!' },
            {
              label: '📝 Edit',
              value: 'edit',
              hint: 'Make some adjustments before running',
            },
          ]
        : []),
      {
        label: '🔁 Revise',
        value: 'revise',
        hint: 'Give feedback via prompt and get a new result',
      },
      {
        label: '📋 Copy',
        value: 'copy',
        hint: 'Copy the generated script to your clipboard',
      },
      { label: '❌ Cancel', value: 'cancel', hint: 'Exit the program' },
    ],
  });

  const confirmed = answer === 'yes';
  const cancel = answer === 'cancel';
  const revisePrompt = answer === 'revise';
  const copy = answer === 'copy';
  const edit = answer === 'edit';

  if (revisePrompt) {
    await revisionFlow(script, key, apiEndpoint, silentMode);
  } else if (confirmed) {
    await runScript(script);
  } else if (cancel) {
    p.cancel('Goodbye!');
    process.exit(0);
  } else if (copy) {
    await clipboardy.write(script);
    p.outro('Copied to clipboard!');
  } else if (edit) {
    const newScript = await p.text({
      message: 'you can edit script here:',
      initialValue: script,
    });
    if (!p.isCancel(newScript)) {
      await runScript(newScript);
    }
  }
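A hedged sketch of the shape being proposed, reusing the helpers from the snippet above (p, script, runScript, revisionFlow, clipboardy, etc.) and assuming @clack/prompts accepts arbitrary values for options:

```ts
// Hedged sketch: each option carries its action as a callback in `value`,
// so the string comparisons and the if/else chain disappear.
const action = await p.select({
  message: nonEmptyScript ? 'Run this script?' : 'Revise this script?',
  options: [
    ...(nonEmptyScript
      ? [{ label: '✅ Yes', hint: 'Lets go!', value: async () => runScript(script) }]
      : []),
    {
      label: '🔁 Revise',
      hint: 'Give feedback via prompt and get a new result',
      value: async () => revisionFlow(script, key, apiEndpoint, silentMode),
    },
    {
      label: '📋 Copy',
      hint: 'Copy the generated script to your clipboard',
      value: async () => {
        await clipboardy.write(script);
        p.outro('Copied to clipboard!');
      },
    },
    {
      label: '❌ Cancel',
      hint: 'Exit the program',
      value: async () => {
        p.cancel('Goodbye!');
        process.exit(0);
      },
    },
  ],
});

if (!p.isCancel(action)) {
  await action();
}
```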

Better handle insufficient_quota errors

I'm running :

  • Linux fedora 37
  • Node (v-14.21.2)
  • gnome-terminal

The steps to reproduce the issue:

  1. run npm install -g @builder.io/ai-shell
  2. run ai-shell config set OPENAI_KEY=<API_KEY>
  3. run ai list all log files

image


Update:

I've tried switching to Node v16 and it still rejects, but with a different error message.

It seems that I have exceeded my plan limit.

But it would be great to catch the error so it doesn't freeze the shell session.

image

Unexpected token < in JSON at position 0

✖ Unexpected token < in JSON at position 0
    at JSON.parse (<anonymous>)
    at generateCompletion (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:211:38)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async getScriptAndInfo (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:168:18)
    at async prompt (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:427:36)

    ai-shell v0.1.6

    Please open a Bug report with the information above:
    https://github.com/BuilderIO/ai-shell/issues/new

Not compatible with the http_proxy setting

I use an HTTP proxy to speed up access to api.openai.com.

My shell configuration looks like:

  export http_proxy=<proxy url>
  export https_proxy=<proxy url>

I got the following error while running ai-shell:

◒  Loading

✖ Hostname/IP does not match certificate's altnames: Host: api.openai.com. is not in the cert's altnames: DNS:*.<proxy-domain>, DNS:<proxy-domain>
    at new NodeError (node:internal/errors:399:5)
    at Object.checkServerIdentity (node:tls:337:12)
    at TLSSocket.onConnectSecure (node:_tls_wrap:1568:27)
    at TLSSocket.emit (node:events:512:28)
    at TLSSocket._finishInit (node:_tls_wrap:977:8)
    at ssl.onhandshakedone (node:_tls_wrap:771:12)

    ai-shell v0.1.6

I checked the proxy setting; it is working well with other shell commands, like curl.

$ curl https://api.openai.com/
{
  "error": {
    "message": "Invalid URL (GET /)",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
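A hedged sketch of how the https_proxy variable could be honored when calling the API, using the https-proxy-agent package together with axios (which appears in ai-shell's dependency tree in other reports here); the exact wiring is illustrative:

```ts
import axios from 'axios';
import { HttpsProxyAgent } from 'https-proxy-agent'; // v7-style named export

// Hedged sketch: respect http_proxy/https_proxy the way curl does by handing
// axios a proxy agent and disabling its built-in proxy handling.
const proxyUrl = process.env.https_proxy || process.env.http_proxy;
const httpsAgent = proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined;

const response = await axios.post(
  'https://api.openai.com/v1/chat/completions',
  {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'list all log files' }],
  },
  {
    headers: { Authorization: `Bearer ${process.env.OPENAI_KEY ?? ''}` },
    httpsAgent,
    proxy: false, // let the agent handle proxying instead of axios' own logic
  }
);

console.log(response.data);
```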

Empty script returned

Sometimes OpenAI will return an empty script, especially if you feed it gibberish. If an empty script is returned, I don't think there should be an option to run it.

Ai Shell Pre@2x

My fix just removes the option to run it, instead prompting to revise.

Ai-shell-revision@2x

zsh weird error

Sorry to open two issues but maybe you'd like to know.

I've done all the installation steps and the software reports a version; I've set the API key, etc., but I get this zsh error, or a 429 on the list prompt, as stated in the other issue.

Thanks for your help!!

image

Socket hang up error

OS: Ubuntu 20.04(WSL on Windows 10)
Node Version: 14
Steps:

npm install -g @builder.io/ai-shell
ai config set OPENAI_KEY=xxx
ai show cpu usage

What I got:
image

429 error hijacks text cursor

I'm using iTerm2, oh-my-zsh, and the Starship prompt.

When initially using this package, I was getting the 429 error. It seems as if the cursor disappears and won't come back after the failed prompt. Here is an example video:

Screen.Recording.2023-04-14.at.12.54.05.PM.mov

Add Azure OpenAI support

Support for Azure OpenAI could be added. With this feature, the data is not used for AI training, which makes it ideal for us to use at work and avoids information leakage.
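For reference, a hedged sketch of how an Azure OpenAI call differs from the standard endpoint; the resource name, deployment name, api-version, and environment variable are placeholders that depend on your Azure setup:

```ts
// Hedged sketch: Azure OpenAI uses a per-resource hostname, a deployment name
// in the path, and an "api-key" header instead of a Bearer token.
const resource = 'my-resource';        // placeholder: your Azure OpenAI resource
const deployment = 'my-gpt-35-turbo';  // placeholder: your model deployment
const apiVersion = '2023-05-15';       // placeholder: check the current API version

const url =
  `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
  `/chat/completions?api-version=${apiVersion}`;

const res = await fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': process.env.AZURE_OPENAI_KEY ?? '', // placeholder env var
  },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'list all log files' }],
  }),
});

console.log(await res.json());
```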

No copy option

Hey! In the prompt.ts file, I see an option to copy the script in the runOrReviseFlow method. However, I don't see that as an option when I'm using the tool. Is there a setting I'm missing to have it show the copy option?

connect ETIMEDOUT 31.13.94.49:443

connect ETIMEDOUT 31.13.94.49:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1195:16)

ai-shell v0.1.13

Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new

Ability to inject command without executing it

Hi @steve8708

I found myself wanting to learn some commands without executing them. For general questions, I would like the program to just insert the command into the terminal rather than execute it.

Screenshot 2023-04-11 at 4 22 06 AM

I'm thinking of something like this

Your script:

du -sm <directory>

Run this script?

Yes | Insert | Revise | Cancel

Is this something useful to add?

Allow users to modify generated commands before execution

One major inconvenience of using copilot-cli/ai-shell is that I often need to make tweaks to the generated command before executing it, which currently requires copying the command and then manually modifying it with the mouse. It would be beneficial if there was a way for users to modify the command before executing it.

Spelling suggestion on ReadMe.

I am not a bash wizard, and am dying for access to the copilot CLI, and got impactient.

I am not a bash wizard, and am dying for access to the copilot CLI, and got impatient.

Shell option should be set for command execution with multiple processes and pipes to work

Thanks for this cool tool!

Often I need a combination of multiple commands, e.g. with grep / jq, etc.
For these, ChatGPT outputs correct commands with pipes, but they don't work with ai-shell, since it just executes a single process without shell support for pipes. Since execa is used, this can simply be fixed by using the shell flag.

See https://github.com/sindresorhus/execa#shell-syntax and https://github.com/sindresorhus/execa#shell for more information.

Here is an example for a perfectly fine command which works when pasted directly into bash or zsh but doesn't work with ai-shell:
Bildschirmfoto 2023-04-07 um 12 22 39
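A hedged sketch of the suggested fix using execa's documented shell option, so pipes and multi-process commands behave as they would when pasted into bash or zsh; the command string is just an example:

```ts
import { execa } from 'execa';

// Hedged sketch: with `shell: true`, the whole string is run inside a shell,
// so pipes, redirects, and && chains work.
const command = 'find . -name "*.log" | head -n 5';

await execa(command, { shell: true, stdio: 'inherit' });
```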

TTY initialization failed: uv_tty_init returned EBADF (bad file descriptor)

Hi! I keep getting the error shown below when running basic "ai" commands. Any idea what it might be? I'm running it on Windows 10 with Git Bash. For some reason, it doesn't work in PowerShell, but it does in cmd.exe.

$ ai How do you declare a new route in a Flask server?

T  AI Shell
|
o  Your script:

powershell
write-host "To declare a new route in a Flask server, you can add the route decorator to a function definition within the Flask app instance, like this: @app.route('/new_route')" -ForegroundColor Green


•
|
o  Explanation:

1. Initializes a PowerShell script.
2. Prints a message using the "write-host" command.
3. The message describes how to declare a new route in a Flask server by using the "@app.route('/new_route')" decorator.
4. The message is displayed in green text using the "-ForegroundColor" parameter.

•

✖ TTY initialization failed: uv_tty_init returned EBADF (bad file descriptor)
    at new SystemError (node:internal/errors:250:5)
    at new NodeError (node:internal/errors:361:7)
    at new WriteStream (node:tty:93:11)
    at ED.prompt (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/node_modules/@clack/core/dist/index.mjs:9:693)
    at Module.ee (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/node_modules/@clack/prompts/dist/index.mjs:28:7)
    at runOrReviseFlow (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/dist/cli.mjs:1736:26)
    at prompt (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/dist/cli.mjs:1732:9)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

    ai-shell v1.0.1

    Please open a Bug report with the information above:
    https://github.com/BuilderIO/ai-shell/issues/new

Need a way to edit coloring

I was able to adjust the text using kolorist. How can we change the coloring of everything else, such as the cyan line and diamond shape? A config file for editing these few settings would be useful to noobs like me who are using this to help master the shell. Thank you for putting this out early. We've been waiting for that Copilot CLI too, and this definitely helps scratch the itch.
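For the text portions, a hedged sketch of what adjusting colors with kolorist looks like; the strings are examples, and restyling clack's own bars and diamonds would mean wrapping or patching @clack/prompts rather than flipping a setting:

```ts
import { bold, cyan, green } from 'kolorist';

// Hedged sketch: kolorist exposes plain functions that wrap a string in the
// corresponding ANSI color codes before you print it.
console.log(cyan('Your script:'));
console.log(bold(green('find . -name "*.log"')));
```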
