Take ChatGPT to the command line.
- Clone this repo
- `pip3 install -U -r requirements.txt`
- Copy `demo_config.json` to `config.json`
- Get your `OPENAI_API_KEY` and put it in `config.json`
```sh
$ ./gptcli.py -h
usage: gptcli.py [-h] [-c CONFIG]

options:
  -h, --help  show this help message and exit
  -c CONFIG   path to your config.json (default: config.json)
```
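The command-line interface above can be reproduced with a small `argparse` setup. This is a sketch of the flag handling, not the script's actual code:

```python
import argparse

def build_parser():
    """Build a parser matching the usage shown above: gptcli.py [-h] [-c CONFIG]."""
    parser = argparse.ArgumentParser(prog="gptcli.py")
    parser.add_argument("-c", dest="config", default="config.json",
                        help="path to your config.json (default: config.json)")
    return parser
```

With this, `./gptcli.py -c my.json` would load a non-default config, and omitting `-c` falls back to `config.json` in the working directory.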
Sample `config.json`:
````jsonc
{
    "key": "",                  // your API key; read from the OPENAI_API_KEY environment variable if empty
    "api_base": "",             // your api_base; read from the OPENAI_API_BASE environment variable if empty
    "model": "gpt-3.5-turbo",   // GPT model
    "stream": true,             // stream mode
    "stream_render": false,     // render live Markdown in stream mode
    "context": "full",          // session context mode, choices: "none", "request", "full"
    "showtokens": false,        // show used tokens after every question
    "proxy": "",                // use an http/https/socks4a/socks5 proxy for requests to api.openai.com
    "prompt": [                 // customize your prompt
        { "role": "system", "content": "If your response contains code, show with syntax highlight, for example ```js\ncode\n```" }
    ]
}
````
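The env-var fallback for `key` and `api_base` could be implemented along these lines. The function name here is illustrative, not the script's actual internals; also note that a real `config.json` must not contain the `//` comments shown in the annotated sample above:

```python
import json
import os

def load_config(path="config.json"):
    """Load config.json, falling back to environment variables for empty fields."""
    with open(path) as f:
        cfg = json.load(f)  # plain JSON only; comments are not valid here
    if not cfg.get("key"):
        cfg["key"] = os.environ.get("OPENAI_API_KEY", "")
    if not cfg.get("api_base"):
        cfg["api_base"] = os.environ.get("OPENAI_API_BASE", "")
    return cfg
```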
Supported models:
- `gpt-3.5-turbo`
- `gpt-4`
- `gpt-4-32k`
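The `context` setting in `config.json` controls how much session history is sent with each request. A hedged sketch of what the three modes could look like (the function and the exact semantics of `"request"` are assumptions, not the script's verified behavior):

```python
def build_messages(prompt, history, question, context="full"):
    """Assemble the message list for one API request.

    Assumed semantics:
    - "none":    send only the configured prompt and the new question
    - "request": send prior user questions but drop assistant replies
    - "full":    send the entire chat history
    """
    messages = list(prompt)  # system prompt(s) from config
    if context == "full":
        messages += history
    elif context == "request":
        messages += [m for m in history if m["role"] == "user"]
    messages.append({"role": "user", "content": question})
    return messages
```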
Console help (with tab-complete):

```
$ ./gptcli.py
gptcli> .help -v

gptcli commands (use '.help -v' for verbose/'.help <topic>' for details):
======================================================================================================
.edit                 Run a text editor and optionally open a file with it
.help                 List available commands or provide detailed help for a specific command
.load                 Load conversation from Markdown/JSON file
.multiline            Input multiple lines, end with ctrl-d (Linux/macOS) or ctrl-z (Windows);
                      cancel with ctrl-c
.quit                 Exit this application
.reset                Reset session, i.e. clear chat history
.save                 Save current conversation to Markdown/JSON file
.set                  Set a settable parameter or show current settings of parameters
.tokens               Display total tokens used this session
```
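The `.save` command writes the conversation to Markdown or JSON. The exact on-disk Markdown layout is not specified here, so the one below is an assumption, shown only to illustrate the idea:

```python
def conversation_to_markdown(messages):
    """Render a chat history as Markdown, one heading per message role (assumed layout)."""
    parts = []
    for msg in messages:
        parts.append(f"## {msg['role']}\n\n{msg['content']}\n")
    return "\n".join(parts)
```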
Run in Docker:

```sh
# build
$ docker build -t gptcli:latest .

# run
$ docker run -it --rm -v $PWD/.key:/gptcli/.key gptcli:latest -h

# for host proxy access:
$ docker run --rm -it -v $PWD/config.json:/gptcli/config.json --network host gptcli:latest -c /gptcli/config.json
```
Features:
- Single Python script
- Session based
- Markdown support with code syntax highlight
- Stream output support
- Proxy support (HTTP/HTTPS/SOCKS4A/SOCKS5)
- Multiline input support (via the `.multiline` command)
- Save and load sessions from file (Markdown/JSON) (via the `.save` and `.load` commands)
- Integrates with `llama_index` to support chatting with documents
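The `.multiline` behavior, reading input until EOF (ctrl-d on Linux/macOS, ctrl-z on Windows), can be sketched as follows; this is a simplified stand-in for the real command, not its actual implementation:

```python
import sys

def read_multiline(stream=sys.stdin):
    """Read lines from the stream until EOF and join them into one input block."""
    lines = []
    for line in stream:
        lines.append(line.rstrip("\n"))
    return "\n".join(lines)
```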