
馃 A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers, written in Python

License: GNU General Public License v2.0

Python 99.83% Dockerfile 0.17%
chatgpt openai python telegram-bot dall-e whisper

chatgpt-telegram-bot's Introduction

ChatGPT Telegram Bot


A Telegram bot that integrates with OpenAI's official ChatGPT, DALL·E and Whisper APIs to provide answers. Ready to use with minimal configuration required.

Screenshots

Demo

(demo screenshot)

Plugins

(plugins screenshot)

Features

  • Support markdown in answers
  • Reset conversation with the /reset command
  • Typing indicator while generating a response
  • Access can be restricted by specifying a list of allowed users
  • Docker and Proxy support
  • Image generation using DALL·E via the /image command
  • Transcribe audio and video messages using Whisper (may require ffmpeg)
  • Automatic conversation summary to avoid excessive token usage
  • Track token usage per user - by @AlexHTW
  • Get personal token usage statistics via the /stats command - by @AlexHTW
  • User budgets and guest budgets - by @AlexHTW
  • Stream support
  • GPT-4 support
    • If you have access to the GPT-4 API, simply change the OPENAI_MODEL parameter to gpt-4
  • Localized bot language
    • Available languages: en, de, ru, tr, it, fi, es, id, nl, zh-cn, zh-tw, vi, fa, pt-br, uk, ms, uz, ar
  • Improved inline queries support for group and private chats - by @bugfloyd
    • To use this feature, enable inline queries for your bot in BotFather via the /setinline command
  • Support new models announced on June 13, 2023
  • Support functions (plugins) to extend the bot's functionality with 3rd party services
    • Weather, Spotify, Web search, text-to-speech and more. See here for a list of available plugins
  • Support unofficial OpenAI-compatible APIs - by @kristaller486
  • (NEW!) Support GPT-4 Turbo and DALL·E 3 announced on November 6, 2023 - by @AlexHTW
  • (NEW!) Text-to-speech support announced on November 6, 2023 - by @gilcu3
  • (NEW!) Vision support announced on November 6, 2023 - by @gilcu3

Additional features - help needed!

If you'd like to help, check out the issues section and contribute!
If you want to help with translations, check out the Translations Manual.

PRs are always welcome!

Prerequisites

Getting started

Configuration

Customize the configuration by copying .env.example and renaming it to .env, then editing the required parameters as desired:

Parameter Description
OPENAI_API_KEY Your OpenAI API key, you can get it from here
TELEGRAM_BOT_TOKEN Your Telegram bot's token, obtained using BotFather (see tutorial)
ADMIN_USER_IDS Telegram user IDs of admins. These users have access to special admin commands and information, and are exempt from budget restrictions. Admin IDs don't have to be added to ALLOWED_TELEGRAM_USER_IDS. Note: by default, no admin (-)
ALLOWED_TELEGRAM_USER_IDS A comma-separated list of Telegram user IDs that are allowed to interact with the bot (use getidsbot to find your user ID). Note: by default, everyone is allowed (*)
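For example, a minimal .env might look like this (all values below are placeholders):

```
OPENAI_API_KEY="your-openai-api-key"
TELEGRAM_BOT_TOKEN="your-telegram-bot-token"
ADMIN_USER_IDS="123456789"
ALLOWED_TELEGRAM_USER_IDS="123456789,987654321"
```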

Optional configuration

The following parameters are optional and can be set in the .env file:

Budgets

Parameter Description Default value
BUDGET_PERIOD Determines the time frame all budgets are applied to. Available periods: daily (resets budget every day), monthly (resets budgets on the first of each month), all-time (never resets budget). See the Budget Manual for more information monthly
USER_BUDGETS A comma-separated list of dollar amounts, one per user in ALLOWED_TELEGRAM_USER_IDS, setting a custom usage limit of OpenAI API costs for each. If the user list is *, the first USER_BUDGETS value applies to every user. Note: by default, no limits for any user (*). See the Budget Manual for more information *
GUEST_BUDGET $-amount as usage limit for all guest users. Guest users are users in group chats that are not in the ALLOWED_TELEGRAM_USER_IDS list. Value is ignored if no usage limits are set in user budgets (USER_BUDGETS=*). See the Budget Manual for more information 100.0
TOKEN_PRICE $-price per 1000 tokens used to compute cost information in usage statistics. Source: https://openai.com/pricing 0.002
IMAGE_PRICES A comma-separated list with 3 elements of prices for the different image sizes: 256x256, 512x512 and 1024x1024. Source: https://openai.com/pricing 0.016,0.018,0.02
TRANSCRIPTION_PRICE USD-price for one minute of audio transcription. Source: https://openai.com/pricing 0.006
VISION_TOKEN_PRICE USD-price per 1K tokens of image interpretation. Source: https://openai.com/pricing 0.01
TTS_PRICES A comma-separated list with prices for the tts models: tts-1, tts-1-hd. Source: https://openai.com/pricing 0.015,0.030

Check out the Budget Manual for possible budget configurations.
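To make the budget semantics concrete, here is a rough sketch of how a per-user budget could be resolved (illustrative only; resolve_user_budget is not a function in this repository):

```python
def resolve_user_budget(user_id, allowed_ids, user_budgets):
    """Resolve a user's budget following the documented semantics:
    '*' budgets mean unlimited; with a '*' user list, the first
    budget value applies to every user; unknown users are guests."""
    if user_budgets.strip() == "*":
        return float("inf")  # no limit for anyone
    budgets = [float(b) for b in user_budgets.split(",")]
    if allowed_ids.strip() == "*":
        return budgets[0]  # first value applies to every user
    ids = [i.strip() for i in allowed_ids.split(",")]
    if str(user_id) in ids:
        return budgets[ids.index(str(user_id))]
    return None  # guest user: GUEST_BUDGET would apply instead
```

For instance, `resolve_user_budget(42, "42,43", "10.0,20.0")` yields the first listed budget, 10.0.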

Additional optional configuration options

Parameter Description Default value
ENABLE_QUOTING Whether to enable message quoting in private chats true
ENABLE_IMAGE_GENERATION Whether to enable image generation via the /image command true
ENABLE_TRANSCRIPTION Whether to enable transcriptions of audio and video messages true
ENABLE_TTS_GENERATION Whether to enable text-to-speech generation via the /tts command true
ENABLE_VISION Whether to enable vision capabilities in supported models true
PROXY Proxy to be used for OpenAI and Telegram bot (e.g. http://localhost:8080) -
OPENAI_PROXY Proxy to be used only for OpenAI (e.g. http://localhost:8080) -
TELEGRAM_PROXY Proxy to be used only for Telegram bot (e.g. http://localhost:8080) -
OPENAI_MODEL The OpenAI model to use for generating responses. You can find all available models here gpt-3.5-turbo
OPENAI_BASE_URL Endpoint URL for unofficial OpenAI-compatible APIs (e.g., LocalAI or text-generation-webui) Default OpenAI API URL
ASSISTANT_PROMPT A system message that sets the tone and controls the behavior of the assistant You are a helpful assistant.
SHOW_USAGE Whether to show OpenAI token usage information after each response false
STREAM Whether to stream responses. Note: if enabled, incompatible with N_CHOICES higher than 1 true
MAX_TOKENS Upper bound on how many tokens the ChatGPT API will return 1200 for GPT-3, 2400 for GPT-4
VISION_MAX_TOKENS Upper bound on how many tokens vision models will return 300 for gpt-4-vision-preview
VISION_MODEL The vision model to use. Allowed values: gpt-4-vision-preview gpt-4-vision-preview
ENABLE_VISION_FOLLOW_UP_QUESTIONS If true, once you send an image to the bot, it uses the configured VISION_MODEL until the conversation ends. Otherwise, it uses the OPENAI_MODEL to follow the conversation. Allowed values: true or false true
MAX_HISTORY_SIZE Max number of messages to keep in memory, after which the conversation will be summarised to avoid excessive token usage 15
MAX_CONVERSATION_AGE_MINUTES Maximum number of minutes a conversation should live since the last message, after which the conversation will be reset 180
VOICE_REPLY_WITH_TRANSCRIPT_ONLY Whether to answer voice messages with the transcript only, or with a ChatGPT response to the transcript false
VOICE_REPLY_PROMPTS A semicolon-separated list of phrases (e.g. Hi bot;Hello chat). If the transcript starts with any of them, it will be treated as a prompt even if VOICE_REPLY_WITH_TRANSCRIPT_ONLY is set to true -
VISION_PROMPT A phrase (e.g. What is in this image) that the vision models use as the prompt to interpret a given image. If the image sent to the bot has a caption, the caption supersedes this parameter What is in this image
N_CHOICES Number of answers to generate for each input message. Note: setting this to a number higher than 1 will not work properly if STREAM is enabled 1
TEMPERATURE Number between 0 and 2. Higher values will make the output more random 1.0
PRESENCE_PENALTY Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far 0.0
FREQUENCY_PENALTY Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far 0.0
IMAGE_FORMAT The Telegram image receive mode. Allowed values: document or photo photo
IMAGE_MODEL The DALL·E model to be used. Available models: dall-e-2 and dall-e-3, find current available models here dall-e-2
IMAGE_QUALITY Quality of DALL·E images, only available for the dall-e-3 model. Possible options: standard or hd, beware of pricing differences. standard
IMAGE_STYLE Style for DALL·E image generation, only available for the dall-e-3 model. Possible options: vivid or natural. Check available styles here. vivid
IMAGE_SIZE The DALL·E generated image size. Must be 256x256, 512x512, or 1024x1024 for dall-e-2. Must be 1024x1024 for dall-e-3 models. 512x512
VISION_DETAIL The detail parameter for vision models, explained in the Vision Guide. Allowed values: low or high auto
GROUP_TRIGGER_KEYWORD If set, the bot in group chats will only respond to messages that start with this keyword -
IGNORE_GROUP_TRANSCRIPTIONS If set to true, the bot will not process transcriptions in group chats true
IGNORE_GROUP_VISION If set to true, the bot will not process vision queries in group chats true
BOT_LANGUAGE Language of general bot messages. Currently available: en, de, ru, tr, it, fi, es, id, nl, zh-cn, zh-tw, vi, fa, pt-br, uk, ms, uz, ar. Contribute with additional translations en
WHISPER_PROMPT A custom prompt to improve the accuracy of Whisper's transcription, especially for specific names or terms. See Speech to text - Prompting -
TTS_VOICE The Text to Speech voice to use. Allowed values: alloy, echo, fable, onyx, nova, or shimmer alloy
TTS_MODEL The Text to Speech model to use. Allowed values: tts-1 or tts-1-hd tts-1

Check out the official API reference for more details.
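As an illustration of how several of these settings map onto an API request, here is a hedged sketch (not the bot's actual code; it only builds the request arguments, using the documented defaults):

```python
def completion_kwargs(env):
    """Build chat-completion keyword arguments from a dict of the
    optional settings above. Fallbacks are the documented defaults."""
    return {
        "model": env.get("OPENAI_MODEL", "gpt-3.5-turbo"),
        "temperature": float(env.get("TEMPERATURE", 1.0)),
        "presence_penalty": float(env.get("PRESENCE_PENALTY", 0.0)),
        "frequency_penalty": float(env.get("FREQUENCY_PENALTY", 0.0)),
        "n": int(env.get("N_CHOICES", 1)),
        "stream": env.get("STREAM", "true").lower() == "true",
        "max_tokens": int(env.get("MAX_TOKENS", 1200)),
    }

kwargs = completion_kwargs({"TEMPERATURE": "0.5", "STREAM": "false"})
```

These are the standard parameter names of the OpenAI chat completions API; the helper itself is illustrative.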

Functions

Parameter Description Default value
ENABLE_FUNCTIONS Whether to use functions (aka plugins). You can read more about functions here true (if available for the model)
FUNCTIONS_MAX_CONSECUTIVE_CALLS Maximum number of back-to-back function calls to be made by the model in a single response, before displaying a user-facing message 10
PLUGINS List of plugins to enable (see below for a full list), e.g: PLUGINS=wolfram,weather -
SHOW_PLUGINS_USED Whether to show which plugins were used for a response false
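For context, a plugin is ultimately exposed to the model as a function specification. The sketch below shows the general shape such a specification takes with the OpenAI function-calling API; the name get_weather and its parameters are hypothetical, not the repo's actual plugin definition:

```python
# Hypothetical function spec in the shape the OpenAI function-calling
# API expects; the real plugin definitions live in the repository and
# may differ.
weather_function_spec = {
    "name": "get_weather",
    "description": "Daily weather and 7-day forecast for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number"},
            "longitude": {"type": "number"},
        },
        "required": ["latitude", "longitude"],
    },
}
```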

Available plugins

Name Description Required environment variable(s) Dependency
weather Daily weather and 7-day forecast for any location (powered by Open-Meteo) -
wolfram WolframAlpha queries (powered by WolframAlpha) WOLFRAM_APP_ID wolframalpha
ddg_web_search Web search (powered by DuckDuckGo) - duckduckgo_search
ddg_translate Translate text to any language (powered by DuckDuckGo) - duckduckgo_search
ddg_image_search Search image or GIF (powered by DuckDuckGo) - duckduckgo_search
crypto Live cryptocurrency rates (powered by CoinCap) - by @stumpyfr -
spotify Spotify top tracks/artists, currently playing song and content search (powered by Spotify). Requires one-time authorization. SPOTIFY_CLIENT_ID, SPOTIFY_CLIENT_SECRET, SPOTIFY_REDIRECT_URI spotipy
worldtimeapi Get latest world time (powered by WorldTimeAPI) - by @noriellecruz WORLDTIME_DEFAULT_TIMEZONE
dice Send a dice in the chat! -
youtube_audio_extractor Extract audio from YouTube videos - pytube
deepl_translate Translate text to any language (powered by DeepL) - by @LedyBacer DEEPL_API_KEY
gtts_text_to_speech Text to speech (powered by Google Translate APIs) - gtts
whois Query the whois domain database - by @jnaskali - whois
webshot Screenshot a website from a given URL or domain name - by @noriellecruz -
auto_tts Text to speech using OpenAI APIs - by @Jipok -

Environment variables

Variable Description Default value
WOLFRAM_APP_ID Wolfram Alpha APP ID (required only for the wolfram plugin, you can get one here) -
SPOTIFY_CLIENT_ID Spotify app Client ID (required only for the spotify plugin, you can find it on the dashboard) -
SPOTIFY_CLIENT_SECRET Spotify app Client Secret (required only for the spotify plugin, you can find it on the dashboard) -
SPOTIFY_REDIRECT_URI Spotify app Redirect URI (required only for the spotify plugin, you can find it on the dashboard) -
WORLDTIME_DEFAULT_TIMEZONE Default timezone to use, e.g. Europe/Rome (required only for the worldtimeapi plugin, you can get TZ Identifiers from here) -
DUCKDUCKGO_SAFESEARCH DuckDuckGo safe search (on, off or moderate) (optional, applies to ddg_web_search and ddg_image_search) moderate
DEEPL_API_KEY DeepL API key (required for the deepl plugin, you can get one here) -

Installing

Clone the repository and navigate to the project directory:

git clone https://github.com/n3d1117/chatgpt-telegram-bot.git
cd chatgpt-telegram-bot

From Source

  1. Create a virtual environment:
python -m venv venv
  2. Activate the virtual environment:
# For Linux or macOS:
source venv/bin/activate

# For Windows:
venv\Scripts\activate
  3. Install the dependencies from the requirements.txt file:
pip install -r requirements.txt
  4. Start the bot:
python bot/main.py
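Before starting the bot, you can sanity-check the configuration with a small standalone helper (this script is illustrative and not part of the repository; it only handles simple KEY=VALUE lines):

```python
def check_env(path=".env"):
    """Parse a simple .env file and report which of the required
    keys are missing. Ignores blank lines and # comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, val = line.partition("=")
                values[key.strip()] = val.strip().strip('"')
    required = ("OPENAI_API_KEY", "TELEGRAM_BOT_TOKEN")
    missing = [k for k in required if not values.get(k)]
    return values, missing
```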

Using Docker Compose

Run the following command to build and run the Docker image:

docker compose up

Ready-to-use Docker images

You can also use the Docker image from Docker Hub:

docker pull n3d1117/chatgpt-telegram-bot:latest
docker run -it --env-file .env n3d1117/chatgpt-telegram-bot

or using the GitHub Container Registry:

docker pull ghcr.io/n3d1117/chatgpt-telegram-bot:latest
docker run -it --env-file .env ghcr.io/n3d1117/chatgpt-telegram-bot

Docker manual build

docker build -t chatgpt-telegram-bot .
docker run -it --env-file .env chatgpt-telegram-bot

Credits

Disclaimer

This is a personal project and is not affiliated with OpenAI in any way.

License

This project is released under the terms of the GPL 2.0 license. For more information, see the LICENSE file included in the repository.

chatgpt-telegram-bot's People

Contributors

aes-alienrip, alexhtw, am1ncmd, bestmgmt, bjornb2, bugfloyd, carlsverre, deanxizian, dkvdm-bot, eyadmahm0ud, gianlucaalfa, gilcu3, ivanmilov, jnaskali, jokerqyou, jvican, k3it, kristaller486, ledybacer, mirmakhamat, muhammed540, n3d1117, nmeln, noriellecruz, peterdavehello, rafael-6fx, slippersheepig, stanislavlysenko0912, whyevenquestion1t, yurnov


chatgpt-telegram-bot's Issues

If we could allow the bot to be used by group members

If we could allow the bot to be used by group members, I think that would be a good idea:

  • The bot works in private mode by default, so it can only receive messages marked as REPLY.
  • Perhaps we also need to add an allow-list of groups that are allowed to use the bot.

However, there are also some limitations to this approach:

  • It could lead to multiple people sharing the same conversation.
  • ChatGPT's processing rate may be limited.

Session management

Similar to the browser UI, it would be cool if we would be able to save/retrieve sessions in addition to /reset
I'm thinking of:

  • /save for saving a specific session, potentially with a name
  • /sessions for listing all sessions
  • /session session_id for opening a specific session
  • /reset for resetting or deleting a session
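A minimal in-memory sketch of the proposed commands (the command names come from the proposal above; none of this exists in the bot):

```python
# Session store keyed by (chat_id, name); a sketch of the proposed
# /save, /sessions and /session commands, not actual bot code.
sessions = {}

def save_session(chat_id, name, history):
    sessions[(chat_id, name)] = list(history)

def list_sessions(chat_id):
    return [name for (cid, name) in sessions if cid == chat_id]

def load_session(chat_id, name):
    return sessions.get((chat_id, name), [])
```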

Handle responses longer than telegram message limit

Hey, I encountered the Telegram error Message_too_long when transcribing a 6-minute audio file.
Any chance to split responses longer than the limit (I think 4096 characters) into multiple messages?
This might also apply to ChatGPT responses; I have not managed to get such a long response yet, but theoretically it should be possible (4096 tokens > 4096 characters).
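One possible approach, sketched here under the assumption of a 4096-character limit (this is not the project's implementation), is to split long responses at newline boundaries:

```python
TELEGRAM_LIMIT = 4096

def split_message(text, limit=TELEGRAM_LIMIT):
    """Split text into chunks no longer than limit, breaking at the
    last newline before the limit when one exists."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:          # no newline in range: hard cut
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

Each chunk can then be sent as a separate Telegram message.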

Thank you and the other contributors for all your hard work!

I got an error about an expired token

{
    "detail": {
        "message": "Your authentication token has expired. Please try signing in again.",
        "type": "invalid_request_error",
        "param": null,
        "code": "token_expired"
    }
}

Maybe we can log in again every 12 hours?

Executing main.py fails with a timeout, how can I solve that? Thanks.

:~/chatgpt-telegram-bot# python main.py
Debugger enabled on OpenAIAuth
Logging in...
Debugger enabled on OpenAIAuth
Beginning auth process
Beginning part two
Beginning part three
Beginning part four
Beginning part five
Beginning part six
Beginning part seven
Request went through
Response code is 302
New state found
Beginning part eight
Beginning part nine
SUCCESS
Part eight called
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 130, in _call_sslobject_method
result = func(*args)
File "/usr/lib/python3.10/ssl.py", line 975, in do_handshake
self._sslobj.do_handshake()
ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:997)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 67, in start_tls
ssl_stream = await anyio.streams.tls.TLSStream.wrap(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 122, in wrap
await wrapper._call_sslobject_method(ssl_object.do_handshake)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 137, in _call_sslobject_method
data = await self.transport_stream.receive()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1265, in receive
await self._protocol.read_event.wait()
File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 76, in start_tls
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 66, in start_tls
with anyio.fail_after(timeout):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_core/_tasks.py", line 118, in __exit__
raise TimeoutError
TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 253, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 237, in handle_async_request
response = await connection.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 86, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 63, in handle_async_request
stream = await self._connect(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 150, in _connect
stream = await stream.start_tls(**kwargs)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 64, in start_tls
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)
httpcore.ConnectTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 183, in do_request
res = await self._client.request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1620, in send
response = await self._send_handling_auth(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1648, in _send_handling_auth
response = await self._send_handling_redirects(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1685, in _send_handling_redirects
response = await self._send_single_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1722, in _send_single_request
response = await transport.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/chatgpt-telegram-bot/main.py", line 32, in <module>
main()
File "/root/chatgpt-telegram-bot/main.py", line 28, in main
telegram_bot.run()
File "/root/chatgpt-telegram-bot/telegram_bot.py", line 125, in run
application.run_polling()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 670, in run_polling
return self.__run(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 858, in __run
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 847, in __run
loop.run_until_complete(self.initialize())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 357, in initialize
await self.bot.initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 252, in initialize
await super().initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 499, in initialize
await self.get_me()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 1639, in get_me
return await super().get_me(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 313, in decorator
result = await func(*args, **kwargs) # skipcq: PYL-E1102
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 644, in get_me
result = await self._post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 395, in _post
return await self._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 306, in _do_post
return await super()._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 426, in _do_post
return await request.post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 167, in post
result = await self._request_wrapper(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 290, in _request_wrapper
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 276, in _request_wrapper
code, payload = await self.do_request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 200, in do_request
raise TimedOut from err
telegram.error.TimedOut: Timed out

Publish to Docker Hub

This project looks amazing! Could you publish it to Docker Hub so it'll be easy to deploy on serverless platforms? Thanks! 😊

Fail to transcribe audio messages

Hello, I deployed v0.1.3 with Docker Compose, but the audio message transcription feature is not working. The bot always returns "Failed to transcribe text".

(FFmpeg is installed).

Hope to add HTTP proxy

In some environments, I can only connect to Telegram and OpenAI through a proxy. This means that I need to enable a global proxy, which affects other running programs. I tried to add

os.environ["http_proxy"] = "http://127.0.0.1:1231"

but it doesn't seem to work?

Error: 'Chatbot' object has no attribute 'headers'

I did everything you said but I don't get anything back.
Running on Ubuntu 22.04.
Here are the logs:

(chatgpt-telegram-bot) root@linux:~/chatgpt-telegram-bot# python main.py
Logging in...
Error logging in (Probably wrong credentials)
Error refreshing session:
Error logging in
2022-12-07 14:36:54,351 - telegram.ext._application - INFO - Application started
2022-12-07 14:36:54,632 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:36:54,852 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:35,805 - root - INFO - User @Archnet is not allowed to start the bot
2022-12-07 14:37:43,588 - root - INFO - Bot started
2022-12-07 14:37:47,411 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:47,516 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:54,966 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:55,317 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'

and in Telegram:
(screenshot)

What's the problem?

v0.1.8 doesn't work

Hi.
I downloaded a new version (tried the last commit from master and release 0.1.8) of the bot to the server, and nothing but the help command works.
Moreover, if I roll back to the previous one, it works fine.
It does not work either in a group chat or in a personal message.

New `.env` file:
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
# ALLOWED_TELEGRAM_USER_IDS="USER_ID_1,USER_ID_2" ###Yes I tried: ALLOWED_TELEGRAM_USER_IDS="*"
# MONTHLY_USER_BUDGETS="100.0,100.0"
# MONTHLY_GUEST_BUDGET="20.0"
# PROXY="http://localhost:8080"
OPENAI_MODEL="gpt-3.5-turbo"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=15
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=false
N_CHOICES=1
TEMPERATURE=0.5
PRESENCE_PENALTY=0
FREQUENCY_PENALTY=0
IMAGE_SIZE="256x256"
GROUP_TRIGGER_KEYWORD="help"
IGNORE_GROUP_TRANSCRIPTIONS=true
TOKEN_PRICE=0.002
IMAGE_PRICES="0.016,0.018,0.02"
TRANSCRIPTION_PRICE=0.006
Old `.env` file:
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
ALLOWED_TELEGRAM_USER_IDS="*"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=10
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=true

What I was doing:

cd chatgpt_bot_v2/ #old
git rev-parse HEAD #34b79a5e2eadfc6b237882cb08a5d11085098dc9  (the sixth commit after 0.1.5)
docker compose up -d #(the first time I started it with the "--build" key)
###Bot works fine###
docker logs chatgpt_bot_v2-chatgpt-telegram-bot-1 #(2023-03-18 14:00:31,381 - telegram.ext._application - INFO - Application started)
docker compose down
cd ../chatgpt_bot_0.1.8/ #latest commit
git rev-parse HEAD #73d200c64e95e481e2986caeaad70d6b339fb1d9
docker compose up -d
docker logs chatgpt_bot_018-chatgpt-telegram-bot-1 #(2023-03-18 14:02:08,078 - telegram.ext._application - INFO - Application started)
###Bot doesnt work###

If you need any additional data, I will try to provide it.
Thanks

Allow support for Python 3.9

Some Linux distributions have not yet included Python 3.10 in their official repositories. It would be beneficial to enable the bot to run on Python 3.9 as well.

Upon reviewing the codebase, it appears that the only Python 3.9 incompatible syntax is the usage of the new union operator (|) in the get_chat_response function of openai_helper.py.

Is there anything else that I might be overlooking? If not, we could consider adding legacy support for Python 3.9 by using the Union type from the typing module in place of the new union operator for greater compatibility.
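For illustration, the change would look like this (the function below is a hypothetical stand-in, not the actual get_chat_response):

```python
from typing import Union

# Python 3.10+ only:
# def get_reply(user_id: int) -> str | None: ...

# Python 3.9-compatible equivalent using typing.Union:
def get_reply(user_id: int) -> Union[str, None]:
    return None  # placeholder body for the sketch
```

`Union[str, None]` (or equivalently `Optional[str]`) is accepted on both versions, so it is the safer choice for broad compatibility.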

4000 tokens for the completion?

"I am receiving an error that I have exceeded the maximum number of tokens. In this example, I have used 152 tokens for the prompt and 4000 for the completion. Why is the completion token usage so high?"

2023-03-18 11:25:18,531 - root - INFO - New message received from user @AlyxAbyss
2023-03-18 11:25:18,909 - openai - INFO - error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2023-03-18 11:25:18,910 - root - ERROR - This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
Traceback (most recent call last):
File "/home/vicky/VickyAI/chatgpt-telegram-bot/openai_helper.py", line 49, in get_chat_response
response = openai.ChatCompletion.create(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.

Please Add a Menu

Greetings,
the bot's usability would improve if you added menu buttons for Telegram.

Thanks!

Please update

revChatGPT has released V2 — please update, since the old version is no longer usable.

Network error

Hi, when I try to install I get an error:

chatgpt-telegram-bot-chatgpt-telegram-bot-1 | File "/usr/local/lib/python3.9/site-packages/telegram/request/_httpxrequest.py", line 223, in do_request
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | raise NetworkError(f"httpx.{err.class.name}: {err}") from err
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | telegram.error.NetworkError: httpx.ConnectError: All connection attempts failed
chatgpt-telegram-bot-chatgpt-telegram-bot-1 exited with code 1

I tried installing the previous version and everything worked! Hoping for help :)

request xD

ALLOWED_TELEGRAM_CHAT_IDS="<CHAT_ID_1>,<CHAT_ID_2>,..."

Can everyone use it without selecting an ID? xD
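As an assumption to verify against the README for your version: newer releases of this bot rename the variable to `ALLOWED_TELEGRAM_USER_IDS` and accept a wildcard that disables the allow-list entirely, so no individual IDs need to be listed:

```
# Assumption — check the README for your release:
# "*" allows every user to talk to the bot.
ALLOWED_TELEGRAM_USER_IDS="*"
```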

No direct answer on voice message

I first thought it was an error.
Would it be possible to answer (with ChatGPT) directly after Whisper has transcribed the audio message?

This would eliminate the manual step of copying the text after transcription.

Screenshot ChatGPTBot

Need help

How can I enable debug logging for the revChatGPT package?

Please, someone help.

Prompt token consumption grows until /reset

Hi.
Thank you for the updates and support of the turbo model!

I noticed that each subsequent query within a conversation uses up more prompt tokens. This continues until I reset the session. Does this sound like the correct behavior?

here is an example of the same prompt, with each iteration increasing token usage:

[screenshot: the same prompt repeated, with token usage increasing on each iteration]
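This growth is expected with the chat API: the bot resends the whole conversation with every request, so prompt tokens accumulate until /reset. A common mitigation is to trim the oldest exchanges once the prompt gets too large. A sketch under that assumption (the names and the toy word-count tokenizer are illustrative — a real implementation would use a tokenizer such as tiktoken):

```python
def count_tokens(messages) -> int:
    """Crude stand-in for a real tokenizer: one token per whitespace-separated word."""
    return sum(len(m["content"].split()) for m in messages)

def trim_history(history, max_prompt_tokens: int):
    """Keep the system message; drop the oldest user/assistant pair until the prompt fits."""
    system, rest = history[0], history[1:]
    while rest and count_tokens([system] + rest) > max_prompt_tokens:
        rest = rest[2:]  # drop the oldest user/assistant exchange
    return [system] + rest
```

The bot's own "automatic conversation summary" feature (see the README) is a richer variant of the same idea: instead of discarding old turns, it condenses them.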

OpenAI invalid request.

After using it for a while, the bot will generate this message even though I input a very short sentence:

⚠️ OpenAI Invalid request ⚠️
This model's maximum context length is 4096 tokens. However, you requested 4368 tokens (3168 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.

After restarting the service, it returns to normal.

login failed

Yesterday I successfully ran this project, but today I found I had lost the connection. (In my country the internet is not that good :<)
So I rebooted my process and found this login failed error:

"Auth0 did not issue an access token"

  File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 309, in part_seven
    self.part_eight(old_state=state, new_state=new_state)
  File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 363, in part_eight
    raise Exception("Auth0 did not issue an access token")
Exception: Auth0 did not issue an access token

But I have checked that my account and password are correct, and I can successfully log in via the Chrome browser.
After retrying main.py twice, I found this:
"Exception: You have been rate limited."

Beginning auth process
Beginning part two
Beginning part three
You have been rate limited
Login failed
Traceback (most recent call last):
  File "/app/main.py", line 46, in <module>
    main()
  File "/app/main.py", line 40, in main
    gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 59, in __init__
    self.refresh_session()
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 309, in refresh_session
    raise exc
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 306, in refresh_session
    self.login(self.config["email"], self.config["password"])

docker-compose ERROR: No matching distribution found for openaiauth

I am getting this error when trying docker-compose on Docker Desktop (Win10)
From docker logs:

#0 9.421 [pipenv.exceptions.InstallError]: ERROR: Could not find a version that satisfies the requirement openaiauth==0.0.6 (from versions: none)
#0 9.421 [pipenv.exceptions.InstallError]: ERROR: No matching distribution found for openaiauth==0.0.6

When I try to install using the pip command, I get the same error:

>  pip install openaiauth
ERROR: Could not find a version that satisfies the requirement openaiauth (from versions: none)
ERROR: No matching distribution found for openaiauth

What am I missing?

/reset command can't fix "This model's maximum context length is 4096 tokens"

After sending the /reset command, the following errors still occur. How can I clear the history? Thanks!

⚠️ OpenAI Invalid request ⚠️
This model's maximum context length is 4096 tokens. However, you requested 4728 tokens (3528 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.
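For /reset to work, the handler has to clear the stored history for *that* chat; if the history dict entry survives (or a stale process keeps its own copy), the oversized prompt keeps being resent. A minimal sketch of the expected behavior — the names are illustrative, not the bot's actual code:

```python
# In-memory conversation store, keyed by Telegram chat_id (illustrative).
conversations: dict[int, list] = {}

def add_message(chat_id: int, role: str, content: str) -> None:
    """Append one message to a chat's history, creating the history if needed."""
    conversations.setdefault(chat_id, []).append({"role": role, "content": content})

def reset(chat_id: int) -> None:
    """Handler for /reset: drop the stored history for this chat only."""
    conversations.pop(chat_id, None)
```

If /reset genuinely does nothing, it may be worth checking whether the running container is an old build, since restarting the service fixes it for other reporters above.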

bot dead with httpx.LocalProtocolError

Occasionally the bot seems to die with the error below and stops processing messages. Restarting the Docker container fixes the problem. (It might be related to using non-English prompts, but I haven't narrowed it down.) I wonder if anyone else has run into this.

2023-03-19 18:02:54,155 - telegram.ext._updater - ERROR - Error while getting Updates: httpx.LocalProtocolError: Invalid input ConnectionInputs.SEND_HEADERS in state ConnectionState.CLOSED

About "Live answer updating as the bot types" problem

Hi!
After I updated to this feature, there were some problems: sometimes when ChatGPT types a long paragraph, the message stops loading partway through a sentence (this cannot be fixed by telling ChatGPT to "continue" — in this situation ChatGPT genuinely hasn't finished).

This feature also makes the "typing" indicator in the Telegram dialog appear for a while and then disappear; I don't know what causes this. Without this feature, the Telegram bot displays "typing" until the message is sent. Relatively speaking, this feature may still need some fixes, or there could be an option letting users choose whether to enable "live" mode.

Some Installing Suggestion

  1. Please make sure Python 3.10+ is installed; if the server only has 3.7/3.8, it has to be updated.
  2. pip install python-dotenv
  3. pip install python-telegram-bot --pre (yes, make sure to install with "--pre")
  4. pip install telegram
  5. pip3 install revChatGPT --upgrade

Add session persistence

Upon redeployment the bot shouldn't lose track of its context.
I've yet to look into how the browser UI handles multiple sessions, but I assume that there is some sort of unique ID you can set. If that is the case, a potential solution would be to utilize the chat_id as a unique session ID.
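If the per-chat histories live in a dict keyed by chat_id, as suggested above, persistence can be as simple as serializing that dict to disk on shutdown and reloading it on start. A sketch under that assumption (file name and helper names are illustrative; JSON is chosen only because the message format is already JSON-shaped):

```python
import json
from pathlib import Path

def save_sessions(sessions: dict, path: Path) -> None:
    """Serialize all chat histories, e.g. on shutdown or after each update."""
    path.write_text(json.dumps(sessions))

def load_sessions(path: Path) -> dict:
    """Restore histories on startup; JSON object keys are strings, so re-int the chat_ids."""
    if not path.exists():
        return {}
    return {int(chat_id): history
            for chat_id, history in json.loads(path.read_text()).items()}
```

For a Docker deployment this file would need to sit on a mounted volume, otherwise redeployment still wipes it.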

Transcribe then reply

When I set VOICE_REPLY_WITH_TRANSCRIPT_ONLY to false, can the bot transcribe my voice and output the transcript first, then reply to it? That would be more user-friendly.

Originally posted by @deanxizian in #38 (comment)

Extremely slow

Hey.
thanks for your awesome work.
The bot works, but it's extremely slow.
How can I increase its speed? The server has enough resources.

Captcha detected

I have valid email and password in my .env, my VPS IP is from Germany, my OpenAI account was registered in Germany too.

Beginning part three
Beginning part four
Beginning part five
Error in part five
Captcha detected
Login failed
Traceback (most recent call last):
  File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 45, in <module>
    main()
  File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 39, in main
    gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 55, in __init__
    self.refresh_session()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 305, in refresh_session
    raise exc
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 302, in refresh_session
    self.login(self.config["email"], self.config["password"])
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 340, in login
    raise exc
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 333, in login
    auth.begin()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 83, in begin
    self.part_two()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 112, in part_two
    self.part_three(token=csrf_token)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 147, in part_three
    self.part_four(url=url)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 181, in part_four
    self.part_five(state=state)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 219, in part_five
    raise ValueError("Captcha detected")
ValueError: Captcha detected

How to use .env and load_dotenv()

I am trying to set up the project and enter my own parameters into the .env file which I've created. Shouldn't I use a .env.py extension instead? If I get an answer, I can add this information to the README so that any newbie would be able to understand the configuration requirements.
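To answer the question directly: `.env` is a plain text file, not a Python module, so no `.py` extension is involved. `load_dotenv()` from the python-dotenv package parses one `KEY=value` pair per line and loads them into the process environment. A sketch of the file (the values are placeholders, and the exact variable names depend on your version's README):

```
# File named exactly ".env" in the project root — not .env.py.
# One KEY=value per line; lines starting with "#" are comments.
TELEGRAM_BOT_TOKEN=<your bot token from BotFather>
OPENAI_API_KEY=<your OpenAI API key>
```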

Add support for using inline mode in private conversations

Currently, the bot only supports using inline mode in group chats.
I would like to request a new feature that would allow inline mode to be used in private conversations as well.
This would be useful for users who prefer to interact with the bot one-on-one instead of in a group chat.
