
tf2-gptchatbot's Introduction

Hi there, I'm Danil 👋

  • 🔭 I'm currently mastering my favourite projects.
  • 🌱 I’m currently learning Angular and mastering Django and DRF.

Connect with me:

dborodin836 on LinkedIn | dborodin836 on Telegram

tf2-gptchatbot's People

Contributors

dborodin836, teufortressindustries

Stargazers

(12 stargazers)

Watchers

(4 watchers)

tf2-gptchatbot's Issues

GroqCloud support?

The introduction of Llama 3 by Meta, which approaches the capabilities of GPT-4, has led to its inclusion in Groq's suite of available models. Unfortunately, my current hardware can't run these models at a decent speed. Groq can, though, achieving roughly 300 tokens per second at relatively low cost, and it is currently offering a free beta.
https://console.groq.com/playground
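For reference, a minimal sketch of what Groq support could look like, assuming Groq's OpenAI-compatible endpoint and the openai>=1.0 client (the project currently uses the older 0.x client); the model id shown is an assumption and should be checked against the Groq console:

from openai import OpenAI

# Sketch only: Groq exposes an OpenAI-compatible API, so the existing request
# code could be pointed at it. The endpoint and model id below are assumptions.
client = OpenAI(
    api_key="gsk_...",  # Groq API key from console.groq.com
    base_url="https://api.groq.com/openai/v1",
)
completion = client.chat.completions.create(
    model="llama3-70b-8192",  # check the Groq console for current model names
    messages=[{"role": "user", "content": "Say hi in under 128 characters."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)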

I don't want the AI to respond to every move I make.

Is there a way to configure this without digging around in the code?

It's just a bit annoying...

Sure, I could just not kill people, stop capturing/defending points, stop kill-binding, or avoid JOINING A SERVER, and the AI wouldn't generate responses to those events. But it would be much nicer if I could just set 0 or FALSE in config.ini rather than opening a whole issue just to get this kind of feature. Or maybe I'm just missing an existing switch that turns this off, who knows?

Thanks in advance.

Screenshot 2024-01-04 035328
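For illustration, a minimal sketch of what such a toggle could look like, assuming a hypothetical RESPOND_TO_GAME_EVENTS key in config.ini (the key name and section are made up, not an existing option):

import configparser

# Sketch only: hypothetical config.ini option, e.g.
#   [GENERAL]
#   RESPOND_TO_GAME_EVENTS = 0
config = configparser.ConfigParser()
config.read("config.ini")
respond_to_events = config.getboolean("GENERAL", "RESPOND_TO_GAME_EVENTS", fallback=True)

def on_game_event(event: str) -> None:
    if not respond_to_events:
        return  # skip kills, point captures, kill-binds, server joins, etc.
    # ...generate an AI response as usual...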

Message sending delay (+queueing)

Some servers don't allow messages to be sent too quickly, so adding a customizable delay before sending the continuation of a previous message might prevent this. Also, when multiple users are using the bot, it should finish sending the first message it generated before starting on the next ones (or somehow label each message so that it's clear who it is addressed to).
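A rough sketch of how a configurable delay plus a send queue could work; the delay value, the queue layout, and the send_to_tf2_chat helper are all placeholders, not existing code:

import queue
import threading
import time

SEND_DELAY_SECONDS = 1.5  # placeholder; would come from config.ini
send_queue: queue.Queue = queue.Queue()

def sender_worker() -> None:
    # A single worker drains the queue, so one whole reply is sent
    # (chunk by chunk, with a delay between chunks) before the next starts.
    while True:
        username, chunks = send_queue.get()
        for chunk in chunks:
            send_to_tf2_chat(f"[to {username}] {chunk}")  # placeholder send routine
            time.sleep(SEND_DELAY_SECONDS)
        send_queue.task_done()

threading.Thread(target=sender_worker, daemon=True).start()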


Groq is great, system role next?

Groq's speed is impressive; the responses are instant, as if they were all pre-made. But it lacks personality.
While we have CUSTOM_PROMPT that addresses this issue, it would be nice to utilize the system role to determine its behavior.
I believe it's more efficient than inputting
[user message] (act like Soldier from the 2007 hit game Team Fortress 2. Soldier talks like blah blah blah... He likes buckets and blah blah blah...)
every single time.
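A minimal sketch of the idea, assuming the OpenAI-compatible request format Groq uses; client, MODEL, and CUSTOM_PROMPT stand in for whatever the bot already has configured:

# Sketch only: the persona goes into a single "system" message instead of being
# appended to every user message.
messages = [
    {"role": "system", "content": CUSTOM_PROMPT},  # e.g. "Act like Soldier from Team Fortress 2..."
    {"role": "user", "content": user_message},
]
completion = client.chat.completions.create(model=MODEL, messages=messages)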

The chatbot stops working after attempting to chat with GPT using an incorrect API key.

Users can intentionally type !pc or other !gpt commands, which causes a fatal error when the configured API key is invalid. The software should handle the error so the chatbot keeps working, or provide an option to disable the GPT commands.

Exception in thread Thread-1 (parse_console_logs_and_build_conversation_history):
Traceback (most recent call last):
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 127, in get_response
    response = send_gpt_completion_request(conversation_history, username, model=model)
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 38, in send_gpt_completion_request
    completion = openai.ChatCompletion.create(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: sk-11111***************************************1111. You can find your API key at https://platform.openai.com/account/api-keys.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\modules\chat.py", line 74, in parse_console_logs_and_build_conversation_history
    controller.process_line(logline)
  File "C:\AIstuff\TF2-GPTChatBot\modules\command_controllers.py", line 108, in process_line
    handler(logline, self.__shared)
  File "C:\AIstuff\TF2-GPTChatBot\modules\commands\openai.py", line 41, in handle_user_chat
    conv_his = handle_cgpt_request(
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 72, in handle_cgpt_request
    response = get_response(conversation_history.get_messages_array(), username, model)
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 141, in get_response
    main_logger(f"Unhandled error happened! Cancelling ({e})")
TypeError: 'Logger' object is not callable
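A possible fix, sketched from the traceback above: get_response in modules/api/openai.py calls the Logger object directly; using a logging method and catching the authentication error keeps the worker thread alive. The surrounding try/except structure is assumed, not copied from the actual code:

# Sketch only: inside get_response()
try:
    response = send_gpt_completion_request(conversation_history, username, model=model)
except openai.error.AuthenticationError as e:
    main_logger.error(f"OpenAI authentication failed, check the API key ({e})")
    return None
except Exception as e:
    main_logger.error(f"Unhandled error happened! Cancelling ({e})")  # .error() instead of calling the Logger
    return None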


Custom Instructions

Is there a way to add a custom instruction that is included with each request? (!gpt3 input)

Let's say this is our default prompt:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Input. Answer in less than 128 chars!

### Response:
Sure, what would you like to know?

And I want it to be:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Input. Answer in less than 128 chars! Be funny and do 9/11 jokes as if you were Walter White from Breaking Bad.

### Response:
Well, I'd say cooking up blue meth was one thing... but blowing up towers? That takes some real chemistry.

Or something like:

Below is an instruction that describes a task. Write a response that appropriately completes the request. You're an AI chatbot integrated into Team Fortress 2. If someone asks a question with no context they might be talking about the TF2.

### Instruction:
When is the next update? Answer in less than 128 chars!

### Response:
The next Team Fortress 2 update isn't confirmed, but we usually get them around October or November each year. Stay tuned for announcements on our official channels!
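A rough sketch of how a configurable instruction could be spliced into the prompt; CUSTOM_INSTRUCTION and build_prompt are hypothetical names, and the template just mirrors the default prompt shown above:

CUSTOM_INSTRUCTION = (
    "You're an AI chatbot integrated into Team Fortress 2. If someone asks a "
    "question with no context they might be talking about TF2."
)

def build_prompt(user_input: str) -> str:
    # The fixed template mirrors the default prompt; only CUSTOM_INSTRUCTION is new.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request. " + CUSTOM_INSTRUCTION + "\n\n"
        "### Instruction:\n"
        f"{user_input} Answer in less than 128 chars!\n\n"
        "### Response:\n"
    )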

By the way, I'm running it locally with oobabooga text-generation-webui. For some reason the --public-api command-line flag doesn't work for me (it says it couldn't connect to Cloudflare), so I just set the environment variable OPENAI_API_BASE=http://127.0.0.1:5000/v1 and use the bot as if it were connected to OpenAI. (In a recent update they changed their API to be similar to OpenAI's; I believe it was previously just an extension, but it's now a default option.)

[TEXT-GENERATION-PUBLIC-API] might be deprecated

--public-api is unreliable; it doesn't work for me or for a couple of other users.
oobabooga/text-generation-webui#3570
I have a feeling the code that handles the API calls is outdated since text-gen's API was updated fairly recently.
It'd be super cool if you could update it or just get rid of it altogether.
Updating it would let people fine-tune the model's parameters and customize the instructions to fit their model's needs while also keeping things private.
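For what it's worth, a minimal sketch of calling the newer OpenAI-compatible API that text-generation-webui now ships (default http://127.0.0.1:5000/v1); the exact extra sampling parameters accepted depend on the webui version:

import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "When is the next update?"}],
        "max_tokens": 128,
        "temperature": 0.7,  # sampling parameters can be tuned per model
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])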

Bot can't connect

Whenever I open the executable, I get a "Couldn't connect! Retrying in 4 second..." message.
I have properly added the launch options with nothing else and put in my GPT-3 key. Using ChatGPT from the GUI works fine. Am I doing something wrong?

Better Chat Conversation History? User Isolation and Nickname Features.

Hello again. So, have you noticed that the !chat command is kind of throwing everyone's messages into one big pot? That's definitely a recipe for chaos: spammers can cause all sorts of problems and make a coherent conversation with the chatbot impossible.
I've got a couple of ideas to tidy things up:

  • Make chat isolation for each user (separate chat history for each)
    OR
  • Add dynamic nickname (Adds the player’s nickname to the chat history)
    • Anonymous mode (User1, User2, User3)
    • Public mode (Scout, Engineer, Walter Hartwell White)

In practice, I've observed that people generally prefer more privacy rather than sharing the same chat space with others. Take roleplaying as an example: a user can be having a cute little conversation with the chatbot when someone interrupts it, and now the chatbot thinks the user it was just talking to said something devious, an outcome no user wants.

Example of the current way the chat history is handled:
User: Hello.
Assistant: Greetings! As an artificial intelligence, I am fully at your service. Feel free to inquire!
User (Different person): Do you speak French?
Assistant: I am proficient in several languages, French being one of them, so yes!
User: Can you engage in roleplaying?
Assistant: As an artificial intelligence, I am equipped to handle a variety of tasks, including engaging in roleplay across multiple languages and French.
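A minimal sketch of the per-user isolation idea: one message list keyed by the player's name instead of a single shared history (the function and variable names here are made up):

conversations: dict = {}  # username -> list of {"role", "content"} messages

def get_history(username: str) -> list:
    # Each player gets an isolated history, so other people's messages
    # never leak into their conversation.
    return conversations.setdefault(username, [])

def handle_chat(username: str, message: str) -> None:
    history = get_history(username)
    history.append({"role": "user", "content": message})
    # ...send `history` to the model, then append the assistant reply...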

Losing Admin Privileges Due to Community Server Name Tags

Some servers put labels or other tags in front of the username, which breaks admin privileges and causes problems with the !clear and !gpt4 admin commands. The issue also affects other features that rely on usernames.

Example:

[Player] punching bag: !clear \global
Clearing chat history for user '[Player]  punching bag'.
*SPEC* punching bag: !clear \global
Clearing chat history for user '*SPEC* punching bag'.
*SPEC* [Player] punching bag: !pcc I will burn this planet down
[*SPEC* [Play..] hahaha what you got against earth?
      ^ Who is he referring to? (I can bump up the shortened username limit but it defeats its purpose)

Possible solution: code that reads a tags.json file and strips the listed tags from usernames, plus a GUI command for adding tags, similar to the ban feature.

> tag uwu
[01:54:50] -- TAGGED 'uwu'
> untag uwu
[01:54:55] -- UNTAGGED 'uwu'
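A rough sketch of the proposed tag stripping, assuming tags.json holds a plain list of tag strings (the file format and function name are part of the proposal, not existing code):

import json

with open("tags.json", encoding="utf-8") as f:
    TAGS = json.load(f)  # e.g. ["[Player]", "*SPEC*", "uwu"]

def strip_tags(username: str) -> str:
    # Repeatedly remove known tags from the start of the name, so
    # "*SPEC* [Player] punching bag" becomes "punching bag".
    changed = True
    while changed:
        changed = False
        for tag in TAGS:
            if username.startswith(tag):
                username = username[len(tag):].lstrip()
                changed = True
    return username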

Error with time.py

I was using TF2-GPTChatBot as usual, and I got this error the next time I tried to get a response.
I have no idea how to fix it or what caused it to suddenly pop up.
Console log:

Exception in thread Thread-1 (parse_console_logs_and_build_conversation_history):
Traceback (most recent call last):
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\time.py", line 43, in get_minutes_from_str
    struct_time = time.strptime(time_str, "%H:%M:%S")
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 562, in _strptime_time
    tt = _strptime(data_string, format)[0]
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2:' does not match format '%H:%M:%S'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\modules\chat.py", line 67, in parse_console_logs_and_build_conversation_history
    for logline in get_console_logline():
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\text.py", line 210, in get_console_logline
    stats_regexes(line)
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\text.py", line 164, in stats_regexes
    tm = get_minutes_from_str(time_on_server)
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\time.py", line 46, in get_minutes_from_str
    struct_time = time.strptime(time_str, "%M:%S")
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 562, in _strptime_time
    tt = _strptime(data_string, format)[0]
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2:' does not match format '%M:%S'
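A possible workaround sketched from the traceback: make get_minutes_from_str fall back to 0 when the console emits a truncated value like '2:' instead of raising (the return-value convention is assumed from the function name):

import time

def get_minutes_from_str(time_str: str) -> int:
    # Try both formats the original code uses, then give up gracefully.
    for fmt in ("%H:%M:%S", "%M:%S"):
        try:
            struct_time = time.strptime(time_str, fmt)
            return struct_time.tm_hour * 60 + struct_time.tm_min
        except ValueError:
            continue
    return 0  # malformed value such as '2:' -- treat as zero minutes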

The program does not see the command on some servers

(I changed !gpt3 to !ai; that works fine.)
server on screenshot: 162.248.94.182:27015
Ignore the "Couldn't connect" warning; TF2 wasn't running at that moment.

Maybe it's because of the score shown to the left of the nickname? There was a server where TF2 and Discord chat were bridged, and people on Discord couldn't get the AI working while others could (their nicknames had a "discord" tag on the left).
server I'm talking about: 172.232.147.231:27016 (it's empty most of the time)

Here's the console log in case you need it:
console.log
