
kuebiko's Introduction



Logo

Kuebiko

WARNING: This doc is out of date. While it still applies to the legacy mode, I recommend using the Streamer.bot and Speaker.bot mode instead. A tutorial will be added if needed; until then, there will be dragons. You will need to create a custom system prompt, so it is recommended to read up on prompt engineering and to experiment a lot.

A Twitch chat bot that reads Twitch chat and creates a text-to-speech response using the Google Cloud API and OpenAI's GPT-3 text completion model.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

YouTube Video Tutorial !OUTDATED!

Product Name Screen Shot

This is a project to setup your very own VTuber AI similar to "Neuro-Sama".

(back to top)

Built With

  • Python

(back to top)

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

  • VLC MUST BE DOWNLOADED ON YOUR COMPUTER

To install the prerequisites, run:

  • pip
    pip install -r requirements.txt

Installation

  1. Get an OpenAI API key at OpenAPIKey
  2. Get a Twitch API Token at TwitchToken
  3. Create a Google Cloud Project with TTS Service enabled and download JSON credentials file. GoogleCloud
  4. Clone the repo
    git clone https://github.com/adi-panda/Kuebiko/
  5. Add the Google Cloud JSON file into the project folder.
  6. Enter API Keys in creds.py:
# Your Twitch token
TWITCH_TOKEN = ""
# Your Twitch channel name
TWITCH_CHANNEL = ""
# Your OpenAI API Key
OPENAI_API_KEY = ""
# Your Google Cloud JSON Path
GOOGLE_JSON_PATH = ""
  7. Download VTube Studio and use VB-Audio Cable to route audio coming from the program.
  8. Add the following script into OBS: CaptionsScript
  9. Create a new text source for captions, set it to read from a file, and select the output.txt file from the project folder.
  10. In the script options, put the name of your text source.
  11. In the transform options, set the script to scale to inner bounds, and adjust the size of the captions.
  12. Enjoy! For more details, watch the attached video.
  13. IN ORDER TO CHANGE THE VOICE OF YOUR VTUBER, change the following parameters in main.py. Here is a list of supported voices:
  voice = texttospeech.VoiceSelectionParams(
      language_code="en-GB",
      name= "en-GB-Wavenet-B",
      ssml_gender=texttospeech.SsmlVoiceGender.MALE,
  )

(back to top)

Usage

Use this space to show useful examples of how a project can be used. Additional screenshots, code examples and demos work well in this space. You may also link to more resources.

For more examples, please refer to the Documentation

(back to top)

Roadmap

  • Feature 1
  • Feature 2
  • Feature 3
    • Nested Feature

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License.

(back to top)

Contact

@adi_panda - [email protected] @truecaesarlp - [email protected]

Project Link: You are here

(back to top)

Acknowledgments

(back to top)

Instructions

Replace the API keys in creds.py, add the Google Cloud JSON file, and the program should work.

kuebiko's People

Contributors

adi-panda · caesarakalaeii


kuebiko's Issues

AI eventually forgets about the info given in prompt_chat.txt

After a while, because of all the conversations going on, the AI forgets the initial information set by prompt_chat.txt.

Is there a good way around this?

I tried adding a very makeshift way of pushing the prompt in again after every conversation, but that just made me get token errors, because it's adding too much to it. I'm absolutely doing it the wrong way, because I'm not too good at this stuff.

async def event_message(self, message):
    ...
    Bot.conversation.append(
        {'role': 'system', 'content': open_file('prompt_chat.txt')})
    print('------------------------------------------------------')
    os.remove(audio_file)

    # Since we have commands and are overriding the default `event_message`
    # we must let the bot know we want to handle and invoke our commands...
    await self.handle_commands(message)
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4184 tokens (3784 in the messages, 400 in the completion). Please reduce the length of the messages or completion.

I'm basically trying to find a way to inject the prompt info now and then without it piling up and ruining everything, because the way I'm currently going about it clearly isn't working.

Thanks for this project by the way, truly appreciate it. Having so much fun with it.
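One workaround, sketched below under the assumption that Bot.conversation holds chat-format messages with the system prompt first (the function names are illustrative, not the project's own): keep the system prompt pinned and trim the oldest turns, instead of re-appending the prompt.

```python
# Sketch: keep the system prompt pinned and drop the oldest turns when
# the history grows past a rough budget. The character-based estimate
# (~4 chars per token) is an approximation; a real implementation could
# use tiktoken for exact counts.

def estimate_tokens(messages):
    """Very rough token estimate: ~4 characters per token."""
    return sum(len(m["content"]) for m in messages) // 4

def trim_history(messages, budget=3000):
    """Drop the oldest non-system messages until under budget.

    Assumes messages[0] is the system prompt and must survive.
    """
    system, rest = messages[0], messages[1:]
    while rest and estimate_tokens([system] + rest) > budget:
        rest.pop(0)  # discard the oldest turn
    return [system] + rest
```

Calling trim_history(Bot.conversation) before each completion request should keep requests under the 4097-token limit while the pinned system prompt preserves the prompt_chat.txt persona.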

Discord Integration

Hey,

I don't suppose you could rework this so that instead of targeting Twitch, it integrates with Discord?

As in: it is linked to a bot that we can create in the Discord Dev Portal; it joins a channel when the host joins, uses speech recognition to listen to what that person says, and then responds with text to speech.

This would be extremely helpful if you have the time to look in to this.

Thank you.

Doesn't work in Russia

I live in Russia and would like to do this, but I can't: Google doesn't provide this option there, and there are a lot of problems with this application. Can you help me, please?

How to make the vtuber take action according to ChatGPT's instructions ?

Hello, I want ChatGPT to play various roles and include simple action instructions in the generated responses, so the VTuber can take action according to those instructions. For example:

me: can you turn around?
chatgpt: yes, I can (turn around body).
vtuber voice: yes, I can.
vtuber action: turn around

How to add a fine-tuned model?

How can I add my custom fine-tuned model? This code triggers an error, and I don't know why:

chat.py
import openai

def open_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as infile:
        return infile.read()

openai.api_key = "mykey"
openai.api_base = 'https://api.openai.com/v1/chat'

def gpt3_completion(messages, engine='davinci:ft-personal-2023-04-07-21-57-02', temp=0.9, tokens=400, freq_pen=2.0, pres_pen=2.0, stop=['DOGGIEBRO:', 'CHATTER:']):
    response = openai.Completion.create(
        model=engine,
        messages=messages,
        temperature=temp,
        max_tokens=tokens,
        frequency_penalty=freq_pen,
        presence_penalty=pres_pen,
        stop=stop)
    text = response['choices'][0]['message']['content'].strip()
    return text
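A likely cause (sketched here against the openai-python 0.x client the snippet uses, with an illustrative helper): legacy fine-tuned models such as davinci:ft-... are served by the completions endpoint, not the chat endpoint, so api_base should stay at its default and the request takes a prompt string rather than messages=.

```python
def messages_to_prompt(messages):
    """Flatten chat-style messages into a single prompt string.
    An illustrative helper, not part of the repo."""
    return "\n".join(f"{m['role'].upper()}: {m['content']}" for m in messages)

def gpt3_completion(messages, engine='davinci:ft-personal-2023-04-07-21-57-02',
                    temp=0.9, tokens=400, freq_pen=2.0, pres_pen=2.0,
                    stop=['DOGGIEBRO:', 'CHATTER:']):
    import openai  # openai-python 0.x assumed
    # Leave openai.api_base at its default; pointing it at /v1/chat is
    # the likely bug in the original snippet.
    response = openai.Completion.create(
        model=engine,
        prompt=messages_to_prompt(messages),  # prompt string, not messages=
        temperature=temp,
        max_tokens=tokens,
        frequency_penalty=freq_pen,
        presence_penalty=pres_pen,
        stop=stop)
    return response['choices'][0]['text'].strip()  # ['text'], not ['message']
```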

Add word filter?

Hi, I would like to add a word filter. Is it possible?
I have read the OpenAI documentation about moderation, but I don't understand how to implement it.
Can someone help me? Thank you in advance.

Ignore chat icons?

I would like to avoid having the bot read the chat icons (emotes) all the time. Is there a way to filter them out?
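One approach, sketched below: Twitch's IRC messages carry an emotes tag with character ranges for every emote in the text, so the emote text can be cut out before it reaches TTS. Whether your twitchio version exposes the raw tag (e.g. via message.tags) is an assumption to verify.

```python
def strip_emotes(text, emotes_tag):
    """Remove emote spans from a Twitch chat message.

    emotes_tag is the raw IRC 'emotes' tag, e.g. "25:0-4,9-13/1902:6-8",
    where each start-end range indexes into the message text.
    """
    if not emotes_tag:
        return text
    spans = []
    for emote in emotes_tag.split("/"):
        _, ranges = emote.split(":", 1)
        for r in ranges.split(","):
            start, end = map(int, r.split("-"))
            spans.append((start, end))
    # Delete from the end so earlier indices stay valid.
    for start, end in sorted(spans, reverse=True):
        text = text[:start] + text[end + 1:]
    return " ".join(text.split())  # collapse leftover whitespace
```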

How can I use the Stanford Alpaca model from a locally run computer instead of OpenAI?

Hi, I have had a great time with this project! I was wondering if there is a way to replace the OpenAI response fetch with a Stanford LLaMA model that I would run locally on a networked computer. That would be class because it would allow me to create my own filter, allowing the VTuber to be a bit more risqué than what OpenAI allows. Thank you.

gpt-3.5-turbo

Does anyone know how to implement gpt-3.5-turbo in the code? It seems like a better and cheaper option for this project than previous OpenAI models. Thank you.
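Assuming the repo's openai-python 0.x client, swapping in gpt-3.5-turbo means using the ChatCompletion API, which takes a list of role-tagged messages rather than a flat prompt. A sketch (the function names are illustrative):

```python
def build_messages(system_prompt, history, user_message):
    """Assemble the chat-format message list gpt-3.5-turbo expects."""
    return ([{"role": "system", "content": system_prompt}]
            + list(history)
            + [{"role": "user", "content": user_message}])

def gpt35_completion(messages, temp=0.9, tokens=400):
    import openai  # openai-python 0.x assumed
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=temp,
        max_tokens=tokens)
    return response["choices"][0]["message"]["content"].strip()
```

Since the repo already keeps its conversation as role-tagged dicts, the existing history can be passed through build_messages largely unchanged.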

Feature req: Please integrate apipie.ai

Users want access to as much AI as they can get. They don't want to manage 50 accounts; they want the fastest AI and the cheapest AI, and you can provide all of that for them with this update.

In addition to or in place of integrating with any aggregators, please integrate APIpie so devs can access them all from one place/subscription. It also provides:

- The most affordable, reliable and fastest AI available
- One API to access ~500 models and growing
- Language, embedding, voice, image, vision and more
- Global AI load balancing, route queries based on price or latency
- Redundancy for major models providing the greatest uptime possible
- Global reporting of AI availability, pricing and performance

It's the same API format as OpenAI: just change the domain name and your API key and enjoy a plethora of models without changing any of your code other than how you handle the models list.

This is a win-win for everyone; any new AIs from any providers will be automatically integrated into your stack with this one integration. Not to mention all the other advantages.

on the master branch errors

Hey! I saw there were some updates on the master branch, so I tried to grab them. I couldn't find the Google application credentials area, and I was also getting this error:

File "C:\Users\jack\Desktop\AI Streamer 4\Kuebiko-master\main.py", line 3, in <module>
    from chat import *
File "C:\Users\jack\Desktop\AI Streamer 4\Kuebiko-master\chat.py", line 10, in <module>
    with open(os.path.expanduser('~') + '/.kuebikoInfo.json') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\jack/.kuebikoInfo.json'

How to fix "FileNotFoundError: Could not find module libvlc.dll" when running the Twitch chatbot?

Hi, I encountered an error when running 'python main.py' for the Twitch chatbot. Specifically, I got a FileNotFoundError saying that the libvlc.dll module could not be found in the directory specified. I checked my setup and it seems like the module is indeed missing. Can someone please advise on how to resolve this issue? Thank you!

The methods I have tried that were not useful (same error):

  1. Reinstalling the vlc library by running the following commands in the command prompt:
    pip uninstall python-vlc
    pip install python-vlc
  2. Manually installing the vlc library by downloading libvlc.dll (version 3.0.8.0) from https://www.dll-files.com/libvlc.dll.html into the "Kuebiko-main" folder
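Another thing worth trying (a sketch; the default path below is VLC's standard 64-bit install location and an assumption for your machine): register VLC's install directory with Python's DLL search path before python-vlc is imported, since Python 3.8+ on Windows no longer searches PATH for DLLs. Also check that Python and VLC are both 64-bit or both 32-bit.

```python
import os
import sys

def register_vlc_dir(path=r"C:\Program Files\VideoLAN\VLC"):
    """Add VLC's install dir to the DLL search path so python-vlc
    can locate libvlc.dll. Returns True if the directory was added.

    Adjust the path to wherever VLC actually lives on your machine.
    """
    if sys.platform == "win32" and os.path.isdir(path):
        os.add_dll_directory(path)  # available on Windows, Python 3.8+
        return True
    return False

# In main.py this would need to run before `import vlc`:
# register_vlc_dir()
# import vlc
```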

video tutorial?

Hello, I really like your work! But I'm struggling to figure out how to make everything work. Is there a way you can make a video tutorial on how to set everything up?

New Follower Instance

If I wanted the bot to thank new followers as they come in, and not only respond to chats, how could I do that?

No Sound

The character's mouth opens and moves according to the response, but no sound is coming out.

How can we make the AI remember the chat history?

It is an awesome project!

I am trying to implement a powerful AI VTuber and I found this amazing repo.

But if we use the GPT-3 API, we cannot fine-tune the model on the chat history.

We could add the history to the prompt text, but that is still a limitation for handling a long chat history (OpenAI limits the length of the context).

In my opinion, we should keep a local trainable model for achieving a more real AI VTuber.
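One middle ground, sketched here with placeholder names: keep the most recent turns verbatim and fold older turns into a digest appended to the system message. A real implementation might ask the model itself to write the summary of the dropped turns; here they are just concatenated, to show the shape of the approach.

```python
def compact_history(messages, keep_last=10):
    """Fold older turns into a plain-text digest kept in the system
    message, retaining only the most recent turns verbatim.

    Assumes messages[0] is the system prompt.
    """
    system, rest = messages[0], messages[1:]
    if len(rest) <= keep_last:
        return messages
    old, recent = rest[:-keep_last], rest[-keep_last:]
    digest = " / ".join(f"{m['role']}: {m['content'][:80]}" for m in old)
    system = {"role": "system",
              "content": system["content"] + "\n[Earlier chat]: " + digest}
    return [system] + recent
```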

a better TTS?

Can someone implement Azure TTS or something better? Thanks in advance.

A few different modes.

I know I am cramming the box and I'm sorry about that. I do want to get out the things I would like to see in future versions of this project. I have a few feature ideas that would probably require a true/false toggle. I came across the desire for these features as I am using the AI VTuber as a co-streamer.

  • I would like it to be able to listen to closed captions to activate conversations with me. I have closed captions on my mic, and I can also get it to work for Discord through OBS so Twitch can display them; I just have to tell the captions which channel to listen to. Ideally, it would listen when its name was mentioned, keep listening for 30 seconds after the last response to continue a conversation, and then turn itself off after 30 seconds of silence. Maybe 30 seconds is too long, but I am kind of spitballing here.

  • I would also like it to start its own small talk when there has been no action on closed captions for, say, a minute. Maybe tell a story about its day or something funny that happened to it. I know part of that belongs in the prompt, but something would need to be programmed in to activate those kinds of conversations.

  • I also had a crazy idea of the temperature changing to match the "mood" of the conversation. If people are being mean or she doesn't like the conversation, her responses are more rigid; if she is in a "good mood" or people are just being nice, she can give more loose and unpredictable responses. Perhaps after each response, if it contains negative words it does a -0.1, if it contains positive words a +0.1, and otherwise no change. Of all three, I'm sure this one would be the hardest to execute, so you can tell me to get bent on this one lol. It would be cool if it could, though.

If you ever want to audit what I'm doing, drop by my stream sometime.
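The third idea could be sketched roughly like this (the word lists, step size, and clamping bounds are illustrative placeholders, not anything from the project):

```python
# Nudge the sampling temperature up or down based on simple
# positive/negative word counts in the last generated response.
POSITIVE = {"great", "love", "happy", "fun", "nice"}
NEGATIVE = {"hate", "mean", "angry", "stupid", "bad"}

def adjust_temperature(temp, response, step=0.1, lo=0.2, hi=1.2):
    """Return a new temperature nudged by the response's 'mood'.

    If both moods appear, the positive nudge wins (a simplification).
    """
    words = set(response.lower().split())
    if words & POSITIVE:
        temp += step
    elif words & NEGATIVE:
        temp -= step
    return max(lo, min(hi, temp))  # keep within sane sampling bounds
```

The running temperature would then be threaded into each completion call in place of the fixed temp=0.9.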

fine tuning

Hello, I'm thinking of fine-tuning the AI to do what I want, but I don't know how or where to put the final model in the code. Does anyone know how to help me?

Getting requirements to build wheel... error: subprocess-exited with error

Hi, I was following the video closely and am currently stuck at 10:37.

After executing "pip install -r requirements.txt", a bunch of the requirements started to be downloaded. I've included the error below.
I appreciate any help I can get on this.

"
Using cached numpy-1.24.1.tar.gz (10.9 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [33 lines of output]
Traceback (most recent call last):
  File "C:\Users\redacted\miniconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
    main()
  File "C:\Users\redacted\miniconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "C:\Users\redacted\miniconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 112, in get_requires_for_build_wheel
    backend = _build_backend()
  File "C:\Users\redacted\miniconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
    obj = import_module(mod_path)
  File "C:\Users\redacted\miniconda3\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "C:\Users\redacted\AppData\Local\Temp\pip-build-env-w6oo9_xy\overlay\Lib\site-packages\setuptools\__init__.py", line 16, in <module>
    import setuptools.version
  File "C:\Users\redacted\AppData\Local\Temp\pip-build-env-w6oo9_xy\overlay\Lib\site-packages\setuptools\version.py", line 1, in <module>
    import pkg_resources
  File "C:\Users\redacted\AppData\Local\Temp\pip-build-env-w6oo9_xy\overlay\Lib\site-packages\pkg_resources\__init__.py", line 2172, in <module>
    register_finder(pkgutil.ImpImporter, find_on_path)
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
"

Error vlc

Hello, I got this error:

mmdevice audio output error: cannot initialize COM (error 0x80010106)

Any solutions?

How would I get it to talk randomly?

How would I get it to talk randomly to fill dead silence (talk about how its day was, tell a story, etc.) and not rely only on Twitch chat?

I would appreciate if someone could help me with this!
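A sketch of one way to do this (all names are illustrative; the speak callback is assumed to be whatever coroutine already generates a GPT-3 response and plays the TTS audio): run a background asyncio task that fires a canned prompt after a period of chat silence.

```python
import asyncio
import random
import time

IDLE_SECONDS = 60  # how long chat must be silent before small talk
IDLE_PROMPTS = [   # illustrative prompts to feed the model
    "Tell a short story about something funny that happened today.",
    "Share a random thought about streaming.",
]

class IdleTalker:
    """Track the last chat activity and decide when to fill silence."""

    def __init__(self, idle_seconds=IDLE_SECONDS):
        self.idle_seconds = idle_seconds
        self.last_activity = time.monotonic()

    def touch(self):
        """Call this from event_message whenever chat is active."""
        self.last_activity = time.monotonic()

    def is_idle(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity >= self.idle_seconds

    async def run(self, speak):
        """Background loop: when idle, hand a random prompt to `speak`."""
        while True:
            await asyncio.sleep(5)
            if self.is_idle():
                await speak(random.choice(IDLE_PROMPTS))
                self.touch()  # reset the idle clock after speaking
```

The run coroutine would be started alongside the bot (e.g. with asyncio.create_task) and touch() called from the existing message handler.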
