
shaunwei / realchar


🎙️🤖Create, Customize and Talk to your AI Character/Companion in Realtime (All in One Codebase!). Have a natural seamless conversation with AI everywhere (mobile, web and terminal) using LLM OpenAI GPT3.5/4, Anthropic Claude2, Chroma Vector DB, Whisper Speech2Text, ElevenLabs Text2Speech🎙️🤖

Home Page: https://RealChar.ai/

License: MIT License

Python 29.70% Mako 0.08% JavaScript 37.57% CSS 0.23% Swift 21.35% Dockerfile 0.49% Shell 0.16% Kotlin 10.41%

realchar's Issues

closed

ERROR: anthropic 0.3.4 has requirement pydantic<2.0.0,>=1.9.0, but you'll have pydantic 2.0.3 which is incompatible.
ERROR: clickhouse-connect 0.6.6 has requirement urllib3>=1.26, but you'll have urllib3 1.25.8 which is incompatible.
ERROR: chromadb 0.3.29 has requirement fastapi==0.85.1, but you'll have fastapi 0.100.0 which is incompatible.
ERROR: chromadb 0.3.29 has requirement pydantic<2.0,>=1.9, but you'll have pydantic 2.0.3 which is incompatible.
ERROR: elevenlabs 0.2.19 has requirement pydantic<2.0,>=1.10, but you'll have pydantic 2.0.3 which is incompatible.
ERROR: jupyter-client 8.3.0 has requirement importlib-metadata>=4.8.3; python_version < "3.10", but you'll have importlib-metadata 1.5.0 which is incompatible.
ERROR: langsmith 0.0.5 has requirement pydantic<2,>=1, but you'll have pydantic 2.0.3 which is incompatible.
ERROR: langchain 0.0.234 has requirement pydantic<2,>=1, but you'll have pydantic 2.0.3 which is incompatible.
ERROR: langchain 0.0.234 has requirement SQLAlchemy<3,>=1.4, but you'll have sqlalchemy 1.3.12 which is incompatible.
ERROR: llama-index 0.7.9 has requirement sqlalchemy>=2.0.15, but you'll have sqlalchemy 1.3.12 which is incompatible.
ERROR: openai-whisper 20230314 has requirement tiktoken==0.3.1, but you'll have tiktoken 0.4.0 which is incompatible.
ERROR: replicate 0.8.4 has requirement pydantic<2,>1, but you'll have pydantic 2.0.3 which is incompatible.
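These conflicts all trace back to pydantic 2.x being present while most of the pinned dependencies (anthropic, chromadb, elevenlabs, langchain, langsmith, replicate) still require pydantic < 2; the urllib3, sqlalchemy, fastapi and tiktoken mismatches come from similarly stale pre-installed packages. Installing the project's pinned requirements into a clean virtual environment (e.g. 'pip install -r requirements.txt' inside a fresh venv) usually clears them; as a targeted fix, 'pip install "pydantic<2"' removes the pydantic-related errors. This is general pip dependency advice, not project-specific guidance.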

can't have a voice conversation

I can't have a voice conversation on the official website, and I can't have a conversation on Windows either. What should I do?

[Refactor] Make SpeechToText.transcribe async

I noticed a blocking function call in the main event loop that may block the APIRouter's event loop. Link: https://github.com/faker2048/RealChar/blob/3f6450abc4e22d7a399e1a9e2cfcf8d0f0976462/realtime_ai_character/websocket_routes.py#L198
I'm not sure whether this needs to be refactored with a different approach,
but my proposal is to use run_in_executor.

Because of Python's global interpreter lock (GIL), it may be necessary to use a process pool executor when the user runs in local mode, since the Whisper model keeps the CPU busy on that thread during transcription.
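A rough illustration of the run_in_executor idea; the names here (speech_to_text, transcribe_async) are illustrative stand-ins, not the project's actual code:

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    # A process pool sidesteps the GIL for CPU-bound local Whisper transcription.
    # Caveat: with a ProcessPoolExecutor the callable and its arguments must be
    # picklable, which can rule out passing a bound method of a model-holding
    # object directly; a ThreadPoolExecutor (or None for the loop's default
    # executor) avoids that at the cost of staying subject to the GIL.
    _executor = ProcessPoolExecutor()

    async def transcribe_async(speech_to_text, audio_bytes, platform, prompt=''):
        loop = asyncio.get_running_loop()
        # Run the blocking transcribe() off the event loop so the websocket
        # handler keeps serving other clients while Whisper works.
        return await loop.run_in_executor(
            _executor, speech_to_text.transcribe, audio_bytes, platform, prompt)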

docker: pull access denied for realtime-ai-character

I followed the guidance to install it with Docker; an error occurs when I run 'python3 cli.py docker-run'.

Error info:
Running Docker image: realtime-ai-character...
Unable to find image 'realtime-ai-character:latest' locally
docker: Error response from daemon: pull access denied for realtime-ai-character, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
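For context: the log shows Docker falling back to a registry pull because no image named realtime-ai-character exists locally, and there is apparently no public image by that name to pull. Building the image locally first should let docker-run find it; assuming the Dockerfile sits at the repository root, the plain Docker equivalent is 'docker build -t realtime-ai-character .' run from that directory, after which 'python3 cli.py docker-run' should start the container.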

closed

ubuntu@VM-0-12-ubuntu:~/RealChar$ python cli.py run-uvicorn
Running uvicorn server...
Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/main.py", line 416, in main
    run(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/ubuntu/RealChar/realtime_ai_character/main.py", line 10, in <module>
    from realtime_ai_character.character_catalog.catalog_manager import CatalogManager
  File "/home/ubuntu/RealChar/realtime_ai_character/character_catalog/catalog_manager.py", line 6, in <module>
    from realtime_ai_character.utils import Singleton, Character
  File "/home/ubuntu/RealChar/realtime_ai_character/utils.py", line 4, in <module>
    from langchain.schema import AIMessage, BaseMessage, HumanMessage, SystemMessage
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/agents/__init__.py", line 2, in <module>
    from langchain.agents.agent import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 16, in <module>
    from langchain.agents.tools import InvalidTool
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/agents/tools.py", line 4, in <module>
    from langchain.callbacks.manager import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/callbacks/__init__.py", line 3, in <module>
    from langchain.callbacks.aim_callback import AimCallbackHandler
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/callbacks/aim_callback.py", line 4, in <module>
    from langchain.callbacks.base import BaseCallbackHandler
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/callbacks/base.py", line 7, in <module>
    from langchain.schema.agent import AgentAction, AgentFinish
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/schema/__init__.py", line 3, in <module>
    from langchain.schema.language_model import BaseLanguageModel
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/schema/language_model.py", line 8, in <module>
    from langchain.schema.output import LLMResult
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/schema/output.py", line 31, in <module>
    class ChatGeneration(Generation):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/langchain/schema/output.py", line 40, in ChatGeneration
    def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pydantic/deprecated/class_validators.py", line 222, in root_validator
    return root_validator()(*__args)  # type: ignore
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator
    raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use @root_validator with pre=False (the default) you MUST specify skip_on_failure=True. Note that @root_validator is deprecated and should be replaced with @model_validator.

For further information visit https://errors.pydantic.dev/2.0.3/u/root-validator-pre-skip

APIConnectionError: Error communicating with OpenAI

Hi buddy,
My server and client are already up, and my API key is correct (otherwise the server wouldn't start).
I received the error: Retrying langchain.chat_models.openai.acompletion_with_retry.._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
I use the Clash proxy because I am in China, and I do have access to ChatGPT. Please help me resolve this issue.
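A common cause with this setup is that the OpenAI client is not actually going through the local proxy. A minimal sketch of routing it through Clash before the server starts; the 127.0.0.1:7890 address is an assumption (Clash's usual HTTP port), and none of this is RealChar configuration:

    import os
    import openai  # openai < 1.0, which was current when this issue was filed

    # Proxy environment variables cover the synchronous, requests-based calls...
    os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
    os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

    # ...and openai.proxy covers the async aiohttp path used by
    # langchain.chat_models.openai.acompletion_with_retry.
    openai.proxy = "http://127.0.0.1:7890"

Exporting HTTP_PROXY/HTTPS_PROXY in the shell before launching the server achieves the same effect for the environment-variable part.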

chromadb problems


Issue Description:

I'm encountering an issue with the realtime_ai_character project related to a deprecated configuration of Chroma. When I try to run the project using python cli.py run-uvicorn, I receive the following error message:


ValueError: You are using a deprecated configuration of Chroma. Please pip install chroma-migrate and run chroma-migrate to upgrade your configuration. See https://docs.trychroma.com/migration for more information or join our discord at https://discord.gg/8g5FESbj for help!

Steps to Reproduce:

  1. Clone the repository and set up the environment as mentioned in the README.
  2. Run the command python cli.py run-uvicorn to start the server.

Expected Behavior:

The project should start without any errors and the server should be accessible.

Actual Behavior:

The error message mentioned above is displayed, indicating a deprecated configuration of Chroma.

Additional Information:

  • Python version: [3.9.2]
  • Chroma version: [0.4.0]
  • Other relevant details: [Linux on chromebook]

Possible Solutions:

I have tried running chroma-migrate as suggested in the error message, but it did not resolve the issue. I have also checked the Chroma configuration in my project files and environment variables, but everything seems to be in order. I'm unsure how to proceed further.

Any assistance or guidance on resolving this issue would be greatly appreciated.
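For context, Chroma 0.4.x raises this error when it is pointed at a store created with the pre-0.4 configuration style. A rough illustration of the two styles (illustrative code only, not RealChar's; the path and collection name are made up):

    import chromadb

    # Pre-0.4 configuration style (what chromadb 0.3.x used). With chromadb 0.4.x
    # installed, this exact call raises the "deprecated configuration of Chroma"
    # ValueError shown above:
    #
    #     from chromadb.config import Settings
    #     client = chromadb.Client(Settings(chroma_db_impl="duckdb+parquet",
    #                                       persist_directory="./chroma.db"))

    # chromadb 0.4.x equivalent -- a persistent on-disk client:
    client = chromadb.PersistentClient(path="./chroma.db")
    collection = client.get_or_create_collection("characters")

In practice the simpler fix is usually to install the chromadb version the project pins (0.3.29, per the dependency list above) instead of 0.4.0, rather than migrating the store.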

urllib.error.URLError

Executing python cli.py run-uvicorn reports urllib.error.URLError; the details are as follows:

> python cli.py run-uvicorn
Running uvicorn server...
2023-07-21 10:07:45,143 - __init__ - catalog_manager.py - INFO - Overwriting existing data in the chroma.
Created a chunk of size 627, which is longer than the specified 500
Created a chunk of size 703, which is longer than the specified 500
Created a chunk of size 701, which is longer than the specified 500
Created a chunk of size 512, which is longer than the specified 500
2023-07-21 10:07:47,630 - load_characters - catalog_manager.py - INFO - Loaded data for character: Elon Musk
Created a chunk of size 515, which is longer than the specified 500
Created a chunk of size 634, which is longer than the specified 500
Created a chunk of size 525, which is longer than the specified 500
Created a chunk of size 590, which is longer than the specified 500
Created a chunk of size 585, which is longer than the specified 500
Created a chunk of size 509, which is longer than the specified 500
Created a chunk of size 690, which is longer than the specified 500
2023-07-21 10:07:48,524 - load_characters - catalog_manager.py - INFO - Loaded data for character: Loki
Created a chunk of size 507, which is longer than the specified 500
2023-07-21 10:07:49,540 - load_characters - catalog_manager.py - INFO - Loaded data for character: Raiden Shogun And Ei
Created a chunk of size 535, which is longer than the specified 500
Created a chunk of size 744, which is longer than the specified 500
Created a chunk of size 808, which is longer than the specified 500
Created a chunk of size 747, which is longer than the specified 500
Created a chunk of size 641, which is longer than the specified 500
Created a chunk of size 595, which is longer than the specified 500
Created a chunk of size 748, which is longer than the specified 500
Created a chunk of size 873, which is longer than the specified 500
Created a chunk of size 602, which is longer than the specified 500
2023-07-21 10:07:50,595 - load_characters - catalog_manager.py - INFO - Loaded data for character: Bruce Wayne
Created a chunk of size 876, which is longer than the specified 500
Created a chunk of size 1072, which is longer than the specified 500
Created a chunk of size 666, which is longer than the specified 500
Created a chunk of size 946, which is longer than the specified 500
Created a chunk of size 829, which is longer than the specified 500
Created a chunk of size 786, which is longer than the specified 500
Created a chunk of size 847, which is longer than the specified 500
Created a chunk of size 674, which is longer than the specified 500
Created a chunk of size 791, which is longer than the specified 500
Created a chunk of size 571, which is longer than the specified 500
Created a chunk of size 530, which is longer than the specified 500
Created a chunk of size 505, which is longer than the specified 500
2023-07-21 10:07:51,787 - load_characters - catalog_manager.py - INFO - Loaded data for character: Steve Jobs
2023-07-21 10:07:51,788 - load_characters - catalog_manager.py - INFO - Loaded 5 characters: names ['Elon Musk', 'Loki', 'Raiden Shogun And Ei', 'Bruce Wayne', 'Steve Jobs']
2023-07-21 10:07:51,788 - __init__ - catalog_manager.py - INFO - Persisting data in the chroma.
2023-07-21 10:07:51,845 - __init__ - catalog_manager.py - INFO - Total document load: 243
2023-07-21 10:07:51,975 - __init__ - elevenlabs.py - INFO - Initializing [ElevenLabs Text To Speech] voices...
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/whisper/timing.py:58: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def backtrace(trace: np.ndarray):
2023-07-21 10:07:53,121 - __init__ - whisper.py - INFO - Loading [Local Whisper] model: [tiny]...
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1346, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1285, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1331, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1280, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 980, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1454, in connect
    self.sock = self._context.wrap_socket(self.sock,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 501, in wrap_socket
    return self.sslsocket_class._create(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1041, in _create
    self.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1310, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/main.py", line 416, in main
    run(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/Users/kevin/1-GR个人/16-XMDM项目代码/161-WDXM我的项目/1618-RealChar/realtime_ai_character/main.py", line 29, in <module>
    get_speech_to_text()
  File "/Users/kevin/1-GR个人/16-XMDM项目代码/161-WDXM我的项目/1618-RealChar/realtime_ai_character/audio/speech_to_text/__init__.py", line 14, in get_speech_to_text
    Whisper.initialize(use='local')
  File "/Users/kevin/1-GR个人/16-XMDM项目代码/161-WDXM我的项目/1618-RealChar/realtime_ai_character/utils.py", line 56, in initialize
    cls._instances[cls] = cls(*args, **kwargs)
  File "/Users/kevin/1-GR个人/16-XMDM项目代码/161-WDXM我的项目/1618-RealChar/realtime_ai_character/audio/speech_to_text/whisper.py", line 28, in __init__
    self.model = whisper.load_model(config.model)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/whisper/__init__.py", line 131, in load_model
    checkpoint_file = _download(_MODELS[name], download_root, in_memory)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/whisper/__init__.py", line 67, in _download
    with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 517, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 534, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1389, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1349, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)>
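The failure happens while whisper.load_model() downloads the model checkpoint: Python cannot verify the TLS certificate, which is typical of python.org macOS installs that never ran "Install Certificates.command", or of a proxy that re-signs HTTPS traffic. A minimal workaround sketch using certifi's CA bundle, executed before the model download is triggered (not part of RealChar):

    import ssl
    import urllib.request

    import certifi

    # Build an HTTPS opener that trusts certifi's CA bundle and install it
    # globally; whisper downloads checkpoints via urllib.request.urlopen, so it
    # picks this opener up.
    context = ssl.create_default_context(cafile=certifi.where())
    opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=context))
    urllib.request.install_opener(opener)

If the chain really is re-signed by a local proxy ("self signed certificate in certificate chain"), certifi will not help; the proxy's root certificate has to be trusted instead, for example via the SSL_CERT_FILE environment variable.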

Mic does not work on Edge

Hey @Shaunwei, first of all, super amazing project! Love it.

Just one thing: I'm on the latest Fedora Linux, and the mic works great on Chrome but not on Edge. I can select the right mic and the browser signals that it is listening, but nothing is picked up. So for now I'm using it with Chrome, but it would be great to have it work with Edge too.

I wish I knew how to debug this some more, don't hesitate to give me breadcrumbs.

Drop Down List Not Displaying List Items

Thank you for creating such an amazing open source project. 😄

I ran into a drop-down list issue after selecting the drop-down element on the home page. Not sure if it affects everyone. Details below 👇:

  1. I opened https://realchar.ai/ in my web browser, Google Chrome.

  2. When I open the "Select as audio input device:" drop-down list, I cannot see all 3 list items at once; the items are only displayed when I hover over them with the mouse.

  3. Instead, all 3 list items, including the default, should be displayed to the end user.

[Screenshot: Drop_down_element_issue]

add a wake word

Hi!

I'm interested in using RealChar as a virtual voice assistant on my Raspberry Pi. Is there any plan for a wake word that will initiate a conversation in the future?

Thanks,
Kamil

solved

Building wheel for pyaudio (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 /tmp/tmpwdru9wt_ build_wheel /tmp/tmpnj6lmomx
cwd: /tmp/pip-install-de1dv_m8/pyaudio
Complete output (21 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-38
creating build/lib.linux-x86_64-cpython-38/pyaudio
copying src/pyaudio/__init__.py -> build/lib.linux-x86_64-cpython-38/pyaudio
running build_ext
building 'pyaudio._portaudio' extension
creating build/temp.linux-x86_64-cpython-38
creating build/temp.linux-x86_64-cpython-38/src
creating build/temp.linux-x86_64-cpython-38/src/pyaudio
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/usr/local/include -I/usr/include -I/usr/include/python3.8 -c src/pyaudio/device_api.c -o build/temp.linux-x86_64-cpython-38/src/pyaudio/device_api.o
In file included from src/pyaudio/device_api.c:1:
src/pyaudio/device_api.h:7:10: fatal error: Python.h: No such file or directory
7 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
/tmp/pip-build-env-5t0d692e/overlay/lib/python3.8/site-packages/setuptools/dist.py:771: UserWarning: Usage of dash-separated 'index-url' will not be supported in future versions. Please use the underscore name 'index_url' instead
warnings.warn(
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1

ERROR: Failed building wheel for pyaudio
Failed to build pyaudio
ERROR: Could not build wheels for pyaudio which use PEP 517 and cannot be installed directly
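The underlying failure ("Python.h: No such file or directory") means the CPython development headers are missing; PyAudio also needs the PortAudio headers to compile. On Debian/Ubuntu, installing them first, e.g. 'sudo apt-get install python3-dev portaudio19-dev', and then re-running the pip install normally resolves this.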

Add locally hosted alternatives to roadmap

Not everyone has the money to pay for the ElevenLabs and ChatGPT APIs. It would be nice to add extensions like the ones used by the Oobabooga WebUI that offer alternative functionality running locally, for example a local LLM, local TTS, local STT, etc.

solved

ubuntu@VM-0-12-ubuntu:~/RealChar$ alembic upgrade head
Traceback (most recent call last):
  File "/usr/bin/alembic", line 11, in <module>
    load_entry_point('alembic==1.1.0.dev0', 'console_scripts', 'alembic')()
  File "/usr/lib/python3/dist-packages/alembic/config.py", line 540, in main
    CommandLine(prog=prog).main(argv=argv)
  File "/usr/lib/python3/dist-packages/alembic/config.py", line 534, in main
    self.run_cmd(cfg, options)
  File "/usr/lib/python3/dist-packages/alembic/config.py", line 511, in run_cmd
    fn(
  File "/usr/lib/python3/dist-packages/alembic/command.py", line 279, in upgrade
    script.run_env()
  File "/usr/lib/python3/dist-packages/alembic/script/base.py", line 475, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/usr/lib/python3/dist-packages/alembic/util/pyfiles.py", line 98, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/lib/python3/dist-packages/alembic/util/compat.py", line 174, in load_module_py
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/ubuntu/RealChar/alembic/env.py", line 1, in <module>
    from realtime_ai_character.models.user import User
ModuleNotFoundError: No module named 'realtime_ai_character'
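The ModuleNotFoundError usually just means the realtime_ai_character package is not importable from where alembic is invoked. Running the migration from the repository root with the project on the module path, for example 'PYTHONPATH=. alembic upgrade head', is the usual fix.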

Setup LLM

  • Take text input from a user
  • Select a companion with companion history
  • Store history somewhere
  • Create text back to the user, and stay in character (a rough sketch follows below)
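One way the outline above could look, as an illustrative sketch only (not RealChar's implementation); it uses langchain's chat model and message schema, which the project already depends on:

    from langchain.chat_models import ChatOpenAI
    from langchain.schema import AIMessage, HumanMessage, SystemMessage

    llm = ChatOpenAI(temperature=0.7)   # needs OPENAI_API_KEY in the environment
    histories = {}                      # (user_id, companion) -> message list, kept in memory for now

    def chat(user_id: str, companion: str, system_prompt: str, text: str) -> str:
        history = histories.setdefault((user_id, companion), [])
        # The system prompt carries the companion's persona so replies stay in character.
        messages = [SystemMessage(content=system_prompt)] + history + [HumanMessage(content=text)]
        reply = llm(messages).content
        history += [HumanMessage(content=text), AIMessage(content=reply)]
        return reply

Swapping the in-memory dict for a database table would cover the "store history somewhere" item.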

image/video support?

It would be great if it had an eye (the ability to see), so that whatever I am seeing in the physical world, I could ask RealChar queries related to it.

Init backend code

  • FastAPI
  • database - track interactions
  • allow sending text and receiving text based on conversation history (in memory)
  • has individual users (a minimal sketch follows below)
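An illustrative-only sketch of this outline (not the project's code): a FastAPI app with per-user, in-memory conversation history behind a single endpoint:

    from collections import defaultdict

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    history = defaultdict(list)  # user_id -> [(role, text), ...]; swap for a DB to track interactions

    class ChatRequest(BaseModel):
        user_id: str
        text: str

    @app.post("/chat")
    def chat(req: ChatRequest) -> dict:
        history[req.user_id].append(("user", req.text))
        reply = f"echo: {req.text}"  # placeholder for the LLM call
        history[req.user_id].append(("assistant", reply))
        return {"reply": reply}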

AI that understands user-to-user programming styles

Is it possible to build in (or, if it's already available, how do I tailor it to me individually) a way for the character to understand individual programming 'techniques' and how people's stylistic and foundational approaches to programming large blocks of code differ? I'm trying to tailor the bot to this, but I'm running into issues having it understand my 'habits' and getting the bot to stick with an idea.

closed

How do I run on a remote IP instead of localhost? Which file do I need to modify? Thanks.
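For anyone hitting this: the server has to bind to 0.0.0.0 instead of 127.0.0.1, and the client/browser has to use the machine's IP instead of localhost. Assuming the ASGI app is exposed as realtime_ai_character.main:app (an inference from the tracebacks above, not a confirmed path), something like 'uvicorn realtime_ai_character.main:app --host 0.0.0.0 --port 8000' makes the server reachable at http://<server-ip>:8000 from other machines, firewall rules permitting.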

change language

Hi!

Is it possible to change the TTS-language?

Thanks,
Kamil
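Partly, yes: ElevenLabs itself can speak other languages when a multilingual model is selected. A rough sketch against the elevenlabs 0.2.x Python API that the project depends on; this is not RealChar's own configuration, and the voice name is just an example:

    from elevenlabs import generate, play, set_api_key

    set_api_key("YOUR_ELEVENLABS_KEY")  # placeholder

    # "eleven_multilingual_v1" lets the same voice speak non-English text.
    audio = generate(
        text="Hallo! Schön, mit dir zu sprechen.",
        voice="Bella",
        model="eleven_multilingual_v1",
    )
    play(audio)

How that model choice is surfaced in RealChar's own TTS settings (if at all) would need to be checked in the ElevenLabs integration code.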

Consider supabase for auth layer

This would work well. It has all the social logins and works for native iOS / React Native too.
www.supabase.io

Uses JWT

  • open source and free for under 512 MB

It will also allow realtime communication; off the top of my head, you could show the number of live connections to the platform in realtime.

This repo has a Python backend + frontend with Supabase integration:
https://github.com/StanGirard/quivr

https://github.com/search?q=repo%3AStanGirard%2Fquivr%20supabase&type=code

You may be able to cherry-pick all of this:
https://github.com/StanGirard/quivr/tree/8125d0858c474636e3ea758516f92509559b63fc/scripts

This will give access to Supabase / a vector DB.

ConnectionRefusedError: [WinError 1225] The remote computer refused the network connection

Hi all,

Win 11, PowerShell and VS Code.

I created a conda env with python=3.10.11.
Installed requirements.
Added API keys.
Ran sqlite3 test.db "VACUUM;"
Ran alembic upgrade head
Ran python cli.py run-uvicorn
Navigated to http://localhost:8000/ and talked to Elon about a job placement interview at SpaceX.

I get an error when running python client/cli.py. The server still works without it, but I'm wondering how to fix this. I am a novice, and most likely I am doing something wrong.

(RealChar) PS C:\Users\david\OneDrive\Projects_Mac\Projects\GPT\RealChar> python client/cli.py
Traceback (most recent call last):
  File "C:\Users\david\OneDrive\Projects_Mac\Projects\GPT\RealChar\client\cli.py", line 191, in <module>
    asyncio.run(main(url))
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Users\david\OneDrive\Projects_Mac\Projects\GPT\RealChar\client\cli.py", line 182, in main
    await task
  File "C:\Users\david\OneDrive\Projects_Mac\Projects\GPT\RealChar\client\cli.py", line 158, in start_client
    async with websockets.connect(uri) as websocket:
  File "C:\Users\david\anaconda3\envs\RealChar\lib\site-packages\websockets\legacy\client.py", line 637, in __aenter__
    return await self
  File "C:\Users\david\anaconda3\envs\RealChar\lib\site-packages\websockets\legacy\client.py", line 655, in __await_impl_timeout__
    return await self.__await_impl__()
  File "C:\Users\david\anaconda3\envs\RealChar\lib\site-packages\websockets\legacy\client.py", line 659, in __await_impl__
    _transport, _protocol = await self._create_connection()
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\base_events.py", line 1081, in create_connection
    raise exceptions[0]
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\base_events.py", line 1060, in create_connection
    sock = await self._connect_sock(
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\base_events.py", line 969, in _connect_sock
    await self.sock_connect(sock, address)
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\proactor_events.py", line 709, in sock_connect
    return await self._proactor.connect(sock, address)
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\windows_events.py", line 826, in _poll
    value = callback(transferred, key, ov)
  File "C:\Users\david\anaconda3\envs\RealChar\lib\asyncio\windows_events.py", line 613, in finish_connect
    ov.getresult()
ConnectionRefusedError: [WinError 1225] The remote computer refused the network connection

can't run due to chroma issue

File "D:\Anaconda3\lib\site-packages\langchain\vectorstores\chroma.py", line 184, in add_texts
self._collection.upsert(
AttributeError: 'Collection' object has no attribute 'upsert'
PersistentDuckDB del, about to run persist
Persisting DB to disk, putting it in the save folder ./chroma.db
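Collection.upsert only exists in relatively recent chromadb 0.3.x releases, so this AttributeError most likely means an older chromadb is installed than the langchain version expects. Upgrading to the version the project pins (0.3.29, per the dependency list above), e.g. 'pip install chromadb==0.3.29', should make add_texts work again.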

openai whisper doesn't work

I set SPEECH_TO_TEXT_USE to OPENAI_WHISPER in the .env file, and set OPENAI_API_KEY and OPEN_AI_WHISPER_API_KEY to my API key. I use Chrome, but it does not work: the character speaks well, but when I speak in Chinese it still cannot transcribe it. In the end I even removed the last three lines of the following code in whisper.py, and speech_to_text still worked, so maybe it's a bug.

    def transcribe(self, audio_bytes, platform, prompt=''):
        if platform == 'web':
            audio = self._convert_webm_to_wav(audio_bytes)
        else:
            audio = sr.AudioData(audio_bytes, 44100, 2)
        if self.use == 'local':
            return self._transcribe(audio, prompt)
        elif self.use == 'api':
            return self._transcribe_api(audio, prompt)
