guangzhengli / chatfiles

Document Chatbot — multiple files. Powered by GPT / Embedding.

License: MIT License

TypeScript 96.25% JavaScript 1.93% CSS 1.36% Dockerfile 0.47%
chatbot chatgpt chatgpt-api chatpdf chatfile


chatfiles's Issues

WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.

This error is reported when starting the service for the first time.
What should I do?
The full error output follows.

PS D:\KKK_usual tools\ChatFiles-main> docker compose up
[+] Running 2/0
Container chatfiles     Created  0.0s
Container chatfiles-ui  Created  0.0s
Attaching to chatfiles, chatfiles-ui
chatfiles | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
chatfiles-ui |
chatfiles-ui | > [email protected] start
chatfiles-ui | > next start
chatfiles-ui |
chatfiles-ui | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
chatfiles | WARNING:llama_index.llm_predictor.base:Unknown max input size for gpt-3.5-turbo, using defaults.
chatfiles | * Serving Flask app 'server'
chatfiles | * Debug mode: off
chatfiles | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
chatfiles | * Running on all addresses (0.0.0.0)
chatfiles | * Running on http://127.0.0.1:5000
chatfiles | * Running on http://172.19.0.2:5000
chatfiles | INFO:werkzeug:Press CTRL+C to quit

Local run failed but works well with your Docker image

I get this error when I run locally, but it works well when using your Docker image.

Unhandled Runtime Error
Error: Objects are not valid as a React child (found: object with keys {error, success}). If you meant to render a collection of children, use an array instead.

Does anyone have a clue about this?

GPT responds very slowly, and larger PDF files take a long time to load

For the first question it replied with the title of one of the cited papers; the second and third questions went essentially unanswered, which seems quite dumb. I'm not sure whether I'm using it wrong.

2023-04-11 21:07:23 chatfiles-ui | beginning handler
2023-04-11 21:07:23 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:07:23] "POST /upload HTTP/1.1" 200 -
2023-04-11 21:07:38 chatfiles-ui | beginning handler
2023-04-11 21:07:38 chatfiles-ui | handler chatfile query: can you describe this article for me?? garnet 配位数计算.json
2023-04-11 21:07:38 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:07:38] "GET /query?message=can%20you%20describe%20this%20article%20for%20me??&indexName=garnet%20配位数计算.json HTTP/1.1" 404 -
2023-04-11 21:08:01 chatfiles-ui | beginning handler
2023-04-11 21:08:10 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
2023-04-11 21:08:10 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 33459 tokens
2023-04-11 21:08:10 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:08:10] "POST /upload HTTP/1.1" 200 -
2023-04-11 21:10:35 chatfiles-ui | beginning handler
2023-04-11 21:10:35 chatfiles-ui | handler chatfile query: can you describe this article for me? Garnets geochemical and exploration implication
2023-04-11 21:11:37 chatfiles | WARNING:/usr/local/lib/python3.8/site-packages/langchain/chat_models/openai.py:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
2023-04-11 21:12:08 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 460 tokens
2023-04-11 21:12:08 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 8 tokens
2023-04-11 21:12:08 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:12:08] "GET /query?message=can%20you%20describe%20this%20article%20for%20me?&indexName=Garnets%20geochemical%20and%20exploration%20implication HTTP/1.1" 200 -
2023-04-11 21:13:15 chatfiles-ui | beginning handler
2023-04-11 21:13:20 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
2023-04-11 21:13:20 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 21617 tokens
2023-04-11 21:13:20 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:13:20] "POST /upload HTTP/1.1" 200 -
2023-04-11 21:13:31 chatfiles-ui | beginning handler
2023-04-11 21:13:31 chatfiles-ui | handler chatfile query: can you describe this article for me? garnet 配位数
2023-04-11 21:14:02 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 321 tokens
2023-04-11 21:14:02 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 8 tokens
2023-04-11 21:14:02 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:14:02] "GET /query?message=can%20you%20describe%20this%20article%20for%20me?&indexName=garnet%20配位数 HTTP/1.1" 200 -
2023-04-11 21:14:59 chatfiles-ui | beginning handler
2023-04-11 21:14:59 chatfiles-ui | handler chatfile query: can you translate the abstract part into Chinese? garnet 配位数
2023-04-11 21:15:26 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 309 tokens
2023-04-11 21:15:26 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 9 tokens
2023-04-11 21:15:26 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:15:26] "GET /query?message=can%20you%20translate%20the%20abstract%20part%20into%20Chinese?&indexName=garnet%20配位数 HTTP/1.1" 200 -
2023-04-11 21:17:30 chatfiles-ui | beginning handler
2023-04-11 21:17:30 chatfiles-ui | handler chatfile query: summary the Introduction part garnet 配位数
2023-04-11 21:18:04 chatfiles | INFO:openai:error_code=None error_message='That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID 9292f425210eb6ff2b6db4990b291e20 in your message.)' error_param=None error_type=server_error message='OpenAI API error received' stream_error=False
2023-04-11 21:18:04 chatfiles | WARNING:/usr/local/lib/python3.8/site-packages/langchain/chat_models/openai.py:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID 9292f425210eb6ff2b6db4990b291e20 in your message.).
2023-04-11 21:19:00 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 3944 tokens
2023-04-11 21:19:00 chatfiles | INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 4 tokens
2023-04-11 21:19:00 chatfiles | INFO:werkzeug:172.18.0.3 - - [11/Apr/2023 13:19:00] "GET /query?message=summary%20the%20Introduction%20part&indexName=garnet%20配位数 HTTP/1.1" 200 -

[Deploying to Fly.io] Error failed to fetch an image or build from source: app does not have a Dockerfile or buildpacks configured

(base) u20@u20:/ChatFiles$ docker --version
Docker version 23.0.5, build bc4487a
(base) u20@u20:/ChatFiles$ ../flyctl launch
Update available 0.0.475 -> v0.0.559.
Run "flyctl version update" to upgrade.
An existing fly.toml file was found for app chatfiles2
App is not running, deploy...
==> Building image
Remote builder fly-builder-withered-forest-5301 ready
Error failed to fetch an image or build from source: app does not have a Dockerfile or buildpacks configured. See https://fly.io/docs/reference/configuration/#the-build-section

Streaming Response

I have tried several approaches to get a streaming response, but with no success. Can anyone share how?
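One common pattern for this is server-sent events (SSE): the Flask backend wraps a token generator in a streaming response and the UI consumes it incrementally. A minimal framework-free sketch of the SSE framing is below; it is not the project's actual code, and the Flask wiring is only indicated in a comment.

```python
# Sketch only: SSE framing for a streamed chat reply.
# The token source here is a stub; in chatfiles it would be the
# LLM's streaming output.

def sse_format(token: str) -> str:
    """Frame one token as a server-sent-events message."""
    return f"data: {token}\n\n"

def stream_tokens(tokens):
    """Yield SSE-framed tokens, then an end-of-stream marker."""
    for token in tokens:
        yield sse_format(token)
    yield "data: [DONE]\n\n"

# With Flask (which chatfiles' server.py uses) this generator would be
# returned as: Response(stream_tokens(llm_stream), mimetype="text/event-stream")

if __name__ == "__main__":
    for chunk in stream_tokens(["Hello", " world"]):
        print(repr(chunk))
```

The UI side would then read the response body chunk by chunk instead of awaiting a single JSON payload.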

Docker image `amd64` support

This project is interesting; however, the current Docker image supports only arm/v8, so to use it on an amd64 machine you have to build it manually, which is inconvenient.
Suggestion: use a GitHub Action to build the image.
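The suggestion above is typically done with Docker Buildx in CI. A hypothetical workflow sketch follows; the branch, registry, secrets, and image tag are assumptions for illustration, not the project's actual CI configuration.

```yaml
# Hypothetical workflow sketch: build and push a multi-arch image.
name: docker
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2       # emulation for cross-builds
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/arm64    # cover both architectures
          push: true
          tags: guangzhengli/chatfiles:latest   # assumed image name
```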

A few product suggestions

Support multiple users.
Support listing and managing the indexes of already-uploaded files.
Support reading and analyzing remote files by simply entering a URL; this should actually be easy.
Support token usage statistics and surface detailed OpenAI errors, to make debugging easier.

problem when i send code to the chat

When I ask about a piece of code I get this. Why?
handler chatfile query: cual seria la ultima que menciona scrapy-think-splash-rich-fastapi-merged
beginning handler
handler chatfile query: puedes revisar este codigo? import scrapyfrom scrapy_splash import SplashRequestfrom rich.prompt import Promptfrom rich.table import Tablefrom rich.console import Consoleclass MySpider(scrapy.Spider): name = 'myspider' def start_requests(self): urls = Prompt.ask("Enter one or more URLs separated by commas: ") urls = urls.split(",") for url in urls: if self.should_use_splash(url): yield SplashRequest(url, self.parse, args={'wait': 1}, endpoint='render.html', splash_headers={ 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3' } ) else: yield scrapy.Request(url, self.parse) def should_use_splash(self, url): if "javascript" in url: return True undefined
API resolved without sending a response for /api/query?message=puedes%20revisar%20este%20codigo?%20import%20scrapyfrom%20scrapy_splash%20import%20SplashRequestfrom%20rich.prompt%20import%20Promptfrom%20rich.table%20import%20Tablefrom%20rich.console%20import%20Consoleclass%20MySpider(scrapy.Spider):%20%20%20%20name%20=%20%27myspider%27%20%20%20%20def%20start_requests(self):%20%20%20%20%20%20%20%20urls%20=%20Prompt.ask(%22Enter%20one%20or%20more%20URLs%20separated%20by%20commas:%20%22)%20%20%20%20%20%20%20%20urls%20=%20urls.split(%22,%22)%20%20%20%20%20%20%20%20for%20url%20in%20urls:%20%20%20%20%20%20%20%20%20%20%20%20if%20self.should_use_splash(url):%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20yield%20SplashRequest(url,%20self.parse,%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20args={%27wait%27:%201},%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20endpoint=%27render.html%27,%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20splash_headers={%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%27user-agent%27:%20%27Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/58.0.3029.110%20Safari/537.3%27%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20}%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20)%20%20%20%20%20%20%20%20%20%20%20%20else:%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20yield%20scrapy.Request(url,%20self.parse)%20%20%20%20def%20should_use_splash(self,%20url):%20%20%20%20%20%20%20%20if%20%22javascript%22%20in%20url:%20%20%20%20%20%20%20%20%20%20%20%20return%20True%20%20%20%20%20%20%20%20, this may result in stalled requests.
^C

Can a custom API endpoint be added?

For certain reasons the OpenAI API cannot be accessed directly from mainland China. If a custom API base URL could be configured, relay services such as API2D could be used.

I tried adding a parameter in docker-compose and server.py, but it doesn't seem to take effect 😶

image

image
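For reference, the openai Python library of that era (0.x) reads its base URL from `openai.api_base`, which defaults from the `OPENAI_API_BASE` environment variable. A minimal sketch of how a server could resolve such an override follows; the helper name and the relay URL in the example are assumptions for illustration, not the project's actual configuration.

```python
import os

# Sketch: resolve an optional API relay base URL from the environment.
# OPENAI_API_BASE is the variable the openai 0.x library itself honors;
# the helper below is a hypothetical illustration, not chatfiles code.
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base() -> str:
    """Return the relay base URL if configured, else the official endpoint."""
    return os.environ.get("OPENAI_API_BASE", DEFAULT_API_BASE)

# In server.py this value would be assigned before any API call, e.g.:
#   openai.api_base = resolve_api_base()

if __name__ == "__main__":
    os.environ["OPENAI_API_BASE"] = "https://openai.api2d.net/v1"  # example relay
    print(resolve_api_base())
```

With docker-compose, the variable would additionally have to be passed into the `chatfiles` container's environment for the override to reach the Python process, which may be why editing only one file appears to have no effect.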

how to use gpt-4?

Hi, nice work!

Sadly, however, I've not been able to change the model to gpt-4.
If I change this line in llm.py:

llm_predictor = LLMPredictor(llm=ChatOpenAI(
    temperature=0.2, model_name="gpt-4"))

I see this error in the interface: JSON.parse: unexpected character at line 1 column 1 of the JSON data,
and this in the shell:

error - FetchError: request to http://127.0.0.1:5000/upload failed, reason: connect ECONNREFUSED 127.0.0.1:5000
    at ClientRequest.<anonymous> (file:///C:/Users/sbene/Projects/ChatFiles-main/chatfiles-ui/node_modules/node-fetch/src/index.js:108:11)
    at ClientRequest.emit (node:events:525:35)
    at Socket.socketErrorListener (node:_http_client:494:9)
    at Socket.emit (node:events:513:28)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  type: 'system',
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  erroredSysCall: 'connect',
  page: '/api/upload'
}

Local run fails; the connection to openaipublic.blob.core.windows.net appears to time out

Error log:
    from llm import get_index_by_index_name
  File "/Users/axzq/code/python/ChatFiles/chatfiles/llm.py", line 18, in <module>
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/service_context.py", line 71, in from_defaults
    embed_model = embed_model or OpenAIEmbedding()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/embeddings/openai.py", line 209, in __init__
    super().__init__(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/embeddings/base.py", line 55, in __init__
    self._tokenizer: Callable = globals_helper.tokenizer
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/utils.py", line 38, in tokenizer
    enc = tiktoken.get_encoding("gpt2")
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/registry.py", line 63, in get_encoding
    enc = Encoding(**constructor())
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken_ext/openai_public.py", line 11, in gpt2
    mergeable_ranks = data_gym_to_mergeable_bpe_ranks(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/load.py", line 90, in data_gym_to_mergeable_bpe_ranks
    encoder_json = json.loads(read_file_cached(encoder_json_file))
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/load.py", line 46, in read_file_cached
    contents = read_file(blobpath)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/load.py", line 24, in read_file
    return requests.get(blobpath).content
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/adapters.py", line 563, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/encoder.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))

Issue uploading the file

Hi,
I tried to upload a PDF file. There was no error during the upload, but after the process finished, instead of showing the name of the file at the top it shows some unparsed HTML code. Please check the attached images:

image

image

Deploying Chatfiles in Azure

Hi @guangzhengli, is it possible to deploy chatfiles in an Azure Web App container? I'm trying to deploy it but I'm getting a 404 error.

I'm using this docker-compose config in the Azure container config:

version: '2.2'
services:
  chatfiles:
    image: newpdfairegistry.azurecr.io/chatfiles-chatfiles:latest
    container_name: chatfiles
    ports:
      - 5000:5000
  chatfiles-ui:
    image: newpdfairegistry.azurecr.io/chatfiles-chatfiles-ui:latest
    container_name: chatfiles-ui
    ports:
      - 3000:3000
      - CHAT_FILES_SERVER_HOST=https://chatfiles3.azurewebsites.net:5000
      - NEXT_PUBLIC_CHAT_FILES_MAX_SIZE=0
    depends_on:
      - chatfiles

Multi-file support

I see this has already been implemented? But how do I use it in the latest version? Currently I can still only upload one file at a time.

Uploading a file reports a Node objects / Document objects error

image

After installing the dependencies listed in requirements.txt, uploading a file reports:

File: Error: The constructor now takes in a list of Node objects. Since you are passing in a list of Document objects, please use from_documents instead.

I don't know how to fix this; could you please check whether I did something wrong?
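This error message means the index constructor now expects Node objects, and lists of Document objects must go through the `from_documents` classmethod instead. A sketch of the fix follows; the import path matches llama_index releases of that era (~0.5), and the import is guarded so the snippet loads even without the library installed.

```python
# Sketch of the fix for "The constructor now takes in a list of Node objects".
# The import is guarded so this illustration loads without llama_index.
try:
    from llama_index import GPTSimpleVectorIndex
except ImportError:
    GPTSimpleVectorIndex = None

def build_index(documents):
    """Build a vector index from a list of llama_index Document objects.

    Old, now-failing call:  GPTSimpleVectorIndex(documents)
    Fixed call:             GPTSimpleVectorIndex.from_documents(documents)
    """
    if GPTSimpleVectorIndex is None:
        raise RuntimeError("llama_index is not installed")
    return GPTSimpleVectorIndex.from_documents(documents)
```

If the project's pinned llama_index version predates this API change, upgrading the dependency past the version in requirements.txt would also require this call-site change.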

change language?

Works great!!! Thanks, just a few questions:

  1. Is it possible to change the language used for answers?
  2. Does it use a database?
  3. Is it possible to add a cache?
  4. How can I control the temperature?

Thanks a lot.

OpenAI API Key Required & Error fetching models.

I'm running chatfiles-ui and chatfiles separately on my local machine.
I've already created .env under /chatfiles and set OPENAI_API_KEY=XXXXX,
but after starting, the UI still shows an "OpenAI API Key Required" prompt.
After entering the key in the bottom left, it still shows:

Error fetching models.
Make sure your OpenAI API key is set in the bottom left of the sidebar.
If you completed this step, OpenAI may be experiencing issues.

Port 5000 already in use

First time using Docker. After docker compose up it reported that port 5000 was in use, so I changed the port
image
but then uploading a file failed
image
How should I fix this?
image

local run failed

I was trying to run the app locally and got this:
image

followed each step as specified:
cd chatfiles-ui
npm install
npm run dev
cd chatfiles
python3 server.py

My OS is Ubuntu 20.04.

local run failed

File: Error: The constructor now takes in a list of Node objects. Since you are passing in a list of Document objects, please use from_documents instead.

After I uploaded a PDF file, I got this. Any help? Thanks!

Proxy domain is set, but access fails.

I've already set the proxy domain in the file, so why does access still fail when started via Docker? It works fine when started locally. Very strange.
export const OPENAI_API_HOST =
process.env.OPENAI_API_HOST || 'balabala.xyz';

Error: Client network socket disconnected before secure TLS connection was established

chatfiles-ui | [TypeError: fetch failed] {
chatfiles-ui | cause: [Error: Client network socket disconnected before secure TLS connection was established] {
chatfiles-ui | code: 'ECONNRESET',
chatfiles-ui | path: undefined,
chatfiles-ui | host: 'api.openai.com',
chatfiles-ui | port: 443,
chatfiles-ui | localAddress: undefined

macOS 13.2.1 (22D68)

Index file does not exist

No matter how I upload the file, restart the service, or rebuild the Docker container, it always responds: Index file does not exist.

Running docker-compose ECONNREFUSED

I got this error when uploading a file after successfully running docker-compose:

2023-04-25 18:37:07 chatfiles-ui | FetchError: request to http://127.0.0.1:5000/upload failed, reason: connect ECONNREFUSED 127.0.0.1:5000
2023-04-25 18:37:07 chatfiles-ui |     at ClientRequest.<anonymous> (file:///app/node_modules/node-fetch/src/index.js:108:11)
2023-04-25 18:37:07 chatfiles-ui |     at ClientRequest.emit (node:events:525:35)
2023-04-25 18:37:07 chatfiles-ui |     at Socket.socketErrorListener (node:_http_client:495:9)
2023-04-25 18:37:07 chatfiles-ui |     at Socket.emit (node:events:513:28)
2023-04-25 18:37:07 chatfiles-ui |     at emitErrorNT (node:internal/streams/destroy:151:8)
2023-04-25 18:37:07 chatfiles-ui |     at emitErrorCloseNT (node:internal/streams/destroy:116:3)
2023-04-25 18:37:07 chatfiles-ui |     at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
2023-04-25 18:37:07 chatfiles-ui |   type: 'system',
2023-04-25 18:37:07 chatfiles-ui |   errno: 'ECONNREFUSED',
2023-04-25 18:37:07 chatfiles-ui |   code: 'ECONNREFUSED',
2023-04-25 18:37:07 chatfiles-ui |   erroredSysCall: 'connect'

I'd like to know how the author handles the case where the token count exceeds 1024. Are tokens fed into the program in separate batches?

2023-04-17 10:06:55 chatfiles | Token indices sequence length is longer than the specified maximum sequence length for this model (3390 > 1024). Running this sequence through the model will result in indexing errors
2023-04-17 10:06:58 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
2023-04-17 10:06:58 chatfiles | INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 10086 tokens
2023-04-17 10:06:58 chatfiles | INFO:werkzeug:172.18.0.2 - - [17/Apr/2023 02:06:58] "POST /upload HTTP/1.1" 200 -
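The warning in this log comes from a tokenizer whose model cap is 1024; the usual remedy is to split the document into chunks that each stay under the limit before indexing (llama_index does this internally with its text splitters). A framework-free sketch of the idea, using whitespace words as a stand-in for real tokenizer tokens:

```python
# Sketch: split text into chunks that each stay under a token budget.
# Whitespace "words" stand in for real tokenizer tokens for illustration;
# a production splitter would count tokens with the model's tokenizer.

def chunk_text(text: str, max_tokens: int = 1024):
    """Greedily pack words into chunks of at most max_tokens words each."""
    words = text.split()
    chunks = []
    for start in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[start:start + max_tokens]))
    return chunks

if __name__ == "__main__":
    doc = "word " * 3390          # same size as the sequence in the log above
    pieces = chunk_text(doc, 1024)
    print(len(pieces))            # 4 chunks: 1024 + 1024 + 1024 + 318
```

Each chunk is then embedded separately, which is why the upload still succeeds (HTTP 200) despite the warning.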

docker-compose up error

chatfiles | standard_init_linux.go:211: exec user process caused "exec format error"
chatfiles-ui | standard_init_linux.go:211: exec user process caused "exec format error"
