
kuafuai / devopsgpt

6.4K stars · 830 forks · 10.9 MB

Multi agent system for AI-driven software development. Combine LLM with DevOps tools to convert natural language requirements into working software. Supports any development language and extends the existing code.

Home Page: https://www.kuafuai.net

License: Other

Python 17.76% Smarty 0.28% HTML 43.96% CSS 6.60% JavaScript 22.63% Batchfile 0.20% Shell 0.25% Dockerfile 0.04% Mako 0.04% Less 8.25%

devopsgpt's People

Contributors

bdoycn, booboosui, cclauss, charging-kuafuai, eltociear, ericxinwu, inceabdullah, mikkiyang, richardkelly2014, sonicrang, suibber, woshilmx, yakejiang


devopsgpt's Issues

Failed to access GPT

I get this on both Windows and Docker.

Error: Failed to access GPT, please check whether your network can connect to GPT and terminal proxy is running properly.

Some files fail to write or check.

Editing the application list has a bug

newService = ApplicationService.create_service(appID, service["service_name"], service["service_git_path"], service["service_workflow"], service["service_role"], service["service_language"], service["service_framework"], service["service_database"], service["service_api_type"], service["service_api_location"], service["service_container_name"], service["service_container_group"], service["service_region"], service["service_public_ip"], service["service_security_group"], service["service_cd_subnet"], service["service_struct_cache"])
KeyError: 'service_name'

It ends up creating a new service,
but I only wanted to modify the existing one.
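A hedged sketch of one way to avoid both symptoms (the KeyError on a missing field, and an edit turning into a create): branch on whether the payload carries an existing service id, and use `dict.get` for fields that may be absent. The names `upsert_service` and `service_id` are illustrative assumptions, not DevOpsGPT's actual API:

```python
# Illustrative update-or-create guard (assumed names, not project code):
# if the payload carries an id of an existing service, update that record
# in place instead of creating a new one.
def upsert_service(store: dict, payload: dict) -> str:
    sid = payload.get("service_id")          # .get avoids KeyError on edits
    if sid is not None and sid in store:
        store[sid].update(payload)           # edit path: modify in place
        return "updated"
    new_id = max(store, default=0) + 1       # create path: allocate a new id
    store[new_id] = dict(payload, service_id=new_id)
    return "created"
```

With this shape, an edit request that omits optional fields neither raises nor duplicates the service.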

Docker deploy

I noticed this image seems to use a mainland-China PyPI mirror.
Could it be switched to an international index, or made configurable between international and domestic mirrors?

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f091fb04400>, 'Connection to pypi.tuna.tsinghua.edu.cn timed out. (connect timeout=15)')': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
ERROR: Could not find a version that satisfies the requirement flask-sqlalchemy (from versions: none)
ERROR: No matching distribution found for flask-sqlalchemy

Use with a local model

What would it take to modify it for use with gpt4all, or with another local-model approach?

What technical modification suggestions would you have?

Expecting value: line 1 column 2 (char 1). The backend service returned an exception; please contact the administrator to check the terminal service logs and the browser console errors.

"question": "结果是否需要可视化展示?",
"reasoning": "可视化需要前端配合开发",
"answer_sample": "不需要可视化"

}
]

Please review the questions above; if anything needs to be added or changed, please point it out. Thank you very much!
Traceback (most recent call last):
File "F:\oyworld\GPT\DevOpsGPT\backend\app\controllers\common.py", line 10, in decorated_function
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\controllers\step_requirement.py", line 23, in clarify
msg, success = clarifyRequirement(userPrompt, globalContext, appArchitecture)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\pkgs\prompt\prompt.py", line 19, in clarifyRequirement
return obj.clarifyRequirement(userPrompt, globalContext, appArchitecture)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\pkgs\prompt\requirement_basic.py", line 80, in clarifyRequirement
return json.loads(message), success
^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)
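`Expecting value: line 1 column 2 (char 1)` at `json.loads(message)` usually means the model's reply was not pure JSON — extra prose or Markdown fences wrapped around the payload. A tolerant extraction step before `json.loads` can work around this; the helper below is an illustrative sketch, not code from the project:

```python
import json
import re

# Illustrative helper: pull the first JSON object or array out of an LLM
# reply that may wrap it in prose or ``` fences.
def extract_json(text: str):
    # Strip Markdown code-fence markers if present.
    text = re.sub(r"```(?:json)?", "", text)
    # Locate the first '{' or '[' as the start of the payload.
    start = min((i for i in (text.find("{"), text.find("[")) if i != -1),
                default=-1)
    if start == -1:
        raise ValueError("no JSON payload found in model reply")
    # raw_decode tolerates trailing text after the JSON value.
    obj, _ = json.JSONDecoder().raw_decode(text[start:])
    return obj
```

This is deliberately lenient; a stricter alternative is to re-prompt the model when parsing fails.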

Comparative advantage and unique features of DevOpsGPT over MetaGPT

Hello,

I came across the DevOpsGPT project recently, and I must say, I am intrigued by the scope and potential of your solution. I have observed its capability to automate software development by leveraging Language Models in concert with DevOps tools to convert language-based requirements into functional software. This unique approach can significantly enhance development efficiency and reduce communication overheads.

However, I also came across a similar project named MetaGPT, which emphasizes multiple GPT roles to tackle complex tasks in software development. It has the ability to take single line requirements as inputs and output user stories, competitive analysis, requirements, data structures, APIs, documents etc., essentially providing all processes of a software company along with carefully orchestrated SOPs.

I am thus interested in understanding what competitive advantage DevOpsGPT has over MetaGPT. Specifically, what are the core differences and unique selling propositions that make DevOpsGPT stand out? Also, how does DevOpsGPT cope with complex tasks compared to the multi-role approach of MetaGPT?

Any insights into this comparison would help us make a more informed decision about which AI-infused DevOps solution would better suit our requirements.

Thank you for your time!

Hoping to get in touch, and to add support for InternLM

Dear DevOpsGPT developers: I am 尖米, a developer and volunteer in the InternLM community. Your open-source work has been very inspiring to me, and I would like to discuss the feasibility of, and a path toward, implementing DevOpsGPT on top of InternLM. My WeChat ID is mzm312; I hope we can get in touch for a deeper exchange.

Expecting value: line 1 column 1 (char 0)

Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm.py", line 15, in chatCompletion
message, success = obj.chatCompletion(context)
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 73, in chatCompletion
raise e
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 59, in chatCompletion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm.py", line 19, in chatCompletion
message, success = obj.chatCompletion(context)
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 73, in chatCompletion
raise e
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 59, in chatCompletion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/controllers/common.py", line 9, in decorated_function
result = func(*args, **kwargs)
File "/root/DevOpsGPT/backend/app/controllers/step_requirement.py", line 23, in clarify
msg, success = clarifyRequirement(userPrompt, globalContext, appArchitecture)
File "/root/DevOpsGPT/backend/app/pkgs/prompt/prompt.py", line 19, in clarifyRequirement
return obj.clarifyRequirement(userPrompt, globalContext, appArchitecture)
File "/root/DevOpsGPT/backend/app/pkgs/prompt/requirement_basic.py", line 80, in clarifyRequirement
return json.loads(message), success
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
172.30.192.1 - - [14/Sep/2023 18:35:53] "POST /step_requirement/clarify HTTP/1.1" 200 -

Unable to push the code to github

Hi,

What changes do I have to make in order to push the DevOpsGPT generated code to my github?

I have made changes in env.yaml file as per following:
DEVOPS_TOOLS: "github"
GIT_ENABLED: true
GIT_URL: "https://github.com"
GIT_API: "https://api.github.com"
GIT_TOKEN:
GIT_USERNAME:
GIT_EMAIL:

When I click on "submit code", it tries to push the code to kuafuai's repo and gives following error:
"Failed to push code. In the ./workspace/19/kuafuai/template_freestyleApp directory. git push failed."

Unable to log in from the frontend

The page cannot log in. (screenshot 1691126562685)

The frontend login request gets a 200 response. (screenshot)

The backend responds with status 304. (screenshot 1691126903130)

The UI also fails to render properly. (screenshot)

How can I fix this? Any pointers would be much appreciated!

Comparative Analysis of YiVal and DevOpsGPT: Unique Selling Points and Competitive Edges

Hello Contributors and Community,

I recently found that there are two very interesting projects, DevOpsGPT and YiVal, both of which are based on AI and specifically large language models, but seemingly aiming at two different aspects in the AI-ML deployment process. I wanted to open a discussion to understand the unique competitive advantages and features that YiVal holds in comparison to DevOpsGPT.

At the outset, let me provide a brief understanding of the two projects:

  • DevOpsGPT: An AI-Driven Software Development Automation Solution that combines Large Language Models with DevOps tools to convert natural language requirements into working software, thereby enhancing development efficiency and reducing communication costs. It generates code, performs validation, and can analyze existing project information and tasks.

  • YiVal: A GenAI-Ops framework aimed at iterative tuning of Generative AI model metadata, params, prompts, and retrieval configurations, combined with test-dataset generation, evaluation algorithms, and improvement strategies. It streamlines prompt development, supports multimedia and multi-model input, and offers automated prompt generation and prompt-related artifact configuration.

Looking at both of these, it seems they provide unique features to cater to different needs in the AI development and deployment pipeline. However, I'm curious to further understand the unique selling points and specific competitive advantages of YiVal.

Here are a few questions that might be worth discussing:

  1. DevOpsGPT seems to convert natural language requirements into working software while YiVal seems focused on fine-tuning Generative AI with test dataset generation and improvement strategies. In what ways does YiVal outperform DevOpsGPT in facilitating a more robust and efficient machine learning model iteration and training process?

  2. One of the highlighted features of YiVal is its focus on human (RLHF) and algorithm-based improvers, along with a detailed web view. Can you provide more insight into how these features are leveraged in YiVal, and how they compare to DevOpsGPT's project-analysis and code-generation features?

  3. DevOpsGPT offers a feature to analyze existing projects and tasks, whereas YiVal emphasizes streamlining prompt development and multimedia/multi-model input. How does YiVal handle integration with existing models and datasets? Is there any scope for reverse-engineering or retraining established models with YiVal?

  4. In terms of infrastructure, how does YiVal compare to DevOpsGPT? Do they need similar resources for deployment and operation, or does one offer more efficiency?

  5. Lastly, how is the user experience on YiVal compared to DevOpsGPT? I see YiVal boasts a "non-code" experience for building Gen-AI applications, but how does this hold up against DevOpsGPT's efficient and understandable automated development process?

I'd appreciate any insights or thoughts on these points. Looking forward to stimulating discussions!

Run.bat issue in windows 11

When I try to run the code for docker install in conda env I get this error...

(devopsgpt) PS J:\GPT.MODELS\DevOpsGPT> docker run -it
"docker run" requires at least 1 argument.
See 'docker run --help'.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Create and run a new container from an image


When I run run.bat from folder I get ...

    1 file(s) copied.

Installing missing packages...
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the file C:\Program.
Environment variable HTTP_PROXY not defined
Environment variable HTTPS_PROXY not defined
Environment variable ALL_PROXY not defined
'http_proxy' is not recognized as an internal or external command,
operable program or batch file.
'https_proxy' is not recognized as an internal or external command,
operable program or batch file.
'all_proxy' is not recognized as an internal or external command,
operable program or batch file.
000
Environment variable HTTP_PROXY not defined
Environment variable HTTPS_PROXY not defined
Environment variable ALL_PROXY not defined
'http_proxy' is not recognized as an internal or external command,
operable program or batch file.
'https_proxy' is not recognized as an internal or external command,
operable program or batch file.
'all_proxy' is not recognized as an internal or external command,
operable program or batch file.
000


etc repeating

I did config env file with api for openai and github

Thx in advance

Back end listens but curl gives 404

Back end listens but curl gives 404

ubuntu@devopsllm:~/DevOpsGPT-master$ netstat -nat | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN <<

but the back end gives a 404:

ubuntu@devopsllm:~/DevOpsGPT-master$ curl http://127.0.0.1:8081
<!doctype html>

<title>404 Not Found</title>

Not Found

The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
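A 404 at the root path does not by itself mean the backend is broken: if the Flask app registers routes only under specific paths (the logs elsewhere in this thread show e.g. `/step_requirement/clarify`), a request to `/` returns 404 by design. A minimal stand-alone illustration of that route-table behaviour (the route names here are made up):

```python
# Minimal illustration (hypothetical routes): a dispatcher that only knows
# /health answers 404 for "/", exactly like a backend with no root route.
ROUTES = {"/health": b"ok"}

def dispatch(path: str):
    """Return (status, body) the way a route table would."""
    if path in ROUTES:
        return 200, ROUTES[path]
    return 404, b"Not Found"
```

So the more telling check is to curl one of the backend's actual API paths rather than `/`.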

Stuck on "Thinking"?

I realized I had to keep the initial description really short, and that got me past the first problem, but now, partway through, it just freezes and says "Thinking".

What am I doing wrong? Is there a log file somewhere I should send, or anything else I can provide to make this easier to debug?

Local Response JSON Serialize

I replaced the OpenAI code in "llm_basic.py" with a request to the RWKV-Runner API and tested with the model "RWKV-4-World-3B-v1-20230619-ctx4096", but I got errors after running a simple task, even though the API is configured correctly. I tested it with a prompt from "subtask_basic.py", and the ["choices"][0]["message"]["content"] field of the result looks correct, as shown in the "rwkv_response.json" window in the screenshot.

(screenshots: Capture10, Capture11)
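When swapping in a local backend, one defensive step is to validate the response shape before indexing `["choices"][0]["message"]["content"]`, so a server that returns a different structure fails with a readable message rather than a bare KeyError. A small illustrative guard (the function name is an assumption, not DevOpsGPT code):

```python
# Hypothetical guard: validate an OpenAI-style completion dict before
# drilling into choices[0].message.content.
def get_reply_content(resp: dict) -> str:
    try:
        return resp["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError) as e:
        raise ValueError(f"unexpected completion shape: {resp!r}") from e
```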

Starting the project fails with ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events'

Start command: python3.11 backend/run.py
Starting the project produces the following error:

WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8081
 * Running on http://192.168.101.173:8081
Press CTRL+C to quit
Traceback (most recent call last):
  File "/Users/xxl/Documents/app/DevOpsGPT/backend/run.py", line 51, in <module>
    app.run(host=BACKEND_HOST, port=BACKEND_PORT, debug=BACKEND_DEBUG)
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/flask/app.py", line 889, in run
    run_simple(t.cast(str, host), port, self, **options)
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/serving.py", line 1097, in run_simple
    run_with_reloader(
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/_reloader.py", line 440, in run_with_reloader
    reloader = reloader_loops[reloader_type](
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/_reloader.py", line 315, in __init__
    from watchdog.events import EVENT_TYPE_OPENED
ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events' (/Users/xxl/anaconda3/lib/python3.11/site-packages/watchdog/events.py)

when i click adjust code it wipes the existing code

When I click "adjust code" it wipes the existing code; it should submit the existing code instead.

It would also help to have an option to auto-complete un-coded TODO annotations: a lot of the code is not auto-completed, and I currently have to copy-paste the TODO annotation into the adjust-code box and submit it to get the code.

Help with Backend host config

Is anyone able to help me with my env.yaml file? I keep getting an error saying "failed to read backend config......" What am I missing?

C:\Users\Todd\Desktop\DevOpsGPT\backend>python run.py
Error: Failed to read the BACKEND_HOST configuration, please 【copy a new env.yaml from env.yaml.tpl】 and reconfigure it according to the documentation.
env - Copy.txt

Custom LLMs & APIs Endpoints

Will it support local LLM models such as LLaMA and RWKV through local APIs like the oobabooga Text Generation WebUI?
This would solve the problems of insufficient OpenAI credits, limited tokens, and the countries where ChatGPT access is still unavailable.
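Many local servers, including RWKV-Runner and text-generation-webui's OpenAI-compatible extension, expose a `/v1/chat/completions` endpoint, so one plausible route is pointing the OpenAI call in `llm_basic.py` at a local base URL. The sketch below only builds the request; the URL, model name, and helper function are assumptions, not DevOpsGPT configuration:

```python
import json
import urllib.request

# Sketch: construct a chat-completion request for an assumed local
# OpenAI-compatible server. Base URL and model name are placeholders.
def build_local_chat_request(base_url: str, model: str, messages: list):
    payload = {"model": model, "messages": messages, "temperature": 0.2}
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage against a running local server (not executed here):
# req = build_local_chat_request("http://127.0.0.1:8000", "local-model",
#                                [{"role": "user", "content": "hello"}])
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```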

Running the service fails

Executing the script: (screenshot)
Running the Python program: (screenshot)
Note: the test key works fine in other programs.

I'm an architect with 15 years of experience, and I think this project can succeed.

For the past few months I have been developing with AI; more than 80% of my development has already moved to AI-assisted workflows. None of the existing automation products can do what I have in mind, but yours comes very close. Software development is exactly this: module requirements keep changing mid-way.

I believe programmers can largely be replaced, but architects are much harder to replace, because requirements are messy and complex, and only a person can own them.

I would also like to join your open-source project.
