
groqagenticworkflow's Introduction

Agentic Workflow README

Welcome to the Agentic Workflow project! This project aims to create an AI-powered solution that generates profitable Python scripts through collaboration between AI agents. The agents work together to break down tasks, write code, review and refactor it, and ensure the generated scripts are efficient, well-documented, and capable of producing real profit.

Project Overview

The Agentic Workflow project consists of the following key components:

  • agentic.py: The main script that orchestrates the collaboration between AI agents.
  • agent_functions.py: Contains utility functions used by the agents during the workflow.
  • code_execution_manager.py: Manages code execution, testing, optimization, and documentation generation.
  • browser_tools.py: Provides tools for web browser interaction and web scraping.
  • crypto_wallet.py: Implements a cryptocurrency wallet for handling transactions.
  • task_manager.py: Manages and tracks tasks assigned to the agents.
  • system_messages/: Contains system messages that guide the behavior of each agent.
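
To give a feel for how these pieces fit together, here is a purely illustrative sketch of the collaboration loop; every class and function name below is a placeholder invented for this example, not the project's actual API.

    # Illustrative only: stub agents standing in for the real Groq-backed agents.
    class StubAgent:
        def __init__(self, name):
            self.name = name

        def act(self, role, payload):
            # A real agent would call the LLM here; the stub just echoes its input.
            return f"{self.name} ({role}) handled: {payload}"


    def agentic_loop(tasks):
        bob, mike, annie, alex = (StubAgent(n) for n in ("Bob", "Mike", "Annie", "Alex"))
        for task in tasks:
            plan = bob.act("project manager", task)        # break the task down
            code = mike.act("software architect", plan)    # write the code
            code = annie.act("workflow developer", code)   # review and refine
            print(alex.act("DevOps engineer", code))       # test and deploy


    if __name__ == "__main__":
        agentic_loop(["Write a script that scrapes and compares product prices"])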

Getting Started

To run the Agentic Workflow project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/Drlordbasil/GroqAgenticWorkflow.git
    
  2. Navigate to the project directory:

    cd GroqAgenticWorkflow
    
  3. Install the required dependencies:

    pip install langchain langchain_groq spacy requests beautifulsoup4 selenium webdriver_manager pytest pylint black bitcoinlib
    
  4. Run the agentic.py script:

    python agentic.py
    

The script will initiate the collaboration between the AI agents, and you can monitor the progress and generated code in the console output.

AI Agents

The Agentic Workflow project involves the following AI agents:

  • Bob (Project Manager Extraordinaire): Leads the team, breaks down the project into manageable tasks, and assigns them to the other agents. Bob ensures the project stays on track and meets its goals.

  • Mike (AI Software Architect and Engineer): Responsible for code analysis, feature development, algorithm design, and code quality assurance. Mike infuses the project with cutting-edge AI capabilities.

  • Annie (Senior Agentic Workflow Developer): Focuses on user interface design, workflow optimization, error handling, and cross-platform compatibility. Annie creates intuitive and efficient workflows.

  • Alex (DevOps Engineer Mastermind): Handles environment setup, code execution, testing, deployment, and maintenance. Alex ensures the project runs smoothly and efficiently.

JSON Tools

The agents utilize JSON tools to automate tasks, gather information, and enhance the agentic workflow solution. Some of the key JSON tools include:

  • search_google: Searches Google for relevant information.
  • scrape_page: Scrapes a web page for relevant information.
  • test_code: Tests the provided code.
  • optimize_code: Optimizes the provided code and offers suggestions.
  • generate_documentation: Generates documentation for the provided code.
  • execute_browser_command: Executes a browser command for web interaction.

Agents can invoke these tools using specific JSON formats within their responses.
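
The exact invocation format is defined by the prompts in system_messages/; the snippet below is only a rough illustration of the idea (the field names are assumptions, not the project's actual schema):

    {
        "tool": "search_google",
        "parameters": {
            "query": "python libraries for automated market data collection"
        }
    }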

Contributing

We welcome contributions to the Agentic Workflow project! If you'd like to contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them with descriptive messages.
  4. Push your changes to your forked repository.
  5. Submit a pull request detailing your changes.

Please ensure that your code adheres to the project's coding standards and includes appropriate documentation.

License

The Agentic Workflow project is licensed under the MIT License.

Contact

If you have any questions, suggestions, or feedback, please feel free to contact the project maintainer, Anthony Snider (Drlordbasil), at [email protected].

Happy coding!

groqagenticworkflow's People

Contributors

  • drlordbasil
  • javacaliente


groqagenticworkflow's Issues

Import and Optimization Errors in Agentic Workflow Development Project

Description

During the testing phase of our Python program aimed at transforming agentic workflows in the AI industry, we encountered several critical errors. The project, a collaboration between our team's agentic workflow developers, AI software engineers, and a DevOps engineer, aims to set new standards in the field. However, the test suite failed to run due to a ModuleNotFoundError, and further optimization efforts led to an attribute error in the 'enchant' module. Additionally, Git-related errors suggest issues with repository detection.

Steps to Reproduce

  1. Run the test suite for the Python program.
  2. Observe the ModuleNotFoundError for the 'add' module upon execution.
  3. Run the optimization step; observe the 'enchant' attribute error and the Git repository errors.

Expected Behavior

  • Successful execution of the test suite without import errors.
  • Correct recognition and utilization of the 'enchant' module's attributes during optimization.
  • Proper detection and interaction with the Git repository, if applicable.

Actual Behavior

  • No tests were executed due to a ModuleNotFoundError.
  • An optimization error regarding the 'enchant' module attribute was observed.
  • Git commands indicated the current directory is not recognized as a git repository.

Error Messages and Performance Data

    ModuleNotFoundError: No module named 'add'
    Error during optimization: module 'enchant' has no attribute 'Broker'
    fatal: not a git repository (or any of the parent directories): .git

Performance data and function call statistics were generated, indicating the program's execution path and time spent on various calls.

Environment

  • Operating System: Windows 11
  • Python Version: Python 3.11
  • Collaboration Context: The issue was encountered during the testing phase of our agentic workflow project, involving roles and tasks distributed among team members focused on AI software engineering, agentic workflow development, and DevOps.

Additional Context

The program in question is part of a larger effort to innovate within the AI industry, emphasizing the creation of efficient, robust, and transformative agentic workflows. Our team, consisting of senior agentic workflow developers, AI software engineers, and a DevOps engineer, collaborates closely to address these technical challenges.

Given the complexity of our project and the specialized roles involved, resolving these errors is crucial for progressing towards our goal of setting new industry standards. Any insights or suggestions on addressing the import error, the optimization issue, and the Git repository detection problem would be highly appreciated.
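
As one hedged suggestion for the Git detection problem (not a confirmed fix for this codebase, and the helper name below is invented for illustration), the automated commit step could probe for a repository before calling Git:

    import subprocess

    def commit_changes_safely(paths, message="Automated code commit"):
        # Skip the commit entirely when the working directory is not a git repository.
        probe = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True,
        )
        if probe.returncode != 0:
            print("Not inside a git repository; skipping commit.")
            return
        subprocess.run(["git", "add", *paths], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)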

Attached Files and Documentation

  • Program files and error logs have been included as attachments to this issue for further examination.
    import os
    import subprocess
    import tempfile
    import logging
    import cProfile
    import pstats
    import io
    import ast
    import astroid
    import pylint.lint
    import pylint.reporters
    import traceback


    class CodeExecutionManager:
        def __init__(self):
            self.logger = logging.getLogger(__name__)
            self.workspace_folder = "workspace"
            os.makedirs(self.workspace_folder, exist_ok=True)

        def save_file(self, filepath, content):
            filepath = os.path.join(self.workspace_folder, filepath)
            try:
                with open(filepath, 'w', encoding='utf-8') as file:
                    file.write(content)
                self.logger.info(f"File '{filepath}' saved successfully.")
                return True
            except Exception as e:
                self.logger.error(f"Error saving file '{filepath}': {str(e)}")
                return False

        def read_file(self, filepath):
            filepath = os.path.join(self.workspace_folder, filepath)
            try:
                with open(filepath, 'r', encoding='utf-8') as file:
                    content = file.read()
                self.logger.info(f"File '{filepath}' read successfully.")
                return content
            except FileNotFoundError:
                self.logger.error(f"File '{filepath}' not found.")
                return None
            except Exception as e:
                self.logger.error(f"Error reading file '{filepath}': {str(e)}")
                return None

        def test_code(self, code):
            # Write the code to a temporary script and run unittest discovery against it.
            if not code:
                return None, None

            with tempfile.TemporaryDirectory(dir=self.workspace_folder) as temp_dir:
                script_path = os.path.join(temp_dir, 'temp_script.py')
                with open(script_path, 'w') as f:
                    f.write(code)

                try:
                    output = subprocess.check_output(
                        ['python', '-m', 'unittest', 'discover', temp_dir],
                        universal_newlines=True, stderr=subprocess.STDOUT, timeout=30)
                    self.logger.info("Tests execution successful.")
                    return output, None
                except subprocess.CalledProcessError as e:
                    self.logger.error(f"Tests execution error: {e.output}")
                    return None, e.output
                except subprocess.TimeoutExpired:
                    self.logger.error("Tests execution timed out after 30 seconds.")
                    return None, "Execution timed out after 30 seconds"
                except Exception as e:
                    self.logger.error(f"Tests execution error: {str(e)}")
                    return None, str(e)

        def execute_command(self, command):
            try:
                result = subprocess.run(command, capture_output=True, text=True, shell=True)
                self.logger.info(f"Command executed: {command}")
                return result.stdout, result.stderr
            except Exception as e:
                self.logger.error(f"Error executing command: {str(e)}")
                return None, str(e)


    def format_error_message(error):
        return f"Error: {str(error)}\nTraceback: {traceback.format_exc()}"


    def run_tests(code):
        code_execution_manager = CodeExecutionManager()
        test_code_output, test_code_error = code_execution_manager.test_code(code)
        if test_code_output:
            print(f"\n[TEST CODE OUTPUT]\n{test_code_output}")
        if test_code_error:
            print(f"\n[TEST CODE ERROR]\n{test_code_error}")


    def monitor_performance(code):
        # Profile a standalone run of the generated code and report cumulative timings.
        with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False, dir="workspace") as temp_file:
            temp_file.write(code)
            temp_file_path = temp_file.name

        profiler = cProfile.Profile()
        profiler.enable()

        try:
            subprocess.run(['python', temp_file_path], check=True)
        except subprocess.CalledProcessError as e:
            print(f"Error executing code: {e}")
        finally:
            profiler.disable()
            os.unlink(temp_file_path)

        stream = io.StringIO()
        stats = pstats.Stats(profiler, stream=stream).sort_stats('cumulative')
        stats.print_stats()

        performance_data = stream.getvalue()
        print(f"\n[PERFORMANCE DATA]\n{performance_data}")

        return performance_data


    def optimize_code(code):
        try:
            # Save the code to a temporary file
            with tempfile.NamedTemporaryFile(delete=False, suffix=".py") as tmp:
                tmp.write(code.encode('utf-8'))
                tmp_file_path = tmp.name

            # Setup Pylint to use the temporary file
            pylint_output = io.StringIO()

            # Define a custom reporter class based on BaseReporter
            class CustomReporter(pylint.reporters.BaseReporter):
                def _display(self, layout):
                    pylint_output.write(str(layout))

            pylint_args = [tmp_file_path]
            pylint_reporter = pylint.lint.Run(pylint_args, reporter=CustomReporter())

            # Retrieve optimization suggestions
            optimization_suggestions = pylint_output.getvalue()
            print(f"\n[OPTIMIZATION SUGGESTIONS]\n{optimization_suggestions}")

            # Cleanup temporary file
            os.remove(tmp_file_path)

            return optimization_suggestions
        except SyntaxError as e:
            print(f"SyntaxError: {e}")
            return None
        except Exception as e:
            print(f"Error during optimization: {str(e)}")
            return None


    def pass_code_to_alex(code, alex_memory):
        alex_memory.append({"role": "system", "content": f"Code from Mike and Annie: {code}"})


    def send_status_update(mike_memory, annie_memory, alex_memory, project_status):
        mike_memory.append({"role": "system", "content": f"Project Status Update: {project_status}"})
        annie_memory.append({"role": "system", "content": f"Project Status Update: {project_status}"})
        alex_memory.append({"role": "system", "content": f"Project Status Update: {project_status}"})


    def generate_documentation(code):
        try:
            module = ast.parse(code)
            docstrings = []

            for node in ast.walk(module):
                if isinstance(node, (ast.FunctionDef, ast.ClassDef, ast.Module)):
                    docstring = ast.get_docstring(node)
                    if docstring:
                        # ast.Module nodes have no 'name' attribute, so fall back to a label.
                        name = getattr(node, "name", "<module>")
                        docstrings.append(f"{name}:\n{docstring}")

            documentation = "\n".join(docstrings)
            print(f"\n[GENERATED DOCUMENTATION]\n{documentation}")

            return documentation
        except SyntaxError as e:
            print(f"SyntaxError: {e}")
            return None


    def commit_changes(code):
        subprocess.run(["git", "add", "workspace"])
        subprocess.run(["git", "commit", "-m", "Automated code commit"])

Add new llama embedding and RAG memory

Current class needs work:

    import ollama
    import chromadb

    class LlamaRAG:
        def __init__(self):
            # Small fixed corpus used to demonstrate embedding-backed retrieval.
            self.documents = [
                "Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels",
                "Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands",
                "Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall",
                "Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight",
                "Llamas are vegetarians and have very efficient digestive systems",
                "Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old",
            ]
            self.client = chromadb.Client()
            self.collection = self.client.create_collection(name="docs")

        def store_documents(self):
            # Embed each document with Ollama and store it in the Chroma collection.
            for i, d in enumerate(self.documents):
                response = ollama.embeddings(model="mxbai-embed-large", prompt=d)
                embedding = response["embedding"]
                self.collection.add(
                    ids=[str(i)],
                    embeddings=[embedding],
                    documents=[d]
                )

        def query_documents(self, prompt):
            # Embed the prompt, retrieve the closest document, and generate a grounded answer.
            response = ollama.embeddings(
                prompt=prompt,
                model="mxbai-embed-large"
            )
            results = self.collection.query(
                query_embeddings=[response["embedding"]],
                n_results=1
            )
            data = results['documents'][0][0]
            output = ollama.generate(
                model="stablelm2",
                prompt=f"Using this data: {data}. Respond to this prompt: {prompt}"
            )
            return output['response']


    if __name__ == "__main__":
        rag = LlamaRAG()
        rag.store_documents()
        prompt = "What are some interesting facts about llamas?"
        response = rag.query_documents(prompt)
        print(response)
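
As one possible direction for the requested rework (an illustrative sketch only, not a committed design; the class and parameter names below are assumptions), the class could accept its documents and model names as constructor arguments instead of hard-coding them:

    # Hypothetical variant of LlamaRAG with injectable documents and models.
    import ollama
    import chromadb

    class ConfigurableLlamaRAG:
        def __init__(self, documents, embed_model="mxbai-embed-large", gen_model="stablelm2"):
            self.documents = list(documents)
            self.embed_model = embed_model
            self.gen_model = gen_model
            self.client = chromadb.Client()
            # get_or_create_collection avoids errors if the collection already exists.
            self.collection = self.client.get_or_create_collection(name="docs")

        def store_documents(self):
            for i, doc in enumerate(self.documents):
                embedding = ollama.embeddings(model=self.embed_model, prompt=doc)["embedding"]
                self.collection.add(ids=[str(i)], embeddings=[embedding], documents=[doc])

        def query_documents(self, prompt, n_results=1):
            query_embedding = ollama.embeddings(model=self.embed_model, prompt=prompt)["embedding"]
            results = self.collection.query(query_embeddings=[query_embedding], n_results=n_results)
            context = " ".join(results["documents"][0])
            output = ollama.generate(
                model=self.gen_model,
                prompt=f"Using this data: {context}. Respond to this prompt: {prompt}",
            )
            return output["response"]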
