filip-michalsky / salesgpt

Context-aware AI Sales Agent to automate sales outreach.

Home Page: https://salesgpt.vercel.app

License: MIT License


salesgpt's Introduction

🤖 SalesGPT - Open Source AI Agent for Sales


This repo implements a context-aware AI Sales Agent built on LLMs that can work across voice, email, and texting channels (SMS, WhatsApp, WeChat, Weibo, Telegram, etc.).

SalesGPT is context-aware: it understands which stage of a sales conversation it is in and acts accordingly. Moreover, SalesGPT has access to tools, such as your own pre-defined product knowledge base, which significantly reduces hallucinations.

Our Vision: Build the Best Open Source AI Sales Agent

We are building SalesGPT to power your best AI Sales Agents. We would love to learn more about the use cases you are building towards, since they fuel the SalesGPT development roadmap, so please don't hesitate to reach out.

Need Help with building GenAI/LLM applications?

If you need assistance, our team is here to help! Please reach out to us at Odyssey AI Partners for expert support and guidance.

🔴 Latest News

  • AI Sales Agents can now ACTUALLY sell! They autonomously generate Stripe payment links to sell products and services to customers.
  • You can now test your AI Sales Agents via our frontend.
  • The Sales Agent can now take advantage of tools, such as looking up products in a product catalog!

Demos and Use Cases

Unload AI Sales Agent Demos - Powered by SalesGPT: Our new virtual workforce? 🤔

Demo #1: Sarah - Patient Coordinator at South Orange Pediatrics

  • 100X Your Healthcare Admin with our Virtual Workforce

Sarah - South Orange Pediatrics Patient Coordinator

Demo #2: Ted - Autonomously create payment links and collect customer payments

Ted - Mattress Sales Representative Closing Deals

Try it yourself on our SalesGPT hosted frontend demo.

Contact Us for Suggestions, Questions, or Help

We are building SalesGPT to power your best AI Sales Agents, and we would love to learn more about the use cases you are building towards, since they fuel the SalesGPT development roadmap.

If you want us to build better towards your needs, or need help with your AI Sales Agents, please reach out to chat with us: SalesGPT Use Case Intake Survey

Features

Contextual Understanding: Sales Stage Awareness

The AI Sales Agent understands the conversation stage (you can define your own stages to fit your needs; a sketch follows the list below):

  • Introduction: Start the conversation by introducing yourself and your company.
  • Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service.
  • Value Proposition: Briefly explain how your product/service can benefit the prospect.
  • Needs Analysis: Ask open-ended questions to uncover the prospect's needs and pain points.
  • Solution Presentation: Based on the prospect's needs, present your product/service as the solution that can address their pain points.
  • Objection Handling: Address any objections that the prospect may have regarding your product/service.
  • Close: Ask for the sale by proposing a next step.
  • End Conversation: The user does not want to continue the conversation, so end the call.
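If you define your own stages, a dictionary keyed by stage number is a natural shape for them. Below is a minimal sketch mirroring the defaults above; the exact structure your installed SalesGPT version expects may differ, so treat this as illustrative:

CONVERSATION_STAGES = {
    "1": "Introduction: Start the conversation by introducing yourself and your company.",
    "2": "Qualification: Confirm the prospect is the right person to talk to about your product/service.",
    "3": "Value Proposition: Briefly explain how your product/service can benefit the prospect.",
    "4": "Needs Analysis: Ask open-ended questions to uncover the prospect's needs and pain points.",
    "5": "Solution Presentation: Present your product/service as the solution to the prospect's pain points.",
    "6": "Objection Handling: Address any objections the prospect may have.",
    "7": "Close: Ask for the sale by proposing a next step.",
    "8": "End Conversation: The user does not want to continue, so end the call.",
}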

Business & Product Knowledge:

  • Reference only your business information & products and significantly reduce hallucinations!

Close sales:

  • The AI Agent can actually close sales by generating Stripe payment links and collecting orders from customers.

Connect to ANY data system:

  • The AI Agent can leverage data from any data system (new, old, or legacy) via an integration with Mindware.

Automated Email Communication:

  • Enhance your sales process with automated email communication. SalesGPT can now send personalized emails to prospects, including follow-ups or product information.

Calendly Meeting Scheduling

  • The AI Agent can now facilitate scheduling meetings by generating Calendly links.

Use Any LLM to Power Your AI Sales Agent

  • Thanks to our integration with LiteLLM, you can choose any closed or open-source LLM to work with SalesGPT. Thanks to the LiteLLM maintainers for this contribution! (A sketch of swapping providers follows below.)
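For example, here is a hedged sketch of swapping providers via ChatLiteLLM; the local-model lines are illustrative and mirror the Ollama usage shown in the issues section below, so verify provider strings against the LiteLLM docs:

from langchain_community.chat_models import ChatLiteLLM

# OpenAI-hosted model, as in the Quick Start below
llm = ChatLiteLLM(temperature=0.4, model_name="gpt-4-0125-preview")

# A locally served model via Ollama (illustrative model name and api_base)
local_llm = ChatLiteLLM(
    model="ollama/llama2",
    api_base="http://localhost:11434",
)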

Power Real-time Sales Conversations

  • Synchronous & Asynchronous Completion with LLMs
  • Synchronous & Asynchronous Streaming from LLMs (see the sketch below)
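A minimal sketch of the streaming path, assuming the return_streaming_generator flag used in the repo's streaming example (the chunk format depends on the LLM backend, so verify against your installed version):

# Assumes `sales_agent` was created as in the Quick Start below.
generator = sales_agent.step(
    return_streaming_generator=True, model_name="gpt-4-0125-preview"
)
for chunk in generator:
    print(chunk, end="", flush=True)  # chunk shape may vary by backend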

Optimized for Low Latency in Voice Channel

  • The voice AI Sales Agent achieves a sub-one-second round-trip response time to human speakers, covering the entire pipeline (speech-to-text, LLM inference, and text-to-speech) while ensuring stability and scalability.

Human in the loop

  • For use cases where the AI sales agent needs human supervision.

Langsmith tracing

  • Debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework.

Enterprise-Grade Security

  • Upcoming integration with PromptArmor to protect your AI Sales Agents against security vulnerabilities (see our roadmap).

Quick Start

import os
from salesgpt.agents import SalesGPT
from langchain_community.chat_models import ChatLiteLLM

from dotenv import load_dotenv
load_dotenv()  # make sure you have a .env file with your API keys, e.g., OPENAI_API_KEY=sk-xxx, MINDWARE_API_KEY, etc.

# select your model - we support 50+ LLMs via LiteLLM https://docs.litellm.ai/docs/providers
llm = ChatLiteLLM(temperature=0.4, model_name="gpt-4-0125-preview")

sales_agent = SalesGPT.from_llm(
    llm,
    use_tools=True,
    verbose=False,
    product_catalog="examples/sample_product_catalog.txt",
    salesperson_name="Ted Lasso",
    salesperson_role="Sales Representative",
    company_name="Sleep Haven",
    company_business="""Sleep Haven is a premium mattress company that provides
    customers with the most comfortable and supportive sleeping experience
    possible. We offer a range of high-quality mattresses, pillows, and
    bedding accessories that are designed to meet the unique needs of our
    customers.""",
)
sales_agent.seed_agent()
sales_agent.determine_conversation_stage() # optional for demonstration, built into the prompt
# agent 
sales_agent.step()

# user
user_input = input('Your response: ') # Yea, sure
sales_agent.human_step(user_input)

# agent
sales_agent.determine_conversation_stage() # optional for demonstration, built into the prompt
sales_agent.step()

# user
user_input = input('Your response: ') # What pricing do you have for your mattresses?
sales_agent.human_step(user_input)

# agent
sales_agent.determine_conversation_stage() # optional for demonstration, built into the prompt
sales_agent.step()

Conversation Stage: Introduction: Start the conversation by introducing yourself and your company.

Thought: Do I need to use a tool? No
Ted Lasso: Hello, my name is Ted Lasso and I'm calling on behalf of Sleep Haven. We are a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. I was wondering if you would be interested in learning more about our products and how they can improve your sleep?

User: Yea, sure

Conversation Stage: Value proposition: Briefly explain how your product/service can benefit the prospect.

Thought: Do I need to use a tool? No
Ted Lasso: Great to hear that! Our mattresses are specially designed to contour to your body shape, providing the perfect level of support and comfort for a better night's sleep. Plus, they're made with high-quality materials that are built to last. Would you like to hear more about our different mattress options?

User: What pricing do you have for your mattresses?

Thought: Do I need to use a tool? Yes --> AI Agent uses tools here.
Action: ProductSearch
Action Input: pricing for mattresses
Observation: Sleep Haven offers three mattresses: the Classic Harmony Spring Mattress for $1,299, the Plush Serenity Bamboo Mattress for $2,599, and the Luxury Cloud-Comfort Memory Foam Mattress for $999. The sizes available vary by mattress.

Thought: Do I need to use a tool? No
Ted Lasso: Sleep Haven offers three mattresses at different price points. The Classic Harmony Spring Mattress is priced at $1,299, the Plush Serenity Bamboo Mattress is priced at $2,599, and the Luxury Cloud-Comfort Memory Foam Mattress is priced at $999. The prices may vary depending on the size you choose. Would you like more information about the specific sizes and features of each mattress?

User: No, I will take two Classic Harmony Spring Mattress please.
Thought: Do I need to use a tool? Yes --> AI Agent uses tools here.
Action: GeneratePaymentLink
Action Input: Classic Harmony Spring Mattress, 2, John Doe
Observation: {"response":"https://buy.stripe.com/test_9AQ6rygc397Bbv228p"}

Thought: Do I need to use a tool? No
Ted Lasso: Fantastic choice! I've generated a payment link for you to order two Classic Harmony Spring Mattresses. You can complete your purchase by clicking on this link: https://buy.stripe.com/test_9AQ6rygc397Bbv228p. Is there anything else I can assist you with today?

Architecture

Setup

Install

Make sure you have Python >=3.8,<3.12:

Create a virtual environment at a location on your computer. We use the generic name "env" for our virtual environment in this setup. You can rename it, but make sure to use that name later when working with the environment (and rename the VENV variable in the Makefile accordingly, so that the make commands work after cloning our repository):

For Windows:

  • Open Command Prompt or PowerShell.
  • Navigate to your project directory: cd path\to\your\project
  • Create a virtual environment: python -m venv env
  • Activate the virtual environment: .\env\Scripts\activate

For Mac:

  • Open Terminal.
  • Navigate to your project directory: cd path/to/your/project
  • Create a virtual environment: python3 -m venv env
  • Activate the virtual environment: source env/bin/activate

To deactivate a virtual environment after you have stopped using it, simply run: deactivate

Clone the SalesGPT Github repository:

git clone https://github.com/filip-michalsky/SalesGPT.git

Navigate to the repository, and if you used a different venv name, rename the VENV variable in the Makefile:

cd SalesGPT

If you simply want to use SalesGPT as an end user without local changes, you can install it from PyPI:

pip install salesgpt

If you want to work on your own version of SalesGPT or contribute to the open-source version, activate your virtual environment as described above and run:

make setup

For more detailed installation steps, along with the reasoning behind each, please visit CONTRIBUTING.md.

Finally, to use SalesGPT, create a .env file modeled on our .env.example and add your API keys there, one per line, just as we have done.
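For reference, a minimal sketch of such a .env file; the key names follow the Quick Start comment above, the values are placeholders, and your .env.example is authoritative:

OPENAI_API_KEY=sk-xxx
MINDWARE_API_KEY=xxx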

Run an Example AI Sales agent

Navigate into the SalesGPT directory:

cd SalesGPT

SalesGPT can be run in various ways, tailored to your preferences and setup. Below are the methods available:

1. Using Docker

For those who prefer containerization, Docker offers an isolated and consistent environment. Ensure Docker is installed on your system by following the official Docker installation guide.

To run SalesGPT with Docker, execute the following steps:

  1. Start the Application with Docker Compose:

    Use the command below to start SalesGPT in detached mode:

    docker-compose up -d
    

    If you've made changes and want them to reflect, append --build to the command above.

  2. Stopping the Application:

    To stop and remove all running containers related to SalesGPT, execute:

    docker-compose down
    

Troubleshooting:

  • Clean Up Docker Resources: If you encounter errors, you can clean up Docker by removing all unused containers, networks, images, and volumes with caution:
    docker system prune --volumes
    
  • Rebuild Without Cache: To rebuild the images without using the cache and start the services afresh, run:
    docker-compose build --no-cache
    docker-compose up -d
    

After successful setup, access SalesGPT at http://localhost:3000/chat in your browser.

2. Direct User Interface Launch

If Docker is not part of your workflow, you can directly launch the SalesGPT user interface. Please refer to the README.md file in the frontend directory for instructions on setting up the UI locally.

3. Using the Terminal

For terminal enthusiasts or automation scripts, run SalesGPT with the following command: python run.py --verbose True --config examples/example_agent_setup.json
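The config file defines the agent persona. Here is a hedged sketch of a minimal examples/example_agent_setup.json, with keys taken from the example shown in the issues section below; the shipped file is authoritative:

{
  "salesperson_name": "Ted Lasso",
  "salesperson_role": "Business Dev Rep",
  "company_name": "Sleep Haven",
  "company_business": "Sleep Haven offers premium mattresses and sleep accessories for ultimate comfort...",
  "conversation_purpose": "Discover sleep needs, offer premier mattress solution.",
  "conversation_type": "call"
}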

4. Running Only the Backend

For those who wish to integrate SalesGPT's backend with their own user interface or application, running only the backend is a straightforward process. This allows you to leverage the powerful features of SalesGPT while maintaining full control over the user experience.

To run only the backend of SalesGPT, follow these steps:

  1. Start the Backend Service:

    Use the following command to start the backend service. This will initiate the server on port 8000 by default, making the API accessible:

    docker-compose up -d backend
    

    If you need to rebuild the backend image, perhaps after making changes, you can add --build to the command above.

  2. Accessing the Backend:

    With the backend running, you can access the API endpoints at http://localhost:8000. Refer to the API documentation for details on available endpoints and their usage.

  3. Stopping the Backend:

    To stop the backend service, execute:

    docker-compose stop backend
    

    If you wish to remove the backend container entirely, use:

    docker-compose down
    

This setup is ideal for developers looking to integrate SalesGPT's backend into custom applications or those who prefer to use a different frontend technology.

Test your setup

  1. Activate your environment as described above (run source env/bin/activate on Unix-like systems or .\env\Scripts\activate on Windows; replace env with the name of your virtual environment).
  2. cd SalesGPT if you haven't already navigated to the SalesGPT home directory.
  3. make test

All tests should pass. Warnings can be ignored.

Repair your setup

Regular updates are added to the SalesGPT repository. If your code is not working, please check the current README file for possible changes in how to launch. Updates often add extra environment variables that are required for certain functionality, so please check the current .env.example file.

For any problems you cannot solve, please don't hesitate to reach out in our Discord #support channel.

Uninstall SalesGPT

To delete the virtual environment you used for SalesGPT development and remove your SalesGPT repository from your system, navigate to the directory where you created the virtual environment and cloned SalesGPT, and run: make clean

Documentation

We leverage the langchain library in this implementation, specifically its Custom Agent Configuration, and are inspired by the BabyAGI architecture.

LangSmith tracing

LangSmith is a platform for building production-grade LLM applications.

It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework and seamlessly integrates with LangChain, the go-to open source framework for building with LLMs.

LangSmith is developed by LangChain, the company behind the open source LangChain framework.

To switch on LangSmith tracing, follow these steps:

  1. Create a LangSmith account.
  2. Create an API key in settings.
  3. Add your API key and project name from your LangSmith account to the .env file or the run.py module (see the sketch below).
  4. Set the "LANGCHAIN_TRACING_V2" setting in run.py to "true".
  5. That's it. You'll get a better understanding of your agents' and chains' performance in the LangChain admin panel.
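A hedged sketch of the corresponding .env entries; the LANGCHAIN_* names are the standard LangSmith variables, but verify them against run.py and your LangSmith settings:

LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=ls__xxx
LANGCHAIN_PROJECT=salesgpt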

For further reading, take a look at the docs.

Roadmap

  1. Improve SalesGPT observability
  2. Enhance prompt versioning
  3. Add prompt evaluation
  4. Write more documentation
  5. Better code documentation
  6. Refactor
  7. Improve reliability of the parser (issues here and here)
  8. Improve Deployment Instructions
  9. Calling Functionality - sample code
  10. Enterprise-Grade Security - integration with PromptArmor to protect your AI Sales Agents against security vulnerabilities
  11. LLM evaluations
  12. Resolve tickets and PRs (ongoing)
  13. Add an example implementation of an OpenAI functions agent (issue here)
  14. Add support for multiple tools issue here
  15. Add an agent controller for when stages need to be traversed linearly without skips issue here
  16. Add tool_getter to choose a tool based on vector distance to the tasks needed to be done
  17. What tools should the agent have? (e.g., the ability to search the internet)
  18. Add the ability of Sales Agent to interact with AI plugins on your website (.well-known/ai-plugin.json)
  19. More SalesGPT examples

About the Team

Lead Maintainer: Filip Michalsky

Our Support Team:

  • AI Engineering: Honza Michna (LinkedIn)
  • Software Engineer: Illia Makovoz

How to Contribute

Contributions are highly encouraged! Please fork and submit a PR.

salesgpt's People

Contributors

chemik-bit, cyai, demian143, filip-michalsky, iljamak, ishaan-jaff, janewu77, mattiasgalliano, nickturner922


salesgpt's Issues

Error when running run.py in PyCharm

I am using a GPT-3.5 key; running run.py raises an error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
Q1: To debug the program, should I use run.py as the entry point?
Q2: Is GPT-3.5 supported, or is GPT-4 required?

Error in sales_agent_with_context when running stage_analyzer_chain

I keep getting the error message below when I am going through the SalesGPT with context tutorial and get to the stage_analyzer_chain.run(conversation_history='') step:

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIError: The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID b647982210c2d50cc01ba890f9bfe6f2 in your message.) {
  "error": {
    "message": "The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID b647982210c2d50cc01ba890f9bfe6f2 in your message.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
 500 {'error': {'message': 'The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID b647982210c2d50cc01ba890f9bfe6f2 in your message.)', 'type': 'server_error', 'param': None, 'code': None}} {'Date': 'Tue, 23 May 2023 12:13:13 GMT', 'Content-Type': 'application/json', 'Content-Length': '366', 'Connection': 'keep-alive', 'access-control-allow-origin': '*', 'openai-processing-ms': '157', 'openai-version': '2020-10-01', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'x-ratelimit-limit-requests': '3500', 'x-ratelimit-limit-tokens': '90000', 'x-ratelimit-remaining-requests': '3499', 'x-ratelimit-remaining-tokens': '89427', 'x-ratelimit-reset-requests': '17ms', 'x-ratelimit-reset-tokens': '382ms', 'x-request-id': 'b647982210c2d50cc01ba890f9bfe6f2', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '7cbd362beb98dd82-LHR', 'alt-svc': 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'}.

I have confirmed that my OpenAI API key is correct and active.

Also - is the Quick start meant to replace this tutorial? I preferred this one as it gave me more control of the sausage making. I couldn't tell if the QuickStart allowed me to do the same.

Assistants+tools=[{"type": "retrieval"}]: what is the retrieval principle? Is the PDF given directly to the AI?

client = openai.OpenAI()

file = client.files.create(file=open("dlof.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Draft Letter Of Offer Assistant",
    instructions="You are a merger and takeover specialist chatbot. Use your knowledge base to best respond to queries related to mergers and takeovers. Pls be precise",
    model="gpt-3.5-turbo-1106",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)

What is the retrieval principle? Is the PDF given directly to the AI, and how are tokens counted?
If it is retrieval, only part of the document should be passed to GPT; what does that prompt look like, and is there a way to output it?

agent after using tools, when no matches found, outputs 'Sorry'

Hello, great project. I am very excited about it.
My question is:
How do I correctly tell parser.py to produce the next AgentAction or AgentFinish if there are no matches?
This happens every time my tool cannot find anything in product_catalog to answer the question:

def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    if self.verbose:
        print("TEXT")
        print(text)
        print("-------")
    if f"{self.ai_prefix}:" in text:
        return AgentFinish(
            {"output": text.split(f"{self.ai_prefix}:")[-1].strip()}, text
        )
    regex = r"Action: (.*?)[\n]*Action Input: (.*)"
    match = re.search(regex, text)
    if not match:
        ## TODO - this is not entirely reliable, sometimes results in an error.
        return AgentFinish(
            {
                "output": "I apologize, I was unable to find the answer to your question. Is there anything else I can help with?"
            },
            text,
        )
        # raise OutputParserException(f"Could not parse LLM output: `{text}`")
    action = match.group(1)
    action_input = match.group(2)
    return AgentAction(action.strip(), action_input.strip(" ").strip('"'), text)

Currently this part just outputs "I apologize..." as the final answer.

Technologies used for realtime speech-to-text and text-to-speech

Hello Filip!

I want to thank you for your idea of sales agent in SalesGPT repository!

Inspired by your idea, I decided to implement it almost from scratch using more langchain features and Streamlit as the UI, and later to create a public repository with it. I was also impressed by your video here, and I want to implement those features too, with interaction via Streamlit (realtime speech-to-text and text-to-speech).

Of course, if it's not a secret, could you please tell us what technologies you used for speech-to-text and text-to-speech?

Thank you.

Inconsistent Handling of kwargs Boolean Values

Description:

The agents.py file currently employs varying methods to handle boolean values in kwargs. For instance, it checks "use_custom_prompt" as a string, whereas "use_tools" is treated as a boolean. Such inconsistency may lead to confusion and unexpected behaviors.

I propose that both cases be managed in a consistent manner by treating "True" values as strings. Doing so will make the codebase follow a unified strategy for handling these options.

Below is my updated example_agent_setup.json. Please pay attention to the last two kwargs (use_tools and product_catalog), and note that we don't use JSON true but the string "True" for boolean values.

{
  "salesperson_name": "Ted Lasso",
  "salesperson_role": "Business Dev Rep",
  "company_name": "Sleep Haven",
  "company_business": "Sleep Haven offers premium mattresses and sleep accessories for ultimate comfort...",
  "company_values": "Sleep Haven aims to enhance sleep quality through top-notch products and service...",
  "conversation_purpose": "Discover sleep needs, offer premier mattress solution.",
  "conversation_type": "call",
  "use_custom_prompt": "True",
  "custom_prompt": "Remember, you're {salesperson_name}, a {salesperson_role} at {company_name}...",
  "use_tools": "True",
  "product_catalog": "examples/sample_product_catalog.txt"
}

Currently, different methods are used to handle boolean values in kwargs, as mentioned earlier. This divergence can lead to ambiguity and unforeseen behavior.

I recommend that we adopt a uniform method by treating "True" values as strings. This will bring consistency to the codebase's handling of these options.

Proposed Fix:

Update the following section in agents.py:

Current code:

if "use_tools" in kwargs.keys() and kwargs["use_tools"] is True:

Updated code:

if "use_tools" in kwargs and kwargs["use_tools"] == "True":

Steps to Reproduce:

  1. Use my example_agent_setup.json.
  2. Observe that tools are never added.

Expected Behavior:

The code should manage boolean values in kwargs uniformly by recognizing "True" values as strings.

run.py error

I am facing this issue. I followed all the installation steps and got this:

" from langchain.llms.base import create_base_retry_decorator
ImportError: cannot import name 'create_base_retry_decorator' from 'langchain.llms.base'"

Using AzureOpenAI

Hello,

Thanks for developing this project. I was trying to run the main file (run.py) with my API key. It ran into issues stating a wrong bearer token. I realized my OpenAI sandbox is on Azure and has a company-specific domain.
So I changed the .env file (providing AzureOpenAI base URL and key).
os.environ['OPENAI_API_KEY'] = envs_dict['OPENAI_API_KEY']
os.environ['OPENAI_API_BASE'] = envs_dict['OPENAI_API_BASE']

And also changed the following in run.py

llm = AzureOpenAI(temperature=0)

Now I am getting following error

Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: No connection adapters were found for "'https://mysandboxurl/'\n/engines//completions".

Let me know if this is the right approach and what is going wrong. Thanks.

Madhur

use_tools: bool = True problem with actions

I am facing an implementation issue with your repository; I don't understand how it should work. If you leave use_tools: bool = False in agents.py, then this is just a chat with an LLM that hallucinates due to the prompts and does not use the product database: it simply invents a sales conversation without connecting to the file with real product data. If you set use_tools: bool = True, then the first welcome message is immediately sent as a request, which naturally is not contained in the product database, and a negative response comes from parsers.py line 31: "output": "I apologize, I was unable to find the answer to your question. Is there anything else I can help with?". How do you solve this problem?

Agent type

Thanks for the work. Really useful!

What's the need for the custom agent?
Why wouldn't one want to use the OpenAI functions agent instead?
I think it could reduce latency and fix some of the custom parser errors.

sales_agent_executor is none

I encountered an issue within this GitHub repository that required resolution. Despite attempting various solutions and consulting the official Langchain website, I was unable to address the problem. The source of the issue was identified as the "agents.py" file. Specifically, the line of code "sales_agent_executor: Union[AgentExecutor, None] = Field(...)" resulted in "sales_agent_executor" being returned as "None". Your assistance in resolving this matter would be greatly appreciated.

Moreover, the error message "AttributeError: 'NoneType' object has no attribute 'run'" was encountered in this context.

python run.py error

from langchain.llms.base import create_base_retry_decorator

ImportError: cannot import name 'create_base_retry_decorator' from 'langchain.llms.base'

Integrating Hugging Chat API

Please provide guidance on whether we can integrate the Hugging Chat API into this code. If such integration is feasible, kindly provide instructions.

A question about SalesGPT's logic design

Why does the SALES_AGENT_TOOLS_PROMPT re-enter the following content, instead of passing the "stage result" obtained in the previous "stage analysis" step directly into this stage?

The flowchart indicates that the step prior to the Autonomous Sales Agent's invocation is the Stage Analyzer, which has evidently already conducted an analysis of the stage. I'm puzzled as to why the parameters aren't passed on to the next step but instead a reevaluation is conducted in the subsequent stage. What is the rationale behind this approach?

Always think about at which conversation stage you are at before answering:

1: Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional. Your greeting should be welcoming. Always clarify in your greeting the reason why you are calling.
2: Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.
3: Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.
4: Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.
5: Solution presentation: Based on the prospect's needs, present your product/service as the solution that can address their pain points.
6: Objection handling: Address any objections that the prospect may have regarding your product/service. Be prepared to provide evidence or testimonials to support your claims.
7: Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits.
8: End conversation: The prospect has to leave the call, the prospect is not interested, or next steps were already determined by the sales agent.

Problems with starting asynchronous work

Hello! I'm trying to use def do(self, conversation_history: [str], human_input=None): asynchronously, so I changed def do to async def do. I also changed sales_agent.step() to sales_agent.astep().

Is that right? Or do I need to use other methods? After these changes I get the error SalesGPT._call() takes 2 positional arguments but 3 were given...

Is there some usage example, or can you tell me what I am doing wrong?

TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

SalesGPT-main\examples\sales_agent_with_context.ipynb Cell 23 line 1
---->[1] class SalesGPT(Chain, BaseModel):
         [2] """Controller model for the Sales Agent."""
         [4]conversation_history: List[str] = []

TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

I don't know what is wrong with the given example code in "sales_agent_with_context.ipynb"; I am getting this error and looking forward to help from the community. Please note that I haven't changed anything in the code, but I am still getting this error.

Seeks effective evaluation methods

This project is really fascinating, as it uses an LLM to implement an e-commerce shopping guide. However, I'm curious how we can assess the effectiveness of the shopping guide assistant's replies. It may not be possible to reach a practical application level in one initial attempt, so we may need to repeatedly optimize the design based on evaluation metrics. Do you have any thoughts on evaluating the effectiveness of the shopping guide assistant's replies?

Question about the version of the openai package

When I use openai==1.2.4, I get this error:

Traceback (most recent call last):
  File "D:\SalesGPT\quickstart.py", line 27, in <module>
    sales_agent.determine_conversation_stage()  # optional for demonstration, built into the prompt
  File "D:\SalesGPT\salesgpt\logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "D:\SalesGPT\salesgpt\agents.py", line 76, in determine_conversation_stage
    self.conversation_stage_id = self.stage_analyzer_chain.run(
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chains\base.py", line 510, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chains\base.py", line 310, in __call__
    raise e
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chains\base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chains\llm.py", line 108, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chains\llm.py", line 120, in generate
    return self.llm.generate_prompt(
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\base.py", line 459, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\base.py", line 349, in generate
    raise e
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\base.py", line 339, in generate
    self._generate_with_cache(
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\base.py", line 492, in _generate_with_cache
    return self._generate(
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\litellm.py", line 307, in _generate
    response = self.completion_with_retry(
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\litellm.py", line 234, in completion_with_retry
    retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
  File "D:\SalesGPT\venv\lib\site-packages\langchain\chat_models\litellm.py", line 68, in _create_retry_decorator
    openai.error.Timeout,
AttributeError: module 'openai' has no attribute 'error'

When I use openai==0.27.8 (from requirements.txt), I get this error:

Traceback (most recent call last):
  File "D:\SalesGPT\quickstart.py", line 2, in <module>
    from salesgpt.agents import SalesGPT
  File "D:\SalesGPT\salesgpt\agents.py", line 9, in <module>
    from litellm import acompletion
  File "D:\SalesGPT\venv\lib\site-packages\litellm\__init__.py", line 333, in <module>
    from .timeout import timeout
  File "D:\SalesGPT\venv\lib\site-packages\litellm\timeout.py", line 20, in <module>
    from litellm.exceptions import Timeout
  File "D:\SalesGPT\venv\lib\site-packages\litellm\exceptions.py", line 12, in <module>
    from openai import (
ImportError: cannot import name 'AuthenticationError' from 'openai' (D:\SalesGPT\venv\lib\site-packages\openai\__init__.py)

So which version should I use?

integration embedding knowledge

If I want to integrate an embedding knowledge base into SalesGPT: when the user asks about content that is in the knowledge base, the agent should answer from it, and when the question is not in the knowledge base, it should still handle the normal dialogue. What is the idea to achieve this?

Integration with Ollama

Hello!
I'm trying to use SalesGPT with a locally-served model via Ollama.
I tested the LiteLLM part, it works.

from litellm import completion

response = completion(
    model="orca-mini", 
    messages=[{ "content": "respond in 20 words. who are you?","role": "user"}], 
    api_base="http://localhost:11434", 
    custom_llm_provider="ollama"
)
print(response)

Now, I modified the streaming_generator_example.py like this

llm = ChatLiteLLM(model="orca-mini",
    api_base="http://localhost:11434",
    custom_llm_provider="ollama")
....
generator = sales_agent.step(
     return_streaming_generator=True, model_name="orca-mini",
)

By the way, I think it's a little redundant to have to provide the model name at each step instead of setting it once on the llm object. Why is it needed at each step?

The result is:

salesgpt.logger 2023-09-14 07:10:55,377 - INFO - Running from_llm: --- 8.654594421386719e-05 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running from_llm: --- 0.00012731552124023438 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running from_llm: --- 0.0005018711090087891 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running seed_agent: --- 5.7220458984375e-06 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running _prep_messages: --- 0.0001068115234375 seconds ---
Traceback (most recent call last):
  File "/opt/ollama/SalesGPT/examples/streaming_generator_example.py", line 36, in <module>
    generator = sales_agent.step(
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/agents.py", line 110, in step
    return self._streaming_generator(model_name=model_name)
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/agents.py", line 179, in _streaming_generator
    return self.sales_conversation_utterance_chain.llm.completion_with_retry(
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/langchain/chat_models/litellm.py", line 263, in completion_with_retry
    return _completion_with_retry(**kwargs)

... (long trace not included)

File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/litellm/utils.py", line 998, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/orca-mini',..)` Learn more: https://docs.litellm.ai/docs/providers

From my limited understanding, the parameters set on the llm at the initialization step (i.e. api_base and custom_llm_provider) are somehow lost at the step stage. That rather negates the whole purpose of integrating LiteLLM, if the only provider I can use is OpenAI.

What am I doing wrong? Has anyone successfully run local Ollama?

Fix parsing errors from SalesConvoOutputParser

Sometimes parsing errors are returned that are not handled properly by the current SalesConvoOutputParser. A better option would be to replace it with RetryWithErrorOutputParser, which can be imported from langchain.output_parsers. This parser is triggered on parsing errors and asks the LLM to return the output in the correct format to parse (i.e. Action: "XXX", Action Input: "XXX").
Are there any plans to apply those changes soon, or any better ideas for a more reliable parser?

I made some trials here, but this parser isn't compatible with LLMSingleActionAgent: it needs parse_with_prompt, which passes the prompt back to the parser to be fixed, while LLMSingleActionAgent only calls the parse method with no prompt. It seems a new Agent would have to be created.
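For reference, a hedged sketch of the proposed wiring; base_parser, llm, llm_output, and prompt_value are placeholders, and the langchain API may have changed since this was written:

from langchain.output_parsers import RetryWithErrorOutputParser

# Wrap the existing parser so that, on a parse failure, the LLM is asked to
# repair its own output. parse_with_prompt needs the original prompt, which
# is why LLMSingleActionAgent (which calls .parse(text) only) cannot use it
# without a new Agent, as described above.
retry_parser = RetryWithErrorOutputParser.from_llm(parser=base_parser, llm=llm)
result = retry_parser.parse_with_prompt(llm_output, prompt_value)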

Hello, when will this version work? I just cloned it and got an error.

Traceback (most recent call last):
  File "F:\GitHub\Llama2-Chinese\SalesGPT\run.py", line 6, in <module>
    from salesgpt.agents import SalesGPT
  File "F:\GitHub\Llama2-Chinese\SalesGPT\salesgpt\agents.py", line 25, in <module>
    class SalesGPT(Chain, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

GPT-chat

In this solution, you use the instruct GPT model to conduct the dialogue with the user. Don't you think that this is not the best choice? The chat version is better suited for this.

Error in run.py when using --verbose True and example_agent_setup.json

Hi,

I encountered an issue while running the code with the following command:

(venv) (sales) mostafa@aorus:~/programming/progwise/sales-gpt/python/SalesGPT$ python run.py --verbose True --config examples/example_agent_setup.json

The error message I received is as follows:

/home/mostafa/programming/progwise/sales-gpt/python/SalesGPT/venv/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing LLMChain from langchain root module is no longer supported.
  warnings.warn(
/home/mostafa/programming/progwise/sales-gpt/python/SalesGPT/venv/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing LLMChain from langchain root module is no longer supported.
  warnings.warn(
/home/mostafa/programming/progwise/sales-gpt/python/SalesGPT/venv/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing PromptTemplate from langchain root module is no longer supported.
  warnings.warn(
Traceback (most recent call last):
  File "/home/mostafa/programming/progwise/sales-gpt/python/SalesGPT/run.py", line 5, in <module>
    from salesgpt.agents import SalesGPT
  File "/home/mostafa/programming/progwise/sales-gpt/python/SalesGPT/salesgpt/agents.py", line 35, in <module>
    class SalesGPT(Chain, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

It appears that there's a metaclass conflict issue in the SalesGPT class in the agents.py file. I'm not sure if this is specific to my environment or if it's a general issue with the codebase. Could someone please take a look and provide guidance on how to resolve this error?

Thank you!

Introducing custom product catalog

I've updated the sample_product_catalog.txt file to fit my needs. However, during the conversation I see that the AI generates a random set of products and prices as it wishes.
Which code files do I need to change to get the custom result I'm looking for?

How do I get sources when I use a CSV for product information?

chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=vectorstore.as_retriever())
result = chain({"question": query}, return_only_outputs=True)
# result will be a dictionary of this format --> {"answer": "", "sources": [] }
st.header("Answer")
st.write(result["answer"])

# Display sources, if available
sources = result.get("sources", "")
if sources:
    st.subheader("Sources:")
    sources_list = sources.split("\n")  # Split the sources by newline
    for source in sources_list:
        print(source)

This one I used in my another project. But in SalesGPT couldn't implement something similar to this. Tried Following:

import pickle

# Set up a knowledge base
def setup_knowledge_base(product_catalog: str = None):
    # """
    # We assume that the product knowledge base is simply a text file.
    # """
    # # load product catalog
    # with open(product_catalog, "r") as f:
    #     product_catalog = f.read()

    # text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)
    # texts = text_splitter.split_text(product_catalog)

    # llm = OpenAI(temperature=0, max_tokens=700)
    # embeddings = OpenAIEmbeddings()
    
    # docsearch = Chroma.from_texts(
    #     texts, embeddings, collection_name="product-knowledge-base"
    # )

    # knowledge_base = RetrievalQA.from_chain_type(
    #     llm=llm, chain_type="stuff", retriever=docsearch.as_retriever()
    # )
    file_path = product_catalog
    llm = OpenAI(temperature=0, max_tokens=700)
    with open(file_path, "rb") as f:
        vectorstore = pickle.load(f)
    

    knowledge_base = RetrievalQA.from_chain_type(
         llm=llm, chain_type="stuff", 
         retriever=vectorstore.as_retriever(search_type="mmr", search_kwargs={'k': 5, 'lambda_mult': 0.25},
                                            return_source_documents=True)
    )
    print(knowledge_base)
    return knowledge_base

    # knowledge_base = RetrievalQAWithSourcesChain.from_chain_type(
    #     llm=llm, 
    #     retriever=vectorstore.as_retriever(search_type="mmr", search_kwargs={'k': 5, 'lambda_mult': 0.25},
    #                                        return_source_documents=True)
    # )

    # print(knowledge_base)
    # return knowledge_base


def get_tools(product_catalog):
    # query to get_tools can be used to be embedded and relevant tools found
    # see here: https://langchain-langchain.vercel.app/docs/use_cases/agents/custom_agent_with_plugin_retrieval#tool-retriever

    # we only use one tool for now, but this is highly extensible!
    knowledge_base = setup_knowledge_base(product_catalog)
    tools = [
        Tool(
            name="ProductSearch",
            func=knowledge_base.run,
            description="useful for when you need to answer questions about product information",
        )
    ]

    return tools

Not able to run even after following all the installation steps

Ted Lasso:
Traceback (most recent call last):
  File "/root/SalesGPT/run.py", line 67, in <module>
    sales_agent.step()
  File "/root/SalesGPT/salesgpt/logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "/root/SalesGPT/salesgpt/agents.py", line 107, in step
    self._call(inputs={})
  File "/root/SalesGPT/salesgpt/agents.py", line 246, in _call
    ai_message = self.sales_conversation_utterance_chain.run(
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 510, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 310, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 108, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 120, in generate
    return self.llm.generate_prompt(
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py", line 459, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py", line 349, in generate
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py", line 339, in generate
    self._generate_with_cache(
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
    return self._generate(
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/litellm.py", line 306, in _generate
    response = self.completion_with_retry(
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/litellm.py", line 233, in completion_with_retry
    retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chat_models/litellm.py", line 70, in _create_retry_decorator
    litellm.APIConnectionError,
AttributeError: module 'litellm' has no attribute 'APIConnectionError'

1 - created a .env file with touch .env
2 - copied my API key there
3 - followed these steps:
Make sure you have Python 3.10+ and run:

pip install -r requirements.txt

Additional info: Python 3.11.2

Consistently Skipping Conversation Stages

Hello. Thanks for the great work on this agent, it has helped me develop a much better understanding of how to leverage langchain and openai.

That being said, I am having some trouble getting the agent to have a full conversation and address essential stages in the conversation. I made my own custom stages but kept the stage analyzer chain the same (with the new conversation stages). Any idea why this is happening?

Sales Agent to Use Local HuggingFace Model

How do I define a SalesGPT agent that uses a local HuggingFace-pulled model? I am trying to use Llama-2-7b-chat-hf as the base model, but I can't pull the full model directly because of my current GPU resources. I followed Llama's docs to make it smaller with the code shown below:

model_id = 'meta-llama/Llama-2-7b-chat-hf'

device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'

bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=bfloat16
)

hf_auth = "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" 
model_config = transformers.AutoConfig.from_pretrained(
    model_id,
    use_auth_token=hf_auth
)

model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
    use_auth_token=hf_auth
)

model.eval()

Then I wrapped the whole transformers pipeline with langchain's HuggingFacePipeline:

llm = HuggingFacePipeline(pipeline=generate_text)

The model's working fine on its own but I can't seem to make it work in SalesGPT.

sales_agent = SalesGPT.from_llm(llm, use_tools=True, verbose=False,
                            product_catalog = "sample_product_catalog.txt",
                            salesperson_name="Ted Lasso",
                            salesperson_role="Sales Representative",
                            company_name="Sleep Haven",
                            company_business='''Sleep Haven 
                            is a premium mattress company that provides
                            customers with the most comfortable and
                            supportive sleeping experience possible. 
                            We offer a range of high-quality mattresses,
                            pillows, and bedding accessories 
                            that are designed to meet the unique 
                            needs of our customers.'''
                            )

The error says:

AttributeError: 'HuggingFacePipeline' object has no attribute 'model'

Is there any workaround so that I can get this to work? Any suggestions would be appreciated. Thanks!

Models are interacting with themselves instead of the user

model_id = "NousResearch/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=2500,
    temperature=0.3,
    top_p=0.95,
)

llm = HuggingFacePipeline(pipeline=pipe)

We were trying to connect a custom model to SalesGPT and tried both Llama-2-7b and our custom model.
In both cases we faced an issue: in the first response to our initial input, the model generates a whole conversation between the sales agent and a user that it invented by itself.

Any ideas why we are facing this issue?

Problem in Russian setup

When I switch to the Russian setup file, I get several replies from Sales and User in a single turn.

[screenshot]

My assumption is that the extra answers come from either the history or a summarization.

[screenshot]

And when I do this in English, there is only one answer.

[screenshot]
