
Zozo Assistant

Zozo Assistant is a Python-based voice-controlled assistant that uses natural language processing to answer questions and perform various tasks. It can also be used without a microphone. The assistant can answer pre-defined questions, play music, report the weather, set alarms, tell the time and date, and engage in general conversation. The application leverages the OpenAI API for sophisticated language processing and includes a fallback pipeline model for situations where the API is not accessible.

The code in train.py trains a chatbot using a Linear Support Vector Machine (SVM) classifier. It reads a dataset from a file named datamain.txt, which contains a collection of questions and corresponding answers. The chatbot learns from this dataset and builds a pipeline model using CountVectorizer, TfidfTransformer, and LinearSVC from scikit-learn. The text data is preprocessed with a custom function that removes punctuation, converts text to lowercase, and applies lemmatization to reduce words to their base or root form. This preprocessing improves the model's ability to understand and respond to a variety of inputs. Once trained, the model is saved as model2.joblib for use when the API is not available.
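
As a rough illustration of what train.py does, the sketch below builds and saves a pipeline of this kind. The file names datamain.txt and model2.joblib come from the description above, but the alternating question/answer file layout and the exact preprocessing details are assumptions rather than the project's actual code.

import string

import joblib
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

nltk.download("wordnet", quiet=True)
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    # Strip punctuation, lowercase, and lemmatize each word.
    text = text.translate(str.maketrans("", "", string.punctuation)).lower()
    return " ".join(lemmatizer.lemmatize(word) for word in text.split())

# Assumed dataset layout: alternating question and answer lines in datamain.txt.
with open("datamain.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]
questions, answers = lines[::2], lines[1::2]

# The vectorizer applies the custom preprocessing before tokenizing.
model = Pipeline([
    ("vect", CountVectorizer(preprocessor=preprocess)),
    ("tfidf", TfidfTransformer()),
    ("clf", LinearSVC()),
])
model.fit(questions, answers)        # learns to map each question to its answer
joblib.dump(model, "model2.joblib")  # reused when the OpenAI API is unavailable

Because the preprocessing is attached to the vectorizer, the saved pipeline can later be applied to raw user input directly.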

Features

  • Speech recognition: Zozo can listen to user input through a microphone and convert it into text using the SpeechRecognition library.
  • Text-to-speech: Zozo can respond to users by converting text into speech using the pyttsx3 library.
  • Weather: Users can ask for the current weather information for a location, and Zozo will provide the information.
  • Music player: Zozo can play a collection of music files in the "music" folder. Users can control playback with voice commands or by entering options.
  • Alarm: Users can set alarms by specifying the duration in seconds. Zozo will play a sound after the specified time has elapsed.
  • Date and time: Users can inquire about the current date and time, and Zozo will provide the information.
  • OpenAI integration: Utilize the OpenAI API for advanced language processing (API key required).
  • Fallback pipeline model: If the OpenAI API is unavailable, the assistant falls back to a pre-trained pipeline model (a minimal sketch of this fallback follows this list).
  • Clean and minimalist UI design.
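
As a rough sketch of how the last two bullet points fit together, the snippet below tries the OpenAI API first and falls back to the locally trained pipeline when the call fails. It assumes the pre-1.0 openai package interface and the model2.joblib file produced by train.py; the actual routing logic in main.py may differ.

import joblib
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"         # placeholder, see Usage below
fallback_model = joblib.load("model2.joblib")  # trained offline by train.py

def answer(question):
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        return response["choices"][0]["message"]["content"]
    except Exception:
        # API unreachable or key missing: fall back to the local SVM pipeline.
        return fallback_model.predict([question])[0]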

Prerequisites

  • Python 3.x
  • OpenAI API key
  • OpenWeather API key
  • pyaudio
  • joblib
  • nltk
  • scikit-learn
  • pygame
  • tkinter
  • requests
  • pyttsx3
  • word2number
  • SpeechRecognition
  • jsonlib-python3
  • openai
  • numpy
  • pandas

Getting Started

Installation

  1. Clone the repository:
  git clone https://github.com/Amirrezahmi/Zozo-Assistant.git
  2. Navigate to the project directory:
  cd Zozo-Assistant
  3. Install the required dependencies by running the following command:
  pip install -r requirements.txt
  4. As noted above, this program uses pyttsx3 and was developed on a Windows machine, so some libraries, pyttsx3 in particular, may raise errors on other operating systems. pyttsx3 relies on your operating system's speech synthesis engine, so make sure the corresponding engine is installed and configured. On Linux, for example, pyttsx3 uses espeak, which you may need to install with the command below; a short snippet for checking the speech engine follows this list.
  sudo apt-get update && sudo apt-get install espeak
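
If you want to confirm that text-to-speech works on your system, a minimal check (assuming pyttsx3's default voice settings) looks like this:

import pyttsx3

engine = pyttsx3.init()          # SAPI5 on Windows, espeak on Linux, NSSpeechSynthesizer on macOS
engine.say("Hello, I am Zozo.")
engine.runAndWait()

If this speaks the sentence out loud, the speech engine is configured correctly.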

Usage

  1. Open the main.py and ui.py files and locate the OPENAI_API_KEY and apiKey variables. Paste your OpenAI API key into the OPENAI_API_KEY variable and your OpenWeather API key into the apiKey variable (the sketch after this list shows roughly how these keys are used).
  2. If you don't have a microphone, or you run into issues with the microphone-related functions, answer "no" when main.py asks at startup whether you have a microphone. ui.py does not have a microphone button yet, so you type your prompts or use the buttons; a microphone button will be added soon.
  3. If model2.joblib doesn't work, delete it and run train.py again, since a saved model sometimes only loads on the specific Python version it was trained with.
  4. Run the main.py script (console-based) or the ui.py script (user interface (UI)) to start the Zozo Assistant.
  5. At the beginning, say "Zozo" or "Hey Zozo" to get the assistant's attention (if you chose the microphone option at startup).
  6. Interact with the chatbot by asking questions, playing music, checking the weather, setting alarms, requesting the current date and time, or with any other prompts.
  7. To exit the chatbot, say "bye", "goodbye", or "exit".
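
To make step 1 more concrete, here is a minimal sketch of how the OpenWeather key can be used for a current-weather request. The apiKey variable name comes from the project files mentioned above, while the get_weather helper and the exact request parameters are illustrative assumptions.

import requests

apiKey = "YOUR_OPENWEATHER_API_KEY"  # paste your OpenWeather key here

def get_weather(city):
    # Hypothetical helper: queries OpenWeather's current-weather endpoint.
    url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": apiKey, "units": "metric"}
    data = requests.get(url, params=params, timeout=10).json()
    return f"{data['weather'][0]['description']}, {data['main']['temp']} degrees Celsius"

print(get_weather("London"))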

Basic and Intuitive User Interface

Zozo Assistant has grown from a console-only project into one with a graphical user interface (UI). Although the UI is still basic, it provides a simple and intuitive way for users to interact with the assistant, enhancing the overall experience.

Key Features:

  • Basic and minimalist UI design that focuses on functionality.
  • Easy navigation through the assistant's features.
  • Intuitive controls and straightforward interaction methods.
  • Streamlined functionality for seamless execution of commands.

Despite its basic nature, the UI of Zozo Assistant ensures that users can effortlessly interact with the assistant's capabilities without any unnecessary complexity. The clean and minimalist design allows for a distraction-free experience, enabling users to focus on the assistant's functionality and make the most out of their interactions.

Examples

This section includes hands-on, practical examples demonstrating how to interact with the Zozo-Assistant. Here, you will find examples for both console-based and GUI-based interactions.

Console-based Interactions

Uncover different scenarios for interacting with the Zozo-Assistant using the console. This section delves into specific situations, portraying what happens when the microphone is accessible and when it isn't.
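
Both scenarios come down to how user input is collected. The sketch below shows one way such a branch can look, assuming the SpeechRecognition library with the Google recognizer; the prompt wording and the get_user_input helper are assumptions, not the exact code in main.py.

import speech_recognition as sr

has_mic = input("Do you have a microphone? (yes/no) ").strip().lower().startswith("y")
recognizer = sr.Recognizer()

def get_user_input():
    # Voice input when a microphone is available, typed input otherwise.
    if has_mic:
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio)
        except (sr.UnknownValueError, sr.RequestError):
            return ""
    return input("You: ")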

Scenario: Microphone Inaccessible

This part describes the interaction flow when the user does not grant microphone access. After the initial question from the program about microphone accessibility, if the user responds with "no", the subsequent interaction process unfolds as demonstrated in the screenshot below:

Scenario: Microphone Accessible

A video demonstration showing what happens when the user grants microphone access, that is, when the microphone accessibility question is answered with "yes". The video shows how the program responds and what steps the user needs to take next.

vid1.4.mp4

GUI Interactions

This subsection contains a visual guide on how to navigate and interact with the Zozo-Assistant using the graphical user interface (GUI).

I will make minor cosmetic improvements to ui.py in the future, so the interface may look slightly different from what is shown in this video; this recording is just a software test.

ggg.2.mp4

Implementation on Raspberry Pi

In this section, we discuss how to implement the Zozo-Assistant project on a Raspberry Pi. The Raspberry Pi is a small, affordable single-board computer that provides enough processing power to run the program. We used a Raspberry Pi 4 Model B with 8 GB of RAM for this implementation.

Installation

To install the operating system on the Raspberry Pi, follow these steps:

  1. Download the desired operating system image for Raspberry Pi 4, such as Raspbian (now called Raspberry Pi OS), Ubuntu, or OSMC.
  2. Use software like Etcher to write the downloaded operating system image to a microSD memory card.
  3. Insert the memory card into the Raspberry Pi and connect the necessary cables, including the HDMI cable for monitor connection and power cable.
  4. Power up the Raspberry Pi.

Instead of using an HDMI cable, we connected the Raspberry Pi to the modem with a LAN cable. We obtained its IP address using the PuTTY program and connected to its desktop using the RealVNC Viewer program.

Setup

When the Raspberry Pi boots up, follow these steps for the initial setup:

  1. Configure the date, language, network, and other settings as required.
  2. Update the operating system so that you have the latest packages and fixes; run the update from the command line (the commands are shown in the following sections).

Installing pip and libraries

To install pip on the Raspberry Pi and use it to install libraries from the terminal, follow these steps:

  1. Open a terminal on your Raspberry Pi.
  2. Update the package list by running the following command:
sudo apt-get update
  3. Install the python3-pip package, which provides the pip command for Python 3, by running the following command:
sudo apt-get install python3-pip
  4. Once the installation is complete, you can use the pip command to install libraries. For example, to install a library named library-name, run the following command:
pip install library-name

Replace library-name with the actual name of the library you want to install.

Running the Program

To run the Zozo-Assistant program on the Raspberry Pi, follow these steps:

  1. Prepare the Raspberry Pi by making sure Python 3 is installed and up to date, using the following commands:
sudo apt update
sudo apt upgrade
sudo apt install python3
  2. Transfer the Python program (e.g., main.py) from your development machine to the Raspberry Pi. You can use FTP, SCP, or a USB flash drive.
  3. Open a terminal on the Raspberry Pi, navigate to the directory where the program is saved, and execute it with the following command:
python3 main.py

This command will run the Zozo-Assistant program and display the result.

Testing and Debugging

Before running the program on the Raspberry Pi, it's recommended to test and debug it in a Linux environment. Ensure that Python is installed on the board, and adjust the code if any compatibility issues arise. In our case, we ran into problems caused by the difference in Python versions between our development environment (Windows) and the Raspberry Pi (Linux), and we had to adapt the code to the older Python version and fix the resulting bugs one by one to make it work.

Additionally, make sure to install all the required libraries in the Linux environment for successful program execution.

Please note that these instructions are specific to the Zozo-Assistant project and may vary depending on the specific requirements of your project. For further guidance, refer to the documentation provided by Raspberry Pi and the Zozo-Assistant project.

That's it! This section has given an overview of how to implement the Zozo-Assistant project on a Raspberry Pi, including installing the operating system, setting it up, and running the program. You may need to adapt these steps to your own setup and address specific issues that come up during the process.

Happy coding!

Contributing

Contributions are welcome! If you'd like to contribute to this project, please follow these steps:

  1. Fork the repository.
  2. Create a new branch: git checkout -b my-new-branch.
  3. Make your changes and commit them: git commit -m 'Add some feature'.
  4. Push to the branch: git push origin my-new-branch.
  5. Submit a pull request.

License

This project is licensed under the MIT License.

Credits

  • OpenAI - For providing the chatbot API.
  • tkinter - For creating the user interface (UI) components.
  • NLTK - Natural Language Toolkit for text processing.
  • scikit-learn - Machine learning library for building the question-answering model.
  • PyAudio - For audio input/output functionality.
  • pygame - Library for playing music files.
  • pyttsx3 - Text-to-speech library for speech output.
  • SpeechRecognition - Library for speech recognition functionality.
  • word2number - Library for converting words to numbers.

Contact

For any questions or inquiries, please contact [email protected]
