
Comments (8)

dosubot commented on July 17, 2024

Hey @1ly1mk! 👋 I'm here to help you with any bugs, questions, or contributions you have in mind. I'm your friendly bot, ready to assist while you wait for a human maintainer.

The NotImplementedError with the message "Messages passed in must be of odd length" occurs because the react chat mode in the LlamaIndex framework requires the sequence of messages to have an odd length. This is a specific requirement for the react mode, which is not being met in your current implementation.

To resolve this issue, ensure that the sequence of messages passed to the react chat mode has an odd length. You can achieve this by adding a check before passing the message to the chat engine and adjusting the message length if necessary.

Here is an example of how you might modify your code to ensure the message length is odd:

if prompt := st.chat_input("Your question"):
    add_to_message_history("user", prompt)

    # Ensure the message length is odd
    if len(prompt) % 2 == 0:
        prompt += " "  # Add a space to make the length odd

    # Display the new question immediately after it is entered
    with st.chat_message("user"):
        st.write(prompt)

    # Generate and stream the assistant's response
    with st.chat_message("assistant"):
        response = st.session_state["chat_engine"].stream_chat(prompt)
        response_str = ""
        response_container = st.empty()
        for token in response.response_gen:
            response_str += token
            response_container.write(response_str)
        add_to_message_history("assistant", response.response)

    # Save a reference to the response generator in session state
    # (note: the streaming loop above has already consumed it)
    st.session_state["response_gen"] = response.response_gen

This modification ensures that the message length is odd before it is passed to the chat engine, which should prevent the NotImplementedError from occurring [1][2].

To continue talking to Dosu, mention @dosu.


dosubot commented on July 17, 2024

The NotImplementedError with the message "Messages passed in must be of odd length" occurs because the react chat mode in the LlamaIndex framework requires the sequence of messages to have an odd length. This is a specific requirement for the react mode, which is not being met in your current implementation.

To resolve this issue, you need to ensure that the sequence of messages passed to the chat method has an odd length. The error is raised because the sequence of messages (not the individual message) must be of odd length.

Here's how you can modify your code to ensure the sequence length is odd:

  1. Before calling the chat method, check the length of the message sequence.
  2. If the length is even, add a dummy message to make it odd.

Here's an example implementation:

from typing import List
from llama_index.legacy.core.llms.types import ChatMessage, MessageRole

def ensure_odd_length(messages: List[ChatMessage]) -> List[ChatMessage]:
    if len(messages) % 2 == 0:
        # Add a dummy message to make the length odd
        messages.append(ChatMessage(role=MessageRole.SYSTEM, content="Dummy message"))
    return messages

# Example usage
messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="System message content"),
    ChatMessage(role=MessageRole.USER, content="User message content")
]  # This is even length

messages = ensure_odd_length(messages)  # This will make it odd length

response = chat_engine.chat(messages)

This function ensure_odd_length checks the length of the messages and appends a dummy message if the length is even, ensuring that the sequence length is always odd before passing it to the chat method [1][2].
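Note that padding with a SYSTEM message can interact badly with backends that also require strictly alternating USER/ASSISTANT roles (see the alternating-pairs error later in this thread); padding with a neutral user/assistant exchange, or trimming one message from the history, are alternative ways to restore an odd length.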

To continue talking to Dosu, mention @dosu.


dosubot commented on July 17, 2024

To resolve the NotImplementedError related to the message order, you need to ensure that the messages are ordered in alternating pairs of <MessageRole.USER: 'user'> and <MessageRole.ASSISTANT: 'assistant'>. You can use the merge_neighboring_same_role_messages function to help maintain this order.

Here's how you can use the merge_neighboring_same_role_messages function:

from typing import List

from llama_index.legacy.core.llms.types import ChatMessage, MessageRole
from llama_index.legacy.llms.gemini_utils import merge_neighboring_same_role_messages

def ensure_alternating_pairs(messages: List[ChatMessage]) -> List[ChatMessage]:
    # Merge neighboring messages with the same role
    merged_messages = merge_neighboring_same_role_messages(messages)

    # Skip a leading system message, if present, before checking the pairs
    start = 1 if merged_messages and merged_messages[0].role == MessageRole.SYSTEM else 0

    # Ensure the remaining messages are in alternating USER/ASSISTANT pairs
    for i in range(start, len(merged_messages) - 1, 2):
        if merged_messages[i].role != MessageRole.USER or merged_messages[i + 1].role != MessageRole.ASSISTANT:
            raise ValueError("Messages are not in alternating pairs of USER and ASSISTANT.")

    return merged_messages

# Example usage
messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="System message content"),
    ChatMessage(role=MessageRole.USER, content="User message content"),
    ChatMessage(role=MessageRole.ASSISTANT, content="Assistant message content")
]

messages = ensure_alternating_pairs(messages)

response = chat_engine.chat(messages)

This function first merges neighboring messages with the same role, skips any leading system message, and then checks that the remaining messages are in alternating pairs of USER and ASSISTANT. If the messages are not in the correct order, it raises a ValueError.

By using this approach, you can ensure that the message sequence meets the requirements of the chat_messages_to_conversational_kwargs function and avoid the NotImplementedError [1][2].
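For reference, a message sequence that satisfies both the odd-length check and the alternating-pairs check has a shape like the following (the contents are illustrative only):

messages = [
    ChatMessage(role=MessageRole.USER, content="First question"),
    ChatMessage(role=MessageRole.ASSISTANT, content="First answer"),
    ChatMessage(role=MessageRole.USER, content="Follow-up question"),
]  # odd length, alternating roles, ending on the user turn to be answered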

To continue talking to Dosu, mention @dosu.


dosubot commented on July 17, 2024

The ValidationError you're encountering indicates that the additional_kwargs field in the ChatMessage class cannot be None. This field must be a dictionary, even if it's empty.

To resolve this issue, ensure that the additional_kwargs field is always initialized as an empty dictionary if it's not provided. Here's how you can modify the merge_neighboring_same_role_messages function to handle this:

from typing import Sequence
from llama_index.legacy.core.llms.types import ChatMessage, MessageRole

def merge_neighboring_same_role_messages(
    messages: Sequence[ChatMessage],
) -> Sequence[ChatMessage]:
    # Gemini does not support multiple messages of the same role in a row, so we merge them
    merged_messages = []
    i = 0

    while i < len(messages):
        current_message = messages[i]
        # Initialize merged content with current message content
        merged_content = [current_message.content]

        # Check if the next message exists and has the same role
        while (
            i + 1 < len(messages)
            and messages[i + 1].role == current_message.role
        ):
            i += 1
            next_message = messages[i]
            merged_content.extend([next_message.content])

        # Ensure additional_kwargs is not None
        additional_kwargs = current_message.additional_kwargs or {}

        # Create a new ChatMessage with merged content
        merged_message = ChatMessage(
            role=current_message.role,
            content="\n".join([str(msg_content) for msg_content in merged_content]),
            additional_kwargs=additional_kwargs,
        )
        merged_messages.append(merged_message)
        i += 1

    return merged_messages

This modification ensures that additional_kwargs is always a dictionary, preventing the ValidationError from occurring [1].
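For reference, here is a minimal usage sketch (the message contents are made up for illustration) showing two neighboring USER messages being merged into one:

messages = [
    ChatMessage(role=MessageRole.USER, content="First part"),
    ChatMessage(role=MessageRole.USER, content="Second part"),
    ChatMessage(role=MessageRole.ASSISTANT, content="Got it."),
]

merged = merge_neighboring_same_role_messages(messages)
# merged now holds two messages:
#   USER: "First part\nSecond part"
#   ASSISTANT: "Got it."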

To continue talking to Dosu, mention @dosu.


logan-markewich commented on July 17, 2024

Some huggingface models are very strict about how message passing works. It's pretty hard to deal with; we would have to update the class to add some option to force the messages to be an odd length.


1ly1mk commented on July 17, 2024

I added a method to ensure the message is odd, but I still get the problem. It seems the odd-length check is done on the message sequence, not on the message itself.
[screenshot: the odd-length check implementation]

I printed 'messages' and it yielded an array of two ChatMessage objects, one from the system and another from the user.

[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='You are designed to help with a variety of tasks, from answering questions to providing summaries to other types of analyses.\n\n## Tools\n\nYou have access to a wide variety of tools. You are responsible for using the tools in any sequence you deem appropriate to complete the task at
hand.\nThis may require breaking the task into subtasks and using different tools to complete each subtask.\n\nYou
have access to the following tools:\n> Tool Name: query_engine_tool\nTool Description: Useful for running a natural language query\nagainst a knowledge base and get back a natural language response.\n\nTool Args: {"type": "object", "properties": {"input": {"title": "Input", "type": "string"}}, "required": ["input"]}\n\n\n\n## Output Format\n\nPlease answer in the same language as the question and use the following format:\n\n\nThought: The current language of the user is: (user\'s language). I need to use a tool to help me answer the question.\nAction: tool name (one of query_engine_tool) if using a tool.\nAction Input: the input to the tool, in a JSON format representing the kwargs (e.g. {"input": "hello world", "num_beams": 5})\n\n\nPlease ALWAYS start with a Thought.\n\nPlease use a valid JSON format for the Action Input. Do NOT do this {'input': 'hello world', 'num_beams': 5}.\n\nIf this format is used, the user will respond in the following format:\n\n\nObservation: tool response\n\n\nYou should keep repeating the above format till you have enough information to answer the question without using any more tools. At that point, you MUST respond in the one of the following two formats:\n\n\nThought: I can answer without using any more tools. I\'ll use the user\'s language to answer\nAnswer: [your answer here (In the same language as the user\'s question)]\n\n\n\nThought: I cannot answer the question with the provided tools.\nAnswer: [your answer here (In the same language as the user\'s question)]\n\n\n## Current Conversation\n\nBelow is the current conversation consisting of interleaving human and assistant messages.\n', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='who is paul? ', additional_kwargs={})]

Possibly there is a problem in llama_index\llms\huggingface\base.py implementation.

@logan-markewich @dosu


1ly1mk commented on July 17, 2024

I implemented the ensure-odd-message method and now I get another exception:

File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\llms\huggingface\base.py", line 450, in chat_messages_to_conversational_kwargs
raise NotImplementedError(
NotImplementedError: Didn't handle when messages aren't ordered in alternating pairs of (<MessageRole.USER: 'user'>, <MessageRole.ASSISTANT: 'assistant'>).

I had to modify the below line in chat_memory_buffer.py as m.content was throwing an error.
[screenshot: the modified line in chat_memory_buffer.py]

I guess it's too much to handle to make this work.

@logan-markewich @dosu


1ly1mk commented on July 17, 2024

While implementing the ordering, the below exception pops up:

File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\legacy\llms\gemini_utils.py", line 116, in merge_neighboring_same_role_messages
merged_message = ChatMessage(
^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\pydantic\v1\main.py", line 341, in init
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatMessage
additional_kwargs
none is not an allowed value (type=type_error.none.not_allowed)

@logan-markewich @dosu

