Comments (2)
@deveshasha Hello there! I'm here to help you with any issues you encounter. If you have any questions or need assistance with bug fixes, feel free to ask!
To address the issue where the parse_to_query_result method in the QdrantVectorStore class expects a "text" key in the metadata, you can modify the method to check whether the "text" key exists before attempting to access it. If the key does not exist, you can handle it gracefully by setting a default value or skipping the operation.
Here is an example modification:
from typing import Any, List, cast

from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import VectorStoreQueryResult
from llama_index.core.vector_stores.utils import (
    legacy_metadata_dict_to_node,
    metadata_dict_to_node,
)
from qdrant_client.http.models import Payload


def parse_to_query_result(self, response: List[Any]) -> VectorStoreQueryResult:
    """Convert vector store response to VectorStoreQueryResult.

    Args:
        response: List of results returned from the vector store.
    """
    nodes = []
    similarities = []
    ids = []
    for point in response:
        payload = cast(Payload, point.payload)
        try:
            node = metadata_dict_to_node(payload)
        except Exception:
            metadata, node_info, relationships = legacy_metadata_dict_to_node(
                payload
            )
            text = payload.get("text", "Default text")  # fall back if "text" key is missing
            node = TextNode(
                id_=str(point.id),
                text=text,
                metadata=metadata,
                start_char_idx=node_info.get("start", None),
                end_char_idx=node_info.get("end", None),
                relationships=relationships,
            )
        nodes.append(node)
        ids.append(str(point.id))
        try:
            similarities.append(point.score)
        except AttributeError:
            # certain requests do not return a score
            similarities.append(1.0)
    return VectorStoreQueryResult(nodes=nodes, similarities=similarities, ids=ids)
In this modification, the payload.get("text", "Default text") line checks whether the "text" key exists in the payload; if it does not, text is set to "Default text". This approach ensures that the method handles missing "text" keys gracefully by providing a default value [1][2][3].
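The fallback above relies only on Python's built-in dict.get, so its behavior can be sketched independently of Qdrant. The payload dicts below are hypothetical examples, not real Qdrant data:

```python
# Minimal illustration of the dict.get fallback used in the fix above.
# These payloads are made-up examples; a real Qdrant payload holds node metadata.
payload_with_text = {"text": "hello world", "source": "doc1.md"}
payload_without_text = {"source": "doc2.md"}  # legacy payload missing "text"

# dict.get returns the second argument when the key is absent,
# so TextNode never receives text=None.
text_a = payload_with_text.get("text", "Default text")
text_b = payload_without_text.get("text", "Default text")

print(text_a)  # -> hello world
print(text_b)  # -> Default text
```

This is why the patched code avoids the "none is not an allowed value" validation error: TextNode always receives a string, never None.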
To continue talking to Dosu, mention @dosu.
I was getting the same error:
ValidationError: 1 validation error for TextNode
text
none is not an allowed value (type=type_error.none.not_allowed)
I was creating a VectorStoreIndex from a collection that had been created a few weeks earlier. After some hours of debugging and trying various things, I realized that creating a new collection in Qdrant with OpenAI embeddings resolved the error.
I did not research what exactly caused the bug, but I suspect something changed in the way the Qdrant vector store is created that no longer matched the old format of the stored embeddings.
I'm not sure if that makes sense, but it resolved the error. Just sharing in case someone else has the same problem :)