
pinecone-io / examples

Jupyter Notebooks to help you get hands-on with Pinecone vector databases

License: MIT License

Languages: Jupyter Notebook 99.98%, Python 0.02%
ai jupyter-notebook llm python semantic-search vector-database

Introduction

Long term memory for Artificial Intelligence

Pinecone Examples

This repository is a collection of sample applications and Jupyter Notebooks that you can run, download, study and modify in order to get hands-on with Pinecone vector databases and common AI patterns, tools and algorithms.

Two kinds of examples

This repo contains:

  1. Production-ready examples in ./docs that receive regular review and support from the Pinecone engineering team
  2. Examples optimized for learning and exploring AI techniques and application-building patterns in ./learn, created and maintained by the Pinecone Developer Advocacy team

We appreciate your feedback and contributions. Please see our contribution guide for information on how to contribute to this repo.

Getting started

Please see our Getting started guide in our learn section for detailed instructions and a walkthrough of setting up and running a Jupyter Notebook in Google Colab for experimentation.

We love feedback!

As you work through these examples, if you encounter any problems or things that are confusing or don't work quite right, please open a new issue :octocat:.

Getting support and further reading

Visit our:

Collaboration

We truly appreciate your contributions to help us improve and maintain this community resource!

If you've got ideas for improvements, or want to contribute a quick fix like correcting a typo or patching an obvious bug, feel free to open a new issue or even a pull request. If you're considering a larger or more involved change to this repository, its organization, or the functionality of one of the examples, please first open a new issue :octocat: describing your proposed changes so we can discuss them together before you invest a ton of time or effort. Thanks for your understanding and collaboration.

People

Contributors

acatav, akojanic, ashraq1455, aulorbe, bhavishpahwa, cfossguy, co7e, coryroyce, dandv, dosticjelena, fsxfreak, fy, gdj0nes, gkogan, halfabrane, jaglinux, jamescalam, jseldess, keitazoumana, miararoy, ne-msft, rajat08, rschwabco, sriramgs, stankokuveljic, startakovsky, yaakovs, zackproser, zboyles, zeke-emerson


Issues

Notebook Crashes when loading model to CUDA

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Hi, I was executing your notebook, but my kernel crashes every time I try to load the model. I am using an NVIDIA GeForce RTX 4090 GPU. It crashes at the following point:


from torch import cuda, bfloat16
import transformers

device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    trust_remote_code=True,
    torch_dtype=bfloat16,
    max_seq_len=2048
)
model.eval()
model.to(device)
print(f"Model loaded on {device}")

I am using the latest version of the transformers library and the model checkpoint. I have also ensured that all the dependencies are up to date.

Any assistance would be greatly appreciated. Thank you!

Expected Behavior

The model should load on CUDA.

Steps To Reproduce

Ran it the same way as in your code.

If possible, can you confirm whether you were using Colab Pro?

Relevant log output

No response

Environment

- **OS**: Ubuntu
- **Language version**: Python3

My system specs:
- NVIDIA GeForce RTX 4090
- PyTorch 2.0.1
- CUDA 11.7

Additional Context

No response

Confusing claim in "Langchain AI Handbook" - Chapter 6

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Chapter 6 of the "Langchain AI Handbook" reads:

Observation: The ratio of the prices of stocks 'ABC' and 'XYZ' on January 3rd is 0.2907268170426065 and the ratio of the same prices of the same stocks on January the 4th is 0.2830188679245283.
Thought: Do I need to use a tool? No
AI: The answer is 0.4444444444444444. Is there anything else I can help you with?

> Finished chain.
Spent a total of 2518 tokens

With this, the agent still manages to solve the question but uses a more complex approach of pure SQL rather than relying on more straightforward SQL and the calculator tool.


However, the agent actually fails to solve the question, since it hallucinates the end result (0.29 * 0.28 != 0.44).
Just as a minor improvement, the sentence could be changed to say that the agent actually fails to solve complex queries.

Expected Behavior

N/A

Steps To Reproduce

N/A

Relevant log output

No response

Environment

- **OS**:
- **Language version**:
- **Pinecone client version**:

Additional Context

No response

Question on Hybrid search ecomm example

Hello :)

Simple question: following the same approach as the e-commerce hybrid search example, is it possible to have multiple images per product? I don't see a way to encode multiple images for the same data record (when we do it in batches).
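
One possible pattern (a sketch, not from the notebook; product_id and image_embeddings are hypothetical names): upsert one record per image and tie the records together with shared product metadata, so a metadata filter or the id prefix can group them back into one product.

product_id = "sku-123"  # hypothetical product identifier
vectors = []
for j, img_emb in enumerate(image_embeddings):  # several image embeddings, one product
    vectors.append((
        f"{product_id}-img{j}",       # unique record id per image
        img_emb,                      # vector for this image
        {"product_id": product_id},   # shared metadata ties the images to the product
    ))
index.upsert(vectors=vectors)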

[Bug] redundancy in URL: gpt-4-langchain-docs.ipynb

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

The domain_full variable appends en/latest onto a URL that already ends in en/latest, causing the BS4 crawler to recursively 404 across the LangChain docs.

from bs4 import BeautifulSoup
import urllib.parse
import html
import re

domain = "https://python.langchain.com/en/latest/"
domain_full = domain + "en/latest/"  # BUG: domain already ends in "en/latest/", so the path segment is duplicated

soup = BeautifulSoup(res.text, 'html.parser')  # res comes from an earlier request in the notebook
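
The presumed one-line fix, following the diagnosis above (a sketch, not verified against the notebook):

domain = "https://python.langchain.com/en/latest/"
domain_full = domain  # already ends in "en/latest/", so there is nothing to append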

Expected Behavior

it should recursively pull the documentation from langchain

Steps To Reproduce

Attempt to run the script; by the fourth notebook action it will 404.

Relevant log output

No response

Environment

- M1 Mac

Additional Context

No response

[Bug] Error in Step 12 of the colab notebook for the NER Powered Semantic Search example

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

I was trying to run this example on colab: NER-Powered Semantic Search

Got an error at Step 12 ("create the embeddings. We do this in batches of 64 to avoid overwhelming machine resources or API request limits"); see the screenshot below.

[screenshot: error output]

Expected Behavior

Expected no error in this step, as with the previous steps in the Colab notebook.

Steps To Reproduce

Simply click the link provided on the Example page to open the colab notebook example provided by Pinecone and execute each step in sequence on the colab notebook.

Relevant log output

No response

Environment

- **OS**:
- **Language version**:
- **Pinecone client version**:

Additional Context

No response

Query vector dimension 76800 does not match the dimension of the index 768

I get the following error:
ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Mon, 06 Mar 2023 17:34:00 GMT', 'x-envoy-upstream-service-time': '2', 'content-length': '110', 'server': 'envoy', 'connection': 'close'})
HTTP response body: {"code":3,"message":"Query vector dimension 76800 does not match the dimension of the index 768","details":[]}

For the code snippet:

import random

batch_size = 100
triplets = []

for i in tqdm(range(0, len(pairs), batch_size)):
    # embed queries and query pinecone in batches to minimize network latency
    i_end = min(i+batch_size, len(pairs))
    queries = [pair[0] for pair in pairs[i:i_end]]
    print(len(queries))
    pos_passages = [pair[1] for pair in pairs[i:i_end]]
    #print((pos_passages))
    # create query embeddings
    query_embs = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
    # search for top_k most similar passages
    res = index.query(query_embs.tolist(), top_k=1)
    # iterate through queries and find negatives
    for query, pos_passage, query_res in zip(queries, pos_passages, res['results']):
        top_results = query_res['matches']
        # shuffle results so they are in random order
        random.shuffle(top_results)
        for hit in top_results:
            neg_passage = pairs[int(hit['id'])][1]
            # check that we're not just returning the positive passage
            if neg_passage != pos_passage:
                # if not we can add this to our (Q, P+, P-) triplets
                triplets.append(query+'\t'+pos_passage+'\t'+neg_passage)
                break
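
A hedged reading of the error: 76800 = 100 x 768, so the whole batch of 100 query embeddings appears to be flattened into a single query vector. Querying one embedding at a time (a sketch, assuming the pinecone-client 2.x query signature) sidesteps the mismatch:

for emb in query_embs.tolist():
    # each emb is a single 768-dimensional vector, matching the index dimension
    res = index.query(vector=emb, top_k=1)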

Can't get Table-QA demo to run

Hello!

I am interested in reproducing your process from this notebook:
https://github.com/pinecone-io/examples/blob/7b4de065f9fb3b3bfe5458b06e6746bac4bb37c4/search/question-answering/table-qa.ipynb

However, I am repeatedly getting stuck on one line. In the section "Initialize Table Reader" you run the QA pipeline,
pipe(table=tables[id], query=query)

This usually raises an exception: IndexError: index out of range in self

I'm running python 3.9.16 on an AWS ubuntu GPU instance.

Wondering if you might be able to provide any tips for debugging or ruling out variables.

FWIW I also tried the Google Colab version of this notebook and ran into problems on the same line.

Thank you

testing_lsh seems to be broken?

Hi, I've been trying to view your testing_lsh .ipynb notebook, but I have issues opening it both on GitHub and in Jupyter Notebook. Could you please take a look?

chunking system for long upserted files

I have /upsert'ed a document (JSON) using chatgpt-retrieval-plugin that looks like the following:

{
        "id": "Series",
        "text": " Series[f,x,Subscript[x, 0],n]  generates a power series expansion for f about the point x=Subscript[x, 0] to order (x-Subscript[x, 0])^n, where n is an explicit integer. Series[f,x->Subscript[x, 0]] generates the leading term of a power series expansion for f about the point x=Subscript[x, 0]. Series[f,x,Subscript[x, 0],Subscript[n, x],y,Subscript[y, 0],Subscript[n, y],…] successively finds series expansions with respect to x, then y, etc.Some related keywords are approximate formulas, approximation of functions, approximations, asymptotic expansions, series expansions, Taylor polynomial, Maclaurin series, Taylor series, power series, Laurent series, Puiseux series, asympt, laurent, mtaylor, order, powcreate, powexp, powlog, powpoly, powser, powsolve, series, taylor, series, Taylor series. ",
        "metadata": {
            "notes": "Series can construct standard Taylor series, as well as certain expansions involving negative powers, fractional powers, and logarithms. Series detects certain essential singularities. On[Series::esss] makes Series generate a message in this case. Series can expand about the point x=∞. Series[f,{x,0,n}] constructs Taylor series for any function f according to the formula f(0)+f^′ (0)x+f^′′ (0)x^2/2+… f^(n) (0)x^n/n!. Series effectively evaluates partial derivatives using D. It assumes that different variables are independent. The result of Series is usually a SeriesData object, which you can manipulate with other functions. Normal[series] truncates a power series and converts it to a normal expression. SeriesCoefficient[series,n] finds the coefficient of the n^th-order term. The following options can be given: Analytic FEPrivate`ImportImage[FrontEnd`FileName[{Documentation, Miscellaneous}, ExampleJumpLink.png]] True whether to treat unrecognized functions as analytic Assumptions FEPrivate`ImportImage[FrontEnd`FileName[{Documentation, Miscellaneous}, ExampleJumpLink.png]] $Assumptions assumptions to make about parameters SeriesTermGoal Automatic number of terms in the approximation"
        }
    }

However, when I /query, it returns something like this:

"query": "Laurent expansion",
      "results": [
        {
          "id": "Series_1",
          "text": "related keywords are approximate formulas, approximation of functions, approximations, asymptotic expansions, series expansions, Taylor polynomial, Maclaurin series, Taylor series, power series, Laurent series, Puiseux series, asympt, laurent, mtaylor, order, powcreate, powexp, powlog, powpoly, powser, powsolve, series, taylor, series, Taylor series.",
          "metadata": {
            "notes": "Series can construct standard Taylor series, as well as certain expansions involving negative powers, fractional powers, and logarithms. Series detects certain essential singularities. On[Series::esss] makes Series generate a message in this case. Series can expand about the point x=∞. Series[f,{x,0,n}] constructs Taylor series for any function f according to the formula f(0)+f^′ (0)x+f^′′ (0)x^2/2+… f^(n) (0)x^n/n!. Series effectively evaluates partial derivatives using D. It assumes that different variables are independent. The result of Series is usually a SeriesData object, which you can manipulate with other functions. Normal[series] truncates a power series and converts it to a normal expression. SeriesCoefficient[series,n] finds the coefficient of the n^th-order term. The following options can be given: Analytic FEPrivate`ImportImage[FrontEnd`FileName[{Documentation, Miscellaneous}, ExampleJumpLink.png]] True whether to treat unrecognized functions as analytic Assumptions FEPrivate`ImportImage[FrontEnd`FileName[{Documentation, Miscellaneous}, ExampleJumpLink.png]] $Assumptions assumptions to make about parameters SeriesTermGoal Automatic number of terms in the approximation",
            "document_id": "Series"
          },
          "embedding": null,
          "score": 0.779730856
        }

Note: here, the first part of the "text" field is missing.
What sort of chunking mechanism is being used here? Is it the RecursiveCharacterTextSplitter explained here?

Also, if so, what sort of context-overlap method does it use if one part of the answer for the query lies in the first chunk and the second part lies in the next chunk?
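
For illustration only (this doesn't confirm what the retrieval plugin actually uses), LangChain's RecursiveCharacterTextSplitter exposes a chunk_overlap parameter so that neighboring chunks share boundary context:

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,    # max characters per chunk
    chunk_overlap=40,  # characters repeated between adjacent chunks
)
chunks = splitter.split_text(text)  # text: the long document being upserted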

[Bug] Double quotation around SQL query issue in 06-langchain-agents.ipynb

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

When replicating in a Python notebook, the SQL Database tool produces this error because of the double quotation marks (""):
OperationalError: near ""SELECT stock_ticker, price, date FROM stocks WHERE (stock_ticker = 'ABC' OR stock_ticker = 'XYZ') AND (date = '2023-01-03' OR date = '2023-01-04') LIMIT 5"": syntax error

Expected Behavior

Single quotation marks: "SELECT stock_ticker, price, date FROM stocks WHERE (stock_ticker = 'ABC' OR stock_ticker = 'XYZ') AND (date = '2023-01-03' OR date = '2023-01-04') LIMIT 5"

This can be done by changing the description to:

sql_tool = Tool( ... ,description="Useful for when you need to answer questions about stocks and their prices. The SQL query should be outputted plainly, do not surround it in quotes or anything else.")

Steps To Reproduce

Run with colab

Relevant log output

No response

Environment

Colab and Replit

Additional Context

Adding the extra description seems to prevent double quotes

Retrieval Enhanced Generative Question Answering with OpenAI - Dictionary problem

Hi,

I've been following the code here: https://github.com/pinecone-io/examples/blob/master/generation/generative-qa/openai/gen-qa-openai/gen-qa-openai.ipynb with a different dataset, and I get an error (TypeError: string indices must be integers) on ids_batch = [x['id'] for x in meta_batch] and further down when cleaning up the metadata.

from tqdm.auto import tqdm
from time import sleep

batch_size = 100  # how many embeddings we create and insert at once

for i in tqdm(range(0, len(new_data), batch_size)):
    # find end of batch
    i_end = min(len(new_data), i+batch_size)
    meta_batch = new_data[i:i_end]
    # get ids
    ids_batch = [x['id'] for x in meta_batch]
    # get texts to encode
    texts = [x['text'] for x in meta_batch]
    # create embeddings (try-except added to avoid RateLimitError)
    try:
        res = openai.Embedding.create(input=texts, engine=embed_model)
    except:
        done = False
        while not done:
            sleep(5)
            try:
                res = openai.Embedding.create(input=texts, engine=embed_model)
                done = True
            except:
                pass
    embeds = [record['embedding'] for record in res['data']]
    # cleanup metadata
    meta_batch = [{
        'start': x['start'],
        'end': x['end'],
        'title': x['title'],
        'text': x['text'],
        'url': x['url'],
        'published': x['published'],
        'channel_id': x['channel_id']
    } for x in meta_batch]
    to_upsert = list(zip(ids_batch, embeds, meta_batch))
    # upsert to Pinecone
    index.upsert(vectors=to_upsert)

I have got around this by amending the code as shown below. However, I'm not sure why there is a problem with accessing the dictionary keys (plus I'm not sure if my amendments make the vectors inaccurate). I'd appreciate any help you can provide!

from tqdm.auto import tqdm
from time import sleep

batch_size = 100  # how many embeddings we create and insert at once

for i in tqdm(range(0, len(data), batch_size)):
    # find end of batch
    i_end = min(len(data), i+batch_size)
    meta_batch = data[i:i_end]
    # get ids
    ids_batch = meta_batch['acn_num_ACN']#[x['acn_num_ACN'] for x in meta_batch]
    # get texts to encode
    texts = meta_batch['Report 1_Narrative']#[x['Report 1_Narrative'] for x in meta_batch]
    # create embeddings (try-except added to avoid RateLimitError)
    try:
        res = openai.Embedding.create(input=texts, engine=embed_model)
    except:
        done = False
        while not done:
            sleep(5)
            try:
                res = openai.Embedding.create(input=texts, engine=embed_model)
                done = True
            except:
                pass
    embeds = [record['embedding'] for record in res['data']]
    # cleanup metadata
    #meta_batch = [{
  #      'acn_num_ACN': x['acn_num_ACN'],
  #      'Time_Date': x['Time_Date'],
  #     'Report 1_Narrative': x['Report 1_Narrative'],
   #     'Report 1.2_Synopsis': x['Report 1.2_Synopsis']
  #  } for x in meta_batch]
    #meta_batch = [{'acn_num_ACN': ['acn_num_ACN'], 'Time_Date': ['Time_Date'], 'Report 1_Narrative': ['Report 1_Narrative'], 'Report 1.2_Synopsis': ['Report 1.2_Synopsis']}]
    to_upsert = list(zip(ids_batch, embeds, data))
    # upsert to Pinecone
    index.upsert(vectors=to_upsert)
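
A hedged guess at the root cause: if data behaves like a mapping of column names to lists (for example, a sliced Hugging Face Dataset), then iterating over data[i:i_end] yields column-name strings, and x['id'] indexes into a string. Converting the columnar batch back into row dicts (a sketch; the field names are the reporter's own) would restore the original per-record loop:

batch = data[i:i_end]  # e.g. {'acn_num_ACN': [...], 'Report 1_Narrative': [...], ...}
rows = [dict(zip(batch.keys(), vals)) for vals in zip(*batch.values())]
ids_batch = [str(r['acn_num_ACN']) for r in rows]
texts = [r['Report 1_Narrative'] for r in rows]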

[Bug] Unable to reproduce zero-shot object detection

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

I tried running the zero-shot object detection notebook at https://www.pinecone.io/learn/series/image-search/zero-shot-object-detection-clip/ without any changes, but I am unable to reproduce the result given on the website. The bounding boxes output for the cat and butterfly are identical.

Kindly help me debug this.

Expected Behavior

The expected output is the set of correct bounding boxes for cat and butterfly.

Steps To Reproduce

The colab notebook with outputs is https://colab.research.google.com/drive/1Js1aame9wOYSkiOz0Fi14aemLSeycR1R?usp=sharing

Relevant log output

No response

Environment

- **OS**: Colab runtime 
- **Language version**:
- **Pinecone client version**:

Additional Context

No response

[Bug] Example stops with error when running with GPU (CUDA) "Expected all tensors to be on the same device, but found at least two devices"

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

When running the example abstractive-question-answering.ipynb I get the following error in cell 18, calling generate_answer(query)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[38], line 1
----> 1 generate_answer(query)

Cell In[37], line 5, in generate_answer(query)
      3 inputs = tokenizer([query], max_length=1024, return_tensors="pt")
      4 # use generator to predict output ids
----> 5 ids = generator.generate(inputs["input_ids"], num_beams=2, min_length=20, max_length=40)
      6 # use tokenizer to decode the output ids
      7 answer = tokenizer.batch_decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/transformers/generation/utils.py:1329, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
   1321         logger.warning(
   1322             "A decoder-only architecture is being used, but right-padding was detected! For correct "
   1323             "generation results, please set `padding_side='left'` when initializing the tokenizer."
   1324         )
   1326 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
   1327     # if model is encoder decoder encoder_outputs are created
   1328     # and added to `model_kwargs`
-> 1329     model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
   1330         inputs_tensor, model_kwargs, model_input_name
   1331     )
   1333 # 5. Prepare `input_ids` which will be used for auto-regressive generation
   1334 if self.config.is_encoder_decoder:

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/transformers/generation/utils.py:642, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
    640 encoder_kwargs["return_dict"] = True
    641 encoder_kwargs[model_input_name] = inputs_tensor
--> 642 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
    644 return model_kwargs

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/transformers/models/bart/modeling_bart.py:811, in BartEncoder.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
    808     raise ValueError("You have to specify either input_ids or inputs_embeds")
    810 if inputs_embeds is None:
--> 811     inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
    813 embed_pos = self.embed_positions(input)
    814 embed_pos = embed_pos.to(inputs_embeds.device)

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/torch/nn/modules/sparse.py:162, in Embedding.forward(self, input)
    161 def forward(self, input: Tensor) -> Tensor:
--> 162     return F.embedding(
    163         input, self.weight, self.padding_idx, self.max_norm,
    164         self.norm_type, self.scale_grad_by_freq, self.sparse)

File ~/.pyenv/versions/3.11.3/envs/pinecone/lib/python3.11/site-packages/torch/nn/functional.py:2210, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2204     # Note [embedding_renorm set_grad_enabled]
   2205     # XXX: equivalent to
   2206     # with torch.no_grad():
   2207     #   torch.embedding_renorm_
   2208     # remove once script supports set_grad_enabled
   2209     _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)

Expected Behavior

Should run without errors

Steps To Reproduce

On a system with an NVIDIA GPU supported by PyTorch:

  1. clone the repo
  2. start jupyter lab
  3. open and run the notebook

Relevant log output

N/A

Environment

- **OS**: Ubuntu 22.04
- **Language version**: Python 3.11.3
- **Pinecone client version**: 2.2.2

Additional Context

To fix the bug, change the following line in function generate_answer(query):
inputs = tokenizer([query], max_length=1024, return_tensors="pt")
to:
inputs = tokenizer([query], max_length=1024, return_tensors="pt").to(device)
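
For context, here is a sketch of the full fixed function as implied by the traceback (tokenizer, generator, and device come from earlier notebook cells):

def generate_answer(query):
    # tokenize the query and move the input tensors to the same device as the model
    inputs = tokenizer([query], max_length=1024, return_tensors="pt").to(device)
    # use generator to predict output ids
    ids = generator.generate(inputs["input_ids"], num_beams=2, min_length=20, max_length=40)
    # use tokenizer to decode the output ids
    answer = tokenizer.batch_decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    return answer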

[Bug] Missing documentation on creation of ndarray for serialization

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

The line of code below throws the error "TypeError: expected ndarray for serialization".

embeddings = model.encode(sentences)
embeddings.shape

Expected Behavior

We simply need to convert the model encoding to a list, which will resolve the above error.

Steps To Reproduce

Modify embeddings = model.encode(sentences) to embeddings = model.encode(sentences).tolist()

Relevant log output

No response

Environment

- **OS**:
- **Language version**:
- **Pinecone client version**:

Additional Context

Here's the link to the page which references the above issue: https://www.pinecone.io/learn/series/nlp/dense-vector-embeddings-nlp/

I'm happy to work on this issue and update the documentation as well; feel free to assign it to me.

faiss_tutorial/intro.ipynb wrong file naming

# saving data
split = 256
file_count = 0
for i in range(0, sentence_embeddings.shape[0], split):
    end = i + split
    if end > sentence_embeddings.shape[0] + 1:
        end = sentence_embeddings.shape[0] + 1
    file_count = '0' + str(file_count) if file_count < 0 else str(file_count)
    with open(f'./sim_sentences/embeddings_{file_count}.npy', 'wb') as fp:
        np.save(fp, sentence_embeddings[i:end, :])
    print(f"embeddings_{file_count}.npy | {i} -> {end}")
    file_count = int(file_count) + 1

It seems file_count < 10 is intended.
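
That reading looks right; the presumably intended line (zero-padding single-digit file counts) would be:

file_count = '0' + str(file_count) if file_count < 10 else str(file_count)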

Support for opensource LLM's

Team,
I would like to write embeddings generated through non-OpenAI models (Hugging Face embeddings) into Pinecone, and I would like to query them with an open-source LLM (google/flan-t5). Is this natively supported by the existing LangChain-Pinecone integration? If yes, can you please guide me with some examples?
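
A minimal sketch of what this could look like with the LangChain 0.0.x APIs (untested here; the index name and model choices are placeholders):

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone
from langchain import HuggingFaceHub

# Hugging Face sentence-transformer embeddings instead of OpenAI
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
# connect to an existing Pinecone index populated with those embeddings
vectorstore = Pinecone.from_existing_index("your-index", embeddings)  # placeholder index name
# open-source LLM served via the Hugging Face Hub
llm = HuggingFaceHub(repo_id="google/flan-t5-xl")
docs = vectorstore.similarity_search("your question", k=3)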

efConstruction and M

In the paper, to find the neighbors of each node, you first find efConstruction candidate neighbors with the SEARCH_LAYER function, then select M neighbors among those candidates for connection with the SELECT_NEIGHBORS_HEURISTIC function. So efConstruction needs to be always greater than M. But why, in your experiment, does efConstruction appear to be less than M?

[Bug] Incorrect link to pinecone-datasets generation

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Verbatim from langchain-retrieval-augmentation.ipynb:

We will download a pre-embedding dataset from pinecone-datasets. Allowing us to skip the embedding and preprocessing steps, if you'd rather work through those steps you can find the [full notebook here](https://colab.research.google.com/github/pinecone-io/examples/blob/master/docs/langchain-retrieval-augmentation.ipynb).

The full notebook here link does not point to a reference which explains how pinecone-datasets was embedded and preprocessed.

Personally, I would like to learn how to embed webpages of documentation (e.g. LangChain Docs) for retrieval augmentation use case. Thank you for your attention!

Expected Behavior

The full notebook here link should correctly point to a reference which explains how pinecone-datasets was embedded and preprocessed.

Steps To Reproduce

  1. Go to langchain-retrieval-augmentation.ipynb under the Building the Knowledge Base section
  2. Click on the full notebook here link
  3. You will see that the link does not bring you to a reference that explains how pinecone-datasets was embedded and preprocessed

Relevant log output

No response

Environment

- **OS**:
- **Language version**:
- **Pinecone client version**:

Additional Context

No response

[Bug] canopy-sdk version in notebook is not available

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Running the first cell in the canopy data prep notebook returns the error

ERROR: Could not find a version that satisfies the requirement canopy-sdk==0.1.0 (from versions: 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.2.0)
ERROR: No matching distribution found for canopy-sdk==0.1.0

Expected Behavior

the canopy-sdk pip package installs successfully

Steps To Reproduce

  1. Open the canopy data prep notebook
  2. Connect to a runtime
  3. Run the first cell

Dataset Generation

Is this your first time submitting a feature request?

  • I have searched the existing issues, and I could not find an existing issue for this feature
  • I am requesting a straightforward extension of existing functionality

Describe the feature

Could you please share more about dataset generation? It would be useful as we could create similar datasets for other tools! Thanks a lot!

Describe alternatives you've considered

No response

Who will this benefit?

No response

Are you interested in contributing this feature?

No response

Anything else?

No response

for semantic_text_search.ipynb

On an API request, I got the following error:

ApiAttributeError: QueryResponse has no attribute 'results' at ['['received_data']']['results']

Query:

query = "Is too much CO2 in the ocean bad for the environment? Research supports this claim."
vector_embedding = model.encode(query).tolist()
response = index.query([vector_embedding], top_k=3, include_metadata=True)
h.printmd(f"#### A sample response from Pinecone \n ==============\n \n ```python\n{response}\n```")

A sample response from Pinecone
==============

{'matches': [{'id': '10481',
              'metadata': {'month': 3.0, 'source': 'other', 'year': 2018.0},
              'score': 0.478467762,
              'sparseValues': {},
              'values': []},
             {'id': '14516',
              'metadata': {'month': 7.0, 'source': 'other', 'year': 2018.0},
              'score': 0.469798923,
              'sparseValues': {},
              'values': []},
             {'id': '20125',
              'metadata': {'month': 2.0, 'source': 'other', 'year': 2017.0},
              'score': 0.44912678,
              'sparseValues': {},
              'values': []}],
 'namespace': ''}
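
A hedged workaround sketch: as the sample response above shows, newer pinecone clients return the matches at the top level of the QueryResponse rather than under a 'results' key, so reading response['matches'] directly should work:

response = index.query(vector=vector_embedding, top_k=3, include_metadata=True)
for match in response['matches']:
    print(match['id'], match['score'], match['metadata'])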

[Bug] Error creating index in Semantic Search notebook

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

In the "Creating an index" step of the Semantic Search notebook, when I try to create the index (code below), I always get an error (see log output) and am not able to proceed with the notebook. Running:

import time

# only create index if it doesn't exist
if index_name not in pinecone.list_indexes():
    pinecone.create_index(
        name=index_name,
        dimension=len(dataset.documents.iloc[0]['values']),
        metric='cosine'
    )
    # wait a moment for the index to be fully initialized
    time.sleep(1)

# now connect to the index
index = pinecone.GRPCIndex(index_name)

Expected Behavior

I expect the index to be created successfully, so I can proceed with the notebook.

Steps To Reproduce

  1. Go to https://colab.research.google.com/github/pinecone-io/examples/blob/master/docs/semantic-search.ipynb.
  2. Run through the steps until you get to the create an index code.
  3. Run that and you should get the error mentioned above.

Relevant log output

WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7c49135055a0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /databases
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7c4913504070>: Failed to establish a new connection: [Errno -2] Name or service not known')': /databases
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7c49123b0430>: Failed to establish a new connection: [Errno -2] Name or service not known')': /databases
---------------------------------------------------------------------------
gaierror                                  Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/urllib3/connection.py](https://localhost:8080/#) in _new_conn(self)
    173         try:
--> 174             conn = connection.create_connection(
    175                 (self._dns_host, self.port), self.timeout, **extra_kw

24 frames
gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

NewConnectionError                        Traceback (most recent call last)
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7c49123b05e0>: Failed to establish a new connection: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

MaxRetryError                             Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py](https://localhost:8080/#) in increment(self, method, url, response, error, _pool, _stacktrace)
    590 
    591         if new_retry.is_exhausted():
--> 592             raise MaxRetryError(_pool, url, error or ResponseError(cause))
    593 
    594         log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)

MaxRetryError: HTTPSConnectionPool(host='controller.pinecone_environment.pinecone.io', port=443): Max retries exceeded with url: /databases (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7c49123b05e0>: Failed to establish a new connection: [Errno -2] Name or service not known'))


Environment

- **OS**: MacOS 14.1
- **Pinecone client version**: ? (in Google Colab)

Additional Context

No response
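
A hedged observation: the failing host, controller.pinecone_environment.pinecone.io, contains the literal placeholder pinecone_environment, which suggests the environment value was never filled in before calling pinecone.init. A minimal sketch (both values are placeholders; use the ones from your Pinecone console):

import pinecone

pinecone.init(
    api_key="YOUR_API_KEY",       # placeholder
    environment="us-west1-gcp",   # example environment name; copy yours from the console
)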

Out of dataset answer and reference link provided for RAG example

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

I'm using the RAG example and feeding my own database of 1 football article.
The Pinecone DB is a brand new database and only contains vectors from the football article.

When I do qa_with_sources(query="Who is Sachin Tendulkar") it provides me an answer and a link as a reference. This is not the expected behavior.

I have not fed any article about Sachin Tendulkar to the database. How and why/where from is it getting the answer and the link?

Now, if I add more articles only about football, pushing the vector count in the database to around 90, and then ask the same question, query="Who is Sachin Tendulkar", it is not able to give the answer, which is the expected behavior.

I wonder if the fullness of the vector db makes it more accurate? Has anyone else seen this?

Expected Behavior

Since the database does not contain any article or mention of Sachin Tendulkar, it should not provide any answer, and instead say "This is not mentioned in the database".

Steps To Reproduce

Create a new Vector DB on pinecone. Use this example to feed in a football article.

Run query="Who is Sachin Tendulkar". Note the result contains a reference and an answer. (Unexpected)

Now, create a fuller DB with more articles and ask the same query. Note that the result is empty, as expected.

Relevant log output

Answer is as above

Environment

No response

Additional Context

No response

There seems to be an error in the example code in the URL below.

https://www.pinecone.io/learn/locality-sensitive-hashing/

It looks like the example code in that URL needs to be changed.

Before the edit:

for a_rows, c_rows in zip(band_a, band_c):
    if a_rows == c_rows:
        print(f"Candidate pair: {b_rows} == {c_rows}") # I think this part needs to be edited.
        # we only need one band to match
        break

After the edit:

for a_rows, c_rows in zip(band_a, band_c):
    if a_rows == c_rows:
        print(f"Candidate pair: {a_rows} == {c_rows}")
        # we only need one band to match
        break

ExtractiveQAPipeline

Can I control the length of the answer when I use ExtractiveQAPipeline with extractive qa?

[Bug] mpt-30b-chat answering questions based on langchain does not work

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Run the code in examples/learn/generation/llm-field-guide/mpt/mpt-30b-chatbot.ipynb:
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
It only returns:
Explain to me the difference between nuclear fission and fusion.
without the model's answer.

Expected Behavior

The model's answer should be returned.

Steps To Reproduce

import torch
import transformers
from transformers import StoppingCriteria, StoppingCriteriaList
from torch import cuda, bfloat16

device = f'cuda:0' if cuda.is_available() else 'cpu'

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b-chat',
    trust_remote_code=True,
    load_in_8bit=True,  # this requires the bitsandbytes library
    max_seq_len=8192,
    init_device=device,
    device_map="auto"
)
model.eval()
# model.to(device)
print(f"Model loaded on {device}")

tokenizer = transformers.AutoTokenizer.from_pretrained("mosaicml/mpt-30b-chat")

stop_token_ids = [
    tokenizer.convert_tokens_to_ids(x) for x in [
        ['Human', ':'], ['AI', ':']
    ]
]

# define custom stopping criteria object
class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        for stop_ids in stop_token_ids:
            if torch.eq(input_ids[0][-len(stop_ids):], stop_ids).all():
                return True
        return False

stopping_criteria = StoppingCriteriaList([StopOnTokens()])

stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids]

generate_text = transformers.pipeline(
    model=model,
    tokenizer=tokenizer,
    return_full_text=True,  # langchain expects the full text
    task='text-generation',
    # we pass model parameters here too
    stopping_criteria=stopping_criteria,  # without this model rambles during chat
    temperature=0.1,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max
    top_p=0.15,  # select from top tokens whose probability add up to 15%
    top_k=0,  # select from top 0 tokens (because zero, relies on top_p)
    max_new_tokens=128,  # max number of tokens to generate in the output
    repetition_penalty=1.1  # without this output begins repeating
)

res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])

Relevant log output

[2023-08-30 17:16:12,664] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2023-08-30 17:16:13.203919: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Instantiating an MPTForCausalLM model from /root/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-30b-chat/54f33278a04aa4e612bca482b82f801ab658e890/modeling_mpt.py
You are using config.init_device='cuda:0', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|█████████████████████████████████████| 7/7 [01:07<00:00,  9.62s/it]
Model loaded on cuda:0
The model 'MPTForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/opt/conda/lib/python3.8/site-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
Explain to me the difference between nuclear fission and fusion.

Environment

- **OS**: ubuntu20.04
- **Language version**:  Python 3.8.16
- **Pinecone client version**:  not use

Additional Context

No response

Error with Langchain demo notebook

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

When running this notebook: https://colab.research.google.com/github/pinecone-io/examples/blob/master/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb

from datasets import load_dataset
causes this error:


AttributeError                            Traceback (most recent call last)

<ipython-input> in <cell line: 1>()
----> 1 from datasets import load_dataset
      2
      3 data = load_dataset("wikipedia", "20220301.simple", split='train[:10000]')
      4 data

8 frames

/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py in <module>
     85 #
     86
---> 87 class Condition(threading._Condition):
     88     # XXX
     89     if sys.version_info < (3, 0):

AttributeError: module 'threading' has no attribute '_Condition'

Expected Behavior

It should import load_dataset

Steps To Reproduce

Open this notebook and run all:
https://colab.research.google.com/github/pinecone-io/examples/blob/master/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb

Relevant log output

No response

Environment

Colab notebook

Additional Context

No response

[Bug] Indexing problem with langchain retrieval augmentation

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

 49%|████████████████████████████████████████▍                                         | 4935/10000 [05:52<02:22, 35.66it/s]E0807 11:33:40.238867000 140704491832896 ssl_transport_security_utils.cc:105] Corruption detected.
E0807 11:33:40.238921000 140704491832896 ssl_transport_security_utils.cc:61] error:100003fc:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_RECORD_MAC
E0807 11:33:40.238930000 140704491832896 secure_endpoint.cc:305]       Decryption error: TSI_DATA_CORRUPTED
 49%|████████████████████████████████████████▍                                         | 4937/10000 [05:54<06:03, 13.95it/s]

When the following code runs:

for i, record in enumerate(tqdm(data)):
    # first get metadata fields for this record
    metadata = {
        'wiki-id': str(record['id']),
        'source': record['url'],
        'title': record['title']
    }
    # now we create chunks from the record text
    record_texts = text_splitter.split_text(record['text'])
    # create individual metadata dicts for each chunk
    record_metadatas = [{
        "chunk": j, "text": text, **metadata
    } for j, text in enumerate(record_texts)]
    # append these to current batches
    texts.extend(record_texts)
    metadatas.extend(record_metadatas)
    # if we have reached the batch_limit we can add texts
    if len(texts) >= batch_limit:
        ids = [str(uuid4()) for _ in range(len(texts))]
        embeds = embed.embed_documents(texts)
        index.upsert(vectors=zip(ids, embeds, metadatas))
        texts = []
        metadatas = []
        

https://www.pinecone.io/learn/series/langchain/langchain-retrieval-augmentation/ is the reference example that running into an issue.

The implementation is using python 3.8 and on macos (Intel) box.

Expected Behavior

The indexing process should iterate through the data we’d like to add to our knowledge base, creating IDs, embeddings, and metadata — then adding these to the index.

As we do this in batches.

this is from: https://www.pinecone.io/learn/series/langchain/langchain-retrieval-augmentation/

Steps To Reproduce

  1. activate conda env using python 3.8 (to be compatible with tiktoken)
  2. run this in a jupyter notebook
  3. Error when this part of the code is in the for-loop:
if len(texts) >= batch_limit:
    ids = [str(uuid4()) for _ in range(len(texts))]
    embeds = embed.embed_documents(texts)
    index.upsert(vectors=zip(ids, embeds, metadatas))
    texts = []
    metadatas = []

The error is around this, I think, but I might be wrong:

PineconeException                         Traceback (most recent call last)

Cell In[28], line 21
     19 ids = [str(uuid4()) for _ in range(len(texts))]
     20 embeds = embed.embed_documents(texts)
---> 21 index.upsert(vectors=zip(ids, embeds, metadatas))
     22 texts = []
     23 metadatas = []

Relevant log output

No response

Environment

- **MacOS**: Ventura 13.4.1 (c) (22F770820d)
- **Language version**: `langchain                 0.0.162`
- **Pinecone client version**: `pinecone-client           2.2.2`

Additional Context

I am doing this while connected to a vpn.

[bug] Notebook link incorrect : semantic search notebook

In the semantic search notebook there is a link to a notebook that is supposed to provide more information about how to prepare a dataset for semantic search. The link goes to https://github.com/pinecone-io/examples/blob/master/search/semantic-search/semantic-search.ipynb, but this notebook does not exist. I think the intended notebook is here: https://github.com/pinecone-io/examples/blob/2619f941c03cc2179bf2672abd56be73df5549e8/learn/search/semantic-search/semantic-search.ipynb

Steps to reproduce: open the semantic search notebook and, in the Data Download section, try to open the linked notebook: "If you'd rather see how it's all done, please refer to this notebook."

[Bug] 'threading' has no attribute '_Condition' on 05-langchain-retrieval-augmentation.ipynb

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

from datasets import load_dataset

data = load_dataset("wikipedia", "20220301.simple", split='train[:10000]')
data

Expected Behavior

loading wikipedia dataset

Steps To Reproduce

run https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb on google colab

Relevant log output

---------------------------------------------------------------------------

AttributeError                            Traceback (most recent call last)

<ipython-input-2-1665814e1737> in <cell line: 1>()
----> 1 from datasets import load_dataset
      2 
      3 data = load_dataset("wikipedia", "20220301.simple", split='train[:10000]')
      4 data

8 frames

/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py in <module>
     85 #
     86 
---> 87 class Condition(threading._Condition):
     88     # XXX
     89     if sys.version_info < (3, 0):

AttributeError: module 'threading' has no attribute '_Condition'

Environment

google colab

Additional Context

No response
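
A hedged lead on a fix: threading._Condition was removed from the standard library back in Python 3.3, so this traceback points at a stale multiprocess build being imported; upgrading the stack (for example, pip install -U datasets multiprocess dill) is a plausible remedy, after which the import should succeed:

from datasets import load_dataset  # should import cleanly once multiprocess is current

data = load_dataset("wikipedia", "20220301.simple", split="train[:10000]")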

[Bug] Issue with learn/generation/llm-field-guide/llama-2/llama-2-13b-retrievalqa.ipynb

Is this a new bug?

  • I believe this is a new bug
  • I have searched the existing issues, and I could not find an existing issue for this bug

Current Behavior

Getting
ImportError: cannot import name 'Pinecone' from 'pinecone'
on both colab and local machine

Expected Behavior

Given example should work fine and not break

Steps To Reproduce

Just run the notebook on colab

Relevant log output

We have 2 doc embeddings, each with a dimensionality of 384.
Traceback (most recent call last):
  File "/home/ajay/rag.py", line 26, in <module>
    from pinecone  import Pinecone
ImportError: cannot import name 'Pinecone' from 'pinecone' (/home/ajay/rag_venv/lib/python3.10/site-packages/pinecone/__init__.py)

Environment

- **OS**: ubuntu 22.04
- **Language version**: 3.10
- **Pinecone client version**: 2.2.2

Additional Context

No response
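
A hedged note on the likely cause: pinecone-client 2.2.2 predates the Pinecone class, which the v3 client introduced. A minimal sketch after upgrading (pip install -U pinecone-client; key and index name are placeholders):

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key
index = pc.Index("your-index-name")    # placeholder index name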
