
lm-polygraph's People

Contributors

aktsvigun, artemvazh, cant-access-rediska0123, iinemo, kirill-fedyanin, rvashurin, sergeypetrakov, speedofmagic

lm-polygraph's Issues

Get the uncertainty scores without rerunning the models (for NumSets, Deg, Ecc)

Thank you for providing the code for working with previously generated text! It has been very helpful, and I've successfully used it for Lexical Similarity analysis. I'm planning to test it with other measures as well, including NumSets, the degree matrix (Deg), and Eccentricity.

I noticed that these measures require two additional statistics: semantic_matrix_entail and semantic_matrix_contra. From the original paper, I understand that these are computed by running DeBERTa over pairs of generated samples. I'm wondering whether there is a short code snippet available to compute these matrices and feed them into the estimator function.
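
To make the question concrete, here is a rough sketch of what I have in mind. It assumes microsoft/deberta-large-mnli as the NLI model and takes the stat key names from above; neither the label order nor the exact keys and shapes expected by the estimators are confirmed against the library internals.

import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# NLI model used to judge pairwise entailment/contradiction between samples
# (model choice is an assumption; the paper mentions DeBERTa).
nli_name = "microsoft/deberta-large-mnli"
nli_tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name).eval()

def semantic_matrices(samples):
    """Pairwise entailment/contradiction probabilities for a list of generations."""
    n = len(samples)
    entail = np.zeros((n, n))
    contra = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            inputs = nli_tokenizer(samples[i], samples[j],
                                   return_tensors="pt", truncation=True)
            with torch.no_grad():
                probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
            # Label order for this checkpoint should be
            # 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT; verify via
            # nli_model.config.id2label before trusting the matrices.
            contra[i, j] = probs[0].item()
            entail[i, j] = probs[2].item()
    return entail, contra

samples = [
    "George Bush was the 43rd president of the United States.",
    "George Bush is a former American president.",
    "George Bush is a famous musician.",
]
entail, contra = semantic_matrices(samples)

# Stats dict to hand to an estimator's __call__; the key names come from the
# statistics mentioned above, and the extra batch dimension is my guess at the
# expected shape.
stats = {
    "semantic_matrix_entail": np.array([entail]),
    "semantic_matrix_contra": np.array([contra]),
    "sample_texts": [samples],
}
# scores = estimator(stats)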

Thanks!

Error loading larger models - You shouldn't move a model when it is dispatched on multiple devices

The code

model = WhiteboxModel.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    cache_dir="~/cache/",
    device_map="auto",
    offload_folder="offload_folder",
)

throws the error "You shouldn't move a model when it is dispatched on multiple devices."

While

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct", 
                                             trust_remote_code=True, 
                                             cache_dir="~/cache/",
                                             device_map="auto",
                                             offload_folder="offload_folder")

seems to work fine :/
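
For completeness, here is the kind of workaround I was hoping for: build the dispatched model with transformers directly and hand it to WhiteboxModel, so nothing tries to move it afterwards. Whether WhiteboxModel accepts a pre-built model and tokenizer this way (and whether model_path is the right keyword) is an assumption on my part; please correct me if the constructor looks different.

from transformers import AutoModelForCausalLM, AutoTokenizer
from lm_polygraph.utils.model import WhiteboxModel

# Let transformers handle the multi-device dispatch...
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    trust_remote_code=True,
    cache_dir="~/cache/",
    device_map="auto",
    offload_folder="offload_folder",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")

# ...and wrap the already-dispatched model instead of calling
# WhiteboxModel.from_pretrained (constructor usage is an assumption).
model = WhiteboxModel(base_model, tokenizer, model_path="tiiuae/falcon-40b-instruct")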

Demo doesn't work.

Thank you for the amazing framework! Today, when I was trying the following code (the simplest demo), I got an error message saying:

return UncertaintyOutput(ue[0], input_text, texts[0], model.model_path, estimator.level)
TypeError: UncertaintyOutput.__init__() takes 5 positional arguments but 6 were given

I am wondering whether the framework is ready to use or whether it is still under development.

from lm_polygraph.utils.model import WhiteboxModel
from lm_polygraph.estimators import *
from lm_polygraph.utils.manager import estimate_uncertainty

ue_method = MeanPointwiseMutualInformation()
estimator = SemanticEntropy()

model = WhiteboxModel.from_pretrained(
    "bigscience/bloom-560m",
    device="cuda:0",
)

input_text = "Who is George Bush?"
estimate_uncertainty(model, ue_method, input_text=input_text)

Get the uncertainty scores without rerunning the model

Thanks again for your work!

I noticed that in your framework, we need to run the model first and then get the uncertainty scores. While that's perfectly fine with free models, it can get expensive with paid APIs like ChatGPT.

Specifically, I'm curious if there's a way to obtain uncertainty measures for previously generated texts without having to rerun the model.
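
For concreteness, the pattern I'm hoping for looks roughly like this: cache the sampled answers once, then score them offline by calling an estimator directly on a stats dictionary (as in the NumSets/Deg/Ecc issue above). The estimator choice and the "sample_texts" key name are assumptions; I don't know which stats each estimator actually reads.

from lm_polygraph.estimators import LexicalSimilarity

# Sampled answers for one prompt, produced earlier and loaded from disk,
# so the paid API does not have to be called again.
cached_samples = [
    "Albert Einstein died on April 18, 1955.",
    "Einstein died in 1955.",
    "He passed away on 18 April 1955 in Princeton.",
]

estimator = LexicalSimilarity()
stats = {"sample_texts": [cached_samples]}  # batch of one prompt; key name is a guess
scores = estimator(stats)
print(scores)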

Any information or suggestions you can offer in this regard would be greatly appreciated. I look forward to hearing from you and learning more about this possibility.

Using the openai API for a Blackbox model on non-OpenAI-hosted platforms

Hi,

Thanks for providing the community with this library; I believe uncertainty estimation for LLM queries is an important topic. I have been playing around with the library and am a bit stuck. I'd like to use a remote model that is accessible through the openai library, which means I have to provide a custom OPENAI_API_BASE along with my OPENAI_API_KEY. However, the library fails when I try to query the remote model.

Here is the code that I drafted given your example:

import os

from lm_polygraph.estimators import EigValLaplacian
from lm_polygraph.utils.manager import estimate_uncertainty
from lm_polygraph.utils.model import BlackboxModel


def main():
    print(f":: black box test, using Mistral-7B-Instruct-v0.2 from {os.environ['OPENAI_API_BASE']}")
    model = BlackboxModel(
        openai_api_key=os.environ["OPENAI_API_KEY"],
        model_path="Mistral-7B-Instruct-v0.2",
        parameters={"openai_api_base": os.environ["OPENAI_API_BASE"]},
    )

    print(model.parameters)

    print(":: using estimator EigValLaplacian")
    estimator = EigValLaplacian(verbose=True)
    answer = estimate_uncertainty(
        model, estimator, input_text="When did Albert Einstein die?"
    )
    print(">>", answer)

So I get the following error:

:: using estimator EigValLaplacian
Traceback (most recent call last):
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/examples/./black_box.py", line 23, in <module>
    main()
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/examples/./black_box.py", line 16, in main
    answer = estimate_uncertainty(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 166, in estimate_uncertainty
    man()
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 400, in __call__
    batch_stats = self.calculate(batch_stats, self.stat_calculators, inp_texts)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 534, in calculate
    raise e
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 518, in calculate
    new_stats = stat_calculator(
                ^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/stat_calculators/sample.py", line 46, in __call__
    temperature=model.parameters.temperature,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'temperature'

I tried a couple of things, but I am simply unclear on where to supply the temperature.
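
To show where I would try to put it, here is the variant I'd attempt next. Whether a GenerationParameters container exists under that import path, and whether setting openai.api_base is the right way to point the client at a non-OpenAI host, are assumptions on my part.

import os
from types import SimpleNamespace

import openai

from lm_polygraph.estimators import EigValLaplacian
from lm_polygraph.utils.manager import estimate_uncertainty
from lm_polygraph.utils.model import BlackboxModel

# Point the openai client at the remote host instead of smuggling the base URL
# through `parameters` (module-level api_base works for openai<1.0; newer
# clients take a base_url argument instead).
openai.api_base = os.environ["OPENAI_API_BASE"]

try:
    # If this version of lm-polygraph ships a generation-parameters container,
    # use it (class name and location are an assumption).
    from lm_polygraph.utils.generation_parameters import GenerationParameters
    params = GenerationParameters(temperature=0.7)
except ImportError:
    # Minimal stand-in that at least exposes the .temperature attribute the
    # sampling stat calculator reads.
    params = SimpleNamespace(temperature=0.7)

model = BlackboxModel(
    openai_api_key=os.environ["OPENAI_API_KEY"],
    model_path="Mistral-7B-Instruct-v0.2",
    parameters=params,
)

estimator = EigValLaplacian(verbose=True)
print(estimate_uncertainty(model, estimator, input_text="When did Albert Einstein die?"))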

Best
P
