
lm-polygraph's Issues

Semantic entropy is using probabilities greater than 1

For semantic entropy they are using the class-wise probability as defined in the paper, p(c | x) = Σ_{s ∈ c} p(s | x), i.e. the sum of the sequence probabilities of the generations that fall into class c. (The original issue also included an image with the paper's worked example of this calculation.)
However, the way the library calculates it, it adds up the probabilities of all sampled texts without accounting for the fact that sampled texts often repeat, which can yield class probabilities greater than 1.
For example, let's say the model produces 5 outputs, ['Paris', 'Paris', 'Paris', 'Its Paris', 'London'], with likelihoods [0.6, 0.6, 0.6, 0.3, 0.1]. The way this library calculates it, the probability of the first class comes out as 0.6 + 0.6 + 0.6 + 0.3 = 2.1 and the second class as 0.1. But a class probability cannot exceed 1; only distinct outputs should be summed. Since the first three outputs are identical, the class probabilities should be 0.6 + 0.3 = 0.9 and 0.1.
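
The arithmetic in a few lines of Python (a toy illustration with the made-up numbers above, not the library's code):

# Toy illustration of the example above; numbers are made up.
samples = ['Paris', 'Paris', 'Paris', 'Its Paris', 'London']
likelihoods = [0.6, 0.6, 0.6, 0.3, 0.1]
classes = [0, 0, 0, 0, 1]  # the first four samples fall into the same semantic class

# Summing every sample of class 0, as described in the issue:
print(sum(p for p, c in zip(likelihoods, classes) if c == 0))  # 2.1 -- not a valid probability

# Summing only the distinct outputs of class 0:
seen = {}
for text, p, c in zip(samples, likelihoods, classes):
    seen.setdefault((c, text), p)  # keep each (class, text) pair once
print(sum(p for (c, _), p in seen.items() if c == 0))  # 0.9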

In the code you can see it in semantic_entropy.py inside the estimators folder:

for i in range(len(hyps_list)):
    class_likelihoods = [
        np.array(loglikelihoods_list[i])[np.array(class_idx)]
        for class_idx in self._class_to_sample[i]
    ]
    class_lp = [
        np.logaddexp.reduce(likelihoods)
        for likelihoods in class_likelihoods
    ]
    if log_weights[i] is None:
        log_weights[i] = [0 for _ in hyps_list[i]]
    semantic_logits[i] = -np.mean(
        [
            class_lp[self._sample_to_class[i][j]] * np.exp(log_weights[i][j])
            for j in range(len(hyps_list[i]))
        ]
    )
The class_lp computation sums over all outputs in each class instead of only the unique outputs in each class.
This means that the more outputs you generate, the larger the uncertainty will get.

Demo doesn't work.

Thank you for the amazing framework! Today, when I was trying the following code (the simplest demo), I got an error message saying:

return UncertaintyOutput(ue[0], input_text, texts[0], model.model_path, estimator.level)
TypeError: UncertaintyOutput.__init__() takes 5 positional arguments but 6 were given

I am wondering whether the framework is ready to use, or whether it is still being implemented?

from lm_polygraph.utils.model import WhiteboxModel
from lm_polygraph.estimators import *
from lm_polygraph.utils.manager import estimate_uncertainty

ue_method = MeanPointwiseMutualInformation()
estimator = SemanticEntropy()

model = WhiteboxModel.from_pretrained(
    "bigscience/bloom-560m",
    device="cuda:0",
)

input_text = "Who is George Bush?"
estimate_uncertainty(model, ue_method, input_text=input_text)
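
The "takes 5 positional arguments but 6 were given" error suggests the installed package and the example are out of sync. A quick way to check which signature is actually installed (assuming, as the traceback suggests, that UncertaintyOutput is importable from lm_polygraph.utils.manager; the path may differ in your version):

import inspect
from lm_polygraph.utils.manager import UncertaintyOutput

print(inspect.signature(UncertaintyOutput.__init__))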

Error loading larger models - You shouldn't move a model when it is dispatched on multiple devices

The code

model = WhiteboxModel.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    cache_dir="~/cache/",
    device_map="auto",
    offload_folder="offload_folder",
)

Throws the error "You shouldn't move a model when it is dispatched on multiple devices."

While

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct", 
                                             trust_remote_code=True, 
                                             cache_dir="~/cache/",
                                             device_map="auto",
                                             offload_folder="offload_folder")

seems to work fine :/
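
For what it's worth, a possible workaround (a sketch only; it assumes WhiteboxModel can wrap an already-loaded model and tokenizer, which should be checked against the constructor in your installed version):

from transformers import AutoModelForCausalLM, AutoTokenizer
from lm_polygraph.utils.model import WhiteboxModel

# Load and dispatch the model with transformers directly, then wrap it,
# so that nothing tries to move the dispatched model afterwards.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    trust_remote_code=True,
    cache_dir="~/cache/",
    device_map="auto",
    offload_folder="offload_folder",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")

model = WhiteboxModel(base_model, tokenizer, model_path="tiiuae/falcon-40b-instruct")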

Using the OpenAI API for a Blackbox model with non-OpenAI-hosted platforms

Hi,

thanks for providing the community with this library. I believe uncertainty of LLM queries is an important topic. I tried to play around with the library and am a bit stuck. I'd like to use a remote model that is accessible through the openai library; for this, I have to provide a custom OPENAI_API_BASE and my OPENAI_API_KEY. However, the library does not seem to know how to query the remote model.

Here is the code that I drafted given your example:

import os

from lm_polygraph.estimators import EigValLaplacian
from lm_polygraph.utils.manager import estimate_uncertainty
from lm_polygraph.utils.model import BlackboxModel

def main():
    print(f":: black box test, using Mistral-7B-Instruct-v0.2 from {os.environ['OPENAI_API_BASE']}")
    model = BlackboxModel(openai_api_key=os.environ["OPENAI_API_KEY"], model_path="Mistral-7B-Instruct-v0.2", parameters={"openai_api_base": os.environ["OPENAI_API_BASE"]})

    print(model.parameters)

    print(":: using estimator EigValLaplacian")
    estimator = EigValLaplacian(verbose=True)
    answer = estimate_uncertainty(
        model, estimator, input_text="When did Albert Einstein die?"
    )
    print(">>", answer)

So I get the following error:

:: using estimator EigValLaplacian
Traceback (most recent call last):
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/examples/./black_box.py", line 23, in <module>
    main()
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/examples/./black_box.py", line 16, in main
    answer = estimate_uncertainty(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 166, in estimate_uncertainty
    man()
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 400, in __call__
    batch_stats = self.calculate(batch_stats, self.stat_calculators, inp_texts)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 534, in calculate
    raise e
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/utils/manager.py", line 518, in calculate
    new_stats = stat_calculator(
                ^^^^^^^^^^^^^^^^
  File "/home/steinb95/development/lm-polygraph/lm-polygraph/src/lm_polygraph/stat_calculators/sample.py", line 46, in __call__
    temperature=model.parameters.temperature,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'temperature'

I tried a couple of things, but it is simply unclear to me where to supply the temperature.
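
From the traceback, the sample calculator accesses model.parameters.temperature, so it seems to expect an object with attributes rather than a plain dict. Maybe something like the following is needed? This is only a guess; the GenerationParameters class, its import path, and its fields are assumptions I have not verified against the installed version:

from lm_polygraph.utils.generation_parameters import GenerationParameters

# Guess: pass a parameters object that exposes `.temperature` instead of a plain dict.
model = BlackboxModel(
    openai_api_key=os.environ["OPENAI_API_KEY"],
    model_path="Mistral-7B-Instruct-v0.2",
    parameters=GenerationParameters(temperature=0.9),
)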

Best
P

Example for normalization

Hi, team! Thank you very much for building the library.
I see there are many normalizers for transforming an uncertainty score into a probability. Could we have a notebook example of how to use them with different estimators, i.e. the next step after estimate_uncertainty()?

Thank you very much!
Best,
Johnny

Dockerfile adjustments

  1. The path to requirements.txt in the Dockerfile is incorrect -- the file is located in the main directory of the project.
  2. The app directory and the CMD ["polygraph_server"] command are only required to run the frontend. I did not use the frontend app, so I skipped them. It might make sense to provide a separate Dockerfile for those who only intend to use the methods of the framework, to avoid installing extra packages.
  3. jupyter is not in the requirements, although it is needed to run the demo notebooks.
  4. CUDA drivers are not pulled by default in the Dockerfile, but I assume this depends on the specific hardware configuration. I used the nvcr.io/nvidia/pytorch:24.05-py3 image to use CUDA on my cluster.

Get the uncertainty scores without rerunning the models

Thanks again for your work!

I noticed that in your framework, we need to first run the model and then get the uncertainty scores. While this is perfectly fine when using free models, it could be expensive when working with paid APIs like ChatGPT.

Specifically, I'm curious if there's a way to obtain uncertainty measures for previously generated texts without having to rerun the model.

Any information or suggestions you can offer in this regard would be greatly appreciated. I look forward to hearing from you and learning more about this possibility.
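
A rough sketch of one way this could work for sampling-based measures such as Lexical Similarity: call an estimator directly on a hand-built stats dictionary containing previously generated samples, without touching the model. The "sample_texts" key, the nesting of the lists, and the estimator's call signature are assumptions to check against the installed version:

from lm_polygraph.estimators import LexicalSimilarity

# Previously generated samples for a single input; no model rerun needed.
stats = {"sample_texts": [["Paris.", "It is Paris.", "London."]]}
scores = LexicalSimilarity()(stats)
print(scores)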

Get the uncertainty scores without rerunning the models (for NumSets, Deg, Ecc)

Thank you for providing the code for previously generated text! It has been very helpful, and I've successfully used it for Lexical Similarity analysis. I'm planning to test it for other measurements, including NumSets, Degree matrix (Deg), and Eccentricity.

I noticed that these measurements require two additional statistics: semantic_matrix_entail and semantic_matrix_contra. According to the original paper, I know that these are calculated using DeBERTa over generated samples. I'm wondering if there are any short code snippets available to compute these matrices and feed them into the estimator function.

Thanks!
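
A rough sketch of how such matrices could be computed with an off-the-shelf NLI model from transformers. This is not the library's own stat calculator: the stat keys semantic_matrix_entail / semantic_matrix_contra come from the issue above, while the model choice, the label handling, and the exact array shapes the estimators expect are assumptions to verify:

import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

samples = ["Paris.", "It is Paris.", "London."]  # previously generated texts

nli_name = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)
label2id = {v.lower(): k for k, v in nli_model.config.id2label.items()}

n = len(samples)
entail = np.zeros((n, n))
contra = np.zeros((n, n))
with torch.no_grad():
    for i in range(n):
        for j in range(n):
            inputs = tokenizer(samples[i], samples[j], return_tensors="pt")
            probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
            entail[i, j] = probs[label2id["entailment"]].item()
            contra[i, j] = probs[label2id["contradiction"]].item()

# Feed the matrices to the estimator the same way as for Lexical Similarity;
# depending on the estimator, a leading batch dimension may be expected.
stats = {
    "sample_texts": [samples],
    "semantic_matrix_entail": np.array([entail]),
    "semantic_matrix_contra": np.array([contra]),
}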

Load custom estimators and stat_calculators in the evaluation script

  1. The evaluation script should be able to load custom estimators and stat_calculators (see https://github.com/IINemo/lm-polygraph/tree/proposal)
  2. The evaluation script should not be aware of how stat_calculators or estimators are created; this should be encapsulated in the corresponding factories.
  3. The factory for stat_calculators should have access to the "environment" object, so that each factory is aware of which objects were created by the factories for other stat_calculators.

The proposal:

  1. Implements loading stat_calculators using a custom Python module. The factory module should implement the function load_stat_calculator (see the sketch after this list).
  2. Implements loading estimators using a custom python module.
  3. The Manager accepts builder_stat_calculators (an environment object that allows factories for stat_calculators to communicate with each other). For now, the construction of stat_calculators and estimators is implemented inside the Manager.
  4. defaults -- implements default factories for implemented stat_calculators and estimators.
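
A minimal sketch of what such a factory module could look like under this proposal. The load_stat_calculator entry point is the one named in item 1 above; everything else, including the calculator itself and the meaning of the arguments, is a hypothetical illustration rather than the final API:

# my_stat_calculators.py -- loaded by the evaluation script as a custom module.

class GreedyLengthCalculator:
    """Hypothetical stat_calculator: records the length of each greedy output."""

    def __call__(self, dependencies, texts, model, max_new_tokens=100):
        greedy_texts = dependencies["greedy_texts"]
        return {"greedy_lengths": [len(t) for t in greedy_texts]}


def load_stat_calculator(config, environment):
    # `environment` is the shared object through which this factory can see
    # objects already created by the factories of other stat_calculators.
    return GreedyLengthCalculator()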

generate_texts on WhiteboxModel ignores generation parameters and stopping criteria.

What the title says.

This causes foundation models to generate lots of unnecessary text, introduces a potential discrepancy between sampling and greedy generation with generate, and possibly other, less obvious problems. The problematic behavior can be reproduced by having only blackbox_sample_texts in the required stats, with no sample_texts, on any foundation model with a few-shot continuation prompt.

This calls for some streamlining of generation when using the whitebox model. Do we really need a separate generation method to pretend we are black-box when calculating things like the semantic matrix on a WB model? Can we call self.generate instead of self.model.generate in generate_texts?

@ArtemVazh @cant-access-rediska0123 your thoughts?
