Comments (7)

MrBerkley commented on July 1, 2024

This is a solid and exceptionally useful piece of work. While most of us who write prompts regularly understand CoT for LLM prompting, it's hard to teach it to a wider demographic outside of the bubble. Because of that, so many people experience LLM fatigue, where the LLM just produces trash they could have written better themselves.

Good stuff...

--TheAIMogul (micahberkley.com)

eugeneyan commented on July 1, 2024

I think that prompt would work decently, but it doesn't guarantee the output will always start with "blah blah"; it might fail about 0.1–1% of the time at large scale. With prefilling, though, the following would work 100% of the time.

messages=[
    {
        "role": "user",
        "content": input,
    },
    {
        "role": "assistant",
        "content": "blah blah"  # Prefilled response
    }
]
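
For context, here's a minimal sketch of how that prefill could be sent with the Anthropic Python SDK. The model name and user prompt are assumptions, and since the API returns only the continuation, the prefill is prepended when assembling the final output:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prefill = "blah blah"
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model; any Claude model works
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize the report in one sentence."},
        {"role": "assistant", "content": prefill},  # prefilled response
    ],
)

# The model continues from the prefill, so the full output is
# prefill + generated continuation.
print(prefill + response.content[0].text)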

joetann commented on July 1, 2024

This is an incredible piece of work, thank you for publishing it!

Have you explored using an LLM step to rate a previous step’s output with recommendations, and then the subsequent step follows through on those recommendations?

From a brief attempt at this I've seen some potential value there; I'd love to know if this is a technique you've heard of or used yourself.

kailashsp commented on July 1, 2024

I would love to hear your thoughts on the metaprompt available in the Anthropic docs. It constructs a chain-of-thought instruction for the given task, but I wonder whether it yields any improvement, given the lack of benchmarks for it.
https://docs.anthropic.com/en/docs/helper-metaprompt-experimental

eugeneyan commented on July 1, 2024

Have you explored using an LLM step to rate a previous step’s output with recommendations, and then the subsequent step follows through on those recommendations? — @joetann

Agreed on it being useful. That's the second step here, where an LLM step helps with validation, filtering, and fixing.
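
As a rough illustration, here's a minimal sketch of that critique-then-revise pattern, again assuming the Anthropic Python SDK; the prompts, model name, and generate helper are all placeholders, not the article's actual pipeline:

import anthropic

client = anthropic.Anthropic()

def generate(prompt: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Step 1: draft an answer.
draft = generate("Summarize this report in one paragraph: ...")

# Step 2: a separate LLM step rates the draft and lists recommendations.
critique = generate(
    "Rate this summary for accuracy and concision, and list concrete "
    "recommendations for improving it:\n\n" + draft
)

# Step 3: a follow-up step revises the draft by applying the recommendations.
revised = generate(
    "Revise the summary below by applying the recommendations.\n\n"
    "Summary:\n" + draft + "\n\nRecommendations:\n" + critique
)
print(revised)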

It constructs a chain-of-thought instruction for the given task, but I wonder whether it yields any improvement, given the lack of benchmarks for it. — @kailashsp

I think metaprompting a CoT is useful. That said, for most tasks, after I've looked at dozens (sometimes hundreds) of inputs and outputs, I usually have a good sense of what the "shape" of the prompt needs to be, and prefer to roll my own.

cometyang commented on July 1, 2024

Can the "prefill response" technique be replaced by a prompt such as "please begin with blah blah"?

ozan-s commented on July 1, 2024

Thanks! Great piece of work.
I have a question about temperature. It is possible to set the temperature higher than 1; in theory you could even set it to 10, but that would make the output unusable, right? OpenAI decided to cap it at 2.0. What is the reason for saying temperature is a value between 0.0 and 1.0?

Here is my understanding:
When an LLM generates text, it outputs a set of logits for each possible next token. These logits are raw scores that can be converted into probabilities.
Temperature scaling: the temperature T modifies the logits before they are passed through the softmax function: scaled_logit = logit / temperature

High Temperature (T > 1): When the temperature is high, the logits are divided by a large number, resulting in smaller differences between them. This makes the probability distribution more uniform, meaning the model will sample from a wider range of tokens, leading to more random and diverse outputs.

Low Temperature (0 < T < 1): When the temperature is low, the logits are divided by a small number, amplifying the differences between them. The model will favor the tokens with higher original logits, making the output more deterministic and conservative.

Temperature of 1 (T = 1): When the temperature is set to 1, the logits remain unchanged. The output probabilities directly reflect the logits provided by the model without any scaling, offering a balance between randomness and determinism.
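
To make that concrete, here's a minimal numpy sketch (the logit values are made up) showing how dividing logits by the temperature reshapes the softmax distribution:

import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature  # scaled_logit = logit / temperature
    scaled -= scaled.max()         # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.1])  # made-up logits for three candidate tokens

for t in (0.5, 1.0, 2.0, 10.0):
    print(t, softmax_with_temperature(logits, t).round(3))

# T < 1 sharpens the distribution (near-deterministic as T approaches 0),
# T = 1 leaves it unchanged, and large T flattens it toward uniform,
# which is why very high temperatures make the output unusable.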
