Comments (3)
Hi, a message that we send when calling the GPT-4V API is divided into 4 parts:
- system message: Provides a comprehensive introduction for GPT-4V, mainly describing the game GPT-4V is currently playing and the role it takes on.
- user message part 1: This is the text that precedes the few-shot examples containing images, such as the current task definition and description. Since it is not an instruction tied to any particular image, we place this preamble in its own user message, preserving the logical order.
- image introduction message: This includes the few-shot examples, the images, and their instructions. Because few-shot examples may contain replies from GPT-4V as assistant messages, we combine the few-shot examples with the new images and their prompts in this part of the message.
- user message part 2: This part is the task-specific prompt, for example the observed information, the historical information used for reflection, and the constraints and format of the output.
Finally, all the message items are combined into a complete message list to call the GPT-4V API and get a response (see the sketch below).
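For illustration, here is a simplified sketch of how these four parts could be assembled into a single GPT-4V call. The helper names, placeholder texts, and model string below are illustrative assumptions, not the actual Cradle code:

```python
# Simplified sketch of assembling the four message parts for one GPT-4V call.
# Helper names, placeholder texts, and the model string are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    # Read a screenshot and return it as a base64 data URL.
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# 1. System message: game description and role positioning.
messages = [{"role": "system", "content": "You are an agent playing <game> ..."}]

# 2. User message part 1: task definition preceding the few-shot examples.
messages.append({"role": "user", "content": "The current task is ..."})

# 3. Image introduction: few-shot example(s), possibly with an assistant
#    reply, followed by the new screenshot and its instruction.
messages.append({"role": "user", "content": [
    {"type": "text", "text": "Here is an example of the bounding box positions."},
    {"type": "image_url", "image_url": {"url": encode_image("few_shot.jpg")}},
]})
messages.append({"role": "assistant", "content": "Example reply for the few shot ..."})
messages.append({"role": "user", "content": [
    {"type": "text", "text": "This is the current screenshot."},
    {"type": "image_url", "image_url": {"url": encode_image("current.jpg")}},
]})

# 4. User message part 2: observations, reflection history, output constraints.
messages.append({"role": "user", "content": "Observed information ... Output format: ..."})

response = client.chat.completions.create(model="gpt-4-vision-preview",
                                          messages=messages)
print(response.choices[0].message.content)
```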
You can refer to the prompt templates here: https://github.com/BAAI-Agents/Cradle/tree/main/res/prompts
A minor note: To ensure everyone can participate in the discussion, it would be better if communication could be in English. Thank you very much for your attention and support.
Thank you very much for your timely reply and patient explanation. Fortunately, your explanation of the prompt handling process closely aligns with my previous understanding. However, there is still one issue that perplexes me:
In the section of your code related to the image introduction message:
paragraph_input = params.get(constants.IMAGES_INPUT_TAG_NAME, None)
If I understand correctly, this extracts only the content tagged IMAGES_INPUT_TAG_NAME (i.e., 'image_introduction') from params. However, I have not found the code responsible for extracting the "few_shots" sections, such as this one in decision_making.json:
"few_shots": [
{
"introduction": "Here are some examples of the positions of the bounding box shown in the image.",
"path": "",
"assistant": ""
},…]
The few_shots field is reserved for future extensions; it is not used in the current version. At present, the few-shot examples are included in image_introduction, so only the image_introduction field is parsed.
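Conceptually, the parsing behaves like the sketch below; the exact entry handling in the real code may differ, so treat the field logic here as an approximation:

```python
# Rough sketch: turn the `image_introduction` entries into chat messages.
# Entries that carry an "assistant" reply act as few-shot examples.
# The exact schema handling is an approximation, not the real Cradle code.
def build_image_messages(params: dict) -> list[dict]:
    messages = []
    for entry in params.get("image_introduction", []):
        content = [{"type": "text", "text": entry.get("introduction", "")}]
        if entry.get("path"):  # attach the image when a path is given
            content.append({"type": "image_url",
                            "image_url": {"url": entry["path"]}})
        messages.append({"role": "user", "content": content})
        if entry.get("assistant"):  # few-shot: the expected model reply
            messages.append({"role": "assistant",
                             "content": entry["assistant"]})
    return messages
```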
Thank you.