This project investigates human-robot alignment using a Furhat robot. Participants rank items, discuss with the robot, and re-rank to explore alignment dynamics. The study assesses whether alignment depends on the nature of the concept being ranked, and how it correlates with trust in robots, contributing insights to human-robot interaction research.
We're currently querying gpt-3.5-turbo-instruct as the LLM to generate our responses. However, this model only has a context window of 4,096 tokens, which is small compared to models like gpt-4-turbo-preview, which supports a 128,000-token context (and also has more recent training data).
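Until we switch models, one simple safeguard is to count tokens before sending a prompt so we never overflow the smaller window. Below is a minimal sketch using the tiktoken tokenizer; `fits_in_context` and the 512-token reserve are hypothetical choices of ours, not existing project code:

```python
import tiktoken

# gpt-3.5-turbo models use the cl100k_base encoding
enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, limit: int = 4096, reserve: int = 512) -> bool:
    """Check that the prompt leaves `reserve` tokens for the completion
    within the model's context window."""
    return len(enc.encode(prompt)) + reserve <= limit
```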
As we'll be trying a lot of prompt-engineering options, we should make switching from one prompt to another as simple as possible. We can begin by structuring the prompt-engineering parameters, and potentially add loading them from a configuration file for better integration, as sketched below.
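A minimal sketch of what that could look like, assuming a JSON configuration file; the parameter names (`system_prompt`, `model`, `temperature`) and the file layout are hypothetical:

```python
import json

def load_prompt_config(path: str) -> dict:
    """Load prompt-engineering parameters (e.g. system_prompt, model,
    temperature) from a JSON configuration file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Switching prompt variants is then just a matter of pointing at another file:
config = load_prompt_config("configs/baseline_prompt.json")
```

Each prompt variant then lives in its own file, so comparing prompt-engineering options never requires touching the code.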
Using ChatGPT, summarize the dialog history so the model keeps context of the previous turns. This way we can manage the context window while minimizing information loss.
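A minimal sketch of such a summarization step, assuming the openai Python client (v1+) and the chat-completions endpoint; `summarize_history`, `compact`, and the summarization instruction are hypothetical helpers, not existing project code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_history(history: list[dict], model: str = "gpt-3.5-turbo") -> str:
    """Condense earlier dialog turns into a short summary that can replace
    them in the prompt, freeing up context-window tokens."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Summarize this dialog concisely, keeping every fact "
                        "needed to continue the conversation."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def compact(history: list[dict], keep_last: int = 4) -> list[dict]:
    """Replace older turns with their summary; only recent turns stay verbatim."""
    summary = summarize_history(history[:-keep_last])
    return ([{"role": "system", "content": f"Summary so far: {summary}"}]
            + history[-keep_last:])
```

Calling `compact(history)` before each request keeps the prompt short while the summary preserves the facts from the earlier dialog.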