Answering given paragraphs
Answering the question given the top paragraphs—with subrecipes!
Now all we have to do is combine the paragraph finder with the question-answering recipe we built earlier.
We could just paste in the code from the question-answering recipe. However, we can also reuse it directly as a subrecipe. If you have the code in qa.py, we can import and use it:
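Here's a minimal sketch, assuming the paragraph finder from the previous chapter lives in paragraph_finder.py and exposes an async get_relevant_paragraphs(paper, question), and that the answer function in qa.py accepts context and question arguments (both names are assumptions based on the earlier chapters):

```python
from ice.paper import Paper
from ice.recipe import recipe

from paragraph_finder import get_relevant_paragraphs  # assumed module/function name
from qa import answer  # assumed signature: answer(context, question)


async def answer_for_paper(paper: Paper, question: str = "What was the study population?"):
    # Find the paragraphs most relevant to the question...
    paragraphs = await get_relevant_paragraphs(paper, question)
    # ...then hand them to the question-answering subrecipe as context.
    context = "\n\n".join(str(p) for p in paragraphs)
    return await answer(context=context, question=question)


recipe.main(answer_for_paper)
```

Because answer is just an async function, importing it is all the glue we need, and its calls will show up nested under answer_for_paper in the trace.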
Running the same command again…
…you should get an answer like this:
Take a look at the trace to see how it all fits together:
Exercises
We’re taking a fixed number of paragraphs (3) and sticking them into the prompt. This will sometimes leave prompt space unused, and sometimes it will overflow. Modify the recipe to use as many paragraphs as can fit into the prompt. (Hint: A prompt for current models has space for 2048 tokens, and a token is about 3.5 characters. A starting-point sketch follows these exercises.)
We’re classifying paragraphs individually, but it could be better to rank them by showing the model pairs of paragraphs and asking it which better answers the question. Implement this as an alternative. (A sketch of the pairwise comparison step follows these exercises.)
(Advanced) Implement debate with agents that have access to the same paper, and let agents provide quotes from the paper that can’t be faked. Does being able to refer to ground truth quotes favor truth in debate?
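For the first exercise, one possible starting point is to greedily pack the most relevant paragraphs into a character budget derived from the hint (2048 tokens at about 3.5 characters per token). The reserved allowance for the question and instructions is an assumption, not part of the original recipe:

```python
# Greedy packing for Exercise 1. The budget numbers come from the hint;
# RESERVED_CHARS is a hypothetical allowance for the rest of the prompt.
PROMPT_TOKENS = 2048
CHARS_PER_TOKEN = 3.5
RESERVED_CHARS = 500  # assumed room for the question and instructions


def paragraphs_that_fit(ranked_paragraphs: list[str]) -> list[str]:
    """Take paragraphs, most relevant first, until the prompt budget is full."""
    budget = int(PROMPT_TOKENS * CHARS_PER_TOKEN) - RESERVED_CHARS
    selected: list[str] = []
    used = 0
    for paragraph in ranked_paragraphs:
        if used + len(paragraph) > budget:
            break
        selected.append(paragraph)
        used += len(paragraph)
    return selected
```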
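For the second exercise, here is a sketch of the pairwise comparison step, assuming the agent API from the earlier chapters (recipe.agent().complete); the prompt wording is an assumption:

```python
from ice.recipe import recipe


def make_comparison_prompt(question: str, a: str, b: str) -> str:
    # Hypothetical prompt format: present both paragraphs and ask for a choice.
    return f"""Question: "{question}"

Paragraph A: "{a}"

Paragraph B: "{b}"

Which paragraph is more helpful for answering the question? Answer "A" or "B".

Answer: \""""


async def better_paragraph(question: str, a: str, b: str) -> str:
    """Return whichever of two paragraphs the model says better answers the question."""
    prompt = make_comparison_prompt(question, a, b)
    choice = await recipe.agent().complete(prompt=prompt, stop='"')
    return a if choice.strip().startswith("A") else b
```

You could then use this comparison inside a sort or a single-elimination tournament to rank all paragraphs.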