Let’s start by just making a prompt for choosing an action, without hooking it up to actually executing the action. This way, we can check whether the model is even capable of making reasonable choices.
There’s a long list of actions we could choose between. For this first version, we’ll limit ourselves to:
Web search
Computation
Reasoning
Representing actions
Let’s first represent the actions as a data type. For each action we’ll also store an associated description that will help the model choose between them, and the recipe that runs the action:
answer_by_dispatch/types.py
```python
from dataclasses import dataclass
from typing import Protocol

from ice.recipes.primer.answer_by_computation import answer_by_computation
from ice.recipes.primer.answer_by_reasoning import answer_by_reasoning
from ice.recipes.primer.answer_by_search import answer_by_search


class QuestionRecipe(Protocol):
    async def __call__(self, question: str) -> str:
        ...


@dataclass
class Action:
    name: str
    description: str
    recipe: QuestionRecipe


action_types = [
    Action(
        name="Web search",
        description="Run a web search using Google. This is helpful if the question depends on obscure facts or current information, such as the weather or the latest news.",
        recipe=answer_by_search,
    ),
    Action(
        name="Computation",
        description="Run a computation in Python. This is helpful if the question depends on calculation or other mechanical processes that you can specify in a short program.",
        recipe=answer_by_computation,
    ),
    Action(
        name="Reasoning",
        description="Write out the reasoning steps. This is helpful if the question involves logical deduction or evidence-based reasoning.",
        recipe=answer_by_reasoning,
    ),
]
```
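The Protocol-plus-dataclass pattern doesn't depend on ICE. Here's a minimal standalone sketch with a stub recipe standing in for a real subrecipe such as answer_by_search (the stub and its answer are made up for illustration):

```python
import asyncio
from dataclasses import dataclass
from typing import Protocol


class QuestionRecipe(Protocol):
    # Any async callable that takes a question and returns an answer qualifies.
    async def __call__(self, question: str) -> str:
        ...


@dataclass
class Action:
    name: str
    description: str
    recipe: QuestionRecipe


# Stub recipe standing in for a real subrecipe such as answer_by_search.
async def stub_search(question: str) -> str:
    return f"(search result for: {question})"


web_search = Action(
    name="Web search",
    description="Run a web search using Google.",
    recipe=stub_search,
)

print(asyncio.run(web_search.recipe(question="How many people live in Germany?")))
# → (search result for: How many people live in Germany?)
```

Because QuestionRecipe is a structural Protocol, any async function with a matching signature can be stored in the recipe field without inheriting from a base class.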
From actions to prompts
We render the actions as an action selection prompt like this:
answer_by_dispatch/prompt.py
```python
from fvalues import F

from ice.recipes.primer.answer_by_dispatch.types import *


def make_action_selection_prompt(question: str) -> str:
    action_types_str = F("\n").join(
        [
            F(f"{i + 1}. {action_type.description}")
            for i, action_type in enumerate(action_types)
        ]
    )
    return F(
        f"""You want to answer the question "{question}".
You have the following options:
{action_types_str}
Q: Which of these options do you want to use before you answer the question? Choose the option that will most help you give an accurate answer.
A: I want to use option #"""
    ).strip()
```
So, make_action_selection_prompt("How many people live in Germany?") results in:
You want to answer the question "How many people live in Germany?".
You have the following options:
1. Run a web search using Google. This is helpful if the question depends on obscure facts or current information, such as the weather or the latest news.
2. Run a computation in Python. This is helpful if the question depends on calculation or other mechanical processes that you can specify in a short program.
3. Write out the reasoning steps. This is helpful if the question involves logical deduction or evidence-based reasoning.
Q: Which of these options do you want to use before you answer the question? Choose the option that will most help you give an accurate answer.
A: I want to use option #
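Since F behaves like an ordinary string, you can reproduce the same prompt with plain f-strings to experiment outside ICE. A minimal standalone sketch, with the descriptions copied from the action list above:

```python
descriptions = [
    "Run a web search using Google. This is helpful if the question depends on obscure facts or current information, such as the weather or the latest news.",
    "Run a computation in Python. This is helpful if the question depends on calculation or other mechanical processes that you can specify in a short program.",
    "Write out the reasoning steps. This is helpful if the question involves logical deduction or evidence-based reasoning.",
]


def make_action_selection_prompt(question: str) -> str:
    # Render each description as a numbered option, then assemble the prompt.
    options = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(descriptions))
    return f"""You want to answer the question "{question}".
You have the following options:
{options}
Q: Which of these options do you want to use before you answer the question? Choose the option that will most help you give an accurate answer.
A: I want to use option #""".strip()


print(make_action_selection_prompt("How many people live in Germany?"))
```

The only thing F adds over a plain f-string is provenance tracking for debugging, so the rendered text is identical.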
Choosing the right action
We’ll treat action choice as a classification task, and print out the probability of each action:
answer_by_dispatch/classify.py
```python
from ice.recipe import recipe
from ice.recipes.primer.answer_by_dispatch.prompt import *


async def answer_by_dispatch(question: str = "How many people live in Germany?"):
    prompt = make_action_selection_prompt(question)
    choices = tuple(str(i) for i in range(1, len(action_types) + 1))
    probs, _ = await recipe.agent().classify(prompt=prompt, choices=choices)
    return list(zip(probs.items(), [a.name for a in action_types]))


recipe.main(answer_by_dispatch)
```
Let’s test it:
$ python answer_by_dispatch/classify.py --question "How many people live in Germany?"
As expected, the model puts most of the probability on web search, with some weight left on the other actions.
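Pairing the probabilities with action names, as classify.py does, is just a zip; picking the winner, as select_action does in the next section, is an argmax. A standalone sketch with made-up numbers (classify returns a probability per choice string):

```python
# Hypothetical probabilities of the kind classify might return for the three
# options; the numbers here are made up for illustration.
choice_probs = {"1": 0.85, "2": 0.05, "3": 0.10}
action_names = ["Web search", "Computation", "Reasoning"]

# Pair each (choice, probability) with its action name, as classify.py does.
paired = list(zip(choice_probs.items(), action_names))
print(paired)

# Pick the highest-probability choice and map it back to an action.
best_choice = max(choice_probs.items(), key=lambda x: x[1])[0]
print(action_names[int(best_choice) - 1])  # → Web search
```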
Executing actions
Now let’s combine the action selector with the chapters on web search, computation, and reasoning to get a single agent that can choose the appropriate action.
This is straightforward: since all the actions are already associated with subrecipes, all we need to do is run the chosen subrecipe:
answer_by_dispatch/execute.py
```python
from ice.recipe import recipe
from ice.recipes.primer.answer_by_dispatch.prompt import *


async def select_action(question: str) -> Action:
    prompt = make_action_selection_prompt(question)
    choices = tuple(str(i) for i in range(1, len(action_types) + 1))
    choice_probs, _ = await recipe.agent().classify(prompt=prompt, choices=choices)
    best_choice = max(choice_probs.items(), key=lambda x: x[1])[0]
    return action_types[int(best_choice) - 1]


async def answer_by_dispatch(question: str = "How many people live in Germany?") -> str:
    action = await select_action(question)
    result = await action.recipe(question=question)
    return result


recipe.main(answer_by_dispatch)
```
Let’s try it with our examples above:
$ python answer_by_dispatch/execute.py --question "How many people live in Germany?"
The current population of Germany is 84,370,487 as of Monday, September 12, 2022, based on Worldometer elaboration of the latest United Nations data.
$ python answer_by_dispatch/execute.py --question "What is sqrt(2^8)?"
16.0
$ python answer_by_dispatch/execute.py --question "Is transhumanism desirable?"
It is up to each individual to decide whether or not they believe transhumanism is desirable.
These are arguably better answers than we’d get without augmentation.
Exercises
Suppose that actions are taking place within the context of a long document. Add an action type for searching for a particular phrase in the document and returning the results.
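For the document-search exercise, a hedged starting point: a plain-Python paragraph search. The function name find_in_document and the paragraph-splitting heuristic are made up for illustration; a real solution would wrap this in a QuestionRecipe and append a corresponding Action to action_types.

```python
def find_in_document(document: str, phrase: str) -> list[str]:
    """Return the paragraphs of `document` that contain `phrase` (case-insensitive)."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    return [p for p in paragraphs if phrase.lower() in p.lower()]


doc = "Berlin is the capital of Germany.\n\nAbout 84 million people live in Germany."
print(find_in_document(doc, "capital"))  # → ['Berlin is the capital of Germany.']
```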
Add an action type for debate:
```python
Action(
    name="Debate",
    description='Run a debate. This is helpful if the question is a pro/con question that involves different perspectives, arguments, and evidence, such as "Should marijuana be legalized?" or "Is veganism better for the environment?".',
)
```
Get feedback on exercise solutions
If you want feedback on your exercise solutions, submit them through this form. We—the team at Ought—are happy to give our quick take on whether you missed any interesting ideas.