# Factored Cognition Primer

You’ll learn how to:

* Amplify language models like GPT-3 through recursive question-answering and debate
* Reason about long texts by combining search and generation
* Run decompositions quickly by parallelizing language model calls
* Build human-in-the-loop agents
* Use verification of answers and reasoning steps to improve responses
* And more!

<figure><img src="https://393762053-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFqoUXVrYie7Ht7Fi4JrU%2Fuploads%2FIzVjtC27coFsPMhx6i95%2FCleanShot%202022-09-16%20at%2016.44.png?alt=media&#x26;token=e17729b3-09a2-413e-b581-75d4943d3001" alt=""><figcaption><p>Example of a decomposition for <a href="chapters/long-texts">reasoning about papers</a>.</p></figcaption></figure>

This book focuses on techniques that are likely to remain relevant as language models improve.
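As a taste of the first technique above, here is a minimal sketch of recursive question-answering. The function names (`answer_directly`, `decompose`) and the splitting heuristic are hypothetical stand-ins: in the rest of the book, both would be calls to a language model, but they are stubbed here so the example runs without an API key.

```python
def answer_directly(question: str) -> str:
    # Hypothetical stand-in for a single language model call.
    return f"<answer to: {question}>"


def decompose(question: str) -> list[str]:
    # Hypothetical stand-in for a model-generated decomposition into
    # subquestions; here we just split conjunctive questions on "and".
    if " and " in question:
        return [part.strip() + "?" for part in question.rstrip("?").split(" and ")]
    return []


def answer(question: str, depth: int = 1) -> str:
    """Answer a question, recursing into subquestions up to `depth` levels."""
    subquestions = decompose(question) if depth > 0 else []
    if not subquestions:
        return answer_directly(question)
    # Answer each subquestion, then answer the original question
    # in light of the subanswers (amplification).
    subanswers = [answer(q, depth - 1) for q in subquestions]
    context = " ".join(subanswers)
    return answer_directly(f"{question} (given: {context})")


print(answer("What is factored cognition and why does it help?"))
```

The key design choice, covered in depth later, is that each call sees only its own small question plus the subanswers, never the full reasoning trace, which is what makes the subcalls independent and parallelizable.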

<details>

<summary>How to cite this book</summary>

Please cite this book as:

{% code overflow="wrap" %}

```
A. Stuhlmüller, J. Reppert, and L. Stebbing (2022). Factored Cognition Primer. Retrieved December 6, 2022 from https://primer.ought.org.
```

{% endcode %}

BibTeX:

```latex
@misc{primer2022,
  author = {Stuhlmüller, Andreas and Reppert, Justin and Stebbing, Luke},
  title = {Factored Cognition Primer},
  year = {2022},
  howpublished = {\url{https://primer.ought.org}},
  urldate = {2022-12-06}
}
```

</details>
