♻️ Re-Prompting
One of the challenges in prompt engineering is generating long texts, on the order of 2,000 words or more. One solution is the Recursive Reprompting and Revision (Re3) framework:
- Prompt a general-purpose language model to construct a structured overarching plan
- Generate story passages by repeatedly injecting contextual information from the plan and the current story state into a language model prompt
- Rerank different continuations for plot coherence and premise relevance
- Edit the best continuation for factual consistency
This technique produces significantly more coherent long-form output than prompting a base model directly. It can be seen as an expanded form of chain-of-thought prompting: the prompt is dynamically reconstructed at each step, pulling in the contextually relevant pieces of the initial plan.
You may notice that this mirrors a human's writing process: plan, draft, then revise. A minimal end-to-end sketch of the loop is shown below; the individual modules are described in the sections that follow.
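The sketch below ties the four steps together, assuming a generic `complete(prompt) -> str` text-completion function supplied by the caller. All prompt wording and function names here are illustrative, not the paper's actual implementation.

```python
from typing import Callable

def generate_story(premise: str,
                   complete: Callable[[str], str],
                   n_sections: int = 5,
                   n_samples: int = 4) -> str:
    # Plan: turn the premise into a structured outline.
    plan = complete(
        f"Premise: {premise}\n"
        f"Write a setting, a character list, and a {n_sections}-point plot outline."
    )
    story = ""
    for i in range(n_sections):
        # Draft: rebuild the prompt from the plan and the story so far,
        # then resample several candidate continuations.
        prompt = (f"Outline:\n{plan}\n\nStory so far (recent text):\n"
                  f"{story[-2000:]}\n\nWrite the passage for outline point {i + 1}.")
        candidates = [complete(prompt) for _ in range(n_samples)]
        # Rewrite: ask the model to pick the most coherent, on-premise candidate.
        listing = "\n\n".join(f"[{j}] {c}" for j, c in enumerate(candidates))
        pick = complete(f"Premise: {premise}\nCandidates:\n{listing}\n"
                        "Reply with only the index of the most coherent candidate.")
        try:
            best = candidates[int(pick.strip()[0])]
        except (ValueError, IndexError):
            best = candidates[0]
        # Edit: one fact-checking pass before the passage is accepted.
        best = complete(f"Story so far:\n{story[-2000:]}\nNew passage:\n{best}\n"
                        "Fix any factual inconsistencies and return the passage.")
        story += "\n\n" + best
    return story
```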
Plan
Given an input premise, this module produces a plan or outline for the entire text. It first extracts the critical elements (such as the setting and characters) from the premise, then generates an outline based on those elements, as sketched below.
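A sketch of the Plan step's two-stage prompting, using the same generic `complete` function as above. The exact prompt wording is an assumption.

```python
from typing import Callable

def plan(premise: str, complete: Callable[[str], str]) -> str:
    # Stage 1: extract the critical story elements from the premise.
    elements = complete(
        f"Premise: {premise}\n"
        "List the setting and the main characters, each with a one-line description."
    )
    # Stage 2: generate an outline grounded in those elements.
    outline = complete(
        f"Premise: {premise}\nStory elements:\n{elements}\n"
        "Write a numbered outline of the main plot points."
    )
    return f"{elements}\n\nOutline:\n{outline}"
```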
Draft
Draft is where the most important work happens. Here, the prompt combines several pieces of context: the relevant parts of the outline, a summary of the story so far, the most recent text, and what should happen in the next section. This gives each generated passage an awareness of what has already happened and what should happen next, which leaves less room for hallucination and reduces the margin of error.
Most importantly, the Draft module resamples the passage several times, so that the best continuation can be selected rather than committing to a single sample.
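A sketch of the Draft step: the prompt is rebuilt at every iteration from the outline, a rolling summary, and the most recent text, then sampled several times. Parameter names and prompt wording are illustrative.

```python
from typing import Callable

def draft(plan: str, summary: str, recent_text: str, next_point: str,
          complete: Callable[[str], str], n_samples: int = 4) -> list[str]:
    # The prompt is assembled fresh at each step from the current story state.
    prompt = (
        f"Outline:\n{plan}\n\n"
        f"Summary of the story so far:\n{summary}\n\n"
        f"Most recent passage:\n{recent_text}\n\n"
        f"Next, the story should cover: {next_point}\n"
        "Continue the story."
    )
    # Resample: several independent continuations for the reranker to choose from.
    return [complete(prompt) for _ in range(n_samples)]
```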
Rewrite
To further improve the quality of the output, the Rewrite module acts as an editor: it reranks the sampled continuations by plot coherence and premise relevance, keeping the best one.
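Re3 trains dedicated coherence and relevance rerankers for this step; the sketch below approximates them with a single LLM scoring prompt, which is an assumption rather than the paper's method.

```python
from typing import Callable

def rerank(candidates: list[str], premise: str, recent_text: str,
           complete: Callable[[str], str]) -> str:
    def score(candidate: str) -> float:
        # LLM-as-judge stand-in for the paper's trained rerankers.
        reply = complete(
            f"Premise: {premise}\nPrevious passage:\n{recent_text}\n"
            f"Candidate continuation:\n{candidate}\n"
            "Rate its coherence and premise relevance from 0 to 10. "
            "Reply with a single number."
        )
        try:
            return float(reply.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0
    return max(candidates, key=score)
```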
Edit
The Edit module further refines the passage produced by the system. The idea is to fact-check the passage for inaccuracies and correct them. It does this by maintaining a knowledge base of attributes for each character and checking new information against that knowledge base.
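A sketch of the Edit step: keep a simple per-character knowledge base, check each new passage against it, and record any new facts. Re3 builds the knowledge base with a dedicated attribute-detection system; here an LLM prompt stands in for it (an assumption).

```python
from typing import Callable

def edit(passage: str, knowledge_base: dict[str, str],
         complete: Callable[[str], str]) -> str:
    facts = "\n".join(f"{name}: {attrs}" for name, attrs in knowledge_base.items())
    # Correct the passage against the known character facts.
    checked = complete(
        f"Known character facts:\n{facts}\n\nPassage:\n{passage}\n"
        "Rewrite the passage so it contradicts none of the facts above."
    )
    # Extract any new attributes the passage establishes and store them.
    updates = complete(
        f"Passage:\n{checked}\n"
        "List each character mentioned and one new fact about them, "
        "one 'Name: fact' pair per line."
    )
    for line in updates.splitlines():
        if ":" in line:
            name, fact = line.split(":", 1)
            prior = knowledge_base.get(name.strip(), "")
            knowledge_base[name.strip()] = (prior + " " + fact.strip()).strip()
    return checked
```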
The technique was proposed by a group from Meta AI, UC Berkeley, and UCLA.