# CH02 Prompt

The prompt step is **the process by which the question or instruction given to the language model is constructed** from the documents retrieved by the retriever. **Based on the retrieved information**, this step is essential for generating a response that best answers the end user's question.

### Need for prompt <a href="#id-1" id="id-1"></a>

1. **Context setting**: The prompt sets the context in which the language model operates. This allows the model to generate more accurate and relevant answers based on the information provided.
2. **Information integration**: Information retrieved from multiple documents may contain differing perspectives or content. The prompt stage integrates this information and arranges it into a format the model can use efficiently.
3. **Improved response quality**: The quality of the model's response depends heavily on how the prompt is composed. A well-organized prompt helps the model provide more accurate and useful information.
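The "information integration" point above can be sketched in code. The `format_docs` helper and the sample documents below are illustrative assumptions, not part of any specific library: the idea is simply to merge several retrieved documents into one context block the model can read.

```python
# Sketch: integrating multiple retrieved documents into a single context
# string before it is placed in the prompt. `format_docs` is a hypothetical
# helper name; the document texts are made-up examples.

def format_docs(docs: list[str]) -> str:
    """Join retrieved documents into one labeled context block."""
    return "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(docs)
    )

docs = [
    "RAG retrieves documents relevant to the user's question.",
    "The prompt combines the question with the retrieved context.",
]
print(format_docs(docs))
```

Labeling each document (here with `[Document N]`) makes it easier for the model to distinguish sources when the retrieved passages disagree.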

### RAG prompt structure <a href="#rag" id="rag"></a>

* Instruction
* Question (the user's input question)
* Context (retrieved information)
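A minimal template following this three-part structure might look like the sketch below. The template wording and the sample values are assumptions for illustration, not a prescribed format:

```python
# Sketch: a RAG prompt with the Instruction / Question / Context structure.
# The instruction text, question, and context strings are illustrative.

RAG_PROMPT = """\
Answer the question using only the context below.
If the context does not contain the answer, say you don't know.

# Question:
{question}

# Context:
{context}
"""

prompt = RAG_PROMPT.format(
    question="What role does the prompt play in RAG?",
    context="The prompt combines the user's question with retrieved documents.",
)
print(prompt)
```

The instruction at the top constrains the model to the supplied context, which helps reduce answers invented from the model's own training data.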

### The importance of prompts <a href="#id-2" id="id-2"></a>

The prompt phase plays an important role in the RAG system.

Through this step, the language model can **generate responses to the user's question in an optimized way**, which **directly affects overall system performance and user satisfaction**. If the prompt is not well organized, the model may work inefficiently and is more likely to generate responses that do not meet the user's needs.

### Reference <a href="#id-3" id="id-3"></a>

* [prompt](https://wikidocs.net/233351)
* [LangChain Prompts](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/)
