"build a chatbot capable of extracting any kind of information from a set of documents ... your own Document Assistant from scratch, using GPT-3 and Langchain"
Good explainer of using embeddings to get around token limits "using LangChain, which is an open-source library designed to simplify the utilization of LLMs with Chain of Thoughts". Detailed instructions are provided for each step.
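The core trick can be sketched in a few lines: rather than stuffing every document into one over-long prompt, embed the chunks once and send the model only the chunks most similar to the question. This is a toy illustration, not the article's code; a real pipeline would use OpenAI embeddings and a LangChain vector store, so the bag-of-words `embed` below is just a stand-in for the embedding model.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: a term-frequency vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the question; only these top-k chunks
    # go into the prompt, keeping it under the model's token limit.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The invoice total was 1,200 euros, due in thirty days.",
    "Our office cat, Miso, sleeps near a sunny window.",
    "Payment terms: invoices are due within thirty days of receipt.",
]
print(top_chunks("When is the invoice due?", chunks, k=2))
```

With a real embedding model the similarity is semantic rather than lexical, but the retrieval step is the same: embed once, search at question time, prompt with only the matches.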
Because the prompt specifies that the answer should be based only on the provided chunks, hallucinations are reduced. For the same reason, the temperature is set to zero, making GPT choose "the next token with the highest probability based on the previous ones... reduces the likelihood of less plausible tokens being generated."
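A grounded prompt of that kind might look like the sketch below. The helper name and wording are illustrative (not LangChain's actual API); `temperature` is the standard OpenAI completion parameter, and setting it to 0 makes decoding greedy, i.e. the model always picks its highest-probability next token.

```python
def build_grounded_prompt(chunks: list[str], question: str) -> str:
    # Number the retrieved chunks and instruct the model to stay inside them.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using ONLY the context below. "
        'If the answer is not in the context, say "I don\'t know."\n\n'
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Greedy decoding: temperature 0 removes sampling randomness.
request_params = {"temperature": 0}

prompt = build_grounded_prompt(
    ["Invoices are due within thirty days of receipt."],
    "When is the invoice due?",
)
print(prompt)
```

The prompt and `request_params` would then be passed together to the completion endpoint, so the only degrees of freedom left to the model are the retrieved chunks themselves.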
More Stuff I Like