Good introduction to GPTs in general and a useful guide to getting a GPT to talk to your own knowledge base via its API.
Presents GPTs as a "very similar concept to ... open-source projects like Agents", which LangChain, a popular framework for building LLM applications, describes as a way "to use a language model to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order." As the author puts it, "OpenAI has abstracted the building of Agents with GPTs without any programming."
Author provides the prompt underlying a GPT called "Reverse Fashion Search" - the prompt "provides an overview on what i[t] should do, a step-by-step process and constraints ... [using] techniques like few-shot prompt[ing] by providing an example on how to speak ... some detailed reasoning steps for the model to follow."
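The structure he describes - overview, process steps, constraints, a few-shot example - can be sketched as a prompt skeleton (the wording below is illustrative, not the author's actual prompt):

```text
You are <GPT name>, an assistant that <one-line purpose>.

Process:
1. Ask the user for <input>.
2. <Reasoning step the model should perform>.
3. Return <output format>.

Constraints:
- <Things the GPT must not do>.

Example exchange (few-shot):
User: <sample request>
Assistant: <sample answer in the desired voice>
```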
Author shows how to "give our GPT API access to other systems ... by using an action. Actions use Web API[s] ... [which] gives ChatGPT the ability to make requests ... This interface follows the ... OpenAPI (also known as Swagger) specification ... at least version 3.0".
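A minimal OpenAPI 3.0 schema for an Action looks like this - the endpoint, URL, and fields here are hypothetical, purely to show the shape ChatGPT expects:

```yaml
openapi: 3.0.0
info:
  title: Fashion Search API   # hypothetical service, for illustration only
  version: 1.0.0
servers:
  - url: https://api.example.com
paths:
  /search:
    get:
      operationId: searchProducts   # ChatGPT uses this name to invoke the action
      summary: Search the product catalog by description
      parameters:
        - name: query
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Matching products
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    name:
                      type: string
                    url:
                      type: string
```

The `summary` and `operationId` matter more than they would in ordinary API docs: the model reads them to decide when and how to call the endpoint.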
Author also covers retrieval augmented generation (RAG), but here he's guessing what GPTs do based on the Assistants API - a reasonable assumption, but not confirmed: "Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries."
As he points out, if this is happening it's convenient but "doesn’t provide any control on how documents are split (chunked), indexed or how similarity search occur[s] ... [so] if your GPT has a complex dataset, you might want to consider implementing your own RAG".
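The "implement your own RAG" option boils down to controlling three things yourself: chunking, indexing, and similarity search. Here is a minimal self-contained sketch - it uses a toy bag-of-words "embedding" so it runs without any API, whereas a real implementation would swap `embed()` for an actual embedding model:

```python
# Toy RAG retrieval layer: you control chunk size, overlap, and ranking.
import math
import re
from collections import Counter

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows (the part the Assistants API hides)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts. Replace with a real embedding model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("Returns are accepted within 30 days. Shipping is free over $50. "
        "Gift cards never expire.")
chunks = chunk(docs, size=8, overlap=2)
print(retrieve("when do gift cards expire?", chunks, k=1))
```

The retrieved chunk(s) would then be pasted into the prompt as context. The point of owning this layer is exactly the knobs the quote mentions: `size` and `overlap` tune chunking, and `cosine` can be replaced with whatever ranking suits the dataset.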
He ends with an example showing "how inter-GPTs communication might work ... check out my Message In a Bottle GPT".
More Stuff I Like