Curated Resource

LLaMA & Alpaca: “ChatGPT” On Your Local Computer 🤯 | Tutorial

my notes

"how you can run state-of-the-art large language models on your local computer" using two models which are "comparable or even outperform GPT [but can] run on your local computer... released under a non-commercial license":

  • "dalai library" - allows us to run LLaMA & Alpaca, provides an API
  • "LLaMA: foundational (or broad) language model ... 13x smaller than GPT-3 [but] outperforms [it] on most benchmarks.... LLaMA-7B takes up around 31GB ... trained primarily on English
  • Alpaca model: fine-tuned LLaMA capable of following instructions" like ChatGPT, about 4 GB.

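As mentioned in the first bullet, dalai handles downloading the weights and serving a local API. Here's a minimal sketch of getting it running, wrapped in Python for consistency with the other examples on this page; the "npx dalai" commands are taken from the dalai README and may have changed since:

    import subprocess

    # One-time download of the 4-bit quantized Alpaca 7B weights
    # (roughly 4 GB, per the notes above). Command per the dalai README.
    subprocess.run(["npx", "dalai", "alpaca", "install", "7B"], check=True)

    # Launch the local web UI and API (http://localhost:3000 by default).
    subprocess.run(["npx", "dalai", "serve"], check=True)
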
From a follow-up Q&A:

  • training these models with your own data: "Stanford released the data used, the code for generating the instruction-following data and the code for fine-tuning the LLaMA model... the documentation is very well written... check out the Alpaca-LoRA repository... if you want to fine-tune the model on your own (single) GPU ... [as it] uses low-rank adaption (LoRA) ... [a] parameter-efficient fine-tuning technique" (see the LoRA sketch after this list)
  • LLaMA model's context window: 2048 tokens, shared between the input and the generated output (see the token-budget sketch below)
  • for code generation, use a non-quantized version of Alpaca on a GPU
  • the models cannot access the internet and don't learn from your conversations, but they do remember interactions within a single session
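
As referenced above, here is a minimal sketch of the LoRA idea using Hugging Face's peft library, which implements the same parameter-efficient technique Alpaca-LoRA builds on. The model path and hyperparameters are illustrative, not Alpaca-LoRA's exact values:

    # pip install peft transformers
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    # Hypothetical path: substitute wherever your LLaMA weights live.
    base_model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")

    lora_config = LoraConfig(
        r=8,                                  # rank of the low-rank update matrices
        lora_alpha=16,                        # scaling factor for the update
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )

    # Freeze the base model and inject small trainable adapter matrices.
    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all parameters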

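And since the 2048-token limit covers the prompt and the generated reply together, a small sketch of budgeting tokens before calling the model (tokenizer path again illustrative):

    from transformers import LlamaTokenizer

    CONTEXT_WINDOW = 2048  # LLaMA's combined input + output limit, per the Q&A

    # Hypothetical local path to the tokenizer files shipped with the weights.
    tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b")

    def max_new_tokens(prompt: str) -> int:
        """How many tokens the model may still generate for this prompt."""
        used = len(tokenizer.encode(prompt))
        return max(0, CONTEXT_WINDOW - used)

    print(max_new_tokens("Explain low-rank adaptation in one paragraph."))
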
Read the Full Post

The above notes were curated from the full post at medium.com/@martin-thissen/llama-alpaca-chatgpt-on-your-local-computer-tutorial-17adda704c23.

