Curated Resource

Using Copilot for Obsidian with a local LLM and vector store

My notes

"There are now several Obsidian plugins available that allow you to use local LLMs instead of a commercial LLM provider", so PKM Explorer tries the Copilot for Obsidian plugin on his Windows laptop. The plugin "not only offers the possibility to have a question and answer dialogue ... but also lets you [index your vault]... you can query the content... has three modes of operation:

  • Chat — default conversation with the installed LLM
  • Long Note QA — to ask questions about the active note in your vault
  • Vault QA (beta) — to ask questions about all information in your vault, based on an indexed version of the vault... uses your entire vault as context"

It can also perform NLP on selected text.

It works with multiple LLM providers via API, as well as local LLMs via Ollama or LM Studio, and "you can use a local embedding model to index your vault". The full article goes into more detail on installation and configuration, then gives one example per mode. TL;DR: the results seem accurate, but slow on the mid-range laptop used:

  • Long Note QA mode: several minutes
  • Vault QA mode: ~9 minutes
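The Vault QA idea described above (index every note with an embedding model, then retrieve the most relevant notes as context for the LLM) can be illustrated with a toy sketch. This is not the plugin's implementation: a real setup would use an actual local embedding model (e.g. served by Ollama) and a vector store, whereas here a simple bag-of-words vector stands in for the embedding so the example is self-contained.

```python
# Toy sketch of "index your vault, then query it" (Vault QA-style retrieval).
# A bag-of-words Counter stands in for a real embedding vector.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "vault" of notes (hypothetical filenames), indexed once up front.
vault = {
    "gtd.md": "weekly review and next actions for projects",
    "llm.md": "running a local llm with ollama on windows",
    "recipes.md": "sourdough starter feeding schedule",
}
index = {name: embed(body) for name, body in vault.items()}

def vault_qa_context(question: str, k: int = 1) -> list[str]:
    """Return the k notes most similar to the question; a real plugin
    would pass their content to the LLM as context for its answer."""
    q = embed(question)
    ranked = sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)
    return ranked[:k]

print(vault_qa_context("how do I run an llm locally?"))
```

The indexing step (here, one `embed` call per note) is what takes minutes on a full vault with a real embedding model; once built, each query only embeds the question and scores it against the stored vectors.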


The above notes were curated from the full post medium.com/@PKMExplorer/using-copilot-for-obsidian-with-a-local-llm-and-vector-store-4c508da3e97f.

