Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York Times
www.nytimes.com

Online chatbots are supposed to have "guardrails ... to prevent their systems from generating hate speech, disinformation and other toxic material." Now there is a way to easily poke holes in those safety systems... and use any of the leading chatbots to generate nearly unlimited amounts of harmful information... [using] a method gleaned from …

31/07/2023
Llama 2: an incredible open LLM
www.interconnects.ai

Summarises a recent Meta paper on Llama 2, "a continuation of the LLaMA... Big picture, this is a big step for the LLM ecosystem when research sharing is at an all-time low and regulatory capture at an all-time high" - so Meta continues its improbable position as the good guy in the open-source movement (at least when it comes to AI, but also possibl…

LLaMA & Alpaca: “ChatGPT” On Your Local Computer 🤯 | Tutorial
medium.com

"how you can run state-of-the-art large language models on your local computer" using two models which are "comparable or even outperform GPT [but can] run on your local computer... released under a non-commercial license":"dalai library" - allows us to run LLaMA & Alpaca, provides an API"LLaMA: foundational …

08/04/2023
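The tutorial's local-inference workflow boils down to two steps: install the model weights locally, then query them through the dalai Node API. Below is a minimal sketch of the second step, assuming the 7B Alpaca weights were already installed with `npx dalai alpaca install 7B`; the model identifier, prompt and call shape are illustrative and based on the dalai README, so check them against the library's current documentation rather than treating this as the tutorial's exact code.

```ts
// Rough sketch: streaming a completion from a locally installed Alpaca model
// through the dalai Node API. Assumes the weights were installed first, e.g.:
//   npx dalai alpaca install 7B
// Model name, prompt and callback signature follow the dalai README and may
// need adjusting for your installed version.
const Dalai = require("dalai"); // dalai ships without type definitions, so use require

const dalai = new Dalai();

dalai.request(
  {
    model: "alpaca.7B", // assumed identifier for the locally installed model
    prompt: "Explain in one sentence what a large language model is.",
  },
  (token: string) => {
    // dalai streams the response token by token; print tokens as they arrive
    process.stdout.write(token);
  }
);
```

If you would rather not write code at all, dalai can also be started with `npx dalai serve`, which exposes the same locally installed models through a web UI in the browser.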