Curated Resource

How knowledge graphs improve generative AI | InfoWorld

my notes

"despite the immense potential of LLMs, they come with a range of challenges... hallucinations, the high costs associated with training and scaling, the complexity of addressing and updating them, their inherent inconsistency, the difficulty of conducting audits and providing explanations, predominance of English language content... [they're also] poor at reasoning and need careful prompting for correct answers. All of these issues can be minimized by supporting your new internal corpus-based LLM by a knowledge graph... an information-rich structure that provides a view of entities and how they interrelate...

Beyond simply querying a KG for patterns, "you can also compute over the graph using graph algorithms and graph data science... uncovering facts previously obscured and lead to valuable insights [while]... embeddings (encompassing both its data and its structure) can be used in machine learning pipelines or as an integration point to LLMs."
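To make "computing over the graph" concrete, here's a minimal sketch (mine, not from the article) using networkx with invented entities and relations. PageRank stands in for the broader family of graph algorithms; dedicated libraries would produce the embeddings mentioned above.

```python
# A toy knowledge graph plus one standard graph algorithm (PageRank).
# All entities and relations here are invented for illustration.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Neo4j", "GraphDatabase", relation="IS_A")
kg.add_edge("GraphDatabase", "Database", relation="IS_A")
kg.add_edge("Neo4j", "Cypher", relation="QUERIED_WITH")
kg.add_edge("KnowledgeGraph", "GraphDatabase", relation="STORED_IN")

# PageRank surfaces structurally central entities -- one example of a
# graph algorithm revealing signal not visible in any single edge.
scores = nx.pagerank(kg)
for entity, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{entity}: {score:.3f}")
```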

How can LLMs and KGs work together?

Use an LLM to create a knowledge graph

Use LLMs to "process a huge corpus of text data ... to produce a knowledge graph ... [which] can be inspected, QA’d, and curated... [and] is explicit and deterministic about its answers in a way that LLMs are not."

Use a knowledge graph to train an LLM

Train an LLM "on our existing knowledge graph ... [to] build chatbots that are very skilled with respect to our products and services and that answer without hallucination."

Use a knowledge graph on the interaction path with an LLM to enrich queries and responses

"intercept messages going to and from the LLM and enrich them with data from our knowledge graph... [which is] used to enrich the prompt given to the LLM. Similarly, on the way back from the LLM, we can take embeddings and resolve them against the knowledge graph to provide deeper insight"

Use knowledge graphs to create better models

referencing "interesting research from Yejen Choi at the University of Washington ... an LLM is enriched by a secondary, smaller AI ... “critic.” looks for reasoning errors in the responses of the LLM, and in doing so creates a knowledge graph for downstream consumption by another training process that creates a “student” model. The student model is smaller and more accurate than the original LLM on many benchmarks because it never learns factual inaccuracies or inconsistent answers to questions."

Read the Full Post

The above notes were curated from the full post www.infoworld.com/article/3707814/how-knowledge-graphs-improve-generative-ai.amp.html.

