Autonomous Knowledge Agents
Until recently, Retrieval-Augmented Generation (RAG) has largely focused on retrieving information from a known knowledge source to seed a Language Model's response. RAG has shown great potential for grounding LM responses in a given knowledge set. But how do you manage large sets of text where the detailed knowledge contained in the text is unknown?
TrustGraph solves this problem through a Naive Extraction process driven by three autonomous knowledge agents. Naive Extraction ingests any text corpus and builds a knowledge model with no prior information about the corpus. This approach enables an automated, high-efficacy RAG pipeline for large sets of text where no human analysis previously exists. The true power of TrustGraph is the ability to perform a Naive Extraction on a text corpus, store the resulting knowledge model, and then share that knowledge model with any TrustGraph deployment. The modular architecture of TrustGraph enables true “plug and play” knowledge sharing.
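The exact agent design is internal to TrustGraph; purely as a conceptual sketch, one might picture three extraction agents, one each for topics, entities, and relationships, prompting an LM over raw text chunks with no prior knowledge of the corpus and merging their outputs into a single knowledge model. Every name below is hypothetical and does not reflect TrustGraph's actual APIs.

```python
# Conceptual sketch of a "naive" extraction pipeline: three independent
# agents read the same text chunks and contribute to one shared knowledge
# model. Hypothetical names only, not TrustGraph's agent design.

from dataclasses import dataclass, field

@dataclass
class KnowledgeModel:
    topics: set[str] = field(default_factory=set)
    entities: set[str] = field(default_factory=set)
    relationships: set[tuple[str, str, str]] = field(default_factory=set)

def extract(llm, instruction: str, chunk: str) -> list[str]:
    """Ask the LM to extract one kind of knowledge from a raw text chunk."""
    reply = llm(f"{instruction}\nReturn one item per line.\n\nText:\n{chunk}")
    return [line.strip() for line in reply.splitlines() if line.strip()]

def naive_extraction(chunks: list[str], llm) -> KnowledgeModel:
    model = KnowledgeModel()
    for chunk in chunks:
        # Agent 1: topic extraction
        model.topics.update(extract(llm, "List the topics discussed.", chunk))
        # Agent 2: entity extraction
        model.entities.update(extract(llm, "List the named entities.", chunk))
        # Agent 3: relationship extraction as subject|predicate|object triples
        for line in extract(llm, "List relationships as subject|predicate|object.", chunk):
            parts = [p.strip() for p in line.split("|")]
            if len(parts) == 3:
                model.relationships.add((parts[0], parts[1], parts[2]))
    return model
```

A knowledge model assembled this way could be serialized, stored, and later loaded into a different deployment, which is the kind of portability the “plug and play” knowledge sharing above refers to.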