NetApp has announced a new partnership with NVIDIA that brings NVIDIA Inference Microservices (NIM) to its platform, effectively flicking the ON switch for Retrieval-Augmented Generation (RAG).
What is RAG? In short, it is a technique that can be integrated into generative AI applications, enabling them to retrieve data and facts from external sources rather than relying solely on what the model learned during training.
Think of it as an assistant assigned to a particular LLM, one capable of fetching specialized information on a given topic; RAG plays the role of that assistant.
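To make the "assistant" analogy concrete, here is a minimal sketch of the RAG pattern: retrieve the most relevant snippet from a small knowledge base, then prepend it to the user's question before handing it to an LLM. Everything here is illustrative, with keyword overlap standing in for a real vector-similarity search; none of it reflects NetApp or NVIDIA APIs.

```python
# Illustrative RAG sketch: retrieve context, then augment the prompt.
# The function names, documents, and scoring are hypothetical examples,
# not NetApp/NVIDIA interfaces.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query
    (a toy stand-in for a real vector-similarity search)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved context
    before sending it to an LLM."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

# A tiny "proprietary data" stand-in.
knowledge_base = [
    "Q3 revenue grew 12 percent, driven by cloud storage subscriptions.",
    "The office relocation to Berlin is scheduled for next spring.",
]

prompt = build_prompt("How much did revenue grow in Q3?", knowledge_base)
print(prompt)
```

The key design point is that the LLM itself is never retrained: the fresh, proprietary facts travel inside the prompt, which is why RAG lets a general-purpose model answer questions about data it has never seen.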
In NetApp’s implementation, the company is integrating the new NVIDIA NeMo Retriever microservices, bundled with the NVIDIA AI Enterprise platform, into its own intelligent data infrastructure.
From the customer’s standpoint, this means all NetApp ONTAP users can now more effectively leverage customized LLMs trained on proprietary data. Thanks to this “new assistant,” they can interact with their data and surface business insights without compromising security or privacy.
Delving deeper, customers can now directly query data from spreadsheets, documents, presentations, technical drawings, images, meeting recordings, or even raw data from ERP/CRM systems through simple prompts, without additional complexity.
NetApp anticipates that the partnership will further reduce the friction, cost, and time to value of RAG, ultimately benefiting both established and emerging enterprises, whether their data resides on-premises or across public clouds, provided it is properly safeguarded, accessed, and used.