Kong has announced the latest version of Kong AI Gateway, which introduces new features to provide the artificial intelligence (AI) security and governance guardrails needed to make Generative AI (GenAI) and Agentic AI production-ready.
New updates include automated Retrieval-Augmented Generation (RAG) pipelines aimed at preventing Large Language Model (LLM) hallucinations, and a Personally Identifiable Information (PII) sanitisation plugin that protects personal data, passwords, codes, and more than 20 categories of PII across 12 different languages and most major AI providers.
“As artificial intelligence continues to evolve, organisations must adopt robust AI infrastructure to harness its full potential,” said Marco Palladino, CTO and Co-Founder, Kong. “With this latest version of AI Gateway, we’re equipping our customers with the tools necessary to implement Agentic AI securely and effectively, ensuring seamless integration without compromising user experience. Moreover, we’re helping solve some of the biggest challenges with LLMs, such as cutting down on hallucinations and improving data security and governance.”
Kong’s AI Gateway 3.10 is also available as part of Kong Konnect, the API lifecycle platform purpose-built to power API-driven innovation at scale. APIs remain a complex challenge if they aren’t high-quality, secure, discoverable, performant, and resilient. Konnect addresses this directly by unifying, securing, and streamlining API operations while delivering a seamless experience for both API providers and consumers.
By transforming APIs into strategic assets with built-in governance, automation, and monetisation, Konnect accelerates innovation, shortens time to market, and enables the development of superior digital experiences and AI-driven applications.
Key features of Kong AI Gateway 3.10 include:
An AI RAG Injector which:
- Reduces LLM hallucinations: the new automated RAG pipelines feature helps address one of the biggest challenges of working with LLMs, their tendency to ‘hallucinate’, or provide an inaccurate response. AI Gateway 3.10 can automatically query a vector database and insert relevant data into a prompt on the fly, ensuring the LLM augments its results with known knowledge sources
- Improves security and compliance: it also gives teams more control and the ability to consistently improve LLM response accuracy and reduce hallucinations at scale. By offloading RAG pipeline responsibility to the AI Gateway, the vector database now sits behind the protective layer of the Kong AI Gateway
- Enhances developer productivity and experience: while RAG pipelines are effective for mitigating hallucinations, building them is a time-intensive and manual process for developers. AI Gateway 3.10 simplifies this process, helping to improve developer productivity with a low-code/no-code approach to integrating applications with existing pipelines. This is achieved through the AI Gateway’s out-of-the-box capability to generate embeddings for an incoming prompt, fetch all relevant data, and automatically append it to the request
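The flow described above, embedding the incoming prompt, fetching the most relevant documents, and appending them to the request, can be illustrated with a minimal sketch. This is not Kong's implementation: the toy character-frequency `embed()` function and in-memory knowledge base are stand-ins for a real embedding model and vector database, which the gateway would query on the application's behalf.

```python
import math

# Toy in-memory "vector database"; a real deployment would query an
# external vector store configured on the gateway.
KNOWLEDGE = [
    "Kong AI Gateway sits between applications and LLM providers.",
    "RAG grounds LLM answers in retrieved knowledge sources.",
]

def embed(text: str) -> list[float]:
    # Crude bag-of-letters embedding; a stand-in for a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Pre-compute embeddings for every document in the knowledge base.
INDEX = [(doc, embed(doc)) for doc in KNOWLEDGE]

def inject_rag_context(prompt: str, top_k: int = 1) -> str:
    """Embed the prompt, rank documents by similarity, and prepend the
    best matches as context -- mimicking an automated RAG pipeline."""
    q = embed(prompt)
    ranked = sorted(INDEX, key=lambda pair: cosine(q, pair[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {prompt}"

augmented = inject_rag_context("How does RAG reduce hallucinations?")
print(augmented)
```

Because the augmentation happens before the request reaches the model, the application code never has to orchestrate the retrieval step itself, which is the productivity gain the release notes describe.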
Automatic PII Sanitisation which:
- Enhances security and governance: this new feature enables teams to more easily sanitise and protect personal data, passwords, and more than 20 categories of PII across 12 different languages and most major AI providers. This update will also enable platform owners to enforce sanitisation at the global platform level, removing the need for developers to manually code the sanitisation into every application they are building, and helping ensure that sanitisation is implemented consistently each time
- Improves the end user experience: while other sanitisation products may be limited to replacing sensitive data with a token or redacting it entirely, 3.10 can optionally reinsert the original data into the response before it reaches the end user, based on configuration settings defined by the platform owner. This ensures that users still receive the data they need, in a seamless and personalised fashion.
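The sanitise-then-restore behaviour described above can be sketched as a simple round trip: PII is swapped for opaque placeholder tokens before the prompt reaches the model, and the originals are reinserted into the response. This is only an illustration, not the plugin's actual logic; the real feature covers 20+ PII categories across 12 languages, while this toy version detects only email addresses and phone numbers via regex.

```python
import re

# Simplified detectors -- stand-ins for the plugin's PII categories.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d \-]{7,}\d")

def sanitise(prompt: str) -> tuple[str, dict]:
    """Replace detected PII with placeholder tokens, keeping a mapping
    so the original values can be restored later."""
    mapping: dict[str, str] = {}
    def repl(kind):
        def _r(match):
            token = f"<{kind}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        return _r
    clean = EMAIL.sub(repl("EMAIL"), prompt)
    clean = PHONE.sub(repl("PHONE"), clean)
    return clean, mapping

def restore(response: str, mapping: dict) -> str:
    """Reinsert original values into the model's response, mirroring the
    optional restore-in-response behaviour the platform owner configures."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

clean, mapping = sanitise("Contact ada@example.com or +44 20 7946 0958")
# The model only ever sees placeholder tokens, never the raw PII.
assert "ada@example.com" not in clean
# Simulate a model response that echoes a token back, then restore it.
answer = restore(f"I will email {list(mapping)[0]} shortly.", mapping)
print(answer)
```

Because the mapping lives at the gateway rather than in each application, sanitisation is enforced uniformly, which is the governance benefit the release highlights.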