Securing enterprise LLM gateways & misconfigured proxy exposure detection + mitigation framework for exposed LLM endpoints, API gateways, model download hooks

Authors

  • Ankita Sharma, TSB Bank, London

DOI:

https://doi.org/10.14741/ijaie/v.12.4.1

Keywords:

Large Language Models; API Gateways; Proxy Misconfiguration; AI Infrastructure Security; System-Level Co-Design; Supply Chain Security; Enterprise AI Governance

Abstract

The rapid enterprise adoption of large language models (LLMs) via API gateways, reverse proxies, and orchestration layers has enabled a novel category of security threats rooted in architectural misconfiguration rather than model-specific behavior. Exposed LLM endpoints, permissive proxy settings, and unprotected model download hooks increasingly facilitate data leakage, unauthorized inference, supply-chain failures, and trust boundary breaches. This paper describes a system-level detection and mitigation framework for securing enterprise LLM gateways, based on co-design principles from cyber-physical systems, embedded security, and safety-critical architectures.
The framework combines semantic configuration analysis, runtime monitoring, and secure-by-design architecture patterns to enable early exposure detection and design-time mitigation at the gateway. The paper shows how cross-domain knowledge from automotive, IoT, and CPS security can be applied to designing resilient LLM infrastructure, enabling scalable, trustworthy, and governable enterprise AI deployments.
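To make the semantic configuration analysis concrete, the following is a minimal illustrative sketch of how permissive proxy settings in front of an LLM endpoint might be flagged. The rule names, regex patterns, and the nginx-style sample config are all assumptions for illustration; they are not taken from the paper's actual detection framework.

```python
import re

# Heuristic checks for common reverse-proxy misconfigurations that can
# expose an LLM gateway. Rule names and patterns are illustrative only.
RISK_RULES = {
    "open_inference_route": re.compile(r"location\s+/v1/"),          # an LLM API route is exposed
    "auth_disabled": re.compile(r"auth_request\s+off|allow\s+all"),  # no upstream auth enforcement
    "wildcard_cors": re.compile(r"Access-Control-Allow-Origin\s+\*"),  # any origin may call the API
}

def scan_proxy_config(config_text: str) -> list:
    """Return the names of risk rules matched by a proxy config snippet.

    Only reports auth/CORS findings when an LLM inference route is
    actually present, to reduce noise on unrelated configs.
    """
    findings = []
    if RISK_RULES["open_inference_route"].search(config_text):
        for name, pattern in RISK_RULES.items():
            if name != "open_inference_route" and pattern.search(config_text):
                findings.append(name)
    return findings

# Hypothetical misconfigured reverse-proxy block for an LLM backend.
sample = """
server {
    listen 443 ssl;
    location /v1/completions {
        proxy_pass http://llm-backend:8000;
        add_header Access-Control-Allow-Origin *;
        allow all;
    }
}
"""
print(scan_proxy_config(sample))  # → ['auth_disabled', 'wildcard_cors']
```

A production analyzer would parse the proxy configuration into an AST rather than pattern-match raw text, but the sketch conveys the idea of design-time exposure detection from configuration alone.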
Published

2024-11-30

How to Cite

Securing enterprise LLM gateways & misconfigured proxy exposure detection + mitigation framework for exposed LLM endpoints, API gateways, model download hooks. (2024). International Journal of Advance Industrial Engineering, 12(04), 1-15. https://doi.org/10.14741/ijaie/v.12.4.1