Amazon Bedrock Cognito Security

Secure Generative AI Prompt Vault & API

By Jake Collyer
Focus: Gen AI Practitioner & Access Control

As AI adoption scales, managing and securing proprietary system prompts becomes a critical infrastructure challenge. Hardcoding prompts into frontend applications exposes them to extraction. This project isolates highly tuned LLM prompts behind a secure, authenticated AWS backend.

The Architecture Flow

// Secure Prompt Retrieval & Execution

[ React Client ] ⇄ (Credentials / JWT) ⇄ [ Amazon Cognito ]
        ↓ (JWT in Authorization header)
[ API Gateway ] → [ AWS Lambda ] ← [ DynamoDB (Prompt Vault) ]
                        ↓
               [ Amazon Bedrock ]

1. Authentication & Authorization

Users must authenticate via Amazon Cognito User Pools. Once authenticated, the frontend receives a JSON Web Token (JWT). API Gateway uses a Cognito Authorizer to validate this token before allowing any request through to the compute layer, ensuring only authenticated users can invoke the backend and incur model costs.
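API Gateway performs this validation automatically, but the checks a Cognito Authorizer applies can be sketched in plain Python. The claim names (`exp`, `token_use`, `client_id`) follow Cognito's access-token format; the helper names and example values below are illustrative, and a real validator must additionally verify the token's RS256 signature against the user pool's JWKS endpoint.

```python
import base64
import json
import time

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding stripped by JWT encoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def validate_claims(claims: dict, expected_client_id: str) -> bool:
    """Mirror the claim checks a Cognito Authorizer applies to an access
    token. NOTE: a real authorizer also verifies the RS256 signature via
    the user pool's JWKS endpoint -- omitted here for brevity."""
    return (
        claims.get("token_use") == "access"
        and claims.get("client_id") == expected_client_id
        and claims.get("exp", 0) > time.time()
    )

# Build a structurally valid (but unsigned) token for illustration only
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(json.dumps({
    "token_use": "access",
    "client_id": "example-app-client-id",
    "exp": int(time.time()) + 3600,
}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.fake-signature"

claims = decode_jwt_payload(token)
print(validate_claims(claims, "example-app-client-id"))  # True
```

Because validation happens at the gateway, an expired or mismatched token is rejected before any Lambda is ever invoked.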

2. The Vault (DynamoDB)

System prompts, guardrail instructions, and model parameters (temperature, Top-P) are stored as items in an Amazon DynamoDB table. Lambda fetches the specific prompt template needed for the user's request, dynamically injecting their variables securely on the backend.
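A minimal sketch of the vault lookup and server-side injection, assuming an illustrative table named `PromptVault` keyed on `prompt_id` with `template`, `temperature`, and `top_p` attributes (none of these names are prescribed by the source). The DynamoDB fetch uses the standard Boto3 `get_item` call; the substitution logic is pure Python.

```python
from string import Template

def fetch_prompt_item(prompt_id: str) -> dict:
    """Fetch a prompt template from DynamoDB. The table and attribute
    names ('PromptVault', 'prompt_id') are illustrative assumptions."""
    import boto3  # imported lazily so the pure logic below runs without AWS
    table = boto3.resource("dynamodb").Table("PromptVault")
    return table.get_item(Key={"prompt_id": prompt_id})["Item"]

def assemble_prompt(item: dict, user_vars: dict) -> tuple[str, dict]:
    """Inject user-supplied variables server-side. safe_substitute leaves
    unknown placeholders intact instead of raising, so malformed input
    degrades gracefully rather than crashing the handler."""
    prompt = Template(item["template"]).safe_substitute(user_vars)
    params = {"temperature": item["temperature"], "top_p": item["top_p"]}
    return prompt, params

# Example item shaped like a vault record (fetched via fetch_prompt_item in prod)
item = {
    "prompt_id": "support-triage-v2",
    "template": "You are a support triage assistant. Ticket: $ticket_text",
    "temperature": 0.2,
    "top_p": 0.9,
}
prompt, params = assemble_prompt(item, {"ticket_text": "App crashes on login"})
print(prompt)
```

Since substitution happens inside Lambda, the client only ever supplies raw variables and never sees the surrounding template.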

3. Model Invocation

With the prompt assembled securely on the server, the Lambda function uses Boto3 to invoke foundation models hosted on Amazon Bedrock, returning only the final, processed output back through API Gateway to the user.
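The invocation step can be sketched as below. The request-body shape follows the Anthropic Messages format used on Bedrock and is an assumption for this example (other model providers expect different schemas, and the model ID shown is illustrative); `build_request_body` is the pure, testable part, while `invoke_bedrock` wraps the Boto3 `invoke_model` call.

```python
import json

def build_request_body(prompt: str, system: str,
                       temperature: float, top_p: float) -> str:
    """Build a request body in the Anthropic Messages format used on
    Bedrock (model-specific; other providers use different schemas)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "system": system,  # the vaulted system prompt, never sent to the client
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    })

def invoke_bedrock(body: str,
                   model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Invoke the model and return only the generated text, so the
    assembled prompt never leaves the backend."""
    import boto3  # lazy import: only needed at invocation time
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id, body=body)
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

body = build_request_body("Summarize this ticket.", "You are a triage bot.", 0.2, 0.9)
print(json.loads(body)["system"])
```

Returning only the extracted text (rather than the full model response) keeps prompt metadata out of the API payload as well.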

Security Mastery:

By decoupling the prompts into DynamoDB, we achieve two things: First, the intellectual property of the prompt is never exposed to the client. Second, prompt engineers can update and refine the system instructions in the database without requiring a new deployment of the Lambda code or the frontend application.
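The no-redeploy property can be illustrated with a plain dict standing in for the DynamoDB table (in production the edit would be a `PutItem`/`UpdateItem` call against the real table; the IDs and template text here are invented for the example).

```python
from string import Template

# In-memory stand-in for the DynamoDB table; a prompt engineer would
# issue an UpdateItem call against the real table instead.
vault = {"support-triage": {"template": "You are a terse triage bot. $ticket"}}

def handle_request(prompt_id: str, ticket: str) -> str:
    """Lambda-style handler: reads the latest template at runtime, so
    vault edits take effect without redeploying this code."""
    return Template(vault[prompt_id]["template"]).safe_substitute(ticket=ticket)

before = handle_request("support-triage", "Login fails")
# Prompt engineer refines the instructions in the vault -- no deployment
vault["support-triage"]["template"] = "You are an empathetic triage bot. $ticket"
after = handle_request("support-triage", "Login fails")
print(before != after)  # True
```

Because the handler resolves the template on every request, the refined prompt is live immediately, while the frontend and Lambda artifacts remain unchanged.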