Global Edge Latency Routing
Serving dynamic API traffic to a global audience from a single AWS Region (e.g., `us-east-1` in Virginia) guarantees a poor user experience for clients in Tokyo or Sydney due to the laws of physics: a trans-Pacific round trip alone adds well over 100 ms before the application does any work. This architecture attacks that latency by routing each user to the infrastructure closest to them.
The Architecture Flow
         [ DNS Query: api.jakecollyer.cloud ]
                        ↓
            [ Route 53 (Latency Policy) ]
                  ↙            ↘
[ us-east-1 (Virginia) ]   [ ap-northeast-1 (Tokyo) ]
1. Latency-Based Routing
I configured Amazon Route 53 with a Latency Routing Policy. When a user requests the application, Route 53 consults AWS's internal latency measurements and resolves the DNS query to the AWS Region that offers the lowest network latency for that user's location.
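A latency policy is just a pair of record sets that share a name but carry different `Region` values. The sketch below builds that change batch as plain data, in the shape Route 53's `ChangeResourceRecordSets` API expects; the hosted zone ID and endpoint DNS names are hypothetical placeholders, not values from the real deployment.

```python
# Sketch: latency-based routing records for api.jakecollyer.cloud.
# Zone IDs and endpoint DNS names are placeholders (assumptions).

def latency_record(region, dns_name):
    """One latency-policy record set pointing at a regional endpoint."""
    return {
        "Name": "api.jakecollyer.cloud",
        "Type": "A",
        "SetIdentifier": f"api-{region}",  # must be unique per record
        "Region": region,                  # Region Route 53 measures latency to
        "AliasTarget": {
            "HostedZoneId": "Z0000000EXAMPLE",  # placeholder alias zone ID
            "DNSName": dns_name,
            "EvaluateTargetHealth": True,  # skip a Region whose endpoint is unhealthy
        },
    }

change_batch = {
    "Comment": "Latency routing: resolve each user to the nearest Region",
    "Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": latency_record(region, dns)}
        for region, dns in [
            ("us-east-1", "virginia-endpoint.example.amazonaws.com"),
            ("ap-northeast-1", "tokyo-endpoint.example.amazonaws.com"),
        ]
    ],
}

# With boto3, this batch would be applied roughly as:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z_ZONE_EXAMPLE", ChangeBatch=change_batch)
```

Because both records share the same `Name`, Route 53 treats them as one latency group and answers each query with whichever Region measured fastest for the resolver's network.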
2. API Gateway Edge Optimization
For services that must remain in a single Region (like a primary database), I used Edge-Optimized API Gateway endpoints. This deploys an AWS-managed CloudFront distribution in front of the API, so global users terminate their TLS connection at an Edge Location near them and the request then rides the dedicated AWS backbone network to the origin Region, bypassing the congested public internet.
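Choosing an edge-optimized endpoint is a single setting at API creation time. The sketch below builds the request parameters as plain data, matching the shape of API Gateway's `CreateRestApi` call; the API name is a hypothetical placeholder.

```python
# Sketch: request parameters for an edge-optimized REST API.
# The name "global-api" is a placeholder (assumption).

create_api_params = {
    "name": "global-api",
    "endpointConfiguration": {
        # "EDGE" fronts the API with an API Gateway-managed CloudFront
        # distribution: TLS terminates at the nearest Edge Location and
        # the request travels the AWS backbone to the origin Region.
        # The alternatives are "REGIONAL" and "PRIVATE".
        "types": ["EDGE"],
    },
}

# With boto3, the API would be created roughly as:
# boto3.client("apigateway").create_rest_api(**create_api_params)
```

The trade-off: `EDGE` helps geographically distant callers, while `REGIONAL` is usually better when you put your own CloudFront distribution (or callers in the same Region) in front of the API.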