6 Lessons Learned Deploying FastAPI to Google Cloud Run
I built starter-fastapi, a deployment-ready starter template for bootstrapping high-performance Python APIs. As developers, we often spend the first few days of a new project just wiring things together: setting up the database, configuring the linter, wrangling Docker, and figuring out the CI/CD pipeline.
My goal: move faster and save tokens by not having AI assistants recreate boilerplate from scratch on every project. The result is a template pre-packaged with modern best practices like SQLModel, uv, Ruff, and a complete, automated deployment pipeline.
Why Google Cloud Run?
I could have simplified this by deploying to Vercel, Render, or other platform-as-a-service providers. I chose Google Cloud Run because it aligns with my long-term goals: automating deployments of MCP servers and AI agents to Cloud Run and Vertex AI Agent Engine. The ultimate objective is building a workflow where agents can automatically create and deploy new agents.
Cloud Run can also deploy directly from GitHub repositories, but I chose GitHub Actions instead to gain hands-on experience with CI/CD automation, which will be essential for more complex agent deployment workflows.
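To make that concrete, here is a minimal sketch of what a GitHub Actions deployment job can look like, using Google's official `auth` and `deploy-cloudrun` actions. The service name, region, and secret names are placeholders for illustration, not values taken from the actual template:

```yaml
# .github/workflows/deploy.yml -- minimal sketch; names and secrets are placeholders
name: Deploy to Cloud Run

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # needed for keyless auth via Workload Identity Federation

    steps:
      - uses: actions/checkout@v4

      # Authenticate to Google Cloud (assumes Workload Identity Federation is set up)
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}

      # Build from source and deploy the service
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: starter-fastapi
          region: us-central1
          source: .
```

Even a small workflow like this touches authentication, IAM permissions, and build configuration, which is where most of the lessons below come from.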
While the local development experience was smooth, automating the deployment to Google Cloud Run using GitHub Actions revealed several nuances. It's one thing to deploy from your laptop; it's significantly more complex to configure automated deployments that run securely and reliably. Here are the 6 key lessons I learned during the process, which I hope will save you some debugging time.