LLM Rapid Prototyping: Local vs On-Cloud Deployment

by Ali Khoramshahi

Like many AI enthusiasts over the past few years, I’ve been experimenting with LLMs and RAG workflows, and comparing a range of models, including top-tier open-weight ones. In this post, I’ll share my experience setting up a local development environment versus a rapid-prototyping cloud deployment using infrastructure-as-code. I began…