Show HN: LLM Gateway for OpenAI/Anthropic Written in Golang
By oatmale
February 19, 2026
I spent a bunch of years building Shopify subscriptions software, living in the land of failed payments, retries, and "if this breaks, it breaks real money." We built a lot of automation around recovery: intelligent retry logic, routing decisions, backoffs, and all the messy edge cases you only find at scale.

When I started building AI/LLM features, I kept running into the same class of problems - except harder to reason about. Multiple providers, model quirks, intermittent failures, retries/fallbacks, and a constant question of "what actually happened?" Observability was the recurring pain point. I wanted something that didn't feel like a black box, especially once you're running real workloads and latency or errors spike for reasons that aren't obvious.

So I started building the tool I wished I had: an open-source LLM gateway / proxy in Go.

I fell into Go mostly for practical reasons: high concurrency and throughput without fighting the runtime, and a strongly-typed codebase that stays pleasant as it grows. Over time it turned into something more personal - I've found my home in Go, and this project is where I've been putting that energy.

Open source is a deliberate choice here. Coming from payments + ecommerce, trust isn't a tagline - it's operational...
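The retry-with-backoff and provider-fallback behavior the post describes could be sketched in Go roughly as follows. This is a minimal illustration, not the project's actual API: `callProvider`, `completeWithFallback`, and the provider names are hypothetical stand-ins for real LLM client calls.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// callProvider stands in for a real LLM API call. Here it always fails
// for "openai" so the example exercises the fallback path (hypothetical).
func callProvider(name, prompt string) (string, error) {
	if name == "openai" {
		return "", errors.New("upstream timeout")
	}
	return fmt.Sprintf("[%s] response to %q", name, prompt), nil
}

// completeWithFallback tries each provider in order, retrying each one
// with exponential backoff before falling through to the next provider.
func completeWithFallback(providers []string, prompt string, retries int) (string, error) {
	var lastErr error
	for _, p := range providers {
		backoff := 50 * time.Millisecond
		for attempt := 0; attempt <= retries; attempt++ {
			out, err := callProvider(p, prompt)
			if err == nil {
				return out, nil
			}
			lastErr = err
			time.Sleep(backoff)
			backoff *= 2 // double the wait between retries
		}
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

func main() {
	out, err := completeWithFallback([]string{"openai", "anthropic"}, "hello", 2)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out)
}
```

A real gateway would layer per-provider timeouts, jittered backoff, and request/response logging on top of this skeleton; the point is only that the retry and fallback decisions live in one place instead of being scattered through application code.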
---
**[devsupporter note]**
This article covers a recent development highlight from Show HN. To learn more about the related tools and technologies, see the original link.
