Project Templates · Source: Show HN · 13 views
Show HN: EdgeAI-OS – Air-gapped Linux distro where AI is a system primitive
By neuralweaves2026
February 9
I built a bootable Linux distribution that treats AI as a system primitive, like CPU or memory. It is designed for security-conscious environments where data cannot leave the network.

**The problem:** Most AI requires cloud APIs, which means your data leaves your control. For banks, healthcare, defense, and regulated industries, that's a non-starter.

**The solution:** EdgeAI-OS runs everything locally. Your data never leaves the machine.

**Security features:**

- 100% offline operation – air-gap friendly, zero network dependencies
- No external API calls – all inference runs locally on CPU
- Command risk assessment – every command classified as Safe/Moderate/Dangerous
- Dangerous pattern blocking – prevents `rm -rf /`, `curl|bash`, fork bombs, etc.
- Open source & auditable – MIT licensed, inspect every line of code
- No data exfiltration – nothing phones home, ever

**What's in the ISO:**

- Local LLMs (TinyLlama 1.1B + SmolLM 135M) – runs on CPU, no GPU needed
- ai-sh: natural language shell where 80% of queries resolve instantly via templates
- Multi-tier routing: simple queries → fast model, complex → larger model

**Example ai-sh session:** what time is it...
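The Safe/Moderate/Dangerous classification described above could be sketched as a pattern-matching pass over each command before execution. This is a minimal illustration only: the pattern lists, tier names, and function names here are assumptions, not EdgeAI-OS internals.

```python
import re

# Hypothetical sketch of a command risk classifier; the patterns below
# are illustrative, not the actual EdgeAI-OS rule set.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/(\s|$)",        # recursive delete of the root filesystem
    r"curl\s+[^|]*\|\s*(ba)?sh",    # curl | bash pipe-to-shell
    r":\(\)\{\s*:\|:&\s*\};:",      # classic bash fork bomb
]
MODERATE_PATTERNS = [
    r"\bsudo\b",                    # privilege escalation
    r"\bchmod\s+777\b",             # world-writable permissions
]

def classify(command: str) -> str:
    """Return 'Dangerous', 'Moderate', or 'Safe' for a shell command."""
    for pat in DANGEROUS_PATTERNS:
        if re.search(pat, command):
            return "Dangerous"
    for pat in MODERATE_PATTERNS:
        if re.search(pat, command):
            return "Moderate"
    return "Safe"
```

A shell wrapper would call `classify()` before dispatch and block or prompt on anything above Safe.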
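The template-first, multi-tier routing described above could work roughly as follows. The template table, word-count threshold, and return shape are assumptions for illustration; only the model names come from the post.

```python
# Hypothetical sketch of multi-tier query routing: template hit first,
# then small model for simple queries, larger model for the rest.
TEMPLATES = {
    "what time is it": "date +%H:%M",   # illustrative template entries
    "show disk usage": "df -h",
}

def route(query: str) -> tuple[str, str]:
    """Return (tier, action) for a natural-language query."""
    q = query.strip().lower()
    if q in TEMPLATES:
        # Template hit: resolved instantly, no model inference needed
        return ("template", TEMPLATES[q])
    if len(q.split()) <= 8:
        # Short/simple query -> small, fast model
        return ("fast-model", "SmolLM-135M")
    # Long/complex query -> larger model
    return ("large-model", "TinyLlama-1.1B")
```

The design point is that the cheap checks run first, so the common case never touches an LLM at all.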
---
**[devsupporter note]**
This article is a recent development update from Show HN. To learn more about the related tools and technologies, see the original link.