GitHub Trending | Source: GitHub Trending Daily All
vllm-project/vllm-omni
By GitHub Trending Daily All, March 23, 2026
A framework for efficient model inference with omni-modality models. Easy, fast, and cheap omni-modality model serving for everyone.

| Documentation | User Forum | Developer Slack | WeChat | Paper | Slides |

Latest News 🔥

- [2026/03] Check out our first public project deepdive at the vLLM Hong Kong Meetup.
- [2026/03] vllm-omni-skills is a community-driven collection of AI assistant skills that help developers work with vLLM-Omni more effectively. These skills can be used with popular agentic AI coding assistants like Cursor IDE, Claude, Codex, and more.
- [2026/02] We released 0.16.0, a major alignment and capability release that rebases onto upstream vLLM v0.16.0 and significantly expands performance, distributed execution, and production readiness across Qwen3-Omni / Qwen3-TTS, Bagel, MiMo-Audio, GLM-Image, and the Diffusion (DiT) image/video stack, while also improving platform coverage (CUDA / ROCm / NPU / XPU), CI quality, and documentation.
- [2026/02] We released 0.14.0, the first stable release of vLLM-Omni, which expands Omni's diffusion / image-video generation and audio / TTS stack, improves distributed execution and memory efficiency, and broadens platform/backend coverage (GPU/ROCm/NPU/XPU)...
---
**[devsupporter commentary]**
This article covers the latest development trends provided by GitHub Trending Daily All. To learn more about the tools or technologies mentioned, see the original link.