GitHub Trending | Source: GitHub Trending Daily All | Views: 6

ggml-org/ggml

By GitHub Trending Daily All
February 23, 2026

Tensor library for machine learning.

Roadmap / Manifesto

Note that this project is under active development. Some of the development is currently happening in the llama.cpp and whisper.cpp repos.

**Features**

- Low-level cross-platform implementation
- Integer quantization support
- Broad hardware support
- Automatic differentiation
- ADAM and L-BFGS optimizers
- No third-party dependencies
- Zero memory allocations during runtime

**Build**

```bash
git clone https://github.com/ggml-org/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8
```

**GPT inference (example)**

```bash
# run the GPT-2 small 117M model
../examples/gpt-2/download-ggml-model.sh 117M
./bin/gpt-2-backend -m models/gpt-2-117M/ggml-model.bin -p "This is an example"
```

For more information, check out the corresponding programs in the examples folder.

**Using CUDA**

```bash
# fix the path to point to your CUDA compiler
cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..
```

**Using hipBLAS**

```bash
cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIP=ON
```

**Using SYCL**

```bash
# linux
source /opt/intel/oneapi/setvars.sh
cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON
```

…

---

**[devsupporter commentary]**

This article covers a current development trend surfaced by GitHub Trending Daily All. To learn more about the tools and technologies mentioned, see the original link.