ollama: ROCm usage #77

Closed
opened 2024-02-23 22:34:21 +00:00 by skobkin (Owner) · 2 comments
Following #76. We need to find a way to run LLMs using ROCm. See:

- https://github.com/ollama/ollama/issues/738
- https://github.com/ollama/ollama/blob/main/docs/development.md#linux-rocm-amd
- ✅ https://hub.docker.com/r/bergutman/ollama-rocm
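For reference, a minimal compose sketch of the setup being researched. This assumes the upstream `ollama/ollama:rocm` image (the bergutman image linked above is an alternative) and the standard ROCm device passthrough via `/dev/kfd` and `/dev/dri`; it is not the exact stack from this repo:

```yaml
services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      # ROCm needs both the kernel fusion driver and the DRI render nodes.
      - /dev/kfd
      - /dev/dri
    volumes:
      # Persist downloaded models across container recreation.
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

volumes:
  ollama:
```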
skobkin added the to be researched, bug, enhancement labels 2024-02-23 22:34:21 +00:00
skobkin self-assigned this 2024-02-23 22:34:21 +00:00
skobkin added a new dependency 2024-02-23 22:34:35 +00:00
skobkin referenced this issue from a commit 2024-03-07 02:04:30 +00:00
skobkin (Author, Owner)

Detected the iGPU, but it didn't work due to a missing ROCm module for that target. Need to test on a full-size GPU.
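If retrying on the iGPU is worth it, a common (unofficial) workaround for unsupported gfx targets is overriding the GFX version ROCm reports. A sketch, assuming an RDNA2-class iGPU, for which `10.3.0` is the usual override value; the right value depends on the actual GPU architecture:

```yaml
services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd
      - /dev/dri
    environment:
      # Force ROCm to treat the GPU as a supported gfx1030 target.
      # Adjust (or drop) this for other architectures.
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
```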
skobkin (Author, Owner)

Already fixed in a recent version. The current `ollama` stack works fine on a discrete GPU.

Depends on: skobkin/docker-stacks#76 (ollama)
Reference: skobkin/docker-stacks#77