Can you successfully convert this GGUF to a model supported by Ollama?
I tried to convert this model for Ollama, but it kept failing with "Error: 500 Internal Server Error: unable to load model: "
Please upgrade your Ollama to the latest version.
Hmm... Actually, my Ollama is the latest version 0.17.4,
OK! I'll give it a try.
I also get:
"500 Internal Server Error: unable to load model: C:\Users*****.ollama\models\blobs\sha256-884bc358ea21ccba6e26761675c20ac0d06653e89e6c2bc211ba101d3c6977b7"
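One thing worth ruling out when Ollama reports "unable to load model" for a blob like this is a corrupted or truncated download: the blob's filename contains its expected sha256 digest, so the file on disk can be re-hashed and compared. A minimal sketch (the helper name is made up; run it against the path from the error message):

```shell
# Verify an Ollama blob against the digest embedded in its filename
# (files in the blobs directory are named sha256-<hex>, as in the error above).
check_blob() {
    blob="$1"
    expected="${blob##*sha256-}"
    actual="$(sha256sum "$blob" | cut -d' ' -f1)"
    [ "$actual" = "$expected" ]
}
```

If the check fails, deleting the blob and pulling the model again may fix the load error; if it passes, the problem is more likely the model format or Ollama version.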
Thanks for reporting this.
I'll re-fine-tune and repackage the model today. I originally completed the fine-tuning the day after the base model was released, so there may be some conflicts with underlying libraries or dependencies that have since changed. I'll rebuild the package using the latest versions to improve compatibility. I've also noticed that some Ollama users have encountered a few bugs recently, so I'm looking into that as well. Apologies for the inconvenience, and thanks for your patience.
Could you please share the finetune code for learning, thank you very much
Hello, when I use the latest version of Mac Ollama to call it, there is no chat option, which makes it impossible for me to use openclaw. Is this a configuration problem, or are there any precautions I should take?
Ollama only supports text mode for qwen35/qwen35moe with third-party GGUFs for now (their own quants, from their official repo, work fine though), until a PR that updates the llama.cpp backend is merged. So for the moment, you need to create a Modelfile without the mmproj and build the model from it, something like this (tested with another GGUF from HF):
# point to downloaded GGUF
FROM ./Qwen3.5-27B.Q4_K_M.gguf
# use Ollama engine
TEMPLATE {{ .Prompt }}
RENDERER qwen3.5
PARSER qwen3.5
# suggested parameters for the official model, you may tweak it
PARAMETER top_p 0.95
PARAMETER presence_penalty 1.5
PARAMETER temperature 1
PARAMETER top_k 20
LICENSE "Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/"
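For convenience, here is the same Modelfile written out and registered in one go (the GGUF filename and the local model name qwen35-distill are placeholders; the LICENSE block is omitted for brevity, and this is a sketch rather than something verified against every Ollama version):

```shell
# Write the Modelfile from the post above (quoted heredoc keeps {{ .Prompt }} literal).
cat > Modelfile <<'EOF'
FROM ./Qwen3.5-27B.Q4_K_M.gguf
TEMPLATE {{ .Prompt }}
RENDERER qwen3.5
PARSER qwen3.5
PARAMETER top_p 0.95
PARAMETER presence_penalty 1.5
PARAMETER temperature 1
PARAMETER top_k 20
EOF

# Register it under a local name if ollama is installed; afterwards it can be
# started with: ollama run qwen35-distill
if command -v ollama >/dev/null 2>&1; then
    ollama create qwen35-distill -f Modelfile
fi
```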
If you use the HF link, like ollama pull hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q4_K_M, it may include the mmproj file and the model will fail to start. The server will report that the qwen35 architecture is not supported, although it works fine without vision (and with Ollama's own model, of course). Make sure you have the latest Ollama version too.
I have the same error:
ollama run hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q8_0
pulling manifest
pulling 283f5c4dac2b: 100% ▕████████████████████████████▏ 28 GB
Error: 400:
Hi, is anyone else having this issue? When I run it, I get a 400 error. I'm using Ollama ROCm.
I can reproduce this error. Same for ollama pull hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q8_0
Pull the GGUF directly from HF and then create a Modelfile to use the model.
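Sketching those two steps: the GGUF can be fetched on its own with huggingface-cli (so the mmproj file is never downloaded), then built with the Modelfile shown earlier in the thread. The exact .gguf filename inside the repo is an assumption here; check the repo's file list and adjust:

```shell
# Download only the main Q8_0 GGUF from the repo; the filename below is an
# example and may not match the repo's actual file listing.
if command -v huggingface-cli >/dev/null 2>&1; then
    huggingface-cli download \
        Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF \
        Qwen3.5-27B-Q8_0.gguf \
        --local-dir .
fi

# Then point the FROM line of the Modelfile at the downloaded file and build:
#   ollama create qwen35-distill -f Modelfile
```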


