Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-gguf

About

This repository hosts GGUF files mirrored from:

I do not claim credit for the original training of the weights uploaded here. This repo is a re-upload / mirror of the upstream GGUF release under my Hugging Face account.

If you want the original training details, methodology, and upstream credits, please read the source model card linked above.

Provenance

Intended Use

This repo is for local inference with GGUF-compatible runtimes such as:

  • llama.cpp
  • LM Studio
  • KoboldCpp
  • other GGUF-compatible tooling
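As a minimal sketch of local inference with llama.cpp (assuming `llama-cli` is installed and a quantized GGUF file has been downloaded from this repo; the filename below is illustrative, not the exact artifact name):

```shell
# Illustrative only: adjust the -m path to the GGUF file you actually
# downloaded; the quantization suffix here is an assumption.
llama-cli \
  -m ./Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf \
  -p "Explain the difference between a mirror and a fork." \
  -n 256
```

Other runtimes such as LM Studio or KoboldCpp load the same file through their own UIs, so no separate conversion step is needed.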

Important Note

The files currently uploaded in this repository should be understood as mirrored artifacts from the upstream release.

The datasets listed under Future Work below are not claimed as the training data for the weights currently hosted in this repo.

Future Work

For future personal distillation experiments, I plan to work with datasets such as:

These are listed here for transparency about planned work only. They were not used to produce the current mirrored GGUF files in this repository.

Credits

Credit for the original model and training work belongs to the upstream creators and contributors, especially:

  • Jackrong
  • Qwen
  • the dataset authors referenced by the upstream model card

License

Please follow the license and usage terms of the upstream model and all upstream components.

At the time this README was written, the upstream GGUF repository listed its license as Apache-2.0. If the upstream licensing or attribution requirements change, this mirror should be updated accordingly.

Model Details

  • Format: GGUF
  • Model size: 27B params
  • Architecture: qwen35
  • Quantizations: 4-bit, 8-bit