Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED

Quant: MXFP8 (8.363 bpw)

Fully uncensored and fine-tuned (by DavidAU) on a large Claude 4.6 distillation dataset.

This version is INSTRUCT, with a modified Jinja template that puts the model into "instruct-only" mode.

Abliteration metrics

| Metric        | This model | Original model (Qwen/Qwen3.5-9B) |
|---------------|------------|----------------------------------|
| KL divergence | 0.0793     | 0 (by definition)                |
| Refusals      | 6/100      | 100/100                          |
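The KL divergence above measures how far the abliterated model's next-token distribution drifts from the original; identical distributions score 0 by definition, and a value near zero means behavior is largely preserved. A minimal sketch of the computation with toy distributions (not the actual evaluation harness):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) over discrete next-token distributions, in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: a small shift in probabilities yields
# a small positive divergence; identical distributions yield exactly 0.
original = [0.7, 0.2, 0.1]
modified = [0.6, 0.25, 0.15]

print(kl_divergence(original, original))          # 0.0 by definition
print(round(kl_divergence(modified, original), 4))
```

In practice this is averaged over many prompts and token positions; the 0.0793 figure reported here indicates the abliterated model stays close to the original.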

Benchmarks (all at mxfp8):

| Model                                  | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|----------------------------------------|-------|-------|-------|-------|-------|-------|-------|
| HERETIC version (this model)           | 0.574 | 0.755 | 0.869 | 0.714 | 0.410 | 0.780 | 0.691 |
| Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT  | 0.574 | 0.729 | 0.882 | 0.711 | 0.422 | 0.775 | 0.691 |
| Qwen3.5-9B                             | 0.417 | 0.458 | 0.623 | 0.634 | 0.338 | 0.737 | 0.639 |

Source

This model was converted to MLX format from DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED using mlx-vlm version 0.3.12.
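A typical way to run an MLX-converted text model like this one is via the mlx-lm package (Apple silicon only). This is a hedged sketch, not an official usage snippet from the author: the generation parameters and prompt are illustrative, and mlx-lm's `load`/`generate` API is assumed.

```python
# Sketch: loading and generating with mlx-lm (requires Apple silicon + mlx-lm).
# Prompt and max_tokens are illustrative choices, not values from this card.
from mlx_lm import load, generate

model, tokenizer = load(
    "TheCluster/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED-MLX-mxfp8"
)

# The modified Jinja chat template ships with the model, so applying it
# puts the prompt into the model's "instruct-only" format.
messages = [{"role": "user", "content": "Explain MXFP8 quantization briefly."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```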

