Gemma-4-26B-A4B Heretic

Quality: quantized (8-bit, group size 32, 9.153 bpw)

This is an abliterated (uncensored) version of google/gemma-4-26B-A4B-it, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method (with row-norm preservation).

Alternative version: TheCluster/Gemma-4-26B-A4B-Heretic-MLX-9bit

Performance

| Metric | This model | Original model (google/gemma-4-26B-A4B-it) |
|---|---|---|
| KL divergence | 0.0499 | 0 (by definition) |
| Refusals | 11/100 | 100/100 |
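The KL divergence above measures how far this model's next-token distribution drifts from the original model's on harmless prompts (the original scores 0 against itself by definition). A minimal sketch of the metric for two discrete distributions; this is the standard formula, not Heretic's exact evaluation harness:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions over the same support.
    Terms where p_i == 0 contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions score 0 -- the original model compared to itself.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # → 0.0
```

A low value like 0.0499 indicates the abliteration left the model's general behavior largely intact while removing refusals.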

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 10 |
| end_layer_index | 30 |
| preserve_good_behavior_weight | 0.5480 |
| steer_bad_behavior_weight | 0.0009 |
| overcorrect_relative_weight | 0.5868 |
| neighbor_count | 14 |
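These parameters control how strongly refusal directions are removed from the weights of layers 10 through 30. As a rough illustration of the underlying idea (a generic single-direction ablation with the row-norm preservation mentioned above, not Heretic's actual ARA implementation, which is multi-rank and uses the weights listed in the table):

```python
import numpy as np

def ablate_direction(W, d, weight=1.0):
    """Remove the component of each row of W along unit direction d,
    then rescale each row to its original norm (row-norm preservation).
    A simplified sketch: Heretic's ARA method ablates a subspace of
    directions per layer rather than a single vector."""
    d = d / np.linalg.norm(d)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Subtract the projection of each row onto d.
    W_abl = W - weight * (W @ d)[:, None] * d[None, :]
    # Rescale rows so their norms match the original matrix.
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))
```

With `weight=1.0` the rows of the result are orthogonal to `d` while keeping their original norms, so the layer's overall activation scale is preserved.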

Source

This model was converted to MLX format from coder3101/gemma-4-26B-A4B-it-heretic using mlx-vlm version 0.4.4.
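Since the weights are in MLX format, they should run locally via the same mlx-vlm package that produced them. A minimal usage sketch; the exact CLI flags are assumptions based on typical mlx-vlm invocations, so check the mlx-vlm documentation for your installed version:

```shell
# Install the MLX vision-language runtime (Apple Silicon).
pip install mlx-vlm

# Generate with the quantized weights; flags assumed from common mlx-vlm usage.
python -m mlx_vlm.generate \
  --model TheCluster/Gemma-4-26B-A4B-Heretic-MLX-8bit \
  --prompt "Describe this image." \
  --image path/to/image.png
```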
