# gemma-4-26B-A4B-it-uncensored (GGUF)

GGUF quantizations of TrevorJS/gemma-4-26B-A4B-it-uncensored.

## Files

| File | Quant | Size |
|------|-------|------|
| gemma-4-26B-A4B-it-uncensored-Q4_K_M.gguf | Q4_K_M | 16.8 GB |
| gemma-4-26B-A4B-it-uncensored-Q8_0.gguf | Q8_0 | 26.9 GB |

## Usage

```shell
# From HuggingFace (auto-downloads)
llama-server -hf TrevorJS/gemma-4-26B-A4B-it-uncensored-GGUF -c 8192

# From local file
llama-server -m gemma-4-26B-A4B-it-uncensored-Q4_K_M.gguf -c 8192
```

Then open http://localhost:8080 in a browser for the built-in chat UI.
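Besides the web UI, llama-server also exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server is running on the default port 8080 (the helper names here are illustrative, not part of llama.cpp):

```python
import json
import urllib.request

def build_chat_request(prompt, max_tokens=256, temperature=0.7):
    """Build an OpenAI-style chat completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def chat(prompt, base_url="http://localhost:8080"):
    """POST a prompt to llama-server's OpenAI-compatible endpoint
    and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server up, `chat("Hello!")` returns the model's reply as a string.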

## Details

These are GGUF quantizations of TrevorJS/gemma-4-26B-A4B-it-uncensored, an abliterated (uncensored) version of google/gemma-4-26B-A4B-it. Refusal behavior has been removed using norm-preserving biprojected abliteration with Expert-Granular Abliteration (EGA) for MoE expert weights.

See the bf16 model card for full method details, before/after refusal rates, and cross-dataset validation results.
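The exact norm-preserving biprojected abliteration and EGA code lives in the repository linked below. As a rough illustration of the underlying idea only (projecting a learned refusal direction out of a weight matrix, then rescaling each row back to its original norm), here is a hypothetical NumPy sketch, not the author's implementation:

```python
import numpy as np

def ablate_direction(W, r, eps=1e-8):
    """Schematic norm-preserving directional ablation:
    remove each row's component along direction r, then rescale
    every row back to its original L2 norm."""
    r = r / np.linalg.norm(r)                      # unit "refusal" direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_abl = W - np.outer(W @ r, r)                 # project r out of every row
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, eps))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                       # toy weight matrix
r = rng.normal(size=16)                            # toy refusal direction
W_out = ablate_direction(W, r)
```

After ablation the rows of `W_out` have no component along `r` but keep their original norms, which is the property the "norm-preserving" label refers to.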

Source code: TrevorJS/gemma-4-abliteration
