EXTREME POWER: This model exceeds Gemma 4 26B-A4B on several key benchmarks while activating only 4.5B parameters.

gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking

A 16-bit precision fine-tune of Gemma 4 "E4B" (an 8B-parameter model - see below), made Heretic/Uncensored using "The Deckard" in-house datasets [5] via Unsloth, with SUPER STRONG / MUCH DEEPER tuning methods (using the full dataset collection).

It was then further trained on three "Universe" datasets [in house].

The stronger, deeper tuning brings more character and intelligence to the model; it will also be a wee bit darker and more intense.

Stronger details, and much stronger character.

Universe adds logic, insight, and robust character.

The Jinja template has been modified to set "thinking" mode as the default.

[an "instruct" Jinja template is also provided]
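As a rough illustration only: a thinking-by-default toggle in a chat template typically amounts to defaulting a variable such as `enable_thinking` when the caller does not set it. This is a hypothetical sketch, not the exact template shipped with this model, and the `enable_thinking` name and `<think>` marker are assumptions:

```jinja
{#- Hypothetical fragment: default to "thinking" mode unless the caller opts out. -#}
{%- if enable_thinking is not defined %}
    {%- set enable_thinking = true %}
{%- endif %}
{%- if enable_thinking %}
    {{- '<start_of_turn>model\n<think>\n' }}
{%- else %}
    {{- '<start_of_turn>model\n' }}
{%- endif %}
```

The provided "instruct" template would simply correspond to the `false` branch.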

The model is suitable for all use cases and supports 128k context.

Note that Gemma's "E4B" is actually an 8B-parameter model with roughly 4.5 billion parameters activated; it is roughly a partial "MoE" (mixture-of-experts) model.

Processes text, images (with variable aspect ratio and resolution support on all models), video, and audio (featured natively on the E2B and E4B models).

See Gemma's "E4B" page [ https://huggingface.co/google/gemma-4-E4B-it ] for details, recommended base settings, and a detailed breakdown of benchmarks.


IN-HOUSE BENCHMARKS [by Nightmedia]:

                   arc-c  arc-e  boolq  hswag  obkqa  piqa   wino

gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking
q8, thinking       0.502  0.701  0.743  0.658  0.418  0.761  0.635

gemma-4-E4B-it-The-DECKARD-V3-Expresso-HERETIC-UNCENSORED-Thinking
q8, instruct       0.447  0.572  0.828  0.651  0.418  0.752  0.634

gemma-4-E4B-it-The-DECKARD-V2-Strong-HERETIC-UNCENSORED-Instruct
mxfp8, instruct    0.444  0.553  0.831  0.646  0.412  0.751  0.630

gemma-4-E4B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking
mxfp8, instruct    0.436  0.528  0.839  0.637  0.416  0.748  0.627

gemma-4-E4B-it [non-heretic base model]
mxfp8, instruct    0.404  0.489  0.825  0.586  0.392  0.734  0.661

gemma-4-26B-A4B-it [non-heretic base model]
mxfp8              0.454  0.598  0.871  0.582  0.394  0.723  0.645
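As a quick sanity check on the headline claim, the per-model averages over the seven tasks can be computed directly from the scores in the table above (the short labels in the dict are abbreviations of the full model names):

```python
# Mean score across the seven benchmark tasks for three of the models
# in the table above (scores copied verbatim from the table).
scores = {
    "DECKARD-Expresso-Universe (q8, thinking)":
        [0.502, 0.701, 0.743, 0.658, 0.418, 0.761, 0.635],
    "gemma-4-E4B-it (base, mxfp8)":
        [0.404, 0.489, 0.825, 0.586, 0.392, 0.734, 0.661],
    "gemma-4-26B-A4B-it (base, mxfp8)":
        [0.454, 0.598, 0.871, 0.582, 0.394, 0.723, 0.645],
}

averages = {name: round(sum(v) / len(v), 3) for name, v in scores.items()}
for name, avg in averages.items():
    print(f"{name}: {avg}")
```

By this crude average, the fine-tune scores 0.631 versus 0.584 for the E4B base and 0.610 for the 26B-A4B base, though per-task results vary (the 26B model still leads on boolq, for example).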

NOTES:

  • On some tests, "instruct" mode scores higher than "thinking" mode.
  • Still compiling some stats for models as of this writing.

[more to come]
