Mixed Precision GGUF layer quantization of gemma-4-31B-it by Google

Original model: https://huggingface.co/google/gemma-4-31B-it

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. The quants employed are all K-quants, to avoid the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K

   LAYER_TYPES='[
   [0 ,"Q5_K_M"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q3_K_L"],[4 ,"Q4_K_S"],[5 ,"Q3_K_L"],[6 ,"Q4_K_S"],[7 ,"Q3_K_L"],
   [8 ,"Q4_K_S"],[9 ,"Q3_K_L"],[10,"Q4_K_S"],[11,"Q3_K_L"],[12,"Q4_K_S"],[13,"Q3_K_L"],[14,"Q4_K_S"],[15,"Q3_K_L"],
   [16,"Q4_K_S"],[17,"Q3_K_L"],[18,"Q4_K_S"],[19,"Q3_K_L"],[20,"Q4_K_S"],[21,"Q3_K_L"],[22,"Q4_K_S"],[23,"Q3_K_L"],
   [24,"Q4_K_S"],[25,"Q3_K_L"],[26,"Q4_K_S"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
   [40,"Q4_K_M"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_S"],[44,"Q4_K_M"],[45,"Q4_K_S"],[46,"Q4_K_M"],[47,"Q4_K_S"],
   [48,"Q4_K_M"],[49,"Q4_K_S"],[50,"Q4_K_M"],[51,"Q4_K_S"],[52,"Q4_K_M"],[53,"Q4_K_S"],[54,"Q4_K_M"],[55,"Q4_K_L"],
   [56,"Q5_K_S"],[57,"Q5_K_M"],[58,"Q5_K_L"],[59,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
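As a sketch, the layer map above would be passed to a llama-quantize build patched for per-layer type control. Note that stock llama.cpp does not accept these flags; the exact plumbing (environment variable vs. flag, binary name, file paths) is downstream and assumed here for illustration only:

```shell
# Hypothetical invocation of a patched llama-quantize that reads the
# LAYER_TYPES map and FLAGS defined above; not supported by stock llama.cpp.
LAYER_TYPES="$LAYER_TYPES" ./llama-quantize $FLAGS \
    gemma-4-31B-it.BF16.gguf gemma-4-31B-it.Q4_K_H.gguf Q4_K_M
```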

The quant was tested for very strong performance over a small set of curated reasoning prompts, and sized slightly smaller than Q4_K_M with a minimum quant across layers of Q3_K_L.

Comparison:

Quant    Size     PPL    Comment
Q4_K_M   18.7e9   15.7   modified PPL, see discussion below
Q4_K_H   18.2e9   15.4   modified PPL, 0.5B smaller than Q4_K_M

Usage:

gemma-4-31B-it is a vision-capable dense RL model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository.
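A minimal vision invocation with the stock llama.cpp multimodal CLI might look like the following (file names are the ones from this repository; the image path and prompt are placeholders):

```shell
# Run a single image + text prompt through the model and its projector
./llama-mtmd-cli -m gemma-4-31B-it.Q4_K_H.gguf \
    --mmproj gemma-4-31B-it.mmproj.gguf \
    --image bird.jpg -p "Identify the bird in this image."
```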

Thinking:

By default the model will not create an RL reasoning block and just outputs

<|channel>thought
<channel|>

at the start of generation. To get it to fill in the think block, use a system prompt with:

<|think|>

as the first token. This is a special token in the model vocab and must be tokenized as such to work. No text other than the think token is needed in the system prompt to get the model to fill in the RL block, though other text can be added if desired.
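For example, with a llama.cpp server exposing the OpenAI-compatible chat endpoint, the system prompt would carry just the think token (the host, port, and user prompt here are assumptions):

```shell
# System prompt containing only the <|think|> special token; the server
# must tokenize special tokens in the prompt for this to work.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "<|think|>"},
      {"role": "user", "content": "Is 3121 prime?"}
    ]
  }'
```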

Speculation:

Speculation can be used effectively with the model. A recommended low-overhead speculator is gemma-3-270m-it-256k. To use this speculator, the inference platform must support dynamic vocab translation between the draft and target models.
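Stock llama.cpp speculative decoding expects draft and target vocabs to be compatible, so the dynamic vocab translation mentioned above is a downstream feature. As a rough sketch of the server-side wiring, assuming such a patched server and quantized draft file name:

```shell
# Hypothetical launch of a patched llama-server with a small draft model;
# --draft-max bounds the speculation block size (the ND column below).
./llama-server -m gemma-4-31B-it.Q4_K_H.gguf \
    -md gemma-3-270m-it-256k.Q8_0.gguf \
    --draft-max 2 --draft-min 1
```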

On a 2x 4070 setup (1 RPC), approximate performance with fixed speculation block size ND, sampled on the CUDA backend using a custom speculator with a downstream server, is:

Q        QKV    NKV   ND   gen tps
Q4_K_H   F16    32k   0    21
Q4_K_H   F16    32k   2    33
Q4_K_H   F16    32k   3    31
Q4_K_H   Q8_0   56k   0    21
Q4_K_H   Q8_0   56k   2    32
Q4_K_H   Q8_0   56k   3    31

The model was found to be highly capable on reasoning tasks even when skipping the think block. However, on hard or trick questions it can simply produce a plausible but bogus response in non-think mode; turning on RL normally fixed this on the test eval prompts used.

Vision:

The model was tested in vision mode on a couple of fairly tough bird-ID images and found to exhibit poor performance in both think and non-think mode, not even considering the correct answer in its responses. As a comparison, gemma3 27B went 1 for 2, and Qwen3 27B completely aced these tough ID tests (quite blurry images of a small bird). The model did a great job on some text-based image prompts, though.

Code:

The Q4_K_H quant was tested across a small set of code-gen prompts and generated working code on all of them.

Llama.cpp inference/issues:

The minimum llama.cpp version to run gemma-4-31B-it is b8648, due to a correction to the Gemma 4 tokenizer.

The model cannot compute valid perplexity due to the instruct tune forcing it to generate

<|channel>thought

as the assistant generation, independent of previous prompt contents. To work around this problem, a modified perplexity is computed by overwriting the beginning of each perplexity chunk with the forced assistant generation, as follows:

      # chunk is a string of text to eval perplexity on.
      # $'...' quoting makes the \n escapes real newlines, so ${#injects}
      # counts the injected header's true character length.
      injects=$'model\n<|channel>thought\n<channel|>'
      chunk="${injects}${chunk:${#injects}}"
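The overwrite trick can be sanity-checked in isolation with placeholder strings (the header and chunk text here are illustrative, not the real injected header or eval corpus):

```shell
# Overwrite the first ${#injects} characters of chunk with injects;
# the total chunk length is preserved.
injects=$'model\nHEADER'              # 12 characters including the newline
chunk='0123456789abcdefghij'          # 20-character placeholder chunk
chunk="${injects}${chunk:${#injects}}"
printf '%s\n' "$chunk"                # first 12 chars replaced, tail kept
```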

Logprobs over the beginning of each perplexity chunk are skipped using a modified downstream llama.cpp server to compute the perplexity. Discussion at: https://github.com/ggml-org/llama.cpp/issues/21388

Benchmarks:

A full set of both math and vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

Link                         Type     Size/e9 B   Notes
gemma-4-31B-it.Q4_K_H.gguf   Q4_K_H   18.2        0.5B smaller than Q4_K_M
gemma-4-31B-it.mmproj.gguf   F16      1.2         multimedia projector

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
