Model Card for Qianfan-OCR-4B-GGUF

This repository contains GGUF format quantizations (Q4_K_M, Q5_K_M, Q6_K, Q8_0) of Qianfan-OCR, a 4B-parameter end-to-end document intelligence model developed by the Baidu Qianfan Team. It unifies document parsing, layout analysis, and document understanding within a single vision-language architecture.

Model Details

Model Description

Qianfan-OCR is designed to replace traditional multi-stage OCR pipelines. Instead of chaining separate layout detection and text recognition modules, it performs direct image-to-Markdown conversion. It introduces Layout-as-Thought, an optional thinking phase where the model generates structured layout representations before producing the final output.

  • Developed by: Baidu Qianfan Team
  • Shared by: Abhiray
  • Model type: Multimodal Vision-Language Model (VLM)
  • Text Backbone: Qwen3-4B
  • Vision Encoder: Qianfan-ViT (AnyResolution up to 4K)
  • Language(s) (NLP): Multilingual (192 languages supported)
  • License: Apache-2.0

Model Sources

  • Repository: Abiray/Qianfan-OCR-GGUF on Hugging Face

Uses

Direct Use

  • Document Parsing: High-fidelity Image-to-Markdown conversion.
  • Table/Formula Recognition: Extracting complex tables (merged cells) and LaTeX formulas.
  • Key Information Extraction (KIE): Structured data extraction from receipts, invoices, and IDs.
  • Visual Question Answering (DocVQA): Reasoning over charts, graphs, and structured documents.
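
Because the model emits Markdown, its table output can be converted to structured records with a few lines of standard-library code. The sketch below is illustrative post-processing, not part of the model itself; the sample table stands in for real model output.

```python
def parse_markdown_table(md: str) -> list[dict]:
    """Convert a simple Markdown table into a list of row dicts."""
    lines = [l.strip() for l in md.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

# Illustrative output in the shape the model produces for a receipt table:
sample = """
| Item | Qty | Price |
|------|-----|-------|
| Pen  | 2   | 1.50  |
| Pad  | 1   | 3.00  |
"""
print(parse_markdown_table(sample))
```

This handles only flat tables; merged cells, which the model can also emit, need a richer Markdown/HTML parser.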

Out-of-Scope Use

  • The model is not intended for generating creative fiction or conversational roleplay; it is optimized for high-accuracy document intelligence and visual grounding.

How to Get Started with the Model

1. Requirements

You need two files: the quantized model (e.g. qianfan-ocr-4b-Q4_K_M.gguf) and the vision projector (qianfan-ocr-mmproj.gguf).

2. Running with Koboldcpp (Recommended)

Place both files in the same directory and launch Koboldcpp. --gpulayers 99 offloads all layers to the GPU (lower it if VRAM is tight), --contextsize 8192 sets the context window, and --remotetunnel exposes the instance through a temporary public URL.

./koboldcpp \
  --model qianfan-ocr-4b-Q4_K_M.gguf \
  --mmproj qianfan-ocr-mmproj.gguf \
  --usecuda 0 \
  --gpulayers 99 \
  --contextsize 8192 \
  --remotetunnel
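
Once the server is up, it can be queried over Koboldcpp's OpenAI-compatible endpoint (default http://localhost:5001/v1/chat/completions). The sketch below builds a vision request using the standard base64 data-URI image format; the endpoint path and payload shape are assumptions based on the common OpenAI-style API, so check your Koboldcpp version's docs before relying on them.

```python
import base64
import json

def build_ocr_request(image_bytes: bytes,
                      prompt: str = "Convert this document to Markdown.") -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "max_tokens": 4096,
        "temperature": 0.0,  # deterministic decoding suits OCR
    }

payload = build_ocr_request(b"placeholder-bytes")  # not a real image
# To send it (requires the running server), POST json.dumps(payload) with
# Content-Type: application/json via e.g. urllib.request.
print(json.dumps(payload)[:80])
```

Temperature 0 is the usual choice for document parsing, since sampling variance only hurts extraction accuracy.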