Qwen3.5-35B-A3B Q4 Quantization Comparison
Expert Signals
r/LocalLLaMA
source • 2 mentions
mim722
author • 1 mention
TitwitMuffbiscuit
author • 1 mention
Extracted Claims
Qwen3.5-35B-A3B is awesome.
Supported by 1 story
There is substantial progress; still hoping for Qwen3.5-4B. [https://github.com/djouallah/semantic_sql_testing](https://github.com/djouallah/semantic_sql_testing)
Supported by 1 story
Qwen3.5-35B-A3B Q4 Quantization Comparison.
Supported by 1 story
This is a Q4 quantization sweep across all major community quants of Qwen3.5-35B-A3B, comparing faithfulness to the BF16 baseline across different quantizers and recipes.
Supported by 1 story
The goal is to give people a data-driven basis for picking a file rather than just grabbing whatever is available.
Supported by 1 story
For the uninitiated: **KLD (KL Divergence):** "Faithfulness." It measures how far the quantized model's probability distribution drifts from the baseline (the probability distribution of the original BF16 weights).
Supported by 1 story
It is derived from the total information loss (cross entropy): KL divergence equals the cross entropy minus the baseline's own entropy, so it isolates the extra loss introduced by quantization.
Supported by 1 story
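A minimal sketch of the KLD metric described above and its relation to cross entropy. The logit values here are hypothetical placeholders, not numbers from the post or the sweep:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(P || Q): how much Q's distribution drifts from baseline P.
    Zero iff the two distributions are identical."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token logits: BF16 baseline vs. a Q4 quant.
baseline = softmax([2.0, 1.0, 0.1])
quant    = softmax([1.9, 1.1, 0.2])

kld = kl_divergence(baseline, quant)

# KL divergence is the cross entropy (total information loss) minus
# the baseline's own entropy, i.e. only the quantization-induced loss.
cross_entropy = -sum(p * math.log(q) for p, q in zip(baseline, quant))
entropy       = -sum(p * math.log(p) for p in baseline if p > 0)
assert abs(kld - (cross_entropy - entropy)) < 1e-9
```

In a real sweep this per-token KLD is averaged over a test corpus, with lower values meaning the quant is more faithful to the BF16 baseline.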
Related Events
Qwen3.5 122B in 72GB VRAM (3x3090) is the best model available at this time — also it nails the “car wash test”
Product Launch • 2/26/2026
OpenAI o3 and o4-mini System Card
LLMs • 2/27/2026
Gemini 2.5 Pro Preview: even better coding performance
LLMs • 2/26/2026
Gemini 2.5: Updates to our family of thinking models
LLMs • 2/26/2026
Start building with Gemini 3
LLMs • 2/26/2026