Alibaba Unveils QwQ-32B, a Compact Reasoning Model Rivaling DeepSeek-R1

This year’s Double 11 shopping festival concluded on Nov. 11. Credit: Alibaba

On March 6, Alibaba released and open-sourced its new reasoning model, QwQ-32B, which has 32 billion parameters. Despite being far smaller than DeepSeek-R1, a 671-billion-parameter mixture-of-experts model with roughly 37 billion parameters active per token, QwQ-32B matches its performance on a range of benchmarks. QwQ-32B excelled in math and coding tests, outperforming OpenAI’s o1-mini and the distilled versions of DeepSeek-R1, and it scored higher than DeepSeek-R1 itself on some evaluations such as LiveBench and IFEval. The model is trained with reinforcement learning and integrates agent capabilities for critical thinking and adaptive reasoning. Notably, QwQ-32B requires far less computational power, making it deployable on consumer-grade hardware. The release aligns with Alibaba’s AI strategy, which includes major investments in cloud and AI infrastructure. Following the announcement, Alibaba’s US stock rose 8.61% to $141.03, with its Hong Kong shares up more than 7%. [Jiemian, in Chinese]
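The consumer-hardware claim comes down to memory arithmetic: model weights dominate the footprint, at roughly one byte-count per parameter depending on precision. A rough sketch of the estimate (weights only, ignoring activations and KV cache, and assuming the published 32B and 671B parameter counts):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Back-of-envelope weight memory in GB: params * bytes per parameter.

    Ignores activation memory and the KV cache, which add overhead at
    inference time, so real requirements are somewhat higher.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9


# QwQ-32B at fp16 (2 bytes/param) vs. 4-bit quantization (0.5 bytes/param)
qwq_fp16 = weight_memory_gb(32, 2)    # 64.0 GB -- needs multiple GPUs
qwq_int4 = weight_memory_gb(32, 0.5)  # 16.0 GB -- fits a 24 GB consumer card

# Full DeepSeek-R1 at fp16, for contrast
r1_fp16 = weight_memory_gb(671, 2)    # 1342.0 GB -- data-center territory

print(f"QwQ-32B fp16: {qwq_fp16:.0f} GB, int4: {qwq_int4:.0f} GB")
print(f"DeepSeek-R1 fp16: {r1_fp16:.0f} GB")
```

Even this crude estimate shows why a 32B dense model is in reach of a single high-end consumer GPU once quantized, while the full R1 is not.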

Written by Buzzapp Master
