Although Google had boosted its presence by releasing the image-editing model Nano Banana ahead of Gemini 3 Pro, it had stayed silent for far too long on foundation models.

For the past six months or so, everyone has been buzzing about OpenAI's latest moves or marveling at Claude's dominance in coding. Hardly anyone mentioned Gemini, which had gone eight months without a version upgrade.

No matter how impressive Google's cloud business and earnings reports are, its presence among the core circle of AI developers has steadily faded.

Fortunately, after trying it firsthand, Xiaobang found that Gemini 3 Pro didn't disappoint.

However, it's still too early to draw conclusions. The AI race has long moved past the stage where models impress with sheer parameter counts; the competition is now about applications, real-world deployment, and cost.

Whether Google can adapt to this new landscape remains to be seen.

I asked Gemini 3 Pro to describe itself in one sentence, and this is what it said:

“I’m no longer in a rush to prove to the world how smart I am; instead, I’m starting to figure out how to make myself more useful.” — Gemini 3 Pro

On the LMArena leaderboard, Gemini 3 Pro claimed the top spot with an Elo score of 1501, a new record for the comprehensive model ranking. It is an impressive achievement; even Sam Altman, OpenAI's CEO, offered his congratulations on Twitter.

In math tests, the model achieved 100% accuracy on AIME 2025 (the American Invitational Mathematics Examination) when allowed to execute code. On the GPQA Diamond science benchmark, Gemini 3 Pro scored 91.9%.

On the MathArena Apex competition benchmark, Gemini 3 Pro scored 23.4%, while other mainstream models generally stayed below 2%. And on Humanity's Last Exam, it reached 37.5% without using any tools.

This update also introduced a code-generation feature Google calls "vibecoding": users describe their requirements in natural language, and the system generates the corresponding code and application.

In a test in the Canvas programming environment, a user asked for "an electric fan with adjustable rotation speed," and in about 30 seconds the system generated complete code, including a rotating animation, a speed-control slider, and an on/off button.
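To give a sense of what such generated code looks like, here is a minimal TypeScript sketch of a comparable fan widget. This is our own illustration, not the code Gemini actually produced; the UI details (a spinning glyph standing in for the blades, a range slider mapped to revolutions per second, and a toggle button) are assumptions.

```typescript
// Minimal sketch of an "electric fan with adjustable speed" web widget.
// Illustrative only; not Gemini 3 Pro's actual Canvas output.

// The "fan": a glyph we rotate via CSS transforms.
const fan = document.createElement("div");
fan.textContent = "✛";
fan.style.cssText =
  "font-size:64px; width:80px; height:80px;" +
  "display:flex; align-items:center; justify-content:center;";

// Speed slider: value is interpreted as revolutions per second.
const slider = document.createElement("input");
slider.type = "range";
slider.min = "0.5";
slider.max = "10";
slider.step = "0.5";
slider.value = "2";

// On/off toggle.
const toggle = document.createElement("button");
toggle.textContent = "On/Off";

document.body.append(fan, slider, toggle);

let running = false;
let angle = 0;
let last = performance.now();

// Advance the rotation each animation frame while the fan is on.
function tick(now: number): void {
  const dt = (now - last) / 1000; // seconds since last frame
  last = now;
  if (running) {
    angle = (angle + 360 * Number(slider.value) * dt) % 360;
    fan.style.transform = `rotate(${angle}deg)`;
  }
  requestAnimationFrame(tick);
}

toggle.addEventListener("click", () => {
  running = !running;
});

requestAnimationFrame(tick);
```

Even a toy like this involves wiring up DOM state, an animation loop, and event handlers, which is exactly the boilerplate that a natural-language prompt lets users skip.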