Does GPT-5.2 meet expectations? Can Anthropic’s Claude models be dethroned? What’s the status of Gemini? Find out in the WebDev Arena ranking, which assesses AI models based on their coding and development performance.
Over the past two years, rankings that evaluate the capabilities of various AI models have become increasingly detailed and industry-specific, providing clearer insights into the areas where these models excel. One such example is the “WebDev Leaderboard,” created by LMArena, which specifically measures AI competencies in web development and coding. Let’s dive into the details!
OpenAI, Google, and Anthropic Dominate the Top 10
As of December 2025, the competition among AI solution providers in the web and software development sector is intense, spanning vibe-coding tools, specialized agents, CLI platforms, and more. Nearly every company and startup now offers a version of its AI models tailored for coding. But which ones come out on top? At year's end, three prominent players stand out, and their names are well-known in the industry.
Leading the pack is the latest Thinking version of Claude Opus 4.5 by Anthropic, whose models are often favored for coding tasks. The standard version of Claude Opus 4.5 ranks third. Other Anthropic models, including Claude Sonnet 4.5 Thinking, Claude Opus 4.1, and Claude Sonnet 4.5, also make the top 10, in 7th, 8th, and 10th place respectively. Anthropic thus claims five of the top ten spots in this ranking. OpenAI fills in the gaps: its latest model, GPT-5.2 High, ranks second, alongside GPT-5 Medium (5th), GPT-5.2 (6th), and GPT-5.1 Medium (9th). Google's Gemini 3 Pro takes 4th place, completing the top 10.
The top 10 AI models for web development and coding in December 2025, with their Elo scores, are:
- Claude Opus 4.5 Thinking: 1519
- GPT-5.2 High: 1486
- Claude Opus 4.5: 1483
- Gemini 3 Pro: 1482
- GPT-5 Medium: 1400
- GPT-5.2: 1399
- Claude Sonnet 4.5 Thinking: 1395
- Claude Opus 4.1: 1395
- GPT-5.1 Medium: 1394
- Claude Sonnet 4.5: 1387
Explore the full ranking
Evaluation Criteria of the WebDev Arena
To evaluate AI model performance objectively, LMArena uses blind head-to-head matchups. The platform gives the same prompt to two different models, then asks users to judge which response they find more satisfactory, without revealing the competitors' identities. These votes feed an Elo rating system, similar to those used in esports: beating a higher-rated model earns more points than beating a lower-rated one, while losing to a lower-rated opponent costs more. The rankings are continuously updated as user votes come in.
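To illustrate how such pairwise votes translate into ratings, here is a minimal sketch of a standard Elo update in Python. The function names and the K-factor of 32 are illustrative assumptions, not LMArena's actual implementation:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float,
               score_a: float, k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one matchup.

    score_a is 1.0 if A's response won the user vote, 0.0 if it lost,
    0.5 for a tie. The K-factor (here 32, an illustrative choice)
    controls how much a single vote moves the ratings.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))  # zero-sum
    return new_a, new_b

# Using two scores from the ranking above: a 1519-rated model beating a
# 1387-rated one gains only a few points, while the reverse upset would
# move the ratings much more.
favorite_wins = elo_update(1519, 1387, 1.0)
upset_win = elo_update(1519, 1387, 0.0)
```

This is why the article notes that beating a strong opponent is worth more: the update is proportional to how surprising the result was, so expected wins barely move the scores.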

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.