Trillion Labs Unveils 'Tri 21B Think,' Ranks in Global Top 30

By Kim Ji-young | Seoul Economic Daily

Trillion Labs, an artificial intelligence model startup, has unveiled "Tri 21B Think," a reasoning ("Think") model that applies reinforcement learning to its existing "Tri 21B" to maximize reasoning capabilities. The model ranks in the global top 30 on the leaderboard of Artificial Analysis (AA), a global AI model performance analysis organization.

The newly released Tri 21B Think is built on Trillion Labs' existing Tri 21B model, with "thinking" capabilities enhanced through reinforcement learning that expands the reasoning process. Rather than simply generating answers, the model emits its reasoning steps as tokens during problem-solving and implements a "backtracking" structure that lets it return to earlier steps for review when necessary. According to the company, this backtracking structure means the model's ability to solve complex, difficult tasks improves exponentially the longer it thinks. As a result, Tri 21B Think can deliver strong performance in areas that push past the limits of existing LLMs, including complex data analysis (deep research), autonomous AI agents, and advanced mathematics and coding. The model has 21 billion (21B) parameters and is designed to fit on a single GPU.
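The "reasoning steps plus backtracking" structure described above can be illustrated with a toy search. This is a minimal sketch under my own assumptions, not Trillion Labs' actual method: the model's thinking tokens are stood in for by a trace of step strings, and backtracking is an explicit retreat from a dead-end partial solution.

```python
# Illustrative sketch only (not Tri 21B Think's implementation): a toy
# reasoning loop that records "thinking" steps as it searches for a subset
# of numbers summing to a target, and backtracks when a path fails.

def solve(numbers, target, trace=None, acc=None):
    """Depth-first search with an explicit reasoning trace.

    Returns (subset, trace) where subset sums to target, or (None, trace)
    if no subset works. Each trace entry is one "thinking step".
    """
    if trace is None:
        trace = []
    if acc is None:
        acc = []
    if acc and sum(acc) == target:
        trace.append(f"accept {acc}")          # goal reached
        return acc, trace
    if not numbers or sum(acc) > target:
        trace.append(f"backtrack from {acc}")  # dead end: retreat
        return None, trace
    head, rest = numbers[0], numbers[1:]
    trace.append(f"try include {head}")
    result, trace = solve(rest, target, trace, acc + [head])
    if result is not None:
        return result, trace
    trace.append(f"try exclude {head}")        # revise the earlier choice
    return solve(rest, target, trace, acc)

if __name__ == "__main__":
    subset, trace = solve([5, 3, 8, 2], 10)
    print(subset)  # → [5, 3, 2]
```

The longer the search is allowed to run, the more dead ends it can recover from, which is the intuition behind letting a reasoning model "think longer" on hard problems.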

"We are very pleased to contribute to Korea's leap toward becoming one of the top three global AI powers through this AA listing," said Shin Jae-min, CEO of Trillion Labs. "Having confirmed our technological capability to deliver overwhelming performance with limited resources, we will enhance model completeness with the goal of achieving global top performance, establishing our position as a global AI model beyond national representative status."


AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.