
Kakao Open-Sources Kanana-2 Model Optimized for Agentic AI


Kakao has introduced Kanana-2, its most sophisticated in-house large language model (LLM) to date, with improved performance and efficiency, now available as open source and tailored for agentic artificial intelligence (AI) systems. The company has released three new models: Base, Instruct, and Thinking.

The Instruct model stands out for its stronger instruction following, the result of post-training advancements, while the Thinking model excels at reasoning. This is the first time Kakao has released a reasoning model as open source, with full public access to the model weights for developers who want to fine-tune them on their own data.

Since introducing its in-house Kanana series last year, the company has steadily broadened its open-source offerings, moving from lightweight models to Kanana-1.5, which was designed for complex problem-solving. Kanana-2 marks the company's most significant research advance to date, delivering substantial gains in performance and efficiency with a focus on AI that understands user intent and acts proactively.


"The success of innovative AI services relies fundamentally on the performance and efficiency of the foundational language models," said Kim Byung-hak, Performance Lead at Kakao Kanana. "Beyond striving for sheer performance, we are dedicated to building practical AI models that can be rapidly deployed and work effectively in real applications, while openly sharing them to benefit the global AI research community."


The newest LLM greatly enhances two capabilities essential for agentic AI: tool calling and instruction following. Compared with its predecessor, Kanana-1.5-32.5b, multi-turn tool-calling performance has more than tripled, enabling the model to better understand and execute intricate step-by-step requests. Language support has also expanded from Korean and English to six languages, adding Japanese, Chinese, Thai, and Vietnamese.
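
Multi-turn tool calling means the model can chain tool invocations across turns, using each result to decide its next step. The article does not show Kanana-2's actual interface, so the sketch below uses an assumed OpenAI-style message schema with made-up tool names ("get_weather", "book_taxi") purely to illustrate the pattern:

```python
import json

# Illustrative multi-turn tool-calling trace. The message schema and the
# tool names are assumptions, not Kakao's documented Kanana-2 API.
tools = [
    {"name": "get_weather",
     "description": "Return the current weather for a city.",
     "parameters": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]}},
    {"name": "book_taxi",
     "description": "Book a taxi to a destination.",
     "parameters": {"type": "object",
                    "properties": {"destination": {"type": "string"}},
                    "required": ["destination"]}},
]

messages = [
    # One compound request that no single tool call can satisfy.
    {"role": "user",
     "content": "Check the weather in Seoul; if it's raining, book me a taxi home."},
    # Step 1: the model calls the weather tool first.
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "name": "get_weather",
         "arguments": json.dumps({"city": "Seoul"})}]},
    # Step 2: the runtime executes the tool and feeds the result back.
    {"role": "tool", "tool_call_id": "call_1",
     "content": json.dumps({"condition": "rain", "temp_c": 9})},
    # Step 3: conditioned on that result, the model chains a second call.
    {"role": "assistant", "tool_calls": [
        {"id": "call_2", "name": "book_taxi",
         "arguments": json.dumps({"destination": "home"})}]},
]
```

Better multi-turn performance on traces like this means fewer dropped steps when the model has to plan, observe a result, and act again.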

The model applies multi-head latent attention to process longer stretches of text without slowing down, alongside a mixture-of-experts (MoE) structure that activates only the parts of the network needed to answer a given question.
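
Multi-head latent attention cuts the memory cost of long contexts by caching a small shared latent per token instead of full per-head keys and values, up-projecting them only when attention is computed. The PyTorch sketch below is a minimal illustration of that idea with assumed dimensions; it omits rotary embeddings, causal masking, and other production details, and is not Kakao's actual implementation:

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Minimal multi-head latent attention sketch: only the small c_kv
    latent is cached, shrinking the KV cache versus standard attention."""

    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress: this is cached
        self.k_up = nn.Linear(d_latent, d_model)     # expand keys at use time
        self.v_up = nn.Linear(d_latent, d_model)     # expand values at use time
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        c_kv = self.kv_down(x)                       # (B, T, d_latent)
        if latent_cache is not None:                 # extend cache while decoding
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(c_kv).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c_kv).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (att @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(y), c_kv                     # return the latent to cache
```

With d_latent much smaller than n_heads * d_head, the per-token cache shrinks accordingly, which is what lets longer inputs be processed without the usual slowdown.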


These design choices save computing power, speed up responses, and let the system handle many requests at once with ease.
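
The MoE side of that saving comes from sparse activation: a router scores every expert for each token, but only the top-k experts actually run. The sketch below is a minimal top-k MoE layer with assumed sizes; the article does not specify Kanana-2's expert count or routing scheme:

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer: total parameters grow with
    n_experts, but each token pays only for k expert forward passes."""

    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                             # x: (n_tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = torch.softmax(weights, dim=-1)      # blend the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique().tolist():  # only selected experts run
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

# Usage: eight experts exist, but each token touches only two of them.
layer = SparseMoE()
y = layer(torch.randn(16, 512))
```

Because unchosen experts never execute, serving cost tracks the active parameters rather than the full model size, which is how one deployment can absorb many concurrent requests.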

Benchmark tests show Kanana-2 Instruct delivering performance on par with the latest top LLMs, such as Alibaba's Qwen3-30B-A3B. The Thinking model also demonstrated advanced reasoning comparable to Qwen3-30B-A3B on multi-step problem-solving benchmarks, validating its potential as a reasoning-oriented AI.

