Kernel-level optimization, built for production.
Our mission is to unlock peak inference performance through production-grade kernel engineering and a hardware-agnostic acceleration stack, Nova Engine.
We envision a world where inference efficiency is a competitive advantage: measurable, deployable, and independent of any single silicon vendor.
Born in the SNU Deep Learning Lab (DLLab), LLM Core AI builds software that unlocks peak hardware potential. We recover performance lost to general-purpose Python frameworks by optimizing at the primitive-kernel level in C++ and CUTLASS. Our "Dual-Engine" strategy pairs the two sides of the business: Nova Engine builds the technical moat, and Agentic Commerce proves its real-world value.
A multidisciplinary team of systems engineers and applied researchers focused on real performance and real deployments.