Foxconn, Taiwan’s leading electronics manufacturer, has introduced its first large language model, “FoxBrain,” which the company intends to use to enhance manufacturing and supply chain management.
According to the company, the model was trained on 120 Nvidia H100 GPUs over a four-week period. Built on Meta’s Llama 3.1 architecture, FoxBrain is optimized for Traditional Chinese and Taiwanese language styles; Foxconn describes it as Taiwan’s first AI model with advanced reasoning capabilities.
While acknowledging a slight performance gap compared with China’s DeepSeek distillation model, Foxconn said FoxBrain’s overall capabilities are approaching world-class standards. Initially developed for internal use, the model supports a range of applications, including data analysis, decision-making, document collaboration, mathematics, problem-solving, and code generation.
Foxconn plans to collaborate with technology partners to expand FoxBrain’s applications and share its open-source insights to promote AI-driven innovation in manufacturing, supply chain optimization, and intelligent decision-making.
Nvidia played a key role in the project by providing support through its Taiwan-based supercomputer, “Taipei-1,” and offering technical guidance throughout the model’s training process. Taipei-1, located in Kaohsiung, is the largest supercomputer in Taiwan and is owned and operated by Nvidia.