Mira Murati’s Thinking Machines Makes Tinker AI Generally Available
Tinker is designed to simplify and lower the cost of fine-tuning large language models, a process used to tailor general-purpose AI models for specific tasks.
Thinking Machines Lab has launched Tinker, its artificial intelligence fine-tuning service, into general availability, marking the startup’s first major commercial product since emerging from stealth earlier this year.
San Francisco-based Thinking Machines was founded in February by Mira Murati, the former chief technology officer of OpenAI, where she led the development of flagship products, including ChatGPT and Sora.
The company has rapidly assembled a high-profile team, recently hiring Soumith Chintala, co-creator of PyTorch, from Meta. In June, Thinking Machines raised a $2 billion seed round at a $10 billion valuation, backed by investors including Nvidia, AMD, and ServiceNow.
Instead of traditional fine-tuning methods that update all model parameters, Tinker relies on Low-Rank Adaptation (LoRA), which modifies a smaller set of added parameters, significantly reducing compute requirements and deployment complexity. The company has also introduced an enhanced LoRA approach that aims to match the output quality of full fine-tuning.
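The parameter savings behind LoRA can be illustrated with a minimal sketch. The snippet below is not Tinker's API; it is a generic NumPy illustration with hypothetical layer sizes, showing how a frozen weight matrix is adapted through two small trainable factors:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4  # hypothetical layer dimensions and LoRA rank

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the adapted
# layer initially behaves exactly like the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def adapted_forward(x, alpha=8.0):
    """y = W x + (alpha / r) * B A x; only A and B receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# At initialization the LoRA delta is zero, so outputs match the base model.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameter count: r * (d_in + d_out) instead of d_in * d_out.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"full: {full_params}, LoRA: {lora_params} "
      f"({lora_params / full_params:.1%} of full fine-tuning)")
```

With these toy dimensions, the adapter trains 512 parameters instead of 4,096, and the gap widens sharply at the scale of real language models, which is what makes the approach cheaper to run and serve.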
The service abstracts away complex distributed training workflows: developers write a simple Python script as if training on a single processor, and Tinker automatically distributes the job across GPUs.
Additional features include checkpoint recovery, live sampling during training, and expanded support for large open-source models, including the trillion-parameter Kimi K2 and advanced multimodal Qwen vision models.