Researchers Create a Low-Cost Open-Source AI Model to Analyse How OpenAI’s o1 Reasons
Researchers from Stanford University and the University of Washington have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI’s o1 model. The researchers’ main objective was not to build a powerful reasoning-focused model but to understand how the San Francisco-based AI firm trained its o1 series models to perform test-time scaling. Notably, they were able to demonstrate the methodology and replicate the model’s behaviour at an extremely low cost, using far fewer compute resources.
Researchers Develop S1-32B AI Model
The researchers detailed the methodology and process of developing the model in a study published on the pre-print server arXiv. The process involved creating a synthetic dataset from a different AI model and using techniques such as distillation and supervised fine-tuning (SFT), alongside ablation experiments. The model is available in a GitHub repository.
It should be noted that the AI model was not built from scratch. The developers started with Alibaba’s Qwen2.5-32B-Instruct model and distilled reasoning capabilities into it to create the s1-32B large language model (LLM). Released in September 2024, the base model is capable, but given its size and lack of reasoning capabilities, it cannot match up to OpenAI’s o1.
During the process, the researchers used Google’s Gemini Flash Thinking application programming interface (API) to generate reasoning traces and responses. A total of 59,000 triplets of questions, reasoning traces (the chain of thought or CoT), and responses were extracted from the API. A dataset called s1K was then created by selecting 1,000 high-quality, diverse, and difficult questions along with their reasoning traces and responses.
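A rough Python sketch of what this data-collection step could look like is shown below. The query_model() helper is a hypothetical stand-in for a call to the Gemini Flash Thinking API, and the trace-length filter is an illustrative simplification; the paper describes its quality, difficulty, and diversity filters in more detail.

```python
# Sketch of collecting (question, reasoning trace, response) triplets and
# filtering them down to an s1K-style dataset of 1,000 examples.
import json

def query_model(question: str) -> tuple[str, str]:
    """Hypothetical API wrapper: returns (reasoning_trace, response)."""
    raise NotImplementedError("Replace with a real Gemini API call.")

def build_triplets(questions: list[str]) -> list[dict]:
    """Collect one triplet per question from the teacher model."""
    triplets = []
    for q in questions:
        trace, resp = query_model(q)
        triplets.append({"question": q, "trace": trace, "response": resp})
    return triplets

def select_s1k(triplets: list[dict], k: int = 1000) -> list[dict]:
    """Keep the k examples with the longest reasoning traces, a crude
    proxy for difficulty (the paper also filters for quality/diversity)."""
    return sorted(triplets, key=lambda t: len(t["trace"]), reverse=True)[:k]

def save(triplets: list[dict], path: str = "s1k.jsonl") -> None:
    """Write one JSON triplet per line, ready for fine-tuning."""
    with open(path, "w") as f:
        for t in triplets:
            f.write(json.dumps(t) + "\n")
```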
After creating the s1K dataset, the researchers performed supervised fine-tuning on the Qwen2.5-32B-Instruct model using basic fine-tuning hyperparameters. The distillation process took just 26 minutes of training on 16 Nvidia H100 GPUs.
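As a rough illustration, the fine-tuning step might look something like the sketch below, which uses Hugging Face’s TRL library. The base model id is real, but the hyperparameters and the s1k.jsonl file (assumed to hold one formatted training example per line in a "text" field) are placeholders rather than the paper’s exact settings.

```python
# Sketch of the supervised fine-tuning (SFT) step with Hugging Face's TRL.
# Hyperparameters are illustrative; the paper reports 26 minutes of
# training on 16 Nvidia H100 GPUs with basic fine-tuning settings.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumes each line of s1k.jsonl carries a "text" field combining the
# question, reasoning trace, and response.
dataset = load_dataset("json", data_files="s1k.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # base model being fine-tuned
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="s1-32b",
        num_train_epochs=5,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=2,
        learning_rate=1e-5,
        bf16=True,
    ),
)
trainer.train()
```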
Until this point, the researchers did not know how OpenAI trained its models to “think” or how it managed to stop the thinking process. Without a stopping mechanism, a model runs the risk of overthinking indefinitely, second-guessing its output and wasting valuable processing power.
While fine-tuning the model, the researchers found something interesting: they could manipulate the inference time by appending specific words to the model’s output. With the s1-32B model, they added a “wait” command to force it to think beyond the usual inference period. Once added, the model began second-guessing and verifying its output. The tag was then used to either lengthen this test-time scaling phase or cut it short.
The researchers also experimented with several other phrases, such as “alternatively” and “hmm”, but found that the best performance metrics were achieved with the “wait” tag. Since this technique brought the model close to o1-level performance, the researchers suggest it might be similar to the method OpenAI used to fine-tune its reasoning models.
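A minimal inference-time sketch of this “wait” trick follows. The model id simplescaling/s1-32B and the <|im_end|> delimiter are assumptions based on the Qwen chat format, and the released s1 code may implement the loop differently.

```python
# Sketch of "budget forcing": suppress the end-of-thinking marker and
# append "Wait" so the model keeps reasoning for extra rounds.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "simplescaling/s1-32B"   # assumed Hugging Face model id
END_OF_THINKING = "<|im_end|>"   # assumed delimiter (Qwen chat format)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def think(prompt: str, extra_rounds: int = 1) -> str:
    """Generate a reasoning trace, forcing extra reflection rounds."""
    text = prompt
    for round_idx in range(extra_rounds + 1):
        inputs = tok(text, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=1024)
        text = tok.decode(output[0], skip_special_tokens=False)
        # Between rounds, swap the stop marker for "Wait" so the model
        # second-guesses and verifies its own output on the next pass.
        if round_idx < extra_rounds and text.endswith(END_OF_THINKING):
            text = text[: -len(END_OF_THINKING)] + "Wait"
    return text
```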
A TechCrunch report claims that the researchers were able to create the s1-32B AI model for under $50 (roughly Rs. 4,380), highlighting that the post-training of reasoning models can be done at an extremely low cost.