Microsoft, in collaboration with OpenAI, has consistently integrated AI capabilities into its products and services, while also developing smaller models tailored for specific use cases. Recently, Microsoft Research introduced Orca, a novel AI model that learns by emulating the behaviours of extensive language models.
The research paper highlights that Orca has been specifically designed to overcome the limitations typically associated with smaller models, accomplishing this by imitating the reasoning processes found in larger foundational models like GPT-4.
By leveraging large language models such as GPT-4, language models like Orca can be fine-tuned and tailored to specific tasks. The advantage of Orca's smaller size is its reduced demand for computing resources during operation. This allows researchers to optimise their models based on their specific needs and run them independently, without dependence on a large-scale data centre.
Based on the research paper, Orca is a 13-billion-parameter AI model capable of imitating and acquiring knowledge from vast language models such as GPT-4. It is built upon the foundations of Vicuna.
With the assistance of GPT-4, which is rumoured to contain over one trillion parameters, Orca can learn explanations, intricate thought processes, and complex instructions step by step.
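This kind of "explanation tuning" can be pictured as pairing each training query with a teacher model's step-by-step explanation, then fine-tuning the smaller student model on those records. The sketch below is purely illustrative: the system prompt wording and the `teacher_explain` function are hypothetical stand-ins, not Orca's actual pipeline.

```python
# Hypothetical sketch of explanation tuning: build fine-tuning records
# that pair a query with a teacher model's step-by-step explanation.
# SYSTEM_PROMPT and teacher_explain are illustrative placeholders only.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Think step by step and justify "
    "your answer before stating it."
)

def teacher_explain(query: str) -> str:
    """Stand-in for a call to a large teacher model such as GPT-4."""
    return f"Step 1: restate the question: {query} Step 2: reason it through. Answer: ..."

def build_training_example(query: str) -> dict:
    """Assemble one (system, user, response) record for student fine-tuning."""
    return {
        "system": SYSTEM_PROMPT,
        "user": query,
        "response": teacher_explain(query),
    }

# A dataset of such records would then be used to fine-tune the student.
dataset = [build_training_example(q) for q in ["What is 2 + 2?", "Name a prime greater than 10."]]
print(len(dataset))  # → 2
```

In practice the student model is trained with ordinary supervised fine-tuning on these records, so it learns to reproduce the teacher's reasoning traces rather than only its final answers.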
By leveraging extensive and diverse imitation data, Microsoft is harnessing the power of Orca to advance progressive learning. Notably, Orca has already achieved a significant milestone by surpassing Vicuna's performance by more than 100% on challenging zero-shot reasoning benchmarks such as Big-Bench Hard (BBH). Additionally, the new model outperforms conventional instruction-tuned AI models by 42% on AGIEval.
Despite its smaller size, Orca is reputed to exhibit comparable reasoning abilities to ChatGPT when evaluated on benchmarks like BBH. Moreover, it demonstrates competitive performance on standardized academic tests such as SAT, LSAT, GRE, and GMAT, although it may not reach the same level as GPT-4 in those specific assessments.
According to the Microsoft research team, Orca has the ability to learn by utilising step-by-step explanations generated by both human experts and more advanced language models. Furthermore, it is anticipated that Orca will continue to enhance its skills and capabilities over time.