Microsoft unveils 2.7B parameter language model Phi-2


Microsoft’s 2.7 billion-parameter model Phi-2 showcases outstanding reasoning and language understanding capabilities, setting a new standard for performance among base language models with fewer than 13 billion parameters.

Phi-2 builds upon the success of its predecessors, Phi-1 and Phi-1.5, matching or surpassing models up to 25 times larger – thanks to innovations in model scaling and training data curation.

The compact size of Phi-2 makes it an ideal playground for researchers, facilitating exploration in mechanistic interpretability, safety improvements, and fine-tuning experimentation across a variety of tasks.

Phi-2’s achievements are underpinned by two key aspects:

  • Training data quality: Microsoft emphasises the critical role of training data quality in model performance. Phi-2 leverages “textbook-quality” data, focusing on synthetic datasets designed to impart common-sense reasoning and general knowledge. The training corpus is augmented with carefully selected web data, filtered based on educational value and content quality.
  • Innovative scaling techniques: Microsoft adopts innovative techniques to scale up Phi-2 from its predecessor, Phi-1.5. Knowledge transfer from the 1.3 billion-parameter model accelerates training convergence, leading to a clear boost in benchmark scores.

Performance evaluation

Phi-2 has undergone rigorous evaluation across several benchmarks, including BIG-Bench Hard, commonsense reasoning, language understanding, maths, and coding.

With only 2.7 billion parameters, Phi-2 outperforms larger models – including Mistral and Llama-2 – and matches or outperforms Google’s recently-announced Gemini Nano 2:

Beyond benchmarks, Phi-2 showcases its capabilities in real-world scenarios. Tests involving prompts commonly used within the research community reveal Phi-2’s prowess in solving physics problems and correcting student mistakes, demonstrating its versatility beyond standard evaluations:

Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4 trillion tokens from synthetic and web datasets. The training process – conducted on 96 A100 GPUs over 14 days – focuses on maintaining a high level of safety, and Microsoft claims the model surpasses open-source models in terms of toxicity and bias.
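The next-word prediction objective mentioned above is simply cross-entropy: at each position the model's output distribution is scored against the token that actually comes next. A minimal NumPy sketch of that loss (illustrative only – this is not Phi-2's training code, and the function name is invented here):

```python
import numpy as np

def next_token_loss(logits, token_ids):
    """Mean cross-entropy of predicting each token from the logits
    at the preceding position (the "next-word prediction" objective)."""
    preds = logits[:-1]       # (T-1, V): position t predicts token t+1
    targets = token_ids[1:]   # (T-1,)
    # Numerically stable log-softmax over the vocabulary axis.
    z = preds - preds.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Negative log-probability of each correct next token, averaged.
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy example: vocabulary of 5 tokens, sequence of 4 token ids.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
print(next_token_loss(logits, np.array([1, 3, 0, 2])))
```

Training at Phi-2's scale means minimising this quantity over 1.4 trillion tokens; with uniform (all-zero) logits the loss is ln(vocab_size), and anything the model learns pushes it below that baseline.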

With the announcement of Phi-2, Microsoft continues to push the boundaries of what smaller base language models can achieve.

(Image Credit: Microsoft)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

