{"id":6373,"date":"2022-10-22T20:25:16","date_gmt":"2022-10-22T20:25:16","guid":{"rendered":"https:\/\/en.topic.lk\/?p=6373"},"modified":"2022-10-22T20:25:18","modified_gmt":"2022-10-22T20:25:18","slug":"microsofts-supercomputer-lays-out-vision-for-future-ai-work","status":"publish","type":"post","link":"https:\/\/topic.lk\/6373\/","title":{"rendered":"Microsoft\u2019s supercomputer lays out vision for future AI work…"},"content":{"rendered":"
Microsoft has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models.

Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train OpenAI’s models. It represents a key milestone in a partnership to jointly create new supercomputing technologies in Azure.

It is also a first step toward making the next generation of very large AI models, and the infrastructure needed to train them, available as a platform for other organisations and developers to build upon.

Microsoft Chief Technical Officer Kevin Scott explained that the potential benefits extend far beyond narrow advances in one type of AI model, adding that this work could enable future applications that are difficult to imagine today.

A new class of multitasking AI models

A new class of models developed by the AI research community has shown that many of the narrow tasks previously handled by separate models can be performed better by a single massive model, one that absorbs the nuances of language, grammar, knowledge, concepts and context so deeply that it can excel at multiple tasks at once.

As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics and other productivity products.

The goal, Microsoft says, is to make its large AI models, training optimization tools and supercomputing resources available through Azure AI services and GitHub so that developers can leverage the power of AI at scale.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.

According to OpenAI CEO Sam Altman, Microsoft was able to build OpenAI’s dream system. He added that OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use.

Accordingly, Microsoft will soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning, giving developers access to the same family of powerful language models the company has used to improve language understanding across its products.

Microsoft also unveiled a new version of DeepSpeed, an open-source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. Alongside this, the company announced it has added support for distributed training to the ONNX Runtime, an open-source library designed to make models portable across hardware and operating systems.
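As a rough illustration of where a library like DeepSpeed sits in a training workflow, the sketch below wraps an ordinary PyTorch model with deepspeed.initialize. The toy model, training data and configuration values are invented for the example; they are not taken from Microsoft’s announcement.

```python
import torch
import torch.nn as nn
import deepspeed

# Toy stand-in for a large model; real workloads would be far bigger.
model = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))

# Illustrative config values: batch size, mixed precision and ZeRO
# optimizer-state partitioning are the knobs that cut memory and compute.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},          # requires a GPU in practice
    "zero_optimization": {"stage": 1},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# DeepSpeed returns an engine that handles data parallelism, gradient
# scaling and precision behind the usual forward/backward/step calls.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(10):
    x = torch.randn(32, 128, device=engine.device, dtype=torch.half)
    y = torch.randint(0, 10, (32,), device=engine.device)
    loss = nn.functional.cross_entropy(engine(x), y)
    engine.backward(loss)   # the engine partitions and scales gradients
    engine.step()
```

Launched through the deepspeed command-line launcher, the same script can run across many GPUs, with the engine handling the distribution logic that would otherwise need hand-written code.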
Learning the nuances of language

Designing AI models that might one day understand the world more like people do starts with language. These deep learning models are now far more sophisticated than earlier versions.

Two years ago, the largest models had 1 billion parameters. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model, with 17 billion parameters.

In what’s known as “self-supervised” learning, these AI models learn about language by examining billions of pages of publicly available documents on the internet, for example by predicting words that have been omitted from a sentence. As the model makes these predictions billions of times, it gets very good at perceiving how words relate to one another, which results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language.
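To make that objective concrete, here is a minimal sketch of masked-word prediction in PyTorch. The miniature Transformer, vocabulary size and token ids are all invented for illustration and bear no relation to the Turing models themselves.

```python
import torch
import torch.nn as nn

# Invented toy vocabulary and one sentence of token ids; id 0 marks a masked word.
vocab_size, d_model, mask_id = 1000, 64, 0
tokens = torch.tensor([[12, 47, mask_id, 205, 9]])   # third word hidden from the model
true_id = torch.tensor([150])                        # the word that was actually there

# A miniature stand-in for a large language model: embed, contextualise, predict.
embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
to_vocab = nn.Linear(d_model, vocab_size)

hidden = encoder(embed(tokens))                      # context-aware vector per position
logits = to_vocab(hidden[:, 2, :])                   # guess the masked third word
loss = nn.functional.cross_entropy(logits, true_id)  # penalise wrong guesses
loss.backward()                                      # one step; repeated billions of times
```

No human labels are involved: the training signal comes from the text itself, which is what makes it practical to learn from billions of web pages.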
AI at Scale

One advantage of the next generation of large AI models is that they only need to be trained once, using massive amounts of data and supercomputing resources. A company can then take the “pre-trained” model and simply fine-tune it for different tasks with much smaller datasets and far fewer resources.

The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of productivity offerings over the last few years, and has significantly advanced caption generation and question answering in Bing.
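Here is a minimal sketch of that pre-train once, fine-tune cheaply pattern, using an invented frozen encoder in place of a real pre-trained model:

```python
import torch
import torch.nn as nn

# Invented stand-in for an expensively pre-trained encoder (e.g. a Turing-style model).
pretrained = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(1), nn.Linear(16 * 64, 256))
for p in pretrained.parameters():
    p.requires_grad = False                    # keep the pre-trained weights fixed

head = nn.Linear(256, 2)                       # small new layer for one downstream task
opt = torch.optim.Adam(head.parameters(), lr=3e-4)

# A small labelled dataset suffices because the encoder already "knows" language.
x = torch.randint(0, 1000, (8, 16))            # 8 examples, 16 tokens each
y = torch.randint(0, 2, (8,))                  # binary labels, e.g. relevant / not

for _ in range(10):                            # minutes of tuning, not weeks of training
    loss = nn.functional.cross_entropy(head(pretrained(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the small task head is updated, which is why fine-tuning can get by with a fraction of the data and compute of the original training run.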