Google, during its recent online Google I/O conference, unveiled Vertex AI, a managed service that promises to make it easier to build artificial intelligence (AI) models while also setting the stage for the unification of machine learning and IT operations.
At the same time, Google unveiled its Language Model for Dialogue Applications (LaMDA), which promises to make it possible for chatbots to engage in more open-ended conversations, and its Multitask Unified Model (MUM), which can respond to natural language queries against related text, images, and videos in 75 different languages.
Finally, Google also announced its fourth-generation tensor processing units (TPUs) that on average will run AI models 2.7 times faster than the previous generation of TPUs.
Users can manage data and prototype, deploy, and interpret models without needing formal machine learning training, the company said. Specific capabilities of Vertex AI include:
Access to the AI toolkit that Google uses internally, including pre-trained APIs for computer vision, video, natural language, and structured data.
Faster deployment of AI applications via MLOps features such as Vertex Vizier, to increase the rate of experimentation; Vertex Feature Store, to serve, share, and reuse machine learning features; and Vertex Experiments, to accelerate deployment of models.
Tools such as Vertex Continuous Monitoring and Vertex Pipelines streamline the machine learning workflow, removing the complexity of self-service model maintenance and making results repeatable.
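The feature-store idea mentioned above — registering a feature once so any model can serve and reuse it — can be sketched in plain Python. This is a hypothetical, minimal in-memory illustration of the concept, not Vertex AI's actual API:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class FeatureStore:
    """Hypothetical in-memory feature store: teams register named
    features once, then any model can look them up at serving time
    instead of recomputing them per pipeline."""
    _features: dict = field(default_factory=dict)

    def register(self, entity_id: str, name: str, value: Any) -> None:
        # Key the feature by (entity, name) so it is shared and reusable.
        self._features[(entity_id, name)] = value

    def get(self, entity_id: str, name: str) -> Any:
        return self._features[(entity_id, name)]

store = FeatureStore()
store.register("user_42", "avg_session_minutes", 12.5)
print(store.get("user_42", "avg_session_minutes"))  # 12.5
```

A managed feature store adds versioning, access control, and low-latency serving on top of this basic lookup pattern.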
Google launches Vertex AI platform to help companies adopt machine learning operations
Google LLC is helping companies embrace machine learning operations, or MLOps, with the launch today of a new managed platform that it says will accelerate the deployment and maintenance of artificial intelligence models.
MLOps is to machine learning what DevOps is to application development. With MLOps, the idea is to add discipline to the development and deployment of machine learning models by defining processes that make machine learning development more reliable and productive.
The discipline brings together all of the engineering pieces that are required to deploy, run and train AI models. That includes steps such as data collection, data verification, feature engineering, resource management, configuration, model analysis and so on.
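The engineering steps above compose into a pipeline. A minimal sketch in Python, with stand-in stages and hypothetical names (not any real MLOps framework's API), shows the shape of that composition:

```python
# Illustrative MLOps pipeline skeleton: each stage mirrors one of the
# engineering steps named above. The "model" is a trivial stand-in.

def collect(raw):
    # Data collection: drop missing records.
    return [r for r in raw if r is not None]

def verify(rows):
    # Data verification: fail fast on malformed inputs.
    assert all(isinstance(r, (int, float)) for r in rows)
    return rows

def engineer_features(rows):
    # Feature engineering: center the values around their mean.
    mean = sum(rows) / len(rows)
    return [r - mean for r in rows]

def train(features):
    # Training stand-in: "learn" the spread of the features.
    return {"scale": max(features) - min(features)}

def analyze(model):
    # Model analysis: a simple sanity check on the trained artifact.
    return model["scale"] > 0

def pipeline(raw):
    return analyze(train(engineer_features(verify(collect(raw)))))

print(pipeline([3, None, 1, 4, 1, 5]))  # True
```

The value of MLOps is that each of these stages becomes a defined, repeatable, monitored step rather than an ad hoc script — which is what platforms such as Vertex Pipelines automate at scale.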