Google to preview new Vertex AI tools at its virtual AI Summit



Google’s new tools and partnerships are designed to make machine learning easier to deploy and work with in the real world.

Image: Google logo at the Googleplex campus in Mountain View, California (Sundry Photography/Adobe Stock)

Google on Thursday announced a new set of product features and partnerships for its Vertex AI platform, designed to make deploying machine learning models into production environments easier at scale. The new tools, features and partnerships will be previewed at its virtual Applied AI Summit today at noon EDT.


New Vertex AI features

Training Reduction Server

Vertex AI Training Reduction Server (TRS), which supports both TensorFlow and PyTorch, optimizes bandwidth and reduces the latency of multi-node distributed training on NVIDIA GPUs. According to Google, TRS “significantly reduces the training time required for large language workloads, like BERT, and further enables cost parity across different approaches.”

TRS also simplifies the deployment of Jupyter notebooks, reducing 12 deployment steps to a single click. The feature is designed to eliminate routine tasks and accelerate moving ML models into production.
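
For illustration, here is a minimal sketch of how a multi-node GPU training job with Reduction Server might be launched through the google-cloud-aiplatform Python SDK. The project, bucket, container images, machine shapes and replica counts are placeholder assumptions, not details from the announcement.

```python
# Sketch: multi-node GPU training with Vertex AI Reduction Server enabled.
# All names and values below are illustrative assumptions.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-staging-bucket")

job = aiplatform.CustomContainerTrainingJob(
    display_name="bert-distributed-training",
    container_uri="us-docker.pkg.dev/my-project/training/bert-trainer:latest",
)

job.run(
    replica_count=4,                       # worker nodes
    machine_type="n1-standard-16",
    accelerator_type="NVIDIA_TESLA_V100",
    accelerator_count=4,
    # Reduction Server replicas aggregate gradients to cut bandwidth and latency.
    reduction_server_replica_count=4,
    reduction_server_machine_type="n1-highcpu-16",
    reduction_server_container_uri=(
        "us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest"
    ),
)
```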

Tabular Workflows

Tabular Workflows includes a glass-box, managed AutoML pipeline that lets users see and interpret each step of the model building and deployment process. It allows data scientists to train models on datasets of more than a terabyte without sacrificing accuracy, and to choose which parts of the process to automate and which to engineer themselves.

Elements of Tabular Workflows can also be integrated into existing Vertex AI pipelines. Google also added new managed algorithms, including the advanced research model TabNet, along with model feature selection and model distillation. Future additions to Tabular Workflows will include Google proprietary models such as Temporal Fusion Transformers as well as open source models such as XGBoost and Wide & Deep.
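
As a point of reference, the sketch below uses the existing managed AutoML tabular path in the google-cloud-aiplatform SDK; it is a stand-in for the new Tabular Workflows pipeline described above, which is not shown here. Dataset sources, column names and the training budget are assumptions.

```python
# Sketch: training a tabular model on Vertex AI via the managed AutoML path.
# Stand-in for Tabular Workflows; names and values are illustrative assumptions.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.TabularDataset.create(
    display_name="transactions",
    gcs_source=["gs://my-bucket/transactions.csv"],
)

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="fraud-classifier",
    optimization_prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    target_column="is_fraud",
    budget_milli_node_hours=1000,   # one node-hour of training budget
)
```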

Serverless Spark

To fast-track the deployment of ML models into production and integrate data modeling capabilities directly into the data science environment, Google announced the Serverless Spark tool as well as partnerships with Neo4j and Labelbox to help ML model builders work with structured, graph and unstructured data.

For structured data, Google Serverless Spark will allow data scientists to launch a serverless Spark session from their notebooks and develop code interactively.
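
The snippet below sketches the kind of interactive PySpark code a data scientist might run once a notebook is attached to a serverless Spark session. Session setup itself is not shown, and the table and bucket paths are assumptions.

```python
# Sketch: interactive feature exploration inside a Spark-backed notebook.
# Bucket paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-exploration").getOrCreate()

# Read structured data and do lightweight feature engineering interactively.
orders = spark.read.parquet("gs://my-bucket/orders/")
features = (
    orders.groupBy("customer_id")
          .agg(F.count("*").alias("order_count"),
               F.avg("order_total").alias("avg_order_total"))
)
features.write.mode("overwrite").parquet("gs://my-bucket/features/customers/")
```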


For graph data, Google is announcing a partnership with Neo4j that lets data scientists explore, analyze and engineer features from connected data in Neo4j, then deploy models with Vertex AI from a single platform. Using Neo4j Graph Data Science and Vertex AI, data scientists can apply graph-based inputs to use cases such as fraud and anomaly detection, recommendation engines, customer 360 and logistics.
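
As a rough illustration, the sketch below engineers graph features with the Neo4j Graph Data Science Python client and exports them for a downstream Vertex AI training job. Connection details, node labels and the export step are assumptions, not part of the announcement.

```python
# Sketch: graph feature engineering with Neo4j GDS, exported for Vertex AI training.
# Hosts, credentials, labels and file paths are illustrative assumptions.
from graphdatascience import GraphDataScience

gds = GraphDataScience("bolt://my-neo4j-host:7687", auth=("neo4j", "secret"))

# Project an in-memory graph of accounts and the transactions connecting them.
G, _ = gds.graph.project("fraud-graph", "Account", "TRANSACTED_WITH")

# Compute node embeddings to use as model features (e.g. for fraud detection).
embeddings = gds.fastRP.stream(G, embeddingDimension=64)

# embeddings is a pandas DataFrame (nodeId, embedding); export it so it can be
# fed into a Vertex AI training job alongside other tabular features.
embeddings.to_csv("account_embeddings.csv", index=False)
```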

For unstructured data, Google’s partnership with Labelbox allows data scientists to use unstructured data to build machine learning models on Vertex AI.

Example-based Explanations

To help data scientists manage and maintain ML models in production, Google is previewing Vertex AI Example-based Explanations. By using example-based explanations to quickly diagnose and address issues, data scientists can identify mislabeled examples in their training data or determine what data to collect to improve model accuracy.
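
The sketch below illustrates the idea behind example-based explanations rather than the Vertex AI API itself: for a suspect prediction, look up the nearest training examples in embedding space and inspect their labels. The embeddings and labels here are synthetic placeholders.

```python
# Sketch of the concept behind example-based explanations (not the Vertex AI API):
# retrieve the closest training examples for a prediction and check their labels.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder data; in practice these would come from your own model's embeddings.
train_embeddings = np.random.rand(1000, 64)
train_labels = np.random.randint(0, 2, size=1000)

index = NearestNeighbors(n_neighbors=5).fit(train_embeddings)

def explain_with_examples(query_embedding):
    """Return (index, label, distance) for the closest training examples."""
    dist, idx = index.kneighbors(query_embedding.reshape(1, -1))
    return list(zip(idx[0].tolist(), train_labels[idx[0]].tolist(), dist[0].tolist()))

# If the neighbors' labels disagree with the model's prediction, the nearby
# training examples may be mislabeled, or that region may need more data.
print(explain_with_examples(np.random.rand(64)))
```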
