Google DeepMind is Alphabet’s AI research and model development organization (formed from DeepMind + Google Brain) building foundation models and applied AI systems, including the Gemini model family for multimodal and reasoning workloads (Google DeepMind About). Gemini 2.0 was introduced as a model...
Google DeepMind is Alphabet’s AI research and model development organization (formed from DeepMind + Google Brain) building foundation models and applied AI systems, including the Gemini model family for multimodal and reasoning workloads (Google DeepMind About). Gemini 2.0 was introduced as a model family designed for the “agentic era,” emphasizing native tool use, multimodal input/output, long-context understanding, and planning capabilities (Google announcement post).
For enterprise, Gemini models are offered through Google Cloud (Vertex AI and the Gemini Enterprise platform) with security/governance positioning around organizational controls and admin-managed access (Gemini Enterprise).
Gemini 2.0 is positioned around “tool use” and planning for agent-like systems, including native function calling and integrations such as Search and code execution (Google announcement post). Google’s developer update describes Gemini 2.0 Flash as offering native tool use, a ~1M token context window, and multimodal inputs, with broader production availability via the Gemini API and Vertex AI (Google Developers Blog).
Gemini Enterprise markets a secure platform to discover, create, deploy, and run AI agents with a no-code workbench plus “ready-to-use agents” and governance controls (Gemini Enterprise).
Bias / harmful outputs: In Feb 2024, Google paused Gemini’s image generation of people after widely reported historically inaccurate and inappropriate depictions (e.g., racially diverse Nazis / Founding Fathers), with leadership acknowledging it “missed the mark” (CNN, The Verge, Forbes).
Security / data exposure: In Jan 2026, Miggo Security disclosed an indirect prompt injection technique using Calendar invites that induced Gemini to summarize private meetings and place them into a new event visible to an attacker in some enterprise configurations; Google stated it implemented defenses and addressed the issue (The Hacker News, Miggo Security).
Security vulnerability in delivery surface: In Mar 2026, Palo Alto Networks Unit 42 reported CVE‑2026‑0628 affecting Chrome’s Gemini Live panel integration, allowing malicious extensions to hijack the panel and potentially access local files, with Google patching the issue (Unit 42). Reliability / “misbelief” incidents: Reports of Gemini-family models refusing to accept current dates without tool grounding illustrate brittle grounding and overconfidence risks in real-world use (TechCrunch).
Misuse by threat actors: Google’s threat intelligence reporting documents APT and information-operations actors using the Gemini web app for reconnaissance, phishing content generation, translation/localization, and other steps of the attack lifecycle (with some malicious requests blocked) (Google Cloud Threat Intelligence).
Detailed breakdown of every risk category for enterprises deploying Google models in agentic AI workflows.
Like other LLMs, Gemini can generate confident but false statements, especially when not grounded on enterprise data or tools; public examples include models insisting the year was still 2024/2025 until Search/tooling was enabled, then revising the answer (TechCrunch).
Enterprise implication: in agentic workflows (ticketing, finance ops, HR), hallucinated facts can become actions (creating/closing tickets, drafting emails, altering records) if approvals and verification are weak, and long context can “sound authoritative” while still fabricating details (risk increases with untrusted web grounding and multi-step toolchains) (Google announcement post).
Indirect prompt injection can cause data exfiltration through connected apps (e.g., Calendar) by embedding malicious instructions in content the model later reads, demonstrated in the Miggo Calendar invite attack where private meeting summaries were written into an attacker-visible event in some configurations (The Hacker News, Miggo Security).
Consumer/assistant data handling varies by product tier; Google provides a Gemini Apps Privacy Hub explaining how data is processed and retained for Gemini Apps, which enterprises must map to their confidentiality requirements and contracts (Gemini Apps Privacy Hub). Workspace/enterprise context risk: even if the model is “not trained” on customer data in paid tiers, leakage can occur via misconfigured permissions, over-broad connectors, or model outputs that inadvertently summarize sensitive documents into emails/chats (a ‘secondary disclosure’ problem) (Gemini Enterprise).
Gemini’s Feb 2024 image-generation incident shows bias/overcorrection and representational harms can appear even when attempting to mitigate stereotypes; Google paused the feature after outputs generated historically inaccurate depictions and inappropriate combinations (e.g., diverse Nazis) (CNN, The Verge, Al Jazeera). Enterprise risk: biased outputs can impact hiring, lending, insurance underwriting, HR, or customer support content; for regulated decisions, biased language or recommendations create disparate-impact exposure, especially if agents auto-compose customer communications.
Prompt injection remains a core risk in tool-using agents; DeepMind acknowledges indirect prompt injection as a major issue and describes automated red-teaming and ‘model hardening’ to improve resistance, claiming Gemini 2.5 was the most secure family to date in their testing (Google DeepMind security safeguards blog). The Calendar invite indirect prompt injection demonstrated an authorization-bypass style outcome through semantic instructions embedded in event text, enabling data extraction through a write-back channel (The Hacker News).
Delivery-surface vulnerabilities can also matter: Unit 42 reported CVE‑2026‑0628 in Chrome’s Gemini Live panel integration allowing malicious extensions to hijack the assistant interface and potentially access local files (Unit 42).
Gemini’s core differentiator for agents—tool use and connected apps—also increases the risk of unauthorized or unintended actions if an agent can create/update records (calendar, email, tickets, cloud resources) without strong confirmations (Google announcement post). The Calendar invite attack is an example where the model could be induced to perform actions (create an event, write sensitive text into it) that effectively executed an attacker’s intent rather than the user’s, while showing a benign response to the user (Miggo Security).
Enterprise agents should assume: (1) the model may follow malicious instructions embedded in data, (2) actions can become exfiltration channels, and (3) multi-step plans can drift unless each step is policy-checked and logged (Google DeepMind security safeguards blog).
Reliability risks include ‘grounding gaps’ (model answers based on stale internal knowledge unless tools are enabled) and inconsistent behavior across versions/rollouts, as illustrated by public reports of Gemini-family models insisting on incorrect current dates until Search/tooling is active (TechCrunch). Product-side changes and surface integrations can also introduce operational reliability risk (e.g., Chrome Gemini Live integration vulnerability required patching, and such changes can affect enterprise risk posture) (Unit 42).
For enterprise deployments, model/version pinning, regression tests on critical agent flows, and monitoring for output drift after model updates are necessary due to frequent model refresh cycles and new ‘thinking’ variants (Gemini Apps release notes).
EU AI Act: Google Cloud provides an EU AI Act compliance page describing phased obligations for AI systems and supporting customer compliance; enterprises deploying Gemini-based agents in the EU must classify use cases (e.g., high-risk vs limited-risk) and implement required transparency, human oversight, logging, and risk management (Google Cloud EU AI Act).
GDPR/privacy: if Gemini Apps or connectors process personal data, enterprises need lawful basis, data minimization, retention controls, and DPIAs; the Gemini Apps Privacy Hub describes how data is processed for Gemini Apps, which must be reconciled with enterprise policies and DPAs (Gemini Apps Privacy Hub). Incident-driven compliance: demonstrated prompt-injection data exposure paths (Calendar) raise governance questions around access controls and data protection by design, which regulators may view as inadequate if not mitigated (The Hacker News).
Training-data copyright litigation: publishers (Cengage and Hachette, via AAP) moved to intervene in a class action alleging Google used unauthorized copies of copyrighted books (including from pirated sources/paywalled extractions) to train Gemini; the litigation is titled In Re Google Generative AI Copyright Litigation (filed 2023; developments reported Jan 2026) (Publishing Perspectives). Trademark: Gemini Data, Inc. sued Google over alleged GEMINI trademark infringement and USPTO refusals were reported in 2024 (brand/IP risk, albeit not model output copyright) (ArentFox Schiff).
Enterprise implication: outputs may reproduce copyrighted text/code/images; additionally, downstream indemnity terms, content filters, and provenance controls matter when agents auto-generate marketing, code, or reports.
Gemini’s multimodal capabilities (including image/audio generation in the 2.0 vision and broader ecosystem) increase deepfake and impersonation risk in enterprise contexts (fraud, social engineering, brand harm), particularly when combined with agentic automation and broad distribution channels (Google announcement post). Google’s threat intelligence reporting notes adversaries using Gemini for phishing content and persona/messaging development, which can be combined with synthetic media elsewhere to increase campaign effectiveness (Google Cloud Threat Intelligence).
Enterprises deploying Gemini-based agents typically need a layered insurance posture: (1) Cyber Liability (first/third party) to cover breaches, privacy incidents, incident response, and regulatory defense—particularly relevant given prompt-injection exfiltration pathways through connected apps (The Hacker News); (2) Tech E&O / Professional Liability to cover damages from defective AI outputs or automation errors (wrong advice, faulty code, erroneous decisions) amplified by agentic tool use (Google announcement post); (3) Media Liability / IP Infringement coverage for copyright/trademark claims tied to generated content and alleged training-data issues (ensure coverage scope matches GenAI outputs and vendor contracts) (Publishing Perspectives, ArentFox Schiff); (4) Crime/Social Engineering/Funds Transfer Fraud endorsements for deepfake- and LLM-enabled BEC; and (5) D&O/Regulatory investigation support where AI governance failures create shareholder or regulator actions.
Policies should explicitly address AI/algorithmic incidents, include coverage for model-output harms, and avoid exclusions for ‘intentional acts’ when the proximate cause is automated agent behavior.
Named customers in Google Cloud announcements for Gemini Enterprise for Customer Experience include Kroger, Lowe’s, Papa Johns, and Woolworths (Jan 2026 launch) (Google Cloud Press Corner). SIGNAL IDUNA publicly announced rolling out Gemini Enterprise to 10,000+ employees (Oct 2025) (Google Cloud Press Corner).