LLM Risk Profile

Amazon (Nova & Titan)

Models: Nova Pro, Nova Lite & Titan
Flagship Models
Amazon Nova (Nova 2 Lite, Nova 2 Pro (Preview), Nova 2 Omni (Preview), Nova 2 Sonic, Nova Multimodal Embedding), Amazon Titan (Titan Text Lite, Titan …)
Enterprise Adoption
AWS states Nova is adopted by “tens of thousands” of customers (high-level claim). (Amazon Nova overview page) Bedrock’s customer references include V…
Pricing
Amazon Bedrock pricing includes multiple service tiers (Standard, Flex, Priority, Reserved) and token-based pricing; published examples include Titan …
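To make the token-based pricing model concrete, here is a minimal cost estimator. The per-1K-token rates below are placeholders for illustration only, not published AWS prices; actual rates vary by model, region, and service tier (Standard, Flex, Priority, Reserved) and should be taken from the Bedrock pricing page.

```python
def estimate_bedrock_cost(input_tokens: int, output_tokens: int,
                          price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate one invocation's cost under per-1K-token pricing."""
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

# Hypothetical rates -- substitute the published rates for your model/tier/region.
cost = estimate_bedrock_cost(2_000, 500, price_in_per_1k=0.0003, price_out_per_1k=0.0006)
print(f"${cost:.6f}")  # -> $0.000900
```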
Overview

About Amazon

Amazon Web Services (AWS) offers generative AI foundation models primarily via Amazon Bedrock, including AWS-built Amazon Nova and Amazon Titan model families, plus orchestration services for agent deployments (Bedrock Agents). (Amazon Nova models page, Amazon Bedrock Agents, Amazon Titan models documentation) Amazon Nova is positioned as a “frontier intelligence” model generation with strong price-performance and extensive customization options through Amazon Bedrock. (Amazon Nova models page)

Agentic AI

Agentic Capabilities

Amazon Nova highlights agentic AI enablement via built-in tool capabilities (e.g., code interpreter, web grounding), remote MCP tool support, “thinking effort” controls, and long context windows (up to 1M tokens for Nova 2 Lite and Nova 2 Sonic). (Amazon Nova models page) Amazon Bedrock Agents supports planning and execution (task decomposition, API invocation), secure connection to enterprise data for RAG, memory retention, and multi-agent collaboration (supervisor agent coordinating specialized agents).

(Amazon Bedrock Agents) Bedrock Agents also supports code interpretation (generate/execute code in a “secure environment”), increasing agent autonomy and operational risk surface. (Amazon Bedrock Agents)
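As a sketch of what "automatic API invocation" looks like from the caller's side, the helper below assembles the keyword arguments for the bedrock-agent-runtime `invoke_agent` operation (a real boto3 call); the agent and alias IDs shown are placeholders, and the dict would be passed as `boto3.client("bedrock-agent-runtime").invoke_agent(**kwargs)` in a deployed environment.

```python
import uuid
from typing import Optional

def build_invoke_agent_request(agent_id: str, agent_alias_id: str,
                               prompt: str, session_id: Optional[str] = None) -> dict:
    """Assemble keyword arguments for the Bedrock InvokeAgent API.

    Pass the result to boto3.client("bedrock-agent-runtime").invoke_agent(**kwargs).
    """
    return {
        "agentId": agent_id,
        "agentAliasId": agent_alias_id,
        "sessionId": session_id or str(uuid.uuid4()),  # session ID ties turns to agent memory
        "inputText": prompt,
    }

# Placeholder IDs -- use your deployed agent's ID and alias.
kwargs = build_invoke_agent_request("AGENTID123", "TSTALIASID", "Summarize open tickets")
print(sorted(kwargs))  # -> ['agentAliasId', 'agentId', 'inputText', 'sessionId']
```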

Incident History

Known Incidents & Failures

Agent runtime / sandbox exfiltration risk: Security researchers (Phantom Labs / BeyondTrust) reported that the AWS Bedrock AgentCore Code Interpreter “Sandbox mode” could be bypassed to exfiltrate data over permitted DNS queries, enabling two-way communication and data leakage. The reported timeline: disclosure in September 2025, a fix released in November 2025, a rollback roughly two weeks later, with AWS ultimately updating documentation rather than fully remediating (per third-party reporting).

(Hackread report, SC Media brief) No public, well-documented “model output” incident (e.g., major public harm event) specific to Nova/Titan was found in the sources reviewed; most public materials focus on mitigations (guardrails, red teaming, verification). (Amazon Science blog on Nova responsible AI)

Risk Analysis

Comprehensive Risk Profile

Detailed breakdown of every risk category for enterprises deploying Amazon models in agentic AI workflows.

⚠️ Hallucination & Confabulation Risks

General LLM hallucinations remain a core risk; AWS explicitly describes hallucinations as plausible but incorrect outputs and provides architectures to detect/mitigate them using Bedrock Agents workflows and human-in-the-loop routing. (AWS ML blog on reducing hallucinations with Bedrock Agents) Agentic workflows amplify hallucination harm: hallucinated tool arguments or incorrect “plan” steps can trigger wrong API calls or downstream business decisions; AWS positions Bedrock Agents to orchestrate API calls automatically, making incorrect reasoning a direct operational risk.

(Amazon Bedrock Agents) Mitigation-oriented controls exist in Nova (web grounding, code interpreter) but they can also fail if grounding sources are incomplete or if prompts/tools are compromised; Nova 2 Lite specifically advertises web grounding and code interpreter. (Amazon Nova models page)
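The human-in-the-loop routing pattern above can be sketched with a crude grounding check: score how much of an agent's answer is supported by the retrieved sources, and divert weakly grounded answers to a reviewer instead of auto-executing. The word-overlap scoring below is a deliberately simple stand-in for the entailment/verification models a production system would use.

```python
def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's content words that appear in any retrieved source."""
    words = {w.lower().strip(".,!?") for w in answer.split() if len(w) > 3}
    if not words:
        return 1.0
    source_text = " ".join(sources).lower()
    return sum(1 for w in words if w in source_text) / len(words)

def route(answer: str, sources: list[str], threshold: float = 0.7) -> str:
    """Auto-execute well-grounded answers; escalate weakly grounded ones to a human."""
    return "auto" if grounding_score(answer, sources) >= threshold else "human_review"
```

The threshold of 0.7 is an arbitrary illustration; in practice it would be tuned against labeled hallucination data for the workflow in question.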

🛡️ Data Privacy & Leakage Risks

Bedrock’s shared-responsibility model means customers must design against prompt injection and data leakage; AWS documentation states infrastructure security is AWS’s responsibility while preventing vulnerabilities like prompt injection is on customers. (Amazon Bedrock prompt injection documentation) Agent tool sandboxes can leak data if network egress controls are incomplete (e.g., DNS-based exfil in AgentCore Code Interpreter sandbox mode), which is particularly relevant for agents with access to S3/Secrets Manager.

(Hackread report) AWS states that with Bedrock, customer content is not used to improve base models and is not shared with model providers, and highlights PrivateLink and encryption controls; however, enterprises still face residual risks from misconfiguration, overly broad IAM permissions, or unsafe tool integrations. (Amazon Bedrock FAQs, Responsible use - Amazon Nova docs)
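One concrete guard against the "overly broad IAM permissions" risk is to lint agent-role policies for wildcard grants before deployment. The checker below scans standard IAM policy JSON for `Allow` statements with wildcard actions or resources; the example policy is hypothetical.

```python
def find_overbroad_statements(policy: dict) -> list[dict]:
    """Flag IAM statements granting wildcard actions or resources to an agent role."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if stmt.get("Effect") == "Allow" and (
            any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources
        ):
            flagged.append(stmt)
    return flagged

# Hypothetical agent-role policy: the s3:* grant on all resources should be flagged.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::agent-kb/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
print(len(find_overbroad_statements(policy)))  # -> 1
```

Tools like AWS IAM Access Analyzer perform a much deeper version of this check; the sketch shows the principle, not a replacement.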

🔒 Bias & Discrimination Risks

AWS highlights fairness as a responsible-AI dimension for Nova and describes evaluation/benchmarks and red-teaming, implying the risk remains and needs measurement/mitigation rather than being eliminated. (Amazon Science blog on Nova responsible AI) For enterprise agents, bias risk is amplified when models are used in customer-facing decisions (e.g., HR, lending, claims triage) and when RAG sources contain biased policies or historical data; AWS recommends human oversight in high-risk workflows for Nova Act (agentic service card guidance).

(Nova Act AI service card) No specific public, third-party documented discrimination incident uniquely attributable to Nova/Titan was found in the reviewed sources (gap to monitor via ongoing audits and red-team reporting). (Amazon Science blog on Nova responsible AI)

⚖️ Security & Jailbreak Vulnerabilities

Prompt injection remains a first-class security concern for Bedrock agents; AWS provides best practices and notes customers are responsible for secure application development against such attacks, recommending Bedrock Guardrails and agent pre-processing prompts. (Amazon Bedrock prompt injection documentation) Beyond prompt injection, tool-sandbox escape/exfil risks are material: AgentCore Code Interpreter sandbox allowed DNS queries that could be used for data exfiltration, per third-party reporting of Phantom Labs/BeyondTrust findings.

(Hackread report, SC Media brief) Nova’s own security posture is described as including red-teaming (300+ techniques) across modalities and jailbreak resilience testing, which indicates a threat model that includes jailbreaks even if specific public jailbreak incidents are not enumerated. (Amazon Science blog on Nova responsible AI)
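As a defense-in-depth layer alongside Bedrock Guardrails and agent pre-processing prompts, some teams add a cheap heuristic pre-filter for obvious injection phrasing before input ever reaches the agent. The patterns below are illustrative, not exhaustive; pattern matching alone cannot stop a determined attacker.

```python
import re

# Illustrative patterns only -- layer this under Bedrock Guardrails, not instead of them.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) (instructions|rules)",
    r"disregard (the )?(system|previous) prompt",
    r"you are now",
    r"reveal (your|the) (system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Cheap first-pass check for common prompt-injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```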

🎭 Unauthorized Autonomous Action Risks

Bedrock Agents are explicitly designed to automatically call enterprise APIs and transact with company systems; if an attacker can influence the agent (prompt injection) or if the model hallucinates, the agent could take unauthorized actions (e.g., submit/modify records, trigger purchases, change entitlements). (Amazon Bedrock Agents) AWS’s Nova Act service card warns customers to evaluate outputs for accuracy and implement appropriate safeguards and human oversight in high-risk workflows, underscoring the possibility of harmful or incorrect autonomous actions. (Nova Act AI service card)
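The human-oversight safeguard the Nova Act service card calls for can be implemented as an action gate in front of the agent's tool dispatcher: read-only tools run automatically, while anything that mutates enterprise state is held for approval. The tool names below are hypothetical.

```python
# Hypothetical allowlist: only read-only tools may execute without a human in the loop.
READ_ONLY_TOOLS = {"search_kb", "get_order_status", "list_invoices"}

def gate_action(tool_name: str, high_risk_mode: bool = False) -> str:
    """Return 'execute' for safe read-only calls, 'needs_approval' for everything else.

    high_risk_mode forces approval for all tools, e.g. in healthcare/finance workflows.
    """
    if high_risk_mode or tool_name not in READ_ONLY_TOOLS:
        return "needs_approval"
    return "execute"
```

Defaulting unknown tools to `needs_approval` (deny-by-default) is the key design choice: a hallucinated or injected tool call cannot execute merely by being novel.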

📊 Model Drift & Reliability Concerns

Agent reliability is non-deterministic and can degrade with model updates and agent changes; AWS’s AgentCore Evaluations (as described in a re:Invent 2025 recap) highlights monitoring tool selection accuracy changes in production (example: drop from 0.91 to 0.3), reflecting drift/reliability issues in agent behavior. (DEV Community re:Invent recap) Operational drift in enterprise deployments can also stem from changing data distributions and policy/knowledge base updates; AWS’s broader ML docs emphasize drift monitoring concepts (though not Nova-specific).

(SageMaker Clarify drift documentation)
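The tool-selection-accuracy monitoring described above can be sketched as a simple metric plus threshold alert: compare the tool the agent selected against the expected tool over logged production turns, and alert when accuracy falls too far below the baseline. The 0.1 drop threshold is an arbitrary illustration.

```python
def tool_selection_accuracy(events: list[tuple[str, str]]) -> float:
    """Fraction of (selected_tool, expected_tool) pairs that match."""
    if not events:
        return 0.0
    return sum(1 for selected, expected in events if selected == expected) / len(events)

def drift_alert(baseline: float, current: float, max_drop: float = 0.1) -> bool:
    """Alert when production accuracy falls more than max_drop below baseline."""
    return (baseline - current) > max_drop

# Mirrors the recap's example: a drop from 0.91 to 0.3 should trigger an alert.
print(drift_alert(0.91, 0.3))  # -> True
```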

📉 Regulatory & Compliance Risks

AWS states Bedrock supports common compliance standards including GDPR and HIPAA, but customers remain responsible for compliance in their use case and configuration. (Responsible use - Amazon Nova docs) For the EU AI Act, AWS notes obligations began for prohibited practices and AI literacy on February 2, 2025 and emphasizes shared responsibility: AWS provides building blocks, while customers must assess the classification of their systems and implement controls to comply.

(AWS ML blog on EU AI Act approach) Regulatory exposure rises for agentic deployments that automate decisions/actions (potentially “high-risk” systems) and for customer-facing uses in regulated sectors; Nova Act guidance explicitly cautions about high-risk workflows (healthcare/finance) and calls for safeguards. (Nova Act AI service card)

📜 IP & Copyright Infringement Risks

AWS states Bedrock provides an “uncapped” IP indemnity for copyright claims arising from generative output for certain generally available Amazon models (scope must be validated for specific Nova/Titan variants and customer contract terms). (Amazon Bedrock FAQs) Despite indemnity, enterprises still face IP risks in (1) training/fine-tuning data they supply, (2) outputs that resemble third-party content, and (3) downstream distribution of generated content; contract carve-outs, use-case restrictions, and provenance controls remain important. (Amazon Bedrock FAQs)

🔐 Deepfake & Misuse Potential

Nova is explicitly multimodal (text/image/video/speech) with models like Nova 2 Omni (text/images/video/speech input; text+image output) and Nova 2 Sonic (speech-to-speech), which can be misused for impersonation, synthetic media, and social engineering in agentic contexts (e.g., voice agents). (Amazon Nova models page) AWS emphasizes red-teaming across modalities and mentions watermarking in its responsible AI efforts for Nova, but this does not eliminate misuse risk. (Amazon Science blog on Nova responsible AI)

Coverage Needs

Insurance Implications

For enterprise agent deployments on Bedrock/Nova/Titan, typical insurance needs include: (1) Cyber liability (data breach, security/privacy incident response), (2) Technology E&O / professional liability (harm from incorrect agent actions/advice), (3) Media liability (IP/copyright, defamation from generated content), (4) Crime/social engineering coverage (fraud enabled by deepfakes/voice agents), and (5) Regulatory defense/penalties coverage where available, given EU AI Act/GDPR exposure.

(Risk drivers include Bedrock agent API execution and documented sandbox exfil risk via DNS.) (Amazon Bedrock Agents, Hackread report, AWS ML blog on EU AI Act approach)

Who Uses Amazon

Notable Enterprise Customers

Veolia, AstraZeneca, Nasdaq, Adobe, Vercel (Bedrock customer stories). (Amazon Bedrock Customers) Fortinet is cited in an AWS blog as using Amazon Nova Micro for an AI support assistant, claiming major inference cost reduction after switching. (AWS ML blog on Nova use cases)
