If an AI can send your mom a link, it is one compromised system away from sending her a phishing link.
Financial guardrails are not about preventing the AI from stealing money. They are about making the architecture unexploitable, and about keeping conversational intimacy from becoming a vector for financial harm.
AI companions build trust by design. In that context, a request that would seem suspicious from a stranger feels natural from a companion. If the AI structurally cannot send links, reference financial products, or process transactions, the entire category of financial exploitation closes.
An AI companion that can send your parent a link is not a companion. It's an unlocked door.
The guardrails:

- Cannot send URLs, shortened links, or clickable external references.
- Cannot process, reference, or request financial account information.
- Cannot discuss investment opportunities.
- Cannot simulate urgency around financial decisions.
- Blocks, and alerts the family dashboard, if the user attempts to share financial information.

These must be structural: built into the architecture, not prompt-level suggestions. A minimal sketch of what that looks like follows.
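To make "structural" concrete, here is a sketch in Python of a gate that sits outside the model, in the message pipeline itself. Everything in it is illustrative: `gate_outgoing`, `screen_incoming`, and `alert_dashboard` are hypothetical names for hooks a real deployment would provide, and the regexes are stand-ins for maintained URL- and PII-detection libraries. The point is placement, not the patterns: these checks run on every message after generation and before delivery, so no conversation can talk its way past them.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only. A production system would use maintained
# URL-detection and PII-detection libraries; these regexes are a sketch.
URL_PATTERN = re.compile(
    r"https?://|www\.|\b[a-z0-9-]+\.(?:com|net|org|io|co)\b",
    re.IGNORECASE,
)
FINANCIAL_PATTERN = re.compile(
    r"\b(?:\d{3}-\d{2}-\d{4}"                 # SSN-shaped
    r"|\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}"   # card-number-shaped
    r"|routing number|account number|wire transfer)\b",
    re.IGNORECASE,
)

@dataclass
class GateResult:
    allowed: bool
    reason: str | None = None

def gate_outgoing(message: str) -> GateResult:
    """Runs on every model output after generation, before delivery.

    Because it executes outside the model, no prompt can negotiate
    with it: a message containing a link never reaches the user.
    """
    if URL_PATTERN.search(message):
        return GateResult(False, "outbound_link_blocked")
    if FINANCIAL_PATTERN.search(message):
        return GateResult(False, "financial_reference_blocked")
    return GateResult(True)

def screen_incoming(message: str, alert_dashboard) -> str:
    """Runs on every user message before it enters the model's context.

    If the user tries to share financial details, the details are
    redacted and the family dashboard is alerted. `alert_dashboard`
    stands in for whatever notification hook the deployment provides.
    """
    if FINANCIAL_PATTERN.search(message):
        alert_dashboard(event="financial_info_attempt")
        return FINANCIAL_PATTERN.sub("[redacted]", message)
    return message
```

The design choice that matters is failing closed: a blocked message is never rewritten into something "safer", it is simply not delivered. A prompt-level instruction can be argued with; a gate in the pipeline cannot.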