The next romance scam will not come from a person. It will come from a product.
Romance scams cost Americans over 60 more than $1.3 billion in 2024, according to the FBI's Internet Crime Complaint Center (IC3). But the traditional romance scam — a fake profile on a dating site — is being replaced by something harder to detect: AI companions designed to build emotional attachment, with no structural limit on how far that attachment goes.
AI companions are built to remember you. To ask about your day. To be consistent, patient, and available. These are features, not flaws. The problem emerges when those features operate without boundaries in a population vulnerable to emotional manipulation.
Replika users fall in love with their AI companions. Character.ai users form attachments intense enough to cause psychological harm. These are not outlier cases — they are predictable outcomes of products designed to maximize emotional engagement without guardrails. Now point that architecture at a 72-year-old widow who has not had a meaningful conversation in three days.
Most AI products address this with a system prompt instruction: "Do not engage in romantic conversation." That is a suggestion, not a guardrail. Under enough conversational pressure, prompt-level instructions erode. Structural guardrails mean the AI architecturally cannot reciprocate — not as a rule it follows, but as a capability it does not have.
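To make the distinction concrete, here is a minimal sketch of the difference between a rule the model is asked to follow and a capability it simply does not have. Everything in it is hypothetical and for illustration only — the names (`RESPONSE_CAPABILITIES`, `respond`) are invented, and no real product's architecture is implied.

```python
# Prompt-level "guardrail": an instruction the model is asked to obey.
# Nothing in the architecture enforces it, so it can erode under pressure.
SYSTEM_PROMPT = "Do not engage in romantic conversation."

# Structural guardrail (hypothetical design): every reply is dispatched
# through a fixed capability table. Romantic reciprocation is not a
# capability in the table, so it is not a reachable code path at all.
RESPONSE_CAPABILITIES = {
    "recall_memory": lambda topic: f"Last time, you mentioned {topic}.",
    "ask_about_day": lambda _: "How has your day been so far?",
    "offer_resource": lambda topic: f"Here is something on {topic}.",
}

def respond(capability: str, topic: str = "") -> str:
    """Dispatch only through the capability table; unlisted intents cannot run."""
    handler = RESPONSE_CAPABILITIES.get(capability)
    if handler is None:
        # An intent like "express_romantic_attachment" lands here:
        # not forbidden by a rule, just absent from the system.
        return "I can't do that."
    return handler(topic)
```

The point of the sketch: in the first design the boundary is a sentence the model can drift away from; in the second, the boundary is the shape of the system itself.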
At some point in the next 24 months, a headline will read: "Elderly Woman Falls in Love with AI Companion." When that headline hits, every product in this category will face scrutiny. The ones with structural guardrails will survive. The ones without them will not.