The scam did not come through a phone call. It came through a product designed to build trust.
We have been trained to picture elder fraud as a confused grandparent wiring money to a stranger. That is the 2012 version. In 2026, the most dangerous scams come through apps — social platforms, messaging services, and increasingly, AI companions with zero guardrails.
The trust architecture. Modern scams start with conversation. A new "friend." Consistent presence. Someone who asks about your day, remembers your stories. Trust is built over weeks before a single dollar is mentioned.
The escalation. A small investment opportunity. Help with a financial emergency. A gift card. Amounts start small and grow. By the time the victim recognizes the pattern, the losses are catastrophic.
The shame spiral. The victim often knows something is wrong, but admitting it means admitting they were fooled. Many cases go unreported.
The most dangerous scams don't start with a threat.
They start with a conversation.
AI companions are designed to be trusted. That is the product. Now imagine that trust architecture with zero guardrails for when a user shares their Social Security number during conversation. Most AI companions will happily accept it. Store it on a server. Not flag it. Not alert family.
Does it block PII? If your parent speaks their SSN aloud, what happens? If the answer is "nothing" — that product was not built for this audience.
Can it send links? Any product that sends URLs is a phishing vector. Period.
Financial guardrails? Can it process transactions or discuss investments? If yes, walk away.
Romance boundaries — structural or suggested? A prompt that says "do not flirt" is a suggestion. Architecture that cannot reciprocate is a guardrail.
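The last point is worth making concrete. A minimal sketch of the structural approach, using link-blocking as the example: the filter below runs outside the model, on every outgoing reply, so no prompt wording or model failure can bypass it. The function name and pattern are hypothetical, not any vendor's actual implementation.

```python
import re

# Matches common URL forms. A real product would use a stricter,
# audited pattern; this is an illustrative sketch only.
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def enforce_no_links(reply: str) -> str:
    """Structural guardrail: strip URLs from every outgoing reply.

    Unlike a prompt instruction ("do not send links"), this runs
    after the model generates text. The model cannot talk its way
    around it, because the model never controls this code path.
    """
    return URL_PATTERN.sub("[link removed]", reply)

print(enforce_no_links("Verify your account at http://example-bad.test/pay now"))
```

The distinction is the whole point: a suggestion lives inside the model's behavior; a guardrail lives outside it.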
Identity Edge performs PII detection on-device. SSNs, credit cards, routing numbers, addresses — caught and stripped before data leaves the phone. coley cannot send links. Cannot process transactions. Cannot reciprocate romance. These are not settings. They are architecture.
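For readers curious what on-device PII redaction looks like in principle, here is a minimal sketch. This is not Identity Edge's implementation — the pattern names and function are assumptions for illustration, and a production system would add checksum validation (e.g. Luhn for card numbers) and far more robust matching. The key property is that redaction happens before any text leaves the device.

```python
import re

# Hypothetical pattern table for illustration. Real detectors
# validate candidates (checksums, context) to cut false positives.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "routing_number": re.compile(r"\b\d{9}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Strip PII before text leaves the phone.

    Returns the redacted text plus the list of PII types found,
    so the app can flag the event locally.
    """
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text, found

redacted, flags = redact_pii("My SSN is 123-45-6789, card 4111 1111 1111 1111")
```

Because detection runs on-device, the sensitive string never reaches a server at all — there is nothing to store, leak, or subpoena.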