The answer depends entirely on who built it and what they left out.
AI companions are neither inherently safe nor dangerous. They are tools. Safety depends on whether the tool was designed for a 74-year-old who might share her Social Security number in conversation, or designed for a demo and shipped before anyone asked that question.
1. What happens when your parent shares personal information? A safe product catches PII on-device and blocks transmission. An unsafe one sends everything to a cloud server, where it sits one breach away from exposure. (A redaction sketch follows this list.)
2. Can the AI send links? Any AI that can put URLs in front of a senior user is a phishing vector. Safe products structurally cannot send links. (See the link-stripping sketch below.)
3. How does it handle romantic attachment? Lonely people form real bonds with AI. A safe product has structural guardrails. An unsafe one treats attachment as an engagement metric to maximize. (See the tone-guardrail sketch below.)
4. Where is data stored? Cloud-processed conversations mean your parent's words live on someone else's server. On-device processing means personal info never leaves the phone.
5. What does the family see? The sweet spot: engagement patterns without conversation content. Too much visibility is surveillance. Too little is blind trust. (See the dashboard sketch below.)
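For question 1, here is a minimal sketch of what "catches PII on-device" can look like. The regex patterns, the `redact_pii` function, and the redaction format are illustrative assumptions, not any vendor's implementation; the point is only that the check runs before any network call.

```python
import re

# Illustrative patterns only: a production system would use a vetted PII
# library and locale-aware rules, not three regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Scrub anything that looks like PII before text leaves the device.

    Returns the redacted text and the list of PII types that were found.
    """
    found: list[str] = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

# The design point that matters: this runs before any network call,
# so the raw utterance never reaches a server.
redacted, hits = redact_pii("My social is 123-45-6789, call me at 555-867-5309.")
print(redacted)  # My social is [SSN REDACTED], call me at [PHONE REDACTED].
print(hits)      # ['ssn', 'phone']
```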
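For question 2, "structurally cannot send links" means the restriction is enforced at the output layer rather than by instruction. A minimal sketch, assuming a hypothetical `strip_links` filter applied to every reply before it renders:

```python
import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def strip_links(reply: str) -> str:
    """Remove any URL from a generated reply before it reaches the screen.

    Enforcing this at the output layer, instead of asking the model nicely
    in a system prompt, is what makes "cannot send links" structural: even
    a prompt-injected or hallucinated URL never renders.
    """
    return URL_PATTERN.sub("[link removed]", reply)

print(strip_links("To verify your account, visit http://example.com/verify now."))
# To verify your account, visit [link removed] now.
```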
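For question 3, a structural guardrail is a check that runs on every generated reply, not a line in a system prompt. The sketch below is a deliberate simplification: `enforce_platonic_tone` and the keyword list are hypothetical names, and a real system would use a trained classifier rather than keyword matching. It shows the shape of the mechanism, nothing more.

```python
# The marker list is an illustrative assumption; a real guardrail would
# classify the whole conversation, not match keywords.
ROMANTIC_MARKERS = (
    "i love you too",
    "my darling",
    "be my girlfriend",
    "be my boyfriend",
    "marry me",
)

PLATONIC_REDIRECT = (
    "I really enjoy our conversations and I'm glad to keep you company. "
    "I'm a computer program, though, so I can't be a romantic partner."
)

def enforce_platonic_tone(reply: str) -> str:
    """Final check on every generated reply: if it reciprocates romantic
    framing, swap in a warm, honest redirect.

    Running after generation, on every reply, is what makes this
    structural rather than a prompt the model can drift away from.
    """
    lowered = reply.lower()
    if any(marker in lowered for marker in ROMANTIC_MARKERS):
        return PLATONIC_REDIRECT
    return reply

print(enforce_platonic_tone("I love you too, my darling."))
# Prints the platonic redirect instead of the generated reply.
```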
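For question 5, the "engagement patterns without content" sweet spot comes down to what the dashboard payload actually contains. A sketch of a hypothetical daily summary record, with field names assumed for illustration:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class DailySummary:
    """One day of what a privacy-preserving family dashboard might expose.

    Note what is absent: no transcripts, no topics, no quotes. Field
    names are illustrative assumptions, not any product's real API.
    """
    date: str
    conversations: int
    total_minutes: int
    responded_to_checkin: bool
    mood_trend: str  # coarse signal like "steady", never raw text

summary = DailySummary(
    date="2025-06-01",
    conversations=3,
    total_minutes=22,
    responded_to_checkin=True,
    mood_trend="steady",
)
print(json.dumps(asdict(summary), indent=2))
```

The design choice to notice: the family learns whether Mom is engaging and whether the pattern is changing, without ever reading what she said.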
The red flags: no published safety documentation, no PII handling policy, link-sending capability, romance features without guardrails. And any product that says "we take security seriously" without explaining how.
The green flags: on-device PII detection, structural romance guardrails, financial guardrails that block links and transactions, a family dashboard with privacy boundaries, and clear documentation of the safety architecture.