Companion Chatbot Safety & Disclosure Policy
Last updated: December 11, 2025
Applies to: The Text With apps and web experiences operated by Catloaf Software (the "Service").
1) What our product is (and isn’t)
- Text With provides AI companion chatbots designed for conversation and entertainment.
- Not human: You are interacting with AI, not a person. We label this clearly at sign-in and in chat.
- Not clinical care: We do not provide medical, counseling, or crisis services.
2) Minors (under 18)
- When we know a user is a minor (under 18), we apply additional protections (defined below).
- We determine this using a binary age choice ("I am 18 or older" / "I am under 18") or trusted platform signals (see section 6). We do not request or store a date of birth or exact age.
3) Disclosures shown to all users
- AI disclosure: A clear, conspicuous notice that you are engaging with AI (not a human), shown before or at the start of chat and within the chat UI.
- Suitability notice: A notice, shown in-app and on our site, that companion chatbots may not be suitable for some minors.
4) Crisis & self-harm safety protocol
We do not allow the chatbot to engage users unless the following protocol is in place:
- Detection & handling. Automated safeguards and policy rules to avoid producing self-harm/suicide content. When language suggests suicidal ideation or self-harm, normal replies are paused and a crisis support message is shown.
- Crisis referral. We provide crisis resources (e.g., the U.S. 988 Suicide & Crisis Lifeline and the Crisis Text Line; local resources where available).
- Evidence-based approach. Detection methods are informed by recognized, evidence-based indicators. We do not store health diagnoses.
- Transparency. This page documents our protocol; we update it as methods improve.
We also maintain internal, non-identifying counters of how often crisis referrals are shown, to support transparency and required reporting (see section 8).
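For illustration only, the sketch below shows one way the flow described in this section could be structured: a message is screened for self-harm indicators, and when a risk is flagged the normal reply is paused, a crisis-resource message is shown instead, and a non-identifying counter is incremented. All names, the keyword stand-in for detection, and the resource text are placeholders for the example, not our production safeguards.

```typescript
// Illustrative sketch only. Names, the keyword stand-in, and the resource text
// are placeholders; production detection relies on evidence-based indicators,
// not a keyword list.

// Non-identifying aggregate counter used only for transparency reporting (section 8).
let crisisReferralCount = 0;

const CRISIS_MESSAGE =
  "It sounds like you may be going through something difficult. " +
  "In the U.S. you can call or text 988 (Suicide & Crisis Lifeline), " +
  "or text HOME to 741741 to reach the Crisis Text Line.";

// Stand-in for the automated safeguard: flags language suggesting suicidal
// ideation or self-harm.
function suggestsSelfHarm(message: string): boolean {
  const indicators = [/suicid/i, /kill (myself|me)\b/i, /self[- ]harm/i, /end my life/i];
  return indicators.some((pattern) => pattern.test(message));
}

// Stand-in for the normal companion-chat reply path.
async function generateCompanionReply(message: string): Promise<string> {
  return `Companion reply to: ${message}`;
}

export async function handleUserMessage(message: string): Promise<string> {
  if (suggestsSelfHarm(message)) {
    crisisReferralCount += 1; // count the event only, never the content or the user
    return CRISIS_MESSAGE;    // pause the normal reply; show crisis support instead
  }
  return generateCompanionReply(message);
}

// Aggregate read-out used for the reporting described in section 8.
export function crisisReferralsShown(): number {
  return crisisReferralCount;
}
```

Counting only the event, rather than the message or the user, is what keeps the counter non-identifying.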
5) Additional protections for known minors
If we know you are under 18:
- Regular reminders during ongoing use. At least once every 3 hours of continuing interaction, we display a reminder to take a break and a notice that you are chatting with AI, not a human.
- Sexual content restrictions. We implement measures to prevent generating visual sexually explicit material or directly instructing a minor to engage in sexually explicit conduct.
These protections are on by default for known minors and cannot be globally disabled.
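As a rough illustration of the timing behind the 3-hour reminder, the sketch below checks a session's last-activity and last-reminder timestamps. The 3-hour interval comes from this policy; the 30-minute session-continuity gap and all field and function names are assumptions made for the example, not actual product values.

```typescript
// Illustrative sketch only. The 3-hour interval comes from this policy; the
// 30-minute session gap and all names are assumptions for the example.

const REMINDER_INTERVAL_MS = 3 * 60 * 60 * 1000; // "at least every 3 hours"
const SESSION_GAP_MS = 30 * 60 * 1000;           // assumed gap after which a session ends

interface SessionTiming {
  minorProtections: boolean; // the binary flag described in section 6
  lastActivityAt: number;    // epoch ms of the most recent chat activity
  lastReminderAt: number;    // epoch ms of the most recent break/AI reminder
}

// True when a known minor is still in an ongoing session and at least 3 hours
// have passed since the previous reminder (or since the session began).
export function shouldShowMinorReminder(s: SessionTiming, now: number = Date.now()): boolean {
  if (!s.minorProtections) return false;
  const sessionOngoing = now - s.lastActivityAt <= SESSION_GAP_MS;
  const reminderDue = now - s.lastReminderAt >= REMINDER_INTERVAL_MS;
  return sessionOngoing && reminderDue;
}
```

Only timestamps are consulted for this check; no message content is needed, consistent with section 7.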
6) How we determine minor status (privacy-preserving)
We minimize data collection and use only a binary minor flag:
- In-app choice (all platforms): You choose "18 or older" or "under 18." We store only the resulting minor-protections on/off flag.
- Android (Google Play builds): Where available, we may use Google Play's Age Signals API to receive an age-related signal solely to apply legally required protections. We do not use this signal for ads, profiling, or analytics. If unavailable, we fall back to the in-app choice.
- iOS (iOS 26+): Where available, we may use Apple's Declared Age Range capability to receive a user/parent-approved age range (not a birthdate). We map that range to our binary minor flag. On earlier iOS versions or if sharing is declined, we fall back to the in-app choice.
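As an illustration of how these sources collapse into the single stored value, the sketch below maps an age signal (a binary in-app choice or a platform-provided age-range lower bound) to the minor-protections flag. The AgeSignal shape and all names are hypothetical; the actual Google Play and Apple SDK calls are not reproduced here.

```typescript
// Illustrative sketch only. The AgeSignal shape and names are hypothetical;
// actual platform SDK calls are not shown, only the mapping of their results
// to the single stored on/off flag.

type AgeSignal =
  | { source: "in_app_choice"; isUnder18: boolean }   // binary self-declaration
  | { source: "platform_range"; lowerBound: number }  // lower bound of a declared age range
  | { source: "unavailable" };                        // no platform signal available

// Maps whatever signal is available to the only value we persist: whether
// minor protections apply. Returns null when the caller should fall back to
// the in-app choice.
export function minorProtectionsFromSignal(signal: AgeSignal): boolean | null {
  switch (signal.source) {
    case "in_app_choice":
      return signal.isUnder18;
    case "platform_range":
      // Any range that could include someone under 18 turns protections on.
      return signal.lowerBound < 18;
    case "unavailable":
    default:
      return null;
  }
}
```

Only the resulting boolean is stored; the range or signal itself is discarded, as described in section 7.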
7) Data we keep (and don’t)
- We do not require or store your date of birth or exact age.
- We store only:
  - A binary indicator of whether minor protections apply (on/off), and
  - Non-identifying counts of crisis-referral events for transparency/reporting.
- For the "every 3 hours" reminder, we rely on session/activity timing (e.g., a last chat-activity timestamp) rather than storing per-message content.
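To make the data-minimization point concrete, the sketch below shows the complete shape of what this section describes storing. The field names are illustrative, not our actual schema.

```typescript
// Illustrative shape of the data described in this section; field names are
// not our actual schema. Note the absence of a birthdate, an exact age, and
// any per-message content.

interface StoredSafetyState {
  minorProtections: boolean; // binary on/off flag only (section 6)
  lastActivityAt: number;    // epoch ms, used for the 3-hour reminder timing
}

interface AggregateSafetyCounters {
  crisisReferralsShown: number; // non-identifying count for annual reporting (section 8)
}
```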
8) Annual reporting (public transparency)
Beginning July 1, 2027, we will submit annual information to the California Office of Suicide Prevention, including:
- The number of crisis referrals issued in the prior year;
- Our protocols for detecting, removing, and responding to suicidal ideation; and
- Our methods for measuring suicidal ideation using evidence-based approaches.
Submitted information will exclude personal identifiers.
9) How to report concerns
If you believe our chatbot produced harmful content or you want to report a safety concern:
- Use the in-app "Report a concern" link (available in every conversation), or
- Email: support@textwith.me
We review safety reports, update our guardrails when warranted, and reflect updates on this page.
10) Changes to this policy
We may update this policy as laws and best practices evolve. Material changes will be noted at the top of this page with a new Last updated date.