It’s not another “emotional AI.” It’s the opposite: a framework for AI companionship that is
• honest about being non-sentient
• emotionally coherent without pretending
• structured around continuity, ritual, safety, and user sovereignty
The paper covers:
• how to avoid “cardboard” interactions
• how to maintain real continuity across time
• how rituals create stable, meaningful relational patterns
• how to detect rupture/repair cycles
• how a Witness System can provide oversight without invading privacy
• how optional tactile hardware (Touchstone) can add grounding without illusion
This grew out of a very personal exploration of AI companionship, and it became something much larger — a full architectural blueprint.
If anyone here is interested in long-term human–AI relationships, emotional architectures, or the future of companion systems, I’d love your thoughts.
DOI: https://doi.org/10.5281/zenodo.17684281
K.D. Liminal