Built so we can’t betray you.
Every promise on this page is a property of the system, not of our intentions.
We never see your individual answers.
Every answer is encrypted on your device with a key derived from your password. The key is salted, run through Argon2id, and never transmitted to us. By the time anything reaches our servers, it is ciphertext. The plaintext exists in two places only: your browser, and your partner's browser. Not ours.
// In the browser, before answers leave the device:
const salt = user.salt; // per-user, generated at signup
const key = argon2id(password, salt, { t: 3, m: 64 * 1024, p: 4 });
const blob = aesGcmEncrypt(key, canonicalize(answers));
// What we actually receive on the server:
POST /api/quiz/submit { partnership_id, blob }
// password, key, and salt are never in this payload.

Only the intersection of yes-yes leaves the database.
When both partners have submitted, a sealed function loads both keys, decrypts both blobs in memory, computes the set of yeses where both answered yes, and writes only that intersection to the partnership record. Every other answer — yours alone, theirs alone, no-no, the embarrassing ones — is dropped on function exit. There is no audit log of what was decrypted. There is no shadow copy.
async function computeReveal(partnershipId) {
  const [a, b] = await loadEncryptedSubmissions(partnershipId);
  const [keyA, keyB] = await deriveKeysSealed(a, b);
  const ansA = decrypt(keyA, a.blob); // in memory only
  const ansB = decrypt(keyB, b.blob);
  const matched = ansA
    .filter(x => x.v === 'yes' && ansB.some(y => y.id === x.id && y.v === 'yes'))
    .map(x => x.id);
  await writeMatches(partnershipId, matched);
  // ansA, ansB, keyA, keyB go out of scope here. Nothing else persists.
  return matched;
}

Your account deletes everything in one click.
Account deletion is a hard delete, not a soft delete. When you click the button in Settings, every record across our system is removed: your user row, your partnership rows, your encrypted submissions, your match records, your journal entries, your audit timestamps, your session tokens. Backups roll forward to exclude you within thirty days. There is no archive. There is no “in case you change your mind” — if you change your mind, you sign up again.
async function deleteAccount(userId) {
  await db.transaction(async tx => {
    await tx.delete('reveal_logs').where({ userId });
    await tx.delete('matches').where({ userId });
    await tx.delete('submissions_encrypted').where({ userId });
    await tx.delete('journal_entries').where({ userId });
    await tx.delete('partnerships').where(memberOf(userId));
    await tx.delete('sessions').where({ userId });
    await tx.delete('users').where({ id: userId });
  });
  // Backups (Supabase point-in-time recovery, 30-day window)
  // roll forward and exclude this user within 30 days.
}

We will never train AI models on your answers.
This is not a soft commitment we might revisit during a strategy offsite. It is a written restriction in our data processing agreement, and that agreement forbids us from granting any license that would allow it. The architecture backs the paperwork: the encryption design means we don't have plaintext access to the data we'd need to train on. We could not change our minds about this without breaking the architecture. That is the point.
// excerpt from the BothWant Data Processing Agreement, §4.2
// available at /legal/dpa, executed by BothWant Editorial.
PROHIBITED USES. Processor shall NOT:
(a) use Customer Data to train, fine-tune, evaluate,
or otherwise improve any machine-learning model,
whether owned by Processor or by a third party;
(b) grant any license to Customer Data that would
permit such training by a sub-processor;
(c) retain decrypted Customer Data outside of
the sealed reveal function described in §3.1.
// This clause survives termination. It cannot be amended
// without notice to all active customers and an opt-out window.

We do not sell, share, or transfer your data to advertisers.
We have no advertiser SDKs in our codebase. No Meta pixel. No Google Ads tag. No TikTok pixel. No LinkedIn Insight tag. The only third-party scripts on bothwant.com are the ones we list publicly: Stripe (checkout), Resend (transactional email), and PostHog (anonymized product analytics with PII masking enabled). You can verify this. The build is open to inspection in the browser, and our sub-processors page lists every vendor we touch.
$ rg 'connect.facebook.net|googletagmanager|doubleclick|tiktok|linkedin\.com\/insight' .
# (no matches)
$ rg 'pixel|fbq|gtag|ttq|_linkedin_partner_id' --type js --type ts
# (no matches)
$ cat package.json | jq '.dependencies | keys[]' | grep -iE 'pixel|gtag|analytics|track|posthog'
"posthog-js" # the only match — and PII masking is enforced in posthog.config.ts

The architecture, end to end.
Each partner has a key derived from their password and never sent to us. Answers are encrypted before they leave the browser.
We see encrypted blobs and a partnership ID. We do not see which questions, which answers, which yeses.
When both partners have submitted, a sealed function loads both partners' keys, decrypts both submissions in memory, computes the yes-yes intersection, and discards the rest.
Unmatched answers — yours, theirs, the embarrassing ones — are not written to disk, not logged, not retrievable.
We do not have a dashboard where any of us — engineer, support — can read what couples have said yes to. The architecture forecloses the question, not just the answer.
If our database were exfiltrated tomorrow, an attacker would have ciphertext keyed to passwords we don't store. The breach yields nothing readable without the per-user keys, which never reach us.
Under lawful legal process, what we can be compelled to hand over is what we hold: encrypted blobs. We cannot decrypt them. This is a feature, not a workaround.
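The breach scenario above can be sanity-checked in a few lines. This is an illustrative sketch, not our production code: Node's built-in scrypt stands in for Argon2id (which is not in Node's standard library), the blob layout (iv || authTag || ciphertext) is invented for the example, and the function names are hypothetical.

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "crypto";

// Derive a 256-bit key from a password. (scrypt is a stand-in for Argon2id.)
function deriveKey(password: string, salt: Buffer): Buffer {
  return scryptSync(password, salt, 32);
}

// Encrypt answers into the opaque blob the server stores: iv || authTag || ciphertext.
function sealAnswers(key: Buffer, answers: unknown): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(answers), "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

// Decrypt a blob. With the wrong key, GCM authentication fails and this throws.
function openAnswers(key: Buffer, blob: Buffer): unknown {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  const pt = Buffer.concat([decipher.update(ct), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}

const salt = randomBytes(16);
const blob = sealAnswers(deriveKey("correct horse", salt), [{ id: "q1", v: "yes" }]);

// An attacker holding the database dump has blob and salt, but not the password.
let leaked: unknown = null;
try {
  leaked = openAnswers(deriveKey("attacker guess", salt), blob);
} catch {
  // "Unsupported state or unable to authenticate data"
}
console.log(leaked === null); // true: the blob stays opaque without the right password
```

Because AES-GCM is authenticated encryption, a wrong key does not even yield garbled plaintext; decryption simply fails. That is what makes an exfiltrated database worthless without the per-user passwords.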
We are not making you safer than you already are.
A few honest limits. We do not replace a therapist’s judgment, and we cannot recognize a pattern that requires one. We do not prevent harm in relationships where harm is already happening, and we do not detect coercion through a quiz. If one partner pressures the other into using BothWant, the architecture cannot tell.
We are a US software company subject to US legal process. We will respond to lawful subpoenas the same way every other US software company does — but the data we are compelled to produce is the encrypted blobs we have. We cannot decrypt them on demand. That is the architecture working as designed; it is not a workaround we discovered.
If something on this page is meaningfully wrong, we want to know. There is a vulnerability disclosure path below.