Yewmark.

May 15, 2026

AI journaling and privacy: what to look for.

AI journaling is genuinely new territory. Five years ago the privacy question for a journaling app was “what does the server store?” and the answer was “your entries.” That’s now the floor. The new questions are about what happens when an AI model is in the loop.

Here are the questions worth asking, whatever app you’re considering.

1. Does the app send your entries to a model provider?

If the app uses AI to summarize, ask questions, or generate insight, your entries are being sent somewhere — OpenAI, Anthropic, Google, or a smaller provider like Groq or Cerebras. The question is which provider and under what terms.

This isn’t inherently bad. It is inherently a third party. If the app’s privacy page doesn’t name the providers, that’s a flag — not because the providers are sinister, but because you can’t evaluate the chain if it’s opaque.

(Yewmark uses Groq, Cerebras, and OpenAI — named in the privacy page. Per-provider data-use terms are all “no training on API inputs”, which is the relevant guarantee for journaling content.)

2. Does the provider train on inputs?

This is the load-bearing question. “Will what I wrote end up in the next version of the model?”

The major API providers (OpenAI API, Anthropic API, Groq API, Cerebras Inference) all answer “no” for paid API traffic. The consumer products (ChatGPT.com, Claude.ai) often answer “yes by default, opt out in settings” for free tiers. The difference matters: API traffic goes through different legal terms than consumer signups.

For a journaling app, you want to be in the API-traffic category, not the consumer-product category. Ask: “is my data subject to model training?” If the answer isn’t a clear no, treat the app as “might train on me eventually.”

3. Does the app train its own models on you?

Separate from the provider, the app can also have a training pipeline. Some companies pitch “we’ll fine-tune a model on your journal so it knows you better.” This requires retaining your text in a form that’s usable for training — which is a much higher-touch use of your data than just running a query and forgetting.

Read the privacy page for the phrase “to improve our services.” That’s the standard catch-all that can cover training on your data. Sometimes it’s benign telemetry; sometimes it’s “your journal trains our next model.” Ask the company specifically.

(Yewmark doesn’t train models. Period. No internal training pipeline, no anonymized aggregation, no “model improvement” carve-out. Your entries are processed for the call you made and not retained for any other purpose.)

4. Is the data encrypted, and where?

Three kinds of encryption matter:

  • In transit. Every app worth using encrypts traffic with TLS (the S in HTTPS) now. This is the floor.
  • At rest. The server’s disk is encrypted, so a stolen hard drive doesn’t expose the data. Most apps do this. Some don’t mention it because they don’t.
  • End-to-end. Even the operator can’t read your entries — the encryption keys live on your device. This is the strongest guarantee. It’s also incompatible with most AI features. If the server can’t read your entries, neither can the model. The handful of E2EE journaling apps that exist either don’t do AI, or do it client-side at significant cost.

Yewmark does TLS in transit and at-rest encryption, but not E2EE. This is a deliberate tradeoff: server-side AI features require a server that can read the entries it’s summarizing. If E2EE is non-negotiable for you, an app like Day One Premium is the right pick; we’re honest about that.

5. Where does the app run? Who can subpoena it?

Less talked about, but: the legal jurisdiction of the company and its server determines what kind of legal request can extract your data. A US-based company can be subpoenaed by US authorities; an EU-based one is subject to EU data law (which is generally more protective).

Yewmark is a one-person operation, the server is in Germany, the operator is in the EU. This is a small story but a true one. We’d rather you know it.

6. Can you export and delete?

The strongest privacy guarantee is the ability to leave. An app that won’t let you export your entries cleanly, or won’t honor a deletion request, is asking you to trust them indefinitely — which is a worse position than trusting them while having an exit.

Yewmark: full JSON or Markdown export from Settings, account deletion from Settings with a 7-day grace window (sign back in within the window to restore; after that, hard-purged). Both are first-class features, not buried in support tickets.

The honest summary

If you’re evaluating AI journaling apps, the privacy answers you want, in plain terms:

  1. “We use these specific providers, by name.”
  2. “None of those providers train on our traffic.”
  3. “We don’t train on you ourselves either.”
  4. “TLS in transit and at-rest encryption (E2EE doesn’t work with our AI features — if you need it, here’s an honest alternative).”
  5. “Here’s where we’re based and what law applies.”
  6. “Here’s the export button. Here’s the delete button. They work.”

If an app can’t answer all six clearly, treat that as the data point it is. Yewmark’s privacy page answers all six.