This is a companion piece to “what we won’t add.” That post was about features. This one is about telemetry.
Yewmark has no Google Analytics, no Plausible, no Fathom, no PostHog, no Mixpanel, no Segment, no Hotjar, no LogRocket, no FullStory, no Heap. No session replay. No tracking pixels. No marketing attribution scripts. No third-party JavaScript that “helps us understand how users interact with the product.”
The thing you would normally see in a privacy policy — the long list of vendors with sub-processor agreements — is short, and intentionally so. Stripe handles billing. Brevo sends transactional email. Cloudflare sits in front of the site. That’s the list.
What we do collect
It’s easier to be specific than vague. Per AI call, the database records: the kind of call (chat, digest, title suggestion), the model that served it, which provider routed it, a token count, and a timestamp. No message text. No reply text. Nothing that would reconstruct what you said to the model or what came back.
This is enough to run the service. We can see when a provider is failing and route around it; when usage is approaching the daily Groq cap; when latency on a specific model degrades. We cannot see what anyone wrote. That gap is the point.
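To make the shape of that record concrete, here is a minimal sketch in Python; the table name, columns, and the Groq cap check are illustrative stand-ins, not Yewmark’s actual schema:

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("telemetry.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS ai_calls (
        kind     TEXT NOT NULL,     -- 'chat', 'digest', or 'title'
        model    TEXT NOT NULL,     -- which model served the call
        provider TEXT NOT NULL,     -- which provider routed it
        tokens   INTEGER NOT NULL,  -- a count, never the text itself
        at       TEXT NOT NULL      -- ISO-8601 timestamp, UTC
    )
""")

def record_call(kind: str, model: str, provider: str, tokens: int) -> None:
    # Note what there is no column for: prompt, reply, user id.
    db.execute(
        "INSERT INTO ai_calls VALUES (?, ?, ?, ?, ?)",
        (kind, model, provider, tokens, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

def groq_tokens_today() -> int:
    # Enough to see a daily cap approaching; not enough to see what anyone wrote.
    (total,) = db.execute(
        "SELECT COALESCE(SUM(tokens), 0) FROM ai_calls "
        "WHERE provider = 'groq' AND at >= date('now')"
    ).fetchone()
    return total
```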
Server logs are short-lived and contain HTTP request lines — method, path, status code — not bodies. Errors flow into a self-hosted error tracker that runs on the same VPS as everything else; nothing leaves the box. Sentry’s send_default_pii flag is left at its default: off.
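Concretely, and assuming the Python SDK here for illustration (the DSN is a placeholder), that configuration is one line in the init:

```python
import sentry_sdk

# send_default_pii defaults to off in the Python SDK; setting it explicitly
# makes the intent survive SDK upgrades and code review.
sentry_sdk.init(
    dsn="https://<key>@errors.internal.example/1",  # self-hosted, same VPS
    send_default_pii=False,
)
```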
The one thing we count
Today we built a small page-view counter. When a browser opens a page on yewmark.com, the frontend fires a single POST to our backend with the path that was opened — /, /pricing, /blog/four-voices. The backend increments a row in a table keyed by date and path. That’s it.
The row stores: a date, a path, and a count. There is no user identifier. No IP address. No cookie. No browser fingerprint. No referrer. No session ID. The same browser opening the same page twice produces a row that says “2” instead of “1.” The counter cannot tell those two visits apart, by design.
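A minimal sketch of the backend half, assuming a SQLite-backed table; the table and function names are ours, not the real schema. The primary key is doing all the work:

```python
import sqlite3
from datetime import date

db = sqlite3.connect("counts.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        day   TEXT NOT NULL,
        path  TEXT NOT NULL,
        count INTEGER NOT NULL,
        PRIMARY KEY (day, path)   -- one row per day per path, nothing else
    )
""")

def record_view(path: str) -> None:
    # Handles the frontend's POST. Note what never arrives here:
    # no IP, no cookie, no referrer, no user agent, no session id.
    db.execute(
        "INSERT INTO page_views (day, path, count) VALUES (?, ?, 1) "
        "ON CONFLICT (day, path) DO UPDATE SET count = count + 1",
        (date.today().isoformat(), path),
    )
    db.commit()
```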
The point is to know whether anyone is actually reading the site, and where they’re landing. We were tired of having a marketing page and no idea whether it was getting any traffic at all. So we built the smallest thing that answered the question, and stopped there. The counts are private — visible to the operator at /admin/funnel, never published, never shared with anyone.
What we deliberately don’t do
It would be easy to build any of these. We’ve looked at them and chosen not to.
- UTM-based attribution tied to identity. The signup form does have a hidden field that captures a one-time signup source if the user arrived from a link with ?ref= or ?utm_source=, used to figure out where signups come from (sketched after this list). It’s recorded at signup, never updated, never refined by ongoing tracking.
- Session replay. Tools like Hotjar and FullStory record actual user sessions — cursor movement, scroll position, keystrokes — so the operator can “watch” what users do. We don’t use any of them. They’re especially inappropriate for journaling apps. We’d rather not know.
- A/B testing infrastructure. No experiments running on signed-in users. No bucketed feature flags. If we want to know whether something works, we ask people, or we ship it and watch retention — not split users into invisible buckets without their consent.
- Email open and click tracking. Brevo can add tracking pixels and link wrappers to outgoing emails. We turned those off. Our emails are plain text. If you open one, no one knows. If you click the verify link, that’s a normal HTTP request — not an event in a marketing pipeline. (A plain-text send is sketched after this list.)
- Marketing cookies of any kind. The site sets one cookie related to authentication; that’s the whole cookie list. No consent banner because there’s nothing to consent to.
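The signup-source capture above can be a single pure function over the landing URL. A hedged sketch, with names that are illustrative:

```python
from urllib.parse import parse_qs, urlparse

def signup_source(landing_url: str) -> str | None:
    # Read ?ref= or ?utm_source= once, at signup. The value is stored then,
    # never updated, and no ongoing tracking refines it.
    params = parse_qs(urlparse(landing_url).query)
    for key in ("ref", "utm_source"):
        if key in params and params[key]:
            return params[key][0]
    return None
```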
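And for the plain-text email point: assuming Brevo’s v3 transactional endpoint, supplying only textContent means there is no HTML for a tracking pixel to hide in; open and click tracking are account-level toggles we keep off. The addresses here are placeholders.

```python
import requests

def send_verify_email(to: str, link: str, api_key: str) -> None:
    # Plain text only: no htmlContent, so no open pixel, no wrapped links.
    requests.post(
        "https://api.brevo.com/v3/smtp/email",
        headers={"api-key": api_key},
        json={
            "sender": {"email": "hello@yewmark.com"},  # placeholder sender
            "to": [{"email": to}],
            "subject": "Verify your email",
            "textContent": f"Click to verify: {link}\n",
        },
        timeout=10,
    ).raise_for_status()
```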
Why this is hard to fake
The hard part isn’t saying any of this. The hard part is that once you ship the third-party JS, it’s difficult to un-ship without breaking other things. So you don’t. And then a year later your privacy page is a long list of vendors, and the people you onboarded under the original promise are using a product that no longer matches it.
The simplest defense is to start from no, and only add a measurement tool when you can answer the question “what specifically would I do differently if I had this data?” in concrete terms. Most of the time the honest answer is “I’d feel better about whether anyone is using it.” That’s not a product change. That’s comfort, and comfort doesn’t justify telemetry.
The exceptions
If something breaks — a failed payment, a stuck deploy, an error rate spike — the error tracker logs enough to find it. If a user emails support with a problem, we look at their account by email and see what we need. There is no inviolable principle here: there are users, there are bills to pay, there is a service to keep running, and some minimum viable observability has to exist.
What there is is a posture: every additional thing we record should justify itself against the cost of recording it. The cost isn’t storage; it’s trust. Once we’ve started recording something, we have to be the kind of people who can be trusted with it — in our security, in our retention policies, in how we’d behave if someone asked us to hand it over.
It’s easier to be that kind of company when there isn’t much to ask for.