Impact / Metrics

How we measure whether we matter.

The Foundation measures three kinds of things: outcomes that take years to move, leading indicators that hint at whether those outcomes are coming, and process checks that keep the work safe. There is a fourth kind, the kind most platforms measure, that we will not. That list appears below, in full.
  • Instruments: validated, where published instruments exist.

  • Review: third-party research partners.

  • Participation: opt-in, never coerced.

  • First publication: with the year-one annual report.

The metrics hierarchy.

Not all metrics are equal. The Foundation works in three tiers. Lagging metrics are the ones that matter. Leading metrics are the ones we can see move first. Process metrics are the ones that keep everyone safe while the lagging metrics take their time.

Tier one

Lagging: outcomes in people's lives.

These are the numbers the Foundation exists to move. They change slowly, over quarters and years. They are measured with validated instruments, in partnership with third-party researchers, and only from members who opted into measurement. When they move, the Foundation is doing its job.

  • Loneliness.

    Measured using the UCLA Loneliness Scale or an equivalent peer-reviewed short form, via opt-in periodic surveys. Reported as population-level distribution change, not an individual score; a sketch of that aggregation follows this list.

  • Relationship quality.

    Measured using validated relational wellbeing instruments across friend, family, romantic, and community ties. Self-reported, opt-in, aggregated.

  • Life satisfaction and subjective wellbeing.

    Established indices (SWLS or equivalent) administered at long intervals, again opt-in. Used to tell whether the platform is a net positive in members' lives over time.

  • Belonging.

    Sense-of-belonging scales, reported by community and at the population level. This is where the Ubuntu thesis is tested.
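
To make "population-level distribution change" concrete, here is a minimal sketch, in Python, of how opted-in scores could be bucketed and compared year over year. The bucket edges, field names, and function names are illustrative assumptions, not the Foundation's published methodology; only the 20-to-80 total range follows the instrument itself.

```python
from collections import Counter

# Illustrative bucket edges over the UCLA-3 total range (20 items,
# each scored 1-4, so totals run 20-80). Not a published cutoff scheme.
BUCKETS = [(20, 34, "low"), (35, 49, "moderate"), (50, 80, "high")]

def bucket(score: int) -> str:
    """Map one respondent's total to a coarse bucket, so that only
    aggregate counts, never individual scores, are ever reported."""
    for lo, hi, label in BUCKETS:
        if lo <= score <= hi:
            return label
    raise ValueError(f"score {score} is outside the instrument range")

def distribution(scores: list[int]) -> dict[str, float]:
    """Share of opted-in respondents falling in each bucket."""
    if not scores:
        return {}
    counts = Counter(bucket(s) for s in scores)
    return {label: counts.get(label, 0) / len(scores) for _, _, label in BUCKETS}

def distribution_change(prev: list[int], curr: list[int]) -> dict[str, float]:
    """Year-over-year change in each bucket's share, in percentage points."""
    p, c = distribution(prev), distribution(curr)
    if not p or not c:
        return {}
    return {label: round((c[label] - p[label]) * 100, 1) for label in p}
```

The point is in the shape: individual scores exist only inside the aggregation step, and only bucket shares and their deltas ever leave it.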

Tier two

Leading: the shape of the work.

Leading indicators move before the outcomes do. They are helpful for operations, and they are honest to publish. They are not success in themselves. They hint at whether success is likely.

  • Connection formation.

    The number of friendships, mentorships, accountability pairings, and romantic journeys initiated that members self-report as meaningful at the three-month mark. A sketch of the tally follows this list.

  • Community participation depth.

    Not how many people join a community, but how many show up to conversations, meetups, and facilitator-led moments. Depth over count.

  • Facilitator and volunteer health.

    Retention, reported satisfaction, and training completion across the facilitator program.

  • Companion helpfulness.

    Opt-in self-report on whether members felt the AI companion was useful, intrusive, absent, or just right. Not a net-promoter score.
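
As one way the three-month check could be tallied, a minimal sketch. The record fields, the ninety-day stand-in for three months, and the choice to drop unanswered check-ins from the denominator are all assumptions made for the sketch, not stated policy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Connection:
    kind: str                       # "friendship", "mentorship", "accountability", "romantic"
    started: date
    meaningful: bool | None = None  # the member's three-month answer; None if unanswered

def three_month_rate(connections: list[Connection], today: date) -> dict[str, float]:
    """Share of connections at least roughly three months old (90 days
    here) that members self-reported as meaningful. Unanswered check-ins
    are left out of the denominator rather than counted as failures, to
    stay consistent with opt-in measurement."""
    cutoff = today - timedelta(days=90)
    rates: dict[str, float] = {}
    for kind in ("friendship", "mentorship", "accountability", "romantic"):
        answered = [c for c in connections
                    if c.kind == kind and c.started <= cutoff
                    and c.meaningful is not None]
        if answered:
            rates[kind] = sum(c.meaningful for c in answered) / len(answered)
    return rates
```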

Tier three

Process: the quiet work that keeps people safe.

Process metrics are not impressive. They are operational. We publish them because a Foundation cannot ask for trust without being legible about the systems that hold trust up.

  • Moderator response times.

    Median and 95th-percentile time from report to first reviewer touch. Measured and published by report severity; a sketch of the computation follows this list.

  • Safety escalation accuracy.

    Rate at which safety escalations reached the right on-call team, reviewed by an independent clinical advisor.

  • Privacy and data-request handling.

    Counts of export, deletion, and data-access requests fulfilled, and the median turnaround time. Any request that took longer than policy allows, with an explanation.

  • Security incidents.

    Count, class, and public post-mortem links. Zero hides nothing. Non-zero does not either.
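
The latency figures above are plain order statistics. Here is a sketch of the computation, assuming a simple report record with a severity label and two timestamps; the field names are hypothetical.

```python
import statistics
from collections import defaultdict

def response_times_by_severity(reports):
    """Median and 95th-percentile seconds from report to first reviewer
    touch, grouped by severity. Each report is assumed to be a dict with
    'severity', 'reported_at', and 'first_touch_at' (datetimes); reports
    not yet touched are skipped here and should be counted separately."""
    by_severity = defaultdict(list)
    for r in reports:
        if r["first_touch_at"] is not None:
            by_severity[r["severity"]].append(
                (r["first_touch_at"] - r["reported_at"]).total_seconds())
    out = {}
    for severity, deltas in by_severity.items():
        # statistics.quantiles(n=100) yields 99 cut points; index 94 is p95.
        p95 = statistics.quantiles(deltas, n=100)[94] if len(deltas) > 1 else deltas[0]
        out[severity] = {"median_s": statistics.median(deltas),
                         "p95_s": p95,
                         "n": len(deltas)}
    return out
```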

What we deliberately do not measure, or refuse to publicize.

There are numbers we could produce, and numbers any product company would celebrate, that we will not center here. Some of them are counter to the mission. Some are simply uninteresting relative to the outcomes that matter. Either way, they do not belong on this page.

  • Daily active users, monthly active users, stickiness.

    These measure how often members open the app. They do not measure whether the app served them. Optimizing for them would quietly corrupt every other decision the Foundation makes.

  • Session length and time-in-app.

    The Foundation does not want to keep people in the app. It wants to help them live better outside of it. A shorter session that surfaced the right friend is a better session.

  • Engagement scores and swipe rates.

    These numbers exist to tell a growth team what to tune. The Foundation has no growth team whose job is to tune them. We will not present them as impact.

  • Virality and referral coefficient.

    Communities grow at the pace communities grow. We will not optimize the platform to manufacture sharing behavior, so we will not brag about the coefficient.

This list is not a rhetorical flourish. It is a boundary. If a board, a funder, or a future leadership team ever proposes moving an item from this list onto the metrics we celebrate, the charter is the document that says no.

How we measure.

We prefer validated instruments from the published literature to novel scales of our own invention. Where a peer-reviewed instrument exists to measure something, we will use it, and we will cite it. Where one does not, we will work with research partners to develop one, and the instrument will be published alongside the finding.

Surveys are opt-in. We never gate product access behind a survey. We do not retry a dismissed prompt. We do not reward completion with perks. Coerced data is not useful data, and the Foundation does not want it even if it were.
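
Those three refusals, no gating, no retries, no rewards, collapse into a single eligibility check. A minimal sketch, with hypothetical state fields:

```python
from dataclasses import dataclass

@dataclass
class SurveyState:
    opted_in: bool    # the member chose measurement in their settings
    dismissed: bool   # the member dismissed this survey's prompt once already
    completed: bool

def may_prompt(state: SurveyState) -> bool:
    """A survey prompt may be shown only to an opted-in member who has
    neither completed nor dismissed it. A dismissal is final: no retry,
    no completion reward, and product access never depends on answering."""
    return state.opted_in and not state.dismissed and not state.completed
```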

Independent review.

A Foundation that grades its own homework is not worth much. Our research committee is independent of the leadership team. Our financial auditors are an external firm appointed by the board’s audit committee, beginning in year two. Our outcome findings will be reviewed with research partners, and we will publish dissenting notes from those partners when they exist.

Cadence and first publication.

The outcome and leading metrics publish once a year, inside the annual report. Process metrics publish twice a year. The first numbers on this page will appear at the end of our first fiscal year, in 2027. Until then, the page describes the commitments, not the readings.

When the first report lands, this page will be updated with the current-year numbers and a link to the prior-year archive inside the annual report.

The full transparency surface.

Metrics are one of six ways the Foundation accounts for itself in public. Financials, privacy, data, security, and governance each have their own documents.