Guide · March 9, 2026 · 14 min read

App analytics for indie developers: what to track and what to ignore

Your analytics dashboard has 200 metrics. You have time to look at maybe five. The problem isn't getting data. The problem is knowing which numbers actually change your decisions and which ones just generate anxiety.

The indie analytics trap

Here's what typically happens. You launch your app, install Firebase or Mixpanel, and suddenly you have access to event streams, funnels, cohort tables, retention curves, and 47 different ways to slice your user base. You spend an hour poking around. Then another hour. By the end of the week you've learned a lot about your analytics tool and almost nothing about your users.

This is especially dangerous when you're alone. At a company, there's a product manager whose literal job is to stare at dashboards. You don't have that person. You're also the designer, the developer, the marketer, and customer support. Every minute spent in an analytics dashboard is a minute not spent building or talking to users.

So the question isn't "what can I track?" It's "what's the minimum I need to track to make better decisions than I'd make with no data at all?"

The five numbers that actually matter

I'm going to be opinionated here. If you're a solo developer with fewer than 10,000 users, these are the only metrics worth checking regularly. Everything else is either a vanity metric or something you can't act on yet.

1. Day-1 retention

Of the people who open your app today, what percentage open it again tomorrow? This single number tells you more about product-market fit than anything else you can measure.

If day-1 retention is below 20%, you have a product problem. People downloaded the app, looked at it, and decided it wasn't worth coming back to. No amount of marketing fixes that. No push notification strategy fixes that. The app itself needs work.

If it's between 20% and 40%, you have something. Not great, but people are finding enough value to return. Focus on understanding what brings them back and double down on it.

Above 40% on day 1 is strong for most categories. Some categories like messaging or habit trackers naturally run higher; casual games run lower. Context matters, but as a rough benchmark, 40% means you're building something people want.
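As a concrete sketch, day-1 retention is just two set operations. The `opens` list here is hypothetical, standing in for whatever (user, date) export your analytics tool gives you:

```python
from datetime import date

# Hypothetical export of (user_id, open_date) events from your analytics tool
opens = [
    ("u1", date(2026, 3, 1)), ("u1", date(2026, 3, 2)),
    ("u2", date(2026, 3, 1)),
    ("u3", date(2026, 3, 1)), ("u3", date(2026, 3, 2)),
]

install_day = date(2026, 3, 1)

# Who opened the app on install day, and who came back exactly one day later?
installed = {u for u, d in opens if d == install_day}
returned = {u for u, d in opens
            if (d - install_day).days == 1 and u in installed}

day1_retention = len(returned) / len(installed)
print(f"Day-1 retention: {day1_retention:.0%}")  # 2 of 3 users came back: 67%
```

The same two-set structure works for day-7 or day-30 by changing the day offset.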

2. Weekly active users (WAU) trend

Not the absolute number. The trend. Is it going up, down, or flat? That's it. You don't need a fancy dashboard for this. A spreadsheet with one number per week works fine.

WAU matters more than DAU for most indie apps because not every app needs daily usage. A subscription-priced expense tracker that people use three times a week is healthy. A meditation app used on weekends is fine. DAU can make these apps look like failures when they're not.

The trend line is what you care about. A flat line after launch is normal (you stop promoting, downloads slow). A declining line means you're losing users faster than you're gaining them, which is the most common way indie apps die. A growing line, even slowly, means word of mouth or organic discovery is working.
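A minimal version of that one-number-per-week spreadsheet, again assuming you can export (user, date) open events:

```python
from datetime import date
from collections import defaultdict

# Hypothetical (user_id, open_date) events spanning two weeks
events = [
    ("u1", date(2026, 3, 2)), ("u2", date(2026, 3, 3)),
    ("u1", date(2026, 3, 9)), ("u2", date(2026, 3, 10)), ("u3", date(2026, 3, 11)),
]

# Count distinct users per ISO week
weekly_users = defaultdict(set)
for user, d in events:
    year, week, _ = d.isocalendar()
    weekly_users[(year, week)].add(user)

wau = [(week, len(users)) for week, users in sorted(weekly_users.items())]
print(wau)  # one (week, count) row per week; the trend is what you read
```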

3. Conversion rate (free to paid)

If you have any form of monetization, whether it's a subscription, one-time purchase, or in-app purchase, you need to know what percentage of users pay you money.

For freemium apps, 2-5% conversion is typical. If you're below 2%, either your free tier gives away too much, your paid tier doesn't offer enough, or you're attracting the wrong users. Each of those is a different problem with a different fix. The pricing guide goes deeper on diagnosing which one you're dealing with.

Above 5% is good. Above 10% usually means your free tier is too restrictive, which sounds like a good problem but actually limits growth. People won't recommend an app they barely got to use.

4. Where users drop off

This isn't a single number. It's the one funnel you should build: from app open to the moment of value. What's the sequence of screens between "first launch" and "the user does the thing your app exists for"?

For a workout app, that might be: open app, skip onboarding, browse workouts, start a workout. For a budgeting app: open, connect a bank account, see transactions categorized. Whatever it is, track each step and see where people bail.

You'll almost always find that most drop-off happens at one specific point. Maybe 60% of users never get past the account creation screen. Maybe nobody finishes onboarding. Fixing that one choke point usually moves retention more than any feature you could build.
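A funnel is just pairwise division. A sketch with made-up step counts (the step names follow the event list later in this guide):

```python
# Hypothetical step counts from first launch to the moment of value
funnel = [
    ("app_opened", 1000),
    ("onboarding_started", 820),
    ("account_created", 410),
    ("core_action_completed", 350),
]

# Drop-off between each consecutive pair of steps
drops = []
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    drops.append((f"{prev_step} -> {step}", 1 - n / prev_n))

worst_transition, worst_drop = max(drops, key=lambda d: d[1])
print(worst_transition, f"{worst_drop:.0%}")  # account creation loses half the users
```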

5. Revenue per user (if applicable)

Total revenue divided by total users over a given period. This is the number that tells you whether your business math works.

If your average revenue per user is $0.50/month and you need $3,000/month to go full-time, you need 6,000 active paying users. Can you get there? With your current growth rate, how long would that take? Is that realistic?

This exercise is more useful than most business plans because it's grounded in your actual numbers, not projections from a spreadsheet. If the math doesn't work at current rates, you need to either increase the price, increase conversion, or find a cheaper way to acquire users.
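The back-of-the-envelope version, with hypothetical values for current users and growth rate plugged into the article's numbers:

```python
import math

arpu = 0.50            # average revenue per user, $/month
target_revenue = 3000  # $/month needed to go full-time

users_needed = math.ceil(target_revenue / arpu)  # 6000 paying users

# How long to get there at the current growth rate? (hypothetical inputs)
current_users = 1500
monthly_growth = 0.08  # 8% month over month

months = 0
users = current_users
while users < users_needed:
    users *= 1 + monthly_growth
    months += 1

print(f"{users_needed} paying users needed, ~{months} months at current growth")
```

If the answer comes out as "a decade", that's your cue to change the price or the conversion rate, not the spreadsheet.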

What to ignore (at least for now)

Some metrics are useful at scale but actively harmful when you're small. They either fluctuate too wildly to mean anything or they tempt you into optimizing the wrong thing.

Total downloads

Downloads measure marketing, not product quality. You can get 10,000 downloads from a Reddit post and lose 9,800 of those users in a week. The number goes up, you feel good, and nothing has actually improved.

Downloads matter when you're diagnosing an ASO problem (nobody's finding your listing) or when you're comparing the effect of different marketing channels. As a standalone metric to check daily? Ignore it.

Session duration

Long sessions can mean engagement or confusion. A user spending 20 minutes in your settings screen isn't having a good time. A user spending 3 minutes in your workout app might have had a perfect experience.

Session duration is only useful if you pair it with specific screens or flows, and at that point you're just doing funnel analysis, which is already on the list.

Daily active users (raw number)

When you have 200 users, your DAU will swing between 15 and 80 depending on whether it's a Tuesday or a weekend. You'll see a spike, get excited, then see a dip and panic. None of it means anything at low volume. WAU trend smooths out the noise.

Bounce rate

This is a web metric that doesn't translate well to mobile. People open apps, glance at them, and close them. That's normal behavior. A high "bounce rate" in an app might just mean people checked something quickly, which is fine.

Anything with a sample size under 100

If 3 out of 7 users did something, that's not a 43% conversion rate. That's 3 people. Don't make decisions based on it. Wait until you have enough data for the numbers to stabilize, which usually means at least a few hundred users going through a given flow.
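To see why 3 out of 7 isn't a 43% conversion rate, put a confidence interval around it. This uses the standard 95% Wilson score formula; the counts are illustrative:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(3, 7)
print(f"3/7 is consistent with anything from {lo:.0%} to {hi:.0%}")

lo2, hi2 = wilson_interval(129, 300)  # the same 43% at n=300: much tighter
print(f"129/300 narrows to {lo2:.0%}-{hi2:.0%}")
```

At n=7 the interval spans most of the possible range, which is a precise way of saying "this number means nothing yet."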

Setting up analytics without losing a week

You don't need a complex setup. Here's what actually works for a solo developer.

App Store Connect / Google Play Console (free, already there)

You already have this. It gives you downloads, impressions, conversion from listing view to download, and basic retention curves. The retention data is rough (it groups by install date, not cohort behavior) but it's enough to spot trends.

The most underused feature is the "conversion rate" on your App Store listing itself: what percentage of people who see your page actually download the app. If that's below 20%, your listing needs work before you worry about in-app metrics. You're losing people before they even try the product.

One analytics SDK (pick one, stick with it)

PostHog, Mixpanel, Amplitude, or Firebase Analytics. They all do roughly the same thing. Pick the one that has a free tier generous enough for your scale and that you find least annoying to use.

PostHog is open source and has a generous free tier. Firebase is free but the interface is a mess. Mixpanel's free tier covers 20 million events per month, which is plenty. Amplitude is good but its free tier is more limited.

The tech stack guide covers this choice in more detail, but honestly, the differences are small enough that you should just pick one and move on. Switching analytics tools later is annoying but not catastrophic.

RevenueCat (if you have subscriptions)

RevenueCat's dashboard gives you MRR, conversion, churn, and trial-to-paid rates out of the box. It's the best free analytics for subscription apps because it's built specifically for that use case. If you're doing subscriptions and not using RevenueCat, you're building reports that already exist. Once you can see your numbers, the next step is understanding what drives churn and what to do about it.

The events you should actually log

When you set up your analytics SDK, you need to decide what events to track. The temptation is to track everything. Resist it. Every event you add is noise in your data and complexity in your code. Here's a starting list that works for most apps.

app_opened — Every time the app comes to the foreground. This is your baseline for all engagement metrics.

onboarding_completed — Or whatever your initial setup flow is. The percentage of users who finish onboarding vs. those who don't is one of your most actionable numbers.

core_action_completed — The one thing your app exists for. Logged a workout. Saved an expense. Created a note. Scanned a document. This is your "aha moment" event. Name it whatever makes sense for your app.

paywall_viewed and purchase_completed — The conversion funnel. How many people see the paywall vs. how many buy.

error_occurred — With enough detail to reproduce it. This isn't strictly analytics, but logging errors to the same system means you can correlate crashes with user behavior.

That's it. Five events. You can add more later when you have specific questions, but starting with these five covers the full lifecycle from install to payment.
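One way to keep the list from growing by accident is a thin wrapper that rejects unknown events. The `sdk.capture` call below is a placeholder (PostHog's Python client happens to use `capture`; adapt the call to whichever SDK you picked):

```python
ALLOWED_EVENTS = {
    "app_opened",
    "onboarding_completed",
    "core_action_completed",
    "paywall_viewed",
    "purchase_completed",
    "error_occurred",
}

def track(sdk, user_id: str, event: str, **props) -> None:
    """Log an event, refusing anything outside the agreed list."""
    if event not in ALLOWED_EVENTS:
        raise ValueError(f"unknown event '{event}': add it deliberately, not ad hoc")
    sdk.capture(user_id, event, props)  # placeholder for your SDK's call
```

Adding an event now means editing `ALLOWED_EVENTS`, which is exactly the small moment of friction you want.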

Reading retention curves (without a stats degree)

Retention curves look intimidating but they're telling you a simple story. Plot the percentage of users who come back on day 1, day 7, day 14, day 30 after installing.

The shape of the curve matters more than the exact numbers. A curve that drops steeply and then flattens out is normal. It means you lose a bunch of people initially (tourists, accidental downloads, people who downloaded five apps and picked one) but the people who stay, stay.

A curve that keeps declining without flattening is bad. It means even your engaged users are leaving over time. Something about the product isn't sticky enough. Check whether people are completing the core action. If they are and still leaving, the problem might be that the core action isn't valuable enough or frequent enough.

A curve that flattens above 10% at day 30 is solid for most categories. You have a real user base that finds ongoing value. Below 5% at day 30 means most people try it once and forget about it.
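The shape test can be made mechanical. With hypothetical day-N retention fractions, "flattens above 10%" is two comparisons:

```python
# Fraction of a cohort returning N days after install (hypothetical numbers)
curve = {1: 0.34, 7: 0.18, 14: 0.13, 30: 0.12}

# Flattening: the late part of the curve barely declines
flattening = (curve[14] - curve[30]) < 0.05

# Solid floor: at least 10% still around at day 30
solid_floor = curve[30] >= 0.10

print("healthy" if flattening and solid_floor else "keeps declining, or floor too low")
```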

Compare your curves week over week. If the shape is improving, your product changes are working. If it's static despite updates, you're building features nobody asked for. The validation guide covers how to figure out what users actually want before you build it.

Cohort analysis for people who hate spreadsheets

Cohort analysis sounds academic but the idea is simple: group users by when they signed up and compare their behavior. Users from January vs. users from February vs. users from March.

Why this matters: if you shipped a big onboarding improvement in February, your February cohort should have better retention than January. If it doesn't, your improvement didn't work. Without cohorts, you're looking at a blended average that hides whether things are getting better or worse.

Most analytics tools show cohort tables automatically. If yours doesn't, a Google Sheet works. One row per week, columns for day-1, day-7, day-30 retention. Update it weekly. Takes five minutes.

The key thing to look for: are newer cohorts retaining better than older ones? If yes, your product is improving. If no, you're spinning your wheels.
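The five-minute spreadsheet translates directly into a few lines; the retention figures here are hypothetical:

```python
# One row per signup week: (cohort, day-1, day-7, day-30 retention)
cohorts = [
    ("2026-W06", 0.28, 0.14, 0.08),
    ("2026-W07", 0.31, 0.16, 0.09),
    ("2026-W08", 0.35, 0.19, 0.11),
]

# Are newer cohorts retaining better at day 7?
day7 = [d7 for _, _, d7, _ in cohorts]
improving = all(later >= earlier for earlier, later in zip(day7, day7[1:]))
print("newer cohorts retain better" if improving else "spinning your wheels")
```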

When to use analytics vs. when to just talk to people

Analytics tell you what's happening. They don't tell you why. If 40% of users drop off at the account creation screen, analytics will show you that number. They won't tell you if it's because the form is confusing, because people don't want to create an account, or because the keyboard covers the submit button on small phones.

For the "why," you need to talk to users. Even five conversations can explain what a thousand data points can't. Read your app store reviews regularly. They're unsolicited user interviews. The 2-3 star reviews are especially useful because those people cared enough to write something but weren't happy. That's exactly who you want to hear from.

A good rule: use analytics to find problems, use conversations to understand them, use analytics again to verify your fix worked.

The weekly review (15 minutes, not more)

Don't check your analytics daily. Seriously. Small apps have too little data for daily numbers to mean anything, and the emotional rollercoaster of watching numbers bounce around will drive you crazy.

Instead, do a 15-minute weekly review. Same day each week, same time if possible. Look at:

WAU trend: up, down, or flat?

Conversion rate: any change from last week?

Retention for the latest cohort: better or worse than the previous one?

Biggest drop-off point: still the same screen, or has it shifted?

Revenue: on track for the month?

Write down what you see and one action you'll take based on it. Not three actions. One. If retention dropped, maybe you look into what changed. If conversion is low, maybe you adjust the paywall. If everything is fine, great. Close the dashboard and go build something.

The discipline of doing this weekly and writing it down is more valuable than any sophisticated analytics setup. It forces you to actually look at the data, make a decision, and then see the result next week. That feedback loop is the entire point.

Analytics mistakes that waste your time

Tracking too many events

If you have 50 custom events, you're not going to look at 45 of them. They add SDK overhead, slow down your app, and clutter your dashboard. Start with five. Add one when you have a specific question that your current events can't answer.

A/B testing with 100 users

You need hundreds or thousands of users per variant for an A/B test to mean anything statistically. If you split 100 users into two groups of 50, the results are noise. At small scale, just ship the change, compare the cohorts, and move on. You can A/B test when you have the volume for it.
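You can sanity-check the required volume with the standard two-proportion power formula (normal approximation, alpha 0.05, 80% power; a gut check, not a substitute for a proper power calculation):

```python
import math

def n_per_variant(p_base: float, lift: float) -> int:
    """Per-variant sample size to detect an absolute lift in a conversion
    rate at alpha = 0.05 (two-sided) and 80% power."""
    z_a, z_b = 1.96, 0.84
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return math.ceil(n)

# Detecting a conversion jump from 3% to 4% needs thousands of users per variant
print(n_per_variant(0.03, 0.01))
```

Run the numbers for your own rates before splitting traffic; if the answer is bigger than your user base, skip the test.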

Comparing yourself to industry benchmarks

"The average day-1 retention for fitness apps is 27%." Cool. Your app might be nothing like the average fitness app. Benchmarks are useful as a sanity check (if you're at 3%, something is wrong) but useless as a target. Your own trend line is the only benchmark that matters.

Building custom dashboards too early

I've seen solo developers spend a weekend building a beautiful Grafana dashboard with 12 panels before they have 50 users. That's procrastination disguised as productivity. Use your analytics tool's default views. They're good enough.

Ignoring qualitative data

The best "analytics" for a small app is often a single email from a user saying "I love your app but I stopped using it because X." One email like that is worth more than a week of staring at funnels. Make it easy for people to reach you. Put a feedback link in the settings screen.

Platform-specific quirks

If you're in multiple markets, be aware that user behavior varies significantly by country. The localization guide covers this in depth, but the analytics angle is worth noting separately.

Japanese users tend to have higher retention but lower willingness to leave reviews. US users download more freely but churn faster. Markets with lower smartphone penetration often show higher engagement because people use fewer apps more intentionally.

If you're targeting multiple countries, segment your analytics by country from day one. A blended global retention number hides the fact that your app might be working well in one market and failing in another. Our geo arbitrage scanner can help you identify which markets to prioritize based on competition levels and opportunity gaps.

Privacy and analytics: the practical version

App Tracking Transparency (ATT) on iOS means a lot of users will opt out of tracking. This doesn't break your analytics if you're doing it right. First-party analytics (events within your own app) don't require ATT permission. You only need permission for cross-app tracking, which as an indie developer, you probably aren't doing.

Be honest about what you collect and why. Users are more privacy-conscious than they were five years ago. A clear privacy policy and minimal data collection aren't just legal requirements. They're competitive advantages when your competitors are sending data to 15 different SDKs.

PostHog and Plausible can be self-hosted if you want to keep everything on your own infrastructure. That's more work to maintain but it simplifies your privacy story considerably.

From metrics to decisions

Here's the real test for any metric: does it change what you do next? If you look at a number and your reaction is always "huh, interesting" followed by doing nothing different, stop tracking it.

Day-1 retention dropped? You investigate the onboarding flow. Conversion rate went up after changing the paywall? You keep the change. WAU is growing but revenue is flat? Maybe your pricing model needs rethinking. Each number should connect to an action.

The best analytics setup is one you actually use. Five events, a weekly review, and a willingness to talk to users will get you further than a sophisticated data pipeline that nobody looks at. If you're building something people want (and our scanners can help you figure out if you are), the data will confirm it. And if you're not, the data will tell you that too, which is the most valuable thing it can do.

If you're still in the idea phase and haven't launched yet, the MVP guide covers how to build something small enough to test quickly, and the profitable ideas guide can help you pick what to build in the first place. And once you've launched, understanding why users delete apps will help you act on the retention data you're tracking.

Find app opportunities backed by real data

AppOpportunity scans the App Store for abandoned apps, angry users, rising niches, and gaps across 5 countries. Stop guessing, start building where the data says to.

See Pricing →