Product Analytics in a feature factory

Some months ago, there were heated discussions in the product world when two product realities clashed head-on.
On the one hand, you have the ideal picture of product management: a bunch of really smart people who put a lot of care into product discovery and research (Lenny's podcast with Marty was a good example covering this: https://www.lennysnewsletter.com/p/product-management-theater-marty).
They excel at prioritizing different initiatives based on their findings. Then, they test features with a small set of users, supported by a well-defined measurement system. Once released, they measure performance, collect feedback, and either iterate on features or develop adjacent ones. It's this ideal product-building machine that creates minimal waste while making significant progress.
But on the other hand, you have the feature factories. These are the dirty little secrets of product management. In theory, you think, "Oh, I work in product, so that's great: I can shape things." But the truth is that there are often other forces in your company telling you what to build. I even know product teams with no one directing them that are still feature factories, because the head of product doesn't really care, or the whole team doesn't. They're just randomly building stuff. Or, as Marty calls it: "product management theater".

The heartbeat of a product team is a feature release. A product team starts getting questioned when they can't produce features anymore. "Hey, why don't you just build a new feature?"
Quick service announcement:
I have finished my Analytics Implementation Workbook, and I am launching a Reviewer Program. If you would like to get a free copy of the book and review it, click the link below. Seats are limited each month: first come, first served.

So there was this heated discussion between the purists or idealists on one side and the pragmatists on the other, where the idealists say, "Hey, if you're running a feature factory, you're not doing product management."
While I definitely have some sympathy for that view (and would count myself in the idealistic camp), you cannot bend reality. If you end up in a position where you are running a feature factory, you just build the stuff and do your job.
What I want to explore today is what this actually means for product analytics. You could make the case that when you're just a feature factory and your only driving force is to ship features, it doesn't matter whether you measure them. I mean, you just build features. So far, no one has asked you to prioritize features based on anything other than what people tell you to build. So why should we measure them? Why does it even matter?
Truth be told, there's a weird situation I've seen plenty of times: you have a feature factory setup, but you still have to do product reporting because people expect it. Even when, in the end, the reports are like the feature factory itself: no one actually cares about them. You just have to produce the report, and everyone is happy when it lands in their inbox. Obviously, don't expect that anyone will actually read it.
It's a strange space. So, I want to take some time to look into the different options you might have when you're in this scenario.
Why do Product Analytics at all?
Even if it sounds a little ridiculous, it's a fair question, and it goes much deeper into the value product analytics can bring.
The idealistic approach would suggest that measurement is an essential part of a good product development setup. However, in a lot of setups that would call themselves good product management, the implementation and utilization of product analytics are surprisingly poor. These setups often consist of either very high-level product metrics or granular interaction measures that don't provide useful insights about product features.
Product analytics is not easy to implement. It's challenging to find the right level of detail that provides meaningful insights to improve product experience. Because of that, especially for a "feature factory," there might be a reasonable argument to leave out product analytics altogether.

Instead of creating vanity reports that will never have any impact, it might be more honest to simply acknowledge that you don't care about the data. You could replace quantitative measurement entirely with qualitative methods like interviews, tests, and demos.
There is no imperative that you must run your product with quantitative measurements. It's worth critically reviewing whether you truly need quantitative data for your specific product context.
Doing Product Analytics for yourself
But let's say you are highly motivated and find yourself as an analyst in a feature factory. You got hired, and the craziest thing is that people are often brought into the product analyst role without the company having a clear idea of the benefits they can provide.
In the classic idealistic product development flow, product analytics is a driving force in the build-measure-learn cycle. However, in many feature factories, the process is reduced to simply "measuring" without truly learning. As a product analyst in such an environment, you might feel frustrated, but there are still opportunities to approach your work idealistically.
Even when features are built through management decisions without user-centric input, you can still measure them in a meaningful way. You can track feature adoption, measure the impact on overall product performance, and develop a comprehensive understanding of product mechanics. This includes identifying core and supporting use cases and expressing them through relevant metrics.
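To make that concrete, here is a minimal sketch of one such metric, feature adoption, in TypeScript. The event shape and names are my own assumptions for illustration, not a reference to any specific tool:

```typescript
// A minimal sketch of a feature adoption metric, assuming events
// with a name, a userId, and a timestamp.
interface AnalyticsEvent {
  name: string;
  userId: string;
  timestamp: Date;
}

// Share of active users who triggered a given feature event
// within a time window.
function adoptionRate(
  events: AnalyticsEvent[],
  featureEvent: string,
  since: Date
): number {
  const inWindow = events.filter(
    (e) => e.timestamp.getTime() >= since.getTime()
  );
  const activeUsers = new Set(inWindow.map((e) => e.userId));
  const featureUsers = new Set(
    inWindow.filter((e) => e.name === featureEvent).map((e) => e.userId)
  );
  return activeUsers.size === 0 ? 0 : featureUsers.size / activeUsers.size;
}
```

A handful of metrics like this, one per core and supporting use case, is already a usable picture of product mechanics.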
This approach might seem like you're creating a personal utopia, but it's more about muscle training. By building an idealistic system, you develop a deep understanding of the product that might not yet exist in your organization. When the right moment comes—perhaps when new people join and start asking different questions—you'll be prepared with insights.
In my experience, working in feature factories can be unfulfilling, but occasionally, things change. Someone new might arrive and start asking probing questions. Without your prepared insights, these inquiries might quickly fade away. However, with your comprehensive analysis, you can provide valuable information and potentially find allies who become interested in understanding product performance.
For instance, you might discover that out of ten features rolled out last year, only two gained significant traction. By investigating why these two features resonated more with users, you can identify small but meaningful improvements. You might find that a particular rollout strategy or presentation method worked better and can suggest incremental changes to the product team.

While this approach won't create a revolution in feature factory environments—where systemic issues run deep—it offers opportunities for gradual, meaningful improvement. By creating small, consistent "dripping drops" of insight, you can slowly shape how products are built and understood.
Better ingredients for Product Analytics in a feature factory
So, what are the ingredients if you really want to do a good job in a feature factory?
The first ingredient is getting the right kind of data. You need to assess your current position and the resources available to you. In some environments, you might already have some data and the ability to obtain what's missing. For example, you could introduce measurement requirements early on in the JIRA tickets for new features. You can create a very lean approach to a tracking plan using this awesome book:

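To show how lean such a plan can be, here is one possible shape for a tracking plan entry, sketched as a typed TypeScript object. The fields and names are assumptions of mine, not a standard format:

```typescript
// A lean tracking plan entry: one record per event, defined before
// the feature ships (e.g. attached to the JIRA ticket).
interface TrackingPlanEntry {
  event: string;                      // snake_case event name
  trigger: string;                    // when exactly the event fires
  properties: Record<string, string>; // property name -> description
  owner: string;                      // who answers questions about it
}

// Hypothetical entry for an export feature.
const exportFeaturePlan: TrackingPlanEntry = {
  event: "report_exported",
  trigger: "User clicks 'Export' and the download starts",
  properties: {
    format: "pdf | csv | xlsx",
    report_type: "which report template was exported",
  },
  owner: "product-analytics",
};
```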
If you face resource constraints, there are several strategies to collect data. One approach is to ask developers to implement explicit auto-tracking (something better than the generic auto-capture that analytics tools offer), such as a single event tracker for all button and link clicks. You capture detailed context in properties like button text, position, and destination. The data can be dumped into a data warehouse, where you can later make sense of it and transform it into meaningful events.
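Here is a minimal sketch of what such a generic tracker could look like in the browser, in TypeScript. The `/collect` endpoint and the payload shape are assumptions; any pipeline that accepts JSON would do:

```typescript
// Generic click auto-tracking: one delegated listener that captures
// every button and link click with enough context to reconstruct
// meaningful events later in the warehouse.
document.addEventListener("click", (e) => {
  const target =
    e.target instanceof Element ? e.target.closest("button, a") : null;
  if (!target) return;

  const rect = target.getBoundingClientRect();
  const payload = {
    event: "ui_click",
    text: target.textContent?.trim().slice(0, 100) ?? "",
    tag: target.tagName.toLowerCase(),
    position: { x: Math.round(rect.x), y: Math.round(rect.y) },
    destination: target instanceof HTMLAnchorElement ? target.href : null,
    page: window.location.pathname,
    timestamp: new Date().toISOString(),
  };

  // Fire-and-forget delivery that survives page navigations.
  navigator.sendBeacon("/collect", JSON.stringify(payload));
});
```

Raw `ui_click` rows like this are cheap to collect, and you can map them to named product events later with SQL in the warehouse.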
Another method is to request read access to the application's database or to create a read replica. By extracting information directly from the database, you can build a complete product analytics setup without using an SDK. This approach gives you more flexibility in analyzing the data. I describe it in this video:
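As a sketch of the idea, here is what such an extraction could look like with the node-postgres client against a read replica. The table and column names (`users`, `created_at`) describe a hypothetical application schema, and `REPLICA_URL` is an assumed environment variable:

```typescript
// SDK-less tracking: derive product events from a read replica
// instead of client-side instrumentation.
import { Client } from "pg";

async function extractSignupEvents(since: string) {
  const client = new Client({ connectionString: process.env.REPLICA_URL });
  await client.connect();

  // Each application row becomes a product analytics event.
  const { rows } = await client.query(
    `SELECT id AS user_id, created_at AS occurred_at
       FROM users
      WHERE created_at >= $1`,
    [since]
  );
  await client.end();

  return rows.map((r) => ({
    event: "user_signed_up",
    userId: r.user_id,
    timestamp: r.occurred_at,
  }));
}
```

Run on a schedule, queries like this turn application state into events without touching the product's code at all.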
The second ingredient is knowing what to do when no one is listening. First, find allies within the company who are also frustrated with the feature factory model. Group together and share insights that might interest them. You can also create a public channel for your insights, such as a newsletter highlighting feature adoption and performance.
Be strategic about how you share information. Some insights might be best shared directly with the product team, while others can be broadcast more widely. The goal is to create awareness gradually. Over time, people may become more interested in your analysis and seek your input.
The key is to do the best job possible with the resources available. Even in a feature factory, you can support the team and gradually build credibility. While the situation might not be ideal, you can still make meaningful contributions and potentially influence the organization's approach to product development and analytics.
What are your experiences working as a product analyst in a feature factory? Feel free to drop me a DM on LinkedIn: https://www.linkedin.com/in/timo-dechau/