The challenge of releasing internal products
From deep understanding to defining and communicating metrics, here’s my approach to refactoring a complex internal product with more than 15 features
Hello, I’m Tiago Ferreira, a Sr. Product Manager in Brazil with 6+ years of experience crafting products. With The Next Movement, I want to share part of my product management experience with the whole world, but also talk about careers more broadly, technology, good books, and - why not? - philosophy, music, culture, gossip, just like an open diary. If you enjoy reading my articles, subscribe and share them with your friends 🤓
After more than 5 years of working with customer-facing products, I thought that working with an internal product could be easy. I couldn’t be more wrong.
Internal products usually have fewer users, but they need to keep track of every action those users take. And when I received the challenge to lead an internal B2B product, responsible for configuring and setting prices for all SKUs sold by retail partners, the company didn’t have any event tracking or product analytics plugged in.
The strategy was to refactor this product because we had more than 15 features, an awful user experience and interface, and high latency when calling internal APIs.
After talking with internal users, we discovered that they spent more than 10 minutes completing structural actions, like creating specific prices for partners, configuring campaigns and benefits, and approving them so the changes reflected in the partner’s ecosystem.
To show how I went from zero to one in releasing a new version of the product, I’ll walk you through the process I followed, the metrics we defined, and the strategy we used to launch the first version.
Deeply understand the product: internal products are well known for lacking careful attention to user experience and interface. In my case, I needed to evaluate a huge product used by five different business units. First of all, I created my own user and captured every screen on my Miro board, detailing each user flow. When I had doubts, I talked to my peers in design and engineering. And, since our users were internal, I created specific Teams groups divided by area. They helped me a lot to understand the product.
Align with the Product Designer and Tech Lead: they knew everything about the product and had already started the discussions on how to improve the experience, interface, and tech stack. Also, the Product Designer had a lot of rich material from user interviews and feedback, which significantly reduced my knowledge gap about the product. The important thing here, though, is to build a mutual understanding of how they used to work with the engineering team, what had already been defined about the product, and the challenges upstream.
Investigate customer needs: first of all, don’t ignore what has already been collected about customer needs. I watched a lot of previous videos of users explaining how they performed an action and read all the materials prepared by the Product Designer. Nevertheless, the work needs to evolve, and when I had the minimum knowledge to contribute, I brought more questions to answer from a business perspective. Also, documenting everything on Confluence was crucial.
Analyze all available data: we didn’t have something like Amplitude or Google Analytics connected to the product. But all the pricing, benefits, partnership, campaign, and other data was tabulated in the database. After getting access to it (please: invest in MySQL knowledge, which helped me a lot in this phase), I concluded that features focused on pricing were the main activity in the product, followed by campaigns and benefits. That led to my first assumption: reduce the friction to configure prices, campaigns, and benefits. For a product with more than 15 features, that gave us focus on which parts we needed to prioritize.
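To make this concrete: a sketch of the kind of query that surfaces which features drive the most activity. The table and column names are hypothetical, and an in-memory SQLite database stands in here for the production MySQL instance (the SQL itself works in both).

```python
import sqlite3

# In-memory SQLite as a stand-in for the production MySQL database;
# the activity_log table and its rows are purely illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE activity_log (feature TEXT, created_at TEXT);
    INSERT INTO activity_log (feature, created_at) VALUES
        ('pricing',   '2023-01-10'),
        ('pricing',   '2023-01-11'),
        ('pricing',   '2023-01-12'),
        ('campaigns', '2023-01-10'),
        ('benefits',  '2023-01-11');
""")

# Count records per feature to see where real usage concentrates.
rows = conn.execute("""
    SELECT feature, COUNT(*) AS total
    FROM activity_log
    GROUP BY feature
    ORDER BY total DESC
""").fetchall()

for feature, total in rows:
    print(feature, total)
```

A handful of GROUP BY queries like this one is often enough to rank 15+ features by actual usage and pick the two or three worth refactoring first.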
Prioritization and alignment: after determining the focus, I needed to communicate the refactoring plan to stakeholders. Before fully understanding the customer needs and the available data, I had to guide the development team to start based on user feedback. We had to operate that way because I didn’t have enough time to understand everything before development began. Yeah, dual track - but the bad news is that the team didn’t have all the details about the product for at least one month, so they had to investigate during development.
Bring all the details needed downstream: I confess I don’t like to spend a lot of time writing user stories. But after a couple of retros where the team said they needed specific details before development, I saw that as an opportunity to deepen my knowledge even further. That feedback led me to create the product’s first PRD and to align even more with the Tech Lead and Product Designer to explain user flows and activities.
Determine how to track events: if the product hasn’t collected the data you need, what do you do? Create the foundation to collect it. As I mentioned, I knew the business data generated by the product: how many SKUs were priced, the number of benefits created, and so on. However, we didn’t collect how users interacted with the product. Unfortunately, our business unit didn’t have data scientists, but after talking with the business intelligence lead, he explained how to map events for his team so they could generate the Google Analytics 4 code and implementation. I mapped the events with the Product Designer and the front-end developer to guarantee that the main actions were tracked. We also plugged in Microsoft Clarity to see heatmaps and recorded user sessions.
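The event map we handed to the BI team can be sketched as a small structure like the one below. The event names and parameters are hypothetical examples I’m inventing for illustration; they just follow the snake_case naming convention that Google Analytics 4 expects for custom events.

```python
# Hypothetical event map for the pricing product, in the shape a BI
# team could turn into GA4 custom events (snake_case names plus the
# parameters each event should carry).
EVENT_MAP = [
    {"event": "price_created",      "feature": "pricing",   "params": ["sku", "partner_id"]},
    {"event": "campaign_configured", "feature": "campaigns", "params": ["campaign_id"]},
    {"event": "benefit_approved",   "feature": "benefits",  "params": ["benefit_id", "approver_role"]},
]

def events_for(feature: str) -> list[str]:
    """Return the tracked event names mapped to a given feature."""
    return [e["event"] for e in EVENT_MAP if e["feature"] == feature]

print(events_for("pricing"))
```

Keeping the map in one reviewable artifact makes it easy for the PM, designer, and front-end developer to agree on exactly which actions are tracked before any instrumentation code is written.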
Conduct reviews to show progress: after developing the main features of the product, our QA became responsible for presenting the new user experience and interface to stakeholders. I mediated those presentations and learned that, until users actually get their hands on the product, they don’t bring many questions. Even so, the review sessions allowed users to build familiarity with the new product before the launch.
Plan the first release: considering the complexity of this product, we needed at least six months to create the first version. To create this plan, I aligned which features to sunset and why some were prioritized, and I studied the development cycle time to predict the first release. It’s important to note that we missed the first date because we had to mitigate some risks, like an action started on the legacy product not reflecting on the new one, users not being correlated with their actions, and some infrastructure issues. But we postponed by only one month, which we thought was reasonable. Yeah, delays happen, especially with complex products.
Determine how you collect feedback: since we work remotely at the company, I created different Teams channels divided by the business units the users come from: Sales, Finance, Customer Service, etc. On release day, I explained our communication plan and how they could contribute feedback, either by answering a form or simply describing anything in those chats. We also created a dedicated page explaining each business rule of the features. It worked at the beginning, but I needed to reinforce the use of those channels because some users reported bugs and edge cases in forums where managers kept asking when we would fix those problems.
Create a balance between bugs found and implementing new features: when you launch a product, you have to deal with unanticipated scenarios found by users. Some are urgent, some aren’t. As the Product Manager, you need to understand them and strike a balance. We still had a lot of features to develop, so I created a system to determine bug priorities: if it affected pricing, it was a priority. Some of those reports were just technical improvements rather than bugs - which, of course, weren’t treated as priorities. I capped the work in progress at 60% new features and 40% bugs and tech improvements.
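The triage rule above is simple enough to write down explicitly. Here’s a minimal sketch of it, with hypothetical field names; the real decision involved more nuance, but this captures the two rules I actually applied: pricing jumps the queue, and tech improvements never do.

```python
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    affects_pricing: bool
    is_tech_improvement: bool

def triage(report: Report) -> str:
    """Hypothetical encoding of the triage rule from the article."""
    if report.is_tech_improvement:
        # Reported as a bug, but really an improvement: goes to the backlog.
        return "backlog"
    if report.affects_pricing:
        # Pricing is the core job of the product, so it jumps the queue.
        return "priority"
    return "normal"

print(triage(Report("Price not saved for partner X", affects_pricing=True, is_tech_improvement=False)))
```

Writing the rule down like this also makes the 60/40 work-in-progress split easy to defend: anything labeled "backlog" or "normal" competes for the 40% slice, so new features keep moving.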
Stay aligned with stakeholders: one-time communication isn’t enough, and even after a successful release, I kept reporting when new releases would come, how long we expected a bug fix to take, and so on. In those conversations, I started presenting our product’s metrics.
Evaluate the release: these were the metrics I followed right after the release:
Stickiness: # of internal users interacting with the new product / # of existing users (on a weekly basis);
Actions completed: % of actions completed on the new product;
Average duration of actions completed;
# of events per feature;
Average response time when calling internal APIs.
We followed more metrics as we evaluated the internal product.
As you read, leading a product with internal users was a hell of a challenge. It wasn’t my first experience refactoring a complex platform. But by creating a vision, aligning, defining metrics, and communicating with stakeholders all the time during development, we succeeded and delivered a new platform that effectively collects data and improves the user experience.
And, of course, the hardest work came after the launch: improving the defined metrics, finding new correlations, and consistently collecting user feedback.
Did you also lead or are you leading an internal product? Share your experience in the comments section below.