Objective Platform Campaign Editor

Case study

Objective Platform is a marketing measurement solution that leverages machine learning and data modelling to help businesses make smarter media investment decisions. A key feature of the product is the Media Scenario Planner, which allows users to build and compare multiple versions of complex media plans and forecast their outcomes.

In this case study, I focus on one part of that feature — the Campaign Editor — where users can create and manage campaigns within the scope of each scenario.


When I joined, the flow had been implemented by a team of full-stack developers without any design oversight, and it had grown organically as new features were added over time. Redesigning it from that point was quite a challenge.

My role

I was responsible for shaping the end-to-end user experience of the Campaign Editor redesign. My role covered discovery, ideation, and delivery. I began by conducting stakeholder interviews and mapping out user journeys to understand pain points across media planners, performance marketers, and managers. From there, I translated these insights into wireframes, prototypes, and design explorations that were tested iteratively with users.

Problem

The previous campaign editor had never been designed and had already been live for several months before I joined. This made it relatively straightforward to identify areas for improvement, even without conducting fresh user research, as many pain points had already surfaced during client alignments and product demos.

To define the goals for this iteration, I collaborated closely with our PO, PM, and sales team, aligning on the priorities we wanted to address in the redesign.

How might we allow users to create an optimal campaign without overwhelming them with data?

What made the editor so difficult to use?

During the discovery phase of the project, I conducted in-person interviews with stakeholders, the product owner, and the sales team to understand the technical rules behind the project and what our users wanted to get from it. Together, we defined the current issues with the editor:

  1. Confusion with the Scenario Editor

The Campaign Editor looked exactly the same as its parent, the Scenario Editor, so users often had no idea where in the app they were.

  2. Mishandled error prevention

Users often complained about selecting dates outside the scope of the parent scenario, and they weren't made aware of the mistake until they saved the campaign (see the validation sketch after this list).

  3. Lack of hierarchy + endless scroll

Research showed that the forecast is the most important thing for users, yet to see it they had to scroll through the entire list of campaigns, which could be extremely long in some cases.

  4. Unclear guidance

Users weren't aware which form fields had to be filled in first in order to unlock further information or extra options, so the app often appeared to be full of empty states.

  5. Missing comparison points

As the product grew, we noticed that many users were creating multiple campaigns that differed only slightly, in order to see how their outcomes changed after small adjustments and to keep the most efficient variants. However, to do that they had to open each campaign in the editor separately.
A comparison overview became a necessity.

The most popular use case was not actually editing a campaign, but creating multiple tweaked versions of it and selecting the one that would be the most efficient.
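
To make the error-prevention issue concrete, here is a minimal sketch of the inline validation rule the redesign called for: campaign dates must stay within the parent scenario's range, and the user should be warned the moment a date leaves that range rather than on save. The types and function names below are hypothetical, for illustration only, not Objective Platform's actual code.

```ts
// Hypothetical types for illustration; the real data models differ.
interface DateRange {
  start: Date;
  end: Date;
}

// Validate while the user is picking dates, not on save, so the
// mistake is surfaced the moment it happens.
function validateCampaignDates(
  campaign: DateRange,
  scenario: DateRange
): string | null {
  if (campaign.start > campaign.end) {
    return "The campaign must start before it ends.";
  }
  if (campaign.start < scenario.start || campaign.end > scenario.end) {
    return (
      `Campaign dates must stay within the scenario's scope ` +
      `(${scenario.start.toDateString()} to ${scenario.end.toDateString()}).`
    );
  }
  return null; // valid: no error message to show
}
```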

Goals

The redesign of the Objective Platform Campaign Editor aimed to transform a previously overwhelming and data-heavy interface into a more intuitive, structured, and actionable experience for creating and editing campaigns.


One of the primary goals was to improve the overview experience, providing users with a clear, high-level view of campaigns without being buried in excessive data.


Another key focus was establishing a clear hierarchy of data, so information is displayed progressively rather than all at once, helping users focus on what’s important at each stage of creating or editing the campaign.

To further support decision-making, a major goal of this redesign was to enable campaign comparison, allowing users to evaluate different campaigns side by side.


Finally, one of the features requested by stakeholders was assistance with media optimization, offering both manual and automated guidance to help users improve performance efficiently.


And, since the previous editor had been designed by developers, improving the aesthetic experience was, without question, also a priority.

Starting off the design

I started with the most difficult part: creating a flow that would cover the several ways of starting a campaign and all the edge cases. This flow was iterated on several times with the data science team and product managers, including at later stages, since the technicalities behind it were quite complicated.

At this stage I really wanted to separate the Campaign Editor from the Scenario Editor to enable easier copying of campaigns between scenarios, but after long debates it turned out to be technically infeasible.

Solution

On the overview page, I introduced a scenario timeline that displays all campaigns assigned to it, with each campaign represented in a distinct color. This not only makes the scenario easier to scan but also provides a visual sense of sequencing and overlap. From this view, users can directly access a new comparison feature, which allows them to evaluate between two and five campaigns side by side.


Within the campaign editor, I shifted the creation process to a step-by-step flow, helping users enter data in a guided manner and reducing the chance of errors.
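
As a rough illustration of what "guided" means here: each step only unlocks once the previous ones are valid, which also addresses the empty-state problem found during discovery. The step names and shapes below are assumptions made for the sketch, not the editor's actual steps.

```ts
// Illustrative step order; the real editor's steps may differ.
const steps = ["basics", "dates", "media", "budget", "forecast"] as const;
type Step = (typeof steps)[number];

// Per-step validity flags, derived from the current form state.
type StepValidity = Record<Step, boolean>;

// A step is reachable only when every preceding step is valid, so
// users never see fields they can't meaningfully fill in yet.
function isStepUnlocked(step: Step, validity: StepValidity): boolean {
  const index = steps.indexOf(step);
  return steps.slice(0, index).every((previous) => validity[previous]);
}
```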

For editing existing campaigns, the redesign focused on improving the overview experience and creating a clearer hierarchy of information, so that users no longer face an overload of data at once but can instead work through campaigns in a more structured and efficient way. It also allowed them to hide all unnecessary form fields, see a summary of the information they had entered, and focus on the last and most important section, which is the forecast.

I also introduced a budget allocation table linked to the assigned media. This table includes the option to select one campaign as a baseline, making it easier for users to read the data and compare performance across campaigns.


I wrapped everything in a friendly user interface aligned with the design system I had created previously, and built a prototype in Figma.

How do we approach testing without being able to use any third-party libraries?

Testing challenges

User testing for Objective Platform was always a challenge. Due to contract restrictions and data sensitivity, I was not allowed to use third-party testing tools. At the same time, the product’s complexity—especially the number of layout variables and calculation logic—made lightweight prototype testing less effective.

As a team, we decided the best approach was to implement the redesigned layouts with a working backend and release a beta version to a selected group of users. This allowed us to observe real interactions and gather meaningful feedback.

We focused our testing on three key questions:


  1. Do users know how to create a campaign?

  2. Do users know how to edit a campaign?

  3. Is the new comparison solution satisfying?


The first two tests were successful: users were able to create and edit campaigns with little guidance during client sessions. The comparison feature, however, produced mixed results: some users preferred a simplified, high-level view for quick scanning, while others wanted a deeper dive into the detailed data. This tension highlighted an opportunity to refine the comparison tool further to accommodate different working styles, which led us to…

Iteration to address user complaints

To find a compromise for both types of users, I implemented a "simple first" approach and enabled table row expansion for users who want to dive deeper into the data and detailed budget allocation.


Another important feature at this point was the ability to select one campaign (and likewise one media channel) as a baseline against which the remaining ones are compared, including percentage calculations of the differences.


Last but not least, we prevented errors up front by disabling the comparison when fewer than two campaigns are selected.
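
Taken together, the comparison rules are simple enough to sketch in code. The shapes and names below are hypothetical; only the two-to-five selection limit and the baseline percentage math come from the design itself.

```ts
// Hypothetical shape: one forecast figure per campaign in the comparison.
interface CampaignForecast {
  id: string;
  forecast: number;
}

// Comparison is only enabled for two to five selected campaigns.
function canCompare(selected: CampaignForecast[]): boolean {
  return selected.length >= 2 && selected.length <= 5;
}

// Percentage difference of each campaign against the chosen baseline,
// e.g. a forecast of 120 against a baseline of 100 reads as +20%.
function diffAgainstBaseline(
  campaigns: CampaignForecast[],
  baselineId: string
): Map<string, number> {
  const baseline = campaigns.find((c) => c.id === baselineId);
  if (!baseline || baseline.forecast === 0) {
    throw new Error("Baseline is missing or has no forecast to compare against.");
  }
  const diffs = new Map<string, number>();
  for (const campaign of campaigns) {
    const delta = campaign.forecast - baseline.forecast;
    diffs.set(campaign.id, (delta / baseline.forecast) * 100);
  }
  return diffs;
}
```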



Sessions with the dev team

As after any major feature design, I ran sessions with the development team (and at least one person from the data science team) to make sure everything was clear, that the flow had no mistakes, and to answer their questions. Cooperation within the team was a crucial part of the process.


Since I have a background in front-end development, I also often checked our test environment and polished the templates myself when the implemented screens didn't align with the designs I had provided.

The result

There are always more improvements to make, but I was happy to call the flow ready for release. Users were completing scenarios with ease, understood the functionality, and appreciated the amount of control they were given over campaign editing.


When I left the company, there were more improvements in the product pipeline to be resolved (for example, handling upgrades of the parent scenario's model version), but the current state of the editor was a good start.