Objective Platform Campaign Editor

Case study

Objective Platform is a marketing measurement solution that leverages machine learning and data modelling to help businesses make smarter media investment decisions. A key feature of the product is the Media Scenario Planner, which allows users to build and compare multiple versions of complex media plans and forecast their outcomes.

In this case study, I focus on one part of that feature — the Campaign Editor — where users can create and manage campaigns within each scenario.
When I joined, that flow had been assembled by full-stack developers without any design oversight and had grown organically as new features were added. Redesigning it from that point was quite a challenge.

My role

I was responsible for shaping the end-to-end user experience of the Campaign Editor redesign. My role covered discovery, ideation, and delivery. I began by conducting stakeholder interviews and mapping out user journeys to understand pain points across media planners, performance marketers, and managers. From there, I translated these insights into wireframes, prototypes, and design explorations that were tested iteratively with users.

Goals

The redesign of the Objective Platform Campaign Editor aimed to transform a previously overwhelming and data-heavy interface into a more intuitive, structured, and actionable experience.


One of the primary goals was to improve the overview experience, giving users a clear, high-level view of their campaigns without burying them in excessive data.


I also decided on a step-by-step approach for entering campaign details, reducing the need for users to jump back and forth between sections.


Another key focus was establishing a clear hierarchy of data, so information is displayed progressively rather than all at once, helping users focus on what’s important at each stage.

To further support decision-making, a major goal of this redesign was to build a campaign comparison tool, allowing users to evaluate different campaigns side by side.


Finally, one of the features requested by stakeholders was assistance with media optimization, offering both manual and automated guidance to help users improve performance efficiently.


We also wanted to try to separate the Campaign Editor from the Scenario Editor, but it turned out to be technically impossible due to the intricacy and complexity of the data models they are both tied to.

What makes the editor so difficult to use?

During the discovery phase of the project, I conducted in-person interviews with stakeholders, the product owner, and the salesperson to understand the technical rules behind this project and what our users wanted to get from it. Together we defined the current issues with the editor.


  1. Confusion with the Scenario Editor

The Campaign Editor looks exactly the same as its parent, the Scenario Editor, so users often have no idea where in the app they are.

  2. Mishandled error prevention

Users often selected dates outside the parent scenario's scope and only found out when they tried to save the campaign.

  3. Lack of hierarchy + endless scroll

Research showed that the forecast is the most important piece of information for users, yet to reach it they had to scroll through the entire list of campaigns, which can be extremely long in some cases.

  4. Unclear guidance

Users weren't aware which form fields needed to be filled in first to unlock further information or extra options, so the app often appeared to be full of empty states.

  5. Missing comparison points

As the product grew, we noticed that many users were creating multiple campaigns that differed only slightly, in order to see how small adjustments changed the outcome and to keep the most efficient versions. However, to do that they had to open each campaign in the editor separately.
A comparison overview became a necessity.

The process

Discovery → Defining → Wireframing → Testing → UI → Launch → Feedback → Iterating → Launch

Starting off the design

I started with the most difficult part: creating a flow that would cover the several ways of starting a campaign as well as all the edge cases. This flow was iterated on several times with the data science team and product managers, including at later stages, since the technicalities behind it were quite complicated.

Next, the UI Design

I wrapped everything in a friendly user interface aligned with the design system I had created previously, and built a prototype in Figma.

Kickoff session with the dev team

As with any major feature implementation, I held a session with the development team to make sure everything was clear and to answer their questions.

Testing issues

User testing for OP was always tricky, since I was not allowed to use any third-party software due to contract restrictions and data sensitivity.


On top of that, the complexity of the product and its calculations, along with the many layout variables, made prototype testing less effective.

The best we could do as a team was to implement the layouts with a working backend, introduce a beta version to some of our users, and gather their feedback.

Reiterating

Based on the feedback from the beta users, I iterated on the flow and the UI before the full launch.

The result

There are always more improvements to make, but I was happy to call the flow ready enough to be built. Users were completing scenarios with ease and comprehending the functionality.