Audience Builder is an internal tool that lets Ibotta employees build advertising segments. It is projected to save 30,000 employee hours annually.


Ibotta employees had to work across multiple systems and tediously edit code by hand (they were not engineers), spending many hours annually on a workflow that could be made much simpler. Quantitative and qualitative research identified the most time-consuming and painful parts of the workflow, and quick iteration with frequent usability testing shaped the framework for the MVP.



Ibotta employees needed a faster, less tedious, and 100% accurate method for creating advertising segments.

The challenge was designing a product that is cohesive, both visually and functionally, with the current enterprise ecosystem while accommodating the unique workflow of building segments. It had to be simple enough for a new employee to use in their first week without errors.

Time was the primary constraint. We had two months from kickoff to MVP deployment. 


Analyzed quantitative data to understand the most-used criteria for building segments and the magnitude of the problem (time consumed). This became the focal point of what the MVP needed to address.

Conducted moderated interviews with stakeholders to understand their current workflow and the positive and negative moments in the user journey. The goal was to map the information architecture and understand the full customer journey in order to identify which critical parts of the experience the MVP needed to address.

Paper draft of user flow

Information architecture

Audience Builder in context of the enterprise ecosystem
Audience Builder input fields

Customer Journey


Initial concept

The layout of the form fields and the types of form fields used to build the segment were critical to the goal of saving employees' time. Research showed that the current system produced many user errors because of the precision required to build segments in code.

This first concept aimed to provide safeguards in the input fields to ensure a 100% accurate output. My first thought was to use radio buttons for mutually exclusive options, dropdowns for values that required an exact match, and criteria broken out into clearly defined conditions that could be combined with logic operators.
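The safeguard idea above can be sketched as a small data structure: a segment is a set of constrained conditions joined by a logic operator, rather than hand-written code. This is an illustrative sketch only; the attribute names and comparators are hypothetical, not Ibotta's actual schema.

```python
from dataclasses import dataclass

# Hypothetical model of a segment built from constrained inputs.
# Every attribute and comparator comes from a fixed dropdown set,
# so the structure can only express valid segments.
@dataclass
class Condition:
    attribute: str   # e.g. a dropdown selection like "state"
    comparator: str  # exact-match set: "equals" or "at_least"
    value: object

@dataclass
class Segment:
    conditions: list
    logic: str = "AND"  # logic operator connecting conditions: "AND" or "OR"

    def matches(self, user: dict) -> bool:
        results = []
        for c in self.conditions:
            actual = user.get(c.attribute)
            if c.comparator == "equals":
                results.append(actual == c.value)
            elif c.comparator == "at_least":
                results.append(actual is not None and actual >= c.value)
        return all(results) if self.logic == "AND" else any(results)
```

Because every value arrives through a radio button or dropdown, there is no free-form syntax for a user to get wrong, which is the accuracy safeguard this concept was aiming for.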

Setup screen exploration 1

Initial results

Overwhelmingly, users found the flow demanding and distracting: it required them to think through the inputs in a way that read differently from how they had logically processed those inputs before.

Specifically, the radio buttons were distracting and the order of the input fields was confusing. The real-time feedback displayed by the population bubbles was helpful for confirming that the logic they entered produced the expected value.

Concept 2

To reduce the cognitive load of the form fields, I reworked their order and changed their functions. The radio buttons were replaced with dropdowns, for example. Options in the dropdown selectors had revised copy that more clearly matched how users processed the data they needed to input.

The newly proposed review page packaged all of the inputs into a plain-text format so users could read the inputted data as a sentence, mirroring how they would have received the request in the first place.

Setup screen exploration 2
Review screen exploration 1

Concept 2

Usability test results

Testing revealed a slowdown in form field completion and constant re-checking of dropdown selections. Click paths from the first dropdown to the last varied from person to person.

This suggested that the information architecture was inconsistent with how users interpreted the request they needed to translate into form inputs. The tabbed slider for the logic operators also confused users who only needed one condition.

Concept 3

The previous concepts used a Z-shaped field layout, which contributed to long completion times and sustained confusion. This concept removed extraneous features, like the colored population bubbles, freeing up space for an L-shaped layout where the inputs could read like a sentence. Feedback from the previous tests suggested this would help reduce input error.

Setup screen exploration 3
Review screen exploration 2

Concept 3

Usability test results

Feedback on the form field layout was mostly positive, and completion times were quicker. Removing extraneous content, like the population bubbles, helped users focus on the task. Although this format tested better, the L-shaped layout caused responsive design issues: fields ended up vertically stacking anyway.


New technical limitations, surfaced by the condensed timeline, kept several features out of the MVP: estimated population sizes, publishing segments directly to other software in the ecosystem, and logic operators connecting multiple conditions.

Phase 1 was a large pivot from the original vision for the product. Inputting data and reviewing that data for accuracy were still the most important parts of the customer journey, and the MVP needed to make those experiences easier in order to succeed. Because of this, we compromised by forgoing many of the visual elements found in the other enterprise tools in the ecosystem in order to accommodate a feature (UPS Transformer) that was ultimately more valuable to our goal of reducing employee labor hours.

Visually, we created a three-column format that fully disclosed all inputs, as opposed to the progressively disclosed single-column format of previous iterations. Functionally, the output was still code, but the product guaranteed 100% accuracy by generating the code from the form-field inputs.
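A minimal sketch of that "constrained form fields in, generated code out" idea, assuming a simple SQL-like output and hypothetical field names (the actual output format isn't described here). Conditions are joined with AND only, since logic operators between conditions were cut from the MVP.

```python
# Hypothetical sketch: generating query code from constrained form inputs.
# Because fields and comparators come from fixed dropdown options, the
# generated code is always syntactically valid -- no hand-typed errors.
ALLOWED_FIELDS = {"purchase_category", "state", "last_active_days"}
COMPARATORS = {"equals": "=", "at_least": ">=", "at_most": "<="}

def build_where_clause(inputs):
    """inputs: list of (field, comparator, value) tuples from the form."""
    clauses = []
    for fld, comp, value in inputs:
        if fld not in ALLOWED_FIELDS or comp not in COMPARATORS:
            raise ValueError(f"unsupported input: {fld} {comp}")
        clauses.append(f"{fld} {COMPARATORS[comp]} '{value}'")
    return " AND ".join(clauses)
```

The accuracy guarantee comes from validation happening before code generation: any input that didn't originate from the fixed option sets is rejected instead of being emitted as broken code.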


MVP service blueprint


MVP user flow


MVP shipped product



Usability test results

Ultimately, the test results were positive! Creating an audience, which originally took 30 minutes with a large margin of error, now took 5–8 minutes with zero margin of error.

Phase 2 exploration

Immediately, the plan was to grow the product to fix more of the broken parts of the customer journey. Many of the proposed changes had already been tested in the original concepts or with other products in the enterprise ecosystem.

Phase 2 specifically would address the "verify output" and "publish segment" sections of the customer journey, which still took a lot of employees' time. On the home screen, users could easily see the status of published segments and make quick edits, addressing the "publish segment" issues. Bringing back the review screen with real-time segment-size evaluation would address the "verify output" issues.

Additionally, Phase 2 would align better visually with the enterprise ecosystem. A dynamic setup screen that progressively discloses options to reduce the cognitive load of the single-page format and a review screen that summarizes all inputs into plain-text English with supporting visuals were some of the proposed changes in the new concept.

Home page exploration
Setup screen phase 2 exploration
Review screen phase 2 exploration

Role & responsibilities

I was the lead designer on a squad with a Product Manager, Engineering Manager, and four engineers. User research, usability testing, information architecture, prototyping, and handoff to development were my responsibilities. Collaboration with other designers on the team to ensure parity across the enterprise ecosystem was an important part of this process. 

Get in touch

Designed & built by Chris Del Bene

© 2020