
Modernizing Mailjet’s Email Editor
Mailjet’s email editor sat at the heart of the product — directly shaping onboarding, activation, and retention. But research revealed that the editor wasn’t failing because of missing features. It was failing because users couldn’t predict what it would do next. Invisible styling scope, no recovery paths, and ambiguous system feedback were quietly eroding confidence at scale.
I led research strategy and experience definition across discovery and validation — synthesizing signals from 9,300+ NPS verbatims, card sorting, competitive benchmarking, and usability testing into a coherent system direction. The work reframed how branding, editing, and trust should function as a cohesive system. The results validated the approach.
60% fewer support tickets
25 NPS pts increase
50% adopted within 2 months
Timeline:
Research ran Oct–Dec 2025
Role:
Lead UX Researcher · Design Partner
Mailjet’s drag-and-drop email editor was the center of the user’s workflow — and it had become a liability. The sending infrastructure was strong. The editing experience had drifted. Over 300 critical support tickets related to editor performance arrived each quarter. Users reported being stuck, confused, and blocked.
“Our e-mail has stopped working for the 4th time in 2 months. We need this fixed right now.”
“Très urgent.” (“Extremely urgent.”) — A customer whose account block halted all of their business communications.
But the core problem wasn’t bugs or missing features. Signals across 9,300+ NPS verbatims, usability testing, and competitive benchmarking consistently pointed to the same underlying pattern: users couldn’t understand where controls lived, how styling was scoped, what composite blocks allowed, or whether their actions had actually succeeded. The system worked. Users couldn’t tell.
At the same time, the market had largely commoditized around similar email editor UI patterns. Visual novelty wasn’t a differentiator anymore. That created a strategic opening: compete on confidence, predictability, and trust — not on feature count.

Legacy editor
I led this work from early discovery through system definition and validation, partnering closely with product design and engineering to translate research into durable experience principles and behavior models.
My role extended beyond identifying usability issues to defining how the editor should behave as a system — particularly around branding, inheritance, and trust feedback. I owned research strategy, synthesized signals across multiple sources, and articulated the design intent that aligned teams around a shared north star.
Experience strategy and design intent definition
Research planning and execution across discovery and validation
Moderated and unmoderated usability testing (wireframes and prototypes)
Unmoderated card sorting to map user mental models
Competitive benchmarking and pattern analysis across 12 tools
Cross-signal synthesis (NPS, support tickets, interviews, usability)
System behavior and IA recommendations
Microcopy and user messaging strategy
Engineering alignment, feasibility tradeoffs, and documentation support
The strategic framing for this project came from a shift in how we understood the problem. Initial thinking was about improving individual controls. Research kept pointing somewhere else.
THE REFRAME: The editor didn’t need more features. It needed users to be able to predict what it would do next.
Confidence in outcomes — not feature depth — was the primary driver of satisfaction and retention. This reframe reshaped every subsequent decision. Rather than optimizing isolated interactions, the work focused on modernizing the mental model of the editor itself: clarifying what’s happening, where actions apply, and how users recover when things go wrong.
Three principles followed:
Trust over novelty — make scope, inheritance, and system state explicit, rather than adding new capability
Recovery over restriction — design for the moments when things go wrong, not just when they go right
Confidence through transparency — surface system state so users always know what happened and what’s possible next
NPS Verbatim Synthesis (9,300+ Comments)
Recurring friction themes surfaced consistently across customer feedback: no autosave or versioning, UI instability, rigid template behaviors, and poor error messaging. These themes pointed to a lack of system feedback and recoverability — issues that couldn’t be solved through surface-level UI changes alone. They directly informed requirements for draft history and recovery, inline warnings, flexible block reuse, and clear override visibility.
Unmoderated Card Sorting (30 Blocks + 20 Settings)
Two exercises over two weeks evaluated how users grouped 30 content blocks and 20 settings. Recruitment constraints capped the sample at 12 participants, but grouping patterns repeated consistently across participants and locales, providing strong directional confidence. These patterns directly informed how blocks, settings, and styling controls were grouped and surfaced — reducing guesswork during editing.

What participants were shown

What we saw
Competitive Benchmarking (12 Tools)
Competitive analysis revealed consistent market gaps in brand enforcement clarity, collaboration ergonomics, testing and validation visibility, and embedded AI guidance. Rather than reinventing editor layouts, benchmarking reinforced an opportunity to differentiate through brand-safe flexibility, inline trust feedback, and assistive (not generative) AI — particularly for non-technical users operating at scale.

Customer Interviews: Workflow Reality Check
Semi-structured interviews with external customers across technical and non-technical roles confirmed the earlier signals. Users compensate for editor uncertainty with external tools — Canva, ChatGPT, Email on Acid. Repetitive workflows magnify even small inefficiencies. AI is valued as an assistant for summarization, debugging, and optimization — not as a creative replacement. Trust and predictability matter more than flexibility alone.
Usability Testing (Interactive Prototype)
Unmoderated, think-aloud usability tests used an interactive Figma Make prototype to validate IA, mental-model alignment, and trust-related interactions. Task success on core flows (previewing before sending, adjusting a section background, adding a subject line, using the AI assistant) ran at 80%. Friction most often appeared during mid-flow editing and system-feedback moments, not initial discoverability; when frustration surfaced, it was tied to unclear system responses rather than to not knowing what to do next.

🔍 Prototype validation through early Figma Make user testing.
Evidence Note
Evidence is drawn from internal usability testing, NPS analysis, and competitive research conducted at Mailjet. Artifacts shown are anonymized.
Each design decision translated a specific research finding into concrete system behavior. The goal was to reduce ambiguity, increase predictability, and embed trust directly into the editing workflow.
Tabbed Content | Style | Settings Panels
Research insight: Users mentally separate what they’re editing from how it looks and how it behaves — but the editor treated all three as one undifferentiated surface.
System decision: Organized controls into clearly scoped Content, Style, and Settings panels to align with user mental models and reduce cognitive load during editing.
Impact: Reduced guesswork and prevented accidental cross-scope changes, especially during mid-flow edits.
Tiered Block Library (Standard vs. Pre-Built)
Research insight: Pre-built blocks accelerate creation but introduce hesitation when editability is unclear. Users didn’t know what they could and couldn’t change.
System decision: Separated standard blocks from pre-built blocks to make levels of flexibility and control explicit.
Impact: Preserved editing speed while restoring confidence — allowing users to choose intentionally between structure and freedom.
Testing Library and Post-Action Feedback
Research insight: Autosave, testing, and recovery were perceived as business-critical trust signals. Users relied on external tools because the editor gave no confirmation that actions had succeeded.
System decision: Improved visibility into testing history and post-action system feedback — save, send, and test confirmations made explicit.
Impact: Reduced reliance on external validation tools and increased the confidence to proceed without second-guessing.

Early concept for new IA

After testing — informing high-fidelity mocks
The results of the modernization validated the research direction across every key signal — support load, user sentiment, and adoption.

A 60% Drop That Freed the Support Team
The most immediate signal was the support ticket reduction. The flood of “it’s broken” messages slowed to a trickle — freeing the support team to handle more complex, strategic queries instead of recurring usability problems. That’s a structural win, not just a satisfaction metric.
Sentiment Shifted Before Adoption Was Pushed
The 25-point NPS increase was backed by qualitative evidence. Where users once described the editor as something to work around, they began describing it as something that worked for them: “The new editor is so much faster!” “I love the new interface, it’s so intuitive.” Sentiment doesn’t shift like that from incremental UI polish. It shifts when the underlying model changes.
Voluntary Adoption as a Trust Signal
50% of active users switching to the new editor within two months — without a forced migration — is strong evidence of product-market fit for the design direction. Users recognized its value and chose it. That’s the metric that matters most for a redesign that was fundamentally about trust.

This work addressed foundational breakdowns in how users understand, trust, and scale their work within the Mailjet editor ecosystem. The goal was to reframe the editor from a collection of features into a trust-driven system — where branding, templates, and user actions behave predictably across contexts.
Rather than optimizing isolated interactions, this project focused on modernizing the mental model of the editor itself — clarifying what’s happening, where actions apply, and how users recover when things go wrong. These moments quietly determine whether users feel confident enough to proceed, experiment, and scale their work over time.
The work was strategic in five concrete ways:
**Excavated real user mental models**, revealing where legacy information architecture, invisible styling scope, and ambiguous system states caused hesitation, rework, and errors during creation
**Addressed competitive gaps** where other editors optimize for speed or novelty but underinvest in predictability, recovery, and trust — especially for teams working at scale
**Reframed reliability as a UX surface**, embedding clarity through guardrails, visible scope, autosave signals, and reversible actions rather than relying on post-hoc documentation
**Established a scalable foundation** for Passport-level patterns that extend beyond the editor into onboarding, Brand Kit governance, automations, and future AI-assisted workflows
**Directly targeted churn drivers** surfaced through research and support feedback — particularly moments where users felt unsure, blocked, or afraid to proceed
The result wasn’t just a cleaner editor. It was a system designed to feel safe to use at scale — one where users could move faster because they understood what the product would do next.
The biggest lesson from this project was about where the real problem lives. The instinct in editor redesigns is to go looking for missing features, better components, fresher UI. The research kept returning to something more fundamental: users had built up uncertainty about the system itself. They’d learned not to trust it. No amount of visual polish fixes that.
The reframe — from “how do we improve the editor?” to “how do we make the editor trustworthy?” — was the decision that made everything else coherent. Once the right question was in place, each design decision had a clear test: does this reduce uncertainty, or does it add more?
I also came away with a stronger appreciation for cross-signal synthesis as a methodology. No single source — not NPS verbatims, not card sorting, not interviews alone — told the full story. The confidence came from triangulation: when 9,300 comments, 12 usability participants, and 12 competitive tools all pointed at the same gap, that’s not a finding. That’s a direction.
The next step is high-fidelity validation on the decisions that carry the most risk: scoped styling inheritance, composite block editing flows, and version history. Success won’t be measured by task completion rates alone — it will be measured by whether users feel confident enough to stop reaching for external tools.


