Paid Search Case Study

Role: Solo analyst • Stack: Python (Jupyter), Tableau Public, GitHub

Executive Summary

I used a historical paid search dataset to show how I’d approach a real marketing ops problem, taking it from raw data to a Tableau dashboard: diagnosing revenue, cost, and profitability by device and theme, and recommending smarter budget and bidding decisions.

Scope: 5 months of search campaigns split by device, keyword match type, and promo theme

Core KPIs: Revenue, Cost, P&L, CTR, CPA, Impression-to-Conversion Rate

Outcome: Identified that the program is unprofitable overall (–$73K P&L), heavily driven by inefficient theme mix and under-invested desktop, and outlined a set of budget, bidding, and testing recommendations.

End-to-End Process

Data & Engineering

  • Sourced a Paid Ads Search Campaign dataset from Kaggle (to work with public, anonymized data)

  • Version-controlled the project on GitHub

  • Explored and cleaned raw data in a Jupyter notebook (using Python)

  • Standardized dimensions and further segmented the dataset by common elements (device type, keyword match type, search theme); see the cleaning sketch below
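
A minimal sketch of that cleaning and standardization step, assuming illustrative file paths and column names (device, match_type, search_theme, date, impressions, clicks, conversions, cost, revenue) rather than the Kaggle file’s actual headers:

```python
import pandas as pd

# Load the raw Kaggle export (path and column names are illustrative)
raw = pd.read_csv("data/paid_search_campaigns.csv")

# Standardize dimension values: consistent casing, no stray whitespace
for col in ["device", "match_type", "search_theme"]:
    raw[col] = raw[col].str.strip().str.title()

# Parse dates and coerce numeric fields, turning unparseable values into NaN
raw["date"] = pd.to_datetime(raw["date"], errors="coerce")
numeric_cols = ["impressions", "clicks", "conversions", "cost", "revenue"]
raw[numeric_cols] = raw[numeric_cols].apply(pd.to_numeric, errors="coerce")

# Drop exact duplicates and rows missing the key dimensions, then persist
clean = raw.drop_duplicates().dropna(subset=["date", "device", "search_theme"])
clean.to_csv("data/clean/paid_search_clean.csv", index=False)
```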

Model & Aggregation

  • Validated the cleaned data against the raw files to ensure no data loss or double counting (reconciliation sketch after this list)

  • Documented all transformations so pipeline could be reproduced or scheduled
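
A sketch of how that reconciliation and the KPI roll-up could look; the aggregation grain, paths, and column names are assumptions carried over from the cleaning sketch, not the dataset’s actual schema:

```python
import pandas as pd

raw = pd.read_csv("data/paid_search_campaigns.csv")        # original Kaggle export
clean = pd.read_csv("data/clean/paid_search_clean.csv")    # output of the cleaning step

# Reconcile totals so nothing was lost or double counted during cleaning
for col in ["impressions", "clicks", "cost", "revenue"]:
    delta = clean[col].sum() - raw[col].sum()
    print(f"{col}: cleaned minus raw = {delta:,.2f}")
assert len(clean) <= len(raw), "cleaning should never add rows"

# Aggregate to the grain used in Tableau and derive the core KPIs
kpis = (
    clean.groupby(["device", "match_type", "search_theme"], as_index=False)
         [["impressions", "clicks", "conversions", "cost", "revenue"]].sum()
)
kpis["ctr"] = kpis["clicks"] / kpis["impressions"]
kpis["cpa"] = kpis["cost"] / kpis["conversions"]
kpis["imp_to_conv"] = kpis["conversions"] / kpis["impressions"]
kpis["p_and_l"] = kpis["revenue"] - kpis["cost"]
kpis.to_csv("data/model/kpis_by_segment.csv", index=False)
```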

Visualization & Storytelling

  • Designed Tableau dashboard with a KPI header (see below)

  • Published to Tableau Public and linked to GitHub repo with code and documentation

Operational Lens

  • Because the dataset is static, I treated this as a completed campaign. For a real account, I would:

    • Connect to live data through a data warehouse (e.g., Snowflake) or the ad platform’s API and schedule daily or intraday refreshes

    • Add simple alerting rules (CPA or P&L thresholds by device or theme); a minimal sketch follows this list

    • Extend the model to include targets for easier day-to-day decision-making
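
For the alerting idea, a minimal sketch of the kind of rule I mean; the thresholds, column names, and notification hook are all placeholders, not values from this dataset:

```python
import pandas as pd

# Hypothetical guardrails; real thresholds would come from the account's targets
CPA_CEILING = 40.0     # flag a segment when its CPA exceeds this
PL_FLOOR = -1_000.0    # flag a segment when its daily P&L falls below this

def check_segments(daily: pd.DataFrame) -> pd.DataFrame:
    """Return the device/theme segments that breach a guardrail for one day of data."""
    agg = (
        daily.groupby(["device", "search_theme"], as_index=False)
             [["cost", "revenue", "conversions"]].sum()
    )
    agg["cpa"] = agg["cost"] / agg["conversions"].where(agg["conversions"] > 0)
    agg["p_and_l"] = agg["revenue"] - agg["cost"]
    return agg[(agg["cpa"] > CPA_CEILING) | (agg["p_and_l"] < PL_FLOOR)]

# In a scheduled job, `daily` would be pulled from the warehouse or platform API,
# and any rows returned here would be routed to e.g. a Slack webhook or email.
```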

Insights Gained

  1. Desktop is the most impactful device, despite lower volume.

    Mobile generates ~2.5× the impressions of Desktop, but Desktop’s impression-to-conversion rate is roughly double that of Mobile.

    Implication: We’re buying a lot of relatively inefficient mobile traffic. I would explore shifting budget and/or raising bids on desktop until marginal CPA equalizes across devices.
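
Roughly how this device-level check looks in the notebook, reusing the assumed column names and the cleaned file from the sketches above:

```python
import pandas as pd

clean = pd.read_csv("data/clean/paid_search_clean.csv")  # cleaned frame from the earlier sketch

# Device-level efficiency: volume vs. conversion efficiency vs. cost
device = (
    clean.groupby("device", as_index=False)
         [["impressions", "clicks", "conversions", "cost", "revenue"]].sum()
)
device["imp_to_conv"] = device["conversions"] / device["impressions"]
device["cpa"] = device["cost"] / device["conversions"]
device["p_and_l"] = device["revenue"] - device["cost"]

print(device.set_index("device")[["impressions", "imp_to_conv", "cpa", "p_and_l"]])
```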

  2. The campaign is not profitable at current economics.

    Across the full period, the campaign produces $562K in revenue on $635K in cost, for a P&L of –$73K.

    Even with a strong CTR, the current CPA and impression-to-conversion rate leave too little margin.

  3. Theme mix is a major lever.

    High-volume themes such as Coupon/Discount drive a large share of revenue and impressions but are also responsible for much of the negative P&L.

    Lower-volume themes show more efficient acquisition.

    Implication: In a live campaign, I would propose reallocating budget away from unprofitable high-spend themes and testing increased investment in the more efficient ones.
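
The theme comparison behind this insight comes from an aggregation along these lines (illustrative; "search_theme" is my assumed column name):

```python
import pandas as pd

clean = pd.read_csv("data/clean/paid_search_clean.csv")

# Theme-level economics, sorted so the biggest P&L drags surface first
theme = (
    clean.groupby("search_theme", as_index=False)
         [["impressions", "conversions", "cost", "revenue"]].sum()
)
theme["cpa"] = theme["cost"] / theme["conversions"]
theme["p_and_l"] = theme["revenue"] - theme["cost"]
theme = theme.sort_values("p_and_l")

# Top rows: high-spend themes losing money (reallocation candidates);
# bottom rows: efficient themes that could absorb more budget.
print(theme[["search_theme", "cost", "revenue", "p_and_l", "cpa"]])
```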

  4. Clear holiday seasonality shapes spend decisions.

    Both impressions and cost spike in November, consistent with holiday shopping behavior. CTR stays healthy, but CPA does not improve enough to offset the higher spend, so losses widen in peak season.

    Implication: For an ongoing campaign, I’d recommend tighter CPA targets, more frequent monitoring, and device-specific bid adjustments during November to protect margin while capturing incremental demand.