Website Content Export
This file consolidates user-facing source content from:
- index.html
- projects/index.html
- projects/*/index.html
- projects/projects-data.js
- blog/index.html
- _posts/2026-01-25-five-lessons-from-my-first-year-as-cos.md
- _posts/2026-03-14-a-few-thoughts-on-ai-no-one-asked-for.md
Home
Navigation
- Jenny Hurn
- About
- Work
- Writing
- Contact
Hero
Jenny Hurn
Descriptor (rotating):
- Chief of Staff
- Builder of programs that matter.
- Solver of ambiguous problems.
- Maker of things that delight.
- Designer of experiences worth having.
- Operator at the early-stage edge.
- AI aficionado.
- Champion of the team behind the work.
I take on the projects, puzzles, and problems nobody’s claimed yet.
Marquee
- Bootcamps & Webinars
- Website Redesign
- Onboarding Programs
- Certification Curriculum
- Core Value Formation
- AI SDR Motion
- Best Places to Work
- Field Events
- Plans & Pricing
- Agentic Data Bootcamp
About
If there are two things I love, it’s a project and a puzzle.
I’m drawn to the problems where the path doesn’t exist yet and someone has to forge it. That’s been true whether I was building a go-to-market motion from scratch, designing the programs that make a team effective, or figuring out how to launch something into the world that didn’t exist before. I treat it like a puzzle, a game: find the root of the problem, map a route to the desired outcome, and execute with a relentless pursuit of excellence.
But what makes that work meaningful to me is that it’s always about people — the team doing the work, the customers receiving it, the practitioners in the room. I think a lot about human flourishing: what it takes for people to do their best work, feel genuinely connected to what they’re building, and actually enjoy being part of it. That shapes everything from how I design an onboarding program to how I run a campaign.
And I love this work. The craft of untangling threads. The art of understanding an audience. That thrilling moment something clicks into place or a to-do list is thoroughly crossed off. I think that’s why I’ve found such great joy in working with AI. It’s a tool to make this work more efficient, effective, and delightful for others; a way to think bigger and move faster toward something worth making.
The thing that truly excites me though is that the world is always full of interesting puzzles to solve and incredible people to solve them with.
Selected Work (Home Cards)
Projects, puzzles, and other obsessive pursuits.
- Agentic Data & Analytics Bootcamp (Marketing & Sales, AI Assisted) We built a two-day, multi-track practitioner bootcamp from nothing in six weeks — speakers, partners, full content architecture. The category was new and so was the playbook. By the end, people weren’t just learning about agentic data engineering. They were building data pipelines with agents.
- Ascend Website Redesign (Brand & Marketing, AI Assisted) We were launching a new product and our website wasn’t telling the right story. I led a full overhaul — new positioning, new design system, new content architecture — shipped in six weeks. Website traffic doubled. Conversions increased 5x. Also: Otto got a glow-up.
- New Hire Onboarding Program (People & Ops, AI Assisted) Onboarding existed but it wasn’t consistent — different roles, different levels of support, different experiences depending on who you happened to sit next to. I built a program that worked the same way regardless of role or time zone. 96% of new hires reported feeling more effective from day one.
- Agentic Data Engineering Certification — 101, 201 & 301 (Education, AI Assisted) We wanted to establish Ascend as the definitive voice in agentic data engineering. So I built a three-course certification curriculum — 101, 201, and 301 — in 48 hours using a multi-agent workflow. Research agents, writer agents, eight parallel reviewer agents. Twenty-one production-ready modules came out.
- Core Value Formation (Culture, AI Assisted) A core values program that worked fine on paper and didn’t feel alive. Redesigned the recognition model, ran an in-person rollout with a scavenger hunt through California Avenue. The bar: do the values change anything? Ours did.
- AI SDR Outreach Motion (Revenue, AI Assisted) Getting outbound right was iteration — tested AI SDR, Artisan, HubSpot’s agent, and others before landing on AmpleMarket. Micro-campaigns of 20–50 people beat big blasts. The signal layer separated research from spam. Pipeline without added headcount.
- Offsites, Hackathons & Team Events (People, AI Assisted) For a remote-first team, connection isn’t a nice-to-have — it’s infrastructure. Quarterly in-person kickoffs, themed Hackdays, weekly all-hands, Demo Days. Within my first six months at Ascend, employee engagement improved 50%.
- Built In Best Places to Work (Culture) Building an award-winning culture requires a lot of intentionality. Winning four consecutive Built In Best Places to Work awards was less about the award and more about creating a space for everyone on the team to thrive.
- Field Events & Webinar Program (Demand Gen, AI Assisted) For four years, I owned Ascend’s events program end to end — from large industry conferences to a webinar program I built from the ground up. Thousands of MQLs a year and a brand presence that showed up consistently wherever our customers were.
- Plans & Pricing Rollouts (GTM & Strategy, AI Assisted) Rebuilding pricing from the ground up is one of the hardest things a company can do. I held product, sales, finance, CS, and existing customers in one coherent story from start to finish. The launch landed without a single surprise.
Writing (Home)
- How I use Cursor for Agentic Operations (Blog · Coming soon)
- A Few Thoughts on AI No One Asked For (Blog · March 14, 2026)
- Five Things I Learned in My First Year as Chief of Staff (Blog · January 25, 2026)
Now
Working on
Developing agentic operations with Cursor + experimenting with adversarial agents.
Thinking about
How to equip teams for the agentic era: the culture, tech, and process.
For Fun
- Cooking through Good Things by Samin Nosrat.
- Keeping up that Wordle streak.
- Redwood walks.
Contact
Got a puzzle you’re looking to solve? Let’s talk.
I’m at my best when the work is early-stage and the problems don’t have owners yet. If that sounds like where you are, I’d love to hear about it.
Footer: Delighted to meet you.
Projects
Projects Index
Title: Projects
Description: Case studies across GTM, people operations, and strategy programs.
Selected Work intro:
Projects, puzzles, and other obsessive pursuits.
Project list on this page mirrors the 10 project cards listed in the Home section.
Project Detail Pages
1) Agentic Data & Analytics Bootcamp
Path: projects/bootcamp/index.html
Description: Two-cohort practitioner bootcamp program with 5,000+ registrants and hands-on agentic data engineering labs.
Summary: Built and launched a free, two-day, multi-track bootcamp program to help data teams adopt agentic workflows with real pipeline builds.
Outcome: Reached 5,000+ registrants across two cohorts with high post-event engagement and follow-up signal.
Ownership: Owned curriculum architecture, lab documentation, speaker operations, and post-event conversion motions.
Impact: 5K+ registrants
Category: Marketing & Sales
Tags: Demand Gen, Education, Partner Marketing, Microsoft Azure
AI-Assisted Notes
- The bootcamp was itself a live demonstration of what agentic AI can do for data teams — which made it feel right to use AI throughout the production process too.
- The first phase was content: drafting the curriculum, building out the lab guides in Docusaurus, and iterating on communications and campaigns. But the most interesting part was using agents to review the lab documentation itself — review agents that connected directly to Ascend’s own agents via MCP servers, running the same prompts and steps we were asking participants to use. They tested the documentation from the inside. It let us catch problems, refine prompts, and ship cleaner labs without relying on a heavy human QA cycle.
- Post-event, I built a data pipeline using AI to analyze the registrant and participant data — identifying which cohort members to prioritize for follow-up and routing them into targeted outbound email sequences. The bootcamp produced a lot of signal. The pipeline made sure we acted on it.
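A prioritization step like that can be sketched in a few lines. This is an illustrative Python version only; the field names, weights, and thresholds are hypothetical, not the actual implementation:

```python
# Hypothetical sketch of a post-event prioritization step: score
# registrants on simple engagement signals, then pick who to route
# into targeted follow-up sequences. Weights are illustrative.

def score_registrant(r):
    """Weight attendance and hands-on activity over registration alone."""
    score = 0
    score += 3 if r.get("attended_day2") else 0
    score += 2 if r.get("completed_lab") else 0
    score += 1 if r.get("asked_question") else 0
    return score

def prioritize(registrants, top_n=50):
    """Return the highest-signal registrants for personalized outreach."""
    ranked = sorted(registrants, key=score_registrant, reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    cohort = [
        {"email": "a@example.com", "attended_day2": True, "completed_lab": True},
        {"email": "b@example.com"},
        {"email": "c@example.com", "attended_day2": True, "asked_question": True},
    ]
    for r in prioritize(cohort, top_n=2):
        print(r["email"], score_registrant(r))
```

The design point is separating scoring from routing: once each registrant has a score, the same ranked list can feed different outbound sequences.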
Body
Everyone was talking about AI. Fewer people could explain how to actually use it — safely, effectively, at scale. In the data space especially, there was a lot of noise and not much of a path. Data teams wanted to move but didn’t know where to start. So we built one.
The Agentic Data & Analytics Bootcamp was a free, two-day virtual program — multi-track, hands-on, with partner sessions from MotherDuck and Microsoft Azure. Participants didn’t just learn about agentic data engineering. They built real data pipelines with AI agents before they left.
What I owned
- Curriculum architecture and lab guides in Docusaurus
- Speaker logistics and registrant campaigns
- Post-event follow-up: credentials, hackathon scoring, and email sequences
Across two cohorts, the event drew over 5,000 registrants, well above industry benchmarks for participation. Over 200 LinkedIn posts followed from participants who were genuinely proud of what they'd built and learned. That doesn't happen by accident. It happens when an experience gives people something worth talking about.
When we started, we didn’t have a playbook to copy from. We figured a lot out in real time. But it was such an impactful experience. I’d do it again. (And I did.)
What participants said
- “Big takeaway for me: agentic AI is less about clever prompts and more about thoughtful system design. With the right context, tools, triggers, and guardrails, agents become safe and useful.”
- “In just a few hours, we built a complete carbon data pipeline and live analytics dashboard — something that traditionally would take days or weeks. We are entering a phase where data systems are no longer just pipelines — they are becoming autonomous collaborators.”
- “This experience really highlighted how agentic workflows can reduce manual effort, improve reliability, and make analytics far more operational and scalable.”
- “Huge thanks to the team for the expert guidance and a flawless event!”
Screens
- Bootcamp promo banner
- Live virtual bootcamp
- Curriculum & lab hub
- Bootcamp agenda sample
- Bootcamp slides
- Bootcamp speakers
2) Ascend Website Redesign
Path: projects/website/index.html
Description: Full website positioning and content system overhaul that drove a 5x conversion increase after launch.
Summary: Led a full website rebuild to match Ascend’s product evolution and launch of Agentic Data Engineering positioning.
Outcome: Website traffic doubled and conversion increased 5x within two weeks of launch.
Ownership: Owned positioning framework, audience architecture, and page-by-page narrative across the redesign.
Impact: 5× increase in conversion
Category: Brand & Marketing
Tags: Brand, Messaging, Conversion, Category Creation
AI-Assisted Notes
- This project happened in early 2024, before agents were a reliable part of the workflow. What existed were powerful language models — capable, but only as focused as the context you gave them. So I treated the AI setup as an architecture problem.
- I built a dedicated ChatGPT project for the site with clear guidelines, rules, and a positioning framework baked in from the start. Each major section of the site got its own dedicated thread — not just for organization, but as a form of context engineering: keeping the model’s working memory focused on one part of the story at a time rather than letting it drift. The rules I set up weren’t just style preferences. They were the beginnings of an agentic harness — the same kind of structured AI guidance that would later become much more explicit in our agentic work.
- What that setup made possible was a fundamentally different pace of work. What would have taken weeks of document drafts happened in days — not because the AI was doing the thinking, but because it could hold context and push back on positioning in a way that made the thinking sharper and faster. I wasn’t generating copy and editing it. I was working through the hardest positioning questions out loud with something that could actually respond.
- Otto’s redesign was a separate AI adventure entirely — Nano Banana, a lot of image generations, and one moment where I genuinely thought: that’s him.
Body
We had spent a year rebuilding Ascend into an AI-native data engineering platform. The product had fundamentally changed, but the website hadn't kept up. It wasn't converting, it wasn't telling the right story, and we were about to launch a new category: Agentic Data Engineering. We needed a new language to help users and prospects understand what we had built and the power they could harness from AI, so we decided not just to refresh the website but to rebuild everything.
What I owned
- Product marketing and positioning framework for a new category
- Audience mapping across eight buyer personas
- Content architecture and page-by-page copy across 40+ new designs
The build happened in six weeks with the team at FruitBowl Digital — they built all our new pages and migrated eight years of WordPress content. It was a massive undertaking under a tight timeline, and to watch everyone on the team rise to the occasion was genuinely inspiring.
Alongside the site, we gave Otto a proper glow-up. He started as a Slack emoji someone generated at a conference table in Vegas three years ago. After about two hours and $15 on Nano Banana, he became the 3D character you see across the platform today. Friendly, curious, a little scrappy — which is pretty on-brand for how a lot of the best things at Ascend got made.
Screens
- Homepage — Hero & Positioning
- Product & Use Case Pages
- Website content
- Website social proof
- Website information architecture in Miro
- Website design phase in Figma
3) New Hire Onboarding Program
Path: projects/onboarding/index.html
Description: Structured onboarding program design that helped 96% of new hires feel effective from day one.
Summary: Built a role-aware onboarding system to replace inconsistent new-hire experiences across teams and time zones.
Outcome: 96% of new hires reported feeling effective from day one.
Ownership: Owned onboarding program design, process documentation, and tracking workflows.
Impact: 96% effective from day one
Category: People & Ops
Tags: Employee Experience, Operations, Retention
AI-Assisted Notes
- The most recent version of this program — a 40+ page onboarding guide I built for Dru, our Fractional VP of Marketing — was built almost entirely with AI assistance. I used Claude to help structure the guide, draft role-specific sections, and keep the voice consistent across a document that long. AI is genuinely good at holding, crafting, and disseminating massive amounts of context at once. Giving agents access to internal docs, a new hire's background, and what they need to know in week one produces tailored, high-quality resources that empower folks to start making an impact right away.
- But the best part of leveraging these new AI skills to onboard Dru? He used AI to turn the onboarding doc into a podcast — and listened to it on his commute into the office on day one. By the time we sat down for our first meeting, he’d already absorbed the whole thing. That’s the version of onboarding I was trying to build.
Body
When I started at Ascend, my first real project was formalizing our onboarding program. My own onboarding had been ad hoc — meetings with my hiring manager, a lot of questions, figuring it out as I went. The reality was that most new hires were getting different levels of support depending on their role. There wasn’t a formalized structure in place to ensure people could hit the ground running with clear expectations and all the tools and resources they needed.
What we put in place
- Pre-day-one communication so people felt like insiders before they logged in
- A structured first week and role-specific ramp paths
- Buddy pairing and 30 / 60 / 90 check-ins to catch what was slipping
The design principle was simple: by the end of week one, someone should feel like they belong here and know what they're supposed to be doing. That sounds obvious, but it's actually pretty hard to pull off consistently across different roles, levels, and time zones. The program had to scale without losing the personal quality that makes early days feel meaningful.
Building out tracking systems, new hire checklists, and a consistent onboarding program resulted in 96% of new hires reporting they felt effective in their role from day one. It enabled new teammates to start shipping code and taking on-call shifts faster than they had before — and more importantly, it gave people a real foundation for long-term success.
What new hires said
- “I like that the process includes training on different areas of the company that may not directly relate to what I do — it helps build context and an overall process understanding.”
- “[I appreciated] the onboarding docs tailored to my role with well documented things to do. I also enjoyed the clear team overview.”
- “People are incredibly smart and helpful here — huge fan of the AMA [Ask me Anything] session as a forum to learn more about the company, technology, and team.”
Screens
- New Hire Welcome Kit
- 30/60/90 Ramp Framework
4) Agentic Data Engineering Certification — 101, 201 & 301
Path: projects/certification/index.html
Description: Built a three-course certification program with 21 modules in a 48-hour multi-agent production cycle.
Summary: Connected two years of agentic content into a structured learning path for foundations, systems design, and production operations.
Outcome: Delivered 21 modules across three course levels in a 48-hour build window.
Ownership: Designed the agentic authoring/review architecture and owned curriculum quality gates.
Impact: 21 modules · 48-hour build
Category: Education
Tags: Curriculum, Credentialing, Category Creation, Community
AI-Assisted Notes
- Working with agents to build this in 48 hours was genuinely one of the most exciting things I’ve done. The whole thing started in Cursor’s planning mode — which is less like prompting and more like writing a spec. Before a single agent ran, I worked through a thorough implementation plan that defined the architecture, prompts for each subagent, the module outline and curriculum structure, rules and guidelines for what production-ready content actually meant in this context, and how we would grant certifications to folks who completed the courses.
- Having a clear, specific spec before we built meant we weren’t figuring out the architecture mid-run. The agents had what they needed from the start. That’s what made 48 hours possible — not speed for its own sake, but the kind of upfront clarity that let everything downstream move fast and stay coherent.
Body
The knowledge existed. For two years, Ascend had led the bleeding edge of experimenting and exploring, pushing the boundaries on what data teams could do to leverage AI. We’d published our findings: blog posts on agent architecture, lab guides on context engineering, deep dives into observability and governance. Each piece came from the real work we were doing with our own data workflows and the vision of helping data teams become more agentic.
The problem was that none of it was connected. A data engineer trying to go from ‘I get what agents are’ to ‘I can run them reliably in production’ had to collect the pieces themselves and figure out the order. The curriculum wasn’t there. Then our internal hackathon gave us 48 hours and a deadline.
The approach I took was, in hindsight, the only right one: use the same kind of agentic system the curriculum teaches to build the curriculum itself. Here’s how it worked:
The architecture
- Research subagents worked in parallel — one per module — pulling from two years of our published blog posts, labs, and documentation to build source-verified briefs before a single word of new content was written
- Writer subagents consumed those briefs and produced full, comprehensive modules
- Eight parallel reviewer subagents ran simultaneously against each draft, each with a distinct persona: a senior data engineer, an instructional designer, a legal reviewer, and five more — none of them talking to each other
- A synthesis subagent read all eight reports, deduplicated findings, issued a ranked change list, and rendered a verdict: needs revision or ready for human review
- Rerun the cycle until the synthesis subagent had no major issues to report. The synthesis subagent owned the readiness decision, not me. That constraint mattered — it meant I didn’t need to review the content until it had been thoroughly checked against the highest standards for veracity, readability, and efficacy.
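The cycle above can be sketched as a small orchestration loop. The agent calls here are hypothetical stand-ins; only the control flow mirrors the architecture described: independent parallel reviewers, a synthesis step that owns the readiness verdict, and reruns until no major issues remain.

```python
# Illustrative sketch of the write -> parallel review -> synthesize loop.
# review(), synthesize(), and revise() stand in for subagent calls.

from concurrent.futures import ThreadPoolExecutor

REVIEWER_PERSONAS = [
    "senior data engineer", "instructional designer", "legal reviewer",
    # ...five more personas in the real setup
]

def review(persona, draft):
    # Stand-in for a reviewer subagent; returns a list of issue strings.
    return []

def synthesize(reports):
    # Stand-in for the synthesis subagent: dedupe findings, rank them,
    # and render the verdict. The synthesis step owns the decision.
    issues = sorted({issue for report in reports for issue in report})
    return {"issues": issues, "ready": not issues}

def revise(draft, issues):
    # Stand-in for a writer-subagent revision pass against the change list.
    return draft

def review_until_ready(draft, max_rounds=5):
    """Rerun the review cycle until synthesis reports no major issues."""
    verdict = {"issues": [], "ready": False}
    for _ in range(max_rounds):
        with ThreadPoolExecutor() as pool:
            # Reviewers run in parallel and never see each other's reports.
            reports = list(pool.map(lambda p: review(p, draft), REVIEWER_PERSONAS))
        verdict = synthesize(reports)
        if verdict["ready"]:
            break
        draft = revise(draft, verdict["issues"])
    return draft, verdict
```

The key structural choice is that `review_until_ready` never inspects the draft itself; readiness is decided entirely by the synthesis step, matching the constraint that the human only reviews content after the loop reports clean.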
The result: 21 modules across three courses — 101 for foundations, 201 for systems design, 301 for production operations — each put through multiple rounds of parallel expert review, in the same 48-hour window that would have produced maybe three or four decent drafts by hand. What surprised me wasn’t the speed. It was how the review structure changed the content. Eight independent reviewers with no shared context don’t just catch errors — they surface tensions. The modules that went through the most revision rounds are consistently the strongest. The process didn’t save us from hard thinking. It organized and accelerated it.
Screens
- 101 Foundations Track — Introduction
- 101 Foundations Track — Maturity Assessment
- Certification Modules Overview
- 301 Advanced Practitioner Track — Module Quiz
5) Core Value Formation
Path: projects/values/index.html
Description: Company-wide values and recognition program redesign focused on decision-quality and cultural durability.
Summary: Restructured stale company values into a practical operating system supported by continuous recognition rituals.
Outcome: Established value frameworks that held up in real hiring, feedback, and performance decisions.
Ownership: Owned value redesign, recognition model updates, and in-person rollout programming.
Impact: 4 values that inspired
Category: Culture
Tags: Culture, Org Design, Facilitation, Leadership
AI-Assisted Notes
- I used AI at two distinct points in this project. The first was in redesigning the recognition program — using it to generate ideas across a wide range, challenge the assumptions baked into the existing model, and map specific recognition formats to each of our four values. That ideation phase would have taken multiple team workshops; with AI it happened in a few focused sessions.
- The second was in producing the in-person rollout: agenda design, activity instructions, the value-themed scavenger hunt riddles for California Avenue. AI wrote the first drafts of everything and I refined them. The human work was in knowing which ideas would actually land with our team and which would feel forced. I find collaborating with agents to execute creative and delightful ideas enables me to deliver experiences that are far more powerful than I could deliver without these tools at hand.
Body
We had core values at Ascend — but some of them had started to feel stale. They weren’t guiding day-to-day decisions the way we needed them to. At the same time, we kept finding ourselves repeating two phrases that weren’t formally part of our values yet but clearly needed to be: Spark Delight and Consistently Curious.
The reasoning was clear. As we built a brand new platform, it became critical to orient the entire team — product, engineering, customer-facing — around building delightful experiences for our users. Delight wasn’t a nice-to-have; it was a strategic direction. And as we pushed into agentic systems and the leading edge of AI, staying consistently curious wasn’t optional. Without it, we knew we’d fall behind.
So we took a step back and restructured our core values to include both — and used that moment to revamp the recognition program around all four values: I Got It, 1 Team 1 Dream, Spark Delight, and Consistently Curious.
The redesign moved us toward a continuous recognition model with real-time callouts, value-specific award formats, and more intentional rewards. We also ran an in-person rollout for the team: a scavenger hunt through California Avenue with value-themed riddles, a LEGO bridge-building challenge for ‘I Got It,’ micro-research sprints for ‘Consistently Curious,’ and a proper quarterly awards ceremony to tie it all together.
Engagement skyrocketed. Belief in what the team could accomplish increased. Morale improved. The bar I always use for values work: do they change anything? Does someone make a different hiring decision, give harder feedback, hold a tougher line — because of the values? Ours passed that test. They showed up in performance reviews, in how we talked about hard decisions, and in what we celebrated. That’s what made the program worth doing.
Screens
- Core Values
- Core Values Rollout Slides
6) AI SDR Outreach Motion
Path: projects/aisdr/index.html
Description: Signal-based outbound architecture that improved targeting quality and scaled pipeline without added headcount.
Summary: Built an AI-assisted outbound motion around micro-campaigns and signal quality rather than high-volume blasts.
Outcome: Improved outreach precision and pipeline generation while maintaining existing team size.
Ownership: Owned tool evaluation, signal logic, message architecture, and iterative campaign optimization.
Impact: Agentic targeting at scale
Category: Revenue
Tags: Revenue, Pipeline, AI, GTM, AmpleMarket, Outbound
Body
Getting outbound right is mostly a story of iteration. I tested a lot — AI SDR tools like Artisan, HubSpot’s outbound agent, and others — before landing on AmpleMarket as our primary platform. AmpleMarket gave us a highly customizable agentic framework where we could leverage our own agentic expertise to tune messaging and craft more effective outbound than any other tool allowed.
One of the earliest and most important things we learned: micro campaigns outperform broad ones. Hyper-targeted lists of 20–50 people, built around a specific signal or moment, consistently outperformed larger blasts. It forced better messaging — you can’t be generic when your list is that focused — and it showed up in the results.
The signal layer is the part most teams underinvest in. It’s the difference between outbound that feels like research and outbound that feels like spam. We built the ICP criteria, the signal logic, and the message architecture around that principle — and kept tuning based on what was getting replies versus getting ignored.
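The micro-campaign principle can be illustrated with a small sketch: group matching contacts by the signal that triggered them, then cap each list at 20–50 people. Everything here is a hypothetical simplification, not how AmpleMarket works under the hood:

```python
# Hypothetical sketch of signal-based micro-campaign building:
# contacts without a signal are skipped, matching contacts are grouped
# by signal, and each campaign list is kept between min_size and max_size.

def build_micro_campaigns(contacts, min_size=20, max_size=50):
    by_signal = {}
    for c in contacts:
        if c.get("signal"):
            by_signal.setdefault(c["signal"], []).append(c)

    campaigns = []
    for signal, group in by_signal.items():
        # Split oversized groups into max_size chunks; drop remainders
        # too small to justify a dedicated, tailored campaign.
        for i in range(0, len(group), max_size):
            chunk = group[i:i + max_size]
            if len(chunk) >= min_size:
                campaigns.append({"signal": signal, "contacts": chunk})
    return campaigns
```

The cap is the point: a list built around one specific signal and held to 20–50 people forces messaging that speaks to that moment rather than a generic blast.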
One more learning worth calling out: LinkedIn outreach was far more effective than any other channel.
Screens
- Signal Targeting & Sequence Architecture
- Personalization & Message Testing
7) Offsites, Hackathons & Team Events
Path: projects/offsites/index.html
Description: Distributed-team event and engagement program design that contributed to a 50% eNPS improvement.
Summary: Built recurring offsite, hackday, and all-hands programs to strengthen alignment and connection in a remote-first team.
Outcome: Employee net promoter score improved by 50% within the first six months of the program.
Ownership: Owned event strategy, facilitation design, logistics systems, and cross-functional execution.
Impact: 50% increase in engagement
Category: People
Tags: Culture, Employee Experience, Remote, Engagement
AI-Assisted Notes
- Event production has a lot of surface area. I use AI to draft and iterate on vendor research, logistics documentation, agenda design, internal communications, activity instructions, slide deck prep and review, and more.
- Collaborating with AI and agents where I can enables me to focus on the things that are more impactful — the judgment calls that it doesn’t have the full sensibility to understand: what activities would fit the team’s energy, how to structure a dinner for a group that hasn’t been in a room together in six months, what the arc of the experience should feel like. Agentic workflows give me the space to be more intentional about these decisions that deeply impact the experience for everyone involved.
Body
I’ve run a lot of events in my career, and the thing I’ve learned is that the success of an offsite is almost entirely determined by decisions that feel small — the dinner format on night one, how much unstructured time you leave, whether there’s a space to gather after hours and play board games or a round of liar’s dice (an Ascend team favorite). The big agenda items matter, but the experience people remember is usually in between them.
The best offsites aren’t just agendas; they’re journeys. I think carefully about how to lead people through an experience that builds on itself, finding the perfect moments for alignment and focus, creative collaboration, and genuine connection. For a distributed team, in-person time is the opportunity to shape how people experience the team and the vision — to create the shared reference points that change how everyone works together for the rest of the year.
What the program looked like
- Quarterly in-person kickoffs to align the team around strategy and priorities
- Quarterly themed Hackdays (our most recent, “Agents Assemble,” was an Avengers-themed event with a level of commitment to the bit that I stand by)
- Monthly team retrospectives — a time for folks to ask questions about anything across the business, from the roadmap to customer satisfaction
- Weekly all-hands and Demo Days where the team could show off their work and get recognized
Within my first six months at Ascend, we saw a 50% improvement in eNPS. I’d be the first to say that’s not entirely attributable to events — a lot of things go into whether people feel good about where they work. But building out these programs has been one of the most rewarding parts of my time at Ascend, and I’ve loved continuing to invest in them.
Screens
- Company All-Hands & Offsites
- Team Volunteering
- Hackathon & Team Sprints (AI in Wonderland)
- Hackathon & Team Sprints (Agents Assemble)
8) Built In Best Places to Work
Path: projects/builtin/index.html
Description: Culture and people-program operations that supported four consecutive Built In Best Places to Work recognitions.
Summary: Sustained people programs and submission quality that translated culture operations into repeated external recognition.
Outcome: Earned four consecutive Built In Best Places to Work awards.
Ownership: Owned annual submissions and the underlying people-program infrastructure behind the recognition.
Impact: 4 consecutive wins
Category: Culture
Tags: Recognition, Awards, Culture, Employee Experience
Body
Building an award-winning culture requires a lot of intentionality. Winning four consecutive Built In Best Places to Work awards was less about the award and more about creating a space for everyone on the team to thrive.
The Built In award is based on compensation, benefits, and the programs a company has in place to support its people. You can’t manufacture it — if the underlying programs aren’t there, the score reflects that. I owned the submission process each year, which meant telling the story of what we’d actually built clearly and accurately. But the harder work was always the underlying programs themselves: onboarding, offsites, recognition, benefits design, the culture work. The award is downstream of all of that.
Four years is a long time to sustain something in a startup environment where everything else is changing constantly. Headcount fluctuated, the product evolved, the market shifted. What stayed consistent was a genuine commitment to the people side of the company. The award was nice. The part that mattered was what it confirmed.
Screens
- Award Campaigns & Press
- Employee Survey Results
9) Field Events & Webinar Program
Path: projects/field-events/index.html
Description: End-to-end event and webinar program operations that generated thousands of qualified leads annually.
Summary: Built and ran conference, webinar, and partner event programs with structured follow-up and content reuse loops.
Outcome: Produced thousands of marketing-qualified leads per year while compounding brand presence.
Ownership: Owned event strategy, execution, enablement, and post-event conversion workflows.
Impact: 6K+ annual leads
Category: Demand Gen
Tags: Events, Co-Marketing, Pipeline, Webinars, Sequel.io, Snowflake, Databricks
AI-Assisted Notes
- AI is present throughout the entire events program — from planning to execution to follow-up. At the planning stage, I use AI agents to draft content calendars, build out event decks, and develop hands-on lab guides. During execution, AI helps me move faster on the things that need to happen at volume: recapping sessions, turning transcripts into blog posts, drafting persona-segmented follow-up sequences. After each event, I use AI to identify the highest quality opportunities in our lead data and personalize outbound to those folks at something closer to live speed.
- AI gives me the ability to effectively harvest the work we put into these programs. The event generates signal. AI helps make sure we actually act on it.
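The lead-prioritization step described above can be sketched as a simple scoring pass. This is a hypothetical illustration only: the field names, keywords, and weights below are invented for the example, not the actual workflow or data model.

```python
# Hypothetical sketch: ranking event leads so follow-up effort goes to
# the highest-signal conversations first. All fields and weights are assumed.

def score_lead(lead):
    """Score a booth/webinar lead on simple, explicit signals."""
    score = 0
    # Title match: data/platform roles as the core audience (assumed keywords).
    if any(kw in lead.get("title", "").lower() for kw in ("data", "platform", "analytics")):
        score += 3
    # Engagement depth: a hands-on lab beats a session view beats a badge scan.
    score += {"lab": 5, "session": 3, "scan": 1}.get(lead.get("touch"), 0)
    # Activity at a known account compounds the signal.
    if lead.get("known_account"):
        score += 2
    return score

def prioritize(leads, top_n=10):
    """Return the top-N leads for personalized outbound."""
    return sorted(leads, key=score_lead, reverse=True)[:top_n]
```

The point of a pass like this isn't precision; it's making sure the follow-up queue starts with the conversations most likely to become pipeline.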
Body
If you’ve ever joined an Ascend webinar, attended a meetup, or visited our booth at a conference, we’ve probably crossed paths! I’ve owned Ascend’s field events program end to end — from in-person conferences to a virtual webinar program I built from the ground up, across both owned and partner channels.
What the program covered
- In-person conference presence with teams of up to 20 people
- Pre-event training and messaging prep so the team could show up ready to have real conversations, not just hand out swag (iykyk)
- A diligent follow-up motion after every event to turn booth interactions into actual pipeline
- A virtual webinar program running up to 20 events a year across owned and partner channels
The webinar program became one of our most consistent drivers of pipeline and thought leadership. But the operational burden of running it manually was genuinely overwhelming at that volume. Moving to Sequel.io changed everything — setup went from days to hours, which let us run more events, respond faster to what the market was asking about, and give the program a real home on the Ascend website. The program compounded: each session became a blog post, a follow-up sequence, a series entry.
The result was thousands of marketing-qualified leads a year and a brand presence that showed up consistently wherever our customers were.
Screens
- Field Events — Snowflake World Tour Booth
- Field Events — Snowflake Summit Booth
- Field Events — Sequel.io Event Hub
- Field Events — Sequel.io Dashboard
- Field Events — Hands-on Lab Build
- Field Events — AWS Webinar Promo
10) Plans & Pricing Rollouts
Path: projects/pricing/index.html
Description: Cross-functional pricing and free-trial rollout leadership across product, sales, finance, and customer teams.
Summary: Led pricing model, launch messaging, and alignment work for tier and free-trial rollouts during platform transition.
Outcome: Delivered pricing changes with aligned internal execution and low customer confusion at launch.
Ownership: Owned cross-functional coordination, scenario modeling support, and rollout communication strategy.
Impact: Zero surprises at launch
Category: GTM & Strategy
Tags: Pricing, GTM, Platform, Free trial, Launch, Sales Enablement
AI-Assisted Notes
- One of my favorite parts of leveraging AI for this project was building out pricing calculators for customers — interactive applications they could use to better understand how their usage would impact their pricing. With AI, generating these kinds of code-based applications becomes really easy, especially when we have a model in a spreadsheet that we can share with the LLM to generate those artifacts. It also builds a lot of trust and delight when customers have a pricing calculator that’s bespoke to their needs and usage patterns.
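The core of a calculator like the one described above is a small tiered-usage model. This is a hypothetical sketch only: the tier names, included credits, and rates below are invented for illustration, not Ascend's actual pricing.

```python
# Hypothetical tiered usage-pricing model (all names and numbers invented).

TIERS = {
    "developer":  {"base": 0,    "included_credits": 100,  "per_credit": 1.50},
    "team":       {"base": 500,  "included_credits": 500,  "per_credit": 1.20},
    "enterprise": {"base": 2000, "included_credits": 2500, "per_credit": 0.90},
}

def monthly_cost(tier, credits_used):
    """Base fee plus overage beyond the tier's included credits."""
    t = TIERS[tier]
    overage = max(0, credits_used - t["included_credits"])
    return t["base"] + overage * t["per_credit"]

def cheapest_tier(credits_used):
    """Which tier minimizes cost at a given usage level."""
    return min(TIERS, key=lambda name: monthly_cost(name, credits_used))
```

Wrapping a model like this in a simple interactive front end is what lets a customer plug in their own usage patterns and see the pricing impact directly.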
Body
Launching a new platform means launching new pricing from scratch — and doing it in a way that’s fair to existing customers, reflects the actual cost of new capabilities like AI, and makes sense for a product that didn’t exist in quite this form before. I partnered with leadership, product, and engineering to work through the models and tiers, iterating as the platform developed and making sure nothing we built created surprises for customers who’d been with us before the transition.
The launch of our free trial added another layer of complexity. A 14-day free trial for developers sounds simple, but it requires real clarity on conversion rate assumptions, infrastructure cost coverage, and what ‘free’ actually costs the business before a trial user becomes a customer. We needed numbers we could stand behind — not just pricing that felt right, but pricing that held up when you ran it through the spreadsheet.
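The question of what ‘free’ actually costs reduces to simple arithmetic. A back-of-envelope sketch, with entirely hypothetical numbers:

```python
# Hypothetical free-trial economics check. All inputs are illustrative.

def trial_acquisition_cost(infra_cost_per_trial, conversion_rate):
    """Infra spend to acquire one paying customer via the free trial."""
    trials_per_customer = 1 / conversion_rate
    return infra_cost_per_trial * trials_per_customer

def trial_breakeven(infra_cost_per_trial, conversion_rate, first_year_revenue):
    """Net first-year value of a trial-acquired customer (positive = worth it)."""
    return first_year_revenue - trial_acquisition_cost(infra_cost_per_trial, conversion_rate)

# e.g. at $40 of infra per trial and a 5% conversion rate, each paying
# customer costs $800 in trial infrastructure to acquire.
```

The spreadsheet version had many more inputs, but this is the shape of the check: the trial only pencils out if revenue per converted customer clears the infra cost of all the trials it took to get them.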
The hardest part of pricing work is that it touches everything simultaneously: product, sales, finance, customer success, and the customers themselves. My job was to keep all of those threads coherent — making sure the tiers made sense, the messaging was consistent across every function, and the launch landed without confusion or noise. Pricing changes that go well are invisible. That’s the goal.
Screens
- Platform Tiers, Models & Migration
- Free Trial, Finance & Launch Comms
Blog
Blog Index
Title: Blog
Practical lessons on operator work, team building, and navigating AI in real life.
Blog list template renders posts with:
- Post date
- Post title
- Optional post description
Post 1
Path: _posts/2026-03-14-a-few-thoughts-on-ai-no-one-asked-for.md
Title: A Few Thoughts on AI No One Asked For
Date: 2026-03-14
Description: Candid thoughts on AI, creativity, cost, and uncertainty.
- I cannot believe how much I can get done.
- Sometimes, when I’m loading the dishwasher, I think about prompting an AI to do it instead. Then I sigh and load the dishwasher.
- I wonder if it’s possible for me to have a single conversation where I don’t bring up AI. Maybe I’ll try tomorrow…
- I don’t remember what my own writing sounds like anymore.
- The most powerful thing I’ve done with AI is start treating it like a team.
- Multi-agent workflows where agents critique each other and embody different perspectives are the clearest technological argument I’ve seen for why diverse teams produce better work.
- AI content has a texture. A smoothness. A certain confident vagueness. I think the reason we find “AI slop” distasteful is because we see through the thinly veiled lack of experience attached to what is actually being said.
- The cost (financial, environmental, ethical, behavioral, etc.) is real, and I don’t talk about it enough.
- Creativity really matters to me. Making with AI is genuinely delightful because it allows us to produce something out of seemingly nothing. The act of creation is inherently spiritual, and I like to truly marvel at what we can build and do now.
- Sometimes, when I’m away from my laptop, I wish I could telepathically tell Cursor to get to work on an idea.
- Actually, I generally wish I could telepathically tell AI what to do and waste less time typing. Wispr Flow is pretty cool though…
- I worry about what happens when the path of least resistance to feeling understood is something optimized to make you feel understood.
- Working with AI is the closest thing to journaling I do and the deepest metacognition I have time for. Making reasoning legible to something forces a useful kind of clarity.
- The most dismissive people haven’t used it seriously. The most utopian haven’t thought seriously about what it costs.
- I don’t know how to reconcile the genuine flourishing it makes possible with the genuine potential for harm. I don’t have a framework for that. I’m not sure one exists yet.
- Prompting is a skill. It is not evenly distributed. That gap is going to matter more than we’re admitting. I think a lot about who’s getting left behind.
- We are all just figuring this out as we go. That’s wild.
- The dishwasher still won’t load itself… for now.
Post 2
Path: _posts/2026-01-25-five-lessons-from-my-first-year-as-cos.md
Title: Five Things I Learned in My First Year as Chief of Staff
Date: 2026-01-25
Description: Reflections on transitions, excellence, and my undying love for to-do lists
Nothing, truly nothing, gives me more joy than crossing tasks off a to-do list. I’ve excelled at this my entire career. From simple tasks like sending emails to giant initiatives that require multi-phase implementation, I’ve honed execution and operationalization into a superpower.
For my first three years at Ascend.io, a startup building an agentic data engineering platform, I wore a lot of hats: people operations, marketing, sales, customer success, and sometimes, even product. Not sequentially. Simultaneously. I was building the team culture and the outbound pipeline in the same week, writing the onboarding program and the campaign brief in the same afternoon. That breadth was its own education, and I’m grateful for every chaotic, overextended minute of it.
But when I stepped into the Chief of Staff role, something shifted. There were parts of the job I’d been doing for years without the title. And there were parts I hadn’t touched — finding a voice in our leadership team, being a real participant in strategic conversations, holding the thread between where we were and where we were trying to go. The title changed, and then, gradually, so did I.
A year in, here’s what I’ve learned.
1. I didn’t just use AI this year. I learned to lead it.
I was already using AI before I became a Chief of Staff. Coming from the nonprofit world, I’ve always had to stretch every resource as far as it will go. Leveraging tools that let me do more with less is a deeply held value of mine. What changed this year was how I related to them.
Somewhere in the middle of planning a two-day virtual bootcamp for thousands of registrants, building an outbound pipeline from scratch, producing technical content, running a company-wide hackathon, and handling the scope of employee experience — I stopped thinking of AI as a tool I was operating and started thinking of it as a team I was leading.
That reframe changed everything. When you manage an agentic system the way you’d manage a team (with clear context sharing, defined roles, a feedback loop, and explicit expectations), the output is different. Better. More useful. And the “relationship” becomes something you can actually build on. Over time, the project the system collaborates on accumulates an understanding of which patterns work, documenting paths to success that compound into better results.
I’m not claiming AI replaced human collaboration or that it’s without limits. But I will say this: for the first time in my career, I felt like the ceiling on what I could accomplish wasn’t set by how many people were on my team.
2. The to-do list is a trap.
I hate to admit it. But it’s true for me.
I am, at my core, an achiever. I have been since I was a kid doing ballet and soccer and choir and reading thirty books over the summer between second and third grade. Checking things off feels good. It has always felt good. And in most operational roles, being someone who gets things done is the whole job.
But the Chief of Staff role asked something different of me, and it took me longer than I’d like to admit to hear the ask.
The immediate gratification of clearing a to-do list can become its own form of avoidance. When you’re moving fast and producing a lot, it’s easy to tell yourself you’re being effective — and miss the fact that you’re not spending nearly enough time asking where are we actually going and is this the best path to get there?
Creating space to think (not just about what needs to happen this week, but about direction, long-term trajectory, and whether the strategy underneath the tactics is sound) is harder than it sounds when your inbox is full, your calendar is packed, and you’re on your sixth Diet Coke of the day. But it is the work. The rest is just logistics.
I had to build that muscle deliberately. I still have to protect that space like it’s sacred, because it is.
3. I said yes to everything. I’d do it again.
In the past year, I have executed two two-day developer bootcamps, built a dozen data pipelines from scratch, written seven technical blog posts, designed 15 live hands-on workshops and webinars, collaborated with countless partners and customers, experimented with 20+ AI SDR outbound sequences, produced four company hackathons and three in-person offsites, managed speaker logistics across tens of events, built pricing calculators, and developed sales pitches. Among other things.
Most of them I had never done before. And many I wasn’t really sure how to get started on.
I said yes anyway. Every time.
Here’s what I learned: breadth is how you build range. Every time I said yes to something I had never done, I either discovered I was capable of more than I thought — or I discovered exactly where my edges are, which is equally valuable. Either way, I came out of it more confident than I went in.
Is it always sustainable? No. There were weeks (or months) this year that were genuinely WAY too much. But the accumulated confidence of having figured out hard things I’d never tried before is not something I would trade. It compounds. The things I accomplished because I was willing to try gave me more and more capacity to attempt bigger and harder things.
If you’re early in your career and you’re wondering whether to raise your hand for the thing outside your lane, definitely raise your hand. I have never regretted it.
4. Strategy requires curiosity, not answers.
I used to think being strategic meant having the right framework, the right analysis, the right answer ready to present. I was wrong about that.
The most strategically valuable thing I did this year was ask questions — often questions that felt almost too basic to ask out loud. Why are we doing it this way? What problem are we actually solving? Who is this for, and do they know it exists? What would we do if this didn’t work?
Good questions create more forward motion than confident answers. They surface assumptions. They reveal the places where a team has quietly gotten out of alignment. They invite the kind of thinking that produces something genuinely better than what was already on the table.
I don’t think I’m naturally patient enough to sit in questions — I want to move, to solve, to cross things off (see lesson two). But I’ve learned that the pause before the answer is often where the actual strategy lives.
Curiosity is not a soft skill. It is the work.
5. Ambiguity is either fog or open road.
The Chief of Staff role is, at its core, a role you define as you go. There is no universal playbook, no standard org chart position, no agreed-upon set of KPIs that tells you whether you’re doing it right. What the role means depends entirely on the company, the CEO, and the moment you’re in.
That ambiguity can feel like fog — disorienting, isolating, impossible to navigate. Or it can feel like open road.
What I’ve learned is that the difference isn’t the ambiguity itself. It’s what you do with it.
The job of a Chief of Staff is to create clarity — not just for yourself, but for everyone around you. To take the swirling mass of competing priorities, half-formed strategies, and unspoken tensions and turn them into something the team can actually move toward. When you do that well, clarity becomes momentum. People move faster. Decisions get easier. The whole organization benefits from having someone who is willing to name the thing nobody has named yet and build the structure that lets everyone else get on with their work.
That is not a support function. That is leadership.
I am still learning what that looks like at every new stage of growth. I suspect I will be for a while. But I’ve stopped waiting for someone to hand me the map. The terrain keeps changing anyway.