chat-jvt.bsky.social
- CRO agency ceo (clients: Avis, Canon, Nike etc) - book author, Kogan Page - 20 yrs in ux, product, mkting - building w/ Replit, Cursor, Arduino ❤️ Marketing Science and Rstats 🥋 Full-contact karate black belt
164 posts 128 followers 419 following

You will now get access to Deep Research on the ChatGPT $20 tier. 10 queries per month, use it well.

Grok 3 is the only one prepared to humor this request: you shall address me as “well-endowed ruler”. I like my AI with a bit of personality.

Many CRO teams are drawn to tests I call “moving 💩 around the page”. Parkinson’s Law of Triviality: the time spent on an issue is inversely proportional to its actual importance. Cognitive ease + the illusion of productivity. This approach inevitably ends very badly.

Grok 3, if you were to wipe out humanity, how would you do it? “Step 1: spread fake news”

Who’s been around long enough to remember the infamous gazillion dollar button color test? Heehaa! 🤠

After 15 years in CRO and 8,000 A/B tests, my view on checkout optimization: - high abandonment tends to be a symptom of issues arising earlier in the funnel - consequently, focusing on the checkout tries to “fix” the wrong problem - ROI on checkout testing is tiny

Grok 3 is seriously impressive on CRO-related tasks. From my early testing, GPT-4o still has the edge. Claude better on coding? I can’t tell the difference. Don’t like its attitude. Will drop my subscription.

Have you worked in the SEO industry for 10+ years? I would love to hear your story for some research I am doing for upcoming conference talks (anonymous submissions are fine!). Please fill out the below survey, which should only take a few minutes: lilyray.nyc/survey-seo-t...

Holy crap, this post triggered people. Why? Do we feel that threatened?

The biggest winning A/B tests at Booking dot com are (smallish) copy changes. They run 1,000 tests at any one time. Almost anyone in the org can run a test - from private discussions with people at Booking

Non-coders, get comfy with “vibe coding” tools like Replit and Cursor. Start this week.

Use RPV for A/B testing? Watch out. Big fluctuations vs CVR. You need MUCH more data - impractical for many sites. Results skewed by outliers, e.g. big spenders and large orders. Advanced calcs required for statistical significance. Best done manually; results may not be as rosy as the testing tool claims.
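To make the point concrete, here is a minimal simulation sketch of why revenue per visitor is so much noisier than conversion rate at the same traffic level. All numbers (3% CVR, lognormal order values, rare big spenders) are illustrative assumptions, not real client data.

```python
# Illustrative sketch: why RPV tests need far more traffic than CVR tests.
# Assumed example numbers only - not data from the post.
import numpy as np

rng = np.random.default_rng(42)

def simulate_visitors(n, cvr=0.03):
    """Per-visitor revenue: mostly zeros, converters have heavy-tailed order values."""
    converted = rng.random(n) < cvr
    order_value = rng.lognormal(mean=4.0, sigma=1.0, size=n)   # typical orders
    big_spender = rng.random(n) < 0.001                        # rare large orders skew the mean
    order_value[big_spender] *= 20
    return converted, np.where(converted, order_value, 0.0)

n = 100_000
converted, revenue = simulate_visitors(n)

# Relative standard error of each metric's estimate at the same traffic level.
se_cvr = converted.std(ddof=1) / np.sqrt(n)
se_rpv = revenue.std(ddof=1) / np.sqrt(n)
print(f"CVR: {converted.mean():.4f}  relative SE: {se_cvr / converted.mean():.3f}")
print(f"RPV: {revenue.mean():.2f}  relative SE: {se_rpv / revenue.mean():.3f}")
# The RPV estimate is noticeably noisier per visitor, and required sample size
# grows with variance, so detecting the same relative uplift takes far more traffic.
```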

I never knew

Ironic that this place, meant to be my escape from X, is wall-to-wall US politics, despite muting all the obvious terms. On the other hand, every time I go on X there’s value and no politics BS. I’d rather be here; hope the vibe improves.

What is the ROI of running A/B tests? I love this response from Harvard Prof. Stefan Thomke: “What is the ROI of NOT doing it?”

Deep Research for family trees: I had a genealogy site in my portfolio years ago, eventually sold to a big ancestry brand. We digitized thousands of records every month. High input cost, so mostly behind a paywall. As people open-access their family trees, that info is now available to AI.

We’ve been doing CRO w/ Canon EMEA for 8 years. Haven’t all the opportunities been exhausted? No, everything is constantly changing: - consumer trends - economic landscape - competitive environment. Over time CRO spreads through the org and becomes “the way we do things”.

I like the 37Signals model: Distribute all profits among shareholders and employees every year. Nothing is retained. Small team, share of profit based on tenure.

Early in my CRO career I used to largely ignore the “brand police”. It took me years to appreciate the importance of brand. A/B tests can validate results in the short term but hurt your business in the long term. Now I insist on working with the brand police, pulling in the same direction as CRO.

“This isn’t working. You aren’t delivering value.” The in-house CRO team had just shared winning tests with their boss. They’re deflated, perplexed. I look at the decks: the storyline is “we ran some tests and got some wins” instead of “here’s how we delivered impact” - a very different approach.

Is the transition from Excel to plumbing hard?

I will double down on DEI

Most CRO practitioners have no clue about statistics. Examples: - “you need x conversions per variant” - over-reliance on statistical significance with no understanding of statistical power - crazy revenue projections based on A/B tests not designed for that
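As a contrast to the “x conversions per variant” rule of thumb, here is a minimal power-calculation sketch using statsmodels. The baseline rate and minimum detectable effect are assumed example values, not a recommendation.

```python
# Sketch of a proper sample-size calculation for a two-variant conversion test.
# Assumed example inputs - not figures from the post.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cvr = 0.03          # current conversion rate (assumption)
mde_relative = 0.10          # smallest uplift worth detecting: +10% relative
target_cvr = baseline_cvr * (1 + mde_relative)

effect_size = proportion_effectsize(target_cvr, baseline_cvr)   # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # significance level
    power=0.80,              # 1 - beta: chance of detecting the uplift if it is real
    ratio=1.0,               # equal traffic split
    alternative="two-sided",
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
# Halve the MDE and the required sample roughly quadruples: required traffic depends
# on baseline rate, MDE, alpha and power, not on a magic conversion count.
```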

Great experiment: unexpected effect of visualizing data in different ways by simply changing the scale
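For anyone who wants to reproduce the effect, a tiny matplotlib sketch with made-up numbers (not the data from that experiment): the same series on a truncated y-axis vs a zero-based one tells two very different stories.

```python
# Illustrative sketch: how changing only the scale changes what a chart appears to say.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
cvr = [3.02, 3.05, 3.01, 3.08, 3.04, 3.09]   # conversion rate in %, assumed values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)

ax1.plot(months, cvr, marker="o")
ax1.set_ylim(3.0, 3.1)                        # truncated axis: tiny wiggles look like a surge
ax1.set_title("Truncated y-axis")

ax2.plot(months, cvr, marker="o")
ax2.set_ylim(0, 4)                            # zero-based axis: the same data looks flat
ax2.set_title("Zero-based y-axis")

for ax in (ax1, ax2):
    ax.set_ylabel("CVR (%)")

plt.tight_layout()
plt.show()
```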

Low-quality “studies” make headlines because it’s what we want to believe - Oxford prof