
Isn't that what the tracking stuff is supposed to track? Measure things like how 'annoyed' people get by bounce rate and whatever other relevant metrics.


Yes, but how do you determine the actual reason for a bounce? The test would need the same starting conditions for everyone and then give some users a better-performing version. But at that point one would probably just roll out the better-performing version anyway. Alternatively, you could artificially worsen performance and observe how the metrics change. Then it is questionable whether a slowdown of a given amount has the same effect in reverse as a speedup of the same amount. Maybe up to a certain point, but in general probably not. It's also difficult because changes that improve performance usually come with visual and functional changes as well.


Add a 500ms delay for group A and compare to group B, which doesn't have the delay. After a week of this, compare the sales figures.
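For what it's worth, a minimal sketch of how that could be wired up (Python/Flask here purely as an illustration; the 0.5s delay, the "uid" cookie and the /checkout endpoint are my own assumptions, not anything prescribed above):

    import hashlib
    import time
    from flask import Flask, request

    app = Flask(__name__)
    DELAY_SECONDS = 0.5  # artificial latency for the test group (assumed value)

    def in_test_group(user_id: str) -> bool:
        # Hash the user id so each user lands in a stable 50/50 bucket.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % 2 == 0

    @app.before_request
    def inject_delay():
        user_id = request.cookies.get("uid", "")
        if user_id and in_test_group(user_id):
            time.sleep(DELAY_SECONDS)  # group A: artificially slowed down
        # group B gets the normal response time

    @app.route("/checkout")
    def checkout():
        # hypothetical endpoint; sales per bucket would be compared
        # after the test window (e.g. one week)
        return "ok"

The stable hash means a given user always sees the same variant, so after the test window you just compare conversions per bucket.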


I doubt a company would be willing to deliberately risk losing sales by testing a worse version. A/B tests are great in theory, but in practice, to test the current slow system against a faster one, you have to do the optimization work that the test is supposed to justify. That's why A/B testing is often used for quick wins, pricing points or purchase flows, but rarely for the big, costly questions.

Surveys could be used to explain the bounce rate, but people who leave are one of the hardest groups to recruit feedback from. Usability tests could help with that, though.



