
2014 Startup Digest San Fran July Survey Analysis

742 views

Startup Digest Silicon Valley-San Francisco edition July 2014 "experimental" user survey results, data analysis and findings.

Published in: Small Business & Leadership


  1. 2014 Startup Digest SF/SV July user survey analysis, by David Kim and Peter Shin
  2. Summary
     • Learn to ask better questions
       – Win: we have a high net promoter score (8)!
       – Loss: we do not have clear drivers of success
     • Of 96 unique responses:
       – 30% were Founders, VCs, or C-level readers
       – 5% were interns/students
     • We should focus on success metrics that matter
  3. Net promoter score distribution
     [Histogram: scores 3–10 on the x-axis, # of responses on the y-axis]
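A note on the metric itself: the deck reports a mean score of 8, while the conventional net promoter score is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch of the standard calculation, using a hypothetical 96-response distribution (the raw survey data is not included in the deck):

```python
def net_promoter_score(scores):
    """Standard NPS on a 0-10 'would you recommend?' scale:
    % promoters (9-10) minus % detractors (0-6); 7-8 are passives."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical 96-response sample, loosely shaped like the slide's histogram
scores = [3] * 2 + [4] * 3 + [5] * 5 + [6] * 8 + [7] * 15 + [8] * 30 + [9] * 20 + [10] * 13
print(round(net_promoter_score(scores), 1))  # prints 15.6
```

Note that a mean of roughly 8 and a conventional NPS in the teens can describe the same distribution, which is why the two framings should not be mixed.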
  4. Cross-section of responses
  5. But what factors drive NPS?
     • And are those factors statistically significant?
     • Hypothesis (very limited given the data set we collected; see appendix for the survey):
       – Organizers rate SD more highly
       – Years lived in the area is also positively correlated with score
     • Or mathematically:
       – NPS = a*(Organizer) + b*(Yrs lived) + intercept
       – where Organizer is a dummy variable (i.e. 0 or 1)
  6. Regression results
     • Surprisingly, the longer you have lived here, the less likely you are to recommend the digest to friends. Call it cynicism
     • Sadly, neither variable is statistically significant: the t-stats are not above 2 (equivalently, the p-values are not low enough)
     • Moreover, the adjusted R^2 suggests that these factors have essentially no explanatory power over NPS
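The regression the slides describe (score on an organizer dummy plus years lived, with an intercept) can be reproduced with ordinary least squares. A stdlib-only sketch on fabricated data, since the raw responses are not in the deck; it reports coefficients, t-statistics, and adjusted R^2, the same quantities the slide discusses:

```python
import math
import random

def ols(X, y):
    """OLS fit of y on the columns of X (include a column of 1s for the
    intercept). Returns (coefficients, t-statistics, adjusted R^2)."""
    n, k = len(X), len(X[0])
    # Normal equations: (X'X) beta = X'y
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Invert X'X by Gauss-Jordan elimination (fine for a well-conditioned 3x3)
    a = [row[:] for row in xtx]
    inv = [[float(i == j) for j in range(k)] for i in range(k)]
    for col in range(k):
        piv = a[col][col]
        for j in range(k):
            a[col][j] /= piv
            inv[col][j] /= piv
        for r in range(k):
            if r != col:
                f = a[r][col]
                for j in range(k):
                    a[r][j] -= f * a[col][j]
                    inv[r][j] -= f * inv[col][j]
    beta = [sum(inv[i][j] * xty[j] for j in range(k)) for i in range(k)]
    resid = [y[r] - sum(X[r][i] * beta[i] for i in range(k)) for r in range(n)]
    sse = sum(e * e for e in resid)
    s2 = sse / (n - k)                      # residual variance
    tstats = [beta[i] / math.sqrt(s2 * inv[i][i]) for i in range(k)]
    ybar = sum(y) / n
    sst = sum((v - ybar) ** 2 for v in y)
    adj_r2 = 1.0 - (sse / (n - k)) / (sst / (n - 1))
    return beta, tstats, adj_r2

# Fabricated 96-response sample echoing the slide's null result:
# scores are mostly noise, unrelated to either predictor
random.seed(1)
rows, y = [], []
for _ in range(96):
    organizer = 1.0 if random.random() < 0.15 else 0.0
    yrs_lived = float(random.randint(0, 20))
    rows.append([1.0, organizer, yrs_lived])
    y.append(8.0 + random.gauss(0.0, 1.5))
beta, tstats, adj_r2 = ols(rows, y)
print([round(b, 2) for b in beta], [round(t, 2) for t in tstats], round(adj_r2, 3))
```

On data like this, with no real relationship between score and the predictors, the |t|-stats land below 2 and the adjusted R^2 hovers near zero, which is exactly the pattern the slide reports.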
  7. Conclusion & Afterthoughts
     • In conclusion, we know we have a very high net promoter score (8)
       – But we don't know why. We did not design our survey with the intent of discovering key metric drivers, but that would have been nice
     • 100 responses among tens of thousands of readers is rather small
       – Key takeaway is we did not have a baseline starting out
       – Now we have one, and we can improve upon it
  8. Afterthoughts, cont.
     • We suspect that NPS is a vanity metric
     • The real measures that matter are:
       – Reach (equivalent to revenues for a business, and reflects sharing)
       – Click-through rates (reflect usefulness to users, and therefore quality perception)
     • In future surveys or partnership initiatives, we have the above baseline, and can optimize efforts that increase those outcomes
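The two metrics the slide proposes can be made concrete. A minimal sketch using hypothetical newsletter counts; the definitions (reach as deliveries plus forwards, CTR as unique clicks over deliveries) are assumptions, since the deck does not formalize either:

```python
def reach(delivered, forwards):
    """Reach: direct deliveries plus shares/forwards (assumed definition)."""
    return delivered + forwards

def click_through_rate(unique_clicks, delivered):
    """CTR: fraction of delivered newsletters whose links were clicked."""
    return unique_clicks / delivered

# Hypothetical numbers for a single issue
delivered, forwards, unique_clicks = 20000, 1500, 2400
print(reach(delivered, forwards))                                     # prints 21500
print(round(100 * click_through_rate(unique_clicks, delivered), 1))   # prints 12.0
```

Unlike NPS, both of these can be tracked per issue without a survey, which is what makes them usable as an optimization baseline.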
  9. Appendix: data clean-up
     • Original survey link: –
     • Data clean-up notes
       – Organizer column: 0 = no, 1 = yes
       – Yrs lived: reduced <1 yr and n/a to 0
       – Job position: created new categories to reflect responses as closely as possible
  10. Appendix: data tables