Rating Scales for Collective Intelligence in Innovation Communities: Why Quick and Easy Decision Making Does Not Get It Right > Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar
1. Problem Setting
So, there are large data pools… How do you select the best ideas?
 2. Theory Background
Motivation (additional slide, not part of the presentation)
Organization’s absorptive capacity is limited (Cohen et al. 1990; Di Gangi et al. 2009)
Idea selection is a pivotal problem of open innovation (Hojer et al. 2010; Piller/Reichwald 2010)
Dimensions of Idea Quality
- An idea's originality and innovativeness
- Ease of transforming an idea into a new product
- An idea's value for the organization
- An idea's concretization and maturity
Source: [1, 2, 3]
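These four dimensions suggest a straightforward composite measure. The sketch below is a minimal illustration (not from the paper): it averages the four dimensions into one quality score. The 1-5 range, the equal weighting, and the short parameter names are assumptions.

```python
# Illustrative sketch (assumptions, not the authors' code): composite idea
# quality as the unweighted mean of the four dimensions from the slide.
from statistics import mean

def idea_quality(novelty: float, feasibility: float,
                 value: float, elaboration: float) -> float:
    """Composite idea quality on a 1-5 scale.

    novelty      -- an idea's originality and innovativeness
    feasibility  -- ease of transforming the idea into a new product
    value        -- the idea's value for the organization
    elaboration  -- the idea's concretization and maturity
    """
    scores = (novelty, feasibility, value, elaboration)
    if not all(1 <= s <= 5 for s in scores):
        raise ValueError("each dimension is expected on a 1-5 scale")
    return mean(scores)

print(idea_quality(4, 3, 5, 2))  # -> 3.5
```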
 3. Research Model
Research Model
Rating Scale → Judgment Accuracy (H1+); Rating Scale → Rating Satisfaction (H2+)
H1: The granularity of the rating scale positively influences its rating accuracy.
H2: The granularity of the rating scale positively influences the users' satisfaction with their ratings.
Research Model (extended)
User Expertise moderates the Rating Scale → Judgment Accuracy path (H3a).
H3a: User expertise moderates the relationship between rating scale granularity and rating accuracy such that the positive relationship will be weakened for high levels of user expertise and strengthened for low levels of user expertise.
Research Model (extended)
User Expertise moderates the Rating Scale → Rating Satisfaction path (H3b).
H3b: User expertise moderates the relationship between rating scale granularity and rating satisfaction such that the positive relationship will be strengthened for high levels of user expertise and weakened for low levels of user expertise.
4. Research Methodology
Multi-method study
Web-based experiment
Survey measuring rating satisfaction of participants
Independent expert (N = 7) rating of idea quality, based on the Consensual Assessment Technique [1, 2] (see the agreement sketch below)
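A standard reliability check for the Consensual Assessment Technique is inter-rater agreement among the independent experts. The sketch below is a minimal illustration on simulated data, assuming all seven experts rate every idea on a 1-5 scale; the mean pairwise correlation is one of several reasonable agreement summaries.

```python
# Illustrative sketch (simulated data, assumed setup): expert agreement for
# the Consensual Assessment Technique with N = 7 experts, as on the slide.
import numpy as np

rng = np.random.default_rng(42)
n_experts, n_ideas = 7, 24                      # 24 ideas is an assumption
ratings = rng.integers(1, 6, size=(n_experts, n_ideas)).astype(float)

corr = np.corrcoef(ratings)                     # expert-by-expert correlations
upper = corr[np.triu_indices(n_experts, k=1)]   # unique expert pairs
print(f"mean pairwise agreement r = {upper.mean():.2f}")

# The expert consensus (mean across experts) can then serve as the quality
# baseline against which the community ratings are scored.
expert_baseline = ratings.mean(axis=0)
```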
Participant Demographics N = 313
Participant Demographics
Participant Demographics (additional slide, not part of the presentation)
Screenshot of the system
Research Design: three treatment conditions: Promote/Demote Rating, 5-Star Rating, Complex Rating (see the recoding sketch below)
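Comparing the three conditions requires mapping their raw responses onto a common judgment. The sketch below shows one assumed encoding (not the paper's): promote/demote as binary, 5-star rescaled to 0-1, and the complex rating as the rescaled mean of the four idea-quality dimensions.

```python
# Illustrative sketch (assumed encodings): normalizing the three experimental
# rating scales to a common 0-1 judgment for comparison.
def promote_demote(vote: str) -> float:
    """Binary scale: 'promote' or 'demote'."""
    return 1.0 if vote == "promote" else 0.0

def five_star(stars: int) -> float:
    """5-star scale, rescaled from 1..5 to 0..1."""
    return (stars - 1) / 4

def complex_rating(novelty: int, feasibility: int,
                   value: int, elaboration: int) -> float:
    """Multi-attribute scale: mean of the four quality dimensions (1..5), rescaled."""
    return (sum((novelty, feasibility, value, elaboration)) / 4 - 1) / 4

print(promote_demote("promote"), five_star(4), complex_rating(4, 3, 5, 2))
```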
5. Results
Correct Identification of Good and Bad Ideas
Error Identifying Top Ideas as Good and Bottom Ideas as Bad
Rating Accuracy (Fit-Score)
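The slide does not spell out the fit-score formula, so the sketch below shows one plausible operationalization: classify the top and bottom quartiles of ideas by community rating and by expert rating, then score the fraction of matching classifications. The data, the quartile cutoff, and the scoring rule are all assumptions.

```python
# Illustrative sketch (assumed operationalization, simulated data): how well
# community ratings reproduce the experts' top/bottom classification of ideas.
import numpy as np

def fit_score(community: np.ndarray, expert: np.ndarray,
              top_frac: float = 0.25) -> float:
    """Fraction of expert top/bottom ideas that the community classifies alike.

    An idea counts as 'good' if it falls in the top `top_frac` of a ranking
    and 'bad' if it falls in the bottom `top_frac`; middle ideas are ignored.
    """
    n = len(expert)
    k = max(1, int(n * top_frac))
    expert_rank = np.argsort(expert)                    # ascending quality
    bottom, top = expert_rank[:k], expert_rank[-k:]
    comm_rank = np.argsort(community)
    comm_bottom, comm_top = set(comm_rank[:k]), set(comm_rank[-k:])
    hits = sum(i in comm_top for i in top) + sum(i in comm_bottom for i in bottom)
    return hits / (2 * k)

rng = np.random.default_rng(0)
expert = rng.normal(size=40)
community = expert + rng.normal(scale=0.8, size=40)     # noisy community judgment
print(f"fit score = {fit_score(community, expert):.2f}")
```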
Factor Analysis of Idea Quality (additional slide, not part of the presentation)
Participants’ Rating Satisfaction
ANOVA Results (N = 313; *** significant at p < 0.001, ** at p < 0.01, * at p < 0.05)
ANOVA Results: post-hoc comparisons show that the complex rating scale leads to significantly higher rating accuracy than the promote/demote rating and the 5-star rating (p < 0.001).
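The reported analysis, a one-way ANOVA with post-hoc comparisons across the three scale conditions, can be reproduced in outline as follows. The data below are simulated to mimic the reported pattern, and Tukey's HSD is an assumption, since the slide does not name the post-hoc test.

```python
# Illustrative sketch (simulated data, assumed post-hoc test): one-way ANOVA
# over the three scale conditions, followed by Tukey HSD comparisons.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
acc_pd      = rng.normal(0.50, 0.10, 104)   # promote/demote condition
acc_star    = rng.normal(0.52, 0.10, 104)   # 5-star condition
acc_complex = rng.normal(0.62, 0.10, 105)   # complex condition (highest, as reported)

F, p = f_oneway(acc_pd, acc_star, acc_complex)
print(f"F = {F:.2f}, p = {p:.4f}")

accuracy = np.concatenate([acc_pd, acc_star, acc_complex])
groups = ["promote/demote"] * 104 + ["5-star"] * 104 + ["complex"] * 105
print(pairwise_tukeyhsd(accuracy, groups, alpha=0.05))
```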
Testing Moderating Effects – Recoding of Rating Scales (additional slide, not part of the presentation). Moderators are variables that alter the direction or strength of the relationship between a predictor and an outcome.
Testing Hypotheses 3a and 3b requires recoding of the rating scale into dummy variables (see the regression sketch below).
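A minimal sketch of that moderation test, on simulated data: dummy-code the scale conditions and interact them with user expertise in an OLS regression. Non-significant interaction terms would indicate no moderating effect, matching the regression results below.

```python
# Illustrative sketch (simulated data): moderated regression with dummy-coded
# rating scales and a scale-by-expertise interaction, as H3a/H3b require.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 313  # matches the study's sample size
df = pd.DataFrame({
    "scale": rng.choice(["promote_demote", "five_star", "complex"], size=n),
    "expertise": rng.normal(size=n),
})
# Simulated outcome: complex scale raises accuracy; expertise has no effect.
df["accuracy"] = 0.5 + 0.1 * (df["scale"] == "complex") + rng.normal(0, 0.1, n)

# C(scale) creates the dummy variables; '*' adds main effects and interactions.
model = smf.ols("accuracy ~ C(scale) * expertise", data=df).fit()
print(model.summary().tables[1])
```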

Regression Results: there is no direct and no moderating effect of user expertise. The scale with the highest rating accuracy and rating satisfaction should therefore be used for all user groups.
Correlations of Expert Rating and Rating Scales (additional slide, not part of the presentation)
Limitations
- Expert rating as baseline
- Forced choice
Design and test of a model to analyze the influence of the rating scale on rating quality and user satisfaction.
Simple scales have low rating accuracy and low satisfaction → design recommendations for user rating scales for idea evaluation.
Rating Scales for Collective Intelligence in Innovation Communities > Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar. riedlc@in.tum.de, Twitter: @criedl
Image credits:
- Title background: author collection
- Starbucks Idea: http://mystarbucksidea.force.com/
- The Thinker: http://www.flickr.com/photos/tmartin/32010732/
- Information Overload: http://www.flickr.com/photos/verbeeldingskr8/3638834128/#/
- Scientists: http://www.flickr.com/photos/marsdd/2986989396/
- Reading girl: http://www.flickr.com/photos/12392252@N03/2482835894/
- User: http://blog.mozilla.com/metrics/files/2009/07/voice_of_user2.jpg
- Male icon: http://icons.mysitemyway.com/wp-content/gallery/whitewashed-star-patterned-icons-symbols-shapes/131821-whitewashed-star-patterned-icon-symbols-shapes-male-symbol1-sc48.png
- Harvard University: http://gallery.hd.org/_exhibits/places-and-sights/_more1999/_more05/US-MA-Cambridge-Harvard-University-red-brick-building-sunshine-grass-lawn-students-1-AJHD.jpg
- Notebook scribbles: http://www.flickr.com/photos/cherryboppy/4812211497/
- La Cuidad: http://www.flickr.com/photos/37645476@N05/3488148351/
- Theory and Practice: http://www.flickr.com/photos/arenamontanus/2766579982

Papers:
[1] Amabile, T. M. (1996). Creativity in Context: Update to the Social Psychology of Creativity. Westview Press, Oxford, UK.
[2] Blohm, I., Bretschneider, U., Leimeister, J. M., and Krcmar, H. (2010). Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation. In Proceedings of the 43rd Hawaii International Conference on System Sciences, Kauai, Hawaii.
[3] Dean, D. L., Hender, J. M., Rodgers, T. L., and Santanen, E. L. (2006). Identifying quality, novel, and creative ideas: Constructs and scales for idea evaluation. Journal of the Association for Information Systems, 7(10), 646-698.

Editor's notes

1. Following the open innovation paradigm and using Web 2.0 technologies, large-scale collaboration has been enabled: launch of online innovation communities.
2. Before we dive into the development of our research model, let me give you some theory background.
  3. More info on experimental design
  4. Thank you!