
Peer-reviewed or other validation of thematic auto-coding?

autocoding themes validation

1 reply to this topic

#1 witman


    Casual Member

  • Members
  • 1 posts
  • Gender:Male

Posted 20 February 2016 - 12:20 AM

I'm writing a paper that uses some textual analysis done by NVivo's auto-coding by theme. Is there a peer-reviewed (or QSR-written) paper that verifies the validity and quality of NVivo's analysis using this technique? I found Andrea Goncher's paper, "Using automated text analysis to evaluate students' conceptual understanding," from the Proceedings of the Australasian Association for Engineering Education (AAEE2014), but it covers NVivo 10 and doesn't claim very high levels of validity. I'm hoping that either QSR has done a study of the theme-based auto-coding in NVivo 11, or that someone can point me to another source.


I asked QSR support this question as well, but thought this community might have an answer.



Paul Witman


Assoc. Prof., IT Management, California Lutheran University School of Management

#2 dstoneky


    Casual Member

  • Members
  • 2 posts

Posted 10 December 2016 - 11:58 PM

Good question, to which I'll add a corollary: can QSR provide more detail about this process? What specific modeling technique is used here? I'm guessing it's a form of cluster analysis, but a fuller description is essential for a transparent process that can be validated and triangulated against other automated coding models.
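To make the question concrete: QSR doesn't publicly document the algorithm, so the following is purely an illustrative sketch of what one clustering-based approach to thematic auto-coding *could* look like, not NVivo's actual method. All names, the sample passages, and the similarity threshold are my own inventions for the example (greedy single-pass clustering over bag-of-words cosine similarity).

```python
import math
from collections import Counter

def tokens(text):
    # Lowercase and strip trailing punctuation; a real system would do
    # proper tokenisation, stemming, and stop-word removal.
    return [w.strip(".,!?") for w in text.lower().split()]

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors (Counters).
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def auto_code(passages, threshold=0.3):
    """Greedy one-pass clustering: assign each passage to the first
    existing 'theme' whose seed passage it resembles closely enough,
    otherwise start a new theme. Returns lists of passage indices."""
    themes = []  # list of (seed_vector, member_indices)
    for i, text in enumerate(passages):
        vec = Counter(tokens(text))
        for seed, members in themes:
            if cosine(vec, seed) >= threshold:
                members.append(i)
                break
        else:
            themes.append((vec, [i]))
    return [members for _, members in themes]

passages = [
    "The lab sessions helped me understand the theory.",
    "Hands-on lab sessions made the theory click.",
    "The exam schedule was far too compressed.",
    "Deadlines in the exam schedule were bunched together.",
]

for n, members in enumerate(auto_code(passages)):
    print("Theme", n, "->", members)
```

Even a toy like this exposes the choices that matter for validation: the similarity measure, the threshold, and the order-dependence of the pass all change which "themes" emerge, which is exactly why a published description of NVivo's process would be needed to assess its validity.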
