Final Thesis: Assisted Interview Transcription for QDAcity

Abstract: Qualitative research deals with a wide array of unstructured input data. One common technique for data gathering is conducting interviews, which are recorded and subsequently transcribed for analysis. While QDAcity, a cloud-based solution for qualitative data analysis (QDA), already supported the analysis stage of this process, the transcription had to be performed either manually or with an external tool or service.

Transcribing research data already involves crucial decisions, such as deciding what and how to transcribe (e.g., filler words, pauses). These decisions affect the subsequent analysis and, consequently, the results. Therefore, the researcher is often well-advised to either transcribe the interview themselves or at least carefully correct the transcription.

In this thesis, we present an extension of QDAcity's current capabilities, focused on automating and correcting transcription content based on interview media.

In our approach, an automated transcription is first obtained from cloud-based speech-to-text services. This output can then be corrected in a user interface designed to aid the task. Finally, it can be exported into the data format needed for further domain analysis.
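As a rough illustration of the final export step, the following sketch renders speaker-attributed, time-stamped transcript segments as plain text. The data structure and output format are hypothetical assumptions for illustration, not taken from the thesis:

```python
from dataclasses import dataclass


@dataclass
class Segment:
    """One corrected transcript segment (hypothetical structure)."""
    speaker: str
    start: float  # start time in seconds
    text: str


def export_transcript(segments):
    """Render segments as plain text with [MM:SS] timestamps and speaker labels."""
    lines = []
    for seg in segments:
        minutes, seconds = divmod(int(seg.start), 60)
        lines.append(f"[{minutes:02d}:{seconds:02d}] {seg.speaker}: {seg.text}")
    return "\n".join(lines)


segments = [
    Segment("Interviewer", 0.0, "Could you describe your workflow?"),
    Segment("Participant", 4.2, "I usually start by reviewing the recording."),
]
print(export_transcript(segments))
```

A real exporter would target whatever format the subsequent domain analysis requires; the plain-text layout here merely shows how timing and speaker metadata from the correction stage can be carried into the export.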

Keywords: Speech To Text, Assisted Transcription, Transcription Editor

PDF: Master Thesis

Reference: Hugo Ibanez Ulloa. Assisted Interview Transcription for QDAcity. Master Thesis. Friedrich-Alexander-Universität Erlangen-Nürnberg: 2021.
