1. ABOUT THE DATASET
------------

Title: The Reading Everyday Emotion Database (REED)

Creator(s): Jia Hoong Ong, Florence Yik Nam Leung, & Fang Liu

Organisation(s): School of Psychology and Clinical Language Sciences, University of Reading

Rights-holder(s): University of Reading

Publication Year: 2021

Description: We developed a set of audio-visual recordings of emotions called the Reading Everyday Emotion Database (REED). Twenty-two native British English adults (12 females, 10 males) from a diverse age range and with drama/acting experience were recorded producing utterances of various lengths in spoken and sung conditions in 13 different emotions (neutral, the 6 basic emotions, and 6 complex emotions) using everyday recording devices (e.g., laptops, mobile phones). All the recordings were validated by a separate, independent group of raters (n = 155 adults), and the database consists only of recordings that were recognised above chance.

Cite as: Ong, J.H., Leung, F.Y.N., & Liu, F. (2021): The Reading Everyday Emotion Database (REED). University of Reading. Dataset. https://doi.org/10.17864/1947.000336

Related publication: Ong, J.H., Leung, F.Y.N., & Liu, F. (In prep.). The Reading Everyday Emotion Database (REED): A set of audio-visual recordings of emotions in music and language.


2. TERMS OF USE
-----------------

Copyright University of Reading 2021.

The complete REED database is available to authorised users subject to a Data Access Agreement between the University of Reading and a recipient organisation. A copy of the University of Reading Data Access Agreement is included with this item. To request access to the database, please complete a data access request at https://redcap.link/data-request.

A subset of example clips from the database is made available for use under a Creative Commons Attribution-NonCommercial 4.0 International Licence (https://creativecommons.org/licenses/by-nc/4.0/). Those clips are listed in the 'example clips' folder, and only those clips should be used for publication and presentation purposes.


3. PROJECT AND FUNDING INFORMATION
------------

Title: Cracking the Pitch Code in Music and Language: Insights from Congenital Amusia and Autism Spectrum Disorders

Dates: 01-12-2016 - 30-11-2022

Funding organisation: European Research Council (ERC)

Grant no.: Starting Grant 678733 (CAASD)


4. CONTENTS
------------

File listing:
(i) data_validation.csv -- Data for the validation task
(ii) UoR-DataAccessAgreement-000336.pdf -- The REED Data Access Agreement
(iii) example_clips.zip -- Example clips of the REED
(iv) InfoSheet.pdf -- Information sheet (the copy used for participants involved in the REED study)
(v) ConsentForm.pdf -- Consent form (the copy used for participants involved in the REED study)

Not available in current dataset:
(i) The REED (will be sent to users once they have signed the Data Access Agreement)


4.1. Data for the validation task
--------------------------------------

The .csv file from the validation task as described in the Related Publication. Details of the columns are as follows:

- participant = participant code.
- file = the recording file that was validated. The files are named using the following convention: domain_utterance_speaker_emotion.mp4. See Section 4.2 The REED for more details.
- domain = whether the file was spoken ("speech") or sung ("song").
- utterance = whether the utterance produced was the syllable 'ah' ("ah"), the phrase 'Happy birthday to you' ("birthday"), or the sentence 'The music played on while they talked' ("music").
- speaker = the speaker's ID: FW = female; MW = male; followed by digits representing the speaker code.
- item = the recorded emotion (represented by the first three characters of the emotion) followed by the token number. For example, 'hop03' refers to Hopeful Token 3. See Section 4.2 The REED for a list of the emotions.
- emotion = the emotion that was produced.
- list = the list assigned to the participant (participants each completed one of 15 lists).
- genuineness = 5-point scale rating of the genuineness of the expression (1 = Not at all genuine; 5 = Completely genuine).
- intensity = 5-point scale rating of the intensity of the expression (1 = Not at all intense; 5 = Completely intense).
- recog = emotion label chosen by the participant.
- Correct = whether the participant's recognition label matched the expression ("1" = correct) or not ("0").
- foil01 = one of the foil words presented.
- foil02 = one of the foil words presented.
- foil03 = one of the foil words presented.
- foil04 = one of the foil words presented.
- selected = whether the file was selected for inclusion in the database ("1") or not ("0").
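
As an illustration of how the validation data described above might be inspected, the following is a minimal Python sketch (standard library only) that tallies recognition accuracy per emotion. It assumes that data_validation.csv sits in the working directory and that the column names match those documented above ('emotion', 'Correct'); adjust the path as needed.

    # Minimal sketch: per-emotion recognition accuracy from the validation data.
    # Assumes data_validation.csv is in the working directory and uses the
    # column names documented above ('emotion', 'Correct').
    import csv
    from collections import defaultdict

    correct = defaultdict(int)
    total = defaultdict(int)

    with open("data_validation.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            emotion = row["emotion"]
            total[emotion] += 1
            correct[emotion] += int(row["Correct"])

    for emotion in sorted(total):
        print(f"{emotion}: {correct[emotion]}/{total[emotion]} correct "
              f"({correct[emotion] / total[emotion]:.0%})")
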
4.2. The REED
--------------------------------------

The database contains 762 audio-visual .mp4 files, organised in four subfolders, each corresponding to a recording condition:

(i) spoken "ah" ('spoken_ah' subfolder)
(ii) spoken "Happy birthday to you" ('spoken_bday' subfolder)
(iii) spoken "The music played on while they talked" ('spoken_music' subfolder)
(iv) sung "Happy birthday to you" ('sung_bday' subfolder)

The files are named using the following convention: domain_utterance_speaker_emotion.mp4

Domain:
sp = speech
so = song

Utterance:
ah = "ah"
bd = "Happy birthday to you"
mu = "The music played on while they talked"

Speaker:
FW = female
MW = male
Digits = speaker code

Emotion:
ang = angry
dis = disgust
emb = embarrassed
fea = fearful
hap = happy
hop = hopeful
jea = jealous
neu = neutral
pro = proud
sad = sad
sar = sarcastic
str = stressed
sur = surprised


5. METHODS
--------------------------

For a detailed description of the methodology, please refer to the publication below:

Ong, J.H., Leung, F.Y.N., & Liu, F. (In prep.). The Reading Everyday Emotion Database (REED): A set of audio-visual recordings of emotions in music and language.
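
For users scripting over the recordings, the naming convention in Section 4.2 can be unpacked programmatically. The following is a minimal Python sketch under two assumptions: the lookup tables simply mirror the codes listed in Section 4.2, and the example filename is hypothetical (only the first three characters of the emotion field are used for the lookup, in case a token number is appended).

    # Minimal sketch: unpack a REED filename of the form
    # domain_utterance_speaker_emotion.mp4 using the codes in Section 4.2.
    import os

    DOMAINS = {"sp": "speech", "so": "song"}
    UTTERANCES = {
        "ah": "ah",
        "bd": "Happy birthday to you",
        "mu": "The music played on while they talked",
    }
    EMOTIONS = {
        "ang": "angry", "dis": "disgust", "emb": "embarrassed",
        "fea": "fearful", "hap": "happy", "hop": "hopeful",
        "jea": "jealous", "neu": "neutral", "pro": "proud",
        "sad": "sad", "sar": "sarcastic", "str": "stressed",
        "sur": "surprised",
    }

    def parse_reed_filename(filename):
        stem, _ = os.path.splitext(os.path.basename(filename))
        domain, utterance, speaker, emotion = stem.split("_")
        return {
            "domain": DOMAINS[domain],
            "utterance": UTTERANCES[utterance],
            "speaker": speaker,  # e.g., FW01 = female speaker 01
            "emotion": EMOTIONS[emotion[:3]],
        }

    # Hypothetical example filename:
    print(parse_reed_filename("sp_bd_FW01_hap.mp4"))
    # {'domain': 'speech', 'utterance': 'Happy birthday to you',
    #  'speaker': 'FW01', 'emotion': 'happy'}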