Overview
PhD Position: Acoustic and Psychoacoustic Modelling for Sound Quality Assessment in Hospital PICUs
Delft University of Technology
Job Description
Noise pollution causes harmful health effects. Paediatric Intensive Care Units (PICUs) are particularly sensitive environments, where noise can negatively affect the physiological and emotional development of neonatal and paediatric patients. Current solutions for managing sound in PICUs focus primarily on quantifying sound pressure levels (SPL) in decibels and occasionally displaying visual cues when thresholds are exceeded. However, these approaches fall short in addressing the complex human perception of sound and do not assist clinical staff in interpreting or mitigating harmful noise. They offer limited insight into how specific sound events affect patient well-being or staff performance, and they fail to take into account the psychoacoustic and contextual dimensions that play a critical role in shaping the overall experience of sound in sensitive care environments.
To overcome these limitations, we are developing a novel digital platform—Auditory Footprints—which aims to provide real-time, perception-informed soundscape analysis in PICUs. As part of this project, we invite applications for a PhD position focused on the development of algorithms that will form the core of this platform. This work will support the definition and implementation of a Sound Quality Index (SQI), a new metric that reflects both the physical and perceptual dimensions of indoor hospital acoustics.
The research will involve several layers of modelling. First, traditional acoustic metrics will be extracted from experimental recordings in hospitals and analysed over time. In parallel, state-of-the-art psychoacoustic metrics (such as time-varying loudness, sharpness, roughness, fluctuation strength, and tonality) will be employed to characterise how sounds are experienced by humans, e.g. in listening experiments in our laboratories. Together, these models will enable a detailed understanding of both the physical and perceptual sound environment.
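For illustration only, this first modelling layer could be prototyped in Python along the following lines. Everything here is an assumption rather than the project's pipeline: the file name and calibration constant are hypothetical, and standardised psychoacoustic metrics (e.g. ISO 532-1 loudness, DIN 45692 sharpness) would replace the crude spectral-centroid proxy used below.

```python
import numpy as np
from scipy.io import wavfile

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def time_varying_spl(x, fs, calib=1.0, frame_ms=125, p_ref=2e-5):
    """Per-frame sound pressure level in dB re 20 uPa.
    `calib` maps digital amplitude to pascals (hypothetical constant)."""
    frame_len = int(fs * frame_ms / 1000)
    frames = frame_signal(x * calib, frame_len, frame_len // 2)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(np.maximum(rms, 1e-12) / p_ref)

def spectral_centroid(x, fs, frame_ms=125):
    """Crude brightness proxy; NOT a standardised sharpness metric."""
    frame_len = int(fs * frame_ms / 1000)
    frames = frame_signal(x, frame_len, frame_len // 2)
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    return (spec @ freqs) / np.maximum(spec.sum(axis=1), 1e-12)

fs, x = wavfile.read("picu_recording.wav")    # hypothetical mono recording
x = x.astype(np.float64) / np.max(np.abs(x))  # normalise to [-1, 1]
print(time_varying_spl(x, fs)[:3])
print(spectral_centroid(x, fs)[:3])
```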
Building on these foundations, the research will then explore how sound perception is influenced by contextual and affective factors. Using statistical methods, the PhD candidate will develop predictive tools for estimating the perceived affective quality of the soundscape, framed by the Pleasantness and Eventfulness dimensions described in ISO 12913-3. These models will be trained and validated using real perceptual data from PICU nurses’ ratings collected during the initial stages of the project.
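As a hedged sketch of this step: the projection below follows the circumplex construction commonly associated with ISO 12913-3 (eight perceptual attributes rated on a 1-5 scale, mapped to Pleasantness and Eventfulness coordinates in [-1, 1]), and the exact formula should be checked against the standard. The feature matrix and ratings are random placeholders, not project data.

```python
import numpy as np

COS45 = np.cos(np.pi / 4)

def iso_coordinates(r):
    """Map attribute ratings (dict, 1-5 scale) to circumplex coordinates."""
    norm = 4 + np.sqrt(32)  # scales both coordinates to [-1, 1]
    pleasant = ((r["pleasant"] - r["annoying"])
                + COS45 * (r["calm"] - r["chaotic"])
                + COS45 * (r["vibrant"] - r["monotonous"])) / norm
    eventful = ((r["eventful"] - r["uneventful"])
                + COS45 * (r["chaotic"] - r["calm"])
                + COS45 * (r["vibrant"] - r["monotonous"])) / norm
    return pleasant, eventful

# Hypothetical training data: rows = recordings, cols = acoustic features
X = np.random.rand(50, 4)           # e.g. SPL, loudness, sharpness, tonality
y = np.random.uniform(-1, 1, 50)    # placeholder Pleasantness ratings
Xb = np.hstack([X, np.ones((50, 1))])        # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)   # ordinary least-squares fit
print("predicted pleasantness:", Xb[:3] @ w)
```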
A further component of the work will involve the creation of an automatic sound event classification system. Using audio data annotated by perceived acoustic similarity and conventional metrics, and potentially employing deep learning techniques such as convolutional recurrent neural networks (CRNNs), the candidate will develop algorithms capable of accurately identifying key sound events (e.g. alarms, human speech, and mechanical noise). Once classified, these events will contribute to the computation of the SQI and provide actionable feedback to clinical staff.
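A minimal CRNN sketch in PyTorch is shown below for illustration; the layer sizes, the three example classes, and the log-mel input format are assumptions, not the project's actual architecture. For multi-label sound event detection, the per-frame logits would typically feed a sigmoid and a binary cross-entropy loss.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=3):  # e.g. alarm, speech, mechanical
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),            # pool frequency, keep time
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(64 * (n_mels // 4), 128,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_mels, time)
        z = self.conv(x)                      # (batch, 64, n_mels//4, time)
        z = z.permute(0, 3, 1, 2).flatten(2)  # (batch, time, features)
        z, _ = self.gru(z)                    # temporal modelling
        return self.head(z)                   # per-frame class logits

model = CRNN()
logits = model(torch.randn(8, 1, 64, 100))    # 8 clips, 100 frames each
print(logits.shape)                           # torch.Size([8, 100, 3])
```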
Finally, contextual information such as time of day, room occupancy, and nurse shift data will be used to dynamically weight the SQI, ensuring that the sound quality assessments reflect the operational realities of the clinical environment. The result will be a flexible, integrated index that synthesises acoustic, psychoacoustic, perceptual, and contextual data into a real-time metric suitable for implementation in clinical settings.
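Purely as an illustration of such dynamic weighting (the SQI itself is an outcome of the project and is not defined here; the sub-scores and context-dependent weights below are hypothetical), one plausible form is a weighted average of normalised sub-scores:

```python
# Illustrative only: context (e.g. night shift) reweights the sub-scores.
def sqi(scores, weights):
    """Weighted average of normalised sub-scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total

scores = {"acoustic": 0.7, "psychoacoustic": 0.5, "events": 0.8}
night  = {"acoustic": 2.0, "psychoacoustic": 1.5, "events": 1.0}  # hypothetical
print(round(sqi(scores, night), 3))  # 0.656
```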
This PhD project offers a unique opportunity to contribute to a pioneering interdisciplinary initiative that merges sound computing, machine learning, human perception modelling, and healthcare research. The candidate will collaborate with experts from academia, hospitals, and industry to create a solution that not only pushes the boundaries of indoor soundscape research but also has a direct societal impact on improving healthcare environments and outcomes.
Job Requirements
- MSc degree in Computer Science, Acoustics, Audio Engineering, or a related field.
- Strong background in physics and mathematics, ideally with knowledge of sound computing, signal processing, and psychoacoustic modelling.
- Strong background in scientific programming, e.g., MATLAB, Python, R.
- Experience with signal processing and machine learning for audio (e.g. speech modelling, sound event detection, source separation) is an advantage.
- Ability to learn independently and a passion for research.
- Strong communication skills in English.
- Team player, open to discussion and constructive criticism.
- Open-minded and enthusiastic about multidisciplinary input.
- Positive attitude to diverse approaches and inclusive behaviour.
- Previous involvement in scientific research is a plus.
- Interest in healthcare solutions is highly valued.
Employment Terms
- Doctoral candidates will be offered a 4-year period of employment, in the form of two contracts: an initial 1.5-year contract with a go/no-go assessment within 15 months, followed by a contract for the remaining 2.5 years contingent on performance.
- Salary and benefits are in accordance with the Collective Labour Agreement for Dutch Universities, ranging from €3059 per month in the first year to €3881 in the fourth year.
- You will be enrolled in the TU Delft Graduate School, with access to a research environment, supervisors, and a mentor.
- Flexible work schedules and a customisable compensation package are available.
Application
Application deadline: 30 September 2025. Please apply via the application button and upload your CV and cover letter. Optionally, upload up to three documents (or links) that best demonstrate your experience in the topics above. Address applications to Dr. Elif Özcan Vieira. Requirements regarding English proficiency, relocation, and pre-employment screening may apply as part of the process.
Contacts
For more information about this vacancy, contact Dr. Roberto Merino Martinez or Dr. Elif Özcan, and see the project website, Auditory Footprints.