Abstract:
In recent years, much research on automated music transcription has been conducted; however, few studies address the a cappella style, and datasets specific to it are scarce. Recently, some articles have proposed machine learning approaches to extract the musical notes sung by choirs. This work presents experiments based on publications focused on extracting the individual musical notes of each voice from audio recordings of vocal quartets. Modifications were made to neural network architectures for voice assignment, achieving better results at a lower computational cost. The best-performing model was integrated into a complete system that takes an audio file as input and returns individual transcriptions as output; this system was made available through a web application accessible to the general public.