{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Media Tech Brief By HackerNoon","title":"Solos: A Dataset for Audio-Visual Music Analysis - Experiments","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/ab27cb4b\"></iframe>","width":"100%","height":180,"duration":464,"description":"This story was originally published on HackerNoon at: https://hackernoon.com/solos-a-dataset-for-audio-visual-music-analysis-experiments. In this paper, researchers introduce Solos, a clean dataset of solo musical performances for training machine learning models on various audio-visual tasks. Check more stories related to media at: https://hackernoon.com/c/media. You can also check exclusive content about #audio-visual, #dataset, #multimodal, #music, #solos, #music-performance-dataset, #audio-visual-machine-learning, #instrumental-recordings, and more. This story was written by: @kinetograph. Learn more about this writer by checking @kinetograph's about page, and for more stories, please visit hackernoon.com.","thumbnail_url":"https://img.transistorcdn.com/xDjj43Rgf39suZ71hFZChar2GCP04ymXUWPagWTF1uk/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzQxNDI0LzE2ODM1/ODMwMzAtYXJ0d29y/ay5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}