Welcome to Zhiyao Duan's Homepage!

(photo taken in 2018)

Zhiyao Duan

Associate Professor
Department of Electrical and Computer Engineering (primary)
Department of Computer Science (secondary)
Goergen Institute for Data Science (affiliated)
University of Rochester

Google Scholar Profile
Research Statements: 2019 2012
Teaching Statements: 2019 2012
Service Statements: 2019

Mailing Address:
University of Rochester
720 Computer Studies Building
Rochester, NY 14627, USA.

Phone: +1 (585) 275-5302
Email: firstname <dot> lastname @ rochester.edu

News

  • 04/2022 - I ended my sabbatical leave at Kwai Inc. and returned to UofR!
  • 03/2022 - The web version of BachDuet is online! Improvise duet counterpoint with AI here.
  • 02/2022 - I joined the editorial board of IEEE Open Access Journal for Signal Processing (OJ-SP) for 2022-2023.
  • 12/2021 - I will be a guest editor for the Transactions of the International Society for Music Information Retrieval (TISMIR) Special Collection on Cultural Diversity in MIR.
  • 11/2021 - I was elected President-Elect of the ISMIR Society for 2022-2023 and will become President for 2024-2025!
  • 11/2020 - I will serve as a Scientific Program Co-Chair of ISMIR 2021.
  • 07/2020 - I was promoted to Associate Professor with tenure on July 1, 2020!
  • 05/2020 - I'll be taking a sabbatical leave from UofR from June 2020 to August 2021 to be a Principal Research Scientist at Kwai Inc.
  • 11/2019 - I gave a tutorial on Audiovisual Music Processing with Drs. Slim Essid and Sanjeel Parekh from Telecom ParisTech and my student Bochen Li at ISMIR2019.
  • 09/2019 - AIR lab welcomes 3 new PhD students and 2 visiting PhD students!
  • 08/2019 - Congratulations to our new graduate Dr. Emre Eskimez (co-advised with Prof. Wendi Heinzelman)!
  • 07/2019 - I taught the Music & Math course to high school students at the Upward Bound program for the third time. Very glad to see students' appreciation!
  • 06/2019 - AIR lab welcomes 6 undergraduate students this summer (3 domestic and 3 international)!
  • 06/2019 - I gave a keynote talk at the 2019 Midwest Music and Audio Day (MMAD2019). Yujia and Christos also gave great presentations there!
  • 03/2019 - I received an NSF CAREER Award for an exciting research project on Human-Computer Collaborative Music Making! Thank you, NSF!
  • 02/2019 - I gave a talk at MARL at NYU. Yujia, Christos, and I also had a good time at NEMISIG2019 at Brooklyn College.
  • 02/2019 - I had a great time in the Dagstuhl seminar on Melody and Voice Processing.
  • 01/2019 - Two overview papers (automatic music transcription and audio-visual analysis of musical performances) were published in the IEEE Signal Processing Magazine special issue on Recent Advances in Music Signal Processing.
  • 09/2018 - AIR lab welcomes two new PhD students: Ge Zhu and Christos Benetatos!
  • 08/2018 - Our University of Rochester Multi-Modal Music Performance (URMP) dataset is finally online. Check this out.
  • 06/2018 - Two papers were accepted to ISMIR 2018, one on visual performance generation and the other on music harmonization. Congrats, Bochen and Yujia!
  • 02/2018 - Five papers were accepted to ICASSP 2018. Congrats Yichi, Bochen, Ray, Emre, and Zhihan!
  • 12/2017 - Andrea passed his PhD defense. Congratulations, Dr. Cogliati, my first PhD student!!
  • 10/2017 - Our ISMIR paper received a best paper award nomination. Congratulations, Bochen!
  • 08/2017 - I received an NSF BIGDATA grant to develop Audio-Visual Scene Understanding algorithms with Chenliang Xu from CS. Thanks for your generous support, NSF!
  • 07/2017 - I received a University of Rochester AR/VR Pilot Grant to develop a synthetic talking face to assist hearing-impaired listeners, along with Ross Maddox from BME and Chenliang Xu from CS. Thanks for your generous support, UR!
  • 07/2017 - I received a University of Rochester AR/VR Pilot Grant to develop spatial audio techniques for live streaming, with Ming-Lun Lee from ECE and Matthew Brown from Eastman. Thanks for your generous support, UR!
  • 07/2017 - Our SMC paper won one of the best paper awards. Congratulations, Bochen!
  • 06/2017 - Two papers were accepted by WASPAA 2017.
  • 06/2017 - Two papers were accepted by ISMIR 2017.
  • 06/2017 - Andrea, Yichi and Zhiyao attended MMAD and gave presentations.
  • 05/2017 - I gave talks at USTC, SUSTC, PKU-Shenzhen, SJTU, and Fudan University in China.
  • 04/2017 - One paper was accepted by SMC 2017.
  • 02/2017 - We hosted NEMISIG 2017 + HAMR at the University of Rochester!
  • 02/2017 - Our lab received a GPU donation from NVIDIA. Thanks for your generous support, NVIDIA!
  • 12/2016 - Three papers were accepted by ICASSP 2017.
  • 12/2016 - I gave a talk on "Complete Music Transcription" at the Music Signal Processing session at the 5th joint meeting between the Acoustical Society of America and the Acoustical Society of Japan.
  • 11/2016 - I gave a talk on "Complete Music Transcription" at the WNYISPW 2016 workshop.
  • 11/2016 - I gave a talk on "The Machine Musicianship" at Beihang University.
  • 11/2016 - I gave a talk on "AIR Lab Research Overview" at the Chinese Sound and Music Technology (CSMT) workshop.
  • 09/2016 - I gave a talk on "Sound Interactions" at Indiana University Bloomington.
  • 08/2016 - I gave a talk on "Sound Retrieval through Vocal Imitation" at the RIASE workshop.
  • 08/2016 - I received an NSF grant to develop Algorithms for Query by Example of Audio Databases with Bryan Pardo from Northwestern University! Thanks for your generous support, NSF!
  • 08/2016 - I received an award from the University of Rochester Goergen Institute for Data Science Collaborative Pilot Award Program in Health Analytics to work on ECG Signal Analysis with Mina Attin from the School of Nursing! Thanks for your generous support, UR!

AIR Lab Is Recruiting

I am looking for strongly motivated PhD students to work with me in the Audio Information Research (AIR) lab on cool computer audition projects. Students are expected to have a solid background in mathematics, programming, and academic writing. Experience in music activities is a plus. If you are interested, please apply through the ECE program at the university's admissions website and mention my name in your personal statement. If you apply through the CS program, please notify me by email, as I do not review all CS applications. If you are in the Rochester area, please feel free to stop by my office for a chat.

If you are a master's or undergrad student who wants to do a project/thesis with me, you are welcome too. Please send me an email or stop by my office.

Brief Bio

I joined the Department of Electrical and Computer Engineering at the University of Rochester as an assistant professor in July 2013 and am now an Associate Professor. I also hold a secondary appointment in the Department of Computer Science and am an affiliated faculty member of the Goergen Institute for Data Science.

I received my Ph.D. from the Department of Electrical Engineering and Computer Science at Northwestern University under the supervision of Prof. Bryan Pardo. I received my bachelor's and master's degrees from the Department of Automation at Tsinghua University in 2004 and 2008, respectively, under the supervision of Prof. Changshui Zhang.

I was a visiting researcher in the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in 2007, and in the Perception and Neurodynamics Laboratory at the Ohio State University in 2013. I interned in the Speech group of Microsoft Research Asia (MSRA) in 2007 and 2008, and in the Advanced Technology Labs of Adobe Systems Inc. in 2011.

Research Interests

My research interests lie primarily in the emerging area of Computer Audition, i.e., designing intelligent algorithms and systems that can understand sounds, including music, speech, and environmental sounds. This is an interdisciplinary area that draws on many fields, including signal processing, machine learning, psychoacoustics, and music theory. Specific problems that I have been working on include automatic music transcription, sound source separation, audio-score alignment, music annotation and recommendation, speech enhancement and emotion analysis, sound retrieval, and audio-visual analysis of music performances.

Our work is funded by the National Science Foundation under grants No. 1617107, titled "III: Small: Collaborative Research: Algorithms for Query by Example of Audio Databases" (project website), No. 1741472, titled "BIGDATA: F: Audio-Visual Scene Understanding" (project website), and No. 1846174, titled "CAREER: Human-Computer Collaborative Music Making". Our work is also funded by the University of Rochester internal pilot awards on AR/VR and health analytics.

Updated on May 4, 2022