Wei Ping

Principal Research Scientist
NVIDIA
Santa Clara, CA

Email: wping at nvidia dot com


Google Scholar   arXiv  
LinkedIn   Twitter  

About me

I am a principal research scientist on the Applied Deep Learning Research team at NVIDIA, working on generative models and large language models. Prior to this, I led the text-to-speech team at Baidu Silicon Valley AI Lab (founded by Andrew Ng). Before that, I obtained my PhD in machine learning from the University of California, Irvine, in 2016. I am passionate about building state-of-the-art generative models for text, audio, and multi-modal data.

 

Selected Publications

* denotes my student intern

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. [code]  preprint 2023.
Boxin Wang*, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro.

BigVGAN: A Universal Neural Vocoder with Large-Scale Training. [demo] [code]  ICLR 2023.
Sang-gil Lee*, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon.

Factuality Enhanced Language Models for Open-Ended Text Generation. [code]  NeurIPS 2022.
Nayeon Lee*, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro.

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. [code]  NeurIPS 2022.
Boxin Wang*, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro.

Long-Short Transformer: Efficient transformers for language and vision. [code]  NeurIPS 2021.
Chen Zhu*, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro.

DiffWave: A versatile diffusion model for audio synthesis. [project]  ICLR 2021 (Oral).
Zhifeng Kong*, Wei Ping, Jiaji Huang, Kexin Zhao, Bryan Catanzaro.

WaveFlow: A compact flow-based model for raw audio. [project]  ICML 2020.
Wei Ping, Kainan Peng, Kexin Zhao, Zhao Song.

Non-autoregressive neural text-to-speech. [demo]  ICML 2020.
Kainan Peng, Wei Ping, Zhao Song, Kexin Zhao.

ClariNet: Parallel wave generation in end-to-end text-to-speech. [demo]  ICLR 2019.
Wei Ping, Kainan Peng, Jitong Chen.

Deep Voice 3: Scaling text-to-speech with convolutional sequence learning. [demo][more samples]  ICLR 2018.
Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller.

Neural voice cloning with a few samples. [demo]  NeurIPS 2018.
Sercan O. Arik, Jitong Chen, Kainan Peng, Wei Ping, Yanqi Zhou.

Deep Voice 2: Multi-speaker neural text-to-speech. [demo]  NIPS 2017.
Sercan O. Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou.

Learning infinite RBMs with Frank-Wolfe.  NIPS 2016.
Wei Ping, Qiang Liu, Alexander Ihler.

Decomposition bounds for marginal MAP.  NIPS 2015.
Wei Ping, Qiang Liu, Alexander Ihler.

Marginal structured SVM with hidden variables.  ICML 2014.
Wei Ping, Qiang Liu, Alexander Ihler.