Wei Ping

Principal Research Scientist
NVIDIA
Santa Clara, CA

Email: wping at nvidia dot com


Google Scholar | arXiv | LinkedIn | Twitter

About me

I am a principal research scientist on the Applied Deep Learning Research team at NVIDIA, working on large language models and generative models. Prior to this, I was the text-to-speech team lead at Baidu Silicon Valley AI Lab (founded by Andrew Ng). Before that, I obtained my PhD in machine learning from the University of California, Irvine in 2016. I am passionate about building state-of-the-art generative models for text, audio, and multi-modal data.

Selected Publications

* denotes my student intern.

ChatQA: Building GPT-4 Level Conversational QA Models.  preprint 2024.
Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro.

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining. [code]  preprint 2024.
Boxin Wang*, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro.

Retrieval meets Long Context Large Language Models.  ICLR 2024.
Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro.

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. [code]  EMNLP 2023.
Boxin Wang*, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi et al.

Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning.  Findings of EMNLP 2023.
Zhuolin Yang*, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie et al.

BigVGAN: A Universal Neural Vocoder with Large-Scale Training. [demo] [code]  ICLR 2023.
Sang-gil Lee*, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon.

Factuality Enhanced Language Models for Open-Ended Text Generation. [code]  NeurIPS 2022.
Nayeon Lee*, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro.

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. [code]  NeurIPS 2022.
Boxin Wang*, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro.

Long-Short Transformer: Efficient transformers for language and vision. [code]  NeurIPS 2021.
Chen Zhu*, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro.

DiffWave: A versatile diffusion model for audio synthesis. [project]  ICLR 2021 (Oral).
Zhifeng Kong*, Wei Ping, Jiaji Huang, Kexin Zhao, Bryan Catanzaro.

WaveFlow: A compact flow-based model for raw audio. [project]  ICML 2020.
Wei Ping, Kainan Peng, Kexin Zhao, Zhao Song.

Non-autoregressive neural text-to-speech. [demo]  ICML 2020.
Kainan Peng, Wei Ping, Zhao Song, Kexin Zhao.

ClariNet: Parallel wave generation in end-to-end text-to-speech. [demo]  ICLR 2019.
Wei Ping, Kainan Peng, Jitong Chen.

Deep Voice 3: Scaling text-to-speech with convolutional sequence learning. [demo][more samples]  ICLR 2018.
Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller.

Neural voice cloning with a few samples. [demo]  NeurIPS 2018.
Sercan O. Arik, Jitong Chen, Kainan Peng, Wei Ping, Yanqi Zhou.

Deep Voice 2: Multi-speaker neural text-to-speech. [demo]  NIPS 2017.
Sercan O. Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou.

Learning infinite RBMs with Frank-Wolfe.  NIPS 2016.
Wei Ping, Qiang Liu, Alexander Ihler.

Decomposition bounds for marginal MAP.  NIPS 2015.
Wei Ping, Qiang Liu, Alexander Ihler.

Marginal structured SVM with hidden variables.  ICML 2014.
Wei Ping, Qiang Liu, Alexander Ihler.