Tianxing He (贺天行)

Hi! I'm a PhD student at MIT. I am supervised by Prof. James Glass, who runs the SLS group. My research interest lies in natural language processing and deep learning. Most of my work during my PhD has focused on neural language generation.

I did my bachelor's and master's degrees at Shanghai Jiao Tong University, where my research was supervised by Prof. Kai Yu, who runs the SJTU SpeechLab. At SJTU I was in the ACM Honored Class.

My fiancée and I raise two corgis, Minnie & Mickey! We post their photos on RED and Instagram.

Email  /  Google Scholar  /  Twitter

Research

My research interest lies in natural language processing and deep learning. Most of my work during my PhD has focused on neural language generation. Representative papers are highlighted.

An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models
Tianxing He, Kyunghyun Cho, James Glass
arXiv preprint

We compare a variety of approaches under a few-shot knowledge probing setting, where only a small number (e.g., 10 or 20) of example triples are available. In addition, we create a new dataset named TREx-2p, which contains 2-hop relations.

Exposure Bias versus Self-Recovery: Are Distortions Really Incremental for Autoregressive Text Generation?
Tianxing He, Jingzhao Zhang, Zhiming Zhou, James Glass
EMNLP 2021

By feeding the LM with different types of prefixes, we can assess how serious exposure bias is. Surprisingly, our experiments reveal that the LM has a self-recovery ability, which we hypothesize counteracts the harmful effects of exposure bias.

Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl
EACL 2021

We explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach better calibration that is competitive with strong baselines, with little or no loss in accuracy.

Analyzing the Forgetting Problem in the Pretrain-Finetuning of Dialogue Response Models
Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
EACL 2021

After finetuning a pretrained NLG model, does the model forget some precious skills learned during pretraining? We demonstrate the forgetting phenomenon through a set of detailed behavior analyses from the perspectives of knowledge transfer, context sensitivity, and function space projection.

A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation
Moin Nadeem, Tianxing He (equal contribution), Kyunghyun Cho, James Glass
AACL 2020

We identify a few interesting properties that are shared among existing sampling algorithms for NLG. We design experiments to check whether these properties are crucial for good performance.

Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity
Jingzhao Zhang, Tianxing He, Suvrit Sra, Ali Jadbabaie
ICLR 2020

We provide a theoretical explanation for the effectiveness of gradient clipping in training deep neural networks. The key ingredient is a new smoothness condition derived from practical neural network training examples.

Negative Training for Neural Dialogue Response Generation
Tianxing He, James Glass
ACL 2020

Can we "correct" some detected bad behaviors of an NLG model? We use negative examples to feed negative training signals to the model.

AutoKG: Constructing Virtual Knowledge Graphs from Unstructured Documents for Question Answering
Seunghak Yu, Tianxing He, James Glass
Preprint

We propose a novel framework to automatically construct a KG from unstructured documents that does not require external alignment.

An Empirical Study of Transformer-based Neural Language Model Adaptation
Ke Li, Zhe Liu, Tianxing He, Hongzhao Huang, Fuchun Peng, Daniel Povey, Sanjeev Khudanpur
ICASSP 2020

We propose a mixture of LMs separately trained on source and target domains, combined with dynamic weighting, aiming to improve over simple linear interpolation.

Detecting Egregious Responses in Neural Sequence-to-sequence Models
Tianxing He, James Glass
ICLR 2019

Can we trick dialogue response models into emitting dirty words?

On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation
Tianxing He, Yu Zhang, Jasha Droppo, Kai Yu
ISCSLP 2016

We attempt to train a bi-directional RNNLM via noise contrastive estimation.

Exploiting LSTM Structure in Deep Neural Networks for Speech Recognition
Tianxing He, Jasha Droppo
ICASSP 2016

We design an LSTM structure that operates along the depth dimension, in contrast to its original use along the time-step dimension.

Recurrent Neural Network Language Model with Structured Word Embeddings for Speech Recognition
Tianxing He, Xu Xiang, Yanmin Qian, Kai Yu
ICASSP 2015

We restructure word embeddings in an RNNLM to take advantage of word sub-units.

Reshaping Deep Neural Network for Fast Decoding by Node-Pruning
Tianxing He, Yuchen Fan, Yanmin Qian, Tian Tan, Kai Yu
ICASSP 2014

We prune neurons of a DNN for faster inference.


The design and code of this website are borrowed from Jon Barron's site.