# Learning Word Representation by Jointly Modeling Syntagmatic and Paradigmatic Relations

## Introduction

Existing models for learning word representations focus on either syntagmatic or paradigmatic relations alone. We propose two novel distributional models (PDC and HDC) that learn word representations from both syntagmatic and paradigmatic relations via a joint training objective. The proposed models are trained on a public Wikipedia corpus, and the learned representations are evaluated on word analogy and word similarity tasks. The results demonstrate that the proposed models significantly outperform state-of-the-art baseline methods on both tasks.

## Paper

Note: The ACL Anthology version of the paper contains two errors: the negative sampling objective functions of PDC and HDC are missing a minus sign, and the syntactic precision reported in Table 2 left out the gram9 subtask. Both have been fixed here.
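For reference, the minus sign in question is the one applied to the negative samples in a word2vec-style negative sampling objective. The sketch below shows that generic objective, not the exact PDC/HDC formulation from the paper; all vector values and function names are illustrative assumptions.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def negative_sampling_objective(center, positive, negatives):
    """Generic negative sampling objective (to be maximized):

        log sigma(v_pos . v_c) + sum_k log sigma(-v_neg_k . v_c)

    Note the minus sign on the negative-sample term; this is the
    sign that was missing in the Anthology version of the paper.
    (Illustrative sketch only, not the paper's PDC/HDC objective.)
    """
    score = math.log(sigmoid(dot(positive, center)))
    for neg in negatives:
        # Negative samples are pushed away from the center word,
        # hence the negated dot product inside the sigmoid.
        score += math.log(sigmoid(-dot(neg, center)))
    return score


# Toy 2-dimensional vectors, purely for illustration.
center = [0.5, -0.2]
positive = [0.4, 0.1]
negatives = [[-0.3, 0.8], [0.1, -0.6]]
print(negative_sampling_objective(center, positive, negatives))
```

Since each term is a log of a probability in (0, 1), the objective is always negative; training maximizes it toward zero.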

Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. Learning word representations by jointly modeling syntagmatic and paradigmatic relations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 136-145, Beijing, China, 2015.

### Citation

```bibtex
@inproceedings{Fei:Learning,
  author    = {Fei Sun and Jiafeng Guo and Yanyan Lan and Jun Xu and Xueqi Cheng},
  title     = {Learning Word Representations by Jointly Modeling Syntagmatic and Paradigmatic Relations},
  booktitle = {Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  year      = {2015},
  publisher = {Association for Computational Linguistics},
  pages     = {136--145},
  location  = {Beijing, China},
  url       = {http://aclweb.org/anthology/P15-1014}
}
```