2024
The Fall of ROME: Understanding the Collapse of LLMs in Model Editing
Wanli Yang, Fei Sun, Jiajun Tan, Xinyu Ma, Du Su, Dawei Yin, Huawei Shen
The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse
Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng
Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?
Hexiang Tan, Fei Sun, Wanli Yang, Yuanzhuo Wang, Qi Cao, Xueqi Cheng
Unlink to Unlearn: Simplifying Edge Unlearning in GNNs
Jiajun Tan, Fei Sun, Ruichen Qiu, Du Su, Huawei Shen
2022
Debiasing Learning for Membership Inference Attacks Against Recommender Systems
Zihan Wang, Na Huang, Fei Sun, Pengjie Ren, Zhumin Chen, Hengliang Luo, Maarten de Rijke, Zhaochun Ren
Recommendation Unlearning
Chong Chen, Fei Sun, Min Zhang, Bolin Ding
Contrastive Learning for Sequential Recommendation
Xu Xie, Fei Sun, Zhaoyang Liu, Jinyang Gao, Bolin Ding, Bin Cui
Neural Re-ranking in Multi-stage Recommender Systems: A Review
Weiwen Liu, Yunjia Xi, Jiarui Qin, Fei Sun, Bo Chen, Weinan Zhang, Rui Zhang, Ruiming Tang
2021
Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation
Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, Bolin Ding
CausCF: Causal Collaborative Filtering for Recommendation Effect Estimation
Xu Xie, Zhaoyang Liu, Shiwen Wu, Fei Sun, Cihang Liu, Jiawei Chen, Jinyang Gao, Bin Cui, Bolin Ding
Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning
Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, Wai Lam
Variation Control and Evaluation for Generative Slate Recommendations
Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei and Yongfeng Zhang
Explore User Neighborhood for Real-time E-commerce Recommendation
Xu Xie, Fei Sun, Xiaoyong Yang, Zhao Yang, Jinyang Gao, Wenwu Ou and Bin Cui
ICDE 2021
Towards Long-term Fairness in Recommendation
Yingqiang Ge, Shuchang Liu, Ruoyuan Gao, Yikun Xian, Yunqi Li, Xiangyu Zhao, Changhua Pei, Fei Sun, Junfeng Ge, Wenwu Ou and Yongfeng Zhang
WSDM 2021
2020
MTBRN: Multiplex Target-Behavior Relation Enhanced Network for Click-Through Rate Prediction
Yufei Feng, Fuyu Lv, Binbin Hu, Fei Sun, Kun Kuang, Yang Liu, Qingwen Liu and Wenwu Ou
CIKM 2020
Improving End-to-End Sequential Recommendations with Intent-aware Diversification
Wanyu Chen, Pengjie Ren, Fei Cai, Fei Sun and Maarten de Rijke
CIKM 2020
Privileged Features Distillation at Taobao Recommendations
Chen Xu, Quan Li, Junfeng Ge, Jinyang Gao, Xiaoyong Yang, Changhua Pei, Fei Sun, Jian Wu, Hanxiao Sun and Wenwu Ou
KDD 2020
Understanding Echo Chambers in E-commerce Recommender Systems
Yingqiang Ge, Shuya Zhao, Honglu Zhou, Changhua Pei, Fei Sun, Wenwu Ou and Yongfeng Zhang
SIGIR 2020 Industry Track
Learning Personalized Risk Preferences for Recommendation
Yingqiang Ge, Shuyuan Xu, Shuchang Liu, Zuohui Fu, Fei Sun, and Yongfeng Zhang
SIGIR 2020
Intent Preference Decoupling for User Representation on Online Recommender System
Zhaoyang Liu, Jinyang Gao, Haokun Chen, Fei Sun, Xu Xie, Yanyan Shen, Bolin Ding
IJCAI 2020
Node Conductance: A Scalable Node Centrality Measure on Big Networks
Tianshu Lyu, Fei Sun, Yan Zhang
PAKDD 2020
2019
BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou and Peng Jiang
SDM: Sequential Deep Matching Model for Online Large-scale Recommender System
Fuyu Lv, Taiwei Jin, Changlong Yu, Fei Sun, Quan Lin, Keping Yang and Wilfred Ng
A Pareto-Efficient Algorithm for Multiple Objective Optimization in E-Commerce Recommendation
Xiao Lin, Hongjie Chen, Changhua Pei, Fei Sun, Xuanji Xiao, Hanxiao Sun, Yongfeng Zhang, Wenwu Ou and Peng Jiang
RecSys 2019, Best Long Paper Runner-up
Personalized Re-ranking for E-commerce Recommender Systems
Changhua Pei, Yi Zhang, Yongfeng Zhang, Fei Sun, Xiao Lin, Hanxiao Sun, Jian Wu, Peng Jiang, Junfeng Ge and Wenwu Ou
Compositional Network Embedding for Link Prediction
Tianshu Lyu, Fei Sun, Peng Jiang, Wenwu Ou and Yan Zhang
Improving Multi-turn Dialogue Modelling with Utterance ReWriter
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu and Jie Zhou
Tag2Gauss: Learning Tag Representations via Gaussian Distribution in Tagged Networks
Yun Wang, Lun Du, Guojie Song, Xiaojun Ma, Lichen Jin, Wei Lin, Fei Sun
Deep Session Interest Network for Click-Through Rate Prediction
Yufei Feng, Fuyu Lv, Weichen Shen, Menghan Wang, Fei Sun, Yu Zhu, Keping Yang
Exact-K Recommendation via Maximal Clique Optimization
Yu Gong, Yu Zhu, Lu Duan, Qingwen Liu, Ziyu Guan, Fei Sun, Wenwu Ou, Kenny Zhu
POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion
Wen Chen, Pipei Huang, Jiaming Xu, Xin Guo, Cheng Guo, Fei Sun, Chao Li, Andreas Pfadler, Huan Zhao, Binqiang Zhao
Value-aware Recommendation based on Reinforced Profit Maximization in E-commerce Systems
Changhua Pei, Xinru Yang, Qing Cui, Xiao Lin, Fei Sun, Peng Jiang, Wenwu Ou, Yongfeng Zhang
The Web Conference 2019
2018
Multi-Source Pointer Network for Product Title Summarization
Fei Sun, Peng Jiang, Hanxiao Sun, Changhua Pei, Wenwu Ou, and Xiaobo Wang
Abstract
In this paper, we study the product title summarization problem in E-commerce applications, where titles are displayed on mobile devices.
Compared with conventional sentence summarization, product title summarization imposes some additional and essential constraints.
For example, factual errors or loss of key information are intolerable in E-commerce applications.
We therefore formulate two extra constraints for product title summarization:
(i) do not introduce irrelevant information; (ii) retain the key information (e.g., brand name and commodity name).
To address these issues, we propose a novel multi-source pointer network that adds a new knowledge encoder to the pointer network.
The first constraint is handled by the pointer mechanism.
For the second constraint, we recover the key information by copying words from the knowledge encoder with the help of a soft gating mechanism.
For evaluation, we build a large collection of real-world product titles together with human-written short titles.
Experimental results demonstrate that our model significantly outperforms the baselines.
Finally, online deployment of the proposed model has yielded a significant business impact, as measured by the click-through rate.
BibTeX
@inproceedings{Sun:MPN:CIKM2018,
author = {Sun, Fei and Jiang, Peng and Sun, Hanxiao and Pei, Changhua and Ou, Wenwu and Wang, Xiaobo},
title = {Multi-Source Pointer Network for Product Title Summarization},
booktitle = {Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
series = {CIKM '18},
year = {2018},
isbn = {978-1-4503-6014-2},
location = {Torino, Italy},
pages = {7--16},
numpages = {10},
url = {http://doi.acm.org/10.1145/3269206.3271722},
doi = {10.1145/3269206.3271722},
acmid = {3271722},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {extractive summarization, pointer network, title summarization},
}
Modeling Consumer Buying Decision for Recommendation Based on Multi-Task Deep Learning
Qiaolin Xia, Peng Jiang, Fei Sun, Yi Zhang, Xiaobo Wang, and Zhifang Sui
2016
Sparse Word Embeddings Using ℓ1 Regularized Online Learning
Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng
Abstract
Recently, the Word2Vec tool has attracted a lot of interest for its promising performance on a variety of natural language processing (NLP) tasks.
However, a critical issue is that the dense word representations learned by Word2Vec lack interpretability.
It is natural to ask whether one could improve their interpretability while preserving their performance.
Inspired by the success of sparse models in enhancing interpretability, we propose to introduce a sparse constraint into Word2Vec.
Specifically, we take the Continuous Bag of Words (CBOW) model as an example and add an ℓ1 regularizer to its learning objective.
One optimization challenge is that stochastic gradient descent (SGD) cannot directly produce sparse solutions with an ℓ1 regularizer in online training.
To solve this problem, we employ Regularized Dual Averaging (RDA), an online optimization algorithm for regularized stochastic learning.
In this way, the learning process is very efficient and our model can scale to very large corpora to derive sparse word representations.
The proposed model is evaluated on both expressive-power tasks and an interpretability task.
The results show that, compared with the original CBOW model, the proposed model obtains state-of-the-art results with better interpretability, using fewer than 10% non-zero elements.
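The RDA update has a simple closed form that soft-thresholds the running average of gradients, which is what lets it produce exact zeros online where SGD cannot. Below is a minimal sketch on a toy least-squares problem, not the paper's CBOW training code; the objective and the `lam`/`gamma` values are illustrative assumptions.

```python
import numpy as np

def rda_l1(grad_fn, dim, steps, lam=0.1, gamma=2.0):
    """Sketch of l1-Regularized Dual Averaging (RDA, Xiao 2010).

    Unlike plain SGD, RDA soft-thresholds the *running average* of all
    past gradients, so coordinates whose averaged gradient stays within
    lam are set exactly to zero during online training.
    """
    w = np.zeros(dim)
    g_bar = np.zeros(dim)                      # running average of gradients
    for t in range(1, steps + 1):
        g = grad_fn(w)
        g_bar += (g - g_bar) / t               # online mean update
        shrunk = np.abs(g_bar) - lam           # soft-thresholding against lam
        w = np.where(shrunk <= 0, 0.0,
                     -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrunk)
    return w

# Toy least-squares objective: only the first two coordinates matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2.0 * X[:, 1]
grad = lambda w: X.T @ (X @ w - y) / len(y)
w = rda_l1(grad, dim=10, steps=300, lam=0.05)  # a sparse weight vector
```

If `lam` exceeds the magnitude of every averaged gradient, all coordinates remain exactly zero, which is the behavior SGD with a subgradient of the ℓ1 term cannot reproduce.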
BibTeX
@InProceedings{Sun:Sparse,
author = {Fei Sun and Jiafeng Guo and Yanyan Lan and Jun Xu and Xueqi Cheng},
title = {Sparse Word Embeddings Using $\ell_1$ Regularized Online Learning},
booktitle = {Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence},
year = {2016},
pages = {2915--2921}
}
Semantic Regularities in Document Representations
Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng
Abstract
Recent work has shown that distributed word representations are good at capturing linguistic regularities in language.
This allows vector-oriented reasoning based on simple linear algebra between words.
Since many different methods have been proposed for learning document representations, it is natural to ask whether these learned representations exhibit a similar linear structure that allows analogous reasoning at the document level.
To answer this question, we design a new document analogy task for testing semantic regularities in document representations, and conduct empirical evaluations over several state-of-the-art document representation models.
The results reveal that neural-embedding-based document representations perform better on this analogy task than conventional methods, and we provide some preliminary explanations for these observations.
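The vector-offset reasoning behind such an analogy task can be sketched in a few lines; the toy vectors below are hypothetical and hand-chosen so the offsets line up, purely for illustration.

```python
import numpy as np

def analogy(vecs, a, b, c):
    """Return the item whose vector is closest (by cosine) to
    vec(b) - vec(a) + vec(c): 'a is to b as c is to ?'.
    The same test applies unchanged to document vectors."""
    target = vecs[b] - vecs[a] + vecs[c]
    best, best_sim = None, -np.inf
    for name, v in vecs.items():
        if name in (a, b, c):                # exclude the query items themselves
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# Hypothetical toy vectors, chosen so the analogy holds by construction.
vecs = {"king": np.array([1.0, 1.0]), "queen": np.array([1.0, -1.0]),
        "man": np.array([0.2, 1.0]), "woman": np.array([0.2, -1.0])}
print(analogy(vecs, "man", "woman", "king"))  # prints "queen"
```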
Inside Out: Two Jointly Predictive Models for Word Representations and Phrase Representations
Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng
Abstract
The distributional hypothesis lies at the root of most existing word representation models, which infer a word's meaning from its external contexts.
However, distributional models cannot handle long-tail words well, and they fail to identify some fine-grained linguistic regularities because they ignore word forms.
In contrast, morphology holds that words are built from basic units, i.e., morphemes.
The meaning and function of long-tail words can therefore be inferred from words sharing the same morphemes, and many syntactic relations can be identified directly from word forms.
However, the limitation of morphology is that it cannot infer the relationship between two words that share no morphemes.
Considering the advantages and limitations of both approaches, we propose two novel models, BEING and SEING, that build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way.
These two models can also be extended to learn phrase representations according to distributed morphology theory.
We evaluate the proposed models on similarity tasks and analogy tasks.
The results demonstrate that the proposed models significantly outperform state-of-the-art models on both word and phrase representation learning.
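The intuition that morphemes carry meaning for unseen or long-tail words can be illustrated with a simplified additive stand-in (not the paper's BEING/SEING models; the hand-set morpheme vectors are purely hypothetical):

```python
import numpy as np

def compose(morpheme_vecs, morphemes):
    """Build a word vector as the sum of its morpheme vectors — a
    simplified additive stand-in for jointly predictive models."""
    return sum(morpheme_vecs[m] for m in morphemes)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical hand-set vectors, for illustration only.
mv = {"happy": np.array([1.0, 0.2]), "un": np.array([-2.0, 0.0])}
sad = np.array([-1.0, 0.3])

# A word never observed in context still gets a usable vector
# from its morphemes, and lands near semantically related words.
unhappy = compose(mv, ["un", "happy"])
assert cos(unhappy, sad) > cos(unhappy, mv["happy"])
```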
BibTeX
@InProceedings{Sun:Inside,
author = {Fei Sun and Jiafeng Guo and Yanyan Lan and Jun Xu and Xueqi Cheng},
title = {Inside Out: Two Jointly Predictive Models for Word Representations and Phrase Representations},
booktitle = {Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {2821--2827}
}
2022
Studying the Impact of Data Disclosure Mechanism in Recommender Systems via Simulation
Ziqian Chen, Fei Sun, Yifan Tang, Haokun Chen, Jinyang Gao, Bolin Ding
Graph Neural Networks in Recommender Systems: A Survey
Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, Bin Cui
Semantic Models for the First-Stage Retrieval: A Comprehensive Review
Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, Xueqi Cheng