Reading papers on fairness of recommendation systems (6)

Time: 2021-10-24

Since this is the reading record for the last paper, I decided to summarize the papers I have skimmed or read closely so far, and then give some personal opinions and ideas about this research field.

Paper summary

Bias and unfairness in recommendation systems arise together with the recommendation algorithms themselves rather than being introduced deliberately. Some typical biases from the papers I have read, together with their main solutions, are as follows:

(1) Demographic parity

Description: Users should not receive different recommendation results because of their gender, age, race, or other sensitive characteristics. Pursuing this kind of fairness obviously affects the accuracy of the recommendation system, but it is necessary in scenarios such as job recommendation.
Solution: Use adversarial learning to remove sensitive information from the user embedding vectors [1]. More recently, orthogonality regularization has been used to make the biased user vector as orthogonal as possible to the bias-free user vector, so that the two can be separated [2].
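To make the adversarial idea concrete, here is a minimal PyTorch-style sketch; it is not the exact model of [1] or [2], and the class names, dimensions, and weight `lam` are illustrative assumptions. A discriminator tries to predict the sensitive attribute from the user embedding, and a gradient-reversal layer pushes the embedding to retain as little of that information as possible.

```python
# A minimal sketch of adversarial debiasing of user embeddings (assumed names
# and hyperparameters; NOT the exact model of [1] or [2]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairRecSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=32, lam=1.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # The adversary tries to recover the sensitive attribute (2 classes here)
        # from the user embedding.
        self.adversary = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 2))
        self.lam = lam

    def forward(self, users, items, sensitive):
        u, v = self.user_emb(users), self.item_emb(items)
        score = (u * v).sum(-1)                       # recommendation score
        adv_logits = self.adversary(GradReverse.apply(u, self.lam))
        adv_loss = F.cross_entropy(adv_logits, sensitive)
        # Training minimizes rating_loss(score, label) + adv_loss; the reversed
        # gradient pushes sensitive information out of the user embedding.
        return score, adv_loss
```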


(2) Position bias

Description: This is a Matthew effect. The attention each item receives in a ranking depends on its display position: items near the top are noticed and clicked more easily than items further down, which skews the model's perception of user preferences, makes the CTR estimate inaccurate, and is further amplified through the feedback loop.
Solution: The problem can be transformed into an integer linear programming (ILP) problem with ranking quality as a constraint [3].
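As a rough illustration of "ranking as a constrained linear program" (a toy sketch with made-up relevance and examination probabilities, not the formulation of the cited paper), the LP relaxation below maximizes expected ranking utility while capping the exposure any single item can receive; the cap plays the role of the fairness constraint.

```python
# Toy sketch of "ranking as a constrained linear program" (made-up numbers;
# not the exact formulation of the cited paper).
import numpy as np
from scipy.optimize import linprog

rel = np.array([0.9, 0.8, 0.3, 0.1])     # estimated relevance of each item
exam = np.array([1.0, 0.6, 0.3, 0.1])    # examination probability per position
n = len(rel)

# Decision variable x[i, j] = probability of showing item i at position j,
# flattened row by row. linprog minimizes, so negate the expected utility.
c = -(rel[:, None] * exam[None, :]).ravel()

A_eq, b_eq = [], []
for i in range(n):                        # each item is placed exactly once
    row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(1.0)
for j in range(n):                        # each position holds exactly one item
    row = np.zeros(n * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(1.0)

# Fairness-style constraint: cap the expected exposure any single item receives.
cap = 0.8
A_ub, b_ub = [], []
for i in range(n):
    row = np.zeros(n * n); row[i * n:(i + 1) * n] = exam
    A_ub.append(row); b_ub.append(cap)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n, n).round(2))       # a doubly stochastic ranking policy
```

The solution is a doubly stochastic matrix, i.e., a randomized ranking policy; in practice it can be decomposed into concrete rankings (for example via a Birkhoff-von Neumann decomposition).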

(3) Selection bias

Description: Selection bias mainly comes from users' explicit feedback, such as item ratings. Users tend to rate items they are interested in and rarely rate items they are not, so the ratings are missing not at random (MNAR); the observed ratings are not a representative sample of all ratings, which produces selection bias.
Solution: From the perspective of causal inference, the observed data can be weighted by the inverse propensity score (IPS) to build an unbiased estimator of the ideal evaluation metric, where the propensity score can be understood as the probability that each data point is observed [3].
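A minimal numpy sketch of this IPS estimator (the error values and propensities below are toy numbers chosen only for illustration):

```python
# Minimal numpy sketch of the IPS estimator: weight each observed loss by the
# inverse of its propensity (toy numbers, chosen only for illustration).
import numpy as np

def ips_estimate(errors, propensities, n_users, n_items):
    """Unbiased estimate of the average loss over ALL user-item pairs,
    computed from the observed pairs only."""
    return np.sum(errors / propensities) / (n_users * n_items)

errors = np.array([0.2, 1.5, 0.1, 0.8, 0.4])         # e.g. squared rating errors
propensities = np.array([0.9, 0.05, 0.7, 0.1, 0.3])  # P(rating observed) per pair
print(ips_estimate(errors, propensities, n_users=100, n_items=50))
```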

(4) Exposure bias

Description: Exposure bias mainly comes from users' implicit feedback, such as clicks. Users only see the subset of items the system exposes and respond to them by clicking, so an interaction missing from the data does not necessarily mean the user dislikes the item; the user may simply never have seen it.
Solution: Again from the perspective of causal inference, exposing items to users can be viewed like administering treatments to patients: only a few patients (users) reveal their reaction to a few treatments (items). An unbiased estimator can likewise be constructed with propensity scores [4].
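In the spirit of [4] (a hedged sketch, not the paper's exact implementation; all tensors below are toy values), clicks can be re-weighted by the inverse of an exposure propensity so that unclicked items are treated as "possibly unexposed" rather than plainly negative:

```python
# Sketch of an exposure-propensity-weighted loss for implicit feedback, in the
# spirit of [4] (toy tensors; not the paper's exact implementation).
import torch

def unbiased_bce(scores, clicks, theta, eps=1e-6):
    """scores: raw model scores; clicks: 0/1 feedback; theta: exposure propensities."""
    p = torch.sigmoid(scores)
    w = clicks / theta.clamp(min=eps)      # inverse-propensity weight for clicks
    # Note: (1 - w) can be negative for low-propensity clicks; this is expected
    # for the unbiased estimator and is sometimes clipped in practice.
    return -(w * torch.log(p + eps) + (1 - w) * torch.log(1 - p + eps)).mean()

scores = torch.tensor([2.0, -1.0, 0.5])
clicks = torch.tensor([1.0, 0.0, 1.0])
theta = torch.tensor([0.8, 0.5, 0.2])      # how likely each item was exposed
print(unbiased_bce(scores, clicks, theta))
```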

(5) Popularity bias

Description: The global popularity of items affects their ranking, so the recommendation system may recommend the most popular rather than the most relevant items. This is unfair to unpopular items: an unpopular item is like a new shop whose goods may be of good quality, yet because the recommendation system never recommends them, the shop has to move to another platform.
Solution: An in-processing method based on regularization can be adopted: the Pearson correlation coefficient between the predicted score of each user-item pair and the popularity of the corresponding item is used as a regularization term, and the bias is reduced by jointly minimizing this term and the recommendation error [5]. From a causal-inference perspective, item popularity can also be viewed as a confounder between item exposure and user interaction; the effect of popularity on item exposure should be removed, while its effect on interaction (which captures users' conformity) should be kept, i.e., popularity bias should be leveraged rather than blindly removed [6].
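A small sketch of the regularization idea (illustrative `alpha` and data; not the exact objective of [5]): penalize the squared Pearson correlation between predicted scores and item popularity alongside the usual rating loss.

```python
# Sketch of a popularity-debiasing regularizer (illustrative alpha and data;
# not the exact objective of [5]): penalize the squared Pearson correlation
# between predicted scores and item popularity.
import torch
import torch.nn.functional as F

def pearson_corr(x, y, eps=1e-8):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc.norm() * yc.norm() + eps)

def regularized_loss(pred, target, item_popularity, alpha=0.1):
    rec_loss = F.mse_loss(pred, target)              # usual recommendation error
    corr = pearson_corr(pred, item_popularity)       # score-popularity correlation
    return rec_loss + alpha * corr ** 2              # push the correlation to zero

pred = torch.tensor([4.2, 3.1, 2.5, 4.8], requires_grad=True)
target = torch.tensor([4.0, 3.5, 2.0, 5.0])
pop = torch.tensor([0.9, 0.4, 0.1, 0.8])             # normalized item popularity
print(regularized_loss(pred, target, pop))
```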

Personal view

In recommendation systems, user behavior data are observational rather than experimental [3], so they contain various biases, such as users' selection bias over items and the system's exposure bias of items. Directly fitting a model to such data while ignoring the bias leads to poor performance and, to some extent, damages users' experience of and trust in the recommendation system. Removing bias from recommendation systems has therefore become a new research direction in the field.
However, most debiasing methods in academia are still modifications of existing machine learning algorithms: they remove bias by changing the objective function, adding optimization constraints, adding regularization terms, and so on. According to the Turing Award winner Judea Pearl, it is difficult to grasp the internal logic of the data by relying only on traditional machine learning based on data fitting while ignoring the data-generating process, which often leads to classical statistical problems such as Simpson's paradox [7]. At present, most debiasing methods for recommendation systems are based on black-box models or constrain the model parameters through the optimization algorithm, which makes it hard to uncover the data-generating process and thus to remove the bias at its root.
With the rise of causal inference in recent years, propensity scores, counterfactual reasoning, and deconfounding have received more and more attention in this field, providing new ideas for debiasing recommendation systems. The research in this area can roughly be summarized as follows: an unbiased estimator is proposed based on an ideal objective function and some assumptions, relevant techniques are used to connect the ideal objective with the unbiased estimator, and experiments on fully simulated, semi-synthetic, and real data sets further verify the effectiveness of the method. The wide use of causal inference in debiasing also indicates that the interpretability of recommendation systems is receiving attention: instead of only tuning a "black box" to improve recommendation accuracy, people are gradually paying attention to explaining the data and exposing the internal mechanism of the model, which usually requires analysis and reasoning combined with the specific application scenario.
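To make the relationship between the ideal objective and its unbiased estimator concrete, here is the standard IPS argument in the style of [3] (a sketch in generic notation, where $O_{u,i}$ indicates whether the pair $(u,i)$ is observed and $P_{u,i} = P(O_{u,i}=1)$ is its propensity):

$$
R_{\text{ideal}}(\hat{Y}) = \frac{1}{|\mathcal{U}|\,|\mathcal{I}|} \sum_{u,i} \delta_{u,i}(Y, \hat{Y}),
\qquad
\hat{R}_{\text{IPS}}(\hat{Y} \mid P) = \frac{1}{|\mathcal{U}|\,|\mathcal{I}|} \sum_{(u,i):\, O_{u,i}=1} \frac{\delta_{u,i}(Y, \hat{Y})}{P_{u,i}}.
$$

Since $\mathbb{E}[O_{u,i}] = P_{u,i}$,

$$
\mathbb{E}_{O}\!\left[\hat{R}_{\text{IPS}}(\hat{Y} \mid P)\right]
= \frac{1}{|\mathcal{U}|\,|\mathcal{I}|} \sum_{u,i} \frac{\mathbb{E}[O_{u,i}]}{P_{u,i}}\, \delta_{u,i}(Y, \hat{Y})
= R_{\text{ideal}}(\hat{Y}),
$$

so the estimator computed on observed data alone is unbiased for the ideal objective, provided the propensities are correct and strictly positive.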
Although great progress has been made on unbiased recommendation, some problems deserve further research: proposing suitable benchmark data sets and standard evaluation metrics to better evaluate unbiased recommendation systems; studying how bias evolves, since in the real world bias is usually dynamic rather than static, and how dynamic bias affects the recommendation system; and designing better strategies (such as causal inference) to balance fairness and recommendation accuracy across many recommendation scenarios. Among these, causal inference is receiving more and more attention across the whole field of artificial intelligence, and its ability to reveal the data-generating process and avoid correlation traps fits the goal of debiasing recommendation systems well. Personally, I think debiasing recommendation systems with causal inference is a promising direction. Research on causal inference in this field has only just started and there is plenty of room to explore, so I intend to continue in-depth research in this direction and take it as the topic of my graduation project.

References

  • [1] Wu L, Chen L, Shao P, et al. Learning Fair Representations for Recommendation: A Graph-based Perspective[C]//Proceedings of the Web Conference 2021. 2021: 2198-2208.
  • [2] Wu C, Wu F, Wang X, et al. Fairness-aware News Recommendation with Decomposed Adversarial Learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(5): 4462-4469.
  • [3] Schnabel T, Swaminathan A, Singh A, et al. Recommendations as Treatments: Debiasing Learning and Evaluation[C]//International Conference on Machine Learning. PMLR, 2016: 1670-1679.
  • [4] Saito Y, Yaginuma S, Nishino Y, et al. Unbiased recommender learning from missing-not-at-random implicit feedback[C]//Proceedings of the 13th International Conference on Web Search and Data Mining. 2020: 501-509.
  • [5] Zhu Z, He Y, Zhao X, et al. Popularity-Opportunity Bias in Collaborative Filtering[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 2021: 85-93.
  • [6] Zhang Y, Feng F, He X, et al. Causal Intervention for Leveraging Popularity Bias in Recommendation[J]. arXiv preprint arXiv:2105.06067, 2021.
  • [7] Pearl J. Causality[M]. Cambridge University Press, 2009.
