Here is the list of the most interesting papers published this week:


UP5: Unbiased Foundation Model for Fairness-aware Recommendation

Fairness in Recommendation Systems (RS) is one of the most interesting and widely-discussed topics. The primary objective of such research is to ensure that users are not subjected to discrimination based on their sensitive attributes, such as gender, age, or race. At the same time, while it is important to mitigate bias in RSs and make recommendations independent of the users' sensitive attributes, it is equally important to still offer personalized and relevant recommendations. The challenge, therefore, lies in striking a balance between mitigating bias in RSs and generating accurate, tailored recommendations.

This paper explores language-model-based RSs. The benefit of using a language model as the backbone of an RS is that it is better able to capture contextual dependencies. The main requirements taken into account in this work are:

  1. Allowing users to mark certain attributes as sensitive, i.e., attributes on which they do not wish to be discriminated. The reason for this requirement is that, for instance, in a movie RS, some users may actually want recommendations tailored to their age or race.

  2. Developing a unified model capable of handling bias mitigation across all sensitive attributes. In other words, rather than relying on separate models for each attribute, a more scalable approach is to have a single model that handles all of them.

  3. Ensuring that enforcing fairness in RSs does not lead to a significant decrease in the recommender's performance.

In this paper, the authors first propose several approaches for measuring bias in RSs. Then, they propose a novel prompting approach for mitigating bias in the recommenders. Finally, they describe their idea for considering several sensitive attributes in bias mitigation.

Exploring Bias in LLM-based RS

In this section, we review the proposed approaches for measuring bias in LLM-based RSs. Before describing these approaches, let us define what a fair recommendation means in formal notation. An RS is considered fair if, for any user \(u\) with features \(X = x\) and sensitive attributes \(K = k\), the following condition holds for the top-N recommended list, denoted by \(L_k\) when the sensitive attribute takes value \(k\):

\[P(L_k \mid X = x, K = k) = P(L_{k'} \mid X = x, K = k')\]
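To make this condition concrete, here is a minimal Python sketch of an empirical check: it estimates the distribution of top-N lists produced under two counterfactual values of a sensitive attribute and compares them. The `recommend_top_n` callable, the sampling-based estimate, and the total-variation metric are assumptions for illustration, not part of the paper.

```python
from collections import Counter

def empirical_list_distribution(recommend_top_n, features, attr_value, n_samples=1000):
    """Sample top-N lists with the sensitive attribute fixed to `attr_value`
    and return the empirical distribution over the lists that appear.
    Assumes a stochastic recommender (e.g., sampling-based decoding)."""
    counts = Counter()
    for _ in range(n_samples):
        top_n = tuple(recommend_top_n(features, attr_value))
        counts[top_n] += 1
    return {lst: c / n_samples for lst, c in counts.items()}

def counterfactual_gap(recommend_top_n, features, value_k, value_k_prime, n_samples=1000):
    """Total-variation distance between P(L_k | X=x, K=k) and P(L_{k'} | X=x, K=k');
    a counterfactually fair recommender should drive this toward zero."""
    p = empirical_list_distribution(recommend_top_n, features, value_k, n_samples)
    q = empirical_list_distribution(recommend_top_n, features, value_k_prime, n_samples)
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in support)
```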

Now, let us review the three bias exploration approaches:

  1. Manually-Designed Prompt: In this approach, manually crafted discrete prompts are used to query the RS: What is the {attribute} of user_{user_id}? Output: {user attribute value}. A similar variant also includes the user-item interaction history in the prompt: User_{user_id} has watched movies {sequence of movie ids}. What is the {attribute} of user_{user_id}? Output: {user attribute value}.
    They applied these prompts, with a separate prompt template for each of the gender, age, occupation, and marital status attributes, to a specific language model (probably P5) and reported the AUC of the predicted attributes. Since in both prompting variants, and for all of the mentioned attributes, the AUC is either 50% or only slightly above 50%, they concluded that this approach is not a promising one for capturing bias in RSs.

  2. Soft-probing Prompt-tuning: In this approach, the authors leverage an encoder-decoder architecture. They assign two soft prompts to each sensitive attribute: one for the encoder and one for the decoder. The encoder and decoder soft prompts are produced by a 2-layer and a 5-layer MLP, respectively. The loss function used for training these two MLPs is explained below. Apart from the two soft prompts, they define a discrete prompt describing users and their interactions: User user_{id} has watched movies {sequence of item ids}.
    Then, the inputs to the encoder and the decoder are formed as follows. Encoder: the encoder soft prompt of the attribute + the discrete prompt for user {user_id}; Decoder: the decoder soft prompt of the attribute + the encoder output.
    Accordingly, the decoder output is used to tune the soft prompts, training the two MLPs with the negative log-likelihood of the attribute value as the loss function.
    The results of this approach reveal that the recommendation model does encode users' personal and sensitive attributes when recommending items.

  3. Multi-class Classifier: In this approach, a language model is fed a discrete prompt describing the user information. The encoder-generated embeddings for the user tokens are then passed to a 7-layer MLP that acts as the attribute classifier, trained with the cross-entropy loss. There is a separate classifier per sensitive attribute, and AUC is reported to measure how well each classifier predicts the corresponding user attribute. The results show that this classifier-based probe is the most effective of the three at capturing bias (a code sketch of this probe follows the list).
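To illustrate the third probe, here is a minimal PyTorch sketch of an attribute classifier trained on mean-pooled user-token embeddings from a frozen encoder. Only the 7-layer depth and the cross-entropy objective come from the description above; the layer widths, the `user_span` convention, and the helper names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AttributeProbe(nn.Module):
    """MLP probe that predicts a sensitive attribute from frozen encoder states.
    The paper reports a 7-layer MLP; the hidden widths here are illustrative."""
    def __init__(self, hidden_size: int, num_classes: int, depth: int = 7, width: int = 512):
        super().__init__()
        layers, in_dim = [], hidden_size
        for _ in range(depth - 1):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, num_classes))
        self.mlp = nn.Sequential(*layers)

    def forward(self, hidden_states, user_span):
        # hidden_states: (batch, seq_len, hidden); user_span = (i, j) indices of the user tokens
        i, j = user_span
        pooled = hidden_states[:, i:j, :].mean(dim=1)  # mean-pool the user-token embeddings
        return self.mlp(pooled)                        # logits over attribute values

def probe_step(probe, optimizer, hidden_states, user_span, labels):
    """One training step: the LM backbone is frozen, only the probe is updated."""
    logits = probe(hidden_states, user_span)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```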

Solution: Counterfactually-fair Prompting

The authors propose a novel prompting-based solution for mitigating bias. Their solution uses adversarial learning to optimize the prompts. More specifically, the steps are as follows (a code sketch of one training step follows the list):

  1. Two soft prompts are assigned to each sensitive attribute, denoted \(p_{enc}\) for the encoder and \(p_{dec}\) for the decoder. The main objective of the whole solution is to optimize all of the soft prompts.

  2. The encoder is then fed the input \(x\) concatenated with \(p_{enc}\) (in the case of a single sensitive attribute): \(hidden\_states = Encoder(p_{enc}, x)\)

  3. Then, the encoder output and \(p_{dec}\) are passed to the decoder to generate the output \(y\): \(y = Decoder(p_{dec}, hidden\_states)\)

  4. The output of the decoder can be represented by: \(p_{\theta_{dec}}(y_j \mid p_{dec}, y_{0:j-1}, hidden\_states)\)

  5. Now, to apply adversarial learning, the multi-class classifier module, denoted \(C\), is used as the discriminator. The adversary tries to correctly classify user attributes, while the rest of the system tries to prevent the adversary from doing so. Accordingly, to optimize the soft prompts, two loss functions are defined: \(L_{rec}\), the recommendation loss, which should be minimized, and \(L_{dis}\), the adversarial classification loss, which should be maximized. \(L_{rec}\) is the negative log-likelihood of the decoder output, and \(L_{dis}\) is the Cross-Entropy Loss (CEL) of the classifier, where \(c_u\) is the correct attribute value for user \(u\):
    \[L_{rec} = -\sum_{j=1}^{|y|} \log P(y_{j} \mid p_{dec}, y_{0:j-1}, hidden\_states)\]
    \[L_{dis}^{k} = CEL\big(c_u, C(mean(hidden\_states[i:j]))\big)\]

  6. Considering \(\lambda_{k}\) as the discriminator weight for attribute \(k\), the total loss for attribute \(k\) is: \(L_k = \sum_{u} L_{rec} - \lambda_{k} \cdot L_{dis}^{k}\)
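Putting the pieces together, below is a minimal PyTorch sketch of one alternating training step for a single sensitive attribute \(k\): the soft prompts are updated to minimize \(L_{rec} - \lambda_{k} L_{dis}^{k}\), while the discriminator \(C\) (e.g., an instance of the `AttributeProbe` above) is updated to minimize its own classification loss. The `encoder` and `decoder` callables are hypothetical wrappers around a frozen backbone, and the shapes and alternating update scheme are assumptions sketched from the steps above, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_prompt_step(encoder, decoder, probe, p_enc, p_dec,
                            x_embeds, target_ids, attr_labels, user_span,
                            prompt_opt, probe_opt, lambda_k=1.0):
    """One alternating update for a single sensitive attribute k.

    `encoder` maps input embeddings to hidden states, `decoder` returns per-token
    log-probabilities of the target item tokens given (p_dec, hidden_states), and
    `probe` is the attribute classifier C. Only the soft prompts (p_enc, p_dec)
    and the probe are assumed trainable; the LM backbone is frozen.
    """
    batch = x_embeds.size(0)

    # 1) Prepend the encoder soft prompt to the input embeddings.
    #    p_enc: (1, prompt_len, hidden), x_embeds: (batch, seq_len, hidden)
    enc_in = torch.cat([p_enc.expand(batch, -1, -1), x_embeds], dim=1)
    hidden_states = encoder(enc_in)                         # (batch, seq, hidden)

    # 2) Recommendation loss: negative log-likelihood of the target tokens y.
    log_probs = decoder(p_dec, hidden_states, target_ids)   # (batch, |y|)
    L_rec = -log_probs.sum(dim=1).mean()

    # 3) Discriminator loss: cross-entropy of the probe on the mean-pooled
    #    user-token states; user_span is assumed to be offset by prompt_len.
    logits = probe(hidden_states, user_span)
    L_dis = F.cross_entropy(logits, attr_labels)

    # 4) Update the soft prompts: minimize L_rec while maximizing L_dis.
    prompt_loss = L_rec - lambda_k * L_dis
    prompt_opt.zero_grad()
    prompt_loss.backward()
    prompt_opt.step()

    # 5) Update the discriminator on detached states so its gradients do not
    #    reach the soft prompts.
    probe_opt.zero_grad()
    dis_loss = F.cross_entropy(probe(hidden_states.detach(), user_span), attr_labels)
    dis_loss.backward()
    probe_opt.step()

    return L_rec.item(), L_dis.item()
```

When several sensitive attributes are considered, the same alternating scheme would be repeated per attribute with its own prompt pair and weight \(\lambda_{k}\), consistent with the per-attribute loss \(L_k\) defined above.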