Several interesting NLP papers were published this week. Here is a list of them:


Context-faithful Prompting for Large Language Models

This paper focuses on the fact that language models sometimes rely too heavily on their learned (parametric) knowledge, which can cause them to overlook contextual cues and information. This is problematic in context-sensitive NLP tasks. For example, if the language model is asked to answer a question only according to the provided context, it should not fall back on its own knowledge to answer it. This problematic behaviour can be investigated in two situations (a small probing sketch follows the examples below):

  • Knowledge Conflict: When the provided context contains information that conflicts with the language model’s knowledge! For example:

Context: Elon Musk is a business magnate and investor. He is the owner and CEO of Twitter.
Question: Who is the CEO of Twitter?
Answer: Elon Musk
GPT-3.5: Jack Dorsey

  • Knowledge Abstention: When the context does not contain any information about the topic at all, but the language model does have knowledge about it, so the faithful answer is to abstain. For example:

Context: Bill Gates was born in Seattle, Washington.
Question: Is Bill Gates the founder of Microsoft?
Answer: I don’t know.
GPT-3.5: Yes!
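
Below is a minimal sketch of how one might probe a model for these two failure modes. The `query_llm` helper is hypothetical (a stand-in for whichever chat/completion API you use); the prompts simply concatenate the example contexts and questions from above.

```python
# Minimal sketch for probing context-unfaithful answers.
# `query_llm` is a hypothetical stand-in for your LLM API of choice.

def query_llm(prompt: str) -> str:
    """Send `prompt` to the model and return its text answer (placeholder)."""
    raise NotImplementedError("wire this up to your LLM API")

# Knowledge conflict: the context contradicts the model's parametric knowledge.
conflict_prompt = (
    "Elon Musk is a business magnate and investor. He is the owner and CEO of Twitter.\n"
    "Q: Who is the CEO of Twitter?\nA:"
)

# Knowledge abstention: the context says nothing about the question, so the
# faithful answer is "I don't know" even though the model knows the real answer.
abstention_prompt = (
    "Bill Gates was born in Seattle, Washington.\n"
    "Q: Is Bill Gates the founder of Microsoft?\nA:"
)

for name, prompt in [("conflict", conflict_prompt), ("abstention", abstention_prompt)]:
    print(name, "->", query_llm(prompt))
```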

The main approach of this paper is to compare four different types of prompting: (1) base, (2) opinion-based, (3) attribute-based, and (4) instruction-based! In what follows, these prompting types are explained (a small code sketch of the templates comes right after the list). In all of them, (\(c\), \(q\), \(o\)) denote the context, the question, and the set of options (in the case of a multiple-choice task).

  1. Base Prompting: Here, a simple approach for prompting is used. The format is: {c} Q:{q}? Options: {o} A:
  2. Opinion-based Prompting: The context is transformed to a narrator’s opinion: Bob said, "{c}" Q:{q} in Bob's opinion? Options: {o} A:
  3. Attribute-based Prompting: The model is asked to use the provided context: {c} Q:{q} based on the given text? Options: {o} A:
  4. Instruction-based Prompting: The model is instructed to only use the provided context: Instruction: {Instruction} {c} Q:{q}? Options: {o} A:
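
As a rough illustration, here is how these four templates could be rendered in code. The narrator "Bob" and the template wording come from the formats above; the function name and the default instruction text are my own scaffolding, not the paper's code.

```python
def build_prompt(style: str, c: str, q: str, o: list[str] | None = None,
                 instruction: str = "Answer based only on the given context.") -> str:
    """Render one of the four prompt formats described above.

    `style` is one of "base", "opinion", "attribute", "instruction".
    The default `instruction` text is an illustrative placeholder.
    """
    options = f" Options: {' '.join(o)}" if o else ""
    if style == "base":
        return f"{c} Q: {q}?{options} A:"
    if style == "opinion":
        return f'Bob said, "{c}" Q: {q} in Bob\'s opinion?{options} A:'
    if style == "attribute":
        return f"{c} Q: {q} based on the given text?{options} A:"
    if style == "instruction":
        return f"Instruction: {instruction} {c} Q: {q}?{options} A:"
    raise ValueError(f"unknown prompt style: {style}")


print(build_prompt("opinion",
                   c="Elon Musk is the owner and CEO of Twitter.",
                   q="Who is the CEO of Twitter"))
```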

They used GPT-3.5 (the text-davinci-003 model) with two datasets for testing their approach: (1) Natural Questions for Machine Reading Comprehension (MRC) and (2) ReTACRED for Relation Extraction (RE).

For the MRC task, they replaced some random entities in the dataset with other entities to create a counterfactual context. Then, they asked the model to answer the question according to this (wrong) context and measured how often the model responds according to the counterfactual context rather than its prior knowledge. [Obviously, only questions that the model can already answer correctly without any context are used. Then, if the model still gives its original answer after being shown the counterfactual context, it means the language model is leveraging its own knowledge instead of the context!]
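
A rough sketch of this filter-then-evaluate loop is shown below. The `answer` function is a hypothetical LLM wrapper, and each example is assumed to carry the question, the edited context, the original (memorized) answer, and the counterfactual answer that the edited context supports; none of these names come from the paper's code.

```python
# Sketch of the counterfactual MRC evaluation described above.

def answer(prompt: str) -> str:
    raise NotImplementedError  # call your LLM here

def evaluate(examples) -> float:
    kept, faithful = 0, 0
    for ex in examples:
        # Step 1: keep only questions the model answers correctly *without* any
        # context, i.e. the answer is already in its parametric knowledge.
        closed_book = answer(f"Q: {ex['question']}? A:")
        if ex["original_answer"].lower() not in closed_book.lower():
            continue
        kept += 1
        # Step 2: ask again with the counterfactual context and check whether the
        # model follows the context (counterfactual answer) or its memory.
        open_book = answer(f"{ex['counterfactual_context']} Q: {ex['question']}? A:")
        if ex["counterfactual_answer"].lower() in open_book.lower():
            faithful += 1
    return faithful / max(kept, 1)  # fraction of context-faithful answers
```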

For the RE task, the same approach is applied! The relations are replaced with different ones, and the model is asked to answer according to the counterfactual context.