dialog_qa

allennlp_models.rc.models.dialog_qa

DialogQA#

@Model.register("dialog_qa")
class DialogQA(Model):
 | def __init__(
 |     self,
 |     vocab: Vocabulary,
 |     text_field_embedder: TextFieldEmbedder,
 |     phrase_layer: Seq2SeqEncoder,
 |     residual_encoder: Seq2SeqEncoder,
 |     span_start_encoder: Seq2SeqEncoder,
 |     span_end_encoder: Seq2SeqEncoder,
 |     initializer: Optional[InitializerApplicator] = None,
 |     dropout: float = 0.2,
 |     num_context_answers: int = 0,
 |     marker_embedding_dim: int = 10,
 |     max_span_length: int = 30,
 |     max_turn_length: int = 12
 | ) -> None

This class implements a modified version of the BiDAF model (with self-attention and a residual layer, from the Clark and Gardner ACL '17 paper) as used in the Question Answering in Context (QuAC) paper (EMNLP 2018) [https://arxiv.org/pdf/1808.07036.pdf].

In this set-up, a single instance is a dialog: a list of question-answer pairs.

Parameters

vocab : ``Vocabulary``

text_field_embedder : ``TextFieldEmbedder``
    Used to embed the ``question`` and ``passage`` ``TextFields`` we get as input to the model.
phrase_layer : ``Seq2SeqEncoder``
    The encoder (with its own internal stacking) that we will use in between embedding tokens and doing the bidirectional attention.
span_start_encoder : ``Seq2SeqEncoder``
    The encoder that we will use to incorporate span start predictions into the passage state before predicting span end.
span_end_encoder : ``Seq2SeqEncoder``
    The encoder that we will use to incorporate span end predictions into the passage state.
dropout : ``float``, optional (default=0.2)
    If greater than 0, we will apply dropout with this probability after all encoders (PyTorch LSTMs do not apply dropout to their last layer).
num_context_answers : ``int``, optional (default=0)
    If greater than 0, the model will consider previous question answering context.
max_span_length : ``int``, optional (default=30)
    Maximum token length of the output span.
max_turn_length : ``int``, optional (default=12)
    Maximum number of question-answer turns in a dialog.
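As a rough sketch, the ``model`` stanza of an AllenNLP training config for this class might look like the following. The encoder types and every dimension here are illustrative placeholders, not values from the QuAC paper; in particular, the ``input_size`` of each encoder must be made consistent with the embedder's output dimension (and, when ``num_context_answers > 0``, with the added ``marker_embedding_dim`` marker embeddings).

```json
{
  "model": {
    "type": "dialog_qa",
    "text_field_embedder": {
      "token_embedders": {
        "tokens": {"type": "embedding", "embedding_dim": 100}
      }
    },
    "phrase_layer": {
      "type": "lstm", "input_size": 100, "hidden_size": 100,
      "num_layers": 1, "bidirectional": true
    },
    "residual_encoder": {
      "type": "lstm", "input_size": 200, "hidden_size": 100,
      "num_layers": 1, "bidirectional": true
    },
    "span_start_encoder": {
      "type": "lstm", "input_size": 200, "hidden_size": 100,
      "num_layers": 1, "bidirectional": true
    },
    "span_end_encoder": {
      "type": "lstm", "input_size": 200, "hidden_size": 100,
      "num_layers": 1, "bidirectional": true
    },
    "dropout": 0.2,
    "num_context_answers": 0
  }
}
```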

forward#

class DialogQA(Model):
 | ...
 | def forward(
 |     self,
 |     question: Dict[str, torch.LongTensor],
 |     passage: Dict[str, torch.LongTensor],
 |     span_start: torch.IntTensor = None,
 |     span_end: torch.IntTensor = None,
 |     p1_answer_marker: torch.IntTensor = None,
 |     p2_answer_marker: torch.IntTensor = None,
 |     p3_answer_marker: torch.IntTensor = None,
 |     yesno_list: torch.IntTensor = None,
 |     followup_list: torch.IntTensor = None,
 |     metadata: List[Dict[str, Any]] = None
 | ) -> Dict[str, torch.Tensor]

Parameters

question : Dict[str, torch.LongTensor]
    From a ``TextField``.
passage : Dict[str, torch.LongTensor]
    From a ``TextField``. The model assumes that this passage contains the answer to the question, and predicts the beginning and ending positions of the answer within the passage.
span_start : torch.IntTensor, optional
    From an ``IndexField``. This is one of the things we are trying to predict - the beginning position of the answer within the passage. This is an inclusive token index. If this is given, we will compute a loss that gets included in the output dictionary.
span_end : torch.IntTensor, optional
    From an ``IndexField``. This is one of the things we are trying to predict - the ending position of the answer within the passage. This is an inclusive token index. If this is given, we will compute a loss that gets included in the output dictionary.
p1_answer_marker : torch.IntTensor, optional
    This is one of the inputs, but only when ``num_context_answers > 0``. It is a tensor of shape ``[batch_size, max_qa_count, max_passage_length]``. Most passage tokens are assigned the label 'O', except the tokens that belong to the previous answer in the dialog, which are assigned labels such as ``<1_start>``, ``<1_in>``, and ``<1_end>``. For more details, see ``dataset_readers/util/make_reading_comprehension_instance_quac``.
p2_answer_marker : torch.IntTensor, optional
    This is one of the inputs, but only when ``num_context_answers > 1``. It is similar to ``p1_answer_marker``, but marks the second-most-recent answer in the passage.
p3_answer_marker : torch.IntTensor, optional
    This is one of the inputs, but only when ``num_context_answers > 2``. It is similar to ``p1_answer_marker``, but marks the third-most-recent answer in the passage.
yesno_list : torch.IntTensor, optional
    This is one of the outputs that we are trying to predict: a three-way classification (yes / no / not a yes-no question).
followup_list : torch.IntTensor, optional
    This is one of the outputs that we are trying to predict: a three-way classification (follow up / maybe follow up / don't follow up).
metadata : List[Dict[str, Any]], optional
    If present, this should contain the question ID, original passage text, and token offsets into the passage for each instance in the batch. We use this for computing official metrics using the official SQuAD evaluation script. The length of this list should be the batch size, and each dictionary should have the keys ``id``, ``original_passage``, and ``token_offsets``. If you only want the best span string and don't care about official metrics, you can omit the ``id`` key.
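To make the answer-marker inputs concrete, here is a hypothetical sketch of how a ``p1_answer_marker``-style labeling could be produced for one passage. The tag names follow the ``<1_start>``/``<1_in>``/``<1_end>`` scheme described above, but this helper (including its handling of single-token answers) is illustrative, not AllenNLP's actual dataset-reader code.

```python
from typing import List

def mark_previous_answer(passage_length: int, answer_start: int,
                         answer_end: int, history_index: int = 1) -> List[str]:
    """Label every passage token 'O' except the span of a previous answer.

    The span's tokens get <k_start>, <k_in>, and <k_end> tags, where k is
    how many turns back the answer occurred. Indices are inclusive.
    """
    tags = ["O"] * passage_length
    tags[answer_start] = f"<{history_index}_start>"
    if answer_end > answer_start:
        # Interior tokens of the span are tagged <k_in>, the last one <k_end>.
        for i in range(answer_start + 1, answer_end):
            tags[i] = f"<{history_index}_in>"
        tags[answer_end] = f"<{history_index}_end>"
    return tags
```

In the real model, each distinct tag would then be mapped to an index and looked up in a ``marker_embedding_dim``-dimensional embedding before being concatenated with the passage token embeddings.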

Returns

An output dictionary consisting of the following entries. Each entry is a nested list: the outer list iterates over dialogs in the batch, the inner list over questions within a dialog.

qid : List[List[str]]
    A list of lists of question ids.
followup : List[List[int]]
    A list of lists of continuation marker prediction indices (y: yes, m: maybe follow up, n: don't follow up).
yesno : List[List[int]]
    A list of lists of affirmation marker prediction indices (y: yes, x: not a yes/no question, n: no).
best_span_str : List[List[str]]
    If sufficient metadata was provided for the instances in the batch, we also return the string from the original passage that the model thinks is the best answer to the question.
loss : torch.FloatTensor, optional
    A scalar loss to be optimised.
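The ``best_span_str`` output comes from constrained span decoding, where ``max_span_length`` bounds how long an answer span may be. The following is an illustrative sketch of that idea (the model's actual decoding operates on batched tensors and differs in detail): pick the ``(start, end)`` pair maximizing the sum of start and end scores, subject to ``start <= end < start + max_span_length``.

```python
from typing import List, Tuple

def best_span(start_scores: List[float], end_scores: List[float],
              max_span_length: int = 30) -> Tuple[int, int]:
    """Return the (start, end) inclusive token indices of the highest-scoring
    span no longer than max_span_length tokens."""
    best = (0, 0)
    best_score = float("-inf")
    for start, s in enumerate(start_scores):
        # Only consider ends within max_span_length tokens of the start.
        for end in range(start, min(start + max_span_length, len(end_scores))):
            score = s + end_scores[end]
            if score > best_score:
                best_score = score
                best = (start, end)
    return best
```

The chosen token span is then mapped back to a passage substring using the ``token_offsets`` provided in ``metadata``.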

make_output_human_readable#

class DialogQA(Model):
 | ...
 | def make_output_human_readable(
 |     self,
 |     output_dict: Dict[str, torch.Tensor]
 | ) -> Dict[str, torch.Tensor]

get_metrics#

class DialogQA(Model):
 | ...
 | def get_metrics(self, reset: bool = False) -> Dict[str, float]

default_predictor#

class DialogQA(Model):
 | ...
 | default_predictor = "dialog_qa"