allennlp.models.semantic_parsing.quarel

class allennlp.models.semantic_parsing.quarel.quarel_semantic_parser.QuarelSemanticParser(vocab: allennlp.data.vocabulary.Vocabulary, question_embedder: allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder, action_embedding_dim: int, encoder: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder, decoder_beam_search: allennlp.state_machines.beam_search.BeamSearch, max_decoding_steps: int, attention: allennlp.modules.attention.attention.Attention, mixture_feedforward: allennlp.modules.feedforward.FeedForward = None, add_action_bias: bool = True, dropout: float = 0.0, num_linking_features: int = 0, num_entity_bits: int = 0, entity_bits_output: bool = True, use_entities: bool = False, denotation_only: bool = False, entity_encoder: allennlp.modules.seq2vec_encoders.seq2vec_encoder.Seq2VecEncoder = None, entity_similarity_mode: str = 'dot_product', rule_namespace: str = 'rule_labels')[source]

Bases: allennlp.models.model.Model

A QuarelSemanticParser is a variant of WikiTablesSemanticParser with various tweaks and changes.
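As a quick orientation, the following is a minimal construction sketch based on the signature above. The embedding sizes and component choices are hypothetical, and it assumes an AllenNLP 0.x environment where this module lives; in real use the vocabulary would be built from a QuaRel dataset.

    import torch
    from allennlp.data import Vocabulary
    from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
    from allennlp.modules.token_embedders import Embedding
    from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper
    from allennlp.modules.attention import DotProductAttention
    from allennlp.state_machines import BeamSearch
    from allennlp.models.semantic_parsing.quarel.quarel_semantic_parser import (
        QuarelSemanticParser,
    )

    vocab = Vocabulary()  # in practice, built from a QuaRel dataset reader
    embedding_dim = 50    # hypothetical size, for illustration only
    question_embedder = BasicTextFieldEmbedder(
        {"tokens": Embedding(num_embeddings=vocab.get_vocab_size("tokens"),
                             embedding_dim=embedding_dim)})
    # Bidirectional LSTM encoder whose output dim matches action_embedding_dim.
    encoder = PytorchSeq2SeqWrapper(
        torch.nn.LSTM(embedding_dim, 25, batch_first=True, bidirectional=True))

    model = QuarelSemanticParser(
        vocab=vocab,
        question_embedder=question_embedder,
        action_embedding_dim=50,
        encoder=encoder,
        decoder_beam_search=BeamSearch(beam_size=10),
        max_decoding_steps=40,
        attention=DotProductAttention())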

Parameters
vocab : Vocabulary
question_embedder : TextFieldEmbedder

Embedder for questions.

action_embedding_dim : int

Dimension to use for action embeddings.

encoder : Seq2SeqEncoder

The encoder to use for the input question.

decoder_beam_search : BeamSearch

When we’re not training, this is how we will do decoding.

max_decoding_steps : int

When we’re decoding with a beam search, what’s the maximum number of steps we should take? This only applies at evaluation time, not during training.

attention : Attention

We compute an attention over the input question at each step of the decoder, using the decoder hidden state as the query. Passed to the transition function.

dropout : float, optional (default=0)

If greater than 0, we will apply dropout with this probability after all encoders (pytorch LSTMs do not apply dropout to their last layer).

num_linking_features : int, optional (default=0)

We need to construct a parameter vector for the linking features, so we need to know how many there are; this should match the number of features produced by the KnowledgeGraphField. If this is 0, another term is added to the linking score instead, containing the maximum similarity value between the entity's neighbors and the question.

use_entities : bool, optional (default=False)

Whether dynamic entities are part of the action space.

num_entity_bits : int, optional (default=0)

The number of bits added to the encoder input/output to represent tagged entities; 0 disables entity bits.

entity_bits_output : bool, optional (default=True)

If True, entity bits are added to the encoder output; otherwise they are added to the encoder input.

denotation_only : bool, optional (default=False)

Whether to only predict the target denotation, skipping the whole logical form decoder.

entity_similarity_mode : str, optional (default="dot_product")

How to compute vector similarity between question and entity tokens; can be "dot_product" or "weighted_dot_product" (learned weights on each dimension). See the sketch after this parameter list.

rule_namespace : str, optional (default="rule_labels")

The vocabulary namespace to use for production rules. The default corresponds to the default used in the dataset reader, so you likely don’t need to modify this.
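To make the entity_similarity_mode options concrete, here is a hedged sketch of the two similarity computations. The function and tensor names are illustrative, not the model's internal ones.

    import torch

    def entity_similarity(question: torch.Tensor,   # (batch, num_tokens, dim)
                          entities: torch.Tensor,   # (batch, num_entities, dim)
                          mode: str = "dot_product",
                          weights: torch.Tensor = None) -> torch.Tensor:
        if mode == "weighted_dot_product":
            # Learned per-dimension weights (shape (dim,)) rescale the
            # question encoding before the dot product.
            question = question * weights
        # Returns (batch, num_entities, num_tokens) similarity scores.
        return entities.bmm(question.transpose(1, 2))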

decode(self, output_dict: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor][source]

This method overrides Model.decode, which gets called after Model.forward, at test time, to finalize predictions. This is (confusingly) a separate notion from the “decoder” in “encoder/decoder”, where that decoder logic lives in FrictionQDecoderStep.

This method trims the output predictions to the first end symbol, replaces indices with corresponding tokens, and adds a field called predicted_tokens to the output_dict.
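A hedged usage sketch, where batch is assumed to be a tensor dict produced by a data iterator over indexed QuaRel instances:

    output_dict = model(**batch)             # Model.forward
    output_dict = model.decode(output_dict)  # trims at the first end symbol
    print(output_dict["predicted_tokens"])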

forward(self, question: Dict[str, torch.LongTensor], table: Dict[str, torch.LongTensor], world: List[allennlp.semparse.worlds.quarel_world.QuarelWorld], actions: List[List[allennlp.data.fields.production_rule_field.ProductionRule]], entity_bits: torch.Tensor = None, denotation_target: torch.Tensor = None, target_action_sequences: torch.LongTensor = None, metadata: List[Dict[str, Any]] = None) → Dict[str, torch.Tensor][source]

In this method we encode the table entities, link them to words in the question, then encode the question. Then we set up the initial state for the decoder, and pass that state off to either a DecoderTrainer, if we’re training, or a BeamSearch for inference, if we’re not.

Parameters
question : Dict[str, torch.LongTensor]

The output of TextField.as_array() applied on the question TextField. This will be passed through a TextFieldEmbedder and then through an encoder.

table : Dict[str, torch.LongTensor]

The output of KnowledgeGraphField.as_array() applied on the table KnowledgeGraphField. This output is similar to a TextField output, where each entity in the table is treated as a “token”, and we will use a TextFieldEmbedder to get embeddings for each entity.

world : List[QuarelWorld]

We use a MetadataField to get the World for each input instance. Because of how MetadataField works, this gets passed to us as a List[QuarelWorld].

actions : List[List[ProductionRule]]

A list of all possible actions for each World in the batch, indexed into a ProductionRule using a ProductionRuleField. We will embed all of these and use the embeddings to determine which action to take at each timestep in the decoder.

entity_bits : torch.Tensor, optional (default=None)

Tensor encoding bits for the world entities.

denotation_target : torch.Tensor, optional (default=None)

If the model's denotation_only field is True, this is the target denotation tensor.

target_action_sequences : torch.LongTensor, optional (default=None)

A list of possibly valid action sequences, where each action is an index into the list of possible actions. This tensor has shape (batch_size, num_action_sequences, sequence_length).

metadata : List[Dict[str, Any]], optional (default=None)

A dictionary of metadata for each batch element, with the following keys:

question_tokens : List[str], optional

The original string tokens in the question.

world_extractions : nltk.Tree, optional

Extracted worlds from the question.

answer_index : List[str], optional

Index of the correct answer.
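The following sketch shows one plausible way to produce these inputs and call forward(), assuming instances read by the QuaRel dataset reader and an AllenNLP 0.x iterator; dataset is a placeholder name.

    from allennlp.data.iterators import BasicIterator

    iterator = BasicIterator(batch_size=2)
    iterator.index_with(vocab)  # the vocabulary the model was built with
    batch = next(iterator(dataset, num_epochs=1))
    outputs = model(**batch)    # includes "loss" when targets are provided
    outputs = model.decode(outputs)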

get_metrics(self, reset: bool = False) → Dict[str, float][source]

We track three metrics here:

1. parse_acc, which is the percentage of the time that our best output action sequence corresponds to a correct logical form

2. denotation_acc, which is the percentage of examples where we get the correct denotation, including spuriously correct answers obtained with a wrong logical form

3. lf_percent, which is the percentage of the time that decoding actually produces a finished logical form. We might not produce a valid logical form if the decoder gets into a repetitive loop, or if it runs out of time steps while producing a very long logical form.
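For example, after running some evaluation batches, the metrics can be read off (a sketch; the key names follow the list above):

    metrics = model.get_metrics(reset=True)  # reset=True clears the counters
    print(metrics["parse_acc"], metrics["denotation_acc"], metrics["lf_percent"])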