This paper, from Danqi Chen's group, tackles entity recognition and relation extraction. It uses a pipeline method instead of a joint-learning method, surpassing previous models and achieving SOTA on ACE04/ACE05 and SciERC. In summary:
- NER uses a span-based model instead of the traditional sequence-labeling model: all spans in the sentence of length at most n are collected into a candidate set, each candidate span is given an embedding representation by the model, and the candidates are then classified.
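The span-enumeration step above can be sketched in a few lines (this is my illustration of the idea, not the paper's code; the example sentence and `max_len` are made up):

```python
# Span-based NER candidate generation: collect every span of length <= max_len.
def enumerate_spans(tokens, max_len):
    """Return all (start, end) token spans (inclusive) with length <= max_len."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_len, len(tokens))):
            spans.append((start, end))
    return spans

sent = ["MORPA", "is", "a", "fully", "implemented", "parser"]
candidates = enumerate_spans(sent, max_len=3)
# Each candidate span would then be embedded by the encoder and classified
# into an entity type (or "no entity").
```

Note that the number of candidates grows roughly as n * max_len, which is why a length cap is needed.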
- RE modifies the input sentence with "entity boundary + entity type" markers.
- Using cross-sentence context effectively improves the performance of both NER and RE. Concretely, preceding and following context is spliced directly onto the sentence: for a sentence of n words and a window of W words, $(W - n)/2$ context words are added on each side. In the experiments the window length is 100.
- The NER model consists of a pre-trained model (BERT, ALBERT) as the encoder, followed by a two-layer FFNN and a final softmax layer.
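The classification head can be sketched as follows (a toy sketch only: the dimensions, random weights, and number of entity types are all made up, and the encoder is replaced by a random span embedding):

```python
import numpy as np

# Toy NER head: span embedding -> two-layer FFNN -> softmax over entity types.
rng = np.random.default_rng(0)
d, h, num_types = 8, 16, 4              # encoder dim, hidden dim, #entity types
W1, b1 = rng.normal(size=(d, h)), np.zeros(h)
W2, b2 = rng.normal(size=(h, num_types)), np.zeros(num_types)

def ner_head(span_emb):
    hidden = np.maximum(span_emb @ W1 + b1, 0.0)  # layer 1 + ReLU
    logits = hidden @ W2 + b2                     # layer 2
    exp = np.exp(logits - logits.max())           # numerically stable softmax
    return exp / exp.sum()

probs = ner_head(rng.normal(size=d))   # a distribution over entity types
```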
- The RE model is a pre-trained model (BERT, ALBERT) plus a softmax layer. Markers such as `<S:Method>`, `</S:Method>`, `<O:Task>`, `</O:Task>` integrate the entity boundaries and entity types into the sentence.
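Inserting the typed markers around a subject/object pair can be sketched like this (the marker spelling follows the `<S:Type>`/`<O:Type>` convention above; the sentence and span indices are my own example, and the sketch assumes the two spans do not overlap):

```python
# Wrap the subject and object spans of one candidate pair in typed markers.
def insert_markers(tokens, subj, obj):
    """subj/obj are ((start, end), type) with inclusive token spans."""
    (ss, se), st = subj
    (os_, oe), ot = obj
    out = []
    for i, tok in enumerate(tokens):
        if i == ss: out.append(f"<S:{st}>")
        if i == os_: out.append(f"<O:{ot}>")
        out.append(tok)
        if i == se: out.append(f"</S:{st}>")
        if i == oe: out.append(f"</O:{ot}>")
    return out

sent = ["MORPA", "is", "a", "fully", "implemented", "parser"]
marked = insert_markers(sent, ((0, 0), "Method"), ((5, 5), "Task"))
# -> <S:Method> MORPA </S:Method> is a fully implemented <O:Task> parser </O:Task>
```

The marked sentence is then fed to the encoder, and the marker representations are used to classify the relation.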
- To reduce the huge time cost of running the RE model once for every entity pair, the paper proposes an approximation: instead of inserting the "entity boundary + entity type" markers into the original sentence, they are appended after the sentence, position embeddings are shared so that each marker reuses the position of the entity token it marks, and the attention mask is set so that text tokens attend only to the original sentence and never see the appended markers (while the markers can attend to the text). The purpose is to reuse the hidden vectors of all sentence tokens across pairs, making the sentence tokens and the marker tokens independent of each other.
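The batched approximation can be sketched with plain position-id and attention-mask construction (a sketch under my assumptions: 4 markers per pair in the order subject-start, subject-end, object-start, object-end, and markers of one pair attending to the text plus their own 4 markers):

```python
# Build position ids and an attention mask for one sentence with several
# candidate pairs' markers appended after the text.
def build_batch(n_text, pairs):
    """pairs: list of (subj_start, subj_end, obj_start, obj_end) token indices."""
    pos_ids = list(range(n_text))          # positions of the text tokens
    total = n_text + 4 * len(pairs)        # 4 markers appended per pair
    # mask[i][j] == 1 means token i may attend to token j.
    # Everyone may attend to the text; nobody attends to markers by default,
    # so text tokens never see the appended markers.
    mask = [[1 if j < n_text else 0 for j in range(total)] for _ in range(total)]
    for k, (ss, se, os_, oe) in enumerate(pairs):
        base = n_text + 4 * k
        pos_ids += [ss, se, os_, oe]       # markers share the entity positions
        for i in range(base, base + 4):    # each marker also attends to the
            for j in range(base, base + 4):  # 4 markers of its own pair
                mask[i][j] = 1
    return pos_ids, mask

pos_ids, mask = build_batch(6, [(0, 0, 5, 5), (1, 1, 5, 5)])
```

Because the text rows of the mask are identical regardless of how many pairs are appended, the sentence is encoded once and its hidden vectors are shared by all candidate pairs.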
- A concrete example follows: