Natural Language Inference Models for Few-Shot Text Classification: A Real-World Perspective
Romero Mogrovejo, David Orlando
[EN] Recasting tasks as entailment problems has proven very effective in applications such as question answering and relation extraction. Moreover, language models trained on entailment tasks perform well with small training sets and show better generalization abilities. In view of this, in this work we recast text classification as an entailment problem: we build PET-NLI, an approach that uses the same architecture and training procedure as PET (Pattern-Exploiting Training), but with natural language inference (NLI) models. Overall, PET-NLI retains the benefits of PET in true few-shot scenarios despite using a different type of language model. The approach is evaluated on true few-shot scenarios from the RAFT benchmark, where PET-NLI outperforms larger models such as GPT-3 and achieves competitive performance compared with the original PET architecture and the human baseline.
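The core idea of recasting classification as entailment can be sketched as follows: each candidate label is verbalized into a hypothesis, and the label whose hypothesis the NLI model scores as most entailed by the input text is predicted. The sketch below is a minimal illustration, not the PET-NLI implementation; the `entail_score` function and the hypothesis template are assumptions, and the toy keyword scorer merely stands in for a trained NLI model.

```python
def classify_via_nli(premise, labels, entail_score, template="This text is about {}."):
    """Pick the label whose verbalized hypothesis is most entailed by the premise.

    entail_score(premise, hypothesis) -> float is assumed to come from an NLI
    model; here it is supplied by the caller (a toy stand-in below).
    """
    scores = {label: entail_score(premise, template.format(label)) for label in labels}
    return max(scores, key=scores.get), scores


def toy_scorer(premise, hypothesis):
    # Crude lexical stand-in for an NLI model: fraction of the hypothesis'
    # content words that also appear in the premise.
    stop = {"this", "text", "is", "about", "the", "a"}
    prem_words = premise.lower().replace(".", "").split()
    hyp_words = [w for w in hypothesis.lower().replace(".", "").split() if w not in stop]
    return sum(w in prem_words for w in hyp_words) / max(len(hyp_words), 1)


predicted, scores = classify_via_nli(
    "The sports channel covered the final game of the season.",
    ["sports", "politics"],
    toy_scorer,
)
print(predicted)  # the label whose hypothesis overlaps most with the text
```

In a real setting, `entail_score` would be the entailment probability from an NLI model, and the template would correspond to one of the patterns used during PET-style training.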