Source  
Extending Transformer-based models in NLP to longer sequences On March 26, Friday, 2021

Natural language processing (NLP) models based on Transformers are a mainstay of modern NLP research. The key innovation in Transformers is the self-attention mechanism, which computes similarity scores for all pairs of positions in an input sequence. However, current hardware and model sizes typically limit the input sequence to roughly 512 tokens. This prevents Transformers from being directly applicable to tasks that require larger context, such as question answering, document summarization, or genome fragment classification.
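The sequence-length limit follows from the quadratic cost of the all-pairs score matrix. The toy NumPy sketch below (illustrative only, not taken from the article, and simplified to a single projection-free similarity computation) shows how the attention matrix grows with sequence length.

```python
import numpy as np

def attention_scores(x):
    """Compute a full matrix of pairwise similarity scores (softmax-normalized).

    x: array of shape (seq_len, d_model), one token embedding per row.
    The result has shape (seq_len, seq_len), so memory grows as seq_len ** 2.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)  # similarity of every position pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

for seq_len in (512, 4096):
    x = np.random.randn(seq_len, 64)
    print(seq_len, attention_scores(x).shape)  # (512, 512) vs. (4096, 4096): 64x more entries
```

Going from 512 to 4096 tokens multiplies the score matrix by 64, which is the bottleneck that sparse-attention variants are designed to avoid.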

Read more at ai.googleblog.com

Source  
Open-Domain Long-Form Question Answering - Progress and Hurdles On March 25, Thursday, 2021

Long-form question answering (LFQA) is a fundamental challenge in natural language processing (NLP). In "Hurdles to Progress in Long-form Question Answering" (to appear at NAACL 2021), the authors present a new system for open-domain long-form question answering. The system combines sparse attention models with retrieval-based models to generate a paragraph-length answer, and it achieves a new state of the art on ELI5, the only large-scale publicly available dataset for the task.
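The summary describes a retrieve-then-generate design. The sketch below is a minimal illustration of that pattern, assuming hypothetical `retriever` and `generator` objects; it is not the paper's implementation, only the general shape of such a pipeline.

```python
def answer_long_form(question, retriever, generator, k=5):
    """Retrieve supporting passages, then generate a paragraph-length answer."""
    # 1. Fetch the k passages most relevant to the question (hypothetical API).
    passages = retriever.search(question, top_k=k)
    # 2. Concatenate the question and evidence into one long input sequence;
    #    a sparse-attention generator keeps this tractable at long lengths.
    context = question + "\n\n" + "\n\n".join(passages)
    # 3. Generate a multi-sentence answer conditioned on the retrieved context
    #    (hypothetical API).
    return generator.generate(context, max_new_tokens=256)
```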

Read more at ai.googleblog.com

Source  
Are NLP models unable to solve Math Word Problems? - Microsoft Researchers On March 18, Thursday, 2021

Recent natural language processing (NLP) models have shown an ability to achieve reasonably high accuracy on simple Math Word Problems (MWP). A Microsoft Research team recently took a closer look at just how NLP models do this, with surprising results. Their study provides "concrete evidence" that existing MWP solvers tend to rely on shallow heuristics.
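One way to expose such shallow heuristics is to ablate the question itself and check whether a solver still produces the right answer. The sketch below is an illustrative probe in that spirit, not the study's exact protocol; the `solver` callable is a hypothetical stand-in for an MWP model.

```python
def question_removed(problem: str) -> str:
    """Drop the final sentence, which typically carries the actual question."""
    sentences = [s for s in problem.split(".") if s.strip()]
    return ".".join(sentences[:-1]) + "."

def heuristic_reliance_rate(solver, dataset):
    """Fraction of problems answered correctly even with the question removed.

    dataset: iterable of (problem_text, gold_answer) pairs.
    A high rate suggests the solver keys on surface cues in the narrative
    rather than on the question being asked.
    """
    hits = 0
    for problem, answer in dataset:
        if solver(question_removed(problem)) == answer:  # `solver` is hypothetical
            hits += 1
    return hits / len(dataset)
```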

Read more at syncedreview.com
