Browsing by Author "Joty, Shafiq"
Con-S2V: A Generic Framework for Incorporating Extra-Sentential Context into Sen2Vec (Springer, 2017)
Saha, Tanay Kumar; Joty, Shafiq; Al Hasan, Mohammad (Computer and Information Science, School of Science)
We present a novel approach to learning distributed representations of sentences from unlabeled data by modeling both the content and the context of a sentence. The content model learns a sentence representation by predicting its words. The context model comprises a neighbor-prediction component and a regularizer, which model the distributional and proximity hypotheses, respectively. We propose an online algorithm to train the model components jointly. We evaluate the models in a setup where contextual information is available. Experimental results on tasks involving classification, clustering, and ranking of sentences show that our model outperforms the best existing models by a wide margin across multiple datasets.

Regularized and Retrofitted Models for Learning Sentence Representation with Context (ACM, 2017-11)
Saha, Tanay Kumar; Joty, Shafiq; Hassan, Naeemul; Al Hasan, Mohammad (Computer and Information Science, School of Science)
Vector representations of sentences are important for many text-processing tasks that involve classifying, clustering, or ranking sentences. Bag-of-words representations were long the standard for these tasks, but in recent years distributed representations of sentences learned by neural models from unlabeled data have been shown to outperform them. However, most existing neural methods consider only the content of a sentence and disregard its relations with other sentences in its context. In this paper, we first characterize two types of context according to their scope and utility. We then propose two approaches to incorporating contextual information into content-based models. We evaluate our sentence representation models in a setup where context is available to infer sentence vectors. Experimental results demonstrate that our proposed models outperform existing models on three fundamental tasks: classifying, clustering, and ranking sentences.
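The retrofitting idea mentioned in the second abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic Faruqui-style retrofitting update adapted to sentence vectors, in which each pre-trained sentence vector is iteratively pulled toward the vectors of its context (neighboring) sentences while staying anchored to its original embedding. All function names, vectors, and the neighbor graph below are hypothetical toy examples.

```python
# Hedged sketch: generic retrofitting of pre-trained sentence vectors
# using a sentence-neighborhood graph. Not the paper's exact method.

def retrofit(vectors, neighbors, alpha=1.0, iterations=10):
    """Pull each vector toward its neighbors' vectors.

    vectors:   dict sentence_id -> list[float] (pre-trained embeddings)
    neighbors: dict sentence_id -> list of neighbor sentence ids
    alpha:     weight of the context term relative to the original vector
    """
    new = {i: list(v) for i, v in vectors.items()}
    for _ in range(iterations):
        for i, nbrs in neighbors.items():
            if not nbrs:
                continue  # isolated sentences keep their original vector
            dim = len(vectors[i])
            for d in range(dim):
                # weighted average of the original embedding and the
                # current neighbor embeddings (closed-form coordinate update)
                s = vectors[i][d] + alpha * sum(new[j][d] for j in nbrs)
                new[i][d] = s / (1 + alpha * len(nbrs))
    return new

# Toy example: sentence 0 is contextually adjacent to sentences 1 and 2,
# so its retrofitted vector moves toward the average of all three.
vecs = {0: [0.0, 0.0], 1: [1.0, 0.0], 2: [0.0, 1.0]}
nbrs = {0: [1, 2], 1: [], 2: []}
out = retrofit(vecs, nbrs)
```

With `alpha=1.0`, sentence 0's vector converges to the average of its original vector and its two (fixed) neighbors, i.e. roughly `[1/3, 1/3]`, while the isolated sentences 1 and 2 are untouched. The regularized variant described in the abstract differs in that the context term is added to the training objective itself rather than applied as a post-hoc update.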