Nikita Soni

About Me

Hello Hello!

I am an NLP (Natural Language Processing, a subfield of Artificial Intelligence) researcher and PhD student at Stony Brook University, New York, co-advised by H. Andrew Schwartz and Niranjan Balasubramanian. Before that, I worked in the software industry, exploring multiple facets of software engineering (details in my CV). Personally, I'm a very outdoorsy person, but I also enjoy my own company.

Let's chat more over a cup of coffee.

Research Purpose

I am enthused about NLP's expanding reach into our lives and its still largely unexplored potential to understand human nature better and more efficiently than ever. Language is more than words: it expresses identities, psychologies, cultures, and much more. I am driven by the challenge of pushing NLP language models beyond their current limitations to consider the human behind the language. The purpose of my research is to foster greater empathy in an AI-centric future, so that AI augments humanity rather than detracting from it.

Research Publications

Human Language Modeling

(To appear in Findings of ACL 2022: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.)

Nikita Soni, Matthew Matero, Niranjan Balasubramanian, and H. Andrew Schwartz

  • A hierarchical extension to the language modeling problem whereby a human level exists to connect sequences of documents (e.g., social media messages), capturing the notion that human language is moderated by a changing human state. We introduce HaRT, a large-scale transformer model for the HuLM task, pre-trained on approximately 100,000 social media users, and demonstrate its effectiveness both in language modeling (perplexity) for social media and in fine-tuning for four downstream tasks spanning the document and user levels: stance detection, sentiment classification, age estimation, and personality assessment. Results on all tasks meet or surpass the current state of the art.
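The core idea above can be illustrated with a toy sketch. This is not the actual HaRT architecture (which is a large transformer); it is a hypothetical, minimal illustration of the HuLM framing, in which a per-user "human state" vector is carried across a user's documents and updated as each new message arrives:

```python
# Toy illustration of the HuLM idea (NOT the real HaRT model): a per-user
# state vector is recurrently updated across that user's sequence of
# documents, so later messages are modeled in the context of earlier ones.
from dataclasses import dataclass, field


@dataclass
class ToyHumanStateLM:
    """Hypothetical sketch: folds each message embedding into a user state."""
    dim: int = 4
    state: list = field(default_factory=lambda: [0.0] * 4)
    seen: int = 0

    def embed(self, message: str) -> list:
        # Placeholder embedding: crude character-level features, standing in
        # for a transformer's message representation.
        vec = [0.0] * self.dim
        for i, ch in enumerate(message):
            vec[i % self.dim] += ord(ch) / 1000.0
        return vec

    def update_state(self, message: str) -> list:
        # Recurrently fold the new document into the running human state
        # (here a simple running mean; HaRT learns this update instead).
        emb = self.embed(message)
        self.seen += 1
        self.state = [s + (e - s) / self.seen for s, e in zip(self.state, emb)]
        return self.state


lm = ToyHumanStateLM()
for msg in ["good morning!", "great day today", "feeling tired"]:
    state = lm.update_state(msg)
```

After processing all three messages, `state` is a single 4-dimensional vector summarizing the user so far; in HuLM, such a state conditions the prediction of the next document's tokens.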

MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

(Findings of EMNLP, 2021.)

[pdf] Matthew Matero, Nikita Soni, Niranjan Balasubramanian, and H. Andrew Schwartz

  • A hierarchical message encoder pre-trained over Twitter and applied to the task of stance prediction. The model is trained using a variant of masked language modeling: instead of predicting tokens, it seeks to generate an entire masked (aggregated) message vector via a reconstruction loss within a sequence of messages connected by authorship.

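The message-level masking objective above can be sketched in miniature. This is not the MeLT model itself; the message vectors and the mean-of-context "predictor" below are hypothetical stand-ins, used only to show how a whole message vector (rather than a token) is masked and scored by reconstruction error:

```python
# Toy illustration of masked message-vector pre-training (NOT the real MeLT):
# one message vector in an author's sequence is masked, a stand-in model
# predicts it from the remaining messages, and the objective is the
# reconstruction (mean squared) error against the true vector.

def reconstruction_loss(pred, target):
    """Mean squared error between a predicted and a true message vector."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)


# Hypothetical 2-d message vectors for one author; the third is "masked".
messages = [[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]]
masked_idx = 2
context = [v for i, v in enumerate(messages) if i != masked_idx]

# Stand-in "model": predict the masked vector as the mean of the context
# (MeLT would instead use a hierarchical transformer over the sequence).
pred = [sum(col) / len(context) for col in zip(*context)]
loss = reconstruction_loss(pred, messages[masked_idx])
```

The key design choice this mirrors is that supervision comes from reconstructing an aggregated message representation, so the model learns message-level (not just token-level) structure within an author's history.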
Experience

Education

Research Internships

Software Engineering Jobs