Hi! I am a PhD Candidate in Computer Science at Stony Brook University. I work in the WINGS Lab under the guidance of Prof. Samir Das. I also work closely with Prof. Aruna Balasubramanian. My research interests are broadly in Multimedia Systems, Mobile Computing, and Wireless Networking, with a current focus on improving the quality of experience of Internet video applications (e.g., 360-Degree Video, AR/VR). More recently, I have started exploring AI-powered video compression research.
Streaming 360-degree Videos Using Super-Resolution
Mallesham Dasari, Arani Bhattacharya, Santiago Vargas, Pranjal Sahu, Aruna Balasubramanian, Samir R. Das
INFOCOM 2020 (Conference on Computer Communications)
Paper Slides Code
Spectrum Protection from Micro-Transmissions using Distributed Spectrum Patrolling
Mallesham Dasari, Muhammad Bershgal Atigue, Arani Bhattacharya, Samir R. Das
PAM 2019 (Passive and Active Measurement Conference)
Paper Slides Data
Impact of Device Performance on Mobile Internet QoE
Mallesham Dasari, Santiago Vargas, Arani Bhattacharya, Aruna Balasubramanian, Samir R. Das, and Michael Ferdman
IMC 2018 (Internet Measurement Conference)
Paper Slides Data
Video compression plays a central role in reducing the network bandwidth required by Internet video applications. Traditional algorithm-driven compression methods have served well in realizing today's Internet video applications with an acceptable user experience. However, emerging 4K/8K/360-degree video streaming and AR/VR applications require orders of magnitude more bandwidth than today's applications. The monolithic, application-unaware nature of current-generation compression algorithms does not scale to such near-future applications over the Internet. This project explores data-driven techniques to significantly change the landscape of source compression algorithms and improve the experience of next-generation video applications.
Interactive and immersive applications such as Augmented Reality (AR) and Virtual Reality (VR) hold significant potential for tasks like industrial training, collaborative robotics, and remote operation. A key challenge in delivering these applications is providing accurate and robust tracking of the multiple agents (humans and robots) involved, in everyday but challenging environments. Current AR/VR solutions rely on visual tracking algorithms (e.g., SLAM/odometry) that are highly sensitive to the environment (e.g., lighting conditions). This project explores augmenting visual tracking with RF positioning (e.g., WiFi/UWB) to improve tracking accuracy, robustness, and scalability across multiple agents.
This class covers the fundamental principles of wireless and mobile networking. Some of the topics we will cover are the following: