Bio: I am currently a postdoc at Carnegie Mellon University, working with Prof. Anthony Rowe and Prof. Srinivasan Seshan. My research spans Immersive Media, XR Systems (Mixed Reality, AR/VR), Networks, Mobile, Wireless, and Wearable Computing.
I received a PhD in Computer Science from Stony Brook University under the guidance of Prof. Samir Das. During my PhD, I was also closely mentored by Prof. Aruna Balasubramanian. I also hold an M.Tech. and a B.E. from Osmania University, Hyderabad, India.
RenderFusion: Balancing Local and Remote Rendering for Interactive 3D Scenes
Edward Lu, Sagar Bharadwaj, Mallesham Dasari, Connor Smith, Anthony Rowe, Srinivasan Seshan
IEEE ISMAR 2023 (International Symposium on Mixed and Augmented Reality)
Paper Slides Code Teaser
Scaling VR Video Conferencing
Mallesham Dasari, Edward Lu, Michael W. Farb, Nuno Pereira, Ivan Liang, Anthony Rowe
IEEE VR 2023 (Conference on Virtual Reality and 3D User Interfaces)
Paper Slides Code Teaser
RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensing
Mallesham Dasari, Ramanujan Seshadri, Karthikeyan Sundaresan, Samir R. Das
ACM IMWUT/UbiComp 2023 (Conference on Interactive, Mobile, Wearable and Ubiquitous Technologies)
Paper Data AR Game Stay tuned for more artifacts!
Swift: Adaptive Video Streaming with Layered Neural Codecs
Mallesham Dasari, Kumara Kahatapitiya, Samir R. Das, Aruna Balasubramanian, Dimitris Samaras
USENIX NSDI 2022 (Conference on Networked Systems Design and Implementation)
Paper Slides Code Video
Cyclops: An FSO-based Wireless Link for VR Headsets
Himanshu Gupta, Max Curran, Jon Longtin, Torin Rockwell, Kai Zheng, Mallesham Dasari
ACM SIGCOMM 2022 (Conference on Data Communications)
PARSEC: Streaming 360-Degree Videos Using Super-Resolution
Mallesham Dasari, Arani Bhattacharya, Santiago Vargas, Pranjal Sahu, Aruna Balasubramanian, Samir R. Das
IEEE INFOCOM 2020 (Conference on Computer Communications)
Paper Slides Code Video
Impact of Device Performance on Mobile Internet QoE
Mallesham Dasari, Santiago Vargas, Arani Bhattacharya, Aruna Balasubramanian, Samir R. Das, and Michael Ferdman
ACM IMC 2018 (Internet Measurement Conference)
Paper Slides Data Video
The Internet has transformed long-distance communication through voice and video calls in just three decades. Despite these advances, however, today's applications (e.g., Zoom, FaceTime) still lack the essential subtleties of "telepresence", i.e., everyday face-to-face, co-located communication with realistic eye contact, body language, and physical presence in a shared virtual space. While the concept has been around for decades, only recent advances in high-performance graphics hardware, better depth-sensing technology, and faster software pipelines have made practical real-time 3D telepresence systems feasible. This project investigates several research questions: 1) How can we capture and digitize a 3D scene with low latency and at practical bitrates for real-time streaming over the Internet? 2) Do traditional 2D content-distribution strategies work well for 3D streaming? 3) How can we render high-quality 3D content on resource-constrained AR/VR headsets? 4) What kinds of 3D applications can bring everyday serendipity into virtual settings?
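To see why "practical bitrates" (question 1) is hard, a quick back-of-the-envelope calculation helps: even a single low-resolution RGB-D camera produces hundreds of megabits per second uncompressed. The sketch below uses assumed frame dimensions and byte depths purely for illustration; it is not a measurement from this project.

```python
# Back-of-the-envelope bitrate estimate for streaming raw RGB-D frames,
# the kind of capture a 3D telepresence pipeline starts from.
# All parameter values are illustrative assumptions, not measurements.

def raw_rgbd_bitrate_mbps(width=640, height=480, fps=30,
                          color_bytes=3, depth_bytes=2):
    """Megabits per second needed to ship uncompressed color + depth frames."""
    bytes_per_frame = width * height * (color_bytes + depth_bytes)
    return bytes_per_frame * fps * 8 / 1e6  # bits/s -> Mbps

mbps = raw_rgbd_bitrate_mbps()
print(f"Raw 640x480 RGB-D @ 30 fps: {mbps:.0f} Mbps")  # ~369 Mbps
```

Even this modest capture setup exceeds most residential uplinks by an order of magnitude, which is why low-latency 3D compression is a prerequisite for real-time streaming.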
Video compression plays a central role in Internet video applications by reducing network bandwidth requirements. Traditional algorithm-driven compression methods have served well, enabling today's Internet video applications with acceptable user experience. However, emerging 4K/8K/360-degree video streaming and AR/VR applications require orders of magnitude more bandwidth than today's applications. The monolithic, application-unaware nature of current-generation compression algorithms does not scale to such near-future applications over the Internet. This project explores data-driven techniques to significantly change the landscape of source compression algorithms and improve the experience of next-generation video applications.
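One appeal of layered codecs (as in Swift, above) is that adaptation becomes additive: each enhancement layer refines the base layer, so the streamer keeps adding layers until the bandwidth budget is spent instead of re-downloading a different-quality copy. A minimal sketch of that selection logic, with invented layer bitrates (the function and ladder below are hypothetical illustrations, not Swift's actual algorithm):

```python
# Minimal sketch of layered-bitrate adaptation: given per-layer bitrates
# (base layer first, enhancement layers after), pick how many layers fit
# within the estimated bandwidth. Layer sizes are invented for illustration.

def pick_layers(layer_kbps, bandwidth_kbps):
    """Return the indices of layers whose cumulative bitrate fits the budget.

    The base layer (index 0) is always sent so playback never stalls outright."""
    chosen, total = [], 0
    for i, kbps in enumerate(layer_kbps):
        if i > 0 and total + kbps > bandwidth_kbps:
            break
        chosen.append(i)
        total += kbps
    return chosen

# Hypothetical ladder: 400 kbps base + three enhancement layers.
ladder = [400, 600, 1000, 2000]
print(pick_layers(ladder, 1500))  # base + first enhancement layer fit
```

When bandwidth later improves, previously skipped enhancement layers can still be fetched to upgrade already-downloaded segments, which is awkward to do with independent per-quality encodings.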
Interactive and immersive applications such as Augmented Reality (AR) and Virtual Reality (VR) have significant potential for tasks like industrial training, collaborative robotics, and remote operation. A key challenge in delivering these applications is providing accurate and robust tracking of the multiple agents (humans and robots) involved in everyday, challenging environments. Current AR/VR solutions rely on visual tracking algorithms (e.g., SLAM/odometry) that are highly sensitive to environmental conditions (e.g., lighting). This project explores augmenting visual tracking with RF positioning (e.g., WiFi/UWB) to improve accuracy (to the < 1 cm level), robustness (across diverse environmental conditions), and scalability across multiple agents. The key challenge is making two completely different sensing modalities complement each other with little or no infrastructure support.
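To make the complementarity concrete, one textbook way to combine two noisy position estimates is inverse-variance weighting: the modality that is currently more reliable gets more weight. This is a simplified stand-in for illustration, not the fusion algorithm used in RoVaR; all variances below are assumed.

```python
# Minimal sketch of fusing a visual-odometry position estimate with an
# RF (e.g., UWB) estimate by inverse-variance weighting. A simplified
# 1-D illustration; all positions and variances are assumed values.

def fuse(visual_pos, visual_var, rf_pos, rf_var):
    """Weight each 1-D position estimate by the inverse of its variance."""
    w_v = 1.0 / visual_var
    w_r = 1.0 / rf_var
    fused = (w_v * visual_pos + w_r * rf_pos) / (w_v + w_r)
    fused_var = 1.0 / (w_v + w_r)  # fused estimate is tighter than either input
    return fused, fused_var

# In good lighting the visual estimate is trusted more (lower variance);
# in the dark its variance grows and the RF estimate dominates instead.
pos, var = fuse(visual_pos=2.00, visual_var=0.01, rf_pos=2.10, rf_var=0.04)
print(f"fused position: {pos:.3f} m (variance {var:.4f})")
```

The fused variance is always smaller than either input variance, which is the statistical payoff of dual-modality sensing: each modality covers the conditions where the other degrades.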
This class covers the fundamental principles of wireless and mobile networking. Some of the topics we will cover are the following: