I am an Assistant Professor in the Department of Electrical and Computer Engineering at Northeastern University. I direct the Spatial Intelligence Research Group (SINRG). I am interested in advancing research in spatial intelligence, which spans several interconnected areas of computing and human-centered technologies:
- Immersive Multimedia Foundations: Representation, compression, and streaming of dynamic spatial media, including 4D point clouds, meshes, volumetric video, world models, and digital twins, for real-time immersive experiences.
- Internet and Wearable Systems: Networked systems and communication for smart glasses and immersive applications, and distributed infrastructure that supports spatial computing at scale.
- HCI, Visualization, and Graphics: Interfaces and visual systems that enable humans to interact with spatial data through AR/VR platforms, collaborative environments, and interactive visual representations of complex environments.
- Education, Learning, and Workforce Training: Using immersive technologies to transform education, training, and skill development through experiential learning systems and collaborative digital environments.
My work has been supported by the U.S. National Science Foundation (NSF), the National Security Agency (NSA), and the U.S. Department of Defense (DoD). It has received the Best Reproducible Paper Award at ACM Multimedia Systems (MMSys) 2025, the Best Demo Award at ACM HotMobile 2025, the 2025 Innovation Award from the AI-RAN Alliance, the Best Demo Award at DARPA SRC Symposium 2023, and the Best Paper Award at IEEE Symposium on Multimedia 2021.
Selected Publications
A full list of papers can be found here.
Teaching
An interdisciplinary course covering emerging immersive media, computer networks, vision, and graphics. In addition to lectures, the course includes experiential sessions with a variety of state-of-the-art XR headsets.
- Fundamental problems of networked applications
- XR fundamentals: headsets, glasses, wearables
- XR content representations
  - 2D, flat 360°, 3D, and volumetric videos: RGB-D, point clouds, meshes, NeRF
  - Monocular, stereoscopic, and multiview videos
- Acquiring XR content for network delivery
- Compression for RGB, depth, point clouds, and mesh sequences
- Streaming fundamentals: stored, live, and interactive protocols
- Streaming XR content: videos, point clouds, meshes, holograms, spaces
- Local streaming via Wi-Fi, mmWave, and optical wireless links
- Remote and hybrid rendering
- Visual and wireless sensing for person tracking
- ARKit, Unity, Open3D, and networked XR platforms
- Building XR systems such as 3D telepresence and spatial web applications
- Tracking fundamentals: eyes, hands, face, head, body; outside-in and inside-out systems
A course on the fundamental principles of wireless and mobile networking, including wireless signals and protocols, spectrum sharing, RF localization, mobile transport, sensing, mobile video, device performance, energy management, and deep learning for mobile and wireless systems.