Dolby Laboratories and CICS Announce Corporate Partnership

The UMass Amherst Manning College of Information and Computer Sciences (CICS) and Dolby Laboratories, an industry leader in audio and imaging technologies, have announced an innovative partnership aimed at advancing the future of immersive technologies. This initiative brings together interested faculty, researchers, and industry experts to drive breakthroughs in sound, imaging, AI, and data processing.  

The partnership was initiated through a joint effort between Donna M. and Robert J. Manning Dean Laura Haas, and Senior Vice President of the Advanced Technology Group at Dolby and CICS advisory board member Shriram Revankar. Shlomo Zilberstein, professor and CICS associate dean for research and engagement, also played a crucial role in facilitating the collaboration between faculty members and Dolby researchers. 

“From the press, to television and film, to the digital revolution, media continues to be a key driver in how we experience the world around us and will be at the forefront of how we design the future of immersive experiences and human connection as a society,” says Revankar. “At Dolby, we’re excited to collaborate with premier academic institutions like the University of Massachusetts Amherst, to encourage deep research in media with the aim of creating new knowledge that will fuel a whole new generation of innovation.” 

“CICS and Dolby Labs share a vision of fostering innovation and driving advancements in technology. This partnership provides an excellent opportunity to leverage our complementary strengths and expertise, with the potential to shape the future of audiovisual and immersive technologies,” says Zilberstein. “We are thrilled for the opportunity to work with Dolby to provide big, long-term breakthroughs that will drive the next generation of immersive consumer experiences." 

As part of this partnership, Dolby is making research internship opportunities available to CICS students and others, providing access to cutting-edge technologies, mentorship, and the chance to work on passion projects. The customizable nature of these projects is expected to foster long-term, mutually beneficial research collaborations.

“Dolby is really giving CICS students unprecedented leeway,” says Zilberstein. “Their willingness to make so many of their researchers available as mentors, so that students can pursue their interests and passions, should provide extraordinary learning and professional development opportunities to our students.”

In addition to the internships, Dolby has partnered with CICS faculty to provide gift funding for faculty-driven research projects that could advance the state of the art in immersive experiences. A virtual workshop held in late 2022 allowed Dolby and faculty members to establish relationships, discuss research priorities, and identify potential collaborations. Following the workshop, Dolby Laboratories provided an initial gift to support multiple projects of faculty interest – five of which have already been identified. New projects will be identified through mutual visits, workshops, and research collaborations, and Dolby intends to provide further gift funding for CICS research on a regular basis.

“I'm confident that Dolby’s generous funding will produce exciting innovations that are of interest to Dolby Laboratories, whose technologies provide awe-inspiring experiences for billions of people around the world,” says Zilberstein. 

Left to right: Shlomo Zilberstein (CICS), Brett Crockett (Dolby Labs), Samir Hulyalkar (Dolby Labs), and Shriram Revankar (Dolby Labs)


The five projects selected to receive initial Dolby seed funding grants are:   

Auditing and Correcting Bias in Learned Models in ML-Driven Image and Sound Processing 
Led by Professor Yuriy Brun, this project aims to address bias in machine learning (ML) models in domains relevant to Dolby Laboratories, with a particular focus on understanding the impact such bias has on potential customers. The project will measure (1) how bias in ML models affects users’ trust in products that rely on them, and (2) whether models trained on data from members of certain protected groups generalize to other groups.
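
To make the second measurement concrete, here is a minimal sketch using scikit-learn and synthetic data; the features, group labels, and model choice are illustrative assumptions, not the project's actual methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                     # synthetic features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)              # hypothetical protected-group label

# Per-group accuracy gap: a simple proxy for the disparity a user might perceive.
model = LogisticRegression().fit(X, y)
for g in (0, 1):
    mask = group == g
    print(f"group {g} accuracy: {accuracy_score(y[mask], model.predict(X[mask])):.3f}")

# Cross-group generalization: train on one group, evaluate on the other.
for train_g, test_g in ((0, 1), (1, 0)):
    m = LogisticRegression().fit(X[group == train_g], y[group == train_g])
    acc = accuracy_score(y[group == test_g], m.predict(X[group == test_g]))
    print(f"trained on group {train_g}, tested on group {test_g}: {acc:.3f}")
```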

Analysis via Deep Architectures with Stage-Optimized Fusion  
Assistant Professor Madalina Fiterau Brostean will lead this project, which focuses on improving multimedia analysis by optimizing the fusion mechanisms in deep multimodal architectures.
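
As a loose illustration of what choosing a fusion stage means, the PyTorch sketch below encodes two modalities and fuses them either at an intermediate layer or at the prediction stage; all dimensions and module names are hypothetical, not the project's design.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Encodes audio and image inputs, fusing them at a chosen stage."""

    def __init__(self, stage: str = "mid"):
        super().__init__()
        self.stage = stage
        self.audio_enc = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.joint = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.audio_head = nn.Linear(128, 10)
        self.image_head = nn.Linear(128, 10)
        self.joint_head = nn.Linear(128, 10)

    def forward(self, audio, image):
        a, v = self.audio_enc(audio), self.image_enc(image)
        if self.stage == "mid":
            # Mid-level fusion: concatenate intermediate features.
            return self.joint_head(self.joint(torch.cat([a, v], dim=-1)))
        # Late fusion: average per-modality predictions.
        return (self.audio_head(a) + self.image_head(v)) / 2

logits = FusionNet("mid")(torch.randn(4, 64), torch.randn(4, 256))
```

A stage-optimized approach would treat this choice, along with richer fusion operators, as something to search over or learn rather than fix by hand.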

Physics-Aware Audio-Visual Learning
Led by Assistant Professor Chuang Gan, this project seeks to employ physics-aware audio-visual learning, which combines the information contained in visual data with an understanding of the environment that produces a sound, to capture and mimic more natural sounds from the physical world.
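
One physical effect such a system must account for is that the environment shapes what we hear; in acoustics this is commonly modeled by convolving a dry source with the space's impulse response. The sketch below illustrates only that textbook ingredient, with a toy impulse response, and is not the project's learning approach.

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 16000
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # a decaying 440 Hz tone

# Toy impulse response: direct path plus two delayed wall reflections.
ir = np.zeros(sr // 4)
ir[0], ir[1200], ir[3100] = 1.0, 0.4, 0.15

wet = fftconvolve(dry, ir)[: len(dry)]  # the tone as heard in the "room"
```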

Mod-Squad: Designing Mixture of Experts as Modular Audiovisual Learners  
This project, co-led by Chuang Gan and Chair of the Faculty and Professor Erik Learned-Miller, will develop a new audiovisual learning model called Mod-Squad, which aims to address the challenge of balancing cooperation and specialization between vision and audio modalities. Mod-Squad uses a modular design that integrates mixture-of-experts layers into a vision transformer backbone network. This design divides a model into groups of experts that can be either shared among or specialized for different types of data, tasks, and modalities, so that experts can cooperate when it is helpful while developing deep specialization in specific tasks or types of data, improving overall performance.
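
For readers unfamiliar with the building block, the sketch below shows a generic mixture-of-experts layer of the kind that can replace the feed-forward sublayer in a vision transformer block: a router sends each token to its top-k experts. Sizes and routing details are illustrative assumptions, not the Mod-Squad implementation.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim: int = 192, n_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (batch, tokens, dim)
        weights = self.router(x).softmax(dim=-1)       # routing probabilities
        topw, topi = weights.topk(self.k, dim=-1)      # keep top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[..., slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += topw[..., slot][mask, None] * expert(x[mask])
        return out

y = MoELayer()(torch.randn(2, 16, 192))
```

In a full model, the router's assignments are what create the shared or specialized expert groups the paragraph describes.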

Estimating Pose and Articulation of Objects from Videos 
Led by Associate Professor Subhransu Maji, this project focuses on estimating the position and movement of objects in images and videos. The team will pursue two main goals: (1) relative pose estimation, building a system that determines the position and orientation of an object from two different images, and (2) point-trajectory grouping, developing methods that track points on objects and group them by their motion and appearance. Together, these techniques will help estimate the 3-D shapes of deformable objects and are designed to scale to high-resolution videos and faster frame rates.
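
As a point of reference for the first goal, a classical baseline estimates relative pose from two images by matching features and decomposing the essential matrix. The OpenCV sketch below follows that standard recipe; the image paths and camera intrinsics K are placeholders, and this is a baseline, not the project's method.

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)          # placeholder paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed intrinsics

# Detect and match local features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Fit the essential matrix with RANSAC, then decompose it into the
# rotation R and unit-scale translation direction t between the cameras.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Learning-based approaches can replace or complement each step of this pipeline, and the project's handling of deformable objects goes beyond this rigid-scene baseline.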