One of the reasons students attend Penn is access to faculty and research opportunities, yet few undergraduates take advantage of them. While classroom experience is essential, so is the opportunity to create new knowledge by examining the unknown. The Digital Media Design program is closely affiliated with the Center for Human Modeling and Simulation and the ViDi Center for Digital Visualization. Research projects are undertaken by heterogeneous teams of graduate and undergraduate students and visitors. Undergraduates who contribute in a substantial way become co-authors on publications. In the past few years, summer researchers in HMS and ViDi have had papers published in notable computer graphics conferences and journals.
From year to year the internship topics vary depending on funding, research needs, and student interests. The projects have overall faculty guidance, but students are expected to learn new software systems, do extensive programming, contribute to archival materials (software, documentation, and written papers), and orally present their work to others. Posters and participation in department-wide research demonstrations are strongly encouraged.
Many of the participating undergraduate students from previous summers are recognized through authorship in published papers.
Project: SPACES - Spatialized Performance And Ceremonial Event Simulations
Keywords: Computer graphics and animation, human figures, archaeological site reconstruction.
Although computerized crowd simulations exist, most effort has been directed toward low-level navigation, collision avoidance, and trajectory realism. “Higher-level” organization is left to user discretion, artistic decisions, or creative goals. Crowds are often behaviorally homogeneous with only a vague overall purpose. SPACES will center on processional environments: what activities occur where, about how long they last, what objects agents carry/use/play, sound and motion coordination, and interpersonal interactions. We will develop a user interface to control such parameters. SPACES will use the Unreal game engine for its visual experiences and will allow a user to embed herself as a crowd participant and active performer. SPACES requires highly realistic graphic environments and responsive behaviors in the other characters to cement the sense of cultural presence: “the feeling of being and making sense there together.” What better way to experience the ethos of a bygone culture than by being embedded in its public ceremonial practices? The exemplar we will be using is the Inca site of Pachacamac (Peru). Extensive in architecture and clearly designed for large-scale events, Pachacamac serves as our test-bed for immersive exploration of hypothetical cultural experiences.
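To illustrate the kind of parameters such a user interface might expose, here is a minimal hypothetical Python schema for a processional schedule. The class and field names are assumptions for illustration only, not SPACES' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessionalActivity:
    """One scheduled activity in a ceremonial procession (hypothetical schema)."""
    name: str            # e.g. "offering" or "musical performance"
    location: str        # named area of the site, e.g. "temple forecourt"
    duration_min: float  # approximate duration in minutes
    props: list = field(default_factory=list)  # objects agents carry/use/play

@dataclass
class Procession:
    """An ordered sequence of activities making up one ceremonial event."""
    activities: list = field(default_factory=list)

    def total_duration(self) -> float:
        """Total time if the activities run sequentially."""
        return sum(a.duration_min for a in self.activities)

# Example: a two-stage procession
p = Procession([
    ProcessionalActivity("entry march", "north gate", 10, ["drums", "flutes"]),
    ProcessionalActivity("offering", "temple forecourt", 25, ["ceramic vessels"]),
])
```

A user interface over a structure like this would let a designer rearrange activities, adjust durations, and assign props without touching the underlying crowd engine.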
Requirements: Computer programming, graphics, and 3D modeling experience necessary.
Project: Sign Language Movement, Energetics, and Evolution Analysis
This NSF REU (Research Experience for Undergraduates) will involve learning and using a software toolkit to compute physically and biomechanically appropriate measures of human movement during sign language performances. These measures will be used to assess the evolution of signing performance in several available datasets, in particular, to see if and how energetics and muscular workloads affect sign evolution. The suite of tools was written by Dr. Aline Normoyle, whose PhD dissertation is a primary resource; the work will also involve reading and understanding the background biomechanics and physics of human movement. The software computes a number of measures such as torques, accelerations, and work. We do not know which measures are operative in sign language evolution, so much of the analysis is oriented toward finding meaningful measures in the given datasets. You will take the joint position data from analyzed video sequences, format the data for the analytic tools, run the analyses, and visualize the results with suitable graphs and diagrams. You will also assist Drs. Badler and Normoyle with writing up the results of this study for publication or submission to a scholarly meeting.
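By way of illustration, the sketch below computes simple kinematic measures (velocity, acceleration, point-mass kinetic energy, and a crude positive-work estimate) from joint position data using finite differences. This is a generic sketch of the kind of analysis involved, not Dr. Normoyle's toolkit; the function names and the point-mass simplification are assumptions.

```python
import numpy as np

def kinematics(positions, dt):
    """Finite-difference velocity and acceleration from joint positions.

    positions: (T, J, 3) array of J joint positions over T frames.
    dt: frame interval in seconds.
    """
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc

def kinetic_energy(vel, masses):
    """Per-frame kinetic energy, treating joints as point masses (a simplification)."""
    speed2 = np.sum(vel ** 2, axis=2)             # (T, J) squared speeds
    return 0.5 * np.sum(masses * speed2, axis=1)  # (T,) total energy per frame

def positive_work(energy):
    """Crude work estimate: sum of the positive kinetic-energy increments."""
    d = np.diff(energy)
    return float(np.sum(d[d > 0]))

# Toy data: one joint accelerating uniformly along x for one second
T = 50
t = np.linspace(0.0, 1.0, T)
dt = t[1] - t[0]
pos = np.zeros((T, 1, 3))
pos[:, 0, 0] = 0.5 * t ** 2

vel, acc = kinematics(pos, dt)
E = kinetic_energy(vel, np.array([1.0]))
w = positive_work(E)
```

For real datasets the per-joint masses would come from anthropometric tables, and the video-derived positions would need filtering before differentiation, since finite differences amplify tracking noise.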
Project: Building and Improving Physics-Based Simulation Infrastructure
There is strong value in adapting algorithms to high-performance platforms, especially multi-GPU and distributed-memory architectures, for scalable simulations. Large-scale simulations are becoming more demanding and popular than ever, and much effort has been invested in algorithms that scale on supercomputers or cloud servers. The biggest advantage of HPC over cloud computing is that supercomputer nodes are usually more tightly coupled through faster and more stable interconnects (e.g., InfiniBand). For physics-based simulations, the frequent communication among subdomains relies heavily on these interconnects. At the same time, there has been a trend toward deploying general-purpose GPUs (GPGPU) as accelerators in modern high-end HPC systems. This internship position focuses on building and improving large-scale, massively parallel physics-based simulation infrastructure on supercomputers, multi-GPU platforms, and distributed servers.
Requirements: Candidate should be familiar with modern C++ programming. Experience with HPC such as single node multi-threading, CUDA, OpenCL, or MPI is strongly recommended.
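To illustrate the subdomain-communication pattern described above, the following Python sketch mimics a halo (ghost-cell) exchange for a 1D diffusion problem split across subdomains. A real implementation would use MPI send/receive calls and CUDA kernels rather than in-process array copies; this sketch only shows why each step must begin by exchanging boundary values with neighbors.

```python
import numpy as np

def split_with_ghosts(u, nranks):
    """Partition grid u into subdomains, each padded with one ghost cell per side."""
    chunks = np.array_split(u, nranks)
    return [np.concatenate(([0.0], c, [0.0])) for c in chunks]

def exchange_halos(subs):
    """Copy boundary values into neighbors' ghost cells (stand-in for MPI sendrecv)."""
    for i in range(len(subs) - 1):
        subs[i][-1] = subs[i + 1][1]   # right ghost <- neighbor's first interior cell
        subs[i + 1][0] = subs[i][-2]   # left ghost  <- neighbor's last interior cell

def step(subs, alpha=0.1):
    """One explicit diffusion step on each subdomain's interior cells.

    Ghost cells at the global ends stay 0, i.e. a Dirichlet u=0 boundary.
    """
    for s in subs:
        interior = s[1:-1]
        s[1:-1] = interior + alpha * (s[:-2] - 2 * interior + s[2:])

# Driver: a point of heat in the middle of a 16-cell grid, split across 4 "ranks"
u = np.zeros(16)
u[8] = 1.0
subs = split_with_ghosts(u, 4)
for _ in range(10):
    exchange_halos(subs)   # communicate before every step
    step(subs)
result = np.concatenate([s[1:-1] for s in subs])
```

Because the halos are refreshed before each step, the decomposed run reproduces a single-domain run exactly; on a real cluster the exchange is the part that stresses the interconnect.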
Project: Numerical Modeling of Soft Interactions for Robots
The goal of this project is to bridge the gap between the soft robots currently fabricated in the GRASP lab and the fast simulation tools developed in the SIG lab. Soft and semi-rigid robots are a new and active area in robotics research because they promise safer, more robust interactions with the environment and with humans. However, soft robot design suffers from limitations in computational models of these interactions. Existing approaches to simulation are slow and computationally expensive, making it cumbersome to iterate over designs in a virtual environment. The field would benefit greatly from a fast, accurate, and preferably differentiable simulation for soft body interactions.
At the same time, researchers in computer graphics have excelled at creating real-time and high-resolution simulations of soft bodies ranging from foams to fabrics to hair to fluids like water and mud. However, the focus of this work has been on creating visually reasonable animations, rather than conducting quantitative comparisons to real physical systems. As a result, the physical accuracy of these simulations has never been evaluated.
This project will explore the applicability of simulations originating from the computer graphics community to real-world physical robots. The work will require close collaboration between Cynthia Sung (mechanical engineering) and Chenfanfu Jiang (computer science) to merge ideas from both groups. Cynthia Sung’s expertise is in rapid fabrication and robot design, and she will contribute experimental instrumentation and robotics intuition necessary to understand the robotic implications of the work. Chenfanfu Jiang’s expertise is in material point methods for high performance simulation, and he will contribute algorithmic and physics intuition needed to pinpoint gaps between the experiments and reality. Given the disciplinary language barriers and traditional separation of the two fields, it would be unlikely for either group to be able to explore this project without a consolidated and concentrated effort.
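For a flavor of the simulation side, here is a minimal mass-spring chain integrated with semi-implicit Euler. This is a far simpler model than the material point methods used in the actual project, but it illustrates the validate-against-ground-truth workflow: the simulated final state can be checked against the analytic equilibrium, just as the project compares simulations against physical experiments. All parameter values are illustrative.

```python
import numpy as np

def simulate_chain(n=5, k=200.0, m=0.1, rest=0.1, dt=1e-3, steps=5000, damping=0.5):
    """Semi-implicit Euler for a vertical chain of point masses joined by springs.

    The top node is pinned; gravity pulls the rest down.
    Returns the final y positions (meters, negative = downward).
    """
    g = 9.81
    y = -rest * np.arange(n, dtype=float)  # start at the springs' rest lengths
    v = np.zeros(n)
    for _ in range(steps):
        f = np.full(n, -m * g)             # gravity on every node
        for i in range(n - 1):             # spring between node i and node i+1
            stretch = (y[i] - y[i + 1]) - rest
            f[i] -= k * stretch            # stretched spring pulls node i down...
            f[i + 1] += k * stretch        # ...and node i+1 up
        f -= damping * v                   # simple viscous damping
        v += dt * f / m
        v[0] = 0.0                         # pin the top node
        y += dt * v
    return y

y = simulate_chain()
```

At equilibrium, spring i carries the weight of the nodes below it, so its stretch is (n-1-i)·m·g/k; summing the stretched separations gives the exact hanging length to compare against.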
Position 1: focusing on Physics-based Simulation:
The student will work with existing physics-based simulation code. At the beginning of the summer, some time will be necessary for the student to understand the code base and how to use the system. The student will coordinate with robotics team members, who are planning the physical experimental setup, to create the same experimental conditions within the simulation and output virtual results. As the experiments progress, the students will collectively work to understand the difference between the simulation results and the physical results produced, and this student will work to integrate those insights into the simulation code base.
Needed background: The student will need to be proficient in C++.
Learning opportunities: The student will learn state of the art computer graphics techniques and will also see how well those techniques generalize to real, physical systems.
Position 2: focusing on 3D Modeling:
This student will be responsible for creating 3D models for the soft robots and their test environments. These models will be used both in the simulations and in the experimental tests. The student will work directly with robotics students in printing the 3D models and verifying their mechanical properties (e.g., elasticity), which must then be input into the simulation. As new insights about the simulation and physical systems emerge, the student will refine the 3D models to take experimental constraints into account.
Needed background: The student will need to be familiar with 3D modeling tools such as Blender or Maya. These tools are commonly used by students in digital media design or others interested in computer animations.
Learning opportunities: The student will learn how to create complex 3D models and will gain intuition on when 3D models are physically (not just virtually) feasible.
Project: Interactive Authoring of Augmented Reality Task Content
The goal of this project is to develop an innovative in situ authoring system that transforms the way augmented reality content is created for many training and education tasks. This will be accomplished by capturing the movements, actions and verbal descriptions of an instructor, trainer, designer, etc. as they actually perform a task. The system will then automatically segment the captured task data into goal-directed subtasks and adapt it for use in a “Helping Hands” application to automatically prompt and guide users while they perform the task themselves, taking into account the specifics of the task environment (physical layout, location of parts and objects, etc.).
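As a concrete, hypothetical illustration of the segmentation step, the sketch below splits a captured motion stream into goal-directed subtasks wherever the instructor's hand speed stays below a threshold for a sustained pause. The function name and thresholds are illustrative assumptions, not the project's actual algorithm, which would also incorporate the verbal descriptions and scene context.

```python
import numpy as np

def segment_subtasks(speed, dt, pause_thresh=0.05, min_pause=0.5):
    """Split a captured motion stream into subtasks at sustained pauses.

    speed: per-frame hand/tool speed (m/s); dt: frame interval (s).
    A run of frames below pause_thresh lasting at least min_pause seconds
    is treated as a boundary between subtasks. Returns (start, end) frame
    index pairs, with end exclusive.
    """
    min_frames = int(round(min_pause / dt))
    moving = speed >= pause_thresh
    segments, start, still = [], None, 0
    for i, m in enumerate(moving):
        if m:
            if start is None:
                start = i        # a new subtask begins
            still = 0
        else:
            still += 1
            if start is not None and still >= min_frames:
                # the pause is long enough: close the subtask at its first still frame
                segments.append((start, i - still + 1))
                start = None
    if start is not None:
        segments.append((start, len(speed)))
    return segments

# Toy stream at 10 Hz: move for 1 s, pause for 0.6 s, move for 0.8 s
speed = np.array([1.0] * 10 + [0.0] * 6 + [1.0] * 8)
segments = segment_subtasks(speed, dt=0.1)
```

In the full system each segment would then be labeled with the instructor's narration and the objects manipulated, yielding the prompts the "Helping Hands" application replays.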
Project: Augmented Reality Enhancement of Medical Simulation and Training Applications
The goal of this project is to develop a prototype Augmented Reality (AR) surgical navigation system for orthopedic applications (insertion of pins, rods, plates, etc.) using the Microsoft Hololens and Magic Leap devices. Knowledge of the Unity game engine required.
Project: Deep Learning of High-Resolution 3D Volumetric Views Given Low-Resolution 2D Image Inputs
This project will investigate the configuration and training of Convolutional Neural Network models that allow low resolution camera inputs to generate both camera position/orientation data and high-resolution volumetric views as neural network outputs.
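As a shape-level illustration only, the toy forward pass below maps a low-resolution image through one convolution to two heads: a 6-DoF camera pose vector and a 16x16x16 occupancy volume. The weights are random and untrained, and the layer sizes are arbitrary assumptions; a real system would use a trained CNN in a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' 2D convolution of a single-channel image (toy, unoptimized)."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# A 32x32 low-resolution "camera image" in; pose and volume out.
img = rng.random((32, 32))
feat = np.maximum(conv2d(img, rng.standard_normal((5, 5))), 0.0)  # conv + ReLU
flat = feat.reshape(-1)

W_pose = rng.standard_normal((6, flat.size)) * 0.01
pose = W_pose @ flat                          # (6,): position + orientation head

W_vol = rng.standard_normal((16 ** 3, flat.size)) * 0.01
volume = (W_vol @ flat).reshape(16, 16, 16)   # dense volumetric head
```

The research question is what architecture and training regime make these two heads accurate simultaneously, since pose and volume predictions constrain each other.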
Project: Development of Augmented Reality Applications for Large Screen Displays
The goal of this project is to create a compelling Unity demo of a user holding/wearing an Android-based smartphone interacting with content shown on a large screen display. Both a handheld mode and a head-mounted display mode (using the GearVR HMD) will be developed using the Google ARCore SDK and the Vuforia 7 computer vision plugin. The project involves estimating the position and orientation of the smartphone/HMD with respect to the display screen (and optionally the user’s head with respect to the handheld smartphone) in order to create new multi-user game experiences and crowd-sourced augmented reality application content.
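One ingredient of that pose estimation can be illustrated with the pinhole-camera relation Z = f·W/w: the distance to a fiducial of known physical width follows from its apparent width in pixels. This sketch is a deliberate simplification (ARCore and Vuforia recover full 6-DoF poses, not just distance), and the numeric values are illustrative.

```python
def distance_from_marker(focal_px, marker_width_m, width_in_pixels):
    """Pinhole-camera distance estimate: Z = f * W / w.

    focal_px: camera focal length in pixels.
    marker_width_m: known physical width of an on-screen fiducial (m).
    width_in_pixels: the fiducial's apparent width in the camera image.
    """
    return focal_px * marker_width_m / width_in_pixels

# A 0.5 m wide marker seen at 250 px with a 1000 px focal length
d = distance_from_marker(1000.0, 0.5, 250.0)
```

Combining several such markers at known positions on the large display (or the full marker corner geometry) yields the phone's orientation as well as its distance.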
Background Image: Jeremy Newlin (DMD 2013, CGGT 2014)