When Virtual Comes to Life
We are PerSiVal (which stands for Pervasive Simulation and Visualization): three noble knights who harness augmented reality (AR) to transport colorful representations from the virtual world into the real world around you.
To understand what we mean, take a look in our magic mirror. It gives quite a good, simplified explanation of the principle.
When computers understand your movements
We use computer programs and a camera to identify your body movements, focusing on your arms (whether you're waving, making circles with them, or even posing like a bodybuilder!). The computer processes this information and then shows you what your muscles look like during those movements. To make that possible, we've taught our program a little bit about biomechanics and human muscles.
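Curious what that looks like behind the scenes? Here is a tiny Python sketch of the basic idea: recognizing an arm pose from a few camera key points. The coordinates, joint names, and thresholds are made up for illustration; the real system runs a full pose-estimation model on the live camera feed.

```python
# Toy sketch: classify an arm pose from 2D key points in image coordinates.
# Key points and thresholds are illustrative assumptions, not the project's
# actual pose-estimation pipeline.
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow joint, in degrees, from 2D key points."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)
    lower = np.asarray(wrist, float) - np.asarray(elbow, float)
    cos = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_arm_pose(shoulder, elbow, wrist):
    angle = elbow_angle(shoulder, elbow, wrist)
    if angle < 100 and wrist[1] < elbow[1]:   # y grows downward in images
        return "bodybuilder flex"
    if angle > 150:
        return "arm extended (waving?)"
    return "neutral"

# Example key points (x, y), e.g. as delivered by a pose model.
print(classify_arm_pose(shoulder=(320, 200), elbow=(400, 210), wrist=(380, 120)))
```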
Program power: How we make muscles dance!
Then we taught the program a few tricks so that it can also run on small devices, for example by spreading the work across several devices and computers. This is what is referred to as “working on distributed systems”.
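As a toy illustration of that trick, the sketch below decides, per task, whether to compute on the (weak) device itself or hand the work to a stronger machine. The timings and the two backends are simulated stand-ins, not the project's actual scheduler.

```python
# Toy work distribution: cheap tasks stay on the device, heavy ones go
# to a server when the network is available. All costs are simulated.
import time

def run_on_device(task):
    time.sleep(task["cost"] * 0.01)            # slow local compute (simulated)
    return f"{task['name']}: done on device"

def run_on_server(task):
    time.sleep(0.005 + task["cost"] * 0.001)   # network round trip + fast compute
    return f"{task['name']}: done on server"

def dispatch(task, network_ok=True):
    if network_ok and task["cost"] > 5:
        return run_on_server(task)
    return run_on_device(task)

for task in [{"name": "smooth key points", "cost": 1},
             {"name": "evaluate muscle model", "cost": 50}]:
    print(dispatch(task))
```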
And finally, we taught it to display your muscles on the screen in such a way that they appear in exactly the right place on your arm in the camera image. That's what we mean by “augmented reality” above: virtual elements that seem to blend seamlessly with the real world.
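Here is a rough 2D sketch of that anchoring idea: the virtual muscle is positioned, rotated, and scaled so it follows the line between shoulder and elbow in the image. The coordinates are invented, and a real AR pipeline would also handle depth and occlusion.

```python
# Toy 2D anchoring: place a virtual overlay along the upper arm so it
# moves with the tracked key points. Coordinates are illustrative.
import numpy as np

def overlay_transform(shoulder, elbow, rest_length=100.0):
    shoulder, elbow = np.asarray(shoulder, float), np.asarray(elbow, float)
    seg = elbow - shoulder
    anchor = shoulder + 0.5 * seg                    # biceps sits mid upper arm
    angle = np.degrees(np.arctan2(seg[1], seg[0]))   # rotate with the arm
    scale = np.linalg.norm(seg) / rest_length        # resize with apparent length
    return anchor, angle, scale

anchor, angle, scale = overlay_transform((320, 200), (400, 210))
print(f"draw muscle at {anchor}, rotated {angle:.1f} deg, scaled x{scale:.2f}")
```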
Adaptive Simulation and Interaction
The aim of “Pervasive Simulation” is to enable people to use simulation technology anytime and anywhere, both in their private and their professional lives. Users can interact with simulations in real time by means of mobile devices such as tablets or mixed reality headsets.
Visualizing simulation results fast enough (i.e., in real time), together with algorithms that let the system react to the user immediately, paves the way for a range of new applications.
For instance, one part of our “PerSiVal” project network is working on the visualization of complex biomechanical simulations straight onto the human body. This is done in augmented reality (AR).
Neural networks and real-time simulations for fascinating visualizations
This visualization uses complex neural networks with highly optimized evaluation, keeping the application lightweight enough for devices with limited computing power. The data these networks are trained on, however, comes from a simulation that can only be computed on a high-performance computer.
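To make that division of labor concrete, here is a minimal sketch of the surrogate idea: the expensive simulation runs offline to produce training data, and the device only evaluates a small network. The layer sizes, inputs, and random weights below are illustrative stand-ins for the trained model.

```python
# Minimal surrogate-model sketch: a tiny network maps joint state to a few
# muscle shape parameters. Random weights stand in for ones that, in
# reality, were trained offline on high-performance-computer simulations.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)   # stand-in trained weights
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def muscle_shape(elbow_angle_deg, activation):
    """Cheap on-device forward pass: joint state -> shape parameters."""
    x = np.array([elbow_angle_deg / 180.0, activation])   # normalized inputs
    h = np.tanh(W1 @ x + b1)                              # tiny hidden layer
    return W2 @ h + b2                                    # e.g. bulge, length, width

print(muscle_shape(elbow_angle_deg=70.0, activation=0.8))
```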
The simulation results delivered by the neural networks are rendered and mapped onto the right place on the human body in real time, i.e., at a rate of at least 30 frames per second.
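In code terms, “real time” means every frame has a fixed budget: at 30 frames per second, roughly 33 milliseconds for tracking, model evaluation, and drawing. The toy loop below illustrates that budget, with the actual per-frame work simulated by a short sleep.

```python
# Toy render loop with a 30 fps frame budget. The per-frame work is
# simulated; a real renderer would also synchronize with the display.
import time

FRAME_BUDGET = 1.0 / 30.0          # ~33 ms per frame at 30 fps

for frame in range(3):
    start = time.perf_counter()
    time.sleep(0.010)              # stand-in for tracking + evaluation + drawing
    work = time.perf_counter() - start
    if work < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - work)   # idle for the rest of the frame
    print(f"frame {frame}: used {work * 1000:.1f} ms "
          f"of {FRAME_BUDGET * 1000:.1f} ms budget")
```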
Adaptive simulation systems for a wide variety of computer platforms
With the growing variety of computing platforms, from mobile devices to cloud computing to high-performance computers, simulation systems for everyday use must be able to run on a highly heterogeneous communication and computation infrastructure.
This is exactly the focus of the work carried out by the “Distributed Systems” department – one of the three pillars of this project, along with biomechanical simulation and human-machine interaction.
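One simple way to picture such adaptivity: the same system ships several model variants and selects the heaviest one a given platform can carry. The tiers, device specs, and thresholds below are invented for illustration.

```python
# Toy capability-based model selection across heterogeneous platforms.
# Tiers and thresholds are illustrative assumptions.
MODEL_TIERS = [
    ("full simulation",        {"min_gflops": 5000, "min_ram_gb": 64}),
    ("large neural surrogate", {"min_gflops": 200,  "min_ram_gb": 8}),
    ("small neural surrogate", {"min_gflops": 5,    "min_ram_gb": 1}),
]

def pick_model(gflops, ram_gb):
    for name, req in MODEL_TIERS:
        if gflops >= req["min_gflops"] and ram_gb >= req["min_ram_gb"]:
            return name
    return "remote evaluation only"

for device, specs in {"HPC node": (20000, 256),
                      "tablet": (500, 8),
                      "AR headset": (50, 4)}.items():
    print(f"{device}: {pick_model(*specs)}")
```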
As the project progresses, the aim is to incorporate data from physiological sensors in wearables such as smartwatches, as well as from other sources. This will enable even more complex interactive real-time visualizations.
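To give a flavor of what that could look like, the sketch below smooths a simulated smartwatch heart-rate stream into an exertion value that could modulate the muscle rendering. The mapping is an illustrative assumption, not a physiological model.

```python
# Toy sensor fusion: smooth a heart-rate stream and map it to [0, 1].
def exertion_from_heart_rate(samples, rest=60.0, max_hr=180.0):
    window = samples[-5:]                      # moving average over last samples
    smoothed = sum(window) / len(window)
    level = (smoothed - rest) / (max_hr - rest)
    return max(0.0, min(1.0, level))           # clamp to [0, 1]

stream = [62, 64, 90, 110, 124, 131, 129]      # simulated readings
print(f"exertion level: {exertion_from_heart_rate(stream):.2f}")
```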
Multiple disciplines, complex requirements
Due to its “human-in-the-loop” nature and its seamless support for user mobility, pervasive simulation brings unique requirements that go far beyond the current state of the art.
The Adaptive Simulation and Interaction project network has conducted research in various disciplines, including biomechanics, quantum calculations, human perception, structural mechanics, naturalistic brain activity, and chemical molecule structures.
Our aim is to create powerful models and algorithms that allow complex simulations to be easily adapted to precisely such systems. We develop methods for automatic distribution across different types of hardware, as well as approaches for model reduction, acceleration, and visualization in virtual and augmented reality.
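As one concrete example from the model-reduction family, here is a sketch of proper orthogonal decomposition (POD), a standard technique for compressing many high-dimensional simulation snapshots into a handful of basis vectors. The random snapshot matrix below merely stands in for real simulation output.

```python
# POD sketch: find a small basis that captures most of the snapshot data,
# then represent full states with just a few coefficients.
import numpy as np

rng = np.random.default_rng(1)
# Fake "snapshots": 200 states of a 10,000-DOF model that secretly live
# near a 5-dimensional subspace, plus a little noise.
modes = rng.normal(size=(10_000, 5))
weights = rng.normal(size=(5, 200))
snapshots = modes @ weights + 0.01 * rng.normal(size=(10_000, 200))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 10
basis = U[:, :k]                      # reduced basis: 10,000 x 10

state = snapshots[:, 0]
coeffs = basis.T @ state              # 10 numbers instead of 10,000
approx = basis @ coeffs               # lift back to full dimension
err = np.linalg.norm(state - approx) / np.linalg.norm(state)
print(f"relative reconstruction error with {k} modes: {err:.2%}")
```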