RISAP: Realtime Interactive Software for Audiovisual Performance
Current Research:
Imagine a live performance where, with a simple hand gesture, the hall begins to fill with sound and images begin to appear on a screen. The sonic and visual events seem intrinsically interconnected and occur simultaneously, as if one were the cause of the other. The artist begins to sculpt and manipulate each audiovisual event with further hand gestures, much as a performer shapes musical passages on an instrument or a painter lays down colourful brush strokes. Members of a music ensemble begin to react to and interact with these audiovisual events and, with their own gestures, create complementary musical and visual passages. The level of interactivity, reactivity, and communication between the members of the ensemble is reminiscent of the way a jazz ensemble functions, where, as one member initiates an improvised musical passage, the others can respond and adjust their own playing to follow that member's musical decisions. To make a performance of this nature possible, I propose to create RISAP (Realtime Interactive Software for Audiovisual Performance), which integrates my many years of experience as a jazz performer, digital artist/musician, and software designer.
Due to current limitations in processing speed, however, audiovisual artists must often choose between highly complex and detailed imagery, which can take days or even weeks to render, and comparatively simple visual material, which can be generated and controlled in realtime. With current advances in computer-graphics technology, it is now possible to generate and manipulate photo-realistic video in realtime using game-engine technology such as Unreal and creative coding environments such as openFrameworks. Game audio, however, has not advanced as quickly and still relies primarily on static, sample-based material. Cutting-edge research into procedural game audio and audiovisual interaction by Dr. Robert Hamilton of Stanford University [1] focuses on the bi-directional communication of data between digital games and realtime audio-synthesis programming environments such as SuperCollider: sound and music are generated interactively in realtime by mapping streams of game data, such as character motion, environmental cues, and game AI, to artistically controlled sonic events.
My work is concerned with developing new theoretical frameworks, methods, and software which integrate diverse audio, visual, and gaming software environments through the bi-directional communication of data; combine visual, auditory, and spatial modes of communication into single unified events; and unify many modes of interaction, such as gesture recognition, machine listening, image and data analysis, and the collaborative control of shared data among a networked group of performers. This will result in software and methods with emergent properties and strengths that may prove far more powerful than the sum of their individual parts. The framework, which extends communication concepts explored in multimodality theory [2], may not only enhance the range and complexity of information and emotional content we can communicate, but also affect how the media are contextualised and perceived; abstract ideas, images, and audio may become more accessible and understandable when presented together in this way. Gesture-tracking devices such as the Leap Motion, which recognises a variety of distinct hand gestures, and the Xbox Kinect, which can simultaneously track the full-body motion of multiple performers, offer a powerful means of making human interaction with digital media, whether visual elements in virtual reality and gaming applications [3] or digital audio in musical performance [4], more intuitive, expressive, and natural.
Aims:
1. Develop new theoretical frameworks to convert physical gesture and digital information into audiovisual media
2. Develop new methods to intuitively interact with audiovisual media in realtime, exploring both single-user and group-based interaction
3. Design software that enhances the level of visual and sonic detail possible during an audiovisual performance while retaining a high level of realtime control
4. Utilise this software for artistic performance and within immersive virtual reality and gaming frameworks, creating a stronger synergy between sonic and visual interaction
Objectives:
1. Integrate bi-directional communication between interactive audio, visual, and gaming environments such as SuperCollider, openFrameworks, and the Unreal Engine
2. Develop new mapping algorithms and software that transform physical gesture and various types of data into complex audiovisual results
3. Create flexible and intuitive modes of human-computer interaction using a variety of advanced gesture-sensor technologies, including Leap Motion and Xbox Kinect
4. Develop the RISAP software, enabling users to generate and manipulate highly detailed and complex audiovisual events in realtime by unifying multiple media types from a variety of simultaneously controlled software environments
5. Extend the output of the software beyond artistic performance by generating user-dependent audio and visual material within an interactive gaming environment and by generatively creating and interacting with virtual environments.
Methodology:
RISAP will be developed using an Agile Development research methodology, in which a flexible, iterative process of coding, testing, and recoding will be employed as development progresses and as new needs and goals present themselves.
[1] Establish bi-directional communication between SuperCollider, openFrameworks, and Unreal using the OSC (Open Sound Control) protocol (a minimal communication sketch follows this list).
[2] Develop the capacity to generate highly detailed realtime visuals in Unreal and openFrameworks, using the generative capabilities afforded by C++ programming within these environments (see the generative-visuals sketch after this list).
[3] Design robust audio-synthesis functions in SuperCollider that can react to data arriving from the visual environments as well as send data back to them.
[4] Use Leap Motion and Xbox Kinect sensors to track physical gestures such as subtle hand movements, and employ mapping algorithms that transform each gesture into complex audiovisual events (see the gesture-mapping sketch after this list); additional input methods will be integrated as needed.
[5] Develop a gestural performance language to control the software.
[6] Integrate wireless networking and shared data, allowing bi-directional communication between members of an ensemble running RISAP on separate computers (see the ensemble-sharing sketch after this list).
[7] Present a series of performances and workshops.
[8] Write and submit journal article(s).
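The sketch below illustrates step [1] (and the data-return path assumed by step [3]): an openFrameworks application that sends gesture data to SuperCollider over OSC and listens for analysis data coming back. It assumes sclang's default OSC port (57120), an arbitrary local reply port (12000), and placeholder address patterns (/risap/gesture, /risap/amplitude); it is a minimal sketch, not the final RISAP architecture.

// Minimal ofxOsc sketch: send gesture data to SuperCollider and react to
// analysis data sent back. Ports and address patterns are placeholders.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscSender toSC;       // openFrameworks -> SuperCollider
    ofxOscReceiver fromSC;   // SuperCollider -> openFrameworks
    float amplitude = 0.0f;  // last amplitude value reported by SuperCollider

    void setup() override {
        toSC.setup("127.0.0.1", 57120);  // sclang's default OSC port
        fromSC.setup(12000);             // arbitrary local port for replies
    }

    void update() override {
        // Forward a (placeholder) gesture value to SuperCollider.
        ofxOscMessage m;
        m.setAddress("/risap/gesture");
        m.addFloatArg(ofGetMouseX() / float(ofGetWidth()));
        toSC.sendMessage(m, false);

        // React to any data SuperCollider has sent back.
        while (fromSC.hasWaitingMessages()) {
            ofxOscMessage in;
            fromSC.getNextMessage(in);
            if (in.getAddress() == "/risap/amplitude") {
                amplitude = in.getArgAsFloat(0);
            }
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}

On the SuperCollider side, an OSCdef listening on /risap/gesture would set synth parameters, and a NetAddr pointing back at port 12000 would return analysis data; the same pattern could extend to Unreal through an OSC plugin.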
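Step [2] is sketched below with a deliberately simple example: a field of circles driven by Perlin noise whose size follows a control value (for instance the amplitude received from SuperCollider in the previous sketch). The function name and parameter ranges are illustrative assumptions; in Unreal the equivalent would be expressed through its own C++ gameplay and rendering APIs.

// Generative-visuals sketch: noise-driven circles, modulated by a control value.
#include "ofMain.h"

void drawNoiseField(float control) {
    float t = ofGetElapsedTimef();
    for (int i = 0; i < 200; ++i) {
        // Each circle wanders smoothly around the window via 2D Perlin noise.
        float x = ofNoise(i * 0.1f, t * 0.2f) * ofGetWidth();
        float y = ofNoise(i * 0.1f + 100.0f, t * 0.2f) * ofGetHeight();
        // The radius follows the incoming control value (0..1 -> 2..20 px).
        float r = ofMap(control, 0.0f, 1.0f, 2.0f, 20.0f, true);
        ofDrawCircle(x, y, r);
    }
}

In the application above, drawNoiseField(amplitude) would be called from ofApp::draw().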
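For step [4], the sketch below shows one possible mapping from a normalised hand position (0..1 on each axis, as a Leap Motion or Kinect wrapper might provide after calibration) to a combined audiovisual event. The AudiovisualEvent struct, the axis assignments, and the mapping curves are hypothetical choices for illustration, not a fixed RISAP format.

// Gesture-mapping sketch: normalised hand position -> audiovisual parameters.
#include <cmath>

struct AudiovisualEvent {
    float frequency;   // Hz, destined for SuperCollider
    float amplitude;   // 0..1, destined for SuperCollider
    float brightness;  // 0..1, destined for the visual environment
};

AudiovisualEvent mapHandPosition(float x, float y, float z) {
    AudiovisualEvent e;
    // An exponential pitch mapping feels more natural than a linear one:
    // left-to-right hand motion sweeps roughly 100 Hz to 2 kHz.
    e.frequency  = 100.0f * std::pow(20.0f, x);
    // Hand height controls loudness; hand depth controls visual brightness.
    e.amplitude  = y;
    e.brightness = 1.0f - z;
    return e;
}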
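Finally, step [6] could build on the same OSC layer. The sketch below simply forwards a message to every other RISAP machine on the network; the peer list and shared port (9000) are placeholders, and in practice the senders would be created once and reused rather than rebuilt on every call.

// Ensemble-sharing sketch: forward one OSC message to each networked peer.
#include "ofxOsc.h"
#include <string>
#include <vector>

void broadcastToEnsemble(const std::vector<std::string>& peers, const ofxOscMessage& msg) {
    for (const auto& host : peers) {
        ofxOscSender sender;
        sender.setup(host, 9000);    // assumed shared RISAP port
        ofxOscMessage copy = msg;    // local copy handed to the sender
        sender.sendMessage(copy, false);
    }
}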
Outcomes:
1. Completion, evaluation and release of RISAP as open-source software, making it available to electroacoustic composers, game designers, VJs and audiovisual artists
2. Publication of article(s) explaining the theory, methodology, and usage of the software in the Computer Music Journal and/or the SBC Journal on Interactive Systems.
3. Multiple performances and presentations at international conferences such as ICMC (International Computer Music Conference), SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques), and GDC (Game Developers Conference).
References:
[1] Hamilton, R. (2014). Procedural Music, Virtual Choreographies and Avatar Physiologies. Presentation at the Game Developers Conference (GDC), San Francisco, CA. Retrieved from http://www.gdcvault.com/play/1020752/Procedural-Music-Virtual-Choreographies
[2] Bernsen, N. O. (2008). Multimodality Theory. In D. Tzovaras (Ed.), Multimodal User Interfaces: From Signals to Interaction (pp. 5-29). Heidelberg, Germany: Springer.
[3] Fanini, B. (2014). A 3D Interface to Explore and Manipulate Multi-scale Virtual Scenes using the Leap Motion Controller. Paper presented at the ACHI 2014: The Seventh International Conference on Advances in Computer-Human Interactions. Barcelona, Spain.
[4] Vasilakos, K. (2016). An Evaluation of Digital Interfaces for Music Composition and Improvisation (Doctoral dissertation). Retrieved from http://eprints.keele.ac.uk/1606/