Giuseppe Torre - Digital Artist - https://muresearchlab.com

multimedia installations

AI Prison

Brief Description: Guarded by the OS and the MMU, a program is unable to self-determine its own memory. As things stand, strong AI is improbable, if not impossible.
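The claim can be made concrete in code. Below is a minimal POSIX C++ sketch (my addition for illustration, not part of the artwork; names and sizes are arbitrary) in which a program may only change the protection of its own memory by asking the OS, which programs the MMU on its behalf:

#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    long page = sysconf(_SC_PAGESIZE);
    // Ask the OS for one read/write page; the MMU enforces whatever we get.
    char* mem = static_cast<char*>(mmap(nullptr, page,
                    PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    mem[0] = 42;                        // allowed: the page is writable

    mprotect(mem, page, PROT_READ);     // ask the OS to lock the page down
    std::printf("value: %d\n", mem[0]); // reading is still fine
    // mem[0] = 43;                     // would raise SIGSEGV: the program
    //                                  // cannot self-determine its memory
    munmap(mem, page);
    return 0;
}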

Personal Glitches

Brief Description: This work was submitted to the 0p3nr3p0.net modular art/archive. The 0p3nr3p0.net project was developed as an open port for anyone, anywhere, to submit glitch-art works that can be represented by a URL (i.e. video, images, sound, web).

Technical Description: Creation of glitches for 'panorama' photos using an iPhone. The glitches were created by deliberately using the panorama feature in the wrong manner rather than by code hacking.

Exhibited at:

0P3NR3P0.NET (2013) - London (UK)

Entertainment does not come for free...

Brief Description: Thanks to advances in technology, a restricted number of digital artists, in line with a long-lasting tradition that runs from the Prometheus of Greek mythology to the recent Hollywood movie 'Robot and Frank', have focused their creativity on the many possible ways of evoking agency in the inanimate: the computer/robot. By evoking agency, the aim is to enhance the perceived parity and mutuality of the conversation between humans and computers. Yet a truly convincing non-human agent enabling a peer-to-peer conversation is no more than a dream, shared and offered by the artist to an audience. Following a functional approach, it appears that the real agents are only the humans. In contrast, freed from any 'intelligentia', the computer becomes an enslaved entertainment/facilitator tool for the agents' needs. This view, in line with current human-centred views in HCI, is explained here in terms of human-computer-human (HCH) interactions. In light of these considerations, I speculate that the perception of a computer's agency could be evoked with a different approach, one that reverses the roles of each element constituting the system: making the human a tool for the communication between non-human agents. In my work titled Entertainment does not come for free... I attempt to provide the dream of agency to the non-humans (i.e. the computers) by creating a speculative computer-human-computer interaction.

Technical Description: Developed in Objective-C and openFrameworks.

Exhibited at: CSIS, University of Limerick (2013), Limerick Lifelong Learning Event 2013 - Faber Studio - Limerick

Flies

Flies - gone wild

Brief Description: Like flies in a cage, our thoughts move randomly. A neurone fires. A multitude of neurones fire and create thoughts, consciousness. In doing so, the quadrants (i.e. neurones) act in parallel with no clear distinction between conscious and unconscious mental states. The artwork is thus an attempt to artistically visualise Daniel Dennett's Multiple Drafts Model (MDM) of consciousness.

Technical Description: This work investigates the aesthetics of pointers within the C++ toolkit openFrameworks. The work is based on the Algo2012 class examples by Zach Lieberman. The scope is to transform simple coding examples into potential, fully developed artworks.
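To give a flavour of the pointer aesthetics mentioned above, here is a minimal plain C++ sketch (hypothetical, merely in the spirit of the Algo2012 examples, not Lieberman's code): "neurones" wired to random peers by raw pointers, firing at random and leaking activation to their pointees, loosely echoing the parallel drafts of the MDM:

#include <vector>
#include <cstdlib>

struct Neurone {
    float activation = 0.0f;
    Neurone* peer = nullptr;    // pointer aesthetics: identity by address
};

int main() {
    std::vector<Neurone> brain(64);
    for (auto& n : brain)       // wire each cell to a random peer
        n.peer = &brain[std::rand() % brain.size()];

    for (int frame = 0; frame < 1000; ++frame)   // the "draw loop"
        for (auto& n : brain) {
            if (std::rand() % 100 == 0) n.activation = 1.0f;  // random firing
            n.peer->activation += 0.1f * n.activation;        // leak to peer
            n.activation *= 0.95f;                            // decay
        }
    return 0;
}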

Exhibited at: Limerick Lifelong Learning Event - Faber Studio - Limerick (2013)

Pollen

Giuseppe Torre: Concept / Art Director

Mark O'Leary: Graphic Designer - openFrameworks Programmer

Brian Tuohy: Network - Max/Msp Programmer

Danilo Tumminello: Logo

Brief Description:

POLLEN is an interactive 3D audio/visual installation for any number of computers connected to a network. Specifically designed for large computer labs, it aims to regenerate those spaces through the creation of a fully immersive multimedia art installation.

Technical Description:

3D World

A 3D physics emulator library has been integrated into the 3D virtual world, enabling the pollen to collide and fly/bounce around freely. The four delimiting walls are fitted with narrow slits that let the pollen fly/bounce through to the adjacent computers (left/right/front/behind).
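A minimal sketch of the wall/slit logic (my illustration, not the installation's code: flattened to 2D, with assumed room and slit sizes; the real work is 3D and uses a physics library): a grain bounces off a wall unless its crossing point falls within the slit, in which case it is handed to the neighbouring computer.

struct Pollen { float x, y, vx, vy; };

const float ROOM = 100.0f;                      // assumed room size
const float SLIT_LO = 45.0f, SLIT_HI = 55.0f;   // assumed slit extent

// Advance one grain; returns true when it escapes through a slit
// (and should be sent over the network to the adjacent computer).
bool stepAndCheckEscape(Pollen& p, float dt) {
    p.x += p.vx * dt;
    p.y += p.vy * dt;
    if (p.x < 0.0f || p.x > ROOM) {
        if (p.y > SLIT_LO && p.y < SLIT_HI) return true;  // left/right neighbour
        p.vx = -p.vx;                                     // bounce off the wall
    }
    if (p.y < 0.0f || p.y > ROOM) {
        if (p.x > SLIT_LO && p.x < SLIT_HI) return true;  // front/back neighbour
        p.vy = -p.vy;
    }
    return false;
}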

Interactive

When a user moves or passes in front of a computer, the camera detects the movement and triggers a small "earthquake" in the virtual environment, enabling the pollen to move freely as if lifted by the wind.
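A sketch of how such camera-driven triggering is commonly done (the installation's own detector may differ; thresholds are assumed): frame differencing yields a motion amount, and past a threshold every grain receives a random impulse.

#include <cstdlib>
#include <vector>

struct Pollen { float x, y, vx, vy; };   // as in the previous sketch

// Mean absolute per-pixel change between two greyscale camera frames.
float motionAmount(const std::vector<unsigned char>& prev,
                   const std::vector<unsigned char>& curr) {
    long diff = 0;
    for (size_t i = 0; i < curr.size(); ++i)
        diff += std::abs(int(curr[i]) - int(prev[i]));
    return float(diff) / curr.size();    // 0..255
}

// Past an assumed threshold, every grain gets a random impulse ("earthquake").
void maybeQuake(float motion, std::vector<Pollen>& grains) {
    const float THRESHOLD = 8.0f;        // assumed sensitivity
    if (motion < THRESHOLD) return;
    for (auto& g : grains) {
        g.vx += (std::rand() / float(RAND_MAX) - 0.5f) * motion;
        g.vy += (std::rand() / float(RAND_MAX) - 0.5f) * motion;
    }
}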

Multichannel Audio

Each computer, connected to its own speaker placed right next to it, triggers an algorithmically generated sound when receiving one or more pollen grains from its neighbours. The placement of the computers/speakers creates the fully immersive 3D audio setup. Thus people can walk into the sound!

Presented at:

University of Limerick - Digital Media & Arts Research Centre - 11th December 2009, Università di Palermo - Facoltà di Scienze Politiche - Accademia di Belle Arti di Palermo - 17th-19th March 2010

Voodoo Bodies

Photographer: Dorota Konczewska

Programmer: Giuseppe Torre

Brief Description: Voodoo Bodies is an interactive photography exhibition. The exhibition space was fitted with cameras that track the audience's movements; this data is then fed to music-generation software. That is, as the audience keeps moving, the generated music grows in complexity.

Technical Description: The software was developed in EyesWeb, an open-source toolkit that enables image tracking and manipulation. For the installation, the software calculated the amount of movement in the area delimited by the photo stands. This data was then sent to Max/Msp for the sound-generation engine and the algorithmically generated music.
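The movement-to-complexity mapping might be sketched as follows (a hypothetical C++ reduction with invented names and ranges; the actual piece used EyesWeb and Max/Msp): the tracked movement is smoothed and scaled into density parameters for a music generator.

#include <algorithm>

struct MusicParams {
    float notesPerSecond;   // rhythmic density of the generator
    int   voices;           // simultaneous melodic lines
};

// 'movement' is the tracker's 0..1 activity level; 'smoothed' is kept
// between calls so the music reacts gradually rather than jittering.
MusicParams mapMovement(float movement, float& smoothed) {
    smoothed += 0.05f * (movement - smoothed);         // slow envelope follower
    MusicParams p;
    p.notesPerSecond = 0.5f + 7.5f * smoothed;         // 0.5 .. 8 notes per second
    p.voices = std::min(1 + int(smoothed * 4.0f), 5);  // 1 .. 5 voices
    return p;
}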

Presented at:

Mediated Bodies - Limerick (IRL) - 2007

live performances
Twister

Twister (a prototype digital instrument) was developed by Nicholas Ward. Composition and performance by Giuseppe Torre.

Brief Description: The performance was designed so that a sequence of nuanced gestural performative actions could be both visually and sonically legible to audience members. The use of real-world metaphors that link the visible gesture, constrained by the physical limitations of the instrument, to the audible sonic output via a complex hierarchy of mapping strategies has been found to be a useful tool that can enhance both the performer's and the audience's experience of the live performance.

Technical Description: The Twister interface is a prototype DMI that was initially developed using a movement-based DMI design process. The intention here was to explore how movement, as described using Laban Movement Analysis, might explicitly drive a DMI design process. The interface is deliberately limited in the sensing systems that are employed (accelerometer, rotation, two buttons). The work explores this limitation whilst focusing on the range of gesture that the device supports and how this range might be mapped to sonic outcomes. In that regard, the presented work shows how we moved from a DMI design based on movement analysis to exploring how the movement constraints imposed by the newly created DMI have influenced the design of the performance and the establishment of coherent audiovisual metaphors.

Performed at:

Digital Media & Arts Research Centre - University of Limerick - www.dmarc.ie

Impromptus#1

Free-form improvisation for the video work HEAVEN and EARTH MAGIC by Harry Smith

Marco Cappelli: Guitar and EFX

Giuseppe Torre: Live Electronics

Brief Description: Duo free-form improvisation for the video work HEAVEN and EARTH MAGIC by Harry Smith. For this live performance I had the honour of playing with internationally renowned avant-garde guitar player Marco Cappelli. The performance evolves as a free-form improvisation created extempore on the video artwork of New York-based Grammy Award artist Harry Smith. For this performance I make use of various digital technologies, both software and hardware (MIDI guitar and Kinect camera), that allow me to capture and re-synthesise live input from the guitar player in real time.

Technical Description: Software used: Ableton Live, PD, Max. Hardware: MIDI Guitar, iPad, Kinect Camera.

Performed at:

Dr. John's - Limerick - 14th of November 2012, Teatro Garibaldi Palermo (ITA) - 18th of December 2012

Agorá

(PLEASE USE HEADPHONES! - Audio is in Binaural format)
(If you would like to receive the surround version please get in touch and I will ship it to you)

Programme Notes: Agorá (from the Greek word meaning "gathering space"/"public square") is a live music performance that investigates the performer's musical memories, which are revealed through the performer's hand gestures. This process is enabled via a newly developed data glove named Pointing-at that allows the performer to browse, shape and re-create his musical memories, live and algorithmically, in a quadraphonic surround setup.

There are in our existence spots of time,
That with distinct pre-eminence retain
A renovating virtue, whence-depressed
By false opinion and contentious thought,
Or aught of heavier or more deadly weight,
In trivial occupations, and the round
Of ordinary intercourse-our minds
Are nourished and invisibly repaired;
A virtue, by which pleasure is enhanced,
That penetrates, enables us to mount,
When high, more high, and lifts us up when fallen.

(Wordsworth 1850, verses 208-218)

Technical Description: The Pointing-at data glove is the result of a collaborative research project between the Tyndall National Institute of Research in Cork and the IDC - DMARC at the University of Limerick. This performance investigates the use of the data glove for the purpose of a live music performance in which gestures are used to create and manipulate sounds in real time. Special attention was paid to the mapping algorithms that link gestures and sounds so as to facilitate the audience's understanding of the cause-effect mechanisms.

Performed at:

10th International Symposium on Computer Music Multidisciplinary Research (CMMR) Sound, Music and Motion - Marseille - France, NOVA - LyricFM Irish National Radio (March 16th 2014), NOVA - LyricFM Irish National Radio (October 13th 2013), CMC Concert - DIT Conservatoire of Music - Dublin (2012), Ormston House - Limerick (2012), Border Line Club - Palermo - Italy (Invited Performance) (2012)

This work has recently been mentioned, analysed and referenced in the following journal article: Fischman, R. (2013), 'A Manual Action Expressive System (MAES)', Organised Sound, 18(3), 328-245.

betav-#1

Performers: Leon McCarthy - live visuals and live electronics, Giuseppe Torre - Pointing-at glove and live electronics

Brief Description:

The betAV audio-visual performance aims to make audiences aware of the impact that industrial fishing practices are having on the seas and our world at large. Taking its lead from the 2009 documentary 'The End of the Line', the performance repurposes some of its video content to present a new live audio-visual interpretation. Motion graphics illustrate data on the yearly catches of pelagic fish in the Mediterranean. Further animations present data on the health benefits of eating fish and, furthermore, steps we can take to maintain fish stocks into the future.

In this performance I contribute the music, which I create live with my Pointing-at data glove as well as many other gizmos.

(part I) poignant / (part II) relentless / (part III) relieve

Technical Description: A duo performance for Pointing-at glove, live visuals and a quadraphonic audio setup.

Performed at:

Live iXem 2011 - December 8th 2011 - Tonnara Florio di Favignana - Isole Egadi (Italy)

Mani

Performers: Robert Sazdov - laptop and live electronics, Giuseppe Torre - Pointing-at data glove

Brief Description: A duo performance for Pointing-at glove and live electronics. The theme is concerned with social inequalities. Mani means hands in Italian, but it also sounds like the word 'money' in English. Our future is in a few people's hands. Do you feel safe?

Technical Description: This performance investigates the use of hand gestures for the creation and manipulation of sounds in real time in a live performance. The gesture vocabulary is built according to theatrical features that reconnect to the underlying storyboard presented in the performance. This is done so that the cause-effect links between gestures and sounds can be re-established within a context that helps to classify each gesture in its metaphorical context.

Performed at:

New Interfaces for Musical Expression Conference 2010 - Sydney - Australia

Molitva

Performers: Giuseppe Torre - Pointing-at glove, Robert Sazdov - live electronics, Dorota Konczewska - voice

Brief Description: Molitva is a composition for voice, live electronics and the gestural-control Pointing-at glove. The main music theme is a revisiting of traditional Macedonian chants, here mixed with electronics.

Technical Description: The performance investigates the use of the Pointing-at data glove in a live performance in which multiple players/performers are involved. In order to enhance the cause-effect links between gestures and sounds, each performer was allocated a specific range of sonic characteristics to work with. In doing this, the data glove worked exclusively with granular sounds and their live manipulation by hand gestures.

Performed at:

Re:New Festival - Copenhagen (Denmark) 2009, New Interfaces for Musical Expression 2009 - Pittsburgh (USA), SOUNDINGS - Limerick (IRL) 2009

A/V compositions
Walk To The Pub

Music: Seamus Fogarty & Giuseppe Torre

M A Y D A Y

Music: Seamus Fogarty and Giuseppe Torre

Video: Giuseppe Torre

Brief Description:

what goes up must come down... eventually.

Mayday began its life as an acoustic piece of music only to then modulate and transform itself into an acousmatic quasi-psychedelic a/v excursion. The compositional process involved taking various rich acoustic sources, which are then sequenced and manipulated in an order that creates tension, alluding to some destination that is never quite reached. The video speculates accordingly. While advances in Digital Arts have provided us with many new and exciting sounds and video tools, Mayday is a reminder of how powerful the combination of organic acoustic and real-time video sources with today's technology can be.

Follow the rising pitch and do not leave it!

Technical Description: Audio is generated via spectral and temporal processing of acoustic instrumental sources. Video is generated via a feedback-loop system activated by multiple cameras. Light sources are created in real time and represent the input element of the feedback-loop system created for the performance.

Performed at:

NOVA - LyricFM Irish National Radio (February 2nd 2014), Sonoimagenes 2010, held at the Recoleta Cultural Center and at the UNLa University Campus, Buenos Aires - Argentina (19-21 October 2010), B-LINK FESTIVAL of NEW COMMUNICATIONS, Beograd - Serbia (November 16th-20th 2011), AUROPOLIS & UK PAROBROD, Beograd - Serbia (25th-26th February 2011), Iridescent World, Ljubljana - Slovenia (25th January 2011), TWEAK-Soundings Festival, Limerick - Ireland (23 September 2010), VISIONS FROM THE FUTURE second edition 2010, 'Iridescent Worlds', Milan - Italy (10 June 2010), VISIONS FROM THE FUTURE second edition 2010, 'Iridescent Worlds', Turin - Italy (28 May 2010)

Nina project

Dorota Konczewska: voice

Filippo Fanó: piano

Alessandro Paternesi: drums

Joe O’Callaghan: guitar

Peter Hanagan: double bass

Giuseppe Torre: electronics

Brief Description: The project revisited and rearranged famous songs from the repertoire of the legendary singer Nina Simone. The sextet was made up of Irish and international jazz players. Some of the players today enjoy great artistic careers, and one of them, Paternesi, is considered to be one of the most talented young drummers in Italy and Europe. Playing with this band was a highly valuable experience.

Technical Description: A jazz quintet project with the addition of electronics. In contrast with much of the relevant literature, where the electronics are designed as support and guide for the other performers (i.e. a background track), here the electronics are designed to be the sixth instrument, with solos and scored parts. Thus, the attention focused on the range of acoustic frequencies not covered by the other instruments (i.e. extremely low or extremely high) and on the use of glitch-type acoustic sources.

Trilogy

Brief Description: A music composition based on a real story documented in a book (unfortunately I cannot remember the title and I have lost the book; if you know the book, please get in touch). The story is about a young girl kidnapped at a very young age and forced into prostitution. For years she lived in a small room in a small house at the border between two states, close to a railroad barrier. The composition enacts the girl playing with the only sounds she hears from her room as a way to escape her horrible reality. The music was composed for a dance show.

E-Flute

Giancarlo Scarvagliari: composer

Emilio Galante: flute

Giuseppe Torre: Max software

Brief Description: A composition for flute and live electronics, composed for and performed by the internationally renowned flute player and educator Emilio Galante. The composer is the great Maestro Giancarlo Scarvaglieri!

Technical Description: For this work I developed the software according to the composer's instructions and the performer's needs.

Performed at:

FestivArt - Piccolo Teatro Unical - Università della Calabria - Italy (2008), Mondi Sonori - Conservatorio di Musica di Trento - Italy (2008), Sonata Island Festival - Italy (2008)

Air in Slow Motion

Brief Description: An imaginary journey of the air inside a trumpet in Bb. This piece of music is based on the concept of 'sound design'. The main aspect is the fast change rate of the f-table. Thus, when you render (perform in real-time mode), you will enjoy a little 'movie' in Csound's f-table windows! Obviously, the air travels in slow motion.

Technical Description: A Csound composition which explores the possibility of making waveform movies from the spectral analyser of the Csound interface.

Performed at:

EAR Sound Electric 2005 - University of Maynooth (Ireland)

E²-Jazz

Valentina Tumminello: voice

Giuseppe Torre: electronic re-arrangement

Brief Description:

This is one of my first and earliest attempts at computer music. I decided to approach jazz standards and re-arrange them in an electronic fashion.

Performed at:

Estate Terrasini 2005

softwares
ML-AL for Ableton Live

An M4L module for Ableton Live which enables basic gesture tracking and mapping. The module is based on the gesture-follower patch of the IRCAM FTM library and makes use of the native capabilities of the Live API. The mapping between the gestures and the audible output is designed within the module and enables the simultaneous control of macro features of the Ableton interface such as clip selection and the volume and panning of the track(s).

Test it with an iPad or iPhone app that can send OSC messages.

(Step 1) download, extract and move to the Cycling74 folder [THIS].

(Step 2) download the [M4L module] and place it into ~/Library/Application Support/Ableton/Library/Presets/MIDI Effects.

(Step 3) you can use it immediately with these TouchOSC patches: [iPhone]



Download a [PDF] of the paper below, submitted to the 50th AISB Symposium.

Introduction

Machine Learning (ML) toolkits such as Wekinator [11], the SARC Eyesweb Catalog [3], the IRCAM MnM toolbox [4] and OpenCV [8] are among the most well-known toolkits available to artists and engineers [2]. Most of these tools have Open Sound Control (OSC) [9] capabilities which allow them to communicate with third-party software that creates and manages the audio/visual elements of the performance. The process connecting gestures to audio-visual outputs is generally dealt with via ad hoc algorithms to suit the performance or the performer's needs. The module described in this paper, ML-AL, is based on the IRCAM MnM toolbox and exploits the M4L [6] capabilities to directly interface gesture recognition algorithms with the audio/visual elements within the software Ableton Live [5]. In that regard the module does not present a new algorithm or a novel approach in Machine Learning but rather implements existing technologies within one popular piece of music software. However, the design of the module offers an approach to the use of gesture-tracking algorithms for the discrete control of system parameters rather than a continuous sound-generating mechanism and/or continuous controllers.

ML-AL Features

The ML-AL module has three sections, each of which is dedicated to a specific task. These are visually separated by the blue vertical lines as depicted in Fig. 1. The sections are:

1. Data-Input: retrieves data from a connected device via OSC.

2. Gesture Recognition: performs gesture recognition analysis on a user-built gesture dictionary.

3. Mapping: links the result of the performed gesture analysis to a series of Ableton Live native functions.

Data-Input

The data-input section receives from any device that can transmit according to the OSC protocol. In this section the user can decide through which port number the communication is to be established. The data needs to be formatted according to the following OSC addresses:

1. /naccxyz: this is the data which will be read by the gesture recognition software for the creation and subsequent analysis of the gestures. It needs to be a list of three numbers (e.g. the acceleration readings from a 3-axis accelerometer).

2. /n1n/push2: the sender device must have a push button which works as a gate. This push button serves the purpose of clearly marking the beginning and end points of a gesture by letting the data through when depressed and stopping the data when released.
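To make the input convention concrete, here is a small hypothetical C++ sketch of a receiver honouring the two addresses above (the actual module is a Max for Live patch, not C++): /naccxyz frames are recorded only while the push gate is open, so each press/release pair delimits exactly one gesture.

#include <array>
#include <iostream>
#include <string>
#include <vector>

struct GestureRecorder {
    bool gateOpen = false;
    std::vector<std::array<float, 3>> frames;  // samples of the current gesture

    void onOsc(const std::string& address, const std::vector<float>& args) {
        if (address == "/naccxyz" && gateOpen && args.size() == 3) {
            frames.push_back({args[0], args[1], args[2]});  // accel x, y, z
        } else if (address == "/n1n/push2" && args.size() == 1) {
            if (args[0] > 0.5f) {       // button depressed: gesture starts
                frames.clear();
                gateOpen = true;
            } else {                    // button released: gesture ends
                gateOpen = false;
                std::cout << "gesture captured: " << frames.size() << " frames\n";
            }
        }
    }
};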

Gesture Recognition

The gesture recognition section is based on the gesture-follower example patch offered in the IRCAM MnM toolkit. This toolkit was chosen because it is fully compatible with the M4L package available in Ableton Live. The patch proved to be reliable, ready-made software that could successfully implement gesture recognition. Furthermore, it is written in Max, which allows for easy manipulation and the addition of user elements. It is important to note that the patch does not make explicit use of the time-progression features that it enables [1]. Rather, the patch is used as a gesture classifier. The author acknowledges the limitations that such a system imposes. This choice was dictated by privileging artistic needs over technical possibilities. In particular, machine learning was thought, at least in this early version of the module, to be useful for the discrete control of macro elements of a live performance, such as the triggering of pre-recorded loops, rather than acting upon an eventual sound-generating mechanism or continuous controllers. This is a strategy that the author has found productive in several other works [10].

Mapping

The data used for the control of the Ableton interface is the likeliest number value retrieved by the gesture analysis routine performed by the gesture recognition section and made available for mapping each time the push button is released. The ML-AL module can store up to eighty presets. These presets can store the start/stop mode of a clip, and the volume and panning of up to eight different audio or MIDI Ableton tracks. The likeliest number value is then used to recall one of these presets on-the-fly.

Mode of Use

The presented module has been tested and interfaced with an iPad running a custom patch made with TouchOSC and complying with the specifications outlined in Section 2.1. The mode of operation of the module consists of the following steps:

1. Load the module onto a MIDI track.

2. Input the OSC port number over which to establish the connection (data-input section).

3. Enable 'Learn' mode in the gesture recognition section.

4. Enable 'start' (toggle) and perform a gesture.

5. Disable 'start' (toggle).

6. Repeat steps 3 and 4 for as many gestures as you require. Make sure that each gesture corresponds to a different phrase number.

7. Disable 'Learn' mode.

Before using the module in performance mode, it is required to store some presets (preferably equal in number to the gestures performed during the training steps):

1. Create up to eight audio or MIDI tracks in Ableton Live.

2. Import or create as many clips as desired in each track.

3. Create a combination of playing clips, volume and panning settings and store these to a preset number (shift+click on one of the circles in the ML-AL module preset object).

4. Repeat step 3.

Now the module is ready to be used in performance mode. Enable 'start' in the Gesture Recognition section and play a gesture. The gesture number performed, if successfully recognised, will recall the equivalent preset number.
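Schematically, the mapping stage reduces to a table lookup. The following hypothetical C++ fragment (illustration only; the real module does this inside Max via the Live API) recalls a stored preset of clip/volume/pan states from the likeliest gesture index reported on button release:

#include <array>
#include <vector>

struct TrackState { bool clipPlaying; float volume; float pan; };
using Preset = std::array<TrackState, 8>;   // up to eight Ableton tracks

struct Mapper {
    std::vector<Preset> presets;            // up to eighty stored presets

    // Called with the classifier's likeliest gesture index on button release.
    void onGestureRecognised(int likeliest, Preset& liveTracks) const {
        if (likeliest < 0 || likeliest >= int(presets.size())) return;
        liveTracks = presets[likeliest];    // recall clip/volume/pan states
    }
};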

Performance Scenario

The ML-AL module was preliminarily tested by developing a short algorithmic process controlled by gestures performed using an iPad. Four audio tracks were added to the Ableton project and each was filled with two clips with distinct sound properties (high drones, low drones, granular, irregular percussive patterns). This configuration was thought of as a 4-bit resolution system (four tracks by two clips, with one clip per track always playing), giving a total of sixteen possible combinations of playing clips stored as presets. The ML-AL module was trained with sixteen different gestures, each of which was mapped to a preset number. At performance time, the module's response was fast and with a good rate of accuracy in recognising gestures. A drawback of the system is the effort required by the user to learn sixteen different gestures. However, the quality of a live performance is not necessarily measured by the quantity of gestures performable or performed. The module's capabilities, in conjunction with more traditional continuous controllers (faders) for the manipulation of audio effects, already offer a good sound palette for performance purposes.

Conclusion

This paper has presented and described the features of a new M4L module for Ableton Live which enables basic gesture recognition and mapping. The module, based on the IRCAM MnM toolkit, offers simple and easy-to-use mapping of recognised gestures to basic control parameters available in Ableton Live, such as clip selection and the volume and panning settings of up to eight tracks. The module makes use of a highly efficient gesture recognition algorithm working on a non-continuous classification method. The mapping algorithm enables quick recall of Ableton presets so as, for example, to control the macro elements of a performance such as scenes and/or a selected combination of clips. The perceived latency during preliminary testing was found to be negligible, thus making the module suitable for live performances. ML-AL is freely available at [7].

References

[1] F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guedy, and N. Rasamimanana, 'Continuous realtime gesture following and recognition', in Gesture in Embodied Communication and Human-Computer Interaction, eds. Stefan Kopp and Ipke Wachsmuth, volume 5934 of Lecture Notes in Computer Science, 73-84, Springer Berlin Heidelberg, (2010).

[2] B. Caramiaux and A. Tanaka, 'Machine Learning of Musical Gestures', in Proceedings of the New Interfaces for Musical Expression Conference, NIME'13, (May 2013, KAIST, Daejeon, Korea).

[3] Eyesweb Catalog. Available at: http://www.somasa.qub.ac.uk/ngillian/sec.html [accessed 27th of January 2014].

[4] Ircam FTM. Available at: http://ftm.ircam.fr/ [accessed 27th of January 2014].

[5] Ableton Live. Available at: https://www.ableton.com/ [accessed 27th of January 2014].

[6] Max for Live. Available at: https://www.ableton.com/en/live/maxforlive/ [accessed 27th of January 2014].

[7] ML-AL. Available at: http://muresearchlab.com/softwares/mlal/ [accessed 27th of January 2014].

[8] OpenCV. Available at: http://opencv.org/ [accessed 27th of January 2014].

[9] OpenSoundControl. Available at: http://opensoundcontrol.org/ [accessed 27th of January 2014].

[10] G. Torre, The Design of a New Musical Glove: A Live Performance Approach, (PhD Thesis) University of Limerick, 2013.

[11] The Wekinator. Available at: http://wekinator.cs.princeton.edu/ [accessed 27th of January 2014].

The Design of a New Musical Glove: A Live Performance Approach

download full dissertation here.

ABSTRACT
A live performance using novel technologies is a highly complex system in which anthropological, sociological, psychological, musicological and technical issues are heavily involved. The New Interfaces for Musical Expression (NIME) community has presented a new approach to music performance, often heavily technologically mediated, while producing a great number of new digital instruments since 2001. Within this broad research field, important issues such as hardware interface design, mapping strategies, skilled performance and compositional approaches have been considered. Many NIME practitioners have explored the development of 'gestural controllers' in the hope of achieving natural and intimate interaction while also designing interactions between the performer's gesture and sound that are clear from an audience perspective. This thesis expands on this notion through the consideration of the possibilities for enhancing the audience's engagement with, and understanding of, the underlying structures and mechanics of the live performance.
To this end, a new data glove named Pointing-at was developed. A number of live performances in which the data glove is used are presented and discussed. The analysis of both the theoretical and practical elements of the research formed the basis for the development of an approach to the design of nuanced gestural performative actions that are both visually and sonically legible to audience members. In that regard, the use of metaphors that are coherent with the theme of the performance has been found to be a useful tool that can enhance both the performer's and the audience's experience of the live performance.

AHRS Max Library

The AHRS Library (Attitude Heading Reference System) is a set of Max externals that allows you to perform a series of basic calculations for 3D/4D vectorial math used in aerodynamics.

If you are using a three-axis accelerometer and a three-axis magnetometer, check out the "ahrs_triad" object, which enables you to find the orientation of your sensor cluster with respect to Earth-fixed coordinates.
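For reference, the classic TRIAD construction that an object like "ahrs_triad" is built around can be sketched as follows (illustrative C++ only, not the external's source): the two body-frame measurements (gravity and magnetic field) and their Earth-frame references are turned into orthonormal triads whose outer-product sum gives the rotation matrix.

#include <cmath>

struct Vec3 { double v[3]; };

static Vec3 normalize(Vec3 a) {
    double n = std::sqrt(a.v[0]*a.v[0] + a.v[1]*a.v[1] + a.v[2]*a.v[2]);
    return {{a.v[0]/n, a.v[1]/n, a.v[2]/n}};
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return {{a.v[1]*b.v[2] - a.v[2]*b.v[1],
             a.v[2]*b.v[0] - a.v[0]*b.v[2],
             a.v[0]*b.v[1] - a.v[1]*b.v[0]}};
}

// TRIAD: R maps body coordinates to Earth-fixed coordinates, R = Me * Mb^T,
// where the columns of Me and Mb are the triads built from the two vectors.
void triad(Vec3 accB, Vec3 magB, Vec3 accE, Vec3 magE, double R[3][3]) {
    Vec3 b1 = normalize(accB),            e1 = normalize(accE);
    Vec3 b2 = normalize(cross(b1, magB)), e2 = normalize(cross(e1, magE));
    Vec3 b3 = cross(b1, b2),              e3 = cross(e1, e2);
    Vec3 B[3] = {b1, b2, b3}, E[3] = {e1, e2, e3};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = E[0].v[i]*B[0].v[j] + E[1].v[i]*B[1].v[j]
                    + E[2].v[i]*B[2].v[j];
}

The usual caveat of this class of methods applies: the accelerometer only measures gravity reliably while the sensor cluster is not accelerating.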

download library here.
download source code here.

LaTeX UL Thesis Template

download Bachelor Thesis template here.


download Master Thesis template here.


download PhD Thesis template here.




SUPERMIDI v.01

SUPERMIDI is a teaching tool for MIDI which I have developed for my students.

download Standalone Application (MAC OS X) here.


download Max patches here.




Publications

Books

Torre, G. (2008) Sensor Tracking Technology and the Wearable Wireless: Some Preliminary Developments, India, Research Signpost (ISBN 978-81-308-0217-6). [HERE]

Conference Proceedings

McGuire, T. & Torre, G. (2014). A Genetically Generated Drone A/V Composition Using Video Analysis as a ‘Disturbance Factor’ to the Fitness Function, 3rd International Workshop on Musical Metacreation (MUME 2014), held at the tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'14)

Ward, N. and Torre, G. (2014) 'Constraining Movement as a Basis for DMI Design and Performance', NIME '14: Proceedings of the Conference on New Interfaces for Musical Expression, 30 June - 3 July 2014, Goldsmiths, University of London, London, UK

Torre, G. (2014), 'Machine Learning with Ableton Live', AISB 2014 Convention: The Society for the Study of Artificial Intelligence and Simulation of Behaviour, Goldsmiths, University of London, 1-4 April 2014, London, UK. [HERE]

Barenca, A. and Torre, G. (2011) 'The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing', NIME '11: Proceedings of the Conference on New Interfaces for Musical Expression, 30 May - 1 June 2011, Oslo, Norway, 232-235 [HERE]

Torre, G., O'Leary, M. and Tuohy, B. (2010) 'A Multimedia Interactive Network Installation', NIME '10: Proceedings of the Conference on New Interfaces for Musical Expression, 15-18th June 2010, Sydney, Australia, 103-106 [HERE]

Torre, G., Torres, J. and Fernstrom, M. (2008), 'The Development of Motion Tracking Algorithms for Low Cost Inertial Measurement Units', in Camurri, A. and Gualtiero, V., eds., NIME '08: Proceedings of the Conference on New Interfaces for Musical Expression, Genova, June 5-7, 2008, Genova: InfoMus, Casa Paganini and Università degli Studi di Genova, 375-376. [HERE]

O'Flynn, B., Torre, G., Fernstrom, M., Winkler, T., Lynch, A., Barton, J., Angove, P. and O'Mathuna, C. (2007) 'Celeritas - A Wearable Sensor System for Interactive Digital Dance Theatre', BSN 2007 - 4th International Workshop on Wearable and Implantable Body Sensor Networks. [HERE]

Torre, G., Fernstrom, M., O'Flynn, B. and Angove, P. (2007), 'Celeritas: Wearable Wireless System', in Singer, E. and Parkinson, C., eds., NIME '07: Proceedings of the Conference on New Interfaces for Musical Expression, New York, June 6-10, 2007, New York: ACM, 205-208. [HERE]

Brucato, J. and Torre, G. (2007) 'E-SpeakS (Electronic Speaking System): A User-Centered and Scenario-Based Design Approach to Develop Interactive VUIs for Home Automation', 'HUMAN-LIKE INTERACTION CONTEST' DESIGN COMPETITION 2007 (Milan, Italy) (winner, 2nd prize)

Technical Reports

Torre, G., Fernstrom, M. and Cahill, M. (2007) An Accelerometer and Gyroscope Based Sensor System for Dance Performance, Technical Report: UL-CSIS-07-2 [HERE]

Torre, G. et al. (2010-2011) A Practical Introduction to Communication Protocols in Digital Arts, Report for NAIRTL Grant Funding.

Video Documentaries

Torre, G. (2013) Pointing-at Data Glove: A Documentary (duration ca. 1 hour), developed as part of the PhD. [HERE]

Music CDs

Fantasia - Liberamente Ispirata agli Estudios Sencillos di Leo Brouwer - Teatro del Sole Label CDSS 38384, director: M. Cappelli [HERE]

Invited Talks

Machine Learning within Ableton Live - 50th AISB Symposium - Goldsmiths University of London - 1st of November 2014

O' Media Centre International Lectures (European centres for research on sound and new media) - 3rd of November 2011

Facoltà di Musicologia - Università di Pavia, Cremona campus - 4th of November 2011

Hosted seminars

Dublin - Data Transmission and Mapping Strategies for Live Performance - Irish Sound Science and Technology Association (ISSTA) - 26th of August 2013

Palermo - Video Mapping Techniques - Live iXem - 10th of December 2011

About

Hi, I am a multimedia artist, software engineer, researcher and educator. My artwork, spanning live electronic performances, a/v compositions and multimedia installations, has been performed throughout Europe, the USA, South America and Australia. Venues and events include: EAR Festival - Dublin (2005), Daghdha - Limerick (2007), Sonata Island Festival - Milan (2008), Mondi Sonori - Trento (2008), Festivart - Cosenza (2008), Soundings - Limerick (2009, 2010), NIME - Pittsburgh (2009) and Sydney (2010), Re:NEW'09 - Copenhagen (2009), Festival Sonoimagenes - Buenos Aires (2010), Vision from the Future - Turin (2010), University of Palermo and Accademia di Belle Arti - Palermo (2010), New Media Festival at UK 'PAROBROD' - Beograd (2011), Live!Xem Festival - Palermo (2011), Ormston House - Limerick (2012), Teatro Garibaldi Aperto - Palermo (2012).

I am currently Lecturer and Course Director for the BSc in Music, Multimedia and Performance Technology at the University of Limerick, where I also teach on the Master in Music Technology and the Master in Interactive Media. My research interests focus on the development of live a/v performances that include the use of newly developed digital instruments. As an artist and technologist, I am also interested in the use of new technologies for the development of multimedia artworks that open a debate on contemporary socio-economic and political issues.

I am currently a member of the New Interfaces for Musical Expression (NIME) community and Treasurer of the Irish Sound, Science and Technology Association (ISSTA).

I can be contacted at: torrejuseppe@gmail.com.