MusIC Seminar Series

Coming Events

Nikki Moran: Title TBA

Tuesday, 19 January 2010

Location TBA

Abstract

TBA

Matthias Rath: Research from Technische Universität Berlin

Tuesday, 24 November 2009

Lecture Room A, Alison House

Abstract

Research topics:
- sound synthesis and processing
- sound and interaction design
- psychophysics, multimodal interaction

Jamie Allen: Physical Relationships In Art Practice

Tuesday, 10 November 2009

Common Room, Alison House

Abstract

"I am interested in technologies that suggest ways of reinventing traditional relationships to art and performance. My work in digital media, music, performance and public art seeks to create physical relationships between people and with media."

Past Events

David Moffat: "Some emotional and cognitive aspects of background music"

Tuesday 2 June 2009 at 1715, Common Room, Alison House

Abstract

The new field of "affective computing" recognizes that emotion and other kinds of affect are important to consider in designing computer interfaces of many kinds, and this is especially true for entertainment media such as video games and films. It is not easy to see how to make systems that measure or evoke particular emotional states in users, but this talk will show the results of some work that uses physiological devices to record data and correlate it with known or expected emotional reactions. The effects of background music can be seen in the data, but not necessarily understood.

Stefan Bilbao: "Numerical Sound Synthesis"

Tuesday 12 May 2009 at 1715, Informatics Forum

Abstract

Physical approaches to sound synthesis have seen many advances in the last several years; among the most significant are attempts to view synthesis as a form of simulation, employing techniques used in more mainstream areas of engineering and the physical sciences. The usual problems arise (accuracy, stability, and computational expense), but the sounds produced can be of an exceedingly natural character, and gestural control is greatly simplified.

In this talk, an overview of these methods is presented, followed by a wide variety of sound examples, and, time permitting, a presentation of a piece of music composed using these methods in the Matlab programming environment.

Stefan Bilbao is currently a Senior Lecturer in the Music Subject Area, and works with the Acoustics and Fluid Dynamics group in the School of Physics. He was previously a lecturer at the Sonic Arts Research Centre, Queen's University Belfast; a post-doctoral research fellow at the Stanford Space Telecommunications and Radioscience Laboratories; and a PhD student in Electrical Engineering at the Center for Computer Research in Music and Acoustics, Stanford University.

Paul Keene: Physicality, Movement, Realisation...

Tuesday 21 April 2009 at 1715, Common Room, Alison House

Abstract

How are we moved by performance, in the participatory, liminal instant? Where and when is movement in performative expression? From exacting measurement to philosophical discussion, different approaches have found various types of answers arising out of various strands of enquiry. One area to look for solutions is non-verbal communication through gesture: body-centred instantiation occurring in the locative, noetic moment of creativity. This ‘physical’ vantage-space may help to facilitate a way of re-imagining what may be gleaned from this convergence of concepts. How might this emergent exploration look? What might this transliteration of perspectives feel like? I will propose adopting a ‘syncretist’ approach to this investigation.

Prof. Bob Ladd: Is the "tritone paradox" a special case of a larger phenomenon in pitch perception?

Tuesday 4 November 2008 at 1730, Common Room, Alison House, Music

Abstract

In the 1980s Diana Deutsch demonstrated the existence of what she called the "tritone paradox". This illusion arises with Shepard tones - complex tones consisting only of pure tone partials always an octave apart, whose pitch class (identity as Bb or F# or whatever) is unambiguous but whose octave (fundamental pitch - C2, C4, etc.) can be uncertain. Deutsch showed that when two Shepard tones whose pitch classes are a tritone apart (e.g. C and F#) are played in sequence, some listeners hear the pitch going up and others hear it going down. She attributes this to special properties of the tritone. She also found that listeners' behaviour varies depending on the pitch class of the stimulus tones, which she says shows that listeners have a kind of latent absolute pitch.

However, behavioural results similar to Deutsch's first finding - that some listeners hear a rise while others hear a fall - have been obtained by other researchers using a wide variety of complex-tone stimuli. The conclusion to be drawn from this other research seems to be that, given artificially constructed complex tones, some listeners are more likely to perceive a "missing fundamental" as the pitch of the complex, while others perceive the pitch of the complex to be the lowest or most prominent partial. I show how this more general behavioural difference could be the source of Deutsch's basic paradox. I also show how a methodological peculiarity of her original experiment could account for the apparent finding of absolute pitch.
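For readers unfamiliar with the construction, the sketch below synthesises a basic Shepard tone from octave-spaced sinusoidal partials under a fixed raised-cosine spectral envelope. It is purely illustrative; the envelope shape and parameter values are assumptions, not Deutsch's stimuli.

import numpy as np

def shepard_tone(pitch_class, sr=44100, dur=0.5, base=32.7, n_octaves=9):
    """Octave-spaced partials of one pitch class, weighted by a fixed
    raised-cosine envelope over log-frequency so that no single octave
    dominates (illustrative parameters only)."""
    t = np.arange(int(sr * dur)) / sr
    signal = np.zeros_like(t)
    for k in range(n_octaves):
        f = base * 2 ** (k + pitch_class / 12.0)          # partial frequency
        pos = (np.log2(f) - np.log2(base)) / n_octaves    # 0..1 across the span
        amp = 0.5 * (1 - np.cos(2 * np.pi * pos))         # bell-shaped weight
        signal += amp * np.sin(2 * np.pi * f * t)
    return signal / np.max(np.abs(signal))

# Two tones whose pitch classes are a tritone apart (0 and 6), played in
# sequence, form the kind of stimulus pair the paradox is built on.
tone_a = shepard_tone(0)
tone_b = shepard_tone(6)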

This talk is in preparation for running one or more experiments on this topic, and I will be very grateful for critical feedback.

Dr Ricardo Climent: "When Research = Fun: Russian Disco & Acute"

Tuesday 7 October 2008 at 1715, Student Common Room, Alison House, Music

Abstract

Research is one of the fun parts of my life. This seminar encapsulates a couple of my recent music compositions written in 2007: Acute, for percussion quartet and fixed media with Searched Objects, and Russian Disco, for clarinet, flute and computer with dynamic score. These two works started as a couple of hilarious ideas for a music composition and ended up being two serious pieces of work in my research output. Ironically, I had lots of fun creating and producing them.

Links:
- NOVARS Research Centre, University of Manchester, UK (serves as co-director)
- Scores and recordings for Russian Disco and Acute (personal website)

Dr Renaud Brochard: "Cerebral and behavioural correlates of rhythmic accents"

Tuesday 13 May 2008 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Abstract

This presentation will focus on research aimed at exploring the metrical, periodic component of rhythm perception. Meter implies the perceived alternation of strong (accented) and weak (unaccented) beats and generally leads to the sensation of an underlying pulse.

In the first part, I will present a series of experiments in which electro- and magneto-encephalography techniques were used to investigate the cerebral activity related to subjective accenting in auditory sequences. Our results show that violations of metric expectations (induced e.g. by local changes in intensity) evoke specific, sometimes lateralized, brain responses. This neural activity suggests that, in the absence of physical cues for metric accents within the auditory signal, listeners superimpose, by default, a binary structure onto regular sound sequences. The effect of musical expertise will also be addressed.

In the second part, I will focus on the role of sensory modalities on meter perception. It has recently been shown that individuals can easily tap in synchrony with an auditory beat but not with an equivalent visual rhythmic sequence, suggesting that the sensation of meter may be inherently auditory. I will report recent experimental work comparing tactile with auditory rhythmic stimulations. Implications for models of rhythm processing will then be discussed.

- Potter, D.D., Abecasis, D., Fenwick, M. & Brochard, R. (2008, in press). Perceiving rhythm where none exists: a study of brain event-related potentials. Cortex.

- Brochard, R., Abecasis, D., Potter, D., Ragot, R. & Drake, C. (2003). The tick-tock of our internal clock: EEG evidence of subjective accents in isochronous sequences. Psychological Science, 14(4), 362-366.


Anna Jordanous: "Score Following: An Artificially Intelligent Musical Accompanist"

Tuesday 29 April 2008 at 1715, Student Common Room, Alison House, School of Music

Abstract

For performing musicians, musical accompanists may not always be available during practice, or an available accompanist may not have the technical ability necessary. As a solution to this problem, many musicians practise with pre-recorded accompaniment. Such an accompaniment is fixed and does not interact with the musician’s playing: the musician must adapt their performance to match the recording. To synchronise accompaniment with the soloist, it is preferable that an accompanist should be able to follow the musician through the score as they play, rather than the other way around. During performance, musicians may deviate from what is written in the score (either intentionally, by adding their own musical interpretation, or accidentally, by making performance errors). The accompanist should adjust their playing to follow the soloist.

This work investigates how an artificial musician can follow a human musician through the performance of a piece (perform score following) using a Hidden Markov Model of the piece’s musical structure. The computer musician is designed to interact with the human musician and provide accompaniment as a human accompanist would: musically and in real time. Having successfully implemented this representation, the performances of the resulting artificial accompanists have been evaluated both qualitatively, by human testers, and quantitatively, by objective criteria based on those used at the Music Information Retrieval Evaluation eXchange (MIREX) in 2006. The artificial accompanists can, in general, accompany human performers with a reasonable degree of accuracy. Testing has also raised interesting reflections on the nature of co-operation between soloist and accompanist, and more generally on the role of the computer musician in ensemble performance.
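As a rough illustration of the underlying idea (a sketch only, not Jordanous's system), score following with a Hidden Markov Model can be reduced to repeatedly updating a belief over score positions, combining a left-to-right transition model with an observation likelihood for whatever was just heard; the likelihood values here are hypothetical placeholders.

import numpy as np

def follow_step(belief, obs_likelihood, p_stay=0.6, p_advance=0.4):
    """One forward-algorithm update over score positions.
    belief: current probability over score events;
    obs_likelihood: P(observation | event) for the latest audio frame."""
    n = len(belief)
    predicted = np.zeros(n)
    predicted += p_stay * belief                      # stay on the same event
    predicted[1:] += p_advance * belief[:-1]          # advance to the next event
    posterior = predicted * obs_likelihood            # weight by what was heard
    return posterior / posterior.sum()

# Toy score of 4 events; the observation strongly suggests the second event.
belief = np.array([1.0, 0.0, 0.0, 0.0])
obs = np.array([0.1, 0.8, 0.05, 0.05])
belief = follow_step(belief, obs)
print(belief)   # probability mass shifts towards the second score event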

Eduardo Miranda: "On the Artificial Phonology of Sacra Conversazione Opus 3"

Tuesday 15 April 2008 at 1715, Student Common Room, Alison House, School of Music

Abstract

Sacra Conversazione Opus 3 is a piece in five movements for string orchestra, percussion and synthesized voices, which was premiered recently at a contemporary music festival in Plymouth by a local professional orchestra. The piece was inspired by an evolutionary metaphor whereby simple vocal sounds evolve into vowels, syllabic forms, then words, phrases, and so on.

This talk will focus mainly on the technical challenges I encountered in composing the piece, which required research into phonology and technology for synthesising vocal sounds. I used physical modelling and formant synthesis techniques to generate simulations of immature vocal tract control and voice development, often producing surreal vocal passages. Analysis-resynthesis techniques were employed to dissect sampled speech from a variety of languages (Amharic, Catalan, Cantonese, Croatian, Dutch, French, Galician, German, Hebrew, Hindi, Irish, Persian, Swedish, Thai, and Turkish), in order to re-synthesise new (non-existent) utterances by combining segments from these languages. The piece is entirely “sung” in a non-existent artificial language.

I will also discuss a few personal composition and aesthetic dilemmas that I had to address in this piece, including the combination of “weird” synthesized voices with a standard acoustic orchestra, the behaviour of an audience not very familiar with contemporary music trends, and severe budgetary constraints to buy time from a professional orchestra to rehearse the piece.

Examples will be given and a movie of the concert (excerpts) will be shown.

Mark Steedman: "Formal Grammars for Computational Musical Analysis: The Blues and the Abstract Truth"

Tuesday 18 March 2008 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Sandra Quinn: "Distortions in the perception of time in audition"

Tuesday 19 February 2008 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Abstract

Timing is important for the expressive nature of a piece of music and for perceiving the temporal flow of a melody. Even so, how different musical structures influence our temporal judgments is still poorly understood. In a number of studies, I show how these structures influence the perception of timing in music. For example, my studies have assessed children's and adults' abilities to determine the appropriate tempo for a piece of music. The results show that the appropriate tempi vary from melody to melody for each sample, but the children opt for slower overall tempi compared with the adult sample (Quinn and Watt, 2006; Quinn, O’Hare and Riby, in preparation). The results also suggest that children use rhythmical patterns to make their responses, whilst adults use pitch structures to make their judgments.

Further studies show that listeners can discriminate a series of tones with a regular onset (target or beat) when they are presented without interleaved tones. However, detecting the regular beat is more difficult when the interleaved tones are placed at random onsets between each beat in the target. When the interleaved tones differ in pitch or intensity from the target, performance improves. In the final condition an interleaved tone was placed at one of three, four, five, six or seven sub-divisions between each of the target tones. For sub-divisions more typical of Western music (i.e. three, four and six) the listeners were better at detecting the target; for sub-divisions used less frequently in Western music (i.e. five and seven) performance was poorer.

It is suggested that the ability to detect the target relies on auditory streaming. Where the interleaved tones are presented at random onsets, listeners perceive a single stream, which makes it difficult to detect the target. However, when the interleaved tones are placed at regular sub-divisions (of the kind typically used in Western music), listeners perceive two separate streams, allowing them to detect the target. These studies suggest that pitch, volume and rhythm are crucial for making temporal judgments.

François Pachet: "Reflexive Interactions"

Wednesday 05 December 2007 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Abstract

Reflexive interactions are interactions with machines in which users are able to manipulate models of themselves. These types of interactions have proven very successful in creating stimulating environments, in particular for boosting creativity. I will show several reflexive interaction projects conducted at Sony CSL in the domain of music improvisation and classification.

Simon Dixon: "Towards Computational Models of Music Performance"

Tuesday 20 November 2007 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Abstract

Studies of expressive music performance require precise measurements of the parameters (such as timing, dynamics and articulation) of individual notes and chords. Particularly in the case of the great performers, the only data usually available to researchers are audio recordings and the score, and digital signal processing techniques are employed to estimate the higher level "control parameters" from the audio signal. In this presentation, I describe two techniques for extraction of timing information from audio recordings. The first technique involves finding the times of the beats in the music, for which the interactive beat tracking and annotation system BeatRoot was developed, which was rated best in audio beat tracking in the MIREX 2006 evaluation. The second method is audio alignment, which has been implemented in the software MATCH, whereby multiple interpretations of a musical excerpt are aligned, giving an index of corresponding locations in the different recordings. This software can be used for the automatic transfer of content-based metadata from one recording to another, or for following a live performance.
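Audio-to-audio alignment of this kind is typically built on dynamic time warping over frame-wise features (for example chroma or spectral frames). The sketch below is a generic DTW alignment, not the MATCH implementation, and assumes the two feature sequences are already computed as NumPy arrays of shape (frames, dimensions).

import numpy as np

def dtw_path(features_a, features_b):
    """Minimal dynamic time warping between two feature sequences;
    returns a list of (frame_a, frame_b) correspondences."""
    n, m = len(features_a), len(features_b)
    dist = np.linalg.norm(features_a[:, None, :] - features_b[None, :, :], axis=2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = dist[i - 1, j - 1] + min(cost[i - 1, j],      # step in a
                                                  cost[i, j - 1],      # step in b
                                                  cost[i - 1, j - 1])  # step in both
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]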

STEIM members: STEIM Workshops 2007

This will be several days of workshops, talks and a concert by members of STEIM, the centre for research & development of instruments & tools for performers in the electronic performance arts based in Amsterdam, The Netherlands. The schedule of public events follows:

STEIM's Mobile Touch Exhibition, Open daily from 0900 - 1700

Tuesday 6 - Thursday 8 November 2007, 0900 - 1700 (until 1130 on Thursday 8 November), Lab Room, Alison House, School of Music

Talk with Daniel Schorno and Frank Baldé: Title to follow

Tuesday 6 November 2007 at 1715, Student Common Room, Alison House, School of Music

Abstract

Details forthcoming

Public talk/demonstration with members of STEIM

Wednesday 7 November 2007 at 1100, Alison House Atrium, Alison House, School of Music

Concert with Takuro Mizuta Lippit/DJ Sniff

Wednesday 7 November 2007 at 1930, doors open at 1900, Alison House Atrium, Alison House, School of Music

Conversation with members of STEIM

Thursday 8 November 2007 at 1000, 3rd Floor Rooms 3.03, 3.05, Appleton Tower, School of Informatics

Events for Students/Faculty/Registered People ONLY

MONDAY

1300 – 1445  ‘Composing the Now’: development of personal instruments for performance. Appleton Tower: Room 6.01. Daniel Schorno will present a live demonstration of STEIM hardware.

1600 – 1730  Live testing and practical adaptation with work-in-progress software. Appleton Tower: Room 6.01. A more in-depth presentation than the first workshop, with a demonstration of LiSa and junXion.

TUESDAY

0930 – 1100  Composition students' chat. Alison House: Lab Room and St Cecilia’s Hall: Laigh Room. Working with musical ideas, performance ideas, and improvisation versus composition.

1200 – 1330  Workshop/talk for hardware students. Alison House: Lab Room. A focus on optical, motion and sound sensors, with a bit of software and the role of haptics in practical performance.

1430 – 1600  ‘Touch sound’ with Daniel Schorno and Frank Baldé. Alison House: Sound Design Labs. A focus on the embodiment of sonic communication.

WEDNESDAY

0900 – 1000  STEIM’s Mobile Touch Exhibition. Alison House: Lab Room. Talk to and interaction with children: explaining at a basic level how the technology works, letting them play with all the instruments, showing things in workgroups, and making music together with the instruments.

Sofia Dahl: "Studying expression and control in musicians' movements"

Tuesday 23 October 2007 at 1715, Student Common Room, Alison House, School of Music

Abstract

When playing an instrument, musicians primarily use their hand and arm movements to produce sounds intended for musical communication. The fact that music is a form of communication puts musicians in a somewhat different position compared to other experts in, for instance, sports or industry. For a musician, the evaluation of a movement has less to do with measurable units of time and distance, and more to do with communication through sounds. In order to reach an acceptable and professional level, a musician needs to carefully refine his or her movements using auditory feedback - a process that may take many years. Furthermore, it is not enough for a professional musician to perform specialized, complex motor tasks under a strict time constraint. For a performance to be truly successful it should also convey something to the listener.

In this talk I will discuss some of the questions that arise when studying movements in music performance: How can movements primarily intended for the production of communicative sounds be evaluated? What characterizes an "efficient" and "appropriate" movement? Do expressive gestures somehow facilitate the control of sound production? If so, is it at all possible to disentangle expression from control when studying musicians' movements?

Bryan Pardo: "Separation of Harmonic Instruments from Stereo Music Recordings"

Thursday 13 September 2007 at 1715, 6th Floor Common Area, Appleton Tower, School of Informatics

Abstract

A key problem facing us today is multimedia retrieval and management – how to retrieve, process, and store what one seeks from the huge and ever-growing mass of available data. Music, from mp3s to ring tones to digitized scores, is one of the most popular categories of multimedia. In accessing and manipulating music, people often wish to perform tasks that require the ability to separate a recording containing a mixture of simultaneous sounds into its component sound sources. This is called source separation. Effective source separation would be of great utility in applications such as music transcription, vocalist identification, post-production of pre-existing recordings, sample-based musical composition, multi-channel expansion of stereo recordings, and structured audio coding. In this talk, Bryan Pardo will discuss and demonstrate recent work in his laboratory on automatic separation of sound sources in a stereo mixture to isolate individual instruments from a musical mixture. This work improves on existing binary time-frequency masking. It does so by replacing the winner-take-all assignment of energy for each time-frequency frame in the mixture with an approach that allows assignment of energy from each frame to multiple sources. This improves separation of mixtures with significant overlap between the harmonics of different sound sources, a common case in music recordings.
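The distinction between binary and soft time-frequency masking that the talk builds on can be illustrated schematically. The sketch below is not Pardo's algorithm; it merely contrasts winner-take-all assignment with a proportional (Wiener-like) assignment, assuming rough magnitude estimates for each source are already available.

import numpy as np

def binary_mask(estimates):
    """Winner-take-all: each time-frequency cell is assigned entirely to
    the source with the largest estimated magnitude.
    estimates: array of shape (n_sources, n_freq, n_frames)."""
    winners = np.argmax(estimates, axis=0)
    return np.stack([(winners == k).astype(float)
                     for k in range(estimates.shape[0])])

def soft_mask(estimates):
    """Distribute each cell's energy across sources in proportion to their
    estimated magnitudes, instead of winner-take-all."""
    return estimates / (estimates.sum(axis=0, keepdims=True) + 1e-12)

# Either mask is applied to the mixture STFT (mask[k] * mixture_stft) and the
# result inverted with an ISTFT to recover source k.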

Biography

Bryan Pardo heads the Northwestern University Interactive Audio Laboratory and is an assistant professor in Northwestern’s Department of Electrical Engineering and Computer Science, with a courtesy appointment in the School of Music. Prior to Northwestern, he taught in the Music Department of Madonna University in Detroit. Bryan’s academic credentials include a B.Mus. in Jazz Composition from the Ohio State University, a Master’s degree in Jazz and Improvisation from the University of Michigan and a Ph.D. in Computer Science, also from the University of Michigan. In addition to his academic career, Bryan has worked professionally as a software developer for SPSS and as a researcher for General Dynamics. He remains active in the music field, has been featured on a number of albums, and performs on saxophone and clarinet throughout the Midwestern United States.

Gordon Miller: "The Spectral Analyser in Mixing & Mastering - Implementation & Use"

Tuesday 24 April 2007 at 1715, 3rd Floor, Rm 3.04, Appleton Tower, School of Informatics

Abstract

In this talk I will explore two main themes:

1. The construction of software spectral analysers, both in general and specifically with reference to my xspect program. This will contain quite detailed DSP material, including the Fast Fourier Transform, data tapering, averaging and normalization (a schematic sketch of this pipeline follows below). I will briefly explain the design decisions behind xspect, and implementation issues including algorithm choice, scheduling, threads and drawing optimizations.

2. The fruitful use of spectral analysers in music post-production. This will explore the increasing use of graphical EQ and spectral compression plugins, which have opened new possibilities for improving the spectral balance of recordings. I will talk in particular about the Voxengo Gliss EQ and Voxengo Soniformer plugins.

The two themes overlap since the intended end use will determine the choices in design and implementation.
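The analyser core described in the first theme can be sketched in a few lines: window (taper) each frame, take the FFT, average the magnitude spectra and convert to decibels. This is a generic illustration, not xspect's implementation; the window choice, frame size and normalisation are assumptions.

import numpy as np

def averaged_spectrum(x, sr, frame=4096, hop=2048):
    """Window each frame, FFT it, average the magnitude spectra and
    convert to dB (assumes x is a mono signal at least one frame long)."""
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window
              for i in range(0, len(x) - frame + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    avg = mags.mean(axis=0) / (frame / 2)        # rough amplitude scaling
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    return freqs, 20 * np.log10(avg + 1e-12)     # spectrum in dB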

Simon Emmerson: "Theory, action (music)"

Tuesday 27 March 2007 at 1715, Student Common Room, Alison House, School of Music

Abstract

Where is theory when it comes to making music? How and where does it function? Musicologists have a view, composers have a multitude of views, and performers do too (some of them dismissing its role as a distraction). From hidden code to overt bearer of political and social messages, theory has been everywhere at some time or other. Mechanical tools were extensions of human gesture and energy fields. Computer tools, however, move to embrace human abstraction and thought processes (‘theory’ included). Paradoxically, therefore, they may enable a more flexible relationship of theory to practice. I shall suggest a project in the phenomenology of theory in practice. What do composers and performers feel about the use of theory in practice? Or, put another way, what does the use of theory make them feel? Some preliminary observations about where we might begin...

Petri Toiviainen: "Measuring, Modelling, and Visualizing the Dynamics of Tonality"

Tuesday 6 February 2007 at 1715, Student Common Room, Alison House, School of Music

Abstract

Perception of music is an active dynamic process: while music unfolds in time, we constantly form expectations about its possible continuations. These expectations operate on several levels of music, such as melodic, harmonic, rhythmic, and tonal. The fulfilment or violation of these expectations gives rise to patterns of tension and relaxation, which, it has been suggested, is one source of emotions evoked by music. Therefore, understanding dynamical aspects of music perception is crucial for obtaining a more comprehensive picture of the musical mind. The dynamics of music perception can be studied with various methods, including listening experiments and computational modelling.

Music in many styles is organized around one or more stable reference tones. In Western music, this phenomenon is referred to as tonality. As music unfolds in time, the tonality percept often changes. For instance, the clarity of tonality can change over time. Furthermore, the particular piece of music may contain modulations from one key to another. These changes in perceived tonality may be important in the creation of expectancies, tension, and emotions.

I will discuss methods for measuring the dynamics of music perception, in particular the perception of tonality, by means of listening experiments. Furthermore, I will talk about approaches to simulate this process with models of pitch perception, short-term memory, and the self-organizing map (SOM). The SOM allows for dynamic visualization of perceived tonal context, making it possible to examine the clarity and locus of tonality at any given point of time. Finally, I will discuss computational methods for the analysis of tonal structure of musical works.
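As a much simpler stand-in for the pitch-memory-SOM pipeline described here, windowed key-profile correlation gives a feel for how the clarity and locus of tonality can be tracked over time. The profiles (for example the Krumhansl-Kessler major and minor profiles) are assumed to be supplied by the caller; the clarity measure is an ad hoc illustration, not Toiviainen's model.

import numpy as np

def key_clarity(pc_distribution, key_profiles):
    """Correlate a windowed pitch-class distribution (12 values) with all
    transpositions of the supplied key profiles. The best-matching key gives
    a rough locus of tonality; its margin over the rest a rough clarity."""
    corrs = np.array([np.corrcoef(pc_distribution, np.roll(profile, k))[0, 1]
                      for profile in key_profiles
                      for k in range(12)])
    best = int(np.argmax(corrs))
    clarity = corrs[best] - np.median(corrs)
    return best, clarity

# Sliding this over successive windows of a piece traces how the preferred
# key and its clarity change as the music unfolds.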

Richard Coyne: "Design for Voice"

Tuesday 5 December 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

I report on a recent exploration into the role of the human voice, in its various manifestations, and as it features as a consideration in the design of urban environments. I canvass issues of the priority of vision over sound in architectural design, and the difficulties of designing with and for the movement of voice. I begin with the "open outcry" of the marketplace, and migrate across territoriality, through inflection, repetition, reproduction, ruse, ambience, performance, resistance, and the cut. As well as serving as a medium of communication and a musical instrument, the voice defines territory. As suggested by Deleuze, the voice also deterritorializes.

Martin Parker: "Lines, tones & filaments, recent projects"

Tuesday 21 November 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

I will discuss several preoccupations when making work: reactive and resistant interfaces, and the trade-off between complexity and transparency in computer-based performance systems.

I'll explain three recent projects: Linetones (for computer and graphic artist, 2006), Filament (public sound installation, 2006) and The Spectral Tourist (for computer and joystick, 2003).

Steve Goodman: "Audio Virology: On the Algorithmic Contagion of the Body"

Tuesday 14 November 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

Drawing from recent philosophies and cultural theories of affect and the virtual, this paper will explore the emergence, propagation and mutation of musical algorithms across digital and analog sonic ecologies. Via the concept of the 'abstract machine' [Deleuze & Guattari] and the ‘nexus’ [Whitehead], we will investigate the way in which numerical systems pass through populations, attaching themselves to, and traversing bodies, producing sonic affects, movements and sensations in their wake.

Challenging the formalist ambitions of generative musics, we will question whether real sonic mutation, beyond mere variation within the pre-coded possibility space of the digital [e.g. via generative randomness, cellular automata and artificial life], requires a conjunction with the analog as contended by Brian Massumi in his text 'Parables for the Virtual'. Where is the ‘virtual’, or potential for mutation, within software-based musics? The discussion will be illuminated by a number of viral sonic fictions in which artificial acoustic agencies infect and proliferate both through actual populations and computational networks, and by the anomalous affects they generate en route.

David Murray-Rust: "Thinking inside the box"

Tuesday 7 November 2006 at 1715, 3rd Floor, Rm 3.04, Appleton Tower, School of Informatics

Abstract

As a side project to my work with musical agents, I was asked to create a demo for CISA to present part of my research to a general audience, e.g. at open days. This resulted in the creation of the AgentBox, a tangible interface for a multi-agent system. In this talk I will give a very brief overview of the musical agent system, talk about how the box itself works, and give a very short performance using it. I'm trying to keep this as short as possible to allow everyone to have a bit of a play with the system, as it's much more interesting as a hands-on experience.

I am very interested in feedback as to how much of the interaction is accessible to people in general, so that the demo can be improved.

Many thanks go to CISA for sponsoring this project.

Bernard Bel: "The Bol Processor project: musicological and technical issues"

Tuesday 31 October 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

This presentation will be supported by demonstrations on a running version of BP2 connected to a MIDI synthesizer.

Bol Processor 2 (BP2) is a program for music composition and improvisation with real-time MIDI, MIDI file, and Csound output. It produces music from a set of rules (a compositional grammar) or from text ‘scores’ typed or captured from a MIDI instrument. These rule sets are very similar to the formal grammars (context-free, context-sensitive, etc.) that are used in computer science to define machine-readable languages. As a compositional tool, Bol Processor has been successful at modelling music of many styles, including Western classical music, serial music, contemporary art music including minimalism, Indian classical music, and jazz. More information about the capabilities of BP2 is available at http://www.lpl.univ-aix.fr/~belbernard/music/.

BP2 began its life as a shareware program developed by Bernard Bel with the help of Jim Kippen and Srikumar Karaikudi Subramanian. It currently runs on the Classic MacOS, though its OMS MIDI driver (including QuickTime music output) only runs on machines booting MacOS versions 7-9. Recently the project has been open-sourced on SourceForge at http://sourceforge.net/projects/bolprocessor/ with the help of Anthony Kozar. We are hoping that Bol Processor 3 will at least run on MacOS X, and ports to Windows and Linux are also possible depending on the desires and expertise of the group of developers that will be assembled.

http://www.lpl.univ-aix.fr/~belbernard/music/BolProcessorOverview/ is a link to various slides, sound files and documents related to the talk.
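To make the idea of a compositional grammar concrete, here is a toy rule set and a random rewriting procedure in the general spirit of such grammars. The rules and note names are invented for illustration; BP2's own rule syntax and control structures are considerably richer.

import random

# A toy compositional grammar (illustrative only).
rules = {
    "S":       [["PHRASE", "PHRASE"]],
    "PHRASE":  [["MOTIF", "MOTIF"], ["MOTIF", "CADENCE"]],
    "MOTIF":   [["C4", "E4", "G4"], ["D4", "F4", "A4"]],
    "CADENCE": [["G4", "C4"]],
}

def expand(symbol):
    """Recursively rewrite a non-terminal until only note names remain."""
    if symbol not in rules:
        return [symbol]
    production = random.choice(rules[symbol])
    return [note for part in production for note in expand(part)]

print(expand("S"))   # e.g. ['C4', 'E4', 'G4', 'D4', 'F4', 'A4', 'G4', 'C4']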

Richard Brown: "Interactivity and emergence in digital and electrochemical sound systems"

Tuesday 24 October 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

Richard Brown is the Research Artist in Residence at Edinburgh Informatics and creates interactive artworks using multimedia, electronics, computers, chemicals and electricity. In his talk he will present three distinct sound systems: "The Lyre Bird", an interactive audio installation at Perth Concert Hall; "Biotica", an emergent A-Life system; "The preservation of Entropy" and other electrochemical systems inspired by Gordon Pask.

For more details of his work see http://www.mimetics.com and http://www.inf.ed.ac.uk/research/programmes/air/rair/index.html

Andy Clark: "Music for Embodied Minds"

Tuesday 10 October 2006 at 1715, 3rd Floor, Appleton Tower, School of Informatics

Abstract

In this talk I introduce some of the key themes that characterize recent work on the 'embodied mind', and ask what (if anything) they might tell us about the nature of musical performance and cognition. I end by applying some of the emerging themes to a simple case of rhythm perception, asking under what conditions we experience a sound as pulsating. Might such experience be somehow linked to an agent's dispositions to produce appropriate bodily actions?

(Health Warning: this talk is identical with the union of two previous talks presented to workshops at the Institute for Music and Human Development. The talks were 'Beyond the Naked Brain' and 'The Pulsatingness Puzzle'.)

Julia Deppert: "Make numbers sing - a personal approach to algorithmic composition"

Tuesday 3 October 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

When attempting to organise musical harmony according to my own given set of rules, I soon reached the limits of structures I was able to control 'by hand'. It made me turn to programming in Common Lisp, which I have continued ever since, further developing the algorithms concerning harmony as well as exploring other procedures such as those for the creation and display of different rhythms.

I want to introduce my own approach to composing music with the help of a computer, covering the reasons behind it, the way the procedures work and, finally, the results, illustrated with examples of my music.

Keith Stenning: "Music and language: some less than half-baked analogies"; Followed by Opening Series discussion

Tuesday 26 September 2006 at 1715, Student Common Room, Alison House, School of Music

Abstract

The largely implicit models of language held by different disciplines vary radically. Discussions of language evolution bring this out starkly. My own recent work on human reasoning (and that of many others) explores the idea that human language developed out of planning functions rather than out of more obviously communicative capacities. This has radical effects on what is seen as evolutionarily novel and what is ancestral, and so changes the problem of explaining human cognitive evolution.

This talk considers what this change of perspective might have to offer to those interested in the origins of music and the relation between music and language. I am by no means an expert on music, so the tenor of the talk will be to offer the audience a sketch of how planning works as a model of language capacities, along with some questions about ways the model might apply to music. I hope to learn a lot.

Tim Blackwell: "Swarming and Music"

Tuesday 9th May 2006 at 5.15pm, Department of Informatics, Room 3.04, Appleton Tower, Crichton Street

Abstract

Music is a pattern of sounds in time. A swarm is a dynamic pattern of individuals in space. The structure of a musical composition is shaped in advance of the performance, but the organisation of a swarm is emergent, not pre-planned. What use, therefore, might swarms have in music?

Here I consider this question with a particular emphasis on swarms as performers, rather than composers. In Swarm Music, human improvisers interact with a music system that can listen, respond and generate new musical material. The novelty arises from the patterning of an artificial swarm. Swarm Music is a prototype of an autonomous, silicon-based improviser that could, without human intervention, participate on equal terms with the musical activity of an improvising group.
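A minimal caricature of the swarm-to-music mapping is sketched below, assuming a toy attractor point standing in for input derived from the human player and an arbitrary mapping of particle positions onto MIDI pitches; Swarm Music's actual dynamics and mappings differ.

import numpy as np

def swarm_step(positions, velocities, attractor, dt=0.05, pull=1.5, damp=0.98):
    """One update of a toy swarm: particles accelerate towards an attractor,
    with damping to keep the motion bounded."""
    velocities = damp * (velocities + pull * (attractor - positions) * dt)
    positions = positions + velocities * dt
    return positions, velocities

def to_midi_pitches(positions, low=48, high=84):
    """Map the swarm's first coordinate onto a MIDI pitch range."""
    x = positions[:, 0]
    norm = (x - x.min()) / (np.ptp(x) + 1e-9)
    return (low + norm * (high - low)).astype(int)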

David Murray-Rust: "MAMA: An architecture for interactive musical agents"

Tuesday 21st February 2006 at 5.15pm, University of Edinburgh, Department of Informatics, Room 3.04, Appleton Tower, Crichton Street, EH8 9LE

Abstract

In this talk I will discuss MAMA (Musical Acts - Musical Agents), an architecture for musical agents. I will discuss the anatomy of a musical agent, and how it relates to other agents. I will explore the issue of representing music such that distributed, logical agents can play a given "score". This will use the notion of Musical Acts - a layer of abstraction which allows an agent to reason about the actions of others. Finally, there will be a short demo (technology permitting) of a current case study using the system.

Colwyn Trevarthen: "The Sociable Impulse of Musicality, Motor Source of Musical Intelligence and Skill"

Tuesday 7th February 2006 at 5.15pm, Department of Informatics, Room 3.04, Appleton Tower, Crichton Street

Abstract

Music, like language, is not a thing, so much as something human bodies do together. It is not just information to be processed or structure to be understood in single heads, or just an acquired skill. Information on how we are capable of acquiring communication by language and music has come from analysis of the expressive engagements between infants and other people. The theory of "Communicative Musicality" identifies the principles of the messages conveyed by the polyrhythms and energy-regulating dynamic forms of all human body movement. Infants demonstrate an innate capacity for sharing the pulse, quality and narrative of human thinking, made evident by the gesture and voice of agency. This sympathy in conscious action, with the emotions of self-presentation in community, is the motivating impulse for all cultural forms of music.

Michèle Weiland: "Minimally supervised machine learning for the generation of large-scale musical structures"

Tuesday 6th December 2005 at 5.15pm, Department of Informatics, Room 3.04, Appleton Tower, Crichton Street

Abstract

This paper demonstrates an approach that uses machine learning techniques to learn and generate large-scale musical structures. Musical works have several defining levels of representation, or musical parameters, such as metre, duration, phrase structure, cadential patterns and pitch. We build networks of models that represent musical parameters, aiming to learn both the local dependencies of the elements that make up each parameter and the interdependencies between the different parameters. We extract structured probabilities from a musical data set, which are then used to generate new pieces of music, providing the models with only a minimum of supervision and expert knowledge. The musical examples in this paper show that the generated material is believable on the larger scale and follows the main rules of the musical data that was used for training our models.
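A drastically simplified stand-in for the idea of extracting structured probabilities from a corpus and generating new material from them is a first-order transition model over symbolic events, sketched below. The real system models several interdependent musical parameters; this sketch handles only one, and the corpus format is an assumption.

import random
from collections import defaultdict

def train_transitions(sequences):
    """Estimate first-order transition probabilities from a corpus of
    symbolic sequences (e.g. lists of pitches or durations)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def generate(model, start, length):
    """Sample a new sequence by walking the learned transitions."""
    seq = [start]
    for _ in range(length - 1):
        nxt = model.get(seq[-1])
        if not nxt:
            break
        seq.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return seq

model = train_transitions([["C", "D", "E", "C"], ["C", "E", "G", "E", "C"]])
print(generate(model, "C", 8))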

Nigel Osborne: "Analog - Digital - Orchestral: The experiences of the East European Avant Garde"

Tuesday 1st November 2005 at 5.15pm, Department of Informatics, Room 3.04, Appleton Tower, Crichton Street

Abstract

The history of new music in Eastern Europe after 1945 is paradoxical. On the one hand, the commissars of the late Stalinist period and the doctrine of socialist realism exerted a conservative pressure on new art-making. On the other hand, a powerful "alternative, Eastern modernism" flowered in oppositional, underground and samizdat networks. In certain places, at certain times, this experimentalism was even welcomed and celebrated in the "official" art world.

One such place and time was Poland in the years after the political changes of 1956. Here there was a meeting point of western and eastern modernisms. Of particular significance was the encounter between speculative post-war Western musical modernism, Eastern expressive and dramatic modernist forms and the emerging new music technology. The presentation examines this encounter, from the music of "poor theatre" and workers' clubs to the symphony orchestra and early electronic music studio.

Panel Debate: "What can composers do for computers and vice versa"

Tuesday 10th October 2005 at 5.15pm, Lecture Room A, Alison House

Abstract

To kick-start the new season of MusIC talks, we will be holding a panel debate around the theme "What can computers do for composers and what can composers do for computing". We will have brief statements from our panellists - Peter Nelson, Michael Edwards, Martin Parker and Dave Murray-Rust - and an opportunity for general debate and questions from the audience.

-- DaveMurrayRust - 02 Dec 2005
