Interview

2005/Jul/14
1

Education

My musical education began normally: I had piano lessons and then went to the Conservatory in Viseu. When I stopped having piano lessons, I began to get involved in things like rock bands, pop groups and jazz bands, playing with other kinds of people and basically improvising. Then I went back to the Conservatory, and it was about then that I decided to explore improvisation and composition more, at the same time. When I was about 19, I decided to see what there was in terms of composition and music courses in general, but with a strong compositional element, and I ended up going to the University of Edinburgh. It was more or less by chance, but I thought I should get out of Portugal - not for any particular reason, but simply because I wanted to try something else. So I ended up at Edinburgh University, where I did a four-year general music course, but with a substantial element of electroacoustic and instrumental composition, which served my needs at the time. Then I decided to continue with electroacoustic composition. I did a master's at the University of East Anglia, in Norwich - a research degree rather than a taught one, building up a portfolio. It was a master's in electroacoustic composition, but I decided not to do electroacoustic music as such, but rather to do something with installations and improvisation, and to get as far away as possible from the tradition of tape music, which is very strong in England. I spent a year trying to do things with different kinds of notation, improvisation and installation. Then I decided to return to Edinburgh, where meanwhile a professor of architecture, Richard Coyne, had arrived. He writes a great deal about design, architecture and the relationship between design and new technology. Thus there came about the possibility of doing a hybrid doctorate, between the Faculty of Music, where I had studied before, and the Faculty of Architecture, above all with regard to new technology. And so I spent four years developing a portfolio made up of installations and various pieces that explore ideas of design and architecture, and the relationship of these ideas with music: space in relation to music, and the new technologies in relation to music.

2
The relation between composition and improvisation

I think that improvisation began seriously perhaps before going to Edinburgh, during the early years in Viseu, when I began to play free jazz with various musicians there. We began to develop something which was neither improvisation nor composition; it was more or less what rock bands do: they rehearse a particular thing and study a particular type of material and know that this particular thing will happen, but it's not composed in terms of notes. Conditions are created so that a certain kind of music happens, but the specific materials that will occur are not fixed in advance. And in terms of method, I think something of that stayed with me. When I began to write more formal compositions, in a more serious way, I always tried to include this aspect, which gives greater freedom in performance (though I don't like that idea of “freedom”, because that's not really why I do things). It has more to do with creating conditions so that the music happens in a way which is not just reading what is written, or performing what has already been performed. One of the experiments I began to do in Edinburgh was, for example, to collaborate with visual artists. I would develop a notation that was graphic but which also had an aesthetic, visual dimension. In other words, the piece was structured like a normal composition, but the notation was bivalent; it wasn't traditional. And I continued this kind of work through my master's and doctorate. This was all, shall we say, to give space and time to something I find very important: the moment when music happens live. I think it shouldn't just be the repetition of music that already exists in the imagination or in a score – it's a fragile kind of situation, and I think that's what characterizes a musical situation. That is, it's that feeling of somebody watching and listening to a musician on the stage, where at any moment something can “go wrong” and it can all collapse – a string could break, a musician could make a mistake; there are many things which could happen. I think it's these things which make musical performance important, and one of the only reasons we continue to go to concerts – there's an attraction towards this kind of fragility, which is difficult to define. More precisely, it's a danger. And I think that the systems that are somewhere between composition and improvisation make use of this danger and make it part of the music, so that the performance is not merely a way of expressing the music on the stave, but rather a form of expressing danger, which I think is more interesting.
3
Improvisation, determinism of technology and the possibility of error

I think that my interest in technologies has a lot to do with improvisation, especially the kind we were talking about just now with regard to danger. There are too many artists who use new technology simply as a way of realizing dreams and things imagined, and an important part of technology is error. In improvisation, either the error that occurs comes to be part of the music, or else it gives way to a permanently frustrating situation. The new technologies have a system of error that can be interesting – when an improvisation situation is created in which there are musicians playing an acoustic instrument, for example, and a more or less interactive computer part, which responds to particular structures or parameters that a musician creates, I think it's important that the computer also have the possibility of error. In terms of programming a computer, it is possible to programme it to do deterministic things, and it's also possible to programme it to do completely aleatoric things – neither of these possibilities is very interesting in itself; what is interesting is what lies in the middle. In other words, what I find most interesting is knowing more or less what might happen, but giving an opportunity to what one can call “an error”. More recently I've been investing some energy in exploring types of error, from an aesthetic point of view, from a sonic point of view, or just thinking about how an error might sound. Looking, for example, at the work of people like Nick Collins – the American Nick Collins, not the English one – who does what he calls “hardware hacking”: he takes machines and tries to explore them in terms of sound, but in a different way, in a way for which they were not designed. It's a freeing of error, if you like. And I think that this has a lot to do with improvisation, with the systems I develop on the computer either for improvisation or for pieces which are written, but which have an interactive component.
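A minimal Python sketch of this middle ground between the determinist and the aleatoric (illustrative only, not a system described in this interview; the scale, the mapping and the error_rate parameter are all assumptions):

    import random

    SCALE = [0, 2, 4, 7, 9]  # hypothetical pitch-class material

    def respond(input_pitch, error_rate=0.1):
        # error_rate = 0.0 is fully determinist; 1.0 is fully aleatoric;
        # values in between give "knowing more or less what might happen"
        if random.random() < error_rate:
            # the "error": an unprepared leap outside the expected mapping
            return input_pitch + random.choice([-13, -6, 6, 11, 13])
        # the determinist part: quantize to the nearest scale degree, an octave up
        pc = min(SCALE, key=lambda s: abs(s - input_pitch % 12))
        return input_pitch - (input_pitch % 12) + pc + 12

    print([respond(60, error_rate=0.2) for _ in range(8)])

With error_rate at zero the response is entirely predictable; raising it admits the occasional unprepared leap without surrendering the whole texture to chance.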
4
Sound and image in the process and the result of composition: music and space

I don’t make much of a distinction; I think that’s something I learned. It’s related to the Faculty of Architecture in Edinburgh and the fact of being perhaps more distant from the tradition of being “the composer”. For example, when I was teaching architecture projects, with students developing digital architectures, interactive projects and site studies, one of the attitudes that impressed me considerably, and that stayed with me, was treating a creative project as a response to an existing situation. That is, an architect designs for a particular site, yes? He doesn’t normally design in the abstract; he designs in response to a particular site and a particular programme: how many bathrooms are necessary, how many corridors, how many bedrooms, what are the materials of the area, and so on. It never comes from nothing. This was always something that interested me, and during perhaps the last ten years I have tried to include it in composing. Composition is not necessarily a way of expressing my internal ideas, but a reaction to a situation that exists: a concert, a musician, a particular visual system, spatial system, etc. One of the things I did during my doctorate was to develop works specific to particular spaces, either based acoustically on those spaces or made to be placed in particular locations. I think that this is the culmination of the attitude that a composition only exists if there is a space and a situation to which it can react. Any other way makes no sense.

With regard to architecture, one of the things that interested me at the beginning was the fact that, when one enters a space, the temporal structure which is traditionally part of music is different. That is, one does not sit down and listen to a piece of music from beginning to end – one can journey through the space in particular ways, and in journeying, one hears in a different way, whether there is music in the space or not. It may be a soundscape, or any sound in the natural space. One aspect of this appears, for example, in a sound installation with two or three interconnected spaces, where it is the audience who decides when to go from one space to another, how long to stay in one part listening to particular kinds of music, and how long to take moving from one section to another – in other words, something of the temporal structure, which was always a great tradition in music, passes to the audience. What interests me is what this passage means in terms of composition; one cannot compose music in the same way if one loses control of the temporal structure. This means that the music has to be different, and has to develop other kinds of structures – this was one of the things I found interesting.
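A small sketch of this handing-over of temporal structure, assuming a hypothetical two-zone installation with a one-dimensional position sensor (none of this is taken from the installations discussed here):

    def zone_gains(position, zone_centres=(0.0, 1.0), width=0.75):
        # one amplitude per zone from a 1-D listener position;
        # each zone's material loops regardless - "form" is the path walked
        gains = []
        for centre in zone_centres:
            distance = abs(position - centre)
            gains.append(max(0.0, 1.0 - distance / width))  # linear falloff
        return gains

    # walking from zone 0 to zone 1 crossfades the material
    for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(pos, [round(g, 2) for g in zone_gains(pos)])

The piece has no global timeline here: each zone simply plays, and the listener’s movement decides what is heard, in what order, and for how long.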

The other component as far as architecture is concerned was the most obvious one – the acoustic component. Any sound is a reflection of the environment in which it is played, of course. Therefore, any voice is modulated by a space or by a soundscape, and sounds do not exist in the abstract, but in a particular context. These two aspects were explored in various pieces. Perhaps one of the simplest pieces to explore this is the installation Partial Space, which uses a gallery, so to speak, as an amplifier of relatively simple tones or frequencies. In other words, I make an analysis of the frequencies – the resonance of a particular space – and these frequencies are then synthesized in real time, depending on how the audience moves through the gallery; the two things are more or less mixed. I think that this, at the time (the installation was made in 1995/1996), opened great possibilities for me in harmonic terms, or perhaps even in spectral terms. It obliged me to pay more attention to what happens in the space and perhaps less attention to what happens in the instrumental part – in other words, taking these two components into account more or less at the same time, thinking of ideas such as “what is the dissonance between an instrument and a space” or “what is the consonance between an instrument and a space”.
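The general approach described for Partial Space might look something like the following Python/NumPy sketch – a hedged reconstruction of the idea, not the installation’s actual code; the function names and the choice of five peaks are assumptions:

    import numpy as np

    def resonant_peaks(impulse_response, sample_rate, n_peaks=5):
        # strongest spectral peaks (in Hz) of a measured room impulse response
        spectrum = np.abs(np.fft.rfft(impulse_response))
        freqs = np.fft.rfftfreq(len(impulse_response), 1.0 / sample_rate)
        # local maxima: bins louder than both neighbours
        peaks = [i for i in range(1, len(spectrum) - 1)
                 if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
        peaks.sort(key=lambda i: spectrum[i], reverse=True)
        return [float(freqs[i]) for i in peaks[:n_peaks]]

    def synthesize(freqs, gains, duration, sample_rate=44100):
        # sum simple sine tones at the room's resonant frequencies;
        # an interactive layer could tie the gains to audience movement
        t = np.arange(int(duration * sample_rate)) / sample_rate
        return sum(g * np.sin(2 * np.pi * f * t) for f, g in zip(freqs, gains))

The interactive part – mapping audience movement through the gallery onto the gains – would sit on top of this, so that the space’s own resonances become the installation’s material.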
5

Systems of interaction

This aspect was one of the things I began to explore with the Laut Duo, which consists of myself and the saxophonist Franziska Schroeder. We explore various kinds of composition and improvisation: I do the electronic systems, normally a laptop with two or three other things, and she plays saxophone. We develop pieces in various ways – either both together or, sometimes, I write works to be performed by the duo. We also have other composers writing works for us, which has been an interesting experience in terms of looking specifically at the relationship between a computer – which, in the case of our improvisations, does not necessarily have a predetermined sound source – and an acoustic instrument such as the saxophone. This led me to think about what the role of a computer on stage could be, and to compare this with what the role of the saxophone on stage might be. I think that the traditional view of things is that a saxophone, or any other instrument, is seen as the technology through which the music is expressed – that is, the musician has the music somewhere in his body and uses the instrument to project the music to an audience. However, the more I think about this matter, the more false I find this idea, though it has been quite important in the way the computer has been developed as an instrument. A saxophone or a violin or a guitar is a given instrument, and a musician, whether he is improvising or performing a composed work, generates a reaction between himself and the instrument; this is a very important component of performance. And, unfortunately, in terms of computers and new technology, we have created a situation in which most people spend most of their time and energy trying to develop things that are easy to use, easy to play and immediate in response. If we look at the history of musical instruments over the last two hundred years, that’s not the way it was. It was never the main idea of the people who designed and made instruments that they be easy to play, or that within a week one could perfect a particular kind of music – that was never the idea. I think that we are losing this a bit because of the development there has been in terms of computers, which, of course, are not made for music. They are a general computing system, and it is there that the problem lies: they are made as a generalized system and not as a specific system. And I think that this idea that a computer can do everything is counter-musical. I think that most musical cultures come from very specific conditions. If we look at instruments, each instrument does one thing, and it is sometimes in extending this, or going against it, that interesting musical cultures occur. This is the case with the “extended techniques” of the flute, with multiphonics and all the noises that are made with instruments and that afterwards came to be part of a language of contemporary music. The interesting thing is not just the noise itself, but the fact that the noise is made with a violin or with a flute – with a traditional instrument. Therefore, returning to the idea of error, it is doing something for which the instrument was not designed.
Something I have been trying to do recently is to programme the computer in more or less the same way; that is, forgetting the idea that the computer is a universal machine that may have an orchestra of sounds inside it, and trying to do the opposite: much simpler things in terms of sound, but things which give me the chance, as a performer and as a person on stage with a computer, to react to a specific system. I believe that there are a number of people researching in this field, and there’s a very interesting aspect to it, which is to think about what music with computers in real time might become. If we look at the instruments of new technology that have been invented, there is the paradigm of a sound source and a sound controller, and MIDI controllers embody this paradigm. That is to say, the controller makes no sound; it sends parameters, and there is some other machine that receives the parameters and makes the sound. I don’t think this has anything to do with the way most cultures create music; there isn’t this abstract parametrization of saying “this is the rhythm, these are the notes, this here is the timbre” and then rendering them sonically. This, for me, is a paradigm of no interest. Using new technologies, there are new possibilities of creating other paradigms, but we will have to cease to view the computer as a universal machine and create specific systems that do only a particular thing, in order then to allow musicians the possibility of reacting to these specific systems as performers.
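One way to picture such a specific system is the following hypothetical sketch – not an instrument from the duo’s practice – of a software instrument that does exactly one thing and deliberately resists easy control:

    import math

    class OneStringDrone:
        # a hypothetical instrument that does exactly one thing: sustain a
        # single detunable tone - no note list, no rhythm or timbre parameters
        def __init__(self, fundamental=110.0, sample_rate=44100):
            self.fundamental = fundamental
            self.sample_rate = sample_rate
            self.phase = 0.0
            self.bend = 0.0  # the only performable control, in semitones

        def press(self, amount):
            # nonlinear response: small gestures do little, large ones a lot,
            # so the instrument resists being "easy to play"
            self.bend = (amount ** 3) * 2.0

        def next_sample(self):
            freq = self.fundamental * 2 ** (self.bend / 12.0)
            self.phase += 2 * math.pi * freq / self.sample_rate
            return math.sin(self.phase)

Here gesture and sound production belong to one object rather than a controller and a separate synthesizer, and whatever musicality emerges comes from the performer’s reaction to the constraint.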
