1. INTRODUCTION
This article discusses the live coding environment Sonic Pi (Sonic Pi n.d.), first released in 2012 (Blackwell, Cocker, Cox, McLean and Magnusson Reference Blackwell, Cocker, Cox, McLean and Magnusson2022: 184), and its inclusion as part of a new music production foundation pathway at a UK university. Given the maturity of live coding and Sonic Pi’s reliability and accessibility, this was an opportunity not only to broaden music production teaching but also to develop students’ (digital) music practice. This article considers live coding’s presence on the new foundation pathway as an opportunity to introduce this practice at an early stage of a music degree programme with the aim of combining creative music work with adaptive career identities (Bridgstock Reference Bridgstock2013: 131). These adaptive career identities are formed not just through creative work but also through what Walzer refers to as changing ‘fluencies’ (Walzer Reference Walzer, Powell and Smith2022: 246 original emphasis). This is a changing context that Walzer views as one of the advantages of music technology education. However, as will be discussed later, my aim with this type of approach is to also include aspects of criticality within production teaching rather than deferring this to other, non-production-related modules. In this sense, where Walzer considers whether ‘social justice and inclusion inform our curricular and pedagogical choices’ (ibid.: 247), I would go further and argue that they are always present. Production is always political, whether the teacher chooses to acknowledge it or not. Therefore, the decision not to include these concerns is itself a non-neutral decision.
This article comprises three broad discussion areas. The first is to provide a short account of live coding at my current institution coupled with a somewhat longer account of my fragmented journey into live coding to give some context. This section includes other influencing factors, such as modular synthesis, generative programming approaches such as Ableton’s follow actions (Sasso Reference Sasso2010) and mixed hardware and code platforms such as Monome’s Norns (Boon Reference Boon2020a).
Second, I provide some information about the Foundation, its structure and how it fits into the overall degree programme, as part of what I refer to as a hybrid production approach. My use of hybrid in the context of this article simply denotes a mixture of two or more elements. Cocker states that live coding is a ‘hybrid – even liminal – practice’ (Cocker Reference Cocker2016: 107) whose users act upon and intersect with various disciplines. As an example, I briefly highlight DJ_Dave’s (n.d.) live coding and production approach, which aligns well with music producers and the course. By advocating for this approach, foundation-level production students gain experience throughout the production process, from working in the digital audio workstation (DAW) to performing their work. This provides them with early explorative opportunities and lays the groundwork for a variety of public-facing (enterprise) opportunities.
Lastly, I outline what I view as a critical opportunity in production and live coding teaching. From my perspective, the inclusion of the Amen Break in Sonic Pi’s library, together with its positioning in many DAWs and sample libraries, its presence in live coding and its use by various artists (live coding or not), opens up a number of critical discussion points. This highlights an interesting moment of criticality for live coding, especially where breakbeats are used, even when cut up and transformed.
2. A (POTTED) INSTITUTIONAL HISTORY OF LIVE CODING AND INDIVIDUAL JOURNEY
The University of Westminster is a post-1992 institution. However, its history starts with the founding of the Regent Street Polytechnic in 1838, the first institution of its type to be founded in London, which received Royal Charter status in 1839. As well as the Regent Street campus, the university has campus locations in Fitzrovia, Marylebone and Harrow. Harrow is the location for most of the arts and communication-based courses. In addition to the London location, the university also runs an international university based in Tashkent, Uzbekistan. The music course was first established in 1993 as a BA Commercial Music. It has transitioned and adapted over time and is now called BA Music: Production, Performance and Business. I was part of the original teaching team that was headed up by Norton York (RSL 2023) and John Eacott (Eacott Reference Eacott2011). To date, the music department offers the aforementioned three-year undergraduate degree, a foundation pathway and three MA courses: Audio Production, Music Business Management and Live Music Business Management. We also have a partner course with Community Music (2023), whose students, on successful completion of their two-year HND, can top up their award to the BA at Westminster.
The music department has some prior history with live coding, mainly using SuperCollider. The university hosted a number of SuperCollider workshops and summer schools convened by John Eacott from the early 2000s onwards (Blackwell, Cocker, Cox, McLean and Magnusson Reference Blackwell, Cocker, Cox, McLean and Magnusson2022: 26). SuperCollider was also used for some undergraduate modules. The first was an Introduction to Algorithmic Audio Development module for second-year students on our Music Informatics BSc. The second was Introduction to Algorithmic Music and Composition Systems for first-year students and was offered as an option module for students on BA Commercial Music and BMus Commercial Music Performance degrees. Both modules were taught by John with small, engaged student groups. In the current context, live coding is being established using Sonic Pi. Sonic Pi was chosen as the entry point partly due to its well-documented tutorials (Sonic Pi n.d.) and the working examples included in its Integrated Development Environment (IDE), coupled with a comprehensive language library.
In 1995, second-year music students also had opportunities to study modules introducing them to basic concepts of hypertext and hypermedia. This was facilitated by our MA in Hypermedia convened by Andre Ktori. This allowed music students to explore various possibilities of mixed media in the early days of CD-ROM development and interactivity. Two main systems were used at that time. The first was HyperCard, a system for creating stacks of interlinked visual pages. The most famous HyperCard project was the adventure game Myst, released in 1993 (Parish Reference Parish2017). Card stacks could be linked, not just by user interaction such as clicking on navigational links, but also by using HyperCard’s scripting language, HyperTalk (see Lasar Reference Lasar2019 for more detailed discussion).
The second system was Macromedia’s Director (later purchased by Adobe). Director’s history starts in 1985 as MacroMind’s VideoWorks, renamed as Director in 1989 and subsequently rebranded as Macromedia Director in 1993. It also had a scripting language called Lingo, which was introduced in 1990 with Director version 2.0. Thus, as a visiting lecturer at that time, my introduction to these two authoring systems stimulated an interest in coding, ultimately focusing on Macromedia’s Director and the Lingo language developed by John H. Thompson (Thompson n.d.).
Director offered three types of script:
- Behaviour scripts – these were attached to objects (known as sprites in Director speak) such as images and User Interface (UI) items. As well as providing interactivity for users, these scripts also allowed objects to interact with each other, such as for collision detection.
- Movie scripts – these were available throughout the program. They could hold global variables and run setup routines when a program started or ended. As Director used a timeline-based approach, these scripts could also be attached to individual frames.
- Parent scripts – these could be used to create new objects, similar to creating and instantiating a class in other object-oriented languages.
As well as its scripting language, Director also had what it termed ‘Xtras’, which were extension plugins. With these, users could make multimedia works using a mix of interactivity, code and pre-made resources such as audio files, similar to HyperCard. Director’s capabilities to code music, at least for me, were minimal until the release of SequenceXtra by SourceForce, later purchased by Sibelius (2022). While there were some music Xtra plugins available, such as Beatnik’s Headspace (Smith Reference Smith2017), these were primarily a means of ensuring sounds played back as intended across various computer systems. SequenceXtra allowed me to code real-time generative pieces with the timing handled by a robust sequencer. Early musical examples I made were Finite State Machines, harmony and rhythm generators, improvising systems and 12-tone generators.
For developers, Director also offered a message window as a means of debugging and inspecting variable values during a program’s execution. This meant that one could interact directly with the program and any instantiated Xtras using any of the three script-based mechanisms. This capability to send code in real time to the application was my first attempt at live coding. A book that was influential on my comprehending the power of the message window and object-based programming was Peter Small’s Lingo Sorcery (Small Reference Small1996). Small’s basic premise was that objects could be created in code and that messages could be sent to them to perform functions. So these objects could be ‘asked’ to fetch things on the net or from the local file system. Small’s view, based on his long experience working in the field of genetics, was that these objects could be the basis of intelligent agents. I applied this type of programming approach to SequenceXtra with varying degrees of success. Unfortunately, Sibelius purchased the plugin and ceased development on the Director platform. Adobe subsequently purchased Macromedia and prioritised Flash, which meant that Director’s time was coming to an end.
Following on from this, my interests in generative music were applied in three areas. The first was Ableton Live, which combines a DAW with follow actions (simulating a probability system), dummy audio clips (which, when combined with follow actions, can process incoming audio) and instrument racks. When these approaches are applied to MIDI and audio (recordings, samples and FX processing), it is possible to create pseudo-generative pieces. The only issues using this system concern those of ‘intelligence’ and ‘memory’. What I mean by this is the sharing and communication of information, such as state, between objects. However, despite this, it is quite possible to build interesting pieces of music using this somewhat limited implementation. Ableton’s more recent inclusion of CV tools, implemented using Max/MSP, goes some way towards improving state-based changes and musical behaviour via the inclusion of analogue logic operators such as AND, NOT and OR. I have documented some of this work as video (Boon Reference Boon2020c), at conferences (Boon Reference Boon2022a) and in articles (Boon Reference Boon2021b).
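Although follow actions are configured in Live’s interface rather than in code, the underlying idea of state-based, weighted movement between clips can be sketched in a few lines of Sonic Pi. The following is a rough analogy rather than a reproduction of Live’s behaviour; the phrase names, probabilities and synth choice are illustrative only:
# A rough analogue of follow actions: a weighted choice decides
# whether the current 'clip' repeats or hands over to the other.
# Phrases, weights and synth are placeholders for illustration.
use_bpm 120
use_synth :pluck

define :clip_a do
  play_pattern_timed [:c4, :e4, :g4], [0.25, 0.25, 0.5]
end

define :clip_b do
  play_pattern_timed [:a3, :c4, :e4, :a4], [0.25, 0.25, 0.25, 0.25]
end

set :current_clip, :a

live_loop :follow_actions do
  if get(:current_clip) == :a
    clip_a
    set :current_clip, (rand < 0.7 ? :a : :b)  # 70% repeat, 30% move on
  else
    clip_b
    set :current_clip, :a                      # always return to clip A
  end
  sleep 1
end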
The second area of generative music is modular synthesis using the Eurorack system. Here my focus is on modules such as shift registers (Boon Reference Boon, Hepworth-Sawyer, Paterson and Toulson2021a: 162–4) and, more recently, the Stochastic Inspiration Generator (Stochastic Instruments 2021). This uses statistical probability to control notes using a variety of parameters. Again, these systems have no shared awareness but Eurorack systems can achieve state control via various analogue and digital logic controllers, which can be influenced by a variety of modulation sources. Eurorack modules such as Ornament and Crime (Stadler, Dowling and Churches Reference Stadler, Dowling and Churches2016) make use of shift registers offering a variety of algorithmic means to generate Control Voltage (CV) outputs (Boon Reference Boon, Hepworth-Sawyer, Paterson and Toulson2021a: 165–6). These include Turing Machine (Whitwell Reference Whitwell2012), Integer Sequences and ByteB. This latter algorithm is a variation of bytebeat equations that generate semi-fractal note values. Bytebeat equations usually generate audio (greggman n.d.) but for Ornament and Crime’s implementation they generate CV values.
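Although Ornament and Crime implements these algorithms in hardware, the shift-register idea itself is compact enough to sketch in code. The following Sonic Pi sketch is a loose, Turing Machine-style illustration rather than a description of the module’s firmware; the register length, mutation probability, scale and synth are arbitrary choices of mine:
# A Turing Machine-style shift register: a short register of notes
# loops continuously and, with a small probability, one value is
# rewritten each cycle so the melody slowly mutates.
# All parameters here are illustrative only.
use_bpm 110
use_synth :tb303

notes = (scale :e2, :minor_pentatonic, num_octaves: 2)
register = Array.new(8) { notes.choose }

live_loop :turing do
  play register.first, release: 0.2, cutoff: rrand(70, 110)
  register = register.rotate(1)                # step the register
  register[-1] = notes.choose if one_in(8)     # occasional mutation
  sleep 0.25
end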
My work using Eurorack has also influenced my approach to Ableton Live and Sonic Pi. I have adapted modular synthesis ideas native to the Buchla system, such as Todd Barton’s Krell Music (Barton Reference Barton2012), and made versions using Sonic Pi (Boon Reference Boon2021c). Eurorack has also provided me with opportunities to explore performance approaches that are semi-improvised (Boon Reference Boon, Hepworth-Sawyer, Paterson and Toulson2021a: 167–71). Of particular interest was Suzanne Ciani’s report to the arts in 1975 in which she outlined a variety of modular performance approaches (Ciani n.d.). She described her Buchla as ‘a hands-on compositional tool’ (Campbell Reference Campbell2019) saying that she arrived at her combination of prepared voltages and performance as the ‘seemingly inevitable consequences of an Arbitrary Function Generator meeting a Sequencer’ (Ciani n.d.: 8).
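To give a flavour of how such Buchla-derived ideas translate into code, the following is a simplified Krell-style sketch along the lines of the Sonic Pi versions mentioned above (Boon Reference Boon2021c): each event chooses its own pitch and duration, and the end of one event triggers the next. The synth and parameter ranges are illustrative rather than definitive:
# A minimal Krell-style patch: every event picks a random pitch and
# a random duration, and the next event begins as soon as the current
# one ends. Synth choice and ranges are illustrative only.
use_synth :dark_ambience

live_loop :krell do
  dur = rrand(0.5, 4)                 # event length in beats
  note = rrand_i(36, 84)              # wide random pitch range
  play note, attack: dur * 0.5, release: dur * 0.5,
       amp: rrand(0.3, 0.8), pan: rrand(-1, 1)
  sleep dur                           # self-triggering: the next event follows immediately
end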
More recently I have also started to explore the Monome Norns system, which uses a combination of SuperCollider for synths and Lua for programming hardware and UI. One particular app that I use quite a lot in performance is the Dual Step Sequencer called Awake (Boon Reference Boon2020a; Boon Reference Boon2020b). Again, I have coded this in various ways using Sonic Pi and, much like using the Norns, altering notes and sequence lengths produces differences in real-time performance (Boon Reference Boon2022c).
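As a simple indication of what recoding a dual step sequencer in Sonic Pi can look like, the sketch below runs two rings of different lengths in parallel; editing the notes, ring lengths or sleep values while the loops run changes the piece in performance. The note values here are placeholders rather than Awake’s defaults:
# Two step sequences of different lengths running in parallel, loosely
# in the spirit of a dual step sequencer. Notes and timings are
# placeholders; edit them while the loops run to vary the piece.
use_bpm 96
use_synth :pluck

melody = (ring :c4, :e4, :g4, :a4, :g4, :e4)   # 6-step upper sequence
bass   = (ring :c3, :g3, :a3, :e3)             # 4-step lower sequence

live_loop :upper do
  play melody.tick, release: 0.3
  sleep 0.25
end

live_loop :lower do
  play bass.tick, release: 0.6, amp: 0.7
  sleep 0.5
end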
I recognise that the account I have written only reflects part of my ongoing experience as a music creator and educator. My lived experience, as a person of mixed heritage and member of the African diaspora, has resulted in many confrontations where I am frequently told to leave the UK or that I do not belong here, which has also been said in the academy. This is an example of what Homi Bhabha refers to as ‘domination … achieved through a process of disavowal’ (Bhabha Reference Bhabha, Ashcroft, Griffiths and Tiffin1994: 33). These experiences are a type of oppositionality, in that they are opposed to me, and I have experienced them since childhood. While these are not a norm, as I refuse to normalise them, they function as pressure and presence (ibid.: 32). This pressure and presence is experienced not just as confrontational interactions with the man in the street but also in academia. In my working life (academic and musical) I experience forms of exoticisation that make me valuable but only under certain conditions and circumstances. Within this I also experience denial in respect of my mixedness, my hybridity if you will, in that I face claims that I must be from somewhere else and, therefore, not from here (the UK), thus what I claim cannot be truthful. Bhabha characterises these sorts of experiences as ‘disposal-as-bestowal and disposition-as-inclination’ (ibid.: 32). Therefore, my presence is useful, such as for surveys on staff ethnicity and diversity, open days and conference programmes.
In this section, I have attempted to show my path into live coding and the role of the university as a general host location for live coding activities. I have also shown the way that different systems have contributed to my development and understanding, and continue to influence my approaches to music performance, production and coding. I have also attempted to communicate, albeit briefly, some of my lived context (see Boon Reference Boon, Powell and Smith2022b for an example incident as a student). In the next section I discuss the Foundation and how live coding fits within the overall foundation programme.
3. FOUNDATION RATIONALE
Westminster’s foundation pathways were initially devised for subjects such as Business, Law, Life Sciences and Art and Design, which were validated in 2018. The aim of these foundation pathways is to ‘develop high-quality and relevant learning, building an excellent student experience for learners who are transitioning to University level study from a diverse range of educational backgrounds’ (University of Westminster 2018: 2) based on an academic model ‘designed to support authentic-learning and learner-autonomy across the curriculum at University level’ (ibid.: 3).
Authentic learning and learner autonomy are recurring aspirational ideas in education in general and in live coding systems in particular. For example, EarSketch identifies an approach to authentic learning described as ‘thickly authentic’ that allows for a ‘personally creative approach’ (McKlin, Magerko, Lee, Wanzer, Edwards and Freeman Reference McKlin, Magerko, Lee, Wanzer, Edwards and Freeman2018: 987). Yet the question of authenticity for any teaching and learning situation is: what type of authenticity is being foregrounded? EarSketch’s aim is ‘providing [students] the opportunity to quickly begin coding and creating music in an environment perceived to be authentic by students’ (Wanzer, McKlin, Freeman, Magerko and Lee Reference Wanzer, McKlin, Freeman, Magerko and Lee2020: 397–8). It does this by replicating the timeline of the typical DAW, while directing students towards creating music via a pre-encoded block (samples) mode of working, thus achieving a constrained speed of practice. Yet, authentic music is also created in opposition to conventions and norms, which Théberge refers to as ‘explicit rejection’ (Théberge Reference Théberge, Frith, Straw and Street2001: 4). These are also authentic modes of creative, musical working. These types of oppositional practice also include the misuse of technologies where ‘New techniques are often discovered by accident or by the failure of an intended technique or experiment’ (Cascone Reference Cascone2000: 13). Therefore, any assertion of authenticity of practice is contestable without also acknowledging that works are also created in opposition to conventional approaches. By their very necessity, computer programming languages operate via conventional means of organisation. This pre-determination is an action undertaken by developers, EarSketch and other live coding applications included, where ‘Decisions-embedded-in-design have significant ramifications’ (Caplan et al. Reference Caplan, Donovan, Hanson and Matthews2018).
4. MUSIC FOUNDATION STRUCTURE
As of September 2022, there were nearly 800 students enrolled on all foundation pathways at Westminster. Two foundation academic modules are shared by all pathways. This approach means that music foundation students study alongside students from other art and related subject disciplines. This can assist students in building friendships and potential networks across related disciplines and courses, including courses such as Contemporary Media Practice, Animation and Photography, which gives some indication of the potential networking and working contexts available to early-stage practitioners.
The aim of the Music Foundation aligns with the main music degree, which offers pathway specialisms of production, performance and business. A decision was taken quite early on that the music foundation would focus on production, using Logic Pro, as a means of providing students with a solid grounding in the means of production and making music using a DAW. In our understanding, candidates selecting the foundation pathway fall into two main groups:
1. Those with little confidence in making music, as producers. This includes many singers/performers reliant upon other more experienced producers to assist in getting the work done. This group also includes performers who either purchase beats online or make frequent use of YouTube tracks offered as royalty free.
2. Those who have been out of education for a while and require a solid academic underpinning to build their confidence.
The choice to include live coding was to offer an alternative and complementary experience to DAWs, primarily one that would also get producers performing. This is especially important as recording projects can be adapted to a live coded performance. A good example of this approach is DJ_Dave. Her songs start in Logic, occasionally in Sonic Pi, with arrangements recorded, mixed and mastered in Logic (SongPsych 2021). This becomes the released track. The individual tracks and parts are then exported from Logic, either as one-shots or as short loops, and then combined into a live coded performance using Sonic Pi. There is also another advantage to this approach for production students. Watching live coding performances, with their show-your-code approach, is what Biggs and Tang refer to as an ‘active demonstration’ (Biggs and Tang Reference Biggs and Tang2011: 181 original emphasis). For students, using live coding and showing their code is both performance and a performance of understanding (ibid.: 74–5). Whether used for assessment or not, this approach does not suffer from the decontextualisation that other modes of assessment can experience (ibid.: 182; Wiske Reference Wiske, Leach and Moon1999: 240). This is one type of approach to live coding and production that provides students with opportunities to make creative work and to disseminate their work in various formats. These include platforms such as Soundcloud, Bandcamp and Spotify. They can make their code and samples available under a Creative Commons licence. They can also perform the song as a live coded performance, which they can record as video or present as part of a live-streamed event. All are further evidence for and of ongoing assessment (Wiske Reference Wiske, Leach and Moon1999: 242).
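In practical terms, the Logic-to-Sonic Pi workflow described above can be as simple as bouncing each track as a loop and re-assembling the loops in live_loops. The sketch below assumes hypothetical exported files (a drum loop and a bass loop) sitting in a placeholder folder; the path, file names and loop length are invented for illustration:
# Re-assembling DAW stems as a live coded performance. The folder and
# file names are placeholders for whatever has been exported from Logic.
stems = "/path/to/exported/stems"      # hypothetical folder of bounced loops
use_bpm 120

live_loop :drums do
  sample stems, "drums", beat_stretch: 8   # stretch the loop to 8 beats
  sleep 8
end

live_loop :bass, sync: :drums do
  sample stems, "bass", beat_stretch: 8, amp: 0.8
  sleep 8
end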
The music foundation, including the two core academic modules managed by the university, comprises six modules (Table 1).
The introduction modules in semester 1 provide the grounding in production skills, in understanding how artists develop and build their audiences, and in academic work. The production module consists of weekly lectures, practical lab work and learning live coding. Both the production and live coding teaching run for 12 weeks in parallel and are supported by videos that students can access via our Learning Management System (LMS), Blackboard, as well as links to external sources on YouTube and LinkedIn Learning. More detail on specific live coding teaching is covered later.
The Artist development module is split equally between lecture and practical work/seminar groups. The aim of the artist development module is to begin the process for foundation students to understand the formation of a creative identity where ‘In the main, artists’ careers are individually constructed in an ongoing and unfolding way’ (Bridgstock Reference Bridgstock2013: 124). As part of their coursework for this module, students deliver a podcast or documentary based on their research into either an artist or producer, established within the last three years, who is of interest and career relevance to the student. The selected artist forms the basis for their exploration and discussion of areas such as the artist–fan relationship, the role of artistic outputs in facilitating communication specific to style and genre, and broader areas such as sustainability, equality and diversity.
The Introduction to Academic Practice module is one of the required modules that all foundation students take. The module is run in several versions across all campuses. For Arts students based at Harrow, the module has 105 students enrolled drawn from Art, Architecture, Fashion, Photography, Design and Music. This module provides students with opportunities to create various items of academic work within their own disciplines. This work includes the traditional essay and annotated bibliography, as well as creating audio, video and poster-based artefacts. During this first semester, student work has also included paintings, a magazine, architectural plans and maps.
The semester 2 modules, which had not yet run when this article was submitted, follow a slightly different plan. Becoming a Digital Practitioner for Music is divided into five thematic areas covered in two-week blocks. The planned activities include:
1. Live coding as production and performance.
2. Running an online channel such as YouTube or Twitch, including live streaming using OBS.
3. Live coding visuals using Hydra.
4. Making videos such as visualisers.
5. Exploring AI using Google Magenta Tools.
For items such as running an online channel we have invited one of our graduates who runs a reasonably successful music practice using both Twitch and YouTube. Her presence will also assist music foundation students in understanding this aspect of their potential career and identity building as a part of their degree rather than something that takes place post-graduation. By starting this sort of work early, students can take advantage of not just the facilities and expertise of tutors, but also experiences and skills of their cohort and of our alumni.
AI in music is in constant development. While the practical teaching focus is narrowly on production tools such as Google’s Magenta Tools and Differentiable Digital Signal Processing (DDSP), taught content will also cover a broad range of AI applications such as music information retrieval, AI as assistant and AI as competitor. Discussions of what it means to use these tools and the implications of platform-based working will also be covered (Fisher Reference Fisher, Fuchs and Mosco2016; Schwarz Reference Schwarz2017; Wittel Reference Wittel, Fuchs and Mosco2016; Zhang and Negus Reference Zhang and Negus2021). Of direct relevance is Nygren and Gidlund’s observation that ‘Digital technology, in addition to being related to the labour sphere as industrial technology, is also related to the private sphere and ideas of individualization’ (Nygren and Gidlund Reference Nygren, Gidlund, Fuchs and Mosco2016: 398 my emphasis). Therefore, this module aims not only to introduce students to different tools and modes of making and dissemination but also to acknowledge Pacey’s observation that technology is not culturally neutral (Pacey Reference Pacey1999: 3). A good example of unpacking this position is Crawford and Joler’s critical web-based document ‘Anatomy of an AI’. They created an ‘anatomical map of human labor, data and planetary resources’ (Crawford and Joler Reference Crawford and Joler2018) that constituted the Amazon Echo device.
The Creative Project module functions as a capstone project. Students are offered one of two choices for their project focus, both of which are flexible enough for students to use their own initiative and sensitive to cultural and production-tool differences. The two working pathways are:
1. Original song productions – consisting of two new songs. Students can produce these in a DAW of their own choice. This is usually Logic but can also be Ableton Live or Fruity Loops.
2. Sample library – consisting of 16 meaningful samples (eight samples equivalent to one song).
For both types of project we envisage that some students will elect to produce live versions of their songs and/or create new music from their samples using Sonic Pi. Irrespective of which type of work students conduct – sample library or song – they will also create a range of supporting artefacts, such as video and streaming events, to show and promote their work. Their creative project work was also exhibited alongside that of students on the Foundation for Art and Design, as part of a final year show in late April/early May 2023 at the Harrow campus.
Both semester 2 modules complement the module Critical Thinking for Academic and Professional Development, which is the other required module. Not only do these modules engage with form and content but they also explore modes of communication and distribution within contemporary culture. Alongside this, the music course team are also aware that the choices we have made are not neutral. All three modules discuss matters of mental health and balancing this with academic and professional work (Gross and Musgrave Reference Gross and Musgrave2020). For music students this is also important due to their use of online platforms to build and engage with audiences.
Combining creativity, identity work and production into a degree programme also engages with what Moir identifies as ‘“capitalistic creativity” – doing or creating services or things deemed necessary for their exchange value by our neoliberal, market-driven, consumerist society’ (Moir Reference Moir, Powell and Smith2022: 303). Yet, I would contrast Moir’s point with one drawn from my own teaching practice. Drillminister’s ‘Nouveau Riche’ (Drillminister 2020) is a critique of neo-liberalism and trickle-down economics dealing with its political, social and class-based themes. Class is important in music due to the declining representation of the working classes in creative occupations. In fact, researchers have identified that ‘cultural and creative occupations are not, and have never been, exceptionally open … remaining consistently unequal since the 1970s’ (Brook, Miles, O’Brien and Taylor Reference Brook, Miles, O’Brien and Taylor2022: 2). This is an area of concern for universities because ‘being from a working class background presents students – even once they have gained access to university – with multiple, intersecting and mutually reinforcing obstacles’ (Hale Reference Hale2020: 93). Thus, drill music (an absence in live coding raised by Blackwell et al. Reference Blackwell, Cocker, Cox, McLean and Magnusson2022: 239), from my perspective, transcends observations that, for example, Sonic Pi seems to favour ‘Electronic Dance Music’ (Angel and Ogborn Reference Angel and Ogborn2022). While drill is one of many forms of electronic (dance) music, it also has a strong political and/or sociocritical communicative focus, especially through its lyrics. Its political effectiveness is not disqualified due to the means used to make the music. There is also a tendency in music teaching, as can happen in other study areas such as Metal Studies, to discount or devalue lyrics by ‘either dismissing them or failing to address them with any sophistication’ (Fletcher and Umurhan Reference Fletcher and Umurhan2019: 13). In many ways, which live coding environment, DAW, or even instrument, is used is secondary to creative work that is directed towards developing this critical communication approach.
In this section, I outlined the structure of the music foundation, some of its curriculum and approach to student work and working practices. The modules, as well as introducing students to music production, also intend to assist students in understanding their practice and the formation of a creative identity. In the next section I outline how production and live coding fit together.
5. OUTLINING DRUM PROGRAMMING AND LIVE CODING
To complement their music production teaching, foundation students were introduced to drum programming using live coding. The purpose was not to wean students away from their DAW of choice but instead to show where different applications could be used to their advantage. The production teaching sessions cover a variety of drum programming approaches including real time, manual entry, the drum step sequencer, using loops and Logic Drummer. While I am not going to show lots of code, I will cover two types of drum programming introduced to students with live coding. The first, referred to as XOX, has its parallel with step pattern programming found in DAWs. The second example is Sonic Pi’s implementation of Euclidean rhythms using the spread function, which is not native to Logic without specialist plugins or scripts (Perkins Reference Perkins2021).
The XOX method (Boon Reference Boon2022d; Boon Reference Boon2022e) is an approach that has its history in the early days of sequencing music using electronic instruments and trackers, where the X denotes a trigger and the O is a rest. In some ways it is similar to other notation methods that have developed for devices in electronic music noted by Davis (Reference Davis2022: 11). In Sonic Pi the O is replaced with a dash as this is symbolically less ambiguous when coding. The advantage of using the XOX approach is twofold. First, there is no requirement to calculate different sleep values. Second, the XOX method contains both events and rests. This is advantageous because events and rests are processed at the same resolution, which is critical in a live environment.
The XOX implementation in Sonic Pi requires a function to process a string of text consisting of patterns of either x or -. Thus, students are introduced to defining a function to process the patterns. The following example shows this function in the context of a kick drum pattern known as Jersey Club (Future Audio Workshop n.d.). A single define function can be used to process any number of drum patterns:
use_bpm 122

define :pattern do |patt|
  # step through the pattern string and return true on an 'x'
  return patt.ring.tick == "x"
end

live_loop :kick do
  sample :bd_haus if pattern "x---x---x--x-x--"
  sleep 0.25
end
Students are also introduced to using the spread function, which is Sonic Pi’s implementation of Euclidean rhythms (Boon Reference Boon2022f). This is one of the most powerful aspects of live coding drum patterns. Given the simplicity of the code, students are able to immediately create variations such as (7, 15) or (11, 23), which gives them the experience of ‘an immediate code and run aesthetic’ (Collins, McLean, Rohrhuber and Ward Reference Collins, McLean, Rohrhuber and Ward2003: 321).
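In its simplest classroom form, the booleans returned by spread are consumed directly with tick inside a live loop; a minimal sketch (the sample choice here is arbitrary) looks like this:
live_loop :euclid_kick do
  # trigger on the 3 onsets that spread distributes across 8 steps
  sample :bd_haus if (spread 3, 8).tick
  sleep 0.25
end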
As well as evaluating when to play or trigger a sample, the on: opt can also be used in conjunction with the not operator, !, to play the off-beats generated by the spread function. One of the uses for this, in conjunction with panning, is having two sounds play complementary rhythmic parts:
live_loop :euclidpatt do
  tick
  t = (spread 3, 8)
  # on beats
  sample :perc_snap, release: 0.2, pan: -1, amp: 0.3, on: t.look
  # off beats
  sample :elec_flip, release: 0.2, pan: 1, amp: 0.3, on: !t.look
  sleep 0.25
end
In this section, I outlined two drum programming approaches covered as part of the production curriculum. For any interested readers, the complete set of videos is available on YouTube (Boon Reference Boon2022c). In the next section I discuss an opportunity in live coding that moves beyond what Blackwell and colleagues call ‘process and technique’ (Blackwell et al. Reference Blackwell, Cocker, Cox, McLean and Magnusson2022: 231).
6. BREAKBEATS AND LIVE CODING
Live coding is fond of the breakbeat. In fact, music production makes use of breakbeats in both conventional and distinctive ways. Sonic Pi ships with a breakbeat sample from The Winstons’ ‘Amen Brother’ (The Winstons 1969) ubiquitously known as either the Amen Break or the Amen. This breakbeat has appeared in many musical contexts, in hip hop and drum and bass in particular. The emergence and establishment of these styles has influenced other artists such as Slipknot, David Bowie and Skrillex to use the break. The Winstons’ drummer, Gregory Coleman, died in 2006 and, along with the rest of the band, never received royalties from the track’s use (Souppouris Reference Souppouris2015). Coleman has also not received acknowledgement on the five thousand or more releases that have used his drumming (Brown Reference Brown2020). Therefore, the sample’s presence in Sonic Pi’s library, as well as other DAW and sample libraries, becomes a moment of criticality. It enables a discussion not just about music production technicalities but also about acquisition, extraction, copyright, how samples are obtained, what their ongoing use means, lack of recognition for the original performers and what it means when the sample is shipped with either paid or free software. This includes the software provided by the university for students on the Foundation course, such as Logic Pro, of which, as of this writing, most students have their own copy. It is important to appreciate that this moment of criticality applies equally to commercial DAWs and to free software. Free software makes computer music-making accessible to those less able to pay. However, free or not, all are equally implicated. Yet, the purpose of this current discussion is to understand the role of an audio loop, taken from a copyrighted performance, which circulates as a form of commons without attribution and is used to generate a new copyrightable artefact. Key questions become: what are live coding performers participating in and signifying when they use it? How do ideas of self-expression and creativity change once the domain of the break becomes a critical event?
Some guidance on the use of sampled breaks can be found in Schloss and Chang’s research into hip-hop producers, which I have summarised here (from Schloss and Chang Reference Schloss and Chang2014: 100–30):
1. One should not sample material that has been recently used by someone else.
2. One should not sample records one respects.
3. Records are the only legitimate source of sampled material.
4. One should not sample from other hip-hop records.
5. One should not sample from reissues or compilation recordings of songs with good beats.
6. One should not sample more than one part of a given record.
By any system of evaluation, the use of the Amen Break without transformation does not align well with these principles.
Sample use can also be compared with advocates of the mashup and remix in what is referred to as the hybrid economy (Lessig Reference Lessig2008). This focus reveals differences of value between various producers and the contestability of terms such as ‘original creativity’ (ibid.: 81–97). Yet Lessig’s argument for the relaxation of copyright is at odds with Hesmondhalgh, who states that relaxation in copyright laws ‘may not always favour the interests of musicians from less powerful social groups’ (Hesmondhalgh Reference Hesmondhalgh2006: 53). This dichotomy returns the discussion back to The Winstons and their continued lack of remuneration, recognition and rights, counterposed against their situated embeddedness and entanglement in many musical works.
Similarly, and arguably less transparently, what happens when these breaks show up in commercial libraries or are used to extract groove maps? Collins’s discussion of algorithmic breakbeats elides the political and marginalised context of how and where breaks are obtained, thereby treating recordings as a somewhat neutral and de-politicised medium in the service of algorithms (Collins Reference Collins2001). From my perspective, the inclusion of the Amen Break in Sonic Pi’s library opens up all these discussion points and engages students with a practice that queries the right of anyone to appropriate without attribution and remuneration. This point is amplified with the introduction of Artificial Intelligence and machine learning. While not much seems to connect Salt ’n’ Pepa, David Bowie, Public Enemy and Aphex Twin as practitioners, they are all bound to the story of the Amen Break when it is treated as a type of commons.
The approach outlined in this section is an example of what can be described as classroom talk (Mercer, Dawes and Staarman Reference Mercer, Dawes and Kleine Staarman2009), or, in this context, as studio- or lab-based talk. This mode of discussion resides within what can be termed ‘an exploratory domain’. This sort of talk, indicative of this type of situation, can be described as:
- containing challenges
- clarifications
- tending to use examples and illustrations
- potentially also introducing alternative ideas and what ifs (see Cocker Reference Cocker2016: 107–8 for a discussion on the latter).
This gives space for students to participate in reasoned and lively discussions while also coming to understand the issues in music practice, especially when compounded by the use of technology. It also provides the tutor with an opportunity to explore students’ current understanding of the topic. As is evidenced by Schloss and Chang (Reference Schloss and Chang2014), practitioners can and do take a variety of positions regarding the use of samples. Thus students can make use of these understandings, add to them, modify or even reject them.
The sampling approach outlined here is one example of bringing a ‘historical socio-cultural eye’ (Mantie Reference Mantie, Ruthmann and Mantie2017: 26) to the teaching context. Yet not only are these subjects discussed but they are also made even more tangible by the embeddedness of this sample’s use at an industrial scale. The sample is historical and sociocultural, as well as having economic and juridical consequences. My argument here is that, far from music using samples being somehow disengaged or evidence of a lack of musicality, samples are a tool of criticality. In many ways the Amen Break, due to its ubiquity, is subject to Feenberg’s Paradox of the Obvious: ‘what is most obvious is most hidden’ (Feenberg Reference Feenberg2010: 6 original emphasis). The Amen Break is obvious. Therefore its history, the recognition of its performer(s), its centrality as a defining sonic for many historical and ongoing musical outputs, including live coding, its lack of attribution and its ongoing role in a rights-based economy are hidden to almost all music-makers and listeners. The problem for all implicated in this system is their participation in what Ferguson refers to as ‘antiredistribution practices’ (Ferguson Reference Ferguson2014: 1105). While this tends to be an argument levelled at the recording industry, it implicates all practitioners due to the cultural embeddedness of the Amen Break. Likewise, field recordings from archives, such as those of the Lomaxes, will also bring issues for those sampling musicians looking for ‘new’ sounds to incorporate. One such example is Moby’s album Play (Osborne Reference Osborne2006).
A producer or software company’s participation in using the Amen Break, or any field recording for that matter, can be queried on any number of levels. It can be viewed as a commercial act or as a social act. It can be one that expresses membership or kinship. It can be for the purposes of signification, as well as signifying participation in a historical activity. However, none of these are neutral processes. None who use the break can claim neutrality, nor can the break be estranged from being acknowledged as a much-used source of musical material. Blackwell and colleagues state that ‘technology (and indeed live coding) is something that you do, not something that you simply consume or own’ (Blackwell et al. Reference Blackwell, Cocker, Cox, McLean and Magnusson2022: 243 original emphasis). However, the activity of using a breakbeat such as the Amen is something that is done and consumed, in works owned by others. This activity is rarely queried and results in the production of a ‘use-value which has exchange-value’ (Marx Reference Marx1976: 293).
7. CONCLUSIONS
As live coding progresses past its twentieth anniversary, both it and DAWs still remain somewhat distant from each other. While the culture in its early manifestation was to build everything, the live coding platform Sonic Pi presents a user interface to facilitate music coding that is free from many of the issues that may have deterred producers of that earlier time from adding this skill set to their productions. Aaron has described the requirement that anyone using Sonic Pi should be able to ‘turn it on and make a sound’, in contrast to what Aaron saw as a difficulty with SuperCollider, where ‘to make a sound you needed to design a sound’ (Elixir Newbie Reference Newbie2022). The recent addition of Ableton’s Link protocol (Ableton n.d.a) to Sonic Pi (in-thread 2022) opens up another area for teaching and practice, whereby groups of students can play together using desktop, laptop, hardware and mobile devices (Ableton n.d.b). There is also the potential for a piece to comprise a number of Link-enabled devices as a sort of (solo) performance and/or production ensemble.
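As an indication of how little code this requires, a minimal sketch, assuming the use_bpm :link support introduced with Sonic Pi 4, hands the tempo over to whatever Link session is running on the network:
# Hand tempo control to an Ableton Link session (assumes Sonic Pi 4's
# use_bpm :link support). Any Link-enabled app or device on the same
# network then shares the beat.
use_bpm :link

live_loop :shared_pulse do
  sample :bd_haus
  sleep 1
end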
From the perspective of Westminster’s music foundation, both DAW and code work comfortably alongside each other. Live coding, when used to augment or extend a production, means that a track can be taken to the stage and performed with. Live coding also encourages an approach that incorporates variation facilitated by code, rather than by musical instrument dexterity. This is a capability also shared with DAWs. Whether students choose to use Sonic Pi or a DAW, or use both along with a collection of software and hardware solutions, foundation students are encouraged to adopt ideas and methods of making do. In the post-pandemic world of music-making, live coding provides these students with an additional means of production and communication. As such, this approach ‘Recognize[s] and value[s] the countless ways of making do’ (Henn n.d.). Westminster’s music foundation aims to accomplish this by its broad approach to music production practice alongside the development of a communication approach.