“a : moving or extending in different directions from a common point : diverging from each other divergent paths” [1005 Divergent w]
This thesis deals mostly with a certain type of user experience. To describe it, an analogy with geometry comes in handy. When a user is doing something whose goal and methods are both clear, we can speak of a linear experience: the plan consists of going from point a to point b, as when following an assembly manual. A less linear activity is one whose methods are unknown but whose objective is clear, producing a path that is not straight but always converges on the same point b (such as winning a game). A divergent activity, however, is one in which neither the methods nor the target is known; it can follow any possible path and end up at any possible outcome. The starting point may always be a, but the end point can be any.
In the field of psychology, divergent thinking is associated with the mechanisms of creativity. The foundational work in this area is credited to J.P. Guilford. According to Marc A. Runco, the three most common criteria for assessing a subject’s divergent thinking capacity are fluency, originality and flexibility: fluency represents the number of ideas, originality represents the infrequency of those ideas in comparison with other test subjects, and flexibility represents the conceptual difference among the ideas given by the same subject. [2024 divergent p.401] Divergent thinking is relevant to this production thesis because the music making tool is intended to maximize the extent to which a user expresses divergently: the tool thus needs to allow musical expression that is fluent, original and flexible.
Unless stated otherwise, this thesis will use environment to mean a system that is intended for the creation of other systems. Examples of environments under this definition include programming languages, modular synthesizers and building toys. The environment may also include the medium in which the resulting systems can express their nature as systems: for example, an autopoietic system [2022 autopoiesis p.73+] utilizes the environment of the physical world and its chemical properties to manifest its quality of autopoiesis; such a system wouldn’t make sense as a computer program, since that is a different environment.
Since this thesis is about the creation of an environment, we are dealing with a three-layer system. It is useful to picture the layers as building one on top of the other, since there is a sequential dependence. The first layer, the environment, is the one discussed above. The system layer is where a user designs their own sequencing systems; in a programming language, the system layer is the written code itself. The outcome layer represents the outcome of the system layer; in the coding example, it would be the resulting execution of the code. Since we are dealing with a music making environment, I will refer to the outcome layer as the “musical layer”: the outcome studied in this project will be limited to the musical, so this alias makes for a more fluid read.
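The three layers can be sketched in code. This is a minimal illustration under assumed, hypothetical names (Step, Sequencer), not part of any actual implementation: the environment layer provides building blocks, the system layer is what a user assembles out of them, and the musical layer is what running that assembly produces.

```python
class Step:                      # environment layer: a building block
    def __init__(self, note):
        self.note = note         # a MIDI-style note number

class Sequencer:                 # environment layer: a block container
    def __init__(self, steps):
        self.steps = steps
    def play(self):              # running the system yields the outcome
        return [s.note for s in self.steps]

# system layer: the user's own design, assembled from environment blocks
my_system = Sequencer([Step(60), Step(62), Step(64)])

# musical layer: the outcome of executing the user's system
print(my_system.play())
```

The sequential dependence is visible here: the musical output cannot exist without the user’s system, and the system cannot exist without the environment’s building blocks.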
//figure: a stack of three layers with the three different layers music ||| system ||||| environment |||||||
Each of these layers can be referred to as a different domain, meaning that each layer can have a different set of terms, and each term can have a different meaning depending on which domain is being analyzed. For example, defective user interface feedback in the musical domain doesn’t imply a defective user interface in the system domain: the system may be well represented, while the way that system represents the musical outcome is defective.
There won’t be mentions of the environment domain, since the modification of this domain is done using a proven, Turing-complete programming language, which will be taken for granted.
//music domain: musical terms, and range of possible musical outcomes -----> subject to design
//system domain: modular system terms and range of possible systems ------> subject to design
//environment domain: programming terms and range of possible environments (in reality, every imagined environment is possible)
There is a reason why collective parties happen around repetitive music with predictable patterns. As found in James Andean & Alejandro Olarte’s experimentation: “it can be proposed that musical practice may have a higher degree of predictability, at least in the short-term, which has certain potential for engagement from the dancers” [URI 2021 2021-modular-system-design]. This characteristic is not exclusive to electronic music: every danceable style that is historically registered (e.g. Foxtrot, Cumbia, Waltz, Salsa, Flamenco) possesses recurrent patterns that allow the dancing participants to know for sure what the upcoming musical events are, according to the accustomed rules of the music being danced. This also reveals what the rest of the dancers are going to do, and thus allows each participant to move in synchronicity with the rest, and with the musician. As we will see later, this synchronicity between audience and musician integrates the audience into the musical process, leading to the idea of participating all together. Conventional dance music rules are most remarkable for their very simple structure based on factors of four.
A concrete example of the frontier between event-composition and signal-composition occurs in the Pure Data environment. Pure Data contains two different types of signals that can propagate: one type consists of values, symbols and bangs; the other is the sound signals, which need to take place in a separate dimension because of the different type of processing each requires. Nowadays there are many good options for creative, real-time signal processing environments: Eurorack, Moog, Pure Data or Reaktor, among others. The big development in this domain seems to have overshadowed the search for similarly open-ended environments for the composition of musical patterns in a more conventional sense.
A signal is a continuous stream of a continuous value, or its simulation. Examples of signals are the voltage level on any cable of a modular synthesizer, or the position of a knob or fader on a control panel. An ideal signal represents with perfect precision the state of an analog device, and is sustained for as long as the device remains in its state. Real signals are subject to radio-frequency noise, thermal coupling, hardware defects, resistance in a cable, capacitances, etc.
An event message is an event that takes place at a moment in time and contains a discrete, digital value. Event messages are best suited to controlling discrete behaviors such as states, tones, scales, metrics, etc.
This distinction is similar to the distinction between continuous and discrete values. The difference is that, applied to the intention of a signal in a musical environment, these terms take a slightly different meaning than continuous and discrete do in an engineering context: the distinction here focuses on the intention of being discrete rather than the actual fact of being discrete.
The line that separates signal from event can be drawn in different ways, since a digital event-message is ultimately a modulated signal, written and read according to a common standard. Conversely, a signal can also carry a value that takes place at only one moment in time, as happens with drum trigger gate signals in a Eurorack system. However, whereas a signal ideally spans a certain amount of time to represent some value (such as an envelope), an ideal event-message would take place in a duration of zero. A signal thus defines a timed event on a continuous timeline, while an event message divides time into before and after, ideally taking zero time. An event message in the real world requires time to be transmitted, and is subject to a latency that increases with each intermediary.
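The distinction can be made concrete in a short sketch, with hypothetical names throughout: a signal is modeled as a function defined at every instant, which must be sampled to be observed, while an event-message is a value attached to a single point in time.

```python
import math

# An idealized signal: defined at every instant t, it persists for as
# long as the device holds its state, and must be sampled to be read.
def lfo_signal(t, freq=0.5):
    return math.sin(2 * math.pi * freq * t)

# An idealized event-message: a discrete value occupying zero duration,
# dividing time into a before and an after.
note_on_event = {"time": 1.25, "type": "note_on", "pitch": 60}

# Observing the signal means sampling it repeatedly...
samples = [lfo_signal(t / 100) for t in range(100)]

# ...whereas the event is simply delivered once its moment has arrived.
def deliver(event, now):
    return event if now >= event["time"] else None
```

A real event-message would additionally suffer transmission latency, which this idealized sketch leaves out.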
A prototype is a component that takes place in a system. The idea of an object-based environment is that instead of having signals flowing in real time, each object contains a set of data that can be read or written at any given moment. An ideal object-based environment would cause every object’s state to take effect in the system at all times. One example of this type of system is a physics simulation (and hence the real world), where the positions and forces of each simulated object must be calculated at each frame (i.e. moment) to take all their states into account. Another example is the relation between the sound coming out of a synthesizer and the state of its controls: the positions of the knobs, switches, etc. are, in most cases, supposed to be tied deterministically to the sound that is coming out.
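The “every object’s state takes effect all the time” idea can be sketched as a frame loop, in the manner of a physics engine or a synthesizer panel. The names (Knob, Synth, render_frame) are hypothetical illustrations, not a proposed API.

```python
class Knob:
    def __init__(self, value=0.0):
        self.value = value       # state readable/writable at any moment

class Synth:
    def __init__(self):
        self.cutoff = Knob(0.5)
        self.volume = Knob(1.0)
    def render_frame(self):
        # The output at each frame is a deterministic function of the
        # *current* state of every control, not of past messages.
        return self.cutoff.value * self.volume.value

synth = Synth()
synth.cutoff.value = 0.8         # a state change affects the very next frame
print(synth.render_frame())
```

Contrast this with a message-based coupling, where a parameter only changes when an explicit message travels down an explicit connection.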
Although every modular synthesizer has an object-modularity relation to its interfaces, its method of communication among modules may not be based on this logic, in which case it cannot be considered an object-based modular environment. For example, Eurorack modules are only coupled as long as there is a cable patching two parameters.
Starting from the MPC controllers, a long evolution has led to a category of music composition interfaces that covers a very wide range of uses. It is hard to track down how the idea of using squares to play drums became the suggestion that each button can be a tactile pixel, allowing interfaces with a high bandwidth of feedback that also have dimensions to them (e.g. horizontal time, vertical tone). In the industry, clear examples of the trend appeared, such as Yamaha’s Tenori-on or the Korg Kaoss Pad 2. Nowadays, it seems that any live improvisation hardware will implement this type of interactive pixel-button.
//does solidarity add to the argument? I think this is more about the value of white label
Malbon argues further that “performed music means nothing without a musically adept audience; that is, without an audience who respond to, and distinguish between, different sounds and sequences of music, the performance of that music would be pointless” [URI 2011]–>(1999: 82).
There is a wide discussion around whether the “togetherness” (or in sociological terms, “solidarity”) of dance music is a product of the underground history of electronic music, or rather an effect of the group use of drugs such as MDMA. What is beyond discussion is the collective experience of electronic music parties. If we compare a “disco” music party to an electronic music one, we note that instead of dancing couples we find a crowd dancing all together, facing the DJ or performer. This can be seen either as each participant being alone at the party, or as every participant dancing with every other participant. Perhaps the association of different musical genres with the use of different drugs has led to a perception of togetherness. [URI 2017] Although the formal characteristics of electronic music appear to have little to do with this idea of solidarity, electronic music does have inherent characteristics that foster a decoupling from authorship, as stated by Will Lynch in the Resident Advisor opinion piece “Electronic artists should make their own music”: << There’s always been an element of smoke and mirrors to electronic music, especially within the creation process. As you listen to a house or techno record, there is nothing tangible to imagine—no band swinging their instruments, no pop star on a stage, just sounds. Unless you were there in the studio, you can’t know what the artist was actually doing. In some ways, this mystery makes things interesting. “When you know how a magic trick is done, it’s so depressing,” Thomas Bangalter of Daft Punk once told Pitchfork. “Giving away how it’s done instantly shuts down the sense of excitement and innocence.” >> [URI 1003].
Also, thinking of sampling as an aesthetic, one may argue that electronic music is mostly made out of borrowed material, which implies a rich intertwining of content, often expressed as intertextuality: “The use of the sampler has made this intertextuality more apparent, since a song can be created from the sequencing of snippets of sound as well as from recognisable fragments from other records.” [URI 2012]. Also, the scarce use of lyrics has left the political discourse of the style very open, lending itself to a heterogeneous group of people: “The crowd is unusually diverse as well. Teenagers from downtown Detroit mingle with suburban kids from across the Midwest. A young raver in a wheelchair, her arms covered from wrist to shoulder with plastic beads, spins about near a group of gay men. A middle-aged African-American woman in a jogging suit listens intently to the music, her eyes closed, while a tour group from Amsterdam takes in the scene. People of all stripes, from all walks of life, have come here to hear this music, yet they respond as a group. The beat can not only be heard, it can be seen in their movements, and felt in their bodies.” [URI 2011](3) There does, however, seem to be a trend of machismo in electronic music. [URI 2020] Additionally, many guides to DJ’ing, when not focusing on the technical part, explain that a good DJ selects their tracks according to their audience. [URI 1006 Techniques to Improve Your Live Sets _ Dubspot]
“It is only when played to and interacted with a dancing crowd, that house music, as a medium, is complete. In addition, a dance record is also pretty meaningless when it is separated from other dance records. One should look at dance singles as words which are looking for a sentence; they need to be combined to create a soundscape.” [URI 2011 -> (Rietveld 1998: 107) ]
The terms of electronic music subculture are confusing, meaning that when labeling a certain group one must also explain a further distinction to what is being said. On Wikipedia, EDM is defined as the whole genre of electronic music, but since there are important differences between “Ibiza-style” parties and those of the underground, I will use EDM culture to refer to the less alternative electronic music, often associated with massive parties. The reason to distinguish between an underground dance music culture and an EDM culture is that whereas the first appreciates uniqueness and independence as values, EDM culture orbits around a more widespread, popular culture. This is a big difference, because it implies that unlike underground dance music, EDM culture has a system of star performers and producers. It also means that in EDM culture the popularity of certain tracks leverages further popularity, generating a positive feedback effect, where the popularity of a track in an underground music scene would generate negative feedback. In this sense, EDM is a spinoff of the electronic underground which, instead of counter-cultural, is mass-oriented: a social phenomenon we have also seen with alternative rock and punk music. Funnily enough, an EDM artist may take advantage of the authorship fuzziness that characterizes electronic music and have their authored tracks produced by somebody else, as in the case of a pop music star.
-EDM is a crossbreed between pop music and electronic music that starts with the birth of great electronic music shows such as The Prodigy, or with electronic musicians who became widely popular outside the underground electronic music circles, like Groove Armada, Felix da Housecat or Fatboy Slim. Perhaps this birth can also be attributed to the “eurotech” trend, an electronic music that got the interest of big masses of people.
“However, during 2012 in the US, the rich meanings of the term ‘EDM’ seem to have been narrowed in the popular media to electronic pop-dance (Sherburne,2012),”[URI 2020]
A widespread, perhaps true, urban legend tells that Stockhausen listened to and critiqued some tracks by more popular electronic musicians like Aphex Twin. According to this story, Stockhausen wrote: “I think it would be very helpful if he listens to my work Song of the Youth, which is electronic music, and a young boy’s voice singing with himself. Because he would then immediately stop with all these post-African repetitions, and he would look for changing tempi and changing rhythms, and he would not allow to repeat any rhythm if it were [not] varied to some extent and if it did not have a direction in its sequence of variations” and also: “It starts with 30 or 40—I don’t know, I haven’t counted them—fifths in parallel, always the same perfect fifths, you see, changing from one to the next, and then comes in hundreds of repetitions of one small section of an African rhythm:
duh-duh-dum, etc., and I think it would be helpful if he listened to Cycle for percussion, which is only a 15 minute long piece of mine for a percussionist, but there he will have a hell to understand the rhythms, and I think he will get a taste for very interesting non-metric and non-periodic rhythms. I know that he wants to have a special effect in dancing bars, or wherever it is, on the public who like to dream away with such repetitions, but he should be very careful, because the public will sell him out immediately for something else, if a new kind of musical drug is on the market (Witts 1995, 33).” [URI_2001 mts-butler-unlocking-groove Unlocking the Groove: Rhythm, Meter, and Musical Design in Electronic Dance Music. By Mark J. Butler. Bloomington: Indiana University Press, 2006, xi + 346 pages.] (Witts 1995, 33 or [URI_2025]). This story marks a clear difference between the worlds of experimental and conventional electronic music. In the former, the most important intention is to experiment with the electronic music medium and to explore its expressive possibilities, usually by drifting away from the conventions and notions of music.
[microtonal music and the attitude of making something different]
[Although there are many tools that are satisfactorily used to improvise dance electronic music, a better approach to a truly divergent music improvisation tool would be a modular environment rather than another electronic music instrument. No current tool allows a truly divergent improvisation of the genre, because current tools can only offer a set of procedures that are specific to predefined musical modulations.]
It is said that the appearance of tape recording devices had an impact of a similar magnitude to the impact photography had on painting. The capacity to record sounds set off a philosophical instability in the meaning of sounds, and suddenly there was a whole new field of research and exploration. [URI_1010 Chapter 1] As suggested by Daniel Warner, among many others, practical ability can have an impact on matters such as the meaning of natural phenomena: once there is an ability to record, sound events can be recontextualized in new ways. The recording techniques gave place to sonic productions by individual artists, like Pierre Schaeffer. The early musique concrète explorations started expanding to wider audiences as recording technologies infiltrated more popular genres. Tape reels became standard music studio equipment, allowing post-production, and music composed on the basis of techniques rather than performance started emerging organically. In the ’80s, with many developments in digital instrumentation, it became possible to record, alter and play sampled sounds on the live stage. This made room for performances using music records, giving birth to the idea of the DJ, whose function can be anywhere between a person who puts on music at parties and a collage musician.
From the late ’80s to the current time, there has been a succession of solo electronic music producers who perform their music alone on stage.
In the times of traditional, mechanical musical instruments, the preparation of a Western classical musician consisted of practicing the performance of pieces, with the aim of obtaining muscle memory for each part of the musical score, allowing a smooth performance. When the musician was part of a more improvisational genre such as jazz or folkloric music (some Western classical compositions also give the performer room to improvise), this preparation consisted of getting to know the other musicians, and the “rules of play” that would be agreed during performances; otherwise the musicians would have no clue how to coordinate with the others. A musician playing solo could make up his own rules, as he doesn’t risk breaking out of the live piece. However, in a world of mechanical instruments, this improvisational freedom comes at the cost of being limited to one instrument at a time. There are some exceptions to this rule, such as the so-called ‘one man bands’: people who skillfully hack many instruments into one manageable set of interfaces; but such complex setups require the performer to have their performances memorized at a muscle-memory level, since controlling many instruments at once can be very counter-intuitive and hard to do.
In the time and context of electronic music, many tools let the musician improvise alone and still offer the audience a multi-instrument performance. A looper allows a musician to record an instrument and have it repeat while he plays any other instrument, and to keep repeating this process. A sequencer allows manual programming of musical events. When the musical performance consists of putting on tracks at a party, the preparation consists of obtaining a wide range of music to make available during the performance. When the musical performance consists of playing one’s own work, the preparation consists of making “tracks” or “stems” that can be played together, forming a full composition.
In the digital domain of music making there are also pattern-based compositions, where the musical loops or phrases are designated by event-message sequences that can be played by any type of synth, most usually using the MIDI protocol. This has made it possible to perform and modify sequences beyond what sampling techniques allow. The latest technologies for controlling musical performances have made a great leap towards a fluid composition of music: they ultimately allow musical content to be generated on the live stage, and they also allow the alteration of these musical loops. This leaves the musician with an interesting mode of working: first there is the current musical loop, which keeps repeating; over this loop, the musician can make alterations, such as changing the timbres of the instruments being played or making changes to the musical composition. The musical performance can now be improvised without having to stop the transport or play awkward patterns between the original loop and the desired altered loop.
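The key property of pattern-based composition described above can be sketched in a few lines, with hypothetical names: because the loop is a list of event-messages rather than recorded audio, it can be altered while it keeps repeating, and the next pass of the loop simply picks up the change.

```python
# A 16-step loop as (step, pitch) event-messages; purely illustrative.
pattern = [(0, 60), (4, 63), (8, 67), (12, 63)]

def events_for_step(pattern, step):
    """Return the pitches scheduled at a given step of the looping transport."""
    return [pitch for (s, pitch) in pattern if s == step % 16]

# The transport keeps running while the musician edits: here the whole
# pattern is transposed up two semitones without stopping playback...
pattern = [(s, p + 2) for (s, p) in pattern]

# ...and the next pass of the loop (step 16 wraps to step 0) already
# plays the altered events.
print(events_for_step(pattern, 16))
```

With recorded audio, the same alteration would require re-recording the loop; with event-messages it is a data transformation.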
The last couple of decades have been full of new tools in the area of music creation, and knowledge of electronic music making is mostly knowledge of its technologies. This is important because, unlike the previous prerequisites of human hearing, harmony and acoustics, the factor of music technologies is determined by other people. Behind every technology there is a person and an intention (regardless of whether this intention was accomplished by the technology). For instance, when the instrument suddenly knows how to play because it has a sequencer, the user no longer has to perform, say, a TB-303 bass-line by himself. The interaction with the tool is different: whereas the interaction with traditional instruments is limited by physical constraints, the electronic instrument’s interaction patterns can be anything. This brought the once-innocent consequence that the means of performing music are now designed by an industrial counterpart, not by physical laws. Furthermore, now that the instruments can keep the sequences on their own, it is no longer possible to make just any pattern: only the ones that the interface and memory allow. The maker of bricks becomes more influential than the architect.
Many different decisions in electronic music instrument making have heavily shaped the path of music making, so much so that the earliest groove-boxes and rhythm machines dictated the creation of new styles whose characteristics were based on the instrument’s architecture and sound. If the most direct rhythm structure to make in a drum machine is 4/4 divided into 16 steps, suddenly the whole resolution of electronic music becomes 4/4, 16 steps; it didn’t matter that there was a way of making patterns in 3/4. This has been great fun, in a sort of ecosystem where designers and engineers create a stage on which musicians and creators experiment and arrive at musical outcomes. If the designers provide a low-cut knob on the synth, electronic music starts expressing itself through that knob.
It has been 34 years since the TB-303, and I believe the electronic music instrument industry is much less naive than it used to be. Each of the musical instrument makers offers its own universe of possibilities and has an interaction statement. For instance, Ableton aims at a very computer-based performance and tries to blur the limits between production and performance, producing hardware that turns a DAW into a performance instrument (Push); Reaper is trying to do this as well with the Nektar Panorama. Native Instruments pursues a very DJ-based performance: they come from DJ culture with Traktor, and Maschine tries to decompose music making into single tracks and samples. The DJ’ing software Traktor is moving towards merging with live production by introducing the stems concept (separating music tracks into individual stems so DJs can get more creative with them) and introducing some looping features that are completely new to the idea of DJ’ing. On the other side, Maschine achieves a very good groove-box by presenting a controller for their own Maschine DAW that has very limited possibilities, exposing the most common music making parameters in a hardware piece. Limiting possibilities is not a bad strategy for a performance instrument, as I will explain later. Pioneer has now launched its own version of a live electronic music performance instrument, the Toraiz, which intends to mix the two worlds of groovebox and DJ’ing; I personally think the Toraiz needs a bit of maturing, but it will be a great tool a bunch of software updates later. Korg has also long been creating groove-boxes, aiming for a more analogue feel: the EMX-1 even has a little window that leaves a couple of tube amplifiers exposed to view.
Korg has its own synth culture, and the newest Electribe versions, EMX-2 and ESX-2, manage to keep some of this inheritance even though they are purely digital machines: the sequencer is very limited, the synths simple, and all the timbres and effects are a Korg version of classic oscillators and filters. The way these Electribes are designed to be used in a performance is pretty much by launching pre-sequenced tracks and making some minor tweaks to the sound on stage. The main problem is that a compositional system cannot afford a given musical transformation if there is no specific procedure for it, because most manual transformations of a pattern take too long. The surrogate procedure consists of deleting the sequence (the sudden silence can be hidden by using a looping sampler) and recording the modified sequence again.
Musicians, in the end, have three possibilities. They can play with other performers who will play along, in which case we are dealing with a music performance paradigm that is a hybrid between mechanically instrumented and electronic (for example Skinnerbox). They can work with pre-recorded tracks that they fire at the right time (such as Daft Punk), perhaps tweaking some parameters on top, without much opportunity for divergence. Lastly, a musician can earn some improvisational liberty by throwing away the expectations of conventionality in music and working with more experimental music or sound.
[How can an electronic dance music instrument facilitate divergent improvisation?]
Musical improvisation. Within the domain of conventional [dance] electronic music there is little chance for actual improvisation. The fact that live electronic music must be either pre-recorded, prepared or experimental leads one to think that there are no instruments that readily allow expressive and complete improvisation of dance electronic music. Although there are many artists and groups who effectively perform live electronic music, they remain part of an incipient vanguard. Many technical challenges must be solved in order to expand an electronic music instrument into a true live performance tool. This gap is problematic because certain experiences of electronic music are intended to be collective, which is achieved only to the extent that the performance tool allows real-time interaction with the audience, hence improvisation.
The lack of divergent music improvisation tools is surprising given that technology and interaction are far more developed than would be required. It seems as if musical technology, while traveling towards making everything possible, had skipped the part where live improvisation of music is doable by a single person without having to type programs. For instance, the MIDI protocol, while it has the potential to integrate many synthesizers and sequencers into a more complex instrument, was reduced to a mere protocol for the synchronization of clocks and the triggering of notes. A musician with many synthesizers, instead of getting a more complex synthesizer, only gets many segregated sound sources which [he] can only fade in and out, and they all require sequences prepared beforehand. What is even harder to understand is how the music software developed for live improvisation relies on the very same concepts, doing a mere virtualization of a set of instruments with nearly the same limitations.
In this thesis project I will demonstrate this lack by inventing a music making interface that facilitates free improvisation of conventional electronic music, and I will bring this interface as close as possible to being a product. I will reflect upon what this new interface makes possible that previously wasn’t, and I will also review other current and imaginary means of achieving true live performance of music.
As we are dealing with the three layers of music, system and environment, these experiences can take place with three possible roles in mind: an audience member, a musician or a developer. These three roles can be fulfilled by one and the same person.
Novation is a company dedicated to music performance hardware tools, most notably known for the creation of the Launchpad, which I presume was a pivotal point for both Ableton and Novation. The Novation Circuit products are a family of digital sequencers with a dedicated sound engine, comprising the Circuit and the Circuit Mono Station, and they usually dare a bit further than the other brands in their interaction patterns. The concept of the Circuit products is a pad-based interface that intends to allow fluid composition.
When there is a scale modulation, it is really handy to have a tool to transpose the pattern within the scale, and to automate this action.
Importance of the removal of notes
There is a need to share one and the same scale operator among multiple voices. I need to implement this somehow.
There is a need for backpropagation: what happens if I want to remove one particular grade at one particular time by using a chordkit or a presetkit interface? This is a case of backpropagation similar to the one of recording.
In this sense, I need to extend the language of communication that I am currently considering to include not only events, but also very precise modification signals for sequencers and the like. Perhaps a sequencer should be able to program a presetkit entirely; but the question is: how could there be such a generic programming signal from a module that can be taken by any morphology of modifier, thus allowing any future design of modifier to remain fully compatible with the sequencer? Is there a question/response protocol? Is there a dump of all the data? Is there some sort of central pool of data, like a prototypical language, that would allow a modifier to propagate to other modules?
Maschine is an integration between dedicated software and a set of different hardware controllers, all designed and deployed by Native Instruments. This platform is in the lead of live electronic music making tools because of the economy of scale that Native Instruments can leverage due to the size of the corporation. A user can access this platform at a relatively small price compared to other similar tools.
One of the strong points of Maschine is the hardware quality, which allows very fast and expressive input of musical performance. Maschine can be used as a musical instrument with looper capabilities. The input bandwidth is wide: there are 16 velocity-sensitive pads, 8 encoders for instrument parameters, and certain functions for tweaking patterns.
The weaknesses of Maschine come from its capacity to edit content that already exists: once a pattern is recorded in a track, it is very hard or impossible to access individual events and modify their properties. To alter a pattern, the easiest way is to record it again from scratch. The tools for selection and modification of events are very incipient, and they don't allow developing a sequenced loop into a similar one. It is important to note, though, that Maschine still has more pattern editing capabilities than most sequencers, which are mostly null in this sense; but given that Maschine is a controller for a computer host application, one expects a lot more editing capability to be accessible from the hardware.
It is very notable that it doesn't support MIDI effect chaining.
One option is to change the length of a track. This allows loop-shortening a pattern (limited to the starting section of the given pattern). One can also choose lengths that don't divide the length of the part; this creates a polyrhythmic effect (which is reset at every loop according to the length of the longest track pattern in the current part).
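The polyrhythmic effect of a non-dividing track length can be sketched as simple modular arithmetic. This is my own illustration of the principle, not Maschine's implementation:

```python
# A 16-step part with one track shortened to 6 steps: the short track
# drifts against the grid, then is reset whenever the part loops.
part_length = 16    # length of the longest track pattern in the part
track_length = 6    # a length that does not divide the part length

positions = []
for step in range(part_length * 2):       # two passes of the part
    within_part = step % part_length      # the part resets the track
    positions.append(within_part % track_length)

print(positions[:16])   # [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3]
print(positions[16])    # 0: the short track is re-set at the part boundary
```

Without the `% part_length` reset, the 6-step track would keep drifting indefinitely instead of restarting every part.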
Another option is to record the track's output sound, from which point one can sample and retrigger that loop recording, allowing sample slicing of any sound (including microphone input) and layering of multiple copies of that sound.
Maschine includes a beat delay, which also allows for some increase in the complexity of a pattern.
The interface allows selecting all the notes on one and the same scale grade, and transposing the selected notes. It doesn't allow, however, selecting an intersection between time and grade.
There is also a dedicated button to shift the position in time of any set of selected events within one "group". One could have a base pattern running, and keep displacing a secondary rhythmic or melodic pattern.
It is nice that it is possible to duplicate a track, mute the new one to add some modifications while silent, and then fade in the duplicated track with the intention of replacing the previous one. This is a very effective way to change instruments or to generate a second voice that harmonizes with the current material.
To play melodies in Maschine, one must select a sound and press the "shift" + "pad mode" buttons, which toggles to a mode where each button in the grid is a note. It is possible to select a scale to play by using one of the encoders in this mode while pressing the "pad mode" button. It is also possible to have each pad play a different chord.
The set of chords that is made available is selected by rotating the encoder. By scrolling, one scans through different chord variations and alterations, and each pad will serve to trigger a different grade of that altered chord. This is not a good way of presenting chords, since one will very rarely play a sequence of chords of the same type and alteration; meaning that to play chords using this tool, one would either need to switch alterations very fast and precisely with the encoder, or record just one chord per repetition of the pattern.
features of this sequencer that are not found in a basic sequencer
I need to add to my sequencer the ability to reset its position in real time, to allow performing polyrhythms in an expressive way. I also need to add the ability to constrain the positions of two sequences in the same way as Maschine tracks are constrained in position. This is currently possible by adding a sequencer that triggers many sequencers, but this method is slow to implement live, and it requires editing that sequencer each time a new sequence is created.
Perhaps a sequencer should allow storing multiple patterns. This would allow duplicating a sequence (without having to create a new sequencer) to make a variation, while still being able to come back to the older sequence as it was.
Undo is very handy.
The Squarp Pyramid accomplishes the task of re-thinking live sequencing of music. The most remarkable feature of this sequencer, in my view, is its non-destructive layers of sequence tweaking, such as scale, and the ability to modulate parameters of the events in the same fashion as synth parameter automation. This effectively allows a more parametric approach to music composition.
Korg is best known for its synthesizers and sound design. Although the Electribe is a very strong machine for live performance, the sequencer that is implemented is very short on features and flexibility. The performability of this sequencer is achieved mostly by synth parameter tweaking.
It is possible to sequence melodies on an Electribe 2 in a way that is similar to Maschine.
Ableton has been the de facto tool for most electronic music performers, regardless of how much of their performance is prepared or played live. In the area of live performance, Ableton's core feature is having many sound loopers which are tied together in timing. In Ableton's language, these loopers are called "clips". These clips allow an on-the-fly sound or MIDI recording, which will start playing as soon as the recording is stopped. Probably one of the most important factors in its success is the fact that a clip's length adjusts automatically to match the recording time, but with a length quantization that is associated with the musical metric; as opposed to all the other tools, where you have to know the length of the pattern that you are about to "improvise" because they only offer an "overdub" recording mode.
Push, being a mere controller of a tool that has been developed for a couple of decades, becomes a vast library of functions for performance.
According to the spirit of Ableton, Push's intent is to make interaction with the composition as fluent as possible. Push's button matrix tilts toward the concept of the pixel by having a matrix of 8*8 backlit buttons instead of the classic 4*4 or 4*2 matrix found on most other sequencers. This presents many advantages, such as offering a very good interface to play melodies, with enough grades when playing in scale or chromatic mode. For instance, Push is the only device that offers a scale mode that still allows the use of non-scale notes while in the mode, giving the scale the role of being just a modification of the user interface. "In Chromatic Mode, the pad grid contains all notes. Notes that are in the key are lit, while notes that are not in the key are unlit." [URI_1007]. The amount of buttons also allows for effective melodic sequencing of events, which doesn't work in a 16-pad matrix. The disadvantage is that the button sizes are not the best for drum performances.
It shares with Maschine the feature of accessing a virtual instrument's parameters via the encoders, whose values and labels are represented on a screen with corresponding positioning. As Push came long after the development of its host software, the mapping of parameters to the interface is much more heterogeneous (whereas in Maschine, each virtual instrument has well-defined parameter-to-knob associations).
[URI_2014 Block Jam: A Tangible Interface for Interactive Music]
It could be argued that Eurorack can't properly be compared against the aforementioned products, because so far we have seen off-the-shelf products, whereas Eurorack is an environment that comprises many different products from many different manufacturers. It is a bit like comparing an airplane with the automotive industry. For this analysis, however, this element makes sense, perhaps as an item in a different category than the others; nevertheless, it has interesting aspects that prevent us from leaving it aside.
The Eurorack environment, as a music improvisation platform, attains many advantages over the other systems given its openness. This openness is granted by its historic agnosticism towards any musical or sound canon. Indeed, the Eurorack standard was born almost by accident with the design of the A-100 system, whose focus was put on harnessing the use of control voltages for parameters in an open way [URI_1008]. The casing and power supply of the standard were inherited from standard casings and power supplies that the designer had at hand at the time [URI_1009].
It is interesting to note that Eurorack modularity is different from the modularity of other tools such as Ableton. Instead of having a framework that leaves spaces where modules will perform a specific role (such as receiving MIDI and outputting sounds), we have a standard of voltages and an enclosure system that allows any module to take any role. Furthermore, the Eurorack environment doesn't provide any predesignated base such as a global clock, or even some sort of output; all of these features are meant to be provided (or not) by the modules themselves. So for instance, whereas in Ableton we are limited to having only one clock (which is a handy limitation for conventional music), in Eurorack we can have any number of different clocks drifting away. "These definitions of the various signals, and the distinctions between them – sound sources and modulation sources – are right in principle, but a modular system like the A-100 often makes a mockery of them. In a modular set-up, all of the modules produce voltages, and can be used as control voltages or triggers, thus blurring the distinction between the various types." [URI_1008] This type of modularity allows for a vast field of experimentation possibilities.
This homogeneous modularity is the epitome of a platform that allows divergent exploration of music, because instead of having a gamut of possibilities (as offered by DAW-based environments), we now have a field of possibilities. Eurorack, and analog modular music hardware in general, allows for experimental music outside the boundaries of our understanding of music, and this has in general been the place of this environment. A clear demonstration of this is how many modules foster stochastic composition: modules that capture electromagnetic noise, modules that capture skin capacitance, modules that compose random patterns, etc.
Taking the findings
How could I design a pattern and melody sequencer in such a way that it allows the user to make their own sequencing systems?
This is the process of digitally prototyping the modular pattern-system maker tool. It was an exploration done to determine the kind of hardware that would be most interesting to prototype physically, in the context of my thesis project.
The idea of a system maker is that, from many instances of the same element, a new system can be created that will manifest qualities not present in any of its components. The user becomes the designer of the music composition system. The product comprising this set of elements is what I will call the builder. By using the builder, the user can create their own music composition system. Ideally, this builder would allow the creation of music generation systems as well.
The molecular behaviour is what I explored when I made the Licog composer, and it manifested some interesting emergent features. Nevertheless, a hardware version of such a device could be too expensive due to the quantity of required copies of the component, and the difficulties posed by connecting many hardware elements together. (see Brocs)
A nice feature to add to this experiment, which I still have to do, is to enable multiplayer, interactive composition. If I rebuild this prototype in a web browser and connect the client browsers through a socket, it could become an interesting collaborative composition toy.
Another interesting feature for this platform would be the ability to enclose groups of nodes as a single node that would have some inputs and outputs. This would enable the player to build on a higher-level basis, in the same way as occurs in Pure Data. For this feature, there is the added complexity and interaction challenge of making the notes within a system somehow parametric (because it is not very desirable that a group of these nodes would have a fixed harmonic structure or set of timbres).
The third, and obvious, feature that I can think of here is that one node should be able to introduce changes to global parameters. One node should be able to change the timbre parameters of an instrument, to change the tempo, etc. One node should also be able to capture sound and reproduce it, as was speculated for the Brocs project.
The idea of the grid-based behaviour is to make a hybrid between a molecular-behaved system maker and traditional sequencers. This is due to the fact that sequencers can generate complete patterns using a single piece of hardware. In this exploration I intend to add modular features to a basic sequencer, obtaining a sequencer platform.
An example of a grid-based modular system is the analogue sequencer, because we can use its outputs as inputs and, for example, connect one sequencer's clock input to a gate output of another sequencer.
The most basic behaviour of a sequencer is the following:
The first component design was a four-step, four-event sequencer. This means that the playhead goes back to zero every four steps, and there are only four possible events that can be programmed. The four steps and four events could be programmed by clicking any of the 16 virtual buttons arranged in a 4*4 grid. Every sequencer's playhead was synced to a single master transport (i.e. metronome).
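This basic behaviour can be sketched in a few lines. This is a minimal sketch of the described component; the class name and event values are my own:

```python
# A minimal sketch of the 4-step, 4-event sequencer described above.
# Each step holds one of four possible events (or None); a master
# clock advances the playhead, which wraps every four steps.
class MiniSequencer:
    def __init__(self, steps=4):
        self.pattern = [None] * steps   # one event slot per step
        self.playhead = 0

    def toggle(self, step, event):
        # clicking a cell in the 4*4 grid programs (or clears) an event
        self.pattern[step] = None if self.pattern[step] == event else event

    def clock(self):
        # advance on every master-clock tick, wrapping at the length
        out = self.pattern[self.playhead]
        self.playhead = (self.playhead + 1) % len(self.pattern)
        return out

seq = MiniSequencer()
seq.toggle(0, 2)
seq.toggle(2, 1)
print([seq.clock() for _ in range(8)])  # [2, None, 1, None, 2, None, 1, None]
```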
The first additional feature of this bare-minimum sequencer was the possibility to obey another sequencer's input instead of the master clock, and to jump to any step provided by the input instead of incrementing one step per event. I chose this feature because it results in a mapping matrix: if a sequencer sends a [0,1,2,3] sequence, the child sequencer (the other sequencer, which is receiving the signals) will play as a normal sequencer; but any other sequence, such as [3,1,2,1], will repeat a step of the child sequencer while the primary sequencer is playing linearly. In this way, the horizontal axis of the child sequencer becomes an input, and the vertical axis becomes an output. A usage example of this feature (if there were more available events and more available steps) would be to create a palette of notes in a scale that are sequenced by the parent sequencer; or perhaps a palette of chords. This already presents us with an improvement over the traditional sequencing approach because, if we want to change the harmony of a melody, instead of needing to reprogram every note on each step, we can now just change one event per grade. This approach also allows complete transformations of a melody: we could, for example, start by mapping all the child sequencer events to the same note while the parent sequencer plays a sequence with many distinct notes, and then start adding tonal variations, thus obtaining a very original melodic progression for the ambit of live electronic music.
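A minimal sketch of this mapping-matrix idea; the class and the MIDI-style note numbers are hypothetical illustrations:

```python
# Sketch of the mapping-matrix feature: a child sequencer jumps to
# whatever step index its parent sends, instead of incrementing.
class ChildSequencer:
    def __init__(self, events):
        self.events = events            # one event per step (vertical axis)

    def receive(self, step):
        # the parent's output selects the child's step directly
        return self.events[step]

# a palette of scale grades stored in the child (MIDI-style notes)
child = ChildSequencer(events=[60, 62, 64, 67])

print([child.receive(s) for s in [0, 1, 2, 3]])   # linear: [60, 62, 64, 67]
print([child.receive(s) for s in [3, 1, 2, 1]])   # reordered: [67, 62, 64, 62]

# re-harmonizing: change one event per grade, not every step
child.events[1] = 63
print([child.receive(s) for s in [3, 1, 2, 1]])   # [67, 63, 64, 63]
```

The last two lines show the improvement named above: one change in the child re-harmonizes every occurrence of that grade in the parent's sequence.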
The second feature introduced to the sequencer was to address each possible sequencer value (vertical axis) to a different output. This allows us to generate alternating outputs from one single sequencer event. In the following example, the point of view is taken from the leftmost sequencer, which is the only clock-synced sequencer. The resulting melody of the system will be a repetition of [0,1,2,x], where x is a number that alternates between [0,1,2,3]. The resulting pattern was programmed in 12 steps and is 16 steps long: [0,1,2,0,0,1,2,1,0,1,2,2,0,1,2,3].
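The example can be simulated in a few lines. This is my own reconstruction of the described patch: a clock-synced parent whose fourth step defers to a child that advances only when triggered:

```python
# The parent plays [0, 1, 2, jump]; the "jump" step is routed to a
# child whose playhead advances only when it is triggered, so its
# output cycles 0, 1, 2, 3 across successive repetitions.
class SteppingChild:
    def __init__(self, length=4):
        self.length = length
        self.pos = 0

    def trigger(self):
        out = self.pos
        self.pos = (self.pos + 1) % self.length
        return out

child = SteppingChild()
pattern = []
for tick in range(16):
    parent_step = tick % 4
    if parent_step < 3:
        pattern.append(parent_step)      # parent's own events
    else:
        pattern.append(child.trigger())  # fourth step defers to the child

print(pattern)  # [0, 1, 2, 0, 0, 1, 2, 1, 0, 1, 2, 2, 0, 1, 2, 3]
```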
An interesting problem is that some behaviours may differ depending on whether the connection goes "up an id" or "down an id". The system scans each one of the modules in the order indicated by each module's id. If we set each module to respond instantly to any signal, there is no big difference in the response regardless of whether the connection goes up or down ids. But if we set the modules to wait for a clock step to respond, there will be a difference. If a connection goes up an id, then upon clock tick the module will have already received the signal to which it has to respond from its parent. If the connection goes down an id, then when the clock ticks, the parent will not yet have sent the signal to which the sequencer has to respond, and therefore it will respond to the signal with a delay of one whole clock tick.
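The difference can be demonstrated with a small simulation. This is my own sketch of the described scan loop, with clock-bound modules:

```python
# Modules are processed once per clock tick, in id order. A child
# that listens to a lower-id parent receives the signal before its
# own scan; a child with a lower id than its parent is scanned
# first, so it reacts one whole tick late.
class Module:
    def __init__(self):
        self.inbox = None          # last signal received, not yet processed
        self.last_reaction = None  # (tick when processed, signal value)

    def tick(self, tick_no):
        if self.inbox is not None:
            self.last_reaction = (tick_no, self.inbox)
            self.inbox = None

def run(parent_id, child_id):
    modules = {parent_id: Module(), child_id: Module()}
    reactions = []
    for tick_no in range(3):
        for mid in sorted(modules):                    # scan in id order
            if mid == parent_id:
                modules[child_id].inbox = tick_no      # parent emits on its scan
            else:
                modules[mid].tick(tick_no)             # child reacts if it has input
        reactions.append(modules[child_id].last_reaction)
    return reactions

print(run(parent_id=0, child_id=1))  # up-id: [(0, 0), (1, 1), (2, 2)] (same tick)
print(run(parent_id=1, child_id=0))  # down-id: [None, (1, 0), (2, 1)] (one tick late)
```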
On the left: with instant response, there is only a tiny difference between responses up and down ids. On the right: when elements are clock-bound, down-id connected elements will always be one clock behind.
If this were a hardware situation, there would be no clear rule, because the elements would not be updated progressively as in the computer simulation. The result is that, instead of a clear rule, whether the response is delayed or not will seem to be random.
Emission delay consists in receiving and reacting instantly to all incoming signals, but buffering all the resulting signals, which are then sent on the next clock tick.
The problem that results from this solution is that the delay still happens, but in an even less intuitive way: on the subsequent, down-id module, if all the chained modules are clock-synced. The first module sends a signal to a lower-id module, which reacts instantly and queues its output for the next step, which comes right after the signal. It sends it to the next module, which also reacts instantly; but the clock that corresponded to that signal in the upper module has already happened, meaning that in this subsequent module, the output signal will belong to the next clock.
When an element responds to another sequencer's input, and it is quantized to clock steps, it must always buffer its reaction for the next clock step.
This approach doesn’t solve the problem, as the reactions remain ambiguous.
This solution consists of processing all the elements in two separate passes, in the same way we would treat graphic layers if we wanted to ensure that elements drawn from an array are drawn in a different order than the one specified by the array. Applied to time, this ensures that we first process all the incoming signals to the elements, and once all the incoming signals are processed, we proceed to process all the elements' reactions to the clock tick. What has been done so far is that on each clock tick all the elements are processed, while reactions to signals are processed as soon as they happen. To apply this procedure, each element must now have two signal queues: one queue for the incoming messages and another queue for the outgoing messages. Upon clock, all outgoing messages are sent; after clock, all incoming messages are processed, thus generating a new set of outgoing messages.
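A sketch of this two-queue, two-phase procedure; the names and the placeholder reaction are my own:

```python
# Each element holds an incoming and an outgoing queue. On every
# tick, phase one sends all outgoing messages; only then does phase
# two process all incoming messages into the next tick's outgoing
# messages, making the one-tick delay consistent regardless of
# array order.
class Element:
    def __init__(self):
        self.targets = []     # elements this one signals
        self.incoming = []    # messages received, not yet processed
        self.outgoing = []    # reactions waiting for the next tick

    def send_phase(self):
        for msg in self.outgoing:
            for t in self.targets:
                t.incoming.append(msg)
        self.outgoing = []

    def process_phase(self):
        for msg in self.incoming:
            self.outgoing.append(msg + 1)   # placeholder reaction to a signal
        self.incoming = []

def tick(elements):
    for e in elements:        # phase 1: everyone sends
        e.send_phase()
    for e in elements:        # phase 2: everyone reacts
        e.process_phase()

def run(order):
    a, b = Element(), Element()
    a.targets = [b]
    a.outgoing = [0]          # a has a message ready to send
    tick([a, b] if order == "up" else [b, a])
    return b.outgoing         # b's reaction, queued for the next tick

print(run("up"), run("down"))  # [1] [1]: identical either way
```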
This solution ends up giving the consistent behaviour of delaying the signal one clock, independent of whether the signal goes up or down the array. To implement this solution, I had to create an additional property which determines whether a module can trigger itself. Perhaps this property should have been created from the beginning, but the lack of it became clear only at this stage.
The first problem with this solution is that it is impractical to apply in a hardware situation. It would instead need to be replaced by each module's independent timing, consisting of waiting for a clock signal to send, and after sending, processing all the buffered incoming signals. If this is possible, the other problem I see is that some of the programming may become counter-intuitive, because as a user you may need to take into account a cumulative backward step displacement when programming events. I think this problem reflects that this system is not intended to be clock-synced on any device until the last ones, which will ultimately trigger sounds. In a way it means that processing delay is not a problem at all, as long as it happens consistently. Processing delay also permits us to produce molecular behaviour.
(This part will be rewritten in a clearer redaction later)
The other valid solution consists in relating self-triggering with clock sync. Seen from another point of view, it seems as if the device should consider the clock as a signal that generates a trigger, instead of using signal inputs as a trigger. This requires that each unit is capable of displaying multiple states in each view, instead of replacing the view at every input. When the device is clock-locked but is also receiving signals, it will look as if the device remains stuck at position zero (if jumping is activated) or as if it is jumping several steps on each clock (if jumping is not active). But the truth, in the first case, is that the device is virtually reacting to the clock at the same time as it responds to the signal, because these two reactions are buffered. This is why the component should display the state of the buffer rather than the state of the last reaction, giving a new, richer meaning to the current state.
This experiment was intentionally based on very simple sequencers. If we expanded the capability of each module to that of any sequencer, there would be many more expressive possibilities than the ones expressed here. For instance, the signal emitted from one sequencer to another could be comprised of many bytes (so far they have been single-byte messages), in such a way that a static message could be transmitted and routed through many sequencers, where a header byte may change throughout the patch because it is destined for addressing, while some payload bytes may go through the whole patch untouched until a destination (e.g. a synth). The payload message, of course, could also be tweaked along the patching route. This gives us two layers of message processing: one layer that determines the physical route taken by a message, and another layer that determines the effect of this message once it arrives at its final destination. In this way you could consider these as modules that expand a sequencing interface (as in Roli Blocks), and also work with them as modules that expand the capability of the system, as happens in Eurorack.
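The two-layer message idea can be sketched as follows; the field layout and the header-rewriting modules are purely hypothetical:

```python
# A header byte used for addressing may be rewritten by each module
# along the patch, while the payload bytes travel untouched until
# the final destination (e.g. a synth).
def route(message, patch):
    header, payload = message[0], list(message[1:])
    for module in patch:
        header = module(header)   # each module may re-address the message
    return [header] + payload     # the payload arrives intact

# hypothetical modules that only rewrite the header byte
patch = [lambda h: h + 1, lambda h: h % 4]
print(route([3, 60, 100], patch))  # [0, 60, 100]: payload untouched
```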
This small experiment validates the idea of making a modular sequencer as a hybrid between molecular and sequencer behaviours. It also presented some of the first nuanced challenges that modularizing sequencers will present in prototyping, and gives hints towards their solutions. This experiment is also a good preparation that will make clearer which features a module will need, and which features are not important for a module.
One of the important questions that remains is the factor of shape and size: where the physical unit should stand between an inexpensive naked circuit and a manufactured, high-end piece of hardware. I know that neither of the two extremes is a correct answer. A naked circuit lacks the appeal of a live performance (hopefully the system can communicate that the patterns are being generated on stage). A high-end piece of hardware discourages having many copies of the same unit, and so discourages making complex systems that involve many components.
As a designer, I would like to depart a bit from the form factor of the sequencer, just to express the idea that (albeit based on it) this is a different and new means of pattern composition. However, I wouldn't trade away the convenience of this form factor just to express this idea; so changing the shape depends on whether I can find a new form factor that is as convenient as the 16-button matrix format.
If the clock is to be bang-based, this means that, in the same fashion as a clock triggers events in Pure Data, its message should be a number that each module could also follow for absolute sync. In an absolute sync mode, the incremental mode can either increment from zero, or sync to the modulus of this absolute clock. In this way, two parallel sequences can be guaranteed to be in the same phase.
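Assuming the clock message carries an absolute tick number, the two modes can be sketched like this:

```python
# Counting locally from zero depends on when the module was started;
# taking the modulus of the absolute clock guarantees that two
# equal-length sequences share the same phase even if one joins late.
def incremental_position(local_count, length):
    return local_count % length      # counts from the module's own start

def absolute_position(clock_number, length):
    return clock_number % length     # derived from the global clock message

late_start = 3                       # this module joined 3 ticks late
ticks = range(late_start, 11)
print([incremental_position(t - late_start, 4) for t in ticks])  # [0, 1, 2, 3, 0, 1, 2, 3]
print([absolute_position(t, 4) for t in ticks])                  # [3, 0, 1, 2, 3, 0, 1, 2]
```

The two lists are out of phase, which is exactly the drift that modulus-of-absolute-clock sync avoids.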
There is a need for sequence chaining and reset. One sequencer should be able to start the next sequencer as if it were its extension. A sequencer should also be able to reset another sequencer's position to the start position, because these approaches are simpler and friendlier to a newcomer.
I am designing an environment for making sequencer systems, based on the Licog composer, but with the intention of allowing the user to design any kind of musical pattern-making system. I have been working on the building blocks of this environment: what is the set of components that would allow the user to make the widest possible gamut of pattern-making tools?
There are many decisions to be taken for each component within this environment, regarding the communication and functioning protocol, which I realized were crucial for getting a healthy base for any future developments. If there is anything whimsical about the way a device is triggered, or about what a device outputs, then in future developments I would need to make compromises either in the functionality of these devices, or in the compatibility that these devices would have with the older ones. A good illustration, as always, is the Lego building block. By good luck or by a good decision, Lego has been able to keep innovating and creating new pieces, allowing the user to build a very wide range of things, while still keeping compatibility with their earliest pieces. All this depended on that very first design of the mechanical joint of the first Lego block. I personally find that the not-so-positive example is littleBits. littleBits, I think, was a nice opportunity that got spoiled because of a poor definition of the communication between modules. Because each module is orthogonal, and the physical shape of the first littleBits was mostly designed for one-input, one-output pieces, they now face the following two problems: a) there are many components and systems that cannot be made, and b) some implementations of the environment have to break some rules. An example of point a is that, despite all the enthusiasm around littleBits, in my opinion there have not been really interesting toys made out of it. In general, a set of littleBits allows a very limited range of things to be built. Playing with littleBits remains within the spectrum of making 'hello worlds' over and over. In the case of the Korg littleBits, the spectrum of possible synths that can be built is so narrow that I can't see how any synth could be built that doesn't already exist.
An example of point b is that in the Korg littleBits, the filter module needed one more input than the single input that the environment seemed to specify, and it ended up having a lateral plug to allow that additional input.
While making an environment, it is impossible to define a correct interface between the system elements, because it is usually unknown what the future elements are going to be. A possible approach to solving this, which I have taken, is to explode the current units of my system into units that could build the units of my environment. Within the current context of actually making the building blocks to make pattern-making systems, this means that I am defining the sub-components of each component in order to define the components in a way that ensures that any future component is compatible, assuming that it will be possible to build these still-unknown components from the same sub-components as the components that are currently being designed.
These components, however, could have properties that change how the object behaves. In this way, the process can be cheated so that we end up with a single component that has so many configuration options that it can cover any functionality. Perhaps for some system designs this could be handy, but as I am targeting the design of physical units, I set myself a limit whereby each object should have the simplest parameter-tweaking interface possible, and the script that defines its behaviour should be simple and as monolithic as possible (avoiding too many switch statements).
After working on this process, I ended up defining a message as a 3-byte message, which makes it MIDI-friendly but also has an optional header for n-length messages, and four modules to process these messages.
Message details (these are very prone to redefinition):
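Since the details are still in flux, the following is only a hedged sketch of such a 3-byte message; the field names and the clamping convention are my own assumptions, with only the 3-byte, MIDI-friendly size taken from the definition above:

```python
from dataclasses import dataclass

@dataclass
class Message:
    status: int   # first byte, MIDI-friendly (e.g. note-on, or a module address)
    data1: int    # e.g. note number or step index
    data2: int    # e.g. velocity or parameter value

    def to_bytes(self):
        # clamp to MIDI-style ranges: status 0-255, data bytes 0-127
        return bytes([self.status & 0xFF, self.data1 & 0x7F, self.data2 & 0x7F])

msg = Message(status=0x90, data1=60, data2=100)  # resembles a MIDI note-on
print(list(msg.to_bytes()))  # [144, 60, 100]
```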
The first image on this post shows how a 16-step sequencer can be made out of these components. A Licog round-robin can be made on the same principle. Licogs are also easy to implement with these modules, as one can trigger a sound upon bang, store it in a FIFO until the next clock, and send all the messages in the FIFO to the next Licog on every clock. This distribution of parts satisfactorily covers the domain of sequencers and the Licog composer, but it still needs more testing regarding the domain of note-offs and control messages.
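My reading of this Licog behaviour can be sketched as follows; the names are my own, and `played` stands in for actually triggering a sound:

```python
from collections import deque

# Each bang is played immediately and queued in a FIFO; on every
# clock, the whole FIFO is flushed to the next Licog in the chain.
class Licog:
    def __init__(self, next_licog=None):
        self.fifo = deque()
        self.next = next_licog
        self.played = []                 # stand-in for triggering a sound

    def bang(self, event):
        self.played.append(event)        # trigger the sound right away
        self.fifo.append(event)          # remember it until the next clock

    def clock(self):
        while self.fifo:                 # forward everything to the next Licog
            event = self.fifo.popleft()
            if self.next:
                self.next.bang(event)

b = Licog()
a = Licog(next_licog=b)
a.bang("kick"); a.bang("snare")
a.clock()                                # b now replays what a captured
print(a.played, b.played)  # ['kick', 'snare'] ['kick', 'snare']
```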
I speculate three potential interesting products out of this environment:
Having defined the idea of modularity as a
How are the different modules going to communicate with each other?
To make a modular sequencing system, I need several electronic devices to communicate, in such a way that real-time events can propagate fast through some or many modules, while each module produces transformations of these signals. As we are speaking about communications, we will from now on refer to a module (or music performance interface) as a node. Node is the pseudonym for a unit that runs a program that lets it participate in the bus, and that has the user interaction interface.
Communication between nodes in a network is very complex because of all the factors that are involved. The design and interaction of the product are also constrained by the mode of communication between nodes. As an example, if the network is point-to-point, then the expected interaction of the user involves patching the modules mechanically, in the same way as modules are patched in a Eurorack system. In a common-bus network, on the other hand, the user would be expected to patch modules virtually, as they are all already fully connected from the start.
The main challenge here is to create an algorithm to prevent data collisions. A data collision happens when two nodes need to send a message at the same time. A bus can't support more than one message at a given time, and a microcontroller can't (or has a limited capacity to) listen to more than one incoming message. This is similar to spoken communication, where we can't listen to more than one person speaking to us at the same time.
The main problems of concern are:
The idea of the point-to-point network is that each node is only aware of those nodes whose inputs are connected to it. Daily-life examples of this could be neurons, manufacturing and distribution chains, or the postal service. Another example is the software PureData.
It became unviable once I realized that a unit may need to receive signals from more than just one unit, while this protocol is intended for one-to-one communication only. Although the ATmega2560 has four pairs of RS232 pins, using them would either constrain the extendability of the protocol or require a lot of hardware implementation. It also has the problem of requiring one dedicated socket for each input or output, limiting these to only four, including the possible MIDI input and output.
Another idea was a multiplexed RS232, where an RX pin would be connected sequentially to different multiplexer pins, theoretically allowing any quantity of outputs to reach a single port. This could work if the system had another, parallel multiplexer that distributes to the sending devices an electric flag granting permission to transmit, as a consequence of the multiplexer being connected. I discarded this plan because another idea appeared that would require less hardware.
A shared-bus network consists of a single bus through which all nodes communicate. Daily-life examples of this type of network are spoken conversations between more than two people, an internet group chat, and the system in cars that checks whether every component is working well.
Two advantages of a shared-bus network are the ability to monitor the whole network by monitoring a single wire, and the possibility of optimizing the flow of events for lower latency. There are two drawbacks: one is that each node gets a portion of the bandwidth that is in inverse proportion to the number of nodes in the network (whereas in the distributed case, each link has its own bandwidth). The other is that we lose the physical interaction of plugging and unplugging terminals manually.
I2C was a good candidate; it was tested by making a random pattern generator that outputs MIDI. To test whether the bandwidth of the network was enough, I sent 24 clocks per step to the random step generator and evaluated how much stutter occurred and how saturated the common bus got. The conclusion was that one module can clock another at rates as high as 100 clocks per step, at a musical speed of 120 BPM, without any noticeable stutter or latency. The problem was that 24 clocks per quarter note already occupied the bus most of the time, so I couldn't expect to connect many more modules to the bus and have them send much information to the others. I2C would put a low limit on the number of modules that can be integrated into the network, and a low limit on how much information these modules could share. Another big problem with I2C is that there is no slave-to-slave communication. All data transfer needs to be actively mediated by the master, which introduces double processor use and double overhead on most device communications. To work around this, each module would need to have two active Wire objects and operate as a point-to-point network.
RS485 is not a protocol in itself but a standard, meaning that within this standard there are many different networking options. Most of these options are described at https://users.ece.cmu.edu/~koopman/protsrvy/protsrvy.html. This standard suggested to me that I could use the RS232 terminals in a way that allows a common-bus protocol over RS232. This protocol could then be easily translated to the RS485 standard if needed.
I set out to create this protocol, which I will name TBHN. The concept is the same as in a token ring, except that in this case there is a token line, and there is a module in charge of restarting the token every time it reaches the end.
This should allow us to make a network that:
To achieve this, the approach is a hybrid between token ring and master-bus polling. The token is a signal that is passed from one node to another, sequentially. The concept requires that there is only one token circulating in the network at any time.
-----\        /---------------\        /---------------\        /---------->
      \      /                 \      /                 \      /
   |-TI-----TO---|          |-TI-----TO---|          |-TI-----TO---|
   |    node     |          |    node     |          |    node     |
   |-----COM-----|          |-----COM-----|          |-----COM-----|
          |                        |                        |
----------|----------bus----------|------------------------|--------------->
<-- "left or previous"                          "right or next" -->
TI is an input with an internal pull-up, and the logic is direct (not inverted), meaning that it defaults to 1.
For easier commenting in the code, modules are said to be on the left or right. This corresponds to the hierarchical connection of the TI/TO pins. A module on the left is the one whose TO pin is connected to the TI pin of the node in question. A node on the right (also referred to as the next node) is the one whose TI pin is connected to the TO pin of the node in question.
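The token mechanism can be modelled in a few lines (a simulation of the concept only, not the firmware; the outer loop stands in for the module that restarts the token when it reaches the end of the line):

```cpp
#include <string>
#include <vector>

// One node in the TI/TO chain. "pending" is whatever it wants to transmit
// the next time it holds the token. Purely illustrative.
struct Node {
    int id;
    std::string pending;
};

// The token travels node to node (TI -> TO); only the current holder may
// write to the shared bus, so collisions are impossible by construction.
// Each iteration of the outer loop is one token restart (one full round).
std::vector<std::string> runTokenRounds(std::vector<Node> chain, int rounds) {
    std::vector<std::string> bus; // everything observed on the common bus
    for (int r = 0; r < rounds; ++r) {
        for (Node &n : chain) {
            if (!n.pending.empty()) {
                bus.push_back(n.pending); // transmit while holding the token
                n.pending.clear();
            }
        }
    }
    return bus;
}
```

Because exactly one node holds the token at any moment, the bus sees messages strictly one at a time, in chain order.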
TBHN should theoretically allow 100 messages per second from each module in a network of three modules. In that case, the latency "downstream" is very low, and "upstream" it is ~10 ms. There is a chance I can implement an "end of line" message, where the last node detects no following node and communicates this to the bus, making the master react instantly without waiting for a timeout.
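My reading of these figures, as a back-of-envelope check (assuming each node gets one send slot per token round, and the round repeats ~100 times per second):

```cpp
// If the token completes ~100 rounds per second, a message addressed
// "upstream" (to a node the token has already passed) may wait up to one
// full round before its sender holds the token again: 1000 ms / 100 = 10 ms.
double worstCaseUpstreamMs(double roundsPerSecond) {
    return 1000.0 / roundsPerSecond;
}
```

"Downstream" messages don't pay this cost, because the token reaches the receiving node later in the same round.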
To develop the protocol, I needed to follow incremental steps. Otherwise it becomes too difficult to spot the source of a problem.
Set up three Arduinos, all connected to the same bus. Test that it is possible to address each individual Arduino in the network by a hard-coded address, and that the Arduino can respond with its own ID plus a string. This proves that there is a network, and that there can be communication through it.
The Arduinos are tied with the TI and TO connections. A single Arduino is set to reflect in the serial monitor all the signals that happen on the common bus. After the automatic address assignment, the Arduino that is connected to the serial should be able to address each Arduino individually, as in the previous step.
The Arduinos should start their activity without input from the node that is connected to the serial. The message length is fixed. The activity can be seen in the serial output of one of the nodes.
This effect was granted automatically, because continuity is given by the physical cable between nodes; thus removing a node amounts to a complete removal from the network. However, there was a bug where newly connected nodes would instantly assume address 0 and start creating new tokens that destroyed the network's reliability. This bug seems to happen in many different cases, and they are being found and addressed one by one.
One change I discovered I should make to the protocol is that the header byte goes before, not after, the origin byte. This reduces bandwidth usage because, in the case of sending a "nothing to send" header, the origin byte becomes redundant. This change of order also theoretically allows each node to host multiple virtual nodes that could be addressed by the network.
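A sketch of why header-first saves a byte (the header constants here are hypothetical, not the protocol's actual values):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical header values for illustration.
const uint8_t HDR_NOTHING = 0x00; // "nothing to send": frame ends immediately
const uint8_t HDR_DATA3   = 0x03; // a 3-byte payload follows the origin byte

// With the header first, a "nothing to send" frame is a single byte;
// the origin byte is only transmitted when there is actually a payload.
std::vector<uint8_t> frame(uint8_t header, uint8_t origin,
                           const std::vector<uint8_t> &payload) {
    if (header == HDR_NOTHING) return {header};
    std::vector<uint8_t> f = {header, origin};
    f.insert(f.end(), payload.begin(), payload.end());
    return f;
}
```

With the old order (origin first), even an idle node would have to spend two bytes per token slot; header-first cuts the idle case to one.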