2025-06-05

interaction

Defines precomputed inter-voice transformations — including canon, call & response, phasing, and hocketing — that restructure existing onset sequences into aligned polyphonic textures.


1. Header

  • Name: interaction
  • Layer: interplay
  • Role: Applies deterministic transformations to onset sequences, distributing material across multiple voices through offset, fragmentation, or duplication.

2. Introduction

interaction structures temporal relationships between voices by transforming a single onset sequence into multiple voice-specific variants. It operates after onset generation (via grid, field, pattern) and imposes relational structure through fully deterministic logic.

It encodes archetypes of musical interplay:

  • alternation
  • imitation
  • phasing
  • antiphony

It does not create events, assign behavior, or generate new material — it simply redistributes existing structure across time and identity.


3. Overview — Forms

Each form defines a distinct transformation of a source sequence, parameterized by a, b ∈ [0,1]. All forms are:

  • deterministic — no feedback or runtime dependency
  • stateless — no memory, no inter-voice tracking
  • identity-preserving — each output retains structural alignment

Form 0 — call_response

  • Behavior: Splits the motif into leading and trailing segments, assigned to separate voices.

  • Analogy: conversational phrasing, solo–answer dynamics

  • Parameters:

    • a: call–response balance (0 = mostly call, 1 = mostly response)

    • b: response delay (fraction of total motif duration)
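The split can be sketched in a few lines of Python. This is a hypothetical illustration, assuming onsets are times normalized to a motif duration of 1.0; the function name and the exact mapping of a to a split point are illustrative, not the module's actual implementation:

```python
def call_response(onsets, a, b):
    """Hypothetical sketch: split a motif into call and response voices.

    onsets: sorted onset times in [0, 1), relative to motif duration 1.0
    a: balance (0 = mostly call, 1 = mostly response)
    b: response delay, as a fraction of motif duration
    """
    split = 1.0 - a  # a = 0 keeps everything in the call, a = 1 in the response
    call = [t for t in onsets if t < split]
    response = [t + b for t in onsets if t >= split]
    return call, response
```

With a = 0.5 and b = 0.25, the motif [0.0, 0.25, 0.5, 0.75] splits into a call [0.0, 0.25] and a delayed response [0.75, 1.0].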


Form 1 — canon

  • Behavior: Duplicates the motif across multiple voices, each with a fixed delay.

  • Analogy: round singing, staggered entry imitation

  • Parameters:

    • a: inter-voice delay (as fraction of motif duration)

    • b: number of voices (scaled from 2 to 4)
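A minimal sketch of the canon form, under the same normalized-time assumption; the linear scaling of b to a voice count is an assumption, not the module's actual formula:

```python
def canon(onsets, a, b):
    """Hypothetical sketch: duplicate a motif across delayed voices.

    a: inter-voice delay, as a fraction of motif duration
    b: voice count, scaled from 2 (b = 0) to 4 (b = 1)
    """
    n_voices = 2 + round(b * 2)  # discretize b into 2..4 voices
    # voice v enters v delays after the leader
    return [[t + v * a for t in onsets] for v in range(n_voices)]
```

With a = 0.25 and b = 1.0, four identical voices enter a quarter of a motif apart, the staggered-entry shape of a round.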


Form 2 — phasing

  • Behavior: Applies progressive temporal shift to identical sequences across voices.

  • Analogy: Steve Reich-style phase loops

  • Parameters:

    • a: phase increment per cycle

    • b: maximum total offset (relative to motif length)
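One way to sketch the phasing form, assuming a fixed reference voice and a second voice that drifts; the two-voice layout and the cycle count are illustrative assumptions:

```python
def phasing(onsets, a, b, n_cycles=4):
    """Hypothetical sketch: voice A repeats unchanged while voice B
    drifts forward by `a` per cycle, capped at a total offset of `b`.

    Onset times are normalized to a motif duration of 1.0; each cycle
    occupies one motif length, so absolute time = cycle + local time.
    """
    voice_a, voice_b = [], []
    for cycle in range(n_cycles):
        offset = min(cycle * a, b)  # progressive shift, clamped at b
        voice_a += [cycle + t for t in onsets]
        voice_b += [cycle + (t + offset) % 1.0 for t in onsets]
    return voice_a, voice_b
```

The clamp on offset keeps the drifting voice from sliding past the maximum relative displacement b, so the texture settles instead of drifting indefinitely.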


Form 3 — hocket

  • Behavior: Divides the motif into short fragments and alternates them across voices.

  • Analogy: antiphonal exchange, pointillistic texture

  • Parameters:

    • a: number of fragments (from 1 to 8)

    • b: distribution skew (0 = mostly voice A, 1 = mostly voice B)
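A deterministic sketch of the hocket form; the Bresenham-style skew is one way to bias assignment without randomness and is an assumption, not the module's actual rule:

```python
import math

def hocket(onsets, a, b):
    """Hypothetical sketch: cut the motif into fragments and deal them
    out between two voices, with a deterministic skew.

    a: fragment count, scaled from 1 (a = 0) to 8 (a = 1)
    b: skew (roughly a fraction b of the fragments go to voice B)
    """
    n_frags = 1 + round(a * 7)
    voice_a, voice_b = [], []
    for t in onsets:
        frag = min(int(t * n_frags), n_frags - 1)  # fragment containing t
        # Bresenham-style spread: no randomness, B's share grows with b
        to_b = math.floor((frag + 1) * b) > math.floor(frag * b)
        (voice_b if to_b else voice_a).append(t)
    return voice_a, voice_b
```

At b = 0.5 the fragments alternate strictly between voices; b = 0 sends everything to voice A, and b = 1 sends everything to voice B.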


4. Parameter Summary

Each form interprets a and b as follows:

  • call_response

    • a: how much material remains in the call vs response

    • b: delay between call and response

  • canon

    • a: delay between entries

    • b: number of voices (discretized to 2–4)

  • phasing

    • a: amount of phase shift per repetition

    • b: maximum phase offset

  • hocket

    • a: total fragment count

    • b: assignment skew between voices


5. Why these were chosen

These forms represent core archetypes of musical interaction:

  • call_response — alternation and reply
  • canon — imitation and layering
  • phasing — desynchronization and drift
  • hocket — interleaving and fragmentation

They:

  • Offer maximal structural variation with minimal parameter space
  • Operate purely at the sequence level, separate from behavioral identity (role) or pressure shaping (impulse)
  • Enable rich polyphonic behavior while maintaining full reproducibility and transparency

6. What is not included

  • No event generation — input motifs are required; interaction only transforms
  • No adaptive interplay — responses are not context-dependent
  • No real-time coordination — all sequences are precomputed
  • No behavioral modeling — expressivity, reactivity, or yielding are defined in role, not here
  • No spatial/spectral shaping — handled by spatial, timbre, and related domains

7. Conclusion

interaction defines how material is distributed across voices — not what to play, but how voices relate to each other in time. It converts single motifs into polyphonic textures through canonical, responsive, phased, or interleaved transformations.

These structures are:

  • fully precomputed
  • parametrically deterministic
  • musically foundational

They enable expressive ensemble behavior without randomness or runtime dependency — interaction as structure, not simulation.

It answers: how do voices relate to one another through time and texture?