Starting as a Musician
You've been working with oscillators, envelopes, and filters: units that simulate analog signal flow. MayaFlux asks you to think differently, not in terms of signal paths but in terms of moments of data transformation.
This isn't about making MayaFlux behave like Max/MSP or SuperCollider. It's about discovering what becomes possible when you treat sound as numbers flowing through precise computational decisions rather than electricity through circuits.
Your First Sound
void compose() {
auto wave = vega.Sine(440.0f, 0.2f)[0] | Audio;
}
vega.Sine(440.0f, 0.2f) doesn't create an oscillator object.
It creates a mathematical decision applied to each sample point.
[0] routes to channel 0.
| Audio means "evaluate this at sample rate" (48kHz typically). The "sine wave" doesn't exist as a continuous signal.
It's 48,000 discrete computational moments per second, each one a fresh calculation.
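For example, the channel index decides where each decision lands. A minimal sketch reusing the same calls (the 660 Hz value and the channel assignments are purely illustrative):
void compose() {
    // Two independent per-sample decisions, routed to separate channels.
    auto left = vega.Sine(440.0f, 0.2f)[0] | Audio;   // channel 0
    auto right = vega.Sine(660.0f, 0.2f)[1] | Audio;  // channel 1
}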
Rhythm from Mathematical Truth
In traditional systems, you use a clock or metronome to trigger events. In MayaFlux, rhythm emerges from mathematical conditions.
void compose() {
auto clock = vega.Sine(2.0f, 0.3f)[0] | Audio;
clock->on_tick_if(
[](NodeContext ctx) { return ctx.value > 0.0; },
[](NodeContext ctx) {
std::cout << "Beat at " << ctx.value << "\n";
}
);
}
on_tick_if attaches to every sample evaluation (48,000 times per second) but only fires the callback when ctx.value > 0.0 becomes true. Sample-accurate rhythm tied to mathematical conditions, not external clocks.
You can make time operations from any mathematical relationship:
void compose() {
auto synth = vega.Sine(220.0f, 0.3f)[0] | Audio;
// Updates from polynomial peaks
auto envelope = vega.Polynomial(std::vector{1.0, -0.8, 0.2})[1] | Audio;
envelope->on_tick_if(
[](NodeContext ctx) { return ctx.value > 0.5; },
[synth](NodeContext ctx) {
synth->set_frequency(get_uniform_random(220.0f, 880.0f));
}
);
}
Traditional systems force you to separate "rhythm generation" from "sound generation." In MayaFlux, any data source can become a temporal trigger.
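For instance, noise can drive pitch changes exactly the way the polynomial did above. A sketch assuming vega.Random (used later in this text) can be routed and observed like any other node; the 0.9 threshold is arbitrary:
void compose() {
    auto synth = vega.Sine(330.0f, 0.3f)[0] | Audio;
    // A noise source becomes the temporal trigger: same on_tick_if mechanism, different data.
    auto noise = vega.Random()[1] | Audio;
    noise->on_tick_if(
        [](NodeContext ctx) { return ctx.value > 0.9; },
        [synth](NodeContext ctx) {
            synth->set_frequency(get_uniform_random(220.0f, 880.0f));
        }
    );
}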
Gates as Data Transformation
You know gates as "open/closed" switches on audio signals. MayaFlux treats gates as logical decisions about data flow.
void compose() {
auto synth = vega.Sine(220.0f, 0.3f);
auto buffer = vega.NodeBuffer(0, 512, synth)[0] | Audio;
auto gate_lfo = vega.Sine(0.5f, 1.0f);
auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.3);
gate_lfo >> gate;
auto processor = create_processor<LogicProcessor>(buffer, gate);
processor->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);
}
LogicProcessor doesn't just multiply audio by 0 or 1 (traditional gate).
HOLD_ON_FALSE freezes the buffer contents when logic is false.
Result: a granular stuttering effect; the audio repeats frozen moments.
Other logical transformations:
void compose() {
auto synth = vega.Sine(120.0f, 0.3f);
auto buffer = vega.NodeBuffer(0, 512, synth)[0] | Audio;
auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.0);
auto processor = create_processor<LogicProcessor>(buffer, gate);
// Bit crushing
processor->set_modulation_type(LogicProcessor::ModulationType::REPLACE);
// Audio becomes pure 0.0 or 1.0 (1-bit audio)
// Or phase inversion on trigger
// processor->set_modulation_type(LogicProcessor::ModulationType::INVERT_ON_TRUE);
// Or granular freeze + crossfade
// auto tick = vega.Impulse(4.0f)[0] | Audio;
// gate->set_input_node(tick);
// processor->set_modulation_type(LogicProcessor::ModulationType::CROSSFADE);
}
Gates aren't on/off switches. They're data transformation strategies. The same logical condition creates different musical results depending on how you apply it.
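A sketch of that idea, reusing the pieces above and assuming a single Logic node can feed more than one processor (channel assignments are illustrative): one threshold condition, two different strategies.
void compose() {
    auto synth_a = vega.Sine(220.0f, 0.3f);
    auto synth_b = vega.Sine(220.0f, 0.3f);
    auto buffer_a = vega.NodeBuffer(0, 512, synth_a)[0] | Audio;
    auto buffer_b = vega.NodeBuffer(0, 512, synth_b)[1] | Audio;
    // One logical condition...
    auto gate_lfo = vega.Sine(0.5f, 1.0f);
    auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.3);
    gate_lfo >> gate;
    // ...applied as two different data transformation strategies.
    auto freeze = create_processor<LogicProcessor>(buffer_a, gate);
    freeze->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);  // granular stutter
    auto crush = create_processor<LogicProcessor>(buffer_b, gate);
    crush->set_modulation_type(LogicProcessor::ModulationType::REPLACE);         // 1-bit audio
}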
Envelopes as Data Shapes
Traditional: Envelope generates control signal, modulates something else. MayaFlux: Envelope is a data transformation function applied directly.
void compose() {
auto synth = vega.Sine(440.0f, 0.3f);
auto buffer = vega.NodeBuffer(0, 512, synth)[0] | Audio;
auto envelope = vega.Polynomial(std::vector{0.1, 0.9, -0.3, 0.05});
auto processor = create_processor<PolynomialProcessor>(buffer, envelope);
}
PolynomialProcessor applies the polynomial to every sample in the buffer.
Each buffer cycle (512 samples typically) gets reshaped by 0.1 + 0.9x - 0.3x² + 0.05x³.
No separate "envelope follower" or "modulation routing." The data is the shape.
Variations:
void compose() {
auto synth = vega.Sine(220.0f, 0.3f);
auto buffer = vega.NodeBuffer(0, 512, synth)[0] | Audio;
// Create polynomial in DIRECT mode (default)
// auto curve = vega.Polynomial(std::vector { 0.0, 1.2, -0.4 });
// auto processor = create_processor<PolynomialProcessor>(buffer, curve);
// Waveshaping distortion (polynomial transforms amplitude directly)
// curve is already in DIRECT mode by default
// Or RECURSIVE mode: bitcrushing through truncation feedback
// auto crush = vega.Polynomial(
// [](std::span<double> history) {
// // Quantize previous outputs, creating harmonic distortion
// double quantized = std::floor(history[0] * 8.0) / 8.0;
// return quantized * 0.7 + history[20] * 0.3;
// },
// PolynomialMode::RECURSIVE,
// 64
// );
// auto crush_proc = create_processor<PolynomialProcessor>(buffer, crush);
// Or FEEDFORWARD mode: asymmetric distortion based on trajectory
// auto trajectory = vega.Polynomial(
// [](std::span<double> inputs) {
// // Different distortion based on whether signal is rising or falling
// double slope = inputs[0] - inputs[10];
// double curve = (slope > 0) ? inputs[0] * inputs[0] * inputs[0] : std::tanh(inputs[0] * 5.0);
// return std::clamp(curve * (1.0 + std::abs(inputs[20] - inputs[40]) * 10.0), -0.5, 0.5);
// },
// PolynomialMode::FEEDFORWARD,
// 64);
// auto trajectory_proc = create_processor<PolynomialProcessor>(buffer, trajectory);
// Optional: adjust how processor iterates through buffer
// processor->set_process_mode(PolynomialProcessor::ProcessMode::BATCH);
}
You're not "applying an envelope to a sound." You're transforming data according to a mathematical shape. The same polynomial creates wildly different results depending on the processing mode.
Time as Compositional Material
Traditional DAWs treat time as a fixed grid. MayaFlux treats time as something you compose with.
void compose() {
auto synth = vega.Sine(440.0f, 0.3f)[0] | Audio;
// Regular pulse
schedule_metro(0.5, [synth]() {
synth->set_frequency(get_uniform_random(220.0f, 880.0f));
}, "pitch_changes");
// Timed event sequence
schedule_sequence({
{0.0, [synth]() { synth->set_frequency(220.0f); }},
{0.5, [synth]() { synth->set_frequency(440.0f); }},
{1.0, [synth]() { synth->set_frequency(880.0f); }}
}, "pitch_sequence");
}
metro creates a persistent temporal pulse at exact intervals (0.5 seconds = 2Hz).
sequence choreographs events at precise time offsets.
Both run indefinitely until you terminate them. Sample-accurate timing coordinated by the scheduler's clock system.
More complex temporal structures:
void compose() {
auto synth = vega.Sine(440.0f, 0.3f);
schedule_pattern(
[](uint64_t beat) -> std::any {
return (beat % 8 == 0) ? get_uniform_random(220.0f, 880.0f) : 0.0;
},
[synth](std::any value) {
double freq = safe_any_cast<double>(value);
if (freq > 0.0) {
synth->set_frequency(freq);
synth >> Time(1);
}
},
0.2,
"every_eighth");
}
pattern generates events based on beat-conditional logic.
The scheduler manages all temporal coordination; you just describe the relationships.
EventChains for temporal choreography:
void compose() {
auto synth = vega.Sine(440.0f, 0.3f)[0] | Audio;
auto chain = Kriya::EventChain{}
.then([synth]() { synth->set_frequency(220.0f); }, 0.0)
.then([synth]() { synth->set_frequency(440.0f); }, 0.5)
.then([synth]() { synth->set_frequency(880.0f); }, 1.0)
.then([synth]() { synth->set_frequency(220.0f); }, 1.5);
chain.start();
}
Each .then() schedules an action at a specific time offset from the previous event.
The entire sequence runs once, executing actions at precise moments. You’re composing with temporal relationships.
Buffer Pipelines: Data Flow as Process
Buffers aren't just storage. They're temporal gatherers that accumulate moments, transform them, then release.
BufferPipeline lets you compose complex data flow patterns declaratively.
Simple capture from file:
void compose() {
auto capture_buffer = vega.AudioBuffer()[0] | Audio;
auto pipeline = create_buffer_pipeline();
pipeline
>> BufferOperation::capture_file_from("res/audio.wav", 0)
.for_cycles(1)
>> BufferOperation::route_to_buffer(capture_buffer);
pipeline->execute_buffer_rate();
}
capture_file_from reads from an audio file.
route_to_buffer sends it to a buffer that plays.
execute_buffer_rate runs the pipeline synchronized to audio hardware boundaries.
Accumulation and batch processing. First, enable audio input so capture_input_from has something to read:
void compose() {
Config::get_global_stream_info().input.enabled = true;
Config::get_global_stream_info().input.channels = 1;
}
Then accumulate, transform, and route:
void compose() {
auto output = vega.AudioBuffer()[0] | Audio;
auto pipeline = create_buffer_pipeline();
pipeline->with_strategy(ExecutionStrategy::PHASED);
pipeline
>> BufferOperation::capture_input_from(get_buffer_manager(), 0)
.for_cycles(20)
>> BufferOperation::transform([](const auto& data, uint32_t cycle) {
auto samples = std::get<std::vector<double>>(data);
// Process all 20 buffer cycles as one batch
for (auto& sample : samples) {
sample *= 0.5;
}
return samples;
})
>> BufferOperation::route_to_buffer(output);
pipeline->execute_buffer_rate();
}
.for_cycles(20) captures 20 buffer cycles and concatenates them into a single buffer.
The transform sees all accumulated samples at once.
PHASED means capture happens first, then all processing.
Windowed analysis:
void compose() {
auto pipeline = create_buffer_pipeline();
pipeline
>> BufferOperation::capture_input_from(get_buffer_manager(), 0)
.for_cycles(10)
.with_window(2048, 0.5f)
.on_data_ready([](const auto& data, uint32_t cycle) {
const auto& windowed = std::get<std::vector<double>>(data);
std::cout << "Cycle " << cycle << ": " << windowed.size() << " samples\n";
});
pipeline->execute_buffer_rate();
}
with_window creates overlapping capture windows: use it for spectral analysis, feature extraction, and sliding-window transforms.
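For example, a per-window measurement: the body of on_data_ready is plain C++ over the delivered samples, so a running RMS readout needs no additional MayaFlux API (only the loudness calculation and print are new here).
void compose() {
    auto pipeline = create_buffer_pipeline();
    pipeline
        >> BufferOperation::capture_input_from(get_buffer_manager(), 0)
        .for_cycles(10)
        .with_window(2048, 0.5f)
        .on_data_ready([](const auto& data, uint32_t cycle) {
            const auto& windowed = std::get<std::vector<double>>(data);
            // Accumulate energy over the window, report its RMS level.
            double energy = 0.0;
            for (double sample : windowed) energy += sample * sample;
            double rms = windowed.empty() ? 0.0 : std::sqrt(energy / windowed.size());
            std::cout << "Window " << cycle << " RMS: " << rms << "\n";
        });
    pipeline->execute_buffer_rate();
}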
Cross-Domain Coordination
This is where MayaFlux diverges completely from analog paradigms. Audio data can directly control visual processes.
void compose() {
auto synth = vega.Sine(220.0f, 0.3f);
auto peaks = vega.Logic(LogicOperator::THRESHOLD, 0.2)[0] | Audio;
peaks->enable_mock_process(true);
peaks->set_input_node(synth);
auto window = create_window({ "Audio-Visual", 1920, 1080 });
auto points = vega.PointCollectionNode(500) | Graphics;
auto geom = vega.GeometryBuffer(points) | Graphics;
geom->setup_rendering({ .target_window = window });
window->show();
peaks->on_change_to(true,
[points](NodeContext ctx) {
float x = get_uniform_random(-1.0f, 1.0f);
float y = get_uniform_random(-1.0f, 1.0f);
points->add_point(Nodes::GpuSync::PointVertex {
.position = glm::vec3(x, y, 0.0f),
.color = glm::vec3(1.0f, 0.8f, 0.2f),
.size = 10.0f });
});
}
Audio peaks trigger visual particle spawns. Sample-accurate coordination between domains. You can't do this in analog hardware. There's no physical equivalent to "audio amplitude directly spawns GPU geometry."
This is digital-native thinking: all data is just numbers, so audio streams, visual transforms, and control logic are the same substrate.
Temporal Accumulation
Traditional: Audio flows continuously through effects. MayaFlux: Audio accumulates in temporal chunks, gets transformed, then releases.
void compose() {
auto synth = vega.Sine(220.0f, 0.3f);
auto buffer = vega.NodeBuffer(0, 512, synth)[0] | Audio;
// First transformation: add noise
auto noise = vega.Random();
auto noise_proc = create_processor<NodeSourceProcessor>(buffer, noise);
// Second transformation: polynomial shaping
auto curve = vega.Polynomial(std::vector { 0.0, 1.2, -0.4 });
auto shape_proc = create_processor<PolynomialProcessor>(buffer, curve);
// Third transformation: feedback delay
auto feedback_proc = create_processor<FeedbackProcessor>(buffer);
feedback_proc->set_feedback(0.3f);
// Chain executes in order each buffer cycle
auto chain = create_processing_chain();
chain->add_processor(noise_proc, buffer);
chain->add_processor(shape_proc, buffer);
chain->add_processor(feedback_proc, buffer);
buffer->set_processing_chain(chain);
}
Buffers aren't just "storage." They're moments of accumulated time. Each cycle, 512 samples gather, get transformed by the chain, then release. Order matters: noise → shaping → feedback creates different results than feedback → shaping → noise.
Buffer size (512, 1024, 2048 samples) becomes a rhythmic parameter. Larger buffers = slower transformation update rate = more “smeared” or “granular” results.
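A sketch of that idea: the same source material and the same polynomial shape, gathered at two different buffer sizes (this assumes a Polynomial node can be shared between processors; if not, create one per buffer).
void compose() {
    auto synth_a = vega.Sine(220.0f, 0.3f);
    auto synth_b = vega.Sine(220.0f, 0.3f);
    // Same transformation, two accumulation rates.
    auto fast = vega.NodeBuffer(0, 512, synth_a)[0] | Audio;   // reshaped every 512 samples
    auto slow = vega.NodeBuffer(0, 2048, synth_b)[1] | Audio;  // reshaped every 2048 samples
    auto curve = vega.Polynomial(std::vector { 0.0, 1.2, -0.4 });
    auto fast_proc = create_processor<PolynomialProcessor>(fast, curve);
    auto slow_proc = create_processor<PolynomialProcessor>(slow, curve);
}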
Modal Synthesis
Traditional synthesis uses banks of oscillators. MayaFlux treats resonant modes as parallel data transformations.
void compose() {
auto bell = vega.ModalNetwork(
12,
220.0,
ModalNetwork::Spectrum::INHARMONIC
)[0] | Audio;
schedule_metro(2.0, [bell]() {
bell->excite(0.8);
bell->set_fundamental(get_uniform_random(220.0f, 880.0f));
}, "bell_strikes");
}
ModalNetwork creates 12 resonant modes with inharmonic frequency ratios (bell-like).
Each `excite()` call strikes all modes simultaneously. `set_fundamental()` changes the base frequency, all modes adjust proportionally.
The same network with different spectrum types produces completely different timbres:
void compose() {
auto string = vega.ModalNetwork(
8,
440.0,
ModalNetwork::Spectrum::HARMONIC
)[0] | Audio;
auto scifi_synth = vega.ModalNetwork(
16,
220.0,
ModalNetwork::Spectrum::STRETCHED
)[1] | Audio;
}
HARMONIC creates integer frequency ratios (string-like).
STRETCHED creates slightly sharp harmonics (piano-like stiffness).
Same computational structure, different mathematical relationships produce different musical results.
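To hear that directly, a minimal sketch that strikes two spectra from one pulse, reusing excite and schedule_metro from above (the 1-second interval and excitation amplitudes are illustrative):
void compose() {
    auto harmonic = vega.ModalNetwork(8, 440.0, ModalNetwork::Spectrum::HARMONIC)[0] | Audio;
    auto inharmonic = vega.ModalNetwork(12, 440.0, ModalNetwork::Spectrum::INHARMONIC)[1] | Audio;
    // One pulse, two spectra: the only difference is the frequency-ratio rule.
    schedule_metro(1.0, [harmonic, inharmonic]() {
        harmonic->excite(0.8);
        inharmonic->excite(0.8);
    }, "compare_spectra");
}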
Where This Goes
You didn't learn "how to use MayaFlux." You learned to think about sound as:
- mathematical decisions evaluated at sample points
- truth conditions generating rhythm
- data transformations instead of control signals
- compositional temporal relationships
- cross-domain coordination
- temporal accumulation patterns
- modal networks instead of oscillators
This is digital-first thinking. Not "how do I make MayaFlux behave like my analog synth?" but "what becomes possible when I treat sound as computational data?"
Everything you know about music still applies. But the foundation shifts from signal flow to data transformation.
Next Steps
Immediate next steps:
- Live coding: modify these transformations while audio is running (Lila JIT system)
- Recursive structures: feedback not as "delay line" but "previous computational state informing current decision"
- Grammar-driven processing: defining transformation rules as formal grammars
Deeper paradigm shifts:
- Coroutine-based temporal coordination: suspending/resuming computations based on musical conditions
- Container-based composition: treating entire audio files as multi-dimensional data to slice/transform/recombine
- Network topologies: spatial relationships between synthesis elements creating emergent behaviors
Cross-domain expansion:
- Audio analysis → particle physics parameters
- Logic gates → shader compute triggers
- Polynomial transforms → texture coordinate warping
MayaFlux isn't "better" than Max or SuperCollider or Ableton. It's asking a different question: what if we stopped pretending digital audio is electricity flowing through wires, and embraced it as discrete computational events we can control with arbitrary precision?
Everything you know about music production still applies: envelope shapes, filter responses, rhythmic structure. But the conceptual foundation shifts from "signal flow" to "data transformation."
That shift unlocks possibilities that don't exist in analog paradigms. Not "better." Different.