Processing Expressions
Up to this point, you’ve learned how audio flows:
- containers feed buffers
- buffers run processors
- processors shape data.
Now we expand the vocabulary of processors themselves. In MayaFlux, mathematics, logic, feedback, and generation are not side features; they are first-class sculpting tools. Polynomials don't just calculate, they sculpt. Logic doesn't just branch, it decides. This tutorial shows how computational expressions become sound-shaping primitives.
Tutorial: Polynomial Waveshaping
The Simplest Path
Run this code. Your file plays with harmonic distortion.
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();
    // Polynomial: x² generates harmonics
    auto poly = vega.Polynomial([](double x) { return x * x; });
    auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], poly);
}
Replace "path/to/file.wav" with an actual path.
The audio sounds richer, warmer—subtle saturation. That's harmonic content added by the squaring function.
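Why squaring adds harmonics (a worked identity, independent of MayaFlux): for a pure sine input, sin²(ωt) = 1/2 − (1/2)·cos(2ωt), so the squared signal contains a constant (DC) offset plus a component at exactly twice the input frequency. On complex material, every pair of components also produces sum and difference frequencies, which the ear reads as added richness and gentle saturation.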
Tutorial: Recursive Polynomials (Filters and Feedback)
The Next Step
You have memoryless waveshaping. Now add memory.
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();
    // Recursive: output depends on previous outputs
    auto recursive = vega.Polynomial(
        [](std::span<const double> history) {
            // history[0] = previous output, history[1] = two samples ago
            return 0.5 * history[0] + 0.3 * history[1];
        },
        PolynomialMode::RECURSIVE,
        2 // remember 2 previous outputs
    );
    auto processor = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], recursive);
}
Run this. You hear echo/resonance—the signal feeds back into itself.
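In filter terms, the lambda supplies the recursive part of a difference equation, roughly y[n] = 0.5·y[n−1] + 0.3·y[n−2] combined with the incoming sample (exactly how the input is mixed in depends on PolynomialProcessor's recursive mode). Keeping the coefficient magnitudes summing below 1.0 keeps the feedback stable and decaying; push them past that and the output can grow without bound.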
Tutorial: Logic as Decision Maker
The Simplest Path
Run this code. You'll hear rhythmic pulses.
void compose() {
    auto buffer = vega.AudioBuffer()[0] | Audio;
    // Logic node: threshold detection
    auto logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    auto processor = MayaFlux::create_processor<LogicProcessor>(buffer, logic);
    processor->set_modulation_type(LogicProcessor::ModulationType::REPLACE);
    // Feed a sine wave into the logic node
    auto sine = vega.Sine(2.0);
    logic->set_input_node(sine);
}
What you hear: a 2 Hz pulse train, a beep every half second.
The sine crosses the 0.0 threshold twice per cycle. The logic node compares each sample against that threshold, so the output becomes binary: 1.0 while the sine is above it, 0.0 while it is below.
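Conceptually, THRESHOLD with REPLACE modulation boils down to a per-sample comparison. A minimal plain-C++ sketch of the idea (not the MayaFlux implementation):
// Replace the buffer's sample with a binary value derived from the control input.
double threshold_replace(double control_sample, double threshold) {
    return control_sample > threshold ? 1.0 : 0.0; // high while above the threshold
}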
Tutorial: Combining Polynomial + Logic
The Pattern
Load a file. Use logic processors to binary-crush and freeze the audio in rhythmic chunks, then apply extreme polynomial waveshaping: three processors, one explicit chain, one buffer.
void compose() {
    auto sound = vega.read_audio("drums.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();
    // Build an explicit chain so the three stages run in a fixed order
    auto chain = MayaFlux::create_processing_chain();
    // Step 1: Binary threshold - replaces samples with 1.0/0.0 (brutal bitcrush)
    auto bitcrush = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    auto crush_proc = MayaFlux::create_processor<LogicProcessor>(buffers[0], bitcrush);
    crush_proc->set_modulation_type(LogicProcessor::ModulationType::REPLACE);
    // Step 2: Freeze audio in chunks - granular stutter
    auto clock = vega.Sine(4.0); // 4 Hz freeze rate
    auto freeze_logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    freeze_logic->set_input_node(clock);
    auto freeze_proc = MayaFlux::create_processor<LogicProcessor>(buffers[0], freeze_logic);
    freeze_proc->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);
    // Step 3: Extreme waveshaping distortion
    auto destroyer = std::make_shared<Polynomial>([](double x) {
        return std::copysign(1.0, x) *
               std::pow(std::abs(x), 0.3); // Extreme compression
    });
    auto poly_proc = MayaFlux::create_processor<PolynomialProcessor>(buffers[0], destroyer);
    chain->add_processor(crush_proc, buffers[0]);
    chain->add_processor(freeze_proc, buffers[0]);
    chain->add_processor(poly_proc, buffers[0]);
    buffers[0]->set_processing_chain(chain);
}
Tutorial: Processing Chains and Buffer Architecture
Tutorial: Explicit Chain Building
The Simplest Path
You've been adding processors one at a time. Now control their order explicitly.
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffer = MayaFlux::get_last_created_container_buffers()[0];
    // Create an empty chain
    auto chain = MayaFlux::create_processing_chain();
    // Build the chain: Distortion → Gate → Compression
    auto distortion = vega.Polynomial([](double x) { return std::tanh(x * 2.0); });
    auto gate = vega.Logic(LogicOperator::THRESHOLD, 0.1);
    auto compression = vega.Polynomial([](double x) {
        return x / (1.0 + std::abs(x));
    });
    chain->add_processor(std::make_shared<PolynomialProcessor>(distortion), buffer);
    chain->add_processor(std::make_shared<LogicProcessor>(gate), buffer);
    chain->add_processor(std::make_shared<PolynomialProcessor>(compression), buffer);
    // Attach the chain to the buffer
    buffer->set_processing_chain(chain);
}
Run this. You hear: clean audio → saturated → gated (silence below threshold) → compressed (controlled peaks).
Swap the order:
chain->add_processor(std::make_shared<LogicProcessor>(gate), buffer);             // Gate first
chain->add_processor(std::make_shared<PolynomialProcessor>(distortion), buffer);  // Then distort
chain->add_processor(std::make_shared<PolynomialProcessor>(compression), buffer);
Different sound. Order matters.
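To see why, trace one quiet sample through both orderings. This is plain C++ arithmetic, independent of MayaFlux; the gate here simply zeroes anything below the 0.1 threshold:
#include <cmath>
#include <cstdio>

int main() {
    double x = 0.06; // a quiet sample, below the 0.1 gate threshold
    // Distort first: tanh boosts it above the threshold, so the gate lets it through.
    double d = std::tanh(x * 2.0);                              // ~0.119
    double distort_then_gate = (std::fabs(d) > 0.1) ? d : 0.0;  // ~0.119 survives
    // Gate first: the sample is silenced before distortion ever sees it.
    double g = (std::fabs(x) > 0.1) ? x : 0.0;                  // 0.0
    double gate_then_distort = std::tanh(g * 2.0);              // 0.0
    std::printf("distort->gate: %f, gate->distort: %f\n", distort_then_gate, gate_then_distort);
    return 0;
}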
Tutorial: Various Buffer Types
Buffer Ecosystem Overview
MayaFlux provides specialized buffer types for different generation and processing patterns:
- NodeBuffer: Generate audio from mathematical nodes
- FeedbackBuffer: Recursive temporal processing with memory
- SoundStreamWriter: Capture processed audio to containers
Each buffer type has default processors and specific use cases.
Generating from Nodes (NodeBuffer)
The Next Pattern
So far: buffers read from files, nodes affect buffer processing.
Now: buffers generate from nodes.
void compose() {
    // Create a sine node
    auto sine = vega.Sine(440.0);
    // Create a NodeBuffer that captures the sine's output
    auto node_buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;
    // Add processing to the generated audio
    auto distortion = vega.Polynomial([](double x) { return x * x * x; });
    MayaFlux::create_processor<PolynomialProcessor>(node_buffer, distortion);
}
Run this. You hear a 440 Hz sine wave with cubic distortion.
No file loaded. The buffer generates audio by evaluating the node 512 times per cycle.
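Conceptually, filling one cycle of a NodeBuffer is a loop that evaluates the node once per sample. A plain-C++ sketch of that idea (not the NodeBuffer internals; the 48 kHz sample rate is an assumption):
#include <cmath>
#include <vector>

// Fill one 512-sample block from a sine oscillator, advancing phase per sample.
std::vector<double> generate_block(double freq, double sample_rate, double& phase) {
    const double two_pi = 6.283185307179586;
    std::vector<double> block(512);
    for (double& sample : block) {
        sample = std::sin(phase);
        phase += two_pi * freq / sample_rate;
    }
    return block;
}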
FeedbackBuffer (Recursive Audio)
The Pattern
Buffers that remember their previous state.
void compose() {
    // FeedbackBuffer: 70% feedback, 512 samples delay
    auto feedback_buf = vega.FeedbackBuffer(0, 512, 0.7f, 512)[0] | Audio;
    // Feed an impulse into the buffer to kick-start resonance
    auto impulse = vega.Impulse(2.0); // 2 Hz pulse train
    vega.NodeBuffer(0, 512, impulse, false)[0] | Audio; // Adds to feedback buffer
    // WARN: Remember to turn OFF after a few seconds as feedback can build up!
}
Run this. You hear: repeating echoes, each 70% of the previous amplitude.
The buffer feeds back into itself—output becomes input next cycle.
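In difference-equation form the behaviour is roughly y[n] = x[n] + 0.7 · y[n − 512] (a sketch of the idea, not the FeedbackBuffer internals): each trip around the loop arrives 512 samples later and 70% quieter, so echo k has amplitude 0.7^k. Because the feedback factor is below 1.0 the echoes decay; at 1.0 or above they would build up indefinitely.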
SoundStreamWriter (Capturing Audio)
The Pattern
Processors that write buffer data somewhere (instead of transforming it).
void compose() {
    auto sound = vega.read_audio("path/to/file.wav") | Audio;
    auto buffer = MayaFlux::get_last_created_container_buffers()[0];
    // Create a DynamicSoundStream (accumulator for captured audio)
    auto capture_stream = std::make_shared<DynamicSoundStream>(48000, 2);
    // Create a processor that writes buffer data to the stream
    auto writer = std::make_shared<SoundStreamWriter>(capture_stream);
    // Add to buffer's processing chain
    auto chain = buffer->get_processing_chain();
    chain->add_processor(writer);
    // File plays AND is captured to stream simultaneously
}
Run this. The file plays and is written to capture_stream every cycle.
After playback, capture_stream contains a copy of the entire file (processed through any other processors in the chain before the writer).
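Conceptually the writer is an appender: every cycle it copies the buffer's current block onto the end of a growing stream. A plain-C++ sketch of that idea (not the SoundStreamWriter implementation):
#include <vector>

// Accumulates every processed block it is handed, in arrival order.
struct CaptureSketch {
    std::vector<double> samples;
    void append(const std::vector<double>& block) {
        samples.insert(samples.end(), block.begin(), block.end());
    }
};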
Closing: The Buffer Ecosystem
You now understand:
Buffer Types:
- AudioBuffer: Generic accumulator
- SoundContainerBuffer: Reads from files/streams (default: SoundStreamReader)
- NodeBuffer: Generates from nodes (default: NodeSourceProcessor)
- FeedbackBuffer: Recursive delay (default: FeedbackProcessor)
Processor Types:
- PolynomialProcessor: Waveshaping, filters, recursive math
- LogicProcessor: Decisions, gates, triggers
- SoundStreamWriter: Capture to containers
Processing Flow:
Default Processor (acquire/generate data)
↓
Processing Chain (transform data)
↓
Output (speakers/containers/other buffers)
Next: Buffer routing, cloning, and supply mechanics—how to send processed buffers to multiple channels/domains.
Tutorial: Audio Input, Routing, and Multi-Channel Distribution
Routing Ecosystem Overview
MayaFlux provides sophisticated routing capabilities for capturing, distributing, and cloning audio across multiple channels:
- Audio Input: Capture live microphone input with real-time processing
- Buffer Supply: Route one buffer to multiple output channels efficiently
- Buffer Cloning: Create independent copies for parallel processing
These systems enable complex signal routing, multi-channel processing, and efficient resource utilization.
Capturing Audio Input
The Simplest Path
So far: buffers read from files or generate from nodes. Now: capture from your microphone.
void settings() {
    auto& stream = MayaFlux::Config::get_global_stream_info();
    stream.input.enabled = true; // Enable microphone input
    stream.input.channels = 1;   // Mono input
}
void compose() {
    // Create a buffer that listens to microphone channel 0
    auto mic_buffer = MayaFlux::create_input_listener_buffer(0, true);
    // Add processing to the live input
    auto distortion = vega.Polynomial([](double x) { return std::tanh(x * 3.0); });
    MayaFlux::create_processor<PolynomialProcessor>(mic_buffer, distortion);
}
Run this. Speak into your microphone. You hear yourself with distortion applied in real-time.
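tanh is a natural choice for live input because it is bounded: however hot the microphone signal gets, the output stays inside (−1, 1). For example, an input peak of 0.9 maps to tanh(0.9 × 3.0) = tanh(2.7) ≈ 0.991, a smooth saturation rather than a hard clip.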
Buffer Supply (Routing to Multiple Channels)
The Pattern
One buffer, multiple output channels.
void compose() {
    auto sine = vega.Sine(440.0);
    auto buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio; // Registered to channel 0
    // Supply this buffer to channels 1 and 2 as well
    MayaFlux::supply_buffer_to_channels(buffer, {1, 2}, 0.5); // 50% mix level
}
Run this. You hear the same 440 Hz sine on all three channels (left, center, right in a surround setup).
The buffer processes once, but outputs to three channels.
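The efficiency win is that the expensive work (the processing chain) runs once; supplying is then just a scaled accumulation into each destination channel. A plain-C++ sketch of the idea (not the MayaFlux mixer):
#include <cstddef>
#include <vector>

// Mix one processed block into several channel output buffers at a send level.
void supply_block(const std::vector<double>& block,
                  std::vector<std::vector<double>>& channel_outputs,
                  const std::vector<std::size_t>& targets, double level) {
    for (std::size_t ch : targets)
        for (std::size_t i = 0; i < block.size(); ++i)
            channel_outputs[ch][i] += block[i] * level; // accumulate, don't overwrite
}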
Buffer Cloning
The Pattern
One buffer specification, multiple independent instances.
void compose() {
    auto sine = vega.Sine(440.0);
    auto buffer = vega.NodeBuffer(0, 512, sine); // Don't register yet
    // Clone to channels 0, 1, 2
    MayaFlux::clone_buffer_to_channels(buffer, {0, 1, 2});
}
Run this. You hear three independent sine waves on three channels.
Each clone processes independently—they don't share data.
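The contrast with supplying is ownership: a supplied buffer is one object referenced from several channels, while clones are separate objects with separate state. A plain-C++ analogy (not the MayaFlux types):
#include <memory>
#include <vector>

struct BufferSketch { std::vector<double> data; };

int main() {
    auto supplied = std::make_shared<BufferSketch>(); // "supply": one buffer, many references
    auto channel_a = supplied, channel_b = supplied;  // both see the same data
    BufferSketch source;
    BufferSketch clone_one = source, clone_two = source; // "clone": independent copies
    clone_one.data.push_back(1.0);                    // leaves clone_two untouched
    return 0;
}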
Closing: The Routing Ecosystem
You now understand:
Input Capture:
- InputAudioBuffer: Hardware input hub
- InputAccessProcessor: Dispatches to listeners
- create_input_listener_buffer(): Quick setup
- read_from_audio_input() / detach_from_audio_input(): Manual control
Buffer Supply:
- supply_buffer_to_channel(): Route one buffer to multiple outputs
- Mix levels: Control send amounts
- Efficiency: Process once, output many times
- remove_supplied_buffer_from_channel(): Dynamic routing changes
Buffer Cloning:
- clone_buffer_to_channels(): Create independent copies
- Preserves structure: Type, processors, chains
- Independent state: Each clone processes separately
- Post-clone modification: Differentiate behavior after creation
Mental Model:
Input (Microphone)
↓
InputAudioBuffer → Listener buffers (capture)
↓
Processing chains (transform)
↓
Supply (route to multiple channels)
OR
Clone (create independent instances)
↓
RootAudioBuffer (mix per channel)
↓
Output (Speakers)
Next: BufferPipeline (declarative multi-stage workflows with temporal control)