Visual Materiality Part I
Geometry as Data
In MayaFlux, geometry isn't shapes you draw. It's data you generate.
Points, vertices, colors: all numerical streams you sculpt and send to the
GPU.
This tutorial shows you the smallest visual gesture: a single point.
MayaFlux's visual system follows the same architectural principles as audio processing:
- Nodes: Generate vertex data at visual rate (60 FPS)
- Buffers: Manage GPU memory and processor chains
- Processors: Handle upload, transformation, and rendering
- No manual draw loops: Declare structure, subsystem handles timing
Tutorial: Points in Space
The Simplest Visual Gesture
void compose() {
auto window = MayaFlux::create_window({ "Visual", 800, 600 });
auto point = vega.PointNode(glm::vec3(0.0f, 0.5f, 0.0f)) | Graphics;
auto buffer = vega.GeometryBuffer(point) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
}
Run this. You see a white point in the upper half of the window.
Change 0.5f to -0.5f. The point moves to the
lower half. Try 0.0f, 0.0f, 0.0f. It centers.
That's it. A point rendered from explicit coordinates. No primitives. Just three numbers.
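Nothing stops you from adding a second point the same way, each with its own buffer; a minimal sketch, using only the calls shown above:
auto left = vega.PointNode(glm::vec3(-0.5f, 0.0f, 0.0f)) | Graphics;
auto right = vega.PointNode(glm::vec3(0.5f, 0.0f, 0.0f)) | Graphics;
auto left_buffer = vega.GeometryBuffer(left) | Graphics;
auto right_buffer = vega.GeometryBuffer(right) | Graphics;
left_buffer->setup_rendering({ .target_window = window });
right_buffer->setup_rendering({ .target_window = window });
Notice the cost: every point brings its own buffer, its own upload, its own draw call. The next tutorial removes that overhead.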
Tutorial: Collections and Aggregation
In the previous section, you rendered a single point with its own
buffer.
Now: render many points with one buffer, one upload, one draw call.
This is how you work with data at scale: aggregation, not
repetition.
auto window = MayaFlux::create_window({ "Visual", 800, 600 });
// Create a spiral of points
auto points = vega.PointCollectionNode() | Graphics;
for (int i = 0; i < 200; i++) {
float t = i * 0.1f;
float radius = t * 0.05f;
float x = radius * std::cos(t);
float y = radius * std::sin(t);
// Color transitions from red → green → blue
float hue = static_cast<float>(i) / 200.0f;
glm::vec3 color(
std::sin(hue * 6.28f) * 0.5f + 0.5f,
std::sin(hue * 6.28f + 2.09f) * 0.5f + 0.5f,
std::sin(hue * 6.28f + 4.19f) * 0.5f + 0.5f
);
points->add_point({ glm::vec3(x, y, 0.0f), color, 8.0f });
}
auto buffer = vega.GeometryBuffer(points) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
Run this. You see a colorful spiral expanding from center to edges.
Change the formula. Try
float radius = std::sin(t) * 0.5f; for a circular pattern.
Try different color equations. The pattern is the same: generate point
data procedurally, one buffer handles all of them.
That's the key difference: One
GeometryBuffer, 200 vertices. One upload cycle, one draw
call.
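The generation loop is plain arithmetic, so any parametric curve works. A sketch of one alternative, a Lissajous-style figure, using the same add_point call:
// 3:2 Lissajous figure traced by 200 points
for (int i = 0; i < 200; i++) {
float t = i * 0.1f;
float x = 0.8f * std::sin(3.0f * t);
float y = 0.8f * std::sin(2.0f * t);
float hue = static_cast<float>(i) / 200.0f;
points->add_point({ glm::vec3(x, y, 0.0f), glm::vec3(hue, 1.0f - hue, 0.5f), 6.0f });
}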
Tutorial: Time and Updates
In the previous sections, geometry was static, created once in
compose().
Now: geometry that evolves. Points that grow, move, disappear.
This is where MayaFlux's architecture reveals its power: no draw loop,
updates on your terms.
void compose() {
auto window = MayaFlux::create_window({ "Growing Spiral", 1920, 1080 });
auto points = vega.PointCollectionNode(256) | Graphics;
auto buffer = vega.GeometryBuffer(points) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
// Grow spiral over time: 10 new points per frame
static float angle = 0.0f;
static float radius = 0.0f;
MayaFlux::schedule_metro(0.016, [points]() { // ~60 Hz
angle += 0.02f;
radius += 0.001f;
// Reset when spiral fills screen
if (radius > 2.0f) {
points->clear_points();
radius = 0.0f;
}
// Add 10 new points this frame
for (int i = 0; i < 10; ++i) {
float local_angle = angle + (i / 10.0f) * 6.28f;
float x = std::cos(local_angle) * radius;
float y = std::sin(local_angle) * radius;
float brightness = std::max(0.0f, 1.0f - radius); // Clamp: radius passes 1.0 before the reset at 2.0
points->add_point({
glm::vec3(x, y, 0.0f),
glm::vec3(brightness, brightness * 0.8f, brightness * 0.5f),
10.0f + (i * 2.0f)
});
}
});
}
Run this. The spiral grows from center, fades as it expands, then resets and repeats.
That's the pattern: Create geometry once. Schedule updates whenever you want. The graphics subsystem handles the rest.
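Nothing ties updates to the frame rate; the schedule is yours. A sketch of the same pattern at a slower timescale, built only from calls used elsewhere in this tutorial:
// Twice per second: replace the cloud with a fresh burst of 40 random points
MayaFlux::schedule_metro(0.5, [points]() {
points->clear_points();
for (int i = 0; i < 40; ++i) {
float a = MayaFlux::get_uniform_random(0.0f, 6.28f);
float r = MayaFlux::get_uniform_random(0.0f, 1.0f);
points->add_point({ glm::vec3(std::cos(a) * r, std::sin(a) * r, 0.0f),
glm::vec3(1.0f, 1.0f, 1.0f), 4.0f });
}
});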
Tutorial: Audio → Geometry
void settings() {
auto& stream = MayaFlux::Config::get_global_stream_info();
stream.input.enabled = true;
stream.input.channels = 1;
}
void compose() {
auto window = MayaFlux::create_window({ "Reactive", 1920, 1080 });
// Audio source: modulated drone
auto carrier = vega.Sine(220.0) | Audio;
auto modulator = vega.Sine(0.3) | Audio; // Slow amplitude modulation
auto envelope = vega.Polynomial([](double x) { return (x + 1.0) * 0.5; }) | Audio;
auto modded = modulator >> envelope; // Convert -1..1 to 0..1
// Play the modulated audio
auto audio_out = vega.Polynomial([modded](double x) {
return x * modded->get_last_output();
}) | Audio;
carrier >> audio_out;
// Particles
auto particles = vega.ParticleNetwork(
500,
glm::vec3(-1.5f, -1.0f, -0.5f),
glm::vec3(1.5f, 1.0f, 0.5f),
ParticleNetwork::InitializationMode::RANDOM_VOLUME)
| Graphics;
particles->set_gravity(glm::vec3(0.0f, -2.0f, 0.0f));
particles->set_drag(0.02f);
particles->set_bounds_mode(ParticleNetwork::BoundsMode::BOUNCE);
particles->set_output_mode(NodeNetwork::OutputMode::GRAPHICS_BIND);
// The crossing: envelope node controls gravity
particles->map_parameter("gravity", envelope, NodeNetwork::MappingMode::BROADCAST);
auto buffer = vega.NetworkGeometryBuffer(particles) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
}
Run this. You hear a slowly pulsing drone. Particles fall heavily when the sound swells, float when it recedes. The same envelope shapes both audio amplitude and gravitational force.
That's the paradigm shift. One node (one stream of numbers) simultaneously controls audio loudness and particle physics. Not because we "routed" audio to visuals. Because they were never separate.
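Because the mapping is just a node reference, extending it costs one line per parameter. A sketch, assuming ParticleNetwork exposes "drag" to map_parameter the same way it exposes "gravity":
// The envelope that drives loudness and gravity now also modulates drag
particles->map_parameter("drag", envelope, NodeNetwork::MappingMode::BROADCAST);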
Tutorial: Logic Events → Visual Impulse
Logic as Form
In MayaFlux, logic is not control flow glued onto systems after the fact.
Logic is data. Events are numerical transitions.
This tutorial shows how discrete logical change becomes visual motion.
void compose() {
auto window = MayaFlux::create_window({ "Event Reactive", 1920, 1080 });
// Irregular pulse source: noise → threshold creates stochastic triggers
auto noise = vega.Random() | Audio;
noise->set_amplitude(0.3f);
auto slow_filter = vega.IIR(std::vector<double> { 0.01 }, std::vector<double> { 1.0, -0.99 }) | Audio;
noise >> slow_filter;
// Derivative approximation (emphasizes change, not level)
auto diff = vega.IIR(std::vector<double> { 1.0, -1.0 }, std::vector<double> { 1.0 }) | Audio;
slow_filter >> diff;
// Rectify and smooth
auto rect = vega.Polynomial([](double x) { return std::abs(x); }) | Audio;
diff >> rect;
auto smooth = vega.IIR(std::vector<double> { 0.1 }, std::vector<double> { 1.0, -0.9 }) | Audio;
rect >> smooth;
// Threshold into logic: fires when change exceeds threshold
auto event_logic = vega.Logic(LogicOperator::THRESHOLD, 0.008) | Audio;
smooth >> event_logic;
// Particles in spherical formation
auto particles = vega.ParticleNetwork(
300,
glm::vec3(-1.0f, -1.0f, -1.0f),
glm::vec3(1.0f, 1.0f, 1.0f),
ParticleNetwork::InitializationMode::SPHERE_SURFACE)
| Graphics;
particles->set_gravity(glm::vec3(0.0f, 0.0f, 0.0f));
particles->set_drag(0.04f);
particles->set_bounds_mode(ParticleNetwork::BoundsMode::NONE);
particles->set_attraction_point(glm::vec3(0.0f, 0.0f, 0.0f));
auto buffer = vega.NetworkGeometryBuffer(particles) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
// On logic rising edge: breathing impulse
event_logic->on_change_to(true, [particles](const Nodes::NodeContext& ctx) { // true = rising edge (false→true)
// Radial expansion from center
for (auto& particle : particles->get_particles()) {
glm::vec3 pos = particle.point->get_position();
glm::vec3 outward = glm::normalize(pos) * 3.0f;
particle.velocity += outward;
}
});
}
Run this. Stochastic events emerge from the filtered noise: irregular, yet generated entirely within the patch. Each event triggers a radial breath. The sphere pulses with emergent rhythm, not metronomic time.
The chain detects change in the noise contour, not its level. Slow drifts pass silently; sharp inflections trigger events. It's the same logic-detection pattern you'd use on external input, here driven by a generative source instead.
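The rising edge is only half the vocabulary. A sketch of the complementary gesture, assuming on_change_to accepts false for the falling edge (true→false):
// On falling edge: pull particles back toward the center
event_logic->on_change_to(false, [particles](const Nodes::NodeContext& ctx) {
for (auto& particle : particles->get_particles()) {
glm::vec3 pos = particle.point->get_position();
particle.velocity -= glm::normalize(pos) * 1.5f;
}
});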
Tutorial: Topology and Emergent Form
void compose() {
auto window = MayaFlux::create_window({ "Emergent", 1920, 1080 });
// Control signal: slow triangle wave
auto control = vega.Phasor(0.15) | Audio; // 0..1 ramp, ~7 second cycle
auto shaped = vega.Polynomial([](double x) {
return x < 0.5 ? x * 2.0 : 2.0 - x * 2.0; // Triangle
}) | Audio;
control >> shaped;
// Audio: resonant ping modulated by control
auto resonator = vega.Sine(330.0) | Audio;
auto env = vega.Polynomial([](double x) {
return std::exp(-x * 5.0);
}) | Audio;
shaped >> env;
auto audio_out = resonator * env | Audio;
audio_out * 0.3; // Scale output gain
// Particles with spatial interaction
auto particles = vega.ParticleNetwork(
400,
glm::vec3(-1.5f, -1.5f, -0.5f),
glm::vec3(1.5f, 1.5f, 0.5f),
ParticleNetwork::InitializationMode::GRID
) | Graphics;
particles->set_topology(NodeNetwork::Topology::SPATIAL);
particles->set_interaction_radius(0.5f);
particles->set_spring_stiffness(0.2f);
particles->set_repulsion_strength(1.0f);
particles->set_gravity(glm::vec3(0.0f, 0.0f, 0.0f));
particles->set_drag(0.03f);
particles->set_bounds_mode(ParticleNetwork::BoundsMode::BOUNCE);
// Control signal → spring stiffness: rising = rigidifying, falling = softening
particles->map_parameter("spring_stiffness", shaped, NodeNetwork::MappingMode::BROADCAST);
auto buffer = vega.NetworkGeometryBuffer(particles) | Graphics;
buffer->setup_rendering({ .target_window = window });
window->show();
// Periodic disturbance to reveal stiffness changes
MayaFlux::schedule_metro(0.5, [particles]() {
particles->apply_global_impulse(glm::vec3(
MayaFlux::get_uniform_random(-0.5f, 0.5f),
MayaFlux::get_uniform_random(-0.5f, 0.5f),
MayaFlux::get_uniform_random(-0.2f, 0.2f)
));
});
}
Run this. The grid receives periodic disturbances. When the triangle wave is low, springs are weak and particles flow fluidly. As it rises, springs tighten and the structure rigidifies. The same signal that shapes the sound becomes material property.
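Stiffness isn't special here; any interaction parameter can be driven the same way. A sketch, assuming "repulsion_strength" is exposed to map_parameter under the same name as its setter:
// Invert the control: as springs tighten, repulsion relaxes
auto inverted = vega.Polynomial([](double x) { return 1.0 - x; }) | Audio;
shaped >> inverted;
particles->map_parameter("repulsion_strength", inverted, NodeNetwork::MappingMode::BROADCAST);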
Conclusion
The Deeper Point
You've crossed the boundary that most frameworks enforce: audio and visuals as separate pipelines, separate mental models, separate codebases.
In MayaFlux, there is no boundary. A node that shapes amplitude can shape gravity. A logic gate that triggers a click can trigger a breath. The same polynomial that distorts audio can warp spatial relationships.
This isn't a "feature." It's the consequence of treating all creative data as what it actually is: numbers flowing through transformations.
The particle systems you've built here demonstrate the principle of cross-domain data flow, but they're still working with pre-built physics simulations. The GPU does what ParticleNetwork tells it to do.
What Comes Next
Visual Materiality Part II moves deeper into the GPU itself.
Instead of mapping audio to physics parameters, you'll bind nodes directly
to shader programs. NodeBindingsProcessor writes node outputs
to push constants: small, fast values updated every frame.
DescriptorBindingsProcessor writes larger data (vectors,
matrices, spectra) to UBOs and SSBOs.
You'll learn:
- Compute shaders: Massively parallel data transformation on GPU
- Push constant bindings: Node values injected directly into shader execution
- Descriptor bindings: Spectrum data, matrices, structured arrays flowing to GPU
- Custom vertex transformations: Audio-driven geometry deformation
- Fragment manipulation: Color, texture, and pixel-level audio response
The architecture you've learned (nodes, buffers, processors, tokens) remains identical. But instead of map_parameter("gravity", envelope), you'll write:
auto processor = std::make_shared<NodeBindingsProcessor>("displacement.comp");
processor->bind_node("amplitude", envelope, offsetof(PushConstants, amplitude));
And inside your GLSL:
layout(push_constant) uniform PushConstants {
float amplitude;
};
void main() {
vec3 displaced = position + normal * amplitude * 0.5;
// ...
}
Same get_last_output(). Same data flow. But now you control
every vertex, every fragment, every compute thread.
The substrate doesn't change. Your access to it deepens.