Max Patch Share :: Glitchface
How Does It Work?
This experiment takes in the webcam image, turns it into a 3D mesh, and glitches it out based on audio level. (See my first Selfie Sunday tutorial for a full video guide on turning your webcam feed into a mesh.)
Everything is automated via the “loudness” value from the audio input system.
The Frame Buffer Glitching simply stores the last 40 frames of webcam video and plays them back only when the loudness crosses a threshold, at which point it jumps randomly through the stored frames. This is meant to enhance the louder, more frantic moments in music.
Otherwise, the image goes through a smoothing process to suit the more ambient moments of the music.
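The two behaviours above can be sketched outside of Max in a few lines of Python. The 40-frame buffer size comes from the patch; the loudness threshold and smoothing coefficient are hypothetical values you'd tune by ear:

```python
import random
from collections import deque

BUFFER_SIZE = 40           # the patch stores the last 40 frames
LOUDNESS_THRESHOLD = 0.5   # hypothetical threshold, tune to taste
SMOOTHING = 0.8            # hypothetical smoothing coefficient

frame_buffer = deque(maxlen=BUFFER_SIZE)  # oldest frames drop off automatically
smoothed = None

def process(frame, loudness):
    """Glitch on loud input, smooth on quiet input."""
    global smoothed
    frame_buffer.append(frame)
    if loudness >= LOUDNESS_THRESHOLD:
        # loud: jump randomly through the stored frames
        return random.choice(frame_buffer)
    # quiet: exponential smoothing between successive frames
    if smoothed is None:
        smoothed = frame
    else:
        smoothed = [SMOOTHING * s + (1 - SMOOTHING) * f
                    for s, f in zip(smoothed, frame)]
    return smoothed
```

The key design point is that the buffer is always being filled, so the moment the loudness spikes there is already recent footage to scrub through.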
The webcam image is then stored as a texture to colour the mesh at the end. It's also turned into a luminance image and packed into the 5-plane matrix to prepare it for [jit.gen], where the luminance value deforms the z position of the mesh vertices.
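Conceptually, the luminance-to-z step looks like the NumPy sketch below. This is not the actual [jit.gen] patcher, just the same idea in Python; the Rec. 601 luma weights are a common RGB-to-luminance conversion, and `z_scale` is a hypothetical depth multiplier:

```python
import numpy as np

def luminance_to_mesh(rgb, z_scale=1.0):
    """Build an (H, W, 3) vertex grid whose z plane is driven by luminance.

    rgb: float array, shape (H, W, 3), values in [0, 1].
    z_scale: hypothetical depth multiplier.
    """
    h, w, _ = rgb.shape
    # Rec. 601 luminance weights (a common RGB-to-luma conversion)
    lum = rgb @ np.array([0.299, 0.587, 0.114])
    # x/y laid out on a normalized grid, like a flat mesh plane
    y, x = np.mgrid[0:h, 0:w]
    return np.stack([x / (w - 1), y / (h - 1), z_scale * lum], axis=-1)
```

Bright pixels push their vertex forward, dark pixels stay flat, which is what turns the webcam face into a relief-style mesh.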
The BFG noise deformation matrix is also sent into [jit.gen] to morph the x and y vertex positions of the mesh.
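The x/y morph amounts to adding a per-vertex noise offset to the vertex grid. A minimal sketch, assuming a 2-plane noise matrix and a hypothetical `amount` parameter controlling the morph depth:

```python
import numpy as np

def morph_xy(verts, noise, amount=0.1):
    """Offset mesh x/y vertex positions with a 2-plane noise matrix.

    verts: (H, W, 3) vertex grid; noise: (H, W, 2), values in [-1, 1].
    amount: hypothetical morph depth.
    """
    out = verts.copy()
    out[..., :2] += amount * noise  # leave the z plane (luminance depth) alone
    return out
```

Because only the first two planes are touched, the luminance-driven depth survives while the surface wobbles sideways.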