Write a sketch to map a sound to an image and back again. To do this, you'll need to write a function (keep it in a separate tab) that maps each sample in any sound file to a pixel. Your mapping function will take at least three arguments: an AudioChannel, a high threshold value, and a low threshold value, and it will use the two thresholds to choose a color for each sample.
Here's an initial mapping to try: a high threshold of 0.03 (map to a red pixel) and a low threshold of -0.03 (map to a blue pixel); everything else maps to green. If one color seems too dominant in your resulting image, experiment with a different mapping. Document the selected mapping by including a println statement that displays the ranges used for each color.
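The threshold logic above can be sketched in plain Java (class and method names here are hypothetical, and packed ARGB ints stand in for Processing's color() values; in your sketch the samples would come from the AudioChannel's sample array):

```java
// Sketch of the sample-to-color mapping logic. Names are hypothetical;
// adapt to your own tabs and to however you read the AudioChannel.
public class SampleMapper {
    static final int RED   = 0xFFFF0000;
    static final int GREEN = 0xFF00FF00;
    static final int BLUE  = 0xFF0000FF;

    // Map one sample (roughly -1.0 .. 1.0) to a pixel color.
    static int sampleToColor(float sample, float hi, float lo) {
        if (sample > hi) return RED;   // loud positive samples
        if (sample < lo) return BLUE;  // loud negative samples
        return GREEN;                  // everything in between
    }

    // Map a whole buffer of samples to an array of pixels, and
    // document the chosen ranges with a println, as required.
    static int[] mapSamples(float[] samples, float hi, float lo) {
        int[] pixels = new int[samples.length];
        for (int i = 0; i < samples.length; i++) {
            pixels[i] = sampleToColor(samples[i], hi, lo);
        }
        System.out.println("mapping: > " + hi + " -> red, < " + lo
                + " -> blue, otherwise green");
        return pixels;
    }

    public static void main(String[] args) {
        float[] demo = {0.5f, -0.5f, 0.0f, 0.04f, -0.04f};
        int[] px = mapSamples(demo, 0.03f, -0.03f);
        for (int p : px) System.out.println(Integer.toHexString(p));
    }
}
```

In Processing you would write the resulting colors into an image's pixels[] array (followed by updatePixels()) rather than returning a bare int array.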
Your sketch will make several calls to the mapping function with different arguments: display and save the image resulting from each call (be sure to save each to a different file name!). Display at least three different images. You can use mouse or key presses to control when each call is made.
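One simple way to keep the saved images from overwriting each other is to number them with a counter. A minimal sketch of that idea (the class name and file-name pattern are hypothetical; in Processing you would pass the result to save()):

```java
// Hypothetical helper: generate a fresh file name for each saved image
// so repeated saves don't overwrite earlier ones.
public class ImageNamer {
    private int count = 0;

    String next() {
        count++;
        return "mapped-" + count + ".png"; // mapped-1.png, mapped-2.png, ...
    }

    public static void main(String[] args) {
        ImageNamer namer = new ImageNamer();
        System.out.println(namer.next()); // mapped-1.png
        System.out.println(namer.next()); // mapped-2.png
    }
}
```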
When you have this conversion working, add functions that convert the image back to a sound when the user clicks on the image. Use the inverse mapping: if the pixel is red, output a sample with value 0.03, and so on (for green, you could try 0; you may need to experiment). Play the resulting sound. Is it recognizable?
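The inverse mapping can be sketched the same way (names are hypothetical, and the sample values 0.03 / -0.03 / 0.0 are just the starting point suggested above, to be tuned by experiment):

```java
// Sketch of the inverse mapping: recover a sample value from each
// pixel's packed ARGB color by asking which channel dominates.
public class PixelUnmapper {
    static float colorToSample(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        if (r > g && r > b) return 0.03f;  // red pixel -> high sample
        if (b > r && b > g) return -0.03f; // blue pixel -> low sample
        return 0.0f;                       // green (or anything else)
    }

    // Convert a whole pixel array back into a sample buffer.
    static float[] unmap(int[] pixels) {
        float[] samples = new float[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            samples[i] = colorToSample(pixels[i]);
        }
        return samples;
    }
}
```

In your sketch, you would call this on the displayed image's pixels[] inside a mouse-press handler and then hand the resulting buffer to your sound library for playback.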
Include a README file with your sketch that answers the following questions: what is lost in the round-trip conversion, and what can still be recovered? This exercise shows us something more about information theory and digital media representation; in the end, it's all just bits!