As the title suggests, moon-eye now supports audio input via microphone.
Why the mic
I was hanging out with some buddies listening to music and thought it would be cool to put on moon-eye, but I wasn’t in control of the music. I realized it was a little impractical to assume that the device that can/will run moon-eye will be the same device controlling the music. Moreover, it doesn’t make sense to only visualize files uploaded directly to the browser: most forms of audio consumption are short (think a 3–10 minute song), so you’d need numerous trips to your device to choose a different file to visualize.
There were many ways to solve this problem (one of which is to add support for streaming integrations with other platforms [still on the backlog]), but I figured the mic was the best choice because it was 1) the easiest (the browser exposes it natively through the Web Audio API) and 2) the most flexible, since it’s source/device agnostic — if you have a mic, you can get the input.
The technical side
Implementing mic functionality was actually as simple as adding a button on the landing page and piping its value through to the visualizer. During development, I would often use the mic instead of file upload, as it was faster for iteration.
The internal logic simply branches on that value to determine whether the audio is coming from the user’s device (microphone option) or an element on the page that holds the selected media (file option).
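In Web Audio terms, that branch can be sketched roughly like this — the function name, parameter names, and "mic"/"file" values here are my own illustration, not moon-eye’s actual code:

```javascript
// Illustrative sketch only — createSourceNode and the "mic"/"file"
// values are hypothetical, not moon-eye's actual implementation.
// In the browser: ctx is an AudioContext, stream is a MediaStream
// obtained from navigator.mediaDevices.getUserMedia({ audio: true }),
// and element is the <audio> element holding the selected file.
function createSourceNode(ctx, input, stream, element) {
  if (input === "mic") {
    // Microphone option: wrap the live MediaStream from the device.
    return ctx.createMediaStreamSource(stream);
  }
  // File option: wrap the media element on the page.
  return ctx.createMediaElementSource(element);
}
```

Whichever branch runs, the resulting source node can then be connected to the rest of the audio graph (e.g. an AnalyserNode) that drives the visuals.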
Try it now
Intrigued? Give it a whirl.