The users – up to three people at a time – wear headbands fitted with infrared LEDs. A camera with an IR pass filter installed internally produces an image that can be tracked in Max. This tracking info is then sent to the JavaScript running in the browser, using socket.io as a bridge.
A web app runs in the browser from a local Node server; this is where the p5.js library is used. Programmable shapes allow the eyes to be animated programmatically and respond to the people in the room. The eyes have no inherent emotions of their own: because they have a low level of visual detail, it is the viewers themselves who project emotion onto the eyes as they watch them. The installation is therefore affective.
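As a rough sketch of how the eye-following part can work (the function name, ranges, and parameters here are my own illustration, not the installation's actual code): each tracked head position, normalised to 0–1, is mapped to a pupil offset that is clamped so the pupil stays inside the eye.

```javascript
// Hypothetical sketch of pupil-follow logic, assuming the socket.io
// bridge delivers normalised (0..1) head coordinates from Max.
// In the real sketch this would be called from p5.js's draw() loop.

// Map a normalised tracked position to a pupil offset, clamped so the
// pupil never leaves the eye. pupilRange is the max offset in pixels.
function pupilOffset(trackedX, trackedY, pupilRange) {
  // Centre the 0..1 range on 0 so the pupil rests in the middle
  const dx = (trackedX - 0.5) * 2; // -1 .. 1
  const dy = (trackedY - 0.5) * 2;
  // Clamp the vector length to 1 so the pupil stays inside the eye
  const len = Math.hypot(dx, dy);
  const scale = len > 1 ? 1 / len : 1;
  return { x: dx * scale * pupilRange, y: dy * scale * pupilRange };
}
```

Inside draw(), each pupil would then be drawn at the eye's centre plus this offset for whichever tracked headband is nearest.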
As part of this project I compared 2D and 3D animation techniques for the eyes (p5.js has 3D rendering capabilities via WebGL). The comparison showed that 2D made more sense for the desired outcome and was easier to animate in the way I wanted.
I wanted to challenge myself with this project. At the beginning, when I was pitching the concept to the rest of my class, I had a rough idea of how I would make each part and what software I'd use, but nothing concrete. I enjoyed learning to use p5.js and Max in new ways and was rewarded with a great feeling when everything eventually worked.
A part of this project that I didn't imagine would be as big was the computer vision element. I ended up removing the IR cut filter from an Xbox camera and replacing it with a piece of LEE 87 IR pass filter material to create a camera that could see the IR LEDs in the headbands mentioned above. See this other post for more on that.
Maybe at some point I'll make the code available here or on GitHub; I'd need to tidy everything first, especially the Max patch.
Comprehensively breaking a Logitech webcam trying to remove the IR filter, before doing what should have been done in the first place.
After looking at a guide online for how to IR-modify the Logitech C270 webcam, I thought all it would take was a little heat to melt the glue. As it turns out, on my newer model of the device the filter is fused to the sensor, meaning my attempt to remove the glass brought the entire sensor with it.
Xbox Vision cameras are known for being easy to remove the IR filter from; they are popular with astro-imaging enthusiasts for attaching to telescopes. The small IR cut filter sits in the lens assembly in front of the sensor and can simply be pried out.
The only other modification I think will be necessary is doing something about the four green LEDs that illuminate the ring around the lens when the camera is in use.
Simply googling “IR modify [insert webcam model here]” was how I found the appropriate steps for these mods.
Actually, I woke up having just been dreaming about something, I know not what, and the only part of that dream still in my mind was a concept of mice wandering over a computer keyboard and everything they typed being turned into the word "harvest". Weird, I know.
So I stuck this down on a Post-it note, as it seemed like an interesting coding exercise.
I also like to put as much work down on a whiteboard as possible before starting to code. I've sketched out the HTML elements and made a to-do list of sorts. Originally I thought I'd be using an <input> element; however, I ended up using a <textarea>.
Information about keyboard presses as DOM events was also found here.
The entire project can be accessed here: on GitHub
And used here thanks to GitHub pages, which is a recent and amazing discovery: Rich Harvest. NOTE: there is no support for mobile yet, I’ve branched the project with that addition in mind.
The only problem I ran into was with editing the content (or .value) of the <textarea> as a static variable instead of as a live DOM element.
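For illustration, the word-replacement itself boils down to a tiny pure function (the name harvestify is mine, not the project's), with the fix being to re-read the textarea's live .value inside the event handler every time rather than caching it once:

```javascript
// Illustrative sketch: turn every run of non-whitespace characters
// into "harvest". Whitespace is preserved so line breaks survive.
function harvestify(text) {
  return text.replace(/\S+/g, "harvest");
}

// In the browser this would be wired up roughly like so, reading the
// live .value on each input event instead of a cached copy:
//
//   const box = document.querySelector("textarea");
//   box.addEventListener("input", () => {
//     box.value = harvestify(box.value);
//   });
```

This sketch replaces text on every keystroke for simplicity; the live project may well wait for a word boundary (space or enter) before swapping a word out.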
Pretty fun one-page web project, 10/10, would harvest again.
Spotify's promotional and social media material – especially their in-feed adverts for three months of Premium for £9.99 – has a look built from simple shapes and "bold" colours. Not loud colours, but eye-catching at least.
It was the squiggly lines especially that got me wondering how I would make them in Illustrator. So I opened Adobe Illustrator and threw shapes at an artboard until it looked good. While I was doing that, I was pulling colours from Adobe's Color website and thinking about what it would look like if the hue was animated in After Effects, as I've tried this during other projects in the past.
So I pulled the finished image into Photoshop and added a Hue/Saturation adjustment layer.
While the image was in Photoshop I tried making it black and white, as I was curious what it might look like with a single colour overlay. Then, after trying a Gradient Map adjustment layer, it seems as though the base image could be adapted to suit any theme on a site, or for a brand, for example.
I intend to experiment more with this style in the future and would like to find out whether I can increase the aesthetic quality by adjusting the positioning of the shapes, or which shapes are used. There really are a million options for this style.