
Eye to Eye

❋ Group project with Jiaqi Yi ❋

Eye to Eye aims to create an interactive experience in which two strangers closely watch each other's eyes, and their eyes only. It is meant both to give some anonymity (showing only each person's eyes, not their whole face) and to still reveal their "window to the soul". Participants are encouraged to remain physically active: motion is required to keep their eyes visible on the screen. A total lack of movement causes their eyes to disappear, reinforcing the dynamic nature of the encounter.

We hope this experience can serve as a close, playful, human, and curious encounter with another human being.

This live interactive project runs on a virtual server using Node.js and SimplePeer data channels.
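As a rough illustration of that setup (not the project's actual code), the sketch below assumes a hypothetical socket.io signaling server relaying SimplePeer signals, and sends only the local webcam stream to the other peer:

```javascript
// Minimal sketch: two browsers connected with simple-peer.
// Assumptions (not from the project): a socket.io server that relays
// "signal" events between the two clients, and a URL placeholder below.
import SimplePeer from 'simple-peer';
import io from 'socket.io-client';

const socket = io('https://example-signaling-server'); // hypothetical server

navigator.mediaDevices
  .getUserMedia({ video: true, audio: false })
  .then((stream) => {
    // One side initiates the connection; here it is chosen via the URL hash
    // purely for brevity.
    const peer = new SimplePeer({ initiator: location.hash === '#init', stream });

    // Exchange WebRTC signaling data through the server.
    peer.on('signal', (data) => socket.emit('signal', data));
    socket.on('signal', (data) => peer.signal(data));

    // The remote webcam stream arrives here; all effects are applied locally.
    peer.on('stream', (remoteStream) => {
      const video = document.createElement('video');
      video.srcObject = remoteStream;
      video.play();
    });
  });
```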

Using a face-detection machine learning model, we draw only each person's eyes, removing everything else around them. We also continuously detect the eyes' position and move the whole frame so that the eyes are always centered. This creates a visual effect that resembles a camera following the subject's movement.
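A rough sketch of this idea, assuming face-api.js with its landmark model already loaded and a p5.js canvas (not the project's exact code):

```javascript
// Average the eye landmark points into one center, then shift the frame so
// that center always lands in the middle of the canvas.
async function drawCenteredEyes(capture) {
  // capture is a p5 video element from createCapture(VIDEO); face-api.js
  // needs the underlying HTMLVideoElement, exposed as capture.elt.
  const detection = await faceapi
    .detectSingleFace(capture.elt, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks();
  if (!detection) return; // no face detected this frame, skip drawing

  // Combine both eyes' landmark points and take their average position.
  const points = [
    ...detection.landmarks.getLeftEye(),
    ...detection.landmarks.getRightEye(),
  ];
  const cx = points.reduce((sum, p) => sum + p.x, 0) / points.length;
  const cy = points.reduce((sum, p) => sum + p.y, 0) / points.length;

  // Shift the whole frame so the eye center sits at the middle of the canvas,
  // producing the "camera follows you" effect.
  push();
  translate(width / 2 - cx, height / 2 - cy);
  image(capture, 0, 0); // in the real project only the eye region is kept
  pop();
}
```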

Additionally, for each video stream we apply a frame-differencing technique: the only pixels drawn on the screen are those whose color changed between one frame and the next. This way we get a trace of anything that moved, while everything that stayed the same is removed from the screen.
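A minimal p5.js frame-differencing sketch might look like the following (a simplified illustration, not the project's code; the threshold value is an arbitrary assumption):

```javascript
// Only pixels that changed enough since the previous frame are drawn;
// static pixels stay black.
let video;
let prevFrame;
const threshold = 30; // hypothetical sensitivity value

function setup() {
  createCanvas(640, 480);
  pixelDensity(1); // keep canvas pixels aligned with the video pixels
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  background(0);
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    const diff =
      Math.abs(video.pixels[i] - prevFrame.pixels[i]) +
      Math.abs(video.pixels[i + 1] - prevFrame.pixels[i + 1]) +
      Math.abs(video.pixels[i + 2] - prevFrame.pixels[i + 2]);
    // Draw the current pixel only if its color changed between frames.
    if (diff > threshold) {
      pixels[i] = video.pixels[i];
      pixels[i + 1] = video.pixels[i + 1];
      pixels[i + 2] = video.pixels[i + 2];
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
  // Remember the current frame for the next comparison.
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
}
```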

The combination of these three parts allows us to draw only the movement of each user's eyes while keeping them centered on the screen.

Process

We started with quick sketches that demonstrated each piece of functionality. For example, the frame-differencing effect is based on Kyle McDonald's p5 sketch, with some changes.

We also made quick tests of a zooming effect. We tried both effects on the floor with people walking by, and the zoom effect, as simple as it is, turned out to be really engaging: it created a somewhat uncanny view of themselves that people did not expect.
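For illustration, a zoom test of this kind can be only a few lines of p5.js (a hypothetical sketch, not what we showed on the floor; the zoom factor is an arbitrary assumption):

```javascript
// Scale the webcam image up around the canvas center so viewers see an
// unexpectedly close crop of themselves.
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  const zoom = 3; // hypothetical zoom factor
  push();
  translate(width / 2, height / 2);
  scale(zoom);
  imageMode(CENTER);
  image(video, 0, 0);
  pop();
}
```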

At the same time, we started working on the face-detection (FaceAPI) part and the networking part. Most issues came from integrating all of these pieces together. For example, scaling the camera stream while applying an effect that recalculates every pixel on the screen each frame is very taxing for the browser. Fortunately, the only data sent between the clients is the live webcam stream; everything else happens locally, which makes a huge difference. Another advantage is that the project is only meant to run locally, at school (although it could work anywhere), so both clients share the same network and the latency is much lower than it would be across different networks.

All the project's code is available on GitHub.

Next steps

We are going to present Eye to Eye at ITP's Spring Show. We intend to make things smoother and gain better control over the transition between one person being detected and multiple people, which is currently a little jumpy. We are also considering adding features, like measuring "stillness and presence" between the two videos and revealing more of the person on the other side once a "good connection" has been established between the two sides. Ideally, this work would be two screens positioned in two different locations. With some fine-tuning, we think this could be an interesting interaction to have in a public space.


Jasmine Nackash is a multidisciplinary designer and developer interested in creating unique and innovative experiences.