week #5

Group presentation + midterm idea

Group presentation

Jiaqi Yi and I were assigned to work together on an interactive experience for the class.

After discussing various ideas, we chose one that stood out as both relevant and valuable, and that can't really be done outside a class context without breaking the law in some way(?). While sitting on the floor tossing ideas around, we noticed a sign hanging in the hallway that reads "premises under surveillance", and thought about translating that notion into the digital realm. "These premises" in the digital world could be any website, but really a combination of many, that tracks users' activities and then tailors content for them to either maximize the probability of them spending money or keep them online longer, which in turn translates into bigger profits for the platform.

We looked up what kind of information can easily be gathered about users just from their browser (this answer on Stack Overflow has a good list) and started implementing a pipeline where, for each connected client, we collect all of this information, send it along with instructions to ChatGPT via its API, and post back the answer. We asked ChatGPT to make assumptions based on the data: where the user is (based on their geolocation), their economic status (based on their hardware), their field of occupation (based on things like screen dimensions, color depth, etc.). The point is to mimic how all of this supposedly unassuming data, readily available to anyone running a server, could be used to profile users and manipulate them in one way or another.
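To make the pipeline concrete, here is a minimal sketch of the two halves, not our exact code: the field selection, the /profile endpoint name, the model name, and the prompt wording are all illustrative assumptions, and the server half assumes an OpenAI API key in an environment variable.

```ts
// client.ts — gather "unassuming" browser data (field selection is an example)
async function sendFingerprint() {
  const fingerprint = {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screen: { width: screen.width, height: screen.height, colorDepth: screen.colorDepth },
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    cores: navigator.hardwareConcurrency,
    touchPoints: navigator.maxTouchPoints,
  };
  // POST it to our server (endpoint name is an assumption)
  await fetch("/profile", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(fingerprint),
  });
}
```

```ts
// server.ts — forward the data to the chat completions API and return its guesses
import express from "express";

const app = express();
app.use(express.json());

app.post("/profile", async (req, res) => {
  const r = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // example model choice
      messages: [
        // prompt wording is illustrative
        { role: "system", content: "Given browser metadata, speculate about the user: location, budget, likely occupation." },
        { role: "user", content: JSON.stringify(req.body) },
      ],
    }),
  });
  const data = await r.json();
  res.json({ guess: data.choices[0].message.content });
});

app.listen(3000);
```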

✲ A side note: we found a VSCode plugin called Live Share that allowed us to work on the same files at the same time, which made the work more streamlined and efficient. Despite the fact that we were sitting right next to each other while working on this, the plugin allowed us to quickly test things out and made for a smoother collaboration. Recommended!

❈ Thank you Jiaqi for this collaboration!

Midterm idea

For the midterm I'd like to continue what I started last week. Unfortunately it didn't quite work, and it might not fully work by next week either, but it's something I'd like to explore and try out. I was aiming to create a shared interactive experience where each user gets visual cues for the eyes of all other currently online users: whether they are open or closed (ideally it would be more nuanced than just open or closed, but this is what I started with). From there, I think it might be interesting to experiment with things like audio cues (something quite subtle, like a phone notification sound), or to find some way to establish this experience as something you do together. I'd like to think there's something touching about people closing their eyes at the same time and for the same duration while sitting remotely in front of each other's digital, incredibly reduced and lacking representations. It could be a place you go to relax with strangers for a bit; a tab you keep open while working from home and feeling lonely; a one-time experience where it's just you and another person and you're making up a silly winking game.

This could also go toward a more playful and defined experience, where maybe everyone is trying to close their eyes for exactly 10 seconds and gets ranked by how well they did. In any case, I think I need to see what a minimal working version of this feels like and then decide where to take it from there. Having your eyes closed while knowing that (a) you are being watched yourself, and (b) you can't watch the others at the same time, sounds like a potentially interesting dynamic that I'd like to explore (it could also just be really bad and confusing and not work at all...)
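The shared part itself is simple to sketch. Assuming a socket.io-style relay server rather than peer-to-peer (the event names here are placeholders, not what I actually have running): each client reports its own eye state, and the server fans it out to everyone else, keyed by socket id.

```ts
// server.ts — relay each client's eye state to all other clients (socket.io assumed)
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // tell everyone a new pair of eyes has arrived
  socket.broadcast.emit("eyes-joined", { id: socket.id });

  // clients send { open: boolean } only when their state changes
  socket.on("eyes", (state: { open: boolean }) => {
    socket.broadcast.emit("eyes", { id: socket.id, ...state });
  });

  socket.on("disconnect", () => {
    socket.broadcast.emit("eyes-left", { id: socket.id });
  });
});
```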

The half-working current version

The main challenges include:

  1. Minimizing lag time. Right now it's not great... I would need to find a way to only send the data once in a while (or maybe Simplepeer will make a difference?). I think the best approach for now is to run the detection locally and only send data to the server when the threshold is crossed and the state changes, so no continuous eye detection over the network for now (see the sketch after this list).
  2. Finding a better way to calculate the eyes' state. It is currently calculated very simply, by measuring the pixel distance between a point on the upper eyelid and a point on the lower eyelid, but this doesn't account for distance from the webcam and works poorly for that reason. I'll work on a relative calculation that scales the threshold using a third reference point, like the nose (also sketched below).
  3. Figuring out the design! The visuals don't even have to look like eyes. It could be text; it could be eyes but with a different color for each client; it could be actual footage of their eyes or other found footage of eyes; it could be a flower that opens and closes; or a drawer; or a UI toggle button. So many possibilities... I'll have to test and see what feels right.
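For challenges 1 and 2, here is roughly the direction I mean, as a sketch: a scale-invariant openness measure (eyelid gap divided by a reference distance such as eye width, so distance from the webcam cancels out), plus a bit of hysteresis so we only emit over the network when the open/closed state actually flips. The landmark points and threshold values are placeholders for whatever the face-tracking model actually returns.

```ts
type Point = { x: number; y: number };

// Scale-invariant openness: eyelid gap normalized by eye width,
// so moving closer to or farther from the webcam cancels out.
function opennessRatio(
  upperLid: Point,
  lowerLid: Point,
  innerCorner: Point,
  outerCorner: Point
): number {
  const gap = Math.hypot(upperLid.x - lowerLid.x, upperLid.y - lowerLid.y);
  const width = Math.hypot(innerCorner.x - outerCorner.x, innerCorner.y - outerCorner.y);
  return gap / width;
}

// Hysteresis: two thresholds, so jitter around a single cutoff
// doesn't spam the server; we emit only on actual state changes.
const CLOSE_BELOW = 0.18; // placeholder values, to be tuned
const OPEN_ABOVE = 0.24;

let eyesOpen = true;

function onFrame(ratio: number, emit: (open: boolean) => void) {
  if (eyesOpen && ratio < CLOSE_BELOW) {
    eyesOpen = false;
    emit(false); // send once per blink, not every frame
  } else if (!eyesOpen && ratio > OPEN_ABOVE) {
    eyesOpen = true;
    emit(true);
  }
}
```

With something like this, the continuous landmark stream stays local and the network only sees a handful of boolean events per minute, which should help with the lag.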
