Week #4

Patterns

This week's assignment was about patterns. It got me thinking about algorithms, and more specifically about machine learning algorithms. I've always thought that ML models work by analyzing patterns and categorizing inputs through pattern-recognition techniques, most of which are opaque to us, hence the widely used term "black box": we can't really know how those models categorize the data they're being fed; we can only guess through trial and error, feeding them different kinds of data and seeing how well they perform.

The thing is, it works similarly with animals. I remember seeing this GIF of chickens looking at a faucet that has similar features to said chickens, and they look baffled by it, trying to figure out what this weird other chicken is. This happens a lot in the animal kingdom. Another example is moths being drawn to light sources because they're biologically wired to navigate by the moon. Or even just teaching a dog to roll on its back for a treat through repetition and reward.

Ultimately the jump from animals to humans is an easy one — think of newborns trying everything out for the first time, and slowly learning about the physical world around them. Think of all those houses that look like faces. Think of intuition (a trait we usually attribute only to humans).

I've come to think that everything we perceive is being deciphered through our own pattern recognition abilities. We got some of these abilities through evolution (think about how we tend to smile when we see another person smiling), but some we learn throughout life (think about how every person has a slightly different way of feeling loved and showing love themselves).

Cue the past few years' progression in AI and ML models' capabilities, and I don't think it's safe to say that the human brain works all that differently. Our brains have roughly 86 billion neurons, while GPT-3, for example, has 175 billion parameters (not a one-to-one comparison, but the scales are in the same ballpark). The number of connections either system can make depends heavily on the units it has at its disposal. So it's becoming a matter of computing power more than anything. I think it's hard to accept that the way our brains and these models work is not that different, and it's easy to laugh at their mistakes, just like laughing at those chickens who don't comprehend reality as we see it. But if we agree to examine it closely, it might tell us more about how our brains (or other brains!) work.

Anyway, as for the assignment: I guess all these thoughts kind of left me wondering what to do, since they didn't lead me anywhere specific. But the graphic designer in me thought this would be a good time to play around with text in p5.js. So with that in mind I searched and found a great function called textToPoints() which allows you to... turn text... into points, and then mess around with them!
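Here's a minimal sketch of the basic setup, just to show what textToPoints() gives you back (the font file name, the word, and the numbers are placeholders, not necessarily what I used):

```javascript
// Minimal p5.js sketch: sample a word's outlines into points and draw them.
// 'Inter-Bold.ttf' is a hypothetical local font file next to the sketch.
let font;
let pts = [];

function preload() {
  font = loadFont('Inter-Bold.ttf');
}

function setup() {
  createCanvas(800, 300);
  // textToPoints() returns an array of {x, y} points along the glyph outlines.
  // sampleFactor controls how densely the outlines are sampled.
  pts = font.textToPoints('patterns', 80, 200, 160, { sampleFactor: 0.2 });
  noStroke();
  fill(0);
}

function draw() {
  background(255);
  for (const p of pts) {
    circle(p.x, p.y, 4); // draw each sampled point as a small dot
  }
}
```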

I wanted to create a word or a sentence whose points would move in such a way that the text would be hard to recognize, but through observation of familiar patterns (whether actual letter shapes, or a moment in time where it converges into a legible state) people would be able to read it.

So I started learning how to make it work. First I just drew the points as they are, but then started moving them around and figuring out which outcome would work best. While writing this post I'm still not sure, so I decided to treat it as more of an open-ended experiment. At first I moved the points with the sin() function, driven by the mouse's position, which was fun but didn't feel quite right.
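Roughly speaking, that first moving version was the draw() loop from the sketch above, with each point offset by sin() and the offset scaled by the mouse position. The frequencies and ranges here are illustrative, not my exact values:

```javascript
function draw() {
  background(255);
  // Scale the sin() offsets by the mouse's horizontal position:
  // sliding the mouse scrambles or unscrambles the word.
  const amount = map(mouseX, 0, width, 0, 60);
  for (let i = 0; i < pts.length; i++) {
    const p = pts[i];
    const dx = sin(i * 0.35 + frameCount * 0.05) * amount;
    const dy = sin(i * 0.21 + frameCount * 0.05) * amount;
    circle(p.x + dx, p.y + dy, 4);
  }
}
```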

I then tried a different method for moving the points — the noise() function. I kind of liked the different outcomes but I couldn't really figure out a sensible way to make the points go back to being legible after they moved.
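For reference, a noise()-driven version looks more or less like the sketch below (again, the constants are just placeholders). It also shows the problem: noise() drifts rather than loops, so there's no natural moment where the points snap back into legible text.

```javascript
function draw() {
  background(255);
  // Each point wanders along its own noise() path; t only moves forward,
  // so the points never naturally return to the original glyph positions.
  const t = frameCount * 0.01;
  for (let i = 0; i < pts.length; i++) {
    const p = pts[i];
    const dx = (noise(i * 0.1, t) - 0.5) * 80;
    const dy = (noise(i * 0.1 + 1000, t) - 0.5) * 80;
    circle(p.x + dx, p.y + dy, 4);
  }
}
```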

Eventually I went back to working with the sin() function as it made more sense for my objective (it is looping after all). This time I took everything I learned from tweaking the noise() sketches and incorporated it into my first sketch. This gave me much more control over the outcome, and let me play with legibility without being limited to a linear progression, since I turned it into a loop.
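The looping idea boils down to something like this sketch: drive both the per-point offsets and the overall scatter amount from a phase that wraps around every few hundred frames, so the word periodically dissolves and re-forms (the numbers are illustrative):

```javascript
function draw() {
  background(255);
  // One full cycle every `period` frames: the scatter amount rises and
  // falls with sin(), so the text periodically converges back to legible.
  const period = 300;
  const phase = ((frameCount % period) / period) * TWO_PI;
  const amount = map(sin(phase), -1, 1, 0, 60);
  for (let i = 0; i < pts.length; i++) {
    const p = pts[i];
    const dx = sin(i * 0.35 + phase) * amount;
    const dy = cos(i * 0.21 + phase) * amount;
    circle(p.x + dx, p.y + dy, 4);
  }
}
```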

Some outtakes from the process:

Thanks for reading!
