Journey of The Ohm Room

Starting Point

This project initially stemmed from my motivation to do something regarding mental health on this campus. I wanted people to become more aware of how they were feeling instead of dismissing it, as people often do when trying to cater to their busy schedules. But I also wanted to make them realise that the feeling will pass and is not a permanent stage in their life, however much challenging emotions can make distress seem like a permanent aspect of it.

This idea manifested itself in multiple ways. Initially, I imagined a user typing in a maximum of four feelings. A picture of a body would then be displayed on the user’s screen, divided into four parts: head, chest, gut, and legs. The user would then match each feeling to the appropriate body part, specifically where the emotion is felt. Next, the user would enter a room where their body would be constructed from the feelings they entered. After 30 seconds, the feelings would disperse, to emphasise that the feelings will pass.

Coding

The challenging aspect of this idea was coding the body divided into four parts and mixing and matching the feelings with the user’s body parts. Hence, I decided to go with something simpler. With Aaron’s assistance, I narrowed the idea down to simply having the user enter a maximum of four feelings. Next, the user would enter a meditation space where they could see their body reflecting their emotions. After 30 seconds, the feelings would disperse and dissolve into the background. There would also be meditative music playing in the background.

The most challenging aspect of creating this project was the coding; it was definitely outside of my comfort zone, especially the section where I wanted to manipulate the video letters to focus only on the individual and then have the letters disperse. I tried to research and do as much as I could before I asked others for help. One resource that really helped me a lot (apart from Aaron) was Daniel Shiffman’s videos. When I was stuck on how to disperse the letters, his tutorials on the particle system were extremely useful. Before this class, I had never spent five hours stuck on a small piece of code; it was a really frustrating experience at times. But the joy I felt when fixing such issues was tremendous. I think this project is literally the definition of an emotional roller-coaster. I have never felt such intense frustration and intense joy simultaneously. Again, as much as I tried to work on the coding myself, I realised I was quite dependent on other people’s knowledge. I didn’t like this at first, because I felt that I needed to work harder or take more ownership. But I came to accept that a certain degree of dependence is not a bad thing (a shout out to the Unix lab, my cool coding friends, and of course, AARON!!).
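
Shiffman’s particle-system approach boils down to each dispersing letter becoming a particle with its own position, velocity, and lifespan that ticks down each frame. This is a minimal plain-Java sketch of that idea; the class and field names are my own illustration, not the project’s actual code:

```java
// Minimal particle-system sketch: each dispersing letter becomes a
// particle with a position, a per-frame velocity, and a lifespan that
// fades toward zero so the letter dissolves into the background.
class Particle {
    float x, y;        // position on screen
    float vx, vy;      // velocity per frame
    float lifespan;    // used as opacity; the particle dies at 0

    Particle(float x, float y, float vx, float vy) {
        this.x = x; this.y = y;
        this.vx = vx; this.vy = vy;
        this.lifespan = 255;
    }

    // Called once per frame: drift outward and fade a little.
    void update() {
        x += vx;
        y += vy;
        lifespan -= 2;
    }

    boolean isDead() {
        return lifespan <= 0;
    }
}
```

In a sketch, each letter of the video text would become one `Particle` when dispersal starts, and dead particles would be removed from the list each frame.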

User Testing

After the user testing in class, Aaron and my peers helped me realise that instead of setting strict rules for the user (e.g. only input a maximum of four feelings), it was better to keep the instructions more open to interpretation so users had more freedom with the project. The instructions were also quite unclear, which was evident from user testers who kept asking me questions to clarify certain things. Hence, I decided to make the instructions more specific, and also to add some background information about what the project is so users had a general understanding of why they were interacting with the work. I was sceptical of adding context because I didn’t want to influence users’ preconceptions of my work, but some general context was also necessary for a better understanding and interaction.

Working with Space

I assumed that setting up the space would be the easiest part of the journey; I was wrong. It wasn’t the hardest aspect, but it was definitely difficult and very different from my expectations. I thought the user would have a larger space to meditate in, but the space was cut off by the curtains. I also liked the curtains, though, as they created a clear distinction between the meditation space and the user input space. I also realised that a large space for the user wasn’t necessary. In fact, the small space made the environment cosier, and thus more comfortable. Instead of using a blue screen to manipulate the background, Aaron suggested leaving the video as it was (the words surrounding the person’s body, with the body itself as negative space). I was a little taken aback at first, but I realised that it did not take away from my concept, and it also limited the work I had to do, so I agreed. It actually turned out to be a great idea. The project also looked nicer when there were more letters dispersed, because I didn’t manipulate the background.

I had to make quite a few changes to the lighting. At first, I experimented with placing the lamp on top of the panel. However, not enough light reached the camera, and it cut off a majority of the user’s reflection. Aaron suggested that I put the lamp in front of the user, but then the projection of letters focused on the light source instead. Aaron then turned the theatre light on, and that worked perfectly. The only issue was that I had to leave the curtain slightly open, so the meditation space may have appeared less private, but enough was covered by the curtains to still distinguish the meditation space from the user input space.

IM Show! 

Two things I really wanted to do before the show started were to set up a timer that would automatically start dispersing the letters (instead of my having to press a key) and to add a key that would reset the program when pressed. Due to the lack of time, I wasn’t able to do either. However, it didn’t cause as much harm as I thought it would.

User interaction was a really awesome experience. I didn’t realise how much of an impact user interaction would have on my perception of my project. A two-hour interaction honestly shaped how I perceived my project more than the two weeks I spent constructing it. The responses I received were very positive and optimistic. I was really surprised when people told me it was their favourite project or that it was genuinely helpful. The responses definitely helped me value my project more, and I also met many interesting people who gave me great feedback.

Craig offered a great idea: having the letters disperse once the user settles down and stays still, which portrays a more organic movement of the emotions. Ume recommended having timers or more instructions in place so I wouldn’t have to spend so much time moving people from place to place. Some users wanted the experience to be longer, which I also think would have been better. Due to the nature of the show, however, I had to cut down the time.

It was also interesting to see how people interacted with the project. I saw two people’s heads peeking out of the curtains as they were lying completely flat on the floor. I never expected someone to interact with my project in that specific body posture, and I wonder what I could have done to make my project more inclusive of it.

Next time, I would also want to emphasise somewhere in the instructions that people’s privacy would be respected, as I felt that people may sometimes have been worried about inputting their genuine emotions because there wasn’t really any emphasis on privacy. Also, I would work on ensuring a smoother transition from the user input station to the meditation room, so I wouldn’t have to talk as much to transition the user. Lastly, I would also want to design the room in such a way that the user knows to go to the user input station before entering the meditation room, as I saw that “where do I go first?” confusion with many of the users.

Overall, this has been a really fulfilling project, which I think is largely because of my motive. I have always wanted to create something that is an amalgamation of the arts and sciences, and I did that while working on a subject I am quite passionate about: mental health. I didn’t expect this project to be as fulfilling and meaningful as it turned out to be. I also definitely did not realise how powerful and amazing it would be to have user interaction and actually converse with the users afterwards. This has genuinely been an awesome experience.

IM Final Project – User Testing Response

I was a little upset that I wasn’t able to incorporate the letters dispersing and then dissolving into the background before the user testing. However, it was still fascinating to see the responses even though my project was only about 60% finished. Overall, the responses were quite positive. Users really liked the idea behind it once I elaborated on what would occur next.

User 1

User 2

User 3

The most challenging part in all three cases was that people weren’t able to recognise that the text was reflecting their bodies. Perhaps this was because they were all seated and looking at the screen from a low chair, which limited movement. In the meditation room, I will avoid using any seats and keep the space more ‘free’. Also, the light in the room was dim, and some people wore clothing similar in colour to the background, which affected how their bodies were reflected; sometimes the background and body were indistinguishable. When the letters are smaller, the body appears more distinguishable. Hence, I will make the letters smaller.

I also found it challenging to see how the program reflected people’s bodies, because I wanted to give people privacy since they entered quite vulnerable emotions on the screen. Therefore, I had to ask them what the program was showing them, and I didn’t always receive the most detailed explanations. Next time, for testing purposes, perhaps I will ask them to write emotions that are less vulnerable, emotions they feel more comfortable sharing. It also really helped to have meditative music in the background to set the mood, and it will be interesting to see how my project will change once I officially implement sound.

Golan Levin’s Notes on Computer Vision for Artists Response

I really liked the section of the text that discussed the different sorts of projects that artists created. I really liked the concept of Krueger’s Videoplace; it was fascinating that he was one of the first artists to give the entire human body a role in our interactions with computers. This project serves as an inspiration for my IM final project. I also liked reading about the Suicide Box; I did extra research on the controversial debate surrounding it, and that was really interesting to read as well. Although I really liked how the author discussed various projects, explained computer vision algorithms, and showed how essential it is to design physical conditions in tandem with the development of computer vision code, I found it really difficult to follow his explanations. I felt that the text was challenging to read, and it made too many conceptual leaps for someone who is still relatively new to the material.

IM Final Project Idea

Materials needed:

  • laptop
  • projector
  • little cubicle-like space that can be constructed from panels – or any other ideas that you have?

Idea:

  1. The program will ask the user to input difficult feelings they have been experiencing. They must input at least 5 feelings, and each input cannot be longer than two words.
  2. The program will then show them a diagram of a body, and they must select specific areas of the body where they feel the distressing emotions they have stated already. The body sections they can choose from will be divided into four parts: head, chest, stomach and legs. This step is important for users to be more thoughtful of their feelings by recognizing where they are feeling it. Recognizing where an emotion is felt can help individuals be more mindful of themselves.
  3. There will be a button that asks whether the user is “ready to start the meditation”. Once they click on the screen to start the meditation, a body sensor will be activated and they will see their body mirrored on a projection. The specific feelings they matched with the specific parts of their body will be shown. After 10 seconds, the words will start vibrating and slowly flow out of the body while shaking faster and faster. It is important for the words to flow out of the body as it is a way for users to visually distance themselves from their emotions – to literally take a step back – and perhaps gain a new perspective. After another 10 seconds, the text will disperse into dust particles and the screen will go black. After five seconds, the small cubicle will brighten again. The idea of the screen going from black to bright was inspired by a quote: “you need the darkness to see the light”. I wanted to represent this quote visually. Throughout the whole project, there will be suspenseful music in the background.
  4. The whole purpose of this project is to make users more mindful of their emotions, provide a new perspective through distance and some sort of short-term relief from the difficult feelings they are experiencing.
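
The timed sequence described above (mirrored words, dispersal, black screen, brightening) can be sketched as a simple state machine driven by elapsed milliseconds. This is an illustrative plain-Java sketch; the class name, state names, and exact thresholds are my own assumptions based on the description, not the project’s actual code:

```java
// Illustrative state machine for the timed meditation sequence:
// ~10 s of words mirrored on the body, ~10 s of vibration/dispersal,
// then 5 s of black screen before the space brightens again.
class MeditationTimer {
    static final int SHOW_MS = 10_000;      // words mirrored on the body
    static final int DISPERSE_MS = 10_000;  // words vibrate and flow out
    static final int BLACK_MS = 5_000;      // screen goes black

    // Given milliseconds since the meditation started, return the phase.
    static String stateAt(int elapsedMs) {
        if (elapsedMs < SHOW_MS) return "SHOW";
        if (elapsedMs < SHOW_MS + DISPERSE_MS) return "DISPERSE";
        if (elapsedMs < SHOW_MS + DISPERSE_MS + BLACK_MS) return "BLACK";
        return "BRIGHT";
    }
}
```

In a Processing sketch, `elapsedMs` would come from `millis()` minus the time the meditation started, and `draw()` would branch on the returned phase.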

Serial Communication Project

I decided to use one of my previous Processing sketches and incorporate serial communication into it. The sketch I used was my self-portrait. I decided that I wanted to create something that could be used at a party: when the lights dim down, the kaleidoscope-like portrait fades onto the wall. The project is hyperlinked here. The code I used is incorporated below; I used the handshake technique between Processing and Arduino:

Self-portrait:

void setup(){
  size(480,640); //w,h
  frameRate(5);
  
  //fill(255,192,203);
  //ellipse(240,400,100,100);
}

float radius;

void draw(){
  background(0, 0, 0);
  
  fill(0);
  stroke(255);
  ellipse(240,325,730,800);
  stroke(255);
  ellipse(240,325,630,700);
  stroke(255);
  ellipse(240,325,530,600);
  stroke(255);
  ellipse(240,325,430,500);
  stroke(255);
  ellipse(240,325,330,400);
  stroke(255);
  ellipse(240,325,230,300);
  
  
  fill(255);
  stroke(0);
  ellipse(240,200, 150, 190);
  
  fill(255);
  ellipse(115,350, 50, 80);
  
  fill(255);
  ellipse(365,350, 50, 80);
  
  fill(0);
  ellipse(100,370,5,5); //left pupil
  
  fill(0);
  ellipse(379,370,5,5); //right pupil
 
  
  fill(random(0, 204), random(0, 255), random(0, 255));
  ellipse(240,330,275,350); //x,y,w,h

  

  fill(255);
  ellipse(240,360,275,350); //x,y,w,h
  fill(random(0, 255), random(0, 255), random(0, 255));
  ellipse(300,340,75,75); //glasses 1
  fill(random(0, 255), random(0, 255), random(0, 255));
  ellipse(180,340,75,75); //glasses 2
  line(216, 340, 264, 340); //middle portion of glasses
  line(104, 340, 141, 340); //side portion of glasses
  line(336, 340, 375, 340); //side portion of glasses

  fill(255);
  stroke(0);
  arc(240, 410, 50, 35, PI, PI+QUARTER_PI);
  arc(225, 410, 20, 20, HALF_PI, PI);
  
  arc(270, 450, 170, 50, HALF_PI, PI);
  arc(170, 440, 50, 35, PI, PI+QUARTER_PI);
  
  arc(260, 485, 100, 10, HALF_PI, PI);
  arc(260, 470, 50, 30, PI+QUARTER_PI, TWO_PI);
  arc(210, 460, 60, 30, PI+QUARTER_PI, TWO_PI);
  
  fill(0);
  ellipse(225, 410, 3, 3);
  
  //fill(0);
  //stroke(255);
  //ellipse(240,150,115,130);
  
  stroke(0);
  line(257, 410, 255, 380);
}

Arduino:

bool ledState=LOW;
int knobpin = A1;
int led=3;

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  Serial.write(0);
  
  // put your setup code here, to run once:

}

void loop() {
  if (Serial.available() > 0) { //if the Arduino is receiving something
    int inByte = Serial.read(); //reading a byte takes it off the buffer
    int readvalue = analogRead(knobpin); //knob position, 0-1023
    int writevalue = map(readvalue, 0, 1023, 0, 255); //scale to a single byte
    Serial.write(writevalue);
    analogWrite(led, writevalue);
    delay(1);
  }
}
    

I wanted to add more elements to this project, such as adding music using an MP3 shield once the Processing sketch is revealed. I also wanted to add a flex sensor that would simulate a wristband on people: every time it moved (assuming it moved because the people were dancing), my glasses would start changing colour (as they do already). I have had tremendous difficulty coding this, and because I got slightly injured I didn’t have the time to go back and work on this aspect. If I am able to complete my weekly project for this week on time, I really want to spend some time enhancing this project.
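
The core of the handshake above is the scaling step: the Arduino maps the 10-bit knob reading to a single byte before writing it back over serial for Processing to read. Arduino’s integer `map()` can be reproduced in plain Java to check that scaling off-board (this helper is my own, not part of the sketch):

```java
// Plain-Java reproduction of Arduino's integer map(), so the knob-to-byte
// scaling used in the handshake can be checked without the hardware.
// Same formula as the Arduino documentation, all in integer arithmetic:
// (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin
class MapDemo {
    static long map(long x, long inMin, long inMax, long outMin, long outMax) {
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }
}
```

Because the division is integer division, results truncate toward zero, which is why a mid-range knob reading lands just below the exact midpoint of the output range.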

What does computing mean to me?

I have been feeling really conflicted about whether computing has been adding any meaning to my life. It is something I have been reflecting on tremendously over the semester in order to decide whether I should go on with IM or choose another path. I love coding in the short term. It is both fun and challenging, and it provides some sort of intellectual release from the immense amount of readings and essays I have to do for my other classes. However, when I found myself doing the midterm project, or the two-hour lab sessions on Wednesdays, something about it wasn’t entirely enjoyable. Of course, I realise that what you enjoy doing won’t always be fun; there will be challenging moments. However, the longer I take to code, the less patient I get and the more quickly I want the answers. This tendency has always been present in how I complete tasks, but it is especially present in coding. I believe this is because I don’t consider myself to have a strong skill-set in quantitative reasoning, which is why it can get challenging as I dedicate more and more time to my projects. Another reason is that I tend to look for how my work on a project can benefit others on the scale of the greater good. Of course, it can be undeniably challenging to find benefits for the greater good in an introductory IM course; I can’t get all my answers immediately. It takes time, patience, and effort to build an impact. I guess computing has revealed to me how impatient I am, and I think it has really helped me become more patient while doing my work.

Something else that worries me about computing is sustainability: how can I be sustainable with my resources while computing? I have noticed that my peers and I are sometimes quite unsustainable with the resources we use for our projects, going through multiple pieces of cardboard and other materials before we get the right shape or structure. I haven’t found a solution to the issue of sustainability yet.

So, I know what I have mentioned above makes me seem pretty uncertain about where I stand on IM and how it has added to my life, but regardless of my doubts about how IM could impact me in the future, it has offered me incredibly valuable lessons for the now. One I have already mentioned: it has taught me an incredible amount of patience. It has also helped me appreciate the little things rather than only the larger picture. The way we check sections of our code one by one as we complete them, rather than editing only once we have completely finished writing (as we usually do with an essay), has helped me appreciate the small sections of a larger work more profoundly. It has helped me be more attentive to the smaller details. I have also found the logic in code to be incredibly compelling. I am a huge fan of philosophy and the way it constructs a rational, clear argument. Although I am nowhere near great at understanding the logic of code, I tend to draw many parallels between computing and writing philosophical arguments, which take rational, logical steps to reach a particular output. I find a lot of beauty in this parallel, and it helps me appreciate computing more. I have struggled quite a lot with computing, and am still struggling. Perhaps the struggle makes me feel uncertain as to where I stand, but undoubtedly, it has filled my life with wondrous reflections, useful lessons, and valuable insight into who I am as a creative individual.

Generative Text Design

I originally wanted to create something similar to the animated text memes that you scroll past on social media, for example: GIF 1 and GIF 2. I really enjoy how the text moves in a wave-like motion, and its three-dimensional appeal.

However, this weekend was extremely rough with work, and I wasn’t able to dedicate enough time to creating something comparable to the examples demonstrated above. I decided to create something else, which I was inspired to do after looking at this artwork. I really liked the concept of words falling down in a random order, people being able to interact with the text through their bodies, and forming actual words which translated into poems. I wanted to re-create something similar, but because I didn’t have enough knowledge (or frankly, time) to create a body sensor and somehow combine it with my code, I decided to do something similar on a smaller scale.

My final project can be found here. I had trouble thinking about ways to make the project interactive without a body sensor. I decided to use something related to what we have learned about making things interactive in Processing: keyPressed(). The idea was to make random letters fall and somehow arrange them into words when I clicked on them. However, I found that too difficult to code and opted for something slightly simpler. I found a tutorial that helped me do something similar to the idea: instead of the words falling on their own, the person interacting types any word, and the letters are placed one by one in random positions across the screen. The letters don’t completely disappear once typed; instead, I made them fade into the background for a nicer effect. The outcome isn’t what I initially wanted, and it’s not something I am really happy with, considering the lack of time and energy I dedicated to this due to an incredibly busy week.
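
The typing interaction reduces to simple bookkeeping: each keypress stores the letter at a random position with full opacity, and every frame lowers each stored letter’s opacity so it fades into the background rather than vanishing. This plain-Java sketch of that bookkeeping is my own illustration (the names are assumptions, and `Random` stands in for Processing’s `random()`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Bookkeeping for the typed-letter effect: each keypress stores a letter
// at a random position with full opacity; each frame, opacity decays so
// letters fade into the background instead of disappearing at once.
class FadingText {
    static class Letter {
        char c;
        float x, y;
        float alpha = 255; // full opacity when first typed

        Letter(char c, float x, float y) { this.c = c; this.x = x; this.y = y; }
    }

    final List<Letter> letters = new ArrayList<>();
    final Random rng = new Random();
    final int width, height;

    FadingText(int width, int height) { this.width = width; this.height = height; }

    // Called on each keypress: place the letter somewhere random on screen.
    void keyPressed(char c) {
        letters.add(new Letter(c, rng.nextFloat() * width, rng.nextFloat() * height));
    }

    // One animation frame: fade every letter a little toward the background.
    void step() {
        for (Letter l : letters) l.alpha = Math.max(0, l.alpha - 5);
    }
}
```

In the actual sketch, the drawing step would render each `Letter` with `fill(..., alpha)` so the decaying alpha produces the fade.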

The Digitization Of Just About Everything

I found this read extremely fascinating. I really liked the section where the author discussed the economics behind digitization: the way digitization appeals to non-rivalry and the zero marginal cost of reproduction. I also thought his discussion of free products was interesting. It is truly captivating how so many creators devote so much time to producing amazing online content without expecting anything in return. Although he claimed that free content wasn’t necessarily a bad thing, he didn’t really elaborate on why it was beneficial either; I was hoping he would further unpack this discussion. Lastly, I thought some of the facts he laid down were really interesting: for example, that if digitization keeps growing at its current pace, we will run out of metric-system prefixes, or that the combined level of robotic chatter is soon likely to exceed the sum of all human voice conversations taking place on wireless grids. This read had some really engaging material.

Game using OOP

Initially, I wanted to create an art piece, because that is what I feel more comfortable coding. However, I also wanted to try something new and outside of my comfort zone, so I chose to re-create a game that I knew. I wanted to re-create Flappy Bird; however, I found it extremely challenging to implement the logic in my code, especially when it came to keeping the ball from passing above or below the pillars when it touched them. I wasn’t able to solve this problem on time, so I decided to make the ball bounce on a racket instead. In addition, in the instructions, I ask the player to have a friend keep track of the score, which also allows for a friendlier, more interactive match.

I didn’t have a solid understanding of classes or OOP. After talking to Aaron an hour before class started, I realized that I had created my code using functions, not classes.

Here is my code for the project, using functions:

int gameScreen = 0;
float ballX, ballY;
float ballSize = 20;
int ballColor = color(255,255,0);
float gravity = 1;
float ballSpeedVert = 0;
float airfriction = 0.0001; //air resistance and friction matter: the ball doesn't just bounce up and down at the same level, it loses height as it bounces more
float friction = 0.1; //surface friction
color racketColor = color(255,105,180); //platform ball bounces on
float racketWidth = 100;
float racketHeight = 10;
int racketBounceRate = 20;
float ballSpeedHorizon = 10;

//wall
int wallSpeed = 5;
int wallInterval = 1000;
float lastAddTime = 0;
int minGapHeight = 200;
int maxGapHeight = 300;
int wallWidth = 80;
color wallColors = color(0);
ArrayList<int[]> walls = new ArrayList<int[]>(); //list to keep data for gap between two walls. contains gap wall X, gap wall Y, gap wall width, gap wall height)

void setup() {
  size(500, 500);
  ballX=width/4;
  ballY=height/5;
}

void draw() {
  if (gameScreen == 0) {
    initScreen();
  } else if (gameScreen == 1) {
    gameScreen();
  } else if (gameScreen == 2) {
    gameOverScreen();
  }
}

void initScreen() {
  background(0);
  textAlign(CENTER);
  text("Don't let the ball touch the pillars. Make a friend keep track of your score.", width/2, height/2);
}
void gameScreen() {
  background(0,255,255);
  drawBall();
  applyGravity();
  keepInScreen();
  drawRacket();
  watchRacketBounce();
  applyHorizontalSpeed(); //for horizontal movement of ball
  wallAdder(); //adds new walls in every wallInterval millisecond
  wallHandler();
}
  
void gameOverScreen() {
}

public void mousePressed() { //'public' is just the Java access modifier; Processing treats this the same as a plain void mousePressed()
  // if we are on the initial screen when clicked, start the game
  if (gameScreen==0) {
    startGame();
  }
}

void startGame() { //necessary variable to start the game
  gameScreen=1;
}

void drawBall() {
  fill(ballColor);
  ellipse(ballX, ballY, ballSize, ballSize);
}

void applyGravity() {
  ballSpeedVert += gravity; //gravity accumulates into the vertical speed every frame, pulling the ball down faster and faster
  ballY += ballSpeedVert; //vertical speed of ball added to Y coordinate of ball
  ballSpeedVert -= (ballSpeedVert * airfriction); //air resistance slowly damps the speed
}

void makeBounceBottom(float surface) { //surface is the y coordinate of the floor the ball bounces off
  ballY = surface-(ballSize/2);
  ballSpeedVert*=-1; //to bounce, move the ball to the exact bounce location and flip its vertical speed
  ballSpeedVert -= (ballSpeedVert * friction);
}

void makeBounceTop(float surface) { //surface is the y coordinate of the ceiling being hit
  ballY = surface+(ballSize/2);
  ballSpeedVert*=-1;
  ballSpeedVert -= (ballSpeedVert * friction);
}

void makeBounceLeft(float surface) {
  ballX = surface+(ballSize/2);
  ballSpeedHorizon*=-1;
  ballSpeedHorizon -= (ballSpeedHorizon * friction);
}
void makeBounceRight(float surface) {
  ballX = surface-(ballSize/2);
  ballSpeedHorizon*=-1;
  ballSpeedHorizon -= (ballSpeedHorizon * friction);
}

// keep ball in the screen
void keepInScreen() {
  // ball hits floor
  if (ballY+(ballSize/2) > height) { //bottom of ball is below the floor
    makeBounceBottom(height);
  }
  // ball hits ceiling
  if (ballY-(ballSize/2) < 0) { //top of ball is above the ceiling
    makeBounceTop(0);
  }
  if (ballX-(ballSize/2) < 0) {
    makeBounceLeft(0);
  }
  if (ballX+(ballSize/2) > width) {
    makeBounceRight(width);
  }
}

void drawRacket() {
  fill(racketColor);
  rectMode(CENTER);
  rect(mouseX, mouseY, racketWidth, racketHeight);
}

void watchRacketBounce() { //makes sure the racket and ball collide (note to self: look at the tutorial again)
  float overhead = mouseY - pmouseY;
  if ((ballX+(ballSize/2) > mouseX-(racketWidth/2)) && (ballX-(ballSize/2) < mouseX+(racketWidth/2))) { //x coord. of right side of ball is greater than x coord. of left side of racket and other way around
    if (dist(ballX, ballY, ballX, mouseY)<=(ballSize/2)+abs(overhead)) { //distance between ball and racket is smaller than or equal to radius of ball (hence colliding)
      makeBounceBottom(mouseY); //hence this bounce method is called
      // racket moving up
      if (overhead<0) { //pmouseY stores the mouse y from the previous frame. Sometimes the ball moves so fast that the ball-racket distance can't be correctly calculated between frames, so we use the overhead value between frames to detect the difference. A negative overhead means the racket is moving up, simulating a racket hit.
        ballY+=overhead; //less than 0 means the mouse was lower in the previous frame, so the racket is moving up
        ballSpeedVert+=overhead; //adds extra speed to the ball to simulate the effect of hitting it with the racket
        if ((ballX+(ballSize/2) > mouseX-(racketWidth/2)) && (ballX-(ballSize/2) < mouseX+(racketWidth/2))) {  //edges of rack should give ball a more horizontal speed & middle should have no effect
          if (dist(ballX, ballY, ballX, mouseY)<=(ballSize/2)+abs(overhead)) { 
            ballSpeedHorizon = (ballX - mouseX)/5;
          }
        }
      }
    }
  }
}

void applyHorizontalSpeed() {
  ballX += ballSpeedHorizon;
  ballSpeedHorizon -= (ballSpeedHorizon * airfriction);
}

void wallAdder() {
  if (millis()-lastAddTime > wallInterval) { //if millis minus last added millisecond is larger than interval value, it is time to add new wall
    int randHeight = round(random(minGapHeight, maxGapHeight));
    int randY = round(random(0, height-randHeight));
    // {gapWallX, gapWallY, gapWallWidth, gapWallHeight}
    int[] randWall = {width, randY, wallWidth, randHeight}; 
    walls.add(randWall);
    lastAddTime = millis();
  }
}

void wallHandler() { //loops through each wall: removes it if off-screen, moves it, draws it, and checks for collisions
  for (int i = 0; i < walls.size(); i++) {
    wallRemover(i);
    wallMover(i);
    wallDrawer(i);
    watchWallCollision(i);
  }
}
void wallDrawer(int index) {
  int[] wall = walls.get(index);
  // get gap wall settings 
  int gapWallX = wall[0];
  int gapWallY = wall[1];
  int gapWallWidth = wall[2];
  int gapWallHeight = wall[3];
  // draw actual walls
  rectMode(CORNER);
  fill(wallColors);
  rect(gapWallX, 0, gapWallWidth, gapWallY);
  rect(gapWallX, gapWallY+gapWallHeight, gapWallWidth, height-(gapWallY+gapWallHeight));
}

void wallMover(int index) {
  int[] wall = walls.get(index);
  wall[0] -= wallSpeed;
}
void wallRemover(int index) {
  int[] wall = walls.get(index);
  if (wall[0]+wall[2] <= 0) {
    walls.remove(index);
  }
}

void watchWallCollision(int index) { //called for each wall on each loop. Takes the coordinates of the wall (top and bottom) and checks whether the ball collides with either part
  int[] wall = walls.get(index);
  // get gap wall settings 
  int gapWallX = wall[0];
  int gapWallY = wall[1];
  int gapWallWidth = wall[2];
  int gapWallHeight = wall[3];
  int wallTopX = gapWallX;
  int wallTopY = 0;
  int wallTopWidth = gapWallWidth;
  int wallTopHeight = gapWallY;
  int wallBottomX = gapWallX;
  int wallBottomY = gapWallY+gapWallHeight;
  int wallBottomWidth = gapWallWidth;
  int wallBottomHeight = height-(gapWallY+gapWallHeight);

  if (
    (ballX+(ballSize/2)>wallTopX) &&
    (ballX-(ballSize/2)<wallTopX+wallTopWidth) &&
    (ballY+(ballSize/2)>wallTopY) &&
    (ballY-(ballSize/2)<wallTopY+wallTopHeight)
    ) {
    // collides with upper wall
  }
  
  if (
    (ballX+(ballSize/2)>wallBottomX) &&
    (ballX-(ballSize/2)<wallBottomX+wallBottomWidth) &&
    (ballY+(ballSize/2)>wallBottomY) &&
    (ballY-(ballSize/2)<wallBottomY+wallBottomHeight)
    ) {
    // collides with lower wall
  }
}

By the middle of this week, I will convert this code to be structured with OOP and paste it below.
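
As a starting point for that conversion, the ball’s state and physics could move into a class of their own. This is a hedged plain-Java sketch of what a Ball class might look like; it mirrors the gravity, air-friction, and bounce logic from the function version above, but the class design itself is my own suggestion, not the finished refactor:

```java
// One possible OOP refactor: the ball owns its position, speed, and
// physics, so draw() only has to call ball.update() and the bounce methods.
class Ball {
    float x, y;
    float size = 20;
    float speedVert = 0;
    float gravity = 1;
    float airFriction = 0.0001f;   // same constant as the function version
    float surfaceFriction = 0.1f;

    Ball(float x, float y) { this.x = x; this.y = y; }

    // Same physics as applyGravity() in the function version.
    void update() {
        speedVert += gravity;
        y += speedVert;
        speedVert -= speedVert * airFriction;
    }

    // Same logic as makeBounceBottom(): snap to the surface, then
    // invert and damp the vertical speed.
    void bounceBottom(float surface) {
        y = surface - size / 2;
        speedVert *= -1;
        speedVert -= speedVert * surfaceFriction;
    }
}
```

The racket and walls could get the same treatment (a Racket class and a Wall class holding the four gap values instead of an `int[]`), which would replace most of the global variables at the top of the sketch.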

Casey Reas’ Eyeo Talk Response

I found some parts of this talk extremely engaging. However overall, it was quite disengaging as I found a lack of passion in Reas’ voice, which made it challenging to devote my full attention to the video. I liked how Reas stated that for a rational machine to do something unexpected was pretty subversive back then, but at the same time, it’s also pretty awesome. I really like the intersecting ideas of the ‘rational’ and ‘irrational’ in computer graphics. I also resonate with the artist Reas mentioned, who claimed that exploration of the grid can be seen as moving away from humanity, moving away from narrative. This is something I find puzzling and problematic about computer graphics, and most art today in general. When we abstract an idea so much that the narrative is removed, I do not see the purpose in art. Another question that arose while I watched the talk was that if I am not in total control of the artwork that is being produced, if I do not even know the possible result I desire, am I even the artist? Do I have the right to call solely myself the artist, or is it the duo of the man and machine? This video highlighted interesting ideas that I would love to reflect on and discuss further.