Spectrum of Time is a rainbow sundial calendar installation permanently located at the Kokerei Zollverein in Germany that tells astronomical time. Hours, months, the spring equinox, summer solstice, autumn equinox, and winter solstice are all mapped and marked with astronomical accuracy on the walls and floor of the 40′ × 40′ × 40′ space. The rainbow sundial calendar is lit up by sunlight passing through a laser-cut cross prism. The ellipse in the middle of the rainbow travels through the space along the painted lines that indicate the path of the Sun.
For this week's assignment, I wanted to bring in my experience of seeing the Pantheon in person for the first time: the round beam of sunlight coming through the hole in the middle of the dome looked organic and geometric at the same time. But to achieve this shape, I needed to model hollow forms.
First, I tried what I thought would be the easier way: importing a model.
I created a simple model in C4D, but couldn't figure out a way to import it into my local server. So I searched for how to create hollow shapes with Three.js directly, and found CSG (constructive solid geometry). I first created a hollow sphere by subtracting a smaller sphere from a larger one, then used the same method to cut a hole in the top of the dome. Although I am still not quite satisfied with the lighting, at least I figured out how to model a hollow shape.
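A minimal sketch of the subtraction idea (assuming the three-bvh-csg library; other CSG helpers have similar APIs):

import * as THREE from 'three';
import { Brush, Evaluator, SUBTRACTION } from 'three-bvh-csg';

const material = new THREE.MeshStandardMaterial({ color: 0xcccccc, side: THREE.DoubleSide });
const evaluator = new Evaluator();

// Hollow sphere: subtract a slightly smaller sphere from the outer one
const outer = new Brush(new THREE.SphereGeometry(5, 64, 64), material);
const inner = new Brush(new THREE.SphereGeometry(4.7, 64, 64), material);
let dome = evaluator.evaluate(outer, inner, SUBTRACTION);

// Hole on top: punch a vertical cylinder through the shell
const oculus = new Brush(new THREE.CylinderGeometry(1, 1, 12, 32), material);
oculus.updateMatrixWorld();
dome = evaluator.evaluate(dome, oculus, SUBTRACTION);

// ...then add `dome` to the scene like any other mesh

Below is another scene I created: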
I experimented with SPADE COCO by converting, in Runway, a p5 sketch of draggable squares, each assigned a random color from the color data SPADE COCO carries, and got some collages like the ones below.
Then I experimented with the First Order Motion Model on both a human face and an anime face to see how well it tracks.
One that I really liked is HiDT, which can blend the color theme of one image onto another:
The first thing that came to mind when I thought about triggering something with sound was magic spells. And probably the most well-known spell of all is Harry Potter's "Lumos" spell for light.
For this project, "Lumos" isn't a common word that will be easily recognized by the library so instead I used Teachable Machine to train specifically the word "Lumos" and also "Stop" to be the switch that triggers the LED.
This project is based on my previous exploration with Tone.js. Instead of interacting with the virtual guitar using the cursor, this time I wanted the virtual guitar to have more of a physical connection with the user.
In this project, PoseNet was used as the trigger.
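A sketch of the trigger idea, assuming ml5's poseNet and a Tone.js synth; the "string" position and the note are made-up values for illustration:

let video, poseNet, synth;
let prevX = 0;
const stringX = 320; // hypothetical x-position of one virtual string

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  synth = new Tone.Synth().toDestination();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', (poses) => {
    if (!poses.length) return;
    const x = poses[0].pose.rightWrist.x;
    // Pluck when the wrist crosses the string between two pose updates
    if ((prevX - stringX) * (x - stringX) < 0) {
      synth.triggerAttackRelease('E3', '8n');
    }
    prevX = x;
  });
}

function mousePressed() {
  Tone.start(); // browsers require a user gesture before audio can play
}

function draw() {
  image(video, 0, 0);
  stroke(255);
  line(stringX, 0, stringX, height);
}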
For this week, which is my last week of the summer internship, I am still getting familiar with ml5's image classification. I played with the DoodleNet demo and was thinking about creating something like the Quick, Draw! example, but didn't know how to access the entire label database so that I could pick labels at random. So I used a computer vision face tracking example from Kyle McDonald and created a demo that takes a capture when it detects a smile.
DEMO
This is a mini-sized version of the installation. The interaction is turning a rolling handle. The initial state shows the brightness of the stars under light pollution, and rolling the handle lets viewers see the stars' actual brightness.
From my (very limited) experience in 3D modeling, I think C4D produces renderings whose materials feel closest to the real thing. But I am still very bad at using it, so the lighting of the space is still weird. I haven't thought about the color and material for the bucket, but for now I used red, since it is always a good color to go with black.
I am not sure what I want to do and what I can do. I might start improving on my midterm if I still cannot come up with an idea. I somewhat want to make a face filter, but haven't thought of the content yet. I just want to make something that I will actually use.
I created a p5 sketch with Teachable Machine using image recognition. It was a little tricky to use in p5, since Teachable Machine's webcam input is square while webcams capture rectangles. So the sensing is a little jumpy in p5 (it's hard to know which part of the screen it is sensing, though the example draws a square box), but on the Teachable Machine page it worked fine:
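One workaround (a sketch, assuming ml5's imageClassifier and a Teachable Machine model URL, 'MODEL_URL' being a placeholder) is to crop the center square of the webcam feed before classifying, so p5 feeds the model the same square region it was trained on:

let video, classifier, square;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  square = createGraphics(480, 480);
  classifier = ml5.imageClassifier('MODEL_URL' + 'model.json', classifyFrame);
}

function classifyFrame() {
  // Copy the center 480x480 of the 640x480 feed into the square buffer
  square.image(video, 0, 0, 480, 480, 80, 0, 480, 480);
  classifier.classify(square, gotResult);
}

function gotResult(error, results) {
  if (!error) console.log(results[0].label);
  classifyFrame(); // classify the next frame
}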
This is my first time trying Processing, and I couldn't fix an issue my version of macOS seems to have with Processing's video library. I guessed it was something to do with camera access, but there was no popup window for it, nor did Processing show up in the privacy settings. So I googled... and found this page that seemed promising > Video capture not working in Mac OS Catalina
And here's my long journey into the unknown (with the feeling that I might have broken my computer)
Gladly, I got it to show up under Camera Access
HOWEVER
So I could only give up and move back to p5, where I created this little smile application with the face tracking example from Kyle McDonald. The way a 'smile' is recognized in this code is by the distance between the two corners of the mouth:
var smile = mouthLeft.dist(mouthRight);
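For context, those two vectors come from the tracker's point array; a sketch of the surrounding check (assuming the example's clmtrackr instance is called ctracker, with points 44 and 50 as the mouth corners, and a threshold tuned by eye):

var positions = ctracker.getCurrentPosition();
if (positions.length) {
  var mouthLeft = createVector(positions[44][0], positions[44][1]);
  var mouthRight = createVector(positions[50][0], positions[50][1]);
  var smile = mouthLeft.dist(mouthRight);
  if (smile > 35) {
    saveCanvas('smile', 'png'); // take the capture when a smile is detected
  }
}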
A cutting board. A product for people who cook.
No need to worry about dirty hands--easy, intangible interaction for people who like to watch a cooking tutorial while cooking. Also, a more visual way of measuring the amount of each ingredient, inspired by:
At first, I was thinking of a cooking station with an AR system embedded in it.
However, that would increase the price, and when technical issues come up, they would be harder to fix.
So instead, why not make something portable and affordable? I came up with a better solution, a cutting board, which works better and makes much more sense.
The new thing I learned from this project is virtual buttons. I first learned from a tutorial:
Then, since I needed to trigger a video with the virtual button, I coded (actually, combined two scripts into) the following:
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;
using Vuforia;
public class vb_anim : MonoBehaviour, IVirtualButtonEventHandler {
    public GameObject vbBtnObj;
    public GameObject video;

    void Start () {
        vbBtnObj = GameObject.Find("playVid");
        video = GameObject.Find("videoYeah");
        // Register this script for the virtual button's press/release events
        vbBtnObj.GetComponent<VirtualButtonBehaviour>().RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb) {
        video.GetComponent<VideoPlayer>().Play(); // play while the button is covered
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb) {
        video.GetComponent<VideoPlayer>().Pause(); // (assumed) pause on release
    }
}
And got the result:
Problems with the current version: a virtual button might not be the best solution for this. As far as I can tell, the condition for triggering is that the button detects a shadow covering it, without knowing the depth of the element that covered it. As shown at the end of the video, the angle and depth problems still need to be solved. (Also, ideally it shouldn't be my iPad, and I should be cutting on it. But I'd need an affordable tracker that I can get a hold of in quarantine life.)
The concept I chose to prototype for this assignment was the IoT cooking-assistant AR.
Google Slide - Spark AR
The concept I chose for prototyping is the ticketing app that sits on the left-hand side of the graph, meaning it is more practical and not too difficult to implement.
Google Slide - Spark AR
After struggling to choose between the Cat Cube and the Music Cube, I picked the Music Cube for documentation purposes.
I am also adding an LED to it to make it more interactive.
The first thing I thought of was to create a drawing tool in p5 using the gesture and color detection of the APDS9960. However, I found that serial communication of more than one value is a little tricky, so I only got as far as a rolling ball. I will continue working on this tiny project.
P.S. The webcam in the video is mirrored, so it appears opposite to my gestures.
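For next steps, the usual pattern for more than one value (a sketch, assuming p5.serialport on the p5 side and the Arduino printing comma-separated lines like "gesture,r,g,b" with Serial.println):

let serial = new p5.SerialPort();
serial.open('/dev/tty.usbmodem14101'); // placeholder port name
serial.on('data', gotData);

function gotData() {
  const line = serial.readLine().trim();
  if (!line) return; // readLine() also fires on partial data
  const [gesture, r, g, b] = line.split(',').map(Number);
  // gesture steers the ball; r/g/b could set the drawing color
}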
It is a web-based AR article that tells the story of a rescue that happened in Thailand. What they wanted to deliver through AR is the scale of the caves, to emphasize the crucial situation and what made the rescue difficult. The New York Times has always done a cool job with immersive storytelling in articles (e.g., the well-known "Snow Fall"), and they have even stepped into the AR/VR world.
I looked at a lot of cool AR projects but decided to pick this one because it doesn't use AR as something crazy, but as something that supports the article. The purpose of the interaction is very simple, to give the reader another dimension of the story, but at the same time it attracts readers who have no interest in the topic or don't even know about it.
This is also a very exclusive experience: only the people who went through that process know how tough it was. I think it is very smart that they chose scale as the core of this experience, following along the path the rescue team took.
One thing I think it could improve on is delivering two other dimensions of the rescue that the article mentions in the text: visibility and water level. Visibility in the cave is poor throughout, whether from the darkness or the muddy water. This could be brought into AR with an optional "rescuer's mode", where the only source of light is the one on your helmet, so that only the spot the reader is facing is visible and everywhere else stays dark. It might also be more immersive if there were an indication of the water level in the cave, to give the reader an idea of what made the rescue even tougher.
What impressed me is that (even though I've never been to the cave) the scale seemed very accurate in AR, even when I walked up close. Bringing my body near the cave conveys the scale by comparing it to my own body, instead of the human male figure that's normally used in infographics for this purpose.
Based on the previous week's analysis, we decided to work on the part that was interactive: the toilet doors.
The interactive door will have two main states, each with its own output: occupied and vacant.
The two sensors that we are using are the IR Beam Breaker and the VL53L0X.
To improve the experience, the first thing to eliminate is 'pushing' the door, and the VL53L0X is the right fit for this: connected to a motor, so that when the VL53L0X detects anyone within range, the door opens automatically.
The other sensor, the IR beam breaker, detects whether there is anyone inside. Its output (evaluated only when the VL53L0X detects someone) informs the user outside the door whether the stall is vacant, and while it is occupied the motor won't move.
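The logic as pseudocode (a JS sketch; the real build runs on a microcontroller, and openDoor()/showSign() are hypothetical helpers):

function updateDoor(distanceMm, beamBroken) {
  const occupied = beamBroken;          // IR beam breaker: someone is inside
  const approaching = distanceMm < 500; // VL53L0X: someone near the door (assumed threshold)
  if (occupied) {
    showSign('occupied');               // inform the person outside; the motor stays still
  } else {
    showSign('vacant');
    if (approaching) openDoor();        // opens automatically, no pushing
  }
}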
Below is the Flow Diagram of the logic:
And the Installation Diagram:
to be updated
For this week's assignment, I want to revisit my previous laser-cut project, Lei, and redesign it with a wooden base and brass details (the cutouts, ashtray, and vase tube).
My father used to collaborate with a brass maker and had designed a brass cup (making the huge mistake of not adding a handle to it... it conducts heat too well!), so I learned a little (with my eyes) about how brass works are formed. I already foresee myself holding a hammer and being the annoying neighbor... But I have to share this video; it is too beautiful. It shows how the brass makers are more in love with the time they spend on the work than with the finished piece. I felt for them so much while watching this video...
For this week's project, I wanted to actually make a box that works with what we learned in pcomp class, which is: the tone output!
Whenever I'm ready to head to The Container Store, the boxes at home are like, "Mom, you don't need more..."
So I decided to work with them.
I was sick during the week, so I forgot to document the process...
I made three holes in the cover surface for the following parts: the potentiometer, the speaker, and the LED. And one on the side for the USB cable.
The cover parts were first connected to the cover, then connected to the breadboard.
And made it work
Lei (淚) means tears, the essence of one's emotions.
For this repeatability project, I want to be as zero-waste as possible, which also means I want all 5 objects to be useful.
I had two directions:
1. Something for myself
I've always wanted a white modern lantern that looks somehow like this:
The materials would probably be rice paper, wire, a lighting system (experience from the flashlight assignment), and wood for the base.
However, not only is it time consuming, but also, do I really need 5 of them?
So I went into the second direction:
2. Something for my friend's kid
One of my college friends, Nina, has a baby named Arya who shares my birthday. (Not to mention it just because it's my birthday, but) it's less than a month until her birthday! I want to make something for this special baby. So I gathered some inspiration on the web and made some sketches:
From here I figured that among them all, there's one game where all 5 objects can be used, and that is: the Fishing Game!
I have always been amazed by the universe and the stars, so instead of fishing for fish, I want to create a star-fishing game! It also reminded me of the DreamWorks logo:
So here is how I made it--
Documentation
This is a project made in collaboration with Zeyao, working with the data The New York Times collected on the HK protests:
As a "Taiwan Chinese," my identity never seems to be free from political topics. My background is, as I term it, multi-Chinese-cultural: I grew up in Shanghai, my mom is from Beijing, and I hold a Taiwan passport. I feel uncomfortable calling myself either Taiwanese or Chinese in front of a crowd; I always call myself Chinese Taipei.
When the HK protests had just started, I saw how a tiny snowball slowly became a large one that could be dangerous.
Why does the snowball keep getting larger and larger while no one notices what could happen, or stops it?
Reading both sides' social media posts, I see that the two sides do not understand each other (maybe it's not that they can't, but that they don't even want to).
Inspired by this: what if we put this data into a piece of music, as a way to communicate? Since no one says no to music.
After I updated my Mac, nothing worked... I tried reinstalling Soundflower, but it still didn't work.
But I think the shape of the envelope is also intriguing.
I analyzed four instead of two because I felt very uncertain about how to do this; I found myself constantly changing my standards as I moved on to the next interactive webpage. Below is a graph with my own understanding of each criterion:
Everything scores high on "Distribution in Space," since they are all web-based interactive interfaces.
1. Neural Drum Machine
This is the first one I interacted with. Personally, I like how simple and straightforward the interface is. Learning this web app was fast: just some simple clicking on the shapes, simple adjustment of each attribute, and a clear looping animation. However, I do find the output way too simple; the only animation is sort of explanatory (showing you where the loop is going). You are only able to control one set of rhythms that plays in a loop. I like the rhythm generated from this interface: it is clean, logical, and easy to understand, just like its interface, but it lacks playfulness and diversity.
2. Rhythm Toy
I want to talk about the Rhythm Toy interface next because it also has simple user controls, yet less freedom and higher output than the Neural Drum Machine. The animation corresponds with the beats you put in, and it also provides a bit more diversity on the timbral level (I'm not sure whether that corresponds with the diversity of instruments used), which creates more fun than the previous one, even though the music is still educational, a safe play.
3. Groove Pizza
The third one I analyzed is Groove Pizza. I was attracted by the visuals of this interface: very logically laid out (the pie-chart-like system corresponding with the 7-column beat adjuster), and at the same time quite abstract (the shapes on the left-hand side, creating music not just by sound but in a visually related way). Even though the visual output seemed very interesting, I didn't rate it as high output, because the visuals were already there as input, and it produced nothing beyond what was expected. Also, I felt the default tempo was a little fast; I found it difficult to adjust the beats while playing. I think it is always good to have a rather slow default, let the users get familiar with the system first, and then they can adjust it themselves.
4. Beat Blender
Last but not least, Beat Blender. This is a very interesting interface that sort of hides the magic of the music behind a four-color gradient. Each corner holds one beat loop, and in the sections where the colors mix together, the beats also sort of mix. Compared to the rest of the interfaces, this is a more experimental project, especially by music standards, since the user is given more freedom. Even though I found it pretty complicated to adjust the beat at each corner (it gives a lot of default beat options), you don't need to adjust anything to find it intriguing, thanks to the "Drag / Draw" at the bottom left corner. It is definitely a very new way to combine loops; it almost looks random, but the loops are somehow related to each other because they sit in a 'gradient'.
In a room full of people, a crowd stopped to see this: a transparent cube with a man inside, standing in the middle of running machines, trying to stop the noises the machines create. Outside the cube are typewriters, connected to an AI system with a voice library, one on each side of the cube facing the audience. The Anxiety of Machination is an interactive installation piece and performance tool inspired by the track "Intro" by the experimental hip-hop group clipping.
This piece aims to explore the relationship between human and machine: the anxiety produced by their interaction and the power dynamics between them.
The first thing that came to my mind when I thought about pixels was the 'text drawings' you sometimes see on the Twitter timeline, which might look something like this:
I want to play with only the negative/positive space of each letter or symbol, creating shapes and lines out of them without changing the opacity/brightness of the elements. To do this, I need to get the brightness instead of the color of each pixel, and assign each brightness range to a different character using if statements.
I made the negative space black, which means the darker pixels get assigned to the symbols with the least positive space. I ended up with the following statement:
let brit = int(brightness(img.get(x, y)));
if (brit >= 0 && brit <= 10) {
  updateText('.', x, y);
} else if (brit >= 11 && brit <= 20) {
  updateText('*', x, y);
} else if (brit >= 21 && brit <= 30) {
  updateText('!', x, y);
} else if (brit >= 31 && brit <= 40) {
  updateText('/', x, y);
} else if (brit >= 41 && brit <= 50) {
  updateText('+', x, y);
} else if (brit >= 51 && brit <= 60) {
  updateText('=', x, y);
} else if (brit >= 61 && brit <= 70) {
  updateText('o', x, y);
} else if (brit >= 71 && brit <= 90) {
  updateText('O', x, y);
} else if (brit >= 91 && brit <= 100) {
  updateText('@', x, y);
}
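The same mapping can also be written as a lookup table (an equivalent sketch, same behavior, just more compact):

const ramp = [
  [10, '.'], [20, '*'], [30, '!'], [40, '/'], [50, '+'],
  [60, '='], [70, 'o'], [90, 'O'], [100, '@'],
];

function symbolFor(brit) { // brit is 0-100, p5's default brightness() range
  for (const [max, ch] of ramp) {
    if (brit <= max) return ch;
  }
  return '@';
}
// usage: updateText(symbolFor(brit), x, y);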
I'm pretty satisfied with the look already, but I still sort of wanted to play with colors relevant to the colors in the capture. I first tested a range for red (pure red would be 255, 0, 0, 1 in RGBA, but I knew the lightness would affect the values). I read out the values of each pixel and gave them the range R > 200, G < 50, B < 50, A > 0. However, even though I was wearing a red hoodie, it still didn't catch any red. So I made the range even wider:
let c = img.get(x, y);
if (c[0] >= 170 &&
    c[1] <= 100 &&
    c[2] <= 100 &&
    c[3] >= 0) {
  fill('red');
} else {
  fill('white');
}
And this is what I got:
I love zooming in on the symbol pixels to see them wiggle. I also like this visual aesthetic a lot, and it's pretty accurate:
And I had fun playing with this by creating a slider that changes the colors: Code
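Since the lightness problem comes from thresholding raw RGB values, one possible refinement (a sketch, not what the code above uses; the thresholds are assumed) is to test hue and saturation instead:

const px = img.get(x, y);             // [r, g, b, a], read in the default RGB mode
const c = color(px[0], px[1], px[2]); // build the color while still in RGB mode
colorMode(HSB, 360, 100, 100);        // read hue on a 360° scale
const h = hue(c);
const s = saturation(c);
colorMode(RGB, 255);                  // back to the default mode
if ((h < 20 || h > 340) && s > 40) {  // red hues wrap around 0°
  fill('red');
} else {
  fill('white');
}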
After playing with pure text, and going back to my initial inspiration of visuals made out of text: recently, people have also been using emojis as pixels.
Seeing the potential for this to become a video filter, I chose the two emoji that came straight to mind when I planned it: 🌚 and 🌝, which are perfect for dark and bright pixels. Then I tested emojis that would give a good transition between the two (while carrying a similar 'meme(?) vibe'), and the alien worked out best! It is pretty interesting that from afar the color scheme actually looks like a decent color choice (well, it should be), but when you zoom in, you see faces...
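The mapping is the same thresholding idea as the text version (a sketch; the cutoffs are illustrative):

const emojiRamp = ['🌚', '👽', '🌝']; // dark → mid → bright
function emojiFor(brit) {             // brit is 0-100 from p5's brightness()
  if (brit < 33) return emojiRamp[0];
  if (brit < 66) return emojiRamp[1];
  return emojiRamp[2];
}
// usage: text(emojiFor(brit), x, y);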
For this week's project, I combined it with my fabrication box project!
And the code for this:
void setup() {
  pinMode(5, OUTPUT);  // LED on pin 5
  Serial.begin(9600);
}

void loop() {
  int potValue = analogRead(A7);  // read the potentiometer (0-1023)
  Serial.println(potValue);
  int brightness = map(potValue, 0, 1023, 0, 255);  // rescale to the PWM range
  analogWrite(5, brightness);                       // dim the LED
  int frequency = map(potValue, 0, 1023, 100, 4000);  // rescale to an audible range
  tone(6, frequency);                                 // drive the speaker on pin 6
  delay(100);
}
LED Light Switch on Arduino
It is all done by Magic.
Try:
Lumos Maxima...
Lumos MaxiMa...
Lumos MAXIMA!!
Hi... So the inspiration came from the material: magnets, which I had intended to get for another class, and which also conduct electricity. I wanted to make a switch that happens to look like this:
Yet when I tried to connect the thin wire to the magnet, I realized that, unlike the one in the image, every material needs to be bare and touching for the current to flow. However, when magnets are touching each other, their force is too strong to open the circuit again. Facing the truth of my lack of knowledge in this field (only for now), I needed to think of another way, since I did not want to give up on the material.
So, instead of having the magnet as one of the connections, I used the magnet's pull as the force that closes the circuit, pairing a material that is attracted to magnets with one that conducts electricity but is not attracted to them. With some help from Google, I had this:
After the first try, I made a new, cleaner version of the system:
And then, I wanted to hide the magic, so I covered everything up with a film case, plus a wand that has magnets hidden at its tip: