
Voice Trigger

Data Mirror combines hardware and software to visualize the human being. In the stage of communicating with the artificial intelligence, the voice trigger is coded in Processing with the wblut library: it detects the volume of the participant's voice, and once the volume reaches a certain level the piece advances to the next stage. For the voice-detection stage, the design of the platform includes a sphere composed of different shades of blue, a color chosen for its sense of science and artificial intelligence. This stage also includes background music created in Ableton Live by adding futuristic effects to simple notes to give it a sci-fi feel. The background music plays together with text-to-speech recordings that imitate the voice of a robot.
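The once-per-stage threshold check can be sketched in plain Java (Processing's base language). The class name, the 0.3 threshold, and the smoothing factor are illustrative assumptions; in the actual sketch the amplitude would come from an audio library rather than being fed in by hand.

```java
// Minimal sketch of the voice-trigger logic (illustrative, not the actual sketch).
// Amplitude samples are fed in directly so the threshold logic can be shown
// in isolation from any audio library.
public class VoiceTrigger {
    private final float threshold;   // level that advances to the next stage (assumed value)
    private float smoothed = 0f;     // smoothed amplitude, to avoid spurious spikes
    private boolean fired = false;   // the stage change should fire only once

    public VoiceTrigger(float threshold) {
        this.threshold = threshold;
    }

    // Feed one amplitude sample (0.0 .. 1.0); returns true the first time
    // the smoothed level crosses the threshold.
    public boolean update(float level) {
        smoothed = 0.8f * smoothed + 0.2f * level;  // simple exponential smoothing
        if (!fired && smoothed >= threshold) {
            fired = true;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        VoiceTrigger trigger = new VoiceTrigger(0.3f);
        float[] samples = {0.05f, 0.1f, 0.6f, 0.7f, 0.8f, 0.9f};
        for (float s : samples) {
            if (trigger.update(s)) {
                System.out.println("triggered");  // fires once, on the sustained loud samples
            }
        }
    }
}
```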


[Image: voice trigger screenshot]

Camera Detection

The stage that detects the participant's race and provides a description for the categorization is built on Arduino and PixyCam, a fast vision sensor that connects to the computer and tracks the objects it has been commanded to detect. PixyCam can learn new objects when asked to do so; in this case, I trained the PixyCam to identify different skin tones by rewriting the commands in Arduino. Once the voice detection triggers the camera detection and the skin color of the participant is determined, a simplified figure of the human being is projected onto the big screen for both the participant and the rest of the audience to observe. By combining two stages built on two different types of sensors, Data Mirror gives users the experience of a futuristic world in which technology takes a significant part in social formation and control over the citizens.
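How a detected color might be turned into the projected label can be sketched as follows. PixyCam reports each trained color as a numbered "signature," so the sketch only needs to map a signature number to a label; the table below is an assumption for illustration (only the "Asian (Yellow)" label appears in the piece), not the actual Arduino code.

```java
import java.util.Map;

// Illustrative lookup from a PixyCam color-signature number to the label that
// Data Mirror projects. Which signature number corresponds to which skin tone
// depends on the order the camera was taught, so the numbering is assumed.
public class SkinToneLabels {
    private static final Map<Integer, String> LABELS = Map.of(
        1, "Asian (Yellow)"   // label shown in the installation; signature number assumed
    );

    // Return the projected label for a detected signature, or a fallback
    // when the camera reports a signature that was never trained.
    public static String labelFor(int signature) {
        return LABELS.getOrDefault(signature, "Unrecognized");
    }

    public static void main(String[] args) {
        System.out.println(labelFor(1));  // prints: Asian (Yellow)
    }
}
```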


Asian (Yellow)
