Final Project: Jake Marsico

Final Project,Submission,Uncategorized — jmarsico @ 11:45 pm

The final deliverable for these two instruments (video portrait register and reactive video sequencer) was a pair of installations on the CMU campus.

 Learnings:

The version shown in both installations had major flaws. The installation was meant to show a range of clips that varied in emotion and flowed seamlessly together. Because I shot the footage before completing the software, it wasn’t clear exactly what I needed from the actor (exact length of each clip, precision of face registration, number of clips for each emotion). After finishing the playback software, it became clear that the footage on hand didn’t work as well as it could have. Most importantly, the majority of the clips lasted for more than 9 seconds. To really nail the fluid transitions, I had to play each clip forward and then in reverse, so that each clip finished in the same position it started in. Doing that with each 9-second clip would have meant that each clip lasted a total of 18 seconds (9 forward, 9 backward). These 18-second clips would eliminate any responsiveness to the movements of viewers.

As a result, I chose to only use the first quarter of each clip and play that forward and back. Although this made the program more responsive to viewers, it cut off the majority of the subject’s motions and emotions, rendering the entire piece almost emotionless.

Another major flaw was that the transitions between clips were very noticeable as a result of imperfect face registration. In hindsight, it would require an actor or actress with extreme dedication and patience to perfectly register their face at the beginning of each clip. It might also require some sort of physical registration hardware for the body. A guest critic suggested that a better solution might be to pair the current face-registration tool with a face-tracking and frame re-alignment application in post-production.

If this piece were to be shown outside the classroom, I would want to re-shoot the video with a more explicit “script” and look into building a software face-aligning tool using existing face-tracking tools such as ofxFaceTracker for openFrameworks.
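As a starting point for that alignment tool, here is a minimal sketch of the idea, assuming openFrameworks with the ofxCv and ofxFaceTracker addons; the class and file names are hypothetical.

```cpp
// Minimal sketch of a post-production face re-alignment tool, assuming
// openFrameworks with the ofxCv and ofxFaceTracker addons. File names are
// hypothetical; on pre-0.9 openFrameworks, load() would be loadImage().
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class FaceAligner {
public:
    void setup(const std::string& referencePath) {
        tracker.setup();
        reference.load(referencePath);            // e.g. "register_frame.png"
        tracker.update(ofxCv::toCv(reference));
        referencePos = tracker.getPosition();     // 2D face position in the reference
    }

    // Translation that would shift this frame's face onto the reference face.
    ofVec2f offsetFor(const std::string& framePath) {
        ofImage frame;
        frame.load(framePath);
        tracker.update(ofxCv::toCv(frame));
        if (!tracker.getFound()) {
            return ofVec2f(0, 0);                 // no face found: leave the clip untouched
        }
        return referencePos - tracker.getPosition();
    }

private:
    ofxFaceTracker tracker;
    ofImage reference;
    ofVec2f referencePos;
};
```

Applying the returned offset to each clip as a simple translation would at least remove the gross registration error between takes.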

Code:

github.com/jmarsico/Woo/tree/master

 

Final Project Milestone 3: Jake Marsico

Assignment,Final Project — jmarsico @ 1:38 pm

 

Sequencing Software

The past two weeks were dedicated to building out the dynamic video sequencing software. Because a primary goal of this project was to create seamless playback across many unique video clips, building a system with low-latency switching was key. Another goal was to build reactive logic into the video sequencing.

I will explain how I addressed both of these challenges as I walk through the individual components. Below is a high-level diagram of the software suite.

[Diagram: high-level overview of the software suite]

 

At the top of the diagram is openTSPS, an open-source blob-tracking tool built with openFrameworks by Rockwell Labs. openTSPS picks up webcam or Kinect video, analyzes it with openCV and sends OSC packets that contain blob coordinates and other relevant events. For this project, we are only concerned with ‘personEntered’ events and knowing when the room is empty.

[Screenshot: openTSPS]
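The patch receives these events in Max, but as a rough illustration, here is how the same two events could be picked up in openFrameworks with ofxOsc; the port and the address matching are assumptions to confirm against your openTSPS settings.

```cpp
// Rough openFrameworks/ofxOsc illustration of listening for the two openTSPS
// events this project cares about. Port 12000 and the address patterns follow
// openTSPS defaults but should be confirmed in the openTSPS settings panel.
#include "ofMain.h"
#include "ofxOsc.h"
#include <functional>

class TSPSListener {
public:
    void setup(int port = 12000) {
        receiver.setup(port);
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);           // getNextMessage(&m) on older ofxOsc
            const std::string addr = m.getAddress();
            if (addr.find("personEntered") != std::string::npos) {
                peopleCount++;
                onPersonEntered();
            } else if (addr.find("personWillLeave") != std::string::npos) {
                if (peopleCount > 0) peopleCount--;
                if (peopleCount == 0) onRoomEmpty();
            }
        }
    }

    // Hooks for the sequencer: these are the two moments that bypass the state machine.
    std::function<void()> onPersonEntered = []{};
    std::function<void()> onRoomEmpty     = []{};

private:
    ofxOscReceiver receiver;
    int peopleCount = 0;
};
```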

Below openTSPS in the above diagram is the max/msp/jitter program, which starts with a probabilistic state machine that chooses which group of videos will play next. The state machine relies on Nao Tokui’s markov object, which uses a Markov chain to add probability weights to each possible state change. This patch loads two different state-change tables: one for when people are present and one for when the room is empty. This relates to a key idea of the project: that people act very differently when they are alone than when they are around others.

[Screenshot: state transition with the markov object]
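The real patch leans on the markov external, but the underlying idea is simple enough to sketch in plain C++; the seven mood names and the weight tables below are placeholders, not the actual values.

```cpp
// Plain-C++ sketch of what the markov external is doing: a weighted transition
// table per context (room occupied vs. empty), sampled to pick the next video
// group. The seven mood names and any weights are placeholders.
#include <array>
#include <random>

enum class Mood { Neutral, Joy, Sadness, Anger, Fear, Disgust, Surprise };
constexpr int kNumStates = 7;

class MarkovSequencer {
public:
    // One row per current state; one weight per possible next state.
    using Table = std::array<std::array<float, kNumStates>, kNumStates>;

    MarkovSequencer(const Table& whenOccupied, const Table& whenEmpty)
        : occupied(whenOccupied), empty(whenEmpty), rng(std::random_device{}()) {}

    Mood next(Mood current, bool roomIsOccupied) {
        const auto& row = (roomIsOccupied ? occupied : empty)[static_cast<int>(current)];
        std::discrete_distribution<int> pick(row.begin(), row.end());
        return static_cast<Mood>(pick(rng));
    }

private:
    Table occupied, empty;
    std::mt19937 rng;
};
```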

The markov state transition machine is connected to a router that passes a metro signal to the different video groups. The metro is called the ‘Driving Signal’ in the top diagram. The video playback section of the patch relies on a constantly running metro that is routed to one of seven double-buffered playback modules, which are labeled as emotional states according to the groups of video clips they contain.

[Screenshot: router with video group modules]

Each video group module contains two playback modules. The video group module controls which of the two playback modules is running.

[Screenshot: video group module]

While one playback module is playing, the other one loads a new, random video from a list of videos.

[Screenshot: playback module]
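Outside of Max, the same double-buffering idea might look roughly like the sketch below, using openFrameworks’ ofVideoPlayer; clip lists and method names are hypothetical.

```cpp
// Sketch of one double-buffered playback module: while the front player runs,
// the back player preloads a random clip from the group's list, and the two
// swap when the driving signal arrives. Uses ofVideoPlayer (loadMovie() on
// older openFrameworks); clip paths are placeholders.
#include "ofMain.h"

class PlaybackModule {
public:
    void setup(const std::vector<std::string>& clipPaths) {
        clips = clipPaths;
        front = &playerA;
        back  = &playerB;
        preloadRandom(*front);
        preloadRandom(*back);
    }

    // Called on each driving-signal tick routed to this module.
    void trigger() {
        std::swap(front, back);       // the preloaded player becomes the live one
        front->play();
        preloadRandom(*back);         // quietly load the next random clip behind it
    }

    void update() { front->update(); }
    void draw()   { front->draw(0, 0); }

private:
    void preloadRandom(ofVideoPlayer& p) {
        if (clips.empty()) return;
        int idx = (int) ofRandom(clips.size());
        p.load(clips[idx % clips.size()]);
        p.setPaused(true);            // decoded and waiting, so the swap is near-instant
    }

    std::vector<std::string> clips;
    ofVideoPlayer playerA, playerB;
    ofVideoPlayer *front = nullptr, *back = nullptr;
};
```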

Here is a shot of the entire patch, from top to bottom:

[Screenshot: the full “woo” max/msp/jitter patch]

 

Challenges

The software works well on its own, but once connected to openTSPS’s OSC stream, it starts to act up. The patch is set up to bypass the state transition machine on two occasions: when a new person enters the room and when the last person leaves the room. To do this properly, the patch needs correct data from openTSPS. In its current location, openTSPS (paired with a PS3 Eye) is difficult to calibrate, resulting in false events being sent to the application. One option is to build a filter at the top of the patch that only allows a certain number of entrances or ’empties’ within a given time period. Another option is to find a location with more controllable and consistent light.
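That filter could be as simple as a minimum gap between accepted events; here is a small C++ sketch of the idea (the one-second window is an arbitrary placeholder to tune against the space).

```cpp
// One possible shape for the event filter: ignore 'entered' / 'empty' triggers
// that arrive too soon after the last accepted one.
#include <chrono>

class EventGate {
public:
    explicit EventGate(double minSecondsBetweenEvents = 1.0)
        : minGap(minSecondsBetweenEvents) {}

    // Returns true only if enough time has passed since the last accepted event.
    bool accept() {
        const auto now = std::chrono::steady_clock::now();
        const std::chrono::duration<double> sinceLast = now - last;
        if (sinceLast.count() < minGap) return false;
        last = now;
        return true;
    }

private:
    double minGap;
    std::chrono::steady_clock::time_point last{};
};
```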

Another challenge is that the footage was shot before the completion of the software. As a result, much of the footage seems incomplete and without emotion. To make the program more responsive to visitors, the clips need to be shorter (more like ~2 seconds each instead of ~9, which is the average for this batch).

Final Project Milestone 2 – Jake Marsico

Assignment,Final Project,Max,Uncategorized — jmarsico @ 1:21 pm


The Shoot

This past weekend I finished the video shoot with The Moon Baby. Over the course of three and a half hours, we shot over 80 clips. A key part of the project was to build a portrait rig that would allow the subject to register her face at the beginning of every clip. The first prototype of this rig consisted of a two-way mirror that had registration marks on it. The mirror prototype proved to be inaccurate.

The second prototype, which we used for the shoot, relied on a direct video feed from the video camera, a projector and a projection surface with a hole cut out for the camera to look through.

 

 

At the center of this rig was a max/msp/jitter patch that overlaid a live feed from the video camera on top of a still “register image”. This way, the subject was able to see her face as the camera saw it and line up her eyes, nose, mouth and makeup with a constant still image. See an image of the patch below:

[Screenshot: the face-registration overlay patch]
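The overlay itself is just the live feed drawn at partial opacity over the frozen register image. A minimal openFrameworks equivalent of that core idea might look like this; the camera source and file name are placeholders, and the real rig pulled its feed in over Syphon rather than a local grabber.

```cpp
// Minimal sketch of the overlay idea: the live feed drawn at roughly half
// opacity over the frozen register image.
#include "ofMain.h"

class RegisterOverlayApp : public ofBaseApp {
public:
    void setup() {
        grabber.setup(1280, 720);                 // initGrabber() on older openFrameworks
        registerImage.load("register_image.png"); // frozen reference pose (hypothetical file)
    }

    void update() { grabber.update(); }

    void draw() {
        registerImage.draw(0, 0);                 // still image underneath
        ofEnableAlphaBlending();
        ofSetColor(255, 255, 255, 128);           // live feed at ~50% opacity on top
        grabber.draw(0, 0);
        ofSetColor(255);
        ofDisableAlphaBlending();
    }

private:
    ofVideoGrabber grabber;
    ofImage registerImage;
};
```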

 

The patch relied on Blair Neal’s Canon2Syphon application, which pulls video from the Canon DSLR’s USB cable and places it into a Syphon stream. That stream is then picked up by the max/msp/jitter patch.

Here is a diagram of the entire projection rig:

[Diagram: Woo portrait setup]

Early in the shoot, we discovered a flaw in the system: the Canon camera isn’t able to record video to its CF card while its video feed is being sent to the computer. As a result, we had to unplug the camera after the subject registered her face, record the clip, then plug the camera back in. We also had to close and reopen Canon2Syphon after each clip was recorded.

[Photo: wide shot of the entire setup]

 

To light the subject, I used a combination of DMX-controlled fluorescent and LED lights along with several flags, reflectors and diffusers.

 

 

Project Milestone 1 – Jake Marsico

Assignment,Final Project,Max — jmarsico @ 12:06 pm

Portrait Jig Prototype


One of the primary challenges of delivering fluid non-linear video is to make each clip transition as seamless as possible. To do this, I’m working on a jig that will allow the actor to ‘register’ the position of his face at the end of each short clip. As you can see in the image below, the jig revolves around a two-way mirror that sits between the camera and the actor.  This will allow the actor to mark off eye/nose/mouth registers on the mirror and adjust his face into those registers at the end of every movement.

 

[Photo: two-way mirror jig prototype]

The camera will be located directly against the mirror on the other side to minimize any glare.  As you can see below, the actor will not see the camera.  Likewise, video shot from the camera will look as though the mirror is invisible. This can be seen in the test video used in the max/openTSPS demo below.

OpenTSPS-controlled Video Sequencer

 

The software for this project is broken up into two sections: a Markov chain state machine and a video queueing system. The video above demonstrates the first iteration of a video queueing system that is controlled by OSC messages coming from openTSPS, an open-source people-tracking application built on openCV and openFrameworks. In short, the max/msp application queues up a video based on how many objects the openTSPS application is tracking.
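The queueing rule in this first iteration is essentially a lookup from people count to clip; a toy C++ version of that mapping, with placeholder clip names:

```cpp
// Toy version of the queueing rule: map the number of people openTSPS is
// tracking to a clip. Clip names are placeholders.
#include <algorithm>
#include <string>
#include <vector>

std::string clipForCrowd(int peopleCount) {
    static const std::vector<std::string> clips = {
        "empty_room.mov", "one_person.mov", "small_group.mov", "crowd.mov"
    };
    const int idx = std::max(0, std::min<int>(peopleCount, (int) clips.size() - 1));
    return clips[idx];
}
```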


The second part of the application is a probability-based state transition machine that will be responsible for selecting which emotional state will be presented to the audience.  At the core of the state transition machine is an external object called ‘markov’, written by Nao Tokui. Mapping out the probability of each possible state transition based on a history of previous state transitions will require much thought and time.

Sound Design

Along with queueing up different video clips, the max patch will be responsible for controlling the different filters and effects that the actor’s voice will be passed through. For the most part, this section of the patch will rely on different groups of signal degradation objects and resonators~.

 

Actor Confirmed


Sam Perry has confirmed that he’d like to collaborate on this project. His character The Moon Baby is fragile, grotesque, self-obsessed and vain. These characteristics mesh well with the emotional state changes shown in the project.

Final Project Proposal – Jakob Marsico

Group Project: Wireless Data + Wireless Video System (part 2)

Arduino,Assignment,Hardware,Max,OpenCV,Submission — jmarsico @ 11:07 pm

Overview

This project combines a Wixel wireless data system, servos, microcontrollers and wireless analog video in a small, custom-built box to provide wireless video with  remote viewfinding control.


 

Hardware

Camera-Box:

  • Wixel wireless module
  • Teensy 2.0 (code found HERE)
  • Wireless video transmitter
  • 3.3v servo (2x)
  • FatShark analog video camera
  • 12v NiMH battery
  • 9v battery
  • 3.7v LiPo battery
  • Adafruit LiPo USB charger


 

Control Side:

  • Alpha wireless video receiver
  • Analog to Digital video converter (ImagingSource DFG firewire module)
  • Wixel Wireless unit
  • Max/MSP (patch found HERE)

 

System Diagram:

[Diagram: wireless servo camera system]

 

 

 

Tips and Gotchas:

1. Max/MSP Patch Setup:

  1. Connect your preferred video ADC to your computer.
  2. Open the patch.
  3. Hit the “getvdevlist” message box and select your ADC in the drop-down menu.
  4. Hit the “getinputlist” message box and select the correct input option (if your unit has multiple).
  5. If you see “NO SIGNALS” in the Max patch:
  • Double-check the cables; this is a common problem with older analog video.
  • Verify that the camera and wireless transmitter are powered at the correct voltage.

2. Power Choices:

  1. We ended up using three power sources within the box. This isn’t ideal, but we found that the power requirements for the major components (Teensy, Wixel, transmitter, camera) are somewhat particular. Also keep in mind that the video transmitter is the largest power consumer, drawing around 300 mA.

 

 Applications:

 

1. Face Detection and 2. Blob Tracking

 

Using the cv.jit suite of objects, we built a patch that pulls in the wireless video feed from the box and uses openCV’s face detection capabilities to identify people’s faces. The same patch also uses openCV’s background removal and blob tracking functions to follow blob movement in the video feed.

Future projects could use this capability to send movement data to the camera servos once a face is detected, either to center the person’s face in the frame or to look away as if the camera were shy.
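The patch does its detection with cv.jit, but purely as an illustration, here is how that face-centering step might look in plain OpenCV; the gain and the pan/tilt step format are made up.

```cpp
// Illustration only: the same face-centering step in plain OpenCV rather than
// cv.jit. The gain (divide by 40) and the pan/tilt step format are assumptions.
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

struct ServoNudge { int pan; int tilt; };   // small signed steps sent over the Wixel link

ServoNudge nudgeToward(const cv::Mat& frame, cv::CascadeClassifier& faceCascade,
                       bool shy = false) {
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces);
    if (faces.empty()) return {0, 0};

    const cv::Point faceCentre(faces[0].x + faces[0].width / 2,
                               faces[0].y + faces[0].height / 2);
    const cv::Point frameCentre(frame.cols / 2, frame.rows / 2);

    // Proportional step toward (or, when "shy", away from) the detected face.
    int pan  = (faceCentre.x - frameCentre.x) / 40;
    int tilt = (faceCentre.y - frameCentre.y) / 40;
    if (shy) { pan = -pan; tilt = -tilt; }
    return {pan, tilt};
}
```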

We can also use the blob tracking data to adjust playback speed or signal processing parameters for the delayed video installation mentioned in the first part of this project.

 

3. Handheld Control


 

In an effort to increase the mobility and potential covertness of the project, we also developed a handheld control device that could fit in a user’s pocket. The device uses the same Wixel technology as the computer-based controls, but is battery operated and contains its own microcontroller.
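For reference, the handheld unit’s firmware could be as simple as the hypothetical Arduino-style sketch below: read a thumb joystick and stream pan/tilt bytes to the camera box over the Wixel’s serial link. The pins, packet format and update rate are assumptions, not the project’s actual code.

```cpp
// Hypothetical handheld-controller firmware: read a joystick on two analog
// pins and send pan/tilt bytes over the Wixel's serial link.
#include <Arduino.h>

void setup() {
  Serial1.begin(9600);      // hardware UART wired to the Wixel (e.g. on a Teensy)
}

void loop() {
  int pan  = map(analogRead(A0), 0, 1023, 0, 180);   // joystick X -> servo degrees
  int tilt = map(analogRead(A1), 0, 1023, 0, 180);   // joystick Y -> servo degrees
  Serial1.write((uint8_t) 0xFF);                     // simple one-byte header
  Serial1.write((uint8_t) pan);
  Serial1.write((uint8_t) tilt);
  delay(50);                                         // ~20 updates per second
}
```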

Group Project: Wireless Data + Wireless Video System (part 1)

Uncategorized — jmarsico @ 5:51 pm

 

 

Idea Proposals: 

1. Interactive Ceiling Robot

Wireless => Portability. To showcase the substantial reach of wireless control, a robot with a camera sits on a high ceiling and interacts with the people beneath it. The robot moves around in the high shadows, feeding video of the ground below from various directions. When no one is underneath, the robot lowers a small ball of yarn or a candy bar on a string to just above ground level, enticing those nearby. As soon as movement is detected or a person goes for the bait, the robot reels the bait back in out of reach. The video feed captures the person’s dismay, and the process repeats.

The ceiling is a new frontier, often unexpected and unnoticed. A robot, supposedly a machine subservient to humans, now turns the tables and mocks them from its noble high perch. From above it claims a bird’s-eye view, monitoring like Big Brother or looking down upon those beneath: a reversal of the power structure.

A key robustness factor will be a versatile clamping mechanism that can easily hook onto various pipes or structural supports along the ceiling. Internal cushioning could protect the robot in case of falls, and the wirelessly controlled camera would need an easy-to-use interface.

2. “Re-Enter”

We will place the wireless video system near an entrance and record people walking into a building. Inside the building a delayed playback of that video will be projected elsewhere in the building, near the entrance. Some visitors, who happen to travel past the playback location, will possibly see a video of themselves entering the building in the past. The project is a mobile version of Dan Graham’s “Time Delay Room“. Users will be able to affect the playback time and angle of the camera.
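A toy sketch of the time-delay mechanism, assuming openFrameworks and an in-memory frame buffer; a real installation with longer delays would more likely record frames to disk.

```cpp
// Toy sketch of the time-delay idea: keep a rolling buffer of camera frames
// and draw the frame from a few seconds ago. Delay length, frame rate and
// resolution are placeholders.
#include "ofMain.h"
#include <deque>

class DelayedPlayback : public ofBaseApp {
public:
    float delaySeconds = 5;                             // how far in the past to show

    void setup() { grabber.setup(640, 480); }

    void update() {
        grabber.update();
        if (grabber.isFrameNew()) {
            buffer.push_back(grabber.getPixels());      // newest frame at the back
            const size_t maxFrames = (size_t)(delaySeconds * 30);  // assume ~30 fps
            while (buffer.size() > maxFrames) buffer.pop_front();
        }
    }

    void draw() {
        if (buffer.empty()) return;
        delayedFrame.setFromPixels(buffer.front());     // oldest frame = delayed view
        delayedFrame.draw(0, 0);
    }

private:
    ofVideoGrabber grabber;
    std::deque<ofPixels> buffer;
    ofImage delayedFrame;
};
```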

The project aims to confuse visitors just enough to stop their quick routine. Faced with the possibility of seeing a moving image of themselves, visitors are forced to contemplate their current and near-past actions. This team, made up of mechanical and electrical engineers, artists and A/V experts, is well equipped to take on the challenges presented by this project, including confronting visitors’ fast-paced schedules.

A main challenge of this proposal will be to build a wireless video transmitter that can handle outdoor weather and be secured against theft. See the drawings below for proposed changes. To deal with weather, the team will build a temporary vestibule to shield the unit from rain and wind. To prevent theft, the team will include anchor points on the wireless box that can be used to lock it to a permanent structure nearby.

3. yelling robot

We will fix the portable camera on a four-wheeled vehicle and place two sensors to capture applause coming from opposite directions. The audience will be divided into two groups with the vehicle placed in the middle; the vehicle will then move toward whichever group applauds louder, capturing the faces of the winning group as it goes.

This project aims to simulate competition between two groups, like a wireless edition of tug-of-war. Since the projector will display the winning group, the other group will try their best to win the projection’s focus by making louder sound.

The main challenge will be capturing the winning group’s faces and adjusting the camera angle as the vehicle moves back and forth between the groups.

 

Block Diagram of System
[Block diagram: wireless servo camera system]

Diagram of improved, more robust, weatherproof box:
[Sketch: improved weatherproof box]

 

participants: Job Bedford, Chris Williams, Ziyun Peng, Ding Xu, Jake Marsico

Assignment 2: “Be Still” by Jakob Marsico (2013)

Assignment,Submission — jmarsico @ 9:57 pm

 

 

“Be Still” mimics rhythmic patterns that can only be heard when one stops to listen. The piece focuses on our need to pay attention to nature. Using motion sensors, solenoid valves and a microcontroller, “Be Still” forces the visitor to stop for an exaggerated amount of time before it grants them the pleasure of hearing its song.
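The core logic is a stillness gate: only after the motion sensors have been quiet for an exaggerated interval does the piece begin. A hypothetical Arduino-style sketch, assuming a single PIR sensor and one solenoid valve; pins and timings are placeholders.

```cpp
// Hypothetical sketch of the stillness gate: only after the motion sensor has
// been quiet for a long interval do the solenoid valves begin the rhythm.
#include <Arduino.h>

const int PIR_PIN      = 2;
const int SOLENOID_PIN = 8;
const unsigned long STILLNESS_REQUIRED_MS = 30000;   // visitor must hold still ~30 s

unsigned long lastMotionMs = 0;

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(SOLENOID_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {
    lastMotionMs = millis();                         // any movement resets the wait
    digitalWrite(SOLENOID_PIN, LOW);
  } else if (millis() - lastMotionMs > STILLNESS_REQUIRED_MS) {
    digitalWrite(SOLENOID_PIN, HIGH);                // reward: a simple rhythmic pulse
    delay(60);
    digitalWrite(SOLENOID_PIN, LOW);
    delay(440);
  }
}
```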

Instrument: “Exploded Views” by Jim Campbell (2011)

Instrument,Reference — jmarsico @ 1:05 pm

[Photo: “Exploded Views” by Jim Campbell]


Link to great interview about the piece.

 

Instrument: “60 medical infusion sets, water, fire, metal sheets 20x20x4cm” by Zimoun (2013)

Instrument,Reference — jmarsico @ 12:57 pm

[Photo: installation by Zimoun]


 
