Final Project Presentation – Mauricio Contreras

Assignment,Final Project,Robotics,Submission,Technique — mauricio.contreras @ 11:47 pm

My fourth and final milestone, and the final project presentation, were originally about real-time driving of the movement of a simulated robotic arm with a haptic-feedback-capable gestural controller. By this time, however, I was interfacing with an actual ABB IRB 6640 industrial robot, and my controller was a smartphone. The smartphone's IMU allows its orientation to be read, which enables mapped gestural control of the position of the robot's head based on the tilt of the phone about each of its axes. The haptic feedback is provided by the smartphone's vibrator. By the 3rd milestone I had already implemented the motion control system and vibration upon the robot touching "virtual walls" (preset coordinates beyond which motion is not allowed), but I had concluded that the quality of the motion I was getting was not good enough to make the system a desirable sculpting tool. I therefore shifted the priorities of the project towards getting better motion characteristics, rather than exploring the haptic feedback side further, which so far is only binary (contact = maximum vibration, no contact = no vibration).
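On the phone, that binary feedback amounts to little more than the following minimal Android sketch (illustrative only; how the wall-contact flag reaches the phone, and the onWallState entry point, are assumptions of mine, not the actual code):

```java
import android.content.Context;
import android.os.Vibrator;

// Minimal sketch of the binary haptic feedback described above:
// full vibration while the robot touches a "virtual wall", none otherwise.
// `wallContact` is a hypothetical flag assumed to arrive from the robot side.
public class WallFeedback {
    private final Vibrator vibrator;

    public WallFeedback(Context context) {
        vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
    }

    public void onWallState(boolean wallContact) {
        if (wallContact) {
            vibrator.vibrate(100); // short max-strength burst, re-triggered while touching
        } else {
            vibrator.cancel();     // no contact = no vibration
        }
    }
}
```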

Motion experiments

The motion of the ABB industrial robotic arms is limited to receiving targets (6-degree-of-freedom points: 3 coordinates for position and 3 for head orientation/rotation), and the motion path between them cannot be interrupted. This means that updating the next target upon real-time variation of the driving variable (in this case the smartphone's orientation) is essentially not possible. I say essentially because the movement instructions have a parameter called "zone", which specifies how near the head of the robot needs to be to the current target for the instruction to be considered complete, at which point the robot moves on to the next one. "zone = fine" means the targets have to be reached precisely. "zone = zX", where X belongs to a set of preset numbers, allows the robot to pass merely "near" the target (how near is determined by the value of X). Upon reaching the "zone" around the target, the next instruction in the program starts being executed.

With the above information, I considered the following alternatives for improved motion, grouped mainly in two categories:

A. Better target generation
1) Low pass filtering of the orientation
2) Keep a reception buffer with at least one more target than the current instruction (for “zone!=fine” to work well)
3) Smart generation of targets based on gesture recognition

B. Optimization of movement commands parameters
1) Setting of correct speed, step size and zone.

Out of all the available options, I started with B.1. The original motion to compare against uses the fixed speed (maximum), fixed step size (usually 5-10 cm) and zone = fine of the first motion test, as shown in the video below:

The main progress in this direction came from using a variable step size in each axis, based on the magnitude of the rotation about that axis, together with a variable speed, based on the maximum of the absolute values of all rotation components. Static weights were applied so that the largest step size would be around 10 cm and the speed would vary between 0 and 100% of the robot's maximum of 200 mm/s (in the manual mode of operation, which is what I have been allowed to use). A rough sketch of this mapping follows, and the resulting motion can be seen in the video below:
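This is a minimal sketch of that mapping, not the code actually running on the system: the axis convention, the normalization by 90 degrees of tilt and the names are my own assumptions for illustration.

```java
// Sketch of the variable step/speed mapping described above (values are illustrative).
// rotation[i]: current tilt angle about axis i, in radians, roughly within [-pi/2, pi/2].
public final class StepMapper {
    private static final double MAX_STEP_MM = 100.0;    // largest step around 10 cm
    private static final double MAX_SPEED_MM_S = 200.0; // manual-mode speed limit of the robot

    // Returns {dx, dy, dz, speed}: offsets in mm for each axis plus a speed in mm/s.
    public static double[] stepAndSpeed(double[] rotation) {
        double[] out = new double[4];
        double maxAbs = 0.0;
        for (int i = 0; i < 3; i++) {
            // Step along each axis proportional to the tilt about that axis (joystick-like):
            // near-zero tilt gives almost no motion, large tilt gives steps up to ~10 cm.
            out[i] = MAX_STEP_MM * (rotation[i] / (Math.PI / 2.0));
            maxAbs = Math.max(maxAbs, Math.abs(rotation[i]));
        }
        // Speed between 0 and 100% of the robot's manual-mode maximum,
        // driven by the largest absolute rotation component.
        out[3] = MAX_SPEED_MM_S * Math.min(1.0, maxAbs / (Math.PI / 2.0));
        return out;
    }
}
```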

The resulting motion, albeit still piecewise in nature, seems much better suited to precise yet responsive control: very little motion is performed when the rotation is near 0 (the smartphone's own axes aligned with the world's coordinate system, as described here), and larger displacements are performed for higher-magnitude rotations. This resembles the way humans work on a physical piece: when precision is required, movements are slow and short/local, whereas movement between areas of precision is by definition not precise and is therefore optimized for greater speed and less accuracy. This parameter optimization was shown in the final project presentation. The precision finally reached, along with smartphone vibration upon touching the virtual walls (the "sandbox"), was shown both in free air and in a very simple demo: a pencil attached to the arm's head and a canvas layer on top of a table, on which people could draw (safely, since the "sandbox" did not allow the robot to pierce the table or the adjacent wall). A video of the presentation was taken:

Assessment

The overall goal of the semester was to take a step towards an overarching, ambitious project for my degree, related to being able to sculpt with a robotic arm. For this class the goal was to get acquainted with the workflow around the robotic arms present in dFab, the Dept. of Architecture digital fabrication laboratory. The factual outcome would be to use the same software that previous users are familiar with and to be able to connect a gestural controller and haptic-feedback-capable device to drive the robots. As stated, this was achieved, and the following was learned during the project:

  • Responsiveness: real-time driving of the robot seems to be crucial for the user to feel she/he is in actual control of the robot. The slower the response time, the harder it is to relate one's own motion to that of the robot in an intuitive way.
  • Quality of the motion: the piecewise motion that results from the way the robots are programmed (the lowest layer accessible to the user being RAPID) greatly reduces how favorably users regard the motion. "Dumb" and "robotic" were adjectives used repeatedly by users/observers. Even though good parameter choices for the motion commands helped, this is a key aspect to address in future development. There are other robots that more closely resemble human arms and allow better real-time interaction, but my degree is based in architecture and, on the practical side, I want to explore and also give dFab a creation tool that is useful and tuned to their setup, which means using the ABB robots. My intuition tells me that A.3, smart target generation, may provide the greatest improvement, and it is the next step I intend to explore in future courses.
  • Mapping: the final setup maps smartphone orientation to the position of the robot's head. While it proves the concept of gestural control, the mapping is indirect (as opposed to driving position with position, which was the original intent) and the final degree of control available to the user is far from what is desired. Presentation observers had a really tough time trying to draw on the paper provided. As it is now, the controller resembles a 3-degree-of-freedom joystick, and very likely an off-the-shelf one could be purchased that would be better in some sense. Again, my intuition tells me direct mapping (position to position and orientation to orientation) is required for "natural" control, and since my research so far indicates that standalone positioning through an IMU is not a solved problem (at least in free air) and cannot be applied to the project, it seems that externally sensed technologies, such as visual motion capture ("mocap"), are necessary.
  • Why?: the question came back again from many observers at the presentation. Since the example application was drawing, many commented on the fact that humans can draw much better than the robot did. To this I replied yes, absolutely, since a natural-feeling motion has not been achieved yet, but more importantly, the point of using an industrial robotic arm is for tasks that would be impossible or at least very difficult for a human to do directly, and cumbersome to do with a power tool, like bending/milling/etc. very hard or large materials, and doing so with precision and speed. Essentially, a big industrial robotic arm is made for high-power/high-precision/large-scale applications, so for anything that needs a very powerful hand tool, a hard-to-reach position or very high precision, a robot with the correct instructions can do better than the bare hand. I now think it is essential to somehow preserve the high-precision nature of the robot while still exploring the liveliness of human physical sculpting. A way to do this is with a mixed analog/digital instruction set, as in drawing software that mixes freeform drawing with a mouse with precise mathematical operations performed on top of it. This is tried and true for sculpting in the virtual world (any CAD software), so it is likely that some of it can be translated in a useful way to the physical world. I intend to build this mixed human-driven/software-enhanced toolkit.

Code

RAPID, Android. See the Future CNC course website and ABB's full reference for further information on RAPID.

Acknowledgments

I would like to thank Mike Jeffers, Madeline Gannon, Zack Jacobson-Weaver, Ali Momeni, Jeremy Ficca, Joshua Bard, Garth Zeglin and CMU's Manipulation Lab very much for their incredible support of this project.

Final Project Milestone 3 – Mauricio Contreras

Assignment,Final Project,Robotics,Submission,Technique — mauricio.contreras @ 10:26 pm

My original 3rd milestone had to do with connecting a haptic feedback controller to the simulation of robotic motion, which by this time had turned into real motion. The device chosen, as described before, is a smartphone, since it provides an IMU and a vibrator, all with a standard and well-proven programming API.

Limitations of IMU standalone motion tracking

My original intent was to use the device's IMU to track position and orientation, each along 3 axes, effectively providing 6 degrees of freedom. My pursuit is a very natural gestural interaction in which the robot imitates one's own hand orientation and position in space by changing its own head position and orientation. My assumption was that standalone positioning based on integrating the accelerometer's readings twice must have been solved by now, and I started searching for code. Yet, to my surprise, it seems this is not true, and the constraints lie mainly in the double integration: a small constant error (bias) in the measured acceleration becomes a linearly growing velocity error after the first integration, and a quadratically growing position error after the second. The drift produced by most algorithms (at least the ones available on the web) is on the order of tens of centimeters per second, which is completely unusable for the application in mind. The case of orientation is completely different, because only one integration is needed, and at any given time there are two reference vectors against which to correct: gravity and magnetic north. To sum up, whereas one can get very accurate orientation from an IMU, standalone linear positioning is still very much a work in progress, the underlying reasons being physical more than technological.
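To make the problem concrete, here is a tiny, purely illustrative Java sketch of what a small constant accelerometer bias does under double integration (the 0.05 m/s² bias and 100 Hz rate are assumptions, and rather optimistic ones for a phone IMU):

```java
// Illustrative only: integrate a constant accelerometer bias twice and watch position drift.
public class DriftDemo {
    public static void main(String[] args) {
        double bias = 0.05; // m/s^2, constant error in the measured acceleration
        double dt = 0.01;   // 100 Hz sampling
        double v = 0, x = 0;
        for (int i = 1; i <= 1000; i++) {   // simulate 10 seconds
            v += bias * dt;                 // first integration: velocity error grows linearly
            x += v * dt;                    // second integration: position error grows quadratically
            if (i % 100 == 0) {
                System.out.printf("t = %2.0f s, position error = %.2f m%n", i * dt, x);
            }
        }
        // After 1 s the error is already ~2.5 cm; after 10 s it is ~2.5 m.
    }
}
```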

This immediately cut 3 degrees of freedom from my ideal application, and the most important ones at that (the assumption being that one can probably use tricks to change the head's orientation but should use real degrees of freedom for its position, rather than the other way round; this is pure intuition, though). I faced the decision of changing technology to visual tracking or keeping the IMU, now only for orientation. Even though Kinect-based motion tracking seems to be pretty plug & play these days, I had no previous experience with it and decided the semester was too far along to risk a setback such as not being able to show anything functional at the end, whereas I was already somewhat acquainted with the smartphone workflow I had developed. I decided to stay on this path.

Orientation based linear motion control, first tests

I devised a TCP/IP socket based client (Android smartphone) – server (robot controller) application. It uses the smartphone's orientation (a software sensor provided by Android that fuses the raw information from the accelerometer, gyroscope and magnetometer/compass) about each axis to generate steps that offset the robot's head position along each axis.
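A minimal sketch of the Android client side is shown below. It is illustrative rather than the actual code: the controller address, port and comma-separated message format are assumptions, and a real app would handle threading and errors more carefully.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

import java.io.PrintWriter;
import java.net.Socket;

// Sketch of the Android client: read the fused orientation (rotation vector sensor)
// and stream the three angles to the robot controller over a TCP socket.
// ROBOT_IP, ROBOT_PORT and the "azimuth,pitch,roll" line format are hypothetical.
public class OrientationClient extends Activity implements SensorEventListener {
    private static final String ROBOT_IP = "192.168.0.10"; // hypothetical controller address
    private static final int ROBOT_PORT = 5000;            // hypothetical port

    private SensorManager sensorManager;
    private volatile PrintWriter out;
    private final float[] rotMat = new float[9];
    private final float[] orientation = new float[3];

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        // Open the socket off the UI thread.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Socket socket = new Socket(ROBOT_IP, ROBOT_PORT);
                    out = new PrintWriter(socket.getOutputStream(), true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();
        Sensor rotVec = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotVec, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Android fuses accelerometer, gyroscope and magnetometer into this software sensor.
        SensorManager.getRotationMatrixFromVector(rotMat, event.values);
        SensorManager.getOrientation(rotMat, orientation); // azimuth, pitch, roll in radians
        if (out != null) {
            out.println(orientation[0] + "," + orientation[1] + "," + orientation[2]);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```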

The motion result was pretty much just as "cut" as what I had obtained with hardcoded targets, which left me with a feeling of disappointment. See the video below, and please notice how unnatural this piecewise movement feels.

This piecewise motion is not related to the smartphone input, but to the way the robot is controlled. Through the trials, through becoming acquainted with people who have done extensive work with the robots (Mike Jeffers, Madeline Gannon, Zack Jacobson-Weaver, Ali Momeni, Jeremy Ficca, Josh Bard and Kevyn McPhail, up to then) and through online research, I came to understand that ABB robots at least, being developed for industrial use, are aimed at rigid precision. This means motion commands are based on targets and are not meant to be interrupted mid-way, which is exactly what responsive gestural control requires (real-time interrupts). The next and final milestone shows how I dealt with this limitation.

Final Project Milestone 2 – Mauricio Contreras

Assignment,Final Project,Robotics,Submission,Technique — mauricio.contreras @ 10:02 pm

Log

My second milestone was about simulating the motion of a robotic arm within the software workflow that had been explored in the first milestone (Rhino + Grasshopper + HAL). Upon getting acquainted with the capabilities of these pieces of software and understanding more about the possible constraints and needs of the instrument, I realized that real-time driving of the robotic arm was a major requirement. Just imagine sculpting with your arm moving with a few seconds of lag after your movement intention and you will see why. The software workflow described above is great for offline materialization of 3D designs, but not necessarily for real-time control. Even though it is feasible, people from the lab commented about possible lag issues, which made me want to try out the real motion of the robot, even with simple commands, as soon as possible.

I found procuring the tools to run on my own machine rather difficult: all of them are Windows only, so at first I got a virtual machine from Ali Momeni with everything preloaded, but it ran excruciatingly slowly (even after I changed my computer to the latest MacBook Pro). I then tried creating my own virtual machine from scratch, and installed Rhino and Grasshopper successfully. Yet HAL's developer webpage was down and I had problems procuring tutorial training for it. When I asked for help with this, people recommended learning the former two tools first and then moving on to HAL. This seemed reasonable, but I was under time constraints (by choice) to test the robot's motion as soon as possible with a configuration that would generate the least lag, and to evaluate whether that optimal setup would prove responsive enough for the target application of the instrument, which is sculpting.

Early motion tests

I then turned to writing my own RAPID code, and was quickly able to generate a routine to move the head of the robot in a square in the air, as shown in the following video.

The routine was based on offsetting the current location by steps along each axis, but also waiting for a digital input state before each small step. Since the robot accepts 24 V digital inputs, I would have had to use a power source or build a conversion circuit from a standard microcontroller's 5/3.3 V outputs. That is not difficult, but I assumed that the robot's DIs had pull-down resistors and simply made it wait for DI = 0 before each motion. Since the shape was completed, that hypothesis was confirmed. Also, the motion seemed "cut", as if doing start-pause-restart at every step, as opposed to the seamless continuous motion that would have occurred if the lag in processing the digital input were very low, or depending on the motion configuration of the robot (i.e. there may be other motion commands that would output less of a "cut" motion). I removed the wait-for-DI statements with no appreciable effect, hence the motion commands were the issue. To see the effect of this when driving the robot with gestures, and based on previous code for motion (FUTURE CNC LINK), I started writing a TCP/IP socket based client (Android smartphone) – server (robot controller) application, which is outlined in the next milestone posting.

Assessment

Up to this milestone, I consider just getting access to the robots themselves and being able to move them in a hardcoded fashion a success in itself, yet it is clear that new, unforeseen difficulties have appeared.

Final Project Milestone 1 – Mauricio Contreras

Assignment,Final Project,Robotics,Submission,Technique — mauricio.contreras @ 11:45 am

My first milestone was to procure all the software tools necessary for at least simulating the motion of a robotic arm within a framework that has previously been used by Ali Momeni. Namely, this means interacting with Rhinoceros 3D, the Grasshopper and HAL plugins, and ABB RobotStudio. I now have all these pieces of software up and running in a virtual image of Windows (of the above, only Rhino exists, as a beta, for OS X) and have a basic understanding of all of them. I had a basic command of Rhino through previous coursework, and have now done tutorials for Grasshopper from digitaltoolbox.info and followed the Future CNC website for HAL and RobotStudio. I have CAD files that represent the geometry of the robot, can move it freehand in RobotStudio and am learning to rotate the different joints in Rhino from Grasshopper.

Update (12/11/2013): added the presentation used on the day of the milestone critique.

Final Project Proposal – Mauricio Contreras

Assignment,Mid-Semester Report,Submission — mauricio.contreras @ 1:13 pm

Group Project: Multi-Channel analog video recording system (part 2)

Arduino,Assignment,Submission — mauricio.contreras @ 2:17 pm

Overview

Eight video cameras present eight different views into a dynamic world. They can be oriented in a number of ways, including inward toward or outward from a subject.

Our basic setup is a box with the eight cameras arranged along the top. Cables run from the cameras to the base of the box, where they are connected to power sources and a video multiplexer. The base also contains an Arduino, which is used to control the mux. Power for the mux is supplied via the Arduino. A basic diagram of this setup can be seen below (this setup matches our “Fat Shark” application, as described in this post, but can also be generalized).

Hardware Details

Cameras

These cameras are generic analog mini cameras you can buy on the internet or steal from the artfab lab. They can be fed with 9-12 V, and come with a 3-conductor cable: V+, GND and video.

 

Camera boxes

The camera boxes were constructed out of MDF. There is nothing special about the design except that there are holes to allow the camera to poke out as well as for the cord to come in. A good place to create a box is here: boxmaker.rahulbotics.com/.
Our camera cords are secured inside by foam padding and a zip-tie.

 

Open-beam structure

OpenBeam is an aluminum extrusion system that gives the frame its rigidity: openbeamusa.com/

 

Cords

Each camera has power input and video output. We used three 1-to-4 power splitters to distribute  power from a single 9V source to the 8 cameras and other components. The video output eventually terminates as RCA to connect to the video MUX.

 

Mux

The mux takes 8 analog video inputs, a selector input, an enable input and a power source, and produces 1 analog video output. This board is a collaboration between Ray Kampmeier and Ali Momeni, and more information can be found at www.raykampmeier.net

 

Arduino

The Arduino controls the mux selector, either by being programmed to switch channels on its own or by following an external controller, e.g. a computer or a phone sending OSC. Any Arduino would do.

 

Immersion RC

The video output can be routed to a wireless video transmitter that takes in a video input and power. Their website is www.immersionrc.com

 

Fat Shark

The goggles with screens inside: www.fatshark.com

 

Code

All our code and documentation are located on GitHub: github.com/sbarton272/VideoMux.

Looking Outwards: Fat Shark

As shown in the diagram, the video multiplexer is connected to the Immersion RC transmitter, which sends a radio signal to the Fat Shark, where the single output video appears in the goggles. The Fat Shark is a very interesting device because it lets you see video from viewpoints you could not traditionally occupy. As of now, the cameras are fixed in place on the OpenBeam structure. The structure is robust enough to be moved, which allows a wide range of possibilities for where the system is located.

Looking Inwards

The camera mounts allow us to rotate the cameras inwards. For this application, we place an object in the middle and use the eight cameras to view it from different angles. In order to hold the object in place, we cut out a square piece of masonite that rests on top of four screws and can be moved up and down depending on the size of the object. As in the Looking Outwards application, the video selection is controlled from the phone through TouchOSC.

Third Application

We took the outward-facing camera setup and did a few shoots using our new compass controller. Here are the results:

Experiment 1:

 

Experiment 2:

 

Fourth Application: An Image Capture System for 8 Cameras with Different Angles.

The aim of this project is to build an image capture system for the 8 cameras in our camera box. The capture system is composed of a real-time processing environment, Pure Data, and an open-source hardware platform, Arduino. The function is simple: first, we made a connection between the video multiplexer and the Arduino, so we can control and choose which camera and angle we want to use. Then, a Pure Data patch for this combination (Arduino and video multiplexer) provides a GUI that lets the user choose a camera and capture and save images to the PC. These images can then be used to make a 3D scan or a multi-view photo such as a panorama.


 

Group members

  • Mauricio Contreras
  • Spencer Barton
  • Patra Virasathienpornkul
  • Sean Lee
  • JaeWook Lee
  • David Lu

Group Project: Multi-Channel analog video recording system (part 1)

Arduino,Assignment,Submission — mauricio.contreras @ 5:31 pm

The project is centered around an 8-to-1 analog video multiplexer board. This board is a collaboration between Ray Kampmeier and Ali Momeni, and more information can be found here.

In the present setup, 8 small analog video cameras (“surveillance” type) are connected as inputs to the board, and the output is connected to a monitor. The selection of which of the 8 inputs gets routed to the output is done by an Arduino, which maps the reading of a distance sensor to a value between 1 and 8. Thus, one can cycle through the cameras simply by placing an object at a certain distance from the sensor. The connection diagram can be seen below:

A picture of the board with 8 inputs is displayed below (note the RCA connectors):

Video Multiplexer board

The original setup built for showcasing the project uses a box shaped cardboard structure to hold a camera in each of its corners, with the cameras pointing at the center of the box (see below).

Initial setup for 8 cameras

A simple “shield” board was designed to facilitate the interface between the Arduino, the distance sensor and the video mux.

Arduino shield

Improvements

The current camera frame is made of cardboard, which is not the most robust of materials. A new frame will be constructed of aluminum bars assembled into a strong cube:

This is the cube being assembled:


The camera wires will be routed away from the box to a board for further processing.

Project Ideas

1) Jigsaw faces

Our faces hold a universal language. We propose combining the faces of eight people to create a universal face. Eight cameras are set up, one per person. The participants place their faces through a hole in a board so that the camera only sees the face. These set-ups are arranged in a circle so that all the participants can see each other. The eight faces are recorded, and a section of face is selected from each person to combine into one jigsaw face. This jigsaw face is projected so that the participants can see it, completing the feedback loop. The jigsaw face updates in real time, so as the participants share an experience, their individual expressions combine in the jigsaw face.

The Jigsaw Face will consist of a few boards (of wood) for people to put their faces through. Each board will have a camera attachment and all the cameras will be attached to a central processor. The boards will be arranged so that people face each other across a circle. This will enable feedback among the participants.


2) The well of time: a time-traveling instrument

I have been thinking about the meaning of 8 cameras, why we need them, and what original things we can do with them. I assume that 8 cameras mean 8 different views, which could also be 8 distinguishable points in time. From there, I realized we can present a moment and situation that mixes times: a user's present image from each camera combined with an old photo from the past, triggered by a motion or distance sensor attached to each camera.

Here is a sample image of my thought. This image combines the present moment with the time when the computer science building was being constructed.


And if a user stands in front of another camera, we could present an image like the one below: the moment the propeller steamboat was first introduced. Basically, this is an artistic way of time traveling, so our limitation to black-and-white cameras would not be a problem; in this case, it could even be a benefit.


For this idea, of course, we have to build a situation and an installation that looks like this.


 

More precisely, it has this kind of structure.
As a result, here is an example interaction scenario with a user, and a diagram.

3) Object interventions

Face and body expressions can tell a lot about people and their feelings towards their surroundings or the objects they interact with. Imagine attaching eight cameras to a handheld object or a large sculpture. With eight cameras as inputs, the single output will be the video from the camera activated by a person interacting with that specific area. The object can range from a small handheld object like a Rubik's cube to a large sculpture at a playground. We think it will be very interesting to see the changes in face and body expressions as a person gets more (or less) comfortable with the object. It might be more interesting to hide the cameras so that users are less conscious of their expressions, because they do not realize they are being filmed. That will be harder with a large public sculpture, but we can prototype a handheld object where we design specific places for the cameras so that they cannot be seen.

4) Body attachment

When we see the world, we see it from our eyes. Why not view the world from our feet? Perceptions can change with a simple change in vantage point. We propose placing cameras on key body locations (feet, elbows, knees and hands) in order to view the world from a new vantage point as we interact with our surroundings. The cameras would be attached via elastic, with wires routed to a backpack for processing.

Group members:

  • Mauricio Contreras
  • Spencer Barton
  • Patra Virasathienpornkul
  • Sean Lee
  • JaeWook Lee
  • David Lu

Assignment 2: “DialToneMadness” by Mauricio Contreras (2013)

Arduino,Assignment,Audio,Hardware,Sensors,Software,Submission — mauricio.contreras @ 9:38 pm

DialToneMadness is an instrument that generates audio tones whose frequency and period of repetition can be altered by proximity. It is based on both the Android and Arduino development platforms. It uses an ultrasonic proximity sensor to measure the distance to the "triggering" object, whatever that may be. The sensor is triggered and read by the Arduino, and the reading is then sent to the smartphone running Android, which produces a tone based on it. For the aesthetic of the piece, "retro" DTMF tones were used, in contrast with the high-end smartphone producing them. This gives the audio output an interesting twist, since each of these sounds contains multiple frequencies and they are not simply ordered from lower to higher pitch. The smartphone responds with a single tone per message sent by the Arduino, hence the period of the repetitions is controlled by the latter, and is a linear function of the distance.
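For reference, the tone side on Android boils down to something like the following sketch (illustrative; how the Arduino's distance messages arrive, the onDistance entry point and the distance-to-digit mapping are assumptions of mine, not the exact code):

```java
import android.media.AudioManager;
import android.media.ToneGenerator;

// Sketch of the tone side of DialToneMadness: each message from the Arduino
// carries a distance reading, and the phone answers with one short DTMF tone.
public class DialToneMadness {
    private final ToneGenerator toneGen =
            new ToneGenerator(AudioManager.STREAM_MUSIC, ToneGenerator.MAX_VOLUME);

    // DTMF digits 0-9; which digit plays depends on the measured distance.
    private static final int[] DTMF = {
            ToneGenerator.TONE_DTMF_0, ToneGenerator.TONE_DTMF_1, ToneGenerator.TONE_DTMF_2,
            ToneGenerator.TONE_DTMF_3, ToneGenerator.TONE_DTMF_4, ToneGenerator.TONE_DTMF_5,
            ToneGenerator.TONE_DTMF_6, ToneGenerator.TONE_DTMF_7, ToneGenerator.TONE_DTMF_8,
            ToneGenerator.TONE_DTMF_9
    };

    // Called once per message received from the Arduino (distance in cm, assumed 0-100).
    // The repetition rate is set by how often the Arduino sends messages;
    // here we only pick which DTMF "digit" to play from the distance.
    public void onDistance(int distanceCm) {
        int clamped = Math.max(0, Math.min(99, distanceCm));
        toneGen.startTone(DTMF[clamped / 10], 80); // one short tone per message
    }
}
```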
 

 

Instrument: “Fundawear” by Durex Australia (2013)

Assignment,Instrument,Reference,Submission — mauricio.contreras @ 4:52 pm



Instrument: “I/O Brush” by Kimiko Ryokai, Stefan Marti and Professor Hiroshi Ishii (2004)

Assignment,Instrument,Reference,Submission — mauricio.contreras @ 4:45 pm



This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.