Cat as Robot

Reference,Theory — Robb Godshaw @ 3:48 pm

Reminds me of Braitenberg vehicles.

Project Paradise – 1998

Artists,Robotics,Theory — Robb Godshaw @ 1:28 am


Project Paradise is a very early exploration of issues surrounding telepresence and telexperience. Two participants enter booths in a gallery setting. Each booth is equipped with a ringing telephone and a small monitor. When the participant answers the call, they are greeted by a kind voice that explains the interaction. Using the phone’s keypad, they can control a naked person in a distant jungle setting. Both nude avatars are able to stroke and jab one another via the various telephone-controlled motors attached to their limbs. Watch the video; this description does it no justice.

Wow. What fun. I am struck by the complexity they are able to glean from such simple analog electronics. Relays, motors, and CCTV combine to shake the participants’ perception and role as gallery-goers. Is it socially acceptable to caress the naked flesh of a stranger? Who is held responsible for the action, the decision-maker or the limb-owner? Is the gesture of affection effectively carried through the medium of touch-tone and CCTV? If so, how far does it travel? All the way from participant to participant seems unlikely; there would hardly be lossless transmission of affection after all of those state changes.

Instantly stripped from the gesture are the bodies of each participant. Eye contact, body language, warmth, physical beauty, even gender are removed at once. Intent, and the recognition of intent, may be all that remain from the point of view of the opposing participants. From the point of view of our avatars, the situation is the opposite. Present are their bodies and all that they entail. Eyes, genitals, apparent beauty, and history inform their experience. These are individuals who have spent many hours caressing one another nude, yet their lengthy exchange of contact lacks a very important facet of intimacy: intent. Neither party is responsible for their actions, being frequently reminded of this by the cold, loud apparatus that is driving the contact. The avatars supplement the motors by making small movements that carry on the intent of the limited mechanics. They smile, rub, and appear to be having what is clearly a spritely, fantastic time.
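To underline just how little logic that touch-tone chain needs, here is a toy software sketch of keypad-driven limb motors. The digit-to-motor mapping, the Motor class, and the on_keypress handler are all invented for illustration; the original piece used analog relays, not code.

# Toy sketch of keypad-driven limb motors, in the spirit of Project Paradise.
# The 1998 piece used telephone relays and analog electronics; this mapping
# from touch-tone digits to motors is entirely invented.

class Motor:
    def __init__(self, name):
        self.name = name

    def pulse(self, direction):
        # Stand-in for energizing a relay for a short burst.
        print(f"{self.name}: nudge {direction}")

left_arm = Motor("left arm")
right_arm = Motor("right arm")

# Hypothetical mapping: each touch-tone digit drives one limb motor.
KEYMAP = {
    "1": (left_arm, "up"),
    "2": (left_arm, "down"),
    "3": (right_arm, "up"),
    "4": (right_arm, "down"),
}

def on_keypress(digit):
    """Route a DTMF digit from the gallery phone to the avatar's motors."""
    if digit in KEYMAP:
        motor, direction = KEYMAP[digit]
        motor.pulse(direction)

for d in "1321":
    on_keypress(d)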

The Centre for Metahuman Exploration – Field Robotics Center – 1998

Carnegie Mellon University


relevant

Bio-inspired,Machine Vision,Theory — isla @ 4:33 am

Demonstrates a phenomenon, relevant to robotics I think, that falls somewhere in the theoretical Bermuda Triangle cornered by (1) Lacan’s concept of the mirror stage, (2) that folk-psychology article we read, and (3) Alfred Hitchcock.

WHERE DO ROBOTS BELONG?

 

The following work by Matthew Hebert (posted below) relates to a discussion Adam, Dakotah, Rob, and I had regarding where art belongs…. I think we decided that, eventually, inevitably, it always seems to end up, as all life does, buried in a landfill somewhere. Personally, I don’t mind if stuff I make ends up in the garbage. But I don’t really want to get into a discussion about whether art is “wasteful” or not, or whether it should be “useful” or not.

Instead, let’s just check out this project that might excite Adam, since it combines robotics with design & “utilitarian” shit for your home… you know, furniture.

 

This table is kind of “whimsical” (in a when-robotics-hits-Crate-&-Barrel sort of way?). But the designer is obviously a theory dork (<- no negative connotation), since here we see one of Braitenberg’s vehicles! Maybe 2a style, mentioned on p. 6? Though you might not be able to tell from this not-very-revealing video, these little robots, imprisoned between two sheets of glass, move in the sun and stay still in the “shade.” Their motors are most likely attached to light sensors. This creates a nice effect when you put something down on the coffee table, since they will flock to it and hide under it. Would I put this in my home if someone gave it to me? Sure. (But as Bob Bingham would ask, “Is it art yet?”)
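For reference, a minimal sketch of what the table robots’ wiring probably amounts to, assuming each motor’s speed simply tracks a light reading. The read_light and drive functions are placeholders, not Hebert’s actual hardware.

# Minimal sketch of the table robots' presumed logic: each wheel's speed is
# proportional to a light reading, so the robots wander in bright light and
# stall in shade (e.g. under an object placed on the glass).

import random

def read_light(side):
    # Placeholder for a photoresistor reading, 0.0 (dark) .. 1.0 (bright).
    return random.random()

def drive(left_speed, right_speed):
    print(f"wheels: L={left_speed:.2f} R={right_speed:.2f}")

GAIN = 1.0

def step():
    left = GAIN * read_light("left")
    right = GAIN * read_light("right")
    # In shadow both readings drop toward zero, so the robot simply stops.
    drive(left, right)

for _ in range(3):
    step()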

Here’s another piece based on simple Braitenberg architectures: a bench that moves itself into the sun (using light sensors on the front, back, and both sides, as well as a microcontroller). These benches have solar panels on their seats that charge their battery (except, I guess, when someone’s sitting on one… hmmm….) Watch out, this video is rather lengthy.

[Do we always have to use that Strauss composition from 2001 when introducing a monolithic design?][yes]
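And a similar sketch for the bench, assuming the microcontroller just compares the four light sensors and creeps toward the brightest side. The sensor names, the threshold, and the move command are assumptions, not the designer’s firmware.

# Sketch of the bench's presumed sun-seeking rule: compare four light sensors
# (front, back, left, right) and drive toward the brightest one.

import random

def read_sensors():
    return {side: random.random() for side in ("front", "back", "left", "right")}

def move(direction):
    print(f"bench drives: {direction}")

def step(threshold=0.05):
    levels = read_sensors()
    brightest = max(levels, key=levels.get)
    others = [v for side, v in levels.items() if side != brightest]
    # Only bother moving if that side is meaningfully brighter than the rest;
    # otherwise stay put and let the solar panels charge the battery.
    if levels[brightest] - max(others) > threshold:
        move(brightest)
    else:
        move("stay")

step()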

 

Coming from the “art” perspective: I think these projects could be more interesting if they complicated the nature of Braitenberg architectures, perhaps simultaneously complicating the notion of utilitarian furniture. What if these devices were structured not to be useful? If this furniture made use of slightly extended models of Braitenbergian forms (see the Lambrinos / Scheier article), the emergent behaviors might appear more complex. This could get really weird and interesting if we’re talking about furniture that reacts to human use. Incorporating “artificial” learning, or the type of seemingly socially intelligent behaviors discussed in the folk-psychology article we read, might turn a table or a chair into something we really have to think about interacting with…. Heidegger would go bananas.

 

And last, this Hebert guy takes a stab at “art” !!

After all, if there’s one way to be SURE you’re making art …. it’s by putting it in a museum!

This apparently was a commission from the San Diego Museum of Art in 2011 for a weekly series themed around the topic of “what a city needs.” Here, Hebert says he is approaching this theme “from an interest in power infrastructure and its critical importance to the city,” in relation to the geographic remoteness of most of those forms of power (which apparently is especially true in San Diego). Hebert took public-domain models from the Google SketchUp library, 3D printed them in ABS plastic, wired electronics to them, and placed them in the museum in what we MIGHT call “non-traditional” locations. Sounds like a well-followed recipe right out o’ the ol’ “art” cookbook to me!


Robotic Quintet Composes And Plays Its Own Music

This robotic quintet, created by Festo, listens to a piece of music, breaking each note down into pitch, duration, and intensity. It then feeds that information into algorithms derived from Conway’s “Game of Life” and creates a new composition, with the robots listening to one another to produce an improvised performance. Conway’s “Game of Life,” put simply, is a 2D grid environment where cells (pixels) live or die based on rules about their neighboring cells.

They are:
Any live cell with fewer than two live neighbours dies, as if caused by under-population.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overcrowding.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

This algorithm tends to evolve over time and was created in an attempt to simulate life.
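For the curious, here is one update step of those four rules in code, applied to a tiny grid (the classic “blinker” pattern):

# One update step of Conway's Game of Life, implementing exactly the four
# rules listed above on a small grid of 0s (dead) and 1s (alive).

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += grid[rr][cc]
        return total

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = neighbours(r, c)
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0   # survival / death
            else:
                new[r][c] = 1 if n == 3 else 0        # reproduction
    return new

# A "blinker": three live cells that oscillate between a row and a column.
grid = [[0, 0, 0],
        [1, 1, 1],
        [0, 0, 0]]
print(life_step(grid))   # -> the middle column is now the live one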

This robot essentially mimics how composers take a musical motif and evolve it over the life of a piece. The robot sets the sensory information from the music played to it as the initial condition, or motif, and lets the algorithm change it. Since Western music is highly mathematical, robots are naturals. I would say this robot has more of the characteristics of human/animal behavior in Wiener’s example of the music box and the kitten: unlike the music box, this robot performs in accordance with a pattern, yet that pattern is directly affected by its past.
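As a purely hypothetical illustration of that motif-as-initial-condition idea (Festo’s actual encoding of pitch, duration, and intensity isn’t described here), one could seed a pitch-versus-time grid from a short motif, run a Game of Life step, and read a new motif back out:

# Hypothetical illustration only: encode a motif as live cells on a
# pitch-vs-time grid, evolve it with the same four rules, and read back
# whatever notes survive. The pitch-to-row mapping is invented.

PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def motif_to_grid(motif, length=8):
    grid = [[0] * length for _ in PITCHES]
    for t, note in enumerate(motif):
        grid[PITCHES.index(note)][t] = 1
    return grid

def life_step(grid):
    R, C = len(grid), len(grid[0])

    def nbrs(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < R and 0 <= c + dc < C)

    return [[1 if (nbrs(r, c) == 3 or (grid[r][c] and nbrs(r, c) == 2)) else 0
             for c in range(C)] for r in range(R)]

def grid_to_motif(grid):
    # Each time step (column) yields whatever pitches are still alive.
    return [[PITCHES[r] for r in range(len(grid)) if grid[r][t]]
            for t in range(len(grid[0]))]

seed = motif_to_grid(["C4", "E4", "G4", "E4", "C4"])
# A sparse seed like this thins out quickly; a real encoding would fold in
# duration and intensity to keep denser, longer-lived patterns.
print(grid_to_motif(life_step(seed)))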

FIRST UNICORN ROBOT! (Converses with “female”)

 

[pardon my screenshot bootleg, sound is pretty bad… go to the link!]

 

“First ‘chatbot’ conversation ends in argument”

www.bbc.co.uk/news/technology-14843549

 

This is an interesting example of robot interaction. Two chatbots, having learned their chat behavior over time (1997 – 2011!) from previous conversations with human “users,” are forced to chat with each other. This BBC video probably highlights what we might consider the “human-interest” element of the story, such as the bots’ discussion of “god” and “unicorns,” as well as their so-called “argumentative” sides, supposedly developed from users. With these highlights as examples, it does seem like fairly convincing proof that learning from human behavior… makes you sort of human-like! This type of “artificial” learning or evolution is really interesting, as it reflects back what we choose to teach the robots we are using: we really can see that these chatbots have had to live most of their lives on the defensive. I would like to see unedited footage of the interaction; I am sure some of their conversation is a lot more boring.

I noticed that the conversation tends towards confusion or miscommunication, almost exemplifying the point about entropy that Norbert Wiener makes (p. 20–27): that the information carried by a message can be seen as “the negative of its entropy” (as well as the negative logarithm of its probability). And yet, just as it seems the conversation might spiral into utter nonsense (and maybe it does, who knows, this might be some clever editing), the robots seem to pick up the pieces and realize what the other is saying, sometimes resulting in some pretty uncanny conversational harmony about some pretty human-feeling topics. Again, if we saw more of this chat than what became part of a news story, I wonder whether the conversation might slip more frequently into moments of entropic confusion. I think those moments of entropy can tell us as much about the bots’ system of learning as their moments of success (as Heidegger / Graham Harman might say, we only notice things when they’re not working… though I kinda like Lil Wayne’s version from “We be steady mobbin”: if it ain’t broke, don’t break it).
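Wiener’s point in plain numbers, with made-up probabilities: the information carried by a reply is the negative log of its probability, so a predictable pleasantry carries almost nothing while a surprising unicorn tangent carries a lot.

# Self-information = -log2(probability). The probabilities are invented
# purely to illustrate the contrast.
import math

for reply, p in [("predictable pleasantry", 0.9),
                 ("surprising unicorn tangent", 0.01)]:
    print(f"{reply}: {-math.log2(p):.2f} bits")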

If we view chatbots as an analogue to the types of outside-world-sensing robots we are trying to build, only with words as both their input and output, this seems to show that they really are capable of the type of complex, feedback-controlled learning that Wiener suggests (p. 24) and that Alan Turing was gearing up for. This experiment is not unlike the really amazingly funny conversation in the Hofstadter reading between “Parry” (Colby), the paranoid chatbot, and “Doctor” (Weizenbaum), the nondirective-therapy psychiatrist chatbot (p. 595). So, actually, BBC’s claim that this was the “first chatbot conversation” isn’t quite right…

Nonetheless, perhaps an experiment worth trying again on our own time?
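If we do try it, the structure is simple enough to mock up: wire two bots together so that each one’s output becomes the other’s input. The two rule-based bots below are trivial stand-ins, nothing like Cleverbot; only the feedback loop is the point.

# Toy version of the experiment: two hand-written bots talk to each other.
import random

def bot_a(heard):
    # Gets a little mystical, like the bots in the video.
    if "?" in heard:
        return "Why do you ask?"
    return random.choice(["I am not a robot. I am a unicorn.",
                          "Do you believe in God?"])

def bot_b(heard):
    # Gets argumentative, also like the bots in the video.
    if "unicorn" in heard.lower():
        return "Unicorns are not real. Are you?"
    return random.choice(["You are wrong.", "Together we are robots?"])

message = "Hello there."
bots = [("A", bot_a), ("B", bot_b)]
for turn in range(6):
    name, bot = bots[turn % 2]
    message = bot(message)      # each reply feeds the other bot next turn
    print(f"{name}: {message}")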

13.01.29: Cybernetics: Lecture Outline

Cybernetics,In-Class,Reference,Robotics,Theory — Ali Momeni @ 12:00 am