The complexities of vague simplicity

Bio-inspired,Scientific,Technique — Robb Godshaw @ 1:28 pm

Through the implementation of simple rules, elegant, humanlike behaviors appear to emerge in Braitenberg’s little robots.
His notion that artificially implementing natural selection on a table full of robots might yield better robots than careful design got my attention. In cases of simple insect-like behavior, this would be an ideal and elegant way of solving one’s problem. With the goal of survival in certain conditions and the limited resources of simple logic, it would seem best to use this randomly seeded, naturally culled robot group as a starting point. But the tremendous amount of time and resources it would take to build, and then let die, a sufficient quantity of robots is the obvious disadvantage. Also, you would end up with a robot that could survive the conditions of the table indefinitely, but you would have no way of knowing how or why, nor would you be certain it could survive in similar conditions. Mother Nature had all the time in the world to get it wrong again and again. Humans are in a hurry. Humans should probably stick to engineering.
Candidate robot teeters on edge of cliff, hoping to be fit for habitation of infinite white plane.
Image is stolen clip art, Blur is watermark.

As for the emotive qualities of lifelike things, there is a lot to be gleaned from insects and other simple organisms. Their nervous systems may be low-level enough to practically replicate with current robotic technology. Persson outlines the frank notion that since we are far from being able to actually recreate intelligence in robots, we may as well get good at tricking humans. I find the practicality of this stance somewhat refreshing in a world of missed expectations and lofty aspirations. We’ll get there, relax everybody.

This lil roach bot sure does capture the gesture of the desperate clawing of an insect. Despite the fact that it is being controlled by a human, I instantly ascribe desire and desperation to the mechanical device. Curious.

The Discontented Robot

This little device made by David Bowen must be a version of Braitenberg’s vehicles that is attracted to what it senses (either 2b or 4a). The nice thing about this little bot is that it synthesizes its own power from the source it is attracted to. The setup is slightly different in that the object of desire is out of reach, so the bot ends up circling the light source, never satisfied.

WHERE DO ROBOTS BELONG?

 

The following work by Matthew Hebert (posted below) relates to a discussion Adam, Dakotah, Rob and I had regarding where art belongs…. I think we decided that, eventually, inevitably, it seems to always end up, as all life does, buried in a landfill somewhere. Personally, I don’t mind if stuff I make ends up in the garbage. But I don’t really want to get into a discussion about whether art is “wasteful” or not, or whether it should be “useful” or not.

Instead, let’s just check out this project that might excite Adam, since it combines robotics with design & “utilitarian” shit for your home… you know, furniture.

 

^    This table is kind of “whimsical” (in a when-robotics-hits-Crate-&-Barrel sort of way?). But the designer is obviously a theory dork (<- no negative connotation), since here we see one of Braitenberg’s vehicles!  Maybe 2a style, mentioned on p.6?  Though you might not be able to tell from this not very revealing video, these little robots, imprisoned between two sheets of glass, move in the sun, and stay still in the “shade.” Their motors are most likely attached to light sensors. This creates a nice effect when you put something down on the coffee table, since they will flock to it and hide under it. Would I put this in my home if someone gave it to me? Sure. (But as Bob Bingham would ask, “Is it art yet?”)
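The wiring behind tables like this is simple enough to sketch in a few lines. Below is a minimal, hypothetical model of the two canonical two-sensor Braitenberg wirings (the function name and the 0.0–1.0 sensor scale are my assumptions, not anything from Hebert’s actual firmware):

```python
def vehicle_step(left_light, right_light, crossed=False):
    """Map two light readings (0.0-1.0) to two motor speeds.

    crossed=False -> vehicle 2a ("fear"): each sensor drives the motor on
    its own side, so the brighter side pushes harder and the vehicle turns
    away from the light.
    crossed=True  -> vehicle 2b ("aggression"): each sensor drives the
    opposite motor, steering the vehicle toward the light.
    """
    if crossed:
        left_motor, right_motor = right_light, left_light
    else:
        left_motor, right_motor = left_light, right_light
    return left_motor, right_motor

# In shade both readings are near zero, so both motors stop and the robot
# sits still -- which matches the coffee-table behavior described above.
print(vehicle_step(0.9, 0.2, crossed=False))  # (0.9, 0.2): veers away from the bright left side
```

Either wiring would produce the “move in the sun, hide in the shade” effect; which one Hebert used I can’t tell from the video.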

Here’s another piece based on simple Braitenberg architectures: a bench that moves itself into the sun (using light sensors on the front, back, and both sides, as well as a microcontroller). These benches have solar panels on their seats that charge their batteries (except, I guess, when someone’s sitting on one…hmmm….)   Watch out, this video is rather lengthy.

[Do we always have to use that Strauss composition from 2001 when introducing a monolithic design?][yes]

 

Coming from the “art” perspective: I think these projects could be more interesting if they complicated the nature of Braitenberg architectures, perhaps simultaneously complicating the notion of utilitarian furniture. What if these devices were structured not to be useful? If this furniture made use of slightly extended models of Braitenbergian forms (see the Lambrinos / Scheier article), the emergent behaviors might appear more complex. This could get really weird and interesting if we’re talking about furniture that reacts to human use. Incorporating “artificial” learning, or the type of seemingly socially intelligent behaviors discussed in the article we read about folk psychology, might turn a table or a chair into something we really have to think about interacting with…. Heidegger would go bananas.

 

And last, this Hebert guy takes a stab at “art” !!

After all, if there’s one way to be SURE you’re making art …. it’s by putting it in a museum!

This apparently was a commission from the San Diego Museum of Art in 2011 for a weekly series themed around the topic of “what a city needs.”  Here, Hebert says he is approaching this theme “from an interest in power infrastructure and its critical importance to the city,” in relation to the geographic remoteness of most of those sources of power (which is apparently especially true of San Diego). Hebert took public-domain models from the Google SketchUp library, 3D printed them in ABS plastic, wired electronics to them, and placed them in the museum in what we MIGHT call “non-traditional” locations. Sounds like a well-followed recipe right out o’ the ol’ “art” cookbook to me!

AUTOMATIC PERSONAL WEIGHT LIFTER

Prototyping / modeling to create a system in which a user’s simple arm motion (which also blows up an inflatable muscle) controls a machine / “robot” that will lift a large amount of weight.

Model:

sloppy map of possible linkages:


 

Cyborg Foundation

Artists,Reference,Robotics — adambd @ 5:30 am

“I started hearing colours in my dreams”

Neil Harbisson (b. 1984 in Belfast, Northern Ireland)

CYBORG FOUNDATION | Rafel Duran Torrent from Focus Forward Films on Vimeo.

Robotic Musicianship

Artists,Reference,Robotics — Ali Momeni @ 2:53 pm

Eric Singer (b. 19.. in ..)

  • Lemur: League of extraordinary musical urban robots

Godfried-Willem Raes (b. 1952 in Gent, Belgium)

Ajay Kapur (b. 19.. in ..)

Robotic Quintet Composes And Plays Its Own Music

This robot, created by Festo, listens to a piece of music, breaking each note down into pitch, duration, and intensity. It then plugs that information into various algorithms derived from Conway’s “Game of Life” and creates a new composition; the robots listen to one another as they play, producing an improvised performance. Conway’s “Game of Life,” put simply, is a 2D environment where cells (pixels) live or die based on rules about their neighboring cells.

They are:

  • Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overcrowding.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
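The four rules above fit in a few lines of code. This is a generic Game of Life step (representing the board as a set of live cell coordinates), not Festo’s actual algorithm:

```python
from collections import Counter

def life_step(live):
    """Apply Conway's four rules to a set of live (x, y) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        # Survival: a live cell with 2 or 3 neighbours lives on.
        # Birth: a dead cell with exactly 3 neighbours becomes live.
        # Everything else dies (under-population or overcrowding).
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Note that the under-population and overcrowding rules never need to be written explicitly: any cell not matching the survival or birth conditions simply isn’t in the next generation.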

This algorithm tends to evolve as time passes and was created in an attempt to simulate life.

This robot essentially mimics how composers take a musical motif and evolve it over the life of the piece. The robot sets the sensory information from the music played to it as the initial condition, or motif, and lets the algorithm change it. Since Western music is highly mathematical, robots are naturals. I would say this robot has more of the characteristics of human/animal behavior in Wiener’s example of the music box and the kitten. Unlike the music box, this robot performs in accordance with a pattern, yet this pattern is directly affected by its past.

FIRST UNICORN ROBOT! (Converses with “female”)

 

[pardon my screenshot bootleg, sound is pretty bad… go to the link!]

 

“First ‘chatbot’ conversation ends in argument”

www.bbc.co.uk/news/technology-14843549

 

This is an interesting example of robot interaction. Two chatbots, having learned their chat behavior over time (1997 – 2011 !) from previous conversations with human “users,” are forced to chat with each other. This BBC video probably highlights what we might consider the “human-interest” element of the story, such as the bots’ discussion of “god” and “unicorns” as well as their so-called “argumentative” sides, supposedly developed from users. With these highlights as examples, it does seem like fairly convincing proof that learning from human behavior… makes you sort of human-like! This type of “artificial” learning or evolution is really interesting, as it reflects back what we choose to teach the robots we are using: we really can see that these chatbots have had to live most of their lives on the defensive. I would like to see unedited footage of the interaction. I am sure some of their conversation is a lot more boring. I noticed that the conversation tends towards confusion or miscommunication, almost exemplifying the point about entropy that Norbert Wiener makes (p. 20-27): that the information carried by a message can be seen as “the negative of its entropy” (as well as the negative logarithm of its probability). And yet, just as it seems the conversation might spiral into utter nonsense (and maybe it does, who knows, this might be some clever editing), the robots seem to pick up the pieces and realize what the other is saying, sometimes resulting in some pretty uncanny conversational harmony about some pretty human-feeling topics. Again, if we saw more of this chat that didn’t become part of a news story, I wonder if this conversation might slip more frequently into moments of entropic confusion.
I think those moments of entropy can tell us as much about the bots’ system of learning as their moments of success (as Heidegger / Graham Harman might say, we only notice things when they’re not working… though I kinda like Lil Wayne’s version from “Steady Mobbin’”: if it ain’t broke, don’t break it)….
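Wiener’s point can be made concrete with a few lines of arithmetic. The information carried by a message is the negative logarithm of its probability (its “surprisal”), and entropy is the average surprisal over all possible messages. A toy illustration (the probabilities are my own invented numbers, not from the reading):

```python
import math

def surprisal(p):
    """Information content of a message with probability p, in bits:
    the negative logarithm of its probability."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy in bits: the average surprisal of a distribution."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# A near-certain reply carries almost no information; a conversation
# that could go anywhere carries much more.
print(surprisal(0.5))          # 1.0 bit: a fair coin flip
print(entropy([0.99, 0.01]))   # ~0.08 bits: a predictable, "boring" exchange
print(entropy([0.25] * 4))     # 2.0 bits: four equally likely replies
```

On this view, the bots’ lapses into confusion are high-entropy stretches: each reply is nearly unpredictable, so individually “informative” in Wiener’s sense, while carrying little usable meaning for the other bot.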

If we view chatbots as an analogue to the types of outside-world-sensing robots we are trying to build, only with words as both their input and output, this seems to show that they really are capable of the type of complex feedback-controlled learning that Wiener suggests (p.24) and that Alan Turing was gearing up for. This experiment is not unlike the really amazingly funny conversation in the Hofstadter reading between “Parry” (Colby), the paranoid robot, and “Doctor” (Weizenbaum), the nondirective-therapy psychiatrist robot (p.595). So, actually, BBC’s claim that this was the “first chatbot conversation” isn’t quite right…

Nonetheless, perhaps an experiment worth trying again on our own time?

13.01.29: Cybernetics: Lecture Outline

Cybernetics,In-Class,Reference,Robotics,Theory — Ali Momeni @ 12:00 am

My Little Piece of Privacy.

Assignment,Robotics,Submission — Tags: , , , — joel_simon @ 8:48 pm


My Little Piece of Privacy is a robotic art piece in which a small curtain is maneuvered along a window to block only those outside from looking in. A camera with body tracking detects people, and a motor drives a belt that carries the curtain. I believe the strength of this piece is that security and privacy are things humans and robots mutually understand. Computer systems are designed from the ground up to be secure against attempts to steal data or hijack processes; security intrusions are one of the biggest threats computers face. A robot helping a human keep his privacy demonstrates a level of relatability and understanding the robot must have for the human.
However, the problem seems a little forced; as one commenter put it, “But why you don’t use the large one??” I wish the person inside were actually more exposed and dependent on the curtain for privacy. The actual result of a moving curtain is that passersby interacted with it in a playful way, probably decreasing the level of privacy inside. I think there is a lot of potential for robotic art where robots try to defend their own privacy from viewers.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2024 Advanced Studio: Critical Robotics – useless robot, uncanny gesture | powered by WordPress with Barecity