Research Assignment III

Assignment,Submission — adambd @ 8:41 am

Empathetic robot.

This robot tries to simulate and reflect human expression as closely as possible – it empathises with whoever it is talking to. This is a key requirement for humans to perceive a robot as truly intelligent (Dautenhahn et al.). But I think this attempt heads in the wrong direction – as with the paradox of animation, make something too lifelike and something just feels wrong about it.

 

Robots that deceive.

11 neural connections – the same kind of architecture Braitenberg discusses, possibly most similar to vehicle 4.

Find the “food source” – a light ring – and avoid the dark ring.

An “artificial neural network controlled by a binary ‘genome’. The network consisted of 11 neurons” – connections between sensors and motors. The “neurons” were linked by 33 synapses, and the strength of each connection was controlled by an 8-bit gene – an extended Braitenberg vehicle, with different processes running in parallel.

100 groups of 10 robots – the top 200 robots (the robots with the most points) were “mated” together to shuffle their genes.

After more rounds, the robots started to evolve not to shine their blue light when “feeding”; a third of the robots actually became repulsed by the blue light.
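The select–mate–shuffle loop described above can be sketched roughly as follows. This is a minimal sketch, not the experiment's actual code: the gene-to-weight mapping, single-point crossover, and mutation rate are my assumptions.

```python
import random

GENES = 33  # one 8-bit gene per synapse, as in the experiment described above

def random_genome():
    """A genome is a list of 33 genes, each an 8-bit value (0-255)."""
    return [random.randint(0, 255) for _ in range(GENES)]

def weight(gene):
    """Map an 8-bit gene to a synapse strength in [-1, 1] (assumed mapping)."""
    return gene / 127.5 - 1.0

def crossover(a, b):
    """Single-point crossover: 'shuffle' two parents' genes into a child."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    """Occasionally replace a gene with a fresh random 8-bit value."""
    return [random.randint(0, 255) if random.random() < rate else g
            for g in genome]

def next_generation(scored, pop_size):
    """scored: list of (fitness, genome). Keep the top half as parents,
    then breed a new population of pop_size children."""
    ranked = [g for _, g in sorted(scored, key=lambda s: -s[0])]
    parents = ranked[: max(2, len(ranked) // 2)]
    return [mutate(crossover(*random.sample(parents, 2)))
            for _ in range(pop_size)]
```

With 100 groups of 10 robots, `scored` would hold 1000 fitness/genome pairs per round, and the deceptive light-signalling behavior emerges only from which genomes happen to accumulate points.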

 

Complexity from environment.

 

 

 

The Discontented Robot

This little device made by David Bowen must be a version of Braitenberg’s vehicles that is attracted to what it senses (either 2b or 4a). The nice thing about this little bot is that it synthesizes its own power from the source it is attracted to. The setup is slightly different in that the object of desire is out of reach, so the bot ends up circling the light source, never satisfied.

WHERE DO ROBOTS BELONG?

 

The following work by Matthew Hebert (posted below) relates to a discussion Adam, Dakotah, Rob and I had regarding where art belongs… I think we decided that, eventually, inevitably, it seems to always end up, as all life does, buried in a landfill somewhere. Personally, I don’t mind if stuff I make ends up in the garbage. But I don’t really want to get into a discussion about whether art is “wasteful” or not, or whether it should be “useful” or not.

Instead, let’s just check out this project that might excite Adam, since it combines robotics with design & “utilitarian” shit for your home… you know, furniture.

 

^    This table is kind of “whimsical” (in a when-robotics-hits-Crate-&-Barrel sort of way?). But the designer is obviously a theory dork (<- no negative connotation), since here we see one of Braitenberg’s vehicles!  Maybe 2a style, mentioned on p.6?  Though you might not be able to tell from this not very revealing video, these little robots, imprisoned between two sheets of glass, move in the sun, and stay still in the “shade.” Their motors are most likely attached to light sensors. This creates a nice effect when you put something down on the coffee table, since they will flock to it and hide under it. Would I put this in my home if someone gave it to me? Sure. (But as Bob Bingham would ask, “Is it art yet?”)
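The behavior described – moving in the sun, sitting still in the shade – matches a vehicle-2a wiring, where each light sensor excites the motor on its own side. This is an assumed model (the video doesn't reveal the actual circuitry), but the logic is tiny:

```python
def vehicle_2a_step(left_light, right_light, gain=1.0):
    """Braitenberg vehicle 2a ('fear'): each sensor drives the motor on the
    SAME side, so the brighter side pushes harder and the vehicle turns
    away from the light. In darkness both motors read zero and it stops --
    which is how these table robots would end up hiding under objects."""
    left_motor = gain * left_light
    right_motor = gain * right_light
    speed = (left_motor + right_motor) / 2
    turn = right_motor - left_motor  # positive = veer left, away from the brighter right
    return speed, turn
```

Swap the two connections (each sensor driving the *opposite* motor) and you get vehicle 2b, which charges toward the light instead.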

Here’s another piece based on simple Braitenberg architectures: a bench that moves itself into the sun (using light sensors in the front, back, and on both sides, as well as a microcontroller). These benches have solar panels on their seats that charge their battery (except, I guess, when someone’s sitting on one… hmmm…) Watch out, this video is rather lengthy.

[Do we always have to use that Strauss composition from 2001 when introducing a monolithic design?][yes]

 

Coming from the “art” perspective: I think these projects could be more interesting if they complicated the nature of Braitenberg architectures, perhaps simultaneously complicating the notion of utilitarian furniture. What if these devices were structured not to be useful? If this furniture made use of slightly extended models of Braitenbergian forms (see the Lambrinos / Scheier article), the emergent behaviors might appear more complex. This could get really weird and interesting if we’re talking about furniture that is reacting to human use. Incorporating “artificial” learning, or the type of seemingly socially intelligent behaviors discussed in the article we read about folk-psychology, might turn a table or a chair into something we really have to think about interacting with… Heidegger would go bananas.

 

And last, this Hebert guy takes a stab at “art” !!

After all, if there’s one way to be SURE you’re making art …. it’s by putting it in a museum!

This apparently was a commission from the San Diego Museum of Art in 2011 for a weekly series themed around the topic of “what a city needs.”  Here, Hebert says he is approaching this theme “from an interest in power infrastructure and its critical importance to the city,” in relation to the geographical remoteness of most of those forms of power (which is apparently especially true of San Diego). Hebert took public-domain models from the Google SketchUp library, 3D printed them in ABS plastic, wired electronics to them, and placed them in the museum in what we MIGHT call “non-traditional” locations. Sounds like a well-followed recipe right out o’ the ol’ “art” cookbook to me!

 

 

 

 

 

 

Project III

Assignment,Submission — adambd @ 3:20 am

This device records subtle eye movements while your eyes are open. When you blink, the recording stops and your eye movements are translated into X & Y movement through servos, which manipulate a long rod – exaggerating an otherwise subtle and unnoticed motion.
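The record-until-blink, replay-on-blink logic could be structured like this. Everything here is hypothetical (class name, gain, and the stubbed-out servo/eye-tracker I/O are mine, not the project's):

```python
class BlinkTriggeredReplay:
    """Record (x, y) gaze samples while the eye is open; on a blink,
    stop recording and hand the buffer off for servo playback.
    Sketch only -- eye tracking and servo output are stubbed out."""

    GAIN = 10  # exaggerate the subtle motion before it reaches the rod

    def __init__(self):
        self.buffer = []
        self.recording = True

    def sample(self, x, y, blinking):
        """Feed one tracker frame in. Returns None while recording,
        or the scaled playback frames when a blink ends the take."""
        if blinking:
            self.recording = False
            return self.flush()
        if self.recording:
            self.buffer.append((x, y))
        return None

    def flush(self):
        """Scale the recorded motion for the servos and reset for the next take."""
        frames = [(self.GAIN * x, self.GAIN * y) for x, y in self.buffer]
        self.buffer = []
        self.recording = True
        return frames
```

Each returned frame pair would then be mapped to the X and Y servo angles driving the rod.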

This device will be expanded to record more facial movements and translate them into other forms of mechanical movement.

photo 1

 


Pendulum 2.0

Assignment,Max,Submission — Dakotah @ 7:32 pm

This is a combination of the previous pendulum project and my first pulse modulated motor. This allows me to deliver more power and have control over the speed of the swing.

AUTOMATIC PERSONAL WEIGHT LIFTER

Prototyping / modeling to create a system in which a user’s simple arm motion (which also blows up an inflatable muscle) controls a machine / “robot” that will lift a large amount of weight.

Model:

sloppy map of possible linkages:

map_3

 

Research Assignment II

Submission — adambd @ 7:18 am

 

 

 

Remote control insect –

Electrodes are attached to the insect’s left, right and back sides; electric pulses are generated by a small wireless receiver on the insect’s back.
The insect actually becomes an actuator (similar to Robb’s experiments).

I think that this experiment is interesting in two respects: firstly, in the way that we humans perceive these bionic insects. I think that people generally consider insects to be above robots on a sentience scale. But interestingly, Hofstadter describes how, to most people, robotics / electronics is a total mystery, and through ignorance they personify simple robots. It would be interesting to see how these bionic insects are regarded by humans.

Secondly, the potential to harness the insect’s finely tuned senses and even its processing abilities – for it to send back a visual / audio stream and be not only controlled but programmed with what we want it to do.

 

 

 

 


Theo Jansen Beach Robots

The tortoises mentioned in The Cybernetic Brain by Pickering were an attempt, by man, to create new forms of animals via robotics. Many anthropomorphic traits were projected upon them, like dancing in front of mirrors and relationships with other bots. This reminded me strongly of the work of the Dutch artist Theo Jansen and probably served as some form of inspiration. Most of you are probably already very familiar with his beach crawlers, but I find the relationship between them and other forms of robotics worth comparing. I should note that these are robots: they have input, output, and very basic computing done with pressurized bottles.
I chose this video specifically because of Jansen’s explicit reference to the ‘life’ of his creations. The video is even entitled “Presenting Strandbeest: Making New Life.” Jansen loves the idea of his creations not as sculpture but as animals who really inhabit the local beach. He has given ‘the animals’ tools to feel the water, harness energy from the wind, and anchor into the sand for protection. I believe that this attempt to mimic life is analogous to what is done on the AI side to mimic intelligence. Interestingly, there is no Turing test for animal robots (that I am aware of) – perhaps there should be! Certainly regeneration, reproduction and evolution would be on there – abilities these robots obviously do not have. What I have not seen is the ability for these creatures to actually survive on their own for extended periods of time. These creations are undoubtedly elegant and technically marvelous; however, I feel that his obsession with giving the creatures gimmicks that seem to replicate real animals is not doing as much for them.

An interesting dimension for the pieces could be to, in some way, expose how we want to think they are real and how we want to believe they are alive. Much as with our household pets, we project and wish into existence many positive traits and abilities that aren’t actually there. If many of these traits are projected, and some robots (turtles, fish, etc.) already approach the intelligence of pets, then it may not be long until we have more serious robotic pets.

Robotic Quintet Composes And Plays Its Own Music

This robot quintet created by Festo listens to a piece of music, breaking each note down into pitch, duration, and intensity. It then plugs that information into various algorithms derived from Conway’s “Game of Life” and creates a new composition, the robots listening to one another to produce an improvised performance. Conway’s “Game of Life,” put simply, is a 2D environment where cells (pixels) react to neighboring cells based on rules.

They are:
Any live cell with fewer than two live neighbours dies, as if caused by under-population.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overcrowding.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

This algorithm tends to evolve as time passes and was created in an attempt to simulate life.
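The four rules collapse into one compact update: a cell is live next generation if it has exactly three live neighbours, or has exactly two and is already live. A minimal sketch (using a set of live-cell coordinates; the grid representation is my choice, not Festo's):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.
    `live` is the set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}
```

A three-cell row (a “blinker”) flips between horizontal and vertical every step, while a 2×2 block never changes – the same mix of churn and stability the robots presumably mine for musical material.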

This robot essentially mimics how composers take a musical motif and evolve it over the life of the piece. The robot sets the sensory information from the music played to it as the initial condition, or motif, and lets the algorithm change it. Since Western music is highly mathematical, robots are naturals. I would say this robot shows more characteristics of human/animal behavior than Wiener’s example of the music box and the kitten: unlike the music box, this robot performs in accordance with a pattern, yet this pattern is directly affected by its past.

FIRST UNICORN ROBOT! (Converses with “female”)

 

[pardon my screenshot bootleg, sound is pretty bad… go to the link!]

 

“First ‘chatbot’ conversation ends in argument”

www.bbc.co.uk/news/technology-14843549

 

This is an interesting example of robot interaction. Two chatbots, having learned their chat behavior over time (1997 – 2011!) from previous conversations with human “users,” are forced to chat with each other. This BBC video probably highlights what we might consider the “human-interest” element of the story, such as the bots’ discussion of “god” and “unicorns” as well as their so-called “argumentative” sides, supposedly developed from users. With these highlights as examples, it does seem fairly convincing proof that learning from human behavior… makes you sort of human-like! This type of “artificial” learning or evolution is really interesting, as it reflects back what we choose to teach the robots we are using: we really can see that these chatbots have had to live most of their lives on the defensive. I would like to see unedited footage of the interaction; I am sure some of their conversation is a lot more boring. I noticed that the conversation tends towards confusion or miscommunication, almost exemplifying the point about entropy that Norbert Wiener makes (p. 20-27): that the information carried by a message could be seen as “the negative of its entropy” (as well as the negative logarithm of its probability). And yet, just as it seems the conversation might spiral into utter nonsense (and maybe it does, who knows, this might be some clever editing), the robots seem to pick up the pieces and realize what the other is saying, sometimes resulting in some pretty uncanny conversational harmony about some pretty human-feeling topics. Again, if we saw more of this chat that didn’t become part of a news story, I wonder if this conversation might slip more frequently into moments of entropic confusion.
(I think those moments of entropy can tell us as much about the bots’ system of learning as their moments of success. As Heidegger / Graham Harman might say, we only notice things when they’re not working… though I kinda like Lil Wayne’s version from “Steady Mobbin’”: if it ain’t broke, don’t break it.)
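Wiener’s formula is easy to make concrete: a message’s self-information is the negative logarithm of its probability, so rarer (more surprising) replies carry more bits. A quick sketch:

```python
import math

def information_bits(p):
    """Self-information of a message with probability p, in bits:
    I(p) = -log2(p). The less probable the message, the more it tells you."""
    return -math.log2(p)

# A reply that was a 50/50 certainty carries 1 bit;
# a one-in-four reply carries 2 bits.
```

By this measure, the bots’ predictable defensive loops are low-information, and the uncanny moments of harmony – precisely because they are improbable – are the information-rich ones.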

If we view chatbots as an analogue to the types of outside-world-sensing robots we are trying to build, only with words as both their input and output, this seems to show that they really are capable of the type of complex feedback-controlled learning that Wiener suggests (p. 24) and that Alan Turing was gearing up for. This experiment is not unlike the amazingly funny conversation in the Hofstadter reading between “Parry” (Colby), the paranoid robot, and “Doctor” (Weizenbaum), the nondirective-therapy psychiatrist robot (p. 595). So, actually, BBC’s claim that this was the “first chatbot conversation” isn’t quite right…

Nonetheless, perhaps an experiment worth trying again on our own time?

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2025 Advanced Studio: Critical Robotics – useless robot, uncanny gesture | powered by WordPress with Barecity