Pendulum 2.0

Assignment,Max,Submission — Dakotah @ 7:32 pm

This is a combination of the previous pendulum project and my first pulse-width-modulated (PWM) motor driver. This lets me deliver more power and gives me control over the speed of the swing.
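For anyone unfamiliar with pulse-width modulation, the idea behind the speed control can be sketched in a few lines. This is a generic illustration of the duty-cycle arithmetic, not code from the project itself; all names and numbers are hypothetical.

```python
# Sketch of the arithmetic behind pulse-width modulation (PWM):
# the motor is switched fully on and fully off very quickly, and the
# average power it sees scales with the fraction of time spent on
# (the duty cycle). Values here are illustrative only.

def pwm_times(frequency_hz, duty_cycle):
    """Return (on_time, off_time) in seconds for one PWM period."""
    period = 1.0 / frequency_hz
    on_time = period * duty_cycle
    return on_time, period - on_time

# At 1 kHz and 50% duty, the motor is on for half of each 1 ms period;
# raising the duty cycle toward 1.0 speeds up the swing.
on, off = pwm_times(1000, 0.5)
```

Raising or lowering `duty_cycle` is what gives the pendulum its adjustable swing speed without needing a variable supply voltage.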

AUTOMATIC PERSONAL WEIGHT LIFTER

Prototyping / modeling to create a system in which a user’s simple arm motion (which also blows up an inflatable muscle) controls a machine / “robot” that will lift a large amount of weight.

Model:

sloppy map of possible linkages:


 

Cyborg Foundation

Artists,Reference,Robotics — adambd @ 5:30 am

“I started hearing colours in my dreams”

Neil Harbisson (b. 1982 in Belfast, Northern Ireland)

CYBORG FOUNDATION | Rafel Duran Torrent from Focus Forward Films on Vimeo.

Robotic Musicianship

Artists,Reference,Robotics — Ali Momeni @ 2:53 pm

Eric Singer (b. 19.. in ..)

  • LEMUR: League of Electronic Musical Urban Robots

Godfried-Willem Raes (b. 1952 in Gent, Belgium)

Ajay Kapur (b. 19.. in ..)

Research Assignment II

Submission — adambd @ 7:18 am


Remote control insect –

Electrodes are attached to the insect's left, right, and back sides; electric pulses are generated by a small wireless receiver on the insect's back.
The insect actually becomes an actuator (similar to Robb's experiments).

I think this experiment is interesting in two respects. First, in the way that we humans perceive these bionic insects: people generally consider insects to be above robots on a sentience scale. Interestingly, though, Hofstadter describes how robotics and electronics are a total mystery to most people, and through ignorance they personify even simple robots. It would be interesting to see how these bionic insects are regarded by humans.

Second, the potential to harness the insect's finely tuned senses and even its processing abilities: for it to send back a visual/audio stream and be not only controlled but programmed with what we want it to do.


Theo Jansen Beach Robots

The tortoises mentioned in The Cybernetic Brain by Pickering were an attempt to create new forms of animals via robotics. Many anthropomorphic traits were projected onto them, like dancing in front of mirrors and relationships with other bots. This reminded me strongly of the work of the Dutch artist Theo Jansen, and probably served as some form of inspiration. Most of you are probably already very familiar with his beach crawlers, but I find the relationship between them and other forms of robotics worth comparing. I should note that these are robots, as they have input, output, and very basic computing done with pressurized bottles.
I chose this video specifically because of Jansen's explicit reference to the 'life' of his creations. The video is even entitled "Presenting Strandbeest: Making New Life." Jansen loves the idea of his creations not as sculptures but as animals that really inhabit the local beach. He has given 'the animals' tools to feel the water, harness energy from the wind, and anchor into the sand for protection. I believe this attempt to mimic life is analogous to what is done on the AI side to mimic intelligence. Interestingly, there is no Turing test for animal robots (that I am aware of); perhaps there should be! Certainly regeneration, reproduction, and evolution would be on it, abilities these robots obviously do not have. What I have not seen is the ability for these creatures to actually survive on their own for extended periods of time. These creations are undoubtedly elegant and technically marvelous; however, I feel that his obsession with giving the creatures gimmicks that seem to replicate real animals does not do as much for them.

An interesting dimension for the pieces could be to expose, in some way, how we want to think they are real and how we want to believe they are alive. Much as with our household pets, we project and wish into existence many positive traits and abilities that aren't actually there. If many of these traits are projected, and some robots (turtles, fish, etc.) already match the intelligence of pets, then it may not be long until we have more serious robotic pets.

Robotic Quintet Composes And Plays Its Own Music

This robot quintet, created by Festo, listens to a piece of music, breaking each note down into pitch, duration, and intensity. It then plugs that information into various algorithms derived from Conway's "Game of Life" and creates a new composition, with the robots listening to one another to produce an improvised performance. Conway's "Game of Life," put simply, is a 2D environment where cells (pixels) live or die based on rules about their neighboring cells.

They are:

  • Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overcrowding.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

This algorithm tends to evolve as time passes and was created in an attempt to simulate life.
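The four rules are simple enough to sketch directly. Below is a minimal Python sketch of one generation, assuming a finite grid whose borders are treated as dead; the names are illustrative and have nothing to do with Festo's actual system.

```python
# One generation of Conway's Game of Life on a finite 2D grid of 0/1 cells,
# applying the four rules above. Cells beyond the grid edge count as dead.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    total += grid[nr][nc]
        return total

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbours(r, c)
            if grid[r][c] == 1:
                # survives with 2 or 3 neighbours; under/over-population otherwise
                new[r][c] = 1 if n in (2, 3) else 0
            else:
                # reproduction with exactly 3 neighbours
                new[r][c] = 1 if n == 3 else 0
    return new

# A "blinker" oscillates between a horizontal and a vertical bar:
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

Feeding the note data in as the initial grid pattern and letting `life_step` run repeatedly is, roughly, how a motif could be evolved over time.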

This robot essentially mimics how composers take a musical motif and evolve it over the life of the piece. The robot sets the sensory information from the music played to it as the initial condition, or motif, and lets the algorithm change it. Since Western music is highly mathematical, robots are naturals. I would say this robot shows more characteristics of human/animal behavior in Wiener's example of the music box and the kitten: unlike the music box, this robot performs in accordance with a pattern, yet this pattern is directly affected by its past.

RoboNaut

Reference,Scientific,Uncategorized — Robb Godshaw @ 4:36 am



The Robonaut is the first humanoid robot to come into use in space.

The practicality of this device is obviously quite limited; it is intended as a research platform for humanoid robot presence on later missions.
NASA hopes that by watching the interactions and problems that a robot on a spacecraft presents, it can improve such robots to the point where deep-space missions can be unmanned.

FIRST UNICORN ROBOT! (Converses with “female”)

 

[pardon my screenshot bootleg, sound is pretty bad… go to the link!]

 

“First ‘chatbot’ conversation ends in argument”

www.bbc.co.uk/news/technology-14843549

 

This is an interesting example of robot interaction. Two chatbots, having learned their chat behavior over time (1997-2011!) from previous conversations with human "users," are forced to chat with each other. This BBC video probably highlights what we might consider the "human-interest" element of the story, such as the bots' discussion of "god" and "unicorns" as well as their so-called "argumentative" sides, supposedly developed from users. With these highlights as examples, it does seem fairly convincing proof that learning from human behavior... makes you sort of human-like! This type of "artificial" learning or evolution is really interesting, as it reflects back what we choose to teach the robots we are using: we really can see that these chatbots have had to live most of their lives on the defensive.

I would like to see unedited footage of the interaction; I am sure some of their conversation is a lot more boring. I noticed that the conversation tends toward confusion or miscommunication, almost exemplifying the point about entropy that Norbert Wiener makes (pp. 20-27): that the information carried by a message could be seen as "the negative of its entropy" (as well as the negative logarithm of its probability). And yet, just as it seems the conversation might spiral into utter nonsense (and maybe it does, who knows, this might be some clever editing), the robots seem to pick up the pieces and realize what the other is saying, sometimes resulting in some pretty uncanny conversational harmony about some pretty human-feeling topics. Again, if we saw more of this chat that didn't become part of a news story, I wonder if this conversation might slip more frequently into moments of entropic confusion.
(I think those moments of entropy can tell us as much about the bots' system of learning as their moments of success. As Heidegger / Graham Harman might say, we only notice things when they're not working... though I kinda like Lil Wayne's version from "We Be Steady Mobbin'": if it ain't broke, don't break it.)
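Wiener's point about information as the negative logarithm of probability can be made concrete in a couple of lines. A minimal Python sketch, illustrative only:

```python
# Self-information of an event: the less probable a message, the more
# information it carries. Measured in bits when using log base 2.
import math

def self_information(p):
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

# A common reply (p = 1/2) carries 1 bit; a rare one (p = 1/8) carries 3 bits.
```

By this measure, the bots' predictable defensive exchanges carry little information, while their surprising turns (unicorns, god) carry much more.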

If we view chatbots as analogues to the types of outside-world-sensing robots we are trying to build, only with words as both their input and output, this seems to show that they really are capable of the type of complex feedback-controlled learning that Wiener suggests (p. 24) and that Alan Turing was gearing up for. This experiment is not unlike the really amazingly funny conversation in the Hofstadter reading between "Parry" (Colby), the paranoid robot, and "Doctor" (Weizenbaum), the nondirective-therapy psychiatrist robot (p. 595). So, actually, BBC's claim that this was the "first chatbot conversation" isn't quite right...

Nonetheless, perhaps an experiment worth trying again on our own time?

13.01.29: Cybernetics: Lecture Outline

Cybernetics,In-Class,Reference,Robotics,Theory — Ali Momeni @ 12:00 am
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2017 Advanced Studio: Critical Robotics – useless robot, uncanny gesture | powered by WordPress with Barecity