When we walk down the street or even across the room, we use many of our senses to get from point A to point B with the least amount of drama, hazard to our surroundings, or harm to ourselves and others. For the mule bots I figured they would use a host of sensors to give them a picture of what is going on around them, and to reduce the chance of crew members getting their toes run over! Now, I have no degree in robotics, but based on watching people move through crowds and thinking it over, I came up with these notes, again for the animators and story writers.
I figured there would be the equivalent of transmitters located throughout the ship that act as a sort of location waypoint from which the bot could get its bearings. Also, every time it receives a mission, whether from an individual sending it or from the automatic dispenser loading it, its beginning and ending locations have to be input. That gives the mule a start and end point. From there it uses its own internal data and external sensor input to navigate the rest of the way.
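As a minimal sketch (in C, since the comments below talk microcontrollers), a mission record of this kind might look like the following; the struct and field names are made up for illustration:

    /* Hypothetical mission record: every job, whether sent by a crew
       member or the automatic dispenser, carries the start and end
       waypoints the bot navigates between. */
    #include <stdint.h>

    typedef struct {
        uint32_t mission_id;      /* unique job identifier              */
        uint32_t start_waypoint;  /* transmitter ID at the pickup point */
        uint32_t end_waypoint;    /* transmitter ID at the destination  */
        uint8_t  priority;        /* dispatcher-assigned urgency        */
    } Mission;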
Think of it this way: The entire ship would have a wireless network set up (that's how they're getting directions from the central computer in the first place). The signals have transmitter IDs embedded (think MAC addresses for a start). The Mule can easily triangulate its position off of all the transmitters in range (like current cellular signal triangulation).
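As a rough illustration of that triangulation idea, here is a minimal 2-D trilateration sketch: given three transmitters with known deck positions and ranges estimated from signal strength, the mule solves for its own position. All names and numbers are illustrative, not taken from any real navigation stack:

    #include <stdio.h>

    typedef struct { double x, y, r; } Beacon;  /* position + measured range */

    /* Subtracting the range-circle equations pairwise gives two linear
       equations A*x + B*y = C and D*x + E*y = F, solved by Cramer's rule.
       Returns 0 on success, -1 if the beacons are collinear. */
    int trilaterate(const Beacon b[3], double *px, double *py)
    {
        double A = 2.0 * (b[1].x - b[0].x), B = 2.0 * (b[1].y - b[0].y);
        double C = b[0].r*b[0].r - b[1].r*b[1].r
                 + b[1].x*b[1].x - b[0].x*b[0].x
                 + b[1].y*b[1].y - b[0].y*b[0].y;
        double D = 2.0 * (b[2].x - b[0].x), E = 2.0 * (b[2].y - b[0].y);
        double F = b[0].r*b[0].r - b[2].r*b[2].r
                 + b[2].x*b[2].x - b[0].x*b[0].x
                 + b[2].y*b[2].y - b[0].y*b[0].y;

        double det = A*E - B*D;
        if (det == 0.0) return -1;   /* beacons collinear: no position fix */
        *px = (C*E - B*F) / det;
        *py = (A*F - C*D) / det;
        return 0;
    }

    int main(void)
    {
        /* Three wall transmitters, each about 7.07 m from point (5, 5). */
        Beacon b[3] = { {0, 0, 7.0711}, {10, 0, 7.0711}, {0, 10, 7.0711} };
        double x, y;
        if (trilaterate(b, &x, &y) == 0)
            printf("fix: (%.2f, %.2f)\n", x, y);  /* prints (5.00, 5.00) */
        return 0;
    }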
In a video meant to parody the format of a band playing while aboard a moving, open-air vehicle traveling down a road, the band "Move to the Music" climbs aboard one of these and lets it run through a cubicle farm in an office.
The first run is flawless and boring, so without warning the actors/office workers first, they disable some of the sensors, randomly change the destination every 30 seconds, and try again....
One thing about moving people is that most try to avoid oncoming traffic. Combined with the simple evasion tactic you used for the mule, this could become interesting. How often have you run into someone and both of you moved in the same direction, blocking each other's path again? Another possibility is two mules meeting in the hallway, and how to handle that.
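One way out of that mutual-blocking dance is an asymmetric rule, like ships passing port-to-port: both mules steer to their own right, so they never mirror each other the way two pedestrians do, and a numeric ID comparison breaks any remaining tie. A toy sketch, with all names hypothetical:

    #include <stdint.h>
    #include <stdbool.h>

    typedef enum { KEEP_COURSE, STEER_RIGHT, STOP_AND_WAIT } Maneuver;

    Maneuver resolve_head_on(uint32_t my_id, uint32_t other_id,
                             bool corridor_wide_enough)
    {
        if (corridor_wide_enough)
            return STEER_RIGHT;  /* both shift right: no mirroring dance */
        /* Too narrow to pass: deterministic tie-break, lower ID proceeds
           while the other mule pulls aside and waits. */
        return (my_id < other_id) ? KEEP_COURSE : STOP_AND_WAIT;
    }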
I once had to program a small robot to find its way through a (cycle-free) maze using different sensors and measuring wheel movement. That was relatively easy; your mule bots have this 'a lot more complex' feeling to them.
If they are in a known environment, how about RFID tags embedded in the floor at every intersection and, on long hallways, maybe every 10 or 20 meters? Together with an onboard map this would allow the mule to know its position without needing externally powered hardware (like a GPS-like transmitter setup). Such simple RFID tags are cheap, since all they need to transmit is their unique ID; the rest is done with the map.
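A sketch of that scheme: each floor tag reports only a unique ID, and an onboard table maps IDs to deck coordinates. The tag IDs and positions here are invented for illustration:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t tag_id;  /* unique ID burned into the floor tag   */
        double   x, y;    /* deck coordinates from the onboard map */
    } TagEntry;

    static const TagEntry tag_map[] = {
        { 0x08154711, 12.0, 4.0 },  /* intersection marker          */
        { 0x12345678, 32.0, 4.0 },  /* long-hallway marker, 20 m on */
    };

    /* Linear scan is fine for a ship-sized map; returns NULL if the
       ID is unknown (perhaps a replaced tag, see below). */
    const TagEntry *lookup_tag(uint32_t id)
    {
        for (size_t i = 0; i < sizeof tag_map / sizeof tag_map[0]; i++)
            if (tag_map[i].tag_id == id)
                return &tag_map[i];
        return NULL;
    }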
Replacing a damaged RFID tag would mean that all mules need a map update, but that could be rolled out wirelessly (location ID 0x08154711 is now 0x12345678) or, if the number of replaced tags is limited, be done with a little bit of AI (I should be at tag 0x08154711 but found 0x12345678, which is not in my database; possible replacement?). Once the mule finds a known tag, it can update its map by itself (and maybe relay such an update to any other mule it meets) and function until getting the official map during the next charging cycle.
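As a hypothetical sketch of that self-healing map logic: an unknown ID read exactly where a mapped tag should be is recorded as a provisional replacement, kept until the official map arrives during charging:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        uint32_t old_id, new_id;
        bool     confirmed;  /* set once later known tags validate the guess */
    } TagPatch;

    static TagPatch pending = { 0, 0, false };

    void on_unexpected_tag(uint32_t expected_id, uint32_t read_id,
                           bool read_id_known)
    {
        if (read_id_known)
            return;  /* a known tag in the wrong place is a different problem */
        /* Unknown ID where a mapped tag should be: assume a replacement
           and navigate on the patched map for now. */
        pending.old_id    = expected_id;
        pending.new_id    = read_id;
        pending.confirmed = false;
        printf("provisional map patch: 0x%08X -> 0x%08X\n",
               (unsigned)expected_id, (unsigned)read_id);
    }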
I'd prefer the mule bots to be as independent as possible, so that a power failure or infrastructure damage will not disable them all by leaving them unable to get around.
Some great ideas in these concepts. I may have to make use of a few of them in my own robot. It was supposed to be a multi-purpose carrier, but money and parts are hard to come by, so I'm working with what I have now... http://sci-starborne.dreamwidth.org/7217.html
Considering using the laser scanning unit from some old printers to see if I can make a crude LIDAR for it. Once the basics are done, anyway.
The small robot I once programmed used IR-LEDs and an IR sensor to detect close obstructions. Worked reasonably well after some mechanical adjustments of the LED and sensor arrangement. Back then the CPU for the whole robot was a 68HC11. Nowadays, with much more powerful microcontrollers you might be able to use cameras and do some simple image processing for the same power and space budget.
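That pulse-and-compare approach might look roughly like this; read_adc() and set_led() stand in for platform-specific hardware access, mocked here so the sketch runs on a desktop:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool led_on = false;
    static void set_led(bool on) { led_on = on; }
    static uint16_t read_adc(void)
    {
        /* Mock: pretend a nearby object reflects our own IR pulse. */
        return led_on ? 200 : 90;
    }

    #define NEAR_THRESH 80  /* tuned empirically, much like the mechanical
                               adjustments mentioned above */

    static bool obstacle_near(void)
    {
        set_led(false);
        uint16_t ambient = read_adc();  /* baseline IR level    */
        set_led(true);
        uint16_t lit = read_adc();      /* level with our pulse */
        set_led(false);
        /* A large reflected difference means something is close. */
        return lit > ambient && (uint16_t)(lit - ambient) > NEAR_THRESH;
    }

    int main(void)
    {
        printf("obstacle near: %s\n", obstacle_near() ? "yes" : "no");
        return 0;
    }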
The Atmega32 is an ill fit for image processing, having only 2 KB of RAM and being 8-bit. My experiments in that regard were done with a Connectix QuickCam (parallel port version) and an MC68332 microcontroller. The 68000 core allowed for easy programming in assembly (no underlying OS, the whole CPU was mine), and having more memory helped. Considering that current microcontrollers run a lot faster than the MC68332 and have a few orders of magnitude more RAM, I'd think you might even be able to pull off some 3D processing with two cameras. This would allow the robot to look further ahead and plan for getting around something blocking its path before actually running into it.
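The core of that two-camera idea is just similar triangles: depth = focal length x baseline / disparity, once the same feature has been matched in both images. A back-of-the-envelope sketch with illustrative camera parameters:

    #include <stdio.h>

    /* focal_px: focal length in pixels; baseline_m: camera separation;
       disparity_px: horizontal pixel shift of a matched feature. */
    double depth_from_disparity(double focal_px, double baseline_m,
                                double disparity_px)
    {
        if (disparity_px <= 0.0)
            return -1.0;  /* feature at infinity, or a bad match */
        return focal_px * baseline_m / disparity_px;
    }

    int main(void)
    {
        /* e.g. 500 px focal length, 10 cm baseline, 25 px disparity */
        printf("%.2f m\n", depth_from_disparity(500.0, 0.10, 25.0));
        return 0;  /* prints 2.00 m */
    }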
Only when you try to implement an autonomous robot do you start to realize how much your brain does automatically while you are simply walking from point A to point B.