Tuesday, April 23, 2013

Vision for Robotics

The ability of a robot to sense its environment is a prerequisite for any decision making. Robots have traditionally used mainly range sensors such as sonars and laser range finders. However, camera and processing technology has recently advanced to the point where modern robots are increasingly equipped with vision-based sensors.

Indeed, on the AIBO the camera is the main source of sensory information, and as such we placed a strong emphasis on the vision component of our team. Since computer vision is an active area of research, there is not yet any perfect solution. Our vision module has therefore undergone continual development over the course of this multi-year project. This lecture focuses on the progress made during our first year as an example of what can be done relatively quickly. During that time, the vision reached a sufficient level to support all of the localization and behavior achievements described in the rest of this lecture. Our progress since the first year is detailed in our 2004 and 2005 team technical reports, as well as in a series of research papers.

Our vision module processes the images taken by the CMOS camera located on the AIBO. The module identifies colors in order to recognize objects, which are then used to localize the robot and to plan its operation.

Our visual processing follows the established procedure of color segmentation followed by object recognition. Color segmentation is the process of classifying each pixel in an input image as belonging to one of a number of predefined color classes, based on ground-truth labels provided for a few training images. Though the fundamental methods employed in this module have been applied previously (both in RoboCup and in other domains), it was built from scratch, like all the other modules in our team. Hence, the implementation details provided are our own solutions to the problems we faced along the way.

We have drawn some of the ideas from the previous technical reports of CMU [89] and UNSW [9]. This module can be broadly divided into two stages: (i) low-level vision, where the color segmentation and region building operations are performed, and (ii) high-level vision, wherein object recognition is accomplished and the position and bearing of the various objects in the visual field are determined.
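The low-level stage can be illustrated with a minimal sketch of per-pixel color segmentation. This is not the actual implementation described above; the color classes, reference values, and distance threshold are illustrative assumptions, and a real system would classify in a camera-appropriate color space using a trained lookup table rather than nearest-reference matching.

```python
# Illustrative color classes, each defined by an assumed reference RGB value.
COLOR_CLASSES = {
    "orange": (255, 128, 0),    # ball
    "green":  (0, 160, 0),      # field carpet
    "white":  (255, 255, 255),  # field lines
}

def classify_pixel(rgb, max_dist_sq=60 ** 2):
    """Assign a pixel to the nearest color class by squared RGB distance,
    or to "unknown" if no class is close enough."""
    best_class, best_dist = "unknown", max_dist_sq
    for name, ref in COLOR_CLASSES.items():
        d = sum((a - b) ** 2 for a, b in zip(rgb, ref))
        if d < best_dist:
            best_class, best_dist = name, d
    return best_class

def segment(image):
    """Classify every pixel of a row-major RGB image (list of rows)."""
    return [[classify_pixel(px) for px in row] for row in image]
```

Region building would then group adjacent pixels of the same class into connected components, which the high-level stage matches against known objects.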

Robotics technology has recently advanced to the point of being widely accessible for relatively low-budget research, as well as for graduate, undergraduate, and even secondary and primary school education. However, for most interesting robot platforms, there remains a substantial learning curve or “ramp-up cost” to learning enough about the robot to be able to use it effectively. This learning curve cannot be easily eliminated with published curricula or how-to guides, both because the robots tend to be fairly complex and idiosyncratic, and, more importantly, because robot technology is advancing rapidly, often making previous years’ models obsolete as quickly as competent educational guides can be created.



Operating System for Embedded Systems

Embedded systems can run anything from a complex real-time operating system, such as Linux, down to just the application program with no operating system whatsoever. It all depends on the intended application area. For the EyeCon controller, we developed our own operating system RoBIOS (Robot Basic Input Output System), a very lean real-time operating system that provides a monitor program as user interface, system functions (including multithreading, semaphores, and timers), plus a comprehensive device driver library for all kinds of robotics and embedded systems applications. This includes serial/parallel communication, DC motors, servos, various sensors, graphics/text output, and input buttons.


The RoBIOS monitor program starts at power-up and provides a comprehensive control interface to download and run programs, load and store programs in flash-ROM, test system components, and set a number of system parameters. An additional system component, independent of RoBIOS, is the Hardware Description Table (HDT, see Appendix C), which serves as a user-configurable hardware abstraction layer [Kasper et al. 2000], [Bräunl 2001]. RoBIOS is a software package that resides in the flash-ROM of the controller and acts on the one hand as a basic multithreaded operating system and on the other hand as a large library of user functions and drivers to interface all on-board and off-board devices available for the EyeCon controller. RoBIOS offers a comprehensive user interface, which is displayed on the integrated LCD after start-up. Here the user can download, store, and execute programs, change system settings, and test any connected hardware that has been registered in the HDT.


Hardware access from both the monitor program and the user program goes through RoBIOS library functions. The monitor program also deals with downloading application program files, storing/retrieving programs to/from ROM, and so on.

The RoBIOS operating system and the associated HDT both reside in the controller’s flash-ROM, but they come from separate binary files and can be downloaded independently. This allows updating of the RoBIOS operating system without having to reconfigure the HDT and vice versa. Together the two binaries occupy the first 128KB of the flash-ROM; the remaining 384KB are used to store up to three user programs with a maximum size of 128KB each.
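The flash-ROM layout described above can be sketched as a few constants and a slot-offset calculation. The sizes come from the text; treating offsets as relative to the start of the flash-ROM is an assumption for illustration, not a detail taken from the EyeCon documentation.

```python
KB = 1024
FLASH_SIZE  = 512 * KB  # total flash-ROM
SYSTEM_SIZE = 128 * KB  # RoBIOS image + HDT binaries together
SLOT_SIZE   = 128 * KB  # one user-program slot
NUM_SLOTS   = 3         # up to three user programs

def slot_offset(slot):
    """Byte offset of user-program slot 0..2, relative to flash start."""
    if not 0 <= slot < NUM_SLOTS:
        raise ValueError("slot must be 0..2")
    return SYSTEM_SIZE + slot * SLOT_SIZE

# The system area plus the three user slots fill the flash exactly.
assert SYSTEM_SIZE + NUM_SLOTS * SLOT_SIZE == FLASH_SIZE
```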

Since RoBIOS is continuously being enhanced and new features and drivers are being added, the growing RoBIOS image is stored in compressed form in ROM. User programs may also be compressed with utility srec2bin before downloading. At start-up, a bootstrap loader transfers the compressed RoBIOS
from ROM to an uncompressed version in RAM. In a similar way, RoBIOS unpacks each user program when copying from ROM to RAM before execution.

User programs and the operating system itself run faster in RAM than in ROM, because of faster memory access times.

Each operating system comprises machine-independent parts (for example, higher-level functions) and machine-dependent parts (for example, device drivers for particular hardware components). Care has been taken to keep the machine-dependent part as small as possible, so that porting to different hardware can be done at minimal cost in the future.

Applications to Robot Control

Applications of genetic algorithms to robot control are briefly discussed in the following sections. These topics are dealt with in more depth in the following chapters on behavior-based systems and gait evolution.

Genetic algorithms have been applied to the evolution of neural controllers for robot locomotion by numerous researchers. This approach uses the genetic algorithm to evolve the weightings between interconnected neurons to construct a controller that achieves the desired gait. Neuron inputs are taken from various sensors on the robot, and the outputs of certain neurons are directly connected to the robot’s actuators. One early project successfully generated gaits for a hexapod robot using a simple traditional genetic algorithm with one-point crossover and mutation; a simple neural network controller was used to control the robot, and the fitness of the individuals generated was evaluated by human designers. Another project evolved a controller for a simulated salamander using an enhanced genetic algorithm; the neural model employed was biologically based and very complex, but the system developed was capable of operating without human fitness evaluators.

Genetic algorithms have been used in a variety of ways to produce new behavioral controllers or to optimize existing ones. Ram et al. used a genetic algorithm to control the weightings and internal parameters of a simple reactive schema controller. In schema-based control, primitive motor and perceptual schemas do simple distributed processing of inputs (taken from sensors or other schemas) to produce outputs. Motor schemas asynchronously receive input from perceptual schemas to produce response outputs intended to drive an actuator. A schema arbitration controller produces output by summing contributions from independent schema units, each contributing to the final output signal sent to the actuators according to a weighting. These weightings are usually manually tuned to produce the desired system behavior from the robot.

The approach taken by Ram et al. was to use a genetic algorithm to determine an optimal set of schema weightings for a given fitness function. By tuning the parameters of the fitness function, robots optimized for the qualities of safety, speed, and path efficiency were produced. The behavior of each of these robots was different from any of the others. This graphically demonstrates how behavioral outcomes may be easily altered by simple changes in a fitness function.
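Schema arbitration as described above can be sketched in a few lines. The two schemas and the weightings are illustrative assumptions; the weightings are exactly the parameters the cited work hands to the genetic algorithm, so different weight vectors yield the "safe" versus "fast" robots mentioned above.

```python
def move_to_goal(robot_pos, goal_pos):
    """Motor schema: response vector pointing toward the goal."""
    return (goal_pos[0] - robot_pos[0], goal_pos[1] - robot_pos[1])

def avoid_obstacle(robot_pos, obstacle_pos):
    """Motor schema: response vector pointing directly away from the obstacle."""
    return (robot_pos[0] - obstacle_pos[0], robot_pos[1] - obstacle_pos[1])

def arbitrate(schema_outputs, weightings):
    """Weighted sum of independent schema outputs -> actuator command."""
    x = sum(w * out[0] for out, w in zip(schema_outputs, weightings))
    y = sum(w * out[1] for out, w in zip(schema_outputs, weightings))
    return (x, y)
```

For example, a robot at the origin with a goal at (10, 0) and an obstacle at (2, 0), using weightings (1.0, 0.5), receives the summed command `arbitrate([move_to_goal((0, 0), (10, 0)), avoid_obstacle((0, 0), (2, 0))], [1.0, 0.5])`; raising the avoidance weighting makes the same robot more cautious without changing any schema.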

Example Evolution

Harvey used a genetic algorithm to evolve a robot neural net controller to perform the tasks of wandering and maximizing the enclosed polygonal area of a path within a closed space. The controller used sensors as its inputs and was directly coupled to the driving mechanism of the robot. A similar approach was taken in [Venkitachalam 2002], but the outputs of the neural network were used to control schema weightings. The neural network produces dynamic schema weightings in response to input from perceptual schemas.

Analog versus Digital Sensors

A number of sensors produce analog output signals rather than digital signals. This means an A/D converter is required to connect such a sensor to a microcontroller. Typical examples of such sensors are:
• Microphone
• Analog infrared distance sensor
• Analog compass
• Barometer sensor
Digital sensors, on the other hand, are usually more complex than analog sensors and often also more accurate. In some cases the same sensor is available in either analog or digital form, where the digital version is simply the analog sensor packaged together with an A/D converter.

The output signal of digital sensors can have different forms. It can be a parallel interface (for example 8 or 16 digital output lines), a serial interface (for example following the RS232 standard), or a “synchronous serial” interface.

The expression “synchronous serial” means that the converted data value is read bit by bit from the sensor. After setting the chip-enable line for the sensor, the CPU sends pulses via the serial clock line and at the same time reads 1 bit of information from the sensor’s single-bit output line for every pulse (for example on each rising edge). See the photo below for an example of a sensor with a 6-bit wide output word.
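The read sequence just described can be sketched as a bit-banged loop: enable the chip, then clock out one bit per pulse, most significant bit first. The `SimulatedSensor` class below is a stand-in for real chip-enable/clock/data lines, not a driver for any particular device; the 6-bit width matches the example above.

```python
class SimulatedSensor:
    """Simulates a 6-bit sensor that shifts its converted value out MSB first."""
    def __init__(self, value, bits=6):
        self.bits = [(value >> i) & 1 for i in reversed(range(bits))]
        self.index = 0

    def chip_enable(self):
        self.index = 0  # asserting chip-enable resets the shift register

    def clock_pulse(self):
        """One clock pulse: the next data bit appears on the output line."""
        bit = self.bits[self.index]
        self.index += 1
        return bit

def read_sensor(sensor, bits=6):
    """Bit-bang read: enable the chip, then clock in `bits` bits, MSB first."""
    sensor.chip_enable()
    value = 0
    for _ in range(bits):
        value = (value << 1) | sensor.clock_pulse()
    return value
```

On real hardware the same loop would toggle a GPIO clock pin and sample a GPIO data pin instead of calling methods on a Python object.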


Mechanical environment

Within the mechanical environment of bone, an external environment can be distinguished from an internal one. The external environment is coupled to the surroundings of the human body, which applies high load impulses (forces, moments, etc.) that shape the internal environment. Loads are transmitted from the external environment through the fixator frame and the bone screws to the internal environment. The adhesion zone can in this way be partially or fully relieved, depending on the mechanical profile of the fixator and its ability to damp and carry loads.
Dynastab Mechatronics 2000 with measurement module

The internal environment is directly connected with the immediate surroundings of the adhesion zone, and in this way it shapes the future mechanical profile of the adhesion. As is commonly known, micromovements at the bone fracture can stimulate the growth process. Care should be taken to shape these micromovements (their range and loads) properly, to ensure that the adhesion growth and remodelling process proceeds correctly.

Mechanical environment as a stimulation source in the broken bone tissue regeneration process

In response to the time-varying mechanical loads that occur at the bone fracture, the fixator frame should be able to change its mechanical configuration. Tracking the loads as they occur can be very helpful in building an individual healing profile for the patient. This information can be used in two ways. The first concerns actively securing the fracture zone mechanically: according to the occurring forces, the fixator should reconfigure itself and maintain the proper shape in a safe way.

The second concerns active bone stimulation, for which a suitable profile of bone loading and unloading must first be created. Only safe stimulation can properly accelerate the biological processes without causing mistakes that cannot be corrected later.