Friday, June 14, 2013

Sensors and Actuators for Mechatronic Systems


Sensors and actuators play an important role in robotic manipulation and its applications. They must operate precisely and function reliably, as they directly influence the performance of the robot's operation. A transducer, whether sensor or actuator, like most devices, is described by a number of characteristics and distinctive features. In this section, we describe in detail the different sensing and actuation methods for robotic applications, the operating principles describing the energy conversion, and various significant designs that incorporate these methods. This section is divided into four subsections, namely, tactile and proximity sensors, force sensors, vision, and actuators.

By definition, tactile sensing is the continuously variable sensing of forces and force gradients over an area. This task is usually performed by an m × n array of sensing elements called forcels. By considering the outputs from all of the individual forcels, it is possible to construct a tactile image of the targeted object. This ability is a form of sensory feedback which is important in the development of robots. Such robots will incorporate tactile sensing pads in their end effectors. By using the tactile image of the grasped object, it is possible to determine such factors as the presence, size, shape, texture, and thermal conductivity of the grasped object. The location and orientation of the object, as well as reaction forces and moments, can also be detected. Finally, the tactile image can be used to detect the onset of part slipping. Much of the tactile sensor data processing parallels that of vision sensing. Recognition of contacting objects by extracting and classifying features in the tactile image has been a primary goal. Thus, the description of tactile sensors in the following subsection focuses on transduction methods and their relative advantages and disadvantages.

Proximity sensing, on the other hand, is the detection of approach to a workpiece or obstacle prior to touching. Proximity sensing is required for truly competent general-purpose robots. Even in a highly structured environment where object locations are presumably known, accidental collisions may occur, and foreign objects can intrude. Avoidance of damaging collisions is imperative. Moreover, even if the environment is structured as planned, it is often necessary to slow a working manipulator from a high slew rate to a slow approach just prior to touch. Since workpiece position accuracy always has some tolerance, proximity sensing is still useful.

Many robotic processes require sensors to transduce contact force information for use in loop closure and data-gathering functions. Contact sensors, wrist force/torque sensors, and force probes are used in many applications, such as grasping, assembly, and part inspection. Unlike tactile sensing, which measures pressure over a relatively large area, force sensing measures the action applied at a spot. Tactile sensing is concerned with extracting features of the object being touched, whereas quantitative measurement is of particular interest in force sensing. However, many transduction methods for tactile sensing are appropriate for force sensing as well.

In the last three decades, computer vision has been extensively studied in many application areas, which include character recognition, medical diagnosis, target detection, and remote sensing. The capabilities of commercial vision systems for robotic applications, however, are still limited. One reason for this slow progress is that robotic tasks often require sophisticated vision interpretation, yet demand low cost and high speed, accuracy, reliability, and flexibility. The subsection on vision highlights the factors limiting commercially available computer vision techniques and the methods that facilitate vision applications in robotics.

Resistive and Conductive Transduction 

This technique involves measuring the resistance either through or across the thickness of a conductive elastomer. As illustrated in Figure 14.5.1, the measured resistance changes with the amount of force applied to the material: the deformation of the elastomer alters the particle density within it. The most commonly used elastomers are made from carbon- or silicone-doped rubber, and the construction is such that the sensor is made up of a grid of discrete sites at which the resistance is measured.
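As a rough illustration of how such a sensor might be read, the sketch below scans a small grid of resistive sites through an ADC and converts each reading into an approximate force using a power-law calibration curve. Everything here is an assumption for illustration: the adc_read_mv helper, the voltage-divider circuit, the grid size, and the calibration constants would all depend on the actual sensor design.

```c
#include <math.h>

#define ROWS 4
#define COLS 4

/* Hypothetical HAL call: voltage (in millivolts) measured across the
   elastomer site at (row, col), wired as the lower leg of a voltage
   divider. Stubbed here so the sketch compiles. */
static int adc_read_mv(int row, int col) { (void)row; (void)col; return 2500; }

#define VCC_MV  5000.0
#define R_REF  10000.0    /* fixed divider resistor, ohms   */
#define K_CAL     50.0    /* calibration gain (assumed)     */
#define N_CAL      1.3    /* calibration exponent (assumed) */

/* Divider voltage -> elastomer resistance -> force estimate. The
   power law models the (assumed) drop of resistance with force. */
static double site_force(int row, int col)
{
    double v = (double)adc_read_mv(row, col);
    double r = R_REF * v / (VCC_MV - v);   /* elastomer resistance */
    return K_CAL * pow(1.0 / r, N_CAL);    /* arbitrary force units */
}

/* Scan every forcel to build a tactile image of the contact area. */
void scan_tactile_image(double image[ROWS][COLS])
{
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            image[i][j] = site_force(i, j);
}
```

The resulting image array is exactly the kind of tactile image described above, ready for feature extraction or slip detection.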

A number of the conductive and resistive designs have been quite successful. A design using carbon-loaded rubber originated by Purbrick at MIT formed the basis for several later designs. It was constructed from a simple grid of silicone rubber conductors. Resistance was measured at the electrodes, which corresponds to the applied loads. A novel variation of this design, developed by Raibert, places the conductive sheet rubber over a printed circuit board (PCB) that incorporates VLSI circuitry, so that each forcel not only transduces its data but processes it as well. Each site performs its transduction and processing operations at the same time as all the others; the computer is thus a parallel processor.

End Effector Design Issues

Good end effector design is in many ways the same as good design of any mechanical device. Foremost, it requires:

• A formal understanding of the functional specifications and relevant constraints. In the authors' experience, most design “failures” occurred not through faulty engineering, but through incompletely articulated requirements and constraints. In other words, the end effector solved the wrong problem.

• A “concurrent engineering” approach in which such issues as ease of maintenance, as well as related problems in fixturing, robot programming, etc., are addressed in parallel with end effector design.

• An attention to details in which issues such as power requirements, impact resistance, and sensor signal routing are not left as an afterthought. Some of the main considerations are briefly discussed below.

Sensing

Sensors are vital for some manufacturing applications and useful in many others for detecting error conditions. Virtually every end effector design can benefit from the addition of limit switches, proximity sensors, and force overload switches for detecting improperly grasped parts, dropped parts, excessive assembly forces, etc. These binary sensors are inexpensive and easy to connect to most industrial controllers. The next level of sophistication includes analog sensors such as strain gages and thermocouples. For these sensors, a dedicated microprocessor as well as analog instrumentation is typically required to interpret the signals and communicate with the robot controller. The most complex class of sensors includes cameras and tactile arrays. A number of commercial solutions for visual and tactile imaging are available, and may include dedicated microprocessors and software.
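To make the first two levels concrete, the fragment below sketches how a gripper controller might combine a binary limit switch with an analog strain-gage channel to classify the state of a grasp. The I/O calls (read_limit_switch, adc_read_strain) and the thresholds are hypothetical placeholders for whatever the actual controller provides.

```c
#include <stdbool.h>

/* Hypothetical I/O stubs: substitute the real controller's HAL calls. */
static bool read_limit_switch(void) { return true; }   /* jaw closed on part? */
static int  adc_read_strain(void)   { return 1200; }   /* raw strain reading  */

#define STRAIN_MIN  800    /* below this: part probably missing or dropped */
#define STRAIN_MAX 3000    /* above this: excessive assembly force         */

typedef enum { GRASP_OK, GRASP_EMPTY, GRASP_OVERLOAD } grasp_status_t;

grasp_status_t check_grasp(void)
{
    if (!read_limit_switch())
        return GRASP_EMPTY;                     /* jaws never reached the part */
    int s = adc_read_strain();
    if (s < STRAIN_MIN) return GRASP_EMPTY;     /* part slipped or dropped     */
    if (s > STRAIN_MAX) return GRASP_OVERLOAD;  /* force overload condition    */
    return GRASP_OK;
}
```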

Although vision systems are usually thought of as separate from end effector design, it is sometimes desirable to build a camera into the end effector; this approach can reduce cycle times because the robot does not have to bring parts to a separate station for inspection.
 
 Actuation

The actuation of industrial end effectors is most commonly pneumatic, due to the availability of compressed air in most applications and the high power-to-weight ratio that can be obtained. The grasp force is controlled by regulating air pressure. The chief drawbacks of pneumatic actuation are the difficulty of achieving precise position control for active hands (due primarily to the compressibility of air) and the need to run air lines down what is otherwise an all-electric robot arm. Electric motors are also common; in these, the grasp force is regulated via the motor current. A variety of drive mechanisms can be employed between the motor or cylinder and the gripper jaws, including worm gears, rack-and-pinion drives, toggle linkages, and cams, to achieve either uniform grasping forces or a self-locking effect. For a comparison of different actuation technologies, with emphasis on servo-controlled applications, see Hollerbach et al. (1992).
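Since grasp force in an electric gripper is proportional to motor torque, and hence to motor current, a current set-point can be computed from the desired force. The sketch below shows that conversion for a simple rack-and-pinion jaw drive; the torque constant, gear ratio, pinion radius, and efficiency are illustrative assumptions, not data for any particular gripper.

```c
#include <stdio.h>

/* Illustrative drive parameters (assumed, not from a real gripper). */
#define KT       0.05   /* motor torque constant, N*m per A  */
#define RATIO    30.0   /* gearbox reduction                 */
#define R_PINION 0.01   /* pinion radius, m                  */
#define EFF      0.7    /* combined gearbox/drive efficiency */

/* Current needed at the motor to produce a desired jaw force:
   force = (KT * i * RATIO * EFF) / R_PINION, solved for i. */
double grasp_current(double desired_force_n)
{
    return desired_force_n * R_PINION / (KT * RATIO * EFF);
}

int main(void)
{
    double f = 50.0;  /* desired grasp force in newtons */
    printf("Command %.2f A for a %.0f N grasp\n", grasp_current(f), f);
    return 0;
}
```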

Fundamentals and Design Issues

A robot manipulator is fundamentally a collection of links connected to each other by joints, typically with an end effector (designed to contact the environment in some useful fashion) connected to the mechanism. A typical arrangement is to have the links connected serially by the joints in an open-chain fashion. Each joint provides one or more degrees of freedom to the mechanism.
 
Manipulator designs are typically characterized by the number of independent degrees of freedom in the mechanism, the types of joints providing the degrees of freedom, and the geometry of the links connecting the joints. The degrees of freedom can be revolute (relative rotational motion θ between joints) or prismatic (relative linear motion d between joints). A joint may have more than one degree of freedom. Most industrial robots have a total of six independent degrees of freedom. In addition, most current robots have essentially rigid links (we will focus on rigid-link robots throughout this section).

Robots are also characterized by the type of actuators employed. Typically manipulators have hydraulic or electric actuation. In some cases where high precision is not important, pneumatic actuators are used.

 A number of successful manipulator designs have emerged, each with a different arrangement of joints and links. Some “elbow” designs, such as the PUMA robots and the SPAR Remote Manipulator System, have a fairly anthropomorphic structure, with revolute joints arranged into “shoulder,” “elbow,” and “wrist” sections. A mix of revolute and prismatic joints has been adopted in the Stanford Manipulator and the SCARA types of arms. Other arms, such as those produced by IBM, feature prismatic joints for the “shoulder,” with a spherical wrist attached. In this case, the prismatic joints are essentially used as positioning devices, with the wrist used for fine motions.

The above designs have six or fewer degrees of freedom. More recent manipulators, such as those of the Robotics Research Corporation series of arms, feature seven or more degrees of freedom. These arms are termed kinematically redundant, which is a useful feature, as we will see later.

Key factors that influence the design of a manipulator are the tractability of its geometric (kinematic) analysis and the size and location of its workspace. The workspace of a manipulator can be defined as the set of points that are reachable by the manipulator (with fixed base). Both shape and total volume are important. Manipulator designs such as the SCARA are useful for manufacturing since they have a simple semicylindrical connected volume for their workspace (Spong and Vidyasagar, 1989), which facilitates workcell design. Elbow manipulators tend to have a larger workspace volume; however, the workspace is often more difficult to characterize. The kinematic design of a manipulator can tailor the workspace, to some extent, to the operational requirements of the robot.

In addition, if a manipulator can be designed so that it has a simplified kinematic analysis, many planning and control functions will in turn be greatly simplified. For example, robots with spherical wrists tend to have much simpler inverse kinematics than those without this feature. Simplification of the kinematic analysis required for a robot can significantly enhance the real-time motion planning and control performance of the robot system. For the rest of this section, we will concentrate on the kinematics of manipulators.

For the purposes of analysis, the set of joint variables (which may contain both revolute and prismatic variables) is collected into a vector q, which uniquely defines the geometric state, or configuration, of the robot. However, task description for manipulators is most naturally expressed in terms of a different set of task coordinates. These can be the position and orientation of the robot end effector, or of a special task frame, and are denoted here by Y. Thus Y most naturally represents the performance of a task, and q most naturally represents the mechanism used to perform the task. Each of the coordinate systems q and Y contains information critical to the understanding of the overall status of the manipulator. Much of the kinematic analysis of robots therefore centers on transformations between the various sets of coordinates of interest.

Manipulator Kinematics

The study of manipulator kinematics at the position (geometric) level separates naturally into two subproblems: (1) finding the position/orientation of the end effector, or task, frame, given the angles and/or displacements of the joints (Forward Kinematics); and (2) finding possible angles/displacements of the joints given the position/orientation of the end effector, or task, frame (Inverse Kinematics). At the velocity level, the Manipulator Jacobian relates joint velocities to end effector velocities and is important in motion planning and for identifying Singularities. In the case of Redundant Manipulators, the Jacobian is particularly crucial in planning and controlling robot motions. We will explore each of these issues in turn in the following subsections.
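As a concrete illustration of both ideas, the sketch below computes the forward kinematics and the 2 × 2 Jacobian of a planar arm with two revolute joints. This toy case is standard in the kinematics literature, but the specific link lengths and joint angles here are illustrative assumptions only.

```c
#include <math.h>
#include <stdio.h>

/* Assumed link lengths for a planar 2R arm (illustrative values). */
#define L1 0.40
#define L2 0.30

/* Forward kinematics: joint angles (q1, q2) -> end effector (x, y). */
void fk(double q1, double q2, double *x, double *y)
{
    *x = L1 * cos(q1) + L2 * cos(q1 + q2);
    *y = L1 * sin(q1) + L2 * sin(q1 + q2);
}

/* Manipulator Jacobian J such that [xdot, ydot]^T = J [q1dot, q2dot]^T.
   Its determinant, L1*L2*sin(q2), vanishes at q2 = 0 or pi: the
   stretched-out and folded-back singularities of the elbow arm. */
void jacobian(double q1, double q2, double J[2][2])
{
    J[0][0] = -L1 * sin(q1) - L2 * sin(q1 + q2);
    J[0][1] = -L2 * sin(q1 + q2);
    J[1][0] =  L1 * cos(q1) + L2 * cos(q1 + q2);
    J[1][1] =  L2 * cos(q1 + q2);
}

int main(void)
{
    double x, y, J[2][2];
    fk(0.3, 0.9, &x, &y);
    jacobian(0.3, 0.9, J);
    printf("end effector at (%.3f, %.3f), det J = %.4f\n",
           x, y, J[0][0] * J[1][1] - J[0][1] * J[1][0]);
    return 0;
}
```

Inverse kinematics runs the first mapping backward (for the 2R arm a closed-form solution exists), while a singular Jacobian signals configurations where some end effector velocities cannot be produced.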

Tuesday, April 23, 2013

Vision for Robotics

The ability of a robot to sense its environment is a prerequisite for any decision making. Robots have traditionally used mainly range sensors such as sonars and laser range finders. However, camera and processing technology has recently advanced to the point where modern robots are increasingly equipped with vision-based sensors.

Indeed, on the AIBO, the camera is the main source of sensory information, and as such, we placed a strong emphasis on the vision component of our team. Since computer vision is an area of active research, there is not yet any perfect solution. As such, our vision module has undergone continual development over the course of this multi-year project. This lecture focuses on the progress made during our first year as an example of what can be done relatively quickly. During that time, the vision reached a sufficient level to support all of the localization and behavior achievements described in the rest of this lecture. Our progress since the first year is detailed in our 2004 and 2005 team technical reports, as well as a series of research papers. Our vision module processes the images taken by the CMOS camera located on the AIBO. The module identifies colors in order to recognize objects, which are then used to localize the robot and to plan its operation.

Our visual processing is done using the established procedure of color segmentation followed by object recognition. Color segmentation is the process of classifying each pixel in an input image as belonging to one of a number of predefined color classes based on the knowledge of the ground truth on a few training images. Though the fundamental methods employed in this module have been applied previously (both in RoboCup and in other domains), it has been built from scratch like all the other modules in our team. Hence, the implementation details provided are our own solutions to the problems we faced along the way.
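A minimal sketch of this classify-every-pixel step is shown below, assuming a precomputed lookup table indexed by quantized YCbCr values; the table itself would be filled offline from hand-labeled training images. The class set, table resolution, and names are illustrative, not our team's actual implementation details.

```c
#include <stdint.h>

/* Color classes (illustrative set). */
enum { C_UNKNOWN, C_ORANGE_BALL, C_GREEN_FIELD, C_WHITE_LINE, NUM_CLASSES };

/* Lookup table over a quantized color space: 4 bits per channel keeps
   it small (16*16*16 entries). In practice it is trained offline from
   hand-labeled pixels in a few training images. */
static uint8_t color_lut[16][16][16];

static inline uint8_t classify_pixel(uint8_t y, uint8_t cb, uint8_t cr)
{
    return color_lut[y >> 4][cb >> 4][cr >> 4];
}

/* Segment a whole frame: label every pixel with its color class. */
void segment(const uint8_t *y, const uint8_t *cb, const uint8_t *cr,
             uint8_t *labels, int npixels)
{
    for (int i = 0; i < npixels; i++)
        labels[i] = classify_pixel(y[i], cb[i], cr[i]);
}
```

The label image is then the input to region building and, from there, to object recognition.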

We have drawn some of the ideas from the previous technical reports of CMU [89] and UNSW [9]. This module can be broadly divided into two stages: (i) low-level vision, where the color segmentation and region building operations are performed, and (ii) high-level vision, wherein object recognition is accomplished and the position and bearing of the various objects in the visual field are determined.

Robotics technology has recently advanced to the point of being widely accessible for relatively low-budget research, as well as for graduate, undergraduate, and even secondary and primary school education. However, for most interesting robot platforms, there remains a substantial learning curve or “ramp-up cost” to learning enough about the robot to be able to use it effectively. This learning curve cannot be easily eliminated with published curricula or how-to guides, both because the robots tend to be fairly complex and idiosyncratic, and, more importantly, because robot technology is advancing rapidly, often making previous years’ models obsolete as quickly as competent educational guides can be created.



Operating Systems for Embedded Systems

Embedded systems can have anything from a complex real-time operating system, such as Linux, down to just the application program with no operating system whatsoever. It all depends on the intended application area. For the EyeCon controller, we developed our own operating system, RoBIOS (Robot Basic Input Output System), a very lean real-time operating system that provides a monitor program as user interface, system functions (including multithreading, semaphores, timers), plus a comprehensive device driver library for all kinds of robotics and embedded systems applications. This includes serial/parallel communication, DC motors, servos, various sensors, graphics/text output, and input buttons.


The RoBIOS monitor program starts at power-up and provides a comprehensive control interface to download and run programs, load and store programs in flash-ROM, test system components, and set a number of system parameters. An additional system component, independent of RoBIOS, is the Hardware Description Table (HDT, see Appendix C), which serves as a user-configurable hardware abstraction layer [Kasper et al. 2000], [Bräunl 2001]. RoBIOS is a software package that resides in the flash-ROM of the controller and acts on the one hand as a basic multithreaded operating system and on the other hand as a large library of user functions and drivers to interface all on-board and off-board devices available for the EyeCon controller. RoBIOS offers a comprehensive user interface which is displayed on the integrated LCD after start-up. Here the user can download, store, and execute programs, change system settings, and test any connected hardware that has been registered in the HDT.


The monitor program and a user program are shown in the photo. Hardware access from both the monitor program and user programs is through RoBIOS library functions. The monitor program also deals with downloading application program files, storing/retrieving programs to/from ROM, etc.

The RoBIOS operating system and the associated HDT both reside in the controller’s flash-ROM, but they come from separate binary files and can be downloaded independently. This allows updating of the RoBIOS operating system without having to reconfigure the HDT and vice versa. Together the two binaries occupy the first 128KB of the flash-ROM; the remaining 384KB are used to store up to three user programs with a maximum size of 128KB each.
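The layout described above might be captured in a header like the following sketch. Only the sizes come from the text; the base address and symbol names are invented for illustration.

```c
/* Flash-ROM layout sketch for a 512KB device, following the sizes in
   the text. Base address and names are illustrative assumptions. */
#define FLASH_BASE      0x00000000UL          /* assumed base address  */
#define KB(n)           ((unsigned long)(n) * 1024UL)

#define ROBIOS_HDT_BASE FLASH_BASE            /* RoBIOS + HDT binaries */
#define ROBIOS_HDT_SIZE KB(128)

#define USER_SLOTS      3                     /* three user programs   */
#define USER_SLOT_SIZE  KB(128)               /* 128KB maximum each    */
#define USER_BASE       (FLASH_BASE + ROBIOS_HDT_SIZE)
#define USER_SLOT(n)    (USER_BASE + (n) * USER_SLOT_SIZE)
```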

Since RoBIOS is continuously being enhanced and new features and drivers are being added, the growing RoBIOS image is stored in compressed form in ROM. User programs may also be compressed with the utility srec2bin before downloading. At start-up, a bootstrap loader transfers the compressed RoBIOS from ROM to an uncompressed version in RAM. In a similar way, RoBIOS unpacks each user program when copying from ROM to RAM before execution.

User programs and the operating system itself run faster in RAM than in ROM because of faster memory access times. Each operating system comprises machine-independent parts (for example, higher-level functions) and machine-dependent parts (for example, device drivers for particular hardware components). Care has been taken to keep the machine-dependent part as small as possible, so that porting to different hardware can be performed in the future at minimal cost.

Applications to Robot Control

Applications of genetic algorithms to robot control are briefly discussed in the following sections. These topics are dealt with in more depth in the following chapters on behavior-based systems and gait evolution.

Genetic algorithms have been applied to the evolution of neural controllers for robot locomotion by numerous researchers. This approach uses the genetic algorithm to evolve the weightings between interconnected neurons to construct a controller that achieves the desired gait. Neuron inputs are taken from various sensors on the robot, and the outputs of certain neurons are directly connected to the robot’s actuators. One group successfully generated gaits for a hexapod robot using a simple traditional genetic algorithm with one-point crossover and mutation. A simple neural network controller was used to control the robot, and the fitness of the individuals generated was evaluated by human designers. Another group evolved a controller for a simulated salamander using an enhanced genetic algorithm. The neural model employed was biologically based and very complex; however, the system developed was capable of operating without human fitness evaluators.
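To make the mechanics concrete, here is a minimal sketch of one generation of such a genetic algorithm over real-valued neural weight vectors, with one-point crossover and mutation. The fitness function is a stub standing in for either human evaluation or a simulated gait score; the population size, rates, and genome length are arbitrary illustrative choices.

```c
#include <stdlib.h>
#include <string.h>

#define POP      20      /* population size (illustrative)      */
#define GENES    16      /* number of neural weights per genome */
#define MUT_RATE 0.05    /* per-weight mutation probability     */

typedef struct { double w[GENES]; double fitness; } genome_t;

static double rnd(void) { return rand() / (double)RAND_MAX; }

/* Stub: in practice this would run the gait in simulation (or ask a
   human evaluator) and return a score. */
static double evaluate(const genome_t *g) { (void)g; return rnd(); }

/* One-point crossover: child takes parent a's weights up to the cut,
   then parent b's weights after it. */
static void crossover(const genome_t *a, const genome_t *b, genome_t *child)
{
    int cut = rand() % GENES;
    memcpy(child->w, a->w, cut * sizeof(double));
    memcpy(child->w + cut, b->w + cut, (GENES - cut) * sizeof(double));
}

/* Mutation: perturb each weight with small probability. */
static void mutate(genome_t *g)
{
    for (int i = 0; i < GENES; i++)
        if (rnd() < MUT_RATE)
            g->w[i] += rnd() - 0.5;   /* small random nudge */
}

/* Tournament selection: the fitter of two random individuals. */
static int select_parent(const genome_t pop[POP])
{
    int a = rand() % POP, b = rand() % POP;
    return (pop[a].fitness > pop[b].fitness) ? a : b;
}

/* One generation: evaluate, then breed a full replacement population. */
void step_generation(genome_t pop[POP])
{
    genome_t next[POP];
    for (int i = 0; i < POP; i++)
        pop[i].fitness = evaluate(&pop[i]);
    for (int i = 0; i < POP; i++) {
        crossover(&pop[select_parent(pop)], &pop[select_parent(pop)], &next[i]);
        mutate(&next[i]);
    }
    memcpy(pop, next, sizeof(next));
}
```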

Genetic algorithms have been used in a variety of different ways to produce new behavioral controllers or to optimize existing ones. Ram et al. used a genetic algorithm to control the weightings and internal parameters of a simple reactive schema controller. In schema-based control, primitive motor and perceptual schemas do simple distributed processing of inputs (taken from sensors or other schemas) to produce outputs. Motor schemas asynchronously receive input from perceptual schemas to produce response outputs intended to drive an actuator. A schema arbitration controller produces output by summing contributions from independent schema units, each contributing to the final output signal sent to the actuators according to a weighting. These weightings are usually manually tuned to produce desired system behavior from the robot.
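The arbitration step itself is just a weighted sum of schema outputs, as in this fragment; the types and names are illustrative, and real schemas would typically produce vectors (e.g., velocity commands) rather than scalars.

```c
#define NUM_SCHEMAS 4

/* Each schema produces a response output; arbitration scales each by
   its weighting and sums the contributions into one actuator command. */
double arbitrate(const double output[NUM_SCHEMAS],
                 const double weight[NUM_SCHEMAS])
{
    double command = 0.0;
    for (int i = 0; i < NUM_SCHEMAS; i++)
        command += weight[i] * output[i];   /* weighted contribution */
    return command;
}
```

A genetic algorithm like the one sketched earlier would then treat the weight array as the genome to be optimized, replacing the usual manual tuning.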

The approach taken by Ram et al. was to use a genetic algorithm to determine an optimal set of schema weightings for a given fitness function. By tuning the parameters of the fitness function, robots optimized for the qualities of safety, speed, and path efficiency were produced. The behavior of each of these robots was different from any of the others. This graphically demonstrates how behavioral outcomes may be easily altered by simple changes in a fitness function.

Example Evolution

Harvey used a genetic algorithm to evolve a robot neural net controller to perform the tasks of wandering and maximizing the enclosed polygonal area of a path within a closed space. The controller used sensors as its inputs and was directly coupled to the driving mechanism of the robot. A similar approach was taken in Venkitachalam (2002), but the outputs of the neural network were used to control schema weightings. The neural network produces dynamic schema weightings in response to input from perceptual schemas.

Analog versus Digital Sensors

A number of sensors produce analog output signals rather than digital signals. This means an A/D converter is required to connect such a sensor to a microcontroller. Typical examples of such sensors are:
• Microphone
• Analog infrared distance sensor
• Analog compass
• Barometer sensor
Digital sensors, on the other hand, are usually more complex than analog sensors and often also more accurate. In some cases the same sensor is available in either analog or digital form, where the digital version is simply the identical analog sensor packaged with an A/D converter.

The output signal of digital sensors can have different forms. It can be a parallel interface (for example 8 or 16 digital output lines), a serial interface (for example following the RS232 standard) or a “synchronous serial” interface.

The expression “synchronous serial” means that the converted data value is read bit by bit from the sensor. After setting the chip-enable line for the sensor, the CPU sends pulses via the serial clock line and at the same time reads 1 bit of information from the sensor’s single-bit output line for every pulse (for example, on each rising edge). See the photo below for an example of a sensor with a 6-bit-wide output word.
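A bit-banged read of such a device might look like the following sketch, assuming hypothetical pin-level helpers (set_chip_enable, set_clock, read_data_line) standing in for the controller's actual GPIO API, and a 6-bit word clocked out MSB first on each rising edge.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical pin-level helpers; substitute the controller's GPIO API. */
extern void set_chip_enable(bool on);
extern void set_clock(bool high);
extern bool read_data_line(void);

/* Read one 6-bit conversion result, MSB first, one bit per clock pulse. */
uint8_t read_sensor_6bit(void)
{
    uint8_t value = 0;
    set_chip_enable(true);            /* select the sensor          */
    for (int i = 0; i < 6; i++) {
        set_clock(true);              /* rising edge: sensor shifts */
        value = (value << 1) | (read_data_line() ? 1 : 0);
        set_clock(false);             /* complete the clock pulse   */
    }
    set_chip_enable(false);
    return value;                     /* 0 .. 63 */
}
```

Real devices also specify setup and hold times, so a production driver would insert short delays around each clock edge.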


Mechanical environment

Considering the structure of the bone's mechanical environment, an external mechanical environment can be distinguished from an internal one. The external environment is connected with the surroundings of the human body, which deliver high load impulses (forces, moments, etc.) that shape the internal environment. Loads are transmitted from the external environment through the fixator frame and the bone screws to the internal environment. In this way, the adhesion zone can be partially or fully relieved, depending on the mechanical profile of the fixator and its ability to damp and carry loads.
Dynastab Mechatronics 2000 with measurement module

The internal environment is directly connected with the closest surroundings of the adhesion zone, and in this way it shapes the future mechanical profile of the adhesion. As is commonly known, micromovements at the bone fracture can stimulate the growth process. Care should be taken to shape these micromovements (range and loads) properly, to ensure that the adhesion growth and remodelling process proceeds in the right way.

Mechanical environment as a stimulation source in the broken bone tissue regeneration process

To respond to the time-varying mechanical loads that occur at the bone fracture, the fixator frame should be able to change its mechanical configuration. Tracking the loads that occur can be very helpful in building an individual healing profile for the patient. This information can be used in two ways. The first is connected with actively securing the fracture zone: according to the occurring forces, the fixator should reconfigure itself and assume the proper shape in a safe way.

The second can be used in an active bone stimulation process, for which the right profile of bone loading and unloading must first be created. Only safe stimulation can properly accelerate the biological processes without mistakes that cannot be successfully corrected later.