COMPLIANT WHOLE-BODY CONTROL OF HUMANOID ROBOTS

A THESIS SUBMITTED TO THE DEPARTMENT OF MECHANICAL ENGINEERING AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF ENGINEER

Taizo Yoshikawa
March 2019

© 2019 by Taizo Yoshikawa. All Rights Reserved.
Re-distributed by Stanford University under license with the author. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License: http://creativecommons.org/licenses/by-nc/3.0/us/
This dissertation is online at: http://purl.stanford.edu/vp618qh1941

Approved for the department: Oussama Khatib, Adviser
Approved for the Stanford University Committee on Graduate Studies: Patricia J. Gumport, Vice Provost for Graduate Education
This signature page was generated electronically upon submission of this thesis in electronic format. An original signed hard copy of the signature page is on file in University Archives.

Abstract

A
major obstacle that prevents humanoid robots from accomplishing real-world tasks is their inability to physically interact with, and effectively manipulate, the most common objects generally found in human environments. Even tasks that seem simple for a human remain a significant challenge for most robots. Robots generally employ precision to perform a manipulation task. Humans, in contrast, employ compliance through tactile and force feedback to overcome their imprecision, allowing them to resolve uncertainties associated with the task. The lack of compliance and force control has indeed been a major limiting factor in the ability of robots to interact and manipulate in human environments. One of the major objectives of this research is to endow humanoid robots with whole-body compliant motion abilities. With compliance, a robot overcomes position uncertainties by moving in directions that reduce contact forces, which in turn directs it towards its goal. A whole-body framework was
designed to allow the robot to compliantly interact with its environment at multiple contact points. The synthesis of compliant tasks is greatly simplified by being independent of postures and constraints, which are automatically integrated in the control hierarchy. This research focuses on the development of (I) sensor-based whole-body compliant motion primitives, (II) contact sensing and contact force control, (III) whole-body multi-contact for extended support, including kneeling, crawling, and leaning on a table, along with a locomotion strategy to improve support in unstructured terrains, (IV) dynamic collision-free motion planning, and (V) dynamic collision-free walking path planning.

Contents

1 Introduction
   1.1 Objective
   1.2 Background
   1.3 Strategy
2 Torque Control on Position Controlled Robot
   2.1 Conventional Position Control System
   2.2 Conventional Motor Controller
   2.3 Conventional Joint Position Controller
   2.4 Conventional Joint Velocity Controller
   2.5 Approach to Transfer Torque Command to Position Command
   2.6 Experimental Derivation of Torque to Position Transformer
   2.7 Derivation of Torque to Position Transformer
   2.8 Derivation of Torque to Velocity Transformer
   2.9 Experimental Validation
      2.9.1 Torque to Position Transformer on PUMA Robot
      2.9.2 Torque to Position Transformer on Humanoid Robot
      2.9.3 Torque to Velocity Transformer on Humanoid Robot
   2.10 Conclusion
3 Sensor-Based Whole-Body Compliant Motion Primitives
   3.1 Lower Body Control
      3.1.1 ZMP Control
      3.1.2 Stable Balance Control
   3.2 Unified Whole-Body Control
      3.2.1 Whole Body Coordination
      3.2.2 Upper-Body Posture Control
   3.3 Whole Body Compliant Motion Control Test
   3.4 Whole-Body Compliant Balance Control
      3.4.1 Whole-Body Balance Estimation and Motion Control
      3.4.2 Estimation of Balance Status
      3.4.3 Motion Controller
      3.4.4 Experimental Validation
   3.5 Conclusion
4 Sensor-Based Contact Estimation and Control Primitives
   4.1 Decoupled Task Dynamics and Motion Control
      4.1.1 Modeling Dynamic Contact Jacobian
      4.1.2 Experimental Validation
   4.2 Contact Estimation by Force Sensor and Whole-Body Control
      4.2.1 Contact Estimation and Whole-Body Control
      4.2.2 Experimental Validation: Whole-Body Control in Contact
      4.2.3 Gravity Assist Control
      4.2.4 Experimental Validation: Gravity Assist Control
   4.3 Contact Estimation by Tactile Skin Sensor
      4.3.1 Contact Estimation by Tactile Skin
      4.3.2 Sensing Network
      4.3.3 Sensor ECU and Sensors
      4.3.4 Setting Tactile Sensor Skin
      4.3.5 Process of Contact Force Control and Estimation
   4.4 Experimental Validation
      4.4.1 Pushing Force Command
      4.4.2 Development of Tactile Sensor Suit by QTC
   4.5 Conclusion
5 Whole-Body Multi-Contact for Extended Support
   5.1 Extending Support for Manipulation
   5.2 Strategy to Extended Support
   5.3 Gravity Stand
   5.4 Experimental Validation: Gravity Stand
   5.5 Whole-Body Multi-Contact for Extended Support
   5.6 Experimental Validation: Multi-Contact for Extended Support
   5.7 Whole-Body Gravity Assist: Kneeling Motion
   5.8 Whole-Body Gravity Assist: Crawling Motion
   5.9 Conclusion
6 Dynamic Collision-Free Motion Planning and Control
   6.1 Strategy to Dynamic Collision-Free Motion Planning
   6.2 Master and Slave Control System
   6.3 Master Control System
      6.3.1 Master Control GUI: Dynamics Simulator SAI-I
      6.3.2 Whole-Body Controller in SAI-II
      6.3.3 Haptic Interface
   6.4 Slave Control System
      6.4.1 Contact-Free Tracking Control
      6.4.2 Computed Torque Control
      6.4.3 Whole-Body Control
   6.5 Experimental Validation: Master and Slave Control System
   6.6 Tele-Operation and Force Control
      6.6.1 Force Control by Active Observer
      6.6.2 Tele-Operation System for Active Observer
      6.6.3 Experimental Validation: Force Control by Active Observer
   6.7 Elastic Motion Planning
      6.7.1 Strategy for Elastic Motion Planning
      6.7.2 Experimental Validation
   6.8 Conclusion
7 Dynamic Collision-Free Walking Path Planning
   7.1 Background
   7.2 Trajectory Generation by RRT
      7.2.1 Robot Platform
      7.2.2 Sensor Network: Detection of Robot Position
      7.2.3 Sensor Network: Detection of Obstacles and Pedestrians
      7.2.4 Modeling Personal Space
      7.2.5 Definition of Configuration Space
      7.2.6 Trajectory Generation by RRT
      7.2.7 Initial Test
   7.3 Analysis of Initial Test
      7.3.1 Measurements of Human Walk
      7.3.2 Analysis of RRT for Dynamic Collision Avoidance
      7.3.3 Analysis of Use Case 1
      7.3.4 Analysis of Use Case 2
      7.3.5 Pedestrian Model in Configuration Space
   7.4 Dynamic Trajectory Generation
      7.4.1 System Structure
      7.4.2 Sensor Data
      7.4.3 Dynamic Trajectory Generation System
      7.4.4 3D RRT Process in Configuration Space
      7.4.5 Elastic Strip Method in Potential Field
      7.4.6 Tracking Control
   7.5 Experimental Validation
      7.5.1 Validation of Proposed Dynamic Trajectory Generation
      7.5.2 Real Sensor Simulation
   7.6 Conclusion
8 Conclusion

List of Figures

1 Whole-body contact and control of humanoid robots
2.1 Conventional position control system
2.2 Conventional position controller
2.3 Framework of the proposed torque control system
2.4 Block diagram of the transformer for the position-controlled robot
2.5 Bode plot of T(s), the motor and transmission system of joint 3
2.6 Relationship between the ideal torque and effective torque
2.7 Bode plot of 2nd-order approximation of X_3(s)
2.8 Bode plot after damping is increased
2.9 Block diagram of TPT
2.10 Torque behaviors during gravity compensation mode (under-damped)
2.11 Torque behaviors during gravity compensation mode (increased damping)
2.12 x-y coordinates of end-effector for a square trajectory
2.13 Hardware and system specifications of HONDA ASIMO robot
2.14 Open-loop torque-position transformer zero torque command test
2.15 Gravity compensation torque command test for arm joints
2.16 Gravity compensation torque command data for the right arm
2.17 Operational space position control test
2.18 Operational space compliance control test
2.19 Gravity torque compensation test of Torque Transformer
2.20 The Operational Space Control on the Torque Transformer
3.1 ZMP control of ASIMO robot
3.2 Target ZMP and center of ground reaction force
3.3 Gyro sensor to estimate correct orientation of torso
3.4 Coordination between upper-body Operational Space Controller and ASIMO's balance controller
3.5 Definition of the constraint forces for the individual hip posture command
3.6 Constraint forces between arm and torso
3.7 Block diagram of whole-body controller
3.8 Whole-body motion control test
3.9 Image of whole-body balance planning
3.10 Flowchart of whole-body balance planning
3.11 Estimation of contact force
3.12 Estimation of COM and ZMP
3.13 Contact avoidance by upper-body control
3.14 Reduction of fall moment by controlling height of hip
3.15 Block diagram of upper-body motion controller
3.16 Flowchart to estimate fall
3.17 Image of stepping motion to extend support polygon
3.18 Whole-body balance status and motion control test
4.1 Dynamics for contact motion control
4.2 Definition of contact Jacobian
4.3 Contact with a table by the forearm and by the elbow
4.4 Whole-body control and contact with a table by the forearm and by the elbow
4.5 Block diagram of whole-body controller including contact Jacobian
4.6 Contact estimation in horizontal direction and whole-body control
4.7 Contact estimation in vertical direction and whole-body control
4.8 Pushing a table by right hand with force and position control
4.9 Contact force command and contact force measured by the force sensor
4.10 Force control mode in the Operational Space Formulation
4.11 Operational space force control and joint space control
4.12 Experimental validation of Gravity Assist
4.13 Tactile sensor skin on the forearm
4.14 Three-layered structure by FST
4.15 Tactile sensor skin and contact with the environment
4.16 Structure of tactile sensor skin
4.17 Sensing Network System (SNS)
4.18 Sensor layer by FSR tactile sensor on the forearm
4.19 Cross-sectional side view of a portion of the tactile sensor skin
4.20 Calibration result of the tactile sensor skin
4.21 Algorithm to process contact and control
4.22 Side and perspective top view of a layer of tactile sensor
4.23 Contact force control and motion control test
4.24 Measured contact force while moving
4.25 Design of tactile sensor suit by QTC
4.26 ASIMO wearing the tactile sensor suit
4.27 Contact detection by the tactile sensor while tracking
4.28 Contact detection by the tactile sensor and switching control mode
5.1 Expanding manipulation workspace through extended contact support
5.2 Strategy to realize compliant and passive balance control
5.3 Block diagram of whole-body motion controller
5.4 Compliant standing mode by Gravity Stand
5.5 Experimental validation of Gravity Stand mode
5.6 Manipulation capabilities of humanoids by extending their support
5.7 Manipulation with extended support by Gravity Stand and Gravity Assist
5.8 Manipulation with extended support with teaching playback control
5.9 Control status of whole-body Gravity Assist for kneeling motion
5.10 Experimental validation of whole-body Gravity Assist for kneeling motion
5.11 Control status of whole-body Gravity Assist for crawling motion
5.12 Experimental validation of whole-body Gravity Assist for crawling motion
6.1 Approach for contact-free motion control in the environment
6.2 Master and slave control system and network sensing system
6.3 Specifications of Omega.7
6.4 Haptic user interface by SAI
6.5 Interface of the Type IV On-Line Trajectory Generation algorithm
6.6 Tracking pattern implemented
6.7 On-Line Trajectory Generation algorithm on the HONDA ASIMO robot
6.8 System of Motion Planner and Whole-Body Controller
6.9 Master-slave control system for contact tracking I
6.10 Master-slave control system for contact tracking II
6.11 Master-slave control system for contact tracking III
6.12 Block diagram of operational space control with contact force control
6.13 AOB design for force control
6.14 Tele-operation system for Active Observer
6.15 Force controller test by HONDA ASIMO and Omega haptics
6.16 Result of force controller test
6.17 Force controller test by HONDA ASIMO and Omega haptics
6.18 Concept of Collision-Free Motion Planner
6.19 Elastic Trajectory Modification with multiple obstacles
6.20 Simulation of Elastic Motion Planner and OTG
6.21 Experimental validation of Elastic Motion Planner and OTG
7.1 Image of robot motion planning in real environment
7.2 HONDA Humanoid Robot ASIMO
7.3 Position error canceling using ground marker
7.4 Process of obstacle detection by LRF sensor network
7.5 LRF sensor network and obstacle detection
7.6 Definition of personal space
7.7 Configuration space
7.8 Path generation by RRT
7.9 Use case 1 test
7.10 Use case 1 test: pedestrian approaching slowly
7.11 Use case 2 test
7.12 Data of human walking in a straight line measured by LRF sensor on the shielding
7.13 Data of human walking in a straight line for 0.4 [m/s], 0.6 [m/s] and 0.8 [m/s]
7.14 Continuous path generation by RRT
7.15 Analysis of use case 1
7.16 Analysis of use case 2
7.17 Human walk estimation model
7.18 Human walk estimation model and robot
7.19 Simulation of human walk estimation model and robot
7.20 System structure
7.21 Sensing Network System structure
7.22 Dynamic Trajectory Generation System
7.23 Selection of Trajectory Generator
7.24 Trajectory generation by RRT
7.25 3D RRT process in configuration space
7.26 Generated candidates of trajectories in XY configuration space
7.27 RRT process
7.28 Process of Elastic Strip in Potential Field
7.29 Potential field of the environment and obstacles
7.30 Elastic Strip method
7.31 Image of Elastic Motion Planning
7.32 Image of forces acting on the segments
7.33 Timing of forces
7.34 Tracking control
7.35 Collision between pedestrian cone model and a model of robot path at arbitrary time
7.36 Process of trajectory generation and trajectory modification
7.37 Real-time simulation in the environment I
7.38 Real-time simulation in the environment II

Acknowledgement

The Honda-Stanford humanoid robot project started in 2001, and I was personally in charge of this collaboration from 2001 to 2013. I worked on this project consistently, both in Japan and at Stanford, to create innovative progress in humanoid robotics. At the start of this project, ASIMO was the most advanced robot in the world and was still only part of a future dream. Through this project, we focused on real-world problems of robot interaction in our daily environment, aiming to achieve new advanced capabilities
in humanoid robotics. It was very interesting to see how the robot rapidly improved its abilities as we all worked on this project, which involved outstanding Stanford graduate students. At Stanford, we had many discussions on how to bring the HONDA ASIMO robot into humans' daily life. Through these discussions, I realized that Professor Khatib loves robots, especially our robot ASIMO; this might be one of the reasons why this project continued for over ten years. I am very proud to say that our project stimulated other researchers in the world and contributed to the larger robotics community. It is true that our results, which are discussed in this thesis, are still cutting edge. Through this research, I gave a major effort to this project, and the project made me grow into a mature researcher. Consequently, I arrived at my action philosophy: "Technology without settled convictions is lame; convictions without technology is blind." I would like to thank Professor Khatib, whose enormous support and insightful advice were invaluable during the course of my study. I was influenced by his way of thinking, his research style, and his settled convictions about robotics. I would like to thank Professor Cutkosky for invaluable discussions about future robotics. I would also like to thank Professor Sentis, Professor Park, Dr. Conti, and Dr. Kwon, who were PhD students when we worked together on this project. They gave me invaluable comments and warm encouragement. I would like to thank Dr. Takenaka, who made innovative breakthroughs in humanoid robotics at HONDA, for his enormous support. Finally, I would also like to express my gratitude to my family for their moral support and warm encouragement.

CHAPTER 1
Introduction

1.1 Objective

A major obstacle that prevents humanoid robots from accomplishing real-world tasks is their inability to physically interact with, and effectively manipulate, the most common objects generally found in human
environments. Even tasks that seem simple for a human remain a significant challenge for most robots. Robots generally employ precision to perform a manipulation task. Humans, in contrast, employ compliance through tactile and force feedback to overcome their imprecision, allowing them to resolve uncertainties associated with the task. The lack of compliance and force control has indeed been a major limiting factor in the ability of robots to interact and manipulate in human environments. One of the major objectives of this project is to endow humanoid robots with whole-body compliant motion abilities. With compliance, a robot overcomes position uncertainties by moving in directions that reduce contact forces, which in turn directs it towards its goal. A whole-body framework needs to be designed to allow the robot to compliantly interact with its environment at multiple contact points. The synthesis of compliant tasks is greatly simplified by being independent of postures and
constraints, which are automatically integrated in the control hierarchy. The objective of this research is to develop the advanced whole-body motion control capabilities needed for humanoid robots to accomplish useful tasks in human environments.

1.2 Background

Robots are expected to coexist with humans and work in our daily environment, assisting and communicating with us in our homes, offices, public spaces, and hospitals, or executing important tasks that are hard for humans in unstructured disaster areas. When a robot moves in the real environment, which is generally narrow and unstructured, contact is one of the most important key problems. Generally, unpredictable contact between the robot and the environment is a critical problem. The reliance of robots on precision stems from the very control modality they use, the position control paradigm; an approach that has been, and continues to be, dominant in industrial robotics. Because of its poor
tolerance of errors and uncertainties, this approach requires very accurate object localization and complex programming and planning for every task. Every single elementary motion associated with a robot's task must be generated with sufficient precision to achieve full coordination under diverse static and dynamic constraints. A motion of a robot is designed for every task so that the robot can accomplish it by following the designed trajectory. Individual joint position commands are calculated by applying inverse kinematics for position-level control of the end-effector in Cartesian space. Motion can also be accomplished by designing individual joint position commands directly. A typical PID position controller is implemented at each joint, and the joint position command is achieved by high-gain control. This is the most popular control system for industrial robots. It has been used for a long time because accuracy and quick responses have
been required to accomplish desired tasks, and these were achieved by conventional position control. Most humanoid robots are also controlled by position control [1][2]. A conventional position controller was successfully applied to humanoid robot control, especially for balancing and walking control. This is because accuracy and fast responses for ZMP control and manipulation were the highest priority for humanoid robots. Human-like whole-body motions and simple manipulation tasks have also been realized using conventional position control. More advanced robots are desired to work not only in industry but also in our everyday environment. They can be used to assist and communicate with people in our homes, offices, public spaces, hospitals, disaster areas, etc. One of the critical problems in controlling a robot in our daily environment is contact between the robot and the external environment and humans. In the case of contact with the environment, unpredictable contact will
happen because the workspace of the robot is very narrow and complicated. In the case of contact with humans, human motion is unpredictable. However, most advanced robots are not designed for contact. They are designed under the presumption that the external environment is fixed or that contact points are limited, such as to the hands and feet. From this point of view, the development of compliant motion for safe contact with the environment and people is becoming an important issue for advanced robot control. As for human-sized robots, whole-body motion control in contact with the environment has been applied to some robots. They can stand up from a prone position or crawl in a narrow space. However, the motion is pre-designed, and the robot simply follows the designed motion by position control. Compliant control has been applied to motion in contact; however, the motion while in contact with the environment is limited to a fixed or stable environment. Whole-body contact
motion of a humanoid robot has been proposed using a full-body distributed tactile sensor. However, compliant motion control combined with accurate task control has not been achieved so far. To realize more advanced whole-body motion control in contact with the environment, the posture should be compliant to disturbances; at the same time, the robot should accomplish its multiple tasks smoothly by sensing the environment through physical contact, as a human would. In this research, a new approach using open-loop torque commands on a conventional position-controlled robot is proposed and experimentally validated on the Honda humanoid robot ASIMO, realizing compliant posture control as well as accurate position control. The advanced whole-body motion control capabilities needed for humanoid robots to accomplish useful tasks in human environments are developed on top of this compliant posture control and accurate position control.

Fig. 1: Whole-body contact and control of humanoid robots.

1.3 Strategy

One of the major
objectives of this project has been to endow humanoid robots with whole-body compliant motion abilities. With compliance, a robot overcomes position uncertainties by moving in directions that reduce contact forces, which in turn directs it towards its goal. Our whole-body framework has been designed to allow the robot to compliantly interact with its environment and with humans at multiple contact points using torque control. The input torques for the system can be designed to accomplish the desired task as well as to compensate for the dynamic effects of contact with the environment. Force sensors at the end-effectors and tactile sensor skin enable the robot to estimate contacts, and this information can be applied to the advanced synthesis of compliant task controllers. The synthesis of compliant tasks is greatly simplified by being independent of postures and contact constraints, which are automatically integrated in the control hierarchy. Our first approach is
to integrate the advanced whole-body framework on the robot and validate compliant interaction with its environment and with humans at multiple contact points. The second approach in this project is the elastic planning framework and the online trajectory generation framework (OTG), which efficiently updates the robot's global movement to maintain collision-free trajectories, adapting to real-time changes in the environment. Addressing these key challenges, this project focuses on the development of (I) sensor-based whole-body compliant motion primitives, (II) contact sensing and contact force control, (III) whole-body multi-contact for extended support, including kneeling, crawling, leaning on a table, and holding a handrail, along with a locomotion strategy to improve support in unstructured terrains, (IV) dynamic collision-free motion planning, and (V) dynamic collision-free walking path planning. In the following sections, a new set of control strategies is proposed.

I. Sensor-Based Whole-Body
Compliant Motion Primitives: Chapter 3

The aim here is to create whole-body manipulation primitives for basic tasks. These primitives will be built by appropriate selection of compliant frames and of force and motion specifications. The control primitives that drive the behavior of the robot strongly rely on the feedback signals provided by the various sensors mounted on the robot. Such sensors provide important information on the state of the robot at a given time. To improve the manipulation skills of the robot, we developed new strategies for estimating the internal and external status and for detecting instability of balance that can potentially lead to sudden falls. Furthermore, to correctly program and simulate the control primitives of a humanoid robot, it is very important to correctly model each of these sensors. These models address the issues of noise, time delay, signal distortion, and resolution that characterize a real sensor. These sensor models will also be calibrated
by monitoring real data from the robot. Comparisons between the real and simulated robot will be performed in order to obtain realistic models of the sensory stage.

II. Sensor-Based Contact Estimation and Control Primitives: Chapter 4

To control the robot while keeping compliant contact with a human, the impact force between the skin sensor and the human is defined, and the stiffness of a compliant material layer is applied to the skin sensor. The estimated contact information is fed back to a robot controller that constructs decoupled task dynamics corresponding to the contact force control at the tactile sensor skin in constraint space, the task control at the end-effector in task space, and the link dynamics in joint space. The combination of the tactile sensor skin, the contact estimation process, and the dynamics controller enables the robot to contact the environment safely and to move compliantly while keeping contact with the environment.

III. Whole-Body
Multi-Contact for Extended Support: Chapter 5

Based on the ZMP criterion, the control and stability of legged robots are obtained using a control strategy that requires the robot to accurately track precisely calculated joint trajectories. Although there have been several impressive developments, legged machines that demonstrate biological-like movement patterns and behaviors have not yet appeared, due in part to limitations in conventional control techniques. To improve balance in the real environment, we propose to broaden the capabilities of the current (ZMP) robot controller by introducing some of the more advanced skills offered in our whole-body multi-contact control framework. This development will enable the robot to perform various manipulation and posture tasks combined with walking, crawling, rolling, standing up, or other multi-contact skills, which are impossible to perform on the current robot. We have already validated the efficiency of "Gravity Stand",
which enables compliant standing by lifting the torso by force control (no position command), allowing multiple contact points and support points. On top of this framework, we developed a new strategy for balance control that refers to captured human motion and balance data. By appropriately using the Sensor-Based Compliant Manipulation Primitives, which detect instability of balance that can potentially lead to sudden falls of the robot, our controller will, if such unstable conditions are detected, immediately 1) recover balance using the upper body, 2) take several steps, or 3) transfer from a biped stance to kneeling, quadruped, table-leaning, or handrail-holding support.

IV. Dynamic Collision-Free Motion Planning and Control: Chapter 6

The second approach in this project is the elastic planning framework and the online trajectory generation framework (OTG), which efficiently updates the robot's global movement to maintain collision-free trajectories, adapting to real-time changes in the
environment. To address this problem and realize contact-free motion of a humanoid robot, we define our own approach, built on a three-level control architecture: 1) real-time trajectory generation by RRT [14][15] and trajectory modification by an elastic motion planner (EMP); 2) CFT (Contact-Free Tracking): on-line contact-free trajectory tracking by OTG [22]; and 3) WMP: Whole-Body Multi-Contact Primitives. A new algorithm for robot motion planning, which enables collision-free, stable motion planning in the real environment, is proposed.

V. Dynamic Collision-Free Walking Path Planning: Chapter 7

When a robot moves, a great capacity to adapt to environmental changes is required, not only with respect to fixed structures but also to dynamically moving humans (including pedestrians) and obstacles. In this system, sensor information is used to estimate the positions of humans, obstacles, the robot, and the environment, and these are modeled in real time. To adapt to the
environment changes, RRT is first selected to generate a new trajectory; then an elastic motion planner based on the potential-field method is selected for real-time trajectory modification. If the elastic motion planner falls into a local minimum, the system calls RRT again and restarts continuous trajectory modification, or stops the motion of the robot if RRT cannot generate a new trajectory. This combination of RRT and the elastic motion planner compensates for the shortcomings of each method and maximizes the potential of each. This algorithm was simulated with a real sensor system in the real environment, and its advantages were verified on the system.

CHAPTER 2
Torque Control on Position Controlled Robot

A robot overcomes position uncertainties by moving with compliance in directions that reduce contact forces, which in turn directs it towards its goal. Our whole-body framework has been designed to allow the robot to compliantly interact with its environment and with humans at multiple contact
points. One approach for addressing this problem is to provide torque control. The input torque for the system can be designed to compensate for the dynamic effects of the system. Decoupled task dynamics can be applied through the Operational Space Formulation, which provides the robot with higher performance in position tracking as well as in compliant motion. Therefore, advanced performance, complex behaviors, and compliant posture control can be implemented on robots if torque control is applied. The proposed Torque Transformer provides a method to control an existing position-controlled system by torque commands and to compensate for the dynamic effects of the system in the motion controller. In this chapter, the torque transformer is defined and modeled through an analysis of the internal motor control unit. It was implemented on the HONDA ASIMO robot.

2.1 Conventional Position Control System

Figure 2.1 shows the framework of a conventional position control system. In this description, a position
controlled robot refers to a robot having closed-loop position control but incapable of true closed-loop torque control. Generally, the system is composed of a motion controller and a joint position controller. The motion of the robot is designed through end-effector trajectory generation to achieve the specific task required of the robot. According to the task-level position command, individual joint position and velocity commands are calculated and sent to the joint position controller. The joint position controller is composed of a position control unit, $D^*(s)$, and a physical joint, $G(s)$. The inputs to the joint position control unit are a position command $q_{cmd/jnt}$ and a velocity command $V_{ff/jnt}$, which are designed by a higher-level motion controller. The motor is controlled according to the position and velocity commands and generates a motor torque, $\tau_{eff}$. The relationship between the inputs and the output of the position control unit is denoted by $D^*(s)$, which is typically designed to compensate for the linear dynamics of the joint as well as to address any nonlinear dynamics of the joint. As shown in Fig. 2.1, a negative feedback loop is closed around the series combination of $D^*(s)$ and $G(s)$ to drive the position error to zero. Generally, a position control system has a position feedback loop and a velocity feedback loop, in which data is measured by an encoder or a potentiometer attached to the motor. The actual velocity is measured by a velocity sensor or calculated from the actual position data. Generally, the motor is connected to a gear to generate a large torque at the physical joint. To improve the accuracy and response of the motor control, a velocity feedforward command is applied to the position control unit, and the position gain and velocity gain are tuned for each joint motor.

Fig. 2.1 Conventional position control system.

2.2 Conventional Motor Controller

In Fig. 2.1, the full dynamics of the joint is given by the ideal position control unit, $D^*(s)$, the resulting physical joint, $G^*(s)$, and the effective torque $\tau_{eff}$:

$$I_{eff}\,\ddot{q} + B_{eff}\,\dot{q} + N(q,\dot{q}) = \tau_{eff} \tag{2.1}$$

where $I_{eff}$ is the effective moment of inertia seen at the joint output, $B_{eff}$ is the effective linear friction coefficient at the joint output, and $\tau_{eff}$ is the effective motor torque at the joint output. The term $N(q,\dot{q})$ is the nonlinear effect in the joint dynamics, including friction. These effective values combine the properties of the motor, the link, and the gear ratio, and are calculated using the mechanical properties of the system. In Figure 2.2, a block diagram of a conventional joint position controller is shown. The inputs to the motor controller are a position command $q_{cmd/mot}$ and a velocity command $V_{ff/mot}$, which are
designed by the higher-level motion controller. Generally, the motor controller is composed of a position feedback loop, a velocity feedback loop, and a current feedback loop. In the position feedback loop, the actual position, $q_{act/mot}$, is subtracted from the position command $q_{cmd/mot}$, and the result is multiplied by the position gain $K_p$. After the position feedback loop, a velocity feedforward command is applied to improve the quickness of the position control response. In the velocity feedback loop, a negative feedback loop is closed by subtracting the actual velocity, $\dot{q}_{act/mot}$, and the result is multiplied by the velocity gain, $K_v$, to generate the current command $i_{cmd}$. In the current feedback loop, the current gain $K_C$ is composed of a current proportional gain $K_{ip}$ and a current integral gain $K_{ii}$, and a negative feedback loop is closed by subtracting the actual current, $i_{act}$. $AL$ represents the armature losses and is defined as $1/(L_m s + R_m)$, where $R_m$ is the motor resistance and $L_m$ is the motor inductance. $K_E$ is the back electromotive force generated by the actuation of the motor. The motor torque is calculated by multiplying the torque constant $K_T$ by the actual current $i_{mot}$. $IF$ is composed of the rotor inertia $J_m$ and the rotor friction $B_m$ and is defined as $1/(J_m s + B_m)$. The motor is controlled by this framework, and the physical joint $PJ$ is actuated according to the commands with high-gain feedback control.

Fig. 2.2 Conventional position controller.

2.3 Conventional Joint Position Controller

Here, we arrange for the velocity feedback loop to be cancelled by commanding the actual joint velocity, $\dot{q}_{act/jnt}$, as the joint velocity command. In this case, the relationship between the position command $q_{cmd/mot}(s)$ and the resulting position $q_{act/mot}(s)$ in the joint position control unit can be represented by a closed-loop transfer function:

$$T(s) = q_{act/mot}(s)\,/\,q_{cmd/mot}(s) \tag{2.2}$$
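As a concrete illustration, the closed-loop transfer function of Equation (2.2) can be formed numerically. The sketch below assumes a simplified unity-feedback PD servo $D(s) = K_v(K_p + s)$ acting on the position error and a joint $G(s) = 1/(I_{eff}s^2 + B_{eff}s)$; all gains and parameters are hypothetical, not identified values from the robot.

```python
import numpy as np
from scipy import signal

# Hypothetical joint G(s) = 1/(I_eff s^2 + B_eff s) and a simplified
# unity-feedback PD servo D(s) = Kv*(Kp + s) acting on the position error.
# All numbers are illustrative, not identified values from the robot.
I_eff, B_eff = 0.05, 0.4
Kp, Kv = 80.0, 6.0

num_D = [Kv, Kv * Kp]            # Kv*s + Kv*Kp
den_G = [I_eff, B_eff, 0.0]      # I_eff s^2 + B_eff s

# Open loop L(s) = D(s) G(s); closed loop T(s) = L/(1 + L)  (cf. Eq. 2.2)
num_T = num_D
den_T = np.polyadd(den_G, num_D)
T_cl = signal.TransferFunction(num_T, den_T)

# The servo drives the position error to zero, so the DC gain of T(s) is 1
dc_gain = num_T[-1] / den_T[-1]
```

Because the integrating joint sits inside the loop, the DC gain of $T(s)$ is exactly one, which is the property exploited in the derivations that follow.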
In cases where the nonlinear effects at the joint are negligible, it is sufficient, in terms of model accuracy, to represent $D$ and $G$ as linear transfer functions $D(s)$ and $G(s)$. The effective torque, $\tau_{eff}$, at the joint is given by

$$\tau_{eff}(s) = D_p^*(s)\,\big(q_{cmd/mot}(s) - q_{act/mot}(s)\big) = D_p^*(s)\,q_{cmd/mot}(s)\,\big(1 - T(s)\big) \tag{2.3}$$

From Equation (2.2), we can determine the position input, $q_{cmd/mot}(s)$, corresponding to a desired dynamic torque $\tau_{des}(s)$ as follows:

$$q_{cmd/mot}(s) = \frac{\tau_{des}(s)}{D_p^*(s)} + q_{act/mot}(s) = \frac{\tau_{des}(s)}{D_p^*(s)\,\big(1 - T(s)\big)} \tag{2.4}$$

Equation (2.4) provides one of the key results, since it shows how a dynamic position input, $q_{cmd/mot}(s)$, corresponds to a desired dynamic torque, $\tau_{des}(s)$, for a position-controlled robot. In cases where the nonlinear effects are too large to be neglected, the effective torque at the joint, $\tau_{eff}$, differs from the torque, $\tau_{des}(s)$, at the output of the controller, $D$. This torque, $\tau_{eff}$, is precisely the torque that we are interested in controlling. In that case, $T(s)$ cannot be computed analytically because of the nonlinear nature of the joint. However, it can be experimentally identified from frequency analysis of the response of the closed-loop system. The torque, $\tau_{eff}$, can be viewed as the torque associated with the linear portion of the dynamic system described in Equation (2.1):

$$\tau_{des} = \tau_{eff} + N(q,\dot{q}) \tag{2.5}$$

Then, the physical joint, $G^*(s)$, of an ideal system can be represented by

$$G^*(s) = \frac{1}{I_{eff}\,s^2 + B_{eff}\,s} \tag{2.6}$$

Given the identified closed-loop transfer function $T(s)$ and the ideal system model $G^*(s)$, the effective controller, as illustrated in Figure 2.2, can be computed as:

$$\tau_{eff} = D_p^*(s)\,\big(q_{cmd/mot}(s) - q_{act/mot}(s)\big) \tag{2.7}$$

In order to produce a desired torque, $\tau_{des}$, on the joint, the position input, $q_{cmd/mot}(s)$, is set to:

$$q_{cmd/mot}(s) = \frac{\tau_{des}(s)}{D_p^*(s)} + q_{act/mot}(s) = \frac{\tau_{des}(s)}{D_p^*(s)\,\big(1 - T(s)\big)} \tag{2.8}$$

The transfer function, $D_p^*(s)$, provides control of the effective torque of a joint that is controlled by a position-based servo loop.

2.4 Conventional Joint Velocity Controller

In Fig. 2.2, we arrange for the position feedback loop to be cancelled by commanding the actual joint position, $q_{act/jnt}$, as the joint position command, or by setting the joint position gain to zero. In this case, the relationship between the velocity command $\dot{q}_{cmd/mot}(s)$ and the resulting velocity $\dot{q}_{act/mot}(s)$ in the joint position control unit is represented by a closed-loop transfer function:

$$T^*(s) = \dot{q}_{act/mot}(s)\,/\,\dot{q}_{cmd/mot}(s) \tag{2.9}$$

In cases where the nonlinear effects at the joint are negligible, it is sufficient, in terms of model accuracy, to represent $D$ and $G$ as linear transfer functions $D(s)$ and $G(s)$. The
effective torque, $\tau_{eff}$, at the joint is given by

$$\tau_{eff}(s) = D_v^*(s)\,\big(\dot{q}_{cmd/mot}(s) - \dot{q}_{act/mot}(s)\big) = D_v^*(s)\,\dot{q}_{cmd/mot}(s)\,\big(1 - T^*(s)\big) \tag{2.10}$$

From Equation (2.9), we can determine the velocity input, $\dot{q}_{cmd/mot}(s)$, corresponding to a desired dynamic torque $\tau_{des}(s)$ as follows:

$$\dot{q}_{cmd/mot}(s) = \frac{\tau_{des}(s)}{D_v^*(s)} + \dot{q}_{act/mot}(s) = \frac{\tau_{des}(s)}{D_v^*(s)\,\big(1 - T^*(s)\big)} \tag{2.11}$$

Equation (2.11) provides one of the key results, since it shows how a dynamic velocity input, $\dot{q}_{cmd/mot}(s)$, corresponds to a desired dynamic torque, $\tau_{des}(s)$, for a velocity-controlled robot. In cases where the nonlinear effects are too large to be neglected, the effective torque at the joint, $\tau_{eff}$, differs from the torque, $\tau_{des}(s)$, at the output of the controller, $D$. This torque, $\tau_{eff}$, is precisely the torque that we are interested in controlling. In that case, $T^*(s)$ cannot be computed analytically because of the nonlinear nature of the joint. However, it can be experimentally identified from frequency analysis of the response of the closed-loop system. The torque, $\tau_{eff}$, can be viewed as described in Equation (2.5). Then, the physical joint, $G^*(s)$, of an ideal system can be represented by Equation (2.6). Given the identified closed-loop transfer function $T^*(s)$ and the ideal system model $G^*(s)$, the effective controller, as illustrated in Figure 2.2, can be computed as:

$$\tau_{eff} = D_v^*(s)\,\big(\dot{q}_{cmd/mot}(s) - \dot{q}_{act/mot}(s)\big) \tag{2.12}$$

In order to produce a desired torque, $\tau_{des}$, on the joint, the velocity input, $\dot{q}_{cmd/mot}(s)$, is set to:

$$\dot{q}_{cmd/mot}(s) = \frac{\tau_{des}(s)}{D_v^*(s)} + \dot{q}_{act/mot}(s) = \frac{\tau_{des}(s)}{D_v^*(s)\,\big(1 - T^*(s)\big)} \tag{2.13}$$

The transfer function, $D_v^*(s)$, provides control of the effective torque of a joint that is controlled by a velocity-based servo loop.
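Equations (2.8) and (2.13) invert the servo around the identified closed loop. As a small numeric sketch, the mapping $1/(D^*(s)(1 - T(s)))$ can be formed by plain polynomial algebra; the first-order transfer functions below are hypothetical illustrations, not identified robot models.

```python
import numpy as np

def torque_to_position_tf(num_D, den_D, num_T, den_T):
    """Return (num, den) of q_cmd(s)/tau_des(s) = 1 / (D*(s) (1 - T(s))),
    the torque-to-command mapping of Eq. (2.8), by polynomial algebra."""
    one_minus_T_num = np.polysub(den_T, num_T)   # den_T - num_T
    num = np.polymul(den_D, den_T)               # den_D * den_T
    den = np.polymul(num_D, one_minus_T_num)     # num_D * (den_T - num_T)
    return num, den

# Hypothetical first-order illustration: D*(s) = 10, T(s) = 10/(s + 10)
num, den = torque_to_position_tf([10.0], [1.0], [10.0], [1.0, 10.0])
# -> q_cmd/tau_des = (s + 10)/(10 s): integrator-like at low frequency,
#    i.e., a steady torque demand keeps incrementing the position command
```

The integrator-like behavior at DC is exactly what the text describes: the transformer converts a desired torque into instantaneous increments of the position command.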
2.5 Approach to Transfer a Torque Command to a Position Command

The Torque Transformer was developed to convert a desired joint torque command into instantaneous increments of the joint position command. The merits of this framework are that (i) open-loop torque control can be realized on a current position-controlled robot without hardware modification, and (ii) the controller can account for the nonlinear dynamics of the system. In this framework, the Operational Space Formulation was applied to account for the dynamics of the robot. The concept of the transformer was analyzed and validated on the PUMA and the Honda ASIMO robots [24][25][26]. In this framework, the transformer is defined as the inverse model of the internal motor control unit $D^*(s)$. A key feature of this framework is that it transfers the joint torque command into the motor current command directly by cancelling the effects of the inner feedback loops. The inverse model of the internal motor controller is defined as the Torque Transformer, and it is designed either as a position-based control unit, the Torque-to-Position Transformer [24][25], or as a velocity-based control unit, the Torque-to-Velocity Transformer [26].

Fig. 2.3 shows the framework of the proposed control system. The left block shows the Motion Controller at the application-software level, where the dynamics of the system is defined by the Operational Space Formulation. In the right block, the joint position controller is defined as the hardware-level controller. In the joint position controller, the ideal position control unit $D^*(s)$ and the resulting physical joint $G^*(s)$ are defined. The inverse model of the ideal position control unit, $D^*(s)$, is applied as the Torque Transformer, which transforms a torque command into an instantaneous position command or velocity command. The transformer has to be identified beforehand, through frequency analysis or identification of the individual motor controller. Once this inverse model is generated, the torque command is directly sent to the motor current command $i_{cmd}$ by cancelling the effects of the inner feedback loops.

Fig. 2.3 Framework of the proposed torque control system.

2.6 Experimental Derivation of the Torque-to-Position Transformer

Torque-to-position conversion is based on inversion of a dynamical model of each joint and its position controller in the position-controlled robot. Such an inversion can transform a torque input into a corresponding position input that tends to produce an actual torque at the robot that is a good approximation of the initial torque input. A block diagram of the position-controlled robot for which we design the inverse model is shown in Fig. 2.4. The approach of Equation (2.8) corresponds to selecting the physical joint transfer function, $G(s)$, and the corresponding servo $D(s)$. In this framework, $D^*(s)$ is the combination of $D(s)$ with the "other dynamics" transfer function, $X(s)$, which
relates the ideal torque, $\tau_{des}(s)$, to the effective torque, $\tau_{eff}(s)$. Therefore, the approach of Equation (2.8) corresponds to selecting the idealized joint transfer function, $G^*(s)$, and the corresponding servo $D(s)$. In cases where the nonlinear effects are too large to be neglected, they can be experimentally identified from frequency analysis of the response of the closed-loop system.

In general, the preceding approaches can be regarded as including the following steps: A) determining a closed-loop position transfer function of a selected joint of the robot to relate a commanded input joint position, $q_{cmd/mot}(s)$, to an actual joint position, $q_{act/mot}(s)$, $T(s) = q_{act/mot}(s)/q_{cmd/mot}(s)$; B) determining a corresponding effective servo transfer function of the selected joint according to PID control; C) applying a frequency response to the individual joint to identify the overall open-loop transfer function and to extract $X(s)$; and D) determining a dynamic open-loop joint torque-to-position transfer function, $D^*(s)$, relating to the selected joint.

The method is applied to the first three joints of a PUMA 550, a 6-DOF manipulator arm. In $M(s)$, the model of the motor and transmission system, only the linear friction is considered. The Bode plot of $M(s)$ is shown in Fig. 2.5. At the DC condition, $M(s)$ is effectively reduced to a constant $N$, the gear ratio.

Fig. 2.4 Block diagram of the implementation of the transformer for the position-controlled robot.

The identified closed-loop transfer function $T(s) = q_{act/mot}(s)/q_{cmd/mot}(s)$ of Equation (2.11) is a third-order model combining the motor inertia $I_m$, the link inertia $I_i$, the gear ratio $N$, and the joint stiffness and friction terms ($k_s$, $f_s$, $f_i$).

In Fig. 2.5(A), the position controller (PID with a velocity filter applied), the modeled transmission system, and $X(s)$ are combined into $D^*(s)$:

$$PID(s) = \frac{(K_P + K_D c)\,s^2 + (K_P c + K_I)\,s + K_I c}{s\,(s + c)} \tag{2.12}$$

$$D^*(s) = PID(s)\,T(s)\,X(s) \tag{2.13}$$

System identification using the frequency response is applied to the individual joint to identify the overall open-loop transfer function and to extract $X(s)$. For the 3rd joint, the open-loop transfer function is identified as in Equation (2.14). The ideal model is identified as a mass-spring-damper system instead of a pure inertia system; $I_{m[3]}$ is embedded in the constant value and is therefore not readily identifiable.

$$J_{ideal[3]} = \frac{1}{I_{m[3]}\,s^2 + 6.03\,s + 0.19} \tag{2.14}$$

$$D_3^*(s) = 1.67\,PID(s)\,\frac{\text{Numerator}}{\text{Denominator (5th order)}} = 1.67\,PID(s)\,T_3(s)\,X_3(s) \tag{2.15}$$

The inverse model, $Z_3(s)$, is shown in Equation (2.16) and has been simplified to its 2nd-order component for experimental reasons:

$$Z_3(s) = \frac{1}{D_3^*(s)}\bigg|_{\text{2nd-order approx.}} = \frac{6.64\times 10^{-5}\,s^2 + 200\,s}{s^2 + 5\,s + 0.4} \tag{2.16}$$
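The identification step (C) can be illustrated with a standard linearized least-squares fit of a second-order transfer function to frequency-response samples (Levy's method). The plant below is synthetic, and, for clarity, itself second order, so the fit recovers it exactly; the coefficients are placeholders, not the PUMA's identified values.

```python
import numpy as np
from scipy import signal

# Synthetic "measured" frequency response of a 2nd-order plant
true_sys = signal.TransferFunction([200.0], [1.0, 5.0, 0.4])
w = np.logspace(-2, 2, 80)
_, H = signal.freqresp(true_sys, w)

# Fit H(jw) ~ (b1*s + b0)/(s^2 + a1*s + a0): linearize (Levy's method) as
#   b1*s + b0 - H*(a1*s + a0) = H*s^2, solved as a real least-squares problem
s = 1j * w
A = np.column_stack([s, np.ones_like(s), -H * s, -H])
rhs = H * s**2
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
b1, b0, a1, a0 = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]
# Recovered model matches the plant: b1 ~ 0, b0 ~ 200, a1 ~ 5, a0 ~ 0.4
```

With a real, higher-order joint, the same fit yields the reduced 2nd-order approximation used for $Z_3(s)$ and $X_3(s)$.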
The relationship between the ideal torque, $\tau_{des}(s)$, and the effective torque, $\tau_{eff}(s)$, is shown in Equation (2.5). The dashed magnitude plot shows $Z_3(s)D_3(s)$, and the solid one shows $Z_3(s)D_3(s)K_{multiplier}$. $K_{multiplier}$ is approximated as in Equation (2.17), where $G_a$ and $K_t$ are the gain of the amplifier (A/V) and the motor torque constant (N·m/A) respectively, $I_{m[3]}$ is the link effective inertia (Equation (2.14)), and $C$ is a constant used to get a DC gain of zero on the Bode plot:

$$K_{multiplier} = \frac{G_a\,K_t\,C}{I_{m[3]}} \tag{2.17}$$

The three parameters above are lumped into $K_{multiplier}$ because they appear as a single constant value in the transfer function and are not readily identifiable. The Bode plot of the 2nd-order approximation of $X_3(s)$ is shown in Fig. 2.7. By inverting Fig. 2.7 and noting that the resonance peak comes from $M(s)$, we obtain the 2nd-order approximation of $X_3(s)$:

$$X_3(s) = 0.000117\,s^2 + 50\,s + 170 \tag{2.18}$$

Thus, through a series of 2nd-order approximations and block-diagram manipulations, we are able to approximate $X_3(s)$ and the 2nd-order model of the joint:

$$J_{ideal[3]} = \frac{0.000117\,s^2 + 50\,s + 170}{I_{m[3]}\,s^2 + 6.03\,s + 0.19} \tag{2.19}$$

The magnitude plot (Fig. 2.7) shows behavior similar to an under-damped system, in which a resonance peak causes amplification around some frequency of attenuation. The peak comes from $T(s)$ in Fig. 2.7. We can further modify the relationship by adding compensators. For example, a small amount of damping can be added by use of a compensator such as in Equation (2.19). The resultant transfer function, after reduction to 2nd order, is shown in Equation (2.20):

$$\text{Compensator} = \frac{s^2 + 4.4\,s + 16}{s^2 + 8.2\,s + 16} \tag{2.19}$$

$$Z_{3,\,WithDamping}(s) = \frac{5.3\times 10^{-5}\,s^2 + 201\,s}{s^2 + 8\,s + 0.2} \tag{2.20}$$

Fig. 2.5 Bode plot of T(s), the motor and transmission system of joint 3.

Fig. 2.6 Relationship between the ideal torque and effective torque.

Fig. 2.7 Bode plot of the 2nd-order approximation of $X_3(s)$.

Fig. 2.8 Bode plot after damping is increased.

2.7 Derivation of the Torque-to-Position Transformer

The purpose here is to derive $D_p^*(s)$ in Equation (2.8), i.e., to derive the inverse model when the nonlinear effects are regarded as negligible and the system can be analytically defined by a block diagram and the dynamic parameters of the motor. The inputs to the motor are a motor position command and a motor velocity feedforward command. In the proposed framework, the joint velocity command $V_{ff/mot}$ is set to the actual joint velocity $\dot{q}_{act/mot}$ to cancel the effect of the velocity feedback loop:

$$V_{ff/mot} = \dot{q}_{act/mot} \tag{2.21}$$

By multiplying by the position gain $K_p$ and the velocity gain $K_v$, $i_{cmd}$ becomes:

$$i_{cmd} = K_p\,K_v\,(q_{cmd/mot} - q_{act/mot}) \tag{2.22}$$

On the other hand, the motor generates torque through its actuation. The motor torque is defined as follows:

$$\tau_{eff} = K_t\,i_{mot} = K_t\,CL\,i_{cmd} \tag{2.23}$$

By applying Equation (2.22) to (2.23), $\tau_{eff}$ is defined as follows:

$$\tau_{eff} = K_p\,K_v\,K_t\,CL\,(q_{cmd/mot} - q_{act/mot}) \tag{2.24}$$

Therefore, the relation between the motor position and the motor torque is defined by Equation (2.24). By replacing $\tau_{eff}$ with $\tau_{cmd/mot}$, $q_{cmd/mot}$ is defined as follows:

$$q_{cmd/mot} = \frac{\tau_{cmd/mot}}{K_p\,K_v\,K_t\,CL} + q_{act/mot} \tag{2.25}$$

$$V_{ff/mot} = \dot{q}_{act/mot} \tag{2.26}$$

Generally, a gear is mounted on the joint to amplify the output torque and to reduce the output velocity of the motor. In this system, because a Harmonic Drive gear is mounted on every joint, the effect of the gear ratio and a friction term $\tau_{friction}$ to cancel the effect of friction are added to Equation (2.25).
$$V_{ff/jnt} = \dot{q}_{act/jnt} = \dot{q}_{act/mot}/\eta\,, \qquad q_{cmd/jnt} = \frac{\tau_{cmd/mot} + \tau_{friction}}{K_p\,K_v\,K_t\,CL\,\eta^2} + q_{act/jnt} \tag{2.27}$$

where $\eta$ denotes the gear ratio. To simplify the effect of $CL$ in Equation (2.27), the motor current command is taken as $i_{cmd} = i_{mot}$. By this definition, the effect of the current feedback loop is ignored, and the torque command is sent to the internal current feedback loop through the Torque-to-Position Transformer [24][25].

Fig. 2.9 Block diagram of TPT.

2.8 Derivation of the Torque-to-Velocity Transformer

The purpose here is to derive $D_v^*(s)$ in Equation (2.13). In this framework, the nonlinear effects are regarded as negligible, and the system can be analytically defined by a block diagram and the dynamic parameters of the motor. In the proposed framework, the motor position command is set to the actual motor position $q_{act/mot}$, or the motor position gain is set to zero, to cancel the effect of the position feedback loop in Fig. 2.2:

$$q_{cmd/mot}(s) = q_{act/mot}(s) \quad \text{or} \quad K_p = 0 \tag{2.28}$$

According to the motor block diagram in Fig. 2.2, the current command, $i_{cmd}$, becomes:

$$i_{cmd} = K_v\,\big(\dot{q}_{cmd/mot}(s) - \dot{q}_{act/mot}(s)\big) \tag{2.29}$$

Here, $K_v$ is the velocity gain. On the other hand, the motor generates torque through its actuation. The motor torque is defined as follows:

$$\tau_{eff} = K_t\,i_{mot} = K_t\,CL\,i_{cmd} \tag{2.30}$$

where $CL$ is the transfer function of the current feedback loop and $K_t$ is the motor torque constant. By Equations (2.29) and (2.30), $\tau_{eff}(s)$ is defined as follows:

$$\tau_{eff} = K_v\,K_t\,CL\,\big(\dot{q}_{cmd/mot}(s) - \dot{q}_{act/mot}(s)\big) \tag{2.31}$$

Therefore, the relation between the motor velocity and the motor torque is defined by Equation (2.31). By replacing $\tau_{eff}$ with $\tau_{cmd/mot}$, $\dot{q}_{cmd/mot}$ is defined as follows:

$$\dot{q}_{cmd/mot}(s) = \frac{\tau_{cmd/mot}}{K_v\,K_t\,CL} + \dot{q}_{act/mot}(s)\,, \qquad q_{cmd/mot}(s) = q_{act/mot}(s) \ \text{or}\ K_p = 0 \tag{2.32}$$

Equation (2.32) is defined as the Torque Transformer at the motor control level when the nonlinear effects are regarded as negligible. Generally, a gear is mounted on the joint to amplify the output torque and to reduce the output velocity of the motor. The Torque Transformer of Equation (2.32) is modified into a joint-level equation by adding the gear ratio, $\eta$:

$$\dot{q}_{cmd/jnt}(s) = \frac{\tau_{cmd/jnt} + \tau_{friction}}{K_v\,K_t\,CL\,\eta^2} + \dot{q}_{act/jnt}(s)\,, \qquad q_{cmd/jnt}(s) = q_{act/jnt}(s) \ \text{or}\ K_p = 0 \tag{2.32}$$

If the joint friction, which is mainly caused by the gear system, affects the Torque Transformer, the joint friction $\tau_{friction}$ can be modeled by the conventional friction model [12]. Alternatively, the term $K_v\,K_t\,CL\,\eta^2$ in Equation (2.32) can be experimentally identified from frequency analysis of the feedback control system. Equation (2.32) can also be simplified by assuming the current feedback loop, $CL$, to be equal to 1, if the response of the current closed loop is much faster than the velocity feedback loop. The torque command is sent to the internal current feedback loop through the Torque-to-Velocity Transformer [26][27].
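A minimal sketch of the joint-level transformer of Equation (2.32), assuming $CL \approx 1$; the gains, gear ratio, and friction parameters are placeholders rather than ASIMO's identified values.

```python
def torque_to_velocity_cmd(tau_cmd_jnt, qdot_act_jnt, Kv, Kt, eta,
                           tau_coulomb=0.0, b_viscous=0.0):
    """One control-cycle evaluation of the joint-level Torque-to-Velocity
    Transformer of Eq. (2.32), assuming the current loop CL ~ 1 (its response
    is much faster than the velocity loop). Parameters are placeholders."""
    # Simple Coulomb + viscous friction compensation term
    sign = 1.0 if qdot_act_jnt > 0 else (-1.0 if qdot_act_jnt < 0 else 0.0)
    tau_friction = tau_coulomb * sign + b_viscous * qdot_act_jnt
    # qdot_cmd = (tau_cmd + tau_friction) / (Kv * Kt * CL * eta^2) + qdot_act
    return (tau_cmd_jnt + tau_friction) / (Kv * Kt * eta**2) + qdot_act_jnt

# Zero torque command with no friction: the commanded velocity tracks the
# actual velocity, so the joint offers no resistance to external motion
qdot_cmd = torque_to_velocity_cmd(0.0, 0.3, Kv=5.0, Kt=0.08, eta=100.0)
```

The zero-torque case makes the compliance mechanism explicit: with $\tau_{cmd/jnt} = 0$, the transformer simply echoes the measured joint velocity back as the command.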
2.9 Experimental Validation

2.9.1 Torque-to-Position Transformer on the PUMA Robot

Using a PUMA 560 manipulator arm controlled in a real-time QNX environment at 800 Hz, the joint-position torque control system is compared in performance with a standard operational-space controller. In the operational-space mode, task-level commands are translated into joint torque commands. In the joint-position torque control, the torque command from the operational space is mapped to a position command by $Z(s)$ and then sent to the joint position PID controller, where the joint torque command is computed according to Equation (2.12).

A) Gravity compensation mode

Here, we applied the gravity compensation mode to the joints. In gravity compensation mode, the operational-space controller calculates the amount of torque necessary to compensate for the gravity forces, as in Equation (2.33), on each link with mass $m_i$, enabling free movement. Fig. 2.10 and Fig. 2.11 show the torque behavior under static and moving conditions. The transient behavior is important, as the link inertia, and therefore the required compensation torque, changes significantly between configurations. The steady-state behavior is also important, as the arm will fall from its static position if not enough compensation force is supplied.

$$\Gamma = \sum_{i}^{n} J_{v_i}^T\,(m_i\,g_0) \tag{2.33}$$

In the original inverse mode (under-damped, Fig. 2.10), a big swing in inertia causes the applied torque, $\tau_{applied}$, to overshoot in one direction (at t = 27 s and t = 54 s) and the arm to overshoot its position. In steady state, $\tau_{applied}$ settles to the desired value $\tau_{desired}$. Extra damping due to the addition of the notch-like compensator is added, and the effect is shown in Fig. 2.11: the transient overshoot (t = 38 s and t = 52 s) is dramatically reduced. The steady-state error that appears at around 37 N·m is most likely caused by the static friction characteristics of the joint configuration, as the deviation appears consistently in all experimental data.

B) End-effector motion control mode

A 0.1 m by 0.1 m rectangular trajectory is plotted in Fig. 2.12 for the end-effector, with a movement time of 4 seconds per segment. In the figure, the blue dotted line shows the result using conventional torque control and the red line shows the proposed inverse model. The two recorded end-effector trajectories are practically identical under the two control methods.

Fig. 2.10 Torque behavior during gravity compensation mode (under-damped).

Fig. 2.11 Torque behavior during gravity compensation mode (increased damping).

Fig. 2.12 x-y coordinates of the end-effector for a square trajectory.
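Equation (2.33) sums Jacobian-transpose gravity contributions over the links. A minimal planar two-link sketch of the same computation (point masses at the link tips and placeholder parameters, not the PUMA's model) is:

```python
import numpy as np

def gravity_torque_2link(q1, q2, m1, m2, l1, l2, g=9.81):
    """Gravity compensation torques for a planar 2-link arm via Eq. (2.33),
    Gamma = sum_i Jvi^T (m_i g0), with point masses at the link tips
    (an illustrative simplification, not the robot's actual parameters)."""
    # Mass positions: p1 = l1*[c1, s1]; p2 = p1 + l2*[c12, s12]
    # Linear-velocity Jacobians (2x2), columns are dp/dq_j
    J1 = np.array([[-l1 * np.sin(q1), 0.0],
                   [ l1 * np.cos(q1), 0.0]])
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    J2 = np.array([[-l1 * np.sin(q1) - l2 * s12, -l2 * s12],
                   [ l1 * np.cos(q1) + l2 * c12,  l2 * c12]])
    g0 = np.array([0.0, -g])                  # gravity acts along -y
    # Torque that gravity exerts; the compensation command is its negative
    tau_gravity = J1.T @ (m1 * g0) + J2.T @ (m2 * g0)
    return -tau_gravity

# Horizontal arm: both joints must hold up the weight of the links
tau = gravity_torque_2link(0.0, 0.0, m1=1.0, m2=1.0, l1=0.5, l2=0.4)
```

When the arm points straight up, the same function returns (near-)zero torques, which is the "free movement" condition exercised in the gravity compensation tests.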
2.9.2 Torque-to-Position Transformer on the Humanoid Robot

[A] Hardware and System

In this project, the Honda humanoid robot ASIMO [1][2] is used as an experimental test bed for validation and performance analysis of the theoretical results. The ASIMO robot has 5 DOF in each arm, 6 DOF in each leg, and 2 DOF in the head. Each arm and leg joint has a motor, a belt, and a Harmonic Drive gear, except for the wrist joints of the arms, which simply have a motor and a standard gear system. The motors are controlled by PD control on local ECUs, and all the local ECUs are connected to a main ECU which runs at a 200 Hz servo frequency. All the physical parameters, such as mass, center of gravity, and inertia, are given by precise CAD data.

Fig. 2.13 Hardware and system specifications of the Honda ASIMO robot.

[B] Friction Compensation

The proposed torque-position transformer is an inverse model of the internal motor controller mounted on the individual joints, and it works as an open-loop transfer function. However, it does not include the nonlinear effects of the physical joint model between the motor and the output of the joint. If the joint friction in the physical joint model is not negligible, its effect on the torque-to-position transformer is also not negligible, and the response of the open-loop transformer degrades. In the case of ASIMO, the friction characteristics of the individual joints differ. The friction is mainly caused by the Harmonic Drive gear, the belt gear, mechanical hinges, etc., and its effect changes according to the mechanical condition. To improve the response of the torque-position transformer, the classical friction model was applied to compensate for the friction without requiring significant changes for different conditions once identified. Here, the classical friction model [10], which includes (i) Coulomb friction, (ii) viscous friction, and (iii) stiction, was applied to the torque-position transformer.

Fig. 2.14 Open-loop torque-position transformer zero-torque-command test.

[C] Zero Torque Command Test

A zero torque command was applied to the Honda ASIMO robot in this test. The result for the torque-position transformer without the friction model is shown in Fig. 2.14(i). In Fig. 2.14(ii) and (iii), the results for the transformer with the friction model are shown for the right arm and right leg. Every joint was actuated, and a zero torque command was applied to the right arm and right leg joints. Because no torque command is generated, the posture of the robot changes according to the external forces. An operator could move the individual links manually, but when the operator released a link, it fell under gravity, with the joint friction compensated, and stabilized at a point where the gravity torque is almost zero. This framework is useful for whole-body multi-contact and control.
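The classical friction model used above, (i) Coulomb, (ii) viscous, and (iii) stiction, can be sketched as follows; the thresholds and coefficients are illustrative placeholders, not ASIMO's identified values.

```python
def friction_torque(qdot, tau_applied, tau_coulomb, b_viscous, tau_stiction,
                    eps=1e-3):
    """Classical friction model: (i) Coulomb, (ii) viscous, (iii) stiction.
    Returns the friction torque acting on the joint; the compensation command
    is its negative. All coefficients are placeholders."""
    if abs(qdot) < eps:
        # Stiction regime: friction opposes the applied torque up to breakaway
        return -max(-tau_stiction, min(tau_stiction, tau_applied))
    # Sliding regime: Coulomb + viscous friction oppose the motion
    sign = 1.0 if qdot > 0 else -1.0
    return -(tau_coulomb * sign + b_viscous * qdot)

# Sliding at 0.5 rad/s: Coulomb (0.2) plus viscous (0.1 * 0.5) oppose motion
f_slide = friction_torque(0.5, 0.0, 0.2, 0.1, 0.3)
# At rest with a small applied torque, stiction cancels it exactly
f_stick = friction_torque(0.0, 0.1, 0.2, 0.1, 0.3)
```

Adding the negative of this torque to the transformer command is what lets a released link settle where the gravity torque, rather than friction, is near zero.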
So far, every joint position of the robot had to be designed in advance to produce each posture, such as standing, walking, and manipulating. This new framework makes it very easy to make the robot compliant with the environment, because we do not need to command the joint positions while in contact with the environment. The posture of the robot is decided according to the constraints of the external environment.

[D] Gravity Compensation Torque Command Test

To verify that the open-loop torque-position transformer works correctly, a gravity torque command with friction compensation was applied to all joints of ASIMO. If we ignore the inertial term (the $A$ matrix) and the Coriolis/centrifugal term in Equation (2.14), only the gravity torque remains:

$$\Gamma = g(q) \tag{2.34}$$

$g(q)$ is calculated as follows:

$$g(q) = \sum_{i=0}^{n} J_{v_i}^T\,(m_i\,G_{Gyro}) \tag{2.35}$$

$$J_{v_i} = \left[\frac{\partial p_{c_i}}{\partial q_1}, \frac{\partial p_{c_i}}{\partial q_2}, \ldots, \frac{\partial p_{c_i}}{\partial q_n}\right] \tag{2.36}$$

Here, $n$ is the number of joints, $m_i$ is the mass of the i-th link, $J_{v_i}$ is the linear-velocity Jacobian, $p_{c_i}$ is the location of the center of mass of the i-th link, and the rotation axis of joint $i$ is represented by the unit vector of frame $i$. Unlike industrial robots, humanoid robots have a mobile base whose base coordinate system is not fixed to the ground. In this framework, the base coordinate system of the whole-body dynamics calculation is set between the hip joints of the robot. To compensate for the gravity vector, which changes according to the rotation of the body, a gyro sensor mounted on ASIMO was used. Here, $G_{const}$ is the gravity constant, and $G_{Gyro}$ is the gravity constant modified by the gyro sensor:

$$G_{Gyro} = Rotation(G_{const}) \tag{2.37}$$

Fig. 2.16 shows a result of the gravity compensation test for the right arm joints. All the arm joints were actuated, and the gravity torque command was applied to the torque-position transformer. In Fig. 2.15, the results for three postures are shown. In this test, the posture of the robot changes according to the external forces. An operator was able to move the individual links manually, and when the operator released a link, it held its position; the weight of the link is always supported by the gravity compensation torque command. The measured data from the gravity compensation test of the arm joints is shown in Fig. 2.16. In this figure, the green line is the actual joint position, the blue line is the gravity torque command, and the red line is the actual torque, calculated from the actual current and the torque constant. In this test, the individual links were moved manually. Even as the joint position moves, the blue line and the red line remain symmetric about the zero line. This means that the calculation of the gravity torque was correct for every posture and that the torque-position transformer transfers the gravity compensation torque command into a position command correctly. Through this test, the torque-position transformer was experimentally verified.
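Equation (2.37) rotates the nominal gravity vector using the base orientation measured by the gyro. A minimal sketch of this step (the roll-pitch convention below is an assumption for illustration, not ASIMO's actual sensor convention):

```python
import numpy as np

def gravity_with_gyro(roll, pitch, g_const=9.81):
    """Express the world gravity vector in the tilted base frame using the
    base orientation from the gyro, as in Eq. (2.37): G_gyro = Rotation(G_const).
    The roll-pitch convention here is an illustrative assumption."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    g_world = np.array([0.0, 0.0, -g_const])
    # Base-to-world rotation is Rx @ Ry; its transpose maps world -> base
    return (Rx @ Ry).T @ g_world

g_base = gravity_with_gyro(0.0, 0.0)   # upright base: gravity stays along -z
```

The rotated vector $G_{Gyro}$ then replaces the constant gravity vector in the Jacobian-transpose sum of Equation (2.35).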
experimentally verified. Fig. 215 Gravity compensation torque command test for arm joints. CAHPTER 2. TORQUE CONTROL ON POSITION CONTROLLED ROBOT 30 Fig. 216 Gravity compensation torque command data for the right arm [E] Operational Space Command Test Here, torque-position transformer was extended to use the decoupled task dynamics of the Operational Space Formulation [18] for both arms of ASIMO. In the Operational Space Formulation in Chapter 3, is defined as a simple PD control. f * K px ( x des x) K vx ( x des x ) K px (2.38) and K vx are the space PD gains which are selected for the unit-mass system x K vx x K px x 0 . xdes is a desired goal position For a simple positioning task xdes , the applied force to the task point is CAHPTER 2. TORQUE CONTROL ON POSITION CONTROLLED ROBOT F ( x)( K px ( x des x) K vx ( x des x )) 31 (2.39) The joint torque corresponding to forces
acting at the task is calculated by Equation (2.39) [F] Position Command Test in Cartesian Space Position command was applied to verify if the Operational Space Formulation is correctly commanded by way of the torque-position transformer. The results of the experimental test are shown in Fig. 217 In this test, a position gain K px =1500 and a velocity gain, K vx 2wn 2 K px were applied to the individual position control. In Fig 217, the green line is a desired position and the red line is an actual position in Cartesian space. The response will change according to the position gain K px and the velocity gain K vx ; however, when a position command was applied, the end-effecter followed the desired command with a position error under 0.005[m] for all directions Through this test, it was verified that the position control in Cartesian space using the Operational Space Formulation was correctly transferred through the torque-position transformer. [G] Compliance Control
Test If we modify K px and K vx in Equation (2.29), compliant task position control without using any force sensor was also achieved. In Fig 218, when K px =2000 was applied for the hand position control task, the hand position shifted a little when pushed manually because the mechanical stiffness of the arm is not enough; however, the hand was almost fixed in the commanded position. When K px =500 was applied for the hand position control task, the hand moved smoothly when pushed manually. When the hand was released, it tried to move back to the original position. The response changes according to the position gain K px and the velocity gain K vx but we could feel a spring and a damping effect for position control when we push the hand and moved it. The response was improved if we use the measured data by force sensor and applied it to f * . By using the torque-position transformer, compliant position control in Cartesian space was verified without using any force sensor.
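The decoupled unit-mass behavior commanded by the PD law of Equation (2.38) can be illustrated with a short numerical sketch. This is an illustrative Python example, not the robot's controller: the function and variable names are assumptions, and it simply integrates the unit-mass system under the critically damped gain choice used in the tests, checking that a stiff gain reaches the goal within the reported 0.005 m error.

```python
import numpy as np

def pd_task_force(x, xd, x_des, Kpx, Kvx, xd_des=0.0):
    """PD control force for the decoupled unit-mass task (Eq. 2.38)."""
    return Kpx * (x_des - x) + Kvx * (xd_des - xd)

def simulate(Kpx, steps=4000, dt=0.001, x_des=0.1):
    """Integrate the unit-mass dynamics x'' = f* from rest at x = 0."""
    Kvx = 2.0 * np.sqrt(Kpx)          # critically damped: Kvx = 2*wn = 2*sqrt(Kpx)
    x, xd = 0.0, 0.0
    for _ in range(steps):
        f = pd_task_force(x, xd, x_des, Kpx, Kvx)
        xd += f * dt                  # semi-implicit Euler integration
        x += xd * dt
    return x

# Stiff gains track tightly, matching the reported error bound of 0.005 m.
print(abs(simulate(Kpx=1500.0) - 0.1) < 0.005)
```

Lowering `Kpx` (e.g. to 500, as in the compliance test) yields the softer, compliant behavior: the closed loop still converges to the goal, but deflects much further under an external push.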
Fig. 2.17 Operational space position control test.
Fig. 2.18 Operational space compliance control test.

2.9.3 Torque to Velocity Transformer on Humanoid Robot
[A] Gravity Compensation Torque Command Test
In this test, a gravity-compensation torque was applied to the arms of the ASIMO robot to confirm that the Torque Transformer works correctly. The proposed torque transformer does not include the nonlinear effects of the physical joint model between the motor and the joint output. If the joint friction in the physical joint is not negligible, its effect on the torque-to-position transformer is also not negligible. For the arm joints of the ASIMO robot, the friction characteristics of the individual joints differ: the friction is mainly caused by the Harmonic Drive gear, belt gear, mechanical hinges, etc., and its effect also changes with the mechanical condition. In this gravity-torque test the effect of joint friction was ignored in order to observe the pure effect of the torque transformer. Moreover, since the dynamics model of the upper body has 6 DOF at the base, the gravity vector of the upper body was compensated according to the movement of the body. In the test, all the joints were moved manually and passively by an operator to see how the individual joints hold the connected link while compensating the effect of gravity. In Fig. 2.19, the blue line shows the torque command and the red line shows the resulting torque, calculated from the measured current and the torque constant Kt. The blue and red lines are almost symmetric about the zero line even while the torso rotates. Through this test, the effect of the Torque Transformer was experimentally validated.

[B] Operational Space Control Test
In this test, the Operational Space command was applied to position control of both hands in Cartesian space. In Equation (2.39), \(f^*\) is defined as a simple PD control, as in Equations (2.48) and (2.49). The results of the experimental test are shown in Fig. 2.20. A sinusoidal motion command was applied with the position gain \(K_{px} = 1500\) and the velocity gain \(K_{vx} = 2\omega_n = 2\sqrt{K_{px}}\) for each direction. In Fig. 2.20, the blue line is the desired position and the red line is the actual position in Cartesian space. When the position command was applied, the end-effector followed the desired command with a position error under 0.005 m. Through this test, it was verified that accurate position control in Cartesian space can be achieved if the position control loop is closed over the torque transformer.

Fig. 2.19 Gravity torque compensation test of Torque Transformer.
Fig. 2.20 The Operational Space Control on the Torque Transformer.

2.10
Conclusions
A) The Torque Transformer was developed to convert a desired joint-torque command into instantaneous increments of the joint position or velocity command that tend to produce the corresponding actual torque at the robot. The merits of this framework are that (i) open-loop torque control can be realized on a position-controlled robot without hardware modification and (ii) the controller can account for the nonlinear dynamics of the system. The concept of the transformer was analyzed and validated on the PUMA and ASIMO robots.
B) In this work, (a) the Torque to Position Transformer Experimental Model, (b) the Torque to Position Transformer Linear Model and (c) the Torque to Velocity Transformer Linear Model were validated on the HONDA ASIMO robot. The Torque to Position Transformer Experimental Model was the preferred approach; however, its modeling process was more complicated than that of the other two methods. The Torque to Position Transformer Linear Model was our first approach to simplify the modeling process. The Torque to Velocity Transformer Linear Model required the simplest inverse-modeling process, and we selected it for use in this research. By including the effect of friction, the performance of all three approaches could be improved further.
C) In this framework, the Operational Space Formulation was applied to account for the dynamics of the robot, and the concept of the transformer was analyzed and validated on the HONDA ASIMO robot. Using the Torque Transformer, compliant and passive control as well as precise position control in Cartesian space were experimentally validated.

CHAPTER 3
Sensor-Based Whole-Body Compliant Motion Primitives

The aim here is to create whole-body motion primitives for basic tasks. These primitives are built by appropriate selection of compliant frames, force and motion specifications, and balance control. The control
primitives that drive the behavior of the robot rely strongly on the feedback signals provided by the various sensors mounted on the robot. Such sensors provide important information on the state of the robot at a given time. To improve the manipulation skill of the robot, we developed new strategies that estimate the internal and external status and detect instabilities of balance that can potentially lead to sudden falls. Furthermore, to correctly program and simulate the control primitives of a humanoid robot, it is very important to correctly model each of these sensors. These models address the noise, time delay, signal distortion and resolution that characterize a real sensor, and they are calibrated by monitoring real data from the robot. Comparisons between the real and simulated robot are performed in order to obtain realistic models of the sensory stage.

3.1 Lower Body Control
3.1.1 ZMP Control
For balance control of conventional humanoid robots, stable balance has been realized by controlling the ZMP (Zero Moment Point). The ZMP is generally defined as "the point on the floor where the total moment caused by the inertial force and gravity becomes zero". Balance control is performed by controlling the ZMP position so that it remains inside the convex hull of the foot-support area while standing. For walking control of a humanoid robot, gait generation is performed with ZMP control, and the supporting foot, the swing foot and the waist position are controlled together. Contact between the foot and the floor and the ground reaction force are detected by the force and contact sensors mounted on the foot, and contact is regulated by compliance control. The upper-body posture is estimated by a gyro sensor and fed back for gait generation and balance control of the
robot.

Generally, the dynamics of a humanoid robot is expressed by Equation (3.1):

\[
\begin{bmatrix} M_b & M_{bj} \\ M_{jb} & M_j \end{bmatrix}
\begin{bmatrix} \ddot{x}_b \\ \ddot{q} \end{bmatrix}
+
\begin{bmatrix} c_b \\ c_j \end{bmatrix}
+
\begin{bmatrix} g_b \\ g_j \end{bmatrix}
=
\begin{bmatrix} 0 \\ \tau \end{bmatrix}
+ \sum_k J_k^T \begin{bmatrix} f_k \\ m_k \end{bmatrix}
\tag{3.1}
\]

where \(\ddot{x}_b\) is the acceleration of the base of the robot in the global coordinate system. Here the base of the robot is attached to the waist and is not fixed in the global coordinate system. The term \(\ddot{q}\) is the joint acceleration, the block matrix on the left is the mass matrix, the vector \(c\) collects the Coriolis and centrifugal forces, and \(g\) is the gravity force vector. In Equation (3.1), the upper row represents the global movement of the robot in Cartesian space and the lower row represents the local movement of the robot in joint space, driven by the joint torque \(\tau\) together with the k-th external force \(f_k\) and moment \(m_k\). The matrix \(J_k^T\) maps the k-th external force at point k into generalized forces. Extracting the translational motion from the upper row of Equation (3.1), whose driving force is zero, and focusing on the global motion of the whole system, we have the simple equation

\[
m\ddot{c} = \sum_{i=0}^{n} m_i \ddot{c}_i = m g + \sum_k f_k \tag{3.2}
\]

Here n is the number of links (the base link is link 0), \(m_i\) is the mass of link i, \(m\) is the total mass of the robot, \(c_i = [x_i\;\, y_i\;\, z_i]^T\) is the center of gravity of the i-th link, \(c\) is the center of gravity of the whole robot, g is the gravitational acceleration, and \(\sum_k f_k\) is the grand total of the external forces acting on the whole system. On the other hand, extracting the rotational motion from the upper row of Equation (3.1) and focusing on the global rotation of the whole system, the moment produced by the external forces from the environment balances the moment of the inertial forces about the center of gravity:

\[
\dot{L} = \sum_k (p_k - c) \times f_k + \sum_k m_k \tag{3.3}
\]

where \(\dot{L}\) is the rate of change of the angular momentum about the center of gravity, \(p_k\) is a point on the ground and \(m_k\) is the external moment from the environment. The point p on the ground at which the x–y components of the resulting moment are zero defines the ZMP. Since the vertical motion can be ignored, the system can be modeled as an inverted pendulum, which has the following equations of motion:

\[
\ddot{x} = \frac{g}{z_c}(x - p_x) \tag{3.4}
\]
\[
\ddot{y} = \frac{g}{z_c}(y - p_y) \tag{3.5}
\]
\[
p_x = x - \frac{z_c}{g}\,\ddot{x} \tag{3.6}
\]
\[
p_y = y - \frac{z_c}{g}\,\ddot{y} \tag{3.7}
\]

Here we assume that the height of the center of gravity \(z_c\) is constant while walking, so \(\ddot{z} = 0\). From Equations (3.4) and (3.5), we have the state equations (3.8) and (3.9):

\[
\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ g/z_c & 0 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ -g/z_c \end{bmatrix} p_x \tag{3.8}
\]
\[
\frac{d}{dt}\begin{bmatrix} y \\ \dot{y} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ g/z_c & 0 \end{bmatrix}\begin{bmatrix} y \\ \dot{y} \end{bmatrix}
+ \begin{bmatrix} 0 \\ -g/z_c \end{bmatrix} p_y \tag{3.9}
\]

This can be regarded as an inverted pendulum consisting of a massless rod and a point mass. The motion of the inverted pendulum is determined by the relationship between the contact point and the center of gravity; in the case of a humanoid robot, however, the contact with the environment is defined by a support polygon, and the ZMP plays the role of the contact point. Therefore the ZMP is constrained within the support polygon, as expressed by Equations (3.10) and (3.11):

\[
x_{\min} \le p_x \le x_{\max} \tag{3.10}
\]
\[
y_{\min} \le p_y \le y_{\max} \tag{3.11}
\]

Fig. 3.1 ZMP control of ASIMO robot.

3.1.2 Stable Balance Control
The issues to be addressed in order to achieve
stable walking are: (a) not falling down even when the floor is uneven; (b) not falling down even when pushed; and (c) being able to walk stably on stairs or slopes. This is achieved by the following three posture controls, which are the main features of the HONDA humanoid robot ASIMO [1][2]:
1. Floor Reaction Control — firm standing control of the soles of the feet.
2. Target ZMP Control — control that maintains posture by accelerating the upper torso in the direction in which the robot threatens to fall when the soles of the feet cannot stand firmly.
3. Foot Planting Location Control — control using side steps to adjust for deviations of the upper torso caused by target ZMP control.

When the robot is walking, it is influenced by the inertial forces caused by the earth's gravity and by the acceleration and deceleration of walking. These combined forces are called the total inertial force. When the robot's foot contacts the ground, it is influenced by a reaction from the ground called the floor reaction force. The intersection of the floor with the axis of the total inertial force has a total inertial moment of zero, and is therefore called the Zero Moment Point. The point where the floor reaction force acts is called the floor reaction point. Basically, an ideal walking pattern is created by the computer and the robot's joints are moved accordingly. The total inertial force of the ideal walking pattern is called the target total inertial force, and the ZMP of the ideal walking pattern is called the target ZMP. When the robot maintains perfect balance while walking, the axes of the target total inertial force and the actual floor reaction are the same; accordingly, the target ZMP and the center of ground reaction coincide. When the robot walks across uneven ground, the axes of the target total inertial force and the actual floor reaction force fall out of alignment, balance is lost and a falling force is generated. This falling force corresponds to the misalignment between the target ZMP and the center of ground reaction; in short, this misalignment is the main cause of loss of balance. When the HONDA ASIMO robot loses its balance and threatens to fall, the three control systems above operate to prevent the fall and allow continued walking.

Fig. 3.2 Target ZMP and center of ground reaction force.
Fig. 3.3 Gyro sensor to estimate correct posture of torso.

3.2 Unified Whole-Body Control
3.2.1 Whole Body Coordination
The balance controller of the HONDA ASIMO robot is based on the inverted pendulum model and is very stable. In this framework, the upper body is modeled simply as a mass, and the dynamic model of the arms is not actively included in the balance controller. To achieve whole-body control, the upper body controller and the lower body controller need to be
connected [26]. The communication between the upper body and the lower body is shown in Fig. 3.4. In this framework, the effect of arm motion must be reflected in the balance controller as the movement of the center-of-gravity points of both arms. The information from the arm force sensors can also be passed to the balance controller, and the desired position and posture of the hip can be sent to the balance controller as well. In the balance controller, the actual hip command is calculated according to the desired command from the upper body controller.

Fig. 3.4 Coordination between upper body Operational Space Controller and ASIMO's balance controller.

To calculate the upper body dynamics, the actual hip position and orientation are necessary to compensate the gravity torque. This is typical for a humanoid robot, because the base of the arm moves according to the motion of the lower body. In this robot, a gyro sensor is mounted on the body and is used to obtain the absolute orientation of the body. The sensor information is filtered and used by the balance controller; the filtered information is also sent to the upper body dynamics control. The hip has 6 DOF: 3 DOF for position and 3 DOF for orientation. The movement of the hip is realized by the movement of the legs, each of which has 6 DOF. In this framework, the position and orientation of the hip are commanded to the balance controller as a desired hip command. According to the desired command, the balance controller computes the internal command while keeping stable balance. In the balance controller, the hip position command is limited to maintain balance, and the hip orientation control has higher priority than the hip horizontal position command.

3.2.2 Upper-Body Posture Control
In this framework, the hip command is decided according to the constraint function [26][29] shown in Fig. 3.5. In the constraint function, the hip height command and the hip orientation commands about X, Y and Z are decided according to the constraint forces between the arms and the torso, which are defined in Fig. 3.6. The distance between the arm and the torso is defined, and the constraint forces are calculated according to a spring and damper model. The constraint moments are calculated for the individual orientation commands of the hip: the hip X orientation is decided by the constraint shown in blue, the hip Y orientation by the constraint shown in red, and the hip Z orientation by the constraint shown in green. The spring and damper gains for each constraint force are tuned so that the torso follows the motion of each arm and the constraint forces absorb the impact forces transmitted from the arms to the balance controller. A block diagram of the whole-body motion controller is shown in Fig. 3.7. In the motion controller, the Operational Space Formulation is applied to account for the dynamics of the system. The gravity vector in the dynamics of the system is compensated by the balance controller, which computes the precise orientation using the gyro sensor. The torque command from the motion controller is transformed into a velocity command to the motor control unit. The upper body dynamics is integrated and coordinated with the ZMP-based balance controller of ASIMO. The desired hip position and orientation are commanded from the constraint function.

Fig. 3.5 Definition of the constraint forces for the individual hip orientation commands.
Fig. 3.6 Constraint forces between arm and torso.

3.3 Whole Body Compliant Motion Control Test
Fig. 3.8 shows the experimental results of physical interaction between the HONDA ASIMO robot and a human. Compliant and passive upper body torque control by the Torque Transformer and stable lower body control by ASIMO's existing balance controller were integrated on the current position-controlled system. The Operational Space
Formulation was applied to the upper body motion control. In Fig. 3.8(a), the upper body and lower body coordination test is shown. According to the motion of the arms, the hip rotates about the X, Y and Z axes and moves up and down along the Z axis. Since the upper body and the lower body were coordinated, the lower body compensated for the motion of the upper body and kept stable balance using ASIMO's existing balance controller. Task position control was also applied: when the hand was commanded to hold a desired position in the global coordinate frame, defined at the midpoint between both feet, the hand kept pointing at the fixed position even while the torso moved. The robot also continued to point at the commanded position while taking several stamping steps. In Fig. 3.8(b), a compliant interaction test with an operator is shown. In this test, only the gravity torque was commanded to the arms, and the arms were passively operated by the operator. The operator could move the arms manually, and the arms followed the operator's intended motion. The hip moved according to the motion of the arms to reduce the constraint forces between the arms and the torso. Through this test, flexible whole-body motion control was realized.

Fig. 3.7 Block diagram of whole-body controller.
Fig. 3.8 Whole body motion control test.

3.4 Whole Body Compliant Balance Control
In a real environment, one of the important functions of the robot is collision-free trajectory generation, which improves its capacity to adapt to environmental changes. While the robot is moving, its trajectory needs to be generated dynamically from an arbitrary point to the planned goal position, avoiding collision not only with the static environment but also with any moving obstacles. In order to avoid contact with the surroundings, it is common to
calculate an interference-avoidance trajectory in advance with a visual sensor or the like and follow the derived trajectory. However, since the actual environment changes dynamically, the robot cannot sense all situations. It is therefore impossible to avoid every unexpected contact between the robot and the environment, or a push by a person during operation. As a result, the robot may fail to recognize the contact state, which destabilizes its balance and can cause a fall. Balance maintenance under unexpected conditions is a key requirement for the safe and successful coexistence of humanoid robots in ordinary human environments. When disturbances occur, the balance controller needs to bring the robot back to its normal balanced state. However, the motion of a humanoid robot is generally a high-order, multi-degree-of-freedom, nonlinear system, which makes balance control a complicated problem. When disturbances occur, balance control strategies aim to reduce the velocity of the COM by designing the joint control torques. This function lowers the risk of falling by detecting the balance situation, controlling the body posture, signaling the external force, and avoiding falls by taking several steps.

3.4.1 Whole-Body Balance Estimation and Motion Control System
This system [31][32] was developed to respond to unexpected contacts occurring between the robot and the environment. The system is composed of three key functions: (a) an internal status estimator, (b) a decision maker, and (c) a motion controller. The system estimates the internal status by computing the posture of the robot, the ZMP, the contact force on the upper body, and the contact status between the feet and the floor, using integrated sensor information. The status of balance is estimated according to the history of the internal status information and the motion planner. The motion controller, which is composed of (1) an upper body posture controller, (2) a hip height controller and (3) a walk step controller, decides the next action according to the estimated future balance status. If the unexpected contact force is small, that is, the transition of the balance state is small, the robot first reduces the effect of unexpected contact forces by moving the upper body posture. If this is insufficient, the robot moves the upper body down to reduce the total fall moment. If the effect still cannot be reduced, the robot takes a step to support its balance. If there is no time to take a step, the robot moves the upper body down to reduce the fall moment and actively makes itself fall safely. A key technology is to estimate the current status of balance and decide how to move to recover it.

Fig. 3.9 Image of whole-body balance planning.
Fig. 3.10 Flowchart of whole-body balance planning.
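The escalation sequence described above (posture shift, then hip lowering, then stepping, then controlled fall) can be summarized as a small decision function. This is a hypothetical Python sketch: the thresholds, units and names are illustrative assumptions, not parameters of the robot's actual decision maker.

```python
# Hypothetical sketch of the decision maker's escalation logic; the
# thresholds and mode names below are illustrative assumptions only.
def select_recovery_action(disturbance, time_to_fall,
                           posture_limit=10.0, hip_limit=25.0,
                           min_step_time=0.3):
    """Escalate through the three recovery controllers, then safe fall."""
    if disturbance <= posture_limit:
        return "upper_body_posture"      # (1) shift upper-body posture
    if disturbance <= hip_limit:
        return "lower_hip_height"        # (2) lower hip to cut fall moment
    if time_to_fall >= min_step_time:
        return "take_step"               # (3) extend the support polygon
    return "controlled_fall"             # no time to step: fall safely

print(select_recovery_action(5.0, 1.0))    # -> upper_body_posture
print(select_recovery_action(40.0, 0.1))   # -> controlled_fall
```

The ordering encodes the priority described in the text: the cheapest, least disruptive recovery (posture motion) is always tried first, and stepping or falling is reserved for disturbances the posture controllers cannot absorb.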
3.4.2 Estimation of Balance Status
This system was developed to respond to unexpected contacts occurring between the robot and the environment or between the robot and a human. The system estimates the internal status by computing the posture of the robot, the ZMP, the contact force on the upper body, and the contact status between the feet and the floor, using integrated sensor information. The status of balance is estimated according to the history of the internal status information and the motion planner. Generally, a contact force is designed to push and move the environment; if the designed force exceeds a tolerance, the extra force is fed back to the body (Fig. 3.11). If the whole-body controller cannot compensate the fed-back force, the balance state becomes unstable. This function compensates the fed-back force by moving and rotating the upper body backward. Generally, the ZMP and the COM are computed for balance control of a humanoid robot. If the robot is not moving and the COM is outside the support polygon, which is defined by the contact surfaces of both feet, the balance of the robot is estimated to be unstable; in this case, it is estimated that the robot is being pushed. By computing the position of the ZMP, the direction of the applied force can be estimated (Fig. 3.12). To recover balance, the robot rotates the upper body in the same direction as the push.

Fig. 3.11 Estimation of contact force.
Fig. 3.12 Estimation of COM and ZMP.

3.4.3 Motion Controller
So far, the balance control of humanoid robots has mainly comprised standing balance control and walking balance control. In the standing state, for smaller disturbances, the
robot can maintain balance by controlling the position and orientation of the upper body. For larger disturbances, the robot can maintain balance by stepping and extending the support polygon. In the walking state, the stability of the robot relies on a walking trajectory designed without external disturbances; in this case, the Zero Moment Point (ZMP) trajectory of the robot is designed assuming no external disturbances. In this system, the motion controller is composed of (1) an upper body posture controller, (2) a hip height controller and (3) a walk step controller. If the transition of the balance state is small, the robot first reduces unexpected contacts between itself and the environment by moving the upper body posture (Fig. 3.13). If the robot cannot reduce the unexpected contacts, it moves the upper body down to reduce the total fall moment (Fig. 3.14). If the contacts still cannot be reduced, the robot takes a step to support its balance (Fig. 3.17). If there is no time to take a step, the robot moves the upper body down to reduce the fall moment and actively falls.

Fig. 3.13 Contact avoidance by upper-body control.
Fig. 3.14 Reduction of fall moment by controlling the height of the hip.
Fig. 3.15 Block diagram of upper-body motion controller.
Fig. 3.16 Flowchart to estimate fall.
Fig. 3.17 Image of stepping motion to extend the support polygon.

3.4.4 Experimental Validation
Fig. 3.18 shows the whole-body balance status and motion control test. According to the status of the control mode and the sensor information, the motion controller selected (1) upper body posture control mode, (2) hip height control mode or (3) walk step control mode and decided the next action. When the 6-axis force sensor was used, the height of the hip was controlled according
to the force applied to the hands (Fig. 3.18, top). When a small unexpected contact force was applied to any link of the upper body, the robot estimated the status of its balance and first reduced the effect of the unexpected contact force by moving the upper body posture. When the robot could not reduce the effect of the unexpected contact forces, it moved the upper body down to reduce the total fall moment. For the case of large unexpected contact forces, we only tested the fall-mode command, out of consideration for safety and damage to the robot. Through the test, the key technology of estimating the current status of balance and deciding how to move to recover it was validated.

Fig. 3.18 Whole-body balance status and motion control test.

3.5 Conclusion
A) To realize safe and successful balance maintenance of humanoid robots in ordinary human environments, the proposed balance controller was developed by controlling the ZMP (Zero Moment Point). The stable balance controller was built from (1) Floor Reaction Control, (2) Target ZMP Control and (3) Foot Planting Location Control. To realize unified whole-body control, the upper body controller and the lower body controller were connected using the existing communication between the upper body and the lower body.
B) The upper-body posture controller was developed to move the upper body compliantly and to reduce the effect of contacts transmitted from the arms to the lower-body balance controller.
C) When disturbances occur, the balance control strategies aim to reduce the velocity of the COM by designing the joint control torques. This function lowers the risk of falling by detecting the fall situation, controlling the body posture, signaling the external force, and avoiding falls by taking several steps.

CHAPTER 4
Sensor-Based Contact Estimation and Motion Control Primitives

Because the distance between the humanoid robot and humans, and between the robot and the environment, is small in real environments, technology that lets the robot interact compliantly with the environment, or that lets a person instruct the robot's motion intuitively, becomes more important than conventional high-gain trajectory tracking. When a humanoid robot is operating in a narrow environment, it is bound to come into contact with that environment. So far, conventional humanoid robots were designed to allow contact only at the end-effectors, where force sensors are mounted. With contact force information, compliance control was applied to respond to the error between the designed contact force and the actual contact force [4]. Since a force sensor was required, the contact points of the humanoid robot were limited to end-effectors such as the feet and hands [3]. If contact happens at any point other than the end-effectors, the contact force affects the balance control of the humanoid robot and makes the balance unstable. Even in such a narrow environment, the humanoid robot needs to execute commanded tasks while allowing designed or unpredictable contacts, which change dynamically. To execute tasks of the humanoid robot while allowing physical contact with the environment and with humans, we propose to control the robot by the Operational Space Formulation, which makes it possible to control the humanoid robot compliantly in posture space and accurately in task space for position and force control. In this chapter, we introduce the Dynamic Contact Jacobian into the Operational Space Formulation, which enables decoupled task dynamics corresponding to (a) contact force control in constraint space, (b) task control at the end-effector in task space and (c) joint space dynamics, enabling the robot to contact the environment safely and to move compliantly while keeping contact with the environment or with a human. As for contact estimation, we propose to
use (a) 6 axis force sensor and (b) tactile skin sensor in this research. The 6 axis force sensor is supposed to be mounted on the end-effectors and feedback accurate contact information to the Operational Space Formulation. The sensor information is feedback to whole-body controller to realize stable contact force control and motion control in the real environment. As for the tactile sensor skin system, it is introduced to enable active slippage control and stable contact force control. Stiffness of compliant material layer and a friction coefficient of the smooth material layers are defined. To control the robot keeping complaint contact with human, impact force between the skin sensor and human was estimated through test and the stiffness of compliant material layer was applied to the skin sensor. All the functions were experimentally validated on the HONDA ASIMO robot. 4.1 Decoupled Task Dynamics and Motion Control A motion controller is designed by the Operational Space
Formulation [20][21]. As for the dynamics and control of the robot in multiple contact, the joint torque vector, $\Gamma$, is composed of the torque for contact force control and the null space torque:

$$\Gamma = J_c^T F_c + N_c^T \Gamma_0 \qquad (4.1)$$

where the first term $J_c^T F_c$ is the control torque for contact force control and the second term $N_c^T \Gamma_0$ is the torque for null space control. In this formulation, the null space control torque, $\Gamma_0$, is used for motion control. The dynamically consistent null space projection matrix, $N_c^T$, projects the torque $\Gamma_0$ into the null space of the contact forces; therefore the contact forces are not affected by $\Gamma_0$. Here, to capture the dynamics of the robot in contact, the equation of motion is

$$A(q)\ddot{q} + b(q,\dot{q}) + g(q) + J_c^T(q) f_c = \Gamma \qquad (4.2)$$

where $q$ is the $n \times 1$ generalized coordinate vector in joint space, $A(q)$ is the $n \times n$ mass/inertia matrix, $b(q,\dot{q})$ is the Coriolis and centrifugal torque and $g(q)$ is the gravity torque. The term $f_c$ is the $6 \times 1$ vector of contact force/moment at the contact point and $J_c$ is the $6 \times n$ Jacobian associated with the contact force $f_c$ on the desired contact link. In terms of the instantaneous linear/angular velocity $\vartheta_c$, the following relationship is obtained by projecting Equation (4.2) into the operational space:

$$\Lambda_c(q)\dot{\vartheta}_c + \mu_c(q,\dot{q}) + p_c(q) + f_c = F_c \qquad (4.3)$$

where $\Lambda_c(q)$, $\mu_c(q,\dot{q})$ and $p_c(q)$ are the inertia matrix, the vector of Coriolis/centrifugal forces and the vector of gravity forces mapped into the operational space associated with contact force control:

$$\Lambda_c^{-1}(q) = J_c(q)\, A^{-1}(q)\, J_c^T(q) \qquad (4.4)$$

$$\bar{J}_c^T(q) = \Lambda_c(q)\, J_c(q)\, A^{-1}(q) \qquad (4.5)$$

$$N_c^T(q) = I - J_c^T \bar{J}_c^T \qquad (4.6)$$

$$\mu_c(q,\dot{q}) = \bar{J}_c^T(q)\, b(q,\dot{q}) - \Lambda_c(q)\, \dot{J}_c(q)\, \dot{q} \qquad (4.7)$$

$$p_c(q) = \bar{J}_c^T(q)\, g(q) \qquad (4.8)$$

The control force $F_c$ in Equation (4.3) can be designed to compensate for the dynamic effects of the system using the estimates $\hat{\Lambda}_c(q)$, $\hat{\mu}_c(q,\dot{q})$, $\hat{p}_c(q)$ and $\hat{f}_c$:

$$F_c = \hat{\Lambda}_c(q)\, f_c^* + \hat{\mu}_c(q,\dot{q}) + \hat{p}_c(q) + \hat{f}_c \qquad (4.9)$$

where $\hat{(\cdot)}$ represents an estimate of a model parameter. The resulting equations of motion form a decoupled unit mass system for each contact,

$$\dot{\vartheta}_c = f_c^* \qquad (4.10)$$

which can be written for the $i$-th contact as

$$\dot{\vartheta}_{c,i} = f_{c,i}^* \qquad (4.11)$$

for all the contacts. Given the decoupled system for each contact, the control input $f_c^*$ should be composed using the relationship between motion and contact force. In this framework, the environment is assumed to have constant stiffness and a simple spring model is used for the controller design.
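As a numerical illustration, the projections of Equations (4.4)-(4.6) can be sketched as follows; the dimensions and matrix values are hypothetical, not taken from ASIMO:

```python
import numpy as np

# Hypothetical 4-joint sketch of Equations (4.4)-(4.6): from a mass matrix A
# and a contact Jacobian J_c, form the contact-space inertia Lambda_c, the
# dynamically consistent generalized inverse J_c-bar, and the null-space
# projection N_c^T.
rng = np.random.default_rng(0)
n = 4                               # number of joints (illustrative only)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)         # symmetric positive-definite mass matrix A(q)
Jc = rng.standard_normal((2, n))    # contact Jacobian J_c (2-DOF contact for brevity)

A_inv = np.linalg.inv(A)
Lambda_c = np.linalg.inv(Jc @ A_inv @ Jc.T)   # Eq. (4.4): contact-space inertia
Jc_bar_T = Lambda_c @ Jc @ A_inv              # Eq. (4.5): dyn. consistent inverse
Nc_T = np.eye(n) - Jc.T @ Jc_bar_T            # Eq. (4.6): null-space projection

# A torque filtered through N_c^T produces no acceleration at the contact,
# which is the sense in which Gamma_0 cannot disturb the contact forces:
tau0 = rng.standard_normal(n)
contact_acc = Jc @ A_inv @ (Nc_T @ tau0)
assert np.allclose(contact_acc, 0.0)
```

The final assertion checks the dynamic consistency property that motivates Equation (4.1): any $\Gamma_0$ projected by $N_c^T$ is invisible at the contact.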
Having the task-posture decomposition control structure for the contact force and motion control, the task-consistent dynamic equation for motion control can be obtained. The dynamic equation with the control structure is

$$A(q)\ddot{q} + b(q,\dot{q}) + g(q) + J_c^T(q) f_c = J_c^T F_c + N_c^T \Gamma_0 \qquad (4.12)$$

We define the operational space coordinate for motion, with the corresponding Jacobian denoted $J_m$:

$$\vartheta_m = J_m \dot{q} \qquad (4.13)$$

The control torque for motion, $\Gamma_0$, is chosen as $J_m^T F_m$ to apply the control force $F_m$ in the null space. The dynamic equation in the motion space is obtained by projecting the joint space dynamics into the motion space. This projection is performed by pre-multiplying Equation (4.12) by $\bar{J}_{m,c}^T(q)$:

$$\Lambda_{m,c}\,\ddot{x}_m + \mu_{m,c} + p_{m,c} + \bar{J}_{m,c}^T J_c^T f_c = \bar{J}_{m,c}^T J_c^T F_c + F_m \qquad (4.14)$$

where

$$\Lambda_{m,c}^{-1}(q) = J_m(q)\, A^{-1}(q)\, N_c^T\, J_m^T(q) \qquad (4.15)$$

$$\bar{J}_{m,c}^T(q) = \Lambda_{m,c}(q)\, J_m(q)\, A^{-1}(q) \qquad (4.16)$$

$$\mu_{m,c}(q,\dot{q}) = \bar{J}_{m,c}^T(q)\, b(q,\dot{q}) - \Lambda_{m,c}(q)\, \dot{J}_m(q)\, \dot{q} \qquad (4.17)$$

$$p_{m,c}(q) = \bar{J}_{m,c}^T(q)\, g(q) \qquad (4.18)$$

The control force $F_m$ is applied in the null space of the contact force control. Any torque components that would affect the contact force control are eliminated by the null-space projection matrix $N_c^T$. According to Equation (4.15) and the composed control force $F_c$ for contact force control, the control force for motion control can be computed as

$$F_m = \hat{\Lambda}_{m,c}\, f_m^* + \hat{\mu}_{m,c} + \hat{p}_{m,c} + \bar{J}_{m,c}^T J_c^T \hat{f}_c - \bar{J}_{m,c}^T J_c^T F_c \qquad (4.19)$$

resulting in a unit mass system for motion, provided the motion control is executed in the null space of the force control:

$$\ddot{x}_m = f_m^* \qquad (4.20)$$

The total torque to be applied to the robot is

$$\Gamma = J_c^T F_c + N_c^T J_m^T F_m \qquad (4.21)$$

4.1.1 Modeling the Dynamic Contact Jacobian

Given a contact point on a link, the corresponding Jacobian, the Contact Jacobian, can be defined accordingly. Since the humanoid robot has a branching
structure and multiple contact points can be designed, the Contact Jacobian was extended to deal with more general contact forces dynamically (Fig. 4.1). For one contact point, a contact force can be designed at any point on the link and for any direction. Here, the Contact Jacobian of one branch is designed for A) contact only at the end-effector, B) contact only at the target link [i] and C) contacts at both the end-effector and the target link [i].

Fig. 4.1 Dynamics for contact motion control.

In this method, the contact points are estimated dynamically according to the real contact status, and the Contact Jacobian is updated to compute the joint torques:

$$\Gamma = J_C^T F_{cmd/global} + \Gamma_{task} \qquad (4.22)$$

In case A), the Contact Jacobian $J_c^{EE}$ is modeled as the Jacobian at the end-effector. If the robot has a force sensor at the end-effector, force feedback can be applied to achieve precise force control. In case B), the Contact Jacobian $J_c^{Link}$ is modeled according to the desired contact situation. The contact points can be modeled from joint [i] on link [i]. If the contact point is only on link [i], the Contact Jacobian is reconstructed from the modeled contact point. In this case, the joints outboard of the contact point are not actively controlled but are controlled passively and compliantly by a zero-torque or gravity-torque command. If a tactile sensor can be mounted on the link, force feedback can be achieved. In case C), both Contact Jacobians, $J_c^{EE}$ and $J_c^{Link}$, are modeled. To generate the Dynamic Contact Jacobian, the contact point is obtained as follows:

$$X_C = T_{[i]}\, X_{C[i]} + P_{[i]} \qquad (4.23)$$

$$T_{[i]} = \prod_{k=0}^{i} T_k \qquad (4.24)$$

Here, $P_{[i]}$ is the position of the $i$-th joint in the global coordinate system, $X_{C[i]}$ is the offset from $P_{[i]}$ to the contact point on the $i$-th link and $T_{[i]}$ is the rotation matrix of the $i$-th link in the global coordinate system. The rotation matrix $T_{[i]}$ is computed from the orientation of the base, $T_{[0]}$, which is estimated by the gyro sensor, accounting for the orientation of each link. For this estimated contact point, the Contact Jacobian is dynamically updated. The general Contact Jacobian matrix is shown in Equation (4.25); the proposed Dynamic Contact Jacobian, in contrast, zeroes the columns beyond the contact ($i$-th) link, as shown in Equation (4.26):

$$J_C = \begin{bmatrix} J_{1,1} & J_{1,2} & \cdots & J_{1,N} \\ J_{2,1} & J_{2,2} & \cdots & J_{2,N} \\ \vdots & \vdots & & \vdots \\ J_{6,1} & J_{6,2} & \cdots & J_{6,N} \end{bmatrix} \qquad (4.25)$$

$$J_{DC} = \begin{bmatrix} J_{1,1} & \cdots & J_{1,i} & 0 & \cdots & 0 \\ J_{2,1} & \cdots & J_{2,i} & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ J_{6,1} & \cdots & J_{6,i} & 0 & \cdots & 0 \end{bmatrix} \qquad (4.26)$$
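A minimal sketch of the column-zeroing of Equation (4.26) and the resulting torque mapping, with an illustrative Jacobian and a hypothetical contact at link 3:

```python
import numpy as np

# Sketch of Equation (4.26): zero the columns of the contact Jacobian beyond
# the contact link i, so a contact force maps to torques only at joints 0..i.
# J_C would come from the robot's kinematic model; here it is random for brevity.
rng = np.random.default_rng(1)
n_joints = 7
i_contact = 3                                  # hypothetical contact on link [3]
J_C = rng.standard_normal((6, n_joints))       # general contact Jacobian, Eq. (4.25)

J_DC = J_C.copy()
J_DC[:, i_contact + 1:] = 0.0                  # Dynamic Contact Jacobian, Eq. (4.26)

F_global = np.array([0.0, 0.0, 15.0, 0.0, 0.0, 0.0])  # 6-DOF force/moment command
tau = J_DC.T @ F_global                        # joint torque command, as in Eq. (4.29)

# Joints outboard of the contact point receive no active torque; as described
# in the text, they can be left compliant (zero-torque or gravity-torque).
assert np.allclose(tau[i_contact + 1:], 0.0)
```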
According to the Dynamic Contact Jacobian (Equation 4.26), a force command at the contact point on the $i$-th link is transferred to torques at joints 0 through $i$. When the contact force and force command at the contact point on the $i$-th link are generated in the local coordinate system of that link, they are transformed into the global coordinate system:

$$F_{cmd/global} = T_{[i]}\, F_{cmd/local[i]} \qquad (4.27)$$

Here, $F_{cmd/global}$ is the 6-DOF vector of contact force and command force in the global coordinate system and $T_{[i]}$ is the rotation matrix of the $i$-th link in the global coordinate system:

$$F_{cmd/local} = \begin{bmatrix} F_{force[X]} \\ F_{force[Y]} \\ F_{force[Z]} \\ F_{moment[X]} \\ F_{moment[Y]} \\ F_{moment[Z]} \end{bmatrix}_{local}, \qquad F_{cmd/global} = \begin{bmatrix} F_{force[X]} \\ F_{force[Y]} \\ F_{force[Z]} \\ F_{moment[X]} \\ F_{moment[Y]} \\ F_{moment[Z]} \end{bmatrix}_{global} \qquad (4.28)$$

Therefore, any measured contact force and any force command at the $i$-th link are transferred to joint torque commands by the Dynamic Contact Jacobian matrix:

$$\begin{bmatrix} \Gamma_{[1]} \\ \Gamma_{[2]} \\ \vdots \\ \Gamma_{[N]} \end{bmatrix} = J_{DC}^T \begin{bmatrix} F_{force[X]} \\ F_{force[Y]} \\ F_{force[Z]} \\ F_{moment[X]} \\ F_{moment[Y]} \\ F_{moment[Z]} \end{bmatrix}_{global} \qquad (4.29)$$

Fig. 4.2 Definition of the Contact Jacobian.

4.1.2 Experimental Validation

Using the Dynamic Contact Jacobian, a contact point can be placed at any point of a link. Fig. 4.3 shows the robot pushing a table with the forearm and with the elbow, applying a force of 15 [N] towards the table. In this framework, the Dynamic Contact Jacobian was defined to control the contact forces of the robot at any point on any link and in any direction. It was applied to the upper-body control of the HONDA ASIMO robot, while the whole body was controlled by the torso controller and the balance controller (Chapter 3). In this test, a standard table was used to test the contact and motion control; the location of the table was unknown to the robot. Only a constant contact force was applied at the forearm and the elbow, and compliant contact with the table was validated. To validate constant force control by the Dynamic Contact Jacobian, a motion command was actively applied to the posture control of the torso (Fig. 4.4). In this test, it
was confirmed that the Dynamic Contact Jacobian consistently commanded the forearm and the elbow to generate the designed force even while the posture of the torso moved randomly under stable balance control.

Fig. 4.3 Contact with a table by the forearm and by the elbow.

Fig. 4.4 Whole-body control and contact with a table by the forearm and by the elbow.

The forearm was also commanded to move in the vertical plane while keeping its original orientation. Fig. 4.22 shows the results of the experimental tests. In Fig. 4.22 left, a contact point was designed at the middle of the bottom line of the forearm and the contact force was directed perpendicular to the bottom line of the forearm, in the local coordinate system, as $F_{c/local} = (0.0, 0.0, 15.0)$ [N]. In Fig. 4.22 right, a contact point was designed at the middle of the side line of the forearm and the contact force was directed sideways, in the global coordinate system, as $F_{c/global} = (0.0, 15.0, 0.0)$ [N]. In both cases, the forearm was moved by a sinusoidal position command while keeping its original orientation.

4.2 Contact Estimation by Force Sensor
and Whole-Body Control

4.2.1 Contact Estimation and Whole-Body Control

When the humanoid robot actively makes contact with the environment and moves while keeping contact, the balance controller is disturbed and can become unstable. In Chapter 3, we developed a stable whole-body controller that generates a proper upper-body posture accounting for the posture of both arms; there, the upper-body posture controller was connected to the balance controller of ASIMO to respond to dynamic changes of upper-body motion. In this chapter, the whole-body controller is extended to account for contacts at the arms. In Fig. 4.5, the block for hip position and orientation control is extended to account for contacts at the arms. Fig. 4.6 shows contact estimation in the horizontal direction and the whole-body control. Detected contact forces in the X direction (the forward direction in the local coordinate system of the robot) at the arm links are projected to a moment at the hip position (the middle point between the bases of the two legs) acting on the orientation of the upper body, and compliance control is applied to respond to the contacts at the arms:

$$\Delta\theta_{[Z]} = \left( F_{[X,R]}\, L_{[Y,R]} - F_{[X,L]}\, L_{[Y,L]} \right) K_{[Z]} \qquad (4.30)$$

Here, $\Delta\theta_{[Z]}$ is the rotation command to the upper-body posture about the Z axis of the hip, $F_{[X,R]}$ is the contact force in X at the right arm, $F_{[X,L]}$ is the contact force in X at the left arm, $L_{[Y,R]}$ is the distance in Y from the center of rotation to the right contact point, $L_{[Y,L]}$ is the distance in Y from the center of rotation to the left contact point and $K_{[Z]}$ is a compliance gain.

Fig. 4.5 Block diagram of the whole-body controller including the Contact Jacobian.

Fig. 4.7 shows contact estimation in the vertical direction and the whole-body control. As for the contact forces in the vertical direction (the Z axis), if both act in the same direction, the two vertical forces are summed and compliance control generates a hip height command, lifting the hip up or down accordingly:

$$\Delta z = \left( F_{[Z,R]} + F_{[Z,L]} \right) K_{[Vz]} \qquad (4.31)$$

Here, $\Delta z$ is the position command to the upper body in the vertical direction and $K_{[Vz]}$ is a compliance gain. On the other hand, when the vertical forces at the right and left arms act in opposite directions, the orientation about the X axis is rotated by compliance control:

$$\Delta\theta_{[X]} = \left( F_{[Z,R]} - F_{[Z,L]} \right) K_{[X]} \qquad (4.32)$$

Here, $\Delta\theta_{[X]}$ is the rotation command to the upper body about the X axis and $K_{[X]}$ is a compliance gain. If the distance from the hip position to the contact point in the forward direction exceeds a designed threshold, a rotation about the side direction (the Y axis) is applied:

$$\Delta\theta_{[Y]} = \left( F_{[Z,R]}\, L_{[X,R]} + F_{[Z,L]}\, L_{[X,L]} \right) K_{[Y]} \qquad (4.33)$$

Here, $\Delta\theta_{[Y]}$ is the rotation command to the upper body about the Y axis and $K_{[Y]}$ is a compliance gain. Consequently, the effect
on the balance controller from any contacts at the arms is reduced, which makes the robot stand stably while in contact.

Fig. 4.6 Contact estimation in the horizontal direction and whole-body control.

Fig. 4.7 Contact estimation in the vertical direction and whole-body control.

4.2.2 Experimental Validation: Whole-Body Control in Contact

For validation and performance analysis of the theoretical formulation, the ASIMO robot was used. Each arm of the robot has 5 DOF; every joint except the wrist is driven by a motor, a belt and a Harmonic Drive gear system. The joints are driven by standard PD control with position and velocity feedback loops and corresponding position and velocity gains. To enable compliant contact and force control accounting for the dynamics of the system, the arm joints are commanded by torque commands by way of the Torque Transformer (Chapter 2). In the test, multiple contacts were designed, mainly at the hand.
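The compliance rules of Equations (4.30)-(4.33) can be sketched as below; the gains, signs and the branching on force direction are assumptions reconstructed from the verbal description, not ASIMO's actual controller:

```python
# Hedged sketch of the whole-body compliance rules of Equations (4.30)-(4.32).
# All gains and the sign conventions are illustrative assumptions.
def upper_body_compliance(F_R, F_L, Fz_R, Fz_L, L_R, L_L, K_yaw, K_z, K_roll):
    """Map arm contact forces to hip posture commands.

    F_R, F_L   : contact forces in X (forward) at the right/left arm [N]
    Fz_R, Fz_L : contact forces in Z (vertical) at the right/left arm [N]
    L_R, L_L   : Y-distances from the hip rotation center to the contacts [m]
    """
    # Eq. (4.30): horizontal forces make a moment about the hip Z axis.
    d_yaw = K_yaw * (F_R * L_R - F_L * L_L)
    if Fz_R * Fz_L >= 0.0:
        # Eq. (4.31): same-direction vertical forces lift the hip up/down.
        d_z = K_z * (Fz_R + Fz_L)
        d_roll = 0.0
    else:
        # Eq. (4.32): opposing vertical forces rotate the hip about X.
        d_z = 0.0
        d_roll = K_roll * (Fz_R - Fz_L)
    return d_yaw, d_z, d_roll

# Equal vertical pushes on both arms lower/raise the hip without rolling it:
d_yaw, d_z, d_roll = upper_body_compliance(0, 0, 10, 10, 0.2, 0.2,
                                           K_yaw=0.01, K_z=0.001, K_roll=0.01)
```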
The force command was applied with 6-axis force sensor feedback in the vertical direction, $F_{c/global} = (0.0, 0.0, 10.0)$ [N], and a simple Cartesian-space motion command was applied in the horizontal direction. In Fig. 4.9, the green line shows the force command and the red line shows the contact force measured by the force sensor; using force feedback, accurate contact force control is realized. In the same figure, the green line also shows the motion command in the side direction and the blue line shows the actual motion. With contact force control active in the vertical direction there was a small delay and position error, but this is not critical, since motion in contact is generally expected to be slow.

Fig. 4.8 Pushing a table with the right hand under force and position control.

Fig. 4.9 Contact force command and contact force measured by the force sensor.

With this test, contact and motion control with precise force sensor feedback was verified. In this experimental validation, the following results were achieved: 1) robust force control, pushing the table with the designed force while tracking the position command in the horizontal plane; 2) the effect of the contact force at the hand was reduced by the upper-body posture controller, and stable balance control was achieved even though the robot was compliantly in contact with the table.

4.2.3 Gravity Assist Control

So far, the trajectory and the contact force of the robot were designed beforehand, and the robot was controlled to track the trajectory with high-gain position control; the contact force was controlled with compliance control, which allows a small difference between the designed and actual forces. However, it is impossible to model the environment perfectly beforehand, so the actual contact force can differ largely from the designed contact force, and unexpected contacts occur when a humanoid robot
works in a real environment. Therefore, it becomes important to estimate the contact force and to control the robot compliantly. In this section, a 6-axis force sensor mounted on the wrist of the robot is used to detect contact between the robot and the environment. Using this sensor, the contact force and moment at the hand can be detected with high accuracy. The detected contact force is used in the force controller, and the contact force command is transferred to joint torques through the Dynamic Contact Jacobian described in this chapter. The contact force in the task space dynamics and the joint dynamics in the posture space are integrated. Fig. 4.10 shows the force control mode in the Operational Space Formulation. In force control, the force command is generated in the Operational Space with the same magnitude as the detected force and the opposite direction. Thus, if the hand is pushed or more objects are added, the force controller responds with an equal and opposite force command, and the robot holds the hand position. At the same time, since the Operational Space Formulation decouples the task and posture dynamics, the joints between the hand and the shoulder can move compliantly. Fig. 4.11 shows another use case. Generally, the null space projection matrix $N_c^T$ decouples the task and posture dynamics; here, however, $N_c^T$ was replaced by the identity matrix so that the end-effector moves compliantly while holding the object in the hand. In this case, the robot holds an arbitrary object in the hand while the wrist position can still be moved by moving the joints between the hand and the shoulder. Consequently, the robot can hold objects, detecting their weight with the 6-axis force sensor, and the objects can be manipulated by applying forces to the joints between the hand and the shoulder. We name this mode Gravity Assist mode [28]. In Chapter 5, the concept of Gravity Assist is extended to realize a compliant standing mode.

Fig. 4.10 Force control mode in the Operational Space Formulation.

Fig. 4.11 Operational space force control and joint space control.

4.2.4 Experimental Validation of Gravity Assist Control

Fig. 4.12 shows the experimental validation of Gravity Assist mode, applied to the right arm of the HONDA ASIMO robot. The weight of the box was unknown to the robot; it was measured by the 6-axis force sensor and the Gravity Assist force command was applied to hold the box. The Dynamic Contact Jacobian generated the joint torque commands that made the robot hold the box.
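The core of the Gravity Assist idea, opposing the measured wrench with an equal and opposite operational-space force, can be sketched as follows; the 3-DOF arm Jacobian and the measured load are hypothetical numbers, not ASIMO data:

```python
import numpy as np

# Minimal sketch of Gravity Assist: the wrench measured at the hand is opposed
# by an equal and opposite operational-space force command, so the hand holds
# an unknown load while the rest of the arm stays back-drivable.
J_hand = np.array([[0.0, 0.3, 0.25],
                   [0.4, 0.0, 0.0],
                   [0.0, 0.2, 0.1]])      # 3x3 linear Jacobian of a toy 3-DOF arm
F_measured = np.array([0.0, 0.0, -14.7])  # ~1.5 kg box pressing down [N]

F_cmd = -F_measured                       # oppose the detected force
tau = J_hand.T @ F_cmd                    # map to joint torques, tau = J^T F

# With N_c^T replaced by the identity (as in the text), no null-space filter
# removes posture motion, so the elbow/shoulder can be moved by hand while
# the load continues to be supported.
```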
The wrist position could nevertheless be moved by moving the joints between the hand and the shoulder, so the held object could be manipulated by applying forces to those joints.

Fig. 4.12 Experimental validation of Gravity Assist.

4.3 Contact Estimation by Tactile Skin Sensor

4.3.1 Contact Estimation by Tactile Skin

So far, only the force sensor was used to realize compliance control and to detect contact with the environment. However, more precise information about the surroundings of the robot is necessary for higher performance in narrow environments. Force sensors, tactile sensors, vision, sonar, etc. can be integrated in a sensor fusion scheme to perceive the environment. In the
setup, for tactile sensing, a resistor-type sensor is used (Fig. 4.13). The advantages of a resistor-type sensor are that (i) the sensor is inexpensive, (ii) the sensor is robust and (iii) the sensitivity level can be customized by the voltage driver or by a resistor: by changing resistors to higher or lower values, the sensitivity band can be shifted up or down. This makes it straightforward to match the sensor to the weight of the robot or the strength of the gripper. In our experiments we used the 1.5-inch Force Sensing Resistor. Force Sensing Resistors (FSRs) are polymer thick film (PTF) devices that exhibit a decrease in resistance with an increase in the force applied to the active surface. Their force sensitivity is optimized for human touch control of electronic devices. FSRs are not load cells or strain gauges, though they have similar properties, and they are not suitable for precision measurements. Figure 4.14 shows the three-layered FST structure (Fabric, Skin and Tactile sensor).

Fig. 4.13 Tactile sensor skin on the forearm.

In this system, we use several FSR tactile sensors to surround a link such as the forearm link. Over the tactile sensors, a soft material is placed to realize soft contact between the link and the environment. It also minimizes the impact force when the robot collides with the environment or when an unpredictable contact happens. With the tactile sensors and the soft material, a designed contact can be achieved with sensor feedback. This construction also distributes the contact force over the link and makes it easy for the robot to touch the environment. The problem with this two-layered structure is that the surface of the skin has large friction, which makes it difficult for the robot to slide while touching the environment, even under a small contact force. To avoid this problem, we apply a fabric layer to reduce the friction between the robot and the environment. To control the robot while keeping stable contact with the environment and to control motion while keeping contact (slippage control), the stiffness of the compliant material layer and the friction coefficient of the smooth material layer are analytically defined for the same tactile sensor skin system. In this section, the estimation process for the contact points and contact forces, which are fed back to the controller, is described. Fig. 4.15 is a cross-sectional side view of a portion of the tactile sensor skin showing the physical effect of a contact force applied to it. The surface layer is preferably thin, so that it deforms along the compliant layer below under a large contact force, and preferably has low friction, to enable motion control along the link under a small contact force.

Fig. 4.14 Three-layered structure by FST.

Fig. 4.15 Tactile
sensor skin and contact with the environment.

A) Surface layer

The surface layer should preferably be thin, low-friction, elastic, lightweight and waterproof. It can be polyester, nylon or any other fabric. Applying a contact force $F_z$ from link $i$ to the environment and a motion control force $F_x$ in the tangential direction of the link, whose mass is $M_i$ and acceleration is $a_x$, the link moves $\Delta x$ per second. The following relationship needs to be satisfied to realize slippage physically:

$$F_x \ge \mu F_z + M_i a_x \qquad (4.34)$$

$$\Delta x = \frac{1}{2} a_x T^2 = \frac{1}{2} a_x (1)^2 = \frac{1}{2} a_x \qquad (4.35)$$

$$\mu \le \frac{F_x - M_i a_x}{F_z} = \frac{F_x - 2 M_i \Delta x}{F_z} \qquad (4.36)$$

Applying the generally expected ranges $F_z = 0 \sim 20$ [N], $M_i = 0.5 \sim 10$ [kg] and $\Delta x = 0.02 \sim 0.2$ [m] to Equation (4.36), the friction coefficient that allows slippage control is bounded by $\mu \le 0.25$. Polyester of the kind used for an umbrella was selected and applied to the surface layer.
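The slippage condition of Equations (4.34)-(4.36) can be checked numerically; the tangential force value used here is an illustrative assumption, and the inequality itself is a reconstruction from the surrounding text:

```python
# Hedged sketch of the slippage bound of Equations (4.34)-(4.36).
# With T = 1 s, Eq. (4.35) gives a_x = 2*dx.
def max_friction_for_slip(F_x, F_z, M_i, dx):
    """Largest friction coefficient that still lets a link of mass M_i,
    pressed against the surface with F_z, slide dx in 1 s under a
    tangential force F_x (reconstruction of Eq. 4.36)."""
    a_x = 2.0 * dx                      # Eq. (4.35), T = 1 s
    return (F_x - M_i * a_x) / F_z      # Eq. (4.36)

# Worst case over the ranges quoted in the text (F_z up to 20 N, M_i up to
# 10 kg, dx up to 0.2 m), with an assumed 9 N tangential push:
mu = max_friction_for_slip(F_x=9.0, F_z=20.0, M_i=10.0, dx=0.2)
```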
B) Compliant layer

Under a large contact force in the expected range $F_z = 20 \sim 50$ [N], the compliant layer should deform and increase its damping effect to stabilize the contact between the link and the environment; any material with a hardness of less than 60 on the Asker scale can be used. Asker C is a spring hardness scale, as stipulated in SRIS 0101 (SRIS: Society of Rubber Industry [Japan] Standards); similar hardness measures include JIS K6253 and Asker F. Here, EPT SEALER EC-100-PS (Fig. 4.16), a flexible foam sealing material consisting of EPDM rubber foam with a semi-closed cell structure mixed with a flame retarder, is used.

Fig. 4.16 Structure of the tactile sensor skin.

C) Sensor layer

The Interlink Force Sensing Resistor uses a 1.5-inch square pad to sense the applied force. Force Sensing Resistors (FSRs) are very thin, robust, polymer thick film (PTF) devices
that decrease in resistance when increased pressure is applied to the surface of the sensor. FSRs are not load cells or strain gauges, though they have many similar properties, and they are more appropriate for qualitative than for precision measurements. Sixteen sensors are arranged to cover the surface of the forearm link of the robot.

4.3.2 Sensing Network

In this system, several sensor ECU boards were used to build the sensing network of the robot. Multiple analogue and digital sensors can be connected to each sensor ECU board, and the boards are connected to the Sensing Network System (SNS) running on an Ubuntu PC. This sensing network server was developed for the Perception Module, which gathers all the sensor information from the sensor ECUs, perceives the environment and sends the result to the robot controller over the network. The system was developed to be independent of the robot model and robot version. In the experimental validation, the FSR tactile sensors were connected to the sensor ECU boards and the sensing network server was connected to the HONDA ASIMO robot through a TCP/IP network. The Sensing Network System (SNS) is shown in Fig. 4.17. The system is composed of (i) sensor ECU units, (ii) analogue and digital sensors and (iii) the sensing network server. Further sensor ECUs can be added through USB connections to measure additional analogue and digital sensor data. The sensor ECU units are connected to the sensing network server through USB, and the interface of the sensor ECU unit is installed on the server under Ubuntu. The merit of this system is that the sensing network is independent of the model, version and system of the robot.

Fig. 4.17 Sensing Network System (SNS).

The server is designed to receive (i) actual joint positions, (ii) actual Operational Space positions, (iii) the orientation from the gyro sensor, etc., which are measured by the robot. The sensor information measured by the sensor ECU units is sent to the robot control system through TCP/IP network communication.

4.3.3 Sensor ECU and Sensors

In this system, several sensor ECU units, each of which can be connected to multiple analogue and digital sensors, were used to build the sensing network of the robot. The PhidgetInterfaceKit 8/8/8 is used as the sensor ECU unit. It allows devices to be connected to any of 8 analog inputs, 8 digital inputs and 8 digital outputs, and provides a generic, convenient way to interface a PC with various devices. The analog inputs are used to measure continuous quantities such as temperature, humidity, position and pressure, and a wide variety of sensors can be plugged directly into the board using the cable included with the sensor. For the tactile sensing of the humanoid robot, the resistor-type Interlink Force Sensing Resistor with a 1.5-inch square pad is used.

Fig. 4.18 Sensor layer by FSR tactile sensors on the forearm.

4.3.4 Setting the Tactile Sensor Skin

Fig. 4.19 shows the data of the FST system for (i) impact force and (ii) continuous pushing force. The resolution was changed by adding a resistor of 5. The measured data was scaled differently around a 0.8 [N] threshold. In the test, the FSR could detect an impact force of 0.1 [N], produced by dropping a small battery. The measured data was linear until
it saturated at 15 [N] (the red plot in Fig. 4.19). From 15 [N] to 30 [N], we applied a simple correction by a constant coefficient to linearize the data (the blue plot in Fig. 4.19). We applied this process to all the sensors that make up the tactile sensor skin. Regarding the initial offset, we calibrate it every time the system starts up, or whenever the link is neither in contact nor in motion. In Fig. 4.20, the calibrated tactile sensor was measured against the force sensor and a linear response was confirmed up to 100 [N]. Fig. 4.21 shows the algorithm that processes contact information for dynamics control. Contact information detected by several tactile sensors is processed to estimate a contact point $X_{estimated}$ and a contact force $F_{estimated}$. The estimated contact point $X_{estimated}$ is used to dynamically generate the Contact Jacobian matrix $J_c$, and the estimated contact force is used by the force controller with respect to the desired force at the control point:

$$F^* = K_{pf}\, e(t) + K_{if} \int_0^t e(\tau)\, d\tau + K_{df}\, \dot{e}(t) \qquad (4.37)$$

Fig. 4.19 Data of the FST system for impact force and continuous pushing force.

Fig. 4.20 Calibration result of the tactile sensor skin.

4.3.5 Process of Contact Force Control and Estimation

Contact information is passed to the decoupled task dynamics controller of the Operational Space Formulation, and joint torque commands are generated. Fig. 4.22 is a side and perspective top view of a tactile sensor layer in which several tactile sensors are mapped in two dimensions. The Y axis goes from the elbow to the wrist and the X axis goes around the surface of the forearm, where $X_{[i]}$ ($i = 0, \ldots, 5$) is the center position of a tactile sensor in the X direction, $Y_{[j]}$ ($j = 0, \ldots, 2$) is the center position of a tactile sensor in the Y direction, and $F_{[i,j]}$ is the contact force measured by the tactile sensor at $(i, j)$. When a contact force $F_c$ is applied to the link, it affects the several sensors under the contact point.
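A force-weighted centroid estimate of this kind (formalized in Equation (4.38) below) can be sketched as follows; the grid geometry and readings are illustrative:

```python
import numpy as np

# Sketch of a contact estimate in the style of Equation (4.38): the contact
# point is the force-weighted centroid of the tactile cell centers, and the
# contact force is the sum of the cell readings.
X = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])  # cell centers around the arm [m]
Y = np.array([0.00, 0.05, 0.10])                    # cell centers elbow->wrist [m]
F = np.zeros((6, 3))                                # readings F[i, j] per cell [N]
F[2, 1], F[3, 1] = 4.0, 12.0                        # a contact spanning two cells

F_total = F.sum()                                   # estimated contact force
x_est = (F.sum(axis=1) @ X) / F_total               # weighted mean in X
y_est = (F.sum(axis=0) @ Y) / F_total               # weighted mean in Y
# The estimate (x_est, y_est) would then feed the Dynamic Contact Jacobian.
```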
Here, the effect of the contact force is limited to four tactile sensors, because the tactile sensor skin surrounds a cylinder-shaped link. The contact force and contact point are estimated by Equation (4.38):

$$F = \sum_{i}\sum_{j} F_{[i,j]}, \qquad X = \frac{\sum_{i}\sum_{j} F_{[i,j]}\, X_{[i]}}{F}, \qquad Y = \frac{\sum_{i}\sum_{j} F_{[i,j]}\, Y_{[j]}}{F} \qquad (4.38)$$

Fig. 4.21 Algorithm to process contact and control.

Fig. 4.22 Side and perspective top view of a layer of tactile sensors.

4.4 Experimental Validation

4.4.1 Pushing Force Command

Using the Dynamic Contact Jacobian, a contact point can be placed at any point of a link. Fig. 4.23 shows the HONDA ASIMO robot pushing a table with the forearm and with the elbow, with a pushing
force of 15 [N] toward the table. On top of this framework, motion commands were applied to execute contact and motion control while in contact with the external environment. In this framework, the Dynamic Contact Jacobian was defined so that the contact force of the robot can be controlled at any point on any link, in any direction. In the test, the location of the table was unknown to the robot. At first, only a constant contact force was applied through the forearm of the robot toward the table. In Fig. 4.23 (left), the contact point was initially assigned to the middle of the bottom line of the forearm, and the contact force was directed perpendicular to the bottom line of the forearm, expressed in the local coordinate system as Fc/local = (0.0, 0.0, 15.0) [N]. A sinusoidal position command was then applied to move the arm in the vertical plane while keeping the original orientation of the forearm. The contact point was geometrically estimated from the motion of the robot (Fig. 4.23), and the Dynamic Contact Jacobian was continuously updated. The contact force was detected by the tactile sensor skin, and the force command was applied by Equation (4.37). Using the contact sensors on the forearm, the contact point was continuously estimated. Fig. 4.24 shows the contact force measured by two sensors in line while the arm moved in one direction; these two sensors are used to estimate the contact point and contact force. During the motion, when the contact point switches from one sensor to the other (the transition phase), a simple algorithm is applied to estimate the contact force. As Fig. 4.24 shows, the robot maintained a reasonable contact force with small error while moving in contact.

Fig. 4.23 Contact force control and motion control test.

Fig. 4.24 Measured contact force while moving.
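The contact-point estimation used above — a force-weighted average over the active sensors, with a smooth hand-off when contact transitions between adjacent sensors — can be sketched as follows (illustrative Python; the sensor positions and readings are hypothetical, not calibration data from the actual skin):

```python
def estimate_contact(x_centers, forces):
    """Force-weighted estimate of the contact point and total contact
    force from a row of tactile sensors, in the style of Equation 4.38."""
    total = sum(forces)
    if total <= 0.0:
        return None, 0.0  # no contact detected
    x_c = sum(f * x for f, x in zip(forces, x_centers)) / total
    return x_c, total

# During a transition phase two adjacent sensors share the load; the
# weighted average moves smoothly from one sensor center to the next.
x_c, f_c = estimate_contact([0.00, 0.03], [1.0, 3.0])
```

Because the estimate is a continuous function of the two sensor readings, the estimated contact point slides smoothly between sensor centers instead of jumping, which matches the small force error observed in Fig. 4.24.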
4.4.2 Development of a Tactile Sensor Suit Using QTC

Another type of tactile sensor system was also developed as a sensor suit for ASIMO. A QTC sensor (Quantum Tunnelling Composite, developed by Peratech) was used for the tactile sensor suit. QTC offers a major advantage in force-sensing applications: it can measure a dynamic pressure distribution over the sensor area. QTC is a composite of metal particles and a non-conducting elastomeric binder; when pressure is applied, the particles move closer together and electrons can tunnel through the insulator. As a result, QTC exhibits an extremely large, reversible increase in conductivity when compressed, stretched, bent, or twisted. To detect contact forces over the body, the suit was fabricated with cloth and its shape was designed to fit the robot. In this project, a sensor suit segment was fabricated by the MIT Media Lab for each link of ASIMO's upper body and arms (Fig. 4.25).

Fig. 4.25 Design of the tactile sensor suit using QTC.

Fig. 4.26 ASIMO wearing the tactile sensor suit.

The tactile sensor skin was replaced by this sensor suit, which was connected to the sensor network. The advantage of this sensor suit is that its shape fits the robot (Fig. 4.26). Its disadvantage is that contact force can only be localized to a sensor area, so a precise contact position cannot be estimated. We therefore decided to use the sensor suit to detect unpredictable contact with humans in the environment. Compliant motion in contact with a human while tracking a given trajectory is shown in Fig. 4.27. In this test, the tactile sensor was used as an ON/OFF sensor. A simple motion command was applied to one arm, tracking two goal positions at a fixed interval, much like a step command. During the tracking motion, when a contact is detected by a tactile sensor, the controller switches from tracking mode to gravity
compensation mode, which passively supports the weight of the link. Fig. 4.27 shows the result of the demonstration. The robot followed the step command with a reasonable response, without overshoot and with small position error. When a contact occurs, it is detected by a tactile sensor on the arm, shown as the pink step signal in Fig. 4.28. Since the tactile sensor is connected to the controller through a network, the contact information reached the controller with an 80 [msec] delay. As soon as the controller received the contact information, it switched into compliant mode. Through the combination of a physically compliant surface and passive motion, an operator close to the robot could avoid critical collisions with it.

4.5 Conclusion

In this chapter, an approach to compliant multi-contact and motion control of a humanoid robot was discussed. Based on contact detection using force sensors and the tactile sensor skin, decoupled task dynamics was applied to respond to multiple contacts. Using the sensor information, the contact point and contact force were estimated, and the Contact Jacobian matrix was dynamically modified accordingly. The dynamic effect of the robot was modeled by the Operational Space Formulation for upper-body control. The commands from the Operational Space Formulation were transformed into joint position and velocity commands using the Torque Transformer. This framework was applied to the upper-body control of the humanoid robot and was connected with the existing balance controller for the lower body. Through these tests, the usefulness of the framework was experimentally verified. The advantage of this approach is that unpredictable contacts can be measured by the force sensor and the tactile sensor, and the contact point and contact force can be estimated, so that the controller can deal with the contacts. The controller could dynamically
respond to a contact force once it was detected or estimated, and not only precise force feedback control but also compliant contact motion was realized. These results are especially valuable when a humanoid robot coexists with humans in a real environment. This framework offers more freedom of motion in narrow environments once advanced environment sensing, decision making, and behavior control are connected to it.

Fig. 4.27 Contact detection by the tactile sensor while tracking.

Fig. 4.28 Contact detection by the tactile sensor and switching of the control mode.

CHAPTER 5

Whole-Body Multi-Contact for Extended Support

In this chapter, we focus on whole-body multi-contact for extended support. Extending a humanoid robot's workspace is attractive for manipulating objects that are just out of reach. Since such circumstances occur often, humanoids that can rapidly expand their workspace by changing their supports, stance, and gait will excel. Existing humanoid robot controllers, however, typically require complicated planners for adjusting the workspace. Planning's computational complexity, in turn, makes it impossible to rapidly and dynamically adapt the workspace to suit a manipulation task. Humanoid robots' many degrees of freedom make planning and re-planning particularly expensive, especially in unstructured environments. Consider, for instance, picking up an object that is just out of reach (Fig. 5.1). Most humans would simply extend their support base with one hand and pick up the object with the other (as the depiction of ASIMO demonstrates). A typical humanoid robot controller, however, would consider the object unreachable. In due time, if at all, the controller would decide to step back, plan a path to walk around the table, and then re-execute the reach task. Even worse is the situation in which the object is unreachable from all of the table's sides. To robustly extend a humanoid's manipulation workspace, we propose using contact to dynamically
extend the workspace's support. We will build workspace-extension motion primitives for a variety of contact surfaces and use them in coordination with our task controllers. Adding these primitives to our controller's state graph will enable switching them on rapidly whenever manipulation requires reaching the edge of the workspace. The two major sub-problems we must solve while building such controllers are how to customize gaits and motions as the robot changes its support base, and how to specify elastic humanoid-limb trajectories that can be modified smoothly in both space and time when unforeseen effects occur.

Fig. 5.1 Expanding the manipulation workspace through extended contact support.

5.1 Extending Support for Manipulation

Extending a humanoid's support base requires solving multiple technical challenges. First, the humanoid must define a path by specifying a series of contacts that extend its workspace and bring a manipulation task within reach. Next, it must create dynamically consistent trajectories to traverse the path. It must then perform the required manipulation task, and finally revert to its original support position. We will use backpropagation to define a path from normal standing support, through many different contact positions, to the final extended-workspace position. Such a path could take many forms. For example, to reach an object on a table, the path could involve "walking" with the hands to extend the support polygon out onto the table's surface. Finally, the humanoid could lift one hand and manipulate. Interestingly, where possible, the humanoid could even brace itself with another body part and use both hands to manipulate. The reverse path returns the robot to its original support position. Second, our goal is to expand the manipulation capabilities of humanoids by extending their support with multiple contact points. In this system, our whole-body
framework has been designed to allow the robot to compliantly interact with its environment at multiple contact points. The synthesis of compliant tasks is simplified by being independent of postures and constraints, which are automatically integrated in the control hierarchy. Using these abilities, a robot can extend its support by applying a series of contacts and can then manipulate an object that was previously out of reach. We plan to use elastic sequences of contacts for extending the support, so that they can be stretched around unforeseen obstacles.

5.2 Strategy to Extend Support

For a humanoid robot, contact with the environment or with humans is a major concern. When a humanoid robot works in a real environment, it must move in ways similar to a human. For example, to reach an object at the far side of a desk, a human deliberately gives up the nominal balance condition and leans on the desk. With current technology, however, it is difficult for a humanoid robot to give up its balance condition and lean on a desk, even when all the relevant conditions are detectable by sensors. To realize this motion, it is necessary to turn off the balance control and to provide a behavior that supports only the robot's own body weight, which is what allows the robot to lean on the desk. Similarly, an assistive robot that supports a human's body weight by estimating the center of gravity tends to interfere with the balance control of the user wearing it, causing instability; in that case, too, a technology that turns off balance control and supports only body weight is needed. So far, control strategies based on the center of gravity (COG) and the zero moment point (ZMP) have been formulated to realize dynamically stable balance control of humanoid robots. The compliant and passive balance control that we target here, on the other hand, is difficult to realize with the conventional balance
control approach. In order to realize compliant and passive control in standing and walking, we propose to categorize conventional balance control technology into three core categories:

(1) Gravity compensation technology
(2) Balance control technology
(3) Gait generation and trajectory generation technology

The structure of the resulting balance control strategy is shown in Fig. 5.2. Until now, the area where the three circles intersect has been the mainstream of balance control technology for humanoid robots; we believe, however, that each of the three technical areas contains important core technology to be improved on its own. In this research, we regard gravity compensation technology as the key function for realizing contact with the environment, or a state of leaning on the environment. Balance control technology, on the other hand, requires complicated information, such as the contact friction cone at each contact point, internal and external forces, the support polygon, and so on, and such detailed computation is not effective in this case. It is therefore an effective approach to reorganize the conventional balance control technology into the proposed framework.

Fig. 5.2 Strategy to realize compliant and passive balance control.

5.3 Gravity Stand

When a humanoid robot reaches for an object at the far side of a desk, a stable balance controller based on the center of gravity (COG) and zero moment point (ZMP) prevents the robot from moving its hand forward, because the hip is commanded to move backward to keep stable balance whenever the hand moves forward. It is therefore impossible for a current humanoid robot to reach such an object. In this research, we propose to decompose the whole-body balance controller into (1) gravity compensation technology, (2) balance control technology, and (3) gait generation and trajectory generation technology, and to activate or deactivate each technology according to the required task.

Fig. 5.3 Block diagram of the whole-body motion controller.

In this case, we propose to activate only (1) gravity compensation technology and deactivate the other controllers. Consequently, we can turn off the balance control and make the humanoid robot support only its own body weight, which is what is needed for the robot to lean on the desk. We name this control status Gravity Stand; it was realized by the control structure shown in Fig. 5.3. Gravity Stand is realized by the following process:

1) Compute the whole-body dynamics of the robot for any posture, not only standing postures but also floating postures in the air, using the gyro sensor.
2) Apply gravity torque to each joint so that the effect of gravity is cancelled.
3) Apply force control at the estimated contact points where a force sensor is mounted, so that the contact information can be fed back
to the force controller.
4) Apply gravity assist mode, in which the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
5) Command both feet by force control so that the magnitude of the commanded force equals the measurement from the 6-axis force sensor, and its direction is opposite to the detected force directed toward the floor.
6) By generating this downward force along the gravity direction, the feet of the robot push against the floor. Consequently, the pelvis is relatively raised from the floor in the robot's global coordinate system.

Since the feet generate force only in the gravity direction and the commanded horizontal force is zero, only the weight of the robot in the gravity direction is compensated, and compliant motion becomes possible in the horizontal plane. An image of Gravity Stand is shown in Fig. 5.4.
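The foot force command of steps 4)–6) can be sketched as follows (illustrative Python, not the actual ASIMO controller; the z-up global frame and the sensor sign convention are assumptions):

```python
def gravity_stand_foot_command(measured_wrench):
    """Return the commanded foot force for Gravity Stand: equal in
    magnitude to the measured vertical load (z-up frame assumed),
    directed downward along gravity so the foot pushes the floor,
    with zero horizontal components so the robot remains compliant
    in the horizontal plane."""
    fz = measured_wrench[2]      # vertical component of the 6-axis reading
    return (0.0, 0.0, -abs(fz))  # push down into the floor, no lateral force
```

Because the horizontal components of the command are always zero, only the robot's weight along gravity is supported, and any horizontal disturbance simply moves the robot compliantly, which is the behavior Gravity Stand is designed to produce.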
Fig. 5.4 Compliant standing mode by Gravity Stand.

5.4 Experimental Validation: Gravity Stand

Gravity Stand was applied to the ASIMO robot. In the experimental validation, we started from the conventional balance control mode and then switched to Gravity Stand mode while standing. Before switching modes, the following process runs continuously, together with the conventional balance controller:

1) Compute the whole-body dynamics of the robot for any posture.
2) Apply gravity torque to each joint so that the effect of gravity is cancelled.
3) Apply force control at the estimated contact points where a force sensor is mounted, so that the contact information can be fed back to the force controller.
4) Apply gravity assist mode in the lower-body controller, so that the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
5) Command both feet by force control so
that the magnitude of the commanded force equals the measurement from the 6-axis force sensor, and its direction is opposite to the detected force directed toward the floor.

The experimental validation of Gravity Stand mode is shown in Fig. 5.5. By generating a downward force along the gravity direction, the feet of the robot push against the floor, and the hip is relatively raised from the floor in the robot's global coordinate system. Since the feet generate force only in the gravity direction and the commanded horizontal force is zero, only the weight of the robot in the gravity direction is compensated, and compliant motion becomes possible in the horizontal plane.

Fig. 5.5 Experimental validation of Gravity Stand mode.

5.5 Whole-Body Multi-Contact for Extended Support

When a humanoid robot reaches for an object at the far side of a desk, it must lean on the desk. To realize this motion, it is necessary to expand the manipulation capabilities of the humanoid by extending its support, allowing the robot to compliantly interact with its environment at multiple contact points. However, the current stable balance controller, based on the center of gravity (COG) and zero moment point (ZMP), prevents the robot from moving its hand forward, because the hip is commanded to move backward whenever the hand moves forward to keep stable balance. It is therefore impossible for a current humanoid robot to reach such an object. In this research, we propose to decompose the whole-body balance controller into (1) gravity compensation technology, (2) balance control technology, and (3) gait generation and trajectory generation technology, and to activate or deactivate each technology according to the required task. In this case, we propose to activate only (1) gravity compensation technology and deactivate the other
controller. Consequently, we can turn off the balance control and make the humanoid robot support only its own body weight, which is what is needed for the robot to lean on the desk using Gravity Stand. At the same time, the robot must bend its upper body forward to reach the object at the far side of the desk. In this case, the posture of the upper body must be supported by pushing on the desk with at least one hand. To support the posture of the upper body in this way, the idea of Gravity Stand was extended to support the upper body through the arm. Using the force sensor mounted on the wrist, the contact force is detected, and a force command is generated at the end-effector of the arm (the hand), directed along gravity with the magnitude of the measured vertical force. As explained in Section 4.4, we had already named this control Gravity Assist.

Fig. 5.6 Manipulation capabilities of humanoids extended by additional support.

Therefore, to allow the robot to compliantly interact with its environment at multiple contact points by extending its support, Gravity Stand and Gravity Assist were applied together. If we were to use a single whole-body contact Jacobian in this case, we would need to consider the contact force between the desk and the upper leg of the robot precisely. The contact Jacobian is therefore separated at the hip, which allows the robot to stand compliantly while leaning on the desk and to support the posture of the upper body, and makes this motion easier to realize.

5.6 Experimental Validation: Multi-Contact for Extended Support

Our goal here is to make the humanoid robot reach an object at the far side of a desk. To realize this motion, it is necessary to expand the manipulation capabilities of the humanoid by extending its support, allowing the robot to compliantly interact with its environment at multiple contact points. To extend support, Gravity Stand
and Gravity Assist were applied to the ASIMO robot. In the experimental validation, we started from the conventional balance control mode and made the robot move its hands to the surface of the desk before the arm controller was switched to Gravity Assist mode. The controller was designed to compute the following processes together with the conventional balance control mode:

1) Compute the whole-body dynamics of the robot for any posture.
2) Apply gravity torque to each joint so that the effect of gravity is cancelled.
3) Apply force control at the estimated contact points where a force sensor is mounted, so that the contact information can be fed back to the force controller.
4) Apply gravity assist mode, in which the vertical contact force from the force sensor, expressed in the global coordinate system, is fed back to the force controller.
5) Command both legs by force control so that the magnitude of the commanded force equals the measured contact force, and its direction is toward the floor.
6) Apply gravity assist mode in the arm controller, so that the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
7) Command both hands by force control so that the magnitude of the commanded force equals the measurement from the 6-axis force sensor, and its direction is opposite to the detected force directed toward the floor.

After the contact status under the conventional balance controller became stable, the mode was switched to Gravity Stand and Gravity Assist. The experimental validation of Gravity Stand and Gravity Assist mode is shown in Fig. 5.7. The robot leaned on the desk in Gravity Stand mode, and the posture of the upper body was supported by Gravity Assist mode, activated on the right arm. For the left arm, position control along an Operational Space Control trajectory was
applied. In this experimental validation, a designed trajectory was applied to the left arm to move it from its initial position to a new position, while the posture of the upper body was supported by the right arm. Another situation is shown in Fig. 5.8. In this case, a compliant motion control mode was applied to the left arm, in which only the gravity torque was computed so that the arm moves compliantly. An operator then manually and compliantly moved the left arm from its initial position to realize teaching-playback control.

Fig. 5.7 Manipulation with extended support by Gravity Stand and Gravity Assist.

Fig. 5.8 Manipulation with extended support using teaching-playback control.

5.7 Whole-Body Gravity Assist: Kneeling Motion

Conventional humanoid robots performed balance control by ZMP control for a standing position, supporting only the state in which the feet are in contact with the floor. However, motions such as kneeling and crawling must also be accounted for when a humanoid robot works in a real environment. In this study, we expanded the framework of Gravity Assist control to whole-body control, enabling multi-contact, whole-body control of the humanoid robot. To realize the motion of reaching an object at the far side of a desk, we expanded the manipulation capabilities of the humanoid by extending its support, allowing the robot to compliantly interact with its environment at multiple contact points. Previously, the contact points were assigned to end-effectors such as the hands and feet. Here, we extend the contact points from the end-effectors to any point on a link using the Dynamic Contact Jacobian. Fig. 5.9 shows the control status of whole-body Gravity Assist for the kneeling motion. A control point in the lower body is assigned at the center of the lower leg, and the contact force
is estimated and fed back to the Dynamic Contact Jacobian. In this case, the joint torques between the contact point and the base of the local coordinate system (the hip) are computed to generate the Gravity Assist command. Since the contact force does not affect the joints between the contact point and the end-effector, no active torque command is computed for those joints; instead, a gravity assist torque command is applied to them to realize compliant contact motion.

Fig. 5.9 Control status of whole-body Gravity Assist for the kneeling motion.

Fig. 5.10 shows the result of the experimental validation of whole-body Gravity Assist for the kneeling motion. The controller was designed to compute the following processes:

1) Compute the whole-body dynamics of the robot for any posture.
2) Apply gravity torque to each joint so that the effect of gravity is cancelled.
3) Apply force control at the estimated contact points where a force sensor is mounted, so that the contact information can be fed back to the force controller.
4) Apply Gravity Assist mode, in which the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
5) Command both feet by force control so that the magnitude of the commanded force equals the measurement from the 6-axis force sensor, and its direction is opposite to the detected force directed toward the floor.
6) Apply gravity assist mode in the arm controller, so that the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
7) Command both hands by force control so that the magnitude of the commanded force equals the measurement from the 6-axis force sensor, and its direction is toward the floor.

Fig. 5.10 Experimental validation of whole-body Gravity Assist for the kneeling motion.

The robot leaned on the desk in Gravity Assist mode, and the posture of the upper body was supported by Gravity Assist mode activated on the arm. Consequently, the robot compliantly leaned on the desk while kneeling on the floor.

5.8 Whole-Body Gravity Assist: Crawling Motion

Fig. 5.11 shows the control status of whole-body Gravity Assist for the crawling motion. A control point in the lower body is assigned at the center of the lower leg, and the contact force is estimated and fed back to the Dynamic Contact Jacobian. The joint torques between the contact point and the base of the local coordinate system (the hip) are computed to generate the Gravity Assist command. Since the contact force does not affect the joints between the contact point and the end-effector, no active torque command is computed for those joints; instead, to realize compliant contact motion, a gravity assist torque command is applied.
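The torque routing described above — active Gravity Assist torques only for the joints between the hip and the contact point, gravity compensation everywhere else — can be sketched as follows (illustrative Python with a hypothetical 4-joint chain; the Jacobian, gravity vector, and joint mask are placeholders, not ASIMO's actual model):

```python
import numpy as np

def gravity_assist_torques(J_c, f_cmd, g, active):
    """tau = g + J_c^T f_cmd on the joints between the hip and the
    contact point; the remaining joints receive only gravity
    compensation so they stay compliant.
    J_c:    3 x N contact Jacobian at the estimated contact point
    f_cmd:  commanded contact force, shape (3,)
    g:      gravity-compensation torques, shape (N,)
    active: boolean mask, True for joints between hip and contact."""
    tau = g.copy()                      # gravity compensation everywhere
    tau_contact = J_c.T @ f_cmd         # force-control contribution
    tau[active] += tau_contact[active]  # applied only to active joints
    return tau
```

The boolean mask is where the "Jacobian separated at the hip" idea appears in miniature: flipping entries of `active` changes which joints actively push on the contact and which merely float under gravity compensation.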
A designed trajectory can be applied to one arm to move it from its initial position to a new position, while the posture of the upper body is supported by the other arm.

Fig. 5.11 Control status of whole-body Gravity Assist for the crawling motion.

Fig. 5.12 shows the result of the experimental validation of whole-body Gravity Assist for the crawling motion. The controller was designed to compute the following processes:

1) Compute the whole-body dynamics of the robot for any posture.
2) Apply gravity torque to each joint so that the effect of gravity is cancelled.
3) Apply force control at the estimated contact points where a force sensor is mounted, so that the contact information can be fed back to the force controller.
4) Apply Gravity Assist mode, in which the vertical contact force from the 6-axis force sensor, expressed in the global coordinate system, is fed back to the force controller.
5) Command both lower legs by force control so that the magnitude of the commanded force equals the sensor measurement, and its direction is opposite to the detected force directed toward the floor.
6) Apply gravity assist mode in the arm controller, so that the vertical contact force from the force sensor, expressed in the global coordinate system, is fed back to the force controller.
7) Command both hands by force control so that the magnitude of the commanded force equals the sensor measurement, and its direction is opposite to the detected force directed toward the floor.

Multi-contact, whole-body posture control was realized by Gravity Assist mode. Consequently, the robot could compliantly crawl on the floor.

Fig. 5.12 Experimental validation of whole-body Gravity Assist for the crawling motion.

5.9 Conclusion

We expanded the manipulation capabilities of humanoids by extending their support with multiple contact points. In this system,
our whole-body framework was designed to allow the robot to compliantly interact with its environment at multiple contact points. The synthesis of compliant tasks was simplified by being independent of postures and constraints, which are automatically integrated in the control hierarchy. We proposed to decompose the whole-body balance controller into (1) gravity compensation technology, (2) balance control technology, and (3) gait generation and trajectory generation technology, and to activate or deactivate each technology according to the required task. We proposed to activate only (1) gravity compensation technology and to deactivate the other controllers. Consequently, we could turn off the balance control and make the humanoid robot support only its own body weight. We named this control status Gravity Stand. To bend the upper body forward and reach an object at the far side of a desk, the posture of the upper body was supported by pushing on the desk with at least one hand. The idea of Gravity Stand was extended to support the posture of the upper body through the arm: using the force sensor mounted on the wrist, the contact force was detected, and a force command was generated at the end-effector of the arm (the hand), directed along gravity with the magnitude of the measured vertical force. We had already named this control Gravity Assist. The ideas of Gravity Stand and whole-body multi-contact were further extended to realize kneeling and crawling motions, and these motions were experimentally validated.

CHAPTER 6

Dynamic Collision-Free Motion Planning and Control

In order to let humanoid robots accomplish useful tasks in real-world environments, we focused on formal approaches for letting robots physically interact with, and effectively manipulate, the most common objects found in human environments. In this chapter, the following three topics are addressed:

A) Master-slave control for contact-free tracking tasks in uncertain
environments
B) Haptic interfaces and simulation
C) An advanced implementation of the elastic strips framework

As a preliminary step toward allowing humanoid robots to react to unpredictable contacts with the environment, we focused on contact-free motion tracking with a master-slave control system. Through this research, we achieved unique and advanced results for trajectory tracking motion while in contact with the environment. On the master control system, we compute the whole-body dynamics of the HONDA ASIMO robot in our simulator SAI-II. Two haptic devices are connected to SAI-II to command the hand positions online and intuitively. The hand positions of the robot in the simulator follow the operator's commands from the haptic devices, and the joint positions of the robot model are sent over the network to the physical robot as desired joint position commands. The on-line trajectory generation (OTG) framework filters the desired joint position commands into executable joint position commands and compensates for unpredictable network delays. Few comparable robots have demonstrated this kind of motion in a real environment without colliding with it, so we regard this result as significant progress in robotics. In order to let robots react immediately to changes in their environment, instantaneous motion planning methods must be developed and deployed within the underlying whole-body control framework. We therefore implemented a reactive motion planning system on the HONDA ASIMO robot platform to perform interactive manipulation tasks under real-time conditions, through a highly advanced implementation of the Elastic Strips Framework. It features significantly improved real-time performance and reliability. The explicit geometry approximation of the previous version was replaced by an implicit method that constructs an elastic tunnel for collision checking. Additionally, in order to maintain
system stability even in exceptional situations, such as undetected obstacles, the force transformer executes compliant motions, and the current elastic strip adapts the path tracking motion by monitoring tracking errors of the actual motion.

6.1 Strategy for Dynamic Collision-Free Motion Planning

After developing and equipping a robot with tactile skin, we have been able to demonstrate how tactile sensor information supports force/torque control schemes in multi-contact situations. Position uncertainties are no longer relevant because the robot is able to instantaneously move in directions that reduce contact forces, which in turn directs it towards its goal. We created a library of compliant manipulation primitives for basic tasks such as grasping, moving, aligning, and manipulating objects. The synthesis of compliant tasks using manipulation primitives is greatly simplified by being independent of postures and constraints, which are automatically integrated in the control hierarchy.
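The automatic integration of postures and constraints in the control hierarchy can be illustrated with a small numerical sketch. The following Python snippet is illustrative only: the matrix sizes, task list, and function names are our own assumptions, not the thesis implementation. It composes joint torques for prioritized tasks using the dynamically consistent null-space projection of the operational space formulation:

```python
import numpy as np

def dyn_consistent_pinv(J, A_inv):
    """Dynamically consistent generalized inverse: J_bar = A^-1 J^T Lambda."""
    Lambda = np.linalg.inv(J @ A_inv @ J.T)  # task-space inertia
    return A_inv @ J.T @ Lambda

def prioritized_torques(tasks, A, tau_posture):
    """Compose joint torques for a list of (J, F) tasks in priority order.

    Each lower-priority torque is projected through the null space of the
    tasks above it, so it cannot disturb them (the recursive pattern
    Gamma = J^T F + N^T Gamma_prev).
    """
    A_inv = np.linalg.inv(A)
    n = A.shape[0]
    tau = tau_posture                 # lowest priority: posture behavior
    for J, F in reversed(tasks):      # highest-priority task applied last
        J_bar = dyn_consistent_pinv(J, A_inv)
        N_T = np.eye(n) - J.T @ J_bar.T   # null-space projector N^T = I - J^T J_bar^T
        tau = J.T @ F + N_T @ tau
    return tau

# Toy 3-DOF example: one task controlling the first coordinate.
A = np.eye(3)                         # unit inertia matrix (assumption)
J = np.array([[1.0, 0.0, 0.0]])       # task Jacobian
F = np.array([2.0])                   # task-space force
tau_posture = np.array([0.0, 0.0, 1.0])
tau = prioritized_torques([(J, F)], A, tau_posture)
```

Because the projector is built from A^-1 and the task-space inertia, the posture torque produces no acceleration at the task point; this dynamic consistency is what makes compliant task synthesis independent of posture behavior.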
Learning from humans, manipulation primitives have been used to implement common manipulation tasks such as autonomously putting lids on boxes. In order to react to unforeseen motions of humans while executing a given task, the elastic strips concept has been extended with elastic tunnels. The possible paths from the current configuration to the goal configuration are represented by free-space elastic tunnels, within which continuous and feasible trajectories can be executed. Every single elementary motion associated with a robot's task must be generated with sufficient precision to achieve full coordination under diverse static and dynamic constraints. While the elastic strips framework updates the path in real time, the on-line trajectory generator (OTG) [22] computes trajectories that are executed by the robot and that allow it to react instantaneously to unforeseen sensor signals and events (e.g., contact situations). The new motion planning concept integrates the intended usage of (multiple) contacts, so that the robot can benefit from desired contacts with the environment while avoiding contacts and collisions with humans that share the workspace of the robot. The newly integrated framework combines motion and contact planning. Experiments showed how the robot can be remotely controlled using a bimanual haptic device, and how the contact information can be fed back to the operator, who can immediately feel those contact constraints and react to them with appropriate motions. Using the new contact tracking controller, the robot will only be commanded motions that meet the current kinematic and dynamic constraints as well as the current contact constraints. Through the overall discussion, we developed a dynamic collision-free motion planner composed of three layered functions (Fig. 6.1): 1) Real-time trajectory modification for collision avoidance by the Elastic Motion Planner. 2) Contact
free trajectory tracking by OTG (On-line Trajectory Generation). 3) Multi-force control in multi-contact (Chapter 4).

Fig. 6.1 Approach for contact-free motion control in the environment.

6.2 Master and Slave Control System

An overview of the system is shown in Fig. 6.2. In this system, we developed a master and slave control system between an external PC and the HONDA ASIMO robot. On the master control system, the dynamics simulator SAI-II runs the whole-body controller, which keeps the robot balanced, performs operational space control of both hands, and checks for self-collision in order to avoid it. On this system, a network communication task and two haptic devices are connected so that both hand positions can be operated intuitively in real time. 1) Master control GUI, dynamics simulator SAI-II: Section 6.3.1. 2) Whole-body controller in SAI-II: Section 6.3.2. 3) Haptic user interface: Section 6.3.3. On the slave
control system, the following functions run to control the robot. 1) Contact-free tracking control: Section 6.4.1. 2) Computed torque control: Section 6.4.2. 3) Whole-body (torso) control: Section 6.4.3.

Fig. 6.2 Master and slave control system and network sensing system.

6.3 Master Control System

6.3.1 Master Control GUI: Dynamics Simulator SAI-II

If an operator activates the control of the haptic devices, the operator gains control of both hand positions by moving the haptic devices. If an arm collides with the body or other parts of the robot, a collision detection function detects the distance between the parts and the controller generates torque to avoid self-collision. According to the motion of the two hands, the balance controller compensates the balance of the lower body by moving the hip position, and the whole-body controller also controls the hip orientation according to the motion of the hands. An operational space control is
running on the hands. It follows the desired hand position, which is commanded by the haptic device, with adequate position and velocity gains in task space. If the haptic device moves too fast, the hand position follows according to the setting of the gains. We have been working to develop (i) the dynamics simulator, (ii) the dynamics formulation for humanoid robots and (iii) the dynamics control of humanoid robots. The dynamics simulator has been an important part of the research on the dynamics formulation for the humanoid robot because we could not easily implement the controller on the robot. Therefore, we developed the dynamics simulator SAI-II as a tool to verify the dynamics formulation of humanoid robots. SAI-II has been extended throughout the project and can also simulate general mechanical systems. At the same time, we extended many of its functions. It has several interfaces, such as a mouse and a haptic device, to interact with objects in the simulation world and to change or move the mechanical
system in the virtual world. The basic features of SAI-II are as follows: 1. Modeling of any complicated mechanical system and complicated kinematic situation in the virtual world. 2. Interactive control of the mechanical system in SAI-II by an operator. 3. Collision detection. 4. Interaction with the controller. In the master and slave control system, we use this interactivity of SAI-II as the GUI and UI of the master system.

6.3.2 Whole-Body Controller in SAI-II

SAI-II runs the robot model of ASIMO in the virtual simulation world. The robot model in SAI-II is actuated by the whole-body controller running with SAI-II. The dynamics controller receives the actual joint positions and commands the joints of the robot model in SAI-II in the same way as it would control the actual robot system in the real environment. The whole-body controller performs dynamic control of several tasks, such as (i) an operational space control task for both arms, (ii) a balance control task, (iii) a body
posture control task and (iv) a collision avoidance task. The configuration of the tasks on the whole-body controller is designed to be very simple, so that complicated whole-body tasks can be commanded simply to generate the desired posture of the robot. However, the formulation of the whole-body controller is an advanced and sophisticated one defined by the operational space formulation. It has definitions of (a) the posture, (b) the tasks and (c) the constraints, which are theoretically decoupled so as not to affect each other. The basic definition of the Operational Space Formulation is explained in Chapter 4. In this system, the process of decoupling task dynamics and posture dynamics is extended to a more general framework, in which two different dynamically-defined spaces can be decoupled so that their dynamic effects theoretically cancel. Equation (4.1) can be defined for two different tasks, as shown in
Equation (6.1):

Γ = J_Task[i]^T F_Task[i] + N_Task[i]^T Γ_Task[i-1]    (6.1)

In this system, the tasks are composed of (a) a right-hand control task, (b) a left-hand control task, and (c) a hip control task, all of which can be combined as the multi-task control of Equation (6.2):

Γ_MTasks = J_Hip^T F_Hip + N_Hip^T ( J_RHand^T F_RHand + N_RHand^T ( J_LHand^T F_LHand ) )    (6.2)

Γ = J_MTasks^T F_MTasks + N_MTasks^T ( J_Postures^T F_Postures )    (6.3)

Using the definition of Equation (6.2), we decouple the constraint task, which can be defined to limit joint positions, joint velocities, joint torques, etc. Here we define the self-collision task in this space:

Constraints = CollisionAvoidance    (6.4)

Finally, we have the formulation of Equation (6.5):

Γ = J_Constraints^T F_Constraints + N_Constraints^T ( J_MTasks^T F_MTasks + N_MTasks^T ( J_Postures^T F_Postures ) )    (6.5)

By using the framework of the Operational Space Formulation, we can manipulate the humanoid robot, which is usually
defined by a complicated combination of coupled tasks, through decoupled task dynamics, and thus control the complicated humanoid robot system simply. This system allows the operator to control the right hand position, left hand position and hip position by grasping each task point graphically through the mouse or haptic interface, and the operator can easily control each task position of the robot in SAI-II. One of the important functions of SAI-II is collision detection. If modeled, SAI-II computes the distances between the links while the robot is moving. If the environment is modeled, SAI-II computes the distances of the links including the environment. In this system, we do not model the environment but compute the self-collision of the robot. The whole-body controller receives the distance information (i) between the arm and the torso, (ii) between the arm and the leg and
(iii) between the arms, and computes the collision avoidance force with a compliance gain. The self-collision avoidance force is applied through Equations (6.4) and (6.5).

6.3.3 Haptic Interface

We focused on creating contact-free tracking control. To realize a general framework for any tracking command and to examine the characteristics of intuitive and interactive tracking, especially for motion in contact, we developed the master and slave control system. Two haptic devices are connected to SAI-II to command (i) the right hand position, (ii) the left hand position and (iii) the hip position. The specifications of the haptic device are shown in Fig. 6.3. The operator activates the command from the haptic device to SAI-II, and the hands of the robot follow the position of the red ball (Fig. 6.4), which is commanded by the haptic devices. The contact forces are not fed back to the haptic devices; the devices are used to generate the desired posture of the robot in SAI-II intuitively. The master
control system and the slave control system are connected through the network by TCP/IP communication. From the master system, the actual joint positions of the simulator are sent to the robot as desired joint position commands. There exists a network communication delay of 2.5 ms to 160 ms when a command travels from the master system to the robot. Since the master control system runs on a Windows PC, which does not have real-time capability, the networking is not stable. Therefore, we use the command from the master system as a desired command and filter it into a real-time command to the robot at the slave system. Based on measurements of the network delay, we estimated and modeled its characteristics in order to apply feedforward compensation; however, even with the network delay, this system works well. This is because, even if there were no network delay, we would usually design a spring effect and a damping effect between the master system and the slave system to reduce
sensitivity when we command the robot by the haptic device. In this system, the network delay behaves as a damping effect. Consequently, we leave the network delay as it is and use it as a positive damping effect.

Fig. 6.3 Specifications of the Omega.7.

Fig. 6.4 Haptic user interface by SAI-II.

6.4 Slave Control System

The slave system is the HONDA ASIMO robot, which is controlled by the real-time OS VxWorks and runs a punctual 400 Hz servo loop. On the slave system, another network communication task runs, receiving desired joint position commands from the master controller. The following functions run to stabilize the network command from the master controller and to cooperate with the HONDA ASIMO's original balance controller: A) Contact-free tracking control (Section 6.4.1), B) Computed torque control (Section 6.4.2), C)
Whole-body control (Section 6.4.3).

6.4.1 Contact-Free Tracking Control

The robot needs to respond to unpredictable contacts while tracking the trajectory. At the same time, since the robot is commanded by the master control system, there is always an unpredictable network delay, which, like contacts, becomes one of the elements that disturb the tracking ability of the robot. To solve this problem, we implemented On-line Trajectory Generation (OTG), which provides the robot with robust tracking ability for unstable inputs and unpredictable interruptions by contact. The key feature of this concept is that motion trajectories are generated on-line, that is, within every control cycle, typically every millisecond. This enables systems to react instantaneously to unforeseen and unpredictable (sensor) events at any time instant and in any state of motion. As a consequence, (multi-)sensor integration in inner control loops, in particular the development of control systems
enabling sensor-guided and sensor-guarded motions, becomes greatly simplified. These algorithms are of a very basic nature and feature a very simple interface. Nine different algorithm types were introduced, and the interface of one of them (Type IV) is shown as an example in Fig. 6.5. The input values are the current motion state at time instant T_i, a desired target state of motion, motion constraint values, and a selection vector that determines all DOFs for which the algorithm has to calculate output values. The algorithm computes a jerk-limited trajectory that is time-synchronized for all selected DOFs and that transfers the system from the current state of motion into the desired target state of motion in the shortest possible time (i.e., time-optimally under consideration of the given motion constraint values). A whole trajectory is calculated every control cycle, and only the motion state of the next control cycle at instant T_i+1 is forwarded to the output and thus to lower-level
controllers.

Fig. 6.5 Interface of the Type IV On-Line Trajectory Generation algorithm.

Fig. 6.6 Tracking pattern implemented.

Equations (6.6)-(6.19) specify the jerk-limited trajectory profile of the algorithm: the intermediate acceleration peaks a_i^peak1 and a_i^peak2 bounded by the maximum jerk J_i^max, the corresponding intermediate velocities and positions at each profile segment, and the synchronization time t_i^sync at which all selected DOFs reach the target state of motion.

(A) Algorithm of OTG

After tracking task tests in various situations, we found that three trajectory patterns were always selected in this system. Therefore, we modeled these three tracking patterns. Based on the current state of motion and the kinematic motion constraints, a new state of motion is calculated that lies exactly on the time-optimal trajectory to reach the desired target state of motion. All
input values can change arbitrarily based on sensor signals, even discontinuously, and a steady, jerk-limited motion trajectory is always guaranteed at the output. Through the OTG, the desired joint position command is converted into an on-line joint position command and passed to the controller that interfaces with the Operational Space Formulation on the HONDA ASIMO robot. In this system, the OTG algorithm provides the robot with robust tracking ability for unstable inputs and unpredictable interruptions by contact. When there is no external object interrupting the tracking control of the slave robot, the real-time command is computed by the OTG, which filters the unstable network command and smooths it at the robot. When unpredictable contact happens, the OTG compensates for the error between the desired command and the actual state and responds to the error smoothly, which makes it possible for the robot to collide with the environment while tracking. Through the OTG, the desired joint position
command is computed as an on-line joint position command and is sent to the dynamics controller.

Fig. 6.7 The On-Line Trajectory Generation algorithm on the HONDA ASIMO robot.

(B) Implementation of OTG

In Fig. 6.7, a hammering motion was applied to the end-effector of the arm using the haptic device, and the elbow joint followed according to the command to the end-effector. In the figure, the green line shows the desired command produced by the haptic device, the red line shows the actual state and the blue line shows the command filtered by the OTG. On the right side of Fig. 6.7, the red area shows the contact state while moving and the white area shows free space. In this motion task, two trajectory patterns were selected for motion in free space and one trajectory pattern was selected for the contact state. In the free area, the elbow joint tracked the desired command with less than 0.5 rad position error, and the filtered joint position command and the actual joint
position were almost the same. In the contact state, if the OTG does not work, the joint position command always tries to track the desired joint position command and generates instability between the desired joint position command and the actual joint position (shown by the orange dotted line). On the other hand, if the OTG works, the command to the robot is filtered with reference to the desired joint position command and the actual joint position. In case of contact, the joint position command to the robot stops tracking the desired joint position command and does not generate instability between the desired joint position command and the actual joint position.

6.4.2 Computed Torque Control

In this system, the whole-body controller runs on the master control system with several tasks such
as (i) an operational space control task for both arms, (ii) a balance control task, (iii) a body posture control task and (iv) a collision avoidance task. The formulation of the whole-body controller is an advanced and sophisticated one defined by the operational space formulation. However, it requires computation power and was difficult to run on the old CPU board of the ASIMO robot, so we decided to run the whole-body controller on the master control system, which has more computation power with an Intel Core i7 CPU and 4.0 GB of memory. On the robot controller, on the other hand, we run a computed torque controller according to the command from the master controller, filtered by the OTG:

Γ_jnt = K_pt (q_cmd - q_act) + K_vt (q̇_cmd - q̇_act) + g(q)    (6.20)

Here, K_pt and K_vt are the position gain and the velocity gain of the computed-torque control. The joint torque command is sent to the torque transformer, and we can have
compliant joint position tracking in this mode. The tracking ability of the computed torque control is very high. Tracking under contact is achieved by the OTG.

6.4.3 Whole-Body Control

The framework of the operational space formulation is defined and implemented on the upper body of the ASIMO robot, while the lower body is controlled by the original balance controller of the HONDA ASIMO robot. To synchronize the compliant upper-body controller and the rigid lower-body balance controller of the HONDA ASIMO robot, the original arm-leg communication framework is reused and adapted to this framework. The communication between the upper-body compliant controller and the lower-body balance controller is shown in Fig. 6.8.

Fig. 6.8 System of the motion planner and whole-body controller.

The lower body compensates the balance according to information from the upper-body controller, such as the position of
the arm center of gravity and the contact forces at the force sensors mounted on the hands. The information of the balance controller is sent to the upper-body controller as real-time position and orientation commands together with the actual values. Using this whole-body controller, the upper body can make compliant multi-contact with the environment. While tracking the trajectory, if the arm collides with the environment and an unpredictable contact force occurs, it becomes a factor affecting the balance control: the contact forces at the arms will easily disturb the balance controller of the lower body. In this system, to reduce the effect of the contact force from the arm on the lower-body balance controller, we actively control the orientation of the torso to absorb the effect of the contact force, as explained in Chapter 3.

6.5 Experimental Validation: Master and Slave Control System

The experimental tests for contact and tracking by the master and slave control system are shown
in Figs. 6.9, 6.10 and 6.11. Two haptic devices were connected to SAI-II to command (i) the right hand position, (ii) the left hand position and (iii) the hip position. The master control system and the slave control system were connected through the network by TCP/IP communication. From the master system, the actual joint positions of the simulator were sent to the robot as desired joint position commands. The slave system is the HONDA ASIMO robot, and another network communication task runs, receiving desired joint position commands from the master controller. On the HONDA ASIMO robot system, (a) contact-free tracking control by OTG, (b) computed torque control and (c) whole-body control were activated. In the figures, the green line shows the command from the master control system and the red line shows the actual tracking vector of the robot. In Fig. 6.9, contact happens with the table and working table, which are contacts with a stable environment. If the operator commands the hand quickly and if
unpredictable contact happens, the robot generally pushes the environment too much and the balance control becomes unstable. However, in this case, the OTG controls the tracking and smoothly responds to any interruption. In Figs. 6.10 and 6.11, the environment was unstable, but the robot could respond to the contact in the direction of contact while moving in another direction.

Fig. 6.9 Master-slave control system for contact tracking I.

Fig. 6.10 Master-slave control system for contact tracking II.

Fig. 6.11 Master-slave control system for contact tracking III.

6.6 Tele-Operation and Force Control

6.6.1 Force Control by Active Observer

A motion module is designed based on the Operational Space Formulation [6] to calculate the dynamics of the robot. Generally, the joint space dynamics of a robot are described by

A(q) q̈ + b(q, q̇) + g(q) + J(q)^T f_c = Γ    (6.21)

where q is the n×1 generalized coordinate vector in joint space, A(q) is the n×n mass/inertia matrix, b(q, q̇) is the vector of Coriolis and centrifugal torques and g(q) is the gravity torque. The term f_c is the vector of contact forces/moments at the contact point. The instantaneous linear and angular velocity in task space, ν, is related to the joint velocities by the Jacobian J(q):

ν = J(q) q̇    (6.22)

The task dynamic behavior is obtained by projecting the joint space dynamics into the space associated with the task:

Λ(q) ν̇ + μ(q, q̇) + p(q) + f_c = F    (6.23)

Here, Λ(q), μ(q, q̇) and p(q) are the inertia matrix, the vector of Coriolis/centrifugal forces and the vector of gravity forces mapped into the operational space, defined as follows:

Λ(q) = (J A⁻¹ J^T)⁻¹,   μ(q, q̇) = Λ (J A⁻¹ b − J̇ q̇),   p(q) = Λ J A⁻¹ g    (6.24)

The control force, F, in
Equation (6.23) provides a decoupled control structure:

F = Λ̂(q) f* + μ̂(q, q̇) + p̂(q) + f̂_c    (6.25)

where the hat denotes estimates of the model parameters and f* is the command to the unit-mass system. When the estimates are perfect, the following decoupled equation of motion for the end-effector is obtained:

ν̇ = f*    (6.26)

Generalized selection matrices Ω_m and Ω_f for the motion and force directions are used to compose f*:

f* = Ω_m f*_m + Ω_f f*_f    (6.27)

The subscripts m and f denote motion and force control. The motion control f*_m is accomplished with PD control, and the AOB method is implemented for the force control f*_f. Applying the proper control inputs f*_m and f*_f, the control force F is calculated by Equation (6.25). The Operational Space Formulation provides a decomposition of joint torques into two control vectors: (i) the joint torque corresponding to forces acting at the
task and (ii) the joint torque that only affects the posture behavior in the null space:

Γ = Γ_task + Γ_posture = J^T F + N^T(q) Γ_posture    (6.28)

Here, N^T(q) is the dynamically consistent null space projection matrix:

N^T(q) = I − J^T J̄^T    (6.29)

J̄^T = Λ J A⁻¹    (6.30)

The term N^T(q) guarantees that the null space control torque will not generate any force at the task. A block diagram of the controller is shown in Fig. 6.12. The Active Observer (AOB) [19] is applied to achieve adaptive force control in each translational direction. The AOB is designed to handle the uncertainties of contact with the environment using the force sensor at the end-effector. This stochastic estimation strategy provides robust and adaptive control in the presence of uncertainties caused by modeling and measurement errors. The AOB uses a modified Kalman estimator with an additional state, the active state, associated with the disturbance at the system input. The role of the AOB is similar
to the integral action of a classical PID controller, which generates the input by accumulating the error between the desired and measured values. However, the active state is generated by the error between the estimated and measured values. The AOB-based force controller is shown in Fig. 6.13. The AOB is designed to estimate the states of the system, and the estimated input error is cancelled at the input. Full state feedback is also applied to the estimated state. The AOB is integrated into the operational space controller in Fig. 6.12.

Fig. 6.12 Block diagram of operational space control with contact force control.

Designing the force control means composing the control input f*_f in Equation (6.27). However, modeling a precise contact force model is difficult because of the uncertainties of the actual environment. In this framework, the contact is modeled by a spring model and the environment is assumed to have a constant
stiffness, k_s:

f_c = k_s x    (6.31)

f̈_c = k_s f*    (6.32)

Here, Equation (6.32) is derived from Equations (6.26) and (6.31). In case of contact with different environments, the contact stiffness changes abruptly and largely in magnitude. The performance of force control without adaptation degrades in the presence of a large mismatch of the environment stiffness, so fast on-line stiffness estimation is required to handle the mismatch. On-line stiffness estimation is therefore also implemented in the operational space controller.

Fig. 6.13 AOB design for force control.

6.6.2 Tele-Operation System for Active Observer

Haptic tele-operation allows an operator to control a slave robot using a master device through which the operator can feel the remote environment of the slave robot. In this tele-operation system, an Omega haptic device is connected to the controller of ASIMO's hand by way of a network. The operator
moves the master device and feels the contact force at the slave robot. The tele-operation system is shown in Fig. 6.14. In this system, two independent subsystems, (i) the haptic system and (ii) the local force controller, are connected by way of a virtual spring, k_vir. Through the haptic device, the operator commands the robot by position. The desired force command is calculated by

f_des = k_vir (x_des − x)    (6.33)

The desired force command is sent to the local force controller on the slave robot as a desired command. The contact and its control are achieved locally by following the desired force command from the operator. When the slave robot is in free space, the virtual spring k_vir produces the desired force in the desired direction until the difference in position is zero. The robot is thus always considered to be in contact with the environment through the virtual spring, and position tracking in free space is implicitly accomplished by the force f_des. An operator can control the robot
through a real-time display. On the master system, the operator moves the haptic device, feeling the contact force when the slave robot is in contact with the environment. The command from the operator, x_des, has the actual position x subtracted from it and is multiplied by the virtual spring k_vir to generate the desired force command f_des. The independent local force controller estimates the contact force by the AOB and controls the slave robot by the Operational Space Formulation.

Fig. 6.14 Tele-operation system for the Active Observer.

6.6.3 Experimental Validation: Force Control by Active Observer

The force controller, via the torque-position transformer, was applied to the HONDA ASIMO robot to test robust contact with the environment (Fig. 6.15). In this system, two independent subsystems, (i) the haptic system and (ii) the local force controller, are connected by way of the virtual spring. Through the haptic device, the operator commands the robot by
position. Fig. 6.17 shows three tests: (i) a hard sponge on the left, (ii) a soft sponge in the middle and (iii) an iron object on the right. In the tests, HONDA ASIMO's right hand was controlled by an operator (the blue line in Fig. 6.17), who continuously felt the contact force and the stiffness of the external objects (the red line) by way of the haptic device.

Fig. 6.15 Tele-operation with HONDA ASIMO and the Omega haptic device.

The measured data are shown in Fig. 6.16. When the robot was moving in free space, it followed the command from the haptic device (Fig. 6.16-(i)). When the robot was in contact with environments of different stiffness (see Fig. 6.16-(ii), (iii) and (iv)), the response and the magnitude of the force sensor signal were different; however, the applied force controller stabilized the position command of the end-effector well enough for the operator to feel the contact force via the haptic device. In this case, even if the operator pushed the HONDA
ASIMO's hand down further with the haptic device (the green line), the end-effector's position (the red line) remained stable. The gains of the force controller were adjusted on-line according to the estimated object stiffness. Through the tests, the effect of the AOB force controller was validated: it was sufficient for the operator to feel the stiffness of different obstacles. This result shows the potential for interactive motion control and intuitive tele-operation.

Fig. 6.16 Result of the force controller test.

Fig. 6.17 Force controller test with HONDA ASIMO and the Omega haptic device.

6.7 Elastic Motion Planning

For robots to operate in human environments, they are required to react safely to unexpected changes around work areas. However, existing manipulation task planning methods take more than several seconds or minutes to update their solutions when environmental
changes are recognized. Furthermore, the computation time increases exponentially for highly complex structures such as humanoid robots. Therefore, we implemented a reactive motion planning system on the HONDA ASIMO robot platform to perform interactive manipulation tasks under real-time conditions. This section describes the implementation of the Elastic Strip Framework [17]. To improve its real-time performance and reliability, the geometry approximation of the previous version was replaced by an implicit method that constructs an elastic tunnel for collision checking. Additionally, to maintain system stability even in exceptional situations, such as undetected obstacles, the force transformer executes compliant motions, and the current elastic strip adapts the path-tracking motion by monitoring the tracking errors of the actual motion.

6.7.1 Strategy for Elastic Motion Planning

Our approach here is to realize contact-free motion control. The environment can be modeled in advance or updated using sensors, and a collision-free trajectory can be planned beforehand from the designed model. However, because the environment changes dynamically and sensors cannot capture it perfectly, the environment model never matches the real environment exactly. The designed trajectory therefore cannot respond to the real environment and causes unpredictable contacts during tracking. Since the torque controller handles predictable contacts well, the effects of unpredictable contacts need to be reduced before they reach the torque controller. Our third approach is therefore a robust trajectory generator placed between the motion planner and the torque controller. To realize practical contact-free manipulation, we propose a new framework composed of A) on-line motion generation by the Elastic Motion Planner, B) On-line Trajectory Generation (OTG), and C) whole-body multi-contact control.

Fig. 6.18 Concept of the Collision-Free Motion Planner

Fig. 6.19 Elastic trajectory modification with multiple obstacles

In this system (Fig. 6.18), the Haptic Interface (Section 6.3.3) is replaced by the Elastic Motion Planner [17]. In framework (A), the motion generator produces a desired trajectory that avoids collisions with the dynamic environment. The environment can be modeled using sensors, or no model is needed when intuitive tele-operation commands are used. In most cases, however, the environment model differs from the real environment because of sensor or modeling errors, and unpredictable contacts occur while tracking the desired trajectory. To track the trajectory with small error while responding to unpredictable contacts, framework (B) is built around OTG, which generates the best position command from the current position and the desired position command. With this mechanism, even if a non-modeled or unpredictable obstacle exists in the actual environment, the OTG function compensates for such contacts and the robot stops tracking; OTG also compensates for instability in the network command. The most important feature of the system is (C) whole-body multi-contact control, which relies on torque control. The input torques can be designed both to accomplish the desired task and to compensate for the dynamic effects of contact with the environment. This gives the robot high performance in position tracking as well as compliant motion for predictable contacts at the real-time control level. This framework was implemented on the HONDA ASIMO robot, and the effectiveness
of motion in unstructured environments was experimentally validated.

6.7.2 Experimental Validation

In the simulation test, we simulated goal-tracking motion in a narrow environment. The goal position was set inside a box on a table, and the obstacles to avoid were (i) the table, (ii) the edge of the box, and (iii) four moving obstacles. In the GUI, the obstacles were moved manually while the robot moved its right arm. For the simulation, we used a PC with two Core i7 3.07 GHz processors. The elastic motion planner ran at almost 1 kHz, and when an obstacle was moved, it took 67 ms to stabilize the trajectory.

We then connected the simulator to the real robot. The setup was the same, except that only one moving obstacle was used. The system communicates with the robot over TCP/IP, with an average network delay of 100 ms. The elastic motion planner sends each joint position to the robot as a desired joint position command; through OTG and computed torque control, the actual command is generated on the robot and the arm is controlled. The actual environment matched the simulation model exactly, and we verified real-time trajectory modification and goal-position tracking.

Here, we realized contact-tracking control on the robot. This mechanism is intended to minimize the disturbances and errors between the motion planner and the real environment. Since it is impossible to model the environment perfectly and sensor error is unavoidable, the planner's motion commands cannot avoid collisions with the environment perfectly. For this reason, we designed a master-slave arrangement: the motion planner generates the desired command and sends it to the robot, and the robot generates the actual command while accounting for the actual state of constraints such as contacts. The contact-tracking controller consists of (i) On-line Trajectory Generation and (ii) computed torque control. OTG is designed to respond flexibly to disturbances, and the computed torque control is designed to respond to contact. OTG estimates the current state and selects the best profile for the situation. In the simulation, several disturbances were considered, such as (i) changes of the goal position, (ii) network delay, and (iii) unpredictable contacts; these disturbances were handled in OTG and a smooth trajectory was generated. When there was no obstacle, the robot followed the command. When an unpredictable contact occurred, it restricted the motion in the contact direction; since the motion planner generated the next desired motion in another direction, the robot kept contact in one direction while moving in the other. If the constraint was suddenly removed, the robot recovered to the commanded position smoothly.

Fig. 6.20 Simulation of the Elastic Motion Planner and OTG

Fig. 6.21 Experimental validation of the Elastic Motion Planner and OTG

6.8 Conclusion

To allow humanoid robots to react to unpredictable contacts with the environment, we realized contact-free motion tracking with a master-slave control system. On the master control system, the whole-body dynamics of the HONDA ASIMO robot was computed in the SAI-II simulator. Two haptic devices were connected to SAI-II to command the hand positions on-line and intuitively. The on-line trajectory generation (OTG) framework filtered the desired joint position command into the actual joint position command and compensated for unpredictable contacts in the environment. To let robots react immediately to changes in their environment, instantaneous motion planning methods must be developed that can be deployed within the underlying whole-body control framework. The end-effector's robust force control using the Active Observer (AOB) was integrated into the force control of the Operational Space Formulation. This achieved a robust control mechanism that enables driving the robot safely under contact conditions. To easily operate both hands of the robot while sensing the interaction forces between the robot and the environment, an Omega haptic device was connected to the controller using a virtual-spring model between the master and slave systems. Finally, the framework was experimentally verified on the HONDA ASIMO robot by monitoring the system's behavior while interacting with objects of various shapes and stiffness properties. We also implemented a reactive motion planning system, the Elastic Strip Framework, on the HONDA ASIMO robot to perform interactive manipulation tasks under real-time conditions; it significantly improved real-time performance and reliability.

CHAPTER 7 Dynamic Collision-Free Walking Path Planning

In this chapter, the Elastic Motion Planner is extended to dynamic collision-free walking path planning. A new algorithm for robot motion planning [30],
which enables collision-free, stable motion planning in real environments, is proposed. When a robot moves, a great capacity to adapt to environmental changes is required, not only with respect to fixed structures but also to dynamically moving obstacles and humans. In this system, the fixed environment is modeled precisely from a CAD model. To detect environmental changes, the ceiling of the floor is segmented into a grid with an LRF sensor mounted over each cell, and the positions of pedestrians, obstacles, and the robot are detected and modeled in real time. The configuration space is defined with the X-Y axes as the positions of the environment and obstacles and the Z axis as time, running from the robot's start time to its goal time. Using the LRF sensor information, the features of human walking are analyzed, and a pedestrian model is developed that estimates current and future positions. The main feature of this system is the combination of trajectory generation by RRT (Rapidly-exploring Random Tree) and trajectory modification by an elastic potential method.
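As a minimal sketch of this configuration space, the check below tests a straight-line robot path in (x, y, t) against the tilted pedestrian cylinder, assuming a constant-velocity pedestrian model. The function name, sampling scheme, and parameters are illustrative, not the thesis implementation.

```python
import math

def path_hits_pedestrian(start, goal, t_goal, ped_pos, ped_vel, radius,
                         n_samples=100):
    """Sample a straight-line robot path in (x, y, t) and test it against
    the pedestrian cylinder: base = current position, top = position
    extrapolated along the velocity vector (constant-velocity model)."""
    for i in range(n_samples + 1):
        s = i / n_samples                      # path parameter in [0, 1]
        t = s * t_goal                         # time along the path
        rx = start[0] + s * (goal[0] - start[0])
        ry = start[1] + s * (goal[1] - start[1])
        px = ped_pos[0] + ped_vel[0] * t       # predicted pedestrian x
        py = ped_pos[1] + ped_vel[1] * t       # predicted pedestrian y
        if math.hypot(rx - px, ry - py) < radius:
            return True                        # inside the personal-space radius
    return False
```

A pedestrian crossing the robot's route at the time the robot passes is flagged, while a distant stationary one is not; the radius would come from the personal-space model described later in this chapter.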
When obstacles are found along the robot's way in the configuration space, the system applies RRT to generate a trajectory at that moment. Dynamic movement of the obstacles or pedestrians, however, can interfere with the generated trajectory. In that case, the Elastic Strip method in a Potential Field (ESPF) is applied to modify the trajectory in real time until the obstacles or pedestrians move out of the trajectory in the configuration space. Consequently, the trajectory is modified accounting not only for the current positions but also for the estimated future positions of obstacles and pedestrians. If the trajectory modified by the elastic potential method falls into a local minimum, the system calls RRT again to reset the path and find the best path at that moment. This algorithm was connected to the real sensor network system and simulated on-line using real sensor information; it was also simulated in another environment to verify its stability and reliability.

7.1 Background

Robots are expected to coexist in our environment and to execute arbitrary tasks automatically, such as delivery, cleaning, and guard services, in daily environments and public spaces. For robots to assist our activities while sharing the same living or work space, a great capacity to adapt to environmental changes is required, where obstacles and humans are mixed and moving dynamically in narrow spaces (Fig. 7.1). Here, the environment can be any fixed structure such as a wall, a door, or a fixed table; obstacles can be non-fixed objects such as chairs, whiteboards, plants, or baggage; and moving obstacles can be pedestrians, carts, or other robots. In real environments, one of the robot's important functions is collision-free trajectory generation to improve its capacity to adapt to environmental changes. While the robot is moving, its trajectory needs to be generated
dynamically from an arbitrary point to the planned goal position, avoiding collisions not only with the environment but also with any moving obstacles. Several methods have been proposed and applied to robot path planning. Generally, these methods can be categorized by their (a) sampling, (b) roadmap construction, and (c) trajectory generation functions, and their merits compared in terms of complexity and computation cost. Dijkstra's algorithm finds a minimum-cost path from a designated origin to a destination through a connected graph. Its advantage is that it is simple and fast, but the sampling points and the roadmap, which encodes all possible paths, need to be pre-computed. A* is essentially an informed variation of Dijkstra: it looks for a better path by using a heuristic function that gives priority to nodes expected to be better than others. From this point of view, A* is typically faster than Dijkstra, which simply explores all possible paths. PRM (Probabilistic RoadMap) takes random samples from the configuration space, tests whether they lie in free space, and uses a local planner to connect these configurations to other nearby configurations. In the construction process, a random configuration is created first, and then neighbors closer than some predetermined distance are connected. This process is not efficient for real-time systems. RRT preferentially expands a tree rooted at the starting configuration toward large unsearched areas, using random samples from the search space and attempting to connect each to the nearest state in the tree while checking the connection from the starting configuration toward the goal configuration. If the connection is feasible (it passes entirely through free space and obeys all constraints), the new state is added to the tree. With uniform sampling of the search space, the probability of expanding an existing state is proportional to the size of its Voronoi region. Since the largest Voronoi regions belong to states on the frontier of the search, the tree preferentially expands toward large unsearched areas. Various approaches improving the basic RRT algorithm have been investigated. The potential field method was developed as an online collision-avoidance approach, applicable when the robot has no prior model of the obstacles but senses them during motion execution. Generally, the potential field is defined across the entire free space, and the induced force is calculated from the position of a particle at each time step. Depending on the strength of the field, the particle can reach the source of the field by following an attractive field centered on the goal. The approach is also applicable as an online collision-avoidance approach when the
obstacles are sensed during the particle's motion. A repulsive field is defined around each obstacle, and if the particle approaches an obstacle, a repulsive force acts on it, pushing it away. In robotics, the two behaviors, seeking and avoiding, can be combined by summing the two potential fields; the robot can then follow the force induced by the combined field to reach the goal while avoiding obstacles. In trajectory generation, a path from a start position to a goal position is defined as a chain of particles, each pushed away according to the repulsive force acting on it. In the elastic band approach, the path is dynamically modified by treating it as an elastic band that can change its shape. The repelling forces on the elastic band are produced by obstacles in the vicinity of the path. The elastic band is modeled as a series of particles connected by springs, and the forces acting on it are computed from the potential energy at discrete path points. The advantages of the potential method are that sampling points and a roadmap need not be pre-computed, and that it is effective as an online collision-avoidance approach because it generates stable results with little computation time. Its disadvantage is the local-minimum phenomenon. Several methods have been suggested to deal with local minima in potential fields; one idea is to avoid them by combining the potential field with a high-level planner. All these approaches rely on the positions of the robot and of the obstacles relative to the environment being detected, so that the robot can discover that it is trapped. To deal with the complexity of robot path planning, many methods have thus been proposed and applied. Algorithms that are good at solving specific types of problems have been developed over the years, but they may be infeasible when applied to other problem types; there is no single algorithm that solves all problems.

Fig. 7.1 Image of robot motion planning in a real environment

7.2 Trajectory Generation by RRT

7.2.1 Robot Platform

In the validation process, the new HONDA ASIMO humanoid robot (Fig. 7.2) was used. ASIMO stands 131 cm tall and weighs 54 kg, with 35 DOF in total: head = 3 DOF, arms = 14 DOF (left + right), hands = 4 DOF (left + right), torso = 1 DOF, and legs = 12 DOF (left + right). ASIMO is controlled by a laptop computer or a portable controller unit through a wireless network, which permits direct and flexible operation; a single operator can easily and fully control ASIMO's movements. When a destination is commanded, the system automatically generates a path to the
goal position. ASIMO walks by "prediction movement control," that is, by predicting the next move and shifting the center of gravity accordingly. The robot is very stable even when moving suddenly. Stored walking patterns for starting/acceleration, steady speed, deceleration, stopping, and turning are combined to achieve smooth walking, and ASIMO turns smoothly without pausing. Walking patterns are generated independently in real time by ASIMO itself, and foot placement and turning are carried out by the robot while walking smoothly in any direction.

Fig. 7.2 HONDA humanoid robot ASIMO

7.2.2 Sensor Network: Detection of the Robot Position

To detect collisions between the robot and the environment and to apply the path planning algorithm, the position of the robot relative to the global coordinate system must be detected. In this system, the environment is modeled using a CAD model of the building and mapped into the global coordinate system. To localize the robot, two functions are integrated (Fig. 7.3). The first computes the motion of the robot using internal sensors, computes footsteps, and estimates the distance traveled from the initial position; using internal joint position data and gyro data, the x-y position and orientation theta-z are computed with small error. The second function uses ground markers. Since drift error is inherent in the first function, the robot's position must be calibrated with another sensor; in this case, ground markers are used to correct the position error. In the environment, ground markers are placed along the main path and in front of typical goal positions. Each ground marker has an ID and accurate position information within the environment. When the robot comes near a marker, it detects the marker with an on-board camera, looks up the registered marker ID and position, compares it with the estimated position, and calibrates its current position. Finally, since the LRF sensor network cannot distinguish the robot from a human, it detects the robot as a pedestrian; the system would then conclude that the robot and this "pedestrian" are always colliding. The LRF sensor network therefore needs to ignore any pedestrian whose position coincides with the robot's. Through this process, the robot's position is localized.

Fig. 7.3 Position-error canceling using ground markers

7.2.3 Sensor Network: Detection of Obstacles and Pedestrians

To detect humans, pedestrians, and moving obstacles in the environment, LRF sensors are used. The ceiling of a floor is segmented into a grid with an LRF sensor mounted over each
grid. Hundreds of LRF sensors, one per grid cell, are connected to a sensor-data-processing PC by way of TCP/IP. The LRF sensors are registered to the ceiling so that the sensor network tracks moving obstacles and pedestrians continuously by switching between sensors. The process of obstacle detection is shown in Fig. 7.4.

Fig. 7.4 Process of obstacle detection by the LRF sensor network

The position and orientation of each LRF sensor are specified by design. Each LRF sensor measures the distance to an obstacle, and a 3D transformation is applied. Since the height of the sensors is fixed, the ground level is calibrated and noise is canceled in the initialization process. After this process, the grid cells occupied by an obstacle are detected and transferred to the sensor-data-processing PC (Fig. 7.5). When an obstacle captured by one LRF sensor moves out of its range, the next sensor captures it; consequently, the sensor network tracks moving obstacles and pedestrians continuously (Fig. 7.5). The captured grid cells of an obstacle are simplified to a simple rectangle, or a combination of rectangles, with a center position. The captured grid cells of a human or pedestrian are simplified to a circle with a center position. Fixed obstacles that are not in the original map data are uploaded to the map, while moving obstacles and humans are each assigned an ID and tracked. Thus new obstacles are uploaded to the map, and moving obstacles and humans are detected and tracked by the system. The system is connected to the robot's motion planner, so the robot can access real-time information about obstacles and humans in the environment.

Fig. 7.5 LRF sensor network and obstacle detection

Fig. 7.6 Definition of personal space

7.2.4 Modeling Personal Space

We define a human personal space by (a) the human width in the lateral direction between
two hands, (b) an offset for the lateral oscillation on both sides, and (c) a margin to external obstacles beyond the end of the oscillation offset. Since the oscillation margin cannot be modeled for every walking situation, the maximum margin is used as the oscillation margin. A robot personal space is defined in the same way as the human personal space. The collision model of a pedestrian and the trajectory is built in a 3D space composed of the x-y plane and a t axis, where time runs from the start time (defined as zero). In this system, the robot's trajectory is represented by connected segments. To simplify the trajectory computation, we fold (a) the size, (b) the oscillation effect, and (c) the margin of the robot into the human personal space.

7.2.5 Definition of the Configuration Space

We first developed a dynamic path planning system that enables a humanoid robot to move safely, avoiding collisions not only with the environment but also with moving obstacles, humans, and pedestrians. Using the LRF sensor network, moving obstacles and humans in the environment are detected and tracked, and the system is connected to the robot's motion planner so that the robot can access real-time information about them. The motion planner designs a path from the start position to the commanded goal position. In the configuration space, the path is modeled in 3D, with the X-Y plane as the trajectory and the Z axis as time running from the start time to the goal time. If the path is straight in the environment, it is therefore modeled as a diagonal straight line connecting the start position to the goal position. A pedestrian is modeled in the configuration space by a cylinder whose base represents the current position and whose top represents the future position estimated from the velocity vector. The radius of the cylinder is defined by the personal space shown in Fig. 7.7, which accounts for human size, oscillation while walking, and a safety margin. If the pedestrian is standing still, the cylinder is vertical. Moving obstacles are modeled by diagonal boxes, and stationary ones by vertical boxes. Initially, RRT was applied to realize dynamic collision avoidance while the robot is moving. Since RRT generates trajectories randomly, using it continuously in real time would change the trajectory constantly as the obstacles move. The system is therefore designed to call RRT only when the robot's path collides with an obstacle model or a pedestrian model.

Fig. 7.7-(a) Configuration space (T=0) at the initial position
Fig. 7.7-(b) Configuration space (T=t) at collision

7.2.6 Trajectory Generation by RRT

As discussed in Section 7.1, path-planning methods can be compared and
categorized by their sampling, roadmap construction, and trajectory generation functions, and compared in terms of complexity and computation cost. Considering the ability to handle state constraints at low computation cost, the RRT and the potential method are the most advantageous. In an environment where obstacles suddenly appear and move slowly, recursive planning is desirable so that the situation can be updated and the trajectory changed according to the moving obstacles. A major advantage of RRT is that it generates open-loop trajectories for nonlinear systems with state constraints at low computation cost. For this reason, we applied RRT [14][15] to robot path generation in a real system and made a substantial effort to make it work. On the other hand, there was a major difficulty in applying RRT to robot path generation. RRT's samples are generated randomly from the search space while checking the connection from the starting configuration to the goal configuration, so continuous re-planning produces a different path each time. Since the roadmap is updated continuously and a different global path emerges in every loop, the plain RRT algorithm is not suitable for environments in which obstacles move continuously. In our motion planner, RRT is therefore called only when the current path is in collision or when a new path needs to be generated.

Fig. 7.8 Path generation by RRT

7.2.7 Initial Test

To test the dynamic path planner in the real environment, we set up two use-case scenarios. In the first, two pedestrians walk along designed paths while the robot moves from a start position to a goal position; this tests the robustness of dynamic collision avoidance in open space. In the second, a pedestrian and the robot pass each other in a narrow pathway; the robot is supposed to avoid collision or stop walking depending on the situation, testing the steadiness of the algorithm.

[A] Use Case 1

The designed paths of the two pedestrians and the robot, and the result of the experimental validation, are shown in Fig. 7.9. For pedestrian 1, who walked toward the robot, the dynamic path planner worked: the robot modified its trajectory and avoided the collision.

Fig. 7.9 Use case 1 test

Problem of Use Case 1

When pedestrian 2 changed path and walked toward the robot slowly (Fig. 7.10), the path planner did not work well. The planner generated the trajectory randomly on the right or the left side, and the robot and the pedestrian could not find
a mutually feasible trajectory. The robot eventually happened to find a path and passed the pedestrian, but the motion was not smooth at all.

Fig. 7.10 Use case 1 test: pedestrian approaching slowly

[B] Use Case 2

The second use-case scenario is shown in Fig. 7.11. In this case, the original paths of the pedestrian and the robot do not collide. However, since a human oscillates laterally while walking, the human cylinder model oscillates with the steps, which makes the cylinder model and the robot's trajectory collide in the configuration space. The robot is then supposed to stop walking. To debug the trajectory generation by RRT and understand how the algorithm behaves in each situation, all initial settings were fixed and the same test was repeated each time.

Fig. 7.11 Use case 2 test

Problem of Use Case 2

In the test, the robot stopped walking as soon as the human cylinder model collided with its trajectory. When the collision happened, RRT was called; however, it could not find any trajectory, which made the robot stop about 10 m away from the pedestrian, a distance equal to the height of the cylinder model.

7.3 Analysis of the Initial Test

7.3.1 Measurements of Human Walking

Fig. 7.12 shows measured data of a human walking along a straight line, recorded by the LRF sensor network; the subject walked at 1.2 m/s. When a human walks, a lateral oscillation occurs, and in Fig. 7.12 we can see this oscillation along the straight line at each footstep. If we know that the subject is walking on a straight line, we can decouple the oscillation from the straight movement. When we measure a pedestrian with the LRF sensor network, however, we cannot know where the pedestrian is heading, so the oscillation cannot be decoupled from the straight movement. The direction of the velocity vector therefore changes continuously at each step when we measure any pedestrian; the continuous velocity vector is shown as light-blue lines in Fig. 7.12.

Fig. 7.12 Human walking along a straight line, measured by the ceiling-mounted LRF sensors

Fig. 7.13 Human walking along a straight line at 0.4 m/s, 0.6 m/s, and 0.8 m/s

In Fig. 7.13, walking along a straight line was measured at several speeds: 0.4 m/s, 0.6 m/s, and 0.8 m/s. Generally, 0.8 m/s is a standard walking speed and 0.4 m/s is quite slow. From this figure, we can see that the effect of oscillation grows at slower speeds: the estimated future position oscillates more the slower the human walks. It is therefore better to use a shorter estimation range when the human walks more slowly.

7.3.2 Analysis of RRT for Dynamic Collision Avoidance

In this system, RRT is
designed to be called only when the current path is in collision or when another path needs to be generated. This is because the RRT samples are generated randomly from the search space while checking the connection from the start configuration to the goal configuration, so continuously re-running RRT produces a different path each time. If the collision occurs far from the current position in the configuration space and it takes some time until the robot reaches the collision point, the short-term trajectory is not affected much, even if RRT generates the trajectory randomly. The problem arises when an obstacle is continuously moving and affecting the robot's trajectory near the robot. A simulation of RRT with three moving obstacles is shown in Fig. 7.14. In Fig. 7.14, a blue circle, an orange circle and a green circle each represent an obstacle, and they are constantly moving. RRT is applied for each moving circle.
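The replanning gate described above, calling RRT only when the current path is in collision, can be sketched as a simple predicate over the x-y-t path. This is an illustrative version under assumed interfaces: `path` as a list of (x, y, t) waypoints and `obstacle_at(t)` as a caller-supplied prediction of the obstacle center at time t (the thesis checks against the pedestrian model in the configuration space).

```python
import math

def path_in_collision(path, obstacle_at, radius):
    """Return True if any waypoint of the x-y-t path lies inside the
    obstacle's collision circle at the corresponding time. `obstacle_at`
    and the waypoint format are hypothetical interfaces for illustration."""
    for (x, y, t) in path:
        ox, oy = obstacle_at(t)
        if math.hypot(x - ox, y - oy) < radius:
            return True
    return False
```

The planner would call RRT only when this predicate returns True; otherwise the current path is kept and only locally deformed.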
The red line shows the path generated by RRT. As we see in Fig. 7.14, the generated path keeps changing against the moving obstacles, and we can see that the generated path is unstable. Consequently, the instability of the trajectory directly affects the short-term foot-step plan and makes the robot's foot steps inconsistent.

Fig. 7.14 Continuous path generation by RRT

7.3.3 Analysis of Use Case 1
Based on the analysis of the sensed human-walk data and of the RRT process, we can explain the technical issues of use case 1. This behavior was caused by two technical problems. The first problem was caused by the pedestrian cylinder model. When the robot is in front of a pedestrian and approaching, in most cases the pedestrian slows down, which makes the sway of the walk-speed vector angle large. Consequently, the pedestrian cylinder model swings widely right and left in front of the robot. Since the pedestrian cylinder model moves dynamically, RRT was called frequently. The second problem was caused by RRT. Since
the robot and the pedestrian are close to collision, RRT is called in response to the dynamic change of the environment. Every time RRT is called, it generates a new trajectory unrelated to the previous one. Because RRT generates paths randomly, the generated path passed either to the right or to the left of the pedestrian. The pedestrian cylinder model therefore kept switching between right and left in front of the robot, and the robot's path kept updating to the right or left of the pedestrian. Consequently, the dynamic path planner did not work at all in this situation.

Solution of Use Case 1
(1) Do not call RRT frequently; instead, continuously modify the trajectory in response to dynamic changes of the environment using another method.
(2) Shorten the estimation range of the pedestrian model, i.e. the height of the model, according to the walk speed of the pedestrian. For example, if the walk speed is slow, shorten the height of the pedestrian model.

Fig.
7.15 Analysis of use case 1

7.3.4 Analysis of Use Case 2
When a robot and a pedestrian pass each other in a narrow pathway, the cylinder model of the pedestrian oscillates with each foot step. Even though the base of the pedestrian cylinder model oscillates only about 0.3 [m] sideways, the displacement is amplified at the top of the cylinder model, which represents the estimated future position. Consequently, the estimated future position of the pedestrian moves widely and always fills the pathway. This makes it impossible for RRT to generate a path even if the pedestrian is far from the robot.

Solution of Use Case 2
Model the pedestrian with something other than a cylinder: a model that represents the current collision precisely while keeping its effect on the robot trajectory small at the future collision point.

Fig. 7.16 Analysis of use case 2

7.3.5 Pedestrian Model in Configuration Space
Personal space was defined by (a) the human width in the side direction between the two hands, (b) the maximum
offset by oscillation on both sides, and (c) a margin to external obstacles beyond the oscillation offset. This safety margin was applied as the radius of the pedestrian cylinder model. Through the analysis of pedestrians using the LRF sensor, we found that the problem with the pedestrian model in the initial test was not its radius but its shape and height. Based on the results and analysis of the initial test, we propose a new pedestrian model in the configuration space.

A) Vertical cylinder model when a pedestrian is stopped or walking slower than 0.2 [m/s]
If the pedestrian stops walking, the pedestrian collision model is represented by a vertical cylinder in the configuration space. Even when the pedestrian is standing still, the LRF sensor detects a small walking velocity, which is caused by sensor noise. Since a walking speed of 0.1 [m/s] is too small, we treat it as noise and conclude that the pedestrian is
stopped. Here, the radius of the cylinder is defined by the radius of the personal space.

B) Cone model when a pedestrian is walking faster than 0.2 [m/s]
To represent the collision precisely at the current position while reducing its interference with the robot's trajectory at the future collision point, we propose a cone shape for the pedestrian collision model in the configuration space. With a cone shape, even if the pedestrian oscillates sideways while walking, the effect at the future collision point is reduced because the radius of the cone model is smaller there than that of the cylinder model. At the same time, we propose to change the height of the cone model according to the pedestrian's walk speed. For example, if the walk speed is slow, the height of the cone is shortened to stay within the personal space. If the height of the cone is inside the personal space, the oscillation of the velocity vector will not affect the robot's trajectory much. The height of the cone model increases from
the radius of the personal space up to 12 [m] ahead as the walk speed increases from 0.2 [m/s] to 2.0 [m/s]. Fig. 7.18 illustrates the pedestrian cone model. When a robot and a pedestrian pass each other in a narrow pathway, the cone model of the pedestrian oscillates with each foot step. Even though the base of the pedestrian cone model oscillates about 0.3 [m] sideways, its effect at the top of the cone is reduced.

Fig. 7.17 Human walk estimation model

When the pedestrian approaches the robot, the collision circle gradually increases until the robot and the pedestrian reach their closest point. Fig. 7.19 shows the initial collision when the pedestrian and the robot are approaching. As we can see, the collision circle does not fill the pathway; therefore, RRT can generate a trajectory in this case. Consequently, we can compute the collision at the current position while reducing the effect of the estimated future position.

Fig. 7.18 Human walk estimation
model and robot
Fig. 7.19 Simulation of human walk estimation model and robot

7.4 Dynamic Trajectory Generation
7.4.1 System Structure
Fig. 7.20 shows the overall EPP (Elastic Path Planning) system, which is composed of (a) sensor data input, (b) a trajectory generator and (c) a motion controller. In the sensor-data-input function, the obstacles and the location of the robot are measured on-line. This information is sent to the trajectory generator. In the trajectory generator, an appropriate computation method is selected to generate the trajectory and the best solution is chosen. The motion controller is designed to run independently of the trajectory generator. It has a dynamics model of the robot's motion and is designed to track the trajectory produced by the trajectory generator. Even if the trajectory generator cannot generate a trajectory, the motion controller continues to track the current trajectory and commands the robot to
stop when it comes within the safety margin to the obstacles and the environment. Even if the trajectory generator fails to generate a path and the path runs into the environment, the motion controller predicts the critical collision with the environment and stops the robot within the stopping margin. If the moving obstacle moves away and the margin to the obstacle becomes safe again, the motion controller commands the robot to move again.

Fig. 7.20 System structure

7.4.2 Sensor Data
Fig. 7.21 shows the sensor-data-input module. This system requires several kinds of sensor information to estimate the positions of multiple humans, obstacles, the environment and the robot. The positions of humans, obstacles and the environment can be measured by an LRF sensor, a vision sensor, etc. The position of the robot can be measured by an LRF sensor, a vision sensor, markers, etc. Any sensor can be used in this system as long as the position of the target
can be measured on-line. After filtering and synchronization, all the data are arranged into a common global frame in the coordinate-transformation module, so that the distances between the robot and the humans, obstacles and environment are computed on-line.

Fig. 7.21 Sensing network system structure

7.4.3 Dynamic Trajectory Generation System
A stable trajectory generation system composed of RRT and the elastic potential method was developed. It was designed to generate a new trajectory around the obstacles using RRT and to modify that trajectory continuously against the motion of the obstacles using the elastic potential method. The system selects the appropriate trajectory-generation method according to the dynamically changing situation, so as to keep the trajectory stable in any situation. At the beginning of the loop, when the counter is zero, the system calls Global Path Planning to load the original path and to define a start point and a goal point with via points. If there is no obstacle
or human on the path, it goes into Elastic Strip Motion Planning mode. In this mode, as soon as any obstacle or human enters the path, it starts to modify the trajectory around the obstacles. On the other hand, if there is any obstacle or human on the path, it goes into RRT mode. The system calls RRT several times until it generates a stable trajectory. After the RRT process, the system switches to Elastic Strip Motion Planning mode to modify the trajectory according to the dynamic changes outside. If the Elastic Strip Motion Planning process fails to generate a collision-free trajectory for several cycles and the motion module commands the robot to stop because of a collision, and the motion module then commands the robot to move again once it is collision free, the system calls RRT again and repeats the same process.

Fig. 7.22 Dynamic trajectory generation system
Fig. 7.23 Selection of trajectory generator
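The mode-selection logic above can be sketched as a small state machine. This is a minimal sketch: the mode names, the `max_fail` threshold and the boolean inputs are assumptions for illustration; the thesis describes the logic in prose and in Fig. 7.23, not as code.

```python
from enum import Enum, auto

class Mode(Enum):
    GLOBAL_PLAN = auto()   # load original path, define start/goal/via points
    RRT = auto()           # (re)generate a path from scratch
    ELASTIC = auto()       # continuously deform the current path
    STOPPED = auto()       # motion module halted the robot

def next_mode(mode, path_in_collision, elastic_failed_cycles, robot_cleared,
              max_fail=5):
    """One step of the selection between RRT and Elastic Strip modes."""
    if mode is Mode.GLOBAL_PLAN:
        # obstacle or human on the path -> RRT mode; otherwise elastic mode
        return Mode.RRT if path_in_collision else Mode.ELASTIC
    if mode is Mode.RRT:
        # after RRT succeeds, hand the path to the elastic deformation loop
        return Mode.ELASTIC
    if mode is Mode.ELASTIC:
        # repeated failure to stay collision free -> motion module stops robot
        return Mode.STOPPED if elastic_failed_cycles >= max_fail else Mode.ELASTIC
    if mode is Mode.STOPPED:
        # once the robot is collision free again, replan from scratch
        return Mode.RRT if robot_cleared else Mode.STOPPED
    return mode
```

A supervisory loop would call `next_mode` once per planning cycle, feeding it the current collision status from the sensor module.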
7.4.4 3D RRT Process in Configuration Space
Fig. 7.24 and Fig. 7.25 show how the RRT process is computed in this system. Many extensions of RRT have been proposed to broaden its capabilities and compensate for its drawbacks. In this system, the weaknesses of RRT are compensated by the combination with ESPF, so its critical drawbacks are excluded by the higher-level trajectory planner. One unique characteristic of the RRT process in this system is trajectory generation in the 3D configuration space. The process is as follows.
A) Generate a candidate point randomly in the 2D x-y space and connect it to the closest parent point.
B) Map the candidate point into the 3D x-y-z space, where the z axis represents time and the height is determined by the commanded walk speed of the robot (the slope of the segment represents the commanded walk speed).
C) Check collision with the environment, obstacles and pedestrians in
the configuration space.
D) If a collision occurs, discard the candidate point.
E) Repeat this process until the start position and the goal position are connected in the configuration space.

Fig. 7.24 Trajectory generation by RRT
Fig. 7.25 3D RRT process in configuration space

Through this process, we obtain a branch tree in which every child point is higher than its parent point on the z axis (because time always increases). After the RRT process, a smoothing function smooths the trajectory and updates the result, and Elastic Strip in Potential Field is applied to the RRT result to evaluate the trajectory. The RRT process of Fig. 7.27 is run several times and the results are compared, as in Fig. 7.26, to find the desired trajectory. Since RRT is a stochastic method, it can generate several trajectory patterns, including undesired ones such as trajectories close to the
obstacles. To obtain a better trajectory from RRT, a potential function is applied to evaluate each trajectory by its margin to the obstacles. For example, the shortest distance to an obstacle and the total potential energy of each RRT result are used to compare the generated trajectories.

Fig. 7.26 Generated candidate trajectories in the x-y configuration space
Fig. 7.27 RRT process

7.4.5 Elastic Strip Method in Potential Field
Fig. 7.28 shows the process of ESPF. The Elastic Strip framework is an approach for integrating global behavior and local avoidance behavior, enabling reactive motion execution while maintaining the global properties of the task. This is accomplished by incremental modification of the previously planned motion. The modification preserves the topological properties of the original path as well as the task constraints. Thus, it can maintain a
task-consistent valid path from the current state to the desired goal state without being susceptible to local minima. An initial candidate path is provided by RRT, and the Elastic Strip framework constructs nodes with robot configurations along the path. For the environment, obstacles and humans, potential fields are defined (Fig. 7.29) as seen from the points of the current path. The potential forces are computed for each point according to its distance from the external obstacles. For each path point, internal forces are also computed. If two adjacent node points are closer than the designed shortest margin, the two points are merged; on the other hand, if two adjacent points are farther apart than the designed longest margin, a new point is generated between them and its collision checking is started.

Fig. 7.28 Process of Elastic Strip in Potential Field
Fig. 7.29 Potential field of the environment and obstacles
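One update cycle of the elastic strip described above can be sketched as follows. This is an illustrative version under stated assumptions: the gains, margins and the linear repulsion law are placeholders (the thesis does not give numeric values here), the endpoints are held fixed, and the thesis additionally runs a PD controller on the nodes and staggers the force timing as in Fig. 7.33.

```python
import math

def elastic_step(nodes, obstacles, margin=1.0, k_rep=1.0, k_int=0.5,
                 d_merge=0.1, d_split=1.0, gain=0.2):
    """One deformation step of an elastic strip of 2D nodes.
    Interior nodes feel (a) repulsion from obstacles inside `margin` and
    (b) internal spring forces toward their neighbors; then neighbors
    closer than d_merge are merged and segments longer than d_split are
    split by inserting a midpoint."""
    new = [nodes[0]]
    for i in range(1, len(nodes) - 1):
        x, y = nodes[i]
        fx = fy = 0.0
        for ox, oy in obstacles:                     # external repulsion
            d = math.hypot(x - ox, y - oy)
            if 1e-9 < d < margin:
                s = k_rep * (margin - d) / d
                fx += s * (x - ox); fy += s * (y - oy)
        for nx, ny in (nodes[i - 1], nodes[i + 1]):  # internal "rubber band"
            fx += k_int * (nx - x); fy += k_int * (ny - y)
        new.append((x + gain * fx, y + gain * fy))
    new.append(nodes[-1])
    # merge too-close neighbors, split too-long segments
    out = [new[0]]
    for p in new[1:]:
        q = out[-1]
        d = math.hypot(p[0] - q[0], p[1] - q[1])
        if d < d_merge and p != new[-1]:
            continue                                  # merge: drop this node
        if d > d_split:
            out.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))  # split
        out.append(p)
    return out
```

Calling `elastic_step` repeatedly while obstacles move reproduces the continuous, replanning-free deformation that motivates combining ESPF with RRT.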
Fig. 7.30 Elastic Strip method
Fig. 7.31 Image of elastic motion planning

The potential vector for each point is computed from the external potential force and the internal potential forces, and the points are controlled by a PD controller. Finally, the modified node points are arranged and updated in the system. In this system, several forces from the potential field and internal forces act on the trajectory together. Fig. 7.31 shows the repulsive forces from the obstacles acting on the node points. When the trajectory comes within the margin of an obstacle, defined by that obstacle's potential field, a repulsive force is computed for each obstacle and the forces are summed at each node point. At the same time, an attractive force is defined toward the markers used to localize the robot, so that it does not lose its position in the global frame. In this
system, since the Elastic Strip framework is used, internal forces between the node points also act on them. If two node points are closer than the designed distance and the internal force is larger than the designed margin, the two node points are merged. On the other hand, if two node points are farther apart than the designed distance and the internal force is larger than the designed margin, a new node point is generated between the two original points. The forces acting on the node points are designed to respond at different timings, as shown in Fig. 7.33, which makes the system more stable.

Fig. 7.32 Image of forces acting on the segments
Fig. 7.33 Timing of forces

7.4.6 Tracking Control
In this system, a motion controller is connected to make the robot track the generated trajectory. The motion controller has a tractor which is connected to the robot by
a spring-and-damper model. The tractor tracks the trajectory 0.4 [m] ahead of the robot, which is the minimum margin needed to stop the robot in case of a sudden collision. Fig. 7.34 shows a scene in which the robot is tracking segment S(i,i+1), which lies between point P(i) and point P(i+1). If the Elastic Strip pushes point P(i), a new segment is generated between the current position of the robot and the moved point P'(i). The tractor moves with the segment and continues tracking the modified segment. The tractor continuously pulls the robot along the modified segment, which makes the robot change its orientation smoothly. If point P(i+1) is moved and segment S(i,i+1) moves, the tractor also moves with segment S(i,i+1) (Fig. 7.34(b)), and again continuously pulls the robot along the modified segment. The tractor has another function: when the tractor collides with an obstacle on the trajectory, it commands the robot to stop until the obstacle moves away. Consequently, the tractor
commands the robot to track the generated trajectory smoothly.

Fig. 7.34 Tracking control

7.5 Experimental Validation
7.5.1 Validation of Proposed Dynamic Trajectory Generation
Fig. 7.35 shows how this framework works in a real environment. A robot is supposed to pass a pedestrian in a passage. The original path of the robot is shown as a straight pin whose position in the horizontal plane indicates the position of the robot and whose vertical position indicates the estimated time at which the robot passes. The figure shows an x-y-t graph in 3-dimensional space. It can also be mapped into the x-y plane at the robot position, which shows the real situation at an arbitrary time. When the pedestrian cone model and the model of the robot path collide at some time, the collision between the pedestrian and the robot is computed using circles, each a cross-section of the corresponding model. The trajectory of the robot is modified following this process.

Fig. 7.36(a)-(e) show the process of trajectory generation. Fig. 7.36-(a) shows the situation in which the robot and the pedestrian are far from each other and the human walk estimation model does not collide with the path of the robot; in this case the path of the robot does not change. Fig. 7.36-(b) shows the next situation, in which the human walk estimation model starts to collide with the path of the robot; here the path of the robot is modified by the circle that is the cross-section of the human walk estimation model cut at the corresponding time of the robot motion. Fig. 7.36-(c) shows the situation in which the human walk estimation model collides with the path of the robot further, and the trajectory of the robot is continuously modified by the elastic potential method. Fig. 7.36-(d) shows the situation in which the robot and the human are passing each other at the closest distance; in this case the distance between them is calculated with the real margin, which is the personal
space of the robot and the human. Fig. 7.36-(e) shows the situation after the robot and the human have passed each other and there is no collision between them; in this case the modified trajectory of the robot gradually returns to the original path, so that it does not suddenly jump back.

Fig. 7.35 Collision between the pedestrian cone model and a model of the robot path at an arbitrary time
Fig. 7.36-(a) Process of trajectory generation and trajectory modification: no collision
Fig. 7.36-(b) Process of trajectory generation and trajectory modification: RRT is called
Fig. 7.36-(c) Process of trajectory generation and trajectory modification: ESPF is called
Fig. 7.36-(d) Process of trajectory generation and trajectory modification: ESPF is called
Fig. 7.36-(e) Process of trajectory generation and trajectory modification:
ESPF is called

7.5.2 Real Sensor Simulation
The result of the simulation is shown in Fig. 7.37; it corresponds to the image shown in Fig. 7.36. The Dynamic Trajectory Generator (DTG) was connected to the real system and its effectiveness was verified through real-time simulation. In this case, the real operating environment and real pedestrian data were used, and the DTG commanded the modified trajectory on-line to the robot control system. We did not use the real robot, but the real-time commands from the DTG to the robot control system were monitored on both the DTG and the robot control system. The robot control system computes the position of the robot according to the trajectory command and feeds the position data back to the DTG, which generates the trajectory using the real position data. Fig. 7.37-(a) shows a situation in which there is no collision and the path is not affected by any obstacles. Fig. 7.37-(b) also shows a situation in which there is no
collision between a pedestrian (cone model) and a path (cylinder model) in 3D space and the path is not affected by any obstacles. Fig. 7.37-(c) shows a situation in which there is a collision between the pedestrian and the path in 3D space; the path is divided into several node points and deformed by the obstacle. Fig. 7.37-(d) shows a situation in which the trajectory is modified further according to the position of the pedestrian. Fig. 7.37-(e) shows the pedestrian and the robot passing each other; we confirmed that the motion controller commanded the robot to stop when the personal spaces of the robot and the pedestrian collided. Fig. 7.37-(f) shows the situation after the pedestrian and the robot have passed: there is no collision between the pedestrian and the trajectory, and the trajectory gradually moves back to the original path. With the real data, the human walk estimation model moved unpredictably and disturbed the trajectory
generation. Nevertheless, the trajectory generator responded stably to these unpredictable effects. We also confirmed that the robot followed the continuously modified trajectory. When the passage is filled by the human walk estimation model and there is no space to generate a trajectory, the trajectory generator keeps the previous trajectory and tries to find a new one by RRT. If a new trajectory is generated by RRT, it is applied; if RRT fails, the previous trajectory is kept (not updated) and the robot moves along it until it collides with the human walk estimation model. Fig. 7.38 shows a more complicated situation at a crossing with many pedestrians walking randomly. The robot is supposed to cross the street, avoiding the pedestrians via the trajectory generator and stopping its motion via the motion controller. The same algorithm was applied to this situation; only a start position and a goal position were given. Real data, which
was recorded using the same sensor system, was used for all the pedestrians walking in this environment. Some pedestrians cross in front of the robot, come close to the robot, or overtake it. In every situation, the trajectory generator responded stably and the motion controller commanded the robot to follow the modified trajectory, stopping and walking according to the collision situation with the several human walk estimation models. Therefore, we confirmed that this algorithm can be applied to an arbitrary environment with many pedestrians walking randomly.

Fig. 7.37 Real-time simulation in environment I
Fig. 7.38 Real-time simulation in environment II

7.6 Conclusion
A) The configuration space was defined as a 3D space composed of the x-y plane and a t axis, which measures time from the start time (the start time is zero). The collision
model of the pedestrian was measured by the LRF sensor network and modeled as a 3D cone, whose base center represents the current position and whose top represents the estimated future position. The radius of the cone model is defined from the personal space, and its height from the walk speed. Collision between the trajectory of the robot and the pedestrian is computed with this model in the configuration space.
B) Real-time robust trajectory generation was realized by a hybrid algorithm using RRT for arbitrary trajectory generation and Elastic Strip in Potential Field for continuous trajectory modification. These two functions compensate for each other's potential weaknesses and make the trajectory generator robust to dynamic changes of obstacles in the environment.
C) The RRT process was modified to generate candidate points efficiently in the 3D configuration space. A candidate point is generated in the x-y plane by RRT and lifted along the z axis (the time axis) from its
parent node according to the commanded walk speed. Collision between the candidate point and the environment (obstacles and pedestrians) is computed in the 3D configuration space; if it collides, the candidate point is discarded. The RRT process generates several candidate trajectories, and a potential function is applied to the RRT results to evaluate them. The RRT process is run several times and the results are compared to find the desired trajectory.
D) In the presence of obstacles near the trajectory, the Elastic Strip in Potential Field (ESPF) method was applied to respond to the dynamic motion of the obstacles in real time. For the environment, obstacles and humans, potential fields are defined and the potential forces are computed for each point according to its distance from the external obstacles. For each path point, internal forces are also computed. If two adjacent node points are closer than the designed shortest margin, the two points are
merged; on the other hand, if the two adjacent points are farther apart than the designed longest margin, a new point is generated between the original two points and its collision checking is started. The potential vector for each point is computed from the external potential force and the internal potential forces, and the forces acting on the node points are designed to respond at different timings. Finally, the points are controlled by a PD controller.
E) This system was connected to the real system and its effectiveness was verified through real-time simulation. The real operating environment and real pedestrian data were used, and this system commanded the modified trajectory to the robot control system on-line. The real-time commands from this system to the robot control system were monitored on both this system and the robot control system. The robot control system computes the position of the robot according to the trajectory command and feeds the position data back to this system. The
DTG generates the trajectory using the real position data. With the real data, the human walk estimation model moved unpredictably and disturbed the trajectory generation; nevertheless, the trajectory generator responded stably to these unpredictable effects. We also confirmed that the robot followed the continuously modified trajectory.

CHAPTER 8 Conclusion

1. Torque Control on Position Controlled Robot
A) The Torque Transformer was developed to convert a desired joint torque command into instantaneous increments of the joint position or velocity command that tend to produce the desired actual torque at the robot. The merits of this framework are that (i) open-loop torque control can be realized on a current position-controlled robot without hardware modification, and (ii) the controller can account for the nonlinear dynamics of the system. The concept of the transformer was analyzed and validated on the PUMA and ASIMO robots.
B) In this work, (a) the Torque to Position
Transformer Experimental Model, (b) the Torque to Position Transformer Linear Model and (c) the Torque to Velocity Transformer Linear Model were validated on the HONDA ASIMO robot. The Torque to Position Transformer Experimental Model was the preferred approach; however, its modeling process was more complicated than the other two methods. The Torque to Position Transformer Linear Model was our first approach to simplifying the modeling process. Ultimately, the Torque to Velocity Transformer Linear Model had the simplest inverse modeling process, and we selected it for use in this research. By incorporating the effect of friction, we obtained better performance for all three approaches.
C) In this framework, the Operational Space Formulation was applied to account for the dynamics of the robot. The concept of the transformer was analyzed and validated on the HONDA ASIMO robot. Using the Torque Transformer, compliant and passive control and precise position control in Cartesian space were experimentally
validated.

2. Sensor-Based Whole-Body Compliant Motion Primitives
A) To realize safe and successful balance maintenance of humanoid robots in ordinary human environments, the proposed balance controller was developed by controlling the ZMP (Zero Moment Point). A stable balance controller was developed through (A) Floor Reaction Control, (B) Target ZMP Control and (C) Foot Planting Location Control. To realize unified whole-body control, the upper-body controller and the lower-body controller were connected using the conventional communication between the upper body and the lower body.
B) The upper-body posture controller was developed to move the upper body compliantly and to reduce the effect of contacts at the arms on the lower-body balance controller.
C) When disturbances occur, the balance control strategies aim to reduce the
velocity of the COM through the design of the joint control torques. This function lowers the risk of falling by detecting the fall situation, controlling the body posture, signaling the external force, and avoiding falls by taking several steps.

3. Sensor-Based Contact Estimation and Control Primitives
A) Based on contact detection using a force sensor and a tactile sensor skin, decoupled task dynamics was applied to respond to multiple contacts. Using the sensor information, a contact point and a contact force were estimated. According to the estimated contact, the Contact Jacobian Matrix was dynamically modified. The dynamic effect of the robot was modeled by the Operational Space Formulation for the upper-body control. The command from the Operational Space Formulation was transformed into joint position and velocity commands using the Torque Transformer.
B) This framework was applied to the upper-body control of the humanoid robot and was connected with the existing balance controller for the
lower body.
C) Through the tests, the effectiveness of this framework was experimentally verified. The advantage of this approach is that unpredictable contacts can be measured by the force sensor and the tactile sensor, and the contact point and contact force are estimated so that the controller can deal with the contacts. The controller could respond dynamically to the contact force whenever it was detected or estimated. Not only precise force feedback control but also compliant contact motion was realized.
D) These results are especially valuable when humanoid robots coexist with humans in real environments. This framework provides more freedom of motion in narrow environments when advanced environment sensing, decision making and behavior controllers are connected.

4. Whole-Body Multi-Contact for Extended Support
A) We expanded the manipulation capabilities of humanoids by extending their support with multiple contact points. In this system, our whole-body framework was
designed to allow the robot to compliantly interact with its environment at multiple contact points. The synthesis of compliant tasks was simplified by being independent of postures and constraints, which are automatically integrated in the control hierarchy. B) We proposed to decompose the whole-body balance controller into (1) gravity compensation, (2) balance control, and (3) gait and trajectory generation, and to activate or deactivate each component according to the required task. By activating only (1) gravity compensation and deactivating the other controllers, we could turn off balance control and make the humanoid robot support only its own body weight. We named this control state Gravity Stand. C) To bend the upper body forward and reach an object at the inner part of a desk, the upper-body posture was supported by pushing on the desk with at least one hand. The idea of Gravity Stand was extended to
support the posture of the upper body using the arm. Using the force sensor mounted on the wrist, the contact force was detected, and a force command along the direction of gravity, with the magnitude of the measured vertical force, was generated at the end-effector of the arm (the hand). We named this control Gravity Assist. D) The ideas of Gravity Stand and whole-body multi-contact were extended to realize kneeling and crawling motions, which were experimentally validated. 5. Dynamic Collision-Free Motion Planning A) To allow humanoid robots to react to unpredictable contacts with the environment, we realized contact-free motion tracking with a master-slave control system. On the master control system, the whole-body dynamics of the HONDA ASIMO robot was computed on the simulator SAI-II. Two haptic devices were connected to SAI-II to command the hand positions on-line and intuitively. The on-line trajectory generation (OTG)
framework filtered the desired joint position command into the executed joint position command and compensated for unpredictable contacts in the environment. B) Robust end-effector force control using an Active Observer (AOB) was integrated into the force control of the Operational Space Formulation. A robust control mechanism was achieved, which enables the robot to be driven safely under contact conditions. To easily operate the positions of both hands of the robot while sensing the interaction forces between the robot and the environment, an Omega haptic device was connected to the controller using a virtual spring model between the master and slave systems. Finally, the framework was experimentally verified on the HONDA ASIMO robot by monitoring the system's behavior while interacting with objects of various shape and stiffness properties. C) In order to let robots immediately react to changes in their environment, it is necessary to develop instantaneous motion planning methods that can be
deployed in the underlying whole-body control framework. We implemented a reactive motion planning system, the Elastic Strips Framework, on the HONDA ASIMO robot to perform interactive manipulation tasks under real-time conditions. It significantly improved real-time performance and reliability. 6. Dynamic Collision-Free Walking Path Planning A) The configuration space was defined as a 3D space composed of the x-y plane and a t axis that measures time from the start (the start time is zero). The collision model of each pedestrian was measured by an LRF sensor network and modeled as a 3D cone, whose base center represents the current position and whose apex represents the estimated future position. The radius of the cone model is defined from the personal space and the height. Collision between the trajectory of the robot and a pedestrian is computed with this model in the configuration space. B) Real-time robust trajectory generation was realized by a hybrid algorithm using RRT for arbitrary trajectory
generation and Elastic Strip in a Potential Field for continuous trajectory modification. These two functions compensate for each other's potential weaknesses and make this trajectory generator robust to dynamic changes of obstacles in the environment. C) The RRT process was modified to generate candidate points effectively in the 3D configuration space. A candidate point is generated in the XY plane by RRT and is lifted along the Z axis (the time axis) from its parent node according to the walking speed command. Collision between the candidate point and the environment (obstacles and pedestrians) is computed in the 3D configuration space; if it collides, the candidate point is discarded. The RRT process generates several candidate trajectories, and a potential function is applied to the RRT results to evaluate each trajectory. The RRT process is run several times, and its results are compared to find the desired trajectory. D) In the presence of obstacles near the trajectory, the
Elastic Strip in Potential Field (ESPF) method was applied to respond to the dynamic motion of the obstacles in real time. For every environment element, obstacle, and human, a potential field is defined, and the potential forces are computed for each point according to its distance from the external obstacles. For each path point, internal forces are also computed. If two adjacent node points are closer than a designed shortest margin, the two points are merged; on the other hand, if two adjacent points are farther apart than a designed longest margin, a new point is generated between them and its collision checking begins. The potential vector for each point is computed from the external and internal potential forces, and the forces acting on the node points are designed to respond to each force differently. Finally, the points are controlled by a PD controller. E) This system was connected to the real system, and its effectiveness was verified through
the real-time simulation. The real operating environment and real pedestrian data were used, and this system commanded the modified trajectory to the robot control system on-line. The real-time commands from this system to the robot control system were monitored on both systems. The robot control system computes the position of the robot according to the trajectory command and feeds the position data back to this system. The EMG generates the trajectory using the real position data. With the real data, the human-walk estimation model produced unpredictable motion that affected the trajectory generation; nevertheless, the trajectory generator responded to these unpredictable effects stably. We also confirmed that the robot followed the continuously modified trajectory. 7. Future Work A) Development of a robot for compliant multi-contact and whole-body control In this project, we used the HONDA ASIMO robot as a platform for compliant whole-body
control research. However, since the robot was originally designed for walking, it was not sufficient for testing whole-body contact. For that purpose, the whole body of the robot needs to be covered with the tactile skin that was developed for the forearm, and whole-body compliant contact, contact estimation, whole-body multi-contact, and their control need to be integrated. B) Advanced behavior controller responding to dynamic changes of the situation In this project, advanced motion primitives were developed to respond to complicated situations in the real environment. During the validation process, a simple use case was applied to test each primitive. Real environments, however, combine several use cases. Therefore, in practice, an advanced behavior controller needs to sense the environment and predict the situation so that the adequate motion primitive or control mode is selected. C) Acquisition of knowledge and situation decision So far, a rule-based algorithm has been applied to
plan the motion of the robot in the environment. It worked for simple use cases, but it is difficult to respond to every situation that arises in the real world. The robot needs to acquire knowledge and decide its next motion according to the situation. An AI-based approach is one possibility for breaking through this problem. D) Human movement understanding When humanoid robots move in the real environment, we cannot ignore the physical interaction between the robot and humans. In this case, estimation of human movement becomes important for the robot to decide its next motion. Currently, I am working on this problem by applying machine learning to motion capture data; the result will be connected to the robot soon. E) Human-centered approach to realize robust balance control Currently, ZMP and COP are the key quantities for realizing stable balance control of humanoids. However, they work only in fixed and stable environments. When we think about the next humanoid robots working in our daily environment or
disaster area, innovative progress in balance control, as well as an understanding of human balance control, is important. To achieve a breakthrough in the balance control of humanoid robots, I am taking a human-centered approach to realize robust balance control, again applying machine learning to motion capture data; the result will be connected to the robot soon.
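The torque-to-position conversion at the heart of the Torque Transformer summarized in this chapter can be illustrated with a minimal sketch. It assumes each joint runs a stiff PD position servo; the function name, signature, and gains are hypothetical simplifications for illustration, not the actual ASIMO controller.

```python
def torque_to_position_increments(tau_des, kp, kd, dq):
    """Convert desired joint torques into joint position increments.

    Assumes each joint i is driven by a stiff PD position servo,
        tau_i = kp_i * (q_cmd_i - q_i) - kd_i * dq_i,
    so commanding the increment dq_cmd_i = (tau_des_i + kd_i * dq_i) / kp_i
    makes the servo output approximately the desired torque tau_des_i.
    All gains here are hypothetical placeholders.
    """
    return [(tau + k_d * v) / k_p
            for tau, k_p, k_d, v in zip(tau_des, kp, kd, dq)]

# Example: a 100 Nm/rad servo asked to exert 5 Nm at rest
# is offset by 0.05 rad from the measured joint position.
increments = torque_to_position_increments([5.0], [100.0], [10.0], [0.0])
```

At each control cycle the increment is added to the measured joint position to form the next position command, which is how open-loop torque control can be layered on a position-controlled joint.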