    • 2. Invention application
    • ROBOTIC DEVICE INCLUDING MACHINE VISION
    • US20160368148A1
    • 2016-12-22
    • US14746072
    • 2015-06-22
    • GM GLOBAL TECHNOLOGY OPERATIONS LLC
    • David W. Payton; Kyungnam Kim; Zhichao Chen; Ryan M. Uhlenbrock; Li Yang Ku
    • B25J9/16
    • B25J9/1697
    • A machine vision system for a controllable robotic device proximal to a workspace includes an image acquisition sensor arranged to periodically capture vision signal inputs each including an image of a field of view including the workspace. A controller operatively couples to the robotic device and includes a non-transitory memory component including an executable vision perception routine. The vision perception routine includes a focus loop control routine operative to dynamically track a focus object in the workspace and a background loop control routine operative to monitor a background of the workspace. The focus loop control routine executes simultaneously asynchronously in parallel with the background loop control routine to determine a combined resultant including the focus object and the background based upon the periodically captured vision signal inputs. The controller is operative to control the robotic device to manipulate the focus object based upon the focus loop control routine.
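The abstract above describes a focus loop and a background loop running asynchronously in parallel over the same captured frames. A minimal sketch of that two-loop structure is below; the class, method names, and frame format are illustrative assumptions, not taken from the patent.

```python
import threading

class VisionPerception:
    """Sketch of the two-loop perception routine: a focus loop tracks
    one object in every frame while a background loop monitors the
    scene at a lower rate; each runs in its own thread, and the
    combined resultant pairs their latest outputs."""

    def __init__(self, frames):
        self.frames = frames      # pre-captured vision signal inputs
        self.focus_track = []     # focus-object positions over time
        self.background = None    # latest background summary

    def _focus_loop(self):
        # High-rate loop: track the focus object in every frame.
        for frame in self.frames:
            self.focus_track.append(frame["object_pos"])

    def _background_loop(self):
        # Lower-rate loop: sample the background every 3rd frame
        # (the sampling ratio here is an arbitrary illustration).
        for frame in self.frames[::3]:
            self.background = frame["scene"]

    def run(self):
        # Both loops execute in parallel on the same captured inputs.
        t1 = threading.Thread(target=self._focus_loop)
        t2 = threading.Thread(target=self._background_loop)
        t1.start(); t2.start()
        t1.join(); t2.join()
        # Combined resultant: focus object plus background.
        return {"focus": self.focus_track[-1], "background": self.background}


frames = [{"object_pos": (i, i), "scene": f"scene{i}"} for i in range(6)]
result = VisionPerception(frames).run()
```

The patent's controller would then use such a combined resultant to command the robot toward the tracked focus object.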
    • 4. Invention grant
    • Rapid robotic imitation learning of force-torque tasks
    • US09403273B2
    • 2016-08-02
    • US14285867
    • 2014-05-23
    • GM GLOBAL TECHNOLOGY OPERATIONS LLC
    • David W. Payton; Ryan M. Uhlenbrock; Li Yang Ku
    • B25J9/16; G05B19/423
    • B25J9/163; B25J9/1664; G05B19/423; G05B2219/36442; Y10S901/03
    • A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
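The abstract describes segmenting measured force-torque training data into a time sequence of control primitives and identifying the transitions between adjacent segments. A toy version of that segmentation step can be sketched as follows; the threshold-crossing criterion is an assumption for illustration, since the patent abstract does not specify how transitions are detected.

```python
def segment_primitives(forces, threshold=1.0):
    """Split a 1-D force trace into a time sequence of control
    primitives, marking a transition wherever the signal crosses a
    threshold. Returns (start, end) index pairs per primitive.
    (Illustrative criterion only, not the patented method.)"""
    segments, start = [], 0
    for t in range(1, len(forces)):
        # A transition occurs when the force level crosses the threshold.
        crossed = (forces[t - 1] < threshold) != (forces[t] < threshold)
        if crossed:
            segments.append((start, t))
            start = t
    segments.append((start, len(forces)))
    return segments

# A free-move phase (low force), a contact phase (high force), a release:
trace = [0.1, 0.2, 0.3, 2.5, 2.6, 2.4, 0.2, 0.1]
print(segment_primitives(trace))  # → [(0, 3), (3, 6), (6, 8)]
```

During autonomous execution, the controller would watch for the same transitions and switch control modes at each segment boundary.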
    • 5. Invention application
    • METHOD FOR CALIBRATING AN ARTICULATED END EFFECTOR EMPLOYING A REMOTE DIGITAL CAMERA
    • US20160214255A1
    • 2016-07-28
    • US14602519
    • 2015-01-22
    • GM GLOBAL TECHNOLOGY OPERATIONS LLC
    • Ryan M. Uhlenbrock; Heiko Hoffmann
    • B25J9/16
    • B25J9/1653; B25J9/1674; B25J9/1692; B25J9/1697; G05B2219/40611; G06T7/70; Y10S901/09; Y10S901/47
    • A method for calibrating an articulable end effector of a robotic arm employing a digital camera includes commanding the end effector to achieve a plurality of poses. At each commanded end effector pose, an image of the end effector with the digital camera is captured and a scene point cloud including the end effector is generated based upon the captured image of the end effector. A synthetic point cloud including the end effector is generated based upon the commanded end effector pose, and a first position of the end effector is based upon the synthetic point cloud, and a second position of the end effector associated with the scene point cloud is determined. A position of the end effector is calibrated based upon the first position of the end effector and the second position of the end effector for the plurality of commanded end effector poses.
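The final step of the abstract compares, for each commanded pose, the end-effector position derived from the synthetic point cloud with the one observed in the scene point cloud. A deliberately simplified sketch is shown below: it estimates a constant per-axis correction as the mean offset between the two position sets. A real calibration would solve for a full rigid transform; the function name and data layout are assumptions.

```python
import statistics

def calibrate_offset(synthetic_positions, scene_positions):
    """Estimate a constant correction from paired end-effector
    positions: 'synthetic' ones predicted from commanded poses and
    'scene' ones observed by the camera. Returns the mean per-axis
    offset (a simplification of the patented calibration)."""
    offsets = [
        tuple(s - c for s, c in zip(scene, synth))
        for synth, scene in zip(synthetic_positions, scene_positions)
    ]
    # Average each axis across all commanded poses.
    return tuple(statistics.mean(axis) for axis in zip(*offsets))

# Three commanded poses; the camera sees the effector 2 mm off in x:
synth = [(0.0, 0.0, 0.0), (10.0, 0.0, 5.0), (20.0, 5.0, 5.0)]
scene = [(2.0, 0.0, 0.0), (12.0, 0.0, 5.0), (22.0, 5.0, 5.0)]
print(calibrate_offset(synth, scene))  # → (2.0, 0.0, 0.0)
```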
    • 8. Invention application
    • RAPID ROBOTIC IMITATION LEARNING OF FORCE-TORQUE TASKS
    • US20150336268A1
    • 2015-11-26
    • US14285867
    • 2014-05-23
    • GM GLOBAL TECHNOLOGY OPERATIONS LLC
    • David W. Payton; Ryan M. Uhlenbrock; Li Yang Ku
    • B25J9/16
    • B25J9/163; B25J9/1664; G05B19/423; G05B2219/36442; Y10S901/03
    • A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
    • 9. Invention application
    • VISUAL DEBUGGING OF ROBOTIC TASKS
    • US20150239127A1
    • 2015-08-27
    • US14189452
    • 2014-02-25
    • GM GLOBAL TECHNOLOGY OPERATIONS LLC.
    • Leandro G. Barajas; David W. Payton; Li Yang Ku; Ryan M. Uhlenbrock; Darren Earl
    • B25J9/16; G05B13/02
    • B25J9/1697; B25J9/1671; G05B13/026; G05B2219/40311
    • A robotic system includes a robot, sensors which measure status information including a position and orientation of the robot and an object within the workspace, and a controller. The controller, which visually debugs an operation of the robot, includes a simulator module, action planning module, and graphical user interface (GUI). The simulator module receives the status information and generates visual markers, in response to marker commands, as graphical depictions of the object and robot. An action planning module selects a next action of the robot. The marker generator module generates and outputs the marker commands to the simulator module in response to the selected next action. The GUI receives and displays the visual markers, selected future action, and input commands. Via the action planning module, the position and/or orientation of the visual markers are modified in real time to change the operation of the robot.
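The abstract describes a marker flow: the action planner issues marker commands for its selected next action, the simulator renders them as visual markers, and edits made through the GUI feed back into the robot's planned motion in real time. A bare-bones sketch of that feedback loop is below; all class and method names are illustrative assumptions.

```python
class MarkerDebugger:
    """Sketch of the visual-debugging marker flow: the planner emits a
    marker command for its selected next action, and a GUI edit to that
    marker becomes the new goal for the robot."""

    def __init__(self):
        self.markers = {}  # marker name -> graphical depiction state

    def plan_next_action(self, target_pose):
        # Planner selects an action and issues a marker command for it.
        self.markers["next_target"] = {"pose": target_pose, "editable": True}
        return target_pose

    def gui_edit(self, name, new_pose):
        # A user drags the marker in the GUI; the edit is applied in
        # real time and replaces the planned goal pose.
        self.markers[name]["pose"] = new_pose
        return new_pose


dbg = MarkerDebugger()
goal = dbg.plan_next_action((0.5, 0.2, 0.1))   # planner's proposal
goal = dbg.gui_edit("next_target", (0.5, 0.25, 0.1))  # user correction
print(goal)  # → (0.5, 0.25, 0.1)
```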