    • 2. Granted Invention Patent
    • Apparatus and method for identifying and tracking objects with view-based representations
    • Publication: US06526156B1 (2003-02-25)
    • Application: US08923436 (filed 1997-09-03)
    • Inventors: Michael J. Black; Allan D. Jepson
    • IPC: G06K9/00
    • CPC: G06K9/32; G06K9/6203; G06K9/6214; G06K9/6247; G06T7/251; G06T2207/10016
    • A system tracks and identifies view-based representations of an object through a sequence of images. As the view of the object changes due to its motion or the motion of its recording device, the object is identified by matching an image region containing the object with a set of basis images represented by an eigenspace. The eigenspace is generated from a training set of images which records different views of the object. The system identifies the object in the image region by simultaneously computing a transformation that aligns the image region with the eigenspace, and computing coefficients of a combination of linear eigenvectors that reconstruct the image region. This identification and tracking system operates when views of the object in the image are deformed under some transformation with respect to the eigenspace. Matching between the image region and the eigenspace is performed using a robust regression formulation that uses a coarse to fine strategy with incremental refinement. As the incremental refinement registers the image region with the eigenspace, the identification of a match between the object in an image region and the eigenspace improves. The transformation that warps the image region of a current image frame into alignment with the eigenspace is then used to track the object in a subsequent image frame.
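The eigenspace matching described in the abstract can be illustrated with plain PCA: build a basis from training views, then identify a region by how well a linear combination of eigenvectors reconstructs it. A minimal sketch (function names and the use of SVD are my choices; the patent's transformation alignment, robust regression, and coarse-to-fine refinement are omitted):

```python
import numpy as np

def build_eigenspace(training_images, k=3):
    """Stack flattened training views and keep the top-k principal
    directions -- the basis images spanning the eigenspace."""
    X = np.stack([im.ravel() for im in training_images])  # (n_views, n_pixels)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                                   # rows are eigenvectors

def match_region(region, mean, basis):
    """Project an image region onto the eigenspace and measure how well
    the linear combination of eigenvectors reconstructs it."""
    x = region.ravel() - mean
    coeffs = basis @ x                # coefficients of the linear combination
    recon = basis.T @ coeffs + mean   # reconstruction from the eigenspace
    error = np.linalg.norm(region.ravel() - recon)
    return coeffs, error
```

A low reconstruction error signals that the region matches a view of the trained object; the patent additionally estimates a warp that registers the region with the eigenspace and reuses that warp to track the object into the next frame.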
    • 4. Granted Invention Patent
    • Apparatus and method for producing patterned tufted goods
    • Publication: US5588383A (1996-12-31)
    • Application: US397742 (filed 1995-03-02)
    • Inventors: David L. Davis; Michael J. Black; Richard A. Dolf; Sean E. Gorman; John M. Havard; Milton R. Sigelmann
    • IPC: B65H49/16; B65H57/16; D05C15/14; D05C15/18; D05C15/24; D05C15/28; D05C15/34
    • CPC: D05C15/34; B65H49/16; B65H57/16; D05C15/14; D05C15/18; D05C15/24; D05C15/28; B65H2701/31
    • An apparatus for tufting yarn in a backing comprising a yarn applicator for penetrating the backing and implanting the yarn therein and an electric motor for supplying a predetermined length of the yarn to the yarn applicator. The electric motor is operable to selectively advance the predetermined length of yarn to the yarn applicator, and alternatively, hold the yarn or retract the yarn from the applicator. Desirably, the electric motor is a stepper motor, and more desirably, the apparatus comprises a plurality of stepper motors for selectively feeding yarns to a row of reciprocable hollow tufting needles for producing a patterned tufted product. According to one aspect, the tufting apparatus comprises a modular supply system and a corresponding modular control system. Pattern information and timing signals are sent to modular yarn control units by a remote process control computer system. According to another aspect, an apparatus for tufting yarn in a backing is provided wherein a flexible yarn supply tube extends from the outlet of a stationary manifold to the inlet of a reciprocable needle mount for a hollow tufting needle, so that during reciprocation of the needle, yarn does not move relative to a yarn feed path due to the reciprocation of the needle. This allows for yarn to be fed to the needle during the entire needle reciprocation cycle. A yarn movement monitoring apparatus, a yarn movement managing apparatus, a needle assembly for tufting yarn in a backing, and a knife assembly for mounting to a frame and cutting yarn implanted into a backing by a hollow needle are also encompassed.
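The per-needle advance/hold/retract control the abstract describes can be sketched as a small command interpreter for one modular yarn control unit. All names and the steps-per-feed figure below are hypothetical; the sketch only shows the three-way stepper command logic, not timing or hardware I/O:

```python
from enum import Enum

class YarnCommand(Enum):
    ADVANCE = 1   # feed the predetermined yarn length to the needle
    HOLD = 0      # clamp the yarn in place for this cycle
    RETRACT = -1  # pull yarn back from the applicator

class YarnControlUnit:
    """One modular control unit driving the stepper motor of a single
    hollow tufting needle."""
    def __init__(self, steps_per_feed=200):
        self.steps_per_feed = steps_per_feed  # steps per predetermined length
        self.position = 0                     # cumulative step count

    def execute(self, command):
        self.position += command.value * self.steps_per_feed
        return self.position

def run_pattern(unit, pattern):
    """Apply a row of pattern information, one command per tufting cycle,
    as sent by the remote process control computer."""
    return [unit.execute(cmd) for cmd in pattern]
```

In the patented system one such unit exists per needle in the row, so a pattern row fans out to many units in parallel.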
    • 6. Invention Patent Application
    • SYSTEMS AND METHODS FOR DYNAMIC POWER ALLOCATION
    • Publication: US20100295376A1 (2010-11-25)
    • Application: US12535829 (filed 2009-08-05)
    • Inventors: Eric K. Black; Michael J. Black
    • IPC: H02J1/00
    • CPC: H02J7/34; H02J7/0068; H02J7/35; Y10T307/696
    • Disclosed herein are systems and methods for providing power to a load. Systems according to the present embodiment may include an electrical generator to generate electrical power and a battery to store electrical power. The present disclosure may be applied to electrical power generators having variable outputs, and may be utilized to provide a more constant electrical output by drawing power from the electrical generator and the battery as necessary to satisfy the power requirements of the electrical load. The system draws power from the electrical power generator and the battery to satisfy the power requirements of the load. Based on a mode of operation, the system may draw power only from the battery, only from the electrical power generator, or from both the electrical power generator and the battery alternately.
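The mode-based source selection can be sketched as a per-cycle split of the load between generator and battery. The mode names and the shortfall rule in the hybrid branch are my reading of the abstract, not the application's exact control law:

```python
from enum import Enum

class Mode(Enum):
    BATTERY_ONLY = "battery"
    GENERATOR_ONLY = "generator"
    HYBRID = "hybrid"

def allocate(load_w, generator_w, mode):
    """Split the load (watts) between sources for one control cycle;
    returns (from_generator, from_battery)."""
    if mode is Mode.BATTERY_ONLY:
        return 0.0, load_w
    if mode is Mode.GENERATOR_ONLY:
        return load_w, 0.0
    # HYBRID: the variable-output generator covers what it can and the
    # battery supplies the shortfall, smoothing the delivered output.
    gen = min(load_w, generator_w)
    return gen, load_w - gen
```

For example, with a 100 W load and a generator momentarily producing only 60 W, the hybrid branch draws the remaining 40 W from the battery, which is how the claimed system keeps the output to the load roughly constant.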
    • 7. Granted Invention Patent
    • Visual motion analysis method for detecting arbitrary numbers of moving objects in image sequences
    • Publication: US06954544B2 (2005-10-11)
    • Application: US10155815 (filed 2002-05-23)
    • Inventors: Allan D. Jepson; David J. Fleet; Michael J. Black
    • IPC: G06T7/20; G06K9/00
    • CPC: G06K9/32; G06T7/215; G06T7/251; G06T2207/10016; G06T2207/30196
    • A visual motion analysis method that uses multiple layered global motion models to both detect and reliably track an arbitrary number of moving objects appearing in image sequences. Each global model includes a background layer and one or more foreground “polybones”, each foreground polybone including a parametric shape model, an appearance model, and a motion model describing an associated moving object. Each polybone includes an exclusive spatial support region and a probabilistic boundary region, and is assigned an explicit depth ordering. Multiple global models having different numbers of layers, depth orderings, motions, etc., corresponding to detected objects are generated, refined using, for example, an EM algorithm, and then ranked/compared. Initial guesses for the model parameters are drawn from a proposal distribution over the set of potential (likely) models. Bayesian model selection is used to compare/rank the different models, and models having relatively high posterior probability are retained for subsequent analysis.
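The ranking of candidate layered models by posterior probability can be sketched with the Bayesian information criterion, a standard large-sample stand-in for the marginal likelihood. This is only an illustration of the model-comparison step; the patent's proposal distribution and EM refinement are not shown, and all names and numbers below are hypothetical:

```python
import math

def bic_score(log_likelihood, n_params, n_obs):
    """BIC-style approximation to the log posterior model probability:
    fit reward minus a complexity penalty per extra parameter."""
    return log_likelihood - 0.5 * n_params * math.log(n_obs)

def rank_models(candidates, n_obs, keep=2):
    """candidates: list of (name, log_likelihood, n_params) for models
    with different numbers of layers, depth orderings, motions, etc.
    Retain the highest-scoring models for subsequent analysis."""
    scored = sorted(candidates,
                    key=lambda c: bic_score(c[1], c[2], n_obs),
                    reverse=True)
    return [name for name, _, _ in scored[:keep]]
```

A model with an extra foreground polybone is kept only when its improved fit outweighs the penalty for its additional shape, appearance, and motion parameters.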
    • 8. Granted Invention Patent
    • Method and apparatus for generating a condensed version of a video sequence including desired affordances
    • Publication: US06560281B1 (2003-05-06)
    • Application: US09028548 (filed 1998-02-24)
    • Inventors: Michael J. Black; Xuan Ju; Scott Minneman; Donald G. Kimber
    • IPC: H04B1/66
    • CPC: G06F17/30793; G06F17/30811; G06F17/30843; G06K9/6255
    • A method and apparatus analyzes and annotates a technical talk typically illustrated with overhead slides, wherein the slides are recorded in a video sequence. The video sequence is condensed and digested into key video frames adaptable for annotation to time and audio sequence. The system comprises a recorder for recording a technical talk as a sequential set of video image frames. A stabilizing processor segregates the video image frames into a plurality of associated subsets, each corresponding to a distinct slide displayed at the talk, and median-filters the subsets to generate a key frame representative of each subset. A comparator compares the key frame with the associated subsets to identify differences between the key frame and the associated subsets, which comprise nuisances and affordances. A gesture recognizer locates, tracks and recognizes gestures occurring in the subset as gesture affordances and identifies a gesture video frame representative of the gesture affordance. An integrator compiles the key frames and gesture video frames as a digest of the video image frames which can also be annotated with the time and audio sequence.
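The key-frame and comparator steps can be sketched directly: the pixelwise median over the stabilized frames of one slide suppresses the transient foreground (the speaker's hand and gestures), and differencing against that key frame yields the candidate nuisance/affordance pixels. A minimal sketch (function names and the threshold are mine):

```python
import numpy as np

def key_frame(frame_subset):
    """Median-filter a subset of stabilized frames belonging to one
    slide: the pixelwise median suppresses transient foreground and
    recovers the underlying slide as the key frame."""
    stack = np.stack(frame_subset)  # (n_frames, h, w)
    return np.median(stack, axis=0)

def affordance_mask(frame, key, threshold=0.1):
    """Pixels differing from the key frame are candidate affordances
    (or nuisances) handed on to the gesture recognizer."""
    return np.abs(frame.astype(float) - key) > threshold
```

The gesture recognizer then examines only the masked regions over time, which is far cheaper than analyzing every full frame.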
    • 9. Granted Invention Patent
    • Apparatus and method for tracking facial motion through a sequence of images
    • Publication: US5802220A (1998-09-01)
    • Application: US574176 (filed 1995-12-15)
    • Inventors: Michael J. Black; Yaser Yacoob
    • IPC: G06K9/00; G06T7/20; G06F9/36
    • CPC: G06K9/00248; G06K9/00315; G06T7/2006; G06T7/2046; G06T2207/10016; G06T2207/30201
    • A system tracks human head and facial features over time by analyzing a sequence of images. The system provides descriptions of motion of both head and facial features between two image frames. These descriptions of motion are further analyzed by the system to recognize facial movement and expression. The system analyzes motion between two images using parameterized models of image motion. Initially, a first image in a sequence of images is segmented into a face region and a plurality of facial feature regions. A planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images. The second image is warped or shifted back towards the first image using the estimated motion parameters of the planar model, in order to model the facial features relative to the first image. An affine model and an affine model with curvature are used to recover motion parameters that estimate the image motion between the segmented facial feature regions and the warped second image. The recovered motion parameters of the facial feature regions represent the relative motions of the facial features between the first image and the warped image. The face region in the second image is tracked using the recovered motion parameters of the face region. The facial feature regions in the second image are tracked using both the recovered motion parameters for the face region and the motion parameters for the facial feature regions. The parameters describing the motion of the face and facial features are filtered to derive mid-level predicates that define facial gestures occurring between the two images. These mid-level predicates are evaluated over time to determine facial expression and gestures occurring in the image sequence.
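The parameterized motion models at the heart of the abstract are standard: the six-parameter affine model gives the image flow at each pixel of a feature region, and applying that flow registers the second image back toward the first. A minimal sketch of the affine case (the planar and curvature terms the patent adds on top are omitted; function names are mine):

```python
def affine_flow(params, x, y):
    """Parameterized image motion: horizontal and vertical flow (u, v)
    at pixel (x, y) for the 6-parameter affine model
    u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y."""
    a0, a1, a2, a3, a4, a5 = params
    return a0 + a1 * x + a2 * y, a3 + a4 * x + a5 * y

def warp_points(points, params):
    """Shift feature-region points by the recovered motion parameters,
    modeling how the region moves between the two frames."""
    warped = []
    for x, y in points:
        u, v = affine_flow(params, x, y)
        warped.append((x + u, y + v))
    return warped
```

Pure translation corresponds to (a0, 0, 0, a3, 0, 0); the remaining parameters capture the rotation, scaling, and shear of a feature region, and thresholded over time they yield the mid-level predicates (e.g. "mouth curving upward") used for expression recognition.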
    • 10. Granted Invention Patent
    • Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
    • Publication: US5774591A (1998-06-30)
    • Application: US572776 (filed 1995-12-15)
    • Inventors: Michael J. Black; Yaser Yacoob
    • IPC: G06K9/00
    • CPC: G06K9/00315; G06K9/00248
    • A system tracks human head and facial features over time by analyzing a sequence of images. The system provides descriptions of motion of both head and facial features between two image frames. These descriptions of motion are further analyzed by the system to recognize facial movement and expression. The system analyzes motion between two images using parameterized models of image motion. Initially, a first image in a sequence of images is segmented into a face region and a plurality of facial feature regions. A planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images. The second image is warped or shifted back towards the first image using the estimated motion parameters of the planar model, in order to model the facial features relative to the first image. An affine model and an affine model with curvature are used to recover motion parameters that estimate the image motion between the segmented facial feature regions and the warped second image. The recovered motion parameters of the facial feature regions represent the relative motions of the facial features between the first image and the warped image. The face region in the second image is tracked using the recovered motion parameters of the face region. The facial feature regions in the second image are tracked using both the recovered motion parameters for the face region and the motion parameters for the facial feature regions. The parameters describing the motion of the face and facial features are filtered to derive mid-level predicates that define facial gestures occurring between the two images. These mid-level predicates are evaluated over time to determine facial expression and gestures occurring in the image sequence.