    • 31. Granted invention patent
    • High-confidence labeling of video volumes in a video sharing service
    • Publication number: US08983192B2
    • Publication date: 2015-03-17
    • Application number: US13601802
    • Filing date: 2012-08-31
    • Inventors: Rahul Sukthankar; Jay Yagnik
    • IPC: G06K9/46; H04N9/82; H04N21/234
    • CPC: H04N9/8205; G06K9/00718; G06K9/00744; H04N21/23418
    • A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
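
The abstract above outlines a pipeline rather than code. The sketch below is a minimal, hypothetical illustration of two of its ingredients: a codebook-based volume-level feature (only temporal-order co-occurrences stand in for the spatial and temporal relationships the abstract mentions) and a weak classifier trained from video-level labels that keeps only high-confidence volume predictions. All data, dimensions, and thresholds are invented; this is not the patented method.

```python
# Minimal, hypothetical sketch (not the patented method): a codebook-based
# volume-level feature plus a weak classifier trained from video-level labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

K = 16  # assumed codebook size

def build_codebook(descriptors, k=K, seed=0):
    """Cluster local descriptors (n, d) into a k-entry codebook."""
    return KMeans(n_clusters=k, n_init=4, random_state=seed).fit(descriptors)

def volume_feature(descriptors, times, codebook, k=K):
    """Volume-level feature: codeword histogram plus, for each codeword pair
    (a, b), the normalized count of occurrences where a precedes b in time."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    hist /= max(hist.sum(), 1.0)
    order = np.zeros((k, k))
    for i in range(len(words)):
        for j in range(len(words)):
            if times[i] < times[j]:
                order[words[i], words[j]] += 1
    order /= max(order.sum(), 1.0)
    return np.concatenate([hist, order.ravel()])

# Toy data: every volume inherits the label of the video it came from
# (weak supervision from video-level labels).
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(600, 8)))

def random_volume():
    return volume_feature(rng.normal(size=(40, 8)), rng.uniform(0, 10, 40), codebook)

X = np.stack([random_volume() for _ in range(80)])
video_labels = rng.integers(0, 2, size=80)   # labels assigned to whole videos

weak_clf = LogisticRegression(max_iter=1000).fit(X, video_labels)
proba = weak_clf.predict_proba(X)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)    # keep only confident volume labels
print(f"{int(confident.sum())} of {len(X)} volumes labeled with high confidence")
```
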
    • 32. Granted invention patent
    • Filter based object detection using hash functions
    • Publication number: US08977627B1
    • Publication date: 2015-03-10
    • Application number: US13286963
    • Filing date: 2011-11-01
    • Inventors: Sudheendra Vijayanarasimhan; Jay Yagnik
    • IPC: G06F7/00; G06K9/46
    • CPC: G06K9/46; G06K9/6232; G06K9/6251
    • This disclosure relates to filter based object detection using hash functions. A hashing component can compute respective hash values for a set of object windows that are associated with an image to be scanned. The hashing component can employ various hash functions in connection with computing the hash values, such as a winner takes all (WTA) hash function. A filter selection component can compare the respective hash values of the object windows against a hash table of object filters, and can select one or more object filters for recognizing or localizing at least one of an object within the image as a function of the comparison.
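
As a rough illustration of the hashing idea named in the abstract, the sketch below computes winner-takes-all (WTA) hash codes for object filters and for a candidate window, then looks up filters that collide in many hash bands. The permutation count, band-matching threshold, and filter names are assumptions for illustration, not the patented implementation.

```python
# Rough sketch of winner-takes-all (WTA) hashing used as a fast lookup of
# candidate object filters; parameters and names are illustrative assumptions.
import numpy as np
from collections import defaultdict

def wta_hash(x, perms, k=4):
    """For each permutation, keep the first k permuted entries and record the
    index of the maximum; the code depends only on rank order, not magnitude."""
    return [int(np.argmax(x[p[:k]])) for p in perms]

rng = np.random.default_rng(1)
dim, n_perms = 32, 64
perms = [rng.permutation(dim) for _ in range(n_perms)]

# Hypothetical object "filters", each summarized by a feature vector.
filters = {name: rng.normal(size=dim) for name in ("car", "cat", "chair")}

# Hash table: (band index, band code) -> filters that produced that code.
table = defaultdict(set)
for name, vec in filters.items():
    for band, code in enumerate(wta_hash(vec, perms)):
        table[(band, code)].add(name)

def candidate_filters(window_vec, min_matches=30):
    """Count per-filter band collisions for an object window and keep filters
    whose codes agree with the window's in at least min_matches bands."""
    votes = defaultdict(int)
    for band, code in enumerate(wta_hash(window_vec, perms)):
        for name in table.get((band, code), ()):
            votes[name] += 1
    return {name: v for name, v in votes.items() if v >= min_matches}

# A window resembling the "car" filter collides with it in most bands.
window = filters["car"] + 0.1 * rng.normal(size=dim)
print(candidate_filters(window))
```
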
    • 34. Granted invention patent
    • Automatic video and dense image-based geographic information matching and browsing
    • Publication number: US08593485B1
    • Publication date: 2013-11-26
    • Application number: US12431279
    • Filing date: 2009-04-28
    • Inventors: Dragomir Anguelov; Abhijit Ogale; Ehud Rivlin; Jay Yagnik
    • IPC: G09G5/00
    • CPC: G09G5/377; G06F3/1462; G09G2340/12; G09G2340/14; G09G2354/00; G09G2370/022; G09G2380/00
    • Methods and systems permit automatic matching of videos with images from dense image-based geographic information systems. In some embodiments, video data including image frames is accessed. The video data may be segmented to determine a representative image frame of a segment of the video data. Data representing information from the representative image frame may be automatically compared with data representing information from a plurality of image frames of an image-based geographic information data system. Such a comparison may, for example, involve a search for a best match between geometric features, histograms, color data, texture data, etc. of the compared images. Based on the automatic comparing, an association between the video and one or more images of the image-based geographic information data system may be generated. The association may represent a geographic correlation between selected images of the system and the video data.
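
One of the comparison strategies the abstract mentions is histogram matching between a representative video frame and geo-referenced images. The sketch below illustrates only that step, with synthetic images standing in for street-level imagery; the corpus, coordinates, and similarity measure are assumptions, not the patented system.

```python
# Illustration of one comparison strategy from the abstract: matching a
# representative video frame to geo-referenced images by color-histogram
# similarity. All data here is synthetic.
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel histogram of an RGB image (H, W, 3), L1-normalized."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / max(h.sum(), 1.0)

def histogram_intersection(a, b):
    return float(np.minimum(a, b).sum())

rng = np.random.default_rng(2)

# Hypothetical geo-referenced corpus: (lat, lon) -> synthetic image with a
# distinctive dominant color.
base_colors = [(200, 60, 60), (60, 200, 60), (60, 60, 200),
               (200, 200, 60), (120, 120, 120)]
corpus = {
    (37.42 + 0.01 * i, -122.08):
        np.clip(np.zeros((64, 64, 3)) + np.array(c, float)
                + rng.normal(0, 20, (64, 64, 3)), 0, 255)
    for i, c in enumerate(base_colors)
}

# Representative frame of a video segment: a noisy copy of one corpus image,
# standing in for a frame shot at that location.
true_loc = list(corpus)[3]
frame = np.clip(corpus[true_loc] + rng.normal(0, 15, (64, 64, 3)), 0, 255)

frame_hist = color_histogram(frame)
best_loc = max(corpus, key=lambda loc: histogram_intersection(
    frame_hist, color_histogram(corpus[loc])))
print("best geographic match:", best_loc, "| true location:", true_loc)
```
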
    • 36. Invention patent application
    • High-Confidence Labeling of Video Volumes in a Video Sharing Service
    • Publication number: US20130114902A1
    • Publication date: 2013-05-09
    • Application number: US13601802
    • Filing date: 2012-08-31
    • Inventors: Rahul Sukthankar; Jay Yagnik
    • IPC: G06K9/46
    • CPC: H04N9/8205; G06K9/00718; G06K9/00744; H04N21/23418
    • A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
    • 37. Granted invention patent
    • Signal processing by ordinal convolution
    • Publication number: US08417751B1
    • Publication date: 2013-04-09
    • Application number: US13289416
    • Filing date: 2011-11-04
    • Inventors: Jay Yagnik
    • IPC: G06F17/10; G06F17/15; G06K9/64
    • CPC: G06F17/15; G06K9/6202
    • Convolutions are frequently used in signal processing. A method for performing an ordinal convolution is disclosed. In an embodiment of the disclosed subject matter, an ordinal mask may be obtained. The ordinal mask may describe a property of a signal. A representation of a signal may be received. A processor may convert the representation of the signal to an ordinal representation of the signal. The ordinal mask may be applied to the ordinal representation of the signal. Based upon the application of the ordinal mask to the ordinal representation of the signal, it may be determined that the property is present in the signal. The ordinal convolution method described herein may be applied to any type of signal processing method that relies on a transform or convolution.
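
The sketch below illustrates the general idea in the abstract, under the assumption that the "ordinal representation" is the rank order of samples in a sliding window and that applying the "ordinal mask" means scoring pairwise rank agreement with the mask's own rank order. The patent's actual formulation may differ; the data is synthetic.

```python
# Minimal sketch of an ordinal (rank-order) convolution under the stated
# assumptions; not the patented formulation.
import numpy as np

def ordinal(window):
    """Rank of each sample within the window (0 = smallest)."""
    return np.argsort(np.argsort(window))

def ordinal_convolve(signal, mask):
    """Slide the ordinal mask over the signal; score each position by the
    fraction of sample pairs whose relative order matches the mask's order."""
    m = len(mask)
    mask_ranks = ordinal(mask)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    scores = []
    for start in range(len(signal) - m + 1):
        r = ordinal(signal[start:start + m])
        agree = sum((r[i] < r[j]) == (mask_ranks[i] < mask_ranks[j])
                    for i, j in pairs)
        scores.append(agree / len(pairs))
    return np.array(scores)

# A rising ramp (the "property" of interest) embedded in a noisy signal, and a
# mask whose ordinal pattern is "monotonically increasing over seven samples".
rng = np.random.default_rng(3)
signal = rng.normal(0.0, 1.0, 40)
signal[20:27] = np.arange(7) * 1.5
mask = np.arange(7, dtype=float)

scores = ordinal_convolve(signal, mask)
print("max rank-agreement", float(scores.max()), "at position", int(scores.argmax()))
```
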
    • 38. Granted invention patent
    • Learning concepts for video annotation
    • Publication number: US08396286B1
    • Publication date: 2013-03-12
    • Application number: US12822727
    • Filing date: 2010-06-24
    • Inventors: Hrishikesh Aradhye; George Toderici; Jay Yagnik
    • IPC: G06K9/62; G06K9/66; G06K9/00
    • CPC: G06K9/00718; G06K9/6262
    • A concept learning module trains video classifiers associated with a stored set of concepts derived from textual metadata of a plurality of videos, the training based on features extracted from training videos. Each of the video classifiers can then be applied to a given video to obtain a score indicating whether or not the video is representative of the concept associated with the classifier. The learning process does not require any concepts to be known a priori, nor does it require a training set of videos having training labels manually applied by human experts. Rather, in one embodiment the learning is based solely upon the content of the videos themselves and on whatever metadata was provided along with the video, e.g., on possibly sparse and/or inaccurate textual metadata specified by a user of a video hosting service who submitted the video.
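
The abstract describes weak supervision from textual metadata rather than hand labels. The sketch below illustrates that setup on toy data: candidate concepts are mined from metadata tokens, metadata presence serves as a noisy label, and one classifier per concept is trained on content features alone. The metadata, features, and token-frequency threshold are invented for illustration.

```python
# Sketch of weakly supervised concept learning from video metadata; the toy
# corpus and features are assumptions, not the patented system.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Toy corpus of (metadata text, content feature vector); "cat" videos share a
# common feature direction so that something is actually learnable.
cat_direction = rng.normal(size=16)
videos = []
for i in range(200):
    is_cat = (i % 2 == 0)
    meta = "funny cat video" if is_cat else "city skyline timelapse"
    feat = rng.normal(size=16) + (2.0 * cat_direction if is_cat else 0.0)
    videos.append((meta, feat))

# 1) Derive candidate concepts from metadata: frequently occurring tokens.
token_counts = Counter(tok for meta, _ in videos for tok in meta.split())
concepts = [tok for tok, n in token_counts.items() if n >= 20]

# 2) Train one classifier per concept using metadata presence as the label.
X = np.stack([feat for _, feat in videos])
classifiers = {}
for concept in concepts:
    y = np.array([int(concept in meta.split()) for meta, _ in videos])
    classifiers[concept] = LogisticRegression(max_iter=1000).fit(X, y)

# 3) Score an unseen video from its content features alone (no metadata).
new_feat = (rng.normal(size=16) + 2.0 * cat_direction).reshape(1, -1)
for concept, clf in classifiers.items():
    print(f"{concept}: {clf.predict_proba(new_feat)[0, 1]:.2f}")
```
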
    • 40. Granted invention patent
    • Method and system for automated annotation of persons in video content
    • Publication number: US08213689B2
    • Publication date: 2012-07-03
    • Application number: US12172939
    • Filing date: 2008-07-14
    • Inventors: Jay Yagnik; Ming Zhao
    • IPC: G06K9/00; G06K9/62
    • CPC: G06K9/00711; G06F17/30781; G06K9/00295; G06K9/6255
    • Methods and systems for automated annotation of persons in video content are disclosed. In one embodiment, a method of identifying faces in a video includes the stages of: generating face tracks from input video streams; selecting key face images for each face track; clustering the face tracks to generate face clusters; creating face models from the face clusters; and correlating face models with a face model database. In another embodiment, a system for identifying faces in a video includes a face model database having face entries with face models and corresponding names, and a video face identifier module. In yet another embodiment, the system for identifying faces in a video can also have a face model generator.
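
As a rough sketch of the staged pipeline named in the abstract (face tracks, clustering, face models, correlation with a name database), the following uses toy embeddings in place of real face tracks; the embedding model, clustering choice, and name database are assumptions for illustration, not the patented system.

```python
# Sketch of face-track clustering and database correlation on toy embeddings.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical face model database: name -> embedding of a known person.
db = {"Alice": unit(rng.normal(size=32)), "Bob": unit(rng.normal(size=32))}

# Face tracks from the video, each summarized by its key face image embedding;
# here, noisy copies of the database identities stand in for detected faces.
tracks = [unit(db["Alice"] + 0.1 * rng.normal(size=32)) for _ in range(4)]
tracks += [unit(db["Bob"] + 0.1 * rng.normal(size=32)) for _ in range(3)]
X = np.stack(tracks)

# Cluster the tracks (two people appear in this toy video).
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# One face model per cluster (mean embedding), correlated with the database
# by cosine similarity to attach a name.
for cluster in sorted(set(labels)):
    model = unit(X[labels == cluster].mean(axis=0))
    name = max(db, key=lambda n: float(model @ db[n]))
    print(f"cluster {cluster}: {int((labels == cluster).sum())} track(s) -> {name}")
```
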