    • 2. Invention application
    • SYSTEMS AND METHODS FOR DYNAMICALLY CREATING HYPERLINKS ASSOCIATED WITH RELEVANT MULTIMEDIA CONTENT
    • Publication No.: US20090235150A1
    • Publication Date: 2009-09-17
    • Application No.: US12405298
    • Filing Date: 2009-03-17
    • Inventor(s): Matthew G. Berry
    • Applicant(s): Matthew G. Berry
    • IPC: G06F17/00; G06F17/30
    • CPC: G06F17/30038; G06F17/30781; G06F17/30876; G06F17/3089
    • The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
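Below is a minimal Python sketch of the keyword-to-hyperlink flow the abstract above describes. The insert_hyperlinks function and the media_index dictionary are illustrative stand-ins: in the disclosure the related multimedia content comes from a contextual query against a metadata-indexed database, not from a flat keyword-to-URL map.

```python
import re


def insert_hyperlinks(text: str, media_index: dict[str, str]) -> str:
    """Wrap the first occurrence of each known keyword in an HTML hyperlink
    pointing at the related multimedia content (illustrative stand-in for the
    patent's database-driven lookup)."""
    for keyword, media_url in media_index.items():
        pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
        replacement = f'<a href="{media_url}">\\g<0></a>'  # \g<0> keeps the matched text
        text = pattern.sub(replacement, text, count=1)
    return text


if __name__ == "__main__":
    # Hypothetical keyword-to-media mapping used only for this demo.
    index = {"solar eclipse": "https://example.com/media/eclipse.mp4"}
    print(insert_hyperlinks("Footage of the solar eclipse aired today.", index))
```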
    • 4. Invention grant
    • Systems and methods for identifying pre-inserted and/or potential advertisement breaks in a video sequence
    • Publication No.: US08311390B2
    • Publication Date: 2012-11-13
    • Application No.: US12466167
    • Filing Date: 2009-05-14
    • Inventor(s): Matthew G. Berry
    • Applicant(s): Matthew G. Berry
    • IPC: H04N9/82
    • CPC: H04N7/173; G06K9/00711; H04N5/76; H04N9/898; H04N21/233; H04N21/23418; H04N21/6125; H04N21/812; H04N21/84
    • The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the video file. If one of the generated edge responses for a particular frame is less than a predefined threshold, then the module identifies the particular frame as the start of an advertisement break. The module then generates further edge responses for frames subsequent to the identified particular frame. Once an edge response is generated for a particular subsequent frame that is greater than the threshold, it is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the advertisement break from the video file, etc. Optionally, various time and/or frame thresholds, as well as an audio verification process, are used to validate the identified advertisement break.
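The following Python sketch illustrates the per-frame edge-response threshold test described in the abstract above. The gradient-magnitude edge_response measure, the function names, and the single fixed threshold are assumptions for illustration; the time/frame thresholds and the audio verification step mentioned in the abstract are omitted.

```python
import numpy as np


def edge_response(gray_frame: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale frame (one possible edge measure)."""
    gy, gx = np.gradient(gray_frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))


def find_ad_breaks(frames: list[np.ndarray], threshold: float) -> list[tuple[int, int]]:
    """Return (start, end) frame indices: a break starts when the edge response
    drops below the threshold and ends when it rises back above it."""
    breaks: list[tuple[int, int]] = []
    start = None
    for i, frame in enumerate(frames):
        response = edge_response(frame)
        if start is None and response < threshold:
            start = i                   # low-detail frame: candidate break begins
        elif start is not None and response > threshold:
            breaks.append((start, i))   # detail returns: break ends at this frame
            start = None
    return breaks
```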
    • 5. Invention grant
    • Systems and methods for semantically classifying shots in video
    • Publication No.: US08311344B2
    • Publication Date: 2012-11-13
    • Application No.: US12372561
    • Filing Date: 2009-02-17
    • Inventor(s): Heather Dunlop; Matthew G. Berry
    • Applicant(s): Heather Dunlop; Matthew G. Berry
    • IPC: G06K9/62
    • CPC: G06K9/00664; G06K9/00711
    • The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.
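A rough Python sketch of the per-frame pipeline described in the abstract above (the same disclosure appears again as the published application in entry 8). The material_model and scene_model callables, the 4x4 grid, and the brightness/sharpness quality checks are illustrative assumptions; the disclosure does not commit to a particular classifier or frame-quality test.

```python
import numpy as np


def usable(gray: np.ndarray, min_brightness: float = 30.0, min_sharpness: float = 5.0) -> bool:
    """Discard frames that are too dark or too blurry to classify reliably."""
    gy, gx = np.gradient(gray.astype(float))
    return gray.mean() > min_brightness and np.hypot(gx, gy).var() > min_sharpness


def material_arrangement_vector(gray: np.ndarray, material_model, grid=(4, 4)) -> np.ndarray:
    """Score each grid cell with the material classifier and concatenate the
    per-cell score vectors, capturing the spatial arrangement of materials."""
    rows, cols = grid
    h, w = gray.shape
    cells = [gray[r * h // rows:(r + 1) * h // rows, c * w // cols:(c + 1) * w // cols]
             for r in range(rows) for c in range(cols)]
    return np.concatenate([np.asarray(material_model(cell)) for cell in cells])


def classify_video(frames, material_model, scene_model) -> np.ndarray:
    """Average the per-frame scene score vectors over all usable frames."""
    vectors = [material_arrangement_vector(f, material_model) for f in frames if usable(f)]
    return np.mean([scene_model(v) for v in vectors], axis=0)
```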
    • 6. Invention grant
    • Integrated systems and methods for video-based object modeling, recognition, and tracking
    • Publication No.: US08170280B2
    • Publication Date: 2012-05-01
    • Application No.: US12327589
    • Filing Date: 2008-12-03
    • Inventor(s): Liang Zhao; Matthew G. Berry
    • Applicant(s): Liang Zhao; Matthew G. Berry
    • IPC: G06K9/00; G06F3/00
    • CPC: G06K9/6255; G06K9/00295; G06K9/00711; G06T7/248; G06T7/40; G06T2207/10016; G06T2207/20081; G06T2207/30201
    • The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images.
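A short Python sketch of the similarity-threshold grouping step from the abstract above. The histogram-based similarity measure and the group_detections helper are illustrative assumptions; building an object model from the optimal images in each group, and matching against other object models, are not shown.

```python
import numpy as np


def similarity(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Cosine similarity of grayscale intensity histograms, in [0, 1]."""
    hist_a, _ = np.histogram(img_a, bins=bins, range=(0, 255), density=True)
    hist_b, _ = np.histogram(img_b, bins=bins, range=(0, 255), density=True)
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return float(hist_a @ hist_b / denom) if denom else 0.0


def group_detections(detections: list[np.ndarray], threshold: float = 0.8) -> list[list[np.ndarray]]:
    """Assign each successive object image to the current group when it is
    similar enough to the previous image; otherwise start a new group."""
    groups: list[list[np.ndarray]] = []
    for img in detections:
        if groups and similarity(groups[-1][-1], img) >= threshold:
            groups[-1].append(img)   # same object: extend the current group
        else:
            groups.append([img])     # similarity too low: start a new group
    return groups
```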
    • 8. Invention application
    • SYSTEMS AND METHODS FOR SEMANTICALLY CLASSIFYING SHOTS IN VIDEO
    • Publication No.: US20090208106A1
    • Publication Date: 2009-08-20
    • Application No.: US12372561
    • Filing Date: 2009-02-17
    • Inventor(s): Heather Dunlop; Matthew G. Berry
    • Applicant(s): Heather Dunlop; Matthew G. Berry
    • IPC: G06K9/34; G06K9/62
    • CPC: G06K9/00664; G06K9/00711
    • The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.