    • 1. Invention Application
    • Title: SONAR SYSTEM FOR AUTOMATICALLY DETECTING LOCATION OF DEVICES
    • Publication number: WO2013137958A2
    • Publication date: 2013-09-19
    • Application number: PCT/US2012069750
    • Filing date: 2012-12-14
    • Applicants: MILLAR JAMES; AGHDASI FARZIN; MILLAR GREG; PELCO INC
    • Inventors: MILLAR JAMES; AGHDASI FARZIN; MILLAR GREG
    • IPC: G01S3/80
    • CPC: G01S13/46; G01S15/025; G01S15/874; G01S2013/466
    • Abstract: Systems and methods are described for determining device positions in a video surveillance system. A method described herein includes generating a reference sound; emitting, at a first device, the reference sound; detecting, at the first device, a responsive reference sound from one or more second devices in response to the emitted reference sound; identifying a position of each of the one or more second devices; obtaining information relating to latency of the one or more second devices; computing a round trip time associated with each of the one or more second devices based on at least a timing of detecting the one or more responsive reference sounds and the latency of each of the one or more second devices; and estimating the position of the first device according to the round trip time and the position associated with each of the one or more second devices.
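The abstract above describes estimating a device's position from acoustic round-trip times to devices at known positions, after subtracting each responder's latency. Below is a minimal Python sketch of that idea, assuming 2-D positions, a fixed speed of sound, and simple least-squares multilateration; the function names and example numbers are illustrative, not from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed constant

def rtt_to_distance(rtt_s, latency_s):
    """Convert a measured round-trip time into a one-way distance after
    removing the responding device's processing latency (seconds)."""
    return SPEED_OF_SOUND * (rtt_s - latency_s) / 2.0

def estimate_position(anchors, distances):
    """Least-squares multilateration from the known positions of the
    responding (second) devices and the ranges derived from the RTTs."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2
         - d[1:] ** 2 + d[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative numbers: three responders at known positions, with RTTs chosen
# so the emitting device sits near (3, 4) metres.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
rtts = [0.0322, 0.0500, 0.0322]     # measured round trips, seconds
latencies = [0.003, 0.003, 0.003]   # reported per-device latency, seconds
dists = [rtt_to_distance(r, l) for r, l in zip(rtts, latencies)]
print(estimate_position(anchors, dists))   # approximately [3. 4.]
```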
    • 3. Invention Application
    • Title: SCENE ACTIVITY ANALYSIS USING STATISTICAL AND SEMANTIC FEATURE LEARNT FROM OBJECT TRAJECTORY DATA
    • Publication number: WO2012092148A2
    • Publication date: 2012-07-05
    • Application number: PCT/US2011066962
    • Filing date: 2011-12-22
    • Applicants: PELCO INC; MILLAR GREG; AGHDASI FARZIN; ZHU HONGWEI
    • Inventors: MILLAR GREG; AGHDASI FARZIN; ZHU HONGWEI
    • IPC: G06T7/20; H04N5/91; H04N7/18
    • CPC: G06K9/00785
    • Abstract: Trajectory information of objects appearing in a scene can be used to cluster trajectories into groups of trajectories according to each trajectory's relative distance between each other for scene activity analysis. By doing so, a database of trajectory data can be maintained that includes the trajectories to be clustered into trajectory groups. This database can be used to train a clustering system, and with extracted statistical features of resultant trajectory groups a new trajectory can be analyzed to determine whether the new trajectory is normal or abnormal. Embodiments described herein, can be used to determine whether a video scene is normal or abnormal. In the event that the new trajectory is identified as normal the new trajectory can be annotated with the extracted semantic data. In the event that the new trajectory is determined to be abnormal a user can be notified that an abnormal behavior has occurred.
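As a rough illustration of the clustering-and-scoring idea in the abstract above, the sketch below resamples trajectories to a fixed length, groups them with a toy k-means (standing in for whatever relative-distance clustering is actually used), and flags a new trajectory as abnormal when it is far from every learnt group; the function names and threshold are assumptions.

```python
import numpy as np

def resample(traj, n=16):
    """Resample an (x, y) trajectory to n evenly spaced points so that
    trajectories of different lengths can be compared directly."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(0, len(traj) - 1, n)
    xs = np.interp(t, np.arange(len(traj)), traj[:, 0])
    ys = np.interp(t, np.arange(len(traj)), traj[:, 1])
    return np.concatenate([xs, ys])

def cluster_trajectories(trajs, k=2, iters=25, seed=0):
    """Toy k-means over resampled trajectories, standing in for the
    relative-distance clustering the abstract describes."""
    X = np.stack([resample(t) for t in trajs])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def is_abnormal(traj, centers, threshold=3.0):
    """A new trajectory is flagged abnormal when it is far from every
    learnt cluster centre; otherwise it can be annotated as normal."""
    v = resample(traj)
    return float(np.min(np.linalg.norm(centers - v, axis=1))) > threshold

# Usage: learn one group from diagonal tracks, then test a horizontal track.
normal = [[(0, 0), (1, 1), (2, 2), (3, 3)], [(0, 0.2), (1, 1.1), (2, 2.2), (3, 3.1)]]
centers, _ = cluster_trajectories(normal, k=1)
print(is_abnormal([(0, 5), (1, 5), (2, 5), (3, 5)], centers))  # True: unlike the training tracks
```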
    • 4. Invention Application
    • Title: TRACKING MOVING OBJECTS USING A CAMERA NETWORK
    • Publication number: WO2012092144A3
    • Publication date: 2012-11-15
    • Application number: PCT/US2011066956
    • Filing date: 2011-12-22
    • Applicants: PELCO INC; MILLAR GREG; AGHDASI FARZIN; WANG LEI
    • Inventors: MILLAR GREG; AGHDASI FARZIN; WANG LEI
    • IPC: G08B25/10; G08B13/196; H04N5/262; H04N7/18
    • CPC: H04N7/181; G08B13/19608
    • Abstract: Techniques are described for tracking moving objects using a plurality of security cameras. Multiple cameras may capture frames that contain images of a moving object. These images may be processed by the cameras to create metadata associated with the images of the objects. Frames of each camera's video feed and metadata may be transmitted to a host computer system. The host computer system may use the metadata received from each camera to determine whether the moving objects imaged by the cameras represent the same moving object. Based upon properties of the images of the objects described in the metadata received from each camera, the host computer system may select a preferable video feed containing images of the moving object for display to a user.
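A minimal sketch of the host-side logic implied by the abstract above: each camera sends metadata about the objects it sees, the host decides whether detections from different cameras are the same moving object, and then picks one feed to display. The metadata fields, the distance-based association rule, and the "largest image wins" selection rule are all assumptions made for illustration.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class ObjectMeta:
    """Metadata a camera might attach to a frame (hypothetical schema)."""
    camera_id: str
    object_id: int
    bbox_area: float     # size of the object's image in pixels^2
    ground_pos: tuple    # estimated (x, y) scene position, metres

def same_object(a: ObjectMeta, b: ObjectMeta, max_dist: float = 1.5) -> bool:
    """Crude association: two detections refer to the same moving object
    if their estimated scene positions are close enough."""
    return hypot(a.ground_pos[0] - b.ground_pos[0],
                 a.ground_pos[1] - b.ground_pos[1]) <= max_dist

def select_feed(detections):
    """Choose the 'preferable' feed for display: here, simply the camera
    whose image of the object is largest."""
    return max(detections, key=lambda m: m.bbox_area).camera_id

dets = [ObjectMeta("cam-1", 7, 5200.0, (3.1, 4.0)),
        ObjectMeta("cam-2", 3, 9100.0, (3.3, 4.2))]
if same_object(dets[0], dets[1]):
    print("display feed:", select_feed(dets))   # -> display feed: cam-2
```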
    • 5. Invention Application
    • Title: MULTI-RESOLUTION IMAGE DISPLAY
    • Publication number: WO2012092472A3
    • Publication date: 2012-10-04
    • Application number: PCT/US2011067812
    • Filing date: 2011-12-29
    • Applicants: PELCO INC; SABLAK SEZAI; MILLAR GREG; AGHDASI FARZIN
    • Inventors: SABLAK SEZAI; MILLAR GREG; AGHDASI FARZIN
    • IPC: H04N7/18
    • CPC: H04N7/18; G08B13/19691; H04N5/343; H04N21/2187; H04N21/47202; H04N21/4728; H04N21/6587
    • Abstract: An image display method includes: receiving, from a single camera, first and second image information for first and second captured images captured from different perspectives, the first image information having a first data density; selecting a portion of the first captured image for display with a higher level of detail than other portions of the first captured image, the selected portion corresponding to a first area of the first captured image; displaying the selected portion in a first displayed image, using a second data density relative to the selected portion of the first captured image; and displaying another portion of the first captured image, in a second displayed image, using a third data density; where the another portion of the first captured image is other than the selected portion of the first captured image; and where the third data density is lower than the second data density.
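The method above boils down to showing a selected region of a captured image at a higher data density than the rest. Here is a small sketch of that idea, assuming plain pixel-skipping as the way to lower data density; the function names and the (top, left, height, width) ROI convention are illustrative only.

```python
import numpy as np

def downsample(img, factor):
    """Lower the data density by keeping every `factor`-th pixel."""
    return img[::factor, ::factor]

def split_display(frame, roi, detail_factor=1, context_factor=4):
    """Produce two displayed images from one captured image: the selected
    region at a higher data density and an overview at a lower one (for
    simplicity the coarse view here covers the whole frame).
    roi = (top, left, height, width) in pixels."""
    top, left, h, w = roi
    detail = downsample(frame[top:top + h, left:left + w], detail_factor)
    context = downsample(frame, context_factor)
    return detail, context

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
detail, context = split_display(frame, roi=(200, 600, 300, 400))
print(detail.shape, context.shape)   # (300, 400, 3) (270, 480, 3)
```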
    • 6. Invention Application
    • Title: CLUSTERING-BASED OBJECT CLASSIFICATION
    • Publication number: WO2013101460A3
    • Publication date: 2013-10-03
    • Application number: PCT/US2012069148
    • Filing date: 2012-12-12
    • Applicants: PELCO INC; ZHU HONGWEI; AGHDASI FARZIN; MILLAR GREG
    • Inventors: ZHU HONGWEI; AGHDASI FARZIN; MILLAR GREG
    • IPC: G06K9/68
    • CPC: G06K9/68
    • Abstract: An example of a method for identifying objects in video content according to the disclosure includes receiving video content of a scene captured by a video camera, detecting an object in the video content, identifying a track that the object follows over a series of frames of the video content, extracting object features for the object from the video content, and classifying the object based on the object features. Classifying the object further comprises: determining a track-level classification for the object using spatially invariant object features, determining a global-clustering classification for the object using spatially variant features, and determining an object type for the object based on the track-level classification and the global-clustering classification for the object.
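To make the two-stage decision in the abstract above concrete, the sketch below combines a track-level majority vote over spatially invariant features with a nearest-cluster lookup over spatially variant features; the specific features, the vote, and the fusion rule are assumptions, not the patent's actual procedure.

```python
from collections import Counter
import numpy as np

def track_level_label(per_frame_features, classify_frame):
    """Track-level classification: label every frame of the object's track
    from spatially invariant features (e.g. aspect ratio) and majority-vote."""
    votes = Counter(classify_frame(f) for f in per_frame_features)
    return votes.most_common(1)[0][0]

def global_cluster_label(feature, centers, center_labels):
    """Global-clustering classification: assign the object to the nearest
    cluster learnt from spatially variant features (e.g. apparent size at a
    given image location) and return that cluster's label."""
    idx = int(np.argmin(np.linalg.norm(centers - np.asarray(feature), axis=1)))
    return center_labels[idx]

def object_type(track_label, cluster_label):
    """Fuse the two decisions; as a simple assumed rule, keep the label when
    both agree and defer to the clustering result otherwise."""
    return track_label if track_label == cluster_label else cluster_label

def frame_rule(aspect_ratio):
    return "person" if aspect_ratio < 0.7 else "vehicle"

aspect_ratios = [0.45, 0.42, 0.48, 0.44]           # spatially invariant per-frame feature
centers = np.array([[0.4, 120.0], [0.5, 900.0]])   # (image location, apparent size) clusters
labels = ["person", "vehicle"]
print(object_type(track_level_label(aspect_ratios, frame_rule),
                  global_cluster_label([0.41, 130.0], centers, labels)))  # -> person
```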
    • 9. Invention Application
    • Title: SEARCHING RECORDED VIDEO
    • Publication number: WO2012092429A3
    • Publication date: 2012-10-11
    • Application number: PCT/US2011067732
    • Filing date: 2011-12-29
    • Applicants: PELCO INC; MILLAR GREG; AGHDASI FARZIN; WANG LEI
    • Inventors: MILLAR GREG; AGHDASI FARZIN; WANG LEI
    • IPC: H04N7/18; H04N5/76; H04N5/91
    • CPC: G06F17/30784; G06F17/3079; G06K9/00771; G06K9/346
    • Abstract: Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
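The pipeline in the abstract above (background model, foreground detection, classification, metadata record, search) can be sketched in a few lines. The running-average background model, the field names, and the classify callback below are assumptions used only to show the shape of the approach.

```python
import numpy as np

class MetadataRecorder:
    """Maintain a background model, detect foreground, classify it, and
    record metadata that can later be searched instead of the raw video."""

    def __init__(self, frame_shape, alpha=0.05, diff_thresh=30.0):
        self.background = np.zeros(frame_shape, dtype=float)
        self.alpha = alpha              # background update rate
        self.diff_thresh = diff_thresh  # per-pixel foreground threshold
        self.records = []               # searchable metadata store

    def process(self, frame, timestamp, classify):
        frame = frame.astype(float)
        mask = np.abs(frame - self.background) > self.diff_thresh
        # Update the running-average background model.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        if mask.any():
            label, event = classify(frame, mask)   # e.g. ("person", "entered")
            self.records.append({"time": timestamp, "object": label,
                                 "event": event, "pixels": int(mask.sum())})

    def search(self, object_type=None, event=None):
        """Search the recorded metadata for specific objects and/or events."""
        return [r for r in self.records
                if (object_type is None or r["object"] == object_type)
                and (event is None or r["event"] == event)]
```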
    • 10. Invention Application
    • Title: COLOR SIMILARITY SORTING FOR VIDEO FORENSICS SEARCH
    • Publication number: WO2012092488A2
    • Publication date: 2012-07-05
    • Application number: PCT/US2011067872
    • Filing date: 2011-12-29
    • Applicants: PELCO INC; WANG LEI; YANG SHU; MILLAR GREG; AGHDASI FARZIN
    • Inventors: WANG LEI; YANG SHU; MILLAR GREG; AGHDASI FARZIN
    • IPC: G06K9/46
    • CPC: G06K9/4652; G06K9/00744; G06K9/00771; G06K9/522
    • Abstract: Systems and methods of sorting electronic color images of objects are provided. One method includes receiving an input representation of an object, the representation including pixels defined in a first color space, converting the input image into a second color space, determining a query feature vector including multiple parameters associated with color of the input representation, the query feature vector parameters including at least a first parameter of the first color space and at least a first parameter of the second color space and comparing the query feature vector to multiple candidate feature vectors. Each candidate feature vector includes multiple parameters associated with color of multiple stored candidate images, the candidate feature vector parameters including at least the first parameter from the first color space and at least the first parameter from the second color space. The method further includes determining at least one of the candidate images to be a possible match to the desired object based on the comparison.
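A compact sketch of the comparison described above: pixels are taken in one colour space (RGB), converted to a second (HSV), a feature vector mixing parameters from both spaces is built for the query and for each stored candidate, and candidates are ranked by distance to the query. The particular parameters (mean R, G, hue and saturation) and the Euclidean distance are illustrative choices, not the patent's.

```python
import colorsys
import numpy as np

def feature_vector(rgb_pixels):
    """Build a feature vector that mixes parameters from two colour spaces:
    mean R and G from RGB plus mean hue and saturation from HSV."""
    rgb = np.asarray(rgb_pixels, dtype=float) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb])
    return np.array([rgb[:, 0].mean(), rgb[:, 1].mean(),
                     hsv[:, 0].mean(), hsv[:, 1].mean()])

def rank_candidates(query_pixels, candidate_pixel_sets):
    """Compare the query feature vector with each candidate's and return
    candidate indices sorted by similarity (closest first)."""
    q = feature_vector(query_pixels)
    scored = sorted((np.linalg.norm(q - feature_vector(c)), i)
                    for i, c in enumerate(candidate_pixel_sets))
    return [i for _, i in scored]

query = [(200, 30, 40), (210, 25, 35)]                         # reddish object
candidates = [[(40, 200, 60)], [(205, 28, 38)], [(10, 10, 240)]]
print(rank_candidates(query, candidates))   # the reddish candidate (index 1) ranks first
```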