    • 32. Granted invention patent
    • Method and apparatus for providing virtual touch interaction in the drive-thru
    • Publication no.: US06996460B1
    • Publication date: 2006-02-07
    • Application no.: US10679226
    • Filing date: 2003-10-02
    • Inventors: Nils Krahnstoever, Emilio Schapira, Rajeev Sharma, Namsoon Jung
    • IPC: G06F17/00
    • CPC: G06F3/011, G06F3/017, G06F3/0304
    • The present invention is a method and apparatus for providing an enhanced automatic drive-thru experience to the customers in a vehicle by allowing use of natural hand gestures to interact with digital content. The invention is named Virtual Touch Ordering System (VTOS). In the VTOS, the virtual touch interaction is defined to be a contact free interaction, in which a user is able to select graphical objects within the digital contents on a display system and is able to control the processes connected to the graphical objects, by natural hand gestures without touching any physical devices, such as a keyboard or a touch screen. Using the virtual touch interaction of the VTOS, the user is able to complete transactions or ordering, without leaving the car and without any physical contact with the display. A plurality of Computer Vision algorithms in the VTOS processes a plurality of input image sequences from the image-capturing system that is pointed at the customers in a vehicle and performs the virtual touch interaction by natural hand gestures. The invention can increase the throughput of drive-thru interaction and reduce the delay in wait time, labor cost, and maintenance cost.
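The contact-free selection described in the abstract (a tracked hand position "touching" graphical objects without any physical device) can be sketched roughly as follows. The `Button` class, the coordinate mapping, and the dwell-time selection rule are all assumptions invented for this sketch, not the patent's disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class Button:
    """An on-screen menu item, hypothetical for this sketch."""
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def select_by_dwell(track, buttons, dwell_frames=15):
    """Return the button 'touched' for dwell_frames consecutive frames.

    track: iterable of (x, y) hand positions, assumed already mapped from
    camera coordinates into display coordinates by the vision pipeline.
    """
    current, count = None, 0
    for px, py in track:
        hit = next((b for b in buttons if b.contains(px, py)), None)
        if hit is not None and hit is current:
            count += 1
            if count >= dwell_frames:
                return hit  # dwell threshold reached: the gesture selects
        else:
            current, count = hit, (1 if hit is not None else 0)
    return None
```

Dwell-based selection is a common stand-in for a click in contact-free interfaces, since a bare hand track has no natural "button press" event.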
    • 34. Granted invention patent
    • Method and system for determining ethnicity category of facial images based on multi-level primary and auxiliary classifiers
    • Publication no.: US09317785B1
    • Publication date: 2016-04-19
    • Application no.: US14257816
    • Filing date: 2014-04-21
    • Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung, Joonhwa Shin
    • IPC: G06K9/00, G06K9/62
    • CPC: G06K9/6267, G06K9/00234, G06K9/00288, G06K9/6292, G06K9/6857, G06K2009/00322, G06T2207/30201
    • The present invention is a system and method for performing ethnicity classification based on the facial images of people, using multi-category decomposition architecture of classifiers, which include a set of predefined auxiliary classifiers that are specialized to auxiliary features of the facial images. In the multi-category decomposition architecture, which is a hybrid multi-classifier architecture specialized to ethnicity classification, the task of learning the concept of ethnicity against significant within-class variations, is handled by decomposing the set of facial images into auxiliary demographics classes; the ethnicity classification is performed by an array of classifiers where each classifier, called an auxiliary class machine, is specialized to the given auxiliary class. The facial image data is annotated to assign the age and gender labels as well as the ethnicity labels. Each auxiliary class machine is trained to output both the given auxiliary class membership likelihood and the ethnicity likelihoods. Faces are detected from the input image, individually tracked, and fed to all the auxiliary class machines to compute the desired auxiliary class membership and ethnicity likelihood outputs. The outputs from all the auxiliary class machines are combined in a manner to make a final decision on the ethnicity of the given face.
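The final fusion step in the abstract (combining each auxiliary class machine's membership likelihood and ethnicity likelihoods into one decision) might look like the membership-weighted average below. The weighting rule and data shapes are illustrative assumptions; the patent does not commit to this particular combination.

```python
def fuse_auxiliary_outputs(machine_outputs):
    """Combine auxiliary class machine outputs into a final label.

    machine_outputs: list of (membership, likelihoods) pairs, where
    membership is the machine's auxiliary-class membership likelihood and
    likelihoods maps each ethnicity label to a likelihood.
    """
    totals = {}
    weight_sum = 0.0
    for membership, likelihoods in machine_outputs:
        weight_sum += membership
        for label, p in likelihoods.items():
            # each machine's vote is scaled by how strongly the face
            # belongs to that machine's auxiliary class
            totals[label] = totals.get(label, 0.0) + membership * p
    if weight_sum == 0.0:
        return None  # no machine claimed the face
    scores = {label: s / weight_sum for label, s in totals.items()}
    return max(scores, key=scores.get)
```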
    • 35. Granted invention patent
    • Method and system for media audience measurement by viewership extrapolation based on site, display, and crowd characterization
    • Publication no.: US09161084B1
    • Publication date: 2015-10-13
    • Application no.: US13998392
    • Filing date: 2013-10-29
    • Inventors: Rajeev Sharma, Namsoon Jung, Joonhwa Shin
    • IPC: H04N7/16, H04H60/33, H04H60/45, H04H60/56, H04H60/32, H04N21/442, H04N21/4223
    • CPC: H04N21/44218, G06K9/00778, H04H60/33, H04N21/4223
    • The present invention provides a comprehensive method to design an automatic media audience measurement system that can estimate the site-wide audience of a media of interest (e.g., the site-wide viewership of a target display) based on the measurements of a subset of the actual audience sampled from a limited space in the site. This invention enables (1) the placement of sensors in optimal positions for the viewership data measurement and (2) the estimation of the site-wide viewership of the target display by performing the viewership extrapolation based on the sampled viewership data. The viewership extrapolation problem is formulated in a way that the time-varying crowd dynamics around the target display is an important decisive factor as well as the sampled viewership data at a given time in yielding the estimated site-wide viewership. To solve this problem, the system elements that affect the viewership—site, display, crowd, and audience—and their relationships are first identified in terms of the visibility, the viewership relevancy, and the crowd occupancy. The optimal positions of the sensors are determined to cover the maximum area of the viewership with high probabilities. The viewership extrapolation function is then modeled and learned from the sampled viewership data, the site-wide viewership data, and the crowd dynamics measurements while removing the noise in the sampled viewership data using the viewership relevancy of the measurements to the target display.
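The extrapolation function described above (estimated site-wide viewership as a function of the sampled viewership and the measured crowd dynamics) could, under a deliberately simple assumption, be modeled as linear and fitted by ordinary least squares. The linear, no-intercept form and the variable names below are assumptions for illustration; the patent learns this function from data without committing to a specific form here.

```python
def fit_extrapolation(samples):
    """Fit site_wide ~ a * sampled + b * crowd by least squares.

    samples: list of (sampled_viewers, crowd_count, site_wide_viewers)
    triples from a calibration period. Solves the 2x2 normal equations
    in closed form (no intercept term, for brevity).
    """
    sxx = sxy = syy = sxz = syz = 0.0
    for x, y, z in samples:
        sxx += x * x
        sxy += x * y
        syy += y * y
        sxz += x * z
        syz += y * z
    det = sxx * syy - sxy * sxy  # assumed nonzero for non-degenerate data
    a = (sxz * syy - syz * sxy) / det
    b = (syz * sxx - sxz * sxy) / det
    return a, b

def extrapolate(model, sampled, crowd):
    """Estimate site-wide viewership from a sampled count and crowd size."""
    a, b = model
    return a * sampled + b * crowd
```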
    • 36. Granted invention patent
    • Method and system for detecting and tracking shopping carts from videos
    • Publication no.: US08325982B1
    • Publication date: 2012-12-04
    • Application no.: US12460818
    • Filing date: 2009-07-23
    • Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
    • IPC: G06K9/00, H04N5/225
    • CPC: G06K9/3233, G06K9/00771, G06T7/215
    • The present invention is a method and system for detecting and tracking shopping carts from video images in a retail environment. First, motion blobs are detected and tracked from the video frames. Then these motion blobs are examined to determine whether or not some of them contain carts, based on the presence or absence of linear edge motion. Linear edges are detected within consecutive video frames, and their estimated motions vote for the presence of a cart. The motion blobs receiving enough votes are classified as cart candidate blobs. A more elaborate model of passive motions within blobs containing a cart is constructed. The detected cart candidate blob is then analyzed based on the constructed passive object motion model to verify whether or not the blob indeed shows the characteristic passive motion of a person pushing a cart. Then the finally-detected carts are corresponded across the video frames to generate cart tracks.
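The voting step above (moving linear edges voting for the blob that contains them, with sufficiently-voted blobs becoming cart candidates) can be sketched as follows. The data layout and the `min_votes` threshold are assumptions for the sketch, not the patent's parameters.

```python
def cart_candidates(blobs, edge_motions, min_votes=3):
    """Classify motion blobs as cart candidates by edge-motion voting.

    blobs: {blob_id: (x, y, w, h)} bounding boxes of tracked motion blobs.
    edge_motions: list of (ex, ey, dx, dy) estimated motions of linear
    edges detected across consecutive frames.

    Each moving linear edge votes for the blob containing it; blobs that
    collect at least min_votes become cart candidates. (A full
    implementation would also check that the edge motions within a blob
    are mutually consistent.)
    """
    votes = {bid: 0 for bid in blobs}
    for ex, ey, dx, dy in edge_motions:
        if dx == 0 and dy == 0:
            continue  # static edges (shelves, walls) cast no vote
        for bid, (x, y, w, h) in blobs.items():
            if x <= ex < x + w and y <= ey < y + h:
                votes[bid] += 1
                break
    return [bid for bid, v in votes.items() if v >= min_votes]
```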
    • 38. Granted invention patent
    • Object verification enabled network (OVEN)
    • Publication no.: US07904477B2
    • Publication date: 2011-03-08
    • Application no.: US11999649
    • Filing date: 2007-12-06
    • Inventors: Namsoon Jung, Rajeev Sharma
    • IPC: G06F7/00, G06F17/30
    • CPC: G06F8/38, G06F8/24
    • The present invention is a method and system for handling a plurality of information units in an information processing system, such as a multimodal human computer interaction (HCI) system, through verification process for the plurality of information units. The present invention converts each information unit in the plurality of information units into verified object by augmenting the first meaning in the information unit with a second meaning and expresses the verified objects by object representation for each verified object. The present invention utilizes a processing structure, called polymorphic operator, which is capable of applying a plurality of relationships among the verified objects based on a set of predefined rules in a particular application domain for governing the operation among the verified objects. The present invention is named Object Verification Enabled Network (OVEN). The OVEN provides a computational framework for the information processing system that needs to handle complex data and event in the system, such as handling a huge amount of data in a database, correlating information pieces from multiple sources, applying contextual information to the recognition of inputs in a specific domain, processing fusion of the multiple inputs from different modalities, handling unforeseen challenges in deploying a commercially working information processing system in a real-world environment, and handling collaboration among multiple users.
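A toy reading of the abstract's core ideas: an information unit's first meaning is augmented with a second, verified meaning, and a "polymorphic operator" dispatches on the pair of verified types through a rule table for the application domain. Every class, field, and rule name below is invented for illustration; the patent's actual object representation is not disclosed in the abstract.

```python
from dataclasses import dataclass

@dataclass
class VerifiedObject:
    """An information unit augmented with a verified second meaning."""
    value: object       # first meaning: the raw information unit
    verified_type: str  # second meaning: what verification established

def make_polymorphic_operator(rules):
    """Build an operator governed by domain rules.

    rules: {(type_a, type_b): binary function} mapping pairs of verified
    types to the relationship that applies between such objects.
    """
    def apply(a: VerifiedObject, b: VerifiedObject):
        rule = rules.get((a.verified_type, b.verified_type))
        if rule is None:
            raise KeyError(f"no rule for {(a.verified_type, b.verified_type)}")
        return rule(a.value, b.value)
    return apply
```

For instance, a multimodal HCI domain might pair a verified speech command with a verified pointing gesture through one such rule.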
    • 39. Granted invention patent
    • Method and system for enhancing virtual stage experience
    • Publication no.: US07053915B1
    • Publication date: 2006-05-30
    • Application no.: US10621181
    • Filing date: 2003-07-16
    • Inventors: Namsoon Jung, Rajeev Sharma
    • IPC: G09G5/00, G09B5/00
    • CPC: G10H1/368
    • The present invention is a system and method for increasing the value of the audio-visual entertainment systems, such as karaoke, by simulating a virtual stage environment and enhancing the user's facial image in a continuous video input, automatically, dynamically and in real-time. The present invention is named Enhanced Virtual Karaoke (EVIKA). The EVIKA system consists of two major modules, the facial image enhancement module and the virtual stage simulation module. The facial image enhancement module augments the user's image using the embedded Facial Enhancement Technology (F.E.T.) in real-time. The virtual stage simulation module constructs a virtual stage in the display by augmenting the environmental image. The EVIKA puts the user's enhanced body image into the dynamic background, which changes according to the user's arbitrary motion. During the entire process, the user can interact with the system and select and interact with the virtual objects on the screen. The capability of real-time execution of the EVIKA system even with complex backgrounds enables the user to experience a whole new live virtual entertainment environment experience, which was not possible before.
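The compositing at the heart of the virtual stage simulation (the user's segmented body image placed over a dynamic background) reduces, in its simplest form, to a per-pixel mask selection. The function below is a minimal sketch under that assumption; a real system would blend with a soft alpha matte rather than a hard mask.

```python
def composite(foreground, mask, background):
    """Place a segmented foreground over a virtual stage background.

    Where mask is truthy, keep the foreground pixel; otherwise use the
    background pixel. All three images are row-major lists of pixels of
    the same dimensions.
    """
    return [
        [f if m else b for f, m, b in zip(frow, mrow, brow)]
        for frow, mrow, brow in zip(foreground, mask, background)
    ]
```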