    • 61. Invention application
    • Title: Detect-point-click (DPC) based gaming systems and techniques
    • Publication number: US20060035709A1
    • Publication date: 2006-02-16
    • Application number: US10915181
    • Filing date: 2004-08-10
    • Inventors: Zicheng Liu; Yong Rui
    • IPC: A63F13/00
    • CPC: A63F13/52; A63F13/10; A63F13/80; A63F2300/66; A63F2300/8064
    • Abstract: Disclosed are a unique DPC (detect point click) based game system and method. The DPC based game system involves generating one or a plurality of DPC images, presenting them to a game participant, collecting the participant's clicks (which identify which object in the DPC image the participant believes to be the correct DPC object), and determining whether the participant's clicks represent the correct object. DPC images can be created in part by selecting a base image, altering some portion of the base image to create at least one confusion image, mapping these images to a geometric model, and applying one or more distortion filters to at least one of the base or confusion images to obscure the DPC object from clear view. Locating the DPC object nearly hidden in the DPC image can advance the participant in the DPC based game or another game that includes DPC images as a part thereof.
    • An illustrative code sketch of this flow follows this entry.
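The abstract in entry 61 describes a detect-point-click loop: build a challenge from a base image plus altered confusion objects, obscure the target, collect the participant's click, and check it against the true DPC object. The sketch below is a minimal, hypothetical illustration of that control flow only; it abstracts the image work away, and every name in it (Region, DPCChallenge, make_challenge, check_click) is invented for illustration rather than taken from the patent.

```python
# Minimal, illustrative sketch (not the patented implementation) of a
# detect-point-click challenge: place one target object and several
# "confusion" decoys on a canvas, then score a participant's click
# against the target region. Image rendering and distortion filters
# are abstracted away; only the control flow is shown.

from dataclasses import dataclass
import random


@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


@dataclass
class DPCChallenge:
    canvas_size: tuple   # (width, height) of the rendered DPC image
    target: Region       # where the true DPC object was placed
    decoys: list         # regions holding altered "confusion" objects


def make_challenge(width: int = 640, height: int = 480, n_decoys: int = 3) -> DPCChallenge:
    """Place one target object and several confusion objects at random spots.

    In a real system the objects would be crops of a base image, the decoys
    altered copies, and a distortion filter (blur, warp, noise) would be
    applied before rendering; here random placement stands in for all of that.
    """
    def random_region(w: int = 64, h: int = 64) -> Region:
        return Region(random.randint(0, width - w), random.randint(0, height - h), w, h)

    return DPCChallenge(
        canvas_size=(width, height),
        target=random_region(),
        decoys=[random_region() for _ in range(n_decoys)],
    )


def check_click(challenge: DPCChallenge, click_x: int, click_y: int) -> bool:
    """Return True if the participant clicked inside the true DPC object."""
    return challenge.target.contains(click_x, click_y)


if __name__ == "__main__":
    ch = make_challenge()
    # Simulate a participant clicking the center of the target region.
    hit = check_click(ch, ch.target.x + 32, ch.target.y + 32)
    print("advance participant" if hit else "try again")
```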
    • 68. Invention application
    • Title: HEAD POSE TRACKING USING A DEPTH CAMERA
    • Publication number: US20130201291A1
    • Publication date: 2013-08-08
    • Application number: US13369168
    • Filing date: 2012-02-08
    • Inventors: Zicheng Liu; Zhengyou Zhang; Zhenning Li
    • IPC: H04N13/02
    • CPC: G06F3/012; G06T7/74; G06T2207/10028; G06T2207/30244
    • Abstract: Head pose tracking technique embodiments are presented that use a group of sensors configured so as to be disposed on a user's head. This group of sensors includes a depth sensor apparatus used to identify the three dimensional locations of features within a scene, and at least one other type of sensor. Data output by each sensor in the group of sensors is periodically input, and each time the data is input it is used to compute a transformation matrix that, when applied to a previously determined head pose location and orientation established when the first sensor data was input, identifies a current head pose location and orientation. This transformation matrix is then applied to the previously determined head pose location and orientation to identify the current head pose location and orientation.
    • An illustrative code sketch of this update rule follows this entry.
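Entry 68's abstract boils down to one update rule: keep the head pose as a rigid transform and, each time the head-mounted sensor group delivers data, compose a newly computed transformation matrix with the previously determined pose. The numpy sketch below illustrates that composition under stated assumptions; estimate_frame_delta is a hypothetical stub standing in for whatever depth-feature registration and sensor fusion a real tracker would perform.

```python
# Minimal numpy sketch (not the patented method) of pose-by-composition
# head tracking: the head pose is a 4x4 rigid transform, and each frame's
# transformation matrix is applied to the previously determined pose.

import numpy as np


def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


def estimate_frame_delta(depth_frame, other_sensor_sample) -> np.ndarray:
    """Stub for the per-frame transform recovered from the sensor group.

    A real tracker would align 3-D feature locations from the depth sensor
    across frames and fuse the result with the other sensor(s); here we
    return a small fixed motion so the loop is runnable.
    """
    yaw = np.deg2rad(1.0)  # pretend the head turned 1 degree this frame
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    return pose_matrix(R, np.array([0.0, 0.0, 0.001]))


def track(n_frames: int = 5) -> np.ndarray:
    """Apply each frame's transform to the previously determined pose."""
    pose = np.eye(4)  # initial head pose: identity (origin, no rotation)
    for _ in range(n_frames):
        delta = estimate_frame_delta(depth_frame=None, other_sensor_sample=None)
        pose = delta @ pose  # current pose = per-frame transform applied to previous pose
    return pose


if __name__ == "__main__":
    print(np.round(track(), 4))
```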
    • 70. Granted invention patent
    • Title: Translation and capture architecture for output of conversational utterances
    • Publication number: US07991607B2
    • Publication date: 2011-08-02
    • Application number: US11167870
    • Filing date: 2005-06-27
    • Inventors: Zhengyou Zhang; David W. Williams; Yuan Kong; Zicheng Liu
    • IPC: G06F17/28
    • CPC: G06F17/289; G06K9/00335; G06K9/6293; G10L15/1822
    • Abstract: Architecture that combines capture and translation of concepts, goals, needs, locations, objects, and items (e.g., sign text) into complete conversational utterances: it takes a translation of the item and morphs it with fluidity into sets of sentences that can be echoed to a user and that the user can select to communicate as speech (or textual utterances). A plurality of modalities that process images, audio, video, searches, and cultural context, for example, which are representative of at least context and/or content, can be employed to glean additional information regarding a communications exchange to facilitate more accurate and efficient translation. Gesture recognition can be utilized to enhance input recognition, urgency, and/or emotional interaction, for example. Speech can be used for document annotation. Moreover, translation (e.g., speech to speech, text to speech, speech to text, handwriting to speech, text or audio, . . . ) can be significantly improved in combination with this architecture.
    • An illustrative pipeline sketch of this flow follows this entry.
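Entry 70's abstract outlines a capture-translate-morph-select pipeline: recognize an item such as sign text, translate it, expand the translation into candidate conversational sentences, echo them to the user, and output the selected one. The sketch below is a hypothetical, heavily simplified rendering of that flow; translate() is a toy stub, the sentence templates are invented, and none of the names come from the patent.

```python
# Illustrative pipeline sketch (not the patented architecture): capture an
# item, translate it, morph the translation into candidate conversational
# sentences, echo them to the user, and hand off the selected sentence for
# speech or text output.

from dataclasses import dataclass


@dataclass
class Capture:
    modality: str   # e.g. "image", "audio", "handwriting"
    text: str       # recognized source-language text
    language: str   # source language code


def translate(text: str, source: str, target: str) -> str:
    """Toy translation stub; a real system would call an MT backend here."""
    lookup = {("出口", "zh"): "exit"}   # tiny dictionary just for the demo
    return lookup.get((text, source), text)


def morph_into_utterances(translated: str) -> list:
    """Turn a bare translation into a set of candidate conversational sentences."""
    return [
        f"Where is the {translated}?",
        f"Is this the way to the {translated}?",
        f"I am looking for the {translated}.",
    ]


def run_pipeline(capture: Capture, target_lang: str = "en") -> str:
    """Capture -> translate -> morph -> echo candidates -> return the selection."""
    translated = translate(capture.text, capture.language, target_lang)
    candidates = morph_into_utterances(translated)
    # Echo the candidate sentences to the user so one can be selected.
    for i, sentence in enumerate(candidates):
        print(f"[{i}] {sentence}")
    choice = 0  # in an interactive system this index would come from the user
    return candidates[choice]  # hand off to text-to-speech or display


if __name__ == "__main__":
    picked = run_pipeline(Capture(modality="image", text="出口", language="zh"))
    print("speak:", picked)
```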