    • 43. Invention Application
    • ANNOTATION AND/OR RECOMMENDATION OF VIDEO CONTENT METHOD AND APPARATUS
    • WO2013037080A1
    • 2013-03-21
    • PCT/CN2011/001546
    • 2011-09-12
    • INTEL CORPORATION; LI, Wenlong; DU, Yangzhou; TONG, Xiaofeng; ZHANG, Yimin
    • LI, Wenlong; DU, Yangzhou; TONG, Xiaofeng; ZHANG, Yimin
    • G06F17/30
    • G06F3/0484; G06F17/30817
    • Methods, apparatuses and storage medium associated with cooperative annotation and/or recommendation by shared and personal devices. In various embodiments, at least one non-transitory computer-readable storage medium may include a number of instructions configured to enable a personal device (PD) of a user, in response to execution of the instructions by the personal device, to receive a user input selecting performance of a user function in association with a video stream being rendered on a shared video device (SVD) configured for use by multiple users, render an image frame of the video stream rendered on the shared video device at a time proximate to a time of the user input, and facilitate performance of the user function, which may include annotation of video objects. Other embodiments, including recommendation of video content, may be disclosed or claimed.
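The WO2013037080A1 abstract above outlines a personal-device/shared-device annotation flow: the PD reacts to a user input, fetches the SVD frame nearest that moment, and records the annotation against it. The Python sketch below illustrates that flow under simplifying assumptions (an in-memory frame store, synchronous calls); the SharedVideoDevice and PersonalDevice classes and their methods are hypothetical names for illustration, not identifiers from the patent.

```python
# Hypothetical sketch of the PD/SVD annotation flow; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SharedVideoDevice:
    """Shared device (SVD) rendering a video stream for multiple users."""
    frames: Dict[float, bytes]  # timestamp -> encoded frame

    def frame_near(self, t: float) -> bytes:
        """Return the frame whose timestamp is closest to time t."""
        nearest = min(self.frames, key=lambda ts: abs(ts - t))
        return self.frames[nearest]


@dataclass
class PersonalDevice:
    """Personal device (PD) that mirrors a frame and collects annotations."""
    svd: SharedVideoDevice
    annotations: List[dict] = field(default_factory=list)

    def on_user_input(self, t: float, note: str) -> None:
        # Render, on the PD, the SVD frame proximate to the input time,
        # then record the user's annotation against that frame.
        frame = self.svd.frame_near(t)
        self.annotations.append({"time": t, "frame": frame, "note": note})


svd = SharedVideoDevice(frames={0.0: b"f0", 1.0: b"f1", 2.0: b"f2"})
pd = PersonalDevice(svd)
pd.on_user_input(t=1.2, note="tag this actor")
print(pd.annotations[0]["note"])  # -> "tag this actor"
```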
    • 47. Invention Application
    • ADAPTIVE FACIAL EXPRESSION CALIBRATION
    • WO2014139118A1
    • 2014-09-18
    • PCT/CN2013/072594
    • 2013-03-14
    • INTEL CORPORATION; DU, Yangzhou; LI, Wenlong; HU, Wei; TONG, Xiaofeng; ZHANG, Yimin
    • DU, Yangzhou; LI, Wenlong; HU, Wei; TONG, Xiaofeng; ZHANG, Yimin
    • G06K9/00
    • G06K9/00281; A63F13/213; A63F13/655; G06T13/40; G06T13/80; G06T2215/16
    • Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user.
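The WO2014139118A1 abstract above describes a two-stage calibration: comparison data is derived once at registration (user reference vs. avatar reference) and then applied to real-time measurements to drive the avatar. The sketch below shows that pipeline using a simple per-feature scaling factor; the formula, function names, and measurement keys are assumptions for illustration, not taken from the patent.

```python
# Minimal numeric sketch of a registration-then-apply calibration pipeline.
from typing import Dict


def calibration_data(user_ref: Dict[str, float],
                     avatar_ref: Dict[str, float]) -> Dict[str, float]:
    """Compare the user's reference measurements with the avatar's reference
    expression to produce per-feature comparison (scaling) factors."""
    return {k: avatar_ref[k] / user_ref[k] for k in user_ref}


def apply_calibration(live: Dict[str, float],
                      comparison: Dict[str, float]) -> Dict[str, float]:
    """Apply the comparison data to real-time measurements to obtain the
    modified expression data that drives the avatar."""
    return {k: live[k] * comparison[k] for k in live}


user_ref = {"mouth_open": 0.8, "brow_raise": 0.5}    # registration capture
avatar_ref = {"mouth_open": 1.0, "brow_raise": 1.0}  # avatar's reference expression
comparison = calibration_data(user_ref, avatar_ref)

live = {"mouth_open": 0.4, "brow_raise": 0.25}       # real-time capture
print(apply_calibration(live, comparison))           # {'mouth_open': 0.5, 'brow_raise': 0.5}
```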
    • 48. Invention Application
    • FACIAL MOVEMENT BASED AVATAR ANIMATION
    • WO2014094199A1
    • 2014-06-26
    • PCT/CN2012/086739
    • 2012-12-17
    • INTEL CORPORATION; DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; ZHANG, Yimin
    • DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; ZHANG, Yimin
    • H04N7/14
    • G06T13/40; G06K9/00315; G06T13/80; H04N7/157
    • Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
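The WO2014094199A1 abstract above describes animating an avatar by picking predetermined images, per facial region, from tracked feature parameters. The sketch below shows one way such per-region selection could work, assuming each region's tracker reports a normalized value in [0, 1]; the image series, region names, and select_images function are hypothetical and not from the patent.

```python
# Hypothetical per-region image selection: each facial region has its own image
# series, and a tracked parameter in [0, 1] picks the frame for that region.
from typing import Dict, List

AVATAR_SERIES: Dict[str, List[str]] = {
    # Per-region image series (illustrative file names).
    "eyes":  ["eyes_open.png", "eyes_half.png", "eyes_closed.png"],
    "mouth": ["mouth_closed.png", "mouth_half.png", "mouth_open.png"],
}


def select_images(params: Dict[str, float]) -> Dict[str, str]:
    """Map each region's tracked parameter (0.0-1.0) to an image in its series,
    so regions animate independently as the parameters change frame to frame."""
    chosen = {}
    for region, value in params.items():
        series = AVATAR_SERIES[region]
        index = min(int(value * len(series)), len(series) - 1)
        chosen[region] = series[index]
    return chosen


# One video frame's tracked parameters: eyes mid-blink, mouth wide open.
print(select_images({"eyes": 0.6, "mouth": 0.9}))
# {'eyes': 'eyes_half.png', 'mouth': 'mouth_open.png'}
```

Because each region indexes its own series independently, the eyes can play a blink sequence while the mouth follows speech, which mirrors the independent-region animation the abstract claims.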