    • 5. Invention application
    • ADAPTIVE FACIAL EXPRESSION CALIBRATION
    • Publication No.: WO2014139118A1
    • Publication date: 2014-09-18
    • Application No.: PCT/CN2013/072594
    • Filing date: 2013-03-14
    • Applicants: INTEL CORPORATION; DU, Yangzhou; LI, Wenlong; HU, Wei; TONG, Xiaofeng; ZHANG, Yimin
    • Inventors: DU, Yangzhou; LI, Wenlong; HU, Wei; TONG, Xiaofeng; ZHANG, Yimin
    • IPC: G06K9/00
    • CPC: G06K9/00281; A63F13/213; A63F13/655; G06T13/40; G06T13/80; G06T2215/16
    • Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user.
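The abstract above describes a calibration pipeline: measurements taken from a reference image at registration are compared against the avatar's own reference expression, and the resulting comparison data is then applied to real-time measurements to drive the avatar. A minimal sketch of that general flow (the measurement names, normalization, and ratio-based scaling used here are illustrative assumptions, not Intel's implementation):

```python
# Rough sketch of the registration/runtime flow in the abstract above.
# Measurement names and the ratio-based calibration are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class FacialMeasurements:
    # Hypothetical normalized distances, e.g. mouth width or eye openness
    # divided by the inter-ocular distance.
    values: dict[str, float]


def make_comparison_data(user_ref: FacialMeasurements,
                         avatar_ref: FacialMeasurements) -> dict[str, float]:
    """Registration step: compare the user's reference measurements with the
    avatar's reference expression to obtain per-feature scale factors."""
    return {k: avatar_ref.values[k] / user_ref.values[k] for k in user_ref.values}


def modify_expression(realtime: FacialMeasurements,
                      comparison: dict[str, float]) -> FacialMeasurements:
    """Runtime step: apply the comparison (calibration) data to real-time
    measurements to produce the modified expression data for the avatar."""
    return FacialMeasurements(
        {k: v * comparison.get(k, 1.0) for k, v in realtime.values.items()})


# One-time registration with a reference (e.g. neutral) facial expression.
user_ref = FacialMeasurements({"mouth_width": 0.48, "eye_openness": 0.11})
avatar_ref = FacialMeasurements({"mouth_width": 0.60, "eye_openness": 0.15})
calibration = make_comparison_data(user_ref, avatar_ref)

# Per-frame use: rescale the captured measurements before animating the avatar.
frame = FacialMeasurements({"mouth_width": 0.55, "eye_openness": 0.03})
print(modify_expression(frame, calibration).values)
```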
    • 6. Invention application
    • FACIAL MOVEMENT BASED AVATAR ANIMATION
    • Publication No.: WO2014094199A1
    • Publication date: 2014-06-26
    • Application No.: PCT/CN2012/086739
    • Filing date: 2012-12-17
    • Applicants: INTEL CORPORATION; DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; ZHANG, Yimin
    • Inventors: DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; ZHANG, Yimin
    • IPC: H04N7/14
    • CPC: G06T13/40; G06K9/00315; G06T13/80; H04N7/157
    • Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
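The abstract above drives the avatar by selecting predetermined images from per-region image series according to tracked facial feature parameters. A rough sketch of that selection step (series names, image counts, and the 0-1 parameter range are assumptions, not the published implementation):

```python
# Rough sketch of per-region avatar image selection from the abstract above.
# Series names, image counts, and the 0-1 parameter range are assumptions.
EYE_BLINK_SERIES = [f"avatar_eyes_{i:02d}.png" for i in range(10)]    # open -> closed
MOUTH_OPEN_SERIES = [f"avatar_mouth_{i:02d}.png" for i in range(10)]  # closed -> open


def select_image(series: list[str], parameter: float) -> str:
    """Map a normalized facial feature parameter (0.0-1.0) to one image in a
    series that represents a particular facial movement."""
    parameter = min(max(parameter, 0.0), 1.0)
    return series[round(parameter * (len(series) - 1))]


# Parameters tracked from the live video for the current frame (hypothetical).
tracked = {"eye_blink": 0.8, "mouth_open": 0.2}

# Each face region picks from its own series, so regions animate independently;
# the chosen per-region images are then composited into one avatar frame.
frame_parts = {
    "eyes": select_image(EYE_BLINK_SERIES, tracked["eye_blink"]),
    "mouth": select_image(MOUTH_OPEN_SERIES, tracked["mouth_open"]),
}
print(frame_parts)
```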
    • 7. Invention application
    • METHOD, APPARATUS AND SYSTEM OF VIDEO AND AUDIO SHARING AMONG COMMUNICATION DEVICES
    • Publication No.: WO2014089732A1
    • Publication date: 2014-06-19
    • Application No.: PCT/CN2012/086260
    • Filing date: 2012-12-10
    • Applicants: INTEL CORPORATION; LI, Qiang; DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; XU, Lin; ZHANG, Yimin
    • Inventors: LI, Qiang; DU, Yangzhou; LI, Wenlong; TONG, Xiaofeng; HU, Wei; XU, Lin; ZHANG, Yimin
    • IPC: H04N7/52
    • CPC: H04L65/607; G10L19/04; H04N21/235; H04N21/2368; H04N21/242; H04N21/4307; H04N21/4341; H04N21/435
    • A device, method and system of video and audio sharing among communication devices may comprise a communication device for generating and sending a packet containing information related to the video and audio, and another communication device for receiving the packet and rendering the information related to the audio and video. In some embodiments, the communication device may comprise: an audio encoding module to encode a piece of audio into an audio bit stream; an avatar data extraction module to extract avatar data from a piece of video and generate an avatar data bit stream; and a synchronization module to generate synchronization information for synchronizing the audio bit stream with the avatar parameter stream. In some embodiments, the other communication device may comprise: an audio decoding module to decode an audio bit stream into decoded audio data; an Avatar animation module to animate an Avatar model based on an Avatar data bit stream to generate an animated Avatar model; and a synchronizing and rendering module to synchronize and render the decoded audio data and the animated Avatar model by utilizing the synchronization information.
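The abstract above splits the work between a sender that packs encoded audio, extracted avatar data, and synchronization information into a packet, and a receiver that decodes, animates, and renders them in sync. A toy sketch of such a packet under assumed field names and layout (not the format specified in the application):

```python
# Toy sketch of a sender/receiver packet carrying audio plus avatar data with
# shared timing. Field names and the JSON-header layout are assumptions, not
# the format defined in the application.
import json
import time


def build_packet(audio_chunk: bytes, avatar_params: dict, timestamp_ms: int) -> bytes:
    """Sender side: bundle the encoded audio, the extracted avatar data, and
    synchronization information (a shared timestamp) into one packet."""
    header = json.dumps({
        "sync": {"timestamp_ms": timestamp_ms},   # synchronization information
        "avatar": avatar_params,                  # avatar data payload
    }).encode("utf-8")
    return len(header).to_bytes(4, "big") + header + audio_chunk


def parse_packet(packet: bytes) -> tuple[dict, dict, bytes]:
    """Receiver side: recover sync info, avatar data, and the audio bit stream;
    the receiver then decodes the audio, animates its avatar model from the
    avatar data, and renders both against the same timestamp."""
    header_len = int.from_bytes(packet[:4], "big")
    payload = json.loads(packet[4:4 + header_len])
    return payload["sync"], payload["avatar"], packet[4 + header_len:]


pkt = build_packet(b"\x00" * 320,                        # e.g. one 20 ms audio frame
                   {"mouth_open": 0.4, "eye_blink": 0.1},
                   timestamp_ms=int(time.time() * 1000))
sync, avatar, audio = parse_packet(pkt)
print(sync, avatar, len(audio))
```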