    • 11. Patent application
    • Title: LIP SYNCHRONIZATION BETWEEN RIGS
    • Publication number: US20170053663A1
    • Publication date: 2017-02-23
    • Application number: US14831021
    • Filing date: 2015-08-20
    • Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    • Inventors: Ji Hun Yu; Michael Koperwas; Jeffrey Bruce Yost; Sheila Santos; Kiran S. Bhat
    • IPC classes: G10L21/10; G06T13/40; G06T13/20
    • CPC classes: G06T13/40; G06K9/00302; G06K9/4604; G06T7/246; G06T2207/30201; G06T2207/30204
    • Abstract: In some embodiments a method of transferring facial expressions from a subject to a computer-generated character is provided, where the method includes receiving positional information from a motion capture session of the subject representing a performance having facial expressions to be transferred to the computer-generated character, receiving a first animation model that represents the subject, and receiving a second animation model that represents the computer-generated character. Each of the first and second animation models can include a plurality of adjustable controls that define geometries of the model and that can be adjusted to present different facial expressions on the model, and the first and second animation models are designed so that setting the same values for the same set of adjustable controls in each model generates similar facial poses on the models. The method further includes determining a solution, including values for at least some of the plurality of controls, that matches the first animation model to the positional information to reproduce the facial expressions from the performance on the first animation model; retargeting the facial expressions from the performance to the second animation model using the solution; and thereafter, synchronizing lip movement of the second animation model with lip movement from the first animation model. (An illustrative code sketch of this retargeting flow follows this entry.)
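The abstract above describes solving for shared rig control values from motion-capture positions, applying that solution to a second character, and then synchronizing lip movement. The sketch below only illustrates that flow under a strong simplifying assumption: a toy rig whose point positions are a linear function of its control values (real facial rigs are non-linear blend systems). All class, function, and variable names here are hypothetical and not taken from the patent.

```python
import numpy as np

class AnimationModel:
    """Toy rig: flattened point positions = neutral + basis @ controls (illustrative only)."""
    def __init__(self, neutral, basis, lip_controls):
        self.neutral = neutral            # (n_points,) neutral pose
        self.basis = basis                # (n_points, n_controls) control-to-shape map
        self.lip_controls = lip_controls  # indices of the mouth/lip controls

    def pose(self, controls):
        return self.neutral + self.basis @ controls

def solve_controls(model, captured_positions):
    # Least-squares fit of control values so the model reproduces the
    # motion-captured positions (the "solution" in the abstract).
    controls, *_ = np.linalg.lstsq(model.basis,
                                   captured_positions - model.neutral,
                                   rcond=None)
    return controls

def retarget_with_lip_sync(subject_model, character_model, captured_positions):
    # Both rigs share control semantics, so the solved values can be applied
    # directly to the character; the lip controls are copied over explicitly
    # so mouth movement stays synchronized with the subject's rig.
    controls = solve_controls(subject_model, captured_positions)
    character_controls = controls.copy()
    character_controls[character_model.lip_controls] = controls[subject_model.lip_controls]
    return character_model.pose(character_controls)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    basis = rng.normal(size=(30, 6))
    subject = AnimationModel(rng.normal(size=30), basis, lip_controls=[4, 5])
    character = AnimationModel(rng.normal(size=30), 0.9 * basis, lip_controls=[4, 5])
    capture = subject.pose(np.array([0.3, -0.1, 0.5, 0.0, 0.8, -0.4]))
    print(retarget_with_lip_sync(subject, character, capture).shape)  # (30,)
```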
    • 20. Patent application
    • Title: DETERMINING CONTROL VALUES OF AN ANIMATION MODEL USING PERFORMANCE CAPTURE
    • Publication number: US20160328628A1
    • Publication date: 2016-11-10
    • Application number: US14704796
    • Filing date: 2015-05-05
    • Assignee: Lucasfilm Entertainment Company Ltd.
    • Inventors: Kiran Bhat; Michael Koperwas; Jeffery Yost; Ji Hun Yu; Sheila Santos
    • IPC classes: G06K9/62; G06K9/00; G06K9/52; G06K9/46; G06T7/60; G06T13/80; G06T7/00
    • CPC classes: G06T13/40; G06K9/00315; G06K9/6201; G06T7/60; G06T7/75; G06T13/80; G06T2207/10016; G06T2207/30201
    • Abstract: Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input data including position information defining positions on a face of the subject. The method may further include obtaining an animation model for the subject that includes adjustable controls that control the animation model to define facial features of the computer-generated representation of the face, and matching one or more of the positions on the face with one or more corresponding positions on the animation model. The matching includes using an objective function to project an error onto a control space of the animation model. The method may further include determining, using the projected error and one or more constraints on the adjustable controls, one or more values for one or more of the adjustable controls. The values are configured to control the animation model to cause the computer-generated representation to perform a representation of the pose using the one or more adjustable controls. (A short code sketch of this constrained solve follows this entry.)
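The second abstract describes solving for adjustable control values by using an objective function to project positional error onto the rig's control space, subject to constraints on the controls. The sketch below is a rough illustration under the same simplifying assumption of a linear toy rig: the bound constraints stand in for the abstract's constraints on the adjustable controls, and the solver's use of the Jacobian plays the role of projecting error onto the control space. All names are hypothetical and not taken from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_control_values(basis, neutral, observed_positions, lower, upper):
    """Find bounded control values whose pose best matches the observed face positions."""
    def residual(controls):
        # Positional error between the rig's current pose and the captured positions.
        return (neutral + basis @ controls) - observed_positions

    result = least_squares(
        residual,
        x0=np.zeros(basis.shape[1]),
        jac=lambda _controls: basis,   # linear toy rig: the Jacobian is the basis itself
        bounds=(lower, upper),         # constraints on the adjustable controls
    )
    return result.x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = rng.normal(size=(40, 8))
    neutral = rng.normal(size=40)
    true_controls = rng.uniform(0.0, 1.0, size=8)
    observed = neutral + basis @ true_controls
    solved = solve_control_values(basis, neutral, observed,
                                  lower=np.zeros(8), upper=np.ones(8))
    print(np.allclose(solved, true_controls, atol=1e-3))  # expect True
```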