    • 5. Invention patent application
    • DETERMINING CONTROL VALUES OF AN ANIMATION MODEL USING PERFORMANCE CAPTURE
    • Publication No.: US20160328628A1
    • Publication date: 2016-11-10
    • Application No.: US14704796
    • Filing date: 2015-05-05
    • Assignee: Lucasfilm Entertainment Company Ltd.
    • Inventors: Kiran Bhat; Michael Koperwas; Jeffery Yost; Ji Hun Yu; Sheila Santos
    • IPC: G06K9/62; G06K9/00; G06K9/52; G06K9/46; G06T7/60; G06T13/80; G06T7/00
    • CPC: G06T13/40; G06K9/00315; G06K9/6201; G06T7/60; G06T7/75; G06T13/80; G06T2207/10016; G06T2207/30201
    • Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input data including position information defining positions on a face of the subject. The method may further include obtaining an animation model for the subject that includes adjustable controls that control the animation model to define facial features of the computer-generated representation of the face, and matching one or more of the positions on the face with one or more corresponding positions on the animation model. The matching includes using an objective function to project an error onto a control space of the animation model. The method may further include determining, using the projected error and one or more constraints on the adjustable controls, one or more values for one or more of the adjustable controls. The values are configured to control the animation model to cause the computer-generated representation to perform a representation of the pose using the one or more adjustable controls.
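The abstract above describes, at its core, a constrained inverse problem: match captured positions on the subject's face to the positions the animation model produces, project the positional error onto the model's control space, and solve for bounded control values. Below is a minimal sketch of that idea under the simplifying assumption of a locally linear (blendshape-style) model; the names `model_positions` and `control_jacobian` are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_points, n_controls = 30, 8

# Toy "animation model": neutral face positions plus a locally linear effect per control.
neutral = rng.normal(size=(n_points, 3))
control_jacobian = rng.normal(size=(n_points * 3, n_controls))  # d(position) / d(control)

def model_positions(controls):
    """Positions on the animation model for a given vector of control values."""
    return neutral + (control_jacobian @ controls).reshape(n_points, 3)

# Simulated capture: positions on the subject's face for an unknown pose.
true_controls = rng.uniform(0.0, 1.0, size=n_controls)
captured = model_positions(true_controls) + rng.normal(scale=1e-3, size=(n_points, 3))

# Project the positional error onto the control space and solve with box
# constraints on the adjustable controls (0..1, a common rig convention).
residual = (captured - neutral).reshape(-1)
fit = lsq_linear(control_jacobian, residual, bounds=(0.0, 1.0))

print("recovered controls:", np.round(fit.x, 3))
print("true controls:     ", np.round(true_controls, 3))
```

In a real rig the control-to-position mapping is nonlinear, so a bounded solve like this would typically sit inside an iterative linearize-and-solve loop rather than being run once.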
    • 6. Granted invention patent
    • Facial performance capture in an uncontrolled environment
    • Publication No.: US11049332B2
    • Publication date: 2021-06-29
    • Application No.: US16808110
    • Filing date: 2020-03-03
    • Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    • Inventors: Matthew Loper; Stéphane Grabli; Kiran Bhat
    • Classification: G06T19/20; G06T7/80
    • A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and solving for the facial expression in the plate by executing the deformation solver using the intermediate facial mesh, the estimated intrinsic camera parameters, the estimated lighting conditions and the refined albedo estimates as inputs over a series of iterations to infer geometry of the facial expression and generate a final facial mesh using the set of parameter values of the deformable model which result in a facial expression that more closely matches the expression of the subject in the plate than does the intermediate facial mesh.
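The abstract above outlines a two-pass analysis-by-synthesis fit: solve for deformable-model parameters against the plate using estimated camera, lighting, and albedo; refine the albedo from the resulting intermediate mesh; then re-solve with the refined albedo. The sketch below is a toy illustration of that structure only, with several stand-ins: the "renderer" is per-vertex Lambertian shading rather than a full differentiable renderer, gradients come from finite differences, the camera model is omitted, and the albedo refinement is blended with the initial estimate in place of the regularization a real system would need. All names (`shade`, `solve_deformation`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_params = 12, 4
light = np.array([0.0, 0.0, 1.0])  # estimated lighting direction, assumed known here

# Toy parameterized deformable model: per-vertex normals vary linearly with the parameters.
base_normals = rng.normal(size=(n_vertices, 3))
base_normals[:, 2] = np.abs(base_normals[:, 2]) + 1.0  # roughly camera-facing
basis = 0.2 * rng.normal(size=(n_params, n_vertices, 3))

def normals(params):
    n = base_normals + np.tensordot(params, basis, axes=1)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def shade(params, albedo):
    """Toy 'renderer': per-vertex Lambertian intensity."""
    return albedo * np.clip(normals(params) @ light, 0.0, None)

def solve_deformation(observed, albedo, iters=300, step=2.0, eps=1e-4):
    """Deformation solve: gradient descent on photometric error, gradients by finite differences."""
    params = np.zeros(n_params)
    for _ in range(iters):
        grad = np.zeros(n_params)
        for j in range(n_params):
            d = np.zeros(n_params)
            d[j] = eps
            loss_hi = np.sum((shade(params + d, albedo) - observed) ** 2)
            loss_lo = np.sum((shade(params - d, albedo) - observed) ** 2)
            grad[j] = (loss_hi - loss_lo) / (2.0 * eps)
        params -= step * grad
    return params

# Synthetic "plate": per-vertex intensities produced by unknown parameters and albedo.
true_params = rng.uniform(-1.0, 1.0, size=n_params)
true_albedo = rng.uniform(0.4, 0.9, size=n_vertices)
plate = shade(true_params, true_albedo)

# Pass 1: solve with a crude constant albedo estimate -> intermediate geometry.
albedo_0 = np.full(n_vertices, plate.mean())
intermediate = solve_deformation(plate, albedo_0)

# Refine albedo from the intermediate geometry; the blend with the initial estimate
# stands in for the regularization a real system would apply.
shading = np.clip(normals(intermediate) @ light, 1e-3, None)
albedo_1 = np.clip(0.5 * plate / shading + 0.5 * albedo_0, 0.05, 1.0)

# Pass 2: re-solve with the refined albedo -> final geometry.
final = solve_deformation(plate, albedo_1)

print("mean |param error|, pass 1:", np.round(np.mean(np.abs(intermediate - true_params)), 3))
print("mean |param error|, pass 2:", np.round(np.mean(np.abs(final - true_params)), 3))
```

Blending the refined albedo keeps the second pass from trivially reproducing the first: a per-vertex albedo fit exactly to the intermediate shading would make the photometric error at the intermediate parameters already zero.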
    • 7. Invention patent application
    • FACIAL PERFORMANCE CAPTURE IN AN UNCONTROLLED ENVIRONMENT
    • Publication No.: US20200286301A1
    • Publication date: 2020-09-10
    • Application No.: US16808110
    • Filing date: 2020-03-03
    • Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    • Inventors: Matthew Loper; Stéphane Grabli; Kiran Bhat
    • Classification: G06T19/20; G06T7/80
    • A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and solving for the facial expression in the plate by executing the deformation solver using the intermediate facial mesh, the estimated intrinsic camera parameters, the estimated lighting conditions and the refined albedo estimates as inputs over a series of iterations to infer geometry of the facial expression and generate a final facial mesh using the set of parameter values of the deformable model which result in a facial expression that more closely matches the expression of the subject in the plate than does the intermediate facial mesh.