    • 4. Granted Patent
    • Framework to evaluate content display policies
    • US08504558B2
    • 2013-08-06
    • US12184114
    • 2008-07-31
    • Deepak Agarwal, Pradheep Elango, Raghu Ramakrishnan, Seung-Taek Park, Bee-Chung Chen
    • G06F17/30
    • G06Q30/02
    • Content display policies are evaluated using two kinds of methods. In the first kind of method, truth models are generated from information about user characteristics and content characteristics collected in a “controlled” manner. A simulator replays users' visits to the portal web page and simulates their interactions with content items on the page based on the truth models. Various metrics are used to compare different content-item selection algorithms. In the second kind of method, no explicit truth models are built. Events from the controlled serving scheme are replayed in part or in whole; content-item selection algorithms learn from the observed user activities. Metrics that measure the overall predictive error are used to compare different content-item selection algorithms. The data collected in a controlled fashion plays a key role in both methods. (A minimal Python sketch of this replay-style evaluation follows this entry.)
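The second kind of method in the abstract above, replaying events logged under a randomized "controlled" serving scheme to score candidate content-selection policies, can be illustrated with a minimal sketch. The event fields, the `replay_ctr` helper, and the toy policies below are assumptions made for this example; the patent text does not specify this exact estimator, and matched-event CTR is used here as one common replay metric rather than the predictive-error metrics the abstract mentions.

```python
# Hedged sketch: field names, helper names, and policies are illustrative,
# not taken from the patent.
import random
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class LoggedEvent:
    """One page view recorded under the controlled (randomized) serving scheme."""
    user_features: Dict[str, float]   # characteristics of the visiting user
    candidates: List[str]             # content items eligible at serve time
    shown_item: str                   # item actually shown, chosen at random
    clicked: bool                     # observed user feedback


def replay_ctr(events: List[LoggedEvent],
               policy: Callable[[Dict[str, float], List[str]], str]) -> float:
    """Replay logged events; keep only those where the candidate policy would
    have shown the same item the random scheme showed, and report the CTR on
    that matched subset."""
    matched = clicks = 0
    for ev in events:
        if policy(ev.user_features, ev.candidates) == ev.shown_item:
            matched += 1
            clicks += int(ev.clicked)
    return clicks / matched if matched else float("nan")


def first_item_policy(user_features, candidates):
    """Toy baseline: always show the first eligible item."""
    return candidates[0]


def random_item_policy(user_features, candidates):
    """Toy comparison: mimic the controlled scheme and pick at random."""
    return random.choice(candidates)
```

Because the logged items were chosen uniformly at random, every candidate policy is matched on an unbiased sample of events, which is what makes policies comparable without building an explicit truth model.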
    • 5. Patent Application
    • FRAMEWORK TO EVALUATE CONTENT DISPLAY POLICIES
    • US20100030717A1
    • 2010-02-04
    • US12184114
    • 2008-07-31
    • Deepak Agarwal, Pradheep Elango, Raghu Ramakrishnan, Seung-Taek Park, Bee-Chung Chen
    • G06N5/02
    • G06Q30/02
    • Content display policies are evaluated using two kinds of methods. In the first kind of method, truth models are generated from information about user characteristics and content characteristics collected in a “controlled” manner. A simulator replays users' visits to the portal web page and simulates their interactions with content items on the page based on the truth models. Various metrics are used to compare different content-item selection algorithms. In the second kind of method, no explicit truth models are built. Events from the controlled serving scheme are replayed in part or in whole; content-item selection algorithms learn from the observed user activities. Metrics that measure the overall predictive error are used to compare different content-item selection algorithms. The data collected in a controlled fashion plays a key role in both methods.
    • 6. Patent Application
    • ENHANCED MATCHING THROUGH EXPLORE/EXPLOIT SCHEMES
    • US20120303349A1
    • 2012-11-29
    • US13569728
    • 2012-08-08
    • H. Scott Roy, Raghunath Ramakrishnan, Pradheep Elango, Nitin Motgi, Deepak K. Agarwal, Wei Chu, Bee-Chung Chen
    • G06G7/62
    • G06F17/3089
    • Content items are selected for display on a portal page in such a way as to maximize a performance metric such as click-through rate. Problems relating to content selection are addressed, such as a changing content pool, a variable performance metric, and delay in receiving feedback on an item once it has been displayed to a user. An adaptation of priority-based schemes for the multi-armed bandit problem is used to project future trends in the data. The adaptation introduces experiments concerning a future time period into the calculation, which enlarges the set of data on which the multi-armed bandit problem is solved. Also, a Bayesian explore/exploit method is formulated as an optimization problem that addresses all of these content-item selection issues for a portal page. This optimization problem is modified by Lagrange relaxation and normal approximation, which allow it to be computed in real time. (A minimal Python explore/exploit sketch follows this entry.)
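The explore/exploit selection problem described in the abstract above can be illustrated with a minimal sketch. The abstract refers to priority-based multi-armed-bandit schemes and a Bayesian formulation solved via Lagrange relaxation and normal approximation; the code below instead shows a generic Bayesian bandit (Thompson sampling over per-item Beta posteriors) as a stand-in, and the item names, prior, and simulation loop are assumptions made for the example.

```python
import random


class ThompsonSamplingSelector:
    """Hedged sketch: generic Bayesian explore/exploit over a content pool.
    Not the patent's priority-based or Lagrange-relaxed formulation."""

    def __init__(self, items):
        # Beta(1, 1) prior on each item's click-through rate.
        self.params = {item: [1.0, 1.0] for item in items}

    def add_item(self, item):
        """New items can join the changing content pool at any time."""
        self.params.setdefault(item, [1.0, 1.0])

    def select(self):
        """Sample a CTR from each item's posterior and show the best draw.
        Uncertain items occasionally win, which provides the exploration."""
        draws = {item: random.betavariate(a, b)
                 for item, (a, b) in self.params.items()}
        return max(draws, key=draws.get)

    def update(self, item, clicked):
        """Fold observed feedback into the posterior; since feedback may
        arrive with delay, updates can be applied in batches."""
        a, b = self.params[item]
        self.params[item] = [a + int(clicked), b + (1 - int(clicked))]


if __name__ == "__main__":
    # Toy simulation with hidden true CTRs; the selector should converge on
    # the highest-CTR item while still exploring the others.
    true_ctr = {"article_a": 0.04, "article_b": 0.06, "article_c": 0.02}
    selector = ThompsonSamplingSelector(list(true_ctr))
    for _ in range(20_000):
        item = selector.select()
        selector.update(item, random.random() < true_ctr[item])
    print({k: round(a / (a + b), 3) for k, (a, b) in selector.params.items()})
```

Thompson sampling is used here only because it is compact; the abstract's formulation additionally plans explicit experiments over a future time period and solves a constrained optimization in real time.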
    • 7. Granted Patent
    • Enhanced matching through explore/exploit schemes
    • US08560293B2
    • 2013-10-15
    • US13569728
    • 2012-08-08
    • H. Scott Roy, Raghunath Ramakrishnan, Pradheep Elango, Nitin Motgi, Deepak K. Agarwal, Wei Chu, Bee-Chung Chen
    • G06F17/50
    • G06F17/3089
    • Content items are selected for display on a portal page in such a way as to maximize a performance metric such as click-through rate. Problems relating to content selection are addressed, such as a changing content pool, a variable performance metric, and delay in receiving feedback on an item once it has been displayed to a user. An adaptation of priority-based schemes for the multi-armed bandit problem is used to project future trends in the data. The adaptation introduces experiments concerning a future time period into the calculation, which enlarges the set of data on which the multi-armed bandit problem is solved. Also, a Bayesian explore/exploit method is formulated as an optimization problem that addresses all of these content-item selection issues for a portal page. This optimization problem is modified by Lagrange relaxation and normal approximation, which allow it to be computed in real time.
    • 8. Granted Patent
    • Enhanced matching through explore/exploit schemes
    • US08244517B2
    • 2012-08-14
    • US12267534
    • 2008-11-07
    • H. Scott Roy, Raghunath Ramakrishnan, Pradheep Elango, Nitin Motgi, Deepak K. Agarwal, Wei Chu, Bee-Chung Chen
    • G06F9/45
    • G06F17/3089
    • Content items are selected for display on a portal page in such a way as to maximize a performance metric such as click-through rate. Problems relating to content selection are addressed, such as a changing content pool, a variable performance metric, and delay in receiving feedback on an item once it has been displayed to a user. An adaptation of priority-based schemes for the multi-armed bandit problem is used to project future trends in the data. The adaptation introduces experiments concerning a future time period into the calculation, which enlarges the set of data on which the multi-armed bandit problem is solved. Also, a Bayesian explore/exploit method is formulated as an optimization problem that addresses all of these content-item selection issues for a portal page. This optimization problem is modified by Lagrange relaxation and normal approximation, which allow it to be computed in real time.