    • 4. Granted invention patent
    • Methods and systems for cost-sensitive boosting
    • US08275721B2
    • 2012-09-25
    • US12190325
    • 2008-08-12
    • Naoki Abe; Aurelie C. Lozano
    • G06F15/18
    • G06N99/005
    • Multi-class cost-sensitive boosting based on gradient boosting with “p-norm” cost functionals uses iterative example weighting schemes derived with respect to the cost functionals, and a binary classification algorithm. Weighted sampling is iteratively applied from an expanded data set obtained by enhancing each example in the original data set with as many data points as there are possible labels for any single instance. Each non-optimally labeled example is given a weight equal to one half times the original misclassification cost for the labeled example times the p−1 norm of the average prediction of the current hypotheses. Each optimally labeled example is given a weight equal to the sum of the weights of all non-optimally labeled examples for the same instance. The component classification algorithm is executed on a modified binary classification problem. A classifier hypothesis is output, which is the average of all the hypotheses output in the respective iterations.
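The per-instance weighting scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method itself: the function name, the toy cost vector, and the choice `p=2` are all assumptions, and the “average prediction of the current hypotheses” is passed in as a plain array.

```python
import numpy as np

def expansion_weights(costs, avg_pred, p=2):
    """Sketch of the expanded-data-set weighting for one instance.

    costs[l]    -- misclassification cost of assigning label l
                   (the optimal label has the lowest cost).
    avg_pred[l] -- average prediction of the current hypotheses for label l.

    Each non-optimal label gets 0.5 * cost * |avg_pred|**(p-1);
    the optimal label gets the sum of the non-optimal weights.
    """
    costs = np.asarray(costs, dtype=float)
    avg_pred = np.asarray(avg_pred, dtype=float)
    optimal = int(np.argmin(costs))                   # lowest-cost label
    w = 0.5 * costs * np.abs(avg_pred) ** (p - 1)     # non-optimal weights
    w[optimal] = np.delete(w, optimal).sum()          # optimal = sum of the rest
    return w

# With costs [0, 2, 4] and average predictions [0.5, 0.5, 1.0] at p = 2,
# the non-optimal labels get weights 0.5 and 2.0, so the optimal label gets 2.5.
print(expansion_weights([0.0, 2.0, 4.0], [0.5, 0.5, 1.0], p=2))
```

These weights would then drive the iterative weighted sampling fed to the binary component learner.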
    • 6. Invention patent application
    • METHODS AND SYSTEMS FOR COST-SENSITIVE BOOSTING
    • US20100042561A1
    • 2010-02-18
    • US12190325
    • 2008-08-12
    • Naoki Abe; Aurelie C. Lozano
    • G06F15/16
    • G06N99/005
    • Multi-class cost-sensitive boosting based on gradient boosting with “p-norm” cost functionals uses iterative example weighting schemes derived with respect to the cost functionals, and a binary classification algorithm. Weighted sampling is iteratively applied from an expanded data set obtained by enhancing each example in the original data set with as many data points as there are possible labels for any single instance. Each non-optimally labeled example is given a weight equal to one half times the original misclassification cost for the labeled example times the p−1 norm of the average prediction of the current hypotheses. Each optimally labeled example is given a weight equal to the sum of the weights of all non-optimally labeled examples for the same instance. The component classification algorithm is executed on a modified binary classification problem. A classifier hypothesis is output, which is the average of all the hypotheses output in the respective iterations.
    • 9. Granted invention patent
    • Resource-light method and apparatus for outlier detection
    • US08006157B2
    • 2011-08-23
    • US11863704
    • 2007-09-28
    • Naoki Abe; John Langford
    • H04L1/00; G06F11/30; H03M13/00
    • G06N99/005; Y10S707/99936; Y10S707/99943
    • The outlier detection methods and apparatus have a light computational-resource requirement, especially for storage, yet achieve state-of-the-art predictive performance. The outlier detection problem is first reduced to a classification learning problem, and selective sampling based on prediction uncertainty is then applied to further reduce the amount of data required for analysis, resulting in enhanced predictive performance. The reduction to classification essentially consists of using the unlabeled normal data as positive examples and randomly generated synthesized examples as negative examples. Selective sampling makes use of an underlying, arbitrary classification learning algorithm and the data labeled by the above procedure, and proceeds iteratively. Each iteration consists of selecting a smaller sub-sample from the input data, training the underlying classification algorithm on the selected data, and storing the classifier output by the algorithm. The selection essentially chooses examples that are harder to classify with the classifiers obtained in the preceding iterations. The final output hypothesis is a voting function of the classifiers obtained across the iterations.
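The reduction-plus-selective-sampling procedure described in the abstract can be sketched in a few lines of Python. Everything concrete here is an assumption for illustration: the 2-D Gaussian “normal” data, the uniform synthetic negatives, the committee size of 5, and the radius rule standing in for the underlying (arbitrary) classification algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduction step: unlabeled normal data become positive examples,
# randomly generated synthesized points become negative examples.
normal = rng.normal(0.0, 1.0, size=(200, 2))   # toy "normal" data (assumed)
synth = rng.uniform(-4.0, 4.0, size=(200, 2))  # synthesized negatives (assumed)
X = np.vstack([normal, synth])
y = np.concatenate([np.ones(200), np.zeros(200)])

def fit_base(Xs, ys):
    """Stand-in base learner: a radius rule around the positive centroid.
    The procedure allows any underlying classification algorithm."""
    pos = Xs[ys == 1]
    c = pos.mean(axis=0)
    r = np.quantile(np.linalg.norm(pos - c, axis=1), 0.95)
    return lambda Z: (np.linalg.norm(Z - c, axis=1) <= r).astype(float)

# Selective sampling: each iteration draws a smaller sub-sample, biased toward
# points the committee built so far finds hard to classify, trains the base
# learner on it, and stores the resulting classifier.
classifiers = []
for t in range(5):
    if classifiers:
        votes = np.mean([h(X) for h in classifiers], axis=0)
        hardness = 1.0 - np.abs(2.0 * votes - 1.0) + 1e-3  # near-50/50 is hard
        idx = rng.choice(len(X), size=100, replace=False,
                         p=hardness / hardness.sum())
    else:
        idx = rng.choice(len(X), size=100, replace=False)
    classifiers.append(fit_base(X[idx], y[idx]))

def vote(Z):
    """Final hypothesis: average vote; a low score flags a point as an outlier."""
    return np.mean([h(Z) for h in classifiers], axis=0)

# A point near the bulk of the data votes "normal"; a far point votes "outlier".
print(vote(np.array([[0.0, 0.0], [3.9, 3.9]])))
```

The committee vote plays the role of the voting function over stored classifiers; thresholding it (e.g. at 0.5) yields the outlier flag.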