    • 1. Published invention application
    • Input feature and kernel selection for support vector machine classification
    • Publication number: US20050049985A1
    • Publication date: 2005-03-03
    • Application number: US10650121
    • Filing date: 2003-08-28
    • Inventors: Olvi Mangasarian, Glenn Fung
    • IPC: G06F17/00, G06K9/62, G06N5/00
    • CPC: G06K9/6228, G06K9/6269
    • Abstract: A feature selection technique for support vector machine (SVM) classification makes use of a fast Newton method that suppresses input space features for a linear programming formulation of a linear SVM classifier, or suppresses kernel functions for a linear programming formulation of a nonlinear SVM classifier. The technique may be implemented with a linear equation solver, without the need for specialized linear programming packages, and may be applicable to linear or nonlinear SVM classifiers. It may involve defining a linear programming formulation of an SVM classifier, solving an exterior penalty function of the dual of the linear programming formulation with a Newton method to produce a solution to the SVM classifier, and selecting an input set for the SVM classifier based on that solution.
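The core idea behind this abstract can be sketched briefly: a 1-norm (L1) penalty on a linear SVM drives most weights exactly to zero, so the surviving nonzero weights select input features. The sketch below uses scikit-learn's `LinearSVC` with `penalty="l1"` rather than the patent's Newton method on the exterior penalty of the dual LP; the synthetic data and all parameter choices are illustrative assumptions, not the patented method.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# 200 samples, 10 features; only features 0 and 1 carry the class signal.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

# L1-penalized linear SVM: the 1-norm drives most weights to zero,
# so the surviving nonzero weights act as the selected input features.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000)
clf.fit(X, y)

selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-6)
print("selected features:", selected)
```

On this synthetic data the informative features 0 and 1 survive while most noise features are zeroed out.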
    • 3. Granted invention patent
    • System and method for multiple instance learning for computer aided detection
    • Publication number: US07986827B2
    • Publication date: 2011-07-26
    • Application number: US11671777
    • Filing date: 2007-02-06
    • Inventors: R. Bharat Rao, Murat Dundar, Balaji Krishnapuram, Glenn Fung
    • IPC: G06K9/62, G06K9/00, G06E1/00
    • CPC: G06K9/6277, G06T7/0012, G06T2207/30004
    • Abstract: A method of training a classifier for computer aided detection in digitized medical images includes providing a plurality of bags, each bag containing a plurality of feature samples of a single region-of-interest in a medical image, where each region-of-interest has been labeled as either malignant or healthy. The training uses candidates that are spatially adjacent to each other, modeled by a "bag", rather than each candidate by itself. A classifier is trained on the plurality of bags of feature samples, subject to the constraint that at least one point in the convex hull of each bag, corresponding to a feature sample, is correctly classified according to the label of the associated region-of-interest, rather than a large set of discrete constraints in which at least one instance in each bag must be correctly classified.
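The bag-level constraint above can be approximated with a common alternating heuristic for multiple instance learning (this is not the patent's convex-hull formulation): represent each bag by its current highest-scoring instance, retrain on the representatives, and repeat. The synthetic bags and the `LogisticRegression` base classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic bags: each bag holds several candidate feature vectors from one
# region of interest. Positive bags contain at least one shifted instance.
def make_bag(label):
    bag = rng.normal(size=(5, 2))
    if label == 1:
        bag[0] += (3.0, 3.0)  # one truly positive candidate
    return bag

bags = [make_bag(i % 2) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])

# Alternating MIL heuristic: represent each bag by its current
# highest-scoring instance, then retrain on those representatives.
reps = np.array([bag.mean(axis=0) for bag in bags])  # init: bag centroids
clf = LogisticRegression()
for _ in range(5):
    clf.fit(reps, labels)
    scores = [clf.decision_function(bag) for bag in bags]
    reps = np.array([bag[np.argmax(s)] for bag, s in zip(bags, scores)])

acc = clf.score(reps, labels)
print("bag-representative accuracy:", acc)
```

For a linear scoring function, the hull point that maximizes the score is always a vertex of the bag's convex hull, which is why picking the best single instance is a reasonable stand-in for the hull constraint.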
    • 4. Granted invention patent
    • Using candidates correlation information during computer aided diagnosis
    • Publication number: US07912278B2
    • Publication date: 2011-03-22
    • Application number: US11742781
    • Filing date: 2007-05-01
    • Inventors: Glenn Fung, Balaji Krishnapuram, Volkan Vural, R. Bharat Rao
    • IPC: G06K9/46, G06K9/62
    • CPC: G06T7/0012, G06K9/6269, G06K9/6278
    • Abstract: A method and system correlate candidate information and provide batch classification of a number of related candidates. The batch of candidates may be identified from a single data set, and there may be internal correlations and/or differences among the candidates, which the classification may take into account. The locations and descriptive features of a batch of candidates may be determined and, in turn, used to enhance the accuracy of the classification of some or all of the candidates within the batch. In one embodiment, the single data set analyzed is associated with an internal image of a patient, and the distance between candidates is accounted for. Two different algorithms may each simultaneously classify all of the samples within a batch, one based upon probabilistic analysis and the other upon a mathematical programming approach; alternate algorithms may be used.
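As a rough illustration of exploiting correlations within a batch, the sketch below augments each candidate's descriptive feature with the mean feature of its spatial neighbors before classification. The data, neighborhood radius, and classifier are assumptions chosen for illustration; the patent's probabilistic and mathematical-programming algorithms are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(2)

# Candidates from one scan: 2-D locations plus one descriptive feature.
# Nearby candidates tend to share a label (spatial correlation).
locs = rng.uniform(0, 10, size=(100, 2))
labels = (locs[:, 0] > 5).astype(int)             # label depends on location
feats = labels + rng.normal(scale=1.0, size=100)  # noisy per-candidate feature

# Augment each candidate with the mean feature of its spatial neighbors,
# letting the classifier exploit correlations within the batch.
D = pairwise_distances(locs)
neighbor_mean = np.empty(100)
for i in range(100):
    near = (D[i] > 0) & (D[i] < 2.0)
    neighbor_mean[i] = feats[near].mean() if near.any() else feats[i]

X = np.column_stack([feats, neighbor_mean])

solo = LogisticRegression().fit(feats.reshape(-1, 1), labels).score(
    feats.reshape(-1, 1), labels)
batch = LogisticRegression().fit(X, labels).score(X, labels)
print("solo:", solo, "batch:", batch)
```

Averaging over neighbors suppresses the per-candidate noise, so the batch-aware classifier separates the classes better than classifying each candidate in isolation.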
    • 5. Granted invention patent
    • Incorporating spatial knowledge for classification
    • Publication number: US07634120B2
    • Publication date: 2009-12-15
    • Application number: US10915076
    • Filing date: 2004-08-10
    • Inventors: Arun Krishnan, Glenn Fung, Jonathan Stoeckel
    • IPC: G06K9/00
    • CPC: G06T7/0012, G06K9/6807, G06T2207/10072, G06T2207/20012, G06T2207/30064
    • Abstract: We propose using different classifiers based on the spatial location of the object. The intuitive idea behind this approach is that several classifiers may learn local concepts better than a "universal" classifier that covers the whole feature space. The use of local classifiers ensures that the objects of a particular class have a higher degree of resemblance within that particular class, and also yields memory, storage, and performance improvements, especially when the classifier is kernel-based. As used herein, the term "kernel-based classifier" refers to a classifier in which a mapping function (i.e., the kernel) has been used to map the original training data to a higher-dimensional space where the classification task may be easier.
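The local-classifier idea can be sketched in a few lines: partition objects by spatial location, train one kernel classifier per region, and route each object to its region's classifier at prediction time. The two-region split, synthetic data, and RBF-kernel SVM are illustrative assumptions, not the patent's specific construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Objects with a spatial coordinate; the decision rule differs by region:
# left half (loc < 0): positive when feature > 0; right half: reversed.
# A single classifier on the feature alone cannot represent this flip.
loc = rng.uniform(-1, 1, size=300)
feat = rng.normal(size=300)
y = np.where(loc < 0, feat > 0, feat < 0).astype(int)
X = feat.reshape(-1, 1)

# One kernel classifier per spatial region instead of a single
# "universal" classifier over the whole feature space.
left = loc < 0
local = {
    "left": SVC(kernel="rbf").fit(X[left], y[left]),
    "right": SVC(kernel="rbf").fit(X[~left], y[~left]),
}

def predict(feature, location):
    clf = local["left"] if location < 0 else local["right"]
    return int(clf.predict([[feature]])[0])

acc = np.mean([predict(f, l) == t for f, l, t in zip(feat, loc, y)])
print("local-classifier accuracy:", acc)
```

Each local model also trains on only half the data, which is where the memory and performance benefits mentioned in the abstract come from for kernel methods, whose cost grows with the number of training (support) vectors.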
    • 7. Published invention application
    • Greedy support vector machine classification for feature selection applied to the nodule detection problem
    • Publication number: US20050105794A1
    • Publication date: 2005-05-19
    • Application number: US10924136
    • Filing date: 2004-08-23
    • Inventor: Glenn Fung
    • IPC: G06K9/62, G06K9/00
    • CPC: G06K9/6269, G06K9/6228
    • Abstract: An incremental greedy method for feature selection is described. This method results in a final classifier that performs optimally while depending on only a few features. A small number of features is generally desired because the complexity of a classification method often depends on the number of features, and it is well known that a large number of features may lead to overfitting on the training set, which in turn leads to poor generalization performance on new and unseen data. The incremental greedy method is based on selecting a limited subset of features from the feature space. By providing low feature dependency, it requires fewer computations than a feature extraction approach such as principal component analysis.
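An incremental greedy scheme of this general kind can be sketched as forward selection: repeatedly add the single feature that most improves cross-validated accuracy of a linear SVM, stopping when no feature helps. This is a generic sketch under assumed data and stopping tolerance, not the specific method claimed in the application.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# 8 features; only features 0 and 3 are informative.
X = rng.normal(size=(200, 8))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

# Greedy forward selection: repeatedly add the single feature that most
# improves cross-validated accuracy, stopping when no feature helps.
selected, remaining, best = [], list(range(8)), 0.0
while remaining:
    scores = {
        j: cross_val_score(LinearSVC(dual=False), X[:, selected + [j]], y, cv=3).mean()
        for j in remaining
    }
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best + 0.01:  # stop when the gain is negligible
        break
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("greedy-selected features:", sorted(selected), "cv accuracy:", round(best, 3))
```

Unlike principal component analysis, which mixes all inputs into each extracted component, this greedy search leaves the final classifier depending on only the few original features it actually picked.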