    • 1. Invention application
    • SYSTEMS AND METHODS EMPLOYING COOPERATIVE OPTIMIZATION-BASED DIMENSIONALITY REDUCTION
    • Publication No.: WO2010017300A1
    • Publication Date: 2010-02-11
    • Application No.: PCT/US2009/052860
    • Filing Date: 2009-08-05
    • Applicants: HALLIBURTON ENERGY SERVICES, INC.; CHEN, Dingding; HAMID, Syed; DIX, Michael, C.
    • Inventors: CHEN, Dingding; HAMID, Syed; DIX, Michael, C.
    • Main IPC: G01V1/40
    • Further IPC: G01V1/34; G06N3/126; G06T11/206
    • Dimensionality reduction systems and methods facilitate visualization, understanding, and interpretation of high-dimensionality data sets, so long as the essential information of the data set is preserved during the dimensionality reduction process. In some of the disclosed embodiments, dimensionality reduction is accomplished using clustering, evolutionary computation of low-dimensionality coordinates for cluster kernels, particle swarm optimization of kernel positions, and training of neural networks based on the kernel mapping. The fitness function chosen for the evolutionary computation and particle swarm optimization is designed to preserve kernel distances and any other information deemed useful to the current application of the disclosed techniques, such as linear correlation with a variable that is to be predicted from future measurements. Various error measures are suitable and can be used.
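The abstract above chains clustering, evolutionary computation, particle swarm optimization of kernel positions, and neural-network training. The PSO step can be illustrated with a minimal pure-Python sketch that places cluster kernels in 2-D so that pairwise kernel distances are preserved. This is an assumption-laden illustration, not the patent's implementation: the `pso_embed` function, its swarm parameters, and the plain distance-stress fitness are all invented here for demonstration, whereas the patent's fitness function may also preserve other application-specific information (e.g. linear correlation with a predicted variable).

```python
import math
import random

def dist(a, b):
    # Euclidean distance between two points of equal dimensionality.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def stress(flat, kernels):
    # Fitness: squared mismatch between low-D and high-D pairwise
    # kernel distances (a plain stress measure, assumed for this sketch).
    pts = [flat[i:i + 2] for i in range(0, len(flat), 2)]
    err = 0.0
    for i in range(len(kernels)):
        for j in range(i + 1, len(kernels)):
            err += (dist(pts[i], pts[j]) - dist(kernels[i], kernels[j])) ** 2
    return err

def pso_embed(kernels, n_particles=24, iters=400, seed=0):
    # Standard global-best PSO over the flattened 2-D kernel coordinates.
    rng = random.Random(seed)
    dim = 2 * len(kernels)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [stress(p, kernels) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = stress(pos[i], kernels)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    # Return the 2-D coordinates per kernel and the final stress value.
    return [gbest[i:i + 2] for i in range(0, len(gbest), 2)], gbest_f
```

In the full pipeline the abstract describes, a neural network would then be trained on this kernel mapping to embed arbitrary new points, which this sketch omits.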
    • 2. Invention application
    • ENSEMBLES OF NEURAL NETWORKS WITH DIFFERENT INPUT SETS
    • Publication No.: WO2007001731A2
    • Publication Date: 2007-01-04
    • Application No.: PCT/US2006/021158
    • Filing Date: 2006-06-01
    • Applicants: HALLIBURTON ENERGY SERVICES, INC.; CHEN, Dingding; QUIREIN, John, A.; SMITH, Harry, D.; HAMID, Syed; GRABLE, Jeffery, L.
    • Inventors: CHEN, Dingding; QUIREIN, John, A.; SMITH, Harry, D.; HAMID, Syed; GRABLE, Jeffery, L.
    • Main IPC: G06N3/02
    • Further IPC: G06N3/0454; G06N3/086
    • Methods of creating and using robust neural network ensembles are disclosed. Some embodiments take the form of computer-based methods that comprise receiving a set of available inputs; receiving training data; training at least one neural network for each of at least two different subsets of the set of available inputs; and providing at least two trained neural networks having different subsets of the available inputs as components of a neural network ensemble configured to transform the available inputs into at least one output. The neural network ensemble may be applied as a log synthesis method that comprises: receiving a set of downhole logs; applying a first subset of downhole logs to a first neural network to obtain an estimated log; applying a second, different subset of the downhole logs to a second neural network to obtain an estimated log; and combining the estimated logs to obtain a synthetic log.
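The ensemble idea in this abstract — train members on different subsets of the available inputs, then combine their outputs — can be sketched in a few lines of pure Python. As a stated simplification, gradient-descent linear models stand in for the trained neural networks, and averaging stands in for the combining step; the names `train_linear` and `ensemble_predict` are illustrative, not from the patent.

```python
import random

def train_linear(X, y, epochs=500, lr=0.1):
    # Gradient-descent linear model standing in for one trained network.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(len(w)):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    return sum(wj * xj for wj, xj in zip(w, x)) + b

def ensemble_predict(models, subsets, x):
    # Each member sees only its own subset of the available inputs;
    # member outputs are combined here by simple averaging.
    preds = [predict(m, [x[j] for j in s]) for m, s in zip(models, subsets)]
    return sum(preds) / len(preds)

# Two members trained on different subsets of three available inputs,
# mirroring the patent's log-synthesis usage where different subsets of
# downhole logs feed different networks.
rng = random.Random(1)
X = [[rng.random() for _ in range(3)] for _ in range(20)]
y = [2.0 * x[1] for x in X]      # synthetic target: input 1 carries the signal
subsets = [[0, 1], [1, 2]]       # different input subsets, both contain input 1
models = [train_linear([[x[j] for j in s] for x in X], y) for s in subsets]
```

A practical benefit of this structure, as the abstract implies, is robustness: if one input is missing or unreliable, the members that do not depend on it still contribute useful estimates.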
    • 4. Invention application
    • NEURAL-NETWORK BASED SURROGATE MODEL CONSTRUCTION METHODS AND APPLICATIONS THEREOF
    • Publication No.: WO2008112921A1
    • Publication Date: 2008-09-18
    • Application No.: PCT/US2008/056894
    • Filing Date: 2008-03-13
    • Applicants: HALLIBURTON ENERGY SERVICES, INC.; CHEN, Dingding; ZHONG, Allan; HAMID, Syed; STEPHENSON, Stanley
    • Inventors: CHEN, Dingding; ZHONG, Allan; HAMID, Syed; STEPHENSON, Stanley
    • Main IPC: G06F17/00
    • Further IPC: G06N3/0454; B33Y80/00
    • Various neural-network based surrogate model construction methods are disclosed herein, along with various applications of such models. Designed for use when only a sparse amount of data is available (a "sparse data condition"), some embodiments of the disclosed systems and methods: create a pool of neural networks trained on a first portion of a sparse data set; generate for each of various multi-objective functions a set of neural network ensembles that minimize the multi-objective function; select a local ensemble from each set of ensembles based on data not included in said first portion of said sparse data set; and combine a subset of the local ensembles to form a global ensemble. This approach enables usage of larger candidate pools, multi-stage validation, and a comprehensive performance measure that provides more robust predictions in the voids of parameter space.
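The sparse-data workflow in this abstract — train a pool on a first portion of the data, validate on data held out from that portion, select the best members, and combine them into a global ensemble — can be sketched as follows. This is a deliberately reduced illustration: bootstrap-trained linear models stand in for the neural-network surrogates, single-objective validation MSE replaces the patent's multi-objective ensemble generation, and all function names and parameters (`train_member`, `build_global_ensemble`, `pool_size`, `keep`) are assumptions for the sketch.

```python
import random

def train_member(X, y, seed, epochs=400, lr=0.1):
    # Stand-in surrogate: a linear model fit on a bootstrap resample,
    # so different seeds yield different pool members.
    rng = random.Random(seed)
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(Xb, yb):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(len(w)):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def mse(member, X, y):
    w, b = member
    return sum((sum(wj * xj for wj, xj in zip(w, xi)) + b - yi) ** 2
               for xi, yi in zip(X, y)) / len(X)

def build_global_ensemble(X, y, pool_size=8, keep=3, split=0.6):
    # Train the pool on the first portion of the sparse data set,
    # rank members on the held-out remainder, keep the best few.
    cut = int(len(X) * split)
    pool = [train_member(X[:cut], y[:cut], seed=s) for s in range(pool_size)]
    ranked = sorted(pool, key=lambda m: mse(m, X[cut:], y[cut:]))
    return ranked[:keep]

def ensemble_predict(members, x):
    # Global ensemble output: average of the selected members.
    preds = [sum(wj * xj for wj, xj in zip(w, x)) + b for w, b in members]
    return sum(preds) / len(preds)
```

Averaging several validated members, rather than trusting the single best one, is what gives the approach its more robust predictions in the unsampled "voids" of parameter space that the abstract mentions.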