    • 1. Invention grant
    • Computer system and computerized method for partitioning data for parallel processing
    • US06415286B1
    • 2002-07-02
    • US09281984
    • 1999-03-29
    • Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. Zyszkowski
    • Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. Zyszkowski
    • G06F15/163
    • G06F17/30224; G06F9/5027; G06K9/00973; G06K9/6218; G06K9/6282; G06N3/0454; Y10S707/99933; Y10S707/99936; Y10S707/99938
    • A computer system splits a data space to partition data between processors or processes. The data space may be split into sub-regions which need not be orthogonal to the axes defined by the data space's parameters, using a decision tree. The decision tree can have neural networks in each of its non-terminal nodes that are trained on, and are used to partition, training data. Each terminal, or leaf, node can have a hidden layer neural network trained on the training data that reaches the terminal node. The training of the non-terminal nodes' neural networks can be performed on one processor and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for the training of the networks of different non-terminal nodes. The non-terminal node networks may be hidden layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
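The partition-tree idea in this abstract can be sketched in a few lines. This is an illustrative stand-in, not the patented method: the non-terminal "networks" are replaced by random routers that send a desired ratio of records to each child, and every class, function, and variable name here is invented for the example.

```python
# Sketch: a tree routes training records to leaves so each leaf receives
# roughly an equal share; each leaf's model could then be trained on a
# separate processor. Random routing stands in for the trained non-terminal
# neural networks described in the abstract.
import random

class SplitNode:
    def __init__(self, left, right, ratio=0.5):
        self.left, self.right, self.ratio = left, right, ratio

    def route(self, record):
        # A trained non-terminal network would score the record; here we
        # simply send the desired ratio of records to each child at random.
        return self.left if random.random() < self.ratio else self.right

class LeafNode:
    def __init__(self, name):
        self.name = name
        self.records = []  # training records that reach this leaf

def build_tree(n_leaves):
    """Configure a balanced tree with one leaf per available processor
    (n_leaves must be a power of two in this sketch)."""
    nodes = [LeafNode(f"leaf{i}") for i in range(n_leaves)]
    while len(nodes) > 1:
        nodes = [SplitNode(nodes[i], nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

def dispatch(tree, records):
    for rec in records:
        node = tree
        while isinstance(node, SplitNode):
            node = node.route(rec)
        node.records.append(rec)

def leaves(node):
    if isinstance(node, LeafNode):
        return [node]
    return leaves(node.left) + leaves(node.right)

random.seed(0)
tree = build_tree(4)              # e.g. 4 processors available
dispatch(tree, range(100_000))
shares = [len(l.records) for l in leaves(tree)]
print(shares)                     # each leaf gets roughly 25,000 records
```

With 0.5 ratios at every node, each leaf receives about 1/n of the records, which is the load-balancing property the abstract claims for the desired-ratio routing.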
    • 3. Invention grant
    • Computer system and computerized method for partitioning data for parallel processing
    • US5909681A
    • 1999-06-01
    • US624844
    • 1996-03-25
    • Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. A. Zyszkowski
    • Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. A. Zyszkowski
    • G06F9/50; G06F17/30; G06K9/00; G06K9/62; G06N3/04; G06F15/163
    • G06F17/30224; G06F9/5027; G06K9/00973; G06K9/6218; G06K9/6282; G06N3/0454; Y10S707/99933; Y10S707/99936; Y10S707/99938
    • A computer system splits a data space to partition data between processors or processes. The data space may be split into sub-regions which need not be orthogonal to the axes defined by the data space's parameters, using a decision tree. The decision tree can have neural networks in each of its non-terminal nodes that are trained on, and are used to partition, training data. Each terminal, or leaf, node can have a hidden layer neural network trained on the training data that reaches the terminal node. The training of the non-terminal nodes' neural networks can be performed on one processor and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for the training of the networks of different non-terminal nodes. The non-terminal node networks may be hidden layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
    • 5. Invention grant
    • Computer system and process for training of analytical models using large data sets
    • US06347310B1
    • 2002-02-12
    • US09075730
    • 1998-05-11
    • Anthony Passera
    • Anthony Passera
    • G06F15/18
    • G06K9/6273; G06K9/6282; G06N3/08; G06N99/005
    • A database often contains sparse, i.e., under-represented, conditions which might not be represented in a training data set for training an analytical model if the training data set is created by stratified sampling. Sparse conditions may be represented in a training set by using a data set which includes essentially all of the data in a database, without stratified sampling. A series of samples, or “windows,” is used to select portions of the large data set for phases of training. In general, the first window of data should be a reasonably broad sample of the data. After the model is initially trained using a first window of data, subsequent windows are used to retrain the model. For some model types, the model is modified in order to provide it with some retention of training obtained using previous windows of data. Neural networks and Kohonen networks may be used without modification. Models such as probabilistic neural networks, generalized regression neural networks, Gaussian radial basis functions, and decision trees, including K-D trees and neural trees, are modified to provide them with properties of memory to retain the effects of training with previous training data sets. Such a modification may be provided using clustering. Parallel training models which partition the training data set into disjoint subsets are modified so that the partitioner is trained only on the first window of data, whereas subsequent windows are used to train the models to which the partitioner applies the data in parallel.
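The window-based training this abstract describes can be illustrated with a minimal sketch. A toy running-mean "model" stands in for the neural and clustering models the patent names; the point shown is only the memory property: training on later windows refines, rather than overwrites, what earlier windows taught. All names are illustrative.

```python
# Sketch: feed a large data set to a model one window at a time. The model
# is trained on the first window and retrained on each subsequent window
# while retaining the effect of earlier windows (here, via an exact
# incremental mean in which every past record keeps its weight).

class WindowedMeanModel:
    """Per-record incremental mean, so earlier windows are never forgotten."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def train_window(self, window):
        for x in window:
            self.count += 1
            # Incremental update: equivalent to averaging over ALL records
            # seen so far, not just the current window.
            self.mean += (x - self.mean) / self.count

def windows(data, size):
    """Yield successive windows of the (potentially huge) data set."""
    for start in range(0, len(data), size):
        yield data[start:start + size]

data = list(range(1000))            # stand-in for a database too big to sample
model = WindowedMeanModel()
for w in windows(data, size=100):   # first window trains, the rest retrain
    model.train_window(w)
print(model.mean)                   # ≈ 499.5, the mean of the full data set
```

Because the update keeps the full running count, the result matches training on the whole data set at once, which is the retention behavior the abstract attributes to its modified models.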
    • 6. Invention grant
    • Computer system and process for explaining behavior of a model that maps input data to output data
    • US06272449B1
    • 2001-08-07
    • US09102349
    • 1998-06-22
    • Anthony Passera
    • Anthony Passera
    • G06F17/10
    • G06K9/6232; G06N3/02
    • The present invention provides a description of the behavior of a model that indicates the sensitivity of the model in subspaces of the input space and indicates which dimensions of the input data are salient in those subspaces. By implementing this description using a decision tree, the subspaces and their salient dimensions are both described and determined hierarchically. A sensitivity analysis is performed on the model to provide a sensitivity profile of the input space of the model according to the sensitivity of the model's outputs to variations in the data input to the model. The input space is divided into at least two subspaces according to the sensitivity profile. A sensitivity analysis is then performed on the model to provide a sensitivity profile of each of the subspaces according to the sensitivity of the model's outputs to variations in the data input to the model.
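The sensitivity-profile step in this abstract can be sketched as a finite-difference probe: perturb each input dimension across sample points, average how much the output moves, and pick the most salient dimension as the axis along which to split the input space. The toy quadratic model and all names below are illustrative, not the patented system.

```python
# Sketch: build a sensitivity profile of a black-box model's input space by
# perturbing each input dimension and measuring output variation, then pick
# the salient dimension for splitting the space.

def model(x, y):
    # Toy black-box model: strongly sensitive to x, mildly to y.
    return 10 * x * x + y

def sensitivity_profile(f, points, eps=1e-4):
    """Average |f(... + eps) - f(...)| / eps over sample points, per dimension."""
    sx = sum(abs(f(x + eps, y) - f(x, y)) / eps for x, y in points) / len(points)
    sy = sum(abs(f(x, y + eps) - f(x, y)) / eps for x, y in points) / len(points)
    return {"x": sx, "y": sy}

# Grid of sample points covering the input space [-1, 1] x [-1, 1].
points = [(x / 10, y / 10) for x in range(-10, 11) for y in range(-10, 11)]
profile = sensitivity_profile(model, points)
salient = max(profile, key=profile.get)  # dimension to split the space on
print(salient, profile)
```

Recursing on the resulting subspaces, and profiling each one separately, would give the hierarchical decision-tree description the abstract outlines.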