    • 5. Granted invention patent
    • Computer system and computerized method for partitioning data for parallel processing
    • Publication: US5909681A
    • Published: 1999-06-01
    • Application: US624844
    • Filed: 1996-03-25
    • Inventors: Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. A. Zyszkowski
    • Applicants: Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. A. Zyszkowski
    • IPC: G06F9/50, G06F17/30, G06K9/00, G06K9/62, G06N3/04, G06F15/163
    • CPC: G06F17/30224, G06F9/5027, G06K9/00973, G06K9/6218, G06K9/6282, G06N3/0454, Y10S707/99933, Y10S707/99936, Y10S707/99938
    • Abstract: A computer system splits a data space to partition data between processors or processes. The data space may be split, using a decision tree, into sub-regions which need not be orthogonal to the axes defined by the data space's parameters. The decision tree can have a neural network in each of its non-terminal nodes that is trained on, and used to partition, training data. Each terminal, or leaf, node can have a hidden-layer neural network trained on the training data that reaches that node. The training of the non-terminal nodes' neural networks can be performed on one processor, and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for training the networks of different non-terminal nodes. The non-terminal node networks may themselves be hidden-layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
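The routing scheme in the abstract above — each non-terminal node forwarding a desired ratio of the records it receives to each child, so that the leaf-node networks get roughly equal shares — can be sketched as follows. This is a minimal illustration, not the patented method: the trained neural-network router is replaced by random sampling of the ratio distribution, and the `NonTerminalNode` and `partition` names are invented for this example.

```python
import random

class NonTerminalNode:
    """Routes each record to a child according to a desired ratio,
    standing in for the patent's trained neural-network router."""
    def __init__(self, children, ratios):
        self.children = children
        self.ratios = ratios  # fraction of records each child should receive

    def route(self, record):
        # A trained network would score the record; here we simply sample
        # the ratio distribution so leaves receive balanced shares.
        r, acc = random.random(), 0.0
        for child, p in zip(self.children, self.ratios):
            acc += p
            if r < acc:
                return child
        return self.children[-1]

def partition(records, n_leaves):
    """Split records across n_leaves 'processors' via a one-level tree."""
    leaves = [[] for _ in range(n_leaves)]
    node = NonTerminalNode(list(range(n_leaves)), [1.0 / n_leaves] * n_leaves)
    for rec in records:
        leaves[node.route(rec)].append(rec)
    return leaves

buckets = partition(list(range(1000)), 4)
print([len(b) for b in buckets])  # four roughly equal shares
```

In the patent's scheme the number of leaves would match the number of available processors, and each leaf's bucket would then be handed to its own processor to train that leaf's network.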
    • 6. Granted invention patent
    • Computer system and computerized method for partitioning data for parallel processing
    • Publication: US06415286B1
    • Published: 2002-07-02
    • Application: US09281984
    • Filed: 1999-03-29
    • Inventors: Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. Zyszkowski
    • Applicants: Anthony Passera, John R. Thorp, Michael J. Beckerle, Edward S. Zyszkowski
    • IPC: G06F15/163
    • CPC: G06F17/30224, G06F9/5027, G06K9/00973, G06K9/6218, G06K9/6282, G06N3/0454, Y10S707/99933, Y10S707/99936, Y10S707/99938
    • Abstract: A computer system splits a data space to partition data between processors or processes. The data space may be split, using a decision tree, into sub-regions which need not be orthogonal to the axes defined by the data space's parameters. The decision tree can have a neural network in each of its non-terminal nodes that is trained on, and used to partition, training data. Each terminal, or leaf, node can have a hidden-layer neural network trained on the training data that reaches that node. The training of the non-terminal nodes' neural networks can be performed on one processor, and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for training the networks of different non-terminal nodes. The non-terminal node networks may themselves be hidden-layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
    • 7. Granted invention patent
    • Apparatuses and methods for monitoring performance of parallel computing
    • Publication: US06330008B1
    • Published: 2001-12-11
    • Application: US08807040
    • Filed: 1997-02-24
    • Inventors: Allen M. Razdow, Daniel W. Kohn, Michael J. Beckerle, Jeffrey D. Ives
    • Applicants: Allen M. Razdow, Daniel W. Kohn, Michael J. Beckerle, Jeffrey D. Ives
    • IPC: G06F13/00
    • CPC: G06F11/3404
    • Abstract: A performance monitor represents the execution of a data flow graph by displaying changing performance information along different parts of a representation of that graph. If the graph is executed in parallel, the monitor can show parallel operator instances, their associated datalinks, and the performance information relevant to each. The individual parallel processes executing the graph send performance messages to the performance monitor, and the monitor can instruct such processes to vary the information they send. The monitor can provide 2D or 3D views in which the user can change focus, zoom, and viewpoint. In 3D views, parallel instances of the same operator are grouped in a 2D array. The data rate of a datalink can be represented by both the density and velocity of line segments along the line that represents it. The line can be colored as a function of the datalink's source or destination, its data rate, or the integral thereof. Alternatively, a histogram can be displayed along each datalink's line, showing, at successive intervals, information about the rate of the data sent, its total, or the value of a field within it. The user can click on objects to obtain additional information, such as bar charts of statistics, detailed performance listings, or invocation of a debugger. The user can selectively collapse representations of graph objects into composite representations; highlight objects that are out of records or have flow blockages; label operators; turn off the display of objects; and record and play back the performance information.
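The core message flow above — parallel processes periodically reporting counts to a central monitor that keeps per-instance statistics — might look like the following sketch. The `PerfMessage` fields, `REPORT_EVERY` interval, and function names are assumptions for illustration; the patent describes a richer protocol in which the monitor can also instruct each process to change what it reports.

```python
import queue
from dataclasses import dataclass

@dataclass
class PerfMessage:
    operator: str     # operator name in the dataflow graph
    instance: int     # parallel instance index
    records_out: int  # records sent so far on the output datalink

REPORT_EVERY = 50  # reporting interval; in the patent the monitor can
                   # tell processes to vary what and how often they send

def run_instance(op_name, idx, n_records, q):
    """A parallel process that emits periodic performance messages."""
    for sent in range(1, n_records + 1):
        if sent % REPORT_EVERY == 0:
            q.put(PerfMessage(op_name, idx, sent))

def collect(q):
    """The monitor keeps the latest figure per operator instance."""
    latest = {}
    while not q.empty():
        m = q.get()
        latest[(m.operator, m.instance)] = m.records_out
    return latest

q = queue.Queue()
for i in range(3):  # three parallel instances of one operator
    run_instance("sort", i, 100, q)
stats = collect(q)
print(stats)  # {('sort', 0): 100, ('sort', 1): 100, ('sort', 2): 100}
```

A real implementation would run each instance in its own process and drive the 2D/3D display from `stats`; the sketch runs them sequentially to keep the example self-contained.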
    • 8. Granted invention patent
    • Apparatuses and methods for programming parallel computers
    • Publication: US06311265B1
    • Published: 2001-10-30
    • Application: US08627801
    • Filed: 1996-03-25
    • Inventors: Michael J. Beckerle, James Richard Burns, Jerry L. Callen, Jeffrey D. Ives, Robert L. Krawitz, Daniel L. Leary, Seven Rosenthal, Edward S. A. Zyzkowski
    • Applicants: Michael J. Beckerle, James Richard Burns, Jerry L. Callen, Jeffrey D. Ives, Robert L. Krawitz, Daniel L. Leary, Seven Rosenthal, Edward S. A. Zyzkowski
    • IPC: G06F9/38
    • CPC: G06F17/30445, G06F8/20, G06F8/314
    • Abstract: A system provides an environment for parallel programming by providing a plurality of modular parallelizable operators stored in a computer-readable memory. Each operator defines: operation programming for performing an operation; one or more communication ports, each of which is either an input port for supplying the operation programming with a data stream of records or an output port for receiving a data stream of records from the operation programming; and, for each of the operator's input ports (if any), an indication of a partitioning method to be applied to the data stream supplied to that port. An interface enables users to define a data flow graph by giving instructions to select a specific operator for inclusion in the graph; to select, for inclusion in the graph, a specific data object capable of supplying or receiving a data stream of one or more records; or to associate a datalink with a specific communication port of an operator in the graph, which datalink defines a path for communicating a data stream of one or more records between its associated communication port and either a specific data object or the specific communication port of another operator in the graph. The execution of a data flow graph equivalent to the one defined by the user is automatically parallelized by running a separate instance of each such operator, including its associated operation programming, on each of multiple processors, with each instance of a given operator having a corresponding input and output port for each input and output port of that operator, and by automatically partitioning the data stream supplied to the corresponding inputs of the instances of a given operator according to the partitioning-method indication for the given operator's corresponding input.
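As a rough sketch of the idea — operators that declare ports and a per-input partitioning method, which the framework instantiates once per processor over a partitioned input stream — consider the following. The `Operator`, `hash_partition`, and `run_parallel` names are illustrative, not the patent's API, and the "processors" are simulated sequentially.

```python
def hash_partition(records, key, n):
    """Split a record stream into n sub-streams by hashing a key field,
    one sub-stream per parallel operator instance."""
    parts = [[] for _ in range(n)]
    for rec in records:
        parts[hash(rec[key]) % n].append(rec)
    return parts

class Operator:
    """A modular parallelizable operator: operation programming plus the
    partitioning method to apply to its input stream."""
    def __init__(self, name, fn, partitioning=hash_partition):
        self.name, self.fn, self.partitioning = name, fn, partitioning

def run_parallel(op, records, key, n_procs):
    """Run one instance of the operator per 'processor'; the framework
    partitions the input stream before feeding each instance."""
    parts = op.partitioning(records, key, n_procs)
    return [op.fn(p) for p in parts]

# Count records per instance; hash partitioning keeps records that
# share a key value in the same instance.
count = Operator("count", lambda recs: len(recs))
records = [{"k": i % 7, "v": i} for i in range(100)]
per_instance = run_parallel(count, records, "k", 4)
print(sum(per_instance))  # → 100: every record reaches exactly one instance
```

Hash partitioning is just one of the partitioning methods an operator could indicate for an input port; the point of the sketch is that the graph author never writes the parallel plumbing — the framework derives it from each operator's declaration.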