    • 4. Granted invention patent
    • Title: Learning machine synapse processor system apparatus
    • Publication number: US5613044A
    • Publication date: 1997-03-18
    • Application number: US459199
    • Filing date: 1995-06-02
    • Inventors: Gerald G. Pechanek; Stamatis Vassiliadis; Jose G. Delgado-Frias
    • IPC: G06F15/18
    • CPC: G06N3/063
    • Abstract: A neural synapse processor apparatus having a neuron architecture for the synapse processing elements of the apparatus. The preferred apparatus has an N-neuron structure with synapse processing units that contain instruction and data storage units, receive instructions and data, and execute instructions. The N-neuron structure contains communicating adder trees, neuron activation function units, and an arrangement for communicating instructions, data, and the outputs of the neuron activation function units back to the input synapse processing units by means of the communicating adder trees. The apparatus can be structured as a bit-serial or word-parallel system. The preferred structure contains N² synapse processing units, each associated with a connection weight in the N-neuron network to be emulated, placed in the form of an N-by-N matrix that has been folded along the diagonal and made up of diagonal cells and general cells. Diagonal cells, each utilizing a single synapse processing unit, are associated with the diagonal connection weights of the folded N-by-N connection weight matrix; general cells, each of which has two synapse processing units merged together, are associated with the symmetric connection weights of the folded N-by-N connection weight matrix. The back-propagation learning algorithm is first discussed, followed by a presentation of the learning machine synapse processor architecture. An example implementation of the back-propagation learning algorithm is then presented, followed by a Boltzmann-like machine example and data-parallel examples mapped onto the architecture.
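The folded-matrix layout in the abstract above can be made concrete with a small sketch (our own illustration, not code from the patent): the N-by-N weight matrix is folded along its diagonal so each pair of symmetric weights w[i][j] and w[j][i] shares one "general cell", while each diagonal weight gets its own "diagonal cell".

```python
# Hypothetical sketch of the folded connection-weight layout: diagonal cells
# hold one synapse processing unit each; general cells merge the two units
# for a symmetric weight pair (w[i][j], w[j][i]).

def fold_weights(w):
    """Return (diagonal_cells, general_cells) for a square weight matrix w."""
    n = len(w)
    diagonal_cells = [w[i][i] for i in range(n)]            # one unit each
    general_cells = [(w[i][j], w[j][i])                     # two merged units
                     for i in range(n) for j in range(i + 1, n)]
    return diagonal_cells, general_cells

w = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
diag, gen = fold_weights(w)
# Total synapse processing units is still N²: N diagonal + 2 per general cell.
assert len(diag) + 2 * len(gen) == 9
```

The point of the fold is that every weight remains present (N² units in all), but the physical cell array is triangular, which is what lets the symmetric cells exchange data locally.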
    • 5. Granted invention patent
    • Title: Learning machine synapse processor system apparatus
    • Publication number: US5517596A
    • Publication date: 1996-05-14
    • Application number: US161839
    • Filing date: 1993-12-01
    • Inventors: Gerald G. Pechanek; Stamatis Vassiliadis; Jose G. Delgado-Frias
    • IPC: G06F15/18
    • CPC: G06N3/063
    • Abstract: A neural synapse processor apparatus having a neuron architecture for the synapse processing elements of the apparatus. The preferred apparatus has an N-neuron structure with synapse processing units that contain instruction and data storage units, receive instructions and data, and execute instructions. The N-neuron structure contains communicating adder trees, neuron activation function units, and an arrangement for communicating instructions, data, and the outputs of the neuron activation function units back to the input synapse processing units by means of the communicating adder trees. The apparatus can be structured as a bit-serial or word-parallel system. The preferred structure contains N² synapse processing units, each associated with a connection weight in the N-neuron network to be emulated, placed in the form of an N-by-N matrix that has been folded along the diagonal and made up of diagonal cells and general cells. Diagonal cells, each utilizing a single synapse processing unit, are associated with the diagonal connection weights of the folded N-by-N connection weight matrix; general cells, each of which has two synapse processing units merged together, are associated with the symmetric connection weights of the folded N-by-N connection weight matrix. The back-propagation learning algorithm is first discussed, followed by a presentation of the learning machine synapse processor architecture. An example implementation of the back-propagation learning algorithm is then presented, followed by a Boltzmann-like machine example and data-parallel examples mapped onto the architecture.
    • 7. Granted invention patent
    • Title: Parallel processing system and method using surrogate instructions
    • Publication number: US5649135A
    • Publication date: 1997-07-15
    • Application number: US373128
    • Filing date: 1995-01-17
    • Inventors: Gerald G. Pechanek; Clair John Glossner; Larry D. Larsen; Stamatis Vassiliadis
    • IPC: G06F9/318; G06F9/38; G06F15/16; G06F15/80; G06F9/355
    • CPC: G06F9/3885; G06F17/142; G06F9/30072; G06F9/30181
    • Abstract: A parallel processing system and method is disclosed which provides an improved instruction distribution mechanism for a parallel processing array. The invention broadcasts a basic instruction to each of a plurality of processor elements. Each processor element decodes the same instruction by combining it with a unique offset value stored in each respective processor element, to produce a derived instruction that is unique to that processor element. A first type of basic instruction causes the processor element to perform a logical or control operation. A second type of basic instruction results in the generation of a pointer address. The pointer address has a unique address value because it results from combining the basic instruction with the unique offset value stored at the processor element. The pointer address is used to access an alternative instruction from an alternative instruction storage, for execution in the processor element. The alternative instruction is a very long instruction word whose length is, for example, an integral multiple of the length of the basic instruction and which contains much more information than can be represented by the basic instruction. Such a very long instruction word is useful for providing parallel control of a plurality of primitive execution units that reside within the processor element. In this manner, a high degree of flexibility and versatility is attained in the operation of the processor elements of a parallel processing array.
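The surrogate-instruction mechanism described above can be sketched as follows (an illustrative model of ours, with invented names, not code from the patent): one basic instruction is broadcast to all processor elements, and each PE combines it with its private offset to either execute it directly or form a pointer into a local table of very long instruction words (VLIWs).

```python
# Illustrative model: a single broadcast drives distinct behavior in every PE
# because each PE folds in its own unique offset when decoding.

class ProcessorElement:
    def __init__(self, offset, vliw_table):
        self.offset = offset          # unique per-PE value
        self.vliw_table = vliw_table  # alternative-instruction storage

    def decode(self, basic_instruction):
        opcode, operand = basic_instruction
        if opcode == "CTRL":                      # first type: direct operation
            return ("CTRL", operand)
        if opcode == "SURROGATE":                 # second type: pointer address
            pointer = (operand + self.offset) % len(self.vliw_table)
            return self.vliw_table[pointer]       # fetch the derived VLIW
        raise ValueError(opcode)

table = ["VLIW-A", "VLIW-B", "VLIW-C", "VLIW-D"]
pes = [ProcessorElement(offset, table) for offset in range(4)]
# One broadcast instruction, four different derived instructions:
results = [pe.decode(("SURROGATE", 1)) for pe in pes]
assert results == ["VLIW-B", "VLIW-C", "VLIW-D", "VLIW-A"]
```

The design choice this illustrates is bandwidth economy: the broadcast bus only ever carries short basic instructions, while the wide VLIWs stay in per-PE storage.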
    • 8. Granted invention patent
    • Title: Apparatus and method for neural processing
    • Publication number: US5251287A
    • Publication date: 1993-10-05
    • Application number: US915
    • Filing date: 1993-01-06
    • Inventors: Stamatis Vassiliadis; Gerald G. Pechanek
    • IPC: G06N3/063; G06N3/10; G06F15/18
    • CPC: G06N3/10; G06N3/063
    • Abstract: The neural computing paradigm is characterized as a dynamic and highly computationally intensive system, typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. Herein is described a neural network architecture for a Scalable Neural Array Processor (SNAP), which uses a unique interconnection and communication scheme within an array structure to provide high performance for completely connected network models such as the Hopfield model. SNAP's packaging and expansion capabilities are addressed, demonstrating SNAP's scalability to larger networks. The array processor is made up of multiple sets of orthogonal interconnections and activity generators. Each activity generator is responsive to an output signal in order to generate a neuron value. The interconnection structure also uses special adder trees which respond in a first state to generate an output signal and in a second state to communicate a neuron value back to the input of the array processor.
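The two-state "communicating adder tree" in the abstract above can be modeled very loosely (a toy sketch of ours, not the patented circuit): in one state the tree sums its leaf values up to the root; in the other it carries a single value, such as a computed neuron output, back down to every leaf.

```python
# Toy model of a communicating adder tree with its two operating states.

def adder_tree(leaves, state, value=None):
    if state == "sum":            # first state: produce the output signal
        return sum(leaves)
    if state == "communicate":    # second state: fan the value back to leaves
        return [value] * len(leaves)
    raise ValueError(state)

partial_products = [0.5, 1.5, 2.0, 4.0]
total = adder_tree(partial_products, "sum")
assert total == 8.0
# The same tree then returns the neuron value to every input position:
assert adder_tree(partial_products, "communicate", value=total) == [8.0] * 4
```

Reusing one tree for both summation and feedback is what removes the need for a separate broadcast network between neuron outputs and synapse inputs.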
    • 10. Granted invention patent
    • Title: Parallel array processor interconnections
    • Publication number: US5577262A
    • Publication date: 1996-11-19
    • Application number: US522163
    • Filing date: 1995-07-13
    • Inventors: Gerald G. Pechanek; Stamatis Vassiliadis; Jose G. Delgado-Frias
    • IPC: G06F15/80; G06N3/063; G06N3/10; G06F15/00
    • CPC: G06N3/063; G06F15/8023; G06N3/10
    • Abstract: Image processing for multimedia workstations is a computationally intensive task requiring special-purpose hardware to meet the high-speed requirements associated with the task. One type of specialized hardware that meets these high-speed computation requirements is the mesh-connected computer. Such a computer becomes a massively parallel machine when an array of computers interconnected by a network is replicated in a machine. The nearest-neighbor mesh computer consists of an N×N square array of Processor Elements (PEs), where each PE is connected to its North, South, East, and West PEs only. Assuming a single-wire interface between PEs, there are a total of 2N² wires in the mesh structure. Under the assumption of SIMD operation with uni-directional message and data transfers between the processing elements in the mesh, for example all PEs transferring data North, it is possible to reconfigure the array by placing the symmetric processing elements together and sharing the north-south wires with the east-west wires, thereby cutting the wiring complexity in half, i.e. to N², without affecting performance. The resulting diagonal-folded mesh array processor, called Oracle, allows the matrix transpose operation to be accomplished in one cycle by a simple interchange of the data elements in the dual symmetric processor elements. The use of Oracle for a parallel 2-D convolution mechanism for image processing and multimedia applications, and for a finite difference method of solving differential equations, is presented, concentrating on the computational aspects of the algorithms.
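The one-cycle transpose claimed above follows directly from the folding: symmetric PEs (i, j) and (j, i) sit in the same dual cell, so transposition is just a local swap inside every cell, all cells acting at once. A minimal sketch of ours (sequential here, but each swap is independent and would run in parallel in the hardware):

```python
# Sketch of the folded-mesh ("Oracle") transpose: every dual symmetric cell
# exchanges its two data elements locally; no data crosses the array.

def oracle_transpose(m):
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):     # one swap per dual symmetric cell
            m[i][j], m[j][i] = m[j][i], m[i][j]
    return m                          # diagonal elements are unchanged

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
assert oracle_transpose(a) == [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

In an unfolded mesh the same transpose would require shifting data up to N-1 hops; co-locating the symmetric elements reduces it to a single in-cell exchange.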