    • 2. Granted invention patent
    • Method of increasing the accuracy of an analog neural network and the like
    • Publication number: US5146602A (published 1992-09-08)
    • Application number: US634033 (filed 1990-12-26)
    • Inventors: Mark A. Holler; Simon M. Tam
    • IPC: G06F15/18; G06G7/60; G06N3/063; G06N99/00; G11C27/00; H01L29/788
    • CPC: H01L29/7887; G06N3/0635; G11C27/005
    • Abstract: A method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. In one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. The training may be carried out using any standard learning algorithm. Preferably, a back-propagation learning algorithm is employed. Next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. This change results from a charge redistribution which occurs within each of the synapses of the network. After baking, the network is then retrained to compensate for the change resulting from the charge redistribution. The baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level.
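The bake-and-retrain loop in the abstract above can be illustrated with a toy simulation. This is a sketch under an assumed model: each synapse has an exhaustible pool of redistributable charge (`mobile`), and each bake releases a fixed fraction of what remains; the pool size and `fraction` are illustrative, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

target = rng.normal(size=8)                # desired weight pattern
mobile = 0.2 * np.abs(rng.normal(size=8))  # hypothetical redistributable-charge pool

def bake(weights, mobile, fraction=0.5):
    """One high-temperature bake: a fraction of the remaining mobile charge
    redistributes, shifting each programmed weight (toy model, not physics)."""
    shift = fraction * mobile
    return weights - shift, mobile - shift

weights = target.copy()          # initial training programs the target pattern
drifts = []
for cycle in range(4):
    weights, mobile = bake(weights, mobile)
    drifts.append(float(np.max(np.abs(weights - target))))  # drift caused by bake
    weights = target.copy()      # retrain: reprogram the weights back to target
```

Because each bake depletes the remaining mobile-charge pool, the drift observed after each successive bake-and-retrain cycle shrinks, which is the claimed accuracy improvement.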
    • 5. Granted invention patent
    • Synapse cell employing dual gate transistor structure
    • Publication number: US4961002A (published 1990-10-02)
    • Application number: US419685 (filed 1989-10-11)
    • Inventors: Simon M. Tam; Mark A. Holler; Hernan A. Castro
    • IPC: G06N3/063; G11C15/04
    • CPC: G06N3/0635; G06N3/063; G11C15/046
    • Abstract: A synapse cell for providing a weighted connection between an input voltage line and an output summing line having an associated capacitance. Connection between input and output lines in the associative network is made using a dual-gate transistor. The transistor has a floating gate member for storing electrical charge, a pair of control gates coupled to a pair of input lines, and a drain coupled to an output summing line. The floating gate of the transistor is used for storing a charge which corresponds to the strength or weight of the neural connection. When a binary voltage pulse having a certain duration is applied to either one or both of the control gates of the transistor, a current is generated. This current acts to discharge the capacitance associated with the output summing line. Furthermore, by employing a dual-gate structure, programming disturbance of neighboring devices in the network is practically eliminated.
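The cell's behavior (a weight-dependent current discharging the shared summing-line capacitance for the duration of a binary pulse) can be sketched numerically. All constants here are illustrative assumptions: the capacitance, precharge voltage, and the linear weight-to-current mapping are placeholders, not the patent's device physics.

```python
# Toy model of the dual-gate synapse cell on a shared output summing line.

C_LINE = 1e-12        # assumed summing-line capacitance, farads (illustrative)
V_PRECHARGE = 5.0     # assumed precharge voltage on the summing line

def synapse_current(weight, gate_a, gate_b):
    """Current drawn while either/both binary control-gate inputs are high.
    `weight` stands in for the floating-gate charge (0..1); the linear
    mapping to amperes is an assumption, not the device physics."""
    active = int(gate_a) + int(gate_b)
    return weight * 1e-6 * active        # microamp-scale current per active gate

def summing_line_voltage(weights, inputs_a, inputs_b, pulse_s=1e-6):
    # every synapse on the column discharges the same shared capacitance
    i_total = sum(synapse_current(w, a, b)
                  for w, a, b in zip(weights, inputs_a, inputs_b))
    dv = i_total * pulse_s / C_LINE      # dV = I * t / C
    return max(V_PRECHARGE - dv, 0.0)

# two cells: weights 0.2 and 0.8, with different gate-input patterns
v = summing_line_voltage([0.2, 0.8], [1, 0], [1, 1], pulse_s=1e-7)
```

The final line voltage encodes the weighted sum of the binary inputs: more active gates or larger stored weights draw more charge off the line during the pulse.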
    • 7. Granted invention patent
    • Neural network with multiplexed synaptic processing
    • Publication number: US5256911A (published 1993-10-26)
    • Application number: US896204 (filed 1992-06-10)
    • Inventors: Mark A. Holler; Simon M. Tam
    • IPC: G06N3/063; G06F15/18
    • CPC: G06N3/063; G06N3/0635
    • Abstract: In an apparatus for multiplexed operation of a multi-cell neural network, the reference vector component values are stored as differential values in pairs of floating gate transistors. A long-tail pair differential transconductance multiplier is synthesized by selectively using the floating gate transistor pairs as the current source. Appropriate transistor pairs are multiplexed into the network for forming a differential output current representative of the product of the input vector component applied to the differential input and the stored reference vector component stored in the multiplexed transistor pair that is switched into the multiplier network to function as the differential current source. Pipelining and output multiplexing are also described in other preferred embodiments for increasing the effective output bandwidth of the network.
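The multiplexing scheme in the abstract can be sketched as follows. This assumes an idealized linear small-signal model of the long-tail-pair multiplier (output proportional to the input differential times the stored differential weight); the function names and the time-multiplexed accumulation loop are illustrative, not the patent's circuit.

```python
# Toy model of multiplexed synaptic processing: each reference-vector
# component is stored as a differential value in a transistor pair, and
# pairs are switched into one shared multiplier one at a time.

def differential_multiply(x_diff, w_pair):
    """Idealized long-tail-pair multiplier: the differential output current
    is proportional to the input differential times the stored differential
    weight (linear small-signal assumption)."""
    w_plus, w_minus = w_pair
    return x_diff * (w_plus - w_minus)

def multiplexed_dot(x, weight_pairs):
    # time-multiplex one stored pair at a time into the multiplier as its
    # current source, accumulating the differential output currents
    return sum(differential_multiply(xi, wp) for xi, wp in zip(x, weight_pairs))

# weight +0.5 stored as the pair (1.0, 0.5); weight -0.25 as (0.25, 0.5)
out = multiplexed_dot([2.0, 4.0], [(1.0, 0.5), (0.25, 0.5)])
```

Storing each weight as a difference of two floating-gate charges gives signed weights from two unsigned charge values, and reusing one multiplier across pairs is what makes the synaptic processing "multiplexed."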
    • 8. Granted invention patent
    • Method of increasing the accuracy of an analog circuit employing floating gate memory devices
    • Publication number: US5268320A (published 1993-12-07)
    • Application number: US865451 (filed 1992-04-09)
    • Inventors: Mark A. Holler; Simon M. Tam
    • IPC: G06N3/063; G11C27/00; H01L29/788; H01L21/265; H01L21/76
    • CPC: G06N3/0635; G11C27/005; H01L29/7887; Y10S438/91
    • Abstract: A method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. In one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. The training may be carried out using any standard learning algorithm. Preferably, a back-propagation learning algorithm is employed. Next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. This change results from a charge redistribution which occurs within each of the synapses of the network. After baking, the network is then retrained to compensate for the change resulting from the charge redistribution. The baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level.
    • 9. Granted invention patent
    • Multi-layer neural network employing multiplexed output neurons
    • Publication number: US5087826A (published 1992-02-11)
    • Application number: US635231 (filed 1990-12-28)
    • Inventors: Mark A. Holler; Simon M. Tam
    • IPC: G06F15/18; G06G7/60; G06N3/04; G06N99/00
    • CPC: G06N3/0445
    • Abstract: A multi-layer electrically trainable analog neural network employing multiplexed output neurons having inputs organized into two groups, external and recurrent (i.e., feedback). Each layer of the network comprises a matrix of synapse cells which implement a matrix multiplication between an input vector and a weight matrix. In normal operation, an external input vector coupled to the first synaptic array generates a sigmoid response at the output of a set of neurons. This output is then fed back to the next and subsequent layers of the network as a recurrent input vector. The output of second layer processing is generated by the same neurons used in first layer processing. Thus, the neural network of the present invention can handle N-layer operation by using recurrent connections and a single set of multiplexed output neurons.
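The neuron-reuse idea in this last abstract can be sketched in a few lines: one sigmoid stage is applied repeatedly, with each layer's synapse matrix feeding it and the result recirculated as the recurrent input. This is a simplified digital analogue of the analog circuit; the matrix sizes and random weights are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiplexed_forward(x_external, layer_weights):
    """Reuse a single set of output 'neurons' (the sigmoid stage) for every
    layer: each layer's synaptic matrix multiplies the current vector, and
    the neuron output is fed back as the recurrent input to the next layer."""
    v = np.asarray(x_external, dtype=float)
    for W in layer_weights:      # one synapse-cell matrix per layer
        v = sigmoid(W @ v)       # same neuron set, time-multiplexed per layer
    return v

rng = np.random.default_rng(1)
layers = [rng.normal(size=(4, 4)) for _ in range(3)]  # illustrative weights
y = multiplexed_forward([1.0, 0.0, -1.0, 0.5], layers)
```

Because the sigmoid stage is shared across layers, adding a layer costs only another synapse matrix and another pass through the loop, which mirrors the patent's claim of N-layer operation from a single set of multiplexed output neurons.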