    • 1. Invention application
    • Title: A METHOD FOR SUPERVISED TEACHING OF A RECURRENT ARTIFICIAL NEURAL NETWORK
    • Publication number: WO0231764A3
    • Publication date: 2003-08-21
    • Application number: PCT/EP0111490
    • Filing date: 2001-10-05
    • Applicants: FRAUNHOFER GES FORSCHUNG; JAEGER HERBERT
    • Inventor: JAEGER HERBERT
    • IPC: G06N3/04; G06N3/08
    • Main IPC: G06N3/08
    • Abstract: A method for the supervised teaching of a recurrent neural network (RNN) is disclosed. A typical embodiment of the method utilizes a large (50 units or more), randomly initialized RNN with globally stable dynamics. During the training period, the output units of this RNN are teacher-forced to follow the desired output signal. During this period, activations from all hidden units are recorded. At the end of the teaching period, these recorded data are used as input to a method which computes new weights for those connections that feed into the output units. The method is distinguished from existing training methods for RNNs by the following characteristics: (1) Only the weights of connections to output units are changed by learning; existing methods for teaching recurrent networks adjust all network weights. (2) The internal dynamics of large networks are used as a "reservoir" of dynamical components which are not changed, but only newly combined by the learning procedure; existing methods use small networks, whose internal dynamics are themselves completely re-shaped through learning. (A minimal sketch of this procedure follows the record below.)
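The abstract above describes the training scheme behind what is commonly known as an echo state network (reservoir computing). The sketch below is a minimal, illustrative rendering of that procedure in NumPy, not the patented implementation: the reservoir size, tanh units, sine-wave target signal, washout length, and ridge regression are assumptions chosen for the example, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. A large, randomly initialized reservoir (200 units here), scaled so its
#    recurrent dynamics are globally stable (spectral radius below 1).
n_reservoir = 200
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius -> 0.9

# Feedback weights from the teacher-forced output back into the reservoir.
W_fb = rng.uniform(-1.0, 1.0, n_reservoir)

# 2. Desired output signal (an illustrative sine wave, not from the patent).
T = 1000
y_target = np.sin(np.arange(T) / 20.0)

# 3. Teacher-force the output and record every hidden-unit activation.
#    The internal weights W and W_fb are never modified.
x = np.zeros(n_reservoir)
states = np.zeros((T, n_reservoir))
for t in range(T):
    x = np.tanh(W @ x + W_fb * y_target[t])
    states[t] = x

# 4. Compute only the output weights from the recorded activations
#    (ridge regression after discarding an initial washout period;
#    both choices are assumptions, not prescribed by the abstract).
washout, ridge = 100, 1e-6
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)

print("training MSE:", np.mean((X @ W_out - Y) ** 2))
```

Note how the sketch mirrors the two distinguishing points of the abstract: the recurrent and feedback weights stay fixed throughout, and learning reduces to a single linear fit of the output weights onto the recorded reservoir states.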