    • 72. Invention Publication
    • Title: METHOD AND APPARATUS FOR A LOCAL COMPETITIVE LEARNING RULE THAT LEADS TO SPARSE CONNECTIVITY
    • Publication number: EP2724297A1
    • Publication date: 2014-04-30
    • Application number: EP12730136.4
    • Filing date: 2012-06-21
    • Applicant: Qualcomm Incorporated
    • Inventor: APARIN, Vladimir
    • IPC (primary): G06N3/08
    • IPC (all): G06N3/08; G05B13/024; G05B13/027; G06N3/06; G06N3/088
    • Abstract: Certain aspects of the present disclosure support a local competitive learning rule, applied in a computational network, that leads to sparse connectivity among the network's processing units. The disclosure provides a modification to the Oja learning rule, changing the constraint on the sum of squared weights. This constraint can be intrinsic and local, as opposed to the commonly used multiplicative and subtractive normalizations, which are explicit and require knowledge of all input weights of a processing unit to update each one individually. The presented rule converges to a weight vector that is sparser (i.e., has more zero elements) than the weight vector learned by the original Oja rule. Such sparse connectivity can lead to higher selectivity of processing units to specific features, and it may require less memory to store the network configuration and less energy to operate it. (A code sketch of the learning rule follows this entry.)
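The abstract describes the mechanism (a local, intrinsic constraint on the sum of squared weights) but not the modified formula itself. As a point of reference, the sketch below implements the classic Oja rule, plus a deliberately simple sparsifying variant; the `sparse_local_update` function, its `threshold` parameter, and the pruning step are illustrative assumptions, not the rule claimed in EP2724297A1.

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    """One step of the classic Oja rule: dw = eta * y * (x - y * w).

    The -eta * y**2 * w decay term implicitly constrains the sum of
    squared weights (||w||^2 tends toward 1) without the explicit
    multiplicative/subtractive normalization over all weights that the
    abstract contrasts against.
    """
    y = w @ x                       # linear output of the processing unit
    return w + eta * y * (x - y * w)

def sparse_local_update(w, x, eta=0.01, threshold=1e-2):
    """Illustrative sparsifying variant (an assumption, NOT the patent's
    formula): keep the local Oja-style update, then zero out weights whose
    magnitude falls below a threshold, so the learned vector acquires
    exact zeros (sparse connectivity)."""
    y = w @ x
    w = w + eta * y * (x - y * w)
    w[np.abs(w) < threshold] = 0.0  # purely local: each weight checks only itself
    return w

# Tiny usage sketch: inputs dominated by one direction; the variant prunes
# the remaining near-zero weights to exact zeros.
rng = np.random.default_rng(0)
w = rng.normal(size=8) * 0.1
for _ in range(1000):
    x = rng.normal(size=8)
    x[0] += 2.0                     # feature carried mainly by input 0
    w = sparse_local_update(w, x, eta=0.005)
print("learned weights:", np.round(w, 3))
```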
    • 74. Invention Publication
    • Title: ARCHITECTURE, SYSTEM AND METHOD FOR ARTIFICIAL NEURAL NETWORK IMPLEMENTATION
    • Publication number: EP2122542A4
    • Publication date: 2012-05-16
    • Application number: EP07855509
    • Filing date: 2007-12-10
    • Applicants: MOUSSA, Medhat; SAVICH, Antony
    • Inventors: MOUSSA, Medhat; SAVICH, Antony
    • IPC (primary): G06N3/06; G06N3/02; G06N3/04
    • IPC (all): G06N3/063; G06N3/02; G06N3/06
    • Abstract: An architecture, systems, and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, the at least one hidden layer, and the output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements. (A code sketch of the parallelization idea follows this entry.)
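The abstract specifies what the parallelization subsystem does, not how it is realized in hardware. The sketch below shows one way a variable degree of parallelization could map onto a layer computation: the function `layer_forward`, its `degree` parameter, and the thread-pool partitioning are assumptions for illustration, not the patent's design.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def layer_forward(W, b, x, degree=4):
    """Forward pass of one fully connected layer with a configurable
    degree of parallelization: the layer's neurons (rows of W) are split
    into `degree` blocks whose partial outputs are computed concurrently
    and then concatenated."""
    blocks = np.array_split(np.arange(W.shape[0]), degree)
    def partial(rows):
        return W[rows] @ x + b[rows]       # pre-activations for this block of neurons
    with ThreadPoolExecutor(max_workers=degree) as pool:
        parts = list(pool.map(partial, blocks))
    return np.tanh(np.concatenate(parts))  # activation over the whole layer

# degree=1 degenerates to a fully serial layer; degree=W.shape[0] assigns
# one worker per neuron. A real selection step would weigh hardware
# resources against throughput requirements, as the abstract suggests.
rng = np.random.default_rng(1)
W, b, x = rng.normal(size=(16, 8)), rng.normal(size=16), rng.normal(size=8)
assert np.allclose(layer_forward(W, b, x, degree=1),
                   layer_forward(W, b, x, degree=4))
```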