    • 6. Invention Publication
    • Title (EN): NEURAL PROCESSING UNIT
    • Title (DE): NEURALE VERARBEITUNGSEINHEIT
    • Publication No.: EP2572293A4
    • Publication Date: 2013-12-04
    • Application No.: EP11783883
    • Application Date: 2011-01-21
    • Applicant: UNIV CALIFORNIA
    • Inventors: PALMER, DOUGLAS A; FLOREA, MICHAEL
    • IPC (main): G06F15/18; G06F9/06; G06N3/06
    • IPC: G06N3/06; G06N3/063
    • Abstract (EN): The subject matter disclosed herein provides methods, apparatus, and articles of manufacture for neural-based processing. In one aspect, there is provided a method. The method may include reading, from a first memory, context information stored based on at least one connection value; reading, from a second memory, an activation value matching the at least one connection value; sending, by a first processor, the context information and the activation value to at least one of a plurality of microengines to configure the at least one microengine as a neuron; and generating, at the at least one microengine, a value representative of an output of the neuron. Related apparatus, systems, methods, and articles are also described.
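The data flow claimed in the abstract above — read context from a first memory, read a matching activation from a second memory, then dispatch both to a microengine configured as a neuron — can be sketched minimally as follows. All names (`context_memory`, `configure_neuron`, the weighted-sum-plus-sigmoid neuron model) are illustrative assumptions; the patent does not specify the neuron computation or the memory layout.

```python
import math

# Hypothetical sketch of the claimed data flow; names and the sigmoid
# neuron model are illustrative assumptions, not the patent's design.

# First memory: context information stored based on a connection value.
context_memory = {
    "conn_0": {"weights": [0.5, -0.2, 0.8], "bias": 0.1},
}

# Second memory: activation values matching the same connection value.
activation_memory = {
    "conn_0": [1.0, 0.5, -1.0],
}

def configure_neuron(context, activation):
    """Configure a 'microengine' (here, a closure) as a neuron."""
    def neuron():
        # Weighted sum of activations plus bias, squashed by a sigmoid.
        z = sum(w * a for w, a in zip(context["weights"], activation))
        z += context["bias"]
        return 1.0 / (1.0 + math.exp(-z))
    return neuron

def process(connection_value):
    # Read context from the first memory and the matching activation
    # from the second, then generate the configured neuron's output.
    context = context_memory[connection_value]
    activation = activation_memory[connection_value]
    return configure_neuron(context, activation)()

output = process("conn_0")
print(round(output, 4))
```

In the patent's framing the dispatcher would be a first processor and each neuron a separate hardware microengine; the closure here only stands in for that configuration step.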
    • 9. Invention Publication
    • Title (EN): METHOD AND APPARATUS FOR A LOCAL COMPETITIVE LEARNING RULE THAT LEADS TO SPARSE CONNECTIVITY
    • Publication No.: EP2724297A1
    • Publication Date: 2014-04-30
    • Application No.: EP12730136.4
    • Application Date: 2012-06-21
    • Applicant: Qualcomm Incorporated
    • Inventor: APARIN, Vladimir
    • IPC (main): G06N3/08
    • IPC: G06N3/08; G05B13/024; G05B13/027; G06N3/06; G06N3/088
    • Abstract (EN): Certain aspects of the present disclosure support a local competitive learning rule applied in a computational network that leads to sparse connectivity among processing units of the network. The present disclosure provides a modification to the Oja learning rule, modifying the constraint on the sum of squared weights in the Oja rule. This constraining can be intrinsic and local as opposed to the commonly used multiplicative and subtractive normalizations, which are explicit and require the knowledge of all input weights of a processing unit to update each one of them individually. The presented rule provides convergence to a weight vector that is sparser (i.e., has more zero elements) than the weight vector learned by the original Oja rule. Such sparse connectivity can lead to a higher selectivity of processing units to specific features, and it may require less memory to store the network configuration and less energy to operate it.
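The abstract above describes a modification of the Oja learning rule; the specific sparsifying constraint is not given in the abstract, so the sketch below shows only the baseline Oja rule it builds on. The rule is local (each weight update uses only its own input, the post-synaptic output, and the weight itself), and its subtractive term implicitly constrains the sum of squared weights toward 1 — the constraint the patent proposes to modify. The input distribution and learning rate here are arbitrary illustrative choices.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Oja-rule step: dw = lr * y * (x - y * w).

    The -y*w term implicitly normalizes ||w||^2 toward 1 without any
    explicit, global renormalization over all weights.
    """
    y = float(np.dot(w, x))          # post-synaptic activation
    return w + lr * y * (x - y * w)  # local update per weight

rng = np.random.default_rng(0)
# Inputs whose principal component lies along the first axis.
X = rng.normal(size=(5000, 3)) * np.array([3.0, 1.0, 0.5])

w = rng.normal(size=3)
for x in X:
    w = oja_update(w, x)

# Oja's rule converges toward the unit-norm principal eigenvector of
# the input covariance, so ||w|| approaches 1 and w aligns with axis 0.
print(round(float(np.linalg.norm(w)), 2))
```

Note the learned weight vector is dense; the patent's point is that replacing the implicit sum-of-squares constraint with a different intrinsic, local constraint drives more weights exactly to zero.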