    • 1. Invention Application (published patent application)
    • Title: FAST DEEP NEURAL NETWORK FEATURE TRANSFORMATION VIA OPTIMIZED MEMORY BANDWIDTH UTILIZATION
    • Publication No.: US20160322042A1
    • Publication Date: 2016-11-03
    • Application No.: US14699778
    • Filing Date: 2015-04-29
    • Assignee: Nuance Communications, Inc.
    • Inventors: Jan Vlietinck, Stephan Kanthak, Rudi Vuerinckx, Christophe Ris
    • IPC: G10L15/06; G06N3/08
    • CPC: G06N3/08; G06N3/0454; G10L15/02; G10L15/16; G10L2015/0635
    • Abstract: Deep Neural Networks (DNNs) with many hidden layers and many units per layer are very flexible models with a very large number of parameters. As such, DNNs are challenging to optimize. To achieve real-time computation, embodiments disclosed herein enable fast DNN feature transformation via optimized memory bandwidth utilization. To optimize memory bandwidth utilization, a rate of accessing memory may be reduced based on a batch setting. A memory, corresponding to a selected given output neuron of a current layer of the DNN, may be updated with an incremental output value computed for the selected given output neuron as a function of input values of a selected few non-zero input neurons of a previous layer of the DNN in combination with weights between the selected few non-zero input neurons and the selected given output neuron, wherein a number of the selected few corresponds to the batch setting.
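The abstract describes accumulating each output neuron's value from small batches of non-zero input neurons, so that the output-neuron memory is read and written once per batch rather than once per non-zero input. The Python sketch below only illustrates that batching idea under stated assumptions; the function name `sparse_batched_layer` and the `batch_size` parameter are placeholders for the "batch setting", not the patented implementation.

```python
import numpy as np

def sparse_batched_layer(inputs, weights, batch_size=4):
    """Minimal sketch of batched sparse feature transformation.

    inputs     : 1-D activations of the previous DNN layer (many assumed zero).
    weights    : 2-D array; weights[i, j] connects input neuron i to output neuron j.
    batch_size : the "batch setting" -- how many non-zero inputs are combined
                 per update of an output accumulator.
    """
    n_out = weights.shape[1]
    outputs = np.zeros(n_out)            # per-output accumulators (the "memory")
    nonzero = np.flatnonzero(inputs)     # skip zero input neurons entirely

    # Visit the non-zero inputs in batches of `batch_size`.
    for start in range(0, len(nonzero), batch_size):
        batch = nonzero[start:start + batch_size]   # the "selected few" non-zero inputs
        for j in range(n_out):                      # each selected output neuron
            # Incremental output value: the batch of non-zero input activations
            # combined with their weights to this output neuron.
            increment = sum(inputs[i] * weights[i, j] for i in batch)
            # One read-modify-write of the output memory per batch,
            # instead of one per non-zero input.
            outputs[j] += increment
    return outputs

# Example: the result matches a dense matrix-vector product.
x = np.array([0.0, 1.5, 0.0, -0.2, 0.0, 0.7])
W = np.random.default_rng(0).standard_normal((6, 3))
assert np.allclose(sparse_batched_layer(x, W), x @ W)
```

With a batch setting of B and nnz non-zero inputs, each output accumulator is updated roughly ceil(nnz / B) times instead of nnz times, which is where the memory-bandwidth saving claimed in the abstract would come from.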