    • 8. Granted invention patent
    • Exploiting sparseness in training deep neural networks
    • US08700552B2
    • 2014-04-15
    • US13305741
    • 2011-11-28
    • Dong Yu, Li Deng, Frank Torsten Bernd Seide, Gang Li
    • G06F15/18, G06N3/08
    • G06N3/08
    • Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training.
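The pruning criterion in this abstract — keep only interconnections whose weight magnitude exceeds a threshold chosen so that at most a prescribed number survive, and apply back-propagation updates only to the survivors — can be illustrated with a minimal NumPy sketch. The function names and the plain SGD update are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def sparseness_mask(W, max_connections):
    """Boolean mask keeping only the largest-magnitude weights.

    The minimum-weight threshold is set to the magnitude of the
    max_connections-th largest weight, so at most that many
    interconnections are considered in further training.
    """
    flat = np.abs(W).ravel()
    if max_connections >= flat.size:
        return np.ones_like(W, dtype=bool)
    threshold = np.partition(flat, -max_connections)[-max_connections]
    return np.abs(W) >= threshold

def masked_sgd_step(W, grad, mask, lr=0.1):
    """One back-propagation update restricted to surviving weights.

    Pruned interconnections are forced back to exactly zero, so only
    the masked subset participates in continued training.
    """
    W = W - lr * grad
    W *= mask
    return W
```

In continued training one would recompute gradients each sweep but reuse the fixed mask, which is what makes the sparse phase converge faster than dense training over the full weight set.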
    • 9. Invention patent application
    • COMPUTER-IMPLEMENTED DEEP TENSOR NEURAL NETWORK
    • US20140067735A1
    • 2014-03-06
    • US13597268
    • 2012-08-29
    • Dong Yu, Li Deng, Frank Seide
    • G06N3/08
    • G06N3/02, G06N3/04, G06N3/0454, G06N3/084
    • A deep tensor neural network (DTNN) is described herein, wherein the DTNN is suitable for employment in a computer-implemented recognition/classification system. Hidden layers in the DTNN comprise at least one projection layer, which includes a first subspace of hidden units and a second subspace of hidden units. The first subspace of hidden units receives a first nonlinear projection of input data to the projection layer and generates a first set of output data based at least in part thereon, and the second subspace of hidden units receives a second nonlinear projection of the input data to the projection layer and generates a second set of output data based at least in part thereon. A tensor layer, which can be converted into a conventional layer of a DNN, generates a third set of output data based upon the first set of output data and the second set of output data.
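The projection layer and tensor layer described above can be sketched in a few lines of NumPy: two nonlinear projections of the same input feed separate hidden-unit subspaces, and the tensor layer combines them bilinearly. Flattening the outer product of the two subspace activations shows how the tensor layer reduces to a conventional DNN layer, as the abstract notes. Sigmoid activations and the function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def projection_layer(x, W1, W2):
    """Two nonlinear projections of the input into separate subspaces
    of hidden units."""
    h1 = sigmoid(W1 @ x)   # first subspace activations
    h2 = sigmoid(W2 @ x)   # second subspace activations
    return h1, h2

def tensor_layer(h1, h2, U):
    """Bilinear tensor layer over the two subspaces.

    vec(h1 h2^T) turns the bilinear form into an ordinary matrix-vector
    product, i.e. the tensor layer becomes a conventional DNN layer
    with weight matrix U acting on the flattened outer product.
    """
    v = np.outer(h1, h2).ravel()
    return sigmoid(U @ v)
```

With subspace sizes m and n, the equivalent conventional layer has an input of dimension m*n, which is the cost of the conversion.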