    • 7. Granted invention patent
    • Title: Exploiting sparseness in training deep neural networks
    • Publication No.: US08700552B2
    • Publication date: 2014-04-15
    • Application No.: US13305741
    • Filing date: 2011-11-28
    • Inventors: Dong Yu, Li Deng, Frank Torsten Bernd Seide, Gang Li
    • IPC: G06F15/18; G06N3/08
    • CPC: G06N3/08
    • Abstract: Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden-layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. The continued DNN training tends to converge much faster than the initial training. (A hedged sketch of this scheme follows this record.)
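A minimal PyTorch sketch of the thresholded sparse-training idea described in the abstract above: a few dense sweeps first, then a per-layer magnitude threshold chosen so that only a prescribed maximum number of weights stays active, with the gradients of pruned interconnections masked during continued back-propagation. All names here (`magnitude_masks`, `max_active`, the model shape in the usage comment) are illustrative assumptions, not terms from the patent.

```python
# Hedged sketch only: magnitude-threshold masking during continued training.
import torch
import torch.nn as nn

def magnitude_masks(model, max_active):
    """Per-layer 0/1 masks keeping roughly the `max_active` largest-magnitude
    weights; the smallest kept magnitude plays the role of the abstract's
    minimum weight threshold."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() == 2:  # weight matrices only; biases stay dense
            flat = p.detach().abs().flatten()
            k = min(max_active, flat.numel())
            threshold = flat.topk(k).values.min()  # k-th largest magnitude
            masks[name] = (p.detach().abs() >= threshold).float()
    return masks

def train(model, loader, epochs, masks=None, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            if masks:  # continued phase: update only above-threshold weights
                for name, p in model.named_parameters():
                    if name in masks:
                        p.grad *= masks[name]
            opt.step()

# Usage (loader is any (features, labels) DataLoader; shapes are assumptions):
# model = nn.Sequential(nn.Linear(784, 512), nn.Sigmoid(), nn.Linear(512, 10))
# train(model, loader, epochs=3)                     # dense initial sweeps
# masks = magnitude_masks(model, max_active=50_000)  # pick the threshold
# for n, p in model.named_parameters():              # zero the pruned weights
#     if n in masks:
#         p.data *= masks[n]
# train(model, loader, epochs=10, masks=masks)       # sparse continuation
```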
    • 8. Granted invention patent
    • Title: Method of speech recognition using hidden trajectory Hidden Markov Models
    • Publication No.: US07617104B2
    • Publication date: 2009-11-10
    • Application No.: US10348192
    • Filing date: 2003-01-21
    • Inventors: Li Deng, Jian-lai Zhou, Frank Torsten Bernd Seide
    • IPC: G10L15/14
    • CPC: G10L15/142
    • Abstract: A method of speech recognition is provided that determines a production-related value (in particular, vocal-tract resonance frequencies) for a state at a particular frame, using a recursion over the production-related values associated with the two preceding frames. The production-related value is used to determine a probability distribution of the observed feature vector for the state, and the probability of the observed value received for the frame is then determined from that distribution. Under one embodiment, the production-related value is determined using a noise-free recursive definition for the value. Use of the recursion substantially improves decoding speed, and when the decoding algorithm is applied to training data with known phonetic transcripts, the resulting forced alignment improves on the phone segmentation obtained from the prior art. (An illustrative sketch of the recursion follows this record.)
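A rough NumPy illustration of the recursion idea in the abstract above: a production-related trajectory z_t is computed frame by frame from the two preceding frames, then scores the observed feature vector under a Gaussian. The critically damped second-order filter form, the per-state targets, and the linear VTR-to-feature mapping H are all assumptions made for this sketch; the patent's exact recursion and observation model are not reproduced here.

```python
# Hedged sketch only: a noise-free second-order recursion toward per-state
# vocal-tract-resonance (VTR) targets, plus a Gaussian observation score.
import numpy as np

def vtr_trajectory(targets, states, r=0.9):
    """targets: dict state -> target VTR vector; states: per-frame state ids.
    z_t depends only on z_{t-1}, z_{t-2}, and the current state's target,
    so it can be evaluated in a single left-to-right pass."""
    dim = len(next(iter(targets.values())))
    z = np.zeros((len(states), dim))
    for t, s in enumerate(states):
        T = np.asarray(targets[s], dtype=float)
        z_1 = z[t - 1] if t >= 1 else T   # boundary: start at the target
        z_2 = z[t - 2] if t >= 2 else z_1
        # critically damped filter: fixed point is T, poles at r (stable)
        z[t] = 2 * r * z_1 - r**2 * z_2 + (1 - r) ** 2 * T
    return z

def log_obs_prob(obs, z, H, var):
    """Gaussian log-likelihood of observed features given the trajectory,
    assuming a linear map H from VTR space to feature space."""
    mean = z @ H.T
    diff = obs - mean
    return -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var), axis=1)

# Usage (targets/values purely illustrative, in Hz):
# targets = {"iy": [300.0, 2300.0, 3000.0], "aa": [700.0, 1200.0, 2600.0]}
# states = ["aa"] * 10 + ["iy"] * 10
# z = vtr_trajectory(targets, states)
```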
    • 9. Invention application
    • Title: DISCRIMINATIVE PRETRAINING OF DEEP NEURAL NETWORKS
    • Publication No.: US20130138436A1
    • Publication date: 2013-05-30
    • Application No.: US13304643
    • Filing date: 2011-11-26
    • Inventors: Dong Yu, Li Deng, Frank Torsten Bernd Seide, Gang Li
    • IPC: G10L15/16
    • CPC: G06N3/08; G06N3/04
    • Abstract: Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is first trained discriminatively on the labels using error back-propagation (BP). Then, after discarding the output layer of that one-hidden-layer network, another randomly initialized hidden layer is added on top of the previously trained hidden layer, along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively. (A sketch of the grow-then-retrain loop follows this record.)
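A minimal PyTorch sketch of the grow-then-retrain loop described in the abstract above: train a one-hidden-layer network with BP, discard its output layer, stack a new randomly initialized hidden layer plus a fresh output layer, retrain the whole stack, and repeat until the desired depth. Function names, the sigmoid nonlinearity, and the hyperparameters are illustrative assumptions, not the patent's prescribed API.

```python
# Hedged sketch only: layer-wise discriminative pretraining of a DNN.
import torch
import torch.nn as nn

def train_bp(model, loader, epochs, lr=0.1):
    """Plain discriminative training with error back-propagation (BP)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def discriminative_pretrain(loader, n_in, n_hidden, n_out, n_layers,
                            brief_epochs=1):
    hidden = [nn.Linear(n_in, n_hidden), nn.Sigmoid()]  # first hidden layer
    for _ in range(n_layers):
        out = nn.Linear(n_hidden, n_out)        # fresh, randomly init output
        train_bp(nn.Sequential(*hidden, out), loader, brief_epochs)
        # discard `out`; stack a new randomly initialized hidden layer on top
        hidden += [nn.Linear(n_hidden, n_hidden), nn.Sigmoid()]
    # drop the last (never-trained) hidden layer, attach the final output
    hidden = hidden[:-2]
    return nn.Sequential(*hidden, nn.Linear(n_hidden, n_out))

# Usage (loader is any (features, labels) DataLoader; sizes are assumptions):
# dnn = discriminative_pretrain(loader, 784, 512, 10, n_layers=5)
# train_bp(dnn, loader, epochs=20)   # fine-tune the pretrained DNN
```

Note the design point the abstract emphasizes: each growth step retrains with only brief BP passes, so the pretrained weights land near a good local optimum while keeping gradients large enough for effective fine-tuning afterwards.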