    • 3. Granted patent
    • Tensor deep stacked neural network
    • Publication: US09165243B2 (2015-10-20)
    • Application: US13397580 (filed 2012-02-15)
    • Inventors: Dong Yu, Li Deng, Brian Hutchinson
    • IPC: G06N3/04, G06N3/08
    • CPC: G06N3/04, G06N3/08
    • A tensor deep stacked neural (T-DSN) network for obtaining predictions for discriminative modeling problems. The T-DSN network and method use bilinear modeling with a tensor representation to map a hidden layer to the prediction layer. The T-DSN network is constructed by stacking blocks of a single hidden layer tensor neural network (SHLTNN) on top of each other. The single hidden layer for each block is then separated or divided into a plurality of two or more sections. In some embodiments, the hidden layer is separated into a first hidden layer section and a second hidden layer section. These multiple sections of the hidden layer are combined using a product operator to obtain an implicit hidden layer having a single section. In some embodiments the product operator is a Khatri-Rao product. A prediction is made using the implicit hidden layer and weights, and the output prediction layer is consequently obtained. (A minimal sketch follows this entry.)
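To make the bilinear construction concrete, below is a minimal numerical sketch of one T-DSN block in Python: the input is mapped by two weight matrices into two hidden-layer sections, the sections are combined by an outer product (the Khatri-Rao product, in the single-vector case) into the implicit hidden layer, and a final weight matrix maps that layer to the prediction layer. The names `W1`, `W2`, `U` and all dimensions are illustrative assumptions, not the patent's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_dsn_block(x, W1, W2, U):
    """One T-DSN block (sketch): two hidden sections are combined
    bilinearly into an implicit hidden layer, which is then mapped
    linearly to the prediction layer."""
    h1 = 1.0 / (1.0 + np.exp(-W1 @ x))    # first hidden-layer section
    h2 = 1.0 / (1.0 + np.exp(-W2 @ x))    # second hidden-layer section
    h = np.outer(h1, h2).reshape(-1)      # Khatri-Rao product of two vectors
    return U @ h                          # prediction layer

x = rng.standard_normal(10)               # toy input frame
W1 = rng.standard_normal((4, 10))
W2 = rng.standard_normal((5, 10))
U = rng.standard_normal((3, 4 * 5))       # maps the 20-dim implicit layer to 3 outputs
print(t_dsn_block(x, W1, W2, U))
```

Stacking blocks as the abstract describes would feed each block's prediction, together with the raw input, into the next block's two weight matrices.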
    • 7. Patent application
    • DEEP CONVEX NETWORK WITH JOINT USE OF NONLINEAR RANDOM PROJECTION, RESTRICTED BOLTZMANN MACHINE AND BATCH-BASED PARALLELIZABLE OPTIMIZATION
    • Publication: US20120254086A1 (2012-10-04)
    • Application: US13077978 (filed 2011-03-31)
    • Inventors: Li Deng, Dong Yu, Alejandro Acero
    • IPC: G06N3/08
    • CPC: G06N3/08, G06N3/02, G06N3/04, G06N3/0454
    • A method is disclosed herein that includes an act of causing a processor to access a deep-structured, layered or hierarchical model, called deep convex network, retained in a computer-readable medium, wherein the deep-structured model comprises a plurality of layers with weights assigned thereto. This layered model can produce the output serving as the scores to combine with transition probabilities between states in a hidden Markov model and language model scores to form a full speech recognizer. The method makes joint use of nonlinear random projections and RBM weights, and it stacks a lower module's output with the raw data to establish its immediately higher module. Batch-based, convex optimization is performed to learn a portion of the deep convex network's weights, rendering it appropriate for parallel computation to accomplish the training. The method can further include the act of jointly substantially optimizing the weights, the transition probabilities, and the language model scores of the deep-structured model using the optimization criterion based on a sequence rather than a set of unrelated frames. (A minimal sketch follows this entry.)
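Below is a minimal sketch of the training recipe described above, under stated assumptions: fixed random hidden weights stand in for the nonlinear random projection and RBM-initialized weights, the upper-layer weights of each module are found by a closed-form ridge regression (the batch, convex step), and each module's input is the raw data stacked with the previous module's output. The function name, dimensions, and regularizer `lam` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dcn(X, T, n_modules=3, n_hidden=200, lam=1e-2):
    """Deep convex network (sketch). X: raw features (dims x frames),
    T: targets (classes x frames). Each module stacks the raw data
    with the previous module's output, applies a fixed nonlinear
    projection, and solves its upper weights convexly in batch."""
    modules, inp = [], X
    for _ in range(n_modules):
        W = 0.1 * rng.standard_normal((n_hidden, inp.shape[0]))
        H = 1.0 / (1.0 + np.exp(-W @ inp))              # hidden activations
        # Convex (ridge-regression) solution for the upper-layer weights:
        U = T @ H.T @ np.linalg.inv(H @ H.T + lam * np.eye(n_hidden))
        modules.append((W, U))
        inp = np.vstack([X, U @ H])                     # raw data + module output
    return modules

X = rng.standard_normal((20, 500))   # 20-dim features, 500 frames
T = rng.standard_normal((5, 500))    # 5 output scores per frame
modules = train_dcn(X, T)
```

Because each module's weight solve depends on the data only through `H @ H.T` and `T @ H.T`, those Gram matrices can be accumulated over data batches independently, which is one way the batch computation can be parallelized.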
    • 8. Granted patent
    • Speech-centric multimodal user interface design in mobile technology
    • Publication: US08219406B2 (2012-07-10)
    • Application: US11686722 (filed 2007-03-15)
    • Inventors: Dong Yu, Li Deng
    • IPC: G10L21/00
    • CPC: G06F3/038, G06F2203/0381, G10L15/24
    • A multi-modal human computer interface (HCI) receives a plurality of available information inputs concurrently, or serially, and employs a subset of the inputs to determine or infer user intent with respect to a communication or information goal. Received inputs are respectively parsed, and the parsed inputs are analyzed and optionally synthesized with respect to one or more of each other. In the event sufficient information is not available to determine user intent or goal, feedback can be provided to the user in order to facilitate clarifying, confirming, or augmenting the information inputs. (A minimal sketch follows this entry.)
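The control flow in this abstract (parse each input, compare the hypotheses, and fall back to user feedback when no intent is clear) can be sketched as below. `parse`, the `Hypothesis` record, and the pick-the-most-confident fusion rule are hypothetical stand-ins for illustration; a real system would fuse modalities far more carefully.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    modality: str          # e.g. "speech", "pen", "keypad"
    intent: str | None     # hypothesized user intent, if any
    confidence: float      # parser's confidence in the hypothesis

def parse(raw):
    """Hypothetical per-modality parser: raw input -> Hypothesis."""
    modality, intent, confidence = raw
    return Hypothesis(modality, intent, confidence)

def infer_intent(inputs, threshold=0.8):
    """Fuse parsed inputs; take the feedback path (ask the user to
    clarify, confirm, or augment) when nothing is confident enough."""
    hyps = [parse(i) for i in inputs]
    best = max(hyps, key=lambda h: h.confidence)
    if best.intent is None or best.confidence < threshold:
        return ("feedback", "Please clarify or confirm your request.")
    return ("intent", best.intent)

print(infer_intent([("speech", "call home", 0.6),
                    ("keypad", "call home", 0.9)]))   # -> ("intent", "call home")
```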
    • 9. Granted patent
    • Noise suppressor for robust speech recognition
    • Publication: US08185389B2 (2012-05-22)
    • Application: US12335558 (filed 2008-12-16)
    • Inventors: Dong Yu, Li Deng, Yifan Gong, Jian Wu, Alejandro Acero
    • IPC: G10L15/20
    • CPC: G10L21/0208, G10L15/20
    • Described is noise reduction technology generally for speech input in which a noise-suppression-related gain value for each frame is determined based upon a noise level associated with that frame in addition to the signal-to-noise ratios (SNRs). In one implementation, the noise reduction mechanism is based upon minimum-mean-square-error, Mel-frequency-cepstra noise reduction technology. A high gain value (e.g., one) is set to accomplish little or no noise suppression when the noise level is below a threshold low level, and a low gain value is set or computed to accomplish large noise suppression above a threshold high noise level. A noise-power-dependent function, e.g., a log-linear interpolation, is used to compute the gain between the thresholds. Smoothing may be performed by modifying the gain value based upon a prior frame's gain value. Also described is learning the parameters used in noise reduction via a step-adaptive discriminative learning algorithm. (A minimal sketch follows this entry.)
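A minimal sketch of the noise-power-dependent gain described above: unity gain (little or no suppression) below a low-noise threshold, a small fixed gain (strong suppression) above a high-noise threshold, and log-linear interpolation in between, with frame-to-frame smoothing against the prior frame's gain. All threshold and gain constants are illustrative assumptions, not the patent's values.

```python
import numpy as np

def suppression_gain(noise_power, low=1e-6, high=1e-2, g_min=0.1):
    """Gain as a function of the frame's noise power: 1.0 below `low`,
    `g_min` above `high`, log-linearly interpolated in between."""
    if noise_power <= low:
        return 1.0
    if noise_power >= high:
        return g_min
    t = (np.log(noise_power) - np.log(low)) / (np.log(high) - np.log(low))
    return 1.0 + t * (g_min - 1.0)

def smooth_gain(gain, prev_gain, alpha=0.7):
    """Smooth by mixing in the prior frame's gain value."""
    return alpha * prev_gain + (1.0 - alpha) * gain

print(suppression_gain(1e-4))                    # mid-range noise -> intermediate gain
print(smooth_gain(suppression_gain(1e-4), 1.0))  # smoothed against a prior gain of 1.0
```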
    • 10. Patent application
    • Deep-Structured Conditional Random Fields for Sequential Labeling and Classification
    • Publication: US20110191274A1 (2011-08-04)
    • Application: US12696051 (filed 2010-01-29)
    • Inventors: Dong Yu, Li Deng, Shizhen Wang
    • IPC: G06F15/18, G06N5/02
    • CPC: G06F15/18, G06N5/02
    • Described is a technology by which a deep-structured (multiple layered) conditional random field model is trained and used for classification of sequential data. Sequential data is processed at each layer, from the lowest layer to a final (highest) layer, to output data in the form of conditional probabilities of classes given the sequential input data. Each higher layer inputs the conditional probability data and the sequential data jointly to output further probability data, and so forth, until the final layer which outputs the classification data. Also described is layer-by-layer training, supervised or unsupervised. Unsupervised training may process raw features to minimize average frame-level conditional entropy while maximizing state occupation entropy, or to minimize reconstruction error. Also described is a technique for back-propagation of error information of the final layer to iteratively fine tune the parameters of the lower layers, and joint training, including joint training via subgroups of layers. (A minimal sketch follows this entry.)
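A structural sketch of the stacking described above. A per-frame softmax stands in here for a real linear-chain CRF's marginal inference (a real layer would run forward-backward over the label sequence); the point is the data flow: each higher layer receives the raw frame features stacked with the lower layer's class posteriors. Shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_posteriors(W, inp):
    """Stand-in for a CRF layer: per-frame class posteriors via softmax."""
    Z = W @ inp
    E = np.exp(Z - Z.max(axis=0))              # stabilized exponentials
    return E / E.sum(axis=0)

def deep_crf_forward(X, weights):
    """Deep-structured CRF (sketch): propagate class posteriors upward,
    jointly with the raw features, to the final classification layer."""
    inp = X
    for W in weights:
        P = frame_posteriors(W, inp)
        inp = np.vstack([X, P])                # posteriors + raw features
    return P                                   # final layer's posteriors

X = rng.standard_normal((13, 40))              # 13-dim features, 40 frames
weights = [rng.standard_normal((4, 13)),       # lowest layer sees raw features only
           rng.standard_normal((4, 13 + 4)),   # higher layers also see posteriors
           rng.standard_normal((4, 13 + 4))]
print(deep_crf_forward(X, weights).shape)      # (4, 40): 4 classes per frame
```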