    • 1. Invention Application
    • Title: LEARNING STUDENT DNN VIA OUTPUT DISTRIBUTION
    • Publication No.: WO2016037350A1
    • Publication Date: 2016-03-17
    • Application No.: PCT/CN2014/086397
    • Filing Date: 2014-09-12
    • Applicant: MICROSOFT CORPORATION
    • Inventors: ZHAO, Rui; HUANG, Jui-Ting; LI, Jinyu; GONG, Yifan
    • IPC (main): G06K9/66
    • IPC (further): G06N3/084; G06N3/0454; G06N7/005; G06N99/005; G09B5/00
    • Abstract: Systems and methods are provided for generating a DNN classifier by "learning" a "student" DNN model from a larger, more accurate "teacher" DNN model. The student DNN may be trained from unlabeled training data by passing the unlabeled training data through the teacher DNN, which may be trained from labeled data. In one embodiment, an iterative process is applied to train the student DNN by minimizing the divergence of the output distributions from the teacher and student DNN models. For each iteration until convergence, the difference in the outputs of these two DNNs is used to update the student DNN model, and outputs are determined again, using the unlabeled training data. The resulting trained student DNN model may be suitable for providing accurate signal processing applications on devices having limited computational or storage resources such as mobile or wearable devices. In an embodiment, the teacher DNN model comprises an ensemble of DNN models.
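The iterative teacher-student procedure in the abstract above can be sketched in miniature. This is an illustrative sketch only, not the patented implementation: the `softmax`, `kl_divergence`, and `distill_step` functions are assumptions, and the "student" here is reduced to a bare logit vector rather than a full DNN.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q) between teacher (p) and student (q) output distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distill_step(teacher_logits, student_logits, lr=0.5):
    # One iteration: the gradient of KL(p || softmax(z)) w.r.t. the student
    # logits z is (q - p), so each step moves the student toward the teacher.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return [z - lr * (qi - pi) for z, qi, pi in zip(student_logits, q, p)]

# "Teacher" outputs for one unlabeled input; the "student" starts uninformed.
teacher = [2.0, 0.5, -1.0]
student = [0.0, 0.0, 0.0]
for _ in range(2000):  # iterate until the output distributions agree
    student = distill_step(teacher, student)
```

In the patent's actual setting the difference `(q - p)` would be backpropagated through the student network's weights rather than applied to raw logits, and the loop would run over a corpus of unlabeled inputs.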
    • 2. Invention Application
    • Title: DEEP NEURAL SUPPORT VECTOR MACHINES
    • Publication No.: WO2016165120A1
    • Publication Date: 2016-10-20
    • Application No.: PCT/CN2015/076857
    • Filing Date: 2015-04-17
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: ZHANG, Shixiong; LIU, Chaojun; YAO, Kaisheng; GONG, Yifan
    • IPC (main): G10L15/02
    • IPC (further): G10L15/16; G06N3/02; G06N99/005; G10L15/187; G10L2015/025
    • Abstract: Aspects of the technology described herein relate to a new type of deep neural network (DNN). The new DNN is described herein as a deep neural support vector machine (DNSVM). Traditional DNNs use the multinomial logistic regression (softmax activation) at the top layer and underlying layers for training. The new DNN instead uses a support vector machine (SVM) as one or more layers, including the top layer. The technology described herein can use one of two training algorithms to train the DNSVM to learn parameters of the SVM and DNN under the maximum-margin criterion. The first training method is frame-level training, in which the new model is shown to be related to the multiclass SVM with DNN features. The second training method is sequence-level training, which is related to the structured SVM with DNN features and HMM state-transition features.
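The frame-level training described above treats the top layer as a multiclass SVM over DNN-derived features. Below is a minimal sketch under strong assumptions that are not from the patent: a linear SVM layer over fixed toy "features" (standing in for DNN activations), the Crammer-Singer multiclass hinge loss, and plain subgradient descent.

```python
def scores(W, x):
    # Linear SVM top layer applied to a (fixed) feature vector x.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

def hinge_step(W, x, y, lr=1.0):
    # Crammer-Singer multiclass hinge: find the most-violating class under
    # the max-margin criterion (every wrong class adds a margin of 1).
    s = scores(W, x)
    j = max(range(len(s)), key=lambda i: s[i] + (0 if i == y else 1))
    if j != y:  # margin not yet satisfied: subgradient update
        for k in range(len(x)):
            W[j][k] -= lr * x[k]
            W[y][k] += lr * x[k]
    return W

# Toy 2D features with a bias term, three linearly separable classes.
data = [([1.0, 0.0, 1.0], 0), ([0.0, 1.0, 1.0], 1), ([-1.0, -1.0, 1.0], 2)]
W = [[0.0] * 3 for _ in range(3)]
for _ in range(200):
    for x, y in data:
        W = hinge_step(W, x, y)
```

In the patented DNSVM, the update would additionally be backpropagated into the DNN layers producing the features, and the sequence-level variant would replace the per-frame hinge with a structured SVM loss over HMM state sequences.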