CONSERVATIVELY ADAPTING A DEEP NEURAL NETWORK IN A RECOGNITION SYSTEM
    1.
    Invention Application
    CONSERVATIVELY ADAPTING A DEEP NEURAL NETWORK IN A RECOGNITION SYSTEM (Pending - Published)

    Publication number: WO2014137952A2

    Publication date: 2014-09-12

    Application number: PCT/US2014/020052

    Filing date: 2014-03-04

    CPC classification number: G10L15/16 G06N3/0481 G06N3/084 G10L15/07 G10L15/20

    Abstract: Various technologies described herein pertain to conservatively adapting a deep neural network (DNN) in a recognition system for a particular user or context. A DNN is employed to output a probability distribution over models of context-dependent units responsive to receipt of captured user input. The DNN is adapted for a particular user based upon the captured user input, wherein the adaptation is undertaken conservatively such that a deviation between outputs of the adapted DNN and the unadapted DNN is constrained.

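    The conservative adaptation described in the abstract can be sketched as a regularized training objective: standard cross-entropy on the captured user input, plus a penalty on the deviation between the adapted and unadapted output distributions. Below is a minimal NumPy sketch, assuming a KL-divergence penalty with a hypothetical weight `rho`; the abstract only requires that the deviation be constrained, not this particular form.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def conservative_adaptation_loss(adapted_logits, unadapted_logits, labels, rho=0.5):
    """Cross-entropy on the user's data plus a KL penalty that keeps the
    adapted DNN's output distribution close to the unadapted one.
    `rho` is a hypothetical regularization weight (not from the abstract)."""
    p_new = softmax(adapted_logits)
    p_old = softmax(unadapted_logits)
    n = labels.shape[0]
    # cross-entropy against the user's labels
    ce = -np.log(p_new[np.arange(n), labels] + 1e-12).mean()
    # KL(unadapted || adapted): penalizes drift away from the original model
    kl = (p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12))).sum(axis=1).mean()
    return ce + rho * kl
```

    Setting `rho = 0` recovers unconstrained adaptation; larger values keep the adapted network's outputs closer to the unadapted network's.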

    COMPUTING SYSTEM FOR TRAINING NEURAL NETWORKS
    2.
    Invention Application
    COMPUTING SYSTEM FOR TRAINING NEURAL NETWORKS (Pending - Published)

    Publication number: WO2016037351A1

    Publication date: 2016-03-17

    Application number: PCT/CN2014/086398

    Filing date: 2014-09-12

    CPC classification number: G06N3/08 G06N3/04 G06N3/0454 G06N3/084 G06N7/005

    Abstract: Techniques and constructs can reduce the time required to determine solutions to optimization problems such as training of neural networks. Modifications to a computational model can be determined by a plurality of nodes operating in parallel. Quantized modification values can be transmitted between the nodes to reduce the volume of data to be transferred. The quantized values can be as small as one bit each. Quantization-error values can be stored and used in quantizing subsequent modifications. The nodes can operate in parallel and overlap computation and data transfer to further reduce the time required to determine solutions. The quantized values can be partitioned and each node can aggregate values for a corresponding partition.

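    The core mechanism in this abstract, quantizing modification values down to one bit while storing the quantization error for use in subsequent quantization, can be sketched as follows. This is a minimal NumPy sketch assuming sign quantization with a per-tensor mean-magnitude scale; the abstract does not fix how the transmitted magnitude is chosen.

```python
import numpy as np

def quantize_1bit(grad, residual):
    """Quantize a gradient tensor to one bit per element.
    `residual` carries the quantization error from the previous step so
    that no information is permanently lost (error feedback)."""
    g = grad + residual             # fold in the error carried from last step
    sign = np.where(g >= 0, 1.0, -1.0)
    scale = np.abs(g).mean()        # one shared magnitude per tensor (an assumption)
    q = sign * scale                # transmitted value: one bit + shared scale
    new_residual = g - q            # quantization error, stored for next step
    return q, new_residual
```

    Each node would transmit only the sign bits plus the single scale, cutting transfer volume roughly 32-fold for float32 gradients, while the stored residual ensures the error is folded into later updates rather than discarded.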
