MULTI-OBJECT POSITIONING USING MIXTURE DENSITY NETWORKS

    Publication No.: WO2022178473A1

    Publication Date: 2022-08-25

    Application No.: PCT/US2022/070281

    Filing Date: 2022-01-21

    Abstract: Certain aspects of the present disclosure provide techniques for object positioning using mixture density networks, comprising: receiving radio frequency (RF) signal data collected in a physical space; generating a feature vector encoding the RF signal data by processing the RF signal data using a first neural network; processing the feature vector using a first mixture model to generate a first encoding tensor indicating a set of moving objects in the physical space, a first location tensor indicating a location of each of the moving objects in the physical space, and a first uncertainty tensor indicating uncertainty of the locations of each of the moving objects in the physical space; and outputting at least one location from the first location tensor.
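    To make the described pipeline concrete, here is a minimal PyTorch-style sketch of the two-stage architecture, assuming a flat RF feature vector, 2-D locations, and a fixed maximum object count; the class names, layer sizes, and the sigmoid/exponential parameterizations are illustrative assumptions, not the patent's implementation.

        import torch
        import torch.nn as nn

        class RFEncoder(nn.Module):
            """First neural network: encodes RF signal data into a feature vector."""
            def __init__(self, rf_dim=256, feat_dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(rf_dim, feat_dim), nn.ReLU(),
                    nn.Linear(feat_dim, feat_dim),
                )

            def forward(self, rf):
                return self.net(rf)

        class MixtureHead(nn.Module):
            """First mixture model: one component per candidate moving object."""
            def __init__(self, feat_dim=128, max_objects=8):
                super().__init__()
                self.encoding = nn.Linear(feat_dim, max_objects)         # presence logits
                self.location = nn.Linear(feat_dim, max_objects * 2)     # (x, y) per object
                self.uncertainty = nn.Linear(feat_dim, max_objects * 2)  # log-variance per coordinate

            def forward(self, feat):
                n = self.encoding.out_features
                enc = torch.sigmoid(self.encoding(feat))                # encoding tensor
                loc = self.location(feat).view(-1, n, 2)                # location tensor
                unc = torch.exp(self.uncertainty(feat)).view(-1, n, 2)  # uncertainty tensor
                return enc, loc, unc

        rf = torch.randn(1, 256)               # dummy RF snapshot from the physical space
        enc, loc, unc = MixtureHead()(RFEncoder()(rf))
        print(loc[0, enc[0].argmax()])         # output at least one location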

    SPIKING MULTI-LAYER PERCEPTRON

    Publication No.: WO2017136104A1

    Publication Date: 2017-08-10

    Application No.: PCT/US2017/012730

    Filing Date: 2017-01-09

    CPC classification number: G06N3/049 G06N3/084

    Abstract: A method of training a neural network with back propagation includes generating error events representing a gradient of a cost function for the neural network. The error events may be generated based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal. The method further includes updating the weights of the neural network based on the error events.
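    As a rough illustration of the event-driven update, the NumPy sketch below thresholds a dense gradient into sparse, signed error events and applies the weight update from those events alone; the squared-error cost, threshold value, and learning rate are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(4, 3))    # weights of one layer
        x = rng.normal(size=(1, 4))               # input events (forward pass)
        target = np.zeros((1, 3))
        target[0, 1] = 1.0                        # events from a target signal

        y = x @ W                                 # forward pass through the layer
        grad = y - target                         # gradient of a squared-error cost
        theta = 0.05                              # event threshold
        error_events = np.sign(grad) * (np.abs(grad) > theta)  # sparse error events
        W -= 0.1 * x.T @ error_events             # update weights from error events only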

    VOLTAGE OFFSET FOR COMPUTE-IN-MEMORY ARCHITECTURE

    Publication No.: WO2021222821A1

    Publication Date: 2021-11-04

    Application No.: PCT/US2021/030285

    Filing Date: 2021-04-30

    Abstract: In one embodiment, an electronic device includes a compute-in-memory (CIM) array that includes a plurality of columns. Each column includes a plurality of CIM cells connected to a corresponding read bitline, a plurality of offset cells configured to provide a programmable offset value for the column, and an analog-to-digital converter (ADC) having the corresponding bitline as a first input and configured to receive the programmable offset value. Each CIM cell is configured to store a corresponding weight.
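    A behavioral model of a single column may help: cell currents accumulate on the read bitline, the offset cells shift that sum by a programmable value, and an idealized ADC digitizes the result. The cell count, offset value, and ADC resolution below are assumptions.

        import numpy as np

        weights = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # weights stored in the column's CIM cells
        inputs  = np.array([1, 1, 0, 1, 0, 0, 1, 1])  # activations driven on the wordlines
        offset  = -2                                  # programmable offset from the offset cells

        bitline = np.dot(weights, inputs)             # analog accumulation on the read bitline

        def adc(x, bits=4, full_scale=8):
            """Idealized ADC: clip to range, quantize to 2**bits - 1 levels."""
            levels = 2 ** bits - 1
            return round(max(0, min(full_scale, x)) / full_scale * levels)

        print(adc(bitline + offset))                  # ADC input is bitline plus offset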

    PERFORMING XNOR EQUIVALENT OPERATIONS BY ADJUSTING COLUMN THRESHOLDS OF A COMPUTE-IN-MEMORY ARRAY

    Publication No.: WO2021050440A1

    Publication Date: 2021-03-18

    Application No.: PCT/US2020/049754

    Filing Date: 2020-09-08

    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used together as the threshold for determining the output values of the compute-in-memory array.
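    The threshold adjustment rests on a standard identity between XNOR-style (bipolar) dot products and AND-style (binary) accumulation, sketched below in NumPy; the vector length and random data are illustrative, and the correction term stands in for the adjusted activation threshold and conversion bias reference.

        import numpy as np

        # With bipolar a, w in {-1, +1} encoded as bits b = (v + 1) / 2:
        #   sum(a * w) = 4 * sum(b_a AND b_w) - 2 * sum(b_a) - 2 * sum(b_w) + N
        rng = np.random.default_rng(1)
        N = 16
        a = rng.choice([-1, 1], size=N)      # bipolar activations
        w = rng.choice([-1, 1], size=N)      # bipolar weights
        ba, bw = (a + 1) // 2, (w + 1) // 2  # binary encodings in {0, 1}

        and_sum = np.sum(ba & bw)            # what the AND-accumulating column measures
        correction = -2 * ba.sum() - 2 * bw.sum() + N  # threshold/bias adjustment
        assert 4 * and_sum + correction == np.sum(a * w)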

    SIGMA-DELTA POSITION DERIVATIVE NETWORKS

    Publication No.: WO2018212946A1

    Publication Date: 2018-11-22

    Application No.: PCT/US2018/029180

    Filing Date: 2018-04-24

    CPC classification number: G06N3/084 G06N3/04 G06N3/049 G06N3/063

    Abstract: A method for processing temporally redundant data in an artificial neural network (ANN) includes encoding an input signal, received at an initial layer of the ANN, into an encoded signal. The encoded signal comprises the input signal and a rate of change of the input signal. The method also includes quantizing the encoded signal into integer values and computing an activation signal of a neuron in a next layer of the ANN based on the quantized encoded signal. The method further includes computing an activation signal of a neuron at each layer subsequent to the next layer to compute a full forward pass of the ANN. The method also includes back propagating approximated gradients and updating parameters of the ANN based on an approximate derivative of a loss with respect to the activation signal.
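    The quantize-and-integrate loop at the heart of the scheme can be sketched briefly in NumPy. This simplified sketch omits the explicit signal-plus-derivative encoding and the backward pass, showing only integer-quantized residuals that the next layer integrates; the scale factor and test signal are assumptions.

        import numpy as np

        def sigma_delta_stream(signal, scale=16.0):
            """Emit integer events encoding the quantized change in the scaled signal."""
            acc = 0.0
            for x in signal:
                q = int(round(x * scale - acc))  # integer residual-quantized event
                acc += q
                yield q

        t = np.linspace(0, 1, 50)
        signal = np.sin(2 * np.pi * t)           # temporally redundant input
        events = list(sigma_delta_stream(signal))
        recon = np.cumsum(events) / 16.0         # the next layer integrates the events
        print(np.max(np.abs(recon - signal)))    # small reconstruction error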

    TEMPORAL DIFFERENCE ESTIMATION IN AN ARTIFICIAL NEURAL NETWORK

    Publication No.: WO2018084941A1

    Publication Date: 2018-05-11

    Application No.: PCT/US2017/052083

    Filing Date: 2017-09-18

    CPC classification number: G06N3/049

    Abstract: A method of computation in a deep neural network includes discretizing input signals and computing a temporal difference of the discrete input signals to produce a discretized temporal difference. The method also includes applying weights of a first layer of the deep neural network to the discretized temporal difference to create an output of a weight matrix. The output of the weight matrix is temporally summed with a previous output of the weight matrix. An activation function is applied to the temporally summed output to create a next input signal to a next layer of the deep neural network.
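    The per-layer computation reads directly as a few lines of NumPy: quantize the input, difference it in time, push the difference through the weights, accumulate with the previous output, and activate. The quantization step, shapes, input signal, and ReLU choice are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        W = rng.normal(scale=0.5, size=(3, 4))  # first-layer weight matrix
        step = 0.25                             # discretization step

        prev_in = np.zeros(3)                   # previous discretized input
        acc = np.zeros(4)                       # temporal sum of weight-matrix outputs
        for t in range(5):
            x = np.sin(0.3 * t + np.arange(3))  # streaming input signal
            xq = np.round(x / step) * step      # discretized input
            diff = xq - prev_in                 # discretized temporal difference
            prev_in = xq
            acc += diff @ W                     # weight output summed with previous outputs
            out = np.maximum(acc, 0.0)          # activation -> input to the next layer

        print(out)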
