First-order logical neural networks with bidirectional inference

    Publication No.: AU2021269906A1

    Publication Date: 2022-10-27

    Application No.: AU2021269906

    Application Date: 2021-04-13

    Applicant: IBM

    Abstract: A system for configuring and using a logical neural network comprising a graph of syntax trees of the formulae in a represented knowledge base, connected to one another via nodes representing each proposition. One neuron exists for each logical connective occurring in each formula and, additionally, one neuron for each unique proposition occurring in any formula. All neurons return pairs of values representing upper and lower bounds on the truth values of their corresponding subformulae and propositions. Neurons corresponding to logical connectives accept as input the output of neurons corresponding to their operands and have activation functions configured to match the connectives' truth functions. Neurons corresponding to propositions accept as input the output of neurons established as proofs of bounds on the propositions' truth values and have activation functions configured to aggregate the tightest such bounds. Bidirectional inference permits every occurrence of each proposition in each formula to be used as a potential proof.
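The mechanics the abstract describes can be sketched briefly: connective neurons map the (lower, upper) truth bounds of their operands through the connective's truth function, proposition neurons aggregate the tightest bounds across all available proofs, and a backward pass propagates bounds from a formula back onto its operands. The sketch below is an illustrative assumption, not the patented implementation; it assumes real-valued Łukasiewicz logic for the conjunction, and all function names are hypothetical.

```python
def luk_and(a_bounds, b_bounds):
    """Connective neuron (illustrative): Łukasiewicz conjunction applied
    to (lower, upper) truth-value bounds. Monotone, so lower bounds
    combine with lower bounds and upper bounds with upper bounds."""
    (la, ua), (lb, ub) = a_bounds, b_bounds
    return (max(0.0, la + lb - 1.0), max(0.0, ua + ub - 1.0))

def aggregate(*proofs):
    """Proposition neuron: keep the tightest bounds over all proofs,
    i.e. the greatest lower bound and the least upper bound offered."""
    lowers, uppers = zip(*proofs)
    return (max(lowers), min(uppers))

def backward_and(conj_bounds, a_bounds):
    """Backward (downward) pass: bounds on B implied by bounds on
    (A AND B) together with bounds on A, inverting the Łukasiewicz
    truth function. This is one direction of bidirectional inference."""
    (lc, uc), (la, ua) = conj_bounds, a_bounds
    lower_b = min(1.0, max(0.0, lc + 1.0 - ua))
    upper_b = min(1.0, uc + 1.0 - la)
    return (lower_b, upper_b)
```

For example, with A bounded in [0.8, 1.0] and B in [0.6, 0.9], the conjunction neuron yields bounds of roughly [0.4, 0.9]; and if (A AND B) is known to lie in [0.7, 1.0] while A lies in [0.9, 1.0], the backward pass tightens B's lower bound to about 0.7, which B's proposition neuron can then aggregate as one more proof.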

    FIRST-ORDER LOGICAL NEURAL NETWORKS WITH BIDIRECTIONAL INFERENCE

    Publication No.: ZA202100290B

    Publication Date: 2022-01-26

    Application No.: ZA202100290

    Application Date: 2021-01-15

    Applicant: IBM

    Abstract: A system for configuring and using a logical neural network comprising a graph of syntax trees of the formulae in a represented knowledge base, connected to one another via nodes representing each proposition. One neuron exists for each logical connective occurring in each formula and, additionally, one neuron for each unique proposition occurring in any formula. All neurons return pairs of values representing upper and lower bounds on the truth values of their corresponding subformulae and propositions. Neurons corresponding to logical connectives accept as input the output of neurons corresponding to their operands and have activation functions configured to match the connectives' truth functions. Neurons corresponding to propositions accept as input the output of neurons established as proofs of bounds on the propositions' truth values and have activation functions configured to aggregate the tightest such bounds. Bidirectional inference permits every occurrence of each proposition in each formula to be used as a potential proof.
