-
Publication No.: AU2021269906A1
Publication Date: 2022-10-27
Application No.: AU2021269906
Application Date: 2021-04-13
Applicant: IBM
Inventor: RIEGEL RYAN , LUUS FRANCOIS , AKHALWAYA ISMAIL YUNUS , KHAN NAWEED AGHMAD , MAKONDO NDIVHUWO , BARAHONA FRANCISCO , GRAY ALEXANDER
IPC: G06N3/04
Abstract: A system for configuring and using a logical neural network including a graph syntax tree of formulae in a represented knowledgebase connected to each other via nodes representing each proposition. One neuron exists for each logical connective occurring in each formula and, additionally, one neuron for each unique proposition occurring in any formula. All neurons return pairs of values representing upper and lower bounds on truth values of their corresponding subformulae and propositions. Neurons corresponding to logical connectives accept as input the output of neurons corresponding to their operands and have activation functions configured to match the connectives' truth functions. Neurons corresponding to propositions accept as input the output of neurons established as proofs of bounds on the propositions' truth values and have activation functions configured to aggregate the tightest such bounds. Bidirectional inference permits every occurrence of each proposition in each formula to be used as a potential proof.
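The abstract's bounds-pair mechanics can be illustrated with a minimal sketch. This is not the patented implementation; the neuron functions, the choice of the min t-norm for conjunction, and all names are hypothetical, chosen only to show how connective neurons combine operand bounds and how proposition neurons aggregate the tightest proved bounds.

```python
# Hypothetical sketch of the bounds-pair idea: every neuron returns a
# (lower, upper) pair of truth-value bounds.

def and_neuron(*operand_bounds):
    """Conjunction neuron: activation matches the AND truth function.
    Here, illustratively, the min t-norm applied bound-wise."""
    lowers, uppers = zip(*operand_bounds)
    return (min(lowers), min(uppers))

def proposition_neuron(proof_bounds):
    """Proposition neuron: aggregates the tightest bounds among all
    proofs, i.e. the largest lower bound and smallest upper bound."""
    lowers, uppers = zip(*proof_bounds)
    return (max(lowers), min(uppers))

# A proposition proved by two formula occurrences: the aggregate keeps
# the tightest interval, (0.6, 0.9).
p = proposition_neuron([(0.2, 1.0), (0.6, 0.9)])
```

Because each occurrence of a proposition contributes its own bounds, this aggregation is what lets bidirectional inference treat every occurrence as a potential proof.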
-
Publication No.: ZA202100290B
Publication Date: 2022-01-26
Application No.: ZA202100290
Application Date: 2021-01-15
Applicant: IBM
Inventor: RIEGEL RYAN , LUUS FRANCOIS PIERRE , AKHALWAYA ISMAIL YUNUS , KHAN NAWEED , MAKONDO NDIVHUWO , BARAHONA FRANCISCO , GRAY ALEXANDER
Abstract: A system for configuring and using a logical neural network including a graph syntax tree of formulae in a represented knowledgebase connected to each other via nodes representing each proposition. One neuron exists for each logical connective occurring in each formula and, additionally, one neuron for each unique proposition occurring in any formula. All neurons return pairs of values representing upper and lower bounds on truth values of their corresponding subformulae and propositions. Neurons corresponding to logical connectives accept as input the output of neurons corresponding to their operands and have activation functions configured to match the connectives' truth functions. Neurons corresponding to propositions accept as input the output of neurons established as proofs of bounds on the propositions' truth values and have activation functions configured to aggregate the tightest such bounds. Bidirectional inference permits every occurrence of each proposition in each formula to be used as a potential proof.
-
Publication No.: ZA202100291B
Publication Date: 2022-02-23
Application No.: ZA202100291
Application Date: 2021-01-15
Applicant: IBM
Inventor: AKHALWAYA ISMAIL YUNUS , KHAN NAWEED , LUUS FRANCOIS PIERRE , MAKONDO NDIVHUWO , RIEGEL RYAN , GRAY ALEXANDER
Abstract: Word sense disambiguation using a glossary layer embedded in a deep neural network includes receiving, by one or more processors, input sentences including a plurality of words. At least two words in the plurality of words are homonyms. The one or more processors convert the plurality of words associated with each input sentence into a first vector including possible senses for the at least two words. The first vector is then combined with a second vector including a domain-specific contextual vector associated with the at least two words. The combination of the first vector with the second vector is fed into a recurrent deep logico-neural network model to generate a third vector that includes word senses for the at least two words. A threshold is set for the third vector to generate a fourth vector including a final word sense vector for the at least two words.
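The four-vector pipeline described in the abstract can be sketched as a simple sequence of steps. This is an illustrative stand-in, not the patented network: the vector dimensions, the 0.5 threshold, and the toy stand-in for the recurrent deep logico-neural model are all assumptions made for the example.

```python
import numpy as np

def disambiguate(sense_vec, context_vec, model, threshold=0.5):
    """Combine the possible-senses vector (first vector) with the
    domain-specific contextual vector (second vector), run the model
    to get sense scores (third vector), then threshold them into the
    final word-sense vector (fourth vector)."""
    combined = np.concatenate([sense_vec, context_vec])
    scores = model(combined)
    return (scores >= threshold).astype(int)

# Toy usage: a placeholder "model" that just emits three sense scores,
# standing in for the recurrent deep logico-neural network.
toy_model = lambda v: v[:3]
final = disambiguate(np.array([0.9, 0.2, 0.4]),
                     np.array([0.1, 0.8]),
                     toy_model)
```

The thresholding step is what converts graded sense scores into a discrete selection among the homonyms' candidate senses.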
-