RELATION EXTRACTION SYSTEM AND METHOD ADAPTED TO FINANCIAL ENTITIES AND FUSED WITH PRIOR KNOWLEDGE

    Publication Number: US20240086650A1

    Publication Date: 2024-03-14

    Application Number: US18217207

    Application Date: 2023-06-30

    CPC classification number: G06F40/53 G06F40/295 G06F40/30

    Abstract: The present invention relates to a relation extraction system adapted to financial entities and fused with prior knowledge, and a method thereof, the system at least comprising: a deep pretraining module, for training and generating a deep pretrained model for recognizing attributes of the financial entities; a keyword analyzing module, for extracting and outputting positional information and importance vectors of keywords in a Chinese finance-related text; an attention mechanism module, for encoding the positional information of the keywords to obtain attention masks, and inputting them together with entity information into the deep pretrained model to acquire text feature vectors; and an optimal margin distribution model module, for predicting financial-entity relations based on the text feature vectors and the importance vectors. Addressing the low applicability of existing models to specific Chinese domains, the present invention obtains more accurate extraction of entities and related features from Chinese finance-related texts.
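    The keyword analyzing and attention mechanism modules above can be pictured with a minimal sketch: keyword positions in a tokenized text are encoded as a 0/1 attention mask that a downstream model could consume. The function name, tokenization, and mask scheme are illustrative assumptions, not the patented encoding.

    ```python
    # Minimal sketch: encode keyword positions in a tokenized sentence as a
    # 0/1 attention mask, as the keyword analyzing and attention mechanism
    # modules might do. Names and the mask scheme are illustrative.

    def keyword_attention_mask(tokens, keywords):
        """Return a mask with 1 at positions of keyword tokens, 0 elsewhere."""
        kw = set(keywords)
        return [1 if t in kw else 0 for t in tokens]

    tokens = ["the", "bank", "issued", "a", "bond"]
    mask = keyword_attention_mask(tokens, ["bank", "bond"])
    print(mask)  # [0, 1, 0, 0, 1]
    ```

    In practice such a mask would be fed, together with entity markers, into the pretrained encoder to bias attention toward keyword positions.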

    HARDWARE ACCELERATOR FOR HYPERGRAPH PROCESSING AND OPERATING METHOD THEREOF

    Publication Number: US20240061779A1

    Publication Date: 2024-02-22

    Application Number: US18145565

    Application Date: 2022-12-22

    CPC classification number: G06F12/0806 G06F12/10 G06F2212/1016

    Abstract: The present invention relates to a hardware accelerator for hypergraph processing and its operating method, the hardware accelerator comprising: a data loader, for, in the presence of a data-centric load-trigger-reduce execution model, reading hypergraph partition data from an off-chip memory successively according to the hypergraph data structure and the order of hypergraph partitions; an address translator, for deploying the hypergraph data into a private register of a processor and/or into a buffer memory according to the priority level of the loaded data, and recording the corresponding offset information; a task trigger, for generating computing tasks according to the loaded data and scheduling them onto the processor; the processor, for receiving and executing the computing tasks; and a reducer, for scheduling intermediate results into a first-priority-data reducer unit or a second-priority-data reducer unit depending on the priority level of the data, so as to execute a reducing operation on the intermediate results. In view of the shortcomings of task-centric hardware accelerators, the present invention can prevent possible data conflicts during parallel execution of multiple processing units.
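    The data-centric load-trigger-reduce execution model above can be sketched in software: partitions are loaded in order, each loaded item triggers a compute task, and intermediate results are reduced into priority-specific accumulators. The data layout, compute function, and priority rule are illustrative assumptions.

    ```python
    # Sketch of the load-trigger-reduce pipeline: load partitions in order,
    # trigger one compute task per loaded item, and reduce intermediate
    # results by priority level. All specifics here are illustrative.

    def load_trigger_reduce(partitions, compute, high_priority):
        high, low = 0, 0
        for part in partitions:          # load: partitions read in order
            for item in part:            # trigger: a task per loaded item
                result = compute(item)
                if high_priority(item):  # reduce: route result by priority
                    high += result
                else:
                    low += result
        return high, low

    parts = [[1, 2], [3, 4]]
    print(load_trigger_reduce(parts, lambda x: x * x, lambda x: x % 2 == 0))
    # (20, 10): squares of even items vs. squares of odd items
    ```

    The point of separating the two reducer paths, as in the abstract, is that results of different priority never contend for the same accumulator during parallel execution.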

    DYNAMIC MEMORY MANAGEMENT APPARATUS AND METHOD FOR HLS

    Publication Number: US20240053892A1

    Publication Date: 2024-02-15

    Application Number: US18145552

    Application Date: 2022-12-22

    CPC classification number: G06F3/061 G06F3/0673 G06F3/0656

    Abstract: The present invention relates to a dynamic memory management apparatus and method for HLS, the apparatus at least comprising several searching and caching modules and several modifying and writing-back modules, wherein the searching and caching modules and the modifying and writing-back modules are each in connection with a DRAM storing module and a BRAM buffer, wherein the BRAM buffer is for caching information about nodes on a search path and registering information about modifications made to the nodes; a searching and caching module is for reading node data from the DRAM storing module according to received operators and node addresses, and writing the node data into the BRAM buffer; and a modifying and writing-back module reads the node data from the BRAM buffer and writes it back into the DRAM storing module. Addressing the low execution efficiency of directly transplanting a traditional operating system onto an FPGA, the present invention utilizes the large capacity of the DRAM on the FPGA to realize efficient dynamic memory allocation and deallocation, improving the usability and code reusability of HLS.
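    The two-stage node access above can be sketched with two small functions: a search-and-cache step that reads a node from DRAM into a BRAM-like buffer, and a modify-and-write-back step that registers the change and flushes it to DRAM. The dict-based memories are illustrative stand-ins for the hardware modules.

    ```python
    # Sketch of the search/cache and modify/write-back flow. The dicts
    # stand in for the DRAM storing module and BRAM buffer; addresses and
    # node values are illustrative.

    dram = {0: "root", 1: "left", 2: "right"}   # DRAM storing module
    bram = {}                                   # BRAM buffer (search-path cache)

    def search_and_cache(addr):
        bram[addr] = dram[addr]        # read node from DRAM, cache in BRAM
        return bram[addr]

    def modify_and_write_back(addr, value):
        bram[addr] = value             # register the modification in BRAM
        dram[addr] = bram[addr]        # write the node back to DRAM

    search_and_cache(1)
    modify_and_write_back(1, "left'")
    print(dram[1])  # left'
    ```

    Keeping the search path in the buffer means repeated accesses to the same nodes during one operation hit fast on-chip memory instead of DRAM.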

    GRAPHIC-BLOCKCHAIN-ORIENTATED HYBRID CONSENSUS IMPLEMENTATION APPARATUS AND IMPLEMENTATION METHOD THEREOF

    Publication Number: US20230017790A1

    Publication Date: 2023-01-19

    Application Number: US17806668

    Application Date: 2022-06-13

    Abstract: The present invention relates to an implementation method for graphic-blockchain-orientated hybrid consensus, at least comprising: calling at least one module to generate a new data block and broadcast it; calling at least one module to check and validate the received new block based on predetermined rules; calling at least one module to generate a void block; and calling at least one module to update a committee member list. Existing graphic blockchains can only achieve probabilistic consensus, yet the present invention achieves deterministic consensus on a graphic blockchain for the first time, thereby reaching consensus faster. The present invention decouples the generation and consensus of blocks for the first time, so that the two parts can be designed separately in a modularized manner. The present invention provides the first totally decentralized hybrid consensus algorithm in graphic blockchains. This is unachievable for many existing graphic blockchains such as IOTA and Obyte.
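    The four module calls listed above can be sketched as four standalone functions: block generation, rule-based validation, void-block generation, and committee rotation. The block format, validation rule, and rotation policy are illustrative assumptions, not the patented protocol.

    ```python
    # Sketch of the four consensus steps: generate/broadcast a block,
    # validate received blocks against predetermined rules, emit a void
    # block, and update the committee list. All details are illustrative.

    def generate_block(parent, payload):
        return {"parent": parent, "payload": payload, "void": False}

    def validate(block, known_parents):
        # Example predetermined rule: the parent must already be known.
        return block["parent"] in known_parents

    def void_block(parent):
        return {"parent": parent, "payload": None, "void": True}

    def update_committee(members, new_member):
        # Example policy: rotate the oldest member out, append the new one.
        return members[1:] + [new_member]

    blk = generate_block("genesis-id", "tx-batch")
    accepted = validate(blk, {"genesis-id"})
    committee = update_committee(["a", "b", "c"], "d")
    print(accepted, committee)  # True ['b', 'c', 'd']
    ```

    Because each step is its own function, generation and consensus can evolve independently, which mirrors the modular decoupling the abstract claims.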

    LIGHTWEIGHT DATA STORAGE APPARATUS FOR GRAPHIC BLOCKCHAIN AND METHOD THEREOF

    Publication Number: US20230015556A1

    Publication Date: 2023-01-19

    Application Number: US17664750

    Application Date: 2022-05-24

    Abstract: The present invention relates to a lightweight data storage apparatus for graphic blockchains, at least comprising a common transaction construction module for a user to initiate new transactions and a network broadcast module for broadcasting the transactions, wherein the apparatus further comprises a combined-transaction constructing module and a transaction deleting module, wherein the combined-transaction constructing module serves to determine whether the number of transactions initiated by an account satisfies a first predetermined condition, and if so, to execute a first lightening procedure on the transactions, and the transaction deleting module serves to execute a second lightening procedure on transactions that have been processed by the first lightening procedure and now have validation references satisfying a second predetermined condition, after which the network broadcast module broadcasts the transactions obtained from the second lightening procedure. With the disclosed transaction-combining and reference-transaction-deleting scheme, the data storage overheads of a graphic blockchain can be reduced.
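    The two lightening procedures above can be sketched as two passes: the first combines an account's transactions once their count passes a threshold, and the second deletes combined transactions whose validation-reference count passes another threshold. Both thresholds and the transaction shape are illustrative assumptions.

    ```python
    # Sketch of the two lightening procedures: combine when an account has
    # enough transactions (first condition), then delete combined ones with
    # enough validation references (second condition). Thresholds assumed.

    COMBINE_THRESHOLD = 3   # first predetermined condition (assumed)
    DELETE_THRESHOLD = 2    # second predetermined condition (assumed)

    def first_lightening(txs):
        """Combine an account's transactions if there are enough of them."""
        if len(txs) >= COMBINE_THRESHOLD:
            return [{"amount": sum(t["amount"] for t in txs), "refs": 0}]
        return txs

    def second_lightening(txs):
        """Delete combined transactions with enough validation references."""
        return [t for t in txs if t["refs"] < DELETE_THRESHOLD]

    txs = [{"amount": 1, "refs": 0}, {"amount": 2, "refs": 0}, {"amount": 3, "refs": 0}]
    print(first_lightening(txs))              # [{'amount': 6, 'refs': 0}]
    print(second_lightening([{"amount": 6, "refs": 2}]))  # []
    ```

    Combining shrinks the live transaction set, and deleting well-referenced combined transactions removes history that the graph no longer needs for validation, which is where the storage savings come from.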

    METHOD OF TIME-DELAY ENCRYPTION WITH KEYWORD SEARCH AND SYSTEM USING THE SAME

    Publication Number: US20220255744A1

    Publication Date: 2022-08-11

    Application Number: US17444613

    Application Date: 2021-08-06

    Abstract: The present invention relates to a method of time-delay encryption with keyword search and a system using the same, at least comprising: based on a public key PK, generating searchable ciphertexts Cw and/or file ciphertexts for keywords w of at least one to-be-uploaded file by means of time-delay encryption, and uploading the ciphertexts to a cloud server; sending to the cloud server at least one keyword search trapdoor Tw generated for a to-be-searched keyword w based on a private key SK; and the cloud server, based on the keyword search trapdoor Tw, performing keyword search on all the searchable ciphertexts Cw so as to obtain the matching searchable ciphertexts Cw, determining the corresponding file ciphertexts based on the matched searchable ciphertexts Cw, and feeding the corresponding file ciphertexts back to a receiving end. The present invention increases the difficulty for attackers to launch keyword guessing attacks.
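    The searchable-ciphertext flow above can be illustrated with a toy stand-in: the uploader derives a searchable ciphertext Cw per keyword, the searcher derives a trapdoor Tw, and the server matches them without learning w. The keyed hash here is only a stand-in for the actual public-key, time-delay scheme; the symmetric key setup is an assumption of the sketch.

    ```python
    # Toy sketch of the Cw / Tw / server-search flow, using a keyed hash as
    # a stand-in for the patented time-delay encryption. The shared
    # symmetric key and all names here are illustrative assumptions.

    import hashlib
    import hmac

    SK = b"shared-secret-key"   # private key (assumed symmetric for the sketch)

    def searchable_ciphertext(w):
        return hmac.new(SK, w.encode(), hashlib.sha256).hexdigest()

    def trapdoor(w):
        return hmac.new(SK, w.encode(), hashlib.sha256).hexdigest()

    def server_search(trap, ciphertexts):
        """Return indices of searchable ciphertexts matching the trapdoor."""
        return [i for i, c in enumerate(ciphertexts) if hmac.compare_digest(c, trap)]

    cts = [searchable_ciphertext(w) for w in ["bond", "loan", "rate"]]
    print(server_search(trapdoor("loan"), cts))  # [1]
    ```

    In the real scheme the time-delay step makes trial encryption of guessed keywords slow, which is what raises the cost of keyword guessing attacks; the toy hash above has no such delay.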

    TENSOR-BASED OPTIMIZATION METHOD FOR MEMORY MANAGEMENT OF A DEEP-LEARNING GPU AND SYSTEM THEREOF

    Publication Number: US20210142178A1

    Publication Date: 2021-05-13

    Application Number: US16946690

    Application Date: 2020-07-01

    Abstract: The present disclosure relates to a tensor-based optimization method for GPU memory management in deep learning, at least comprising the steps of: executing at least one computing operation, which takes tensors as input and generates tensors as output; when one said computing operation is executed, tracking access information of the tensors; during a first iteration of training, performing memory swapping operations passively between a CPU memory and a GPU memory so as to obtain the access information about the tensors over a complete iteration; according to the obtained access information, setting up a memory management optimization decision; and in successive iterations, dynamically adjusting the set memory management optimization decision according to operational feedback.
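    The tracking-then-deciding idea above can be sketched in two steps: record which operation indices access each tensor during the first iteration, then mark for swap-out any tensor with a long idle gap between consecutive accesses. The gap threshold and trace format are illustrative assumptions.

    ```python
    # Sketch of first-iteration tensor tracking and the resulting swap
    # decision: tensors idle for many operations between accesses are
    # candidates for CPU<->GPU swapping. Threshold and trace are assumed.

    SWAP_GAP = 3   # swap out tensors idle for this many operations (assumed)

    def record_accesses(trace):
        """Map tensor name -> list of operation indices that access it."""
        accesses = {}
        for step, tensor in enumerate(trace):
            accesses.setdefault(tensor, []).append(step)
        return accesses

    def swap_plan(accesses):
        """Swap out tensors with a large gap between consecutive accesses."""
        plan = set()
        for tensor, steps in accesses.items():
            gaps = [b - a for a, b in zip(steps, steps[1:])]
            if any(g >= SWAP_GAP for g in gaps):
                plan.add(tensor)
        return plan

    trace = ["x", "w", "x", "y", "y", "w"]   # first-iteration access trace
    acc = record_accesses(trace)
    print(sorted(swap_plan(acc)))  # ['w']
    ```

    In later iterations the plan would be adjusted from operational feedback, e.g. dropping a tensor from the plan if swapping it back in stalls the GPU.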
