-
Publication No.: US10146699B2
Publication Date: 2018-12-04
Application No.: US15500576
Filing Date: 2015-04-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Mark David Lillibridge , Paolo Faraboschi
IPC: G06F12/02 , G06F12/1036 , G06F12/0804
Abstract: Apertures of a first size in a first physical address space of at least one processor are mapped to respective blocks of the first size in a second address space of a storage medium. Apertures of a second size in the first physical address space are mapped to respective blocks of the second size in the second address space, the second size being different from the first size.
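The two-size aperture mapping above can be modeled as a translation table from processor physical addresses to storage-medium addresses. This is a minimal sketch, not the patented implementation; the aperture sizes, class name, and lookup strategy are all illustrative assumptions.

```python
# Hypothetical model: apertures of two sizes in the processor's physical
# address space map to same-size blocks in a storage medium's address space.
SMALL = 4 * 1024          # assumed small aperture size (4 KiB)
LARGE = 2 * 1024 * 1024   # assumed large aperture size (2 MiB)

class ApertureMapper:
    def __init__(self):
        # aperture base address -> (block base in storage space, aperture size)
        self.table = {}

    def map_aperture(self, aperture_base, block_base, size):
        assert size in (SMALL, LARGE)
        self.table[aperture_base] = (block_base, size)

    def translate(self, phys_addr):
        """Translate a processor physical address to a storage address."""
        for aperture_base, (block_base, size) in self.table.items():
            if aperture_base <= phys_addr < aperture_base + size:
                return block_base + (phys_addr - aperture_base)
        raise KeyError("address not covered by any aperture")

mapper = ApertureMapper()
mapper.map_aperture(0x1000_0000, 0x40_0000, SMALL)   # small aperture
mapper.map_aperture(0x2000_0000, 0x80_0000, LARGE)   # large aperture
```

A real implementation would use aligned apertures and shift/mask arithmetic rather than a linear scan; the dictionary walk here is only for clarity.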
-
Publication No.: US10127282B2
Publication Date: 2018-11-13
Application No.: US15305960
Filing Date: 2014-04-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Sheng Li , Kevin T. Lim , Dejan S. Milojicic , Paolo Faraboschi
Abstract: A bit vector for a Bloom filter is determined by performing one or more hash function operations on a set of ternary content addressable memory (TCAM) words. A TCAM array is partitioned into a first portion to store the bit vector for the Bloom filter and a second portion to store the set of TCAM words. The TCAM array can be searched using a search word by performing the one or more hash function operations on the search word to generate a hashed search word and determining whether bits at specified positions of the hashed search word match bits at corresponding positions of the bit vector stored in the first portion of the TCAM array before searching the second portion of the TCAM array with the search word.
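A software model of this search flow: hash the search word, test the resulting bit positions against the Bloom bit vector (the "first portion"), and only scan the stored word table (the "second portion") on a Bloom hit. The hash construction, vector size, and number of hash functions are assumptions for illustration, not the patent's choices.

```python
import hashlib

M = 256  # assumed number of bits in the Bloom bit vector ("first portion")
K = 3    # assumed number of hash functions

def _positions(word):
    # Derive K bit positions from one digest (an illustrative choice).
    digest = hashlib.sha256(word.encode()).digest()
    return [int.from_bytes(digest[2*i:2*i+2], "big") % M for i in range(K)]

class TcamModel:
    def __init__(self, words):
        self.words = list(words)      # "second portion": the stored TCAM words
        self.bits = [0] * M           # "first portion": the Bloom bit vector
        for w in self.words:
            for p in _positions(w):
                self.bits[p] = 1

    def search(self, word):
        # Bloom pre-check: any zero bit proves the word is absent,
        # so the expensive scan of the word table is skipped.
        if any(self.bits[p] == 0 for p in _positions(word)):
            return False
        return word in self.words     # exact search only on a Bloom hit

tcam = TcamModel(["alpha", "beta", "gamma"])
```

The payoff is that most absent words are rejected by the single bit-vector probe, saving the full search that a plain TCAM lookup would always perform.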
-
Publication No.: US20180074959A1
Publication Date: 2018-03-15
Application No.: US15320685
Filing Date: 2014-07-22
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Sheng Li , Jishen Zhao , Kevin T. Lim , Paolo Faraboschi
IPC: G06F12/0815 , G06F1/32 , G06F13/40
CPC classification number: G06F12/0815 , G06F1/3287 , G06F1/3296 , G06F12/0806 , G06F13/14 , G06F13/4022 , G06F2212/1008 , G06F2212/62
Abstract: According to an example, a node-based computing device includes memory nodes communicatively coupled to a processor node. The memory nodes may form a main memory address space for the processor node. The processor node may establish a virtual circuit through memory nodes. The virtual circuit may dedicate a path within the memory nodes. The processor node may then communicate a message through the virtual circuit. The memory nodes may forward the message according to the path dedicated by the virtual circuit.
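A toy model of the virtual-circuit idea: the processor node dedicates a path by programming a next-hop entry for the circuit at each memory node, after which messages are forwarded hop by hop without per-message routing. The topology, circuit identifiers, and message shapes are illustrative assumptions.

```python
class MemoryNode:
    def __init__(self, name):
        self.name = name
        self.next_hop = {}   # circuit id -> next MemoryNode (None at path end)
        self.received = []

    def forward(self, circuit_id, message):
        self.received.append(message)
        nxt = self.next_hop.get(circuit_id)
        if nxt is not None:
            nxt.forward(circuit_id, message)   # follow the dedicated path

def establish_circuit(circuit_id, path):
    """Dedicate a path: program each node's next hop for this circuit."""
    for node, nxt in zip(path, path[1:] + [None]):
        node.next_hop[circuit_id] = nxt

a, b, c = MemoryNode("A"), MemoryNode("B"), MemoryNode("C")
establish_circuit(7, [a, b, c])
a.forward(7, "read-req")
```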
-
Publication No.: US20180060233A1
Publication Date: 2018-03-01
Application No.: US15246136
Filing Date: 2016-08-24
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Gabriel Parmer , Paolo Faraboschi , Dejan S. Milojicic
IPC: G06F12/0808 , G06F12/0811 , G06F3/06
CPC classification number: G06F12/0808 , G06F12/0811 , G06F12/0831 , G06F2212/283 , G06F2212/621
Abstract: Examples described herein relate to caching in a system with multiple nodes sharing a globally addressable memory. The globally addressable memory includes multiple windows that each include multiple chunks. Each node of a set of the nodes includes a cache that is associated with one of the windows. One of the nodes includes write access to one of the chunks of the window. The other nodes include read access to the chunk. The node with write access further includes a copy of the chunk in its cache and modifies multiple lines of the chunk copy. After a first line of the chunk copy is modified, a notification is sent to the other nodes that the chunk should be marked dirty. After multiple lines are modified, an invalidation message is sent to the set of the nodes for each of the modified lines.
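The two-phase protocol in the abstract, one dirty notification on the first modified line, then per-line invalidations, can be sketched as follows. Class names, message shapes, and the flush trigger are assumptions for illustration.

```python
class ReaderNode:
    def __init__(self):
        self.dirty_chunks = set()
        self.invalid_lines = set()

    def notify_dirty(self, chunk_id):
        self.dirty_chunks.add(chunk_id)

    def invalidate(self, chunk_id, line):
        self.invalid_lines.add((chunk_id, line))

class WriterNode:
    def __init__(self, readers):
        self.readers = readers
        self.modified = {}   # chunk_id -> set of modified line indices

    def write_line(self, chunk_id, line):
        first_write = chunk_id not in self.modified
        self.modified.setdefault(chunk_id, set()).add(line)
        if first_write:
            for r in self.readers:        # one dirty notification per chunk
                r.notify_dirty(chunk_id)

    def flush(self, chunk_id):
        for line in self.modified.pop(chunk_id, set()):
            for r in self.readers:        # one invalidation per modified line
                r.invalidate(chunk_id, line)

readers = [ReaderNode(), ReaderNode()]
writer = WriterNode(readers)
writer.write_line(5, 0)   # first write -> single dirty notification
writer.write_line(5, 3)   # no further notification for the same chunk
writer.flush(5)           # per-line invalidations for lines 0 and 3
```

Batching the dirty notification to once per chunk keeps the common write path cheap; the per-line invalidations are deferred until the batch of modifications is published.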
-
Publication No.: US20180025043A1
Publication Date: 2018-01-25
Application No.: US15556238
Filing Date: 2015-03-06
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Stanko Novakovic , Kimberly Keeton , Paolo Faraboschi , Robert Schreiber
IPC: G06F17/30
CPC classification number: G06F16/2358 , G06F16/273 , G06F16/9024
Abstract: In some examples, a graph processing server is communicatively linked to a shared memory. The shared memory may also be accessible to a different graph processing server. The graph processing server may compute an updated vertex value for a graph portion handled by the graph processing server and flush the updated vertex value to the shared memory, for retrieval by the different graph processing server. The graph processing server may also notify the different graph processing server indicating that the updated vertex value has been flushed to the shared memory.
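The compute/flush/notify cycle can be sketched as below. The shared dictionary stands in for the shared memory, and the vertex update rule, server names, and notification payload are purely illustrative assumptions.

```python
shared_memory = {}   # stands in for the shared memory both servers can reach

class GraphServer:
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = vertices   # vertex id -> value (this server's portion)
        self.inbox = []

    def compute_and_flush(self, peers):
        for v in self.vertices:
            self.vertices[v] += 1              # placeholder vertex update
            shared_memory[v] = self.vertices[v]   # flush to shared memory
        for p in peers:
            # Notify peers which vertices now have fresh values.
            p.inbox.append((self.name, sorted(self.vertices)))

s1 = GraphServer("s1", {0: 10, 1: 20})
s2 = GraphServer("s2", {2: 30})
s1.compute_and_flush([s2])
```

On receiving the notification, a peer would read the listed vertices from `shared_memory` instead of exchanging the values over a message channel, which is the point of backing the servers with shared memory.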
-
Publication No.: US20180004674A1
Publication Date: 2018-01-04
Application No.: US15199285
Filing Date: 2016-06-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Qiong Cai , Paolo Faraboschi
IPC: G06F12/0875
CPC classification number: G06F12/0875 , G06F9/46 , G06F12/0866 , G06F12/0893 , G06F2212/608
Abstract: Examples disclosed herein relate to programmable memory-side cache management. Some examples disclosed herein may include a programmable memory-side cache and a programmable memory-side cache controller. The programmable memory-side cache may locally store data of a system memory. The programmable memory-side cache controller may include programmable processing cores, each of the programmable processing cores configurable by cache configuration codes to manage the programmable memory-side cache for different applications.
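One way to picture a cache whose management is selected per application by a configuration code is a cache object parameterized by policy. The two policies shown (LRU and FIFO) and the policy-selection mechanism are assumptions, not taken from the patent.

```python
from collections import OrderedDict

class ProgrammableCache:
    """Cache whose eviction behavior is chosen by a configuration code."""

    def __init__(self, capacity, policy="lru"):
        self.capacity = capacity
        self.policy = policy          # stands in for a cache configuration code
        self.store = OrderedDict()

    def access(self, key, value):
        if key in self.store:
            if self.policy == "lru":
                self.store.move_to_end(key)   # refresh recency on a hit
            return self.store[key]
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)    # evict the oldest entry
        self.store[key] = value
        return value

cache = ProgrammableCache(capacity=2, policy="lru")
cache.access("a", 1)
cache.access("b", 2)
cache.access("a", 1)   # refreshes "a" under LRU
cache.access("c", 3)   # evicts "b", the least recently used
```

Under `policy="fifo"` the same access sequence would instead evict `"a"`, since hits do not refresh insertion order; that difference is what a per-application configuration code could tune.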
-
Publication No.: US20180004456A1
Publication Date: 2018-01-04
Application No.: US15545915
Filing Date: 2015-01-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Vanish Talwar , Paolo Faraboschi , Daniel Gmach , Yuan Chen , Al Davis , Adit Madan
IPC: G06F3/06
Abstract: In one example, a memory network may control access to a shared memory that is shared by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
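A minimal sketch of range-based prioritization: requests whose address falls in an application's registered range inherit that range's priority; everything else gets a default. The range registration API, priority values, and heap-based queue are illustrative assumptions.

```python
import heapq

class MemoryNetwork:
    def __init__(self):
        self.ranges = []   # (start, end, priority) registered per application
        self.queue = []    # min-heap: lower number = served first
        self.seq = 0       # tiebreaker preserving arrival order

    def register(self, start, end, priority):
        self.ranges.append((start, end, priority))

    def submit(self, addr):
        # Priority corresponds to the address range the request touches.
        prio = min((p for s, e, p in self.ranges if s <= addr < e),
                   default=99)   # assumed default: lowest priority
        heapq.heappush(self.queue, (prio, self.seq, addr))
        self.seq += 1

    def next_request(self):
        return heapq.heappop(self.queue)[2]

net = MemoryNetwork()
net.register(0x1000, 0x2000, 0)   # hot range for one application
net.submit(0x5000)                # unregistered address -> low priority
net.submit(0x1800)                # in-range address -> high priority
```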
-
Publication No.: US20230034011A1
Publication Date: 2023-02-02
Application No.: US17498124
Filing Date: 2021-10-11
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Soumyendu Sarkar , Rohit Rawat , Vineet Gundecha , Sahand Ghorbanpour , Abhishek Jindal , Paolo Faraboschi
IPC: G06F16/33 , G06F40/216 , G06N20/00 , G06F16/338
Abstract: Examples described herein include a natural language processing (NLP) workflow for determining answers to queries. A query is received from a first client of a plurality of clients. A set of machine learning (ML) models are selected based on available service provider resources for processing the query. Each of the set of ML models corresponds to a respective stage of a NLP workflow. The query is input to a first model of the set of ML models. According to the NLP workflow, results from the first model are input to a second model of the set of ML models to determine a final result. A query answer based on the final result is transmitted to the first client.
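The staged workflow can be sketched as selecting a model set from available resources and piping the query through the stages. The stage functions, the resource threshold, and the retriever/reader split are hypothetical stand-ins, not the models the patent describes.

```python
def retriever(query):
    # First stage: hypothetical passage retrieval for the query.
    return {"query": query, "passages": ["passage-1", "passage-2"]}

def reader(result):
    # Second stage (large model): answer from the top passage.
    return f"answer to '{result['query']}' from {result['passages'][0]}"

def small_reader(result):
    # Cheaper second-stage model for constrained resources.
    return f"short answer to '{result['query']}'"

def select_models(free_gpus):
    # Assumed policy: more free resources -> larger second-stage model.
    return [retriever, reader] if free_gpus >= 2 else [retriever, small_reader]

def answer(query, free_gpus):
    result = query
    for stage in select_models(free_gpus):
        result = stage(result)   # each stage's output feeds the next stage
    return result
```

The final `result` is the "final result" the abstract mentions; a service would wrap it in a response to the requesting client.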
-
Publication No.: US11481328B2
Publication Date: 2022-10-25
Application No.: US16925870
Filing Date: 2020-07-10
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Alexandros Daglis , Paolo Faraboschi , Qiong Cai , Gary Gostin
IPC: G06F12/08 , G06F12/0817 , G06F12/14 , G06F12/0831 , G06F12/0811
Abstract: A technique includes, in response to a cache miss occurring with a given processing node of a plurality of processing nodes, using a directory-based coherence system for the plurality of processing nodes to regulate snooping of an address that is associated with the cache miss. Using the directory-based coherence system to regulate whether the address is included in a snooping domain is based at least in part on a number of cache misses associated with the address.
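The miss-count-gated snooping decision can be modeled with a directory that tracks misses per address and admits an address into the snooping domain only past a threshold. The threshold value and data structures are assumptions for illustration.

```python
SNOOP_THRESHOLD = 3   # assumed miss count before snooping is enabled

class Directory:
    def __init__(self):
        self.miss_counts = {}
        self.snoop_domain = set()

    def record_miss(self, addr):
        self.miss_counts[addr] = self.miss_counts.get(addr, 0) + 1
        if self.miss_counts[addr] >= SNOOP_THRESHOLD:
            # Address is contended enough to be worth broadcast snoops.
            self.snoop_domain.add(addr)

    def should_snoop(self, addr):
        return addr in self.snoop_domain

d = Directory()
for _ in range(SNOOP_THRESHOLD):
    d.record_miss(0x40)   # frequently missed address joins the snoop domain
d.record_miss(0x80)       # rarely missed address stays directory-managed
```

Rarely missed addresses stay under cheap directory tracking, while frequently missed ones pay the snoop broadcast that makes their coherence faster.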
-
Publication No.: US20220327376A1
Publication Date: 2022-10-13
Application No.: US17226917
Filing Date: 2021-04-09
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Cong Xu , Suparna Bhattacharya , Paolo Faraboschi
Abstract: Systems and methods are configured to split an epoch associated with a training dataset into a plurality of mini-epochs. A machine learning model can be trained with a mini-epoch of the plurality of mini-epochs, and the mini-epoch can be iterated for a number of times during the training. One or more metrics reflective of at least one of a training loss, training accuracy, or validation accuracy of the machine learning model associated with the mini-epoch can be received. Based on the one or more metrics, it can be determined whether to terminate iterations of the mini-epoch early, before the number of iterations of the mini-epoch reaches the number of times. The number of iterations can be a non-zero number.
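The split-and-iterate loop with an early cutoff can be sketched as below. The loss model (halving per step), the improvement threshold, and the epoch size are assumptions standing in for real training metrics.

```python
def split_epoch(dataset, num_mini_epochs):
    """Split an epoch's dataset into equal-sized mini-epochs."""
    n = len(dataset) // num_mini_epochs
    return [dataset[i * n:(i + 1) * n] for i in range(num_mini_epochs)]

def train_mini_epoch(mini_epoch, max_iters, min_improvement=0.05):
    """Iterate a mini-epoch up to max_iters times, stopping early once
    the loss metric stops improving by at least min_improvement."""
    loss, iters = 1.0, 0
    for _ in range(max_iters):
        new_loss = loss * 0.5   # stand-in for one training pass's loss
        iters += 1
        if loss - new_loss < min_improvement:
            loss = new_loss
            break               # terminate iterations of the mini-epoch early
        loss = new_loss
    return loss, iters

mini_epochs = split_epoch(list(range(100)), 4)
final_loss, iters_used = train_mini_epoch(mini_epochs[0], max_iters=10)
```

Here the loop stops after five of the ten allowed iterations because the per-iteration improvement drops below the threshold, which is the early-termination decision the abstract describes, driven here by a single loss metric.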