Abstract:
Semiconductor memory devices include unit cells two-dimensionally arranged along rows and columns in one cell array block. The unit cells are classified into a plurality of cell subgroups, and each of the cell subgroups includes the unit cells constituting a plurality of the rows. Each of the unit cells includes a selection element and a data storage part. A word line is connected to gate electrodes of the selection elements of the unit cells constituting each column. Bit lines are connected to the data storage parts of the unit cells constituting the rows. A source line, parallel to the bit lines, is electrically connected to source terminals of the selection elements of the unit cells in each cell subgroup. A distance between the source line and the bit line adjacent to it is equal to a distance between bit lines adjacent to each other.
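A minimal sketch of how this organization could be modeled, assuming a hypothetical subgroup height (kRowsPerSubgroup) and index layout that the abstract does not specify:

    #include <cstdio>
    #include <cstddef>

    // Illustrative model only: one word line per column, one bit line per
    // row, and one shared source line per cell subgroup spanning several
    // rows. kRowsPerSubgroup is an assumed parameter, not from the abstract.
    constexpr std::size_t kRowsPerSubgroup = 4;

    struct CellAddress {
        std::size_t row;    // selects a bit line
        std::size_t column; // selects a word line
    };

    // All cells of a subgroup share one source line routed parallel to the
    // bit lines, so its index depends only on which group of rows holds the cell.
    constexpr std::size_t sourceLineFor(const CellAddress& a) {
        return a.row / kRowsPerSubgroup;
    }

    int main() {
        CellAddress a{5, 2};
        std::printf("cell (5,2) uses source line %zu\n", sourceLineFor(a)); // 1
    }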
Abstract:
A texture unit may be utilized to perform general purpose mathematical computations such as dot products. This enables some general purpose computations and operations to be offloaded from a central processing unit to the texture unit. The texture unit may use its linear interpolators to perform the dot product calculations.
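The abstract does not spell out the datapath, but the core identity is easy to sketch: a linear interpolator computes lerp(a, b, t) = a + t*(b - a), and with a = 0 it degenerates to a multiply, so a small dot product can be assembled from interpolator passes plus additions. A hedged illustration, not the claimed hardware:

    #include <cstdio>

    // lerp(a, b, t) = a + t * (b - a): the primitive a texture unit's
    // linear interpolators evaluate during filtering.
    static float lerp(float a, float b, float t) { return a + t * (b - a); }

    // Illustrative only: with a = 0, a lerp degenerates to a multiply
    // (lerp(0, b, t) = t * b), so a 4-element dot product can be formed
    // from four interpolator passes plus an adder tree.
    static float dot4(const float x[4], const float y[4]) {
        float p0 = lerp(0.0f, y[0], x[0]);
        float p1 = lerp(0.0f, y[1], x[1]);
        float p2 = lerp(0.0f, y[2], x[2]);
        float p3 = lerp(0.0f, y[3], x[3]);
        return (p0 + p1) + (p2 + p3);
    }

    int main() {
        float x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
        std::printf("%f\n", dot4(x, y)); // 20.000000
    }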
Abstract:
Methods, apparatuses and storage devices associated with cache and/or socket sensitive breadth-first iterative traversal of a graph by parallel threads are described. A vertices visited array (VIS) may be employed to track the graph vertices visited. The VIS array may be partitioned into VIS sub-arrays, taking into consideration the cache size of the last level cache (LLC), to reduce the likelihood of evictions. Potential boundary vertices arrays (PBV) may be employed to store potential boundary vertices for a next iteration, for vertices being visited in a current iteration. The number of PBVs generated for each thread may take into consideration the number of sockets over which the processor cores employed are distributed. The threads may be load balanced; further, data locality awareness may be applied to reduce inter-socket communication, and/or lock-and-atomic-free update operations may be employed.
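A serial sketch of the described data structures, under assumed shapes (two sockets, a byte-per-vertex VIS rather than a bitmap, one PBV per socket); it is not the patented implementation, but it shows how binning neighbors by VIS sub-array keeps each sub-array's updates local to one LLC:

    #include <cstdio>
    #include <vector>

    int main() {
        const int nVerts = 16, nSubArrays = 2;    // assume 2 sockets/LLCs
        const int subSize = nVerts / nSubArrays;
        // VIS split into per-socket sub-arrays sized to stay LLC-resident.
        std::vector<std::vector<char>> vis(nSubArrays, std::vector<char>(subSize, 0));
        std::vector<std::vector<int>> pbv(nSubArrays); // one PBV per socket here

        std::vector<std::vector<int>> adj(nVerts);     // tiny example graph
        adj[0] = {1, 9}; adj[9] = {10};

        std::vector<int> frontier = {0};
        vis[0][0] = 1;
        while (!frontier.empty()) {
            for (auto& p : pbv) p.clear();
            for (int u : frontier)            // bin neighbors by owning sub-array
                for (int v : adj[u]) pbv[v / subSize].push_back(v);
            frontier.clear();
            for (int s = 0; s < nSubArrays; ++s)  // per-socket pass: only one
                for (int v : pbv[s])              //   VIS sub-array is touched
                    if (!vis[s][v % subSize]) { vis[s][v % subSize] = 1; frontier.push_back(v); }
        }
        std::printf("vertex 10 visited: %d\n", (int)vis[10 / subSize][10 % subSize]); // 1
    }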
Abstract:
Embodiments of shared cache memories for multi-core processors are presented. In one embodiment, a cache memory comprises a group of sampling cache sets and a controller to determine a number of misses that occur in the group of sampling cache sets. The controller is operable to determine a victim cache line for a cache set based at least in part on the number of misses.
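One plausible reading, sketched with assumed policy details (the threshold, counter width, and the LRU/MRU choice are all illustrative, not the claimed design): misses observed in the sampling sets feed a saturating counter, and the counter then steers victim selection for the remaining sets.

    #include <cstdio>

    struct Controller {
        int missCounter = 0;                 // saturating miss counter fed
                                             //   by the sampling cache sets
        void onSamplingSetMiss() { if (missCounter < 1023) ++missCounter; }
        void onSamplingSetHit()  { if (missCounter > 0)    --missCounter; }

        // Pick a victim way: under heavy missing, evict the LRU way;
        // otherwise evict the MRU way (assumed policy for illustration).
        int victimWay(int lruWay, int mruWay) const {
            return missCounter > 512 ? lruWay : mruWay;
        }
    };

    int main() {
        Controller c;
        for (int i = 0; i < 600; ++i) c.onSamplingSetMiss();
        std::printf("victim way: %d\n", c.victimWay(0, 7)); // 0 (LRU)
    }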
Abstract:
Methods, apparatuses and systems to decrease the energy consumption of a memory chip while increasing its effective bandwidth during the execution of any workload are disclosed. Methods, apparatuses and systems may allow a memory chip to utilize a plurality of virtual row buffers to respond to requests for data included in a memory array block. Methods, apparatuses and systems may further eliminate or reduce the cost associated with transferring unnecessary data from a memory array block to row buffers by altering the data transfer size between the memory array block and a row buffer.
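A toy model of the idea, with assumed parameters (segment size and bookkeeping structure are invented for illustration): instead of moving a whole row into a single row buffer, a pool of virtual row buffers holds only the row segments actually requested.

    #include <cstdio>
    #include <map>
    #include <utility>

    constexpr int kSegmentBytes = 256;   // assumed reduced transfer size

    struct VirtualRowBuffers {
        // key: (row, segment index within the row) -> segment is resident
        std::map<std::pair<int, int>, bool> resident;
        int transfers = 0;

        void access(int row, int byteOffset) {
            auto key = std::make_pair(row, byteOffset / kSegmentBytes);
            if (!resident.count(key)) {   // fetch only the needed segment,
                resident[key] = true;     //   not the entire row
                ++transfers;
            }
        }
    };

    int main() {
        VirtualRowBuffers vrb;
        vrb.access(3, 0);     // fetches one 256 B segment, not the whole row
        vrb.access(3, 100);   // same segment: no extra transfer
        vrb.access(3, 4096);  // different segment of the same row
        std::printf("segment transfers: %d\n", vrb.transfers); // 2
    }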
Abstract:
An apparatus or system may comprise cache control circuitry coupled to a processor, and a plurality of independently accessible memory banks (228) coupled to the cache control circuitry. The banks may have non-uniform latencies and may be organized into two or more spread bank sets (246). A method may include accessing data in the banks, wherein selected banks are closer to the cache control circuitry and/or the processor than others, and migrating a first datum (445) to a closer bank from a further bank upon determining that the first datum is accessed more frequently than a second datum, which may be migrated to the further bank (451).
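A deliberately small sketch of the migration policy with assumed access counts and latencies; the reference numerals above denote the claimed structures, while everything below is illustrative:

    #include <cstdio>
    #include <utility>

    // Illustrative only: one spread bank set holding two data, with the
    // banks at different distances (latencies) from the cache controller.
    struct Bank { int latencyCycles; int accesses; const char* datum; };

    int main() {
        Bank nearBank = {5,  2, "second datum"};  // closer bank
        Bank farBank  = {11, 9, "first datum"};   // further bank

        // Migrate: the more frequently accessed datum moves to the closer
        // bank; the displaced datum moves outward to the further bank.
        if (farBank.accesses > nearBank.accesses) {
            std::swap(nearBank.datum, farBank.datum);
            std::swap(nearBank.accesses, farBank.accesses);
        }
        std::printf("near bank (latency %d) now holds the %s\n",
                    nearBank.latencyCycles, nearBank.datum);
    }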
Abstract:
In one embodiment, a processor may include a vector unit to perform operations on multiple data elements responsive to a single instruction, and a control unit coupled to the vector unit to provide the data elements to the vector unit. The control unit is to enable an atomic vector operation to be performed on at least some of the data elements responsive to a first vector instruction to be executed under a first mask and a second vector instruction to be executed under a second mask. Other embodiments are described and claimed.
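A scalar emulation of the pattern, with assumed semantics for the two instructions (masks, lane count, and the update itself are illustrative): the first masked instruction reads elements into a vector register, and the second masked instruction commits updated elements, so only lanes set in both masks complete the read-modify-write.

    #include <cstdio>

    int main() {
        int  mem[4]   = {10, 20, 30, 40};
        bool mask1[4] = {true, false, true, true};  // lanes to load
        bool mask2[4] = {true, false, true, false}; // lanes allowed to commit
        int  regs[4]  = {0, 0, 0, 0};

        for (int i = 0; i < 4; ++i)        // first instruction: masked gather
            if (mask1[i]) regs[i] = mem[i];
        for (int i = 0; i < 4; ++i)        // second instruction: masked scatter
            if (mask2[i]) mem[i] = regs[i] + 1;

        std::printf("%d %d %d %d\n", mem[0], mem[1], mem[2], mem[3]); // 11 20 31 40
    }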
Abstract:
Embodiments of techniques and systems for parallel processing of B+ trees are described. A parallel B+ tree processing module with partitioning and redistribution may include a set of threads executing a batch of B+ tree operations on a B+ tree in parallel. The batch of operations may be partitioned amongst the threads. Next, a search may be performed to determine which leaf nodes in the B+ tree are to be affected by which operations. Then, the threads may redistribute operations between each other such that multiple threads will not operate on the same leaf node. The threads may then perform B+ tree operations on the leaf nodes of the B+ tree in parallel. Subsequent modifications to nodes in the B+ tree may similarly be redistributed and performed in parallel as the threads work up the tree.
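A sketch of just the redistribution step, under the assumption that the parallel search phase has already resolved each operation to a target leaf id (the round-robin leaf assignment is illustrative): grouping operations by leaf before assigning them to threads guarantees no two threads touch the same leaf.

    #include <cstdio>
    #include <map>
    #include <vector>

    int main() {
        struct Op { int leaf; int key; };
        // Batch after the search phase: each op knows its target leaf.
        std::vector<Op> batch = {{0, 5}, {1, 9}, {0, 7}, {2, 1}, {1, 3}};
        const int nThreads = 2;

        // Group by leaf, then hand each leaf (with all its ops) to one thread.
        std::map<int, std::vector<Op>> byLeaf;
        for (const Op& op : batch) byLeaf[op.leaf].push_back(op);

        std::vector<std::vector<Op>> perThread(nThreads);
        int t = 0;
        for (auto& [leaf, ops] : byLeaf) {
            for (const Op& op : ops) perThread[t].push_back(op);
            t = (t + 1) % nThreads;  // leaf-granular round-robin assignment
        }
        for (int i = 0; i < nThreads; ++i)
            std::printf("thread %d handles %zu ops\n", i, perThread[i].size());
        // thread 0: leaves 0 and 2 (3 ops); thread 1: leaf 1 (2 ops)
    }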
Abstract:
Methods and apparatus relating to gather or scatter operations in a multi-level cache are described. In some embodiments, logic may determine whether to perform gather or scatter operations at a first memory or a second memory, based at least in part on the relative performance of performing the gather or scatter operations at the first memory and at the second memory. Other embodiments are also described and claimed.
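One way such a decision could look, with a purely assumed cost model (latencies and hit rate are invented for illustration): compare the expected cost of issuing the gather's element accesses at the first memory against performing the whole gather at the second memory.

    #include <cstdio>

    int main() {
        const double l1HitLatency = 4.0, l2Latency = 14.0; // assumed cycles
        const int elements = 16;
        const double expectedL1HitRate = 0.25; // gathered elements rarely in L1

        // First memory: pay L1 latency per element, plus L2 on each miss.
        double costFirst = elements * (l1HitLatency +
                           (1.0 - expectedL1HitRate) * l2Latency);
        // Second memory: perform the whole gather at L2 directly.
        double costSecond = elements * l2Latency;

        std::printf("perform gather at the %s memory\n",
                    costFirst <= costSecond ? "first" : "second");
    }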