Abstract:
A system and method for supporting targeted stores in a shared-memory multiprocessor. A targeted store enables a first processor to push a cache line to be stored in a cache memory of a second processor. This eliminates the need for multiple cache-coherence operations to transfer the cache line from the first processor to the second processor. More specifically, the disclosed embodiments provide a system that notifies a waiting thread when a targeted store is directed to a monitored memory location. During operation, the system receives a targeted store that is directed to a specific cache in a shared-memory multiprocessor system. In response, the system examines the destination address of the targeted store to determine whether it is directed to a memory location that is being monitored on behalf of a thread associated with the specific cache. If so, the system informs the thread about the targeted store.
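As a rough illustration of the notification path, the sketch below (in C) checks an incoming targeted store's destination address against per-cache lists of monitored ranges and wakes the associated thread on a match. All names here (monitor_entry, cache_monitors, wake_thread, MAX_CACHES) are hypothetical; the mechanism itself would live in hardware or low-level runtime code rather than application C.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define MAX_CACHES 64            /* illustrative system size */

    typedef struct monitor_entry {
        uintptr_t base;              /* start of a monitored memory range */
        size_t    len;               /* length of the monitored range     */
        int       thread_id;         /* thread waiting on this range      */
        struct monitor_entry *next;
    } monitor_entry;

    /* Per-cache lists of memory ranges being monitored for waiting threads. */
    static monitor_entry *cache_monitors[MAX_CACHES];

    extern void wake_thread(int thread_id);   /* hypothetical wake primitive */

    /* Invoked when a targeted store arrives at the cache `cache_id`. */
    void on_targeted_store(int cache_id, uintptr_t dest_addr)
    {
        for (monitor_entry *m = cache_monitors[cache_id]; m != NULL; m = m->next) {
            if (dest_addr >= m->base && dest_addr - m->base < m->len) {
                wake_thread(m->thread_id);    /* inform the waiting thread */
                return;
            }
        }
        /* Not a monitored location: the store completes without notification. */
    }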
Abstract:
Particular techniques for improving the scalability of concurrent programs (e.g., lock-based applications) may be effective in some environments and for some workloads, but not others. The systems described herein may automatically choose, at runtime, which of these techniques to apply when executing lock-based applications, based on observations of the application in the current environment and with the current workload. In one example, two techniques for improving lock scalability (e.g., transactional lock elision using hardware transactional memory, and optimistic software techniques) may be integrated. A lightweight runtime library built for this purpose may adapt its approach to managing concurrency by dynamically selecting one or more of these techniques (at different times) during execution of a given application. In this Adaptive Lock Elision approach, the techniques may be selected (based on pluggable policies) at runtime to achieve good performance on different platforms and for different workloads.
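The sketch below illustrates one way such a pluggable policy might dispatch between hardware lock elision and a plain software lock, reduced to a single statistic (an abort counter). The HTM primitives (htm_begin, htm_abort, htm_commit, htm_active), the lock_is_held helper, and the threshold are assumptions for illustration, not the library's actual interface.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <pthread.h>

    #define ABORT_THRESHOLD 16       /* illustrative policy parameter */

    typedef struct {
        pthread_mutex_t mutex;       /* the underlying software lock     */
        _Atomic int aborts;          /* recent transactional aborts seen */
    } adaptive_lock;

    /* Hypothetical HTM primitives; real code would use the platform's
     * transactional-memory intrinsics. An abort returns control to
     * htm_begin(), which then reports false. */
    extern bool htm_begin(void);
    extern void htm_commit(void);
    extern void htm_abort(void);
    extern bool htm_active(void);
    extern bool lock_is_held(adaptive_lock *l);

    void adaptive_acquire(adaptive_lock *l)
    {
        /* Policy decision: elide only while the observed abort rate for
         * this lock stays low. */
        if (atomic_load(&l->aborts) < ABORT_THRESHOLD) {
            if (htm_begin()) {       /* returns false if the tx aborts */
                if (!lock_is_held(l))
                    return;          /* elided: section runs in a transaction */
                htm_abort();         /* lock busy: abort back to htm_begin() */
            }
            atomic_fetch_add(&l->aborts, 1);   /* record the failed attempt */
        }
        pthread_mutex_lock(&l->mutex);   /* software path chosen by the policy */
    }

    void adaptive_release(adaptive_lock *l)
    {
        if (htm_active())
            htm_commit();            /* commit the elided critical section */
        else
            pthread_mutex_unlock(&l->mutex);
    }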
Abstract:
The present embodiments provide a system for supporting targeted stores in a shared-memory multiprocessor. A targeted store enables a first processor to push a cache line to be stored in a cache memory of a second processor in the shared-memory multiprocessor. This eliminates the need for multiple cache-coherence operations to transfer the cache line from the first processor to the second processor. The system includes an interface, such as an application programming interface (API), a system-call interface, or an instruction-set architecture (ISA), that provides access to a number of mechanisms for supporting targeted stores. These mechanisms include a thread-location mechanism that determines a location near where a thread is executing in the shared-memory multiprocessor, and a targeted-store mechanism that targets a store to a location (e.g., cache memory) in the shared-memory multiprocessor.
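A possible shape for these two mechanisms, expressed as C declarations plus a usage example; the names thread_location and targeted_store and the location_t type are hypothetical stand-ins for whatever the API, system-call interface, or ISA actually exposes.

    #include <stdint.h>

    typedef uint32_t location_t;     /* opaque id for a place (e.g. a cache)
                                        in the shared-memory multiprocessor */

    /* Thread-location mechanism: returns a location near where the given
     * thread is currently executing. */
    extern location_t thread_location(int thread_id);

    /* Targeted-store mechanism: stores `value` to `addr` and pushes the
     * enclosing cache line toward the cache identified by `target`. */
    extern void targeted_store(volatile uint64_t *addr, uint64_t value,
                               location_t target);

    /* Example: a producer pushes a message into the cache near a consumer,
     * so the consumer's subsequent read hits locally instead of triggering
     * coherence traffic back to the producer. */
    void produce(volatile uint64_t *slot, uint64_t msg, int consumer_tid)
    {
        targeted_store(slot, msg, thread_location(consumer_tid));
    }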
Abstract:
A computer including multiple processors and memory implements a managed runtime providing a synchronization application programming interface (API) for threads that perform synchronized accesses to shared objects. A standardized header of objects includes a memory word storing an object identifier. To lock the object for synchronized access, the memory word may be converted to store the tail of a first-in-first-out (FIFO) linked list of synchronization structures for threads waiting to acquire the lock, with the object identifier relocated to the list structure. The list structure may further include a stack of threads waiting on events related to the object, with the synchronization API additionally providing wait, notify, and related synchronization operations. Upon determining that no threads hold or desire to hold the lock for the object and that no threads are waiting on events related to the object, the memory word may be restored to contain the object identifier.
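A minimal sketch of the fast-path header conversion, assuming the low bit of the memory word distinguishes an object identifier from a tagged pointer to the list tail; structure and field names are hypothetical, and the slow path (joining a non-empty list, the waiter stack) is elided.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define LOCKED_TAG 0x1u      /* low bit set: word holds a list-tail pointer */

    typedef struct waiter {          /* assumed at least 2-byte aligned, so the
                                        low tag bit of a pointer to it is free */
        struct waiter *next;         /* FIFO list of threads wanting the lock  */
        uintptr_t saved_object_id;   /* object id displaced from the header    */
        /* a stack of threads blocked in wait()/notify() would also live here */
    } waiter;

    typedef struct {
        _Atomic uintptr_t word;      /* object id, or tagged tail pointer      */
    } object_header;

    /* Fast path: convert the header from "object id" to "locked" by installing
     * this thread's node as the tail of an initially empty waiter list. */
    bool try_lock_fast(object_header *h, waiter *me)
    {
        uintptr_t id = atomic_load(&h->word);
        if (id & LOCKED_TAG)
            return false;            /* already a list tail: take the slow path */
        me->saved_object_id = id;    /* relocate the identifier into the node   */
        me->next = NULL;
        return atomic_compare_exchange_strong(&h->word, &id,
                                              (uintptr_t)me | LOCKED_TAG);
    }

    /* Once no thread holds, wants, or waits on the lock, a final CAS restores
     * the displaced identifier so the word again stores only the object id. */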
Abstract:
Transactional Lock Elision allows hardware transactions to execute unmodified critical sections protected by the same lock concurrently, by subscribing to the lock and verifying that it is available before committing the transaction. A “lazy subscription” optimization, which delays lock subscription, can cause behavior that cannot occur when the critical sections are executed under the lock. Hardware extensions may provide mechanisms to ensure that lazy subscriptions are safe (e.g., that they result in correct behavior). Prior to executing a critical section transactionally, its lock and subscription code may be identified (e.g., by writing their locations to special registers). Prior to committing the transaction, the thread executing the critical section may verify that the correct lock was correctly subscribed to. If not, or if locations identified by the special registers have been modified, the transaction may be aborted. Nested critical sections associated with different lock types may invoke different subscription code.
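The commit-time check might look roughly as follows, with the proposed hardware extensions modeled as hypothetical intrinsics (set_lock_reg, set_subscr_reg, regs_unmodified); none of these are real instructions, and the nesting case is elided.

    #include <stdbool.h>

    typedef struct lock lock_t;

    /* Hypothetical intrinsics modeling the proposed hardware extensions. */
    extern void set_lock_reg(lock_t *l);            /* record the lock's location */
    extern void set_subscr_reg(void (*code)(void)); /* record subscription code   */
    extern bool regs_unmodified(void);  /* false if either recorded location
                                           was written during the transaction */
    extern bool htm_begin(void);        /* false if the transaction aborts    */
    extern void htm_commit(void);
    extern void htm_abort(void);
    extern bool lock_is_held(lock_t *l);
    extern void lock_acquire(lock_t *l);
    extern void lock_release(lock_t *l);
    extern void subscribe(void);        /* this lock type's subscription code */

    void run_elided(lock_t *l, void (*critical_section)(void))
    {
        set_lock_reg(l);                /* identify the lock ...            */
        set_subscr_reg(&subscribe);     /* ... and its subscription code    */
        if (htm_begin()) {
            critical_section();
            /* Lazy subscription: the lock is read only here, just before
             * commit. The check confirms the right lock was subscribed to
             * and that neither recorded location was modified. */
            if (!regs_unmodified() || lock_is_held(l))
                htm_abort();
            htm_commit();
        } else {
            lock_acquire(l);            /* fallback: run under the lock */
            critical_section();
            lock_release(l);
        }
    }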
Abstract:
A computer comprising one or more processors and memory may implement multiple threads performing mutually exclusive lock acquisition operations on disjoint ranges of a shared resource, each using atomic compare and swap (CAS) operations. A linked list of currently locked ranges is maintained and, upon entry to a lock acquisition operation, a thread waits for all locked ranges overlapping the desired range to be released, then inserts a descriptor for the desired range into the linked list using a single CAS operation. To release a locked range, a thread executes a single fetch and add (FAA) operation. The operation may be extended to support simultaneous exclusive and non-exclusive access by allowing overlapping ranges to be locked for non-exclusive access and by performing an additional validation after locking to provide conflict resolution should a conflict be detected.
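A sketch of the acquire/release shape under these assumptions: a sorted linked list with a low "released" mark bit in each descriptor's next pointer, and a hypothetical find_insert_point helper that waits out overlapping holders. Unlinking of released descriptors, the non-exclusive mode, and other details are elided.

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct range_desc {
        uintptr_t start, end;                /* half-open range [start, end)  */
        _Atomic(struct range_desc *) next;   /* list link; low bit = released */
    } range_desc;

    /* Hypothetical traversal helper: walks the sorted list, waiting until no
     * still-held range overlaps [s, e), and reports the insertion point. */
    extern range_desc *find_insert_point(uintptr_t s, uintptr_t e,
                                         range_desc **succ_out);

    void range_lock(range_desc *d)
    {
        for (;;) {
            range_desc *succ;
            range_desc *pred = find_insert_point(d->start, d->end, &succ);
            atomic_store(&d->next, succ);
            /* The single CAS that publishes the locked range. */
            if (atomic_compare_exchange_strong(&pred->next, &succ, d))
                return;
            /* The list changed under us: re-traverse and retry. */
        }
    }

    void range_unlock(range_desc *d)
    {
        /* The single FAA sets the low "released" mark bit in d->next; later
         * traversals unlink marked descriptors lazily. (The cast is the
         * usual marked-pointer idiom, shown here only for brevity.) */
        atomic_fetch_add((_Atomic uintptr_t *)&d->next, 1);
    }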
Abstract:
A computer comprising one or more processors and memory may implement multiple threads that perform a lock operation using a data structure comprising an allocation field and a grant field. Upon entry to a lock operation, a thread allocates a ticket by atomically copying a ticket value contained in the allocation field and incrementing the allocation field. The thread compares the allocated ticket to the grant field. If they are unequal, the thread determines the number of waiting threads. If that number is above a threshold, the thread enters a long-term wait operation comprising determining a location for a long-term wait value and waiting on changes to that value. If the number is below the threshold, or once the long-term wait operation is complete, the thread waits for the grant field to equal its ticket, indicating that the lock is allocated to the thread.
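A sketch of the two-phase wait, assuming a shared long-term wait array indexed by a hash of the ticket; the array, hash, and threshold value are illustrative, and the extra grant re-check that closes the wakeup race is one possible resolution, not necessarily the patented one.

    #include <stdatomic.h>
    #include <stdint.h>

    #define THRESHOLD  1             /* illustrative: waiters farther than
                                        this wait in the long-term array */
    #define WAIT_SLOTS 4096

    typedef struct {
        _Atomic uint64_t next_ticket;   /* the allocation field */
        _Atomic uint64_t grant;         /* the grant field      */
    } ticket_lock;

    /* Long-term wait words; a hash of the ticket picks the location a
     * distant waiter watches (collisions only cost extra rechecks). */
    static _Atomic uint64_t wait_array[WAIT_SLOTS];

    static inline _Atomic uint64_t *slot_for(uint64_t ticket)
    {
        return &wait_array[ticket % WAIT_SLOTS];   /* hypothetical hash */
    }

    void ticket_acquire(ticket_lock *l)
    {
        /* Allocate a ticket: atomically copy and increment the allocation
         * field. ticket - grant is the number of threads ahead of us. */
        uint64_t ticket = atomic_fetch_add(&l->next_ticket, 1);
        if (ticket - atomic_load(&l->grant) > THRESHOLD) {
            /* Long-term wait: watch a location outside the lock word,
             * keeping coherence traffic off the grant field. */
            _Atomic uint64_t *slot = slot_for(ticket);
            while (ticket - atomic_load(&l->grant) > THRESHOLD) {
                uint64_t seen = atomic_load(slot);
                if (ticket - atomic_load(&l->grant) <= THRESHOLD)
                    break;                 /* re-check closes the race */
                while (atomic_load(slot) == seen)
                    ;                      /* wait for a change */
            }
        }
        /* Short-term wait: spin on the grant field itself. */
        while (atomic_load(&l->grant) != ticket)
            ;
    }

    void ticket_release(ticket_lock *l)
    {
        uint64_t next = atomic_fetch_add(&l->grant, 1) + 1;
        /* Nudge the long-term waiter (if any) that is now close enough
         * to move to the short-term phase. */
        atomic_fetch_add(slot_for(next + THRESHOLD), 1);
    }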
Abstract:
Generic Concurrency Restriction (GCR) may divide a set of threads waiting to acquire a lock into two sets: an active set currently able to contend for the lock, and a passive set waiting for an opportunity to join the active set and contend for the lock. The number of threads in the active set may be limited to a predefined maximum or even a single thread. Generic Concurrency Restriction may be implemented as a wrapper around an existing lock implementation. Generic Concurrency Restriction may, in some embodiments, be unfair (e.g., to some threads) over the short term, but may improve the overall throughput of the underlying multithreaded application via passivation of a portion of the waiting threads.
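A deliberately simplified sketch of the wrapper idea, with the active set reduced to an admission counter around a pthread mutex; an actual GCR implementation keeps an explicit structure of passive threads rather than having them yield and re-poll, so treat this only as the shape of the approach.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <sched.h>

    #define ACTIVE_MAX 1             /* illustrative bound on the active set */

    typedef struct {
        pthread_mutex_t inner;       /* the wrapped, unmodified lock         */
        _Atomic int active;          /* threads currently allowed to contend */
    } gcr_lock;

    void gcr_acquire(gcr_lock *g)
    {
        for (;;) {
            int n = atomic_load(&g->active);
            if (n < ACTIVE_MAX &&
                atomic_compare_exchange_weak(&g->active, &n, n + 1))
                break;               /* joined the active set */
            /* Passive set: wait without ever touching the inner lock, so
             * passivated threads add no contention to it. */
            sched_yield();
        }
        pthread_mutex_lock(&g->inner);   /* contend only while active */
    }

    void gcr_release(gcr_lock *g)
    {
        pthread_mutex_unlock(&g->inner);
        atomic_fetch_sub(&g->active, 1); /* open a slot for a passive thread */
    }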
Abstract:
A computer comprising multiple processors and non-uniform memory implements multiple threads that perform a lock operation using a shared lock structure that includes a pointer to the tail of a first-in-first-out (FIFO) queue of threads waiting to acquire the lock. To acquire the lock, a thread allocates and appends a data structure to the FIFO queue. The lock is released by selecting and notifying a waiting thread to which control is transferred, with the selected thread executing on the same processor socket as the thread holding the lock. A secondary queue holds threads deferred during this selection process; it is maintained within the data structures of the waiting threads themselves, so that no additional memory is required within the lock structure. If no threads executing on the same processor socket are waiting for the lock, entries in the secondary queue are transferred back to the FIFO queue, preserving FIFO order.
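The unlock-time selection might look roughly like the following, with the secondary queue threaded through the waiters' own nodes; splice_secondary is a hypothetical helper, and the acquire path and races with concurrent arrivals are elided.

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct qnode {
        _Atomic(struct qnode *) next;   /* link in the main FIFO queue      */
        struct qnode *sec_next;         /* secondary-queue link, kept inside
                                           the waiter's own node            */
        _Atomic int granted;            /* set when the lock is handed over */
        int socket;                     /* socket this waiter runs on       */
    } qnode;

    typedef struct {
        _Atomic(qnode *) tail;          /* the only field the lock needs    */
    } socket_lock;

    /* Hypothetical helper: splice the secondary queue back onto the main
     * queue ahead of the remaining waiters, preserving FIFO order, and
     * return the next thread to run. */
    extern qnode *splice_secondary(socket_lock *l, qnode *sec_head);

    /* Unlock path for the holder whose queue node is `me`. */
    void socket_release(socket_lock *l, qnode *me,
                        qnode **sec_head, qnode **sec_tail)
    {
        qnode *succ = atomic_load(&me->next);
        /* Prefer a waiter on our own socket; defer the ones we skip onto
         * the secondary queue, in order, linked through their own nodes. */
        while (succ != NULL && succ->socket != me->socket) {
            if (*sec_tail != NULL) (*sec_tail)->sec_next = succ;
            else                   *sec_head = succ;
            *sec_tail = succ;
            succ = atomic_load(&succ->next);
        }
        if (succ == NULL && *sec_head != NULL) {
            /* No same-socket waiter: return the deferred threads to the
             * main queue before handing off, preserving FIFO order. */
            succ = splice_secondary(l, *sec_head);
            *sec_head = *sec_tail = NULL;
        }
        if (succ != NULL) {
            atomic_store(&succ->granted, 1);   /* transfer control */
        } else {
            qnode *expected = me;              /* queue empty: clear the tail */
            atomic_compare_exchange_strong(&l->tail, &expected, NULL);
        }
    }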