Abstract:
In various embodiments, methods and systems are disclosed for a hybrid rate plus window based congestion protocol that controls the rate of packet transmission into the network and provides low queuing delay, practically zero packet loss, fair allocation of network resources amongst multiple flows, and full link utilization. In one embodiment, a congestion window may be used to control the maximum number of outstanding bits; a transmission rate may be used to control the rate of packets entering the network (packet pacing); a queuing delay based rate update may be used to control queuing delay within tolerated bounds and minimize packet loss; aggressive ramp-up/graceful back-off may be used to fully utilize the link capacity; and additive-increase, multiplicative-decrease (AIMD) rate control may be used to provide fairness amongst multiple flows.
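To make the interplay of these elements concrete, the following is a minimal sketch, not the disclosed implementation, of a delay-driven AIMD sender that paces packets at a rate, caps outstanding bits with a window derived from that rate, and backs off multiplicatively when measured queuing delay exceeds a tolerated bound. All names and constants (TARGET_DELAY, ALPHA, BETA, HybridSender) are illustrative assumptions.

```python
TARGET_DELAY = 0.005   # tolerated queuing delay, seconds (assumed)
ALPHA = 0.5e6          # additive increase, bits/s per update (assumed)
BETA = 0.85            # multiplicative back-off factor (assumed)

class HybridSender:
    def __init__(self, base_rtt, initial_rate):
        self.base_rtt = base_rtt                  # propagation-delay estimate
        self.rate = initial_rate                  # pacing rate, bits/s
        self.cwnd_bits = initial_rate * base_rtt  # cap on outstanding bits

    def on_ack(self, measured_rtt):
        queuing_delay = measured_rtt - self.base_rtt
        if queuing_delay < TARGET_DELAY:
            self.rate += ALPHA        # queue within bounds: additive increase
        else:
            self.rate *= BETA         # queue building: multiplicative decrease
        # Keep the window consistent with the paced rate: roughly one
        # base-RTT's worth of bits may be outstanding at a time.
        self.cwnd_bits = self.rate * self.base_rtt

    def pacing_interval(self, packet_bits):
        # Gap between packet transmissions that realizes the target rate.
        return packet_bits / self.rate
```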
Abstract:
Update requests that specify updates to a logical page associated with a key-value store are obtained. Updates to the logical page are posted using the obtained update requests, without accessing the logical page via a read operation.
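One way to realize such read-free posting, sketched here under the assumption of a delta-chain page representation (the class names PageTable and Delta are hypothetical), is to prepend each update as a delta record on the logical page, so that writes never read the page and only lookups walk the chain:

```python
class Delta:
    def __init__(self, key, value, nxt):
        self.key, self.value, self.next = key, value, nxt

class PageTable:
    def __init__(self):
        self.pages = {}   # logical page id -> head of delta chain

    def post_update(self, page_id, key, value):
        # Prepend a delta describing the update; the base page is never
        # read, so the update completes without any read operation.
        self.pages[page_id] = Delta(key, value, self.pages.get(page_id))

    def lookup(self, page_id, key):
        # Reads walk the delta chain; only here is page state consulted.
        d = self.pages.get(page_id)
        while d is not None:
            if d.key == key:
                return d.value
            d = d.next
        return None
```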
Abstract:
Memory is shared among physically distinct, networked computing devices. Each computing device comprises a Remote Memory Interface (RMI) that accepts commands from locally executing processes and translates such commands into forms transmittable to a remote computing device. The RMI also accepts remote communications directed to it and translates them into commands directed to local memory. The amount of storage capacity shared is determined either by a centralized controller, whether a single controller or a hierarchical collection of controllers, or by peer-to-peer negotiation. Requests that are directed to remote high-speed non-volatile storage media are detected or flagged, and the process generating the request is suspended such that it can be efficiently revived. The storage capacity provided by remote memory is mapped into the process space of processes executing locally.
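As a rough illustration only, the sketch below shows the two translation directions an RMI performs: local commands become transmittable messages, and incoming messages become operations on local memory. The opcodes, wire format, and transport object are invented for the example and are not the disclosed protocol.

```python
import struct

READ, WRITE = 0, 1   # illustrative opcodes (assumed)

class RemoteMemoryInterface:
    def __init__(self, local_memory, transport):
        self.mem = local_memory      # bytearray backing locally shared memory
        self.transport = transport   # object with send(dst, bytes) (assumed)

    def read_remote(self, device, addr, length):
        # Translate a local read command into a transmittable request.
        self.transport.send(device, struct.pack("!BQI", READ, addr, length))

    def write_remote(self, device, addr, data):
        # Translate a local write command into a transmittable request.
        self.transport.send(
            device, struct.pack("!BQI", WRITE, addr, len(data)) + data)

    def handle_incoming(self, msg):
        # Translate a remote communication into a local memory command.
        op, addr, length = struct.unpack_from("!BQI", msg)
        body = msg[struct.calcsize("!BQI"):]
        if op == READ:
            return bytes(self.mem[addr:addr + length])
        self.mem[addr:addr + length] = body
        return b""
```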
Abstract:
A technique for resource allocation in a wireless network (for example, an access point type wireless network), which supports concurrent communication on a band of channels, is provided. The technique includes accepting connectivity information for the network that supports concurrent communication on the band of channels. A conflict graph is generated from the connectivity information. The generated conflict graph models concurrent communication on the band of channels. A linear programming approach, which incorporates information from the conflict graph and rate requirements for nodes of the network, can be utilized to maximize throughput of the network.
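A simplified formulation, assumed for illustration and using SciPy's linprog, treats each link as a conflict-graph vertex and adds one constraint per conflict edge so that interfering links share normalized airtime; throughput is maximized subject to per-link rate requirements. The real disclosure's program is more detailed.

```python
from scipy.optimize import linprog

links = ["AB", "BC", "CD"]          # vertices of the conflict graph (assumed)
conflicts = [(0, 1), (1, 2)]        # edges: these link pairs interfere
min_rate = [0.1, 0.1, 0.1]          # per-link rate requirements (assumed)

c = [-1.0] * len(links)             # linprog minimizes, so negate to maximize

# One constraint per conflict edge: x_i + x_j <= 1 (normalized airtime).
A_ub, b_ub = [], []
for i, j in conflicts:
    row = [0.0] * len(links)
    row[i] = row[j] = 1.0
    A_ub.append(row)
    b_ub.append(1.0)

bounds = [(r, 1.0) for r in min_rate]   # rate requirement <= x_i <= 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(dict(zip(links, res.x)))      # non-conflicting links AB, CD get most airtime
```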
Abstract:
The subject disclosure is directed towards using primary data deduplication concepts for more efficient access of data via content addressable caches. Chunks of data, such as deduplicated data chunks, are maintained in a fast-access client-side cache, e.g., one populated with chunks based upon access patterns. The chunked content is content addressable via a hash or other unique identifier of that content in the system. When a chunk is needed, the client-side cache (or caches) is checked for the chunk before going to a file server for the chunk. The file server may likewise maintain content addressable (chunk) caches. Also described are cache maintenance, management and organization, including pre-populating caches with chunks, as well as using RAM and/or solid-state storage device caches.
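The lookup path can be sketched as follows, with hypothetical names (ChunkCache, fetch_from_server) and SHA-256 standing in for whatever hash or unique identifier the system uses: the chunk's content hash is tried against the local cache first, and only a miss goes to the file server.

```python
import hashlib

def chunk_id(data: bytes) -> str:
    # Content address: a hash uniquely identifying the chunk's content.
    return hashlib.sha256(data).hexdigest()

class ChunkCache:
    def __init__(self, fetch_from_server):
        self.cache = {}                     # hash -> chunk bytes (RAM/SSD)
        self.fetch_from_server = fetch_from_server

    def get(self, chunk_hash: str) -> bytes:
        # Check the local content-addressable cache first.
        data = self.cache.get(chunk_hash)
        if data is None:
            # Miss: go to the file server (which may keep its own
            # content-addressable chunk cache).
            data = self.fetch_from_server(chunk_hash)
            self.cache[chunk_hash] = data   # populate for future accesses
        return data
```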
Abstract:
The subject disclosure is directed towards predicting compressibility of a data block, and using the predicted compressibility in determining whether a data block, if compressed, will be compressed sufficiently to justify the compression. In one aspect, data of the data block is processed to obtain an entropy estimate of the data block, e.g., based upon distinct value estimation. The compressibility prediction may be used in conjunction with a chunking mechanism of a data deduplication system.
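A sketch of the idea follows; the byte-level granularity and the threshold are assumptions, and the empirical Shannon entropy here is a simple stand-in for the distinct-value estimation the disclosure mentions. A block whose entropy estimate is close to 8 bits per byte is predicted to be nearly incompressible and is left uncompressed.

```python
import math
from collections import Counter

ENTROPY_THRESHOLD = 7.0   # bits per byte; assumed cutoff

def entropy_estimate(block: bytes) -> float:
    # Empirical Shannon entropy over byte values.
    if not block:
        return 0.0
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def worth_compressing(block: bytes) -> bool:
    # Below the threshold, compression is predicted to pay off; near
    # 8 bits/byte, the block is predicted to be incompressible.
    return entropy_estimate(block) < ENTROPY_THRESHOLD
```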
Abstract:
The subject disclosure is directed towards encryption and deduplication integration between computing devices and a network resource. Files are partitioned into data blocks and deduplicated via removal of duplicate data blocks. Using multiple cryptographic keys, each data block is encrypted and stored at the network resource but can only be decrypted by an authorized user, such as a domain entity having an appropriate deduplication domain-based cryptographic key. Another cryptographic key referred to as a content-derived cryptographic key ensures that duplicate data blocks encrypt to substantially equivalent encrypted data.
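The content-derived key idea is commonly realized as convergent encryption, sketched below assuming AES-GCM from the "cryptography" package: key and nonce are derived deterministically from the block's content, so identical plaintext blocks produce identical ciphertext and remain deduplicable. The domain-key wrapping step is only indicated in a comment.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_block(block: bytes) -> tuple[bytes, bytes]:
    # Content-derived key: identical blocks yield the identical key.
    key = hashlib.sha256(block).digest()
    # A deterministic nonce is acceptable here because each key encrypts
    # exactly one distinct message (the block the key was derived from).
    nonce = hashlib.sha256(b"nonce" + block).digest()[:12]
    ciphertext = AESGCM(key).encrypt(nonce, block, None)
    # In the scheme described above, `key` would itself be encrypted
    # under a deduplication domain-based key, so that only authorized
    # domain entities can recover it and decrypt the block.
    return ciphertext, key
```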
Abstract:
A transaction engine includes a multi-version concurrency control (MVCC) module that accesses a latch-free hash table whose hash table entries include respective buckets of bucket items. The bucket items represent respective records, each bucket item including a value indicating the most recent read time of the item and a version list of descriptions that describe respective versions of the record. The MVCC module performs timestamp order concurrency control using the latch-free hash table. Recovery log buffers may be used as cache storage for the transaction engine.
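An illustrative sketch of the bucket-item layout and the timestamp-order check follows (names assumed): each item records its most recent read time and a newest-first version list, and a write is rejected when a later transaction has already read the item. Latch-freedom (compare-and-swap on the table entries) is elided for brevity.

```python
class Version:
    def __init__(self, txn_ts, payload):
        self.txn_ts = txn_ts      # timestamp of the creating transaction
        self.payload = payload

class BucketItem:
    def __init__(self, key):
        self.key = key
        self.last_read_ts = 0     # most recent read time of the item
        self.versions = []        # newest-first version list

    def read(self, txn_ts):
        self.last_read_ts = max(self.last_read_ts, txn_ts)
        # Return the newest version visible at txn_ts.
        for v in self.versions:
            if v.txn_ts <= txn_ts:
                return v.payload
        return None

    def write(self, txn_ts, payload):
        # Timestamp-order concurrency control: abort if a transaction
        # with a later timestamp has already observed this item.
        if txn_ts < self.last_read_ts:
            raise RuntimeError("write-read conflict: abort transaction")
        self.versions.insert(0, Version(txn_ts, payload))
```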
Abstract:
A data manager may include a data opaque interface configured to provide, to an arbitrarily selected page-oriented access method, interface access to page data storage that includes latch-free access to the page data storage. In another aspect, a swap operation of a portion of a first page in cache layer storage to a location in secondary storage may be initiated by prepending a partial swap delta record to the page state associated with the first page, the partial swap delta record including a main memory address indicating the storage location of a flush delta record that indicates the location in secondary storage of the missing part of the first page. In another aspect, a page manager may initiate a flush operation of a first page in cache layer storage to a location in secondary storage, based on atomic operations with flush delta records.
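Both the partial-swap and flush operations rest on the same latch-free mechanism: new state is installed by atomically swinging the page's state pointer to a delta record that links back to the previous state. The sketch below (names assumed) simulates the compare-and-swap that a real engine would perform on a mapping-table slot in hardware.

```python
class DeltaRecord:
    def __init__(self, kind, payload, next_state):
        self.kind = kind            # e.g. "flush" or "partial-swap"
        self.payload = payload      # e.g. a secondary-storage location
        self.next = next_state      # link to the previous page state

class PageSlot:
    def __init__(self, base_page):
        self.state = base_page

    def compare_and_swap(self, expected, new):
        # Stand-in for an atomic CAS on the mapping-table entry.
        if self.state is expected:
            self.state = new
            return True
        return False

def prepend_delta(slot, kind, payload):
    while True:                     # retry loop, standard for lock-free updates
        old = slot.state
        delta = DeltaRecord(kind, payload, old)
        if slot.compare_and_swap(old, delta):
            return delta

# Flush, then record in a partial-swap delta where the flush delta lives,
# mirroring the two operations described above (locations are made up).
slot = PageSlot(base_page="page-contents")
flush = prepend_delta(slot, "flush", payload="ssd-offset-0x1000")
prepend_delta(slot, "partial-swap", payload=id(flush))  # main memory address
```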