CONGESTION CONTROL FOR DELAY SENSITIVE APPLICATIONS
    1.
    Patent application (Granted)

    Publication number: US20130279338A1

    Publication date: 2013-10-24

    Application number: US13917441

    Filing date: 2013-06-13

    CPC classification number: H04L47/25 H04L47/22 H04L47/2416 H04L47/29 H04L47/30

    Abstract: In various embodiments, methods and systems are disclosed for a hybrid rate-plus-window congestion protocol that controls the rate of packet transmission into the network and provides low queuing delay, practically zero packet loss, fair allocation of network resources amongst multiple flows, and full link utilization. In one embodiment, a congestion window may be used to control the maximum number of outstanding bits; a transmission rate may be used to control the rate of packets entering the network (packet pacing); a queuing-delay-based rate update may be used to keep queuing delay within tolerated bounds and minimize packet loss; aggressive ramp-up/graceful back-off may be used to fully utilize the link capacity; and additive-increase, multiplicative-decrease (AIMD) rate control may be used to provide fairness amongst multiple flows.
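    The combination of window control, pacing, and delay-driven AIMD described above can be sketched as follows. All names and constants here (the delay target, increase step, and back-off factor) are hypothetical illustrations, not values from the patent:

```python
class HybridRateWindowController:
    """Sketch of a hybrid rate-plus-window congestion controller.

    All parameter values are invented for illustration; the patent
    abstract does not publish concrete constants.
    """

    def __init__(self, cwnd_bits=80_000, rate_bps=1_000_000,
                 delay_target_ms=10.0, alpha_bps=50_000, beta=0.5):
        self.cwnd_bits = cwnd_bits          # window: max outstanding bits
        self.rate_bps = rate_bps            # pacing rate into the network
        self.delay_target_ms = delay_target_ms
        self.alpha_bps = alpha_bps          # additive-increase step
        self.beta = beta                    # multiplicative-decrease factor

    def pacing_interval(self, packet_bits):
        # Packet pacing: time spacing between transmissions at the current rate.
        return packet_bits / self.rate_bps

    def on_delay_sample(self, queuing_delay_ms):
        # Queuing-delay-based AIMD update: ramp up while delay stays within
        # the tolerated bound, back off multiplicatively once it is exceeded.
        if queuing_delay_ms < self.delay_target_ms:
            self.rate_bps += self.alpha_bps                          # additive increase
        else:
            self.rate_bps = max(1, int(self.rate_bps * self.beta))   # multiplicative decrease
```

    A real implementation would also enforce the congestion window (outstanding bits never exceed `cwnd_bits`) alongside the pacing rate; the sketch shows only the rate-update half of the hybrid.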


    Memory Sharing Over A Network
    3.
    Patent application (Pending, published)

    Publication number: US20140280669A1

    Publication date: 2014-09-18

    Application number: US13831753

    Filing date: 2013-03-15

    Abstract: Memory is shared among physically distinct, networked computing devices. Each computing device comprises a Remote Memory Interface (RMI) that accepts commands from locally executing processes and translates them into forms transmittable to a remote computing device. The RMI also accepts remote communications directed to it and translates those into commands directed to local memory. The amount of storage capacity shared is determined by a controller: a single centralized controller, a hierarchical collection of controllers, or a peer-to-peer negotiation. Requests directed to remote high-speed non-volatile storage media are detected or flagged, and the process generating the request is suspended such that it can be efficiently revived. The storage capacity provided by remote memory is mapped into the process space of locally executing processes.
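    A minimal sketch of the command-translation role an RMI plays, with a loopback transport standing in for both the network and the remote-side translation back to local memory commands. The message format and all names are invented for illustration:

```python
class RemoteMemoryInterface:
    """Sketch: translate local read/write commands into network messages.

    `transport` is any callable taking a message dict and returning a reply;
    the dict schema here is a hypothetical stand-in for a wire format.
    """

    def __init__(self, node_id, transport):
        self.node_id = node_id
        self.transport = transport

    def read(self, remote_addr, length):
        return self.transport({"op": "READ", "node": self.node_id,
                               "addr": remote_addr, "len": length})

    def write(self, remote_addr, data):
        return self.transport({"op": "WRITE", "node": self.node_id,
                               "addr": remote_addr, "data": data})


def make_loopback_transport():
    """Loopback 'remote side': translates messages back into dict-backed
    local memory operations, as a real remote RMI would."""
    memory = {}

    def transport(msg):
        if msg["op"] == "WRITE":
            memory[msg["addr"]] = msg["data"]
            return {"ok": True}
        return {"ok": True, "data": memory.get(msg["addr"], b"")[:msg["len"]]}

    return transport
```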


    LOAD AWARE RESOURCE ALLOCATION IN WIRELESS NETWORKS

    Publication number: US20130301606A1

    Publication date: 2013-11-14

    Application number: US13773825

    Filing date: 2013-02-22

    Abstract: A technique for resource allocation in a wireless network (for example, an access point type wireless network), which supports concurrent communication on a band of channels, is provided. The technique includes accepting connectivity information for the network that supports concurrent communication on the band of channels. A conflict graph is generated from the connectivity information. The generated conflict graph models concurrent communication on the band of channels. A linear programming approach, which incorporates information from the conflict graph and rate requirements for nodes of the network, can be utilized to maximize throughput of the network.
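    The conflict-graph step can be illustrated as below. In the full technique, the independent sets (sets of links that may be active concurrently) would feed a linear program that chooses time fractions to maximize throughput subject to rate requirements; this stdlib-only sketch stops at enumerating them. The `interferes` predicate and all names are assumptions for illustration:

```python
from itertools import combinations

def conflict_graph(links, interferes):
    """Build a conflict graph from connectivity information: vertices are
    links, edges join pairs of links that cannot transmit concurrently."""
    edges = set()
    for a, b in combinations(links, 2):
        if interferes(a, b):
            edges.add((a, b))
    return edges

def independent_sets(links, edges):
    """Enumerate all sets of links that may be active simultaneously,
    i.e. independent sets of the conflict graph (exponential; fine for
    a small illustration, not for a real network)."""
    def conflict_free(subset):
        return all((a, b) not in edges and (b, a) not in edges
                   for a, b in combinations(subset, 2))
    sets = []
    for r in range(1, len(links) + 1):
        sets.extend(s for s in combinations(links, r) if conflict_free(s))
    return sets
```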

    CACHING CONTENT ADDRESSABLE DATA CHUNKS FOR STORAGE VIRTUALIZATION
    6.
    Patent application (Granted)

    Publication number: US20140280664A1

    Publication date: 2014-09-18

    Application number: US13830950

    Filing date: 2013-03-14

    Abstract: The subject disclosure is directed towards using primary data deduplication concepts for more efficient access of data via content-addressable caches. Chunks of data, such as deduplicated data chunks, are maintained in a fast-access client-side cache, e.g., populated with chunks based upon access patterns. The chunked content is content-addressable via a hash or other unique identifier of that content in the system. When a chunk is needed, the client-side cache (or caches) is checked for the chunk before going to a file server for it. The file server may likewise maintain content-addressable (chunk) caches. Also described are cache maintenance, management and organization, including pre-populating caches with chunks, as well as using RAM and/or solid-state storage device caches.
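    The check-cache-before-server flow can be sketched as a content-addressable chunk cache keyed by a hash of the chunk, with LRU eviction standing in for the access-pattern-based maintenance the abstract mentions. The class, the eviction policy, and SHA-256 as the identifier are assumptions for illustration:

```python
import hashlib
from collections import OrderedDict

class ChunkCache:
    """Content-addressable chunk cache: chunks keyed by their hash,
    LRU-evicted when capacity is exceeded."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.chunks = OrderedDict()      # chunk hash -> chunk bytes

    @staticmethod
    def chunk_id(data):
        # Content address: a hash uniquely identifying the chunk's bytes.
        return hashlib.sha256(data).hexdigest()

    def put(self, data):
        cid = self.chunk_id(data)
        self.chunks[cid] = data
        self.chunks.move_to_end(cid)     # mark most recently used
        while len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)   # evict least recently used
        return cid

    def get(self, cid, fetch_from_server):
        if cid in self.chunks:           # client-side cache hit
            self.chunks.move_to_end(cid)
            return self.chunks[cid]
        data = fetch_from_server(cid)    # miss: go to the file server
        self.put(data)
        return data
```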


    PREDICTING DATA COMPRESSIBILITY USING DATA ENTROPY ESTIMATION
    7.
    Patent application (Pending, published)

    Publication number: US20140244604A1

    Publication date: 2014-08-28

    Application number: US13781663

    Filing date: 2013-02-28

    CPC classification number: H03M7/30 H03M7/3091

    Abstract: The subject disclosure is directed towards predicting the compressibility of a data block, and using the predicted compressibility to determine whether the block, if compressed, would be sufficiently compressible to justify compression. In one aspect, the data of the block is processed to obtain an entropy estimate, e.g., based upon distinct-value estimation. The compressibility prediction may be used in conjunction with the chunking mechanism of a data deduplication system.
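    The idea can be sketched with a plain byte-histogram Shannon entropy estimate (the patent's distinct-value estimation is a cheaper approximation of the same quantity). The 7.5 bits/byte threshold is a hypothetical cutoff, not a value from the patent:

```python
import math
from collections import Counter

def entropy_bits_per_byte(block, sample_stride=1):
    """Estimate Shannon entropy (bits per byte) of a data block from its
    byte-value distribution; stride > 1 samples the block for speed."""
    sample = block[::sample_stride]
    counts = Counter(sample)
    n = len(sample)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def worth_compressing(block, threshold_bits=7.5):
    """Predict compressibility: entropy near 8 bits/byte suggests the block
    is already random or compressed, so compression is unlikely to pay off."""
    return entropy_bits_per_byte(block) < threshold_bits
```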


    Integrated Data Deduplication and Encryption
    8.
    Patent application (Granted)

    Publication number: US20140189348A1

    Publication date: 2014-07-03

    Application number: US13731746

    Filing date: 2012-12-31

    CPC classification number: G06F21/6218 G06F2221/2107 H04L63/0428

    Abstract: The subject disclosure is directed towards encryption and deduplication integration between computing devices and a network resource. Files are partitioned into data blocks and deduplicated via removal of duplicate data blocks. Using multiple cryptographic keys, each data block is encrypted and stored at the network resource but can only be decrypted by an authorized user, such as a domain entity having an appropriate deduplication-domain-based cryptographic key. Another cryptographic key, referred to as a content-derived cryptographic key, ensures that duplicate data blocks encrypt to substantially equivalent encrypted data.
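    The content-derived-key property (duplicate blocks encrypt identically, so deduplication still works on ciphertext) can be sketched as below. The XOR keystream here is a deliberately insecure toy standing in for a real cipher, and all function names are invented; the patent does not specify these primitives:

```python
import hashlib

def content_derived_key(chunk):
    """Content-derived key: computed from the chunk's own bytes, so two
    identical chunks always get the same key and the same ciphertext."""
    return hashlib.sha256(chunk).digest()

def xor_keystream(data, key):
    """Toy XOR stream cipher for illustration only -- NOT secure.
    Applying it twice with the same key recovers the plaintext."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def encrypt_chunk(chunk):
    key = content_derived_key(chunk)
    return xor_keystream(chunk, key), key
```

    In the integrated scheme, the content-derived key would itself be wrapped with a deduplication-domain key so only authorized domain members can decrypt.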


    HIGH PERFORMANCE TRANSACTIONS IN DATABASE MANAGEMENT SYSTEMS
    9.
    Patent application (Granted)

    Publication number: US20160110403A1

    Publication date: 2016-04-21

    Application number: US14588390

    Filing date: 2014-12-31

    CPC classification number: G06F17/30356 G06F17/3033 G06F17/30353

    Abstract: A transaction engine includes a multi-version concurrency control (MVCC) module that accesses a latch-free hash table comprising hash table entries that include buckets of bucket items. The bucket items represent records; each bucket item includes a value indicating the most recent read time of the item and a version list of descriptions describing the versions of the record. The MVCC module performs timestamp-order concurrency control using the latch-free hash table. Recovery log buffers may be used as cache storage for the transaction engine.
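    The core timestamp-order rule (a write must not be ordered before an already-observed read) can be sketched with a dict-backed version of the structure. A real implementation would be latch-free via atomic compare-and-swap; this single-threaded sketch only illustrates the per-item read time and version list, and all names are assumptions:

```python
class MVCCHashTable:
    """Sketch of timestamp-ordered MVCC: each record keeps the most recent
    time it was read plus a version list of (write-timestamp, value)."""

    def __init__(self):
        self.buckets = {}   # key -> {"last_read": ts, "versions": [(wts, value), ...]}

    def read(self, key, ts):
        item = self.buckets.get(key)
        if item is None:
            return None
        item["last_read"] = max(item["last_read"], ts)
        # Return the newest version written at or before ts.
        for wts, value in reversed(item["versions"]):
            if wts <= ts:
                return value
        return None

    def write(self, key, value, ts):
        item = self.buckets.setdefault(key, {"last_read": 0, "versions": []})
        if ts < item["last_read"]:
            # A later transaction already read this item: installing this
            # version would violate timestamp order, so abort.
            raise RuntimeError("timestamp-order violation: abort transaction")
        item["versions"].append((ts, value))
```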


    LATCH-FREE, LOG-STRUCTURED STORAGE FOR MULTIPLE ACCESS METHODS
    10.
    Patent application (Granted)

    Publication number: US20140379991A1

    Publication date: 2014-12-25

    Application number: US13924567

    Filing date: 2013-06-22

    Abstract: A data manager may include a data-opaque interface configured to provide, to an arbitrarily selected page-oriented access method, interface access to page data storage, including latch-free access. In another aspect, a swap operation of a portion of a first page in cache-layer storage to a location in secondary storage may be initiated by prepending a partial-swap delta record to the page state associated with the first page; the partial-swap delta record includes a main-memory address giving the storage location of a flush delta record, which in turn indicates the secondary-storage location of the missing part of the first page. In another aspect, a page manager may initiate a flush operation of a first page in cache-layer storage to a location in secondary storage, based on atomic operations with flush delta records.
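    The delta-prepending pattern can be sketched as below: updates never modify a page in place; instead a delta record pointing at the prior page state is installed into a mapping table via compare-and-swap, and a losing racer simply retries. The Python classes and the CAS stand-in are assumptions for illustration (a real engine uses hardware atomic instructions):

```python
class Page:
    """Base page state."""
    def __init__(self, base):
        self.base = base

class Delta:
    """A delta record prepended to a page's state chain."""
    def __init__(self, op, payload, next_state):
        self.op = op            # e.g. "flush" or "partial-swap"
        self.payload = payload  # e.g. a secondary-storage location
        self.next = next_state  # the prior page state (delta chain)

class MappingTable:
    """Page id -> current page state (head of the delta chain)."""
    def __init__(self):
        self.table = {}

    def install(self, pid, expected, new_state):
        # Stand-in for an atomic compare-and-swap on the table slot:
        # succeeds only if the slot still holds the state we read.
        if self.table.get(pid) is not expected:
            return False        # lost the race; caller retries
        self.table[pid] = new_state
        return True

    def prepend_delta(self, pid, op, payload):
        # Latch-free update loop: read current state, build a delta
        # pointing at it, and retry the CAS until it lands.
        while True:
            cur = self.table.get(pid)
            if self.install(pid, cur, Delta(op, payload, cur)):
                return
```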

