Technologies for quality of service based throttling in fabric architectures

    Publication Number: US10237169B2

    Publication Date: 2019-03-19

    Application Number: US15088948

    Filing Date: 2016-04-01

    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
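
    The throttling flow described in this abstract can be pictured as a simple monitor/notify/react loop. The C++ sketch below is illustrative only; the class and message names (HostFabricInterface, ThrottleMessage) and the fixed utilization threshold are assumptions, not details taken from the patent.

        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Illustrative QoS sample for one monitored node resource.
        struct QosSample {
            uint32_t resource_id;
            double   utilization;  // 0.0 .. 1.0
        };

        // Hypothetical throttling message exchanged between fabric nodes.
        struct ThrottleMessage {
            uint32_t source_node;
            uint32_t resource_id;
            double   severity;     // how far utilization exceeded the threshold
        };

        class HostFabricInterface {
        public:
            HostFabricInterface(uint32_t node_id, double threshold = 0.9)
                : node_id_(node_id), threshold_(threshold) {}

            // Monitor QoS levels and build throttling messages for any resource
            // whose utilization exceeds the configured threshold.
            std::vector<ThrottleMessage> Monitor(const std::vector<QosSample>& samples) {
                std::vector<ThrottleMessage> out;
                for (const QosSample& s : samples) {
                    if (s.utilization > threshold_) {
                        out.push_back({node_id_, s.resource_id, s.utilization - threshold_});
                    }
                }
                return out;  // the caller transmits these to peer nodes over the fabric
            }

            // Apply a local throttling action when a peer node reports congestion.
            void OnThrottleMessage(const ThrottleMessage& msg) {
                std::cout << "node " << node_id_ << ": throttling traffic to resource "
                          << msg.resource_id << " reported by node " << msg.source_node
                          << " (severity " << msg.severity << ")\n";
            }

        private:
            uint32_t node_id_;
            double   threshold_;
        };

        int main() {
            HostFabricInterface hfi(1), peer(2);
            for (const ThrottleMessage& m : hfi.Monitor({{42, 0.95}, {43, 0.40}})) {
                peer.OnThrottleMessage(m);  // in reality sent across the interconnect fabric
            }
        }

    A real HFI would apply the throttling action by adjusting credits or injection rates for the flagged resource rather than printing a message.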

    TECHNOLOGIES FOR AUTO-MIGRATION IN ACCELERATED ARCHITECTURES

    Publication Number: US20190065281A1

    Publication Date: 2019-02-28

    Application Number: US15859385

    Filing Date: 2017-12-30

Abstract: Technologies for auto-migration in accelerated architectures include multiple compute sleds, accelerator sleds, and storage sleds. Each of the compute sleds includes phase detection logic to receive an indication from an application presently executing on the compute sled that indicates a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled. The phase detection logic is further to monitor a plurality of hardware threads associated with the application, detect whether a phase change has occurred as a function of the monitored hardware threads, and migrate, in response to having detected the phase change, the hardware threads to another compute element having a lower-performance central processing unit (CPU) relative to the CPU on which the application is presently executing. Other embodiments are described herein.
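
    As a rough illustration of the phase-detection step, the sketch below treats a sustained drop in host hardware-thread utilization after an FPGA offload as a phase change that triggers migration. The threshold value and all names are hypothetical, not drawn from the patent.

        #include <iostream>
        #include <numeric>
        #include <vector>

        // Illustrative per-hardware-thread utilization samples (0.0 .. 1.0).
        using ThreadUtil = std::vector<double>;

        // Hypothetical phase detector: once the application reports that its compute
        // kernel was offloaded to an FPGA, watch the host hardware threads; if their
        // average utilization stays below a threshold, treat it as a phase change.
        class PhaseDetector {
        public:
            explicit PhaseDetector(double idle_threshold = 0.2)
                : idle_threshold_(idle_threshold) {}

            void NotifyKernelOffloaded() { kernel_offloaded_ = true; }

            bool PhaseChanged(const ThreadUtil& utils) const {
                if (!kernel_offloaded_ || utils.empty()) return false;
                double avg = std::accumulate(utils.begin(), utils.end(), 0.0) / utils.size();
                return avg < idle_threshold_;
            }

        private:
            bool   kernel_offloaded_ = false;
            double idle_threshold_;
        };

        // Placeholder for the migration step; a real system would move the hardware
        // threads to a compute element with a lower-performance (lower-power) CPU.
        void MigrateToLowerPerformanceCpu() {
            std::cout << "phase change detected: migrating threads to low-power CPU\n";
        }

        int main() {
            PhaseDetector detector;
            detector.NotifyKernelOffloaded();             // application signals the offload
            ThreadUtil utils = {0.05, 0.10, 0.08, 0.12};  // host threads mostly waiting
            if (detector.PhaseChanged(utils)) MigrateToLowerPerformanceCpu();
        }

    The key design point is that the migration decision is driven by observed hardware-thread behavior rather than by explicit hints from the application beyond the offload notification.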

    ARBITRATION ACROSS SHARED MEMORY POOLS OF DISAGGREGATED MEMORY DEVICES

    Publication Number: US20190050261A1

    Publication Date: 2019-02-14

    Application Number: US15929005

    Filing Date: 2018-03-29

    Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
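
    The arbitration described here amounts to a priority queue keyed by per-compute-resource priority classes. The following sketch assumes a lower numeric value means higher priority and uses invented names (MemoryPoolController, DataRequest); it is a minimal illustration, not the patented implementation.

        #include <cstdint>
        #include <iostream>
        #include <map>
        #include <queue>
        #include <vector>

        // Illustrative data request from a compute resource to the shared memory pool.
        struct DataRequest {
            uint32_t compute_id;
            uint64_t address;
        };

        // Hypothetical arbitration: the memory pool controller (MPC) maps each compute
        // resource to a priority class, and pending requests are ordered by that class
        // before memory access commands are issued to the disaggregated memory devices.
        class MemoryPoolController {
        public:
            void AssignPriority(uint32_t compute_id, int priority) {
                priority_[compute_id] = priority;  // lower value = higher priority
            }

            void Submit(const DataRequest& req) {
                int prio = priority_.count(req.compute_id) ? priority_[req.compute_id] : 99;
                queue_.push({prio, seq_++, req});
            }

            // Drain requests in priority order; FIFO within a priority class.
            void IssueCommands() {
                while (!queue_.empty()) {
                    const Entry& e = queue_.top();
                    std::cout << "issue memory access for compute " << e.req.compute_id
                              << " addr 0x" << std::hex << e.req.address << std::dec << "\n";
                    queue_.pop();
                }
            }

        private:
            struct Entry {
                int priority;
                uint64_t seq;
                DataRequest req;
                bool operator>(const Entry& o) const {
                    return priority != o.priority ? priority > o.priority : seq > o.seq;
                }
            };
            std::map<uint32_t, int> priority_;
            std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue_;
            uint64_t seq_ = 0;
        };

        int main() {
            MemoryPoolController mpc;
            mpc.AssignPriority(1, 0);   // latency-sensitive compute resource
            mpc.AssignPriority(2, 5);   // best-effort compute resource
            mpc.Submit({2, 0x1000});
            mpc.Submit({1, 0x2000});
            mpc.IssueCommands();        // the request from compute 1 is served first
        }

    The sequence number keeps requests within the same priority class in FIFO order, which is one simple way to avoid reordering among equal-priority compute resources.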

    TECHNOLOGIES FOR PROVIDING ADAPTIVE PLATFORM QUALITY OF SERVICE

    Publication Number: US20190007747A1

    Publication Date: 2019-01-03

    Application Number: US15636779

    Filing Date: 2017-06-29

    Abstract: Technologies for providing adaptive platform quality of service include a compute device. The compute device is to obtain class of service data for an application to be executed, execute the application, determine, as a function of one or more resource utilizations of the application as the application is executed, a present phase of the application, set a present class of service for the application as a function of the determined phase, wherein the present class of service is within a range associated with the determined phase, determine whether a present performance metric of the application satisfies a target performance metric, and increment, in response to a determination that the present performance metric does not satisfy the target performance metric, the present class of service to a higher class of service in the range. Other embodiments are also described and claimed.
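
    The adaptive loop in this abstract can be sketched as: classify the phase from resource utilization, start at the bottom of that phase's class-of-service range, and step upward while the performance target is missed. The ranges, metrics, and names below are placeholders chosen for illustration, not values from the patent.

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <string>

        // Hypothetical class-of-service range associated with an application phase.
        struct CosRange {
            int low;
            int high;
        };

        class AdaptiveQosController {
        public:
            AdaptiveQosController() {
                ranges_["compute_bound"] = {2, 5};   // illustrative ranges only
                ranges_["memory_bound"]  = {4, 8};
            }

            // Classify the present phase from CPU and memory-bandwidth utilization (0..1).
            std::string DeterminePhase(double cpu_util, double mem_bw_util) const {
                return (mem_bw_util > cpu_util) ? "memory_bound" : "compute_bound";
            }

            void SetPhase(const std::string& phase) {
                phase_ = phase;
                cos_ = ranges_.at(phase).low;        // start at the bottom of the range
            }

            // Increment the class of service if the target is not met, clamped to the
            // top of the range for the current phase.
            void Adjust(double present_metric, double target_metric) {
                if (present_metric < target_metric) {
                    cos_ = std::min(cos_ + 1, ranges_.at(phase_).high);
                }
            }

            int ClassOfService() const { return cos_; }

        private:
            std::map<std::string, CosRange> ranges_;
            std::string phase_;
            int cos_ = 0;
        };

        int main() {
            AdaptiveQosController ctrl;
            ctrl.SetPhase(ctrl.DeterminePhase(/*cpu*/0.3, /*mem bw*/0.8));  // memory bound
            ctrl.Adjust(/*present metric*/0.8, /*target metric*/1.0);       // target missed
            std::cout << "class of service: " << ctrl.ClassOfService() << "\n";  // prints 5
        }

    Clamping to the top of the phase's range keeps the controller from granting more platform resources than the determined phase is entitled to.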
