Transmission of address translation type packets

    Publication Number: US11853231B2

    Publication Date: 2023-12-26

    Application Number: US17357838

    Filing Date: 2021-06-24

    CPC classification number: G06F12/1458 H04L41/08 H04L49/90 H04L61/25

    Abstract: Apparatuses, systems, and methods for routing requests and responses targeting a shared resource. A queue in a communication fabric is located in a path between multiple requesters and a shared resource. In some embodiments, the shared resource is a shared address translation cache stored in an endpoint. The physical channel between the queue and the shared resource supports multiple virtual channels. The queue assigns at least one entry to each virtual channel of a group of virtual channels, where the group includes a virtual channel for each address translation request type from a single requester of the multiple requesters. When the at least one entry for a given requester is de-allocated, the queue refills this entry only with requests from the assigned virtual channel, even if the empty entry is the only available entry of the queue.
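
    As a rough illustration of the entry-reservation behavior described in the abstract, the Python sketch below models a queue that reserves one entry per (requester, address translation request type) virtual channel and refills a freed reserved entry only from its assigned channel. The class name, request types, and queue sizes are assumptions for illustration, not the patent's implementation.

        from collections import deque

        # Assumed address translation request types; one virtual channel per type
        # and per requester.
        REQUEST_TYPES = ["translation", "invalidation", "page_request"]

        class FabricQueue:
            def __init__(self, requesters, shared_entries=4):
                # One reserved entry per (requester, request type) virtual channel.
                self.reserved = {(r, t): None for r in requesters for t in REQUEST_TYPES}
                self.pending = {key: deque() for key in self.reserved}
                self.shared = deque(maxlen=shared_entries)  # entries any channel may use

            def enqueue(self, requester, req_type, request):
                key = (requester, req_type)
                if self.reserved[key] is None:
                    # A freed reserved entry is refilled only from its own virtual
                    # channel, even if it is the last available entry in the queue.
                    self.reserved[key] = request
                elif len(self.shared) < self.shared.maxlen:
                    self.shared.append((key, request))
                else:
                    self.pending[key].append(request)  # back-pressure this channel

            def dequeue(self, requester, req_type):
                key = (requester, req_type)
                request, self.reserved[key] = self.reserved[key], None
                if self.pending[key]:
                    self.reserved[key] = self.pending[key].popleft()
                return request

        q = FabricQueue(requesters=["gpu0", "nic0"])
        q.enqueue("gpu0", "translation", {"addr": 0x1000})
        print(q.dequeue("gpu0", "translation"))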

    MULTI-GPU DEVICE PCIE TOPOLOGY RETRIEVAL IN GUEST VM

    Publication Number: US20230401082A1

    Publication Date: 2023-12-14

    Application Number: US17839821

    Filing Date: 2022-06-14

    CPC classification number: G06F9/45558 G06F9/4881 G06F2009/45579

    Abstract: A system and method for efficiently scheduling tasks to multiple endpoint devices are described. In various implementations, a computing system has a physical hardware topology that includes multiple endpoint devices and one or more general-purpose central processing units (CPUs). A virtualization layer is added between the hardware of the computing system and an operating system that creates a guest virtual machine (VM) with multiple endpoint devices. The guest VM utilizes a guest VM topology that is different from the physical hardware topology. The processor of an endpoint device that runs the guest VM accesses a table of latency information for one or more pairs of endpoints of the guest VM based on the physical hardware topology, rather than on the guest VM topology. The processor schedules tasks on paths between endpoint devices based on the table.
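
    The table-driven scheduling step can be pictured with the short sketch below, which assumes a hypothetical latency table keyed by physical endpoint pairs; the device names and latency values are invented for illustration and are not taken from the patent.

        # Latency (microseconds) between physical endpoint pairs, as the guest VM
        # might read it from a table derived from the physical hardware topology.
        physical_latency_us = {
            ("gpu0", "gpu1"): 2.0,    # same PCIe switch
            ("gpu0", "gpu2"): 8.5,    # across the CPU root complex
            ("gpu1", "gpu2"): 8.5,
        }

        def best_peer(src, candidates, table):
            """Return the candidate endpoint with the lowest physical-path latency."""
            def latency(dst):
                return table.get((src, dst), table.get((dst, src), float("inf")))
            return min(candidates, key=latency)

        # Schedule a peer-to-peer transfer from gpu0 to its closest physical peer,
        # even if the guest VM topology shows both peers as equivalent.
        print(best_peer("gpu0", ["gpu1", "gpu2"], physical_latency_us))  # -> gpu1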

    Video encode pre-analysis bit budgeting based on context and features

    Publication Number: US11843772B2

    Publication Date: 2023-12-12

    Application Number: US16706473

    Filing Date: 2019-12-06

    Abstract: Systems, apparatuses, and methods for bit budgeting in video encode pre-analysis based on context and features are disclosed. A pre-encoder receives a video frame and evaluates each block of the frame for the presence of several contextual indicators. The contextual indicators can include memory colors, text, depth of field, and other specific objects. For each contextual indicator detected, a coefficient is generated and added with other coefficients to generate a final importance value for the block. The coefficients can be adjusted so that only a defined fraction of the picture is deemed important. The final importance value of the block is used to determine the bit budget for the block. The block bit budgets are provided to the encoder and used to influence the quantization parameters used for encoding the blocks.
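
    The coefficient-summing and budgeting steps might look roughly like the following sketch. The indicator names, coefficient values, and the proportional split of the frame budget are assumptions for illustration; the encoder's actual tuning is not specified at this level of detail.

        # Assumed coefficients per contextual indicator.
        CONTEXT_COEFFICIENTS = {
            "memory_color": 0.30,   # e.g. skin tones, sky, grass
            "text": 0.40,
            "in_focus": 0.20,
            "specific_object": 0.10,
        }

        def block_importance(indicators):
            # Sum one coefficient per contextual indicator detected in the block.
            return sum(CONTEXT_COEFFICIENTS[name] for name in indicators)

        def allocate_bit_budgets(blocks, frame_bit_budget, important_fraction=0.25):
            """Split the frame budget across blocks in proportion to importance,
            keeping only a defined fraction of the picture as 'important'."""
            scores = [block_importance(b) for b in blocks]
            idx = min(int(len(scores) * important_fraction), len(scores) - 1)
            cutoff = sorted(scores, reverse=True)[idx]
            # Blocks below the cutoff get a small floor weight instead of zero.
            adjusted = [s if s >= cutoff and s > 0 else 0.05 for s in scores]
            total = sum(adjusted)
            return [frame_bit_budget * a / total for a in adjusted]

        # Example: four blocks; one contains in-focus text, one a memory color.
        blocks = [{"text", "in_focus"}, {"memory_color"}, set(), set()]
        print(allocate_bit_budgets(blocks, frame_bit_budget=12000))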

    Adaptive audio mixing
    Invention Grant

    Publication Number: US11839815B2

    Publication Date: 2023-12-12

    Application Number: US17132827

    Filing Date: 2020-12-23

    Abstract: Systems, apparatuses, and methods for performing adaptive audio mixing are disclosed. A trained neural network dynamically selects and mixes pre-recorded, human-composed music stems that are composed as mutually compatible sets. Stem and track selection, volume mixing, filtering, dynamic compression, acoustical/reverberant characteristics, segues, tempo, beat-matching and crossfading parameters generated by the neural network are inferred from the game scene characteristics and other dynamically changing factors. The trained neural network selects an artist's pre-recorded stems and mixes the stems in real-time in unique ways to dynamically adjust and modify background music based on factors such as game scenario, the unique storyline of the player, scene elements, the player's profile, interest, and performance, adjustments made to game controls (e.g., music volume), number of viewers, received comments, player's popularity, player's native language, player's presence, and/or other factors. The trained neural network creates unique music that dynamically varies according to real-time circumstances.
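
    The stem-selection and mixing step could be sketched as below, where a plain function stands in for the trained neural network and the feature names, stems, and parameter ranges are invented for illustration.

        def infer_mix_parameters(game_state):
            """Stand-in for the trained network: map scene features to mix parameters."""
            intensity = game_state["combat_intensity"]          # assumed range 0.0 .. 1.0
            return {
                "stems": ["drums", "bass"] + (["lead"] if intensity > 0.5 else ["pads"]),
                "volume": {"drums": 0.4 + 0.5 * intensity, "bass": 0.6,
                           "lead": 0.8, "pads": 0.5},
                "tempo_bpm": 90 + int(40 * intensity),
                "crossfade_s": 4.0 if intensity < 0.3 else 1.5,
            }

        def apply_mix(params):
            # In a real pipeline these parameters would drive beat-matched playback,
            # filtering, and dynamic range compression of the selected stems.
            for stem in params["stems"]:
                print(f"play {stem}: volume {params['volume'][stem]:.2f}, "
                      f"{params['tempo_bpm']} BPM, crossfade {params['crossfade_s']} s")

        apply_mix(infer_mix_parameters({"combat_intensity": 0.8}))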

    Apparatus and method for providing subsystem processor based power shifting for peripheral devices

    Publication Number: US11815974B2

    Publication Date: 2023-11-14

    Application Number: US17385244

    Filing Date: 2021-07-26

    CPC classification number: G06F1/3228 G06F1/266 G06F1/3209 G06F1/3296

    Abstract: A computing device and method control power consumption of a graphics processing unit (GPU) in the computing device by having the GPU determine an allocated power for a USB device connected through a USB port, such as a USB-C port. The GPU issues allocated power information for the external USB device to cause the allocated power to be provided to the USB device; this includes issuing allocated power information to a power delivery (PD) controller that is connected to the USB port. In some implementations, the GPU shifts at least a portion of the allocated power from the USB device back to the GPU in response to a usage change event associated with the USB device, improving GPU performance. The usage change event can be a disconnect event of the USB device, a power renegotiation event between the USB device and the GPU, or any other suitable usage change event.
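
    A minimal sketch of the power-shifting idea, assuming a hypothetical power manager and invented wattages rather than the actual USB power delivery protocol:

        class GpuPowerManager:
            """Hypothetical power budget shared between the GPU and a USB-C device."""

            def __init__(self, total_budget_w=300):
                self.total_budget_w = total_budget_w
                self.usb_allocation_w = 0

            def allocate_usb_power(self, requested_w):
                # Tell the USB power delivery (PD) controller how much power the
                # port may source to the attached device.
                self.usb_allocation_w = min(requested_w, self.total_budget_w // 2)
                print(f"PD controller: advertise {self.usb_allocation_w} W on the USB-C port")

            def gpu_power_limit_w(self):
                return self.total_budget_w - self.usb_allocation_w

            def on_usage_change(self, event):
                # On a disconnect or downward renegotiation, reclaim part or all of
                # the USB allocation so the GPU's own power limit rises.
                if event == "disconnect":
                    self.usb_allocation_w = 0
                elif event == "renegotiate_lower":
                    self.usb_allocation_w //= 2
                print(f"GPU power limit now {self.gpu_power_limit_w()} W")

        pm = GpuPowerManager()
        pm.allocate_usb_power(requested_w=100)   # e.g. an external display asks for 100 W
        pm.on_usage_change("disconnect")         # allocation shifts back to the GPU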
