-
Publication No.: US11863844B2
Publication Date: 2024-01-02
Application No.: US16833582
Filing Date: 2020-03-28
Applicant: Intel Corporation
Inventor: Ravishankar Iyer , Nilesh Kumar Jain , Rameshkumar Illikkal , Carl S. Marshall , Selvakumar Panneer , Rajesh Poornachandran
IPC: H04N21/234 , H04N21/81 , H04N21/647 , H04N21/235
CPC classification number: H04N21/812 , H04N21/235 , H04N21/23418 , H04N21/23424 , H04N21/64715
Abstract: Various embodiments for dynamically generating an advertisement in a video stream are disclosed. In one embodiment, video stream content associated with a video stream for a user device is received. Video analytics data is obtained for the video stream content, which indicates a scene recognized in the video stream content. An advertisement to be generated and inserted into the video stream content is then selected based on the scene recognized in the video stream content, and an advertisement template for generating the selected advertisement is obtained. Video advertisement content corresponding to the advertisement is then generated based on the advertisement template and the video analytics data. The video advertisement content is then inserted into the video stream content, and the modified video stream content is transmitted to the user device.
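A minimal Python sketch (not from the patent) of the flow the abstract describes: obtain analytics for the stream, pick an ad template that matches the recognized scene, render the ad, and splice it into the stream. The scene labels, template registry, and renderer are hypothetical.

```python
def insert_dynamic_ad(segments, analytics, templates, render_ad):
    """Scene-driven ad generation and insertion into a segmented video stream.

    segments: ordered video segments of the user's stream.
    analytics: video analytics output, e.g. {"scene": "cooking", "insertion_index": 3}.
    templates: hypothetical mapping from a scene label to an ad template.
    render_ad: callable producing ad video content from (template, analytics).
    """
    template = templates.get(analytics.get("scene"))
    if template is None:
        return segments                      # no matching ad; stream passes through unchanged
    ad_segment = render_ad(template, analytics)
    cut = analytics.get("insertion_index", len(segments))
    return segments[:cut] + [ad_segment] + segments[cut:]
```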
-
Publication No.: US20190005176A1
Publication Date: 2019-01-03
Application No.: US15640448
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Ananth Sankaranarayanan , David Zimmerman , Pratik M. Marolia , Suchit Subhaschandra , Dave Minturn
Abstract: Aspects of the embodiments are directed to systems, devices, and methods for accessing storage-as-memory. Embodiments include a microprocessor including a microprocessor system agent and a field programmable gate array (FPGA). The FPGA includes an FPGA system agent to process memory access requests received from the microprocessor system agent across a communications link; a memory controller communicatively coupled to the system agent; and a high speed serial interface to link the system agent with a storage system. Embodiments can also include a storage device connected to the FPGA by the high speed serial interface.
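The abstract describes an FPGA-side agent that services CPU memory requests by redirecting them to storage behind a serial link. A rough Python simulation of that request path, with invented class, block-size, and device names:

```python
class StorageBackedMemory:
    """Toy model of an FPGA system agent exposing storage as byte-addressable memory."""

    BLOCK = 4096  # assumed storage block size

    def __init__(self, storage):
        self.storage = storage   # stands in for the device behind the serial interface
        self.cache = {}          # block cache held in FPGA-attached memory

    def read(self, addr, length):
        blk, off = divmod(addr, self.BLOCK)
        if blk not in self.cache:                  # miss: fetch the block from storage
            self.cache[blk] = self.storage.read_block(blk)
        return self.cache[blk][off:off + length]

    def write(self, addr, data):
        blk, off = divmod(addr, self.BLOCK)
        buf = bytearray(self.cache.get(blk) or self.storage.read_block(blk))
        buf[off:off + len(data)] = data
        self.cache[blk] = bytes(buf)
        self.storage.write_block(blk, self.cache[blk])  # write-through for simplicity
```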
-
Publication No.: US20210208863A1
Publication Date: 2021-07-08
Application No.: US17133015
Filing Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Andrzej Kuriata , Mihai-Daniel Dodan , Wenhui Shu , Long Cui , Jinshi Chen , Rameshkumar Illikkal , Teck Joo Goh
Abstract: Methods, apparatus, systems, and articles of manufacture for loading of a container image are disclosed. An example apparatus includes a prioritizer to determine a priority level at which a container is to be executed. A container controller is to determine a first expected location for a first set of layers of the container, the container controller to determine a second expected location for a second set of layers of the container, the first expected location and the second expected location determined based on the determined priority level, the second set of layers separated from the first set of layers in an image by a landmark. A container loader is to mount the first set of layers from the first expected location. A container executor is to initiate execution of the container based on the mounted first set of layers.
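A small Python sketch of the layer-splitting idea: layers up to a landmark are mounted eagerly from a location chosen by priority, while the remaining layers are left for lazy fetching so execution can start early. The source names and priority levels are illustrative only.

```python
def plan_layers(layers, landmark, priority):
    """Split an image's layers at the landmark and pick a source per set by priority."""
    idx = layers.index(landmark) + 1            # layers are assumed to contain the landmark
    eager, lazy = layers[:idx], layers[idx:]
    if priority == "high":
        return {"eager": ("local_cache", eager), "lazy": ("remote_registry", lazy)}
    return {"eager": ("remote_registry", eager), "lazy": ("remote_registry", lazy)}

def start_container(layers, landmark, priority, mount, execute):
    """Mount only the first set of layers, then begin execution before the lazy set arrives."""
    plan = plan_layers(layers, landmark, priority)
    source, eager_layers = plan["eager"]
    mount(eager_layers, source)
    execute()
```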
-
Publication No.: US12231304B2
Publication Date: 2025-02-18
Application No.: US18037964
Filing Date: 2020-12-21
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Anna Drewek-Ossowicka , Dharmisha Ketankumar Doshi , Qian Li , Andrzej Kuriata , Andrew J. Herdrich , Teck Joo Goh , Daniel Richins , Slawomir Putyrski , Wenhui Shu , Long Cui , Jinshi Chen , Mihai Daniel Dodan
IPC: H04L41/5019 , G06F9/50
Abstract: Various approaches to efficiently allocating and utilizing hardware resources in data centers while maintaining compliance with a service level agreement are described. In various embodiments, an application-level service level objective (SLO) specified for a computational workload is translated into a hardware-level SLO to facilitate direct enforcement by the hardware processor, e.g., using a feedback control loop or model-based mapping of the hardware-level SLO to allocations of microarchitecture resources of the processor. In some embodiments, a computational model of the hardware behavior under resource contention is used to predict the application performance (e.g., as measured in terms of the hardware-level SLO) to be expected under certain contention scenarios. Scheduling of workloads among the compute nodes within the data center may be based on such predictions. In further embodiments, configurations of microservices are optimized to minimize hardware resources while meeting a specified performance goal.
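One way to read the feedback-control idea: compare a measured hardware-level metric against the hardware-level SLO and adjust a microarchitecture resource allocation (e.g., last-level-cache ways) accordingly. The sketch below is a generic proportional-style controller with hypothetical thresholds and knob names, not Intel's implementation.

```python
def slo_control_step(target_cpi, measured_cpi, cache_ways, min_ways=1, max_ways=20):
    """One iteration of a feedback loop enforcing a hardware-level SLO (a CPI target).

    target_cpi: hardware-level SLO derived from the application-level SLO.
    measured_cpi: cycles-per-instruction observed for the workload this interval.
    cache_ways: currently allocated cache ways (the resource knob being tuned).
    """
    if measured_cpi > target_cpi * 1.05:       # missing the SLO: give more cache
        cache_ways = min(cache_ways + 1, max_ways)
    elif measured_cpi < target_cpi * 0.95:     # comfortably ahead: release resources
        cache_ways = max(cache_ways - 1, min_ways)
    return cache_ways
```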
-
Publication No.: US20230015537A1
Publication Date: 2023-01-19
Application No.: US17950826
Filing Date: 2022-09-22
Applicant: Intel Corporation
Inventor: Anjo Lucas Vahldiek-Oberwagner , Ravi L. Sahita , Mona Vij , Rameshkumar Illikkal , Michael Steiner , Thomas Knauth , Dmitrii Kuvaiskii , Sudha Krishnakumar , Krystof C. Zmudzinski , Vincent Scarlata , Francis McKeen
Abstract: Example methods and systems are directed to reducing latency in providing trusted execution environments (TEEs). Initializing a TEE includes multiple steps before the TEE starts executing. Besides workload-specific initialization, workload-independent initialization is performed, such as adding memory to the TEE. In function-as-a-service (FaaS) environments, a large portion of the TEE is workload-independent, and thus can be performed prior to receiving the workload. Certain steps performed during TEE initialization are identical for certain classes of workloads. Thus, the common parts of the TEE initialization sequence may be performed before the TEE is requested. When a TEE is requested for a workload in the class and the parts to specialize the TEE for its particular purpose are known, the final steps to initialize the TEE are performed.
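The latency-hiding idea can be pictured as a pool of partially initialized TEEs: the workload-independent steps run ahead of time, and only the workload-specific specialization happens when a request arrives. A hedged Python sketch with invented step and callback names:

```python
import queue

class TeePool:
    """Keeps TEEs that have completed the workload-independent initialization steps."""

    def __init__(self, size, create_tee, add_memory):
        self.ready = queue.Queue()
        for _ in range(size):
            tee = create_tee()
            add_memory(tee)            # common, workload-independent step done up front
            self.ready.put(tee)

    def launch(self, workload, specialize):
        tee = self.ready.get()          # pre-initialized TEE, no cold-start memory setup
        specialize(tee, workload)       # only the final, workload-specific steps remain
        return tee
```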
-
Publication No.: US12113902B2
Publication Date: 2024-10-08
Application No.: US17131684
Filing Date: 2020-12-22
Applicant: Intel Corporation
Inventor: Anjo Lucas Vahldiek-Oberwagner , Ravi L. Sahita , Mona Vij , Dayeol Lee , Haidong Xia , Rameshkumar Illikkal , Samuel Ortiz , Kshitij Arun Doshi , Mourad Cherfaoui , Andrzej Kuriata , Teck Joo Goh
CPC classification number: H04L9/321 , H04L9/3242
Abstract: In function-as-a-service (FaaS) environments, a client makes use of a function executing within a trusted execution environment (TEE) on a FaaS server. Multiple tenants of the FaaS platform may provide functions to be executed by the FaaS platform via a gateway. Each tenant may provide code and data for any number of functions to be executed within any number of TEEs on the FaaS platform and accessed via the gateway. Additionally, each tenant may provide code and data for a single surrogate attester TEE. The client devices of the tenant use the surrogate attester TEE to attest each of the other TEEs of the tenant and establish trust with the functions in those TEEs. Once the functions have been attested, the client devices have confidence that the other TEEs of the tenant are running on the same platform as the gateway.
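A rough sketch of the trust flow described above: the client remotely attests only the tenant's surrogate attester TEE, then relies on it to vouch for the tenant's function TEEs behind the gateway. All object methods and helpers are illustrative, not an actual attestation API.

```python
def establish_trust(client, surrogate, function_tees, verify_quote, local_attest):
    """Client attests one surrogate TEE; the surrogate locally attests the rest."""
    quote = surrogate.get_quote(nonce=client.fresh_nonce())
    if not verify_quote(quote):                 # remote attestation of the surrogate only
        raise RuntimeError("surrogate attestation failed")
    trusted = []
    for tee in function_tees:
        report = tee.get_report(target=surrogate.identity())
        if local_attest(surrogate, report):     # same-platform check via the surrogate
            trusted.append(tee)
    return trusted   # function TEEs the client may now call through the gateway
```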
-
Publication No.: US20240015080A1
Publication Date: 2024-01-11
Application No.: US18037964
Filing Date: 2020-12-21
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Anna Drewek-Ossowicka , Dharmisha Doshi , Qian Li , Andrzej Kuriata , Andrew J. Herdrich , Teck Joo Goh , Daniel Richins , Slawomir Putyrski , Wenhui Shu , Long Cui , Jinshi Chen , Mihai Daniel Dodan
IPC: H04L41/5019 , G06F9/50
CPC classification number: H04L41/5019 , G06F9/5011
Abstract: Various approaches to efficiently allocating and utilizing hardware resources in data centers while maintaining compliance with a service level agreement are described. In various embodiments, an application-level service level objective (SLO) specified for a computational workload is translated into a hardware-level SLO to facilitate direct enforcement by the hardware processor, e.g., using a feedback control loop or model-based mapping of the hardware-level SLO to allocations of microarchitecture resources of the processor. In some embodiments, a computational model of the hardware behavior under resource contention is used to predict the application performance (e.g., as measured in terms of the hardware-level SLO) to be expected under certain contention scenarios. Scheduling of workloads among the compute nodes within the data center may be based on such predictions. In further embodiments, configurations of microservices are optimized to minimize hardware resources while meeting a specified performance goal.
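This publication shares its abstract with the granted patent above. Complementing the feedback-loop sketch there, the model-based side can be pictured as predicting a hardware-level metric under a contention scenario and scheduling the workload on the node with the best predicted outcome. A toy Python illustration; the model inputs and node fields are made up.

```python
def predict_cpi(base_cpi, llc_occupancy_mb, mem_bw_gbps, model):
    """Predict a workload's CPI under contention from simple platform features.

    'model' is any fitted regressor; only a .predict([...]) method is assumed.
    """
    return model.predict([[base_cpi, llc_occupancy_mb, mem_bw_gbps]])[0]

def pick_node(workload, nodes, model, target_cpi):
    """Schedule on the node whose predicted CPI best satisfies the hardware-level SLO."""
    best, best_cpi = None, float("inf")
    for node in nodes:
        cpi = predict_cpi(workload["base_cpi"], node["free_llc_mb"],
                          node["free_bw_gbps"], model)
        if cpi <= target_cpi and cpi < best_cpi:
            best, best_cpi = node, cpi
    return best   # None means no node is predicted to meet the SLO
```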
-
Publication No.: US11238203B2
Publication Date: 2022-02-01
Application No.: US15640448
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Ananth Sankaranarayanan , David Zimmerman , Pratik M. Marolia , Suchit Subhaschandra , Dave Minturn
IPC: G06F30/331 , G06F21/76 , G06F3/06 , G06F9/445 , G06F12/0817 , G06F21/79 , G06F30/34
Abstract: Aspects of the embodiments are directed to systems, devices, and methods for accessing storage-as-memory. Embodiments include a microprocessor including a microprocessor system agent and a field programmable gate array (FPGA). The FPGA includes an FPGA system agent to process memory access requests received from the microprocessor system agent across a communications link; a memory controller communicatively coupled to the system agent; and a high speed serial interface to link the system agent with a storage system. Embodiments can also include a storage device connected to the FPGA by the high speed serial interface.
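This grant shares its abstract with the earlier published application. As a complement to the storage-as-memory sketch there, the translation of a physical address inside a storage-backed aperture into a (block, offset) request that the agent forwards over the serial link could look roughly like this; the aperture base, block size, and request layout are hypothetical.

```python
from collections import namedtuple

StorageRequest = namedtuple("StorageRequest", "op block offset length")

BLOCK_SIZE = 4096                  # assumed block size exposed by the storage device
APERTURE_BASE = 0x1_0000_0000      # assumed base of the storage-backed memory aperture

def translate(op, phys_addr, length):
    """Map a memory access inside the aperture to a storage request."""
    if phys_addr < APERTURE_BASE:
        raise ValueError("address not in the storage-as-memory aperture")
    block, offset = divmod(phys_addr - APERTURE_BASE, BLOCK_SIZE)
    return StorageRequest(op, block, offset, length)

# Example: a 64-byte cacheline read 8 KiB into the aperture
print(translate("read", APERTURE_BASE + 8192 + 64, 64))
# -> StorageRequest(op='read', block=2, offset=64, length=64)
```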
-
Publication No.: US20250150362A1
Publication Date: 2025-05-08
Application No.: US19013402
Filing Date: 2025-01-08
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Anna Drewek-Ossowicka , Dharmisha Ketankumar Doshi , Qian Li , Andrzej Kuriata , Andrew J. Herdrich , Teck Joo Goh , Daniel Richins , Slawomir Putyrski , Wenhui Shu , Long Cui , Jinshi Chen , Mihai Daniel Dodan
IPC: H04L41/5019 , G06F9/50
Abstract: Various approaches to efficiently allocating and utilizing hardware resources in data centers while maintaining compliance with a service level agreement are described. In various embodiments, an application-level service level objective (SLO) specified for a computational workload is translated into a hardware-level SLO to facilitate direct enforcement by the hardware processor, e.g., using a feedback control loop or model-based mapping of the hardware-level SLO to allocations of microarchitecture resources of the processor. In some embodiments, a computational model of the hardware behavior under resource contention is used to predict the application performance (e.g., as measured in terms of the hardware-level SLO) to be expected under certain contention scenarios. Scheduling of workloads among the compute nodes within the data center may be based on such predictions. In further embodiments, configurations of microservices are optimized to minimize hardware resources while meeting a specified performance goal.
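This continuation repeats the same abstract. To avoid duplicating the earlier sketches, here is a toy illustration of the remaining aspect, choosing a microservice configuration that minimizes hardware resources while still meeting a performance goal; the cost and latency models are placeholders, not anything taken from the patent.

```python
def cheapest_config(candidates, predict_latency, cost, latency_slo_ms):
    """Pick the lowest-cost microservice configuration predicted to meet the SLO.

    candidates: iterable of dicts like {"cpus": 2, "memory_gb": 4, "replicas": 3}.
    predict_latency / cost: caller-supplied models standing in for whatever the
    optimizer actually uses.
    """
    feasible = [c for c in candidates if predict_latency(c) <= latency_slo_ms]
    return min(feasible, key=cost) if feasible else None
```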
-
Publication No.: US11989587B2
Publication Date: 2024-05-21
Application No.: US16914301
Filing Date: 2020-06-27
Applicant: Intel Corporation
Inventor: Rameshkumar Illikkal , Andrew J. Herdrich , Francesc Guim Bernat , Ravishankar Iyer
CPC classification number: G06F9/5016 , G06F9/30101 , G06F9/4881 , G06F11/3037 , G06F11/3466
Abstract: An apparatus and method for dynamic resource allocation with mile/performance markers. For example, one embodiment of a processor comprises: resource allocation circuitry to allocate a plurality of hardware resources to a plurality of workloads including priority workloads associated with one or more guaranteed performance levels; and monitoring circuitry to evaluate execution progress of a workload across a plurality of nodes, each node to execute one or more processing stages of the workload, wherein the monitoring circuitry is to evaluate the execution progress of the workload, at least in part, by reading progress markers advertised by the workload at the specified processing stages, wherein the monitoring circuitry is to detect that the workload may not meet one of the guaranteed performance levels based on the progress markers, and wherein the resource allocation circuitry, responsive to the monitoring circuitry, is to reallocate one or more of the plurality of hardware resources to improve the performance level of the workload.
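The monitoring/reallocation loop in the abstract can be approximated as: read the latest progress marker advertised by the workload, compare it against the marker expected at this point for the guaranteed performance level, and boost the workload's resource allocation if it is falling behind. A hypothetical Python sketch; the allocator interface and knob names are invented.

```python
def is_behind(markers_seen, expected_marker_at, now, margin=0.9):
    """Return True if the advertised progress markers suggest the guarantee may be missed."""
    latest = markers_seen[-1] if markers_seen else 0
    return latest < expected_marker_at(now) * margin

def enforce(workload, markers_seen, expected_marker_at, now, allocator):
    """Reallocate hardware resources when a priority workload is behind schedule."""
    if workload["priority"] and is_behind(markers_seen, expected_marker_at, now):
        allocator.boost(workload["id"], cache_ways=+2, mem_bw_percent=+10)  # illustrative knobs
```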