Abstract:
Technologies for function as a service (FaaS) arbitration include an edge gateway, multiple endpoint devices, and multiple service providers. The edge gateway receives a registration request from a service provider that is indicative of an FaaS function identifier and a transform function. The edge gateway verifies an attestation received from the service provider and registers the service provider. The edge gateway receives a function execution request from an endpoint device that is indicative of the FaaS function identifier. The edge gateway selects the service provider based on the FaaS function identifier, programs an accelerator with the transform function, executes the transform function with the accelerator to transform the function execution request to a provider request, and submits the provider request to the service provider. The service provider may be selected based on an expected service level included in the function execution request. Other embodiments are described and claimed.
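Below is a minimal Python sketch of the arbitration flow described in this abstract. The names (EdgeGateway, register_provider, handle_request) are illustrative rather than taken from the source, attestation verification is reduced to a boolean, and a plain callable stands in for the accelerator-programmed transform function.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Provider:
    name: str
    function_id: str
    transform: Callable[[dict], dict]   # stands in for the accelerator-programmed transform
    service_level: int                  # e.g. an expected latency class

class EdgeGateway:
    def __init__(self):
        self.providers: Dict[str, List[Provider]] = {}

    def register_provider(self, provider: Provider, attestation_ok: bool) -> bool:
        # Registration is accepted only after the provider's attestation is verified.
        if not attestation_ok:
            return False
        self.providers.setdefault(provider.function_id, []).append(provider)
        return True

    def handle_request(self, function_id: str, request: dict, expected_level: int) -> dict:
        # Select a registered provider that meets the expected service level.
        candidates = [p for p in self.providers.get(function_id, [])
                      if p.service_level >= expected_level]
        if not candidates:
            raise LookupError("no registered provider satisfies the request")
        provider = candidates[0]
        # The transform function (run on a programmed accelerator in the abstract)
        # converts the endpoint request into a provider-specific request.
        provider_request = provider.transform(request)
        return {"provider": provider.name, "request": provider_request}

gw = EdgeGateway()
gw.register_provider(
    Provider("provider-a", "resize-image",
             transform=lambda r: {"payload": r, "format": "provider-a-v1"},
             service_level=2),
    attestation_ok=True)
print(gw.handle_request("resize-image", {"img": "cat.png"}, expected_level=1))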
Abstract:
Systems and methods may provide for implementing one or more device locking procedures to block access to a device. In one example, the method may include receiving an indication that a user is no longer present, initiating a timing mechanism to set a period for issuing a first device lock instruction to lock a peripheral device, relaying timing information from the timing mechanism to a controller module associated with the peripheral device, and locking the peripheral device upon expiration of the period.
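The sketch below illustrates the timer-driven lock flow with hypothetical names: a user-absence indication starts a timer, timing information is relayed to a controller associated with the peripheral, and the peripheral is locked when the period expires. threading.Timer stands in for the timing mechanism.

import threading

class PeripheralController:
    def __init__(self, name: str):
        self.name = name
        self.locked = False

    def on_timing_info(self, seconds_remaining: float) -> None:
        # Timing information is relayed to the controller module for the peripheral.
        print(f"{self.name}: lock in {seconds_remaining:.1f}s unless the user returns")

    def lock(self) -> None:
        self.locked = True
        print(f"{self.name}: locked")

def on_user_absent(controller: PeripheralController, period: float) -> threading.Timer:
    # Initiate the timing mechanism that will issue the device lock instruction.
    controller.on_timing_info(period)
    timer = threading.Timer(period, controller.lock)
    timer.start()
    return timer   # cancel() this timer if the user returns before expiry

controller = PeripheralController("usb-storage")
t = on_user_absent(controller, period=0.1)
t.join()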
Abstract:
Systems and methods may provide for receiving runtime input from one or more unlock interfaces of a device and selecting a level of access with regard to the device from a plurality of levels of access based on the runtime input. The selected level of access may have an associated security policy, wherein an authentication of the runtime input may be conducted based on the associated security policy. In one example, one or more cryptographic keys are used to place the device in an unlocked state with regard to the selected level of access if the authentication is successful. If the authentication is unsuccessful, on the other hand, the device may be maintained in a locked state with regard to the selected level of access.
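A minimal sketch, assuming hypothetical level names and policies, of selecting a level of access from runtime unlock input and authenticating it against the security policy bound to that level. An HMAC comparison stands in for the cryptographic-key check; a real implementation would differ.

import hmac, hashlib

POLICIES = {
    "basic": {"key": b"basic-key", "min_len": 4},   # e.g. PIN unlock
    "full":  {"key": b"full-key",  "min_len": 8},   # e.g. passphrase unlock
}

def select_level(runtime_input: str) -> str:
    # Longer/stronger runtime input selects the higher level of access.
    return "full" if len(runtime_input) >= POLICIES["full"]["min_len"] else "basic"

def authenticate(runtime_input: str, level: str, expected_tag: bytes) -> bool:
    # Authentication is conducted under the policy associated with the level.
    policy = POLICIES[level]
    if len(runtime_input) < policy["min_len"]:
        return False
    tag = hmac.new(policy["key"], runtime_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

def try_unlock(runtime_input: str, expected_tag: bytes) -> str:
    level = select_level(runtime_input)
    if authenticate(runtime_input, level, expected_tag):
        return f"unlocked ({level})"
    return f"locked ({level})"      # the device stays locked for that level

enrolled = hmac.new(b"full-key", b"correct horse", hashlib.sha256).digest()
print(try_unlock("correct horse", enrolled))
print(try_unlock("wrong input!!", enrolled))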
Abstract:
Techniques and mechanisms for determining an operation to be performed with a direct memory access (DMA) request. An inspection unit (105) is coupled between an input-output memory management unit (IOMMU) (120) and an endpoint device (118). The inspection unit (105) stores a registry (330) comprising entries (332), each of which corresponds to a respective address and a respective one or more resources of the endpoint device (118). A given entry (332) of the registry (330) is created based on a message from the IOMMU (120) which indicates the successful completion of an address translation to facilitate a DMA request. The endpoint device (118) performs a search, based on a DMA request, to determine whether any entry (332) of the registry (330) indicates a combination of an address and an endpoint resource that matches a corresponding combination indicated by the DMA request. Communication of the DMA request to the IOMMU (120) is contingent on a result of the search.
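A sketch under stated assumptions: the inspection-unit registry is modeled as a set of (address, resource) pairs created when the IOMMU reports a completed translation, and a registry hit is treated here as meaning the request need not be communicated to the IOMMU, which is an interpretation rather than something the abstract states. All names are illustrative.

class InspectionUnit:
    def __init__(self):
        self.registry = set()   # entries: (address, endpoint_resource)

    def on_translation_completed(self, address: int, resource: str) -> None:
        # Entry creation is driven by the IOMMU's completion message.
        self.registry.add((address, resource))

    def handle_dma_request(self, address: int, resource: str) -> str:
        # Whether the request is communicated to the IOMMU depends on the search result.
        if (address, resource) in self.registry:
            return "handled without involving the IOMMU (translation already established)"
        return "forwarded to the IOMMU for translation"

unit = InspectionUnit()
unit.on_translation_completed(0x1000, "queue0")
print(unit.handle_dma_request(0x1000, "queue0"))  # hits the registry
print(unit.handle_dma_request(0x2000, "queue1"))  # miss, goes to the IOMMU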
Abstract:
Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration selection factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
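An illustrative scoring sketch follows; the property names, weights, and thresholds are assumptions, not taken from the source. Properties of the edge accelerator and the local accelerator are compared against per-function selection factors such as a latency target and a power budget.

EDGE_ACCEL  = {"latency_ms": 8.0, "power_w": 0.0, "throughput": 100}   # power billed to the edge
LOCAL_ACCEL = {"latency_ms": 2.0, "power_w": 5.0, "throughput": 40}

def score(accel: dict, factors: dict) -> float:
    # Reward an accelerator for satisfying each objective, plus its throughput.
    s = 0.0
    if accel["latency_ms"] <= factors["max_latency_ms"]:
        s += 1.0
    if accel["power_w"] <= factors["max_power_w"]:
        s += 1.0
    s += accel["throughput"] / 100.0
    return s

def select_accelerator(factors: dict) -> str:
    # Pick the resource that best satisfies the acceleration selection factors.
    edge, local = score(EDGE_ACCEL, factors), score(LOCAL_ACCEL, factors)
    return "edge" if edge >= local else "local"

print(select_accelerator({"max_latency_ms": 10.0, "max_power_w": 1.0}))   # -> edge
print(select_accelerator({"max_latency_ms": 3.0,  "max_power_w": 10.0}))  # -> local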
Abstract:
An embodiment of an integrated circuit may comprise memory to store respective resource control descriptors in correspondence with respective identifiers, and an input/output (IO) memory management unit (IOMMU) communicatively coupled to the memory, the IOMMU including circuitry to determine resource control information for an IO transaction based on a resource control descriptor stored in the memory that corresponds to an identifier associated with the IO transaction, and control utilization of one or more resources of the IOMMU based on the determined resource control information. Other embodiments are disclosed and claimed.
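A minimal sketch, with hypothetical names and fields, of looking up a resource control descriptor by the identifier carried with an IO transaction and using it to limit resource utilization (here, a share of translation-cache entries and a cap on outstanding translations).

from dataclasses import dataclass

@dataclass
class ResourceControlDescriptor:
    cache_share_pct: int      # portion of the IOMMU translation cache this ID may use
    max_outstanding: int      # in-flight translations allowed for this ID

DESCRIPTOR_TABLE = {
    0x10: ResourceControlDescriptor(cache_share_pct=50, max_outstanding=8),
    0x20: ResourceControlDescriptor(cache_share_pct=10, max_outstanding=2),
}

def control_for_transaction(identifier: int) -> ResourceControlDescriptor:
    # Unknown identifiers fall back to a conservative default descriptor.
    return DESCRIPTOR_TABLE.get(
        identifier, ResourceControlDescriptor(cache_share_pct=5, max_outstanding=1))

def admit(identifier: int, outstanding_now: int) -> bool:
    # Control utilization based on the descriptor associated with the identifier.
    desc = control_for_transaction(identifier)
    return outstanding_now < desc.max_outstanding

print(admit(0x20, outstanding_now=1))   # True: under this identifier's limit
print(admit(0x20, outstanding_now=2))   # False: limit reached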
Abstract:
Methods, apparatus, systems and articles of manufacture are disclosed to facilitate information exchange using publish-subscribe with blockchain. An example apparatus includes a security manager to integrate a security service with an instruction execution flow in a distributed device environment. The security manager is to include a processor. The processor is to be configured to implement at least an executable hierarchical state machine to provide credential management and access management in conjunction with instruction execution according to an execution plan. The executable hierarchical state machine is to generate a security context for the execution plan to implement a guard condition governing a transition from a first state to a second state in accordance with the execution plan.
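The sketch below models only the guard-condition idea; the states, credentials, and permissions are illustrative. A small state machine advances from one execution-plan state to the next only when the security context satisfies the guard, standing in for the credential and access management described above.

class SecurityContext:
    def __init__(self, credentials: set, permissions: set):
        self.credentials = credentials
        self.permissions = permissions

class ExecutionStateMachine:
    # (from_state, to_state) -> guard predicate over the security context
    TRANSITIONS = {
        ("received", "validated"): lambda ctx: "device-cert" in ctx.credentials,
        ("validated", "published"): lambda ctx: "publish" in ctx.permissions,
    }

    def __init__(self, context: SecurityContext):
        self.state = "received"
        self.context = context

    def advance(self, to_state: str) -> bool:
        guard = self.TRANSITIONS.get((self.state, to_state))
        if guard is None or not guard(self.context):
            return False            # the guard condition blocks the transition
        self.state = to_state
        return True

ctx = SecurityContext(credentials={"device-cert"}, permissions={"publish"})
sm = ExecutionStateMachine(ctx)
print(sm.advance("validated"), sm.advance("published"), sm.state)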
Abstract:
Technologies for hybrid field-programmable gate array (FPGA) application-specific integrated circuit (ASIC) code acceleration are described. In one example, the computing device includes an FPGA comprising: algorithm circuitry to: perform one or more algorithm tasks of an algorithm, wherein the algorithm is to perform a service request that is offloaded to the FPGA; and determine a primitive task associated with an algorithm task of the one or more algorithm tasks; primitive offload circuitry to encapsulate the primitive task in a buffer of the FPGA, wherein the buffer is accessible by an ASIC of the computing device; and result circuitry to return one or more results of the service request responsive to performance of the primitive task by the ASIC.
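An illustrative software model of the offload handshake (not an RTL or driver implementation; names and the dot-product primitive are assumed): the "FPGA" side splits a service request into an algorithm task, places a primitive task in a shared buffer, the "ASIC" side consumes it, and the result is returned to the caller.

from queue import Queue

shared_buffer: Queue = Queue()     # stands in for the FPGA buffer the ASIC can access

def asic_execute(primitive: dict) -> int:
    # The ASIC performs the fixed-function primitive, here a dot product.
    return sum(a * b for a, b in zip(primitive["a"], primitive["b"]))

def fpga_handle_service_request(request: dict) -> int:
    # Algorithm circuitry: determine the primitive task for this algorithm task.
    primitive = {"op": "dot", "a": request["a"], "b": request["b"]}
    # Primitive offload circuitry: encapsulate the task in the shared buffer.
    shared_buffer.put(primitive)
    # The ASIC picks the task up and performs it.
    result = asic_execute(shared_buffer.get())
    # Result circuitry: return the service-request result to the caller.
    return result

print(fpga_handle_service_request({"a": [1, 2, 3], "b": [4, 5, 6]}))   # 32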
Abstract:
Example methods, apparatus, systems and articles of manufacture (e.g., non-transitory physical storage media) to provide trust topology selection for distributed transaction processing in computing environments are disclosed herein. Example distributed transaction processing nodes disclosed herein include a distributed transaction application to process a transaction in a computing environment based on at least one of a centralized trust topology or a diffuse trust topology. Disclosed example distributed transaction processing nodes also include a trusted execution environment to protect first data associated with a centralized trust topology and to protect second data associated with a diffuse trust topology. Disclosed example distributed transaction processing nodes further include a trust topology selector to selectively configure the distributed transaction application to use the at least one of the centralized trust topology or the diffuse trust topology to process the transaction.
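A minimal sketch with assumed selection criteria (transaction value and participant count are not from the source): a selector configures the transaction application to use a centralized or a diffuse trust topology, and the data that would be protected by the trusted execution environment is represented by plain placeholder dictionaries.

def select_topology(tx: dict) -> str:
    # Example policy only: small two-party transactions use the centralized
    # topology; larger multi-party transactions use diffuse trust.
    if tx["participants"] <= 2 and tx["value"] < 1000:
        return "centralized"
    return "diffuse"

def process_transaction(tx: dict) -> str:
    topology = select_topology(tx)
    protected = {
        "centralized": {"broker_key": "<tee-protected>"},    # first data
        "diffuse": {"ledger_state": "<tee-protected>"},      # second data
    }[topology]
    return f"processed via {topology} topology using {sorted(protected)}"

print(process_transaction({"participants": 2, "value": 250}))
print(process_transaction({"participants": 5, "value": 5000}))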
Abstract:
Technologies for providing multi-tenant local breakout switching and dynamic load balancing include a network device to receive network traffic that includes a packet associated with a tenant. Upon a determination that the packet is encrypted, a secret key associated with the tenant is retrieved. The network device decrypts a payload from the packet using the secret key. The payload is indicative of one or more characteristics associated with the network traffic. The network device evaluates the characteristics and determines whether the network traffic is associated with a workload requesting compute from a service hosted by a network platform. If so, the network device forwards the network traffic to the service.
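A hedged sketch of the breakout decision; the tenant key table, the toy XOR "decryption", and the payload field names are all illustrative placeholders. The network device looks up the tenant's secret key, recovers the payload, inspects its characteristics, and forwards the traffic to the locally hosted service only when the workload asks for compute from it.

import json

TENANT_KEYS = {"tenant-a": 0x5A}          # per-tenant secret (toy XOR key)

def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)   # placeholder for real decryption

def handle_packet(tenant: str, encrypted_payload: bytes) -> str:
    key = TENANT_KEYS[tenant]             # retrieve the tenant's secret key
    payload = json.loads(xor_bytes(encrypted_payload, key))
    # Evaluate the characteristics carried in the decrypted payload.
    if payload.get("requests_compute") and payload.get("target") == "platform-service":
        return "forwarded to the locally hosted service"
    return "sent toward the core network"

pkt = xor_bytes(json.dumps({"requests_compute": True,
                            "target": "platform-service"}).encode(), 0x5A)
print(handle_packet("tenant-a", pkt))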