Abstract:
A method is provided for managing applications for sensors. In one embodiment, the method includes loading a plurality of applications and links for communicating with a plurality of sensors on a platform having an interface for entry of a requested use case; and copying a configuration from a grouping of application instances applied to a first sensor performing a function comprising the requested use case. The method may further include applying the configuration for the grouping of application instances to a second set of sensors to automatically conform the plurality of sensors on the platform to perform the requested use case.
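The configuration-copying step above can be sketched as follows. This is a minimal illustration, assuming hypothetical `Sensor` and `Platform` classes and an `apply_use_case` method; none of these names come from the patented implementation.

```python
# Hypothetical sketch: copy the app-instance configuration from a sensor
# already performing the requested use case onto the remaining sensors.
class Sensor:
    def __init__(self, name):
        self.name = name
        self.config = None

class Platform:
    def __init__(self, sensors):
        self.sensors = sensors

    def apply_use_case(self, template_sensor):
        # The "grouping of application instances" is modeled as a dict.
        config = dict(template_sensor.config)
        for sensor in self.sensors:
            if sensor is not template_sensor:
                sensor.config = dict(config)

cam = Sensor("cam-1")
cam.config = {"use_case": "intrusion-detection", "apps": ["detector", "tracker"]}
others = [Sensor(f"cam-{i}") for i in range(2, 5)]
platform = Platform([cam] + others)
platform.apply_use_case(cam)
assert all(s.config == cam.config for s in platform.sensors)
```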
Abstract:
A computer-implemented method executed by at least one processor for person identification is presented. The method includes employing one or more cameras to receive a video stream including a plurality of frames to extract features therefrom, detecting, via an object detection model, objects within the plurality of frames, detecting, via a key point detection model, persons within the plurality of frames, detecting, via a color detection model, color of clothing worn by the persons, detecting, via a gender and age detection model, an age and a gender of the persons, establishing a spatial connection between the objects and the persons, storing the features in a feature database, each feature associated with a confidence value, and normalizing, via a ranking component, the confidence values of each of the features.
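The final ranking step, which normalizes the per-feature confidence values, might look like the following sketch. The detector outputs are stubbed, and the function name and scaling scheme (normalizing so confidences sum to one) are assumptions for illustration only.

```python
# Illustrative sketch: make confidence values from different detection
# models (object, key point, color, gender/age) comparable by normalizing.
def normalize_confidences(features):
    total = sum(f["confidence"] for f in features)
    if total == 0:
        return list(features)
    return [dict(f, confidence=f["confidence"] / total) for f in features]

raw = [
    {"name": "person",           "confidence": 0.9},  # key point model
    {"name": "red_jacket",       "confidence": 0.6},  # color model
    {"name": "age_30_gender_f",  "confidence": 0.5},  # gender/age model
]
ranked = normalize_confidences(raw)
assert abs(sum(f["confidence"] for f in ranked) - 1.0) < 1e-9
```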
Abstract:
Methods are provided. A method includes capturing a snapshot of an offload process being executed by one or more many-core processors. The offload process is in signal communication with a host process being executed by a host processor. At least the offload process is in signal communication with a monitoring process. The method further includes terminating the offload process on the one or more many-core processors, by the monitoring process, responsive to a communication between the monitoring process and the offload process being disrupted. The snapshot includes a respective predetermined minimum set of information required to restore the same state of the offload process as when the snapshot was taken.
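The monitor-driven termination described above can be modeled as below. The `OffloadProcess` and `Monitor` classes, the heartbeat counter, and the dict-based snapshot are all stand-ins assumed for illustration, not the actual runtime.

```python
# Sketch: the monitoring process terminates the offload process once its
# communication channel (modeled as heartbeats) is disrupted.
class OffloadProcess:
    def __init__(self):
        self.state = {"pc": 42, "heap": [1, 2, 3]}
        self.running = True

    def snapshot(self):
        # Minimum information needed to restore the same state later.
        return {"pc": self.state["pc"], "heap": list(self.state["heap"])}

class Monitor:
    def __init__(self, offload, max_missed=3):
        self.offload = offload
        self.missed = 0
        self.max_missed = max_missed

    def heartbeat(self, received):
        self.missed = 0 if received else self.missed + 1
        if self.missed >= self.max_missed:  # communication disrupted
            self.offload.running = False    # terminate offload process

proc = OffloadProcess()
snap = proc.snapshot()
mon = Monitor(proc)
for _ in range(3):
    mon.heartbeat(received=False)
assert proc.running is False
assert snap == {"pc": 42, "heap": [1, 2, 3]}
```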
Abstract:
A big data processing system includes a memory management engine having stream buffers, real-time views and models, and batch views and models, the stream buffers coupleable to one or more stream processing frameworks to process stream data, the batch models coupleable to one or more batch processing frameworks; one or more processing engines including Join, Group, Filter, Aggregate, Project functional units and classifiers; and a client layer engine communicating with one or more big data applications, the client layer engine handling an output layer, an API layer, and a unified query layer.
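The Filter, Project, and Aggregate functional units named above can be sketched as simple operations over lists of dicts. These simplified, in-memory versions are assumptions for illustration; the real engines would run over stream buffers and batch views.

```python
# Simplified functional units operating on rows represented as dicts.
def filter_(rows, pred):
    return [r for r in rows if pred(r)]

def project(rows, cols):
    return [{c: r[c] for c in cols} for r in rows]

def aggregate(rows, key, col):
    out = {}
    for r in rows:
        out[r[key]] = out.get(r[key], 0) + r[col]
    return out

rows = [
    {"user": "a", "clicks": 3, "region": "us"},
    {"user": "b", "clicks": 5, "region": "eu"},
    {"user": "a", "clicks": 2, "region": "us"},
]
us = filter_(rows, lambda r: r["region"] == "us")
assert project(us, ["user"]) == [{"user": "a"}, {"user": "a"}]
assert aggregate(rows, "user", "clicks") == {"a": 5, "b": 5}
```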
Abstract:
Methods and systems for scheduling jobs to manycore nodes in a cluster include selecting a job to run according to the job's wait time and the job's expected execution time; sending job requirements to all nodes in a cluster, where each node includes a manycore processor; determining at each node whether said node has sufficient resources to ever satisfy the job requirements and, if no node has sufficient resources, deleting the job; creating a list of nodes that have sufficient free resources at a present time to satisfy the job requirements; and assigning the job to a node, based on a difference between an expected execution time and associated confidence value for each node and a hypothetical fastest execution time and associated hypothetical maximum confidence value.
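The node-assignment rule above can be sketched as picking the node whose confidence-weighted expected execution time is closest to the hypothetical fastest time at maximum confidence. The function name and the exact scoring formula are assumptions for illustration.

```python
# Hypothetical scoring: scale each node's expected time by the ratio of
# maximum confidence to that node's confidence, then minimize the gap to
# the hypothetical fastest execution time.
def assign(nodes, fastest_time, max_conf=1.0):
    """nodes: list of (name, expected_time, confidence); returns the
    chosen node name, or None if no node can ever satisfy the job."""
    feasible = [n for n in nodes if n[1] is not None]
    if not feasible:
        return None  # delete the job: no node has sufficient resources

    def score(node):
        _, t, c = node
        return abs(t * (max_conf / c) - fastest_time)

    return min(feasible, key=score)[0]

nodes = [("node-1", 12.0, 0.9), ("node-2", 10.0, 0.5), ("node-3", 11.0, 0.95)]
best = assign(nodes, fastest_time=10.0)
assert best == "node-3"
```

Note that "node-2" has the lowest raw expected time, but its low confidence inflates its weighted estimate, so the more confident "node-3" wins under this illustrative scoring.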
Abstract:
A method in a graph storage and processing system is provided. The method includes storing, in a scalable, distributed, fault-tolerant, in-memory graph storage device, base graph data representative of graphs, and storing, in a real-time, in-memory graph storage device, update graph data representative of graph updates for the graphs with respect to a time threshold. The method further includes sampling the base graph data to generate sampled portions of the graphs and storing the sampled portions, by an in-memory graph sampler. The method additionally includes providing, by a query manager, a query interface between applications and the system. The method also includes forming, by the query manager, graph data representative of a complete graph from at least the base graph data and the update graph data, if any. The method includes processing, by a graph computer, the sampled portions using batch-type computations to generate approximate results for graph-based queries.
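The query manager's merge of base graph data with update graph data can be sketched as below. The edge-set representation and the `+`/`-` operation flags for additions and deletions are assumptions made for illustration.

```python
# Sketch: form a complete graph from base edges plus post-threshold
# updates, where updates may add ("+") or delete ("-") edges.
def complete_graph(base_edges, update_edges):
    edges = set(base_edges)
    for op, edge in update_edges:
        if op == "+":
            edges.add(edge)
        elif op == "-":
            edges.discard(edge)
    return edges

base = {("a", "b"), ("b", "c")}
updates = [("+", ("c", "d")), ("-", ("a", "b"))]
assert complete_graph(base, updates) == {("b", "c"), ("c", "d")}
```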
Abstract:
Methods are provided. A method for swapping-out an offload process from a coprocessor includes issuing a snapify_pause request from a host processor to the coprocessor to initiate a pausing of the offload process executing by the coprocessor and another process executing by the host processor using a plurality of locks. The offload process is previously offloaded from the host processor to the coprocessor. The method further includes issuing a snapify_capture request from the host processor to the coprocessor to initiate a local snapshot capture and saving of the local snapshot capture by the coprocessor. The method also includes issuing a snapify_wait request from the host processor to the coprocessor to wait for the local snapshot capture and the saving of the local snapshot capture to complete by the coprocessor.
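The three-request sequence (snapify_pause, snapify_capture, snapify_wait) can be modeled as below. The `Coprocessor` class, its lock/event fields, and the dict snapshot are stand-ins assumed for illustration, not the actual host-coprocessor runtime API.

```python
import threading

# Sketch of the host-driven pause/capture/wait protocol on a coprocessor.
class Coprocessor:
    def __init__(self):
        self.lock = threading.Lock()   # one of the "plurality of locks"
        self.paused = False
        self.snapshot = None
        self.done = threading.Event()

    def snapify_pause(self):
        with self.lock:
            self.paused = True         # offload process stops progressing

    def snapify_capture(self, state):
        def capture():
            self.snapshot = dict(state)  # local snapshot capture + save
            self.done.set()
        threading.Thread(target=capture).start()

    def snapify_wait(self, timeout=5.0):
        return self.done.wait(timeout)   # host waits for capture to finish

mic = Coprocessor()
mic.snapify_pause()
mic.snapify_capture({"offload_pc": 7})
assert mic.snapify_wait()
assert mic.paused and mic.snapshot == {"offload_pc": 7}
```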
Abstract:
Systems and methods for swapping out and in pinned memory regions between main memory and a separate storage location in a system, including establishing an offload buffer in an interposing library; swapping out pinned memory regions by transferring offload buffer data from a coprocessor memory to a host processor memory, unregistering and unmapping a memory region employed by the offload buffer from the interposing library, wherein the interposing library is pre-loaded on the coprocessor, and collects and stores information employed during the swapping out. The pinned memory regions are swapped in by mapping and re-registering the saved files to the memory region employed by the offload buffer, and transferring the offload buffer data from the host memory back to the re-registered memory region.
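The swap-out/swap-in path can be modeled as below. The pinned region is a `bytearray`, "registering" is tracked with a boolean flag, and a plain dict stands in for host-processor memory; all of these are assumptions for illustration.

```python
# Sketch: swap a pinned region out to host memory and back in.
class PinnedRegion:
    def __init__(self, data):
        self.data = bytearray(data)
        self.registered = True

def swap_out(region, host_store):
    host_store["offload_buffer"] = bytes(region.data)  # copy to host memory
    region.registered = False                          # unregister/unmap
    region.data = bytearray()

def swap_in(region, host_store):
    region.data = bytearray(host_store.pop("offload_buffer"))
    region.registered = True                           # map + re-register

host = {}
region = PinnedRegion(b"offload-data")
swap_out(region, host)
assert not region.registered and region.data == bytearray()
swap_in(region, host)
assert region.registered and region.data == bytearray(b"offload-data")
```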
Abstract:
A runtime method is disclosed that dynamically sets up core containers and thread-to-core affinity for processes running on manycore coprocessors. The method is completely transparent to user applications and incurs low runtime overhead. The method is implemented within a user-space middleware that also performs scheduling and resource management for both offload and native applications using the manycore coprocessors.
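A core-container setup with thread-to-core affinity can be sketched as follows. The contiguous-block partitioning policy and all names are illustrative assumptions, not the middleware's actual scheme.

```python
# Sketch: split coprocessor cores into equal contiguous "containers",
# one per process, and pin threads round-robin within each container.
def build_containers(num_cores, processes):
    """processes: list of (process_name, thread_count)."""
    per = num_cores // len(processes)
    affinity = {}
    for i, (proc, threads) in enumerate(processes):
        cores = list(range(i * per, (i + 1) * per))
        affinity[proc] = {t: cores[t % len(cores)] for t in range(threads)}
    return affinity

aff = build_containers(8, [("offload_app", 3), ("native_app", 2)])
assert aff["offload_app"] == {0: 0, 1: 1, 2: 2}  # container: cores 0-3
assert aff["native_app"] == {0: 4, 1: 5}         # container: cores 4-7
```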
Abstract:
Systems and methods are provided for deploying applications within a wireless network infrastructure, including initiating, by a centralized control module in a pre-configured hardware unit having a 5G wireless communication module, edge computing device, centralized control module, and data processing module with access to cloud resources, a setup procedure upon receiving a deployment command, the setup procedure including activating the 5G wireless communication module to establish a network connection. User equipment for communication with sensors and cameras is deployed using an edge device through the network connection. Application deployment is managed using a centralized control module including an edge cloud optimizer for allocating resources between an edge computing device and the cloud resources based on real-time analysis of network conditions and application requirements. Computing resource allocation between the edge computing device and cloud resources is dynamically adjusted for application requirements and network conditions during automated application deployment and optimization.
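The edge cloud optimizer's allocation decision can be sketched as routing work to the edge when latency requirements are tight and edge capacity suffices, and to the cloud otherwise. The thresholds, field names, and two-way decision rule are assumptions for illustration.

```python
# Sketch: choose edge vs. cloud per application based on latency need
# and currently free edge capacity.
def allocate(app, network):
    latency_tight = app["max_latency_ms"] < network["cloud_rtt_ms"]
    fits_on_edge = app["cpu_need"] <= network["edge_free_cpu"]
    return "edge" if latency_tight and fits_on_edge else "cloud"

network = {"cloud_rtt_ms": 80, "edge_free_cpu": 4}
assert allocate({"max_latency_ms": 20, "cpu_need": 2}, network) == "edge"
assert allocate({"max_latency_ms": 200, "cpu_need": 2}, network) == "cloud"
assert allocate({"max_latency_ms": 20, "cpu_need": 8}, network) == "cloud"
```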