Abstract:
A sender device includes a non-transitory memory storage comprising instructions and a temporal control policy, and a processor coupled to the memory. The processor executes the instructions to generate an email, generate a control mechanism for the email, wherein the control mechanism instructs a security server to implement the temporal control policy and wherein the temporal control policy affects a recipient device's use of the email, and integrate the control mechanism into the email to generate an integrated email. The sender device further includes a transmitter coupled to the processor and configured to transmit the integrated email to the security server for the security server to implement the control mechanism.
Abstract:
A sender device includes a non-transitory memory storage comprising instructions and a location control policy, and a processor coupled to the memory. The processor executes the instructions to generate an email, generate a control mechanism for the email, wherein the control mechanism instructs a security server to implement the location control policy and wherein the location control policy affects a recipient device's use of the email, and integrate the control mechanism into the email to generate an integrated email. The sender device further includes a transmitter coupled to the processor and configured to transmit the integrated email to the security server for the security server to implement the control mechanism.
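The policy check described in the two abstracts above (temporal in the first, location-based in the second) can be sketched as a simple gate that the security server applies before releasing the email to a recipient device. All names below are illustrative, not taken from the patents:

```python
from dataclasses import dataclass

# Minimal sketch, assuming the security server receives the recipient
# device's reported region and consults the location control policy the
# sender integrated into the email. Class and field names are hypothetical.

@dataclass(frozen=True)
class LocationPolicy:
    allowed_regions: frozenset  # regions in which the email may be used

def may_use_email(policy, recipient_region):
    """Return True if the recipient device's region satisfies the policy."""
    return recipient_region in policy.allowed_regions

policy = LocationPolicy(allowed_regions=frozenset({"US", "CA"}))
```

A temporal policy would take the same shape, with the check comparing the current time against an allowed window instead of a region set.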
Abstract:
Methods and systems that facilitate efficient and effective adaptive execution mode selection are described. The adaptive execution mode selection is performed in part on-the-fly, and changes to an execution mode (e.g., sequential, parallel, etc.) for a program task can be made. An intelligent adaptive selection can be made among a variety of execution modes. The adaptive execution mode selection can also include selecting parameters associated with the execution modes. A controller receives historical information associated with execution mode selection, engages in training regarding execution mode selection, and adaptively selects an execution mode on-the-fly. The training can use an approach similar to an artificial neural network, in which an automated, guided machine-learning approach establishes correspondences between execution modes and task/input feature definitions based upon historical information. An adaptive selection is performed on-the-fly based on an initial trial run.
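The controller described above can be sketched as follows. This is a deliberately simplified stand-in for the learned model: instead of a neural network, it picks the execution mode used on the most similar past input (all names are hypothetical):

```python
# Minimal sketch of an execution-mode controller that trains on historical
# (input_size, mode, runtime) records and selects a mode on-the-fly.
# The nearest-neighbour lookup stands in for the trained model.

class ModeController:
    def __init__(self):
        self.history = []  # (input_size, mode, runtime) records

    def record(self, input_size, mode, runtime):
        """Store one historical observation for training."""
        self.history.append((input_size, mode, runtime))

    def select_mode(self, input_size):
        """Return the mode used on the most similar past input."""
        if not self.history:
            return "sequential"  # default before any training data exists
        nearest = min(self.history, key=lambda h: abs(h[0] - input_size))
        return nearest[1]
```

An initial trial run, as the abstract mentions, would simply `record()` the observed runtime of each candidate mode before `select_mode()` is consulted.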
Abstract:
The disclosure relates to technology for displaying a notification on a communication device. A communication device receives communications associated with a respective application operating on the communication device. Each communication is identified to determine its sender by accessing contact information stored in storage accessible by the communication device. The communications are filtered based on a prioritization level determined at least in part by information acquired when accessing the contact information, and the user of the communication device is notified of the filtered communications by displaying one or more customized images on a display of the communication device, with a customized image of the one or more customized images being representative of the sender such that the one or more customized images visually overlap with an icon corresponding to the application.
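The filtering step above can be sketched as follows: each incoming communication is matched against stored contact information, assigned a priority, and only communications at or above a threshold survive to produce a badge image. The contact schema and function names are hypothetical:

```python
# Minimal sketch, assuming contact records carry a numeric priority.
# Only communications whose sender meets the threshold are shown.

CONTACTS = {
    "alice@example.com": {"name": "Alice", "priority": 2},
    "bob@example.com": {"name": "Bob", "priority": 0},
}

def filter_notifications(messages, threshold=1):
    """messages: list of (sender_address, text). Returns notifications to show."""
    shown = []
    for sender, text in messages:
        contact = CONTACTS.get(sender)
        priority = contact["priority"] if contact else 0  # unknown senders: lowest
        if priority >= threshold:
            shown.append((contact["name"], text))
    return shown
```

Rendering each surviving entry as a sender-specific image overlapping the application icon is a display-layer concern outside this sketch.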
Abstract:
System and method embodiments are provided for creating data structures for parallel programming. A method for creating data structures for parallel programming includes forming, by one or more processors, one or more data structures, each data structure comprising one or more global containers and a plurality of local containers. Each of the global containers is accessible by all of a plurality of threads in a multi-thread parallel processing environment. Each of the plurality of local containers is accessible only by a corresponding one of the plurality of threads. A global container is split into a second plurality of local containers when items are to be processed in parallel, and two or more local containers are merged into a single global container when a parallel process reaches a synchronization point.
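The split/merge behaviour described above can be sketched as follows. The class and method names are illustrative, not from the patent; in a real implementation each local list would be owned by exactly one thread:

```python
# Minimal sketch of a container that splits into per-thread locals before
# a parallel phase and merges back into one global at a synchronization point.

class SplitMergeContainer:
    def __init__(self, items):
        self.items = list(items)

    def split(self, num_threads):
        """Partition items round-robin into one local container per thread."""
        local_containers = [[] for _ in range(num_threads)]
        for i, item in enumerate(self.items):
            local_containers[i % num_threads].append(item)
        return local_containers

    @staticmethod
    def merge(local_containers):
        """At the synchronization point, merge locals into a single global."""
        merged = SplitMergeContainer([])
        for local in local_containers:
            merged.items.extend(local)
        return merged
```

Because each thread touches only its own local container during the parallel phase, no locking is needed until the merge.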
Abstract:
In one embodiment, a method for predicting false sharing includes running code on a plurality of cores and determining whether there is potential false sharing between a first cache line and a second cache line, where the first cache line is adjacent to the second cache line. The method also includes tracking the potential false sharing and reporting the potential false sharing.
Abstract:
Embodiments are provided for isolating Input/Output (I/O) execution by combining compiler and Operating System (OS) techniques. The embodiments include dedicating selected cores, in multicore or many-core processors, as I/O execution cores, and applying compiler-based analysis to classify I/O regions of program source codes so that the OS can schedule such regions onto the designated I/O cores. During the compilation of a program source code, each I/O operation region of the program source code is identified. During the execution of the compiled program source code, each I/O operation region is scheduled for execution on a preselected I/O core. The other regions of the compiled program source code are scheduled for execution on other cores.
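The scheduling step above can be sketched as follows. Core IDs and region tags are hypothetical; in the described system the "io"/"compute" classification would come from the compiler analysis, and the placement from the OS scheduler:

```python
# Minimal sketch: regions classified by the compiler as "io" are placed on
# the dedicated I/O cores; all other regions go to the remaining cores.

IO_CORES = (0, 1)          # cores reserved for I/O execution (assumed IDs)
COMPUTE_CORES = (2, 3, 4)  # all other cores (assumed IDs)

def schedule_regions(regions):
    """regions: list of (region_name, kind). Returns {region_name: core_id}."""
    placement = {}
    for i, (name, kind) in enumerate(regions):
        pool = IO_CORES if kind == "io" else COMPUTE_CORES
        placement[name] = pool[i % len(pool)]  # simple round-robin per call
    return placement
```

The key property is the partition itself: no region tagged "io" ever lands on a compute core, and vice versa, which is what isolates I/O execution.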
Abstract:
A method for operating a multithread processing system is provided, including assigning, by a controller, a subset of a plurality of tasks to a plurality of threads during a time N, collecting, by the controller, data during the time N concerning the operation of the plurality of threads, analyzing, by the controller, the data to determine at least one condition concerning the operation of the plurality of threads during the time N, and adjusting, by the controller, a number of the plurality of threads available in time N+1 in accordance with the at least one condition.
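The adjustment step above can be sketched as a feedback rule: after interval N the controller inspects per-thread utilization collected during that interval and grows or shrinks the pool for interval N+1. The thresholds and function name are hypothetical:

```python
# Minimal sketch of the controller's adjustment rule. "utilizations" is the
# per-thread data collected during time N; the return value is the thread
# count for time N+1.

def adjust_thread_count(current, utilizations, high=0.9, low=0.3):
    """Grow the pool when threads are saturated, shrink it when mostly idle."""
    mean = sum(utilizations) / len(utilizations)
    if mean > high:
        return current + 1   # condition: threads saturated, add capacity
    if mean < low and current > 1:
        return current - 1   # condition: threads mostly idle, shrink
    return current           # within band: leave the pool unchanged
```

Mean utilization stands in for the "at least one condition" of the abstract; a real controller could weigh queue depth, contention, or other collected data the same way.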
Abstract:
In one embodiment, a method for predicting false sharing includes running code on a plurality of cores and tracking potential false sharing in the code while running the code to produce tracked potential false sharing, where tracking the potential false sharing includes determining whether there is potential false sharing between a first cache line and a second cache line, where the first cache line is adjacent to the second cache line. The method also includes reporting potential false sharing in accordance with the tracked potential false sharing to produce a false sharing report.
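The tracking step in the two false-sharing abstracts above can be sketched as follows. The access-trace format is hypothetical; the idea is that a cache line touched by multiple threads, or whose adjacent line is touched by a different thread, is flagged as potential false sharing:

```python
# Minimal sketch: group recorded accesses by cache line, then flag lines
# where multiple threads collide, or where a *different* thread touches
# the adjacent line (the first/second cache line case in the abstracts).

CACHE_LINE = 64  # assumed cache-line size in bytes

def find_potential_false_sharing(accesses):
    """accesses: list of (thread_id, byte_address). Returns flagged line indices."""
    threads_by_line = {}
    for tid, addr in accesses:
        threads_by_line.setdefault(addr // CACHE_LINE, set()).add(tid)
    report = []
    for line, threads in sorted(threads_by_line.items()):
        neighbour = threads_by_line.get(line + 1, set())
        if len(threads) > 1 or (neighbour - threads):
            report.append(line)  # potential false sharing on this line
    return report
```

This only reports *potential* false sharing, as the abstracts do: whether the accesses actually contend depends on timing and on whether the fields share a line by layout rather than by intent.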