Abstract:
A method of parallel processing an ordered input data stream that includes a plurality of input data elements and a corresponding plurality of order keys for indicating an ordering of the input data elements, with each order key associated with one of the input data elements, includes processing the input data stream in a parallel manner with a plurality of worker units, thereby generating a plurality of sets of output data elements. The plurality of sets of output data elements is stored in a plurality of buffers, with each buffer associated with one of the worker units. An ordered output data stream is output while the input data stream is being processed by outputting selected output data elements from the buffers in an order that is based on the order keys.
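As a rough illustration (not the patented implementation), the sketch below assumes each worker appends its results, in ascending order-key order, to its own buffer, and a merger performs a k-way merge over the buffer heads so ordered output can be emitted while the workers are still producing. All class and variable names (OrderedParallelMerge, Keyed, DONE, and so on) are invented for the example.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: workers process strided partitions of (orderKey, element) pairs and append
// results to per-worker buffers; a merger emits output in order-key order as it arrives.
public class OrderedParallelMerge {
    record Keyed(long key, String value) {}                      // order key + output element
    static final Keyed DONE = new Keyed(Long.MAX_VALUE, null);   // end-of-buffer sentinel

    public static void main(String[] args) throws Exception {
        int workers = 4;
        List<BlockingQueue<Keyed>> buffers = new ArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int w = 0; w < workers; w++) {
            BlockingQueue<Keyed> buf = new LinkedBlockingQueue<>();
            buffers.add(buf);
            final int id = w;
            pool.submit(() -> {
                // Each worker sees a strided partition; keys stay ascending per worker.
                for (long key = id; key < 20; key += workers) {
                    buf.add(new Keyed(key, "processed-" + key));  // simulated work
                }
                buf.add(DONE);
            });
        }

        // k-way merge: always emit the smallest key among the current buffer heads.
        Keyed[] heads = new Keyed[workers];
        for (int w = 0; w < workers; w++) heads[w] = buffers.get(w).take();
        while (true) {
            int min = -1;
            for (int w = 0; w < workers; w++)
                if (heads[w] != DONE && (min < 0 || heads[w].key() < heads[min].key())) min = w;
            if (min < 0) break;                           // all buffers drained
            System.out.println(heads[min].key() + " -> " + heads[min].value());
            heads[min] = buffers.get(min).take();         // refill the consumed head
        }
        pool.shutdown();
    }
}
```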
Abstract:
A method of analyzing a data parallel query includes receiving a user-specified data parallel query that includes a plurality of query operators. An operator type for each of the query operators is identified based on a type of parallel input data structure the operator operates on and a type of parallel output data structure the operator outputs. It is determined whether the query is a prohibited query based on the identified operator types.
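A minimal sketch of one way such an analysis could work, assuming each operator is tagged with the kind of parallel data structure it consumes and the kind it produces, and a query is flagged as prohibited when adjacent operators' output and input kinds do not line up; the Kind and Operator names are hypothetical.

```java
import java.util.*;

// Sketch: classify each operator by its input/output data-structure kind and reject
// queries whose adjacent operators are incompatible.
public class QueryAnalyzer {
    enum Kind { SEQUENTIAL, PARTITIONED }
    record Operator(String name, Kind consumes, Kind produces) {}

    static boolean isProhibited(List<Operator> query) {
        for (int i = 1; i < query.size(); i++) {
            if (query.get(i - 1).produces() != query.get(i).consumes()) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<Operator> ok = List.of(
            new Operator("Partition", Kind.SEQUENTIAL,  Kind.PARTITIONED),
            new Operator("Select",    Kind.PARTITIONED, Kind.PARTITIONED),
            new Operator("Merge",     Kind.PARTITIONED, Kind.SEQUENTIAL));
        List<Operator> bad = List.of(
            new Operator("Select", Kind.PARTITIONED, Kind.PARTITIONED),
            new Operator("Sort",   Kind.SEQUENTIAL,  Kind.SEQUENTIAL));
        System.out.println(isProhibited(ok));   // false
        System.out.println(isProhibited(bad));  // true
    }
}
```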
Abstract:
A membership interface provides procedure headings to add and remove elements of a data collection, without specifying the organizational structure of the data collection. A membership implementation associated with the membership interface provides thread-safe operations to implement the interface procedures. A blocking-bounding wrapper on the membership implementation provides blocking and bounding support separately from the thread-safety mechanism.
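One way to picture the layering, with invented names (Membership, QueueMembership, BlockingBounded): the membership interface only exposes add/remove, a lock-free queue supplies thread safety, and a separate wrapper adds blocking and bounding with semaphores, independent of the inner thread-safety mechanism.

```java
import java.util.concurrent.*;

// Sketch: membership interface + thread-safe implementation + blocking-bounding wrapper.
public class BlockingBoundedMembership {
    interface Membership<T> {
        boolean tryAdd(T item);
        T tryRemove();                   // null if currently empty
    }

    // Thread-safe membership backed by a lock-free queue (one possible organization).
    static class QueueMembership<T> implements Membership<T> {
        private final ConcurrentLinkedQueue<T> items = new ConcurrentLinkedQueue<>();
        public boolean tryAdd(T item) { return items.offer(item); }
        public T tryRemove() { return items.poll(); }
    }

    // Wrapper adding blocking-on-empty and bounding-on-full via semaphores only.
    static class BlockingBounded<T> {
        private final Membership<T> inner;
        private final Semaphore free, used;
        BlockingBounded(Membership<T> inner, int capacity) {
            this.inner = inner;
            this.free = new Semaphore(capacity);
            this.used = new Semaphore(0);
        }
        void add(T item) throws InterruptedException {
            free.acquire();              // blocks when the bound is reached
            inner.tryAdd(item);
            used.release();
        }
        T remove() throws InterruptedException {
            used.acquire();              // blocks when empty
            T item = inner.tryRemove();
            free.release();
            return item;
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingBounded<String> coll = new BlockingBounded<>(new QueueMembership<>(), 2);
        coll.add("a");
        coll.add("b");
        System.out.println(coll.remove());  // a
    }
}
```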
Abstract:
The present invention extends to methods, systems, and computer program products for indicating parallel operations with user-visible events. Event markers can be used to indicate an abstracted outer layer of execution as well as to expose internal specifics of parallel processing systems, including systems that provide data parallelism. Event markers can be used to show a variety of execution characteristics, including higher-level markers to indicate the beginning and end of an execution program (e.g., a query). Inside the execution program (query), individual fork/join operations can be indicated with sub-levels of markers to expose their operations. Additional decisions made by an execution engine can also be exposed, such as, for example, when elements initially yield, when queries overlap or nest, when the query is cancelled, when the query bails to sequential operation, or when premature merging or re-partitioning is needed.
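A loose sketch of the idea, with invented marker names (QueryBegin, ForkBegin, ElementsYielded, and so on): higher-level markers bracket the whole query, while sub-level markers expose individual fork/join operations and engine decision points to whatever trace viewer consumes them.

```java
import java.util.concurrent.*;

// Sketch: emit nested, user-visible markers around a small fork/join query.
public class ParallelEventMarkers {
    static void emit(String marker) {
        System.out.printf("[%d] %s%n", System.nanoTime(), marker);
    }

    public static void main(String[] args) throws Exception {
        emit("QueryBegin id=1");
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        for (int p = 0; p < 2; p++) {
            final int partition = p;
            emit("ForkBegin partition=" + partition);
            cs.submit(() -> {
                int sum = 0;
                for (int i = partition; i < 10; i += 2) sum += i;   // simulated operator work
                emit("ElementsYielded partition=" + partition);     // engine-level decision point
                return sum;
            });
        }
        int total = 0;
        for (int p = 0; p < 2; p++) total += cs.take().get();
        emit("JoinComplete total=" + total);
        emit("QueryEnd id=1");
        pool.shutdown();
    }
}
```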
Abstract:
A method includes receiving a query that identifies an input data source. A query category for a query operator in the received query is identified. A data source category for the input data source is also identified. A results object is generated based on the identified query category and the identified data source category. The results object supports at least one of random access and sequential access to results produced by the query operator.
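A hypothetical sketch of such a factory: the operator category (element-wise vs. reordering) and the source category (indexible vs. stream-only) decide whether the returned results object advertises random access or only sequential iteration. The category names and the Results interface are assumptions made for the example.

```java
import java.util.*;
import java.util.function.*;

// Sketch: pick a results object shape from the query category and data source category.
public class ResultsObjectFactory {
    enum QueryCategory { ELEMENT_WISE, REORDERING }
    enum SourceCategory { INDEXIBLE, SEQUENTIAL_ONLY }

    interface Results<T> extends Iterable<T> { default boolean isRandomAccess() { return false; } }

    static <T, R> Results<R> makeResults(QueryCategory qc, SourceCategory sc,
                                         List<T> source, Function<T, R> op) {
        if (qc == QueryCategory.ELEMENT_WISE && sc == SourceCategory.INDEXIBLE) {
            // Random access is safe: result i depends only on source element i.
            List<R> materialized = new ArrayList<>();
            for (T t : source) materialized.add(op.apply(t));
            return new Results<>() {
                public Iterator<R> iterator() { return materialized.iterator(); }
                public boolean isRandomAccess() { return true; }
            };
        }
        // Otherwise fall back to sequential (iterator-only) access.
        return () -> source.stream().map(op).iterator();
    }

    public static void main(String[] args) {
        Results<Integer> r = makeResults(QueryCategory.ELEMENT_WISE, SourceCategory.INDEXIBLE,
                                         List.of(1, 2, 3), x -> x * 10);
        System.out.println(r.isRandomAccess());   // true
        r.forEach(System.out::println);
    }
}
```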
Abstract:
Partitioning query execution work of a sequence that includes a plurality of elements. A method includes a worker core requesting work from a work queue. In response, the worker core receives a task from the work queue. The task is a replicable sequence-processing task that includes two distinct steps: scheduling a copy of the task on the work queue and processing a sequence. The worker core processes the task by creating a replica of the task, placing the replica on the work queue, and beginning to process the sequence. These acts are repeated for one or more additional worker cores, where receiving a task from the work queue is performed by receiving one or more replicas that were placed on the work queue when a different worker core earlier created and enqueued a replica of the task.
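A simplified sketch of the replication pattern, assuming a shared blocking queue as the work queue and an atomic cursor over the sequence; the names (replicableTask, workQueue, nextIndex) are invented for the example.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Sketch: a self-replicating task first re-enqueues a copy of itself (so other idle
// workers can join in) and then consumes elements from a shared sequence cursor.
public class ReplicableTask {
    static final int WORKERS = 4;
    static final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
    static final AtomicInteger nextIndex = new AtomicInteger(0);   // shared sequence cursor
    static final int LENGTH = 20;

    static Runnable replicableTask() {
        return () -> {
            workQueue.offer(replicableTask());   // step 1: schedule a replica for another worker
            int i;
            while ((i = nextIndex.getAndIncrement()) < LENGTH) {   // step 2: process the sequence
                System.out.println(Thread.currentThread().getName() + " processed element " + i);
            }
        };
    }

    public static void main(String[] args) throws Exception {
        workQueue.offer(replicableTask());       // seed the queue with one task
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        for (int w = 0; w < WORKERS; w++) {
            pool.submit(() -> {
                Runnable task = workQueue.poll(200, TimeUnit.MILLISECONDS);  // worker requests work
                if (task != null) task.run();
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```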
Abstract:
A concurrent grouping operation for execution on a multiple core processor is provided. The grouping operation is provided with a sequence or set of elements. In one phase, each worker receives a partition of the sequence of elements to be grouped. The elements of each partition are arranged into a data structure, which includes one or more keys where each key corresponds to a value list of one or more of the received elements associated with that key. In another phase, the data structures created by each worker are merged so that the keys and corresponding elements for the entire sequence of elements exist in one data structure. Recursive merging can be completed in constant time that is not proportional to the length of the sequence.
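A minimal two-phase sketch, assuming the example task of grouping strings by their first letter: each worker builds a local key-to-list map for its partition, and the per-worker maps are then merged into one map covering the whole sequence. The names and partitioning scheme are invented for the example.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: phase 1 builds per-worker key -> value-list maps; phase 2 merges them.
public class ConcurrentGroupBy {
    public static void main(String[] args) throws Exception {
        List<String> elements = List.of("apple", "avocado", "banana", "blueberry", "cherry", "cranberry");
        int workers = 2;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Map<Character, List<String>>>> partials = new ArrayList<>();

        // Phase 1: each worker groups its partition by first letter.
        int chunk = (elements.size() + workers - 1) / workers;
        for (int w = 0; w < workers; w++) {
            List<String> partition = elements.subList(w * chunk, Math.min(elements.size(), (w + 1) * chunk));
            partials.add(pool.submit(() -> {
                Map<Character, List<String>> local = new HashMap<>();
                for (String s : partition)
                    local.computeIfAbsent(s.charAt(0), k -> new ArrayList<>()).add(s);
                return local;
            }));
        }

        // Phase 2: merge the per-worker maps into a single map
        // (done sequentially here for brevity; pairwise/recursive merging works the same way).
        Map<Character, List<String>> merged = new HashMap<>();
        for (Future<Map<Character, List<String>>> f : partials)
            f.get().forEach((k, v) -> merged.merge(k, v, (a, b) -> { a.addAll(b); return a; }));

        System.out.println(merged);   // {a=[apple, avocado], b=[banana, blueberry], c=[cherry, cranberry]}
        pool.shutdown();
    }
}
```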
Abstract:
A method of translating a comprehension into executable code for execution on a SIMD (Single Instruction, Multiple Data) execution unit includes receiving a user-specified comprehension. The comprehension is compiled into a first set of executable code. An intermediate representation is generated based on the first set of executable code. The intermediate representation is translated into a second set of executable code that is configured to be executed by a SIMD execution unit.
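Since the actual code generation is not spelled out here, the following sketch only mimics the pipeline: a hypothetical comprehension is lowered to a tiny intermediate representation (a list of element-wise ops), which is then executed as a lane-blocked loop standing in for the SIMD code a real backend would emit. The Op type and executeBlocked name are assumptions.

```java
import java.util.*;
import java.util.function.*;

// Sketch: comprehension -> element-wise IR -> lane-blocked execution (stand-in for SIMD).
public class ComprehensionToSimd {
    record Op(String name, IntUnaryOperator fn) {}   // one IR instruction per element-wise step

    static int[] executeBlocked(int[] data, List<Op> ir, int laneWidth) {
        int[] out = Arrays.copyOf(data, data.length);
        for (int base = 0; base < out.length; base += laneWidth) {
            int end = Math.min(out.length, base + laneWidth);
            for (Op op : ir)                                  // same op applied to every lane
                for (int lane = base; lane < end; lane++)
                    out[lane] = op.fn().applyAsInt(out[lane]);
        }
        return out;
    }

    public static void main(String[] args) {
        // IR for the comprehension "x * 2 + 1 for x in data".
        List<Op> ir = List.of(new Op("mul2", x -> x * 2), new Op("add1", x -> x + 1));
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(Arrays.toString(executeBlocked(data, ir, 4)));  // [3, 5, 7, 9, 11, 13, 15, 17]
    }
}
```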
Abstract:
A method includes compiling an expression into executable code that is configured to create a data structure that represents the expression. The expression includes a plurality of sub-expressions. The code is executed to create the data structure. The data structure is evaluated using a plurality of concurrent threads, thereby processing the expression in a parallel manner.
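As an illustration only, the sketch below represents an expression as a tree of sub-expressions and evaluates independent subtrees concurrently with Java's fork/join framework; the node types and task class are invented for the example.

```java
import java.util.concurrent.*;

// Sketch: build an expression tree, then evaluate independent subtrees in parallel.
public class ParallelExpressionEval {
    sealed interface Expr permits Const, Add, Mul {}
    record Const(long value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}
    record Mul(Expr left, Expr right) implements Expr {}

    static class EvalTask extends RecursiveTask<Long> {
        private final Expr expr;
        EvalTask(Expr expr) { this.expr = expr; }
        protected Long compute() {
            if (expr instanceof Const c) return c.value();
            Expr l, r;
            boolean add;
            if (expr instanceof Add a) { l = a.left(); r = a.right(); add = true; }
            else { Mul m = (Mul) expr; l = m.left(); r = m.right(); add = false; }
            EvalTask leftTask = new EvalTask(l);
            leftTask.fork();                          // evaluate the left subtree concurrently
            long right = new EvalTask(r).compute();   // evaluate the right subtree on this thread
            long left = leftTask.join();
            return add ? left + right : left * right;
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * (4 + 5)
        Expr tree = new Mul(new Add(new Const(2), new Const(3)), new Add(new Const(4), new Const(5)));
        System.out.println(ForkJoinPool.commonPool().invoke(new EvalTask(tree)));  // 45
    }
}
```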
Abstract:
A method of resizing a concurrently accessed hash table is disclosed. The method includes acquiring the locks in the hash table. The hash table, in a first state, is dynamically reconfigured in size into a second state. Additionally, the number of locks is dynamically adjusted based on comparing the size of the hash table in the second state to the size of the hash table in the first state.
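A rough sketch of one possible striped-lock design (not the claimed method): resizing acquires every lock, rebuilds the bucket array, and then grows the lock array when the new table is much larger than the old one; writers re-check the lock array after locking in case a resize swapped it underneath them. All names and the growth policy are assumptions made for the example.

```java
import java.util.*;
import java.util.concurrent.locks.*;

// Sketch: striped-lock hash table whose resize holds all locks and may grow the stripe count.
public class StripedLockHashTable<K, V> {
    private volatile List<List<Map.Entry<K, V>>> buckets;
    private volatile ReentrantLock[] locks;

    public StripedLockHashTable(int initialBuckets, int initialLocks) {
        buckets = newBuckets(initialBuckets);
        locks = newLocks(initialLocks);
    }

    private static <K, V> List<List<Map.Entry<K, V>>> newBuckets(int n) {
        List<List<Map.Entry<K, V>>> b = new ArrayList<>();
        for (int i = 0; i < n; i++) b.add(new LinkedList<>());
        return b;
    }
    private static ReentrantLock[] newLocks(int n) {
        ReentrantLock[] l = new ReentrantLock[n];
        for (int i = 0; i < n; i++) l[i] = new ReentrantLock();
        return l;
    }

    public void put(K key, V value) {
        int hash = key.hashCode() & 0x7fffffff;
        while (true) {
            ReentrantLock[] current = locks;
            ReentrantLock lock = current[hash % current.length];  // one lock guards a stripe of buckets
            lock.lock();
            try {
                if (current == locks) {                 // lock array unchanged: safe to proceed
                    buckets.get(hash % buckets.size()).add(Map.entry(key, value));
                    return;
                }
            } finally {
                lock.unlock();
            }
            // A resize swapped the lock array while we waited; retry with the new stripes.
        }
    }

    public void resize(int newBucketCount) {
        ReentrantLock[] held = locks;                   // remember exactly which locks we acquire
        for (ReentrantLock lock : held) lock.lock();    // acquire every lock: table is quiescent
        try {
            List<List<Map.Entry<K, V>>> old = buckets;
            buckets = newBuckets(newBucketCount);
            for (List<Map.Entry<K, V>> bucket : old)
                for (Map.Entry<K, V> e : bucket)
                    buckets.get((e.getKey().hashCode() & 0x7fffffff) % newBucketCount).add(e);
            // Adjust the number of locks by comparing the new size with the old size
            // (one possible policy: double the stripes when the table grew 4x or more).
            if (newBucketCount >= old.size() * 4) locks = newLocks(held.length * 2);
        } finally {
            for (ReentrantLock lock : held) lock.unlock();
        }
    }

    public static void main(String[] args) {
        StripedLockHashTable<String, Integer> t = new StripedLockHashTable<>(4, 2);
        t.put("a", 1);
        t.resize(16);
        t.put("b", 2);
    }
}
```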