Abstract:
A priority-apportioning arrangement for computers containing processors of two types: a high-priority type able to determine its own priority relative to processors of a second, low-priority type when using a common bus. The arrangement comprises a first logic circuit (20) whose first input is activated on an access request from one of the low-priority units (3a-3h), whose second input is activated on an access request from the high-priority unit (1), and whose third input is activated for the whole time the bus is in use, and whose two outputs assign the bus to a low-priority unit or to the high-priority unit. The arrangement also comprises a second logic circuit (40) with two inputs, one of which senses that the high-priority unit desires access and the other that this access may be delayed; the circuit also has two outputs, one of which indicates to the first logic circuit that the access request from the high-priority unit is present, and the other that the bus is occupied. When the input signal to the second logic circuit indicates that granting the bus to the high-priority unit may be delayed, the arrangement has time to grant the bus to a low-priority unit, but the high-priority unit retains immediate access to the bus as soon as the low-priority unit has finished its task.
Abstract:
The invention involves local media rendering of a multi-party call, performed by a Client User Equipment (1). Each party in the call encodes its media and sends it as a media stream to a Media server (2), and the Media server receives a request for media streams from each Client User Equipment, each media stream in the request being associated with a client priority. The Media server selects the media streams to send to each Client User Equipment based on the request, and further such that the number of streams does not exceed a determined maximum number, which is based, e.g., on the available bandwidth.
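The stream selection described above can be sketched as follows. This is an illustrative model only; the names `select_streams` and `max_streams` are not from the abstract, which does not specify the selection policy beyond the priority ordering and the cap.

```python
def select_streams(requested, max_streams):
    """Pick at most max_streams media streams for one client,
    highest client priority first.

    requested: list of (stream_id, priority) pairs, where a larger
    priority value means the stream matters more to this client.
    """
    # Rank the client's requested streams by its own priorities.
    ranked = sorted(requested, key=lambda sp: sp[1], reverse=True)
    # Cap the selection at the maximum (e.g. derived from bandwidth).
    return [stream_id for stream_id, _ in ranked[:max_streams]]
```

For example, with a cap of 2 streams, a client requesting streams with priorities 3, 1 and 2 would receive only the two highest-priority ones.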
Abstract:
In general, the invention is directed towards a multiprocessing system in which jobs are speculatively executed in parallel by multiple processors (30-1, 30-2, ..., 30-N). By speculating on the existence of more coarse-grained parallelism, so-called job-level parallelism, and backing off to sequential execution only in cases where dependencies that prevent parallel execution of jobs are detected, a high degree of parallelism can be extracted. According to the invention, a private memory buffer is speculatively allocated for holding the results of a speculatively executed job, such as a communication message, an operating system call or a new job signal, and these results are speculatively written directly into the allocated memory buffer. When commit priority is assigned to the speculatively executed job, a pointer referring to the allocated memory buffer is transferred to an input/output (10) device, which may access the memory buffer by means of the transferred pointer. In this way, by speculatively writing messages and signals into private memory buffers, even further parallelism can be extracted.
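The buffer-and-commit mechanism above can be illustrated with a minimal sketch. The class and method names are hypothetical; a Python list stands in for the private memory buffer, and passing its reference stands in for transferring the pointer.

```python
class IODevice:
    """Stand-in for the I/O device: it only ever sees the buffer
    reference handed over at commit, never in-flight speculative data."""
    def __init__(self):
        self.queue = []

    def enqueue(self, buffer_ref):
        self.queue.append(buffer_ref)


class SpeculativeJob:
    def __init__(self):
        # Private memory buffer, speculatively allocated for this job.
        self.buffer = []

    def write(self, result):
        # Results (messages, signals, ...) are written speculatively,
        # directly into the private buffer.
        self.buffer.append(result)

    def commit(self, io_device):
        # On commit priority, only a reference ("pointer") to the
        # buffer is transferred to the I/O device.
        io_device.enqueue(self.buffer)

    def flush(self):
        # Dependency detected: discard the buffer; since nothing was
        # handed to the I/O device, no side effects have escaped.
        self.buffer = []
```

The point of the design is that speculative writes are invisible outside the job until commit, so jobs can run in parallel without their messages interleaving incorrectly.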
Abstract:
The invention generally relates to a processor developed for a service network that provides various services to a plurality of users connected to the network. The processor (30; 50) comprises a job queue (52) with a number of storage positions for storing job signals corresponding to jobs that form part of substantially independent services requested by the users of the network, and a plurality of parallel processing units (54) which independently process job signals from different storage positions of the job queue (52) to execute corresponding jobs in parallel. As a number of jobs are executed speculatively, a unit (56) for checking for possible dependencies between the executed jobs is incorporated into the processor. If a dependency is detected for a speculative job, that job is flushed. To ensure prompt and proper service for the users of the service network, flushed jobs are quickly restarted directly from the job queue.
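The flush-and-restart behaviour described above can be sketched in a few lines. This is a toy serialization of the idea, with hypothetical names; in the actual processor the jobs run on parallel units (54) and the dependency check is done by a dedicated unit (56).

```python
def run_jobs(job_queue, depends_on_earlier):
    """Execute jobs speculatively; when a dependency on an earlier job
    is detected, flush the result and restart the job directly from its
    position in the job queue."""
    results = {}
    restarted = []
    for pos, job in enumerate(job_queue):
        out = job()                    # speculative execution
        if depends_on_earlier(pos):    # dependency check (unit 56)
            restarted.append(pos)      # flushed result is discarded...
            out = job()                # ...job restarts from the queue
        results[pos] = out
    return results, restarted
```

Restarting directly from the job queue, rather than re-requesting the job from the network, is what keeps service to the users prompt after a flush.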
Abstract:
The present invention presents a processing system (1) comprising a number of memory-sharing processors (10a-e) arranged for parallel processing of jobs, and data consistency means (14) for assuring data consistency. The processing system (1) comprises a scheduler (17) for scheduling jobs for execution on the processors (10a-e) according to a first algorithm. The processing system (1) according to the present invention further uses means for retiring the jobs (18) in an order given by a second algorithm, preferably according to a global order of creation. The second algorithm is different from the first algorithm. The first algorithm may be adjusted to the particular system used, and may base the scheduling to a particular processor on e.g. the source, target, communication channel or creating processor for the job in question. The processing system (1) uses a common job queue (16), and the scheduling is preferably performed adaptively.
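The separation between scheduling order and retirement order can be made concrete with a sketch of the second algorithm in its preferred form, retirement in global creation order. The function name and the heap-based bookkeeping are illustrative assumptions, not from the abstract.

```python
import heapq

def retire_in_creation_order(completed, expected_next):
    """Retire finished jobs strictly in global creation order, even
    though the scheduler (the first algorithm) may complete them in
    any order.

    completed: creation sequence numbers in the order jobs finish.
    expected_next: creation number of the next job allowed to retire.
    """
    pending = []   # min-heap of finished-but-not-yet-retired jobs
    retired = []
    for seq in completed:            # completion order is arbitrary
        heapq.heappush(pending, seq)
        # Retire every job whose turn has come, in creation order.
        while pending and pending[0] == expected_next:
            retired.append(heapq.heappop(pending))
            expected_next += 1
    return retired
```

For instance, jobs created as 0, 1, 2, 3 but completed as 2, 0, 1, 3 are still retired as 0, 1, 2, 3, which is what preserves data consistency regardless of how adaptively the scheduler distributes work.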
Abstract:
A method and system are described for synchronizing a first processor unit with a second processor unit in a fault tolerant system comprising a plurality of processing units where all processing units are executing the same processes in synchronization. The invention is readily adapted to a system of loosely coupled processing units with a low bandwidth communication channel coupled between the processing units.
Abstract:
To achieve a highly efficient upgrade of software in computer-based systems, a message conversion apparatus (34) comprises an interface unit (36) for message conversion information (MCI) describing at least one message exchanged in a software processing system before and after an upgrade of the software processing system. A message conversion means (38, 40) is also provided to convert the message between the old and new representations for the upgraded software processing system, in compliance with the specifications given in the message conversion information (MCI). It is therefore possible to introduce a disturbance-free upgrade of software in computer-based systems with minimized system downtime.
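A minimal sketch of such a conversion step, assuming the MCI takes the simple form of a field-renaming table (the abstract does not specify the MCI format, so this representation is a hypothetical example):

```python
def convert_message(message, mci):
    """Convert a message between old and new representation according
    to message conversion information (MCI), here modelled as a table
    mapping old field names to new field names; unlisted fields pass
    through unchanged."""
    return {mci.get(field, field): value for field, value in message.items()}
```

With such a table in both directions, old and new software versions can keep exchanging messages during the upgrade, which is the source of the claimed disturbance-free behaviour.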
Abstract:
A data processing system executes two instruction sequences in an order determined in advance. The executions include selection of read/write instructions containing read/write addresses. With the aid of the instructions, a main memory common to both sequences is activated for reading/writing data information. During execution of the sequence that is second in the order, data information is used which is not guaranteed in advance to be independent of the data information obtained during execution of the sequence that is first in the order. Increased data handling capacity is achieved in the following manner: both sequences are initially executed in parallel. During execution of the first sequence, the main memory is prevented from being activated for writing by the second sequence's write instructions. A write address and data information included in a write instruction associated with the second sequence are intermediate-stored. The intermediate-stored write address is compared with the read addresses of the second sequence, and when the addresses match, data information is prevented from being read from the main memory; the intermediate-stored data information is read instead. An address included in a read instruction associated with the second sequence is intermediate-stored if this address has not previously been selected in conjunction with a write instruction associated with the second sequence. The intermediate-stored read address is compared with the write addresses of the first sequence, and execution of the second sequence is restarted when the addresses match. When the first sequence has finished executing, the intermediate-stored data information is transferred, with the aid of the intermediate-stored write address, to the main memory.
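The intermediate-storage mechanism above can be sketched as a small model of the second sequence's buffering. The class and method names are illustrative, and dictionaries stand in for the intermediate store and the main memory.

```python
class SecondSequence:
    """Sketch of the second sequence's intermediate storage: its writes
    are buffered instead of reaching main memory; its reads check the
    buffer first; its read addresses are tracked so that a matching
    write by the first sequence forces a restart."""
    def __init__(self, memory):
        self.memory = memory
        self.write_buffer = {}   # intermediate-stored write addr -> data
        self.read_set = set()    # intermediate-stored read addresses

    def write(self, addr, data):
        # Main memory is not touched while the first sequence runs.
        self.write_buffer[addr] = data

    def read(self, addr):
        if addr in self.write_buffer:
            # Addresses match: read the intermediate-stored data
            # instead of main memory.
            return self.write_buffer[addr]
        # Track the read address so a later write by the first
        # sequence to it can be detected.
        self.read_set.add(addr)
        return self.memory[addr]

    def first_sequence_wrote(self, addr):
        # True means the second sequence read stale data and must restart.
        return addr in self.read_set

    def drain(self):
        # First sequence finished: transfer the intermediate-stored
        # data to main memory using the stored write addresses.
        self.memory.update(self.write_buffer)
        self.write_buffer.clear()
```

In modern terms this combines store-to-load forwarding with read-set conflict detection, though the abstract predates that vocabulary.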
Abstract:
PCT No. PCT/SE85/00429 Sec. 371 Date Jun. 27, 1986 Sec. 102(e) Date Jun. 27, 1986 PCT Filed Nov. 1, 1985 PCT Pub. No. WO86/03606 PCT Pub. Date Jun. 19, 1986. A priority apportioning arrangement for computers with processors of two types, namely a first high-priority type which can determine its priority itself in relation to processors of a second low-priority type when using a common bus. The arrangement contains a first logic circuit (20) which has its first input activated on a request for access from one of the low-priority units (3a-3h), its second input activated on a request for access from the high-priority unit (1) and its third input activated during the whole time the bus is used, and which has two outputs for assigning the bus to a low-priority unit or to the high-priority unit. The arrangement furthermore contains a second logic circuit (40) with two inputs, of which one senses that the high-priority unit desires access and the other senses that this access can take place with delay; the circuit also has two outputs, of which one indicates to the first logic circuit that the access request from the high-priority unit is present, and the other indicates that the bus is occupied. When the input signal to the second logic circuit indicates that granting the bus to the high-priority unit can take place with delay, the arrangement has time to grant the bus to a low-priority unit, but the high-priority unit still has immediate access to the bus after termination of the low-priority unit's task.
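The arbitration decision implemented by the two logic circuits can be modelled as a single combinational function. This is a behavioural toy, not the gate-level circuits (20, 40) of the arrangement; the function and signal names are hypothetical.

```python
def grant_bus(high_request, low_request, bus_busy, delay_ok):
    """Toy model of the arbiter's decision: the high-priority unit
    normally wins, unless it has signalled that its access may be
    delayed, in which case a waiting low-priority unit is served first.
    Returns "high", "low", or None (no grant this cycle)."""
    if bus_busy:
        return None            # bus occupied: no new grant
    if high_request and not (delay_ok and low_request):
        return "high"          # immediate access for the priority unit
    if low_request:
        return "low"           # low-priority unit gets its turn
    if high_request:
        return "high"          # delayed high request, no competitor
    return None
```

Note that once the low-priority unit's task ends, `bus_busy` drops and a pending high-priority request is granted on the next decision, which matches the "immediate access after termination" property in the abstract.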
Abstract:
A method and system are described for synchronizing a first processor unit (10) with a second processor unit (20) in a fault tolerant system comprising a plurality of processing units where all processing units are executing the same processes in synchronization. The invention is readily adapted to a system of loosely coupled processing units with a low bandwidth communication channel (16) coupled between the processing units.
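One way such loosely coupled units can verify they remain in step over a low-bandwidth channel is to exchange compact digests of their state rather than the state itself. The abstract does not specify the synchronization protocol, so the following is purely an illustrative assumption.

```python
import hashlib

def state_digest(state):
    """Compact, order-independent digest of a processing unit's state
    (modelled as a dict), small enough to send over a low-bandwidth
    channel instead of shipping the full state."""
    canonical = repr(sorted(state.items())).encode()
    return hashlib.sha256(canonical).hexdigest()

def in_sync(state_a, state_b):
    # Comparing 32-byte digests detects divergence between the units
    # without transferring their full state.
    return state_digest(state_a) == state_digest(state_b)
```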