Abstract:
Aspects of the present disclosure provide methods and apparatus for offloading checksum processing in a user equipment (UE) (e.g., from an application processor to a modem processor). Such offloading may speed up packet processing, increase data rate, and/or free up resources of the application processor for other tasks.
Abstract:
Techniques for processing data for a set of protocols at a transmitter and a receiver are described. In one exemplary embodiment, the receiver decodes received frames to obtain decoded MAC frames, obtains un-ordered RLC frames from the decoded MAC frames, and deciphers the un-ordered RLC frames to obtain deciphered RLC frames. In another exemplary embodiment, the receiver computes partial checksums for RLC frames, computes checksums for IP headers, and computes checksums for TCP/UDP frames based on the partial checksums for the RLC frames and the checksums for the IP headers. In yet another exemplary embodiment, the transmitter generates MAC and RLC headers in a fast memory, transfers these headers to an internal memory, retrieves a block of data from an external memory in a single transaction, moves the block of data and the RLC headers to form RLC frames, and moves the RLC frames and MAC headers to form MAC frames.
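The partial-checksum approach above relies on a property of the Internet (ones'-complement) checksum: sums over fragments can be accumulated separately and folded at the end, as described in RFC 1071. The sketch below illustrates that property only; the function names are illustrative and the abstract does not specify this particular implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Accumulate 16-bit big-endian words of a fragment into a running
   ones'-complement sum. Fragments must start at even byte offsets for
   the partial sums to combine correctly. */
static uint32_t partial_sum(const uint8_t *data, size_t len, uint32_t acc)
{
    while (len > 1) {
        acc += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                       /* odd trailing byte, zero-padded */
        acc += (uint32_t)data[0] << 8;
    return acc;
}

/* Fold carries into the low 16 bits and complement to get the checksum. */
static uint16_t fold(uint32_t acc)
{
    while (acc >> 16)
        acc = (acc & 0xFFFF) + (acc >> 16);
    return (uint16_t)~acc;
}
```

Because the accumulator simply adds, a partial sum computed over each RLC frame's payload can later be combined with the sum over the IP/TCP headers before the final fold, so the payload need not be re-read.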
Abstract:
Efficient data processing apparatus and methods include hardware components which are pre-programmed by software. Each hardware component triggers the other to complete its tasks. After the final pre-programmed hardware task is complete, the hardware component issues a software interrupt.
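The trigger chain described above can be modeled in software as a pre-programmed list of tasks where completion of one kicks off the next, with an interrupt raised only after the last. This is a hypothetical simulation, not the actual hardware design; all names are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_TASKS 8

typedef void (*task_fn)(void);

static task_fn chain[MAX_TASKS];   /* pre-programmed task sequence   */
static size_t  chain_len;
static bool    irq_raised;         /* stand-in for the sw interrupt  */

/* Software pre-programs the hardware components with their tasks. */
static void chain_program(const task_fn *tasks, size_t n)
{
    for (size_t i = 0; i < n && i < MAX_TASKS; i++)
        chain[i] = tasks[i];
    chain_len  = (n < MAX_TASKS) ? n : MAX_TASKS;
    irq_raised = false;
}

/* Each task "triggers" the next; the final one raises the interrupt,
   so software is involved only at the start and the end. */
static void chain_kick(void)
{
    for (size_t i = 0; i < chain_len; i++)
        chain[i]();
    irq_raised = true;
}

/* Two stand-in work items for demonstration (hypothetical). */
static int  tasks_done;
static void demo_decipher(void) { tasks_done++; }
static void demo_checksum(void) { tasks_done++; }
```

The point of the pattern is that software takes no interrupts between steps: it programs the whole chain up front and is notified once, after the final task.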
Abstract:
An apparatus and method for distributed data processing is described herein. A main processor programs a mini-processor to process an incoming data stream. The mini-processor is located in close proximity to hardware components operating on the input data stream. A copy engine is also provided for copying data from multiple protocol data units in a single copy operation.
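A copy engine that moves data from multiple protocol data units in one operation behaves like a gather copy driven by a descriptor list. The sketch below shows that shape under stated assumptions; the descriptor layout and function names are hypothetical, not taken from the abstract.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* One descriptor per PDU payload fragment to be gathered. */
typedef struct {
    const uint8_t *src;
    size_t         len;
} copy_desc;

/* Gather all fragments into a contiguous destination in one call,
   as a hardware copy engine would in a single copy operation.
   Returns the total number of bytes copied. */
static size_t copy_engine_gather(uint8_t *dst, size_t dst_cap,
                                 const copy_desc *descs, size_t n)
{
    size_t off = 0;
    for (size_t i = 0; i < n; i++) {
        if (off + descs[i].len > dst_cap)
            break;                       /* destination full: stop */
        memcpy(dst + off, descs[i].src, descs[i].len);
        off += descs[i].len;
    }
    return off;
}
```

Software builds the descriptor list once and issues a single request, instead of paying per-PDU setup cost for each fragment.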
Abstract:
Multiple memory pools are defined in hardware for operating on data. At least one memory pool has a lower latency than the other memory pools. Hardware components operate directly on data in the lower latency memory pool.
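One simple way to exploit a lower-latency pool is to prefer it when allocating buffers that hardware will touch directly, falling back to the larger pool when it is full. This is a minimal sketch assuming two statically carved pools and a bump allocator; pool sizes and names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define FAST_POOL_SIZE 256     /* e.g. on-chip SRAM: lower latency */
#define SLOW_POOL_SIZE 1024    /* larger, higher-latency pool      */

static uint8_t fast_pool[FAST_POOL_SIZE];
static uint8_t slow_pool[SLOW_POOL_SIZE];
static size_t  fast_used, slow_used;

/* Bump-allocate n bytes, preferring the low-latency pool.
   Sets *from_fast to 1 if the fast pool was used, else 0. */
static void *pool_alloc(size_t n, int *from_fast)
{
    if (fast_used + n <= FAST_POOL_SIZE) {
        void *p = fast_pool + fast_used;
        fast_used += n;
        *from_fast = 1;
        return p;
    }
    if (slow_used + n <= SLOW_POOL_SIZE) {
        void *p = slow_pool + slow_used;
        slow_used += n;
        *from_fast = 0;
        return p;
    }
    return NULL;                /* both pools exhausted */
}
```

Buffers that hardware components operate on directly land in the fast pool first, so the common case avoids the slower memory entirely.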
Abstract:
Incoming data frames are parsed by a hardware component. Headers are extracted and stored in a first location along with a pointer to the associated payload. Payloads are stored in a single, contiguous memory location.
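The header/payload split described above can be sketched as a parser that fills a descriptor table (header plus payload pointer) while packing all payloads back-to-back in one region. The frame layout here, a 1-byte length field followed by a fixed 3-byte header, is an assumption for illustration only.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define HDR_LEN  3
#define MAX_DESC 16

/* Header stored with a pointer to its payload, per the split layout. */
typedef struct {
    uint8_t  header[HDR_LEN];
    uint8_t *payload;          /* points into the contiguous area */
    size_t   payload_len;
} frame_desc;

static frame_desc desc_table[MAX_DESC];
static uint8_t    payload_area[512];    /* single contiguous region */
static size_t     n_desc, payload_used;

/* Parse frames of the (assumed) form [len][3-byte header][payload].
   Returns the number of descriptors filled. */
static size_t parse_stream(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    while (off + 1 + HDR_LEN <= len && n_desc < MAX_DESC) {
        size_t plen = buf[off];
        if (off + 1 + HDR_LEN + plen > len ||
            payload_used + plen > sizeof payload_area)
            break;
        frame_desc *d = &desc_table[n_desc++];
        memcpy(d->header, buf + off + 1, HDR_LEN);
        d->payload     = payload_area + payload_used;
        d->payload_len = plen;
        memcpy(d->payload, buf + off + 1 + HDR_LEN, plen);
        payload_used += plen;
        off += 1 + HDR_LEN + plen;
    }
    return n_desc;
}
```

Keeping the payloads contiguous means a later consumer (or copy engine) can move the assembled data in one pass, while headers remain separately addressable for protocol processing.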