-
Publication No.: WO2012096967A1
Publication Date: 2012-07-19
Application No.: PCT/US2012/020790
Application Date: 2012-01-10
Applicant: APPLE INC. , HARPER, John, S. , DYKE, Kenneth, C. , SANDMEL, Jeremy
Inventor: HARPER, John, S. , DYKE, Kenneth, C. , SANDMEL, Jeremy
IPC: G09G5/00
CPC classification number: G06F3/1431 , G06F2200/1614 , G09G5/12 , G09G5/373 , G09G5/377 , G09G2340/04 , G09G2340/0407 , G09G2340/0435 , G09G2340/0485 , G09G2360/04
Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaling operates asynchronously with respect to the compositing of the graphics content.
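The abstract above describes a hardware scaler that runs asynchronously from the compositor. As a purely illustrative sketch, and not Apple's implementation, the C program below models that relationship in software: the compositor owns an internal framebuffer while a separate scaler thread resamples it into an external framebuffer at its own cadence. All structures and names are hypothetical.

```c
/* Hypothetical model of the "scaling operates asynchronously with respect to
 * compositing" idea: the scaler is a thread that runs at its own rate. */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct {
    uint32_t *pixels;   /* ARGB pixels, row-major */
    int       width;
    int       height;
} framebuffer_t;

static framebuffer_t internal_fb;   /* written by the compositor */
static framebuffer_t external_fb;   /* written by the scaler */
static volatile bool  running = true;

/* Nearest-neighbour scale from src into dst (software stand-in for the scaler). */
static void scale_copy(const framebuffer_t *src, framebuffer_t *dst)
{
    for (int y = 0; y < dst->height; y++) {
        int sy = y * src->height / dst->height;
        for (int x = 0; x < dst->width; x++) {
            int sx = x * src->width / dst->width;
            dst->pixels[y * dst->width + x] = src->pixels[sy * src->width + sx];
        }
    }
}

/* Scaler thread: refreshes the external framebuffer independently of however
 * often the compositor updates internal_fb. */
static void *scaler_thread(void *arg)
{
    (void)arg;
    while (running) {
        scale_copy(&internal_fb, &external_fb);
        usleep(16667);          /* ~60 Hz external refresh, for illustration */
    }
    return NULL;
}

int main(void)
{
    internal_fb = (framebuffer_t){ calloc(1024 * 768, 4), 1024, 768 };
    external_fb = (framebuffer_t){ calloc(1920 * 1080, 4), 1920, 1080 };

    pthread_t tid;
    pthread_create(&tid, NULL, scaler_thread, NULL);

    /* The compositor would keep drawing into internal_fb here... */
    sleep(1);

    running = false;
    pthread_join(tid, NULL);
    free(internal_fb.pixels);
    free(external_fb.pixels);
    return 0;
}
```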
-
Publication No.: WO2013119446A1
Publication Date: 2013-08-15
Application No.: PCT/US2013/023993
Application Date: 2013-01-31
Applicant: APPLE INC.
Inventor: SANDMEL, Jeremy , SCHAFFER, Joshua, H. , PATTERSON, Toby, C. , COFFMAN, Patrick , STAHL, Geoffrey , HARPER, John, S.
IPC: G06F3/14
CPC classification number: G09G5/391 , G06F3/14 , G06F3/1431 , G06T3/40 , G06T3/4076 , G09G5/14 , G09G2320/08 , G09G2340/0407 , G09G2340/0485 , H04N5/4401 , H04N5/46 , H04N7/0122 , H04N9/642 , H04N21/44004 , H04N21/4431 , H04N21/4622 , H04N21/482
Abstract: Systems, methods, and computer readable media for dynamically setting an executing application's display buffer size are described. To ameliorate display device overscan operations, the size of an executing application's display buffer may be set based on the display device's extent and a display mode. In addition, contents of the executing application's display buffer may be operated on as they are moved to a frame buffer based on the display mode. In one mode, for example, display buffer contents may be scaled before being placed into the frame buffer. In another mode, a black border may be placed around display buffer contents as they are placed into the frame buffer. In yet another mode, display buffer contents may be copied into the frame buffer without further processing.
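The three display modes the abstract mentions (scale, black border, direct copy) amount to deciding where the application's display buffer lands in the frame buffer. The C sketch below is a hypothetical illustration of that decision; the types and function are invented for the example and are not Apple API.

```c
/* Hypothetical sketch of the three transfer modes: scale to fit, letterbox
 * with a black surround, and a 1:1 copy with no further processing. */
#include <stdio.h>

typedef struct { int width, height; } extent_t;
typedef struct { int x, y, width, height; } rect_t;
typedef enum { MODE_SCALE, MODE_BORDER, MODE_COPY } display_mode_t;

/* Decide where the application's display buffer is placed in the frame buffer.
 * MODE_BORDER assumes the display buffer is no larger than the frame buffer. */
static rect_t placement_for_mode(extent_t app, extent_t fb, display_mode_t mode)
{
    switch (mode) {
    case MODE_SCALE:   /* stretch to the frame buffer's full extent */
        return (rect_t){ 0, 0, fb.width, fb.height };
    case MODE_BORDER:  /* center at native size; the surround stays black */
        return (rect_t){ (fb.width - app.width) / 2,
                         (fb.height - app.height) / 2,
                         app.width, app.height };
    case MODE_COPY:    /* copy anchored at the origin, no processing */
    default:
        return (rect_t){ 0, 0, app.width, app.height };
    }
}

int main(void)
{
    extent_t app = { 1280, 720 }, fb = { 1920, 1080 };
    rect_t r = placement_for_mode(app, fb, MODE_BORDER);
    printf("blit to (%d,%d) %dx%d\n", r.x, r.y, r.width, r.height);
    return 0;
}
```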
-
Publication No.: WO2008127610A3
Publication Date: 2008-10-23
Application No.: PCT/US2008/004617
Application Date: 2008-04-09
Applicant: APPLE INC. , MUNSHI, Aaftab , SANDMEL, Jeremy
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
IPC: G06F9/50
Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The system includes a host processor, a graphics processing unit (GPU) coupled to the host processor and a memory coupled to at least one of the host processor and the GPU. The parallel computing program is stored in the memory to allocate threads between the host processor and the GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g., GPUs or CPUs, separate from the host processor. Standard data tokens in the programming language schedule a plurality of threads for execution on a plurality of processors, such as CPUs or GPUs, in parallel. Extended data tokens in the programming language implement executables for the plurality of threads according to the schedules from the standard data tokens.
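The API described here resembles what Apple later shipped publicly as OpenCL, though the abstract does not name it. Assuming the standard OpenCL C API as a stand-in rather than the patented implementation, the sketch below shows the host preferring a GPU compute device and falling back to a CPU, one concrete form of allocating execution between the host processor and the GPU.

```c
/* Illustrative only: standard OpenCL C API used as an analogue for the
 * host/GPU allocation the abstract describes. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) { fprintf(stderr, "no OpenCL platform\n"); return 1; }

    /* Prefer a GPU compute device; fall back to the CPU if none is available. */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    if (err != CL_SUCCESS) { fprintf(stderr, "no compute device\n"); return 1; }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* Compute kernels would be enqueued on `queue` here; host functions keep
     * running on the CPU thread that owns the queue. */

    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```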
-
Publication No.: EP2135163B1
Publication Date: 2018-08-08
Application No.: EP08742741.5
Application Date: 2008-04-09
Applicant: Apple Inc.
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
IPC: G06F9/50
CPC classification number: G06F9/5044
Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
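As an illustration of allocation driven by a processing capability requirement, the sketch below again uses the public OpenCL C API as an assumed analogue, not the patented mechanism: it enumerates all physical compute devices and keeps an identifier for each one that meets a made-up compute-unit requirement.

```c
/* Illustrative only: pick compute devices whose capability satisfies an
 * application-stated requirement, then keep their identifiers for scheduling. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) return 1;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndev) != CL_SUCCESS)
        return 1;

    const cl_uint required_compute_units = 4;   /* hypothetical requirement */
    cl_device_id allocated[8];
    cl_uint nalloc = 0;

    for (cl_uint i = 0; i < ndev; i++) {
        cl_uint units = 0;
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(units), &units, NULL);
        if (units >= required_compute_units)
            allocated[nalloc++] = devices[i];   /* identifier kept for scheduling */
    }

    printf("%u of %u devices meet the requirement\n", nalloc, ndev);
    /* A context and queue per allocated device would let the same executable
     * run concurrently on all of them. */
    return 0;
}
```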
-
Publication No.: EP3413198A1
Publication Date: 2018-12-12
Application No.: EP18175404.5
Application Date: 2008-04-09
Applicant: Apple Inc.
Inventor: SANDMEL, Jeremy
IPC: G06F9/50
CPC classification number: G06F9/5044
Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
-
Publication No.: WO2008127623A2
Publication Date: 2008-10-23
Application No.: PCT/US2008/004652
Application Date: 2008-04-09
Applicant: APPLE INC. , MUNSHI, Aaftab , SANDMEL, Jeremy
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
CPC classification number: G06F9/445 , G06F8/41 , G06F9/4843 , G06F9/5044 , G06F9/541
Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in another CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.
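Two ingredients of this abstract, online compilation from source and dependency relations between scheduled executables, map naturally onto the public OpenCL C API. The sketch below uses that API as an assumed analogue and is not the patented scheduler: a kernel is built from source at run time, and a second enqueue is made to wait on the event of the first.

```c
/* Illustrative only: online compilation plus an event-based dependency
 * between two scheduled executions of the same kernel. */
#include <stdio.h>
#include <string.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void add_one(__global int *data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] += 1;\n"
    "}\n";

int main(void)
{
    cl_platform_id platform; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* Online compilation: build the executable from source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add_one", &err);

    int host[256]; memset(host, 0, sizeof(host));
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(host), host, &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    size_t global = 256;
    cl_event first;
    /* First pass over the data. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, &first);
    /* Second pass declared dependent on the first via its event. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 1, &first, NULL);

    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(host), host, 0, NULL, NULL);
    printf("host[0] = %d\n", host[0]);   /* expect 2 after two dependent passes */

    clReleaseEvent(first);
    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```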
-
Publication No.: WO2008127622A2
Publication Date: 2008-10-23
Application No.: PCT/US2008/004648
Application Date: 2008-04-09
Applicant: APPLE INC. , MUNSHI, Aaftab , SANDMEL, Jeremy
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
CPC classification number: G06F9/5044
Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
-
Publication No.: WO2008127604A3
Publication Date: 2008-10-23
Application No.: PCT/US2008/004606
Application Date: 2008-04-09
Applicant: APPLE INC. , MUNSHI, Aaftab , SANDMEL, Jeremy
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
IPC: G06F9/50
Abstract: A method and an apparatus that allocate a stream memory and/or a local memory for a variable in an executable loaded from a host processor to a compute processor, according to whether the compute processor supports a storage capability, are described. The compute processor may be a graphics processing unit (GPU) or a central processing unit (CPU). Alternatively, an application running in a host processor configures storage capabilities in a compute processor, such as a CPU or GPU, to determine a memory location for accessing a variable in an executable executed by a plurality of threads in the compute processor. The configuration and allocation are based on API calls in the host processor.
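As a hypothetical illustration of the capability check this abstract describes, the sketch below uses the public OpenCL C API (an assumed analogue, not the patented mechanism) to ask whether the compute device has dedicated local memory and, if not, to fall back to global (stream) memory for the variable.

```c
/* Illustrative only: query the device's local-memory capability and report
 * where a kernel variable would be placed. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform; cl_device_id dev;
    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_device_local_mem_type mem_type;
    cl_ulong local_size = 0;
    clGetDeviceInfo(dev, CL_DEVICE_LOCAL_MEM_TYPE, sizeof(mem_type), &mem_type, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_LOCAL_MEM_SIZE, sizeof(local_size), &local_size, NULL);

    if (mem_type == CL_LOCAL) {
        /* Dedicated on-chip local memory: a __local kernel variable would be
         * backed here, e.g. via clSetKernelArg(kernel, n, bytes, NULL). */
        printf("local memory supported: %llu bytes\n", (unsigned long long)local_size);
    } else {
        /* No dedicated local storage: the variable would live in global
         * (stream) memory instead. */
        printf("falling back to global/stream memory\n");
    }
    return 0;
}
```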
-
Publication No.: EP2798453A1
Publication Date: 2014-11-05
Application No.: EP13703501.0
Application Date: 2013-01-31
Applicant: Apple Inc.
Inventor: SANDMEL, Jeremy , SCHAFFER, Joshua, H. , PATTERSON, Toby, C. , COFFMAN, Patrick , STAHL, Geoffrey , HARPER, John, S.
IPC: G06F3/14
CPC classification number: G09G5/391 , G06F3/14 , G06F3/1431 , G06T3/40 , G06T3/4076 , G09G5/14 , G09G2320/08 , G09G2340/0407 , G09G2340/0485 , H04N5/4401 , H04N5/46 , H04N7/0122 , H04N9/642 , H04N21/44004 , H04N21/4431 , H04N21/4622 , H04N21/482
Abstract: Systems, methods, and computer readable media for dynamically setting an executing application's display buffer size are described. To ameliorate display device overscan operations, the size of an executing application's display buffer may be set based on the display device's extent and a display mode. In addition, contents of the executing application's display buffer may be operated on as they are moved to a frame buffer based on the display mode. In one mode, for example, display buffer contents may be scaled before being placed into the frame buffer. In another mode, a black border may be placed around display buffer contents as they are placed into the frame buffer. In yet another mode, display buffer contents may be copied into the frame buffer without further processing.
-
Publication No.: EP2135163A2
Publication Date: 2009-12-23
Application No.: EP08742741.5
Application Date: 2008-04-09
Applicant: Apple Inc.
Inventor: MUNSHI, Aaftab , SANDMEL, Jeremy
IPC: G06F9/50
CPC classification number: G06F9/5044
Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.