    • 11. Granted patent
    • System and method for compositing path color in path rendering
    • Publication no.: US09202303B2
    • Publication date: 2015-12-01
    • Application no.: US13112874
    • Filing date: 2011-05-20
    • Inventors: Jeffrey A. Bolz; Mark J. Kilgard
    • IPC: G09G5/00; G06T15/00; G06T11/20
    • CPC: G06T15/005; G06T11/203
    • One embodiment of the present invention sets forth a technique for compositing a rendered path object into an image buffer. A shader program executing within a graphics processing unit (GPU) performs a stenciling operation for the path object and subsequently performs a texture barrier operation, which invalidates caches configured to store texture and frame buffer data within the GPU. The shader program then performs a covering operation for the path object, in which the shader renders color samples for the path object and composites the color samples into an image buffer. The shader program binds to the image buffer for access as both a texture map and a writeable image. Stencil values are reset when corresponding pixels are written once per path object, and texture caches are invalidated via the texture barrier operation, which is performed after each covering operation per path object.
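The stencil-then-cover loop in the abstract can be sketched on the CPU. This is a minimal illustrative model, not the patent's implementation or any real GPU API; all class and function names (`ImageBuffer`, `texture_barrier`, `stencil_pass`, `cover_pass`) are invented for the example. The texture cache models why the barrier is needed: without invalidation, a later path's blend would read stale framebuffer data.

```python
# CPU sketch of "stencil, then cover" compositing with a texture barrier
# after each covering operation. Illustrative only.

class ImageBuffer:
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.color = [[0.0] * w for _ in range(h)]
        self.stencil = [[0] * w for _ in range(h)]
        self.texture_cache = {}  # stale reads would come from here

    def texture_barrier(self):
        """Invalidate cached texels so later reads see fresh framebuffer data."""
        self.texture_cache.clear()

    def sample(self, x, y):
        # Reads go through the (possibly stale) texture cache.
        if (x, y) not in self.texture_cache:
            self.texture_cache[(x, y)] = self.color[y][x]
        return self.texture_cache[(x, y)]

def stencil_pass(buf, path):
    # Mark every pixel covered by the path (here: a set of (x, y) coords).
    for x, y in path:
        buf.stencil[y][x] = 1

def cover_pass(buf, path_color, alpha):
    # Composite into every stenciled pixel exactly once, then reset the
    # stencil so the next path object starts clean.
    for y in range(buf.h):
        for x in range(buf.w):
            if buf.stencil[y][x]:
                dst = buf.sample(x, y)
                buf.color[y][x] = alpha * path_color + (1 - alpha) * dst
                buf.stencil[y][x] = 0

def composite_paths(buf, paths):
    for path, color, alpha in paths:
        stencil_pass(buf, path)
        cover_pass(buf, color, alpha)
        buf.texture_barrier()  # after each covering operation, per the abstract
```

Compositing a second, translucent path over the first only blends correctly because the barrier drops the cached pre-write color between path objects.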
    • 12. Patent application
    • BINDLESS MEMORY ACCESS IN DIRECT 3D
    • Publication no.: US20110242125A1
    • Publication date: 2011-10-06
    • Application no.: US13078848
    • Filing date: 2011-04-01
    • Inventors: Jesse David Hall; Jeffrey A. Bolz
    • IPC: G09G5/00; G06T1/00
    • CPC: G06T1/20
    • One embodiment of the present invention sets forth a method for accessing data objects stored in a memory that is accessible by a graphics processing unit (GPU). The method comprises the steps of creating a data object in the memory based on a command received from an application program, transmitting a first handle associated with the data object to the application program such that data associated with different graphics commands can be accessed by the GPU, wherein the first handle includes a memory address that provides access to only a particular portion of the data object, receiving a first graphics command as well as the first handle from the application program, wherein the first graphics command includes a draw command or a compute grid launch, and transmitting the first graphics command and the first handle to the GPU for processing.
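The bindless-handle flow in the abstract can be modeled as a toy driver: it creates a data object, returns an opaque handle that addresses only a sub-range of the object, and later draw or compute commands carry the handle instead of a bind-slot index. All names here are illustrative, not part of Direct3D or any real driver API.

```python
# Toy model of bindless access via handles. Illustrative only.

class Driver:
    def __init__(self):
        self._memory = {}        # address -> bytearray (stand-in for GPU memory)
        self._next_addr = 0x1000

    def create_data_object(self, data, offset, length):
        """Store data; return a handle granting access only to
        [offset, offset + length) of the object, as the abstract describes."""
        addr = self._next_addr
        self._memory[addr] = bytearray(data)
        self._next_addr += 0x1000
        return {"addr": addr, "offset": offset, "length": length}

    def submit(self, command, handle):
        """A draw command or compute-grid launch that reads through the
        handle's address rather than a bound slot."""
        obj = self._memory[handle["addr"]]
        view = obj[handle["offset"]: handle["offset"] + handle["length"]]
        return command, bytes(view)
```

The point of the handle is that the application can pass it with any number of different graphics commands without rebinding the object each time.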
    • 16. Granted patent
    • Using affinity masks to control multi-GPU processing
    • Publication no.: US08253749B1
    • Publication date: 2012-08-28
    • Application no.: US11683185
    • Filing date: 2007-03-07
    • Inventors: Barthold B. Lichtenbelt; Jeffrey F. Juliano; Jeffrey A. Bolz; Ross A. Cunniff
    • IPC: G06F15/16; G06F9/46
    • CPC: G06F9/5033
    • One embodiment of the present invention sets forth a set of application programming interface (API) extensions that enable a software application to control the processing work assigned to each GPU in a multi-GPU system. The software application enumerates a list of available GPUs, sets an affinity mask from the enumerated list of GPUs and generates an affinity device context associated with the affinity mask. The software application can then generate and utilize an affinity rendering context that directs rendering commands to a set of explicitly selected GPUs, thus allocating work among specifically selected GPUs. The software application is empowered to use domain-specific knowledge to better optimize the work assigned to each GPU, thus achieving greater overall processing efficiency relative to prior art techniques.
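The enumerate-mask-context workflow in the abstract can be sketched as follows. The shape loosely follows the `WGL_NV_gpu_affinity` pattern (enumerate GPUs, build a bitmask, create a context bound to the masked set), but the names here (`enumerate_gpus`, `AffinityContext`) are invented for the example, not a real API.

```python
# Sketch of affinity-mask work distribution across GPUs. Illustrative only.

def enumerate_gpus():
    # Stand-in for driver-side GPU enumeration.
    return ["GPU0", "GPU1", "GPU2"]

class AffinityContext:
    """A rendering context that directs commands only to the GPUs
    selected by an affinity bitmask (bit i selects gpus[i])."""

    def __init__(self, gpus, mask):
        self.targets = [g for i, g in enumerate(gpus) if mask & (1 << i)]

    def render(self, command):
        # Commands issued on this context reach only the selected GPUs.
        return {gpu: command for gpu in self.targets}
```

An application with domain knowledge (say, one GPU drives the display while another is free for offscreen work) would build a different mask per context to balance the load.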
    • 17. Granted patent
    • Bindless memory access in direct 3D
    • Publication no.: US09251551B2
    • Publication date: 2016-02-02
    • Application no.: US13078848
    • Filing date: 2011-04-01
    • Inventors: Jesse David Hall; Jeffrey A. Bolz
    • IPC: G06T1/20
    • CPC: G06T1/20
    • One embodiment of the present invention sets forth a method for accessing data objects stored in a memory that is accessible by a graphics processing unit (GPU). The method comprises the steps of creating a data object in the memory based on a command received from an application program, transmitting a first handle associated with the data object to the application program such that data associated with different graphics commands can be accessed by the GPU, wherein the first handle includes a memory address that provides access to only a particular portion of the data object, receiving a first graphics command as well as the first handle from the application program, wherein the first graphics command includes a draw command or a compute grid launch, and transmitting the first graphics command and the first handle to the GPU for processing.
    • 19. Granted patent
    • GPU virtual memory model for OpenGL
    • Publication no.: US08537169B1
    • Publication date: 2013-09-17
    • Application no.: US12715176
    • Filing date: 2010-03-01
    • Inventors: Jeffrey A. Bolz; Eric S. Werness; Jason Sams
    • IPC: G06T1/00; G06F12/10
    • CPC: G06T1/60; G06F12/1081; G06F2212/302
    • One embodiment of the present invention sets forth a method for accessing, from within a graphics processing unit (GPU), data objects stored in a memory accessible by the GPU. The method comprises the steps of creating a data object in the memory based on a command received from an application program, transmitting an address associated with the data object to the application program for providing data associated with different draw commands to the GPU, receiving a first draw command and the address associated with the data object from the application program, and transmitting the first draw command and the address associated with the data object to the GPU for processing.
    • 20. Patent application
    • SYSTEM AND METHOD FOR LONG RUNNING COMPUTE USING BUFFERS AS TIMESLICES
    • Publication no.: US20130162661A1
    • Publication date: 2013-06-27
    • Application no.: US13333920
    • Filing date: 2011-12-21
    • Inventors: Jeffrey A. Bolz; Jeff Smith; Jesse Hall; David Sodman; Philip Cuadra; Naveen Leekha
    • IPC: G06T1/00
    • CPC: G06T1/20; G06F9/3802; G06F9/3836
    • A system and method for using command buffers as timeslices or periods of execution for a long running compute task on a graphics processor. Embodiments of the present invention allow execution of long running compute applications with operating systems that manage and schedule graphics processing unit (GPU) resources and that may have a predetermined execution time limit for each command buffer. The method includes receiving a request from an application and determining a plurality of command buffers required to execute the request. Each of the plurality of command buffers may correspond to some portion of execution time or timeslice. The method further includes sending the plurality of command buffers to an operating system operable for scheduling the plurality of command buffers for execution on a graphics processor. The command buffers from a different request are time multiplexed within the execution of the plurality of command buffers on the graphics processor.
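The timeslicing scheme in the abstract can be sketched on the CPU: a long compute request is split into command buffers small enough to fit a per-buffer execution limit, and buffers from different requests are time-multiplexed. Function names and the round-robin policy are illustrative assumptions, not the patent's scheduler.

```python
# Sketch of splitting long compute work into per-timeslice command
# buffers and interleaving buffers from different requests. Illustrative only.

def split_into_command_buffers(total_work, slice_size):
    """Break one long request into (start, end) work ranges, one per
    command buffer, so each buffer fits the OS's execution time limit."""
    return [(start, min(start + slice_size, total_work))
            for start in range(0, total_work, slice_size)]

def schedule(requests, slice_size):
    """Round-robin the command buffers of several requests, as an OS that
    enforces a per-buffer time limit might time-multiplex them."""
    queues = [split_into_command_buffers(work, slice_size) for work in requests]
    order = []
    while any(queues):
        for req_id, queue in enumerate(queues):
            if queue:
                order.append((req_id, queue.pop(0)))
    return order
```

Because each buffer covers only a slice of the work, a long-running compute task never exceeds the per-buffer limit, and other requests get GPU time between its slices.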