    • 2. Invention Grant
    • Title: Enhanced tag-based structures, systems and methods for implementing a pool of independent tags in cache memories
    • Publication number: US07796137B1
    • Grant date: 2010-09-14
    • Application number: US11552415
    • Filing date: 2006-10-24
    • Inventors: Dane T. Mrazek; Sameer M. Gauria; James C. Bowman
    • IPC: G09G 5/36; G06F 12/08
    • CPC: G06F 12/0895; G06F 12/0864
    • Abstract: Disclosed are an apparatus, a system, a method, a graphics processing unit (“GPU”), a computer device, and a computer medium to implement a pool of independent enhanced tags to, among other things, decouple a dependency between tags and cachelines. In one embodiment, an enhanced tag-based cache structure includes a tag repository configured to maintain a pool of enhanced tags. Each enhanced tag can have a match portion configured to form an association between the enhanced tag and an incoming address. Also, an enhanced tag can have a data locator portion configured to locate a cacheline in the cache in response to the formation of the association. The data locator portion enables the enhanced tag to locate multiple cachelines. Advantageously, the enhanced tag-based cache structure can be formed to adjust the degree of reusability of the enhanced tags independent from the degree of latency tolerance for the cacheline repository.
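The abstract above describes a cache organization in which the tags live in a pool sized and managed separately from the cacheline store, each enhanced tag pairing a match portion (compared against an incoming address) with a data locator portion that can point at several cachelines. The C++ sketch below is only a minimal illustration of that idea, not the patented implementation; the type names (EnhancedTag, TagPool) and the 128-byte line size are assumptions made for the example.

        #include <array>
        #include <cstddef>
        #include <cstdint>
        #include <optional>
        #include <vector>

        // Illustrative tag pool decoupled from the cacheline store.
        // Type names and the 128-byte line size are assumptions, not the patent's.
        struct EnhancedTag {
            bool valid = false;
            uint64_t match = 0;              // "match portion": address bits this tag matches
            std::vector<uint32_t> locators;  // "data locator portion": indices of the
                                             // cachelines this one tag covers
        };

        class TagPool {
        public:
            // The pool size is chosen independently of the number of cachelines,
            // which is the decoupling the abstract describes.
            TagPool(size_t numTags, size_t numCachelines)
                : tags_(numTags), lines_(numCachelines) {}

            // Return the cacheline indices associated with an address, if any tag matches.
            std::optional<std::vector<uint32_t>> lookup(uint64_t addr) const {
                for (const EnhancedTag& t : tags_)
                    if (t.valid && t.match == (addr >> 7))  // drop the 128-byte line offset
                        return t.locators;
                return std::nullopt;                        // miss: no association formed
            }

        private:
            std::vector<EnhancedTag> tags_;                  // tag repository (the pool)
            std::vector<std::array<uint8_t, 128>> lines_;    // cacheline repository
        };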
    • 5. Invention Grant
    • Title: Transposition structures and methods to accommodate parallel processing in a graphics processing unit (“GPU”)
    • Publication number: US07755631B1
    • Grant date: 2010-07-13
    • Application number: US11552350
    • Filing date: 2006-10-24
    • Inventors: Dane T. Mrazek; Sameer M. Gauria; James C. Bowman
    • IPC: G06F 15/00
    • CPC: G06T 1/00
    • Abstract: Disclosed are an apparatus, a method, a programmable graphics processing unit (“GPU”), a computer device, and a computer medium to facilitate, among other things, the generation of parallel data streams to effect parallel processing in at least a portion of a graphics pipeline of a GPU. In one embodiment, an input of the apparatus receives graphics elements in a data stream of graphics elements. The graphics pipeline can use the graphics elements to form computer-generated images. The apparatus also can include a transposer configured to produce parallel attribute streams. Each of the parallel attribute streams includes a type of attribute common to the graphics elements. In one embodiment, the transposer can be configured to convert at least a portion of the graphics pipeline from a single data stream to multiple data streams (e.g., executable by multiple threads of execution) while reducing the memory size requirements to implement such a conversion.
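The transposer in this abstract converts a single stream of graphics elements, each carrying several attributes, into parallel per-attribute streams that separate threads can consume. The C++ sketch below shows that array-of-structures to structure-of-arrays transposition in a generic form; the Vertex layout and the names are assumptions made for the example, not the patent's structures.

        #include <array>
        #include <vector>

        // Example element: each vertex carries three attribute types.
        // The layout is an assumption made for this illustration.
        struct Vertex {
            std::array<float, 4> position;
            std::array<float, 4> color;
            std::array<float, 2> texcoord;
        };

        // Parallel attribute streams: each stream holds one attribute type
        // common to all elements, so each can be processed by its own thread.
        struct AttributeStreams {
            std::vector<std::array<float, 4>> positions;
            std::vector<std::array<float, 4>> colors;
            std::vector<std::array<float, 2>> texcoords;
        };

        // "Transposer": one pass over the incoming element stream, scattering
        // each element's attributes into the parallel streams.
        AttributeStreams transpose(const std::vector<Vertex>& stream) {
            AttributeStreams out;
            out.positions.reserve(stream.size());
            out.colors.reserve(stream.size());
            out.texcoords.reserve(stream.size());
            for (const Vertex& v : stream) {
                out.positions.push_back(v.position);
                out.colors.push_back(v.color);
                out.texcoords.push_back(v.texcoord);
            }
            return out;
        }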
    • 6. Invention Grant
    • Title: Tuning DRAM I/O parameters on the fly
    • Publication number: US07647467B1
    • Grant date: 2010-01-12
    • Application number: US11642368
    • Filing date: 2006-12-19
    • Inventors: Brian D. Hutsell; Sameer M. Gauria; Philip R. Manela; John A. Robinson
    • IPC: G06F 12/00
    • CPC: G06F 13/4239
    • Abstract: On the fly tuning of parameters used in an interface between a memory (e.g. high speed memory such as DRAM) and a processor requesting access to the memory. In an operational mode, a memory controller couples the processor to the memory. The memory controller can also inhibit the operational mode to initiate a training mode. In the training mode, the memory controller tunes one or more parameters (voltage references, timing skews, etc.) used in an upcoming operational mode. The access to the memory may be from an isochronous process running on a graphics processor. The memory controller determines whether the isochronous process may be inhibited before entering the training mode. If memory buffers for the isochronous process are such that the training mode will not impact the isochronous process, then the memory controller can enter the training mode to tune the interface parameters without negatively impacting the process.
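The controller in this abstract only drops out of the operational mode into the training mode when doing so will not starve an isochronous client such as display scan-out. The C++ sketch below is a hedged illustration of that decision; the class, the buffer-headroom check, and the parameter values are assumptions made for the example, not the patented mechanism.

        #include <cstdint>

        // Tunable interface parameters mentioned in the abstract.
        struct InterfaceParams {
            int voltageRefMillivolts;
            int timingSkewPicoseconds;
        };

        class MemoryController {
        public:
            // Enter the training mode only if the isochronous client's buffered data
            // can cover the pause; otherwise stay in the operational mode.
            // The headroom check below is an invented heuristic for illustration.
            bool maybeTrain(uint64_t isoBufferedBytes,
                            uint64_t isoDrainBytesPerUs,
                            uint64_t trainingDurationUs) {
                if (isoBufferedBytes < isoDrainBytesPerUs * trainingDurationUs)
                    return false;            // training would impact the isochronous process
                inhibitOperationalMode();    // stop normal DRAM traffic
                tuneParameters();            // adjust voltage references, timing skews, ...
                resumeOperationalMode();     // return to the operational mode
                return true;
            }

        private:
            void inhibitOperationalMode() { /* quiesce normal requests */ }
            void resumeOperationalMode()  { /* restart normal requests */ }
            void tuneParameters()         { params_ = {750, 12}; /* placeholder values */ }
            InterfaceParams params_{700, 0};
        };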
    • 7. Invention Grant
    • Title: Look-up filter structures, systems and methods for filtering accesses to a pool of tags for cache memories
    • Publication number: US07571281B1
    • Grant date: 2009-08-04
    • Application number: US11445658
    • Filing date: 2006-06-02
    • Inventor: Sameer M. Gauria
    • IPC: G06F 13/00
    • CPC: G06F 12/0895
    • Abstract: In one embodiment, an apparatus includes an input port to receive a request to determine whether data units are stored in the cache, as well as an output port to generate look-ups for the pool of tags. The apparatus also includes a look-up filter coupled to the input and output ports, and operates to filter out superfluous look-ups for the data units, thereby forming filtered look-ups. Advantageously, the look-up filter can filter out superfluous look-ups to at least reduce the quantity of look-up operations associated with the request, thereby reducing stalling associated with multiple look-up operations. In a specific embodiment, the look-up filter can include a data unit grouping detector and a look-up suppressor.
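The filter in this abstract sits between the request port and the tag pool and drops look-ups that would be redundant, using a data unit grouping detector and a look-up suppressor. The C++ sketch below illustrates one plausible reading, grouping requested data units by cacheline-sized address ranges so that each group triggers at most one look-up; the grouping rule and the names are assumptions made for the example.

        #include <cstdint>
        #include <unordered_set>
        #include <vector>

        // Assume data units that share a 128-byte range map to the same tag,
        // so only one look-up per range is needed (the rest are superfluous).
        constexpr uint64_t kGroupShift = 7;

        // Grouping detector + look-up suppressor in one pass: emit at most one
        // look-up address per group of requested data units.
        std::vector<uint64_t> filterLookups(const std::vector<uint64_t>& dataUnitAddrs) {
            std::unordered_set<uint64_t> seenGroups;
            std::vector<uint64_t> lookups;
            for (uint64_t addr : dataUnitAddrs) {
                const uint64_t group = addr >> kGroupShift;
                if (seenGroups.insert(group).second)          // first data unit in this group
                    lookups.push_back(group << kGroupShift);  // one look-up covers the group
            }
            return lookups;                                   // filtered look-ups for the tag pool
        }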