    • 2. Granted Invention Patent
    • Title: Miss-under-miss processing and cache flushing
    • Publication No.: US08037281B2
    • Publication Date: 2011-10-11
    • Application No.: US11889753
    • Filing Date: 2007-08-16
    • Inventors: Warren F. Kruger; Wade K. Smith
    • IPC: G06F12/00
    • CPC: G06F12/1027; G06F9/30043; G06F12/0862; G06F2212/681; G06F2212/684
    • Abstract: Described herein are systems and methods that reduce the latency which may occur when a level one (L1) cache issues a request to a level two (L2) cache, and that ensure that translation requests sent to an L2 cache are flushed during a context switch. Such a system may include a work queue and a cache (such as an L2 cache). The work queue comprises a plurality of state machines, each configured to store a request for access to memory. The state machines can monitor requests that are stored in the other state machines and requests that the other state machines issue to the cache. A state machine only sends its request to the cache if another state machine is not already awaiting translation data relating to that request. In this way, the request/translation traffic between the work queue and the cache can be significantly reduced.
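The duplicate-request filtering described in the abstract above can be illustrated with a short sketch. This is a minimal, hypothetical model (the class names, the page-granular matching, and the context-switch flush routine are assumptions, not the patented hardware design): each work-queue slot acts like a state machine that either issues a translation request to the L2 cache or piggybacks on an identical request already in flight.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical model of a work queue whose slots ("state machines") merge
// duplicate translation requests before they reach the L2 cache.
struct Slot {
    bool     valid = false;       // slot holds a pending request
    bool     issued = false;      // request was actually sent to the L2 cache
    uint64_t virtualPage = 0;     // page whose translation is awaited
};

class WorkQueue {
public:
    explicit WorkQueue(std::size_t slots) : slots_(slots) {}

    // Returns true if the request had to be issued to the L2 cache,
    // false if it piggybacks on a request another slot already issued.
    bool submit(uint64_t virtualAddress) {
        const uint64_t page = virtualAddress >> 12;   // assume 4 KiB pages
        bool alreadyInFlight = false;
        for (const Slot& s : slots_)
            if (s.valid && s.issued && s.virtualPage == page)
                alreadyInFlight = true;               // another slot awaits this page

        for (Slot& s : slots_) {
            if (s.valid) continue;                    // find a free slot
            s.valid = true;
            s.virtualPage = page;
            s.issued = !alreadyInFlight;              // only one request per page goes out
            return s.issued;
        }
        return false;                                 // queue full (not modeled further)
    }

    // On a context switch the pending translation requests become stale
    // and are flushed, mirroring the flushing behavior in the abstract.
    void contextSwitchFlush() {
        for (Slot& s : slots_) s = Slot{};
    }

private:
    std::vector<Slot> slots_;
};

int main() {
    WorkQueue wq(8);
    std::cout << wq.submit(0x1000) << '\n';  // 1: first miss, issued to L2
    std::cout << wq.submit(0x1ABC) << '\n';  // 0: same page, merged with in-flight request
    std::cout << wq.submit(0x5000) << '\n';  // 1: different page, issued
    wq.contextSwitchFlush();                 // all pending requests discarded
}
```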
    • 3. Granted Invention Patent
    • Title: Binsorter triangle insertion optimization
    • Publication No.: US06424345B1
    • Publication Date: 2002-07-23
    • Application No.: US09418685
    • Filing Date: 1999-10-14
    • Inventors: Wade K. Smith; James T. Battle; Chris J. Goodman
    • IPC: G06T15/00
    • CPC: G06T11/40; G06T15/005
    • Abstract: A method for rendering polygons in a computer graphics system in which the computer display is divided into a plurality of subregions, and the rasterization process is performed in a micro framebuffer for each subregion, rather than sending raster data for each triangle into the frame buffer. Each polygon undergoes a first-stage bounding box intersection test to identify the subregions which are likely to intersect with the polygon. If the number or configuration of intersected subregions exceeds a predetermined threshold requirement, then the polygon undergoes a more precise second-stage intersection test to identify which subregions are actually intersected by the polygon. If the number or configuration of intersected subregions is below the threshold requirement, then the control data for the polygon is passed on to each of the identified subregions.
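A compact sketch of the two-stage insertion described above, under assumptions of my own (the tile size, the threshold value, and the use of a separating-axis test for the precise stage are illustrative choices, not taken from the patent): the cheap bounding-box pass selects candidate tiles, and only when too many tiles are touched does the exact triangle/tile overlap test run.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative two-stage "binsorter" insertion: a cheap bounding-box pass,
// followed by a precise separating-axis test only when many tiles are touched.
struct Vec2 { float x, y; };
struct Triangle { Vec2 v[3]; };

constexpr int kTileSize      = 32;   // assumed tile (subregion) size in pixels
constexpr int kTilesX        = 8;    // assumed 256x256 screen
constexpr int kTilesY        = 8;
constexpr int kTileThreshold = 4;    // assumed threshold for the precise pass

// Precise test: 2D separating-axis check between a triangle and a tile rectangle.
// The x/y axes are already handled by the bounding-box pass, so only the
// triangle's three edge normals need to be checked here.
bool triangleOverlapsTile(const Triangle& t, float x0, float y0, float x1, float y1) {
    const Vec2 corners[4] = {{x0, y0}, {x1, y0}, {x1, y1}, {x0, y1}};
    for (int i = 0; i < 3; ++i) {
        const Vec2 a = t.v[i], b = t.v[(i + 1) % 3];
        const Vec2 n = {a.y - b.y, b.x - a.x};              // edge normal
        float triMin = 1e30f, triMax = -1e30f;
        for (const Vec2& p : t.v) {
            const float d = n.x * p.x + n.y * p.y;
            triMin = std::min(triMin, d); triMax = std::max(triMax, d);
        }
        float boxMin = 1e30f, boxMax = -1e30f;
        for (const Vec2& p : corners) {
            const float d = n.x * p.x + n.y * p.y;
            boxMin = std::min(boxMin, d); boxMax = std::max(boxMax, d);
        }
        if (triMax < boxMin || boxMax < triMin) return false;  // separating axis found
    }
    return true;
}

// Inserts the triangle's index into each tile bin it is judged to cover.
void binTriangle(int index, const Triangle& t, std::vector<std::vector<int>>& bins) {
    const float minX = std::min({t.v[0].x, t.v[1].x, t.v[2].x});
    const float maxX = std::max({t.v[0].x, t.v[1].x, t.v[2].x});
    const float minY = std::min({t.v[0].y, t.v[1].y, t.v[2].y});
    const float maxY = std::max({t.v[0].y, t.v[1].y, t.v[2].y});

    const int tx0 = std::clamp(int(minX) / kTileSize, 0, kTilesX - 1);
    const int tx1 = std::clamp(int(maxX) / kTileSize, 0, kTilesX - 1);
    const int ty0 = std::clamp(int(minY) / kTileSize, 0, kTilesY - 1);
    const int ty1 = std::clamp(int(maxY) / kTileSize, 0, kTilesY - 1);

    const int candidates = (tx1 - tx0 + 1) * (ty1 - ty0 + 1);
    const bool precise = candidates > kTileThreshold;     // stage-two test only when needed

    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx) {
            if (precise &&
                !triangleOverlapsTile(t, float(tx * kTileSize), float(ty * kTileSize),
                                      float((tx + 1) * kTileSize), float((ty + 1) * kTileSize)))
                continue;                                  // candidate rejected by exact test
            bins[ty * kTilesX + tx].push_back(index);
        }
}

int main() {
    std::vector<std::vector<int>> bins(kTilesX * kTilesY);
    const Triangle thin = {{{2, 2}, {200, 10}, {4, 12}}};  // long, thin triangle
    binTriangle(0, thin, bins);
    int used = 0;
    for (const auto& b : bins) used += !b.empty();
    std::printf("triangle inserted into %d tile bins\n", used);
}
```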
    • 4. Granted Invention Patent
    • Title: Computing apparatus and operating method using software queues to improve graphics performance
    • Publication No.: US5949439A
    • Publication Date: 1999-09-07
    • Application No.: US698366
    • Filing Date: 1996-08-15
    • Inventors: Roey Ben-Yoseph; Paul Hsieh; Wade K. Smith
    • IPC: G06F9/46; G06T1/20; G06F15/16
    • CPC: G06F9/546; G06T1/20
    • Abstract: A software queue located in an offscreen portion of video memory is used as a large-capacity software queue for queuing messages to a graphics accelerator. Although the software queue is typically stored in dynamic RAM (DRAM) memory, the advantages of faster static RAM (SRAM) are achieved by shadowing some of the queuing information in SRAM. Using a large-capacity software queue in video DRAM memory while shadowing information in faster SRAM memory achieves an advantageous balance between throughput speed and queue size. The large capacity of the software queue ensures that the queue is virtually never filled to capacity, so delays while awaiting free space in the queue are virtually never incurred. The capacity of the software queue is determined in software and is therefore adaptable to match a particular graphics application.
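The shadowing idea lends itself to a small sketch. The following is a hypothetical model (the buffer size, member names, and the single-producer/single-consumer ring-buffer framing are assumptions): a large command ring lives in slow "video memory", while the producer keeps a shadow copy of the consumer's read pointer in fast local storage and only re-reads the slow copy when the shadow suggests the ring is full.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical model of a large software queue in slow "video memory" whose
// read pointer is shadowed in fast local ("SRAM-like") storage.
class ShadowedRingQueue {
public:
    explicit ShadowedRingQueue(std::size_t capacity)
        : ring_(capacity), capacity_(capacity) {}

    // Producer side: enqueue one command word.
    bool push(uint32_t command) {
        if (full(shadowReadIndex_)) {
            // Only now pay the cost of reading the consumer's pointer from
            // slow memory and refresh the fast shadow copy.
            shadowReadIndex_ = slowReadIndex_;
            if (full(shadowReadIndex_)) return false;   // genuinely out of space
        }
        ring_[writeIndex_ % capacity_] = command;
        ++writeIndex_;
        return true;
    }

    // Consumer side (e.g. the graphics accelerator): dequeue one command word.
    bool pop(uint32_t& command) {
        if (slowReadIndex_ == writeIndex_) return false;   // queue empty
        command = ring_[slowReadIndex_ % capacity_];
        ++slowReadIndex_;                                   // lives in slow memory
        return true;
    }

private:
    bool full(uint64_t readIndex) const {
        return writeIndex_ - readIndex >= capacity_;
    }

    std::vector<uint32_t> ring_;      // stands in for the offscreen DRAM region
    std::size_t capacity_;
    uint64_t writeIndex_ = 0;         // producer-owned
    uint64_t slowReadIndex_ = 0;      // consumer-owned, "in video memory"
    uint64_t shadowReadIndex_ = 0;    // producer's fast shadow of slowReadIndex_
};

int main() {
    ShadowedRingQueue q(4);
    for (uint32_t i = 0; i < 4; ++i) q.push(i);    // fill the ring
    uint32_t cmd;
    q.pop(cmd);                                     // consumer frees one slot
    // The next push finds the shadow stale, refreshes it once, and succeeds.
    std::printf("push after refresh: %s\n", q.push(42) ? "ok" : "full");
}
```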
    • 5. Granted Invention Patent
    • Title: Virtual memory fragment aware cache
    • Publication No.: US07539843B2
    • Publication Date: 2009-05-26
    • Application No.: US11549570
    • Filing Date: 2006-10-13
    • Inventors: Warren F. Kruger; Wade K. Smith
    • IPC: G06F12/10
    • CPC: G06F12/1009; G06F2212/652
    • Abstract: The present invention is directed to a method, computer program product, and system for processing memory access requests. The method includes the following features. First, page table entries of a page table are organized into at least one fragment that maps logical memory to at least one of logical memory or physical memory. The at least one fragment has a fragment size and an alignment boundary. Then, a subset of the page table entries stored in one of a plurality of cache banks is accessed to determine a mapping between a first logical memory address and at least one of a second logical memory address or a physical memory address. Each cache bank is configured to store at least one page table entry corresponding to a fragment of a predetermined set of fragment sizes and a predetermined alignment boundary.
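A toy illustration of the fragment idea, with freely chosen names and parameters (the bank organization, the 4 KiB base page, and the map-based lookup are assumptions made for the sketch, not the patented design): each cache bank holds entries for one fragment size, so a single cached entry can translate every page inside an aligned, contiguous fragment.

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <unordered_map>
#include <vector>

// Toy "fragment aware" translation cache: each bank caches page table entries
// for one fragment size (a contiguous, aligned run of pages sharing one mapping).
constexpr uint64_t kPageShift = 12;   // assumed 4 KiB base pages

struct FragmentEntry {
    uint64_t virtBasePage;   // first virtual page of the fragment
    uint64_t physBasePage;   // first physical page it maps to
};

class FragmentCache {
public:
    // One bank per supported fragment size, expressed as log2(pages).
    explicit FragmentCache(std::vector<unsigned> log2Sizes) {
        for (unsigned s : log2Sizes) banks_[s] = {};
    }

    // Insert a fragment mapping into the bank that matches its size.
    void insert(unsigned log2Pages, uint64_t virtBasePage, uint64_t physBasePage) {
        banks_.at(log2Pages)[virtBasePage] = FragmentEntry{virtBasePage, physBasePage};
    }

    // Translate a virtual address: probe each bank at the fragment-aligned base.
    std::optional<uint64_t> translate(uint64_t virtAddr) const {
        const uint64_t vpage = virtAddr >> kPageShift;
        for (const auto& [log2Pages, bank] : banks_) {
            const uint64_t base = vpage & ~((uint64_t{1} << log2Pages) - 1);
            const auto it = bank.find(base);
            if (it == bank.end()) continue;
            const uint64_t offsetPages = vpage - base;
            return ((it->second.physBasePage + offsetPages) << kPageShift) |
                   (virtAddr & ((uint64_t{1} << kPageShift) - 1));
        }
        return std::nullopt;   // miss: would fall back to a page table walk
    }

private:
    // Key: log2 of fragment size in pages; value: entries indexed by aligned base page.
    std::unordered_map<unsigned, std::unordered_map<uint64_t, FragmentEntry>> banks_;
};

int main() {
    FragmentCache tlb({0, 4});                    // banks for 1-page and 16-page fragments
    tlb.insert(4, 0x100, 0x800);                  // one entry covers 16 aligned pages
    if (auto pa = tlb.translate((0x105ull << kPageShift) | 0x2A))
        std::printf("physical address: 0x%llx\n", (unsigned long long)*pa);
}
```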