    • 12. Invention grant
    • Dynamic cache partitioning
    • Publication number: US06865647B2
    • Publication date: 2005-03-08
    • Application number: US10730761
    • Filing date: 2003-12-08
    • Inventors: Sompong P. Olarig; Phillip M. Jones; John E. Jenne
    • IPC: G06F12/08; G06F12/12
    • CPC: G06F12/084; G06F12/12
    • A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
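To make the mechanism in this abstract concrete, here is a minimal behavioral sketch in Python. The miss-rate thresholds, segment size, and all class and variable names are illustrative assumptions, not the patent's implementation; the abstract only requires two predetermined conditions that trigger growing one partition and shrinking another by the same segment.

```python
"""Behavioral sketch of dynamic cache partitioning (illustrative only).

The grow/shrink conditions below (miss-rate thresholds) are assumptions; the
abstract only requires that each condition be a predetermined test on a
partition. Class and variable names are hypothetical.
"""

SEGMENT = 4  # predetermined resize step, in cache lines (assumed value)

class PartitionedCache:
    def __init__(self, entities, lines_per_partition):
        # Each entity gets a private partition it may replace lines in;
        # every entity may still *read* any partition.
        self.size = {e: lines_per_partition for e in entities}
        self.misses = {e: 0 for e in entities}
        self.accesses = {e: 0 for e in entities}

    def record_access(self, entity, hit):
        self.accesses[entity] += 1
        if not hit:
            self.misses[entity] += 1

    def miss_rate(self, entity):
        return self.misses[entity] / max(1, self.accesses[entity])

    def rebalance(self, grow_threshold=0.5, shrink_threshold=0.1):
        # First predetermined condition: a partition is missing often.
        # Second predetermined condition: a partition is barely missing.
        hungry = [e for e in self.size if self.miss_rate(e) > grow_threshold]
        idle = [e for e in self.size
                if self.miss_rate(e) < shrink_threshold and self.size[e] > SEGMENT]
        if hungry and idle:
            grow, shrink = hungry[0], idle[0]
            self.size[grow] += SEGMENT    # first partition grows by the segment
            self.size[shrink] -= SEGMENT  # second partition shrinks by the same segment

if __name__ == "__main__":
    cache = PartitionedCache(["cpu0", "cpu1"], lines_per_partition=16)
    for _ in range(100):
        cache.record_access("cpu0", hit=False)  # cpu0 misses constantly
        cache.record_access("cpu1", hit=True)   # cpu1 never misses
    cache.rebalance()
    print(cache.size)  # {'cpu0': 20, 'cpu1': 12}
```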
    • 14. Invention grant
    • System for identifying memory requests as noncacheable to reduce cache coherence directory lookups and bus snoops
    • Publication number: US06470429B1
    • Publication date: 2002-10-22
    • Application number: US09752128
    • Filing date: 2000-12-29
    • Inventors: Phillip M. Jones; Robert Allan Lester
    • IPC: G06F12/00
    • CPC: G06F12/0835; G06F12/0831; G06F12/0888
    • An apparatus for identifying requests to main memory as non-cacheable in a computer system with multiple processors includes a main memory, memory cache, processor and cache coherence directory all coupled to a host bridge unit (North bridge). The processor transmits requests for data to the main memory via the host bridge unit. The host bridge unit includes a cache coherence controller that implements a protocol to maintain the coherence of data stored in each of the processor caches in the computer system. A cache coherence directory is connected to the cache coherence controller. After receiving the request for data from main memory, the host bridge unit identifies requests for data to main memory as cacheable or non-cacheable. If the data is non-cacheable, then the host bridge unit does not request the cache coherence controller to perform a cache coherence directory lookup to maintain the coherence of the data.
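A minimal sketch of the filtering idea follows. The address-range test and all names are assumptions for illustration; the abstract only specifies that the host bridge classifies a request as cacheable or non-cacheable and skips the coherence-directory lookup (and any resulting snoop) for non-cacheable ones.

```python
"""Sketch of skipping coherence-directory lookups for noncacheable requests.

The address-range classification is an assumption; the abstract only requires
that the host bridge identify a request as cacheable or non-cacheable before
involving the cache coherence controller.
"""

NONCACHEABLE_RANGES = [(0xF000_0000, 0xFFFF_FFFF)]  # e.g. memory-mapped I/O (assumed)

def is_noncacheable(addr: int) -> bool:
    return any(lo <= addr <= hi for lo, hi in NONCACHEABLE_RANGES)

def handle_memory_request(addr: int, directory: dict) -> str:
    """Host-bridge behavior: only cacheable requests pay for a directory lookup."""
    if is_noncacheable(addr):
        return "read from memory (no directory lookup, no snoop)"
    owner = directory.get(addr)          # coherence directory lookup
    if owner is not None:
        return f"snoop processor {owner} before reading memory"
    return "read from memory (directory shows no cached copy)"

if __name__ == "__main__":
    directory = {0x1000: 2}              # address 0x1000 is cached by processor 2
    print(handle_memory_request(0x1000, directory))
    print(handle_memory_request(0xF000_1234, directory))
```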
    • 15. Invention grant
    • Computer system with synchronous memory arbiter that permits asynchronous memory requests
    • Publication number: US06249847B1
    • Publication date: 2001-06-19
    • Application number: US09134057
    • Filing date: 1998-08-14
    • Inventors: Kenneth T. Chin; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo; Michael J. Collins
    • IPC: G06F13/78
    • CPC: G06F13/18
    • A computer system includes a CPU, a memory, and a memory controller for controlling access to the memory. The memory controller generally includes arbitration logic for deciding which memory request among one or more pending requests should win arbitration. When a request wins arbitration, the arbitration logic asserts a “won” signal corresponding to that memory request. The memory controller also includes synchronizing logic to synchronize memory requests belonging to a first group of requests that win arbitration to a clock signal and an arbitration enable signal. The synchronizing logic includes an AND gate and a latch for synchronizing the won signals. The memory controller also asynchronously arbitrates a second group of memory requests by asserting a won signal associated with the second group requests that is not synchronized to the clock signal. In this manner, the won signals for the second group of requests can be asserted earlier than the synchronized won signals, thereby permitting the asynchronously arbitrated second group memory requests to be performed earlier than otherwise possible.
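The following behavioral sketch illustrates the two signalling paths described above. The group memberships, the fixed priority order, and the function names are assumptions; the essential point is that group-one "won" signals are gated by the clock and arbitration-enable, while group-two "won" signals are asserted without waiting for the clock.

```python
"""Behavioral sketch of mixed synchronous/asynchronous "won" signals.

Which requesters fall in the synchronous versus asynchronous group, and the
fixed-priority pick below, are assumptions; the abstract only requires that
group-one winners be gated by the clock and arbitration-enable while
group-two winners are asserted without waiting for the clock.
"""

SYNC_GROUP = {"cpu_read", "cpu_write"}   # assumed membership of the first group
ASYNC_GROUP = {"refresh"}                # assumed membership of the second group

def arbitrate(pending):
    """Pick a winner by a fixed priority order (illustrative)."""
    for req in ("refresh", "cpu_read", "cpu_write"):
        if req in pending:
            return req
    return None

def won_signals(pending, clock_edge, arb_enable):
    """Return the asserted 'won' lines at this instant."""
    winner = arbitrate(pending)
    signals = {}
    if winner in ASYNC_GROUP:
        # asynchronous path: asserted immediately, not gated by the clock
        signals[winner] = True
    elif winner in SYNC_GROUP:
        # synchronous path: AND of win, clock edge, and enable (then latched)
        signals[winner] = clock_edge and arb_enable
    return signals

if __name__ == "__main__":
    # refresh wins and is asserted even though no clock edge has arrived yet
    print(won_signals({"refresh", "cpu_read"}, clock_edge=False, arb_enable=True))
    print(won_signals({"cpu_read"}, clock_edge=True, arb_enable=True))
```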
    • 16. Invention grant
    • Computer system with adaptive memory arbitration scheme
    • Publication number: US06286083B1
    • Publication date: 2001-09-04
    • Application number: US09112000
    • Filing date: 1998-07-08
    • Inventors: Kenneth T. Chin; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo; Jeffrey C. Stevens; Michael J. Collins; C. Kevin Coffee
    • IPC: G06F13/18
    • CPC: G06F13/1605
    • A computer system includes an adaptive memory arbiter for prioritizing memory access requests, including a self-adjusting, programmable request-priority ranking system. The memory arbiter adapts during every arbitration cycle, reducing the priority of any request which wins memory arbitration. Thus, a memory request initially holding a low priority ranking may gradually advance in priority until that request wins memory arbitration. Such a scheme prevents lower-priority devices from becoming “memory-starved.” Because some types of memory requests (such as refresh requests and memory reads) inherently require faster memory access than other requests (such as memory writes), the adaptive memory arbiter additionally integrates a nonadjustable priority structure into the adaptive ranking system which guarantees faster service to the most urgent requests. Also, the adaptive memory arbitration scheme introduces a flexible method of adjustable priority-weighting which permits selected devices to transact a programmable number of consecutive memory accesses without those devices losing request priority.
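Below is a small sketch of the adaptive ranking described in this abstract. The urgent set, the per-device weight values, and the class name are assumptions; the sketch shows the winner dropping to the bottom of the adjustable ranking, an urgent tier that always wins first, and a weight that allows a device several consecutive wins before it is demoted.

```python
"""Sketch of adaptive memory arbitration (illustrative only).

The winner of each cycle loses priority so starved requesters drift upward;
an urgent, non-adjustable class always wins first; a per-device weight allows
a few consecutive wins before the demotion. Weights and the urgent set are
assumed values.
"""

from collections import deque

URGENT = {"refresh", "mem_read"}   # fixed, non-adjustable tier (assumed)
WEIGHT = {"gfx": 3}                # gfx may win 3 times in a row (assumed)

class AdaptiveArbiter:
    def __init__(self, devices):
        self.ranking = deque(devices)   # front = highest adjustable priority
        self.consecutive = {}

    def grant(self, pending):
        # urgent requests bypass the adjustable ranking entirely
        for dev in self.ranking:
            if dev in pending and dev in URGENT:
                return dev
        for dev in list(self.ranking):
            if dev not in pending:
                continue
            streak = self.consecutive.get(dev, 0) + 1
            if streak < WEIGHT.get(dev, 1):
                self.consecutive[dev] = streak   # keeps its rank for now
            else:
                self.consecutive[dev] = 0
                self.ranking.remove(dev)         # winner loses priority...
                self.ranking.append(dev)         # ...and re-enters at the bottom
            return dev
        return None

if __name__ == "__main__":
    arb = AdaptiveArbiter(["cpu", "gfx", "pci"])
    for _ in range(5):
        print(arb.grant({"gfx", "pci"}))         # gfx, gfx, gfx, pci, gfx
```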
    • 18. Invention grant
    • Memory controller using queue look-ahead to reduce memory latency
    • Publication number: US06269433B1
    • Publication date: 2001-07-31
    • Application number: US09069515
    • Filing date: 1998-04-29
    • Inventors: Phillip M. Jones; Gary J. Piccirillo
    • IPC: G06F12/00
    • CPC: G06F13/161; G06F12/0215
    • A computer system includes a processor, a memory device, at least one expansion bus, and a bridge device coupling the processor, memory device, and expansion bus together. The bridge device preferably includes a memory controller that is capable of arbitrating among pending memory requests and, in certain situations, completing the current cycle after the next cycle begins. This allows at least two memory requests to execute concurrently, improving bus utilization so that data is retrieved from and stored in memory more efficiently. The memory controller can complete the current memory cycle during the next cycle when the next memory request to be executed will result in a bank miss and a least-recently-used tracker is currently tracking its maximum number of open memory pages and banks. Further concurrent memory request execution is possible when a bank-inactivate condition is valid for the currently executing memory request and the next request to execute will result in a page miss or a page hit to a page other than the MRU page.
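The overlap decision can be sketched as a simple predicate, shown below. The tracker capacity and the way the bank-inactivate condition is passed in are assumptions; the two cases mirror the conditions named in the abstract.

```python
"""Sketch of the queue look-ahead overlap test (illustrative only).

The tracker size and the representation of the bank-inactivate condition are
assumptions; the point is only the decision of when the next queued request
may start before the current one completes.
"""

MAX_OPEN_PAGES = 4   # capacity of the least-recently-used tracker (assumed)

def classify(next_req, open_pages):
    """Return 'page hit', 'page miss', or 'bank miss' for the next request."""
    bank, page = next_req
    if bank not in open_pages:
        return "bank miss"
    return "page hit" if open_pages[bank] == page else "page miss"

def can_overlap(next_req, open_pages, mru_page, bank_inactivate_valid):
    kind = classify(next_req, open_pages)
    # Case 1: the next request misses its bank while the tracker is already
    # tracking its maximum number of open pages/banks.
    if kind == "bank miss" and len(open_pages) == MAX_OPEN_PAGES:
        return True
    # Case 2: the current request will close its bank anyway, and the next
    # request is a page miss or a hit to a page other than the MRU page.
    if bank_inactivate_valid and (kind == "page miss" or
                                  (kind == "page hit" and next_req != mru_page)):
        return True
    return False

if __name__ == "__main__":
    open_pages = {0: 0x10, 1: 0x22, 2: 0x05, 3: 0x3A}   # bank -> open page
    print(can_overlap((5, 0x77), open_pages, mru_page=(0, 0x10),
                      bank_inactivate_valid=False))      # True: full tracker + bank miss
    print(can_overlap((1, 0x23), open_pages, mru_page=(0, 0x10),
                      bank_inactivate_valid=True))       # True: page miss, bank will close
```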
    • 19. Invention grant
    • Computer system employing memory controller and bridge interface permitting concurrent operation
    • Publication number: US06247102B1
    • Publication date: 2001-06-12
    • Application number: US09047876
    • Filing date: 1998-03-25
    • Inventors: Kenneth T. Chin; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo; Jeffrey C. Stevens; C. Kevin Coffee; Michael J. Collins; John Larson
    • IPC: G06F13/14
    • CPC: G06F13/1642; G06F13/4036
    • A computer system includes a CPU, a memory device, two expansion buses, and a bridge logic unit coupling together the CPU, the memory device and the expansion buses. The CPU couples to the bridge logic unit via a CPU bus and the memory device couples to the bridge logic unit via a memory bus. The bridge logic unit generally routes bus cycle requests from one of the four buses to another of the buses while concurrently routing bus cycle requests between another pair of buses. The bridge logic unit preferably includes four interfaces, one each to the CPU, memory device and the two expansion buses. Each pair of interfaces is coupled by at least one queue; write requests are stored (or “posted”) in write queues and read data are stored in read queues. Because each interface can communicate concurrently with all other interfaces via the read and write queues, the possibility exists that a first interface cannot access a second interface because the second interface is busy processing read or write requests from a third interface, thus starving the first interface for access to the second interface. To remedy this starvation problem, the bridge logic unit prevents the third interface from posting additional write requests to its write queue, thereby permitting the first interface access to the second interface. Further, read cycles may be retried from one interface to allow another interface to complete its bus transactions.
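A rough sketch of the anti-starvation rule follows. The queue depth, the wait threshold, and the interface names are assumptions; the behavior shown is the bridge refusing further posted writes from the interface that is monopolizing a target once another interface has waited too long for that target.

```python
"""Sketch of the bridge's anti-starvation rule (illustrative only).

When one interface has waited too long for a target that is busy draining
another interface's posted writes, the bridge stops accepting new posted
writes from that other interface. Queue depth and the wait threshold are
assumed values.
"""

from collections import deque

QUEUE_DEPTH = 8       # posted-write queue capacity (assumed)
STARVATION_WAIT = 16  # cycles an interface may wait before relief kicks in (assumed)

class BridgeLogic:
    def __init__(self, interfaces=("cpu", "mem", "pci", "agp")):
        # one posted-write queue per (source, target) interface pair
        self.write_q = {(s, t): deque() for s in interfaces for t in interfaces if s != t}
        self.blocked = set()    # sources temporarily barred from posting
        self.waiting = {}       # (starved source, target) -> cycles spent waiting

    def post_write(self, src, dst, data):
        if src in self.blocked or len(self.write_q[(src, dst)]) >= QUEUE_DEPTH:
            return False        # source must retry later
        self.write_q[(src, dst)].append(data)
        return True

    def note_wait(self, starved_src, busy_src, dst):
        """starved_src wants dst, but dst is busy with busy_src's posted writes."""
        key = (starved_src, dst)
        self.waiting[key] = self.waiting.get(key, 0) + 1
        if self.waiting[key] >= STARVATION_WAIT:
            self.blocked.add(busy_src)   # stop busy_src from posting more writes
            self.waiting[key] = 0

    def unblock(self, src):
        self.blocked.discard(src)        # called once the starved access completes

if __name__ == "__main__":
    bridge = BridgeLogic()
    for cycle in range(STARVATION_WAIT):
        bridge.post_write("pci", "mem", data=cycle)   # pci keeps posting writes
        bridge.note_wait("cpu", "pci", "mem")         # cpu is being starved of mem
    print(bridge.post_write("pci", "mem", data=99))   # False: pci is now blocked
```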
    • 20. Invention grant
    • System and method for aligning an initial cache line of data read from local memory by an input/output device
    • Publication number: US06160562A
    • Publication date: 2000-12-12
    • Application number: US135620
    • Filing date: 1998-08-18
    • Inventors: Kenneth T. Chin; Clarence K. Coffee; Michael J. Collins; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo
    • IPC: G06F12/08; G06F13/40; G06F13/14
    • CPC: G06F12/0879; G06F13/404
    • A computer is provided having a bus interface unit coupled between a CPU bus, a PCI bus and/or a graphics bus. The bus interface unit includes controllers linked to the respective buses and further includes a plurality of queues placed within address and data paths linking the various controllers. An interface controller coupled to a peripheral bus (excluding the CPU local bus) determines if an address forwarded from a peripheral device is the first address within a sequence of addresses used to select a set of quad words constituting a cache line. If that address (i.e., target address) is not the first address (i.e., initial address) in that sequence, then the target address is modified so that it becomes the initial address in that sequence. An offset between the target address and the modified address is denoted as a count value. The initial address aligns the read to a cacheline boundary, and the quad words of the cacheline are stored in successive order in the queue of the bus interface unit. Quad words arriving in the queue prior to the quad word attributed to the target address are discarded. This ensures the interface controller, and eventually the peripheral device, will read quad words in successive address order, and all subsequently read quad words will also be sent in successive order until the peripheral read transaction is complete.
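The alignment arithmetic described above reduces to rounding the target address down to a cache-line boundary, remembering the offset as a count of quad words, and discarding that many leading quad words, as in this sketch. The 32-byte line and 8-byte quad-word geometry are assumed example values.

```python
"""Sketch of aligning a peripheral read to a cache-line boundary.

The 32-byte line / 8-byte quad-word geometry is an assumed example; the
mechanism is rounding the target address down, keeping the offset as a
count, and discarding the leading quad words the peripheral did not ask for.
"""

LINE_BYTES = 32    # cache-line size (assumed)
QWORD_BYTES = 8    # quad-word size

def align_read(target_addr):
    """Return (initial_addr, count): the line-aligned address and how many
    quad words to discard before the one the peripheral actually wants."""
    initial_addr = target_addr & ~(LINE_BYTES - 1)
    count = (target_addr - initial_addr) // QWORD_BYTES
    return initial_addr, count

def peripheral_read(target_addr, read_line):
    """read_line(addr) returns a whole cache line as a list of quad-word addresses."""
    initial_addr, count = align_read(target_addr)
    qwords = read_line(initial_addr)   # line fetched in successive order
    return qwords[count:]              # leading quad words are discarded

if __name__ == "__main__":
    fake_memory = lambda addr: [addr + i * QWORD_BYTES for i in range(LINE_BYTES // QWORD_BYTES)]
    # target 0x1010 sits two quad words into the line starting at 0x1000
    print(align_read(0x1010))                    # (4096, 2), i.e. line 0x1000, skip 2
    print(peripheral_read(0x1010, fake_memory))  # [4112, 4120], i.e. 0x1010 and 0x1018
```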