    • 21. Granted Invention Patent
    • Memory module resync
    • US06684292B2
    • 2004-01-27
    • US09966892
    • 2001-09-28
    • Gary J. Piccirillo; Jerome J. Johnson; John E. Larson
    • G06F12/00
    • G06F11/1658; G06F11/1666
    • A technique for resynchronizing a memory system, and more specifically for resynchronizing a plurality of memory segments in a redundant memory system after a hot-plug event. After a memory cartridge is hot-plugged into a system, it is synchronized with the operational memory cartridges so that the memory system can operate in lock step. The refresh counter in each memory cartridge is disabled to generate a first refresh request to the corresponding memory segments in that cartridge. After waiting long enough to ensure that all cycles have completed, regardless of the state each cartridge was in when the first refresh request was initiated, each refresh counter is re-enabled, thereby generating a second refresh request. Generating the second refresh request to each of the memory segments puts all of the memory cartridges into synchronous operation. (See the sketch after this entry.)
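The resync sequence described in the US06684292B2 abstract boils down to three steps: disable every cartridge's refresh counter (which fires a first refresh request), wait out any in-flight cycles, then re-enable all counters together so the second refresh request lands in lock step. The C sketch below only illustrates that ordering; the register names, bit layout, cartridge count, and delay value are assumptions, not taken from the patent.

```c
/* Hypothetical sketch of the resync sequence in the US06684292B2 abstract.
 * Register names, bit layout, cartridge count, and the delay are assumptions;
 * refresh_ctrl[] would point at each cartridge's mapped control register. */
#include <stdint.h>
#include <stddef.h>

#define NUM_CARTRIDGES 5
#define REFRESH_ENABLE (1u << 0)

static volatile uint32_t *refresh_ctrl[NUM_CARTRIDGES];

static void delay_cycles(unsigned long n) {
    /* Placeholder busy-wait; real hardware would use a timer or handshake. */
    for (volatile unsigned long i = 0; i < n; ++i) { }
}

/* Resynchronize all memory cartridges after a hot-plug event. */
void resync_memory_cartridges(void) {
    size_t i;

    /* 1. Disabling each refresh counter generates a first refresh request
     *    to that cartridge's memory segments. */
    for (i = 0; i < NUM_CARTRIDGES; ++i)
        *refresh_ctrl[i] &= ~REFRESH_ENABLE;

    /* 2. Wait long enough that all in-flight cycles complete, whatever state
     *    each cartridge was in when the first refresh request fired. */
    delay_cycles(100000);

    /* 3. Re-enabling every counter generates a second refresh request at the
     *    same moment in all cartridges, so they now operate in lock step. */
    for (i = 0; i < NUM_CARTRIDGES; ++i)
        *refresh_ctrl[i] |= REFRESH_ENABLE;
}
```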
    • 22. Granted Invention Patent
    • Computer system with adaptive memory arbitration scheme
    • US06286083B1
    • 2001-09-04
    • US09112000
    • 1998-07-08
    • Kenneth T. Chin; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo; Jeffrey C. Stevens; Michael J. Collins; C. Kevin Coffee
    • G06F13/18
    • G06F13/1605
    • A computer system includes an adaptive memory arbiter for prioritizing memory access requests, including a self-adjusting, programmable request-priority ranking system. The memory arbiter adapts during every arbitration cycle, reducing the priority of any request that wins memory arbitration. Thus, a memory request initially holding a low priority ranking may gradually advance in priority until that request wins memory arbitration. Such a scheme prevents lower-priority devices from becoming “memory-starved.” Because some types of memory requests (such as refresh requests and memory reads) inherently require faster memory access than other requests (such as memory writes), the adaptive memory arbiter additionally integrates a nonadjustable priority structure into the adaptive ranking system that guarantees faster service to the most urgent requests. Also, the adaptive memory arbitration scheme introduces a flexible method of adjustable priority-weighting that permits selected devices to transact a programmable number of consecutive memory accesses without losing request priority. (See the sketch after this entry.)
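The core of the US06286083B1 abstract is a ranking that demotes whichever requester wins each arbitration cycle, so every pending requester eventually climbs high enough to win. The sketch below shows only that self-adjusting ranking; the requester set, rank values, and demotion rule are assumptions, and the fixed urgency tier and programmable burst weights mentioned in the abstract are noted only in comments.

```c
/* Illustrative sketch of the self-adjusting request ranking described in
 * the US06286083B1 abstract. The requester set, rank values, and demotion
 * rule are assumptions for demonstration, not the patented implementation. */
#include <stdio.h>

#define NUM_REQUESTERS 4

/* Higher rank wins arbitration; the winner is demoted so that every
 * requester eventually advances far enough to win (no starvation). */
static int rank[NUM_REQUESTERS] = { 3, 2, 1, 0 };

static int arbitrate(const int pending[NUM_REQUESTERS]) {
    int winner = -1;
    for (int i = 0; i < NUM_REQUESTERS; ++i)
        if (pending[i] && (winner < 0 || rank[i] > rank[winner]))
            winner = i;
    if (winner >= 0) {
        /* Everyone ranked below the winner moves up one step... */
        for (int i = 0; i < NUM_REQUESTERS; ++i)
            if (i != winner && rank[i] < rank[winner])
                ++rank[i];
        /* ...and the winner falls to the lowest rank. A fixed, non-adjustable
         * tier (e.g. refreshes and reads before writes) and per-device burst
         * weights would be layered on top of this in the patented scheme. */
        rank[winner] = 0;
    }
    return winner;
}

int main(void) {
    int pending[NUM_REQUESTERS] = { 1, 1, 1, 1 };
    for (int cycle = 0; cycle < 6; ++cycle)
        printf("cycle %d: requester %d wins\n", cycle, arbitrate(pending));
    return 0;
}
```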
    • 24. Granted Invention Patent
    • Computer system employing memory controller and bridge interface permitting concurrent operation
    • US06247102B1
    • 2001-06-12
    • US09047876
    • 1998-03-25
    • Kenneth T. Chin; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo; Jeffrey C. Stevens; C. Kevin Coffee; Michael J. Collins; John Larson
    • G06F13/14
    • G06F13/1642; G06F13/4036
    • A computer system includes a CPU, a memory device, two expansion buses, and a bridge logic unit coupling together the CPU, the memory device and the expansion buses. The CPU couples to the bridge logic unit via a CPU bus and the memory device couples to the bridge logic unit via a memory bus. The bridge logic unit generally routes bus cycle requests from one of the four buses to another of the buses while concurrently routing bus cycle requests between another pair of buses. The bridge logic unit preferably includes four interfaces, one each for the CPU, the memory device and the two expansion buses. Each pair of interfaces is coupled by at least one queue; write requests are stored (or “posted”) in write queues and read data are stored in read queues. Because each interface can communicate concurrently with all other interfaces via the read and write queues, the possibility exists that a first interface cannot access a second interface because the second interface is busy processing read or write requests from a third interface, starving the first interface of access to the second interface. To remedy this starvation problem, the bridge logic unit prevents the third interface from posting additional write requests to its write queue, thereby permitting the first interface access to the second interface. Further, read cycles may be retried from one interface to allow another interface to complete its bus transactions. (See the sketch after this entry.)
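The anti-starvation mechanism in the US06247102B1 abstract amounts to a throttle: when one interface's posted writes keep a target interface busy while another interface is waiting, the busy source is temporarily blocked from posting further writes. The sketch below models that rule with plain counters; the queue depth, interface names, and blocking condition are assumptions for illustration.

```c
/* Minimal sketch of the write-posting throttle described in the
 * US06247102B1 abstract. Queue depth, names, and the blocking rule are
 * illustrative assumptions, not the patented logic. */
#include <stdbool.h>
#include <stdio.h>

enum { QUEUE_DEPTH = 4 };

/* One such queue sits between each pair of interfaces in the bridge. */
struct write_queue {
    int  entries;          /* posted writes waiting to drain         */
    bool posting_blocked;  /* source temporarily barred from posting */
};

static bool post_write(struct write_queue *q) {
    if (q->posting_blocked || q->entries == QUEUE_DEPTH)
        return false;      /* source must retry the post later       */
    ++q->entries;
    return true;
}

/* If the source keeps the target busy while another interface is waiting
 * for that same target, stop the source from posting more writes so the
 * waiting interface can get through. */
static void rebalance(struct write_queue *busy_source, bool other_waiting) {
    busy_source->posting_blocked =
        (busy_source->entries == QUEUE_DEPTH) && other_waiting;
}

int main(void) {
    struct write_queue pci_to_memory = { 0, false };

    while (post_write(&pci_to_memory))   /* PCI fills its write queue */
        ;
    rebalance(&pci_to_memory, /* CPU interface waiting = */ true);

    printf("PCI may post further writes: %s\n",
           pci_to_memory.posting_blocked ? "no" : "yes");
    return 0;
}
```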
    • 25. Granted Invention Patent
    • System and method for aligning an initial cache line of data read from local memory by an input/output device
    • US06160562A
    • 2000-12-12
    • US135620
    • 1998-08-18
    • Kenneth T. Chin; Clarence K. Coffee; Michael J. Collins; Jerome J. Johnson; Phillip M. Jones; Robert A. Lester; Gary J. Piccirillo
    • G06F12/08; G06F13/40; G06F13/14
    • G06F12/0879; G06F13/404
    • A computer is provided having a bus interface unit coupled between a CPU bus, a PCI bus and/or a graphics bus. The bus interface unit includes controllers linked to the respective buses and further includes a plurality of queues placed within the address and data paths linking the various controllers. An interface controller coupled to a peripheral bus (excluding the CPU local bus) determines whether an address forwarded from a peripheral device is the first address within a sequence of addresses used to select a set of quad words constituting a cache line. If that address (i.e., the target address) is not the first address (i.e., the initial address) in that sequence, the target address is modified so that it becomes the initial address in the sequence. The offset between the target address and the modified address is recorded as a count value. The initial address aligns the reads to a cacheline boundary, and the quad words of the cacheline are stored in successive order in the queue of the bus interface unit. Quad words arriving in the queue before the quad word attributed to the target address are discarded. This ensures that the interface controller, and eventually the peripheral device, reads quad words in successive address order, and all subsequently read quad words are also sent in successive order until the peripheral read transaction is complete. (See the sketch after this entry.)
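The alignment step in the US06160562A abstract is simple address arithmetic: round the target address down to the cache-line boundary to get the initial address, and turn the difference into a count of quadwords to discard. The sketch below shows that arithmetic; the 32-byte line size, 8-byte quadword, and example address are assumptions.

```c
/* Illustrative sketch of the cache-line alignment in the US06160562A
 * abstract: round the target address down to the cache-line boundary, read
 * the whole line in order, and discard the quadwords preceding the target.
 * The 32-byte line size and quadword granularity are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define QUADWORD_BYTES  8u
#define CACHELINE_BYTES 32u   /* 4 quadwords per line in this sketch */

struct aligned_read {
    uint32_t initial_addr;    /* first address of the cache line      */
    uint32_t discard_count;   /* quadwords arriving before the target */
};

static struct aligned_read align_to_cacheline(uint32_t target_addr) {
    struct aligned_read r;
    r.initial_addr  = target_addr & ~(CACHELINE_BYTES - 1u);
    r.discard_count = (target_addr - r.initial_addr) / QUADWORD_BYTES;
    return r;
}

int main(void) {
    struct aligned_read r = align_to_cacheline(0x00001018u);
    /* Line starts at 0x1000; the first 3 quadwords (0x1000..0x1017) are
     * dropped so the device sees data starting at its target address. */
    printf("initial=0x%08x discard=%u quadwords\n",
           r.initial_addr, r.discard_count);
    return 0;
}
```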
    • 26. Granted Invention Patent
    • System and method for improving processor read latency in a system employing error checking and correction
    • US06272651B1
    • 2001-08-07
    • US09135274
    • 1998-08-17
    • Kenneth T. Chin; Clarence Kevin Coffee; Michael J. Collins; Jerome J. Johnson; Phillip M. Jones; Robert Allen Lester; Gary J. Piccirillo
    • G06F11/00
    • G06F11/10
    • A computer is provided having a system interface unit coupled between main memory, a CPU bus, and a PCI bus and/or graphics bus. A hard drive is typically coupled to the PCI bus. The system interface unit is configured to perform a data integrity protocol, and all bus master devices (CPUs) on the processor bus may perform the same data integrity protocol. When a CPU requests read data from main memory, the system interface unit forwards the read data and error information unmodified to the processor bus, bypassing the data integrity logic within the system interface unit. However, the system interface unit may still perform the data integrity protocol in parallel with the requesting CPU, so that it can track errors on processor reads and notify the operating system or other error-control software of any errors. In this manner, processor read latency is improved without sacrificing data integrity. If the read request instead comes from a device on a peripheral bus (AGP or PCI), the system interface unit performs the data integrity protocol on the data and error bits before forwarding the read data to the appropriate bus. (See the sketch after this entry.)
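The latency trick in the US06272651B1 abstract is a matter of where the integrity check sits: processor reads are forwarded unmodified while the check runs only for error tracking, whereas peripheral reads are checked before the data leaves the unit. The sketch below illustrates those two paths with a simple parity check standing in for the real error checking and correction; all function names and the check scheme are assumptions.

```c
/* Illustrative sketch of the two read paths in the US06272651B1 abstract:
 * processor reads are forwarded unmodified while the integrity check runs
 * only for error logging; peripheral reads are checked before forwarding.
 * The single parity bit below is a stand-in for the real ECC, and all
 * names are assumptions of this sketch. */
#include <stdint.h>
#include <stdio.h>

/* Even parity over a 64-bit word (stand-in "check bits"). */
static unsigned parity64(uint64_t x) {
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
    return (unsigned)(x & 1u);
}

/* Processor read path: forward immediately, verify on the side. */
static uint64_t processor_read(uint64_t data, unsigned check_bit) {
    if (parity64(data) != check_bit)    /* error-tracking path only   */
        fprintf(stderr, "integrity error logged for error-control software\n");
    return data;                        /* data forwarded unmodified  */
}

/* Peripheral (PCI/AGP) read path: verify before the data leaves the unit. */
static int peripheral_read(uint64_t data, unsigned check_bit, uint64_t *out) {
    if (parity64(data) != check_bit)
        return -1;                      /* hold back / correct first  */
    *out = data;
    return 0;
}

int main(void) {
    uint64_t word = 0x0123456789abcdefULL, out = 0;

    processor_read(word, parity64(word));              /* clean CPU read */
    if (peripheral_read(word, parity64(word) ^ 1u, &out) != 0)
        printf("peripheral read held back: check bits do not match\n");
    return 0;
}
```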
    • 27. Granted Invention Patent
    • Accelerated graphics port multiple entry GART cache allocation system and method
    • US5949436A
    • 1999-09-07
    • US941861
    • 1997-09-30
    • Ronald T. Horan; Phillip M. Jones; Gregory N. Santos; Robert Allan Lester; Jerome J. Johnson; Michael J. Collins
    • G06F12/10; G06T1/60; G06F15/00; G06T1/00
    • G06T1/60; G06F12/1027; G06F12/1081
    • A computer system having a core logic chipset that functions as a bridge between an Accelerated Graphics Port ("AGP") bus device such as a graphics controller, and a host processor and computer system memory, wherein a Graphics Address Remapping Table ("GART table") is used by the core logic chipset to remap virtual memory addresses used by the AGP graphics controller into physical memory addresses that reside in the computer system memory. The GART table enables the AGP graphics controller to work in a contiguous virtual memory address space while actually using non-contiguous blocks or pages of physical system memory to store textures, command lists and the like. The GART table is made up of a plurality of entries, each entry comprising an address pointer to the base address of a page of graphics data in memory. The core logic chipset may cache a subset of the most recently used GART table entries to increase AGP performance when performing the address translation. When a GART table entry is not found in the cache, a memory access is required to obtain the needed GART table entry. The cacheline of memory information returned from that memory read access is returned in toggle mode, and each quadword of it contains two GART table entries. Thus at least one quadword (two GART table entries) is stored in the cache each time a cache miss forces a memory access. (See the sketch after this entry.)
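The fill policy in the US5949436A abstract relies on a size relationship: a quadword fetched on a GART-cache miss carries two 32-bit GART entries, so both can be cached at once. The sketch below shows a toy direct-mapped GART cache that fills entry pairs on a miss; the table layout, page size, cache size, and entry format are assumptions.

```c
/* Illustrative sketch of the multiple-entry GART cache fill described in
 * the US5949436A abstract: each 4 KB page address is translated through a
 * GART entry; on a cache miss the quadword holding the entry is fetched,
 * and since one quadword carries two 32-bit entries, both are cached.
 * Table layout, cache size, and entry format are assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT    12          /* 4 KB pages                   */
#define CACHE_ENTRIES 16
#define GART_SIZE     1024

static uint32_t gart_table[GART_SIZE];   /* entry = physical page base */

struct cache_slot { bool valid; uint32_t index, entry; };
static struct cache_slot cache[CACHE_ENTRIES];

static uint32_t gart_lookup(uint32_t virt_addr) {
    uint32_t idx  = virt_addr >> PAGE_SHIFT;
    uint32_t slot = idx % CACHE_ENTRIES;

    if (!(cache[slot].valid && cache[slot].index == idx)) {
        /* Miss: read the whole quadword (two adjacent entries) from memory
         * and cache both, so the neighbouring page hits without a new read. */
        uint32_t pair = idx & ~1u;
        for (uint32_t i = 0; i < 2; ++i) {
            uint32_t s = (pair + i) % CACHE_ENTRIES;
            cache[s].valid = true;
            cache[s].index = pair + i;
            cache[s].entry = gart_table[pair + i];
        }
    }
    /* Translate: physical page base from the entry plus the page offset. */
    return cache[slot].entry + (virt_addr & ((1u << PAGE_SHIFT) - 1u));
}

int main(void) {
    for (uint32_t i = 0; i < GART_SIZE; ++i)
        gart_table[i] = 0x80000000u + i * 0x1000u;   /* fake physical pages */
    printf("0x%08x\n", gart_lookup(0x00003010u));    /* page 3, offset 0x10 */
    printf("0x%08x\n", gart_lookup(0x00002008u));    /* page 2: hits cache  */
    return 0;
}
```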