    • 2. Invention Grant
    • Mechanism for synchronizing multiple skewed source-synchronous data channels with automatic initialization feature
    • US06636955B1
    • 2003-10-21
    • US09652480
    • 2000-08-31
    • Richard E. Kessler; Peter J. Bannon; Maurice B. Steinman; Scott E. Breach; Allen J. Baum; Gregg A. Bouchard
    • G06F12/00
    • G06F13/1689
    • A computer system has a memory controller that includes read buffers coupled to a plurality of memory channels. The memory controller advantageously eliminates the inter-channel skew caused by memory modules being located at different distances from the memory controller. The memory controller preferably includes a channel interface and synchronization logic circuit for each memory channel. This circuit includes read and write buffers and load and unload pointers for the read buffer. Unload pointer logic generates the unload pointer and load pointer logic generates the load pointer. The pointers preferably are free-running pointers that increment in accordance with two different clock signals. The load pointer increments in accordance with a clock generated by the memory controller but that has been routed out to and back from the memory modules. The unload pointer increments in accordance with a clock generated by the computer system itself. Because the trace length of each memory channel may differ, the time that it takes for a memory module to provide read data back to the memory controller may differ for each channel. The "skew" is defined as the difference in time between when the data arrives on the earliest channel and when data arrives on the latest channel. During system initialization, the pointers are synchronized. After initialization, the pointers are used to load and unload the read buffers in such a way that the effects of inter-channel skew are eliminated.
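The load/unload-pointer scheme described in the abstract above can be sketched as a small simulation. The buffer depth, channel latencies, and all names here are illustrative assumptions, not details taken from the patent claims: each channel loads returning data as its own delayed clock delivers it, and a single unload pointer drains every channel in lockstep once the worst-case skew has elapsed.

```python
BUF_DEPTH = 4  # read-buffer depth (assumption; not taken from the patent)

class ChannelBuffer:
    """One memory channel's read buffer with a free-running load pointer."""
    def __init__(self, latency):
        self.latency = latency          # round-trip skew of this channel, in cycles
        self.buf = [None] * BUF_DEPTH
        self.load_ptr = 0               # advances with the clock returned from the module

    def load(self, cycle, data_stream):
        # Beat n of the burst arrives on this channel at cycle n + latency.
        beat = cycle - self.latency
        if 0 <= beat < len(data_stream):
            self.buf[self.load_ptr] = data_stream[beat]
            self.load_ptr = (self.load_ptr + 1) % BUF_DEPTH

def read_aligned(channels, streams, n_beats):
    """Drain all channels in lockstep with a single unload pointer that
    starts after the worst-case skew, so each output beat is aligned."""
    max_lat = max(ch.latency for ch in channels)
    for cycle in range(max_lat + n_beats):
        for ch, stream in zip(channels, streams):
            ch.load(cycle, stream)
    unload_ptr, out = 0, []
    for _ in range(n_beats):
        out.append(tuple(ch.buf[unload_ptr] for ch in channels))
        unload_ptr = (unload_ptr + 1) % BUF_DEPTH
    return out
```

With latencies of 1 and 3 cycles, a 3-beat burst still comes out channel-aligned, which is the point of buffering behind the pointers.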
    • 3. Invention Grant
    • Programmable DRAM address mapping mechanism
    • US06546453B1
    • 2003-04-08
    • US09653093
    • 2000-08-31
    • Richard E. Kessler; Maurice B. Steinman; Peter J. Bannon; Michael C. Braganza; Gregg A. Bouchard
    • G06F12/00
    • G06F12/0882; G06F12/0215; G06F12/0607
    • A computer system contains a processor that includes a software programmable memory mapper. The memory mapper maps an address generated by the processor into a device address for accessing physical main memory. The processor also includes a cache controller that maps the processor address into a cache address. The cache address places a block of data from main memory into a memory cache using an index subfield. The physical main memory contains RDRAM devices, each of the RDRAM devices containing a number of memory banks that store rows and columns of data. The memory mapper maps processor addresses to device addresses to increase memory system performance. The mapping minimizes memory access conflicts between the memory banks. Conflicts between memory banks are reduced by placing a number of bits corresponding to the bank subfield above the most significant boundary bit of the index subfield. This diminishes page misses caused by replacement of data blocks from the cache memory because the read of the new data block and write of the victim data block are not to the same memory bank. Adjacent memory bank conflicts are reduced for sequential accesses to memory banks by reversing the bit order of a bank number subfield within the bank subfield of the device address.
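The two address-mapping tricks in the abstract above — placing the bank subfield above the index subfield, and reversing the bank-number bit order — can be sketched as bit manipulation. The field widths here are arbitrary assumptions for illustration; the patent describes a software-programmable mapper, not these fixed constants.

```python
INDEX_BITS = 10   # cache index subfield width (assumption)
BANK_BITS = 5     # bank subfield width (assumption)

def reverse_bits(value, width):
    """Reverse the bit order of a width-bit value."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def map_address(paddr):
    """Place the bank subfield above the index subfield, so a cache victim
    writeback and the replacing read land in different banks, and reverse
    the bank-number bits so sequential accesses spread across distant banks."""
    low = paddr & ((1 << INDEX_BITS) - 1)
    bank = (paddr >> INDEX_BITS) & ((1 << BANK_BITS) - 1)
    high = paddr >> (INDEX_BITS + BANK_BITS)
    bank = reverse_bits(bank, BANK_BITS)
    return (high << (INDEX_BITS + BANK_BITS)) | (bank << INDEX_BITS) | low
```

Bit reversal sends neighboring bank numbers (e.g. 0b00001 and 0b00010) to widely separated banks (0b10000 and 0b01000), which is what reduces adjacent-bank conflicts on sequential accesses.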
    • 4. Invention Grant
    • Mechanism to track all open pages in a DRAM memory system
    • US06662265B1
    • 2003-12-09
    • US09652704
    • 2000-08-31
    • Richard E. Kessler; Maurice B. Steinman; Michael S. Bertone; Peter J. Bannon; Gregg A. Bouchard
    • G06F12/00
    • G06F12/0215; G06F13/1631
    • A system and method are disclosed to track a large number of open pages in a computer memory system. The computer system contains one or more processors, each including a memory controller containing a page table; the page table is organized into a plurality of rows, each able to store the address of an open memory page. A RIMM module containing RDRAM devices is coupled to each processor, each RDRAM containing a plurality of memory banks. The page table increases system memory performance by tracking a large number of open memory pages. Associated with the page table is a bank active table that indicates which memory banks in each RDRAM device have open memory pages. When an access misses because its page-table row is already occupied by the address of a different open memory page, the page table enqueues the access to the RIMM module in a precharge queue. When an access misses because its page-table row contains no open memory page address, the page table enqueues the access in a Row-address-select ("RAS") queue. Accesses that hit an open memory page are enqueued in a Column-address-select ("CAS") queue. An entry in the precharge queue is then enqueued into the RAS queue. An entry in the RAS queue, after completion, is enqueued into the CAS Read or CAS Write queue.
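The hit/miss classification and queue flow described above can be sketched as a small model. The table size, the row-selection hash, and the class names are all assumptions made for the sketch, not structures taken from the patent.

```python
from collections import deque

N_ROWS = 8  # hypothetical page-table size

class PageTable:
    """Sketch of the open-page tracker: one open-page address per row,
    with precharge / RAS / CAS queues fed by hit-or-miss classification."""
    def __init__(self):
        self.rows = [None] * N_ROWS
        self.precharge_q = deque()
        self.ras_q = deque()
        self.cas_q = deque()

    def access(self, page_addr):
        row = page_addr % N_ROWS             # row-selection hash (assumption)
        open_page = self.rows[row]
        if open_page == page_addr:
            self.cas_q.append(page_addr)      # page hit: column access only
            return "hit"
        if open_page is None:
            self.rows[row] = page_addr
            self.ras_q.append(page_addr)      # row empty: activate the page
            return "miss-empty"
        # Conflict: a different open page occupies the row; precharge it first.
        self.rows[row] = page_addr
        self.precharge_q.append(open_page)
        return "miss-conflict"

    def step(self):
        # A completed precharge entry is promoted into the RAS queue.
        if self.precharge_q:
            self.ras_q.append(self.precharge_q.popleft())
```

Two accesses whose page addresses collide in the same row force the precharge path, while a repeat access to an already-open page takes only the CAS path.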
    • 5. Invention Grant
    • Fault containment and error recovery in a scalable multiprocessor
    • US07152191B2
    • 2006-12-19
    • US10691744
    • 2003-10-23
    • Richard E. Kessler; Peter J. Bannon; Kourosh Gharachorloo; Thukalan V. Verghese
    • G06F11/00
    • G06F11/0793; G06F11/0724; G06F15/17
    • A multi-processor computer system permits various types of partitions to be implemented to contain and isolate hardware failures. The various types of partitions include hard, semi-hard, firm, and soft partitions. Each partition can include one or more processors. Upon detecting a failure associated with a processor, the connection to adjacent processors in the system can be severed, thereby precluding corrupted data from contaminating the rest of the system. If an inter-processor connection is severed, message traffic in the system can become congested as messages become backed up in other processors. Accordingly, each processor includes various timers to monitor for traffic congestion that may be due to a severed connection. Rather than letting the processor continue to wait to be able to transmit its messages, the timers will expire at preprogrammed time periods and the processor will take appropriate action, such as simply dropping queued messages, to keep the system from locking up.
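The timer mechanism in the abstract above — drop queued messages rather than wait forever on a possibly severed link — can be sketched as follows. The timeout value, the queue shape, and the method names are all assumptions for illustration.

```python
class OutputPort:
    """Sketch: queued messages carry their enqueue time; when the timer
    expires without the link accepting them, they are dropped so that a
    severed inter-processor connection cannot lock up the system."""
    def __init__(self, timeout):
        self.timeout = timeout   # preprogrammed expiry, in cycles (assumption)
        self.queue = []          # list of (enqueue_cycle, message)

    def enqueue(self, cycle, msg):
        self.queue.append((cycle, msg))

    def tick(self, cycle, link_ready):
        if self.queue and link_ready:
            return self.queue.pop(0)[1]       # transmit the oldest message
        # Link stalled: drop messages whose timer has expired.
        self.queue = [(t, m) for t, m in self.queue
                      if cycle - t < self.timeout]
        return None
```

A healthy link drains the queue in order; a stalled link silently sheds expired messages instead of backing up traffic into neighboring processors.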
    • 6. Invention Grant
    • Fault containment and error recovery in a scalable multiprocessor
    • US06678840B1
    • 2004-01-13
    • US09651949
    • 2000-08-31
    • Richard E. Kessler; Peter J. Bannon; Kourosh Gharachorloo; Thukalan V. Verghese
    • G06F11/00
    • G06F11/0793; G06F11/0724; G06F15/17
    • A multi-processor computer system permits various types of partitions to be implemented to contain and isolate hardware failures. The various types of partitions include hard, semi-hard, firm, and soft partitions. Each partition can include one or more processors. Upon detecting a failure associated with a processor, the connection to adjacent processors in the system can be severed, thereby precluding corrupted data from contaminating the rest of the system. If an inter-processor connection is severed, message traffic in the system can become congested as messages become backed up in other processors. Accordingly, each processor includes various timers to monitor for traffic congestion that may be due to a severed connection. Rather than letting the processor continue to wait to be able to transmit its messages, the timers will expire at preprogrammed time periods and the processor will take appropriate action, such as simply dropping queued messages, to keep the system from locking up.
    • 8. Invention Grant
    • Data cache block zero implementation
    • US08301843B2
    • 2012-10-30
    • US12650075
    • 2009-12-30
    • Ramesh Gunna; Sudarshan Kadambi; Peter J. Bannon
    • G06F12/00; G06F13/00
    • G06F12/0808; G06F9/30047; G06F9/383; G06F9/3834; G06F9/3842; G06F9/3861; G06F12/0815; G06F2212/507
    • In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.
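The core decision in the abstract above — issue a probe for a speculative block-write request and an invalidate for a non-speculative one — can be sketched in a few lines. The function and transaction names are illustrative, not the patent's terminology for the interconnect protocol.

```python
def handle_block_write(is_speculative, other_agents_have_copy):
    """Sketch of the interface unit's choice for a data cache block write:
    a speculative request issues a probe (reversible if the write squashes),
    a non-speculative request issues an invalidate (claims the block)."""
    if is_speculative:
        txn = "probe"                  # query other caching agents
        copies_destroyed = False       # remote copies survive a probe
    else:
        txn = "invalidate"             # claim the block exclusively
        copies_destroyed = other_agents_have_copy
    return txn, copies_destroyed
```

The asymmetry matters because a speculative write may never commit: destroying remote copies on a mere speculation would force needless refills, while a committed write can safely invalidate them.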
    • 10. Invention Grant
    • Data cache block zero implementation
    • US07707361B2
    • 2010-04-27
    • US11281840
    • 2005-11-17
    • Ramesh Gunna; Sudarshan Kadambi; Peter J. Bannon
    • G06F12/00; G06F13/00
    • G06F12/0808; G06F9/30047; G06F9/383; G06F9/3834; G06F9/3842; G06F9/3861; G06F12/0815; G06F2212/507
    • In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.