    • 1. Granted Invention
    • Programmable SRAM and DRAM cache interface with preset access priorities
    • Publication: US6151664A (2000-11-21)
    • Application: US329134 (1999-06-09)
    • Inventors: John Michael Borkenhagen, Gerald Gregory Fagerness, John David Irish, David John Krolak
    • IPC: G06F12/00, G06F12/06, G06F12/08, G06F13/18
    • CPC: G06F12/0893
    • A cache interface that supports both Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM) is disclosed. The cache interface preferably comprises two portions, one portion on the processor and one portion on the cache. A designer can simply select which RAM he or she wishes to use for a cache, and the cache controller interface portion on the processor configures the processor to use this type of RAM. The cache interface portion on the cache is simple when being used with DRAM in that a busy indication is asserted so that the processor knows when an access collision occurs between an access generated by the processor and the DRAM cache. An access collision occurs when the DRAM cache is unable to read or write data due to a precharge, initialization, refresh, or standby state. When the cache interface is used with an SRAM cache, the busy indication is preferably ignored by a processor and the processor's cache interface portion. Additionally, the disclosed cache interface allows speed and size requirements for the cache to be programmed into the interface. In this manner, the interface does not have to be redesigned for use with different sizes or speeds of caches.
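The busy-indication protocol the abstract describes can be illustrated with a small simulation. This is a hedged sketch, not the patent's implementation: the class names, the retry-by-recursion policy, and the cycle model are all illustrative assumptions. The point it shows is the asymmetry in the abstract — a DRAM-configured interface honors the busy indication and retries, while an SRAM-configured interface ignores it.

```python
class DramCache:
    """Toy DRAM cache that is periodically busy (e.g. during a refresh)."""
    def __init__(self, busy_cycles):
        self.busy_cycles = set(busy_cycles)  # cycles where refresh/precharge blocks access
        self.data = {}

    def access(self, cycle, addr, value=None):
        if cycle in self.busy_cycles:
            return ("busy", None)            # access collision with the DRAM state
        if value is not None:
            self.data[addr] = value
        return ("ok", self.data.get(addr))


class CacheInterface:
    """Processor-side portion, programmed with the RAM type at design time."""
    def __init__(self, cache, ram_type):
        self.cache = cache
        self.ram_type = ram_type             # "SRAM" or "DRAM"

    def read(self, cycle, addr):
        status, val = self.cache.access(cycle, addr)
        if status == "busy" and self.ram_type == "DRAM":
            # DRAM mode: honor the busy indication and retry on the next cycle.
            return self.read(cycle + 1, addr)
        # SRAM mode: busy is never asserted by real SRAM, so it is ignored.
        return cycle, val
```

For example, a read issued on a refresh cycle completes one cycle later in DRAM mode, whereas the same interface programmed for SRAM would simply take whatever the array returns on the original cycle.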
    • 3. Granted Invention
    • Abridged virtual address cache directory
    • Publication: US5751990A (1998-05-12)
    • Application: US233654 (1994-04-26)
    • Inventors: David John Krolak, Lyle Edwin Grosbach, Sheldon B. Levenstein, John David Irish
    • IPC: G06F12/08, G06F12/10
    • CPC: G06F12/1063
    • A hierarchical memory utilizes a translation lookaside buffer (TLB) for rapid recovery of virtual-to-real address mappings, together with a cache system. Lines in the cache are identified in the cache directory by pointers to entries in the translation lookaside buffer. This eliminates redundant listings of the virtual and real addresses for the cache line from the cache directory, allowing the directory to be small in size. Upon a memory access by a processing unit, a cache hash address is generated to access a translation lookaside buffer entry, allowing comparison of the address stored in the TLB entry with the address of the memory access; congruence implies a hit. Concurrently, the cache hash address indicates a pointer from the cache directory, which should correspond to the cache hash address to indicate a cache directory hit. When both comparisons succeed, a cache hit has occurred.
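The two-comparison hit condition can be sketched in a few lines. This is an illustrative toy (the hash, the table sizes, and the function names are assumptions, not the patent's design): the directory stores only a short TLB pointer, so a hit requires both that the hashed TLB entry matches the access address and that the directory pointer at the same hash names that TLB entry.

```python
N_ENTRIES = 8  # toy TLB/directory size

def addr_hash(vaddr):
    # Toy hash; a real design would hash selected virtual address bits.
    return vaddr % N_ENTRIES

def cache_hit(vaddr, tlb, directory):
    """True when both the TLB comparison and the directory pointer match."""
    idx = addr_hash(vaddr)
    tlb_hit = tlb[idx] == vaddr        # the full address lives only in the TLB
    dir_hit = directory[idx] == idx    # the directory stores a short TLB pointer
    return tlb_hit and dir_hit
```

The space saving follows directly: a directory entry is a log2(N_ENTRIES)-bit pointer rather than a full virtual plus real address pair.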
    • 4. Granted Invention
    • Fair hierarchical arbiter
    • Publication: US07302510B2 (2007-11-27)
    • Application: US11239615 (2005-09-29)
    • Inventors: Mark S. Fredrickson, David John Krolak
    • IPC: G06F13/14
    • CPC: G06F13/362
    • A fair hierarchical arbiter comprises a number of arbitration mechanisms, each arbitration mechanism forwarding winning requests from requestors in round-robin order by requestor. In addition to the winning requests, each arbitration mechanism forwards valid request bits, which provide information about which requestor originated a current winning request and, in some embodiments, about how many separate requestors are arbitrated by that particular arbitration mechanism. The fair hierarchical arbiter outputs requests from the total set of separate requestors in round-robin order.
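A rough behavioral model of the fairness idea follows. All names are mine and the mechanism is simplified: here each leaf's requestor count stands in for the "valid request bits" of the abstract, letting the top-level arbiter give each leaf a share of slots proportional to how many requestors it serves, so the hierarchy degenerates to a flat round robin over the total requestor set.

```python
class LeafArbiter:
    """Round-robins over its own requestors."""
    def __init__(self, requestors):
        self.requestors = requestors
        self.next_idx = 0

    @property
    def n_requestors(self):
        return len(self.requestors)

    def grant(self):
        winner = self.requestors[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.requestors)
        return winner


class FairTopArbiter:
    """Grants each leaf a number of consecutive slots equal to its requestor
    count, so every individual requestor is served exactly once per round."""
    def __init__(self, leaves):
        self.leaves = leaves
        self.child = 0        # which leaf currently owns the slot
        self.slots_used = 0   # slots already granted to that leaf this round

    def grant(self):
        leaf = self.leaves[self.child]
        winner = leaf.grant()
        self.slots_used += 1
        if self.slots_used == leaf.n_requestors:   # leaf's share is exhausted
            self.child = (self.child + 1) % len(self.leaves)
            self.slots_used = 0
        return winner
```

With one leaf of two requestors and one of three, ten grants come out as A0, A1, B0, B1, B2 repeated twice: each requestor appears exactly once per five-slot round, which a naive alternate-the-leaves scheme would not achieve.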
    • 7. Granted Invention
    • Data processing system and multi-way set associative cache utilizing class predict data structure and method thereof
    • Publication: US6138209A (2000-10-24)
    • Application: US924272 (1997-09-05)
    • Inventors: David John Krolak, Sheldon Bernard Levenstein
    • IPC: G06F12/08, G06F12/10, G06F12/00
    • CPC: G06F12/0864, G06F12/1054, G06F12/0897, G06F2212/601, G06F2212/6082
    • A data processing system and method thereof utilize a unique cache architecture that performs class prediction in a multi-way set associative cache during either or both of handling a memory access request by an anterior cache and translating a memory access request to an addressing format compatible with the multi-way set associative cache. Class prediction may be performed using a class predict data structure with a plurality of predict array elements partitioned into sub-arrays that is accessed using a hashing algorithm to retrieve selected sub-arrays. In addition, a master/slave class predict architecture may be utilized to permit concurrent access to class predict information by multiple memory access request sources. Moreover, a cache may be configured to operate in multiple associativity modes by selectively utilizing either class predict information or address information related to a memory access request in the generation of an index into the cache data array.
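The core idea of class (way) prediction can be shown with a minimal model. This is a loose sketch under my own assumptions (toy sizes, tag-only cache, a trivial predictor trained on fills and corrected on mispredicts), not the patent's hashed sub-array scheme: a small predict array guesses which way of the set holds the line, so only that way needs to be checked on the fast path.

```python
N_SETS, N_WAYS = 4, 2  # toy geometry

def set_index(addr):
    return addr % N_SETS

class WayPredictedCache:
    def __init__(self):
        self.tags = [[None] * N_WAYS for _ in range(N_SETS)]
        self.predict = [0] * N_SETS          # predicted way per set

    def fill(self, addr, way):
        self.tags[set_index(addr)][way] = addr
        self.predict[set_index(addr)] = way  # train the predictor on fill

    def lookup(self, addr):
        s = set_index(addr)
        way = self.predict[s]                # fast path: probe the predicted way only
        if self.tags[s][way] == addr:
            return ("predicted-hit", way)
        for w in range(N_WAYS):              # slow path: search every way
            if self.tags[s][w] == addr:
                self.predict[s] = w          # correct the prediction
                return ("mispredict-hit", w)
        return ("miss", None)
```

After a fill into a set, the next lookup of that line hits via the predicted way; if a later fill to the same set retrains the predictor, the first lookup of the older line pays the slow-path search once and then predicts correctly again.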
    • 9. Granted Invention
    • Control logic for very fast clock speeds
    • Publication: US5649177A (1997-07-15)
    • Application: US563561 (1995-11-28)
    • Inventors: Lyle Edwin Grosbach, David John Krolak, David Wayne Marquart
    • IPC: G06F1/12, G06F1/10
    • CPC: G06F1/12
    • The ability to harmonize the activities of individual computer system components with control signals is key to the operation of any computer system. Examples of this need for control include the need to write data to multiple registers on the same clock cycle, the need to clear values on multiple entities on the same clock cycle, and the need to stop and start the master clock pulse train itself. In the past, providing this control was not a problem because control signals could be reliably sent to all the timing dependent components within a single cycle of the master clock pulse train. This control methodology is called "single cycle control." Today, however, single cycle control is not trustworthy in all situations. Master clock pulse trains are so fast that single cycle control is no longer reliable when timing dependent components reside in locations distant from the control signal generating circuitry. The present invention provides reliable control in all cases, including the situation where a master clock pulse train is so fast that single cycle control is not viable.