    • 53. Invention Grant
    • Title: Managing a multi-way associative cache
    • Publication number: US07237067B2
    • Publication date: 2007-06-26
    • Application number: US10829186
    • Filing date: 2004-04-22
    • Inventors: Simon C. Steely, Jr.
    • IPC: G06F12/00
    • CPC: G06F12/128
    • Abstract: Methods for storing replacement data in a multi-way associative cache are disclosed. One method comprises logically dividing the cache's cache sets into segments of at least one cache way; searching a cache set in accordance with a segment search sequence for a segment currently comprising a way which has not yet been accessed during a current cycle of the segment search sequence; searching the current segment in accordance with a way search sequence for a way which has not yet been accessed during a current way search cycle; and storing the replacement data in a first way which has not yet been accessed during a current cycle of the way search sequence. A cache controller that performs such methods is also disclosed.
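The abstract describes a two-level victim search over per-way "accessed this cycle" bits. The C++ sketch below is only a rough illustration under assumed parameters (8 ways split into 2 segments, simple in-order segment and way search sequences); the `CacheSet`, `touch`, and `selectVictim` names are invented for the example and are not taken from the patent.

```cpp
#include <array>
#include <iostream>

// Assumed geometry for illustration only: 8 ways per set, split into 2 segments.
constexpr int kWays = 8;
constexpr int kSegments = 2;
constexpr int kWaysPerSegment = kWays / kSegments;

struct CacheSet {
    std::array<bool, kWays> accessed{};  // per-way "accessed during the current cycle" bit

    // A cache hit marks the way as accessed in the current cycle.
    void touch(int way) { accessed[way] = true; }

    // Victim selection: walk segments in the segment search sequence, and within
    // the first segment that still holds an un-accessed way, walk ways in the way
    // search sequence; the first un-accessed way receives the replacement data.
    int selectVictim() {
        for (int seg = 0; seg < kSegments; ++seg) {
            for (int w = 0; w < kWaysPerSegment; ++w) {
                int way = seg * kWaysPerSegment + w;
                if (!accessed[way]) {
                    accessed[way] = true;  // now counts as accessed this cycle
                    return way;
                }
            }
        }
        // Every way was accessed this cycle: start a new search cycle and retry.
        accessed.fill(false);
        return selectVictim();
    }
};

int main() {
    CacheSet set;
    set.touch(0);  // pretend way 0 just hit
    for (int i = 0; i < 5; ++i)
        std::cout << "victim way: " << set.selectVictim() << '\n';  // 1, 2, 3, 4, 5
}
```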
    • 56. Invention Grant
    • Title: Apparatus and method for serialized set prediction
    • Publication number: US5966737A
    • Publication date: 1999-10-12
    • Application number: US971630
    • Filing date: 1997-11-17
    • Inventors: Simon C. Steely, Jr.; Joseph Dominic Macri
    • IPC: G06F12/08; G06F9/38
    • CPC: G06F12/0864; G06F2212/6082
    • Abstract: A prediction mechanism for improving direct-mapped cache performance is shown to include a direct-mapped cache, partitioned into a plurality of pseudo-banks. Prediction means are employed to provide a prediction index which is appended to the cache index to provide the entire address for addressing the direct mapped cache. One embodiment of the prediction means includes a prediction cache which is advantageously larger than the pseudo-banks of the direct-mapped cache and is used to store the prediction index for each cache location. A second embodiment includes a plurality of partial tag stores, each including a predetermined number of tag bits for the data in each bank. A comparison of the tags generates a match in one of the plurality of tag stores, and is used in turn to generate a prediction index. A third embodiment for use with a direct mapped cache divided into two partitions includes a distinguishing bit ram, which is used to provide the bit number of any bit which differs between the tags at the same location in the different banks. The bit number is used in conjunction with a complement signal to provide the prediction index for addressing the direct-mapped cache.
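As a rough picture of the first embodiment (a prediction cache that steers the initial probe, with serial probes of the remaining pseudo-banks on a mispredict), here is a C++ sketch. The geometry (4 pseudo-banks of 64 lines, a 512-entry prediction table) and all names (`PredictedCache`, `lookup`, `fill`) are assumptions for illustration, not figures from the patent.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Assumed geometry: 4 pseudo-banks of 64 lines; a 2-bit prediction index selects the bank.
constexpr int kBanks = 4;
constexpr int kLinesPerBank = 64;   // cache index is 6 bits
constexpr int kPredEntries = 512;   // prediction cache, larger than one pseudo-bank

struct Line { bool valid = false; uint32_t tag = 0; };

struct PredictedCache {
    std::vector<Line> lines = std::vector<Line>(kBanks * kLinesPerBank);
    std::vector<uint8_t> pred = std::vector<uint8_t>(kPredEntries, 0);

    static uint32_t index(uint32_t addr)   { return addr % kLinesPerBank; }
    static uint32_t tag(uint32_t addr)     { return addr / kLinesPerBank; }
    static uint32_t predIdx(uint32_t addr) { return addr % kPredEntries; }

    bool lookup(uint32_t addr) {
        uint32_t bank = pred[predIdx(addr)];                 // predicted bank
        uint32_t slot = bank * kLinesPerBank + index(addr);  // prediction index appended to cache index
        if (lines[slot].valid && lines[slot].tag == tag(addr))
            return true;                                     // predicted probe hit
        // Mispredict or miss: serially probe the remaining pseudo-banks.
        for (uint32_t b = 0; b < kBanks; ++b) {
            if (b == bank) continue;
            uint32_t s = b * kLinesPerBank + index(addr);
            if (lines[s].valid && lines[s].tag == tag(addr)) {
                pred[predIdx(addr)] = static_cast<uint8_t>(b);  // learn the correct bank
                return true;
            }
        }
        return false;  // true miss; fill policy omitted for brevity
    }

    void fill(uint32_t addr, uint32_t bank) {
        lines[bank * kLinesPerBank + index(addr)] = {true, tag(addr)};
        pred[predIdx(addr)] = static_cast<uint8_t>(bank);
    }
};

int main() {
    PredictedCache c;
    c.fill(0x1234, 2);
    std::cout << c.lookup(0x1234) << ' ' << c.lookup(0x9999) << '\n';  // prints: 1 0
}
```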
    • 57. Invention Grant
    • Title: Apparatus and method for intelligent multiple-probe cache allocation
    • Publication number: US5829051A
    • Publication date: 1998-10-27
    • Application number: US223069
    • Filing date: 1994-04-04
    • Inventors: Simon C. Steely, Jr.; Richard B. Gillett, Jr.; Tryggve Fossum
    • IPC: G06F12/08; G06F9/26
    • CPC: G06F12/0864
    • Abstract: An apparatus for allocating data to and retrieving data from a cache includes a memory subsystem coupled between a processor and a memory to provide quick access of memory data to the processor. The memory subsystem includes a cache memory. The address provided to the memory subsystem is divided into a cache index and a tag, and the cache index is hashed to provide a plurality of alternative addresses for accessing the cache. During a cache read, each of the alternative addresses is selected to search for the data responsive to an indicator of the validity of the data at the locations. The selection of the alternative address may be done through a mask having a number of bits corresponding to the number of alternative addresses. Each bit indicates whether the alternative address at that location should be used during the access of the cache in search of the data. Alternatively, a memory device which has more entries than the cache has blocks may be used to store the select value of the best alternative address to use to locate the data. Data is allocated to each alternative address based upon a modified least recently used technique wherein a quantum number and modula counter are used to time stamp the data.
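The following C++ sketch illustrates only the general idea of probing a direct-mapped cache at several hashed alternative indices and allocating at the stalest one. The hash function, the two-probe geometry, and the plain oldest-timestamp victim choice are stand-ins invented for the example; the patent's mask and select-value embodiments and its quantum/modula-counter LRU variant are not reproduced here.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Assumed geometry: a 256-block direct-mapped cache probed at 2 alternative indices.
constexpr int kBlocks = 256;
constexpr int kProbes = 2;

struct Block { bool valid = false; uint32_t tag = 0; uint32_t stamp = 0; };

struct MultiProbeCache {
    std::vector<Block> blocks = std::vector<Block>(kBlocks);
    uint32_t quantum = 0;  // coarse time stamp, advanced periodically

    static uint32_t tagOf(uint32_t addr)   { return addr / kBlocks; }
    static uint32_t indexOf(uint32_t addr) { return addr % kBlocks; }

    // Hash the cache index into the alternative index for probe 'p'
    // (probe 0 uses the index unchanged; the hash is an arbitrary example).
    static uint32_t altIndex(uint32_t idx, uint32_t tag, int p) {
        return p == 0 ? idx : (idx ^ (tag * 0x9E37u) ^ static_cast<uint32_t>(p)) % kBlocks;
    }

    // Cache read: probe each alternative address until the tag matches valid data.
    bool read(uint32_t addr) {
        for (int p = 0; p < kProbes; ++p) {
            Block& b = blocks[altIndex(indexOf(addr), tagOf(addr), p)];
            if (b.valid && b.tag == tagOf(addr)) { b.stamp = quantum; return true; }
        }
        return false;
    }

    // Allocation: place the block at the empty or oldest alternative location.
    void allocate(uint32_t addr) {
        uint32_t idx = indexOf(addr), tag = tagOf(addr);
        int victim = 0;
        uint32_t oldest = UINT32_MAX;
        for (int p = 0; p < kProbes; ++p) {
            Block& b = blocks[altIndex(idx, tag, p)];
            if (!b.valid) { victim = p; break; }            // empty slot wins outright
            if (b.stamp < oldest) { oldest = b.stamp; victim = p; }
        }
        blocks[altIndex(idx, tag, victim)] = {true, tag, quantum};
    }

    void tick() { ++quantum; }  // advance the coarse time stamp
};

int main() {
    MultiProbeCache c;
    c.allocate(0x1111);
    c.tick();
    c.allocate(0x2222);
    std::cout << c.read(0x1111) << ' ' << c.read(0x3333) << '\n';  // prints: 1 0
}
```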
    • 60. Invention Grant
    • Title: Ring based distributed communication bus for a multiprocessor network
    • Publication number: US5551048A
    • Publication date: 1996-08-27
    • Application number: US253474
    • Filing date: 1994-06-03
    • Inventors: Simon C. Steely, Jr.
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0813; G06F12/0815; Y10S707/99952
    • Abstract: A method for providing communication between a plurality of nodes coupled in a ring arrangement, wherein a plurality of the nodes comprise processors each having a cache memory for storing a subset of shared data. Each of the nodes on the ring deposits data into a data slot during a given time period. The data deposited by each node may comprise an address field and a node field. To ensure data coherency between the caches, each processor on the ring includes a queue for saving a plurality of received data representative of the latest bus data transmitted on the bus. As each processor receives new data, the new data is compared against the plurality of saved data in the queue to determine if the address field of the new data matches the address field of any of the saved data of the queue. In the event that the new data matches one of the plurality of saved data, it is determined whether the new data represents updated data from the memory device. If the new data represents updated data it is shifted into the queue. If it does not represent updated data, it is discarded.
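A minimal C++ sketch of the per-processor queue described above, assuming a fixed-depth FIFO and a `receive` routine that applies the match/updated/discard rule; the `SlotData` layout and the handling of non-matching data are assumptions made for illustration rather than details taken from the patent.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>

// One message circulating in a ring slot: the depositing node, the cache-line
// address it refers to, and whether it carries updated data from memory.
struct SlotData {
    uint32_t node;
    uint32_t address;
    bool updated;
};

// Per-processor queue of the latest bus traffic, used to keep caches coherent.
class RingNode {
public:
    explicit RingNode(size_t depth) : depth_(depth) {}

    // Handle newly received slot data; returns true if it was kept in the queue.
    bool receive(const SlotData& d) {
        for (const SlotData& q : recent_) {
            if (q.address == d.address) {                 // address matches saved data
                if (d.updated) { push(d); return true; }  // updated data: shift into queue
                return false;                             // not updated: discard
            }
        }
        push(d);  // no matching address: remember the new data (assumed behaviour)
        return true;
    }

private:
    void push(const SlotData& d) {
        recent_.push_back(d);
        if (recent_.size() > depth_) recent_.pop_front();  // keep only the latest entries
    }

    size_t depth_;
    std::deque<SlotData> recent_;
};

int main() {
    RingNode node(4);
    std::cout << node.receive({1, 0x100, false}) << '\n';  // 1: new address, kept
    std::cout << node.receive({2, 0x100, false}) << '\n';  // 0: same address, not updated
    std::cout << node.receive({3, 0x100, true})  << '\n';  // 1: updated data, kept
}
```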