    • 3. Invention grant
    • Multiple store miss handling in a cache memory system
    • Publication number: US06311254B1
    • Publication date: 2001-10-30
    • Application number: US09271494
    • Filing date: 1999-03-18
    • Inventors: Belliappa Manavattira Kuttanna, Rajesh Patel, Michael Dean Snyder
    • IPC: G06F12/00
    • CPC: G06F12/0859
    • A cache memory system including a cache memory suitable for coupling to a load/store unit of a CPU, and a buffer unit comprised of a plurality of entries each including a data buffer and a corresponding address tag. The system is configured to initiate a data fetch transaction in response to a first store operation that misses in both the cache memory and the buffer unit, to allocate a first entry in the buffer unit, and to write the first store operation's data in the first entry's data buffer. The system is adapted to write data from at least one subsequent store operation into the first entry's data buffer if the subsequent store operation misses in the cache but hits in the first entry of the buffer unit prior to completion of the data fetch transaction. In this manner, the first entry's data buffer includes a composite of the first and subsequent store operations' data. Preferably, the cache system is further configured to merge, upon completion of the data fetch, the fetched data with the store operation data in the first entry's data buffer and to reload the cache memory from the first entry's data buffer. In the preferred embodiment, each buffer unit entry further includes data valid bits that indicate the validity of corresponding portions of the entry's data buffer. In this embodiment, the buffer unit is preferably configured to reload the cache memory from the first buffer unit entry if all of the first entry's data valid bits are set prior to completion of the data fetch transaction, thereby effecting a “silent” reload of the cache memory in which no data is ultimately required from memory.
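The store-miss merging described in this abstract can be illustrated with a small behavioral model. The sketch below is only illustrative: it assumes a byte-granular 8-byte line, per-byte valid bits, and a toy dict-based cache, and the names (StoreMissBuffer, fetch_complete, and so on) are invented for the example rather than taken from the patent.

```python
LINE_SIZE = 8  # bytes per cache line (assumed for illustration)

class Entry:
    def __init__(self, tag):
        self.tag = tag                    # line address tag
        self.data = bytearray(LINE_SIZE)  # composite store data
        self.valid = [False] * LINE_SIZE  # per-byte data valid bits
        self.fetch_pending = True         # data fetch transaction still outstanding

class StoreMissBuffer:
    def __init__(self, cache):
        self.cache = cache    # toy cache model: tag -> bytes
        self.entries = {}     # buffer unit: tag -> Entry

    def store(self, addr, value):
        tag, offset = addr // LINE_SIZE, addr % LINE_SIZE
        if tag in self.cache:                       # cache hit: update the line in place
            line = bytearray(self.cache[tag])
            line[offset] = value
            self.cache[tag] = bytes(line)
            return
        entry = self.entries.get(tag)
        if entry is None:                           # miss in cache and buffer:
            entry = self.entries[tag] = Entry(tag)  # allocate entry (fetch assumed started)
        entry.data[offset] = value                  # merge this store into the entry
        entry.valid[offset] = True
        if all(entry.valid) and entry.fetch_pending:
            self._reload(entry)                     # "silent" reload: no memory data needed

    def fetch_complete(self, tag, memory_line):
        entry = self.entries.get(tag)
        if entry is None:
            return                                  # entry was already reloaded silently
        entry.fetch_pending = False
        for i in range(LINE_SIZE):                  # merge fetched data; buffered stores win
            if not entry.valid[i]:
                entry.data[i] = memory_line[i]
                entry.valid[i] = True
        self._reload(entry)

    def _reload(self, entry):
        self.cache[entry.tag] = bytes(entry.data)   # reload cache from the entry's buffer
        del self.entries[entry.tag]

buf = StoreMissBuffer(cache={})
buf.store(3, 0xAA)                                  # first store miss: entry allocated
buf.store(5, 0xBB)                                  # second miss to the same line: merged
buf.fetch_complete(0, bytes(range(LINE_SIZE)))      # fetch returns; data is combined
print(buf.cache[0].hex())                           # -> 000102aa04bb0607
```

The “silent” reload corresponds to the branch where every valid bit is set before fetch_complete runs, so the fetched line is never needed.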
    • 4. Invention grant
    • Dynamically modifying queued transactions in a cache memory system
    • Publication number: US06321303B1
    • Publication date: 2001-11-20
    • Application number: US09271492
    • Filing date: 1999-03-18
    • Inventors: Thomas Alan Hoy, Belliappa Manavattira Kuttanna, Rajesh Patel, Michael Dean Snyder
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • A computer and its corresponding cache system include a cache memory, a buffer unit, and a bus transaction queue. The buffer unit includes a plurality of entries suitable for temporarily storing data, address, and attribute information of operations generated by the CPU. A first operation initiated by the load/store unit is buffered in a first entry of the buffer unit, which initiates a first transaction to be queued in a first entry of the bus transaction queue, where the first transaction in the bus transaction queue points to the first entry in the buffer unit. Preferably, the buffer unit is configured to modify the first transaction from a first transaction type to a second transaction type prior to execution in response to an event that alters the data requirements of the queued transaction. Additional utility is achieved by merging multiple store operations that miss to a common cache line into a single entry. A further benefit is achieved by allowing multiple load misses to the same cache line to be completed from a buffer, which reduces cache pipeline stalls.
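As a rough illustration of the transaction retyping this abstract describes, the sketch below queues a line-read transaction that points back at a buffer-unit entry and rewrites its type before issue once merged stores cover the whole line. The transaction names (READ_LINE, ADDRESS_ONLY) and the triggering event are assumptions chosen for the example, not the patent's encoding.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class BufferEntry:
    tag: int
    line_size: int = 8
    valid_bytes: set = field(default_factory=set)  # offsets already written by stores

@dataclass
class Transaction:
    kind: str            # e.g. "READ_LINE" or "ADDRESS_ONLY"
    entry: BufferEntry   # points back at the buffer-unit entry it serves

class BusTransactionQueue:
    def __init__(self):
        self.pending = deque()

    def enqueue_for(self, entry):
        txn = Transaction("READ_LINE", entry)      # first miss queues a full line read
        self.pending.append(txn)
        return txn

    def merge_store(self, entry, offset):
        """Merge a later store into the entry, then retype its queued transaction
        if the merged stores now supply every byte (no fetched data required)."""
        entry.valid_bytes.add(offset)
        for txn in self.pending:
            if txn.entry is entry and len(entry.valid_bytes) == entry.line_size:
                txn.kind = "ADDRESS_ONLY"          # modified dynamically before execution

    def issue(self):
        return self.pending.popleft() if self.pending else None

queue = BusTransactionQueue()
entry = BufferEntry(tag=0x40)
queue.enqueue_for(entry)              # a store miss allocates the entry and queues a read
for offset in range(8):
    queue.merge_store(entry, offset)  # subsequent stores to the same line are merged
print(queue.issue().kind)             # -> ADDRESS_ONLY: the read was retyped before issue
```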
    • 6. Invention grant
    • Multiple load miss handling in a cache memory system
    • Publication number: US06269427B1
    • Publication date: 2001-07-31
    • Application number: US09271493
    • Filing date: 1999-03-18
    • Inventors: Belliappa Manavattira Kuttanna, Rajesh Patel, Michael Dean Snyder
    • IPC: G06F12/08
    • CPC: G06F9/30043; G06F9/30047; G06F9/3824; G06F12/0831; G06F12/0859
    • A cache memory system including a cache memory configured for coupling to a load/store unit of a CPU, a buffer unit coupled to said cache memory, and an operation queue comprising a plurality of entries, wherein each valid operation queue entry points to an entry in the buffer unit. The buffer unit includes a plurality of data buffers and each of the data buffers is associated with a corresponding address tag. The system is configured to initiate a data fetch transaction and allocate an entry in the buffer unit in response to a CPU load operation that misses in both the cache memory and the buffer unit. The cache system is further configured to allocate entries in the operation queue in response to subsequent CPU load operations that miss in the cache memory but hit in the buffer unit prior to completion of the data fetch. Preferably, the system is configured to store the fetched data in the buffer unit entry upon satisfaction of said data fetch and still further configured to satisfy pending load operations in the operation queue from the buffer unit entry. In the preferred embodiment, the system is configured to reload the cache memory from the buffer unit entry upon satisfying all operation queue entries pointing to the buffer unit entry and, thereafter, to invalidate the buffer unit entry and the operation queue entries. The buffer unit entries preferably each include data valid bits indicative of which portions of data stored in a buffer unit entry are valid.
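A simplified model of the load-miss handling in this abstract: the first miss allocates a buffer entry and starts a fetch, later misses to the same line are parked in an operation queue that points at that entry, and all of them complete from the entry when the fetch returns. This is only a sketch; the names and the callback-based completion are assumptions made for illustration.

```python
LINE_SIZE = 8

class LoadMissBuffer:
    def __init__(self, cache):
        self.cache = cache        # toy cache model: tag -> bytes
        self.entries = {}         # buffer unit: tag -> fetched line (None while pending)
        self.op_queue = []        # operation queue: (tag, offset, callback) per load

    def load(self, addr, callback):
        tag, offset = addr // LINE_SIZE, addr % LINE_SIZE
        if tag in self.cache:                          # cache hit: complete immediately
            callback(self.cache[tag][offset])
            return
        if tag not in self.entries:                    # miss in cache and buffer:
            self.entries[tag] = None                   # allocate entry, fetch assumed started
        self.op_queue.append((tag, offset, callback))  # queue the load against that entry

    def fetch_complete(self, tag, line):
        self.entries[tag] = line                       # store fetched data in the entry
        remaining = []
        for t, off, cb in self.op_queue:               # satisfy every queued load from it
            if t == tag:
                cb(line[off])
            else:
                remaining.append((t, off, cb))
        self.op_queue = remaining
        self.cache[tag] = line                         # reload cache, then invalidate entry
        del self.entries[tag]

buf = LoadMissBuffer(cache={})
buf.load(1, lambda v: print("load@1 ->", v))           # first miss: entry allocated
buf.load(6, lambda v: print("load@6 ->", v))           # second miss to same line: queued
buf.fetch_complete(0, bytes(range(LINE_SIZE)))         # both loads complete from the entry
```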
    • 9. Invention grant
    • Data prefetching apparatus in a data processing system and method therefor
    • Publication number: US06785772B2
    • Publication date: 2004-08-31
    • Application number: US10132918
    • Filing date: 2002-04-26
    • Inventors: Suresh Venkumahanti, Michael Dean Snyder
    • IPC: G06F12/00
    • CPC: G06F12/0862; G06F9/30047; G06F9/3455; G06F9/383; G06F2212/6028
    • A data processing system (20) is able to perform parameter-selectable prefetch instructions to prefetch data for a cache (38). When attempting to be backward compatible with previously written code, sometimes performing this instruction can result in attempting to prefetch redundant data by prefetching the same data twice. In order to prevent this, the parameters of the instruction are analyzed to determine if such redundant data will be prefetched. If so, then the parameters are altered to avoid prefetching redundant data. In some of the possibilities for the parameters of the instruction, the altering of the parameters requires significant circuitry so that an alternative approach is used. This alternative but slower approach, which can be used in the same system with the first approach, detects if the line of the cache that is currently being requested is the same as the previous request. If so, the current request is not executed.
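To illustrate the duplicate-suppression idea in this abstract, the sketch below walks a parameterised prefetch stream (block size, block count, stride) line by line and drops a request when it targets the same cache line as the immediately preceding request, in the spirit of the slower comparison-based approach the abstract mentions. The parameter names only loosely resemble stream-prefetch instructions and are assumptions for the example.

```python
LINE_BYTES = 32  # cache line size assumed for illustration

def prefetch_lines(base, block_size, block_count, stride):
    """Yield the cache-line addresses a (base, size, count, stride) stream touches,
    suppressing a request that repeats the immediately preceding line."""
    last_line = None
    for i in range(block_count):
        block_start = base + i * stride
        for offset in range(0, block_size, LINE_BYTES):
            line = (block_start + offset) // LINE_BYTES * LINE_BYTES
            if line == last_line:
                continue                 # redundant: same line as the previous request
            last_line = line
            yield line

# A stride smaller than the block size makes consecutive blocks overlap, so a naive
# walk would request some lines twice; the check above drops those repeats.
print([hex(a) for a in prefetch_lines(base=0x1000, block_size=64, block_count=3, stride=32)])
# -> ['0x1000', '0x1020', '0x1040', '0x1060']
```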
    • 10. Invention grant
    • Method and apparatus for transferring data on a split bus in a data processing system
    • Publication number: US06240479B1
    • Publication date: 2001-05-29
    • Application number: US09127459
    • Filing date: 1998-07-31
    • Inventors: Michael Dean Snyder, David William Todd, Brian Keith Reynolds, Michael Julio Garcia
    • IPC: G06F13/362
    • CPC: G06F13/364
    • A bus protocol for a split bus (50, 60) where each device (10, 20, 30) coupled to the bus has an age-based queue (12, 24, 34) of pending transactions. Queues are updated as transactions are executed. A central arbiter (40) has a copy of each device's queue (44). A priority transaction is determined from among all the queues in the arbiter. A data transaction index (DTI) is broadcast during the data tenure to all devices indicating the position in the queue of the next transaction. The index allows out-of-order data transfers without the provision of a static tag during the address tenure. Queues maintain a history of pending transactions. In one embodiment, each device receives a separate data bus grant (DBG), allowing a single provision of the index to both a source and a sink device.
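The index broadcast in this abstract can be sketched as follows: every device keeps an age-ordered queue of pending transactions, the arbiter mirrors that queue, and during the data tenure it broadcasts the queue position (DTI) of the transaction being serviced rather than a static tag, which permits out-of-order data return. The model below is a loose illustration; the class names and the single shared queue per device are simplifying assumptions, and the per-device data bus grant is omitted.

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.queue = []                       # age-ordered history of pending transactions

    def add(self, txn):
        self.queue.append(txn)

    def on_dti(self, dti):
        txn = self.queue.pop(dti)             # DTI selects by queue position, not by tag
        print(f"{self.name}: data tenure for {txn}")

class Arbiter:
    def __init__(self, devices):
        self.devices = devices
        self.mirror = []                      # arbiter's copy of the devices' queue

    def address_tenure(self, txn):
        self.mirror.append(txn)
        for d in self.devices:
            d.add(txn)                        # every device tracks the same history

    def data_tenure(self, txn):
        dti = self.mirror.index(txn)          # position of the transaction chosen as next
        self.mirror.pop(dti)
        for d in self.devices:                # broadcast the index to all devices
            d.on_dti(dti)

cpu, mem = Device("cpu"), Device("memory")
arb = Arbiter([cpu, mem])
arb.address_tenure("read A")
arb.address_tenure("read B")
arb.data_tenure("read B")                     # out-of-order data: B returns before A
arb.data_tenure("read A")
```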