    • 14. Granted invention patent
    • Title: Non-inclusive cache system using pipelined snoop bus
    • Publication number: US6076147A
    • Publication date: 2000-06-13
    • Application number: US881745
    • Filing date: 1997-06-24
    • Inventors: William L. Lynch; Al Yamauchi
    • IPC: G06F12/08
    • CPC: G06F12/0831; G06F12/0811
    • Abstract: A non-inclusive cache system includes an external cache and a plurality of on-chip caches, each having a set of tags associated therewith, with at least one of the on-chip caches including data which is absent from the external cache. A pipelined snoop bus is ported to each set of tags of the plurality of on-chip caches and transmits a snoop address to the plurality of on-chip caches. A system interface unit is responsive to a received snoop request to scan the external cache and to apply the snoop address of the snoop request to the pipelined snoop bus. A plurality of response signal lines respectively extend from the plurality of on-chip caches to the system interface unit, each signal line transmitting a snoop response from a corresponding one of the on-chip caches to the system interface unit. The set of tags can be implemented by dual-porting the cache tags, or by providing a duplicate, dedicated set of snoop tags.
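The snoop flow described in the abstract above can be sketched in software. This is a minimal illustrative model, not the patented hardware; all class and field names (OnChipCache, SystemInterfaceUnit, and so on) are invented for illustration:

```python
class OnChipCache:
    """An on-chip cache with its own (dual-ported or duplicated) snoop tags."""
    def __init__(self, name, tags):
        self.name = name
        self.snoop_tags = set(tags)  # addresses currently cached

    def snoop(self, addr):
        # Tag lookup only; the data array is untouched, so normal
        # accesses can proceed in parallel with snoops.
        return addr in self.snoop_tags


class SystemInterfaceUnit:
    """Receives snoop requests and drives the pipelined snoop bus."""
    def __init__(self, external_cache, on_chip_caches):
        self.external_cache = set(external_cache)
        self.on_chip_caches = on_chip_caches

    def handle_snoop(self, addr):
        # Because the external cache is non-inclusive, a miss there does
        # NOT prove the line is absent on chip: the snoop address must be
        # broadcast to every on-chip tag set, and each cache answers on
        # its own response line.
        responses = {c.name: c.snoop(addr) for c in self.on_chip_caches}
        responses["external"] = addr in self.external_cache
        return responses


i_cache = OnChipCache("icache", {0x100})
d_cache = OnChipCache("dcache", {0x200})          # 0x200 is absent externally
siu = SystemInterfaceUnit({0x100, 0x300}, [i_cache, d_cache])
print(siu.handle_snoop(0x200))  # hit only in dcache, despite the external miss
```

The point of the model is the non-inclusion property: the external tags alone cannot filter snoops, which is why the patent routes every snoop address onto the pipelined bus.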
    • 16. Granted invention patent
    • Title: Ensuring consistency of an instruction cache with a store cache check and an execution blocking flush instruction in an instruction queue
    • Publication number: US06164840A
    • Publication date: 2000-12-26
    • Application number: US881106
    • Filing date: 1997-06-24
    • Inventors: William L. Lynch
    • IPC: G06F9/30; G06F9/38; G06F12/08
    • CPC: G06F9/3812; G06F9/30047; G06F12/0848
    • Abstract: A method of ensuring instruction cache consistency in a processor includes executing a flush instruction whenever a program executed by the processor stores data to a given data address and, subsequently, executes another instruction requiring a data fetch from the same address. According to this method, a write cache prevents any addressed instruction from residing in the write cache and the instruction cache at the same time. Thus, when an instruction having a store address not already present in the write cache is retired to the write cache, the write cache instructs the instruction cache to invalidate any data stored therein having the same address. The flush instruction prevents execution of any other instructions after the store at least until the store to the memory address has been allocated to a write cache of the processor, thus enabling the write cache to invalidate the subsequent instruction at the same address in the instruction cache. The method ensures instruction cache consistency without the need to check every store against the instruction cache.
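The exclusivity rule in the abstract above, that a line may live in the write cache or the instruction cache but never both, can be sketched as follows. The class and method names are invented for illustration and the model ignores the flush-instruction timing:

```python
class InstructionCache:
    def __init__(self):
        self.lines = set()

    def fill(self, addr):
        self.lines.add(addr)

    def invalidate(self, addr):
        self.lines.discard(addr)


class WriteCache:
    """Enforces: an address resides in the write cache or the
    instruction cache, never in both at once."""
    def __init__(self, icache):
        self.lines = set()
        self.icache = icache

    def retire_store(self, addr):
        if addr not in self.lines:
            # Allocating a new line: force the instruction cache to
            # drop any stale copy of the same address. Stores to an
            # already-allocated line need no further check.
            self.icache.invalidate(addr)
            self.lines.add(addr)


icache = InstructionCache()
icache.fill(0x40)             # self-modifying code: 0x40 is cached as code
wc = WriteCache(icache)
wc.retire_store(0x40)         # store retires -> i-cache copy invalidated
print(0x40 in icache.lines)   # False; a later fetch misses and refills
```

The benefit the abstract claims falls out of this structure: only stores that allocate a new write-cache line ever touch the instruction cache, so there is no per-store tag check against it.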
    • 17. Granted invention patent
    • Title: Method for handling data cache misses using help instructions
    • Publication number: US6016532A
    • Publication date: 2000-01-18
    • Application number: US884066
    • Filing date: 1997-06-27
    • Inventors: William L. Lynch; Gary R. Lauterbach
    • IPC: G06F9/30; G06F9/312; G06F9/38; G06F17/30
    • CPC: G06F9/30043; G06F9/383; G06F9/3875
    • Abstract: A microprocessor is configured to generate help instructions in response to a data cache miss. The help instructions flow through the instruction processing pipeline of the microprocessor in a fashion similar to the instruction which caused the miss (the "miss instruction"). The help instructions use the source operands of the miss instruction to form the miss address, thereby providing the fill address using the same elements which are used to calculate cache access addresses. In one embodiment, a fill help instruction and a bypass help instruction are generated. The fill help instruction provides the input address to the data cache during the clock cycle in which the fill data arrives. The appropriate row of the data cache is thereby selected for storing the fill data. The bypass help instruction is dispatched to arrive in a second pipeline stage different from the stage occupied by the fill help instruction. The bypass help instruction causes the datum requested by the miss instruction to be forwarded to the destination of the miss instruction.
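The timing relationship between the two help instructions in the abstract above can be sketched as a toy scheduler. The cycle counts and names are invented for illustration; the patent describes pipeline hardware, not software:

```python
FILL_LATENCY = 4  # cycles until fill data returns from the next level (assumed)


def schedule_help_ops(miss_cycle):
    """On a data-cache miss, inject two help ops that reuse the miss
    instruction's source operands (and hence the normal address-generation
    path) to re-form the miss address."""
    fill_cycle = miss_cycle + FILL_LATENCY
    return [
        # fill help: presents the miss address to the data cache in the
        # same cycle the fill data arrives, selecting the row to write
        ("fill_help", fill_cycle),
        # bypass help: staged to occupy a different (here: later) pipeline
        # stage, forwarding the requested datum to the miss instruction's
        # destination register
        ("bypass_help", fill_cycle + 1),
    ]


print(schedule_help_ops(miss_cycle=10))  # [('fill_help', 14), ('bypass_help', 15)]
```

The design point the abstract emphasizes is that neither op needs a dedicated fill-address register: both recompute the address through the same datapath every normal access uses.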
    • 18. Granted invention patent
    • Title: Low-latency memory indexing method and structure
    • Publication number: US5754819A
    • Publication date: 1998-05-19
    • Application number: US282525
    • Filing date: 1994-07-28
    • Inventors: William L. Lynch; Gary R. Lauterbach
    • IPC: G11C11/413; G06F12/02; G06F12/08; G06F12/00
    • CPC: G06F12/0864
    • Abstract: A significant reduction in the latency between the time the address components are ready and the time the addressed data is available from memory is achieved by processing the raw address information faster than the addition used in the prior art. XOR memory addressing replaces the addition of the base and offset address components with an XOR operation, eliminating carry propagation and reducing overall latency. In another embodiment, a sum-addressed memory (SAM) also eliminates the carry propagation and thus reduces the latency while providing the correct base+offset index to access the memory word line corresponding to the correct addition; thus a SAM causes no XOR duplicate problems.
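The contrast drawn in the abstract above can be demonstrated numerically. The sketch below compares the two index functions only; a real sum-addressed memory computes the sum inside the wordline decoder rather than with a software add, and the 5-bit index width is an assumption for illustration:

```python
INDEX_BITS = 5
MASK = (1 << INDEX_BITS) - 1


def add_index(base, offset):
    # Conventional (and SAM-equivalent) indexing: the true base+offset,
    # which in hardware requires a carry chain across the index bits.
    return (base + offset) & MASK


def xor_index(base, offset):
    # XOR indexing: every bit is computed independently, so there is no
    # carry propagation -- but two (base, offset) pairs that name the
    # same effective address can land on different indexes (the "XOR
    # duplicate" problem the abstract refers to).
    return (base ^ offset) & MASK


# Two decompositions of the same effective address 16:
print(add_index(12, 4) == add_index(8, 8))  # True: sum indexing agrees
print(xor_index(12, 4) == xor_index(8, 8))  # False: 12^4 = 8, but 8^8 = 0
```

This is why the abstract treats SAM as the stronger embodiment: it keeps the XOR scheme's latency advantage while still indexing by the correct sum.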
    • 20. Granted invention patent
    • Title: Microprocessor having a prefetch cache
    • Publication number: US06317810B1
    • Publication date: 2001-11-13
    • Application number: US08882691
    • Filing date: 1997-06-25
    • Inventors: Herbert Lopez-Aguado; Denise Chiacchia; William L. Lynch; Gary Lauterbach
    • IPC: G06F12/08
    • CPC: G06F9/3802; G06F9/3455; G06F9/383; G06F9/3832; G06F9/3861; G06F12/0862; G06F2212/6022; G06F2212/6028
    • Abstract: A central processing unit of a computer includes a single-ported data cache and a dual-ported prefetch cache. The data cache accommodates a first pipeline, and the prefetch cache, which is much smaller than the data cache, accommodates both the first pipeline and a second pipeline. If a data cache miss occurs, a row of data corresponding to the specified address is stored in the data cache and the prefetch cache. Thereafter, if a prefetch cache hit occurs, a row of data corresponding to a prefetch address is loaded into the prefetch cache. The prefetch address may, for instance, be generated by adding a fixed increment to the specified address. This operation frequently results in the prefetch cache storing data soon requested by a computer program. When this condition is achieved, the data corresponding to the subsequent address request is rapidly retrieved from cache memory without incurring memory latencies associated with the external cache, the primary memory, and the secondary memory. In this manner, the prefetch cache of the present invention facilitates improved memory latencies. Further, the prefetch cache allows for two data requests to be processed simultaneously without a corresponding two-fold increase in cost of data cache memory.
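The fill-and-chase behavior in the abstract above can be sketched with sets. The 64-byte line size and the fixed-stride prefetch increment are assumptions for illustration (the abstract only says the prefetch address "may" be the specified address plus a fixed increment), and the model ignores the dual-ported/second-pipeline aspect:

```python
LINE = 64        # bytes per cache line (assumed)
STRIDE = LINE    # fixed prefetch increment (assumed)


class PrefetchSystem:
    def __init__(self):
        self.data_cache = set()
        self.prefetch_cache = set()

    def load(self, addr):
        line = addr & ~(LINE - 1)
        if line in self.data_cache or line in self.prefetch_cache:
            if line in self.prefetch_cache:
                # Prefetch-cache hit: load the line at the prefetch
                # address (here, the next sequential line) into the
                # prefetch cache.
                self.prefetch_cache.add(line + STRIDE)
            return "hit"
        # Miss in both: the demanded line fills BOTH caches, arming
        # the prefetch cache for the next touch of this line.
        self.data_cache.add(line)
        self.prefetch_cache.add(line)
        return "miss"


sys_ = PrefetchSystem()
accesses = [0, 0, 64, 128, 192]            # revisit line 0, then walk forward
print([sys_.load(a) for a in accesses])
# ['miss', 'hit', 'hit', 'hit', 'hit'] -- the revisit hits the prefetch
# cache and starts stride prefetching, so the sequential walk hits throughout
```

Once the stride matches the program's access pattern, every demand request lands in the small prefetch cache and never pays the external-cache or primary-memory latency, which is the condition the abstract describes.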