    • 6. Invention application
    • Title: REDUCING NUMBER OF REJECTED SNOOP REQUESTS BY EXTENDING TIME TO RESPOND TO SNOOP REQUEST
    • Publication number: US20080201533A1
    • Publication date: 2008-08-21
    • Application number: US12114790
    • Filing date: 2008-05-04
    • Inventors: Guy L. Guthrie; Hugh Shen; William J. Starke; Derek E. Williams
    • IPC: G06F12/08
    • CPC: G06F12/0831
    • Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. The stall/reorder unit is not informed as to whether the dispatched snoop request is accepted by an arbitration mechanism for several clock cycles after the dispatch occurred. A copy of the dispatched snoop request is stored in a top latch in an overrun pipeline of latches in the first unit upon dispatching the snoop request. By maintaining information about the snoop request, the snoop request may be dispatched again to the selector in case the dispatched snoop request was rejected, thereby increasing the chance that the snoop request will ultimately be accepted.
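The abstract above describes the stall/reorder mechanism only in prose; the following cycle-level Python sketch illustrates that control flow under stated assumptions. The names StallReorderUnit, accept_incoming and cycle, the 4-deep pipeline, the 3-cycle response delay, and the random arbiter are hypothetical details chosen for this example, not details taken from the patent.

```python
# Illustrative, cycle-level sketch of the mechanism described in the abstract above:
# snoop requests shift through a small pipeline of latches in a stall/reorder unit,
# are dispatched to a selector from the bottom latch, and a copy is parked in an
# "overrun" pipeline so the request can be dispatched again if the arbitration
# mechanism later reports a rejection. All sizes, names, and the random arbiter
# are assumptions made only for this example.

import random
from collections import deque

PIPE_DEPTH = 4      # latches in the stall/reorder pipeline (assumed)
RESPONSE_DELAY = 3  # cycles until the arbiter's accept/reject is known (assumed)

class StallReorderUnit:
    def __init__(self):
        self.pipeline = deque()  # stall/reorder latches: left = top, right = bottom
        self.overrun = deque()   # copies of dispatched requests, kept until resolved
        self.pending = deque()   # (cycle when the accept/reject arrives, request)
        self.completed = []

    def accept_incoming(self, request):
        """Enter an incoming snoop request in the first available latch, if not full."""
        if len(self.pipeline) < PIPE_DEPTH:
            self.pipeline.appendleft(request)
            return True
        return False  # unit full: the requester must retry

    def cycle(self, now, arbiter_accepts):
        # 1. Resolve arbitration responses that become visible this cycle.
        while self.pending and self.pending[0][0] <= now:
            _, req = self.pending.popleft()
            self.overrun.remove(req)
            if arbiter_accepts(req):
                self.completed.append(req)
            else:
                # Rejected: the saved copy lets the request be dispatched again.
                # (Simplification: it is re-entered at the top of the pipeline;
                # the real design keeps it in the overrun pipeline.)
                self.pipeline.appendleft(req)
        # 2. Dispatch from the bottom latch to the selector, keeping a copy.
        if self.pipeline:
            req = self.pipeline.pop()
            self.overrun.appendleft(req)
            self.pending.append((now + RESPONSE_DELAY, req))

# Toy run: an arbiter that rejects roughly 30% of dispatches.
random.seed(0)
unit = StallReorderUnit()
for now in range(200):
    if now < 8:
        unit.accept_incoming(f"snoop-{now}")
    unit.cycle(now, arbiter_accepts=lambda r: random.random() > 0.3)
print(unit.completed)  # rejected requests were re-dispatched and accepted later, possibly reordered
```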
    • 7. Invention grant
    • Title: Reducing number of rejected snoop requests by extending time to respond to snoop request
    • Publication number: US07386682B2
    • Publication date: 2008-06-10
    • Application number: US11056764
    • Filing date: 2005-02-11
    • Inventors: Guy L. Guthrie; Hugh Shen; William J. Starke; Derek E. Williams
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. The stall/reorder unit is not informed as to whether the dispatched snoop request is accepted by an arbitration mechanism for several clock cycles after the dispatch occurred. A copy of the dispatched snoop request is stored in a top latch in an overrun pipeline of latches in the first unit upon dispatching the snoop request. By maintaining information about the snoop request, the snoop request may be dispatched again to the selector in case the dispatched snoop request was rejected, thereby increasing the chance that the snoop request will ultimately be accepted.
    • 8. Invention grant
    • Title: Reducing number of rejected snoop requests by extending time to respond to snoop request
    • Publication number: US07523268B2
    • Publication date: 2009-04-21
    • Application number: US12114790
    • Filing date: 2008-05-04
    • Inventors: Guy L. Guthrie; Hugh Shen; William J. Starke; Derek E. Williams
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. The stall/reorder unit is not informed as to whether the dispatched snoop request is accepted by an arbitration mechanism for several clock cycles after the dispatch occurred. A copy of the dispatched snoop request is stored in a top latch in an overrun pipeline of latches in the first unit upon dispatching the snoop request. By maintaining information about the snoop request, the snoop request may be dispatched again to the selector in case the dispatched snoop request was rejected, thereby increasing the chance that the snoop request will ultimately be accepted.
    • 10. Invention grant
    • Title: Updating partial cache lines in a data processing system
    • Publication number: US08117390B2
    • Publication date: 2012-02-14
    • Application number: US12424434
    • Filing date: 2009-04-15
    • Inventors: David W. Cummings; Guy L. Guthrie; Hugh Shen; William J. Starke; Derek E. Williams; Phillip G. Williams
    • IPC: G06F13/00
    • CPC: G06F12/0822; G06F12/0897; G06F2212/507
    • Abstract: A processing unit for a data processing system includes a processor core having one or more execution units for processing instructions and a register file for storing data accessed in processing of the instructions. The processing unit also includes a multi-level cache hierarchy coupled to and supporting the processor core. The multi-level cache hierarchy includes at least one upper level of cache memory having a lower access latency and at least one lower level of cache memory having a higher access latency. The lower level of cache memory, responsive to receipt of a memory access request that hits only a partial cache line in the lower level cache memory, sources the partial cache line to the at least one upper level cache memory to service the memory access request. The at least one upper level cache memory services the memory access request without caching the partial cache line.
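The abstract above describes the partial-cache-line handling in prose; below is a minimal Python sketch of that behaviour under stated assumptions. The class names LowerLevelCache and UpperLevelCache, the 128-byte line size, the dict-based storage, and the omission of any tracking of which bytes of a partial line are valid are all assumptions made for illustration, not details taken from the patent.

```python
# Illustrative sketch of the behaviour described in the abstract above: when a
# request hits only a partial cache line in the lower-level cache, that cache
# sources the partial line to the upper-level cache, which services the request
# without installing (caching) the partial line. Names and sizes are assumed.

LINE_SIZE = 128  # bytes per cache line (assumed)

class LowerLevelCache:
    """Lower-level (higher-latency) cache that may hold full or partial lines."""
    def __init__(self):
        self.lines = {}  # line address -> (data: bytes, is_partial: bool)

    def lookup(self, line_addr):
        return self.lines.get(line_addr)  # None on miss

class UpperLevelCache:
    """Upper-level (lower-latency) cache backed by the lower-level cache."""
    def __init__(self, lower):
        self.lower = lower
        self.lines = {}  # only full lines are installed here

    def read(self, addr):
        line_addr, offset = addr - addr % LINE_SIZE, addr % LINE_SIZE
        if line_addr in self.lines:  # upper-level hit
            return self.lines[line_addr][offset]
        hit = self.lower.lookup(line_addr)
        if hit is None:
            raise KeyError("memory access below the lower-level cache is not modelled")
        data, is_partial = hit
        if is_partial:
            # Partial line: service the request with the sourced data but do
            # NOT cache it in the upper level, per the abstract.
            return data[offset]
        self.lines[line_addr] = data  # full line: install normally
        return data[offset]

# Toy usage: line 0x000 is full, line 0x080 is partial in the lower-level cache.
l2 = LowerLevelCache()
l2.lines[0x000] = (bytes(range(128)), False)
l2.lines[0x080] = (bytes(128), True)
l1 = UpperLevelCache(l2)
assert l1.read(0x005) == 5           # full line is installed in the upper level
assert 0x000 in l1.lines
assert l1.read(0x085) == 0           # partial line serviced without caching
assert 0x080 not in l1.lines
```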