    • 2. Invention Application
    • Method and apparatus for performing data prefetch in a multiprocessor system
    • Publication No.: US20060179237A1
    • Publication date: 2006-08-10
    • Application No.: US11054173
    • Filing date: 2005-02-09
    • Inventors: James Fields, Benjiman Goodman, Guy Guthrie, Jeffrey Stuecheli
    • IPC: G06F13/28; G06F12/00
    • CPC: G06F12/0862; G06F12/0811; G06F12/0831; G06F12/0851
    • A method and apparatus for performing data prefetch in a multiprocessor system are disclosed. The multiprocessor system includes multiple processors, each having a cache memory. The cache memory is subdivided into multiple slices. A group of prefetch requests is initially issued by a requesting processor in the multiprocessor system. Each prefetch request is intended for one of the respective slices of the cache memory of the requesting processor. In response to the prefetch requests being missed in the cache memory of the requesting processor, the prefetch requests are merged into one combined prefetch request. The combined prefetch request is then sent to the cache memories of all the non-requesting processors within the multiprocessor system. In response to a combined clean response from the cache memories of all the non-requesting processors, data are then obtained for the combined prefetch request from a system memory.
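The abstract above outlines a concrete flow: per-slice prefetch requests that miss in the requesting processor's cache are merged into a single combined request, that request is snooped against the other processors' caches, and a combined clean response lets the data be fetched from system memory. The minimal C sketch below illustrates only that merge-and-fetch step; every identifier (PrefetchRequest, NUM_SLICES, the snoop and memory stubs) is a hypothetical stand-in, not the patented hardware design.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_SLICES 4   /* assumed number of cache slices, for illustration */

typedef struct {
    unsigned long addr;   /* line address targeted by the prefetch */
    int slice;            /* cache slice the request is intended for */
    bool hit;             /* set if the requesting cache already holds the line */
} PrefetchRequest;

typedef struct {
    unsigned long addrs[NUM_SLICES];
    size_t count;
} CombinedPrefetch;

/* Merge the per-slice prefetch requests that missed in the requesting
 * processor's cache into a single combined request. */
static CombinedPrefetch merge_missed_prefetches(const PrefetchRequest reqs[],
                                                size_t n)
{
    CombinedPrefetch combined = { .count = 0 };
    for (size_t i = 0; i < n; i++) {
        if (!reqs[i].hit)
            combined.addrs[combined.count++] = reqs[i].addr;
    }
    return combined;
}

/* Stand-ins for the coherence fabric and the system memory interface. */
static bool snoop_other_caches_clean(const CombinedPrefetch *req)
{
    (void)req;
    return true;   /* pretend every non-requesting cache answered "clean" */
}

static void fetch_from_system_memory(const CombinedPrefetch *req)
{
    (void)req;     /* a real design would read the lines in req->addrs here */
}

static void issue_prefetch_group(const PrefetchRequest reqs[], size_t n)
{
    CombinedPrefetch combined = merge_missed_prefetches(reqs, n);
    if (combined.count == 0)
        return;                       /* everything hit locally */
    /* One combined request stands in for the group; data comes from system
     * memory only after a combined clean response from the other caches. */
    if (snoop_other_caches_clean(&combined))
        fetch_from_system_memory(&combined);
}

int main(void)
{
    PrefetchRequest group[NUM_SLICES] = {
        { 0x1000, 0, false }, { 0x2000, 1, true },
        { 0x3000, 2, false }, { 0x4000, 3, false },
    };
    issue_prefetch_group(group, NUM_SLICES);
    return 0;
}
```

The point of the merge is that the three misses above travel the interconnect as one combined request rather than three separate ones.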
    • 4. Invention Application
    • Reducing Number of Rejected Snoop Requests By Extending Time to Respond to Snoop Request
    • Publication No.: US20070294486A1
    • Publication date: 2007-12-20
    • Application No.: US11847941
    • Filing date: 2007-08-30
    • Inventors: Benjiman Goodman, Guy Guthrie, William Starke, Jeffrey Stuecheli, Derek Williams
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
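As a rough model of the retry behavior described in the abstract above, the sketch below keeps a denied snoop request in a stall/reorder-style holding slot and re-presents it to the arbiter for up to a fixed number of clock cycles before finally rejecting it. The arbiter policy, the MAX_RETRY_CYCLES value standing in for the abstract's "n", and all identifiers are assumptions made for illustration, not the actual cache design.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_RETRY_CYCLES 8   /* stands in for the "n" clock cycles (assumed) */

typedef struct {
    unsigned long addr;   /* address carried by the snoop request */
    int retries;          /* how many cycles the request has been re-presented */
} SnoopRequest;

/* Stand-in for the arbitration mechanism that chooses between the snoop
 * request and a request from the processor; here processor requests win for
 * the first few cycles, then the snoop request is granted. */
static bool arbiter_grants_snoop(int cycle)
{
    return cycle >= 3;
}

/* Re-present a denied snoop request each cycle instead of rejecting it
 * immediately; only give up after MAX_RETRY_CYCLES cycles. */
static bool present_snoop_request(SnoopRequest *req)
{
    for (int cycle = 0; cycle < MAX_RETRY_CYCLES; cycle++) {
        if (arbiter_grants_snoop(cycle)) {
            printf("snoop 0x%lx accepted on cycle %d\n", req->addr, cycle);
            return true;
        }
        req->retries++;   /* denied: keep the address and try again next cycle */
    }
    printf("snoop 0x%lx rejected after %d retries\n", req->addr, req->retries);
    return false;
}

int main(void)
{
    SnoopRequest req = { .addr = 0x80001000UL, .retries = 0 };
    present_snoop_request(&req);
    return 0;
}
```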
    • 7. Invention Application
    • Reducing Number of Rejected Snoop Requests By Extending Time To Respond To Snoop Request
    • Publication No.: US20080077744A1
    • Publication date: 2008-03-27
    • Application No.: US11950717
    • Filing date: 2007-12-05
    • Inventors: Benjiman Goodman, Guy Guthrie, William Starke, Jeffrey Stuecheli, Derek Williams
    • IPC: G06F13/28
    • CPC: G06F12/0831
    • A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
    • 10. Invention Application
    • Data processing system and method for efficient communication utilizing an Tn and Ten coherency states
    • Publication No.: US20080040556A1
    • Publication date: 2008-02-14
    • Application No.: US11835984
    • Filing date: 2007-08-08
    • Inventors: James Fields, Benjiman Goodman, Guy Guthrie, William Starke, Derek Williams
    • IPC: G06F12/08
    • CPC: G06F12/0817; G06F12/0831; G06F12/084
    • A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit. The first coherency domain includes a first cache memory and a second cache memory, and the second coherency domain includes a remote coherent cache memory. The first cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the memory block is possibly shared with the second cache memory in the first coherency domain and cached only within the first coherency domain.
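The abstract above describes a cache directory whose entries pair an address tag with a coherency state field, including a state meaning the block is possibly shared within the first coherency domain and cached only within that domain. The sketch below models such a directory entry in C; the Tn/Ten names follow the title, but the enum values, struct layout, and broadcast check are assumptions for illustration only.

```c
#include <stdint.h>

/* Subset of coherency states. COH_TN marks a block that may be shared with
 * another cache in the local (first) coherency domain and is cached only
 * within that domain; COH_TEN is assumed here to be its exclusive variant. */
typedef enum {
    COH_INVALID,
    COH_SHARED,
    COH_TN,
    COH_TEN,
} CoherencyState;

/* One directory entry: an address tag plus the coherency state associated
 * with the tag and its data storage location. */
typedef struct {
    uint64_t       tag;
    CoherencyState state;
} DirectoryEntry;

/* A broadcast outside the local domain is only needed when the directory
 * cannot guarantee the block is confined to that domain. */
static inline int needs_global_broadcast(const DirectoryEntry *e)
{
    return !(e->state == COH_TN || e->state == COH_TEN);
}

int main(void)
{
    DirectoryEntry e = { .tag = 0x1234, .state = COH_TN };
    return needs_global_broadcast(&e);   /* 0: local-only handling suffices */
}
```

Because a Tn/Ten-style entry asserts the block is cached only within the first coherency domain, a check like the one above is what would let coherence traffic for that block stay off the global interconnect, which is the efficiency the title refers to.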