    • 1. Invention application
    • Title: Store stream prefetching in a microprocessor
    • Publication No.: US20060179238A1
    • Publication date: 2006-08-10
    • Application No.: US11054871
    • Filing date: 2005-02-10
    • Inventors: John Griswell, Hung Le, Francis O'Connell, William Starke, Jeffrey Stuecheli, Albert Williams
    • IPC: G06F13/28
    • CPC: G06F12/0862, G06F9/30043, G06F9/30047, G06F9/383, G06F12/0811
    • Abstract: In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. The prefetch engine compares entries in the prefetch queue to a window of 2M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue is within the address window. The prefetch engine further suppresses allocation of a new entry when the data address of the store instruction is equal to an address in a border area of the address window.
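The allocation policy described in this abstract can be illustrated with a short simulation. The sketch below is a minimal Python model, assuming an aligned window of 2*M cache blocks containing the store address and illustrative values for CACHE_BLOCK_SIZE, M and BORDER_BLOCKS; none of these names or constants come from the patent itself.

```python
CACHE_BLOCK_SIZE = 128   # bytes per cache block (assumed value)
M = 4                    # window spans 2*M contiguous cache blocks (assumed value)
BORDER_BLOCKS = 1        # blocks at each edge treated as the border area (assumed)


def block_of(addr: int) -> int:
    """Map a byte address to its cache-block index."""
    return addr // CACHE_BLOCK_SIZE


def window_of(addr: int) -> tuple[int, int]:
    """First and last block index of the aligned 2*M-block window containing addr."""
    first = (block_of(addr) // (2 * M)) * (2 * M)
    return first, first + 2 * M - 1


def should_allocate(store_addr: int, prefetch_queue: list[int]) -> bool:
    """Decide whether a store at store_addr may allocate a new stream entry.

    prefetch_queue holds the cache-block index tracked by each allocated stream.
    Allocation is suppressed when (a) any existing entry already lies inside the
    window derived from the store address, or (b) the store itself falls in the
    border area of that window.
    """
    lo, hi = window_of(store_addr)
    blk = block_of(store_addr)

    # (a) an allocated stream already covers this region: do not allocate again
    if any(lo <= entry <= hi for entry in prefetch_queue):
        return False

    # (b) the store hits a border block of the window: suppress allocation
    if blk - lo < BORDER_BLOCKS or hi - blk < BORDER_BLOCKS:
        return False

    return True


# Usage: a queue already tracking block 0x40 suppresses a new stream whose window
# overlaps it; a store landing in a border block of its window is also suppressed.
queue = [0x40]
print(should_allocate(0x40 * CACHE_BLOCK_SIZE + 8, queue))  # False: entry in window
print(should_allocate(0x200 * CACHE_BLOCK_SIZE, queue))     # False: border block
print(should_allocate(0x203 * CACHE_BLOCK_SIZE, queue))     # True
```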
    • 4. Invention application
    • Title: Data processing system and method that permit pipelining of I/O write operations and multiple operation scopes
    • Publication No.: US20070073919A1
    • Publication date: 2007-03-29
    • Application No.: US11226967
    • Filing date: 2005-09-15
    • Inventors: George Daly, James Fields, Guy Guthrie, William Starke, Jeffrey Stuecheli
    • IPC: G06F13/28
    • CPC: G06F12/0831, G06F12/0811
    • Abstract: A data processing system includes at least a first processing node having an input/output (I/O) controller and a second processing node including a memory controller for a memory. The memory controller receives, in order, pipelined first and second DMA write operations from the I/O controller, where the first and second DMA write operations target first and second addresses, respectively. In response to the second DMA write operation, the memory controller establishes a state of a domain indicator associated with the second address to indicate an operation scope including the first processing node. In response to the memory controller receiving a data access request specifying the second address and having a scope excluding the first processing node, the memory controller forces the data access request to be reissued with a scope including the first processing node based upon the state of the domain indicator associated with the second address.
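The domain-indicator behaviour in this abstract can be modelled in a few lines. The following Python sketch is a software analogue under stated assumptions: the class and method names (MemoryController, dma_write, data_access) and the per-address dictionary standing in for the domain indicator are illustrative, not the patent's hardware structures.

```python
class MemoryController:
    def __init__(self, home_node: str):
        self.home_node = home_node
        # Domain indicator per address: the node that must be included in the
        # scope of any later operation that touches this address.
        self.domain_indicator: dict[int, str] = {}

    def dma_write(self, addr: int, io_node: str) -> None:
        """Accept a (pipelined) DMA write issued by the I/O controller on io_node.

        Recording io_node marks that a later access to addr must use an
        operation scope that includes io_node."""
        self.domain_indicator[addr] = io_node
        # ... the actual write of data to memory is elided ...

    def data_access(self, addr: int, scope: set[str]) -> str:
        """Service a data access request broadcast with the given node scope.

        If the domain indicator names a node outside the request's scope, force
        the requester to reissue with a scope that includes that node."""
        required = self.domain_indicator.get(addr)
        if required is not None and required not in scope:
            return f"retry: reissue with scope including {required}"
        return "ok"


# Usage: two pipelined DMA writes from node N0's I/O controller, then a
# node-local access from node N1 that must be reissued with a wider scope.
mc = MemoryController(home_node="N1")
mc.dma_write(0x1000, io_node="N0")                 # first DMA write
mc.dma_write(0x2000, io_node="N0")                 # second DMA write sets the indicator
print(mc.data_access(0x2000, scope={"N1"}))        # -> retry, N0 excluded from scope
print(mc.data_access(0x2000, scope={"N0", "N1"}))  # -> ok
```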
    • 10. Invention application
    • Title: Cache member protection with partial make MRU allocation
    • Publication No.: US20060179234A1
    • Publication date: 2006-08-10
    • Application No.: US11054390
    • Filing date: 2005-02-09
    • Inventors: Robert Bell, Guy Guthrie, William Starke, Jeffrey Stuecheli
    • IPC: G06F12/00
    • CPC: G06F12/126, G06F12/0897, G06F12/123, G06F12/128
    • Abstract: A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. The LRU state array includes additional “protection” bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is invalid. The make MRU operation affects only the lower-level LRU state bits arranged in a tree-based structure and thus only negates the selection of the protected member, without affecting LRU victim selection of the other members.
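The “partial make MRU” idea can be illustrated with a small software model. The sketch below assumes a 4-way congruence class managed by a tree pseudo-LRU with one root bit and two leaf bits; the names (TreePLRU4, partial_make_mru) and the bit encoding are illustrative assumptions, and only the behaviour described in the abstract is modelled, not the pipelined hardware.

```python
class TreePLRU4:
    """4-way tree pseudo-LRU: a root bit chooses a pair of ways, leaf bits choose
    within a pair. Each bit points toward the half to be victimized next."""

    def __init__(self):
        self.root = 0              # 0 -> ways {0,1} are the LRU half, 1 -> ways {2,3}
        self.leaf = [0, 0]         # leaf[0] for pair (0,1), leaf[1] for pair (2,3)
        self.valid = [True] * 4
        self.protected_way = None  # pointer encoded by the "protection" bits

    def make_mru(self, way: int) -> None:
        """Full make-MRU: point every bit on the way's path away from it."""
        pair = way // 2
        self.root = 1 - pair
        self.leaf[pair] = 1 - (way % 2)

    def partial_make_mru(self, way: int) -> None:
        """Partial make-MRU: update only the lower-level (leaf) bit, so the
        protected way cannot be picked within its pair, while the root-level
        choice between pairs is left to normal LRU behaviour."""
        pair = way // 2
        self.leaf[pair] = 1 - (way % 2)

    def select_victim(self) -> int:
        # Protection is honoured only while the protected member is valid.
        if self.protected_way is not None and self.valid[self.protected_way]:
            self.partial_make_mru(self.protected_way)
        pair = self.root
        return 2 * pair + self.leaf[pair]


# Usage: protect way 1; victim selection never returns way 1 while it is valid,
# yet the root bit still alternates between pairs exactly as ordinary accesses
# dictate, so the other members' LRU behaviour is unaffected.
lru = TreePLRU4()
lru.protected_way = 1
lru.make_mru(0)             # an access to way 0 makes it MRU
print(lru.select_victim())  # a way other than the protected way 1 (here: 2)
```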