    • 31. Invention Grant
    • Multiprocessor system bus protocol with group addresses, responses, and priorities
    • Publication No.: US06591321B1
    • Publication Date: 2003-07-08
    • Application No.: US09437200
    • Filing Date: 1999-11-09
    • Inventors: Ravi Kumar Arimilli; James Stephen Fields, Jr.; Guy Lynn Guthrie; Jody Bern Joyner; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • A multiprocessor system bus protocol system and method for processing and handling a processor request within a multiprocessor system having a number of bus accessible memory devices that are snooping on at least one bus line. Snoop response groups, which are groups of different types of snoop responses from the bus accessible memory devices, are provided. Different transfer types are provided within each of the snoop response groups. A bus master device that provides a bus master signal is designated. The bus master device receives the processor request. One of the snoop response groups and one of the transfer types are appropriately designated based on the processor request. The bus master signal is formulated from a snoop response group, a transfer type, a valid request signal, and a cache line address. The bus master signal is sent to all of the bus accessible memory devices on the cache bus line and to a combined response logic system. All of the bus accessible memory devices on the cache bus line send snoop responses in response to the bus master signal based on the designated snoop response group. The snoop responses are sent to the combined response logic system. A combined response by the combined response logic system is determined based on the appropriate combined response encoding logic determined by the designated and latched snoop response group. The combined response is sent to all of the bus accessible memory devices on the cache bus line.
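The request/snoop/combined-response flow the abstract describes can be sketched roughly as follows. This is an illustrative model only, not the patented implementation; all names, response encodings, and the priority ordering are invented for the sketch.

```python
# Hypothetical sketch of the bus-master / snoop / combined-response flow.
# All field names and response encodings are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class BusMasterSignal:
    snoop_group: str   # designated snoop response group for this request
    transfer_type: str # transfer type within that group
    valid: bool        # valid-request signal
    address: int       # cache line address

def snoop(device_lines, signal):
    """Each bus-accessible memory device answers based on the designated group."""
    if not signal.valid:
        return "null"
    return "shared" if signal.address in device_lines else "invalid"

def combined_response(responses):
    """Combined-response logic: merge all snoop responses by priority."""
    priority = ["retry", "modified", "shared", "invalid", "null"]
    return min(responses, key=priority.index)

# The bus master broadcasts one signal to every snooper on the bus line:
devices = [{0x100}, set(), {0x100, 0x200}]
sig = BusMasterSignal("read-group", "read", True, 0x100)
responses = [snoop(d, sig) for d in devices]
print(combined_response(responses))  # "shared": at least one device holds the line
```

The combined response is then broadcast back to all snoopers, mirroring the abstract's final step.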
    • 33. Invention Grant
    • Multiprocessor system in which a cache serving as a highest point of coherency is indicated by a snoop response
    • Publication No.: US06405289B1
    • Publication Date: 2002-06-11
    • Application No.: US09437196
    • Filing Date: 1999-11-09
    • Inventors: Ravi Kumar Arimilli; Leo James Clark; James Stephen Fields, Jr.; Guy Lynn Guthrie
    • IPC: G06F12/00
    • CPC: G06F12/0831; G06F12/0813; G06F2212/2542
    • A method of maintaining cache coherency by designating one cache that owns a line as the highest point of coherency (HPC) for a particular memory block, and sending a snoop response from the cache indicating that it is currently the HPC for the memory block and can service a request. The designation may be performed in response to a particular coherency state assigned to the cache line, or based on the setting of a coherency token bit for the cache line. The processing units may be grouped into clusters, while the memory is distributed using memory arrays associated with respective clusters. One memory array is designated as the lowest point of coherency (LPC) for the memory block (i.e., a fixed assignment) while the cache designated as the HPC is dynamic (i.e., changes as different caches gain ownership of the line). An acknowledgement snoop response is sent from the LPC memory array, and a combined response is returned to the requesting device which gives priority to the HPC snoop response over the LPC snoop response.
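The key priority rule in this abstract — the HPC snoop response outranks the LPC acknowledgement in the combined response — can be sketched in a few lines. Response names here are hypothetical, chosen only to illustrate the ordering.

```python
# Illustrative only: combined-response logic that prefers an HPC snoop
# response over the LPC (memory array) acknowledgement.
def combined_response(snoop_responses):
    # The HPC (a cache currently owning the line) services the request if
    # it answered; otherwise the fixed LPC memory array does.
    if "hpc_will_service" in snoop_responses:
        return "hpc_will_service"
    if "lpc_ack" in snoop_responses:
        return "lpc_ack"
    return "retry"

# The LPC assignment is fixed, but HPC ownership moves between caches:
print(combined_response(["lpc_ack", "hpc_will_service"]))  # HPC wins
print(combined_response(["lpc_ack"]))  # no HPC: memory services the request
```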
    • 34. Invention Grant
    • Optimized cache allocation algorithm for multiple speculative requests
    • Publication No.: US06393528B1
    • Publication Date: 2002-05-21
    • Application No.: US09345714
    • Filing Date: 1999-06-30
    • Inventors: Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • IPC: G06F12/00
    • CPC: G06F12/0862; G06F12/127
    • A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
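The allocation policy in the last sentence — when a new prefetch would exceed the cache's prefetch limit, a line already holding an earlier prefetched value is reclaimed for the new one — can be sketched as below. The function and field names are hypothetical, and a real cache would pick the victim by replacement policy rather than arbitrarily.

```python
# Sketch (invented names) of prefetch-limited cache allocation: when the
# prefetch limit is met, evict a line holding an earlier prefetched value.
def allocate_for_prefetch(cache, addr, stream_id, prefetch_limit):
    prefetched = [a for a, meta in cache.items() if meta["prefetched"]]
    if len(prefetched) >= prefetch_limit:
        victim = prefetched[0]  # reclaim a line holding an earlier prefetch value
        del cache[victim]
    # tag the new line with the stream ID of the requesting processor stream
    cache[addr] = {"prefetched": True, "stream_id": stream_id}

cache = {}
for i, addr in enumerate([0x10, 0x20, 0x30]):
    allocate_for_prefetch(cache, addr, stream_id=i, prefetch_limit=2)
print([hex(a) for a in sorted(cache)])  # ['0x20', '0x30']: 0x10 was evicted
```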
    • 37. Invention Grant
    • Multiprocessor system snoop scheduling mechanism for limited bandwidth snoopers with banked directory implementation
    • Publication No.: US06546470B1
    • Publication Date: 2003-04-08
    • Application No.: US09749349
    • Filing Date: 2001-03-12
    • Inventors: Ravi Kumar Arimilli; James Stephen Fields, Jr.; Sanjeev Ghai; Guy Lynn Guthrie; Jody B. Joyner
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized. The invention is not limited to any particular type of instruction, and the synchronization functionality may be hardware or software programmable.
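One way to picture the scheduling idea — retry-sensitive requests such as kill-type operations are issued only during the cycles where snoop tenures are known to be synchronized — is the toy model below. The periodic-cycle scheme and all parameter names are assumptions for illustration; the patent leaves the synchronization manner open (hardware- or software-programmable).

```python
# Toy model (invented parameters): if snoop tenures are synchronized every
# `sync_period` cycles, retry-sensitive "kill"-type operations are deferred
# to the next synchronized cycle, while ordinary operations issue at once.
def next_issue_cycle(current_cycle, op, sync_period=8):
    if op != "kill":
        return current_cycle  # not retry-sensitive: issue immediately
    remainder = current_cycle % sync_period
    if remainder == 0:
        return current_cycle  # already on a synchronized snoop tenure
    return current_cycle + (sync_period - remainder)

print(next_issue_cycle(5, "read"))  # 5: issues immediately
print(next_issue_cycle(5, "kill"))  # 8: deferred to the synchronized cycle
```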
    • 40. Invention Grant
    • Extended cache state with prefetched stream ID information
    • Publication No.: US06360299B1
    • Publication Date: 2002-03-19
    • Application No.: US09345644
    • Filing Date: 1999-06-30
    • Inventors: Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • IPC: G06F12/00
    • CPC: G06F12/0862; G06F12/121; G06F2212/6028
    • A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
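The "extended cache state" of this patent's title — per-line state that records which prefetch stream (or which processor) loaded the line — can be sketched as a tagged cache-line record. The field names are illustrative, not the patented encoding.

```python
# Sketch of an extended cache-line state carrying prefetch stream ID
# information alongside the coherency state; field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheLineState:
    coherency: str             # e.g. "shared", "modified"
    prefetched: bool           # loaded by a prefetch rather than a demand miss
    stream_id: Optional[int]   # active stream that fetched it (None if demand)

line = CacheLineState(coherency="shared", prefetched=True, stream_id=3)
# Replacement logic can then prefer victims whose stream is no longer active:
print(line.stream_id)  # 3
```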