    • 2. Granted invention patent
    • Title: Cache flush system and method
    • Publication number: US06976128B1
    • Publication date: 2005-12-13
    • Application number: US10255420
    • Filing date: 2002-09-26
    • Inventors: James A. Williams; Robert H. Andrighetti; Conrad S. Shimada; Donald C. Englin
    • IPC: G06F9/30; G06F12/00; G06F12/08; G06F12/12
    • CPC: G06F9/30047; G06F12/0804; G06F12/127
    • Abstract: A system and method is provided to selectively flush data from cache memory to a main memory irrespective of the replacement algorithm that is used to manage the cache data. According to one aspect of the invention, novel “page flush” and “cache line flush” instructions are provided to flush a page and a cache line of memory data, respectively, from a cache to a main memory. In one embodiment, these instructions are included within the hardware instruction set of an Instruction Processor (IP). According to another aspect of the invention, flush operations are initiated using a background interface that interconnects the IP with its associated cache memory. A primary interface that also interconnects the IP to the cache memory is used to simultaneously issue higher-priority requests so that processor throughput is increased.
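The core idea of this abstract, on-demand write-back of a single line or a whole page that bypasses the replacement policy, can be sketched as a small software model. This is an illustrative toy only; all names (`WriteBackCache`, `flush_line`, `flush_page`, `PAGE_SIZE`) are my own, not from the patent.

```python
PAGE_SIZE = 4  # cache lines per "page" in this toy model (assumed, not from the patent)

class WriteBackCache:
    """Toy write-back cache whose lines can be flushed to main memory on
    demand, independent of any replacement (eviction) policy."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # backing store: addr -> value
        self.lines = {}                  # addr -> (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # a write marks the line dirty

    def flush_line(self, addr):
        """Write one dirty line back to main memory; the line stays resident."""
        if addr in self.lines:
            value, dirty = self.lines[addr]
            if dirty:
                self.main_memory[addr] = value
                self.lines[addr] = (value, False)  # clean, still cached

    def flush_page(self, page):
        """Flush every cached line that falls inside the given page."""
        base = page * PAGE_SIZE
        for addr in range(base, base + PAGE_SIZE):
            self.flush_line(addr)

mem = {}
cache = WriteBackCache(mem)
cache.write(0, "a")
cache.write(1, "b")
cache.flush_page(0)  # both dirty lines land in mem; neither is evicted
```

Note that, as in the abstract, flushing here is orthogonal to replacement: lines are written back but remain resident and clean.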
    • 3. Granted invention patent
    • Title: Programmable request handling system and method
    • Publication number: US07603672B1
    • Publication date: 2009-10-13
    • Application number: US10744992
    • Filing date: 2003-12-23
    • Inventors: Robert H. Andrighetti
    • IPC: G06F9/46; G06F3/00; G06F13/28
    • CPC: G06F9/30047; G06F9/3863; G06F12/0811; G06F12/0857; G06F12/0862; G06F13/1642; G06F2212/6028
    • Abstract: A system and method is disclosed for prioritizing requests received from multiple requesters for presentation to a shared resource. The system includes logic that implements multiple priority schemes. This logic may be programmably configured to associate each of the requesters with any of the priority schemes. The priority scheme that is associated with the requester controls how that requester submits requests to the shared resource. The requests that have been submitted by any of the requesters in this manner are then processed in a predetermined order. This order is established using an absolute priority assigned to each of the requesters. This order may further be determined by assigning one or more requesters a priority that is relative to another requester. The absolute and relative priority assignments are programmable.
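The programmable absolute-priority ordering described here can be modeled in a few lines. This is a minimal sketch of the concept, not the patented logic; the class and method names are assumptions of mine.

```python
class ProgrammableArbiter:
    """Toy model of programmable request prioritization: each requester is
    assigned an absolute priority, and pending requests are served in that
    order. (The patent also covers relative priorities and multiple priority
    schemes, which this sketch omits.)"""

    def __init__(self):
        self.priority = {}   # requester -> absolute priority (lower serves first)
        self.pending = []    # list of (requester, request), in arrival order

    def set_priority(self, requester, prio):
        # "Programmable": priorities can be changed at any time
        self.priority[requester] = prio

    def submit(self, requester, request):
        self.pending.append((requester, request))

    def drain(self):
        """Return all pending requests ordered by absolute priority."""
        # sort() is stable, so equal-priority requesters keep FIFO order
        self.pending.sort(key=lambda rr: self.priority.get(rr[0], float("inf")))
        served = [req for _, req in self.pending]
        self.pending = []
        return served
```

The stable sort means requests from equally prioritized requesters retain arrival order, which is one reasonable interpretation of "processed in a predetermined order".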
    • 4. Granted invention patent
    • Title: Data pre-fetch system and method for a cache memory
    • Publication number: US06993630B1
    • Publication date: 2006-01-31
    • Application number: US10255393
    • Filing date: 2002-09-26
    • Inventors: James A. Williams; Robert H. Andrighetti; Conrad S. Shimada; Donald C. Englin; Kelvin S. Vartti
    • IPC: G06F12/00
    • CPC: G06F12/0862
    • Abstract: A system and method for pre-fetching data signals is disclosed. According to one aspect of the invention, an Instruction Processor (IP) generates requests to access data signals within the cache. Predetermined ones of the requests are provided to pre-fetch control logic, which determines whether the data signals are available within the cache. If not, the data signals are retrieved from another memory within the data processing system, and are stored to the cache. According to one aspect, the rate at which pre-fetch requests are generated may be programmably selected to match the rate at which the associated requests to access the data signals are provided to the cache. In another embodiment, pre-fetch control logic receives information to generate pre-fetch requests using a dedicated interface coupling the pre-fetch control logic to the IP.
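The check-then-fill behavior of the pre-fetch control logic reduces to a simple miss-driven fill. A minimal sketch, with hypothetical names (`PrefetchController`, `prefetch`) that are not from the patent:

```python
class PrefetchController:
    """Toy pre-fetch logic: addresses the IP is about to access are handed
    to this controller, which pulls any missing data into the cache ahead
    of the real access."""

    def __init__(self, cache, memory):
        self.cache = cache    # addr -> value (the cache being warmed)
        self.memory = memory  # addr -> value (the slower backing memory)

    def prefetch(self, addr):
        # Only act on a miss; a hit means the data is already resident.
        if addr not in self.cache:
            self.cache[addr] = self.memory[addr]
```

The programmable rate matching described in the abstract (pacing pre-fetch generation to the IP's request rate) is a hardware-level concern omitted from this sketch.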
    • 5. Granted invention patent
    • Title: Cache with integrated capability to write out entire cache
    • Publication number: US07356647B1
    • Publication date: 2008-04-08
    • Application number: US11209227
    • Filing date: 2005-08-23
    • Inventors: Robert H. Andrighetti; Donald C. Englin; Douglas A. Fuller
    • IPC: G06F13/00
    • CPC: G06F12/0804; G06F12/0817
    • Abstract: A cache arrangement of a data processing system provides a cache flush operation initiated by a command from a maintenance processor. The cache arrangement includes a cache memory, a mode register, and a controller. The mode register is settable by the maintenance processor to one of first and second values. The controller selectively writes all of the modified information in the cache memory to the system memory responsive to the command. Also in response to this command, all of the information is invalidated in the cache memory if the mode register is set to the second value. In one embodiment, none of the information except the modified data is invalidated if the mode register is set to the first value. The second value may be utilized to efficiently reassign one or more cache memories to a new partition.
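The mode-register behavior, write back everything dirty, and optionally invalidate the whole cache, can be sketched as a single function. Mode names and values here are illustrative assumptions, not the patent's encoding.

```python
# Toy mode-register values (assumed encoding, not from the patent)
FLUSH_ONLY, FLUSH_AND_INVALIDATE = 1, 2

def flush_all(cache_lines, main_memory, mode):
    """Write every modified (dirty) line back to main memory; if the mode
    register selects FLUSH_AND_INVALIDATE, drop all lines afterwards.

    cache_lines: addr -> (value, dirty); main_memory: addr -> value.
    """
    for addr, (value, dirty) in cache_lines.items():
        if dirty:
            main_memory[addr] = value
            cache_lines[addr] = (value, False)  # line becomes clean
    if mode == FLUSH_AND_INVALIDATE:
        # Emptying the cache wholesale is what makes reassigning it to a
        # new partition cheap: no stale lines survive.
        cache_lines.clear()
```

With `FLUSH_ONLY`, clean and freshly written-back lines stay resident; with `FLUSH_AND_INVALIDATE`, the cache ends up both consistent with memory and empty.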
    • 7. Granted invention patent
    • Title: Delayed leaky write system and method for a cache memory
    • Publication number: US06934810B1
    • Publication date: 2005-08-23
    • Application number: US10255276
    • Filing date: 2002-09-26
    • Inventors: James A. Williams; Robert H. Andrighetti; Kelvin S. Vartti; David P. Williams
    • IPC: G06F12/00; G06F12/08
    • CPC: G06F12/0862; G06F12/0804; G06F12/0891
    • Abstract: A mechanism to selectively leak data signals from a cache memory is provided. According to one aspect of the invention, an Instruction Processor (IP) is coupled to generate requests to access data signals within the cache. Some requests include a leaky designator, which is activated if the associated data signals are considered “leaky”. These data signals are flushed from the cache memory after a predetermined delay has occurred. The delay is provided to allow the IP to complete any subsequent requests for the same data before the flush operation is performed, thereby preventing memory thrashing. Pre-fetch logic may also be provided to pre-fetch the data signals associated with the requests. In one embodiment, the rate at which data signals are flushed from cache memory is programmable, and is based on the rate at which requests are processed for pre-fetch purposes.
    • 提供了一种选择性地从高速缓冲存储器泄漏数据信号的机制。 根据本发明的一个方面,指令处理器(IP)被耦合以产生访问高速缓存内的数据信号的请求。 一些请求包括泄漏指示符,如果相关的数据信号被认为是“泄漏的”,则会被激活。 在发生预定的延迟之后,这些数据信号从高速缓冲存储器中刷新。 提供延迟以允许IP在执行刷新操作之前完成对相同数据的任何后续请求,从而防止内存抖动。 还可以提供预取逻辑以预取与请求相关联的数据信号。 在一个实施例中,数据信号从高速缓存存储器刷新的速率是可编程的,并且基于请求正在处理以用于预取目的的速率。