    • 1. Granted invention patent
    • Fast lane prefetching
    • US06681295B1
    • 2004-01-20
    • US09652451
    • 2000-08-31
    • Stephen C. Root; Richard E. Kessler; David H. Asher; Brian Lilly
    • G06F12/00
    • G06F9/30047; G06F9/383; G06F9/3832; G06F9/3836; G06F9/384; G06F9/3855; G06F9/3857; G06F9/3859; G06F12/0862; G06F12/0864; G06F2212/6028
    • A computer system has a set-associative, multi-way cache system in which at least one way is designated as a fast lane and the remaining way(s) are designated slow lanes. Any data that needs to be loaded into cache, but is not likely to be needed again in the future, preferably is loaded into the fast lane. Data loaded into the fast lane is earmarked for immediate replacement. Data loaded into the slow lanes preferably is data that may be needed again in the near future. Slow data is kept in cache to permit it to be reused if necessary. The high-performance mechanism of data access in a modern microprocessor is the prefetch: data is moved into cache with a special prefetch instruction prior to its intended use. The prefetch instruction requires fewer machine resources than carrying out the same intent with an ordinary load instruction. The slow-lane/fast-lane decision is therefore accomplished by having a multiplicity of prefetch instructions. By loading “not likely to be needed again” data into the fast lane, and designating such data for immediate replacement, data in other cache blocks, in the other ways, need not be evicted, and overall system performance is increased.
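The fast-lane/slow-lane policy described in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the class, the choice of way 0 as the fast lane, and the `use_once` flag are all assumptions made for clarity.

```python
# Hypothetical sketch of the fast-lane idea: in an N-way set, way 0 is the
# "fast lane". Use-once data lands there and is marked evict-first, so it
# never displaces reusable data held in the slow lanes (ways 1..N-1).

class SetAssociativeSet:
    def __init__(self, num_ways=4):
        self.ways = [None] * num_ways      # cached block tags
        self.lru = list(range(num_ways))   # LRU order, front = next victim

    def fill(self, tag, use_once=False):
        if use_once:
            victim = 0                     # fast lane: always way 0
            # evict-first: keep way 0 at the front of the LRU order
            self.lru.remove(0)
            self.lru.insert(0, 0)
        else:
            # slow lanes: least recently used way other than way 0
            victim = next(w for w in self.lru if w != 0)
            self.lru.remove(victim)
            self.lru.append(victim)        # now most recently used
        self.ways[victim] = tag
        return victim

s = SetAssociativeSet()
s.fill("A")                  # reusable data -> a slow lane (way 1)
s.fill("B", use_once=True)   # streaming data -> fast lane (way 0)
s.fill("C", use_once=True)   # replaces B in way 0; slow lanes untouched
```

Note how the two use-once fills churn only way 0, leaving the reusable block "A" resident — the effect the abstract attributes to having distinct prefetch instructions per lane.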
    • 2. Granted invention patent
    • Method for reducing directory writes and latency in a high performance, directory-based, coherency protocol
    • US06654858B1
    • 2003-11-25
    • US09652324
    • 2000-08-31
    • David H. Asher; Brian Lilly; Richard E. Kessler; Michael Bertone
    • G06F12/00
    • G06F12/0817; G06F12/0824
    • A computer system has a plurality of processors, wherein each processor preferably has its own cache memory. Each processor or group of processors may have a memory controller that interfaces to a main memory. Each main memory includes a “directory” that maintains the directory coherence state of each block of that memory. One or more of the processors are members of a “local” group of processors. Processors outside a local group are referred to as “remote” processors with respect to that local group. Whenever a remote processor performs a memory reference for a particular block of memory, the processor that maintains the directory for that block normally updates the directory to reflect that the remote processor now has exclusive ownership of the block. However, memory references between processors within a local group do not result in directory writes. Instead, the cache memory of the local processor that initiated the memory request places or updates a copy of the requested data in its cache memory and also sets associated tag control bits to reflect the same or similar information as would have been written to the directory. If a subsequent request is received for that same block, the local processor that previously accessed the block examines its cache for the associated tag control bits. Using those bits, that processor determines that it currently holds the block exclusively and provides the requested data to the new processor that is requesting the data.
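The directory-write-avoidance behavior in the abstract above can be sketched like this. Everything here is an illustrative assumption — the classes, the group labels, and the `tag_bits` dictionary stand in for the in-memory directory and the cache tag control bits the abstract describes.

```python
# Hypothetical sketch: exclusive-ownership requests from inside the home
# block's local group record ownership in the requester's cache tag
# control bits; only remote requests pay for a directory write.

class Directory:
    def __init__(self):
        self.owner = None
        self.writes = 0              # count of directory writes performed

class Processor:
    def __init__(self, pid, group):
        self.pid, self.group = pid, group
        self.tag_bits = {}           # block address -> ownership marker

def request_exclusive(requester, home_group, block, directory):
    if requester.group == home_group:
        # local request: no directory write, just set tag control bits
        requester.tag_bits[block] = "exclusive"
    else:
        # remote request: the directory must be updated
        directory.owner = requester.pid
        directory.writes += 1

d = Directory()
local = Processor(pid=0, group="A")
remote = Processor(pid=7, group="B")
request_exclusive(local, "A", 0x100, d)    # local: no directory write
request_exclusive(remote, "A", 0x200, d)   # remote: directory updated
```

A later request for block 0x100 would be answered by `local` consulting its own `tag_bits`, mirroring the abstract's point that the tag control bits substitute for a directory read.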
    • 6. Granted invention patent
    • Scalable directory based cache coherence protocol
    • US06918015B2
    • 2005-07-12
    • US10403922
    • 2003-03-31
    • Richard E. Kessler; Kourosh Gharachorloo; David H. Asher
    • G06F12/08; G06F12/00
    • G06F12/0817; G06F12/0828
    • A system and method are disclosed to maintain the coherence of shared data in cache and memory contained in the nodes of a multiprocessing computer system. The distributed multiprocessing computer system contains a number of processors each connected to main memory. A processor in the distributed multiprocessing computer system is identified as a Home processor for a memory block if it includes the original memory block and a coherence directory for the memory block in its main memory. An Owner processor is another processor in the multiprocessing computer system that includes a copy of the Home processor memory block in a cache connected to its main memory. Whenever an Owner processor is present for a memory block, it is the only processor in the distributed multiprocessing computer system to contain a copy of the Home processor memory block. Eviction of a memory block copy held by an Owner processor in its cache requires a write of the memory block copy to its Home and an update of the corresponding coherence directory. No reads of the Home processor directory or modification of other processor cache and main memory are required. The coherence controller in each processor is able to send and receive messages out of order to maintain the coherence of the shared data in cache and main memory. If an out-of-order message causes an incorrect next program state, the coherence controller is able to restore the prior correct saved program state and resume execution.
    • 7. Granted invention patent
    • Input output bridging
    • US08473658B2
    • 2013-06-25
    • US13280768
    • 2011-10-25
    • Robert A. Sanzone; David H. Asher; Richard E. Kessler
    • G06F13/00
    • G06F13/1605; G06F13/1684
    • In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus.
    • 10. Granted invention patent
    • Mechanism to control the allocation of an N-source shared buffer
    • US07213087B1
    • 2007-05-01
    • US09651924
    • 2000-08-31
    • Michael S. Bertone; Richard E. Kessler; David H. Asher; Steve Lang
    • G06F5/00
    • H04L47/39; H04L49/90
    • A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits. Each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credits to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above a threshold, the buffer holds the credits and returns the credits in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point when the number of free spaces available in the buffer is equal to the total number of credits assigned to the cache control unit and the interprocessor router.
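The credit-based allocation scheme in the abstract above can be sketched as follows. This is a simplified illustration under stated assumptions: the class name, the two source labels, and the use of a FIFO to approximate the round-robin credit return are inventions of this sketch, not the patented design.

```python
# Hypothetical sketch of the credit-based shared buffer: each source
# (cache control unit, interprocessor router) pays a credit per
# allocation. Below the threshold, credits bounce back immediately;
# above it, they are held and returned only as buffer slots free up.

from collections import deque

class SharedBuffer:
    def __init__(self, size, credits_per_source):
        self.size = size
        self.filled = 0
        # threshold: the fill level at which the free space equals the
        # total credits assigned to both sources
        self.threshold = size - 2 * credits_per_source
        self.credits = {"cache_ctl": credits_per_source,
                        "router": credits_per_source}
        self.held = deque()            # credits held past the threshold

    def allocate(self, source):
        assert self.credits[source] > 0, "source out of credits"
        self.credits[source] -= 1      # pay a credit with the request
        self.filled += 1
        if self.filled <= self.threshold:
            self.credits[source] += 1  # return the credit immediately
        else:
            self.held.append(source)   # hold until a slot frees

    def free(self):
        self.filled -= 1
        if self.held:                  # return held credits in arrival order
            self.credits[self.held.popleft()] += 1

buf = SharedBuffer(size=8, credits_per_source=2)
for _ in range(4):
    buf.allocate("cache_ctl")          # below threshold: credits bounce back
```

Because each source starts with its full credit allotment and credits return instantly below the threshold, a source can sustain an uninterrupted burst of requests, which is the fairness-plus-throughput property the abstract claims.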