    • 12. Invention Grant
    • Mechanism to control the allocation of an N-source shared buffer
    • Publication No.: US07213087B1
    • Publication Date: 2007-05-01
    • Application No.: US09651924
    • Filing Date: 2000-08-31
    • Inventors: Michael S. Bertone; Richard E. Kessler; David H. Asher; Steve Lang
    • IPC: G06F5/00
    • CPC: H04L47/39; H04L49/90
    • A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits. Each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credits to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above a threshold, the buffer holds the credits and returns the credits in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point when the number of free spaces available in the buffer is equal to the total number of credits assigned to the cache control unit and the interprocessor router.
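The credit-based allocation described in the abstract above can be illustrated with a small simulation. The C++ sketch below is illustrative only (class and member names such as SharedBuffer::allocate are not from the patent): two sources pay credits into a shared buffer, credits are returned immediately while the fill level is at or below the threshold, and are held and returned only as slots free up once the threshold is exceeded.

```cpp
#include <cstddef>
#include <deque>
#include <iostream>

enum class Source { CacheControlUnit, InterprocessorRouter };

class SharedBuffer {
public:
    SharedBuffer(std::size_t capacity, std::size_t creditsPerSource)
        : capacity_(capacity),
          credits_{creditsPerSource, creditsPerSource},
          // Threshold: the fill level at which the remaining free slots just
          // cover the total credits handed out to both sources.
          threshold_(capacity - 2 * creditsPerSource) {}

    // A source pays one credit to place a request in the buffer.
    bool allocate(Source src) {
        std::size_t s = static_cast<std::size_t>(src);
        if (credits_[s] == 0 || filled_ == capacity_) return false;  // no credit or no space
        --credits_[s];
        ++filled_;
        if (filled_ <= threshold_) {
            ++credits_[s];         // light load: return the credit at once
        } else {
            held_.push_back(src);  // heavy load: hold it until a slot frees
        }
        return true;
    }

    // A request drains from the buffer; one held credit (if any) is handed
    // back. Arrival order in the deque approximates the round-robin return.
    void release() {
        if (filled_ == 0) return;
        --filled_;
        if (!held_.empty()) {
            ++credits_[static_cast<std::size_t>(held_.front())];
            held_.pop_front();
        }
    }

    std::size_t credits(Source src) const { return credits_[static_cast<std::size_t>(src)]; }

private:
    std::size_t capacity_;
    std::size_t credits_[2];
    std::size_t threshold_;
    std::size_t filled_ = 0;
    std::deque<Source> held_;
};

int main() {
    SharedBuffer buf(8, 2);  // 8 slots, 2 credits per source, threshold = 4
    for (int i = 0; i < 6; ++i) buf.allocate(Source::CacheControlUnit);
    std::cout << buf.credits(Source::CacheControlUnit) << "\n";  // 0: both credits held
    buf.release();                                               // a slot frees up
    std::cout << buf.credits(Source::CacheControlUnit) << "\n";  // 1: one credit returned
}
```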
    • 13. Invention Grant
    • Speculative directory writes in a directory based cache coherent nonuniform memory access protocol
    • Publication No.: US07099913B1
    • Publication Date: 2006-08-29
    • Application No.: US09652834
    • Filing Date: 2000-08-31
    • Inventors: Michael S. Bertone; Richard E. Kessler
    • IPC: G06F15/16; G06F12/00; G06F9/26
    • CPC: G06F12/0817; G06F12/0813; G06F2212/2542; G06F2212/507
    • A system and method is disclosed that reduces the latency of directory updates in a directory based Distributed Shared Memory computer system by speculating the next directory state. The distributed multiprocessing computer system contains a number of processor nodes each connected to main memory. Each main memory may store data that is shared between the processor nodes. A Home processor node for a memory block includes the original data block and a coherence directory for the data block in its main memory. An Owner processor node includes a copy of the original data block in its associated main memory, the copy of the data block residing exclusively in the main memory of the Owner processor node. A Requestor processor node may encounter a read or write miss of the original data block and request the data block from the Home processor node. The Home processor node receives the request for the data block from the Requestor processor node, forwards the request to the Owner processor node for the data block and performs a speculative write of the next directory state to the coherence directory for the data block without waiting for the Owner processor node to respond to the request.
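A rough feel for the speculation can be given in code. The sketch below is a toy model, not the patented protocol; HomeNode::handleMiss is a hypothetical name. The home node writes the assumed next directory state in the same step that it forwards the request to the owner, instead of waiting for the owner's reply.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

enum class DirState { Invalid, Shared, Exclusive };

struct DirEntry {
    DirState state = DirState::Invalid;
    int owner = -1;  // node holding the exclusive copy, if any
};

class HomeNode {
public:
    // Handle a read or write miss from a requestor node. The coherence
    // directory is written speculatively in the same step that the request
    // is forwarded, rather than after the owner has responded.
    void handleMiss(std::uint64_t block, int requestor, bool isWrite) {
        DirEntry& e = directory_[block];
        int prevOwner = e.owner;

        // Speculative directory write: assume the forwarded transfer succeeds.
        e.state = isWrite ? DirState::Exclusive : DirState::Shared;
        if (isWrite) e.owner = requestor;

        if (prevOwner >= 0 && prevOwner != requestor) {
            forwardToOwner(block, prevOwner, requestor);
        }
    }

private:
    void forwardToOwner(std::uint64_t block, int owner, int requestor) {
        std::cout << "forward block 0x" << std::hex << block << std::dec
                  << " from node " << owner << " to node " << requestor << "\n";
    }

    std::unordered_map<std::uint64_t, DirEntry> directory_;
};

int main() {
    HomeNode home;
    home.handleMiss(0x40, /*requestor=*/1, /*isWrite=*/true);  // node 1 becomes owner
    home.handleMiss(0x40, /*requestor=*/2, /*isWrite=*/true);  // forwarded to node 1 while
                                                               // the directory is written speculatively
}
```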
    • 14. Invention Grant
    • System for minimizing memory bank conflicts in a computer system
    • Publication No.: US06622225B1
    • Publication Date: 2003-09-16
    • Application No.: US09652325
    • Filing Date: 2000-08-31
    • Inventors: Richard E. Kessler; Michael S. Bertone; Michael C. Braganza; Gregg A. Bouchard; Maurice B. Steinman
    • IPC: G06F12/00
    • CPC: G06F13/1642
    • A computer system includes a memory controller interfacing the processor to a memory system. The memory controller supports a memory system with a plurality of memory devices, with multiple memory banks in each memory device. The memory controller supports simultaneous memory accesses to different memory banks. Memory bank conflicts are avoided by examining each transaction before it is loaded in the memory transaction queue. On a first clock cycle, the new pending memory request is transferred from a pending request queue to a memory mapper. On the subsequent clock cycle, the memory mapper formats the pending memory request into separate signals identifying the DEVICE, BANK, ROW and COLUMN to be accessed by the pending transaction. In the next clock cycle, the DEVICE and BANK signals are compared with every entry in the memory transaction queue to determine if a bank conflict exists. If so, the new memory request is rejected and recycled to the pending request queue.
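The conflict check described above (map the pending request into DEVICE, BANK, ROW and COLUMN fields, then compare DEVICE and BANK against every queued transaction, recycling the request on a match) can be sketched as follows. The address bit layout and all names are assumptions made for the illustration, not the patent's actual design, and the clock-by-clock pipelining is collapsed into a single function call.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct MappedRequest {
    uint32_t device, bank, row, column;
};

// Assumed physical-address layout for the sketch: device(3) | bank(3) | row(14) | column(12).
MappedRequest mapAddress(uint32_t addr) {
    return {
        (addr >> 29) & 0x7,
        (addr >> 26) & 0x7,
        (addr >> 12) & 0x3FFF,
        addr & 0xFFF,
    };
}

class MemoryController {
public:
    // Returns true if the request enters the transaction queue, false if its
    // DEVICE/BANK pair matches a queued entry and it is recycled instead.
    bool issue(uint32_t addr) {
        MappedRequest req = mapAddress(addr);
        for (const MappedRequest& q : transactionQueue_) {
            if (q.device == req.device && q.bank == req.bank) {
                pendingQueue_.push_back(addr);  // bank conflict: retry later
                return false;
            }
        }
        transactionQueue_.push_back(req);
        return true;
    }

private:
    std::vector<MappedRequest> transactionQueue_;
    std::deque<uint32_t> pendingQueue_;
};

int main() {
    MemoryController mc;
    std::cout << mc.issue(0x40001000) << "\n";  // 1: accepted
    std::cout << mc.issue(0x40002000) << "\n";  // 0: same device and bank, rejected
}
```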
    • 18. Invention Grant
    • Direct access to low-latency memory
    • Publication No.: US07594081B2
    • Publication Date: 2009-09-22
    • Application No.: US11024002
    • Filing Date: 2004-12-28
    • Inventors: Gregg A. Bouchard; David A. Carlson; Richard E. Kessler; Muhammad R. Hussain
    • IPC: G06F12/00; G06F13/00; G06F13/28
    • CPC: G06F9/3824; G06F9/3885; G06F12/0888
    • A content aware application processing system is provided for allowing directed access to data stored in a non-cache memory thereby bypassing cache coherent memory. The processor includes a system interface to cache coherent memory and a low latency memory interface to a non-cache coherent memory. The system interface directs memory access for ordinary load/store instructions executed by the processor to the cache coherent memory. The low latency memory interface directs memory access for non-ordinary load/store instructions executed by the processor to the non-cache memory, thereby bypassing the cache coherent memory. The non-ordinary load/store instruction can be a coprocessor instruction. The memory can be a low-latency type memory. The processor can include a plurality of processor cores.
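A toy model of the two access paths is shown below: ordinary loads and stores are routed to a cache-coherent memory, while coprocessor-style accesses use a separate non-coherent low-latency array. Names such as llmLoad are invented for this sketch; in the actual design the steering happens in hardware based on the instruction type.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

class Core {
public:
    Core() : coherentMem_(1024, 0), lowLatencyMem_(256, 0) {}

    // Ordinary load/store: routed through the system interface to
    // cache-coherent memory (the cache lookup itself is omitted here).
    uint64_t load(std::size_t addr) const { return coherentMem_[addr]; }
    void store(std::size_t addr, uint64_t v) { coherentMem_[addr] = v; }

    // Non-ordinary (coprocessor-style) load/store: routed through the
    // low-latency memory interface, bypassing the coherent hierarchy.
    uint64_t llmLoad(std::size_t addr) const { return lowLatencyMem_[addr]; }
    void llmStore(std::size_t addr, uint64_t v) { lowLatencyMem_[addr] = v; }

private:
    std::vector<uint64_t> coherentMem_;    // cache-coherent main memory
    std::vector<uint64_t> lowLatencyMem_;  // non-coherent low-latency memory
};

int main() {
    Core core;
    core.store(0, 42);    // ordinary path
    core.llmStore(0, 7);  // low-latency path; the coherent copy is untouched
    std::cout << core.load(0) << " " << core.llmLoad(0) << "\n";  // prints "42 7"
}
```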
    • 19. Invention Grant
    • Selective replication of data structures
    • Publication No.: US07558925B2
    • Publication Date: 2009-07-07
    • Application No.: US11335189
    • Filing Date: 2006-01-18
    • Inventors: Gregg A. Bouchard; David A. Carlson; Richard E. Kessler
    • IPC: G06F12/02
    • CPC: G06F12/06; G06F12/0653; G06F2212/174
    • Methods and apparatus are provided for selectively replicating a data structure in a low-latency memory. The memory includes multiple individual memory banks configured to store replicated copies of the same data structure. Upon receiving a request to access the stored data structure, a low-latency memory access controller selects one of the memory banks, then accesses the stored data from the selected memory bank. Selection of a memory bank can be accomplished using a thermometer technique comparing the relative availability of the different memory banks. Exemplary data structures that benefit from the resulting efficiencies include deterministic finite automata (DFA) graphs and other data structures that are loaded (i.e., read) more often than they are stored (i.e., written).
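As a rough sketch of the idea (not the patented mechanism), the code below keeps one replica of a read-mostly structure per memory bank and serves each read from the bank with the fewest outstanding requests, a simplification of the "thermometer" availability comparison mentioned in the abstract. Writes update every replica.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Bank {
    std::vector<uint32_t> data;  // replicated copy of the structure
    int outstanding = 0;         // pending requests, used as the availability measure
};

class ReplicatedStore {
public:
    ReplicatedStore(std::size_t nBanks, const std::vector<uint32_t>& structure)
        : banks_(nBanks) {
        for (Bank& b : banks_) b.data = structure;  // a store updates every replica
    }

    // A load is served by whichever bank currently has the fewest outstanding
    // requests (the replicas are identical, so any bank returns the same value).
    uint32_t read(std::size_t index) {
        auto it = std::min_element(
            banks_.begin(), banks_.end(),
            [](const Bank& a, const Bank& b) { return a.outstanding < b.outstanding; });
        ++it->outstanding;
        uint32_t value = it->data[index];
        --it->outstanding;  // the request retires immediately in this toy model
        return value;
    }

private:
    std::vector<Bank> banks_;
};

int main() {
    ReplicatedStore store(4, {10, 20, 30});  // four replicas of a tiny read-mostly table
    std::cout << store.read(2) << "\n";      // 30, served from the least-busy bank
}
```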