    • 1. Granted Invention Patent
    • Title: Computer resource management and allocation system
    • Publication No.: US06754739B1
    • Publication Date: 2004-06-22
    • Application No.: US09651945
    • Filing Date: 2000-08-31
    • Inventors: Richard E. Kessler, Michael S. Bertone, Gregg A. Bouchard, Maurice B. Steinman
    • IPC: G06F 13/00
    • CPC: G06F 9/544
    • Abstract: A method and architecture for improved system resource management and allocation for the processing of request and response messages in a computer system. The resource management scheme provides for dynamically sharing system resources, such as data buffers, between request and response messages or transactions. In particular, instead of simply dedicating a portion of the system resources to requests and the remaining portion to responses, a minimum amount of resources are reserved for responses and a minimum amount for requests, while the remaining resources are dynamically shared between both types of messages. The method and architecture of the present invention allows for more efficient use of system resources, while avoiding deadlock conditions and ensuring a minimum service rate for requests.
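The allocation scheme in this abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class name, pool sizes, and message kinds are all made up for the example. The key property it demonstrates is the one the abstract claims: each message type keeps a reserved minimum, so neither type can starve the other even when one exhausts the shared pool.

```python
class SharedResourcePool:
    """Sketch of the scheme: reserve a minimum number of buffers for
    requests and for responses, and dynamically share the remainder."""

    def __init__(self, total, min_requests, min_responses):
        assert min_requests + min_responses <= total
        self.reserved_free = {"request": min_requests, "response": min_responses}
        self.shared = total - min_requests - min_responses  # free shared buffers
        self.shared_in_use = {"request": 0, "response": 0}

    def allocate(self, kind):
        # Prefer the type's own reserved pool; fall back to the shared pool.
        if self.reserved_free[kind] > 0:
            self.reserved_free[kind] -= 1
            return True
        if self.shared > 0:
            self.shared -= 1
            self.shared_in_use[kind] += 1
            return True
        return False  # no buffer available; the message must wait

    def release(self, kind):
        # Return shared buffers first, so the reserved minimum refills last.
        if self.shared_in_use[kind] > 0:
            self.shared_in_use[kind] -= 1
            self.shared += 1
        else:
            self.reserved_free[kind] += 1
```

With 8 buffers and a minimum of 2 per type, requests can consume at most 6 buffers (their 2 reserved plus the 4 shared); the 2 buffers reserved for responses remain available, which is what prevents the deadlock condition the abstract mentions.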
    • 2. Granted Invention Patent
    • Title: Mechanism to track all open pages in a DRAM memory system
    • Publication No.: US06662265B1
    • Publication Date: 2003-12-09
    • Application No.: US09652704
    • Filing Date: 2000-08-31
    • Inventors: Richard E. Kessler, Maurice B. Steinman, Michael S. Bertone, Peter J. Bannon, Gregg A. Bouchard
    • IPC: G06F 12/00
    • CPC: G06F 12/0215; G06F 13/1631
    • Abstract: A system and method is disclosed to track a large number of open pages in a computer memory system. The computer system contains one or more processors each including a memory controller containing a page table, the page table organized into a plurality of rows with each row able to store an address of an open memory page. A RIMM module containing RDRAM devices is coupled to each processor, each RDRAM containing a plurality of memory banks. The page table increases system memory performance by tracking a large number of open memory pages. Associated with the page table is a bank active table that indicates the memory banks in each RDRAM device having open memory pages. The page table enqueues accesses to the RIMM module in a precharge queue resulting from a page miss caused by the address of an open memory page occupying the same row of the page table as the address of the system memory access resulting in the page miss. The page table also enqueues accesses to system memory in a Row-address-select (“RAS”) queue resulting from a page miss caused by a row of the page table not containing any open memory page address. The page table enqueues accesses to system memory resulting in page hits to open memory pages in a Column-address-select (“CAS”) queue. An entry in the precharge queue is then enqueued into the RAS queue. An entry in the RAS queue after completion is enqueued into the CAS Read or CAS Write queue.
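The three-way dispatch in this abstract (page hit, miss on an empty page-table row, miss on an occupied row) can be sketched as below. This is a simplified model under assumed parameters: the table size, the direct-mapped indexing, and all names are illustrative, and the bank active table and the queue-to-queue promotion steps are omitted.

```python
from collections import deque

class OpenPageTracker:
    """Sketch: a direct-mapped table of open-page addresses, plus the
    precharge / RAS / CAS queues described in the abstract."""

    def __init__(self, rows=16):
        self.rows = rows
        self.page_table = [None] * rows  # open-page address per row, or None
        self.precharge_q = deque()       # misses that must first close a page
        self.ras_q = deque()             # misses whose row is idle
        self.cas_q = deque()             # page hits

    def access(self, page_addr):
        row = page_addr % self.rows      # assumed direct-mapped index
        open_page = self.page_table[row]
        if open_page == page_addr:
            self.cas_q.append(page_addr)       # hit: column access only
            return "hit"
        if open_page is None:
            self.page_table[row] = page_addr
            self.ras_q.append(page_addr)       # activate row, then column access
            return "miss-idle"
        # A different open page occupies this row: precharge (close) it first.
        self.page_table[row] = page_addr
        self.precharge_q.append(page_addr)
        return "miss-conflict"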
    • 3. Granted Invention Patent
    • Title: System for minimizing memory bank conflicts in a computer system
    • Publication No.: US06622225B1
    • Publication Date: 2003-09-16
    • Application No.: US09652325
    • Filing Date: 2000-08-31
    • Inventors: Richard E. Kessler, Michael S. Bertone, Michael C. Braganza, Gregg A. Bouchard, Maurice B. Steinman
    • IPC: G06F 12/00
    • CPC: G06F 13/1642
    • Abstract: A computer system includes a memory controller interfacing the processor to a memory system. The memory controller supports a memory system with a plurality of memory devices, with multiple memory banks in each memory device. The memory controller supports simultaneous memory accesses to different memory banks. Memory bank conflicts are avoided by examining each transaction before it is loaded in the memory transaction queue. On a first clock cycle, the new pending memory request is transferred from a pending request queue to a memory mapper. On the subsequent clock cycle, the memory mapper formats the pending memory request into separate signals identifying the DEVICE, BANK, ROW and COLUMN to be accessed by the pending transaction. In the next clock cycle, the DEVICE and BANK signals are compared with every entry in the memory transaction queue to determine if a bank conflict exists. If so, the new memory request is rejected and recycled to the pending request queue.
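The mapper-and-compare step above can be sketched in a few lines. The field widths in `map_address` are invented for the example (the patent does not specify them here), and the cycle-by-cycle pipelining is collapsed into plain function calls; only the DEVICE/BANK comparison and the reject-and-recycle behavior are modeled.

```python
def map_address(addr, banks=8, rows=1024, cols=256):
    """Decompose a flat address into (device, bank, row, column) signals.
    The geometry is illustrative, not taken from the patent."""
    col = addr % cols
    row = (addr // cols) % rows
    bank = (addr // (cols * rows)) % banks
    device = addr // (cols * rows * banks)
    return device, bank, row, col

def try_enqueue(addr, txn_queue, pending_queue):
    """Load a request into the transaction queue only if no queued entry
    targets the same device and bank; otherwise recycle it."""
    device, bank, row, col = map_address(addr)
    for d, b, _, _ in txn_queue:
        if (d, b) == (device, bank):     # bank conflict detected
            pending_queue.append(addr)   # reject: back to the pending queue
            return False
    txn_queue.append((device, bank, row, col))
    return True
```

Because only the DEVICE and BANK fields are compared, two requests to different rows of the same bank still conflict, while requests to different banks proceed in parallel, matching the abstract.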
    • 7. Granted Invention Patent
    • Title: Mechanism to control the allocation of an N-source shared buffer
    • Publication No.: US07213087B1
    • Publication Date: 2007-05-01
    • Application No.: US09651924
    • Filing Date: 2000-08-31
    • Inventors: Michael S. Bertone, Richard E. Kessler, David H. Asher, Steve Lang
    • IPC: G06F 5/00
    • CPC: H04L 47/39; H04L 49/90
    • Abstract: A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits. Each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credits to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above a threshold, the buffer holds the credits and returns the credits in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point when the number of free spaces available in the buffer is equal to the total number of credits assigned to the cache control unit and the interprocessor router.
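The credit flow above can be sketched as follows. The two sources are abbreviated `"ccu"` (cache control unit) and `"router"`; the slot and credit counts are example values, and the held-credit return is approximated with a simple FIFO rather than a strict round-robin arbiter, so this is an illustration of the scheme, not the patented mechanism.

```python
from collections import deque

class CreditBuffer:
    """Sketch: each source pays a credit per request; credits bounce back
    immediately while the buffer has ample free space, and are held and
    returned one-per-freed-slot once it is congested."""

    def __init__(self, slots, credits_per_source):
        self.free = slots
        self.credits = {"ccu": credits_per_source, "router": credits_per_source}
        # Threshold from the abstract: free space equal to total credits.
        self.threshold = 2 * credits_per_source
        self.held = deque()  # credits held for later return (FIFO approximation)

    def request(self, source):
        if self.credits[source] == 0 or self.free == 0:
            return False                 # no credit to pay, or buffer full
        self.credits[source] -= 1
        self.free -= 1
        if self.free >= self.threshold:
            self.credits[source] += 1    # uncongested: return credit at once
        else:
            self.held.append(source)     # congested: hold the credit
        return True

    def complete(self):
        # A buffered request finishes: free its slot, return one held credit.
        self.free += 1
        if self.held:
            self.credits[self.held.popleft()] += 1
```

While the buffer is uncongested a source never waits for credits, so it can sustain an uninterrupted burst; once congested, each source's progress is rationed by the returned credits, which is the fairness property the abstract claims.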
    • 10. Granted Invention Patent
    • Title: Speculative directory writes in a directory based cache coherent nonuniform memory access protocol
    • Publication No.: US07099913B1
    • Publication Date: 2006-08-29
    • Application No.: US09652834
    • Filing Date: 2000-08-31
    • Inventors: Michael S. Bertone, Richard E. Kessler
    • IPC: G06F 15/16; G06F 12/00; G06F 9/26
    • CPC: G06F 12/0817; G06F 12/0813; G06F 2212/2542; G06F 2212/507
    • Abstract: A system and method is disclosed that reduces the latency of directory updates in a directory based Distributed Shared Memory computer system by speculating the next directory state. The distributed multiprocessing computer system contains a number of processor nodes each connected to main memory. Each main memory may store data that is shared between the processor nodes. A Home processor node for a memory block includes the original data block and a coherence directory for the data block in its main memory. An Owner processor node includes a copy of the original data block in its associated main memory, the copy of the data block residing exclusively in the main memory of the Owner processor node. A Requestor processor node may encounter a read or write miss of the original data block and request the data block from the Home processor node. The Home processor node receives the request for the data block from the Requestor processor node, forwards the request to the Owner processor node for the data block and performs a speculative write of the next directory state to the coherence directory for the data block without waiting for the Owner processor node to respond to the request.
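The speculative write described above can be sketched as a tiny model of the Home node's directory. Everything here is illustrative: the state encoding, the rollback-on-failure handling, and the omission of the actual message forwarding are assumptions made for the example, not details from the patent claims.

```python
class HomeDirectory:
    """Sketch: on a forwarded request, write the expected next directory
    state immediately instead of waiting for the Owner's reply."""

    def __init__(self):
        self.state = {}  # block -> (coherence state, owning node)

    def handle_request(self, block, requestor):
        # The real system forwards the request to the Owner node here.
        # Speculatively record the state expected once the Owner responds,
        # saving the previous entry in case the speculation is wrong.
        previous = self.state.get(block)
        self.state[block] = ("exclusive", requestor)  # speculative write
        return previous

    def owner_response(self, block, previous, success):
        # Only a failed speculation costs anything: restore the old state.
        if not success:
            self.state[block] = previous
```

The latency win is that in the common (successful) case the directory update has already completed by the time the Owner responds, so no second directory write is on the critical path.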