    • 1. Granted invention patent
    • Memory management method for coupled memory multiprocessor systems
    • Publication No.: US5237673A
    • Publication date: 1993-08-17
    • Application No.: US673132
    • Filing date: 1991-03-20
    • Inventors: David A. Orbits; Kenneth D. Abramson; H. Bruce Butts, Jr.
    • IPC: G06F12/08
    • CPC: G06F12/08
    • Abstract: A method of managing the memory of a CM multiprocessor computer system is disclosed. A CM multiprocessor computer system includes: a plurality of CPU modules 11a . . . 11n to which processes are assigned; one or more optional global memories 13a . . . 13n; a storage medium 15a, 15b . . . 15n; and a global interconnect 12. Each of the CPU modules 11a . . . 11n includes a processor 21 and a coupled memory 23 accessible by the local processor without using the global interconnect 12. Processors have access to remote coupled memory regions via the global interconnect 12. Memory is managed by transferring, from said storage medium, the data and stack pages of a process to be run to the coupled memory region of the CPU module to which the process is assigned, when the pages are called for by the process. Other pages are transferred to global memory, if available. At prescribed intervals, the free memory of each coupled memory region and global memory is evaluated to determine if it is below a threshold. If below the threshold, a predetermined number of pages of the memory region are scanned. Infrequently used pages are placed on the end of a list of pages that can be replaced with pages stored in the storage medium. Pages associated with processes that are terminating are placed at the head of the list of replacement pages.
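The reclamation policy in the abstract above (threshold-triggered bounded scan, cold pages to the tail of a replacement list, pages of exiting processes to the head) can be sketched as follows. This is a minimal illustration, not the patented implementation; all names (`Region`, `tick`, `exit_process`, the reference-bit model) are hypothetical.

```python
from collections import deque

class Region:
    """Sketch of the per-region reclamation policy: when free memory drops
    below a threshold, a bounded number of pages is scanned; infrequently
    used pages are appended to the replacement list, and pages of
    terminating processes are pushed to its head (hypothetical model)."""

    def __init__(self, capacity, threshold, scan_limit):
        self.capacity, self.threshold, self.scan_limit = capacity, threshold, scan_limit
        self.resident = []       # resident pages: {"pid": int, "ref": bool}
        self.replace = deque()   # eviction candidates; evicted from the head

    def free(self):
        return self.capacity - len(self.resident)

    def tick(self):
        """Run at the prescribed interval: scan only if memory is tight."""
        if self.free() >= self.threshold:
            return
        for page in self.resident[: self.scan_limit]:
            if page["ref"]:
                page["ref"] = False           # recently used: clear and keep
            else:
                self.resident.remove(page)
                self.replace.append(page)     # infrequently used -> tail

    def exit_process(self, pid):
        """Pages of a terminating process become the first to be replaced."""
        for page in [p for p in self.resident if p["pid"] == pid]:
            self.resident.remove(page)
            self.replace.appendleft(page)     # -> head of the list
```

A region under memory pressure thus evicts cold pages lazily, while a process exit makes its pages immediately reusable.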
    • 2. Granted invention patent
    • Enhanced cache operation with remapping of pages for optimizing data relocation from addresses causing cache misses
    • Publication No.: US5630097A
    • Publication date: 1997-05-13
    • Application No.: US178487
    • Filing date: 1994-01-07
    • Inventors: David A. Orbits; Kenneth D. Abramson; H. Bruce Butts, Jr.
    • IPC: G06F12/08; G06F12/10; G06F12/02
    • CPC: G06F12/10; G06F12/0864; G06F2212/653
    • Abstract: A computer system executing virtual memory management and having a cache is operated in a manner to reduce cache misses by remapping pages of physical memory from which cache misses are detected. The method includes detecting cache misses, as by observing cache fill operations on the system bus, and then remapping the pages in the main memory which contain the addresses of the most frequent cache misses, so that memory references causing thrashing can then coexist in different pages of the cache. For a CPU executing a virtual memory operating system, a page of data or instructions can be moved to a different physical page frame but remain at the same virtual address, by simply updating the page-mapping tables to reflect the new physical location of the page, and copying the data from the old page frame to the new one. The primary feature of the invention is to add bus activity sampling logic to the CPU and enhance the operating system to allow the operating system to detect when cache thrashing is occurring and remap data pages to new physical memory locations to eliminate the cache thrashing situation.
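The remapping step the abstract describes — update the page table to a new frame, copy the data, leave the virtual address alone — can be sketched like this. A simplified model with hypothetical names, not the patented bus-sampling hardware.

```python
def remap_hot_pages(page_table, frames, miss_counts, miss_threshold, free_frames):
    """Sketch: virtual pages whose miss count exceeds a threshold are moved
    to a new physical frame; only the mapping changes, the virtual address
    each process sees stays the same (hypothetical names throughout)."""
    for vpage, misses in miss_counts.items():
        if misses >= miss_threshold and free_frames:
            old = page_table[vpage]
            new = free_frames.pop()
            frames[new] = frames[old][:]   # copy data: old frame -> new frame
            page_table[vpage] = new        # remap; virtual address unchanged
```

Because the two conflicting pages now occupy frames that index into different cache lines, the references that previously thrashed can coexist in the cache.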
    • 3. Granted invention patent
    • Affinity scheduling of processes on symmetric multiprocessing systems
    • Publication No.: US5506987A
    • Publication date: 1996-04-09
    • Application No.: US217127
    • Filing date: 1994-03-24
    • Inventors: Kenneth D. Abramson; H. Bruce Butts, Jr.; David A. Orbits
    • IPC: G06F9/50; G06F9/46
    • CPC: G06F9/5033
    • Abstract: A method of scheduling processes on a symmetric multiprocessing system that maintains process-to-CPU affinity without introducing excessive idle time is disclosed. When a new process is assigned, the process is identified as young and small, given a migtick value and assigned to a specific CPU. If the priority of a process placed on a run queue is above a threshold, the high priority count of the assigned CPU is incremented. At predetermined clock intervals, an interrupt occurs that causes the migtick value of running processes to be decremented. Then each CPU is tested to determine if its high priority count is greater than zero. CPUs having high priority counts greater than zero are tested to determine if any processes having a priority greater than the priority of the running process are assigned. If higher priority processes are assigned to a CPU having assigned processes lying above the threshold, a context switch takes place that results in the higher priority process being run. At regular intervals, a migration daemon is run to load balance the multiprocessor system. First, a large/small process threshold is determined. Then processes whose migtick values are below a migtick threshold (e.g., 0) are identified as old. Old processes then are identified as large or small processes based on their memory usage. Next, a determination is made of whether the small and large process load balances of the system can be improved. If either or both can be improved, the smallest small and/or the smallest large processes are migrated from their assigned CPU to the CPU with, as the case may be, the least large or the least small processes.
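The migration-daemon pass above (migtick expiry marks a process old, old processes are classified large/small by memory use, the smallest candidate moves to the least-loaded CPU of its class) might look roughly like this for the small-process class. All names are hypothetical and the classification is reduced to one class for brevity.

```python
def migration_daemon(procs, cpus, migtick_threshold=0, size_threshold=1024):
    """Sketch of one load-balancing pass for small processes: processes
    whose migtick has expired are candidates; the smallest one migrates to
    the CPU with the fewest small processes (hypothetical names/units)."""
    old = [p for p in procs if p["migtick"] <= migtick_threshold]
    small = [p for p in old if p["mem"] < size_threshold]
    if not small:
        return None
    counts = {c: sum(1 for p in procs if p["cpu"] == c and p["mem"] < size_threshold)
              for c in cpus}
    target = min(cpus, key=lambda c: counts[c])      # least-loaded CPU
    victim = min(small, key=lambda p: p["mem"])      # smallest candidate
    if victim["cpu"] != target:
        victim["cpu"] = target                       # migrate
        return victim
    return None
```

The real method runs the analogous pass for large processes as well, against the CPU with the fewest large processes.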
    • 4. Granted invention patent
    • Coupled memory multiprocessor computer system including cache coherency management protocols
    • Publication No.: US5303362A
    • Publication date: 1994-04-12
    • Application No.: US673766
    • Filing date: 1991-03-20
    • Inventors: H. Bruce Butts, Jr.; David A. Orbits; Kenneth D. Abramson
    • IPC: G06F12/08; G06F12/06
    • CPC: G06F12/0831; G06F12/0811
    • Abstract: A coherent coupled memory multiprocessor computer system that includes a plurality of processor modules (11a, 11b . . . ), a global interconnect (13), an optional global memory (15) and an input/output subsystem (17,19) is disclosed. Each processor module (11a, 11b . . . ) includes: a processor (21); cache memory (23); cache memory controller logic (22); coupled memory (25); coupled memory control logic (24); and a global interconnect interface (27). Coupled memory (25) associated with a specific processor (21), like global memory (15), is available to other processors (21). Coherency between data stored in coupled (or global) memory and similar data replicated in cache memory is maintained by either a write-through or a write-back cache coherency management protocol. The selected protocol is implemented in hardware, i.e., logic, form, preferably incorporated in the coupled memory control logic (24) and in the cache memory controller logic (22). In the write-through protocol, processor writes are propagated directly to coupled memory while invalidating corresponding data in cache memory. In contrast, the write-back protocol allows data owned by a cache to be continuously updated until requested by another processor, at which time the coupled memory is updated and other cache blocks containing the same data are invalidated.
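The write-through variant described above — every write goes straight to coupled/global memory and invalidates the block in every other cache — can be modeled in a few lines. This is a software sketch of the protocol's behavior, not the hardware logic the patent claims; the class and method names are invented for illustration.

```python
class WriteThroughCoherence:
    """Behavioral sketch of write-through invalidate coherency: writes
    propagate to memory immediately and invalidate all other cached
    copies of the block (hypothetical names)."""

    def __init__(self, n_cpus):
        self.memory = {}                               # address -> value
        self.caches = [dict() for _ in range(n_cpus)]  # per-CPU cache

    def read(self, cpu, addr):
        if addr not in self.caches[cpu]:               # miss: fill from memory
            self.caches[cpu][addr] = self.memory.get(addr, 0)
        return self.caches[cpu][addr]

    def write(self, cpu, addr, value):
        self.memory[addr] = value                      # write through to memory
        for i, cache in enumerate(self.caches):
            if i != cpu:
                cache.pop(addr, None)                  # invalidate other copies
        self.caches[cpu][addr] = value
```

A write-back sketch would instead let the owning cache keep dirty data until another processor's request forces the memory update and the invalidations.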
    • 5. Granted invention patent
    • Adaptive memory management method for coupled memory multiprocessor systems
    • Publication No.: US5269013A
    • Publication date: 1993-12-07
    • Application No.: US674077
    • Filing date: 1991-03-20
    • Inventors: Kenneth D. Abramson; David A. Orbits; H. Bruce Butts, Jr.
    • IPC: G06F12/08; G06F13/14
    • CPC: G06F12/08
    • Abstract: An adaptive memory management method for coupled memory multiprocessor computer systems is disclosed. In a coupled memory multiprocessor system all the data and stack pages of processes assigned to individual multiprocessors are, preferably, located in a memory region coupled to the assigned processor. When this becomes impossible, some data and stack pages are assigned to global memory or memory regions coupled to other processors. The present invention is a method of making certain that the most referenced data and stack pages are located in the coupled memory of the processor to which a specific process is assigned and lesser referenced pages are located in global memory or the coupled memory region of other processors. This result is accomplished by sampling the memory references made by the processors of the computer system and causing the most recently referenced pages in each coupled memory region to be maintained at the head of an active page list. References to remote data and stack pages are stored in a remote page hash table. Remote pages are pages stored in global memory or in coupled memory other than the coupled memory of the processor to which the process owning the pages is assigned. Any remote data and stack pages referenced more frequently than pages stored in a processor's coupled memory region are transferred to the processor's coupled memory region. If a processor's coupled memory region is tight, pages are transferred from the processor's coupled memory region to global memory or to the coupled memory region of another processor.
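The promotion rule at the heart of the abstract — remote pages referenced more often than the coldest local page get pulled into coupled memory, as space allows — can be sketched as a small selection function. The reference counts stand in for the sampled data in the patent's remote page hash table; names and the single-threshold simplification are my own.

```python
def promote_remote_pages(local_refs, remote_refs, coupled_free):
    """Sketch: pick remote data/stack pages referenced more often than the
    least-referenced local page, hottest first, up to the free space in the
    coupled memory region (hypothetical simplification of the policy)."""
    if not local_refs:
        return []
    floor = min(local_refs.values())          # coldest local page's count
    hot = sorted((p for p, n in remote_refs.items() if n > floor),
                 key=lambda p: -remote_refs[p])
    return hot[:coupled_free]                 # promote as space allows
```

When the region is tight, the complementary step would demote the coldest local pages to global memory or to another processor's coupled region.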
    • 8. Granted invention patent
    • Bus event monitor
    • Publication No.: US5426741A
    • Publication date: 1995-06-20
    • Application No.: US182531
    • Filing date: 1994-01-14
    • Inventors: H. Bruce Butts, Jr.; James N. Leahy; Richard B. Gillett, Jr.
    • IPC: G06F11/34; G06F11/00
    • CPC: G06F11/348; G06F11/349; G06F11/3409; G06F11/3495; G06F2201/86; G06F2201/88
    • Abstract: A monitor for monitoring the occurrence of events on the bus (15) of a multiprocessor computer system. The bus event monitor (BEM) includes a dedicated BEM processor (23) and an event counter subsystem (25). During each bus cycle, the BEM (21) captures and interprets the packet of data being transmitted on the bus (15). If the packet represents an event designated by the user to be of interest, a counter associated with the type of packet that was captured and interpreted is incremented by one. More specifically, a field programmable gate array (FPGA), configured by the user, defines the type of events to be counted. When an event to be counted occurs, the FPGA (33) produces a counter address that is based on the nature of the event, and causes an enable pulse to be generated. The address is applied to the active one of two event counter banks (39a, 39b) via an input crossbar switch (37a). The enable pulse enables the addressed event counter to be incremented by one. The inactive counter bank is available for reading by the dedicated BEM processor (23) while the counters of the active counter bank are being incremented. Preferably, each counter bank contains a large number of counters (e.g., 64K), each having a large capacity (e.g., 32 bit). As a result, a large number of different events can be counted over an indefinitely long period of time.
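The double-banked counter scheme above — one bank counts live events while the other is read out, then the banks swap — can be modeled in software as follows. A behavioral sketch only: the real design does the classification in a user-configured FPGA, and all names here are invented.

```python
class BusEventMonitor:
    """Sketch of double-banked event counting: `classify` maps a bus packet
    to a counter address (or None if uninteresting); the active bank counts
    while the inactive bank is read out, then they swap (hypothetical names)."""

    def __init__(self, n_counters, classify):
        self.banks = [[0] * n_counters, [0] * n_counters]
        self.active = 0
        self.classify = classify          # packet -> counter address or None

    def on_bus_cycle(self, packet):
        addr = self.classify(packet)
        if addr is not None:              # event of interest: count it
            self.banks[self.active][addr] += 1

    def swap_and_read(self):
        """Swap banks, then return (and clear) the now-inactive bank."""
        self.active ^= 1
        inactive = self.banks[self.active ^ 1]
        counts = inactive[:]
        inactive[:] = [0] * len(inactive)
        return counts
```

Because reading always targets the inactive bank, counting is never paused, which is what lets a large counter set run over an indefinitely long period.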