    • 1. Invention grant
    • Equal access to prevent gateword dominance in a multiprocessor write-into-cache environment
    • Publication number: US06970977B2
    • Publication date: 2005-11-29
    • Application number: US10403703
    • Filing date: 2003-03-31
    • Inventors: Wayne R. Buzby, Charles P. Ryan, Robert J. Baryla, William A. Shelly, Lowell D. McCulley
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/084
    • In a multiprocessor write-into-cache data processing system including: a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; at least one processor having a private cache coupled, respectively, to each shared cache; method and apparatus for preventing hogging of ownership of a gateword stored in the memory which governs access to common code/data shared by processes running in the processors by which a read copy of the gateword is obtained by a given processor by performing successive swap operations between the memory and the given processor's shared cache, and the given processor's shared cache and private cache. If the gateword is found to be OPEN, it is CLOSEd by the given processor, and successive swap operations are performed between the given processor's private cache and shared cache and shared cache and memory to write the gateword CLOSEd in memory such that the given processor obtains exclusive access to the governed common code/data. When the given processor completes use of the common code/data, it writes the gateword OPEN in its private cache, and successive swap operations are performed between the given processor's private cache and shared cache and shared cache and memory to write the gateword OPEN in memory.
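The protocol in this abstract can be pictured, very loosely, in software. The sketch below is a minimal analogy only: the patent's mechanism is hardware swap operations between memory, the shared caches and the private caches, which have no direct C equivalent, and every name here (gateword_t, gate_acquire, GATE_OPEN) is an illustrative assumption rather than anything taken from the patent.

```c
/* Minimal software analogy of the gateword OPEN/CLOSE protocol described in
 * the abstract above. The hardware cache swaps are stood in for by an atomic
 * compare-exchange on a single word; names are illustrative assumptions. */
#include <stdatomic.h>
#include <stdbool.h>

#define GATE_OPEN   0u
#define GATE_CLOSED 1u

typedef struct {
    _Atomic unsigned word;   /* the gateword guarding shared code/data */
} gateword_t;

/* Try to CLOSE an OPEN gateword; returns true if this processor now has
 * exclusive access to the governed common code/data. */
static bool gate_acquire(gateword_t *g)
{
    unsigned expected = GATE_OPEN;
    return atomic_compare_exchange_strong(&g->word, &expected, GATE_CLOSED);
}

/* Re-OPEN the gateword when the guarded code/data is no longer needed; in
 * the patent this value is written in the private cache and swapped back
 * out through the shared cache to memory. */
static void gate_release(gateword_t *g)
{
    atomic_store(&g->word, GATE_OPEN);
}
```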
    • 3. Invention grant
    • Gate close failure notification for fair gating in a nonuniform memory architecture data processing system
    • Publication number: US06480973B1
    • Publication date: 2002-11-12
    • Application number: US09409456
    • Filing date: 1999-09-30
    • Inventors: William A. Shelly, David A. Egolf, Wayne R. Buzby
    • IPC: G06F1100
    • CPC: G06F9/526
    • In a NUMA architecture, processors in the same CPU module with a processor opening a spin gate tend to have preferential access to a spin gate in memory when attempting to close the spin gate. This “unfair” memory access to the desired spin gate can result in starvation of processors from other CPU modules. This problem is solved by “balking” or delaying a specified period of time before attempting to close a spin gate whenever either one of the processors in the same CPU module just opened the desired spin gate, or when a processor in another CPU module is spinning trying to close the spin gate. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
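Loosely, the balking behaviour described in this abstract resembles a spin lock whose acquire path backs off when a remote CPU module is known to be waiting. The sketch below is a software analogy under that assumption; the per-module spinner counters, the delay loop and all names are invented for illustration and are not the patent's hardware signalling.

```c
/* Rough model of gate-close balking: before trying to close (acquire) a spin
 * gate, a processor delays if another CPU module has advertised a spinner on
 * the same gate, giving remote modules a fair chance at the gate. */
#include <stdatomic.h>

#define NUM_MODULES 4

typedef struct {
    _Atomic unsigned closed;                 /* 0 = open, 1 = closed        */
    _Atomic unsigned spinners[NUM_MODULES];  /* spinning CPUs per module    */
} spin_gate_t;

static void balk_delay(void) { for (volatile int i = 0; i < 1000; i++) ; }

static void gate_close(spin_gate_t *g, int my_module)
{
    /* Balk if any other module already has a processor spinning here. */
    for (int m = 0; m < NUM_MODULES; m++)
        if (m != my_module && atomic_load(&g->spinners[m]) != 0)
            balk_delay();

    atomic_fetch_add(&g->spinners[my_module], 1);   /* advertise the spin  */
    unsigned expected = 0;
    while (!atomic_compare_exchange_weak(&g->closed, &expected, 1))
        expected = 0;                               /* spin until closed   */
    atomic_fetch_sub(&g->spinners[my_module], 1);
}

static void gate_open(spin_gate_t *g) { atomic_store(&g->closed, 0); }
```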
    • 4. Invention grant
    • Gate close balking for fair gating in a nonuniform memory architecture data processing system
    • Publication number: US06484272B1
    • Publication date: 2002-11-19
    • Application number: US09409811
    • Filing date: 1999-09-30
    • Inventors: David A. Egolf, William A. Shelly, Wayne R. Buzby
    • IPC: G06F100
    • CPC: G06F9/526
    • In a NUMA architecture, processors in the same CPU module with a processor opening a spin gate tend to have preferential access to a spin gate in memory when attempting to close the spin gate. This “unfair” memory access to the desired spin gate can result in starvation of processors from other CPU modules. This problem is solved by “balking” or delaying a specified period of time before attempting to close a spin gate whenever either one of the processors in the same CPU module just opened the desired spin gate, or when a processor in another CPU module is spinning trying to close the spin gate. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
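The companion facet of the same idea, balking when a processor in the processor's own CPU module has only just opened the gate, can be sketched the same way. The model below is again an assumption-laden illustration (POSIX clock_gettime/nanosleep as a stand-in for a hardware delay, invented names), complementing the sketch after result 3 above.

```c
/* Rough model of the local balk condition: if this CPU module opened the gate
 * a moment ago, local processors back off briefly before re-closing it so
 * remote modules are not starved. POSIX timing calls are assumed. */
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

typedef struct {
    _Atomic unsigned  closed;     /* 0 = open, 1 = closed               */
    _Atomic long long opened_ns;  /* when this module last opened it    */
} local_gate_view_t;

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Returns true once the gate has been closed by this processor. */
static bool close_with_local_balk(local_gate_view_t *g, long long balk_ns)
{
    if (now_ns() - atomic_load(&g->opened_ns) < balk_ns) {
        struct timespec d = { 0, (long)balk_ns };  /* yield the window */
        nanosleep(&d, NULL);
    }
    unsigned expected = 0;
    while (!atomic_compare_exchange_weak(&g->closed, &expected, 1))
        expected = 0;
    return true;
}

static void open_gate(local_gate_view_t *g)
{
    atomic_store(&g->opened_ns, now_ns());
    atomic_store(&g->closed, 0);
}
```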
    • 5. Invention grant
    • Private cache miss and access management in a multiprocessor system with shared memory
    • Publication number: US5829029A
    • Publication date: 1998-10-27
    • Application number: US769682
    • Filing date: 1996-12-18
    • Inventors: William A. Shelly, Robert J. Baryla, Minoru Inoshita
    • IPC: G06F12/08
    • CPC: G06F12/0859; G06F12/0811; G06F12/0831; G06F12/084
    • A computer system including a group of CPUs, each having a private cache which communicates with its CPU to receive requests for information blocks and for servicing such requests includes a CPU bus coupled to all the private caches and to a shared cache. Each private cache includes a cache memory and a cache controller having: a processor directory for identifying information blocks resident in the cache memory, logic for identifying cache misses on requests from the CPU, a cache miss output buffer for storing the identifications of a missed block and a block to be moved out of cache memory to make room for the requested block and for selectively sending the identifications onto the CPU bus, a cache miss input buffer stack for storing the identifications of all recently missed blocks and blocks to be swapped from all the CPUs in the group, a comparator for comparing the identifications in the cache miss output buffer stack with the identifications in the cache miss input buffer stack and control logic, responsive to the first comparator sensing a compare (indicating a request by another CPU for the block being swapped), for inhibiting the broadcast of the swap requirement onto the CPU bus and converting the swap operation to a "siphon" operation to service the request of the other CPU.
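As a rough illustration of the decision this abstract describes, the sketch below models the cache miss output and input buffers as plain structs and shows the comparator check that suppresses the swap broadcast and converts the operation to a siphon when another CPU has requested the block being evicted. Sizes, field names and the enum are assumptions, not the patent's hardware.

```c
/* Illustrative data structures for the swap-to-siphon decision. */
#include <stddef.h>

#define MISS_IN_DEPTH 8

typedef unsigned block_id_t;              /* identification of a cache block  */

typedef struct {
    block_id_t missed_block;              /* block this CPU wants             */
    block_id_t swap_block;                /* block being evicted to make room */
} miss_out_entry_t;

typedef struct {
    block_id_t requested[MISS_IN_DEPTH];  /* recent misses from all CPUs      */
    size_t     count;
} miss_in_buffer_t;

typedef enum { BROADCAST_SWAP, CONVERT_TO_SIPHON } swap_action_t;

/* The comparator: if a recent request from another CPU matches the block we
 * are about to swap out, service it by siphon instead of broadcasting. */
static swap_action_t decide_swap(const miss_out_entry_t *out,
                                 const miss_in_buffer_t *in)
{
    for (size_t i = 0; i < in->count; i++)
        if (in->requested[i] == out->swap_block)
            return CONVERT_TO_SIPHON;
    return BROADCAST_SWAP;
}
```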
    • 7. Invention grant
    • Multiprocessor computer system incorporating method and apparatus for dynamically assigning ownership of changeable data
    • Publication number: US5963973A
    • Publication date: 1999-10-05
    • Application number: US796309
    • Filing date: 1997-02-07
    • Inventors: Elisabeth Vanhove, Minoru Inoshita, William A. Shelly, Robert J. Baryla
    • IPC: G06F12/08; G06F12/00; G06F13/00
    • CPC: G06F12/0833
    • A computer system including a group of CPUs, each having a private cache which communicates with its CPU to receive requests for information blocks and for servicing such requests includes a CPU bus coupled to all the private caches and to a shared cache. Each private cache includes a cache memory and a cache controller having: a processor directory for storing identification words identifying information blocks resident in the cache memory and including a status field indicative of the write permission authority the local CPU has on the block, an output buffer for storing the identification words of a block resident in the cache memory for which the CPU does not have and seeks write permission and for selectively sending identification words and an invalidate command onto the CPU bus, an input buffer for storing the identification words of all recent write permission requests in the group, a comparator for comparing the identification words in the output buffer with the identifications in the input buffer and control logic, responsive to the comparator sensing a compare condition (typically indicating a request by another CPU for write permission on the same block for which the local CPU has also requested write permission), for aborting the write permission request of the local CPU and establishing a retry process.
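The collision check in this abstract can be sketched the same way: a pending write-permission request is compared against the write-permission requests recently seen from the rest of the group, and a match means the local request is aborted and retried. The structs and names below are illustrative assumptions only.

```c
/* Illustrative check for a write-permission collision. */
#include <stdbool.h>
#include <stddef.h>

#define RECENT_REQS 8

typedef unsigned block_id_t;

typedef struct {
    block_id_t pending_block;          /* block we want write permission on */
    bool       pending_valid;
} wp_output_buffer_t;

typedef struct {
    block_id_t blocks[RECENT_REQS];    /* recent write-permission requests  */
    size_t     count;                  /* seen on the CPU bus from the group */
} wp_input_buffer_t;

/* Returns true if the local request collides and must be aborted and retried. */
static bool write_permission_collides(const wp_output_buffer_t *out,
                                      const wp_input_buffer_t *in)
{
    if (!out->pending_valid)
        return false;
    for (size_t i = 0; i < in->count; i++)
        if (in->blocks[i] == out->pending_block)
            return true;               /* abort, then retry the request */
    return false;
}
```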
    • 8. Invention grant
    • Apparatus for synchronizing multiple processors in a data processing system
    • Publication number: US06223228B1
    • Publication date: 2001-04-24
    • Application number: US09156377
    • Filing date: 1998-09-17
    • Inventors: Charles P. Ryan, William A. Shelly, Ronald W. Yoder
    • IPC: G06F112
    • CPC: G06F9/30087; G06F1/04; G06F11/2236; G06F11/3466
    • Two instructions are provided to synchronize multiple processors (92) in a data processing system (80). A Transmit Sync instruction (TSYNC) transmits a synchronize processor interrupt (276) to all of the active processors (92) in the system (80). Processors (92) wait for receipt of the synchronize signal (278) by executing a Wait for Sync (WSYNC) instruction. Each of the processors waiting for such a signal (278) is activated at the next clock cycle after receipt of the interrupt signal (278). An optional timeout value is provided to protect against hanging a waiting processor (92) that misses the interrupt (278). Whenever the WSYNC instruction is activated by receipt of the interrupt (278), a trace is started to trace a fixed number of events to an internal Trace Cache (58).
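A very rough software analogy of the TSYNC/WSYNC pair is sketched below: one processor bumps a shared generation counter ("transmit sync") and each waiting processor spins on it, giving up after an optional bound so a missed event cannot hang it, which mirrors the timeout described above. The counter, the spin bound and all names are assumptions; the patent's mechanism is an interprocessor interrupt, not a shared counter.

```c
/* Illustrative stand-in for TSYNC/WSYNC using an atomic generation counter. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    _Atomic unsigned generation;       /* bumped once per "TSYNC"            */
} sync_channel_t;

/* TSYNC analogue: signal every processor watching this channel. */
static void tsync(sync_channel_t *ch)
{
    atomic_fetch_add(&ch->generation, 1);
}

/* WSYNC analogue: wait for the next sync event; give up after max_spins
 * iterations (the "optional timeout", 0 = wait forever) and report whether
 * the event was actually seen before resuming. */
static bool wsync(sync_channel_t *ch, unsigned long max_spins)
{
    unsigned start = atomic_load(&ch->generation);
    for (unsigned long i = 0; max_spins == 0 || i < max_spins; i++)
        if (atomic_load(&ch->generation) != start)
            return true;               /* synchronized                       */
    return false;                      /* timed out without seeing the sync  */
}
```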
    • 9. Invention grant
    • Cache unit with transit block buffer apparatus
    • Publication number: US4217640A
    • Publication date: 1980-08-12
    • Application number: US968522
    • Filing date: 1978-12-11
    • Inventors: Marion G. Porter, Charles P. Ryan, William A. Shelly
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/0855
    • A data processing system comprises a data processing unit coupled to a cache unit which couples to a main store. The cache unit includes a cache store organized into a plurality of levels, each for storing a number of blocks of information in the form of data and instructions. Directories associated with the cache store contain addresses and level control information for indicating which blocks of information reside in the cache store. The cache unit further includes control apparatus and a transit block buffer comprising a number of sections each having a plurality of locations for storing read commands and transit block addresses associated therewith. A corresponding number of valid bit storage elements are included, each of which is set to a binary ONE state when a read command and the associated transit block address are loaded into a corresponding one of the buffer locations. Comparison circuits, coupled to the transit block buffer, compare the transit block address of each outstanding read command stored in the transit block buffer section with the address of each read command or write command received from the processing unit. When there is a conflict, the comparison circuits generate an output signal which conditions the control apparatus to hold or stop further processing of the command by the cache unit and the operation of the processing unit. Holding lasts until the valid bit storage element of the location storing the outstanding read command is reset to a binary ZERO indicating that execution of the read command is completed.
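The conflict check this abstract describes can be pictured as a small table of outstanding read commands with valid bits, as sketched below. An incoming command whose block address matches a still-valid entry is held until the entry's valid bit is cleared on completion. The structure, its size and the function names are illustrative assumptions.

```c
/* Illustrative transit block buffer and its conflict (hold) check. */
#include <stdbool.h>
#include <stddef.h>

#define TRANSIT_SLOTS 4

typedef struct {
    unsigned transit_addr[TRANSIT_SLOTS]; /* block address of each outstanding read */
    bool     valid[TRANSIT_SLOTS];        /* set on issue, cleared on completion    */
} transit_block_buffer_t;

/* The comparison circuits: should this incoming read/write command be held? */
static bool must_hold(const transit_block_buffer_t *tbb, unsigned cmd_addr)
{
    for (size_t i = 0; i < TRANSIT_SLOTS; i++)
        if (tbb->valid[i] && tbb->transit_addr[i] == cmd_addr)
            return true;      /* conflict with an outstanding read: hold */
    return false;
}

/* Completion of the outstanding read resets the valid bit, releasing holds. */
static void complete_read(transit_block_buffer_t *tbb, size_t slot)
{
    tbb->valid[slot] = false;
}
```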