    • 1. Granted invention patent
    • Preserving dump capability after a fault-on-fault or related type failure in a fault tolerant computer system
    • Publication No.: US06779132B2
    • Grant Date: 2004-08-17
    • Application No.: US09943769
    • Filing Date: 2001-08-31
    • Inventors: Sidney L. Andress; Wayne R. Buzby
    • IPC: G06F 11/00
    • CPC: G06F 11/0793; G06F 11/0778
    • When a fault-on-fault condition arises in a data processing system which follows a backup fault procedure in the fault handling process, control is passed to dedicated firmware. Fault flags are reset and information vital to maintaining operating system control is sent to a reserved memory (which can be written to in limited circumstances) under firmware control. Control is then transferred to an Intercept process resident in the reserved memory which attempts to build a stable environment for the operating system to dump the system memory. If possible, a dump is taken, and a normal operating system restart is carried out. If not possible, a message with the vital fault information is issued, and a full manual restart must be taken. Even in the latter case, the fault information is available to help in determining the cause of the fault-on-fault.
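
A minimal C sketch of the flow described above may help: vital information is saved to a reserved region, the fault flags are reset, and an intercept routine either dumps and restarts or reports the saved information. All identifiers below are invented for illustration; the patent text defines no source code.

    #include <stdio.h>
    #include <stdbool.h>

    /* "Vital fault information" kept in a reserved memory region that normal
     * code and I/O are not allowed to overwrite (a static struct stands in
     * for that region here). */
    struct vital_info {
        unsigned      fault_flags;
        unsigned long instruction_counter;
    };

    static struct vital_info reserved_memory;
    static unsigned fault_flags;          /* simplified processor fault flags */

    /* Intercept process resident in reserved memory: try to build a stable
     * environment and dump system memory; returns false if impossible. */
    static bool intercept_and_dump(void)
    {
        printf("dumping system memory (saved flags 0x%x)\n",
               reserved_memory.fault_flags);
        return true;
    }

    /* Firmware entry point for a fault-on-fault: save vital information to
     * reserved memory, reset the fault flags, then hand control to the
     * intercept process. */
    void handle_fault_on_fault(unsigned long ic)
    {
        reserved_memory.fault_flags         = fault_flags;
        reserved_memory.instruction_counter = ic;
        fault_flags = 0;

        if (intercept_and_dump()) {
            puts("dump taken; normal operating-system restart");
        } else {
            printf("fault-on-fault at IC=0x%lx, flags=0x%x: manual restart required\n",
                   reserved_memory.instruction_counter, reserved_memory.fault_flags);
        }
    }

    int main(void)
    {
        fault_flags = 0x5u;
        handle_fault_on_fault(0x1000u);
        return 0;
    }
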
    • 2. Granted invention patent
    • Fault vector pointer table
    • Publication No.: US06687845B2
    • Grant Date: 2004-02-03
    • Application No.: US09742456
    • Filing Date: 2000-12-20
    • Inventors: Wayne R. Buzby; Sidney L. Andress
    • IPC: G06F 13/24
    • CPC: G06F 11/0724; G06F 9/4812; G06F 11/0715; G06F 11/0793; G06F 2209/481
    • A fault number is utilized by microcode fault handling to index into a fault array pointer table containing a plurality of pointers to entry descriptors describing fault handling routines. The pointer resulting from the indexing is utilized to retrieve an entry descriptor. The entry descriptor is verified and if valid, is utilized to set up the environment for the appropriate fault handling routine and to enter such. The fault array pointer table is located in a reserved memory that cannot be overwritten by I/O. During the boot process, the fault array pointer table entries, along with a fault-on-fault pointer, are updated to point at entry descriptors stored in the reserved memory. Additionally, the fault-on-fault entry descriptor rebuilds the processor environment, if necessary, from information in reserved memory.
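
The table-driven dispatch described above can be sketched in C as follows; the descriptor layout, the validity tag, and the handler names are hypothetical stand-ins, not taken from the patent.

    #include <stdio.h>
    #include <stddef.h>

    #define FAULT_MAGIC 0xFADEu     /* hypothetical validity tag */
    #define NUM_FAULTS  8

    /* An "entry descriptor" describes a fault handling routine: here just a
     * validity tag and the routine itself. */
    struct entry_descriptor {
        unsigned magic;
        void (*handler)(int fault_number);
    };

    static void overflow_handler(int f)   { printf("handling fault %d (overflow)\n", f); }
    static void page_fault_handler(int f) { printf("handling fault %d (page fault)\n", f); }

    /* Descriptors and the pointer table live in "reserved memory" that I/O
     * cannot overwrite; plain statics stand in for that here. */
    static struct entry_descriptor overflow_desc  = { FAULT_MAGIC, overflow_handler };
    static struct entry_descriptor pagefault_desc = { FAULT_MAGIC, page_fault_handler };

    static struct entry_descriptor *fault_array_pointer_table[NUM_FAULTS] = {
        [1] = &overflow_desc,
        [3] = &pagefault_desc,
    };

    /* Microcode-style dispatch: use the fault number as an index, verify the
     * entry descriptor, then enter the routine it describes. */
    void dispatch_fault(int fault_number)
    {
        struct entry_descriptor *d = NULL;

        if (fault_number >= 0 && fault_number < NUM_FAULTS)
            d = fault_array_pointer_table[fault_number];

        if (d == NULL || d->magic != FAULT_MAGIC) {
            printf("fault %d: invalid entry descriptor, escalating\n", fault_number);
            return;
        }
        d->handler(fault_number);   /* environment setup omitted in this sketch */
    }

    int main(void)
    {
        dispatch_fault(3);
        dispatch_fault(5);          /* no descriptor installed: rejected */
        return 0;
    }
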
    • 3. Granted invention patent
    • Fault handling in a data processing system utilizing a fault vector pointer table
    • Publication No.: US06697959B2
    • Grant Date: 2004-02-24
    • Application No.: US09742457
    • Filing Date: 2000-12-20
    • Inventors: Sidney L. Andress; Wayne R. Buzby
    • IPC: G06F 11/07
    • CPC: G06F 11/0712; G06F 9/4812; G06F 11/0715; G06F 11/0724; G06F 11/0793; G06F 2209/481
    • A fault number is utilized by microcode fault handling to index into a fault array pointer table containing a plurality of pointers to entry descriptors describing fault handling routines. The pointer resulting from the indexing is utilized to retrieve an entry descriptor. The entry descriptor is verified and if valid, is utilized to set up the environment for the appropriate fault handling routine and to enter such. The fault array pointer table is located in a reserved memory that cannot be overwritten by I/O. During the boot process, the fault array pointer table entries, along with a fault-on-fault pointer, are updated to point at entry descriptors stored in the reserved memory. Additionally, the fault-on-fault entry descriptor rebuilds the processor environment, if necessary, from information in reserved memory.
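
Since this abstract is shared with the previous patent, the sketch below illustrates only the boot-time step it mentions: pointing the table entries and the fault-on-fault pointer at descriptors held in reserved memory. Names are again illustrative only.

    #include <stdio.h>

    #define NUM_FAULTS 8

    struct entry_descriptor { void (*handler)(void); };

    static void default_handler(void)        { puts("default fault handler"); }
    static void fault_on_fault_handler(void) { puts("rebuild environment from reserved memory"); }

    /* Reserved, I/O-protected memory holding the descriptors, the pointer
     * table, and the separate fault-on-fault pointer (all names hypothetical). */
    static struct entry_descriptor reserved_descriptors[NUM_FAULTS];
    static struct entry_descriptor reserved_fof_descriptor;
    static struct entry_descriptor *fault_array_pointer_table[NUM_FAULTS];
    static struct entry_descriptor *fault_on_fault_pointer;

    /* Boot-time step described in the abstract: point every table entry, and
     * the fault-on-fault pointer, at descriptors kept in reserved memory. */
    void init_fault_tables_at_boot(void)
    {
        for (int i = 0; i < NUM_FAULTS; i++) {
            reserved_descriptors[i].handler = default_handler;
            fault_array_pointer_table[i] = &reserved_descriptors[i];
        }
        reserved_fof_descriptor.handler = fault_on_fault_handler;
        fault_on_fault_pointer = &reserved_fof_descriptor;
    }

    int main(void)
    {
        init_fault_tables_at_boot();
        fault_array_pointer_table[2]->handler();
        fault_on_fault_pointer->handler();
        return 0;
    }
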
    • 4. Granted invention patent
    • Gateword acquisition in a multiprocessor write-into-cache environment
    • Publication No.: US06760811B2
    • Grant Date: 2004-07-06
    • Application No.: US10219644
    • Filing Date: 2002-08-15
    • Inventors: Wayne R. Buzby; Charles P. Ryan
    • IPC: G06F 12/00
    • CPC: G06F 12/0811; G06F 12/084
    • In a multiprocessor data processing system including: a memory, first and second shared caches, a system bus coupling the memory and the shared caches, first, second, third and fourth processors having, respectively, first, second, third and fourth private caches with the first and second private caches being coupled to the first shared cache, and the third and fourth private caches being coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority is established for a processor to next acquire ownership of the gate control word by: broadcasting a “set gate control flag” command to all processors such that setting the gate control flags establishes delays during which ownership of the gate control word will not be requested by another processor for predetermined periods established in each processor. Optionally, the processor so acquiring ownership broadcasts a “reset gate control flag” command to all processors when it has acquired ownership of the gate control word.
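
A rough C model of the flag-and-delay scheme described above follows; the tick-based delays and the data layout are simplifying assumptions, not the patent's hardware implementation.

    #include <stdio.h>
    #include <stdbool.h>

    #define NCPU 4

    /* One gate control flag per processor, plus the predetermined delay that
     * the flag imposes before that processor may request gateword ownership. */
    struct cpu {
        bool gate_flag;
        int  delay_remaining;   /* "ticks" left before it may ask for the gateword */
    };

    static struct cpu cpus[NCPU];
    static int gateword_owner = -1;         /* index of the CPU owning the gateword */

    /* Broadcast "set gate control flag": every other CPU starts its delay. */
    static void broadcast_set_gate_flag(int requester, int delay_ticks)
    {
        for (int i = 0; i < NCPU; i++) {
            if (i != requester) {
                cpus[i].gate_flag = true;
                cpus[i].delay_remaining = delay_ticks;
            }
        }
    }

    /* Optional broadcast "reset gate control flag" once ownership is acquired. */
    static void broadcast_reset_gate_flag(void)
    {
        for (int i = 0; i < NCPU; i++)
            cpus[i].gate_flag = false;
    }

    /* A CPU asks for the gateword; it must not do so while its flag delay runs. */
    static bool try_acquire_gateword(int cpu)
    {
        if (cpus[cpu].gate_flag && cpus[cpu].delay_remaining > 0) {
            cpus[cpu].delay_remaining--;            /* still backing off */
            return false;
        }
        if (gateword_owner == -1) {
            gateword_owner = cpu;
            broadcast_reset_gate_flag();
            return true;
        }
        return false;
    }

    int main(void)
    {
        broadcast_set_gate_flag(0, 3);              /* CPU 0 claims next-in-line priority */
        printf("CPU 2 acquired? %d\n", try_acquire_gateword(2));  /* 0: held off by its flag */
        printf("CPU 0 acquired? %d\n", try_acquire_gateword(0));  /* 1: no flag set, succeeds */
        return 0;
    }
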
    • 5. Granted invention patent
    • Multiprocessor write-into-cache system incorporating efficient access to a plurality of gatewords
    • Publication No.: US06973539B2
    • Grant Date: 2005-12-06
    • Application No.: US10426409
    • Filing Date: 2003-04-30
    • Inventors: Charles P. Ryan; Wayne R. Buzby
    • IPC: G06F 12/08; G06F 12/00
    • CPC: G06F 12/0811
    • A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory which governs access to a first common code/data set shared by processes running in the processors by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.
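
The delay-truncation arithmetic described above amounts to subtracting the elapsed second delay from the remaining first delay; a tiny C sketch with invented field names follows.

    #include <stdio.h>

    /* Hypothetical per-processor delay bookkeeping for the scheme described in
     * the abstract: a "first" anti-hogging delay tied to gateword G1, and a
     * "second" delay started when the processor turns out to want some other
     * gateword.  Truncation = first_delay - elapsed(second_delay). */
    struct delay_state {
        int first_delay_remaining;     /* imposed on all CPUs for gateword G1        */
        int second_delay_elapsed;      /* runs while the CPU seeks another gateword  */
    };

    /* Called when the processor requests a gateword other than G1: credit the
     * time already spent in the second delay against the first delay. */
    static void truncate_first_delay(struct delay_state *d)
    {
        d->first_delay_remaining -= d->second_delay_elapsed;
        if (d->first_delay_remaining < 0)
            d->first_delay_remaining = 0;
    }

    int main(void)
    {
        struct delay_state d = { .first_delay_remaining = 10, .second_delay_elapsed = 7 };
        truncate_first_delay(&d);
        printf("first delay after truncation: %d ticks\n", d.first_delay_remaining);  /* 3 */
        return 0;
    }
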
    • 6. Granted invention patent
    • Equal access to prevent gateword dominance in a multiprocessor write-into-cache environment
    • Publication No.: US06970977B2
    • Grant Date: 2005-11-29
    • Application No.: US10403703
    • Filing Date: 2003-03-31
    • Inventors: Wayne R. Buzby; Charles P. Ryan; Robert J. Baryla; William A. Shelly; Lowell D. McCulley
    • IPC: G06F 12/08; G06F 12/00
    • CPC: G06F 12/084
    • In a multiprocessor write-into-cache data processing system including: a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; at least one processor having a private cache coupled, respectively, to each shared cache; method and apparatus for preventing hogging of ownership of a gateword stored in the memory which governs access to common code/data shared by processes running in the processors by which a read copy of the gateword is obtained by a given processor by performing successive swap operations between the memory and the given processor's shared cache, and the given processor's shared cache and private cache. If the gateword is found to be OPEN, it is CLOSEd by the given processor, and successive swap operations are performed between the given processor's private cache and shared cache and shared cache and memory to write the gateword CLOSEd in memory such that the given processor obtains exclusive access to the governed common code/data. When the given processor completes use of the common code/data, it writes the gateword OPEN in its private cache, and successive swap operations are performed between the given processor's private cache and shared cache and shared cache and memory to write the gateword OPEN in memory.
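
The successive-swap acquire/release sequence described above can be pictured with a toy three-level model in C; the single-processor, single-gateword view below is a deliberate simplification.

    #include <stdio.h>
    #include <stdbool.h>

    #define GATE_OPEN   0
    #define GATE_CLOSED 1

    /* Toy model of the write-into-cache hierarchy for one gateword:
     * main memory, one shared cache, and one private cache per processor. */
    struct hierarchy {
        int memory;          /* gateword value held in main memory     */
        int shared_cache;    /* copy in the processor's shared cache   */
        int private_cache;   /* copy in the processor's private cache  */
    };

    /* "Swap" steps moving the gateword toward the processor (read path)... */
    static void swap_down(struct hierarchy *h)
    {
        h->shared_cache  = h->memory;
        h->private_cache = h->shared_cache;
    }

    /* ...and back toward memory (write path), so memory always holds the
     * current value instead of it being siloed in one private cache. */
    static void swap_up(struct hierarchy *h)
    {
        h->shared_cache = h->private_cache;
        h->memory       = h->shared_cache;
    }

    /* Attempt to get exclusive access to the code/data guarded by the gateword. */
    static bool acquire(struct hierarchy *h)
    {
        swap_down(h);                          /* obtain a read copy               */
        if (h->private_cache != GATE_OPEN)
            return false;                      /* gateword CLOSEd: retry later     */
        h->private_cache = GATE_CLOSED;        /* CLOSE it...                      */
        swap_up(h);                            /* ...and write it CLOSEd in memory */
        return true;
    }

    static void release(struct hierarchy *h)
    {
        h->private_cache = GATE_OPEN;          /* write the gateword OPEN          */
        swap_up(h);                            /* propagate OPEN back to memory    */
    }

    int main(void)
    {
        struct hierarchy h = { GATE_OPEN, GATE_OPEN, GATE_OPEN };
        if (acquire(&h)) {
            printf("gateword in memory: %d (CLOSED)\n", h.memory);
            release(&h);
            printf("gateword in memory: %d (OPEN)\n", h.memory);
        }
        return 0;
    }
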
    • 7. Granted invention patent
    • Balanced access to prevent gateword dominance in a multiprocessor write-into-cache environment
    • Publication No.: US06868483B2
    • Grant Date: 2005-03-15
    • Application No.: US10256289
    • Filing Date: 2002-09-26
    • Inventors: Wayne R. Buzby; Charles P. Ryan
    • IPC: G06F 9/46; G06F 12/00; G06F 12/08
    • CPC: G06F 9/52; G06F 12/0815
    • In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches with the first and second private caches being coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches being coupled to the second shared cache and to one another via a second internal bus; method and apparatus for preventing hogging of ownership of a gateword stored in the main memory and which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command, establishes ownership of the gateword in one processor and prevents other processors from accessing the code/data guarded until the one processor has completed its use. A gateword OPEN command then broadcasts a gateword interrupt to set the flag in each processor, delays long enough to ensure that the flags have all been set, writes an OPEN value into the gateword and flushes the gateword to main memory. A gateword access command executed by a requesting processor checks its gate control flag, and if set, starts a fixed time delay after which normal execution continues.
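
A compact C sketch of the OPEN-side broadcast and the flag-checking access command described above follows; whether the flag is cleared by the access command is an assumption made here for simplicity.

    #include <stdio.h>
    #include <stdbool.h>

    #define NCPU 4
    #define FIXED_DELAY_TICKS 5

    static bool gate_control_flag[NCPU];   /* one flag per processor            */
    static int  gateword_in_memory = 0;    /* 0 = OPEN, 1 = CLOSED (simplified) */

    /* Hypothetical rendering of the gateword OPEN command sequence from the
     * abstract: interrupt every CPU to set its flag, wait until the flags are
     * surely set, write the gateword OPEN and flush it to main memory. */
    static void gateword_open(void)
    {
        for (int i = 0; i < NCPU; i++)     /* broadcast gateword interrupt */
            gate_control_flag[i] = true;
        /* delay long enough for all flags to be set (modeled as a no-op here) */
        gateword_in_memory = 0;            /* write OPEN and flush to memory   */
    }

    /* Gateword access command: if this CPU's flag is set, clear it and back
     * off for a fixed delay before normal execution continues, so CPUs that
     * were spinning get an equal chance at the newly opened gateword. */
    static int gateword_access(int cpu)
    {
        if (gate_control_flag[cpu]) {
            gate_control_flag[cpu] = false;
            return FIXED_DELAY_TICKS;      /* caller waits this long first */
        }
        return 0;
    }

    int main(void)
    {
        gateword_open();
        printf("CPU 1 must wait %d ticks before touching the gateword\n",
               gateword_access(1));
        printf("gateword value in memory: %d (OPEN)\n", gateword_in_memory);
        return 0;
    }
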
    • 8. Granted invention patent
    • Gate close balking for fair gating in a nonuniform memory architecture data processing system
    • Publication No.: US06484272B1
    • Grant Date: 2002-11-19
    • Application No.: US09409811
    • Filing Date: 1999-09-30
    • Inventors: David A. Egolf; William A. Shelly; Wayne R. Buzby
    • IPC: G06F 1/00
    • CPC: G06F 9/526
    • In a NUMA architecture, processors in the same CPU module with a processor opening a spin gate tend to have preferential access to a spin gate in memory when attempting to close the spin gate. This “unfair” memory access to the desired spin gate can result in starvation of processors from other CPU modules. This problem is solved by “balking” or delaying a specified period of time before attempting to close a spin gate whenever either one of the processors in the same CPU module just opened the desired spin gate, or when a processor in another CPU module is spinning trying to close the spin gate. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
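
The balking rule described above reduces to a small predicate: delay if a sibling processor in the same CPU module just opened the gate, or if a processor in another module is reported to be spinning on it. A hypothetical C rendering:

    #include <stdio.h>
    #include <stdbool.h>

    #define NMODULES 2

    /* Hypothetical per-gate state for the balking decision described in the
     * abstract: which module most recently opened the gate, and whether any
     * processor in each module is currently spinning trying to close it. */
    struct spin_gate {
        bool closed;
        int  last_opened_by_module;            /* -1 if unknown */
        bool remote_spinning[NMODULES];        /* spin notifications per module */
    };

    /* Decide whether a processor in `module` should balk (delay) before it
     * attempts to close the gate. */
    static bool should_balk(const struct spin_gate *g, int module)
    {
        if (g->last_opened_by_module == module)
            return true;     /* a sibling in this module just opened the gate */
        for (int m = 0; m < NMODULES; m++)
            if (m != module && g->remote_spinning[m])
                return true; /* someone in another module is spinning on it   */
        return false;
    }

    /* Close attempt: balk first when fairness calls for it, then try the gate. */
    static bool try_close(struct spin_gate *g, int module)
    {
        if (should_balk(g, module)) {
            /* real code would delay a specified period here before retrying */
            return false;
        }
        if (!g->closed) {
            g->closed = true;
            return true;
        }
        return false;
    }

    int main(void)
    {
        struct spin_gate g = { .closed = false, .last_opened_by_module = 0,
                               .remote_spinning = { false, true } };
        printf("module 0 closes immediately? %d\n", try_close(&g, 0));  /* 0: balks */
        printf("module 1 closes immediately? %d\n", try_close(&g, 1));  /* 1: no reason to balk */
        return 0;
    }
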
    • 9. Granted invention patent
    • Gate close failure notification for fair gating in a nonuniform memory architecture data processing system
    • Publication No.: US06480973B1
    • Grant Date: 2002-11-12
    • Application No.: US09409456
    • Filing Date: 1999-09-30
    • Inventors: William A. Shelly; David A. Egolf; Wayne R. Buzby
    • IPC: G06F 11/00
    • CPC: G06F 9/526
    • In a NUMA architecture, processors in the same CPU module with a processor opening a spin gate tend to have preferential access to a spin gate in memory when attempting to close the spin gate. This “unfair” memory access to the desired spin gate can result in starvation of processors from other CPU modules. This problem is solved by “balking” or delaying a specified period of time before attempting to close a spin gate whenever either one of the processors in the same CPU module just opened the desired spin gate, or when a processor in another CPU module is spinning trying to close the spin gate. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
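
This patent shares its abstract with the previous one, so the sketch below focuses on the notification side: how a processor might decide it is spinning and tell the other CPU modules. The threshold and names are invented for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    #define NMODULES 2
    #define SPIN_THRESHOLD 3   /* failed close attempts before we call it spinning */

    static bool spinning_notice[NMODULES];   /* what other modules have been told */

    /* Hypothetical close-failure notification: after enough failed attempts to
     * close a gate, a processor treats itself as spinning and transmits that
     * fact to the other CPU modules so their processors will balk. */
    static void record_close_failure(int my_module, int *failed_attempts)
    {
        (*failed_attempts)++;
        if (*failed_attempts >= SPIN_THRESHOLD && !spinning_notice[my_module]) {
            spinning_notice[my_module] = true;
            printf("module %d: notifying other modules that we are spinning\n",
                   my_module);
        }
    }

    /* Called when the gate is finally closed: withdraw the notification. */
    static void record_close_success(int my_module, int *failed_attempts)
    {
        *failed_attempts = 0;
        spinning_notice[my_module] = false;
    }

    int main(void)
    {
        int failures = 0;
        for (int i = 0; i < 4; i++)              /* four failed attempts in a row */
            record_close_failure(1, &failures);
        record_close_success(1, &failures);
        printf("module 1 still flagged as spinning? %d\n", spinning_notice[1]);
        return 0;
    }
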
    • 10. Granted invention patent
    • Fast domain switch and error recovery in a secure CPU architecture
    • Publication No.: US6014757A
    • Grant Date: 2000-01-11
    • Application No.: US994476
    • Filing Date: 1997-12-19
    • Inventors: Ronald W. Yoder; Russell W. Guenthner; Wayne R. Buzby
    • IPC: G06F 9/38; G06F 11/14; G06F 11/00
    • CPC: G06F 11/1405; G06F 9/3863; G06F 11/1407
    • In order to gather, store temporarily and efficiently deliver safestore information in a CPU having data manipulation circuitry including a register bank, first and second serially oriented safestore buffers are employed. At suitable times during the processing of information, a copy of the instantaneous contents of the register bank is transferred into the first safestore buffer. After a brief delay, a copy of the first safestore buffer is transferred into the second safestore buffer. If a call for a domain change (which might include a process change or a fault) is sensed, a safestore frame is sent to cache, and the first safestore buffer is loaded from the second safestore buffer rather than from the register bank. Later, during a climb operation, if a restart of the interrupted process is undertaken and the restoration of the register bank is directed to be taken from the first safestore buffer, this source, rather than the safestore frame stored in cache, is employed to obtain a corresponding increase in the rate of restart. In one embodiment, the transfer of information between the register bank and the safestore buffers is carried out on a bit-by-bit basis to achieve additional flexibility of operation and also to conserve integrated circuit space.
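
The two-buffer safestore pipeline described above can be modeled with plain struct copies in C; the buffer names and the toy register bank below are illustrative only.

    #include <stdio.h>

    #define NREGS 8

    /* Toy model of the two serially connected safestore buffers from the
     * abstract; all names here are invented, not taken from the patent. */
    struct regbank { unsigned long r[NREGS]; };

    static struct regbank registers;        /* live register bank               */
    static struct regbank safestore1;       /* first  safestore buffer          */
    static struct regbank safestore2;       /* second safestore buffer          */
    static struct regbank cache_frame;      /* safestore frame written to cache */

    /* Periodic snapshot: registers -> first buffer, then (after a brief delay
     * in the real hardware) first buffer -> second buffer. */
    static void periodic_snapshot(void)
    {
        safestore1 = registers;
        safestore2 = safestore1;
    }

    /* Domain change (process switch or fault): push a safestore frame to the
     * cache and reload the first buffer from the second, not from the registers. */
    static void domain_change(void)
    {
        cache_frame = safestore1;
        safestore1  = safestore2;
    }

    /* Fast restart path of a climb operation: restore the register bank from
     * the first safestore buffer instead of the slower cached frame. */
    static void fast_restart(void)
    {
        registers = safestore1;
    }

    int main(void)
    {
        registers.r[0] = 42;
        periodic_snapshot();
        registers.r[0] = 99;                 /* work continues, registers change */
        domain_change();
        fast_restart();
        printf("r0 after fast restart: %lu\n", registers.r[0]);   /* 42 */
        return 0;
    }
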