    • 2. Invention grant
    • Title: Method of and apparatus for store-in second level cache flush
    • Publication number: US6122711A
    • Publication date: 2000-09-19
    • Application number: US779472
    • Filing date: 1997-01-07
    • Inventors: Donald W. Mackenthun, Mitchell A. Bauman, Donald C. Englin
    • IPC: G06F12/08, G06F12/12
    • CPC: G06F12/0804, G06F12/0811, G06F12/0891
    • Abstract: Flush apparatus for a dual multi-processing system. Each dual multi-processing system has a number of processors, each processor having a first-level write-through cache that stores through to a store-in second-level cache. A third-level memory is shared by the dual system, with the first-level and second-level caches being globally addressable to all of the third-level memory. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which lines are valid and which have been modified, i.e. differ from what is reflected in the corresponding third-level memory. The flush apparatus uses these two indicators to transfer all cache lines that are within the remote memory address range and have been modified back to the remote memory before the local cache resources are dynamically removed due to either system maintenance or dynamic partitioning. The flush apparatus prevents the loss of system data during such a process, a loss that would otherwise follow from the inherent nature of a store-in second-level cache.
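The flush pass described in the abstract can be sketched in a few lines: each second-level cache line carries a valid indicator and a modified indicator, and lines that are valid, modified, and within the remote memory address range are written back before the local cache is removed. This is an illustrative software sketch only; the patent describes hardware, and all names (`CacheLine`, `flush_remote_lines`) are invented here.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    address: int
    data: bytes
    valid: bool      # line holds usable data
    modified: bool   # line differs from the corresponding third-level memory

def flush_remote_lines(cache, remote_range, remote_memory):
    """Write modified lines in the remote address range back to remote memory."""
    lo, hi = remote_range
    for line in cache:
        if line.valid and line.modified and lo <= line.address < hi:
            remote_memory[line.address] = line.data
            line.modified = False  # line now matches third-level memory again
    return remote_memory
```

Lines outside the remote range, or not modified, are left untouched, which is what makes the flush cheap relative to writing back the whole cache.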
    • 3. Invention grant
    • Title: Method and apparatus for parallel store-in second level caching
    • Publication number: US06868482B1
    • Publication date: 2005-03-15
    • Application number: US09506038
    • Filing date: 2000-02-17
    • Inventors: Donald W. Mackenthun, Mitchell A. Bauman, Donald C. Englin
    • IPC: G06F12/08, G06F11/16
    • CPC: G06F12/0804, G06F12/0811, G06F12/0891
    • Abstract: Each dual multi-processing system has a number of processors, each processor having a first-level write-through cache that stores through to a store-in second-level cache. A third-level memory is shared by the dual system, with the first-level and second-level caches being globally addressable to all of the third-level memory. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which lines are valid and which have been modified, i.e. differ from what is reflected in the corresponding third-level memory. The flush apparatus uses these two indicators to transfer all cache lines that are within the remote memory address range and have been modified back to the remote memory before the local cache resources are dynamically removed due to either system maintenance or dynamic partitioning.
    • 4. Invention grant
    • Title: Method and apparatus for automatically routing around faults within an interconnect system
    • Publication number: US5450578A
    • Publication date: 1995-09-12
    • Application number: US172647
    • Filing date: 1993-12-23
    • Inventors: Donald W. Mackenthun
    • IPC: G06F11/20, G06F11/00, G06F15/173, G11C29/00
    • CPC: G06F11/2017, G06F11/2007, G11C29/006
    • Abstract: A computer architecture for providing enhanced reliability while mitigating the high cost of total redundancy. The HUB and street architecture couples a plurality of commonly shared busses, called streets, with a plurality of smart switching elements, called HUBs. The streets are busses for transferring data between HUB elements. The HUB elements are capable of directing data across the street structures and delivering said data to a desired destination. The system designer can increase or decrease the number of HUB elements and streets to raise or lower the reliability and cost of the particular computer system. In addition, the HUB elements have a built-in priority scheme that allows high-priority data to be transferred before low-priority data. Finally, the HUB elements are capable of automatically detecting faults within the system and can redirect data around said faults. This automatic rerouting capability is the subject of the present invention.
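The rerouting idea in the abstract can be modeled as path search over a graph whose nodes are HUB elements and whose edges are streets: a faulty street is simply excluded when choosing the next hop, so traffic flows around it as long as any fault-free path remains. This is a software stand-in for what the patent implements in switching hardware; the graph, the breadth-first search, and all names are illustrative assumptions.

```python
from collections import deque

def route(graph, src, dst, faulty_links):
    """Breadth-first search for a path from src to dst that avoids faulty streets."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, ()):
            link = frozenset((node, nxt))     # streets are bidirectional
            if nxt not in seen and link not in faulty_links:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no fault-free path exists
```

With a ring of four HUBs A-B-C-D and a fault on street A-B, traffic from A to B is automatically carried the long way around via D and C.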
    • 8. Invention grant
    • Title: Partial duplication of pipelined stack with data integrity checking
    • Publication number: US4697233A
    • Publication date: 1987-09-29
    • Application number: US595864
    • Filing date: 1984-04-02
    • Inventors: James H. Scheuneman, Joseph H. Meyer, Donald W. Mackenthun
    • IPC: G06F7/00, G06F11/16, G06F13/16, G06F11/08, G06F12/00
    • CPC: G06F11/167, G06F13/1615, G06F7/00
    • Abstract: An improved partially duplicated stack structure for ensuring data integrity through a pipelined stack is described. An improved virtual first-in first-out stack structure having a plurality of parallel stacks, each for storing predetermined segments of data signals from a total data word, is described in conjunction with one or more associated compare stack structures that are commonly accessed during loading and reading of the stack. The compare stack is arranged to store predetermined selected bit groupings associated with each of the segments of data signals. At readout, the bit groupings from the compare stack are compared with like-situated bit groupings from the associated segments of data signals. Failure of the bit-by-bit comparison results in an indication that a stack address decode error has occurred, thereby providing through-checking of the integrity of the functioning of the stack structures.
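The partial-duplication scheme above can be sketched as a FIFO whose entries carry data segments, paired with a compare stack that stores only a selected bit grouping from each segment; at readout the groupings are recomputed and compared, and a mismatch signals a stack address decode error. The choice of the low byte as the "selected bit grouping", and all names here, are illustrative assumptions, not the patent's actual bit selection.

```python
from collections import deque

class CheckedFifo:
    def __init__(self):
        self.segments = deque()  # parallel stacks: one list of segments per data word
        self.compare = deque()   # partially duplicated stack: selected bit groupings

    @staticmethod
    def _grouping(segs):
        return [s & 0xFF for s in segs]  # selected bits: low byte of each segment

    def load(self, segs):
        # loading writes the word and its bit groupings at the same stack address
        self.segments.append(list(segs))
        self.compare.append(self._grouping(segs))

    def read(self):
        segs = self.segments.popleft()
        expected = self.compare.popleft()
        # bit-by-bit comparison of like-situated groupings at readout
        if self._grouping(segs) != expected:
            raise RuntimeError("stack address decode error")
        return segs
```

Because only a few bits per segment are duplicated, the check costs far less than a full second copy of the stack while still catching a read that lands on the wrong entry.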
    • 9. Invention grant
    • Title: Method for avoiding delays during snoop requests
    • Publication number: US06928517B1
    • Publication date: 2005-08-09
    • Application number: US09651597
    • Filing date: 2000-08-30
    • Inventors: Donald C. Englin, Donald W. Mackenthun, Kelvin S. Vartti
    • IPC: G06F12/08, G06F13/00
    • CPC: G06F12/0811
    • Abstract: A method of and apparatus for improving the efficiency of a data processing system employing a multiple-level cache memory system. The efficiencies result from enhancing the response to SNOOP requests. To accomplish this, the system memory bus is provided with separate and independent paths to the level-two cache and tag memories. SNOOP requests are therefore permitted to access the tag memories directly, without reference to the cache memory. Secondly, SNOOP requests are given a higher priority than operations associated with local processor data requests. Though this may slow the local processor, the remote processors wait less for SNOOP operations, improving overall system performance.
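The priority rule in the abstract amounts to a two-level arbiter on the tag-memory port: snoop requests from the system bus are always dispatched ahead of local processor requests, with FIFO order preserved within each class. A priority heap stands in for the hardware arbiter here; the class and all names are illustrative assumptions.

```python
import heapq
from itertools import count

SNOOP, LOCAL = 0, 1  # lower value = higher priority at the tag port

class TagPortArbiter:
    def __init__(self):
        self._heap = []
        self._order = count()  # monotonic counter: FIFO tie-break within a class

    def submit(self, kind, request):
        heapq.heappush(self._heap, (kind, next(self._order), request))

    def next_request(self):
        kind, _, request = heapq.heappop(self._heap)
        return request
```

A local request submitted first is still served after any pending snoops, which is exactly the trade the abstract describes: a slightly slower local processor in exchange for shorter remote snoop latency.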
    • 10. Invention grant
    • Title: Cache control system for performing multiple outstanding ownership requests
    • Publication number: US06374332B1
    • Publication date: 2002-04-16
    • Application number: US09409756
    • Filing date: 1999-09-30
    • Inventors: Donald W. Mackenthun, Kelvin S. Vartti
    • IPC: G06F13/00
    • CPC: G06F12/0828
    • Abstract: An improved directory-based, hierarchical memory system is disclosed that is capable of simultaneously processing multiple ownership requests initiated by a processor that is coupled to the memory. An ownership request is initiated on behalf of a processor to obtain an exclusive copy of memory data that may then be modified by the processor. In the data processing system of the preferred embodiment, multiple processors are each coupled to a respective cache memory. These cache memories are further coupled to a hierarchical memory structure including a main memory and one or more additional intermediate levels of cache memory. As is known in the art, copies of addressable portions of the main memory may reside in one or more of the cache memories within the hierarchical memory system. A memory directory records the location and status of each addressable portion of memory so that coherency may be maintained. Prior to updating an addressable portion of memory in a respectively coupled cache, a processor must acquire an exclusively "owned" copy of the requested memory portion from the hierarchical memory. This is accomplished by issuing a request for ownership to the hierarchical memory. Return of ownership may impose memory latency for write requests. To reduce this latency, the current invention allows multiple requests for ownership to be initiated by a processor simultaneously. In the preferred embodiment, write request logic receives two pending write requests from a processor. For each request that is associated with an addressable memory location that is not yet owned by the processor, an associated ownership request is issued to the hierarchical memory. The requests are not processed in the respective cache memory until after the associated ownership grant is returned from the hierarchical memory system. Because ownership is not necessarily granted by the hierarchical memory in the order ownership requests are issued, control logic is provided to ensure that a local cache processes all write requests in time-order so that memory consistency is maintained. According to another aspect of the invention, read request logic is provided to allow a memory read request to bypass all pending write requests previously issued by the same processor. In this manner, read operations are not affected by delays associated with ownership requests.
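The ordering rule in this abstract can be sketched as follows: several ownership requests may be outstanding at once and grants can return out of order, but writes commit to the cache strictly in request (time) order, and reads bypass the pending writes by returning only committed state. The structure and all names are illustrative assumptions, not the patent's actual write-request logic.

```python
from collections import deque

class WriteRequestLogic:
    def __init__(self):
        self.pending = deque()  # write requests held in time order
        self.owned = set()      # addresses whose ownership grant has returned
        self.cache = {}         # committed (exclusively owned) cache state

    def issue_write(self, addr, value):
        # an ownership request for addr is considered issued here
        self.pending.append((addr, value))

    def grant_ownership(self, addr):
        self.owned.add(addr)
        # commit in time order: only drain while the *oldest* write is owned,
        # even if a younger write's grant arrived first
        while self.pending and self.pending[0][0] in self.owned:
            a, v = self.pending.popleft()
            self.cache[a] = v

    def read(self, addr):
        # reads bypass pending writes: they never wait on an ownership grant
        return self.cache.get(addr)
```

If ownership for the younger write returns first, nothing commits until the older grant arrives, at which point both writes drain in time order, preserving memory consistency.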