    • 1. Granted invention patent
    • Stream buffers for high-performance computer memory system
    • US5761706A
    • 1998-06-02
    • US333133
    • 1994-11-01
    • Richard E. Kessler; Steven M. Oberlin; Steven L. Scott; Subbarao Palacharla
    • G06F12/08; G06F12/00
    • G06F12/0862; G06F2212/6022; G06F2212/6026
    • Method and apparatus for a filtered stream buffer coupled to a memory and a processor, and operating to prefetch data from the memory. The filtered stream buffer includes a cache block storage area and a filter controller. The filter controller determines whether a pattern of references has a predetermined relationship, and if so, prefetches stream data into the cache block storage area. Such stream data prefetches are particularly useful in vector processing computers, where once the processor starts to fetch a vector, the addresses of future fetches can be predicted based on the pattern of past fetches. According to various aspects of the present invention, the filtered stream buffer further includes a history table and a validity indicator that is associated with the cache block storage area and indicates which cache blocks, if any, are valid. According to yet another aspect of the present invention, the filtered stream buffer controls random access memory (RAM) chips to stream a plurality of consecutive cache blocks from the RAM into the cache block storage area. According to yet another aspect of the present invention, the stream data includes data for a plurality of strided cache blocks, wherein each of these strided cache blocks corresponds to an address determined by adding to the first address an integer multiple of the difference between the second address and the first address. According to yet another aspect of the present invention, the processor generates three addresses of data words in the memory, and the filter controller determines whether a predetermined relationship exists among the three addresses, and if so, prefetches strided stream data into said cache block storage area. (A simplified C model of this strided-prefetch filter follows this record.)
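The prefetch behavior described in the abstract can be illustrated with a short software model. The sketch below is a minimal approximation, assuming a 64-byte cache block, a three-entry history table, and a fixed stream depth; names such as filtered_stream_buffer, filter_observe, and STREAM_DEPTH are illustrative and do not come from the patent.

    /* Minimal software model of the strided-prefetch filter described above.
     * Sizes, names, and the stream depth are assumptions for illustration. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_BYTES   64          /* assumed cache-block size            */
    #define STREAM_DEPTH   4          /* blocks streamed per detected stride */
    #define HISTORY_LEN    3          /* three addresses establish a stride  */

    typedef struct {
        uint64_t history[HISTORY_LEN];   /* recent reference addresses (history table) */
        int      hist_count;
        uint64_t blocks[STREAM_DEPTH];   /* cache block storage area (tags only)       */
        bool     valid[STREAM_DEPTH];    /* validity indicator per cache block         */
    } filtered_stream_buffer;

    /* Stand-in for streaming one block out of RAM into the buffer. */
    static void stream_block(filtered_stream_buffer *sb, int slot, uint64_t addr)
    {
        sb->blocks[slot] = addr & ~(uint64_t)(BLOCK_BYTES - 1);
        sb->valid[slot]  = true;
        printf("prefetch block at 0x%llx\n", (unsigned long long)sb->blocks[slot]);
    }

    /* Record a reference; if the last three addresses have a constant
     * stride, prefetch STREAM_DEPTH further blocks at that stride. */
    void filter_observe(filtered_stream_buffer *sb, uint64_t addr)
    {
        if (sb->hist_count < HISTORY_LEN) {
            sb->history[sb->hist_count++] = addr;
        } else {
            memmove(sb->history, sb->history + 1,
                    (HISTORY_LEN - 1) * sizeof(uint64_t));
            sb->history[HISTORY_LEN - 1] = addr;
        }
        if (sb->hist_count < HISTORY_LEN)
            return;

        int64_t s1 = (int64_t)(sb->history[1] - sb->history[0]);
        int64_t s2 = (int64_t)(sb->history[2] - sb->history[1]);
        if (s1 != 0 && s1 == s2) {                  /* the predetermined relationship */
            for (int i = 0; i < STREAM_DEPTH; i++)
                stream_block(sb, i, addr + (uint64_t)((i + 1) * s1));
        }
    }

    int main(void)
    {
        filtered_stream_buffer sb = {0};
        /* A vector-style loop touching every 128 bytes triggers the stream prefetch. */
        for (uint64_t a = 0x1000; a < 0x1400; a += 128)
            filter_observe(&sb, a);
        return 0;
    }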
    • 3. Granted invention patent
    • Messaging in distributed memory multiprocessing system having shell circuitry for atomic control of message storage queue's tail pointer structure in local memory
    • US5841973A
    • 1998-11-24
    • US615694
    • 1996-03-13
    • Richard E. Kessler; Steven M. Oberlin; Steven L. Scott
    • G06F15/173; G06F13/38; G06F15/16
    • G06F15/17381
    • A messaging facility in a multiprocessor computer system includes assembly circuitry in a source processing element for assembling a message to be sent from the source processing element to a destination processing element based on information provided by a processor in the source processing element. A network router transmits the assembled message from the source processing element to the destination processing element via an interconnect network. A message queue in a local memory of the destination processing element stores the transmitted message. A control word stored in the local memory of the destination processing element includes a limit field designating the size of the message queue and a tail field designating an index into the corresponding message queue, indicating the location in the message queue where the transmitted message is to be stored. Shell circuitry in the destination processing element atomically reads and updates the tail field. (A minimal C model of this control word and tail-pointer update follows this record.)
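As a rough illustration of the destination-side structure described in the abstract, the sketch below models the control word's limit and tail fields in C, with a C11 atomic standing in for the shell circuitry's atomic read-and-update of the tail. The type and field names (message_queue, deliver_message, MSG_WORDS) are assumptions, and the simplified full-queue back-off here is not how the hardware handles overflow.

    /* Software model of the message queue and control word described above. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MSG_WORDS 4                        /* assumed message payload size */

    typedef struct { uint64_t word[MSG_WORDS]; } message;

    typedef struct {
        uint32_t    limit;                     /* queue size (number of slots)        */
        atomic_uint tail;                      /* next free index, updated atomically */
    } control_word;

    typedef struct {
        control_word ctrl;                     /* resides in destination local memory */
        message      slots[256];
    } message_queue;

    /* Models the shell circuitry: atomically claim a slot index, then copy the
     * transmitted message into the queue. Returns false if the queue is full. */
    bool deliver_message(message_queue *q, const message *m)
    {
        uint32_t idx = atomic_fetch_add(&q->ctrl.tail, 1);   /* atomic read + update */
        if (idx >= q->ctrl.limit) {
            atomic_fetch_sub(&q->ctrl.tail, 1);              /* simplified back-off on overflow */
            return false;
        }
        q->slots[idx] = *m;
        return true;
    }

    int main(void)
    {
        message_queue q = { .ctrl = { .limit = 256, .tail = 0 } };
        message m = { .word = { 0xCAFE, 1, 2, 3 } };
        printf("delivered: %d, tail now %u\n",
               deliver_message(&q, &m), (unsigned)atomic_load(&q.ctrl.tail));
        return 0;
    }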
    • 7. Granted invention patent
    • System and method for fast barrier synchronization
    • US06216174B1
    • 2001-04-10
    • US09162673
    • 1998-09-29
    • Steven L. Scott; Richard E. Kessler
    • G06F15/80
    • G06F9/52; G06F9/522
    • Improved method and apparatus for facilitating fast barrier synchronization in a parallel-processing system. A single input signal, a single output signal, and a single bit of state (“barrier_bit”) are added to each processor to support a barrier. The input and output signals are coupled to a dedicated barrier-logic circuit that includes memory-mapped bit-vector registers to track the “participating” processors and the “joined” processors for the barrier. A “bjoin” instruction executed in a processor causes a pulse to be sent on the output signal, which in turn causes that processor's bit in the dedicated barrier-logic circuit's “joined” register to be set. When the “joined” bits for all participating processors (as indicated by the “participating” register) are all set, the “joined” register is cleared and a pulse is sent to the input signal of all the participating processors, which in turn causes each of those processors' barrier_bit to be set. (A single-threaded C model of this joined/participating logic follows this record.)
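The joined/participating mechanism described in the abstract maps naturally onto a pair of bit-vector registers plus one bit of state per processor. The single-threaded C model below sketches that behavior under the assumption of at most 64 processors; bjoin and barrier_passed are hypothetical names for the join pulse and the barrier_bit test, not the patent's interfaces.

    /* Behavioral model of the barrier logic described above:
     * "participating" and "joined" bit-vector registers, and a barrier_bit per processor. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NPROC 8

    static uint64_t participating;          /* memory-mapped: who takes part  */
    static uint64_t joined;                 /* memory-mapped: who has arrived */
    static bool     barrier_bit[NPROC];     /* one bit of state per processor */

    /* Models the "bjoin" pulse from processor `cpu` into the barrier logic. */
    void bjoin(int cpu)
    {
        joined |= 1ull << cpu;
        if ((joined & participating) == participating) {
            joined = 0;                                    /* clear the joined register */
            for (int p = 0; p < NPROC; p++)                /* pulse every participant   */
                if (participating & (1ull << p))
                    barrier_bit[p] = true;
        }
    }

    /* A processor waits by testing (and then consuming) its barrier_bit. */
    bool barrier_passed(int cpu)
    {
        if (!barrier_bit[cpu])
            return false;
        barrier_bit[cpu] = false;
        return true;
    }

    int main(void)
    {
        participating = 0x0F;                /* processors 0..3 take part */
        for (int p = 0; p < 4; p++) {
            bjoin(p);
            printf("after bjoin(%d): barrier passed for 0? %d\n",
                   p, barrier_passed(0));    /* becomes 1 only after the last join */
        }
        return 0;
    }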