    • 51. Granted invention patent
    • Efficient translation lookaside buffer miss processing in computer systems with a large range of page sizes
    • Publication: US06715057B1 (published 2004-03-30)
    • Application: US09652552 (filed 2000-08-31)
    • Inventors: Richard E. Kessler, Jeffrey G. Wiedemeier, Eileen J. Samberg
    • IPC: G06F 12/10
    • CPC: G06F12/1027; G06F12/1009; G06F2212/651; G06F2212/652
    • A system and method is disclosed to efficiently translate virtual-to-physical addresses of large size pages of data by eliminating one level of a multilevel page table. A computer system containing a processor includes a translation lookaside buffer (“TLB”) in the processor. The processor is connected to a system memory that contains a page table with multiple levels. The page table translates the virtual address of a page of data stored in system memory into the corresponding physical address of the page of data. If the size of the page is above a certain threshold value, then translation of the page using the multilevel page table occurs by eliminating one or more levels of the page table. The threshold value preferably is 512 Megabytes. The multilevel page table is only used for translation of the virtual address of the page of data stored in system memory into the corresponding physical address of the page of data if a lookup of the TLB for the virtual address of the page of data results in a miss. The TLB also contains entries from the final level of the page table (i.e., physical addresses of pages of data) corresponding to a subfield of bits from corresponding virtual addresses of the page of data. Virtual-to-physical address translation using the multilevel page table is not required if the TLB contains the needed physical address of the page of data corresponding to the subfield of bits from the virtual address of the page of data.
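The abstract above can be illustrated with a small sketch: a TLB lookup backed by a multilevel page-table walk that eliminates one level for pages at or above the 512 MB threshold. The dict-based TLB, the list-of-dicts page table, and all names are assumptions for illustration only, not the patent's actual structures.

```python
# Illustrative sketch only: the threshold value is the one the abstract
# calls preferable; the table representation is a simplifying assumption.
LARGE_PAGE_THRESHOLD = 512 * 1024 * 1024  # 512 Megabytes

def translate(vaddr, tlb, page_table_levels, page_size):
    """Translate a virtual address; walk the page table only on a TLB miss."""
    vpn = vaddr // page_size          # virtual page number (the TLB lookup key)
    if vpn in tlb:                    # TLB hit: no multilevel walk required
        return tlb[vpn]
    levels = page_table_levels
    if page_size >= LARGE_PAGE_THRESHOLD:
        levels = levels[:-1]          # eliminate one level for large pages
    entry = vpn
    for level in levels:              # walk the remaining levels
        entry = level[entry]
    tlb[vpn] = entry                  # fill the TLB with the final-level entry
    return entry
```

A small page walks every level; a 512 MB page reaches its physical address with one fewer memory reference, which is the efficiency claim of the abstract.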
    • 52. Granted invention patent
    • Method and apparatus for implementing loop compression in a program counter trace
    • Publication: US06691207B2 (published 2004-02-10)
    • Application: US10034506 (filed 2001-12-28)
    • Inventors: Timothe Litt, Richard E. Kessler, Thomas Hummel
    • IPC: G06F 12/00
    • CPC: G06F11/3636; G06F9/381; G06F11/3648
    • A system is disclosed in which an on-chip logic analyzer (OCLA) includes a loop detector logic which receives incoming program counter (PC) data and detects when software loops exist. When a software loop is detected, the loop detector may be configured to store the first loop in memory, while all subsequent iterations are not stored, thus saving space in memory which would otherwise be consumed. The loop detector comprises a content addressable memory (CAM) which is enabled by a user programmed signal. The CAM may be configured with a programmable mask to determine which bits of the incoming PC data to compare with the CAM entries. The depth of the CAM also is programmable, to permit the CAM to be adjusted to cover the number of instructions in a loop.
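The loop-compression idea above can be sketched as follows: a CAM-like structure of programmable depth remembers recently recorded program-counter values under a programmable mask, and a PC that matches an entry (a repeated loop iteration) is not stored again. Function and parameter names are hypothetical.

```python
def compress_trace(pcs, cam_depth=8, mask=0xFFFFFFFF):
    """Record each loop body once: a masked PC already present in the CAM
    belongs to a repeated iteration and is dropped from the trace."""
    cam = []                         # models the CAM as a FIFO of masked PCs
    trace = []
    for pc in pcs:
        key = pc & mask              # programmable mask picks the compared bits
        if key in cam:
            continue                 # subsequent loop iteration: not stored
        trace.append(pc)             # first occurrence: stored in the trace
        cam.append(key)
        if len(cam) > cam_depth:     # programmable depth sized to the loop body
            cam.pop(0)
    return trace
```

A three-instruction loop executed twice thus occupies three trace slots rather than six, which is the memory saving the abstract describes.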
    • 53. Granted invention patent
    • Special encoding of known bad data
    • Publication: US06662319B1 (published 2003-12-09)
    • Application: US09652314 (filed 2000-08-31)
    • Inventors: David Arthur James Webb, Jr.; Richard E. Kessler; Steve Lang
    • IPC: G06F 11/10
    • CPC: G06F11/0763; G06F11/0724
    • A multi-processor system in which each processor receives a message from another processor in the system. The message may contain corrupted data that was corrupted during transmission from the preceding processor. Upon receiving the message, the processor detects that a portion of the message contains corrupted data. The processor then replaces the corrupted portion with a predetermined bit pattern known or otherwise programmed into all other processors in the system. The predetermined bit pattern indicates that the associated portion of data was corrupted. The processor that detects the error in the message preferably alerts the system that an error has been detected. The message now containing the predetermined bit pattern in place of the corrupted data is retransmitted to another processor. The predetermined bit pattern will indicate that an error in the message was detected by the previous processor. In response, the processor detecting the predetermined bit pattern preferably will not alert the system of the existence of an error. The same message with the predetermined bit pattern can be retransmitted to other processors which also will detect the presence of the predetermined bit pattern and in response not alert the system of the presence of an error. As such, because only the first processor to detect an error alerts the system of the error and because messages containing uncorrectable errors still are transmitted through the system, fault isolation is improved and the system is less likely to fall into a deadlock condition.
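A minimal sketch of the scheme above: each node replaces a freshly detected corrupted word with a predetermined bit pattern known to all processors and raises an alert, while downstream nodes recognize the pattern and stay silent. The poison value, the (word, corrupted) message encoding, and the detection flag are all assumptions of this sketch.

```python
POISON = 0xDEADBEEF  # hypothetical predetermined bit pattern known system-wide

def receive(message, alert_log, node):
    """Forward a message of (word, corrupted) pairs: poison fresh errors and
    alert once; already-poisoned words pass through without a new alert."""
    out = []
    for word, corrupted in message:
        if corrupted:
            alert_log.append(node)       # only the first detector alerts
            out.append((POISON, False))  # substitute the known-bad encoding
        else:
            out.append((word, False))    # includes words already poisoned upstream
    return out
```

Because the poisoned message is still forwarded rather than dropped, later processors see exactly one error report per fault, improving fault isolation without stalling the system.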
    • 54. Granted invention patent
    • Efficient address interleaving with simultaneous multiple locality options
    • Publication: US06567900B1 (published 2003-05-20)
    • Application: US09652452 (filed 2000-08-31)
    • Inventor: Richard E. Kessler
    • IPC: G06F 12/00
    • CPC: G06F12/0831
    • A computer system includes multiple processors, each of which includes an associated memory. Each of the processors is capable of accessing the memory of all other processors. Memory can be stored and accessed using different addressing schemes. For data that will only be used by the local processor, data is stored in memory using processor contiguous addressing, so that data is stored in the local memory. For data that may be accessed by multiple processors, data is stored using striping among a local processor set. A stripe control register in the memory controller of each memory comprises a mask that indicates which memory blocks should be accessed using processor contiguous addressing and which should be accessed by using striped addressing. For both striped and contiguous addressing, the address space includes a processor identification field to identify the processor where the associated memory resides, together with an offset indicating where in memory the address is located. The processor identification field for striped addressing includes two bits located in low order address space identifying a four processor local stripe set. The other processor identification bits define which four processors comprise each stripe set.
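The two addressing schemes above can be sketched as an address decode: under contiguous addressing the processor ID occupies the bits above the offset, while under striped addressing two low-order bits pick one of four processors in the local stripe set. The specific bit positions here are illustrative assumptions, not the patent's layout.

```python
def home_processor(addr, striped, offset_bits=20, stripe_shift=7):
    """Decode which processor's memory holds `addr` (bit positions assumed)."""
    if not striped:
        return addr >> offset_bits           # processor-contiguous: ID above offset
    member = (addr >> stripe_shift) & 0b11   # two low-order stripe-set bits
    stripe_set = addr >> offset_bits         # remaining bits pick the 4-CPU set
    return stripe_set * 4 + member
```

Local-only data decodes to a single processor under contiguous addressing; shared data cycles through the four members of its stripe set as the low-order bits change, spreading accesses as the abstract describes.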
    • 56. Granted invention patent
    • Stream buffers for high-performance computer memory system
    • Publication: US5761706A (published 1998-06-02)
    • Application: US333133 (filed 1994-11-01)
    • Inventors: Richard E. Kessler, Steven M. Oberlin, Steven L. Scott, Subbarao Palacharla
    • IPC: G06F 12/08; G06F 12/00
    • CPC: G06F12/0862; G06F2212/6022; G06F2212/6026
    • A method and apparatus for a filtered stream buffer that is coupled to a memory and a processor and operates to prefetch data from the memory. The filtered stream buffer includes a cache block storage area and a filter controller. The filter controller determines whether a pattern of references has a predetermined relationship, and if so, prefetches stream data into the cache block storage area. Such stream data prefetches are particularly useful in vector processing computers, where once the processor starts to fetch a vector, the addresses of future fetches can be predicted based on the pattern of past fetches. According to various aspects of the present invention, the filtered stream buffer further includes a history table and a validity indicator that is associated with the cache block storage area and indicates which cache blocks, if any, are valid. According to another aspect of the present invention, the filtered stream buffer controls random access memory (RAM) chips to stream the plurality of consecutive cache blocks from the RAM into the cache block storage area. According to yet another aspect of the present invention, the stream data includes data for a plurality of strided cache blocks, wherein each of these strided cache blocks corresponds to an address determined by adding to the first address an integer multiple of the difference between the second address and the first address. According to yet another aspect of the present invention, the processor generates three addresses of data words in the memory, and the filter controller determines whether a predetermined relationship exists among the three addresses, and if so, prefetches strided stream data into said cache block storage area.
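The three-address filter described above can be sketched briefly: if the differences between three consecutive reference addresses are equal, a stride has been recognized and the next few strided blocks are prefetched; otherwise nothing is fetched. The function name and prefetch depth are assumptions of this sketch.

```python
def maybe_prefetch(a1, a2, a3, depth=4):
    """Filter controller sketch: three equally spaced reference addresses
    establish a stride; return the strided block addresses to prefetch."""
    stride = a2 - a1
    if a3 - a2 != stride or stride == 0:
        return []                    # no predetermined relationship: no prefetch
    # Each prefetched address is the last reference plus a multiple of the stride.
    return [a3 + stride * (i + 1) for i in range(depth)]
```

Filtering on three references rather than two reduces spurious prefetches from accidental address pairs, which is the point of the filter controller.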
    • 60. Invention patent application
    • INPUT OUTPUT BRIDGING
    • Publication: US20130103870A1 (published 2013-04-25)
    • Application: US13280768 (filed 2011-10-25)
    • Inventors: Robert A. Sanzone, David H. Asher, Richard E. Kessler
    • IPC: G06F 13/368
    • CPC: G06F13/1605; G06F13/1684
    • In one embodiment, a system comprises a memory and a first bridge unit for processor access to the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and from the MFNU and to choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access to the memory, which includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory, and is configured to receive requests from the packet input unit and from the packet output unit and to choose among the requests to send to the memory on a second memory bus.
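The first bridge unit's arbitration can be sketched as choosing among two request sources before issuing onto a single memory bus. The abstract does not fix an arbitration policy, so the round-robin choice below, like the list-based queues and all names, is an assumption of this sketch.

```python
def arbitrate(requests_io, requests_mfnu):
    """Two-source arbiter sketch: alternate between the input-output bus
    queue and the MFNU queue, draining whichever still has requests."""
    issued = []                      # order of requests sent on the memory bus
    take_io = True
    while requests_io or requests_mfnu:
        if (take_io and requests_io) or not requests_mfnu:
            issued.append(requests_io.pop(0))    # grant the I/O bus request
        else:
            issued.append(requests_mfnu.pop(0))  # grant the MFNU request
        take_io = not take_io                    # alternate the grant
    return issued
```

The second bridge unit would arbitrate identically between the packet input and packet output units onto the second memory bus.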