    • 1. Granted invention patent
    • Hyperprocessor
    • US07533382B2
    • 2009-05-12
    • US10283653
    • 2002-10-30
    • Faraydon O. Karim
    • Faraydon O. Karim
    • G06F9/46; G06F15/00
    • G06F9/4843; G06F9/30098; G06F9/3851
    • A hyperprocessor includes a control processor controlling tasks executed by a plurality of processor cores, each of which may include multiple execution units, or special hardware units. The control processor schedules tasks according to control threads for the tasks created during compilation and comprising a hardware context including register files, a program counter and status bits for the respective task. The tasks are dispatched to the processor cores or special hardware units for parallel, sequential, out-of-order or speculative execution. A universal register file contains data to be operated on by the task, and an interconnect couples at least the processor cores or special hardware units to each other and to the universal register file, allowing each node to communicate with any other node.
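As a rough illustration of the scheduling model in the abstract above, the sketch below models control threads that carry a per-task hardware context (register file, program counter, status bits) and a control processor that dispatches ready tasks to idle cores. All class and field names are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the control-thread idea: each task carries its own hardware
# context, and a control processor dispatches ready tasks to free processor
# cores or special hardware units. Names are illustrative, not from the patent.

from dataclasses import dataclass, field
from collections import deque

@dataclass
class ControlThread:
    task_id: int
    program_counter: int = 0
    status_bits: int = 0
    registers: list = field(default_factory=lambda: [0] * 16)   # per-task register file

class ControlProcessor:
    def __init__(self, num_cores):
        self.ready = deque()                  # control threads awaiting dispatch
        self.cores = [None] * num_cores       # None = idle core

    def submit(self, thread):
        self.ready.append(thread)

    def dispatch(self):
        """Hand ready tasks to idle cores; cores then run independently (in parallel)."""
        issued = []
        for core_id, busy in enumerate(self.cores):
            if busy is None and self.ready:
                thread = self.ready.popleft()
                self.cores[core_id] = thread
                issued.append((core_id, thread.task_id))
        return issued

cp = ControlProcessor(num_cores=4)
for tid in range(6):
    cp.submit(ControlThread(task_id=tid))
print(cp.dispatch())        # -> [(0, 0), (1, 1), (2, 2), (3, 3)]; tasks 4 and 5 wait
```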
    • 3. Granted invention patent
    • System for multiple error detection with single and double bit error correction
    • US4589112A
    • 1986-05-13
    • US574221
    • 1984-01-26
    • Faraydon O. Karim
    • Faraydon O. Karim
    • G06F11/10; H03M13/00; H03M13/13
    • H03M13/13
    • A system for detecting multiple errors that may occur during transfer of data and for correcting up to two of these errors simultaneously. The system has a component for calculating a number of check bits associated with the data word. Also provided is a component for grouping all data bits into base groups and multiple groups, the sum of the number of base groups and multiple groups being equal to the number of check bits. Up to two weights are assigned for each data bit. The system distributes the data bits among the groups according to the weights assigned thereto. Also provided is a component for generating a check bit for each of the groups and for padding the data word with the check bits to form an appended data word. A generator creates a predetermined number of syndrome bits, the number being the number of check bits. Finally, a decoder is provided for decoding the syndrome bits to identify the erroneous bits in the data word.
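The abstract above describes generating check bits over groups of data bits and decoding a syndrome to locate erroneous bits. The sketch below illustrates only that general check-bit/syndrome mechanism using a standard Hamming(7,4) single-error-correcting code; it is not the patent's specific grouping and weighting scheme, which corrects up to two errors.

```python
# Toy illustration of the check-bit / syndrome idea with a plain Hamming(7,4)
# code over a 4-bit data word. This is NOT the patent's grouping/weighting
# scheme; it only shows how check bits are generated from groups of data bits
# and how a nonzero syndrome points at the flipped bit.

def hamming74_encode(data_bits):
    """data_bits: list of 4 ints (0/1) -> 7-bit codeword, parity at positions 1, 2, 4."""
    d1, d2, d3, d4 = data_bits
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    # codeword layout by position: 1:p1 2:p2 3:d1 4:p3 5:d2 6:d3 7:d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(codeword):
    """Return (corrected data bits, syndrome); a nonzero syndrome is the
    1-based position of a single-bit error."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # flip the erroneous bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], syndrome

word = [1, 0, 1, 1]
cw = hamming74_encode(word)
cw[5] ^= 1                            # inject a single-bit error at position 6
data, syn = hamming74_decode(cw)
print(data, syn)                      # -> [1, 0, 1, 1] 6
```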
    • 4. Invention patent application
    • Multiprocessing apparatus, system and method
    • US20090133022A1
    • 2009-05-21
    • US11985481
    • 2007-11-15
    • Faraydon O. Karim
    • Faraydon O. Karim
    • G06F9/46
    • G06F9/4843
    • An apparatus to isolate a main memory in a multiprocessor computer is provided. The apparatus includes a master processor and a management device communicating with the master processor. One or more slave processors communicate with the master processor and the management device. A volatile memory also communicates with the management device, and the main memory communicates with the volatile memory. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules that allow a reader to quickly ascertain the subject matter of the disclosure contained herein. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.
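To make the data path concrete, here is a loose sketch in which slave processors reach memory only through the management device, which stages data in a volatile memory in front of the isolated main memory. The class names, the write-back flush, and the dictionary-based memories are illustrative assumptions, not details from the application.

```python
# Rough sketch of the isolation idea: slave processors never touch main memory
# directly; every access goes through the management device, which stages data
# in the volatile memory sitting between it and main memory.

class ManagementDevice:
    def __init__(self, main_memory):
        self.main_memory = main_memory       # only the management device sees this
        self.volatile = {}                   # staging store between the two

    def read(self, addr):
        if addr not in self.volatile:        # miss: fetch from isolated main memory
            self.volatile[addr] = self.main_memory.get(addr, 0)
        return self.volatile[addr]

    def write(self, addr, value):
        self.volatile[addr] = value          # main memory updated later (write-back)

    def flush(self):
        self.main_memory.update(self.volatile)

class SlaveProcessor:
    def __init__(self, mgmt):
        self.mgmt = mgmt                     # slaves talk to the management device only

    def run(self, addr):
        self.mgmt.write(addr, self.mgmt.read(addr) + 1)

main_memory = {0x10: 41}
mgmt = ManagementDevice(main_memory)
SlaveProcessor(mgmt).run(0x10)
mgmt.flush()
print(main_memory[0x10])                     # -> 42
```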
    • 5. Granted invention patent
    • System independent and scalable packet buffer management architecture for network processors
    • US07468985B2
    • 2008-12-23
    • US10290766
    • 2002-11-08
    • Faraydon O. Karim; Ramesh Chandra; Bernd H. Stramm
    • Faraydon O. Karim; Ramesh Chandra; Bernd H. Stramm
    • H04L12/28; H04L12/56; G06F9/26
    • H04L49/9031; H04L49/90; H04L49/901
    • A circular buffer storing packets for processing by one or more network processors employs an empty buffer address register identifying where a next received packet should be stored, a next packet address register identifying the next packet to be processed, and a packet-processing address register within each network processor identifying the packet being processed by that network processor. The n-bit addresses to the buffer are mapped or masked from/to the m-bit packet-processing address registers by software, allowing the buffer size to be fully scalable. A dedicated packet retrieval instruction supported by the network processor(s) retrieves a new packet for processing using the next packet address register and copies that into the associated packet-processing address register for use in subsequent accesses. Buffer management is thus independent of the network processor architecture.
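A minimal Python sketch of the three-pointer circular buffer follows: one register tracks where the next received packet is stored, another tracks the next packet to hand out, and the retrieval step models the dedicated instruction that copies that address into a network processor's packet-processing register. Names such as `empty_addr`, `next_addr` and `get_packet` are illustrative, not from the patent.

```python
# Minimal sketch of the circular packet buffer described above.

class PacketBuffer:
    def __init__(self, size):
        self.slots = [None] * size      # circular buffer of packet slots
        self.size = size
        self.empty_addr = 0             # where the next received packet is stored
        self.next_addr = 0              # next packet awaiting processing

    def enqueue(self, packet):
        """Store a newly received packet at the empty-buffer address."""
        if self.slots[self.empty_addr] is not None:
            raise RuntimeError("buffer full")
        self.slots[self.empty_addr] = packet
        self.empty_addr = (self.empty_addr + 1) % self.size

    def get_packet(self):
        """Model of the dedicated packet-retrieval instruction: hand the next
        packet's address to a network processor's packet-processing register."""
        if self.slots[self.next_addr] is None:
            return None                 # nothing to process
        processing_addr = self.next_addr
        self.next_addr = (self.next_addr + 1) % self.size
        return processing_addr

    def release(self, processing_addr):
        """Free a slot once the network processor is done with the packet."""
        self.slots[processing_addr] = None

buf = PacketBuffer(8)
buf.enqueue(b"pkt0")
buf.enqueue(b"pkt1")
ppa = buf.get_packet()                  # packet-processing address for one core
print(ppa, buf.slots[ppa])              # -> 0 b'pkt0'
buf.release(ppa)
```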
    • 6. Granted invention patent
    • Fetch branch architecture for reducing branch penalty without branch prediction
    • US07010675B2
    • 2006-03-07
    • US09917290
    • 2001-07-27
    • Faraydon O. Karim; Ramesh Chandra
    • Faraydon O. Karim; Ramesh Chandra
    • G06F9/30
    • G06F9/3804; G06F9/3842
    • In lieu of branch prediction, a merged fetch-branch unit operates in parallel with the decode unit within a processor. Upon detection of a branch instruction within a group of one or more fetched instructions, any instructions preceding the branch are marked regular instructions, the branch instruction is marked as such, and any instructions following the branch are marked sequential instructions. Within two cycles, sequential instructions following the last fetched instruction are retrieved and marked, target instructions beginning at the branch target address are retrieved and marked, and the branch is resolved. Either the sequential or target instructions are then dropped depending on the branch resolution, incurring a fixed one-cycle branch penalty.
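The sketch below loosely models the idea in the abstract above: instructions before the branch are kept as regular instructions, both the fall-through (sequential) path and the target path are fetched, and branch resolution drops one of them. The fixed one-cycle penalty and the per-instruction marking are only hinted at; all names and group sizes are illustrative assumptions.

```python
# Loose model of dual-path fetch: keep the pre-branch instructions, fetch a few
# instructions from both the sequential path and the target path, then drop the
# path the resolved branch does not take.

def fetch_with_branch(instructions, branch_index, target_index, taken):
    """instructions: program listing; the branch at branch_index jumps to target_index."""
    regular = instructions[:branch_index]                           # marked "regular"
    branch = instructions[branch_index]                             # marked "branch"
    sequential = instructions[branch_index + 1:branch_index + 3]    # fall-through path
    target = instructions[target_index:target_index + 2]            # target path
    # Both paths are in flight; resolution drops one of them (fixed one-cycle cost).
    kept = target if taken else sequential
    return regular + [branch] + kept

prog = [f"i{n}" for n in range(12)]
print(fetch_with_branch(prog, branch_index=3, target_index=8, taken=True))
# -> ['i0', 'i1', 'i2', 'i3', 'i8', 'i9']
```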
    • 7. Granted invention patent
    • Method and apparatus for floating point normalization
    • US5384723A
    • 1995-01-24
    • US205123
    • 1994-02-28
    • Faraydon O. Karim; Christopher H. Olson
    • Faraydon O. Karim; Christopher H. Olson
    • G06F7/00; G06F5/01; G06F7/76; G06F7/38
    • G06F5/012; G06F5/015
    • A method and apparatus for performing normalization of floating point numbers using a much smaller width register than would normally be required for the data operands which can be processed. As the registers are smaller, the number of circuits required to achieve the normalization is reduced, resulting in a decrease in the chip area required to perform such operation. The normalization circuitry was streamlined to efficiently operate on the more prevalent type of data being presented to the floating point unit. Data types and/or operations which statistically occur less frequently require multiple cycles of the normalization function. It was found that for the more prevalent data types and/or operations, the width of the registers required was substantially less than the width required for the less frequent data types and/or operations. Instead of expanding the register width to accommodate these lesser occurrences, the data is broken into smaller portions and normalized using successive cycles of the normalization circuitry. Thus, by sacrificing speed for the lesser occurring events, a significant savings was realized in the number of circuits required to implement normalization. As the slower speed operations occur infrequently, the overall performance of the normalization function is minimally impacted. Thus, considerable savings in integrated circuit real estate is achieved with minimal impact to the overall throughput of the system.
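To illustrate the trade-off described in the abstract above, the sketch below normalizes a mantissa with a shifter narrower than the full data width, looping for the statistically rare inputs that need a larger shift. The 24-bit mantissa width and 8-bit per-cycle shift limit are illustrative assumptions, not values from the patent.

```python
# Sketch of normalization with a narrow shifter: instead of a full-width left
# shifter, shift by at most MAX_SHIFT bits per cycle and loop for the rare
# cases that need more. Widths are illustrative, not from the patent.

MANT_BITS = 24          # single-precision-style mantissa width
MAX_SHIFT = 8           # width the narrow normalization shifter handles per cycle

def normalize(mantissa):
    """Left-shift mantissa until its MSB (bit MANT_BITS-1) is 1.
    Returns (normalized mantissa, exponent adjustment, cycles used)."""
    if mantissa == 0:
        return 0, 0, 1                      # zero never normalizes; handled specially
    assert mantissa.bit_length() <= MANT_BITS
    shift_total, cycles = 0, 0
    while not (mantissa >> (MANT_BITS - 1)) & 1:
        # leading-zero count, limited to what the narrow shifter covers this cycle
        lz = MANT_BITS - mantissa.bit_length()
        step = min(lz, MAX_SHIFT)
        mantissa = (mantissa << step) & ((1 << MANT_BITS) - 1)
        shift_total += step
        cycles += 1
    return mantissa, -shift_total, max(cycles, 1)

m, exp_adj, cyc = normalize(0b0000000000000101_00000000)   # many leading zeros
print(bin(m), exp_adj, cyc)                                 # needs two shifter passes
```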
    • 8. Granted invention patent
    • Octagonal interconnection network for linking processing nodes on an SOC device and method of operating same
    • US07218616B2
    • 2007-05-15
    • US10090899
    • 2002-03-05
    • Faraydon O. Karim
    • Faraydon O. Karim
    • H04L12/56
    • H04L49/15; H04L49/109; H04L49/352; H04L49/357
    • An octagonal interconnection network for routing data packets. The interconnection network comprises: 1) eight switching circuits for transferring data packets with each other; 2) eight sequential data links bidirectionally coupling the eight switching circuits in sequence to thereby form an octagonal ring configuration; and 3) four crossing data links, wherein a first crossing data link bidirectionally couples a first switching circuit to a fifth switching circuit, a second crossing data link bidirectionally couples a second switching circuit to a sixth switching circuit, a third crossing data link bidirectionally couples a third switching circuit to a seventh switching circuit, and a fourth crossing data link bidirectionally couples a fourth switching circuit to an eighth switching circuit.
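A short sketch can check the topology in the abstract above: eight switching circuits on a bidirectional ring (the sequential links) plus four crossing links joining nodes four apart. A breadth-first search over that graph confirms every node reaches every other node in at most two hops. The code illustrates only the topology, not the patent's routing logic.

```python
# Sketch of the octagon topology: a ring of eight nodes plus four crossing
# links (0-4, 1-5, 2-6, 3-7). BFS shows the network diameter is two hops.

from collections import deque

def octagon_links():
    ring = {(i, (i + 1) % 8) for i in range(8)}          # sequential links
    cross = {(i, i + 4) for i in range(4)}               # crossing links
    return ring | cross

def hop_count(src, dst, links):
    adj = {n: set() for n in range(8)}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)                                    # links are bidirectional
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist[dst]

links = octagon_links()
print(max(hop_count(s, d, links) for s in range(8) for d in range(8)))   # -> 2
```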
    • 9. Granted invention patent
    • Method and device for computing the number of bits set to one in an arbitrary length word
    • US06795839B2
    • 2004-09-21
    • US09727135
    • 2000-11-30
    • Faraydon O. Karim; Alain Mellan
    • Faraydon O. Karim; Alain Mellan
    • G06F7/00
    • G06F7/607
    • A method and a bit counting device (100) count bits set to one in a data word of arbitrary size. The bit counting device (100) includes a first data register (110) for storing a data word, an offset register (112) for storing an offset value, a second data register (120), and a one-cycle shifter (114), electrically connected to the first data register (110), to the second data register (120), and to the offset register (112), for shifting the data word by a value stored in the offset register (112) and storing the shifted data word in the second data register (120). The device 100 also includes a third data register (124) and at least one carry save adder (CSA) device (122) organized in a tree structure, and electrically connected to the second data register (120) and to the third data register (124), for counting the number of bits set to one in the data word stored in the second data register (120) and storing in the third data register (124) a value representing the count of bits set to one in the data word.
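As a rough software model of the abstract above, the sketch below counts the bits set to one in an arbitrarily long value by shifting it through a fixed word width and reducing each chunk's bits with full adders, the building block of a carry-save adder tree. The 32-bit width and all function names are illustrative assumptions, not the patent's register widths.

```python
# Sketch of counting set bits in an arbitrary-length word with fixed-width
# hardware: a shifter feeds WORD_BITS-sized chunks into a counting stage that
# reduces bits with full adders (as a carry-save adder tree would) and
# accumulates the result. Widths and names are illustrative.

WORD_BITS = 32                      # width the counting hardware handles per pass

def full_adder_reduce(bits):
    """Reduce a list of 0/1 values with full adders: every 3 bits become a sum
    bit (same weight) and a carry bit (double weight). Returns the bit count."""
    total, weight = 0, 1
    while bits:
        sums, carries = [], []
        for i in range(0, len(bits) - 2, 3):
            a, b, c = bits[i:i + 3]
            sums.append(a ^ b ^ c)                            # weight stays the same
            carries.append((a & b) | (b & c) | (a & c))       # weight doubles
        rem = len(bits) % 3
        leftover = bits[-rem:] if rem else []
        total += weight * sum(sums + leftover)                # fold this weight level
        bits, weight = carries, weight * 2
    return total

def popcount(value):
    """Count bits set to one in an arbitrarily long value, WORD_BITS at a time."""
    count = 0
    while value:
        chunk = value & ((1 << WORD_BITS) - 1)                # chunk selected by the shifter
        value >>= WORD_BITS
        count += full_adder_reduce([(chunk >> i) & 1 for i in range(WORD_BITS)])
    return count

x = (1 << 100) | 0xFF | (1 << 63)
print(popcount(x), bin(x).count("1"))                         # both print 10
```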