    • 1. Granted Invention
    • Title: Multi processor enqueue packet circuit
    • Publication No.: US07174394B1
    • Publication Date: 2007-02-06
    • Application No.: US10171957
    • Filing Date: 2002-06-14
    • Inventors: Trevor Garner, Kenneth H. Potter, Robert Leroy King, William R. Lee
    • IPC: G06F3/00
    • CPC: G06F5/065, G06F2205/064
    • Abstract: The present invention provides a system and method for a plurality of independent processors to simultaneously assemble requests in a context memory coupled to a coprocessor. A write manager coupled to the context memory organizes segments received from multiple processors to form requests for the coprocessor. Each received segment indicates a location in the context memory, such as an indexed memory block, where the segment should be stored. Illustratively, the write manager parses the received segments to their appropriate blocks of the context memory, and detects when the last segment for a request has been received. The last segment may be identified according to a predetermined address bit, e.g., an upper-order bit, that is set. When the write manager receives the last segment for a request, the write manager (1) finishes assembling the request in a block of the context memory, (2) enqueues an index associated with the memory block in an index FIFO, and (3) sets a valid bit associated with the memory block. By setting the valid bit, the write manager prevents newly received segments from overwriting an assembled request that has not yet been forwarded to the coprocessor. When an index reaches the head of the index FIFO, a request is dequeued from the indexed block of the context memory and forwarded to the coprocessor.
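The write-manager flow this abstract describes — per-block assembly, a "last segment" address bit, a valid bit, and an index FIFO — can be sketched in software. This is a minimal illustrative model, not the patented circuit; the class name, the choice of bit 31 as the last-segment flag, and the list-based context memory are all assumptions for the sketch.

```python
from collections import deque

class WriteManager:
    """Sketch of the write manager: assembles per-block requests from
    interleaved segments and releases completed requests in FIFO order."""
    LAST_FLAG = 1 << 31  # hypothetical upper-order "last segment" address bit

    def __init__(self, num_blocks):
        self.blocks = [[] for _ in range(num_blocks)]   # context memory blocks
        self.valid = [False] * num_blocks               # per-block valid bits
        self.index_fifo = deque()                       # indices of ready blocks

    def write_segment(self, addr, data):
        index = addr & ~self.LAST_FLAG
        if self.valid[index]:
            # Block holds an assembled request not yet forwarded: reject overwrite.
            return False
        self.blocks[index].append(data)
        if addr & self.LAST_FLAG:          # last segment of this request
            self.valid[index] = True
            self.index_fifo.append(index)  # enqueue the block's index
        return True

    def dequeue_request(self):
        if not self.index_fifo:
            return None
        index = self.index_fifo.popleft()
        request = self.blocks[index]
        self.blocks[index] = []
        self.valid[index] = False          # block may now be reused
        return request
```

In this model the valid bit is what keeps a slow coprocessor from losing requests: a processor that reuses a block index before the request is dequeued simply gets its segment rejected.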
    • 2. Granted Invention
    • Title: Computer system for eliminating memory read-modify-write operations during packet transfers
    • Publication No.: US06708258B1
    • Publication Date: 2004-03-16
    • Application No.: US09881280
    • Filing Date: 2001-06-14
    • Inventors: Kenneth H. Potter, Trevor Garner
    • IPC: G06F13/00
    • CPC: G06F13/385, H04L69/12
    • Abstract: A computer system stores packet data while reducing the number of Read-Modify-Write (RMW) operations. A buffer defines memory lines, each holding a discrete number of bytes, and the processor addresses the buffer through a memory address register. An attribute, a new bit in that register, specifies the mode of operation: a set value indicates that a RMW operation is to be performed, while a clear value instructs the processor to pad the packet data to equal one or more complete memory lines, so that the padded packet data are stored only in full memory lines rather than through an expensive RMW operation. When the data include error correction code it is not necessary to perform a RMW, and the padding to fill a memory line is done.
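The pad-versus-RMW choice the abstract describes can be illustrated with a small model of line-granular memory. A rough sketch, assuming a 64-byte line and a line-aligned start address; the function name and the bytearray model of memory are inventions of the sketch, not the patent's interface.

```python
LINE_SIZE = 64  # hypothetical memory-line width in bytes

def store_packet(memory, addr, packet, pad_mode):
    """Store packet data at a line-aligned address. With pad_mode set
    (the clear attribute-bit case in the abstract), pad the tail so every
    write replaces complete lines; otherwise fall back to a
    read-modify-write of the final partial line."""
    if pad_mode:
        pad_len = (-len(packet)) % LINE_SIZE
        data = packet + bytes(pad_len)               # full-line writes only
        memory[addr:addr + len(data)] = data
    else:
        full = len(packet) - (len(packet) % LINE_SIZE)
        memory[addr:addr + full] = packet[:full]
        if full < len(packet):                       # RMW the last partial line
            line = bytearray(memory[addr + full:addr + full + LINE_SIZE])  # read
            tail = packet[full:]
            line[:len(tail)] = tail                  # modify
            memory[addr + full:addr + full + LINE_SIZE] = line             # write
    return addr + len(packet)
```

The pad path touches memory once per line; the RMW path must first read the old line back, which is the extra round trip the patent eliminates.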
    • 3. Granted Invention
    • Title: Split transaction reordering circuit
    • Publication No.: US07124231B1
    • Publication Date: 2006-10-17
    • Application No.: US10172172
    • Filing Date: 2002-06-14
    • Inventors: Trevor Garner, Kenneth H. Potter, Hong-Man Wu
    • IPC: G06F13/36
    • CPC: G06F13/4059
    • Abstract: The present invention provides a technique for ordering responses received over a split transaction bus, such as a HyperTransport bus (HPT). When multiple non-posted requests are sequentially issued over the split transaction bus, control logic is used to assign each request an identifying (ID) number, e.g., up to a maximum number of outstanding requests. Similarly, each response received over the split transaction bus is assigned the same ID number as its corresponding request. Accordingly, a “response memory” comprises a unique memory block for every possible ID number, and the control logic directs a received response to its corresponding memory block. The responses are extracted from blocks of response memory in accordance with a predetermined set of ordering rules. For example, the responses may be accessed in the same order the corresponding non-posted requests were issued.
    • 4. Granted Invention
    • Title: Apparatus and technique for maintaining order among requests directed to a same address on an external bus of an intermediate network node
    • Publication No.: US06832279B1
    • Publication Date: 2004-12-14
    • Application No.: US09859709
    • Filing Date: 2001-05-17
    • Inventors: Kenneth H. Potter, Trevor Garner
    • IPC: G06F13/00
    • CPC: G06F13/1621, G06F13/405
    • Abstract: An apparatus and technique off-load responsibility for maintaining order among requests directed to a same address on a split transaction bus from a processor to a split transaction bus controller, thereby increasing the performance of the processor. The present invention comprises an ordering circuit that enables the controller to defer issuing a subsequent (write) request directed to an address on the bus until a previous (read) request directed to the same address completes. By off-loading responsibility for maintaining order among requests from the processor to the controller, the invention enhances performance of the processor, since the processor may proceed with program execution without having to stall to ensure such ordering. The ordering circuit maintains ordering in an efficient manner that is transparent to the processor.
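The same-address deferral rule — hold a write while a read to that address is outstanding, then release it when the read completes — can be sketched as a small controller model. Names and the list-as-bus representation are hypothetical; this only illustrates the ordering behavior, not the hardware.

```python
from collections import deque

class SameAddressOrderer:
    """Sketch of the ordering circuit: the controller defers a write to an
    address while a read of that same address is outstanding, so the
    processor never stalls for ordering itself."""
    def __init__(self):
        self.pending_reads = set()       # addresses with an outstanding read
        self.deferred_writes = deque()   # writes held back by the rule

    def issue_read(self, addr):
        self.pending_reads.add(addr)

    def issue_write(self, addr, data, bus):
        if addr in self.pending_reads:
            self.deferred_writes.append((addr, data))  # hold until read completes
        else:
            bus.append((addr, data))                   # unrelated writes go out

    def read_complete(self, addr, bus):
        self.pending_reads.discard(addr)
        still_deferred = deque()
        for a, d in self.deferred_writes:
            if a in self.pending_reads:
                still_deferred.append((a, d))
            else:
                bus.append((a, d))                     # now safe to issue
        self.deferred_writes = still_deferred
```

Note that writes to other addresses pass straight through; only the same-address conflict is serialized.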
    • 5. Granted Invention
    • Title: Apparatus and technique for maintaining order among requests issued over an external bus of an intermediate network node
    • Publication No.: US06757768B1
    • Publication Date: 2004-06-29
    • Application No.: US09859707
    • Filing Date: 2001-05-17
    • Inventors: Kenneth H. Potter, Trevor Garner
    • IPC: G06F13/00
    • CPC: G06F13/36
    • Abstract: An apparatus and technique off-load responsibility for maintaining order among requests issued over a split transaction bus from a processor to a split transaction bus controller, thereby increasing the performance of the processor. A logic circuit enables the controller to defer issuing a subsequent (write) request directed to an address on the bus until all pending (read) requests complete. By off-loading responsibility for maintaining order among requests from the processor to the controller, the invention enhances performance of the processor, since the processor may proceed with program execution without having to stall to ensure such ordering. The logic circuit maintains the order of the requests in an efficient manner that is transparent to the processor.
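This entry's rule is stronger than the same-address variant above: a write is deferred until *all* pending reads complete, which a single outstanding-read counter captures. A rough sketch with hypothetical names, illustrating only the flush-on-zero behavior.

```python
class AllReadsOrderer:
    """Sketch: defer any subsequent write until every pending read on the
    bus has completed, then flush the held writes in order."""
    def __init__(self):
        self.outstanding_reads = 0
        self.deferred_writes = []

    def issue_read(self):
        self.outstanding_reads += 1

    def issue_write(self, addr, data, bus):
        if self.outstanding_reads:
            self.deferred_writes.append((addr, data))  # held behind all reads
        else:
            bus.append((addr, data))

    def read_complete(self, bus):
        self.outstanding_reads -= 1
        if self.outstanding_reads == 0:
            bus.extend(self.deferred_writes)           # flush once reads drain
            self.deferred_writes.clear()
```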
    • 6. Granted Invention
    • Title: System and method for decrementing a reference count in a multicast environment
    • Publication No.: US06895481B1
    • Publication Date: 2005-05-17
    • Application No.: US10189660
    • Filing Date: 2002-07-03
    • Inventors: John W. Mitten, William R. Lee, Kenneth H. Potter
    • IPC: G06F12/00, H04L12/18
    • CPC: H04L12/1854
    • Abstract: A method for decrementing a reference count in a multicast environment is provided that includes receiving an access request for a particle stored in a memory element. The memory element is then accessed in response to the access request, the particle being read from the memory element. The particle includes a plurality of data segments, a selected one or more of which includes a first reference count associated with the particle. The particle is then presented to a target that generated the access request. The first reference count associated with the selected one or more data segments is then decremented in order to generate a second reference count. At least one of the plurality of data segments with the second reference count is then written to the memory element.
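The read–present–decrement–write-back sequence reads naturally as a single access function. A minimal sketch, assuming the reference count occupies the first data segment; the function name, the dict-as-memory model, and the segment layout are all assumptions for illustration.

```python
def access_particle(memory, index, refcount_segment=0):
    """Sketch: read a multicast particle's segments, return the particle to
    the requesting target, and write back the segment holding the
    decremented reference count."""
    particle = list(memory[index])             # read all data segments
    first_count = particle[refcount_segment]   # first reference count
    second_count = first_count - 1             # decrement on each access
    updated = list(particle)
    updated[refcount_segment] = second_count   # segment carrying new count
    memory[index] = updated                    # write back to memory element
    return particle, second_count
```

The target still sees the particle with its original count; only the copy in memory carries the decremented value, which lets the last multicast consumer detect when the particle can be freed.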
    • 7. Granted Invention
    • Title: Group and virtual locking mechanism for inter processor synchronization
    • Publication No.: US06529983B1
    • Publication Date: 2003-03-04
    • Application No.: US09432464
    • Filing Date: 1999-11-03
    • Inventors: John William Marshall, Kenneth H. Potter
    • IPC: G06F12/00
    • CPC: G06F9/52, G06F9/526, G06F2209/522
    • Abstract: A group and virtual locking mechanism (GVLM) addresses two classes of synchronization present in a system having resources that are shared by a plurality of processors: (1) synchronization of the multi-access shared resources; and (2) simultaneous requests for the shared resources. The system is a programmable processing engine comprising an array of processor complex elements, each having a microcontroller processor. The processor complexes are preferably arrayed as rows and columns. Broadly stated, the novel GVLM comprises a lock controller function associated with each column of processor complexes, and lock instructions executed by the processors that manipulate the lock controller to create a tightly integrated arrangement for issuing lock requests to the shared resources.
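The per-column lock controller can be approximated in software as an arbiter that serializes simultaneous lock requests against named shared resources. This is only a loose software analogy of the GVLM's hardware controller; the class, method names, and resource keys are hypothetical.

```python
import threading

class ColumnLockController:
    """Sketch of one per-column lock controller: processors in the column
    issue lock/unlock requests against shared resources; simultaneous
    requests are serialized by the controller's internal arbiter."""
    def __init__(self):
        self._guard = threading.Lock()   # stands in for hardware arbitration
        self._owners = {}                # resource -> owning processor id

    def try_lock(self, processor_id, resource):
        with self._guard:                # simultaneous requests serialize here
            if resource not in self._owners:
                self._owners[resource] = processor_id
                return True
            return self._owners[resource] == processor_id  # re-grant to owner

    def unlock(self, processor_id, resource):
        with self._guard:
            if self._owners.get(resource) == processor_id:
                del self._owners[resource]
```

A processor whose `try_lock` fails would retry its lock instruction; granting repeat requests from the current owner keeps the lock idempotent for that processor.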