    • 1. Invention Grant
    • Title: Fast and highly scalable quota-based weighted arbitration
    • Publication number: US08667200B1
    • Publication date: 2014-03-04
    • Application number: US12712109
    • Filing date: 2010-02-24
    • Inventors: Lukito Muliadi, Raymond Hoi Man Wong, Madhukiran V. Swarna, Samuel H. Duncan
    • IPC: G06F12/00; H04L12/28
    • CPC: G06F13/364; H04L47/623; H04L47/821
    • One embodiment of the present invention sets forth a technique for arbitrating between a set of requesters that transmit data transmission requests to the weighted LRU arbiter. Each data transmission request is associated with a specific amount of data to be transmitted over the crossbar unit. Based on the priority state associated with each requester, the weighted LRU arbiter then selects the requester in the set of requesters with the highest priority. The weighted LRU arbiter then decrements the weight associated with the selected requester stored in a corresponding weight store based on the size of the data to be transmitted. If the decremented weight is equal to or less than zero, then the priority associated with the selected requester is set to a lowest priority. If, however, the decremented weight is greater than zero, then the priority associated with the selected requester is not changed.
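A minimal C sketch of one possible reading of the arbitration loop described in the abstract above. The requester_t fields, the LRU-style rotation applied to the other requesters, and the REFILL_WEIGHT quota refill are illustrative assumptions, not details taken from the patent; the abstract itself only specifies picking the highest-priority pending requester, charging its weight by the size of the transfer, and demoting it to lowest priority once the weight reaches zero or below.

```c
/* Sketch of a quota-based weighted LRU arbiter, loosely following the
 * abstract of US08667200B1. Type names, the priority rotation of the other
 * requesters, and the quota refill are illustrative assumptions. */
#include <stdio.h>

#define NUM_REQUESTERS 4
#define REFILL_WEIGHT  256   /* assumed per-requester quota in bytes */

typedef struct {
    int pending;   /* nonzero if the requester has a transmission request */
    int size;      /* bytes to transmit for the pending request */
    int weight;    /* remaining quota */
    int priority;  /* larger value = higher priority */
} requester_t;

/* Grant the pending requester with the highest priority, charge its quota
 * by the transfer size, and demote it to lowest priority once the quota is
 * exhausted. Returns the index of the granted requester, or -1 if none. */
static int arbitrate(requester_t req[], int n)
{
    int winner = -1;
    for (int i = 0; i < n; i++) {
        if (req[i].pending &&
            (winner < 0 || req[i].priority > req[winner].priority))
            winner = i;
    }
    if (winner < 0)
        return -1;                        /* nothing to grant */

    req[winner].weight -= req[winner].size;
    if (req[winner].weight <= 0) {
        /* Quota exhausted: every requester below the winner moves up one
         * slot, the winner drops to lowest priority, and its quota is
         * refilled (the refill policy is an assumption, not patent text). */
        for (int i = 0; i < n; i++)
            if (req[i].priority < req[winner].priority)
                req[i].priority++;
        req[winner].priority = 0;
        req[winner].weight += REFILL_WEIGHT;
    }
    req[winner].pending = 0;
    return winner;
}

int main(void)
{
    requester_t req[NUM_REQUESTERS] = {
        { 1, 200, 256, 3 }, { 1,  64, 256, 2 },
        { 1, 300, 256, 1 }, { 0,   0, 256, 0 },
    };
    printf("granted requester %d\n", arbitrate(req, NUM_REQUESTERS));
    return 0;
}
```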
    • 7. Invention Grant
    • Title: Passive release avoidance technique
    • Publication number: US07024509B2
    • Publication date: 2006-04-04
    • Application number: US09944515
    • Filing date: 2001-08-31
    • Inventors: Samuel H. Duncan, Steven Ho
    • IPC: G06F13/36
    • CPC: H03K5/19; G06F13/24; G06F13/4081; G06F2213/2402
    • A system and method avoids passive release of interrupts in a computer system. The computer system includes a plurality of processors, a plurality of input/output (I/O) devices each capable of issuing interrupts, and an I/O bridge interfacing between the I/O devices and the processors. Interrupts, such as level sensitive interrupts (LSIs), asserted by an I/O device coupled to a specific port of the I/O bridge are sent to a processor for servicing by an interrupt controller, which also sets an interrupt pending flag. Upon dispatching the respective interrupt service routine, the processor generates two ordered messages. The first ordered message is sent to the I/O device that triggered the interrupt, informing it that the interrupt has been serviced. The second ordered message directs the interrupt controller to clear the respective interrupt pending flag. Both messages are sent, in order, to the particular I/O bridge port to which the subject I/O device is coupled. After forwarding the first message to the I/O device, the bridge port forwards the second message to the interrupt controller so that the interrupt can be deasserted before the interrupt pending flag is cleared.
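A minimal C sketch, under assumed names, of the ordered-message rule in the abstract above. The irq_state_t structure and the two message functions are illustrative; the point carried over from the abstract is only the ordering, namely that the bridge port forwards the service-done message to the I/O device before the clear-pending message reaches the interrupt controller, so the level-sensitive interrupt is deasserted before its pending flag is cleared.

```c
/* Sketch of the ordered-message flow described in US07024509B2. The data
 * structure and both message functions are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool irq_asserted;     /* level-sensitive interrupt line from the device */
    bool pending_flag;     /* pending flag kept by the interrupt controller */
} irq_state_t;

/* Message 1: tell the I/O device its interrupt has been serviced. */
static void msg_service_done(irq_state_t *s)
{
    s->irq_asserted = false;             /* device deasserts its LSI */
}

/* Message 2: tell the interrupt controller to clear the pending flag. */
static void msg_clear_pending(irq_state_t *s)
{
    s->pending_flag = false;
}

int main(void)
{
    irq_state_t s = { .irq_asserted = true, .pending_flag = true };

    /* The bridge port preserves ordering: forward message 1 to the device,
     * then message 2 to the interrupt controller. Reversing these calls
     * would clear the flag while the LSI is still asserted, which is the
     * passive-release hazard the technique avoids. */
    msg_service_done(&s);
    msg_clear_pending(&s);

    printf("irq_asserted=%d pending_flag=%d\n", s.irq_asserted, s.pending_flag);
    return 0;
}
```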
    • 8. Invention Grant
    • Title: Scalable efficient I/O port protocol
    • Publication number: US06738836B1
    • Publication date: 2004-05-18
    • Application number: US09652391
    • Filing date: 2000-08-31
    • Inventors: Richard E. Kessler, Samuel H. Duncan, David W. Hartwell, David A. J. Webb, Jr., Steve Lang
    • IPC: G06F13/00
    • CPC: G06F15/17381; G06F12/0817; G06F2212/621
    • A system that supports a high performance, scalable, and efficient I/O port protocol to connect to I/O devices is disclosed. A distributed multiprocessing computer system contains a number of processors each coupled to an I/O bridge ASIC implementing the I/O port protocol. One or more I/O devices are coupled to the I/O bridge ASIC, each I/O device capable of accessing machine resources in the computer system by transmitting and receiving message packets. Machine resources in the computer system include data blocks, registers and interrupt queues. Each processor in the computer system is coupled to a memory module capable of storing data blocks shared between the processors. Coherence of the shared data blocks in this shared memory system is maintained using a directory based coherence protocol. Coherence of data blocks transferred during I/O device read and write accesses is maintained using the same coherence protocol as for the memory system. Data blocks transferred during an I/O device read or write access may be buffered in a cache by the I/O bridge ASIC only if the I/O bridge ASIC has exclusive copies of the data blocks. The I/O bridge ASIC includes a DMA device that supports both in-order and out-of-order DMA read and write streams of data blocks. An in-order stream of reads of data blocks performed by the DMA device always results in the DMA device receiving coherent data blocks that do not have to be written back to the memory module.
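A minimal C sketch of the buffering rule stated in the abstract above, namely that the I/O bridge ASIC may buffer a data block only while it holds an exclusive copy under the directory-based coherence protocol. The cache_state_t values, the directory_request_exclusive() helper, and the bridge_dma_write() entry point are illustrative assumptions, not the patent's actual protocol machinery.

```c
/* Sketch of the exclusive-only buffering rule from US06738836B1. The cache
 * states and the directory helper are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE } cache_state_t;

typedef struct {
    cache_state_t state;   /* bridge's coherence state for this block */
    unsigned long data;    /* buffered block contents */
} bridge_block_t;

/* Assumed stand-in for a directory request that grants exclusive ownership. */
static bool directory_request_exclusive(bridge_block_t *blk)
{
    blk->state = EXCLUSIVE;   /* in a real system the home directory decides */
    return true;
}

/* The bridge buffers a block for a DMA write only when it owns the block
 * exclusively; otherwise it must obtain exclusive ownership first. */
static void bridge_dma_write(bridge_block_t *blk, unsigned long value)
{
    if (blk->state != EXCLUSIVE && !directory_request_exclusive(blk)) {
        printf("exclusive ownership denied, write not buffered\n");
        return;
    }
    blk->data = value;        /* safe to buffer: no other coherent copy exists */
}

int main(void)
{
    bridge_block_t blk = { .state = SHARED, .data = 0 };
    bridge_dma_write(&blk, 0xdeadbeefUL);
    printf("state=%d data=%#lx\n", blk.state, blk.data);
    return 0;
}
```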