    • 1. Granted Patent
    • Title: Method and apparatus for coalescing acknowledge packets within a server
    • Publication No.: US07324525B2
    • Publication Date: 2008-01-29
    • Application No.: US11008799
    • Filing Date: 2004-12-09
    • Inventors: Ronald E. Fuhs; Calvin C. Paynton; Steven L. Rogers; Nathaniel P. Sellin; Scott M. Willenborg
    • IPC: H04L12/28; H04L12/66
    • CPC: H04L12/56
    • Abstract: A method for coalescing acknowledge packets within a server is disclosed. A Read Request queue having multiple queue pair entries is provided. Each of the queue pair entries includes a packet sequence number (PSN) field and an indicator field. In response to a receipt of a Write Request packet, an indicator field of a queue pair entry is set to indicate that an Ack packet has been queued within the queue pair entry, and a PSN of the Write Request packet is written into a PSN field of the queue pair entry. In addition, a Queue Write Pointer is maintained to point to the queue pair entry. In response to a receipt of a Read Request packet, the indicator field of the queue pair entry is set to indicate that a Read Request packet has been queued within the queue pair entry, and a PSN of the Read Request packet is written into the PSN field of the queue pair entry. Also, the Queue Write Pointer is advanced to point to a queue pair entry that is subsequent to the queue pair entry.
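As a reading aid for the abstract above, here is a minimal Python sketch of the described queue behavior: write requests coalesce into a single queued Ack in the current entry (the write pointer stays put), while a read request occupies that entry and advances the pointer. All identifiers (ReadRequestQueue, on_write_request, and so on) are my own illustrative choices, not names from the patent.

```python
ACK = "ack"                  # entry holds a coalesced acknowledge
READ_REQUEST = "read_req"    # entry holds a queued read request

class ReadRequestQueue:
    def __init__(self, depth):
        # each queue pair entry carries a packet sequence number (PSN)
        # field and an indicator field saying what is queued in it
        self.entries = [{"psn": None, "kind": None} for _ in range(depth)]
        self.write_ptr = 0   # the Queue Write Pointer

    def on_write_request(self, psn):
        # queue (or coalesce) an Ack in the current entry; later write
        # requests overwrite the PSN, so one Ack acknowledges all of them
        entry = self.entries[self.write_ptr]
        entry["kind"] = ACK
        entry["psn"] = psn
        # the write pointer is not advanced, so further Acks coalesce here

    def on_read_request(self, psn):
        # a read request takes the current entry and closes the coalescing
        # window by advancing the Queue Write Pointer to the next entry
        entry = self.entries[self.write_ptr]
        entry["kind"] = READ_REQUEST
        entry["psn"] = psn
        self.write_ptr = (self.write_ptr + 1) % len(self.entries)

if __name__ == "__main__":
    q = ReadRequestQueue(depth=4)
    q.on_write_request(psn=10)
    q.on_write_request(psn=11)   # coalesces: the queued Ack's PSN becomes 11
    q.on_read_request(psn=12)    # read request queued, pointer advances
    print(q.entries[:2], "write_ptr =", q.write_ptr)
```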
    • 2. Granted Patent
    • Title: Multi-threaded packet processing
    • Publication No.: US08934332B2
    • Publication Date: 2015-01-13
    • Application No.: US13408575
    • Filing Date: 2012-02-29
    • Inventors: Ronald E. Fuhs; Scott M. Willenborg
    • IPC: G06F11/00; G01R31/08; H04L12/24; G06F3/00; G06F15/00; G06F7/38
    • CPC: G06F9/5083; G06F2209/509
    • Abstract: A system is disclosed for concurrently processing order sensitive data packets. A first data packet from a plurality of sequentially ordered data packets is directed to a first offload engine. A second data packet from the plurality of sequentially ordered data packets is directed to a second offload engine, wherein the second data packet is sequentially subsequent to the first data packet. The second offload engine receives information from the first offload engine, wherein the information reflects that the first offload engine is processing the first data packet. Based on the information received at the second offload engine, the second offload engine processes the second data packet so that critical events in the processing of the first data packet by the first offload engine occur prior to critical events in the processing of the second data packet by the second offload engine.
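A hedged sketch of the ordering scheme above, using two Python threads as stand-ins for the offload engines: the engine handling the later packet waits on a signal that the earlier engine has passed its critical event, so non-critical work can overlap while critical events stay in packet order. The names and the use of threading.Event are assumptions of mine; the abstract does not specify a mechanism for passing the information between engines.

```python
import threading

def offload_engine(name, packet, critical_done, wait_for=None):
    print(f"{name}: start processing {packet}")    # non-critical work may overlap freely
    if wait_for is not None:
        wait_for.wait()                            # information from the prior engine
    print(f"{name}: critical event for {packet}")  # critical events occur in packet order
    critical_done.set()                            # publish our own progress

if __name__ == "__main__":
    done1, done2 = threading.Event(), threading.Event()
    e1 = threading.Thread(target=offload_engine, args=("engine-1", "pkt-1", done1))
    e2 = threading.Thread(target=offload_engine, args=("engine-2", "pkt-2", done2, done1))
    e2.start()
    e1.start()   # start order does not matter; engine-2 still defers its critical event
    e1.join()
    e2.join()
```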
    • 3. Granted Patent
    • Title: Speculative credit data flow control
    • Publication No.: US07855954B2
    • Publication Date: 2010-12-21
    • Application No.: US12176810
    • Filing Date: 2008-07-21
    • Inventors: Scott M. Willenborg; Ronald E. Fuhs
    • IPC: H04L12/26
    • CPC: H04L47/10; H04L47/11; H04L47/2416; H04L47/39
    • Abstract: A method of speculative credit data flow control includes defining a low watermark value as a function of a number of open buffers in a receiving unit; receiving a data packet from a sending unit; determining whether the data packet includes a packet delay indicator; defining a first speculative credit value responsive to receiving the packet delay indicator; defining a second speculative credit value as a function of the first speculative credit value added to a regular credit value; generating a flow control packet including the second speculative credit value; and sending the flow control packet to the sending unit.
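The method above is a sequence of steps on the receiving side, and the Python sketch below follows that sequence. The abstract does not give the actual functions for the low watermark or for the first speculative credit value, so the formulas here are placeholder assumptions; only the order of steps mirrors the text.

```python
def build_flow_control_packet(open_buffers, packet, regular_credit):
    # low watermark defined as a function of the number of open buffers
    low_watermark = open_buffers // 4              # assumed placeholder function
    speculative = 0
    if packet.get("delay_indicator"):              # packet carries a packet delay indicator
        speculative = max(low_watermark, 1)        # assumed first speculative credit value
    # second speculative credit value: first value added to the regular credit
    total_credit = regular_credit + speculative
    # flow control packet carrying the second speculative credit value
    return {"type": "flow_control", "credit": total_credit}

if __name__ == "__main__":
    pkt = {"payload": b"...", "delay_indicator": True}
    fc = build_flow_control_packet(open_buffers=16, packet=pkt, regular_credit=8)
    print(fc)   # would be sent back to the sending unit
```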
    • 5. Patent Application
    • Title: PREFETCHING FOR A SHARED DIRECT MEMORY ACCESS (DMA) ENGINE
    • Publication No.: US20130268700A1
    • Publication Date: 2013-10-10
    • Application No.: US13438864
    • Filing Date: 2012-04-04
    • Inventors: Ronald E. Fuhs; Scott M. Willenborg
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A system is disclosed for fetching control instructions for a direct memory access (DMA) engine shared between a plurality of threads. For a data transfer from a first thread by a DMA engine, the DMA engine fetches and processes a predetermined number of control instructions (or work queue elements) for the data transfer, each of the control instructions including an amount and location of data to transfer. The DMA engine determines a total amount of data transferred as a result of the data transfer. The DMA engine then determines a difference between the total amount of data transferred and a threshold amount of data, wherein the threshold amount of data indicates a preferred amount of data to be transferred for the first thread. The predetermined number of control instructions to fetch is updated based on the determined difference.
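A small Python sketch of the feedback loop in the abstract above: fetch a predetermined number of work queue elements (WQEs), measure the total bytes actually moved, compare against a per-thread threshold, and adjust the fetch count from the difference. The specific increment/decrement rule is an assumption; the abstract only states that the count is updated based on the difference.

```python
def run_transfer(wqe_queue, fetch_count, threshold_bytes):
    batch = wqe_queue[:fetch_count]               # fetch a predetermined number of WQEs
    total = sum(wqe["length"] for wqe in batch)   # total data moved by this transfer
    diff = total - threshold_bytes                # compare against the preferred amount
    if diff < 0:
        fetch_count += 1                          # under target: fetch more next time
    elif diff > 0 and fetch_count > 1:
        fetch_count -= 1                          # over target: fetch fewer next time
    return total, fetch_count

if __name__ == "__main__":
    wqes = [{"addr": 0x1000 * i, "length": 4096} for i in range(8)]
    moved, next_count = run_transfer(wqes, fetch_count=2, threshold_bytes=16384)
    print(moved, next_count)   # 8192 bytes moved, fetch count grows toward the target
```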
    • 6. Patent Application
    • Title: MULTI-THREADED PACKET PROCESSING
    • Publication No.: US20130223234A1
    • Publication Date: 2013-08-29
    • Application No.: US13408575
    • Filing Date: 2012-02-29
    • Inventors: Ronald E. Fuhs; Scott M. Willenborg
    • IPC: H04L12/56; H04L12/26
    • CPC: G06F9/5083; G06F2209/509
    • Abstract: A system is disclosed for concurrently processing order sensitive data packets. A first data packet from a plurality of sequentially ordered data packets is directed to a first offload engine. A second data packet from the plurality of sequentially ordered data packets is directed to a second offload engine, wherein the second data packet is sequentially subsequent to the first data packet. The second offload engine receives information from the first offload engine, wherein the information reflects that the first offload engine is processing the first data packet. Based on the information received at the second offload engine, the second offload engine processes the second data packet so that critical events in the processing of the first data packet by the first offload engine occur prior to critical events in the processing of the second data packet by the second offload engine.
    • 7. Granted Patent
    • Title: Prefetching for a shared direct memory access (DMA) engine
    • Publication No.: US08578069B2
    • Publication Date: 2013-11-05
    • Application No.: US13438864
    • Filing Date: 2012-04-04
    • Inventors: Ronald E. Fuhs; Scott M. Willenborg
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A system is disclosed for fetching control instructions for a direct memory access (DMA) engine shared between a plurality of threads. For a data transfer from a first thread by a DMA engine, the DMA engine fetches and processes a predetermined number of control instructions (or work queue elements) for the data transfer, each of the control instructions including an amount and location of data to transfer. The DMA engine determines a total amount of data transferred as a result of the data transfer. The DMA engine then determines a difference between the total amount of data transferred and a threshold amount of data, wherein the threshold amount of data indicates a preferred amount of data to be transferred for the first thread. The predetermined number of control instructions to fetch is updated based on the determined difference.