    • 1. Granted invention patent
    • Title: Method and apparatus for forwarding packets from a plurality of contending queues to an output
    • Publication No.: US06067301A
    • Publication Date: 2000-05-23
    • Application No.: US87064
    • Filing Date: 1998-05-29
    • Inventor(s): Deepak J. Aatresh
    • IPC: H04L12/56; H04Q1/04; H04L12/28
    • CPC: H04L47/10; H04L47/2441; H04L47/2458; H04L47/762; H04L2012/5651
    • Abstract: A method and apparatus for forwarding packets from contending queues of a multiport switch to an output of finite bandwidth involve first prioritizing the contending queues into different priorities that relate to the priorities of the packets being forwarded in the network. The bandwidth of the output is then allocated among the prioritized contending queues, and that bandwidth is consumed by the queued packets according to the allocated proportions. Any unconsumed bandwidth is distributed to the queues on a priority basis, such that the highest-priority queue is offered the unconsumed bandwidth first and lower-priority queues are offered the remaining unconsumed bandwidth in priority order. An advantage of the invention is that queues are not starved of bandwidth by higher-priority queues, and unconsumed bandwidth is not wasted when there are not enough packets to consume an allocated portion of the output bandwidth. The method is adjustable during normal network operation through a programming interface to provide a specified quality of service. (An illustrative sketch of this allocation scheme follows this entry.)
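The allocation scheme described in this abstract can be pictured with a minimal Python sketch. It assumes per-queue configured shares and current demands expressed in the same units as the output bandwidth; the function name `allocate_bandwidth` and the dictionary fields `share` and `demand` are illustrative, not taken from the patent.

```python
def allocate_bandwidth(queues, total_bandwidth):
    """Allocate output bandwidth among prioritized queues and redistribute
    any unconsumed share to the queues in priority order.

    `queues` is ordered from highest to lowest priority; each entry carries a
    configured `share` (fraction of the output) and a `demand` (bandwidth its
    backlog could actually consume). All names here are illustrative.
    """
    # Step 1: give each queue its configured proportion of the output.
    grants = []
    unconsumed = 0.0
    for q in queues:
        allocated = q["share"] * total_bandwidth
        consumed = min(allocated, q["demand"])
        grants.append(consumed)
        unconsumed += allocated - consumed   # bandwidth this queue could not use

    # Step 2: offer the unconsumed bandwidth to queues in priority order,
    # highest priority first, so nothing is wasted and no queue is starved.
    for i, q in enumerate(queues):
        if unconsumed <= 0:
            break
        extra = min(unconsumed, q["demand"] - grants[i])
        if extra > 0:
            grants[i] += extra
            unconsumed -= extra
    return grants


# Example: three queues on a 100 Mb/s output; the top queue cannot use its
# full 50% share, so the remainder flows down in priority order.
queues = [
    {"share": 0.5, "demand": 30.0},   # highest priority
    {"share": 0.3, "demand": 50.0},
    {"share": 0.2, "demand": 40.0},   # lowest priority
]
print(allocate_bandwidth(queues, 100.0))  # -> [30.0, 50.0, 20.0]
```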
    • 3. Granted invention patent
    • Title: Method and system for rate shaping in packet-based computer networks
    • Publication No.: US06798741B2
    • Publication Date: 2004-09-28
    • Application No.: US10007409
    • Filing Date: 2001-12-05
    • Inventor(s): Sandeep Lodha; Deepak J. Aatresh
    • IPC: G01R31/08
    • CPC: H04L47/527; H04L1/0002; H04L25/0262; H04L47/10; H04L47/11; H04L47/225; H04L47/2441; H04L47/32; Y02D50/10
    • Abstract: The flow of packet-based traffic is controlled to meet a desired rate by calculating, as a moving average, the current rate of packet-based traffic on a link, calculating the sum of the error between the calculated current rate and the desired rate, and determining whether or not packets can flow in response to the calculated sum of the error. When the sum of the error between the current rate and the desired rate indicates that the current rate is less than the desired rate, packets are allowed to flow; when the sum of the error indicates that the current rate is greater than the desired rate, packets are not allowed to flow. The magnitude of bursts can also be controlled by artificially controlling the minimum values of the current rate and the sum of the error. The flow control algorithm can be used for rate shaping or rate limiting. (A minimal sketch of this control loop follows this entry.)
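A minimal Python sketch of the control loop described above, assuming a fixed update tick and an exponential moving average for the current rate; the class name `RateShaper`, the smoothing factor `alpha`, and the clamping floors are illustrative assumptions rather than details from the patent.

```python
class RateShaper:
    """Gate traffic on the sign of the accumulated error between the
    moving-average current rate and the desired rate."""

    def __init__(self, desired_rate, alpha=0.1, min_rate=0.0, min_error_sum=-1e6):
        self.desired_rate = desired_rate   # target rate, e.g. bytes per tick
        self.alpha = alpha                 # moving-average smoothing factor (assumed)
        self.current_rate = 0.0            # moving-average estimate of the link rate
        self.error_sum = 0.0               # accumulated (current - desired) error
        self.min_rate = min_rate           # floors that bound burst size (assumed)
        self.min_error_sum = min_error_sum

    def update(self, bytes_sent_this_tick):
        # Moving-average estimate of the current rate on the link.
        self.current_rate = ((1 - self.alpha) * self.current_rate
                             + self.alpha * bytes_sent_this_tick)
        self.current_rate = max(self.current_rate, self.min_rate)
        # Accumulate the error between the current rate and the desired rate.
        self.error_sum += self.current_rate - self.desired_rate
        self.error_sum = max(self.error_sum, self.min_error_sum)

    def may_send(self):
        # Packets flow only while the accumulated error says the link is
        # running at or below the desired rate.
        return self.error_sum <= 0


# Usage: offered load exceeds the target, so the shaper alternates between
# allowing and blocking packets to hold the average near the desired rate.
shaper = RateShaper(desired_rate=1000.0)
for tick in range(8):
    sent = 1500 if shaper.may_send() else 0
    shaper.update(sent)
    print(tick, round(shaper.current_rate, 1), shaper.may_send())
```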
    • 6. Granted invention patent
    • Title: System for using a branch prediction unit to achieve serialization by forcing a branch misprediction to flush a pipeline
    • Publication No.: US5954814A
    • Publication Date: 1999-09-21
    • Application No.: US994400
    • Filing Date: 1997-12-19
    • Inventor(s): Nazar A. Zaidi; Deepak J. Aatresh; Michael J. Morrison
    • IPC: G06F9/38; G06F9/312
    • CPC: G06F9/3806; G06F9/3861
    • Abstract: A microprocessor includes an instruction fetch unit, a branch prediction unit, and a decode unit. The instruction fetch unit is adapted to retrieve a plurality of program instructions. The program instructions include serialization initiating instructions and branch instructions. The branch prediction unit is adapted to generate branch predictions for the branch instructions, direct the instruction fetch unit to retrieve the program instructions in an order corresponding to the branch predictions, and redirect the instruction fetch unit based on a branch misprediction. The branch prediction unit is further adapted to store a redirect address corresponding to the branch misprediction. The decode unit is adapted to decode the program instructions into microcode. The microcode for each of the serialization initiating instructions includes microcode for writing the serialization address of the program instruction following the serialization initiating instruction into the branch prediction unit as the redirect address and triggering the branch misprediction. (A toy model of this mechanism follows this entry.)
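A toy behavioral model of the mechanism described above, assuming a simplified pipeline with a single flush-and-refetch path; the class and method names (`BranchPredictionUnit`, `force_misprediction`) are illustrative and not taken from the patent.

```python
class Pipeline:
    """Stand-in for the fetch/decode/execute pipeline (assumed interface)."""

    def __init__(self):
        self.in_flight = []

    def flush(self):
        self.in_flight.clear()                    # squash speculatively fetched work

    def refetch(self, address):
        print(f"fetch restarts at {address:#x}")  # serialization point reached


class BranchPredictionUnit:
    """A serializing instruction's microcode plants a redirect address in the
    predictor and forces a "misprediction", reusing the normal misprediction
    recovery path to flush the pipeline and restart fetch."""

    def __init__(self, pipeline):
        self.pipeline = pipeline
        self.redirect_address = None

    def force_misprediction(self, redirect_address):
        # Microcode for a serializing instruction ends up here: store the
        # address of the following instruction, then signal a misprediction.
        self.redirect_address = redirect_address
        self.on_misprediction()

    def on_misprediction(self):
        # Same path a genuinely mispredicted branch would take: discard all
        # younger in-flight work and refetch from the redirect address.
        self.pipeline.flush()
        self.pipeline.refetch(self.redirect_address)


# A serializing instruction at 0x1000 (4 bytes long, assumed) redirects fetch
# to 0x1004, so everything after it is re-fetched non-speculatively.
bpu = BranchPredictionUnit(Pipeline())
bpu.force_misprediction(0x1004)
```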
    • 8. Granted invention patent
    • Title: Method and apparatus for reduced latency in hold bus cycles
    • Publication No.: US5398244A
    • Publication Date: 1995-03-14
    • Application No.: US92488
    • Filing Date: 1993-07-16
    • Inventor(s): Gregory S. Mathews; Deepak J. Aatresh; Sanjay Jain
    • IPC: G06F13/30; G06F13/364; H04J3/00
    • CPC: G06F13/364; G06F13/30
    • Abstract: An innovative protocol, and a system for implementing the same, enables quick release of the bus by the master device, such as a CPU, to permit slave devices access to the bus. In one embodiment, the arbiter can select between the original hold protocol and the quick hold protocol according to predetermined criteria indicating that a low-latency response is requested. Upon assertion of a QHOLD signal, the CPU issues a burst-last signal to prematurely terminate outstanding burst transactions on the bus in a manner transparent to the slave devices. Once the outstanding bus cycles are complete, the CPU performs an internal backoff to immediately release the bus for access by the slave device requesting access. Any pending burst cycles that were terminated prematurely by the QHOLD signal are subsequently restarted, after the slave device completes its access to the bus, for the data not transacted by the CPU. The internal backoff mechanism is similarly transparent to the slave devices and does not cause a backoff signal to be issued to the peripherals or devices coupled to the bus. Thus, the quick hold protocol is added without significant modification of the slave devices' bus interface. (A toy sequence model of this protocol follows this entry.)
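A toy sequence model of the quick-hold behavior described above. It only illustrates the ordering of events; the class name `CpuBusUnit` and the bookkeeping of deferred transfers are illustrative assumptions, not details from the patent.

```python
class CpuBusUnit:
    """Models the CPU-side sequence when QHOLD is asserted mid-burst."""

    def __init__(self):
        self.pending_burst = []     # transfers the current burst still owes

    def start_burst(self, addresses):
        self.pending_burst = list(addresses)

    def qhold_asserted(self):
        """Arbiter requests a low-latency release of the bus."""
        if not self.pending_burst:
            return
        # 1. Issue "burst last" so the transfer already on the bus completes
        #    and the burst ends early, looking like a normal termination to
        #    the slave device.
        in_flight, deferred = self.pending_burst[0], self.pending_burst[1:]
        self.pending_burst = deferred
        print(f"transfer {in_flight:#x} completes; burst cut short")
        # 2. Internal backoff: the CPU releases the bus at once, without
        #    driving any external backoff signal to the slaves.
        print(f"bus released; {len(deferred)} transfers deferred")

    def qhold_deasserted(self):
        """Requesting device is done; resume the cut-short burst."""
        # 3. Restart the prematurely terminated burst for the data that was
        #    never transferred.
        if self.pending_burst:
            print(f"burst restarts at {self.pending_burst[0]:#x}")


cpu = CpuBusUnit()
cpu.start_burst([0x100, 0x104, 0x108, 0x10C])
cpu.qhold_asserted()      # low-latency request arrives mid-burst
cpu.qhold_deasserted()    # slave finished; CPU resumes the remaining transfers
```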