    • 82. Invention application
    • Method and system for managing cache injection in a multiprocessor system
    • Publication number: US20060064518A1
    • Publication date: 2006-03-23
    • Application number: US10948407
    • Filing date: 2004-09-23
    • Inventors: Patrick Bohrer, Ahmed Gheith, Peter Hochschild, Ramakrishnan Rajamony, Hazim Shafi, Balaram Sinharoy
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A method and apparatus for managing cache injection in a multiprocessor system reduces the processing time associated with direct memory access (DMA) transfers in a symmetric multiprocessor (SMP) or non-uniform memory access (NUMA) environment. The method and apparatus either detect the target processor for DMA completion or direct DMA-completion processing to a particular processor, enabling cache injection into a cache coupled to the processor that executes the DMA completion routine and processes the injected data. The target processor may be identified by determining which processor handles the interrupt that occurs on completion of the DMA transfer. Alternatively, or in conjunction with target-processor identification, an interrupt handler may queue a deferred procedure call to the target processor to process the transferred data. In NUMA multiprocessor systems, the completing processor and target memory are chosen for accessibility of the target memory to the processor and its associated cache.
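The targeting idea in this abstract can be sketched as a small model: inject the DMA data into the cache of whichever CPU will run the completion routine, identified either by the interrupt-handling CPU or by an explicitly queued deferred procedure call (DPC). This is illustrative only; the function and variable names are my own, not from the patent, and it assumes each CPU has a private cache.

```python
# Minimal model of cache-injection target selection (illustrative only).
# Idea from the abstract: inject DMA data into the cache of the CPU that
# will process the DMA completion, found either via the completion
# interrupt or via an explicit deferred procedure call (DPC) target.

def pick_injection_cache(interrupt_cpu, dpc_target_cpu, cpu_to_cache):
    """Return the cache that should receive the injected DMA data."""
    # If completion processing is explicitly directed to a CPU via a DPC,
    # that CPU's cache is the injection target; otherwise fall back to the
    # CPU that handled the DMA-completion interrupt.
    target_cpu = dpc_target_cpu if dpc_target_cpu is not None else interrupt_cpu
    return cpu_to_cache[target_cpu]

# Example: a 4-CPU system where each CPU has a private L2 cache.
caches = {cpu: f"L2-{cpu}" for cpu in range(4)}
print(pick_injection_cache(interrupt_cpu=2, dpc_target_cpu=None, cpu_to_cache=caches))  # L2-2
print(pick_injection_cache(interrupt_cpu=2, dpc_target_cpu=0, cpu_to_cache=caches))     # L2-0
```

The NUMA variant in the abstract would additionally weight `cpu_to_cache` selection by the distance between the CPU and the target memory.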
    • 85. Invention application
    • Estimating bandwidth of client-ISP link
    • Publication number: US20050132068A1
    • Publication date: 2005-06-16
    • Application number: US10734771
    • Filing date: 2003-12-12
    • Inventor: Ramakrishnan Rajamony
    • IPC: G06F15/16, G06F15/173, H04L12/24, H04L12/26, H04L29/12
    • CPC: H04L61/304, H04L29/12594, H04L41/0896, H04L43/00, H04L43/0882, H04L43/50, H04L61/30
    • Abstract: A method, program, and server for estimating the bandwidth of a network connection between a client and a server include requesting the server to serve first and second objects back-to-back to the client. The first and second objects are sent to the client, and the client determines the time interval between delivery of the first and second objects. That interval, together with the size of the second object, is used to estimate the bandwidth. The requests preferably identify the first and second objects with URLs that are unique on the network, to prevent the request from being serviced by a file cache. The first and second objects may be transmitted to the client from a content distribution network server that is architecturally close to the client's ISP, to improve the reliability of the bandwidth estimate.
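The arithmetic behind this estimate is simple: because the two objects arrive back-to-back, the gap between the two delivery completions is the time the link spent carrying the second object, so bandwidth is roughly the second object's size divided by that gap. A minimal sketch, with illustrative numbers of my own:

```python
# Sketch of the client-side bandwidth estimate described in the abstract:
# the server sends two objects back-to-back; the client divides the second
# object's size by the gap between the two delivery-completion times.

def estimate_bandwidth(t_first_done, t_second_done, second_size_bytes):
    """Estimate link bandwidth in bytes/second."""
    interval = t_second_done - t_first_done   # time spent receiving object 2
    if interval <= 0:
        raise ValueError("second object must finish after the first")
    return second_size_bytes / interval

# Example: a 100 KB second object delivered 0.8 s after the first.
bw = estimate_bandwidth(t_first_done=1.0, t_second_done=1.8, second_size_bytes=100_000)
print(f"{bw:.0f} bytes/s")  # 125000 bytes/s
```

The unique-URL requirement in the abstract matters because a cache hit on either object would shrink the measured interval and inflate the estimate.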
    • 86. Invention grant
    • Method and apparatus for accelerating input/output processing using cache injections
    • Publication number: US06711650B1
    • Publication date: 2004-03-23
    • Application number: US10289817
    • Filing date: 2002-11-07
    • Inventors: Patrick Joseph Bohrer, Ramakrishnan Rajamony, Hazim Shafi
    • IPC: G06F12/02
    • CPC: G06F12/0835
    • Abstract: A method for accelerating input/output operations within a data processing system is disclosed. Initially, a determination is made in a cache controller as to whether a bus operation is a data transfer from a first memory to a second memory without intervening communication through a processor, such as a direct memory access (DMA) transfer. If the bus operation is such a transfer, the cache memory determines whether it includes a copy of the data from the transfer. If it does not, a cache line is allocated within the cache memory to store a copy of the data from the transfer.
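The decision sequence in this abstract can be modeled as a tiny snooping cache: ignore processor-mediated traffic, and on a memory-to-memory (DMA) transfer allocate a line only when no copy is already cached. A minimal sketch; the class and method names are my own, not from the patent:

```python
# Toy model of the grant's cache-injection decision: when the cache
# controller snoops a memory-to-memory (DMA) bus transfer and holds no
# copy of the data, it allocates a line and injects the data so a later
# consumer hits in cache instead of going to memory.

class SnoopingCache:
    def __init__(self):
        self.lines = {}  # address -> cached data

    def on_bus_operation(self, is_dma_transfer, address, data):
        # Only DMA-style memory-to-memory transfers trigger injection;
        # ordinary processor-mediated traffic is left alone here.
        if not is_dma_transfer:
            return
        if address not in self.lines:
            # No cached copy yet: allocate a line and inject the data.
            self.lines[address] = data

cache = SnoopingCache()
cache.on_bus_operation(True, 0x1000, b"payload")   # DMA transfer -> injected
cache.on_bus_operation(False, 0x2000, b"ignored")  # normal op -> not cached
print(sorted(cache.lines))  # [4096]
```

A real controller would also handle eviction and coherence state for the allocated line; those details are outside the abstract and omitted here.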
    • 87. Invention grant
    • Scalable interruptible queue locks for shared-memory multiprocessor
    • Publication number: US06473819B1
    • Publication date: 2002-10-29
    • Application number: US09465297
    • Filing date: 1999-12-17
    • Inventors: Benedict Joseph Jackson, Paul Edward McKenney, Ramakrishnan Rajamony, Ronald Lynn Rockhold
    • IPC: G06F12/00
    • CPC: G06F9/52
    • Abstract: A method for a computation agent to acquire a queue lock in a multiprocessor system that prevents deadlock between the computation agent and external interrupts. The computation agent joins a queue to acquire the lock. Upon receiving ownership of the lock, the agent raises its priority level to a higher, second priority level. If an external interrupt with a higher priority level arrives before the agent has raised its priority to that second level, the agent relinquishes ownership of the lock. After raising its priority to the second level, the agent determines whether it still owns the lock; if it does not, it rejoins the queue to reacquire the lock. In one embodiment, the agent's priority level is restored to its original, first priority level when it rejoins the queue.
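The protocol in this abstract (join the queue, raise priority on ownership, relinquish if an interrupt wins the race, then check ownership and rejoin if it was lost) can be sketched as a simplified single-threaded state machine. This is a model of the described steps only, not the patent's implementation; names are mine, and the interrupt race is simulated with a flag:

```python
# Simplified model of the interruptible queue-lock protocol: ownership can
# be relinquished if a higher-priority interrupt arrives before the agent
# raises its own priority, in which case the agent must rejoin the queue.

class QueueLock:
    def __init__(self):
        self.queue = []   # agents waiting, FIFO
        self.owner = None

    def grant_next(self):
        # Hand the lock to the head of the queue if it is free.
        if self.owner is None and self.queue:
            self.owner = self.queue.pop(0)

def try_acquire(agent, lock, interrupted_early):
    """One round of the protocol; returns True if the lock was kept."""
    lock.queue.append(agent)              # 1. join the queue
    lock.grant_next()                     #    (ownership may arrive here)
    if interrupted_early and lock.owner is agent:
        # 2. a higher-priority interrupt arrived before the priority raise:
        #    relinquish ownership so the interrupt can run without deadlock.
        lock.owner = None
        lock.grant_next()
    agent["priority"] = "high"            # 3. raise to the second priority level
    if lock.owner is agent:               # 4. still the owner? acquisition done.
        return True
    agent["priority"] = "normal"          # 5. lost the race: restore priority
    return False                          #    and let the caller rejoin the queue

a = {"name": "A", "priority": "normal"}
print(try_acquire(a, QueueLock(), interrupted_early=False))  # True
print(try_acquire(a, QueueLock(), interrupted_early=True))   # False
```

Step 5 mirrors the embodiment in the abstract where the agent's priority is restored to the first level before it rejoins the queue; a caller would loop on `try_acquire` until it returns True.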