    • 11. Invention application
    • Title: Method and system for managing cache injection in a multiprocessor system
    • Publication number: US20060064518A1
    • Publication date: 2006-03-23
    • Application number: US10948407
    • Filing date: 2004-09-23
    • Inventors: Patrick Bohrer, Ahmed Gheith, Peter Hochschild, Ramakrishnan Rajamony, Hazim Shafi, Balaram Sinharoy
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A method and apparatus for managing cache injection in a multiprocessor system reduces the processing time associated with direct memory access (DMA) transfers in a symmetrical multiprocessor (SMP) or non-uniform memory access (NUMA) multiprocessor environment. The method and apparatus either detect the target processor for DMA completion or direct processing of DMA completion to a particular processor, thereby enabling cache injection into a cache that is coupled with the processor executing the DMA completion routine that processes the injected data. The target processor may be identified by determining which processor handles the interrupt that occurs on completion of the DMA transfer. Alternatively, or in conjunction with target processor identification, an interrupt handler may queue a deferred procedure call to the target processor to process the transferred data. In NUMA multiprocessor systems, the completing processor and target memory are chosen so that the target memory is readily accessible to the processor and its associated cache.
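The abstract above hinges on running the DMA completion routine on the processor whose cache is the injection target, either by detecting which processor handled the completion interrupt or by queuing a deferred procedure call (DPC) to a chosen processor. The sketch below is a hedged, user-space illustration of that dispatch logic only; the per-CPU DPC queues and every identifier in it (dpc, queue_dpc, dma_completion_irq) are assumptions for illustration, not code from the patent.

```c
/* Minimal user-space sketch, not kernel code: route DMA-completion
 * processing (and therefore the cache-injection target) to the processor
 * that took the completion interrupt, by queuing a deferred procedure
 * call (DPC) to that processor. */
#include <stdio.h>

#define NUM_CPUS  4
#define QUEUE_LEN 8

typedef void (*dpc_fn)(void *arg);

struct dpc {
    dpc_fn fn;
    void  *arg;
};

/* One deferred-procedure-call queue per processor. */
static struct dpc dpc_queue[NUM_CPUS][QUEUE_LEN];
static int dpc_count[NUM_CPUS];

/* Queue completion work on the CPU whose cache should receive the data. */
static void queue_dpc(int target_cpu, dpc_fn fn, void *arg)
{
    struct dpc *d = &dpc_queue[target_cpu][dpc_count[target_cpu]++];
    d->fn  = fn;
    d->arg = arg;
}

/* DMA completion routine: because it runs on the target CPU, the buffer
 * it touches would, under the scheme described in the abstract, be
 * injected into that CPU's cache rather than a remote one. */
static void process_dma_buffer(void *arg)
{
    printf("completion routine consuming buffer \"%s\"\n", (char *)arg);
}

/* Completion interrupt handler: the CPU that handles the interrupt is
 * treated as the cache-injection target (the "detect" path); a fixed
 * CPU could be chosen instead (the "direct" path). */
static void dma_completion_irq(int interrupted_cpu, void *buffer)
{
    int target_cpu = interrupted_cpu;
    queue_dpc(target_cpu, process_dma_buffer, buffer);
}

int main(void)
{
    dma_completion_irq(2, "netbuf-0");   /* pretend CPU 2 took the IRQ */

    /* Drain each CPU's DPC queue (stand-in for per-CPU deferred work). */
    for (int cpu = 0; cpu < NUM_CPUS; cpu++)
        for (int i = 0; i < dpc_count[cpu]; i++) {
            printf("CPU %d: ", cpu);
            dpc_queue[cpu][i].fn(dpc_queue[cpu][i].arg);
        }
    return 0;
}
```

Compiled and run, the sketch reports that CPU 2, the processor that took the interrupt, executes the completion routine, which is the property the cache-injection scheme relies on.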
    • 12. Invention grant
    • Title: Method and system for managing cache injection in a multiprocessor system
    • Publication number: US08255591B2
    • Publication date: 2012-08-28
    • Application number: US10948407
    • Filing date: 2004-09-23
    • Inventors: Patrick Joseph Bohrer, Ahmed Gheith, Peter Heiner Hochschild, Ramakrishnan Rajamony, Hazim Shafi, Balaram Sinharoy
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A method and apparatus for managing cache injection in a multiprocessor system reduces the processing time associated with direct memory access (DMA) transfers in a symmetrical multiprocessor (SMP) or non-uniform memory access (NUMA) multiprocessor environment. The method and apparatus either detect the target processor for DMA completion or direct processing of DMA completion to a particular processor, thereby enabling cache injection into a cache that is coupled with the processor executing the DMA completion routine that processes the injected data. The target processor may be identified by determining which processor handles the interrupt that occurs on completion of the DMA transfer. Alternatively, or in conjunction with target processor identification, an interrupt handler may queue a deferred procedure call to the target processor to process the transferred data. In NUMA multiprocessor systems, the completing processor and target memory are chosen so that the target memory is readily accessible to the processor and its associated cache.
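This granted record shares its abstract with the published application above; the NUMA-specific point is that the completing processor and target memory are chosen for accessibility, so the injected data lands in a cache local to the processor that will consume it. The following self-contained sketch shows one way such a selection could look; the node count and the distance table are invented for illustration and are not taken from the patent.

```c
/* Minimal sketch of the NUMA selection idea: pick the node whose
 * processors should run DMA completion so that the target memory (and
 * the cache receiving the injection) is as local as possible. */
#include <stdio.h>

#define NUM_NODES 4

/* Smaller value = cheaper access; the diagonal is local memory. */
static const int node_distance[NUM_NODES][NUM_NODES] = {
    { 10, 20, 20, 30 },
    { 20, 10, 30, 20 },
    { 20, 30, 10, 20 },
    { 30, 20, 20, 10 },
};

/* Pick the node whose processors should run DMA completion for a buffer
 * resident in the memory of memory_node. */
static int pick_completing_node(int memory_node)
{
    int best = 0;
    for (int n = 1; n < NUM_NODES; n++)
        if (node_distance[n][memory_node] < node_distance[best][memory_node])
            best = n;
    return best;
}

int main(void)
{
    int memory_node = 3;   /* node holding the DMA target buffer */
    printf("run DMA completion (and inject into the local cache) on node %d\n",
           pick_completing_node(memory_node));
    return 0;
}
```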
    • 18. Invention grant
    • Title: Hardware based dynamic load balancing of message passing interface tasks
    • Publication number: US08127300B2
    • Publication date: 2012-02-28
    • Application number: US11846119
    • Filing date: 2007-08-28
    • Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Ramakrishnan Rajamony, William E. Speight
    • IPC: G06F9/46; G06F15/173
    • CPC: G06F9/522; G06F9/5083
    • Abstract: Mechanisms are provided for hardware based dynamic load balancing of message passing interface (MPI) tasks. The balance of processing workloads across the processors executing tasks of an MPI job is adjusted so as to minimize the time spent waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware-implemented MPI load balancing controller. The MPI load balancing controller maintains a history that profiles the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing load without significantly degrading the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors.
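The abstract describes per-processor hardware controllers that profile when each MPI task reaches its synchronization call and then shift work from the slowest processor to faster ones. The simulation below illustrates only that decision loop, in software; the task count and work-unit numbers are invented and stand in for the timing history the hardware controllers would collect.

```c
/* Hypothetical illustration of the balancing decision: work units per
 * task stand in for the per-iteration timing history a hardware MPI
 * load balancing controller would record at each synchronization call. */
#include <stdio.h>

#define NUM_TASKS  4
#define ITERATIONS 3

int main(void)
{
    /* Work units assigned to each MPI task; a task's time to reach the
     * barrier is assumed proportional to its work units. */
    int work[NUM_TASKS] = { 8, 10, 14, 9 };

    for (int iter = 0; iter < ITERATIONS; iter++) {
        /* Profile: the task with the most work reaches the barrier last. */
        int slow = 0, fast = 0;
        for (int t = 1; t < NUM_TASKS; t++) {
            if (work[t] > work[slow]) slow = t;
            if (work[t] < work[fast]) fast = t;
        }
        /* Shift one work unit from the slowest task to the fastest one,
         * mirroring the "shift workloads from the slowest processor to
         * one or more of the faster processors" step of the abstract. */
        if (work[slow] - work[fast] > 1) {
            work[slow]--;
            work[fast]++;
        }
        printf("iteration %d: task %d slowest, task %d fastest, loads:",
               iter, slow, fast);
        for (int t = 0; t < NUM_TASKS; t++)
            printf(" %d", work[t]);
        printf("\n");
    }
    return 0;
}
```

Each iteration moves one unit from the task that would arrive at the barrier last to the one that would arrive first, so the spread of loads, and hence the waiting time at the synchronization point, narrows over the run.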