    • 32. Patent Application
    • Performing A Deterministic Reduction Operation In A Parallel Computer
    • US20110296139A1
    • 2011-12-01
    • US12790037
    • 2010-05-28
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • G06F9/30; G06F9/02; G06F15/76
    • G06F15/76; G06F15/17318
    • Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data.
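The claimed protocol above can be sketched in a few lines. This is a hypothetical illustration of the ordering trick only, not the patented implementation; class and method names (`CAURoot`, `receive_dummy`, `ack_in_order`) are invented for this sketch:

```python
# Sketch of the deterministic-reduction handshake: dummies arrive in any
# order, acks go out in a fixed order, so real contributions (and therefore
# the reduction) are deterministic. Names are illustrative, not from the patent.
class CAURoot:
    """Root CAU of the branched tree; its children are the processors."""
    def __init__(self, children, op):
        self.children = list(children)   # predefined order
        self.op = op                     # reduction operator
        self.dummies_seen = set()
        self.contributions = {}

    def receive_dummy(self, proc_id):
        # Dummy contribution data may arrive in ANY order.
        self.dummies_seen.add(proc_id)

    def ack_in_order(self):
        # Acknowledgements are sent in the predefined order; each processor
        # is barred from sending real data until it sees its ack.
        assert self.dummies_seen == set(self.children)
        return list(self.children)

    def receive_contribution(self, proc_id, data):
        # Real contributions therefore arrive in the predefined order.
        expected = self.children[len(self.contributions)]
        assert proc_id == expected, "contribution out of predefined order"
        self.contributions[proc_id] = data

    def reduce(self):
        result = None
        for pid in self.children:
            v = self.contributions[pid]
            result = v if result is None else self.op(result, v)
        return result

# Usage: dummies arrive out of order, yet the reduction order is fixed.
root = CAURoot(children=["p0", "p1", "p2"], op=lambda a, b: a + b)
for pid in ["p2", "p0", "p1"]:                      # any order
    root.receive_dummy(pid)
for pid in root.ack_in_order():                      # predefined order
    root.receive_contribution(pid, {"p0": 1, "p1": 2, "p2": 3}[pid])
print(root.reduce())                                 # 6
```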
    • 33. Patent Application
    • Performing A Deterministic Reduction Operation In A Parallel Computer
    • US20110296137A1
    • 2011-12-01
    • US12789986
    • 2010-05-28
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • G06F15/76; G06F15/80; G06F9/02
    • G06F15/17318
    • A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, deterministic reduction operation include: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is a root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with processors and configured to store the associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing, the contribution data in a predefined order, only after receipt of contribution data from all processors in the branched tree topology.
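The receive-buffer variant above differs from the sibling application: determinism comes from deferring the reduction rather than ordering the arrivals. A minimal sketch, with invented names (`ReceiveBufferReduce`, `try_reduce`) standing in for the claimed structures:

```python
# Sketch of the tracked receive-buffer approach: contributions arrive in
# any order, receipt is tracked per receive element, and the reduction runs
# in a predefined order only once every element is filled.
class ReceiveBufferReduce:
    def __init__(self, processors, op):
        self.order = list(processors)                # predefined reduce order
        self.buffer = {p: None for p in processors}  # one receive element each
        self.received = set()
        self.op = op

    def receive(self, proc_id, data):
        # Any-order arrival; receipt is tracked.
        self.buffer[proc_id] = data
        self.received.add(proc_id)

    def try_reduce(self):
        # Reduce only after contribution data from ALL processors is present.
        if self.received != set(self.order):
            return None
        result = self.buffer[self.order[0]]
        for p in self.order[1:]:
            result = self.op(result, self.buffer[p])
        return result

r = ReceiveBufferReduce(["p0", "p1", "p2"], lambda a, b: a + b)
r.receive("p1", 2.0)
print(r.try_reduce())       # None: still waiting on p0 and p2
r.receive("p2", 3.0)
r.receive("p0", 1.0)
print(r.try_reduce())       # 6.0, regardless of arrival order
```

Deferring the combine step matters for floating-point reductions, where a different combine order can change the rounded result.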
    • 34. Patent Application
    • Effecting Hardware Acceleration Of Broadcast Operations In A Parallel Computer
    • US20110289177A1
    • 2011-11-24
    • US12782791
    • 2010-05-19
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • G06F15/173
    • G06F15/17318
    • Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node.
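The chained-DMA broadcast above can be simulated in a few lines. This is a sketch under loose assumptions: plain dicts stand in for the well-known memory locations, and the function name `broadcast` is invented for the illustration:

```python
# Sketch of the pipelined broadcast: for each hop, data and its length are
# deposited (DMA) at well-known locations in the target node, and an
# injection instruction is deposited that triggers the target to forward
# the data to the NEXT node's receive buffer.
def broadcast(schedule, data):
    """schedule: node ids in broadcast order, root first."""
    mem = {n: {"recv": None, "len": None, "inject": None} for n in schedule}
    for i in range(len(schedule) - 1):
        dst = schedule[i + 1]
        mem[dst]["recv"] = data           # DMA into well-known receive buffer
        mem[dst]["len"] = len(data)       # DMA into the length memory region
        # Trigger: the injection instruction names the subsequent target,
        # so each node starts its own DMA without root round-trips.
        nxt = schedule[i + 2] if i + 2 < len(schedule) else None
        mem[dst]["inject"] = nxt
    return mem

mem = broadcast(["root", "n1", "n2", "n3"], b"hello")
print(mem["n2"])    # {'recv': b'hello', 'len': 5, 'inject': 'n3'}
```

Because every buffer address is well known in advance, no rendezvous handshake is needed before each DMA, which is the source of the claimed hardware acceleration.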
    • 35. Granted Patent
    • Providing policy-based operating system services in a hypervisor on a computing system
    • US08032899B2
    • 2011-10-04
    • US11553077
    • 2006-10-26
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Albert Sidelnik; Brian E. Smith
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Albert Sidelnik; Brian E. Smith
    • G06F15/163
    • G06F9/5055; G06F9/5077
    • Methods, apparatus, and products are disclosed for providing policy-based operating system services in a hypervisor on a computing system. The computing system includes at least one compute node. The compute node includes an operating system and a hypervisor. The operating system includes a kernel. The hypervisor comprising a kernel proxy and a plurality of operating system services of a service type. Providing policy-based operating system services in a hypervisor on a computing system includes establishing, on the compute node, a kernel policy specifying one of the operating system services of the service type for use by the kernel proxy, and accessing, by the kernel proxy, the specified operating system service. The computing system may also be implemented as a distributed computing system that includes one or more operating system service nodes. One or more of the operating system services may be distributed among the operating system service nodes.
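The kernel-policy mechanism above is essentially policy-driven dispatch: the policy names which implementation of a service type the kernel proxy uses. A hypothetical sketch, with the `KernelProxy` class and the `scheduler` service names invented for illustration:

```python
# Sketch of policy-based service selection: the kernel policy maps a
# service TYPE to one chosen service of that type; the kernel proxy then
# accesses the specified service on the kernel's behalf.
class KernelProxy:
    def __init__(self, services, policy):
        self.services = services    # {service_type: {name: callable}}
        self.policy = policy        # {service_type: chosen service name}

    def call(self, service_type, *args):
        name = self.policy[service_type]            # established kernel policy
        return self.services[service_type][name](*args)

# Two interchangeable operating system services of the "scheduler" type.
services = {
    "scheduler": {
        "fifo": lambda tasks: list(tasks),
        "priority": lambda tasks: sorted(tasks, reverse=True),
    }
}
proxy = KernelProxy(services, policy={"scheduler": "priority"})
print(proxy.call("scheduler", [2, 9, 4]))    # [9, 4, 2]
```

In the distributed variant described in the abstract, the callables behind each name would live on separate operating system service nodes, but the policy lookup is the same.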
    • 38. Granted Patent
    • Latency hiding message passing protocol
    • US07688737B2
    • 2010-03-30
    • US11682057
    • 2007-03-05
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • H04J1/16
    • G06F9/546
    • A method, system, and article of manufacture are disclosed that provide latency-hiding, high-bandwidth message passing protocols for data communication between nodes of a parallel computer system. A source node transmits a request-to-send message to a receiving node. Prior to receiving a clear-to-send message, the sending node continues to send deterministically routed (or fully described) data packets to the receiving node, thereby hiding the latency inherent in the request-to-send/clear-to-send message exchange. Once the sending node receives the clear-to-send message, any remaining portion of the message may be sent using partially described packets which may be routed dynamically, thereby maximizing bandwidth.
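The handshake above can be sketched as a simple event trace. This is an illustrative model only; the packet framing, the `chunk` size, and the `cts_after_chunks` parameter (which packet index the clear-to-send arrives at) are assumptions of the sketch, not details from the patent:

```python
# Sketch of the latency-hiding handshake: send the request-to-send, then
# keep sending fully described, deterministically routed packets until the
# clear-to-send arrives; route the remainder dynamically for bandwidth.
def send_message(payload, chunk, cts_after_chunks):
    """Return a list of (routing, packet) events for one message."""
    packets = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
    log = [("RTS", None)]                       # request to send goes first
    for i, p in enumerate(packets):
        if i < cts_after_chunks:
            # CTS not yet seen: deterministic routing hides the handshake
            # latency instead of idling until the CTS arrives.
            log.append(("deterministic", p))
        else:
            # CTS received: partially described packets, routed dynamically.
            log.append(("dynamic", p))
    return log

log = send_message(b"abcdefgh", chunk=2, cts_after_chunks=2)
print([routing for routing, _ in log])
# ['RTS', 'deterministic', 'deterministic', 'dynamic', 'dynamic']
```

The key property is that no packet slot is wasted waiting on the round trip: the deterministic packets are in flight during the very exchange whose latency they hide.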