    • 45. Granted patent
    • Message passing with queues and channels
    • Publication: US08381230B2 (2013-02-19)
    • Application: US12764315 (filed 2010-04-21)
    • Inventors: Gabor J. Dozsa, Philip Heidelberger, Sameer Kumar, Joseph D. Ratterman, Burkhard Steinmacher-Burow
    • IPC: G06F3/00
    • CPC: G06F9/546; G06F2209/548
    • Abstract: In an embodiment, a reception thread receives a source node identifier, a type, and a data pointer from an application and, in response, creates a receive request. If the source node identifier specifies a source node, the reception thread adds the receive request to a fast-post queue. If a message received from a network does not match a receive request on a posted queue, a polling thread adds a receive request that represents the message to an unexpected queue. If the fast-post queue contains the receive request, the polling thread removes the receive request from the fast-post queue. If the receive request that was removed from the fast-post queue does not match the receive request on the unexpected queue, the polling thread adds the receive request that was removed from the fast-post queue to the posted queue. The reception thread and the polling thread execute asynchronously from each other.
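The queue discipline this abstract describes can be summarized as: a receive request first lands on a fast-post queue; the polling thread then either matches it against an already-arrived message on the unexpected queue, or parks it on the posted queue for a future message. The following is a minimal single-threaded sketch of that matching logic (the class and method names are invented for illustration; in the patent the two threads run asynchronously, which this simulation does not model):

```python
from collections import deque

class Matcher:
    """Sketch of the fast-post / posted / unexpected queue discipline."""

    def __init__(self):
        self.fast_post = deque()   # receives added by the reception thread
        self.posted = deque()      # receives waiting for a matching message
        self.unexpected = deque()  # messages that arrived with no matching receive

    def post_receive(self, source, tag):
        # Reception thread: create a receive request and add it to the
        # fast-post queue (done when the source node is specified).
        self.fast_post.append((source, tag))

    def poll(self, incoming=None):
        """One polling-thread pass; returns the receive requests matched."""
        matches = []
        # Drain the fast-post queue: each request either matches a message
        # already on the unexpected queue, or moves to the posted queue.
        while self.fast_post:
            req = self.fast_post.popleft()
            if req in self.unexpected:
                self.unexpected.remove(req)
                matches.append(req)
            else:
                self.posted.append(req)
        # An arriving message matches a posted receive or becomes unexpected.
        if incoming is not None:
            if incoming in self.posted:
                self.posted.remove(incoming)
                matches.append(incoming)
            else:
                self.unexpected.append(incoming)
        return matches
```

Both arrival orders resolve the same way: a message arriving before its receive waits on the unexpected queue, and a receive posted before its message waits on the posted queue.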
    • 46. Granted patent
    • DMA engine for repeating communication patterns
    • Publication: US07802025B2 (2010-09-21)
    • Application: US11768795 (filed 2007-06-26)
    • Inventors: Dong Chen, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Burkhard Steinmacher-Burow, Pavlos Vranas
    • IPC: G06F13/28
    • CPC: G06F15/163
    • Abstract: A parallel computer system is constructed as a network of interconnected compute nodes to operate a global message-passing application for performing communications across the network. Each of the compute nodes includes one or more individual processors with memories which run local instances of the global message-passing application operating at each compute node to carry out local processing operations independent of processing operations carried out at other compute nodes. Each compute node also includes a DMA engine constructed to interact with the application via Injection FIFO Metadata describing multiple Injection FIFOs, where each Injection FIFO may contain an arbitrary number of message descriptors, in order to process messages with a fixed processing overhead irrespective of the number of message descriptors included in the Injection FIFO.
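The key idea in this abstract is that the application communicates with the DMA engine only through FIFO metadata: it appends message descriptors to an injection FIFO, and the engine advances through descriptors one at a time, so the per-message work is constant no matter how many descriptors are queued. A minimal sketch under those assumptions (all names here are illustrative, not the patent's actual interfaces):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Descriptor:
    """A message descriptor: destination node plus payload to inject."""
    dest: int
    payload: bytes

class InjectionFIFO:
    """An injection FIFO may hold an arbitrary number of descriptors."""
    def __init__(self):
        self.slots = deque()

    def inject(self, desc):
        self.slots.append(desc)   # application appends a descriptor (tail advance)

class DMAEngine:
    """Interacts with the application only via the FIFO metadata."""
    def __init__(self, fifos):
        self.fifos = fifos        # metadata describing multiple injection FIFOs

    def step(self):
        """Process at most one descriptor per non-empty FIFO.

        The work per message is fixed: it does not depend on how many
        descriptors the application has queued behind the head.
        """
        sent = []
        for fifo in self.fifos:
            if fifo.slots:
                d = fifo.slots.popleft()          # head advance
                sent.append((d.dest, d.payload))  # emit one packet
        return sent
```

Because the engine only reads and advances head/tail metadata, a repeating communication pattern can be replayed by re-injecting the same descriptors without any per-pattern setup cost.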
    • 49. Granted patent
    • Support for non-locking parallel reception of packets belonging to a single memory reception FIFO
    • Publication: US08086766B2 (2011-12-27)
    • Application: US12688747 (filed 2010-01-15)
    • Inventors: Dong Chen, Philip Heidelberger, Valentina Salapura, Robert M. Senger, Burkhard Steinmacher-Burow, Yutaka Sugawara
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A method and apparatus for distributed parallel messaging in a parallel computing system. A plurality of DMA engine units are configured in a multiprocessor system to operate in parallel, one DMA engine unit for transferring a current packet received at a network reception queue to a memory location in a memory FIFO (rmFIFO) region of a memory. A control unit implements logic to determine whether any prior received packet destined for that rmFIFO is still in a process of being stored in the associated memory by another DMA engine unit of the plurality, and prevent the one DMA engine unit from indicating completion of storing the current received packet in the reception memory FIFO (rmFIFO) until all prior received packets destined for that rmFIFO are completely stored by the other DMA engine units. Thus, there is provided non-locking support so that multiple packets destined for a single rmFIFO are transferred and stored in parallel to predetermined locations in a memory.
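The ordering rule in this abstract — packets are copied into predetermined rmFIFO slots in parallel, but completion is only indicated for a packet once every earlier packet is fully stored — can be sketched with a per-FIFO committed counter. This is a hypothetical single-threaded simulation, not the patent's hardware design; the names are invented, and real engines would advance the counter atomically:

```python
class RmFIFO:
    """Sketch of non-locking parallel reception into one reception FIFO.

    Each DMA engine reserves a slot (the slot index doubles as arrival
    order), copies its packet into that predetermined location with no
    lock held, and the committed counter only moves past a slot once
    every earlier slot has been completely stored.
    """

    def __init__(self, capacity=64):
        self.slots = [None] * capacity
        self.next_slot = 0
        self.done = set()     # slots whose store has finished
        self.committed = 0    # packets whose completion may be indicated

    def reserve(self):
        """Assign the next predetermined slot to an incoming packet."""
        s = self.next_slot
        self.next_slot += 1
        return s

    def store(self, slot, packet):
        """An engine finishes copying its packet into its reserved slot."""
        self.slots[slot] = packet
        self.done.add(slot)
        # Indicate completion only up to the first still-pending packet,
        # so a later packet never appears complete before an earlier one.
        while self.committed in self.done:
            self.committed += 1
```

Out-of-order stores are handled naturally: if the second packet finishes first, nothing is committed until the first packet's store also completes, at which point both become visible at once.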
    • 50. Patent application
    • SUPPORT FOR NON-LOCKING PARALLEL RECEPTION OF PACKETS BELONGING TO THE SAME RECEPTION FIFO
    • Publication: US20110179199A1 (2011-07-21)
    • Application: US12688747 (filed 2010-01-15)
    • Inventors: Dong Chen, Philip Heidelberger, Valentina Salapura, Robert M. Senger, Burkhard Steinmacher-Burow, Yutaka Sugawara
    • IPC: G06F13/28
    • CPC: G06F13/28
    • Abstract: A method and apparatus for distributed parallel messaging in a parallel computing system. A plurality of DMA engine units are configured in a multiprocessor system to operate in parallel, one DMA engine unit for transferring a current packet received at a network reception queue to a memory location in a memory FIFO (rmFIFO) region of a memory. A control unit implements logic to determine whether any prior received packet destined for that rmFIFO is still in a process of being stored in the associated memory by another DMA engine unit of the plurality, and prevent the one DMA engine unit from indicating completion of storing the current received packet in the reception memory FIFO (rmFIFO) until all prior received packets destined for that rmFIFO are completely stored by the other DMA engine units. Thus, there is provided non-blocking support so that multiple packets destined for a single rmFIFO are transferred and stored in parallel to predetermined locations in a memory.