    • 2. Granted invention patent
    • Performing high granularity prefetch from remote memory into a cache on a device without change in address
    • US08549231B2
    • 2013-10-01
    • US12684689
    • 2010-01-08
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Ben Sum
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Ben Sum
    • G06F12/08
    • G06F12/0862, G06F12/1081
    • Provided is a method, which may be performed on a computer, for prefetching data over an interface. The method may include receiving a first data prefetch request for first data of a first data size stored at a first physical address corresponding to a first virtual address. The first data prefetch request may include second data specifying the first virtual address and third data specifying the first data size. The first virtual address and the first data size may define a first virtual address range. The method may also include converting the first data prefetch request into a first data retrieval request. To convert the first data prefetch request into a first data retrieval request the first virtual address specified by the second data may be translated into the first physical address. The method may further include issuing the first data retrieval request at the interface, receiving the first data at the interface and storing at least a portion of the received first data in a cache. Storing may include setting each of one or more cache tags associated with the at least a portion of the received first data to correspond to the first physical address.
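The prefetch flow claimed above can be sketched as follows. This is a hedged, minimal model, not the patented implementation: the page-table dict, `PAGE_SIZE`, `LINE_SIZE`, and the `Cache`/`prefetch` names are all invented for illustration.

```python
# Minimal sketch: a prefetch request carries a virtual address and a size;
# it is converted to a retrieval request by translating the virtual address
# to a physical one, issued, and the returned data is cached with tags
# set to correspond to the physical address.
PAGE_SIZE = 4096
LINE_SIZE = 64

page_table = {0x10: 0x80}  # virtual page number -> physical page number (hypothetical)

def translate(vaddr):
    """Translate a virtual address to a physical address."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

class Cache:
    def __init__(self):
        self.lines = {}  # tag (physical line number) -> data

    def store(self, paddr, data):
        # Set each cache tag to correspond to the physical address.
        for i in range(0, len(data), LINE_SIZE):
            tag = (paddr + i) // LINE_SIZE
            self.lines[tag] = data[i:i + LINE_SIZE]

def prefetch(cache, vaddr, size, remote_memory):
    # Convert the prefetch request into a retrieval request, issue it
    # at the interface, and store the received data in the cache.
    paddr = translate(vaddr)
    data = remote_memory(paddr, size)
    cache.store(paddr, data)
    return paddr
```

Because the cache is tagged by physical address, later demand accesses that translate to the same physical range hit without the address having been changed on the way in.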
    • 3. Granted invention patent
    • Scalable interface for connecting multiple computer systems which performs parallel MPI header matching
    • US08537828B2
    • 2013-09-17
    • US13489496
    • 2012-06-06
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • H04L12/28
    • G06F15/17337
    • An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching.
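The matching scheme above can be sketched in software, with threads standing in for the hardware matcher units. Everything here is an assumption for illustration: the `(src, tag)` header fields, the wildcard convention, and the function names are not from the patent.

```python
# Hypothetical sketch of parallel MPI header matching: several matcher
# "units" (threads here) scan the posted receive queues concurrently for
# a receive request matching an incoming send header; a send with no
# match is parked on the unexpected queue.
from concurrent.futures import ThreadPoolExecutor

def header_matches(recv, send):
    # MPI-style matching on (source, tag), with -1 as a wildcard.
    ANY = -1
    return (recv["src"] in (ANY, send["src"])
            and recv["tag"] in (ANY, send["tag"]))

def match_send(posted_queues, unexpected_queue, send_hdr):
    """Search all posted receive queues in parallel for send_hdr."""
    def scan(queue):
        for i, recv in enumerate(queue):
            if header_matches(recv, send_hdr):
                return (queue, i)
        return None

    with ThreadPoolExecutor(max_workers=len(posted_queues)) as pool:
        for hit in pool.map(scan, posted_queues):
            if hit is not None:
                queue, i = hit
                return queue.pop(i)      # matched receive request
    unexpected_queue.append(send_hdr)    # no match: store the header
    return None
```

The point of the parallel scan is that matching latency is bounded by the longest single queue rather than the total number of posted receives across all queues.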
    • 4. Invention patent application
    • Scalable Interface for Connecting Multiple Computer Systems Which Performs Parallel MPI Header Matching
    • US20120243542A1
    • 2012-09-27
    • US13489496
    • 2012-06-06
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • H04L12/56
    • G06F15/17337
    • An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching.
    • 5. Invention patent application
    • Software Aware Throttle Based Flow Control
    • US20100332676A1
    • 2010-12-30
    • US12495452
    • 2009-06-30
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Lars Paul Huse, William M. Ortega
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Lars Paul Huse, William M. Ortega
    • G06F15/16
    • H04L41/065, H04L47/10, H04L47/26, H04L47/283, H04L47/30, H04L49/00, H04L49/90
    • A system, comprising a compute node and coupled network adapter (NA), that supports improved data transfer request buffering and a more efficient method of determining the completion status of data transfer requests. Transfer requests received by the NA are stored in a first buffer then transmitted on a network interface. When significant network delays are detected and the first buffer is full, the NA sets a flag to stop software issuing transfer requests. Compliant software checks this flag before sending requests and does not issue further requests. A second NA buffer stores additional received transfer requests that were perhaps in-transit. When conditions improve the flag is cleared and the first buffer used again. Completion status is efficiently determined by grouping network transfer requests. The NA counts received requests and completed network requests for each group. Software determines if a group of requests is complete by reading a count value.
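A sketch of the throttle flag and grouped completion counting described above, under stated assumptions: the buffer sizes, the `NetworkAdapter` class, and the group-id bookkeeping are invented here, not taken from the patent.

```python
# Illustrative model: the NA buffers transfer requests in a primary
# buffer; when it fills, a throttle flag tells compliant software to
# stop issuing, while a second buffer absorbs requests already in
# flight. Completion is tracked per request group with simple counters.
from collections import deque

class NetworkAdapter:
    def __init__(self, primary_size=4, overflow_size=2):
        self.primary = deque(maxlen=primary_size)
        self.overflow = deque(maxlen=overflow_size)  # absorbs in-transit requests
        self.throttle = False            # flag software checks before sending
        self.received = {}               # group id -> requests received
        self.completed = {}              # group id -> requests completed

    def submit(self, request, group):
        self.received[group] = self.received.get(group, 0) + 1
        if self.throttle or len(self.primary) == self.primary.maxlen:
            self.throttle = True         # stop software issuing further requests
            self.overflow.append((request, group))
        else:
            self.primary.append((request, group))

    def network_drained(self):
        # Conditions improved: transmit buffered requests, refill the
        # primary buffer from overflow, and clear the throttle flag.
        while self.primary:
            _, group = self.primary.popleft()
            self.completed[group] = self.completed.get(group, 0) + 1
        while self.overflow:
            self.primary.append(self.overflow.popleft())
        self.throttle = False

    def group_done(self, group):
        # Software determines completion of a whole group from one count.
        return self.completed.get(group, 0) == self.received.get(group, 0)
```

Grouping is the efficiency win: software polls one counter per group instead of tracking the completion status of every individual transfer request.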
    • 6. Granted invention patent
    • Network use of virtual addresses without pinning or registration
    • US08234407B2
    • 2012-07-31
    • US12495805
    • 2009-06-30
    • Rabin A. Sugumar, Robert W. Wittosch, Bjørn Dag Johnsen, William M. Ortega
    • Rabin A. Sugumar, Robert W. Wittosch, Bjørn Dag Johnsen, William M. Ortega
    • G06F15/16, G06F13/36, G06F12/00
    • G06F12/1027, G06F12/1081
    • A system comprising a compute node and coupled network adapter (NA) that allows the NA to directly use CPU virtual addresses without pinning pages in system memory. The NA performs memory accesses in response to requests from various sources. Each request source is assigned to context. Each context has a descriptor that controls the address translation performed by the NA. When the CPU wants to update translation information it sends a synchronization request to the NA that causes the NA to stop fetching a category of requests associated with the information update. The category may be requests associated with a context or a page address. Once the NA determines that all the fetched requests in the category have completed it notifies the CPU and the CPU performs the information update. Once the update is complete, the CPU clears the synchronization request and the NA starts fetching requests in the category.
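The synchronization handshake in the abstract can be sketched as below. This is a hedged model only: the context/descriptor layout, the busy-wait in `cpu_update_translation`, and all names are assumptions, and a real NA would notify the CPU asynchronously when the category drains.

```python
# Sketch of the quiesce-update-resume protocol: before the CPU rewrites
# translation information, it asks the NA to stop fetching the affected
# category of requests, waits for fetched requests to complete, updates,
# then clears the synchronization request so fetching resumes.

class NetworkAdapterMMU:
    def __init__(self):
        self.contexts = {}     # context id -> {virtual page: physical page}
        self.blocked = set()   # categories the NA must not fetch from
        self.in_flight = {}    # category -> fetched-but-unfinished requests

    def sync_request(self, category):
        # CPU asks the NA to stop fetching this category; returns True
        # once all already-fetched requests in the category are done.
        self.blocked.add(category)
        return self.in_flight.get(category, 0) == 0

    def request_finished(self, category):
        self.in_flight[category] -= 1

    def clear_sync(self, category):
        # CPU finished updating; NA starts fetching the category again.
        self.blocked.discard(category)

def cpu_update_translation(na, ctx, vpage, ppage):
    """CPU side: quiesce the context, write the new mapping, release."""
    while not na.sync_request(ctx):
        pass  # real hardware would signal the CPU instead of polling
    na.contexts.setdefault(ctx, {})[vpage] = ppage
    na.clear_sync(ctx)
```

Because the NA tolerates a mapping changing underneath it (it just drains and re-fetches), pages never need to be pinned or registered in advance.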
    • 7. Invention patent application
    • Network Use of Virtual Addresses Without Pinning or Registration
    • US20100332789A1
    • 2010-12-30
    • US12495805
    • 2009-06-30
    • Rabin A. Sugumar, Robert W. Wittosch, Bjørn Dag Johnsen, William M. Ortega
    • Rabin A. Sugumar, Robert W. Wittosch, Bjørn Dag Johnsen, William M. Ortega
    • G06F12/10
    • G06F12/1027, G06F12/1081
    • A system comprising a compute node and coupled network adapter (NA) that allows the NA to directly use CPU virtual addresses without pinning pages in system memory. The NA performs memory accesses in response to requests from various sources. Each request source is assigned to context. Each context has a descriptor that controls the address translation performed by the NA. When the CPU wants to update translation information it sends a synchronization request to the NA that causes the NA to stop fetching a category of requests associated with the information update. The category may be requests associated with a context or a page address. Once the NA determines that all the fetched requests in the category have completed it notifies the CPU and the CPU performs the information update. Once the update is complete, the CPU clears the synchronization request and the NA starts fetching requests in the category.
    • 8. Granted invention patent
    • Multiple processes sharing a single infiniband connection
    • US09596186B2
    • 2017-03-14
    • US12495586
    • 2009-06-30
    • Bjørn Dag Johnsen, Rabin A. Sugumar, Ola Torudbakken
    • Bjørn Dag Johnsen, Rabin A. Sugumar, Ola Torudbakken
    • H04L12/56, H04L12/863, H04L12/867
    • H04L47/621, H04L47/629
    • A compute node with multiple transfer processes that share an Infiniband connection to send and receive messages across a network. Transfer processes are first associated with an Infiniband queue pair (QP) connection. Then send message commands associated with a transfer process are issued. This causes an Infiniband message to be generated and sent, via the QP connection, to a remote compute node corresponding to the QP. Send message commands associated with another process are also issued. This causes another Infiniband message to be generated and sent, via the same QP connection, to the same remote compute node. As mentioned, multiple processes may receive network messages received via a shared QP connection. A transfer process on a receiving compute node receives a network message through a QP connection using a receive queue. A second transfer process receives another message through the same QP connection using another receive queue.
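The shared-QP arrangement above can be modeled as below. This is a loose sketch with invented names: a real queue pair connects two nodes and uses InfiniBand verbs, whereas here a single object stands in for the connection and inbound messages are steered by a hypothetical destination-process id.

```python
# Illustrative model: several transfer processes share one queue pair
# (QP) connection. Sends from any attached process go out over the same
# QP; delivered messages are steered to per-process receive queues.
from collections import deque

class QueuePair:
    """One shared QP connection to a remote compute node."""
    def __init__(self):
        self.wire = deque()        # messages in flight on this connection
        self.recv_queues = {}      # process id -> that process's receive queue

    def attach(self, pid):
        # Each transfer process associates itself with the QP and gets
        # its own receive queue.
        self.recv_queues[pid] = deque()

    def send(self, src_pid, dst_pid, payload):
        # Send commands from different processes share the single connection.
        self.wire.append((src_pid, dst_pid, payload))

    def deliver(self):
        # Demultiplex received messages to the destination process's queue.
        while self.wire:
            src, dst, payload = self.wire.popleft()
            self.recv_queues[dst].append((src, payload))
```

Sharing one QP this way avoids the per-process connection state that a dedicated QP per process pair would require, which is the scalability point of the claim.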
    • 9. Granted invention patent
    • Software aware throttle based flow control
    • US08843651B2
    • 2014-09-23
    • US12495452
    • 2009-06-30
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Lars Paul Huse, William M. Ortega
    • Rabin A. Sugumar, Bjørn Dag Johnsen, Lars Paul Huse, William M. Ortega
    • G06F15/16, H04L12/931, H04L12/835, H04L12/801, H04L12/825, H04L12/841, H04L12/861
    • H04L41/065, H04L47/10, H04L47/26, H04L47/283, H04L47/30, H04L49/00, H04L49/90
    • A system, comprising a compute node and coupled network adapter (NA), that supports improved data transfer request buffering and a more efficient method of determining the completion status of data transfer requests. Transfer requests received by the NA are stored in a first buffer then transmitted on a network interface. When significant network delays are detected and the first buffer is full, the NA sets a flag to stop software issuing transfer requests. Compliant software checks this flag before sending requests and does not issue further requests. A second NA buffer stores additional received transfer requests that were perhaps in-transit. When conditions improve the flag is cleared and the first buffer used again. Completion status is efficiently determined by grouping network transfer requests. The NA counts received requests and completed network requests for each group. Software determines if a group of requests is complete by reading a count value.
    • 10. Granted invention patent
    • Scalable interface for connecting multiple computer systems which performs parallel MPI header matching
    • US08249072B2
    • 2012-08-21
    • US12402804
    • 2009-03-12
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • Rabin A. Sugumar, Lars Paul Huse, Bjørn Dag Johnsen
    • H04L12/28
    • G06F15/17337
    • An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching.