    • 1. Granted patent
    • Seamless interface for multi-threaded core accelerators
    • Publication number: US08683175B2
    • Publication date: 2014-03-25
    • Application number: US13048214
    • Filing date: 2011-03-15
    • Inventors: Kattamuri Ekanadham; Hung Q. Le; Jose E. Moreira; Pratap C. Pattnaik
    • IPC: G06F9/30; G06F12/10
    • CPC: G06F9/3877; G06F9/30043; G06F9/3012; G06F9/30123; G06F9/3851; G06F12/1027
    • Abstract: A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and a hardware accelerator. In one embodiment, the method comprises copying memory address translations from the processing core to the hardware accelerator for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of those threads instructs the hardware accelerator to perform a specified operation, the accelerator already holds one or more of that thread's memory address translations, so the operation can start without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated translation to the hardware accelerator.
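The address-translation mirroring described in the abstract above can be sketched as a toy model: the core eagerly copies every per-thread translation update to the accelerator, so the accelerator can begin work on any thread's behalf without faulting. This is an illustrative sketch only, not code from the patent; all class and method names (`Accelerator`, `Core`, `copy_translation`, and so on) are invented for the example, and a plain dict stands in for the hardware translation table.

```python
class Accelerator:
    """Toy accelerator that mirrors per-thread address translations."""

    def __init__(self):
        # thread_id -> {virtual_page: physical_page}
        self.translations = {}

    def copy_translation(self, thread_id, vpage, ppage):
        # The core pushes (or refreshes) one translation for one thread.
        self.translations.setdefault(thread_id, {})[vpage] = ppage

    def start_operation(self, thread_id, vpage):
        # Because translations were mirrored ahead of time, the
        # operation can start without a translation fault.
        mapping = self.translations.get(thread_id, {})
        if vpage not in mapping:
            raise LookupError("translation fault")
        return mapping[vpage]


class Core:
    """Multi-threaded core that forwards every translation update."""

    def __init__(self, accel):
        self.accel = accel
        self.page_table = {}  # (thread_id, vpage) -> ppage

    def map_page(self, thread_id, vpage, ppage):
        self.page_table[(thread_id, vpage)] = ppage
        # Key idea from the abstract: each update is copied eagerly.
        self.accel.copy_translation(thread_id, vpage, ppage)


accel = Accelerator()
core = Core(accel)
core.map_page(thread_id=0, vpage=0x10, ppage=0xA0)
core.map_page(thread_id=1, vpage=0x10, ppage=0xB0)
# Either thread can now hand work to the accelerator fault-free.
assert accel.start_operation(0, 0x10) == 0xA0
assert accel.start_operation(1, 0x10) == 0xB0
```

The design point the sketch captures is that the mirror is kept current on every update, rather than being populated on demand after a fault.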
    • 3. Granted patent
    • Method for providing high performance scalable file I/O through persistent file domain and functional partitioning
    • Publication number: US07721009B2
    • Publication date: 2010-05-18
    • Application number: US11604162
    • Filing date: 2006-11-22
    • Inventors: Jose E. Moreira; Ramendra K. Sahoo; Hao Yu
    • IPC: G06F3/00
    • CPC: G06F17/30171; G06F17/30224
    • Abstract: A method for implementing large-scale parallel file I/O processing includes the steps of: separating processing nodes into compute nodes, which specialize in computation, and I/O nodes (processors restricted to running I/O daemons); and organizing the compute nodes and I/O nodes into processing sets, each containing one dedicated I/O node and a plurality of compute nodes. I/O-related system calls are received on the compute nodes and forwarded to the corresponding I/O nodes, where they are processed by a system I/O daemon. The compute nodes are evenly distributed across the participating processing sets. Additionally, for collective I/O operations, compute nodes from each processing set are assigned as I/O aggregators that issue I/O requests to their corresponding I/O node, with the aggregators evenly distributed across the processing set. Finally, the file domain is partitioned using a collective buffering technique, in which data is aggregated in memory before being written to a file, and portions of the partitioned file domain are assigned to the processing sets.
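The processing-set layout in the abstract above (one dedicated I/O node per group of compute nodes, with I/O aggregators spread evenly within each group) can be sketched as a partitioning function. This is a hypothetical illustration: the function name, parameters, and dict layout are invented for the example and do not come from the patent.

```python
def build_processing_sets(compute_nodes, io_nodes, aggregators_per_pset=2):
    """Partition nodes into processing sets: one dedicated I/O node per
    group of compute nodes, with a few compute nodes in each set marked
    as I/O aggregators, spread evenly across the group."""
    if len(compute_nodes) % len(io_nodes) != 0:
        raise ValueError("compute nodes must divide evenly among I/O nodes")
    per_pset = len(compute_nodes) // len(io_nodes)
    psets = []
    for i, io_node in enumerate(io_nodes):
        members = compute_nodes[i * per_pset:(i + 1) * per_pset]
        # Pick aggregators at an even stride through the member list.
        step = max(1, per_pset // aggregators_per_pset)
        aggregators = members[::step][:aggregators_per_pset]
        psets.append({"io_node": io_node,
                      "compute_nodes": members,
                      "aggregators": aggregators})
    return psets


# 8 compute nodes and 2 I/O nodes -> 2 processing sets of 4, each with
# its own I/O node and 2 evenly spaced aggregators.
psets = build_processing_sets(list(range(8)), ["io0", "io1"])
```

In a collective I/O operation, only the aggregators would issue requests to their set's I/O node, which is why spreading them evenly matters for load balance.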
    • 4. Patent application
    • Method for providing high performance scalable file I/O through persistent file domain and functional partitioning
    • Publication number: US20080120435A1
    • Publication date: 2008-05-22
    • Application number: US11604162
    • Filing date: 2006-11-22
    • Inventors: Jose E. Moreira; Ramendra K. Sahoo; Hao Yu
    • IPC: G06F3/00
    • CPC: G06F17/30171; G06F17/30224
    • Abstract: A method for implementing large-scale parallel file I/O processing includes the steps of: separating processing nodes into compute nodes, which specialize in computation, and I/O nodes (processors restricted to running I/O daemons); and organizing the compute nodes and I/O nodes into processing sets, each containing one dedicated I/O node and a plurality of compute nodes. I/O-related system calls are received on the compute nodes and forwarded to the corresponding I/O nodes, where they are processed by a system I/O daemon. The compute nodes are evenly distributed across the participating processing sets. Additionally, for collective I/O operations, compute nodes from each processing set are assigned as I/O aggregators that issue I/O requests to their corresponding I/O node, with the aggregators evenly distributed across the processing set. Finally, the file domain is partitioned using a collective buffering technique, in which data is aggregated in memory before being written to a file, and portions of the partitioned file domain are assigned to the processing sets.
    • 6. Patent application
    • COMMUNICATIONS SUPPORT IN A TRANSACTIONAL MEMORY
    • Publication number: US20110258347A1
    • Publication date: 2011-10-20
    • Application number: US12763813
    • Filing date: 2010-04-20
    • Inventors: Jose E. Moreira; Patricia M. Sagmeister
    • IPC: G06F3/00; G06F13/00
    • CPC: G06F9/467
    • Abstract: A system, method and computer program product are provided for supporting transactional memory communications. In one embodiment, the system comprises a transactional memory host with a host transactional memory buffer, an endpoint device, a transactional memory buffer associated with the endpoint device, and a communication path connecting the endpoint device and the host. Input/output transactions associated with the endpoint device that are executed in transactional memory on the host are stored in both the host transactional memory buffer and the transactional memory buffer associated with the endpoint device. In an embodiment, the system further comprises an intermediate device located on the communication path between the host and the endpoint device, together with an intermediate transactional memory buffer associated with that device; in this embodiment, input/output transactions associated with the endpoint device are also stored in the intermediate transactional memory buffer.
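The dual-buffer arrangement described in the abstract above (I/O transactions logged in both a host-side and an endpoint-side transactional memory buffer, so they take effect only if the transaction commits) might be modeled roughly as follows. This is a minimal sketch under assumed semantics; all class and method names are invented, and the real mechanism is hardware- and driver-level, not application code.

```python
class TxBuffer:
    """Buffer that holds I/O operations until the transaction resolves."""

    def __init__(self):
        self.pending = []

    def log(self, op):
        self.pending.append(op)

    def commit(self):
        # Release (and return) the buffered operations.
        done, self.pending = self.pending, []
        return done

    def abort(self):
        # Discard everything; no externally visible I/O happens.
        self.pending = []


class TxSystem:
    """Toy model: each transactional I/O is logged in both the host-side
    and the endpoint-side buffer, mirroring the paired buffers in the
    abstract above."""

    def __init__(self):
        self.host_buf = TxBuffer()
        self.endpoint_buf = TxBuffer()

    def tx_io(self, op):
        self.host_buf.log(op)
        self.endpoint_buf.log(op)

    def commit(self):
        self.host_buf.commit()
        # In this model the endpoint performs the buffered I/O on commit.
        return self.endpoint_buf.commit()

    def abort(self):
        self.host_buf.abort()
        self.endpoint_buf.abort()


tm = TxSystem()
tm.tx_io("write(dev, 0x00)")
tm.tx_io("write(dev, 0x10)")
assert tm.commit() == ["write(dev, 0x00)", "write(dev, 0x10)"]
```

On abort, both buffers are simply cleared, which is the property buffering is meant to provide: no device-visible I/O escapes a failed transaction.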
    • 10. Patent application
    • SEAMLESS INTERFACE FOR MULTI-THREADED CORE ACCELERATORS
    • Publication number: US20120239904A1
    • Publication date: 2012-09-20
    • Application number: US13048214
    • Filing date: 2011-03-15
    • Inventors: Kattamuri Ekanadham; Hung Q. Le; Jose E. Moreira; Pratap C. Pattnaik
    • IPC: G06F9/30; G06F12/10
    • CPC: G06F9/3877; G06F9/30043; G06F9/3012; G06F9/30123; G06F9/3851; G06F12/1027
    • Abstract: A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and a hardware accelerator. In one embodiment, the method comprises copying memory address translations from the processing core to the hardware accelerator for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of those threads instructs the hardware accelerator to perform a specified operation, the accelerator already holds one or more of that thread's memory address translations, so the operation can start without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated translation to the hardware accelerator.