    • 53. Invention Publication
    • MEMORY MIGRATION METHOD AND DEVICE
    • VERFAHREN UND VORRICHTUNG FÜR SPEICHERMIGRATION
    • EP3131015A1
    • 2017-02-15
    • EP15840667.8
    • 2015-06-01
    • Huawei Technologies Co., Ltd.
    • CHU, Lixing
    • G06F12/06
    • G06F3/0647; G06F3/06; G06F3/0619; G06F3/064; G06F3/0644; G06F3/0683; G06F9/50; G06F9/5016; G06F9/5022; G06F9/5033; G06F12/06; G06F12/0813; G06F2212/2542
    • A memory migration method and device are provided, and relate to the field of computer application technologies. A memory page is combined into a memory block, which reduces a quantity of migrations, and improves CPU utilization. The method includes: receiving, by a first node, a migration instruction sent by a second node (201); sequentially scanning each memory page between a physical address of a start memory page accessed by a target process and a physical address of an end memory page accessed by the target process (202), where the memory page is a memory page accessed by the target process or a memory page accessed by a non-target process; determining whether each memory page meets a block combination condition, and combining a memory page that meets the block combination condition into a corresponding memory block (203); and migrating the corresponding memory block to a memory area of the second node (204).
    • 54. Invention Publication
    • PARALLEL COMPUTER, MIGRATION PROGRAM AND MIGRATION METHOD
    • EP3101548A1
    • 2016-12-07
    • EP16170797.1
    • 2016-05-23
    • FUJITSU LIMITED
    • NINOMIYA, Atsushi
    • G06F12/08; G06F9/48
    • G06F3/0605; G06F3/0647; G06F3/0673; G06F9/485; G06F12/0813; G06F12/0815; G06F12/084; G06F12/0842; G06F2212/1024; G06F2212/2542; G06F2212/62
    • A parallel computer includes a first node and a second node, each including a memory having a plurality of memory areas and a cache memory, and a processing unit. When execution of a job carried out on the first node is continued by migrating the job to the second node, the processing unit acquires a first group of index levels of the cache memory corresponding to the addresses of a first plurality of memory areas that store data accessed by the job in the first node, judges whether or not the second node has a second plurality of memory areas in a usable state corresponding to a second group of index levels that has the same or a relative positional relation with the first group of index levels, and relocates the data to the second plurality of memory areas when the second node has them.
    • 59. Invention Publication
    • MEMORY ACCESS MONITORING METHOD AND DEVICE
    • SPEICHERZUGANGSÜBERWACHUNGSVERFAHREN UND -VORRICHTUNG
    • EP2437433A2
    • 2012-04-04
    • EP11750199.9
    • 2011-04-19
    • Huawei Technologies Co., Ltd.
    • ZHANG, Xiaofeng; FANG, Fan
    • H04L12/24
    • G06F12/0284; G06F11/3466; G06F12/122; G06F2201/88; G06F2201/885; G06F2212/2542
    • A memory access monitoring method and a memory access monitoring device are disclosed, wherein the memory access monitoring method comprises: performing coarse grain monitoring on local memory pages; if a hot page with coarse grain monitoring exists in the local memory pages, requesting an operating system to perform an optimized migration for the content of the hot page, and if a half hot page with coarse grain monitoring exists in the local memory pages, initiating fine grain monitoring on the half hot page; and performing fine grain monitoring on the half hot page; if a hot area with fine grain monitoring exists in the half hot page, requesting the operating system to perform an optimized migration for the content of the hot area. The combination of coarse grain monitoring and fine grain monitoring is employed in the embodiments of the present invention to reduce the number of counters required by memory access monitoring, which can effectively recognize cross-node hot areas that need to be optimized and improve memory access optimization efficiency in a non-uniform memory access (NUMA) architecture.
    • 60. Invention Publication
    • SCALABLE INDEXING IN A NON-UNIFORM ACCESS MEMORY
    • EP2433227A1
    • 2012-03-28
    • EP10731653.1
    • 2010-06-25
    • Simplivity Corporation
    • BOWDEN, Paul; BEAVERSON, Arthur, J.
    • G06F17/30
    • G06F12/1054; G06F12/0246; G06F12/0864; G06F12/0875; G06F12/1408; G06F17/30097; G06F17/30949; G06F17/30952; G06F2212/2542; G06F2212/402; G06F2212/452; G06F2212/502; G06F2212/6032; G06F2212/7211
    • Method and apparatus for constructing an index that scales to a large number of records and provides a high transaction rate. New data structures and methods are provided to ensure that an indexing algorithm performs in a way that is natural (efficient) to the algorithm, while a non-uniform access memory device sees IO (input/output) traffic that is efficient for the memory device. One data structure, a translation table, is created that maps logical buckets as viewed by the indexing algorithm to physical buckets on the memory device. This mapping is such that write performance to non-uniform access SSD and flash devices is enhanced. Another data structure, an associative cache, is used to collect buckets and write them out sequentially to the memory device as large sequential writes. Methods are used to populate the cache with buckets (of records) that are required by the indexing algorithm. Additional buckets may be read from the memory device into the cache during a demand read, or by a scavenging process, to facilitate the generation of free erase blocks.