    • 81. Invention Grant
    • Power reduction in server memory system
    • US09311228B2
    • 2016-04-12
    • US13439457
    • 2012-04-04
    • David M. Daly; Tejas Karkhanis; Valentina Salapura
    • David M. Daly; Tejas Karkhanis; Valentina Salapura
    • G06F12/00; G06F12/02; G06F11/34; G06F12/06
    • G06F12/023; G06F11/3409; G06F11/3471; G06F12/06; G06F2201/81; G06F2201/88; G06F2212/1028; G06F2212/2532; G06F2212/502; Y02D10/13; Y02D10/34
    • A system and method for reducing power consumption of memory chips outside of a host processor device in operative communication with the memory chips via a memory controller. The memory can operate in modes such that, via the memory controller, the stored data can be localized and moved at various granularities among ranks established in the chips, resulting in fewer operating ranks. Memory chips may then be turned on and off based on host memory access usage levels at each rank in the chip. Host memory access usage levels at each rank are tracked by performance counters established for association with each rank of a memory chip. Turning the memory chips on and off is based on a mapping maintained between ranks and address locations corresponding to sub-sections within each rank receiving the host processor access requests.
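The mechanism this abstract describes — per-rank performance counters driving consolidation of data onto fewer ranks so idle ranks can be powered down — can be sketched as a small simulation. All class and method names below are invented for illustration; this is a toy model of the idea, not the patented controller.

```python
# Toy model: per-rank access counters decide which DRAM ranks stay
# powered at the end of each monitoring epoch.

class RankPowerManager:
    def __init__(self, num_ranks, idle_threshold=10):
        self.num_ranks = num_ranks
        self.counters = [0] * num_ranks      # one performance counter per rank
        self.powered = [True] * num_ranks    # rank on/off state
        self.idle_threshold = idle_threshold

    def record_access(self, rank):
        """Track host memory accesses at rank granularity."""
        self.counters[rank] += 1

    def epoch_end(self):
        """At the end of a monitoring epoch, power down cold ranks.

        A real controller would first localize (migrate) live data out
        of a rank before removing power; here we only flag the state.
        """
        for rank in range(self.num_ranks):
            cold = self.counters[rank] < self.idle_threshold
            self.powered[rank] = not cold
            self.counters[rank] = 0          # reset for the next epoch
        return self.powered

mgr = RankPowerManager(num_ranks=4)
for _ in range(50):
    mgr.record_access(0)                     # rank 0 is hot
mgr.record_access(2)                         # rank 2 barely used
print(mgr.epoch_end())                       # → [True, False, False, False]
```

The epoch-based reset mirrors the abstract's idea of tracking usage levels over time; the data-migration step that would precede power-down is deliberately omitted.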
    • 82. Invention Grant
    • Mechanism for reducing cache power consumption using cache way prediction
    • US09311098B2
    • 2016-04-12
    • US13888551
    • 2013-05-07
    • Apple Inc.
    • Ronald P. Hall; Conrado Blasco-Allue
    • G06F12/00; G06F9/38; G06F12/08
    • G06F9/3806; G06F9/3838; G06F12/0864; G06F2212/1028; G06F2212/6082; Y02D10/13
    • A mechanism for reducing power consumption of a cache memory of a processor includes a processor with a cache memory that stores instruction information for one or more instruction fetch groups fetched from a system memory. The cache memory may include a number of ways that are each independently controllable. The processor also includes a way prediction unit. In response to a branch-taken prediction for a next branch instruction, the way prediction unit may enable, in a next execution cycle, a given way within which instruction information corresponding to a target of that branch instruction is stored. Also in response to the branch-taken prediction, the way prediction unit may enable, one at a time, each corresponding way within which instruction information corresponding to the respective sequential instruction fetch groups that follow the next branch instruction is stored.
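The way-prediction idea above — on a predicted-taken branch, enable only the way holding the branch target's fetch group and leave the other ways powered off — can be sketched as follows. Class and method names are invented for illustration; a hardware predictor would use address tags, not a Python dict.

```python
# Toy model of cache way prediction: only the predicted way is enabled
# for the next cycle; an unknown target falls back to enabling all ways.

class WayPredictedCache:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        # predictor: branch target address -> way holding its fetch group
        self.way_predictor = {}
        self.enabled_ways = set(range(num_ways))  # default: all ways on

    def train(self, target_addr, way):
        """Record which way a branch target's fetch group landed in."""
        self.way_predictor[target_addr] = way

    def on_branch_taken_prediction(self, target_addr):
        """Enable only the predicted way for the next execution cycle."""
        way = self.way_predictor.get(target_addr)
        if way is not None:
            self.enabled_ways = {way}             # power-gate the rest
        else:
            self.enabled_ways = set(range(self.num_ways))  # safe fallback
        return self.enabled_ways

cache = WayPredictedCache(num_ways=4)
cache.train(0x4000, way=2)
print(cache.on_branch_taken_prediction(0x4000))   # → {2}
print(cache.on_branch_taken_prediction(0x8000))   # → {0, 1, 2, 3}
```

The power saving comes from not driving the tag/data arrays of the disabled ways; the fallback to all ways on an unknown target trades power for correctness.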
    • 87. Invention Application
    • ARITHMETIC PROCESSING APPARATUS AND METHOD FOR CONTROLLING SAME
    • US20150309934A1
    • 2015-10-29
    • US14672284
    • 2015-03-30
    • FUJITSU LIMITED
    • Ryuichi SUNAYAMA
    • G06F12/08
    • G06F12/0607; G06F12/0811; G06F12/0826; G06F2212/1024; G06F2212/1028; Y02D10/13
    • An arithmetic processing apparatus includes: first and second core groups, each including cores, first to Nth (N > 1) caches that process access requests from the cores, and an intra-core-group bus through which the access requests from the cores are provided to the first to Nth caches; and first to Nth inter-core-group buses, each provided between the corresponding pair of the first to Nth caches in the first and second core groups. The first to Nth caches in the first core group store data from first to Nth memory spaces in a memory, respectively. The first to Nth caches in the second core group store data from N+1th to 2Nth memory spaces, respectively. The first to Nth caches in the first core group access the data in the N+1th to 2Nth memory spaces, respectively, via the first to Nth inter-core-group buses.
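The address-routing scheme in this abstract — each group's k-th cache owns memory space k of that group, and a request for a space owned by the other group travels over the k-th inter-core-group bus to the peer's k-th cache — can be sketched with a small routing function. The function and its zero-based numbering of spaces are illustrative assumptions, not the patent's exact scheme.

```python
# Toy routing model: 2 core groups, N caches per group. Group 0's caches
# back memory spaces 0..N-1; group 1's back spaces N..2N-1. A request
# for a remote space uses the matching inter-core-group bus.

N = 4  # number of caches (and memory spaces) per core group

def route(core_group, address_space):
    """Return (serving_group, cache_index, via_inter_group_bus)."""
    home_group = address_space // N        # which group owns this space
    cache_index = address_space % N        # the k-th cache serves space k
    remote = home_group != core_group      # remote -> use inter-group bus k
    return home_group, cache_index, remote

# Group 0 touching its own space 2: served locally by its cache 2.
print(route(0, 2))   # → (0, 2, False)
# Group 0 touching space N+2 = 6: cache 2 of group 1, over bus 2.
print(route(0, 6))   # → (1, 2, True)
```

Because each of the N buses links only the matching pair of caches, remote traffic for different spaces never contends for the same bus, which is the point of providing N separate inter-core-group buses rather than one shared link.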