    • 4. Published patent application
    • Title: Application-reconfigurable split cache memory
    • Title (DE): Durch ein Anwendungsprogramm wiederkonfigurierbarer aufgeteilter Cache-Speicher
    • Publication number: EP0999500A1
    • Publication date: 2000-05-10
    • Application number: EP99308403.7
    • Filing date: 1999-10-25
    • Applicant: LUCENT TECHNOLOGIES INC.
    • Inventors: Nicol, Christopher John; Singh, Kanwar Jit; Terman, Christopher J.; Williams, Joseph
    • IPC: G06F12/08
    • CPC: G06F12/0848; G06F12/0853; G06F2212/2515; G06F2212/6012
    • Abstract: Substantial advantages are realized from a processing-element architecture that allows a local memory to be divided almost at will between an instruction cache portion, an instruction SRAM portion, a data SRAM portion, and a conventional data cache portion. The processing element comprises a processor and a memory module. The memory module comprises a plurality of memory submodules with associated interface circuitry, a controller, and at least one configuration register that controls whether a particular memory submodule is employed as an instruction submodule or a data submodule. In embodiments with a second configuration register, that register controls whether a particular memory submodule is employed as a cache submodule or as an SRAM. Making the configuration registers addressable allows application programs to control the number of memory submodules assigned to each mode. In the illustrated embodiment, the processor can override the SRAM/cache assignments of the configuration register. By providing 2-port access, this architecture also extends its advantages to Harvard-architecture processors.
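As a rough illustration of the mode-selection scheme the abstract describes, the C sketch below models two addressable configuration registers in which one bit per submodule selects instruction versus data use and cache versus SRAM mode. The register names, bit layout, and submodule count are assumptions made for illustration, not details taken from the patent.

    /* Minimal sketch of the configuration scheme described in the abstract
     * above. Register names, bit layout, and submodule count are illustrative
     * assumptions, not details taken from the patent. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SUBMODULES 8

    /* Stand-ins for the addressable configuration registers: bit i of
     * cfg_instr_data selects instruction (1) or data (0) use for submodule i;
     * bit i of cfg_cache_sram selects cache (1) or SRAM (0) mode. An
     * application program would write these to repartition the local memory. */
    static uint8_t cfg_instr_data = 0x0F;  /* submodules 0-3 on the instruction side */
    static uint8_t cfg_cache_sram = 0x33;  /* submodules 0, 1, 4, 5 in cache mode */

    static const char *submodule_role(int i)
    {
        int instr = (cfg_instr_data >> i) & 1;
        int cache = (cfg_cache_sram >> i) & 1;
        if (instr)
            return cache ? "instruction cache" : "instruction SRAM";
        return cache ? "data cache" : "data SRAM";
    }

    int main(void)
    {
        for (int i = 0; i < NUM_SUBMODULES; i++)
            printf("submodule %d: %s\n", i, submodule_role(i));
        return 0;
    }

Writing different values into the two registers changes how many submodules serve each of the four roles, which is the application-driven repartitioning the abstract claims.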
    • 6. Published patent application
    • Title: CONFIGURABLE CACHE FOR MULTIPLE CLIENTS
    • Title (DE): KONFIGURIERBARER CACHE FÜR MEHRERE CLIENTS
    • Publication number: EP2480975A4
    • Publication date: 2014-05-21
    • Application number: EP10819549
    • Filing date: 2010-09-24
    • Applicant: NVIDIA CORP
    • Inventors: MINKIN ALEXANDER L; HEINRICH STEVEN JAMES; SELVANESAN RAJESHWARAN; COON BRETT W; MCCARVER CHARLES; RAJENDRAN ANJANA; CARLTON STEWART G
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/084; G06F2212/2515; G06F2212/301; G06F2212/6012
    • Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace the dedicated buffers, caches, and FIFOs of previous architectures. A "direct mapped" storage region configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct-mapped storage region may also be used as a global register file. A "local and global cache" storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces, including global, local, and call-return stack (CRS) memory.
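As a rough illustration of the reconfiguration the abstract describes, the C sketch below models splitting a single L1 storage budget between a direct-mapped region and a local/global cache region. The capacity, struct fields, and l1_configure function are assumptions made for illustration; the patent abstract does not specify this interface.

    /* Minimal sketch of splitting a unified L1 storage array between a
     * "direct mapped" region and a "local and global cache" region, as the
     * abstract above describes. Capacity, names, and the configuration
     * function are illustrative assumptions, not NVIDIA's actual interface. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define L1_TOTAL_BYTES (64u * 1024u)  /* assumed total L1 capacity */

    typedef struct {
        uint32_t direct_mapped_bytes;  /* attribute/primitive exchange, register-file style use */
        uint32_t cache_bytes;          /* load/store caching for global, local, and CRS memory */
    } l1_config;

    /* Repartition the L1; a real design would drain outstanding requests and
     * invalidate affected tag state before changing the split. */
    static l1_config l1_configure(uint32_t direct_mapped_bytes)
    {
        assert(direct_mapped_bytes <= L1_TOTAL_BYTES);
        l1_config cfg = {
            .direct_mapped_bytes = direct_mapped_bytes,
            .cache_bytes = L1_TOTAL_BYTES - direct_mapped_bytes,
        };
        return cfg;
    }

    int main(void)
    {
        /* Example: give 16 KiB to the direct-mapped region, the rest to the cache. */
        l1_config cfg = l1_configure(16u * 1024u);
        printf("direct-mapped: %u bytes, cache: %u bytes\n",
               (unsigned)cfg.direct_mapped_bytes, (unsigned)cfg.cache_bytes);
        return 0;
    }

The point of the sketch is only the partitioning idea: one physical storage array whose capacity is divided at configuration time among clients with different access patterns, rather than separate fixed-function buffers.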