    • 3. Invention Application
    • Title: DRAM CACHE WITH TAGS AND DATA JOINTLY STORED IN PHYSICAL ROWS
    • Publication No.: US20130138892A1
    • Publication Date: 2013-05-30
    • Application No.: US13307776
    • Filing Date: 2011-11-30
    • Inventors: Gabriel H. Loh, Mark D. Hill
    • IPC: G06F12/12
    • CPC: G06F12/0893; G06F12/0831; G06F12/0864; G06F12/0879; G06F12/123; Y02D10/13
    • Abstract: A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption.
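The key idea in the abstract above is that a set's tags and its cache lines are packed into the same physical DRAM row, so one row activation serves both the tag check and the data access. The sketch below illustrates that layout in Python; the row, line, and tag sizes and all names are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch (not the patent's implementation): one DRAM row
# holds a set's tags and its cache lines, so a single row activation
# serves both the tag lookup and the data access.

ROW_BYTES = 2048   # assumed DRAM row (page) size
LINE_BYTES = 64    # assumed cache-line size
TAG_BYTES = 4      # assumed per-line tag size

# Each way costs one tag plus one line; pack as many ways as fit.
ways = ROW_BYTES // (TAG_BYTES + LINE_BYTES)   # 30 ways per row

class DramCacheRow:
    """One physical row: a tag array followed by the data lines."""
    def __init__(self):
        self.tags = [None] * ways              # one tag per way
        self.lines = [bytes(LINE_BYTES)] * ways

    def access(self, tag):
        # Single "complex transaction": the row is opened once, the
        # tags are scanned, and the matching line is returned.
        for way, t in enumerate(self.tags):
            if t == tag:
                return self.lines[way]         # hit
        return None                            # miss

row = DramCacheRow()
row.tags[3] = 0xABCD
row.lines[3] = b"\x11" * LINE_BYTES
```

Because the tags live in the row that was just opened, a hit needs no second row activation, which is the latency and power saving the abstract claims.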
    • 4. Invention Grant
    • Title: Computer system implementing synchronized broadcast using skew control and queuing
    • Publication No.: US07136980B2
    • Publication Date: 2006-11-14
    • Application No.: US10610447
    • Filing Date: 2003-06-30
    • Inventors: Robert E. Cypher, Mark D. Hill, David A. Wood
    • IPC: G06F12/06
    • CPC: G06F12/0833; G06F12/0813
    • Abstract: A mechanism and method for maintaining cache consistency in computer systems that implements synchronized broadcasts using skew control and queuing. An access right corresponding to a given block allocated in a first active device may be configured to transition in response to a corresponding data packet being received through a data network. Additionally, transitions in ownership of the given block may occur at a different time than the time at which the access right to the given block is changed. To implement synchronized broadcasts, the address and data networks are configured such that a maximum amount of time from when a given broadcast packet conveyed on the address network arrives at a first active device to a time when the given broadcast packet arrives at a second active device is less than or equal to a minimum amount of time from when a data packet sent on the data network from the first active device arrives at the second active device. Each of the active devices may further comprise a queue control circuit coupled to an address-in queue and a data-in queue. The queue control circuit may be configured to prevent processing of a particular data packet that arrived in the data-in queue until all address packets that arrived earlier in the address-in queue are processed.
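The queue control circuit described at the end of the abstract has a simple invariant: a data packet may not be processed while an address packet that arrived earlier is still waiting. A minimal software model of that ordering rule is sketched below; the class and method names, and the use of arrival timestamps, are assumptions for illustration.

```python
# Illustrative model (assumed names): a queue control that refuses to
# process a data packet until every address packet that arrived earlier
# has been processed, preserving the synchronized-broadcast ordering.
from collections import deque

class QueueControl:
    def __init__(self):
        self.address_in = deque()   # entries: (arrival_time, packet)
        self.data_in = deque()

    def arrive_address(self, t, pkt):
        self.address_in.append((t, pkt))

    def arrive_data(self, t, pkt):
        self.data_in.append((t, pkt))

    def next_packet(self):
        """Pop the next processable packet, earlier address packets first."""
        if self.data_in:
            t_data, _ = self.data_in[0]
            # Block the data packet while an earlier address packet waits.
            if self.address_in and self.address_in[0][0] <= t_data:
                return ("addr", self.address_in.popleft()[1])
            return ("data", self.data_in.popleft()[1])
        if self.address_in:
            return ("addr", self.address_in.popleft()[1])
        return None

qc = QueueControl()
qc.arrive_address(1, "A1")
qc.arrive_data(2, "D1")
qc.arrive_address(3, "A2")
order = [qc.next_packet() for _ in range(3)]
# A1 must precede D1; A2 arrived after D1, so it may follow it.
```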
    • 7. Invention Grant
    • Title: Efficient allocation of cache memory space in a computer system
    • Publication No.: US5893150A
    • Publication Date: 1999-04-06
    • Application No.: US675306
    • Filing Date: 1996-07-01
    • Inventors: Erik E. Hagersten, Mark D. Hill
    • IPC: G06F15/16; G06F12/08; G06F12/12
    • CPC: G06F12/0888; G06F12/0813
    • Abstract: An efficient cache allocation scheme is provided for both uniprocessor and multiprocessor computer systems having at least one cache. In one embodiment, upon the detection of a cache miss, a determination of whether the cache miss is "avoidable" is made. In other words, would the present cache miss have occurred if the data had been cached previously and if the data had remained in the cache. One example of an avoidable cache miss in a multiprocessor system having a distributed memory architecture is an excess cache miss. An excess cache miss is either a capacity miss or a conflict miss. A capacity miss is caused by the insufficient size of the cache. A conflict miss is caused by insufficient depth in the associativity of the cache. The determination of the excess cache miss involves tracking read and write requests for data by the various processors and storing some record of the read/write request history in a table or linked list. Data is cached only after an avoidable cache miss has occurred. By caching only after at least one avoidable cache miss instead of upon every (initial) access, cache space can be allocated in a highly efficient manner thereby minimizing the number of data fetches caused by cache misses.
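The allocation policy above can be reduced to: track which addresses have been requested before, skip allocation on a first-touch (cold) miss, and allocate only when a miss is "avoidable", i.e. the data had been touched before and would have hit had it been cached and retained. The sketch below is a minimal model under that reading; the class name, the unbounded history set, and the omission of capacity/conflict bookkeeping are simplifying assumptions, not the patent's mechanism.

```python
# Illustrative sketch: allocate a cache line only after an "avoidable"
# miss, i.e. a miss on data that was requested before and would have
# hit had it been cached and retained. Capacity limits and the
# capacity/conflict-miss classification are omitted for brevity.

class LazyAllocCache:
    def __init__(self):
        self.cache = {}    # addr -> data
        self.seen = set()  # history of addresses requested before

    def read(self, addr, memory):
        if addr in self.cache:
            return self.cache[addr], "hit"
        if addr in self.seen:
            # Avoidable miss: the line was touched before; allocate now.
            self.cache[addr] = memory[addr]
            return memory[addr], "avoidable miss (allocated)"
        # First touch: cold miss; record the access but do not allocate.
        self.seen.add(addr)
        return memory[addr], "cold miss (not allocated)"

mem = {0x10: "a", 0x20: "b"}
c = LazyAllocCache()
```

The payoff of this policy is that lines touched exactly once (streaming data, for example) never consume cache space, while any line touched twice is cached from its second access onward.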
    • 9. Invention Grant
    • Title: Hardware filter for tracking block presence in large caches
    • Publication No.: US08868843B2
    • Publication Date: 2014-10-21
    • Application No.: US13307815
    • Filing Date: 2011-11-30
    • Inventors: Gabriel H. Loh, Mark D. Hill
    • IPC: G06F12/08
    • CPC: G06F12/0888; G06F12/0893; Y02D10/13
    • Abstract: A system and method for efficiently determining whether a requested memory location is in a large row-based memory of a computing system. A computing system includes a processing unit that generates memory requests on a first chip and a cache (LLC) on a second chip connected to the first chip. The processing unit includes an access filter that determines whether to access the cache. The cache is fabricated on top of the processing unit. The processing unit determines whether to access the access filter for a given memory request. The processing unit accesses the access filter to determine whether given data associated with a given memory request is stored within the cache. In response to determining the access filter indicates the given data is not stored within the cache, the processing unit generates a memory request to send to off-package memory.
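The access filter described above answers one question cheaply: can this request possibly hit in the stacked cache? If the answer is "definitely not", the slow cache probe is skipped and the request goes straight to off-package memory. The sketch below models that behavior with a small Bloom filter; the Bloom-filter structure, the sizes, and all names are illustrative assumptions, not the hardware filter the patent describes.

```python
# Illustrative sketch: a compact presence filter (modeled here as a
# Bloom filter, an assumption) consulted before the stacked cache.
# A "no" answer is definitive (no false negatives), so a guaranteed
# miss can bypass the cache and go straight to off-package memory.
import hashlib

class AccessFilter:
    def __init__(self, bits=1024, hashes=3):
        self.bits = [0] * bits
        self.n = bits
        self.k = hashes

    def _indices(self, addr):
        # Derive k bit positions from the address (hypothetical scheme).
        for i in range(self.k):
            h = hashlib.sha256(f"{addr}:{i}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.n

    def insert(self, addr):
        # Called when a block is installed in the cache.
        for i in self._indices(addr):
            self.bits[i] = 1

    def may_contain(self, addr):
        # True -> probe the cache; False -> definitely absent, skip it.
        return all(self.bits[i] for i in self._indices(addr))

f = AccessFilter()
f.insert(0x1000)
```

A filter like this can return a false positive (a wasted cache probe) but never a false negative, which is the property that makes skipping the cache safe.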