    • 7. Granted Invention Patent
    • System and method for processing memory instructions using a forced order queue
    • US07519771B1
    • 2009-04-14
    • US10643577
    • 2003-08-18
    • Gregory J. Faanes; Eric P. Lundberg; Steven L. Scott; Robert J. Baird
    • G06F12/00
    • G06F12/0811; G06F9/3004; G06F9/30087; G06F9/383; G06F9/3834; G06F12/0859
    • A novel system and method for processing memory instructions. One embodiment of the invention provides a method for processing a memory instruction. In this embodiment, the method includes obtaining a memory request; storing the memory request in an Initial Request Queue (IRQ); and processing the memory request from the IRQ by a cache controller, wherein processing includes: identifying a type of the memory request, and processing the memory request in both a local cache and a Force Order Queue (FOQ), wherein processing includes determining if a portion of an address associated with the memory request matches one or more partial addresses in the FOQ and, if the memory request misses in the cache and the address does not match one or more partial addresses in the FOQ, adding the memory request to the FOQ and allocating a cache line in the local cache corresponding to the local cache miss.
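The IRQ/FOQ flow in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class names, the `PARTIAL_BITS` width, and the use of the cache-line index as the "partial address" are all assumptions.

```python
from collections import deque
from dataclasses import dataclass

PARTIAL_BITS = 6  # assumed line size: low 6 address bits dropped to form the partial address


@dataclass
class MemoryRequest:
    kind: str      # e.g. "load" or "store"
    address: int


class CacheController:
    """Toy model of the IRQ -> FOQ processing path (illustrative only)."""

    def __init__(self):
        self.irq = deque()   # Initial Request Queue
        self.foq = deque()   # Force Order Queue, holding partial addresses
        self.cache = set()   # resident cache-line addresses

    def partial(self, address):
        return address >> PARTIAL_BITS

    def submit(self, request):
        # Step 1: store the incoming memory request in the IRQ.
        self.irq.append(request)

    def process_one(self):
        req = self.irq.popleft()
        kind = req.kind                       # Step 2: identify the request type.
        part = self.partial(req.address)
        hit = part in self.cache
        matches_foq = part in self.foq        # partial-address match check
        # Step 3: on a cache miss with no partial-address match in the FOQ,
        # add the request to the FOQ and allocate a line for the miss.
        if not hit and not matches_foq:
            self.foq.append(part)
            self.cache.add(part)
        return kind, hit, matches_foq
```

A single load to an empty cache misses, finds no FOQ match, and therefore both enqueues its partial address and allocates the line.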
    • 8. Granted Invention Patent
    • Optimized high bandwidth cache coherence mechanism
    • US07082500B2
    • 2006-07-25
    • US10368090
    • 2003-02-18
    • Steven L. Scott; Abdulla Bataineh
    • G06F12/00
    • G06F12/082; G06F12/0813
    • A method and apparatus for a coherence mechanism that supports a distributed memory programming model in which processors each maintain their own memory area, and communicate data between them. A hierarchical programming model is supported, which uses distributed memory semantics on top of shared memory nodes. Coherence is maintained globally, but caching is restricted to a local region of the machine (a “node” or “caching domain”). A directory cache is held in an on-chip cache and is multi-banked, allowing very high transaction throughput. Directory associativity allows the directory cache to map contents of all caches concurrently. Off-node references are converted to non-allocating references, allowing the same access mechanism (a regular load or store) to be used for both intra-node and extra-node references. Stores (Puts) to remote caches automatically update the caches instead of invalidating the caches, allowing producer/consumer data sharing to occur through cache instead of through main memory.
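The update-on-put behaviour described in the abstract above, where a remote store refreshes sharers' cached copies rather than invalidating them, can be sketched as follows. The `Node` and `Directory` classes and all method names are illustrative assumptions, not the patented design.

```python
class Node:
    """A caching node; its cache maps address -> value."""

    def __init__(self, name):
        self.name = name
        self.cache = {}


class Directory:
    """Toy directory tracking which nodes share each address."""

    def __init__(self, nodes):
        self.nodes = nodes          # name -> Node
        self.memory = {}            # backing store: address -> value
        self.sharers = {}           # address -> set of sharer names

    def load(self, node, address):
        # A load caches the value locally and records the node as a sharer.
        value = self.memory.get(address, 0)
        node.cache[address] = value
        self.sharers.setdefault(address, set()).add(node.name)
        return value

    def put(self, address, value):
        # Update, rather than invalidate, every sharer's cached copy,
        # so producer/consumer data flows through the caches.
        self.memory[address] = value
        for name in self.sharers.get(address, set()):
            self.nodes[name].cache[address] = value
```

After a consumer node loads an address, a producer's `put` leaves the consumer's cached copy valid and current, so the next read hits in cache instead of going to main memory.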
    • 9. Granted Invention Patent
    • Optimized high bandwidth cache coherence mechanism
    • US07409505B2
    • 2008-08-05
    • US11456781
    • 2006-07-11
    • Steven L. Scott; Abdulla Bataineh
    • G06F12/00
    • G06F12/082; G06F12/0813
    • A method and apparatus for a coherence mechanism that supports a distributed memory programming model in which processors each maintain their own memory area, and communicate data between them. A hierarchical programming model is supported, which uses distributed memory semantics on top of shared memory nodes. Coherence is maintained globally, but caching is restricted to a local region of the machine (a “node” or “caching domain”). A directory cache is held in an on-chip cache and is multi-banked, allowing very high transaction throughput. Directory associativity allows the directory cache to map contents of all caches concurrently. Off-node references are converted to non-allocating references, allowing the same access mechanism (a regular load or store) to be used for both intra-node and extra-node references. Stores (Puts) to remote caches automatically update the caches instead of invalidating the caches, allowing producer/consumer data sharing to occur through cache instead of through main memory.