    • 2. Invention Application
    • EFFICIENT DATA PREFETCHING IN THE PRESENCE OF LOAD HITS
    • US20120272003A1
    • 2012-10-25
    • US13535152
    • 2012-06-27
    • Clinton Thomas Glover, Colin Eddy, Rodney E. Hooker, Albert J. Loper
    • G06F12/08
    • G06F12/0862, G06F12/0859, G06F12/0897, G06F2212/6022
    • A microprocessor configured to access an external memory includes a first-level cache, a second-level cache, and a bus interface unit (BIU) configured to interface the first-level and second-level caches to a bus used to access the external memory. The BIU is configured to prioritize requests from the first-level cache above requests from the second-level cache. The second-level cache is configured to generate a first request to the BIU to fetch a cache line from the external memory. The second-level cache is also configured to detect that the first-level cache has subsequently generated a second request to the second-level cache for the same cache line. The second-level cache is also configured to request the BIU to refrain from performing a transaction on the bus to fulfill the first request if the BIU has not yet been granted ownership of the bus to fulfill the first request.
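The abstract above describes an L2 that cancels its own speculative bus request when the L1 later demands the same cache line, provided the bus interface unit has not yet been granted the bus. The following is a minimal behavioral sketch of that idea, not the patented implementation; the class names (`BusInterfaceUnit`, `L2Cache`) and the single-queue arbitration model are illustrative assumptions.

```python
# Behavioral sketch: a BIU that prioritizes L1 requests over L2 prefetch
# requests, and an L2 that asks the BIU to drop a pending prefetch when
# the L1 demands the same cache line before the bus has been granted.
# All names are illustrative, not taken from the patent.

class BusInterfaceUnit:
    def __init__(self):
        self.pending = []          # queued (level, line) bus requests
        self.bus_granted = set()   # lines whose transaction already owns the bus

    def request(self, level, line):
        self.pending.append((level, line))

    def grant_bus(self, line):
        # Models external arbitration granting bus ownership for one request.
        self.bus_granted.add(line)

    def cancel(self, line):
        # Honors an L2 "refrain" request only if the bus was not yet granted.
        if line not in self.bus_granted:
            self.pending = [(lvl, ln) for lvl, ln in self.pending if ln != line]
            return True
        return False

    def next_transaction(self):
        # L1 (level 1) requests are serviced before L2 (level 2) requests.
        self.pending.sort(key=lambda r: r[0])
        return self.pending.pop(0) if self.pending else None


class L2Cache:
    def __init__(self, biu):
        self.biu = biu
        self.outstanding_prefetches = set()

    def prefetch(self, line):
        self.outstanding_prefetches.add(line)
        self.biu.request(2, line)            # first request: speculative fill

    def l1_load(self, line):
        # L1 misses and asks the L2 for a line the L2 is already prefetching:
        # ask the BIU to drop the speculative transaction, then reissue the
        # fetch at L1 (demand) priority.
        if line in self.outstanding_prefetches and self.biu.cancel(line):
            self.outstanding_prefetches.discard(line)
        self.biu.request(1, line)


biu = BusInterfaceUnit()
l2 = L2Cache(biu)
l2.prefetch(0x40)              # L2 starts a speculative fetch of line 0x40
l2.l1_load(0x40)               # L1 then demands the same line before bus grant
print(biu.next_transaction())  # -> (1, 64): a single demand-priority fetch remains
```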
    • 4. Invention Grant
    • Efficient data prefetching in the presence of load hits
    • US08489823B2
    • 2013-07-16
    • US13535152
    • 2012-06-27
    • Clinton Thomas Glover, Colin Eddy, Rodney E. Hooker, Albert J. Loper
    • G06F12/08
    • G06F12/0862, G06F12/0859, G06F12/0897, G06F2212/6022
    • A microprocessor configured to access an external memory includes a first-level cache, a second-level cache, and a bus interface unit (BIU) configured to interface the first-level and second-level caches to a bus used to access the external memory. The BIU is configured to prioritize requests from the first-level cache above requests from the second-level cache. The second-level cache is configured to generate a first request to the BIU to fetch a cache line from the external memory. The second-level cache is also configured to detect that the first-level cache has subsequently generated a second request to the second-level cache for the same cache line. The second-level cache is also configured to request the BIU to refrain from performing a transaction on the bus to fulfill the first request if the BIU has not yet been granted ownership of the bus to fulfill the first request.
    • 6. Invention Grant
    • Efficient data prefetching in the presence of load hits
    • US08234450B2
    • 2012-07-31
    • US12763938
    • 2010-04-20
    • Clinton Thomas Glover, Colin Eddy, Rodney E. Hooker, Albert J. Loper
    • G06F12/08
    • G06F12/0862, G06F12/0859, G06F12/0897, G06F2212/6022
    • A BIU prioritizes L1 requests above L2 requests. The L2 generates a first request to the BIU and detects the generation of a snoop request and L1 request to the same cache line. The L2 determines whether a bus transaction to fulfill the first request may be retried and, if so, generates a miss, and otherwise generates a hit. Alternatively, the L2 detects the L1 generated a request to the L2 for the same line and responsively requests the BIU to refrain from performing a transaction on the bus to fulfill the first request if the BIU has not yet been granted the bus. Alternatively, a prefetch cache and the L2 allow the same line to be simultaneously present. If an L1 request hits in both the L2 and in the prefetch cache, the prefetch cache invalidates its copy of the line and the L2 provides the line to the L1.
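One aspect of the abstract above is that a small prefetch cache and the L2 may hold the same cache line at once, and that an L1 request hitting in both causes the prefetch cache to invalidate its copy while the L2 supplies the data. Below is a minimal Python sketch of that duplicate-line handling only; the cache model and the function `l1_request` are illustrative assumptions, not the patented design.

```python
# Sketch of the duplicate-line aspect: the prefetch cache and the L2 are
# allowed to hold the same line simultaneously; when an L1 request hits
# in both, the prefetch cache invalidates its copy and the L2 provides
# the data. Names are illustrative.

class SimpleCache:
    def __init__(self):
        self.lines = {}                 # line address -> data

    def fill(self, line, data):
        self.lines[line] = data

    def hit(self, line):
        return line in self.lines

    def invalidate(self, line):
        self.lines.pop(line, None)


def l1_request(line, l2, prefetch_cache):
    """Service an L1 request against the L2 and the prefetch cache."""
    in_l2 = l2.hit(line)
    in_pf = prefetch_cache.hit(line)
    if in_l2 and in_pf:
        # Same line present in both: drop the prefetch-cache copy and
        # let the L2 be the single provider.
        prefetch_cache.invalidate(line)
        return ("L2", l2.lines[line])
    if in_l2:
        return ("L2", l2.lines[line])
    if in_pf:
        return ("prefetch", prefetch_cache.lines[line])
    return ("miss", None)


l2, pf = SimpleCache(), SimpleCache()
pf.fill(0x80, b"prefetched")
l2.fill(0x80, b"prefetched")            # same line ends up in both caches
print(l1_request(0x80, l2, pf))         # -> ('L2', b'prefetched')
print(pf.hit(0x80))                     # -> False: prefetch copy invalidated
```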
    • 8. Invention Grant
    • Avoiding memory access latency by returning hit-modified when holding non-modified data
    • US08364906B2
    • 2013-01-29
    • US12880958
    • 2010-09-13
    • Rodney E. Hooker, Colin Eddy, Darius D. Gaskins, Albert J. Loper, Jr.
    • G06F12/00
    • G06F12/0831
    • A microprocessor is configured to communicate with other agents on a system bus and includes a cache memory and a bus interface unit coupled to the cache memory and to the system bus. The bus interface unit receives from another agent coupled to the system bus a transaction to read data from a memory address, determines whether the cache memory is holding the data at the memory address in an exclusive state (or a shared state in certain configurations), and asserts a hit-modified signal on the system bus and provides the data on the system bus to the other agent when the cache memory is holding the data at the memory address in an exclusive state. Thus, the delay of an access to the system memory by the other agent is avoided.
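The abstract above describes a cache that, when another bus agent reads an address it holds in an exclusive (clean) state, asserts the hit-modified signal and supplies the data itself so the requester does not wait on system memory. The following is a minimal sketch of that snoop response under a simplified MESI model; the class `SnoopingCache` and its methods are illustrative assumptions, not the patented bus logic.

```python
# Sketch of the snoop response: when another agent reads an address this
# core's cache holds in the Exclusive state, assert hit-modified (HITM)
# and provide the data even though the line is clean, avoiding a system
# memory access by the requester. Names and states are illustrative.

from enum import Enum

class MESI(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"


class SnoopingCache:
    def __init__(self):
        self.lines = {}                 # address -> (MESI state, data)

    def fill(self, addr, data, state):
        self.lines[addr] = (state, data)

    def snoop_read(self, addr):
        """Respond to another agent's read transaction on the system bus.

        Returns (hitm_asserted, data). Asserting HITM for a clean
        Exclusive line lets this cache source the data instead of memory.
        """
        state, data = self.lines.get(addr, (MESI.INVALID, None))
        if state in (MESI.MODIFIED, MESI.EXCLUSIVE):
            return True, data            # assert HITM, provide data on the bus
        return False, None               # let system memory service the read


cache = SnoopingCache()
cache.fill(0x1000, b"clean data", MESI.EXCLUSIVE)
print(cache.snoop_read(0x1000))   # -> (True, b'clean data'): memory latency avoided
print(cache.snoop_read(0x2000))   # -> (False, None): miss, memory responds
```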