    • 1. Granted patent
    • Cache line duplication in response to a way prediction conflict
    • Publication number: US07979640B2
    • Publication date: 2011-07-12
    • Application number: US12181266
    • Filing date: 2008-07-28
    • Inventors: Shailender Chaudhry; Robert E. Cypher; Martin Karlsson
    • Applicants: Shailender Chaudhry; Robert E. Cypher; Martin Karlsson
    • IPC: G06F12/08
    • CPC: G06F12/0864; G06F2212/1016; G06F2212/6082
    • Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way.
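The mechanism in the abstract above can be sketched as a small software model: a way-prediction table, a conflict-detection table that counts mispredictions per line, and duplication of the line into the predicted way once the count reaches the predetermined value. All class and variable names, and the threshold of 4, are illustrative assumptions, not taken from the patent.

```python
# Toy model of the way-misprediction handling described in the abstract.
# MISPREDICTION_THRESHOLD stands in for the "predetermined value"; the
# value 4 is an arbitrary choice for illustration.

MISPREDICTION_THRESHOLD = 4

class MultiWayCache:
    def __init__(self, num_sets, num_ways):
        self.ways = [[None] * num_sets for _ in range(num_ways)]  # ways[way][set] -> tag
        self.prediction_table = {}   # set index -> predicted way
        self.conflict_table = {}     # (set index, tag) -> misprediction count

    def access(self, set_index, tag):
        # Predict a way from the way-prediction table and check it first.
        predicted = self.prediction_table.get(set_index, 0)
        if self.ways[predicted][set_index] == tag:
            return "hit-predicted-way"
        # Not in the predicted way: search the remaining ways.
        for way, contents in enumerate(self.ways):
            if way != predicted and contents[set_index] == tag:
                # Way misprediction: bump the conflict-detection record.
                key = (set_index, tag)
                self.conflict_table[key] = self.conflict_table.get(key, 0) + 1
                if self.conflict_table[key] == MISPREDICTION_THRESHOLD:
                    # Duplicate the line into the predicted way, so future
                    # predicted-way lookups hit immediately.
                    self.ways[predicted][set_index] = tag
                    self.conflict_table[key] = 0
                return "hit-other-way"
        return "miss"
```

In this model, after four mispredicted accesses to the same line the copy lands in the predicted way, so the fifth access hits there directly.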
    • 2. Patent application
    • CACHE LINE DUPLICATION IN RESPONSE TO A WAY PREDICTION CONFLICT
    • Publication number: US20100023701A1
    • Publication date: 2010-01-28
    • Application number: US12181266
    • Filing date: 2008-07-28
    • Inventors: Shailender Chaudhry; Robert E. Cypher; Martin Karlsson
    • Applicants: Shailender Chaudhry; Robert E. Cypher; Martin Karlsson
    • IPC: G06F12/08
    • CPC: G06F12/0864; G06F2212/1016; G06F2212/6082
    • Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way.
    • 4. Granted patent
    • Reducing pipeline restart penalty
    • Publication number: US09086889B2
    • Publication date: 2015-07-21
    • Application number: US12768641
    • Filing date: 2010-04-27
    • Inventors: Martin Karlsson; Sherman H. Yip; Shailender Chaudhry
    • Applicants: Martin Karlsson; Sherman H. Yip; Shailender Chaudhry
    • IPC: G06F9/38; G06F12/08
    • CPC: G06F9/3851; G06F9/3802; G06F9/3808; G06F9/3842; G06F9/3859; G06F9/3863; G06F12/0855
    • Techniques are disclosed relating to reducing the latency of restarting a pipeline in a processor that implements scouting. In one embodiment, the processor may reduce pipeline restart latency using two instruction fetch units that are configured to fetch and re-fetch instructions in parallel with one another. In some embodiments, the processor may reduce pipeline restart latency by initiating re-fetching instructions in response to determining that a commit operation is to be attempted with respect to one or more deferred instructions. In other embodiments, the processor may reduce pipeline restart latency by initiating re-fetching instructions in response to receiving an indication that a request for a set of data has been received by a cache, where the indication is sent by the cache before determining whether the data is present in the cache or not.
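The last embodiment above (re-fetch triggered by the cache's early "request received" indication, before hit/miss is known) amounts to overlapping the re-fetch with the tag lookup. A back-of-the-envelope timing model, with all cycle counts invented purely for illustration:

```python
# Toy latency model for early re-fetch on pipeline restart.
# None of these numbers come from the patent; they only show why
# starting the re-fetch on the early acknowledgment shortens restart.

REQUEST_ACK_LATENCY = 2    # cycles until the cache signals "request received"
TAG_LOOKUP_LATENCY = 10    # cycles until hit/miss is resolved
REFETCH_LATENCY = 8        # cycles to re-fetch instructions into the pipeline

def restart_latency(early_refetch):
    if early_refetch:
        # Re-fetch begins at the early ack and overlaps the tag lookup,
        # so the restart finishes when the slower of the two completes.
        refetch_done = REQUEST_ACK_LATENCY + REFETCH_LATENCY
        return max(refetch_done, TAG_LOOKUP_LATENCY)
    # Baseline: re-fetch only starts once hit/miss is resolved.
    return TAG_LOOKUP_LATENCY + REFETCH_LATENCY

print(restart_latency(early_refetch=False))  # 18
print(restart_latency(early_refetch=True))   # 10
```

With these assumed latencies the early indication hides the entire re-fetch behind the tag lookup, cutting the restart penalty from 18 cycles to 10.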
    • 10. Granted patent
    • Store queue having restricted and unrestricted entries
    • Publication number: US09146744B2
    • Publication date: 2015-09-29
    • Application number: US12116009
    • Filing date: 2008-05-06
    • Inventors: Paul Caprioli; Martin Karlsson; Shailender Chaudhry; Gideon N. Levinsky
    • Applicants: Paul Caprioli; Martin Karlsson; Shailender Chaudhry; Gideon N. Levinsky
    • IPC: G06F15/00; G06F7/38; G06F9/00; G06F9/44; G06F9/38
    • CPC: G06F9/3826; G06F9/383; G06F9/3842; G06F9/3855
    • Embodiments of the present invention provide a system which executes a load instruction or a store instruction. During operation the system receives a load instruction. The system then determines if an unrestricted entry or a restricted entry in a store queue contains data that satisfies the load instruction. If not, the system retrieves data for the load instruction from a cache. If so, the system conditionally forwards data from the unrestricted entry or the restricted entry by: (1) forwarding data from an unrestricted entry that contains the youngest store that satisfies the load instruction when any number of unrestricted or restricted entries contain data that satisfies the load instruction; (2) forwarding data from an unrestricted entry when only one restricted entry and no unrestricted entries contain data that satisfies the load instruction; and (3) deferring the load instruction by placing the load instruction in a deferred queue when two or more restricted entries and no unrestricted entries contain data that satisfies the load instruction.
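The three forwarding cases in the abstract enumerate cleanly as code. A minimal sketch with hypothetical field names; note that the abstract's case (2) wording ("forwarding data from an unrestricted entry when only one restricted entry and no unrestricted entries contain data") is self-contradictory as written, so this sketch reads it as forwarding from that single restricted entry.

```python
# Sketch of store-to-load forwarding with restricted and unrestricted
# store-queue entries. Each entry is a dict with keys "addr", "data",
# "age" (larger = younger store), and "restricted"; this layout is an
# illustrative assumption, not the patent's representation.

def handle_load(address, store_queue, cache, deferred_queue):
    matches = [e for e in store_queue if e["addr"] == address]
    if not matches:
        return cache[address]          # no matching store: read the cache
    unrestricted = [e for e in matches if not e["restricted"]]
    restricted = [e for e in matches if e["restricted"]]
    if unrestricted:
        # Case 1: forward from the unrestricted entry holding the
        # youngest store that satisfies the load.
        return max(unrestricted, key=lambda e: e["age"])["data"]
    if len(restricted) == 1:
        # Case 2: exactly one restricted match, no unrestricted ones.
        return restricted[0]["data"]
    # Case 3: two or more restricted matches and no unrestricted ones:
    # defer the load by placing it in the deferred queue.
    deferred_queue.append(address)
    return None
```

A deferred load (case 3) returns no data here; in the patent's scheme it would be replayed later from the deferred queue once the restricted entries resolve.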