    • 7. Granted invention patent
    • Title: Efficient support of sparse data structure access
    • Publication No.: US09037804B2
    • Publication date: 2015-05-19
    • Application No.: US13995209
    • Filing date: 2011-12-29
    • Inventors: Simon C. Steely, Jr.; William C. Hasenplaugh; Joel S. Emer
    • IPC: G06F13/00; G06F12/08
    • CPC: G06F12/0891; G06F12/0895
    • Abstract: Method and apparatus to efficiently organize data in caches by storing/accessing data of varying sizes in cache lines. A value may be assigned to a field indicating the size of usable data stored in a cache line. If the field indicating the size of the usable data in the cache line indicates a size less than the maximum storage size, a value may be assigned to a field in the cache line indicating which subset of the data field holds usable data. A cache request may determine whether the size of the usable data in a cache line is equal to the maximum data storage size. If the size of the usable data in the cache line is equal to the maximum data storage size, the entire stored data in the cache line may be returned.
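The cache-line scheme in the abstract above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the field names (`usable_size`, `usable_offset`), the line size, and the subset encoding are all assumptions for the sake of the example.

```python
MAX_LINE_SIZE = 8  # assumed maximum storage size per cache line, in words

class CacheLine:
    """Illustrative cache line with a field recording how much data is usable."""

    def __init__(self, data, usable_size, usable_offset=0):
        self.data = list(data)
        # Field indicating the size of the usable data stored in the line.
        self.usable_size = usable_size
        # When usable_size < MAX_LINE_SIZE, this field indicates which
        # subset of the data field is the usable data (assumed encoding:
        # a starting offset into the line).
        self.usable_offset = usable_offset

    def read(self):
        # A cache request first checks whether the usable-data size
        # equals the maximum storage size.
        if self.usable_size == MAX_LINE_SIZE:
            # Full line: return the entire stored data.
            return self.data
        # Partial line: return only the subset marked as usable.
        return self.data[self.usable_offset:self.usable_offset + self.usable_size]

full = CacheLine(range(8), usable_size=8)
partial = CacheLine(range(8), usable_size=3, usable_offset=2)
```

Here `full.read()` returns all eight words, while `partial.read()` returns only the three-word usable subset starting at offset 2.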
    • 8. Granted invention patent
    • Title: System for passing an index value with each prediction in forward direction to enable truth predictor to associate truth value with particular branch instruction
    • Publication No.: US6081887A
    • Publication date: 2000-06-27
    • Application No.: US191869
    • Filing date: 1998-11-12
    • Inventors: Simon C. Steely, Jr.; Edward J. McLellan; Joel S. Emer
    • IPC: G06F9/38; G06F9/32
    • CPC: G06F9/3844
    • Abstract: A technique for predicting the result of a conditional branch instruction for use with a processor having an instruction pipeline. A stored predictor is connected to the front end of the pipeline and is trained from a truth-based predictor connected to the back end of the pipeline. The stored predictor is accessible in one instruction cycle, and therefore provides minimum predictor latency. Update latency is minimized by storing multiple predictions in the front-end stored predictor, which are indexed by an index counter. The multiple predictions, as provided by the back end, are indexed by the index counter to select a particular one as the current prediction on a given instruction pipeline cycle. The front-end stored predictor also passes along to the back-end predictor, such as through the instruction pipeline, a position value used to generate the predictions. This further structure accommodates ghost branch instructions that turn out to be flushed out of the pipeline when it must be backed up. As a result, the front end always provides an accurate prediction with minimum update latency.
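The index-passing idea in the abstract above can be sketched as follows. This is a simplified software model under stated assumptions, not the patented hardware: the table size, class names, and update protocol are illustrative, and the pipeline between front end and back end is elided.

```python
NUM_SLOTS = 4  # assumed number of stored predictions selected by the index counter

class FrontEndPredictor:
    """Illustrative front-end stored predictor holding multiple predictions."""

    def __init__(self):
        self.predictions = [True] * NUM_SLOTS  # stored predictions from the back end
        self.index = 0                         # index counter

    def predict(self):
        # Select the current prediction for this pipeline cycle and
        # pass its index value forward along with the prediction, so the
        # back end can later associate the truth value with this branch.
        idx = self.index
        self.index = (self.index + 1) % NUM_SLOTS
        return self.predictions[idx], idx

class BackEndPredictor:
    """Illustrative truth-based predictor at the back end of the pipeline."""

    def update(self, front_end, idx, truth):
        # The index value that traveled with the prediction identifies
        # which stored prediction to retrain with the resolved truth.
        front_end.predictions[idx] = truth
```

For example, after `pred, idx = fe.predict()` the resolved branch outcome can be written back with `be.update(fe, idx, taken)`, and the next prediction served from that slot reflects the truth value.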