    • 1. Granted invention patent
    • Breaking replay dependency loops in a processor using a rescheduled replay queue
    • Publication number: US06981129B1
    • Publication date: 2005-12-27
    • Application number: US09705668
    • Filing date: 2000-11-02
    • Inventors: Darrell D. Boggs; Douglas M. Carmean; Per H. Hammarlund; Francis X. McKeen; David J. Sager; Ronak Singhal
    • IPC: G06F9/38; G06F9/30
    • CPC: G06F9/3842; G06F9/3861
    • Breaking replay dependency loops in a processor using a rescheduled replay queue. The processor comprises a replay queue to receive a plurality of instructions, and an execution unit to execute the plurality of instructions. A scheduler is coupled between the replay queue and the execution unit. The scheduler speculatively schedules instructions for execution and increments a counter for each of the plurality of instructions to reflect the number of times each of the plurality of instructions has been executed. The scheduler also dispatches each instruction to the execution unit either when the counter does not exceed a maximum number of replays or, if the counter exceeds the maximum number of replays, when the instruction is safe to execute. A checker is coupled to the execution unit to determine whether each instruction has executed successfully. The checker is also coupled to the replay queue to communicate to the replay queue each instruction that has not executed successfully.
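The abstract above describes a hardware mechanism, but the scheduling rule it states (dispatch speculatively while the replay counter is at or below a maximum, otherwise hold the instruction until it is safe to execute, and send failed instructions back to the replay queue) can be pictured with a small software model. The sketch below is only that: the class names, the MAX_REPLAYS value, and the inputs_ready_at notion are invented for the example and do not come from the patent.

    from collections import deque
    from dataclasses import dataclass

    MAX_REPLAYS = 3  # assumed threshold; the patent leaves the actual value open

    @dataclass
    class Instruction:
        name: str
        inputs_ready_at: int   # cycle from which speculation stops failing (toy stand-in)
        replay_count: int = 0

        def is_safe(self, cycle: int) -> bool:
            """'Safe to execute' stand-in: all source operands are known to be valid."""
            return cycle >= self.inputs_ready_at

    def run(instructions):
        replay_queue = deque(instructions)   # instructions awaiting (re)execution
        retired = []
        cycle = 0
        while replay_queue:
            cycle += 1
            insn = replay_queue.popleft()
            # Scheduler: dispatch speculatively while under the replay limit;
            # past the limit, hold the instruction until it is safe to execute.
            if insn.replay_count > MAX_REPLAYS and not insn.is_safe(cycle):
                replay_queue.append(insn)
                continue
            insn.replay_count += 1           # counter reflects how often it has executed
            if insn.is_safe(cycle):          # execution unit + checker folded into one test
                retired.append(insn.name)
            else:
                replay_queue.append(insn)    # checker sends failed instructions back
        return retired

    print(run([Instruction("load r1", inputs_ready_at=4),
               Instruction("add r2, r1", inputs_ready_at=5)]))

With these toy inputs the load fails and replays until cycle 4, the dependent add fails until cycle 5, and both eventually retire without the add replaying forever behind the load.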
    • 5. Granted invention patent
    • Method for and a trailing store buffer for use in memory renaming
    • Publication number: US07640419B2
    • Publication date: 2009-12-29
    • Application number: US10743422
    • Filing date: 2003-12-23
    • Inventors: Sebastien Hily; Per H. Hammarlund
    • IPC: G06F9/30
    • CPC: G06F9/3834
    • Embodiments of the present invention relate to a memory management scheme and apparatus that enable efficient memory renaming. The method includes computing a store address, writing the store address in a first storage, writing data associated with the store address to a memory, de-allocating the store address from the first storage, allocating the store address in a second storage, predicting a load instruction to be memory renamed, computing a load store source index, computing a load address, disambiguating the memory renamed load instruction, and retiring the memory renamed load instruction if the store instruction is still allocated in at least one of the first storage and the second storage and should have effectively provided the full data to the load. The method may also include re-executing the load instruction without memory renaming if the store instruction is not in the first storage or the second storage.
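As a rough software analogue of the verification step in this abstract, the sketch below keeps committed stores in two dictionaries standing in for the first storage (store buffer) and the second storage (trailing store buffer), and retires a renamed load only if the predicted store is still allocated in one of them and covers the load address; otherwise the load is redone from memory. The names, the dictionary representation, and the example addresses are assumptions made for the illustration, not the patent's structures.

    class StoreEntry:
        def __init__(self, addr, data):
            self.addr, self.data = addr, data

    memory = {}
    store_buffer = {}            # "first storage": stores not yet written back
    trailing_store_buffer = {}   # "second storage": keeps stores after write-back

    def commit_store(index, addr, data):
        store_buffer[index] = StoreEntry(addr, data)   # record the computed store address
        memory[addr] = data                            # write the data to memory
        # de-allocate from the first storage, allocate in the second
        trailing_store_buffer[index] = store_buffer.pop(index)

    def renamed_load(load_addr, predicted_store_index):
        """Load predicted to be memory renamed onto the store at predicted_store_index."""
        entry = (store_buffer.get(predicted_store_index)
                 or trailing_store_buffer.get(predicted_store_index))
        # Disambiguation: the predicted store must still be allocated somewhere
        # and must actually cover the load's address.
        if entry is not None and entry.addr == load_addr:
            return entry.data        # renaming verified: retire with forwarded data
        return memory[load_addr]     # misprediction: re-execute without renaming

    commit_store(index=7, addr=0x100, data=42)
    print(renamed_load(0x100, predicted_store_index=7))   # 42, via the trailing buffer
    print(renamed_load(0x100, predicted_store_index=3))   # 42, re-read from memory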
    • 7. Granted invention patent
    • Method and apparatus for providing a cache management technique
    • Publication number: US06105111A
    • Publication date: 2000-08-15
    • Application number: US53527
    • Filing date: 1998-03-31
    • Inventors: Per H. Hammarlund; Glenn J. Hinton
    • IPC: G06F12/12; G06F12/00
    • CPC: G06F12/123
    • A cache technique for maximizing cache efficiency by assigning ages to elements which access the cache is described. In one embodiment, the cache technique includes receiving a first element of a first type by a cache and writing the first element in a set of the cache. The first element has a first age. The cache technique further includes receiving a second element of a second type by the cache and writing the second element in the set of the cache. The second element has a middle age, where the first age is a more recently used age than the middle age. In another embodiment, the cache technique includes receiving a first element of a first stream by a cache and writing the first element in a set of the cache. The first element has a first age. The cache technique further includes receiving a second element of a second stream by the cache and writing the second element in the set of the cache. The second element has a middle age, where the first age is a more recently used age than the middle age.
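The age mechanism in this abstract can be pictured with a toy cache set in which a line's position stands for its age: elements of the first type (or stream) enter at the youngest position, while elements of the second type enter at a middle position and therefore reach the oldest, evictable age sooner. The set size, the exact middle position, and the plain-LRU hit handling below are assumptions for illustration only.

    class CacheSet:
        def __init__(self, num_ways=4):
            self.num_ways = num_ways
            self.lines = []          # index 0 = youngest age, last index = oldest age

        def insert(self, tag, element_type):
            if len(self.lines) == self.num_ways:
                self.lines.pop()     # evict the line holding the oldest age
            if element_type == "first":
                self.lines.insert(0, tag)                     # first (youngest) age
            else:
                self.lines.insert(self.num_ways // 2, tag)    # middle age

        def touch(self, tag):
            """A hit promotes the line to the youngest age (ordinary LRU update)."""
            self.lines.remove(tag)
            self.lines.insert(0, tag)

    s = CacheSet(num_ways=4)
    for tag in ("a", "b", "c"):
        s.insert(tag, "first")       # e.g. demand lines: enter at the youngest age
    s.insert("streamed", "second")   # e.g. streaming line: enters at a middle age
    print(s.lines)                   # ['c', 'b', 'streamed', 'a']

Because the "streamed" line enters mid-list, it will be evicted before the first-type lines unless it is touched again, which is the efficiency effect the abstract describes.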
    • 8. Granted invention patent
    • Method and system for dynamic resource allocation
    • Publication number: US07688746B2
    • Publication date: 2010-03-30
    • Application number: US10745701
    • Filing date: 2003-12-29
    • Inventors: Per H. Hammarlund; Melih Ozgul
    • IPC: G01R31/08; G06F11/00; G08C15/00; H04J1/16; H04J3/14; H04L1/00; H04L12/26
    • CPC: G06F9/5011; Y02D10/22
    • Embodiments of the present invention provide a dynamic resource allocator to allocate resources for performance optimization in, for example, a computer system. The dynamic resource allocator allocates a resource to one or more threads associated with an application based on a performance rate. Embodiments of the present invention may further include a performance monitor to monitor the performance rate of the one or more threads. The dynamic resource allocator allocates an additional resource to the one or more threads if a thread is performing above a performance threshold. In embodiments of the present invention, the dynamic resource allocation strategy may be decided based on, for example, optimizing overall system throughput, minimizing power consumption, meeting system performance goals (e.g., real-time requirements), user-specified performance priorities, and/or application-specified performance priorities.
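The feedback loop this abstract describes (monitor a per-thread performance rate and give more of a shared resource to threads running above a threshold) can be sketched as follows. The threshold, the resource budget, the half-budget base share, and the sample rates are invented values; the patent does not prescribe them, and a real implementation would weigh throughput, power, and priority goals as the abstract notes.

    PERF_THRESHOLD = 1.0   # example rate (say, instructions per cycle) counted as "performing well"
    TOTAL_UNITS = 16       # example shared resource budget (e.g. queue entries)

    def monitor_performance():
        # Stand-in for the performance monitor: thread -> measured performance rate.
        return {"thread0": 1.4, "thread1": 0.6, "thread2": 1.1}

    def allocate(rates, total_units=TOTAL_UNITS, threshold=PERF_THRESHOLD):
        base = total_units // (2 * len(rates))     # every thread keeps a base share
        allocation = {t: base for t in rates}
        spare = total_units - base * len(rates)

        # Threads above the threshold benefit from extra resources, so the spare
        # capacity is handed to them in proportion to their measured rate.
        fast = {t: r for t, r in rates.items() if r > threshold}
        total_fast = sum(fast.values())
        for t, r in fast.items():
            allocation[t] += int(spare * r / total_fast)
        return allocation

    print(allocate(monitor_performance()))
    # {'thread0': 7, 'thread1': 2, 'thread2': 6}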