    • 64. Invention application
    • Title: Dynamic instruction execution using distributed transaction priority registers
    • Publication number: US20090138683A1
    • Publication date: 2009-05-28
    • Application number: US11946615
    • Filing date: 2007-11-28
    • Inventors: Louis B. Capps, Jr.; Robert H. Bell, Jr.
    • IPC: G06F9/30
    • CPC: G06F9/3851; G06F9/30101; G06F9/5011; G06F2209/507; Y02D10/22
    • Abstract: A method, system and program are provided for dynamically assigning priority values to instruction threads in a computer system based on one or more predetermined thread performance tests, and using the assigned instruction priorities to determine how resources are used in the system. By storing the assigned priority values in thread priority registers distributed throughout the computer system, instructions from different threads that are dispatched through the system are allocated system resources based on the priority values assigned to the respective instruction threads. Priority values for individual threads may be updated with control software which tests thread performance and uses the test results to apply predetermined adjustment policies. The test results may be used to optimize the workload allocation of system resources by dynamically assigning thread priority values to individual threads using any desired policy, such as achieving thread execution balance relative to thresholds and to the performance of other threads, reducing thread response time, lowering power consumption, etc.
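The abstract above describes a feedback loop: control software samples per-thread performance, applies an adjustment policy, and writes the resulting priority values into registers that resource arbiters consult. The Python sketch below illustrates that loop under assumed names and a made-up balance policy; it is a behavioral stand-in, not the patented implementation.

```python
# Hypothetical sketch of the control loop in the abstract: sample per-thread
# performance, apply a simple adjustment policy, and write the resulting
# priority values to (simulated) thread-priority registers that arbiters read.
# Class names, the 0..7 priority range, and the policy are illustrative.

from dataclasses import dataclass


@dataclass
class ThreadStats:
    instructions_retired: int = 0   # sampled by performance-test hardware
    priority: int = 4               # 0 (lowest) .. 7 (highest), illustrative range


class PriorityRegisters:
    """Stands in for the priority registers distributed through the system."""
    def __init__(self):
        self.regs: dict = {}

    def write(self, thread_id: int, priority: int) -> None:
        self.regs[thread_id] = priority

    def read(self, thread_id: int) -> int:
        return self.regs.get(thread_id, 4)


def balance_policy(stats, regs, low_mark: int, high_mark: int) -> None:
    """Example policy: boost threads falling behind, throttle threads far ahead."""
    for tid, s in stats.items():
        if s.instructions_retired < low_mark:
            s.priority = min(7, s.priority + 1)   # lagging thread gets more resources
        elif s.instructions_retired > high_mark:
            s.priority = max(0, s.priority - 1)   # leading thread cedes resources
        regs.write(tid, s.priority)               # propagate to distributed registers


def grant_resource(regs, requesters):
    """Arbiter: among competing threads, grant the one with the highest priority."""
    return max(requesters, key=regs.read)


if __name__ == "__main__":
    regs = PriorityRegisters()
    stats = {0: ThreadStats(900), 1: ThreadStats(2500)}
    balance_policy(stats, regs, low_mark=1000, high_mark=2000)
    print("granted thread:", grant_resource(regs, [0, 1]))   # thread 0 was boosted
```

In hardware the priority registers would be replicated near each arbitration point; the single dictionary here only stands in for that distribution.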
    • 67. Invention grant
    • Title: Cache member protection with partial make MRU allocation
    • Publication number: US07363433B2
    • Publication date: 2008-04-22
    • Application number: US11054390
    • Filing date: 2005-02-09
    • Inventors: Robert H. Bell, Jr.; Guy Lynn Guthrie; William John Starke; Jeffrey Adam Stuecheli
    • IPC: G06F12/00
    • CPC: G06F12/126; G06F12/0897; G06F12/123; G06F12/128
    • Abstract: A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. The LRU state array includes additional "protection" bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is invalid. The make MRU operation affects only the lower-level LRU state bits arranged in a tree-based structure and thus only negates the selection of the protected member, without affecting LRU victim selection of the other members.
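As a rough illustration of the eviction rule in this abstract, the sketch below models a congruence class whose protection pointer exempts one valid member from LRU victim selection. It is a behavioral stand-in: the patent's mechanism operates on tree-based LRU state bits and an MRU vector in hardware, whereas the names and list-based ordering here are invented for readability.

```python
# Behavioral sketch (not the hardware tree-based LRU of the patent): one member
# of a congruence class can be marked "protected" by a pointer, and it is
# skipped during LRU victim selection unless it is invalid.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheMember:
    tag: Optional[int] = None
    valid: bool = False


class CongruenceClass:
    def __init__(self, ways: int = 8):
        self.members = [CacheMember() for _ in range(ways)]
        self.lru_order = list(range(ways))       # index 0 = LRU, last = MRU
        self.protected_way: Optional[int] = None  # the "protection bits" pointer

    def make_mru(self, way: int) -> None:
        """Move a way to the MRU position (stand-in for the MRU-vector update)."""
        self.lru_order.remove(way)
        self.lru_order.append(way)

    def protect(self, way: int) -> None:
        self.protected_way = way

    def select_victim(self) -> int:
        """Pick the LRU way, skipping the protected way unless it is invalid."""
        for way in self.lru_order:
            member = self.members[way]
            if way == self.protected_way and member.valid:
                continue        # protected and valid: never evict
            return way
        return self.lru_order[0]   # only reached if every way is protected


if __name__ == "__main__":
    cc = CongruenceClass(ways=4)
    for w in range(4):
        cc.members[w] = CacheMember(tag=w, valid=True)
    cc.protect(0)                    # way 0 is the protected member
    assert cc.select_victim() == 1   # way 0 is LRU but protected, so way 1 is chosen
    cc.make_mru(1)                   # touching way 1 moves it to MRU
    assert cc.select_victim() == 2   # next victim is now way 2
```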
    • 68. Invention grant
    • Title: Method and system for managing distributed arbitration for multicycle data transfer requests
    • Publication number: US06950892B2
    • Publication date: 2005-09-27
    • Application number: US10411463
    • Filing date: 2003-04-10
    • Inventors: Robert H. Bell, Jr.; Robert Alan Cargnoni
    • IPC: G06F13/372; G06F12/00; G06F12/08; G06F13/14; G06F13/364; G06F13/368
    • CPC: G06F13/364; G06F12/0846
    • Abstract: A method and system for managing distributed arbitration for multi-cycle data transfer requests provides improved performance in a processing system. A multi-cycle request indicator is provided to a slice arbiter and, if a multi-cycle request is present, only one slice is granted its associated bus. The method further blocks any requests from other requesting slices having a lower latency than the first slice until the latency difference between the other requesting slices and the longest-latency slice, added to a predetermined cycle counter value, has expired. The method also blocks further requests from the first slice until the predetermined cycle counter value has elapsed, and blocks requests from slices having a higher latency than the first slice until the predetermined cycle counter value less the difference in latencies for the first slice and for the higher-latency slice has elapsed.
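The blocking rules in this abstract can be read as simple per-slice wait counters derived from a predetermined cycle counter value and the latency gaps between slices. The sketch below encodes that reading; the slice names, the helper function, and the exact arithmetic are assumptions for illustration, not the patent's defined counter behaviour.

```python
# Simplified, hypothetical model of the blocking rules: when a multi-cycle
# request is granted to one slice, every slice is blocked for a number of
# cycles derived from the predetermined counter value and per-slice latencies.

def compute_block_cycles(latencies, granted: str, counter_value: int):
    """Return how many cycles each slice must wait before issuing a new request."""
    max_latency = max(latencies.values())
    granted_latency = latencies[granted]
    blocks = {}
    for slice_id, lat in latencies.items():
        if slice_id == granted:
            # the granted slice waits until the predetermined counter elapses
            blocks[slice_id] = counter_value
        elif lat < granted_latency:
            # lower-latency slices wait the counter value plus their gap to the
            # longest-latency slice, so their data cannot collide on the bus
            blocks[slice_id] = counter_value + (max_latency - lat)
        else:
            # higher-latency slices may start earlier by their extra latency
            # relative to the granted slice
            blocks[slice_id] = max(0, counter_value - (lat - granted_latency))
    return blocks


if __name__ == "__main__":
    latencies = {"slice0": 2, "slice1": 4, "slice2": 6}   # cycles to reach the bus
    print(compute_block_cycles(latencies, granted="slice1", counter_value=8))
    # {'slice0': 12, 'slice1': 8, 'slice2': 6}
```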
    • 69. Invention grant
    • Title: Performance of processors is improved by limiting number of branch prediction levels
    • Publication number: US09582284B2
    • Publication date: 2017-02-28
    • Application number: US13308696
    • Filing date: 2011-12-01
    • Inventors: Robert H. Bell, Jr.; Wen-Tzer T. Chen
    • IPC: G06F9/38; G06F11/30
    • CPC: G06F9/3844; G06F9/3842; G06F11/30; G06F11/3024; G06F11/3409; G06F2201/81
    • Abstract: A method utilizes information provided by performance monitoring hardware to dynamically adjust the number of levels of speculative branch predictions allowed (typically 3 or 4 per thread) for a processor core. The information includes cycles-per-instruction (CPI) for the processor core and the number of memory accesses per unit time. If the CPI is below a CPI threshold and the number of memory accesses (NMA) per unit time is above a prescribed threshold, the number of levels of speculative branch predictions is reduced per thread for the processor core. Likewise, the number of levels of speculative branch predictions could be increased, from a low level to the maximum allowed, if the CPI threshold is exceeded or the number of memory accesses per unit time is below the prescribed threshold.
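The adjustment rule in this abstract reduces to a small decision function over two monitored quantities, CPI and memory accesses per unit time. The sketch below shows one hedged reading of that rule; the threshold values, the default level range, and the function name are illustrative assumptions.

```python
# Minimal sketch of the adjustment rule in the abstract: lower the allowed
# depth of speculative branch prediction per thread when CPI is low and memory
# traffic is high, and raise it back toward the maximum otherwise.
# Thresholds and the 1..4 level range are assumed values.

def adjust_branch_prediction_levels(current_levels: int, cpi: float,
                                    mem_accesses_per_ms: float,
                                    cpi_threshold: float = 1.0,
                                    nma_threshold: float = 5000.0,
                                    min_levels: int = 1,
                                    max_levels: int = 4) -> int:
    """Return the new number of speculative branch-prediction levels for a thread."""
    if cpi < cpi_threshold and mem_accesses_per_ms > nma_threshold:
        # Instructions are already retiring quickly while memory is busy:
        # deep speculation mostly adds memory traffic and power, so back off.
        return max(min_levels, current_levels - 1)
    # Otherwise (CPI above threshold, or memory traffic below threshold),
    # allow deeper speculation again, up to the maximum.
    return min(max_levels, current_levels + 1)


if __name__ == "__main__":
    levels = 4
    levels = adjust_branch_prediction_levels(levels, cpi=0.8, mem_accesses_per_ms=9000)
    print(levels)   # 3: prediction depth reduced under memory pressure
```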