    • 51. Granted Patent
    • Pipelining D states for MRU steerage during MRU-LRU member allocation
    • Publication No.: US07831774B2
    • Publication Date: 2010-11-09
    • Application No.: US12118238
    • Filing Date: 2008-05-09
    • Inventors: Robert H. Bell, Jr.; Guy L. Guthrie; William J. Starke; Jeffrey A. Stuecheli
    • IPC: G06F13/00; G06F13/28
    • CPC: G06F12/0888; G06F12/123; G06F12/126; G06F2212/1032
    • Abstract: A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. The location of the deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make-MRU operation affects only the lower-level LRU state bits of the tree-based structure, so that it only negates selection of the specific member in the D state without affecting LRU victim selection among the other members. (A minimal software sketch of this steering follows this entry.)
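The steering mechanism above amounts to: read each member's D state from the directory, latch it, and apply a make-MRU restricted to the low end of a tree-based pseudo-LRU so the deleted member can never be walked to as the victim. The sketch below is a hypothetical software model of that idea, not the patent's hardware; `TreePLRU`, `protect_deleted`, and the bit conventions are assumptions of mine.

```python
class TreePLRU:
    """One congruence class with tree-based pseudo-LRU state (ways = power of two).

    Hypothetical model of US07831774B2's D-state steering: members whose
    directory entry is Deleted (D) are made MRU before victim selection.
    """

    def __init__(self, ways=8):
        self.ways = ways
        self.bits = [0] * (ways - 1)   # internal tree nodes; 0 = left subtree is older
        self.deleted = [False] * ways  # D state per member, as read from the directory

    def make_mru(self, way):
        """Ordinary make-MRU on a hit: point every node on the root-to-leaf
        path away from `way`."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:
                self.bits[node] = 1            # right subtree is now the older side
                node, hi = 2 * node + 1, mid
            else:
                self.bits[node] = 0            # left subtree is now the older side
                node, lo = 2 * node + 2, mid

    def protect_deleted(self, way):
        """Restricted make-MRU for a D-state member: flip only the lowest-level
        bit, directly above the leaf, so the walk lands on the member's sibling
        instead. Upper-level bits are untouched, so victim selection among the
        other members is unaffected. (One bit cannot exclude both members of a
        sibling pair; the patent's full scheme is more involved.)"""
        parent = (self.ways + way - 2) // 2    # index of the node above leaf `way`
        self.bits[parent] = 1 if way % 2 == 0 else 0

    def victim(self):
        """Steer away from every deleted member, then walk the tree to the LRU."""
        for w in range(self.ways):
            if self.deleted[w]:
                self.protect_deleted(w)
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if self.bits[node] == 0:
                node, hi = 2 * node + 1, mid   # older side is the left subtree
            else:
                node, lo = 2 * node + 2, mid   # older side is the right subtree
        return lo
```

For example, after `t = TreePLRU(8); t.deleted[2] = True`, `t.victim()` can never return 2: whenever the walk reaches the sibling pair holding member 2, the bit directly above it points at the sibling.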
    • 56. Granted Patent
    • Data processing system, processor and method of data processing in which local memory access requests are serviced on a fixed schedule
    • Publication No.: US07447844B2
    • Publication Date: 2008-11-04
    • Application No.: US11457322
    • Filing Date: 2006-07-13
    • Inventors: Leo J. Clark; Guy L. Guthrie; William J. Starke; Derek E. Williams
    • IPC: G06F12/00
    • CPC: G06F12/1425; G06F12/0817; G06F12/0897
    • Abstract: A processing unit includes a local processor core and a cache memory coupled to the local processor core. The cache memory includes a data array and a directory of the contents of the data array. The cache memory further includes one or more state machines that service a first set of memory access requests; an arbiter that directs servicing of a second set of memory access requests, by reference to the data array and the directory, on a fixed schedule; address collision logic that protects memory access requests in the second set by detecting and signaling address conflicts between active memory access requests in the second set and subsequent memory access requests; and dispatch logic coupled to the address collision logic. The dispatch logic dispatches memory access requests in the first set to the one or more state machines for servicing and signals the arbiter to direct servicing of memory access requests in the second set according to the fixed schedule. (A toy model of this dispatch path follows this entry.)
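As a rough, cycle-level illustration of the dispatch path described in the abstract: first-set requests are handed to whichever state machine is free, second-set requests wait for the arbiter's fixed grant slot, and collision logic rejects younger requests that hit a protected address. Every name below (`FixedScheduleCache`, the request dictionaries, the one-grant-every-`period`-cycles schedule) is an invented stand-in, not the patent's interface.

```python
from collections import deque

class FixedScheduleCache:
    """Toy model of dispatch, arbitration, and address-collision protection."""

    def __init__(self, num_machines=4, period=4):
        self.machines = [None] * num_machines  # busy state machines (first set);
                                               # their completion is not modeled
        self.pending = deque()                 # second-set requests awaiting a slot
        self.active = set()                    # addresses protected by collision logic
        self.period = period
        self.cycle = 0

    def dispatch(self, req):
        """Route a request: first set to a free state machine, second set to
        the fixed-schedule queue. Conflicts with protected addresses retry."""
        if req["addr"] in self.active:
            return "retry"                     # collision with an active
                                               # second-set request: signal retry
        if req["set"] == 1:
            for i, machine in enumerate(self.machines):
                if machine is None:
                    self.machines[i] = req
                    return f"machine {i}"
            return "retry"                     # no free state machine
        self.active.add(req["addr"])           # protect the address until serviced
        self.pending.append(req)
        return "queued"

    def tick(self):
        """Advance one cycle; the arbiter grants one second-set request per
        period, servicing it against the data array and directory."""
        self.cycle += 1
        if self.pending and self.cycle % self.period == 0:
            req = self.pending.popleft()
            self.active.discard(req["addr"])   # lift the collision protection
            return req
        return None
```

So `c.dispatch({"addr": 0x80, "set": 2})` queues a second-set request, and until `tick()` reaches its grant slot, any subsequent request to address `0x80` gets `"retry"`.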
    • 59. Patent Application
    • Data Processing System, Processor and Method of Data Processing in which Local Memory Access Requests are Serviced on a Fixed Schedule
    • Publication No.: US20080016278A1
    • Publication Date: 2008-01-17
    • Application No.: US11457322
    • Filing Date: 2006-07-13
    • Inventors: Leo J. Clark; Guy L. Guthrie; William J. Starke; Derek E. Williams
    • IPC: G06F12/00
    • CPC: G06F12/1425; G06F12/0817; G06F12/0897
    • Abstract: A processing unit includes a local processor core and a cache memory coupled to the local processor core. The cache memory includes a data array and a directory of the contents of the data array. The cache memory further includes one or more state machines that service a first set of memory access requests; an arbiter that directs servicing of a second set of memory access requests, by reference to the data array and the directory, on a fixed schedule; address collision logic that protects memory access requests in the second set by detecting and signaling address conflicts between active memory access requests in the second set and subsequent memory access requests; and dispatch logic coupled to the address collision logic. The dispatch logic dispatches memory access requests in the first set to the one or more state machines for servicing and signals the arbiter to direct servicing of memory access requests in the second set according to the fixed schedule. (This is the pre-grant publication of application US11457322, which issued as US07447844B2 in entry 56; the abstract is identical, and the sketch after that entry applies here as well.)
    • 60. Granted Patent
    • Data processing system, method and interconnect fabric having a flow governor
    • Publication No.: US08254411B2
    • Publication Date: 2012-08-28
    • Application No.: US11055399
    • Filing Date: 2005-02-10
    • Inventors: Leo J. Clark; Guy L. Guthrie; William J. Starke
    • IPC: H04J3/16; H04J3/22; G06F13/00; G06F15/173
    • CPC: H04L47/722; H04L47/70; H04L49/109
    • Abstract: A data processing system includes a plurality of local hubs, each coupled to a remote hub by a respective one of a plurality of point-to-point communication links. Each of the plurality of local hubs queues requests for access to memory blocks for transmission, on its respective point-to-point communication link, to a shared resource in the remote hub. Each of the plurality of local hubs transmits requests to the remote hub utilizing only a fractional portion of the bandwidth of its respective point-to-point communication link. The fractional portion that is utilized is determined by an allocation policy based at least in part upon the number of local hubs and the number of processing units represented by each local hub. The allocation policy prevents overruns of the shared resource. (A worked example of such an allocation policy follows this entry.)
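One plausible reading of the allocation policy, with invented names (the patent's abstract gives no formula): divide the remote hub's shared-resource bandwidth evenly over all processing units in the system, and throttle each local hub's link usage to the share of the units it represents.

```python
def link_fraction(units_per_hub):
    """Fraction of its point-to-point link each local hub may use, assuming the
    remote hub's shared resource is split evenly across all processing units
    represented by the local hubs (my assumption, not the patent's text)."""
    total_units = sum(units_per_hub)
    return [units / total_units for units in units_per_hub]

# Four local hubs, each representing two processing units: every hub is held
# to 1/4 of its link, so the aggregate request traffic arriving at the remote
# hub never exceeds what the shared resource can absorb.
print(link_fraction([2, 2, 2, 2]))  # [0.25, 0.25, 0.25, 0.25]
```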