    • 85. Granted invention patent
    • Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response
    • US07409504B2
    • 2008-08-05
    • US11245312
    • 2005-10-06
    • Ramakrishnan Rajamony; Hazim Shafi; Derek Edward Williams; Kenneth Lee Wright
    • G06F12/00
    • G06F12/0831
    • A method for sequentially coupling successive processor requests for a cache line before the data is received in the cache of a first coupled processor. Both homogenous and non-homogenous operations are chained to each other, and the coherency protocol includes several new intermediate coherency responses associated with the chained states. Chained coherency states are assigned to track the chain of processor requests and the grant of access permission prior to receipt of the data at the first processor. The chained coherency states also identify the address of the receiving processor. When data is received at the cache of the first processor within the chain, the processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chained coherency protocol frees up address bus bandwidth by reducing the number of retries.
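The abstract above (US07409504B2) describes linking successive requesters for a cache line whose data is still in flight, so the bus answers later requesters with a chained response instead of a retry and the data is handed along the chain once it arrives. The following is a minimal Python sketch of that bookkeeping; the state names (CHAINED_WAIT, CHAINED_FWD) and the complete_op / send_data callbacks are illustrative assumptions, not terms from the patent.

```python
from enum import Enum, auto

class CohState(Enum):
    INVALID = auto()
    MODIFIED = auto()
    # Hypothetical names for the patent's chained states: the line's data is
    # still outstanding, but a later requester has been linked behind it.
    CHAINED_WAIT = auto()   # waiting for data, no successor chained yet
    CHAINED_FWD = auto()    # waiting for data, must forward to next_id

class CacheLine:
    def __init__(self, addr):
        self.addr = addr
        self.state = CohState.INVALID
        self.next_id = None      # identity of the next processor in the chain
        self.data = None

def issue_request(line):
    """A processor wins the address tenure for the line; the data has not
    arrived yet, so the line waits in a chained-wait state."""
    line.state = CohState.CHAINED_WAIT

def snoop_request(line, requester_id):
    """A later request arrives while the data is still outstanding. Rather
    than answering with a retry, chain the requester behind the line."""
    if line.state is CohState.CHAINED_WAIT:
        line.state = CohState.CHAINED_FWD
        line.next_id = requester_id
        return "chained"          # intermediate coherency response
    return "retry"                # conventional fallback

def data_arrived(line, data, complete_op, send_data):
    """The data finally arrives: finish the local operation, then forward the
    line to the chained successor, if any."""
    line.data = complete_op(data)
    if line.state is CohState.CHAINED_FWD and line.next_id is not None:
        send_data(line.next_id, line.addr, line.data)
        line.state = CohState.INVALID     # ownership moves down the chain
    else:
        line.state = CohState.MODIFIED
```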
    • 86. Invention patent application
    • OPTIMAL INTERCONNECT UTILIZATION IN A DATA PROCESSING NETWORK
    • US20080181111A1
    • 2008-07-31
    • US12059762
    • 2008-03-31
    • Wesley Michael Felter; Orran Yaakov Krieger; Ramakrishnan Rajamony
    • H04L12/56
    • H04L43/00; H04L41/0896; H04L43/026; H04L43/06; H04L43/0882
    • A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.
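The abstract above (US20080181111A1) is essentially a measure-and-migrate loop: per-link traffic samples tagged with source and destination are aggregated, the busiest over-threshold link is found, and the flow contributing most to it identifies the process to migrate. A rough Python sketch under those assumptions follows; the parameter names and data shapes are hypothetical.

```python
from collections import Counter

def find_migration_candidate(link_samples, threshold_bytes):
    """link_samples maps link_id -> list of (src, dst, nbytes) tuples, as the
    per-switch agents might report them. Returns (link_id, (src, dst)) for the
    flow contributing most to the busiest over-threshold link, or None if no
    link exceeds the threshold."""
    totals = {link: sum(n for _, _, n in samples)
              for link, samples in link_samples.items()}
    hot_links = [link for link, total in totals.items() if total > threshold_bytes]
    if not hot_links:
        return None
    busiest = max(hot_links, key=totals.get)
    flows = Counter()
    for src, dst, nbytes in link_samples[busiest]:
        flows[(src, dst)] += nbytes
    top_flow, _ = flows.most_common(1)[0]
    # The caller would then migrate the process behind top_flow to another
    # node so its traffic no longer crosses the busiest link.
    return busiest, top_flow
```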
    • 89. Invention patent application
    • System and method of managing cache hierarchies with adaptive mechanisms
    • US20060277366A1
    • 2006-12-07
    • US11143328
    • 2005-06-02
    • Ramakrishnan Rajamony; Hazim Shafi; William Speight; Lixin Zhang
    • G06F12/00
    • G06F12/0897; G06F12/0817; G06F12/0822
    • A system and method of managing cache hierarchies with adaptive mechanisms. A preferred embodiment of the present invention includes, in response to selecting a data block for eviction from a memory cache (the source cache) out of a collection of memory caches, examining a data structure to determine whether an entry exists that indicates that the data block has been evicted from the source memory cache, or another peer cache, to a slower cache or memory and subsequently retrieved from the slower cache or memory into the source memory cache or other peer cache. Also, a preferred embodiment of the present invention includes, in response to determining the entry exists in the data structure, selecting a peer memory cache out of the collection of memory caches at the same level in the hierarchy to receive the data block from the source memory cache upon eviction.
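The abstract above (US20060277366A1) keeps a record of blocks that bounced: evicted to a slower level and later fetched back. The next time such a block is evicted, it is cast out laterally to a peer cache at the same level rather than going down again. A small, self-contained Python sketch follows; the peer-selection policy (least-occupied peer) is an assumption the patent leaves open.

```python
class PeerCache:
    """Minimal stand-in for one cache at a given level of the hierarchy."""
    def __init__(self, name):
        self.name = name
        self.lines = {}               # addr -> block

    def occupancy(self):
        return len(self.lines)

    def install(self, addr, block):
        self.lines[addr] = block

class AdaptiveEvictor:
    """Remembers blocks that were evicted to the slower level and later
    fetched back; the next time such a block is evicted, it is cast out
    laterally to a peer cache at the same level instead of going down."""
    def __init__(self, peers, slower_level):
        self.peers = peers            # peer caches at the same level
        self.slower = slower_level    # next (slower) level, also a PeerCache here
        self.bounced = set()          # addresses that were evicted then re-fetched

    def record_refetch(self, addr):
        """Called when a block returns from the slower level after an
        earlier eviction from this level."""
        self.bounced.add(addr)

    def evict(self, addr, block):
        if addr in self.bounced:
            # Peer-selection policy (least occupied) is an assumption; the
            # patent only requires picking some peer at the same level.
            peer = min(self.peers, key=PeerCache.occupancy)
            peer.install(addr, block)
        else:
            self.slower.install(addr, block)   # ordinary downward eviction
```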
    • 90. Granted invention patent
    • Multiple disk, variable RPM data storage system for reducing power consumption
    • US07079341B2
    • 2006-07-18
    • US10798935
    • 2004-03-11
    • Michael David Kistler; Ramakrishnan Rajamony
    • G11B5/09; G06F12/00
    • G11B19/28; G06F11/3419; G06F11/3433; G06F11/3485; G06F2201/81; Y02D10/34
    • A data storage system includes a set of disks where each disk is operable in a plurality of discrete angular velocity levels. A disk controller controls the angular velocity of each active disk. The controller replicates a first portion of data on a plurality of the disks and stores a second class of data in the set of disks without replication. The disk controller routes data requests to one of the active disks based, at least in part, on the current loading of the active disks to maintain balanced loading on the active disks. The disk controller alters the angular velocity of at least one of the active disks upon detecting that the latency of one or more of the data requests differs from a specified threshold. In this manner, the disk controller maintains the angular velocity of the active disks at approximately the same minimum angular velocity needed to attain acceptable performance. The disk controller may replicate the first portion of data on each of the disks in the set of disks. The disk controller may balance the loading on the active disks by routing an incoming request to the active disk with the least loading. The disk controller may maintain each of the active disks at approximately the same angular velocity by preventing the angular velocity of any active disk from differing from the angular velocity of any other active disk by more than one discrete level. The disk controller may recognize two or more levels of request priorities. In this embodiment, the disk controller routes requests of a first priority to an active disk in a first subset of active disks based, at least in part, on the current loading of the disks in the first subset and route requests of a second priority to an active disk in a second subset of active disks based, at least in part, on the current loading of the disks in the second subset.
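The abstract above (US07079341B2) combines load-balanced request routing across active disks with per-disk speed adjustment, constrained so that no two active disks differ by more than one discrete RPM level. The following Python sketch captures that control logic; the RPM levels and latency target are placeholder values, not figures from the patent.

```python
class Disk:
    RPM_LEVELS = (5400, 7200, 10000)   # placeholder discrete speeds

    def __init__(self, name):
        self.name = name
        self.level = 0                 # index into RPM_LEVELS
        self.outstanding = 0           # queued requests (completion not modeled)

class VariableRpmController:
    """Routes each request to the least-loaded active disk and moves a disk's
    speed up or down one discrete level when observed latency drifts from the
    target, never letting active disks differ by more than one level."""
    def __init__(self, disks, target_latency_ms):
        self.disks = disks
        self.target = target_latency_ms

    def route(self, request):
        disk = min(self.disks, key=lambda d: d.outstanding)
        disk.outstanding += 1
        return disk                    # the disk chosen to service the request

    def adjust_speed(self, disk, observed_latency_ms):
        levels = [d.level for d in self.disks]
        if observed_latency_ms > self.target and disk.level < len(Disk.RPM_LEVELS) - 1:
            if disk.level + 1 - min(levels) <= 1:   # stay within one level of peers
                disk.level += 1
        elif observed_latency_ms < self.target and disk.level > 0:
            if max(levels) - (disk.level - 1) <= 1:
                disk.level -= 1
```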