    • 142. Granted invention patent
    • Optimal interconnect utilization in a data processing network
    • US07821944B2
    • 2010-10-26
    • US12059762
    • 2008-03-31
    • Wesley Michael Felter; Orran Yaakov Krieger; Ramakrishnan Rajamony
    • H04L12/56
    • H04L43/00; H04L41/0896; H04L43/026; H04L43/06; H04L43/0882
    • A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. A heavily used link is then identified from the collected data. Packet data associated with the heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.
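To make the flow of US07821944B2 concrete, here is a minimal Python sketch of the analysis step only: hypothetical per-link samples are aggregated, the most heavily used link is picked, and the dominant source/destination pair on it is reported as a migration candidate. The sample data, link names, and node names are invented for illustration; the patent's per-switch agents and the actual migration mechanism are not modeled.

from collections import defaultdict

# Hypothetical samples as a per-switch agent might report them:
# (interconnect link, source node, destination node, bytes observed).
samples = [
    ("sw0-sw1", "nodeA", "nodeC", 9_000_000),
    ("sw0-sw1", "nodeB", "nodeD", 1_000_000),
    ("sw1-sw2", "nodeB", "nodeD", 2_000_000),
]

per_link = defaultdict(int)   # total traffic per link
per_flow = defaultdict(int)   # traffic per (link, src, dst) combination
for link, src, dst, nbytes in samples:
    per_link[link] += nbytes
    per_flow[(link, src, dst)] += nbytes

# Identify the heavily used link, then the source/destination combination
# contributing most of its traffic; a process tied to that pair would be
# the candidate for migration to another node.
hot_link = max(per_link, key=per_link.get)
(_, src, dst), contrib = max(
    (item for item in per_flow.items() if item[0][0] == hot_link),
    key=lambda item: item[1],
)
print(f"{hot_link}: migrate a process of the {src}->{dst} pair "
      f"({contrib} of {per_link[hot_link]} bytes)")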
    • 143. Granted invention patent
    • Web server architecture for improved performance
    • US07499966B2
    • 2009-03-03
    • US09935414
    • 2001-08-23
    • Elmootazbellah Nabil Elnozahy; Ramakrishnan Rajamony
    • G06F15/13
    • H04L67/42; H04L29/06; H04L67/02; H04L67/34; H04L69/16; H04L69/161; H04L69/326; H04L69/329
    • A web server that integrates portions of operating system code to execute substantially within user space to reduce context switching. The web server includes an application level interpreter, such as an HTTP interpreter, configured to process client requests. The web server typically includes a network interface dedicated to processing traffic to and from the web server. The web server may include within its user space kernel device driver extensions enabling it to communicate directly with the network interface. The server may implement a polling architecture in which the server periodically monitors the interface for new requests. The web server typically includes a user space transmission protocol library that enables the server to perform its own network processing of requests and responses. The library may include TCP/IP drivers that are optimized or streamlined for processing HTTP requests.
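As a loose illustration of the polling architecture described in US07499966B2, the sketch below runs an ordinary user-space accept loop that repeatedly polls a non-blocking listening socket instead of blocking in the kernel. It uses the standard socket API rather than the patent's kernel-bypass device-driver extensions or user-space TCP/IP library, and the port number and canned response are arbitrary.

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))   # arbitrary local port for the sketch
srv.listen(16)
srv.setblocking(False)          # so accept() returns immediately when idle

RESPONSE = b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok"

while True:
    try:
        conn, _ = srv.accept()  # poll the interface for a new request
    except BlockingIOError:
        continue                # nothing pending; poll again
    conn.setblocking(True)
    conn.recv(4096)             # read (and here ignore) the HTTP request
    conn.sendall(RESPONSE)      # minimal application-level response
    conn.close()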
    • 144. Granted invention patent
    • Method and system for enhanced scheduling of memory access requests
    • US07366833B2
    • 2008-04-29
    • US10334279
    • 2002-12-31
    • Anupam Chanda; Ramakrishnan Rajamony; Freeman Leigh Rawson, III
    • G06F12/00
    • G06F3/0659; G06F3/0613; G06F3/0674
    • In information storage systems in which data retrieval requires movement of at least one physical element, a measurable amount of time is required to reposition that physical element in response to each data write or read request. After selecting one or more data requests for dispatch based solely on an approaching or past due time deadline, additional requests are identified for data to be read or written to locations which are in close proximity to previously scheduled requests, previously selected additional requests, or the present position of the moveable physical element, obviating the need to expend the full amount of time required to accelerate the physical element and then decelerate the physical element to position it over the desired area within the information storage system. In this manner, data may be transferred to or retrieved from an information storage system more efficiently with less expenditure of time.
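The two-step selection in US07366833B2 can be pictured with a small sketch: requests are first chosen purely on deadline, then requests whose data lies near the chosen positions (or the present head position) are batched in to avoid extra repositioning time. The deadlines, positions, and the urgency/proximity thresholds below are assumed values for illustration only.

# Pending requests: (deadline, position of the data, request id).
pending = [
    (5.0, 120, "r1"),
    (1.0, 400, "r2"),   # deadline nearly past due
    (9.0, 405, "r3"),   # stored close to r2
    (8.0,  50, "r4"),
]
now, head = 0.9, 100
URGENT = 0.5            # dispatch when a deadline is this close or past due
NEAR = 20               # assumed "close proximity" window in position units

# Step 1: select requests solely on approaching or past-due deadlines.
batch = [r for r in pending if r[0] - now <= URGENT]

# Step 2: add requests whose data lies close to an already selected request,
# a previously added request, or the present position of the moving element.
anchors = [head] + [pos for _, pos, _ in batch]
for req in pending:
    if req not in batch and any(abs(req[1] - a) <= NEAR for a in anchors):
        batch.append(req)
        anchors.append(req[1])

print("dispatch:", [rid for _, _, rid in sorted(batch, key=lambda r: r[1])])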
    • 145. Invention patent application
    • Dual network types solution for computer interconnects
    • US20080025288A1
    • 2008-01-31
    • US11493951
    • 2006-07-27
    • Alan Benner; Ramakrishnan Rajamony; Eugen Schenfeld; Craig Brian Stunkel; Peter A. Walker
    • H04L12/28
    • H04L12/4641; H04L12/28; H04L12/66
    • Briefly, according to an embodiment of the invention, a computing system comprises: a plurality of tightly coupled processing nodes; a plurality of circuit switched networks using a circuit switching mode, interconnecting the processing nodes, and for handling data transfers that meet one or more criteria; and a plurality of electronic packet switched networks, also interconnecting the processing nodes, for handling data transfers that do not meet the one or more criteria. The circuit switched networks and the electronic packet switched networks operate simultaneously. The system additionally comprises a plurality of clusters which comprise the processing nodes, and a plurality of intra-cluster communication links. The electronic packet switched networks are for handling collectives and short-lived data transfers among the processing nodes and comprise one-tenth of the bandwidth of the circuit switched networks.
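A toy decision function illustrates how traffic might be steered between the two interconnect types described in US20080025288A1: collectives and short-lived transfers go to the electronic packet-switched networks, while transfers large enough to amortise circuit setup use the circuit-switched networks. The size threshold is an assumption invented for this sketch, not a figure from the application.

CIRCUIT_THRESHOLD_BYTES = 1 << 20   # assumed cut-off for "long-lived" transfers

def pick_network(size_bytes: int, is_collective: bool) -> str:
    """Choose which of the two simultaneously operating network types carries
    a transfer between tightly coupled processing nodes."""
    if is_collective or size_bytes < CIRCUIT_THRESHOLD_BYTES:
        return "packet-switched"    # collectives and short-lived transfers
    return "circuit-switched"       # bulk transfers that meet the size criterion

print(pick_network(64 << 10, is_collective=False))    # packet-switched
print(pick_network(256 << 20, is_collective=False))   # circuit-switched
print(pick_network(256 << 20, is_collective=True))    # packet-switched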
    • 146. Granted invention patent
    • Server network controller including packet forwarding and method therefor
    • US07315896B2
    • 2008-01-01
    • US10165068
    • 2002-06-06
    • Eric Van Hensbergen; Ramakrishnan Rajamony
    • G06F15/16; G06F15/173
    • H04L41/0896
    • A network controller including a packet forwarding mechanism and method therefor improve load-balancing within a network system without requiring an intelligent switch having TCP splicing capability. If the network controller node is becoming overloaded (for example as indicated by a full output FIFO), the network controller forwards connections directly to alternate servers. The network controller and method further provide improved fail-safe operation, as the network controller can more easily detect failure of the coupled server than can a remote switch being monitored for failure of a connected server node. The packet forwarding mechanism can be implemented very compactly within the firmware of the network controller, providing a load-balancing solution with little incremental cost (as opposed to an intelligent switch solution) and with tight coupling to the server, providing a redirection solution from the point that has the most information available regarding the status of the associated server node.
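The overload check at the heart of US07315896B2 is simple enough to sketch: while the controller's output FIFO has room, a new connection is handed to the local server; once the FIFO is full, the connection is forwarded to an alternate server. The FIFO depth, the addresses, and the round-robin choice of alternate are assumptions for illustration; the real mechanism lives in network-controller firmware.

from collections import deque
from itertools import cycle

FIFO_DEPTH = 8
ALTERNATES = cycle(["10.0.0.2", "10.0.0.3"])   # assumed alternate servers

output_fifo = deque()          # stands in for the controller's output FIFO

def handle_connection(packet: bytes) -> str:
    """Return where a new connection is sent: the local server while the
    output FIFO has room, otherwise an alternate server chosen round-robin."""
    if len(output_fifo) < FIFO_DEPTH:
        output_fifo.append(packet)
        return "local-server"
    return next(ALTERNATES)    # forward directly, no TCP-splicing switch needed

for i in range(10):
    print(i, handle_connection(b"SYN"))   # the last two connections get forwarded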
    • 147. Invention patent application
    • Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response
    • US20070083716A1
    • 2007-04-12
    • US11245312
    • 2005-10-06
    • Ramakrishnan Rajamony; Hazim Shafi; Derek Williams; Kenneth Wright
    • G06F13/28
    • G06F12/0831
    • A method for sequentially coupling successive processor requests for a cache line before the data is received in the cache of a first coupled processor. Both homogenous and non-homogenous operations are chained to each other, and the coherency protocol includes several new intermediate coherency responses associated with the chained states. Chained coherency states are assigned to track the chain of processor requests and the grant of access permission prior to receipt of the data at the first processor. The chained coherency states also identify the address of the receiving processor. When data is received at the cache of the first processor within the chain, the processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chained coherency protocol frees up address bus bandwidth by reducing the number of retries.
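A toy model can convey the chaining idea from US20070083716A1: later requesters for a line are queued behind the first one before the data arrives, and when it does arrive each processor operates on the line and hands it to the next processor in the chain. The class and method names are invented; the protocol's intermediate coherency responses and bus mechanics are not modeled.

from collections import deque

class ChainedLine:
    def __init__(self):
        self.data = None
        self.chain = deque()     # processors granted access, in request order

    def request(self, cpu: str) -> str:
        # Permission is granted and the requester is chained even though the
        # data has not yet been received by the first coupled processor.
        self.chain.append(cpu)
        return "data-ready" if self.data is not None else "chained"

    def data_arrives(self, payload: bytes) -> None:
        # Each processor completes its operation, then forwards the line to
        # the next chained processor instead of retrying on the address bus.
        self.data = payload
        while self.chain:
            cpu = self.chain.popleft()
            nxt = self.chain[0] if self.chain else "memory"
            print(f"{cpu}: operate on line, forward to {nxt}")

line = ChainedLine()
for cpu in ("P0", "P1", "P2"):
    line.request(cpu)
line.data_arrives(b"\x00" * 128)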
    • 148. Invention patent application
    • Data replication in multiprocessor NUCA systems to reduce horizontal cache thrashing
    • US20060080506A1
    • 2006-04-13
    • US10960611
    • 2004-10-07
    • Ramakrishnan Rajamony; Xiaowei Shen; Balaram Sinharoy
    • G06F12/00
    • G06F12/0846; G06F12/0813; G06F12/084; G06F12/122; G06F2212/271
    • A method of managing a distributed cache structure having separate cache banks, by detecting that a given cache line has been repeatedly accessed by two or more processors which share the cache, and replicating that cache line in at least two separate cache banks. The cache line is optimally replicated in a cache bank having the lowest latency with respect to the given accessing processor. A currently accessed line in a different cache bank can be exchanged with a cache line in the cache bank with the lowest latency, and another line in the cache bank with lowest latency is moved to the different cache bank prior to the currently accessed line being moved to the cache bank with the lowest latency. Further replication of the cache line can be disabled when two or more processors alternately write to the cache line.
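A small bookkeeping sketch shows the replication policy described in US20060080506A1: once two or more processors have each accessed a line repeatedly, a copy is placed in the lowest-latency bank of each sharer, and further replication stops if two or more processors are writing the line. The access threshold and the processor-to-bank mapping are assumptions for illustration.

from collections import defaultdict

REPEAT = 4                                                     # assumed "repeatedly accessed" threshold
NEAREST_BANK = {"P0": "bank0", "P1": "bank1", "P2": "bank2"}   # assumed topology

counts = defaultdict(int)      # (line, cpu) -> number of accesses
writers = defaultdict(set)     # line -> cpus that have written it
replicas = defaultdict(set)    # line -> banks holding a copy

def access(line: str, cpu: str, write: bool = False) -> None:
    counts[(line, cpu)] += 1
    if write:
        writers[line].add(cpu)
    if len(writers[line]) >= 2:        # alternating writers: stop replicating
        return
    sharers = [c for (l, c), n in counts.items() if l == line and n >= REPEAT]
    if len(sharers) >= 2:
        for c in sharers:
            replicas[line].add(NEAREST_BANK[c])   # copy into each sharer's closest bank

for _ in range(REPEAT):
    access("lineA", "P0")
    access("lineA", "P1")
print(replicas["lineA"])       # both sharers end up with a nearby replica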
    • 149. Granted invention patent
    • Verification of service level agreement contracts in a client server environment
    • US06792459B2
    • 2004-09-14
    • US09736573
    • 2000-12-14
    • Elmootazbellah Nabil Elnozahy; Ramakrishnan Rajamony
    • G06F15/173
    • G06Q30/018; G06Q30/02; G06Q30/0277
    • A method, apparatus and computer program product are disclosed to enable independent verification of a service level agreement between two parties. In one embodiment, a first party contracts the hosting service of a second party to provide said first party with Web pages and services on the second party's equipment. Said contract contains a Service Level Agreement specifying performance parameters and guarantees for the response time experienced by users of said Web pages and services. Independent verification by a third party of said agreement is done for a fee through several steps. In a first step, said third party inserts measuring and reporting instructions into blocks of information maintained on the server of said second party. The measuring instructions are for delivery to the client with the blocks of information. The delivery of the instructions occurs responsive to a request for the information by the client. Once they are delivered, the instructions are executed by the client. This client-side execution produces a measure of the service that is provided to the client by the network and the server. In another step, reporting instructions are inserted into the blocks of information. Like the measuring instructions, the reporting instructions are also for delivery to the client. The reporting instructions may be in just one of the blocks of information, and their delivery also occurs responsive to a request for the information by the client. As a result of being executed by the client, the reporting instructions cause the client to send a report of the measure to a verifying agent.
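The instrumentation step in US06792459B2 can be sketched as inserting a client-side measuring and reporting snippet into hosted pages before they are served; the client then times the page and reports the measure to the verifying agent. The snippet body, the insertion point, and the verifier URL below are purely illustrative, not taken from the patent.

# Illustrative measuring/reporting instructions to be delivered with the page
# and executed by the client; the verifier URL is a made-up placeholder.
SNIPPET = """<script>
var t0 = Date.now();
window.addEventListener('load', function () {
  var ms = Date.now() - t0;                                       // client-side measure
  new Image().src = 'https://verifier.example/report?ms=' + ms;   // report it
});
</script>"""

def instrument(page_html: str) -> str:
    """Insert the instructions into a block of information (here, just before
    </head>) so they reach the client along with the requested page."""
    return page_html.replace("</head>", SNIPPET + "\n</head>", 1)

print(instrument("<html><head><title>hosted page</title></head><body></body></html>"))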