    • 4. Granted patent
    • Title: Proactive load balancing
    • Publication number: US08073952B2
    • Publication date: 2011-12-06
    • Application number: US12427774
    • Filing date: 2009-04-22
    • Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin
    • IPC classes: G06F15/173
    • CPC classes: H04L67/1008, H04L67/1002
    • Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers, using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect a destination server's performance and/or capacity for handling requests. Upon detecting such an event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
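The drain-and-restore cycle in this abstract maps naturally onto a small piece of code. Below is a minimal Python sketch of the idea; the ProactiveBalancer class, its method names, and the round-robin pick are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ProactiveBalancer:
    """Toy model of the drain/restore cycle: a server drains shortly before
    a known upcoming event (e.g. a scheduled maintenance task) and is
    restored once the event completes, so clients are redirected ahead
    of time."""
    servers: list[str]
    draining: set[str] = field(default_factory=set)
    _rr: int = 0  # round-robin cursor

    def on_event_scheduled(self, server: str) -> None:
        self.draining.add(server)      # drain connections around the event

    def on_event_complete(self, server: str) -> None:
        self.draining.discard(server)  # restore connections to the server

    def route(self) -> str:
        """Pick a destination among servers that are not draining."""
        candidates = [s for s in self.servers if s not in self.draining]
        if not candidates:
            raise RuntimeError("no destination servers available")
        self._rr = (self._rr + 1) % len(candidates)
        return candidates[self._rr]

# Usage: drain web2 before its event, route around it, then restore it.
lb = ProactiveBalancer(["web1", "web2", "web3"])
lb.on_event_scheduled("web2")
assert lb.route() in {"web1", "web3"}
lb.on_event_complete("web2")
```

The distinguishing point versus conventional reactive balancing is that draining happens before a known event rather than after health checks detect degradation, so in-flight work finishes cleanly and new clients never reach the affected server.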
    • 6. Patent application
    • Title: BYTE RANGE CACHING
    • Publication number: US20100318632A1
    • Publication date: 2010-12-16
    • Application number: US12485090
    • Filing date: 2009-06-16
    • Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
    • IPC classes: G06F15/16, G06F12/08
    • CPC classes: H04N21/64322, H04L65/605, H04L65/608, H04L65/80, H04L67/2819, H04L67/2842, H04L67/2852, H04N21/23106, H04N21/6125
    • Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in its cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap previously requested byte ranges from the chunks already retrieved into the cache, and makes requests to the origin server only for those chunks that a client has not previously requested.
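The heart of this scheme is mapping a requested byte range onto fixed-size chunk indices and fetching only the chunks not already cached. A rough Python sketch follows; the fixed CHUNK_SIZE and the fetch_from_origin callable are assumptions for illustration (the application prescribes neither).

```python
CHUNK_SIZE = 256 * 1024  # assumed fixed chunk size; the application leaves sizing open

def overlapped_chunks(first: int, last: int, chunk_size: int = CHUNK_SIZE) -> range:
    """Indices of the chunks overlapped by the inclusive byte range [first, last]."""
    return range(first // chunk_size, last // chunk_size + 1)

cache: dict[int, bytes] = {}  # chunk index -> chunk bytes

def serve_range(first: int, last: int, fetch_from_origin) -> bytes:
    """Answer a byte-range request from cached chunks, fetching only the misses."""
    for idx in overlapped_chunks(first, last):
        if idx not in cache:  # ask the origin only for chunks we do not hold
            start = idx * CHUNK_SIZE
            cache[idx] = fetch_from_origin(start, start + CHUNK_SIZE - 1)
    body = b"".join(cache[i] for i in overlapped_chunks(first, last))
    offset = first % CHUNK_SIZE  # trim the leading bytes of the first chunk
    return body[offset : offset + (last - first + 1)]

# Example with a fake origin that serves zero bytes for any requested range.
fake_origin = lambda s, e: bytes(e - s + 1)
assert len(serve_range(100_000, 300_000, fake_origin)) == 200_001
```

Because chunk boundaries are fixed, any two clients whose byte ranges overlap will hit the same chunk keys in the cache, which is what eliminates the duplicate origin requests the abstract mentions.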
    • 9. Patent application
    • Title: TRANSPARENT MIGRATION OF ENDPOINT
    • Publication number: US20110270908A1
    • Publication date: 2011-11-03
    • Application number: US12768750
    • Filing date: 2010-04-28
    • Inventors: Randall Kern, Parveen Patel, Lihua Yuan, Anil K. Ruia, Won Suk Yoo
    • IPC classes: G06F15/16
    • CPC classes: H04L45/24
    • Abstract: An architecture that facilitates the capture of the connection state of a connection established between a client and an intermediate server, and forwards that state to one or more target servers. A software component at the target server (as well as the intermediate server) uses this connection state to reply to the client directly, thereby bypassing the intermediate server. All packets from the client related to the request are received at the intermediate server and then forwarded to the target server. The migration can be accomplished without any change to the client operating system or client applications, without assistance from a gateway device such as a load balancer or from the network, without duplicating all packets between the multiple servers, and without changes to the transport layer stack of the intermediate and target servers.
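One way to picture the handoff is as a small record of connection state that the intermediate server captures and the target server adopts so it can answer the client directly. The ConnectionState fields and the adopt method below are illustrative guesses only; the application also covers the packet relaying and transport-layer behavior that this sketch omits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConnectionState:
    """Minimal stand-in for the captured connection state: just enough for a
    target server to keep talking to the client directly. The field choice
    here is illustrative, not taken from the patent text."""
    client_ip: str
    client_port: int
    vip: str        # the virtual IP the client originally connected to
    vip_port: int
    snd_seq: int    # next TCP sequence number to send
    rcv_seq: int    # next TCP sequence number expected from the client

class TargetServer:
    def __init__(self) -> None:
        self.connections: dict[tuple, ConnectionState] = {}

    def adopt(self, state: ConnectionState) -> None:
        # With the adopted state, replies can go to the client directly,
        # bypassing the intermediate server on the return path.
        self.connections[(state.client_ip, state.client_port)] = state

def migrate(state: ConnectionState, target: TargetServer) -> None:
    # The intermediate server hands the captured state to the target and,
    # from then on, only relays the client's inbound packets to it.
    target.adopt(state)

# Usage: hand one established connection off to a target server.
target = TargetServer()
migrate(ConnectionState("10.0.0.7", 51200, "203.0.113.5", 80, 1001, 2002), target)
```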
    • 10. Granted patent
    • Title: Network caching for multiple contemporaneous requests
    • Publication number: US08046432B2
    • Publication date: 2011-10-25
    • Application number: US12425395
    • Filing date: 2009-04-17
    • Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
    • IPC classes: G06F15/16, G06F15/167
    • CPC classes: H04L67/2842, H04L67/2833, H04L67/2885
    • Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while "holding" other requests for the same content. If the system receives a second request while the first request is pending, the system recognizes that a similar request is outstanding and holds the second request by not forwarding it to the origin server. After the response to the first request arrives from the next tier, the system shares the response with the other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network, rather than by increasing the capacity of the origin server.
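The hold-and-share behavior is essentially request coalescing (the same idea later popularized by Go's singleflight package): only the first miss for a key is forwarded to the next tier, and contemporaneous requests wait for, then share, that one response. A hedged Python sketch follows, with fetch_from_origin again a hypothetical stand-in for the next-tier request; error handling for a failed fetch is omitted.

```python
import threading

class LiveCache:
    """Toy request coalescer for the hold-and-share behavior described above."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin            # hypothetical next-tier fetcher
        self._cache: dict[str, bytes] = {}
        self._pending: dict[str, threading.Event] = {}
        self._lock = threading.Lock()

    def get(self, key: str) -> bytes:
        with self._lock:
            if key in self._cache:                 # cache hit: answer immediately
                return self._cache[key]
            event = self._pending.get(key)
            if event is None:                      # first request: we forward it
                event = self._pending[key] = threading.Event()
                is_owner = True
            else:                                  # duplicate in flight: hold it
                is_owner = False
        if is_owner:
            self._cache[key] = self._fetch(key)    # one request to the next tier
            with self._lock:
                del self._pending[key]
            event.set()                            # release the held requests
        else:
            event.wait()                           # held until the response arrives
        return self._cache[key]

# Usage: many threads asking for the same key trigger a single origin fetch.
calls = []
cache = LiveCache(lambda k: calls.append(k) or b"segment-bytes")
threads = [threading.Thread(target=cache.get, args=("live/seg42",)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert calls == ["live/seg42"]                     # origin saw exactly one request
```

This is why the abstract frames capacity planning around the cache tier: for live content, the origin sees roughly one request per segment regardless of how many viewers arrive at once.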