    • 7. Patent Application
    • Title: TRANSPARENT MIGRATION OF ENDPOINT
    • Publication No.: US20110270908A1
    • Publication date: 2011-11-03
    • Application No.: US12768750
    • Filing date: 2010-04-28
    • Inventors: Randall Kern, Parveen Patel, Lihua Yuan, Anil K. Ruia, Won Suk Yoo
    • IPC: G06F15/16
    • CPC: H04L45/24
    • Abstract: Architecture that facilitates the capture of connection state of a connection established between a client and an intermediate server and forwards the state to one or more target servers. A software component at the target server (as well as the intermediate server) uses this connection state to reply back to the client directly, thereby bypassing the intermediate server. All packets from the client related to the request are received at the intermediate server and then forwarded to the target server. The migration can be accomplished without any change in the client operating system and client applications, without assistance from a gateway device such as a load balancer or the network, without duplication of all packets between the multiple servers, and without changes to the transport layer stack of the intermediate and target servers. (See the sketch after this record.)
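A minimal sketch of the hand-off this abstract describes: the intermediate server captures the connection's state and ships it to a target server, which then answers the client directly. Every name here (ConnectionState, capture_and_forward, adopt_connection) is hypothetical, and a working system would need kernel-level support to splice an established TCP connection; this only models the state capture and transfer.

```python
# Illustrative sketch only; all identifiers are invented for this example.
# A real hand-off requires kernel support to adopt a live TCP connection.
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class ConnectionState:
    """State of a client<->intermediate connection worth forwarding."""
    client_ip: str
    client_port: int
    vip: str           # the address the client actually connected to
    vip_port: int
    snd_seq: int       # next TCP sequence number to send
    rcv_seq: int       # next TCP sequence number expected

def capture_and_forward(state: ConnectionState, target: tuple) -> None:
    """Intermediate server: serialize the connection state and push it
    to the chosen target server over a separate control channel."""
    with socket.create_connection(target) as ctrl:
        ctrl.sendall(json.dumps(asdict(state)).encode("utf-8"))

def adopt_connection(raw: bytes) -> ConnectionState:
    """Target server: rebuild the state so replies can be sent straight
    to the client (sourced from the VIP), bypassing the intermediate on
    the return path while inbound packets are still relayed through it."""
    return ConnectionState(**json.loads(raw))
```

The point the abstract stresses is preserved in this model: the client keeps talking to the same address, so nothing changes on the client side; only a server-to-server transfer of connection state is added.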
    • 8. Granted Patent
    • Title: Network caching for multiple contemporaneous requests
    • Publication No.: US08046432B2
    • Publication date: 2011-10-25
    • Application No.: US12425395
    • Filing date: 2009-04-17
    • Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, John A. Bocharov, Ning Lin
    • IPC: G06F15/16, G06F15/167
    • CPC: H04L67/2842, H04L67/2833, H04L67/2885
    • Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server. (See the sketch after this record.)
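The “hold” behavior in this abstract is what is now commonly called request coalescing (single-flight). A hedged Python sketch, with all names (CoalescingCache, fetch_from_next_tier) invented for illustration: the first miss for a key goes to the next tier, while concurrent requests for the same key block on a shared future and reuse the one response.

```python
# Request-coalescing sketch of the "hold" behavior; names are illustrative.
import threading
from concurrent.futures import Future

class CoalescingCache:
    def __init__(self, fetch_from_next_tier):
        self._fetch = fetch_from_next_tier   # e.g. an HTTP GET upstream
        self._lock = threading.Lock()
        self._cache = {}                     # key -> cached response
        self._inflight = {}                  # key -> Future for pending fetch

    def get(self, key):
        with self._lock:
            if key in self._cache:           # cache hit: serve locally
                return self._cache[key]
            fut = self._inflight.get(key)
            if fut is None:                  # first miss: become the leader
                fut = self._inflight[key] = Future()
                leader = True
            else:                            # duplicate request: hold it
                leader = False
        if leader:
            try:
                value = self._fetch(key)     # single upstream fetch
                with self._lock:
                    self._cache[key] = value
                    del self._inflight[key]
                fut.set_result(value)
            except Exception as exc:
                with self._lock:
                    del self._inflight[key]
                fut.set_exception(exc)
        return fut.result()                  # held requests wake up here
```

With this in place, a burst of simultaneous requests for the same live fragment produces exactly one upstream fetch, which is how the abstract proposes to shield the origin server during large events.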
    • 9. Patent Application
    • Title: SELECTIVE CONTENT PRE-CACHING
    • Publication No.: US20110131341A1
    • Publication date: 2011-06-02
    • Application No.: US12626957
    • Filing date: 2009-11-30
    • Inventors: Won Suk Yoo, Venkat Raman Don, Anil K. Ruia, Ning Lin, Chittaranjan Pattekar
    • IPC: G06F15/16
    • CPC: G06F16/9574
    • Abstract: A selective pre-caching system reduces the amount of content cached at cache proxies by limiting the cached content to that content that a particular cache proxy is responsible for caching. This can substantially reduce the content stored on each cache proxy and reduces the amount of resources consumed for pre-caching in preparation for a particular event. The cache proxy receives a list of content items and an indication of the topology of the cache network. The cache proxy uses the received topology to determine the content items in the received list that the cache proxy is responsible for caching. The cache proxy then retrieves the determined content items so that they are available in the cache before client requests are received. (See the sketch after this record.)
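A sketch of the responsibility computation this abstract describes, under an assumed topology model (an ordered list of proxy names) and an assumed hash-mod assignment scheme; the abstract only requires that every proxy derive the same partition of the content list from the same topology. All identifiers are illustrative.

```python
# Hypothetical partitioning step for selective pre-caching.
import hashlib

def my_items(content_urls, topology, me):
    """Return the subset of content_urls this proxy should pre-fetch,
    given the shared cache-network topology (a list of proxy names)."""
    slot = topology.index(me)

    def owner(url):
        # Deterministic hash so every proxy computes the same owner.
        digest = hashlib.sha256(url.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big") % len(topology)

    return [u for u in content_urls if owner(u) == slot]

# Example: with topology ["p1", "p2", "p3"], each proxy calls
# my_items(urls, topology, its_own_name) and pre-fetches only its share.
```

Because the assignment is derived purely from the shared topology and a deterministic hash, the three proxies in the example compute disjoint subsets that together cover the full list, so warming the caches for a large event costs each proxy only its fraction of the content.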