    • 2. Granted invention patent
    • Methods and systems for presentation layer redirection for network optimization
    • US08051192B2
    • 2011-11-01
    • US12248256
    • 2008-10-09
    • Peter Lepeska
    • Peter Lepeska
    • G06F15/16
    • G06F15/16
    • The present invention relates to systems, apparatus, and methods of intercepting commands at an application presentation layer. The method includes intercepting, at a proxy client, a command issued by an application to a network resource before the command is converted into a corresponding protocol command. The method further includes forwarding a simplified command of the corresponding protocol command to a proxy server, and converting, at the proxy server, the simplified command into the corresponding protocol command. Further, the method includes transmitting the corresponding protocol command to the network resource and receiving a response from the network resource, such that the response corresponds to the protocol. The method further includes transmitting a confirmation message to the proxy client upon completion of the corresponding protocol command and transmitting the confirmation message to the application.
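The abstract above describes a proxy client that intercepts an application's command before protocol conversion, forwards a simplified form to a proxy server, and relays the server's confirmation back to the application. The following is a minimal in-process sketch of that flow; all class and method names (ProxyClient, ProxyServer, intercept, handle) are hypothetical illustrations, not the patented implementation.

```python
# Toy sketch of presentation-layer interception via a proxy client/server pair.

class ProxyServer:
    """Converts a simplified command into a concrete protocol command and runs it."""

    def handle(self, simplified: dict) -> str:
        # Convert the simplified command into the corresponding protocol command
        # (represented here as an HTTP-style request line for illustration).
        protocol_command = f"GET {simplified['resource']} HTTP/1.1"
        # Pretend to transmit the command to the network resource and read a response.
        response = f"200 OK for {simplified['resource']}"
        # On completion, return a confirmation to the proxy client.
        return f"confirmed: {protocol_command} -> {response}"


class ProxyClient:
    """Intercepts application commands before they become protocol commands."""

    def __init__(self, server: ProxyServer):
        self.server = server

    def intercept(self, app_command: str, resource: str) -> str:
        # Forward only a simplified form of the command to the proxy server.
        simplified = {"op": app_command, "resource": resource}
        confirmation = self.server.handle(simplified)
        # Relay the confirmation back to the application.
        return confirmation


if __name__ == "__main__":
    client = ProxyClient(ProxyServer())
    print(client.intercept("open", "/reports/example.pdf"))
```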
    • 4. Invention patent application
    • Methods and Systems for the Use of Effective Latency to Make Dynamic Routing Decisions for Optimizing Network Applications
    • US20090193147A1
    • 2009-07-30
    • US12362254
    • 2009-01-29
    • Peter Lepeska
    • Peter Lepeska
    • G06F15/173
    • H04L67/101; H04L67/1002; H04L67/1012
    • The present invention relates to systems, apparatus, and methods for implementing dynamic routing. The method includes receiving a request for data located at a content server from a client system and determining latency between the client system and the content server. Based on the latency between the client system and the content server being greater than a first threshold value, the method determines latency between the client system and each of a plurality of acceleration servers. The method selects the acceleration server with the lowest latency, and determines latency between the selected acceleration server and the content server. Furthermore, based on the latency between the selected acceleration server and the content server being less than a second threshold, the method establishes an acceleration tunnel between the client system and the content server through the selected acceleration server and transfers the requested data to the client system using the acceleration tunnel.
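The abstract above describes a two-threshold routing decision: if the direct client-to-content latency exceeds a first threshold, pick the acceleration server closest to the client, and tunnel through it only if its latency to the content server is below a second threshold. Here is a small sketch of that decision logic; the function name, server names, and threshold values are invented for illustration, and latency measurement and tunnel setup are out of scope.

```python
# Illustrative two-threshold routing decision based on measured latencies (ms).

def choose_route(direct_latency_ms, accel_latencies_ms, accel_to_content_ms,
                 first_threshold_ms=100, second_threshold_ms=60):
    """Return 'direct' or the name of the acceleration server to tunnel through."""
    # If the direct client-to-content latency is acceptable, no tunnel is needed.
    if direct_latency_ms <= first_threshold_ms:
        return "direct"
    # Otherwise pick the acceleration server with the lowest client-side latency.
    best = min(accel_latencies_ms, key=accel_latencies_ms.get)
    # Only tunnel if that server also reaches the content server quickly enough.
    if accel_to_content_ms[best] < second_threshold_ms:
        return best
    return "direct"


if __name__ == "__main__":
    route = choose_route(
        direct_latency_ms=180,
        accel_latencies_ms={"accel-east": 35, "accel-west": 80},
        accel_to_content_ms={"accel-east": 40, "accel-west": 20},
    )
    print(route)  # -> "accel-east"
```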
    • 5. Invention patent application
    • METHODS AND SYSTEMS FOR IMPLEMENTING A CACHE MODEL IN A PREFETCHING SYSTEM
    • US20090100228A1
    • 2009-04-16
    • US12252181
    • 2008-10-15
    • Peter Lepeska; William B. Sebastian
    • Peter Lepeska; William B. Sebastian
    • G06F12/08
    • G06F17/30902; H04L67/02; H04L67/28; H04L67/2842; H04L67/2847
    • The present invention relates to systems and methods of enhancing prefetch operations. The method includes fetching an object from a page on a web server. The method further includes storing, at a proxy server, caching instructions for the fetched object. The proxy server is connected with the client and the object is cached at the client. Furthermore, the method includes identifying a prefetchable reference to the fetched object in a subsequent web page and using the caching instructions stored on the proxy server to determine if a fresh copy of the object will be requested by the client. Further, the method includes, based on the determination that the object will be requested, sending a prefetch request for the object using an If-Modified-Since directive, and transmitting a response to the If-Modified-Since directive prefetch request to a proxy client.
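The abstract above describes a proxy-side cache model: the proxy remembers the caching instructions seen when an object was fetched and, when the object is referenced again, decides whether the client's copy would be re-requested and, if so, prefetches it with an If-Modified-Since conditional request. The sketch below illustrates that idea under assumed names (PrefetchCacheModel, record, prefetch_request); it models caching instructions as a Last-Modified value plus a max-age, which is an assumption for illustration only.

```python
# Hedged sketch: proxy-side cache model driving conditional prefetch requests.

import time

class PrefetchCacheModel:
    def __init__(self):
        # url -> (last_modified, max_age_seconds, stored_at)
        self.instructions = {}

    def record(self, url, last_modified, max_age_seconds):
        """Store the caching instructions observed when the object was fetched."""
        self.instructions[url] = (last_modified, max_age_seconds, time.time())

    def prefetch_request(self, url):
        """Return a conditional prefetch request if the client would re-request."""
        entry = self.instructions.get(url)
        if entry is None:
            return {"method": "GET", "url": url}          # never seen: plain prefetch
        last_modified, max_age, stored_at = entry
        if time.time() - stored_at < max_age:
            return None                                   # still fresh at the client
        # Stale at the client: prefetch with If-Modified-Since so a 304 stays cheap.
        return {"method": "GET", "url": url,
                "headers": {"If-Modified-Since": last_modified}}


if __name__ == "__main__":
    model = PrefetchCacheModel()
    model.record("http://example.com/logo.png",
                 "Tue, 14 Oct 2008 08:00:00 GMT", max_age_seconds=0)
    print(model.prefetch_request("http://example.com/logo.png"))
```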
    • 8. Invention patent application
    • DNS PREFETCH
    • US20100146415A1
    • 2010-06-10
    • US12685691
    • 2010-01-12
    • Peter Lepeska
    • Peter Lepeska
    • G06F15/16; G06F3/048
    • H04L67/02; G06F16/00; H04L29/00; H04L29/12066; H04L29/12811; H04L61/1511; H04L61/6009
    • The disclosure relates to systems, apparatus, and methods of reducing round trips associated with DNS lookups in ways that are substantially transparent to the user. Embodiments implement prefetching of DNS entries, sometimes piggybacking on the prefetching of associated web objects. In one embodiment, prefetching of an object continues according to other prefetching techniques, until the point where the HTML response may be parsed. When an embedded object request is identified, a DNS lookup is performed, and the resulting IP address is pushed to the client as part of a prefetch data package. In some embodiments, the client strips off the relevant portion of the prefetch data package to create a local DNS entry. The DNS entry may be used to locally handle DNS requests by the client, thereby potentially avoiding a round trip to a remote DNS.
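The abstract above describes piggybacking DNS results on object prefetching: hostnames found in embedded-object URLs are resolved server-side, the resulting IP addresses ride along in the prefetch data package, and the client strips them into a local table that answers later lookups without a DNS round trip. Below is an illustrative sketch of that flow, not the patented implementation; the package layout and the names build_prefetch_package and LocalDnsTable are assumptions.

```python
# Illustrative DNS-prefetch sketch: bundle host->IP mappings with prefetched objects.

import socket
from urllib.parse import urlparse

def build_prefetch_package(embedded_urls):
    """Resolve the host of each embedded object and attach the result."""
    dns_entries = {}
    for url in embedded_urls:
        host = urlparse(url).hostname
        if host and host not in dns_entries:
            try:
                dns_entries[host] = socket.gethostbyname(host)
            except OSError:
                pass  # resolution failed; the client falls back to normal DNS
    return {"objects": embedded_urls, "dns": dns_entries}

class LocalDnsTable:
    """Client-side table populated from the prefetch data package."""
    def __init__(self):
        self.entries = {}

    def absorb(self, package):
        # Strip the DNS portion of the package into local entries.
        self.entries.update(package.get("dns", {}))

    def lookup(self, host):
        # Serve the answer locally when possible, avoiding a DNS round trip.
        return self.entries.get(host) or socket.gethostbyname(host)

if __name__ == "__main__":
    package = build_prefetch_package(["http://example.com/style.css"])
    table = LocalDnsTable()
    table.absorb(package)
    print(table.lookup("example.com"))
```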
    • 9. Invention patent application
    • DEDICATED SHARED BYTE CACHE
    • US20100070570A1
    • 2010-03-18
    • US12557395
    • 2009-09-10
    • Peter Lepeska
    • Peter Lepeska
    • G06F15/16; G06F17/30
    • H04L67/2857; H04L67/289
    • The present invention relates to methods, apparatus, and systems for providing peer-to-peer network acceleration. The system includes a content server configured to transfer content based on received requests for content. The system further includes a proxy server coupled with the content server. The proxy server is configured to receive content from the content server and to forward the received content. Furthermore, the system includes client systems coupled with the proxy server. The client systems each include a personal byte cache and are configured to receive content from the proxy server, to store content in the personal byte caches, to synchronize the personal byte caches with each of the plurality of client systems' portions of a shared byte cache, and to retrieve content from the shared byte cache.
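The abstract above describes each client keeping a personal byte cache that is synchronized into that client's portion of a shared byte cache, from which any client can retrieve content a peer already holds. The rough sketch below illustrates that relationship; the classes SharedByteCache and ClientSystem and their methods are invented names, and the byte-level deduplication machinery of a real byte cache is not modeled.

```python
# Rough sketch of the personal / shared byte-cache relationship.

class SharedByteCache:
    def __init__(self):
        self.portions = {}                 # client_id -> {key: bytes}

    def sync(self, client_id, personal_entries):
        """Mirror one client's personal byte cache into its shared portion."""
        self.portions.setdefault(client_id, {}).update(personal_entries)

    def retrieve(self, key):
        """Return a cached block from any client's portion, if present."""
        for entries in self.portions.values():
            if key in entries:
                return entries[key]
        return None


class ClientSystem:
    def __init__(self, client_id, shared):
        self.client_id = client_id
        self.personal = {}                 # the personal byte cache
        self.shared = shared

    def receive(self, key, data):
        self.personal[key] = data          # content received from the proxy server
        self.shared.sync(self.client_id, self.personal)

    def fetch(self, key):
        return self.personal.get(key) or self.shared.retrieve(key)


if __name__ == "__main__":
    shared = SharedByteCache()
    a, b = ClientSystem("a", shared), ClientSystem("b", shared)
    a.receive("chunk-1", b"...bytes from the proxy...")
    print(b.fetch("chunk-1"))              # b reuses the block a already holds
```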
    • 10. Invention patent application
    • METHODS AND SYSTEMS FOR PEER-TO-PEER APP-LEVEL PERFORMANCE ENHANCING PROTOCOL (PEP)
    • US20090327412A1
    • 2009-12-31
    • US12491949
    • 2009-06-25
    • Peter Lepeska
    • Peter Lepeska
    • G06F15/16
    • H04L67/101; H04L67/1002; H04L67/2838
    • The present invention relates to methods, apparatus, and systems for providing peer-to-peer network acceleration. The system includes content servers and clients. Each of the clients is capable of functioning as a proxy server. A client generates a request for content, and the requesting client determines which of the content servers contains the requested content. The requesting client then determines that one of the clients is in a position to retrieve the requested content on the content server at lower latency than the requesting client. The client then functions as a proxy server for the requesting client, and the requesting client receives the requested content from the client acting as a proxy server.
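The abstract above turns on one selection step: the requesting client finds a peer that can reach the content server at lower latency than it can itself and uses that peer as an application-level proxy. The sketch below illustrates only that selection step; the function name, peer identifiers, and latency figures are made up for illustration, and the actual relaying of content is not shown.

```python
# Illustrative peer-selection step for peer-to-peer, app-level acceleration.

def select_proxy_peer(own_latency_ms, peer_latencies_ms):
    """Return the peer id to proxy through, or None to fetch directly."""
    if not peer_latencies_ms:
        return None
    best_peer = min(peer_latencies_ms, key=peer_latencies_ms.get)
    # Only relay through a peer if it is genuinely closer to the content server.
    return best_peer if peer_latencies_ms[best_peer] < own_latency_ms else None


if __name__ == "__main__":
    peer = select_proxy_peer(own_latency_ms=220,
                             peer_latencies_ms={"peer-1": 90, "peer-2": 150})
    print(peer or "fetch directly")   # -> "peer-1"
```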