    • 1. Invention application
    • VIRTUAL DATA STORAGE DEVICES AND APPLICATIONS OVER WIDE AREA NETWORKS
    • WO2011139443A1
    • 2011-11-10
    • PCT/US2011/030776
    • 2011-03-31
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; MCCANNE, Steven; DEMMER, Michael J.
    • WU, David Tze-Si; MCCANNE, Steven; DEMMER, Michael J.
    • G06F9/455
    • G06F3/0605; G06F3/0664; G06F3/067; G06F9/45558; G06F2009/45579; G06F2009/45595; H04L67/1097
    • A virtualization system provides virtualized servers at a branch network location. Virtualized servers are implemented using virtual machine applications within the virtualization system. Data storage for the virtualized servers, including storage of the virtual machine files, is consolidated at a data center network location. The virtual disks of the virtualized servers are mapped to physical data storage at the data center and accessed via a WAN using storage block-based protocols. The virtualization system accesses a storage block cache at the branch network location that includes storage blocks prefetched based on knowledge about the virtualized servers. The virtualization system can include a virtual LAN directing network traffic between the WAN, the virtualized servers, and branch location clients. The virtualized servers, virtual LAN, and virtual disk mapping can be configured remotely via a management application. The management application may use templates to create multiple instances of common branch location configurations.
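Abstract 1 describes a branch-side storage block cache that serves virtual disk reads locally and is warmed with blocks prefetched using knowledge about the virtualized servers. The following Python sketch illustrates only that caching and prefetch flow; the BlockCache class and the fetch_remote callback are hypothetical names for this example, not the patent's implementation.

    class BlockCache:
        """Branch-side cache of storage blocks backed by data-center storage over the WAN."""

        def __init__(self, fetch_remote):
            self._blocks = {}                  # block_id -> block bytes held at the branch
            self._fetch_remote = fetch_remote  # WAN call to the data center (assumed callback)

        def read(self, block_id):
            # Serve from the local cache when possible; otherwise fetch the
            # block from the data center over the WAN and keep a copy.
            if block_id not in self._blocks:
                self._blocks[block_id] = self._fetch_remote(block_id)
            return self._blocks[block_id]

        def prefetch(self, block_ids):
            # Warm the cache with blocks predicted from knowledge about the
            # virtualized servers (for example, a VM's boot-time block sequence).
            for block_id in block_ids:
                if block_id not in self._blocks:
                    self._blocks[block_id] = self._fetch_remote(block_id)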
    • 2. Invention application
    • VIRTUALIZED DATA STORAGE SYSTEM ARCHITECTURE
    • WO2010111312A2
    • 2010-09-30
    • PCT/US2010/028375
    • 2010-03-23
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; MCCANNE, Steven; DEMMER, Michael J.; GUPTA, Nitin
    • WU, David Tze-Si; MCCANNE, Steven; DEMMER, Michael J.; GUPTA, Nitin
    • G06F15/16
    • G06F17/30233; G06F3/0643; G06F3/0653; G06F3/067; G06F12/0862; G06F17/30132; G06F2212/6024
    • Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage; however, virtual storage arrays actually store data at the data center. Virtual storage arrays overcome bandwidth and latency limitations of the wide area network by predicting and prefetching storage blocks, which are then cached at the branch location. Virtual storage arrays leverage an understanding of the semantics and structure of high-level data structures associated with storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. Virtual storage arrays determine the association between requested storage blocks and corresponding high-level data structure entities to predict additional high-level data structure entities that are likely to be accessed. From this, the virtual storage array identifies the additional storage blocks for prefetching.
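Abstract 2 hinges on relating requested storage blocks to high-level data structure entities (such as files) and then predicting further entities likely to be accessed. A minimal Python sketch of that prediction step follows; the mapping structure and helper callbacks are assumptions made for illustration.

    def predict_prefetch_blocks(requested_block, block_to_entity, related_entities, entity_to_blocks):
        # Map the requested storage block to its high-level entity (e.g. a file).
        entity = block_to_entity.get(requested_block)
        if entity is None:
            return []
        # Predict related entities (e.g. the next files in a directory scan)
        # and translate them back into storage blocks to prefetch.
        prefetch = []
        for related in related_entities(entity):
            prefetch.extend(entity_to_blocks(related))
        return prefetch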
    • 3. Invention application
    • TRANSPARENT CLIENT-SERVER TRANSACTION ACCELERATOR
    • WO2006112844A1
    • 2006-10-26
    • PCT/US2005/013269
    • 2005-04-18
    • RIVERBED TECHNOLOGY, INC.; MCCANNE, Steven; DEMMER, Michael J.; JAIN, Arvind; WU, David Tze-Si
    • MCCANNE, Steven; DEMMER, Michael J.; JAIN, Arvind; WU, David Tze-Si
    • H04L29/06
    • H04L29/06; H04L67/42
    • In a network that conveys requests from clients to servers and responses from servers to clients, a network transaction accelerator accelerates transactions involving data transfer between at least one client and at least one server. It comprises a client-side engine, a server-side engine, and a transaction predictor configured to predict, based on past transactions, which transactions are likely to occur in the future between the client and server. The transaction predictor might be in the server-side engine, the client-side engine, or both. The client-side engine receives indications of requests from the client and includes a transaction buffer for storing results of predicted transactions received from the server or the server-side engine ahead of receipt of a corresponding request, and a collator for collating the requests from the client with the stored or received results; a request and a response that are matched by the collator are identified, and the matched response is provided to the client in response to the matched request. The server-side engine receives indications of transactions, including requests and responses, and conveys requests to the server in response to actual or predicted transactions.
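Abstract 3 centers on a collator that matches client requests against results of predicted transactions held in a transaction buffer, falling back to a real round trip on a miss. The sketch below, with hypothetical request keys and a forward_to_server callback, shows only that matching logic.

    class Collator:
        def __init__(self):
            self._buffer = {}   # predicted request key -> prefetched response

        def store_predicted(self, request_key, response):
            # Result of a predicted transaction, pushed ahead of the real request.
            self._buffer[request_key] = response

        def handle_request(self, request_key, forward_to_server):
            # Serve from the transaction buffer on a prediction hit; otherwise
            # fall back to an actual round trip toward the server-side engine.
            if request_key in self._buffer:
                return self._buffer.pop(request_key)
            return forward_to_server(request_key)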
    • 5. Invention application
    • AUTOMATIC FRAMING SELECTION
    • WO2007016236A2
    • 2007-02-08
    • PCT/US2006/029181
    • 2006-07-26
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; LASSEN, Soren; SUBBANA, Kartik; GUPTA, Nitin; KESWANI, Vivasvat
    • WU, David Tze-Si; LASSEN, Soren; SUBBANA, Kartik; GUPTA, Nitin; KESWANI, Vivasvat
    • H04J11/00
    • H04L47/27; H04L12/28; H04L43/16; H04L47/26; H04L69/16; H04L69/163
    • Network traffic is monitored and an optimal framing heuristic is automatically determined and applied. Framing heuristics specify different rules for framing network traffic. While a framing heuristic is applied to the network traffic, alternative framing heuristics are speculatively evaluated for the network traffic. The results of these evaluations are used to rank the framing heuristics. The framing heuristic with the best rank is selected for framing subsequent network traffic. Each client/server traffic flow may have a separate framing heuristic. The framing heuristics may be deterministic based on byte count and/or time or based on traffic characteristics that indicate a plausible point for framing to occur. The choice of available framing heuristics may be determined partly by manual configuration, which specifies which framing heuristics are available, and partly by automatic processes, which determine the best framing heuristic to apply to the current network traffic from the set of available framing heuristics.
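Abstract 5 describes speculatively scoring alternative framing heuristics on observed traffic and selecting the best-ranked one for subsequent flows. The following sketch assumes a scoring callback (for example, the compression ratio achieved with a heuristic's frame boundaries); it illustrates only the ranking step, not the patented method itself.

    def select_framing_heuristic(heuristics, traffic_sample, score):
        # score(heuristic, traffic_sample) -> number, higher is better (assumed metric).
        # Evaluate every candidate speculatively on the same traffic, rank them,
        # and pick the best one for framing subsequent network traffic.
        ranked = sorted(heuristics, key=lambda h: score(h, traffic_sample), reverse=True)
        return ranked[0]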

    • 6. Invention application
    • SERIAL CLUSTERING
    • WO2007055757A2
    • 2007-05-18
    • PCT/US2006/029082
    • 2006-07-26
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; GUPTA, Nitin; LY, Kand
    • WU, David Tze-Si; GUPTA, Nitin; LY, Kand
    • H04J1/16
    • H04L47/12; H04L47/10; H04L47/125; H04L69/16; H04L69/162; H04L69/165; H04L69/32
    • Serial clustering uses two or more network devices connected in series via a local and/or wide-area network to provide additional capacity when network traffic exceeds the processing capabilities of a single network device. When a first network device reaches its capacity limit, any excess network traffic beyond that limit is passed through the first network device unchanged. A network device connected in series with the first network device intercepts and processes the excess network traffic, provided that it has sufficient processing capacity. Additional network devices can process remaining network traffic in a similar manner until all of the excess network traffic has been processed or until there are no more additional network devices. Network devices may use rules to determine how to handle network traffic. Rules may be based on the attributes of received network packets, attributes of the network device, or attributes of the network.
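Abstract 6 describes each device in the series processing traffic up to its capacity and passing the excess through unchanged for the next device to intercept. A toy Python sketch of that pass-through behaviour follows; the device objects, their capacity attribute, and the packet model are assumptions for the example.

    def process_in_series(packets, devices):
        remaining = list(packets)
        for device in devices:
            # Each device takes what it has capacity for; the excess passes
            # through unchanged to the next device in the series.
            taken, remaining = remaining[:device.capacity], remaining[device.capacity:]
            device.process(taken)
            if not remaining:
                break
        return remaining   # traffic that no device had capacity to process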
    • 7. Invention application
    • NETWORK TRAFFIC PROCESSING PIPELINE FOR VIRTUAL MACHINES IN A NETWORK DEVICE
    • WO2011002575A1
    • 2011-01-06
    • PCT/US2010/037538
    • 2010-06-04
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; LY, Kand; TRAC, Lap Nathan; POTASHNIK, Alexei
    • WU, David Tze-Si; LY, Kand; TRAC, Lap Nathan; POTASHNIK, Alexei
    • G06F9/455
    • H04L12/4625; G06F9/45558; G06F2009/45595; H04L49/70
    • Network devices include hosted virtual machines and virtual machine applications. Hosted virtual machines and their applications implement additional functions and services in network devices. Network devices include data taps for directing network traffic to hosted virtual machines and allowing hosted virtual machines to inject network traffic. Network devices include unidirectional data flow specifications, referred to as hyperswitches. Each hyperswitch is associated with a hosted virtual machine and receives network traffic received by the network device from a single direction. Each hyperswitch processes network traffic according to rules and rule criteria. A hosted virtual machine can be associated with multiple hyperswitches, thereby independently specifying the data flow of network traffic to and from the hosted virtual machine from multiple networks. The network device architecture also enables the communication of additional information between the network device and one or more virtual machine applications using an extended non-standard network protocol.
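Abstract 7 introduces hyperswitches: unidirectional rule lists that decide where traffic arriving from one direction goes relative to a hosted virtual machine. The sketch below is only a guess at what such a rule list might look like in Python; the rule shape and the action names are invented for illustration.

    class Hyperswitch:
        """Unidirectional data-flow specification for one hosted virtual machine."""

        def __init__(self, rules):
            # Each rule is a (criteria, action) pair; criteria(packet) -> bool,
            # action is e.g. "to_vm", "forward", or "drop" (illustrative names).
            self.rules = rules

        def dispatch(self, packet):
            for criteria, action in self.rules:
                if criteria(packet):
                    return action
            return "forward"   # default: pass the traffic through unchanged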
    • 8. Invention application
    • RULES-BASED TRANSACTION PREFETCHING USING CONNECTION END-POINT PROXIES
    • WO2006099542A2
    • 2006-09-21
    • PCT/US2006/009544
    • 2006-03-15
    • RIVERBED TECHNOLOGY, INC.; WU, David Tze-Si; KESWANI, Vivasvat; LARSEN, Case
    • WU, David Tze-Si; KESWANI, Vivasvat; LARSEN, Case
    • G06F15/16
    • H04L67/1095; H04L67/10
    • Network proxies reduce server latency in response to a series of requests from client applications. Network proxies intercept messages between clients and a server. Intercepted client requests are compared with rules. When a client request matches a rule, additional request messages are forwarded to the server on behalf of the client application. In response to the additional request messages, the server provides corresponding response messages. A network proxy intercepts and caches the response messages. Subsequent client requests are intercepted by the network application proxy and compared with the cached messages. If a cached response message corresponds to a client request message, the response message is returned to the client application immediately instead of re-requesting the same information from the server. A server-side network proxy can compare client requests with the rules and send additional request messages. The corresponding response messages can be forwarded to a client-side network proxy for caching.
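Abstract 8 describes a proxy that, when an intercepted request matches a rule, issues the rule's follow-up requests on the client's behalf and caches the responses for later requests. A minimal sketch follows; it assumes hashable request objects, a rules mapping, and a send_to_server callback, all invented for the example.

    class PrefetchingProxy:
        def __init__(self, rules, send_to_server):
            self.rules = rules        # request -> list of additional requests to prefetch
            self.send = send_to_server
            self.cache = {}           # prefetched request -> cached response

        def handle(self, request):
            # A prior prefetch may already have answered this request.
            if request in self.cache:
                return self.cache.pop(request)
            response = self.send(request)
            # If the request matches a rule, prefetch the follow-up requests
            # on the client's behalf and cache their responses.
            for extra in self.rules.get(request, []):
                self.cache[extra] = self.send(extra)
            return response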