    • 1. Invention application
    • Approximating Node-Weighted Steiner Network of Terminals
    • Publication No.: US20100128631A1
    • Publication date: 2010-05-27
    • Application No.: US12276980
    • Filing date: 2008-11-24
    • Inventors/Applicants: Mohammad Taghi Hajiaghayi; Erik D. Demaine; Philip N. Klein
    • IPC: H04L12/28
    • CPC: H04L45/00; H04L45/12
    • Abstract: According to one method for approximating a network of terminals, a graph comprising nodes and edges connecting at least some of the nodes is received. The nodes include terminals and non-terminal nodes. The non-terminal nodes are each associated with a weight. The terminals are each initialized to a value. The values of the terminals are incremented by a given amount until the values of the terminals reach a sufficient amount to acquire at least one of the non-terminal nodes that connects at least two of the terminals based on the weight of the at least one of the non-terminal nodes. Upon the values of the terminals reaching the sufficient amount, the at least one of the non-terminal nodes and the edges connecting the at least one of the non-terminal nodes to the at least two of the terminals are acquired to form a connected component in the network of terminals.
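The abstract above describes terminal values that grow in lockstep until they jointly cover the weight of a non-terminal node connecting those terminals. The Python sketch below is a deliberately simplified illustration of that value-growing idea, not the patented algorithm: it only considers non-terminal nodes directly adjacent to two or more terminals, and all names (acquire_nodes, delta, max_rounds) are hypothetical.

    # Simplified sketch of the value-growing idea in the abstract (illustrative only).
    from collections import defaultdict

    def acquire_nodes(edges, weights, terminals, delta=1.0, max_rounds=1000):
        """Grow terminal values until a non-terminal node's weight is covered.

        edges     -- iterable of (u, v) pairs (undirected)
        weights   -- dict mapping each non-terminal node to its weight
        terminals -- iterable of terminal node identifiers
        Returns the acquired non-terminal nodes and the connecting edges bought.
        """
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        value = {t: 0.0 for t in terminals}     # each terminal starts at a value
        bought_nodes, bought_edges = set(), set()

        for _ in range(max_rounds):
            # Increment every terminal's value by the given amount.
            for t in value:
                value[t] += delta

            for node, w in weights.items():
                if node in bought_nodes:
                    continue
                touching = [t for t in adj[node] if t in value]
                # The terminals' values are "sufficient" once they cover the node weight.
                if len(touching) >= 2 and sum(value[t] for t in touching) >= w:
                    bought_nodes.add(node)
                    bought_edges.update((t, node) for t in touching)

            if len(bought_nodes) == len(weights):
                break

        return bought_nodes, bought_edges

    if __name__ == "__main__":
        # Two terminals t1, t2 joined through a weighted non-terminal node "a".
        edges = [("t1", "a"), ("a", "t2")]
        print(acquire_nodes(edges, weights={"a": 5.0}, terminals=["t1", "t2"]))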
    • 2. Invention grant
    • Approximating node-weighted Steiner network of terminals
    • Publication No.: US07933224B2
    • Publication date: 2011-04-26
    • Application No.: US12276980
    • Filing date: 2008-11-24
    • Inventors/Applicants: Mohammad Taghi Hajiaghayi; Erik D. Demaine; Philip N. Klein
    • IPC: H04L12/28
    • CPC: H04L45/00; H04L45/12
    • Abstract: According to one method for approximating a network of terminals, a graph comprising nodes and edges connecting at least some of the nodes is received. The nodes include terminals and non-terminal nodes. The non-terminal nodes are each associated with a weight. The terminals are each initialized to a value. The values of the terminals are incremented by a given amount until the values of the terminals reach a sufficient amount to acquire at least one of the non-terminal nodes that connects at least two of the terminals based on the weight of the at least one of the non-terminal nodes. Upon the values of the terminals reaching the sufficient amount, the at least one of the non-terminal nodes and the edges connecting the at least one of the non-terminal nodes to the at least two of the terminals are acquired to form a connected component in the network of terminals.
    • 5. Invention application
    • Network Aware Forward Caching
    • Publication No.: US20130042009A1
    • Publication date: 2013-02-14
    • Application No.: US13650629
    • Filing date: 2012-10-12
    • Inventors/Applicants: Alexandre Gerber; Oliver Spatscheck; Dan Pei; Mohammad Taghi Hajiaghayi; Jeffrey Erman
    • IPC: G06F15/173
    • CPC: H04L67/2842; H04L67/2852
    • Abstract: A network includes a cache server and a network aware server that operates to determine an optimization between a cost of retrieving content from a communication network and a cost of caching content at the cache server. The optimization is determined as a minimum of a sum of a transit cost, a backbone cost, and a caching cost. The transit cost includes a money cost per data unit. The backbone cost includes a money cost per data unit and time unit. The caching cost includes a money cost per server unit. In response to determining the optimization, the network aware server sends a content identifier to the cache server, and the cache server receives the content identifier, determines a source of a content item, and if the source is the same as the content identifier, then cache the content item.
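The forward-caching abstract defines the optimization as the minimum of a sum of transit, backbone, and caching costs, each with the stated per-unit money cost. The sketch below is a toy cost comparison under those units; the rate names and the decision rule (cache an item when serving it from cache is cheaper than fetching it repeatedly) are assumptions for illustration, not the patented optimization.

    # Toy cost comparison for the forward-caching trade-off (assumed rates and rule).
    from dataclasses import dataclass

    @dataclass
    class CostRates:
        transit_per_gb: float        # money cost per data unit fetched externally
        backbone_per_gb_hour: float  # money cost per data unit and time unit
        caching_per_server: float    # money cost per server unit

    def total_cost(transit_gb, backbone_gb, hours, servers, rates):
        """Sum of transit, backbone, and caching cost, as in the abstract."""
        return (transit_gb * rates.transit_per_gb
                + backbone_gb * hours * rates.backbone_per_gb_hour
                + servers * rates.caching_per_server)

    def should_cache(item_gb, expected_requests, hours, server_share, rates):
        """Cache an item when that choice minimizes the summed cost."""
        cost_fetch_always = total_cost(item_gb * expected_requests,
                                       item_gb * expected_requests, hours, 0, rates)
        cost_cache = total_cost(item_gb,  # fetched once to fill the cache
                                item_gb, hours, server_share, rates)
        return cost_cache < cost_fetch_always

    if __name__ == "__main__":
        rates = CostRates(transit_per_gb=0.05, backbone_per_gb_hour=0.001,
                          caching_per_server=0.02)
        print(should_cache(item_gb=2.0, expected_requests=50, hours=24,
                           server_share=0.1, rates=rates))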
    • 6. Invention grant
    • Network aware forward caching
    • Publication No.: US08312141B2
    • Publication date: 2012-11-13
    • Application No.: US13333515
    • Filing date: 2011-12-21
    • Inventors/Applicants: Alexandre Gerber; Oliver Spatscheck; Dan Pei; Mohammad Taghi Hajiaghayi; Jeffrey Erman
    • IPC: G06F15/173
    • CPC: H04L67/2842; H04L67/2852
    • Abstract: A network includes a cache server and a network aware server that operates to determine an optimization between a cost of retrieving content from a communication network and a cost of caching content at the cache server. The optimization is determined as a minimum of a sum of a transit cost, a backbone cost, and a caching cost. The transit cost includes a money cost per data unit. The backbone cost includes a money cost per data unit and time unit. The caching cost includes a money cost per server unit. In response to determining the optimization, the network aware server sends a content identifier to the cache server, and the cache server receives the content identifier, determines a source of a content item, and if the source is the same as the content identifier, then cache the content item.
    • 7. Invention application
    • Network Aware Forward Caching
    • Publication No.: US20120096140A1
    • Publication date: 2012-04-19
    • Application No.: US13333515
    • Filing date: 2011-12-21
    • Inventors/Applicants: Alexandre Gerber; Oliver Spatscheck; Dan Pei; Mohammad Taghi Hajiaghayi; Jeffrey Erman
    • IPC: G06F15/16
    • CPC: H04L67/2842; H04L67/2852
    • Abstract: A network includes a cache server and a network aware server that operates to determine an optimization between a cost of retrieving content from a communication network and a cost of caching content at the cache server. The optimization is determined as a minimum of a sum of a transit cost, a backbone cost, and a caching cost. The transit cost includes a money cost per data unit. The backbone cost includes a money cost per data unit and time unit. The caching cost includes a money cost per server unit. In response to determining the optimization, the network aware server sends a content identifier to the cache server, and the cache server receives the content identifier, determines a source of a content item, and if the source is the same as the content identifier, then cache the content item.
    • 9. Invention application
    • System and Method for Assigning Requests in a Content Distribution Network
    • Publication No.: US20130042021A1
    • Publication date: 2013-02-14
    • Application No.: US13653043
    • Filing date: 2012-10-16
    • Inventors/Applicants: Mohammad Taghi Hajiaghayi; Mohammad Hossein Bateni
    • IPC: G06F15/173
    • CPC: G06F15/173; H04L67/2842; H04L67/288
    • Abstract: A method includes receiving demand information from edge routers, estimating an optimal request distribution based on the demand information using a bicriteria approximation algorithm, wherein initial programming states for the estimation are specified by (u, F, D, FS, DS, Fexp, Fimp), where u is a current node, F is a vector representing an available facility for large capacity, D is a vector representing an outsourced large client, FS is an amount of cache server capacity offered to small clients, DS is a total demand of outsourced small clients, Fexp is an index of a cache server being exported from a subtree, and Fimp is an index of another cache server of another subtree that is being utilized, and providing each of the edge routers with anycast route information for the cache servers.
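The abstract for this application names a programming state (u, F, D, FS, DS, Fexp, Fimp) used by the bicriteria approximation. The sketch below shows only one plausible way to represent that state and memoize a cost for it; the container types and the helper function are assumptions made for illustration, not the claimed method.

    # Sketch of the dynamic-programming state named in the abstract (assumed types).
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass(frozen=True)
    class ProgrammingState:
        u: str                      # current node
        F: Tuple[int, ...]          # available facilities for large capacity
        D: Tuple[int, ...]          # outsourced large clients
        FS: float                   # cache server capacity offered to small clients
        DS: float                   # total demand of outsourced small clients
        Fexp: Optional[int] = None  # index of a cache server exported from a subtree
        Fimp: Optional[int] = None  # index of a cache server used from another subtree

    # A memo table keyed by the state could hold the best cost found so far
    # for that state during a bicriteria search.
    memo: dict = {}

    def best_cost(state: ProgrammingState) -> float:
        """Return a cached cost for a state, defaulting to +inf if unexplored."""
        return memo.get(state, float("inf"))

    if __name__ == "__main__":
        s = ProgrammingState(u="edge-router-1", F=(1, 0), D=(0, 1), FS=10.0, DS=3.5)
        memo[s] = 42.0
        print(best_cost(s))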