    • 3. Invention application
    • Method and apparatus for performing template-based prefix caching for use in video-on-demand applications
    • Publication number: US20100095331A1
    • Publication date: 2010-04-15
    • Application number: US12287815
    • Filing date: 2008-10-14
    • Inventors: Volker Friedrich Hilt; Markus Andreas Hofmann
    • Applicants: Volker Friedrich Hilt; Markus Andreas Hofmann
    • IPC: H04N7/173
    • CPC: H04N21/23106; H04N21/2225; H04N21/47202
    • Abstract: A method and apparatus for performing template-based prefix caching advantageously identifies common prefixes (i.e., initial video segments) in video titles, stores common prefixes only once in a prefix cache, and uses these common prefixes when serving requests for video content. This advantageously enables a prefix cache to scale to a large number of video titles, since the cache stores each common prefix only once. A new video title that uses an already existing prefix may be advantageously added without requiring additional storage in the prefix cache. Template-based prefix caching also advantageously reduces the bandwidth required to distribute prefixes when new titles are ingested into the system: if the required template is already available in the prefix cache, prefix-caching is enabled instantly for this title and no additional bandwidth is required to distribute the prefix.
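The abstract above describes the bookkeeping precisely enough to sketch: store each common prefix once under a content hash, and map every title onto the shared template. The following is a minimal illustrative sketch of that idea; the `TemplatePrefixCache` class and its method names are hypothetical, not taken from the patent.

```python
# Illustrative sketch only: the patent describes template-based prefix
# caching abstractly; all names here are invented for illustration.
import hashlib


class TemplatePrefixCache:
    """Stores each common prefix (initial video segment) exactly once,
    keyed by a content hash, and maps titles to the shared template."""

    def __init__(self):
        self._templates = {}          # template_id -> prefix bytes (stored once)
        self._title_to_template = {}  # title -> template_id

    def ingest(self, title, prefix_bytes):
        """Register a title. Returns True when its prefix already exists
        as a template, i.e. no new storage or distribution bandwidth is
        needed and prefix caching is enabled for the title instantly."""
        template_id = hashlib.sha256(prefix_bytes).hexdigest()
        already_cached = template_id in self._templates
        if not already_cached:
            self._templates[template_id] = prefix_bytes
        self._title_to_template[title] = template_id
        return already_cached

    def serve_prefix(self, title):
        """Return the cached prefix for a title, or None on a miss."""
        template_id = self._title_to_template.get(title)
        return self._templates.get(template_id)

    def stored_bytes(self):
        """Total storage used: each shared prefix counted once."""
        return sum(len(p) for p in self._templates.values())
```

For example, two titles that begin with the same studio trailer would consume prefix storage once, and the second title's ingest requires no prefix distribution at all.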
    • 4. Invention grant
    • Method and apparatus for performing template-based prefix caching for use in video-on-demand applications
    • Publication number: US08266661B2
    • Publication date: 2012-09-11
    • Application number: US12287815
    • Filing date: 2008-10-14
    • Inventors: Volker Friedrich Hilt; Markus Andreas Hofmann
    • Applicants: Volker Friedrich Hilt; Markus Andreas Hofmann
    • IPC: H04N7/173
    • CPC: H04N21/23106; H04N21/2225; H04N21/47202
    • Abstract: A method and apparatus for performing template-based prefix caching advantageously identifies common prefixes (i.e., initial video segments) in video titles, stores common prefixes only once in a prefix cache, and uses these common prefixes when serving requests for video content. This advantageously enables a prefix cache to scale to a large number of video titles, since the cache stores each common prefix only once. A new video title that uses an already existing prefix may be advantageously added without requiring additional storage in the prefix cache. Template-based prefix caching also advantageously reduces the bandwidth required to distribute prefixes when new titles are ingested into the system: if the required template is already available in the prefix cache, prefix-caching is enabled instantly for this title and no additional bandwidth is required to distribute the prefix.
    • 5. Invention application
    • QUALITY OF SERVICE AWARE RATE THROTTLING OF DELAY TOLERANT TRAFFIC FOR ENERGY EFFICIENT ROUTING
    • Publication number: US20120324102A1
    • Publication date: 2012-12-20
    • Application number: US13592770
    • Filing date: 2012-08-23
    • Inventors: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • Applicants: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • IPC: G06F15/173
    • CPC: H04L47/41; Y02D30/20; Y02D50/30
    • Abstract: The invention is directed to energy-efficient network processing of delay tolerant data packet traffic. Embodiments of the invention determine if an aggregate of time critical traffic flow rates and minimum rates for meeting QoS requirements of delay tolerant traffic flows exceeds a combined optimal rate of packet processing engines of a network processor. In the affirmative case, embodiments set the processing rate of individual packet processing engines to a minimum rate, such that the cumulative rate of the packet processing engines meets the aggregate rate, and schedule the delay tolerant flows to meet their respective minimum rates. Advantageously, by throttling the processing rate of only delay tolerant traffic, energy consumption of network processors can be reduced while at the same time QoS requirements of the delay tolerant traffic and time critical traffic can be met.
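The decision rule in this abstract can be sketched as a small function: compare the aggregate demand (time-critical rates plus the minimum QoS rates of delay-tolerant flows) against the engines' combined rate, then set per-engine rates just high enough that their cumulative rate meets that demand. This is an interpretation only; the uniform proportional scaling across engines is an assumption, and `set_engine_rates` and its parameters are invented names, not from the patent claims.

```python
# Hedged sketch of the rate decision described in the abstract.
# Flow and engine representations are hypothetical illustrations.

def set_engine_rates(time_critical_rates, delay_tolerant_min_rates,
                     engine_max_rates):
    """Return per-engine processing rates (same order as engine_max_rates).

    The aggregate demand is the sum of time-critical flow rates plus the
    minimum rates needed to meet QoS of delay-tolerant flows.  Each engine
    is scaled by a common factor so the engines' cumulative rate exactly
    meets the aggregate: engines are throttled down when demand is low
    (saving energy) and raised toward capacity when demand is high.
    """
    aggregate = sum(time_critical_rates) + sum(delay_tolerant_min_rates)
    combined = sum(engine_max_rates)
    if combined <= 0:
        raise ValueError("no processing capacity available")
    if aggregate > combined:
        raise ValueError("demand exceeds total engine capacity")
    factor = aggregate / combined
    return [rate * factor for rate in engine_max_rates]
```

For example, two engines capable of 100 units each facing 120 units of time-critical traffic and 60 units of delay-tolerant minimum demand would each be throttled to 90 units, meeting the 180-unit aggregate with no excess processing.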
    • 6. Invention grant
    • Quality of service aware rate throttling of delay tolerant traffic for energy efficient routing
    • Publication number: US08295180B2
    • Publication date: 2012-10-23
    • Application number: US12794268
    • Filing date: 2010-06-04
    • Inventors: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • Applicants: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • IPC: H04L12/26; H04L12/28
    • CPC: H04L47/41; Y02D30/20; Y02D50/30
    • Abstract: The invention is directed to energy-efficient network processing of delay tolerant data packet traffic. Embodiments of the invention determine if an aggregate of time critical traffic flow rates and minimum rates for meeting QoS requirements of delay tolerant traffic flows exceeds a combined optimal rate of packet processing engines of a network processor. In the affirmative case, embodiments set the processing rate of individual packet processing engines to a minimum rate, such that the cumulative rate of the packet processing engines meets the aggregate rate, and schedule the delay tolerant flows to meet their respective minimum rates. Advantageously, by throttling the processing rate of only delay tolerant traffic, energy consumption of network processors can be reduced while at the same time QoS requirements of the delay tolerant traffic and time critical traffic can be met.
    • 7. Invention application
    • NETWORK BASED PEER-TO-PEER TRAFFIC OPTIMIZATION
    • Publication number: US20110307538A1
    • Publication date: 2011-12-15
    • Application number: US12813026
    • Filing date: 2010-06-10
    • Inventors: Ivica Rimac; Volker Friedrich Hilt
    • Applicants: Ivica Rimac; Volker Friedrich Hilt
    • IPC: G06F15/16
    • CPC: H04L29/08846; H04L67/104; H04L67/1046
    • Abstract: A peer-to-peer accelerator system is disclosed for reducing reverse link bandwidth bottlenecking of peer-to-peer content transfers. The peer-to-peer accelerator system contains a peer-to-peer proxy which resides in the core of the network. When a peer-to-peer bootstrap message from an asymmetrically connected client occurs, the proxy intercepts the message and instantiates an agent which will perform file transfers on the asymmetrically connected client's behalf, thereby eliminating the need for the client to effect file content transfers over the reverse link. The peer-to-peer accelerator system is particularly useful for overcoming the bottlenecking and reverse link contention problems of peer-to-peer file transfer systems known in the art.
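The interception flow described above (the core-network proxy sees a bootstrap message from an asymmetrically connected client, then instantiates an agent that uploads on the client's behalf) can be sketched roughly as follows. Message fields and class names are hypothetical illustrations, not the patent's actual protocol.

```python
# Sketch of the interception flow only; the message format and the
# PeerToPeerProxy class are invented for illustration.

class PeerToPeerProxy:
    """Core-network proxy that intercepts peer-to-peer bootstrap messages
    from asymmetrically connected clients and spawns an agent to perform
    transfers on the client's behalf, keeping uploads off the slow
    reverse link."""

    def __init__(self):
        self.agents = {}  # client_id -> agent state

    def on_message(self, msg):
        # Intercept only bootstrap messages from clients whose uplink
        # (reverse link) is much slower than their downlink.
        if msg.get("type") == "bootstrap" and msg.get("asymmetric"):
            client = msg["client_id"]
            # The agent serves file pieces to other peers so the client
            # never has to upload content over its reverse link.
            self.agents[client] = {"serving_for": client, "pieces": set()}
            return "agent_instantiated"
        # All other traffic passes through the proxy untouched.
        return "forwarded"
```

A symmetric client's messages simply pass through, so only the clients that would otherwise bottleneck on the reverse link incur the cost of an agent.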
    • 8. Invention application
    • QUALITY OF SERVICE AWARE RATE THROTTLING OF DELAY TOLERANT TRAFFIC FOR ENERGY EFFICIENT ROUTING
    • Publication number: US20110299392A1
    • Publication date: 2011-12-08
    • Application number: US12794268
    • Filing date: 2010-06-04
    • Inventors: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • Applicants: Uichin Lee; Ivica Rimac; Volker Friedrich Hilt
    • IPC: H04L12/26
    • CPC: H04L47/41; Y02D30/20; Y02D50/30
    • Abstract: The invention is directed to energy-efficient network processing of delay tolerant data packet traffic. Embodiments of the invention determine if an aggregate of time critical traffic flow rates and minimum rates for meeting QoS requirements of delay tolerant traffic flows exceeds a combined optimal rate of packet processing engines of a network processor. In the affirmative case, embodiments set the processing rate of individual packet processing engines to a minimum rate, such that the cumulative rate of the packet processing engines meets the aggregate rate, and schedule the delay tolerant flows to meet their respective minimum rates. Advantageously, by throttling the processing rate of only delay tolerant traffic, energy consumption of network processors can be reduced while at the same time QoS requirements of the delay tolerant traffic and time critical traffic can be met.