    • 2. Invention patent application
    • Title: RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM
    • Publication number: US20120311598A1
    • Publication date: 2012-12-06
    • Application number: US13151075
    • Filing date: 2011-06-01
    • Inventors: Yariv BACHAR, Ron EDELSTEIN, Oded SONIN
    • IPC: G06F9/50
    • CPC: G06F9/50, G06F9/5083, G06F2209/504, Y02D10/22
    • Abstract: Exemplary method, system, and computer program product embodiments for resource allocation of a plurality of resources for a dual activity system by a processor device, are provided. In one embodiment, by way of example only, each of the activities may be started at a static quota. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the at least one of the dual activities is rejected. In response to the rejection of the resource request for the alternative one of the at least one of the dual activities, a resource boundary for the at least one of the dual activities may be reduced, and a wait after decrease mode may be commenced until a current resource usage is one of less than and equal to the reduced resource boundary.
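The abstract above can be read as a grow-until-rejection policy: both activities start at a static quota, one activity's boundary is raised on demand while the shared pool still has room, and a rejection of the peer activity's request causes the grown boundary to be reduced and a wait-after-decrease phase to run until usage drains back under the reduced boundary. The Python sketch below is only one plausible reading of that wording, not the patented implementation; the pool size, the shrink step, and the "backup" and "restore" activity names are invented for illustration.

```python
# Hypothetical sketch of the dual-activity allocation policy described in
# the abstract (not IBM's implementation). Two activities share one pool,
# each starts at a static quota, a boundary grows on demand while the pool
# allows it, and a rejection of the peer's request shrinks the grown
# boundary and starts a wait-after-decrease phase.

TOTAL_POOL = 100                 # assumed size of the shared resource pool
STATIC_QUOTA = TOTAL_POOL // 2   # each activity starts at this static quota
SHRINK_STEP = 10                 # assumed decrease step applied on rejection


class Activity:
    def __init__(self) -> None:
        self.boundary = STATIC_QUOTA
        self.usage = 0
        self.wait_after_decrease = False


class DualActivityAllocator:
    def __init__(self) -> None:
        self.activities = {"backup": Activity(), "restore": Activity()}

    def _free(self) -> int:
        return TOTAL_POOL - sum(a.usage for a in self.activities.values())

    def request(self, name: str, amount: int) -> bool:
        me = self.activities[name]
        other = next(a for key, a in self.activities.items() if key != name)

        # Leave wait-after-decrease once usage has drained to the boundary.
        if me.wait_after_decrease and me.usage <= me.boundary:
            me.wait_after_decrease = False

        if self._free() >= amount:
            if me.usage + amount <= me.boundary:
                me.usage += amount
                return True
            if not me.wait_after_decrease:
                # Grow the boundary to cover the request while the pool allows it.
                me.boundary = me.usage + amount
                me.usage += amount
                return True

        # Rejection: pull back the peer that grew past its static quota and
        # make it wait until its usage falls to the reduced boundary.
        if other.boundary > STATIC_QUOTA:
            other.boundary = max(STATIC_QUOTA, other.boundary - SHRINK_STEP)
            other.wait_after_decrease = True
        return False

    def release(self, name: str, amount: int) -> None:
        self.activities[name].usage -= amount
```

The growth rule here simply stretches the boundary to cover the current request; the abstract leaves the exact growth and shrink increments open, so both constants above are placeholders.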
    • 5. Invention patent application
    • Title: DEDUPLICATED DATA PROCESSING RATE CONTROL
    • Publication number: US20110040951A1
    • Publication date: 2011-02-17
    • Application number: US12539085
    • Filing date: 2009-08-11
    • Inventors: Shay H. AKIRAV, Ron ASHER, Yariv BACHAR, Lior KLIPPER, Oded SONIN
    • IPC: G06F15/76, G06F9/30
    • CPC: G06F17/30159, G06F3/0641, G06F17/30162, H04L47/10
    • Abstract: Various embodiments for deduplicated data processing rate control using at least one processor device in a computing environment are provided. A plurality of workers is configured for parallel processing of deduplicated data entities in a plurality of chunks. The deduplicated data processing rate is regulated using a rate control mechanism. The rate control mechanism incorporates a debt/credit algorithm specifying which of the plurality of workers processing the deduplicated data entities must wait for each of a plurality of calculated required sleep times. The rate control mechanism is adapted to limit a data flow rate based on a penalty acquired during a last processing of one of the plurality of chunks in a retroactive manner, and further adapted to operate on at least one vector representation of at least one limit specification to accommodate a variety of available dimensions corresponding to the at least one limit specification.
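US20110040951A1 (whose abstract reappears under US20120215748A1 below) regulates throughput with a debt/credit algorithm: parallel workers report the cost of the chunk they just finished, and the controller turns any excess over a multi-dimensional limit vector into a required sleep time, charged retroactively after the chunk has already been processed. The sketch below is a hedged approximation of that idea, not the patented code; the limit dimensions ("bytes" and "chunks" per second) and every name in it are illustrative.

```python
import threading
import time

# Hedged sketch of a debt/credit rate controller in the spirit of the
# abstract, not the patented code. Workers process chunks in parallel,
# report the cost of the chunk they just finished, and receive a sleep
# time whenever the measured rate exceeds a per-dimension limit vector.

class DebtCreditRateController:
    def __init__(self, limits: dict[str, float]) -> None:
        # limits is a vector of per-second caps, e.g. {"bytes": 50e6, "chunks": 200}.
        self.limits = limits
        self.debt = {dim: 0.0 for dim in limits}   # seconds owed per dimension
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def charge(self, cost: dict[str, float]) -> float:
        """Charge the cost of one finished chunk; return the required sleep time."""
        with self.lock:
            now = time.monotonic()
            elapsed = now - self.last
            self.last = now
            sleep_needed = 0.0
            for dim, limit in self.limits.items():
                # Credit for the wall-clock time that passed, debt for the
                # work just done; the penalty is retroactive by construction.
                self.debt[dim] = max(0.0, self.debt[dim] - elapsed) + cost.get(dim, 0.0) / limit
                sleep_needed = max(sleep_needed, self.debt[dim])
            return sleep_needed


def process(chunk: bytes) -> int:
    # Stand-in for the actual deduplication work on one chunk.
    return len(chunk)


def worker(controller: DebtCreditRateController, chunks: list[bytes]) -> None:
    for chunk in chunks:
        size = process(chunk)
        # Sleep only after the chunk is done, as dictated by the controller.
        time.sleep(controller.charge({"bytes": size, "chunks": 1}))
```

Because all workers share one controller, the computed sleep times throttle the aggregate rate across the plurality of workers rather than each worker individually.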
    • 7. Invention patent application
    • Title: DEDUPLICATED DATA PROCESSING RATE CONTROL
    • Publication number: US20120215748A1
    • Publication date: 2012-08-23
    • Application number: US13458772
    • Filing date: 2012-04-27
    • Inventors: Shay H. AKIRAV, Ron ASHER, Yariv BACHAR, Lior KLIPPER, Oded SONIN
    • IPC: G06F7/00, G06F17/30
    • CPC: G06F17/30159, G06F3/0641, G06F17/30162, H04L47/10
    • Abstract: A plurality of workers is configured for parallel processing of deduplicated data entities in a plurality of chunks. The deduplicated data processing rate is regulated using a rate control mechanism. The rate control mechanism incorporates a debt/credit algorithm specifying which of the plurality of workers processing the deduplicated data entities must wait for each of a plurality of calculated required sleep times. The rate control mechanism is adapted to limit a data flow rate based on a penalty acquired during a last processing of one of the plurality of chunks in a retroactive manner, and further adapted to operate on at least one vector representation of at least one limit specification to accommodate a variety of available dimensions corresponding to the at least one limit specification.
    • 8. Invention patent application
    • Title: SELECTIVE CONSTANT COMPLEXITY DISMISSAL IN TASK SCHEDULING
    • Publication number: US20120047507A1
    • Publication date: 2012-02-23
    • Application number: US12859467
    • Filing date: 2010-08-19
    • Inventors: Yariv BACHAR, Ilai HARSGOR-HENDIN, Ehud MEIRI, Oded SONIN
    • IPC: G06F9/46
    • CPC: G06F9/4881, G06F2209/486
    • Abstract: Various embodiments for selective constant complexity dismissal in task scheduling of a plurality of tasks are provided. A strictly increasing function is implemented to generate a plurality of unique creation stamps, each of the plurality of unique creation stamps increasing over time pursuant to the strictly increasing function. A new task to be placed with the plurality of tasks is labeled with a new unique creation stamp of the plurality of unique creation stamps. The one of the list of dismissal rules holds a minimal valid creation (MVC) stamp, which is updated when a dismissal action for the one of the list of dismissal rules is executed. The dismissal action acts to dismiss a selection of tasks over time due to continuous dispatch.
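The mechanism in US20120047507A1 is essentially lazy, constant-time dismissal: a dismissal action never scans the task queue, it only advances the rule's minimal valid creation (MVC) stamp, and tasks whose creation stamp is older than that MVC stamp are dropped later, one by one, as the dispatcher reaches them. The following Python sketch illustrates that idea; treating a task's "kind" as the attribute a dismissal rule selects on is an assumption made for the example.

```python
import itertools
from collections import deque

# Illustrative sketch of constant-complexity dismissal by creation stamp,
# not the patented scheduler. A strictly increasing counter stamps every
# task; a dismissal rule stores only a minimal valid creation (MVC) stamp.

_stamps = itertools.count(1)      # strictly increasing creation stamps


class Task:
    def __init__(self, kind: str, payload: object) -> None:
        self.kind = kind              # assumed attribute that dismissal rules select on
        self.payload = payload
        self.created = next(_stamps)  # label the task when it is created


class Scheduler:
    def __init__(self) -> None:
        self.queue: deque[Task] = deque()
        self.mvc: dict[str, int] = {}  # dismissal rule (task kind) -> MVC stamp

    def submit(self, task: Task) -> None:
        self.queue.append(task)

    def dismiss(self, kind: str) -> None:
        # Constant-time dismissal: nothing is scanned or removed here; every
        # task of this kind created up to now simply becomes invalid.
        self.mvc[kind] = next(_stamps)

    def dispatch(self) -> Task | None:
        # Stale tasks are dropped lazily as continuous dispatch reaches them.
        while self.queue:
            task = self.queue.popleft()
            if task.created < self.mvc.get(task.kind, 0):
                continue              # dismissed task, skip it
            return task
        return None
```

The dismissal itself is O(1) regardless of queue length; the trade-off is that dismissed tasks keep occupying queue memory until continuous dispatch eventually skips over them, which matches the abstract's phrasing about dismissing a selection of tasks over time.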
    • 9. Invention patent application
    • Title: READ-AHEAD PROCESSING IN NETWORKED CLIENT-SERVER ARCHITECTURE
    • Publication number: US20120239749A1
    • Publication date: 2012-09-20
    • Application number: US13488157
    • Filing date: 2012-06-04
    • Inventors: Lior ARONOVICH, Konstantin MUSHKIN, Oded SONIN
    • IPC: G06F15/167, G06F15/16
    • CPC: G06F12/0862, G06F3/067, G06F12/0802, G06F2212/163, H04L67/42
    • Abstract: Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence, consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence id value in order to identify and filter read-ahead messages that are obsolete when received by the storage system, as the client application has already moved to read a different storage segment. Basically, a message is discarded when its sequence id value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine corresponding read-ahead data to be loaded into a read-ahead cache.
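US20120239749A1 (its abstract also appears under US20120144123A1 below) filters stale read-ahead traffic by sequence ID: every sequential pass over one storage segment by one client thread carries its own, strictly newer sequence ID, and the storage server drops any message whose ID is older than the newest one it has seen for that thread, because the client has already moved on to another segment. The sketch below is an illustrative server-side filter built on that description, not the patented protocol; the message fields and class names are invented.

```python
from dataclasses import dataclass

# Illustrative server-side filter built on the description above, not the
# patented protocol. Each sequential read of one segment by one client
# thread carries its own, strictly newer sequence ID; older IDs are dropped.

@dataclass
class ReadMessage:
    thread_id: int      # client thread of execution issuing the reads
    sequence_id: int    # identifies one sequential read of one storage segment
    segment: str
    offset: int
    length: int
    read_ahead: bool    # True for speculative read-ahead requests


class ReadAheadServer:
    def __init__(self, storage: dict[str, bytes]) -> None:
        self.storage = storage                 # segment name -> segment contents
        self.latest_seq: dict[int, int] = {}   # thread_id -> newest sequence ID seen
        self.read_ahead_cache: dict[tuple[str, int], bytes] = {}

    def handle(self, msg: ReadMessage) -> bytes | None:
        newest = self.latest_seq.get(msg.thread_id, 0)
        if msg.sequence_id < newest:
            return None                        # obsolete message: the client moved on, drop it
        self.latest_seq[msg.thread_id] = msg.sequence_id

        key = (msg.segment, msg.offset)
        if msg.read_ahead:
            # Speculative data is only loaded into the read-ahead cache.
            self.read_ahead_cache[key] = self._read(msg)
            return None
        # A regular read is served from the cache if the prefetch landed.
        return self.read_ahead_cache.pop(key, None) or self._read(msg)

    def _read(self, msg: ReadMessage) -> bytes:
        data = self.storage.get(msg.segment, b"")
        return data[msg.offset:msg.offset + msg.length]
```

Only read-ahead results are placed in the read-ahead cache; a regular read is served from that cache when the prefetch has already landed, and straight from storage otherwise.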
    • 10. Invention patent application
    • Title: READ-AHEAD PROCESSING IN NETWORKED CLIENT-SERVER ARCHITECTURE
    • Publication number: US20120144123A1
    • Publication date: 2012-06-07
    • Application number: US12958196
    • Filing date: 2010-12-01
    • Inventors: Lior ARONOVICH, Konstantin MUSHKIN, Oded SONIN
    • IPC: G06F12/08
    • CPC: G06F12/0862, G06F3/067, G06F12/0802, G06F2212/163, H04L67/42
    • Abstract: Various embodiments for read-ahead processing in a networked client-server architecture by a processor device are provided. Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence, consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence id value in order to identify and filter read-ahead messages that are obsolete when received by the storage system, as the client application has already moved to read a different storage segment. Basically, a message is discarded when its sequence id value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine corresponding read-ahead data to be loaded into a read-ahead cache.