    • 2. Invention Grant
    • Title: Programmatic response-time based workload distribution techniques
    • Publication No.: US07207043B2
    • Grant Date: 2007-04-17
    • Application No.: US10334262
    • Filing Date: 2002-12-31
    • Inventors: Christopher James Blythe, Gennaro A. Cuomo, Erik A. Daughtrey, Matt R. Hogstrom
    • IPC: G06F9/46, G06F15/16, G06F15/173, G06F17/00
    • CPC: G06F9/505, G06F2209/5018, G06Q30/0283, G06Q50/06
    • Abstract: Workload is programmatically distributed across a set of execution resources. In a multithreaded server environment, response time to end users is improved while increasing the efficiency of software execution and resource usage. Execution time and wait/queued time are tracked, for various types of requests being serviced by a server. Multiple logical pools of threads are used to service these requests, and inbound requests are directed to a selected one of these pools such that requests of similar execution-time requirements are serviced by the threads in that pool. The number and size of thread pools may be adjusted programmatically, and the distribution calculation (i.e., determining which inbound requests should be assigned to which pools) is a programmatic determination. In preferred embodiments, only one of these variables is adjusted at a time, and the results are monitored to determine whether the effect was positive or negative. The disclosed techniques also apply to tracking and classifying requests by method name (and, optionally, parameters). (See the illustrative sketch after this entry.)
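The abstract above (US07207043B2) describes directing inbound requests to one of several logical thread pools according to their observed execution-time requirements. A minimal Java sketch of that idea follows; the class and method names, the band thresholds, and the moving-average weighting are illustrative assumptions, not an API published in the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: route requests to logical pools by expected execution time.
public class ResponseTimeRouter {
    // Per-request-type moving average of observed execution time (ms).
    private final Map<String, Double> avgExecTime = new ConcurrentHashMap<>();
    // Logical pools ordered from "fast" requests to "slow" requests.
    private final ExecutorService[] pools;
    // Upper execution-time bound (ms) handled by each pool; the last pool is unbounded.
    private final double[] upperBounds;

    public ResponseTimeRouter(int poolCount, int threadsPerPool, double[] upperBounds) {
        this.pools = new ExecutorService[poolCount];
        for (int i = 0; i < poolCount; i++) {
            pools[i] = Executors.newFixedThreadPool(threadsPerPool);
        }
        this.upperBounds = upperBounds;
    }

    // Direct a request to the pool whose band matches its historical execution time.
    public void submit(String requestType, Runnable work) {
        double expected = avgExecTime.getOrDefault(requestType, 0.0);
        int index = pools.length - 1;
        for (int i = 0; i < upperBounds.length; i++) {
            if (expected <= upperBounds[i]) { index = i; break; }
        }
        pools[index].submit(() -> {
            long start = System.nanoTime();
            try {
                work.run();
            } finally {
                double elapsedMs = (System.nanoTime() - start) / 1_000_000.0;
                // Exponentially weighted moving average of execution time.
                avgExecTime.merge(requestType, elapsedMs,
                        (old, cur) -> 0.9 * old + 0.1 * cur);
            }
        });
    }
}
```

The exponentially weighted average is one simple way to keep the classification responsive to workload drift without storing per-request histories; it is a design choice for this sketch, not something the abstract specifies.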
    • 4. Invention Grant
    • Title: Dynamic thread pool tuning techniques
    • Publication No.: US07237242B2
    • Grant Date: 2007-06-26
    • Application No.: US10334768
    • Filing Date: 2002-12-31
    • Inventors: Christopher James Blythe, Gennaro A. Cuomo, Erik A. Daughtrey, Matt R. Hogstrom
    • IPC: G06F9/46
    • CPC: G06F9/505, G06F2209/5018
    • Abstract: Thread pools in a multithreaded server are programmatically adjusted, based on observed statistics from the server's inbound workload. In a multithreaded server environment, response time to end users is improved while increasing the efficiency of software execution and resource usage. Execution time and wait/queued time are tracked, for various types of requests being serviced by a server. Multiple logical pools of threads are used to service these requests, and inbound requests are directed to a selected one of these pools such that requests of similar execution-time requirements are serviced by the threads in that pool. The number and size of thread pools may be adjusted programmatically, and the distribution calculation (i.e., determining which inbound requests should be assigned to which pools) is a programmatic determination. In preferred embodiments, only one of these variables is adjusted at a time, and the results are monitored to determine whether the effect was positive or negative. The disclosed techniques also apply to tracking and classifying requests by method name (and, optionally, parameters). (See the illustrative sketch after this entry.)
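The abstract of US07237242B2 emphasizes changing only one tuning variable at a time and keeping the change only if the monitored effect was positive. Below is a minimal, hypothetical Java sketch of one such cycle for a single variable (pool size); the response-time sampler, step size, and observation window are assumptions made for illustration.

```java
import java.util.concurrent.ThreadPoolExecutor;
import java.util.function.DoubleSupplier;

// Hypothetical sketch: adjust one variable per cycle and revert if it hurt response time.
public class PoolSizeTuner {
    private final ThreadPoolExecutor pool;
    private final DoubleSupplier avgResponseTimeMs; // supplied by the server's workload statistics
    private int step = 2;                           // threads added (or removed) per tuning cycle

    public PoolSizeTuner(ThreadPoolExecutor pool, DoubleSupplier avgResponseTimeMs) {
        this.pool = pool;
        this.avgResponseTimeMs = avgResponseTimeMs;
    }

    // One tuning cycle: change exactly one variable (the pool size), observe the
    // effect on response time, and revert (and reverse direction) if it got worse.
    public void tuneOnce() throws InterruptedException {
        double before = avgResponseTimeMs.getAsDouble();
        int oldSize = pool.getMaximumPoolSize();
        int newSize = Math.max(1, oldSize + step);

        resize(newSize);
        Thread.sleep(30_000);                       // observation window (assumed)

        double after = avgResponseTimeMs.getAsDouble();
        if (after > before) {                       // negative effect: undo, try the other direction next time
            resize(oldSize);
            step = -step;
        }
    }

    private void resize(int size) {
        // Order matters: the core size must never exceed the maximum size.
        if (size >= pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(size);
            pool.setCorePoolSize(size);
        } else {
            pool.setCorePoolSize(size);
            pool.setMaximumPoolSize(size);
        }
    }
}
```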
    • 5. Invention Grant
    • Title: Autonomic workload classification using predictive assertion for wait queue and thread pool selection
    • Publication No.: US07703101B2
    • Grant Date: 2010-04-20
    • Application No.: US10778584
    • Filing Date: 2004-02-13
    • Inventors: Gennaro A. Cuomo, Erik A. Daughtrey
    • IPC: G06F9/46, G06F15/16, G06F15/173, G06F9/44
    • CPC: G06F9/505
    • Abstract: Incoming work units (e.g., requests) in a computing workload are analyzed and classified according to predicted execution. Preferred embodiments track which instrumented wait points are encountered by the executing work units, and this information is analyzed to dynamically and autonomically create one or more recognizers to programmatically recognize similar, subsequently-received work units. When a work unit is recognized, its execution behavior is then predicted. Execution resources are then allocated to the work units in view of these predictions. The recognizers may be autonomically evaluated or tuned, thereby adjusting to changing workload characteristics. The disclosed techniques may be used advantageously in application servers, message-processing software, and so forth. (See the illustrative sketch after this entry.)
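The abstract of US07703101B2 describes tracking instrumented wait points, building recognizers for similar work units, and predicting execution behavior in order to select a wait queue or thread pool. The Java sketch below illustrates one possible reading of that idea, assuming a simple "set of wait points previously seen for this classification key" model; all names, wait-point identifiers, and the selection policy are hypothetical.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical sketch: wait-point tracking feeding a recognizer that predicts
// which wait points a newly arrived work unit will hit, and hence which pool serves it.
public class WaitPointRecognizer {
    // Wait points observed per classification key (e.g., request method name).
    private final Map<String, Set<String>> observedWaitPoints = new ConcurrentHashMap<>();

    // Called by instrumentation whenever a work unit blocks at a wait point
    // (database connection pool, remote call, lock, and so on).
    public void recordWaitPoint(String classificationKey, String waitPointId) {
        observedWaitPoints
                .computeIfAbsent(classificationKey, k -> new CopyOnWriteArraySet<>())
                .add(waitPointId);
    }

    // Predictive assertion (as modeled here): a recognized work unit is expected
    // to encounter the same wait points as earlier units with the same key.
    public Set<String> predictWaitPoints(String classificationKey) {
        return observedWaitPoints.getOrDefault(classificationKey, Set.of());
    }

    // Example selection policy: units predicted to block on a remote resource go
    // to a dedicated pool so they do not starve short, CPU-bound requests.
    public String selectPool(String classificationKey) {
        Set<String> predicted = predictWaitPoints(classificationKey);
        if (predicted.contains("remote-call") || predicted.contains("jdbc")) {
            return "blocking-pool";
        }
        return predicted.isEmpty() ? "default-pool" : "fast-pool";
    }
}
```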