    • 2. Granted Invention Patent
    • Method and apparatus for distributing load in a computer environment
    • US06658473B1
    • 2003-12-02
    • US09513655
    • 2000-02-25
    • Robert J. Block; James G. Hanko; J. Kent Peacock
    • Robert J. Block; James G. Hanko; J. Kent Peacock
    • G06F15/173
    • H04L67/1008; G06F9/5033; G06F9/505; G06F9/5083; G06F2209/5016; H04L67/101; H04L67/1019; H04L67/1029; H04L67/1034; H04L69/329
    • The present invention provides a method and apparatus for distributing load in a multiple server computer environment. In one embodiment, a group manager process on each server periodically determines the server's capacity and load (i.e., utilization) with respect to multiple resources. The capacity and load information is broadcast to the other servers in the group, so that each server has a global view of every server's capacity and current load. When a given terminal authenticates to a server to start or resume one or more sessions, the group manager process of that server first determines whether one of the servers in the group already is hosting a session for that user. If that is the case, one embodiment of the present invention redirects the desktop unit to that server and the load-balancing strategy is not employed. Otherwise, for each resource and server, the proper load balancing strategies are performed to identify which server is best able to handle that particular session.
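The abstract above describes a two-step placement decision: honour session affinity first, then fall back to a per-resource comparison of the broadcast capacity and load figures. The Python sketch below illustrates only that decision logic under assumed data structures; the ServerStatus fields, the "least worst-case utilization" rule, and all names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ServerStatus:
    """Capacity and current load per resource, as broadcast by a group manager."""
    name: str
    capacity: dict                                # resource -> total capacity (assumed units)
    load: dict                                    # resource -> current utilization
    sessions: set = field(default_factory=set)    # users with active sessions on this server

def pick_server(group, user):
    # Session affinity: if some server already hosts a session for this user,
    # redirect there and skip the load-balancing step, as the abstract describes.
    for server in group:
        if user in server.sessions:
            return server
    # Otherwise pick the server whose most-loaded resource has the most headroom;
    # the abstract leaves the exact strategy open, so this rule is only illustrative.
    return min(group, key=lambda s: max(s.load[r] / s.capacity[r] for r in s.capacity))

group = [
    ServerStatus("a", {"cpu": 8, "mem": 64}, {"cpu": 6, "mem": 20}, {"alice"}),
    ServerStatus("b", {"cpu": 8, "mem": 64}, {"cpu": 2, "mem": 30}),
]
print(pick_server(group, "alice").name)  # "a": existing session wins
print(pick_server(group, "bob").name)    # "b": lower worst-case utilization
```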
    • 3. Granted Invention Patent
    • Associating multiple display units in a grouped server environment
    • US06915347B2
    • 2005-07-05
    • US09733579
    • 2000-12-06
    • James G. Hanko; Sangeeta Varma; J. Kent Peacock
    • James G. Hanko; Sangeeta Varma; J. Kent Peacock
    • G06F3/14; G06F9/44; H04L29/08; G06F15/16
    • H04L67/34; G06F3/1423; H04L67/14; H04L69/329
    • A method for grouping Human Interface Devices (HIDs) into a multi-head display is provided. The HIDs are identified as either “primary” or “secondaries”. A computational-service policy module is consulted when a new HID connects to the network. If the HID is identified as a secondary, the module consults all servers within a group to see if the primary presently has an active session connected to any of the servers. If the primary is being controlled by the same server to which the secondary is connected, the session connection information for the primary is augmented to indicate that the secondary is attached to the same session, and this information is disseminated to the interested software entities. The associated session may then provide multi-head output to the secondary. If the primary is being controlled by another server in the group, the secondary re-attaches to the server that is hosting the primary.
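As a rough illustration of the choice the abstract describes when a secondary HID connects, the hypothetical function below scans a group-wide table of primary sessions; the dict layout and every identifier are assumptions made for the sketch, not the patented mechanism.

```python
def handle_secondary(secondary_id, primary_id, this_server, group):
    """Return the action for a newly connected secondary HID.

    `group` maps server name -> {primary HID id: active session id};
    all names here are illustrative, not from the patent.
    """
    for server, sessions in group.items():
        if primary_id in sessions:
            if server == this_server:
                # Same server hosts the primary's session: attach the secondary
                # to that session so it can receive the multi-head output.
                return ("attach", sessions[primary_id])
            # Another server in the group hosts the primary: the secondary
            # should re-attach to that server instead.
            return ("redirect", server)
    # No server currently hosts a session for the primary.
    return ("wait", None)

group = {"s1": {"hid-primary-7": "session-42"}, "s2": {}}
print(handle_secondary("hid-secondary-3", "hid-primary-7", "s2", group))  # ('redirect', 's1')
print(handle_secondary("hid-secondary-3", "hid-primary-7", "s1", group))  # ('attach', 'session-42')
```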
    • 4. Granted Invention Patent
    • Process distribution and sharing system for multiple processor computer system
    • US4914570A
    • 1990-04-03
    • US223729
    • 1988-07-21
    • J. Kent Peacock
    • J. Kent Peacock
    • G06F9/50
    • G06F9/4856; G06F9/4881; G06F2209/483
    • A multiple processor (CPU) computer system, each CPU having a separate, local, random access memory means to which it has direct access. An interprocessor bus couples the CPUs to memories of all the CPUs, so that each CPU can access both its own local memory means and the local memories of the other CPUs. A run queue data structure holds a separate run queue for each of the CPUs. Whenever a new process is created, one of the CPUs is assigned as its home site and the new process is installed in the local memory for the home site. When a specified process needs to be transferred from its home site to another CPU, typically for performing a task which cannot be performed on the home site, the system executes a cross processor call, which performs the steps of: (a) placing the specified process on the run queue of the other CPU; (b) continuing the execution of the specified process on the other CPU, using the local memory for the specified process's home site as the resident memory for the process and using the interprocessor bus to couple the other CPU to the home site's local memory, until a predefined set of tasks has been completed; and then (c) placing the specified process on the run queue of the specified process's home site, so that execution of the process will resume on the process's home site.
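The cross processor call in the abstract proceeds in three steps: enqueue the process on the other CPU, run it there while its pages stay resident in the home site's local memory, then return it to the home site's run queue. The schematic Python model below mirrors those steps with invented CPU, task, and memory stand-ins; it is not an implementation of the patented system.

```python
from collections import deque

class CPU:
    def __init__(self, name):
        self.name = name
        self.run_queue = deque()   # per-CPU run queue, as in the abstract
        self.local_memory = {}     # stand-in for the CPU's local RAM

def cross_processor_call(process, home, other, task):
    # (a) place the process on the other CPU's run queue
    other.run_queue.append(process)
    # (b) the other CPU runs it while the process remains resident in the home
    #     site's local memory, reached over the interprocessor bus
    other.run_queue.remove(process)
    task(home.local_memory)
    # (c) return the process to its home site's run queue so execution resumes there
    home.run_queue.append(process)

home, other = CPU("cpu0"), CPU("cpu1")
home.local_memory["p1"] = {"state": "created"}
cross_processor_call("p1", home, other, lambda mem: mem["p1"].update(state="done"))
print(home.local_memory["p1"], list(home.run_queue))  # {'state': 'done'} ['p1']
```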
    • 5. Granted Invention Patent
    • Method and apparatus for caching file control information corresponding to a second file block in a first file block
    • US5996047A
    • 1999-11-30
    • US673958
    • 1996-07-01
    • J. Kent Peacock
    • J. Kent Peacock
    • G06F17/30; G06F12/08
    • G06F17/30067
    • A method and system for managing control information associated with a file is disclosed. According to the method, a cache is established in a first file block for storing a second type of file control data. The cache has a cache range. In response to receiving a command to write the file, a first and second type of file control data is generated. The second type of file control data has a logical block number identifying a location in the second file block where the second type of file control data is to be stored. The first type of file control data is stored in the first file block. If the logical block number is within the cache range, then the second type of file control data is stored in the cache. If the logical block number is outside the cache range, then the cache is flushed by copying the previously stored second type of file control data in the cache to a second file block. The second type of file control data is then written into the cache.
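A toy model of the caching rule the abstract describes: second-type control data is kept in a cache inside the first file block while its logical block number falls within the cache range, and the cache is flushed to the second file block before an out-of-range entry is cached. The class below, its block-aligned range, and the dict-backed blocks are all illustrative assumptions, not the patented design.

```python
class ControlCache:
    """Cache for "second type" control data held inside the first file block."""

    def __init__(self, cache_size):
        self.cache_size = cache_size   # size of the cache range (assumed unit: blocks)
        self.base = 0                  # start of the current cache range
        self.cache = {}                # logical block number -> control data
        self.second_file_block = {}    # stand-in for the second file block on disk

    def store(self, block_no, control_data):
        in_range = self.base <= block_no < self.base + self.cache_size
        if not in_range:
            # Flush: copy the cached entries out to the second file block,
            # then move the cache range to cover the new block number.
            self.second_file_block.update(self.cache)
            self.cache.clear()
            self.base = block_no - (block_no % self.cache_size)
        self.cache[block_no] = control_data

c = ControlCache(cache_size=8)
c.store(3, "meta-3")    # within the initial range: cached in the first file block
c.store(11, "meta-11")  # outside the range: flush, then cache
print(c.second_file_block, c.cache)  # {3: 'meta-3'} {11: 'meta-11'}
```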