    • 1. Invention application
    • System and method for providing a persistent function server
    • Publication: US20060095718A1, published 2006-05-04
    • Application: US10942432, filed 2004-09-16
    • Inventors: Michael Day, Mark Nutter, VanDung To
    • IPC: G06F15/00; CPC: G06F9/544, G06F8/52
    • Abstract: A system and method for providing a persistent function server is provided. A multi-processor environment uses an interface definition language (IDL) file to describe a particular function, such as an “add” function. A compiler uses the IDL file to generate source code for marshalling and de-marshalling data between a main processor and a support processor. A header file corresponding to the particular function is also created. The main processor places parameters in the header file and sends it to the support processor. For example, a main processor may place two numbers in an “add” header file and send it to a support processor that is responsible for performing math functions. In addition, the persistent function server capability of the support processor is programmable, so the support processor may be assigned to execute unique and complex functions. (A minimal sketch of the header-passing flow follows this entry.)
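The request/response flow the abstract describes can be pictured with a small C sketch. Everything here (add_header_t, OP_ADD, function_server) is an illustrative stand-in, not the patent's generated code; in the real system the header would be produced by the IDL compiler and moved between processors over a mailbox or DMA channel.

```c
/* Minimal sketch of the request-header idea from the abstract: the main
 * processor packs parameters into a per-function header and hands it to a
 * support processor running a persistent "add" server.  All names
 * (add_header_t, function_server) are illustrative, not from the patent. */
#include <stdio.h>

typedef struct {            /* header generated for the "add" function   */
    int opcode;             /* which persistent function to invoke       */
    int a, b;               /* marshalled input parameters               */
    int result;             /* de-marshalled output parameter            */
} add_header_t;

enum { OP_ADD = 1 };

/* Persistent function server: in a real system this would run on the
 * support processor and receive headers over a mailbox or DMA channel.  */
static void function_server(add_header_t *hdr)
{
    if (hdr->opcode == OP_ADD)
        hdr->result = hdr->a + hdr->b;
}

int main(void)
{
    add_header_t hdr = { OP_ADD, 2, 3, 0 };  /* main processor marshals args */
    function_server(&hdr);                   /* "send" header to the server  */
    printf("2 + 3 = %d\n", hdr.result);      /* de-marshal the result        */
    return 0;
}
```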
    • 2. Invention application
    • System and method for task queue management of virtual devices using a plurality of processors
    • Publication: US20050081202A1, published 2005-04-14
    • Application: US10670838, filed 2003-09-25
    • Inventors: Daniel Brokenshire, Michael Day, Barry Minor, Mark Nutter, VanDung To
    • IPC: G06F9/46; CPC: G06F9/505
    • Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by that SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based on which SPU is least busy as well as on whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, the code used to perform the function is more likely to still be in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations. (A sketch of this selection heuristic follows this entry.)
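The SPU-selection heuristic in the abstract (prefer an SPU that recently ran the requested function, otherwise pick the least busy one) can be sketched as follows. The spu_t fields, pick_spu(), and the rule that affinity wins outright are simplifying assumptions; the patent does not spell out how the two criteria are weighted.

```c
/* Sketch of the SPU-selection heuristic: prefer an SPU whose local store
 * likely still holds the code for the requested virtual-device function,
 * otherwise fall back to the least busy SPU. */
#include <stdio.h>

#define NUM_SPUS 4

typedef struct {
    int queue_depth;    /* pending tasks in this SPU's task queue */
    int last_function;  /* virtual-device function it last ran    */
} spu_t;

static int pick_spu(spu_t spus[], int function)
{
    int best = 0;
    for (int i = 0; i < NUM_SPUS; i++) {
        if (spus[i].last_function == function)
            return i;                             /* affinity wins outright */
        if (spus[i].queue_depth < spus[best].queue_depth)
            best = i;                             /* otherwise least busy   */
    }
    return best;
}

int main(void)
{
    spu_t spus[NUM_SPUS] = { {3, 7}, {1, 2}, {0, 5}, {2, 9} };
    printf("function 9 -> SPU %d\n", pick_spu(spus, 9)); /* affinity: SPU 3   */
    printf("function 4 -> SPU %d\n", pick_spu(spus, 4)); /* least busy: SPU 2 */
    return 0;
}
```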
    • 3. Invention application
    • System and method for manipulating data with a plurality of processors
    • Publication: US20050071578A1, published 2005-03-31
    • Application: US10670840, filed 2003-09-25
    • Inventors: Michael Day, Mark Nutter, VanDung To
    • IPC: G06F12/00, G06F15/167; CPC: G06F15/16
    • Abstract: A system and a method for sharing a common system memory between a main processor and a plurality of secondary processors. Sharing the common system memory enables data to be shared between the processors. The main processor loads the data into the common memory and divides the data to be processed into data blocks. The size of the data blocks is equal to the size of the registers of the secondary processors. The main processor identifies an available secondary processor to process the first data block. The secondary processor processes the data block and returns the processed data block to the common system memory. The main processor may continue identifying available secondary processors and requesting them to process data blocks until all the data blocks have been processed. (A sketch of this dispatch loop follows this entry.)
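A compact C sketch of the block-dispatch loop described above, under the simplifying assumption that handing a block to an "available secondary processor" can be modeled as a direct function call. BLOCK_WORDS stands in for the secondary processors' register width and secondary_process() is an illustrative name.

```c
/* Sketch of the block-dispatch loop: the main processor splits a shared
 * buffer into register-sized blocks and hands each block to the next
 * available secondary processor, which writes the processed block back
 * to common memory. */
#include <stdio.h>

#define BLOCK_WORDS 4   /* stands in for the secondary processor register width */
#define TOTAL_WORDS 12

/* Simulated secondary processor: processes one block in place. */
static void secondary_process(int *block)
{
    for (int i = 0; i < BLOCK_WORDS; i++)
        block[i] *= 2;
}

int main(void)
{
    int common_memory[TOTAL_WORDS];              /* shared system memory   */
    for (int i = 0; i < TOTAL_WORDS; i++)
        common_memory[i] = i;

    /* Main processor loop: dispatch one block at a time until all done.  */
    for (int off = 0; off < TOTAL_WORDS; off += BLOCK_WORDS)
        secondary_process(&common_memory[off]);  /* "available" processor  */

    for (int i = 0; i < TOTAL_WORDS; i++)
        printf("%d ", common_memory[i]);
    printf("\n");
    return 0;
}
```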
    • 5. Invention application
    • System and method for virtualization of processor resources
    • Publication: US20060069878A1, published 2006-03-30
    • Application: US10955093, filed 2004-09-30
    • Inventors: Maximino Aguilar, Michael Day, Mark Nutter, James Xenidis
    • IPC: G06F12/08, G06F12/00; CPC: G06F12/109, G06F12/0284, G06F12/1045
    • Abstract: A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. As a result, the processor's local memory is accessible by other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings in the effective address space. The effective address space corresponds either to a physical local memory or to a “soft” copy area. When the processor is running, a different processor may access data located in the first processor's local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g. locked cache memory, pinned system memory, virtual memory) so that other processors can continue accessing it. (A sketch of the live-versus-soft-copy lookup follows this entry.)
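A minimal sketch of the idea that the same effective-address mapping resolves either to the live local store or to a saved soft copy, depending on whether the owning thread is running. The mapped_local_t structure and resolve() helper are assumptions made purely for illustration.

```c
/* Sketch of the "live local store vs. soft copy" lookup: an effective-
 * address mapping resolves to the processor's physical local memory while
 * its thread is running, and to a pinned soft copy otherwise. */
#include <stdio.h>
#include <string.h>

#define LOCAL_SIZE 16

typedef struct {
    int  running;                  /* is the owning thread on the processor? */
    char local_store[LOCAL_SIZE];  /* physical local memory                  */
    char soft_copy[LOCAL_SIZE];    /* saved image in system memory           */
} mapped_local_t;

/* Resolve the effective address range to whichever copy is valid now. */
static char *resolve(mapped_local_t *m)
{
    return m->running ? m->local_store : m->soft_copy;
}

int main(void)
{
    mapped_local_t m = { 1, "live data", "" };
    printf("running: %s\n", resolve(&m));

    /* Context is swapped out: preserve the image, mark not running. */
    memcpy(m.soft_copy, m.local_store, LOCAL_SIZE);
    m.running = 0;
    printf("swapped: %s\n", resolve(&m));   /* other processors still see it */
    return 0;
}
```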
    • 6. Invention application
    • Light weight context switching technique
    • Publication: US20060015876A1, published 2006-01-19
    • Application: US10891773, filed 2004-07-15
    • Inventors: Michael Day, Mark Nutter
    • IPC: G06F9/46; CPC: G06F9/461, G06F9/485
    • Abstract: An apparatus, a method, and a computer program product are provided for allowing context switching more efficiently. Context switching can be costly because of both the memory required to store data from pre-empted applications and the bus bandwidth required to move that data at pre-emption. To alleviate at least some of these costs, additional fields, either with associated Application Program Interfaces (APIs) or coupled to application modules, can be employed to indicate points of light-weight context during the operation of an application. The operating system can then pre-empt applications at points where the context is relatively light, reducing the cost of both storage and bus usage. (A sketch of such a light-context marker follows this entry.)
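One way to read the abstract is that an application announces "light context" points and the scheduler defers a pending preemption until one is reached. The sketch below models that with an assumed marker API, light_context_point(); the name and the flag-based scheduler are illustrative, not the patent's interface.

```c
/* Sketch of the light-weight context point idea: the application calls a
 * marker when its live state is small, and a pending preemption request
 * is only honoured at such a point. */
#include <stdio.h>

static int preempt_requested = 0;   /* set by the OS / timer tick            */
static int switches_taken    = 0;

/* Application-side marker: "my context is cheap to save right now". */
static void light_context_point(void)
{
    if (preempt_requested) {
        switches_taken++;            /* OS would save the (small) context here */
        preempt_requested = 0;
    }
}

int main(void)
{
    for (int iter = 0; iter < 4; iter++) {
        /* ... heavy phase with lots of live registers and local data ...   */
        if (iter == 1)
            preempt_requested = 1;   /* preemption arrives mid-phase         */
        /* ... heavy phase keeps running; the switch is deferred ...         */
        light_context_point();       /* state is small: switch may occur now */
    }
    printf("context switches taken at light points: %d\n", switches_taken);
    return 0;
}
```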
    • 7. Invention application
    • System and method for asymmetric heterogeneous multi-threaded operating system
    • Publication: US20050081203A1, published 2005-04-14
    • Application: US10670841, filed 2003-09-25
    • Inventors: Maximino Aguilar, Michael Day, Mark Nutter, James Stafford
    • IPC: G06F9/46; CPC: G06F9/4881
    • Abstract: A system and method for an asymmetric heterogeneous multi-threaded operating system are presented. A processing unit (PU) provides a trusted-mode environment in which an operating system executes. A heterogeneous processor environment includes a synergistic processing unit (SPU) that does not provide trusted-mode capabilities. The PU operating system uses two separate and distinct schedulers, a PU scheduler and an SPU scheduler, to schedule tasks on a PU and an SPU, respectively. In one embodiment, the heterogeneous processor environment includes a plurality of SPUs. In this embodiment, the SPU scheduler may use a single SPU run queue to schedule tasks for the plurality of SPUs, or it may use a plurality of run queues whereby each run queue corresponds to a particular SPU. (A sketch of the two-scheduler split follows this entry.)
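A small sketch of the two-scheduler split: PU tasks and SPU tasks are placed on separate run queues, and SPU tasks are drained from a single shared queue across the SPUs (the single-run-queue variant mentioned in the abstract). The run_queue_t and task_t types are illustrative assumptions.

```c
/* Sketch of the PU scheduler / SPU scheduler split with one shared SPU
 * run queue.  Queue sizes, task_t, and the round-robin drain are
 * illustrative only. */
#include <stdio.h>

#define MAX_TASKS 8
#define NUM_SPUS  2

typedef struct { const char *name; int for_spu; } task_t;

typedef struct { task_t items[MAX_TASKS]; int head, tail; } run_queue_t;

static void enqueue(run_queue_t *q, task_t t) { q->items[q->tail++] = t; }

static int dequeue(run_queue_t *q, task_t *t)
{
    if (q->head == q->tail) return 0;
    *t = q->items[q->head++];
    return 1;
}

int main(void)
{
    run_queue_t pu_queue = { .head = 0 }, spu_queue = { .head = 0 };
    task_t jobs[] = { {"os-service", 0}, {"fft", 1}, {"ui", 0}, {"codec", 1} };

    /* PU scheduler and SPU scheduler each own their queue. */
    for (int i = 0; i < 4; i++)
        enqueue(jobs[i].for_spu ? &spu_queue : &pu_queue, jobs[i]);

    task_t t;
    while (dequeue(&pu_queue, &t))
        printf("PU   runs %s\n", t.name);
    for (int spu = 0; dequeue(&spu_queue, &t); spu = (spu + 1) % NUM_SPUS)
        printf("SPU%d runs %s\n", spu, t.name);
    return 0;
}
```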
    • 9. Invention application
    • System and method for sharing resources between real-time and virtualizing operating systems
    • Publication: US20060070069A1, published 2006-03-30
    • Application: US10955184, filed 2004-09-30
    • Inventors: Maximino Aguilar, Michael Day, Mark Nutter, James Xenidis
    • IPC: G06F9/46; CPC: G06F9/5016, G06F9/544
    • Abstract: A system and method for sharing resources between real-time and virtualizing operating systems is presented. A computer system uses effective address mapping of support processors' local memory to share resources between separate operating systems. When threads are created for either operating system, the thread's corresponding processor memory is mapped into an effective address space. In doing so, the processor's local memory is accessible by the thread, regardless of whether the processor is running or whether the processor is executing a different thread from a different operating system. For example, a computer system may have eight support processors and run two operating systems, where the first operating system requires six support processors and the second operating system requires all eight. In this example, resources are virtualized and shared between the two operating systems in order to meet the requirements of both. (A sketch of the eight-processor example follows this entry.)
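The eight-processor example can be illustrated by handing each operating system the number of SPU contexts it asks for and multiplexing those contexts onto the physical SPUs. The context structure and round-robin mapping below are assumptions for illustration only; the patent does not describe this particular assignment policy.

```c
/* Sketch of the virtualized-SPU idea behind the 8-SPU example: both
 * operating systems get the contexts they request, and the contexts are
 * multiplexed onto the physical SPUs. */
#include <stdio.h>

#define PHYSICAL_SPUS 8

typedef struct { const char *os; int id; } spu_context_t;

int main(void)
{
    spu_context_t contexts[16];
    int n = 0;

    /* OS 1 (real-time) asks for 6 contexts, OS 2 (virtualizing) for 8.  */
    for (int i = 0; i < 6; i++) contexts[n++] = (spu_context_t){"rt-os", i};
    for (int i = 0; i < 8; i++) contexts[n++] = (spu_context_t){"virt-os", i};

    /* 14 contexts share 8 physical SPUs: map each context round-robin.  */
    for (int c = 0; c < n; c++)
        printf("%s context %d -> physical SPU %d\n",
               contexts[c].os, contexts[c].id, c % PHYSICAL_SPUS);
    return 0;
}
```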
    • 10. Invention application
    • System and method for asynchronous linked data structure traversal
    • Publication: US20070043746A1, published 2007-02-22
    • Application: US11204415, filed 2005-08-16
    • Inventors: Maximino Aguilar, Michael Day, Mark Nutter
    • IPC: G06F7/00; CPC: G06F17/30961, Y10S707/99933, Y10S707/99944, Y10S707/99945
    • Abstract: A system and method for asynchronously traversing a disjoint linked data structure is presented. A synergistic processing unit (SPU) includes a handler that works in conjunction with a memory flow controller (MFC) to traverse a disjoint linked data structure. The handler compares a search value with a node value and, based on the comparison, provides the MFC with the effective address of the next node to traverse. In turn, the MFC retrieves the corresponding node data from system memory and stores the node data in the SPU's local storage area. The MFC stalls processing and sends an asynchronous event interrupt to the SPU, which instructs the handler to retrieve the latest node data in the local storage area and compare it with the search value. The traversal continues until the handler matches the search value with a node value or determines that the search has failed. (A sketch of this handler/MFC loop follows this entry.)
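The handler/MFC ping-pong the abstract describes can be sketched as follows: the handler compares the search key with the node just fetched into local store and hands the "MFC" the address of the next node to fetch, with a plain loop standing in for the asynchronous event interrupt. The node_t type and mfc_fetch() are illustrative, not the real MFC interface.

```c
/* Sketch of asynchronous linked-structure traversal: a simulated MFC
 * transfer copies one node at a time into "local store", and the handler
 * decides the next effective address to fetch. */
#include <stdio.h>
#include <string.h>

typedef struct node { int value; struct node *next; } node_t;

static node_t local_store;                 /* SPU-local copy of one node   */

/* Simulated MFC transfer: copy one node from "system memory" to local store. */
static void mfc_fetch(const node_t *effective_addr)
{
    memcpy(&local_store, effective_addr, sizeof(node_t));
}

static int traverse(node_t *head, int key)
{
    const node_t *ea = head;
    while (ea) {
        mfc_fetch(ea);                     /* MFC brings the node in        */
        if (local_store.value == key)      /* handler compares with the key */
            return 1;
        ea = local_store.next;             /* handler supplies next EA      */
    }
    return 0;                              /* failed search                 */
}

int main(void)
{
    node_t c = {30, NULL}, b = {20, &c}, a = {10, &b};
    printf("find 20: %d\n", traverse(&a, 20));
    printf("find 99: %d\n", traverse(&a, 99));
    return 0;
}
```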