    • 1. Invention application
    • System and method for task queue management of virtual devices using a plurality of processors
    • Publication No.: US20050081202A1
    • Publication date: 2005-04-14
    • Application No.: US10670838
    • Filing date: 2003-09-25
    • Inventors: Daniel Brokenshire, Michael Day, Barry Minor, Mark Nutter, VanDung To
    • IPC: G06F9/46
    • CPC: G06F9/505
    • A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.
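The scheduling policy in the abstract above can be made concrete with a minimal C sketch. Everything here (the spu_t fields, NUM_SPUS, choose_spu) is an illustrative assumption, not taken from the patent; the sketch only shows the selection order the abstract describes: reuse an SPU already assigned to the virtual device, otherwise prefer one that recently ran the requested function (so its code is likely still in local store), otherwise take the least busy SPU.

    #define NUM_SPUS 8

    /* Hypothetical per-SPU bookkeeping: how deep its task queue is, which
     * virtual device it currently serves, and which virtual-device function
     * it ran last (whose code may still sit in its local store). */
    typedef struct {
        int queue_depth;
        int assigned_device;     /* -1 if not assigned to any virtual device */
        int last_function_id;    /* -1 if it has not run a function yet */
    } spu_t;

    /* Pick the SPU whose task queue should receive a request for the given
     * virtual device and function. */
    int choose_spu(const spu_t spus[], int device_id, int function_id)
    {
        int least_busy = 0;

        /* 1. An SPU already assigned to this virtual device keeps the work. */
        for (int i = 0; i < NUM_SPUS; i++)
            if (spus[i].assigned_device == device_id)
                return i;

        /* 2. Otherwise prefer an SPU that recently ran this function: its code
         *    is probably still in local store and need not be re-fetched from
         *    shared memory by DMA. Track the least busy SPU as a fallback. */
        for (int i = 0; i < NUM_SPUS; i++) {
            if (spus[i].last_function_id == function_id)
                return i;
            if (spus[i].queue_depth < spus[least_busy].queue_depth)
                least_busy = i;
        }

        /* 3. Fall back to the least busy SPU. */
        return least_busy;
    }
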
    • 3. Invention application
    • System and method for balancing computational load across a plurality of processors
    • Publication No.: US20050081182A1
    • Publication date: 2005-04-14
    • Application No.: US10670826
    • Filing date: 2003-09-25
    • Inventors: Barry Minor, Mark Nutter, VanDung To
    • IPC: G06F9/44, G06F9/50
    • CPC: G06F9/5044
    • A system and method for balancing computational load across a plurality of processors. Source code subtasks are compiled into byte code subtasks whereby the byte code subtasks are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches which are 1) a brute force approach, 2) higher-level approach, or 3) processor availability approach. Each object code subtask is loaded in a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate the byte code subtask.
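As a rough illustration of the flow described above, the C sketch below assumes a two-type system (PU and SPU) and uses the processor-availability approach for the processor-type choice; the type names, the subtask_t layout, and translate_subtask are invented for the sketch, and the actual translation from byte code to object code is left as a stub.

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative processor types of a heterogeneous system. */
    typedef enum { PROC_PU, PROC_SPU } proc_type_t;

    /* A byte-code subtask located through the pointer the compiler left in the
     * byte-code file (here just an address and length into a shared library). */
    typedef struct {
        const unsigned char *byte_code;
        size_t length;
    } subtask_t;

    /* Processor-availability approach: run the subtask wherever a unit is free.
     * The brute-force and higher-level approaches would replace only this policy. */
    proc_type_t select_processor_type(int free_pus, int free_spus)
    {
        (void)free_pus;
        return (free_spus > 0) ? PROC_SPU : PROC_PU;
    }

    /* Stand-in for the runtime loader/translator that turns the byte-code
     * subtask into object code for the chosen processor type and runs it. */
    void translate_subtask(const subtask_t *t, proc_type_t target)
    {
        printf("translating %zu bytes for %s\n", t->length,
               target == PROC_SPU ? "SPU" : "PU");
    }
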
    • 4. Invention application
    • System and method for providing a persistent function server
    • Publication No.: US20060095718A1
    • Publication date: 2006-05-04
    • Application No.: US10942432
    • Filing date: 2004-09-16
    • Inventors: Michael Day, Mark Nutter, VanDung To
    • IPC: G06F15/00
    • CPC: G06F9/544, G06F8/52
    • A system and method for providing a persistent function server is provided. A multi-processor environment uses an interface definition language (idl) file to describe a particular function, such as an “add” function. A compiler uses the idl file to generate source code for use in marshalling and de-marshalling data between a main processor and a support processor. A header file is also created that corresponds to the particular function. The main processor includes parameters in the header file and sends the header file to the support processor. For example, a main processor may include two numbers in an “add” header file and send the “add” header file to a support processor that is responsible for performing math functions. In addition, the persistent function server capability of the support processor is programmable such that the support processor may be assigned to execute unique and complex functions.
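The request/response pattern in the abstract can be sketched in C as below. The add_header_t layout, FUNC_ADD, and support_processor_serve are assumptions made for the sketch; in the patent the header's layout and the marshalling code would be generated from the function's idl description, and the header would travel to the support processor rather than be passed as a local pointer.

    #include <stdint.h>
    #include <stdio.h>

    enum { FUNC_ADD = 1 };

    /* A per-function request header: function id plus marshalled parameters,
     * with room for the result the support processor writes back. */
    typedef struct {
        uint32_t function_id;
        int32_t  operand_a;
        int32_t  operand_b;
        int32_t  result;
    } add_header_t;

    /* Support-processor side: de-marshal the header and run the persistent
     * function it names. */
    void support_processor_serve(add_header_t *hdr)
    {
        if (hdr->function_id == FUNC_ADD)
            hdr->result = hdr->operand_a + hdr->operand_b;
    }

    int main(void)
    {
        /* Main-processor side: marshal two numbers into the "add" header and
         * hand it to the support processor. */
        add_header_t hdr = { FUNC_ADD, 2, 3, 0 };
        support_processor_serve(&hdr);
        printf("2 + 3 = %d\n", hdr.result);
        return 0;
    }
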
    • 5. Invention application
    • System and method for manipulating data with a plurality of processors
    • Publication No.: US20050071578A1
    • Publication date: 2005-03-31
    • Application No.: US10670840
    • Filing date: 2003-09-25
    • Inventors: Michael Day, Mark Nutter, VanDung To
    • IPC: G06F12/00, G06F15/167
    • CPC: G06F15/16
    • A system and a method for sharing a common system memory by a main processor and a plurality of secondary processors. The sharing of the common system memory enables the sharing of data between the processors. The data are loaded into the common memory by the main processor, which divides the data to be processed into data blocks. The size of the data blocks is equal to the size of the registers of the secondary processors. The main processor identifies an available secondary processor to process the first data block. The secondary processor processes the data block and returns the processed data block to the common system memory. The main processor may continue identifying available secondary processors and requesting the available secondary processors to process data blocks until all the data blocks have been processed.
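A minimal C sketch of the dispatch loop described above follows. The 16-byte block size (one 128-bit register's worth of data), the round-robin stand-in for "identify an available secondary processor", and the byte-negating stand-in for the secondary processor's work are all assumptions for illustration; the point is only that blocks are carved out of common memory, handed to free secondary processors, and the results land back in common memory.

    #include <stddef.h>

    #define REGISTER_BYTES 16   /* one register-sized block of data */
    #define NUM_SECONDARY   4

    /* Stand-in for "identify an available secondary processor". */
    static int find_available_secondary(void)
    {
        static int next = 0;
        return next++ % NUM_SECONDARY;
    }

    /* Stand-in for the secondary processor's work on one block; it operates
     * in place, so the result stays in common memory for the others to read. */
    static void secondary_process_block(int proc_id, unsigned char *block, size_t len)
    {
        (void)proc_id;
        for (size_t i = 0; i < len; i++)
            block[i] = (unsigned char)~block[i];
    }

    /* Main-processor loop: walk the shared buffer in register-sized blocks and
     * hand each block to whichever secondary processor is free. */
    void process_shared_buffer(unsigned char *common_memory, size_t total_bytes)
    {
        for (size_t offset = 0; offset < total_bytes; offset += REGISTER_BYTES) {
            size_t len = total_bytes - offset;
            if (len > REGISTER_BYTES)
                len = REGISTER_BYTES;
            secondary_process_block(find_available_secondary(),
                                    common_memory + offset, len);
        }
    }
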
    • 6. Invention application
    • System and method for ray tracing with depth buffered display
    • Publication No.: US20070035544A1
    • Publication date: 2007-02-15
    • Application No.: US11201651
    • Filing date: 2005-08-11
    • Inventors: Gordon Fossum, Barry Minor, VanDung To
    • IPC: G06T15/40
    • CPC: G06T15/08, G06T15/06, G06T15/405
    • A system and method for generating an image that includes ray traced pixel data and rasterized pixel data is presented. A synergistic processing unit (SPU) uses a rendering algorithm to generate ray traced data for objects that require high-quality image rendering. The ray traced data is fragmented, whereby each fragment includes a ray traced pixel depth value and a ray traced pixel color value. A rasterizer compares ray traced pixel depth values to corresponding rasterized pixel depth values, and overwrites ray traced pixel data with rasterized pixel data when the corresponding rasterized fragment is “closer” to a viewing point, which results in composite data. A display subsystem uses the resultant composite data to generate an image on a user's display.
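The compositing rule in the abstract, keep whichever fragment is closer to the viewing point, reduces to a per-pixel depth comparison. The fragment_t layout below (a float depth plus a packed colour) is an assumption for this sketch.

    #include <stdint.h>
    #include <stddef.h>

    /* One fragment as the abstract describes it: a pixel depth value and a
     * pixel colour value (smaller depth = closer to the viewing point). */
    typedef struct {
        float    depth;
        uint32_t color;
    } fragment_t;

    /* Start from the ray-traced fragments and overwrite a pixel with the
     * rasterized fragment whenever the rasterized one is closer; the result
     * is the composite image handed to the display subsystem. */
    void composite(fragment_t *ray_traced, const fragment_t *rasterized,
                   size_t pixel_count)
    {
        for (size_t i = 0; i < pixel_count; i++)
            if (rasterized[i].depth < ray_traced[i].depth)
                ray_traced[i] = rasterized[i];
    }
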
    • 7. Invention application
    • Input device for providing position information to information handling systems
    • Publication No.: US20070061101A1
    • Publication date: 2007-03-15
    • Application No.: US11225569
    • Filing date: 2005-09-13
    • Inventors: David Greene, Barry Minor, Blake Robertson, VanDung To
    • IPC: G01C17/00
    • CPC: G06F1/1626, G01S19/47, G06F1/1632, G06F1/1684, G06F2200/1637
    • An input device is disclosed, one embodiment of which provides position information to an information handling system (IHS). The position information includes both location information and spatial orientation information of the input device in real space. The input device includes a location sensor which determines the absolute location of the input device in x, y and z coordinates. The input device also includes a spatial orientation sensor that determines the spatial orientation of the input device in terms of yaw, pitch and roll. The input device further includes a processor that processes the location information and the spatial orientation information of the input device in real space to determine an image view from the perspective of the input device in virtual space. Movement of the input device in real space by a user causes a corresponding movement of an image view from the perspective of the input device in virtual space. The input device itself displays the image view, or alternatively, an IHS to which the input device couples displays the image view.
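The position information in the abstract amounts to a six-value pose. The pose_t struct and the simple uniform-scale mapping from real space to the virtual camera below are assumptions for this sketch; a real implementation would build a full view transform from the yaw, pitch and roll.

    /* Real-space pose reported by the input device: absolute location in x, y
     * and z plus spatial orientation as yaw, pitch and roll. */
    typedef struct {
        double x, y, z;
        double yaw, pitch, roll;
    } pose_t;

    /* Map the device's real-space pose to a virtual camera pose so that moving
     * the device in real space moves the image view correspondingly. */
    pose_t real_to_virtual(pose_t real, double scale)
    {
        pose_t cam = real;
        cam.x *= scale;   /* scale converts real-space distances to virtual-space distances */
        cam.y *= scale;
        cam.z *= scale;
        /* Orientation carries over directly: the view looks where the device points. */
        return cam;
    }
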
    • 8. Invention application
    • System and method for solving a large system of dense linear equations
    • Publication No.: US20050071404A1
    • Publication date: 2005-03-31
    • Application No.: US10670837
    • Filing date: 2003-09-25
    • Inventors: Mark Nutter, VanDung To
    • IPC: G06F7/38, G06F17/16
    • CPC: G06F17/16
    • A method and system for solving a large system of dense linear equations using a system having a processing unit and one or more secondary processing units that can access a common memory for sharing data. A set of coefficients corresponding to a system of linear equations is received, and the coefficients, after being placed in matrix form, are divided into blocks and loaded into the common memory. Each of the processors is programmed to perform matrix operations on individual blocks to solve the linear equations. A table containing a list of the matrix operations is created in the common memory to keep track of the operations that have been performed and the operations that are still pending. SPUs determine whether tasks are pending, access the coefficients in the common memory, perform the required operations, and store the result back in the common memory so that it is accessible by the PU and the other SPUs.
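The shared task table described above can be sketched as follows. The task_t fields, the status values, and the spu_worker loop are assumptions for this sketch (a real system would need an atomic claim of each entry); only the flow is taken from the abstract: each SPU scans the table for pending operations, works on the coefficient blocks directly in common memory, and marks the operation done so the PU and the other SPUs see the result.

    #include <stddef.h>

    typedef enum { TASK_PENDING, TASK_IN_PROGRESS, TASK_DONE } task_status_t;

    /* One entry of the table kept in common memory: which block-level matrix
     * operation to perform, which blocks it touches, and its status. */
    typedef struct {
        int            op;         /* e.g. factor a diagonal block, update a trailing block */
        int            block_row;
        int            block_col;
        task_status_t  status;
    } task_t;

    /* Loop run by each SPU: claim a pending operation, perform it on the
     * coefficient blocks in common memory, and mark it done. */
    void spu_worker(task_t *table, size_t num_tasks, double *blocks)
    {
        for (size_t i = 0; i < num_tasks; i++) {
            if (table[i].status != TASK_PENDING)
                continue;
            table[i].status = TASK_IN_PROGRESS;
            /* ...perform table[i].op on the blocks at (block_row, block_col),
             * reading and writing them directly in common memory... */
            (void)blocks;
            table[i].status = TASK_DONE;
        }
    }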