    • 1. Patent application
    • Title: System and method for terrain rendering using a limited memory footprint
    • Publication No.: US20050285852A1 (published 2005-12-29)
    • Application No.: US10875946 (filed 2004-06-24)
    • Inventors: Gordon Fossum, Barry Minor, Mark Nutter
    • IPC: G06F17/00; G06T1/00; G06T15/06; G06T15/00
    • CPC: G06T17/05; G06T15/06
    • Abstract: A system and method for terrain rendering using a limited memory footprint is presented. A system and method to perform vertical ray terrain rendering by using a terrain data subset for image point value calculations. Terrain data is segmented into terrain data subsets whereby the terrain data subsets are processed in parallel. A bottom view ray intersects the terrain data to provide a memory footprint starting point. In addition, environmental visibility settings provide a memory footprint ending point. The memory footprint starting point, the memory footprint ending point, and vertical ray adjacent data points define a terrain data subset that corresponds to a particular vertical ray. The terrain data subset includes height and color information which are used for vertical ray coherence terrain rendering.
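The abstract above bounds each vertical ray's working set by a footprint starting point (the bottom-view-ray intersection) and an ending point (the visibility limit). A minimal sketch of that subset selection, with all names illustrative rather than taken from the patent:

```python
def footprint_subset(heights, start_index, visibility):
    """Select the terrain-data subset one vertical ray needs.

    `heights` is a 1-D column of terrain samples along the ray's ground
    track; `start_index` stands in for the bottom-view-ray intersection
    and `visibility` for the environmental visibility limit. Subsets
    chosen this way are independent, so rays can be processed in
    parallel with a limited memory footprint each.
    """
    end_index = min(start_index + visibility, len(heights))
    return heights[start_index:end_index]
```

In the patent's scheme the subset would also carry color data alongside heights; the slice here only shows how the two footprint endpoints bound the data a ray touches.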
    • 3. Patent application
    • Title: System and method for partitioning processor resources based on memory usage
    • Publication No.: US20060095901A1 (published 2006-05-04)
    • Application No.: US11050020 (filed 2005-02-03)
    • Inventors: Daniel Brokenshire, Barry Minor, Mark Nutter
    • IPC: G06F9/45
    • CPC: G06F9/322; G06F8/451; G06F9/3851; G06F9/50
    • Abstract: A system and method for partitioning processor resources based on memory usage is provided. A compiler determines the extent to which a process is memory-bound and accordingly divides the process into a number of threads. When a first thread encounters a prolonged instruction, the compiler inserts a conditional branch to a second thread. When the second thread encounters a prolonged instruction, a conditional branch to a third thread is executed. This continues until the last thread conditionally branches back to the first thread. An indirect segmented register file is used so that the “return to” and “branch to” logical registers within each thread are the same (e.g., R1 and R2) for each thread. These logical registers are mapped to hardware registers that store actual addresses. The indirect mapping is altered to bypass completed threads. When the last thread completes it may signal an external process.
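The key mechanism above is a circular chain of threads whose indirect branch mapping is rewritten so that completed threads get bypassed. A minimal sketch of that remapping, modelling the mapping as a dictionary (names illustrative, not from the patent):

```python
def build_chain(n):
    """Circular branch-to chain: thread i branches to thread (i+1) mod n,
    so the last thread conditionally branches back to the first."""
    return {i: (i + 1) % n for i in range(n)}

def bypass(chain, done):
    """Alter the indirect mapping so predecessors of a completed thread
    branch straight to its successor, bypassing it."""
    target = chain.pop(done)
    for src, dst in chain.items():
        if dst == done:
            chain[src] = target
    return chain
```

In hardware this rewrite happens in the indirect segmented register file, so every thread can keep using the same logical "branch to" register while the underlying address changes.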
    • 4. Patent application
    • Title: System and method for hiding memory latency
    • Publication No.: US20060080661A1 (published 2006-04-13)
    • Application No.: US10960609 (filed 2004-10-07)
    • Inventors: Daniel Brokenshire, Harm Hofstee, Barry Minor, Mark Nutter
    • IPC: G06F9/46
    • CPC: G06F9/322; G06F8/41; G06F9/3851
    • Abstract: A system and method for hiding memory latency in a multi-thread environment is presented. Branch Indirect and Set Link (BISL) and/or Branch Indirect and Set Link if External Data (BISLED) instructions are placed in thread code during compilation at instances that correspond to a prolonged instruction. A prolonged instruction is an instruction that instigates latency in a computer system, such as a DMA instruction. When a first thread encounters a BISL or a BISLED instruction, the first thread passes control to a second thread while the first thread's prolonged instruction executes. In turn, the computer system masks the latency of the first thread's prolonged instruction. The system can be optimized based on the memory latency by creating more threads and further dividing a register pool amongst the threads to further hide memory latency in operations that are highly memory bound.
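The BISL/BISLED pattern above amounts to cooperative switching at each prolonged instruction: instead of stalling on a DMA, a thread hands control to the next one. A minimal simulation using generators, where `yield` stands in for the BISLED point (all names illustrative):

```python
def worker(name, log):
    # A "prolonged instruction" (e.g. a DMA transfer) is modelled as a
    # yield: the thread gives up control instead of stalling on it.
    log.append(f"{name}: issue DMA")
    yield
    log.append(f"{name}: DMA done")

def run_round_robin(workers):
    """Cooperative scheduler: at each yield (the BISLED point) control
    passes to the next thread, so one thread's latency is hidden
    behind another thread's useful work."""
    pending = list(workers)
    while pending:
        still_running = []
        for w in pending:
            try:
                next(w)
                still_running.append(w)
            except StopIteration:
                pass  # thread finished; drop it from the rotation
        pending = still_running
```

The interleaved log shows both threads issuing their transfers before either waits on completion, which is exactly the masking effect the abstract describes.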
    • 6. Patent application
    • Title: Apparatus and method for efficient communication of producer/consumer buffer status
    • Publication No.: US20070174411A1 (published 2007-07-26)
    • Application No.: US11340453 (filed 2006-01-26)
    • Inventors: Daniel Brokenshire, Charles Johns, Mark Nutter, Barry Minor
    • IPC: G06F15/167
    • CPC: G06F15/17337
    • Abstract: An apparatus and method for efficient communication of producer/consumer buffer status are provided. With the apparatus and method, devices in a data processing system notify each other of updates to head and tail pointers of a shared buffer region when the devices perform operations on the shared buffer region using signal notification channels of the devices. Thus, when a producer device that produces data to the shared buffer region writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of a consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel.
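The protocol above crosses the pointer updates: the producer pushes head updates to the consumer's channel, and the consumer pushes tail updates to the producer's channel. A minimal sketch over a fixed-size ring buffer (all names illustrative, not from the patent):

```python
from collections import deque

class Channel:
    """Signal-notification channel: a device reads pointer updates here."""
    def __init__(self):
        self.updates = deque()
    def write(self, value):
        self.updates.append(value)
    def read(self):
        return self.updates.popleft()

SIZE = 8  # shared buffer region capacity

def produce(buffer, head, item, consumer_channel):
    buffer[head % SIZE] = item
    head += 1
    consumer_channel.write(head)   # notify the consumer of the new head
    return head

def consume(buffer, tail, producer_channel):
    item = buffer[tail % SIZE]
    tail += 1
    producer_channel.write(tail)   # notify the producer of the new tail
    return tail, item
```

A blocking-mode channel would simply sleep in `read` until an update arrives, which is how the patent keeps an idle device in a low-power state.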
    • 7. Patent application
    • Title: System and method for managing position independent code using a software framework
    • Publication No.: US20060112368A1 (published 2006-05-25)
    • Application No.: US10988288 (filed 2004-11-12)
    • Inventors: Michael Gowen, Barry Minor, Mark Nutter, John Kevin O'Brien
    • IPC: G06F9/44
    • CPC: G06F9/44526
    • Abstract: A system and method for managing position independent code using a software framework is presented. A software framework provides the ability to cache multiple plug-ins which are loaded in a processor's local storage. A processor receives a command or data stream from another processor, which includes information corresponding to a particular plug-in. The processor uses the plug-in identifier to load the plug-in from shared memory into local memory before it is required in order to minimize latency. When the data stream requests the processor to use the plug-in, the processor retrieves a location offset corresponding to the plug-in and applies the plug-in to the data stream. A plug-in manager manages an entry point table that identifies memory locations corresponding to each plug-in and, therefore, plug-ins may be placed anywhere in a processor's local memory.
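The entry point table described above is what makes the code position independent: callers look a plug-in up by identifier, never by address. A minimal sketch of that indirection (class and field names are illustrative, not from the patent):

```python
class PluginManager:
    """Manages an entry-point table mapping plug-in ids to offsets in a
    processor's local store, so plug-ins may live at any offset."""
    def __init__(self):
        self.entry_points = {}   # plug-in id -> offset in local store
        self.local_store = {}    # offset -> plug-in code (a callable here)

    def load(self, plugin_id, offset, code):
        # Preload from "shared memory" into the local store before the
        # plug-in is needed, to minimize latency at use time.
        self.local_store[offset] = code
        self.entry_points[plugin_id] = offset

    def apply(self, plugin_id, data):
        # Position-independent dispatch: resolve the offset through the
        # entry-point table, then run the plug-in on the data stream.
        offset = self.entry_points[plugin_id]
        return self.local_store[offset](data)
```

Because only `entry_points` knows where a plug-in sits, relocating a plug-in is a one-entry table update with no change to caller code.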
    • 8. Patent application
    • Title: System and method for task queue management of virtual devices using a plurality of processors
    • Publication No.: US20050081202A1 (published 2005-04-14)
    • Application No.: US10670838 (filed 2003-09-25)
    • Inventors: Daniel Brokenshire, Michael Day, Barry Minor, Mark Nutter, VanDung To
    • IPC: G06F9/46
    • CPC: G06F9/505
    • Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.
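The assignment policy above combines code affinity with load: prefer an SPU that recently ran the function (its code is likely still in local store), and break ties by least-busy. A minimal sketch of that selection, with the SPU record fields (`busy`, `recent`, `id`) being illustrative names, not from the patent:

```python
def assign_spu(spus, function):
    """Pick an SPU for a virtual-device task: among SPUs that recently
    performed this function (warm local store, no DMA refetch needed),
    take the least busy; if none are warm, take the least busy overall."""
    warm = [s for s in spus if s["recent"] == function]
    pool = warm if warm else spus
    return min(pool, key=lambda s: s["busy"])
```

This captures only the selection step; in the patent the chosen SPU then reads requests from the task queue the manager assigned to it.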
    • 9. Patent application
    • Title: System and method for balancing computational load across a plurality of processors
    • Publication No.: US20050081182A1 (published 2005-04-14)
    • Application No.: US10670826 (filed 2003-09-25)
    • Inventors: Barry Minor, Mark Nutter, VanDung To
    • IPC: G06F9/44; G06F9/50
    • CPC: G06F9/5044
    • Abstract: A system and method for balancing computational load across a plurality of processors. Source code subtasks are compiled into byte code subtasks whereby the byte code subtasks are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches: 1) a brute-force approach, 2) a higher-level approach, or 3) a processor-availability approach. Each object code subtask is loaded in a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate the byte code subtask.
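Of the three selection approaches named above, the processor-availability approach is the most mechanical: translate each byte-code subtask for whichever processor type currently has the most free units. A minimal sketch under that assumption (processor-type names, the greedy tie-breaking, and `translators` are all illustrative, not from the patent):

```python
def schedule(subtasks, availability, translators):
    """Processor-availability approach: for each byte-code subtask, pick
    the processor type with the most free units, translate the subtask
    into that type's object code at runtime, and consume one unit."""
    plan = []
    for task in subtasks:
        ptype = max(availability, key=availability.get)  # most available type
        plan.append((task, ptype, translators[ptype](task)))
        availability[ptype] -= 1
    return plan
```

The brute-force and higher-level approaches from the abstract would replace only the `ptype` selection line; the translate-then-load flow stays the same.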