    • 1. Invention application
    • Title: RAPID CREATION AND CONFIGURATION OF MICROCONTROLLER PRODUCTS WITH CONFIGURABLE LOGIC DEVICES
    • Publication number: US20090106532A1
    • Publication date: 2009-04-23
    • Application number: US12294223
    • Filing date: 2007-03-21
    • Inventors: Ata R. Khan; Rob Cosaro; Joe Yu
    • IPC: G06F15/76; G06F9/02
    • CPC: G06F15/7842; G06F9/30181
    • Methods and apparatus suitable for rapid creation and configuration of microcontroller products, which include a microcontroller or similar computational resource, and configurable logic devices are described. Various embodiments of the present invention allow development of new microcontroller-based products and product families in a rapid and cost-effective manner, thereby enabling early entry of such products into the marketplace. An existing microcontroller block and existing configurable logic devices are combined to form a unique product, wherein the microcontroller block is operable to configure the configurable logic devices to form the desired unique hardware characteristics of the microcontroller-based product. The microcontroller block configures the configurable logic devices when the product is reset, and/or when a power-up condition is recognized.
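The abstract above describes a microcontroller block that writes a configuration into on-chip configurable logic devices whenever a reset or power-up condition is recognized. The C sketch below is a minimal host-side model of that configure-at-reset flow, not the patent's implementation; the flag names, register layout, and image contents are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define CLD_CONFIG_WORDS 8

/* Reset causes that boot code might recognize (hypothetical encoding). */
#define POWER_UP_FLAG  (1u << 0)
#define EXT_RESET_FLAG (1u << 1)

/* Stands in for the configuration registers of the configurable logic devices. */
static uint32_t cld_config[CLD_CONFIG_WORDS];

/* Product-specific configuration image, e.g. held alongside the firmware in
 * on-chip non-volatile memory (contents are arbitrary placeholder values).    */
static const uint32_t cld_image[CLD_CONFIG_WORDS] = {
    0xA5A50001u, 0x00000042u, 0x0000FF00u, 0x12345678u,
    0x00000000u, 0x00000001u, 0x000000C3u, 0x0000000Fu
};

/* Boot-time hook: configure the logic devices when a reset or power-up
 * condition is recognized, giving the part its product-specific hardware.    */
static void system_init(uint32_t reset_cause)
{
    if (reset_cause & (POWER_UP_FLAG | EXT_RESET_FLAG)) {
        for (int i = 0; i < CLD_CONFIG_WORDS; i++)
            cld_config[i] = cld_image[i];
    }
}

int main(void)
{
    system_init(POWER_UP_FLAG);   /* model a power-up reset */
    printf("CLD configuration word 0 after boot: 0x%08X\n", (unsigned)cld_config[0]);
    return 0;
}
```

In a real part the copy loop would target the logic devices' configuration interface rather than an array, but the control flow, configure on reset and/or power-up, is the point being illustrated.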
    • 2. Invention grant
    • Title: Dynamically selectable stack frame size for processor interrupts
    • Publication number: US06526463B1
    • Publication date: 2003-02-25
    • Application number: US09548988
    • Filing date: 2000-04-14
    • Inventors: Zhimin Ding; Gregory K. Goodhue; Ata R. Khan
    • IPC: G06F9/42
    • CPC: G06F9/342; G06F9/30043; G06F9/30163; G06F9/30189; G06F9/4486; G06F9/4812
    • A processing system with extended addressing capabilities includes a control bit that controls the number of address bytes that are stored onto a program stack. If the control bit is set to a first state, the address is pushed onto the program stack in the same manner as that used for shorter-address legacy devices. If the control bit is set to a second state, the address is pushed onto the program stack using the number of bytes required to contain a longer extended address. This same control bit controls the number of bytes that are popped off the stack upon return from an interrupt subroutine. The state of the control bit is controlled by one or more program instructions, thereby allowing it to assume each state dynamically. This dynamic control of the number of bytes pushed and popped to and from the stack allows for an optimization of stack utilization, and thereby further compatibility with legacy devices and applications.
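To make the mechanism in the abstract above concrete, the C model below pushes and pops a return address whose width is selected by a single mode bit, so legacy short frames and extended frames can coexist on one stack. The 2-byte and 3-byte widths, the names, and the stack layout are assumptions for illustration rather than the patent's actual encoding.

```c
#include <stdint.h>
#include <stdio.h>

#define STACK_SIZE 64

static uint8_t stack_mem[STACK_SIZE];
static int     sp = 0;             /* empty ascending stack, byte-granular */
static int     extended_mode = 0;  /* the dynamically writable control bit */

/* Interrupt entry: push the return address using 2 (legacy) or 3 (extended) bytes. */
static void push_return_address(uint32_t pc)
{
    int bytes = extended_mode ? 3 : 2;
    for (int i = 0; i < bytes; i++)
        stack_mem[sp++] = (uint8_t)(pc >> (8 * i));     /* low byte pushed first */
}

/* Return from interrupt: the same control bit governs how many bytes are popped. */
static uint32_t pop_return_address(void)
{
    int bytes = extended_mode ? 3 : 2;
    uint32_t pc = 0;
    for (int i = 0; i < bytes; i++)
        pc |= (uint32_t)stack_mem[--sp] << (8 * (bytes - 1 - i));
    return pc;
}

int main(void)
{
    extended_mode = 0;                  /* legacy-compatible short frames      */
    push_return_address(0x1234u);
    int legacy_frame = sp;
    unsigned legacy_pc = (unsigned)pop_return_address();

    extended_mode = 1;                  /* a program instruction flips the bit */
    push_return_address(0x012345u);
    int extended_frame = sp;
    unsigned extended_pc = (unsigned)pop_return_address();

    printf("legacy frame: %d bytes, pc=0x%04X\n", legacy_frame, legacy_pc);
    printf("extended frame: %d bytes, pc=0x%06X\n", extended_frame, extended_pc);
    return 0;
}
```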
    • 3. Invention grant
    • Title: Cyclically sequential memory prefetch
    • Publication number: US06643755B2
    • Publication date: 2003-11-04
    • Application number: US09788692
    • Filing date: 2001-02-20
    • Inventors: Gregory K. Goodhue; Ata R. Khan; John H. Wharton
    • IPC: G06F12/00
    • CPC: G06F9/3814; G06F9/3802; G06F9/381
    • A memory access architecture and technique employs multiple independent buffers that are configured to store items from memory sequentially. The memory is logically partitioned, and each independent buffer is associated with a corresponding memory partition. The partitioning is cyclically sequential, based on the total number of buffers, K, and the size of the buffers, N. The first N memory locations are allocated to the first partition; the next N memory locations to the second partition; and so on until the Kth partition. The next N memory locations, after the Kth partition, are allocated to the first partition; the next N locations are allocated to the second partition; and so on. When an item is accessed from memory, the buffer corresponding to the item's memory location is loaded from memory, and a prefetch of the next sequential partition commences to load the next buffer. During program execution, the ‘steady state’ of the buffer contents corresponds to a buffer containing the current instruction, one or more buffers containing instructions immediately following the current instruction, and one or more buffers containing instructions immediately preceding the current instruction. This steady state condition is particularly well suited for executing program loops, or a continuous sequence of program instructions, and other common program structures. The parameters K and N are selected to accommodate typically sized program loops.
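The partitioning rule summarized in the abstract above (partition index = address / N, buffer index = partition mod K, plus an automatic prefetch of the next sequential partition) can be modeled in a few lines of C. The sketch below is a toy software model with assumed values of K and N; it is not the hardware implementation.

```c
#include <stdint.h>
#include <stdio.h>

#define K 4                 /* number of independent buffers       */
#define N 8                 /* memory locations per partition      */
#define MEM_WORDS 256       /* modeled memory size (multiple of N) */

static uint32_t memory[MEM_WORDS];
static uint32_t buffers[K][N];
static int      tag[K];     /* which partition each buffer holds (-1 = empty) */

/* Models a (slow) fetch of one whole partition into its buffer. */
static void fill(int block)
{
    int buf = block % K;    /* cyclically sequential mapping of partitions to buffers */
    for (int i = 0; i < N; i++)
        buffers[buf][i] = memory[block * N + i];
    tag[buf] = block;
}

/* One access through the scheme: load on demand, then prefetch the next partition. */
static uint32_t fetch(uint32_t addr)
{
    int block = (int)(addr / N);
    int next  = (block + 1) % (MEM_WORDS / N);

    if (tag[block % K] != block) fill(block);   /* buffer for this location              */
    if (tag[next  % K] != next)  fill(next);    /* prefetch of the next sequential block */

    return buffers[block % K][addr % N];
}

int main(void)
{
    for (int i = 0; i < MEM_WORDS; i++) memory[i] = (uint32_t)i;
    for (int k = 0; k < K; k++) tag[k] = -1;

    /* A loop spanning fewer than K partitions stays resident after its first pass. */
    for (int pass = 0; pass < 3; pass++)
        for (uint32_t a = 16; a < 16 + 2 * N; a++)
            (void)fetch(a);

    printf("memory[20] seen through the buffers: %u\n", (unsigned)fetch(20));
    return 0;
}
```

In this model, a loop that spans fewer than K partitions hits the buffers on every access after its first pass, which corresponds to the steady state the abstract describes.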
    • 6. Invention grant
    • Title: Memory accelerator with two instruction set fetch path to prefetch second set while executing first set of number of instructions in access delay to instruction cycle ratio
    • Publication number: US07290119B2
    • Publication date: 2007-10-30
    • Application number: US10923284
    • Filing date: 2004-08-20
    • Inventors: Gregory K. Goodhue; Ata R. Khan; John H. Wharton; Robert Michael Kallal
    • IPC: G06F9/28
    • CPC: G06F9/3814; G06F9/3802; G06F9/381
    • A memory accelerator module buffers program instructions and/or data for high speed access using a deterministic access protocol. The program memory is logically partitioned into ‘stripes’, or ‘cyclically sequential’ partitions, and the memory accelerator module includes a latch that is associated with each partition. When a particular partition is accessed, it is loaded into its corresponding latch, and the instructions in the next sequential partition are automatically pre-fetched into their corresponding latch. In this manner, the performance of a sequential-access process will have a known response, because the pre-fetched instructions from the next partition will be in the latch when the program sequences to these instructions. Previously accessed blocks remain in their corresponding latches until the pre-fetch process ‘cycles around’ and overwrites the contents of each sequentially-accessed latch. In this manner, the performance of a loop process, with regard to memory access, will be determined based solely on the size of the loop. If the loop is below a given size, it will be executable without overwriting existing latches, and therefore will not incur memory access delays as it repeatedly executes instructions contained within the latches. If the loop is above a given size, it will overwrite existing latches containing portions of the loop, and therefore require subsequent re-loadings of the latch with each loop. Because the pre-fetch is automatic, and determined solely on the currently accessed instruction, the complexity and overhead associated with this memory acceleration is minimal.
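Because the striping scheme described above is deterministic, the number of slow program-memory fetches a loop incurs depends only on the loop's size relative to the latches. The C model below (with an assumed stripe count and width, not taken from the patent) counts demand loads and prefetches for a small loop that fits in the latches and a larger loop that does not.

```c
#include <stdint.h>
#include <stdio.h>

#define STRIPES      4          /* latches, one per stripe         */
#define STRIPE_WORDS 4          /* instructions held by each latch */
#define FLASH_WORDS  1024       /* modeled program memory size     */

static int  latch_tag[STRIPES]; /* which flash stripe each latch currently holds */
static long slow_fetches;       /* fetches that had to go out to program memory  */

/* Models loading one stripe of program memory into its latch (the slow path). */
static void load_latch(int stripe)
{
    latch_tag[stripe % STRIPES] = stripe;
    slow_fetches++;
}

/* One instruction fetch through the accelerator: demand-load the stripe that
 * holds pc, then automatically prefetch the next sequential stripe.          */
static void execute(uint32_t pc)
{
    int stripe = (int)(pc / STRIPE_WORDS);
    int next   = (stripe + 1) % (FLASH_WORDS / STRIPE_WORDS);

    if (latch_tag[stripe % STRIPES] != stripe) load_latch(stripe);
    if (latch_tag[next % STRIPES]   != next)   load_latch(next);
}

/* Runs a loop of `len` instructions starting at `start` for `iterations` passes
 * and reports how many slow fetches the accelerator model performed.          */
static long run_loop(uint32_t start, uint32_t len, int iterations)
{
    for (int s = 0; s < STRIPES; s++) latch_tag[s] = -1;
    slow_fetches = 0;
    for (int it = 0; it < iterations; it++)
        for (uint32_t pc = start; pc < start + len; pc++)
            execute(pc);
    return slow_fetches;
}

int main(void)
{
    /* A loop that fits within the latches stops missing after its first pass;
     * a larger loop keeps overwriting latches and re-fetches on every pass.   */
    printf("small loop (8 instructions), 100 passes: %ld slow fetches\n",
           run_loop(32, 8, 100));
    printf("large loop (32 instructions), 100 passes: %ld slow fetches\n",
           run_loop(32, 32, 100));
    return 0;
}
```

Running this model shows the small loop settling to zero slow fetches after its first pass, while the large loop keeps reloading latches on every pass, mirroring the loop-size behavior the abstract describes.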