    • 21. Granted Patent
    • Descriptor prefetch mechanism for high latency and out of order DMA device
    • Publication: US07620749B2 (2009-11-17)
    • Application: US11621789 (filed 2007-01-10)
    • Inventors: Giora Biran; Luis E. De la Torre; Bernard C. Drerup; Jyoti Gupta; Richard Nicholas
    • IPC: G06F13/28
    • CPC: G06F13/28
    • A DMA device prefetches descriptors into a descriptor prefetch buffer. The buffer is sized to hold an appropriate number of descriptors for a given latency environment. To support a linked list of descriptors, the DMA engine prefetches descriptors on the assumption that they are sequential in memory and discards any descriptor found to violate this assumption. The DMA engine seeks to keep the prefetch buffer full by requesting multiple descriptors per transaction whenever possible. The bus engine fetches these descriptors from system memory and writes them to the prefetch buffer. The DMA engine may also use an aggressive prefetch, in which the bus engine requests the maximum number of descriptors the buffer will support whenever there is any space in the prefetch buffer; the DMA device discards any remaining descriptors that cannot be stored.
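The prefetch policy the abstract describes can be sketched as a small Python model. The function name, parameters, and the exact request rule are illustrative assumptions, not taken from the patent claims:

```python
def descriptors_to_request(capacity, occupied, remaining, aggressive=False):
    """How many descriptors the bus engine should fetch next.

    `capacity` is the prefetch-buffer size, `occupied` is how many slots
    are already filled, and `remaining` is how many descriptors are left
    in the (assumed-sequential) linked list. In aggressive mode the
    engine asks for the buffer maximum whenever any slot is free and
    discards whatever does not fit on arrival."""
    free = capacity - occupied
    if free == 0:
        return 0
    if aggressive:
        # Request the buffer maximum; descriptors that cannot be stored
        # are simply discarded and re-fetched later.
        return min(capacity, remaining)
    # Conservative: request as many as will fit, in one transaction.
    return min(free, remaining)
```

With an 8-entry buffer holding 5 descriptors, the conservative policy requests 3 while the aggressive policy requests 8 and accepts losing the overflow.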
    • 22. Patent Application
    • Barrier and Interrupt Mechanism for High Latency and Out of Order DMA Device
    • Publication: US20080168191A1 (2008-07-10)
    • Application: US11621776 (filed 2007-01-10)
    • Inventors: Giora Biran; Luis E. De la Torre; Bernard C. Drerup; Jyoti Gupta; Richard Nicholas
    • IPC: G06F13/28; G06F12/14
    • CPC: G06F13/28
    • A direct memory access (DMA) device includes a barrier and interrupt mechanism that allows interrupt and mailbox operations to occur in such a way that ensures correct operation, but still allows for high performance out-of-order data moves to occur whenever possible. Certain descriptors are defined to be “barrier descriptors.” When the DMA device encounters a barrier descriptor, it ensures that all of the previous descriptors complete before the barrier descriptor completes. The DMA device further ensures that any interrupt generated by a barrier descriptor will not assert until the data move associated with the barrier descriptor completes. The DMA controller only permits interrupts to be generated by barrier descriptors. The barrier descriptor concept also allows software to embed mailbox completion messages into the scatter/gather linked list of descriptors.
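The barrier-descriptor ordering rule can be illustrated with a simple Python simulation that generates one legal completion order. The shuffle stands in for out-of-order completion; the structure and names are my simplifications, not the patent's:

```python
import random

def completion_order(descriptors, seed=0):
    """Produce one legal completion order and the interrupts it raises.

    `descriptors` is a list of ("data" | "barrier", id) pairs in program
    order. Data moves between barriers may finish in any order, but a
    barrier finishes only after everything before it has, and only
    barriers may raise an interrupt."""
    rng = random.Random(seed)
    order, interrupts, window = [], [], []
    for kind, ident in descriptors:
        if kind == "data":
            window.append(ident)        # free to complete out of order
        else:
            rng.shuffle(window)         # earlier moves, in any order
            order.extend(window)
            order.append(ident)         # barrier completes only after them
            interrupts.append(ident)    # interrupt asserts only now
            window = []
    rng.shuffle(window)
    order.extend(window)                # trailing data moves, any order
    return order, interrupts
```

However the data moves interleave, the barrier always completes after every earlier descriptor, and it is the only descriptor that asserts an interrupt.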
    • 23. Granted Patent
    • Selective snooping by snoop masters to locate updated data
    • Publication: US07395380B2 (2008-07-01)
    • Application: US10393116 (filed 2003-03-20)
    • Inventors: James N. Dieffenderfer; Bernard C. Drerup; Jaya P. Ganasan; Richard G. Hofmann; Thomas A. Sartorius; Thomas P. Speier; Barry J. Wolford
    • IPC: G06F12/00; G06F3/00
    • CPC: G06F12/0831; Y02D10/13
    • A method and structure for snooping the cache memories of several snooping masters connected to a bus macro, wherein each non-originating snooping master has a cache memory; some, but fewer than all, of the cache memories may have the data requested by an originating snooping master; the needed data in a non-originating snooping master is marked as updated; and a main memory having addresses for all data is connected to the bus macro. Only those non-originating snooping masters which may have the requested data are queried. All the non-originating snooping masters that have been queried reply. If a non-originating snooping master has the requested data marked as updated, that master returns the updated data to the originating snooping master and possibly to the main memory. If none of the non-originating snooping masters has the requested data marked as updated, the requested data is read from main memory.
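A minimal Python sketch of the selective-snoop flow, assuming the set of masters that "may have" the line is produced elsewhere (e.g. by a directory or address filter); the data model is mine, not the patent's:

```python
def snoop_read(address, candidates, caches, main_memory):
    """Query only the snoop masters that may hold `address`.

    `candidates` lists the non-originating masters to query, and
    `caches` maps master -> {address: (data, updated_flag)}. If a
    queried master holds the line marked updated, its copy wins (and is
    written back to main memory, as the abstract allows); otherwise the
    request falls through to main memory."""
    for master in candidates:
        line = caches.get(master, {}).get(address)
        if line is not None:
            data, updated = line
            if updated:
                main_memory[address] = data   # optional write-back
                return data
    # No queried master had an updated copy: read main memory.
    return main_memory[address]
```

Masters outside `candidates` are never queried at all, which is the bandwidth saving the patent is after.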
    • 24. Granted Patent
    • Method and apparatus for bus access allocation
    • Publication: US07065595B2 (2006-06-20)
    • Application: US10249271 (filed 2003-03-27)
    • Inventors: Bernard C. Drerup; Jaya P. Ganasan; Richard G. Hofmann
    • IPC: G06F13/362
    • CPC: G06F13/3625
    • A method for granting access to a bus is disclosed in which a fair arbitration scheme is modified to account for varying conditions. Each bus master (BM) is assigned a Grant Balance Factor (GBF) value that corresponds to its desired share of bus bandwidth. Arbitration gives priority to BMs with a GBF greater than zero in a stratified protocol in which requesting BMs with the same highest priority are granted access first. The GBF of a BM is decremented each time an access is granted. Requesting BMs with a GBF equal to zero are fairly arbitrated when there are no requesting BMs with GBFs greater than zero, receiving equal access using a frozen arbiter status. The bus access time may be partitioned into bus intervals (BIs), each comprising N clock cycles. BIs and GBFs may be modified to guarantee balanced access over multiple BIs in response to error conditions or interrupts.
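One way to read the GBF arbitration rule as code is the Python sketch below. The tie-break among equal nonzero GBFs and the round-robin fallback are my simplifications; the patent leaves those policies to the arbiter implementation:

```python
from collections import deque

def arbitrate(requesters, gbf, rr_queue):
    """Grant one bus access.

    `requesters` lists the requesting bus masters, `gbf` maps
    master -> remaining Grant Balance Factor, and `rr_queue` (a deque)
    provides fair round-robin among masters whose GBF is zero."""
    prioritized = [m for m in requesters if gbf[m] > 0]
    if prioritized:
        # Masters with GBF > 0 have strict priority; pick the largest.
        winner = max(prioritized, key=lambda m: gbf[m])
        gbf[winner] -= 1                 # each grant spends one credit
        return winner
    # Fair fallback: round-robin among the zero-GBF requesters.
    for _ in range(len(rr_queue)):
        m = rr_queue[0]
        rr_queue.rotate(-1)
        if m in requesters:
            return m
    return None
```

With `gbf = {"A": 2, "B": 0, "C": 1}`, the credits are consumed first and only then do the zero-GBF masters share the bus evenly.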
    • 25. Granted Patent
    • System crash detect and automatic reset mechanism for processor cards
    • Publication: US5333285A (1994-07-26)
    • Application: US795562 (filed 1991-11-21)
    • Inventors: Bernard C. Drerup
    • IPC: G06F1/24; G06F11/00; G06F11/14
    • CPC: G06F11/1415; G06F11/0757; G06F11/0763
    • A hardware and software mechanism is provided for ensuring that a feature processor card, included with other feature cards in a host system, can be reset without interrupting software running on the other feature cards. A delay timer is provided that starts counting each time the watchdog timer expires. If the watchdog timer is reset by an interrupt service routine, then the feature card processor is assumed to be reset. But if the watchdog timer is not reset before the delay timer expires, then it is assumed that the service routine is corrupt and that an external reset of the feature card is required. Upon expiration of the watchdog, an error signal is sent, via the system bus, to the host CPU. Recovery code resident on the host CPU is then run and resets the CPU on the feature card. A reset signal is output from the host CPU, via the system bus, to a reset register on the feature card, which then forwards the signal to the feature card CPU, thereby initiating reset of the system.
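The two-timer decision the abstract describes reduces to a small predicate; the Python below is a behavioral model with hypothetical names, using abstract time units rather than real hardware timers:

```python
def needs_external_reset(watchdog_expiry, service_reset_time, delay):
    """Decide whether the host must externally reset the feature card.

    A delay timer starts counting at `watchdog_expiry`. If the interrupt
    service routine resets the watchdog before the delay elapses, the
    card is assumed to recover on its own; otherwise the routine is
    presumed corrupt and a host-driven reset is required. Pass
    `service_reset_time=None` if the reset never happened."""
    if service_reset_time is not None and service_reset_time < watchdog_expiry + delay:
        return False   # watchdog serviced in time: no host intervention
    return True        # delay expired first: host must reset the card
```

So a service-routine reset inside the delay window keeps the host out of the loop; a late reset (or none at all) triggers the error signal to the host CPU.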
    • 26. Granted Patent
    • Structure for piggybacking multiple data tenures on a single data bus grant to achieve higher bus utilization
    • Publication: US07987437B2 (2011-07-26)
    • Application: US12112818 (filed 2008-04-30)
    • Inventors: Bernard C. Drerup; Richard Nicholas
    • IPC: G06F17/50; G06F13/36; G06F13/00; H04L12/28
    • CPC: G06F13/364
    • A design structure for piggybacking multiple data tenures on a single data bus grant to achieve higher bus utilization is disclosed. In one embodiment of the design structure, a method in a computer-aided design system includes a source device sending a request for a bus grant to deliver first data to a data bus connecting the source device and a destination device. The source device receives the bus grant, and logic within the device determines whether the data bus bandwidth allocated to the grant will be filled by the first data. If it will not, the device appends additional data to the first data and delivers the combined data to the data bus during the bus grant for the first data. If the bandwidth allocated to the grant will be filled by the first data, the device delivers only the first data during the grant.
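The piggybacking decision can be sketched in a few lines of Python. Measuring transfers in "beats" and serving the extra queue first-come-first-served are my modeling assumptions, not details from the patent:

```python
def build_tenure(first, extra_queue, grant_beats):
    """Assemble the data tenures delivered under one bus grant.

    `first` is the beat count of the data the grant was issued for,
    `extra_queue` holds beat counts of other pending transfers, and
    `grant_beats` is the bandwidth one grant allocates. If `first`
    leaves slack, queued transfers ride along until the grant is full;
    transfers that ride along are consumed from the queue."""
    tenure, used = [first], first
    while extra_queue and used + extra_queue[0] <= grant_beats:
        nxt = extra_queue.pop(0)
        tenure.append(nxt)      # piggyback on the same grant
        used += nxt
    return tenure
```

A 4-beat transfer under an 8-beat grant can carry a queued 2-beat transfer with it, while an 8-beat transfer fills its grant and carries nothing extra.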