    • 1. Patent application
    • Priority control in resource allocation for low request rate, latency-sensitive units
    • Publication number: US20070101033A1
    • Publication date: 2007-05-03
    • Application number: US11260579
    • Filing date: 2005-10-27
    • Inventors: Wen-Tzer Chen, Charles Johns, Ram Raghavan, Andrew Wottreng
    • Classifications: G06F13/14; G06F13/362
    • A mechanism for priority control in resource allocation for low request rate, latency-sensitive units is provided. With this mechanism, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
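The arbitration scheme in the abstract above can be sketched in a few lines: pending-request flags per unit and per priority stand in for the "storage devices," and per-priority pointers rotate past each winner so requesters at the same priority take turns. This is a minimal illustrative model, not the patented hardware; all class and method names are assumptions.

```python
# Sketch of round-robin token granting within a resource access group (RAG),
# preferring high-priority requests. Names are illustrative, not from the patent.

class TokenArbiter:
    """Tracks pending requests per priority and grants tokens round-robin."""

    def __init__(self, num_units):
        self.num_units = num_units
        # One pending flag per unit, per priority level (the "storage devices").
        self.pending = {"high": [False] * num_units,
                        "low": [False] * num_units}
        # Round-robin pointers: next unit to consider at each priority.
        self.pointer = {"high": 0, "low": 0}

    def request(self, unit, priority):
        self.pending[priority][unit] = True

    def grant(self):
        """Grant the token to one pending unit, high priority first,
        rotating the pointer past the winner so units take turns."""
        for priority in ("high", "low"):
            ptr = self.pointer[priority]
            for offset in range(self.num_units):
                unit = (ptr + offset) % self.num_units
                if self.pending[priority][unit]:
                    self.pending[priority][unit] = False
                    self.pointer[priority] = (unit + 1) % self.num_units
                    return unit, priority
        return None  # no pending requests


arb = TokenArbiter(num_units=4)
arb.request(2, "low")
arb.request(1, "high")
arb.request(3, "high")
print(arb.grant())  # high-priority unit 1 wins first
print(arb.grant())  # then high-priority unit 3, by round-robin
print(arb.grant())  # finally the low-priority request from unit 2
```

Note that strict priority between the two levels plus round-robin within each level matches the fairness scheme the abstract describes; a real token manager would also rate-limit token generation per RAG.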
    • 8. Patent application
    • Hardware Assisted Exception for Software Miss Handling of an I/O Address Translation Cache Miss
    • Publication number: US20070260754A1
    • Publication date: 2007-11-08
    • Application number: US11279614
    • Filing date: 2006-04-13
    • Inventors: John Irish, Chad McBride, Andrew Wottreng
    • Classifications: G06F3/00; G06F12/1081; G06F12/1027
    • Embodiments of the present invention generally provide an improved technique to handle I/O address translation cache misses caused by I/O commands within a CPU. For some embodiments, CPU hardware may buffer I/O commands that cause an I/O address translation cache miss in a command queue until the I/O address translation cache is updated with the necessary information. When the I/O address translation cache has been updated, the CPU may reissue the I/O command from the command queue, translate the address of the I/O command at a convenient time, and execute the command as if a cache miss did not occur. This way, the I/O device does not need to handle an error response from the CPU; the I/O command is handled by the CPU rather than discarded.
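The buffer-and-replay behavior described above can be modeled with a small queue: a command that misses in the translation cache is parked rather than rejected, and installing the translation replays any queued commands that can now complete. This is a hypothetical sketch under simple assumptions (4 KiB pages, dictionary as the cache); the class and method names are not from the patent text.

```python
# Sketch of buffering I/O commands on a translation-cache miss and replaying
# them once software installs the mapping. Names are illustrative assumptions.

from collections import deque


class IOTranslationUnit:
    def __init__(self):
        self.cache = {}            # virtual page -> physical page
        self.miss_queue = deque()  # commands waiting for a translation

    def issue(self, command):
        """Try to translate and execute; on a miss, queue the command
        instead of returning an error to the I/O device."""
        page = command["addr"] >> 12
        if page in self.cache:
            return self._execute(command)
        self.miss_queue.append(command)
        return None  # software miss handler takes over from here

    def install_translation(self, virt_page, phys_page):
        """Software installs the mapping; queued commands are then
        replayed as if the miss had never occurred."""
        self.cache[virt_page] = phys_page
        results, still_waiting = [], deque()
        while self.miss_queue:
            cmd = self.miss_queue.popleft()
            if (cmd["addr"] >> 12) in self.cache:
                results.append(self._execute(cmd))
            else:
                still_waiting.append(cmd)
        self.miss_queue = still_waiting
        return results

    def _execute(self, command):
        page = self.cache[command["addr"] >> 12]
        return (command["op"], (page << 12) | (command["addr"] & 0xFFF))


unit = IOTranslationUnit()
print(unit.issue({"op": "read", "addr": 0x1234}))  # None: miss, command queued
print(unit.install_translation(0x1, 0x40))         # replays the queued read
```

The key property the abstract claims is visible here: `issue` never returns an error to the device; the command simply waits in the queue until the translation exists.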
    • 10. Patent application
    • Low-cost cache coherency for accelerators
    • Publication number: US20070226424A1
    • Publication date: 2007-09-27
    • Application number: US11388013
    • Filing date: 2006-03-23
    • Inventors: Scott Clark, Andrew Wottreng
    • Classifications: G06F13/28; G06F12/0817; G06F2212/1016
    • Embodiments of the invention provide methods and systems for reducing the consumption of inter-node bandwidth by communications maintaining coherence between accelerators and CPUs. The CPUs and the accelerators may be clustered on separate nodes in a multiprocessing environment. Each node that contains a shared memory device may maintain a directory to track blocks of shared memory that may have been cached at other nodes. Therefore, commands and addresses may be transmitted to processors and accelerators at other nodes only if a memory location has been cached outside of a node. Additionally, because accelerators generally do not access the same data as CPUs, only initial read, write, and synchronization operations may be transmitted to other nodes. Intermediate accesses to data may be performed non-coherently. As a result, the inter-chip bandwidth consumed for maintaining coherence may be reduced.