    • 3. Invention application
    • Title: Processing of cacheable streaming data
    • Publication: US20070150653A1, published 2007-06-28
    • Application: US11315853, filed 2005-12-22
    • Inventors: Niranjan Cooray, Jack Doweck, Mark Buxton, Varghese George
    • IPC: G06F12/00
    • CPC: G06F12/0888, G06F12/0831, G06F12/0859
    • Abstract: According to one embodiment of the invention, a method is disclosed for receiving a request for cacheable memory type data in a cache-controller in communication with a first cache memory; obtaining the requested data from a first memory device in communication with the first cache memory if the requested data does not reside in at least one of the cache-controller and the first cache memory; allocating a data storage buffer in the cache-controller for storage of the obtained data; and setting the allocated data storage buffer to a streaming data mode if the obtained data is streaming data, to prevent unrestricted placement of the obtained streaming data into the first cache memory.
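The abstract above describes a cache controller that allocates a fill buffer on a miss and, when the fill is streaming data, puts that buffer into a streaming mode so the line is not installed into the first-level cache without restriction. Below is a minimal C sketch of that control flow under stated assumptions: the buffer count, the names (fill_buffer_t, handle_cacheable_read, request_is_streaming), and the helper functions are all hypothetical illustrations, not taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration only: a fill buffer allocated by the cache
 * controller can be flagged as "streaming", and data held in a streaming
 * buffer is returned to the requester without being installed into the
 * first-level cache. */

#define NUM_FILL_BUFFERS 8
#define LINE_SIZE        64

typedef struct {
    bool     valid;
    bool     streaming_mode;   /* set when the obtained data is streaming data */
    uint64_t line_addr;        /* cache-line-aligned address                   */
    uint8_t  data[LINE_SIZE];
} fill_buffer_t;

static fill_buffer_t fill_buffers[NUM_FILL_BUFFERS];

/* Stand-ins for structures the sketch does not model in detail. */
bool cache_lookup(uint64_t line_addr, uint8_t *out);          /* first cache memory  */
void cache_install(uint64_t line_addr, const uint8_t *data);  /* normal cache fill   */
void memory_read(uint64_t line_addr, uint8_t *out);           /* first memory device */
bool request_is_streaming(uint64_t line_addr);                /* e.g. access-pattern hint */

static fill_buffer_t *allocate_fill_buffer(uint64_t line_addr)
{
    for (size_t i = 0; i < NUM_FILL_BUFFERS; i++) {
        if (!fill_buffers[i].valid) {
            fill_buffers[i].valid = true;
            fill_buffers[i].line_addr = line_addr;
            return &fill_buffers[i];
        }
    }
    return NULL; /* a real controller would stall or retry */
}

/* Handle a cacheable read request for one line. */
void handle_cacheable_read(uint64_t line_addr, uint8_t *out)
{
    /* 1. Hit in the first cache or in an existing fill buffer: return it. */
    if (cache_lookup(line_addr, out))
        return;
    for (size_t i = 0; i < NUM_FILL_BUFFERS; i++) {
        if (fill_buffers[i].valid && fill_buffers[i].line_addr == line_addr) {
            memcpy(out, fill_buffers[i].data, LINE_SIZE);
            return;
        }
    }

    /* 2. Miss: obtain the line from memory into a newly allocated buffer. */
    fill_buffer_t *fb = allocate_fill_buffer(line_addr);
    if (fb == NULL) {                 /* no free buffer: serve the request directly */
        memory_read(line_addr, out);
        return;
    }
    memory_read(line_addr, fb->data);

    /* 3. If the data is streaming, mark the buffer so the line is NOT placed
     *    into the first cache without restriction; otherwise fill normally. */
    fb->streaming_mode = request_is_streaming(line_addr);
    if (!fb->streaming_mode)
        cache_install(line_addr, fb->data);

    memcpy(out, fb->data, LINE_SIZE);
}
```

The design choice sketched here is the one the abstract hinges on: streaming data still flows through the controller's buffers and is delivered to the requester, but only non-streaming fills are written back into the first cache memory.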
    • 5. Invention application
    • Title: Method and apparatus for a trace cache trace-end predictor
    • Publication: US20050044318A1, published 2005-02-24
    • Application: US10646033, filed 2003-08-22
    • Inventors: Subramaniam Maiyuran, Peter Smith, Niranjan Cooray, Asim Nisar
    • IPC: G06F9/38, G06F12/08
    • CPC: G06F9/3802, G06F9/3808
    • Abstract: A method and apparatus for a trace end predictor for a trace cache is disclosed. In one embodiment, the trace end predictor may have one or more buffers to contain a head address for a subsequent trace. The head address may include the way number and set number of the next head, along with partial stew data to support additional execution predictors. The buffers may also include tag data of the current trace's tail address, and may additionally include control bits for determining whether to replace the buffer's contents with information from another trace's tail. Reading the next head address from the trace end predictor, as opposed to reading it from the trace cache array, may reduce certain execution time delays.
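The abstract above lists what each predictor buffer holds: the way and set of the next trace's head, tag data from the current trace's tail address, partial "stew" history bits, and control bits that govern replacement. The C sketch below illustrates one possible shape for such a structure; the names (tep_entry_t, tep_predict_next_head, tep_update), the entry count, and the replacement policy are assumptions for illustration, not the patented design.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical trace-end predictor entry: caches the head location
 * (way + set) of the trace expected to follow the current one, tagged by
 * the current trace's tail address, plus partial "stew" history bits and
 * replacement control bits. */

#define TEP_ENTRIES 4

typedef struct {
    bool     valid;
    uint32_t tail_tag;       /* tag bits of the current trace's tail address  */
    uint8_t  next_way;       /* way number of the next trace's head           */
    uint16_t next_set;       /* set number of the next trace's head           */
    uint16_t partial_stew;   /* partial branch-history ("stew") bits          */
    uint8_t  replace_ctrl;   /* control bits deciding whether this entry may  */
                             /* be overwritten with another trace's tail info */
} tep_entry_t;

static tep_entry_t tep[TEP_ENTRIES];

/* Look up the predicted next-head location for the trace ending at tail_tag.
 * On a hit the fetch logic can index the trace cache directly with (set, way)
 * instead of waiting on a full trace-cache array read. */
bool tep_predict_next_head(uint32_t tail_tag, uint16_t *set, uint8_t *way)
{
    for (int i = 0; i < TEP_ENTRIES; i++) {
        if (tep[i].valid && tep[i].tail_tag == tail_tag) {
            *set = tep[i].next_set;
            *way = tep[i].next_way;
            return true;
        }
    }
    return false; /* fall back to the normal trace cache lookup path */
}

/* Record the observed next head once the current trace's successor is known;
 * replace_ctrl decides whether an existing entry may be displaced. */
void tep_update(uint32_t tail_tag, uint16_t set, uint8_t way, uint16_t stew)
{
    int victim = 0;
    for (int i = 0; i < TEP_ENTRIES; i++) {
        if (!tep[i].valid) { victim = i; break; }
        if (tep[i].replace_ctrl == 0) victim = i;  /* simple replacement rule */
    }
    tep[victim] = (tep_entry_t){ .valid = true, .tail_tag = tail_tag,
                                 .next_way = way, .next_set = set,
                                 .partial_stew = stew, .replace_ctrl = 1 };
}
```

As the abstract notes, the payoff of keeping this small side table is latency: a hit supplies the next head's set and way without a read of the trace cache array itself.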