    • 1. Granted invention patent
    • Branch cache
    • Grant number: US5506976A
    • Grant date: 1996-04-09
    • Application number: US303230
    • Filing date: 1994-09-08
    • Inventor: David V. Jaggar
    • IPC: G06F12/08; G06F9/38; G06F13/00
    • CPC: G06F9/3806; G06F9/3844
    • Abstract: A pipeline processor 2 having an associated branch cache 4 is provided. Each cache line 12 of the branch cache stores a cache TAG, a next branch data value R, a target address value TA and a target instruction value TI. The next branch data value indicates when the next branch instruction will be encountered in the stream of instructions fed to the pipeline processor. This data is used such that, following a branch cache hit, no further reading of the branch cache is made until the next branch data indicates that the next branch instruction should have been reached. At this stage, the branch cache 4 is read to see if it contains corresponding data for that next branch instruction, which would avoid the need to decode that next branch instruction before instructions from its target address can be fed into the pipeline. Avoiding the need to read the branch cache for every instruction fed into the pipeline saves power.
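The next-branch-data scheme in the abstract above can be sketched in a few lines of Python. This is a simplified illustration, not the patented hardware: the class and field names are invented, and the cache is modeled as a dictionary. Each line holds the TAG, next branch data R, target address TA and target instruction TI described in the abstract; after a hit, the cache is not read again until R further instructions have been fetched, which is where the power saving comes from.

```python
# Illustrative sketch of the next-branch-data idea from US5506976A.
# All names and structures here are invented for illustration.

class BranchCacheLine:
    def __init__(self, tag, next_branch, target_addr, target_instr):
        self.tag = tag                    # address of the branch instruction (TAG)
        self.next_branch = next_branch    # instructions until the next branch (R)
        self.target_addr = target_addr    # branch target address (TA)
        self.target_instr = target_instr  # first instruction at the target (TI)

class BranchCache:
    def __init__(self):
        self.lines = {}
        self.reads = 0         # count cache reads, to show the saving
        self.countdown = None  # fetches remaining before the next read

    def insert(self, line):
        self.lines[line.tag] = line

    def fetch(self, addr):
        """Return (target_addr, target_instr) on a hit, else None."""
        if self.countdown is not None and self.countdown > 0:
            self.countdown -= 1  # skip the cache read entirely
            return None
        self.reads += 1
        line = self.lines.get(addr)
        if line is not None:
            # Hit: suppress further reads until the next branch is due.
            self.countdown = line.next_branch
            return line.target_addr, line.target_instr
        return None

# Usage: one branch at 0x100, next branch expected 3 instructions later.
bc = BranchCache()
bc.insert(BranchCacheLine(0x100, 3, 0x200, "target-instr"))
hit = bc.fetch(0x100)            # read 1: hit, countdown armed
for a in (0x104, 0x108, 0x10C):  # three fetches with no cache read at all
    bc.fetch(a)
```

A conventional branch cache would have performed four reads here; the countdown reduces that to one, at the cost of storing R in each line.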
    • 3. Granted invention patent
    • Multiple instruction set mapping
    • Grant number: US5568646A
    • Grant date: 1996-10-22
    • Application number: US308838
    • Filing date: 1994-09-19
    • Inventor: David V. Jaggar
    • IPC: G06F9/38; G06F9/30; G06F9/318; G06F9/00
    • CPC: G06F9/30174; G06F9/30189; G06F9/30196
    • Abstract: A data processing system is described utilising multiple instruction sets. The program instruction words are supplied to a processor core 2 via an instruction pipeline 6. As program instruction words of a second instruction set pass along the instruction pipeline, they are mapped to program instruction words of the first instruction set. The second instruction set has program instruction words of a smaller bit size than those of the first instruction set and is a subset of the first instruction set. The smaller bit size improves code density, whilst the nature of the second instruction set as a subset of the first enables a one-to-one mapping to be performed efficiently, avoiding the need for a dedicated instruction decoder for the second instruction set.
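The mapping the abstract describes can be sketched as a table-driven expansion. The bit layouts and opcode values below are entirely invented for illustration; the point is only the structural idea: because the second (compact) instruction set is a subset of the first, each compact word expands one-to-one into a full-width word, so the existing first-set decoder can handle both sets.

```python
# Illustrative sketch of the one-to-one mapping in US5568646A.
# Encodings are invented; real layouts differ.

# Invented 16-bit second-set layout: 4-bit opcode | 4-bit Rd | 4-bit Rn | 4-bit imm
SECOND_TO_FIRST_OPCODE = {
    0x1: 0x0A1,  # compact ADD -> full-width ADD
    0x2: 0x0A2,  # compact SUB -> full-width SUB
    0x3: 0x0B7,  # compact MOV -> full-width MOV
}

def map_to_first_set(halfword):
    """Expand a 16-bit second-set word into a 32-bit first-set word."""
    op  = (halfword >> 12) & 0xF
    rd  = (halfword >> 8) & 0xF
    rn  = (halfword >> 4) & 0xF
    imm = halfword & 0xF
    # The subset property guarantees every compact opcode has exactly
    # one full-width equivalent, so a plain table lookup suffices.
    full_op = SECOND_TO_FIRST_OPCODE[op]
    # Invented 32-bit first-set layout: 12-bit opcode | 4-bit Rd | 4-bit Rn | 12-bit imm
    return (full_op << 20) | (rd << 16) | (rn << 12) | imm

# Usage: expand a compact ADD (opcode 0x1, Rd=2, Rn=1, imm=0).
full = map_to_first_set(0x1210)
```

Because the expansion is a fixed rearrangement of fields plus a small lookup, it can happen as the word passes along the pipeline, which is why no second decoder is needed.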