Is your Thumb or ARM faster?

Thumb code typically occupies about 65% of the size of the equivalent Arm code, and can deliver up to 160% of the performance of an equivalent Arm processor connected to a 16-bit memory system. For code-size-critical applications, the alternative 16-bit Thumb mode reduces code size by more than 30% with only a minimal performance penalty.

What is the difference between Thumb and Thumb-2?

Thumb-2 is an enhancement to the 16-bit Thumb instruction set. The most important difference between the Thumb-2 instruction set and the ARM instruction set is that most 32-bit Thumb instructions are unconditional, whereas most ARM instructions can be conditional. …

What is ARM Thumb instructions?

The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions. Thumb instructions are each 16 bits long, and each has a corresponding 32-bit ARM instruction with the same effect on the processor model. Thumb therefore keeps the advantages of a 32-bit core, including a 32-bit address space.

Why Thumb instruction set is called Thumb?

The Thumb instruction set consists of 16-bit instructions that act as a compact shorthand for a subset of the 32-bit instructions of the standard ARM instruction set. When it’s operating in Thumb state, the processor simply expands the smaller shorthand instructions fetched from memory into their 32-bit equivalents.

What is Thumb-2 instruction?

Thumb-2 is a superset of the Thumb instruction set. Thumb-2 introduces 32-bit instructions that are intermixed with the 16-bit instructions. The Thumb-2 instruction set covers almost all the functionality of the ARM instruction set. Thumb-2 is backwards compatible with the ARMv6 Thumb instruction set.
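As a concrete illustration of that intermixing: the length of a Thumb or Thumb-2 instruction can be determined from its first halfword alone. Below is a minimal sketch (the function name is ours, not an official API) of the ARMv7 encoding rule that a first halfword whose top five bits are 0b11101, 0b11110, or 0b11111 begins a 32-bit instruction:

```python
def thumb_insn_length(first_halfword: int) -> int:
    """Return the length in bytes of a Thumb/Thumb-2 instruction,
    given its first (or only) 16-bit halfword.

    Per the ARMv7 encoding rules, a halfword whose top five bits are
    0b11101, 0b11110, or 0b11111 begins a 32-bit Thumb-2 instruction;
    any other halfword is a complete 16-bit instruction.
    """
    top5 = (first_halfword >> 11) & 0b11111
    return 4 if top5 in (0b11101, 0b11110, 0b11111) else 2

# 0x4770 encodes "bx lr", a classic 16-bit Thumb instruction.
# A halfword in the 0xF000 range begins a 32-bit Thumb-2 instruction
# such as BL.
```

This is how a fetch unit or disassembler knows where each instruction ends in a stream that freely mixes 16-bit and 32-bit encodings.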

What is Thumb state?

A processor that is executing Thumb instructions is said to be operating in Thumb state. A Thumb-capable processor that is executing ARM instructions is said to be operating in ARM state. ARM processors always start in ARM state.

What are Thumb and Thumb-2?

ARM and Thumb have instructions with the same functionality and assembler mnemonics but different encodings. Thumb-2 is a superset of Thumb. I’m not sure there is a modern reference for this, as the architecture now describes Thumb as a single instruction set in which certain groups of instructions are optional.

Where and why is Thumb mode used?

Thumb mode allows for code to be smaller, and can potentially be faster if the target has slow memory.

What’s the difference between the Thumb-2 and ARM instruction sets?

Thumb-2, like Thumb, is a reduced form of the ARM instruction set, but it is less reduced. It may therefore still take more instructions to do the same work in Thumb-2 (together with Thumb) than in ARM. A single ARM instruction alongside its Thumb equivalent gives a taste of the issue.

Why does Thumb take less space than ARM?

Compared with the 32-bit ARM instruction set, the 16-bit Thumb instruction set (not counting the Thumb-2 extensions) takes less space because the instructions are half the size. In general, though, there is a performance drop, because it takes more instructions to do the same thing as on ARM.
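The size trade-off can be sketched with a small tally (a hypothetical example; the instruction mix is invented for illustration). ARM can fold a shift into an add, as in `add r0, r1, r2, lsl #3`, in one 4-byte instruction, while Thumb needs a separate shift, `lsls r3, r2, #3` then `adds r0, r1, r3`, in two 2-byte instructions:

```python
ARM_BYTES, THUMB_BYTES = 4, 2  # fixed instruction sizes

def code_size(ops):
    """ops is a list of (arm_count, thumb_count) pairs: how many
    instructions each ISA needs for one logical operation."""
    arm = sum(a for a, _ in ops) * ARM_BYTES
    thumb = sum(t for _, t in ops) * THUMB_BYTES
    return arm, thumb

# A made-up mix: two operations need an extra Thumb instruction.
ops = [(1, 2), (1, 1), (1, 1), (1, 2)]
arm, thumb = code_size(ops)
# arm = 16 bytes, thumb = 12 bytes: fewer bytes overall,
# but six Thumb instructions versus four ARM instructions.
```

This is the general shape of the trade-off the answer describes: Thumb wins on bytes even when it loses on instruction count.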

Why is Thumb-2 better than Thumb-1?

Thumb-2 improves performance in cases where a single 16-bit instruction restricts the functionality available to the compiler. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory. What exactly is this performance?

How to switch from ARM mode to Thumb mode?

In ARM mode the instructions are 32 bits and the pc advances 0x00, 0x04, 0x08, and so on; in Thumb mode the instructions are 16 bits and the pc advances 0x00, 0x02, 0x04, 0x06, and so on. (Look at the branch instructions: an ARM branch target offset is signed_immed&lt;&lt;2, giving steps of 0, 4, 8, etc., while a Thumb branch offset is signed_immed&lt;&lt;1, giving 0, 2, 4, 6, etc.) In a mixed-mode program you switch states with BX (branch and exchange) rather than B: bit 0 of the target address selects the new state, 1 for Thumb and 0 for ARM. In particular, return with bx lr instead of mov pc,lr.
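The address arithmetic above can be sketched as follows. This is a simplified model with helper names of our own choosing: pc is treated as the branch instruction's own address, ignoring the real pipeline offset (pc reads as the instruction address plus 8 in ARM state and plus 4 in Thumb state).

```python
def arm_branch_target(pc: int, signed_immed: int) -> int:
    # ARM B: offset = signed_immed << 2, so targets step by 4 bytes.
    return pc + (signed_immed << 2)

def thumb_branch_target(pc: int, signed_immed: int) -> int:
    # Thumb B: offset = signed_immed << 1, so targets step by 2 bytes.
    return pc + (signed_immed << 1)

def bx_state(target: int) -> str:
    # BX (unlike B) exchanges state: bit 0 of the target register
    # selects Thumb (1) or ARM (0), which is why mixed-mode code
    # returns with "bx lr" rather than "mov pc, lr".
    return "Thumb" if target & 1 else "ARM"
```

For example, `bx` to an odd address such as 0x9001 continues execution in Thumb state at 0x9000, while `bx` to 0x9000 itself continues in ARM state.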