Prefill (even in the single-batch case) and multi-batch decoding are still compute-dominated. The oft-repeated trope that "decoder-only transformer inference is bottlenecked on memory bandwidth" is only strictly true for single-batch decoding, because at batch size one you're mostly doing matrix-vector multiplies.
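A back-of-envelope roofline sketch of this point: for an fp16 weight matrix, a batch-B matmul does roughly B FLOPs per byte of weights streamed, so batch-1 decode sits far below the hardware's compute/bandwidth ridge point. The hardware numbers below are assumptions, roughly in the A100 ballpark, not measurements:

```python
# Hypothetical hardware numbers (A100-ish, assumed for illustration).
PEAK_FLOPS = 312e12           # dense fp16 peak, FLOP/s
PEAK_BW = 2.0e12              # HBM bandwidth, bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW  # ~156 FLOP/byte: below this, memory-bound

def arithmetic_intensity(batch: int, d: int = 8192) -> float:
    """FLOPs per weight byte for a (batch, d) @ (d, d) fp16 matmul.
    Weight bytes streamed: 2*d*d; FLOPs: 2*batch*d*d -> intensity == batch."""
    flops = 2 * batch * d * d
    weight_bytes = 2 * d * d
    return flops / weight_bytes

for b in (1, 8, 64, 256):
    ai = arithmetic_intensity(b)
    bound = "memory-bound" if ai < RIDGE else "compute-bound"
    print(f"batch={b:4d}  intensity={ai:6.0f} FLOP/byte  -> {bound}")
```

At batch 1 the intensity is ~1 FLOP/byte against a ridge of ~156, which is the memory-bandwidth-bound regime the trope describes; it takes batch sizes in the hundreds before the same layer becomes compute-bound.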
Not even single-batch. If you want reasonable time per output token (TPOT), even larger batches do not give you high compute utilization during extend. It's only when you don't care about TPOT at all, and your model is small enough to leave room for a large batch on an 8-GPU host, that you can get decent utilization. And that's extend only - it's easy to get high utilization in prefill.
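The batch-size-vs-utilization trade-off can be sketched with the same kind of arithmetic: while a decode step stays memory-bound, its time (and hence TPOT) is fixed by how fast you can stream the weights, and MFU scales linearly with batch size; the model and hardware numbers here are illustrative assumptions, not any particular deployment:

```python
# Sketch: decode-step MFU vs batch size, assuming the step is memory-bound.
# Hypothetical setup: 70B-parameter fp16 model sharded across an 8-GPU host.
PARAMS = 70e9
WEIGHT_BYTES = 2 * PARAMS   # fp16 weights
PEAK_FLOPS = 8 * 312e12     # aggregate dense fp16 peak (assumed)
PEAK_BW = 8 * 2.0e12        # aggregate HBM bandwidth (assumed)

def decode_mfu(batch: int) -> float:
    """Model FLOPs utilization for one memory-bound decode step.
    Step time ~ weight bytes / bandwidth (same for any batch in this regime);
    useful FLOPs ~ 2 * batch * params."""
    step_time = WEIGHT_BYTES / PEAK_BW
    flops = 2 * batch * PARAMS
    return flops / (step_time * PEAK_FLOPS)

tpot_ms = WEIGHT_BYTES / PEAK_BW * 1e3
for b in (1, 16, 128):
    print(f"batch={b:4d}  TPOT~{tpot_ms:.1f} ms  MFU={decode_mfu(b):.1%}")
```

Under these assumed numbers, utilization is well under 1% at batch 1 and only approaches respectable MFU at batches in the hundreds, which is exactly where KV-cache memory pressure and the memory-bound-to-compute-bound crossover (where TPOT starts growing with batch) bite, so a TPOT target effectively caps extend-time utilization.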