by 414owen on 7/6/23, 4:20 PM with 237 comments
by torstenvl on 7/6/23, 8:56 PM
Rewriting the C function as follows got a 5.5x speedup:
    int run_switches(char *input) {
        int r = 0;
        char c;
        while (1) {
            c = *input++;
            if (c == 's') r++;
            if (c == 'p') r--;
            if (c == '\0') break;
        }
        return r;
    }
Results:

    [16:50:14 user@boxer ~/looptest] $ gcc -O3 bench.c loop1.c -o lone
    [16:50:37 user@boxer ~/looptest] $ gcc -O3 bench.c loop2.c -o ltwo
    [16:50:47 user@boxer ~/looptest] $ time ./lone 1000 1
    449000
    ./lone 1000 1  3.58s user 0.00s system 99% cpu 3.589 total
    [16:50:57 user@boxer ~/looptest] $ time ./ltwo 1000 1
    449000
    ./ltwo 1000 1  0.65s user 0.00s system 99% cpu 0.658 total
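(For reference, loop1.c, the baseline these numbers compare against, is presumably the post's original switch-based version; roughly reconstructed:)

    int run_switches(char *input) {
        int res = 0;
        while (1) {
            char c = *input++;
            switch (c) {
                case '\0': return res;
                case 's': res++; break;
                case 'p': res--; break;
                default: break;
            }
        }
    }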
by nwallin on 7/6/23, 9:23 PM
    int run_switches_branchless(const char* s) {
        int result = 0;
        for (; *s; ++s) {
            result += *s == 's';
            result -= *s == 'p';
        }
        return result;
    }
...the compiler will do all the branchless sete/cmov stuff as it sees fit. It will be the same speed as the optimized assembly in the post, +/- something insignificant. However, it won't unroll and vectorize the loop. If you write it like this:

    int run_switches_vectorized(const char* s, size_t size) {
        int result = 0;
        for (; size--; ++s) {
            result += *s == 's';
            result -= *s == 'p';
        }
        return result;
    }
It will know the size of the loop, and will unroll it and use AVX-512 instructions if they're available. This will be substantially faster than the first loop for large inputs, although I'm too lazy to benchmark just how much faster it is. Now, this requires knowing the size of your string in advance, and maybe you're the sort of C programmer who doesn't keep track of how big your strings are. I'm not your coworker, I don't review your code. Do what you want. But you really, really probably shouldn't.
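(If all you have is a NUL-terminated string, a minimal caller sketch, assuming the run_switches_vectorized above: one strlen() call recovers the size the vectorizable loop needs.)

    #include <string.h>

    int run_switches(const char* s) {
        // One pass to find the length, then one unrolled/vectorized pass to count.
        return run_switches_vectorized(s, strlen(s));
    }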
by Const-me on 7/6/23, 10:39 PM
On my computer, the initial C version runs at 389 MB/second. I haven’t tested the assembly versions, but if they delivered the same 6.2x speedup, that would come to 2.4 GB/second here.
Here’s a C++ version which for long buffers exceeds 24 GB/second on my computer: https://gist.github.com/Const-me/3ade77faad47f0fbb0538965ae7... That’s a 61x speedup compared to the original version, without any assembly, based on AVX2 intrinsics.
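(The actual code is behind the gist link; as a rough, hypothetical illustration of the AVX2 approach, not Const-me's code, the core idea is to compare 32 bytes at a time and popcount the match masks. The name run_switches_avx2 and the explicit length parameter are assumptions of this sketch.)

    #include <immintrin.h>
    #include <stddef.h>

    int run_switches_avx2(const char *buf, size_t len) {
        const __m256i vs = _mm256_set1_epi8('s');
        const __m256i vp = _mm256_set1_epi8('p');
        int res = 0;
        size_t i = 0;
        for (; i + 32 <= len; i += 32) {
            __m256i v = _mm256_loadu_si256((const __m256i *)(buf + i));
            // movemask packs each byte-equality result into one bit; popcount counts matches.
            res += __builtin_popcount((unsigned)_mm256_movemask_epi8(_mm256_cmpeq_epi8(v, vs)));
            res -= __builtin_popcount((unsigned)_mm256_movemask_epi8(_mm256_cmpeq_epi8(v, vp)));
        }
        for (; i < len; ++i)  // scalar tail for the last < 32 bytes
            res += (buf[i] == 's') - (buf[i] == 'p');
        return res;
    }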
by Sesse__ on 7/6/23, 8:51 PM
by camel-cdr on 7/6/23, 8:52 PM
    size_t run(char *str) {
        uint8_t *p = (uint8_t*)str;
        long end = 0;
        size_t res = 0, vl;
        while (1) {
            vl = __riscv_vsetvlmax_e8m8();
            vuint8m8_t v = __riscv_vle8ff_v_u8m8(p, &vl, vl);
            end = __riscv_vfirst_m_b1(__riscv_vmseq_vx_u8m8_b1(v, '\0', vl), vl);
            if (end >= 0)
                break;
            res += __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 's', vl), vl);
            res -= __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 'p', vl), vl);
            p += vl;
        }
        vl = __riscv_vsetvl_e8m8(end);
        vuint8m8_t v = __riscv_vle8_v_u8m8(p, vl);
        res += __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 's', vl), vl);
        res -= __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 'p', vl), vl);
        return res;
    }
Here are the results for the above, the switch, and the table C implementations, run on my MangoPi MQ Pro (C906, in-order rv64gc with rvv 0.7.1, and a 128-bit vector length):

    switch: 0.19 Bytes/Cycle
    tbl:    0.17 Bytes/Cycle
    rvv:    1.57 Bytes/Cycle (dips down to 1.35 after ~30 KiB)
Edit: you can go up to 2/1.7 Bytes/Cycle if you make sure the pointer is page aligned (and vl isn't larger than the page size); see comments.
by okaleniuk on 7/7/23, 8:06 AM
However, on other processors, this might not be the case. https://wordsandbuttons.online/using_logical_operators_for_l...
The good question is: what do we need C for in general? Of course, we can hand-tailor our code to run best on one particular piece of hardware. But we don't need C for that; it would be the wrong tool. We need assembly (and a decent macro system for some sugar).
But the original goal of C was to make translating system-level code from one platform to another easier. And we're expected to lose efficiency on this operation. It's like instead of writing a poem in Hindi and translating it into Urdu, we write one in Esperanto and then translate it to whatever language we want automatically. You don't get two brilliant poems, you only get two poor translations, but you get them fast. That's what C is for.
by eklitzke on 7/6/23, 8:37 PM
However, whether or not using cmov is effective compared to a regular test/jump is highly dependent on how predictable the branch is, with cmov typically performing better when the branch is very unpredictable. Since they got a 6x speedup with cmov, I assume their test input (which isn't described in the post, and is also not in their GitHub repo) consists of random strings made up almost entirely of 's' and 'p' characters. There's nothing wrong with this, but it does make the post seem a little misleading to me, as their clever speedup is mostly about exploiting an unmentioned property of the data that is highly specific to their benchmark.
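(To probe that yourself, one plausible way to generate the two input classes; the helper names here are hypothetical:)

    #include <stdlib.h>

    // Unpredictable input: a random 's'/'p' mix defeats the branch predictor,
    // which is where cmov shines.
    void fill_random_sp(char *buf, size_t n) {
        for (size_t i = 0; i < n; ++i)
            buf[i] = (rand() & 1) ? 's' : 'p';
        buf[n] = '\0';
    }

    // Predictable input: two long runs make the branches nearly free,
    // shrinking cmov's advantage.
    void fill_runs_sp(char *buf, size_t n) {
        for (size_t i = 0; i < n; ++i)
            buf[i] = (i < n / 2) ? 's' : 'p';
        buf[n] = '\0';
    }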
by xoranth on 7/6/23, 10:13 PM
The benchmark only tests strings made of 's' and 'p', so I think it is fair.
The idea is as follows. We want to increase `res` by one when the next character is 's'. Naively, we might try something like this:

    res += (c - 'r'); // is `res += 1` when c == 's'

This doesn't work, as `'p' - 'r' == -2`, and we'd need it to be -1. But `'p' - 'r'`, when viewed as an unsigned integer, underflows, setting the carry flag. It turns out x64 has an instruction (adc) that adds two registers _plus_ the carry flag: the subtraction leaves 1 with the carry clear for 's', and -2 with the carry set for 'p', so adc adds 1 and -2 + 1 == -1 respectively.
Therefore we can replace two `cmp, cmov` with one `sub, adc`:
    run_switches:
        xor eax, eax               # res = 0
    loop:
        movsx ecx, byte ptr [rdi]
        test ecx, ecx
        je ret
        inc rdi
        sub ecx, 'r'
        adc eax, ecx               # Magic happens here
        jmp loop
    ret:
        ret
Benchmarks are as follows (`bench-x64-8` is the asm above):

    Summary
      '01-six-times-faster-than-c/bench-x64-8 1000 1' ran
        1.08 ± 0.00 times faster than '02-the-same-speed-as-c/bench-c-4-clang 1000 1'
        1.66 ± 0.00 times faster than '01-six-times-faster-than-c/bench-x64-7 1000 1'
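(For what it's worth, the same trick can be written in plain C, with the (c < 'r') term standing in for the carry flag; like the assembly, it is only correct when every character is 's' or 'p':)

    int run_switches_sub_adc(const char *input) {
        int res = 0;
        for (char c; (c = *input++) != '\0'; )
            res += (c - 'r') + (c < 'r');  // 's': 1 + 0 = 1, 'p': -2 + 1 = -1
        return res;
    }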
Of course, one could improve things further using SWAR/SIMD...
by sltkr on 7/6/23, 10:54 PM
    int run_switches(const char *buf) {
        size_t len = strlen(buf);
        int res = 0;
        for (size_t i = 0; i < len; ++i) {
            res += (buf[i] == 's') - (buf[i] == 'p');
        }
        return res;
    }
strlen() should be implemented in a pretty fast way, and after the buffer size is known, the compiler can autovectorize the inner loop, which does happen in practice: https://gcc.godbolt.org/z/qYfadPYoq
by aidenn0 on 7/6/23, 8:31 PM
by fefe23 on 7/7/23, 11:04 AM
Second, consider this replacement function:
    ssize_t test(const char *input) {
        ssize_t res = 0;
        size_t l = strlen(input);
        size_t i;
        for (i = 0; i < l; ++i) {
            res += (input[i] == 's') - (input[i] == 'p');
        }
        return res;
    }
The timings are (using gcc -O3 -march=native): your function 640 cycles, mine 128 cycles. How can that be? I'm reading the memory twice! I have one call to strlen in there, and memory is slow. Shouldn't this be much slower? No. strlen is a hack that uses vector instructions even though it may technically read beyond the string length. It makes sure not to cross page boundaries so it will not cause adverse reactions, but valgrind needs a suppression exception to not complain about it.
If you know the length beforehand, the compiler can vectorize and unroll the loop, which it happens to do here. To great effect, if I may say so.
The art of writing fast code is usually to get out of the way of the compiler, which will do a perfectly fine job if you let it.
If you really wanted to, you could get rid of the strlen by hacking your logic into what strlen does. That would make the C code much less readable and not actually help that much. My test string is "abcdefghijklmnopqrstuvxyz", so it's all in the L1 cache.
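(A minimal sketch of that fusion, assuming the buffer is padded so whole-word reads past the terminator are safe; orlp's comment below develops the same idea much further:)

    #include <stdint.h>
    #include <string.h>

    int run_switches_fused(const char *input) {
        const uint64_t ONES = 0x0101010101010101ull;
        const uint64_t HIGH = 0x8080808080808080ull;
        int res = 0;
        for (;;) {
            uint64_t w;
            memcpy(&w, input, 8);               // assumes 8 readable bytes
            if ((w - ONES) & ~w & HIGH) break;  // some byte in w is zero
            for (int i = 0; i < 8; ++i)
                res += (input[i] == 's') - (input[i] == 'p');
            input += 8;
        }
        while (*input) {                        // finish the final partial word
            res += (*input == 's') - (*input == 'p');
            ++input;
        }
        return res;
    }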
by torstenvl on 7/6/23, 7:58 PM
    cmp ecx, 's'    # if (c == 's')
    jne loop        # continue
    add eax, 1      # res++
    jmp loop        # continue

should be

    cmp ecx, 's'    # if (c != 's')
    jne loop        # continue
    add eax, 1      # res++
    jmp loop        # continue
by 414owen on 7/6/23, 4:20 PM
by jtriangle on 7/7/23, 12:09 AM
by BoppreH on 7/6/23, 9:18 PM
    int run_switches(char *input) {
        int res = 0;
        while (true) {
            char c = *input++;
            if (c == '\0') return res;
            // Here's the trick:
            res += (c == 's') - (c == 'p');
        }
    }
This gives a 3.7x speedup compared to loop-1.c. The lower line count is also nice.
by eru on 7/7/23, 1:06 AM
by gavinray on 7/6/23, 10:11 PM
Kept me reading into the follow-up article.
Also, I love the UI of this blog.
by arun-mani-j on 7/7/23, 7:28 AM
Also, how is this kind of optimization carried out in large-scale software? Like, do you tweak the generated assembly code manually? (Sorry, I'm very much a beginner at low-level code.)
by vardump on 7/6/23, 8:18 PM
by red2awn on 7/13/23, 8:41 PM
https://ipthomas.com/blog/2023/07/n-times-faster-than-c-wher...
by amm on 7/7/23, 9:34 AM
    int table[256] = {0};

    void init() {
        table['s'] = 1;
        table['p'] = -1;
    }

    int run_switches(char *input, int size) {
        int res = 0;
        // Walk indices size-1 down to 0; the cast keeps high-bit
        // characters from producing a negative table index.
        while (size--) res += table[(unsigned char)input[size]];
        return res;
    }
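(The table has to be initialized before the first call; a minimal usage sketch:)

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        init();  // fill in table[] once up front
        char buf[] = "spspss";
        printf("%d\n", run_switches(buf, (int)strlen(buf)));  // prints 2
    }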
by lukas099 on 7/6/23, 10:41 PM
by olliej on 7/7/23, 6:21 AM
by einpoklum on 7/7/23, 9:50 AM
by failuser on 7/6/23, 9:20 PM
A bloated ISA like x86 might actually be able to take on predicate support, but who would try such a radical change?
by sitkack on 7/7/23, 2:19 AM
by rajnathani on 7/8/23, 8:01 AM
by RobotToaster on 7/6/23, 9:32 PM
by kristianpaul on 7/6/23, 11:04 PM
by throwaway14356 on 7/7/23, 1:06 AM
by orlp on 7/7/23, 11:11 AM
    int run_switches(const char* input) {
        int res = 0;

        // Align to word boundary.
        while ((uintptr_t) input % sizeof(size_t)) {
            char c = *input++;
            res += c == 's';
            res -= c == 'p';
            if (c == 0) return res;
        }

        // Process word-by-word.
        const size_t ONES = ((size_t) -1) / 255;   // 0x...01010101
        const size_t HIGH_BITS = ONES << 7;        // 0x...80808080
        const size_t SMASK = ONES * (size_t) 's';  // 0x...73737373
        const size_t PMASK = ONES * (size_t) 'p';  // 0x...70707070
        size_t s_accum = 0;
        size_t p_accum = 0;
        int iters = 0;
        while (1) {
            // Load word and check for zero byte.
            // (w - ONES) & ~w has the top bit set in each byte where that byte is zero.
            size_t w;
            memcpy(&w, input, sizeof(size_t));
            if ((w - ONES) & ~w & HIGH_BITS) break;
            input += sizeof(size_t);

            // We reuse the same trick as before, but XORing with SMASK/PMASK first to get
            // exactly the high bits set where a byte is 's' or 'p'.
            size_t s_high_bits = ((w ^ SMASK) - ONES) & ~(w ^ SMASK) & HIGH_BITS;
            size_t p_high_bits = ((w ^ PMASK) - ONES) & ~(w ^ PMASK) & HIGH_BITS;

            // Shift down and accumulate.
            s_accum += s_high_bits >> 7;
            p_accum += p_high_bits >> 7;
            if (++iters >= 255 / sizeof(size_t)) {
                // To prevent overflow in our byte-wise accumulators we must flush
                // them every so often. We use a trick by noting that 2^8 = 1 (mod 255)
                // and thus a + 2^8 b + 2^16 c + ... = a + b + c (mod 255).
                res += s_accum % 255;
                res -= p_accum % 255;
                iters = s_accum = p_accum = 0;
            }
        }
        res += s_accum % 255;
        res -= p_accum % 255;

        // Process tail.
        while (1) {
            char c = *input++;
            res += c == 's';
            res -= c == 'p';
            if (c == 0) break;
        }
        return res;
    }
Fun fact: the above is still 1.6x slower (on my machine) than the naive two-pass algorithm that gets autovectorized by clang:

    int run_switches(const char* input) {
        size_t len = strlen(input);
        int res = 0;
        for (size_t i = 0; i < len; ++i) {
            char c = input[i];
            res += c == 's';
            res -= c == 'p';
        }
        return res;
    }