CVE-2026-34159

Publication date

2026-04-01 16:59:59

Family

GitHub_M

State

PUBLISHED

Description

llama.cpp provides inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required; only TCP access to the RPC server port. This issue has been patched in version b8492.
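The flaw described above follows a common pattern: a bounds-validation branch that is bypassed for a sentinel value, so an attacker-supplied field ends up used as a raw pointer. The sketch below is a minimal, hypothetical illustration of that pattern and its fix; the struct and function names (rpc_tensor, deserialize_tensor_vulnerable, deserialize_tensor_patched) are simplified assumptions and not the actual llama.cpp code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical simplified wire format for a tensor sent over RPC.
struct rpc_tensor {
    uint64_t buffer; // handle of a server-side buffer; 0 means "no buffer"
    uint64_t data;   // offset into that buffer (or, unchecked, a raw address)
    uint64_t size;   // number of bytes the tensor claims to occupy
};

struct backend_buffer {
    std::vector<uint8_t> storage; // server-side allocation backing the handle
};

// Vulnerable pattern: bounds are validated only when buffer != 0.
// With buffer == 0, the attacker-controlled 'data' field is reinterpreted
// as a raw process address with no validation at all.
uint8_t * deserialize_tensor_vulnerable(const rpc_tensor & t, backend_buffer & buf) {
    if (t.buffer != 0) {
        if (t.size > buf.storage.size() || t.data > buf.storage.size() - t.size) {
            return nullptr; // checked path
        }
        return buf.storage.data() + t.data;
    }
    return reinterpret_cast<uint8_t *>(t.data); // unchecked path
}

// Patched pattern: a zero buffer handle is rejected outright, and the
// offset arithmetic is written to avoid unsigned overflow.
uint8_t * deserialize_tensor_patched(const rpc_tensor & t, backend_buffer & buf) {
    if (t.buffer == 0) {
        return nullptr;
    }
    if (t.size > buf.storage.size() || t.data > buf.storage.size() - t.size) {
        return nullptr;
    }
    return buf.storage.data() + t.data;
}
```

The patched variant never trusts data as a pointer: a tensor either resolves to a range inside a known server-side buffer or is rejected, which is the shape of fix the advisory describes for b8492.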