Adam Hacham, Hector Fétis · research · 17 min read

CVE-2022-26809: dynamics around the security community, Microsoft, and analyzing patches in critical Windows components

After our previous work on CVE-2022-30190, also known as Follina, we decided to look for other vulnerabilities to analyze. It is well known to anyone who follows Windows security that the second Tuesday of each month is special. It has been dubbed “Patch Tuesday”, as Microsoft has been releasing important security patches in bulk on that day since 2003, and so we decided to have a look at the release notes for the last few Patch Tuesdays and pick the vulnerability that stood out as the most interesting. We quickly settled on CVE-2022-26809, a vulnerability in the DCE/RPC runtime that presents multiple points of interest:

  • It was disclosed in April, which makes it not too old
  • It has a CVSS score of 9.8 out of 10, making it critical
  • It should be exploitable from the network with no interaction from the user and no prior privileges
  • The attack complexity is low
  • DCE/RPC is critical for Windows, especially in Active Directory environments
  • Despite these alarming metrics, there is no publicly known working exploit

Our initial goal was therefore to understand the vulnerability and find or produce a working exploit, to then propose countermeasures that would have prevented its exploitation as a 0-day, as we did with Follina. We did not manage to produce a working exploit in a reasonable time, but CVE-2022-26809 still proved a very valuable case study, as it shed light on the dynamics between the cybersecurity research community and Microsoft. In this article, we want to share our journey analyzing the vulnerability and what it revealed about that dynamic.

Looking up existing work on CVE-2022-26809

There are indeed very few publicly available studies of the CVE, but a few stand out and were very helpful in our work:

  • The l1nk3dHouse blog has an extensive analysis of the patch, with a proof of concept on GitHub.
  • HuanGMz has a much more concise article on the CVE on the Seebug blog, which references the same proof of concept.
  • s1ckb017 have released a very detailed analysis on their blog.
  • Corelight have released an explanation of the vulnerability along with a Zeek package for detection and prevention.
  • Marcus Hutchins has released a video summarizing his reverse engineering efforts and methodology.
  • The SANS Institute has released the replay of a live session during which they reviewed what was known of the CVE shortly after its disclosure.

These publications all relied on reverse-engineering efforts to understand the patch and uncover the vulnerability from there. They identified multiple interesting changes which exhibited two different potential attack vectors in the rpcrt4.dll RPC runtime library.

  • The first, and most discussed one, is a potential buffer overflow when an RPC service with specific parameters reassembles fragmented PDUs
  • The second one, discussed later by l1nk3dHouse and HuanGMz and addressed by Corelight, is a potential integer underflow in the runtime when a specific maliciously crafted PDU is received by an RPC client.

The second one seemed the most promising, and so we decided to focus our efforts on it.

RPC basics

RPC, or remote procedure call, refers to protocols that implement an abstraction which allows processes to call procedures in other processes. It may be used as an IPC (inter-process communication) protocol on the same machine, or even over the network to provide various services. Microsoft uses the DCE/RPC specification, implemented under the name MSRPC, and heavily relies on it to provide core functionalities of Windows and Active Directory environments. We will be referring to DCE/RPC as RPC from this point onwards for simplicity’s sake. RPC is an application layer protocol that sits on top of a transport layer protocol, which may be SMB, TCP or HTTP, among others. DCE/RPC focuses only on providing the RPC logic, so the underlying transport protocol is responsible for providing security features like confidentiality or authentication. RPC can still require specific security features to be present and can carry security-related information like tokens. During our testing, we forced the use of SMBv1 as the transport layer in our environment, as it has the good taste of not using encryption, which is ideal for debugging purposes. The Windows RPC logic is implemented by the C:\Windows\System32\rpcrt4.dll library, and this library is what was apparently patched as a mitigation against CVE-2022-26809.
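As a quick illustration of how a client selects the transport, here is a minimal sketch of building an RPC binding handle over SMB named pipes (the ncacn_np protocol sequence) with the Windows RPC API; the host and pipe names are placeholders, not the endpoints we actually targeted:

#include <windows.h>
#include <rpc.h>

/* Build an RPC binding handle over the SMB named-pipe transport (ncacn_np).
 * The host ("target") and pipe name ("\pipe\example") are placeholders. */
RPC_STATUS BindOverSmb(RPC_BINDING_HANDLE *hBinding)
{
    RPC_WSTR stringBinding = NULL;
    RPC_STATUS status;

    /* Compose a string binding such as "ncacn_np:\\target[\pipe\example]". */
    status = RpcStringBindingComposeW(NULL,
                                      (RPC_WSTR)L"ncacn_np",        /* transport: SMB named pipes */
                                      (RPC_WSTR)L"\\\\target",      /* remote host                */
                                      (RPC_WSTR)L"\\pipe\\example", /* endpoint (named pipe)      */
                                      NULL,
                                      &stringBinding);
    if (status != RPC_S_OK)
        return status;

    /* Turn the string binding into an actual binding handle. */
    status = RpcBindingFromStringBindingW(stringBinding, hBinding);
    RpcStringFreeW(&stringBinding);
    return status;
}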

Two features of RPC are especially relevant to CVE-2022-26809, which we will take the time to explain in the following two sub-sections.

Establishing bindings

RPC is a complex protocol that supports both session-full and session-less communication. Establishing a session is done by setting up a “binding” between the client and the server before the client starts to make calls. The client requests the binding by sending a bind PDU, to which the server responds with a bind_ack PDU if all goes well, or a bind_nak PDU if the binding is refused. The connection can then proceed by either making further changes to the context, or simply making calls.
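For reference, these PDUs are distinguished by the PTYPE field of the common header; the connection-oriented values defined by the DCE/RPC specification include the following (the constant names are ours, for readability):

/* Connection-oriented PDU types from the DCE/RPC specification (PTYPE field).
 * The constant names are ours; the numeric values come from the spec. */
enum rpc_co_ptype {
    PTYPE_REQUEST            = 0,   /* call request                          */
    PTYPE_RESPONSE           = 2,   /* call response                         */
    PTYPE_FAULT              = 3,   /* runtime or application fault          */
    PTYPE_BIND               = 11,  /* client asks to establish a binding    */
    PTYPE_BIND_ACK           = 12,  /* server accepts the binding            */
    PTYPE_BIND_NAK           = 13,  /* server refuses the binding            */
    PTYPE_ALTER_CONTEXT      = 14,  /* client changes the negotiated context */
    PTYPE_ALTER_CONTEXT_RESP = 15   /* server response to alter_context      */
};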

An example binding process is shown on Microsoft’s MSRPC documentation and in Wireshark:

illustration
illustration

The reverse engineering efforts of the community have tracked down changes to the OSF_CASSOCIATION::ProcessBindAckOrNak() function, which is responsible for handling the server’s response to the bind PDU on the client side. If we want any chance of triggering a vulnerability within this function, we therefore need to find a way to make our victim connect to a malicious RPC endpoint that would send a malicious payload.

Multi PDU transmissions and fragments

RPC also supports splitting up large transmissions into multiple PDUs (fragments). Flags are reserved in the common PDU header to let the receiving side know that it should expect more PDUs to come and will need to reassemble everything before processing anything.

The RPC specification gives a clear definition of the bind_ack PDU mentioned earlier, including the common part:

typedef struct {

      /* start 8-octet aligned */

      /* common fields */
        u_int8  rpc_vers = 5;       /* 00:01 RPC version */
        u_int8  rpc_vers_minor ;    /* 01:01 minor version */
        u_int8  PTYPE = bind_ack;   /* 02:01 bind ack PDU */
        u_int8  pfc_flags;          /* 03:01 flags */
        byte    packed_drep[4];     /* 04:04 NDR data rep format label*/
        u_int16 frag_length;        /* 08:02 total length of fragment */
        u_int16 auth_length;        /* 10:02 length of auth_value */
        u_int32  call_id;           /* 12:04 call identifier */

      /* end common fields */

        u_int16 max_xmit_frag;      /* 16:02 max transmit frag size */
        u_int16 max_recv_frag;      /* 18:02 max receive  frag size */
        u_int32 assoc_group_id;     /* 20:04 returned assoc_group_id */
        port_any_t sec_addr;        /* 24:yy optional secondary address 
                                     * for process incarnation; local port
                                     * part of address only */
      /* restore 4-octet alignment */

        u_int8 [size_is(align(4))] pad2;

      /* presentation context result list, including hints */

        p_result_list_t     p_result_list;    /* variable size */

      /* optional authentication verifier */
      /* following fields present iff auth_length != 0 */

        auth_verifier_co_t   auth_verifier; /* xx:yy */
} rpcconn_bind_ack_hdr_t;

typedef struct {
        u_int8   n_results;      /* count */
        u_int8   reserved;       /* alignment pad, m.b.z. */
        u_int16  reserved2;      /* alignment pad, m.b.z. */
        p_result_t [size_is(n_results)] p_results[];
} p_result_list_t;

The request PDU shares the same common part, but has a different body (note the additional alloc_hint field):

typedef struct {

      /* start 8-octet aligned */

      /* common fields */
        u_int8  rpc_vers = 5;       /* 00:01 RPC version */
        u_int8  rpc_vers_minor;     /* 01:01 minor version */
        u_int8  PTYPE = request ;   /* 02:01 request PDU */
        u_int8  pfc_flags;          /* 03:01 flags */
        byte    packed_drep[4];     /* 04:04 NDR data rep format label*/
        u_int16 frag_length;        /* 08:02 total length of fragment */
        u_int16 auth_length;        /* 10:02 length of auth_value */
        u_int32  call_id;           /* 12:04 call identifier */

      /* end common fields */

      /* needed on request, response, fault */

        u_int32  alloc_hint;        /* 16:04 allocation hint */
        p_context_id_t p_cont_id;   /* 20:02 pres context, i.e. data rep */
        u_int16 opnum;              /* 22:02 operation # 
                                     * within the interface */

      /* optional field for request, only present if the PFC_OBJECT_UUID
         * field is non-zero */

        uuid_t  object;              /* 24:16 object UID */

      /* stub data, 8-octet aligned 
                   .
                   .
                   .                 */

      /* optional authentication verifier */
      /* following fields present iff auth_length != 0 */
 
        auth_verifier_co_t   auth_verifier; /* xx:yy */

} rpcconn_request_hdr_t;

Some fields of the request are especially important, including:

illustration

If the minor protocol version of a bind, bind_ack, alter_context, alter_context_response, request or response PDU is set to 1, the PDU may be part of a fragmented transmission. This lets those PDUs carry trailing authentication, request or response data of arbitrary size. In practice, the first fragment has the PFC_FIRST_FRAG flag set without PFC_LAST_FRAG, subsequent fragments have neither, and the final one has PFC_LAST_FRAG. If a transmission only requires a single fragment, both flags are set on that fragment.
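Below is a minimal sketch of that flag logic, using the PFC_FIRST_FRAG (0x01) and PFC_LAST_FRAG (0x02) values from the specification; the helper function is ours, purely for illustration:

#include <stdint.h>

#define PFC_FIRST_FRAG 0x01  /* first fragment of a transmission */
#define PFC_LAST_FRAG  0x02  /* last fragment of a transmission  */

/* Compute the pfc_flags value for fragment `index` out of `total` fragments. */
static uint8_t fragment_flags(unsigned index, unsigned total)
{
    uint8_t flags = 0;
    if (index == 0)
        flags |= PFC_FIRST_FRAG;   /* first PDU */
    if (index == total - 1)
        flags |= PFC_LAST_FRAG;    /* last PDU  */
    /* A single-PDU transmission gets both flags (0x03),
     * middle fragments get neither (0x00). */
    return flags;
}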

Patches have been identified in the ProcessReceivedPDU() and GetCoalescedBuffer() methods, hinting at a potential incorrect handling of multi-PDU transmissions before the update.

The ProcessBindAckOrNak() patch

PetitPotam

As explained in our RPC summary, CVE-2022-26809 may involve an incorrect handling of bind_ack packets in OSF_CASSOCIATION::ProcessBindAckOrNak(). The problem is that this method is only relevant when the vulnerable machine acts as a client in a connection. In order to properly weaponize a vulnerability in the method, one would therefore need to force a victim into connecting to a malicious host as a client.

In his blog post, l1nk3dHouse proposes a rather elegant way to achieve the connection using PetitPotam. PetitPotam is an exploit which was initially released in July 2021 by French researcher topotam and later partially patched by Microsoft as CVE-2021-36942 in the August 2021 Patch Tuesday, only for the exploit to be updated in response five days later. It has not been addressed since, still works out of the box on up-to-date systems, and requires no credentials against domain controllers. It relies on MS-EFSR, a protocol developed by Microsoft to allow hosts to access encrypted files over RPC. The idea is quite simple: expose an RPC endpoint to which clients can connect to request operations on encrypted files (e.g. ask the server to open an encrypted file for reading). The problem is, most requests pass a path to the file they need to access, and nothing stops them from specifying a path on a network share. Doing so will cause the RPC server to comply and connect to the share to try to get the file from there. This is usually done to make the EFSR server authenticate with a malicious host that will grab its NTLM hash to later use in NTLM relay attacks. We don’t need to go all the way there with CVE-2022-26809: PetitPotam will cause the EFSR server to connect to a malicious host, which is enough to reach the patched function.
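Conceptually, the coercion boils down to calling one of the MS-EFSR methods, EfsRpcOpenFileRaw in the original PetitPotam release, with a UNC path pointing at an attacker-controlled host. The prototype below is paraphrased from the MS-EFSR documentation and simplified; treat it as a sketch rather than the exact IDL:

/* Paraphrased from MS-EFSR: the first method abused by PetitPotam.
 * The victim's EFSR service resolves FileName itself, so a UNC path
 * pointing at an attacker-controlled host forces an outbound connection. */
long EfsRpcOpenFileRaw(
    handle_t   binding_h,   /* RPC binding to the victim's EFSR endpoint            */
    void     **hContext,    /* context handle returned by the server                */
    wchar_t   *FileName,    /* e.g. L"\\\\attacker\\share\\x" -> \\attacker\share\x */
    long       Flags);

/* The call itself will fail, but by then the victim has already connected
 * to \\attacker as an RPC client, which is all we need for CVE-2022-26809. */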

Here is an overview of the PetitPotam attack over the network, and the equivalent packet capture in Wireshark:

illustration
illustration

The two attackers are shown here as different entities for clarity, but they can of course run on the same machine.

In-depth reverse engineering and analysis

Once we knew we could trigger the call to OSF_CASSOCIATION::ProcessBindAckOrNak(), we went on to start debugging the issue. We started by making sure the function was indeed getting called, by setting a breakpoint at the right address in rpcrt4.dll in the lsass process and then triggering a call with PetitPotam. The function is used quite often, but a glance at the stack when the breakpoint hits makes it clear when the call is coming from PetitPotam:

illustration

Let’s now dive into the internals of rpcrt4.dll and OSF_CASSOCIATION::ProcessBindAckOrNak(). We pulled the DLL from two different systems: an outdated Windows 10 21H2 (build 19041), and an up-to-date Windows 11 system (build 22000). After downloading the symbol files from Microsoft and loading everything into our preferred reverse engineering toolkit, we reversed both versions of the function and tried to come up with what might have been the original code.

The following graph is a high-level view of the function, with the patched part colored in red:

illustration

And here is a diff view of the important part of the patch, with the irrelevant parts removed:

illustration

We can notice that a check was added on line 48 before the body size is computed as body_size = packet_size - 28. Indeed, in the case of a bind_ack PDU, the header size is 26 bytes, and the rest of the function will try to process the body. If the body does not exist, the PDU will be only 26 bytes, and the unsigned body_size short integer will underflow to a value just below 2^16, far greater than expected. This bypasses the body size check on line 61 / 64. At this point, the data that follows the packet on the heap may be erroneously processed by the function.

This is the original idea from the Corelight blog post.

It shows how the packet must be 4-byte aligned, and therefore any packet that includes an n_results field has a size greater than or equal to 28, which means the underflow would not happen. Since a non-null n_results is required for anything to happen past that point, because of the check on line 65 / 68, an attacker would have to find a way to manipulate the heap to place a non-null value at the right address to alter the execution of the function, with very little effect. In the worst-case scenario, values on the heap would be changed as the function tries to convert values from big endian to little endian in p_results[], and an invalid binding would be established. Such an attack would not, however, fit the “easy” exploitation suggested by Microsoft’s rating.
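To make the arithmetic concrete, here is a minimal sketch of the pre-patch computation as we reconstructed it; the variable names are ours:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A bind_ack with an empty body: a 26-byte header and no p_result_list. */
    uint16_t packet_size = 26;

    /* Pre-patch logic as reconstructed: the body starts at offset 28, and the
     * body size is computed by plain subtraction with no lower-bound check. */
    uint16_t body_size = packet_size - 28;    /* wraps around to 65534 */

    printf("body_size = %u\n", body_size);    /* far larger than the real body */
    return 0;
}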

Furthermore, more work would be needed, as analyzing a single function in isolation is not enough unless one already knows the internals of the library quite well. Some of the cited papers admit they did not know much about RPC prior to their research on this CVE, and neither did we.

The ProcessReceivedPDU() patch

While ProcessBindAckOrNak() is vulnerable to an integer underflow, ProcessReceivedPDU() seems to be vulnerable to an integer overflow.

ProcessReceivedPDU()

By manipulating packets in a specific way, it might be possible to enter the following routine and trigger the overflow shown below:

    if ((pfc_frag & PFC_FRAG_FIRST) == 0) {
        if (*plVar10 != 0) {
            fragment_length = Size;
            if ((*(int *)&this->field_0x244 == 0) || (*(int *)&this->field_0x1cc != 0)) {
                if ((*(int *)&this->field_0x214 == 0) || (*(int *)&this->field_0x1cc != 0)) {
                    fragment_length = Size[0] + *(uint *)&this->DispatchBufferOffset;
                    /* Test alloc_hint, fragment*/
                    if (*(uint *)&this->alloc_size <= fragment_length && fragment_length != *(uint *)&this->alloc_size) {
                        if (*(int *)&this->field_0x1cc != 0) goto LAB_18008ea09;
                        *(uint *)&this->alloc_size = fragment_length;
                        /* overflow check 0x40000  */
                        puVar1 = &(*this->DispatchBuffer)->field_0x148;
                        if (*(uint *)puVar1 <= fragment_length && fragment_length != *(uint *)puVar1) {
                        //...  
                        }
                        // allocate packet size
                        lVar4 = GetBufferDo(*this->DispatchBuffer,(void **)plVar10,fragment_length,1,
                                            *(uint *)&this->DispatchBufferOffset,in_stack_ffffffffffffff60);
                        if (lVar4 != 0) goto LAB_18008ee0d;
                    }
                    // if PFC_LAST_FRAG is set, the routine will stop here
                    if ((pfc_frag & 2) == 0) {
                        return 0;
                    }
                    goto LAB_18003ad35;
                }
            }
        }
    }
    /* Should enter here, need a request packet type */
    else if (type == MSRPC_REQUEST) {
        //...
        /* Store fragment length*/
        fragment_length = Size;
        iVar5 = QUEUE::PutOnQueue((QUEUE *)&this->queue,rpc_packet + 1,Size);
        if (iVar5 == 0) {
            /*
             * VULNERABILITY !
             * Integer Overflow vulnerability, we can control the size from fragment length
             * VULNERABILITY !
             */
            *(uint *)&this->Total_Length = *(int *)&this->Total_Length + fragment_length;
            //...
        }
        //...
    }

The buffer size is limited, so we cannot trigger the vulnerability in GetCoalescedBuffer() with a single request packet. If the packet is not fragmented, meaning the fragment flags are set to 0x03 (PFC_FIRST_FRAG | PFC_LAST_FRAG), the ProcessReceivedPDU() function will return 0 and won’t trigger the vulnerability; we therefore need to send a fragmented transmission.

We send our RPC query in fragments, with the first and last PDUs respectively carrying the PFC_FIRST_FRAG and PFC_LAST_FRAG flags. Our fragmented transmission needs to be of the request type, to enter the routine that can trigger the integer overflow.

The number of queued packets is incremented by the PutOnQueue() call in ProcessReceivedPDU() and decremented by the TakeOffQueue() call in GetCoalescedBuffer().
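Reduced to its arithmetic, and with field names of our own choosing rather than the real class layout, the accumulation issue looks like this:

#include <stdint.h>

/* Simplified model of the per-call state kept by the runtime. */
struct call_state {
    uint32_t total_length;   /* running sum of the queued fragment sizes */
};

/* Called once per queued fragment; fragment_length is attacker-controlled. */
static void on_fragment_queued(struct call_state *call, uint32_t fragment_length)
{
    /* No bound check: once the sum exceeds 2^32 it silently wraps, so
     * total_length can end up much smaller than the amount of queued data. */
    call->total_length += fragment_length;
}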

GetCoalescedBuffer()

GetCoalescedBuffer() merges queued buffers with the last received PDU that has not been queued, and then proceeds to allocate the total length needed and copy every queued message to the provided heap pointer.

long __thiscall OSF_SCALL::GetCoalescedBuffer(OSF_SCALL *this,_RPC_MESSAGE *param_1,int param_2)

{
//...  
  uVar4 = param_1->RpcFlags & 0x4000;
  lVar1 = 0;
  local_res18 = 0;
  uVar2 = uVar4 | param_2;
  this_00 = &this->field426_0x1e0;
  local_res10 = param_1;
  RtlEnterCriticalSection();

  /* Total_Length which we can control with fragments (cf. ProcessReceivedPDU)*/
  uVar5 = *(uint *)&this->Total_Length;
  if (uVar5 != 0) {
    if (uVar2 != 0) {
      uVar5 = uVar5 + param_1->BufferLength;
    }
    lVar1 = OSF_SCONNECTION::TransGetBuffer((OSF_SCONNECTION *)this_00,&size,uVar5 + 0x18);
    if (lVar1 == 0) {
      _Dst_00 = (void *)((longlong)size + 0x18);
      _Dst = _Dst_00;
      size = _Dst_00;
      if ((uVar2 != 0) && (param_1->Buffer != (void *)0x0)) {
        memcpy(_Dst_00,param_1->Buffer,(ulonglong)param_1->BufferLength);
        size = (void *)((ulonglong)param_1->BufferLength + (longlong)_Dst_00);
        (**(code **)(*(longlong *)this->Connection + 0x40))();
        _Dst = size;
        if ((uVar4 != 0) &&
           ((*(void **)((longlong)param_1->ReservedForRuntime + 8) = _Dst_00,
            *(int *)&this->field_0x1cc == 0 && (param_1->Buffer == (void *)this->field214_0xe0)))) {
          this->field214_0xe0 = (longlong)_Dst_00;
        }
      }
      while (_Src = (void *)QUEUE::TakeOffQueue(&this->queue,(undefined4 *)&size),
            _Src != (void *)0x0) {
        uVar3 = (ulonglong)size & 0xffffffff;
        memcpy(_Dst,_Src,(ulonglong)size & 0xffffffff);
        (**(code **)(*(longlong *)this->Connection + 0x40))();
        _Dst = (void *)((longlong)_Dst + uVar3);
      }
      //...
  }
//...
}
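Stripped of the decompiler noise, the problematic pattern can be summarized as follows; this is a simplified model with our own names and types, not the real prototypes:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct fragment { void *data; uint32_t size; };

/* total_length comes from ProcessReceivedPDU() and may have wrapped around. */
static void *coalesce(struct fragment *frags, size_t count, uint32_t total_length)
{
    /* The allocation is based on the (possibly wrapped) 32-bit sum... */
    uint8_t *buffer = malloc((size_t)total_length + 0x18);
    if (buffer == NULL)
        return NULL;

    /* ...but each fragment is copied with its real size, so a wrapped
     * total_length would turn this loop into a heap buffer overflow. */
    uint8_t *dst = buffer + 0x18;
    for (size_t i = 0; i < count; i++) {
        memcpy(dst, frags[i].data, frags[i].size);
        dst += frags[i].size;
    }
    return buffer;
}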

Dynamic analysis

illustration

We catch the first RPC packet by putting a breakpoint on ProcessReceivedPDU(). We can see the RPC request header with the flags field set to 0x01, which corresponds to the PFC_FIRST_FRAG flag. At this point the rdi register contains the address of the RPC packet, 00000000004CAE40; dumping that address gives us our first packet.

Here is a middle fragment, with the flags field set to 0x00.

illustration

The fragments are loaded one by one into the rdi register on each ProcessReceivedPDU() call, before being loaded into the queue by the PutOnQueue() function.

illustration

In OSF_SCALL::Receive(), the code continuously waits for incoming PDUs until the last fragment is received or an error occurs.

ProcessReceivedPDU() is called for each incoming fragment. GetCoalescedBuffer() is called when the last fragment is received, or once the total length has grown beyond alloc_hint; it then reassembles all the fragments received from the client, trusting a total size which might have overflowed.

Even though such an allocation would have problematic consequences, triggering the overflow would require sending several gigabytes of data, which seems impossible to do before the receiving end times out, even in ideal conditions. This path therefore looks like a dead end.
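As a back-of-the-envelope check of that claim, assuming fragments capped at the maximum 16-bit frag_length of 0xFFFF bytes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t wrap_threshold = 1ULL << 32;  /* smallest sum that wraps a 32-bit counter */
    const uint64_t max_fragment   = 0xFFFF;      /* frag_length is a 16-bit field            */

    /* Roughly 65,538 fragments, i.e. a little over 4 GiB on the wire, all of
     * which must arrive before the receiving end gives up on the call. */
    printf("fragments needed: %llu\n",
           (unsigned long long)((wrap_threshold + max_fragment - 1) / max_fragment));
    return 0;
}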

In conclusion

The scenarios we discussed would indeed have terrible implications if exploited successfully, as they amount to nothing less than unauthenticated, 0-click remote code execution as the SYSTEM user over the network. But in the case of the first, that is only because, over a year after a partial fix, PetitPotam is still a thing, and it is what makes the scenario a 0-click RCE; taking preventive measures against PetitPotam seems necessary at this point. The second scenario seems impossible to pull off in practice.

CVE-2022-26809 itself remains quite obscure. Microsoft’s description is puzzling, and the information needed to really understand the vulnerability is lost in noise: many sources repost from each other without adding any value, flirting with disinformation to surf on the buzz created by the 9.8 CVSS rating, while malicious actors publish fake proofs of concept. Reverse engineering patches is not an exact science, but it is what the community has to rely on in these cases, as there is nothing else to work with. It is a double-edged sword: nefarious actors suffer from the same problem, but Microsoft has shown in the past that they often do not fully patch a vulnerability (PetitPotam is a perfect example), and in the end you have to trust them and their close partners with your security, and hope that no attacker with sufficient resources will be able to bypass potentially partial fixes.

This is once again a good demonstration that security should not be a passive activity. Official updates are important and will take care of 98% of the potential flaws, but if you cannot afford the remaining 2%, you should remain proactive in deploying strong preventive measures and reducing your attack surface, so as to make your systems a nightmare for any attacker. In the end, your security remains in your hands.

Timeline

illustration