I recently bought a new NAS after my old one turned ten. I am very satisfied with the reliability of my old QNAP, which still receives software updates. That makes me a happy customer, so I saw no reason to switch brands. After the initial setup (networking, firmware update, disabling most services) I did not get around to finishing the configuration for a few days. It basically sat there from late May 2020 to early July 2020, and somehow it contracted QSnatch in that timeframe. I’ll explain below what I did differently after that.
That NAS was not accessible from the internet, no third-party software was installed, all passwords met very high standards (30 characters of alphanumeric randomness), and it never had any of the “Stations” (Music, Photo, …) activated. It just sat in my home network, unused. So while there have been no media reports of QSnatch since November 2019, it seems to be alive and well somehow. Either the device came with QSnatch out of the box, or QTS 4.4.2 (probably build 1310 or 1320, all of which were said to be immune until 4.4.3 was released on 2020-07-02) was still vulnerable.
You can probably imagine my surprise when, after the upgrade to QTS 4.4.3, QNAP’s Malware Remover happily reported an infection with MR1905 (QSnatch Malware).
Assessing the situation
So I started over. The NAS is currently not on my network; it is connected directly to my laptop. I reset the NAS to factory defaults, re-installed QTS 4.4.3 manually by downloading the firmware to my laptop, checking it for integrity, and then uploading it to the NAS by hand. After that I did another factory reset. With the NAS still not connected to any network, I went over every single option available and disabled everything except SMBv3 file serving, SSH, and HTTPS for managing the NAS. No Bonjour, no UPnP, no NFS. I did not install any additional QNAP software, and I stopped and uninstalled everything in the App Center that offered the option.
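The integrity check can be as simple as comparing the image’s checksum against the value published on QNAP’s download page. A minimal sketch — the filename is a stand-in, and a dummy file is created here only so the workflow runs end to end:

```shell
# Sketch of a firmware integrity check. On a real download you would put
# the hash from QNAP's download page into the checksum file; here we
# generate a dummy image so the check can be demonstrated.
img=firmware.img
printf 'dummy firmware payload' > "$img"

sha256sum "$img" > firmware.sha256    # stand-in for the vendor-published hash
sha256sum -c firmware.sha256          # prints "firmware.img: OK" on a match
```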
So, basically, nothing should be running now. Right?
Wrong. There are still 276 processes running:
[~] # ps -ef
PID Uid VmSize Stat Command
1 admin 1888 S init
2 admin SW [kthreadd]
3 admin IW [kworker/0:0]
4 admin IW< [kworker/0:0H]
7 admin IWN [mm_percpu_wq]
8 admin SW [ksoftirqd/0]
9 admin IW [rcu_sched]
10 admin IW [rcu_bh]
11 admin SW [migration/0]
12 admin SW [watchdog/0]
13 admin SWN [cpuhp/0]
14 admin SWN [cpuhp/1]
15 admin SW [watchdog/1]
16 admin SW [migration/1]
17 admin SW [ksoftirqd/1]
19 admin IW< [kworker/1:0H]
20 admin SWN [cpuhp/2]
21 admin SW [watchdog/2]
22 admin SW [migration/2]
23 admin SW [ksoftirqd/2]
25 admin IW< [kworker/2:0H]
26 admin SWN [cpuhp/3]
27 admin SW [watchdog/3]
28 admin SW [migration/3]
29 admin SW [ksoftirqd/3]
31 admin IW< [kworker/3:0H]
32 admin SW [kdevtmpfs]
33 admin IW< [netns]
38 admin IW [kworker/3:1]
57 admin IW [kworker/1:1]
494 admin SW [khungtaskd]
495 admin SW [oom_reaper]
496 admin IW< [writeback]
498 admin SW [kcompactd0]
499 admin SWN [ksmd]
500 admin IW< [crypto]
501 admin IW< [kintegrityd]
502 admin IW< [kblockd]
718 admin IW< [tifm]
724 admin IW< [ata_sff]
742 admin IW< [md]
759 admin IW< [watchdogd]
862 admin IW< [rpciod]
863 admin IW< [xprtiod]
886 admin SW [kauditd]
895 admin SW [kswapd0]
897 admin SW [ecryptfs-kthrea]
899 admin IW< [nfsiod]
900 admin IW< [cifsiod]
901 admin IW< [cifsoplockd]
945 admin IW< [kthrotld]
1552 admin IW< [knbd-recv]
1589 admin IW< [kmpath_rdacd]
1596 admin IW< [nvme-wq]
1607 admin SW [scsi_eh_0]
1608 admin IW< [scsi_tmf_0]
1611 admin SW [scsi_eh_1]
1612 admin IW< [scsi_tmf_1]
1653 admin SW [scsi_eh_2]
1654 admin IW< [scsi_tmf_2]
1657 admin SW [scsi_eh_3]
1658 admin IW< [scsi_tmf_3]
1661 admin SW [scsi_eh_4]
1662 admin IW< [scsi_tmf_4]
1665 admin SW [scsi_eh_5]
1666 admin IW< [scsi_tmf_5]
1669 admin SW [scsi_eh_6]
1670 admin IW< [scsi_tmf_6]
1673 admin SW [scsi_eh_7]
1674 admin IW< [scsi_tmf_7]
1677 admin SW [scsi_eh_8]
1678 admin IW< [scsi_tmf_8]
1681 admin SW [scsi_eh_9]
1682 admin IW< [scsi_tmf_9]
1685 admin SW [scsi_eh_10]
1686 admin IW< [scsi_tmf_10]
1689 admin SW [scsi_eh_11]
1690 admin IW< [scsi_tmf_11]
1693 admin SW [scsi_eh_12]
1694 admin IW< [scsi_tmf_12]
1697 admin SW [scsi_eh_13]
1698 admin IW< [scsi_tmf_13]
1732 admin IW [kworker/u8:8]
1734 admin RW [kworker/u8:10]
1780 admin SW [rc0]
1787 admin IW< [raid5wq]
1789 admin IW< [dm-block-clone]
1791 admin IW< [dm_bufio_cache]
1792 admin IW< [kmpathd]
1793 admin IW< [kmpath_handlerd]
1797 admin SW [irq/3-mmc0]
1801 admin SW [irq/130-0000:00]
1803 admin IW [kworker/2:2]
1804 admin SW [irq/39-mmc1]
1855 admin SW [i915/signal:0]
1856 admin SW [i915/signal:1]
1857 admin SW [i915/signal:2]
1858 admin SW [i915/signal:4]
1943 admin SW [mmcqd/1]
1944 admin SW [mmcqd/1boot0]
1945 admin SW [mmcqd/1boot1]
1946 admin SW [mmcqd/1rpmb]
1963 admin IW [kworker/0:1]
1966 admin IW< [kworker/2:1H]
1967 admin IW< [kworker/1:1H]
2253 admin IW< [bioset]
2254 admin IW< [kcopyd]
2255 admin IW< [bioset]
2256 admin IW< [kcopyd_tracked]
2257 admin IW< [bioset]
2266 admin IW< [drbd-reissue]
2286 admin IW< [kworker/0:1H]
2366 admin 1708 S < udevd --daemon
2409 admin IW< [kworker/3:1H]
2479 admin SW [md9_raid1]
2492 admin SW [jbd2/md9-8]
2493 admin IW< [ext4-rsv-conver]
2506 admin SW [md13_raid1]
2642 admin 10796 S /sbin/hal_daemon -f
2714 admin SW [md256_raid1]
2736 admin SW [md322_raid1]
2774 admin SW [md1_raid5]
2789 admin IW [kworker/3:2]
2815 admin IW< [drbd1_submit]
2823 admin SW [drbd_w_r1]
2938 admin IW< [kdmflush]
2941 admin IW< [bioset]
2945 admin IW< [kdmflush]
2947 admin IW< [bioset]
2952 admin IW< [kdmflush]
2954 admin IW< [bioset]
2957 admin IW< [kdmflush]
2959 admin IW< [bioset]
2962 admin IW< [kdmflush]
2978 admin IW< [bioset]
2986 admin IW< [kdmflush]
2988 admin IW< [bioset]
2989 admin IW< [kcopyd]
2990 admin IW< [bioset]
2991 admin IW< [dm-thin]
2992 admin IW< [dm-thin-paralle]
2993 admin IW< [dm-tier]
2994 admin IW< [dm-tier]
2995 admin IW< [dm-tier]
2996 admin IW< [dm-tier]
2997 admin IW< [dm-tier-discard]
2998 admin IW< [kcopyd]
2999 admin IW< [bioset]
3000 admin IW< [dm-tier-cache-t]
3001 admin IW< [bioset]
3006 admin IW< [kdmflush]
3008 admin IW< [bioset]
3014 admin IW< [kdmflush]
3017 admin IW< [bioset]
3359 admin IW< [kdmflush]
3361 admin IW< [bioset]
3504 admin IW< [vfio-irqfd-clea]
3529 admin 684 S /sbin/lvmetad
3624 admin SWN [kcp_p]
3625 admin DWN [kcp_c]
3748 admin IW [kworker/1:2]
3749 admin SW [jbd2/md13-8]
3750 admin IW< [ext4-rsv-conver]
4258 admin SW [notify thread]
4266 admin 740 S < qWatchdogd: keeping alive every 5 seconds...
4356 admin IW< [ipv6_addrconf]
4479 admin 1472 S /sbin/netwatchdog -d
4511 admin 4164 S /sbin/cs_daemon
4524 admin 4956 S /sbin/cs_qdaemon
4782 admin 856 S /sbin/modagent
5542 admin 704 S /usr/local/network/bin/logrotate /var/log/network/err.log 102400
5544 admin 1152 S tail -f /var/log/network/err_log
5545 admin 700 S /usr/local/network/bin/logrotate /var/log/network/events.log 102400
5548 admin 1008 S tail -f /var/log/network/events_log
5870 admin 22148 S /mnt/ext/opt/Python/bin/python ./manage.pyc runfcgi method=threaded socket=/tmp/netmgr.sock pidfile=/tmp/netmgr.pid
5910 admin 3400 S /mnt/ext/opt/netmgr/util/redis/redis-server *:0
5920 admin 28712 S /mnt/ext/opt/Python/bin/python /mnt/ext/opt/netmgr/api/core/asd.pyc
6578 admin IW< [bond0]
6581 admin IW< [bond1]
7490 admin 104 S /sbin/rdnssd -r /var/lib/rdnssd/ -p /var/run/network/rdnssd.pid -u admin
7491 admin 1624 S /sbin/rdnssd -r /var/lib/rdnssd/ -p /var/run/network/rdnssd.pid -u admin
7503 admin 7248 S /usr/local/bin/ifd
7616 admin 7876 S /usr/sbin/dhclient -6 -nw -S -cf /etc/dhcp/dh6dns.conf -lf /dev/null -e FPATH=/var/lib/dh6dns -sf /sbin/dh6dns-script -pf /var/lib/dh6dns/eth0.pid eth0
7638 admin 21856 S python /usr/local/network/nmd/nmd.pyc
7643 admin 15964 S python /usr/local/network/nmd/nmd.pyc
7644 admin 18192 S python /usr/local/network/nmd/nmd.pyc
7997 admin 18416 S python /usr/local/network/nmd/nmd.pyc
7998 admin 18920 S python /usr/local/network/nmd/nmd.pyc
7999 admin 18080 S python /usr/local/network/nmd/nmd.pyc
8001 admin 15696 S python /usr/local/network/nmd/nmd.pyc
8002 admin 17312 S python /usr/local/network/nmd/nmd.pyc
8003 admin 17220 S python /usr/local/network/nmd/nmd.pyc
8008 admin 15256 S python /usr/local/network/nmd/nmd.pyc
8010 admin 15200 S python /usr/local/network/nmd/nmd.pyc
8012 admin 18388 S python /usr/local/network/nmd/nmd.pyc
8014 admin 2888 S /usr/local/bin/rates_monitor_start
8025 admin 22020 S /mnt/ext/opt/Python/bin/python /mnt/ext/opt/netmgr/api/core/ip_monitor.pyc
8426 admin 1896 S /sbin/dnsmasq
9063 admin IW< [iscsi_eh]
9092 admin SW [qnap_et]
9121 admin 1428 S /sbin/iscsid --config=/etc/config/iscsi/sbin/iscsid.conf --initiatorname=/etc/iscsi/initiatorname.iscsi
9122 admin 2720 S < /sbin/iscsid --config=/etc/config/iscsi/sbin/iscsid.conf --initiatorname=/etc/iscsi/initiatorname.iscsi
9134 admin 6248 S /sbin/vdd_control -d
9329 admin 9596 S /sbin/qpkgd -d0
9860 admin 4284 S /usr/local/bin/ql_daemon -d 7
10413 admin 29080 S /usr/local/sbin/ncdb --defaults-file=/mnt/ext/opt/NotificationCenter/etc/nc-mariadb.conf
10444 admin 1964 S /usr/local/sbin/ncloud
10458 admin 12924 S /usr/local/sbin/ncd
11236 guest 2416 S /usr/sbin/dbus-daemon --system
11728 admin 4932 S /usr/local/sbin/Qthttpd -p 80 -nor -nos -u admin -d /home/Qhttpd -c **.*
11777 admin 3248 S /usr/sbin/cupsd -C /etc/config/cups/cupsd.conf -s /etc/config/cups/cups-files.conf
12105 admin 7992 S < /usr/local/samba/sbin/winbindd -s /etc/config/smb.conf
12288 admin 12848 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12291 admin 6012 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12293 admin 6012 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12298 admin 13372 S < /usr/local/samba/sbin/winbindd -s /etc/config/smb.conf
12299 admin 3772 S < /usr/local/samba/sbin/winbindd -s /etc/config/smb.conf
12300 admin 6324 S < /usr/local/samba/sbin/winbindd -s /etc/config/smb.conf
12301 admin 7792 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12302 admin 7868 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12303 admin 6980 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12304 admin 6976 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12305 admin 6984 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12306 admin 6984 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12307 admin 6984 S /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
12356 admin 5680 S /usr/local/samba/sbin/nmbd -l /var/log -D -s /etc/config/smb.conf
12445 admin 14848 S /mnt/ext/opt/Python/bin/python2 /sbin/wsd.py
12637 admin 5280 S /usr/local/sbin/_thttpd_ -p 58080 -nor -nos -u admin -d /home/httpd -c **.* -h 127.0.0.1 -i /var/lock/._thttpd_.pid
12646 admin 4360 S /usr/bin/qooba --service spotlight
12689 admin 10820 S php-fpm: master process (/etc/php-fpm-sys-proxy.conf)
12690 admin 7296 S php-fpm: pool www
12691 admin 7296 S php-fpm: pool www
12793 admin 7692 S < /usr/local/apache/bin/apache_proxy -k start -f /etc/apache-sys-proxy.conf
13449 admin 3780 S /usr/sbin/ntpdated
13496 admin 6968 S /usr/sbin/upsutil
13523 admin IW [kworker/u8:0]
13564 admin 2000 S /usr/sbin/crond -l 9 -c /tmp/cron/crontabs
13567 admin 10332 S /usr/local/apache/bin/apache_proxys -k start -f /etc/apache-sys-proxy-ssl.conf
13921 admin 4028 S /usr/sbin/sshd -f /etc/config/ssh/sshd_config -p 22
14151 admin 3896 S /usr/bin/lunportman
14224 admin 2252 S /sbin/rfsd -i -f /etc/rfsd.conf
14238 admin 4608 S /usr/local/bin/rfsd_qmonitor -f:/tmp/rfsd_qmonitor.conf
14495 admin 14996 S /sbin/bcclient
14757 admin 1556 S N /sbin/acpid
14788 admin 3164 S /usr/sbin/rsyslogd -f /etc/rsyslog_only_klog.conf -c4 -M /usr/local/lib/rsyslog/
14831 admin IW [kworker/2:1]
14912 admin 2480 S /sbin/gen_bandwidth -r -i 5
15244 admin 4556 S qNoticeEngined: Write notice is enabled...
15256 admin 4916 S /sbin/qsyslogd
15273 admin 5324 S /sbin/qShield
15277 admin 6308 S qLogEngined: Write log is disabled...
15426 admin 2660 S /bin/sh /etc/rcS.d/S99cloudinstall_report_complete_daemon start
15531 admin 7764 S /usr/bin/qstorman
15553 admin 5196 S /sbin/sdmd --daemon
15581 admin 7524 S /usr/bin/qsnapman
16868 admin 8160 S /usr/bin/qsnapman-alive
17842 admin 1000 S sleep 86400
17881 admin 7672 S /usr/bin/qsnapman-smart
18152 admin 8828 S /usr/bin/qsnapman-recyc
18453 admin 964 S /bin/sleep 1
18491 admin 8256 R /usr/local/bin/python /usr/local/network/ni_utils/API.pyc cu Check_access_internet ["eth0"]
18493 admin 5292 S qmonitor -pid:99999999 -client:test -reg:/qqqqqqqqqqqqqqqq -filter:0x7fff
18494 admin 2380 R ps -ef
19789 admin 4880 S /sbin/daemon_mgr
19813 admin 2044 S /usr/share/qnas_console_install/drawPic
20614 admin 4116 S < /usr/local/apache/bin/fcgi-pm -k start -f /etc/apache-sys-proxy.conf
20619 admin 11644 S < /usr/local/apache/bin/apache_proxy -k start -f /etc/apache-sys-proxy.conf
20838 admin 4468 S /usr/local/apache/bin/fcgi-pm -k start -f /etc/apache-sys-proxy-ssl.conf
20841 admin 15108 S /usr/local/apache/bin/apache_proxys -k start -f /etc/apache-sys-proxy-ssl.conf
22802 admin 3488 S /bin/sh /sbin/qdesk_soldier
23575 admin 7760 S /sbin/upnpcd -i 300
23938 admin 1952 S /sbin/getty 115200 tty1
23939 admin 1952 S /sbin/getty 115200 tty2
23943 admin 1832 S /sbin/getty -L ttyS0 115200 vt100
27380 admin 8592 R sshd: admin@pts/0
27387 admin 3772 S -sh
And a lot of ports are open, too:
[~] # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 12288/smbd
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN 8426/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 13921/sshd
tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN 11777/cupsd
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 12288/smbd
tcp 0 0 127.0.0.1:58080 0.0.0.0:* LISTEN 12637/_thttpd_
tcp 0 0 :::139 :::* LISTEN 12288/smbd
tcp 0 0 :::8080 :::* LISTEN 12793/apache_proxy
tcp 0 0 :::80 :::* LISTEN 11728/Qthttpd
tcp 0 0 :::22 :::* LISTEN 13921/sshd
tcp 0 0 :::631 :::* LISTEN 11777/cupsd
tcp 0 0 :::443 :::* LISTEN 13567/apache_proxys
tcp 0 0 :::445 :::* LISTEN 12288/smbd
udp 0 0 0.0.0.0:35255 0.0.0.0:* 12445/python2
udp 0 0 0.0.0.0:59927 0.0.0.0:* 8426/dnsmasq
udp 0 0 0.0.0.0:3702 0.0.0.0:* 12445/python2
udp 0 0 255.255.255.255:8097 0.0.0.0:* 14495/bcclient
udp 0 0 255.255.255.255:8097 0.0.0.0:* 14495/bcclient
udp 0 0 127.0.1.1:53 0.0.0.0:* 8426/dnsmasq
udp 0 0 192.168.42.255:137 0.0.0.0:* 12356/nmbd
udp 0 0 192.168.42.24:137 0.0.0.0:* 12356/nmbd
udp 0 0 0.0.0.0:137 0.0.0.0:* 12356/nmbd
udp 0 0 192.168.42.255:138 0.0.0.0:* 12356/nmbd
udp 0 0 192.168.42.24:138 0.0.0.0:* 12356/nmbd
udp 0 0 0.0.0.0:138 0.0.0.0:* 12356/nmbd
udp 0 0 0.0.0.0:22359 0.0.0.0:* 7616/dhclient
udp 0 0 :::57614 :::* 7616/dhclient
udp 0 0 fe80::265e:beff:fe3b:7405:546 :::* 7616/dhclient
I was particularly baffled by cups (which is not even an option anywhere), by bcclient (which appears to be used in the discovery process, but is not well documented), and by the fact that a disabled webserver and a management interface set to HTTPS exclusively still leave two webserver ports (80 and 8080) open. I have no idea what _thttpd_ is, but at least it is only listening on the loopback interface. Why there are two instances of dhclient listening on UDP is also beyond me: I am not running a server, and the IP addresses are statically configured, so there is no use for DHCP clients either.
Another thing that bugs me: ports 137/udp, 138/udp and 139/tcp are all used by the legacy SMBv1/NetBIOS protocol that I disabled.
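To see at a glance which processes still hold those NetBIOS ports, the netstat output can be filtered. Shown here on sample lines copied from the listing above; on the NAS you would pipe netstat -tulpn into the grep directly:

```shell
# Filter a netstat listing for the legacy NetBIOS ports 137, 138 and 139.
# Sample lines are taken from the listing above; on the NAS, replace the
# here-doc with:  netstat -tulpn | grep -E ':(137|138|139)[[:space:]]'
grep -E ':(137|138|139)[[:space:]]' <<'EOF'
tcp  0 0 0.0.0.0:139        0.0.0.0:*  LISTEN  12288/smbd
tcp  0 0 0.0.0.0:445        0.0.0.0:*  LISTEN  12288/smbd
udp  0 0 0.0.0.0:137        0.0.0.0:*          12356/nmbd
udp  0 0 0.0.0.0:138        0.0.0.0:*          12356/nmbd
EOF
```

This keeps the 137/138/139 lines (smbd and nmbd) and drops the 445 line, which belongs to the SMBv2/3 service I actually want.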
Even without any users on the NAS, with no traffic going in or out, and with no meaningful processes running, the system load stays above 1:
[~] # uptime
15:51:38 up 46 min, load average: 1.60, 1.42, 1.37
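Those numbers come straight from the kernel’s /proc/loadavg, which you can also read directly over SSH if you do not trust the uptime binary’s formatting:

```shell
# /proc/loadavg holds the 1-, 5- and 15-minute load averages, followed by
# the running/total task counts and the last PID assigned by the kernel.
read one five fifteen tasks lastpid < /proc/loadavg
echo "1min=${one} 5min=${five} 15min=${fifteen} tasks=${tasks}"
```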
Since all of this was not really successful, I thought I should lock things down some more. The security tab in QTS does little more than filter whole IP addresses, so that’s out. Luckily, the more recent QNAP NAS models come with iptables out of the box. Let’s see what’s in there:
[~] # iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DROP all -- anywhere anywhere match-set BRNOIPSET src,dst
OK, nothing to see here.
On to a solution
Let’s add our own rules. We can use autorun.sh for that, which is present on more recent NAS models. I mostly use this iptables pattern by Vivek Gite, but I’m extending it for other ports.
The precise commands will differ based on your exact NAS model, see the autorun document linked above.
[~] # mount /dev/mmcblk1p6 /tmp/config/
[~] # vi /tmp/config/autorun.sh
[~] # chmod +x /tmp/config/autorun.sh
[~] # umount /tmp/config/
Here are the contents of my autorun.sh:
#!/bin/sh
# Flushing all rules
iptables -F
iptables -X
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
# Allow unlimited traffic on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow incoming ssh and https only
iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 0/0 --sport 443 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# make sure nothing comes or goes out of this box
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
I like to test these rules first, before activating autorun.sh at the system level. You can simply run the file with /tmp/config/autorun.sh, and afterwards test whether you can still connect via SSH and/or HTTPS.
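Since a typo in a DROP-by-default ruleset can lock you out of SSH entirely, I would pair that test with a dead-man’s switch. This is my own sketch, not a QTS feature; the IPT variable defaults to a dry run (echo) so the commands can be inspected first, and would be set to plain iptables on the NAS:

```shell
# Dead-man's switch: revert to ACCEPT policies after two minutes unless
# the SSH test succeeded and we cancel the timer in time.
IPT="${IPT:-echo iptables}"   # dry run by default; set IPT=iptables on the NAS

revert_rules() {
  $IPT -P INPUT ACCEPT
  $IPT -P OUTPUT ACCEPT
  $IPT -F
}

revert_rules                  # dry run: shows what the failsafe would execute

# Schedule the failsafe, then apply the new rules and test SSH.
( sleep 120 && revert_rules ) >/dev/null 2>&1 &
REVERT_PID=$!

# ... run /tmp/config/autorun.sh and open a fresh SSH connection here ...

kill "$REVERT_PID" 2>/dev/null   # still connected: cancel the revert
```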
Your iptables should now look like this:
[~] # iptables --list
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:ssh state NEW,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:https state NEW,ESTABLISHED
DROP all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
Chain OUTPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp spt:ssh dpts:login:65535 state ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spt:https dpts:login:65535 state ESTABLISHED
DROP all -- anywhere anywhere
You can also test whether other services are still reachable (which they should not be) by connecting to an open port via nc (or any other tool). The test should simply time out after five seconds:
nc -v -w 5 192.168.42.24 631
nc: connect to 192.168.42.24 port 631 (tcp) timed out: Operation now in progress
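The same check can be extended to a quick sweep over all previously open ports, using bash’s built-in /dev/tcp so it works even where nc is not installed. The host defaults to the NAS address from the listings above:

```shell
# Sweep the previously open ports; with the DROP policy in place, every
# connection attempt should fail rather than connect.
HOST="${HOST:-192.168.42.24}"
for port in 80 139 445 631 8080; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
    echo "port $port still open!"
  else
    echo "port $port filtered/closed"
  fi
done
```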
I can now proceed to add incoming rules for Samba (445/tcp) and outgoing rules for DNS (53/udp), SMTP (587/tcp for me, but 465/tcp is also common) and NTP (123/udp); that is about all I want to allow. I will probably also add one more rule for monitoring, but not today. Note that this also makes automatic updating of the NAS impossible for now; I can always update the firmware manually, of course.
#!/bin/sh
# CC modifications
# Flushing all rules
iptables -F
iptables -X
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
# Allow unlimited traffic on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow incoming ssh and https only
iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 0/0 --sport 443 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# Allow incoming SMB connections
iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 --dport 445 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 0/0 --sport 445 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# Allow DNS to our own server
iptables -A OUTPUT -p udp -s 0/0 --sport 513:65535 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -d 0/0 --sport 53 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# Allow outgoing SMTP on Port 587
iptables -A OUTPUT -p tcp -s 0/0 --sport 513:65535 --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -d 0/0 --sport 587 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# Allow outgoing NTP requests
iptables -A OUTPUT -p udp -s 0/0 --sport 513:65535 --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -d 0/0 --sport 123 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# make sure nothing comes or goes out of this box
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
So now my iptables rules look like this:
[~] # iptables --list
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:ssh state NEW,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:https state NEW,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:445 state NEW,ESTABLISHED
ACCEPT udp -- anywhere anywhere udp spt:domain dpts:who:65535 state ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spt:587 dpts:login:65535 state ESTABLISHED
ACCEPT udp -- anywhere anywhere udp spt:ntp dpts:who:65535 state ESTABLISHED
DROP all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
Chain OUTPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp spt:ssh dpts:login:65535 state ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spt:https dpts:login:65535 state ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spt:445 dpts:login:65535 state ESTABLISHED
ACCEPT udp -- anywhere anywhere udp spts:who:65535 dpt:domain state NEW,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp spts:login:65535 dpt:587 state NEW,ESTABLISHED
ACCEPT udp -- anywhere anywhere udp spts:who:65535 dpt:ntp state NEW,ESTABLISHED
DROP all -- anywhere anywhere
Please note that this disables a lot of things that you may want to use, but it should make the attack surface a lot smaller. In particular, automatic software updates and the QNAP App Center won’t work with these exact settings. I may lock down the accessible ports even further by limiting them to a known set of IP addresses, but for now this is good enough to connect the NAS to my regular network again. I can’t guarantee that this will prevent future infections, as there is no conclusive information on the infection vector so far.
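For reference, here is what the tighter variant could look like for the SSH rule pair, restricted to the local subnet seen in the listings above (shown as a dry run via echo so the commands can be inspected; set IPT=iptables on the NAS to apply them):

```shell
# Sketch: the SSH rules from autorun.sh, limited to the local subnet.
# 192.168.42.0/24 matches the addresses in the listings above; adjust
# it to your own network.
IPT="${IPT:-echo iptables}"   # dry run by default
$IPT -A INPUT  -p tcp -s 192.168.42.0/24 --sport 513:65535 --dport 22 \
     -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -p tcp -d 192.168.42.0/24 --sport 22 --dport 513:65535 \
     -m state --state ESTABLISHED -j ACCEPT
```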
This may not be the ruleset for you, but you will probably be able to adapt it from here.
Open questions
So, why are all of these ports even open? Why can’t I deactivate more of them from within the management interface? How was firmware 4.4.2 infected? How can I trust the device after an infection? How would I verify that only QNAP’s QTS is running, and that no part of the malware made it into the BIOS or firmware, or is virtualizing QTS itself while remaining active?
While I can’t answer these questions, and can’t give you any support with your individual NAS, please let me know if this works for you or if you have additional measures in place.
I sometimes additionally allow myself to ssh out:
# iptables -A OUTPUT -p tcp -s 0/0 --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
# iptables -A INPUT -p tcp -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT