Nie Yong's Blog

Recording bits and pieces of work and study.

Quick Notes: SYN Flooding Warnings on the Linux 2.6.32 Kernel

Preface

The newly provisioned server runs kernel 2.6.32. The existing TCP server was moved straight onto this new-kernel Linux machine, and running dmesg shows a large number of SYN flooding warnings:

possible SYN flooding on port 8080. Sending cookies.

The parameters tuned for the old 2.6.18 kernel no longer help on 2.6.32: simply raising "net.ipv4.tcp_max_syn_backlog" has no effect anymore.

The only option was to read the 2.6.32 source again; the analysis follows.

The direct conclusions are collected in the final summary; if you are in a hurry, feel free to jump straight there.

Analysis of the backlog value in the Linux 2.6.32 kernel

net/socket.c:

SYSCALL_DEFINE2(listen, int, fd, int, backlog)
{
    struct socket *sock;
    int err, fput_needed;
    int somaxconn;

    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock) {
        somaxconn = sock_net(sock->sk)->core.sysctl_somaxconn;
        if ((unsigned)backlog > somaxconn)
            backlog = somaxconn;

        err = security_socket_listen(sock, backlog);
        if (!err)
            err = sock->ops->listen(sock, backlog);

        fput_light(sock->file, fput_needed);
    }
    return err;
}

net/ipv4/af_inet.c:

/*
 *  Move a socket into listening state.
 */
int inet_listen(struct socket *sock, int backlog)
{
    struct sock *sk = sock->sk;
    unsigned char old_state;
    int err;

    lock_sock(sk);

    err = -EINVAL;
    if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM)
        goto out;

    old_state = sk->sk_state;
    if (!((1 << old_state) & (TCPF_CLOSE | TCPF_LISTEN)))
        goto out;

    /* Really, if the socket is already in listen state
     * we can only allow the backlog to be adjusted.
     */
    if (old_state != TCP_LISTEN) {
        err = inet_csk_listen_start(sk, backlog);
        if (err)
            goto out;
    }
    sk->sk_max_ack_backlog = backlog;
    err = 0;

out:
    release_sock(sk);
    return err;
}

inet_listen calls inet_csk_listen_start, and the backlog argument it passes in reappears there under a new name: the const parameter nr_table_entries.

net/ipv4/inet_connection_sock.c:

int inet_csk_listen_start(struct sock *sk, const int nr_table_entries)
{
    struct inet_sock *inet = inet_sk(sk);
    struct inet_connection_sock *icsk = inet_csk(sk);
    int rc = reqsk_queue_alloc(&icsk->icsk_accept_queue, nr_table_entries);

    if (rc != 0)
        return rc;

    sk->sk_max_ack_backlog = 0;
    sk->sk_ack_backlog = 0;
    inet_csk_delack_init(sk);

    /* There is race window here: we announce ourselves listening,
     * but this transition is still not validated by get_port().
     * It is OK, because this socket enters to hash table only
     * after validation is complete.
     */
    sk->sk_state = TCP_LISTEN;
    if (!sk->sk_prot->get_port(sk, inet->num)) {
        inet->sport = htons(inet->num);

        sk_dst_reset(sk);
        sk->sk_prot->hash(sk);

        return 0;
    }

    sk->sk_state = TCP_CLOSE;
    __reqsk_queue_destroy(&icsk->icsk_accept_queue);
    return -EADDRINUSE;
}

The code below deals with connections in the TCP SYN_RECV state, i.e. half-open connections still in the handshake, waiting for the client to complete the third step.

/*
 * Maximum number of SYN_RECV sockets in queue per LISTEN socket.
 * One SYN_RECV socket costs about 80bytes on a 32bit machine.
 * It would be better to replace it with a global counter for all sockets
 * but then some measure against one socket starving all other sockets
 * would be needed.
 *
 * It was 128 by default. Experiments with real servers show, that
 * it is absolutely not enough even at 100conn/sec. 256 cures most
 * of problems. This value is adjusted to 128 for very small machines
 * (<=32Mb of memory) and to 1024 on normal or better ones (>=256Mb).
 * Note : Dont forget somaxconn that may limit backlog too.
 */
int reqsk_queue_alloc(struct request_sock_queue *queue,
              unsigned int nr_table_entries)
{
    size_t lopt_size = sizeof(struct listen_sock);
    struct listen_sock *lopt;
    nr_table_entries = min_t(u32, nr_table_entries, sysctl_max_syn_backlog);
    nr_table_entries = max_t(u32, nr_table_entries, 8);
    nr_table_entries = roundup_pow_of_two(nr_table_entries + 1);
    lopt_size += nr_table_entries * sizeof(struct request_sock *); 
    if (lopt_size > PAGE_SIZE)
        lopt = __vmalloc(lopt_size,
            GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
            PAGE_KERNEL);
    else
        lopt = kzalloc(lopt_size, GFP_KERNEL);
    if (lopt == NULL)
        return -ENOMEM;

    for (lopt->max_qlen_log = 3;
         (1 << lopt->max_qlen_log) < nr_table_entries;
         lopt->max_qlen_log++);

    get_random_bytes(&lopt->hash_rnd, sizeof(lopt->hash_rnd));
    rwlock_init(&queue->syn_wait_lock);
    queue->rskq_accept_head = NULL;
    lopt->nr_table_entries = nr_table_entries;

    write_lock_bh(&queue->syn_wait_lock);
    queue->listen_opt = lopt;
    write_unlock_bh(&queue->syn_wait_lock);

    return 0;
}

The key is nr_table_entries: inside reqsk_queue_alloc it becomes an unsigned, modifiable variable, but its range is clamped.

For example, with the actual kernel parameter:

net.ipv4.tcp_max_syn_backlog = 65535

and a backlog of 8102 passed to listen() (no larger than net.core.somaxconn = 65535), then:

// take the min of the listen() backlog and sysctl_max_syn_backlog: 8102
nr_table_entries = min_t(u32, nr_table_entries, sysctl_max_syn_backlog);
// take the max of nr_table_entries and 8: still 8102
nr_table_entries = max_t(u32, nr_table_entries, 8);
// round up to the next power of two: roundup_pow_of_two(8102 + 1) = 8192
nr_table_entries = roundup_pow_of_two(nr_table_entries + 1);

The result: max_qlen_log = 13 (since 2^13 = 8192 = nr_table_entries).
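
To sanity-check this arithmetic, here is a minimal userspace sketch of the same clamping logic (not kernel code; roundup_pow_of_two_u32 is a stand-in for the kernel's roundup_pow_of_two macro):

#include <stdio.h>
#include <stdint.h>

/* stand-in for the kernel's roundup_pow_of_two(): smallest power of two >= x */
static uint32_t roundup_pow_of_two_u32(uint32_t x)
{
    uint32_t p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

int main(void)
{
    uint32_t backlog = 8102;          /* value passed to listen(), already capped by somaxconn */
    uint32_t max_syn_backlog = 65535; /* net.ipv4.tcp_max_syn_backlog */

    uint32_t nr = backlog;
    nr = nr < max_syn_backlog ? nr : max_syn_backlog; /* min_t */
    nr = nr > 8 ? nr : 8;                             /* max_t */
    nr = roundup_pow_of_two_u32(nr + 1);              /* 8103 -> 8192 */

    unsigned int max_qlen_log;
    for (max_qlen_log = 3; (1u << max_qlen_log) < nr; max_qlen_log++)
        ;

    /* prints: nr_table_entries = 8192, max_qlen_log = 13 */
    printf("nr_table_entries = %u, max_qlen_log = %u\n", nr, max_qlen_log);
    return 0;
}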

How max_qlen_log was computed in the 2.6.18 kernel

for (lopt->max_qlen_log = 6;
     (1 << lopt->max_qlen_log) < sysctl_max_syn_backlog;
     lopt->max_qlen_log++);
  1. Clearly, sysctl_max_syn_backlog takes part in the calculation; a large sysctl_max_syn_backlog leads to a correspondingly large max_qlen_log.
  2. With sysctl_max_syn_backlog = 65535, max_qlen_log = 16.
  3. So on 2.6.18 the half-open queue length is 2^16 = 65536.

The listen_sock structure records the number of half-open queue slots to manage in nr_table_entries, which is 8192 in this example.

/** struct listen_sock - listen state
 *
 * @max_qlen_log - log_2 of maximal queued SYNs/REQUESTs
 */
struct listen_sock {
    u8          max_qlen_log;
    /* 3 bytes hole, try to use */
    int         qlen;
    int         qlen_young;
    int         clock_hand;
    u32         hash_rnd;
    u32         nr_table_entries;
    struct request_sock *syn_table[0];
};

From the structure comment, 2^max_qlen_log is the maximum length of the half-open queue, i.e. the upper bound on qlen.

Now back to the function that reports SYN flooding:

net/ipv4/tcp_ipv4.c:

#ifdef CONFIG_SYN_COOKIES
static void syn_flood_warning(struct sk_buff *skb)
{
    static unsigned long warntime;

    if (time_after(jiffies, (warntime + HZ * 60))) {
        warntime = jiffies;
        printk(KERN_INFO
               "possible SYN flooding on port %d. Sending cookies.\n",
               ntohs(tcp_hdr(skb)->dest));
    }
}
#endif

The call site, with some code trimmed:

int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
{
......
#ifdef CONFIG_SYN_COOKIES
    int want_cookie = 0;
#else
#define want_cookie 0 /* Argh, why doesn't gcc optimize this :( */
#endif
    ......
    /* TW buckets are converted to open requests without
     * limitations, they conserve resources and peer is
     * evidently real one.
     */
    // is the half-open queue full? (and isn == 0)
    if (inet_csk_reqsk_queue_is_full(sk) && !isn) {
#ifdef CONFIG_SYN_COOKIES
        if (sysctl_tcp_syncookies) {
            want_cookie = 1;
        } else
#endif
        goto drop;
    }

    /* Accept backlog is full. If we have already queued enough
     * of warm entries in syn queue, drop request. It is better than
     * clogging syn queue with openreqs with exponentially increasing
     * timeout.
     */
    if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
        goto drop;

    req = inet_reqsk_alloc(&tcp_request_sock_ops);
    if (!req)
        goto drop;

    ......

    if (!want_cookie)
        TCP_ECN_create_request(req, tcp_hdr(skb));

    if (want_cookie) {
#ifdef CONFIG_SYN_COOKIES
        syn_flood_warning(skb);
        req->cookie_ts = tmp_opt.tstamp_ok;
#endif
        isn = cookie_v4_init_sequence(sk, skb, &req->mss);
    } else if (!isn) {
        ......
    }       
    ......
}

The function that decides whether the half-open queue is full is the crux; look at how it computes:

include/net/inet_connection_sock.h:

static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
{
    return reqsk_queue_is_full(&inet_csk(sk)->icsk_accept_queue);
}

include/net/request_sock.h:

static inline int reqsk_queue_is_full(const struct request_sock_queue *queue)
{
    // shift right by max_qlen_log bits
    return queue->listen_opt->qlen >> queue->listen_opt->max_qlen_log;
}

When this returns 1, the half-open queue is full.

The above only analyzes the condition for a full half-open queue. The bottom line is that the backlog passed in by the application matters a great deal: if it is too small, this expression easily evaluates to 1.

For example, with somaxconn = 128, sysctl_max_syn_backlog = 4096 and backlog = 511, the kernel first caps backlog at somaxconn = 128, so nr_table_entries ends up as 256 and max_qlen_log as 8. Once the half-open queue reaches 256 entries, 256 >> 8 = 1 and the queue counts as full.
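
As a rough illustration of this shift-based check, here is a userspace approximation (not the kernel structures) with the same numbers plugged in:

#include <stdio.h>

/* mirrors the reqsk_queue_is_full() test: non-zero once qlen reaches 2^max_qlen_log */
static int queue_is_full(int qlen, unsigned int max_qlen_log)
{
    return qlen >> max_qlen_log;
}

int main(void)
{
    unsigned int max_qlen_log = 8; /* somaxconn = 128 caps the backlog, giving nr_table_entries = 256 */

    printf("%d\n", queue_is_full(255, max_qlen_log)); /* 0: still room */
    printf("%d\n", queue_is_full(256, max_qlen_log)); /* 1: full, SYN cookies or drops kick in */
    return 0;
}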

How to set backlog depends on the specific application: the value has to be supplied when it calls listen.
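
For a plain C server this simply means passing an explicit backlog to listen(); a minimal sketch, with port 8080 and a backlog of 1024 chosen only as examples:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); exit(1); }

    /* pass an explicit backlog; the kernel still caps it at net.core.somaxconn */
    if (listen(fd, 1024) < 0) { perror("listen"); exit(1); }

    /* ... accept loop would go here ... */
    close(fd);
    return 0;
}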

Netty backlog handling

The TCP server uses Netty 3.7, a fairly old version. As for the backlog: if we do not specify a value ourselves, JDK 1.6 defaults to 50.

The evidence, from java.net.ServerSocket:

public void bind(SocketAddress endpoint, int backlog) throws IOException {
    if (isClosed())
        throw new SocketException("Socket is closed");
    if (!oldImpl && isBound())
        throw new SocketException("Already bound");
    if (endpoint == null)
        endpoint = new InetSocketAddress(0);
    if (!(endpoint instanceof InetSocketAddress))
        throw new IllegalArgumentException("Unsupported address type");
    InetSocketAddress epoint = (InetSocketAddress) endpoint;
    if (epoint.isUnresolved())
        throw new SocketException("Unresolved address");
    if (backlog < 1)
      backlog = 50;
    try {
        SecurityManager security = System.getSecurityManager();
        if (security != null)
        security.checkListen(epoint.getPort());
        getImpl().bind(epoint.getAddress(), epoint.getPort());
        getImpl().listen(backlog);
        bound = true;
    } catch(SecurityException e) {
        bound = false;
        throw e;
    } catch(IOException e) {
        bound = false;
        throw e;
    }
}

In Netty, the backlog is handled here:

org/jboss/netty/channel/socket/DefaultServerSocketChannelConfig.java:

@Override
public boolean setOption(String key, Object value) {
    if (super.setOption(key, value)) {
        return true;
    }

    if ("receiveBufferSize".equals(key)) {
        setReceiveBufferSize(ConversionUtil.toInt(value));
    } else if ("reuseAddress".equals(key)) {
        setReuseAddress(ConversionUtil.toBoolean(value));
    } else if ("backlog".equals(key)) {
        setBacklog(ConversionUtil.toInt(value));
    } else {
        return false;
    }
    return true;
}

Since we have to specify the backlog value ourselves, we can do it like this:

bootstrap.setOption("backlog", 8102); // setting it generously is fine; the kernel compares it with net.core.somaxconn and takes the lower value

Compared with Netty 4.0 this is a bit less convenient; see: http://www.blogjava.net/yongboy/archive/2014/07/30/416373.html

Summary

On the Linux 2.6.32 kernel, if you are not actually under a SYN flooding attack, you can reasonably raise:

sysctl -w net.core.somaxconn=32768

sysctl -w net.ipv4.tcp_max_syn_backlog=65535

sysctl -p

Also, do not forget to adjust the backlog value your TCP server passes to listen; if it is unset or too small, SYN flooding warnings can still appear. Start with something like 1024, observe for a while, and raise it gradually as the workload requires.

No matter what you set, the effective backlog is bounded by:

backlog <= net.core.somaxconn

and the half-open queue length is approximately:

half-open queue length = roundup_pow_of_two(min(backlog, net.ipv4.tcp_max_syn_backlog) + 1), i.e. at most about 2 * min(backlog, net.ipv4.tcp_max_syn_backlog)
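
For example, with backlog = 1024 and net.ipv4.tcp_max_syn_backlog = 65535, the half-open queue length works out to roundup_pow_of_two(1024 + 1) = 2048.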

Also, when SYN flooding warnings appear, the number of connections in the SYN_RECV state reflects the half-open queue being full; you can check it with:

ss -ant | awk 'NR>1 {++s[$1]} END {for(k in s) print k,s[k]}'

Thanks to Shukun from the ops team for this handy command.

posted on 2014-08-20 20:43 by nieyong, filed under: Socket

Comments


# re: Quick Notes: SYN Flooding Warnings on the Linux 2.6.32 Kernel 2014-08-22 08:52 全智贤代言

bootstrap.setOption("backlog", 8102); -- where does this value come from?

# re: Quick Notes: SYN Flooding Warnings on the Linux 2.6.32 Kernel 2014-08-25 11:34 nieyong

@全智贤代言
Strictly it should be 8192 (65536 / 8 = 8192).
In practice, passing 8102 works just as well.

