> There will be up to around 100 connections to the web server in the SYN state, all with different IP addresses
Is that an actual problem, though? 100 entries in a table is going to use a minuscule amount of RAM, a few kB at most.
And the solution to this (if you have way more than 100) is SYN cookies, which I think the Linux kernel at least will automatically enable when it detects it is under undue load.
?? I don't understand the conclusion to block incoming SYNs with TTL > 70... you're blocking all (even valid) connection attempts from users running other OSes that don't choose the default TTL of 64, like Windows, which I think uses 128.
When, in the past, you learned that the recommended value for the TTL was 64 and didn't think any operating system would pick a value much larger than that.
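To make the objection concrete, here's a quick sketch. The initial TTLs (64 for Linux/macOS, 128 for Windows, 255 for Solaris) are common defaults, the hop counts are illustrative, and `accepts_syn` is a hypothetical stand-in for the firewall rule, not anything from the actual server config:

```python
def accepts_syn(observed_ttl: int, threshold: int = 70) -> bool:
    """Mimic the rule under discussion: drop incoming SYNs whose TTL exceeds the threshold."""
    return observed_ttl <= threshold

# A packet's observed TTL is roughly (initial TTL - hops traversed).
for os_name, initial_ttl in [("Linux/macOS", 64), ("Windows", 128), ("Solaris", 255)]:
    for hops in (10, 20):
        observed = initial_ttl - hops
        print(f"{os_name:12s} {hops:2d} hops -> TTL {observed:3d}, accepted: {accepts_syn(observed)}")
```

With a threshold of 70, every Linux/macOS client passes but every Windows or Solaris client is rejected, regardless of distance, which is the objection being raised.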
They SYN as many of your ports and IPs as they can; you send a SYN-ACK to the spoofed IP, and that host, knowing it never sent you a SYN, refuses to ACK the connection.
A long TTL keeps the half-open connection around longer, and it builds up to a denial of service for you once your ports are all half open.
Depending on the real owner of the spoofed IP, they might blacklist your IP for spraying them with SYN-ACKs.
That page gives what seems to be a good rundown on TTL, and it seems like this could be DNS activity, or CDN caching tuned to quench back-propagation.
e.g. [about halfway down the page]
In Internet Protocol (IP) multicast, TTL may have control over the packet forwarding scope or range.
0 is restricted to the same host
1 is restricted to the same subnet
32 is restricted to the same site
64 is restricted to the same region
128 is restricted to the same continent
255 is unrestricted
TTL is also employed in caching for Content Delivery Networks (CDNs). TTLs are used herein for specifying the duration of serving cached information until a new copy is downloaded from an origin server. A CDN can offer updated content without requests propagating back to the origin server if the time between origin server pulls is properly adjusted. This accumulative effect enables a CDN to efficiently offer information closer to a user while minimizing the amount of bandwidth required at the origin.
TTL is also employed in caching for Domain Name Systems (DNS). TTL is a numerical value that refers to the duration used herein by the DNS Cache server for serving a DNS record before contacting the authoritative server to get a new copy.
I'm checking the TTL of IP packets, which is only 8 bits in size and, in practice, is decremented per hop (the early IPv4 RFCs state it is in seconds; I doubt it was ever used that way). DNS TTLs are 32 bits in size and represent the number of seconds a DNS record can be cached; they are separate from the TTL of IP packets. The TTL for CDNs is specified in HTTP headers and again has its own specification.
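For reference, the IP TTL being discussed is a single byte at offset 8 of the IPv4 header (per RFC 791). A minimal sketch of pulling it out of raw packet bytes; the sample header below is fabricated purely for illustration:

```python
def ipv4_ttl(packet: bytes) -> int:
    """Return the TTL field (byte at offset 8) of an IPv4 packet."""
    if len(packet) < 20:
        raise ValueError("truncated IPv4 header")
    if packet[0] >> 4 != 4:
        raise ValueError("not an IPv4 packet")
    return packet[8]

# Minimal 20-byte header: version/IHL 0x45, TTL = 64, protocol 6 (TCP),
# other fields zeroed for illustration.
header = bytes([0x45, 0, 0, 20,  0, 0, 0, 0,  64, 6, 0, 0,
                10, 0, 0, 1,  10, 0, 0, 2])
print(ipv4_ttl(header))  # 64
```

This is the 8-bit hop-count field; it has nothing to do with the 32-bit seconds value in a DNS record or the cache lifetimes in HTTP headers.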
Getting back to TTLs for IP packets---I recalled the recommended TTL of 64 from admittedly years ago. I just now checked my copy of _TCP/IP Illustrated, Volume 1_ by W. R. Stevens, published in 1994, so yeah, a few decades ago. All the Unix systems mentioned in that volume defaulted to a TTL of 60, except Solaris 2.2, which used 255 (that surprised me!). I no longer have access to Solaris to check (I did at my previous job), but I don't think many people are using Solaris to view my site.
I've checked the page you linked, and it doesn't cite a source for the table where the various values of TTL denote forwarding scope or range, nor have I ever seen such a table before. I know my Linux and Mac OS X systems use TTLs less than 70, and I can get content from other continents. My comment on that: [citation needed].
Wikipedia (https://en.wikipedia.org/wiki/Time_to_live) at least links to references, so I found a list of TTLs per OS (https://web.archive.org/web/20130212114759/http://www.map.me...), but given the OSes listed, it's probably also from a few decades ago. The majority are around 60, with Windows NT being 128, Solaris 255, and VMS anywhere from 60 to 128 (depending on version). So the TTLs being over 100 makes sense for what I was seeing---possibly a bunch of zombie Windows boxes participating in a half-assed SYN attack using Brazil IPs for some reason. I can't say I'm horribly upset at that. But actual readers on Windows is concerning. I have no easy way to test for that, and I'd hate to go back to having ~100 half-open connections on my server.
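One way to reason about this from the server side: a common heuristic (an assumption on my part, not something from the linked lists) is to round an observed TTL up to the nearest common initial value (64, 128, 255) to guess the sender's OS family, since paths are rarely more than a few dozen hops:

```python
def guess_initial_ttl(observed: int, defaults=(64, 128, 255)) -> int:
    """Round an observed TTL up to the nearest common OS default initial TTL."""
    for d in defaults:
        if observed <= d:
            return d
    return observed  # already above every known default

print(guess_initial_ttl(54))   # 64  -> likely Linux/macOS, ~10 hops away
print(guess_initial_ttl(113))  # 128 -> likely a Windows box
print(guess_initial_ttl(243))  # 255 -> likely Solaris or a router
```

Under this heuristic, observed TTLs over 100 almost certainly started at 128 or 255, which is consistent with the Windows-zombie interpretation above.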