[openssl-users] Question about TLS record length limitations
bkaduk at akamai.com
Mon Dec 7 20:46:13 UTC 2015
On 12/07/2015 02:43 PM, Software Engineer 979 wrote:
> I'm currently developing a data transfer application using OpenSSL.
> The application is required to securely transfer large amounts of data
> over a low-latency/high-bandwidth network. The data being transferred
> lives in a 3rd party application that uses a 1 MB buffer to transfer
> data to my application. When I hook OpenSSL into my application I
> notice an appreciable decline in network throughput. I've traced the
> issue to the default TLS record size of 16K. The smaller record size
> causes the 3rd party application's buffer to be segmented into
> four 16K buffers per write, and the resulting overhead considerably
> slows things down. I've since modified the version of OpenSSL that
> I'm using to support an arbitrary TLS record size, allowing OpenSSL
> to scale up to a 1 MB or larger TLS record size. Since this change,
> my network throughput has dramatically improved (from 187%
> degradation down to 33%).
> I subsequently checked the TLS RFC to determine why a 16K record size
> was being used, and all I could find was the following:
> The length (in bytes) of the following TLSCompressed.fragment.
> The length MUST NOT exceed 2^14 + 1024.
> The language here is pretty explicit, stating that the length must not
> exceed 16K (+ some change). Does anyone know the reason for this? Is
> there a cryptographic reason why we shouldn't exceed this message
> size? Based on my limited experiment, it would appear that a larger
> record size would benefit low-latency/high-bandwidth networks.
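The segmentation the poster describes can be sketched with some quick arithmetic. This is a hedged illustration: the 2^14-byte plaintext limit is from RFC 5246, but the per-record overhead figure below is an assumed example value, not a measurement, and the real throughput cost also includes per-record MAC and cipher operations:

```python
# Sketch: how an application-level write is fragmented into TLS records.
# The 2^14-byte plaintext cap comes from RFC 5246; the per-record
# overhead below (5-byte header plus assumed MAC/padding) is illustrative.
import math

MAX_PLAINTEXT = 2 ** 14          # 16 KiB TLS record plaintext limit
PER_RECORD_OVERHEAD = 29         # assumed: 5-byte header + ~24 bytes MAC/padding

def record_count(payload_bytes: int) -> int:
    """Number of TLS records needed to carry payload_bytes of data."""
    return math.ceil(payload_bytes / MAX_PLAINTEXT)

def wire_overhead(payload_bytes: int) -> int:
    """Extra bytes on the wire beyond the payload itself."""
    return record_count(payload_bytes) * PER_RECORD_OVERHEAD

one_mb = 2 ** 20
print(record_count(one_mb))      # 64 records per 1 MiB buffer
print(wire_overhead(one_mb))     # 64 * 29 = 1856 bytes of framing overhead
```

Each of those 64 records also costs a separate MAC computation and, on the receive side, a separate buffering/verification step, which is where the per-record processing overhead accumulates.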
The peer is required to buffer the entire record before processing it,
and at that point the data could be from an untrusted party/attacker. So
the limit is for protection against denial-of-service via resource
exhaustion.
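The buffering rationale can be made concrete: because a record's length field is bounded, a receiver can cap the memory it must commit before any of the data is authenticated. In the sketch below, the record-size constant is from RFC 5246, but the per-connection receiver model is a simplified assumption, not OpenSSL's actual implementation:

```python
# Sketch: why a bounded record size bounds the receiver's exposure.
# A TLS receiver must buffer a full record before it can verify the MAC,
# so the record-length cap limits how much memory an unauthenticated
# peer can force it to hold. MAX_CIPHERTEXT is from RFC 5246; the
# per-connection buffering scenario is a simplified model.
MAX_CIPHERTEXT = 2 ** 14 + 2048   # TLSCiphertext.fragment length limit

def worst_case_buffer(connections: int, record_limit: int = MAX_CIPHERTEXT) -> int:
    """Memory an attacker can pin in not-yet-verified record buffers."""
    return connections * record_limit

# With the standard cap, 10,000 hostile connections pin at most ~176 MiB:
print(worst_case_buffer(10_000))           # 184320000 bytes
# With a 1 MiB record size, the same connections could pin ~10 GiB:
print(worst_case_buffer(10_000, 2 ** 20))  # 10485760000 bytes
```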