SMTP Load Balancing
Load balancing SMTP traffic makes sense for a lot of organizations. They already have an investment in load balancers for their CAS array, web server farm, and so on, so SMTP seems like another logical protocol to run through the load balancers and gain the benefits that load balancing delivers.
However, it is also quite easy to create a situation where SMTP traffic is not being load balanced as intended, and worse still, there are scenarios where some load balanced configurations may actually diminish SMTP high availability or even undermine security.
Consider the following scenario where incoming internet email is passed through an email security server/appliance, which is configured to then send the traffic to a load balancer for distribution to the Hub Transport servers. Various internal applications and systems also use the load balancer as their SMTP target.
In most load balancer configurations you can configure a priority or weight for the servers that are the targets of the traffic. Different vendors use their own terminology for this, but the general idea is that it provides the option to have preferred servers that will be considered first for a new connection if they are available.
Here is a traffic graph of a typical day for two servers that were configured with different weightings/priorities in the load balancer. You can see that SERVER1 handled a higher volume of traffic than SERVER2.
Depending on your server resources and traffic load this may not be an issue for you, but in some environments it could lead to load issues that interrupt mail flow. So if your actual intention is to evenly distribute traffic across multiple Hub Transport servers, then you should consider adjusting the server weight/priority accordingly.
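To picture the effect of a weight or priority setting, think of each new connection as a weighted draw from the server pool. Here is a minimal simulation sketch in Python; the 2:1 weighting and server names are hypothetical values for illustration, not the configuration behind the graph above.

```python
import random
from collections import Counter

# Hypothetical weighting: SERVER1 is preferred 2:1 over SERVER2.
servers = ["SERVER1", "SERVER2"]
weights = [2, 1]

# Simulate 10,000 new SMTP connections being scheduled by weight.
connections = Counter(random.choices(servers, weights=weights, k=10_000))
print(connections)  # roughly two thirds of the connections land on SERVER1
```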
Along similar lines to the previous issue, a load balancer will usually have multiple methods for deciding which server should be used for a connection. For example, the Kemp load balancers have quite a few scheduling options available.
As one specific example, load balancing based on source IP may inadvertently lead to traffic imbalances. In the example environment shown at the beginning of this article, source IP-based load balancing would generally result in well balanced traffic from the internal applications and systems (assuming each internal IP sends roughly equal volumes of email; otherwise some imbalances can still occur), but all of the incoming internet email arrives from the single IP address of the email security server, so every one of those connections is directed to the same Hub Transport server.
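Many source IP scheduling methods amount to hashing the client address onto the server pool, which is why a single upstream host is always pinned to the same server. A minimal sketch of that idea, with hypothetical server names and IP addresses:

```python
import hashlib

# Hypothetical Hub Transport pool behind the load balancer.
servers = ["SERVER1", "SERVER2"]

def pick_server(source_ip: str) -> str:
    # Hash the source address and map it onto the server pool;
    # the same source IP always maps to the same server.
    digest = hashlib.md5(source_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Distinct internal application IPs can land on different servers...
for ip in ["10.1.1.21", "10.1.1.22", "10.1.1.23", "10.1.1.24"]:
    print(ip, "->", pick_server(ip))

# ...but every internet message arrives from the one email security server IP
# (a hypothetical address here), so all of that traffic is pinned to one server.
print("10.1.1.5", "->", pick_server("10.1.1.5"))
```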
The obvious reaction here may be to choose a different load balancing algorithm; however, my recommendation for environments where incoming internet email all traverses a single host like that is to consider not using the load balancer for distribution of that incoming internet traffic.
With all internal and incoming SMTP traffic going via the load balancer, which is source NATing the connections, the protocol logs only recorded traffic from the load balancer (IP 10.1.1.12 in this example) and no other IP addresses.
In the example scenario used in this article the email security server has its own load balancing capability for incoming email because you can specify multiple internal hosts to deliver email to. This would also apply to hosted email security services.
By configuring each Hub Transport server as an internal delivery target, instead of just using the load balancer, the protocol logs now show incoming internet email as coming from the IP addresses of the email security system rather than from the load balancer.
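A quick way to verify what the Hub Transport servers are actually seeing is to summarise the remote endpoints recorded in the SMTP receive protocol logs. The sketch below is a rough example: the log path shown is the typical default for Exchange 2010 and the remote endpoint is assumed to be the sixth comma-separated field, so adjust both for your environment.

```python
import csv
import glob
from collections import Counter

# Assumed default SmtpReceive protocol log location for Exchange 2010; adjust as needed.
LOG_GLOB = r"C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\ProtocolLog\SmtpReceive\*.log"

remote_ips = Counter()

for path in glob.glob(LOG_GLOB):
    with open(path, newline="") as log_file:
        for row in csv.reader(log_file):
            if not row or row[0].startswith("#") or len(row) < 6:
                continue  # skip header/comment lines and anything malformed
            # Assumed field layout: index 5 is the remote endpoint, e.g. "10.1.1.12:25467".
            remote_ips[row[5].split(":")[0]] += 1

# If everything funnels through a source NATing load balancer, only its IP will appear here.
for ip, count in remote_ips.most_common():
    print(f"{ip}\t{count}")
```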
If you do not have an email security server/appliance or other hosted solution, and SMTP connections go directly from the internet to the load balancer, then you could look at using multiple MX records instead, although this would require the availability of multiple public IP addresses.
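If you do publish multiple MX records, sending servers spread connections across the listed hosts themselves, with equal preference values giving a coarse round-robin. A small sketch to check what is published, assuming the third-party dnspython package and a hypothetical domain name:

```python
import dns.resolver  # pip install dnspython

# Hypothetical domain; replace with your own.
answers = dns.resolver.resolve("contoso.com", "MX")

# With equal preference values, sending servers pick among the hosts,
# which gives a coarse form of distribution without a load balancer.
for record in sorted(answers, key=lambda r: r.preference):
    print(record.preference, record.exchange)
```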
In addition, any traffic imbalance being caused by the use of source IP-based load balancing should no longer be present. This graph represents incoming internet SMTP connections per server, which began imbalanced and then evened out almost precisely once the load balancer was bypassed.
And importantly, with traffic bypassing the load balancer it should be getting handled by the intended receive connector (which I will explore more in the section further down on security implications).
Most load balancers that are service-aware have a health monitor or probe that makes an SMTP connection to the Hub Transport server, waits for a sign that the service is responding, then disconnects. That sign may be simply waiting for the SMTP banner to be returned, or waiting for a response to HELO.
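That style of probe is essentially just connect, read the banner, optionally say HELO, then disconnect. A minimal sketch of such a check using Python's smtplib; the hostnames are placeholders and a real load balancer would of course use its own built-in monitor:

```python
import smtplib

def smtp_probe(host: str, port: int = 25, timeout: float = 5.0) -> bool:
    """Return True if the SMTP service answers with a banner and responds to HELO."""
    try:
        # Connecting reads the 220 banner; any failure here means the probe fails.
        with smtplib.SMTP(host, port, timeout=timeout) as smtp:
            code, _ = smtp.helo("lb-probe.example.internal")
            return code == 250
    except (OSError, smtplib.SMTPException):
        return False

if __name__ == "__main__":
    print(smtp_probe("hub01.example.internal"))
```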
Some administrators add the source NAT address(es) of the load balancer to the list of remote IP addresses on the internal relay connectors configured in Exchange. This results in any sender coming via the load balancer being treated as authenticated, and therefore allowed to send to a distribution list that is restricted to authenticated senders.
This occurs because the source NATing causes Exchange to believe that the email is originating from the load balancer (10.1.1.12), and that IP address is configured as a remote IP address on the internal relay connector.
Ideally, if internet email traffic is coming in directly to a load balancer, and the load balancer has no other mechanism for preventing an open relay scenario, then you should ensure that the receive connectors configured for internal applications and systems to relay email are not also handling the internet email traffic.
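A simple way to check whether traffic arriving through the load balancer lands on a connector that will relay for it is to attempt a relay to an external recipient and look at the response to RCPT TO. A rough sketch, with placeholder hostnames and addresses; only run something like this against systems you own:

```python
import smtplib

# Placeholder VIP and addresses; replace with your own test values.
TARGET = "smtp-vip.example.internal"
MAIL_FROM = "probe@external-example.net"
RCPT_TO = "someone@another-external-example.net"

with smtplib.SMTP(TARGET, 25, timeout=10) as smtp:
    smtp.ehlo("relay-test.example.internal")
    smtp.mail(MAIL_FROM)
    code, message = smtp.rcpt(RCPT_TO)
    # A 250 here means the connector is willing to relay for this source,
    # which is exactly the open relay scenario described above; a 550 means it refused.
    print(code, message.decode(errors="replace"))
```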
In order to make your architecture scalable, you may often want to use a load balancer or an application delivery controller. When using one of them (or a reverse proxy), the client information is almost always hidden, or retrieving it requires major modifications to the architecture.
TPROXY allows a load balancer or reverse proxy to open the TCP connection to the server using the client's IP address. One drawback of TPROXY is that the default gateway of the application servers must be the load balancer, or you must use policy-based routing on your network, which can be painful.
In SMTP it is really important to know the client IP, since it is used most of the time with RBLs to fight spam, and for security purposes as well: we may want to allow only certain hosts to use our SMTP relays and block all other clients. Without the proxy protocol, the load balancer hides the client IP behind its own IP, and you would have to maintain whitelists on the load balancer (which is doable). Thanks to the proxy protocol, Postscreen is aware of the client IP, which means you can maintain those lists directly on the MTA.
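The proxy protocol itself is just a small header that the load balancer prepends to the TCP stream before the first byte of SMTP; in version 1 it is a single human-readable line. A minimal sketch of building and sending that header by hand, with hypothetical addresses, against a backend configured to expect it (for example Postscreen with postscreen_upstream_proxy_protocol = haproxy):

```python
import socket

def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    # PROXY protocol v1: one text line sent before any SMTP traffic.
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# Hypothetical addresses: the real client, and the backend MTA expecting the header.
header = proxy_v1_header("203.0.113.10", "10.1.1.21", 52044, 25)

with socket.create_connection(("10.1.1.21", 25), timeout=10) as sock:
    sock.sendall(header)              # backend now treats 203.0.113.10 as the client IP
    print(sock.recv(1024).decode())   # should be the usual 220 banner
```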
Hi there, it's great to hear that HAProxy can load balance SMTP servers. I have to install two data store servers, for example imap1 and imap2. The problem is that users A-M are stored on imap1 and users N-Z on imap2. Is it possible to do this with HAProxy?
Most orgs either drive SMTP traffic through a load balancer such as an F5, which routes to an "Edge" type SMTP gateway, or simply have multiple "Edge" type SMTP gateway servers that directly accept and route internet inbound and outbound messages and are all part of the MX record.
"... because its probably not something that is really required" - why not? ARR serves as free HTTP load balancer for small companies (maybe not only for small) and I find ARR strange to be the HTTP load balancer while not offering smtp load balancing. If a company have only 2 or 3 Exchange servers why not have the ability to load balance both smtp and http? Paying thousands of dollars for just 2-3 servers ...? seems a bit excessive in this case.
Agreed, but I thought that extending ARR functionality to the SMTP protocol would allow publishing a single MX record (mail.contoso.com = ARR server), and that server would just relay the incoming mail to any internal mail server (if it's up and running) - this configuration would require only one MX record and one external IP...
I have two SMTP mail servers. I need to load balance them for an internal network, so that they may both send SMTP outbound. ... Mail coming from either of the mail servers would be NATted to an address on the external VLAN.
We are using IIS SMTP to send out email for a web application. If I stand up another server, what would be the best way to load balance these internally so that they both get used and, if one goes down, the other keeps going? From my understanding, simply creating another Windows DNS record would not solve the issue due to caching. We are using SonicWall firewalls, which can do external load balancing; I'm not sure if they could be used to do the same internally or if something else could be done, maybe NLB?
A TCP ("Layer 4") load balancer would solve your problem nicely. Perhaps, as @HBruijn mentioned, this may be possible using your existing SonicWalls. Otherwise my personally preferred tool is HAProxy (which does both Layer 7 and Layer 4 LB).
I want to load balance outbound email between 3 Postfix gateways (for sending mail only). The reason is to use multiple different source IPs to increase throughput and inbox delivery. Each gateway should receive an approximately equal amount of outbound messages.
You could also make a load balancer out of another Postfix server, because SMTP servers normally send email to each other for forwarding. You should be able to do this simply by setting up multiple forwarders for it. If your email is coming from a single source which caches DNS replies, this would be the more suitable solution.
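If the submitting application is under your control, an application-side variation on the same goal is to rotate submissions across the gateways yourself. A rough sketch using Python's smtplib, with hypothetical gateway hostnames; this is an alternative illustration, not a replacement for the Postfix-based relay approach described above:

```python
import itertools
import smtplib
from email.message import EmailMessage

# Hypothetical Postfix gateways; each has its own source IP for outbound delivery.
GATEWAYS = itertools.cycle(
    ["gw1.example.internal", "gw2.example.internal", "gw3.example.internal"]
)

def build_message(to_addr: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = to_addr
    msg["Subject"] = "Round-robin delivery test"
    msg.set_content("Sent via one of three outbound gateways.")
    return msg

def send_round_robin(messages):
    for msg in messages:
        gateway = next(GATEWAYS)  # each message goes to the next gateway in turn
        with smtplib.SMTP(gateway, 25, timeout=10) as smtp:
            smtp.send_message(msg)

if __name__ == "__main__":
    send_round_robin(build_message(a) for a in ["a@example.org", "b@example.org", "c@example.org"])
```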
Note that the nssmtp.pl script bundled with the NetScaler will go as far as attempting to open a connection to confirm that the service is up. The script and the actual code can be found in the following directory of the NetScaler:
I recently completed a project with a client to build out a brand new, on-premises Exchange 2016 environment that required load balancing with Citrix ADC. As with most enterprise Exchange deployments, a reverse proxy and load balancing solution should be implemented for redundancy and to secure those services. This customer was previously using the Exchange SMTP relay receive connectors to explicitly allow certain hosts to send mail via SMTP relay but wanted to transition to a load balanced configuration.