Client‐Server and Serverless Computing
Until relatively recently, a client‐server model prevailed in IT, with discrete servers hosting applications and services that were consumed by client systems (desktop, notebook, and mobile devices). The servers were either physical or virtual. That model is still the most common, but PaaS and SaaS cloud offerings are changing the way services are delivered to clients. With serverless computing, the server is abstracted away (essentially hidden from the service consumers) and the service itself becomes the primary focus.
Regardless of the model your solutions use, the servers, serverless applications, and clients need a way to communicate with one another. Networks provide that means of communication.
Each device on a network, whether physical or virtual, needs a unique identifier that enables other devices and services to communicate with it. Each device is therefore assigned a network address, much like a street address identifies the place where you live.
Within that network address space, subnets further segregate parts of the address space into virtual networks.
Your personal computer (and other network devices including your mobile phone) receives a network address when it boots. The address is defined by the network address space and a subnet mask that defines the virtual network on which your device resides.
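The relationship between an address space, a subnet, and a device address can be sketched with Python's standard `ipaddress` module. The address ranges below are illustrative, not tied to any real deployment:

```python
import ipaddress

# Hypothetical address space carved into a smaller subnet (illustrative values).
address_space = ipaddress.ip_network("10.0.0.0/16")   # the network address space
frontend = ipaddress.ip_network("10.0.1.0/24")        # one subnet within it
device = ipaddress.ip_address("10.0.1.25")            # address assigned to a device

print(frontend.subnet_of(address_space))  # True: the subnet lies within the address space
print(device in frontend)                 # True: the device resides on the frontend subnet
print(frontend.netmask)                   # 255.255.255.0, the subnet mask
```

The `/24` suffix is shorthand for the subnet mask: it marks how many leading bits identify the virtual network, with the remaining bits identifying individual devices.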
Computers and other devices have no problem using numbers for addresses, but people do. That is where the Domain Name System comes into play.
Domain Name System
The Domain Name System (DNS) maps hostnames, which are easily recognized and understood by people, to numeric IP addresses. For example, DNS maps the hostname www.microsoft.com to an IP address such as 220.127.116.11. So, when you want to visit the Microsoft website, you type www.microsoft.com into your web browser instead of the numeric address. A DNS resolver on your computer communicates with a DNS server, whose job it is to look up the addresses associated with hostnames, and returns the IP address to your web browser.
The key point is that client applications communicate with DNS servers to obtain the IP addresses associated with hosts like web servers, database servers, printers, and other network resources. The client applications then communicate with those hosts using their IP addresses. Likewise, servers and server applications communicate with one another using IP addresses that they obtain by looking up the address from a DNS server.
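Conceptually, a DNS server behaves like a lookup table from hostnames to addresses. The toy resolver below is a minimal sketch of that behavior; the hostnames and addresses in the table are illustrative (a real Python client would simply call `socket.getaddrinfo`, which delegates to the OS resolver):

```python
# Toy lookup table standing in for a real DNS server's records (illustrative values).
DNS_RECORDS = {
    "www.example.com": "93.184.216.34",
    "db.internal.example": "10.0.2.12",
}

def resolve(hostname: str) -> str:
    """Return the IP address for a hostname, as a DNS server would."""
    try:
        return DNS_RECORDS[hostname]
    except KeyError:
        # A real server would answer NXDOMAIN for an unknown name.
        raise LookupError(f"NXDOMAIN: {hostname}")

# The client resolves the name once, then communicates using the IP address.
ip = resolve("www.example.com")
print(ip)  # 93.184.216.34
```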
Routers move data between subnets and networks, manipulating the traffic so that it can reach its intended destination and responses can return to the requesting system.
Virtual Networks (VNet)
A core concept for Azure and for any networking discussion is virtual networks, and the Azure Virtual Network (VNet) service is a fundamental component of your private Azure networks. VNet enables virtual machines and other Azure services to communicate among themselves, with the Internet, and in the case of a hybrid environment, with your on‐premises networks.
You can use virtual network peering to connect virtual networks, including across regions. This enables your resources to communicate across virtual networks (globally, if necessary), with the traffic traversing Microsoft’s private backbone network. Resources in the peered virtual networks can communicate at the same latency and with the same bandwidth as they would if they were on the same virtual network.
Load Balancing
Load balancing refers to distributing network traffic across multiple resources to improve responsiveness, reliability, and availability. For example, if you deploy a web application with three web servers, you will use a load balancer to distribute the traffic among the three web servers. Client systems see a single hostname and IP address for the balanced services, and the load balancer distributes the traffic across the hosts in the balanced group. Not only does this distribute the load for performance reasons, but if one of the web servers fails, the load balancer can exclude the failed server from the group and begin sending all the traffic to the remaining two. Or, if you scale out with additional servers, the load balancer will begin sending traffic to the new servers.
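The distribution-plus-health-exclusion behavior described above can be sketched as a minimal round-robin balancer. This is a conceptual model, not how any Azure service is implemented; the backend names are illustrative:

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer with health-based exclusion (sketch)."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)  # health probe failed: exclude the server

    def mark_up(self, backend):
        self.healthy.add(backend)      # server recovered or was scaled out

    def pick(self):
        # Advance the rotation, skipping unhealthy backends.
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if backend in self.healthy:
                return backend
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")                       # web2 has failed
picks = [lb.pick() for _ in range(4)]
print(picks)                               # ['web1', 'web3', 'web1', 'web3']
```

Traffic flows only to the remaining healthy servers, and marking web2 up again would return it to the rotation.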
Azure offers four load balancing services:
- Azure Front Door: Azure Front Door is designed for global or multiregion routing and site acceleration of Internet‐facing web traffic. It uses the Microsoft global edge network to enable fast, secure, and scalable web applications.
- Azure Traffic Manager: This service is an application layer DNS‐based traffic load balancer that balances traffic at the domain level. It can balance traffic across global Azure regions. Traffic Manager offers several options for routing traffic and detecting endpoint health.
- Azure Application Gateway: This is an application layer load‐balancing service that provides an application delivery controller (ADC) as a service. You can configure Application Gateway as Internet‐facing, internal‐only, or a combination of the two. Azure Application Gateway is applicable for HTTP(S) traffic and can route traffic based on several criteria, including incoming URL, URI path, and host headers.
- Azure Load Balancer: The Azure Load Balancer service is a transport layer service designed for high performance and low latency and is zone‐redundant to provide high availability across availability zones. It is applicable for non‐HTTP(S) traffic.
Note: Azure Traffic Manager and Azure Application Gateway both function at layer 7, whereas Azure Load Balancer service functions at layer 4. A load‐balancing service can support different functionality based on the level at which it functions.
Which load‐balancing service you choose depends on the scenario, and you might use one in some situations and more than one in others. Azure Load Balancer is generally the appropriate solution for non‐HTTP(S) traffic based on the IP address of the target service. For example, you would use Azure Load Balancer when balancing traffic among multiple database VMs.
Azure Application Gateway is designed to support regional load balancing for HTTP(S) traffic and offers support for path‐based routing. For example, assume you want to route traffic to a set of web servers. When the URL includes /videos in the path, you want to direct the traffic to a specific pool of servers that are optimized to handle video requests. Azure Application Gateway gives you that capability.
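Path-based routing amounts to matching the URL path against a set of rules and selecting a backend pool. The sketch below illustrates the idea with hypothetical pool names; it is not Application Gateway's configuration syntax:

```python
def route(url_path: str) -> str:
    """Map a URL path to a backend pool, Application Gateway-style (sketch).
    Rule prefixes and pool names are illustrative."""
    rules = {
        "/videos": "video-pool",   # servers optimized for video requests
        "/images": "image-pool",
    }
    for prefix, pool in rules.items():
        if url_path.startswith(prefix):
            return pool
    return "default-pool"          # no rule matched

print(route("/videos/intro.mp4"))  # video-pool
print(route("/about"))             # default-pool
```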
Like Azure Application Gateway, Front Door supports URL path‐based routing. However, Azure Front Door is intended for globally distributed web applications where speed, user location, fast failover, caching, and high availability are critical.
- Regional routing = Azure Application Gateway
- Global routing = Front Door
Azure Traffic Manager is appropriate for DNS‐based global routing. Traffic Manager supports a variety of methods for routing traffic and for detecting endpoint health, enabling Traffic Manager to support a wide range of applications and usage scenarios where region or global load balancing is needed.
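One of Traffic Manager's routing methods is priority (failover) routing: answer DNS queries with the highest-priority endpoint that is healthy. A minimal sketch of that selection logic, with illustrative endpoint data:

```python
def priority_route(endpoints):
    """Return the highest-priority healthy endpoint, as in priority-based
    (failover) routing. Lower number = higher priority. Sketch only."""
    for endpoint in sorted(endpoints, key=lambda e: e["priority"]):
        if endpoint["healthy"]:
            return endpoint["name"]
    raise RuntimeError("no healthy endpoints")

endpoints = [
    {"name": "eastus", "priority": 1, "healthy": False},      # primary is down
    {"name": "westeurope", "priority": 2, "healthy": True},   # failover target
]
print(priority_route(endpoints))  # westeurope
```

Because the answer is delivered via DNS, clients only learn about a failover after their cached DNS record expires, which is why DNS-based balancing fails over more slowly than Front Door.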
Virtual Private Networks
Imagine that you are working from home and need to access a web server at work that contains business‐sensitive information. That data would be subject to compromise if it were traversing the public Internet, particularly if the traffic were not encrypted. Virtual private networks (VPNs) help solve that problem.
A VPN establishes an encrypted tunnel between two private networks across a public network. For example, you can establish a secure connection between your on‐premises network and Azure using a VPN, enabling traffic to flow securely between your on‐premises data center and your resources in Azure. Similarly, you can use a VPN connection between your home network and office network to access the web server that hosts that business‐sensitive data, protecting the data from prying eyes as it travels between the server and your computer.
Azure VPN Gateway
Azure VPN Gateway enables you to create VPN connections between Azure virtual networks and between Azure and your on‐premises network. VPN Gateway supports multiple VPN configurations:
- Site‐to‐site: Establishes a VPN tunnel between two sites, such as between your on‐premises data center and Azure.
- Multi‐site: A variation of site‐to‐site, a multi‐site VPN establishes VPN tunnels between Azure and multiple on‐premises sites.
- Point‐to‐site: Establishes a VPN tunnel from a single device (point) to a site.
- VNet‐to‐VNet: Establishes a VPN tunnel between two Azure VNets.
A site‐to‐site VPN connects two sites, such as an on‐premises facility and Azure. For example, you might use a site‐to‐site VPN to connect your primary data center and Azure, enabling secure, encrypted traffic between the on‐premises servers and services that interact with resources in Azure. Or you might use a site‐to‐site VPN to connect your primary office location to Azure to secure user‐related data traffic between the office and Azure. A multi‐site VPN provides an expanded site‐to‐site capability. For example, you might use a multi‐site VPN to create a secure connection to Azure from separate data center and office locations.
A point‐to‐site VPN is similar to a site‐to‐site VPN in that it creates an encrypted tunnel, but the connection is between a single device and a site. If only one server or service at one of your locations needs to connect to Azure, you can use a point‐to‐site VPN to connect that one server to Azure.
A VNet‐to‐VNet VPN connects two Azure VNets with an encrypted tunnel. The VNets can be from different regions and subscriptions. Connecting VNets in this way enables you to connect resources and networks in different Azure locations without traversing the Internet. One common use for a VNet‐to‐VNet VPN is to enable georedundancy of services. Assume, for example, that you want to build a highly available SQL Server solution that uses SQL Server Always On to replicate databases between different regions in an availability group. A VNet‐to‐VNet VPN tunnel between the two regions where the virtual servers reside provides the connectivity needed for replication between those regions.
Azure ExpressRoute
Azure ExpressRoute enables you to extend your on‐premises networks into Azure over a private connection managed by a third‐party connectivity provider. The route does not traverse the Internet, enabling higher reliability, faster speeds, lower (and more consistent) latency, and higher security.
ExpressRoute Direct, as an alternative to ExpressRoute, enables you to connect directly to the Microsoft global network without traversing the Internet. Consider ExpressRoute Direct if you require physical isolation as a regulated industry or government entity, or if you need to move massive amounts of data into Azure.
ExpressRoute Connectivity Models
You can create a connection between your on-premises network and the Microsoft cloud in four different ways:
- CloudExchange Co-location
- Point-to-point Ethernet Connection
- Any-to-any (IPVPN) Connection
- ExpressRoute Direct
Depending on the ExpressRoute connectivity model, the intermediate network devices between your customer edge (CE) routers and the Microsoft Enterprise Edge (MSEE) routers might be switches (layer 2 devices) or routers (layer 3 devices).
In the ExpressRoute Direct connectivity model, there are no intermediate devices. Instead, the CE routers connect directly to the MSEEs via dark fiber.
If the cloud exchange co-location, point-to-point Ethernet, or direct connectivity model is used, the CE routers establish Border Gateway Protocol (BGP) peering with the MSEEs.
If the any-to-any (IPVPN) connectivity model is used, the provider edge routers facing the MSEEs (PE-MSEEs) establish BGP peering with the MSEEs, and the routes received from Microsoft are propagated back to the customer network via the IPVPN service provider network.
Content Delivery Networks
Azure Content Delivery Network (CDN) places web content across a distributed network of servers to make that content readily available to users based on their location. CDNs store cached content on edge servers in point-of-presence (POP) locations close to end users to minimize latency. For example, if your organization is based in the United States but you have large video files that you need to make available to users in Switzerland, you could place those files on a CDN that has a POP in Zurich or Geneva. When the users access those files, the files come from the cached copies in the CDN rather than from your servers in the United States, reducing latency and improving performance.
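The edge-caching behavior can be sketched as: on the first request for a file, the POP fetches it from the origin and caches it; subsequent requests are served from the edge without touching the origin. The POP and content names are illustrative:

```python
class EdgeCache:
    """Sketch of CDN edge behavior: serve from cache, fall back to origin."""

    def __init__(self, origin_fetch):
        self.cache = {}
        self.origin_fetch = origin_fetch  # callable that retrieves from origin
        self.origin_hits = 0              # how often we had to go to the origin

    def get(self, path):
        if path not in self.cache:
            self.cache[path] = self.origin_fetch(path)  # cache miss: fetch once
            self.origin_hits += 1
        return self.cache[path]  # served from the edge thereafter

# Hypothetical Zurich POP in front of a US-based origin.
zurich_pop = EdgeCache(lambda p: f"content of {p}")
zurich_pop.get("/video.mp4")   # first request travels to the origin
zurich_pop.get("/video.mp4")   # served from the Zurich edge cache
print(zurich_pop.origin_hits)  # 1
```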
Describe core resources available in Azure.
You create VNets in Azure to segregate and organize hosts and services. Each VNet is scoped to a single subscription and region, but you can create multiple VNets. Virtual Network Peering enables you to connect VNets across regions.
The load‐balancing services in Azure balance network traffic across multiple servers. Azure Load Balancer is used when you need to balance traffic based on IP address. Azure Application Gateway is used for regional load balancing of web applications, and Azure Front Door is intended for globally distributed web applications. Azure Traffic Manager is intended for regional or global DNS‐based load balancing, but because it is DNS based it’s not able to fail over as quickly as Front Door.
Virtual private networks (VPNs) enable you to connect two private networks using a tunnel through a public network such as the Internet. Use Azure VPN Gateway to establish VPN tunnels between Azure VNets and between Azure and your on‐premises networks. VPN Gateway supports site‐to‐site, multi‐site, point‐to‐site, and VNet‐to‐VNet connections. Azure ExpressRoute provides private connectivity between your on‐premises network and Azure at higher possible speeds through third‐party network providers, without traversing the Internet. ExpressRoute Direct offers even higher speeds and connects directly to the Microsoft global network rather than going through a connectivity provider's network.
Lastly, content delivery networks (CDNs) enable you to place content near where users are located, improving performance, minimizing network traffic, and reducing latency.
- Network addressing: Devices on a network are assigned a network address, which uniquely identifies the device on the network and enables network traffic to be routed to and from the device. Subnets create virtual networks to segregate devices within an address space. When you create a resource in Azure, you specify the address segment in which it will reside and either assign it a static address or allow it to take a dynamic address.
- Routing: Routers move network traffic between network segments. They make it possible not only for public network segments to communicate, but also for private network segments to communicate with public network segments.
- Domain Name System (DNS): DNS provides hostname‐to‐address resolution, enabling applications and services to determine the IP address associated with a hostname.
- Virtual private network (VPN): A VPN creates an encrypted tunnel between two private networks across a public network, enabling secure network traffic between the two networks. You can establish a VPN connection between Azure network segments, your on‐premises network and Azure, or between specific hosts.
- Load balancer: A load balancer distributes traffic to a group of servers or services, enabling the load to be shared among them. Load balancing also enables fault tolerance by detecting failed resources and directing traffic away from them.
- ExpressRoute: Azure ExpressRoute enables you to establish a secure, private connection between your on‐premises network and Azure through a third‐party provider, bypassing the Internet.
- ExpressRoute Direct enables you to connect your on‐premises network directly to the Microsoft global network.
- Content Delivery Network (CDN): A CDN places content near users, enabling them to consume that content without pulling the data from a geographically distant server. CDNs reduce network traffic and latency.