
Simulating Argo: Building an IPv6 Anycast Network Behind Cloudflare

When a person is poor, they come up with many clever ideas.

As you may know, this blog primarily uses Cloudflare as a CDN, which allows us to configure some Page Rules to reduce the load on the origin server, as well as hide the origin server’s address. Additionally, Cloudflare provides DDoS protection.

Anycast IP

We know that DDoS, being a rather dirty attack method, is hard to defend against in principle; for the most part you can only throw better hardware and more bandwidth at it. Cloudflare's way of handling DDoS, however, is quite distinctive, partly because its edge node IP addresses are Anycast IPs. You might be wondering: what is an Anycast IP?

In simple terms, the same IP address is announced from many locations around the world, and every host on the Internet reaches the nearest (lowest-latency) location announcing that IP.

From basic computer networking we know a simple principle: "an IP address uniquely identifies a host on the Internet." When we ping the IP address of a host in the United States, ICMP packets pass through routing protocols and various routes until they finally reach the destination host. Much of the latency incurred along the way comes from distance, in other words from the limit imposed by the speed of light, so there will inevitably be over 100 ms of latency between China and the United States. For example, the latency from various parts of the world to a Vultr data center in Japan might look like this:

[Figure: ping latency from around the world to a Vultr data center in Japan]

But if you are a Cloudflare customer, you might notice that the latency to your site is like this:

[Figure: ping latency from around the world to a Cloudflare-proxied site]

Do you notice that many cities see very low latency to the same IP address? This is the charm of Anycast. If you want to learn more about Cloudflare and Anycast, see "A Brief Primer on Anycast" and "What is Anycast? How does Anycast Work?".

Thanks to the Anycast network, and because of our grounding in "computer networking basics," even DDoS traffic has to follow the same basic routing rules. The attack pattern therefore changes from many-points-to-one-point into many-points-to-many-points: the traffic is spread out, and the load on each individual node drops dramatically.

Cloudflare Working Principles

With the knowledge above, a natural question comes to mind: with so many IP addresses, how does Cloudflare fetch content from the origin? From the article "A Brief Primer on Cloudflare Warp and Whether It Exposes Visitor's Real IP," we can draw two conclusions:

  1. Typically, Cloudflare fetches the origin from the data center hit by the visitor.
  2. With Argo enabled, Cloudflare fetches the origin from the Cloudflare data center it considers closest and fastest to your origin server’s IP.

If these two conclusions are hard to grasp, let's make them concrete with two scenarios. Suppose my blog's origin is in France (say it resolves to origin.nova.moe), you are a visitor from mainland China, and, for the sake of the example, Cloudflare's edge nodes run Nginx:

  1. In general, you will hit Cloudflare's San Jose node in the western United States, and the Nginx on the San Jose node will proxy_pass https://origin.nova.moe;. Simple, right? But this is "public network fetching": the request crosses the public Internet all the way to France (see the sketch after this list).
  2. With Argo enabled, you still hit the San Jose node, but Cloudflare knows that our origin is very close to its Paris node, so it forwards the request over its own network to the machine in Paris, which then does the proxy_pass https://origin.nova.moe;.
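To make scenario 1 concrete, here is a purely illustrative Nginx sketch of what a free-plan edge node conceptually does. Cloudflare's real edge is of course custom software, the certificate paths are placeholders, and nova.moe / origin.nova.moe are just the example names from above:

# Purely illustrative: a mental model of a free-plan edge node.
server {
    listen 443 ssl;
    server_name nova.moe;
    ssl_certificate     /etc/ssl/edge.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/edge.key;

    location / {
        # Scenario 1: fetch the origin over the public Internet,
        # all the way from San Jose to France.
        proxy_pass https://origin.nova.moe;
        proxy_set_header Host $host;
    }
}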

With Argo enabled, a large part of the origin-fetch path runs through tunnels between Cloudflare's own data centers, so the routing is much more under Cloudflare's control. In some cases this avoids detours; put simply, it should be faster. Cloudflare's official promotional image is as follows:

[Figure: Cloudflare's official Argo promotional diagram]

For more detailed comparisons of Argo, you can refer to Guo Zeyu’s “Cloudflare Argo vs. Railgun Comparative Testing, CDN Acceleration Technology.”

Implementing Your Own Argo-Like Network

Knowing that free-plan Cloudflare does "public network fetching" and how Argo works, you might wonder: can I build something similar myself?

Continuing the example: my blog's origin is in France, and mainland Chinese visitors hit Cloudflare in the western United States, resulting in public-network fetching from the western United States to France. If you happen to have a machine in the western United States, you can establish a tunnel from it to France and have it reverse proxy incoming traffic into that tunnel, which achieves a similar effect.

The first problem to solve is: how do we get Cloudflare's traffic into our own network as early as possible?

DNS Round Robin

Consider a scenario: we have an origin server in France (A) and two servers in the United States and the Netherlands (B and C). We configure B and C to reverse proxy to A, and at Cloudflare we add two A records (with the CDN enabled) pointing to the IP addresses of B and C. Will this work?

The answer is no. From a DNS perspective there are simply two records, and which one is returned does not depend on where the query comes from. It is therefore entirely possible for a mainland Chinese visitor to hit the San Jose node while Cloudflare resolves the origin to C, resulting in public-network fetching from the Netherlands.

Anycast

So, how can we ensure that traffic enters our network as quickly as possible? The answer is Anycast.

We still have the origin server in France (A) and two servers in the United States and the Netherlands (B and C). Both B and C announce the same IP address (let's assume it's 10.10.10.10). Now we only need a single A record (with the CDN enabled) pointing to 10.10.10.10.

Cloudflare's edge servers are ordinary servers that follow ordinary routing rules. When a mainland Chinese visitor hits the San Jose node, it reverse proxies to 10.10.10.10, and because that IP is also announced in the western United States, the packets are routed to our server there, entering our network right away.

First Practical Implementation

As a "premier CDN service provider" for Halo (just kidding), I have my own ASN and a small block of IPv6 addresses. In the first experiment, I announced the same IPv6 address (xxxx:xxxx:xxxx::1; don't ask why it looks so strange, I wonder too) in both the western United States and the Netherlands. The latency to this IP address looks like this:

[Figure: ping latency to the anycast IPv6 address from around the world]

From the graph, the latency from San Francisco and Amsterdam is roughly 2.4 ms and 1.7 ms respectively, so the Anycast announcement can be considered a success.
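Mechanically, "announcing the same IPv6 address in two places" just means originating the same prefix from each PoP's BGP session. Here is a minimal BIRD 2.x sketch; the router ID, ASN, prefix, and upstream neighbor are all placeholders, not my real values:

# /etc/bird.conf on B; C runs the same with its own router id and upstream.
router id 192.0.2.11;

protocol device {}

protocol static anycast_v6 {
    ipv6;
    # Originate the anycast prefix. The service address itself
    # (e.g. 2001:db8:100::1) also goes on the loopback interface
    # so the local Nginx can answer on it.
    route 2001:db8:100::/48 unreachable;
}

protocol bgp upstream {
    local as 4242400000;                  # placeholder ASN
    neighbor 2001:db8:ffff::1 as 64511;   # placeholder upstream session
    ipv6 {
        import none;                      # announce only, learn nothing
        export where proto = "anycast_v6";
    };
}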

Next, with a global intranet built on WireGuard, all three hosts (A, B, and C) sit on the same 192.168.1.0/24 internal network and can ping each other directly over the tunnel.
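For reference, a minimal sketch of what the WireGuard side might look like on B, assuming wg-quick, placeholder keys and endpoints, and assumed addresses 192.168.1.3 for B and 192.168.1.4 for C; only A's 192.168.1.5 comes from the Nginx config below:

# /etc/wireguard/wg0.conf on B
[Interface]
# B's address on the shared intranet (192.168.1.3 is an assumption)
Address = 192.168.1.3/24
PrivateKey = <B-private-key>
ListenPort = 51820

[Peer]
# A, the origin in France (192.168.1.5 matches the proxy_pass target)
PublicKey = <A-public-key>
Endpoint = a.example.net:51820
AllowedIPs = 192.168.1.5/32
PersistentKeepalive = 25

[Peer]
# C, the PoP in the Netherlands (address assumed)
PublicKey = <C-public-key>
Endpoint = c.example.net:51820
AllowedIPs = 192.168.1.4/32

Bring it up with wg-quick up wg0 on each host, with the peers adjusted accordingly.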

Now we just need to share the Nginx configuration between B and C; for now this is done over NFS. The proxy_pass part of the configuration looks something like this:

location / {
    proxy_pass https://192.168.1.5;   # A, the origin, reached over the WireGuard intranet
    proxy_set_header Host $host;      # pass the visitor's Host header through
    proxy_ssl_server_name on;         # send SNI matching that Host header
    proxy_ssl_name $host;             # so A can select the right certificate
}
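One practical note on the NFS-shared configuration: a change only has to be made once, but each PoP still has to reload its own Nginx (nginx -s reload, or systemctl reload nginx on systemd machines) before it takes effect.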

Finally, on the Cloudflare side, create an AAAA record with the CDN disabled (grey cloud, i.e. DNS only), such as secret.nova.moe, pointing to your own anycast IPv6 address.

[Figure: the AAAA record for secret.nova.moe, DNS only]

Then, create a corresponding CNAME record, such as halo.nova.moe, resolving to secret.nova.moe, and enable CDN.

[Figure: the CNAME record for halo.nova.moe, proxied]
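Side by side, the two records look like this (the values are the placeholders from above):

secret.nova.moe   AAAA    xxxx:xxxx:xxxx::1    DNS only (grey cloud)
halo.nova.moe     CNAME   secret.nova.moe      proxied (orange cloud)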

This way, IPv4 users can reach your IPv6-only site while you still get Cloudflare's CDN in front. A genuinely magical trick.

Here’s a rough network diagram:

[Figure: network diagram of the Anycast setup behind Cloudflare]

Benefits of Doing This

Mainly, it's for fun. Secondarily, as Halo's "premier CDN service provider" (just kidding again), the Halo JAR package mirror I host shows TTFB (Time to First Byte) decreasing by around 40% on average.

Before Anycast:

[Figure: TTFB before Anycast]

After implementing Anycast:

[Figure: TTFB after Anycast]

As for why the TTFB is still fairly high, I suspect it's down to DigitalOcean's Spaces; the relative improvement is what matters here.

Afterword

In this article, we used Anycast IPv6 to pull more traffic into "our own network," where we have more control and more room to maneuver at our own PoPs. This creates an effect similar to Argo.

A not-entirely-apt example: imagine large numbers of people worldwide downloading from and uploading to your machines, with traffic far exceeding the (say) 200 Mbps of bandwidth your provider gives the origin. Cloudflare can forward that traffic easily, but a single origin server becomes a bandwidth bottleneck. With multiple PoPs, we can fetch from other origin servers we control, reducing the load on any single origin; of course, in that situation the proper fix is load balancing plus a bandwidth upgrade (see the sketch below).
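If it ever came to that, "load balancing across origins we control" maps directly onto the Nginx config that B and C already run. A sketch, assuming a hypothetical second origin replica at 192.168.1.6 on the same WireGuard intranet:

# Hypothetical: spread origin fetches across two replicas instead of A alone.
upstream origin_pool {
    server 192.168.1.5:443;   # A, the existing origin in France
    server 192.168.1.6:443;   # hypothetical second replica
}

location / {
    proxy_pass https://origin_pool;
    proxy_set_header Host $host;
}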

References

  1. IP Announcement: Using Bird to Announce (Anycast) IPv6
  2. Argo Smart Routing
  3. What is Anycast? How does Anycast Work?
  4. Cloudflare Argo vs. Railgun Comparative Testing, CDN Acceleration Technology

#English #Network