Cloudflare is best known for fast, easy-to-configure DNS, CDN and edge computing, but it is now also capable of hosting more of your application infrastructure, particularly API services and processing pipelines.
The advantage of hosting infrastructure on Cloudflare, as opposed to AWS or GCP, is that more of your plumbing sits in a single environment that is optimised for web performance.
Also, in my opinion, the DX is more modern: services come preconfigured with sensible defaults, which means less time spent second-guessing cloud configuration and more time spent on design and testing.
Running your API services on Cloudflare Workers gives you high-throughput APIs without having to configure machine environments, load balancing, scaling or DNS.
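As a minimal sketch, a Worker is just a module that exports a `fetch` handler built on the standard `Request`/`Response` web APIs. The route and response shape below are my own illustrative choices, not a prescribed layout:

```typescript
// Minimal Worker sketch. The /api/health route and JSON payload
// are illustrative assumptions, not part of any required layout.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/health") {
      // Workers use the standard web Request/Response APIs.
      return Response.json({ ok: true });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Deploying is a single `wrangler deploy`; Cloudflare takes care of TLS, DNS and scaling out to its edge locations.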
For requests that require longer processing times or stronger delivery guarantees, Cloudflare Queues give you async processing with automatic retries and a dead-letter queue. A Worker can be configured to consume messages from a queue and auto-scale in response to queue depth. In my experience, setting up a queue consumer is very simple compared to the equivalent on AWS (CloudFormation, SQS, IAM, Lambda): the binding and consumer are declared in code, in the wrangler file.
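A sketch of what this looks like, assuming a queue binding named `MY_QUEUE` declared in the wrangler file (the binding name and message shape are my assumptions). The same module can both produce to and consume from the queue; the type stubs at the top stand in for `@cloudflare/workers-types` so the sketch is self-contained:

```typescript
// Minimal local type stubs so this sketch compiles standalone;
// in a real project these come from @cloudflare/workers-types.
interface Queue { send(body: unknown): Promise<void>; }
interface Message { body: unknown; ack(): void; retry(): void; }
interface MessageBatch { messages: Message[]; }
interface Env { MY_QUEUE: Queue; } // binding declared in the wrangler file

const worker = {
  // Producer: accept the request and enqueue it for async processing.
  async fetch(request: Request, env: Env): Promise<Response> {
    await env.MY_QUEUE.send(await request.json());
    return new Response("Accepted", { status: 202 });
  },
  // Consumer: Cloudflare invokes this handler with batches of messages.
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        // ...process msg.body here...
        msg.ack();
      } catch {
        // Retried automatically; messages that keep failing end up
        // in the dead-letter queue configured in the wrangler file.
        msg.retry();
      }
    }
  },
};

export default worker;
```

The retry limit and dead-letter queue are likewise set in the wrangler file's consumer section, so the whole pipeline lives in one repo alongside the code.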
AWS and GCP are fine; the point of this article is that Cloudflare can handle a lot of your application logic, and the advantage of letting it is that you get Cloudflare's web performance and a modern DX.
George