Future of concurrent code in Swift
Omar Elsayed
Introduction
The landscape of concurrent programming in Swift has evolved dramatically, presenting developers with multiple paths to handle complex operations.
As we stand at this technical crossroads, a crucial question emerges: which concurrency approach best serves your Swift project’s needs? More importantly, does the shift to Modern Concurrency with async/await justify the effort of refactoring your existing codebase?
In this deep dive, we’ll first step back to examine the fundamental differences between synchronous, asynchronous, and concurrent code — three distinct approaches that serve unique purposes in your application’s architecture.
By understanding these core concepts, you’ll be better equipped to make informed decisions about your app’s concurrency strategy.
- Introduction
- Synchronous vs. Asynchronous vs. Concurrent Code
- Asynchronous and Concurrent Code in Swift
- Real Use Case
- Conclusion
Synchronous vs. Asynchronous vs. Concurrent Code
Let’s begin with the foundation of programming execution: synchronous code. This is the traditional and most intuitive way programs operate.
Think of synchronous execution as a single-lane road where cars must travel one after another in a precise order. When your code runs synchronously, each instruction waits for the previous one to complete before beginning — like a well-orchestrated assembly line.

In technical terms, synchronous execution utilizes a single thread, which can be thought of as a dedicated processing lane in your application.
This thread handles one task at a time, completing it fully before moving on to the next. While this sequential approach ensures predictable execution order and simplifies debugging, it also means that time-consuming operations can create bottlenecks.
Just as a slow-moving vehicle on a single-lane road affects all traffic behind it, a lengthy operation in synchronous code forces all subsequent tasks to wait.
It’s worth noting that while we often equate one thread to one CPU core, modern processors use sophisticated scheduling techniques to manage multiple threads even on a single core. However, the fundamental principle remains: synchronous code executes tasks one after another in a predictable sequence.
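As a minimal sketch of this idea (the function names here are illustrative, not from any real API), synchronous Swift code runs strictly in order:

```swift
import Foundation

// Hypothetical helpers: each call blocks until it returns.
func loadConfig() -> String { "host,port" }
func parse(_ raw: String) -> [String] { raw.split(separator: ",").map(String.init) }

let raw = loadConfig()   // runs first, to completion
let parsed = parse(raw)  // starts only after loadConfig has returned
print(parsed)            // ["host", "port"]
```

If loadConfig were slow, everything after it would simply wait; that is the single-lane bottleneck described above.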
Asynchronous
Now that we understand synchronous execution, let’s explore asynchronous code — a more sophisticated approach to handling tasks. Imagine a complex intersection managed by traffic lights: while cars on one road are temporarily stopped at a red light, vehicles on the intersecting road can flow through their green light.
This dynamic switching ensures smoother overall traffic flow and better resource utilization.

In technical terms, asynchronous execution allows tasks to be interleaved, typically by scheduling work across multiple threads. Unlike our single-lane synchronous scenario, tasks can now be distributed across different processing paths.
When one operation needs to wait — perhaps for data to download or a computation to complete — another task can proceed on a different thread, maintaining application responsiveness.
However, this flexibility comes with an important characteristic: the exact order of execution becomes less predictable. Just as you can’t precisely predict which car will clear the intersection first when multiple traffic lights are cycling, asynchronous operations may complete in varying orders depending on their nature and duration.
This behavior occurs because the program actively switches between threads to optimize task execution, much like traffic lights coordinating the flow of vehicles from different directions.
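To make the unpredictable ordering concrete, here is a small sketch using Grand Central Dispatch (the DispatchGroup and lock exist only so the script can safely wait for, and record, both tasks):

```swift
import Foundation

let group = DispatchGroup()
let lock = NSLock()
var completionOrder: [Int] = []

// Two tasks submitted back to back; which finishes first varies per run.
for id in 1...2 {
    DispatchQueue.global().async(group: group) {
        Thread.sleep(forTimeInterval: Double.random(in: 0...0.05))
        lock.lock()
        completionOrder.append(id)
        lock.unlock()
    }
}

group.wait()  // block the caller until both tasks have completed
print(completionOrder)  // e.g. [2, 1]; the order is not guaranteed
```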
Concurrent
Building on our previous concepts, concurrent code represents the pinnacle of parallel execution — imagine upgrading from an intersection to a multi-lane highway where traffic flows simultaneously in parallel lanes.
While asynchronous code manages the switching between tasks, concurrent code takes this further by executing multiple operations genuinely in parallel.

In technical terms, concurrent execution harnesses multiple threads operating simultaneously, each potentially running on a different CPU core. This parallel processing capability is what can significantly boost your app’s performance.
Instead of just efficiently managing waiting times like in asynchronous code, concurrent execution actually performs multiple tasks at the exact same time — like multiple cars cruising simultaneously on different highway lanes.
However, with this power comes additional complexity in coordination and resource management. Just as a highway needs clear lane markings and merge protocols to prevent chaos, concurrent code requires careful management to avoid issues like race conditions and deadlocks.
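A compact way to see true parallelism in Swift is DispatchQueue.concurrentPerform, which fans iterations out across the available cores; the buffer-pointer pattern below is one common way to let each iteration write its own slot without a data race:

```swift
import Foundation

var results = [Int](repeating: 0, count: 4)

// Iterations run in parallel on the available CPU cores.
results.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: 4) { i in
        // Each iteration writes only its own index, so no locking is needed.
        buffer[i] = i * i
    }
}
print(results)  // [0, 1, 4, 9]
```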
This brings us to an important question: how can we effectively implement both asynchronous and concurrent code in Swift while managing this complexity 🤔?
Asynchronous and Concurrent Code in Swift
Swift’s evolution has given us four distinct approaches to handle asynchronous and concurrent operations, each with its own tradeoffs.
Traditional closures
Closures were our first step into asynchronous programming. While they provided basic asynchronous capabilities, they fell short of providing true concurrency. Their major drawbacks include the notorious retain cycles that can lead to memory leaks, challenging debugging scenarios, and code that quickly becomes difficult to follow — especially when dealing with nested closures.
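A small sketch of the problem (fetchUser and fetchPosts are hypothetical stand-ins for real completion-handler APIs; the semaphore only keeps the script alive until the work finishes):

```swift
import Foundation

// Hypothetical completion-handler APIs, stand-ins for real network calls.
func fetchUser(_ done: @escaping (String) -> Void) {
    DispatchQueue.global().async { done("omar") }
}
func fetchPosts(for user: String, _ done: @escaping ([String]) -> Void) {
    DispatchQueue.global().async { done(["post1", "post2"]) }
}

let semaphore = DispatchSemaphore(value: 0)
var loadedPosts: [String] = []

// The "pyramid of doom": every dependent step nests one level deeper,
// and every closure is a fresh opportunity to capture something strongly.
fetchUser { user in
    fetchPosts(for: user) { posts in
        loadedPosts = posts
        semaphore.signal()
    }
}
semaphore.wait()
print(loadedPosts)  // ["post1", "post2"]
```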
Grand Central Dispatch
GCD advanced our capabilities by enabling both asynchronous and concurrent code execution. However, its closure-based nature inherited similar memory management challenges. GCD also places a significant burden on developers to manually manage thread safety, task coordination, and error handling. This low-level control comes with increased complexity and greater potential for subtle bugs.
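For instance, even a trivial shared counter demands manual synchronization under GCD; a serial "isolation" queue is one common pattern (a sketch, not the only option):

```swift
import Foundation

// A serial queue acts as a lock around the shared counter.
let isolation = DispatchQueue(label: "counter.isolation")
var counter = 0

DispatchQueue.concurrentPerform(iterations: 100) { _ in
    // Without this serial hop, `counter += 1` from many threads is a data race.
    isolation.sync { counter += 1 }
}
print(counter)  // 100
```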
Combine
The Combine framework introduced a more structured approach through functional reactive programming. While it handles both asynchronous and concurrent operations, its steep learning curve and complex operator chains can make code difficult to understand and maintain. Creating concurrent operations with Combine often requires intricate knowledge of its publisher-subscriber model.
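As a minimal sketch (Apple platforms only, since Combine ships with Apple's SDKs): a Future wraps a one-shot result, and operators transform it before it reaches the sink:

```swift
import Combine
import Foundation

var cancellables = Set<AnyCancellable>()
var received: [Int] = []

// Future is Combine's promise-like, one-shot publisher.
Future<Int, Never> { promise in
    promise(.success(21))
}
.map { $0 * 2 }                   // transform the emitted value
.sink { received.append($0) }     // subscribe and collect the result
.store(in: &cancellables)

print(received)  // [42]
```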
Modern Concurrency
Async/await represents a significant leap forward, addressing the limitations of previous approaches. It offers several compelling advantages:
- Elimination of retain cycles through structured concurrency
- Clear task hierarchies where child tasks are automatically notified when the parent task is cancelled
- An improved debugging experience, with stack traces that actually make sense
- Intuitive syntax that reads like synchronous code
- Built-in support for error handling
This structured approach not only makes concurrent code easier to write and understand but also helps prevent many common concurrency bugs before they even reach production. The clear winner among these approaches is async/await, which brings Swift concurrency closer to the language’s core principles of safety, clarity, and performance.
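As a sketch of what structured concurrency looks like in practice (this assumes a toolchain with top-level await support, Swift 5.7 or later): a task group spawns child tasks whose lifetimes are bounded by the group itself:

```swift
import Foundation

func square(_ n: Int) async -> Int { n * n }

// Child tasks live inside the group: if the parent task were cancelled,
// the children would be cancelled along with it.
let squares = await withTaskGroup(of: Int.self) { group in
    for n in 1...3 {
        group.addTask { await square(n) }
    }
    var collected: [Int] = []
    for await value in group {
        collected.append(value)  // completion order may vary
    }
    return collected.sorted()
}
print(squares)  // [1, 4, 9]
```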
Real Use Case
Let me share a revealing experience from building MultiCam, a project that highlighted the pitfalls of traditional concurrency approaches. Initially, I worked with AVFoundation, a longstanding iOS framework built on closures and delegates. To handle camera capture operations efficiently, I implemented GCD to move the capture method to a background thread.
Everything seemed fine until I encountered a memory leak while implementing the coordinator pattern. Despite properly deallocating the view and coordinator, the ViewModel remained in memory. At first glance, the code looked correct — I had even followed best practices by using a weak reference to the ViewModel within the DispatchQueue closure.
However, I discovered a critical issue: after the guard let self unwrap, self is a strong reference again, and the completion closures passed to finalImageLogic.captureImage captured it strongly. Because finalImageLogic held on to those closures, an unexpected retain cycle kept the ViewModel alive.
func captureImages() {
    DispatchQueue.global().async { [weak self] in
        // `self` is weak only up to this point; the guard produces a
        // strong reference that the closures below capture.
        guard let self else { return }
        finalImageLogic.captureImage { frontImageData in
            guard let frontImageData else { return }
            DispatchQueue.main.async {
                self.frontImage = UIImage(data: frontImageData)
            }
        } backImageCompletion: { backImageData in
            guard let backImageData else { return }
            DispatchQueue.main.async {
                self.backImage = UIImage(data: backImageData)
            }
        }
    }
}
This scenario perfectly illustrates why GCD can be treacherous. What appeared to be a straightforward implementation turned into hours of debugging a non-obvious memory leak. These kinds of subtle concurrency issues can slip through code review and testing, only to surface as performance problems in production.

The solution came when I refactored the code to use async/await. The transformation not only eliminated the retain cycle but also made the code more readable and maintainable.
The effort invested in refactoring proved worthwhile, as the modern concurrency model provided cleaner, safer code with better memory management guarantees.
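The refactored version looked roughly like the sketch below. CaptureService and its async methods are hypothetical stand-ins for the real MultiCam capture layer, not its actual API; the point is the shape: async let runs both captures concurrently, and with no escaping closures there is nothing left to capture self strongly.

```swift
import Foundation

// Hypothetical stand-in for the real capture layer.
struct CaptureService {
    func captureFront() async -> Data { Data([0x01]) }
    func captureBack() async -> Data { Data([0x02]) }
}

final class CameraViewModel {
    private let service = CaptureService()
    private(set) var frontImage: Data?
    private(set) var backImage: Data?

    func captureImages() async {
        // Both captures start immediately and run concurrently.
        async let front = service.captureFront()
        async let back = service.captureBack()
        // Awaiting suspends this task; no stored closure ever retains `self`.
        (frontImage, backImage) = await (front, back)
    }
}

let viewModel = CameraViewModel()
await viewModel.captureImages()
print(viewModel.frontImage != nil && viewModel.backImage != nil)  // true
```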
Conclusion
As we’ve explored the evolution of concurrent programming in Swift, from closures to async/await, one thing becomes clear: Modern Concurrency isn’t just another iteration in Swift’s development — it’s a fundamental shift in how we approach complex operations in our applications.
Our journey through MultiCam illustrates a common scenario many developers face: what begins as a seemingly straightforward implementation using traditional methods like GCD can quickly become a source of time-consuming bugs. The transition to async/await isn’t merely about adopting new syntax; it’s about embracing a safer, more maintainable approach to concurrent programming.
The question isn’t whether to adopt Modern Concurrency, but when. While refactoring existing codebases requires initial investment, the benefits like eliminating retain cycles, improved debugging, intuitive syntax, and robust error handling make it a worthwhile endeavor.
Next week, I’ll show you how we adopted Modern Concurrency at Klivvr without breaking anything in our legacy codebase.