💡 TL;DR

OpenCV is a popular computer vision library used for developing AI applications. In this guide, I’ll show you how to compile OpenCV as a universal framework compatible with all of Apple’s operating systems: iOS, iPadOS, visionOS, Mac Catalyst, and Simulators. In the end, we’ll also use the new framework to integrate OpenCV into a real-world Xcode app!

Check the project on GitHub

✨ What is OpenCV?

Hey, developers! Today, we’ll talk about OpenCV and the Apple ecosystem. To give you some background, OpenCV (Open Source Computer Vision Library) is a powerhouse for visual computing. It’s an open-source library packed with thousands of image processing algorithms.

These algorithms range from basic tasks, like detecting edges or converting images to grayscale, to complex feats, such as facial recognition and augmented reality transformations. Whether it’s fueling the magic of Instagram photo filters or being the brains behind surveillance systems, OpenCV is often the driving force behind such technologies. It sort of gives software developers vision superpowers, allowing us to integrate sophisticated visual intelligence into our apps.

OpenCV Apple Vision Pro

Thanks to its C++ core, OpenCV runs seamlessly across various platforms, from desktop computers to mobile phones, and even on edge devices. But here’s the kicker: you don’t need to be a C++ expert to leverage this power – instead, we’ll be tapping into OpenCV’s C++ functionalities directly from Swift.

🫤 OpenCV + Apple != ❤️

While OpenCV is a fantastic tool for visual computing, there are some specific quirks to be aware of when integrating it into Apple platforms.

The primary issue here is that OpenCV’s pre-built binaries are designed solely for iOS, and they’re tailored to run on actual physical devices. But as developers, our playground is much broader. We often need to code for a variety of environments, including both Apple Silicon simulators for the latest devices and Intel simulators for older Macs. The hitch? These pre-built binaries don’t cover the whole spectrum; they leave out crucial support for these simulators.

Then there’s the Apple Vision Pro aspect. Vision Pro is a new mixed-reality headset, powered by its own operating system (visionOS). If you’re venturing into this area, brace yourself for some DIY, as you’ll need to build the OpenCV binaries yourself. And yes, that includes making sure they play nice with simulators.

Now, wouldn’t it be neat if there was one magic binary that worked seamlessly across all Apple platforms? That’s exactly what we’re going to tackle today. I’m here to guide you through creating a universal solution that bridges these gaps, simplifying your workflow and ensuring your OpenCV projects are as versatile as your coding skills. Let’s get started on making OpenCV more flexible and developer-friendly within the Apple universe!

💡 Solution: build OpenCV from scratch 🤩

Alright, let’s break down the solution to our OpenCV challenge. The key here is compiling OpenCV from the ground up. By doing this, you’re going to create a custom-fit binary framework for each specific architecture you’re targeting. This is like crafting a made-to-measure suit; each piece is tailored for a specific purpose.

So, the big question is: Which architectures do we need to cover? Here’s your checklist:

  1. iOS/iPadOS: Focusing on the ARM64 architecture, this covers your iPhone and iPad devices, as well as Mac Catalyst.
  2. iOS Simulator for Apple Silicon Macs: To test on the latest Macs with Apple’s cutting-edge Silicon chips.
  3. iOS Simulator for older Intel Macs: Ensuring compatibility with the x86 Intel-based Macs still in use.
  4. visionOS: Targeting the ARM64 architecture for the Vision Pro headsets.
  5. visionOS Simulator for Apple Silicon Macs: To simulate visionOS on the latest Apple Silicon Macs.

One important note: the Vision Pro simulator isn’t supported on Intel Macs.

It’s apparent, then, that we need to build five binaries for our project, each tailored to a different Apple architecture. We’ll package them as .framework bundles – think of these as digital boxes, each holding the necessary bits for a specific platform.

Next, we’ll merge these five into one super-framework, known as an .xcframework. This is Apple’s way of making our lives easier. The .xcframework is a smart container that holds all these versions and automatically picks the right one for the device it’s running on. It’s like having a toolkit that knows exactly which tool to use at any given time. Let’s put these together and see how seamlessly they work across different devices!
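For the curious, here’s roughly what happens under the hood once the per-platform frameworks are compiled: they’re stitched together with xcodebuild. This is a simplified sketch with illustrative paths – the build script we’ll run shortly takes care of all of it for you:

# Sketch: merging per-platform frameworks into a single .xcframework.
# The two iPhone Simulator architectures (arm64, x86_64) are combined into
# one fat framework with lipo first, since an xcframework holds one slice
# per platform/environment pair. All paths are illustrative.
xcodebuild -create-xcframework \
  -framework iphoneos/opencv2.framework \
  -framework iphonesimulator/opencv2.framework \
  -framework visionos/opencv2.framework \
  -framework visionsimulator/opencv2.framework \
  -output opencv2.xcframework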

💻 Environment setup

To compile OpenCV, we need a Mac computer with an Apple Silicon chip. The better the specs, the quicker you’ll breeze through the compilation process. Here’s a peek at my setup:

MacBook Pro 2021 (macOS Sonoma)
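Before anything else, it’s worth sanity-checking the toolchain. Under the hood, OpenCV’s build scripts need Xcode (with its Command Line Tools), Python 3, and CMake. Here’s a quick check – the version numbers are indicative, based on my setup:

xcodebuild -version   # Xcode 15.2 or later, for the visionOS SDK
python3 --version     # the build scripts run on Python 3
cmake --version       # OpenCV's scripts drive CMake internally
brew install cmake    # only if CMake is missing (assumes Homebrew)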

📦 Clone OpenCV

Open a Terminal window and run the following commands:

git clone https://github.com/opencv/opencv.git

OpenCV is hosted on GitHub, so we need to clone it to our local machine using Git. After executing this command, you’ll have a directory named opencv on your computer, containing the source code of the OpenCV library. This is the first step in obtaining the code necessary to start compiling your own custom version of OpenCV.

cd opencv

Next, use cd to move into the root directory of the OpenCV code. The commands that follow must run from the repository root.

git checkout tags/4.9.0

By executing git checkout, you’re telling Git to move to the state of the repository as it was at the release of version 4.9.0. This ensures that you’re working with a specific, stable version of OpenCV, rather than the latest (and potentially unstable) version from the main branch. Plus, version 4.9 added official support for visionOS.

📺 Build OpenCV

It’s time for the magic command. We’ll use OpenCV’s Python tools to build it for the desired architectures.

python3 platforms/apple/build_xcframework.py --out build_all \
--iphoneos_deployment_target 14.0 \
--iphoneos_archs arm64 \
--iphonesimulator_archs arm64,x86_64 \
--visionos_archs arm64 \
--visionsimulator_archs arm64 \
--build_only_specified_archs True \
--without objc

Btw, I believe you should understand what you type in your Terminal, so here’s a breakdown of the above command.

  • python3 platforms/apple/build_xcframework.py: Runs the OpenCV Python script that produces the final framework.
  • --out build_all: Specifies the output directory where the script’s results will be stored. In our case, the directory is named “build_all”. Feel free to use a name of your choice.
  • --iphoneos_deployment_target 14.0: Specifies the minimum supported iOS version. In our case, it’s iOS 14.0.
  • --iphoneos_archs arm64: Specifies the target architecture for iOS. All iPhone and iPad devices come with 64-bit ARM chipsets.
  • --iphonesimulator_archs arm64,x86_64: Specifies the target architectures of the iOS Simulator. Apple Silicon Macs run 64-bit ARM simulators, while Intel Macs run 64-bit x86 simulators.
  • --visionos_archs arm64: Just like the iOS setting, this is the architecture of the Vision Pro device. Vision Pro is a 64-bit ARM headset.
  • --visionsimulator_archs arm64: Specifies the architecture of the Vision Pro simulator. Since the visionOS simulator only runs on Apple Silicon Macs, there is no x86 Intel support.
  • --build_only_specified_archs True: Instructs the script to build only the specified architectures and ignore any other configurations.
  • --without objc: The last flag instructs the script to omit the Objective-C bindings from the final build. We don’t need them, since we’ll be calling the C++ APIs directly from our Swift app. You can try keeping the ObjC bindings, but the build will likely fail (I haven’t tracked down why).

Hit Return and go!

🥱 Now sit tight and wait!

This process may take quite some time. Grab a cup of coffee or tea, or go for a short run. Be patient as your Mac builds OpenCV.

When finished, the build_all folder should be populated with the compiled platform-specific binaries. What we need is the universal opencv2.xcframework for use in our apps.

OpenCV build targets for Apple platforms (iOS, visionOS, simulators)
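Want to double-check what actually made it into the container? You can peek inside from the Terminal. Here’s a quick sketch – note that the slice directory names may differ slightly between OpenCV and Xcode versions:

# Lists the platform slices bundled in the xcframework.
plutil -p build_all/opencv2.xcframework/Info.plist

# Prints the architectures baked into a single slice (path is illustrative).
lipo -info build_all/opencv2.xcframework/ios-arm64/opencv2.framework/opencv2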

OK, we have binaries. But do they really work? Let’s find out!

🛠️ Example: Using OpenCV in Swift

Roll up your sleeves and dive into the nitty-gritty with me. Integrating a C++ beast like OpenCV into Swift’s realm might seem terrifying but, trust me, once you get the foundation right, it becomes second nature.

🕐 Import OpenCV in a Swift Project

First things first. Create a new Swift project in Xcode 15.2 or later.

Choose the Multiplatform template, so we can easily support multiple target devices.

Create a new Multiplatform app in Xcode

Select the app target in the Targets list. In the General tab, you’ll see the supported run destinations. Click the + button and add Apple Vision, too.

Set the Apple Vision Pro target in Xcode

🕑 Import OpenCV

Now, import the opencv2.xcframework we built earlier into the project by dragging and dropping it under the Frameworks, Libraries, and Embedded Content section. Set its Embed property to Do Not Embed; the framework we built is static, so it doesn’t need to be copied into the app bundle.

Adding OpenCV xcframework

🕒 Create an Objective-C Bridge

OpenCV is written in C++, while modern Apple apps are written in Swift. To bridge the gap between them, Objective-C, Apple’s older programming language, acts as an intermediary. Go on, add a new file, and select Cocoa Touch Class.

Add an Objective-C class in Xcode

To mix multiple programming languages in the same project, a special file, called the Objective-C Bridging Header, is required. In technical terms, the bridging header exposes Objective-C declarations to Swift, making sure the two languages play nice. Creating this bridge is a crucial step, so make sure you allow Xcode to create that file for you, as shown below.

Add an Objective-C Bridging Header in Xcode
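For reference, the bridging header itself stays tiny. Once the wrapper class exists (we’ll create OpenCVWrapper in a moment), a single import is all it needs – the file name below follows Xcode’s default naming convention, so yours may differ:

// YourApp-Bridging-Header.h
// Every Objective-C header imported here becomes visible to Swift.
#import "OpenCVWrapper.h"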

Since we’ll be using C++ (not just Objective-C), rename the .m file to .mm. Yes, I know, it’s mind-bending, but that’s how Apple does it.

Convert an Objective-C file to Objective-C++

Finally, we need to import the C++ headers so we can use them from Objective-C. To import the headers, create a Prefix Header file:

Add a Prefix Header in Xcode

Import the OpenCV headers in the .pch file so they can be accessible from the app:

// PrefixHeader.pch
#ifndef PrefixHeader_pch
#define PrefixHeader_pch
// Include any system framework and library headers here that should be included in all compilation units.
// You will also need to set the Prefix Header build setting of one or more of your targets to reference this file.
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif
#endif /* PrefixHeader_pch */
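One gotcha: Xcode doesn’t always wire up the .pch for you. Open your target’s Build Settings, search for “Prefix Header”, and set it to the file’s path, e.g. $(SRCROOT)/YourApp/PrefixHeader.pch (adjust to match your project’s folder layout). The template comment inside the file reminds you of exactly this step.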

Back to the Objective-C files, I’m gonna create a few C++ functions to apply image effects, such as blur and grayscale. I won’t go into the C++ code step by step, but here’s an overview:

  • The gaussianBlur function applies a blur effect to a UIImage component.
  • The toGrayscale function transforms a colored RGB UIImage to, well, grayscale.

You can find the complete code in my GitHub repository.

I’ve borrowed most of the code below from Poorna Chathuranjana — thanks for sharing, my friend!

// OpenCVWrapper.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
NS_ASSUME_NONNULL_BEGIN
@interface OpenCVWrapper : NSObject
+ (UIImage *)toGrayscale:(UIImage *)image;
+ (UIImage *)gaussianBlur:(UIImage *)image: (int)blurAmount;
@end
NS_ASSUME_NONNULL_END

// OpenCVWrapper.mm
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import "OpenCVWrapper.h"
@interface UIImage (OpenCVWrapper)
- (void)convertToMat: (cv::Mat *)pMat: (bool)alphaExists;
@end
@implementation UIImage (OpenCVWrapper)
// Converts the UIImage to a cv::Mat, rotating the pixels so the Mat is upright
// regardless of the image's EXIF orientation.
- (void)convertToMat: (cv::Mat *)pMat: (bool)alphaExists {
    if (self.imageOrientation == UIImageOrientationRight) {
        UIImageToMat([UIImage imageWithCGImage:self.CGImage scale:1.0 orientation:UIImageOrientationUp], *pMat, alphaExists);
        cv::rotate(*pMat, *pMat, cv::ROTATE_90_CLOCKWISE);
    } else if (self.imageOrientation == UIImageOrientationLeft) {
        UIImageToMat([UIImage imageWithCGImage:self.CGImage scale:1.0 orientation:UIImageOrientationUp], *pMat, alphaExists);
        cv::rotate(*pMat, *pMat, cv::ROTATE_90_COUNTERCLOCKWISE);
    } else {
        UIImageToMat(self, *pMat, alphaExists);
        if (self.imageOrientation == UIImageOrientationDown) {
            cv::rotate(*pMat, *pMat, cv::ROTATE_180);
        }
    }
}
@end
@implementation OpenCVWrapper
// Applies a Gaussian blur. blurAmount is the kernel size; cv::GaussianBlur
// requires it to be an odd, positive number.
+ (UIImage *)gaussianBlur:(UIImage *)image: (int)blurAmount {
    cv::Mat mat;
    [image convertToMat:&mat :false];
    
    cv::Mat blur;
    mat.copyTo(blur);
    
    cv::GaussianBlur(mat, blur, cv::Size(blurAmount, blurAmount), 0.0);
    
    UIImage* blurImage = MatToUIImage(blur);
    return blurImage;
}
// Converts the image to grayscale; single-channel inputs are copied through unchanged.
+ (UIImage *)toGrayscale:(UIImage *)image {
    cv::Mat mat;
    [image convertToMat: &mat :false];
    
    cv::Mat gray;
    if (mat.channels() > 1) {
        cv::cvtColor(mat, gray, cv::COLOR_RGB2GRAY);
    } else {
        mat.copyTo(gray);
    }
    UIImage *grayImg = MatToUIImage(gray);
    return grayImg;
}
@end

🕓 Call Objective-C from Swift

Now that the bridging code is in place, it’s showtime! Within your Swift code, you can call upon OpenCV functions with ease, just as you would with any other Swift functions. It’s like having a direct hotline to OpenCV, ready to deploy its features on demand.

Go ahead, create a SwiftUI view, and call the Objective-C methods as follows:

import SwiftUI
struct ContentView: View {
    // Assumes an image asset named "image" exists in the asset catalog.
    @State private var image = UIImage(named: "image")!
    var body: some View {
        VStack {
            Image(uiImage: image)
                .resizable()
            Button("Convert to grayscale") {
                let grayImage = OpenCVWrapper.toGrayscale(image)
                image = grayImage
            }
            
            Button("Apply gaussian blur") {
                let blurImage = OpenCVWrapper.gaussianBlur(image, 125)
                image = blurImage
            }
        }
    }
}

Xcode’s preview window is already showing the resulting interface.

An Xcode SwiftUI project for visionOS that uses OpenCV

🕔 Run the app!

Finally, it’s time to run our OpenCV-powered app. In this example, I’m using the Vision Pro simulator, but feel free to clone the GitHub project and select a different run destination.

Xcode run destinations

Click the Run button and wait for the app to install and launch. At last, here’s the result (🥳 🥳 🥳) on my Vision Pro simulator:

We’ve done it! An OpenCV framework that can be used on any Apple device. 🎉🥳🎊

If you liked this article, drop me a comment below and share it with your friends!

📖 Resources

I didn’t write this guide using ChatGPT because AI can’t provide solutions to novel problems (yet). Instead, I stood on the shoulders of dev giants; you’ll find their work linked throughout this article.

Until machines can write better code than humans, keep coding, my friends 👋👋👋

⚡️ Hey! Need help?

Navigating the intricate landscapes of computer vision and AI requires more than just knowledge – it demands experience. Over the past 11 years, our team at LightBuzz has been at the forefront of computer vision technology, developing not only projects but also our very own cutting-edge AI systems. Are you embarking on a computer vision adventure? Need expert hands to steer your project towards success?

🚀 Choose LightBuzz. Let’s bring your vision to life, pixel by pixel.

Contact us
Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.
