metalpetal / metalpetal

A GPU accelerated image and video processing framework built on Metal.

Home Page: https://github.com/MetalPetal/MetalPetal

License: MIT License

Objective-C 38.37% Ruby 0.28% Metal 15.23% Swift 23.78% Shell 0.24% Objective-C++ 20.44% C 0.02% C++ 1.65%
metal image-processing rendering gpu image video-processing real-time video filter ios

metalpetal's Introduction

MetalPetal


An image processing framework based on Metal.

Design Overview

MetalPetal is an image processing framework based on Metal, designed to provide real-time processing for still images and video with easy-to-use programming interfaces.

This chapter covers the key concepts of MetalPetal, and will help you to get a better understanding of its design, implementation, performance implications and best practices.

Goals

MetalPetal is designed with the following goals in mind.

  • Easy to use API

    Provides convenience APIs and avoids common pitfalls.

  • Performance

    Use CPU, GPU and memory efficiently.

  • Extensibility

    Easy to create custom filters and plug in your own image processing units.

  • Swifty

    Provides a fluid experience for Swift programmers.

Core Components

Some of the core concepts of MetalPetal are very similar to those in Apple's Core Image framework.

MTIContext

Provides an evaluation context for rendering MTIImages. It also stores a lot of caches and state information, so it's more efficient to reuse a context whenever possible.

MTIImage

A MTIImage object is a representation of an image to be processed or produced. It does not directly represent image bitmap data; instead, it has all the information necessary to produce an image, or more precisely, a MTLTexture. It consists of two parts: a recipe for producing the texture (MTIImagePromise) and other information such as how a context caches the image (cachePolicy) and how the texture should be sampled (samplerDescriptor).

MTIFilter

A MTIFilter represents an image processing effect and any parameters that control that effect. It produces a MTIImage object as output. To use a filter, you create a filter object, set its input images and parameters, and then access its output image. Typically, a filter class owns a static kernel (MTIKernel); when you access its outputImage property, it asks the kernel to produce an output MTIImage with the given input images and parameters.

MTIKernel

A MTIKernel represents an image processing routine. MTIKernel is responsible for creating the corresponding render or compute pipeline state for the filter, as well as building the MTIImagePromise for a MTIImage.

Optimizations

MetalPetal does a lot of optimizations for you under the hood.

It automatically caches functions, kernel states, sampler states, etc.

It utilizes Metal features like programmable blending, memoryless render targets, resource heaps, and Metal Performance Shaders to make rendering fast and efficient. On macOS, MetalPetal can also take advantage of the TBDR architecture of Apple silicon.

Before rendering, MetalPetal can look into your image render graph and figure out the minimal number of intermediate textures needed to do the rendering, saving memory, energy and time.

It can also re-organize the image render graph if multiple “recipes” can be concatenated to eliminate redundant render passes. (MTIContext.isRenderGraphOptimizationEnabled)

Concurrency Considerations

MTIImage objects are immutable, which means they can be shared safely among threads.

However, MTIFilter objects are mutable and thus cannot be shared safely among threads.

A MTIContext contains a lot of states and caches. There's a thread-safe mechanism for MTIContext objects, making it safe to share a MTIContext object among threads.
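
For example, a minimal sketch of sharing a single context across the app (the RenderContext wrapper here is just an illustration, not a MetalPetal API):

import Metal
import MetalPetal

enum RenderContext {
    // One MTIContext for the whole app; safe to use from multiple threads per the note above.
    static let shared: MTIContext? = {
        guard let device = MTLCreateSystemDefaultDevice() else { return nil }
        return try? MTIContext(device: device)
    }()
}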

Advantages over Core Image

  • Fully customizable vertex and fragment functions.

  • MRT (Multiple Render Targets) support.

  • Generally better performance. (Detailed benchmark data needed)

Builtin Filters

  • Color Matrix

  • Color Lookup

    Uses a color lookup table to remap the colors in an image.

  • Opacity

  • Exposure

  • Saturation

  • Brightness

  • Contrast

  • Color Invert

  • Vibrance

    Adjusts the saturation of an image while keeping pleasing skin tones.

  • RGB Tone Curve

  • Blend Modes

    • Normal
    • Multiply
    • Overlay
    • Screen
    • Hard Light
    • Soft Light
    • Darken
    • Lighten
    • Color Dodge
    • Add (Linear Dodge)
    • Color Burn
    • Linear Burn
    • Lighter Color
    • Darker Color
    • Vivid Light
    • Linear Light
    • Pin Light
    • Hard Mix
    • Difference
    • Exclusion
    • Subtract
    • Divide
    • Hue
    • Saturation
    • Color
    • Luminosity
    • ColorLookup512x512
    • Custom Blend Mode
  • Blend with Mask

  • Transform

  • Crop

  • Pixellate

  • Multilayer Composite

  • MPS Convolution

  • MPS Gaussian Blur

  • MPS Definition

  • MPS Sobel

  • MPS Unsharp Mask

  • MPS Box Blur

  • High Pass Skin Smoothing

  • CLAHE (Contrast-Limited Adaptive Histogram Equalization)

  • Lens Blur (Hexagonal Bokeh Blur)

  • Surface Blur

  • Bulge Distortion

  • Chroma Key Blend

  • Color Halftone

  • Dot Screen

  • Round Corner (Circular/Continuous Curve)

  • All Core Image Filters

Example Code

Create a MTIImage

You can create a MTIImage object from nearly any source of image data, including:

  • URLs referencing image files to be loaded
  • Metal textures
  • CoreVideo image or pixel buffers (CVImageBufferRef or CVPixelBufferRef)
  • Image bitmap data in memory
  • Texture data from a given texture or image asset name
  • Core Image CIImage objects
  • MDLTexture objects
  • SceneKit and SpriteKit scenes
let imageFromCGImage = MTIImage(cgImage: cgImage, isOpaque: true)

let imageFromCIImage = MTIImage(ciImage: ciImage)

let imageFromCoreVideoPixelBuffer = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)

let imageFromContentsOfURL = MTIImage(contentsOf: url)

// unpremultiply alpha if needed
let unpremultipliedAlphaImage = image.unpremultiplyingAlpha()

Apply a Filter

let inputImage = ...

let filter = MTISaturationFilter()
filter.saturation = 0
filter.inputImage = inputImage

let outputImage = filter.outputImage

Render a MTIImage

let options = MTIContextOptions()

guard let device = MTLCreateSystemDefaultDevice(), let context = try? MTIContext(device: device, options: options) else {
    return
}

let image: MTIImage = ...

do {
    try context.render(image, to: pixelBuffer) 
    
    //context.makeCIImage(from: image)
    
    //context.makeCGImage(from: image)
} catch {
    print(error)
}

Display a MTIImage

let imageView = MTIImageView(frame: self.view.bounds)

// You can optionally assign a `MTIContext` to the image view. If no context is assigned and `automaticallyCreatesContext` is set to `true` (the default value), a `MTIContext` is created automatically when the image view renders its content.
imageView.context = ...

imageView.image = image

If you'd like to move the GPU command encoding process out of the main thread, you can use a MTIThreadSafeImageView. You may assign a MTIImage to a MTIThreadSafeImageView in any thread.
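
A minimal sketch, assuming an existing filter whose output you want to display:

let threadSafeImageView = MTIThreadSafeImageView(frame: self.view.bounds)

DispatchQueue.global(qos: .userInitiated).async {
    // The image can be assigned from any thread; command encoding is moved off the main thread.
    threadSafeImageView.image = filter.outputImage
}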

Connect Filters (Swift)

MetalPetal has a type-safe Swift API for connecting filters. You can use the => operator in the FilterGraph.makeImage function to connect filters and get the output image.

Here are some examples:

let image = try? FilterGraph.makeImage { output in
    inputImage => saturationFilter => exposureFilter => output
}
let image = try? FilterGraph.makeImage { output in
    inputImage => saturationFilter => exposureFilter => contrastFilter => blendFilter.inputPorts.inputImage
    exposureFilter => blendFilter.inputPorts.inputBackgroundImage
    blendFilter => output
}
  • You can connect unary filters (MTIUnaryFilter) directly using =>.

  • For a filter with multiple inputs, you need to connect to one of its inputPorts.

  • The => operator only works in the FilterGraph.makeImage method.

  • One and only one filter's output can be connected to output.

Process Video Files

Working with AVPlayer:

let context = try MTIContext(device: device)
let asset = AVAsset(url: videoURL)
let composition = MTIVideoComposition(asset: asset, context: context, queue: DispatchQueue.main, filter: { request in
    return FilterGraph.makeImage { output in
        request.anySourceImage! => filterA => filterB => output
    }!
})

let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition.makeAVVideoComposition()
player.replaceCurrentItem(with: playerItem)
player.play()

Export a video:

VideoIO is required for the following examples.

import VideoIO

var configuration = AssetExportSession.Configuration(fileType: .mp4, videoSettings: .h264(videoSize: composition.renderSize), audioSettings: .aac(channels: 2, sampleRate: 44100, bitRate: 128 * 1000))
configuration.videoComposition = composition.makeAVVideoComposition()
self.exporter = try! AssetExportSession(asset: asset, outputURL: outputURL, configuration: configuration)
exporter.export(progress: { progress in
    
}, completion: { error in
    
})

Process Live Video (with VideoIO)

VideoIO is required for this example.

import VideoIO

// Setup Image View
let imageView = MTIImageView(frame: self.view.bounds)
...

// Setup Camera
let camera = Camera(captureSessionPreset: .hd1920x1080, configurator: .portraitFrontMirroredVideoOutput)
try camera.enableVideoDataOutput(on: DispatchQueue.main, delegate: self)
camera.videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]

...

// AVCaptureVideoDataOutputSampleBufferDelegate

let filter = MTIColorInvertFilter()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let inputImage = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)
    filter.inputImage = inputImage
    self.imageView.image = filter.outputImage
}

Please refer to the CameraFilterView.swift in the example project for more about previewing and recording filtered live video.

Best Practices

  • Reuse a MTIContext whenever possible.

    Contexts are heavyweight objects, so if you do create one, do so as early as possible, and reuse it each time you need to render an image.

  • Use MTIImage.cachePolicy wisely.

    Use MTIImageCachePolicyTransient when you do not want to preserve the render result of an image, for example when the image is just an intermediate result in a filter chain, so that the underlying texture of the render result can be reused. This is the most memory-efficient option. However, when you ask the context to render a previously rendered image, it may re-render that image since its underlying texture has been reused.

    By default, a filter's output image has the transient policy.

    Use MTIImageCachePolicyPersistent when you want to prevent the underlying texture from being reused.

    By default, images created from external sources have the persistent policy.
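
    A minimal sketch of opting into the persistent policy, assuming MTIImage's withCachePolicy(_:) API (imageWithCachePolicy: in Objective-C) and an existing filter, context, and pixelBuffer:

    // Assumption: withCachePolicy(_:) is the Swift mapping of imageWithCachePolicy:.
    if let persistentImage = filter.outputImage?.withCachePolicy(.persistent) {
        // Rendering this image keeps its underlying texture around for later reuse.
        try? context.render(persistentImage, to: pixelBuffer)
    }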

  • Understand that MTIFilter.outputImage is a computed property.

    Each time you ask a filter for its output image, the filter may give you a new output image object even if the inputs are identical to those of the previous call. So reuse output images whenever possible.

    For example,

    //          ╭→ filterB
    // filterA ─┤
    //          ╰→ filterC
    // 
    // filterB and filterC use filterA's output as their input.

    In this situation, the following solution:

    let filterOutputImage = filterA.outputImage
    filterB.inputImage = filterOutputImage
    filterC.inputImage = filterOutputImage

    is better than:

    filterB.inputImage = filterA.outputImage
    filterC.inputImage = filterA.outputImage

Build Custom Filter

If you want to include the MTIShaderLib.h in your .metal file, you need to add the path of MTIShaderLib.h file to the Metal Compiler - Header Search Paths (MTL_HEADER_SEARCH_PATHS) setting.

For example, if you use CocoaPods, you can set the MTL_HEADER_SEARCH_PATHS to ${PODS_CONFIGURATION_BUILD_DIR}/MetalPetal/MetalPetal.framework/Headers or ${PODS_ROOT}/MetalPetal/Frameworks/MetalPetal/Shaders. If you use Swift Package Manager, set the MTL_HEADER_SEARCH_PATHS to $(HEADER_SEARCH_PATHS).

Shader Function Arguments Encoding

MetalPetal has a built-in mechanism to encode shader function arguments for you. You can pass the shader function arguments as name: value dictionaries to the MTIRenderPipelineKernel.apply(toInputImages:parameters:outputDescriptors:), MTIRenderCommand(kernel:geometry:images:parameters:), etc.

For example, the parameter dictionary for the metal function vibranceAdjust can be:

// Swift
let amount: Float = 1.0
let vibranceVector = float4(1, 1, 1, 1)
let parameters = ["amount": amount,
                  "vibranceVector": MTIVector(value: vibranceVector),
                  "avoidsSaturatingSkinTones": true,
                  "grayColorTransform": MTIVector(value: float3(0,0,0))]
// vibranceAdjust metal function
fragment float4 vibranceAdjust(...,
                constant float & amount [[ buffer(0) ]],
                constant float4 & vibranceVector [[ buffer(1) ]],
                constant bool & avoidsSaturatingSkinTones [[ buffer(2) ]],
                constant float3 & grayColorTransform [[ buffer(3) ]])
{
    ...
}

The shader function argument types and the corresponding types to use in a parameter dictionary are listed below.

Shader Function Argument Type                | Swift                                     | Objective-C
float                                        | Float                                     | float
int                                          | Int32                                     | int
uint                                         | UInt32                                    | uint
bool                                         | Bool                                      | bool
simd (float2, float4, float4x4, int4, etc.)  | simd (with MetalPetal/Swift) / MTIVector  | MTIVector
struct                                       | Data / MTIDataBuffer                      | NSData / MTIDataBuffer
other (float *, struct *, etc.), immutable   | Data / MTIDataBuffer                      | NSData / MTIDataBuffer
other (float *, struct *, etc.), mutable     | MTIDataBuffer                             | MTIDataBuffer

Simple Single Input / Output Filters

To build a custom unary filter, you can subclass MTIUnaryImageRenderingFilter and override the methods in the SubclassingHooks category. Examples: MTIPixellateFilter, MTIVibranceFilter, MTIUnpremultiplyAlphaFilter, MTIPremultiplyAlphaFilter, etc.

//Objective-C

@interface MTIPixellateFilter : MTIUnaryImageRenderingFilter

@property (nonatomic) float fractionalWidthOfAPixel;

@end

@implementation MTIPixellateFilter

- (instancetype)init {
    if (self = [super init]) {
        _fractionalWidthOfAPixel = 0.05;
    }
    return self;
}

+ (MTIFunctionDescriptor *)fragmentFunctionDescriptor {
    return [[MTIFunctionDescriptor alloc] initWithName:@"pixellateEffect" libraryURL:[bundle URLForResource:@"default" withExtension:@"metallib"]];
}

- (NSDictionary<NSString *,id> *)parameters {
    return @{@"fractionalWidthOfAPixel": @(self.fractionalWidthOfAPixel)};
}

@end
//Swift

class MTIPixellateFilter: MTIUnaryImageRenderingFilter {
    
    var fractionalWidthOfAPixel: Float = 0.05

    override var parameters: [String : Any] {
        return ["fractionalWidthOfAPixel": fractionalWidthOfAPixel]
    }
    
    override class func fragmentFunctionDescriptor() -> MTIFunctionDescriptor {
        return MTIFunctionDescriptor(name: "pixellateEffect", libraryURL: MTIDefaultLibraryURLForBundle(Bundle.main))
    }
}
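
A minimal usage sketch for the custom filter above; inputImage is assumed to be an existing MTIImage:

let pixellateFilter = MTIPixellateFilter()
pixellateFilter.fractionalWidthOfAPixel = 0.02 // larger values produce bigger "pixels"
pixellateFilter.inputImage = inputImage
let pixellatedImage = pixellateFilter.outputImage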

Fully Custom Filters

To build more complex filters, all you need to do is create a kernel (MTIRenderPipelineKernel/MTIComputePipelineKernel/MTIMPSKernel), then apply the kernel to the input image(s). Examples: MTIChromaKeyBlendFilter, MTIBlendWithMaskFilter, MTIColorLookupFilter, etc.

@interface MTIChromaKeyBlendFilter : NSObject <MTIFilter>

@property (nonatomic, strong, nullable) MTIImage *inputImage;

@property (nonatomic, strong, nullable) MTIImage *inputBackgroundImage;

@property (nonatomic) float thresholdSensitivity;

@property (nonatomic) float smoothing;

@property (nonatomic) MTIColor color;

@end

@implementation MTIChromaKeyBlendFilter

@synthesize outputPixelFormat = _outputPixelFormat;

+ (MTIRenderPipelineKernel *)kernel {
    static MTIRenderPipelineKernel *kernel;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        kernel = [[MTIRenderPipelineKernel alloc] initWithVertexFunctionDescriptor:[[MTIFunctionDescriptor alloc] initWithName:MTIFilterPassthroughVertexFunctionName] fragmentFunctionDescriptor:[[MTIFunctionDescriptor alloc] initWithName:@"chromaKeyBlend"]];
    });
    return kernel;
}

- (instancetype)init {
    if (self = [super init]) {
        _thresholdSensitivity = 0.4;
        _smoothing = 0.1;
        _color = MTIColorMake(0.0, 1.0, 0.0, 1.0);
    }
    return self;
}

- (MTIImage *)outputImage {
    if (!self.inputImage || !self.inputBackgroundImage) {
        return nil;
    }
    return [self.class.kernel applyToInputImages:@[self.inputImage, self.inputBackgroundImage]
                                      parameters:@{@"color": [MTIVector vectorWithFloat4:(simd_float4){self.color.red, self.color.green, self.color.blue,self.color.alpha}],
                                    @"thresholdSensitivity": @(self.thresholdSensitivity),
                                               @"smoothing": @(self.smoothing)}
                         outputTextureDimensions:MTITextureDimensionsMake2DFromCGSize(self.inputImage.size)
                               outputPixelFormat:self.outputPixelFormat];
}

@end
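
A minimal usage sketch for the filter above, written in Swift; foregroundImage and backgroundImage are assumed to be existing MTIImages:

let chromaKeyFilter = MTIChromaKeyBlendFilter()
chromaKeyFilter.inputImage = foregroundImage
chromaKeyFilter.inputBackgroundImage = backgroundImage
chromaKeyFilter.color = MTIColor(red: 0, green: 1, blue: 0, alpha: 1) // key out green
chromaKeyFilter.thresholdSensitivity = 0.4
chromaKeyFilter.smoothing = 0.1
let composited = chromaKeyFilter.outputImage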

Multiple Draw Calls in One Render Pass

You can use MTIRenderCommand to issue multiple draw calls in one render pass.

// Create a draw call with kernelA, geometryA, and imageA.
let renderCommandA = MTIRenderCommand(kernel: self.kernelA, geometry: self.geometryA, images: [imageA], parameters: [:])

// Create a draw call with kernelB, geometryB, and imageB.
let renderCommandB = MTIRenderCommand(kernel: self.kernelB, geometry: self.geometryB, images: [imageB], parameters: [:])

// Create an output descriptor
let outputDescriptor = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)

// Get the output images, the output image count is equal to the output descriptor count.
let images = MTIRenderCommand.images(byPerforming: [renderCommandA, renderCommandB], outputDescriptors: [outputDescriptor])

You can also create multiple output descriptors to output multiple images in one render pass (MRT, see https://en.wikipedia.org/wiki/Multiple_Render_Targets), as sketched below.
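
A sketch of a single draw call writing to two render targets. Here mrtKernel, geometry, inputImage, outputWidth, and outputHeight are assumptions; the kernel's fragment function must declare two color outputs for this to work:

let mrtDescriptorA = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)
let mrtDescriptorB = MTIRenderPassOutputDescriptor(dimensions: MTITextureDimensions(width: outputWidth, height: outputHeight, depth: 1), pixelFormat: .bgra8Unorm, loadAction: .clear, storeAction: .store)

// One render command, two output descriptors: two output images from a single render pass.
let mrtCommand = MTIRenderCommand(kernel: mrtKernel, geometry: geometry, images: [inputImage], parameters: [:])
let mrtOutputs = MTIRenderCommand.images(byPerforming: [mrtCommand], outputDescriptors: [mrtDescriptorA, mrtDescriptorB])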

Custom Vertex Data

When MTIVertex cannot fit your needs, you can implement the MTIGeometry protocol to provide your custom vertex data to the command encoder.

Use the MTIRenderCommand API to issue draw calls and pass your custom MTIGeometry.

Custom Processing Module

In rare scenarios, you may want to access the underlying texture directly, use multiple MPS kernels in one render pass, do 3D rendering, or encode the render commands yourself.

MTIImagePromise protocol provides direct access to the underlying texture and the render context for a step in MetalPetal.

You can create new input sources or fully custom processing units by implementing the MTIImagePromise protocol. You will need to import an additional module to do so.

Objective-C

@import MetalPetal.Extension;

Swift

// CocoaPods
import MetalPetal.Extension

// Swift Package Manager
import MetalPetalObjectiveC.Extension

See the implementation of MTIComputePipelineKernel, MTICLAHELUTRecipe or MTIImage for example.

Alpha Types

If an alpha channel is used in an image, there are two common representations that are available: unpremultiplied (straight/unassociated) alpha, and premultiplied (associated) alpha.

With unpremultiplied alpha, the RGB components represent the color of the pixel, disregarding its opacity.

With premultiplied alpha, the RGB components represent the color of the pixel, adjusted for its opacity by multiplication.

MetalPetal handles alpha type explicitly. You are responsible for providing the correct alpha type during image creation.

There are three alpha types in MetalPetal.

MTIAlphaType.nonPremultiplied: the alpha value in the image is not premultiplied.

MTIAlphaType.premultiplied: the alpha value in the image is premultiplied.

MTIAlphaType.alphaIsOne: there's no alpha channel in the image or the image is opaque.

Typically, CGImage, CVPixelBuffer and CIImage objects have premultiplied alpha channels. MTIAlphaType.alphaIsOne is strongly recommended if the image is opaque, e.g. a CVPixelBuffer from camera feed, or a CGImage loaded from a jpg file.

You can call unpremultiplyingAlpha() or premultiplyingAlpha() on a MTIImage to convert the alpha type of the image.

For performance reasons, alpha type validation only happens in debug builds.

Alpha Handling of Built-in Filters

  • Most of the filters in MetalPetal accept unpremultiplied alpha and opaque images and output unpremultiplied alpha images.

  • Filters with an outputAlphaType property accept inputs of all alpha types, and you can use outputAlphaType to specify the alpha type of the output image (see the sketch after this list).

    e.g. MTIBlendFilter, MTIMultilayerCompositingFilter, MTICoreImageUnaryFilter, MTIRGBColorSpaceConversionFilter

  • Filters that do not actually modify colors have a passthrough alpha handling rule, which means the alpha type of the output image is the same as that of the input image.

    e.g. MTITransformFilter, MTICropFilter, MTIPixellateFilter, MTIBulgeDistortionFilter
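
A sketch of the outputAlphaType usage mentioned above, assuming MTIBlendFilter exposes this property in Swift and that foregroundImage and backgroundImage are existing MTIImages:

let blendFilter = MTIBlendFilter(blendMode: .multiply)
blendFilter.inputImage = foregroundImage           // any alpha type is accepted
blendFilter.inputBackgroundImage = backgroundImage
blendFilter.outputAlphaType = .premultiplied       // request a premultiplied-alpha output
let blended = blendFilter.outputImage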

For more about alpha types and alpha compositing, please refer to this amazing interactive article by Bartosz Ciechanowski.

Color Spaces

Color spaces are vital for image processing. The numeric values of the red, green, and blue components have no meaning without a color space.

Before continuing on how MetalPetal handles color spaces, you may want to know what a color space is and how it affects the representation of color values. There are many articles on the web explaining color spaces; a good starting point is Color Spaces by Bartosz Ciechanowski.

Different software and frameworks handle color spaces differently. For example, Photoshop uses sRGB IEC61966-2.1 as its default working color space, while Core Image uses a linear sRGB working color space by default.

Metal textures do not store any color space information with them. Most of the color space handling in MetalPetal happens during the input (MTIImage(...)) and the output (MTIContext.render...) of image data.

Color Spaces for Inputs

Specifying a color space for an input means that MetalPetal should convert the source color values to the specified color space during the creation of the texture.

  • When loading from a URL or CGImage, you can specify which color space you'd like the texture data to be in, using MTICGImageLoadingOptions. If you do not specify any options when loading an image, the device RGB color space is used (MTICGImageLoadingOptions.default). A nil color space disables color matching; this is the equivalent of creating MTICGImageLoadingOptions with the color space of the input image. If the model of the specified color space is not RGB, the device RGB color space is used as a fallback. (A sketch follows this list.)

  • When loading from CIImage, you can specify which color space you'd like the texture data to be in, using MTICIImageRenderingOptions. If you do not specify any options when loading a CIImage, the device RGB color space is used (MTICIImageRenderingOptions.default). A nil color space disables color matching; color values are loaded in the working color space of the CIContext.
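
A sketch of specifying an input color space. The MTICGImageLoadingOptions(colorSpace:) initializer and the MTIImage initializer taking these options are assumptions here; cgImage is an existing CGImage:

let p3ColorSpace = CGColorSpace(name: CGColorSpace.displayP3)
let loadingOptions = MTICGImageLoadingOptions(colorSpace: p3ColorSpace) // assumed initializer
let matchedImage = MTIImage(cgImage: cgImage, options: loadingOptions, isOpaque: true) // assumed to accept MTICGImageLoadingOptions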

Color Spaces for Outputs

When specifying a color space for an output, the color space serves more as a tag that tells the rest of the system how to interpret the color values in the output. No actual color space conversion is performed.

  • You can specify the color space of an output CGImage using MTIContext.makeCGImage... or MTIContext.startTaskTo... methods with a colorSpace parameter.

  • You can specify the color space of an output CIImage using MTICIImageCreationOptions.

MetalPetal assumes that the output color values are in device RGB color space when no output color space is specified.

Color Spaces for CVPixelBuffer

MetalPetal uses CVMetalTextureCache and IOSurface to directly map CVPixelBuffers to Metal textures, so you cannot specify a color space for loading from or rendering to a CVPixelBuffer. However, you can specify whether to use a texture with an sRGB pixel format for the mapping.

In Metal, if a pixel format name has the _sRGB suffix, sRGB gamma compression and decompression are applied when reading and writing color values in that format. That means a texture with an _sRGB pixel format assumes the color values it stores are sRGB gamma corrected; when the color values are read in a shader, an sRGB-to-linear conversion is performed, and when the color values are written in a shader, a linear-to-sRGB conversion is performed.

Color Space Conversions

You can use MTIRGBColorSpaceConversionFilter to perform color space conversions. Color space conversion functions are also available in MTIShaderLib.h.

  • metalpetal::sRGBToLinear (sRGB IEC61966-2.1 to linear sRGB)
  • metalpetal::linearToSRGB (linear sRGB to sRGB IEC61966-2.1)
  • metalpetal::linearToITUR709 (linear sRGB to ITU-R 709)
  • metalpetal::ITUR709ToLinear (ITU-R 709 to linear sRGB)

Extensions

Working with SceneKit

You can use MTISCNSceneRenderer to generate MTIImages from a SCNScene. You may want to handle the SceneKit renderer's linear RGB color space, see issue #76 The image from SceneKit is darker than normal.

Working with SpriteKit

You can use MTISKSceneRenderer to generate MTIImages from a SKScene.

Working with Core Image

You can create MTIImages from CIImages.

You can render a MTIImage to a CIImage using a MTIContext.

You can use a CIFilter directly with MTICoreImageKernel or the MTICoreImageUnaryFilter class. (Swift Only)

Working with JavaScript

See MetalPetalJS

With MetalPetalJS you can create render pipelines and filters using JavaScript, making it possible to download your filters/renderers from "the cloud".

Texture Loader

It is recommended that you use APIs that accept MTICGImageLoadingOptions to load CGImages and images from URL, instead of using APIs that accept MTKTextureLoaderOption.

When you use APIs that accept MTKTextureLoaderOption, MetalPetal, by default, uses MTIDefaultTextureLoader to load CGImages, images from URL, and named images. MTIDefaultTextureLoader uses MTKTextureLoader internally and has some workarounds for MTKTextureLoader's inconsistencies and bugs at a small performance cost. You can also create your own texture loader by implementing the MTITextureLoader protocol. Then assign your texture loader class to MTIContextOptions.textureLoaderClass when creating a MTIContext.
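
For example, a sketch of plugging in a custom loader; MyTextureLoader is a hypothetical class conforming to MTITextureLoader:

let contextOptions = MTIContextOptions()
contextOptions.textureLoaderClass = MyTextureLoader.self // hypothetical MTITextureLoader implementation
if let device = MTLCreateSystemDefaultDevice() {
    let context = try? MTIContext(device: device, options: contextOptions)
}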

Install

CocoaPods

You can use CocoaPods to install the latest version.

use_frameworks!

pod 'MetalPetal'

# Required if you are using Swift.
pod 'MetalPetal/Swift'

# Recommended if you'd like to run MetalPetal on Apple silicon Macs.
pod 'MetalPetal/AppleSilicon'

Sub-pod Swift

Provides Swift-specific additions and modifications to the Objective-C APIs to improve their mapping into Swift. Highly recommended if you are using Swift.

Sub-pod AppleSilicon

Provides the default shader library compiled in Metal Shading Language v2.3 which is required for enabling programmable blending support on Apple silicon Macs.

Swift Package Manager

Adding Package Dependencies to Your App

iOS Simulator Support

MetalPetal can run on Simulator with Xcode 11+ and macOS 10.15+.

MetalPerformanceShaders.framework is not available on the Simulator, so filters that rely on MetalPerformanceShaders, such as MTIMPSGaussianBlurFilter and MTICLAHEFilter, do not work.

The Simulator supports fewer features and has different implementation limits than an actual Apple GPU. See Developing Metal Apps that Run in Simulator for details.

Quick Look Debug Support

If you do a Quick Look on a MTIImage, it'll show you the image graph that you constructed to produce that image.

Quick Look Debug Preview

Trivia

Why Objective-C?

Contribute

Thank you for considering contributing to MetalPetal. Please read our Contributing Guidelines.

License

MetalPetal is MIT-licensed. LICENSE

The files in the /MetalPetalExamples directory are licensed under a separate license. LICENSE.md

Documentation is licensed CC-BY-4.0.


metalpetal's Issues

LookUpFilters look awful using MetalPetal

Maybe I'm doing something wrong. Please check the two images below. The first image is made with GPUImage2 and the second with MetalPetal. Why is the difference so huge?


I was using THE SAME FILTER for both pictures.
Also, I tried BlendMode ColorLookup512x512 and simple Color Lookup.

guard let image = UIImage(named: "IMG_1506.jpg") else { return }
guard let cgImage = image.cgImage else { return }
guard let image2 = UIImage(named: "lookup_GENIJUS.png") else { return }
guard let cgImage1 = image2.cgImage else { return }

let filter = MTIBlendFilter(blendMode: MTIBlendMode.colorLookup512x512)

filter.inputBackgroundImage = MTIImage(cgImage: cgImage)
filter.inputImage = MTIImage(cgImage: cgImage1)

if let device = MTLCreateSystemDefaultDevice(),
    let outputImage = filter.outputImage {
    do {
        let context = try MTIContext(device: device)
        let filteredImage = try context.makeCGImage(from: outputImage)
        image1.image = UIImage(cgImage: filteredImage)
    } catch {
        print(error)
    }
}

Images look washed out when rendering to MTKView

For some reason, rendering images to the MTKView makes them look washed out. I've tried setting the workingPixelFormat to various things including sRGB. I've also tried setting the colorPixelFormat to different formats to see if that makes a difference. In fact, setting the colorPixelFormat on the MTKView to an sRGB format makes it even more washed out. It definitely has something to do with rendering to the MTKView, since if I just get an output image and put it into a UIImageView it looks fine. I'm wondering how to fix the washed out/desaturated colours when rendering to an MTKView?

Here's a look at what the problem is.

The first one is how the image should look; the second is the image rendered into an MTKView using an MTIDrawableRenderingRequest. I'm hoping there's a simple fix here...

Running on iOS 10 simulator issue

I know that iOS simulator doesn't support Metal.
But I need to test other features on the iOS 10 simulator and the app won't start.
On the iOS 11 and iOS 12 simulators the app starts.

I get this error:

dyld: Library not loaded: /System/Library/Frameworks/MetalPerformanceShaders.framework/MetalPerformanceShaders
  Referenced from: /Users/Igor/Library/Developer/CoreSimulator/Devices/D3924F1D-AC68-4A71-BBDB-710DDB67948D/data/Containers/Bundle/Application/3A2F03A1-17E4-4128-B1D4-067A9C66B230/Unfold.app/Frameworks/MetalPetal.framework/MetalPetal
  Reason: no suitable image found.  Did find:
	/System/Library/Frameworks/MetalPerformanceShaders.framework/MetalPerformanceShaders: mach-o, but not built for iOS simulator

How can I fix this?

How to get aspect MTIImage ?

Sorry, I'm a newbie with MetalPetal.
Currently I can get an aspect MTIImage very easily if I render the texture from the MTKView,
but for some reasons I'd like to get an aspect MTIImage directly. How should I do that?

Support Photoshop curves with MTIRGBToneCurveFilter

It looks like the MTIRGBToneCurveFilter is very similar to GPUImage's curveToneFilter, but it's not clear to me how to generate the MTIVectors needed for it to work. Are there any plans to include a Photoshop curves (.acv) parser in the future?

App crashing when loading filters

Good evening everyone.

Since my last topic was closed, I'm creating a new one with additional information.
Here's the link to my previous topic with the same problem: https://github.com/MetalPetal/MetalPetal/issues/38

I have a photo editing application which is based on the MetalPetal framework. The main question is why this application crashes for people who have an iPhone 7, iPhone 8, or better, but for my iPhone SE and my friend's iPhone 6 it works just great.

UPDATE:
Now I can tell you that this problem is 100% not because of the memory. I simply can't even IMAGINE why the application crashes on an iPhone 7 but works great on my iPhone SE.

So now I will tell you all the details to make sure you understand how everything works.
First of all, the user picks an image from the camera roll. Then he sends it to another screen, which is the main screen (in my case a view controller). There's one imageView, which is the main one, and 9 buttons. When the user clicks on a button, a filter is applied to a UIImage (you can see the example image1, which is a UIImage) and then this UIImage is applied to the imageView (which in my case is called "pagrindinis"). (Also you can notice that I'm checking whether the user clicks on the filter for the first time or not, just to save memory: when the user clicks on one filter and then clicks it again, there's no need to apply the filter again because we already have the UIImage. But that's not important I guess.)

So, the problem is that somehow something is not working when I'm trying to apply the filter to the UIImage. (You can see in the error that the application found nil while unwrapping.)

Maybe now there's a chance to help me? I would appreciate any help because you guys rock!

Below are a screenshot of my error (first image) and the code where I apply the image. Also, I will add code snippets to make it easier to understand.


My code snippet where I apply filter with MetalPetal:
func pirmas() {
    guard let thumbnail2 = original else { return } // "original" is my UIImage from camera roll
    guard let cgImage = thumbnail2.cgImage else { return }
    guard let lookupimage1 = UIImage(named: "lookup_amatorka.png") else { return } // my lookup
    guard let cgImage1 = lookupimage1.cgImage else { return }
    let filter1 = MTIColorLookupFilter()

    filter1.inputImage = MTIImage(cgImage: cgImage, options: [MTKTextureLoader.Option.SRGB: false])
    filter1.inputColorLookupTable = MTIImage(cgImage: cgImage1, options: [MTKTextureLoader.Option.SRGB: false])
    filter1.intensity = 0.6

    if let device = MTLCreateSystemDefaultDevice(),
        let outputImage = filter1.outputImage { //cia
        do {
            let context = try MTIContext(device: device)
            let filteredImage = try context.makeCGImage(from: outputImage)
            image1 = UIImage(cgImage: filteredImage) // here's the UIImage which I use later
        } catch {
            print(error)
        }
    }
}

Here you can see the code snippet where I apply the filter when the user clicks on the button:

@IBAction func teal1(_ sender: Any) {
    if firstTime == false {
        pirmas() // Here I call the function which you can see above
        firstTime = true
    }
    self.pagrindinis.image = image1! // Here I get the ERROR that my UIImage is nil
    pavadinimas = 1
}

Someone please tell me why this solution is working with iPhone SE and iPhone 6 but not with iPhone 7 and higher.

If you need more information please tell me. I will tell you everything.

Thank you so much!!!

How to use MetalPetal in realtime

Now that Apple is releasing iOS 12 and macOS Mojave and deprecating OpenGL ES, it is time to start using MetalPetal :) GPUImage won't be an option now.

I wonder what the best guide is for using MetalPetal in a realtime rendering setting, to be able to take pictures and video in high quality.

:)

Memory always increases in MTIWeakToStrongObjectsMapTable

Hi, I use MetalPetal to process an image and then render it to a single CVPixelBuffer (reused).
It works fine, but memory seems to increase by a very tiny amount (it only goes back down a few times).
I've profiled my app; the root cause looks like the NSPointerArray in MTIWeakToStrongObjectsMapTable.
If I run my app for 2 hours, memory might increase by 3-6 MB.
How do I reduce the size of MTIWeakToStrongObjectsMapTable in an efficient way?
Because this will run in an App Extension, memory usage is restricted.

try self.context?.render(outputImage, to: renderBuffer) <=== root cause ?

public func render(_ sampleBuffer: CMSampleBuffer) {
    let now = Date().timeIntervalSince1970
    if now - self.lastRenderTime < 0.03 { return }
    self.lastRenderTime = now
    self.renderQueue.async {
        autoreleasepool {
            guard let renderBuffer = self.renderBuffer, let inputImage = self.inputImage(sampleBuffer), let outputImage = self.processImage(inputImage) else { return }
            self.lockQueue.sync {
                autoreleasepool {
                    do {
                        try self.context?.render(outputImage, to: renderBuffer) <=== root cause ?
                    }
                    catch {
                        print("ReplayScreen render fail")
                    }
                }
            }
        }
    }
}

fileprivate func inputImage(_ sampleBuffer: CMSampleBuffer) -> MTIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let orientationKey = CMGetAttachment(sampleBuffer, RPVideoSampleOrientationKey as CFString, nil) as? UInt32
    let render_orientation = CGImagePropertyOrientation(rawValue: orientationKey ?? 0) ?? .up
    let inputImage = MTIUnaryImageRenderingFilter.image(byProcessingImage: MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne), orientation: MTIImageOrientation(cgImagePropertyOrientation: render_orientation), parameters: [:], outputPixelFormat: .invalid)
    return inputImage
}

fileprivate func processImage(_ inputImage: MTIImage) -> MTIImage? {
    let render_size = inputImage.size
    let aspected_rect = AVMakeRect(aspectRatio: render_size, insideRect: CGRect(origin: CGPoint.zero, size: self.size))
    let size = aspected_rect.size
    let position = CGPoint(x: aspected_rect.midX, y: aspected_rect.midY)
    self.compositingFilter.inputBackgroundImage = MTIImage(color: MTIColor.init(red: 0, green: 0, blue: 0, alpha: 1), sRGB: false, size: self.size)
    self.compositingFilter.layers = [
        MTILayer(content: inputImage, layoutUnit: .pixel, position: position, size: size, rotation: 0, opacity: 1, blendMode: .normal)
    ]
    return self.compositingFilter.outputImage
}

Image rendered from Camera Roll photo in wrong orientation

I'm getting different results rendering an image locally versus from the camera roll. Very strange; I've confirmed that they both show correctly in a standard UIImageView.

Here's the code to create and render MTIImage

// IMG_0059.jpg is camera roll photo exported as jpg format
let testImage = UIImage(named: "IMG_0059.jpg")!
self.testMTIImage = MTIImage(cgImage: testImage.cgImage!, options: [MTKTextureLoader.Option.SRGB: false], isOpaque: false)

public func draw(in view: MTKView) {
        do {
            try autoreleasepool {
                try self.context?.render(testMTIImage, toDrawableWithRequest: renderRequest)
            }
        } catch {
            print(error)
        }
    }

When the UIImage is loaded locally, the result is in the correct orientation, like this:

When the UIImage is loaded from the camera roll, like this,
self.testMTIImage = MTIImage(cgImage: /**UIImage from camera roll*/, options: [MTKTextureLoader.Option.SRGB: false], isOpaque: false)
the result is rotated:

Add lookup filter

Hey,

I'm curious how to make a lookup filter work with MetalPetal.

I was using GPUImage2 before, and this is how my code looks in it. I want to do the same thing with MetalPetal.

let lookupFilter12 = LookupFilter()

let pictureInput12 = PictureInput(image: image!) // My UIImage!

lookupFilter12.lookupImage = PictureInput(imageName: "lookup.png") // My lookup filter PNG!

let pictureOutput12 = PictureOutput()
pictureOutput12.imageAvailableCallback = { image in
    self.image1 = image
}
pictureInput12 --> lookupFilter12 --> pictureOutput12
pictureInput12.processImage(synchronously: true)

Crashing when applying multiple filters

The only problem I have now is that I can't apply two built-in filters on the same photo at the same time.
I think it's just that I'm doing something wrong. Here's how I do it:

guard let originalPhotoEffectOne = original else { return }
guard let cgImage = originalPhotoEffectOne.cgImage else { return }
let filter1 = MTISaturationFilter()
filter1.saturation = 0
filter1.inputImage = MTIImage(cgImage: cgImage, options: [MTKTextureLoader.Option.SRGB: false])
if let device = MTLCreateSystemDefaultDevice(),
    let outputImage = filter1.outputImage {
    do {
        let context = try MTIContext(device: device)
        let filteredImage = try context.makeCGImage(from: outputImage)
        let image1: UIImage? = UIImage(cgImage: filteredImage)
    } catch {
        print(error)
    }
}

guard let photoEffectTwo = image1 else { return }
guard let cgImage2 = photoEffectOne.cgImage else { return }
let filter2 = MTIExposureFilter()
filter2.exposure = 0.8
filter2.inputImage = MTIImage(cgImage: cgImage2, options: [MTKTextureLoader.Option.SRGB: false]) // cia

if let device = MTLCreateSystemDefaultDevice(),
    let outputImage = filter2.outputImage { //cia
    do {
        let context = try MTIContext(device: device)
        let filteredImage = try context.makeCGImage(from: outputImage)
        let image1: UIImage? = UIImage(cgImage: filteredImage)
    } catch {
        print(error)
    }
}

How do I get rotated MTIImage ?

let device = MTLCreateSystemDefaultDevice()!
let option = MTIContextOptions()
let context = try! MTIContext(device: device, options: option)
let transformFilter = MTITransformFilter()
let angle = CGFloat(90 / 180 * Double.pi)
let tranform = CATransform3DMakeRotation(angle, 0, 0, 1)
transformFilter.transform = tranform
transformFilter.inputImage = MTIImage(cgImage: image.cgImage!, options: [.SRGB: false], alphaType: .alphaIsOne)
let outputImage = transformFilter.outputImage

Assume the size of inputImage is 720 × 1280. I expected the size of outputImage to become 1280 × 720, but it's still the same size as inputImage.

How do I get a rotated outputImage?

How to access the drawInMTKView cycle from MTIImageView class

In your demo "ImageRendererViewController.m" you use the method draw(in view: MTKView) in order to re-draw the filtered image and have an animation (i.e. saturationAndInvertTestOutputImage()). For that to work, you use an MTKView.

But in the CameraViewController.m you use MTIImageView for rendering the frames by simply assigning the .image var to it.

I want to use MTIImageView both for camera frames and static images. But in order to animate my static images on every draw cycle, I need to access the drawInMTKView method somehow.

Is it possible to add a callback or delegate method to MTIImageView so we can subscribe to each draw cycle? :)

Crash on Applying Overlay Filter

I am trying to apply an Overlay filter on a video with MetalPetal. Here's my code snippet.

let mtiWatermarkImage = MTIImage(ciImage: waterMarkImage, isOpaque: false)
let mtiFilter = MTIBlendFilter(blendMode: .overlay)
mtiFilter.inputImage = mtiWatermarkImage

let vcom = AVMutableVideoComposition(asset: composition) { request in
    let source = MTIImage(ciImage: request.sourceImage, isOpaque: true)
    mtiFilter.inputBackgroundImage = source
    do {
        if let output = mtiFilter.outputImage {
            let image = try mtiContext.makeCIImage(from: output)
            request.finish(with: image, context: nil)
        }
    } catch let error {
        request.finish(with: error)
    }
}

It crashes and throws the following error

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: { BOOL canAcceptAlphaType = YES; for (MTIImage *image in command.images) { if (![command.kernel.alphaTypeHandlingRule canAcceptAlphaType:image.alphaType]) { canAcceptAlphaType = NO; break; } } canAcceptAlphaType; }'

Can anyone enlighten me, what's wrong ?
Thanks in advance.

Mac support?

Hi there,

I'm currently exploring my options for live-image filtering on the Mac. As it's a new project I want to avoid using GPUImage (due to OpenGL's deprecation) and go with a Metal-based solution instead.

MetalPetal is an awesome project. I have run the demo on iOS and am excited by its possibilities. However, I've been unable to install it as a dependency (via CocoaPods) in a macOS project, and can't see how to link the source manually; I get type-in-declaration errors when I try to.

I'm therefore assuming that Mac isn't supported out-of-the-box... I also notice UIKit references in MTIImageView.

Is there a way to get this going on Mac?

I'm returning to Swift development after a 4-year hiatus, so I'm probably not the person to update the library myself, but if it is currently usable on Mac, is anyone able to point me towards how I could achieve this?

No worries if it's not possible right now.

Thanks again!

EXC_BAD_ACCESS: Crash when applying custom CIKernel-Filters

I have been trying to write CIFilters which use a custom CIColorKernel underneath and apply the effect to the live camera feed. I bumped into this memory-access crash while trying to switch between two filters, specifically when one uses the alpha channel value in its return and the other uses a static 1.0 value as alpha.
The crash is happening here
[renderingContext.context.coreImageContext startTaskToRender:self.image fromRect:self.bounds toDestination:renderDestination atPoint:CGPointZero error:&error];
which is under MTIImagePromise.m, inside this function:
- (MTIImagePromiseRenderTarget *)resolveWithContext:(MTIImageRenderingContext *)renderingContext error:(NSError * __autoreleasing *)inOutError

My kernels are like this
1.

@"kernel vec4 tileKernel(__sample pixel, float tileSize)",
@"{",
@"    vec2 coord = destCoord();",
@"    float brightness = 1.0 - (mod(coord.y, tileSize) / tileSize);",
@"    brightness *= 1.0 - (mod(coord.x, tileSize) / tileSize);",
@"    return vec4(brightness * (pixel.rgb), pixel.a);",
@"}"

and
2.

@"kernel vec4 crtColor(__sample pixel, float height, float width)",
@"{",
@"    vec2 coord = destCoord();",
@"    int columnIndex = int(mod(coord.x / width, 3.0));",
@"    int rowIndex = int(mod(coord.y, height));",
@"    float scanlineMult = (rowIndex == 0 || rowIndex == 1) ? 0.3 : 1.0;",
@"    float red = (columnIndex == 0) ? pixel.r : pixel.r * 0.1;",
@"    float green = (columnIndex == 1) ? pixel.g : pixel.g * 0.1;",
@"    float blue = (columnIndex == 2) ? pixel.b : pixel.b * 0.1;",
@"    return vec4((vec3(red, green, blue) * scanlineMult), pixel.a);",
@"}"

If I use 1.0 instead of pixel.a in the return for the 2nd kernel, the memory crash disappears. Then if I also replace pixel.a with 1.0 in the return for the 1st kernel, the memory crash comes back!!
Might this be some issue with processing two different kernels in parallel, or something like that? Or are kernels supposed to be written like this? I am not sure, as I am just beginning to grasp the ideas of rendering with Metal and custom kernels.

App crashing when loading filters

All of the questions were answered by YuAo and all of the solutions worked for me so I will ask one more about technical stuff.

I have a photo editing application which is based on the MetalPetal framework. The main question is why this application crashes for people who have an iPhone 7, iPhone 8, or better, but for my iPhone SE and my friend's iPhone 6 it works just great. It crashes when the application starts loading 9 filters. I'm applying them to the same picture and then I'm keeping 9 UIImages. Maybe the problem is memory? On my main screen with filters, memory is around 100 MB for me (on iPhone SE).
For bigger screens it's 150 MB. I don't know about the iPhone 8 Plus.
I'm using unwind segues, so after editing memory is always back to normal, around 30-40 MB.

Is it possible to do something to make it work great for every device?

P.S. I'm using image compression tools. The size of the image is from 100 KB to 1 MB after compressing, and I edit only compressed images.

I don't know if it's necessary, but here's my code. (The result after MetalPetal is GREAT. The only problem is that it's crashing for some people...)
guard let thumbnail2 = original else { return }
guard let cgImage = thumbnail2.cgImage else { return }
guard let lookupimage1 = UIImage(named: "lookup_amatorka.png") else { return }
guard let cgImage1 = lookupimage1.cgImage else { return }
let filter1 = MTIColorLookupFilter()

filter1.inputImage = MTIImage(cgImage: cgImage, options: [MTKTextureLoader.Option.SRGB: false]) // cia
filter1.inputColorLookupTable = MTIImage(cgImage: cgImage1, options: [MTKTextureLoader.Option.SRGB: false]) // cia
filter1.intensity = 0.6 // cia

if let device = MTLCreateSystemDefaultDevice(),
    let outputImage = filter1.outputImage { //cia
    do {
        let context = try MTIContext(device: device)
        let filteredImage = try context.makeCGImage(from: outputImage)

        image1 = UIImage(cgImage: filteredImage)
        context.reclaimResources()
    } catch {
        print(error)
    }
}

Any help would be appreciated. I hope this question is normal.

Thank you.

deployment target should be iOS 8.0

Feature Request

As a fundamental framework based on Metal, MetalPetal should set its deployment target to iOS 8.0 to adapt to a wide range of business needs.

Here is some advice to improve compatibility and performance:

  • MTIImageView should be implemented using CAMetalLayer rather than MTKView.
  • Avoid using any MPS filters such as MPSImageGaussianBlur, because MPS filters require a specific iOS version and MTLFeatureSet. E.g. the iPhone 5s (A7, GPUFamily1) on iOS 9.0 does not support MPS. See Metal Feature Set Tables.
  • Avoid using any compute pipeline, because it has poor performance on the iPhone 5s running iOS 9.0.

Recorder error: Cannot call method when status is 0

Just press record and get error:

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetWriter finishWritingWithCompletionHandler:] Cannot call method when status is 0'

Using non-default samplers

I've been playing around with the lens blur filter and noticed that it tends to produce ugly artifacts near the image edges, especially when using a large blur radius.
Using samplers with their address mode set to 'mirror repeat' alleviates this.
However, simply setting the input image's sampler is insufficient, as the filter generates intermediate images which appear to fall back to the default sampler.
I tried two things:

  • explicitly 'forwarding' the input image's sampler in the filter to the intermediate images, i.e. calling imageWithSamplerDescriptor a bunch of times
  • replacing the samplers in the shader code by 'properly' configured ones

Both approaches worked, but maybe there's a better way?

Distortion and Histograms

Do you have any examples of distortion filters or drawing histograms in MetalPetal? I'm trying to translate filters from GPUImage and I'd like some examples of vertex shaders that move pixels about.

Implement MTIImageRenderingReceipt resolver

  1. inputs image receipt refcount ++
  2. resolve inputs
  3. build/reuse render pipeline state
  4. create/reuse render destination texture
  5. create/reuse render pass descriptor w/ Reflection
  6. create command encoder
  7. encode vertices/texture/sampler/parameters
  8. inputs image receipt refcount --
  9. return destination texture

CLAHE filter not working on older devices?

Hi,

I'm currently playing around with the CLAHE filter, plugged it into the camera part of the demo app, it's working nicely on a recent iPhone running iOS 12.
However, on older devices (iPad mini 2, still on iOS 10, and an iPhone 5s running 12.1.3) I'm hitting a runtime assertion.
In the resolveWithContext method in MTICLAHEFilter.m, the kernelState.histogramKernel property appears to be nil.
Backtracking to newKernelStateWithContext, the MPSImageHistogram allocation fails there.
When running a release build there's no runtime error, of course, and the filter isn't applied.
Please let me know if I can help you with any additional information.

Creating custom filter

I've been playing for a day or so and I can create custom filters using Swift, but it only works if I drag my custom Metal shader into the pod target. Is that the only way I can expose my shader to MetalPetal?

Bytes per row for initWithBitmapData assertion

Hello!

I had an old implementation of generating a video from a set of images using UIImages and CoreGraphics. I thought of improving performance by using Metal, and luckily stumbled upon your project!

The problem I'm facing is that a user picks a set of images from an ImagePicker and I get a set of PHAssets. I request the Data of these PHAssets so that I can use as little memory as possible; moreover, a MTIImage can be initialized with Data, given that you provide the width, height, bytesPerRow, pixelFormat and alphaInfo.

So I made this class:

class BitmapData {
	let data: Data
	let width: Int
	let height: Int

	var bytesPerRow: Int {
		return data.count / height
	}

	init(data: Data, width: Int, height: Int) {
		self.data = data
		self.width = width
		self.height = height
	}
} 

Now for bytesPerRow I realized there is an Assertion that:

NSParameterAssert(data.length == height * bytesPerRow);

As you can see from BitmapData, I made it into a computed variable to calculate that exactly. However, since this value must be an Int, it loses some precision if the result is a float/double, and the assertion is never met. So here I'm stuck wondering: is the image data I'm using wrong? Is the constructor not supposed to be used in this use case? Or am I supposed to fetch the data in a different way? If you could please shed some light on this, I would be very grateful. Thank you!

PS: You might be wondering why I don't initialize a CIImage and construct a MTIImage from the CIImage. This is because I want to conserve as much memory as possible, as I want to increase the number of pictures that get drawn on a video. So I thought constructing a MTIImage from Data is the best and most optimized way of achieving this. Please correct me if I'm wrong.

Support YUV CVPixelBuffer output

When using a non-BGRA color format (for example NV12 or I420), it is tedious to do the color format conversion. I hope MetalPetal can support this.

8 Warnings

Hi! I have 8 warnings like:
no rule to process file '/Frameworks/MetalPetal/Shaders/BlendingShaders.metal' of type 'sourcecode.metal' for architecture 'x86_64' (in target 'MetalPetal')

Could you fix this or provide any suggestions?

Why is MetalPetal written in Objective C ?

Hi, thank you for developing this amazing framework and making our lives way easier.

As I start moving to Metal because of the OpenGL ES deprecation by Apple, I looked into GPUImage3 and MetalPetal. But I realized that MetalPetal is written in Objective-C, and I wonder if there is a specific reason for that, since Swift is meant to be the future of Apple software development.

I can't help but wonder what may happen if Apple slowly switches to Swift-only apps in the years to come. Would it be wise to write such a complex (and big) framework as MetalPetal in Objective-C right now?

Are there any plans for a Swift rewrite?

This is just a general doubt, because I want to be cautious when choosing the right technology (framework/library) before investing development time in my own long-term projects.

Anyway, great job you are doing here. As a second, adjacent question:
are you funded by a larger corporation, or are you powered by donations? If so, it would be great to know how we can contribute.

Cheers

Applying Transform Filter

The outputImage of a transform filter remains the same size as the inputImage. Is this expected? I expected the outputImage to become bigger or smaller according to the transform applied.

Reading the code, I saw that this may be intentional, since the MTIRenderPassOutputDescriptor and MTIRenderCommand are constructed with the inputImage's size. However, if that's the case, how do you scale images to be bigger or smaller using MetalPetal?

PS: I was also wondering whether it might be annoying for you to answer these questions here instead of on Stack Overflow. I read the contributing guidelines and didn't see anything about asking questions or about what types of issues are allowed to be opened here on GitHub. Thank you again, much appreciated!

Memory bump after loading MTKView

I've tried my best to follow best practices, but I'm still experiencing a big memory bump after loading an MTKView using MetalPetal.

The image itself, loaded from the camera roll, is about 1.9 MB. When I don't use MTKView/MetalPetal and just load the photo into a standard UIImageView, the app uses 87 MB of memory. As soon as I replace the UIImageView with an MTKView, memory instantly jumps to 1.59 GB and my phone gets really hot within about 3 seconds. Maybe I'm not using it correctly; I'll post my code below.

Setup

fileprivate lazy var context: MTIContext? = {
    let options = MTIContextOptions()
    guard let device = MTLCreateSystemDefaultDevice(),
        let context = try? MTIContext(device: device, options: options) else { return nil }
    return context
}()

fileprivate lazy var renderView: MTKView = {
    let view = MTKView(frame: .zero, device: context?.device)
    view.delegate = self
    view.translatesAutoresizingMaskIntoConstraints = false
    view.layer.isOpaque = false
    view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    return view
}()

fileprivate lazy var renderRequest: MTIDrawableRenderingRequest = {
    let request = MTIDrawableRenderingRequest()
    request.drawableProvider = self.renderView
    request.resizingMode = MTIDrawableRenderingResizingMode.aspect
    return request
}()

inputImage = MTIImage(cgImage: cgImage, options: [MTKTextureLoader.Option.SRGB: false], isOpaque: false)

Rendering

public func draw(in view: MTKView) {
    brightnessFilter.inputImage = self.inputImage
    contrastFilter.inputImage = brightnessFilter.outputImage
    saturationFilter.inputImage = contrastFilter.outputImage
    guard let adjustedImage = saturationFilter.outputImage else { return }
    var outputImage = adjustedImage

    if let appliedFilter = appliedFilter {
        appliedFilter.inputImage = adjustedImage
        if let filteredImage = appliedFilter.outputImage {
            outputImage = filteredImage
        }
    }

    do {
        try autoreleasepool {
            try self.context?.render(outputImage, toDrawableWithRequest: renderRequest)
        }
    } catch {
        print(error)
    }
}

I'm testing on iPhone XS. Please let me know if you need more info.
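
One thing that may be worth checking, offered as an assumption rather than a confirmed diagnosis of the 1.59 GB bump: by default an MTKView redraws continuously at the display's refresh rate, so the whole filter chain above is re-rendered roughly 60 times per second even when nothing changes. Switching the view to on-demand drawing limits rendering to explicit requests.

// Configure the view for on-demand drawing instead of continuous redraw.
renderView.isPaused = true
renderView.enableSetNeedsDisplay = true

// Later, whenever the input image or a filter parameter changes:
renderView.setNeedsDisplay()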

Render result cache and texture reuse

  1. If multiple promises depend on one promise, that promise should only be resolved once.
  2. Reuse textures whenever possible. (follows MTIImage.cachePolicy)
  3. Add support for persistent images, whose render results should never be reused during the image's lifetime.

Memory issue

Hello Yuao! Hope you are doing good!

As you might remember, I was experimenting with MetalPetal to generate a video asset from a set of images. I added a feature where, on completion, the class returns all the images it used to render the video, for any post-processing needs. I used let cgImages = try images.map({ try self.context.makeCGImage(from: $0) }), and after that code is executed I notice a huge memory footprint remaining. I'm not sure if I'm doing anything wrong, but I checked the leak profiler and no leak was detected.

Usually, when I generate a video out of images, the render method adds 15–20 MB per image. After the video generation is complete I'm left with 600 MB in RAM; before, I would be left with approximately 140 MB, which is also bad but easier to get by with. Additionally, it's important to mention that I'm rendering really high resolution images directly from the iOS camera, approximately 3K or 4K. You might be wondering why I don't just downscale the images before rendering them: it's a requirement for me to render really high resolution images, as I want the video itself to be high resolution.

I also tried checking the allocations profiler, but there is an unusual case: the allocations show a memory spike to 30 MB when the video generation begins, which then gradually goes down to 10 MB; the leak profiler, on the other hand, shows a spike to 400–600 MB. It is very weird.

I have set up the project here if you are willing to do some investigation. I have added warnings where I think things are going wrong so that it's easy to navigate to the problem directly. Thank you in advance for reading this long issue.
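
A small mitigation worth trying, offered as an assumption rather than a confirmed fix: wrap each makeCGImage(from:) call (the same call quoted in the issue above) in its own autoreleasepool, so intermediate buffers can be released per image instead of accumulating until the whole map(_:) finishes. Here, images and context refer to the same objects as in the snippet above.

// Create each CGImage inside its own autoreleasepool so autoreleased
// intermediate objects are drained after every image.
let cgImages: [CGImage] = try images.map { image in
    try autoreleasepool {
        try context.makeCGImage(from: image)
    }
}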

glitter effect

Hello, I wanted to ask how to implement a glitter filter with this library. The glitter filter creates a glittering or sparkling light effect: a luminance map creates the glitter based on the bright areas in the image.
Please, if this library cannot do this, tell me how to do it on iOS; any references or tutorials would be appreciated. Thanks. PS: Sorry for my English :)
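
Not a confirmed MetalPetal recipe — just a minimal sketch of the general idea using Core Image's built-in CIBloom filter, which brightens and spreads the high-luminance areas of an image. A MetalPetal version could follow the same structure with a custom kernel (luminance threshold, blur, then an additive or screen blend over the source). The function name glitterLikeEffect(on:intensity:radius:) is made up for illustration.

import CoreImage

// Approximates a glow/sparkle built from the bright areas of the input image.
func glitterLikeEffect(on inputImage: CIImage, intensity: Float = 0.8, radius: Float = 10) -> CIImage? {
    guard let bloom = CIFilter(name: "CIBloom") else { return nil }
    bloom.setValue(inputImage, forKey: kCIInputImageKey)
    bloom.setValue(intensity, forKey: kCIInputIntensityKey)
    bloom.setValue(radius, forKey: kCIInputRadiusKey)
    return bloom.outputImage
}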
