Building a Flutter Computer Vision App Using Dart:ffi, OpenCV, and TensorFlow (Part 3)

This is the last part of a three-part series explaining how to use dart:ffi, OpenCV, and TensorFlow to write a computer vision app in Flutter.

Part one discusses how to properly configure the OpenCV library and our custom C++ files to work within a Flutter app and how to access these C++ files from Dart via dart:ffi.

Part two discusses how to pass data between Dart and C++ via pointers using the dart:ffi library. This is especially important for transferring image data to C++ so that it can be manipulated by OpenCV and returned to Dart.

Part three (this post) discusses how to use TensorFlow in a Flutter app using the tflite_flutter plugin. We will go through proper configuration, common bugs, and how to run inference on our device.

Note: This tutorial only covers configuration for iOS and has been tested on an M1 Mac.

Sudoku Cam, a computer vision Flutter app using OpenCV in C++

In the previous tutorials, we saw how to access OpenCV in C++ using the dart:ffi library and how to send image data to and from Dart. We often need to use OpenCV to do some processing on an image before sending it into a neural network that will perform some task for us, like classification. Let’s learn how to use the tflite_flutter package to run on-device inference! Note: I will be using the tflite_flutter package instead of the tflite package, as the former offers convenient methods for reshaping lists and for running inference with multiple inputs. This will be demonstrated later in the tutorial.

Here are the official docs of the tflite_flutter package, which give a fuller overview.

Configuration of the Library

As with OpenCV, we will have to provide the library with a .framework file, which is a bundle containing the library’s header files as well as binaries for static linkage. In this case, we need TensorFlowLiteC.framework, which you can download by clicking on this link. After downloading it, paste the file in ~/.pub-cache/hosted/pub.dartlang.org/tflite_flutter<VERSION>/ios

I would recommend trying to import the package as follows to test whether your app is able to access tflite_flutter.

import 'package:tflite_flutter/tflite_flutter.dart';

If your app builds, that’s great; you can skip to the next section. If you get a linking error, continue reading. In some cases, the .pub-cache folder that Flutter uses for its packages is not located in the path given above and can instead be found in the Flutter directory. In that case, navigate to <FLUTTER-DIR>/.pub-cache/hosted/pub.dartlang.org/tflite_flutter<VERSION>/ios and paste the file there. The proper location is also accessible through a symbolic link: <PROJECT-DIR>/ios/.symlinks/plugins/tflite_flutter/ios/

Note that FLUTTER-DIR refers to the directory where Flutter was downloaded, while PROJECT-DIR refers to the folder where you are writing your Flutter app.

Running Inference On the Device

Because machine learning tasks often require heavy computation, the .tflite format was created so that edge devices (such as mobile phones) can run inference as efficiently and quickly as possible. You can convert your trained model into a .tflite file by following the instructions provided here.

Once you have a .tflite file, let’s set up the model on the device in our Flutter app.

We can load our model as follows:

import 'package:tflite_flutter/tflite_flutter.dart';
// Loading the model is asynchronous, so this must run inside an async function.
Interpreter model = await Interpreter.fromAsset("modelName.tflite");
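
If you want more control over how inference runs, you can also pass an InterpreterOptions object when creating the interpreter, for example to set the number of CPU threads. A minimal sketch, with an arbitrary thread count of two:

import 'package:tflite_flutter/tflite_flutter.dart';
// Optional: configure the interpreter before loading, e.g. the CPU thread count.
final options = InterpreterOptions()..threads = 2;
Interpreter model = await Interpreter.fromAsset("modelName.tflite", options: options);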

Make sure you list the model file in the assets section of your pubspec.yaml file as follows:

flutter:
  assets:
    - assets/modelName.tflite

Once your model is loaded, it is up to you to configure your input data to match the dimensions that you specified when you created your model.
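
If you are unsure what shapes the interpreter actually expects, tflite_flutter lets you inspect the model’s tensors at runtime via getInputTensors() and getOutputTensors(). A quick sketch using the model loaded above:

// Print the shapes and types the model expects and produces.
// Handy for verifying your Dart-side dimensions before running inference.
for (final tensor in model.getInputTensors()) {
  print('input:  shape=${tensor.shape} type=${tensor.type}');
}
for (final tensor in model.getOutputTensors()) {
  print('output: shape=${tensor.shape} type=${tensor.type}');
}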

For example, let’s assume that we are running inference on 81 grayscale images of size 34×34, an input of shape (81, 34, 34, 1), and classifying each image into one of ten possible categories, giving an output of shape (81, 10).

Our Dart code would look like the following:

// create a list of 81 empty lists
List<Object> inputs = List<Object>.generate(81, (index) => []);
// fill in our pixel values and ensure that each image has the right shape
for (int i = 0; i < 81; i++) {
  inputs[i] = images[i].reshape([34, 34, 1]);
}
// initialize our outputs to 0
Map<int, Object> outputs = {
  0: List<double>.filled(81 * 10, 0).reshape([81, 10])
};
// run our model! The results will be written into 'outputs'
model.runForMultipleInputs(inputs, outputs);

If you’re getting a dimension mismatch between what your TensorFlow model expects and what you’re passing in from your Dart code, make sure the dimensions match exactly. This includes cases where a dimension has size 1, e.g. (81, 34, 34, 1).
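
A common source of such mismatches is the per-image preprocessing. The sketch below assumes each cell arrives as a flat Uint8List of 34×34 grayscale pixels (for example from the OpenCV step in Part 2) and that the model was trained on values scaled to [0, 1]; adjust the normalization to match your own training pipeline. The prepareCell helper is hypothetical and simply produces the nested structure used for images[i] above.

import 'dart:typed_data';
import 'package:tflite_flutter/tflite_flutter.dart'; // provides the reshape extension
// Hypothetical helper: turns one flat grayscale cell into the nested
// [34, 34, 1] structure expected by the interpreter.
List<dynamic> prepareCell(Uint8List pixels) {
  final normalized = pixels.map((p) => p / 255.0).toList();
  return normalized.reshape([34, 34, 1]);
}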

Dart provides many convenient List operations which can be used to obtain the maximum confidence score and index for each output array.

import 'dart:math' as math;
List<List<dynamic>> outValues = List<List<dynamic>>.from(outputs[0] as List);
for (int i = 0; i < outValues.length; i++) {
  // obtain the maximum value in currOutput
  List<double> currOutput = List<double>.from(outValues[i]);
  double maxVal = currOutput.reduce(math.max);
  // obtain the first index where maxVal occurs
  int maxIdx = currOutput.indexWhere((el) => el == maxVal);
  // MORE CODE HERE...
}
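
When you are done running inference (for example, when the relevant widget is disposed), it is good practice to release the interpreter’s native resources:

// Free the native TensorFlow Lite resources held by the interpreter.
model.close();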

You now have the ability to insert your custom machine learning model into a Flutter app and run on-device inference. We learned previously how to run image processing with OpenCV in C++ and pass the data to and from Dart. This tutorial wraps up the final stage of the pipeline, which is running on-device inference on our processed images. Hope you enjoyed!

