# Running PyTorch models on a mobile phone
- This code provides scripts to profile neural networks on Android devices.
- Compression may be done using the MusCo toolkit.
- `./device_profiling`: helper code to profile models on phones
- `./doc`: Markdown documentation
- Requirements are listed in `env_docker.yaml`. You can install them with conda:

  ```shell
  conda create -n nnsc_2022_mobile --file env_docker.yaml
  ```
- Alternatively, you can use the Docker image `qbit271/mmsc_2022_mobile` or build your own here.
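If you use the Docker image, a session might look like the sketch below. The mount points are assumptions, and passing the USB bus through is only needed if you want to run `adb` from inside the container:

```shell
# Pull the prebuilt image (name taken from the text above)
docker pull qbit271/mmsc_2022_mobile

# Start an interactive shell; --privileged and the /dev/bus/usb mount let
# adb inside the container reach the phone. Mounting the current directory
# at /workspace is an assumption -- adjust to your layout.
docker run -it --privileged \
    -v /dev/bus/usb:/dev/bus/usb \
    -v "$(pwd)":/workspace \
    qbit271/mmsc_2022_mobile
```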
- First, download or compile an appropriate PyTorch benchmarking binary; instructions can be found here. Alternatively, you can download the one we prebuilt for you from https://github.com/qbit-/NNSC_2022_mobile/tree/main/bin. For the prebuilt binary to work with your models, the models must be built with PyTorch 1.7.1.
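Preparing a model for the benchmark binary is typically done with TorchScript tracing. A minimal sketch follows; the tiny stand-in network and the file name `model.pt` are placeholders for your own model, and the script should be run under the PyTorch version noted above (1.7.1) if you plan to use the prebuilt binary:

```python
import torch
import torch.nn as nn

# Stand-in network; replace with the model you actually want to profile
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Trace with a representative input to get a serialized TorchScript module,
# which is the format the benchmarking binary loads
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model.pt")  # push this file to the device for benchmarking
```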
- Copy the benchmark binary to `/data/local/tmp/speed_benchmark_torch` on the device. First check that the device is accessible:

  ```shell
  adb devices
  ```

  Then copy the profiler to the device and make it executable:

  ```shell
  adb push speed_benchmark_torch-$ANDROID_ABI /data/local/tmp/speed_benchmark_torch
  adb shell chmod +x /data/local/tmp/speed_benchmark_torch
  ```
- Check that you can execute the benchmarking binary using ADB:

  ```shell
  adb shell /data/local/tmp/speed_benchmark_torch --help
  ```
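After pushing a TorchScript model to the device, a benchmark run might look like the sketch below; the model path, input dimensions, and iteration counts are illustrative assumptions to adapt to your setup:

```shell
# Push the serialized model to the device (path is an assumption)
adb push model.pt /data/local/tmp/model.pt

# Run the benchmark; --model, --input_dims, --input_type, --warmup and
# --iter are standard speed_benchmark_torch flags, values are illustrative
adb shell /data/local/tmp/speed_benchmark_torch \
    --model=/data/local/tmp/model.pt \
    --input_dims="1,3,224,224" \
    --input_type=float \
    --warmup=5 \
    --iter=20
```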
- Now you can use the functions in `./device_profiling` to automate profiling. See the example.