# pytorch-aarch64

PyTorch, vision, audio, text and csprng wheels (whl) and Docker images for aarch64 / ARMv8 / ARM64 devices.
## Install

### conda 🆕 (Recommended)

`conda install -c kumatea pytorch`
You might need to install `numpy` as well:

`conda install -c kumatea pytorch numpy`
The `cpuonly` package from the official installation guide is not needed here, but is supported:

`conda install -c kumatea pytorch numpy cpuonly`
### pip

It's not recommended to use pip to install from this source.
Instead, install from the official PyPI index:

`pip install torch`

To install from this source anyway:

**`pip install torch -f https://torch.kmtea.eu/whl/stable.html`**
Add `torchvision`, `torchaudio`, `torchtext`, `torchcsprng` and other packages if needed.
Consider using [prebuilt wheels][57] to speed up installation:
`pip install torch -f https://torch.kmtea.eu/whl/stable.html -f https://ext.kmtea.eu/whl/stable.html`
(For users in China, please use [the CDN](/README_zh.html#安装))
Note: this command installs the latest version.
For a specific version, please check the Custom Builds section.
To pick the whl files manually, please check the releases.
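Before installing, it may help to confirm that the interpreter actually matches what these wheels target. The helper below is a hypothetical convenience, not part of this project; it only checks the two requirements stated in this README (an `aarch64` / ARM64 machine and, for recent builds, Python >= 3.6):

```python
# Hypothetical pre-install check -- not part of this project.
# Confirms the running interpreter matches what these wheels target:
# an ARMv8 64-bit machine and (for recent torch builds) Python >= 3.6.
import platform
import sys

def wheel_supported(machine, python_version):
    """Return True if this project's wheels target the given platform."""
    # The wheels here are built for aarch64 / ARM64 only.
    if machine not in ("aarch64", "arm64"):
        return False
    # Recent builds in the Custom Builds table require Python >= 3.6.
    return python_version >= (3, 6)

if __name__ == "__main__":
    ok = wheel_supported(platform.machine(), sys.version_info[:2])
    print("supported" if ok else "use the official wheels instead")
```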
### Docker (deprecated)

`docker run -it kumatea/pytorch`

To pull the image, run `docker pull kumatea/pytorch`.
To check all available tags, click here.
## FastAI

FastAI is a great open-source high-level deep learning framework based on PyTorch.

### conda (recommended)

`conda install -c fastai -c kumatea fastai`
Similarly, `fastbook` can be installed with:

`conda install -c fastai -c kumatea fastbook`
### pip

`pip install fastai -f https://torch.kmtea.eu/whl/stable.html`

`torch` and `torchvision` will be installed as dependencies automatically.
## Custom Builds
| `torch` | `torchvision` | `torchaudio` | `torchtext` | `torchcsprng` | Status | `python` |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| `master`<br>`nightly` | `master`<br>`nightly` | `master`<br>`nightly` | `master`<br>`nightly` | `master`<br>`nightly` | | `>=3.6` |
| `1.10.0` | `0.11.1`<br>`0.11.0` | `0.10.0` | `0.11.0` | | [![passing][2]][56] | `>=3.6` |
| `1.9.1` | `0.10.1` | `0.9.1` | `0.10.1` | | | `>=3.6` |
| `1.9.0` [i] | `0.10.0` | `0.9.0` | `0.10.0` | | [![passing][2]][52] | `>=3.6` [i] |
| `1.8.1` | `0.9.1` [i] | `0.8.1` | `0.9.1` | `0.2.1` | [![passing][2]][48] | `>=3.6` |
| `1.8.0` [i] | `0.9.0` | `0.8.0` | `0.9.0` | `0.2.0` | [![passing][2]][46] | `>=3.6` |
| `1.7.1` | `0.8.2` | `0.7.2` | `0.8.1` | `0.1.4` | [![passing][2]][18] | `>=3.6` |
| `1.7.0` | `0.8.1`<br>`0.8.0` | `0.7.0` | `0.8.0` | `0.1.3` | [![passing][2]][12] | `>=3.6` |
| `1.6.0` [i] | `0.7.0` | `0.6.0` | `0.7.0` | `0.1.2`<br>`0.1.1`<br>`0.1.0` | [![passing][2]][10] | `>=3.6` |
| `1.5.1` | `0.6.1` | `0.5.1` | `0.6.0` | | [![passing][2]][35] | `>=3.5` |
| `1.5.0` | `0.6.0` | `0.5.0` | `0.6.0` | | [![passing][2]][36] | `>=3.5` |
| `1.4.1`<br>`1.4.0` | `0.5.0` | `0.4.0` | `0.5.0` | | [![passing][2]][37] | `==2.7`, `>=3.5`, `<=3.8` |
| `1.3.1` | `0.4.2` | | | | | `==2.7`, `>=3.5`, `<=3.7` |
| `1.3.0` | `0.4.1` | | | | | `==2.7`, `>=3.5`, `<=3.7` |
| `1.2.0` | `0.4.0` | | | | | `==2.7`, `>=3.5`, `<=3.7` |
| `1.1.0` | `0.3.0` | | | | | `==2.7`, `>=3.5`, `<=3.7` |
| `<=1.0.1` | `0.2.2` | | | | | `==2.7`, `>=3.5`, `<=3.7` |
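To pin a matching pair from the table, the version constraints can be captured in a small lookup. The mapping below is an illustrative subset of the table above (top rows only), and `pin_command` is a hypothetical helper, not a tool shipped by this project:

```python
# Illustrative subset of the compatibility table above; hypothetical helper.
TORCHVISION_FOR = {
    "1.10.0": "0.11.1",
    "1.9.1": "0.10.1",
    "1.9.0": "0.10.0",
    "1.8.1": "0.9.1",
    "1.8.0": "0.9.0",
}

def pin_command(torch_version):
    """Build a pip command pinning a matching torch/torchvision pair."""
    torchvision_version = TORCHVISION_FOR[torch_version]
    return (f"pip install torch=={torch_version} "
            f"torchvision=={torchvision_version} "
            "-f https://torch.kmtea.eu/whl/stable.html")

print(pin_command("1.9.0"))
```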
### Corresponding Versions
* [Corresponding `torch` and `torchvision` versions][13]
* [Corresponding `torch` and `torchaudio` versions][14]
* [Corresponding `torch` and `torchtext` versions][29]
## More Info
### FAQ
* **Q:** Does this run on Raspberry Pi?
**A: Yes**, if the architecture of the SoC is `aarch64`. It should run on all ARMv8 chips.
* **Q:** Does this support CUDA / CUDNN?
**A: No**. [Check here](#cuda--cudnn-support) for more information.
* **Q:** Does this run on Nvidia Jetson?
**A: Yes**, but extremely slowly. Each Nvidia Jetson board contains an Nvidia GPU, but this project only builds CPU wheels. To make better use of your hardware, [build it yourself](/build/torch.sh).
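A quick way to answer the Raspberry Pi question for a specific board is to read `/proc/cpuinfo`: ARMv8 cores report `CPU architecture: 8`. The parser below is an illustrative sketch (the field name matches typical Linux-on-ARM output, but exact cpuinfo contents vary by kernel):

```python
# Illustrative sketch: decide from /proc/cpuinfo text whether a SoC is
# ARMv8 and can therefore run these aarch64 wheels. Field layout assumed
# from typical Linux-on-ARM output; it varies across kernels.
def is_armv8(cpuinfo):
    for line in cpuinfo.splitlines():
        if line.startswith("CPU architecture"):
            # ARMv8 cores report architecture "8".
            return line.split(":", 1)[1].strip() == "8"
    return False

sample = "processor\t: 0\nmodel name\t: ARMv8 Processor rev 3 (v8l)\nCPU architecture: 8\n"
print(is_armv8(sample))  # prints True
```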
### Difference From The Official Wheels
In most circumstances, it's **recommended to just use the official** wheels,
which pip will also install by default, even with `-f`.
The wheels here are compiled from source on a Raspberry Pi 4B,
and are intended for code that crashes with the official wheels
because it uses unsupported instructions.
Use the `torch` wheels here **only if you encounter problems** like [#8][53].
### About Python 3.10
At the time this change (v1.9.0) was committed, none of the following had a stable release:
Python 3.10.0,
NumPy 1.21.0 (which adds Python 3.10 support), or
PyTorch 1.9.0 for Python 3.10.
If any critical issue is found, I may rebuild the wheel after the stable releases.
### About PyTorch v1.8.0
* Starting from v1.8.0, the **official** wheels of PyTorch for `aarch64` have finally been released!
* ~~To use the official wheels, use this index link:
**`https://torch.kmtea.eu/whl/pfml.html`**
where `pfml` stands for `prefer-manylinux` here.~~
`manylinux` wheels will be installed by default.
* `torchvision` wheels are built with [FFmpeg][47] support. For wheels without it, please install `torchvision==0.9.0+slim`.
### About PyTorch v1.6.0
A fatal bug was encountered while building PyTorch v1.6.0, and [this patch][24] was applied.
The patch has been merged into the mainstream in later versions.
### About `torchvision` v0.9.1
Starting from `torchvision` v0.9.1,
`manylinux` wheels are officially provided via both [its indexes][49] and PyPI.
However, since they do not contain the necessary backends (< 1 MB) and may require extra installations,
this project will continue to build `torchvision` wheels.
### `RuntimeError` while importing
If you see something like this when running `import torch`:
`RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd`
Please upgrade your `numpy` version: `pip install -U numpy`.
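The two hex values in that message are NumPy C-API version numbers: the wheel was compiled against API `0xe` (14), but the installed numpy only provides `0xd` (13), which is why upgrading numpy fixes it. A small illustrative decoder (hypothetical helper, not part of numpy):

```python
# Illustrative decoder for the error message above; hypothetical helper.
# The hex values are NumPy C-API versions: the wheel was compiled against
# a newer C-API than the installed numpy provides.
import re

def api_versions(message):
    """Extract (compiled_against, installed) C-API versions from the error."""
    compiled, installed = (int(v, 16) for v in re.findall(r"0x[0-9a-f]+", message))
    return compiled, installed

err = ("RuntimeError: module compiled against API version 0xe "
       "but this version of numpy is 0xd")
compiled, installed = api_versions(err)
if compiled > installed:
    print("numpy is too old; run: pip install -U numpy")
```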
### CUDA / CUDNN Support
Since the build environment (described below) does not contain an Nvidia GPU,
the wheels cannot be built with CUDA support.
If you need it, please use an [Nvidia Jetson][30] board to run the [building code](/build/torch.sh).
### Building Environment
> Host: Raspberry Pi 4 Model B
>
> SoC: BCM2711 (quad-core Cortex-A72)
>
> Architecture: ARMv8 / ARM64 / `aarch64`
>
> OS: Debian Buster
>
> GCC: v8.3.0
>
> Virtualization: **Docker**
### Performance
Test date: 2021-10-29
Script: [bench.py](/test/bench.py)
> Less execution time is better
| Platform | Specs | Training | Prediction | Version |
| :---: | :---: | :---: | :---: | :---: |
| `aarch64` | BCM2711 (4x Cortex-A72) | 1:48:44 | 11,506.080 ms | `1.10.0`<br>`3.9.7` |
| `aarch64` | QUALCOMM Snapdragon 845 | N/A | 4,821.148 ms (24x) | `1.10.0`<br>`3.9.7` |
| `amd64` | INTEL Core i5-6267U | 162.964 s | 140.680 ms (82x) | `1.10.0+cpu`<br>`3.9.7` |
| Google Colab | INTEL Xeon ???<br>NVIDIA Tesla K80 | 6.400 s | 70.714 ms (163x) | `1.10.0+cu113`<br>`3.7.12` |
| Kaggle | INTEL Xeon ???<br>NVIDIA Tesla P100 | 6.626 s | 33.878 ms (340x) | `1.10.0+cu113`<br>`3.7.10` |
Note:
1. This test used the same _Cat or Dog_ model to predict 10 random animal images (the same images for each group).
2. The latest version of PyTorch was manually installed on all platforms, while drivers and Python remained at their defaults.
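The shape of the measurement is simple wall-clock timing; the sketch below mirrors it with a stdlib-only stand-in workload (the real [bench.py](/test/bench.py) times training and prediction of the torch model, not this toy function):

```python
# Stdlib-only sketch of the measurement above. The real /test/bench.py
# times a torch model; the workload here is just a stand-in kernel.
import time

def time_ms(fn, *args):
    """Run fn(*args) once and return the elapsed wall-clock time in ms."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000.0

def workload(n):
    # Stand-in compute kernel (the benchmark used a Cat-or-Dog model).
    return sum(i * i for i in range(n))

print(f"{time_ms(workload, 100_000):,.3f} ms")
```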