Note: this repo needs to be updated, since there have been many recent changes.
Quantization for deep learning is the process of approximating a neural network that uses floating-point numbers with a neural network that uses low-bit-width numbers. This dramatically reduces both the memory requirements and the computational cost of running neural networks.
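As a rough illustration of the idea (not the exact scheme used in this repo), here is a minimal sketch of uniform affine quantization, mapping 32-bit floats to 8-bit integers with a scale and zero point:

```python
import numpy as np

def quantize_uint8(x):
    """Uniform affine quantization of a float array to unsigned 8-bit ints."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map 8-bit codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_uint8(weights)
recovered = dequantize(q, scale, zp)
# q stores 1 byte per value instead of 4, at the cost of a small
# rounding error bounded by about half the scale per element.
```

The memory saving here is 4x (uint8 vs. float32); lower bit widths trade more accuracy for more compression.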
Graph Neural Networks (GNNs), on the other hand, are a class of deep learning methods designed to perform inference on data described by graphs. GNNs are neural networks that can be applied directly to graphs, and they provide a straightforward way to handle node-level, edge-level, and graph-level prediction tasks.
The accuracy results are shown below:
For more information, see the paper included in this repository as a PDF.