A graph reliability toolbox based on PyTorch and PyTorch Geometric (PyG).


GreatX: Graph Reliability Toolbox

GreatX is great!

[Documentation] | [Examples]

❓ What is "Reliability" on Graphs?

"Reliability" on graphs refers to robustness against the following threats:

  • Inherent noise
  • Distribution shift
  • Adversarial attacks

For more details, please refer to our paper Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack.

💨 News

  • November 2, 2022: We are planning to release GreatX 0.1.0 this month, stay tuned!
  • June 30, 2022: GraphWar has been renamed to GreatX.
  • June 9, 2022: GraphWar v0.1.0 has been released. We also provide the documentation along with numerous examples.
  • May 27, 2022: GraphWar has been refactored with PyTorch Geometric (PyG), old code based on DGL can be found here. We will soon release the first version of GreatX, stay tuned!

NOTE: GreatX is still in the early stages and the API will likely continue to change. If you are interested in this project, don't hesitate to contact me or make a PR directly.

🚀 Installation

Please make sure you have installed PyTorch and PyTorch Geometric (PyG).

# Coming soon
pip install -U greatx

or

# Recommended
git clone https://github.com/EdisonLeeeee/GreatX.git && cd GreatX
pip install -e . --verbose

where -e means "editable" mode, so changes you make to the source take effect without reinstalling.

⚡ Get Started

Assume that you have a torch_geometric.data.Data instance data that describes your graph.

How fast can you train and evaluate your own GNN?

Take GCN as an example:

from greatx.nn.models import GCN
from greatx.training import Trainer
from torch_geometric.datasets import Planetoid
# Any PyG dataset is available!
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]
model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0') # or 'cpu'
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)

A simple targeted manipulation attack

from greatx.attack.targeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
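
To make the attack above concrete, here is a dependency-free sketch of what a random targeted edge-flip attack does conceptually. The function name, its signature, and the toy graph are illustrative assumptions of ours, not GreatX's actual implementation:

```python
import random

def random_targeted_flips(edges, num_nodes, target, budget, seed=0):
    """Sketch: pick `budget` random node pairs incident to `target`
    and flip them (remove the edge if present, add it if absent)."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    flips = set()
    while len(flips) < budget:
        other = rng.randrange(num_nodes)
        if other == target:
            continue
        flips.add(tuple(sorted((target, other))))
    # symmetric difference with the chosen pairs flips each of them
    return sorted(edge_set ^ flips), sorted(flips)

# Toy graph: 5 nodes, attack node 1 with a budget of 3 edge flips
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
new_edges, flips = random_targeted_flips(edges, num_nodes=5, target=1, budget=3)
print(len(flips))  # 3 flips, all touching the target node
```

Because every flipped pair contains the target node, the perturbation stays local to it, which is what makes the attack "targeted".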

A simple untargeted manipulation attack

from greatx.attack.untargeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05) # attacking the graph with a budget of 5% of its edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
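
The float budget in the untargeted example reads naturally as a fraction of the graph's edges. A stdlib-only sketch of that interpretation (the function name and exact semantics here are our own reading, not GreatX's code):

```python
import random

def random_untargeted_flips(edges, num_nodes, num_budgets, seed=0):
    """Sketch: a float budget is read as a fraction of the existing
    edges; flip that many randomly chosen node pairs anywhere."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    budget = (max(1, int(num_budgets * len(edge_set)))
              if isinstance(num_budgets, float) else num_budgets)
    flips = set()
    while len(flips) < budget:
        u, v = rng.sample(range(num_nodes), 2)
        flips.add(tuple(sorted((u, v))))
    return sorted(edge_set ^ flips), sorted(flips)

edges = [(i, i + 1) for i in range(20)]  # a 21-node path graph
_, flips = random_untargeted_flips(edges, num_nodes=21, num_budgets=0.05)
print(len(flips))  # 5% of 20 edges -> a budget of 1 edge flip
```

Unlike the targeted case, the flipped pairs can fall anywhere in the graph, degrading overall performance rather than one node's prediction.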

👀 Implementations

In detail, the following methods are currently implemented:

⚔ Adversarial Attack

Graph Manipulation Attack (GMA)

Targeted Attack

Methods Descriptions Examples
RandomAttack A simple random method that chooses edges to flip randomly. [Example]
DICEAttack Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 [Example]
Nettack Zügner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 [Example]
FGAttack Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 [Example]
GFAttack Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 [Example]
IGAttack Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
SGAttack Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 [Example]
PGDAttack Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 [Example]

Untargeted Attack

Methods Descriptions Examples
RandomAttack A simple random method that chooses edges to flip randomly. [Example]
DICEAttack Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 [Example]
FGAttack Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 [Example]
Metattack Zügner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 [Example]
IGAttack Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
PGDAttack Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 [Example]

Graph Injection Attack (GIA)

Methods Descriptions Examples
RandomInjection A simple random method that chooses nodes to inject randomly. [Example]
AdvInjection The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. [Example]
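
In contrast with manipulation attacks, graph injection leaves existing edges alone and adds new nodes instead. A dependency-free sketch of the random variant (the helper name and parameters are hypothetical, not GreatX's API):

```python
import random

def random_injection(num_nodes, edges, feat_dim, degree, seed=0):
    """Sketch of graph injection: add one new node with random
    features and wire it to `degree` randomly chosen existing nodes."""
    rng = random.Random(seed)
    new_node = num_nodes  # next free node id
    new_feat = [rng.random() for _ in range(feat_dim)]
    targets = rng.sample(range(num_nodes), degree)
    new_edges = edges + [(new_node, t) for t in targets]
    return new_node, new_feat, new_edges

node, feat, out_edges = random_injection(num_nodes=10, edges=[(0, 1)],
                                         feat_dim=4, degree=3)
print(node, len(feat), len(out_edges))  # 10 4 4
```

Injection is often considered more realistic than manipulation: an attacker can usually create new accounts or items but cannot rewrite edges between existing ones.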

Graph Universal Attack (GUA)

Graph Backdoor Attack (GBA)

Methods Descriptions Examples
LGCBackdoor Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 [Example]
FGBackdoor Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 [Example]

Enhancing Techniques or Corresponding Defense

Standard GNNs (without defense)

Supervised

Methods Descriptions Examples
GCN Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 [Example]
SGC Wu et al. Simplifying Graph Convolutional Networks, ICLR'19 [Example]
GAT Veličković et al. Graph Attention Networks, ICLR'18 [Example]
DAGNN Liu et al. Towards Deeper Graph Neural Networks, KDD'20 [Example]
APPNP Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 [Example]
JKNet Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 [Example]
TAGCN Du et al. Topological Adaptive Graph Convolutional Networks, arXiv'17 [Example]
SSGC Zhu et al. Simple Spectral Graph Convolution, ICLR'21 [Example]
DGC Wang et al. Dissecting the Diffusion Process in Linear Graph Convolutional Networks, NeurIPS'21 [Example]
NLGCN, NLMLP, NLGAT Liu et al. Non-Local Graph Neural Networks, TPAMI'22 [Example]
SpikingGCN Zhu et al. Spiking Graph Convolutional Networks, IJCAI'22 [Example]

Unsupervised/Self-supervised

Methods Descriptions Examples
DGI Veličković et al. Deep Graph Infomax, ICLR'19 [Example]
GRACE Zhu et al. Deep Graph Contrastive Representation Learning, ICML'20 [Example]
CCA-SSG Zhang et al. From Canonical Correlation Analysis to Self-supervised Graph Neural Networks, NeurIPS'21 [Example]
GGD Zheng et al. Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination, NeurIPS'22 [Example]

Techniques Against Adversarial Attacks

Methods Descriptions Examples
MedianGCN Chen et al. Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21 [Example]
RobustGCN Zhu et al. Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19 [Example]
SoftMedianGCN Geisler et al. Reliable Graph Neural Networks via Robust Aggregation, NeurIPS'20; Geisler et al. Robustness of Graph Neural Networks at Scale, NeurIPS'21 [Example]
ElasticGNN Liu et al. Elastic Graph Neural Networks, ICML'21 [Example]
AirGNN Liu et al. Graph Neural Networks with Adaptive Residual, NeurIPS'21 [Example]
SimPGCN Jin et al. Node Similarity Preserving Graph Convolutional Networks, WSDM'21 [Example]
SAT Li et al. Spectral Adversarial Training for Robust Graph Neural Network, arXiv'22 [Example]
JaccardPurification Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
SVDPurification Entezari et al. All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, WSDM'20 [Example]
GNNGUARD Zhang et al. GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS'20 [Example]
GUARD Li et al. GUARD: Graph Universal Adversarial Defense, arXiv'22 [Example]
RTGCN Wu et al. Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation, KDD'22 [Example]

More details of the literature and the official implementations can be found at Awesome Graph Adversarial Learning.

Techniques Against Inherent Noise

Methods Descriptions Examples
DropEdge Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 [Example]
DropNode You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 [Example]
DropPath Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 [Example]
FeaturePropagation Rossi et al. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features, LoG'22 [Example]
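
Most of these augmentations share one idea: randomly remove part of the graph at each training epoch so the model does not overfit noisy structure. A minimal DropEdge-style sketch (stdlib only; the helper name and signature are ours, not GreatX's):

```python
import random

def drop_edge(edges, p, seed=0):
    """DropEdge-style sketch: keep each edge independently with
    probability 1 - p; resample this at every training epoch."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

edges = [(i, i + 1) for i in range(1000)]
kept = drop_edge(edges, p=0.3)
print(len(kept))  # roughly 70% of the 1000 edges survive
```

DropNode and DropPath follow the same recipe at a different granularity, sampling nodes or whole paths instead of individual edges.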

Miscellaneous

Methods Descriptions Examples
Centered Kernel Alignment (CKA) Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 [Example]
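
CKA measures how similar two sets of representations of the same inputs are. A pure-Python sketch of the linear variant, using the feature-space form ||X^T Y||_F^2 / (||X^T X||_F ||Y^T Y||_F) on column-centered matrices from the CKA literature (function names are ours):

```python
def center_columns(M):
    """Subtract each column's mean (rows = examples, cols = features)."""
    n = len(M)
    means = [sum(col) / n for col in zip(*M)]
    return [[x - m for x, m in zip(row, means)] for row in M]

def t_dot(A, B):
    """Compute A^T @ B for row-major nested lists."""
    return [[sum(A[k][i] * B[k][j] for k in range(len(A)))
             for j in range(len(B[0]))] for i in range(len(A[0]))]

def frob2(A):
    """Squared Frobenius norm."""
    return sum(x * x for row in A for x in row)

def linear_cka(X, Y):
    """Linear CKA on column-centered representations."""
    X, Y = center_columns(X), center_columns(Y)
    return frob2(t_dot(X, Y)) / (
        frob2(t_dot(X, X)) ** 0.5 * frob2(t_dot(Y, Y)) ** 0.5)

# Identical (or merely rescaled) representations give CKA = 1
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]]
Y = [[2.0 * v for v in row] for row in X]
print(round(linear_cka(X, Y), 6))  # 1.0
```

The score lies in [0, 1] and is invariant to isotropic scaling, which is why it is a popular tool for comparing layers across independently trained networks.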

❓ Known Issues