tf_geometric.layers (OOP API)
GCN
class tf_geometric.layers.GCN(*args, **kwargs)
Graph Convolutional Layer

__init__(units, activation=None, use_kernel=True, use_bias=True, norm='both', add_self_loop=True, sym=True, renorm=True, improved=False, edge_drop_rate=0.0, num_splits=None, num_or_size_splits=None, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
norm – Normalization mode, one of "both" | "left" | "right": "both": D^(-1/2) A D^(-1/2); "left": D^(-1/2) A; "right": A D^(-1/2).
add_self_loop – Whether to add self-loops to the adjacency matrix during normalization.
sym – Optional, only used when norm == "both". Setting sym=True indicates that the input sparse_adj is symmetric.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
edge_drop_rate – Dropout rate of the propagation weights.
num_or_size_splits – Split (XW) to compute A(XW) for large graphs (does not affect the output). See the num_or_size_splits parameter of the tf.split API.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
build_cache_by_adj(sparse_adj, override=False, cache=None)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in the given cache. If the normed edge already exists in the cache and the override parameter is False, this method will do nothing.

Parameters:
sparse_adj – The input sparse adjacency matrix.
override – Whether to override the existing cached normed edge.
cache – A dict for caching the normed edge.

Returns:
None
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, split=True, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, sparse_adj], [x, edge_index], or [x, edge_index, edge_weight].
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
split – Boolean, whether to split (XW) when computing A(XW); only used when self.num_splits is not None.

Returns:
Updated node features (x), shape: [num_nodes, units]
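Example (a minimal sketch based on the signatures above; the toy graph and sizes are illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    # A toy graph: 5 nodes with 20-dimensional features and 4 directed edges.
    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    gcn = tfg.layers.GCN(units=4, activation=tf.nn.relu)

    # Optionally pre-compute the normed edge so repeated calls can reuse it.
    gcn.build_cache_for_graph(graph)

    # [x, edge_index] (or [x, edge_index, edge_weight]) => updated node features.
    h = gcn([graph.x, graph.edge_index], cache=graph.cache)
    print(h.shape)  # (5, 4)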
GAT
class tf_geometric.layers.GAT(*args, **kwargs)

__init__(units, attention_units=None, activation=None, use_bias=True, num_heads=1, split_value_heads=True, query_activation=<function relu>, key_activation=<function relu>, edge_drop_rate=0.0, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
attention_units – Positive integer, dimensionality of the output space for Q and K in attention.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
num_heads – Number of attention heads.
split_value_heads – Boolean. If True, V is split into num_heads value heads whose outputs are concatenated as the output. Otherwise, num_heads different V projections are used as value heads, and their mean is used as the output.
query_activation – Activation function for Q in attention.
key_activation – Activation function for K in attention.
edge_drop_rate – Dropout rate of attention weights.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index] or [x, edge_index, edge_weight]. Note that the edge_weight will not be used.

Returns:
Updated node features (x), shape: [num_nodes, units]
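Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    gat = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)

    # [x, edge_index] => updated node features; edge_weight is ignored by GAT.
    h = gat([graph.x, graph.edge_index])
    print(h.shape)  # (5, 4)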
APPNP
class tf_geometric.layers.APPNP(*args, **kwargs)

__init__(units_list, dense_activation=<function relu>, activation=None, k=10, alpha=0.1, dense_drop_rate=0.0, last_dense_drop_rate=0.0, edge_drop_rate=0.0, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units_list – List of positive integers, the dimensionality of the output space of each dense layer.
dense_activation – Activation function to use for the dense layers, except for the last dense layer, which will not be activated.
activation – Activation function to use for the output.
k – Number of propagation power iterations.
alpha – Teleport Probability.
dense_drop_rate – Dropout rate for the output of every dense layer (except the last one).
last_dense_drop_rate – Dropout rate for the output of the last dense layer. last_dense_drop_rate is usually set to 0.0 for classification tasks.
edge_drop_rate – Dropout rate for the edges/adj used for propagation.
kernel_regularizer – Regularizer function applied to the kernel weights matrices.
bias_regularizer – Regularizer function applied to the bias vectors.
build_cache_by_adj(sparse_adj, override=False, cache=None)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in the given cache. If the normed edge already exists in the cache and the override parameter is False, this method will do nothing.

Parameters:
sparse_adj – The input sparse adjacency matrix.
override – Whether to override the existing cached normed edge.
cache – A dict for caching the normed edge.

Returns:
None
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
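Example (a minimal sketch; graph data and layer sizes are illustrative):

    import numpy as np
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    # Two dense layers (20 -> 16 -> 7) followed by k = 10 propagation iterations.
    appnp = tfg.layers.APPNP(units_list=[16, 7], k=10, alpha=0.1)

    h = appnp([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
    print(h.shape)  # (5, 7)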
GIN
class tf_geometric.layers.GIN(*args, **kwargs)
Graph Isomorphism Network Layer

__init__(mlp_model, eps=0, train_eps=False, *args, **kwargs)

Parameters:
mlp_model – A neural network (e.g., a multi-layer perceptron).
eps – float, optional (default: 0).
train_eps – Boolean, whether eps is trainable.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GIN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
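Example (a minimal sketch; the MLP architecture is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    # The MLP maps the aggregated 20-dim features to the output space.
    mlp = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation=tf.nn.relu),
        tf.keras.layers.Dense(7)
    ])

    gin = tfg.layers.GIN(mlp, train_eps=True)
    h = gin([graph.x, graph.edge_index, graph.edge_weight])
    print(h.shape)  # (5, 7)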
SGC
class tf_geometric.layers.SGC(*args, **kwargs)
The simple graph convolutional operator from the "Simplifying Graph Convolutional Networks" paper.
build_cache_by_adj(sparse_adj, override=False, cache=None)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in the given cache. If the normed edge already exists in the cache and the override parameter is False, this method will do nothing.

Parameters:
sparse_adj – The input sparse adjacency matrix.
override – Whether to override the existing cached normed edge.
cache – A dict for caching the normed edge.

Returns:
None
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
SSGC
class tf_geometric.layers.SSGC(*args, **kwargs)
OOP API for Simple Spectral Graph Convolution (SSGC / S^2GC). Paper: https://openreview.net/forum?id=CYO5T-YjWZV

__init__(units_list=None, k=10, alpha=0.1, dense_activation=<function relu>, activation=None, dense_drop_rate=0.0, last_dense_drop_rate=0.0, edge_drop_rate=0.0, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units_list – List of positive integers, the dimensionality of the output space of each dense layer.
k – Number of propagation power iterations.
dense_activation – Activation function to use for the dense layers, except for the last dense layer, which will not be activated.
activation – Activation function to use for the output.
alpha – Teleport Probability.
dense_drop_rate – Dropout rate for the input of every dense layer.
last_dense_drop_rate – Dropout rate for the output of the last dense layer. last_dense_drop_rate is usually set to 0.0 for classification tasks.
edge_drop_rate – Dropout rate for the edges/adj used for propagation.
kernel_regularizer – Regularizer function applied to the kernel weights matrices.
bias_regularizer – Regularizer function applied to the bias vectors.
build_cache_by_adj(sparse_adj, override=False, cache=None)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in the given cache. If the normed edge already exists in the cache and the override parameter is False, this method will do nothing.

Parameters:
sparse_adj – The input sparse adjacency matrix.
override – Whether to override the existing cached normed edge.
cache – A dict for caching the normed edge.

Returns:
None
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
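Example (a minimal sketch; graph data and layer sizes are illustrative):

    import numpy as np
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    ssgc = tfg.layers.SSGC(units_list=[16, 7], k=10, alpha=0.1)
    h = ssgc([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
    print(h.shape)  # (5, 7)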
TAGCN
class tf_geometric.layers.TAGCN(*args, **kwargs)
The topology adaptive graph convolutional operator from the "Topology Adaptive Graph Convolutional Networks" paper.

__init__(units, k=3, activation=None, use_bias=True, renorm=False, improved=False, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
k – Number of hops (default: 3).
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
build_cache_by_adj(sparse_adj, override=False, cache=None)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in the given cache. If the normed edge already exists in the cache and the override parameter is False, this method will do nothing.

Parameters:
sparse_adj – The input sparse adjacency matrix.
override – Whether to override the existing cached normed edge.
cache – A dict for caching the normed edge.

Returns:
None
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's GCN normalization configuration (self.renorm and self.improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
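Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    tagcn = tfg.layers.TAGCN(units=4, k=3, activation=tf.nn.relu)
    h = tagcn([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
    print(h.shape)  # (5, 4)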
GraphSage
class tf_geometric.layers.MeanGraphSage(*args, **kwargs)
GraphSAGE (mean aggregator): "Inductive Representation Learning on Large Graphs" paper

__init__(units, activation=<function relu>, use_bias=True, concat=True, normalize=False, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
concat – Boolean, whether to concatenate the transformed self features with the aggregated neighbor features.
normalize – Boolean, whether to L2-normalize the output features.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
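Example (a minimal sketch; graph data is illustrative, and the same pattern applies to the other GraphSage variants below):

    import numpy as np
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    sage = tfg.layers.MeanGraphSage(units=4)

    # [x, edge_index, edge_weight] => updated node features.
    h = sage([graph.x, graph.edge_index, graph.edge_weight])
    print(h.shape)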
class tf_geometric.layers.SumGraphSage(*args, **kwargs)
GraphSAGE (sum aggregator): "Inductive Representation Learning on Large Graphs" paper

__init__(units, activation=<function relu>, use_bias=True, concat=True, normalize=False, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
concat – Boolean, whether to concatenate the transformed self features with the aggregated neighbor features.
normalize – Boolean, whether to L2-normalize the output features.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
class tf_geometric.layers.MeanPoolGraphSage(*args, **kwargs)

call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
class tf_geometric.layers.MaxPoolGraphSage(*args, **kwargs)

call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
class tf_geometric.layers.GCNGraphSage(*args, **kwargs)

call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
class tf_geometric.layers.LSTMGraphSage(*args, **kwargs)

call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index] or [x, edge_index, edge_weight]. Note that the edge_weight will not be used.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
ChebyNet
class tf_geometric.layers.ChebyNet(*args, **kwargs)
The Chebyshev spectral graph convolutional operator from the "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering" paper.

__init__(units, k, activation=None, use_bias=True, normalization_type='sym', use_dynamic_lambda_max=False, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
k – Chebyshev filter size.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector.
normalization_type – The normalization scheme for the graph Laplacian (default: "sym").
use_dynamic_lambda_max – If True, compute the max eigenvalue of the Laplacian on each forward pass; otherwise, use 2.0 as the max eigenvalue.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
build_cache_for_graph(graph, override=False)
Manually compute the normed edge based on this layer's Chebyshev normalization configuration and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None
cache_normed_edge(graph, override=False)
Manually compute the normed edge based on this layer's Chebyshev normalization configuration and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method will do nothing.

Parameters:
graph – tfg.Graph, the input graph.
override – Whether to override existing cached normed edge.

Returns:
None

Deprecated since version 0.0.56: Use build_cache_for_graph instead.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
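Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    cheby = tfg.layers.ChebyNet(units=4, k=3, activation=tf.nn.relu)
    h = cheby([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
    print(h.shape)  # (5, 4)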
DropEdge
class tf_geometric.layers.DropEdge(*args, **kwargs)
DropEdge: "Towards Deep Graph Convolutional Networks on Node Classification" (https://openreview.net/forum?id=Hkx1qkrKPr)

__init__(rate=0.5, force_undirected: bool = False)

Parameters:
rate – Dropout rate of edges.
force_undirected – If set to True, will either drop or keep both edges of an undirected edge.
call(inputs, training=None, mask=None)
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g., builds a new computational graph from the provided inputs).

Arguments:
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
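The inherited docstring above does not spell out DropEdge's expected inputs. A hedged sketch, assuming the layer takes [edge_index, edge_weight] and returns their dropped counterparts (this input convention is an assumption, not stated above):

    import numpy as np
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    drop_edge = tfg.layers.DropEdge(rate=0.5)

    # Assumed convention: [edge_index, edge_weight] in, dropped pair out.
    # As with standard dropout, dropping is typically active only in training mode.
    dropped_edge_index, dropped_edge_weight = drop_edge(
        [graph.edge_index, graph.edge_weight], training=True)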
CommonPool
class tf_geometric.layers.MeanPool(*args, **kwargs)

call(inputs, training=None, mask=None)
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g., builds a new computational graph from the provided inputs).

Arguments:
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
class tf_geometric.layers.MinPool(*args, **kwargs)

call(inputs, training=None, mask=None)
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g., builds a new computational graph from the provided inputs).

Arguments:
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
class tf_geometric.layers.MaxPool(*args, **kwargs)

call(inputs, training=None, mask=None)
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g., builds a new computational graph from the provided inputs).

Arguments:
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
class tf_geometric.layers.SumPool(*args, **kwargs)

call(inputs, training=None, mask=None)
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g., builds a new computational graph from the provided inputs).

Arguments:
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
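The inherited docstrings above do not spell out the expected inputs for these pooling layers. A hedged sketch, assuming they take [x, node_graph_index] and return one pooled feature vector per graph (mirroring the documented Set2Set inputs below; this is an assumption):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    # Two graphs in one batch: nodes 0-2 belong to graph 0, nodes 3-4 to graph 1.
    x = np.random.randn(5, 8).astype(np.float32)
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    mean_pool = tfg.layers.MeanPool()

    # Assumed convention: [x, node_graph_index] => per-graph features.
    graph_h = mean_pool([x, node_graph_index])
    print(graph_h.shape)  # (2, 8)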
DiffPool
class tf_geometric.layers.DiffPool(*args, **kwargs)
OOP API for DiffPool: "Hierarchical Graph Representation Learning with Differentiable Pooling"

__init__(feature_gnn, assign_gnn, units, num_clusters, activation=None, use_bias=True, bias_regularizer=None, *args, **kwargs)

Parameters:
feature_gnn – A GNN model to learn pooled node features, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to high-order node features.
assign_gnn – A GNN model to learn cluster assignment for the pooling, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to the cluster assignment matrix.
units – Positive integer, dimensionality of the output space. It must be provided if you set use_bias=True.
num_clusters – Number of clusters for pooling.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector. If true, the “units” parameter must be provided.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight, node_graph_index]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Pooled graph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
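Example (a minimal sketch; using GCN layers for feature_gnn and assign_gnn is an illustrative choice, and all sizes are illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    x = np.random.randn(5, 8).astype(np.float32)
    edge_index = tf.constant([[0, 0, 1, 3], [1, 2, 2, 1]])
    edge_weight = tf.ones([4])
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    num_clusters = 2
    diff_pool = tfg.layers.DiffPool(
        feature_gnn=tfg.layers.GCN(units=16),           # learns pooled node features
        assign_gnn=tfg.layers.GCN(units=num_clusters),  # learns the cluster assignment matrix
        units=16,
        num_clusters=num_clusters
    )

    pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = \
        diff_pool([x, edge_index, edge_weight, node_graph_index])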
Set2Set
class tf_geometric.layers.Set2Set(*args, **kwargs)
OOP API for Set2Set

__init__(num_iterations=4, *args, **kwargs)

Parameters:
num_iterations – Number of iterations for attention.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, node_graph_index]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Graph features, shape: [num_graphs, num_node_features * 2]
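Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    # Two graphs in one batch: nodes 0-2 belong to graph 0, nodes 3-4 to graph 1.
    x = np.random.randn(5, 8).astype(np.float32)
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    set2set = tfg.layers.Set2Set(num_iterations=4)
    graph_h = set2set([x, node_graph_index])
    print(graph_h.shape)  # (2, 16), i.e. num_node_features * 2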
SAGPool
class tf_geometric.layers.SAGPool(*args, **kwargs)
OOP API for SAGPool

__init__(score_gnn, k=None, ratio=None, score_activation=None, *args, **kwargs)

Parameters:
score_gnn – A GNN model to score nodes for the pooling, [x, edge_index, edge_weight] => node_score.
k – Keep top k targets for each source.
ratio – Keep num_targets * ratio targets for each source.
score_activation – Activation to use for node_score before multiplying node_features with node_score.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight, node_graph_index]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Pooled graph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
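Example (a minimal sketch; the single-unit GCN used as score_gnn is an illustrative choice, as is the rest of the data):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    x = np.random.randn(5, 8).astype(np.float32)
    edge_index = tf.constant([[0, 0, 1, 3], [1, 2, 2, 1]])
    edge_weight = tf.ones([4])
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    sag_pool = tfg.layers.SAGPool(
        score_gnn=tfg.layers.GCN(units=1),  # produces one score per node
        ratio=0.5,
        score_activation=tf.nn.tanh
    )

    pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = \
        sag_pool([x, edge_index, edge_weight, node_graph_index])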
ASAP
class tf_geometric.layers.ASAP(*args, **kwargs)
OOP API for ASAP: "Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations"

__init__(k=None, ratio=None, drop_rate=0.0, attention_units=None, le_conv_activation=<function sigmoid>, le_conv_use_bias=True, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
attention_units – Positive integer, dimensionality for attention.
call(inputs, cache=None, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight, node_graph_index]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.

Returns:
Updated node features (x), shape: [num_nodes, units]
class tf_geometric.layers.LEConv(*args, **kwargs)
LEConv: Local Extremum Graph Convolutional Layer

__init__(units, activation=None, self_use_bias=True, aggr_self_use_bias=True, aggr_neighbor_use_bias=False, kernel_regularizer=None, bias_regularizer=None, *args, **kwargs)

Parameters:
units – Positive integer, dimensionality of the output space.
activation – Activation function to use.
self_use_bias – Boolean, whether the self-feature transformation uses a bias vector.
aggr_self_use_bias – Boolean, whether the aggregation's self term uses a bias vector.
aggr_neighbor_use_bias – Boolean, whether the aggregation's neighbor term uses a bias vector.
kernel_regularizer – Regularizer function applied to the kernel weights matrix.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight]

Returns:
Updated node features (x), shape: [num_nodes, units]
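Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    graph = tfg.Graph(
        x=np.random.randn(5, 20).astype(np.float32),
        edge_index=[[0, 0, 1, 3],
                    [1, 2, 2, 1]]
    )

    le_conv = tfg.layers.LEConv(units=4, activation=tf.nn.relu)
    h = le_conv([graph.x, graph.edge_index, graph.edge_weight])
    print(h.shape)  # (5, 4)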
SortPool
class tf_geometric.layers.SortPool(*args, **kwargs)
OOP API for SortPool: "An End-to-End Deep Learning Architecture for Graph Classification"

__init__(k=None, ratio=None, sort_index=-1, *args, **kwargs)

Parameters:
k – Keep top k targets for each source.
ratio – Keep num_targets * ratio targets for each source.
sort_index – The sort_index-th index of the last axis will be used for sorting.
call(inputs, training=None, mask=None)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight, node_graph_index]

Returns:
Pooled graph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
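Example (a minimal sketch; graph data is illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    x = np.random.randn(5, 8).astype(np.float32)
    edge_index = tf.constant([[0, 0, 1, 3], [1, 2, 2, 1]])
    edge_weight = tf.ones([4])
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    # Sort nodes by the last feature channel (sort_index=-1) and keep the top entries.
    sort_pool = tfg.layers.SortPool(k=2)

    pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = \
        sort_pool([x, edge_index, edge_weight, node_graph_index])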
MinCutPool
class tf_geometric.layers.MinCutPool(*args, **kwargs)
OOP API for MinCutPool: "Spectral Clustering with Graph Neural Networks for Graph Pooling"

__init__(feature_gnn, assign_gnn, units, num_clusters, activation=None, use_bias=True, gnn_use_normed_edge=True, bias_regularizer=None, *args, **kwargs)

Parameters:
feature_gnn – A GNN model to learn pooled node features, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to high-order node features.
assign_gnn – A GNN model to learn cluster assignment for the pooling, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to the cluster assignment matrix.
units – Positive integer, dimensionality of the output space. It must be provided if you set use_bias=True.
num_clusters – Number of clusters for pooling.
activation – Activation function to use.
use_bias – Boolean, whether the layer uses a bias vector. If true, the “units” parameter must be provided.
gnn_use_normed_edge – Boolean. Whether to use normalized edge for feature_gnn and assign_gnn.
bias_regularizer – Regularizer function applied to the bias vector.
call(inputs, cache=None, training=None, mask=None, return_loss_func=False, return_losses=False)

Parameters:
inputs – List of graph info: [x, edge_index, edge_weight, node_graph_index]
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
return_loss_func – Boolean. If True, return (outputs, loss_func), where loss_func is a callable function that returns a list of losses.
return_losses – Boolean. If True, return (outputs, losses), where losses is a list of losses.

Returns:
Pooled graph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
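Example (a minimal sketch; using GCN layers for feature_gnn and assign_gnn is an illustrative choice, and all sizes are illustrative):

    import numpy as np
    import tensorflow as tf
    import tf_geometric as tfg

    x = np.random.randn(5, 8).astype(np.float32)
    edge_index = tf.constant([[0, 0, 1, 3], [1, 2, 2, 1]])
    edge_weight = tf.ones([4])
    node_graph_index = tf.constant([0, 0, 0, 1, 1])

    num_clusters = 2
    min_cut_pool = tfg.layers.MinCutPool(
        feature_gnn=tfg.layers.GCN(units=16),
        assign_gnn=tfg.layers.GCN(units=num_clusters),
        units=16,
        num_clusters=num_clusters
    )

    # return_losses=True also yields the auxiliary pooling losses for training.
    outputs, losses = min_cut_pool(
        [x, edge_index, edge_weight, node_graph_index], return_losses=True)
    pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = outputs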