tf_geometric.nn (Functional API)
gcn
- tf_geometric.nn.gcn(x, sparse_adj: tf_sparse.sparse_matrix.SparseMatrix, kernel, bias=None, activation=None, norm='both', add_self_loop=True, sym=True, renorm=True, improved=False, edge_drop_rate=0.0, num_or_size_splits=None, training=False, cache=None)
Functional API for Graph Convolutional Networks (GCN).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
sparse_adj – tf_sparse.SparseMatrix, Adjacency Matrix
kernel – Tensor, shape: [num_features, num_output_features], weight
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
edge_drop_rate – Dropout rate of the propagation weights.
num_or_size_splits – Split XW into chunks to compute A(XW) for large graphs (does not affect the output). See the num_or_size_splits parameter of the tf.split API.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict. To use @tf_utils.function with gcn, you should cache the normed edge information before the first call of gcn.
If you're using the OOP API tfg.layers.GCN:
gcn_layer.build_cache_for_graph(graph)
If you're using the functional API tfg.nn.gcn:
from tf_geometric.nn.conv.gcn import gcn_build_cache_for_graph
gcn_build_cache_for_graph(graph)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
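A minimal sketch of the functional gcn call on toy data. All shapes and data here are illustrative; graph.adj() is assumed to expose the graph's tf_sparse.SparseMatrix adjacency, which may differ across tf_geometric versions (if unavailable, build the SparseMatrix from graph.edge_index directly):
import numpy as np
import tensorflow as tf
import tf_geometric as tfg
from tf_geometric.nn.conv.gcn import gcn_build_cache_for_graph

# toy graph (illustrative data): 5 nodes, 16 features, 4 directed edges
graph = tfg.Graph(x=np.random.randn(5, 16).astype(np.float32),
                  edge_index=[[0, 1, 2, 3], [1, 2, 3, 4]])
gcn_build_cache_for_graph(graph)  # cache the normed adjacency before the first gcn call

kernel = tf.Variable(tf.random.truncated_normal([16, 7], stddev=0.1))  # [num_features, num_output_features]
h = tfg.nn.gcn(graph.x, graph.adj(), kernel, activation=tf.nn.relu, cache=graph.cache)  # shape: (5, 7)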
- tf_geometric.nn.gcn_norm_adj(sparse_adj: tf_sparse.sparse_matrix.SparseMatrix, norm='both', add_self_loop=True, sym=True, renorm=True, improved=False, cache: Optional[dict] = None)
Compute the normed edge (updated edge_index and normalized edge_weight) for GCN normalization.
- Parameters
sparse_adj – tf_sparse.SparseMatrix, sparse adjacency matrix.
norm – Normalization mode, one of "both" | "left" | "right":
both: D^(-1/2) A D^(-1/2)
left: D^(-1/2) A
right: A D^(-1/2)
add_self_loop – Whether to add self-loops to adj during normalization.
sym – Optional, only used when norm=="both". Setting sym=True indicates that the input sparse_adj is symmetric.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
cache – A dict for caching the updated edge_index and normalized edge_weight.
- Returns
Normed edge (updated edge_index and normalized edge_weight).
- tf_geometric.nn.gcn_build_cache_by_adj(sparse_adj: tf_sparse.sparse_matrix.SparseMatrix, norm='both', add_self_loop=True, sym=True, renorm=True, improved=False, override=False, cache=None)
Manually compute the normed edge based on the given GCN normalization configuration (renorm and improved) and put it in the given cache dict. If the normed edge already exists in the cache and the override parameter is False, this method does nothing.
- Parameters
sparse_adj – tf_sparse.SparseMatrix, sparse adjacency matrix.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
override – Whether to override existing cached normed edge.
- Returns
cache
- tf_geometric.nn.gcn_build_cache_for_graph(graph, norm='both', add_self_loop=True, sym=True, renorm=True, improved=False, override=False)
Manually compute the normed edge based on the given GCN normalization configuration (renorm and improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method does nothing.
- Parameters
graph – tfg.Graph, the input graph.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
override – Whether to override existing cached normed edge.
- Returns
None
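For instance, a sketch of pre-building the cache so that gcn can later be wrapped with @tf_utils.function (the graph construction data is illustrative):
import numpy as np
import tf_geometric as tfg
from tf_geometric.nn.conv.gcn import gcn_build_cache_for_graph

graph = tfg.Graph(x=np.random.randn(5, 16).astype(np.float32),
                  edge_index=[[0, 1, 2, 3], [1, 2, 3, 4]])
gcn_build_cache_for_graph(graph)                 # computes the normed edge and stores it in graph.cache
gcn_build_cache_for_graph(graph)                 # does nothing: the normed edge is already cached
gcn_build_cache_for_graph(graph, override=True)  # recomputes and overrides the cached normed edge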
- tf_geometric.nn.gcn_norm_edge(edge_index, num_nodes, edge_weight=None, renorm=True, improved=False, cache: Optional[dict] = None)
Compute the normed edge (updated edge_index and normalized edge_weight) for GCN normalization.
- Parameters
edge_index – Tensor, shape: [2, num_edges], edge information.
num_nodes – Number of nodes.
edge_weight – Tensor or None, shape: [num_edges]
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
cache – A dict for caching the updated edge_index and normalized edge_weight.
- Returns
Normed edge (updated edge_index and normalized edge_weight).
Deprecated since version 0.0.56: Use gcn_norm_adj instead.
- tf_geometric.nn.gcn_cache_normed_edge(graph, renorm=True, improved=False, override=False)
Manually compute the normed edge based on the given GCN normalization configuration (renorm and improved) and put it in graph.cache. If the normed edge already exists in graph.cache and the override parameter is False, this method does nothing.
- Parameters
graph – tfg.Graph, the input graph.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
override – Whether to override existing cached normed edge.
- Returns
None
Deprecated since version 0.0.56: Use gcn_build_cache_for_graph instead.
gat
- tf_geometric.nn.gat(x, edge_index, query_kernel, query_bias, query_activation, key_kernel, key_bias, key_activation, kernel, bias=None, activation=None, num_heads=1, split_value_heads=True, edge_drop_rate=0.0, training=False)
Functional API for Graph Attention Networks (GAT).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
query_kernel – Tensor, shape: [num_features, num_query_features], weight for Q in attention
query_bias – Tensor, shape: [num_query_features], bias for Q in attention
query_activation – Activation function for Q in attention.
key_kernel – Tensor, shape: [num_features, num_key_features], weight for K in attention
key_bias – Tensor, shape: [num_key_features], bias for K in attention
key_activation – Activation function for K in attention.
kernel – Tensor, shape: [num_features, num_output_features], weight
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
num_heads – Number of attention heads.
split_value_heads – Boolean. If True, split V into value attention heads and then concatenate them as the output. Otherwise, num_heads different V matrices are used as value attention heads, and their mean is used as the output.
edge_drop_rate – Dropout rate of attention weights.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
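A minimal sketch of a single gat call with 4 attention heads. All data and sizes are illustrative; note that with the default split_value_heads=True, num_output_features should be divisible by num_heads:
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])                          # 5 nodes, 16 features
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])

# attention and output weights, shaped as documented above
query_kernel = tf.Variable(tf.random.truncated_normal([16, 8], stddev=0.1))
query_bias = tf.Variable(tf.zeros([8]))
key_kernel = tf.Variable(tf.random.truncated_normal([16, 8], stddev=0.1))
key_bias = tf.Variable(tf.zeros([8]))
kernel = tf.Variable(tf.random.truncated_normal([16, 8], stddev=0.1))

h = tfg.nn.gat(x, edge_index,
               query_kernel, query_bias, tf.nn.relu,
               key_kernel, key_bias, tf.nn.relu,
               kernel, num_heads=4, edge_drop_rate=0.1, training=True)  # shape: (5, 8)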
appnp
- tf_geometric.nn.appnp(x, edge_index, edge_weight, kernels, biases, dense_activation=<function relu>, activation=None, k=10, alpha=0.1, dense_drop_rate=0.0, last_dense_drop_rate=0.0, edge_drop_rate=0.0, cache=None, training=False)
Functional API for Approximate Personalized Propagation of Neural Predictions (APPNP).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
kernels – List[Tensor], shape of each Tensor: [num_features, num_output_features], weights
biases – List[Tensor], shape of each Tensor: [num_output_features], biases
dense_activation – Activation function to use for the dense layers, except for the last dense layer, which will not be activated.
activation – Activation function to use for the output.
k – Number of propagation power iterations.
alpha – Teleport probability.
dense_drop_rate – Dropout rate for the output of every dense layer (except the last one).
last_dense_drop_rate – Dropout rate for the output of the last dense layer. last_dense_drop_rate is usually set to 0.0 for classification tasks.
edge_drop_rate – Dropout rate for the edges/adj used for propagation.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict. To use @tf_utils.function with gcn, you should cache the normed edge information before the first call of gcn.
If you're using the OOP API tfg.layers.GCN:
gcn_layer.build_cache_for_graph(graph)
If you're using the functional API tfg.nn.gcn:
from tf_geometric.nn.conv.gcn import gcn_build_cache_for_graph
gcn_build_cache_for_graph(graph)
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
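A sketch of appnp with a two-layer dense transformation followed by k propagation steps (data and layer sizes are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])

# two dense layers: 16 -> 64 -> 7
kernels = [tf.Variable(tf.random.truncated_normal([16, 64], stddev=0.1)),
           tf.Variable(tf.random.truncated_normal([64, 7], stddev=0.1))]
biases = [tf.Variable(tf.zeros([64])), tf.Variable(tf.zeros([7]))]

cache = {}  # reuse this dict across calls on the same graph
h = tfg.nn.appnp(x, edge_index, None, kernels, biases,
                 k=10, alpha=0.1, cache=cache, training=False)  # shape: (5, 7)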
gin
- tf_geometric.nn.gin(x, edge_index, mlp_model, eps=0.0, training=None)
Functional API for Graph Isomorphism Networks (GIN).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
mlp_model – A neural network (multi-layer perceptron).
eps – float, optional. (default: 0.)
training – Whether currently executing in training or inference mode.
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
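A sketch of gin, where mlp_model can be any callable network, e.g. a Keras Sequential MLP (sizes and data are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])

mlp_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation=tf.nn.relu),
    tf.keras.layers.Dense(7)
])
h = tfg.nn.gin(x, edge_index, mlp_model, eps=0.1)  # shape: (5, 7)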
sgc
- tf_geometric.nn.sgc(x, edge_index, edge_weight, k, kernel, bias=None, activation=None, renorm=True, improved=False, cache=None)
Functional API for Simple Graph Convolution (SGC).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
k – Number of hops. (default: 1)
kernel – Tensor, shape: [num_features, num_output_features], weight.
bias – Tensor, shape: [num_output_features], bias.
activation – Activation function to use.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
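A sketch of sgc with 2-hop propagation (data and sizes are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])
kernel = tf.Variable(tf.random.truncated_normal([16, 7], stddev=0.1))

cache = {}  # reuse across calls on the same graph
h = tfg.nn.sgc(x, edge_index, None, k=2, kernel=kernel, cache=cache)  # shape: (5, 7)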
ssgc
- tf_geometric.nn.ssgc(x, edge_index, edge_weight, kernels=None, biases=None, k=10, alpha=0.1, dense_activation=<function relu>, activation=None, dense_drop_rate=0.0, last_dense_drop_rate=0.0, edge_drop_rate=0.0, cache=None, training=False)
Functional API for Simple Spectral Graph Convolution (SSGC / S^2GC). Paper URL: https://openreview.net/forum?id=CYO5T-YjWZV
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
kernels – List[Tensor], shape of each Tensor: [num_features, num_output_features], weights
biases – List[Tensor], shape of each Tensor: [num_output_features], biases
dense_activation – Activation function to use for the dense layers, except for the last dense layer, which will not be activated.
activation – Activation function to use for the output.
k – Number of propagation power iterations.
alpha – Teleport probability.
dense_drop_rate – Dropout rate for the output of every dense layer (except the last one).
last_dense_drop_rate – Dropout rate for the output of the last dense layer. last_dense_drop_rate is usually set to 0.0 for classification tasks.
edge_drop_rate – Dropout rate for the edges/adj used for propagation.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict. To use @tf_utils.function with gcn, you should cache the normed edge information before the first call of gcn.
If you're using the OOP API tfg.layers.GCN:
gcn_layer.build_cache_for_graph(graph)
If you're using the functional API tfg.nn.gcn:
from tf_geometric.nn.conv.gcn import gcn_build_cache_for_graph
gcn_build_cache_for_graph(graph)
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
tagcn
- tf_geometric.nn.tagcn(x, edge_index, edge_weight, k, kernel, bias=None, activation=None, renorm=False, improved=False, cache=None)
Functional API for Topology Adaptive Graph Convolutional Network (TAGCN).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features.
edge_index – Tensor, shape: [2, num_edges], edge information.
edge_weight – Tensor or None, shape: [num_edges].
k – Number of hops. (default: 3)
kernel – Tensor, shape: [num_features, num_output_features], weight.
bias – Tensor, shape: [num_output_features], bias.
activation – Activation function to use.
renorm – Whether to use the renormalization trick (https://arxiv.org/pdf/1609.02907.pdf).
improved – Whether to use the improved GCN variant.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
graph_sage
- tf_geometric.nn.mean_graph_sage(x, edge_index, edge_weight, self_kernel, neighbor_kernel, bias=None, activation=None, concat=True, normalize=False)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
self_kernel – Tensor, shape: [num_features, num_hidden_units], weight
neighbor_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
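A sketch of mean_graph_sage; with concat=True, the self and neighbor transformations are presumably concatenated, giving an output width of 2 * num_hidden_units (data and sizes are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])

self_kernel = tf.Variable(tf.random.truncated_normal([16, 8], stddev=0.1))
neighbor_kernel = tf.Variable(tf.random.truncated_normal([16, 8], stddev=0.1))

h = tfg.nn.mean_graph_sage(x, edge_index, None, self_kernel, neighbor_kernel,
                           activation=tf.nn.relu, concat=True, normalize=True)
# expected shape under the concat assumption above: (5, 16)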
- tf_geometric.nn.sum_graph_sage(x, edge_index, edge_weight, self_kernel, neighbor_kernel, bias=None, activation=None, concat=True, normalize=False)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
self_kernel – Tensor, shape: [num_features, num_hidden_units], weight
neighbor_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
- tf_geometric.nn.mean_pool_graph_sage(x, edge_index, edge_weight, self_kernel, neighbor_mlp_kernel, neighbor_kernel, neighbor_mlp_bias=None, bias=None, activation=None, concat=True, normalize=False)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
self_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
neighbor_mlp_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
neighbor_kernel – Tensor, shape: [num_hidden_units, num_hidden_units], weight.
neighbor_mlp_bias – Tensor, shape: [num_hidden_units * 2], bias
bias – Tensor, shape: [num_output_features], bias.
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
- tf_geometric.nn.max_pool_graph_sage(x, edge_index, edge_weight, self_kernel, neighbor_mlp_kernel, neighbor_kernel, neighbor_mlp_bias=None, bias=None, activation=None, concat=True, normalize=False)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
self_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
neighbor_mlp_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
neighbor_kernel – Tensor, shape: [num_hidden_units, num_hidden_units], weight.
neighbor_mlp_bias – Tensor, shape: [num_hidden_units * 2], bias
bias – Tensor, shape: [num_output_features], bias.
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
- tf_geometric.nn.gcn_graph_sage(x, edge_index, edge_weight, kernel, bias=None, activation=None, normalize=False, cache=None)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
kernel – Tensor, shape: [num_features, num_output_features], weight
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
- tf_geometric.nn.lstm_graph_sage(x, edge_index, lstm, self_kernel, neighbor_kernel, bias=None, activation=None, concat=True, normalize=False, training=False)
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features.
edge_index – Tensor, shape: [2, num_edges], edge information.
lstm – A Long Short-Term Memory (LSTM) model.
self_kernel – Tensor, shape: [num_features, num_hidden_units], weight.
neighbor_kernel – Tensor, shape: [num_hidden_units, num_hidden_units], weight.
bias – Tensor, shape: [num_output_features], bias.
activation – Activation function to use.
normalize – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
chebynet
- tf_geometric.nn.chebynet(x, edge_index, edge_weight, k, kernels, bias=None, activation=None, normalization_type='sym', use_dynamic_lambda_max=False, cache=None)
Functional API for Chebyshev Spectral Graph Convolution (ChebyNet).
- tf_geometric.nn.chebynet_norm_edge(edge_index, num_nodes, edge_weight=None, normalization_type='sym', use_dynamic_lambda_max=False, cache=None)
Compute the normed edge for ChebyNet normalization.
drop_edge
- tf_geometric.nn.drop_edge(inputs, rate=0.5, force_undirected=False, training=None)
- Parameters
inputs – List of edge_index and other edge attributes [edge_index, edge_attr, …]
rate – Dropout rate.
force_undirected – If set to True, will either drop or keep both edges of an undirected edge.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
List of dropped edge_index and other dropped edge attributes
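A sketch of drop_edge; the input is a list whose first element is edge_index, followed by any edge attributes that should be dropped consistently with it:
import tensorflow as tf
import tf_geometric as tfg

edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = tf.random.uniform([4])

# randomly drop half of the edges (and their weights) during training
dropped_edge_index, dropped_edge_weight = tfg.nn.drop_edge(
    [edge_index, edge_weight], rate=0.5, training=True
)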
topk_pool
- tf_geometric.nn.topk_pool(source_index, score, k=None, ratio=None)
- Parameters
source_index – Index of the source node (of an edge) or the source graph (of a node)
score – 1-D array of scores
k – Keep top k targets for each source
ratio – Keep num_targets * ratio targets for each source
- Returns
sampled_edge_index, sampled_edge_score, sample_index
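A sketch of topk_pool used for graph-level node pooling, where source_index is the graph index of each node; the structure of the return value is as listed in the Returns entry above:
import tensorflow as tf
import tf_geometric as tfg

node_graph_index = tf.constant([0, 0, 0, 1, 1])  # 5 nodes across 2 graphs
score = tf.random.uniform([5])

# keep the top half of nodes (by score) within each graph
topk_outputs = tfg.nn.topk_pool(node_graph_index, score, ratio=0.5)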
diff_pool
- tf_geometric.nn.diff_pool(x, edge_index, edge_weight, node_graph_index, feature_gnn, assign_gnn, num_clusters, bias=None, activation=None, cache=None, training=None)
Functional API for DiffPool: "Hierarchical Graph Representation Learning with Differentiable Pooling".
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
feature_gnn – A GNN model to learn pooled node features, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to high-order node features.
assign_gnn – A GNN model to learn cluster assignment for the pooling, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to the cluster assignment matrix.
num_clusters – Number of clusters for pooling.
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
[pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
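A sketch of diff_pool where feature_gnn and assign_gnn are tfg.layers.GCN instances, assuming diff_pool invokes them as gnn([x, edge_index, edge_weight]), consistent with the [x, edge_index, edge_weight] => updated_x convention stated above (data and sizes are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = tf.ones([4])
node_graph_index = tf.constant([0, 0, 0, 1, 1])  # 2 graphs

feature_gnn = tfg.layers.GCN(8)  # learns pooled node features
assign_gnn = tfg.layers.GCN(3)   # learns a soft assignment to 3 clusters

pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = tfg.nn.diff_pool(
    x, edge_index, edge_weight, node_graph_index,
    feature_gnn, assign_gnn, num_clusters=3
)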
- tf_geometric.nn.diff_pool_coarsen(x, edge_index, edge_weight, node_graph_index, dense_assign, num_nodes=None, num_clusters=None, num_graphs=None)
Coarsening method for DiffPool. Coarsen the input BatchGraph (a batch of graphs) based on the cluster assignment of nodes and output a pooled BatchGraph. Graphs should be modeled in a BatchGraph-like format, and each graph has the same number of clusters.
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
dense_assign – Tensor, [num_nodes, num_clusters], cluster assignment matrix of nodes.
num_nodes – Number of nodes, Optional, used for boosting performance.
num_clusters – Number of clusters, Optional, used for boosting performance.
num_graphs – Number of graphs, Optional, used for boosting performance.
- Returns
Pooled BatchGraph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
set2set
- tf_geometric.nn.set2set(x, node_graph_index, lstm, num_iterations, training=None)
Functional API for Set2Set.
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
lstm – An LSTM model.
num_iterations – Number of iterations for attention.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
Graph features, shape: [num_graphs, num_node_features * 2]
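A sketch of set2set on a two-graph batch. The exact LSTM configuration expected (e.g., whether it must return its hidden states) depends on the implementation, so the Keras settings below are an assumption to verify against your installed version:
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
node_graph_index = tf.constant([0, 0, 0, 1, 1])

# assumption: a Keras LSTM that returns sequences and states
lstm = tf.keras.layers.LSTM(16, return_sequences=True, return_state=True)
graph_h = tfg.nn.set2set(x, node_graph_index, lstm, num_iterations=3)
# expected shape: (2, 32), i.e. [num_graphs, num_node_features * 2]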
cluster_pool
- tf_geometric.nn.cluster_pool(x, edge_index, edge_weight, assign_edge_index, assign_edge_weight, num_clusters, num_nodes=None)
Coarsen the input Graph based on the cluster assignment of nodes and output a pooled Graph.
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
assign_edge_index – Tensor, shape: [2, num_nodes], edges between clusters and nodes, where each edge denotes that a node belongs to a specific cluster.
assign_edge_weight – Tensor or None, shape: [num_nodes], the corresponding weight for assign_edge_index
num_clusters – Number of clusters.
num_nodes – Number of nodes, Optional, used for boosting performance.
- Returns
Pooled Graph: [pooled_x, pooled_edge_index, pooled_edge_weight]
sag_pool
- tf_geometric.nn.sag_pool(x, edge_index, edge_weight, node_graph_index, score_gnn, k=None, ratio=None, score_activation=None, training=None, cache=None)
Functional API for SAGPool (Self-Attention Graph Pooling).
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
score_gnn – A GNN model to score nodes for the pooling, [x, edge_index, edge_weight] => node_score.
k – Keep top k targets for each source
ratio – Keep num_targets * ratio targets for each source
score_activation – Activation to use for node_score before multiplying node_features with node_score
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
- Returns
[pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
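A sketch of sag_pool with a single-unit GCN as the score_gnn, an assumption consistent with the [x, edge_index, edge_weight] => node_score convention above (data and sizes are illustrative):
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = tf.ones([4])
node_graph_index = tf.constant([0, 0, 0, 1, 1])

score_gnn = tfg.layers.GCN(1)  # produces one score per node

pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = tfg.nn.sag_pool(
    x, edge_index, edge_weight, node_graph_index,
    score_gnn, ratio=0.5, score_activation=tf.nn.tanh
)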
asap
- tf_geometric.nn.asap(x, edge_index, edge_weight, node_graph_index, attention_gcn_kernel, attention_gcn_bias, attention_query_kernel, attention_query_bias, attention_score_kernel, attention_score_bias, le_conv_self_kernel, le_conv_self_bias, le_conv_aggr_self_kernel, le_conv_aggr_self_bias, le_conv_aggr_neighbor_kernel, le_conv_aggr_neighbor_bias, k=None, ratio=None, le_conv_activation=<function sigmoid>, drop_rate=0.0, training=None, cache=None)
Functional API for ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations.
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
k – Keep top k targets for each source
ratio – Keep num_targets * ratio targets for each source
le_conv_activation – Activation to use for node_score before multiplying node_features with node_score
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
- Returns
[pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
-
tf_geometric.nn.
le_conv
(x, edge_index, edge_weight, self_kernel, self_bias, aggr_self_kernel, aggr_self_bias, aggr_neighbor_kernel, aggr_neighbor_bias, activation=None) Functional API for LeConv in ASAP.
h_i = activation(x_i @ self_kernel + sum_{j} (x_i @ aggr_self_kernel - x_j @ aggr_neighbor_kernel))
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
self_kernel – Please look at the formula above.
aggr_self_kernel – Please look at the formula above.
aggr_neighbor_kernel – Please look at the formula above.
activation – Activation function to use.
- Returns
Updated node features (x), shape: [num_nodes, num_output_features]
sort_pool
- tf_geometric.nn.sort_pool(x, edge_index, edge_weight, node_graph_index, k=None, ratio=None, sort_index=-1, training=None)
Functional API for SortPool: "An End-to-End Deep Learning Architecture for Graph Classification".
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
k – Keep top k targets for each source
ratio – Keep num_targets * ratio targets for each source
sort_index – The sort_index-th feature along the last axis will be used for sorting.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
[pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
min_cut_pool
- tf_geometric.nn.min_cut_pool(x, edge_index, edge_weight, node_graph_index, feature_gnn, assign_gnn, num_clusters, bias=None, activation=None, gnn_use_normed_edge=True, return_loss_func=False, return_losses=False, cache=None, training=None)
Functional API for MinCutPool: "Spectral Clustering with Graph Neural Networks for Graph Pooling".
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
feature_gnn – A GNN model to learn pooled node features, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to high-order node features.
assign_gnn – A GNN model to learn cluster assignment for the pooling, [x, edge_index, edge_weight] => updated_x, where updated_x corresponds to the cluster assignment matrix.
num_clusters – Number of clusters for pooling.
bias – Tensor, shape: [num_output_features], bias
activation – Activation function to use.
gnn_use_normed_edge – Boolean. Whether to use normalized edge for feature_gnn and assign_gnn.
return_loss_func – Boolean. If True, return (outputs, loss_func), where loss_func is a callable function that returns a list of losses.
return_losses – Boolean. If True, return (outputs, losses), where losses is a list of losses.
cache – A dict for caching A' for GCN. Different graphs should not share the same cache dict.
training – Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).
- Returns
[pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
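A sketch of min_cut_pool with return_loss_func=True, so the auxiliary losses can be added to the task loss during training; the GNN models and sizes are illustrative, with the same gnn([x, edge_index, edge_weight]) calling convention assumed as for diff_pool:
import tensorflow as tf
import tf_geometric as tfg

x = tf.random.normal([5, 16])
edge_index = tf.constant([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = tf.ones([4])
node_graph_index = tf.constant([0, 0, 0, 1, 1])

feature_gnn = tfg.layers.GCN(8)
assign_gnn = tfg.layers.GCN(3)

outputs, loss_func = tfg.nn.min_cut_pool(
    x, edge_index, edge_weight, node_graph_index,
    feature_gnn, assign_gnn, num_clusters=3,
    return_loss_func=True, training=True
)
pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index = outputs
aux_losses = loss_func()  # list of auxiliary losses to add to the training loss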
-
tf_geometric.nn.
min_cut_pool_coarsen
(x, edge_index, edge_weight, node_graph_index, dense_assign, num_nodes=None, num_clusters=None, num_graphs=None, normed_edge_weight=None, cache=None) Coarsening method for MinCutPool: “Spectral Clustering with Graph Neural Networks for Graph Pooling” Coarsen the input BatchGraph (graphs) based on cluster assignment of nodes and output pooled BatchGraph (graphs). Graphs should be modeled as a BatchGraph like format and each graph has the same number of clusters.
- Parameters
x – Tensor, shape: [num_nodes, num_features], node features
edge_index – Tensor, shape: [2, num_edges], edge information
edge_weight – Tensor or None, shape: [num_edges]
node_graph_index – Tensor/NDArray, shape: [num_nodes], graph index for each node
dense_assign – Tensor, [num_nodes, num_clusters], cluster assignment matrix of nodes.
num_nodes – Number of nodes, Optional, used for boosting performance.
num_clusters – Number of clusters, Optional, used for boosting performance.
num_graphs – Number of graphs, Optional, used for boosting performance.
- Returns
Pooled BatchGraph: [pooled_x, pooled_edge_index, pooled_edge_weight, pooled_node_graph_index]
-
tf_geometric.nn.
min_cut_pool_compute_losses
(edge_index, edge_weight, node_graph_index, dense_assign, normed_edge_weight=None, cache=None)