causalcompass.algorithms.CMLP
- class causalcompass.algorithms.CMLP(lag=3, hidden_dim=[100], lam=0.005, lr=0.01, max_iter=50000, lam_ridge=1e-2, penalty='H', device='cuda', seed=None)[source]
Deep learning-based causal discovery method that uses component-wise MLPs to model nonlinear Granger causality.
References
Tank, A., Covert, I., Foti, N., Shojaie, A., and Fox, E., "Neural Granger Causality," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
https://github.com/iancovert/Neural-GC
- Parameters:
lag (int, default 3) – Maximum time lag
hidden_dim (list, default [100]) – Number of hidden units per layer
lam (float, default 0.005) – Strength of the sparsity-inducing penalty on the input weights
lr (float, default 0.01) – Learning rate
max_iter (int, default 50000) – Maximum training iterations
lam_ridge (float, default 1e-2) – Ridge regularization parameter
penalty (str, default 'H') – Penalty type: 'GL' (group lasso), 'GSGL' (group sparse group lasso), or 'H' (hierarchical)
device (str, default 'cuda') – Computation device (e.g., 'cuda' or 'cpu')
seed (int, default None) – Random seed for reproducibility
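To illustrate what the sparsity penalty controls, here is a minimal numpy sketch (not the library's actual code) of a group-lasso-style penalty, the idea behind the 'GL' option: the first-layer weights of one component MLP are grouped by input series, so an entire series' weights can be driven to zero together when it has no Granger effect. The shapes and grouping are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

p, lag, hidden = 4, 3, 100  # number of series, max lag, hidden units (illustrative)
# First-layer weights of one component MLP: (hidden, p * lag)
W = rng.normal(size=(hidden, p * lag))

# Group lasso over input series: one group = all lags of one series.
groups = W.reshape(hidden, p, lag)  # (hidden, series, lag)
gl_penalty = sum(np.linalg.norm(groups[:, j, :]) for j in range(p))

lam = 0.005
loss_term = lam * gl_penalty  # would be added to the training loss
```

A larger `lam` pushes more whole groups toward zero, yielding a sparser estimated adjacency matrix.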
Examples
>>> from causalcompass.algorithms import CMLP
>>> model = CMLP(lag=3, hidden_dim=[100], lam=0.005, lr=0.01, max_iter=50000, device='cuda')
>>> predicted_adj = model.run(X)
>>> all_metrics, no_diag_metrics = model.eval(true_adj, predicted_adj)
- __init__(lag=3, hidden_dim=[100], lam=0.005, lr=0.01, max_iter=50000, lam_ridge=1e-2, penalty='H', device='cuda', seed=None)[source]
Initialize the cMLP model.
Methods
- __init__([lag, hidden_dim, lam, lr, ...]) – Initialize the cMLP model.
- eval(true_adj, predicted_adj[, shd_thresholds]) – Evaluate the predicted adjacency matrix against the ground truth.
- run(X) – Run the cMLP algorithm.
- run_raw(X, **kwargs) – Run the algorithm and return an unthresholded intermediate result that can be reused across multiple threshold values.
- run_threshold_sweep(X, thresholds) – Run the algorithm once and post-process the raw result for each threshold.
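The `run_raw` / `run_threshold_sweep` pair describes a compute-once, threshold-many pattern: fit the model once, then binarize the learned scores at several cutoffs. A hypothetical numpy sketch of that post-processing step (the real methods belong to causalcompass and may differ in detail):

```python
import numpy as np

def threshold_sweep(raw_scores, thresholds):
    """Binarize an unthresholded score matrix at each threshold.

    Illustrative only: stands in for the post-processing that
    run_threshold_sweep applies to the raw result of run_raw.
    """
    return {t: (raw_scores >= t).astype(int) for t in thresholds}

raw = np.array([[0.0, 0.8],
                [0.3, 0.0]])  # e.g. learned input-weight norms per edge
adjs = threshold_sweep(raw, [0.1, 0.5])
# At 0.1 both nonzero scores survive; at 0.5 only the 0.8 edge remains.
```

Reusing the raw scores avoids retraining the network for every candidate threshold, which matters when `max_iter` is large.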