--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | L-EnsNMF and BoostedNE
2 | ==================
3 |
4 |
5 |
6 | The L-EnsNMF procedure creates a sequential ensemble factorization of a target matrix. In each factorization round a residual target matrix is created by sampling an anchor row and an anchor column. Anchor sampling upweights the block of matrix entries that are similar to the anchors, while the remaining entries of the residual matrix are downweighted. By factorizing these residual matrices, each relatively upweighted block gets a high quality representation. BoostNE adapts this idea to node embedding: an approximate target matrix obtained with truncated random walk sampling is factorized with the L-EnsNMF method, so blocks of highly connected nodes get representations from the vectors learned in a given boosting round. This implementation assumes that the target matrices are sparse. So far this is the only publicly available Python implementation of these procedures.
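
The minimal sketch below illustrates the boosting loop on a dense, nonnegative target matrix. It mirrors the logic of `src/boosted_embedding.py`, but the helper name and the dense representation are simplifications; the actual implementation works on sparse matrices.

```python
import numpy as np
from sklearn.decomposition import NMF

def sample_anchor(residual, axis):
    """Pick an index with probability proportional to the squared row/column sums."""
    weights = np.asarray(residual.sum(axis=axis)).ravel() ** 2
    return np.random.choice(len(weights), p=weights / weights.sum())

def boost(X, rounds=10, dimensions=8, alpha=0.001):
    residual, embeddings = X.astype(float), []
    for _ in range(rounds):
        r = sample_anchor(residual, axis=1)  # anchor row
        c = sample_anchor(residual, axis=0)  # anchor column
        # Upweight entries similar to the anchors, downweight everything else.
        weighted = residual * np.outer(residual @ residual[r, :],
                                       residual[:, c] @ residual)
        model = NMF(n_components=dimensions, init="random", alpha=alpha)  # sklearn 0.19 API, as pinned below
        W = model.fit_transform(weighted)
        embeddings.append(W)
        # The next round only has to explain what this round left unexplained.
        residual = np.maximum(residual - W @ model.components_, 0)
    return np.concatenate(embeddings, axis=1)  # shape: (nodes, rounds * dimensions)
```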
7 |
8 |
9 |
10 |
11 |
12 | The model is now also available in the package [Karate Club](https://github.com/benedekrozemberczki/karateclub).
13 |
14 | This repository provides implementations of L-EnsNMF and BoostedNE, as described in the following papers:
15 |
16 | > **L-EnsNMF: Boosted Local Topic Discovery via Ensemble of Nonnegative Matrix Factorization.**
17 | > Sangho Suh, Jaegul Choo, Joonseok Lee, Chandan K. Reddy
18 | > ICDM, 2016.
19 | > http://dmkd.cs.vt.edu/papers/ICDM16.pdf
20 |
21 | > **Multi-Level Network Embedding with Boosted Low-Rank Matrix Approximation.**
22 | > Jundong Li, Liang Wu and Huan Liu
23 | > ASONAM, 2019.
24 | > https://arxiv.org/abs/1808.08627
25 |
26 | The original Matlab implementation is available [[here]](https://github.com/sanghosuh/lens_nmf-matlab).
27 |
28 | ### Requirements
29 |
30 | The codebase is implemented in Python 3.5.2. The package versions used for development are listed below.
31 | ```
32 | networkx 2.4
33 | tqdm 4.28.1
34 | numpy 1.15.4
35 | pandas 0.23.4
36 | texttable 1.5.0
37 | scipy 1.1.0
38 | argparse 1.1.0
39 | sklearn 0.19.1
40 | ```
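
These can be installed with pip, for example as below (the `sklearn` entry corresponds to the `scikit-learn` package, and `argparse` ships with the Python 3 standard library):

```sh
$ pip install networkx==2.4 tqdm==4.28.1 numpy==1.15.4 pandas==0.23.4 texttable==1.5.0 scipy==1.1.0 scikit-learn==0.19.1
```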
41 |
42 | ### Datasets
43 |
44 | #### Graphs
45 |
46 | The code takes an input graph in a csv file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. A sample graph for the `Wikipedia Giraffes` is included in the `input/` directory.
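
For example, a minimal edge list looks like this (the header names are only illustrative; just the comma-separated structure matters):

```
id_1,id_2
0,1
1,2
2,0
```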
47 |
48 | #### Sparse Matrices
49 |
50 | The code takes an input matrix in a csv file. Every row is a (user, item, score) triple separated by commas. The first row is a header. Users and items should be indexed starting with 0, and each score is positive. A sample sparse stochastic block matrix is included in the `input/` folder.
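
For example, again with illustrative header names:

```
user,item,score
0,0,3.0
0,2,1.0
1,1,2.5
```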
51 |
52 | ### Options
53 |
54 | Learning of the embedding is handled by the `src/main.py` script, which provides the following command line arguments.
55 |
56 | #### Input and output options
57 |
58 | ```
59 | --input-path STR Edges path. Default is `input/giraffe_edges.csv`.
60 | --output-path STR Embedding path. Default is `output/giraffe_embedding.csv`.
61 | --dataset-type STR   Type of the dataset (`graph` or `sparse`). Default is `graph`.
62 | ```
63 |
64 | #### Boosted Model options
65 |
66 | ```
67 | --dimensions INT       Number of embedding dimensions. Default is 8.
68 | --iterations INT       Number of boosting rounds. Default is 10.
69 | --alpha FLOAT          Regularization coefficient. Default is 0.001.
70 | ```
71 |
72 | #### DeepWalk options
73 |
74 | ```
75 | --number-of-walks INT Number of random walks. Default is 10.
76 | --walk-length INT Random walk length. Default is 80.
77 | --window-size INT        Window size for feature extraction. Default is 3.
78 | --pruning-threshold INT  Co-occurrence counts at or below this threshold are pruned. Default is 10.
79 | ```
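
For reference, the window-based feature extraction in `src/helpers.py` collects source-context pairs from every walk as in the condensed sketch below; pairs occurring no more than `--pruning-threshold` times are dropped from the target matrix.

```python
from collections import Counter

def walk_to_pairs(walk, window_size):
    # Pair every node with the nodes up to window_size steps ahead,
    # in both directions.
    pairs = []
    for omega in range(1, window_size + 1):
        for source, neb in zip(walk[:-omega], walk[omega:]):
            pairs += [(source, neb), (neb, source)]
    return pairs

walks = [[0, 1, 2, 1, 0]]  # toy walk
counts = Counter(pair for walk in walks for pair in walk_to_pairs(walk, 3))
```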
80 |
81 | ### Examples
82 |
83 | The following commands learn a graph embedding and write the embedding to disk. The node representations are ordered by node ID.
84 |
85 | Creating an embedding of the default dataset with the default hyperparameter settings. Saving the embedding at the default path.
86 |
87 | ```sh
88 | $ python src/main.py
89 | ```
90 | Creating an embedding of the default dataset with 16 dimensions and 20 boosting rounds. This results in a 16x20=320 dimensional embedding.
91 |
92 | ```sh
93 | $ python src/main.py --dimensions 16 --iterations 20
94 | ```
95 |
96 | Creating an L-EnsNMF embedding of the default dataset with stronger regularization.
97 |
98 | ```sh
99 | $ python src/main.py --alpha 0.1
100 | ```
101 |
102 | Creating an embedding of another dataset, the `Wikipedia Dogs` graph, and saving the output in a custom folder.
103 |
104 | ```sh
105 | $ python src/main.py --input-path input/dog_edges.csv --output-path output/dog_lensnmf.csv
106 | ```
107 |
108 | Creating an embedding of the default dataset with 20 random walks per source and 120 nodes in each vertex sequence.
109 |
110 | ```sh
111 | $ python src/main.py --walk-length 120 --number-of-walks 20
112 | ```
113 |
114 | Creating an embedding of a non-graph dataset and storing it under a non-standard name.
115 |
116 | ```sh
117 | $ python src/main.py --dataset-type sparse --input-path input/small_block.csv --output-path output/block_embedding.csv
118 | ```
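
The saved embedding can be read back with pandas. A minimal snippet, assuming the default output path:

```python
import pandas as pd

# Columns are ID, x_0, ..., x_{d*r-1}, where d is --dimensions and
# r is --iterations; rows are sorted by node ID.
embedding = pd.read_csv("output/giraffe_embedding.csv")
features = embedding.drop(columns=["ID"]).values
```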
119 |
120 | ------------------------------------------------------
121 |
122 | **License**
123 |
124 | - [GNU License](https://github.com/benedekrozemberczki/BoostedFactorization/blob/master/LICENSE)
125 |
126 | --------------------------------------------------------------------------------
127 |
--------------------------------------------------------------------------------
/boosting.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/benedekrozemberczki/BoostedFactorization/9f847178afe07d5cd2361ee6f41157dea6432a1d/boosting.png
--------------------------------------------------------------------------------
/src/boosted_embedding.py:
--------------------------------------------------------------------------------
1 | """Boosted factorization machine."""
2 |
3 | import numpy as np
4 | import pandas as pd
5 | from scipy import sparse
6 | from sklearn.decomposition import NMF
7 | from helpers import sampling, simple_print
8 |
9 | class BoostedFactorization:
10 | """
11 | LENS-NMF boosted matrix factorization model.
12 | """
13 | def __init__(self, residuals, args):
14 | """
15 | Initialization method.
16 | """
17 | self.args = args
18 | self.residuals = residuals
19 |         simple_print("Matrix sum", self.residuals.sum())
20 | self.shape = residuals.shape
21 | indices = self.residuals.nonzero()
22 | self.index_1 = indices[0]
23 | self.index_2 = indices[1]
24 | self.edges = zip(self.index_1, self.index_2)
25 | print("\nFitting benchmark model.")
26 | base_score, __ = self.fit_and_score_NMF(self.residuals)
27 | simple_print("Benchmark loss", base_score.sum())
28 |
29 | def sampler(self, index):
30 | """
31 | Anchor sampling procedure.
32 |         :param index: Axis to sum over (1 samples a row anchor, 0 a column anchor).
33 |         :return sample: Sampled row/column id.
34 | """
35 | row_weights = self.residuals.sum(axis=index)
36 | if len(row_weights.shape) > 1:
37 | row_weights = row_weights.reshape(-1)
38 |         sums = np.sum(np.square(row_weights))  # total squared weight, so the probabilities sum to 1
39 | to_pick_from = {i: float(row_weights[0, i])**2/sums for i in range(row_weights.shape[1])}
40 | sample = sampling(to_pick_from)
41 | return sample
42 |
43 | def reweighting(self, X, chosen_row, chosen_column):
44 | """
45 | Rescaling the target matrix with the anchor row and column.
46 |         :param X: The target matrix to be rescaled.
47 | :param chosen_row: Anchor row.
48 | :param chosen_column: Anchor column.
49 | :return X: The rescaled residual.
50 | """
51 | row_sims = X.dot(chosen_row.transpose())
52 | column_sims = chosen_column.transpose().dot(X)
53 | X = sparse.csr_matrix(row_sims).multiply(X)
54 | X = X.multiply(sparse.csr_matrix(column_sims))
55 | return X
56 |
57 | def fit_and_score_NMF(self, new_residuals):
58 | """
59 |         Factorizing a residual matrix, returning the updated residual and an embedding.
60 |         :param new_residuals: Reweighted residual matrix to factorize.
61 |         :return scores: Updated residual matrix, clipped at zero.
62 | :return W: Embedding matrix.
63 | """
64 | model = NMF(n_components=self.args.dimensions,
65 | init="random",
66 | verbose=False,
67 | alpha=self.args.alpha)
68 |
69 | W = model.fit_transform(new_residuals)
70 | H = model.components_
71 | print("Scoring started.\n")
72 | sub_scores = np.sum(np.multiply(W[self.index_1, :], H[:, self.index_2].T), axis=1)
73 | scores = np.maximum(self.residuals.data-sub_scores, 0)
74 | scores = sparse.csr_matrix((scores, (self.index_1, self.index_2)),
75 | shape=self.shape,
76 | dtype=np.float32)
77 | return scores, W
78 |
79 | def single_boosting_round(self, iteration):
80 | """
81 | A method to perform anchor sampling, rescaling, factorization and scoring.
82 |         :param iteration: Index of the boosting round.
83 | """
84 | row = self.sampler(1)
85 | column = self.sampler(0)
86 |
87 | chosen_row = self.residuals[row, :]
88 | chosen_column = self.residuals[:, column]
89 | new_residuals = self.reweighting(self.residuals, chosen_row, chosen_column)
90 | scores, embedding = self.fit_and_score_NMF(new_residuals)
91 | self.embeddings.append(embedding)
92 | self.residuals = scores
93 |
94 | def do_boosting(self):
95 | """
96 |         Doing a series of matrix factorizations on the anchor-sampled residual matrices.
97 | """
98 | self.embeddings = []
99 | for iteration in range(self.args.iterations):
100 | print("\nFitting model: "+str(iteration+1)+"/"+str(self.args.iterations)+".")
101 | self.single_boosting_round(iteration)
102 | simple_print("Boosting round "+str(iteration+1)+". loss", self.residuals.sum())
103 |
104 | def save_embedding(self):
105 | """
106 | Saving the embedding at the default path.
107 | """
108 | ids = np.array(range(self.residuals.shape[0])).reshape(-1, 1)
109 | self.embeddings = [ids] + self.embeddings
110 | self.embeddings = np.concatenate(self.embeddings, axis=1)
111 | feature_names = ["x_"+str(x) for x in range(self.args.iterations*self.args.dimensions)]
112 | columns = ["ID"] + feature_names
113 | self.embedding = pd.DataFrame(self.embeddings, columns=columns)
114 |         self.embedding.to_csv(self.args.output_path, index=False)
115 |
--------------------------------------------------------------------------------
/src/helpers.py:
--------------------------------------------------------------------------------
1 | """Utility tools."""
2 |
3 | import random
4 | import argparse
5 | from collections import Counter
6 | import numpy as np
7 | import pandas as pd
8 | from tqdm import tqdm
9 | import networkx as nx
10 | from scipy import sparse
11 | from texttable import Texttable
12 |
13 | def parameter_parser():
14 | """
15 |     A method to parse command line parameters.
16 | By default it gives an embedding of the Wikipedia Giraffes graph.
17 | The default hyperparameters give a good quality representation without grid search.
18 | Representations are sorted by ID.
19 | """
20 | parser = argparse.ArgumentParser(description="Run LENS-NMF.")
21 |
22 | parser.add_argument("--input-path",
23 | nargs="?",
24 | default="./input/giraffe_edges.csv",
25 | help="Input edge list.")
26 |
27 | parser.add_argument("--output-path",
28 | nargs="?",
29 | default="./output/giraffe_embedding.csv",
30 | help="Embedding csv files.")
31 |
32 | parser.add_argument("--dataset-type",
33 | nargs="?",
34 | default="graph",
35 | help="Type of the dataset. Default is graph.")
36 |
37 | parser.add_argument("--dimensions",
38 | type=int,
39 | default=8,
40 | help="Number of dimensions. Default is 8.")
41 |
42 | parser.add_argument("--alpha",
43 | type=float,
44 | default=0.001,
45 | help="Regularization coefficient. Default is 0.001.")
46 |
47 | parser.add_argument("--window-size",
48 | type=int,
49 | default=3,
50 | help="Skip-gram window size. Default is 3.")
51 |
52 | parser.add_argument("--walk-length",
53 | type=int,
54 | default=80,
55 | help="Truncated random walk length. Default is 80.")
56 |
57 | parser.add_argument("--number-of-walks",
58 | type=int,
59 | default=10,
60 | help="Number of random walks per source. Default is 10.")
61 |
62 | parser.add_argument("--iterations",
63 | type=int,
64 | default=10,
65 | help="Number of boosting rounds. Default is 10.")
66 |
67 | parser.add_argument("--pruning-threshold",
68 | type=int,
69 | default=10,
70 | help="Co-occurence pruning rule. Default is 10.")
71 |
72 | return parser.parse_args()
73 |
74 | def tab_printer(args):
75 | """
76 | Function to print the logs in a nice tabular format.
77 | :param args: Parameters used for the model.
78 | """
79 | args = vars(args)
80 | t = Texttable()
81 |     t.add_rows([["Parameter", "Value"]] +
82 |                [[k.replace("_", " ").capitalize(), v] for k, v in args.items()])
83 | print(t.draw())
84 |
85 | def simple_print(name, value):
86 | """
87 | Print a loss value in a text table.
88 | :param name: Name of loss.
89 |     :param value: Loss value.
90 | """
91 | print("\n")
92 | t = Texttable()
93 | t.add_rows([[name, value]])
94 | print(t.draw())
95 |
96 | def sampling(probs):
97 | """
98 | Dictionary key sampling - the sampling probability is proportional to the value.
99 | :param probs: Probability distribution over keys in a dictionary.
100 | :return key: Sampled key.
101 | """
102 | prob = random.random()
103 | score = 0
104 | for key, value in probs.items():
105 | score = score + value
106 | if prob <= score:
107 | return key
108 |     return key  # floating-point safety: fall back to the last key
109 |
110 | def read_matrix(path):
111 | """
112 | Read the sparse target matrix which is in (user,item,score) csv format.
113 | :param path: Path to the dataset.
114 | :return scores: Sparse matrix returned.
115 | """
116 | dataset = pd.read_csv(path).values.tolist()
117 |     index_1 = [int(x[0]) for x in dataset]  # indices may be parsed as floats
118 |     index_2 = [int(x[1]) for x in dataset]
119 | scores = [x[2] for x in dataset]
120 | shape = (max(index_1)+1, max(index_2)+1)
121 | scores = sparse.csr_matrix(sparse.coo_matrix((scores,(index_1, index_2)),
122 | shape=shape,
123 | dtype=np.float32))
124 | return scores
125 |
126 | class DeepWalker:
127 | """
128 |     DeepWalker class for target co-occurrence matrix approximation.
129 | """
130 | def __init__(self, args):
131 | """
132 | Initialization method which reads the arguments.
133 | :param args: Arguments object.
134 | """
135 | self.args = args
136 | self.graph = nx.from_edgelist(pd.read_csv(args.input_path).values.tolist())
137 | self.shape = (len(self.graph.nodes()), len(self.graph.nodes()))
138 | self.do_walks()
139 | self.do_processing()
140 |
141 | def do_a_walk(self, node):
142 | """
143 |         Doing a single random walk from a source.
144 |         :param node: Source node.
145 |         :return nodes: Truncated random walk node sequence.
146 | """
147 | nodes = [node]
148 | while len(nodes) < self.args.walk_length:
149 | nebs = [n for n in nx.neighbors(self.graph, nodes[-1])]
150 | if len(nebs) > 0:
151 | nodes = nodes + random.sample(nebs, 1)
152 | else:
153 | break
154 | return nodes
155 |
156 | def do_walks(self):
157 | """
158 | Doing a fixed number of random walks from each source.
159 | """
160 | self.walks = []
161 | for i in range(self.args.number_of_walks):
162 | print("\nRandom walk run: "+str(i+1)+"/"+str(self.args.number_of_walks)+".\n")
163 | for node in tqdm(self.graph.nodes()):
164 | walk = self.do_a_walk(node)
165 | self.walks.append(walk)
166 |
167 | def processor(self, walk):
168 | """
169 | Extracting the source-neighbor pairs.
170 | :param walk: Random walk processed.
171 | :return pairs: Source-target pairs.
172 | """
173 | pairs = []
174 | for omega in range(1, self.args.window_size+1):
175 | sources = walk[0:len(walk)-omega]
176 | nebs = walk[omega:]
177 | pairs = pairs + [(source, nebs[i]) for i, source in enumerate(sources)]
178 | pairs = pairs + [(nebs[i], source) for i, source in enumerate(sources)]
179 | return pairs
180 |
181 | def do_processing(self):
182 | """
183 | Processing each sequence to create the sparse target matrix.
184 |         Pruning pairs with low co-occurrence counts from the matrix.
185 | """
186 | self.container = []
187 | print("\nProcessing walks.\n")
188 | for walk in tqdm(self.walks):
189 | self.container.append(self.processor(walk))
190 | self.container = Counter([pair for pairs in self.container for pair in pairs])
191 | self.container = {k: v for k, v in self.container.items() if v > self.args.pruning_threshold}
192 | index_1 = [x[0] for x, v in self.container.items()]
193 | index_2 = [x[1] for x, v in self.container.items()]
194 | scores = [v for k, v in self.container.items()]
195 | self.A = sparse.csr_matrix(sparse.coo_matrix((scores,(index_1, index_2)),
196 | shape=self.shape,
197 | dtype=np.float32))
198 |
--------------------------------------------------------------------------------
/src/main.py:
--------------------------------------------------------------------------------
1 | """Running the boosted machine."""
2 |
3 | from helpers import parameter_parser, tab_printer, read_matrix, DeepWalker
4 | from boosted_embedding import BoostedFactorization
5 |
6 | def learn_boosted_embeddings(args):
7 | """
8 | Method to create a boosted matrix/network embedding.
9 | :param args: Arguments object.
10 | """
11 | if args.dataset_type == "graph":
12 | A = DeepWalker(args).A
13 | else:
14 | A = read_matrix(args.input_path)
15 | model = BoostedFactorization(A, args)
16 | model.do_boosting()
17 | model.save_embedding()
18 |
19 | if __name__ == "__main__":
20 | args = parameter_parser()
21 | tab_printer(args)
22 | learn_boosted_embeddings(args)
23 |
--------------------------------------------------------------------------------